- Build vLLM v0.15.1 from source against vendor torch 2.9.1
- Preserve AMD's vendor PyTorch from the rocm/pytorch:rocm7.0.2 base
- Use use_existing_torch.py --prefix to strip torch from build requirements
- Compile C++/HIP extensions for gfx1100 (mapped from gfx1151)
- Install triton/flash-attn from wheels.vllm.ai/rocm with --no-deps
- Add a torch vendor verification step to catch accidental overwrites
- Fix GPU_RESOURCE default to match the cluster (gpu_strixhalo)
- Remove unsupported expandable_segments from PYTORCH_ALLOC_CONF
- AITER is gfx9-only; gfx11 uses the TRITON_ATTN backend by default
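
For reference, here is a minimal sketch of the build flow these bullets describe, assuming a single-stage Dockerfile. The exact base image tag, vLLM checkout ref, wheel index layout, the `--prefix` argument to use_existing_torch.py, and whether GPU_RESOURCE is set as an ENV are not shown in this commit message and would come from the repository's Dockerfile.ray-worker-strixhalo; treat the specifics below as placeholders.

```dockerfile
# Sketch only -- not the actual Dockerfile.ray-worker-strixhalo.
# Exact base tag comes from the repo; the commit only names rocm/pytorch:rocm7.0.2.
FROM rocm/pytorch:rocm7.0.2

# Build HIP/C++ extensions for gfx1100; gfx1151 (Strix Halo) maps onto this ISA.
ENV PYTORCH_ROCM_ARCH=gfx1100

# Default Ray custom resource name, matching the cluster (assumed to be an ENV here).
ENV GPU_RESOURCE=gpu_strixhalo

WORKDIR /opt
RUN git clone --depth 1 --branch v0.15.1 https://github.com/vllm-project/vllm.git
WORKDIR /opt/vllm

# Strip torch from vLLM's build requirements so pip cannot replace the vendor
# PyTorch 2.9.1 shipped in the base image (the commit passes --prefix; its
# argument is not shown here).
RUN python use_existing_torch.py

# Compile vLLM against the vendor torch instead of pulling a prebuilt wheel.
RUN pip install --no-build-isolation .

# ROCm triton / flash-attn wheels, installed without dependencies so they
# cannot drag in a different torch (index layout is an assumption).
RUN pip install --no-deps --extra-index-url https://wheels.vllm.ai/rocm triton flash-attn

# Vendor-torch verification: fail the image build if torch was overwritten.
RUN python -c "import sys, torch; sys.exit(0 if torch.__version__.startswith('2.9.1') and torch.version.hip else 1)"
```

The verification step keys on torch.version.hip being set, which is only true for ROCm builds, so a stray CUDA or CPU wheel pulled in by a later pip install fails the build immediately rather than at runtime.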