vllm was installed with --no-deps to avoid torch/xgrammar pin conflicts, but this left msgspec, fastapi, openai, xgrammar, and other runtime dependencies missing. This change explicitly installs all vllm runtime dependencies in a separate layer, with xgrammar kept in the --no-deps ROCm layer.
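The layering described above can be sketched as a Dockerfile fragment. This is a minimal illustration, not the repo's actual Dockerfile: the base image, and the exact dependency list beyond the packages named in the commit message, are assumptions.

```dockerfile
# Hypothetical base image; the real Dockerfile.ray-worker-* files differ.
FROM rocm/dev-ubuntu-22.04 AS base

# Layer 1: explicitly install vllm's runtime dependencies. The --no-deps
# install below will not pull these in, so they must be listed here.
# (msgspec, fastapi, openai come from the commit message; anything else
# would be added as needed.)
RUN pip install msgspec fastapi openai

# Layer 2: install vllm and xgrammar with --no-deps so pip does not try
# to replace the ROCm torch build with the CUDA wheel that vllm's
# torch/xgrammar pins would otherwise drag in.
RUN pip install --no-deps vllm xgrammar
```

Splitting the explicit dependencies into their own layer also keeps that layer cacheable: changing the --no-deps ROCm layer does not force a re-download of the runtime dependencies.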