fix(strixhalo): add remaining vllm deps for v1.0.24
Some checks failed
Build and Push Images / build (Dockerfile.ray-worker-intel, intel) (push) Has been cancelled
Build and Push Images / build (Dockerfile.ray-worker-nvidia, nvidia) (push) Has been cancelled
Build and Push Images / build (Dockerfile.ray-worker-rdna2, rdna2) (push) Has been cancelled
Build and Push Images / build (Dockerfile.ray-worker-strixhalo, strixhalo) (push) Has been cancelled
Build and Push Images / Release (push) Has been cancelled
Build and Push Images / Notify (push) Has been cancelled
Build and Push Images / determine-version (push) Has been cancelled
Adds the remaining vLLM dependencies: openai-harmony, llguidance, conch-triton-kernels, model-hosting-container-standards, runai-model-streamer, timm.
@@ -215,7 +215,13 @@ RUN --mount=type=cache,target=/root/.cache/uv \
     'grpcio-tools>=1.60.0' \
     'anthropic>=0.20.0' \
     'mcp>=1.0' \
-    'tensorizer>=2.9.0'
+    'tensorizer>=2.9.0' \
+    'openai-harmony>=0.0.6' \
+    'llguidance>=1.0' \
+    'conch-triton-kernels>=1.0' \
+    'model-hosting-container-standards>=0.1.0' \
+    'runai-model-streamer>=0.15.0' \
+    'timm>=1.0'
 
 # ── Ray Serve application package ──────────────────────────────────────
 # Baked into the image so the LLM serve app can use the source-built vllm
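For context, the hunk extends a dependency list inside a cached `uv` install layer. A minimal sketch of what the full instruction likely looks like after this change (only the `RUN --mount` line and the quoted package pins appear in the diff; the `uv pip install --system` invocation is an assumption, since the start of the RUN body is outside the hunk):

```dockerfile
# Hypothetical reconstruction — the cache mount and pins come from the
# diff above; the install command itself is assumed, not shown.
RUN --mount=type=cache,target=/root/.cache/uv \
    uv pip install --system \
    'grpcio-tools>=1.60.0' \
    'anthropic>=0.20.0' \
    'mcp>=1.0' \
    'tensorizer>=2.9.0' \
    'openai-harmony>=0.0.6' \
    'llguidance>=1.0' \
    'conch-triton-kernels>=1.0' \
    'model-hosting-container-standards>=0.1.0' \
    'runai-model-streamer>=0.15.0' \
    'timm>=1.0'
```

Keeping every package in a single `RUN` with a shared cache mount lets repeated builds reuse downloaded wheels; the trailing backslash added to the `tensorizer` line is what lets the new entries continue the same shell command.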