fix(strixhalo): add all missing vllm runtime deps for v1.0.23
Some checks failed
Build and Push Images / build (Dockerfile.ray-worker-intel, intel) (push) Has been cancelled
Build and Push Images / build (Dockerfile.ray-worker-nvidia, nvidia) (push) Has been cancelled
Build and Push Images / build (Dockerfile.ray-worker-rdna2, rdna2) (push) Has been cancelled
Build and Push Images / build (Dockerfile.ray-worker-strixhalo, strixhalo) (push) Has been cancelled
Build and Push Images / Release (push) Has been cancelled
Build and Push Images / Notify (push) Has been cancelled
Build and Push Images / determine-version (push) Has been cancelled
Added: cachetools, ijson, opencv-python-headless, grpcio-reflection, grpcio-tools, anthropic, mcp, tensorizer (plus pybase64, setproctitle from prior edit)
@@ -204,7 +204,18 @@ RUN --mount=type=cache,target=/root/.cache/uv \
     'einops>=0.7.0' \
     'depyf>=0.18.0' \
     'grpcio>=1.60.0' \
-    'protobuf>=4.25.0'
+    'protobuf>=4.25.0' \
+    'pybase64>=1.0' \
+    'setproctitle>=1.3.0' \
+    # ── additional vllm deps missing from --no-deps install ──
+    'cachetools>=5.0' \
+    'ijson>=3.2' \
+    'opencv-python-headless>=4.8' \
+    'grpcio-reflection>=1.60.0' \
+    'grpcio-tools>=1.60.0' \
+    'anthropic>=0.20.0' \
+    'mcp>=1.0' \
+    'tensorizer>=2.9.0'

 # ── Ray Serve application package ──────────────────────────────────────
 # Baked into the image so the LLM serve app can use the source-built vllm
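Because the image installs vllm with --no-deps, missing runtime deps only surface as ImportErrors at serve time. A quick guard is an import smoke test run inside the built image. The sketch below is hypothetical (not part of this commit) and assumes the usual import names for each wheel, e.g. opencv-python-headless imports as cv2 and grpcio-reflection as grpc_reflection:

```python
import importlib.util


def missing_modules(mod_names):
    """Return the subset of mod_names that are not importable here."""
    return [m for m in mod_names if importlib.util.find_spec(m) is None]


# Import names for the deps added in this commit (assumed mapping from
# wheel name to top-level module; verify against each package's docs).
VLLM_EXTRA_DEPS = [
    "cachetools", "ijson", "cv2", "grpc_reflection", "grpc_tools",
    "anthropic", "mcp", "tensorizer", "pybase64", "setproctitle",
]

if __name__ == "__main__":
    missing = missing_modules(VLLM_EXTRA_DEPS)
    if missing:
        raise SystemExit(f"missing vllm runtime deps: {missing}")
    print("all vllm runtime deps importable")
```

Running this as a RUN step after the dependency install would fail the build early instead of failing the Ray worker at startup.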
||||