feat: add pyproject.toml and CI for ray-serve-apps package
Some checks failed
Build and Push Images / build-nvidia (push) Failing after 7m25s
Build and Push Images / build-rdna2 (push) Failing after 7m29s
Build and Push Images / build-strixhalo (push) Failing after 6m45s
Build and Push Images / build-intel (push) Failing after 6m22s
Build and Push Images / Release (push) Has been skipped
Build and Push Images / Notify (push) Successful in 1s
Build and Publish ray-serve-apps / lint (push) Failing after 3m9s
Build and Publish ray-serve-apps / publish (push) Has been skipped
- Restructure ray-serve as a proper Python package (ray_serve/)
- Add pyproject.toml with the hatch build system
- Add CI workflow to publish to Gitea PyPI
- Add py.typed for PEP 561 compliance
- Aligns with ADR-0019 handler deployment strategy
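The pyproject.toml itself is not shown in this diff. A minimal sketch of what a hatch-based configuration for this package could look like — the hatch backend comes from the commit message; the version, Python requirement, and dependency list are assumptions:

```toml
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[project]
name = "ray-serve-apps"
version = "0.1.0"                 # assumed; actual version not in this diff
description = "Ray Serve deployments for GPU-shared AI inference"
requires-python = ">=3.10"        # assumed
dependencies = ["ray[serve]"]     # assumed

# Map the wheel to the ray_serve/ package directory added by this commit
[tool.hatch.build.targets.wheel]
packages = ["ray_serve"]
```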
ray-serve/ray_serve/__init__.py (new file, 14 lines)
@@ -0,0 +1,14 @@
# Ray Serve deployments for GPU-shared AI inference
from ray_serve.serve_embeddings import app as embeddings_app
from ray_serve.serve_llm import app as llm_app
from ray_serve.serve_reranker import app as reranker_app
from ray_serve.serve_tts import app as tts_app
from ray_serve.serve_whisper import app as whisper_app

__all__ = [
    "embeddings_app",
    "llm_app",
    "reranker_app",
    "tts_app",
    "whisper_app",
]
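The publish workflow mentioned in the commit message is also not part of this diff. A hedged sketch of what it might look like as a Gitea Actions workflow — the workflow name and the `lint`/`publish` job names come from the check runs above; the triggers, steps, and `GITEA_PYPI_URL` variable are assumptions for illustration:

```yaml
name: Build and Publish ray-serve-apps
on:
  push:
    branches: [main]   # assumed trigger

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install ruff && ruff check ray-serve/

  publish:
    needs: lint        # matches "publish ... Has been skipped" when lint fails
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install build twine && python -m build ray-serve/
      # Gitea exposes a PyPI-compatible package registry; the URL variable
      # below is a placeholder, not taken from the repository
      - run: twine upload --repository-url "$GITEA_PYPI_URL" ray-serve/dist/*
```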