
Lightning-AI/LitServe

Forks: 47  Stars: 940 (updated 2024-08-27 09:39:44)

License: Apache-2.0

Language: Python

Lightning-fast serving engine for AI models. Flexible. Easy. Enterprise-scale.

Latest release: v0.2.1 (2024-08-26 19:51:31)


Easily serve AI models Lightning fast ⚡


 

Lightning-fast serving engine for AI models.
Easy. Flexible. Enterprise-scale.


LitServe is an easy-to-use, flexible serving engine for AI models built on FastAPI. Features like batching, streaming, and GPU autoscaling eliminate the need to rebuild a FastAPI server per model.

LitServe is at least 2x faster than plain FastAPI due to AI-specific multi-worker handling.

✅ (2x)+ faster serving  ✅ Easy to use        ✅ Batching, Streaming   
✅ Bring your own model  ✅ PyTorch/JAX/TF/... ✅ Built on FastAPI      
✅ GPU autoscaling       ✅ Multi-modal        ✅ Self-host or ⚡️ managed


Quick start • Examples • Features • Performance • Hosting • Docs

 

Get started

 

Quick start

Install LitServe via pip (more options):

pip install litserve

Define a server

This toy example with two models (a compound AI system) shows LitServe's flexibility (see real examples):

# server.py
import litserve as ls

# (STEP 1) - DEFINE THE API (compound AI system)
class SimpleLitAPI(ls.LitAPI):
    def setup(self, device):
        # setup is called once at startup. Build a compound AI system (1+ models), connect DBs, load data, etc...
        self.model1 = lambda x: x**2
        self.model2 = lambda x: x**3

    def decode_request(self, request):
        # Convert the request payload to model input.
        return request["input"] 

    def predict(self, x):
        # Easily build compound systems. Run inference and return the output.
        squared = self.model1(x)
        cubed = self.model2(x)
        output = squared + cubed
        return {"output": output}

    def encode_response(self, output):
        # Convert the model output to a response payload.
        return {"output": output["output"]}

# (STEP 2) - START THE SERVER
if __name__ == "__main__":
    # scale with advanced features (batching, GPUs, etc...)
    server = ls.LitServer(SimpleLitAPI(), accelerator="auto", max_batch_size=1)
    server.run(port=8000)

Now run the server from the command line:

python server.py

  • LitAPI gives full control to build scalable compound AI systems (1 or more models).
  • LitServer handles optimizations like batching, auto-GPU scaling, etc. (see the batching sketch below).
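
Those optimizations are turned on through LitServer arguments rather than new server code. Below is a minimal batched variant of the server above; it assumes LitServe's default batching behavior (predict receives a list of decoded inputs when max_batch_size > 1) and the batch_timeout argument, so treat the exact parameters as a sketch rather than a definitive reference.

# batched_server.py - a hedged sketch of server-side batching
import litserve as ls

class BatchedLitAPI(ls.LitAPI):
    def setup(self, device):
        # one placeholder model; swap in a real model here
        self.model = lambda x: x**2

    def decode_request(self, request):
        # still called once per incoming request
        return request["input"]

    def predict(self, inputs):
        # with batching enabled, inputs is assumed to be a list of decoded
        # requests; return one output per input, in the same order
        return [self.model(x) for x in inputs]

    def encode_response(self, output):
        # called once per request with its own item from the predict output
        return {"output": output}

if __name__ == "__main__":
    # max_batch_size appears in the quick start; batch_timeout (seconds to
    # wait while filling a batch) is an assumption about the API
    server = ls.LitServer(BatchedLitAPI(), accelerator="auto",
                          max_batch_size=4, batch_timeout=0.05)
    server.run(port=8000)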

Query the server

Use the auto-generated LitServe client:

python client.py

Or write a custom client:

import requests

response = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"input": 4.0}
)
print(response.json())

Learn how to make this server 200x faster.

 

Featured examples

Use LitServe to deploy any model or AI service (Gen AI, classical ML, embedding servers, LLMs, vision, audio, multi-modal systems, etc.).

Toy model: Hello world
LLMs: Llama 3 (8B), LLM Proxy server, Agent with tool use
NLP: Hugging Face, BERT, Text embedding API
Multimodal: OpenAI CLIP, MiniCPM, Phi-3.5 Vision Instruct
Audio: Whisper, AudioCraft, StableAudio, Noise cancellation (DeepFilterNet)
Vision: Stable Diffusion 2, AuraFlow, Flux, Image super resolution (Aura SR)
Speech: Text-to-speech (XTTS V2)
Classical ML: Random forest, XGBoost
Miscellaneous: Media conversion API (ffmpeg)

Browse 100+ community-built templates

 

Features

State-of-the-art features:

(2x)+ faster than plain FastAPI
Bring your own model
Build compound systems (1+ models)
GPU autoscaling
Batching
Streaming (see the sketch below)
Worker autoscaling
Self-host on your machines
Host fully managed on Lightning AI
Serve all models: (LLMs, vision, etc.)
Scale to zero (serverless)
Supports PyTorch, JAX, TF, etc...
OpenAPI compliant
OpenAI compatibility
Authentication

10+ features...

Note: We prioritize scalable, enterprise-level features over hype.
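
To illustrate the streaming item in the list above, the sketch below returns partial results as they are produced. The generator-style predict/encode_response hooks and the stream=True flag reflect one reading of the LitServe docs and should be treated as assumptions, not a definitive API reference.

# streaming_server.py - a hedged sketch of the streaming feature
import litserve as ls

class StreamingLitAPI(ls.LitAPI):
    def setup(self, device):
        # placeholder generator standing in for an LLM that emits tokens
        self.model = lambda prompt: (f"{prompt}-token-{i}" for i in range(5))

    def decode_request(self, request):
        return request["prompt"]

    def predict(self, prompt):
        # yield partial outputs instead of returning once
        for token in self.model(prompt):
            yield token

    def encode_response(self, outputs):
        # forward each partial output to the client as it arrives
        for token in outputs:
            yield {"token": token}

if __name__ == "__main__":
    # stream=True is assumed to switch the endpoint to a streaming response
    server = ls.LitServer(StreamingLitAPI(), stream=True)
    server.run(port=8000)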

 

Performance

LitServe is designed for AI workloads. Specialized multi-worker handling delivers a minimum 2x speedup over FastAPI.

Additional features like batching and GPU autoscaling can drive performance well beyond 2x, scaling efficiently to handle more simultaneous requests than FastAPI and TorchServe.

Reproduce the full benchmarks here (higher is better).


These results are for image and text classification ML tasks. The performance relationships hold for other ML tasks (embedding, LLM serving, audio, segmentation, object detection, summarization etc...).
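
As a rough way to see what handling more simultaneous requests looks like from the client side, the standard-library probe below fires concurrent requests at the quick-start endpoint. It is only an illustration, not the project's benchmark harness; the URL and payload come from the example above, and the worker and request counts are arbitrary.

# load_probe.py - rough concurrency probe, not the official benchmark
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://127.0.0.1:8000/predict"  # endpoint from the quick start

def one_request(_):
    return requests.post(URL, json={"input": 4.0}, timeout=10).status_code

if __name__ == "__main__":
    n = 200
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=32) as pool:
        codes = list(pool.map(one_request, range(n)))
    elapsed = time.perf_counter() - start
    print(f"{n} requests in {elapsed:.2f}s ({n / elapsed:.1f} req/s); "
          f"all OK: {all(code == 200 for code in codes)}")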

💡 Note on LLM serving: For high-performance LLM serving (like Ollama/vLLM), use LitGPT or build your custom vLLM-like server with LitServe. Optimizations like KV caching, which can be done with LitServe, are needed to maximize LLM performance.

 

Hosting options

LitServe can be hosted independently on your own machines or fully managed via Lightning Studios.

Self-hosting is ideal for hackers, students, and DIY developers, while fully managed hosting is ideal for enterprise developers who need easy autoscaling, security, release management, observability, and 99.995% uptime.

 

Host on Lightning

 

Feature                      Self Managed                  Fully Managed on Studios
Deployment                   ✅ Do it yourself deployment   ✅ One-button cloud deploy
Load balancing               ❌                             ✅
Autoscaling                  ❌                             ✅
Scale to zero                ❌                             ✅
Multi-machine inference      ❌                             ✅
Authentication               ❌                             ✅
Own VPC                      ❌                             ✅
AWS, GCP                     ❌                             ✅
Use your own cloud commits   ❌                             ✅

 

Community

LitServe is a community project that accepts contributions. Let's make the world's most advanced AI inference engine.

💬 Get help on Discord
📋 License: Apache 2.0

Recent releases (data updated 2024-08-28 16:26:41):

2024-08-26 19:51:31 v0.2.1

2024-08-22 21:04:03 v0.2.0

2024-08-12 18:45:48 v0.2.0.dev0

2024-08-02 19:52:54 v0.1.5

2024-07-24 22:04:00 v0.1.4

2024-07-15 21:06:43 v0.1.3

2024-06-13 05:07:11 v0.1.2

2024-06-07 01:55:36 v0.1.1

2024-06-05 03:37:04 v0.1.1dev0

2024-04-23 21:08:48 v0.1.0

Topics:

ai, api, serving
