Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.
Xorbits Inference: Model Serving Made Easy 🤖
Xinference Cloud · Xinference Enterprise · Self-hosting · Documentation
Xorbits Inference (Xinference) is a powerful and versatile library designed to serve language, speech recognition, and multimodal models. With Xorbits Inference, you can effortlessly deploy and serve your own or state-of-the-art built-in models using just a single command. Whether you are a researcher, developer, or data scientist, Xorbits Inference empowers you to unleash the full potential of cutting-edge AI models.
🔥 Hot Topics
Framework Enhancements
- Support continuous batching for the Transformers engine: #1724
- Support the MLX backend for Apple silicon chips: #1765
- Support specifying worker and GPU indexes for launching models: #1195
- Support the SGLang backend: #1161
- Support LoRA for LLM and image models: #1080
- Support speech recognition models: #929
- Metrics support: #906
New Models
- Built-in support for F5-TTS: #2626
- Built-in support for GLM Edge: #2582
- Built-in support for QwQ-32B-Preview: #2602
- Built-in support for Qwen 2.5 Series: #2325
- Built-in support for Fish Speech V1.4: #2295
- Built-in support for DeepSeek-V2.5: #2292
- Built-in support for Qwen2-Audio: #2271
- Built-in support for Qwen2-vl-instruct: #2205
Integrations
- Dify: an LLMOps platform that enables developers (and even non-developers) to quickly build useful applications based on large language models, ensuring they are visual, operable, and improvable.
- FastGPT: a knowledge-based platform built on LLMs that offers out-of-the-box data processing and model invocation capabilities, and allows workflow orchestration through Flow visualization.
- Chatbox: a desktop client for multiple cutting-edge LLMs, available on Windows, Mac and Linux.
- RAGFlow: an open-source RAG engine based on deep document understanding.
Key Features
🌟 Model Serving Made Easy: Simplify the process of serving large language, speech recognition, and multimodal models. You can set up and deploy your models for experimentation and production with a single command.
⚡️ State-of-the-Art Models: Experiment with cutting-edge built-in models using a single command. Inference provides access to state-of-the-art open-source models!
🖥 Heterogeneous Hardware Utilization: Make the most of your hardware resources with ggml. Xorbits Inference intelligently utilizes heterogeneous hardware, including GPUs and CPUs, to accelerate your model inference tasks.
⚙️ Flexible API and Interfaces: Offer multiple interfaces for interacting with your models, supporting an OpenAI-compatible RESTful API (including the Function Calling API), RPC, CLI and WebUI for seamless model management and interaction (see the example after this list).
🌐 Distributed Deployment: Excel in distributed deployment scenarios, allowing the seamless distribution of model inference across multiple devices or machines.
🔌 Built-in Integration with Third-Party Libraries: Xorbits Inference seamlessly integrates with popular third-party libraries including LangChain, LlamaIndex, Dify, and Chatbox.
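Because the RESTful API is OpenAI-compatible, the official `openai` Python package can talk to a running Xinference server directly. Here is a minimal sketch, assuming a server on the default port 9997 and a model already launched under the placeholder UID `my-llm`:

```python
# Minimal sketch: chat with a model served by Xinference through its
# OpenAI-compatible endpoint. "my-llm" is a placeholder for the UID of a
# model you have already launched.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:9997/v1",  # default Xinference address
    api_key="not-needed",                 # no API key is required by default
)

response = client.chat.completions.create(
    model="my-llm",
    messages=[{"role": "user", "content": "What is the largest animal?"}],
)
print(response.choices[0].message.content)
```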
Why Xinference
Feature | Xinference | FastChat | OpenLLM | RayLLM |
---|---|---|---|---|
OpenAI-Compatible RESTful API | ✅ | ✅ | ✅ | ✅ |
vLLM Integrations | ✅ | ✅ | ✅ | ✅ |
More Inference Engines (GGML, TensorRT) | ✅ | ❌ | ✅ | ✅ |
More Platforms (CPU, Metal) | ✅ | ✅ | ❌ | ❌ |
Multi-node Cluster Deployment | ✅ | ❌ | ❌ | ✅ |
Image Models (Text-to-Image) | ✅ | ✅ | ❌ | ❌ |
Text Embedding Models | ✅ | ❌ | ❌ | ❌ |
Multimodal Models | ✅ | ❌ | ❌ | ❌ |
Audio Models | ✅ | ❌ | ❌ | ❌ |
More OpenAI Functionalities (Function Calling) | ✅ | ❌ | ❌ | ❌ |
Using Xinference
- Cloud: We host a Xinference Cloud service for anyone to try with zero setup.
- Self-hosting: Quickly get the Xinference Community Edition running in your environment with this starter guide. Use our documentation for further references and more in-depth instructions.
- Enterprise / organizations: We provide additional enterprise-centric features. Send us an email to discuss enterprise needs.
Staying Ahead
Star Xinference on GitHub and be instantly notified of new releases.
Getting Started
Jupyter Notebook
The lightest way to experience Xinference is to try our Jupyter Notebook on Google Colab.
Docker
Nvidia GPU users can start the Xinference server using the Xinference Docker image. Before running the command below, ensure that both Docker and CUDA are set up on your system.
docker run --name xinference -d -p 9997:9997 -e XINFERENCE_HOME=/data -v </on/your/host>:/data --gpus all xprobe/xinference:latest xinference-local -H 0.0.0.0
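Once the container is up, you can confirm the server is reachable. A quick sanity check, assuming the default port mapping above; the `/v1/models` endpoint of the OpenAI-compatible API lists the models currently being served:

```python
# Sanity check: list the models the server is currently serving.
# The list will be empty until you launch a model.
import requests

resp = requests.get("http://127.0.0.1:9997/v1/models")
resp.raise_for_status()
print(resp.json())
```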
K8s via helm
Ensure that you have GPU support in your Kubernetes cluster, then install as follows.
# add repo
helm repo add xinference https://xorbitsai.github.io/xinference-helm-charts
# update indexes and query xinference versions
helm repo update xinference
helm search repo xinference/xinference --devel --versions
# install xinference
helm install xinference xinference/xinference -n xinference --version 0.0.1-v<xinference_release_version>
For more customized installation methods on K8s, please refer to the documentation.
Quick Start
Install Xinference with pip as follows. (For more options, see the Installation page.)
pip install "xinference[all]"
To start a local instance of Xinference, run the following command:
$ xinference-local
Once Xinference is running, there are multiple ways to try it: via the web UI, via cURL, via the command line, or via Xinference's Python client. Check out our docs for the guide.
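For instance, here is a hedged sketch of the Python client against the local server started above; the model name and engine are illustrative, and the exact launch parameters (size, format, quantization) depend on the model and engine you choose, so consult the documentation for details:

```python
# Sketch: launch a built-in model and chat with it through the Python
# client. The model name and engine here are illustrative examples.
from xinference.client import Client

client = Client("http://127.0.0.1:9997")

# launch_model returns a UID that is used to address the model afterwards.
model_uid = client.launch_model(
    model_name="qwen2.5-instruct",  # any built-in model name
    model_engine="transformers",    # or vllm, llama.cpp, mlx, depending on your setup
)

model = client.get_model(model_uid)
print(model.chat(messages=[{"role": "user", "content": "Hello!"}]))
```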
Getting involved
Platform | Purpose |
---|---|
Github Issues | Reporting bugs and filing feature requests. |
Slack | Collaborating with other Xorbits users. |
Twitter | Staying up-to-date on new features. |
Citation
If this work is helpful, please kindly cite as:
@inproceedings{lu2024xinference,
    title     = "Xinference: Making Large Model Serving Easy",
    author    = "Lu, Weizheng and Xiong, Lingfeng and Zhang, Feng and Qin, Xuye and Chen, Yueguo",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month     = nov,
    year      = "2024",
    address   = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url       = "https://aclanthology.org/2024.emnlp-demo.30",
    pages     = "291--300",
}
Contributors
Star History