
xorbitsai/inference

Forks: 196 · Stars: 2511 (updated 2024-04-24 19:32:50)

License: Apache-2.0

Language: Python

Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.

Latest release: v0.10.3 (2024-04-24 10:57:21)

Official website · GitHub


Xorbits Inference: Model Serving Made Easy 🤖


English | 中文介绍 | 日本語


Xorbits Inference (Xinference) is a powerful and versatile library designed to serve language, speech recognition, and multimodal models. With Xorbits Inference, you can effortlessly deploy and serve your own or state-of-the-art built-in models with a single command. Whether you are a researcher, developer, or data scientist, Xorbits Inference empowers you to unleash the full potential of cutting-edge AI models.

👉 Join our Slack community!

🔥 Hot Topics

Framework Enhancements

  • Support specifying worker and GPU indexes for launching models: #1195
  • Support SGLang backend: #1161
  • Support LoRA for LLM and image models: #1080
  • Support speech recognition model: #929
  • Metrics support: #906
  • Docker image: #855
  • Support multimodal: #829

New Models

Integrations

  • FastGPT: a knowledge-based platform built on LLMs that offers out-of-the-box data processing and model invocation capabilities, and allows workflow orchestration through Flow visualization.
  • Dify: an LLMOps platform that enables developers (and even non-developers) to quickly build useful applications based on large language models, ensuring they are visual, operable, and improvable.
  • Chatbox: a desktop client for multiple cutting-edge LLM models, available on Windows, Mac and Linux.

Key Features

🌟 Model Serving Made Easy: Simplify the process of serving large language, speech recognition, and multimodal models. You can set up and deploy your models for experimentation and production with a single command.

⚡️ State-of-the-Art Models: Experiment with cutting-edge built-in models using a single command. Inference provides access to state-of-the-art open-source models!

🖥 Heterogeneous Hardware Utilization: Make the most of your hardware resources with ggml. Xorbits Inference intelligently utilizes heterogeneous hardware, including GPUs and CPUs, to accelerate your model inference tasks.

⚙️ Flexible API and Interfaces: Offer multiple interfaces for interacting with your models, supporting OpenAI compatible RESTful API (including Function Calling API), RPC, CLI and WebUI for seamless model management and interaction.
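As a sketch of what this OpenAI-compatible surface looks like, the snippet below builds a chat-completion payload with a function-calling tool attached. The model name `"my-model"` and the `get_weather` tool are placeholders for illustration, not anything Xinference ships; the payload shape follows the standard OpenAI chat-completions convention the API is compatible with.

```python
import json

def chat_payload_with_tool(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload with one tool attached."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [
            {
                "type": "function",
                "function": {
                    # Hypothetical tool, purely for illustration.
                    "name": "get_weather",
                    "description": "Look up the current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }

payload = chat_payload_with_tool("my-model", "What's the weather in Paris?")
print(json.dumps(payload, indent=2))
```

A server that supports function calling may then respond with a `tool_calls` entry naming the tool and its JSON arguments instead of plain text.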

🌐 Distributed Deployment: Excel in distributed deployment scenarios, allowing the seamless distribution of model inference across multiple devices or machines.

🔌 Built-in Integration with Third-Party Libraries: Xorbits Inference seamlessly integrates with popular third-party libraries including LangChain, LlamaIndex, Dify, and Chatbox.

Why Xinference

| Feature | Xinference | FastChat | OpenLLM | RayLLM |
|---|---|---|---|---|
| OpenAI-Compatible RESTful API | ✅ | ✅ | ✅ | ✅ |
| vLLM Integrations | ✅ | ✅ | ✅ | ✅ |
| More Inference Engines (GGML, TensorRT) | ✅ | ❌ | ✅ | ✅ |
| More Platforms (CPU, Metal) | ✅ | ✅ | ❌ | ❌ |
| Multi-node Cluster Deployment | ✅ | ❌ | ❌ | ✅ |
| Image Models (Text-to-Image) | ✅ | ❌ | ❌ | ❌ |
| Text Embedding Models | ✅ | ❌ | ❌ | ❌ |
| Multimodal Models | ✅ | ❌ | ❌ | ❌ |
| Audio Models | ✅ | ❌ | ❌ | ❌ |
| More OpenAI Functionalities (Function Calling) | ✅ | ❌ | ❌ | ❌ |

Getting Started

Please give us a star before you begin, and you'll receive instant notifications for every new release on GitHub!

Jupyter Notebook

The lightest way to experience Xinference is to try our Jupyter Notebook on Google Colab.

Docker

Nvidia GPU users can start the Xinference server using the Xinference Docker image. Before running the command below, ensure that both Docker and CUDA are set up on your system.

docker run --name xinference -d -p 9997:9997 -e XINFERENCE_HOME=/data -v </on/your/host>:/data --gpus all xprobe/xinference:latest xinference-local -H 0.0.0.0

Quick Start

Install Xinference by using pip as follows. (For more options, see the Installation page.)

pip install "xinference[all]"

To start a local instance of Xinference, run the following command:

$ xinference-local

Once Xinference is running, there are multiple ways to try it: via the web UI, via cURL, via the command line, or via Xinference's Python client. Check out our docs for the guide.
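For instance, a request against the REST API can be assembled with nothing but the standard library. This is a sketch: port 9997 matches the `xinference-local` default, but the model UID `"my-model"` is a placeholder for whatever model you have actually launched.

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat-completion request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("http://localhost:9997", "my-model", "Hello!")
print(req.full_url)
# Actually sending it (urllib.request.urlopen(req)) requires a running server.
```

The same request can equally be issued with cURL or the official `openai` client pointed at the local endpoint, since the API follows the OpenAI wire format.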

(Screenshot: the Xinference web UI)

Getting involved

| Platform | Purpose |
|---|---|
| GitHub Issues | Reporting bugs and filing feature requests. |
| Slack | Collaborating with other Xorbits users. |
| Twitter | Staying up-to-date on new features. |

Contributors

Recent releases (data updated 2024-04-29 00:38:04):

  • v0.10.3 (2024-04-24 10:57:21)
  • v0.10.2.post1 (2024-04-19 14:48:24)
  • v0.10.2 (2024-04-19 14:19:47)
  • v0.10.1 (2024-04-12 10:47:05)
  • v0.10.0 (2024-03-29 12:56:34)
  • v0.9.4 (2024-03-21 15:06:29)
  • v0.9.3 (2024-03-15 14:36:19)
  • v0.9.2 (2024-03-08 14:09:49)
  • v0.9.1 (2024-03-01 15:04:29)
  • v0.9.0 (2024-02-22 16:03:44)

Topics:

artificial-intelligence, chatglm, chatglm2, deployment, flan-t5, gemma, ggml, inference, llama, llama2, llamacpp, llm, machine-learning, mistral, openai-api, pytorch, qwen, vllm, whisper, wizardlm
