bigscience-workshop/petals
Forks: 525 · Stars: 9245 (updated 2024-11-18 15:15:58)
License: MIT
Language: Python
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
Latest release: v2.2.0 (2023-09-07 01:29:56)
Run large language models at home, BitTorrent-style.
Fine-tuning and inference up to 10x faster than offloading
Generate text with distributed Llama 3.1 (up to 405B), Mixtral (8x22B), Falcon (40B+) or BLOOM (176B) and fine‑tune them for your own tasks — right from your desktop computer or Google Colab:
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM
# Choose any model available at https://health.petals.dev
model_name = "meta-llama/Meta-Llama-3.1-405B-Instruct"
# Connect to a distributed network hosting model layers
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)
# Run the model as if it were on your computer
inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0])) # A cat sat on a mat...
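Fine-tuning works the same way: trainable parameters (such as prompts or adapters) stay on your machine, while forward and backward passes run through the remote transformer blocks. Here is a minimal training-loop sketch, reusing the model above and assuming a hypothetical data_loader that yields batches of token IDs with shifted next-token labels:

import torch
import torch.nn.functional as F

# Only local trainable parameters are updated; activations and gradients
# for the frozen remote blocks travel through the swarm transparently.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

for input_ids, labels in data_loader:            # placeholder dataset
    outputs = model(input_ids)                   # forward pass through the swarm
    loss = F.cross_entropy(
        outputs.logits.flatten(0, 1),            # [batch * seq_len, vocab_size]
        labels.flatten(),                        # [batch * seq_len]
    )
    optimizer.zero_grad()
    loss.backward()                              # backward pass through the swarm
    optimizer.step()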
🦙 Want to run Llama? Request access to its weights, then run huggingface-cli login
in the terminal before loading the model. Or just try it in our chatbot app.
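If you work in a notebook (e.g. Google Colab) where a terminal is inconvenient, you can log in programmatically instead. A minimal sketch using the huggingface_hub library (it prompts you to paste your access token):

from huggingface_hub import login

# Equivalent to running `huggingface-cli login` in the terminal
login()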
🔏 Privacy. Your data will be processed with the help of other people in the public swarm. Learn more about privacy here. For sensitive data, you can set up a private swarm among people you trust.
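To use a private swarm from Python, the client is pointed at your own bootstrap peers instead of the public ones. A minimal sketch, where the multiaddress below is a placeholder for your actual bootstrap peer and the initial_peers argument follows the private-swarm guide:

from petals import AutoDistributedModelForCausalLM

# Placeholder multiaddress of your own bootstrap peer
INITIAL_PEERS = ["/ip4/10.0.0.1/tcp/31337/p2p/QmYourBootstrapPeerID"]

model = AutoDistributedModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-405B-Instruct",
    initial_peers=INITIAL_PEERS,  # connect to the private swarm instead of the public one
)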
💬 Any questions? Ping us in our Discord!
Connect your GPU and increase Petals capacity
Petals is a community-run system — we rely on people sharing their GPUs. You can help serve one of the available models or host a new model from the 🤗 Model Hub!
As an example, here is how to host a part of Llama 3.1 (405B) Instruct on your GPU:
🦙 Want to host Llama? Request access to its weights, then run huggingface-cli login
in the terminal before loading the model.
🐧 Linux + Anaconda. Run these commands for NVIDIA GPUs (or follow this for AMD):
conda install pytorch pytorch-cuda=11.7 -c pytorch -c nvidia
pip install git+https://github.com/bigscience-workshop/petals
python -m petals.cli.run_server meta-llama/Meta-Llama-3.1-405B-Instruct
🪟 Windows + WSL. Follow this guide on our Wiki.
🐋 Docker. Run our Docker image for NVIDIA GPUs (or follow this for AMD):
sudo docker run -p 31330:31330 --ipc host --gpus all --volume petals-cache:/cache --rm \
learningathome/petals:main \
python -m petals.cli.run_server --port 31330 meta-llama/Meta-Llama-3.1-405B-Instruct
🍏 macOS + Apple M1/M2 GPU. Install Homebrew, then run these commands:
brew install python
python3 -m pip install git+https://github.com/bigscience-workshop/petals
python3 -m petals.cli.run_server meta-llama/Meta-Llama-3.1-405B-Instruct
📚 Learn more (how to use multiple GPUs, start the server on boot, etc.)
🔒 Security. Hosting a server does not allow others to run custom code on your computer. Learn more here.
💬 Any questions? Ping us in our Discord!
🏆 Thank you! Once you load and host 10+ blocks, we can show your name or link on the swarm monitor as a way to say thanks. You can specify it with --public_name YOUR_NAME.
How does it work?
- You load a small part of the model, then join a network of people serving the other parts. Single-batch inference runs at up to 6 tokens/sec for Llama 2 (70B) and up to 4 tokens/sec for Falcon (180B) — enough for chatbots and interactive apps (see the inference-session sketch after this list).
- You can employ any fine-tuning and sampling methods, execute custom paths through the model, or see its hidden states. You get the comforts of an API with the flexibility of PyTorch and 🤗 Transformers.
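For interactive apps, you can keep a single inference session open so the attention caches held by the servers are reused across calls instead of re-sending the whole prefix every time. A minimal sketch, reusing the model and tokenizer from the example above (the session API may differ slightly between Petals versions):

# Reuse server-side attention caches across several generate() calls
with model.inference_session(max_length=512) as session:
    inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
    outputs = model.generate(inputs, session=session, max_new_tokens=5)
    print(tokenizer.decode(outputs[0]))

    # Only the new tokens are sent; the cached prefix stays on the servers
    more_inputs = tokenizer(" Then the dog", return_tensors="pt")["input_ids"]
    outputs = model.generate(more_inputs, session=session, max_new_tokens=5)
    print(tokenizer.decode(outputs[0]))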
📜 Read paper 📚 See FAQ
📚 Tutorials, examples, and more
Basic tutorials:
- Getting started: tutorial
- Prompt-tune Llama-65B for text semantic classification: tutorial
- Prompt-tune BLOOM to create a personified chatbot: tutorial (a rough sketch of the prompt-tuning setup follows below)
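In the prompt-tuning tutorials above, only a handful of soft prompt embeddings (and a small head) are trained on your machine, while the frozen transformer blocks stay in the swarm. A rough sketch of the setup, with class and argument names taken from the BLOOM tutorial; treat them as assumptions that may differ for other models or newer Petals versions:

from petals import DistributedBloomForSequenceClassification

# Train 16 soft prompt tokens and a classification head locally;
# the frozen BLOOM blocks remain distributed across the swarm.
model = DistributedBloomForSequenceClassification.from_pretrained(
    "bigscience/bloom",
    pre_seq_len=16,        # number of trainable prompt embeddings
    tuning_mode="ptune",   # prompt tuning; "deep_ptune" tunes prompts at every layer
    num_labels=2,          # e.g. binary text classification
)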
Useful tools:
- Chatbot web app (connects to Petals via an HTTP/WebSocket endpoint; see the request sketch after this list): source code
- Monitor for the public swarm: source code
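The chatbot backend also exposes the public swarm over plain HTTP, so you can try a model without installing Petals locally. A rough sketch using requests; the endpoint and field names below are assumptions based on the chatbot's source code, so check that repository for the authoritative API:

import requests

# Hypothetical request to the chatbot backend's HTTP endpoint
response = requests.post(
    "https://chat.petals.dev/api/v1/generate",
    data={
        "model": "meta-llama/Meta-Llama-3.1-405B-Instruct",
        "inputs": "A cat sat",
        "max_new_tokens": 5,
    },
    timeout=120,
)
print(response.json())  # expected to include the generated continuation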
Advanced guides:
Benchmarks
Please see Section 3.3 of our paper.
🛠️ Contributing
Please see our FAQ on contributing.
📜 Citations
Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, and Colin Raffel. Petals: Collaborative Inference and Fine-tuning of Large Models. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations). 2023.
@inproceedings{borzunov2023petals,
title = {Petals: Collaborative Inference and Fine-tuning of Large Models},
author = {Borzunov, Alexander and Baranchuk, Dmitry and Dettmers, Tim and Riabinin, Maksim and Belkada, Younes and Chumachenko, Artem and Samygin, Pavel and Raffel, Colin},
booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},
pages = {558--568},
year = {2023},
url = {https://arxiv.org/abs/2209.01188}
}
Alexander Borzunov, Max Ryabinin, Artem Chumachenko, Dmitry Baranchuk, Tim Dettmers, Younes Belkada, Pavel Samygin, and Colin Raffel. Distributed inference and fine-tuning of large language models over the Internet. Advances in Neural Information Processing Systems 36 (2023).
@inproceedings{borzunov2023distributed,
title = {Distributed inference and fine-tuning of large language models over the {I}nternet},
author = {Borzunov, Alexander and Ryabinin, Max and Chumachenko, Artem and Baranchuk, Dmitry and Dettmers, Tim and Belkada, Younes and Samygin, Pavel and Raffel, Colin},
booktitle = {Advances in Neural Information Processing Systems},
volume = {36},
pages = {12312--12331},
year = {2023},
url = {https://arxiv.org/abs/2312.08361}
}
This project is a part of the BigScience research workshop.
Recent releases (data updated 2024-11-21 17:51:45):
2023-09-07 01:29:56 v2.2.0
2023-08-25 00:42:00 v2.1.0
2023-07-23 22:54:09 v2.0.1
2023-07-20 02:29:48 v2.0.0.post1
2023-05-10 07:03:13 v1.1.5
2023-04-21 10:26:19 v1.1.4
2023-03-01 17:15:25 v1.1.3
2023-01-31 04:38:50 v1.1.2
2023-01-14 05:41:32 v1.1.1
2023-01-10 19:53:49 v1.1.0
Topics:
bloom, chatbot, deep-learning, distributed-systems, falcon, gpt, guanaco, language-models, large-language-models, llama, machine-learning, mixtral, neural-networks, nlp, pipeline-parallelism, pretrained-models, pytorch, tensor-parallelism, transformer, volunteer-computing