Adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
Website • Documentation • Paper
Adapters is an add-on library to HuggingFace's Transformers, integrating 10+ adapter methods into 20+ state-of-the-art Transformer models with minimal coding overhead for training and inference.
Adapters provides a unified interface for efficient fine-tuning and modular transfer learning. It supports a wide range of features, such as full-precision or quantized training (e.g. Q-LoRA, Q-Bottleneck Adapters, or Q-PrefixTuning), adapter merging via task arithmetic, and the composition of multiple adapters via composition blocks, enabling advanced research in parameter-efficient transfer learning for NLP tasks.
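For example, quantized training in the Q-LoRA style can be combined with the regular adapter workflow. The sketch below is illustrative, not the library's canonical recipe: the model name is a placeholder, 4-bit loading requires the bitsandbytes package, and the quantization settings are just one reasonable choice.

```python
import adapters
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the base model in 4-bit precision (placeholder model name; requires bitsandbytes).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
)

# Enable adapter support and add a LoRA adapter on top of the quantized weights.
adapters.init(model)
model.add_adapter("qlora_adapter", config="lora")
model.train_adapter("qlora_adapter")  # freezes the base model, trains only the adapter
```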
Note: The Adapters library has replaced the adapter-transformers package. All previously trained adapters are compatible with the new library. For transitioning, please read: https://docs.adapterhub.ml/transitioning.html.
Installation
adapters currently supports Python 3.8+ and PyTorch 1.10+. After installing PyTorch, you can install adapters from PyPI ...
```
pip install -U adapters
```

... or from source by cloning the repository:

```
git clone https://github.com/adapter-hub/adapters.git
cd adapters
pip install .
```
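To verify the installation, you can import the package and print its version (this assumes the package exposes the standard `__version__` attribute):

```python
import adapters

print(adapters.__version__)
```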
Quick Tour
Load pre-trained adapters:
```python
from adapters import AutoAdapterModel
from transformers import AutoTokenizer

model = AutoAdapterModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

model.load_adapter("AdapterHub/roberta-base-pf-imdb", source="hf", set_active=True)
print(model(**tokenizer("This works great!", return_tensors="pt")).logits)
```
Adapt existing model setups:
```python
import adapters
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("t5-base")

adapters.init(model)

model.add_adapter("my_lora_adapter", config="lora")
model.train_adapter("my_lora_adapter")
# Your regular training loop...
```
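The frozen-base-model setup produced by train_adapter() works with a plain PyTorch loop or with a Trainer. Below is a minimal sketch using AdapterTrainer from this library; the dataset and training arguments are placeholders you would replace with your own.

```python
from adapters import AdapterTrainer
from transformers import TrainingArguments

# `train_dataset` is a placeholder: an already tokenized datasets.Dataset
# with "input_ids", "attention_mask" and "labels" columns.
training_args = TrainingArguments(
    output_dir="./my_lora_adapter_out",
    learning_rate=1e-4,
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = AdapterTrainer(
    model=model,  # the model prepared above with train_adapter(...)
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()

# Save only the trained adapter weights, not the full model.
model.save_adapter("./my_lora_adapter", "my_lora_adapter")
```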
Flexibly configure adapters:
```python
from adapters import ConfigUnion, PrefixTuningConfig, ParBnConfig, AutoAdapterModel

model = AutoAdapterModel.from_pretrained("microsoft/deberta-v3-base")

adapter_config = ConfigUnion(
    PrefixTuningConfig(prefix_length=20),
    ParBnConfig(reduction_factor=4),
)
model.add_adapter("my_adapter", config=adapter_config, set_active=True)
```
Easily compose adapters in a single model:
```python
from adapters import AdapterSetup, AutoAdapterModel
from transformers import AutoTokenizer
import adapters.composition as ac

model = AutoAdapterModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

qc = model.load_adapter("AdapterHub/roberta-base-pf-trec")
sent = model.load_adapter("AdapterHub/roberta-base-pf-imdb")

with AdapterSetup(ac.Parallel(qc, sent)):
    print(model(**tokenizer("What is AdapterHub?", return_tensors="pt")))
```
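Parallel is only one of the available composition blocks; adapters can also be stacked, e.g. to combine a language adapter with a task adapter in the MAD-X style. A minimal sketch follows, where the adapter names are placeholders for adapters you have added or loaded beforehand:

```python
import adapters.composition as ac

# "lang_adapter" and "task_adapter" are assumed to have been added or loaded already.
model.active_adapters = ac.Stack("lang_adapter", "task_adapter")

outputs = model(**tokenizer("Composition blocks are flexible.", return_tensors="pt"))
```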
Useful Resources
HuggingFace's great documentation on getting started with Transformers can be found here. adapters is fully compatible with Transformers.
To get started with adapters, refer to these locations:
- Colab notebook tutorials, a series of notebooks providing an introduction to all the main concepts of (adapter-)transformers and AdapterHub
- https://docs.adapterhub.ml, our documentation on training and using adapters with adapters
- https://adapterhub.ml to explore available pre-trained adapter modules and share your own adapters
- Examples folder of this repository containing HuggingFace's example training scripts, many adapted for training adapters
Implemented Methods
Currently, adapters integrates all architectures and methods listed below:
Method | Paper(s) | Quick Links |
---|---|---|
Bottleneck adapters | Houlsby et al. (2019), Bapna and Firat (2019) | Quickstart, Notebook |
AdapterFusion | Pfeiffer et al. (2021) | Docs: Training, Notebook |
MAD-X, Invertible adapters | Pfeiffer et al. (2020) | Notebook |
AdapterDrop | Rücklé et al. (2021) | Notebook |
MAD-X 2.0, Embedding training | Pfeiffer et al. (2021) | Docs: Embeddings, Notebook |
Prefix Tuning | Li and Liang (2021) | Docs |
Parallel adapters, Mix-and-Match adapters | He et al. (2021) | Docs |
Compacter | Mahabadi et al. (2021) | Docs |
LoRA | Hu et al. (2021) | Docs |
(IA)^3 | Liu et al. (2022) | Docs |
UniPELT | Mao et al. (2022) | Docs |
Prompt Tuning | Lester et al. (2021) | Docs |
QLoRA | Dettmers et al. (2023) | Notebook |
ReFT | Wu et al. (2024) | Docs |
Adapter Task Arithmetics | Chronopoulou et al. (2023), Zhang et al. (2023) | Docs, Notebook |
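Each method in the table is selected via a configuration passed to add_adapter, either as a configuration class or as a configuration string. The sketch below illustrates both options; the specific method choices and adapter names here are illustrative.

```python
from adapters import AutoAdapterModel, CompacterConfig, IA3Config

model = AutoAdapterModel.from_pretrained("roberta-base")

# Option 1: configuration classes (illustrative choices: Compacter and (IA)^3)
model.add_adapter("compacter_adapter", config=CompacterConfig())
model.add_adapter("ia3_adapter", config=IA3Config())

# Option 2: configuration strings
model.add_adapter("prefix_adapter", config="prefix_tuning")
```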
Supported Models
We currently support the PyTorch versions of all models listed on the Model Overview page in our documentation.
Developing & Contributing
To get started with developing on Adapters yourself and learn more about ways to contribute, please see https://docs.adapterhub.ml/contributing.html.
Citation
If you use Adapters in your work, please consider citing our library paper: Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning
@inproceedings{poth-etal-2023-adapters,
title = "Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning",
author = {Poth, Clifton and
Sterz, Hannah and
Paul, Indraneil and
Purkayastha, Sukannya and
Engl{\"a}nder, Leon and
Imhof, Timo and
Vuli{\'c}, Ivan and
Ruder, Sebastian and
Gurevych, Iryna and
Pfeiffer, Jonas},
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.13",
pages = "149--160",
}
Alternatively, for the predecessor adapter-transformers, the Hub infrastructure, and adapters uploaded by the AdapterHub team, please consider citing our initial paper: AdapterHub: A Framework for Adapting Transformers
@inproceedings{pfeiffer2020AdapterHub,
title={AdapterHub: A Framework for Adapting Transformers},
author={Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Poth, Clifton and
Kamath, Aishwarya and
Vuli{\'c}, Ivan and
Ruder, Sebastian and
Cho, Kyunghyun and
Gurevych, Iryna},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
pages={46--54},
year={2020}
}