CarperAI/trlx

license: MIT

Language: Python

A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF)

Latest release: v0.7.0 (2023-06-24 06:21:52)

GitHub: https://github.com/CarperAI/trlx

Transformer Reinforcement Learning X

trlX is a distributed training framework designed from the ground up to focus on fine-tuning large language models with reinforcement learning using either a provided reward function or a reward-labeled dataset.

Training support for 🤗 Hugging Face models is provided by Accelerate-backed trainers, allowing users to fine-tune causal and T5-based language models of up to 20B parameters, such as facebook/opt-6.7b, EleutherAI/gpt-neox-20b, and google/flan-t5-xxl. For models beyond 20B parameters, trlX provides NVIDIA NeMo-backed trainers that leverage efficient parallelism techniques to scale effectively.

The following RL algorithms are currently implemented:

Algorithm                             Accelerate Trainer   NeMo Trainer
Proximal Policy Optimization (PPO)    ✅                   ✅
Implicit Language Q-Learning (ILQL)   ✅                   ✅
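
The trainer is selected by the config you pass in, and each algorithm has a default config helper. A minimal sketch, assuming the default_ilql_config helper from trlx.data.default_configs (default_ppo_config is used the same way in the hyperparameter section below):

# Sketch: choose the algorithm via its default config. ILQL trains from a
# reward-labeled dataset (samples + rewards), while PPO trains from a reward_fn.
import trlx
from trlx.data.default_configs import default_ilql_config

config = default_ilql_config()
config.model.model_path = 'gpt2'
config.tokenizer.tokenizer_path = 'gpt2'

trainer = trlx.train(
    config=config,
    samples=['dolphins', 'geese'],
    rewards=[1.0, 100.0],
)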

📖 Documentation

🧀 CHEESE: Collect human annotations for your RL application with our human-in-the-loop data collection library.

Installation

git clone https://github.com/CarperAI/trlx.git
cd trlx
pip install torch --extra-index-url https://download.pytorch.org/whl/cu118
pip install -e .

Examples

For more usage see examples. You can also try the colab notebooks below:

Description Link
Simulacra (GPT2, ILQL) Open In Colab
Sentiment (GPT2, ILQL) Open In Colab

Latest runs of the examples are available on our Weights & Biases.

How to Train

You can train a model using a reward function or a reward-labeled dataset.

Using a reward function

trainer = trlx.train('gpt2', reward_fn=lambda samples, **kwargs: [sample.count('cats') for sample in samples])
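
In practice the reward function can wrap any scoring model. A minimal sketch, assuming the transformers sentiment-analysis pipeline and the lvwerra/distilbert-imdb checkpoint used by the bundled sentiment examples (both are assumptions here, not requirements):

# Sketch: use the positive-class probability from a sentiment classifier as the
# reward. The model name mirrors the bundled sentiment examples; any scorer works.
from transformers import pipeline
import trlx

sentiment_fn = pipeline('sentiment-analysis', model='lvwerra/distilbert-imdb', top_k=None)

def reward_fn(samples, **kwargs):
    rewards = []
    for scores in sentiment_fn(samples):  # one list of label scores per sample
        positive = next(s['score'] for s in scores if s['label'] == 'POSITIVE')
        rewards.append(positive)
    return rewards

trainer = trlx.train('gpt2', reward_fn=reward_fn)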

For reward model training refer to our autocrit library.

Using a reward-labeled dataset

trainer = trlx.train('EleutherAI/gpt-j-6B', samples=['dolphins', 'geese'], rewards=[1.0, 100.0])

Using a prompt-completion dataset

trainer = trlx.train('gpt2', samples=[['Question: 1 + 2 Answer:', '3'], ['Question: Solve this equation: ∀n>0, s=2, sum(n ** -s). Answer:', '(pi ** 2)/ 6']])

Trainers provide a wrapper over their underlying model:

trainer.generate(**tokenizer('Q: Who rules the world? A:', return_tensors='pt'), do_sample=True)
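
The tokenizer in that snippet is not created by trlX; a minimal sketch of the full round trip, assuming the base model is gpt2 and loading its tokenizer with transformers.AutoTokenizer:

# Sketch: tokenize a prompt, sample a completion through the trainer's wrapper,
# and decode it. trainer.generate forwards its kwargs to the underlying
# Hugging Face model's generate().
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('gpt2')  # assumption: gpt2 base model
inputs = tokenizer('Q: Who rules the world? A:', return_tensors='pt')
output_ids = trainer.generate(**inputs, do_sample=True, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))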

Configure Hyperparameters

from trlx.data.default_configs import default_ppo_config

config = default_ppo_config()
config.model.model_path = 'EleutherAI/gpt-neox-20b'
config.tokenizer.tokenizer_path = 'EleutherAI/gpt-neox-20b'
config.train.seq_length = 2048

trainer = trlx.train(config=config, reward_fn=lambda samples, **kwargs: [len(sample) for sample in samples])

To reduce memory usage (if you're experiencing CUDA Out of Memory errors), first try the lowest settings for the following hyperparameters, then increase them gradually:

# micro batch size per gpu
config.train.batch_size = 1
# freeze all transformer layers
config.model.num_layers_unfrozen = 0
# maximum sample length, prompts or samples longer than that will be truncated
config.train.seq_length = 128

# micro batch size for sampling (specific for PPO)
config.method.chunk_size = 1
# use an additional Q-head (specific for ILQL)
config.method.two_qs = False
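
Putting it together, a low-memory PPO run might look like the following sketch (same settings as above, applied to the default PPO config):

# Sketch: apply the low-memory settings above to the default PPO config.
import trlx
from trlx.data.default_configs import default_ppo_config

config = default_ppo_config()
config.train.batch_size = 1           # micro batch size per gpu
config.model.num_layers_unfrozen = 0  # freeze all transformer layers
config.train.seq_length = 128         # truncate longer prompts/samples
config.method.chunk_size = 1          # micro batch size for sampling (PPO)

trainer = trlx.train(
    config=config,
    reward_fn=lambda samples, **kwargs: [float(len(s)) for s in samples],
)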

Save the resulting model as a Hugging Face pretrained language model (ready to upload to the Hub!):

trainer.save_pretrained('/path/to/output/folder/')
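
The output folder is a standard Hugging Face checkpoint, so it loads with the usual transformers classes; a minimal sketch, assuming a causal LM base model:

# Sketch: reload the saved model with transformers (assuming a causal LM).
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained('/path/to/output/folder/')
# If the tokenizer was not saved alongside the model, load it from the base
# model name instead of the output folder.
tokenizer = AutoTokenizer.from_pretrained('/path/to/output/folder/')

# Optionally push to the Hub (requires `huggingface-cli login` first):
# model.push_to_hub('your-username/your-model-name')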

Use 🤗 Accelerate to launch distributed training

accelerate config # choose DeepSpeed option
accelerate launch examples/simulacra.py

Use NeMo-Megatron to launch distributed training

Follow the setup instructions in the NeMo README.

python examples/nemo_ilql_sentiments.py

For more usage, see the NeMo README.

Use Ray Tune to launch a hyperparameter sweep

ray start --head --port=6379
python -m trlx.sweep --config configs/sweeps/ppo_sweep.yml --accelerate_config configs/accelerate/ddp.yaml --num_gpus 4 examples/ppo_sentiments.py

Benchmark your trlX fork against trlX's main branch

python -m trlx.reference octocat/trlx-fork:fix-branch

Logging

trlX uses the standard Python logging library to log training information to the console. The default logger is set to the INFO level, which means that INFO, WARNING, ERROR, and CRITICAL level messages will be printed to standard output.

To change the log level directly, you can use the verbosity setter. For example, to set the log level to WARNING use:

import trlx

trlx.logging.set_verbosity(trlx.logging.WARNING)

This will suppress INFO level messages, but still print WARNING, ERROR, and CRITICAL level messages.

You can also control logging verbosity by setting the TRLX_VERBOSITY environment variable to one of the standard logging level names:

  • CRITICAL (trlx.logging.CRITICAL)
  • ERROR (trlx.logging.ERROR)
  • WARNING (trlx.logging.WARNING)
  • INFO (trlx.logging.INFO)
  • DEBUG (trlx.logging.DEBUG)
For example:

export TRLX_VERBOSITY=WARNING

By default, tqdm progress bars are used to display training progress. You can disable them by calling trlx.logging.disable_progress_bar() and re-enable them with trlx.logging.enable_progress_bar().

Messages can be formatted with greater detail by calling trlx.logging.enable_explicit_format(). This injects call-site information into each log message, which may be helpful for debugging:

[2023-01-01 05:00:00,000] [INFO] [ppo_orchestrator.py:63:make_experience] [RANK 0] Message...
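
Both helpers are plain function calls on trlx.logging, for example:

import trlx

trlx.logging.disable_progress_bar()    # turn off tqdm progress bars
trlx.logging.enable_explicit_format()  # add call-site info to each log line
# trlx.logging.enable_progress_bar()   # turn progress bars back on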

💡 Tip: To reduce the amount of logging output, you might find it helpful to change log levels of third-party libraries used by trlX. For example, try adding transformers.logging.set_verbosity_error() to the top of your trlX scripts to silence verbose messages from the transformers library (see their logging docs for more details).
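
For example, at the top of a training script:

import transformers
import trlx

transformers.logging.set_verbosity_error()        # silence verbose transformers logs
trlx.logging.set_verbosity(trlx.logging.WARNING)  # keep trlX at WARNING and above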

Contributing

For development, check out these guidelines and also read our docs.

Citing trlX

@inproceedings{havrilla-etal-2023-trlx,
    title = "trl{X}: A Framework for Large Scale Reinforcement Learning from Human Feedback",
    author = "Havrilla, Alexander  and
      Zhuravinskyi, Maksym  and
      Phung, Duy  and
      Tiwari, Aman  and
      Tow, Jonathan  and
      Biderman, Stella  and
      Anthony, Quentin  and
      Castricato, Louis",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.530",
    doi = "10.18653/v1/2023.emnlp-main.530",
    pages = "8578--8595",
}

Acknowledgements

Many thanks to Leandro von Werra for his work on trl, a library that initially inspired this repo.

Recent releases (data updated 2024-10-04 23:06:17):

2023-06-24 06:21:52 v0.7.0

2023-04-01 05:41:29 v0.6.0

2023-02-23 07:50:34 v0.5.0

2023-01-14 00:50:14 v0.4

2022-11-22 00:27:21 v0.3

2022-10-22 06:20:39 v0.2

Topics:

machine-learning, pytorch, reinforcement-learning
