
Released: 2022-08-16


🚀 Composer v0.9.0

Excited to share the release of Composer v0.9.0, which comes with an Inference Export API, beta support for Apple Silicon and TPU training, and expanded usability of NLP-related speed-up methods. This release includes 175 commits from 34 contributors, including 10 new contributors 🙌!

pip install --upgrade mosaicml==0.9.0

Alternatively, install Composer with Conda:

conda install -c mosaicml mosaicml=0.9.0

New Features

  1. 📦 Export for inference APIs

    Train with Composer and deploy anywhere! We have added a dedicated export API as well as an export training callback, so you can export Composer-trained models for inference in popular formats such as TorchScript and ONNX.

    For example, here’s how to export a model in TorchScript format:

    from composer.utils import export_for_inference
    
    # Invoking export with a trained model
    export_for_inference(model=model, 
                         save_format='torchscript', 
                         save_path=model_save_path)
    

    Here’s an example of using the training callback, which automatically exports the model to ONNX format at the end of training:

    from composer.callbacks import ExportForInferenceCallback
    
    # Initializing Trainer with the export callback
    callback = ExportForInferenceCallback(save_format='onnx',
                                          save_path=model_save_path)
    trainer = Trainer(model=model,
                      callbacks=callback,
                      train_dataloader=dataloader,
                      max_duration='10ep')
    
    # Model will be exported at the end of training
    trainer.fit()
    

    Please see our Exporting for Inference notebook for more information.

  2. 📈 ALiBi support for BERT training

    You can now use ALiBi (Attention with Linear Biases; Press et al., 2021) when training BERT models with Composer, delivering faster training and higher accuracy by leveraging shorter sequence lengths.

    ALiBi improves the quality of BERT pre-training, especially when pre-training uses shorter sequence lengths than the downstream (fine-tuning) task. This allows models with ALiBi to reach higher downstream accuracy with less pre-training time.

    Example of using ALiBi as an algorithm with the Composer Trainer:

    import composer.algorithms
    import composer.models
    import composer.trainer
    
    # Create an instance of a BERT masked language model
    model = composer.models.create_bert_mlm()
    
    # Configure ALiBi (applied when training is initialized)
    alibi = composer.algorithms.Alibi(max_sequence_length=1024)
    
    # Train with ALiBi
    trainer = composer.trainer.Trainer(
        model=model,
        train_dataloader=train_dataloader,
        algorithms=[alibi]
    )
    trainer.fit()
    

    Example using the Composer Functional API:

    import composer.functional as cf
    import composer.models
    
    # Create an instance of a BERT masked language model
    model = composer.models.create_bert_mlm()
    
    # Apply ALiBi and expand the model's maximum sequence length to 1024
    cf.apply_alibi(model=model, max_sequence_length=1024)
    

    ALiBi can also be extended to work with custom models by registering your attention and embedding layers, as sketched below. Please see our ALiBi method card for more information.
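
    As a rough illustration only: the policy_registry import path and the surgery-function signature below are our reading of the method card, and MyAttention stands in for your own attention layer; consult the method card for the exact registration API.

    import torch
    from composer.algorithms.alibi.attention_surgery_functions import policy_registry
    
    class MyAttention(torch.nn.Module):
        ...
    
    # Hypothetical surgery function: receives each MyAttention module and
    # the target maximum sequence length, and returns the module rewritten
    # to use ALiBi-style linear biases.
    @policy_registry.register(MyAttention)
    def apply_alibi_to_my_attention(module, module_index, max_sequence_length):
        # ... patch the module's attention computation here ...
        return module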

  3. 🧐 Entry point for GLUE task pre-training and fine-tuning

    You can now easily pre-train and fine-tune NLP models across all GLUE (General Language Understanding Evaluation) tasks through one simple entry point! The entry point handles model saving and loading, spawns GLUE tasks in parallel across all available GPUs, and delivers a highly efficient evaluation of model performance.

    Example of launching the entrypoint:

    # This runs pre-training followed by fine-tuning.
    # --training_scheme can be pretrain, finetune, or all, depending on the task
    python run_glue_trainer.py -f glue_example.yaml --training_scheme all
    

    Please see our GLUE entrypoint notebook for more information.

  4. 🤖 TPU support (in beta)

    You can now use Composer to train your models on TPUs! Support is in beta and currently limited to single-core TPU training. Try it out, explore optimizations, and share your feedback and feature requests with us so we can make it better for you and the community.

    To use TPUs with Composer, simply specify a tpu device:

    # Set device to `tpu`
    trainer = composer.trainer.Trainer(
        model=model,
        train_dataloader=train_dataloader,
        max_duration=train_epochs,
        device='tpu')
    
    # Run fit
    trainer.fit()
    

    Please see our Training with TPUs notebook for more information.
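
    Composer's TPU path runs on PyTorch/XLA, so torch_xla must be installed (it ships preinstalled on Cloud TPU VMs). A quick sanity check that a TPU core is visible, using the standard torch_xla API:

    import torch_xla.core.xla_model as xm
    
    # Raises if no XLA device (e.g. a TPU core) is available
    device = xm.xla_device()
    print(device)  # e.g. xla:0 on a single TPU core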

  5. 🍎 Apple Silicon support (beta)

    Leverage Apple Silicon chips to train your models with Composer by providing the device='mps' argument:

    trainer = Trainer(
        ...,
        device='mps'
    )
    

    We use the latest PyTorch MPS backend to execute training. This requires torch ≥ 1.12 and macOS 12.3+.
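
    Because MPS support depends on both the PyTorch build and the OS version, it can be worth verifying availability before training; both checks below are standard PyTorch 1.12 APIs:

    import torch
    
    # True only if this PyTorch build was compiled with MPS support
    print(torch.backends.mps.is_built())
    
    # True only on macOS 12.3+ with an MPS-capable device
    print(torch.backends.mps.is_available())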

    For more information on training with Apple Silicon, see the PyTorch 1.12 blog and our API Reference for Composer-specific details.

  6. 🚧 Contrib repository

    Got a new method idea, or published a paper and want those methods to be easily accessible? We’ve created the mcontrib repository, with a lightweight process to contribute new algorithms. We’re happy to work directly with you to benchmark these methods and eventually “promote” them to Composer for use by end customers.

    Please check out the README for details on how to contribute a new algorithm. For more details on how to write speed-up methods, see our notebook on custom speed-up methods.

Additional API Changes

  1. 🔢 Passes Module

    The order in which algorithms are run matters significantly during composition. With this release, we have refactored algorithm passes into their own passes module. Users can now register custom passes (for custom algorithms) with the Engine, as sketched below. Please see #1377 for more information.
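
    A minimal sketch of what registration could look like; the register_pass entry point and the pass signature are our assumptions from #1377, and the pass body is a deliberate no-op:

    # An algorithm pass receives the algorithms scheduled for an event and
    # returns them, possibly reordered or filtered.
    def my_pass(algorithms, event):
        return algorithms  # no-op: keep the order unchanged
    
    # Hypothetical registration on a constructed trainer's engine
    trainer.engine.register_pass(my_pass)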

  2. 🗄️ Default Checkpoint Extension

    The CheckpointSaver now defaults to the *.pt extension for checkpoint filenames. Please see #1370 for more information.
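
    For example, constructing the callback without overriding the filename now yields checkpoints ending in .pt (a minimal sketch; the folder name here is just an example):

    from composer.callbacks import CheckpointSaver
    
    # With no explicit filename argument, saved checkpoints end in .pt
    saver = CheckpointSaver(folder='checkpoints')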

  3. 👁️ Models Refactor

    Most vision models (ResNet, MNIST, ViT, EfficientNet) have been refactored from classes into factory functions, e.g. ComposerResNet -> composer_resnet.

    # before
    from composer.models import ComposerResNet
    model = ComposerResNet(...)
    
    # after
    from composer.models import composer_resnet
    model = composer_resnet(...)
    

    The same refactor has been applied to the NLP models, e.g. BERTModel -> create_bert_mlm and create_bert_classification, as shown below.
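
    Mirroring the vision example above (constructor arguments elided):

    # before
    from composer.models import BERTModel
    model = BERTModel(...)
    
    # after
    from composer.models import create_bert_mlm
    model = create_bert_mlm()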

    See #1227 (vision) and #1130 (NLP) for more details.

  4. ➕ Misc API Changes

    • BreakEpochException has been removed.
    • state.is_model_deepspeed has been moved to composer.utils.is_model_deepspeed.
    • A helper function, monitored_barrier, has been added to Composer's distributed utilities.

What's Changed

Full Changelog: https://github.com/mosaicml/composer/compare/v0.8.2...v0.9.0
