2.4.0

Lightning-AI/pytorch-lightning

Release date: 2024-08-07 17:44:30

Lightning AI :zap: is excited to announce the release of Lightning 2.4. This is mainly a compatibility upgrade for PyTorch 2.4 and Python 3.12, with a sprinkle of a few features and bug fixes.

Did you know? The Lightning philosophy extends beyond a boilerplate-free deep learning framework: We've been hard at work bringing you Lightning Studio. Code together, prototype, train, deploy, host AI web apps. All from your browser, with zero setup.

Changes

PyTorch Lightning

Added
  • Made saving non-distributed checkpoints fully atomic (#20011)
  • Added dump_stats flag to AdvancedProfiler (#19703)
  • Added a flag verbose to the seed_everything() function (#20108)
  • Added support for PyTorch 2.4 (#20010)
  • Added support for Python 3.12 (#20078)
  • The TQDMProgressBar now provides an option to retain prior training epoch bars (#19578)
  • Added the count of modules in train and eval mode to the printed ModelSummary table (#20159)
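The fully atomic checkpoint saving mentioned above typically relies on the write-to-a-temporary-file-then-rename pattern, so a crash mid-write can never leave a corrupt checkpoint behind. A minimal, framework-free sketch of that general pattern (an illustration only, not Lightning's actual implementation from #20011):

```python
import os
import tempfile

def atomic_save(data: bytes, path: str) -> None:
    # Write to a temp file in the same directory, then atomically
    # replace the target. If the process crashes mid-write, the old
    # checkpoint (if any) remains intact.
    dirpath = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dirpath)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes hit disk before the rename
        os.replace(tmp_path, path)  # atomic on both POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)
        raise
```

The key detail is creating the temp file in the *same* directory as the target: `os.replace` is only atomic when source and destination live on the same filesystem.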
Changed
  • Triggering KeyboardInterrupt (Ctrl+C) during .fit(), .evaluate(), .test() or .predict() now terminates all processes launched by the Trainer and exits the program (#19976)
  • Changed the implementation of how seeds are chosen for dataloader workers when using seed_everything(..., workers=True) (#20055)
  • NumPy is no longer a required dependency (#20090)
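For the worker-seeding change above, the goal is that each dataloader worker receives a distinct but reproducible seed derived from the base seed. The following is a generic, hypothetical illustration of such a derivation (hash-based, for clarity; it is NOT Lightning's actual algorithm, for which see #20055):

```python
import hashlib

def derive_worker_seed(base_seed: int, worker_id: int) -> int:
    # Hash (base_seed, worker_id) so every worker gets a distinct,
    # deterministic seed: the same base seed always yields the same
    # per-worker seeds across runs.
    digest = hashlib.sha256(f"{base_seed}:{worker_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big")
```

Any scheme with these two properties (distinct per worker, deterministic in the base seed) gives reproducible augmentation pipelines when `seed_everything(..., workers=True)` is used.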
Removed
  • Removed support for PyTorch 2.1 (#20009)
  • Removed support for Python 3.8 (#20071)
Fixed
  • Avoid LightningCLI saving hyperparameters with class_path and init_args since this would be a breaking change (#20068)
  • Fixed an issue that would cause too many printouts of the seed info when using seed_everything() (#20108)
  • Fixed _LoggerConnector's _ResultMetric to move all registered keys to the device of the logged value if needed (#19814)
  • Fixed _optimizer_to_device logic for special 'step' key in optimizer state causing performance regression (#20019)
  • Fixed parameter counts in ModelSummary when model has distributed parameters (DTensor) (#20163)
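The `_optimizer_to_device` fix above concerns the scalar `'step'` counter that optimizers like Adam keep per parameter: moving it to the GPU along with the tensor state forces a device synchronization on every optimizer step. A simplified, self-contained sketch of the idea, using a stand-in tensor class so it runs without PyTorch (an illustration of the principle, not Lightning's actual code from #20019):

```python
class FakeTensor:
    """Stand-in for torch.Tensor so this sketch runs without PyTorch."""
    def __init__(self, device="cpu"):
        self.device = device
    def to(self, device):
        return FakeTensor(device)

def optimizer_state_to_device(state, device):
    # Move tensor-valued entries to `device`, but leave the scalar
    # 'step' counter where it is: transferring it to the accelerator
    # is what caused the performance regression described above.
    moved = {}
    for key, value in state.items():
        if key != "step" and hasattr(value, "to"):
            moved[key] = value.to(device)
        else:
            moved[key] = value
    return moved
```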

Lightning Fabric

Added
  • Made saving non-distributed checkpoints fully atomic (#20011)
  • Added a flag verbose to the seed_everything() function (#20108)
  • Added support for PyTorch 2.4 (#20028)
  • Added support for Python 3.12 (#20078)
Changed
  • Changed the implementation of how seeds are chosen for dataloader workers when using seed_everything(..., workers=True) (#20055)
  • NumPy is no longer a required dependency (#20090)
Removed
  • Removed support for PyTorch 2.1 (#20009)
  • Removed support for Python 3.8 (#20071)
Fixed
  • Fixed an attribute error when loading a checkpoint into a quantized model using the _lazy_load() function (#20121)
  • Fixed _optimizer_to_device logic for special 'step' key in optimizer state causing performance regression (#20019)

Full commit list: 2.3.0 -> 2.4.0

Contributors

We thank all our contributors who submitted pull requests for features, bug fixes and documentation updates.

New Contributors

Did you know?

Chuck Norris can solve NP-hard problems in polynomial time. In fact, any problem is easy when Chuck Norris solves it.

Downloads

1. lightning-2.4.0-py3-none-any.whl 791.96KB

2. lightning-2.4.0.tar.gz 606.09KB

3. lightning-fabric-2.4.0.tar.gz 189.64KB

4. lightning_fabric-2.4.0-py3-none-any.whl 244.16KB

5. pytorch-lightning-2.4.0.tar.gz 610.66KB

6. pytorch_lightning-2.4.0-py3-none-any.whl 796.05KB