ray-1.2.0
Release date: 2021-02-13 09:42:06
Release v1.2.0 Notes
Highlights
- Ray client is now in beta! Check out more details here: https://docs.ray.io/en/master/ray-client.html
- XGBoost-Ray is now in beta! Check out more details about this project at https://github.com/ray-project/xgboost_ray.
- Check out the Serve migration guide: https://docs.google.com/document/d/1CG4y5WTTc4G_MRQGyjnb_eZ7GK3G9dUX6TNLKLnKRAc/edit
- Ray’s C++ support is now in beta: https://docs.ray.io/en/master/#getting-started-with-ray
- An alpha version of object spilling is now available: https://docs.ray.io/en/master/memory-management.html#object-spilling
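As a rough illustration of the alpha object-spilling feature above, the spilling backend is configured with a small JSON document passed at startup. This is a hedged sketch: the key names (`type`, `params`, `directory_path`, `object_spilling_config`) follow the linked memory-management docs, but the schema is experimental and may change between versions.

```python
import json

# Illustrative object-spilling config: spill objects to a local filesystem
# directory when the object store fills up (schema is alpha; hedged).
spilling_config = json.dumps(
    {"type": "filesystem", "params": {"directory_path": "/tmp/ray_spill"}}
)

# Supplied at startup via the experimental system config (not executed here):
# ray.init(_system_config={"object_spilling_config": spilling_config})
```

Since the feature is alpha, treat this as a starting point and consult the linked docs for the schema that matches your Ray version.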
Ray Autoscaler
🎉 New Features:
- A new autoscaler output format in monitor.log (#12772, #13561)
- Piping autoscaler events to driver logs (#13434)
💫 Enhancements:
- Full support of ray.autoscaler.sdk.request_resources() API (https://docs.ray.io/en/master/cluster/autoscaling.html?highlight=request_resources#ray.autoscaler.sdk.request_resources) .
- Make placement groups bypass max launch limit (#13089)
- [K8s] Retry getting home directory in command runner. (#12925)
- [docker] Pull if image is not present (#13136)
- [Autoscaler] Ensure ubuntu is owner of docker host mount folder (#13579)
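The `ray.autoscaler.sdk.request_resources()` API mentioned above asks the autoscaler to immediately scale the cluster to fit the request, bypassing normal scale-up heuristics. A minimal sketch, assuming the signature from the linked docs (`num_cpus` or `bundles`); the actual calls require a running autoscaling cluster, so they are shown commented out:

```python
# Four identical resource bundles of 2 CPUs and 1 GPU each, in the shape
# request_resources(bundles=...) expects per the linked docs.
bundles = [{"CPU": 2, "GPU": 1}] * 4

# On a live autoscaling cluster (not executed here):
# from ray.autoscaler.sdk import request_resources
# request_resources(num_cpus=32)       # flat request: scale to >= 32 CPUs
# request_resources(bundles=bundles)   # or request shaped bundles
```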
🔨 Fixes:
- Many autoscaler bug fixes (#12952, #12689, #13058, #13671, #13637, #13588, #13505, #13154, #13151, #13138, #13008, #12980, #12918, #12829, #12714, #12661, #13567, #13663, #13623, #13437, #13498, #13472, #13392, #12514, #13325, #13161, #13129, #12987, #13410, #12942, #12868, #12866, #12865, #12098, #12609)
RLlib
🎉 New Features:
- Fast Attention Nets (using the trajectory view API) (#12753).
- Attention Nets: Full PyTorch support (#12029).
- Attention Nets: Support auto-wrapping around default or custom models by specifying “use_attention=True” in the model’s config. This works completely analogously to “use_lstm=True”. (#11698)
- New Offline RL Algorithm: CQL (based on SAC) (#13118).
- MAML: Discrete actions support (added CartPole mass test case).
- Support Atari framestacking via the trajectory view API (#13315).
- Support for D4RL environments/benchmarks (#13550).
- Preliminary work on JAX support (#13077, #13091).
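The `use_attention=True` auto-wrapping above can be sketched as a trainer config fragment. This is illustrative only: the config keys (`framework`, `model`, `use_attention`) follow RLlib's model config conventions, and the commented trainer call is an assumed usage shape, not executed here.

```python
# Hedged sketch of an RLlib config enabling auto-wrapped attention nets,
# analogous to the existing use_lstm=True switch.
config = {
    "framework": "torch",          # attention nets now have full PyTorch support
    "model": {
        "use_attention": True,     # wrap the default or custom model in an attention net
        # "use_lstm": True,        # the analogous LSTM auto-wrapping switch
    },
}

# Passed to a trainer (not executed here; API shape assumed):
# trainer = ppo.PPOTrainer(env="CartPole-v0", config=config)
```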
💫 Enhancements:
- Rollout lengths: Allow unit to be configured as “agent_steps” in multi-agent settings (default: “env_steps”) (#12420).
- TFModelV2: Soft-deprecate register_variables and unify var names wrt TorchModelV2 (#13339, #13363).
📖 Documentation:
- Added documentation on Model building API (#13260, #13261).
- Added documentation for the trajectory view API. (#12718)
- Added documentation for SlateQ (#13266).
- Readme.md documentation for almost all algorithms in rllib/agents (#12943, #13035).
- Type annotations for the “rllib/execution” folder (#12760, #13036).
🔨 Fixes:
- MARWIL and BC: Add grad-clipping config option to stabilize learning (#13455).
- A3C: Solve PyTorch- and TF-eager async race condition between calling model and its value function (#13467).
- Various issue and bug fixes (#12619, #12682, #12704, #12706, #12708, #12765, #12786, #12787, #12793, #12832, #12844, #12846, #12915, #12941, #13039, #13040, #13064, #13083, #13121, #13126, #13237, #13238, #13308, #13332, #13397, #13459, #13553).
🏗 Architecture refactoring:
- The env directory has been cleaned up and is now divided into a core part (rllib/env) with all basic env classes, and rllib/env/wrappers containing third-party wrapper classes (Atari, Unity3D, etc.) (#13082).
Tune
🎉 New Features:
- Ray Tune has updated and improved its integration with MLflow. See this blog post for details (#12840, #13301, #13533)
💫 Enhancements:
- Ray Tune now uses ray.cloudpickle underneath the hood, allowing you to checkpoint large models (>4GB) (#12958).
- Using the 'reuse_actors' flag can now speed up training for general Trainable API usage. (#13549)
- Ray Tune will now automatically buffer results from trainables, allowing you to use an arbitrary reporting frequency on your training functions. (#13236)
- Ray Tune now has a variety of experiment stoppers (#12750)
- Ray Tune now supports an integer loguniform search space distribution (#12994)
- Ray Tune now has initial support for the Ray placement group API. (#13370)
- The Weights and Biases integration (WandbLogger) now also accepts wandb.data_types.Video (#13169)
- The Hyperopt integration (HyperoptSearch) can now directly accept categorical variables instead of indices (#12715)
- Ray Tune now supports experiment checkpointing when using grid search (#13357)
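To make the integer loguniform search space concrete: it samples integers whose logarithms are uniformly distributed, so small values are explored as densely as large ones. Below is a self-contained stand-in for what such a distribution does; the actual Tune API for this space is not reproduced here, so treat the function name as illustrative.

```python
import math
import random

def int_loguniform(lower, upper, rng=random):
    """Sample an integer log-uniformly from [lower, upper].

    Illustrative stand-in for Tune's integer loguniform search space
    (#12994); useful for parameters like batch size or hidden units,
    where orders of magnitude matter more than absolute steps.
    """
    return int(math.exp(rng.uniform(math.log(lower), math.log(upper))))

samples = [int_loguniform(1, 1000) for _ in range(100)]
```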
🔨 Fixes and Updates:
- The Optuna integration was updated to support the 2.4.0 API while maintaining backwards compatibility (#13631)
- All search algorithms now support points_to_evaluate (#12790, #12916)
- The PBT Transformers example was updated and improved (#13174, #13131)
- The scikit-optimize integration was improved (#12970)
- Various bug fixes (#13423, #12785, #13171, #12877, #13255, #13355)
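The `points_to_evaluate` option above seeds a search algorithm with specific configurations to try before it starts sampling on its own. A minimal sketch, with illustrative parameter names and an assumed constructor shape (the searcher call is commented out, not executed):

```python
# Hedged sketch: known-good configurations to evaluate first, e.g. from a
# previous experiment or a published baseline (names are illustrative).
points_to_evaluate = [
    {"lr": 1e-3, "momentum": 0.9},
    {"lr": 1e-4, "momentum": 0.99},
]

# Any Tune search algorithm now accepts these (API shape assumed):
# searcher = HyperOptSearch(metric="loss", mode="min",
#                           points_to_evaluate=points_to_evaluate)
```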
SGD
🔨 Fixes and Updates:
- Fix docstring for as_trainable (#13173)
- Fix process group timeout units (#12477)
- Disable Elastic Training by default when using with Tune (#12927)
Serve
🎉 New Features:
- Ray Serve backends now accept a Starlette request object instead of a Flask request object (#12852). This is a breaking change, so please read the migration guide.
- Ray Serve backends now have the option of returning a Starlette Response object (#12811, #13328). This allows for more customizable responses, including responses with custom status codes.
- [Experimental] The new Ray Serve MLflow plugin makes it easy to deploy your MLflow models on Ray Serve. It comes with a Python API and a command-line interface.
- Using “ImportedBackend”, you can now specify a backend based on a class that is installed in the workers’ Python environment, even if the Python environment of the driver script (the one making the Serve API calls) doesn’t have it installed (#12923).
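The breaking change above boils down to the backend call signature: the argument is now a Starlette request, whose body is read with async accessors such as `await request.json()` rather than Flask's synchronous ones. A minimal sketch, demonstrated locally with a stand-in request object so no web server is needed (the stand-in and its payload are assumptions for the demo):

```python
import asyncio

class EchoBackend:
    """Hedged sketch of a Serve backend under the new Starlette-style API."""
    async def __call__(self, request):
        # Starlette requests expose the body via async methods.
        data = await request.json()
        return {"echo": data}

# Stand-in object mimicking the one async accessor we use above:
class FakeRequest:
    async def json(self):
        return {"name": "ray"}

result = asyncio.run(EchoBackend()(FakeRequest()))
```

See the migration guide linked in the Highlights for the full list of differences from the Flask-based API.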
💫 Enhancements:
- Dependency management using conda no longer requires the driver script to be running in an activated conda environment (#13269).
- Ray ObjectRefs can now be used as arguments to serve_handle.remote(...) (#12592)
- Backends are now shut down gracefully. You can set the graceful timeout in BackendConfig. (#13028)
📖 Documentation:
- A tutorial page has been added for integrating Ray Serve with your existing FastAPI web server or with your existing AIOHTTP web server (#13127).
- Documentation has been added for Ray Serve metrics (#13096).