v0.1.11rc4
Release date: 2022-11-23 17:26:45
What's Changed
Workflow
- [workflow] fixed the python and cpu arch mismatch (#2010) by Frank Lee
- [workflow] fixed the typo in condarc (#2006) by Frank Lee
- [workflow] added conda cache and fixed no-compilation bug in release (#2005) by Frank Lee
Gemini
- [Gemini] add an inline_op_module to common test models and polish unit tests. (#2004) by Jiarui Fang
- [Gemini] open grad checkpoint when model building (#1984) by Jiarui Fang
- [Gemini] add bert for MemtracerWrapper unit tests (#1982) by Jiarui Fang
- [Gemini] MemtracerWrapper unittests (#1981) by Jiarui Fang
- [Gemini] memory trace hook (#1978) by Jiarui Fang
- [Gemini] independent runtime tracer (#1974) by Jiarui Fang
- [Gemini] ZeROHookV2 -> GeminiZeROHook (#1972) by Jiarui Fang
- [Gemini] clean up unused MemTraceOp (#1970) by Jiarui Fang
- [Gemini] polish memstats collector (#1962) by Jiarui Fang
- [Gemini] add GeminiAdamOptimizer (#1960) by Jiarui Fang
Autoparallel
- [autoparallel] Add metainfo support for F.linear (#1987) by Boyuan Yao
- [autoparallel] use pytree map style to process data (#1989) by YuliangLiu0306
- [autoparallel] adapt handlers with attention block (#1990) by YuliangLiu0306
- [autoparallel] support more flexible data type (#1967) by YuliangLiu0306
- [autoparallel] add pooling metainfo (#1968) by Boyuan Yao
- [autoparallel] support distributed dataloader option (#1906) by YuliangLiu0306
- [autoparallel] Add alpha beta (#1973) by Genghan Zhang
- [autoparallel] add torch.nn.ReLU metainfo (#1868) by Boyuan Yao
- [autoparallel] support addmm in tracer and solver (#1961) by YuliangLiu0306
- [autoparallel] remove redundant comm node (#1893) by YuliangLiu0306
Fx
- [fx] add more meta_registry for MetaTensor execution. (#2000) by Super Daniel
Hotfix
- [hotfix] make Gemini work for conv DNN (#1998) by Jiarui Fang
Example
- [example] add diffusion inference (#1986) by Fazzie-Maqianli
- [example] enhance GPT demo (#1959) by Jiarui Fang
- [example] add vit (#1942) by Jiarui Fang
Kernel
- [kernel] move all symlinks of kernel to colossalai._C (#1971) by ver217
Polish
- [polish] remove useless file _mem_tracer_hook.py (#1963) by Jiarui Fang
Zero
- [zero] fix memory leak for zero2 (#1955) by HELSON
Colotensor
- [ColoTensor] reconfig ColoInitContext, decouple default_pg and default_dist_spec. (#1953) by Jiarui Fang
- [ColoTensor] ColoInitContext initialize parameters in shard mode. (#1937) by Jiarui Fang
Tutorial
- [tutorial] polish all README (#1946) by binmakeswell
- [tutorial] added missing dummy dataloader (#1944) by Frank Lee
- [tutorial] fixed pipeline bug for sequence parallel (#1943) by Frank Lee
Tensorparallel
- [tensorparallel] fixed tp layers (#1938) by アマデウス
Sc demo
- [sc demo] add requirements to spmd README (#1941) by YuliangLiu0306
Sc
- [SC] remove redundant hands on (#1939) by Boyuan Yao
Full Changelog: https://github.com/hpcaitech/ColossalAI/compare/v0.1.11rc3...v0.1.11rc4