v0.2.0
Release date: 2023-01-03 20:29:58
Latest hpcaitech/ColossalAI release: v0.4.4 (2024-09-19 10:53:35)
What's Changed
Version
- [version] 0.1.14 -> 0.2.0 (#2286) by Jiarui Fang
Examples
- [examples] using args and combining two versions for PaLM (#2284) by ZijianYY
- [examples] replace einsum with matmul (#2210) by ZijianYY
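The einsum-to-matmul replacement (#2210) follows a common optimization pattern: a batched `einsum` contraction that is really just a matrix multiply can be rewritten as `matmul`, which dispatches directly to optimized BLAS/cuBLAS kernels. A minimal NumPy sketch of the equivalence (the shapes here are illustrative, not the ones from the PR):

```python
import numpy as np

# Two batched 3-D tensors: (batch, i, j) x (batch, j, k) -> (batch, i, k)
rng = np.random.default_rng(0)
a = rng.standard_normal((4, 3, 5))
b = rng.standard_normal((4, 5, 2))

# einsum spelling of a batched matrix multiply
out_einsum = np.einsum("bij,bjk->bik", a, b)

# the equivalent matmul call, usually faster because it hits BLAS directly
out_matmul = a @ b

assert np.allclose(out_einsum, out_matmul)
```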
Doc
- [doc] add feature diffusion v2, bloom, auto-parallel (#2282) by binmakeswell
- [doc] updated the stable diffusion on docker usage (#2244) by Frank Lee
Zero
- [zero] polish low level zero optimizer (#2275) by HELSON
- [zero] fix error for BEiT models (#2169) by HELSON
Example
- [example] add benchmark (#2276) by Ziyue Jiang
- [example] fix save_load bug for dreambooth (#2280) by BlueRum
- [example] GPT polish readme (#2274) by Jiarui Fang
- [example] fix gpt example with 0.1.10 (#2265) by HELSON
- [example] clear diffuser image (#2262) by Fazzie-Maqianli
- [example] diffusion install from docker (#2239) by Jiarui Fang
- [example] fix benchmark.sh for gpt example (#2229) by HELSON
- [example] make palm + GeminiDPP work (#2227) by Jiarui Fang
- [example] Palm adding gemini, still has bugs (#2221) by ZijianYY
- [example] update gpt example (#2225) by HELSON
- [example] add benchmark.sh for gpt (#2226) by Jiarui Fang
- [example] update gpt benchmark (#2219) by HELSON
- [example] update GPT example benchmark results (#2212) by Jiarui Fang
- [example] update gpt example for larger model scale (#2211) by Jiarui Fang
- [example] update gpt readme with performance (#2206) by Jiarui Fang
- [example] polish doc (#2201) by ziyuhuang123
- [example] Change some training settings for diffusion (#2195) by BlueRum
- [example] support Dreambooth (#2188) by Fazzie-Maqianli
- [example] gpt demo more accurate tflops (#2178) by Jiarui Fang
- [example] add palm pytorch version (#2172) by Jiarui Fang
- [example] update vit readme (#2155) by Jiarui Fang
- [example] add zero1, zero2 example in GPT examples (#2146) by HELSON
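Several of the GPT example entries above report achieved TFLOPS (#2178, #2212). A common back-of-the-envelope estimate for a decoder-only transformer is model FLOPs ≈ 6 × parameters × tokens per step (2 for the forward pass plus 4 for the backward pass). The sketch below assumes that rule of thumb and hypothetical numbers; it is not the exact formula used in the example scripts:

```python
def estimate_tflops(num_params: float, tokens_per_step: float, step_time_s: float) -> float:
    """Rough training throughput: ~6 FLOPs per parameter per token
    (forward + backward), divided by wall-clock step time."""
    flops = 6.0 * num_params * tokens_per_step
    return flops / step_time_s / 1e12

# Hypothetical run: a 10B-parameter model, 8 sequences of 2048 tokens,
# taking 1.5 s per optimizer step.
print(round(estimate_tflops(10e9, 8 * 2048, 1.5), 2))  # -> 655.36
```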
Hotfix
- [hotfix] fix fp16 optimizer bug (#2273) by YuliangLiu0306
- [hotfix] fix error for torch 2.0 (#2243) by xcnick
- [hotfix] Fixing the bug related to ipv6 support by Tongping Liu
- [hotfix] correct cpu_optim runtime compilation (#2197) by Jiarui Fang
- [hotfix] add kwargs for colo_addmm (#2171) by Tongping Liu
- [hotfix] Jit type hint #2161 (#2164) by アマデウス
- [hotfix] fix auto policy of test_sharded_optim_v2 (#2157) by Jiarui Fang
- [hotfix] fix aten default bug (#2158) by YuliangLiu0306
Autoparallel
- [autoparallel] fix spelling error (#2270) by YuliangLiu0306
- [autoparallel] gpt2 autoparallel examples (#2267) by YuliangLiu0306
- [autoparallel] patch torch.flatten metainfo for autoparallel (#2247) by Boyuan Yao
- [autoparallel] autoparallel initialize (#2238) by YuliangLiu0306
- [autoparallel] fix construct meta info. (#2245) by Super Daniel
- [autoparallel] record parameter attribute in colotracer (#2217) by YuliangLiu0306
- [autoparallel] Attach input, buffer and output tensor to MetaInfo class (#2162) by Boyuan Yao
- [autoparallel] new metainfoprop based on metainfo class (#2179) by Boyuan Yao
- [autoparallel] update getitem handler (#2207) by YuliangLiu0306
- [autoparallel] update_getattr_handler (#2193) by YuliangLiu0306
- [autoparallel] add gpt2 performance test code (#2194) by YuliangLiu0306
- [autoparallel] integrate_gpt_related_tests (#2134) by YuliangLiu0306
- [autoparallel] memory estimation for shape consistency (#2144) by Boyuan Yao
- [autoparallel] use metainfo in handler (#2149) by YuliangLiu0306
Gemini
- [Gemini] fix the convert_to_torch_module bug (#2269) by Jiarui Fang
Pipeline middleware
- [Pipeline Middleware] Reduce comm redundancy by getting accurate output (#2232) by Ziyue Jiang
Builder
- [builder] builder for scaled_upper_triang_masked_softmax (#2234) by Jiarui Fang
- [builder] polish builder with better base class (#2216) by Jiarui Fang
- [builder] raise Error when CUDA_HOME is not set (#2213) by Jiarui Fang
- [builder] multihead attn runtime building (#2203) by Jiarui Fang
- [builder] unified cpu_optim fused_optim inferface (#2190) by Jiarui Fang
- [builder] use runtime builder for fused_optim (#2189) by Jiarui Fang
- [builder] runtime adam and fused_optim builder (#2184) by Jiarui Fang
- [builder] use builder() for cpu adam and fused optim in setup.py (#2187) by Jiarui Fang
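#2213 makes the builder fail fast when `CUDA_HOME` is not set, instead of failing later with an obscure compiler error. A minimal sketch of that kind of guard, assuming a hypothetical helper (the function name and message are illustrative, not ColossalAI's actual code):

```python
import os

def check_cuda_home() -> str:
    """Return CUDA_HOME, raising early with an actionable message if unset."""
    cuda_home = os.environ.get("CUDA_HOME")
    if not cuda_home:
        raise RuntimeError(
            "CUDA_HOME is not set. Point it at your CUDA toolkit, e.g. "
            "export CUDA_HOME=/usr/local/cuda, before building CUDA extensions."
        )
    return cuda_home
```

Failing at configuration time keeps the error close to its cause, which matters when kernels are compiled lazily at runtime as in the other builder changes above.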
Logger
- [logger] hotfix, missing _FORMAT (#2231) by Super Daniel
Diffusion
- [diffusion] update readme (#2214) by HELSON
Testing
- [testing] add beit model for unit testings (#2196) by HELSON
NFC
- [NFC] fix some typos (#2175) by ziyuhuang123
- [NFC] update news link (#2191) by binmakeswell
- [NFC] fix a typo 'stable-diffusion-typo-fine-tune' by Arsmart1
Example
- [example] diffuser, support quant inference for stable diffusion (#2186) by BlueRum
- [example] add vit missing functions (#2154) by Jiarui Fang
Pipeline middleware
- [Pipeline Middleware] Fix deadlock when num_microbatch=num_stage (#2156) by Ziyue Jiang
Full Changelog: https://github.com/hpcaitech/ColossalAI/compare/v0.1.13...v0.2.0