v1.3.2
Release date: 2023-03-02 23:06:08
New Model List and Quick Access
This minor version adds a total of six new models, two of which support finetuning.
No. | Model Name & Link | Finetune |
---|---|---|
1 | ControlNet controllable image generation | |
2 | 兰丁 (Landing) cervical cell AI-assisted diagnosis model | |
3 | 读光-text detection-DB line detection model-Chinese & English-general domain | |
4 | SOND speaker diarization-Chinese-alimeeting-16k-offline-pytorch | |
5 | NeRF fast 3D reconstruction model | √ |
6 | DCT-Net portrait cartoonization | √ |
Features
- GPT-3 finetune has been improved to support DDP + tensor parallel, and the finetune workflow now chains directly into the inference workflow
- Checkpoint saving logic has been optimized so that both periodically saved checkpoints and best-model checkpoints can be loaded directly for inference
- The hooks scheme has been refactored to decouple individual functional hooks and support interaction between hooks
- Supports ImagePaintbyExamplePipeline demo service
- Supports multi-machine data-parallel and tensor-parallel finetuning for the cartoonization task
- Supports various audio types
- Supports Petr3D CPU inference with compatibility for the latest version of mmcv
- Updates deberta v2 preprocessor
- Supports initialization of downstream NLP task models with only backbone pre-training weights loaded
- Updates librosa.resample() arguments for compatibility with the latest librosa version
- Adds downstream toolbox call tracking function
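The backbone-only initialization mentioned above amounts to filtering a pretrained state dict down to backbone weights before loading, leaving the downstream task head randomly initialized. The sketch below illustrates the idea with plain dictionaries; the `backbone.` prefix and the helper name are assumptions for illustration, not the actual ModelScope API.

```python
def filter_backbone_weights(state_dict, prefix="backbone."):
    """Keep only backbone weights, stripping the prefix so they can be
    loaded into a freshly initialized downstream-task model."""
    return {
        key[len(prefix):]: value
        for key, value in state_dict.items()
        if key.startswith(prefix)
    }


# Toy checkpoint: backbone weights plus a task head we want to skip.
checkpoint = {
    "backbone.embed.weight": [0.1, 0.2],
    "backbone.layer0.weight": [0.3],
    "classifier.weight": [0.9],  # downstream head, left randomly initialized
}

backbone_only = filter_backbone_weights(checkpoint)
print(sorted(backbone_only))  # ['embed.weight', 'layer0.weight']
```

The filtered dict can then be passed to the model's normal weight-loading routine, with the task head excluded from the load.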
Breaking Changes
- Model parameters and training state are now saved separately; checkpoints trained with earlier versions must be converted before resuming training
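The conversion described above can be pictured as splitting one combined checkpoint into a model-parameters part and a training-state part. This is a minimal sketch with plain dictionaries; the key names (`state_dict`, `optimizer`, `epoch`) are hypothetical, not the actual ModelScope checkpoint format.

```python
def split_legacy_checkpoint(legacy):
    """Split a combined legacy checkpoint into the two parts the new
    scheme saves separately: model parameters vs. training state."""
    model_part = {"state_dict": legacy["state_dict"]}
    train_part = {
        key: value for key, value in legacy.items() if key != "state_dict"
    }
    return model_part, train_part


legacy_ckpt = {
    "state_dict": {"linear.weight": [1.0, 2.0]},  # model parameters
    "optimizer": {"lr": 1e-3},                    # training state
    "epoch": 7,
}

model_ckpt, trainer_ckpt = split_legacy_checkpoint(legacy_ckpt)
print(sorted(trainer_ckpt))  # ['epoch', 'optimizer']
```

Each part would then be written to its own file, so the model file alone is enough for inference while the training-state file is only needed when resuming.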
Bug Fixes
- Fixes asr vad/lm/punc input processing
- Fixes gpt moe finetune checkpoint path error
- Fixes the invalid lm_train_conf argument
- Fixes ci test errors when deleting existing files
- Fixes OCR recognition bugs
- Removes image resolution restrictions in preprocessing stage
- Fixes output wav file being 32-bit float instead of expected 16-bit int
- Sets num_workers=0 to prevent creating sub-processes in the demo service
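The wav-format fix above boils down to converting float samples in [-1.0, 1.0] to 16-bit signed integers before writing. A stdlib-only sketch of that conversion follows; the scaling and clipping shown are a common convention, not necessarily the exact code in the release.

```python
import array
import io
import wave


def write_wav_int16(samples, sample_rate=16000):
    """Write float samples in [-1.0, 1.0] as mono 16-bit PCM, returning bytes."""
    pcm = array.array("h")  # signed 16-bit integers
    for sample in samples:
        clipped = max(-1.0, min(1.0, sample))  # guard against out-of-range floats
        pcm.append(int(clipped * 32767))
    buffer = io.BytesIO()
    with wave.open(buffer, "wb") as wav_file:
        wav_file.setnchannels(1)
        wav_file.setsampwidth(2)  # 2 bytes per sample -> 16-bit int, not 32-bit float
        wav_file.setframerate(sample_rate)
        wav_file.writeframes(pcm.tobytes())
    return buffer.getvalue()


data = write_wav_int16([0.0, 0.5, -0.5, 1.0])
```

Writing the float array directly (4 bytes per sample) is what produced the unexpected 32-bit float files; declaring a 2-byte sample width and converting to integers yields the expected 16-bit PCM.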