bump version to 0.3.0 (#751)
geniuspatrick authored Jan 17, 2024
1 parent 4cdf60c commit f9478c6
Showing 4 changed files with 97 additions and 17 deletions.
48 changes: 32 additions & 16 deletions README.md
@@ -295,22 +295,38 @@ Please see [configs](./configs) for the details about model performance and pretrained weights
## What is New
- 2023/6/16
1. New version `0.2.2` is released! We upgraded to support `MindSpore` v2.0 while maintaining compatibility with v1.8
2. New models:
- [ConvNextV2](configs/convnextv2)
- mini of [CoAT](configs/coat)
- 1.3 of [MnasNet](configs/mnasnet)
- AMP(O3) version of [ShuffleNetV2](configs/shufflenetv2)
3. New features:
- Gradient Accumulation
- DynamicLossScale for customized [TrainStep](mindcv/utils/train_step.py)
- OneCycleLR and CyclicLR learning rate scheduler
- Refactored Logging
- Pyramid Feature Extraction
4. Bug fixes:
- Serving deployment tutorial (mobilenet_v3 doesn't work on MindSpore 1.8 with the Ascend backend)
- Some broken links on our documentation website.
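The OneCycleLR scheduler added above can be pictured with a small standalone sketch: warm up to a peak learning rate, then anneal back down over one cycle. This is an illustrative sketch only, not MindCV's actual implementation; the function and parameter names are hypothetical:

```python
import math

def one_cycle_lr(step, total_steps, max_lr,
                 pct_start=0.3, div_factor=25.0, final_div_factor=1e4):
    """Illustrative one-cycle schedule: cosine warmup from max_lr/div_factor
    up to max_lr, then cosine annealing down to a tiny final rate."""
    initial_lr = max_lr / div_factor          # lr at step 0
    min_lr = initial_lr / final_div_factor    # lr at the final step
    warmup_steps = int(total_steps * pct_start)
    if step < warmup_steps:
        t = step / max(1, warmup_steps)
        return initial_lr + (max_lr - initial_lr) * (1 - math.cos(math.pi * t)) / 2
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + (max_lr - min_lr) * (1 + math.cos(math.pi * t)) / 2

# The schedule rises to max_lr at the end of warmup, then decays.
lrs = [one_cycle_lr(s, 100, max_lr=0.1) for s in range(100)]
```

The peak lands at `total_steps * pct_start`; CyclicLR differs in that it repeats this up/down pattern over multiple cycles instead of running it once.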
- 2024/1/17
Release `0.3.0` is published. We will drop support for MindSpore 1.x in a future release.
1. New models:
- Y-16GF of [RegNet](configs/regnet)
- [SwinTransformerV2](configs/swintransformerv2)
- [VOLO](configs/volo)
- [CMT](configs/cmt)
- [HaloNet](configs/halonet)
- [SSD](examples/det/ssd)
- [DeepLabV3](examples/seg/deeplabv3)
- [CLIP](examples/clip) & [OpenCLIP](examples/open_clip)
2. Features:
- AsymmetricLoss & JSDCrossEntropy
- Augmentations Split
- Customized AMP
3. Bug fixes:
- Fixed an error raised when passing `num_classes` while creating a pretrained model, caused by the classifier weights not being fully deleted.
4. Refactoring:
- Many model names have been changed for better understanding.
- Refactored the `VisionTransformer` [script](mindcv/models/vit.py).
- Refactored the mixed-mode (PyNative+jit) training [script](train_with_func.py).
5. Documentation:
- A guide on how to extract multi-scale features from a backbone.
- A guide on how to fine-tune a pretrained model on a custom dataset.
6. BREAKING CHANGES:
- We are going to drop support for MindSpore 1.x, as it has reached its end of life (EOL).
- The configuration `filter_bias_and_bn` will be removed and renamed to `weight_decay_filter`, due to a long-standing misunderstanding of the MindSpore optimizer. We will migrate the existing training recipes, but the signature change of the `create_optimizer` function is incompatible, and the old training recipes will also be incompatible. See [PR/752](https://github.com/mindspore-lab/mindcv/pull/752) for details.
See [RELEASE](RELEASE.md) for detailed history.
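The `filter_bias_and_bn` to `weight_decay_filter` change above concerns which parameters receive weight decay during optimization. A minimal, framework-agnostic sketch of the underlying idea, assuming hypothetical function and parameter names (this is not MindCV's actual `create_optimizer` API):

```python
def split_weight_decay_params(named_params, skip_keywords=("bias", "gamma", "beta")):
    """Partition parameters into a group that receives weight decay and one
    that does not. Bias and normalization parameters (BatchNorm gamma/beta)
    are conventionally excluded from decay."""
    decay, no_decay = [], []
    for name, param in named_params:
        # Route any parameter whose name matches a skip keyword to no_decay.
        (no_decay if any(kw in name for kw in skip_keywords) else decay).append(param)
    return decay, no_decay

named = [("conv1.weight", "w0"), ("conv1.bias", "b0"),
         ("bn1.gamma", "g0"), ("bn1.beta", "bt0"), ("fc.weight", "w1")]
decay, no_decay = split_weight_decay_params(named)
# decay -> ["w0", "w1"]; no_decay -> ["b0", "g0", "bt0"]
```

The two groups can then be handed to an optimizer with different `weight_decay` settings, which is the behavior the renamed configuration key controls.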
31 changes: 31 additions & 0 deletions README_CN.md
@@ -298,6 +298,37 @@ python train.py --model=resnet50 --dataset=cifar10 \

## What is New

- 2024/1/17

New version `0.3.0` is released. We will drop support for MindSpore 1.x in a future release.

1. New models:
- Y-16GF of [RegNet](configs/regnet)
- [SwinTransformerV2](configs/swintransformerv2)
- [VOLO](configs/volo)
- [CMT](configs/cmt)
- [HaloNet](configs/halonet)
- [SSD](examples/det/ssd)
- [DeepLabV3](examples/seg/deeplabv3)
- [CLIP](examples/clip) & [OpenCLIP](examples/open_clip)
2. Features:
- AsymmetricLoss and JSDCrossEntropy loss functions
- Augmentations Split
- Customized AMP
3. Bug fixes:
- Fixed an error raised when passing `num_classes` while creating a pretrained model, caused by the classifier parameters not being fully removed.
4. Refactoring:
- Many model names have been changed for better understanding.
- The `VisionTransformer` model definition [script](mindcv/models/vit.py).
- The mixed-mode (PyNative+jit) training [script](train_with_func.py).
5. Documentation:
- A tutorial on how to extract multi-scale features from a backbone.
- A tutorial on how to fine-tune a pretrained model on a custom dataset.
6. BREAKING CHANGES:
- We will drop support for MindSpore 1.x in a future release of this minor version.
- The configuration `filter_bias_and_bn` will be removed and renamed to `weight_decay_filter`.
We will migrate the existing training recipes, but the signature change of `create_optimizer` is incompatible, and unmigrated old training recipes will also become incompatible. See [PR/752](https://github.com/mindspore-lab/mindcv/pull/752) for details.

- 2023/6/16
1. New version `0.2.2` is released! We upgraded `MindSpore` to v2.0 while maintaining compatibility with v1.8
2. New models:
33 changes: 33 additions & 0 deletions RELEASE.md
@@ -1,5 +1,38 @@
# Release Note

## 0.3.0 (2024/1/17)

Release `0.3.0` is published. We will drop support for MindSpore 1.x in a future release.

1. New models:
- Y-16GF of [RegNet](configs/regnet)
- [SwinTransformerV2](configs/swintransformerv2)
- [VOLO](configs/volo)
- [CMT](configs/cmt)
- [HaloNet](configs/halonet)
- [SSD](examples/det/ssd)
- [DeepLabV3](examples/seg/deeplabv3)
- [CLIP](examples/clip) & [OpenCLIP](examples/open_clip)
2. Features:
- AsymmetricLoss & JSDCrossEntropy
- Augmentations Split
- Customized AMP
3. Bug fixes:
- Fixed an error raised when passing `num_classes` while creating a pretrained model, caused by the classifier weights not being fully deleted.
4. Refactoring:
- Many model names have been changed for better understanding.
- Refactored the `VisionTransformer` [script](mindcv/models/vit.py).
- Refactored the mixed-mode (PyNative+jit) training [script](train_with_func.py).
5. Documentation:
- A guide on how to extract multi-scale features from a backbone.
- A guide on how to fine-tune a pretrained model on a custom dataset.
6. BREAKING CHANGES:
- We are going to drop support for MindSpore 1.x, as it has reached its end of life (EOL).
- The configuration `filter_bias_and_bn` will be removed and renamed to `weight_decay_filter`, due to a long-standing misunderstanding of the MindSpore optimizer. We will migrate the existing training recipes, but the signature change of the `create_optimizer` function is incompatible, and the old training recipes will also be incompatible. See [PR/752](https://github.com/mindspore-lab/mindcv/pull/752) for details.
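The `num_classes` bug fix in item 3 can be pictured with a small sketch: when the requested class count differs from the pretrained one, every classifier entry (weight and bias) must be dropped from the checkpoint before loading, or shape-mismatched tensors clash with the freshly initialized head. The function and key names below are illustrative, not MindCV's internals:

```python
def adapt_checkpoint(state_dict, requested_classes, pretrained_classes,
                     classifier_prefix="classifier."):
    """Drop all classifier entries when the class count changes, so the
    newly initialized head is not overwritten by mismatched tensors."""
    if requested_classes == pretrained_classes:
        return dict(state_dict)
    # Remove every key under the classifier prefix, weights and biases alike.
    return {k: v for k, v in state_dict.items()
            if not k.startswith(classifier_prefix)}

ckpt = {"backbone.conv.weight": "w", "classifier.weight": "cw", "classifier.bias": "cb"}
adapted = adapt_checkpoint(ckpt, requested_classes=10, pretrained_classes=1000)
# adapted keeps only "backbone.conv.weight"
```

The bug fixed in 0.3.0 corresponds to the filtering step removing only some of the classifier entries, leaving stale weights behind.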

## 0.2.2 (2023/6/16)

1. New version `0.2.2` is released! We upgraded to support `MindSpore` v2.0 while maintaining compatibility with v1.8
2 changes: 1 addition & 1 deletion mindcv/version.py
@@ -1,2 +1,2 @@
"""version init"""
__version__ = "0.2.2"
__version__ = "0.3.0"
