# [ResNet Description](#contents)
## Description
ResNet (Residual Neural Network) was proposed by Kaiming He and his colleagues at Microsoft Research. By stacking residual units they successfully trained a 152-layer network and won the ILSVRC 2015 classification challenge with a top-5 error rate of 3.57%, while using fewer parameters than VGGNet. Traditional convolutional or fully connected networks lose some information from layer to layer and suffer from vanishing or exploding gradients, which makes very deep networks hard to train. ResNet alleviates these problems: the shortcut connection passes the input directly to the output, so the information is preserved and the network only has to learn the residual between input and output, which simplifies the learning objective. This structure speeds up training considerably and also improves model accuracy, and residual blocks are now reused as building blocks in many other network architectures.
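To make the residual idea concrete, here is a minimal sketch of a residual block written with the MindSpore `nn.Cell` API. It is an illustration only, not the implementation in this repository's src/resnet.py; the layer arrangement and channel sizes are assumptions.

```
# Minimal residual-block sketch (illustration only; the real backbone is in src/resnet.py).
import mindspore.nn as nn

class ResidualBlock(nn.Cell):
    """y = relu(F(x) + x): the block only has to learn the residual F(x)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, stride=1, pad_mode='same')
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, stride=1, pad_mode='same')
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def construct(self, x):
        identity = x                        # shortcut: pass the input through unchanged
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)    # add the shortcut, then activate
```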
This repository provides examples of training ResNet50/ResNet101/SE-ResNet50 with the CIFAR-10 or ImageNet2012 dataset in MindSpore. ResNet50 and ResNet101 follow [paper 1](https://arxiv.org/pdf/1512.03385.pdf) below, while SE-ResNet50 is a variant of ResNet50 that follows [paper 2](https://arxiv.org/abs/1709.01507) and [paper 3](https://arxiv.org/abs/1812.01187) below. Training SE-ResNet50 for only 24 epochs on 8 Ascend 910 devices reaches a top-1 accuracy of 75.9%. (Training ResNet101 or SE-ResNet50 with the CIFAR-10 dataset is not supported yet.)
## Paper

1. [paper](https://arxiv.org/pdf/1512.03385.pdf): Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. "Deep Residual Learning for Image Recognition"

2. [paper](https://arxiv.org/abs/1709.01507): Jie Hu, Li Shen, Samuel Albanie, Gang Sun, Enhua Wu. "Squeeze-and-Excitation Networks"

3. [paper](https://arxiv.org/abs/1812.01187): Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li. "Bag of Tricks for Image Classification with Convolutional Neural Networks"
# [Model Architecture](#contents)
The overall network architecture of ResNet is described in [paper 1](https://arxiv.org/pdf/1512.03385.pdf).
# [Dataset](#contents)

- Dataset size: 224*224 colorful images in 1000 classes
    - Train: 1,281,167 images
    - Test: 50,000 images
- Data format: JPEG
    - Note: Data will be processed in dataset.py (see the sketch below the directory structure)
- Download the dataset; the directory structure is as follows:
```
└─dataset
    ├─ilsvrc                 # train dataset
    └─validation_preprocess  # evaluate dataset
```
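As noted above, data preprocessing lives in dataset.py. The sketch below illustrates what such an ImageNet2012 training pipeline typically looks like in MindSpore. It assumes a MindSpore version that provides `ImageFolderDataset` and the `c_transforms` vision operators; the crop size, normalization constants, and worker counts are illustrative, not necessarily those used by src/dataset.py.

```
# Hypothetical ImageNet2012 training pipeline sketch; the real one is in src/dataset.py.
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as C
import mindspore.dataset.transforms.c_transforms as C2
import mindspore.common.dtype as mstype

def create_train_dataset(dataset_path, batch_size=256):
    data_set = ds.ImageFolderDataset(dataset_path, num_parallel_workers=8, shuffle=True)
    # ImageNet mean/std scaled to the 0-255 pixel range
    mean = [0.485 * 255, 0.456 * 255, 0.406 * 255]
    std = [0.229 * 255, 0.224 * 255, 0.225 * 255]
    trans = [
        C.RandomCropDecodeResize(224),    # decode JPEG, random crop, resize to 224x224
        C.RandomHorizontalFlip(prob=0.5),
        C.Normalize(mean=mean, std=std),
        C.HWC2CHW(),                      # channels-last to channels-first
    ]
    data_set = data_set.map(operations=trans, input_columns="image", num_parallel_workers=8)
    data_set = data_set.map(operations=C2.TypeCast(mstype.int32), input_columns="label")
    return data_set.batch(batch_size, drop_remainder=True)
```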
# [Features](#contents)
## Mixed Precision
The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data types, and maintains the network precision achieved by the single-precision training at the same time. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the MindSpore backend will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and then searching for "reduce precision".
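In MindSpore, mixed precision is commonly switched on through the `amp_level` argument of the high-level `Model` wrapper. The sketch below shows that setup with a trivial stand-in network; the network, loss, and optimizer here are placeholders rather than the ones this repository's train.py builds.

```
# Sketch: enable mixed precision via the Model wrapper (placeholder network).
from mindspore import nn
from mindspore.train.model import Model

# A trivial stand-in network so the sketch is self-contained; in this repo the
# real network comes from src/resnet.py.
net = nn.SequentialCell([nn.Flatten(), nn.Dense(3 * 224 * 224, 1001)])
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
opt = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)

# amp_level="O2" casts most of the network to float16 (BatchNorm and the loss
# stay in float32) and enables loss scaling, i.e. the mixed-precision training
# described above.
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2", metrics={'acc'})
```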
# [Environment Requirements](#contents)
- Hardware (Ascend/GPU)
    - Prepare the hardware environment with an Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get access to the resources.
# [Script Description](#contents)

## [Script and Sample Code](#contents)

```
├── run_parameter_server_train_gpu.sh   # launch gpu parameter server training(8 pcs)
├── run_eval_gpu.sh                      # launch gpu evaluation
├── dataset.py # data preprocessing
├── crossentropy.py # loss definition for ImageNet2012 dataset
├── lr_generator.py # generate learning rate for each step
└── resnet.py                            # resnet backbone, including resnet50, resnet101 and se-resnet50
├── eval.py # eval net
└── train.py # train net
```
## [Script Parameters](#contents)
Parameters for both training and evaluation can be set in config.py; a sketch of how such a config file is typically laid out follows the configuration lists below.
- Config for ResNet50, CIFAR-10 dataset
```
"class_num": 10, # dataset class num
...
"lr_max": 0.1, # maximum learning rate
```
- Config for ResNet50, ImageNet2012 dataset
```
"class_num": 1001, # dataset class number
...
"lr_max": 0.1, # maximum learning rate
```
- Config for ResNet101, ImageNet2012 dataset
```
"class_num": 1001, # dataset class number
...
"lr": 0.1 # base learning rate
```
- Config for SE-ResNet50, ImageNet2012 dataset
```
"class_num": 1001, # dataset class number
...
"lr_end": 0.0001, # end learning rate
```
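As mentioned above, these settings live in config.py; in the ModelZoo scripts such a file typically collects them into an attribute-style dictionary. A minimal sketch, assuming an EasyDict-based layout and reproducing only keys already shown above:

```
# Sketch of how config.py typically exposes the parameters listed above (abbreviated).
from easydict import EasyDict as ed

config = ed({
    "class_num": 10,   # dataset class num (ResNet50, CIFAR-10 config above)
    "lr_max": 0.1,     # maximum learning rate
})

# train.py / eval.py can then read the values by attribute:
print(config.class_num, config.lr_max)
```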
## [Training Process](#contents)

### Usage

#### Running on Ascend
```
# distributed training
Usage: sh run_distribute_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
# standalone training
Usage: sh run_standalone_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
# run evaluation example
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```
For distributed training, an HCCL configuration file in JSON format needs to be created in advance. Please follow the instructions in [hccn_tools](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools).
#### Running on GPU
```
# distributed training example
sh run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training example
sh run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
# infer example
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```
> For details about rank_table.json (the [RANK_TABLE_FILE] used for Ascend distributed training), refer to the [distributed training tutorial](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training.html).
#### Running parameter server mode training

- Parameter server training Ascend example
```
sh run_parameter_server_train.sh [resnet50|resnet101] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```
- Parameter server training GPU example
```
sh run_parameter_server_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```
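Beyond the launch scripts above, parameter server mode also has to be enabled inside the training script itself. The sketch below shows the MindSpore calls that are typically involved; it is an assumption-level illustration, not a copy of this repository's train.py, and the worker/server/scheduler roles are normally configured through environment variables set by the launch scripts.

```
# Sketch: pieces a training script typically needs for parameter server mode
# (assumed usage of the standard MindSpore PS APIs, not this repo's exact code).
from mindspore import context

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
context.set_ps_context(enable_ps=True)   # must be set before the network is built

# net = resnet50(class_num=config.class_num)   # network from src/resnet.py
# net.set_param_ps()                           # place trainable parameters on the servers
```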
### Result

Training results will be stored in the example path, in a folder whose name begins with "train" or "train_parallel". There you can find the checkpoint files together with results like the following in the log.

- Training ResNet50 with CIFAR-10 dataset
```
# distribute training result(8 pcs)
epoch: 5 step: 195, loss is 1.393667
...
```
- Training ResNet50 with ImageNet2012 dataset
```
# distribute training result(8 pcs)
epoch: 5 step: 5004, loss is 3.1978393
...
```
- Training ResNet101 with ImageNet2012 dataset
```
# distribute training result(8 pcs)
epoch: 69 step: 5004, loss is 2.0665488
epoch: 70 step: 5004, loss is 1.8717369
...
```
- Training SE-ResNet50 with ImageNet2012 dataset
```
# distribute training result(8 pcs)
epoch: 4 step: 5004, loss is 3.5011306
epoch: 5 step: 5004, loss is 3.3501816
...
```
## [Evaluation Process](#contents)

### Usage

#### Running on Ascend
```
# evaluation
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
# evaluation example
sh run_eval.sh resnet50 cifar10 ~/cifar10-10-verify-bin ~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```
> Note: the checkpoint is produced during the training process.
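Loading that checkpoint for evaluation usually takes only two MindSpore calls, which eval.py performs internally. A minimal sketch with a placeholder network (the network construction line is commented out because it depends on src/resnet.py):

```
# Sketch: restoring a trained checkpoint for evaluation (paths/names are placeholders).
from mindspore.train.serialization import load_checkpoint, load_param_into_net

# net = resnet50(class_num=10)                       # hypothetical: network from src/resnet.py
param_dict = load_checkpoint("resnet-90_195.ckpt")   # checkpoint produced by training
# load_param_into_net(net, param_dict)               # copy the weights into the network
print(sorted(param_dict.keys())[:5])                 # inspect a few parameter names
```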
#### Running on GPU
```
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```
### Result
Evaluation results will be stored in the example path, in a folder named "eval". There you can find results like the following in the log.