# Contents

- [ResNet Description](#resnet-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Features](#features)
    - [Mixed Precision](#mixed-precision)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Script Parameters](#script-parameters)
    - [Training Process](#training-process)
    - [Evaluation Process](#evaluation-process)
    - [Inference Process](#inference-process)
        - [Export MindIR](#export-mindir)
        - [Infer on Ascend310](#infer-on-ascend310)
        - [Result](#result)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Evaluation Performance](#evaluation-performance)
        - [Inference Performance](#inference-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
# [ResNet Description](#contents)

## Description

ResNet (residual neural network) was proposed by Kaiming He and three colleagues at Microsoft Research. By stacking residual units, they successfully trained a 152-layer network and won the ILSVRC 2015 classification competition with a top-5 error rate of 3.57%, while using fewer parameters than VGGNet. In a traditional convolutional or fully connected network, some information is lost at every layer, and gradients tend to vanish or explode, which makes very deep networks hard to train. ResNet alleviates these problems: a shortcut connection passes the input directly to the output, so the information is preserved and each block only needs to learn the residual between its input and output, which simplifies the learning objective. This structure speeds up training considerably and also improves accuracy noticeably; residual connections have since been adopted by other architectures, such as Inception networks.

These are examples of training ResNet18/ResNet50/ResNet101/SE-ResNet50 with the CIFAR-10 or ImageNet2012 dataset in MindSpore. ResNet50 and ResNet101 follow [paper 1](https://arxiv.org/pdf/1512.03385.pdf) below, while SE-ResNet50 is a variant of ResNet50 based on [paper 2](https://arxiv.org/abs/1709.01507) and [paper 3](https://arxiv.org/abs/1812.01187) below. Training SE-ResNet50 for just 24 epochs on 8 Ascend 910 devices reaches a top-1 accuracy of 75.9%. (Training ResNet101 or SE-ResNet50 with the CIFAR-10 dataset is not supported yet.)
## Paper

1. [paper](https://arxiv.org/pdf/1512.03385.pdf): Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. "Deep Residual Learning for Image Recognition"
2. [paper](https://arxiv.org/abs/1709.01507): Jie Hu, Li Shen, Samuel Albanie, Gang Sun, Enhua Wu. "Squeeze-and-Excitation Networks"
3. [paper](https://arxiv.org/abs/1812.01187): Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li. "Bag of Tricks for Image Classification with Convolutional Neural Networks"
# [Model Architecture](#contents)

The overall network architecture of ResNet is shown in the paper below:

[Link](https://arxiv.org/pdf/1512.03385.pdf)
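
To make the residual idea concrete, below is a minimal MindSpore sketch of a bottleneck residual unit. It is simplified from what src/resnet.py implements; the class name and layer details here are illustrative, not the exact repo code.

```python
import mindspore.nn as nn

class ResidualBlock(nn.Cell):
    """Bottleneck residual unit: output = ReLU(F(x) + shortcut(x))."""
    expansion = 4

    def __init__(self, in_channel, out_channel, stride=1):
        super().__init__()
        channel = out_channel // self.expansion
        self.conv1 = nn.Conv2d(in_channel, channel, 1)
        self.bn1 = nn.BatchNorm2d(channel)
        self.conv2 = nn.Conv2d(channel, channel, 3, stride=stride, pad_mode='same')
        self.bn2 = nn.BatchNorm2d(channel)
        self.conv3 = nn.Conv2d(channel, out_channel, 1)
        self.bn3 = nn.BatchNorm2d(out_channel)
        self.relu = nn.ReLU()
        # Projection shortcut when the shape changes; identity otherwise.
        self.down_sample = None
        if stride != 1 or in_channel != out_channel:
            self.down_sample = nn.SequentialCell([
                nn.Conv2d(in_channel, out_channel, 1, stride=stride),
                nn.BatchNorm2d(out_channel)])

    def construct(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        if self.down_sample is not None:
            identity = self.down_sample(x)
        # The block learns only the residual F(x); the input is added back.
        return self.relu(out + identity)
```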
# [Dataset](#contents)

Dataset used: [CIFAR-10](<http://www.cs.toronto.edu/~kriz/cifar.html>)

- Dataset size: 60,000 32*32 color images in 10 classes
    - Train: 50,000 images
    - Test: 10,000 images
- Data format: binary files
    - Note: Data will be processed in dataset.py
- Download the dataset; the directory structure is as follows:

```bash
├─cifar-10-batches-bin
└─cifar-10-verify-bin
```
Dataset used: [ImageNet2012](http://www.image-net.org/)

- Dataset size: 224*224 color images in 1,000 classes
    - Train: 1,281,167 images
    - Test: 50,000 images
- Data format: JPEG
    - Note: Data will be processed in dataset.py (see the sketch after this list)
- Download the dataset; the directory structure is as follows:

```bash
└─dataset
    ├─ilsvrc                # train dataset
    └─validation_preprocess # evaluate dataset
```
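
As a rough illustration of the preprocessing mentioned in the notes above, here is a minimal sketch of a CIFAR-10 loading pipeline in MindSpore. The function name and the augmentation/normalization constants are illustrative assumptions; src/dataset.py is the authoritative implementation.

```python
import mindspore.common.dtype as mstype
import mindspore.dataset as ds
import mindspore.dataset.transforms.c_transforms as C2
import mindspore.dataset.vision.c_transforms as C

def create_cifar10_dataset(data_path, batch_size=32, training=True):
    """Illustrative CIFAR-10 pipeline: augment, normalize, batch."""
    data_set = ds.Cifar10Dataset(data_path, shuffle=training)
    trans = []
    if training:
        trans += [C.RandomCrop((32, 32), (4, 4, 4, 4)),  # pad-and-crop augmentation
                  C.RandomHorizontalFlip()]
    trans += [C.Resize((224, 224)),
              C.Rescale(1.0 / 255.0, 0.0),
              C.Normalize([0.4914, 0.4822, 0.4465], [0.2023, 0.1994, 0.2010]),
              C.HWC2CHW()]  # channels-last to channels-first
    data_set = data_set.map(operations=trans, input_columns="image")
    data_set = data_set.map(operations=C2.TypeCast(mstype.int32), input_columns="label")
    return data_set.batch(batch_size, drop_remainder=True)
```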
# [Features](#contents)

## Mixed Precision

The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both single-precision and half-precision data types, while maintaining the network accuracy achieved with single-precision training. Mixed precision training speeds up computation, reduces memory usage, and enables a larger model or batch size to be trained on specific hardware.

For FP16 operators, if the input data type is FP32, the MindSpore backend will automatically reduce the precision. Users can check the reduced-precision operators by enabling the INFO log and searching for 'reduce precision'.
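
In MindSpore, mixed precision is typically enabled through the `amp_level` argument of `Model`, combined with a loss-scale manager. Below is a minimal sketch; the stand-in network and the exact arguments are illustrative, and train.py wires this up for the real ResNet.

```python
import mindspore.nn as nn
from mindspore import Model
from mindspore.train.loss_scale_manager import FixedLossScaleManager

net = nn.Dense(32, 10)  # stand-in network; here it would be a ResNet from src/resnet.py
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# amp_level="O2" casts the network to float16 while keeping BatchNorm (and the
# loss) in float32; the fixed loss scale guards against FP16 gradient underflow.
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2",
              loss_scale_manager=FixedLossScaleManager(1024, drop_overflow_update=False))
```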
# [Environment Requirements](#contents)

- Hardware (Ascend/GPU/CPU)
    - Prepare the hardware environment with an Ascend, GPU or CPU processor.
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)

After installing MindSpore via the official website, you can start training and evaluation as follows:

- Running on Ascend

```bash
# distributed training
Usage: bash run_distribute_train.sh [resnet18|resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training
Usage: bash run_standalone_train.sh [resnet18|resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# run evaluation example
Usage: bash run_eval.sh [resnet18|resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

- Running on GPU

```bash
# distributed training example
bash run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training example
bash run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# infer example
bash run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]

# gpu benchmark example
bash run_gpu_resnet_benchmark.sh [DATASET_PATH] [BATCH_SIZE](optional) [DTYPE](optional) [DEVICE_NUM](optional) [SAVE_CKPT](optional) [SAVE_PATH](optional)
```

- Running on CPU

```bash
# standalone training example
python train.py --net=[resnet50|resnet101] --dataset=[cifar10|imagenet2012] --device_target=CPU --dataset_path=[DATASET_PATH] --pre_trained=[CHECKPOINT_PATH](optional)

# infer example
python eval.py --net=[resnet50|resnet101] --dataset=[cifar10|imagenet2012] --dataset_path=[DATASET_PATH] --checkpoint_path=[CHECKPOINT_PATH] --device_target=CPU
```
# [Script Description](#contents)

## [Script and Sample Code](#contents)

```shell
.
└──resnet
  ├── README.md
  ├── scripts
    ├── run_distribute_train.sh            # launch ascend distributed training(8 pcs)
    ├── run_parameter_server_train.sh      # launch ascend parameter server training(8 pcs)
    ├── run_eval.sh                        # launch ascend evaluation
    ├── run_standalone_train.sh            # launch ascend standalone training(1 pcs)
    ├── run_distribute_train_gpu.sh        # launch gpu distributed training(8 pcs)
    ├── run_parameter_server_train_gpu.sh  # launch gpu parameter server training(8 pcs)
    ├── run_eval_gpu.sh                    # launch gpu evaluation
    ├── run_standalone_train_gpu.sh        # launch gpu standalone training(1 pcs)
    ├── run_gpu_resnet_benchmark.sh        # launch gpu benchmark train for resnet50 with imagenet2012
    ├── run_eval_gpu_resnet_benchmark.sh   # launch gpu benchmark eval for resnet50 with imagenet2012
    └── cache_util.sh                      # a collection of helper functions to manage cache
  ├── src
    ├── config.py                          # parameter configuration
    ├── dataset.py                         # data preprocessing
    ├── eval_callback.py                   # evaluation callback while training
    ├── CrossEntropySmooth.py              # loss definition for ImageNet2012 dataset
    ├── lr_generator.py                    # generate learning rate for each step
    ├── resnet.py                          # resnet backbone, including resnet50, resnet101 and se-resnet50
    └── resnet_gpu_benchmark.py            # resnet50 for GPU benchmark
  ├── export.py                            # export model for inference
  ├── mindspore_hub_conf.py                # mindspore hub interface
  ├── eval.py                              # eval net
  ├── train.py                             # train net
  └── gpu_resnet_benchmark.py              # GPU benchmark for resnet50
```
## [Script Parameters](#contents)

Parameters for both training and evaluation can be set in config.py.

- Config for ResNet18 and ResNet50, CIFAR-10 dataset

```bash
"class_num": 10,                  # dataset class num
"batch_size": 32,                 # batch size of input tensor
"loss_scale": 1024,               # loss scale
"momentum": 0.9,                  # momentum
"weight_decay": 1e-4,             # weight decay
"epoch_size": 90,                 # only valid for training, which is always 1 for inference
"pretrain_epoch_size": 0,         # epoch size that model has been trained before loading pretrained checkpoint, actual training epoch size is equal to epoch_size minus pretrain_epoch_size
"save_checkpoint": True,          # whether save checkpoint or not
"save_checkpoint_epochs": 5,      # the epoch interval between two checkpoints. By default, the last checkpoint will be saved after the last step
"keep_checkpoint_max": 10,        # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./",     # path to save checkpoint
"warmup_epochs": 5,               # number of warmup epochs
"lr_decay_mode": "poly",          # decay mode, can be selected from steps, poly and default
"lr_init": 0.01,                  # initial learning rate
"lr_end": 0.00001,                # final learning rate
"lr_max": 0.1,                    # maximum learning rate
```

- Config for ResNet18 and ResNet50, ImageNet2012 dataset

```bash
"class_num": 1001,                # dataset class number
"batch_size": 256,                # batch size of input tensor
"loss_scale": 1024,               # loss scale
"momentum": 0.9,                  # momentum optimizer
"weight_decay": 1e-4,             # weight decay
"epoch_size": 90,                 # only valid for training, which is always 1 for inference
"pretrain_epoch_size": 0,         # epoch size that model has been trained before loading pretrained checkpoint, actual training epoch size is equal to epoch_size minus pretrain_epoch_size
"save_checkpoint": True,          # whether save checkpoint or not
"save_checkpoint_epochs": 5,      # the epoch interval between two checkpoints. By default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10,        # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./",     # path to save checkpoint relative to the executed path
"warmup_epochs": 0,               # number of warmup epochs
"lr_decay_mode": "Linear",        # decay mode for generating learning rate
"use_label_smooth": True,         # label smooth
"label_smooth_factor": 0.1,       # label smooth factor
"lr_init": 0,                     # initial learning rate
"lr_max": 0.8,                    # maximum learning rate
"lr_end": 0.0,                    # minimum learning rate
```

- Config for ResNet101, ImageNet2012 dataset

```bash
"class_num": 1001,                # dataset class number
"batch_size": 32,                 # batch size of input tensor
"loss_scale": 1024,               # loss scale
"momentum": 0.9,                  # momentum optimizer
"weight_decay": 1e-4,             # weight decay
"epoch_size": 120,                # epoch size for training
"pretrain_epoch_size": 0,         # epoch size that model has been trained before loading pretrained checkpoint, actual training epoch size is equal to epoch_size minus pretrain_epoch_size
"save_checkpoint": True,          # whether save checkpoint or not
"save_checkpoint_epochs": 5,      # the epoch interval between two checkpoints. By default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10,        # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./",     # path to save checkpoint relative to the executed path
"warmup_epochs": 0,               # number of warmup epochs
"lr_decay_mode": "cosine",        # decay mode for generating learning rate
"use_label_smooth": True,         # label smooth
"label_smooth_factor": 0.1,       # label smooth factor
"lr": 0.1,                        # base learning rate
```

- Config for SE-ResNet50, ImageNet2012 dataset

```bash
"class_num": 1001,                # dataset class number
"batch_size": 32,                 # batch size of input tensor
"loss_scale": 1024,               # loss scale
"momentum": 0.9,                  # momentum optimizer
"weight_decay": 1e-4,             # weight decay
"epoch_size": 28,                 # epoch size for creating learning rate
"train_epoch_size": 24,           # actual train epoch size
"pretrain_epoch_size": 0,         # epoch size that model has been trained before loading pretrained checkpoint, actual training epoch size is equal to epoch_size minus pretrain_epoch_size
"save_checkpoint": True,          # whether save checkpoint or not
"save_checkpoint_epochs": 4,      # the epoch interval between two checkpoints. By default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10,        # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./",     # path to save checkpoint relative to the executed path
"warmup_epochs": 3,               # number of warmup epochs
"lr_decay_mode": "cosine",        # decay mode for generating learning rate
"use_label_smooth": True,         # label smooth
"label_smooth_factor": 0.1,       # label smooth factor
"lr_init": 0.0,                   # initial learning rate
"lr_max": 0.3,                    # maximum learning rate
"lr_end": 0.0001,                 # final learning rate
```
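
Several of the keys above (`warmup_epochs`, `lr_decay_mode`, `lr_init`, `lr_max`, `lr_end`) feed the per-step learning-rate table built by src/lr_generator.py. The following is a simplified sketch of a warmup-plus-cosine schedule to show how such a table can be generated; the exact formulas in the repo may differ.

```python
import math
import numpy as np

def warmup_cosine_lr(lr_max, total_epochs, steps_per_epoch, warmup_epochs=0, lr_end=0.0):
    """Build a per-step LR table: linear warmup, then cosine decay to lr_end."""
    total_steps = total_epochs * steps_per_epoch
    warmup_steps = warmup_epochs * steps_per_epoch
    lr_each_step = []
    for step in range(total_steps):
        if step < warmup_steps:
            lr = lr_max * (step + 1) / warmup_steps          # linear warmup
        else:
            progress = (step - warmup_steps) / (total_steps - warmup_steps)
            lr = lr_end + (lr_max - lr_end) * (1 + math.cos(math.pi * progress)) / 2
        lr_each_step.append(lr)
    return np.array(lr_each_step, dtype=np.float32)

# E.g., the ResNet101/ImageNet2012 config above: lr=0.1, cosine decay, no warmup.
lr_table = warmup_cosine_lr(lr_max=0.1, total_epochs=120, steps_per_epoch=5004)
```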
## [Training Process](#contents)

### Usage

#### Running on Ascend

```bash
# distributed training
Usage: bash run_distribute_train.sh [resnet18|resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training
Usage: bash run_standalone_train.sh [resnet18|resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# run evaluation example
Usage: bash run_eval.sh [resnet18|resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

For distributed training, an HCCL configuration file in JSON format needs to be created in advance.

Please follow the instructions in the link [hccn_tools](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools).

Training results will be stored in the example path, in a folder whose name begins with "train" or "train_parallel". Under this path you will find the checkpoint files, together with results like the following in the log.

If you want to change the device_id for standalone training, you can set the environment variable `export DEVICE_ID=x` or set `device_id=x` in the context.
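
For reference, selecting the device on the Python side looks roughly like this (a sketch; train.py handles the actual flag parsing):

```python
import os
from mindspore import context

# Honor `export DEVICE_ID=x`, falling back to device 0.
device_id = int(os.getenv('DEVICE_ID', '0'))
context.set_context(mode=context.GRAPH_MODE, device_target="Ascend", device_id=device_id)
```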
#### Running on GPU

```bash
# distributed training example
bash run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training example
bash run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# infer example
bash run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]

# gpu benchmark training example
bash run_gpu_resnet_benchmark.sh [DATASET_PATH] [BATCH_SIZE](optional) [DTYPE](optional) [DEVICE_NUM](optional) [SAVE_CKPT](optional) [SAVE_PATH](optional)

# gpu benchmark infer example
bash run_eval_gpu_resnet_benchmark.sh [DATASET_PATH] [CKPT_PATH] [BATCH_SIZE](optional) [DTYPE](optional)
```

For distributed training, a hostfile configuration needs to be created in advance.

Please follow the instructions in the link [GPU-Multi-Host](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/distributed_training_gpu.html).
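
Once the processes are launched, the Python side initializes communication and data parallelism roughly as follows (a sketch of the usual MindSpore setup; the repo's train.py is the reference):

```python
from mindspore import context
from mindspore.communication.management import init, get_group_size
from mindspore.context import ParallelMode

# NCCL-based initialization for multi-GPU data-parallel training.
context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
init()
context.set_auto_parallel_context(device_num=get_group_size(),
                                  parallel_mode=ParallelMode.DATA_PARALLEL,
                                  gradients_mean=True)
```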
#### Running parameter server mode training

- Parameter server training Ascend example

```bash
bash run_parameter_server_train.sh [resnet18|resnet50|resnet101] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```

- Parameter server training GPU example

```bash
bash run_parameter_server_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```
#### Evaluation while training

```bash
# evaluation with distributed training Ascend example:
bash run_distribute_train.sh [resnet18|resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [RUN_EVAL](optional) [EVAL_DATASET_PATH](optional)

# evaluation with standalone training Ascend example:
bash run_standalone_train.sh [resnet18|resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [RUN_EVAL](optional) [EVAL_DATASET_PATH](optional)

# evaluation with distributed training GPU example:
bash run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [RUN_EVAL](optional) [EVAL_DATASET_PATH](optional)

# evaluation with standalone training GPU example:
bash run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [RUN_EVAL](optional) [EVAL_DATASET_PATH](optional)
```

`RUN_EVAL` and `EVAL_DATASET_PATH` are optional arguments; setting `RUN_EVAL`=True lets you evaluate while training. When `RUN_EVAL` is set, `EVAL_DATASET_PATH` must also be set. You can additionally pass the optional arguments `save_best_ckpt`, `eval_start_epoch` and `eval_interval` to the Python script when `RUN_EVAL` is True; a sketch of the underlying callback follows this section.

By default, a standalone cache server will be started to cache all evaluation images in tensor format in memory, which improves evaluation performance. Please make sure the dataset fits in memory (around 30 GB of memory is required for the ImageNet2012 evaluation dataset, and 6 GB for the CIFAR-10 evaluation dataset).

Users can choose to shut down the cache server after training or leave it running for future use.
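
The evaluation-while-training behavior is driven by a callback (src/eval_callback.py). Below is a simplified sketch of how such a callback can be written; the class name and details here are illustrative, not the exact repo code.

```python
from mindspore.train.callback import Callback

class EvalCallback(Callback):
    """Evaluate every `eval_interval` epochs once `eval_start_epoch` is reached."""

    def __init__(self, model, eval_dataset, eval_start_epoch=1, eval_interval=1):
        super().__init__()
        self.model = model
        self.eval_dataset = eval_dataset
        self.eval_start_epoch = eval_start_epoch
        self.eval_interval = eval_interval
        self.best_acc = 0.0

    def epoch_end(self, run_context):
        cur_epoch = run_context.original_args().cur_epoch_num
        if cur_epoch >= self.eval_start_epoch and \
                (cur_epoch - self.eval_start_epoch) % self.eval_interval == 0:
            acc = self.model.eval(self.eval_dataset)["acc"]
            self.best_acc = max(self.best_acc, acc)  # track (and optionally save) the best result
            print(f"epoch: {cur_epoch}, acc: {acc}, best acc so far: {self.best_acc}")
```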
### Result

- Training ResNet18 with CIFAR-10 dataset

```bash
# distribute training result(8 pcs)
epoch: 1 step: 195, loss is 1.5783054
epoch: 2 step: 195, loss is 1.0682616
epoch: 3 step: 195, loss is 0.8836588
epoch: 4 step: 195, loss is 0.36090446
epoch: 5 step: 195, loss is 0.80853784
...
```

- Training ResNet18 with ImageNet2012 dataset

```bash
# distribute training result(8 pcs)
epoch: 1 step: 625, loss is 4.757934
epoch: 2 step: 625, loss is 4.0891967
epoch: 3 step: 625, loss is 3.9131956
epoch: 4 step: 625, loss is 3.5302577
epoch: 5 step: 625, loss is 3.597817
...
```

- Training ResNet50 with CIFAR-10 dataset

```bash
# distribute training result(8 pcs)
epoch: 1 step: 195, loss is 1.9601055
epoch: 2 step: 195, loss is 1.8555021
epoch: 3 step: 195, loss is 1.6707983
epoch: 4 step: 195, loss is 1.8162166
epoch: 5 step: 195, loss is 1.393667
...
```

- Training ResNet50 with ImageNet2012 dataset

```bash
# distribute training result(8 pcs)
epoch: 1 step: 5004, loss is 4.8995576
epoch: 2 step: 5004, loss is 3.9235563
epoch: 3 step: 5004, loss is 3.833077
epoch: 4 step: 5004, loss is 3.2795618
epoch: 5 step: 5004, loss is 3.1978393
...
```

- Training ResNet101 with ImageNet2012 dataset

```bash
# distribute training result(8 pcs)
epoch: 1 step: 5004, loss is 4.805483
epoch: 2 step: 5004, loss is 3.2121816
epoch: 3 step: 5004, loss is 3.429647
epoch: 4 step: 5004, loss is 3.3667371
epoch: 5 step: 5004, loss is 3.1718972
...
```

- Training SE-ResNet50 with ImageNet2012 dataset

```bash
# distribute training result(8 pcs)
epoch: 1 step: 5004, loss is 5.1779146
epoch: 2 step: 5004, loss is 4.139395
epoch: 3 step: 5004, loss is 3.9240637
epoch: 4 step: 5004, loss is 3.5011306
epoch: 5 step: 5004, loss is 3.3501816
...
```
- GPU Benchmark of ResNet50 with ImageNet2012 dataset

```bash
# ========START RESNET50 GPU BENCHMARK========
epoch: [0/1] step: [20/5004], loss is 6.940182 Epoch time: 12416.098 ms, fps: 412 img/sec.
epoch: [0/1] step: [40/5004], loss is 7.078993 Epoch time: 3438.972 ms, fps: 1488 img/sec.
epoch: [0/1] step: [60/5004], loss is 7.559594 Epoch time: 3431.516 ms, fps: 1492 img/sec.
epoch: [0/1] step: [80/5004], loss is 6.920937 Epoch time: 3435.777 ms, fps: 1490 img/sec.
epoch: [0/1] step: [100/5004], loss is 6.814013 Epoch time: 3437.154 ms, fps: 1489 img/sec.
...
```
## [Evaluation Process](#contents)

### Usage

#### Running on Ascend

```bash
# evaluation
Usage: bash run_eval.sh [resnet18|resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

```bash
# evaluation example
bash run_eval.sh resnet50 cifar10 ~/cifar-10-verify-bin ~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```

> The checkpoint can be produced during the training process.

#### Running on GPU

```bash
bash run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

### Result

Evaluation results will be stored in the example path, in a folder named "eval". Under this path you will find results like the following in the log.
- Evaluating ResNet18 with CIFAR-10 dataset

```bash
result: {'acc': 0.9363061543521088} ckpt=~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```

- Evaluating ResNet18 with ImageNet2012 dataset

```bash
result: {'acc': 0.7053685897435897} ckpt=train_parallel0/resnet-90_5004.ckpt
```

- Evaluating ResNet50 with CIFAR-10 dataset

```bash
result: {'acc': 0.91446314102564111} ckpt=~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```

- Evaluating ResNet50 with ImageNet2012 dataset

```bash
result: {'acc': 0.7671054737516005} ckpt=train_parallel0/resnet-90_5004.ckpt
```

- Evaluating ResNet101 with ImageNet2012 dataset

```bash
result: {'top_5_accuracy': 0.9429417413572343, 'top_1_accuracy': 0.7853513124199744} ckpt=train_parallel0/resnet-120_5004.ckpt
```

- Evaluating SE-ResNet50 with ImageNet2012 dataset

```bash
result: {'top_5_accuracy': 0.9342589628681178, 'top_1_accuracy': 0.768065781049936} ckpt=train_parallel0/resnet-24_5004.ckpt
```
## [Inference Process](#contents)

### [Export MindIR](#contents)

```shell
python export.py --ckpt_file [CKPT_PATH] --file_name [FILE_NAME] --file_format [FILE_FORMAT]
```

The `ckpt_file` parameter is required, and `FILE_FORMAT` should be chosen from ["AIR", "MINDIR"].
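
Under the hood, export.py restores the checkpoint into the network and traces it with a dummy input, roughly as follows. This is a sketch; the `resnet50` constructor, the checkpoint name and the input shape are assumptions based on this repo's layout.

```python
import numpy as np
from mindspore import Tensor
from mindspore.train.serialization import export, load_checkpoint, load_param_into_net
from src.resnet import resnet50  # assumed constructor from src/resnet.py

net = resnet50(class_num=1001)
load_param_into_net(net, load_checkpoint("resnet-90_5004.ckpt"))

# Trace the graph with a dummy ImageNet-shaped input and serialize it.
dummy_input = Tensor(np.zeros([1, 3, 224, 224], np.float32))
export(net, dummy_input, file_name="resnet50", file_format="MINDIR")
```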
### Infer on Ascend310

Before performing inference, the MINDIR file must be exported by the `export.py` script. We only provide an example of inference using the MINDIR model.

Currently, batch_size can only be set to 1. The accuracy calculation needs about 70 GB of memory; otherwise, the process will be killed for exceeding the memory limit.

```shell
# Ascend310 inference
bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DEVICE_ID]
```

- `DEVICE_ID` is optional; the default value is 0.

### Result

Inference results are saved in the current path; you can find results like the following in the acc.log file.

```bash
top1_accuracy:70.42, top5_accuracy:89.7
```
# [Model Description](#contents)

## [Performance](#contents)

### Evaluation Performance

#### ResNet18 on CIFAR-10

| Parameters | Ascend 910 |
| -------------------------- | -------------------------------------- |
| Model Version | ResNet18 |
| Resource | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755G; OS Euler2.8 |
| Uploaded Date | 02/25/2021 (month/day/year) |
| MindSpore Version | 1.1.1-alpha |
| Dataset | CIFAR-10 |
| Training Parameters | epoch=90, steps per epoch=195, batch_size=32 |
| Optimizer | Momentum |
| Loss Function | Softmax Cross Entropy |
| outputs | probability |
| Loss | 0.0002519517 |
| Speed | 13 ms/step (8 pcs) |
| Total time | 4 mins |
| Parameters (M) | 11.2 |
| Checkpoint for Fine tuning | 86M (.ckpt file) |
| Scripts | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |
#### ResNet18 on ImageNet2012

| Parameters | Ascend 910 |
| -------------------------- | -------------------------------------- |
| Model Version | ResNet18 |
| Resource | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755G; OS Euler2.8 |
| Uploaded Date | 02/25/2021 (month/day/year) |
| MindSpore Version | 1.1.1-alpha |
| Dataset | ImageNet2012 |
| Training Parameters | epoch=90, steps per epoch=626, batch_size=256 |
| Optimizer | Momentum |
| Loss Function | Softmax Cross Entropy |
| outputs | probability |
| Loss | 2.15702 |
| Speed | 110 ms/step (8 pcs) (may need to call set_numa_enable in dataset.py) |
| Total time | 110 mins |
| Parameters (M) | 11.7 |
| Checkpoint for Fine tuning | 90M (.ckpt file) |
| Scripts | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |
#### ResNet50 on CIFAR-10

| Parameters | Ascend 910 | GPU |
| -------------------------- | -------------------------------------- | ---------------------------------- |
| Model Version | ResNet50-v1.5 | ResNet50-v1.5 |
| Resource | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755G; OS Euler2.8 | GPU (Tesla V100 SXM2); CPU 2.1GHz, 24 cores; Memory 128G |
| Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
| Dataset | CIFAR-10 | CIFAR-10 |
| Training Parameters | epoch=90, steps per epoch=195, batch_size=32 | epoch=90, steps per epoch=195, batch_size=32 |
| Optimizer | Momentum | Momentum |
| Loss Function | Softmax Cross Entropy | Softmax Cross Entropy |
| outputs | probability | probability |
| Loss | 0.000356 | 0.000716 |
| Speed | 18.4 ms/step (8 pcs) | 69 ms/step (8 pcs) |
| Total time | 6 mins | 20.2 mins |
| Parameters (M) | 25.5 | 25.5 |
| Checkpoint for Fine tuning | 179.7M (.ckpt file) | 179.7M (.ckpt file) |
| Scripts | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |
#### ResNet50 on ImageNet2012

| Parameters | Ascend 910 | GPU |
| -------------------------- | -------------------------------------- | ---------------------------------- |
| Model Version | ResNet50-v1.5 | ResNet50-v1.5 |
| Resource | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755G; OS Euler2.8 | GPU (Tesla V100 SXM2); CPU 2.1GHz, 24 cores; Memory 128G |
| Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
| Dataset | ImageNet2012 | ImageNet2012 |
| Training Parameters | epoch=90, steps per epoch=626, batch_size=256 | epoch=90, steps per epoch=626, batch_size=256 |
| Optimizer | Momentum | Momentum |
| Loss Function | Softmax Cross Entropy | Softmax Cross Entropy |
| outputs | probability | probability |
| Loss | 1.8464266 | 1.9023 |
| Speed | 118 ms/step (8 pcs) | 270 ms/step (8 pcs) |
| Total time | 114 mins | 260 mins |
| Parameters (M) | 25.5 | 25.5 |
| Checkpoint for Fine tuning | 197M (.ckpt file) | 197M (.ckpt file) |
| Scripts | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |
#### ResNet101 on ImageNet2012

| Parameters | Ascend 910 | GPU |
| -------------------------- | -------------------------------------- | ---------------------------------- |
| Model Version | ResNet101 | ResNet101 |
| Resource | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755G; OS Euler2.8 | GPU (Tesla V100 SXM2); CPU 2.1GHz, 24 cores; Memory 128G |
| Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
| Dataset | ImageNet2012 | ImageNet2012 |
| Training Parameters | epoch=120, steps per epoch=5004, batch_size=32 | epoch=120, steps per epoch=5004, batch_size=32 |
| Optimizer | Momentum | Momentum |
| Loss Function | Softmax Cross Entropy | Softmax Cross Entropy |
| outputs | probability | probability |
| Loss | 1.6453942 | 1.7023412 |
| Speed | 30.3 ms/step (8 pcs) | 108.6 ms/step (8 pcs) |
| Total time | 301 mins | 1100 mins |
| Parameters (M) | 44.6 | 44.6 |
| Checkpoint for Fine tuning | 343M (.ckpt file) | 343M (.ckpt file) |
| Scripts | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |
#### SE-ResNet50 on ImageNet2012

| Parameters | Ascend 910 |
| -------------------------- | ------------------------------------------------------------------------ |
| Model Version | SE-ResNet50 |
| Resource | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755G; OS Euler2.8 |
| Uploaded Date | 08/16/2020 (month/day/year) |
| MindSpore Version | 0.7.0-alpha |
| Dataset | ImageNet2012 |
| Training Parameters | epoch=24, steps per epoch=5004, batch_size=32 |
| Optimizer | Momentum |
| Loss Function | Softmax Cross Entropy |
| outputs | probability |
| Loss | 1.754404 |
| Speed | 24.6 ms/step (8 pcs) |
| Total time | 49.3 mins |
| Parameters (M) | 25.5 |
| Checkpoint for Fine tuning | 215.9M (.ckpt file) |
| Scripts | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |
### Inference Performance

#### ResNet18 on CIFAR-10

| Parameters | Ascend |
| ------------------- | --------------------------- |
| Model Version | ResNet18 |
| Resource | Ascend 910; OS Euler2.8 |
| Uploaded Date | 02/25/2021 (month/day/year) |
| MindSpore Version | 1.1.1-alpha |
| Dataset | CIFAR-10 |
| batch_size | 32 |
| outputs | probability |
| Accuracy | 94.02% |
| Model for inference | 43M (.air file) |

#### ResNet18 on ImageNet2012

| Parameters | Ascend |
| ------------------- | --------------------------- |
| Model Version | ResNet18 |
| Resource | Ascend 910; OS Euler2.8 |
| Uploaded Date | 02/25/2021 (month/day/year) |
| MindSpore Version | 1.1.1-alpha |
| Dataset | ImageNet2012 |
| batch_size | 256 |
| outputs | probability |
| Accuracy | 70.53% |
| Model for inference | 45M (.air file) |

#### ResNet50 on CIFAR-10

| Parameters | Ascend | GPU |
| ------------------- | --------------------------- | --------------------------- |
| Model Version | ResNet50-v1.5 | ResNet50-v1.5 |
| Resource | Ascend 910; OS Euler2.8 | GPU |
| Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
| Dataset | CIFAR-10 | CIFAR-10 |
| batch_size | 32 | 32 |
| outputs | probability | probability |
| Accuracy | 91.44% | 91.37% |
| Model for inference | 91M (.air file) | |

#### ResNet50 on ImageNet2012

| Parameters | Ascend | GPU |
| ------------------- | --------------------------- | --------------------------- |
| Model Version | ResNet50-v1.5 | ResNet50-v1.5 |
| Resource | Ascend 910; OS Euler2.8 | GPU |
| Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
| Dataset | ImageNet2012 | ImageNet2012 |
| batch_size | 256 | 256 |
| outputs | probability | probability |
| Accuracy | 76.70% | 76.74% |
| Model for inference | 98M (.air file) | |

#### ResNet101 on ImageNet2012

| Parameters | Ascend | GPU |
| ------------------- | --------------------------- | --------------------------- |
| Model Version | ResNet101 | ResNet101 |
| Resource | Ascend 910; OS Euler2.8 | GPU |
| Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
| Dataset | ImageNet2012 | ImageNet2012 |
| batch_size | 32 | 32 |
| outputs | probability | probability |
| Accuracy | 78.53% | 78.64% |
| Model for inference | 171M (.air file) | |

#### SE-ResNet50 on ImageNet2012

| Parameters | Ascend |
| ------------------- | --------------------------- |
| Model Version | SE-ResNet50 |
| Resource | Ascend 910; OS Euler2.8 |
| Uploaded Date | 08/16/2020 (month/day/year) |
| MindSpore Version | 0.7.0-alpha |
| Dataset | ImageNet2012 |
| batch_size | 32 |
| outputs | probability |
| Accuracy | 76.80% |
| Model for inference | 109M (.air file) |
# [Description of Random Situation](#contents)

In dataset.py, we set the seed inside the `create_dataset` function. We also use a random seed in train.py.
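
For fully repeatable runs, the seeds can be fixed explicitly. A small sketch of what that looks like in MindSpore (the seed value is arbitrary):

```python
import mindspore.dataset as ds
from mindspore import set_seed

set_seed(1)            # global seed for weight initialization and ops
ds.config.set_seed(1)  # seed for the dataset pipeline's shuffle order
```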
# [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).