# Contents

- [ResNet Description](#resnet-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Features](#features)
    - [Mixed Precision](#mixed-precision)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Script Parameters](#script-parameters)
    - [Training Process](#training-process)
    - [Evaluation Process](#evaluation-process)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Evaluation Performance](#evaluation-performance)
        - [Inference Performance](#inference-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
# [ResNet Description](#contents)

## Description

ResNet (residual neural network) was proposed by Kaiming He and three colleagues at Microsoft Research. Using residual units, they successfully trained a 152-layer network and won the ILSVRC 2015 classification competition with a top-5 error rate of 3.57%, while using fewer parameters than VGGNet. Traditional convolutional or fully connected networks lose some information as it passes through the layers, and they suffer from vanishing or exploding gradients, which makes very deep networks fail to train. ResNet alleviates these problems to a certain extent: by passing the input directly to the output through shortcut connections, the integrity of the information is preserved, and each block only needs to learn the residual between its input and output, which simplifies the learning objective. The residual structure greatly accelerates the training of deep networks and markedly improves model accuracy, and residual units are now widely reused in other architectures such as Inception networks.

These are examples of training ResNet18/ResNet50/ResNet101/SE-ResNet50 on the CIFAR-10 and ImageNet2012 datasets with MindSpore. ResNet50 and ResNet101 follow [paper 1](https://arxiv.org/pdf/1512.03385.pdf), while SE-ResNet50 is a variant of ResNet50 based on [paper 2](https://arxiv.org/abs/1709.01507) and [paper 3](https://arxiv.org/abs/1812.01187). Training SE-ResNet50 for just 24 epochs on 8 Ascend 910 devices reaches a top-1 accuracy of 75.9%. (Training ResNet101 or SE-ResNet50 on CIFAR-10 is not supported yet.)
## Paper

1. [paper](https://arxiv.org/pdf/1512.03385.pdf): Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. "Deep Residual Learning for Image Recognition"
2. [paper](https://arxiv.org/abs/1709.01507): Jie Hu, Li Shen, Samuel Albanie, Gang Sun, Enhua Wu. "Squeeze-and-Excitation Networks"
3. [paper](https://arxiv.org/abs/1812.01187): Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li. "Bag of Tricks for Image Classification with Convolutional Neural Networks"
# [Model Architecture](#contents)

The overall network architecture of ResNet is shown in this [link](https://arxiv.org/pdf/1512.03385.pdf).
# [Dataset](#contents)

Dataset used: [CIFAR-10](http://www.cs.toronto.edu/~kriz/cifar.html)

- Dataset size: 60,000 32×32 color images in 10 classes
    - Train: 50,000 images
    - Test: 10,000 images
- Data format: binary files
    - Note: data will be processed in dataset.py
- Download the dataset; the directory structure is as follows:

```bash
├─cifar-10-batches-bin
└─cifar-10-verify-bin
```
Dataset used: [ImageNet2012](http://www.image-net.org/)

- Dataset size: 224×224 color images in 1,000 classes
    - Train: 1,281,167 images
    - Test: 50,000 images
- Data format: JPEG
    - Note: data will be processed in dataset.py
- Download the dataset; the directory structure is as follows:

```bash
└─dataset
    ├─ilsvrc                # train dataset
    └─validation_preprocess # evaluate dataset
```
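Both datasets are read and preprocessed in src/dataset.py. A minimal sketch of loading the two layouts with the MindSpore 1.x dataset API (the paths, transform list and batch size here are illustrative, not the repository's exact pipeline):

```python
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as C
import mindspore.dataset.transforms.c_transforms as C2
import mindspore.common.dtype as mstype

# CIFAR-10: binary batch files are read directly from the download directory.
cifar = ds.Cifar10Dataset("./cifar-10-batches-bin", shuffle=True)

# ImageNet2012: one sub-folder per class; JPEG images are decoded on the fly.
imagenet = ds.ImageFolderDataset("./dataset/ilsvrc", shuffle=True)
imagenet = imagenet.map(operations=[C.Decode(), C.Resize(256), C.CenterCrop(224), C.HWC2CHW()],
                        input_columns="image")
imagenet = imagenet.map(operations=C2.TypeCast(mstype.int32), input_columns="label")
imagenet = imagenet.batch(256, drop_remainder=True)
```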
# [Features](#contents)

## Mixed Precision

The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the training of deep neural networks by using both single-precision and half-precision data types, while maintaining the accuracy achievable with single-precision training alone. Mixed precision speeds up computation, reduces memory usage, and enables larger models or batch sizes on given hardware.

For FP16 operators, if the input data type is FP32, the MindSpore backend automatically handles it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and searching for "reduce precision".
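A minimal sketch of how mixed precision is typically enabled through MindSpore's `Model` wrapper; the `resnet50` import assumes the constructor name suggested by the script tree below, and the optimizer/loss settings merely echo the configs in this README:

```python
import mindspore.nn as nn
from mindspore import context
from mindspore.train.model import Model
from mindspore.train.loss_scale_manager import FixedLossScaleManager
from src.resnet import resnet50  # assumed constructor name

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
net = resnet50(class_num=10)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9, weight_decay=1e-4)

# amp_level="O2" casts the network to FP16 but keeps BatchNorm in FP32;
# a fixed loss scale guards against FP16 underflow in the backward pass.
model = Model(net, loss_fn=loss, optimizer=opt, amp_level="O2",
              loss_scale_manager=FixedLossScaleManager(1024, drop_overflow_update=False))
```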
# [Environment Requirements](#contents)

- Hardware (Ascend/GPU/CPU)
    - Prepare the hardware environment with an Ascend, GPU or CPU processor.
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)

After installing MindSpore via the official website, you can start training and evaluation as follows:

- Running on Ascend

```bash
# distributed training
Usage: bash run_distribute_train.sh [resnet18|resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training
Usage: bash run_standalone_train.sh [resnet18|resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# run evaluation example
Usage: bash run_eval.sh [resnet18|resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

- Running on GPU

```bash
# distributed training example
bash run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training example
bash run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# infer example
bash run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]

# gpu benchmark example
bash run_gpu_resnet_benchmark.sh [DATASET_PATH] [BATCH_SIZE](optional) [DTYPE](optional) [DEVICE_NUM](optional) [SAVE_CKPT](optional) [SAVE_PATH](optional)
```

- Running on CPU

```bash
# standalone training example
python train.py --net=[resnet50|resnet101] --dataset=[cifar10|imagenet2012] --device_target=CPU --dataset_path=[DATASET_PATH] --pre_trained=[CHECKPOINT_PATH](optional)

# infer example
python eval.py --net=[resnet50|resnet101] --dataset=[cifar10|imagenet2012] --dataset_path=[DATASET_PATH] --checkpoint_path=[CHECKPOINT_PATH] --device_target=CPU
```
# [Script Description](#contents)

## [Script and Sample Code](#contents)

```shell
.
└──resnet
  ├── README.md
  ├── scripts
    ├── run_distribute_train.sh            # launch ascend distributed training (8 pcs)
    ├── run_parameter_server_train.sh      # launch ascend parameter server training (8 pcs)
    ├── run_eval.sh                        # launch ascend evaluation
    ├── run_standalone_train.sh            # launch ascend standalone training (1 pcs)
    ├── run_distribute_train_gpu.sh        # launch gpu distributed training (8 pcs)
    ├── run_parameter_server_train_gpu.sh  # launch gpu parameter server training (8 pcs)
    ├── run_eval_gpu.sh                    # launch gpu evaluation
    ├── run_standalone_train_gpu.sh        # launch gpu standalone training (1 pcs)
    ├── run_gpu_resnet_benchmark.sh        # launch gpu benchmark train for resnet50 with imagenet2012
    └── run_eval_gpu_resnet_benchmark.sh   # launch gpu benchmark eval for resnet50 with imagenet2012
  ├── src
    ├── config.py                          # parameter configuration
    ├── dataset.py                         # data preprocessing
    ├── eval_callback.py                   # evaluation callback while training
    ├── CrossEntropySmooth.py              # loss definition for ImageNet2012 dataset
    ├── lr_generator.py                    # generate learning rate for each step
    ├── resnet.py                          # resnet backbone, including resnet18, resnet50, resnet101 and se-resnet50
    └── resnet_gpu_benchmark.py            # resnet50 for GPU benchmark
  ├── export.py                            # export model for inference
  ├── mindspore_hub_conf.py                # mindspore hub interface
  ├── eval.py                              # eval net
  ├── train.py                             # train net
  └── gpu_resnet_benchmark.py              # GPU benchmark for resnet50
```
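Among these, src/CrossEntropySmooth.py defines the label-smoothing loss used for ImageNet2012. A minimal sketch of such a loss, subclassing `nn.Cell` for simplicity (the class name matches the file; the internals are illustrative, not the repository's exact code):

```python
import mindspore.nn as nn
from mindspore import Tensor
from mindspore.common import dtype as mstype
from mindspore.ops import operations as P, functional as F

class CrossEntropySmooth(nn.Cell):
    """Softmax cross entropy with label smoothing over num_classes outputs."""
    def __init__(self, smooth_factor=0.1, num_classes=1001):
        super().__init__()
        self.onehot = P.OneHot()
        # the true class gets 1 - smooth_factor; the rest share smooth_factor
        self.on_value = Tensor(1.0 - smooth_factor, mstype.float32)
        self.off_value = Tensor(smooth_factor / (num_classes - 1), mstype.float32)
        self.ce = nn.SoftmaxCrossEntropyWithLogits(reduction="mean")

    def construct(self, logits, label):
        one_hot_label = self.onehot(label, F.shape(logits)[1], self.on_value, self.off_value)
        return self.ce(logits, one_hot_label)
```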
## [Script Parameters](#contents)

Parameters for both training and evaluation can be set in config.py. (A sketch of how the lr_* fields are turned into a per-step schedule follows the listings below.)
- Config for ResNet18 and ResNet50, CIFAR-10 dataset

```bash
"class_num": 10,                  # dataset class num
"batch_size": 32,                 # batch size of input tensor
"loss_scale": 1024,               # loss scale
"momentum": 0.9,                  # momentum
"weight_decay": 1e-4,             # weight decay
"epoch_size": 90,                 # only valid for training, which is always 1 for inference
"pretrain_epoch_size": 0,         # epochs the model was trained for before the pretrained checkpoint was saved; the actual training epoch size equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True,          # whether to save checkpoints or not
"save_checkpoint_epochs": 5,      # the epoch interval between two checkpoints; by default, the last checkpoint will be saved after the last step
"keep_checkpoint_max": 10,        # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./",     # path to save checkpoints
"warmup_epochs": 5,               # number of warmup epochs
"lr_decay_mode": "poly",          # decay mode: one of steps, poly and default
"lr_init": 0.01,                  # initial learning rate
"lr_end": 0.00001,                # final learning rate
"lr_max": 0.1,                    # maximum learning rate
```
- Config for ResNet18 and ResNet50, ImageNet2012 dataset

```bash
"class_num": 1001,                # dataset class number
"batch_size": 256,                # batch size of input tensor
"loss_scale": 1024,               # loss scale
"momentum": 0.9,                  # momentum of the optimizer
"weight_decay": 1e-4,             # weight decay
"epoch_size": 90,                 # only valid for training, which is always 1 for inference
"pretrain_epoch_size": 0,         # epochs the model was trained for before the pretrained checkpoint was saved; the actual training epoch size equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True,          # whether to save checkpoints or not
"save_checkpoint_epochs": 5,      # the epoch interval between two checkpoints; by default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10,        # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./",     # path to save checkpoints, relative to the execution path
"warmup_epochs": 0,               # number of warmup epochs
"lr_decay_mode": "Linear",        # decay mode for generating the learning rate
"use_label_smooth": True,         # label smoothing
"label_smooth_factor": 0.1,       # label smoothing factor
"lr_init": 0,                     # initial learning rate
"lr_max": 0.8,                    # maximum learning rate
"lr_end": 0.0,                    # minimum learning rate
```
- Config for ResNet101, ImageNet2012 dataset

```bash
"class_num": 1001,                # dataset class number
"batch_size": 32,                 # batch size of input tensor
"loss_scale": 1024,               # loss scale
"momentum": 0.9,                  # momentum of the optimizer
"weight_decay": 1e-4,             # weight decay
"epoch_size": 120,                # epoch size for training
"pretrain_epoch_size": 0,         # epochs the model was trained for before the pretrained checkpoint was saved; the actual training epoch size equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True,          # whether to save checkpoints or not
"save_checkpoint_epochs": 5,      # the epoch interval between two checkpoints; by default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10,        # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./",     # path to save checkpoints, relative to the execution path
"warmup_epochs": 0,               # number of warmup epochs
"lr_decay_mode": "cosine",        # decay mode for generating the learning rate
"use_label_smooth": True,         # label smoothing
"label_smooth_factor": 0.1,       # label smoothing factor
"lr": 0.1,                        # base learning rate
```
- Config for SE-ResNet50, ImageNet2012 dataset

```bash
"class_num": 1001,                # dataset class number
"batch_size": 32,                 # batch size of input tensor
"loss_scale": 1024,               # loss scale
"momentum": 0.9,                  # momentum of the optimizer
"weight_decay": 1e-4,             # weight decay
"epoch_size": 28,                 # epoch size for creating the learning rate
"train_epoch_size": 24,           # actual train epoch size
"pretrain_epoch_size": 0,         # epochs the model was trained for before the pretrained checkpoint was saved; the actual training epoch size equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True,          # whether to save checkpoints or not
"save_checkpoint_epochs": 4,      # the epoch interval between two checkpoints; by default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10,        # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./",     # path to save checkpoints, relative to the execution path
"warmup_epochs": 3,               # number of warmup epochs
"lr_decay_mode": "cosine",        # decay mode for generating the learning rate
"use_label_smooth": True,         # label smoothing
"label_smooth_factor": 0.1,       # label smoothing factor
"lr_init": 0.0,                   # initial learning rate
"lr_max": 0.3,                    # maximum learning rate
"lr_end": 0.0001,                 # end learning rate
```
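src/lr_generator.py consumes the lr_* fields above and emits one learning rate per training step. A minimal sketch of the cosine mode with linear warmup (the function name and exact formula details are illustrative, not the repository's code):

```python
import numpy as np

def get_cosine_lr(lr_init, lr_max, lr_end, warmup_epochs, total_epochs, steps_per_epoch):
    """Per-step learning rates: linear warmup to lr_max, then cosine decay to lr_end."""
    total_steps = total_epochs * steps_per_epoch
    warmup_steps = warmup_epochs * steps_per_epoch
    lr_each_step = []
    for step in range(total_steps):
        if step < warmup_steps:
            # linear warmup from lr_init up to lr_max
            lr = lr_init + (lr_max - lr_init) * (step + 1) / warmup_steps
        else:
            # half-cosine decay from lr_max down to lr_end
            progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
            lr = lr_end + (lr_max - lr_end) * 0.5 * (1.0 + np.cos(np.pi * progress))
        lr_each_step.append(lr)
    return np.array(lr_each_step, dtype=np.float32)

# e.g. the SE-ResNet50 config above: warmup_epochs=3, lr_init=0.0, lr_max=0.3, lr_end=0.0001
lr = get_cosine_lr(0.0, 0.3, 0.0001, 3, 28, 5004)
```

With `warmup_epochs=0`, as in the ImageNet configs above, the warmup branch is never taken.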
## [Training Process](#contents)

### Usage

#### Running on Ascend

```bash
# distributed training
Usage: bash run_distribute_train.sh [resnet18|resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training
Usage: bash run_standalone_train.sh [resnet18|resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# run evaluation example
Usage: bash run_eval.sh [resnet18|resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

For distributed training, an HCCL configuration file in JSON format needs to be created in advance.
Please follow the instructions in the link [hccn_tools](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools).

Training results are stored in the example path, in a folder whose name begins with "train" or "train_parallel". Under this folder you can find checkpoint files together with logs like the ones shown in the Result section below.

If you want to change the device_id for standalone training, you can set the environment variable `export DEVICE_ID=x` or set `device_id=x` in the context, as sketched below.
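A minimal sketch of the context call (the environment-variable fallback is illustrative):

```python
import os
from mindspore import context

# Pin the single-card run to one device; DEVICE_ID is exported by the shell script.
context.set_context(mode=context.GRAPH_MODE, device_target="Ascend",
                    device_id=int(os.getenv("DEVICE_ID", "0")))
```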
#### Running on GPU

```bash
# distributed training example
bash run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training example
bash run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# infer example
bash run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]

# gpu benchmark training example
bash run_gpu_resnet_benchmark.sh [DATASET_PATH] [BATCH_SIZE](optional) [DTYPE](optional) [DEVICE_NUM](optional) [SAVE_CKPT](optional) [SAVE_PATH](optional)

# gpu benchmark infer example
bash run_eval_gpu_resnet_benchmark.sh [DATASET_PATH] [CKPT_PATH] [BATCH_SIZE](optional) [DTYPE](optional)
```

For distributed training, a hostfile configuration needs to be created in advance.
Please follow the instructions in the link [GPU-Multi-Host](https://www.mindspore.cn/tutorial/training/zh-CN/r1.0/advanced_use/distributed_training_gpu.html).
#### Running parameter server mode training

- Parameter server training Ascend example

```bash
bash run_parameter_server_train.sh [resnet18|resnet50|resnet101] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```

- Parameter server training GPU example

```bash
bash run_parameter_server_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```
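For reference, parameter-server mode boils down to a couple of switches set in train.py before the network is built. A hedged sketch, assuming MindSpore 1.x's `set_ps_context` API and that the role/scheduler environment variables (e.g. MS_ROLE) are exported by the launch scripts; the `resnet50` constructor name is likewise an assumption:

```python
from mindspore import context
from src.resnet import resnet50  # assumed constructor name

# Enable parameter-server training; server/worker/scheduler roles come from
# environment variables set by the launch scripts.
context.set_ps_context(enable_ps=True)

net = resnet50(class_num=1001)
net.set_param_ps()  # place the trainable parameters on the parameter servers
```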
#### Evaluation while training

If you want to evaluate while training, pass `run_eval=True` to the start shell. When `run_eval` is True, you can also set the options `eval_dataset_path`, `save_best_ckpt`, `eval_start_epoch` and `eval_interval`; a sketch of the underlying callback follows.
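This option is implemented by src/eval_callback.py. A minimal sketch of such a callback, using MindSpore's standard `Callback` hooks (the class name and bookkeeping are illustrative, not the repository's exact code):

```python
from mindspore.train.callback import Callback

class EvalCallback(Callback):
    """Runs model.eval on an epoch interval and remembers the best accuracy."""
    def __init__(self, model, eval_dataset, eval_start_epoch=1, eval_interval=1):
        super().__init__()
        self.model = model
        self.eval_dataset = eval_dataset
        self.eval_start_epoch = eval_start_epoch
        self.eval_interval = eval_interval
        self.best_acc = 0.0

    def epoch_end(self, run_context):
        cur_epoch = run_context.original_args().cur_epoch_num
        if cur_epoch >= self.eval_start_epoch and \
                (cur_epoch - self.eval_start_epoch) % self.eval_interval == 0:
            acc = self.model.eval(self.eval_dataset)["acc"]
            self.best_acc = max(self.best_acc, acc)
            print(f"epoch: {cur_epoch}, acc: {acc}, best acc: {self.best_acc}")
```

Passing an instance of this class in the callback list of `model.train` is, in essence, what `run_eval=True` arranges.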
### Result

- Training ResNet18 with CIFAR-10 dataset

```bash
# distributed training result (8 pcs)
epoch: 1 step: 195, loss is 1.5783054
epoch: 2 step: 195, loss is 1.0682616
epoch: 3 step: 195, loss is 0.8836588
epoch: 4 step: 195, loss is 0.36090446
epoch: 5 step: 195, loss is 0.80853784
...
```

- Training ResNet18 with ImageNet2012 dataset

```bash
# distributed training result (8 pcs)
epoch: 1 step: 625, loss is 4.757934
epoch: 2 step: 625, loss is 4.0891967
epoch: 3 step: 625, loss is 3.9131956
epoch: 4 step: 625, loss is 3.5302577
epoch: 5 step: 625, loss is 3.597817
...
```

- Training ResNet50 with CIFAR-10 dataset

```bash
# distributed training result (8 pcs)
epoch: 1 step: 195, loss is 1.9601055
epoch: 2 step: 195, loss is 1.8555021
epoch: 3 step: 195, loss is 1.6707983
epoch: 4 step: 195, loss is 1.8162166
epoch: 5 step: 195, loss is 1.393667
...
```

- Training ResNet50 with ImageNet2012 dataset

```bash
# distributed training result (8 pcs)
epoch: 1 step: 5004, loss is 4.8995576
epoch: 2 step: 5004, loss is 3.9235563
epoch: 3 step: 5004, loss is 3.833077
epoch: 4 step: 5004, loss is 3.2795618
epoch: 5 step: 5004, loss is 3.1978393
...
```

- Training ResNet101 with ImageNet2012 dataset

```bash
# distributed training result (8 pcs)
epoch: 1 step: 5004, loss is 4.805483
epoch: 2 step: 5004, loss is 3.2121816
epoch: 3 step: 5004, loss is 3.429647
epoch: 4 step: 5004, loss is 3.3667371
epoch: 5 step: 5004, loss is 3.1718972
...
```

- Training SE-ResNet50 with ImageNet2012 dataset

```bash
# distributed training result (8 pcs)
epoch: 1 step: 5004, loss is 5.1779146
epoch: 2 step: 5004, loss is 4.139395
epoch: 3 step: 5004, loss is 3.9240637
epoch: 4 step: 5004, loss is 3.5011306
epoch: 5 step: 5004, loss is 3.3501816
...
```
- GPU Benchmark of ResNet50 with ImageNet2012 dataset

```bash
# ========START RESNET50 GPU BENCHMARK========
epoch: [0/1] step: [20/5004], loss is 6.940182 Epoch time: 12416.098 ms, fps: 412 img/sec.
epoch: [0/1] step: [40/5004], loss is 7.078993 Epoch time: 3438.972 ms, fps: 1488 img/sec.
epoch: [0/1] step: [60/5004], loss is 7.559594 Epoch time: 3431.516 ms, fps: 1492 img/sec.
epoch: [0/1] step: [80/5004], loss is 6.920937 Epoch time: 3435.777 ms, fps: 1490 img/sec.
epoch: [0/1] step: [100/5004], loss is 6.814013 Epoch time: 3437.154 ms, fps: 1489 img/sec.
...
```
## [Evaluation Process](#contents)

### Usage

#### Running on Ascend

```bash
# evaluation
Usage: bash run_eval.sh [resnet18|resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

```bash
# evaluation example
bash run_eval.sh resnet50 cifar10 ~/cifar10-10-verify-bin ~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```

> The checkpoint can be produced during the training process.

#### Running on GPU

```bash
bash run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```
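Under the hood, eval.py restores the checkpoint and runs `Model.eval`. A minimal sketch, assuming `src/resnet.py` exposes a `resnet50` constructor and `src/dataset.py` a `create_dataset` helper (the keyword arguments shown are illustrative):

```python
import mindspore.nn as nn
from mindspore import context
from mindspore.train.model import Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net
from src.resnet import resnet50          # assumed constructor name
from src.dataset import create_dataset   # named in the Random Situation section

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
net = resnet50(class_num=10)
load_param_into_net(net, load_checkpoint("resnet-90_195.ckpt"))
net.set_train(False)

eval_dataset = create_dataset(dataset_path="~/cifar10-10-verify-bin", do_train=False, batch_size=32)
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
model = Model(net, loss_fn=loss, metrics={"acc"})
print("result:", model.eval(eval_dataset))
```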
### Result

Evaluation results are stored in the example path, in a folder named "eval". Under this folder you can find results like the following in the log.

- Evaluating ResNet18 with CIFAR-10 dataset

```bash
result: {'acc': 0.9402043269230769} ckpt=~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```

- Evaluating ResNet18 with ImageNet2012 dataset

```bash
result: {'acc': 0.7053685897435897} ckpt=train_parallel0/resnet-90_5004.ckpt
```

- Evaluating ResNet50 with CIFAR-10 dataset

```bash
result: {'acc': 0.91446314102564111} ckpt=~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```

- Evaluating ResNet50 with ImageNet2012 dataset

```bash
result: {'acc': 0.7671054737516005} ckpt=train_parallel0/resnet-90_5004.ckpt
```

- Evaluating ResNet101 with ImageNet2012 dataset

```bash
result: {'top_5_accuracy': 0.9429417413572343, 'top_1_accuracy': 0.7853513124199744} ckpt=train_parallel0/resnet-120_5004.ckpt
```

- Evaluating SE-ResNet50 with ImageNet2012 dataset

```bash
result: {'top_5_accuracy': 0.9342589628681178, 'top_1_accuracy': 0.768065781049936} ckpt=train_parallel0/resnet-24_5004.ckpt
```
# [Model Description](#contents)

## [Performance](#contents)

### Evaluation Performance

#### ResNet18 on CIFAR-10

| Parameters                 | Ascend 910                                     |
| -------------------------- | ---------------------------------------------- |
| Model Version              | ResNet18                                       |
| Resource                   | Ascend 910, CPU 2.60GHz 192 cores, memory 755G |
| Uploaded Date              | 02/25/2021 (month/day/year)                    |
| MindSpore Version          | 1.1.1-alpha                                    |
| Dataset                    | CIFAR-10                                       |
| Training Parameters        | epoch=90, steps per epoch=195, batch_size=32   |
| Optimizer                  | Momentum                                       |
| Loss Function              | Softmax Cross Entropy                          |
| Outputs                    | probability                                    |
| Loss                       | 0.0002519517                                   |
| Speed                      | 13 ms/step (8 pcs)                             |
| Total time                 | 4 mins                                         |
| Parameters (M)             | 11.2                                           |
| Checkpoint for Fine tuning | 86M (.ckpt file)                               |
| Scripts                    | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |
#### ResNet18 on ImageNet2012

| Parameters                 | Ascend 910                                     |
| -------------------------- | ---------------------------------------------- |
| Model Version              | ResNet18                                       |
| Resource                   | Ascend 910, CPU 2.60GHz 192 cores, memory 755G |
| Uploaded Date              | 02/25/2021 (month/day/year)                    |
| MindSpore Version          | 1.1.1-alpha                                    |
| Dataset                    | ImageNet2012                                   |
| Training Parameters        | epoch=90, steps per epoch=626, batch_size=256  |
| Optimizer                  | Momentum                                       |
| Loss Function              | Softmax Cross Entropy                          |
| Outputs                    | probability                                    |
| Loss                       | 2.15702                                        |
| Speed                      | 110 ms/step (8 pcs) (may need to set_numa_enable in dataset.py) |
| Total time                 | 110 mins                                       |
| Parameters (M)             | 11.7                                           |
| Checkpoint for Fine tuning | 90M (.ckpt file)                               |
| Scripts                    | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |
#### ResNet50 on CIFAR-10

| Parameters                 | Ascend 910                                     | GPU                                 |
| -------------------------- | ---------------------------------------------- | ----------------------------------- |
| Model Version              | ResNet50-v1.5                                  | ResNet50-v1.5                       |
| Resource                   | Ascend 910, CPU 2.60GHz 192 cores, memory 755G | GPU (Tesla V100 SXM2), CPU 2.1GHz 24 cores, memory 128G |
| Uploaded Date              | 04/01/2020 (month/day/year)                    | 08/01/2020 (month/day/year)         |
| MindSpore Version          | 0.1.0-alpha                                    | 0.6.0-alpha                         |
| Dataset                    | CIFAR-10                                       | CIFAR-10                            |
| Training Parameters        | epoch=90, steps per epoch=195, batch_size=32   | epoch=90, steps per epoch=195, batch_size=32 |
| Optimizer                  | Momentum                                       | Momentum                            |
| Loss Function              | Softmax Cross Entropy                          | Softmax Cross Entropy               |
| Outputs                    | probability                                    | probability                         |
| Loss                       | 0.000356                                       | 0.000716                            |
| Speed                      | 18.4 ms/step (8 pcs)                           | 69 ms/step (8 pcs)                  |
| Total time                 | 6 mins                                         | 20.2 mins                           |
| Parameters (M)             | 25.5                                           | 25.5                                |
| Checkpoint for Fine tuning | 179.7M (.ckpt file)                            | 179.7M (.ckpt file)                 |
| Scripts                    | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |
#### ResNet50 on ImageNet2012

| Parameters                 | Ascend 910                                     | GPU                                 |
| -------------------------- | ---------------------------------------------- | ----------------------------------- |
| Model Version              | ResNet50-v1.5                                  | ResNet50-v1.5                       |
| Resource                   | Ascend 910, CPU 2.60GHz 192 cores, memory 755G | GPU (Tesla V100 SXM2), CPU 2.1GHz 24 cores, memory 128G |
| Uploaded Date              | 04/01/2020 (month/day/year)                    | 08/01/2020 (month/day/year)         |
| MindSpore Version          | 0.1.0-alpha                                    | 0.6.0-alpha                         |
| Dataset                    | ImageNet2012                                   | ImageNet2012                        |
| Training Parameters        | epoch=90, steps per epoch=626, batch_size=256  | epoch=90, steps per epoch=626, batch_size=256 |
| Optimizer                  | Momentum                                       | Momentum                            |
| Loss Function              | Softmax Cross Entropy                          | Softmax Cross Entropy               |
| Outputs                    | probability                                    | probability                         |
| Loss                       | 1.8464266                                      | 1.9023                              |
| Speed                      | 118 ms/step (8 pcs)                            | 270 ms/step (8 pcs)                 |
| Total time                 | 114 mins                                       | 260 mins                            |
| Parameters (M)             | 25.5                                           | 25.5                                |
| Checkpoint for Fine tuning | 197M (.ckpt file)                              | 197M (.ckpt file)                   |
| Scripts                    | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |
#### ResNet101 on ImageNet2012

| Parameters                 | Ascend 910                                     | GPU                                 |
| -------------------------- | ---------------------------------------------- | ----------------------------------- |
| Model Version              | ResNet101                                      | ResNet101                           |
| Resource                   | Ascend 910, CPU 2.60GHz 192 cores, memory 755G | GPU (Tesla V100 SXM2), CPU 2.1GHz 24 cores, memory 128G |
| Uploaded Date              | 04/01/2020 (month/day/year)                    | 08/01/2020 (month/day/year)         |
| MindSpore Version          | 0.1.0-alpha                                    | 0.6.0-alpha                         |
| Dataset                    | ImageNet2012                                   | ImageNet2012                        |
| Training Parameters        | epoch=120, steps per epoch=5004, batch_size=32 | epoch=120, steps per epoch=5004, batch_size=32 |
| Optimizer                  | Momentum                                       | Momentum                            |
| Loss Function              | Softmax Cross Entropy                          | Softmax Cross Entropy               |
| Outputs                    | probability                                    | probability                         |
| Loss                       | 1.6453942                                      | 1.7023412                           |
| Speed                      | 30.3 ms/step (8 pcs)                           | 108.6 ms/step (8 pcs)               |
| Total time                 | 301 mins                                       | 1100 mins                           |
| Parameters (M)             | 44.6                                           | 44.6                                |
| Checkpoint for Fine tuning | 343M (.ckpt file)                              | 343M (.ckpt file)                   |
| Scripts                    | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |
#### SE-ResNet50 on ImageNet2012

| Parameters                 | Ascend 910                                     |
| -------------------------- | ---------------------------------------------- |
| Model Version              | SE-ResNet50                                    |
| Resource                   | Ascend 910, CPU 2.60GHz 192 cores, memory 755G |
| Uploaded Date              | 08/16/2020 (month/day/year)                    |
| MindSpore Version          | 0.7.0-alpha                                    |
| Dataset                    | ImageNet2012                                   |
| Training Parameters        | epoch=24, steps per epoch=5004, batch_size=32  |
| Optimizer                  | Momentum                                       |
| Loss Function              | Softmax Cross Entropy                          |
| Outputs                    | probability                                    |
| Loss                       | 1.754404                                       |
| Speed                      | 24.6 ms/step (8 pcs)                           |
| Total time                 | 49.3 mins                                      |
| Parameters (M)             | 25.5                                           |
| Checkpoint for Fine tuning | 215.9M (.ckpt file)                            |
| Scripts                    | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |
### Inference Performance

#### ResNet18 on CIFAR-10

| Parameters          | Ascend                      |
| ------------------- | --------------------------- |
| Model Version       | ResNet18                    |
| Resource            | Ascend 910                  |
| Uploaded Date       | 02/25/2021 (month/day/year) |
| MindSpore Version   | 1.1.1-alpha                 |
| Dataset             | CIFAR-10                    |
| batch_size          | 32                          |
| Outputs             | probability                 |
| Accuracy            | 94.02%                      |
| Model for inference | 43M (.air file)             |

#### ResNet18 on ImageNet2012

| Parameters          | Ascend                      |
| ------------------- | --------------------------- |
| Model Version       | ResNet18                    |
| Resource            | Ascend 910                  |
| Uploaded Date       | 02/25/2021 (month/day/year) |
| MindSpore Version   | 1.1.1-alpha                 |
| Dataset             | ImageNet2012                |
| batch_size          | 256                         |
| Outputs             | probability                 |
| Accuracy            | 70.53%                      |
| Model for inference | 45M (.air file)             |

#### ResNet50 on CIFAR-10

| Parameters          | Ascend                      | GPU                         |
| ------------------- | --------------------------- | --------------------------- |
| Model Version       | ResNet50-v1.5               | ResNet50-v1.5               |
| Resource            | Ascend 910                  | GPU                         |
| Uploaded Date       | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version   | 0.1.0-alpha                 | 0.6.0-alpha                 |
| Dataset             | CIFAR-10                    | CIFAR-10                    |
| batch_size          | 32                          | 32                          |
| Outputs             | probability                 | probability                 |
| Accuracy            | 91.44%                      | 91.37%                      |
| Model for inference | 91M (.air file)             |                             |

#### ResNet50 on ImageNet2012

| Parameters          | Ascend                      | GPU                         |
| ------------------- | --------------------------- | --------------------------- |
| Model Version       | ResNet50-v1.5               | ResNet50-v1.5               |
| Resource            | Ascend 910                  | GPU                         |
| Uploaded Date       | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version   | 0.1.0-alpha                 | 0.6.0-alpha                 |
| Dataset             | ImageNet2012                | ImageNet2012                |
| batch_size          | 256                         | 256                         |
| Outputs             | probability                 | probability                 |
| Accuracy            | 76.70%                      | 76.74%                      |
| Model for inference | 98M (.air file)             |                             |

#### ResNet101 on ImageNet2012

| Parameters          | Ascend                      | GPU                         |
| ------------------- | --------------------------- | --------------------------- |
| Model Version       | ResNet101                   | ResNet101                   |
| Resource            | Ascend 910                  | GPU                         |
| Uploaded Date       | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version   | 0.1.0-alpha                 | 0.6.0-alpha                 |
| Dataset             | ImageNet2012                | ImageNet2012                |
| batch_size          | 32                          | 32                          |
| Outputs             | probability                 | probability                 |
| Accuracy            | 78.53%                      | 78.64%                      |
| Model for inference | 171M (.air file)            |                             |

#### SE-ResNet50 on ImageNet2012

| Parameters          | Ascend                      |
| ------------------- | --------------------------- |
| Model Version       | SE-ResNet50                 |
| Resource            | Ascend 910                  |
| Uploaded Date       | 08/16/2020 (month/day/year) |
| MindSpore Version   | 0.7.0-alpha                 |
| Dataset             | ImageNet2012                |
| batch_size          | 32                          |
| Outputs             | probability                 |
| Accuracy            | 76.80%                      |
| Model for inference | 109M (.air file)            |
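The ".air" files in these tables are produced by export.py. A minimal sketch of the export step, assuming `src/resnet.py` exposes a `resnet50` constructor (the checkpoint name and input shape are illustrative):

```python
import numpy as np
from mindspore import Tensor, context
from mindspore.train.serialization import export, load_checkpoint, load_param_into_net
from src.resnet import resnet50  # assumed constructor name

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
net = resnet50(class_num=1001)
load_param_into_net(net, load_checkpoint("resnet-90_5004.ckpt"))
net.set_train(False)

# A dummy input fixes the graph's input shape; AIR targets Ascend inference,
# and "MINDIR" or "ONNX" can be passed instead.
dummy = Tensor(np.zeros([1, 3, 224, 224], dtype=np.float32))
export(net, dummy, file_name="resnet50", file_format="AIR")
```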
# [Description of Random Situation](#contents)

In dataset.py, the seed is set inside the `create_dataset` function. A random seed is also used in train.py.
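A minimal sketch of the seeding involved, using MindSpore's standard seed APIs (the actual seed values live in train.py and dataset.py):

```python
import mindspore.dataset as ds
from mindspore.common import set_seed

set_seed(1)            # fixes the global seeds used, e.g., for weight initialization
ds.config.set_seed(1)  # fixes the shuffling order of the dataset pipeline
```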
# [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).