# Contents

- [ResNet Description](#resnet-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Features](#features)
    - [Mixed Precision](#mixed-precision)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Script Parameters](#script-parameters)
    - [Training Process](#training-process)
    - [Evaluation Process](#evaluation-process)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Evaluation Performance](#evaluation-performance)
        - [Inference Performance](#inference-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
# [ResNet Description](#contents)

## Description

ResNet (residual neural network) was proposed by Kaiming He and his colleagues at Microsoft Research. Using residual units, they successfully trained a 152-layer network and won first place in ILSVRC 2015 with a top-5 error rate of 3.57%, while using fewer parameters than VGGNet. Traditional convolutional or fully connected networks lose some information as data passes through the layers, and they also suffer from vanishing or exploding gradients, which makes very deep networks fail to train. ResNet alleviates these problems to a large extent: a shortcut connection passes the input directly to the output, preserving the information, so each block only needs to learn the residual between its input and output. This simplifies the learning objective and reduces its difficulty. Residual structures greatly accelerate the training of neural networks and noticeably improve model accuracy, and residual connections have been adopted in many later architectures such as Inception-ResNet.

These are examples of training ResNet50/ResNet101/SE-ResNet50 with the CIFAR-10 or ImageNet2012 dataset in MindSpore. ResNet50 and ResNet101 follow [paper 1](https://arxiv.org/pdf/1512.03385.pdf) below, and SE-ResNet50 is a variant of ResNet50 that follows [paper 2](https://arxiv.org/abs/1709.01507) and [paper 3](https://arxiv.org/abs/1812.01187) below. Training SE-ResNet50 for just 24 epochs on 8 Ascend 910 devices reaches a top-1 accuracy of 75.9%. (Training ResNet101 or SE-ResNet50 with the CIFAR-10 dataset is not supported yet.)
## Paper

1. [paper](https://arxiv.org/pdf/1512.03385.pdf): Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. "Deep Residual Learning for Image Recognition"

2. [paper](https://arxiv.org/abs/1709.01507): Jie Hu, Li Shen, Samuel Albanie, Gang Sun, Enhua Wu. "Squeeze-and-Excitation Networks"

3. [paper](https://arxiv.org/abs/1812.01187): Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li. "Bag of Tricks for Image Classification with Convolutional Neural Networks"
# [Model Architecture](#contents)

The overall network architecture of ResNet is shown below:
[Link](https://arxiv.org/pdf/1512.03385.pdf)
# [Dataset](#contents)

Dataset used: [CIFAR-10](http://www.cs.toronto.edu/~kriz/cifar.html)

- Dataset size: 60,000 32×32 color images in 10 classes
    - Train: 50,000 images
    - Test: 10,000 images
- Data format: binary files
    - Note: Data will be processed in dataset.py
- Download the dataset; the directory structure is as follows:

```
├─cifar-10-batches-bin
└─cifar-10-verify-bin
```
Dataset used: [ImageNet2012](http://www.image-net.org/)

- Dataset size: 224×224 color images in 1,000 classes
    - Train: 1,281,167 images
    - Test: 50,000 images
- Data format: JPEG
    - Note: Data will be processed in dataset.py
- Download the dataset; the directory structure is as follows:

```
└─dataset
    ├─ilsvrc                # train dataset
    └─validation_preprocess # evaluate dataset
```
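For reference, src/dataset.py builds the input pipeline with the `mindspore.dataset` API. Below is a minimal sketch of a CIFAR-10 pipeline in that spirit; the function name, transform list, and normalization constants are illustrative, and the import paths follow recent MindSpore releases, so consult src/dataset.py for the exact code.

```python
# A minimal sketch of a CIFAR-10 input pipeline (illustrative only; see
# src/dataset.py for the pipeline actually used for training/evaluation).
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as C
import mindspore.dataset.transforms.c_transforms as C2
import mindspore.common.dtype as mstype

def create_cifar10_dataset(data_path, batch_size=32, training=True):
    data_set = ds.Cifar10Dataset(data_path, shuffle=training)
    trans = []
    if training:
        # simple augmentation: pad-and-crop plus horizontal flip
        trans += [C.RandomCrop((32, 32), (4, 4, 4, 4)), C.RandomHorizontalFlip()]
    trans += [
        C.Resize((224, 224)),                  # resize to the network input size
        C.Rescale(1.0 / 255.0, 0.0),           # scale pixel values to [0, 1]
        C.Normalize([0.4914, 0.4822, 0.4465],  # per-channel mean/std (illustrative)
                    [0.2023, 0.1994, 0.2010]),
        C.HWC2CHW(),                           # channels-first layout for the network
    ]
    data_set = data_set.map(operations=trans, input_columns="image")
    data_set = data_set.map(operations=C2.TypeCast(mstype.int32), input_columns="label")
    return data_set.batch(batch_size, drop_remainder=True)
```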
# [Features](#contents)

## Mixed Precision

The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both single-precision and half-precision data types, while maintaining the network accuracy achieved with single-precision training. Mixed precision training speeds up computation, reduces memory usage, and enables a larger model or batch size to be trained on specific hardware.

For FP16 operators, if the input data type is FP32, the MindSpore backend automatically handles it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and searching for "reduce precision".
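In this repository the mixed-precision setup is wired together inside train.py through the high-level `Model` API. The following is a minimal sketch under that assumption; the dataset helper name and hyper-parameter values are illustrative.

```python
# A minimal mixed-precision training sketch (illustrative; train.py contains
# the full context setup, argument parsing and callback configuration).
from mindspore import Model
from mindspore.nn import Momentum, SoftmaxCrossEntropyWithLogits
from mindspore.train.loss_scale_manager import FixedLossScaleManager
from src.resnet import resnet50
from src.dataset import create_dataset  # helper name assumed; see src/dataset.py

net = resnet50(class_num=10)
dataset = create_dataset("./cifar-10-batches-bin", do_train=True)  # signature assumed
loss = SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9,
               weight_decay=1e-4, loss_scale=1024)
# amp_level="O2" runs the network in FP16 while keeping batch norm in FP32;
# the fixed loss scale of 1024 matches the "loss_scale" entry in config.py.
loss_scale = FixedLossScaleManager(1024, drop_overflow_update=False)
model = Model(net, loss_fn=loss, optimizer=opt,
              loss_scale_manager=loss_scale, amp_level="O2")
model.train(90, dataset)
```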
# [Environment Requirements](#contents)

- Hardware (Ascend/GPU)
    - Prepare a hardware environment with an Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get access to the resources.
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)

After installing MindSpore via the official website, you can start training and evaluation as follows:

- Running on Ascend

```
# distributed training
Usage: sh run_distribute_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training
Usage: sh run_standalone_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# run evaluation example
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

- Running on GPU

```
# distributed training example
sh run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training example
sh run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# infer example
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```
# [Script Description](#contents)

## [Script and Sample Code](#contents)

```shell
.
└──resnet
  ├── README.md
  ├── scripts
    ├── run_distribute_train.sh            # launch ascend distributed training (8 pcs)
    ├── run_parameter_server_train.sh      # launch ascend parameter server training (8 pcs)
    ├── run_eval.sh                        # launch ascend evaluation
    ├── run_standalone_train.sh            # launch ascend standalone training (1 pc)
    ├── run_distribute_train_gpu.sh        # launch gpu distributed training (8 pcs)
    ├── run_parameter_server_train_gpu.sh  # launch gpu parameter server training (8 pcs)
    ├── run_eval_gpu.sh                    # launch gpu evaluation
    ├── run_standalone_train_gpu.sh        # launch gpu standalone training (1 pc)
    └── run_gpu_resnet_benchmark.sh        # GPU benchmark for resnet50 with imagenet2012 (1 pc)
  ├── src
    ├── config.py                          # parameter configuration
    ├── dataset.py                         # data preprocessing
    ├── CrossEntropySmooth.py              # loss definition for the ImageNet2012 dataset
    ├── lr_generator.py                    # generate the learning rate for each step
    ├── resnet.py                          # resnet backbone, including resnet50, resnet101 and se-resnet50
    └── resnet_gpu_benchmark.py            # resnet50 for the GPU benchmark
  ├── export.py                            # export model for inference
  ├── mindspore_hub_conf.py                # mindspore hub interface
  ├── eval.py                              # eval net
  ├── train.py                             # train net
  └── gpu_resnet_benchmark.py              # GPU benchmark for resnet50
```
## [Script Parameters](#contents)

Parameters for both training and evaluation can be set in config.py.

- Config for ResNet50, CIFAR-10 dataset

```
"class_num": 10,              # dataset class num
"batch_size": 32,             # batch size of input tensor
"loss_scale": 1024,           # loss scale
"momentum": 0.9,              # momentum
"weight_decay": 1e-4,         # weight decay
"epoch_size": 90,             # only valid for training; always 1 for inference
"pretrain_epoch_size": 0,     # number of epochs the model was trained for before the pretrained checkpoint was saved; the actual number of training epochs equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True,      # whether to save checkpoints
"save_checkpoint_epochs": 5,  # epoch interval between two checkpoints; by default the last checkpoint is saved after the last step
"keep_checkpoint_max": 10,    # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./", # path to save checkpoints
"warmup_epochs": 5,           # number of warmup epochs
"lr_decay_mode": "poly",      # decay mode; one of steps, poly and default
"lr_init": 0.01,              # initial learning rate
"lr_end": 0.00001,            # final learning rate
"lr_max": 0.1,                # maximum learning rate
```
- Config for ResNet50, ImageNet2012 dataset

```
"class_num": 1001,            # dataset class number
"batch_size": 256,            # batch size of input tensor
"loss_scale": 1024,           # loss scale
"momentum": 0.9,              # momentum of the optimizer
"weight_decay": 1e-4,         # weight decay
"epoch_size": 90,             # only valid for training; always 1 for inference
"pretrain_epoch_size": 0,     # number of epochs the model was trained for before the pretrained checkpoint was saved; the actual number of training epochs equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True,      # whether to save checkpoints
"save_checkpoint_epochs": 5,  # epoch interval between two checkpoints; by default the last checkpoint is saved after the last epoch
"keep_checkpoint_max": 10,    # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./", # path to save checkpoints, relative to the execution path
"warmup_epochs": 0,           # number of warmup epochs
"lr_decay_mode": "Linear",    # decay mode for generating the learning rate
"use_label_smooth": True,     # whether to use label smoothing
"label_smooth_factor": 0.1,   # label smoothing factor
"lr_init": 0,                 # initial learning rate
"lr_max": 0.8,                # maximum learning rate
"lr_end": 0.0,                # minimum learning rate
```
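The `use_label_smooth` and `label_smooth_factor` entries above correspond to the loss defined in src/CrossEntropySmooth.py. The sketch below shows the idea of label-smoothed cross entropy; the class layout follows the spirit of that file rather than its exact code.

```python
# A minimal sketch of label-smoothed cross entropy in the spirit of
# src/CrossEntropySmooth.py (illustrative; see the source file for details).
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor
from mindspore.common import dtype as mstype

class CrossEntropySmooth(nn.Cell):
    def __init__(self, smooth_factor=0.1, num_classes=1001):
        super(CrossEntropySmooth, self).__init__()
        self.onehot = ops.OneHot()
        self.num_classes = num_classes
        # the true class gets 1 - factor; the other classes share the remaining mass
        self.on_value = Tensor(1.0 - smooth_factor, mstype.float32)
        self.off_value = Tensor(smooth_factor / (num_classes - 1), mstype.float32)
        self.ce = nn.SoftmaxCrossEntropyWithLogits(reduction="mean")

    def construct(self, logits, label):
        soft_label = self.onehot(label, self.num_classes, self.on_value, self.off_value)
        return self.ce(logits, soft_label)
```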
- Config for ResNet101, ImageNet2012 dataset

```
"class_num": 1001,            # dataset class number
"batch_size": 32,             # batch size of input tensor
"loss_scale": 1024,           # loss scale
"momentum": 0.9,              # momentum of the optimizer
"weight_decay": 1e-4,         # weight decay
"epoch_size": 120,            # epoch size for training
"pretrain_epoch_size": 0,     # number of epochs the model was trained for before the pretrained checkpoint was saved; the actual number of training epochs equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True,      # whether to save checkpoints
"save_checkpoint_epochs": 5,  # epoch interval between two checkpoints; by default the last checkpoint is saved after the last epoch
"keep_checkpoint_max": 10,    # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./", # path to save checkpoints, relative to the execution path
"warmup_epochs": 0,           # number of warmup epochs
"lr_decay_mode": "cosine",    # decay mode for generating the learning rate
"use_label_smooth": True,     # whether to use label smoothing
"label_smooth_factor": 0.1,   # label smoothing factor
"lr": 0.1,                    # base learning rate
```
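The "cosine" decay mode above is generated by src/lr_generator.py, which precomputes one learning-rate value per training step. A minimal sketch of cosine decay with linear warmup follows; the real script also implements the other decay modes and some edge-case handling.

```python
# A minimal sketch of a cosine-decay schedule with linear warmup, in the
# spirit of src/lr_generator.py (illustrative; the real script does more).
import math
import numpy as np

def warmup_cosine_lr(lr_max, total_epochs, steps_per_epoch,
                     warmup_epochs=0, lr_end=0.0):
    total_steps = total_epochs * steps_per_epoch
    warmup_steps = warmup_epochs * steps_per_epoch
    lr = []
    for step in range(total_steps):
        if step < warmup_steps:
            # ramp up linearly from 0 to lr_max over the warmup steps
            lr.append(lr_max * (step + 1) / warmup_steps)
        else:
            # cosine-anneal from lr_max down to lr_end over the remaining steps
            decay_ratio = (step - warmup_steps) / max(1, total_steps - warmup_steps)
            cosine = 0.5 * (1.0 + math.cos(math.pi * decay_ratio))
            lr.append(lr_end + (lr_max - lr_end) * cosine)
    return np.array(lr, dtype=np.float32)

# e.g. a schedule like the ResNet101 config above: warmup_cosine_lr(0.1, 120, 5004)
```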
- Config for SE-ResNet50, ImageNet2012 dataset

```
"class_num": 1001,            # dataset class number
"batch_size": 32,             # batch size of input tensor
"loss_scale": 1024,           # loss scale
"momentum": 0.9,              # momentum of the optimizer
"weight_decay": 1e-4,         # weight decay
"epoch_size": 28,             # epoch size used to generate the learning-rate schedule
"train_epoch_size": 24,       # actual number of training epochs
"pretrain_epoch_size": 0,     # number of epochs the model was trained for before the pretrained checkpoint was saved; the actual number of training epochs equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True,      # whether to save checkpoints
"save_checkpoint_epochs": 4,  # epoch interval between two checkpoints; by default the last checkpoint is saved after the last epoch
"keep_checkpoint_max": 10,    # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./", # path to save checkpoints, relative to the execution path
"warmup_epochs": 3,           # number of warmup epochs
"lr_decay_mode": "cosine",    # decay mode for generating the learning rate
"use_label_smooth": True,     # whether to use label smoothing
"label_smooth_factor": 0.1,   # label smoothing factor
"lr_init": 0.0,               # initial learning rate
"lr_max": 0.3,                # maximum learning rate
"lr_end": 0.0001,             # final learning rate
```
## [Training Process](#contents)

### Usage

#### Running on Ascend

```
# distributed training
Usage: sh run_distribute_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training
Usage: sh run_standalone_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# run evaluation example
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

For distributed training, an HCCL configuration file in JSON format needs to be created in advance.
Please follow the instructions in the link [hccn_tools](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools).

Training results are stored in the example path, in a folder whose name begins with "train" or "train_parallel". There you can find checkpoint files together with results like the following in the log.
#### Running on GPU

```
# distributed training example
sh run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training example
sh run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# infer example
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]

# gpu benchmark example
sh run_gpu_resnet_benchmark.sh [IMAGENET_DATASET_PATH] [BATCH_SIZE](optional) [DTYPE](optional) [DEVICE_NUM](optional)
```
#### Running parameter server mode training

- Parameter server training Ascend example

```
sh run_parameter_server_train.sh [resnet50|resnet101] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```

- Parameter server training GPU example

```
sh run_parameter_server_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```
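Under the hood, parameter server mode reuses train.py: the launch scripts export MindSpore's PS environment variables (MS_ROLE, MS_SCHED_HOST, MS_SCHED_PORT, MS_SERVER_NUM, MS_WORKER_NUM) to start the scheduler, server, and worker processes, and the training script enables the PS context. A rough sketch of the Python side, under the assumption that the standard PS context APIs are used:

```python
# A rough sketch of enabling parameter-server training (illustrative;
# see train.py and the run_parameter_server_train*.sh scripts for details).
from mindspore import context
from src.resnet import resnet50

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
context.set_ps_context(enable_ps=True)   # turn on parameter-server training

net = resnet50(class_num=10)
net.set_param_ps()                       # host trainable parameters on the servers
# ... then build the loss/optimizer/Model and call model.train() as usual
```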
### Result

- Training ResNet50 with CIFAR-10 dataset

```
# distributed training result (8 pcs)
epoch: 1 step: 195, loss is 1.9601055
epoch: 2 step: 195, loss is 1.8555021
epoch: 3 step: 195, loss is 1.6707983
epoch: 4 step: 195, loss is 1.8162166
epoch: 5 step: 195, loss is 1.393667
...
```

- Training ResNet50 with ImageNet2012 dataset

```
# distributed training result (8 pcs)
epoch: 1 step: 5004, loss is 4.8995576
epoch: 2 step: 5004, loss is 3.9235563
epoch: 3 step: 5004, loss is 3.833077
epoch: 4 step: 5004, loss is 3.2795618
epoch: 5 step: 5004, loss is 3.1978393
...
```

- Training ResNet101 with ImageNet2012 dataset

```
# distributed training result (8 pcs)
epoch: 1 step: 5004, loss is 4.805483
epoch: 2 step: 5004, loss is 3.2121816
epoch: 3 step: 5004, loss is 3.429647
epoch: 4 step: 5004, loss is 3.3667371
epoch: 5 step: 5004, loss is 3.1718972
...
```

- Training SE-ResNet50 with ImageNet2012 dataset

```
# distributed training result (8 pcs)
epoch: 1 step: 5004, loss is 5.1779146
epoch: 2 step: 5004, loss is 4.139395
epoch: 3 step: 5004, loss is 3.9240637
epoch: 4 step: 5004, loss is 3.5011306
epoch: 5 step: 5004, loss is 3.3501816
...
```

- GPU benchmark of ResNet50 with ImageNet2012 dataset

```
# ========START RESNET50 GPU BENCHMARK========
Epoch time: 12416.098 ms, fps: 412 img/sec. epoch: 1 step: 20, loss is 6.940182
Epoch time: 3472.037 ms, fps: 1474 img/sec. epoch: 2 step: 20, loss is 7.078993
Epoch time: 3469.523 ms, fps: 1475 img/sec. epoch: 3 step: 20, loss is 7.559594
Epoch time: 3460.311 ms, fps: 1479 img/sec. epoch: 4 step: 20, loss is 6.920937
Epoch time: 3460.543 ms, fps: 1479 img/sec. epoch: 5 step: 20, loss is 6.814013
...
```
## [Evaluation Process](#contents)

### Usage

#### Running on Ascend

```
# evaluation
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

```
# evaluation example
sh run_eval.sh resnet50 cifar10 ~/cifar10-10-verify-bin ~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```

> The checkpoint can be produced during the training process.

#### Running on GPU

```
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```
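For reference, eval.py essentially restores a checkpoint into the network and runs `Model.eval` over the evaluation dataset. A minimal sketch under that assumption (the paths and the dataset helper signature are illustrative):

```python
# A minimal sketch of what eval.py does (illustrative; see eval.py for the
# exact network construction, dataset handling and argument parsing).
from mindspore import context, Model
from mindspore.nn import SoftmaxCrossEntropyWithLogits
from mindspore.train.serialization import load_checkpoint, load_param_into_net
from src.resnet import resnet50
from src.dataset import create_dataset  # helper name assumed; see src/dataset.py

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
net = resnet50(class_num=10)
param_dict = load_checkpoint("resnet-90_195.ckpt")   # path is illustrative
load_param_into_net(net, param_dict)
net.set_train(False)

loss = SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
model = Model(net, loss_fn=loss, metrics={"acc"})
dataset = create_dataset("~/cifar10-10-verify-bin", do_train=False)  # signature assumed
print("result:", model.eval(dataset))
```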
### Result

Evaluation results are stored in the example path, in a folder named "eval". There you can find results like the following in the log.

- Evaluating ResNet50 with CIFAR-10 dataset

```
result: {'acc': 0.91446314102564111} ckpt=~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```

- Evaluating ResNet50 with ImageNet2012 dataset

```
result: {'acc': 0.7671054737516005} ckpt=train_parallel0/resnet-90_5004.ckpt
```

- Evaluating ResNet101 with ImageNet2012 dataset

```
result: {'top_5_accuracy': 0.9429417413572343, 'top_1_accuracy': 0.7853513124199744} ckpt=train_parallel0/resnet-120_5004.ckpt
```

- Evaluating SE-ResNet50 with ImageNet2012 dataset

```
result: {'top_5_accuracy': 0.9342589628681178, 'top_1_accuracy': 0.768065781049936} ckpt=train_parallel0/resnet-24_5004.ckpt
```
# [Model Description](#contents)

## [Performance](#contents)

### Evaluation Performance

#### ResNet50 on CIFAR-10

| Parameters                 | Ascend 910                                     | GPU                                                     |
| -------------------------- | ---------------------------------------------- | ------------------------------------------------------- |
| Model Version              | ResNet50-v1.5                                  | ResNet50-v1.5                                           |
| Resource                   | Ascend 910, CPU 2.60GHz 192 cores, Memory 755G | GPU (Tesla V100 SXM2), CPU 2.1GHz 24 cores, Memory 128G |
| Uploaded Date              | 04/01/2020 (month/day/year)                    | 08/01/2020 (month/day/year)                             |
| MindSpore Version          | 0.1.0-alpha                                    | 0.6.0-alpha                                             |
| Dataset                    | CIFAR-10                                       | CIFAR-10                                                |
| Training Parameters        | epoch=90, steps per epoch=195, batch_size=32   | epoch=90, steps per epoch=195, batch_size=32            |
| Optimizer                  | Momentum                                       | Momentum                                                |
| Loss Function              | Softmax Cross Entropy                          | Softmax Cross Entropy                                   |
| Outputs                    | probability                                    | probability                                             |
| Loss                       | 0.000356                                       | 0.000716                                                |
| Speed                      | 18.4 ms/step (8 pcs)                           | 69 ms/step (8 pcs)                                      |
| Total time                 | 6 mins                                         | 20.2 mins                                               |
| Parameters (M)             | 25.5                                           | 25.5                                                    |
| Checkpoint for Fine tuning | 179.7M (.ckpt file)                            | 179.7M (.ckpt file)                                     |
| Scripts                    | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |
#### ResNet50 on ImageNet2012

| Parameters                 | Ascend 910                                     | GPU                                                     |
| -------------------------- | ---------------------------------------------- | ------------------------------------------------------- |
| Model Version              | ResNet50-v1.5                                  | ResNet50-v1.5                                           |
| Resource                   | Ascend 910, CPU 2.60GHz 192 cores, Memory 755G | GPU (Tesla V100 SXM2), CPU 2.1GHz 24 cores, Memory 128G |
| Uploaded Date              | 04/01/2020 (month/day/year)                    | 08/01/2020 (month/day/year)                             |
| MindSpore Version          | 0.1.0-alpha                                    | 0.6.0-alpha                                             |
| Dataset                    | ImageNet2012                                   | ImageNet2012                                            |
| Training Parameters        | epoch=90, steps per epoch=626, batch_size=256  | epoch=90, steps per epoch=626, batch_size=256           |
| Optimizer                  | Momentum                                       | Momentum                                                |
| Loss Function              | Softmax Cross Entropy                          | Softmax Cross Entropy                                   |
| Outputs                    | probability                                    | probability                                             |
| Loss                       | 1.8464266                                      | 1.9023                                                  |
| Speed                      | 118 ms/step (8 pcs)                            | 270 ms/step (8 pcs)                                     |
| Total time                 | 114 mins                                       | 260 mins                                                |
| Parameters (M)             | 25.5                                           | 25.5                                                    |
| Checkpoint for Fine tuning | 197M (.ckpt file)                              | 197M (.ckpt file)                                       |
| Scripts                    | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |
#### ResNet101 on ImageNet2012

| Parameters                 | Ascend 910                                     | GPU                                                     |
| -------------------------- | ---------------------------------------------- | ------------------------------------------------------- |
| Model Version              | ResNet101                                      | ResNet101                                               |
| Resource                   | Ascend 910, CPU 2.60GHz 192 cores, Memory 755G | GPU (Tesla V100 SXM2), CPU 2.1GHz 24 cores, Memory 128G |
| Uploaded Date              | 04/01/2020 (month/day/year)                    | 08/01/2020 (month/day/year)                             |
| MindSpore Version          | 0.1.0-alpha                                    | 0.6.0-alpha                                             |
| Dataset                    | ImageNet2012                                   | ImageNet2012                                            |
| Training Parameters        | epoch=120, steps per epoch=5004, batch_size=32 | epoch=120, steps per epoch=5004, batch_size=32          |
| Optimizer                  | Momentum                                       | Momentum                                                |
| Loss Function              | Softmax Cross Entropy                          | Softmax Cross Entropy                                   |
| Outputs                    | probability                                    | probability                                             |
| Loss                       | 1.6453942                                      | 1.7023412                                               |
| Speed                      | 30.3 ms/step (8 pcs)                           | 108.6 ms/step (8 pcs)                                   |
| Total time                 | 301 mins                                       | 1100 mins                                               |
| Parameters (M)             | 44.6                                           | 44.6                                                    |
| Checkpoint for Fine tuning | 343M (.ckpt file)                              | 343M (.ckpt file)                                       |
| Scripts                    | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |
#### SE-ResNet50 on ImageNet2012

| Parameters                 | Ascend 910                                     |
| -------------------------- | ---------------------------------------------- |
| Model Version              | SE-ResNet50                                    |
| Resource                   | Ascend 910, CPU 2.60GHz 192 cores, Memory 755G |
| Uploaded Date              | 08/16/2020 (month/day/year)                    |
| MindSpore Version          | 0.7.0-alpha                                    |
| Dataset                    | ImageNet2012                                   |
| Training Parameters        | epoch=24, steps per epoch=5004, batch_size=32  |
| Optimizer                  | Momentum                                       |
| Loss Function              | Softmax Cross Entropy                          |
| Outputs                    | probability                                    |
| Loss                       | 1.754404                                       |
| Speed                      | 24.6 ms/step (8 pcs)                           |
| Total time                 | 49.3 mins                                      |
| Parameters (M)             | 25.5                                           |
| Checkpoint for Fine tuning | 215.9M (.ckpt file)                            |
| Scripts                    | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |
### Inference Performance

#### ResNet50 on CIFAR-10

| Parameters          | Ascend                      | GPU                         |
| ------------------- | --------------------------- | --------------------------- |
| Model Version       | ResNet50-v1.5               | ResNet50-v1.5               |
| Resource            | Ascend 910                  | GPU                         |
| Uploaded Date       | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version   | 0.1.0-alpha                 | 0.6.0-alpha                 |
| Dataset             | CIFAR-10                    | CIFAR-10                    |
| batch_size          | 32                          | 32                          |
| Outputs             | probability                 | probability                 |
| Accuracy            | 91.44%                      | 91.37%                      |
| Model for inference | 91M (.air file)             |                             |

#### ResNet50 on ImageNet2012

| Parameters          | Ascend                      | GPU                         |
| ------------------- | --------------------------- | --------------------------- |
| Model Version       | ResNet50-v1.5               | ResNet50-v1.5               |
| Resource            | Ascend 910                  | GPU                         |
| Uploaded Date       | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version   | 0.1.0-alpha                 | 0.6.0-alpha                 |
| Dataset             | ImageNet2012                | ImageNet2012                |
| batch_size          | 256                         | 256                         |
| Outputs             | probability                 | probability                 |
| Accuracy            | 76.70%                      | 76.74%                      |
| Model for inference | 98M (.air file)             |                             |

#### ResNet101 on ImageNet2012

| Parameters          | Ascend                      | GPU                         |
| ------------------- | --------------------------- | --------------------------- |
| Model Version       | ResNet101                   | ResNet101                   |
| Resource            | Ascend 910                  | GPU                         |
| Uploaded Date       | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version   | 0.1.0-alpha                 | 0.6.0-alpha                 |
| Dataset             | ImageNet2012                | ImageNet2012                |
| batch_size          | 32                          | 32                          |
| Outputs             | probability                 | probability                 |
| Accuracy            | 78.53%                      | 78.64%                      |
| Model for inference | 171M (.air file)            |                             |

#### SE-ResNet50 on ImageNet2012

| Parameters          | Ascend                      |
| ------------------- | --------------------------- |
| Model Version       | SE-ResNet50                 |
| Resource            | Ascend 910                  |
| Uploaded Date       | 08/16/2020 (month/day/year) |
| MindSpore Version   | 0.7.0-alpha                 |
| Dataset             | ImageNet2012                |
| batch_size          | 32                          |
| Outputs             | probability                 |
| Accuracy            | 76.80%                      |
| Model for inference | 109M (.air file)            |
# [Description of Random Situation](#contents)

In dataset.py, we set the seed inside the `create_dataset` function. We also use a random seed in train.py.
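A minimal sketch of that seeding (the exact seed values live in the scripts):

```python
# Fix the global and dataset seeds so runs are reproducible (illustrative;
# see train.py and src/dataset.py for the seeds actually used).
import mindspore.dataset as ds
from mindspore.common import set_seed

set_seed(1)            # seeds MindSpore's global RNG (e.g. weight initialization)
ds.config.set_seed(1)  # seeds dataset shuffling and random transforms
```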
# [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).