# Contents

- [ResNet Description](#resnet-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Features](#features)
    - [Mixed Precision](#mixed-precision)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Script Parameters](#script-parameters)
    - [Training Process](#training-process)
    - [Evaluation Process](#evaluation-process)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Evaluation Performance](#evaluation-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)

# [ResNet Description](#contents)

## Description

ResNet (residual neural network) was proposed by Kaiming He and his colleagues at Microsoft Research. Using residual units, they successfully trained a 152-layer network and won the ILSVRC 2015 classification competition with a top-5 error rate of 3.57%, while using fewer parameters than VGGNet. Traditional convolutional or fully connected networks lose some information as depth grows and suffer from vanishing or exploding gradients, which makes very deep networks hard to train. ResNet alleviates these problems: the shortcut connection passes the input directly to the output, preserving the information, so each block only has to learn the residual between its input and output, which simplifies the learning objective. The residual structure speeds up training considerably and clearly improves accuracy, and it has been widely adopted in other network architectures.

These are examples of training ResNet50/ResNet101/SE-ResNet50 with the CIFAR-10 or ImageNet2012 dataset in MindSpore. ResNet50 and ResNet101 follow [paper 1](https://arxiv.org/pdf/1512.03385.pdf); SE-ResNet50 is a variant of ResNet50 based on [paper 2](https://arxiv.org/abs/1709.01507) and [paper 3](https://arxiv.org/abs/1812.01187). Training SE-ResNet50 for just 24 epochs on 8 Ascend 910 devices reaches a top-1 accuracy of 75.9%. (Training ResNet101 or SE-ResNet50 with the CIFAR-10 dataset is not supported yet.)

## Paper

1. [paper](https://arxiv.org/pdf/1512.03385.pdf): Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. "Deep Residual Learning for Image Recognition"
2. [paper](https://arxiv.org/abs/1709.01507): Jie Hu, Li Shen, Samuel Albanie, Gang Sun, Enhua Wu. "Squeeze-and-Excitation Networks"
3. [paper](https://arxiv.org/abs/1812.01187): Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li. "Bag of Tricks for Image Classification with Convolutional Neural Networks"

# [Model Architecture](#contents)

The overall network architecture of ResNet is shown here: [Link](https://arxiv.org/pdf/1512.03385.pdf)

# [Dataset](#contents)

Dataset used: [CIFAR-10](http://www.cs.toronto.edu/~kriz/cifar.html)

- Dataset size: 60,000 32*32 color images in 10 classes
    - Train: 50,000 images
    - Test: 10,000 images
- Data format: binary files
    - Note: Data will be processed in dataset.py
- Download the dataset; the directory structure is as follows:

```
├─cifar-10-batches-bin
└─cifar-10-verify-bin
```
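
One possible way to fetch the binary release of CIFAR-10 and arrange it into the layout above is sketched below; the separate `cifar-10-verify-bin` folder for evaluation is an assumption based on the tree shown here, so adjust the paths to your own setup.

```bash
# Download and extract the CIFAR-10 binary version (creates cifar-10-batches-bin/).
wget http://www.cs.toronto.edu/~kriz/cifar-10-binary.tar.gz
tar -xzf cifar-10-binary.tar.gz

# Put the evaluation batch into a separate folder used for evaluation (assumed layout).
mkdir -p cifar-10-verify-bin
cp cifar-10-batches-bin/test_batch.bin cifar-10-batches-bin/batches.meta.txt cifar-10-verify-bin/
```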

Dataset used: [ImageNet2012](http://www.image-net.org/)

- Dataset size: 224*224 color images in 1,000 classes
    - Train: 1,281,167 images
    - Test: 50,000 images
- Data format: JPEG
    - Note: Data will be processed in dataset.py
- Download the dataset; the directory structure is as follows:

```
└─dataset
  ├─ilsvrc                 # train dataset
  └─validation_preprocess  # evaluate dataset
```
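
A minimal sketch of creating that layout is shown below; the source paths are placeholders, and it assumes you have already extracted the ImageNet2012 training and validation images into per-class folders.

```bash
# Create the directory layout expected by the scripts (source paths are placeholders).
mkdir -p dataset/ilsvrc dataset/validation_preprocess
mv /path/to/imagenet/train/* dataset/ilsvrc/
mv /path/to/imagenet/val/*   dataset/validation_preprocess/
```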

# [Features](#contents)

## Mixed Precision

The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both single-precision and half-precision data types, while maintaining the network accuracy achieved with single-precision training. Mixed precision training speeds up computation, reduces memory usage, and enables a larger model or batch size to be trained on specific hardware.

For FP16 operators, if the input data type is FP32, the MindSpore backend will automatically run them with reduced precision. Users can check the reduced-precision operators by enabling the INFO log level and then searching for "reduce precision".
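
As a quick way to do that check, you can raise the MindSpore log level to INFO through the `GLOG_v` environment variable and grep the resulting log; the log path below is only an example of where a launch script may redirect its output.

```bash
# GLOG_v controls the MindSpore log level: 0=DEBUG, 1=INFO, 2=WARNING, 3=ERROR.
export GLOG_v=1

# Run training as usual, then look for operators executed with reduced precision
# (the log file location depends on how you launched the job).
grep -i "reduce precision" train/log
```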

# [Environment Requirements](#contents)

- Hardware (Ascend/GPU)
    - Prepare a hardware environment with an Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get access to the resources.
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
    - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)

# [Quick Start](#contents)

After installing MindSpore via the official website, you can start training and evaluation as follows:

- Running on Ascend

```
# distributed training
Usage: sh run_distribute_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training
Usage: sh run_standalone_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# run evaluation example
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```
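
For example, a distributed ResNet50 run on CIFAR-10 might be launched as follows; the rank table and dataset paths are placeholders for your own files, and the commands are assumed to be run from the `script` directory.

```bash
# 8-device distributed training of ResNet50 on CIFAR-10 (paths are placeholders)
sh run_distribute_train.sh resnet50 cifar10 ~/hccl_8p.json ~/cifar-10-batches-bin

# single-device training of the same model
sh run_standalone_train.sh resnet50 cifar10 ~/cifar-10-batches-bin
```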

- Running on GPU

```
# distributed training example
sh run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training example
sh run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# infer example
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```
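
A concrete GPU invocation could look like the following; again, the dataset and checkpoint paths are placeholders.

```bash
# 8-GPU distributed training of ResNet50 on ImageNet2012 (paths are placeholders)
sh run_distribute_train_gpu.sh resnet50 imagenet2012 ~/dataset/ilsvrc

# evaluation of a trained checkpoint
sh run_eval_gpu.sh resnet50 imagenet2012 ~/dataset/validation_preprocess ~/resnet-90_5004.ckpt
```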

# [Script Description](#contents)

## [Script and Sample Code](#contents)

```shell
.
└──resnet
  ├── README.md
  ├── script
    ├── run_distribute_train.sh            # launch ascend distributed training(8 pcs)
    ├── run_parameter_server_train.sh      # launch ascend parameter server training(8 pcs)
    ├── run_eval.sh                        # launch ascend evaluation
    ├── run_standalone_train.sh            # launch ascend standalone training(1 pcs)
    ├── run_distribute_train_gpu.sh        # launch gpu distributed training(8 pcs)
    ├── run_parameter_server_train_gpu.sh  # launch gpu parameter server training(8 pcs)
    ├── run_eval_gpu.sh                    # launch gpu evaluation
    └── run_standalone_train_gpu.sh        # launch gpu standalone training(1 pcs)
  ├── src
    ├── config.py                          # parameter configuration
    ├── dataset.py                         # data preprocessing
    ├── crossentropy.py                    # loss definition for ImageNet2012 dataset
    ├── lr_generator.py                    # generate learning rate for each step
    └── resnet.py                          # resnet backbone, including resnet50, resnet101 and se-resnet50
  ├── eval.py                              # eval net
  └── train.py                             # train net
```

## [Script Parameters](#contents)

Parameters for both training and evaluation can be set in config.py.

- Config for ResNet50, CIFAR-10 dataset

```
"class_num": 10, # dataset class num
"batch_size": 32, # batch size of input tensor
"loss_scale": 1024, # loss scale
"momentum": 0.9, # momentum
"weight_decay": 1e-4, # weight decay
"epoch_size": 90, # only valid for training; it is always 1 for inference
"pretrain_epoch_size": 0, # number of epochs the model was trained for before the pretrained checkpoint was saved; the actual number of training epochs equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True, # whether to save checkpoints or not
"save_checkpoint_steps": 195, # the step interval between two checkpoints; by default, the last checkpoint will be saved after the last step
"keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./", # path to save checkpoints
"warmup_epochs": 5, # number of warmup epochs
"lr_decay_mode": "poly", # decay mode, can be steps, poly or default
"lr_init": 0.01, # initial learning rate
"lr_end": 0.00001, # final learning rate
"lr_max": 0.1, # maximum learning rate
```

- Config for ResNet50, ImageNet2012 dataset

```
"class_num": 1001, # dataset class number
"batch_size": 32, # batch size of input tensor
"loss_scale": 1024, # loss scale
"momentum": 0.9, # momentum optimizer
"weight_decay": 1e-4, # weight decay
"epoch_size": 90, # only valid for training; it is always 1 for inference
"pretrain_epoch_size": 0, # number of epochs the model was trained for before the pretrained checkpoint was saved; the actual number of training epochs equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True, # whether to save checkpoints or not
"save_checkpoint_epochs": 1, # the epoch interval between two checkpoints; by default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./", # path to save checkpoints, relative to the execution path
"warmup_epochs": 0, # number of warmup epochs
"lr_decay_mode": "cosine", # decay mode for generating learning rate
"label_smooth": True, # label smooth
"label_smooth_factor": 0.1, # label smooth factor
"lr_init": 0, # initial learning rate
"lr_max": 0.1, # maximum learning rate
```

- Config for ResNet101, ImageNet2012 dataset

```
"class_num": 1001, # dataset class number
"batch_size": 32, # batch size of input tensor
"loss_scale": 1024, # loss scale
"momentum": 0.9, # momentum optimizer
"weight_decay": 1e-4, # weight decay
"epoch_size": 120, # epoch size for training
"pretrain_epoch_size": 0, # number of epochs the model was trained for before the pretrained checkpoint was saved; the actual number of training epochs equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True, # whether to save checkpoints or not
"save_checkpoint_epochs": 1, # the epoch interval between two checkpoints; by default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./", # path to save checkpoints, relative to the execution path
"warmup_epochs": 0, # number of warmup epochs
"lr_decay_mode": "cosine", # decay mode for generating learning rate
"label_smooth": 1, # label smooth
"label_smooth_factor": 0.1, # label smooth factor
"lr": 0.1, # base learning rate
```

- Config for SE-ResNet50, ImageNet2012 dataset

```
"class_num": 1001, # dataset class number
"batch_size": 32, # batch size of input tensor
"loss_scale": 1024, # loss scale
"momentum": 0.9, # momentum optimizer
"weight_decay": 1e-4, # weight decay
"epoch_size": 28, # epoch size for creating the learning rate schedule
"train_epoch_size": 24, # actual number of training epochs
"pretrain_epoch_size": 0, # number of epochs the model was trained for before the pretrained checkpoint was saved; the actual number of training epochs equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True, # whether to save checkpoints or not
"save_checkpoint_epochs": 4, # the epoch interval between two checkpoints; by default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./", # path to save checkpoints, relative to the execution path
"warmup_epochs": 3, # number of warmup epochs
"lr_decay_mode": "cosine", # decay mode for generating learning rate
"label_smooth": True, # label smooth
"label_smooth_factor": 0.1, # label smooth factor
"lr_init": 0.0, # initial learning rate
"lr_max": 0.3, # maximum learning rate
"lr_end": 0.0001, # final learning rate
```

## [Training Process](#contents)

### Usage

#### Running on Ascend

```
# distributed training
Usage: sh run_distribute_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training
Usage: sh run_standalone_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# run evaluation example
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

For distributed training, an HCCL configuration file in JSON format needs to be created in advance.
Please follow the instructions in the link [hccn_tools](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools).

Training results will be stored in the example path, in a folder whose name begins with "train" or "train_parallel". There you can find the checkpoint files together with results like the following in the log.
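
For instance, assuming each device writes its output to a `log` file inside its `train_parallel*` folder (the exact file name depends on how the launch script redirects output), the reported loss can be followed like this:

```bash
# Show the most recent loss lines reported by device 0 (log path is an assumption).
grep "loss is" train_parallel0/log | tail -n 5
```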

#### Running on GPU

```
# distributed training example
sh run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training example
sh run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# infer example
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

#### Running parameter server mode training

- Parameter server training Ascend example

```
sh run_parameter_server_train.sh [resnet50|resnet101] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```

- Parameter server training GPU example

```
sh run_parameter_server_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```
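
As a concrete example, a parameter-server run of ResNet50 on CIFAR-10 with 8 Ascend devices could be launched as follows; the rank table and dataset paths are placeholders.

```bash
# parameter server training of ResNet50 on CIFAR-10 (paths are placeholders)
sh run_parameter_server_train.sh resnet50 cifar10 ~/hccl_8p.json ~/cifar-10-batches-bin
```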

### Result

- Training ResNet50 with CIFAR-10 dataset

```
# distributed training result (8 pcs)
epoch: 1 step: 195, loss is 1.9601055
epoch: 2 step: 195, loss is 1.8555021
epoch: 3 step: 195, loss is 1.6707983
epoch: 4 step: 195, loss is 1.8162166
epoch: 5 step: 195, loss is 1.393667
...
```

- Training ResNet50 with ImageNet2012 dataset

```
# distributed training result (8 pcs)
epoch: 1 step: 5004, loss is 4.8995576
epoch: 2 step: 5004, loss is 3.9235563
epoch: 3 step: 5004, loss is 3.833077
epoch: 4 step: 5004, loss is 3.2795618
epoch: 5 step: 5004, loss is 3.1978393
...
```

- Training ResNet101 with ImageNet2012 dataset

```
# distributed training result (8 pcs)
epoch: 1 step: 5004, loss is 4.805483
epoch: 2 step: 5004, loss is 3.2121816
epoch: 3 step: 5004, loss is 3.429647
epoch: 4 step: 5004, loss is 3.3667371
epoch: 5 step: 5004, loss is 3.1718972
...
epoch: 67 step: 5004, loss is 2.2768745
epoch: 68 step: 5004, loss is 1.7223864
epoch: 69 step: 5004, loss is 2.0665488
epoch: 70 step: 5004, loss is 1.8717369
...
```

- Training SE-ResNet50 with ImageNet2012 dataset

```
# distributed training result (8 pcs)
epoch: 1 step: 5004, loss is 5.1779146
epoch: 2 step: 5004, loss is 4.139395
epoch: 3 step: 5004, loss is 3.9240637
epoch: 4 step: 5004, loss is 3.5011306
epoch: 5 step: 5004, loss is 3.3501816
...
```

## [Evaluation Process](#contents)

### Usage

#### Running on Ascend

```
# evaluation
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

```
# evaluation example
sh run_eval.sh resnet50 cifar10 ~/cifar-10-verify-bin ~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```

> The checkpoint can be produced during the training process.

#### Running on GPU

```
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```
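
A concrete GPU evaluation invocation might look like the following, mirroring the Ascend example above; the dataset and checkpoint paths are placeholders.

```bash
sh run_eval_gpu.sh resnet50 cifar10 ~/cifar-10-verify-bin ~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```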

### Result

Evaluation results will be stored in the example path, in a folder named "eval". There you can find results like the following in the log.

- Evaluating ResNet50 with CIFAR-10 dataset

```
result: {'acc': 0.91446314102564111} ckpt=~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```

- Evaluating ResNet50 with ImageNet2012 dataset

```
result: {'acc': 0.7671054737516005} ckpt=train_parallel0/resnet-90_5004.ckpt
```

- Evaluating ResNet101 with ImageNet2012 dataset

```
result: {'top_5_accuracy': 0.9429417413572343, 'top_1_accuracy': 0.7853513124199744} ckpt=train_parallel0/resnet-120_5004.ckpt
```

- Evaluating SE-ResNet50 with ImageNet2012 dataset

```
result: {'top_5_accuracy': 0.9342589628681178, 'top_1_accuracy': 0.768065781049936} ckpt=train_parallel0/resnet-24_5004.ckpt
```

# [Model Description](#contents)

## [Performance](#contents)

### Evaluation Performance

#### ResNet50 on CIFAR-10

| Parameters | Ascend 910 | GPU |
| -------------------------- | -------------------------------------- | ---------------------------------- |
| Model Version | ResNet50-v1.5 | ResNet50-v1.5 |
| Resource | Ascend 910, CPU 2.60GHz 56 cores, Memory 314G | GPU (Tesla V100 SXM2), CPU 2.1GHz 24 cores, Memory 128G |
| Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
| Dataset | CIFAR-10 | CIFAR-10 |
| Training Parameters | epoch=90, steps per epoch=195, batch_size=32 | epoch=90, steps per epoch=195, batch_size=32 |
| Optimizer | Momentum | Momentum |
| Loss Function | Softmax Cross Entropy | Softmax Cross Entropy |
| Outputs | probability | probability |
| Loss | 0.000356 | 0.000716 |
| Speed | 18.4 ms/step (8 pcs) | 69 ms/step (8 pcs) |
| Total time | 6 mins | 20.2 mins |
| Parameters (M) | 25.5 | 25.5 |
| Checkpoint for Fine tuning | 179.7M (.ckpt file) | 179.7M (.ckpt file) |
| Scripts | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |

#### ResNet50 on ImageNet2012

| Parameters | Ascend 910 | GPU |
| -------------------------- | -------------------------------------- | ---------------------------------- |
| Model Version | ResNet50-v1.5 | ResNet50-v1.5 |
| Resource | Ascend 910, CPU 2.60GHz 56 cores, Memory 314G | GPU (Tesla V100 SXM2), CPU 2.1GHz 24 cores, Memory 128G |
| Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
| Dataset | ImageNet2012 | ImageNet2012 |
| Training Parameters | epoch=90, steps per epoch=5004, batch_size=32 | epoch=90, steps per epoch=5004, batch_size=32 |
| Optimizer | Momentum | Momentum |
| Loss Function | Softmax Cross Entropy | Softmax Cross Entropy |
| Outputs | probability | probability |
| Loss | 1.8464266 | 1.9023 |
| Speed | 18.4 ms/step (8 pcs) | 67.1 ms/step (8 pcs) |
| Total time | 139 mins | 500 mins |
| Parameters (M) | 25.5 | 25.5 |
| Checkpoint for Fine tuning | 197M (.ckpt file) | 197M (.ckpt file) |
| Scripts | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |

#### ResNet101 on ImageNet2012

| Parameters | Ascend 910 | GPU |
| -------------------------- | -------------------------------------- | ---------------------------------- |
| Model Version | ResNet101 | ResNet101 |
| Resource | Ascend 910, CPU 2.60GHz 56 cores, Memory 314G | GPU (Tesla V100 SXM2), CPU 2.1GHz 24 cores, Memory 128G |
| Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
| MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
| Dataset | ImageNet2012 | ImageNet2012 |
| Training Parameters | epoch=120, steps per epoch=5004, batch_size=32 | epoch=120, steps per epoch=5004, batch_size=32 |
| Optimizer | Momentum | Momentum |
| Loss Function | Softmax Cross Entropy | Softmax Cross Entropy |
| Outputs | probability | probability |
| Loss | 1.6453942 | 1.7023412 |
| Speed | 30.3 ms/step (8 pcs) | 108.6 ms/step (8 pcs) |
| Total time | 301 mins | 1100 mins |
| Parameters (M) | 44.6 | 44.6 |
| Checkpoint for Fine tuning | 343M (.ckpt file) | 343M (.ckpt file) |
| Scripts | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |

#### SE-ResNet50 on ImageNet2012

| Parameters | Ascend 910 |
| -------------------------- | ------------------------------------------------------------------------ |
| Model Version | SE-ResNet50 |
| Resource | Ascend 910, CPU 2.60GHz 56 cores, Memory 314G |
| Uploaded Date | 08/16/2020 (month/day/year) |
| MindSpore Version | 0.7.0-alpha |
| Dataset | ImageNet2012 |
| Training Parameters | epoch=24, steps per epoch=5004, batch_size=32 |
| Optimizer | Momentum |
| Loss Function | Softmax Cross Entropy |
| Outputs | probability |
| Loss | 1.754404 |
| Speed | 24.6 ms/step (8 pcs) |
| Total time | 49.3 mins |
| Parameters (M) | 25.5 |
| Checkpoint for Fine tuning | 215.9M (.ckpt file) |
| Scripts | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |

# [Description of Random Situation](#contents)

In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py.

# [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).