# Contents

- [ResNet Description](#resnet-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Features](#features)
    - [Mixed Precision](#mixed-precision)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Script Parameters](#script-parameters)
    - [Training Process](#training-process)
    - [Evaluation Process](#evaluation-process)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Evaluation Performance](#evaluation-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
# [ResNet Description](#contents)

## Description

ResNet (residual neural network) was proposed by Kaiming He and his colleagues at Microsoft Research. Using residual units, they successfully trained a 152-layer network and won first place in ILSVRC 2015, with a top-5 error rate of 3.57% and fewer parameters than VGGNet. Traditional convolutional or fully connected networks lose some information as depth grows and suffer from vanishing or exploding gradients, which makes very deep networks hard to train. ResNet alleviates this problem: by passing the input directly to the output through shortcut connections, the integrity of the information is preserved, and each block only needs to learn the residual between input and output, which simplifies the learning objective. This structure speeds up the training of deep networks considerably and also improves accuracy. Residual connections have become very popular and can even be used directly in other architectures such as Inception networks.

These are examples of training ResNet50/ResNet101/SE-ResNet50 with the CIFAR-10/ImageNet2012 dataset in MindSpore. ResNet50 and ResNet101 follow [paper 1](https://arxiv.org/pdf/1512.03385.pdf) below; SE-ResNet50 is a variant of ResNet50 that follows [paper 2](https://arxiv.org/abs/1709.01507) and [paper 3](https://arxiv.org/abs/1812.01187) below. Training SE-ResNet50 for just 24 epochs on 8 Ascend 910 devices reaches a top-1 accuracy of 75.9%. (Training ResNet101 or SE-ResNet50 with the CIFAR-10 dataset is not supported yet.)
## Paper

1. [paper](https://arxiv.org/pdf/1512.03385.pdf): Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. "Deep Residual Learning for Image Recognition"
2. [paper](https://arxiv.org/abs/1709.01507): Jie Hu, Li Shen, Samuel Albanie, Gang Sun, Enhua Wu. "Squeeze-and-Excitation Networks"
3. [paper](https://arxiv.org/abs/1812.01187): Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li. "Bag of Tricks for Image Classification with Convolutional Neural Networks"
# [Model Architecture](#contents)

The overall network architecture of ResNet is shown below:

[Link](https://arxiv.org/pdf/1512.03385.pdf)
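To make the residual idea concrete, here is a minimal MindSpore sketch of a residual block. This is an illustration only; the real blocks in src/resnet.py also handle striding and channel-expanding shortcut convolutions:

```python
import mindspore.nn as nn

class BasicResidualBlock(nn.Cell):
    """Minimal residual block sketch: out = relu(F(x) + x)."""
    def __init__(self, channels):
        super(BasicResidualBlock, self).__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, pad_mode="same")
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, pad_mode="same")
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def construct(self, x):
        identity = x                      # shortcut keeps the input intact
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity              # only the residual F(x) has to be learned
        return self.relu(out)
```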
# [Dataset](#contents)

Dataset used: [CIFAR-10](http://www.cs.toronto.edu/~kriz/cifar.html)

- Dataset size: 60,000 32×32 color images in 10 classes
    - Train: 50,000 images
    - Test: 10,000 images
- Data format: binary files
    - Note: Data will be processed in dataset.py
- Download the dataset; the directory structure is as follows:

```
├─cifar-10-batches-bin
└─cifar-10-verify-bin
```
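The input pipeline itself lives in src/dataset.py. As a hedged sketch of what loading and batching CIFAR-10 with MindSpore's dataset API can look like (the transform values are illustrative; the repo's training pipeline also applies random crop, flip and normalization):

```python
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as C
import mindspore.dataset.transforms.c_transforms as C2
import mindspore.common.dtype as mstype

def create_cifar10_dataset(data_path, batch_size=32, training=True):
    """Sketch of a CIFAR-10 pipeline; src/dataset.py is the reference."""
    dataset = ds.Cifar10Dataset(data_path, shuffle=training)
    transforms = [C.Resize((224, 224)),         # resize to the network input size
                  C.Rescale(1.0 / 255.0, 0.0),  # scale pixels to [0, 1]
                  C.HWC2CHW()]                  # channels-first layout for MindSpore
    dataset = dataset.map(operations=transforms, input_columns="image")
    dataset = dataset.map(operations=C2.TypeCast(mstype.int32), input_columns="label")
    return dataset.batch(batch_size, drop_remainder=True)
```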
Dataset used: [ImageNet2012](http://www.image-net.org/)

- Dataset size: 224×224 color images in 1,000 classes
    - Train: 1,281,167 images
    - Test: 50,000 images
- Data format: JPEG
    - Note: Data will be processed in dataset.py
- Download the dataset; the directory structure is as follows:

```
└─dataset
    ├─ilsvrc                # train dataset
    └─validation_preprocess # evaluate dataset
```
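For ImageNet2012 the pipeline reads the folder layout shown above. A hedged sketch using MindSpore's ImageFolderDataset (named ImageFolderDatasetV2 in some older releases; crop sizes are illustrative):

```python
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as C

def create_imagenet_dataset(data_path, batch_size=256, training=True):
    """Sketch of an ImageNet2012 pipeline; src/dataset.py is the reference."""
    dataset = ds.ImageFolderDataset(data_path, shuffle=training)
    if training:
        # decode and crop in one fused op, plus random flip for augmentation
        trans = [C.RandomCropDecodeResize(224), C.RandomHorizontalFlip()]
    else:
        # deterministic resize + center crop for evaluation
        trans = [C.Decode(), C.Resize(256), C.CenterCrop(224)]
    trans += [C.HWC2CHW()]  # channels-first layout for MindSpore
    dataset = dataset.map(operations=trans, input_columns="image")
    return dataset.batch(batch_size, drop_remainder=True)
```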
# [Features](#contents)

## Mixed Precision

The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both single-precision and half-precision data types, while maintaining the network accuracy achieved with single-precision training. Mixed precision training speeds up computation, reduces memory usage, and enables larger models or batch sizes on specific hardware.

For FP16 operators, if the input data type is FP32, the MindSpore backend automatically handles it with reduced precision. You can check the reduced-precision operators by enabling the INFO log level and then searching for "reduce precision".
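As a hedged sketch of how training scripts can enable mixed precision (the loss and optimizer choices below are illustrative; train.py wires in the values from config.py), MindSpore exposes it through the amp_level argument of Model:

```python
import mindspore.nn as nn
from mindspore.train.model import Model

# `net` is assumed to be a ResNet instance such as the one built in src/resnet.py.
def build_mixed_precision_model(net, learning_rate=0.1):
    loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
    opt = nn.Momentum(net.trainable_params(), learning_rate, momentum=0.9)
    # amp_level="O2" casts the network to FP16 while keeping BatchNorm in FP32;
    # a loss_scale_manager could also be passed, matching loss_scale in config.py.
    return Model(net, loss_fn=loss, optimizer=opt, amp_level="O2",
                 metrics={"acc"})
```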
# [Environment Requirements](#contents)

- Hardware (Ascend/GPU)
    - Prepare hardware environment with Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get access to the resources.
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)

After installing MindSpore via the official website, you can start training and evaluation as follows:

- Running on Ascend

```
# distributed training
Usage: sh run_distribute_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training
Usage: sh run_standalone_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# run evaluation example
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

- Running on GPU

```
# distributed training example
sh run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training example
sh run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# infer example
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```
# [Script Description](#contents)

## [Script and Sample Code](#contents)

```shell
.
└──resnet
  ├── README.md
  ├── scripts
    ├── run_distribute_train.sh            # launch ascend distributed training (8 pcs)
    ├── run_parameter_server_train.sh      # launch ascend parameter server training (8 pcs)
    ├── run_eval.sh                        # launch ascend evaluation
    ├── run_standalone_train.sh            # launch ascend standalone training (1 pcs)
    ├── run_distribute_train_gpu.sh        # launch gpu distributed training (8 pcs)
    ├── run_parameter_server_train_gpu.sh  # launch gpu parameter server training (8 pcs)
    ├── run_eval_gpu.sh                    # launch gpu evaluation
    └── run_standalone_train_gpu.sh        # launch gpu standalone training (1 pcs)
  ├── src
    ├── config.py                          # parameter configuration
    ├── dataset.py                         # data preprocessing
    ├── CrossEntropySmooth.py              # loss definition for the ImageNet2012 dataset
    ├── lr_generator.py                    # generate learning rate for each step
    └── resnet.py                          # resnet backbone, including resnet50, resnet101 and se-resnet50
  ├── eval.py                              # eval net
  └── train.py                             # train net
```
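CrossEntropySmooth.py defines the label-smoothed loss used for ImageNet2012. A hedged sketch of that idea (the concrete class in src/CrossEntropySmooth.py may differ in detail):

```python
import mindspore.nn as nn
from mindspore import Tensor
from mindspore.common import dtype as mstype
from mindspore.ops import functional as F
from mindspore.ops import operations as P

class CrossEntropySmooth(nn.Cell):
    """Cross entropy with label smoothing, sketched after src/CrossEntropySmooth.py."""
    def __init__(self, num_classes=1001, smooth_factor=0.1):
        super(CrossEntropySmooth, self).__init__()
        self.onehot = P.OneHot()
        # the true class gets probability 1 - smooth_factor ...
        self.on_value = Tensor(1.0 - smooth_factor, mstype.float32)
        # ... and smooth_factor is spread uniformly over the remaining classes
        self.off_value = Tensor(smooth_factor / (num_classes - 1), mstype.float32)
        self.ce = nn.SoftmaxCrossEntropyWithLogits(reduction="mean")

    def construct(self, logits, label):
        one_hot_label = self.onehot(label, F.shape(logits)[1],
                                    self.on_value, self.off_value)
        return self.ce(logits, one_hot_label)
```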
## [Script Parameters](#contents)

Parameters for both training and evaluation can be set in config.py; a sketch of how the learning-rate entries translate into a per-step schedule follows the listings below.

- Config for ResNet50, CIFAR-10 dataset

```
"class_num": 10,                  # dataset class num
"batch_size": 32,                 # batch size of input tensor
"loss_scale": 1024,               # loss scale
"momentum": 0.9,                  # momentum
"weight_decay": 1e-4,             # weight decay
"epoch_size": 90,                 # epoch size, only valid for training (always 1 for inference)
"pretrain_epoch_size": 0,         # number of epochs the model was trained before loading the pretrained checkpoint; the actual training epoch size equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True,          # whether to save checkpoints or not
"save_checkpoint_epochs": 5,      # the epoch interval between two checkpoints; by default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10,        # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./",     # path to save checkpoints
"warmup_epochs": 5,               # number of warmup epochs
"lr_decay_mode": "poly",          # decay mode, can be selected from steps, poly and default
"lr_init": 0.01,                  # initial learning rate
"lr_end": 0.00001,                # final learning rate
"lr_max": 0.1,                    # maximum learning rate
```

- Config for ResNet50, ImageNet2012 dataset

```
"class_num": 1001,                # dataset class number
"batch_size": 256,                # batch size of input tensor
"loss_scale": 1024,               # loss scale
"momentum": 0.9,                  # momentum optimizer
"weight_decay": 1e-4,             # weight decay
"epoch_size": 90,                 # epoch size, only valid for training (always 1 for inference)
"pretrain_epoch_size": 0,         # number of epochs the model was trained before loading the pretrained checkpoint; the actual training epoch size equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True,          # whether to save checkpoints or not
"save_checkpoint_epochs": 5,      # the epoch interval between two checkpoints; by default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10,        # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./",     # path to save checkpoints, relative to the execution path
"warmup_epochs": 0,               # number of warmup epochs
"lr_decay_mode": "Linear",        # decay mode for generating learning rate
"use_label_smooth": True,         # label smooth
"label_smooth_factor": 0.1,       # label smooth factor
"lr_init": 0,                     # initial learning rate
"lr_max": 0.8,                    # maximum learning rate
"lr_end": 0.0,                    # minimum learning rate
```

- Config for ResNet101, ImageNet2012 dataset

```
"class_num": 1001,                # dataset class number
"batch_size": 32,                 # batch size of input tensor
"loss_scale": 1024,               # loss scale
"momentum": 0.9,                  # momentum optimizer
"weight_decay": 1e-4,             # weight decay
"epoch_size": 120,                # epoch size for training
"pretrain_epoch_size": 0,         # number of epochs the model was trained before loading the pretrained checkpoint; the actual training epoch size equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True,          # whether to save checkpoints or not
"save_checkpoint_epochs": 5,      # the epoch interval between two checkpoints; by default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10,        # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./",     # path to save checkpoints, relative to the execution path
"warmup_epochs": 0,               # number of warmup epochs
"lr_decay_mode": "cosine",        # decay mode for generating learning rate
"use_label_smooth": True,         # label smooth
"label_smooth_factor": 0.1,       # label smooth factor
"lr": 0.1,                        # base learning rate
```

- Config for SE-ResNet50, ImageNet2012 dataset

```
"class_num": 1001,                # dataset class number
"batch_size": 32,                 # batch size of input tensor
"loss_scale": 1024,               # loss scale
"momentum": 0.9,                  # momentum optimizer
"weight_decay": 1e-4,             # weight decay
"epoch_size": 28,                 # epoch size for creating learning rate
"train_epoch_size": 24,           # actual train epoch size
"pretrain_epoch_size": 0,         # number of epochs the model was trained before loading the pretrained checkpoint; the actual training epoch size equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True,          # whether to save checkpoints or not
"save_checkpoint_epochs": 4,      # the epoch interval between two checkpoints; by default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10,        # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./",     # path to save checkpoints, relative to the execution path
"warmup_epochs": 3,               # number of warmup epochs
"lr_decay_mode": "cosine",        # decay mode for generating learning rate
"use_label_smooth": True,         # label smooth
"label_smooth_factor": 0.1,       # label smooth factor
"lr_init": 0.0,                   # initial learning rate
"lr_max": 0.3,                    # maximum learning rate
"lr_end": 0.0001,                 # end learning rate
```
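To make the learning-rate entries above concrete, here is a hedged sketch of a warmup-plus-cosine schedule in the spirit of src/lr_generator.py (the real generator also implements the steps/poly/linear modes listed above; the function name is illustrative):

```python
import math
import numpy as np

def warmup_cosine_lr(lr_init, lr_max, lr_end, warmup_epochs, total_epochs, steps_per_epoch):
    """Per-step learning rates: linear warmup, then cosine decay to lr_end."""
    total_steps = total_epochs * steps_per_epoch
    warmup_steps = warmup_epochs * steps_per_epoch
    lr_each_step = []
    for step in range(total_steps):
        if warmup_steps > 0 and step < warmup_steps:
            # linear warmup from lr_init up to lr_max
            lr = lr_init + (lr_max - lr_init) * (step + 1) / warmup_steps
        else:
            # cosine decay from lr_max down to lr_end
            decayed = (step - warmup_steps) / (total_steps - warmup_steps)
            lr = lr_end + (lr_max - lr_end) * 0.5 * (1.0 + math.cos(math.pi * decayed))
        lr_each_step.append(lr)
    return np.array(lr_each_step, dtype=np.float32)
```

For example, the SE-ResNet50 settings above would correspond to warmup_cosine_lr(0.0, 0.3, 0.0001, 3, 28, 5004).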
## [Training Process](#contents)

### Usage

#### Running on Ascend

```
# distributed training
Usage: sh run_distribute_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training
Usage: sh run_standalone_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# run evaluation example
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

For distributed training, an HCCL configuration file in JSON format needs to be created in advance.
Please follow the instructions in the link [hccn_tools](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools).

Training results will be stored in the example path, in a folder whose name begins with "train" or "train_parallel". There you can find checkpoint files together with results like the following in the log.
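The checkpoint files mentioned above are produced by MindSpore's checkpoint callbacks, driven by the save_checkpoint_* entries in config.py. A hedged sketch of that wiring (function and variable names here are illustrative):

```python
from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor

# Assuming `config` holds the values from src/config.py and `steps_per_epoch`
# comes from the dataset (e.g. 195 for CIFAR-10 on 8 pcs).
def build_callbacks(config, steps_per_epoch):
    ckpt_cfg = CheckpointConfig(
        save_checkpoint_steps=config.save_checkpoint_epochs * steps_per_epoch,
        keep_checkpoint_max=config.keep_checkpoint_max)
    ckpt_cb = ModelCheckpoint(prefix="resnet",
                              directory=config.save_checkpoint_path,
                              config=ckpt_cfg)
    # LossMonitor prints the per-step loss lines shown in the Result section below
    return [LossMonitor(), ckpt_cb]
```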
#### Running on GPU

```
# distributed training example
sh run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training example
sh run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# infer example
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

#### Running parameter server mode training

- Parameter server training Ascend example

```
sh run_parameter_server_train.sh [resnet50|resnet101] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```

- Parameter server training GPU example

```
sh run_parameter_server_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```
### Result

- Training ResNet50 with CIFAR-10 dataset

```
# distributed training result (8 pcs)
epoch: 1 step: 195, loss is 1.9601055
epoch: 2 step: 195, loss is 1.8555021
epoch: 3 step: 195, loss is 1.6707983
epoch: 4 step: 195, loss is 1.8162166
epoch: 5 step: 195, loss is 1.393667
...
```

- Training ResNet50 with ImageNet2012 dataset

```
# distributed training result (8 pcs)
epoch: 1 step: 5004, loss is 4.8995576
epoch: 2 step: 5004, loss is 3.9235563
epoch: 3 step: 5004, loss is 3.833077
epoch: 4 step: 5004, loss is 3.2795618
epoch: 5 step: 5004, loss is 3.1978393
...
```

- Training ResNet101 with ImageNet2012 dataset

```
# distributed training result (8 pcs)
epoch: 1 step: 5004, loss is 4.805483
epoch: 2 step: 5004, loss is 3.2121816
epoch: 3 step: 5004, loss is 3.429647
epoch: 4 step: 5004, loss is 3.3667371
epoch: 5 step: 5004, loss is 3.1718972
...
epoch: 67 step: 5004, loss is 2.2768745
epoch: 68 step: 5004, loss is 1.7223864
epoch: 69 step: 5004, loss is 2.0665488
epoch: 70 step: 5004, loss is 1.8717369
...
```

- Training SE-ResNet50 with ImageNet2012 dataset

```
# distributed training result (8 pcs)
epoch: 1 step: 5004, loss is 5.1779146
epoch: 2 step: 5004, loss is 4.139395
epoch: 3 step: 5004, loss is 3.9240637
epoch: 4 step: 5004, loss is 3.5011306
epoch: 5 step: 5004, loss is 3.3501816
...
```
## [Evaluation Process](#contents)

### Usage

#### Running on Ascend

```
# evaluation
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

```
# evaluation example
sh run_eval.sh resnet50 cifar10 ~/cifar-10-verify-bin ~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```

> Checkpoints are produced during the training process.

#### Running on GPU

```
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```
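Internally, eval.py restores the checkpoint and calls Model.eval. A hedged sketch of that flow (the loss and metric choices mirror the ImageNet case and are illustrative):

```python
import mindspore.nn as nn
from mindspore.train.model import Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net

# `net` is assumed to be the matching backbone from src/resnet.py, and
# `eval_dataset` an input pipeline like the sketches in the Dataset section.
def evaluate(net, eval_dataset, ckpt_path):
    param_dict = load_checkpoint(ckpt_path)   # read the .ckpt produced by training
    load_param_into_net(net, param_dict)      # copy the weights into the network
    net.set_train(False)
    loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
    model = Model(net, loss_fn=loss, metrics={"top_1_accuracy", "top_5_accuracy"})
    return model.eval(eval_dataset)           # a dict like the results shown below
```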
### Result

Evaluation results will be stored in the example path, in a folder named "eval". There you can find results like the following in the log.

- Evaluating ResNet50 with CIFAR-10 dataset

```
result: {'acc': 0.91446314102564111} ckpt=~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```

- Evaluating ResNet50 with ImageNet2012 dataset

```
result: {'acc': 0.7671054737516005} ckpt=train_parallel0/resnet-90_5004.ckpt
```

- Evaluating ResNet101 with ImageNet2012 dataset

```
result: {'top_5_accuracy': 0.9429417413572343, 'top_1_accuracy': 0.7853513124199744} ckpt=train_parallel0/resnet-120_5004.ckpt
```

- Evaluating SE-ResNet50 with ImageNet2012 dataset

```
result: {'top_5_accuracy': 0.9342589628681178, 'top_1_accuracy': 0.768065781049936} ckpt=train_parallel0/resnet-24_5004.ckpt
```
# [Model Description](#contents)

## [Performance](#contents)

### Evaluation Performance

#### ResNet50 on CIFAR-10

| Parameters                 | Ascend 910                                     | GPU                                                     |
| -------------------------- | ---------------------------------------------- | ------------------------------------------------------- |
| Model Version              | ResNet50-v1.5                                  | ResNet50-v1.5                                           |
| Resource                   | Ascend 910, CPU 2.60GHz 192 cores, memory 755G | GPU (Tesla V100 SXM2), CPU 2.1GHz 24 cores, memory 128G |
| Uploaded Date              | 04/01/2020 (month/day/year)                    | 08/01/2020 (month/day/year)                             |
| MindSpore Version          | 0.1.0-alpha                                    | 0.6.0-alpha                                             |
| Dataset                    | CIFAR-10                                       | CIFAR-10                                                |
| Training Parameters        | epoch=90, steps per epoch=195, batch_size=32   | epoch=90, steps per epoch=195, batch_size=32            |
| Optimizer                  | Momentum                                       | Momentum                                                |
| Loss Function              | Softmax Cross Entropy                          | Softmax Cross Entropy                                   |
| Outputs                    | probability                                    | probability                                             |
| Loss                       | 0.000356                                       | 0.000716                                                |
| Speed                      | 18.4 ms/step (8 pcs)                           | 69 ms/step (8 pcs)                                      |
| Total time                 | 6 mins                                         | 20.2 mins                                               |
| Parameters (M)             | 25.5                                           | 25.5                                                    |
| Checkpoint for Fine tuning | 179.7M (.ckpt file)                            | 179.7M (.ckpt file)                                     |
| Scripts                    | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |

#### ResNet50 on ImageNet2012

| Parameters                 | Ascend 910                                     | GPU                                                     |
| -------------------------- | ---------------------------------------------- | ------------------------------------------------------- |
| Model Version              | ResNet50-v1.5                                  | ResNet50-v1.5                                           |
| Resource                   | Ascend 910, CPU 2.60GHz 192 cores, memory 755G | GPU (Tesla V100 SXM2), CPU 2.1GHz 24 cores, memory 128G |
| Uploaded Date              | 04/01/2020 (month/day/year)                    | 08/01/2020 (month/day/year)                             |
| MindSpore Version          | 0.1.0-alpha                                    | 0.6.0-alpha                                             |
| Dataset                    | ImageNet2012                                   | ImageNet2012                                            |
| Training Parameters        | epoch=90, steps per epoch=626, batch_size=256  | epoch=90, steps per epoch=5004, batch_size=32           |
| Optimizer                  | Momentum                                       | Momentum                                                |
| Loss Function              | Softmax Cross Entropy                          | Softmax Cross Entropy                                   |
| Outputs                    | probability                                    | probability                                             |
| Loss                       | 1.8464266                                      | 1.9023                                                  |
| Speed                      | 118 ms/step (8 pcs)                            | 67.1 ms/step (8 pcs)                                    |
| Total time                 | 114 mins                                       | 500 mins                                                |
| Parameters (M)             | 25.5                                           | 25.5                                                    |
| Checkpoint for Fine tuning | 197M (.ckpt file)                              | 197M (.ckpt file)                                       |
| Scripts                    | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |

#### ResNet101 on ImageNet2012

| Parameters                 | Ascend 910                                     | GPU                                                     |
| -------------------------- | ---------------------------------------------- | ------------------------------------------------------- |
| Model Version              | ResNet101                                      | ResNet101                                               |
| Resource                   | Ascend 910, CPU 2.60GHz 192 cores, memory 755G | GPU (Tesla V100 SXM2), CPU 2.1GHz 24 cores, memory 128G |
| Uploaded Date              | 04/01/2020 (month/day/year)                    | 08/01/2020 (month/day/year)                             |
| MindSpore Version          | 0.1.0-alpha                                    | 0.6.0-alpha                                             |
| Dataset                    | ImageNet2012                                   | ImageNet2012                                            |
| Training Parameters        | epoch=120, steps per epoch=5004, batch_size=32 | epoch=120, steps per epoch=5004, batch_size=32          |
| Optimizer                  | Momentum                                       | Momentum                                                |
| Loss Function              | Softmax Cross Entropy                          | Softmax Cross Entropy                                   |
| Outputs                    | probability                                    | probability                                             |
| Loss                       | 1.6453942                                      | 1.7023412                                               |
| Speed                      | 30.3 ms/step (8 pcs)                           | 108.6 ms/step (8 pcs)                                   |
| Total time                 | 301 mins                                       | 1100 mins                                               |
| Parameters (M)             | 44.6                                           | 44.6                                                    |
| Checkpoint for Fine tuning | 343M (.ckpt file)                              | 343M (.ckpt file)                                       |
| Scripts                    | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |

#### SE-ResNet50 on ImageNet2012

| Parameters                 | Ascend 910                                     |
| -------------------------- | ---------------------------------------------- |
| Model Version              | SE-ResNet50                                    |
| Resource                   | Ascend 910, CPU 2.60GHz 192 cores, memory 755G |
| Uploaded Date              | 08/16/2020 (month/day/year)                    |
| MindSpore Version          | 0.7.0-alpha                                    |
| Dataset                    | ImageNet2012                                   |
| Training Parameters        | epoch=24, steps per epoch=5004, batch_size=32  |
| Optimizer                  | Momentum                                       |
| Loss Function              | Softmax Cross Entropy                          |
| Outputs                    | probability                                    |
| Loss                       | 1.754404                                       |
| Speed                      | 24.6 ms/step (8 pcs)                           |
| Total time                 | 49.3 mins                                      |
| Parameters (M)             | 25.5                                           |
| Checkpoint for Fine tuning | 215.9M (.ckpt file)                            |
| Scripts                    | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/resnet) |
# [Description of Random Situation](#contents)

In dataset.py, we set the seed inside the `create_dataset` function. We also use a random seed in train.py.
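As a hedged sketch of that seeding (the exact seed values and call sites are in the scripts themselves):

```python
import numpy as np
import mindspore.dataset as ds
from mindspore.common import set_seed

set_seed(1)            # fixes MindSpore's global seed (weight init, random ops)
ds.config.set_seed(1)  # fixes the dataset pipeline's shuffle/augmentation seed
np.random.seed(1)      # fixes NumPy randomness used elsewhere in the scripts
```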
# [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).