# ResNet Example

## Description

These are examples of training ResNet-50/ResNet-101/SE-ResNet-50 with the CIFAR-10 or ImageNet2012 dataset in MindSpore. (Training ResNet-101 with the CIFAR-10 dataset is currently unsupported.)

## Requirements

- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download the CIFAR-10 or ImageNet2012 dataset.

CIFAR-10

> Unzip the CIFAR-10 dataset to any path you want; the folder structure should include the train and eval datasets as follows:
>
> ```
> .
> └─dataset
>   ├─cifar-10-batches-bin  # train dataset
>   └─cifar-10-verify-bin   # evaluate dataset
> ```

ImageNet2012

> Unzip the ImageNet2012 dataset to any path you want; the folder should include the train and eval datasets as follows:
>
> ```
> .
> └─dataset
>   ├─ilsvrc                 # train dataset
>   └─validation_preprocess  # evaluate dataset
> ```
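
These folders are read through MindSpore's dataset API (src/dataset.py in this example). The following is a minimal sketch of such a CIFAR-10 input pipeline; the function name, augmentation values, and transform module paths are illustrative assumptions and vary across MindSpore releases:

```python
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as C
import mindspore.dataset.transforms.c_transforms as C2
from mindspore.common import dtype as mstype

def create_cifar10_dataset(data_path, batch_size=32, training=True):
    """Read cifar-10-batches-bin / cifar-10-verify-bin and build a pipeline."""
    data_set = ds.Cifar10Dataset(data_path, shuffle=training)

    trans = []
    if training:
        trans += [C.RandomCrop((32, 32), (4, 4, 4, 4)),
                  C.RandomHorizontalFlip()]
    trans += [C.Resize((224, 224)),  # ResNet input resolution assumed here
              C.Rescale(1.0 / 255.0, 0.0),
              C.Normalize([0.4914, 0.4822, 0.4465],   # illustrative mean/std
                          [0.2023, 0.1994, 0.2010]),
              C.HWC2CHW()]

    data_set = data_set.map(operations=trans, input_columns="image")
    data_set = data_set.map(operations=C2.TypeCast(mstype.int32),
                            input_columns="label")
    return data_set.batch(batch_size, drop_remainder=True)
```
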
## Structure

```shell
.
└──resnet
  ├── README.md
  ├── script
    ├── run_distribute_train.sh            # launch distributed training (8 pcs)
    ├── run_parameter_server_train.sh      # launch Ascend parameter server training (8 pcs)
    ├── run_eval.sh                        # launch evaluation
    ├── run_standalone_train.sh            # launch standalone training (1 pc)
    ├── run_distribute_train_gpu.sh        # launch GPU distributed training (8 pcs)
    ├── run_parameter_server_train_gpu.sh  # launch GPU parameter server training (8 pcs)
    ├── run_eval_gpu.sh                    # launch GPU evaluation
    └── run_standalone_train_gpu.sh        # launch GPU standalone training (1 pc)
  ├── src
    ├── config.py                          # parameter configuration
    ├── dataset.py                         # data preprocessing
    ├── crossentropy.py                    # loss definition for the ImageNet2012 dataset
    ├── lr_generator.py                    # generate learning rate for each step
    └── resnet.py                          # ResNet backbone, including ResNet-50 and ResNet-101
  ├── eval.py                              # evaluation script
  └── train.py                             # training script
```
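
crossentropy.py in the tree above defines the label-smoothing loss used with the ImageNet2012 dataset. The following sketch shows what such a loss typically looks like in MindSpore; the class name and reduction details are assumptions, not a copy of the repository file:

```python
import mindspore.nn as nn
import mindspore.ops.operations as P
from mindspore import Tensor
from mindspore.common import dtype as mstype
from mindspore.ops import functional as F

class CrossEntropySmooth(nn.Cell):
    """Softmax cross entropy with label smoothing (illustrative sketch)."""
    def __init__(self, smooth_factor=0.1, num_classes=1001):
        super(CrossEntropySmooth, self).__init__()
        self.onehot = P.OneHot()
        # Smoothed one-hot targets: 1 - e for the true class,
        # e / (num_classes - 1) for every other class.
        self.on_value = Tensor(1.0 - smooth_factor, mstype.float32)
        self.off_value = Tensor(smooth_factor / (num_classes - 1), mstype.float32)
        self.ce = nn.SoftmaxCrossEntropyWithLogits()
        self.mean = P.ReduceMean(False)

    def construct(self, logits, label):
        one_hot_label = self.onehot(label, F.shape(logits)[1],
                                    self.on_value, self.off_value)
        return self.mean(self.ce(logits, one_hot_label), 0)
```

With smooth_factor = 0.1 and num_classes = 1001 this corresponds to the label_smooth_factor and class_num values listed in the next section.
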
## Parameter configuration

Parameters for both training and evaluation can be set in config.py.

- config for ResNet-50, CIFAR-10 dataset

```
"class_num": 10, # dataset class number
"batch_size": 32, # batch size of the input tensor
"loss_scale": 1024, # loss scale
"momentum": 0.9, # momentum
"weight_decay": 1e-4, # weight decay
"epoch_size": 90, # only valid for training; always 1 for inference
"pretrain_epoch_size": 0, # number of epochs the model was trained before loading the pretrained checkpoint; the actual number of training epochs equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True, # whether to save checkpoints
"save_checkpoint_steps": 195, # the step interval between two checkpoints; by default, the last checkpoint is saved after the last step
"keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./", # path to save checkpoints
"warmup_epochs": 5, # number of warmup epochs
"lr_decay_mode": "poly", # decay mode; one of steps, poly, and default
"lr_init": 0.01, # initial learning rate
"lr_end": 0.00001, # final learning rate
"lr_max": 0.1, # maximum learning rate
```
- config for ResNet-50, ImageNet2012 dataset

```
"class_num": 1001, # dataset class number
"batch_size": 32, # batch size of the input tensor
"loss_scale": 1024, # loss scale
"momentum": 0.9, # momentum for the optimizer
"weight_decay": 1e-4, # weight decay
"epoch_size": 90, # only valid for training; always 1 for inference
"pretrain_epoch_size": 0, # number of epochs the model was trained before loading the pretrained checkpoint; the actual number of training epochs equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True, # whether to save checkpoints
"save_checkpoint_epochs": 1, # the epoch interval between two checkpoints; by default, the last checkpoint is saved after the last epoch
"keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./", # path to save checkpoints, relative to the execution path
"warmup_epochs": 0, # number of warmup epochs
"lr_decay_mode": "cosine", # decay mode for generating the learning rate
"label_smooth": True, # whether to apply label smoothing
"label_smooth_factor": 0.1, # label smoothing factor
"lr_init": 0, # initial learning rate
"lr_max": 0.1, # maximum learning rate
```
- config for ResNet-101, ImageNet2012 dataset

```
"class_num": 1001, # dataset class number
"batch_size": 32, # batch size of the input tensor
"loss_scale": 1024, # loss scale
"momentum": 0.9, # momentum for the optimizer
"weight_decay": 1e-4, # weight decay
"epoch_size": 120, # number of training epochs
"pretrain_epoch_size": 0, # number of epochs the model was trained before loading the pretrained checkpoint; the actual number of training epochs equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True, # whether to save checkpoints
"save_checkpoint_epochs": 1, # the epoch interval between two checkpoints; by default, the last checkpoint is saved after the last epoch
"keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./", # path to save checkpoints, relative to the execution path
"warmup_epochs": 0, # number of warmup epochs
"lr_decay_mode": "cosine", # decay mode for generating the learning rate
"label_smooth": 1, # whether to apply label smoothing
"label_smooth_factor": 0.1, # label smoothing factor
"lr": 0.1 # base learning rate
```
- config for SE-ResNet-50, ImageNet2012 dataset

```
"class_num": 1001, # dataset class number
"batch_size": 32, # batch size of the input tensor
"loss_scale": 1024, # loss scale
"momentum": 0.9, # momentum for the optimizer
"weight_decay": 1e-4, # weight decay
"epoch_size": 28, # epoch size used for creating the learning rate schedule
"train_epoch_size": 24, # actual number of training epochs
"pretrain_epoch_size": 0, # number of epochs the model was trained before loading the pretrained checkpoint; the actual number of training epochs equals epoch_size minus pretrain_epoch_size
"save_checkpoint": True, # whether to save checkpoints
"save_checkpoint_epochs": 4, # the epoch interval between two checkpoints; by default, the last checkpoint is saved after the last epoch
"keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./", # path to save checkpoints, relative to the execution path
"warmup_epochs": 3, # number of warmup epochs
"lr_decay_mode": "cosine", # decay mode for generating the learning rate
"label_smooth": True, # whether to apply label smoothing
"label_smooth_factor": 0.1, # label smoothing factor
"lr_init": 0.0, # initial learning rate
"lr_max": 0.3, # maximum learning rate
"lr_end": 0.0001, # final learning rate
```
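
The warmup and learning-rate entries above are expanded into a per-step schedule by src/lr_generator.py. The following is a minimal sketch of the cosine mode with linear warmup, written from the parameter names alone; the repository's exact formulas may differ:

```python
import math
import numpy as np

def warmup_cosine_lr(lr_init, lr_max, warmup_epochs, total_epochs, steps_per_epoch):
    """Per-step learning rates: linear warmup to lr_max, then cosine decay."""
    total_steps = total_epochs * steps_per_epoch
    warmup_steps = warmup_epochs * steps_per_epoch
    lr = []
    for step in range(total_steps):
        if step < warmup_steps:
            # Linear warmup from lr_init up to lr_max.
            cur = lr_init + (lr_max - lr_init) * (step + 1) / warmup_steps
        else:
            # Cosine decay from lr_max down to 0.
            progress = (step - warmup_steps) / (total_steps - warmup_steps)
            cur = lr_max * 0.5 * (1.0 + math.cos(math.pi * progress))
        lr.append(cur)
    return np.array(lr, dtype=np.float32)
```

For the CIFAR-10 config above (warmup_epochs = 5, 195 steps per epoch) this yields one learning-rate value per training step, which is the shape MindSpore optimizers accept for a dynamic schedule.
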
## Running the example

### Train

#### Usage

```
# distributed training
Usage: sh run_distribute_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training
Usage: sh run_standalone_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```

#### Launch

```
# distributed training example
sh run_distribute_train.sh resnet50 cifar10 rank_table.json ~/cifar-10-batches-bin

# standalone training example
sh run_standalone_train.sh resnet50 cifar10 ~/cifar-10-batches-bin
```

> For details about rank_table.json, refer to the [distributed training tutorial](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training.html).
#### Result

Training results are stored in the example path, in a folder whose name begins with "train" or "train_parallel". You can find checkpoint files there, together with results like the following in the log.

- training ResNet-50 with CIFAR-10 dataset

```
# distributed training result (8 pcs)
epoch: 1 step: 195, loss is 1.9601055
epoch: 2 step: 195, loss is 1.8555021
epoch: 3 step: 195, loss is 1.6707983
epoch: 4 step: 195, loss is 1.8162166
epoch: 5 step: 195, loss is 1.393667
...
```

- training ResNet-50 with ImageNet2012 dataset

```
# distributed training result (8 pcs)
epoch: 1 step: 5004, loss is 4.8995576
epoch: 2 step: 5004, loss is 3.9235563
epoch: 3 step: 5004, loss is 3.833077
epoch: 4 step: 5004, loss is 3.2795618
epoch: 5 step: 5004, loss is 3.1978393
...
```

- training ResNet-101 with ImageNet2012 dataset

```
# distributed training result (8 pcs)
epoch: 1 step: 5004, loss is 4.805483
epoch: 2 step: 5004, loss is 3.2121816
epoch: 3 step: 5004, loss is 3.429647
epoch: 4 step: 5004, loss is 3.3667371
epoch: 5 step: 5004, loss is 3.1718972
...
epoch: 67 step: 5004, loss is 2.2768745
epoch: 68 step: 5004, loss is 1.7223864
epoch: 69 step: 5004, loss is 2.0665488
epoch: 70 step: 5004, loss is 1.8717369
...
```

- training SE-ResNet-50 with ImageNet2012 dataset

```
# distributed training result (8 pcs)
epoch: 1 step: 5004, loss is 5.1779146
epoch: 2 step: 5004, loss is 4.139395
epoch: 3 step: 5004, loss is 3.9240637
epoch: 4 step: 5004, loss is 3.5011306
epoch: 5 step: 5004, loss is 3.3501816
...
```
### Evaluation

#### Usage

```
# evaluation
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```

#### Launch

```
# evaluation example
sh run_eval.sh resnet50 cifar10 ~/cifar-10-verify-bin ~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```

> The checkpoint file is produced during the training process.

#### Result

Evaluation results are stored in the example path, in a folder named "eval". You can find results like the following in the log.

- evaluating ResNet-50 with CIFAR-10 dataset

```
result: {'acc': 0.91446314102564111} ckpt=~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```

- evaluating ResNet-50 with ImageNet2012 dataset

```
result: {'acc': 0.7671054737516005} ckpt=train_parallel0/resnet-90_5004.ckpt
```

- evaluating ResNet-101 with ImageNet2012 dataset

```
result: {'top_5_accuracy': 0.9429417413572343, 'top_1_accuracy': 0.7853513124199744} ckpt=train_parallel0/resnet-120_5004.ckpt
```

- evaluating SE-ResNet-50 with ImageNet2012 dataset

```
result: {'top_5_accuracy': 0.9342589628681178, 'top_1_accuracy': 0.768065781049936} ckpt=train_parallel0/resnet-24_5004.ckpt
```
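
Internally, eval.py follows the usual MindSpore evaluation flow: restore the checkpoint into the network, wrap it in a Model with an accuracy metric, and run it over the eval dataset. A minimal sketch, where resnet50 and create_dataset stand in for the repository's helpers in src/resnet.py and src/dataset.py (their exact signatures are assumptions):

```python
import mindspore.nn as nn
from mindspore.train.model import Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net

from src.resnet import resnet50          # repo helper; signature assumed
from src.dataset import create_dataset   # repo helper; signature assumed

# Restore trained weights into the network and switch to inference mode.
net = resnet50(class_num=10)
load_param_into_net(net, load_checkpoint("resnet-90_195.ckpt"))
net.set_train(False)

# Evaluate top-1 accuracy over the verify split.
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
model = Model(net, loss_fn=loss, metrics={'acc'})
print("result:", model.eval(create_dataset("~/cifar-10-verify-bin", do_train=False)))
```
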
### Running on GPU

```
# distributed training example
sh run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# standalone training example
sh run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# inference example
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
```
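
The GPU scripts differ from the Ascend ones mainly in the device target they set before the network is built. In MindSpore this is a one-line context switch; a sketch:

```python
from mindspore import context

# Run in graph mode on a GPU backend; device_target also accepts
# "Ascend" or "CPU", depending on the installed MindSpore package.
context.set_context(mode=context.GRAPH_MODE, device_target="GPU")
```
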
### Running parameter server mode training

```
# parameter server training Ascend example
sh run_parameter_server_train.sh [resnet50|resnet101] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)

# parameter server training GPU example
sh run_parameter_server_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```

> The way to evaluate is the same as in the examples above.
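
Parameter server mode also requires the training script to enable the parameter server context and to mark the network's parameters for server-side storage. A minimal sketch, assuming a MindSpore release that provides set_ps_context and Cell.set_param_ps (the scheduler, server, and worker roles themselves are configured by the launch scripts through environment variables such as MS_ROLE, MS_SCHED_HOST, and MS_SCHED_PORT):

```python
from mindspore import context
from src.resnet import resnet50  # repo helper; signature assumed

# Must be enabled before the network is constructed.
context.set_ps_context(enable_ps=True)

net = resnet50(class_num=10)
net.set_param_ps()  # keep trainable parameters on the parameter servers
```
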