# ResNet-50-THOR Example
## Description
This is an example of training ResNet-50 V1.5 on the ImageNet2012 dataset with the second-order optimizer THOR. THOR is a novel approximate second-order optimization method available in MindSpore. With fewer iterations, THOR can train ResNet-50 V1.5 to a top-1 accuracy of 75.9% in 72 minutes on 8 Ascend 910 devices, which is much faster than SGD with Momentum.
## Requirements
- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download the ImageNet2012 dataset.
> Unzip the ImageNet2012 dataset to any path you want; the folder structure should include the train and eval datasets as follows (a data-loading sketch is given after the listing):
> ```
> .
> ├── ilsvrc # train dataset
> └── ilsvrc_eval # infer dataset
> ```
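The `ilsvrc` training folder laid out above can be read with MindSpore's built-in `ImageFolderDataset` utilities, as in the sketch below. This is a minimal, illustrative pipeline, not the exact code in `src/dataset_imagenet.py`, and the transform module paths vary slightly across MindSpore versions.
```python
# Minimal ImageNet input pipeline sketch (illustrative; not the exact code in
# src/dataset_imagenet.py, and module paths differ across MindSpore versions).
import mindspore.common.dtype as mstype
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as C
import mindspore.dataset.transforms.c_transforms as C2

def create_imagenet_dataset(dataset_path, batch_size=32, do_train=True):
    data_set = ds.ImageFolderDataset(dataset_path, num_parallel_workers=8, shuffle=do_train)
    mean = [0.485 * 255, 0.456 * 255, 0.406 * 255]
    std = [0.229 * 255, 0.224 * 255, 0.225 * 255]
    if do_train:
        trans = [C.RandomCropDecodeResize(224), C.RandomHorizontalFlip(prob=0.5)]
    else:
        trans = [C.Decode(), C.Resize(256), C.CenterCrop(224)]
    trans += [C.Normalize(mean=mean, std=std), C.HWC2CHW()]
    data_set = data_set.map(operations=trans, input_columns="image", num_parallel_workers=8)
    data_set = data_set.map(operations=C2.TypeCast(mstype.int32), input_columns="label")
    return data_set.batch(batch_size, drop_remainder=True)
```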
## Example structure
```shell
.
├── resnet_thor
    ├── README.md
    ├── src
        ├── crossentropy.py          # CrossEntropy loss function
        ├── config.py                # parameter configuration
        ├── resnet50.py              # ResNet-50 backbone
        ├── dataset_helper.py        # dataset helper for MindData datasets
        ├── grad_reducer_thor.py     # gradient reducer for THOR
        ├── model_thor.py            # model
        ├── resnet_thor.py           # ResNet-50-THOR backbone
        ├── thor.py                  # THOR optimizer
        ├── thor_layer.py            # THOR layers
        └── dataset_imagenet.py      # data preprocessing
    ├── scripts
        ├── run_distribute_train.sh  # launch distributed training (8 devices)
        └── run_eval.sh              # launch inference
    ├── eval.py                      # inference script
    └── train.py                     # training script
```
## Parameter configuration
Parameters for both training and inference can be set in `config.py` (a sketch of this file follows the listing).
```
"class_num": 1000, # number of dataset classes
"batch_size": 32, # batch size of the input tensor
"loss_scale": 128, # loss scale
"momentum": 0.9, # momentum of the THOR optimizer
"weight_decay": 5e-4, # weight decay
"epoch_size": 45, # only valid for training; always 1 for inference
"buffer_size": 1000, # queue size in data preprocessing
"image_height": 224, # image height
"image_width": 224, # image width
"save_checkpoint": True, # whether to save checkpoints
"save_checkpoint_steps": 5004, # step interval between two checkpoints; by default a checkpoint is saved every epoch
"keep_checkpoint_max": 20, # keep only the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./", # path to save checkpoints, relative to the execution path
"label_smooth": True, # whether to use label smoothing
"label_smooth_factor": 0.1, # label smoothing factor
"frequency": 834, # step interval for updating the second-order information matrices
```
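For reference, these values typically live in `src/config.py` as a single dictionary-like object. The sketch below assumes the `easydict` package; the actual file may organize things differently.
```python
# Sketch of src/config.py: one EasyDict holding the hyper-parameters listed above.
# Assumes the easydict package is installed; the real file may differ in detail.
from easydict import EasyDict as ed

config = ed({
    "class_num": 1000,
    "batch_size": 32,
    "loss_scale": 128,
    "momentum": 0.9,
    "weight_decay": 5e-4,
    "epoch_size": 45,
    "buffer_size": 1000,
    "image_height": 224,
    "image_width": 224,
    "save_checkpoint": True,
    "save_checkpoint_steps": 5004,
    "keep_checkpoint_max": 20,
    "save_checkpoint_path": "./",
    "label_smooth": True,
    "label_smooth_factor": 0.1,
    "frequency": 834,
})
```
Scripts can then read each value as an attribute, for example `config.batch_size`.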
## Running the example
### Train
#### Usage
```
# distributed training
Usage: sh run_distribute_train.sh [MINDSPORE_HCCL_CONFIG_PATH] [DATASET_PATH] [DEVICE_NUM]
```
#### Launch
```bash
# distributed training example (8 devices)
sh run_distribute_train.sh rank_table_8p.json dataset/ilsvrc
```
> For details about rank_table.json, refer to the [distributed training tutorial](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training.html).
#### Result
The training result will be stored in the example path, in a folder whose name begins with "train_parallel". There you can find checkpoint files together with results like the following in the log (a sketch of the callbacks that produce this output follows the listing).
```
# distributed training result (8 devices)
epoch: 1 step: 5004, loss is 4.4182425
epoch: 2 step: 5004, loss is 3.740064
epoch: 3 step: 5004, loss is 4.0546017
epoch: 4 step: 5004, loss is 3.7598825
epoch: 5 step: 5004, loss is 3.3744206
......
```
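The per-epoch loss lines and the `resnet-*.ckpt` files in each `train_parallel*` folder come from MindSpore's standard training callbacks. The sketch below shows roughly how `train.py` can wire them up; the network, loss, and THOR optimizer construction are omitted, and this is an illustration rather than the exact code.
```python
# Sketch of the logging/checkpoint callbacks that produce the output above.
# The surrounding Model construction is omitted; this is illustrative only.
from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor

from src.config import config

callbacks = [LossMonitor()]  # prints "epoch: ... step: ..., loss is ..." lines
if config.save_checkpoint:
    ckpt_cfg = CheckpointConfig(save_checkpoint_steps=config.save_checkpoint_steps,
                                keep_checkpoint_max=config.keep_checkpoint_max)
    callbacks.append(ModelCheckpoint(prefix="resnet",
                                     directory=config.save_checkpoint_path,
                                     config=ckpt_cfg))

# model = Model(net, loss_fn=loss, optimizer=opt, metrics={'acc'})  # built elsewhere in train.py
# model.train(config.epoch_size, dataset, callbacks=callbacks)
```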
### Infer
#### Usage
```
# infer
Usage: sh run_eval.sh [DATASET_PATH] [CHECKPOINT_PATH]
```
#### Launch
```bash
# infer with checkpoint
sh run_eval.sh dataset/ilsvrc_eval train_parallel0/resnet-42_5004.ckpt
```
> The checkpoint is produced during the training process; a sketch of what eval.py does with it is shown below.
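A simplified sketch of `eval.py` is given below: it restores the checkpoint into the network and evaluates with MindSpore's built-in accuracy metric. The helpers imported from `src/` and their signatures are assumptions for illustration.
```python
# Simplified sketch of eval.py; helper names and signatures from src/ are assumptions.
from mindspore.train.model import Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net

from src.config import config
from src.crossentropy import CrossEntropy
from src.dataset_imagenet import create_dataset  # hypothetical helper signature
from src.resnet50 import resnet50

net = resnet50(class_num=config.class_num)
load_param_into_net(net, load_checkpoint("train_parallel0/resnet-42_5004.ckpt"))

dataset = create_dataset("dataset/ilsvrc_eval", do_train=False,
                         batch_size=config.batch_size)
loss = CrossEntropy(smooth_factor=config.label_smooth_factor,
                    num_classes=config.class_num)

model = Model(net, loss_fn=loss, metrics={'acc'})
print("result:", model.eval(dataset), "ckpt=train_parallel0/resnet-42_5004.ckpt")
```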
#### Result
The inference result will be stored in the example path, in a folder named "infer". There you can find results like the following in the log.
```
result: {'acc': 0.759503041} ckpt=train_parallel0/resnet-42_5004.ckpt
```