# ResNet-50 Example
## Description
This is an example of training ResNet-50 with the ImageNet2012 dataset in MindSpore.
## Requirements
- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download the ImageNet2012 dataset.
> Unzip the ImageNet2012 dataset to any path you want; the folder structure should contain the train and eval datasets as follows:
> ```
> .
> ├── ilsvrc # train dataset
> └── ilsvrc_eval # infer dataset
> ```
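The data preprocessing script (`dataset.py`, listed in the example structure below) consumes this folder layout. As a rough orientation, a loader built on `mindspore.dataset` could look like the sketch below; the module paths and augmentation parameters are assumptions that vary between MindSpore releases, not a copy of the shipped script.

```python
# Sketch only: module paths and augmentation parameters are assumptions,
# not the exact contents of dataset.py in this example.
import mindspore.common.dtype as mstype
import mindspore.dataset as ds
import mindspore.dataset.transforms.c_transforms as C2
import mindspore.dataset.vision.c_transforms as C


def create_dataset(dataset_path, batch_size=32, training=True):
    """Create an ImageNet2012 dataset from a folder such as dataset/ilsvrc or dataset/ilsvrc_eval."""
    data_set = ds.ImageFolderDataset(dataset_path, num_parallel_workers=8, shuffle=training)
    mean = [0.485 * 255, 0.456 * 255, 0.406 * 255]
    std = [0.229 * 255, 0.224 * 255, 0.225 * 255]
    if training:
        # Random crop/resize and horizontal flip for training.
        trans = [
            C.RandomCropDecodeResize(224, scale=(0.08, 1.0), ratio=(0.75, 1.333)),
            C.RandomHorizontalFlip(prob=0.5),
        ]
    else:
        # Deterministic resize and center crop for evaluation.
        trans = [C.Decode(), C.Resize(256), C.CenterCrop(224)]
    trans += [C.Normalize(mean=mean, std=std), C.HWC2CHW()]
    data_set = data_set.map(operations=trans, input_columns="image", num_parallel_workers=8)
    data_set = data_set.map(operations=C2.TypeCast(mstype.int32), input_columns="label")
    return data_set.batch(batch_size, drop_remainder=True)
```

Training would point such a loader at `dataset/ilsvrc`, and inference at `dataset/ilsvrc_eval`.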
## Example structure
```shell
.
├── crossentropy.py # CrossEntropy loss function (sketched below)
├── config.py # parameter configuration
├── dataset.py # data preprocessing
├── eval.py # infer script
├── lr_generator.py # generate learning rate for each step
├── run_distribute_train.sh # launch distributed training (8 pcs)
├── run_infer.sh # launch inference
├── run_standalone_train.sh # launch standalone training (1 pcs)
└── train.py # train script
```
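`crossentropy.py` provides the loss used when `label_smooth` is enabled in the configuration below. The following is a minimal sketch of a label-smoothed softmax cross-entropy cell built from standard MindSpore operators; the class and argument names are illustrative assumptions, not necessarily those of the actual file.

```python
# Sketch only: a label-smoothed softmax cross-entropy loss; names are assumptions,
# not necessarily the exact contents of crossentropy.py.
import mindspore.nn as nn
from mindspore import Tensor
from mindspore.common import dtype as mstype
from mindspore.ops import operations as P


class CrossEntropy(nn.Cell):
    """Softmax cross-entropy with label smoothing, averaged over the batch."""

    def __init__(self, smooth_factor=0.1, num_classes=1001):
        super(CrossEntropy, self).__init__()
        self.onehot = P.OneHot()
        self.shape = P.Shape()
        # Smoothed one-hot targets: the true class gets 1 - factor,
        # every other class shares the remaining probability mass.
        self.on_value = Tensor(1.0 - smooth_factor, mstype.float32)
        self.off_value = Tensor(smooth_factor / (num_classes - 1), mstype.float32)
        self.ce = nn.SoftmaxCrossEntropyWithLogits()
        self.mean = P.ReduceMean(False)

    def construct(self, logits, label):
        one_hot_label = self.onehot(label, self.shape(logits)[1], self.on_value, self.off_value)
        loss = self.ce(logits, one_hot_label)
        return self.mean(loss, 0)
```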
## Parameter configuration
Parameters for both training and inference can be set in config.py.
```
"class_num": 1001, # dataset class number
"batch_size": 32, # batch size of input tensor
"loss_scale": 1024, # loss scale
"momentum": 0.9, # momentum of the optimizer
"weight_decay": 1e-4, # weight decay
"epoch_size": 90, # only valid for training; fixed to 1 for inference
"pretrained_epoch_size": 1, # number of epochs the model was already trained for before loading the pretrained checkpoint
"buffer_size": 1000, # size of the queue used in data preprocessing
"image_height": 224, # image height
"image_width": 224, # image width
"save_checkpoint": True, # whether to save checkpoints
"save_checkpoint_epochs": 1, # the epoch interval between two checkpoints; by default, the last checkpoint is saved after the last epoch
"keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./", # path to save checkpoints, relative to the execution path
"warmup_epochs": 0, # number of warmup epochs
"lr_decay_mode": "cosine", # decay mode for generating the learning rate
"label_smooth": True, # whether to apply label smoothing
"label_smooth_factor": 0.1, # label smoothing factor
"lr_init": 0, # initial learning rate
"lr_max": 0.1, # maximum learning rate
```
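`lr_generator.py` turns the parameters above (`lr_init`, `lr_max`, `warmup_epochs`, `lr_decay_mode`) into a per-step learning-rate array. A plain-NumPy sketch of the `cosine` mode with linear warmup is shown below; the function name and signature are assumptions for illustration.

```python
# Sketch only: cosine learning-rate schedule with linear warmup; names are assumptions.
import numpy as np


def get_lr(lr_init, lr_max, warmup_epochs, total_epochs, steps_per_epoch):
    """Return one learning-rate value per training step."""
    total_steps = total_epochs * steps_per_epoch
    warmup_steps = warmup_epochs * steps_per_epoch
    lr_each_step = []
    for step in range(total_steps):
        if step < warmup_steps:
            # Linear warmup from lr_init up to lr_max.
            lr = lr_init + (lr_max - lr_init) * step / warmup_steps
        else:
            # Cosine decay from lr_max down to 0 over the remaining steps.
            progress = (step - warmup_steps) / (total_steps - warmup_steps)
            lr = lr_max * 0.5 * (1.0 + np.cos(np.pi * progress))
        lr_each_step.append(lr)
    return np.array(lr_each_step, dtype=np.float32)
```

With the configuration above, a call like `get_lr(0, 0.1, 0, 90, 5004)` would produce one value per step for 90 epochs of 5004 steps, matching the step count seen in the training log below.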
## Running the example
### Train
#### Usage
```
# distributed training
Usage: sh run_distribute_train.sh [MINDSPORE_HCCL_CONFIG_PATH] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
# standalone training
Usage: sh run_standalone_train.sh [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
```
#### Launch
```bash
# distributed training example (8 pcs)
sh run_distribute_train.sh rank_table_8p.json dataset/ilsvrc
# If you want to load a pretrained ckpt file
sh run_distribute_train.sh rank_table_8p.json dataset/ilsvrc ./pretrained.ckpt
# standalone training example (1 pcs)
sh run_standalone_train.sh dataset/ilsvrc
# If you want to load a pretrained ckpt file
sh run_standalone_train.sh dataset/ilsvrc ./pretrained.ckpt
```
> For details about rank_table.json, refer to the [distributed training tutorial](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training.html).
#### Result
Training results will be stored in the example path, in a folder whose name begins with "train" or "train_parallel". There you can find checkpoint files together with results like the following in the log.
```
# distribute training result (8 pcs)
epoch: 1 step: 5004, loss is 4.8995576
epoch: 2 step: 5004, loss is 3.9235563
epoch: 3 step: 5004, loss is 3.833077
epoch: 4 step: 5004, loss is 3.2795618
epoch: 5 step: 5004, loss is 3.1978393
```
### Infer
#### Usage
```
# infer
Usage: sh run_infer.sh [DATASET_PATH] [CHECKPOINT_PATH]
```
#### Launch
```bash
# infer with checkpoint
sh run_infer.sh dataset/ilsvrc_eval train_parallel0/resnet-90_5004.ckpt
```
> The checkpoint is produced during the training process.
#### Result
Inference results will be stored in the example path, in a folder named "infer". There you can find results like the following in the log.
```
result: {'acc': 0.7671054737516005} ckpt=train_parallel0/resnet-90_5004.ckpt
```
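The `acc` value above is the metric dictionary returned by `Model.eval` when `eval.py` evaluates a checkpoint on the eval dataset. The sketch below illustrates that flow under some assumptions: the `resnet50` constructor (e.g. from MindSpore's model zoo) and the `create_dataset` helper sketched earlier are stand-ins, not necessarily this example's exact code.

```python
# Sketch only: an evaluation flow that prints a result such as {'acc': 0.767...}.
# resnet50 and create_dataset are assumed helpers, not necessarily this example's exact code.
from mindspore.nn import SoftmaxCrossEntropyWithLogits
from mindspore.train.model import Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net
from mindspore.model_zoo.resnet import resnet50  # assumption: location of the network definition

from dataset import create_dataset  # hypothetical helper from dataset.py


def evaluate(dataset_path, checkpoint_path):
    # Build the network and restore the trained weights from the checkpoint.
    net = resnet50(class_num=1001)
    load_param_into_net(net, load_checkpoint(checkpoint_path))
    net.set_train(False)

    # Wrap the network in a Model with an accuracy metric and run evaluation.
    loss = SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
    model = Model(net, loss_fn=loss, metrics={"acc"})
    result = model.eval(create_dataset(dataset_path, batch_size=32, training=False))
    print("result:", result, "ckpt=" + checkpoint_path)
```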
### Running on GPU
```
# distributed training example
mpirun -n 8 python train.py --dataset_path=dataset/ilsvrc/train --device_target="GPU" --run_distribute=True
# standalone training example
python train.py --dataset_path=dataset/ilsvrc/train --device_target="GPU"
# standalone training example with pretrained checkpoint
python train.py --dataset_path=dataset/ilsvrc/train --device_target="GPU" --pre_trained=pretrained.ckpt
# infer example
python eval.py --dataset_path=dataset/ilsvrc/val --device_target="GPU" --checkpoint_path=resnet-90_5004.ckpt
```