# ResNet101 Example
## Description
This is an example of training ResNet101 on the ImageNet2012 dataset with MindSpore.
## Requirements
- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download the ImageNet2012 dataset.
> Unzip the ImageNet2012 dataset to any path you want. The folder should contain the train and eval datasets as follows:
```
.
└─dataset
    ├─ilsvrc
    └─validation_preprocess
```
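The input pipeline that reads these folders lives in src/dataset.py. As a rough orientation, the sketch below shows one way to read the unpacked folders with MindSpore's dataset API; MindSpore 1.x names (`ImageFolderDataset`, `c_transforms`) are assumed, and the helper name `create_dataset` and the transform choices are illustrative, not the repo's exact code.

```python
import mindspore.common.dtype as mstype
import mindspore.dataset as ds
import mindspore.dataset.transforms.c_transforms as C2
import mindspore.dataset.vision.c_transforms as C

def create_dataset(dataset_path, do_train=True, batch_size=32):
    """Illustrative ImageNet pipeline (assumes MindSpore 1.x APIs)."""
    data_set = ds.ImageFolderDataset(dataset_path, num_parallel_workers=8, shuffle=do_train)
    mean = [0.485 * 255, 0.456 * 255, 0.406 * 255]
    std = [0.229 * 255, 0.224 * 255, 0.225 * 255]
    if do_train:
        trans = [
            C.RandomCropDecodeResize(224),     # matches image_height/image_width in config.py
            C.RandomHorizontalFlip(prob=0.5),
            C.Normalize(mean=mean, std=std),
            C.HWC2CHW(),
        ]
    else:
        trans = [
            C.Decode(),
            C.Resize(256),
            C.CenterCrop(224),
            C.Normalize(mean=mean, std=std),
            C.HWC2CHW(),
        ]
    data_set = data_set.map(operations=trans, input_columns="image", num_parallel_workers=8)
    data_set = data_set.map(operations=C2.TypeCast(mstype.int32), input_columns="label")
    return data_set.batch(batch_size, drop_remainder=do_train)
```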
## Structure
```shell
.
└─resnet101
  ├─README.md
  ├─scripts
  │ ├─run_standalone_train.sh  # launch standalone training (1p)
  │ ├─run_distribute_train.sh  # launch distributed training (8p)
  │ └─run_eval.sh              # launch evaluating
  ├─src
  │ ├─config.py                # parameter configuration
  │ ├─crossentropy.py          # CrossEntropy loss function
  │ ├─dataset.py               # data preprocessing
  │ ├─lr_generator.py          # generate learning rate
  │ └─resnet101.py             # resnet101 backbone
  ├─eval.py                    # eval net
  └─train.py                   # train net
```
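src/crossentropy.py provides the loss; when label smoothing is enabled (see the label_smooth settings in the configuration below), the target mass is spread over the non-target classes before computing cross entropy. A minimal sketch of that idea, assuming MindSpore 1.x APIs; the class name and defaults here are illustrative, not the repo's exact code:

```python
import mindspore.common.dtype as mstype
import mindspore.nn as nn
import mindspore.ops as ops
from mindspore import Tensor

class CrossEntropyWithLabelSmoothing(nn.Cell):
    """Cross entropy on smoothed one-hot labels (illustrative sketch)."""
    def __init__(self, smooth_factor=0.1, num_classes=1001):
        super().__init__()
        self.onehot = ops.OneHot()
        # put (1 - smooth_factor) on the target class and spread the rest evenly
        self.on_value = Tensor(1.0 - smooth_factor, mstype.float32)
        self.off_value = Tensor(smooth_factor / (num_classes - 1), mstype.float32)
        self.num_classes = num_classes
        self.ce = nn.SoftmaxCrossEntropyWithLogits(reduction="mean")

    def construct(self, logits, label):
        one_hot_label = self.onehot(label, self.num_classes, self.on_value, self.off_value)
        return self.ce(logits, one_hot_label)
```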
## Parameter configuration
Parameters for both training and evaluating can be set in config.py.
```
"class_num": 1001,                # dataset class number
"batch_size": 32,                 # batch size of input tensor
"loss_scale": 1024,               # loss scale
"momentum": 0.9,                  # momentum of the optimizer
"weight_decay": 1e-4,             # weight decay
"epoch_size": 120,                # number of training epochs
"pretrain_epoch_size": 0,         # epoch size of the pretrained checkpoint
"buffer_size": 1000,              # queue size for data preprocessing
"image_height": 224,              # image height
"image_width": 224,               # image width
"save_checkpoint": True,          # whether to save checkpoints
"save_checkpoint_epochs": 1,      # epoch interval between two checkpoints; by default, the last checkpoint is saved after the final epoch
"keep_checkpoint_max": 10,        # keep only the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./",     # checkpoint save path, relative to the execution path
"warmup_epochs": 0,               # number of warmup epochs
"lr_decay_mode": "cosine",        # decay mode for generating the learning rate
"label_smooth": 1,                # whether to apply label smoothing
"label_smooth_factor": 0.1,       # label smoothing factor
"lr": 0.1                         # base learning rate
```
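src/lr_generator.py turns these settings into a per-step learning-rate schedule. A minimal numpy sketch of cosine decay with linear warmup, corresponding to `lr_decay_mode: "cosine"`; the exact curve in the repo may differ, and the function name and defaults here are illustrative:

```python
import math
import numpy as np

def cosine_warmup_lr(lr_max=0.1, warmup_epochs=0, total_epochs=120, steps_per_epoch=5004):
    """Per-step LR: linear warmup, then cosine decay to 0 (illustrative sketch)."""
    total_steps = total_epochs * steps_per_epoch
    warmup_steps = warmup_epochs * steps_per_epoch
    lr_each_step = []
    for step in range(total_steps):
        if warmup_steps and step < warmup_steps:
            lr = lr_max * (step + 1) / warmup_steps                   # linear warmup
        else:
            progress = (step - warmup_steps) / (total_steps - warmup_steps)
            lr = lr_max * 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay
        lr_each_step.append(lr)
    return np.array(lr_each_step, dtype=np.float32)
```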
## Running the example
### Train
#### Usage
```
# distributed training
sh run_distribute_train.sh [MINDSPORE_HCCL_CONFIG_PATH] [DATASET_PATH] [PRETRAINED_PATH](optional)
# standalone training
sh run_standalone_train.sh [DATASET_PATH] [PRETRAINED_PATH](optional)
```
#### Launch
```bash
# distributed training example (8p)
sh run_distribute_train.sh rank_table_8p.json dataset/ilsvrc
# to load a pretrained ckpt file:
sh run_distribute_train.sh rank_table_8p.json dataset/ilsvrc ./ckpt/pretrained.ckpt
# standalone training example (1p)
sh run_standalone_train.sh dataset/ilsvrc
# to load a pretrained ckpt file:
sh run_standalone_train.sh dataset/ilsvrc ./ckpt/pretrained.ckpt
```
> For details about rank_table.json, refer to the [distributed training tutorial](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training.html).
#### Result
Training results are stored in the scripts path, in a folder whose name begins with "train" or "train_parallel". There you can find checkpoint files together with results like the following in the log.
```
# distribute training result (8p)
epoch: 1 step: 5004, loss is 4.805483
epoch: 2 step: 5004, loss is 3.2121816
epoch: 3 step: 5004, loss is 3.429647
epoch: 4 step: 5004, loss is 3.3667371
epoch: 5 step: 5004, loss is 3.1718972
...
epoch: 67 step: 5004, loss is 2.2768745
epoch: 68 step: 5004, loss is 1.7223864
epoch: 69 step: 5004, loss is 2.0665488
epoch: 70 step: 5004, loss is 1.8717369
...
```
### Infer
#### Usage
```
# infer
sh run_eval.sh [VALIDATION_DATASET_PATH] [CHECKPOINT_PATH]
```
#### Launch
```bash
# infer with checkpoint
sh run_eval.sh dataset/validation_preprocess/ train_parallel0/resnet-120_5004.ckpt
```
> The checkpoint is produced during the training process.
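Under the hood, eval.py builds the network, loads the checkpoint, and evaluates top-1/top-5 accuracy. A minimal sketch assuming MindSpore 1.x APIs; the `resnet101` constructor and `create_dataset` helper names are assumptions about the repo's src modules, not confirmed signatures:

```python
import mindspore.nn as nn
from mindspore.train.model import Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net

# assumed names for the repo's own modules
from src.resnet101 import resnet101      # assumed constructor name
from src.dataset import create_dataset   # assumed helper name

net = resnet101(class_num=1001)
param_dict = load_checkpoint("train_parallel0/resnet-120_5004.ckpt")
load_param_into_net(net, param_dict)
net.set_train(False)

loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
model = Model(net, loss_fn=loss, metrics={"top_1_accuracy", "top_5_accuracy"})

eval_dataset = create_dataset("dataset/validation_preprocess/", do_train=False)
print(model.eval(eval_dataset))  # e.g. {'top_1_accuracy': ..., 'top_5_accuracy': ...}
```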
#### Result
Inference results are stored in the scripts path, in a folder named "eval". There you can find results like the following in the log.
```
result: {'top_5_accuracy': 0.9429417413572343, 'top_1_accuracy': 0.7853513124199744} ckpt=train_parallel0/resnet-120_5004.ckpt
```