# ResNet101 Example

## Description

This is an example of training ResNet101 on the ImageNet2012 dataset in MindSpore.

## Requirements

- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download the ImageNet2012 dataset.

> Unzip the ImageNet2012 dataset to any path you want; the folder should contain the training and evaluation datasets as follows:
```
.
└─dataset
    ├─ilsvrc                  # train dataset
    └─validation_preprocess   # evaluate dataset
```
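
The preprocessing in dataset.py builds a MindSpore input pipeline from this layout. Below is a minimal sketch of such a pipeline, assuming the `mindspore.dataset` `ImageFolderDataset` API and the C vision transforms; the transforms and parameters actually used in dataset.py may differ:

```python
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as C
import mindspore.dataset.transforms.c_transforms as C2
import mindspore.common.dtype as mstype

def create_dataset(dataset_path, batch_size=32, do_train=True):
    """Build an ImageNet-style pipeline from a class-per-folder directory."""
    data_set = ds.ImageFolderDataset(dataset_path, shuffle=do_train,
                                     num_parallel_workers=8)
    # ImageNet channel statistics, scaled to the 0-255 pixel range.
    mean = [0.485 * 255, 0.456 * 255, 0.406 * 255]
    std = [0.229 * 255, 0.224 * 255, 0.225 * 255]
    if do_train:
        trans = [C.RandomCropDecodeResize(224, scale=(0.08, 1.0), ratio=(0.75, 1.333)),
                 C.RandomHorizontalFlip(prob=0.5)]
    else:
        trans = [C.Decode(), C.Resize(256), C.CenterCrop(224)]
    trans += [C.Normalize(mean=mean, std=std), C.HWC2CHW()]
    data_set = data_set.map(operations=trans, input_columns="image")
    data_set = data_set.map(operations=C2.TypeCast(mstype.int32), input_columns="label")
    return data_set.batch(batch_size, drop_remainder=do_train)
```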
## Example structure

```shell
.
├── crossentropy.py            # CrossEntropy loss function
├── config.py                  # parameter configuration
├── dataset.py                 # data preprocessing
├── eval.py                    # evaluate net
├── lr_generator.py            # generate learning rate
├── run_distribute_train.sh    # launch distributed training (8p)
├── run_infer.sh               # launch evaluation
├── run_standalone_train.sh    # launch standalone training (1p)
└── train.py                   # train net
```
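
Of the files above, crossentropy.py implements the loss function, with label smoothing controlled by the `label_smooth` and `label_smooth_factor` parameters listed below. A minimal sketch of such a loss, assuming MindSpore's `nn.Cell`, `nn.SoftmaxCrossEntropyWithLogits`, and `ops` primitives; the actual implementation may differ in detail:

```python
import mindspore.nn as nn
import mindspore.ops as ops
import mindspore.common.dtype as mstype
from mindspore import Tensor

class CrossEntropy(nn.Cell):
    """Softmax cross-entropy with optional label smoothing."""
    def __init__(self, smooth_factor=0.0, num_classes=1001):
        super(CrossEntropy, self).__init__()
        self.onehot = ops.OneHot()
        self.shape = ops.Shape()
        # Smoothing moves `smooth_factor` probability mass from the true
        # class onto the remaining num_classes - 1 classes.
        self.on_value = Tensor(1.0 - smooth_factor, mstype.float32)
        self.off_value = Tensor(smooth_factor / (num_classes - 1), mstype.float32)
        self.ce = nn.SoftmaxCrossEntropyWithLogits()
        self.mean = ops.ReduceMean(False)

    def construct(self, logits, label):
        one_hot_label = self.onehot(label, self.shape(logits)[1],
                                    self.on_value, self.off_value)
        loss = self.ce(logits, one_hot_label)
        return self.mean(loss, 0)
```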
## Parameter configuration

Parameters for both training and evaluation can be set in config.py.
```
"class_num": 1001,              # number of dataset classes
"batch_size": 32,               # batch size of the input tensor
"loss_scale": 1024,             # loss scale
"momentum": 0.9,                # momentum of the optimizer
"weight_decay": 1e-4,           # weight decay
"epoch_size": 120,              # number of training epochs
"buffer_size": 1000,            # queue size used in data preprocessing
"image_height": 224,            # image height
"image_width": 224,             # image width
"save_checkpoint": True,        # whether to save checkpoints
"save_checkpoint_epochs": 1,    # epoch interval between two checkpoints; by default the last checkpoint is saved after the final epoch
"keep_checkpoint_max": 10,      # keep only the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./",   # path to save checkpoints, relative to the execution path
"warmup_epochs": 0,             # number of warmup epochs
"lr_decay_mode": "cosine",      # decay mode for generating the learning rate
"label_smooth": 1,              # whether to apply label smoothing
"label_smooth_factor": 0.1,     # label smoothing factor
"lr": 0.1                       # base learning rate
```
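
lr_generator.py turns `lr`, `warmup_epochs`, and `lr_decay_mode` into a per-step learning-rate schedule that is handed to the optimizer. Below is a minimal sketch of the cosine mode, assuming the schedule is precomputed as a NumPy array with one entry per training step; the actual generator also implements other decay modes:

```python
import numpy as np

def get_cosine_lr(lr_max, total_epochs, steps_per_epoch, warmup_epochs=0):
    """Per-step learning rate: linear warmup followed by cosine decay to zero."""
    total_steps = total_epochs * steps_per_epoch
    warmup_steps = warmup_epochs * steps_per_epoch
    lr_each_step = []
    for step in range(total_steps):
        if step < warmup_steps:
            # Linear warmup from 0 up to lr_max.
            lr = lr_max * (step + 1) / warmup_steps
        else:
            # Cosine decay from lr_max down to 0 over the remaining steps.
            progress = (step - warmup_steps) / (total_steps - warmup_steps)
            lr = lr_max * 0.5 * (1.0 + np.cos(np.pi * progress))
        lr_each_step.append(lr)
    return np.array(lr_each_step, dtype=np.float32)

# With the defaults above: get_cosine_lr(0.1, 120, 5004)
# (5004 steps per epoch matches the training logs below).
```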
## Running the example

### Train

#### Usage

```
# distributed training
sh run_distribute_train.sh [MINDSPORE_HCCL_CONFIG_PATH] [DATASET_PATH]

# standalone training
sh run_standalone_train.sh [DATASET_PATH]
```
#### Launch

```bash
# distributed training example (8p)
sh run_distribute_train.sh rank_table_8p.json dataset/ilsvrc

# standalone training example (1p)
sh run_standalone_train.sh dataset/ilsvrc
```

> For details about rank_table.json, see the [distributed training tutorial](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training.html).
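
Internally, train.py roughly wires the pieces above together: a Momentum optimizer using the configured loss scale, a fixed loss-scale manager, and checkpoint callbacks driven by `save_checkpoint_epochs` and `keep_checkpoint_max`. A hedged sketch of that wiring, where `net`, `loss`, `dataset`, and `lr_each_step` are placeholders for the ResNet101 network and the outputs of crossentropy.py, dataset.py, and lr_generator.py:

```python
from mindspore import Tensor, nn
from mindspore.train import Model
from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor
from mindspore.train.loss_scale_manager import FixedLossScaleManager

# `net`, `loss`, `dataset`, and `lr_each_step` are assumed to come from the
# example's own modules; they are placeholders here, not part of this sketch.
opt = nn.Momentum(net.trainable_params(), Tensor(lr_each_step), 0.9,
                  weight_decay=1e-4, loss_scale=1024)
loss_scale = FixedLossScaleManager(1024, drop_overflow_update=False)
model = Model(net, loss_fn=loss, optimizer=opt, loss_scale_manager=loss_scale)

steps_per_epoch = dataset.get_dataset_size()
ckpt_cfg = CheckpointConfig(save_checkpoint_steps=steps_per_epoch,  # save_checkpoint_epochs: 1
                            keep_checkpoint_max=10)
ckpt_cb = ModelCheckpoint(prefix="resnet", directory="./", config=ckpt_cfg)
model.train(120, dataset, callbacks=[ckpt_cb, LossMonitor()])
```

For the distributed (8p) case, the script would additionally initialize communication (e.g. `mindspore.communication.management.init()`) and enable a data-parallel context before building the model.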
#### Result

Training results are stored in the example path, in folders whose names begin with "train" or "train_parallel". You can find checkpoint files, together with results like the following, in the log.

```
# distributed training result (8p)
epoch: 1 step: 5004, loss is 4.805483
epoch: 2 step: 5004, loss is 3.2121816
epoch: 3 step: 5004, loss is 3.429647
epoch: 4 step: 5004, loss is 3.3667371
epoch: 5 step: 5004, loss is 3.1718972
...
epoch: 67 step: 5004, loss is 2.2768745
epoch: 68 step: 5004, loss is 1.7223864
epoch: 69 step: 5004, loss is 2.0665488
epoch: 70 step: 5004, loss is 1.8717369
...
```
### Infer

#### Usage

```
# infer
sh run_infer.sh [VALIDATION_DATASET_PATH] [CHECKPOINT_PATH]
```

#### Launch

```bash
# infer with checkpoint
sh run_infer.sh dataset/validation_preprocess/ train_parallel0/resnet-120_5004.ckpt
```

> The checkpoint file is produced during the training process.
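
eval.py restores the checkpoint into the network and runs the accuracy metrics over the validation set. A minimal sketch, assuming MindSpore's checkpoint and Model APIs, where `net` and `eval_dataset` are again placeholders for the example's own modules:

```python
import mindspore.nn as nn
from mindspore.train import Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net

# `net` and `eval_dataset` are placeholders for the ResNet101 network and the
# evaluation pipeline built by dataset.py.
param_dict = load_checkpoint("train_parallel0/resnet-120_5004.ckpt")
load_param_into_net(net, param_dict)
net.set_train(False)

model = Model(net,
              loss_fn=nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean'),
              metrics={"top_1_accuracy": nn.Top1CategoricalAccuracy(),
                       "top_5_accuracy": nn.Top5CategoricalAccuracy()})
print(model.eval(eval_dataset))
```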
#### Result

Inference results are stored in the example path, in a folder named "infer". There you can find results like the following in the log.

```
result: {'top_5_accuracy': 0.9429417413572343, 'top_1_accuracy': 0.7853513124199744} ckpt=train_parallel0/resnet-120_5004.ckpt
```