# ResNet-50 Example
## Description
This is an example of training ResNet-50 on the CIFAR-10 dataset in MindSpore.
## Requirements
- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download the CIFAR-10 dataset.
> Unzip the CIFAR-10 dataset to any path you want. The folder structure should include the train and eval datasets as follows:
> ```
> .
> ├── cifar-10-batches-bin # train dataset
> └── cifar-10-verify-bin # infer dataset
> ```
## Example structure
```shell
.
├── config.py # parameter configuration
├── dataset.py # data preprocessing (sketched below)
├── eval.py # infer script
├── lr_generator.py # generate learning rate for each step
├── run_distribute_train.sh # launch distributed training (8 devices)
├── run_infer.sh # launch inference
├── run_standalone_train.sh # launch standalone training (1 device)
└── train.py # train script
```
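The `dataset.py` script builds the CIFAR-10 input pipeline used by both training and inference. The following is a minimal sketch of such a pipeline using the `mindspore.dataset` API; the helper name `create_dataset` and the exact transform list are illustrative, not the example's actual code, and transform module paths vary across MindSpore versions.
```python
# Minimal sketch of a CIFAR-10 input pipeline (illustrative, not the
# example's exact dataset.py).
import mindspore.common.dtype as mstype
import mindspore.dataset as ds
import mindspore.dataset.transforms.c_transforms as C
import mindspore.dataset.transforms.vision.c_transforms as vision

def create_dataset(dataset_path, batch_size=32, repeat_num=1):
    cifar_ds = ds.Cifar10Dataset(dataset_path)

    # Resize to the configured 224x224 input, rescale pixels to [0, 1],
    # normalize, and convert images from HWC to CHW layout.
    transform_img = [
        vision.Resize((224, 224)),
        vision.Rescale(1.0 / 255.0, 0.0),
        vision.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
        vision.HWC2CHW(),
    ]
    cifar_ds = cifar_ds.map(input_columns="image", operations=transform_img)
    cifar_ds = cifar_ds.map(input_columns="label",
                            operations=C.TypeCast(mstype.int32))

    cifar_ds = cifar_ds.shuffle(buffer_size=100)  # "buffer_size" in config.py
    cifar_ds = cifar_ds.batch(batch_size, drop_remainder=True)
    cifar_ds = cifar_ds.repeat(repeat_num)
    return cifar_ds
```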
## Parameter configuration
Parameters for both training and inference can be set in config.py.
```
"class_num": 10, # number of dataset classes
"batch_size": 32, # batch size of input tensor
"loss_scale": 1024, # loss scale
"momentum": 0.9, # momentum
"weight_decay": 1e-4, # weight decay
"epoch_size": 90, # only valid for training; it is always 1 for inference
"buffer_size": 100, # size of the shuffle queue in data preprocessing
"image_height": 224, # image height
"image_width": 224, # image width
"save_checkpoint": True, # whether to save checkpoints
"save_checkpoint_steps": 195, # the step interval between two checkpoints; by default, the last checkpoint is saved after the last step
"keep_checkpoint_max": 10, # keep only the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./", # path to save checkpoints
"warmup_epochs": 5, # number of warmup epochs
"lr_decay_mode": "poly", # decay mode; can be steps, poly, or default
"lr_init": 0.01, # initial learning rate
"lr_end": 0.00001, # final learning rate
"lr_max": 0.1, # maximum learning rate
```
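The warmup and decay parameters above are consumed by `lr_generator.py`, which produces one learning-rate value per training step. Below is a minimal sketch of such a schedule for the "poly" mode; the exact formula used by the example is an assumption based on the parameter names.
```python
# Sketch of a per-step learning-rate schedule: linear warmup from lr_init to
# lr_max, then polynomial decay down to lr_end. The decay power is an
# assumption, not the example's exact formula.
import numpy as np

def generate_lr(lr_init=0.01, lr_end=0.00001, lr_max=0.1,
                warmup_epochs=5, total_epochs=90, steps_per_epoch=195):
    total_steps = total_epochs * steps_per_epoch
    warmup_steps = warmup_epochs * steps_per_epoch
    lr_each_step = []
    for step in range(total_steps):
        if step < warmup_steps:
            # Ramp linearly from lr_init to lr_max over the warmup steps.
            lr = lr_init + (lr_max - lr_init) * step / warmup_steps
        else:
            # Polynomial ("poly") decay from lr_max to lr_end.
            frac = 1.0 - (step - warmup_steps) / (total_steps - warmup_steps)
            lr = lr_end + (lr_max - lr_end) * frac ** 2
        lr_each_step.append(lr)
    return np.array(lr_each_step, dtype=np.float32)
```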
## Running the example
### Train
#### Usage
```
# distributed training
Usage: sh run_distribute_train.sh [MINDSPORE_HCCL_CONFIG_PATH] [DATASET_PATH]
# standalone training
Usage: sh run_standalone_train.sh [DATASET_PATH]
```
#### Launch
```
# distributed training example
sh run_distribute_train.sh rank_table.json ~/cifar-10-batches-bin
# standalone training example
sh run_standalone_train.sh ~/cifar-10-batches-bin
```
> For details about rank_table.json, see the [distributed training tutorial](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training.html).
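Both launch scripts ultimately invoke `train.py`. As a rough sketch, its core looks something like the following; the `resnet50` import path and the callback settings are assumptions, not the script's exact contents, and it reuses the `create_dataset` and `generate_lr` helpers sketched earlier.
```python
# Rough sketch of the core of train.py (illustrative; import paths and
# callback settings are assumptions).
from mindspore import Tensor, context
from mindspore.model_zoo.resnet import resnet50  # assumed network location
from mindspore.nn import Momentum, SoftmaxCrossEntropyWithLogits
from mindspore.train.callback import (CheckpointConfig, LossMonitor,
                                      ModelCheckpoint)
from mindspore.train.model import Model

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")

net = resnet50(class_num=10)
loss = SoftmaxCrossEntropyWithLogits(sparse=True)
lr = Tensor(generate_lr())  # per-step schedule from the sketch above
opt = Momentum(net.trainable_params(), learning_rate=lr, momentum=0.9,
               weight_decay=1e-4, loss_scale=1024)

dataset = create_dataset("~/cifar-10-batches-bin")  # pipeline sketched earlier
ckpt_cfg = CheckpointConfig(save_checkpoint_steps=195, keep_checkpoint_max=10)
ckpt_cb = ModelCheckpoint(prefix="resnet", directory="./", config=ckpt_cfg)

model = Model(net, loss_fn=loss, optimizer=opt, metrics={'acc'})
model.train(90, dataset, callbacks=[ckpt_cb, LossMonitor()])
```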
#### Result
Training results are stored in the example path, in a folder whose name begins with "train" or "train_parallel". There you can find checkpoint files together with results like the following in the log.
```
# distributed training result (8 devices)
epoch: 1 step: 195, loss is 1.9601055
epoch: 2 step: 195, loss is 1.8555021
epoch: 3 step: 195, loss is 1.6707983
epoch: 4 step: 195, loss is 1.8162166
epoch: 5 step: 195, loss is 1.393667
```
### Infer
#### Usage
```
# infer
Usage: sh run_infer.sh [DATASET_PATH] [CHECKPOINT_PATH]
```
#### Launch
```
# infer example
sh run_infer.sh ~/cifar-10-verify-bin ~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```
> The checkpoint file is produced during the training process.
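For reference, `eval.py` restores the checkpoint into the network and runs the accuracy metric. Below is a minimal sketch, again assuming the `resnet50` helper and the `create_dataset` pipeline sketched earlier.
```python
# Sketch of checkpoint loading and evaluation as eval.py might do it; the
# resnet50 import path is an assumption.
from mindspore.model_zoo.resnet import resnet50  # assumed network location
from mindspore.nn import SoftmaxCrossEntropyWithLogits
from mindspore.train.model import Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net

net = resnet50(class_num=10)
param_dict = load_checkpoint("resnet-90_195.ckpt")
load_param_into_net(net, param_dict)

loss = SoftmaxCrossEntropyWithLogits(sparse=True)
model = Model(net, loss_fn=loss, metrics={'acc'})
eval_dataset = create_dataset("cifar-10-verify-bin")  # sketched earlier
res = model.eval(eval_dataset)
print("result:", res)  # e.g. result: {'acc': 0.914...}
```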
#### Result
Inference results are stored in the example path, in a folder named "infer". There you can find results like the following in the log.
```
result: {'acc': 0.91446314102564111} ckpt=~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
```