# MobileNetV2 Example
## Description
This is an example of training MobileNetV2 on the ImageNet2012 dataset with MindSpore.
## Requirements
* Install [MindSpore](https://www.mindspore.cn/install/en).
* Download the [ImageNet2012](http://www.image-net.org/) dataset.
> Unzip the ImageNet2012 dataset to any path you want; the folder structure should be as follows:
> ```
> .
> ├── train # training dataset
> └── val # inference dataset
> ```
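The `dataset.py` script listed below turns these two folders into batched MindSpore datasets. Here is a minimal hypothetical sketch using the standard `mindspore.dataset` API; the `create_dataset` name and the exact transform list are assumptions, and normalization is omitted:
```python
# Hypothetical minimal version of the preprocessing in dataset.py.
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as C

def create_dataset(dataset_path, do_train, batch_size=256, image_size=224):
    """Read ImageNet from the train/ or val/ folder and return batched data."""
    data = ds.ImageFolderDataset(dataset_path, shuffle=do_train,
                                 num_parallel_workers=8)
    if do_train:
        ops = [C.RandomCropDecodeResize(image_size),
               C.RandomHorizontalFlip(),
               C.HWC2CHW()]
    else:
        ops = [C.Decode(), C.Resize(256), C.CenterCrop(image_size),
               C.HWC2CHW()]
    data = data.map(operations=ops, input_columns="image",
                    num_parallel_workers=8)
    return data.batch(batch_size, drop_remainder=True)
```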
## Example structure
``` shell
.
├── config.py # parameter configuration
├── dataset.py # data preprocessing
├── eval.py # inference script
├── launch.py # launcher for distributed training
├── lr_generator.py # generates the learning rate for each step (sketched below)
├── run_infer.sh # launches inference
├── run_train.sh # launches training
└── train.py # training script
```
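The learning rates printed in the training log under Result (0.100 after epoch 0, 0.200 after epoch 1) are consistent with a linear warmup to the base rate of 0.4 over the 4 warmup epochs configured below. A sketch of what `lr_generator.py` might compute; the `get_lr` name and the cosine decay after warmup are assumptions:
```python
import numpy as np

def get_lr(base_lr, warmup_epochs, total_epochs, steps_per_epoch):
    """Return one learning rate per step: linear warmup, then cosine decay."""
    warmup_steps = warmup_epochs * steps_per_epoch
    total_steps = total_epochs * steps_per_epoch
    lr = []
    for step in range(total_steps):
        if step < warmup_steps:
            # linear warmup from 0 up to base_lr
            lr.append(base_lr * (step + 1) / warmup_steps)
        else:
            # cosine decay from base_lr down to 0
            progress = (step - warmup_steps) / (total_steps - warmup_steps)
            lr.append(0.5 * base_lr * (1 + np.cos(np.pi * progress)))
    return np.array(lr, dtype=np.float32)

# get_lr(0.4, 4, 200, 625) matches the configuration below: the last step of
# epoch 0 gets lr 0.1 and the last step of epoch 1 gets lr 0.2.
```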
## Parameter configuration
Parameters for both training and inference can be set in `config.py`.
```
"num_classes": 1000, # number of classes in the dataset
"image_height": 224, # image height
"image_width": 224, # image width
"batch_size": 256, # batch size for training or inference
"epoch_size": 200, # total training epochs, including warmup_epochs
"warmup_epochs": 4, # warmup epochs
"lr": 0.4, # base learning rate
"momentum": 0.9, # momentum
"weight_decay": 4e-5, # weight decay
"loss_scale": 1024, # loss scale
"save_checkpoint": True, # whether to save checkpoints
"save_checkpoint_epochs": 1, # epoch interval between two checkpoints
"keep_checkpoint_max": 200, # keep only the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./checkpoint" # path to save checkpoints
```
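Other scripts typically read these values as attributes of a single `config` object. A hypothetical sketch of how `config.py` might expose them; the `easydict` dependency is an assumption:
```python
from easydict import EasyDict as ed

# Attribute-style wrapper around the values listed above.
config = ed({
    "num_classes": 1000,
    "image_height": 224,
    "image_width": 224,
    "batch_size": 256,
    "epoch_size": 200,
    "warmup_epochs": 4,
    "lr": 0.4,
    "momentum": 0.9,
    "weight_decay": 4e-5,
    "loss_scale": 1024,
    "save_checkpoint": True,
    "save_checkpoint_epochs": 1,
    "keep_checkpoint_max": 200,
    "save_checkpoint_path": "./checkpoint",
})

# Usage elsewhere: config.batch_size, config.lr, config.save_checkpoint_path, ...
```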
## Running the example
### Train
#### Usage
Usage: sh run_train.sh [DEVICE_NUM] [SERVER_IP(x.x.x.x)] [VISIBLE_DEVICES(0,1,2,3,4,5,6,7)] [DATASET_PATH]
#### Launch
```
# training example
sh run_train.sh 8 192.168.0.1 0,1,2,3,4,5,6,7 ~/imagenet
```
#### Result
Training results are stored in the example path. Checkpoints are stored at `./checkpoint` by default, and the training log is redirected to `./train/train.log`, which looks like the following:
```
epoch: [ 0/200], step:[ 624/ 625], loss:[5.258/5.258], time:[140412.236], lr:[0.100]
epoch time: 140522.500, per step time: 224.836, avg loss: 5.258
epoch: [ 1/200], step:[ 624/ 625], loss:[3.917/3.917], time:[138221.250], lr:[0.200]
epoch time: 138331.250, per step time: 221.330, avg loss: 3.917
```
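The per-epoch loss and timing lines above are the kind of output printed by MindSpore's `LossMonitor` and `TimeMonitor` callbacks, and the checkpoints are written by `ModelCheckpoint` according to the configuration. A hypothetical sketch of how `train.py` might wire this up (`run_train` is an assumed name; distributed setup via `launch.py` is omitted):
```python
import mindspore.nn as nn
from mindspore import Model, Tensor
from mindspore.train.callback import (ModelCheckpoint, CheckpointConfig,
                                      LossMonitor, TimeMonitor)
from mindspore.train.loss_scale_manager import FixedLossScaleManager

def run_train(net, dataset, config, lr_steps):
    """lr_steps: one learning rate per step, e.g. from get_lr() above."""
    loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
    opt = nn.Momentum(net.trainable_params(), Tensor(lr_steps),
                      config.momentum, weight_decay=config.weight_decay,
                      loss_scale=config.loss_scale)
    scale_manager = FixedLossScaleManager(config.loss_scale,
                                          drop_overflow_update=False)
    model = Model(net, loss_fn=loss, optimizer=opt,
                  loss_scale_manager=scale_manager)
    steps_per_epoch = dataset.get_dataset_size()
    callbacks = [LossMonitor(), TimeMonitor(data_size=steps_per_epoch)]
    if config.save_checkpoint:
        ckpt_cfg = CheckpointConfig(
            save_checkpoint_steps=config.save_checkpoint_epochs * steps_per_epoch,
            keep_checkpoint_max=config.keep_checkpoint_max)
        callbacks.append(ModelCheckpoint(prefix="mobilenet",
                                         directory=config.save_checkpoint_path,
                                         config=ckpt_cfg))
    model.train(config.epoch_size, dataset, callbacks=callbacks)
```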
### Infer
#### Usage
Usage: sh run_infer.sh [DATASET_PATH] [CHECKPOINT_PATH]
#### Launch
```
# infer example
sh run_infer.sh ~/imagenet ~/train/mobilenet-200_625.ckpt
```
> Checkpoints are produced during training; the file name encodes the epoch and step count (e.g. `mobilenet-200_625.ckpt` was saved after step 625 of epoch 200).
#### Result
The inference result is stored in the example path; you can find output like the following in `val.log`.
```
result: {'acc': 0.71976314102564111} ckpt=/path/to/checkpoint/mobilenet-200_625.ckpt
```
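The `{'acc': ...}` dict above is the typical return value of MindSpore's `Model.eval` with an `'acc'` metric. A hypothetical sketch of what `eval.py` might do (the `run_eval` name is an assumption):
```python
import mindspore.nn as nn
from mindspore import Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net

def run_eval(net, dataset, checkpoint_path):
    """Load a trained checkpoint and report accuracy on the given dataset."""
    load_param_into_net(net, load_checkpoint(checkpoint_path))
    net.set_train(False)
    loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
    model = Model(net, loss_fn=loss, metrics={'acc'})
    res = model.eval(dataset)
    print("result:", res, "ckpt=" + checkpoint_path)
    return res
```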