# MobileNetV2 Quantization Aware Training

MobileNetV2 is a significant improvement over MobileNetV1 and pushes the state of the art for mobile visual recognition, including classification, object detection and semantic segmentation.

MobileNetV2 builds upon the ideas of MobileNetV1, using depthwise separable convolutions as efficient building blocks. However, V2 introduces two new features to the architecture: 1) linear bottlenecks between the layers, and 2) shortcut connections between the bottlenecks.
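To make these two features concrete, here is a hedged sketch of one inverted residual block built from the fused `nn.Conv2dBnAct` cell this tutorial relies on. It is an illustration, not the implementation in `src/mobilenetV2.py`, and cell argument names may vary across MindSpore versions.

```python
# Illustrative sketch (not the code in src/mobilenetV2.py): one inverted
# residual block with a 1x1 expansion, a 3x3 depthwise convolution, and a
# linear (activation-free) 1x1 bottleneck, plus a shortcut when shapes match.
import mindspore.nn as nn

class InvertedResidual(nn.Cell):
    def __init__(self, in_ch, out_ch, stride=1, expand=6):
        super(InvertedResidual, self).__init__()
        hidden = in_ch * expand
        self.use_shortcut = stride == 1 and in_ch == out_ch
        self.block = nn.SequentialCell([
            # 1x1 pointwise expansion: conv + BN + ReLU6 fused in one cell
            nn.Conv2dBnAct(in_ch, hidden, kernel_size=1,
                           has_bn=True, activation='relu6'),
            # 3x3 depthwise conv (group == channels) + BN + ReLU6
            nn.Conv2dBnAct(hidden, hidden, kernel_size=3, stride=stride,
                           group=hidden, has_bn=True, activation='relu6'),
            # 1x1 linear bottleneck: BN but no activation
            nn.Conv2dBnAct(hidden, out_ch, kernel_size=1, has_bn=True),
        ])

    def construct(self, x):
        out = self.block(x)
        if self.use_shortcut:
            out = out + x
        return out
```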
This example trains MobileNetV2 on the ImageNet dataset in MindSpore with quantization aware training. It is a simple, basic tutorial for constructing a quantization aware network in MindSpore.

In this readme tutorial, you will:

1. Train a MindSpore fusion MobileNetV2 model for ImageNet from scratch using `nn.Conv2dBnAct` and `nn.DenseBnAct` (a hedged sketch of these fused cells follows this list).
2. Fine-tune the fusion model with the quantization aware training auto network converter API `convert_quant_network`, then, after the network converges, export a quantization aware model checkpoint file (sketched in the fine-tune section below).
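A minimal sketch of the fused cells named in step 1. `TinyFusionNet` is a hypothetical stand-in for the real network in `src/mobilenetV2.py`, and cell signatures here follow MindSpore 1.x conventions and may differ in other releases. The point is that `nn.Conv2dBnAct` and `nn.DenseBnAct` bundle conv/dense, batch norm, and activation into single cells that the converter can later rewrite with quantization aware equivalents.

```python
# Hypothetical miniature network (a stand-in for src/mobilenetV2.py) showing
# the fused cells from step 1. Shapes are illustrative.
import numpy as np
import mindspore.nn as nn
from mindspore import Tensor

class TinyFusionNet(nn.Cell):
    def __init__(self, num_classes=1000):
        super(TinyFusionNet, self).__init__()
        # conv + BN + ReLU6 in a single fused cell
        self.features = nn.Conv2dBnAct(3, 32, kernel_size=3, stride=2,
                                       has_bn=True, activation='relu6')
        self.pool = nn.AvgPool2d(kernel_size=112)  # 224x224 input -> 1x1
        self.flatten = nn.Flatten()
        # dense head; no BN or activation on the final logits
        self.head = nn.DenseBnAct(32, num_classes)

    def construct(self, x):
        x = self.features(x)
        x = self.pool(x)
        x = self.flatten(x)
        return self.head(x)

net = TinyFusionNet()
logits = net(Tensor(np.ones((1, 3, 224, 224), np.float32)))
print(logits.shape)  # (1, 1000)
```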
[Paper](https://arxiv.org/pdf/1801.04381): Sandler, Mark, et al. "MobileNetV2: Inverted residuals and linear bottlenecks." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
# Dataset

Dataset used: ImageNet

- Dataset size: about 125 GB
- Train: 120 GB, 1,281,167 images in 1,000 class directories
- Test: 5 GB, 50,000 images; sort them into 1,000 class directories first, just like the training images
- Data format: RGB images
- Note: data is processed in `src/dataset.py` (a hedged sketch follows)
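For reference, a sketch of the kind of ImageNet pipeline `src/dataset.py` builds; `src/dataset.py` is the authoritative version. The operator module paths below follow recent MindSpore releases and may differ from the version this README targets, and the transform choices are illustrative rather than the exact ones in the repo.

```python
# Hedged sketch of an ImageNet evaluation pipeline. ImageFolderDataset
# infers labels from the 1,000 class sub-directories described above.
import mindspore.dataset as ds
import mindspore.dataset.vision as vision

def create_dataset(dataset_path, batch_size=32):
    data = ds.ImageFolderDataset(dataset_path, shuffle=False)
    trans = [
        vision.Decode(),        # raw JPEG bytes -> RGB image
        vision.Resize(256),
        vision.CenterCrop(224),
        vision.Normalize(mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
                         std=[0.229 * 255, 0.224 * 255, 0.225 * 255]),
        vision.HWC2CHW(),       # channels-last -> channels-first
    ]
    data = data.map(operations=trans, input_columns="image")
    return data.batch(batch_size, drop_remainder=True)
```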
# Environment Requirements

- Hardware (Ascend)
  - Prepare a hardware environment with an Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
- Framework
  - [MindSpore](http://10.90.67.50/mindspore/archive/20200506/OpenSource/me_vm_x86/)
- For more information, please check the resources below:
  - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
  - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
# Script description

## Script and sample code

```
├── mobilenetv2_quant
  ├── Readme.md
  ├── scripts
  │   ├── run_train.sh
  │   ├── run_infer.sh
  │   ├── run_train_quant.sh
  │   ├── run_infer_quant.sh
  ├── src
  │   ├── config.py
  │   ├── dataset.py
  │   ├── launch.py
  │   ├── lr_generator.py
  │   ├── mobilenetV2.py
  ├── train.py
  ├── eval.py
```
## Training process

### Train MobileNetV2 model

Train a MindSpore fusion MobileNetV2 model for ImageNet, like:

- sh run_train.sh Ascend [DEVICE_NUM] [SERVER_IP(x.x.x.x)] [VISIBLE_DEVICES(0,1,2,3,4,5,6,7)] [DATASET_PATH] [CKPT_PATH]

For example, you can run this command:

``` bash
>>> sh run_train.sh Ascend 4 192.168.0.1 0,1,2,3 ~/imagenet/train/ ~/mobilenet.ckpt
```

Training results will be stored in the example path. Checkpoints will be stored at `./checkpoint` by default, and the training log will be redirected to `./train/train.log`, like the following:

```
>>> epoch: [ 0/200], step:[ 624/ 625], loss:[5.258/5.258], time:[140412.236], lr:[0.100]
>>> epoch time: 140522.500, per step time: 224.836, avg loss: 5.258
>>> epoch: [ 1/200], step:[ 624/ 625], loss:[3.917/3.917], time:[138221.250], lr:[0.200]
>>> epoch time: 138331.250, per step time: 221.330, avg loss: 3.917
```
### Evaluate MobileNetV2 model

Evaluate a MindSpore fusion MobileNetV2 model for ImageNet, like:

- sh run_infer.sh Ascend [DATASET_PATH] [CHECKPOINT_PATH]

For example, you can run this command:

``` bash
>>> sh run_infer.sh Ascend ~/imagenet/val/ ~/train/mobilenet-200_625.ckpt
```

Inference results will be stored in the example path; you can find results like the following in `val.log` (a hedged sketch of the underlying evaluation flow follows):

```
>>> result: {'acc': 0.71976314102564111} ckpt=/path/to/checkpoint/mobilenet-200_625.ckpt
```
### Fine-tune for quantization aware training

Fine-tune the fusion model with the quantization aware training auto network converter API `convert_quant_network`; after the network converges, export a quantization aware model checkpoint file. In code, the flow looks roughly like the sketch below.
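A minimal sketch, assuming the quant module location of the MindSpore version this README targets (the module has moved in later releases); the converter arguments shown are illustrative, and the `src.mobilenetV2` import is an assumption about this repo's layout.

```python
# Hedged sketch of the fine-tune flow: restore the FP32 fusion weights,
# convert the network for quantization aware training, fine-tune, then save
# a quantization aware checkpoint.
from mindspore.train.quant import quant
from mindspore.train.serialization import (load_checkpoint,
                                           load_param_into_net,
                                           save_checkpoint)
from src.mobilenetV2 import mobilenetV2     # assumed factory in src/mobilenetV2.py

net = mobilenetV2(num_classes=1000)
# 1) restore the fusion checkpoint trained in the previous section
load_param_into_net(net, load_checkpoint("mobilenet.ckpt"))
# 2) insert fake-quantization nodes around the fused cells
net = quant.convert_quant_network(net, bn_fold=True,
                                  per_channel=[True, False],
                                  symmetric=[True, False])
# ... fine-tune `net` here until it converges ...
# 3) export the quantization aware checkpoint
save_checkpoint(net, "mobilenet_quant.ckpt")
```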
- sh run_train_quant.sh Ascend [DEVICE_NUM] [SERVER_IP(x.x.x.x)] [VISIBLE_DEVICES(0,1,2,3,4,5,6,7)] [DATASET_PATH] [CKPT_PATH]

For example, you can run this command:

``` bash
>>> sh run_train_quant.sh Ascend 4 192.168.0.1 0,1,2,3 ~/imagenet/train/ ~/mobilenet.ckpt
```

Training results will be stored in the example path. Checkpoints will be stored at `./checkpoint` by default, and the training log will be redirected to `./train/train.log`, like the following:

```
>>> epoch: [ 0/60], step:[ 624/ 625], loss:[5.258/5.258], time:[140412.236], lr:[0.100]
>>> epoch time: 140522.500, per step time: 224.836, avg loss: 5.258
>>> epoch: [ 1/60], step:[ 624/ 625], loss:[3.917/3.917], time:[138221.250], lr:[0.200]
>>> epoch time: 138331.250, per step time: 221.330, avg loss: 3.917
```
### Evaluate quantization aware training model

Evaluate the quantization aware trained MobileNetV2 model for ImageNet, like:

- sh run_infer_quant.sh Ascend [DATASET_PATH] [CHECKPOINT_PATH]

For example, you can run this command:

``` bash
>>> sh run_infer_quant.sh Ascend ~/imagenet/val/ ~/train/mobilenet-60_625.ckpt
```

Inference results will be stored in the example path; you can find results like the following in `val.log`:

```
>>> result: {'acc': 0.71976314102564111} ckpt=/path/to/checkpoint/mobilenet-60_625.ckpt
```
# ModelZoo Homepage

[Link](https://gitee.com/mindspore/mindspore/tree/master/mindspore/model_zoo)