# Contents

- [ResNeXt50 Description](#resnext50-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Features](#features)
    - [Mixed Precision](#mixed-precision)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Script Parameters](#script-parameters)
    - [Training Process](#training-process)
    - [Evaluation Process](#evaluation-process)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Training Performance](#training-performance)
        - [Inference Performance](#inference-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
# [ResNeXt50 Description](#contents)

ResNeXt is a simple, highly modularized network architecture for image classification. Its simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width.

[Paper](https://arxiv.org/abs/1611.05431): Xie S, Girshick R, Dollár P, et al. Aggregated Residual Transformations for Deep Neural Networks. 2016.
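
The split-transform-merge idea behind cardinality can be illustrated with a tiny, framework-free sketch (NumPy only; the dimensions and the stand-in transformations below are illustrative assumptions, not the exact ResNeXt-50 block, and the residual shortcut is omitted):

```python
import numpy as np

def aggregated_block(x, branch_weights):
    """Toy aggregated-transformations block: y = sum_i T_i(x).

    Each branch reduces the input to a low-dimensional embedding, transforms it,
    and expands it back; the branch outputs are aggregated by summation.
    """
    d_out = branch_weights[0][2].shape[1]
    y = np.zeros(d_out)
    for w_reduce, w_transform, w_expand in branch_weights:
        branch = x @ w_reduce                         # reduce (stands in for the 1x1 conv)
        branch = np.maximum(branch @ w_transform, 0)  # transform + ReLU (stands in for the 3x3 conv)
        y += branch @ w_expand                        # expand and aggregate
    return y

rng = np.random.default_rng(0)
d_in, d_branch, d_out, cardinality = 256, 4, 256, 32
branch_weights = [(rng.normal(size=(d_in, d_branch)),
                   rng.normal(size=(d_branch, d_branch)),
                   rng.normal(size=(d_branch, d_out)))
                  for _ in range(cardinality)]        # cardinality = number of parallel branches
print(aggregated_block(rng.normal(size=d_in), branch_weights).shape)  # (256,)
```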
# [Model Architecture](#contents)

The overall network architecture of ResNeXt is shown below:

[Link](https://arxiv.org/abs/1611.05431)
# [Dataset](#contents)

Dataset used: [ImageNet](http://www.image-net.org/)

- Dataset size: ~125 GB, about 1.2 million color images in 1000 classes
    - Train: 120 GB, 1.2 million images
    - Test: 5 GB, 50,000 images
- Data format: RGB images
- Note: Data will be processed in src/dataset.py (a rough pipeline sketch follows below).
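
For reference, an input pipeline along the lines of what src/dataset.py implements might look like the sketch below. This is a minimal illustration only: the operator choice, crop size, and normalization constants are assumptions, and MindSpore dataset API names have changed between versions.

```python
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as vision

def create_imagenet_dataset(data_dir, batch_size=128, training=True):
    """Sketch of an ImageNet pipeline: decode, crop/resize, normalize, batch."""
    dataset = ds.ImageFolderDataset(data_dir, shuffle=training)
    if training:
        ops = [vision.RandomCropDecodeResize(224), vision.RandomHorizontalFlip()]
    else:
        ops = [vision.Decode(), vision.Resize(256), vision.CenterCrop(224)]
    ops += [vision.Normalize(mean=[123.675, 116.28, 103.53],
                             std=[58.395, 57.12, 57.375]),
            vision.HWC2CHW()]
    dataset = dataset.map(operations=ops, input_columns="image")
    return dataset.batch(batch_size, drop_remainder=training)
```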
# [Features](#contents)

## [Mixed Precision](#contents)

The [mixed precision](https://www.mindspore.cn/tutorial/zh-CN/master/advanced_use/mixed_precision.html) training method accelerates the deep learning neural network training process by using both single-precision and half-precision data formats, while maintaining the accuracy achieved with single-precision training. Mixed precision training can accelerate the computation, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.

For FP16 operators, if the input data type is FP32, the MindSpore backend will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and searching for 'reduce precision'.
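
In MindSpore, mixed precision is typically enabled when constructing the high-level Model; the snippet below is a minimal, self-contained sketch of where the amp_level knob plugs in (the tiny network and synthetic data are placeholders, not this repository's train.py, and amp_level="O2"/"O3" is only meaningful on Ascend/GPU):

```python
import numpy as np
import mindspore.dataset as ds
from mindspore import Model, nn

# Placeholder network and synthetic data, just to make the example runnable.
net = nn.Dense(16, 10)
data = {"image": np.random.randn(64, 16).astype(np.float32),
        "label": np.random.randint(0, 10, size=(64,)).astype(np.int32)}
dataset = ds.NumpySlicesDataset(data, shuffle=False).batch(8)

loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.05, momentum=0.9)

# amp_level="O2" casts most of the network to FP16 while keeping numerically
# sensitive parts (e.g. BatchNorm and the loss) in FP32.
model = Model(net, loss_fn=loss, optimizer=opt, metrics={"acc"}, amp_level="O2")
model.train(2, dataset)
```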
# [Environment Requirements](#contents)

- Hardware (Ascend/GPU)
    - Prepare a hardware environment with an Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
    - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
# [Script Description](#contents)

## [Script and Sample Code](#contents)

```text
.
└─resnext50
  ├─README.md
  ├─scripts
  │ ├─run_standalone_train.sh         # launch standalone training for Ascend (1p)
  │ ├─run_distribute_train.sh         # launch distributed training for Ascend (8p)
  │ ├─run_standalone_train_for_gpu.sh # launch standalone training for GPU (1p)
  │ ├─run_distribute_train_for_gpu.sh # launch distributed training for GPU (8p)
  │ └─run_eval.sh                     # launch evaluation
  ├─src
  │ ├─backbone
  │ │ ├─__init__.py                   # initialization
  │ │ └─resnet.py                     # resnext50 backbone
  │ ├─utils
  │ │ ├─__init__.py                   # initialization
  │ │ ├─cunstom_op.py                 # network operations
  │ │ ├─logging.py                    # print log
  │ │ ├─optimizers_init_.py           # get parameters
  │ │ ├─sampler.py                    # distributed sampler
  │ │ └─var_init_.py                  # calculate gain value
  │ ├─__init__.py                     # initialization
  │ ├─config.py                       # parameter configuration
  │ ├─crossentropy.py                 # CrossEntropy loss function
  │ ├─dataset.py                      # data preprocessing
  │ ├─head.py                         # common head
  │ ├─image_classification.py         # get resnet
  │ ├─linear_warmup.py                # linear warmup learning rate
  │ ├─warmup_cosine_annealing.py      # learning rate at each step
  │ └─warmup_step_lr.py               # warmup step learning rate
  ├─eval.py                           # evaluate net
  └─train.py                          # train net
```
## [Script Parameters](#contents)

Parameters for both training and evaluation can be set in config.py.

```
"image_height": '224,224',          # image size
"num_classes": 1000,                # dataset class number
"per_batch_size": 128,              # batch size of input tensor
"lr": 0.05,                         # base learning rate
"lr_scheduler": 'cosine_annealing', # learning rate mode
"lr_epochs": '30,60,90,120',        # epochs at which lr changes
"lr_gamma": 0.1,                    # decrease lr by this factor for the exponential lr_scheduler
"eta_min": 0,                       # eta_min in the cosine_annealing scheduler
"T_max": 150,                       # T_max in the cosine_annealing scheduler
"max_epoch": 150,                   # maximum number of epochs to train the model
"backbone": 'resnext50',            # backbone network
"warmup_epochs": 1,                 # warmup epochs
"weight_decay": 0.0001,             # weight decay
"momentum": 0.9,                    # momentum
"is_dynamic_loss_scale": 0,         # dynamic loss scale
"loss_scale": 1024,                 # loss scale
"label_smooth": 1,                  # label smoothing
"label_smooth_factor": 0.1,         # label smoothing factor
"ckpt_interval": 2000,              # checkpoint saving interval
"ckpt_path": 'outputs/',            # checkpoint save location
"is_save_on_master": 1,             # save checkpoints only on the master (rank 0) device
"rank": 0,                          # local rank in distributed training
"group_size": 1                     # world size in distributed training
```
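
For context, the warmup plus cosine-annealing schedule selected by lr_scheduler can be reproduced with a short stand-alone function. The sketch below mirrors the parameter names above but is an illustration of the idea, not a copy of warmup_cosine_annealing.py, whose exact formula may differ.

```python
import math

def warmup_cosine_annealing_lr(steps_per_epoch, base_lr=0.05, max_epoch=150,
                               warmup_epochs=1, t_max=150, eta_min=0.0):
    """Per-step learning rate: linear warmup, then per-epoch cosine annealing."""
    warmup_steps = warmup_epochs * steps_per_epoch
    lr_each_step = []
    for step in range(max_epoch * steps_per_epoch):
        if step < warmup_steps:
            lr = base_lr * (step + 1) / warmup_steps          # linear ramp-up
        else:
            epoch = step // steps_per_epoch
            lr = eta_min + 0.5 * (base_lr - eta_min) * (
                1.0 + math.cos(math.pi * epoch / t_max))      # cosine decay
        lr_each_step.append(lr)
    return lr_each_step

lrs = warmup_cosine_annealing_lr(steps_per_epoch=100, max_epoch=5, t_max=5)
print(lrs[0], lrs[99], lrs[-1])   # start of warmup, end of warmup, final step
```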
## [Training Process](#contents)

### Usage

You can start training by Python script:

```
python train.py --data_dir ~/imagenet/train/ --platform Ascend --is_distributed 0
```

or shell script:

```
Ascend:
    # distributed training example (8p)
    sh run_distribute_train.sh RANK_TABLE_FILE DATA_PATH
    # standalone training
    sh run_standalone_train.sh DEVICE_ID DATA_PATH
GPU:
    # distributed training example (8p)
    sh run_distribute_train_for_gpu.sh DATA_PATH
    # standalone training
    sh run_standalone_train_for_gpu.sh DEVICE_ID DATA_PATH
```
#### Launch

```bash
# distributed training example (8p) for Ascend
sh scripts/run_distribute_train.sh RANK_TABLE_FILE /dataset/train
# standalone training example for Ascend
sh scripts/run_standalone_train.sh 0 /dataset/train
# distributed training example (8p) for GPU
sh scripts/run_distribute_train_for_gpu.sh /dataset/train
# standalone training example for GPU
sh scripts/run_standalone_train_for_gpu.sh 0 /dataset/train
```
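
Under the hood, the distributed launch scripts depend on train.py setting up a data-parallel context before building the model. The sketch below shows the typical shape of that setup; it is an assumption about what such code looks like rather than a copy of this repository's train.py, and the exact import paths and argument names have shifted between MindSpore versions.

```python
from mindspore import context
from mindspore.context import ParallelMode
from mindspore.communication.management import init, get_rank, get_group_size

# Graph mode on the target device; device_target mirrors the --platform flag.
context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")

# init() picks up the rank table (Ascend) or NCCL environment (GPU) exported
# by the run_distribute_train*.sh launch scripts.
init()
rank, group_size = get_rank(), get_group_size()

# Data-parallel training across all devices, averaging gradients between them.
context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL,
                                  device_num=group_size,
                                  gradients_mean=True)
```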
You can find the checkpoint files together with the training results in the log.

## [Evaluation Process](#contents)
### Usage

You can start evaluation by Python script:

```
python eval.py --data_dir ~/imagenet/val/ --platform Ascend --pretrained resnext.ckpt
```

or shell script:

```
# Evaluation
sh run_eval.sh DEVICE_ID DATA_PATH PRETRAINED_CKPT_PATH PLATFORM
```

PLATFORM is Ascend or GPU; the default is Ascend.
#### Launch

```bash
# Evaluation with a checkpoint
sh scripts/run_eval.sh 0 /opt/npu/datasets/classification/val /resnext50_100.ckpt Ascend
```
#### Result

The evaluation result will be stored in the scripts path, where you can find results like the following in the log.

```
acc=78.16%(TOP1)
acc=93.88%(TOP5)
```
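
The TOP1/TOP5 figures are the standard top-k accuracies over the validation set; the following NumPy sketch shows how such numbers are computed (for illustration only, not this repository's eval.py):

```python
import numpy as np

def top_k_accuracy(logits, labels, k=1):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    topk = np.argsort(logits, axis=1)[:, -k:]    # indices of the k largest logits
    hits = (topk == labels[:, None]).any(axis=1)
    return hits.mean()

logits = np.array([[0.1, 0.7, 0.2],              # 2 samples, 3 classes
                   [0.5, 0.3, 0.2]])
labels = np.array([2, 1])
print(top_k_accuracy(logits, labels, k=1))       # 0.0 -> "TOP1"
print(top_k_accuracy(logits, labels, k=2))       # 1.0 -> analogous to "TOP5"
```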
# [Model Description](#contents)

## [Performance](#contents)

### Training Performance

| Parameters                 | ResNeXt50 (Ascend)                                | ResNeXt50 (GPU)     |
| -------------------------- | ------------------------------------------------- | ------------------- |
| Resource                   | Ascend 910; CPU 2.60GHz, 56 cores; memory 314 GB  | NV SMX2 V100-32G    |
| Uploaded Date              | 06/30/2020                                        | 07/23/2020          |
| MindSpore Version          | 0.5.0                                             | 0.6.0               |
| Dataset                    | ImageNet                                          | ImageNet            |
| Training Parameters        | src/config.py                                     | src/config.py       |
| Optimizer                  | Momentum                                          | Momentum            |
| Loss Function              | SoftmaxCrossEntropy                               | SoftmaxCrossEntropy |
| Loss                       | 1.76592                                           | 1.8965              |
| Accuracy                   | 78% (TOP1)                                        | 77.8% (TOP1)        |
| Total time                 | 7.8 h (8p)                                        | 21.5 h (8p)         |
| Checkpoint for Fine tuning | 192 MB (.ckpt file)                               | 192 MB (.ckpt file) |
### Inference Performance

| Parameters        | ResNeXt50 (Ascend 910)       | ResNeXt50 (GPU)              | ResNeXt50 (Ascend 310)       |
| ----------------- | ---------------------------- | ---------------------------- | ---------------------------- |
| Resource          | Ascend 910                   | NV SMX2 V100-32G             | Ascend 310                   |
| Uploaded Date     | 06/30/2020                   | 07/23/2020                   | 07/23/2020                   |
| MindSpore Version | 0.5.0                        | 0.6.0                        | 0.6.0                        |
| Dataset           | ImageNet, 1.2 million images | ImageNet, 1.2 million images | ImageNet, 1.2 million images |
| batch_size        | 1                            | 1                            | 1                            |
| outputs           | probability                  | probability                  | probability                  |
| Accuracy          | acc=78.16% (TOP1)            | acc=78.05% (TOP1)            |                              |
# [Description of Random Situation](#contents)

In dataset.py, we set the seed inside the "create_dataset" function. We also use a random seed in train.py.
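
A minimal sketch of that kind of seeding, assuming a MindSpore version that provides mindspore.set_seed (the exact seed values and call sites in this repository may differ):

```python
import numpy as np
from mindspore import set_seed

set_seed(1)        # fixes MindSpore's global seeds (operator init, dataset shuffling)
np.random.seed(1)  # fixes NumPy-side randomness used in preprocessing and sampling
```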
# [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).