# Contents

- [Contents](#contents)
- [YOLOv3-DarkNet53 Description](#yolov3-darknet53-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Script Parameters](#script-parameters)
    - [Training Process](#training-process)
        - [Training](#training)
        - [Distributed Training](#distributed-training)
    - [Evaluation Process](#evaluation-process)
        - [Evaluation](#evaluation)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Evaluation Performance](#evaluation-performance)
        - [Inference Performance](#inference-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
## [YOLOv3-DarkNet53 Description](#contents)

You only look once (YOLO) is a state-of-the-art, real-time object detection system. YOLOv3 is extremely fast and accurate.

Prior detection systems repurpose classifiers or localizers to perform detection. They apply the model to an image at multiple locations and scales. High-scoring regions of the image are considered detections.

YOLOv3 uses a completely different approach. It applies a single neural network to the full image. This network divides the image into regions and predicts bounding boxes and probabilities for each region. These bounding boxes are weighted by the predicted probabilities.

YOLOv3 uses a few tricks to improve training and increase performance, including multi-scale predictions, a better backbone classifier, and more. The full details are in the paper!

[Paper](https://pjreddie.com/media/files/papers/YOLOv3.pdf): YOLOv3: An Incremental Improvement. Joseph Redmon, Ali Farhadi, University of Washington
## [Model Architecture](#contents)

YOLOv3 uses DarkNet53 to perform feature extraction. DarkNet53 is a hybrid of the network used in YOLOv2 (Darknet-19) and residual networks: it uses successive 3 × 3 and 1 × 1 convolutional layers plus shortcut connections, and it is significantly larger, with 53 convolutional layers.
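For illustration, the following is a minimal sketch of DarkNet53's basic residual unit (1 × 1 bottleneck, 3 × 3 convolution, shortcut) written against the MindSpore nn API; the repository's actual implementation lives in src/darknet.py and may differ in details.

```python
# A minimal sketch of DarkNet53's residual block, assuming the MindSpore nn
# API; not the repository's exact code (see src/darknet.py).
import mindspore.nn as nn


class ResidualBlock(nn.Cell):
    """1x1 bottleneck followed by a 3x3 convolution, with a shortcut."""

    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        # 1x1 convolution halves the channel count.
        self.conv1 = nn.Conv2d(channels, half, kernel_size=1, stride=1, pad_mode='same')
        self.bn1 = nn.BatchNorm2d(half)
        # 3x3 convolution restores the channel count.
        self.conv2 = nn.Conv2d(half, channels, kernel_size=3, stride=1, pad_mode='same')
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.LeakyReLU(alpha=0.1)

    def construct(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.relu(self.bn2(self.conv2(out)))
        return out + x  # shortcut connection
```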
## [Dataset](#contents)

Note that you can run the scripts with the dataset mentioned in the original paper or one widely used in this domain/network architecture. In the following sections, we introduce how to run the scripts using the dataset below.

Dataset used: [COCO2014](https://cocodataset.org/#download)

- Dataset size: 19G, 123,287 images, 80 object categories.
    - Train: 13G, 82,783 images
    - Val: 6G, 40,504 images
    - Annotations: 241M, Train/Val annotations
- Data format: zip files
- Note: Data will be processed in yolo_dataset.py; unzip the files before using them, as in the sketch below.
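A minimal sketch of the unzipping step using Python's standard zipfile module, assuming the archives were downloaded into ./dataset/coco2014 under their official COCO file names:

```python
# A minimal sketch for extracting the COCO2014 archives; the archive names
# follow the official COCO download page and are assumptions here.
import os
import zipfile

data_dir = './dataset/coco2014'
for name in ('train2014.zip', 'val2014.zip', 'annotations_trainval2014.zip'):
    path = os.path.join(data_dir, name)
    if os.path.exists(path):
        with zipfile.ZipFile(path) as zf:
            zf.extractall(data_dir)  # creates train2014/, val2014/, annotations/
```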
## [Environment Requirements](#contents)

- Hardware (Ascend/GPU)
    - Prepare the hardware environment with an Ascend or GPU processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get access to the resources.
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
## [Quick Start](#contents)

After installing MindSpore via the official website, you can start training and evaluation as follows. If running on GPU, please add `--device_target=GPU` to the python command or use the "_gpu" shell scripts ("xxx_gpu.sh").

```network
# The darknet53_backbone.ckpt in the following script is obtained by training DarkNet53 as in the paper.
# pretrained_backbone can be produced with src/convert_weight.py, which converts darknet53.conv.74 to a MindSpore ckpt; darknet53.conv.74 can be downloaded from `https://pjreddie.com/media/files/darknet53.conv.74` .
# The training_shape parameter defines the input image shape for the network; the default is "",
# which means 10 different shapes are used as input shapes. Alternatively, a fixed shape can be set.

# run training example (1p) by python command
python train.py \
--data_dir=./dataset/coco2014 \
--pretrained_backbone=darknet53_backbone.ckpt \
--is_distributed=0 \
--lr=0.001 \
--loss_scale=1024 \
--weight_decay=0.016 \
--T_max=320 \
--max_epoch=320 \
--warmup_epochs=4 \
--training_shape=416 \
--lr_scheduler=cosine_annealing > log.txt 2>&1 &

# standalone training example (1p) by shell script
sh run_standalone_train.sh dataset/coco2014 darknet53_backbone.ckpt

# For Ascend devices, distributed training example (8p) by shell script
sh run_distribute_train.sh dataset/coco2014 darknet53_backbone.ckpt rank_table_8p.json

# For GPU devices, distributed training example (8p) by shell script
sh run_distribute_train_gpu.sh dataset/coco2014 darknet53_backbone.ckpt

# run evaluation by python command
python eval.py \
--data_dir=./dataset/coco2014 \
--pretrained=yolov3.ckpt \
--testing_shape=416 > log.txt 2>&1 &

# run evaluation by shell script
sh run_eval.sh dataset/coco2014/ checkpoint/0-319_102400.ckpt
```
## [Script Description](#contents)

### [Script and Sample Code](#contents)

```contents
.
└─yolov3_darknet53
  ├─README.md
  ├─mindspore_hub_conf.md          # config for mindspore hub
  ├─scripts
  │ ├─run_standalone_train.sh      # launch standalone training(1p) in ascend
  │ ├─run_distribute_train.sh      # launch distributed training(8p) in ascend
  │ ├─run_eval.sh                  # launch evaluating in ascend
  │ ├─run_standalone_train_gpu.sh  # launch standalone training(1p) in gpu
  │ ├─run_distribute_train_gpu.sh  # launch distributed training(8p) in gpu
  │ └─run_eval_gpu.sh              # launch evaluating in gpu
  ├─src
  │ ├─__init__.py                  # python init file
  │ ├─config.py                    # parameter configuration
  │ ├─darknet.py                   # backbone of network
  │ ├─distributed_sampler.py       # iterator of dataset
  │ ├─initializer.py               # initializer of parameters
  │ ├─logger.py                    # log function
  │ ├─loss.py                      # loss function
  │ ├─lr_scheduler.py              # generate learning rate
  │ ├─transforms.py                # preprocess data
  │ ├─util.py                      # util function
  │ ├─yolo.py                      # yolov3 network
  │ └─yolo_dataset.py              # create dataset for YOLOv3
  ├─eval.py                        # eval net
  └─train.py                       # train net
```
### [Script Parameters](#contents)

```parameters
Major parameters in train.py are as follows:

optional arguments:
  -h, --help            show this help message and exit
  --device_target       device where the code will be implemented: "Ascend" | "GPU", default is "Ascend"
  --data_dir DATA_DIR   Train dataset directory.
  --per_batch_size PER_BATCH_SIZE
                        Batch size for training. Default: 32.
  --pretrained_backbone PRETRAINED_BACKBONE
                        The ckpt file of DarkNet53. Default: "".
  --resume_yolov3 RESUME_YOLOV3
                        The ckpt file of YOLOv3, used for fine-tuning.
                        Default: "".
  --lr_scheduler LR_SCHEDULER
                        Learning rate scheduler, options: exponential,
                        cosine_annealing. Default: exponential
  --lr LR               Learning rate. Default: 0.001
  --lr_epochs LR_EPOCHS
                        Epochs at which lr changes, separated by ",".
                        Default: 220,250
  --lr_gamma LR_GAMMA   Factor by which the exponential lr_scheduler decreases lr.
                        Default: 0.1
  --eta_min ETA_MIN     Eta_min in the cosine_annealing scheduler. Default: 0
  --T_max T_MAX         T-max in the cosine_annealing scheduler. Default: 320
  --max_epoch MAX_EPOCH
                        Max epoch num to train the model. Default: 320
  --warmup_epochs WARMUP_EPOCHS
                        Warmup epochs. Default: 0
  --weight_decay WEIGHT_DECAY
                        Weight decay factor. Default: 0.0005
  --momentum MOMENTUM   Momentum. Default: 0.9
  --loss_scale LOSS_SCALE
                        Static loss scale. Default: 1024
  --label_smooth LABEL_SMOOTH
                        Whether to use label smoothing in CE. Default: 0
  --label_smooth_factor LABEL_SMOOTH_FACTOR
                        Smoothing strength of the original one-hot. Default: 0.1
  --log_interval LOG_INTERVAL
                        Logging interval steps. Default: 100
  --ckpt_path CKPT_PATH
                        Checkpoint save location. Default: outputs/
  --ckpt_interval CKPT_INTERVAL
                        Save checkpoint interval. Default: None
  --is_save_on_master IS_SAVE_ON_MASTER
                        Save ckpt on master or all ranks, 1 for master, 0 for
                        all ranks. Default: 1
  --is_distributed IS_DISTRIBUTED
                        Distribute training or not, 1 for yes, 0 for no.
                        Default: 1
  --rank RANK           Local rank of distributed training. Default: 0
  --group_size GROUP_SIZE
                        World size of devices. Default: 1
  --need_profiler NEED_PROFILER
                        Whether to use the profiler, 0 for no, 1 for yes.
                        Default: 0
  --training_shape TRAINING_SHAPE
                        Fixed training shape. Default: ""
  --resize_rate RESIZE_RATE
                        Resize rate for multi-scale training. Default: None
```
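These flags map onto a standard argparse parser; a minimal sketch covering a few of them, with defaults taken from the help text above (not the repository's exact parser in train.py):

```python
# A minimal argparse sketch for a few of the flags above; defaults follow
# the help text, while required/optional choices here are assumptions.
import argparse

parser = argparse.ArgumentParser(description='YOLOv3-DarkNet53 training')
parser.add_argument('--device_target', type=str, default='Ascend',
                    choices=['Ascend', 'GPU'], help='Device to run on.')
parser.add_argument('--data_dir', type=str, default='', help='Train dataset directory.')
parser.add_argument('--per_batch_size', type=int, default=32, help='Batch size for training.')
parser.add_argument('--lr', type=float, default=0.001, help='Learning rate.')
parser.add_argument('--lr_scheduler', type=str, default='exponential',
                    choices=['exponential', 'cosine_annealing'])
parser.add_argument('--max_epoch', type=int, default=320, help='Max epochs to train.')
parser.add_argument('--is_distributed', type=int, default=1,
                    help='1 for distributed training, 0 for single device.')
args = parser.parse_args()
```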
### [Training Process](#contents)

#### Training

```command
python train.py \
--data_dir=./dataset/coco2014 \
--pretrained_backbone=darknet53_backbone.ckpt \
--is_distributed=0 \
--lr=0.001 \
--loss_scale=1024 \
--weight_decay=0.016 \
--T_max=320 \
--max_epoch=320 \
--warmup_epochs=4 \
--training_shape=416 \
--lr_scheduler=cosine_annealing > log.txt 2>&1 &
```

The python command above will run in the background; you can view the results through the file `log.txt`. If running on GPU, please add `--device_target=GPU` to the python command.
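With `--lr_scheduler=cosine_annealing`, the learning rate follows the standard cosine schedule after warmup; a minimal sketch of that schedule using the flags above (an illustration, not the code in src/lr_scheduler.py):

```python
# A minimal sketch of cosine-annealing lr with linear warmup, matching the
# flags above (lr=0.001, T_max=320, eta_min=0, warmup_epochs=4); the
# repository's scheduler in src/lr_scheduler.py may differ in details.
import math

def cosine_annealing_lr(epoch, lr_max=0.001, t_max=320, eta_min=0.0, warmup_epochs=4):
    if epoch < warmup_epochs:
        # Linear warmup from 0 up to lr_max.
        return lr_max * (epoch + 1) / warmup_epochs
    # Standard cosine decay from lr_max down to eta_min over t_max epochs.
    return eta_min + (lr_max - eta_min) * (1 + math.cos(math.pi * epoch / t_max)) / 2

print([round(cosine_annealing_lr(e), 6) for e in (0, 4, 160, 319)])
```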
After training, you will get some checkpoint files under the outputs folder by default. The loss values will look as follows:
```log
# grep "loss:" train/log.txt
2020-08-20 14:14:43,640:INFO:epoch[0], iter[0], loss:7809.262695, 0.15 imgs/sec, lr:9.746589057613164e-06
2020-08-20 14:15:05,142:INFO:epoch[0], iter[100], loss:2778.349033, 133.92 imgs/sec, lr:0.0009844054002314806
2020-08-20 14:15:31,796:INFO:epoch[0], iter[200], loss:535.517361, 130.54 imgs/sec, lr:0.0019590642768889666
...
```
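To inspect the loss curve, log lines in this format can be parsed with a short regular expression; a minimal sketch:

```python
# A minimal sketch that extracts (epoch, iter, loss) triples from a training
# log in the format shown above.
import re

pattern = re.compile(r'epoch\[(\d+)\], iter\[(\d+)\], loss:([\d.]+)')
with open('train/log.txt') as f:
    points = [(int(e), int(i), float(l)) for e, i, l in pattern.findall(f.read())]
print(points[:3])
```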
The model checkpoints will be saved in the outputs directory.
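A saved checkpoint can be loaded back into the network for evaluation or fine-tuning; a minimal sketch, where the `YOLOV3DarkNet53` class name and constructor arguments are assumptions about src/yolo.py:

```python
# A minimal sketch for loading a saved checkpoint; class name and
# constructor arguments are assumed from src/yolo.py and may differ.
from mindspore.train.serialization import load_checkpoint, load_param_into_net
from src.yolo import YOLOV3DarkNet53  # assumed export of src/yolo.py

network = YOLOV3DarkNet53(is_training=False)  # constructor signature assumed
param_dict = load_checkpoint('outputs/0-319_102400.ckpt')
load_param_into_net(network, param_dict)
```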
#### Distributed Training

For Ascend devices, run the distributed training example (8p) by shell script:

```command
sh run_distribute_train.sh dataset/coco2014 darknet53_backbone.ckpt rank_table_8p.json
```

For GPU devices, run the distributed training example (8p) by shell script:

```command
sh run_distribute_train_gpu.sh dataset/coco2014 darknet53_backbone.ckpt
```

The above shell scripts will run distributed training in the background. You can view the results through the file `train_parallel[X]/log.txt`. The loss values will look as follows:
```log
# distribute training result(8p)
epoch[0], iter[0], loss:14623.384766, 1.23 imgs/sec, lr:7.812499825377017e-07
epoch[0], iter[100], loss:746.253051, 22.01 imgs/sec, lr:7.890690624925494e-05
epoch[0], iter[200], loss:101.579535, 344.41 imgs/sec, lr:0.00015703124925494192
epoch[0], iter[300], loss:85.136754, 341.99 imgs/sec, lr:0.00023515624925494185
epoch[1], iter[400], loss:79.429322, 405.14 imgs/sec, lr:0.00031328126788139345
...
epoch[318], iter[102000], loss:30.504046, 458.03 imgs/sec, lr:9.63797575082026e-08
epoch[319], iter[102100], loss:31.599150, 341.08 imgs/sec, lr:2.409552052995423e-08
epoch[319], iter[102200], loss:31.652273, 372.57 imgs/sec, lr:2.409552052995423e-08
epoch[319], iter[102300], loss:31.952403, 496.02 imgs/sec, lr:2.409552052995423e-08
...
```
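For reference, distributed runs like the one above need MindSpore's communication backend initialized before the dataset and model are built; a minimal sketch of that setup (an illustration, not the repository's exact code in train.py):

```python
# A minimal sketch of MindSpore distributed initialization for data-parallel
# training; the actual setup lives in train.py and may differ.
from mindspore import context
from mindspore.context import ParallelMode
from mindspore.communication.management import init, get_rank, get_group_size

context.set_context(mode=context.GRAPH_MODE, device_target='Ascend')
init()  # reads the rank table / communication env set up by the launch script
rank, group_size = get_rank(), get_group_size()
context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL,
                                  device_num=group_size, gradients_mean=True)
```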
### [Evaluation Process](#contents)

#### Evaluation

If running on GPU, please add `--device_target=GPU` to the python command below or use the "_gpu" shell script ("xxx_gpu.sh").

```command
python eval.py \
--data_dir=./dataset/coco2014 \
--pretrained=yolov3.ckpt \
--testing_shape=416 > log.txt 2>&1 &
OR
sh run_eval.sh dataset/coco2014/ checkpoint/0-319_102400.ckpt
```

The above python command will run in the background. You can view the results through the file `log.txt`. The mAP on the test dataset will be reported as follows. This is the standard output format of `pycocotools`; you can refer to [cocodataset](https://cocodataset.org/#detection-eval) for more detail.
```eval log
# log.txt
=============coco eval reulst=========
Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.311
Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.528
Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.322
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.127
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.323
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.428
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.259
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.398
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.423
Average Recall    (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.224
Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.442
Average Recall    (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.551
```
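These summaries are produced by `pycocotools`' COCOeval; a minimal sketch of generating one, assuming detections were written to a hypothetical predictions.json in the standard COCO results format:

```python
# A minimal sketch of producing the AP/AR summary above with pycocotools;
# predictions.json is a hypothetical detections file in COCO results format.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('./dataset/coco2014/annotations/instances_val2014.json')
coco_dt = coco_gt.loadRes('predictions.json')
coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints the AP/AR table shown above
```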
## [Model Description](#contents)

### [Performance](#contents)

#### Evaluation Performance

| Parameters                 | YOLO                                                        | YOLO                                                        |
| -------------------------- | ----------------------------------------------------------- | ----------------------------------------------------------- |
| Model Version              | YOLOv3                                                      | YOLOv3                                                      |
| Resource                   | Ascend 910; CPU 2.60GHz, 192 cores; Memory, 755G            | NV SMX2 V100-16G; CPU 2.10GHz, 96 cores; Memory, 251G       |
| Uploaded Date              | 09/15/2020 (month/day/year)                                 | 09/02/2020 (month/day/year)                                 |
| MindSpore Version          | 1.0.0                                                       | 1.0.0                                                       |
| Dataset                    | COCO2014                                                    | COCO2014                                                    |
| Training Parameters        | epoch=320, batch_size=32, lr=0.001, momentum=0.9            | epoch=320, batch_size=32, lr=0.001, momentum=0.9            |
| Optimizer                  | Momentum                                                    | Momentum                                                    |
| Loss Function              | Sigmoid Cross Entropy with logits                           | Sigmoid Cross Entropy with logits                           |
| Outputs                    | boxes and labels                                            | boxes and labels                                            |
| Loss                       | 34                                                          | 34                                                          |
| Speed                      | 1pc: 350 ms/step                                            | 1pc: 600 ms/step                                            |
| Total time                 | 8pc: 18.5 hours                                             | 8pc: 18 hours (shape=416)                                   |
| Parameters (M)             | 62.1                                                        | 62.1                                                        |
| Checkpoint for Fine tuning | 474M (.ckpt file)                                           | 474M (.ckpt file)                                           |
| Scripts                    | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/yolov3_darknet53 | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/yolov3_darknet53 |
#### Inference Performance

| Parameters          | YOLO                        | YOLO                          |
| ------------------- | --------------------------- | ----------------------------- |
| Model Version       | YOLOv3                      | YOLOv3                        |
| Resource            | Ascend 910                  | NV SMX2 V100-16G              |
| Uploaded Date       | 09/15/2020 (month/day/year) | 08/20/2020 (month/day/year)   |
| MindSpore Version   | 1.0.0                       | 1.0.0                         |
| Dataset             | COCO2014, 40,504 images     | COCO2014, 40,504 images       |
| batch_size          | 1                           | 1                             |
| Outputs             | mAP                         | mAP                           |
| Accuracy            | 8pcs: 31.1%                 | 8pcs: 29.7%~30.3% (shape=416) |
| Model for inference | 474M (.ckpt file)           | 474M (.ckpt file)             |
## [Description of Random Situation](#contents)

There are random seeds in the distributed_sampler.py, transforms.py, and yolo_dataset.py files.
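For a reproducible run, those seeds can be pinned at the top of the training script; a minimal sketch, not the repository's exact seeding code:

```python
# A minimal sketch for pinning random seeds; the repository seeds
# distributed_sampler.py, transforms.py, and yolo_dataset.py itself.
import random
import numpy as np
from mindspore.common import set_seed

random.seed(1)
np.random.seed(1)
set_seed(1)  # seeds MindSpore's global generators
```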
## [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).