# Contents

- [YOLOv4 Description](#yolov4-description)
- [Model Architecture](#model-architecture)
- [Pretrain Model](#pretrain-model)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Script Parameters](#script-parameters)
    - [Training Process](#training-process)
        - [Training](#training)
    - [Evaluation Process](#evaluation-process)
        - [Evaluation](#evaluation)
    - [Convert Process](#convert-process)
        - [Convert](#convert)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Evaluation Performance](#evaluation-performance)
        - [Inference Performance](#inference-performance)
- [ModelZoo Homepage](#modelzoo-homepage)
# [YOLOv4 Description](#contents)

YOLOv4 is a state-of-the-art detector that is faster (FPS) and more accurate (MS COCO AP50...95 and AP50) than all available alternative detectors.
The authors of YOLOv4 verified a large number of features and selected those that improve the accuracy of both the classifier and the detector.
These features can serve as best practices for future studies and developments.

[Paper](https://arxiv.org/pdf/2004.10934.pdf):
Bochkovskiy A, Wang C Y, Liao H Y M. YOLOv4: Optimal Speed and Accuracy of Object Detection[J]. arXiv preprint arXiv:2004.10934, 2020.
# [Model Architecture](#contents)

YOLOv4 uses the CSPDarknet53 backbone, an SPP additional module, a PANet path-aggregation neck, and the anchor-based YOLOv4 head as its architecture.
# [Pretrain Model](#contents)

YOLOv4 needs a CSPDarknet53 backbone to extract image features for detection. You can take a classifier training script from our model zoo, modify the backbone structure according to CSPDarknet53 in `./src/cspdarknet53.py`, and finally train it on ImageNet2012 to obtain the CSPDarknet53 pretrained model.

Steps:

1. Get the ResNet50 training script from our model zoo.
2. Modify the network architecture according to CSPDarknet53 in `./src/cspdarknet53.py`.
3. Train CSPDarknet53 on ImageNet2012.
# [Dataset](#contents)

Dataset used: [COCO2017](https://cocodataset.org/#download)

Dataset support: [COCO2017] or datasets in the same format as MS COCO
Annotation support: [COCO2017] or annotations in the same format as MS COCO

- The directory structure is as follows (directory and file names are user-defined):
```text
├── dataset
    ├── YOLOv4
        ├── annotations
        │   ├─ train.json
        │   └─ val.json
        ├─ train
        │   ├─ picture1.jpg
        │   ├─ ...
        │   └─ picturen.jpg
        └─ val
            ├─ picture1.jpg
            ├─ ...
            └─ picturen.jpg
```
We suggest users try our model with the MS COCO dataset;
other datasets must use the same format as MS COCO.
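If you bring your own data, a quick sanity check along the following lines (a minimal sketch; paths are placeholders) can confirm that an annotation file follows the COCO layout before training:

```python
# Hypothetical sanity check (paths are placeholders): confirm that a custom
# annotation file provides the top-level keys the COCO format requires.
import json

with open("dataset/YOLOv4/annotations/train.json", "r") as f:
    ann = json.load(f)

for key in ("images", "annotations", "categories"):
    assert key in ann, f"missing COCO key: {key}"

# Each annotation entry carries an image id, a category id and an [x, y, w, h] box.
sample = ann["annotations"][0]
print(sample["image_id"], sample["category_id"], sample["bbox"])
```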
# [Environment Requirements](#contents)

- Hardware (Ascend)
    - Prepare a hardware environment with Ascend processors.
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)

- After installing MindSpore via the official website, you can start training and evaluation as follows:
- Prepare the CSPDarknet53.ckpt and hccl_8p.json files before running the network.
    - Please refer to [Pretrain Model](#pretrain-model).
    - Generate hccl_8p.json by running the script model_zoo/utils/hccl_tools/hccl_tools.py.
      The parameter "[0,8)" below means that an hccl_8p.json file for cards 0 to 7 is generated.

```bash
python hccl_tools.py --device_num "[0,8)"
```
```text
# The training_shape parameter defines the input image shapes for the network. The default set is
[416, 416],
[448, 448],
[480, 480],
[512, 512],
[544, 544],
[576, 576],
[608, 608],
[640, 640],
[672, 672],
[704, 704],
[736, 736].
# By default all 11 shapes are used as input shapes; alternatively, training_shape can be fixed to a single shape.
```
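For intuition, here is an illustrative sketch (the names are ours, not the exact ones in yolo_dataset.py) of how multi-scale training draws one of these shapes every `resize_rate` batches:

```python
# Illustrative sketch of multi-scale input selection: every `resize_rate`
# batches a new square shape is drawn at random from the candidate list above.
import random

SHAPES = [416, 448, 480, 512, 544, 576, 608, 640, 672, 704, 736]
RESIZE_RATE = 10  # matches the --resize_rate default

def pick_shape(batch_index, current_shape):
    """Return the square input size to use for this batch."""
    if batch_index % RESIZE_RATE == 0:
        return random.choice(SHAPES)
    return current_shape

shape = SHAPES[0]
for step in range(1, 31):
    shape = pick_shape(step, shape)
```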
```bash
# run training example (1p) by python command (training with a single scale)
python train.py \
    --data_dir=./dataset/xxx \
    --pretrained_backbone=cspdarknet53_backbone.ckpt \
    --is_distributed=0 \
    --lr=0.1 \
    --t_max=320 \
    --max_epoch=320 \
    --warmup_epochs=4 \
    --training_shape=416 \
    --lr_scheduler=cosine_annealing > log.txt 2>&1 &
```

```bash
# standalone training example (1p) by shell script (training with a single scale)
sh run_standalone_train.sh dataset/xxx cspdarknet53_backbone.ckpt
```

```bash
# For Ascend devices, distributed training example (8p) by shell script (training with multi-scale)
sh run_distribute_train.sh dataset/xxx cspdarknet53_backbone.ckpt rank_table_8p.json
```

```bash
# run evaluation by python command
python eval.py \
    --data_dir=./dataset/xxx \
    --pretrained=yolov4.ckpt \
    --testing_shape=608 > log.txt 2>&1 &
```

```bash
# run evaluation by shell script
sh run_eval.sh dataset/xxx checkpoint/xxx.ckpt
```
# [Script Description](#contents)

## [Script and Sample Code](#contents)

```text
└─yolov4
  ├─README.md
  ├─mindspore_hub_conf.py          # config for mindspore hub
  ├─scripts
  │ ├─run_standalone_train.sh      # launch standalone training (1p) on Ascend
  │ ├─run_distribute_train.sh      # launch distributed training (8p) on Ascend
  │ ├─run_eval.sh                  # launch evaluation on Ascend
  │ └─run_test.sh                  # launch testing on Ascend
  ├─src
  │ ├─__init__.py                  # python init file
  │ ├─config.py                    # parameter configuration
  │ ├─cspdarknet53.py              # backbone of network
  │ ├─distributed_sampler.py       # iterator of dataset
  │ ├─export.py                    # convert mindspore model to air model
  │ ├─initializer.py               # initializer of parameters
  │ ├─logger.py                    # log function
  │ ├─loss.py                      # loss function
  │ ├─lr_scheduler.py              # generate learning rate
  │ ├─transforms.py                # preprocess data
  │ ├─util.py                      # util function
  │ ├─yolo.py                      # yolov4 network
  │ └─yolo_dataset.py              # create dataset for YOLOv4
  ├─eval.py                        # evaluate val results
  ├─test.py                        # evaluate test results
  └─train.py                       # train net
```
## [Script Parameters](#contents)

Major parameters of train.py are as follows:

```text
optional arguments:
  -h, --help            show this help message and exit
  --device_target       device where the code will be implemented: "Ascend", default is "Ascend"
  --data_dir DATA_DIR   Train dataset directory.
  --per_batch_size PER_BATCH_SIZE
                        Batch size for training. Default: 8.
  --pretrained_backbone PRETRAINED_BACKBONE
                        The ckpt file of CspDarkNet53. Default: "".
  --resume_yolov4 RESUME_YOLOV4
                        The ckpt file of YOLOv4, used for fine-tuning. Default: "".
  --lr_scheduler LR_SCHEDULER
                        Learning rate scheduler, options: exponential, cosine_annealing.
                        Default: exponential.
  --lr LR               Learning rate. Default: 0.001.
  --lr_epochs LR_EPOCHS
                        Epochs at which lr changes, separated by ",". Default: 220,250.
  --lr_gamma LR_GAMMA   Factor by which the exponential lr_scheduler decreases lr. Default: 0.1.
  --eta_min ETA_MIN     eta_min in the cosine_annealing scheduler. Default: 0.
  --t_max T_MAX         t_max in the cosine_annealing scheduler. Default: 320.
  --max_epoch MAX_EPOCH
                        Max number of epochs to train the model. Default: 320.
  --warmup_epochs WARMUP_EPOCHS
                        Warmup epochs. Default: 0.
  --weight_decay WEIGHT_DECAY
                        Weight decay factor. Default: 0.0005.
  --momentum MOMENTUM   Momentum. Default: 0.9.
  --loss_scale LOSS_SCALE
                        Static loss scale. Default: 64.
  --label_smooth LABEL_SMOOTH
                        Whether to use label smoothing in CE. Default: 0.
  --label_smooth_factor LABEL_SMOOTH_FACTOR
                        Smoothing strength applied to the original one-hot labels. Default: 0.1.
  --log_interval LOG_INTERVAL
                        Logging interval in steps. Default: 100.
  --ckpt_path CKPT_PATH
                        Checkpoint save location. Default: outputs/.
  --ckpt_interval CKPT_INTERVAL
                        Checkpoint save interval. Default: None.
  --is_save_on_master IS_SAVE_ON_MASTER
                        Save ckpt on master rank or all ranks, 1 for master, 0 for all ranks.
                        Default: 1.
  --is_distributed IS_DISTRIBUTED
                        Distributed training or not, 1 for yes, 0 for no. Default: 1.
  --rank RANK           Local rank for distributed training. Default: 0.
  --group_size GROUP_SIZE
                        World size of devices. Default: 1.
  --need_profiler NEED_PROFILER
                        Whether to use the profiler, 0 for no, 1 for yes. Default: 0.
  --training_shape TRAINING_SHAPE
                        Fixed training shape. Default: "".
  --resize_rate RESIZE_RATE
                        Resize rate for multi-scale training. Default: 10.
```
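As a reference for the scheduler parameters above (`--lr`, `--eta_min`, `--t_max`, `--warmup_epochs`), here is a sketch of the cosine_annealing schedule with linear warmup; the exact implementation lives in src/lr_scheduler.py, so treat this as illustrative:

```python
# Illustrative cosine-annealing learning-rate schedule with linear warmup.
import math

def cosine_annealing_lr(epoch, lr=0.012, eta_min=0.0, t_max=320, warmup_epochs=4):
    if epoch < warmup_epochs:
        # linear warmup toward the base learning rate
        return lr * (epoch + 1) / warmup_epochs
    return eta_min + 0.5 * (lr - eta_min) * (1 + math.cos(math.pi * epoch / t_max))

print([round(cosine_annealing_lr(e), 5) for e in (0, 4, 160, 319)])
```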
## [Training Process](#contents)

YOLOv4 can be trained from scratch or with the CSPDarknet53 backbone.
CSPDarknet53 is a classifier that can be trained on a dataset such as ImageNet (ILSVRC2012).
It is easy for users to train CSPDarknet53: just replace the backbone of the ResNet50 classifier with CSPDarknet53.
ResNet50 is readily available in the MindSpore model zoo.
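For reference, loading the pretrained backbone passed via `--pretrained_backbone` boils down to something like the following minimal sketch (the network class name is assumed from src/yolo.py; train.py handles this automatically):

```python
# Minimal sketch of loading a pretrained CSPDarknet53 checkpoint into the detector.
from mindspore.train.serialization import load_checkpoint, load_param_into_net

from src.yolo import YOLOV4CspDarkNet53  # class name assumed from src/yolo.py

network = YOLOV4CspDarkNet53()
param_dict = load_checkpoint("cspdarknet53_backbone.ckpt")
load_param_into_net(network, param_dict)
```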
### Training

For Ascend devices, standalone training example (1p) by shell script:

```bash
sh run_standalone_train.sh dataset/coco2017 cspdarknet53_backbone.ckpt
```

```text
python train.py \
    --data_dir=/dataset/xxx \
    --pretrained_backbone=cspdarknet53_backbone.ckpt \
    --is_distributed=0 \
    --lr=0.1 \
    --t_max=320 \
    --max_epoch=320 \
    --warmup_epochs=4 \
    --training_shape=416 \
    --lr_scheduler=cosine_annealing > log.txt 2>&1 &
```
The python command above runs in the background; you can view the results in the file log.txt.
After training, you will find checkpoint files under the outputs folder by default. Loss values are logged as follows:
```text
# grep "loss:" train/log.txt
2020-10-16 15:00:37,483:INFO:epoch[0], iter[0], loss:8248.610352, 0.03 imgs/sec, lr:2.0466639227834094e-07
2020-10-16 15:00:52,897:INFO:epoch[0], iter[100], loss:5058.681709, 51.91 imgs/sec, lr:2.067130662908312e-05
2020-10-16 15:01:08,286:INFO:epoch[0], iter[200], loss:1583.772806, 51.99 imgs/sec, lr:4.1137944208458066e-05
2020-10-16 15:01:23,457:INFO:epoch[0], iter[300], loss:1229.840823, 52.75 imgs/sec, lr:6.160458724480122e-05
2020-10-16 15:01:39,046:INFO:epoch[0], iter[400], loss:1155.170310, 51.32 imgs/sec, lr:8.207122300518677e-05
2020-10-16 15:01:54,138:INFO:epoch[0], iter[500], loss:920.922433, 53.02 imgs/sec, lr:0.00010253786604152992
2020-10-16 15:02:09,209:INFO:epoch[0], iter[600], loss:808.610681, 53.09 imgs/sec, lr:0.00012300450180191547
2020-10-16 15:02:24,240:INFO:epoch[0], iter[700], loss:621.931513, 53.23 imgs/sec, lr:0.00014347114483825862
2020-10-16 15:02:39,280:INFO:epoch[0], iter[800], loss:527.155985, 53.20 imgs/sec, lr:0.00016393778787460178
...
```
### Distributed Training

For Ascend devices, distributed training example (8p) by shell script:

```bash
sh run_distribute_train.sh dataset/coco2017 cspdarknet53_backbone.ckpt rank_table_8p.json
```

The above shell script runs distributed training in the background. You can view the results in the files train_parallel[X]/log.txt. Loss values are logged as follows:
```text
# distribute training result(8p, dynamic shape)
...
2020-10-16 20:40:17,148:INFO:epoch[0], iter[800], loss:283.765033, 248.93 imgs/sec, lr:0.00026233625249005854
2020-10-16 20:40:43,576:INFO:epoch[0], iter[900], loss:257.549973, 242.18 imgs/sec, lr:0.00029508734587579966
2020-10-16 20:41:12,743:INFO:epoch[0], iter[1000], loss:252.426355, 219.43 imgs/sec, lr:0.00032783843926154077
2020-10-16 20:41:43,153:INFO:epoch[0], iter[1100], loss:232.104760, 210.46 imgs/sec, lr:0.0003605895326472819
2020-10-16 20:42:12,583:INFO:epoch[0], iter[1200], loss:236.973975, 217.47 imgs/sec, lr:0.00039334059692919254
2020-10-16 20:42:39,004:INFO:epoch[0], iter[1300], loss:228.881298, 242.24 imgs/sec, lr:0.00042609169031493366
2020-10-16 20:43:07,811:INFO:epoch[0], iter[1400], loss:255.025714, 222.19 imgs/sec, lr:0.00045884278370067477
2020-10-16 20:43:38,177:INFO:epoch[0], iter[1500], loss:223.847151, 210.76 imgs/sec, lr:0.0004915939061902463
2020-10-16 20:44:07,766:INFO:epoch[0], iter[1600], loss:222.302487, 216.30 imgs/sec, lr:0.000524344970472157
2020-10-16 20:44:37,411:INFO:epoch[0], iter[1700], loss:211.063779, 215.89 imgs/sec, lr:0.0005570960929617286
2020-10-16 20:45:03,092:INFO:epoch[0], iter[1800], loss:210.425542, 249.21 imgs/sec, lr:0.0005898471572436392
2020-10-16 20:45:32,767:INFO:epoch[1], iter[1900], loss:208.449521, 215.67 imgs/sec, lr:0.0006225982797332108
2020-10-16 20:45:59,163:INFO:epoch[1], iter[2000], loss:209.700071, 242.48 imgs/sec, lr:0.0006553493440151215
...
```
### Transfer Training

You can train your own model based on either a pretrained classification model or a pretrained detection model. Perform transfer training with the following steps:

1. Convert your own dataset to the COCO style. Otherwise, you have to add your own data preprocessing code.
2. Change config.py according to your own dataset, especially `num_classes`.
3. Set the argument `filter_weight` to `True` and `pretrained_checkpoint` to the pretrained checkpoint when calling `train.py`; this filters the final detection-head weights out of the pretrained model (see the sketch after this list).
4. Build your own bash scripts with the new config and arguments for convenience.
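Conceptually, `filter_weight` amounts to something like the following rough sketch (a hypothetical helper, not the exact model-zoo code; the `head_keywords` name pattern is an assumption):

```python
# Rough sketch: drop detection-head parameters whose shape depends on
# num_classes so the remaining pretrained weights still load cleanly.
from mindspore.train.serialization import load_checkpoint, load_param_into_net

def load_filtered(net, ckpt_path, head_keywords=("backblock",)):  # keyword is assumed
    """Load a checkpoint, skipping parameters whose names match `head_keywords`."""
    param_dict = load_checkpoint(ckpt_path)
    filtered = {k: v for k, v in param_dict.items()
                if not any(kw in k for kw in head_keywords)}
    load_param_into_net(net, filtered)
```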
## [Evaluation Process](#contents)

### Valid

```bash
python eval.py \
    --data_dir=./dataset/coco2017 \
    --pretrained=yolov4.ckpt \
    --testing_shape=608 > log.txt 2>&1 &
OR
sh run_eval.sh dataset/coco2017 checkpoint/yolov4.ckpt
```
The above python command runs in the background. You can view the results in the file "log.txt". The mAP on the validation dataset will be reported as follows:
```text
# log.txt
=============coco eval reulst=========
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.442
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.635
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.479
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.274
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.485
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.567
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.331
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.545
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.590
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.418
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.638
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.717
```
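The AP/AR table above is the standard pycocotools summary. For reference, a minimal sketch of reproducing it from a predictions file (paths are placeholders):

```python
# Reproduce the COCO bbox AP/AR summary from a predictions JSON file.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("dataset/coco2017/annotations/instances_val2017.json")  # placeholder
coco_dt = coco_gt.loadRes("predict.json")  # placeholder

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()
```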
### Test-dev

```bash
python test.py \
    --data_dir=./dataset/coco2017 \
    --pretrained=yolov4.ckpt \
    --testing_shape=608 > log.txt 2>&1 &
OR
sh run_test.sh dataset/coco2017 checkpoint/yolov4.ckpt
```

The predict_xxx.json will be found in test/outputs/%Y-%m-%d_time_%H_%M_%S/.
Rename the file predict_xxx.json to detections_test-dev2017_yolov4_results.json and compress it into detections_test-dev2017_yolov4_results.zip.
Submit detections_test-dev2017_yolov4_results.zip to the MS COCO test-dev (bbox) evaluation server: <https://competitions.codalab.org/competitions/20794#participate>
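The rename-and-compress step can be scripted, for example as follows (illustrative only; the source path is a placeholder that must point at the actual timestamped output file):

```python
# Rename the prediction file and pack the zip expected by the COCO server.
import shutil
import zipfile

src = "test/outputs/<timestamped-dir>/predict_xxx.json"  # placeholder path
dst = "detections_test-dev2017_yolov4_results.json"
shutil.copyfile(src, dst)

with zipfile.ZipFile("detections_test-dev2017_yolov4_results.zip", "w",
                     zipfile.ZIP_DEFLATED) as zf:
    zf.write(dst)
```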
You will find results like the following at the end of the scoring output log:
```text
overall performance
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.447
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.642
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.487
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.267
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.485
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.549
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.335
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.547
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.584
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.392
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.627
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.711
```
## [Convert Process](#contents)

### Convert

If you want to run inference on Ascend 310, you should convert the model to MINDIR:

```bash
python export.py --ckpt_file [CKPT_PATH] --file_name [FILE_NAME] --file_format [FILE_FORMAT]
```

The ckpt_file parameter is required, and `FILE_FORMAT` must be one of ["AIR", "ONNX", "MINDIR"].
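For orientation, export.py roughly does the following (a sketch under assumptions: the class name is taken from src/yolo.py and the input shape follows the 608 testing shape used above):

```python
# Rebuild the network, load the checkpoint, and export with a dummy input.
import numpy as np
from mindspore import Tensor
from mindspore.train.serialization import export, load_checkpoint, load_param_into_net

from src.yolo import YOLOV4CspDarkNet53  # class name assumed

net = YOLOV4CspDarkNet53()
load_param_into_net(net, load_checkpoint("yolov4.ckpt"))
dummy = Tensor(np.zeros([1, 3, 608, 608], np.float32))  # NCHW, testing_shape=608
export(net, dummy, file_name="yolov4", file_format="MINDIR")
```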
## [Inference Process](#contents)

### Usage

Before performing inference, the MINDIR file must be exported with the export script in an Ascend 910 environment.
Currently, batch_size can only be set to 1. The precision calculation process needs about 70 GB+ of memory.

```shell
# Ascend310 inference
sh run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DEVICE_ID] [ANN_FILE]
```

`DEVICE_ID` is optional; the default value is 0.
### Result

Inference results are saved in the current path; you can find results like the following in the acc.log file.

```text
=============coco eval reulst=========
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.438
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.630
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.475
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.272
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.481
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.567
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.330
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.542
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.588
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.410
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.636
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.716
```
# [Model Description](#contents)

## [Performance](#contents)

### Evaluation Performance

YOLOv4 on 118K images (the annotation and data format must be the same as COCO2017)

| Parameters                 | YOLOv4                                                        |
| -------------------------- | ------------------------------------------------------------- |
| Resource                   | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755 GB; OS EulerOS 2.8 |
| Uploaded Date              | 10/16/2020 (month/day/year)                                   |
| MindSpore Version          | 1.0.0-alpha                                                   |
| Dataset                    | 118K images                                                   |
| Training Parameters        | epoch=320, batch_size=8, lr=0.012, momentum=0.9               |
| Optimizer                  | Momentum                                                      |
| Loss Function              | Sigmoid Cross Entropy with logits, GIoU Loss                  |
| Outputs                    | boxes and labels                                              |
| Loss                       | 50                                                            |
| Speed                      | 1p: 53 FPS; 8p: 390 FPS (shape=416), 220 FPS (dynamic shape)  |
| Total time                 | 48h (dynamic shape)                                           |
| Checkpoint for Fine tuning | about 500M (.ckpt file)                                       |
| Scripts                    | <https://gitee.com/mindspore/mindspore/tree/master/model_zoo/> |
### Inference Performance

YOLOv4 on 20K images (the annotation and data format must be the same as COCO test2017)

| Parameters          | YOLOv4                                                        |
| ------------------- | ------------------------------------------------------------- |
| Resource            | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755 GB; OS EulerOS 2.8 |
| Uploaded Date       | 10/16/2020 (month/day/year)                                   |
| MindSpore Version   | 1.0.0-alpha                                                   |
| Dataset             | 20K images                                                    |
| batch_size          | 1                                                             |
| Outputs             | box positions and scores, and probability                     |
| Accuracy            | mAP >= 44.7% (shape=608)                                      |
| Model for inference | about 500M (.ckpt file)                                       |
# [Description of Random Situation](#contents)

In dataset.py, we set the seed inside the `create_dataset` function.
In var_init.py, we set the seed for weight initialization.
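For fully repeatable experiments, MindSpore's global seed can also be fixed near the top of a script, as the model-zoo training scripts typically do (illustrative only):

```python
# Fix MindSpore's global seed for repeatable runs.
from mindspore.common import set_seed

set_seed(1)
```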
# [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).