# Contents

- [YOLOv4 Description](#yolov4-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Script Parameters](#script-parameters)
    - [Training Process](#training-process)
        - [Training](#training)
    - [Evaluation Process](#evaluation-process)
        - [Evaluation](#evaluation)
    - [Convert Process](#convert-process)
        - [Convert](#convert)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Evaluation Performance](#evaluation-performance)
        - [Inference Performance](#inference-performance)
- [ModelZoo Homepage](#modelzoo-homepage)
# [YOLOv4 Description](#contents)

YOLOv4 is a state-of-the-art detector that is faster (FPS) and more accurate (MS COCO AP50...95 and AP50) than all available alternative detectors.
YOLOv4 verified a large number of features and selected those that improve the accuracy of both the classifier and the detector.
These features can serve as best practices for future studies and developments.

[Paper](https://arxiv.org/pdf/2004.10934.pdf):
Bochkovskiy A, Wang C Y, Liao H Y M. YOLOv4: Optimal Speed and Accuracy of Object Detection[J]. arXiv preprint arXiv:2004.10934, 2020.
# [Model Architecture](#contents)

YOLOv4 uses a CSPDarknet53 backbone, an SPP additional module, a PANet path-aggregation neck, and the YOLOv4 (anchor-based) head.
# [Dataset](#contents)

Dataset support: [MS COCO] or a dataset with the same format as MS COCO
Annotation support: [MS COCO] or annotations with the same format as MS COCO

- The directory structure is as follows; directory and file names are user-defined:

```
├── dataset
    ├── YOLOv4
        ├── annotations
        │   ├─ train.json
        │   └─ val.json
        ├─ train
        │   ├─ picture1.jpg
        │   ├─ ...
        │   └─ picturen.jpg
        └─ val
            ├─ picture1.jpg
            ├─ ...
            └─ picturen.jpg
```

We suggest users try our model with the MS COCO dataset; other datasets must use the same format as MS COCO. A quick layout check is sketched below.
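As a small sanity check (not part of the repository's scripts), the following sketch verifies that a dataset root matches the layout above before training is launched; the root path is a placeholder:

```python
# Sanity-check sketch (not part of the repository's scripts): verify that
# the dataset directory matches the layout documented above.
import os

def check_coco_layout(root):
    expected = [
        os.path.join(root, "annotations", "train.json"),
        os.path.join(root, "annotations", "val.json"),
        os.path.join(root, "train"),
        os.path.join(root, "val"),
    ]
    missing = [path for path in expected if not os.path.exists(path)]
    if missing:
        raise FileNotFoundError("Missing dataset entries: %s" % missing)

check_coco_layout("./dataset/YOLOv4")  # placeholder root path
```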
# [Environment Requirements](#contents)

- Hardware (Ascend)
    - Prepare a hardware environment with an Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get access to the resources.
- Framework
    - [MindSpore](https://www.mindspore.cn/)
- For more information, please check the resources below:
    - [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
    - [MindSpore API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
# [Quick Start](#contents)

After installing MindSpore via the official website, you can start training and evaluation as follows:

```
# The cspdarknet53_backbone.ckpt in the following scripts is obtained by training cspdarknet53 as in the paper.
# The parameter training_shape defines the image shape for the network; the default is
[416, 416],
[448, 448],
[480, 480],
[512, 512],
[544, 544],
[576, 576],
[608, 608],
[640, 640],
[672, 672],
[704, 704],
[736, 736].
# This means 11 kinds of shape are used as input shapes; alternatively, a single fixed shape can be set.
```
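The multi-scale mechanism can be pictured with the sketch below; it is an illustrative assumption of how a shape could be drawn from the list every `resize_rate` steps, not the exact logic of src/yolo_dataset.py:

```python
# Illustrative sketch of multi-scale input selection (an assumption, not
# the repository's exact logic): pick one of the 11 shapes every
# `resize_rate` steps.
import random

TRAINING_SHAPES = [[size, size] for size in range(416, 737, 32)]  # 416 .. 736, 11 shapes

def pick_shape(step, resize_rate=10, seed=0):
    """Return the input shape for this step; it changes every resize_rate steps."""
    random.seed(seed + step // resize_rate)
    return random.choice(TRAINING_SHAPES)
```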
```
# run training example (1p) by python command
python train.py \
    --data_dir=./dataset/xxx \
    --pretrained_backbone=cspdarknet53_backbone.ckpt \
    --is_distributed=0 \
    --lr=0.1 \
    --t_max=320 \
    --max_epoch=320 \
    --warmup_epochs=4 \
    --training_shape=416 \
    --lr_scheduler=cosine_annealing > log.txt 2>&1 &
```

```
# standalone training example (1p) by shell script
sh run_standalone_train.sh dataset/xxx cspdarknet53_backbone.ckpt
```

```
# For Ascend devices, distributed training example (8p) by shell script
sh run_distribute_train.sh dataset/xxx cspdarknet53_backbone.ckpt rank_table_8p.json
```

```
# run evaluation by python command
python eval.py \
    --data_dir=./dataset/xxx \
    --pretrained=yolov4.ckpt \
    --testing_shape=416 > log.txt 2>&1 &
```

```
# run evaluation by shell script
sh run_eval.sh dataset/xxx checkpoint/xxx.ckpt
```
# [Script Description](#contents)

## [Script and Sample Code](#contents)

```
└─yolov4
  ├─README.md
  ├─mindspore_hub_conf.py             # config for mindspore hub
  ├─scripts
  │ ├─run_standalone_train.sh         # launch standalone training (1p) in ascend
  │ ├─run_distribute_train.sh         # launch distributed training (8p) in ascend
  │ ├─run_eval.sh                     # launch evaluating in ascend
  │ └─run_test.sh                     # launch testing in ascend
  ├─src
  │ ├─__init__.py                     # python init file
  │ ├─config.py                       # parameter configuration
  │ ├─cspdarknet53.py                 # backbone of network
  │ ├─distributed_sampler.py          # iterator of dataset
  │ ├─export.py                       # convert mindspore model to air model
  │ ├─initializer.py                  # initializer of parameters
  │ ├─logger.py                       # log function
  │ ├─loss.py                         # loss function
  │ ├─lr_scheduler.py                 # generate learning rate
  │ ├─transforms.py                   # preprocess data
  │ ├─util.py                         # util function
  │ ├─yolo.py                         # yolov4 network
  │ └─yolo_dataset.py                 # create dataset for YOLOv4
  ├─eval.py                           # evaluate val results
  ├─test.py                           # evaluate test results
  └─train.py                          # train net
```
## [Script Parameters](#contents)

Major parameters of train.py are as follows:

```
optional arguments:
  -h, --help            show this help message and exit
  --device_target       device where the code will be implemented: "Ascend", default is "Ascend"
  --data_dir DATA_DIR   Train dataset directory.
  --per_batch_size PER_BATCH_SIZE
                        Batch size for training. Default: 32.
  --pretrained_backbone PRETRAINED_BACKBONE
                        The ckpt file of CspDarkNet53. Default: "".
  --resume_yolov4 RESUME_YOLOV4
                        The ckpt file of YOLOv4, used to fine-tune.
                        Default: ""
  --lr_scheduler LR_SCHEDULER
                        Learning rate scheduler, options: exponential,
                        cosine_annealing. Default: exponential
  --lr LR               Learning rate. Default: 0.001
  --lr_epochs LR_EPOCHS
                        Epochs at which lr changes, split with ",".
                        Default: 220,250
  --lr_gamma LR_GAMMA   Factor by which the exponential lr_scheduler
                        decreases lr. Default: 0.1
  --eta_min ETA_MIN     Eta_min in the cosine_annealing scheduler. Default: 0
  --t_max T_MAX         T-max in the cosine_annealing scheduler. Default: 320
  --max_epoch MAX_EPOCH
                        Max number of epochs to train the model. Default: 320
  --warmup_epochs WARMUP_EPOCHS
                        Warmup epochs. Default: 0
  --weight_decay WEIGHT_DECAY
                        Weight decay factor. Default: 0.0005
  --momentum MOMENTUM   Momentum. Default: 0.9
  --loss_scale LOSS_SCALE
                        Static loss scale. Default: 64
  --label_smooth LABEL_SMOOTH
                        Whether to use label smoothing in CE. Default: 0
  --label_smooth_factor LABEL_SMOOTH_FACTOR
                        Smoothing strength of the original one-hot. Default: 0.1
  --log_interval LOG_INTERVAL
                        Logging interval in steps. Default: 100
  --ckpt_path CKPT_PATH
                        Checkpoint save location. Default: outputs/
  --ckpt_interval CKPT_INTERVAL
                        Save checkpoint interval. Default: None
  --is_save_on_master IS_SAVE_ON_MASTER
                        Save ckpt on master or all ranks, 1 for master, 0 for
                        all ranks. Default: 1
  --is_distributed IS_DISTRIBUTED
                        Distributed training or not, 1 for yes, 0 for no.
                        Default: 1
  --rank RANK           Local rank of distributed training. Default: 0
  --group_size GROUP_SIZE
                        World size of devices. Default: 1
  --need_profiler NEED_PROFILER
                        Whether to use the profiler, 0 for no, 1 for yes.
                        Default: 0
  --training_shape TRAINING_SHAPE
                        Fixed training shape. Default: ""
  --resize_rate RESIZE_RATE
                        Resize rate for multi-scale training. Default: 10
```
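To make the interplay of `lr`, `eta_min`, `t_max`, and `warmup_epochs` concrete, here is a sketch of cosine annealing with linear warmup; the repository's src/lr_scheduler.py may differ in detail:

```python
# Sketch of cosine annealing with linear warmup, using the parameter names
# above; src/lr_scheduler.py may differ in detail.
import math

def cosine_annealing_lr(global_step, steps_per_epoch, lr=0.012,
                        eta_min=0.0, t_max=320, warmup_epochs=4):
    epoch = global_step / steps_per_epoch
    if epoch < warmup_epochs:
        return lr * epoch / warmup_epochs                  # linear warmup
    # standard cosine annealing from lr down to eta_min over t_max epochs
    return eta_min + (lr - eta_min) * (1.0 + math.cos(math.pi * epoch / t_max)) / 2.0
```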
## [Training Process](#contents)

YOLOv4 can be trained from scratch or from the pretrained backbone named cspdarknet53.
Cspdarknet53 is a classifier that can be trained on a dataset such as ImageNet (ILSVRC2012).
It is easy for users to train cspdarknet53: just replace the backbone of the ResNet50 classifier with cspdarknet53.
ResNet50 is readily available in the MindSpore model zoo; a minimal sketch of this swap follows.
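This sketch is an assumption of how such a classifier could be assembled, not the repository's code; it presumes the CspDarkNet53 cell from src/cspdarknet53.py outputs its deepest (N, C, H, W) feature map:

```python
# Minimal sketch (an assumption, not the repository's code) of pretraining
# CspDarkNet53 as a classifier: global-average-pool the deepest feature map
# and attach a Dense classification head, as one would with ResNet50.
import mindspore.nn as nn
from mindspore.ops import operations as P

class CspDarkNet53Classifier(nn.Cell):
    def __init__(self, backbone, feature_dim=1024, num_classes=1000):
        super(CspDarkNet53Classifier, self).__init__()
        self.backbone = backbone              # e.g. the cell from src/cspdarknet53.py
        self.mean = P.ReduceMean(keep_dims=False)
        self.head = nn.Dense(feature_dim, num_classes)

    def construct(self, x):
        feature = self.backbone(x)            # assumed shape: (N, C, H, W)
        return self.head(self.mean(feature, (2, 3)))
```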
### Training

For Ascend devices, a standalone training example (1p) by shell script:

```
sh run_standalone_train.sh dataset/coco2017 cspdarknet53_backbone.ckpt
```

```
python train.py \
    --data_dir=/dataset/xxx \
    --pretrained_backbone=cspdarknet53_backbone.ckpt \
    --is_distributed=0 \
    --lr=0.1 \
    --t_max=320 \
    --max_epoch=320 \
    --warmup_epochs=4 \
    --training_shape=416 \
    --lr_scheduler=cosine_annealing > log.txt 2>&1 &
```

The python command above will run in the background; you can view the results through the file log.txt.
After training, you will get some checkpoint files under the outputs folder by default. The loss values will look as follows:

```
# grep "loss:" train/log.txt
2020-10-16 15:00:37,483:INFO:epoch[0], iter[0], loss:8248.610352, 0.03 imgs/sec, lr:2.0466639227834094e-07
2020-10-16 15:00:52,897:INFO:epoch[0], iter[100], loss:5058.681709, 51.91 imgs/sec, lr:2.067130662908312e-05
2020-10-16 15:01:08,286:INFO:epoch[0], iter[200], loss:1583.772806, 51.99 imgs/sec, lr:4.1137944208458066e-05
2020-10-16 15:01:23,457:INFO:epoch[0], iter[300], loss:1229.840823, 52.75 imgs/sec, lr:6.160458724480122e-05
2020-10-16 15:01:39,046:INFO:epoch[0], iter[400], loss:1155.170310, 51.32 imgs/sec, lr:8.207122300518677e-05
2020-10-16 15:01:54,138:INFO:epoch[0], iter[500], loss:920.922433, 53.02 imgs/sec, lr:0.00010253786604152992
2020-10-16 15:02:09,209:INFO:epoch[0], iter[600], loss:808.610681, 53.09 imgs/sec, lr:0.00012300450180191547
2020-10-16 15:02:24,240:INFO:epoch[0], iter[700], loss:621.931513, 53.23 imgs/sec, lr:0.00014347114483825862
2020-10-16 15:02:39,280:INFO:epoch[0], iter[800], loss:527.155985, 53.20 imgs/sec, lr:0.00016393778787460178
...
```
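If you want the loss curve rather than raw lines, here is a small parsing sketch, assuming the log format shown above:

```python
# Small sketch for extracting (iter, loss) pairs from log.txt, assuming
# the log format shown above.
import re

PATTERN = re.compile(r"epoch\[(\d+)\], iter\[(\d+)\], loss:([\d.]+)")

def read_loss_curve(log_path="train/log.txt"):
    points = []
    with open(log_path) as log_file:
        for line in log_file:
            match = PATTERN.search(line)
            if match:
                points.append((int(match.group(2)), float(match.group(3))))
    return points
```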
### Distributed Training

For Ascend devices, a distributed training example (8p) by shell script:

```
sh run_distribute_train.sh dataset/coco2017 cspdarknet53_backbone.ckpt rank_table_8p.json
```

The above shell script will run distributed training in the background. You can view the results through the file train_parallel[X]/log.txt. The loss values will look as follows:

```
# distribute training result(8p, shape=416)
...
2020-10-16 14:58:25,142:INFO:epoch[0], iter[1000], loss:242.509259, 388.73 imgs/sec, lr:0.00032783843926154077
2020-10-16 14:58:41,320:INFO:epoch[0], iter[1100], loss:228.137516, 395.61 imgs/sec, lr:0.0003605895326472819
2020-10-16 14:58:57,607:INFO:epoch[0], iter[1200], loss:219.689884, 392.94 imgs/sec, lr:0.00039334059692919254
2020-10-16 14:59:13,787:INFO:epoch[0], iter[1300], loss:216.173309, 395.56 imgs/sec, lr:0.00042609169031493366
2020-10-16 14:59:29,969:INFO:epoch[0], iter[1400], loss:234.500610, 395.54 imgs/sec, lr:0.00045884278370067477
2020-10-16 14:59:46,132:INFO:epoch[0], iter[1500], loss:209.420913, 396.00 imgs/sec, lr:0.0004915939061902463
2020-10-16 15:00:02,416:INFO:epoch[0], iter[1600], loss:210.953930, 393.04 imgs/sec, lr:0.000524344970472157
2020-10-16 15:00:18,651:INFO:epoch[0], iter[1700], loss:197.171296, 394.20 imgs/sec, lr:0.0005570960929617286
2020-10-16 15:00:34,056:INFO:epoch[0], iter[1800], loss:203.928903, 415.47 imgs/sec, lr:0.0005898471572436392
2020-10-16 15:00:53,680:INFO:epoch[1], iter[1900], loss:191.693561, 326.14 imgs/sec, lr:0.0006225982797332108
2020-10-16 15:01:10,442:INFO:epoch[1], iter[2000], loss:196.632004, 381.82 imgs/sec, lr:0.0006553493440151215
2020-10-16 15:01:27,180:INFO:epoch[1], iter[2100], loss:193.813570, 382.43 imgs/sec, lr:0.0006881004082970321
2020-10-16 15:01:43,736:INFO:epoch[1], iter[2200], loss:176.996778, 386.59 imgs/sec, lr:0.0007208515307866037
2020-10-16 15:02:00,294:INFO:epoch[1], iter[2300], loss:185.858901, 386.55 imgs/sec, lr:0.0007536025950685143
...
```
```
# distribute training result(8p, dynamic shape)
...
2020-10-16 20:40:17,148:INFO:epoch[0], iter[800], loss:283.765033, 248.93 imgs/sec, lr:0.00026233625249005854
2020-10-16 20:40:43,576:INFO:epoch[0], iter[900], loss:257.549973, 242.18 imgs/sec, lr:0.00029508734587579966
2020-10-16 20:41:12,743:INFO:epoch[0], iter[1000], loss:252.426355, 219.43 imgs/sec, lr:0.00032783843926154077
2020-10-16 20:41:43,153:INFO:epoch[0], iter[1100], loss:232.104760, 210.46 imgs/sec, lr:0.0003605895326472819
2020-10-16 20:42:12,583:INFO:epoch[0], iter[1200], loss:236.973975, 217.47 imgs/sec, lr:0.00039334059692919254
2020-10-16 20:42:39,004:INFO:epoch[0], iter[1300], loss:228.881298, 242.24 imgs/sec, lr:0.00042609169031493366
2020-10-16 20:43:07,811:INFO:epoch[0], iter[1400], loss:255.025714, 222.19 imgs/sec, lr:0.00045884278370067477
2020-10-16 20:43:38,177:INFO:epoch[0], iter[1500], loss:223.847151, 210.76 imgs/sec, lr:0.0004915939061902463
2020-10-16 20:44:07,766:INFO:epoch[0], iter[1600], loss:222.302487, 216.30 imgs/sec, lr:0.000524344970472157
2020-10-16 20:44:37,411:INFO:epoch[0], iter[1700], loss:211.063779, 215.89 imgs/sec, lr:0.0005570960929617286
2020-10-16 20:45:03,092:INFO:epoch[0], iter[1800], loss:210.425542, 249.21 imgs/sec, lr:0.0005898471572436392
2020-10-16 20:45:32,767:INFO:epoch[1], iter[1900], loss:208.449521, 215.67 imgs/sec, lr:0.0006225982797332108
2020-10-16 20:45:59,163:INFO:epoch[1], iter[2000], loss:209.700071, 242.48 imgs/sec, lr:0.0006553493440151215
...
```
## [Evaluation Process](#contents)

### Valid

```
python eval.py \
    --data_dir=./dataset/coco2017 \
    --pretrained=yolov4.ckpt \
    --testing_shape=608 > log.txt 2>&1 &
OR
sh run_eval.sh dataset/coco2017 checkpoint/yolov4.ckpt
```

The above python command will run in the background. You can view the results through the file "log.txt". The mAP on the validation dataset will be as follows:

```
# log.txt
=============coco eval result=========
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.442
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.635
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.479
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.274
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.485
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.567
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.331
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.545
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.590
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.418
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.638
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.717
```
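For reference, COCO-style metrics like those above are typically produced with pycocotools; the sketch below shows that flow with placeholder file names, and eval.py's internals may differ:

```python
# Sketch of producing the AP/AR table above with pycocotools; file names
# are placeholders and eval.py's internals may differ.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("./dataset/coco2017/annotations/val.json")  # ground truth
coco_dt = coco_gt.loadRes("predictions.json")              # model detections
coco_eval = COCOeval(coco_gt, coco_dt, "bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints the Average Precision / Average Recall table
```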
### Test-dev

```
python test.py \
    --data_dir=./dataset/coco2017 \
    --pretrained=yolov4.ckpt \
    --testing_shape=608 > log.txt 2>&1 &
OR
sh run_test.sh dataset/coco2017 checkpoint/yolov4.ckpt
```

The file predict_xxx.json will be found in test/outputs/%Y-%m-%d_time_%H_%M_%S/.
Rename the file predict_xxx.json to detections_test-dev2017_yolov4_results.json and compress it into detections_test-dev2017_yolov4_results.zip; a small helper sketch for this step follows.
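This sketch assumes placeholder paths; the real output directory and json file name are timestamped:

```python
# Helper sketch for packaging test-dev predictions; the output directory
# and json file name below are placeholders.
import os
import zipfile

out_dir = "test/outputs/2020-10-16_time_12_00_00"          # placeholder run directory
src = os.path.join(out_dir, "predict_xxx.json")            # actual name is timestamped
dst = os.path.join(out_dir, "detections_test-dev2017_yolov4_results.json")
os.rename(src, dst)
with zipfile.ZipFile(dst.replace(".json", ".zip"), "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write(dst, arcname=os.path.basename(dst))
```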
Submit the file detections_test-dev2017_yolov4_results.zip to the MS COCO evaluation server for the test-dev2019 (bbox) task: https://competitions.codalab.org/competitions/20794#participate
You will see results like the following at the end of the "View scoring output log" file:
```
overall performance
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.447
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.642
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.487
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.267
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.485
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.549
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.335
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.547
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.584
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.392
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.627
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.711
```
## [Convert Process](#contents)

### Convert

If you want to run inference on Ascend 310, you should convert the model to an AIR file:

```
python src/export.py --pretrained=[PRETRAINED_BACKBONE] --batch_size=[BATCH_SIZE]
```
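For example, with a trained checkpoint and batch size 1 (illustrative values, not mandated by the scripts):

```
# illustrative values; substitute your own checkpoint and batch size
python src/export.py --pretrained=yolov4.ckpt --batch_size=1
```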
# [Model Description](#contents)

## [Performance](#contents)

### Evaluation Performance

YOLOv4 on 118K images (annotations and data format must be the same as coco2017)

| Parameters                 | YOLOv4                                                       |
| -------------------------- | ------------------------------------------------------------ |
| Resource                   | Ascend 910; CPU 2.60GHz, 192 cores; Memory, 755 GB           |
| Uploaded Date              | 10/16/2020 (month/day/year)                                  |
| MindSpore Version          | 1.0.0-alpha                                                  |
| Dataset                    | 118K images                                                  |
| Training Parameters        | epoch=320, batch_size=8, lr=0.012, momentum=0.9              |
| Optimizer                  | Momentum                                                     |
| Loss Function              | Sigmoid Cross Entropy with logits, GIoU Loss                 |
| Outputs                    | boxes and labels                                             |
| Loss                       | 50                                                           |
| Speed                      | 1p 53 FPS; 8p 390 FPS (shape=416), 220 FPS (dynamic shape)   |
| Total time                 | 48h (dynamic shape)                                          |
| Checkpoint for Fine tuning | about 500M (.ckpt file)                                      |
| Scripts                    | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/ |
### Inference Performance

YOLOv4 on 20K images (annotations and data format must be the same as coco test2017)

| Parameters          | YOLOv4                                             |
| ------------------- | -------------------------------------------------- |
| Resource            | Ascend 910; CPU 2.60GHz, 192 cores; Memory, 755 GB |
| Uploaded Date       | 10/16/2020 (month/day/year)                        |
| MindSpore Version   | 1.0.0-alpha                                        |
| Dataset             | 20K images                                         |
| batch_size          | 1                                                  |
| Outputs             | box positions, scores, and probabilities           |
| Accuracy            | mAP >= 44.7% (shape=608)                           |
| Model for inference | about 500M (.ckpt file)                            |
# [Description of Random Situation](#contents)

In dataset.py, we set the seed inside the `create_dataset` function.
In var_init.py, we set the seed for weight initialization.
# [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).