# Contents

- [CenterFace Description](#centerface-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Script Parameters](#script-parameters)
    - [Training Process](#training-process)
        - [Training](#training)
    - [Testing Process](#testing-process)
        - [Evaluation](#testing)
    - [Evaluation Process](#evaluation-process)
        - [Evaluation](#evaluation)
    - [Convert Process](#convert-process)
        - [Convert](#convert)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Evaluation Performance](#evaluation-performance)
        - [Inference Performance](#inference-performance)
- [ModelZoo Homepage](#modelzoo-homepage)
# [CenterFace Description](#contents)

CenterFace is a practical anchor-free face detection and alignment method for edge devices; we support training and evaluation on Ascend 910.

Face detection and alignment in unconstrained environments is usually deployed on edge devices, which have limited memory and low computing power. CenterFace is a one-stage method that simultaneously predicts facial boxes and landmark locations with real-time speed and high accuracy.

[Paper](https://arxiv.org/ftp/arxiv/papers/1911/1911.03599.pdf): CenterFace: Joint Face Detection and Alignment Using Face as Point. Xu, Yuanyuan (Huaqiao University); Yan, Wan (StarClouds); Sun, Haixin (Xiamen University); Yang, Genke (Shanghai Jiaotong University); Luo, Jiliang (Huaqiao University)
# [Model Architecture](#contents)

CenterFace uses a pretrained mobilenet_v2 backbone, adds a 4-layer FPN, and attaches four prediction heads. Four losses are computed, one per head, and the total loss is their weighted mean.
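As an illustration of that weighted combination, here is a minimal sketch, assuming the four heads are the center heatmap, box size, center offset, and facial landmarks; the weight values are placeholders, and the authoritative implementation lives in src/losses.py and src/config.py.

```python
# Minimal sketch (not the repository implementation, see src/losses.py):
# combine the four per-head losses into a weighted mean.
def total_loss(hm_loss, wh_loss, off_loss, lm_loss,
               weights=(1.0, 0.1, 1.0, 0.1)):  # placeholder weights
    losses = (hm_loss, wh_loss, off_loss, lm_loss)
    weighted_sum = sum(w * l for w, l in zip(weights, losses))
    return weighted_sum / sum(weights)
```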
# [Dataset](#contents)

Note that you can run the scripts based on the dataset mentioned in the original paper or one widely used in the relevant domain/network architecture. In the following sections, we will introduce how to run the scripts using the related dataset below.

Dataset support: [WiderFace] or datasets with the same format as WiderFace
Annotation support: [WiderFace] or annotations with the same format as WiderFace

- The directory structure is as follows; the directory and file names are user-defined:

```
├── dataset
    ├── centerface
        ├── annotations
        │   ├── train.json
        │   └── val.json
        ├── images
        │   ├── train
        │   │   └── images
        │   │       ├── class1_image_folder
        │   │       ├── ...
        │   │       └── classn_image_folder
        │   └── val
        │       └── images
        │           ├── class1_image_folder
        │           ├── ...
        │           └── classn_image_folder
        └── ground_truth
            ├── val.mat
            ├── ...
            └── xxx.mat
```

We suggest users use the WiderFace dataset to experience our model; other datasets need to use the same format as WiderFace.
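As an optional sanity check before training, the minimal sketch below verifies that the expected WiderFace-style layout exists; the root path is an assumption matching the symbolic link created in the quick start.

```python
# Optional sanity check: confirm the dataset follows the layout described above.
import os

dataset_root = "./dataset/centerface"  # assumption: the symlink created in the quick start
expected = [
    "annotations/train.json",
    "annotations/val.json",
    "images/train/images",
    "images/val/images",
    "ground_truth/val.mat",
]
for rel in expected:
    path = os.path.join(dataset_root, rel)
    status = "OK  " if os.path.exists(path) else "MISS"
    print(status, path)
```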
# [Environment Requirements](#contents)

- Hardware (Ascend)
    - Prepare a hardware environment with Ascend processors. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get access to the resources.
- Framework
    - [MindSpore](https://cmc-szv.clouddragon.huawei.com/cmcversion/index/search?searchKey=Do-MindSpore%20V100R001C00B622)
- For more information, please check the resources below:
    - [MindSpore tutorials](https://www.mindspore.cn/tutorial/zh-CN/master/index.html)
    - [MindSpore API](https://www.mindspore.cn/api/zh-CN/master/index.html)
# [Quick Start](#contents)

After installing MindSpore via the official website, you can start training and evaluation as follows:

step1: prepare a pretrained model: train a mobilenet_v2 model with MindSpore or use the script below:

```python
# CenterFace needs a pretrained mobilenet_v2 model:
# mobilenet_v2_key.ckpt is a model whose values are all zero; we only need the key/cell/module names from it.
# You must first use this script to convert your mobilenet_v2 PyTorch model to a MindSpore model as the pretrained model.
# The key/cell/module names must be as follows, otherwise you need to modify the "name_map" function:
#   --mindspore: same as mobilenet_v2_key.ckpt
#   --pytorch: same as the official PyTorch model (e.g., the official mobilenet_v2-b0353104.pth)
python torch_to_ms_mobilenetv2.py --ckpt_fn=./mobilenet_v2_key.ckpt --pt_fn=./mobilenet_v2-b0353104.pth --out_ckpt_fn=./mobilenet_v2.ckpt
```
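The conversion boils down to renaming PyTorch parameter names to the MindSpore cell names expected by mobilenet_v2_key.ckpt. The sketch below only illustrates that renaming idea; the authoritative mapping is the "name_map" function in torch_to_ms_mobilenetv2.py, and the substitutions shown are examples, not the complete mapping.

```python
# Illustrative sketch of the key-renaming idea (not the real "name_map" function).
def rename_param(pt_name):
    ms_name = pt_name
    # PyTorch BatchNorm statistics vs. MindSpore BatchNorm parameter names
    ms_name = ms_name.replace("running_mean", "moving_mean")
    ms_name = ms_name.replace("running_var", "moving_variance")
    return ms_name

print(rename_param("features.0.1.running_mean"))  # -> features.0.1.moving_mean
```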
step2: prepare your rank table

```python
# You can use your own rank table file,
# or use [hccl_tools](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools) to generate one.
# e.g., python hccl_tools.py --device_num "[0,8)"
python hccl_tools.py --device_num "[0,8)"
```
step3: train

```python
cd scripts;
# prepare data_path, use a symbolic link
ln -sf [USE_DATA_DIR] dataset
# check your directories to make sure your data is in the right place
ls ./dataset/centerface # data path
ls ./dataset/centerface/annotations/train.json # annot_path
ls ./dataset/centerface/images/train/images # img_dir
```

```python
# enter the script dir, train CenterFace
sh train_distribute.sh
# after training
mkdir ./model
cp device0/outputs/*/*.ckpt ./model # copy the models to [MODEL_PATH]
```
step4: test

```python
# prepare for testing CenterFace
cd ../dependency/centernet/src/lib/external;
python setup.py install;
make;
cd -; # cd ../../../../../scripts;
cd ../dependency/evaluate;
python setup.py install; # used for eval
cd -; # cd ../../scripts;
mkdir ./output
mkdir ./output/centerface
# check your directories to make sure your data is in the right place
ls ./dataset/images/val/images/ # data path
ls ./dataset/centerface/ground_truth/val.mat # annot_path
```

```python
# test CenterFace
sh test_distribute.sh
```
step5: eval

```python
# after testing, evaluate CenterFace to get the mAP
# cd ../dependency/evaluate;
# python setup.py install;
# cd -; # cd ../../scripts;
sh eval_all.sh
```
# [Script Description](#contents)

## [Script and Sample Code](#contents)

```
├── cv
    ├── centerface
        ├── train.py                           // training script
        ├── test.py                            // testing training outputs
        ├── export.py                          // convert mindspore model to air model
        ├── README.md                          // descriptions about CenterFace
        ├── scripts
        │   ├── eval.sh                        // evaluate a single testing result
        │   ├── eval_all.sh                    // choose a range of testing results to evaluate
        │   ├── test.sh                        // test a single model
        │   ├── test_distribute.sh             // test a range of models
        │   ├── test_and_eval.sh               // test then evaluate a single model
        │   ├── train_standalone.sh            // train on Ascend with a single NPU
        │   ├── train_distribute.sh            // train on Ascend with multiple NPUs
        ├── src
        │   ├── __init__.py
        │   ├── centerface.py                  // centerface network, training entry
        │   ├── dataset.py                     // generate dataloader and data processing entry
        │   ├── config.py                      // centerface-specific configs
        │   ├── losses.py                      // losses for centerface
        │   ├── lr_scheduler.py                // learning rate scheduler
        │   ├── mobile_v2.py                   // modified mobilenet_v2 backbone
        │   ├── utils.py                       // auxiliary functions for training, logging and preloading
        │   ├── var_init.py                    // weight initialization
        │   ├── convert_weight_mobilenetv2.py  // convert the pretrained backbone to mindspore
        │   ├── convert_weight.py              // convert a CenterFace model to mindspore
        └── dependency                         // third-party code: MIT License
            ├── extd                           // training dependency: data augmentation
            │   ├── utils
            │   │   └── augmentations.py       // data anchor sampling of PyramidBox to generate small images
            ├── evaluate                       // evaluation dependency
            │   ├── box_overlaps.pyx           // box overlaps
            │   ├── setup.py                   // setup file for box_overlaps.pyx
            │   ├── eval.py                    // evaluate testing results
            └── centernet                      // modified from 'centernet'
                └── src
                    └── lib
                        ├── datasets
                        │   ├── dataset              // train dataset core
                        │   │   ├── coco_hp.py       // read and format data
                        │   ├── sample
                        │   │   └── multi_pose.py    // core of data processing
                        ├── detectors                // test core, including running, pre-processing and post-processing
                        │   ├── base_detector.py     // users can add their own test core, e.g. use pytorch or tf for pre/post processing
                        ├── external                 // test dependency
                        │   ├── __init__.py
                        │   ├── Makefile             // makefile for nms
                        │   ├── nms.pyx              // soft_nms
                        │   ├── setup.py             // setup file for nms.pyx
                        └── utils
                            └── image.py             // image processing functions
```
## [Script Parameters](#contents)

1. train script parameters

the command is: python train.py [train parameters]

Major parameters of train.py are as follows:

```python
--lr: learning rate
--per_batch_size: batch size on each device
--is_distributed: multi-device or not
--t_max: for the cosine lr_scheduler
--max_epoch: number of training epochs
--warmup_epochs: warmup epochs, not needed for adam, needed for sgd
--lr_scheduler: learning rate scheduler, default is multistep
--lr_epochs: epochs at which to decrease the lr
--lr_gamma: factor by which to decrease the lr
--weight_decay: weight decay
--loss_scale: loss scale for mixed precision training
--pretrained_backbone: pretrained mobilenet_v2 model path
--data_dir: data dir
--annot_path: annotation path
--img_dir: img dir in data_dir
```
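For example, a single-device run might look like the following; the numeric values mirror the defaults reported in the performance table below, and the paths are placeholders following the quick-start layout.

```python
# Illustrative invocation only; adjust the paths and values to your setup.
python train.py \
    --lr=0.004 \
    --per_batch_size=8 \
    --max_epoch=140 \
    --is_distributed=0 \
    --pretrained_backbone=./mobilenet_v2.ckpt \
    --data_dir=./dataset/centerface \
    --annot_path=./dataset/centerface/annotations/train.json \
    --img_dir=./dataset/centerface/images/train/images
```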
2. centerface-specific configs: in config.py; users are not recommended to change them

3. test script parameters:

the command is: python test.py [test parameters]

Major parameters of test.py are as follows:

```python
test_script_path: test.py path;
--is_distributed: multi-device or not
--data_dir: img dir
--test_model: test model dir
--ground_truth_mat: ground truth file, mat type
--save_dir: save path for evaluation
--rank: device id to use
--ckpt_name: test model name
# the parameters below are used to calculate the ckpt/model names
# a model/ckpt name is "0-" + str(ckpt_num) + "_" + str(steps_per_epoch*ckpt_num) + ".ckpt";
# ckpt_num is the epoch number and can be calculated from device_num
# details can be found in "test.py"
# if a ckpt is specified, the 4 parameters below are not needed
--device_num: training device number
--steps_per_epoch: steps per epoch
--start: start loop number, used to calculate the first epoch number
--end: end loop number, used to calculate the last epoch number
```
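The naming formula quoted above can be checked directly; in the sketch below, steps_per_epoch matches the 8-device training log shown later in this README, and the epoch number is a hypothetical choice.

```python
# Illustration of the checkpoint naming formula quoted above.
steps_per_epoch = 198   # matches the 8-device training log in this README
ckpt_num = 140          # hypothetical epoch number
ckpt_name = "0-" + str(ckpt_num) + "_" + str(steps_per_epoch * ckpt_num) + ".ckpt"
print(ckpt_name)        # -> 0-140_27720.ckpt
```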
4. eval script parameters:

the command is: python eval.py [pred] [gt]

Major parameters of eval.py are as follows:

```python
--pred: prediction path, the test output, i.e. test.py->[--save_dir]
--gt: ground truth path
```
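A concrete call might look like the following; the paths are assumptions based on the default output and dataset locations used elsewhere in this README.

```python
# Illustrative invocation only; adjust the paths to your own test output and ground truth.
python ../dependency/evaluate/eval.py --pred=./output/centerface/140 --gt=./dataset/centerface/ground_truth/
```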
## [Training Process](#contents)

### Training

'task_set' is important for multi-NPU training to get higher speed:

--task_set: 0, no task_set; 1, use task_set
--task_set_core: number of task_set cores, usually = cpu number / nproc_per_node

step1: train a mobilenet_v2 model with MindSpore or use the script below:

```python
python torch_to_ms_mobilenetv2.py --ckpt_fn=./mobilenet_v2_key.ckpt --pt_fn=./mobilenet_v2-b0353104.pth --out_ckpt_fn=./mobilenet_v2.ckpt
```

step2: prepare your rank table

```python
# You can use your own rank table file,
# or use [hccl_tools](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools) to generate one.
# e.g., python hccl_tools.py --device_num "[0,8)"
python hccl_tools.py --device_num "[0,8)"
```
step3: train

- Single device

```python
# enter the script dir, train CenterFace
cd scripts
# you need to change the parameters in train_standalone.sh,
# or use a symbolic link as in the quick start,
# or use the command below:
# USE_DEVICE_ID: your device id
# PRETRAINED_BACKBONE: your pretrained model path
# DATASET: dataset path
# ANNOTATIONS: annotation path
# IMAGES: img_dir in the dataset path
sh train_standalone.sh [USE_DEVICE_ID] [PRETRAINED_BACKBONE] [DATASET] [ANNOTATIONS] [IMAGES]
# after training
cp device0/outputs/*/*.ckpt [MODEL_PATH]
```

- Multi-device (recommended)

```python
# enter the script dir, train CenterFace
cd scripts;
# you need to change the parameters in train_distribute.sh,
# or use a symbolic link as in the quick start,
# or use the command below; most parameters are the same as in train_standalone.sh, the difference is RANK_TABLE
# RANK_TABLE: for multi-device only, generated by hccl_tools.py (see step2) or written by the user
sh train_distribute.sh [RANK_TABLE] [PRETRAINED_BACKBONE] [DATASET] [ANNOTATIONS] [IMAGES]
# after training
cp device0/outputs/*/*.ckpt [MODEL_PATH]
```
After training with 8 devices, the loss values will look like the following:

```python
# grep "loss is " device0/xxx.log
# epoch 1, step 1: the loss is greater than 500 and less than 5000
2020-09-24 19:00:53,550:INFO:epoch:1, iter:0, average_loss:loss:1148.415649, loss:1148.4156494140625, overflow:False, loss_scale:1024.0
[WARNING] DEBUG(51499,python):2020-09-24-19:00:53.590.008 [mindspore/ccsrc/debug/dump_proto.cc:218] SetValueToProto] Unsupported type UInt
2020-09-24 19:00:53,784:INFO:epoch:1, iter:1, average_loss:loss:798.286713, loss:448.15777587890625, overflow:False, loss_scale:1024.0
...
2020-09-24 19:01:58,095:INFO:epoch:2, iter:197, average_loss:loss:1.942609, loss:1.5492267608642578, overflow:False, loss_scale:1024.0
2020-09-24 19:01:58,501:INFO:epoch[2], loss:1.942609, 477.97 imgs/sec, lr:0.004000000189989805
2020-09-24 19:01:58,502:INFO:==========end epoch===============
2020-09-24 19:02:00,780:INFO:epoch:3, iter:0, average_loss:loss:2.107658, loss:2.1076583862304688, overflow:False, loss_scale:1024.0
...
# epoch 140: the average loss is greater than 0.3 and less than 1.5:
2020-09-24 20:19:16,255:INFO:epoch:140, iter:196, average_loss:loss:0.906300, loss:1.1071504354476929, overflow:False, loss_scale:1024.0
2020-09-24 20:19:16,347:INFO:epoch:140, iter:197, average_loss:loss:0.904684, loss:0.586264967918396, overflow:False, loss_scale:1024.0
2020-09-24 20:19:16,747:INFO:epoch[140], loss:0.904684, 480.10 imgs/sec, lr:3.9999998989515007e-05
2020-09-24 20:19:16,748:INFO:==========end epoch===============
2020-09-24 20:19:16,748:INFO:==========end training===============
```

The model checkpoints will be saved in scripts/device0/output/xxx/xxx.ckpt.
## [Testing Process](#contents)

### Testing

```python
# after training, prepare for testing CenterFace
cd scripts;
cd ../dependency/centernet/src/lib/external;
python setup.py install;
make;
cd ../../../../../scripts;
mkdir [SAVE_PATH]
```

1. test a single ckpt file

```python
# you need to change the parameters in test.sh,
# or use a symbolic link as in the quick start,
# or use the command below:
# MODEL_PATH: ckpt path saved during training
# DATASET: img dir
# GROUND_TRUTH_MAT: ground truth file, mat type
# SAVE_PATH: save path for evaluation
# DEVICE_ID: device id to use
# CKPT: test model name
sh test.sh [MODEL_PATH] [DATASET] [GROUND_TRUTH_MAT] [SAVE_PATH] [DEVICE_ID] [CKPT]
```

2. test many output ckpts so the user can choose the best one

```python
# you need to change the parameters in test_distribute.sh,
# or use a symbolic link as in the quick start,
# or use the command below; most parameters are the same as in test.sh, the differences are:
# DEVICE_NUM: training device number
# STEPS_PER_EPOCH: steps per epoch
# START: start loop number, used to calculate the first epoch number
# END: end loop number, used to calculate the last epoch number
sh test_distribute.sh [MODEL_PATH] [DATASET] [GROUND_TRUTH_MAT] [SAVE_PATH] [DEVICE_NUM] [STEPS_PER_EPOCH] [START] [END]
```

After testing, you will find many txt files that save the box information and scores. Opening one, you will see:

```python
646.3 189.1 42.1 51.8 0.747 # left top height width score
157.4 408.6 43.1 54.1 0.667
120.3 212.4 38.7 42.8 0.650
...
```
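If you want to post-process these files yourself, a minimal sketch of a parser is shown below; it assumes the five-number-per-line layout stated in the comment above and simply skips any line that does not contain five numeric fields.

```python
# Minimal sketch of a parser for the detection text files described above.
def parse_detections(txt_path):
    boxes = []
    with open(txt_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 5:
                continue  # skip lines that are not "left top height width score"
            try:
                left, top, height, width, score = map(float, parts[:5])
            except ValueError:
                continue  # skip non-numeric lines (e.g. image names), if any
            boxes.append((left, top, height, width, score))
    return boxes
```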
## [Evaluation Process](#contents)

### Evaluation

```python
# after testing, prepare for evaluating CenterFace to get the mAP
cd ../dependency/evaluate;
python setup.py install;
cd ../../scripts;
```

1. eval a single testing output

```python
# you need to change the parameters in eval.sh
# by default it evaluates the result saved in ./scripts/output/centerface/999
sh eval.sh
```

2. eval many testing outputs so the user can choose the best one

```python
# you need to change the parameters in eval_all.sh
# by default it evaluates the results saved in ./scripts/output/centerface/[89-140]
sh eval_all.sh
```

3. test + eval

```python
# you need to change the parameters in test_and_eval.sh,
# or use a symbolic link as in the quick start; by default it evaluates the result saved in ./scripts/output/centerface/999
# or use the command below; most parameters are the same as in test.sh, the difference is:
# GROUND_TRUTH_PATH: ground truth path
sh test_and_eval.sh [MODEL_PATH] [DATASET] [GROUND_TRUTH_MAT] [SAVE_PATH] [CKPT] [GROUND_TRUTH_PATH]
```
You can see the mAP reported by eval.sh below:

```
(ci3.7) [root@bms-aiserver scripts]# ./eval.sh
start eval
==================== Results = ==================== ./scripts/output/centerface/999
Easy Val AP: 0.923914407045363
Medium Val AP: 0.9166100571371586
Hard Val AP: 0.7810750535799462
=================================================
end eval
```

You can see the mAP reported by eval_all.sh below:

```
(ci3.7) [root@bms-aiserver scripts]# ./eval_all.sh
==================== Results = ==================== ./scripts/output/centerface/89
Easy Val AP: 0.8884892849068273
Medium Val AP: 0.8928813452811216
Hard Val AP: 0.7721131614294564
=================================================
==================== Results = ==================== ./scripts/output/centerface/90
Easy Val AP: 0.8836073914165545
Medium Val AP: 0.8875938506473486
Hard Val AP: 0.775956751740446
...
==================== Results = ==================== ./scripts/output/centerface/125
Easy Val AP: 0.923914407045363
Medium Val AP: 0.9166100571371586
Hard Val AP: 0.7810750535799462
=================================================
==================== Results = ==================== ./scripts/output/centerface/126
Easy Val AP: 0.9218741197149122
Medium Val AP: 0.9151860193570651
Hard Val AP: 0.7825645670331809
...
==================== Results = ==================== ./scripts/output/centerface/140
Easy Val AP: 0.9250715236965638
Medium Val AP: 0.9170429723233877
Hard Val AP: 0.7822182013830674
=================================================
```
## [Convert Process](#contents)

### Convert

If you want to run inference on Ascend 310, you should convert the model to the AIR format:

```python
python export.py [BATCH_SIZE] [PRETRAINED_BACKBONE]
```
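For example, the call might look like the following; the checkpoint name here is only an illustration that follows the naming pattern described in the testing section.

```python
# Illustrative export call: batch size 1 and an example checkpoint name.
python export.py 1 ./model/0-140_27720.ckpt
```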
# [Model Description](#contents)

## [Performance](#contents)

### Evaluation Performance

CenterFace on 13K images (the annotation and data format must be the same as WiderFace)

| Parameters                 | CenterFace                                                   |
| -------------------------- | ------------------------------------------------------------ |
| Resource                   | Ascend 910; CPU 2.60GHz, 192 cores; Memory, 755 GB           |
| uploaded Date              | 10/29/2020 (month/day/year)                                  |
| MindSpore Version          | 1.0.0                                                        |
| Dataset                    | 13K images                                                   |
| Training Parameters        | epoch=140, steps=198 * epoch, batch_size=8, lr=0.004         |
| Optimizer                  | Adam                                                         |
| Loss Function              | Focal Loss, L1 Loss, Smooth L1 Loss                          |
| outputs                    | heatmaps                                                     |
| Loss                       | 0.3-1.5, average loss for the last epoch is in 0.8-1.0       |
| Speed                      | 1p 65 img/s, 8p 475 img/s                                    |
| Total time                 | train(8p) 1.1h, test 50min, eval 5-10min                     |
| Checkpoint for Fine tuning | 22M (.ckpt file)                                             |
| Scripts                    | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/centerface |

### Inference Performance

CenterFace on 3.2K images (the annotation and data format must be the same as WiderFace)

| Parameters          | CenterFace                                         |
| ------------------- | -------------------------------------------------- |
| Resource            | Ascend 910; CPU 2.60GHz, 192 cores; Memory, 755 GB |
| uploaded Date       | 10/29/2020 (month/day/year)                        |
| MindSpore Version   | 1.0.0                                              |
| Dataset             | 3.2K images                                        |
| batch_size          | 1                                                  |
| outputs             | box position and scores, and probability           |
| Accuracy            | Easy 92.2%, Medium 91.5%, Hard 78.2% (±0.5%)       |
| Model for inference | 22M (.ckpt file)                                   |
# [Description of Random Situation](#contents)

In dataset.py, we set the seed inside the ```create_dataset``` function.
In var_init.py, we set the seed for weight initialization.
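As a rough illustration of the kind of seeding involved, a minimal sketch is shown below; the exact calls and seed values are assumptions, and the authoritative ones are in dataset.py and var_init.py.

```python
# Minimal sketch only; see dataset.py and var_init.py for the real seeding.
import numpy as np
import mindspore.dataset as ds

np.random.seed(1)       # assumption: fixes numpy-based randomness such as augmentation
ds.config.set_seed(1)   # assumption: fixes MindSpore dataset shuffling order
```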
# [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).