# Contents

- [DeepLabV3 Description](#DeepLabV3-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Features](#features)
    - [Mixed Precision](#mixed-precision)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Script Parameters](#script-parameters)
    - [Training Process](#training-process)
    - [Evaluation Process](#evaluation-process)
    - [Export MindIR](#export-mindir)
    - [Inference Process](#inference-process)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Evaluation Performance](#evaluation-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
# [DeepLabV3 Description](#contents)

## Description

DeepLab is a series of image semantic segmentation models, and DeepLabV3 improves significantly over the previous versions. DeepLabV3 has two key points: its multi-grid atrous convolution better handles segmenting objects at multiple scales, and its augmented ASPP module makes image-level features available to capture long-range information.

This repository provides a script and recipe to train the DeepLabV3 model and achieve state-of-the-art performance.

Refer to [this paper][1] for network details.

`Chen L C, Papandreou G, Schroff F, et al. Rethinking atrous convolution for semantic image segmentation[J]. arXiv preprint arXiv:1706.05587, 2017.`

[1]: https://arxiv.org/abs/1706.05587
# [Model Architecture](#contents)

DeepLabV3 uses ResNet-101 as its backbone and atrous convolution for dense feature extraction.
# [Dataset](#contents)

Pascal VOC datasets and Semantic Boundaries Dataset (SBD)

- Download the segmentation datasets.
- Prepare the training data list file. The list file saves the relative paths of image and annotation pairs. Lines look like:

```shell
JPEGImages/00001.jpg SegmentationClassGray/00001.png
JPEGImages/00002.jpg SegmentationClassGray/00002.png
JPEGImages/00003.jpg SegmentationClassGray/00003.png
JPEGImages/00004.jpg SegmentationClassGray/00004.png
......
```

You can also generate the list file automatically by running the script: `python get_dataset_lst.py --data_root=/PATH/TO/DATA`
- Configure and run build_data.sh to convert the dataset to mindrecords. Arguments in scripts/build_data.sh:

```shell
--data_root     root path of training data
--data_lst      list of training data (prepared above)
--dst_path      where mindrecords are saved
--num_shards    number of shards of the mindrecords
--shuffle       shuffle or not
```
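
For reference, a conversion run might look like the following sketch, assuming scripts/build_data.sh forwards these flags on its command line (it may instead expect you to edit the variables at the top of the script); all paths are placeholders:

```shell
# Hypothetical invocation; replace the placeholder paths with your own.
bash scripts/build_data.sh --data_root=/PATH/TO/DATA \
                           --data_lst=/PATH/TO/DATA/train_lst.txt \
                           --dst_path=/PATH/TO/MINDRECORD/voc_train.mindrecord \
                           --num_shards=8 \
                           --shuffle=True
```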
# [Features](#contents)

## Mixed Precision

The [mixed precision](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both single-precision and half-precision data types, while maintaining the network precision achieved by single-precision training. Mixed precision training can accelerate the computation, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.

For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and then searching for 'reduce precision'.
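
In practice, you can surface those messages by raising the log level to INFO before training and searching the captured output. The `log` file name below follows the training commands later in this README; treat the snippet as an illustrative sketch:

```shell
# GLOG_v=1 sets MindSpore logging to INFO level.
export GLOG_v=1
# After a training run whose output was redirected to ./log:
grep 'reduce precision' ./log
```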
# [Environment Requirements](#contents)

- Hardware (Ascend)
    - Prepare the hardware environment with Ascend.
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/zh-CN/master/index.html)
    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/zh-CN/master/index.html)
- Install the Python packages in requirements.txt (see the pip one-liner after the next code block).
- Generate the config JSON file for 8-device training.

```bash
# From the root of this project
cd src/tools/
python3 get_multicards_json.py 10.111.*.*
# 10.111.*.* is the computer's IP address.
```
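
The package installation mentioned above is a standard pip one-liner:

```bash
# From the root of this project
pip install -r requirements.txt
```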
# [Quick Start](#contents)

After installing MindSpore via the official website, you can start training and evaluation as follows:

- Running on Ascend

Based on the original DeepLabV3 paper, we reproduce two training experiments on the vocaug (also called trainaug) dataset and evaluate on the voc val dataset.

For single-device training, please configure the parameters; the training script is:

```shell
run_standalone_train.sh
```

For 8-device training, the training steps are as follows:

1. Train s16 with the vocaug dataset, fine-tuning from the ResNet-101 pretrained model. The script is:

```shell
run_distribute_train_s16_r1.sh
```

2. Train s8 with the vocaug dataset, fine-tuning from the model in the previous step. The training script is:

```shell
run_distribute_train_s8_r1.sh
```

3. Train s8 with the voctrain dataset, fine-tuning from the model in the previous step. The training script is:

```shell
run_distribute_train_s8_r2.sh
```

For evaluation, the steps are as follows:

1. Eval s16 with the voc val dataset. The eval script is:

```shell
run_eval_s16.sh
```

2. Eval s8 with the voc val dataset. The eval script is:

```shell
run_eval_s8.sh
```

3. Eval s8 multiscale with the voc val dataset. The eval script is:

```shell
run_eval_s8_multiscale.sh
```

4. Eval s8 multiscale and flip with the voc val dataset. The eval script is:

```shell
run_eval_s8_multiscale_flip.sh
```
# [Script Description](#contents)

## [Script and Sample Code](#contents)

```shell
.
└──deeplabv3
  ├── README.md
  ├── scripts
    ├── build_data.sh                          # convert raw data to mindrecord dataset
    ├── run_distribute_train_s16_r1.sh         # launch ascend distributed training (8 pcs) with vocaug dataset in s16 structure
    ├── run_distribute_train_s8_r1.sh          # launch ascend distributed training (8 pcs) with vocaug dataset in s8 structure
    ├── run_distribute_train_s8_r2.sh          # launch ascend distributed training (8 pcs) with voctrain dataset in s8 structure
    ├── run_eval_s16.sh                        # launch ascend evaluation in s16 structure
    ├── run_eval_s8.sh                         # launch ascend evaluation in s8 structure
    ├── run_eval_s8_multiscale.sh              # launch ascend evaluation with multiscale in s8 structure
    ├── run_eval_s8_multiscale_flip.sh         # launch ascend evaluation with multiscale and flip in s8 structure
    ├── run_standalone_train.sh                # launch ascend standalone training (1 pc)
    ├── run_standalone_train_cpu.sh            # launch CPU standalone training
  ├── src
    ├── data
      ├── dataset.py                           # mindrecord data generator
      ├── build_seg_data.py                    # data preprocessing
      ├── get_dataset_lst.py                   # dataset list file generator
    ├── loss
      ├── loss.py                              # loss definition for deeplabv3
    ├── nets
      ├── deeplab_v3
        ├── deeplab_v3.py                      # DeepLabV3 network structure
      ├── net_factory.py                       # set S16 and S8 structures
    ├── tools
      ├── get_multicards_json.py               # get rank table file
    └── utils
      └── learning_rates.py                    # generate learning rate
  ├── eval.py                                  # eval net
  ├── train.py                                 # train net
  └── requirements.txt                         # requirements file
```
## [Script Parameters](#contents)

Default configuration:

```shell
"data_file":"/PATH/TO/MINDRECORD_NAME"           # dataset path
"device_target":Ascend                           # device target
"train_epochs":300                               # total epochs
"batch_size":32                                  # batch size of input tensor
"crop_size":513                                  # crop size
"base_lr":0.08                                   # initial learning rate
"lr_type":cos                                    # decay mode for generating learning rate
"min_scale":0.5                                  # minimum scale of data augmentation
"max_scale":2.0                                  # maximum scale of data augmentation
"ignore_label":255                               # ignore label
"num_classes":21                                 # number of classes
"model":deeplab_v3_s16                           # select model
"ckpt_pre_trained":"/PATH/TO/PRETRAIN_MODEL"     # path to load the pretrained checkpoint
"is_distributed":                                # distributed training; it is True when the flag is set
"save_steps":410                                 # step interval for saving checkpoints
"keep_checkpoint_max":200                        # maximum number of checkpoints to keep
```
## [Training Process](#contents)

### Usage

#### Running on Ascend

Based on the original DeepLabV3 paper, we reproduce two training experiments on the vocaug (also called trainaug) dataset and evaluate on the voc val dataset.

For single-device training, please configure the parameters; the training script is as follows:

```shell
# run_standalone_train.sh
python ${train_code_path}/train.py --data_file=/PATH/TO/MINDRECORD_NAME  \
                    --train_dir=${train_path}/ckpt  \
                    --train_epochs=200  \
                    --batch_size=32  \
                    --crop_size=513  \
                    --base_lr=0.015  \
                    --lr_type=cos  \
                    --min_scale=0.5  \
                    --max_scale=2.0  \
                    --ignore_label=255  \
                    --num_classes=21  \
                    --model=deeplab_v3_s16  \
                    --ckpt_pre_trained=/PATH/TO/PRETRAIN_MODEL  \
                    --save_steps=1500  \
                    --keep_checkpoint_max=200 >log 2>&1 &
```
For 8-device training, the training steps are as follows:

1. Train s16 with the vocaug dataset, fine-tuning from the ResNet-101 pretrained model. The script is as follows:

```shell
# run_distribute_train_s16_r1.sh
for((i=0;i<=$RANK_SIZE-1;i++));
do
    export RANK_ID=${i}
    export DEVICE_ID=$((i + RANK_START_ID))
    echo 'start rank='${i}', device id='${DEVICE_ID}'...'
    mkdir ${train_path}/device${DEVICE_ID}
    cd ${train_path}/device${DEVICE_ID} || exit
    python ${train_code_path}/train.py --train_dir=${train_path}/ckpt  \
                                       --data_file=/PATH/TO/MINDRECORD_NAME  \
                                       --train_epochs=300  \
                                       --batch_size=32  \
                                       --crop_size=513  \
                                       --base_lr=0.08  \
                                       --lr_type=cos  \
                                       --min_scale=0.5  \
                                       --max_scale=2.0  \
                                       --ignore_label=255  \
                                       --num_classes=21  \
                                       --model=deeplab_v3_s16  \
                                       --ckpt_pre_trained=/PATH/TO/PRETRAIN_MODEL  \
                                       --is_distributed  \
                                       --save_steps=410  \
                                       --keep_checkpoint_max=200 >log 2>&1 &
done
```
2. Train s8 with the vocaug dataset, fine-tuning from the model in the previous step. The training script is as follows:

```shell
# run_distribute_train_s8_r1.sh
for((i=0;i<=$RANK_SIZE-1;i++));
do
    export RANK_ID=${i}
    export DEVICE_ID=$((i + RANK_START_ID))
    echo 'start rank='${i}', device id='${DEVICE_ID}'...'
    mkdir ${train_path}/device${DEVICE_ID}
    cd ${train_path}/device${DEVICE_ID} || exit
    python ${train_code_path}/train.py --train_dir=${train_path}/ckpt  \
                                       --data_file=/PATH/TO/MINDRECORD_NAME  \
                                       --train_epochs=800  \
                                       --batch_size=16  \
                                       --crop_size=513  \
                                       --base_lr=0.02  \
                                       --lr_type=cos  \
                                       --min_scale=0.5  \
                                       --max_scale=2.0  \
                                       --ignore_label=255  \
                                       --num_classes=21  \
                                       --model=deeplab_v3_s8  \
                                       --loss_scale=2048  \
                                       --ckpt_pre_trained=/PATH/TO/PRETRAIN_MODEL  \
                                       --is_distributed  \
                                       --save_steps=820  \
                                       --keep_checkpoint_max=200 >log 2>&1 &
done
```
3. Train s8 with the voctrain dataset, fine-tuning from the model in the previous step. The training script is as follows:

```shell
# run_distribute_train_s8_r2.sh
for((i=0;i<=$RANK_SIZE-1;i++));
do
    export RANK_ID=${i}
    export DEVICE_ID=$((i + RANK_START_ID))
    echo 'start rank='${i}', device id='${DEVICE_ID}'...'
    mkdir ${train_path}/device${DEVICE_ID}
    cd ${train_path}/device${DEVICE_ID} || exit
    python ${train_code_path}/train.py --train_dir=${train_path}/ckpt  \
                                       --data_file=/PATH/TO/MINDRECORD_NAME  \
                                       --train_epochs=300  \
                                       --batch_size=16  \
                                       --crop_size=513  \
                                       --base_lr=0.008  \
                                       --lr_type=cos  \
                                       --min_scale=0.5  \
                                       --max_scale=2.0  \
                                       --ignore_label=255  \
                                       --num_classes=21  \
                                       --model=deeplab_v3_s8  \
                                       --loss_scale=2048  \
                                       --ckpt_pre_trained=/PATH/TO/PRETRAIN_MODEL  \
                                       --is_distributed  \
                                       --save_steps=110  \
                                       --keep_checkpoint_max=200 >log 2>&1 &
done
```
#### Running on CPU

For CPU training, please configure the parameters; the training script is as follows:

```shell
# run_standalone_train_cpu.sh
python ${train_code_path}/train.py --data_file=/PATH/TO/MINDRECORD_NAME  \
                    --device_target=CPU  \
                    --train_dir=${train_path}/ckpt  \
                    --train_epochs=200  \
                    --batch_size=32  \
                    --crop_size=513  \
                    --base_lr=0.015  \
                    --lr_type=cos  \
                    --min_scale=0.5  \
                    --max_scale=2.0  \
                    --ignore_label=255  \
                    --num_classes=21  \
                    --model=deeplab_v3_s16  \
                    --ckpt_pre_trained=/PATH/TO/PRETRAIN_MODEL  \
                    --save_steps=1500  \
                    --keep_checkpoint_max=200 >log 2>&1 &
```
### Result

#### Running on Ascend

- Training vocaug in s16 structure

```shell
# distribute training result (8p)
epoch: 1 step: 41, loss is 0.8319108
epoch time: 213856.477 ms, per step time: 5216.012 ms
epoch: 2 step: 41, loss is 0.46052963
epoch time: 21233.183 ms, per step time: 517.883 ms
epoch: 3 step: 41, loss is 0.45012417
epoch time: 21231.951 ms, per step time: 517.852 ms
epoch: 4 step: 41, loss is 0.30687785
epoch time: 21199.911 ms, per step time: 517.071 ms
epoch: 5 step: 41, loss is 0.22769661
epoch time: 21240.281 ms, per step time: 518.056 ms
epoch: 6 step: 41, loss is 0.25470978
...
```
- Training vocaug in s8 structure

```shell
# distribute training result (8p)
epoch: 1 step: 82, loss is 0.024167
epoch time: 322663.456 ms, per step time: 3934.920 ms
epoch: 2 step: 82, loss is 0.019832281
epoch time: 43107.238 ms, per step time: 525.698 ms
epoch: 3 step: 82, loss is 0.021008959
epoch time: 43109.519 ms, per step time: 525.726 ms
epoch: 4 step: 82, loss is 0.01912349
epoch time: 43177.287 ms, per step time: 526.552 ms
epoch: 5 step: 82, loss is 0.022886964
epoch time: 43095.915 ms, per step time: 525.560 ms
epoch: 6 step: 82, loss is 0.018708453
epoch time: 43107.458 ms, per step time: 525.701 ms
...
```
- Training voctrain in s8 structure

```shell
# distribute training result (8p)
epoch: 1 step: 11, loss is 0.00554624
epoch time: 199412.913 ms, per step time: 18128.447 ms
epoch: 2 step: 11, loss is 0.007181881
epoch time: 6119.375 ms, per step time: 556.307 ms
epoch: 3 step: 11, loss is 0.004980865
epoch time: 5996.978 ms, per step time: 545.180 ms
epoch: 4 step: 11, loss is 0.0047651967
epoch time: 5987.412 ms, per step time: 544.310 ms
epoch: 5 step: 11, loss is 0.006262637
epoch time: 5956.682 ms, per step time: 541.517 ms
epoch: 6 step: 11, loss is 0.0060750707
epoch time: 5962.164 ms, per step time: 542.015 ms
...
```

#### Running on CPU

- Training voctrain in s16 structure

```bash
epoch: 1 step: 1, loss is 3.655448
epoch: 2 step: 1, loss is 1.5531876
epoch: 3 step: 1, loss is 1.5099041
...
```
#### Transfer Training

You can train your own model based on a pretrained model. You can perform transfer training with the following steps; a hypothetical launch command is sketched after this list.

1. Convert your own dataset to the Pascal VOC format. Otherwise, you have to add your own data preprocessing code.
2. Set the argument `filter_weight` to `True`, `ckpt_pre_trained` to the pretrained checkpoint, and `num_classes` to the number of classes in your dataset when calling `train.py`; this filters the final conv weights out of the pretrained model.
3. Build your own bash scripts using the new config and arguments for convenience.
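
A minimal sketch of such a launch, assuming a custom 3-class dataset already converted to mindrecord (the paths, epoch count, and class count are placeholders, not recommendations):

```shell
# Hypothetical transfer-training launch; adjust every value for your dataset.
python train.py --data_file=/PATH/TO/CUSTOM_MINDRECORD  \
                --train_dir=./ckpt  \
                --train_epochs=100  \
                --batch_size=16  \
                --crop_size=513  \
                --base_lr=0.01  \
                --lr_type=cos  \
                --num_classes=3  \
                --model=deeplab_v3_s16  \
                --ckpt_pre_trained=/PATH/TO/PRETRAIN_MODEL  \
                --filter_weight=True >log 2>&1 &
```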
## [Evaluation Process](#contents)

### Usage

#### Running on Ascend

Configure the checkpoint with --ckpt_path and the dataset path, then run the script. mIOU will be printed to eval_path/eval_log.

```shell
./run_eval_s16.sh                     # test s16
./run_eval_s8.sh                      # test s8
./run_eval_s8_multiscale.sh           # test s8 + multiscale
./run_eval_s8_multiscale_flip.sh      # test s8 + multiscale + flip
```
An example of the test script is as follows:

```shell
python ${train_code_path}/eval.py --data_root=/PATH/TO/DATA  \
                    --data_lst=/PATH/TO/DATA_lst.txt  \
                    --batch_size=16  \
                    --crop_size=513  \
                    --ignore_label=255  \
                    --num_classes=21  \
                    --model=deeplab_v3_s8  \
                    --scales=0.5  \
                    --scales=0.75  \
                    --scales=1.0  \
                    --scales=1.25  \
                    --scales=1.75  \
                    --flip  \
                    --freeze_bn  \
                    --ckpt_path=/PATH/TO/PRETRAIN_MODEL >${eval_path}/eval_log 2>&1 &
```
### Result

Our results were obtained by running the applicable training script. To achieve the same results, follow the steps in the Quick Start Guide.

#### Training accuracy

| **Network** | OS=16 | OS=8 | MS | Flip | mIOU | mIOU in paper |
| :----------: | :-----: | :----: | :----: | :-----: | :-----: | :-------------: |
| deeplab_v3 | √ |  |  |  | 77.37 | 77.21 |
| deeplab_v3 |  | √ |  |  | 78.84 | 78.51 |
| deeplab_v3 |  | √ | √ |  | 79.70 | 79.45 |
| deeplab_v3 |  | √ | √ | √ | 79.89 | 79.77 |

Note: Here, OS means output stride, and MS means multiscale.
## [Export MindIR](#contents)

Currently, the batch size can only be set to 1.

```shell
python export.py --ckpt_file [CKPT_PATH] --file_name [FILE_NAME] --file_format [FILE_FORMAT]
```

The ckpt_file parameter is required, and FILE_FORMAT must be chosen from ["AIR", "MINDIR"].
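
For example, exporting a trained s8 checkpoint to MindIR might look like this (the checkpoint path and output name are placeholders):

```shell
# Hypothetical export call; the output format must be AIR or MINDIR.
python export.py --ckpt_file /PATH/TO/deeplab_v3_s8.ckpt --file_name deeplabv3 --file_format MINDIR
```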
## [Inference Process](#contents)

### Usage

Before performing inference, the MindIR file must be exported by the export script on an Ascend 910 environment. Currently, batch_size can only be set to 1, and the precision calculation needs about 70 GB or more of memory.

```shell
# Ascend310 inference
bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DATA_ROOT] [DATA_LIST] [DEVICE_ID]
```

`DEVICE_ID` is optional; the default value is 0.
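
A hypothetical invocation, with every argument a placeholder to substitute:

```shell
# Example only; positional arguments follow the usage line above.
bash run_infer_310.sh ./deeplabv3.mindir /PATH/TO/DATA /PATH/TO/DATA_ROOT /PATH/TO/DATA_LIST 0
```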
### Result

The inference result is saved in the current path; you can find it in the acc.log file.

| **Network** | OS=16 | OS=8 | MS | Flip | mIOU | mIOU in paper |
| :----------: | :-----: | :----: | :----: | :-----: | :-----: | :-------------: |
| deeplab_v3 |  | √ |  |  | 78.84 | 78.51 |
# [Model Description](#contents)

## [Performance](#contents)

### Evaluation Performance

| Parameters | Ascend 910 |
| -------------------------- | -------------------------------------- |
| Model Version | DeepLabV3 |
| Resource | Ascend 910; OS Euler2.8 |
| Uploaded Date | 09/04/2020 (month/day/year) |
| MindSpore Version | 0.7.0-alpha |
| Dataset | PASCAL VOC2012 + SBD |
| Training Parameters | epoch = 300, batch_size = 32 (s16_r1) <br> epoch = 800, batch_size = 16 (s8_r1) <br> epoch = 300, batch_size = 16 (s8_r2) |
| Optimizer | Momentum |
| Loss Function | Softmax Cross Entropy |
| Outputs | probability |
| Loss | 0.0065883575 |
| Speed | 60 fps (1pc, s16) <br> 480 fps (8pcs, s16) <br> 244 fps (8pcs, s8) |
| Total time | 8pcs: 706 mins |
| Parameters (M) | 58.2 |
| Checkpoint for Fine tuning | 443M (.ckpt file) |
| Model for inference | 223M (.air file) |
| Scripts | [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/deeplabv3) |
## Inference Performance

| Parameters | Ascend |
| ------------------- | --------------------------- |
| Model Version | DeepLabV3 V1 |
| Resource | Ascend 910; OS Euler2.8 |
| Uploaded Date | 09/04/2020 (month/day/year) |
| MindSpore Version | 0.7.0-alpha |
| Dataset | VOC datasets |
| batch_size | 32 (s16); 16 (s8) |
| Outputs | probability |
| Accuracy | 8pcs: <br> s16: 77.37% <br> s8: 78.84% <br> s8_multiscale: 79.70% <br> s8_flip: 79.89% |
| Model for inference | 443M (.ckpt file) |
# [Description of Random Situation](#contents)

In dataset.py, we set the seed inside the `create_dataset` function. We also use a random seed in train.py.

# [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).