# Contents

- [Contents](#contents)
- [SSD Description](#ssd-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
    - [Prepare the model](#prepare-the-model)
    - [Run the scripts](#run-the-scripts)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Script Parameters](#script-parameters)
    - [Training Process](#training-process)
        - [Training on Ascend](#training-on-ascend)
        - [Training on GPU](#training-on-gpu)
        - [Transfer Training](#transfer-training)
    - [Evaluation Process](#evaluation-process)
        - [Evaluation on Ascend](#evaluation-on-ascend)
        - [Evaluation on GPU](#evaluation-on-gpu)
- [Inference Process](#inference-process)
    - [Export MindIR](#export-mindir)
    - [Infer on Ascend310](#infer-on-ascend310)
    - [result](#result)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Evaluation Performance](#evaluation-performance)
        - [Inference Performance](#inference-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
## [SSD Description](#contents)

SSD discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes.

[Paper](https://arxiv.org/abs/1512.02325): Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg. European Conference on Computer Vision (ECCV), 2016.

## [Model Architecture](#contents)

The SSD approach is based on a feed-forward convolutional network that produces a fixed-size collection of bounding boxes and scores for the presence of object class instances in those boxes, followed by a non-maximum suppression step to produce the final detections. The early network layers are based on a standard architecture used for high quality image classification, which is called the base network. Auxiliary structure is then added to the network to produce detections.
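For intuition, the sketch below (not code from this repository) shows the standard SSD decoding step in NumPy: each predicted offset is applied to its default box, using the commonly used scaling variances of 0.1 and 0.2, before non-maximum suppression selects the final detections.

```python
import numpy as np

def decode_boxes(deltas, defaults, variances=(0.1, 0.2)):
    """Apply predicted offsets (tx, ty, tw, th) to default boxes.

    deltas:   (N, 4) raw network outputs for N default boxes
    defaults: (N, 4) default boxes in center-size form (cx, cy, w, h)
    returns:  (N, 4) decoded boxes as (xmin, ymin, xmax, ymax)
    """
    cx = deltas[:, 0] * variances[0] * defaults[:, 2] + defaults[:, 0]
    cy = deltas[:, 1] * variances[0] * defaults[:, 3] + defaults[:, 1]
    w = defaults[:, 2] * np.exp(deltas[:, 2] * variances[1])
    h = defaults[:, 3] * np.exp(deltas[:, 3] * variances[1])
    return np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
```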
We provide four different base architectures:

- **ssd300**, referenced from the paper, using mobilenetv2 as the backbone and the same bbox predictor as the paper presents.
- **ssd-mobilenet-v1-fpn**, using mobilenet-v1 and FPN as the feature extractor with weight-shared box predictors.
- **ssd-resnet50-fpn**, using resnet50 and FPN as the feature extractor with weight-shared box predictors.
- **ssd-vgg16**, referenced from the paper, using vgg16 as the backbone and the same bbox predictor as the paper presents.
## [Dataset](#contents)

Note that you can run the scripts with the dataset mentioned in the original paper or one widely used in this domain/network architecture. In the following sections, we introduce how to run the scripts using the dataset below.

Dataset used: [COCO2017](<http://images.cocodataset.org/>)

- Dataset size: 19G
    - Train: 18G, 118000 images
    - Val: 1G, 5000 images
    - Annotations: 241M; instances, captions, person_keypoints, etc.
- Data format: images and json files
- Note: Data will be processed in dataset.py
## [Environment Requirements](#contents)

- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download the dataset COCO2017.
- We use COCO2017 as the training dataset in this example by default, and you can also use your own datasets.

First, install Cython, pycocotools and opencv-python to process data and to get the evaluation result.

```shell
pip install Cython
pip install pycocotools
pip install opencv-python
```
1. If the COCO dataset is used. **Select dataset to coco when running the script.**

    Change `coco_root` and other settings you need in `src/config.py`. The directory structure is as follows:

    ```shell
    .
    └─coco_dataset
      ├─annotations
        ├─instances_train2017.json
        └─instances_val2017.json
      ├─val2017
      └─train2017
    ```
2. If the VOC dataset is used. **Select dataset to voc when running the script.**

    Change `classes`, `num_classes`, `voc_json` and `voc_root` in `src/config.py`. `voc_json` is the path of the COCO-format json file used for evaluation, and `voc_root` is the path of the VOC dataset. The directory structure is as follows:

    ```shell
    .
    └─voc_dataset
      └─train
        ├─0001.jpg
        └─0001.xml
        ...
        ├─xxxx.jpg
        └─xxxx.xml
      └─eval
        ├─0001.jpg
        └─0001.xml
        ...
        ├─xxxx.jpg
        └─xxxx.xml
    ```
3. If your own dataset is used. **Select dataset to other when running the script.**

    Organize the dataset information into a TXT file, in which each row looks like:

    ```shell
    train2017/0000001.jpg 0,259,401,459,7 35,28,324,201,2 0,30,59,80,2
    ```

    Each row is one image annotation split by spaces: the first column is the relative path of the image, and the remaining columns are box and class information in the format [xmin,ymin,xmax,ymax,class]. Images are read from the path obtained by joining `image_dir` (the dataset directory) with the relative path in `anno_path` (the TXT file path); `image_dir` and `anno_path` are set in `src/config.py`. A parsing sketch follows this list.
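As a minimal sketch of how such a row can be parsed (the helper below is illustrative and not part of this repository):

```python
import os

def parse_anno_line(line, image_dir):
    """Split one annotation row into an image path and a list of boxes."""
    fields = line.strip().split(" ")
    image_path = os.path.join(image_dir, fields[0])
    # each remaining field is "xmin,ymin,xmax,ymax,class"
    boxes = [tuple(int(v) for v in box.split(",")) for box in fields[1:]]
    return image_path, boxes

# Example with the sample row above, assuming image_dir="/data/my_dataset":
path, boxes = parse_anno_line(
    "train2017/0000001.jpg 0,259,401,459,7 35,28,324,201,2 0,30,59,80,2",
    "/data/my_dataset")
# path  -> "/data/my_dataset/train2017/0000001.jpg"
# boxes -> [(0, 259, 401, 459, 7), (35, 28, 324, 201, 2), (0, 30, 59, 80, 2)]
```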
## [Quick Start](#contents)

### Prepare the model

1. Choose the model by changing `using_model` in `src/config.py`. The optional models are: `ssd300`, `ssd_mobilenet_v1_fpn`.
2. Change the dataset settings in the corresponding config file, `src/config_ssd300.py` or `src/config_ssd_mobilenet_v1_fpn.py`.
3. If you are running with `ssd_mobilenet_v1_fpn`, you need a pretrained model for `mobilenet_v1`. Set the checkpoint path as `feature_extractor_base_param` in `src/config_ssd_mobilenet_v1_fpn.py`. For more detail about training mobilenet_v1, please refer to the mobilenetv1 model.
### Run the scripts

After installing MindSpore via the official website, you can start training and evaluation as follows:

- running on Ascend

    ```shell
    # distributed training on Ascend
    bash run_distribute_train.sh [DEVICE_NUM] [EPOCH_SIZE] [LR] [DATASET] [RANK_TABLE_FILE]

    # run eval on Ascend
    bash run_eval.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]

    # run inference on Ascend310; MINDIR_PATH is the mindir model, which you can export from a checkpoint using export.py
    bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DEVICE_ID]
    ```

- running on GPU

    ```shell
    # distributed training on GPU
    bash run_distribute_train_gpu.sh [DEVICE_NUM] [EPOCH_SIZE] [LR] [DATASET]

    # run eval on GPU
    bash run_eval_gpu.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]
    ```

- running on CPU (supports Windows and Ubuntu)

    **CPU is usually used for fine-tuning, which requires a pretrained checkpoint.**

    ```shell
    # training on CPU
    python train.py --run_platform=CPU --lr=[LR] --dataset=[DATASET] --epoch_size=[EPOCH_SIZE] --batch_size=[BATCH_SIZE] --pre_trained=[PRETRAINED_CKPT] --filter_weight=True --save_checkpoint_epochs=1

    # run eval on CPU
    python eval.py --run_platform=CPU --dataset=[DATASET] --checkpoint_path=[PRETRAINED_CKPT]
    ```
- Run on docker

    Build the docker image (change the version to the one you actually use):

    ```shell
    # build docker
    docker build -t ssd:20.1.0 . --build-arg FROM_IMAGE_NAME=ascend-mindspore-arm:20.1.0
    ```

    Create a container layer over the created image and start it:

    ```shell
    # start docker
    bash scripts/docker_start.sh ssd:20.1.0 [DATA_DIR] [MODEL_DIR]
    ```

    Then you can run everything just like on Ascend.
## [Script Description](#contents)

### [Script and Sample Code](#contents)

```shell
.
└─ cv
  └─ ssd
    ├─ README.md                      # descriptions about SSD
    ├─ scripts
      ├─ run_distribute_train.sh      # shell script for distributed training on Ascend
      ├─ run_distribute_train_gpu.sh  # shell script for distributed training on GPU
      ├─ run_eval.sh                  # shell script for eval on Ascend
      └─ run_eval_gpu.sh              # shell script for eval on GPU
    ├─ src
      ├─ __init__.py                  # init file
      ├─ box_utils.py                 # bbox utils
      ├─ eval_utils.py                # metrics utils
      ├─ config.py                    # total config
      ├─ dataset.py                   # create and process dataset
      ├─ init_params.py               # parameters utils
      ├─ lr_schedule.py               # learning rate generator
      └─ ssd.py                       # ssd architecture
    ├─ eval.py                        # eval script
    ├─ train.py                       # train script
    ├─ export.py                      # export mindir script
    └─ mindspore_hub_conf.py          # mindspore hub interface
```
### [Script Parameters](#contents)

```shell
Major parameters in train.py and config.py are as follows:

"device_num": 1                                  # Number of devices used
"lr": 0.05                                       # Initial learning rate
"dataset": coco                                  # Dataset name
"epoch_size": 500                                # Epoch size
"batch_size": 32                                 # Batch size of input tensor
"pre_trained": None                              # Pretrained checkpoint file path
"pre_trained_epoch_size": 0                      # Pretrained epoch size
"save_checkpoint_epochs": 10                     # The epoch interval between two checkpoints. By default, the checkpoint is saved every 10 epochs
"loss_scale": 1024                               # Loss scale
"filter_weight": False                           # Whether to skip loading the head-layer parameters. Set True if the class number of your training dataset differs from that of the pretrained checkpoint
"freeze_layer": "none"                           # Which layers to freeze; supports "none" and "backbone"
"class_num": 81                                  # Dataset class number
"image_shape": [300, 300]                        # Image height and width used as input to the model
"mindrecord_dir": "/data/MindRecord_COCO"        # MindRecord path
"coco_root": "/data/coco2017"                    # COCO2017 dataset path
"voc_root": "/data/voc_dataset"                  # VOC original dataset path
"voc_json": "annotations/voc_instances_val.json" # Path of the COCO-format json file used for VOC evaluation
"image_dir": ""                                  # Image path for other datasets; ignored when coco or voc is used
"anno_path": ""                                  # Annotation path for other datasets; ignored when coco or voc is used
```
### [Training Process](#contents)

To train the model, run `train.py`. If `mindrecord_dir` is empty, the script will generate [mindrecord](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/convert_dataset.html) files from `coco_root` (coco dataset), `voc_root` (voc dataset) or `image_dir` and `anno_path` (own dataset). **Note that if `mindrecord_dir` isn't empty, it will use `mindrecord_dir` instead of the raw images.**
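The branching described above can be pictured with this conceptual sketch (illustrative only, not the literal code in `train.py`):

```python
import os

def resolve_data_source(config):
    """Sketch of how the training script chooses its data source."""
    mr = config.mindrecord_dir
    if os.path.isdir(mr) and os.listdir(mr):
        return "reuse existing MindRecord files in mindrecord_dir"
    if config.dataset == "coco":
        return "generate MindRecord from coco_root"
    if config.dataset == "voc":
        return "generate MindRecord from voc_root"
    return "generate MindRecord from image_dir + anno_path"
```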
#### Training on Ascend

- Distribute mode

```shell
bash run_distribute_train.sh [DEVICE_NUM] [EPOCH_SIZE] [LR] [DATASET] [RANK_TABLE_FILE] [PRE_TRAINED](optional) [PRE_TRAINED_EPOCH_SIZE](optional)
```

This script requires five to seven parameters:

- `DEVICE_NUM`: the device number for distributed training.
- `EPOCH_SIZE`: epoch size for distributed training.
- `LR`: initial learning rate for distributed training.
- `DATASET`: the dataset mode for distributed training.
- `RANK_TABLE_FILE`: the path of [rank_table.json](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools); it is better to use an absolute path.
- `PRE_TRAINED`: the path of the pretrained checkpoint file; it is better to use an absolute path.
- `PRE_TRAINED_EPOCH_SIZE`: the epoch size of the pretrained checkpoint.

The training result will be stored in the current path, in a folder whose name begins with "LOG". There you can find checkpoint files together with results like the following in the log:

```shell
epoch: 1 step: 458, loss is 3.1681802
epoch time: 228752.4654865265, per step time: 499.4595316299705
epoch: 2 step: 458, loss is 2.8847265
epoch time: 38912.93382644653, per step time: 84.96273761232868
epoch: 3 step: 458, loss is 2.8398118
epoch time: 38769.184827804565, per step time: 84.64887516987896
...
epoch: 498 step: 458, loss is 0.70908034
epoch time: 38771.079778671265, per step time: 84.65301261718616
epoch: 499 step: 458, loss is 0.7974688
epoch time: 38787.413120269775, per step time: 84.68867493508685
epoch: 500 step: 458, loss is 0.5548882
epoch time: 39064.8467540741, per step time: 85.29442522723602
```
#### Training on GPU

- Distribute mode

```shell
bash run_distribute_train_gpu.sh [DEVICE_NUM] [EPOCH_SIZE] [LR] [DATASET] [PRE_TRAINED](optional) [PRE_TRAINED_EPOCH_SIZE](optional)
```

This script requires four to six parameters:

- `DEVICE_NUM`: the device number for distributed training.
- `EPOCH_SIZE`: epoch size for distributed training.
- `LR`: initial learning rate for distributed training.
- `DATASET`: the dataset mode for distributed training.
- `PRE_TRAINED`: the path of the pretrained checkpoint file; it is better to use an absolute path.
- `PRE_TRAINED_EPOCH_SIZE`: the epoch size of the pretrained checkpoint.

The training result will be stored in the current path, in a folder named "LOG". There you can find checkpoint files together with results like the following in the log:

```shell
epoch: 1 step: 1, loss is 420.11783
epoch: 1 step: 2, loss is 434.11032
epoch: 1 step: 3, loss is 476.802
...
epoch: 1 step: 458, loss is 3.1283689
epoch time: 150753.701, per step time: 329.157
...
```
#### Transfer Training

You can train your own model based on either a pretrained classification model or a pretrained detection model. Perform transfer training with the following steps:

1. Convert your own dataset to COCO or VOC style. Otherwise, you have to add your own data preprocessing code.
2. Change `config.py` according to your own dataset, especially `num_classes`.
3. Set the argument `filter_weight` to `True` when calling `train.py`; this filters the final detection head weights out of the pretrained checkpoint (see the sketch after this list).
4. Build your own bash scripts using the new config and arguments for convenience.
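Conceptually, the `filter_weight` step drops the class-dependent head parameters before loading the checkpoint. A hedged sketch follows; the head-parameter keywords below are illustrative assumptions, not necessarily the repo's exact names:

```python
from mindspore import load_checkpoint

def filter_head_params(ckpt_path,
                       head_keywords=("multi_loc_layers", "multi_cls_layers")):
    """Load a checkpoint and drop parameters that belong to the detection head."""
    param_dict = load_checkpoint(ckpt_path)
    return {name: param for name, param in param_dict.items()
            if not any(key in name for key in head_keywords)}
```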
### [Evaluation Process](#contents)

#### Evaluation on Ascend

```shell
bash run_eval.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]
```

This script requires three parameters:

- `DATASET`: the dataset mode of the evaluation dataset.
- `CHECKPOINT_PATH`: the absolute path of the checkpoint file.
- `DEVICE_ID`: the device id for evaluation.

> The checkpoint can be produced in the training process.

The inference result will be stored in the example path, in a folder whose name begins with "eval". There you can find results like the following in the log:

```shell
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.238
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.400
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.240
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.039
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.198
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.438
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.250
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.389
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.424
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.122
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.434
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.697
========================================
mAP: 0.23808886505483504
```
#### Evaluation on GPU

```shell
bash run_eval_gpu.sh [DATASET] [CHECKPOINT_PATH] [DEVICE_ID]
```

This script requires three parameters:

- `DATASET`: the dataset mode of the evaluation dataset.
- `CHECKPOINT_PATH`: the absolute path of the checkpoint file.
- `DEVICE_ID`: the device id for evaluation.

> The checkpoint can be produced in the training process.

The inference result will be stored in the example path, in a folder whose name begins with "eval". There you can find results like the following in the log:

```shell
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.224
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.375
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.228
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.034
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.189
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.407
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.243
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.382
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.417
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.120
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.425
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.686
========================================
mAP: 0.2244936111705981
```
## Inference Process

### [Export MindIR](#contents)

```shell
python export.py --ckpt_file [CKPT_PATH] --file_name [FILE_NAME] --file_format [FILE_FORMAT]
```

The `ckpt_file` parameter is required, and `file_format` should be "AIR" or "MINDIR".
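Conceptually, `export.py` restores the checkpoint into the network and calls MindSpore's `export`. The sketch below is a simplified illustration; the network constructors and module layout are assumptions based on this README, not verified signatures:

```python
import numpy as np
from mindspore import Tensor, export, load_checkpoint, load_param_into_net

# assumed module layout; the real constructors live in src/ssd.py
from src.ssd import SSD300, ssd_mobilenet_v2
from src.config import config

net = SSD300(ssd_mobilenet_v2(), config, is_training=False)
load_param_into_net(net, load_checkpoint("ssd.ckpt"))

# dummy input matching "image_shape": [300, 300] from config.py
inputs = Tensor(np.zeros([1, 3, 300, 300], np.float32))
export(net, inputs, file_name="ssd", file_format="MINDIR")
```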
### Infer on Ascend310

Before performing inference, the mindir file must be exported by the export script on the Ascend 910 environment. We only provide an example of inference using the MINDIR model.

Currently, `batch_size` can only be set to 1. The precision calculation needs about 70 GB of memory; otherwise the process will be killed for exceeding the memory limit.

```shell
# Ascend310 inference
bash run_infer_310.sh [MINDIR_PATH] [DATA_PATH] [DEVICE_ID]
```

`DEVICE_ID` is optional; the default value is 0.
### result

The inference result is saved in the current path; you can find results like the following in the acc.log file.

```bash
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.339
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.521
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.370
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.168
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.386
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.461
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.310
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.481
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.515
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.293
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.659
mAP: 0.33880018942412393
```
## [Model Description](#contents)

### [Performance](#contents)

#### Evaluation Performance

| Parameters          | Ascend                                          | GPU                                 | Ascend                                          |
| ------------------- | ----------------------------------------------- | ----------------------------------- | ----------------------------------------------- |
| Model Version       | SSD V1                                          | SSD V1                              | SSD-Mobilenet-V1-Fpn                            |
| Resource            | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755G | NV SMX2 V100-16G                    | Ascend 910; CPU 2.60GHz, 192 cores; Memory 755G |
| Uploaded Date       | 09/15/2020 (month/day/year)                     | 09/24/2020 (month/day/year)         | 01/13/2021 (month/day/year)                     |
| MindSpore Version   | 1.0.0                                           | 1.0.0                               | 1.1.0                                           |
| Dataset             | COCO2017                                        | COCO2017                            | COCO2017                                        |
| Training Parameters | epoch = 500, batch_size = 32                    | epoch = 800, batch_size = 32        | epoch = 60, batch_size = 32                     |
| Optimizer           | Momentum                                        | Momentum                            | Momentum                                        |
| Loss Function       | Sigmoid Cross Entropy, SmoothL1Loss             | Sigmoid Cross Entropy, SmoothL1Loss | Sigmoid Cross Entropy, SmoothL1Loss             |
| Speed               | 8pcs: 90 ms/step                                | 8pcs: 121 ms/step                   | 8pcs: 547 ms/step                               |
| Total time          | 8pcs: 4.81 hours                                | 8pcs: 12.31 hours                   | 8pcs: 4.22 hours                                |
| Parameters (M)      | 34                                              | 34                                  | 48                                              |
| Scripts             | <https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/ssd> | <https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/ssd> | <https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/ssd> |
#### Inference Performance

| Parameters          | Ascend                      | GPU                         | Ascend                      |
| ------------------- | --------------------------- | --------------------------- | --------------------------- |
| Model Version       | SSD V1                      | SSD V1                      | SSD-Mobilenet-V1-Fpn        |
| Resource            | Ascend 910                  | GPU                         | Ascend 910                  |
| Uploaded Date       | 09/15/2020 (month/day/year) | 09/24/2020 (month/day/year) | 09/24/2020 (month/day/year) |
| MindSpore Version   | 1.0.0                       | 1.0.0                       | 1.1.0                       |
| Dataset             | COCO2017                    | COCO2017                    | COCO2017                    |
| batch_size          | 1                           | 1                           | 1                           |
| outputs             | mAP                         | mAP                         | mAP                         |
| Accuracy            | IoU=0.50: 23.8%             | IoU=0.50: 22.4%             | IoU=0.50: 30%               |
| Model for inference | 34M (.ckpt file)            | 34M (.ckpt file)            | 48M (.ckpt file)            |
## [Description of Random Situation](#contents)

In dataset.py, we set the seed inside the `create_dataset` function. We also use a random seed in train.py.
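If you need fully reproducible runs, the seeds can also be pinned explicitly; a minimal sketch using standard MindSpore APIs:

```python
import mindspore.dataset as ds
from mindspore import set_seed

set_seed(1)            # seeds MindSpore ops (e.g. weight initialization)
ds.config.set_seed(1)  # seeds dataset shuffling and random augmentations
```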
## [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).