# Contents

- [FasterRcnn Description](#fasterrcnn-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
- [Script and Sample Code](#script-and-sample-code)
- [Training Process](#training-process)
- [Evaluation Process](#evaluation-process)
- [Model Description](#model-description)
- [Performance](#performance)
- [Training Performance](#training-performance)
- [Evaluation Performance](#evaluation-performance)
- [ModelZoo Homepage](#modelzoo-homepage)
# FasterRcnn Description

Before FasterRcnn, object detection networks such as SPPnet and Fast R-CNN relied on region proposal algorithms to hypothesize object locations. Advances had reduced the running time of these detection networks, but they also revealed that region proposal computation was a bottleneck.

FasterRcnn showed that the convolutional feature maps used by region-based detectors (such as Fast R-CNN) can also be used to generate region proposals. On top of these convolutional features, a Region Proposal Network (RPN) is constructed by adding a few extra convolutional layers that share the full-image convolutional features with the detection network, making region proposals nearly cost-free. The RPN outputs both region bounds and an objectness score for each location. It is a fully convolutional network (FCN) that can be trained end to end to generate high-quality region proposals, which are then fed into Fast R-CNN for detection.

[Paper](https://arxiv.org/abs/1506.01497): Ren S, He K, Girshick R, et al. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 39(6).
# Model Architecture

FasterRcnn is a two-stage object detection network. It uses a Region Proposal Network (RPN) that shares the convolutional features of the whole image with the detection network, so that region proposal computation is nearly cost-free. The network further merges RPN and FastRcnn into a single network by sharing these convolutional features.
# Dataset

Dataset used: [COCO2017](<http://images.cocodataset.org/>)

- Dataset size: 19G
    - Train: 18G, 118000 images
    - Val: 1G, 5000 images
    - Annotations: 241M; instances, captions, person_keypoints, etc.
- Data format: image and json files
- Note: data will be processed in dataset.py
# Environment Requirements

- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download the COCO2017 dataset.
- This example uses COCO2017 as the training dataset by default; you can also use your own dataset.

1. If the COCO dataset is used, **set the dataset option to coco when running the script.**

Install Cython and pycocotools. You can also install mmcv to process data.
```shell
pip install Cython
pip install pycocotools
pip install mmcv==0.2.14
```
Then change COCO_ROOT and any other settings you need in `config.py`. The directory structure is as follows:
```
.
└─cocodataset
  ├─annotations
  │ ├─instances_train2017.json
  │ └─instances_val2017.json
  ├─val2017
  └─train2017
```
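For reference, the corresponding entries in `config.py` might look like the following (a sketch with illustrative values; check the actual key names in `config.py`):

```python
# Illustrative values only -- adjust to where your dataset actually lives.
COCO_ROOT = "/data/cocodataset"            # root containing annotations/, train2017/, val2017/

# Only needed when training on your own dataset (dataset option "other"):
IMAGE_DIR = "/data/cocodataset"            # directory the relative image paths are joined to
ANNO_PATH = "/data/annotations/anno.txt"   # TXT annotation file
```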
2. If your own dataset is used, **set the dataset option to other when running the script.**

Organize the dataset information into a TXT file, each row of which is as follows:

```
train2017/0000001.jpg 0,259,401,459,7 35,28,324,201,2 0,30,59,80,2
```

Each row is the annotation of one image, split by spaces: the first column is the relative path of the image, and the remaining columns are box and class information in the format [xmin,ymin,xmax,ymax,class]. Images are read from the path formed by joining `IMAGE_DIR` (the dataset directory) with the relative paths listed in `ANNO_PATH` (the path of the TXT file); `IMAGE_DIR` and `ANNO_PATH` are set in `config.py`.
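As a reference for producing or checking such a file, one row can be parsed like this (a minimal sketch; the helper name is illustrative and not part of dataset.py):

```python
def parse_annotation_row(row):
    """Parse one row of the TXT annotation format:
    <relative image path> <xmin,ymin,xmax,ymax,class> ...
    Returns the relative image path and a list of box tuples."""
    fields = row.strip().split(" ")
    image_path = fields[0]
    boxes = []
    for box in fields[1:]:
        xmin, ymin, xmax, ymax, cls = (int(v) for v in box.split(","))
        boxes.append((xmin, ymin, xmax, ymax, cls))
    return image_path, boxes

path, boxes = parse_annotation_row(
    "train2017/0000001.jpg 0,259,401,459,7 35,28,324,201,2")
# path is the relative image path; boxes holds (xmin, ymin, xmax, ymax, class) tuples
```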
# Quick Start

After installing MindSpore via the official website, you can start training and evaluation as follows:

```shell
# standalone training
sh run_standalone_train_ascend.sh [PRETRAINED_MODEL]

# distributed training
sh run_distribute_train_ascend.sh [RANK_TABLE_FILE] [PRETRAINED_MODEL]

# eval
sh run_eval_ascend.sh [VALIDATION_JSON_FILE] [CHECKPOINT_PATH]
```
# Script Description

## Script and Sample Code

```shell
.
└─FasterRcnn
  ├─README.md                          // descriptions about fasterrcnn
  ├─scripts
  │ ├─run_standalone_train_ascend.sh   // shell script for standalone on ascend
  │ ├─run_distribute_train_ascend.sh   // shell script for distributed on ascend
  │ └─run_eval_ascend.sh               // shell script for eval on ascend
  ├─src
  │ ├─FasterRcnn
  │ │ ├─__init__.py                    // init file
  │ │ ├─anchor_generator.py            // anchor generator
  │ │ ├─bbox_assign_sample.py          // first stage sampler
  │ │ ├─bbox_assign_sample_stage2.py   // second stage sampler
  │ │ ├─faster_rcnn_r50.py             // fasterrcnn network
  │ │ ├─fpn_neck.py                    // feature pyramid network
  │ │ ├─proposal_generator.py          // proposal generator
  │ │ ├─rcnn.py                        // rcnn network
  │ │ ├─resnet50.py                    // backbone network
  │ │ ├─roi_align.py                   // roi align network
  │ │ └─rpn.py                         // region proposal network
  │ ├─config.py                        // total config
  │ ├─dataset.py                       // create and process dataset
  │ ├─lr_schedule.py                   // learning rate generator
  │ ├─network_define.py                // network definition for fasterrcnn
  │ └─util.py                          // routine operations
  ├─eval.py                            // eval script
  └─train.py                           // train script
```
## Training Process

### Usage

```shell
# standalone training on ascend
sh run_standalone_train_ascend.sh [PRETRAINED_MODEL]

# distributed training on ascend
sh run_distribute_train_ascend.sh [RANK_TABLE_FILE] [PRETRAINED_MODEL]
```

> The rank_table.json file specified by RANK_TABLE_FILE is needed when you run a distributed task. You can generate it with [hccl_tools](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools).
> PRETRAINED_MODEL should be a ResNet50 checkpoint trained on ImageNet2012. Ready-made pretrained models are not available yet. Stay tuned.
### Result

Training results are stored in the example path, in folders whose names begin with "train" or "train_parallel". There you can find checkpoint files together with results like the following in loss.log.

```
# distribute training result(8p)
epoch: 1 step: 7393, rpn_loss: 0.12054, rcnn_loss: 0.40601, rpn_cls_loss: 0.04025, rpn_reg_loss: 0.08032, rcnn_cls_loss: 0.25854, rcnn_reg_loss: 0.14746, total_loss: 0.52655
epoch: 2 step: 7393, rpn_loss: 0.06561, rcnn_loss: 0.50293, rpn_cls_loss: 0.02587, rpn_reg_loss: 0.03967, rcnn_cls_loss: 0.35669, rcnn_reg_loss: 0.14624, total_loss: 0.56854
epoch: 3 step: 7393, rpn_loss: 0.06940, rcnn_loss: 0.49658, rpn_cls_loss: 0.03769, rpn_reg_loss: 0.03165, rcnn_cls_loss: 0.36353, rcnn_reg_loss: 0.13318, total_loss: 0.56598
...
epoch: 10 step: 7393, rpn_loss: 0.03555, rcnn_loss: 0.32666, rpn_cls_loss: 0.00697, rpn_reg_loss: 0.02859, rcnn_cls_loss: 0.16125, rcnn_reg_loss: 0.16541, total_loss: 0.36221
epoch: 11 step: 7393, rpn_loss: 0.19849, rcnn_loss: 0.47827, rpn_cls_loss: 0.11639, rpn_reg_loss: 0.08209, rcnn_cls_loss: 0.29712, rcnn_reg_loss: 0.18115, total_loss: 0.67676
epoch: 12 step: 7393, rpn_loss: 0.00691, rcnn_loss: 0.10168, rpn_cls_loss: 0.00529, rpn_reg_loss: 0.00162, rcnn_cls_loss: 0.05426, rcnn_reg_loss: 0.04745, total_loss: 0.10859
```
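For monitoring convergence, loss lines in this format can be extracted with a short script (a sketch; it assumes the exact `epoch: ... total_loss: ...` layout shown above):

```python
import re

# Matches lines such as:
# epoch: 1 step: 7393, rpn_loss: 0.12054, ..., total_loss: 0.52655
_LOSS_RE = re.compile(r"epoch: (\d+) step: (\d+).*total_loss: ([\d.]+)")

def parse_loss_log(text):
    """Return (epoch, step, total_loss) tuples from loss.log contents,
    skipping any lines that do not match the expected layout."""
    rows = []
    for line in text.splitlines():
        m = _LOSS_RE.search(line)
        if m:
            rows.append((int(m.group(1)), int(m.group(2)), float(m.group(3))))
    return rows
```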
## Evaluation Process

### Usage

```shell
# eval on ascend
sh run_eval_ascend.sh [VALIDATION_JSON_FILE] [CHECKPOINT_PATH]
```

> The checkpoint can be produced during the training process.
### Result

Eval results are stored in the example path, in a folder named "eval". There you can find results like the following in log.

```
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.360
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.586
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.385
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.229
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.402
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.441
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.299
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.487
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.515
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.346
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.562
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.631
```
# Model Description

## Performance

### Training Performance

| Parameters | FasterRcnn |
| -------------------------- | ----------------------------------------------------------- |
| Model Version | V1 |
| Resource | Ascend 910; CPU 2.60GHz, 56 cores; Memory, 314G |
| Uploaded Date | 06/01/2020 (month/day/year) |
| MindSpore Version | 0.3.0-alpha |
| Dataset | COCO2017 |
| Training Parameters | epoch=12, batch_size=2 |
| Optimizer | SGD |
| Loss Function | Softmax Cross Entropy, Sigmoid Cross Entropy, SmoothL1Loss |
| Speed | 1pc: 190 ms/step; 8pcs: 200 ms/step |
| Total time | 1pc: 37.17 hours; 8pcs: 4.89 hours |
| Parameters (M) | 250 |
| Scripts | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/faster_rcnn |
### Evaluation Performance

| Parameters | FasterRcnn |
| ------------------- | --------------------------- |
| Model Version | V1 |
| Resource | Ascend 910 |
| Uploaded Date | 06/01/2020 (month/day/year) |
| MindSpore Version | 0.3.0-alpha |
| Dataset | COCO2017 |
| batch_size | 2 |
| outputs | mAP |
| Accuracy | IoU=0.50: 58.6% |
| Model for inference | 250M (.ckpt file) |

# [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).