<!-- TOC -->

# CTPN for Ascend

- [CTPN Description](#ctpn-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Features](#features)
    - [Mixed Precision](#mixed-precision)
- [Environment Requirements](#environment-requirements)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Training Process](#training-process)
    - [Evaluation Process](#evaluation-process)
        - [Evaluation](#evaluation)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Training Performance](#training-performance)
        - [Inference Performance](#inference-performance)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)

<!-- /TOC -->

# [CTPN Description](#contents)

CTPN is a text detection model based on the object detection framework. It improves on Faster R-CNN by combining it with a bidirectional LSTM, which makes CTPN very effective for horizontal text detection. Another highlight of CTPN is that it transforms text detection into a series of small-scale text box detections. This idea was proposed in the paper "Detecting Text in Natural Image with Connectionist Text Proposal Network".

[Paper](https://arxiv.org/pdf/1609.03605.pdf) Zhi Tian, Weilin Huang, Tong He, Pan He, Yu Qiao, "Detecting Text in Natural Image with Connectionist Text Proposal Network", ArXiv, vol. abs/1609.03605, 2016.
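
To make the small-box idea concrete, here is a toy sketch (illustrative only, not code from this repository) of how a horizontal text line is covered by fixed-width slices, mirroring the paper's 16-pixel-wide proposals:

```python
# Toy illustration of CTPN's core idea: a text line is detected as a sequence
# of fixed-width (16 px) vertical slices instead of one arbitrary-width box.
def split_text_line(x_min, x_max, width=16):
    """Return the [start, end) horizontal spans of the small text boxes."""
    first = int(x_min) // width * width
    return [(s, s + width) for s in range(first, int(x_max), width)]

print(split_text_line(10, 75))  # [(0, 16), (16, 32), (32, 48), (48, 64), (64, 80)]
```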

# [Model architecture](#contents)

The overall network uses VGG16 as the backbone and a bidirectional LSTM to extract context features of the small-scale text boxes; an RPN (Region Proposal Network) then predicts the bounding boxes and text/non-text probabilities.

[Link](https://arxiv.org/pdf/1605.07314v1.pdf)
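
The NumPy sketch below (shapes are illustrative assumptions, not values taken from src/ctpn.py) shows how such a backbone feature map is typically rearranged so that the bidirectional LSTM can read each feature-map row as a sequence:

```python
import numpy as np

# Assumed VGG16 conv5 output for a ~600x900 input at stride 16: [N, C, H, W].
N, C, H, W = 1, 512, 38, 57
feature = np.zeros((N, C, H, W), dtype=np.float32)

# Each of the H rows becomes a length-W sequence of C-dim vectors, which the
# bidirectional LSTM scans left-to-right and right-to-left for context.
sequences = feature.transpose(0, 2, 3, 1).reshape(N * H, W, C)
print(sequences.shape)  # (38, 57, 512)
```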

# [Dataset](#contents)

Here we used 6 datasets for training and 1 dataset for evaluation.

- Dataset1: ICDAR 2013: Focused Scene Text
    - Train: 142MB, 229 images
    - Test: 110MB, 233 images
- Dataset2: ICDAR 2011: Born-Digital Images
    - Train: 27.7MB, 410 images
- Dataset3: ICDAR 2015
    - Train: 89MB, 1000 images
- Dataset4: SCUT-FORU: Flickr OCR Universal Database
    - Train: 388MB, 1715 images
- Dataset5: CocoText v2 (subset of MSCOCO2017)
    - Train: 13GB, 63686 images
- Dataset6: SVT (The Street View Text dataset)
    - Train: 115MB, 349 images

# [Features](#contents)

# [Environment Requirements](#contents)

- Hardware (Ascend)
    - Prepare the hardware environment with an Ascend processor. If you want to try Ascend, please send the [application form](https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/file/other/Ascend%20Model%20Zoo%E4%BD%93%E9%AA%8C%E8%B5%84%E6%BA%90%E7%94%B3%E8%AF%B7%E8%A1%A8.docx) to ascend@huawei.com. Once approved, you can get the resources.
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)

# [Script description](#contents)

## [Script and sample code](#contents)

```shell
.
└─ctpn
  ├── README.md                          # network readme
  ├── eval.py                            # eval net
  ├── scripts
  │   ├── eval_res.sh                    # calculate precision and recall
  │   ├── run_distribute_train_ascend.sh # launch distributed training with ascend platform(8p)
  │   ├── run_eval_ascend.sh             # launch evaluating with ascend platform
  │   └── run_standalone_train_ascend.sh # launch standalone training with ascend platform(1p)
  ├── src
  │   ├── CTPN
  │   │   ├── BoundingBoxDecode.py       # bounding box decode
  │   │   ├── BoundingBoxEncode.py       # bounding box encode
  │   │   ├── __init__.py                # package init file
  │   │   ├── anchor_generator.py        # anchor generator
  │   │   ├── bbox_assign_sample.py      # proposal layer
  │   │   ├── proposal_generator.py      # proposal generator
  │   │   ├── rpn.py                     # region-proposal network
  │   │   └── vgg16.py                   # backbone
  │   ├── config.py                      # training configuration
  │   ├── convert_icdar2015.py           # convert icdar2015 dataset label
  │   ├── convert_svt.py                 # convert svt label
  │   ├── create_dataset.py              # create mindrecord dataset
  │   ├── ctpn.py                        # ctpn network definition
  │   ├── dataset.py                     # data preprocessing
  │   ├── lr_schedule.py                 # learning rate scheduler
  │   ├── network_define.py              # network definition
  │   └── text_connector
  │       ├── __init__.py                # package init file
  │       ├── connect_text_lines.py      # connect text lines
  │       ├── detector.py                # detect box
  │       ├── get_successions.py         # get succession proposals
  │       └── utils.py                   # commonly used utility functions
  └── train.py                           # train net
```

## [Training process](#contents)

### Dataset

To create the dataset, download the datasets first and convert their labels. We provide src/convert_svt.py and src/convert_icdar2015.py to convert the SVT and ICDAR2015 dataset labels. For the SVT dataset, run:

```shell
python convert_svt.py --dataset_path=/path/img --xml_file=/path/train.xml --location_dir=/path/location
```

For the ICDAR2015 dataset, run:

```shell
python convert_icdar2015.py --src_label_path=/path/train_label --target_label_path=/path/label
```

Then modify src/config.py to add the dataset paths. For each dataset, add its IMAGE_PATH and LABEL_PATH as a list in the config. An example is shown below:

```python
# create dataset
"coco_root": "/path/coco",
"coco_train_data_type": "train2017",
"cocotext_json": "/path/cocotext.v2.json",
"icdar11_train_path": ["/path/image/", "/path/label"],
"icdar13_train_path": ["/path/image/", "/path/label"],
"icdar15_train_path": ["/path/image/", "/path/label"],
"icdar13_test_path": ["/path/image/", "/path/label"],
"flick_train_path": ["/path/image/", "/path/label"],
"svt_train_path": ["/path/image/", "/path/label"],
"pretrain_dataset_path": "",
"finetune_dataset_path": "",
"test_dataset_path": "",
```

Then you can create the dataset with src/create_dataset.py using the command below:

```shell
python src/create_dataset.py
```
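
As an optional sanity check (a sketch assuming the standard mindspore.mindrecord API; the file name below is hypothetical, substitute the MindRecord path you configured), you can count the records in a generated file:

```python
from mindspore.mindrecord import FileReader

# Hypothetical path: use the file produced by create_dataset.py on your machine.
reader = FileReader("/path/mindrecord/ctpn_pretrain.mindrecord")
num_samples = sum(1 for _ in reader.get_next())  # iterate once to count records
print(num_samples)
reader.close()
```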

### Usage

- Ascend:

```bash
# distribute training example(8p)
sh run_distribute_train_ascend.sh [RANK_TABLE_FILE] [TASK_TYPE] [PRETRAINED_PATH]
# standalone training
sh run_standalone_train_ascend.sh [TASK_TYPE] [PRETRAINED_PATH]
# evaluation:
sh run_eval_ascend.sh [IMAGE_PATH] [DATASET_PATH] [CHECKPOINT_PATH]
```

The `PRETRAINED_PATH` should be a checkpoint of VGG16 trained on ImageNet2012. The weight names in the checkpoint dict must match the network's parameter names exactly, and batch normalization must be enabled when training VGG16; otherwise later steps will fail. To get the VGG16 backbone, use the network structure defined in src/CTPN/vgg16.py: copy src/CTPN/vgg16.py under modelzoo/official/cv/vgg16/src/, and modify vgg16/train.py to use the new definition. You can fix it as below:

```python
...
# import the VGG16 definition copied from src/CTPN/vgg16.py
from src.vgg16 import VGG16
...
# build the backbone with the CTPN-compatible structure
network = VGG16()
...
```

Then you can train it with ImageNet2012.

> Notes:
>
> RANK_TABLE_FILE can refer to [Link](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/distributed_training_ascend.html), and the device_ip can be obtained as described at [Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/utils/hccl_tools). For large models like InceptionV4, it's better to export an external environment variable `export HCCL_CONNECT_TIMEOUT=600` to extend the hccl connection checking time from the default 120 seconds to 600 seconds. Otherwise, the connection could time out, since compiling time increases with model size.
>
> The scripts bind processor cores according to `device_num` and the total number of processor cores. If you do not want this, remove the `taskset` operations in `scripts/run_distribute_train_ascend.sh`.
>
> TASK_TYPE can be Pretraining or Finetune. For Pretraining, we use ICDAR2013, ICDAR2015, SVT, SCUT-FORU and CocoText v2. For Finetune, we use ICDAR2011, ICDAR2013 and SCUT-FORU to improve precision and recall; when doing Finetune, we use the checkpoint trained in Pretraining as our PRETRAINED_PATH.
>
> For COCO_TEXT_PARSER_PATH, coco_text.py can be obtained from [Link](https://github.com/andreasveit/coco-text).

### Launch

```bash
# training example
shell:
Ascend:
# distribute training example(8p)
sh run_distribute_train_ascend.sh [RANK_TABLE_FILE] [TASK_TYPE] [PRETRAINED_PATH]
# standalone training
sh run_standalone_train_ascend.sh [TASK_TYPE] [PRETRAINED_PATH]
```

### Result

Training results will be stored in the example path. Checkpoints will be stored at `ckpt_path` by default, the training log will be redirected to `./log`, and the loss will be redirected to `./loss_0.log` as follows.

```text
377 epoch: 1 step: 229, rpn_loss: 0.00355, rpn_cls_loss: 0.00047, rpn_reg_loss: 0.00103,
399 epoch: 2 step: 229, rpn_loss: 0.00327, rpn_cls_loss: 0.00047, rpn_reg_loss: 0.00093,
424 epoch: 3 step: 229, rpn_loss: 0.00910, rpn_cls_loss: 0.00385, rpn_reg_loss: 0.00175,
```
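
To track convergence, a small helper like the following (not part of the repository) can pull the rpn_loss values out of `loss_0.log`:

```python
import re

# Grab the rpn_loss value from each line of the loss log shown above.
pattern = re.compile(r"rpn_loss:\s*([0-9.]+)")
with open("loss_0.log") as log:
    losses = [float(m.group(1)) for m in map(pattern.search, log) if m]
print(losses[:3])  # e.g. [0.00355, 0.00327, 0.0091]
```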

## [Eval process](#contents)

### Usage

You can start evaluation using python or shell scripts. The usage of the shell script is as follows:

- Ascend:

```bash
sh run_eval_ascend.sh [IMAGE_PATH] [DATASET_PATH] [CHECKPOINT_PATH]
```

After evaluation, you will get an archive file named submit_ctpn-xx_xxxx.zip, whose name contains the name of your checkpoint file. To evaluate it, you can use the scripts provided by the ICDAR2013 challenge; you can download the Deteval scripts from this [link](https://rrc.cvc.uab.es/?com=downloads&action=download&ch=2&f=aHR0cHM6Ly9ycmMuY3ZjLnVhYi5lcy9zdGFuZGFsb25lcy9zY3JpcHRfdGVzdF9jaDJfdDFfZTItMTU3Nzk4MzA2Ny56aXA=).
After downloading the scripts, unzip them, put them under ctpn/scripts, and use eval_res.sh to get the result. You will get files as below:

```text
gt.zip
readme.txt
rrc_evaluation_funcs_1_1.py
script.py
```

Then you can run scripts/eval_res.sh to calculate the evaluation result.

```bash
bash eval_res.sh
```

### Result

The evaluation result will be stored in the example path; you can find results like the following in `log`.

```text
{"precision": 0.90791, "recall": 0.86118, "hmean": 0.88393}
```
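
For reference, `hmean` is the harmonic mean (F-measure) of precision and recall, which you can verify from the numbers above:

```python
# The hmean reported by the ICDAR scripts is the harmonic mean of precision and recall.
precision, recall = 0.90791, 0.86118
hmean = 2 * precision * recall / (precision + recall)
print(round(hmean, 5))  # 0.88393
```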

# [Model description](#contents)

## [Performance](#contents)

### Training Performance

| Parameters          | Ascend                                                       |
| ------------------- | ------------------------------------------------------------ |
| Model Version       | CTPN                                                         |
| Resource            | Ascend 910; CPU 2.60GHz, 192 cores; memory 755 GB            |
| Uploaded Date       | 02/06/2021                                                   |
| MindSpore Version   | 1.1.1                                                        |
| Dataset             | 16930 images                                                 |
| Batch_size          | 2                                                            |
| Training Parameters | src/config.py                                                |
| Optimizer           | Momentum                                                     |
| Loss Function       | SoftmaxCrossEntropyWithLogits for classification, SmoothL2Loss for bbox regression |
| Loss                | ~0.04                                                        |
| Total time (8p)     | 6h                                                           |
| Scripts             | [ctpn script](https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/ctpn) |

### Inference Performance

| Parameters          | Ascend                                            |
| ------------------- | ------------------------------------------------- |
| Model Version       | CTPN                                              |
| Resource            | Ascend 910; CPU 2.60GHz, 192 cores; memory 755 GB |
| Uploaded Date       | 02/06/2021                                        |
| MindSpore Version   | 1.1.1                                             |
| Dataset             | 229 images                                        |
| Batch_size          | 1                                                 |
| Accuracy            | precision=0.9079, recall=0.8611, F-measure=0.8839 |
| Total time          | 1 min                                             |
| Model for inference | 135M (.ckpt file)                                 |

### Training performance results

| **Ascend** | train performance |
| :--------: | :---------------: |
|     1p     |     10 img/s      |
|     8p     |     84 img/s      |

# [Description of Random Situation](#contents)

We set seed to 1 in train.py.
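
A minimal sketch of what this amounts to, assuming the standard MindSpore seeding API (see train.py for the actual call site):

```python
from mindspore.common import set_seed

# Fix the global random seed so training runs are reproducible.
set_seed(1)
```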

# [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).