# Contents

- [CNNCTC Description](#cnnctc-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Features](#features)
    - [Mixed Precision](#mixed-precision)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Script Parameters](#script-parameters)
    - [Training Process](#training-process)
        - [Training](#training)
        - [Distributed Training](#distributed-training)
    - [Evaluation Process](#evaluation-process)
        - [Evaluation](#evaluation)
- [Model Description](#model-description)
    - [Performance](#performance)
        - [Evaluation Performance](#evaluation-performance)
        - [Inference Performance](#evaluation-performance)
    - [How to use](#how-to-use)
        - [Inference](#inference)
        - [Continue Training on the Pretrained Model](#continue-training-on-the-pretrained-model)
        - [Transfer Learning](#transfer-learning)
- [Description of Random Situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)
# [CNNCTC Description](#contents)

This paper makes three major contributions to address scene text recognition (STR).
First, we examine the inconsistencies among training and evaluation datasets, and the performance gap that results from these inconsistencies.
Second, we introduce a unified four-stage STR framework that most existing STR models fit into.
Using this framework allows for the extensive evaluation of previously proposed STR modules and the discovery of previously unexplored module combinations.
Third, we analyze the module-wise contributions to performance in terms of accuracy, speed, and memory demand, under one consistent set of training and evaluation datasets.
Such analyses clean up the hindrance on the current comparisons to understand the performance gain of the existing modules.

[Paper](https://arxiv.org/abs/1904.01906): J. Baek, G. Kim, J. Lee, S. Park, D. Han, S. Yun, S. J. Oh, and H. Lee, "What is wrong with scene text recognition model comparisons? Dataset and model analysis," ArXiv, vol. abs/1904.01906, 2019.
# [Model Architecture](#contents)

This is an example of training a CNN+CTC model for text recognition on the MJSynth and SynthText datasets with MindSpore.
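At inference time the CTC head emits one class per timestep (the characters plus a blank symbol), and a greedy decoder collapses repeated classes and drops blanks. The standard decoding rule can be sketched in a few lines of pure Python (the character set and blank index below are illustrative, not the ones in `config.py`):

```python
BLANK = 0  # index of the CTC blank symbol (illustrative)
CHARSET = "-0123456789abcdefghijklmnopqrstuvwxyz"  # '-' stands in for blank

def ctc_greedy_decode(indices):
    """Collapse consecutive repeats, then drop blanks (the standard CTC rule)."""
    out = []
    prev = None
    for idx in indices:
        if idx != prev and idx != BLANK:
            out.append(CHARSET[idx])
        prev = idx
    return "".join(out)

# Per-timestep argmax for the word "cat": c c <blank> a a t
print(ctc_greedy_decode([13, 13, 0, 11, 11, 30]))  # -> "cat"
```

Note that the blank between the two runs is what allows genuine double letters (e.g. "ll") to survive the repeat-collapsing step.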
# [Dataset](#contents)

Note that you can run the scripts with the datasets mentioned in the original paper or with datasets widely used in this domain/network architecture. The following sections describe how to run the scripts using the datasets below.

The [MJSynth](https://www.robots.ox.ac.uk/~vgg/data/text/) and [SynthText](https://github.com/ankush-me/SynthText) datasets are used for model training. The [IIIT 5K-word](https://cvit.iiit.ac.in/research/projects/cvit-projects/the-iiit-5k-word-dataset) dataset is used for evaluation.

- step 1:
All the datasets have been preprocessed and stored in `.lmdb` format and can be downloaded [**HERE**](https://drive.google.com/drive/folders/192UfE9agQUMNq6AgU3_E05_FcPZK4hyt).
- step 2:
Uncompress the downloaded file, and rename the MJSynth dataset to MJ, the SynthText dataset to ST, and the IIIT dataset to IIIT.
- step 3:
Move the three datasets above into a `cnnctc_data` folder; the structure should be as below:
```text
|--- CNNCTC/
    |--- cnnctc_data/
        |--- ST/
            data.mdb
            lock.mdb
        |--- MJ/
            data.mdb
            lock.mdb
        |--- IIIT/
            data.mdb
            lock.mdb
    ......
```
- step 4:
Preprocess the dataset by running:

```bash
python src/preprocess_dataset.py
```

This takes around 75 minutes.
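Since the preprocessing run is lengthy, it can be worth sanity-checking the layout above first with a short stand-alone script (the root path here is an assumption; adjust it to wherever you unpacked the data):

```python
from pathlib import Path

ROOT = Path("CNNCTC/cnnctc_data")  # adjust to your unpack location
EXPECTED = ["ST", "MJ", "IIIT"]

def check_layout(root=ROOT):
    """Return a list of the data.mdb/lock.mdb files missing under root."""
    missing = []
    for name in EXPECTED:
        for fname in ("data.mdb", "lock.mdb"):
            path = root / name / fname
            if not path.is_file():
                missing.append(str(path))
    return missing

problems = check_layout()
print("layout OK" if not problems else "missing: " + ", ".join(problems))
```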
# [Features](#contents)

## Mixed Precision

The [mixed precision](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/enable_mixed_precision.html) training method accelerates the deep learning neural network training process by using both the single-precision and half-precision data formats, while maintaining the network precision achieved by single-precision training. Mixed precision training can accelerate the computation process, reduce memory usage, and enable a larger model or batch size to be trained on specific hardware.

For FP16 operators, if the input data type is FP32, the backend of MindSpore will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and searching for 'reduce precision'.
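One practical consequence of half precision is gradient underflow, which is why the training configuration exposes a `LOSS_SCALE` parameter. The effect can be illustrated with Python's built-in IEEE 754 half-float support (a stand-alone illustration, not MindSpore code):

```python
import struct

def to_fp16(x):
    """Round-trip a Python float through IEEE 754 half precision."""
    return struct.unpack('e', struct.pack('e', x))[0]

grad = 1e-8             # a tiny gradient value
print(to_fp16(grad))    # flushes to 0.0: the update would be lost in FP16

scaled = grad * 1024    # apply a loss scale before the FP16 cast
print(to_fp16(scaled))  # nonzero: the scaled value survives the cast
```

After the backward pass, the framework divides the gradients by the same scale, so the optimizer sees values of the original magnitude.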
# [Environment Requirements](#contents)

- Hardware (Ascend)
    - Prepare a hardware environment with an Ascend processor.
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)
# [Quick Start](#contents)

- Install dependencies:

```bash
pip install lmdb
pip install Pillow
pip install tqdm
pip install six
```

- Standalone Training:

```bash
bash scripts/run_standalone_train_ascend.sh $PRETRAINED_CKPT
```

- Distributed Training:

```bash
bash scripts/run_distribute_train_ascend.sh $RANK_TABLE_FILE $PRETRAINED_CKPT
```

- Evaluation:

```bash
bash scripts/run_eval_ascend.sh $TRAINED_CKPT
```
# [Script Description](#contents)

## [Script and Sample Code](#contents)

The entire code structure is as follows:

```text
|--- CNNCTC/
    |--- README.md                          // descriptions about cnnctc
    |--- train.py                           // train script
    |--- eval.py                            // eval script
    |--- scripts
        |--- run_standalone_train_ascend.sh // shell script for standalone training on Ascend
        |--- run_distribute_train_ascend.sh // shell script for distributed training on Ascend
        |--- run_eval_ascend.sh             // shell script for evaluation on Ascend
    |--- src
        |--- __init__.py                    // init file
        |--- cnn_ctc.py                     // cnn_ctc network
        |--- config.py                      // total config
        |--- callback.py                    // loss callback file
        |--- dataset.py                     // dataset processing
        |--- util.py                        // routine operations
        |--- preprocess_dataset.py          // dataset preprocessing
```
## [Script Parameters](#contents)

Parameters for both training and evaluation can be set in `config.py`.

Arguments:

- `--CHARACTER`: Character labels.
- `--NUM_CLASS`: The number of classes, including all character labels and the `<blank>` label for CTCLoss.
- `--HIDDEN_SIZE`: Model hidden size.
- `--FINAL_FEATURE_WIDTH`: The number of features.
- `--IMG_H`: The height of the input image.
- `--IMG_W`: The width of the input image.
- `--TRAIN_DATASET_PATH`: The path to the training dataset.
- `--TRAIN_DATASET_INDEX_PATH`: The path to the training dataset index file, which determines the sample order.
- `--TRAIN_BATCH_SIZE`: Training batch size. The batch size and index file must ensure the input data has a fixed shape.
- `--TRAIN_DATASET_SIZE`: Training dataset size.
- `--TEST_DATASET_PATH`: The path to the test dataset.
- `--TEST_BATCH_SIZE`: Test batch size.
- `--TRAIN_EPOCHS`: Total training epochs.
- `--CKPT_PATH`: The path to a model checkpoint file; can be used to resume training and for evaluation.
- `--SAVE_PATH`: The path for saving model checkpoint files.
- `--LR`: Learning rate for standalone training.
- `--LR_PARA`: Learning rate for distributed training.
- `--MOMENTUM`: Momentum.
- `--LOSS_SCALE`: Loss scale to prevent gradient underflow.
- `--SAVE_CKPT_PER_N_STEP`: Save a model checkpoint file every N steps.
- `--KEEP_CKPT_MAX_NUM`: The maximum number of saved model checkpoint files.
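As the description of `--NUM_CLASS` notes, it must equal the number of character labels plus one for the CTC `<blank>` label. A quick sketch of that relationship (the character set shown is illustrative; see `config.py` for the real one):

```python
CHARACTER = "0123456789abcdefghijklmnopqrstuvwxyz"  # illustrative label set
NUM_CLASS = len(CHARACTER) + 1  # +1 for the CTC <blank> label

print(NUM_CLASS)  # -> 37 for the 36-character set above
```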
## [Training Process](#contents)

### Training

- Standalone Training:

```bash
bash scripts/run_standalone_train_ascend.sh $PRETRAINED_CKPT
```

Results and checkpoints are written to the `./train` folder. The log can be found in `./train/log` and loss values are recorded in `./train/loss.log`.

`$PRETRAINED_CKPT` is the path to a model checkpoint and is **optional**. If none is given, the model will be trained from scratch.

- Distributed Training:

```bash
bash scripts/run_distribute_train_ascend.sh $RANK_TABLE_FILE $PRETRAINED_CKPT
```

Results and checkpoints are written to the `./train_parallel_{i}` folder for each device `i`.
The log can be found in `./train_parallel_{i}/log_{i}.log` and loss values are recorded in `./train_parallel_{i}/loss.log`.

`$RANK_TABLE_FILE` is needed when running a distributed task on Ascend.
`$PRETRAINED_CKPT` is the path to a model checkpoint and is **optional**. If none is given, the model will be trained from scratch.
### Training Result

Training results are stored in the example path, in folders whose names begin with "train" or "train_parallel". You can find the checkpoint files there, together with results like the following in `loss.log`.

```text
# distributed training result (8p)
epoch: 1 step: 1 , loss is 76.25, average time per step is 0.235177839748392712
epoch: 1 step: 2 , loss is 73.46875, average time per step is 0.25798572540283203
epoch: 1 step: 3 , loss is 69.46875, average time per step is 0.229678678512573
epoch: 1 step: 4 , loss is 64.3125, average time per step is 0.23512671788533527
epoch: 1 step: 5 , loss is 58.375, average time per step is 0.23149147033691406
epoch: 1 step: 6 , loss is 52.7265625, average time per step is 0.2292975425720215
...
epoch: 1 step: 8689 , loss is 9.706798802612482, average time per step is 0.2184656601312549
epoch: 1 step: 8690 , loss is 9.70612545289855, average time per step is 0.2184725407765116
epoch: 1 step: 8691 , loss is 9.70695776049204, average time per step is 0.21847309686135555
epoch: 1 step: 8692 , loss is 9.707279624277456, average time per step is 0.21847339290613375
epoch: 1 step: 8693 , loss is 9.70763437950938, average time per step is 0.2184720295013031
epoch: 1 step: 8694 , loss is 9.707695425072046, average time per step is 0.21847410284595573
epoch: 1 step: 8695 , loss is 9.708408273381295, average time per step is 0.21847338271072345
epoch: 1 step: 8696 , loss is 9.708703753591953, average time per step is 0.2184726025560777
epoch: 1 step: 8697 , loss is 9.709536406025824, average time per step is 0.21847212061114694
epoch: 1 step: 8698 , loss is 9.708542263610315, average time per step is 0.2184715309307257
```
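Since each line of `loss.log` follows the fixed format shown above, the loss curve can be extracted for plotting with a few lines of Python (a convenience sketch, not part of the repo's scripts):

```python
import re

LINE_RE = re.compile(r"epoch:\s*(\d+)\s+step:\s*(\d+)\s*,\s*loss is\s*([\d.]+)")

def parse_loss_log(lines):
    """Yield (epoch, step, loss) tuples from loss.log-style lines."""
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            yield int(m.group(1)), int(m.group(2)), float(m.group(3))

sample = [
    "epoch: 1 step: 1 , loss is 76.25, average time per step is 0.235",
    "epoch: 1 step: 2 , loss is 73.46875, average time per step is 0.258",
]
for epoch, step, loss in parse_loss_log(sample):
    print(epoch, step, loss)
```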
## [Evaluation Process](#contents)

### Evaluation

- Evaluation:

```bash
bash scripts/run_eval_ascend.sh $TRAINED_CKPT
```

The model will be evaluated on the IIIT dataset; sample results and the overall accuracy will be printed.
# [Model Description](#contents)

## [Performance](#contents)

### Training Performance

| Parameters          | CNNCTC                                                       |
| ------------------- | ------------------------------------------------------------ |
| Model Version       | V1                                                           |
| Resource            | Ascend 910; CPU 2.60 GHz, 192 cores; Memory 755 GB; OS Euler2.8 |
| Uploaded Date       | 09/28/2020 (month/day/year)                                  |
| MindSpore Version   | 1.0.0                                                        |
| Dataset             | MJSynth, SynthText                                           |
| Training Parameters | epoch=3, batch_size=192                                      |
| Optimizer           | RMSProp                                                      |
| Loss Function       | CTCLoss                                                      |
| Speed               | 1pc: 250 ms/step; 8pcs: 260 ms/step                          |
| Total time          | 1pc: 15 hours; 8pcs: 1.92 hours                              |
| Parameters (M)      | 177                                                          |
| Scripts             | <https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/cnnctc> |
### Evaluation Performance

| Parameters          | CNNCTC                      |
| ------------------- | --------------------------- |
| Model Version       | V1                          |
| Resource            | Ascend 910; OS Euler2.8     |
| Uploaded Date       | 09/28/2020 (month/day/year) |
| MindSpore Version   | 1.0.0                       |
| Dataset             | IIIT5K                      |
| batch_size          | 192                         |
| outputs             | Accuracy                    |
| Accuracy            | 85%                         |
| Model for inference | 675M (.ckpt file)           |
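The accuracy reported above is word-level exact match: a prediction counts as correct only if the entire decoded string equals the label. A minimal sketch of that metric (case-insensitive matching is an assumption here; see `eval.py` for the exact rule):

```python
def word_accuracy(preds, labels):
    """Fraction of predictions that exactly match their labels.

    Case-insensitive comparison, as is common in STR benchmarks
    (an assumption; check eval.py for the repo's exact rule).
    """
    correct = sum(p.lower() == l.lower() for p, l in zip(preds, labels))
    return correct / len(labels)

print(word_accuracy(["cat", "dog", "h0use"], ["cat", "dog", "house"]))  # -> 2/3
```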
## [How to use](#contents)

### Inference

If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, you can refer to this [Link](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/migrate_3rd_scripts.html). The following is a simple example:

- Running on Ascend

```python
# Imports (cfg, CNNCTC and the dataset module come from this repo's src package)
from mindspore import context
from mindspore.nn import Momentum
from mindspore.ops import operations as P
from mindspore.train.model import Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net

# Set context
context.set_context(mode=context.GRAPH_MODE, device_target=cfg.device_target)
context.set_context(device_id=cfg.device_id)

# Load unseen dataset for inference
dataset = dataset.create_dataset(cfg.data_path, 1, False)

# Define model
net = CNNCTC(cfg.NUM_CLASS, cfg.HIDDEN_SIZE, cfg.FINAL_FEATURE_WIDTH)
opt = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()), 0.01,
               cfg.momentum, weight_decay=cfg.weight_decay)
loss = P.CTCLoss(preprocess_collapse_repeated=False,
                 ctc_merge_repeated=True,
                 ignore_longer_outputs_than_inputs=False)
model = Model(net, loss_fn=loss, optimizer=opt, metrics={'acc'})

# Load pre-trained model
param_dict = load_checkpoint(cfg.checkpoint_path)
load_param_into_net(net, param_dict)
net.set_train(False)

# Make predictions on the unseen dataset
acc = model.eval(dataset)
print("accuracy: ", acc)
```
### Continue Training on the Pretrained Model

- Running on Ascend

```python
# Imports (cfg, CNNCTC, create_dataset and lr_steps come from this repo's src package)
from mindspore import Tensor
from mindspore.nn import Momentum
from mindspore.ops import operations as P
from mindspore.train.callback import CheckpointConfig, LossMonitor, ModelCheckpoint, TimeMonitor
from mindspore.train.model import Model
from mindspore.train.serialization import load_checkpoint, load_param_into_net

# Load dataset
dataset = create_dataset(cfg.data_path, 1)
batch_num = dataset.get_dataset_size()

# Define model
net = CNNCTC(cfg.NUM_CLASS, cfg.HIDDEN_SIZE, cfg.FINAL_FEATURE_WIDTH)

# Continue training if pre_trained is set to True
if cfg.pre_trained:
    param_dict = load_checkpoint(cfg.checkpoint_path)
    load_param_into_net(net, param_dict)

lr = lr_steps(0, lr_max=cfg.lr_init, total_epochs=cfg.epoch_size,
              steps_per_epoch=batch_num)
opt = Momentum(filter(lambda x: x.requires_grad, net.get_parameters()),
               Tensor(lr), cfg.momentum, weight_decay=cfg.weight_decay)
loss = P.CTCLoss(preprocess_collapse_repeated=False,
                 ctc_merge_repeated=True,
                 ignore_longer_outputs_than_inputs=False)
model = Model(net, loss_fn=loss, optimizer=opt, metrics={'acc'},
              amp_level="O2", keep_batchnorm_fp32=False, loss_scale_manager=None)

# Set callbacks
config_ck = CheckpointConfig(save_checkpoint_steps=batch_num * 5,
                             keep_checkpoint_max=cfg.keep_checkpoint_max)
time_cb = TimeMonitor(data_size=batch_num)
ckpoint_cb = ModelCheckpoint(prefix="train_cnnctc", directory="./",
                             config=config_ck)
loss_cb = LossMonitor()

# Start training
model.train(cfg.epoch_size, dataset, callbacks=[time_cb, ckpoint_cb, loss_cb])
print("train success")
```
# [ModelZoo Homepage](#contents)

Please check the official [homepage](https://gitee.com/mindspore/mindspore/tree/master/model_zoo).