# WarpCTC Example

## Description

This is an example of training WarpCTC on a self-generated captcha image dataset in MindSpore.

## Requirements

- Install [MindSpore](https://www.mindspore.cn/install/en).
- Generate captcha images.

> The [captcha](https://github.com/lepture/captcha) library can be used to generate captcha images. You can generate the train and test datasets yourself or just run the script `scripts/run_process_data.sh`. By default, the shell script generates 10000 test images and 50000 train images separately; a minimal Python sketch of the generation follows the expected output below.
> ```
> $ cd scripts
> $ sh run_process_data.sh
>
> # after execution, you will find the dataset as follows:
> .
> └─warpctc
>   └─data
>     ├─ train  # train dataset
>     └─ test   # evaluate dataset
> ...
> ```
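
If you would rather script the generation directly, the snippet below is a minimal sketch built on the `captcha` library's `ImageCaptcha` class. The dimensions mirror `config.py`, and the file-name-as-label convention is an assumption for illustration; the full generation logic lives in `process_data.py`.

```python
# Minimal sketch: draw one captcha image with the captcha library.
# Assumption: the digit string doubles as the file name / label;
# process_data.py may use a different naming scheme.
import random

from captcha.image import ImageCaptcha

# dimensions match "captcha_width"/"captcha_height" in src/config.py
generator = ImageCaptcha(width=160, height=64)

# up to 4 random digits, mirroring "max_captcha_digits": 4
digits = ''.join(random.choice('0123456789')
                 for _ in range(random.randint(1, 4)))
generator.write(digits, f'{digits}.png')
```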
## Structure

```shell
.
└──warpctc
  ├── README.md
  ├── scripts
    ├── run_distribute_train.sh    # launch distributed training (8 pcs)
    ├── run_eval.sh                # launch evaluation
    ├── run_process_data.sh        # launch dataset generation
    └── run_standalone_train.sh    # launch standalone training (1 pc)
  ├── src
    ├── config.py                  # parameter configuration
    ├── dataset.py                 # data preprocessing
    ├── loss.py                    # CTC loss definition
    ├── lr_generator.py            # generate learning rate for each step
    ├── metric.py                  # accuracy metric for the WarpCTC network
    ├── warpctc.py                 # WarpCTC network definition
    └── warpctc_for_train.py       # WarpCTC network with gradient, loss and gradient clipping
  ├── eval.py                      # evaluation entry point
  ├── process_data.py              # dataset generation script
  └── train.py                     # training entry point
```
## Parameter configuration

Parameters for both training and evaluation can be set in `config.py`.

```
"max_captcha_digits": 4,       # max number of digits in each captcha image
"captcha_width": 160,          # width of captcha images
"captcha_height": 64,          # height of captcha images
"batch_size": 64,              # batch size of input tensor
"epoch_size": 30,              # only valid for training; it is always 1 for inference
"hidden_size": 512,            # hidden size in LSTM layers
"learning_rate": 0.01,         # initial learning rate
"momentum": 0.9,               # momentum of SGD optimizer
"save_checkpoint": True,       # whether to save checkpoints or not
"save_checkpoint_steps": 98,   # step interval between two checkpoints; by default, the last checkpoint is saved after the last step
"keep_checkpoint_max": 30,     # only keep the last keep_checkpoint_max checkpoints
"save_checkpoint_path": "./",  # path to save checkpoints
```
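
For orientation, the sketch below shows one plausible shape of `config.py`: an attribute-style dict so the rest of the code can write `config.batch_size`. The `easydict` pattern is an assumption modeled on common MindSpore example configs, not a claim about the file's exact contents.

```python
# Hypothetical src/config.py (assumption: easydict-style config).
from easydict import EasyDict as ed

config = ed({
    "max_captcha_digits": 4,
    "captcha_width": 160,
    "captcha_height": 64,
    "batch_size": 64,
    "epoch_size": 30,
    "hidden_size": 512,
    "learning_rate": 0.01,
    "momentum": 0.9,
    "save_checkpoint": True,
    "save_checkpoint_steps": 98,
    "keep_checkpoint_max": 30,
    "save_checkpoint_path": "./",
})
```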
## Running the example

### Train

#### Usage

```
# distributed training
Usage: sh run_distribute_train.sh [MINDSPORE_HCCL_CONFIG_PATH] [DATASET_PATH]

# standalone training
Usage: sh run_standalone_train.sh [DATASET_PATH]
```

#### Launch

```
# distributed training example
sh run_distribute_train.sh rank_table.json ../data/train

# standalone training example
sh run_standalone_train.sh ../data/train
```

> For details about `rank_table.json`, refer to the [distributed training tutorial](https://www.mindspore.cn/tutorial/en/master/advanced_use/distributed_training.html).
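
Both scripts ultimately call `train.py`. The sketch below shows, under stated assumptions, how the pieces in `src/` typically come together in a MindSpore training entry point; `create_dataset`, `StackedRNN`, and the `CTCLoss` signature are placeholder names for illustration, not the repository's confirmed API.

```python
# Hedged sketch of a standalone training entry point in MindSpore.
from mindspore import context
from mindspore.nn import SGD
from mindspore.train.callback import (CheckpointConfig, LossMonitor,
                                      ModelCheckpoint)
from mindspore.train.model import Model

from src.config import config
from src.dataset import create_dataset  # placeholder name
from src.loss import CTCLoss            # signature assumed below
from src.warpctc import StackedRNN      # placeholder name

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")

dataset = create_dataset("../data/train", batch_size=config.batch_size)
net = StackedRNN(hidden_size=config.hidden_size)
loss = CTCLoss(batch_size=config.batch_size)  # assumed signature
opt = SGD(net.trainable_params(),
          learning_rate=config.learning_rate,
          momentum=config.momentum)

# checkpoints follow the save_checkpoint_* values from config.py
ckpt_cfg = CheckpointConfig(
    save_checkpoint_steps=config.save_checkpoint_steps,
    keep_checkpoint_max=config.keep_checkpoint_max)
callbacks = [LossMonitor(),
             ModelCheckpoint(prefix="warpctc",
                             directory=config.save_checkpoint_path,
                             config=ckpt_cfg)]

model = Model(net, loss_fn=loss, optimizer=opt)
model.train(config.epoch_size, dataset, callbacks=callbacks)
```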
#### Result

Training results are stored in the `scripts` folder, in a subfolder whose name begins with "train" or "train_parallel". There you can find checkpoint files together with results like the following in the log.

```
# distributed training result (8 pcs)
Epoch: [ 1/ 30], step: [ 98/ 98], loss: [0.5853/0.5853], time: [376813.7944]
Epoch: [ 2/ 30], step: [ 98/ 98], loss: [0.4007/0.4007], time: [75882.0951]
Epoch: [ 3/ 30], step: [ 98/ 98], loss: [0.0921/0.0921], time: [75150.9385]
Epoch: [ 4/ 30], step: [ 98/ 98], loss: [0.1472/0.1472], time: [75135.0193]
Epoch: [ 5/ 30], step: [ 98/ 98], loss: [0.0186/0.0186], time: [75199.5809]
...
```
### Evaluation

#### Usage

```
# evaluation
Usage: sh run_eval.sh [DATASET_PATH] [CHECKPOINT_PATH]
```

#### Launch

```
# evaluation example
sh run_eval.sh ../data/test warpctc-30-98.ckpt
```

> The checkpoint file is produced during the training process (see Train above).
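
Under the hood, `run_eval.sh` drives `eval.py`, which restores the checkpoint and runs the accuracy metric over the test set. A hedged sketch, with `create_dataset`, `StackedRNN`, and `WarpCTCAccuracy` as placeholder names for the code in `src/`:

```python
# Hedged sketch of an evaluation entry point in MindSpore.
from mindspore import context
from mindspore.train.model import Model
from mindspore.train.serialization import (load_checkpoint,
                                           load_param_into_net)

from src.config import config
from src.dataset import create_dataset  # placeholder name
from src.metric import WarpCTCAccuracy  # placeholder name
from src.warpctc import StackedRNN      # placeholder name

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")

dataset = create_dataset("../data/test", batch_size=config.batch_size)
net = StackedRNN(hidden_size=config.hidden_size)
load_param_into_net(net, load_checkpoint("warpctc-30-98.ckpt"))

model = Model(net, metrics={"WarpCTCAccuracy": WarpCTCAccuracy()})
print("result:", model.eval(dataset))
```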
#### Result

The evaluation result is stored in the example path, in a folder named "eval". There you can find results like the following in the log.

```
result: {'WarpCTCAccuracy': 0.9901472929936306}
```
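
The accuracy above counts a sample as correct only when the whole decoded digit string matches its label. For illustration, here is a hedged sketch of such a sequence-level metric with a greedy CTC decode (collapse repeated symbols, then drop blanks); the class name, blank index, and label padding are assumptions, and the repository's real implementation lives in `src/metric.py`.

```python
# Hedged sketch of a sequence-level accuracy metric with greedy CTC
# decoding. Assumptions: 10 digit classes plus a blank at index 10,
# and labels padded with the blank index.
import numpy as np
from mindspore.nn.metrics import Metric

BLANK = 10  # assumed index of the CTC blank class


class SequenceAccuracySketch(Metric):
    def __init__(self):
        super().__init__()
        self.clear()

    def clear(self):
        self._correct, self._total = 0, 0

    def update(self, *inputs):
        # inputs[0]: logits of shape (seq_len, batch, num_classes)
        # inputs[1]: padded labels of shape (batch, max_label_len)
        logits = self._convert_data(inputs[0])
        labels = self._convert_data(inputs[1])
        paths = np.argmax(logits, axis=2).T  # (batch, seq_len)
        for path, label in zip(paths, labels):
            # greedy CTC decode: collapse repeats, then drop blanks
            decoded = [int(c) for i, c in enumerate(path)
                       if (i == 0 or c != path[i - 1]) and c != BLANK]
            target = [int(c) for c in label if c != BLANK]
            self._correct += int(decoded == target)
            self._total += 1

    def eval(self):
        return self._correct / max(self._total, 1)
```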