# BERT Example
## Description
This example implements pre-training, fine-tuning and evaluation of [BERT-base](https://github.com/google-research/bert) and [BERT-NEZHA](https://github.com/huawei-noah/Pretrained-Language-Model).
## Requirements
- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download the zhwiki dataset for pre-training. Extract and clean the text with [WikiExtractor](https://github.com/attardi/wikiextractor), convert the dataset to TFRecord format, and move the files to a specified path (see the sketch after these notes).
- Download datasets for fine-tuning and evaluation, such as CLUENER, TNEWS, SQuAD v1.1, etc.
- To convert dataset files from JSON format to TFRecord format, refer to run_classifier.py in the [BERT](https://github.com/google-research/bert) repository.
> Notes:
> If you are running a fine-tuning or evaluation task, prepare a checkpoint from pre-training first.
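
For reference, the extraction and conversion step might look like the sketch below. The dump file name, output directory and conversion script are placeholders/assumptions; check the WikiExtractor and BERT repositories for the exact interfaces.
```bash
# Hypothetical file names and paths; adjust them to your download location.
# 1) Extract and clean plain text from the zhwiki dump with WikiExtractor.
python WikiExtractor.py zhwiki-latest-pages-articles.xml.bz2 -o extracted_zhwiki
# 2) Tokenize the cleaned text and write TFRecord files, e.g. with
#    create_pretraining_data.py from the google-research/bert repository,
#    then move the resulting files to the DATA_DIR used by the scripts below.
```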
## Running the Example
### Pre-Training
- Set options in `config.py`, including loss scale, optimizer and network. Click [here](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html#tfrecord) for more information about the dataset and the JSON schema file.
- Run `run_standalone_pretrain.sh` for non-distributed pre-training of the BERT-base and BERT-NEZHA models on `Ascend`.
```bash
bash scripts/run_standalone_pretrain.sh DEVICE_ID EPOCH_SIZE DATA_DIR SCHEMA_DIR
```
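For example, a run on device 0 for 40 epochs (the dataset and schema paths below are placeholders):
```bash
# DEVICE_ID=0, EPOCH_SIZE=40; the paths are hypothetical examples.
bash scripts/run_standalone_pretrain.sh 0 40 /data/zhwiki/tfrecord /data/zhwiki/schema.json
```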
- Run `run_standalone_pretrain_for_gpu.sh` for non-distributed pre-training of the BERT-base and BERT-NEZHA models on `GPU`.
```bash
bash scripts/run_standalone_pretrain_for_gpu.sh DEVICE_ID EPOCH_SIZE DATA_DIR SCHEMA_DIR
```
- Run `run_distribute_pretrain.sh` for distributed pre-training of the BERT-base and BERT-NEZHA models on `Ascend`.
```bash
bash scripts/run_distribute_pretrain.sh DATA_DIR RANK_TABLE_FILE
```
- Run `run_distribute_pretrain_for_gpu.sh` for distributed pre-training of the BERT-base and BERT-NEZHA models on `GPU`.
```bash
bash scripts/run_distribute_pretrain_for_gpu.sh RANK_SIZE EPOCH_SIZE DATA_DIR SCHEMA_DIR
```
### Fine-Tuning and Evaluation
- Three kinds of tasks are covered: Classification, NER (Named Entity Recognition) and SQuAD (Stanford Question Answering Dataset).
- Set the BERT network config and optimizer hyperparameters in `finetune_eval_config.py`.
- Classification task: set task-related hyperparameters in `scripts/run_classifier.sh` (a sketch of the command the script wraps is shown after this list).
- Run `scripts/run_classifier.sh` for fine-tuning of the BERT-base and BERT-NEZHA models.
```bash
bash scripts/run_classifier.sh
```
- NER task: set task-related hyperparameters in `scripts/run_ner.sh`.
- Run `scripts/run_ner.sh` for fine-tuning of the BERT-base and BERT-NEZHA models.
```bash
bash scripts/run_ner.sh
```
- SQuAD task: set task-related hyperparameters in `scripts/run_squad.sh`.
- Run `scripts/run_squad.sh` for fine-tuning of the BERT-base and BERT-NEZHA models.
```bash
bash scripts/run_squad.sh
```
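Each `run_*.sh` script wraps the corresponding Python entry point, so setting task-related hyperparameters means editing the python command inside the script. Below is a minimal sketch of the kind of command `scripts/run_classifier.sh` might issue; all flag values are placeholders, and the full flag list is documented in the Usage section.
```bash
# Sketch only: hypothetical flag values; see the Usage section below for all options.
python run_classifier.py \
    --device_target="Ascend" \
    --do_train="true" \
    --do_eval="true" \
    --assessment_method="accuracy" \
    --device_id=0 \
    --epoch_num=3 \
    --num_class=15 \
    --save_finetune_checkpoint_path="./classifier_ckpt" \
    --load_pretrain_checkpoint_path="/path/to/pretrain.ckpt" \
    --train_data_file_path="/path/to/train.tfrecord" \
    --eval_data_file_path="/path/to/dev.tfrecord" \
    --schema_file_path="/path/to/schema.json"
```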
## Usage
### Pre-Training
```
usage: run_pretrain.py  [--distribute DISTRIBUTE] [--epoch_size N] [--device_num N] [--device_id N]
                        [--enable_save_ckpt ENABLE_SAVE_CKPT]
                        [--enable_lossscale ENABLE_LOSSSCALE] [--do_shuffle DO_SHUFFLE]
                        [--enable_data_sink ENABLE_DATA_SINK] [--data_sink_steps N] [--checkpoint_path CHECKPOINT_PATH]
                        [--save_checkpoint_steps N] [--save_checkpoint_num N]
                        [--data_dir DATA_DIR] [--schema_dir SCHEMA_DIR]

options:
    --distribute               pre-training by several devices: "true"(training by more than 1 device) | "false", default is "false"
    --epoch_size               epoch size: N, default is 1
    --device_num               number of used devices: N, default is 1
    --device_id                device id: N, default is 0
    --enable_save_ckpt         enable save checkpoint: "true" | "false", default is "true"
    --enable_lossscale         enable loss scale: "true" | "false", default is "true"
    --do_shuffle               enable shuffle: "true" | "false", default is "true"
    --enable_data_sink         enable data sink: "true" | "false", default is "true"
    --data_sink_steps          set data sink steps: N, default is 1
    --checkpoint_path          path to save checkpoint files: PATH, default is ""
    --save_checkpoint_steps    steps for saving checkpoint files: N, default is 1000
    --save_checkpoint_num      number of checkpoint files to save: N, default is 1
    --data_dir                 path to dataset directory: PATH, default is ""
    --schema_dir               path to schema.json file: PATH, default is ""
```
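As an illustration, a single-device pre-training run invoked directly through `run_pretrain.py` might look like this; all paths and values below are placeholders, and the flags are those documented above.
```bash
# Placeholder paths and values throughout.
python run_pretrain.py \
    --distribute="false" \
    --epoch_size=40 \
    --device_id=0 \
    --enable_save_ckpt="true" \
    --enable_lossscale="true" \
    --do_shuffle="true" \
    --enable_data_sink="true" \
    --data_sink_steps=1 \
    --checkpoint_path="./pretrain_ckpt" \
    --save_checkpoint_steps=10000 \
    --save_checkpoint_num=1 \
    --data_dir="/path/to/zhwiki/tfrecord" \
    --schema_dir="/path/to/schema.json"
```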
### Fine-Tuning and Evaluation
```
usage: run_ner.py   [--device_target DEVICE_TARGET] [--do_train DO_TRAIN] [--do_eval DO_EVAL]
                    [--assessment_method ASSESSMENT_METHOD] [--use_crf USE_CRF]
                    [--device_id N] [--epoch_num N] [--vocab_file_path VOCAB_FILE_PATH]
                    [--label2id_file_path LABEL2ID_FILE_PATH]
                    [--save_finetune_checkpoint_path SAVE_FINETUNE_CHECKPOINT_PATH]
                    [--load_pretrain_checkpoint_path LOAD_PRETRAIN_CHECKPOINT_PATH]
                    [--train_data_file_path TRAIN_DATA_FILE_PATH]
                    [--eval_data_file_path EVAL_DATA_FILE_PATH]
                    [--schema_file_path SCHEMA_FILE_PATH]

options:
    --device_target                    targeted device to run task: Ascend | GPU
    --do_train                         whether to run training on the training set: true | false
    --do_eval                          whether to run evaluation on the dev set: true | false
    --assessment_method                assessment method for evaluation: f1 | clue_benchmark
    --use_crf                          whether to use CRF to calculate loss: true | false
    --device_id                        device id to run task
    --epoch_num                        total number of training epochs to perform
    --num_class                        number of classes to label
    --vocab_file_path                  the vocabulary file that the BERT model was trained on
    --label2id_file_path               label-to-id JSON file
    --save_finetune_checkpoint_path    path to save the generated fine-tuning checkpoint
    --load_pretrain_checkpoint_path    initial checkpoint (usually from a pre-trained BERT model)
    --load_finetune_checkpoint_path    fine-tuning checkpoint path, required if only doing eval
    --train_data_file_path             NER TFRecord file for training, e.g. train.tfrecord
    --eval_data_file_path              NER TFRecord file for predictions if f1 is used for evaluation; NER JSON file for predictions if clue_benchmark is used
    --schema_file_path                 path to the dataset schema file

usage: run_squad.py [--device_target DEVICE_TARGET] [--do_train DO_TRAIN] [--do_eval DO_EVAL]
                    [--device_id N] [--epoch_num N] [--num_class N]
                    [--vocab_file_path VOCAB_FILE_PATH]
                    [--eval_json_path EVAL_JSON_PATH]
                    [--save_finetune_checkpoint_path SAVE_FINETUNE_CHECKPOINT_PATH]
                    [--load_pretrain_checkpoint_path LOAD_PRETRAIN_CHECKPOINT_PATH]
                    [--load_finetune_checkpoint_path LOAD_FINETUNE_CHECKPOINT_PATH]
                    [--train_data_file_path TRAIN_DATA_FILE_PATH]
                    [--eval_data_file_path EVAL_DATA_FILE_PATH]
                    [--schema_file_path SCHEMA_FILE_PATH]

options:
    --device_target                    targeted device to run task: Ascend | GPU
    --do_train                         whether to run training on the training set: true | false
    --do_eval                          whether to run evaluation on the dev set: true | false
    --device_id                        device id to run task
    --epoch_num                        total number of training epochs to perform
    --num_class                        number of classes to classify, usually 2 for the SQuAD task
    --vocab_file_path                  the vocabulary file that the BERT model was trained on
    --eval_json_path                   path to the SQuAD dev JSON file
    --save_finetune_checkpoint_path    path to save the generated fine-tuning checkpoint
    --load_pretrain_checkpoint_path    initial checkpoint (usually from a pre-trained BERT model)
    --load_finetune_checkpoint_path    fine-tuning checkpoint path, required if only doing eval
    --train_data_file_path             SQuAD TFRecord file for training, e.g. train1.1.tfrecord
    --eval_data_file_path              SQuAD TFRecord file for predictions, e.g. dev1.1.tfrecord
    --schema_file_path                 path to the dataset schema file

usage: run_classifier.py [--device_target DEVICE_TARGET] [--do_train DO_TRAIN] [--do_eval DO_EVAL]
                         [--assessment_method ASSESSMENT_METHOD] [--device_id N] [--epoch_num N] [--num_class N]
                         [--save_finetune_checkpoint_path SAVE_FINETUNE_CHECKPOINT_PATH]
                         [--load_pretrain_checkpoint_path LOAD_PRETRAIN_CHECKPOINT_PATH]
                         [--load_finetune_checkpoint_path LOAD_FINETUNE_CHECKPOINT_PATH]
                         [--train_data_file_path TRAIN_DATA_FILE_PATH]
                         [--eval_data_file_path EVAL_DATA_FILE_PATH]
                         [--schema_file_path SCHEMA_FILE_PATH]

options:
    --device_target                    targeted device to run task: Ascend | GPU
    --do_train                         whether to run training on the training set: true | false
    --do_eval                          whether to run evaluation on the dev set: true | false
    --assessment_method                assessment method for evaluation: accuracy | f1 | mcc | spearman_correlation
    --device_id                        device id to run task
    --epoch_num                        total number of training epochs to perform
    --num_class                        number of classes to label
    --save_finetune_checkpoint_path    path to save the generated fine-tuning checkpoint
    --load_pretrain_checkpoint_path    initial checkpoint (usually from a pre-trained BERT model)
    --load_finetune_checkpoint_path    fine-tuning checkpoint path, required if only doing eval
    --train_data_file_path             TFRecord file for training, e.g. train.tfrecord
    --eval_data_file_path              TFRecord file for predictions, e.g. dev.tfrecord
    --schema_file_path                 path to the dataset schema file
```
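For example, fine-tuning and then evaluating the NER task on a single Ascend device could be launched as below. All paths are placeholders, and the flags are those documented in the usage above; with `--assessment_method=clue_benchmark`, pass a JSON eval file instead of a TFRecord file.
```bash
# Placeholder paths throughout; flags are taken from the run_ner.py usage above.
python run_ner.py \
    --device_target="Ascend" \
    --do_train="true" \
    --do_eval="true" \
    --assessment_method="f1" \
    --use_crf="false" \
    --device_id=0 \
    --epoch_num=5 \
    --vocab_file_path="/path/to/vocab.txt" \
    --label2id_file_path="/path/to/label2id.json" \
    --save_finetune_checkpoint_path="./ner_ckpt" \
    --load_pretrain_checkpoint_path="/path/to/pretrain.ckpt" \
    --train_data_file_path="/path/to/train.tfrecord" \
    --eval_data_file_path="/path/to/dev.tfrecord" \
    --schema_file_path="/path/to/schema.json"
```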
## Options and Parameters
The parameters of the BERT model and the options for training are set in `config.py` and `finetune_eval_config.py` respectively.
### Options:
```
config.py:
    bert_network        version of BERT model: base | nezha, default is base
    loss_scale_value    initial value of loss scale: N, default is 2^32
    scale_factor        factor used to update loss scale: N, default is 2
    scale_window        steps between loss scale updates: N, default is 1000
    optimizer           optimizer used in the network: AdamWeightDecayDynamicLR | Lamb | Momentum, default is "Lamb"
```
### Parameters:
```
Parameters for dataset and network (Pre-Training/Fine-Tuning/Evaluation):
    batch_size                      batch size of input dataset: N, default is 16
    seq_length                      length of input sequence: N, default is 128
    vocab_size                      size of the vocabulary: N, must be consistent with the dataset you use, default is 21136
    hidden_size                     size of BERT encoder layers: N, default is 768
    num_hidden_layers               number of hidden layers: N, default is 12
    num_attention_heads             number of attention heads: N, default is 12
    intermediate_size               size of intermediate layer: N, default is 3072
    hidden_act                      activation function used: ACTIVATION, default is "gelu"
    hidden_dropout_prob             dropout probability for BertOutput: Q, default is 0.1
    attention_probs_dropout_prob    dropout probability for BertAttention: Q, default is 0.1
    max_position_embeddings         maximum length of sequences: N, default is 512
    type_vocab_size                 size of token type vocab: N, default is 16
    initializer_range               initialization value of TruncatedNormal: Q, default is 0.02
    use_relative_positions          use relative positions or not: True | False, default is False
    input_mask_from_dataset         use the input mask loaded from the dataset or not: True | False, default is True
    token_type_ids_from_dataset     use the token type ids loaded from the dataset or not: True | False, default is True
    dtype                           data type of input: mstype.float16 | mstype.float32, default is mstype.float32
    compute_type                    compute type in BertTransformer: mstype.float16 | mstype.float32, default is mstype.float16

Parameters for optimizer:
    AdamWeightDecay:
        decay_steps                 steps of the learning rate decay: N
        learning_rate               value of learning rate: Q
        end_learning_rate           value of end learning rate: Q, must be positive
        power                       power: Q
        warmup_steps                steps of the learning rate warm up: N
        weight_decay                weight decay: Q
        eps                         term added to the denominator to improve numerical stability: Q

    Lamb:
        decay_steps                 steps of the learning rate decay: N
        learning_rate               value of learning rate: Q
        end_learning_rate           value of end learning rate: Q
        power                       power: Q
        warmup_steps                steps of the learning rate warm up: N
        weight_decay                weight decay: Q

    Momentum:
        learning_rate               value of learning rate: Q
        momentum                    momentum for the moving average: Q
```