# BERT Example
## Description
This example implements pre-training, fine-tuning and evaluation of [BERT-base](https://github.com/google-research/bert) (the base version of the BERT model) and [BERT-NEZHA](https://github.com/huawei-noah/Pretrained-Language-Model) (a Chinese pre-trained language model developed by Huawei, which introduces Functional Relative Positional Encoding as an effective positional encoding scheme).
## Requirements
- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download the zhwiki dataset from <https://dumps.wikimedia.org/zhwiki> for pre-training. Extract and clean the text with [WikiExtractor](https://github.com/attardi/wikiextractor), then convert the dataset to TFRecord format and move the files to a specified path (see the illustrative sketch below).
- Download the CLUE dataset from <https://www.cluebenchmarks.com> for fine-tuning and evaluation.
> Note: if you are running a fine-tuning or evaluation task, prepare the corresponding checkpoint file.
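The data preparation might look roughly like the following. This is only a sketch: the dump file name, output paths and the TFRecord conversion step are illustrative assumptions rather than part of this example, and the exact WikiExtractor invocation depends on its version.
```bash
# Illustrative sketch only: file names and paths are placeholders.
# 1. Download a zhwiki dump (the exact file name depends on the dump date).
wget https://dumps.wikimedia.org/zhwiki/latest/zhwiki-latest-pages-articles.xml.bz2

# 2. Extract and clean the text with WikiExtractor (invocation varies by version).
git clone https://github.com/attardi/wikiextractor
python wikiextractor/WikiExtractor.py zhwiki-latest-pages-articles.xml.bz2 -o extracted_zhwiki

# 3. Convert the cleaned text to TFRecord (for example with a BERT-style
#    create_pretraining_data script, which is not part of this example) and
#    move the TFRecord files and the matching schema.json to the paths you
#    will pass as DATA_DIR and SCHEMA_DIR below.
```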
## Running the Example
### Pre-Training
- Set options in `config.py`, including loss scale, optimizer and network settings. Click [here](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html#tfrecord) for more information about the dataset and the JSON schema file.
- Run `run_standalone_pretrain.sh` for non-distributed pre-training of the BERT-base and BERT-NEZHA models.
```bash
sh run_standalone_pretrain.sh DEVICE_ID EPOCH_SIZE DATA_DIR SCHEMA_DIR MINDSPORE_PATH
```
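For example, a single-device run might look like this; the device id, epoch count and paths are placeholders to adapt to your environment:
```bash
# Illustrative values only: device 0, 1 epoch, user-specific paths.
sh run_standalone_pretrain.sh 0 1 /path/to/zhwiki_tfrecord /path/to/schema.json /path/to/mindspore
```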
- Run `run_distribute_pretrain.sh` for distributed pre-training of the BERT-base and BERT-NEZHA models.
```bash
sh run_distribute_pretrain.sh DEVICE_NUM EPOCH_SIZE DATA_DIR SCHEMA_DIR MINDSPORE_HCCL_CONFIG_PATH MINDSPORE_PATH
```
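An 8-device run might look like the following; the device count, epoch count, dataset paths and the HCCL configuration file name are placeholders:
```bash
# Illustrative values only: 8 devices, 1 epoch, user-provided HCCL configuration file.
sh run_distribute_pretrain.sh 8 1 /path/to/zhwiki_tfrecord /path/to/schema.json /path/to/hccl.json /path/to/mindspore
```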
### Fine-Tuning
- Set options in `finetune_config.py`. Make sure 'data_file' and 'schema_file' are set to your own paths, set 'pre_training_ckpt' to the pre-trained checkpoint file to load, and set 'ckpt_dir' to the directory where the generated checkpoint files should be saved.
- Run `finetune.py` to fine-tune the BERT-base and BERT-NEZHA models.
```bash
python finetune.py --backend=ms
```
### Evaluation
- Set options in `evaluation_config.py`. Make sure 'data_file', 'schema_file' and 'finetune_ckpt' are set to your own paths.
- Run `evaluation.py` to evaluate the BERT-base and BERT-NEZHA models.
```bash
python evaluation.py --backend=ms
```
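Taken together, and assuming the configuration files have been edited as described above, the overall workflow is roughly the following; all values and paths are illustrative:
```bash
# Illustrative end-to-end flow: pre-train, fine-tune, then evaluate.
sh run_standalone_pretrain.sh 0 1 /path/to/zhwiki_tfrecord /path/to/schema.json /path/to/mindspore
# Edit finetune_config.py (data_file, schema_file, pre_training_ckpt, ckpt_dir), then:
python finetune.py --backend=ms
# Edit evaluation_config.py (data_file, schema_file, finetune_ckpt), then:
python evaluation.py --backend=ms
```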
## Usage
### Pre-Training
```
usage: run_pretrain.py [--distribute DISTRIBUTE] [--epoch_size N] [--device_num N] [--device_id N]
                       [--enable_task_sink ENABLE_TASK_SINK] [--enable_loop_sink ENABLE_LOOP_SINK]
                       [--enable_mem_reuse ENABLE_MEM_REUSE] [--enable_save_ckpt ENABLE_SAVE_CKPT]
                       [--enable_lossscale ENABLE_LOSSSCALE] [--do_shuffle DO_SHUFFLE]
                       [--enable_data_sink ENABLE_DATA_SINK] [--data_sink_steps N] [--checkpoint_path CHECKPOINT_PATH]
                       [--save_checkpoint_steps N] [--save_checkpoint_num N]
                       [--data_dir DATA_DIR] [--schema_dir SCHEMA_DIR]

options:
    --distribute                pre-training on several devices: "true" (training with more than 1 device) | "false", default is "false"
    --epoch_size                epoch size: N, default is 1
    --device_num                number of used devices: N, default is 1
    --device_id                 device id: N, default is 0
    --enable_task_sink          enable task sink: "true" | "false", default is "true"
    --enable_loop_sink          enable loop sink: "true" | "false", default is "true"
    --enable_mem_reuse          enable memory reuse: "true" | "false", default is "true"
    --enable_save_ckpt          enable saving checkpoints: "true" | "false", default is "true"
    --enable_lossscale          enable loss scale: "true" | "false", default is "true"
    --do_shuffle                enable shuffle: "true" | "false", default is "true"
    --enable_data_sink          enable data sink: "true" | "false", default is "true"
    --data_sink_steps           set data sink steps: N, default is 1
    --checkpoint_path           path to save checkpoint files: PATH, default is ""
    --save_checkpoint_steps     steps for saving checkpoint files: N, default is 1000
    --save_checkpoint_num       number of checkpoint files to save: N, default is 1
    --data_dir                  path to dataset directory: PATH, default is ""
    --schema_dir                path to schema.json file: PATH, default is ""
```
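As an illustration, a single-device pre-training run invoked directly through `run_pretrain.py` might look like the following; all values and paths are placeholders, and any flag not shown keeps its default:
```bash
# Illustrative invocation: paths and values are placeholders.
python run_pretrain.py \
    --distribute="false" \
    --epoch_size=1 \
    --device_id=0 \
    --enable_save_ckpt="true" \
    --save_checkpoint_steps=1000 \
    --save_checkpoint_num=1 \
    --checkpoint_path=/path/to/save/ckpt \
    --data_dir=/path/to/zhwiki_tfrecord \
    --schema_dir=/path/to/schema.json
```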
## Options and Parameters
Parameters of the BERT model and options for training are set in `config.py`, `finetune_config.py` and `evaluation_config.py`, respectively.
### Options:
```
Pre-Training:
    task                  task type: NER | XNLI | LCQMC | SENTI
    bert_network          version of BERT model: base | nezha, default is base
    loss_scale_value      initial value of loss scale: N, default is 2^32
    scale_factor          factor used to update loss scale: N, default is 2
    scale_window          steps between updates of loss scale: N, default is 1000
    optimizer             optimizer used in the network: AdamWeightDecayDynamicLR | Lamb | Momentum, default is "Lamb"

Fine-Tuning:
    task                  task type: NER | XNLI | LCQMC | SENTI
    data_file             dataset file to load: PATH, default is "/your/path/cn-wiki-128"
    schema_file           dataset schema file to load: PATH, default is "/your/path/datasetSchema.json"
    epoch_num             repeat count of training: N, default is 40
    ckpt_prefix           prefix used to save checkpoint files: PREFIX, default is "bert"
    ckpt_dir              path to save checkpoint files: PATH, default is None
    pre_training_ckpt     checkpoint file to load: PATH, default is "/your/path/pre_training.ckpt"
    optimizer             optimizer used in the network: AdamWeightDecayDynamicLR | Lamb | Momentum, default is "Lamb"

Evaluation:
    task                  task type: NER | XNLI | LCQMC | SENTI
    data_file             dataset file to load: PATH, default is "/your/path/evaluation.tfrecord"
    schema_file           dataset schema file to load: PATH, default is "/your/path/schema.json"
    finetune_ckpt         checkpoint file to load: PATH, default is "/your/path/your.ckpt"
```
### Parameters:
```
Parameters for dataset and network (Pre-Training/Fine-Tuning/Evaluation):
    batch_size                      batch size of input dataset: N, default is 16
    seq_length                      length of input sequence: N, default is 128
    vocab_size                      size of the vocabulary: N, default is 21136
    hidden_size                     size of bert encoder layers: N, default is 768
    num_hidden_layers               number of hidden layers: N, default is 12
    num_attention_heads             number of attention heads: N, default is 12
    intermediate_size               size of intermediate layer: N, default is 3072
    hidden_act                      activation function used: ACTIVATION, default is "gelu"
    hidden_dropout_prob             dropout probability for BertOutput: Q, default is 0.1
    attention_probs_dropout_prob    dropout probability for BertAttention: Q, default is 0.1
    max_position_embeddings         maximum length of sequences: N, default is 512
    type_vocab_size                 size of token type vocab: N, default is 16
    initializer_range               initialization value of TruncatedNormal: Q, default is 0.02
    use_relative_positions          use relative positions or not: True | False, default is False
    input_mask_from_dataset         use the input mask loaded from dataset or not: True | False, default is True
    token_type_ids_from_dataset     use the token type ids loaded from dataset or not: True | False, default is True
    dtype                           data type of input: mstype.float16 | mstype.float32, default is mstype.float32
    compute_type                    compute type in BertTransformer: mstype.float16 | mstype.float32, default is mstype.float16

Parameters for optimizer:
    AdamWeightDecayDynamicLR:
        decay_steps             steps of the learning rate decay: N, default is 12276*3
        learning_rate           value of learning rate: Q, default is 1e-5
        end_learning_rate       value of end learning rate: Q, default is 0.0
        power                   power of the polynomial decay: Q, default is 10.0
        warmup_steps            steps of the learning rate warm up: N, default is 2100
        weight_decay            weight decay: Q, default is 1e-5
        eps                     term added to the denominator to improve numerical stability: Q, default is 1e-6

    Lamb:
        decay_steps             steps of the learning rate decay: N, default is 12276*3
        learning_rate           value of learning rate: Q, default is 1e-5
        end_learning_rate       value of end learning rate: Q, default is 0.0
        power                   power of the polynomial decay: Q, default is 5.0
        warmup_steps            steps of the learning rate warm up: N, default is 2100
        weight_decay            weight decay: Q, default is 1e-5
        decay_filter            function to determine whether to apply weight decay on parameters: FUNCTION, default is lambda x: False

    Momentum:
        learning_rate           value of learning rate: Q, default is 2e-5
        momentum                momentum for the moving average: Q, default is 0.9
```