# BERT Example
## Description
This is an example of training BERT with THOR, a novel approximate second-order optimization method in MindSpore.
## Requirements
- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download the zhwiki dataset for pre-training. Extract and clean the text in the dataset with [WikiExtractor](https://github.com/attardi/wikiextractor) (see the example command after this list). Convert the dataset to TFRecord format and move the files to a specified path.
- Download datasets for fine-tuning and evaluation, such as CLUENER, TNEWS, SQuAD v1.1, etc.
> Notes:
> If you are running a fine-tuning or evaluation task, prepare a checkpoint generated by pre-training first.
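The text-extraction step might look like the sketch below; the dump filename and output directory are placeholder values, and the subsequent TFRecord conversion follows the MindSpore data preparation tutorial linked in the Pre-Training section.

``` bash
# Placeholder paths: extract and clean plain text from the zhwiki dump with WikiExtractor
python WikiExtractor.py zhwiki-latest-pages-articles.xml.bz2 -o extracted_zhwiki
```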
## Running the Example
### Pre-Training
- Set options in `config.py`, including loss scale, optimizer and network. Click [here](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html#tfrecord) for more information about the dataset and the json schema file.
- Run `run_standalone_pretrain.sh` for non-distributed pre-training of the BERT-base and BERT-NEZHA models.
``` bash
sh scripts/run_standalone_pretrain.sh DEVICE_ID EPOCH_SIZE DATA_DIR SCHEMA_DIR
```
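For example, a run on a single device might look like the following; the device id, epoch size and paths are placeholder values.

``` bash
# Placeholder values: device 0, 40 epochs, local TFRecord and schema paths
sh scripts/run_standalone_pretrain.sh 0 40 /path/to/zhwiki_tfrecord /path/to/schema.json
```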
- Run `run_distribute_pretrain.sh` for distributed pre-training of the BERT-base and BERT-NEZHA models.
``` bash
sh scripts/run_distribute_pretrain.sh DEVICE_NUM EPOCH_SIZE DATA_DIR SCHEMA_DIR MINDSPORE_HCCL_CONFIG_PATH
```
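Similarly, a distributed run might look like the following; the device count, epoch size and all paths (including the HCCL configuration file) are placeholder values.

``` bash
# Placeholder values: 8 devices, 40 epochs
sh scripts/run_distribute_pretrain.sh 8 40 /path/to/zhwiki_tfrecord /path/to/schema.json /path/to/hccl.json
```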
## Usage
### Pre-Training
```
usage: run_pretrain.py [--distribute DISTRIBUTE] [--epoch_size N] [--device_num N] [--device_id N]
                       [--enable_save_ckpt ENABLE_SAVE_CKPT]
                       [--enable_lossscale ENABLE_LOSSSCALE] [--do_shuffle DO_SHUFFLE]
                       [--enable_data_sink ENABLE_DATA_SINK] [--data_sink_steps N]
                       [--checkpoint_path CHECKPOINT_PATH]
                       [--save_checkpoint_steps N] [--save_checkpoint_num N]
                       [--data_dir DATA_DIR] [--schema_dir SCHEMA_DIR]

options:
    --distribute               pre-training by several devices: "true" (training by more than 1 device) | "false", default is "false"
    --epoch_size               epoch size: N, default is 1
    --device_num               number of used devices: N, default is 1
    --device_id                device id: N, default is 0
    --enable_save_ckpt         enable save checkpoint: "true" | "false", default is "true"
    --enable_lossscale         enable lossscale: "true" | "false", default is "true"
    --do_shuffle               enable shuffle: "true" | "false", default is "true"
    --enable_data_sink         enable data sink: "true" | "false", default is "true"
    --data_sink_steps          set data sink steps: N, default is 1
    --checkpoint_path          path to save checkpoint files: PATH, default is ""
    --save_checkpoint_steps    steps for saving checkpoint files: N, default is 1000
    --save_checkpoint_num      number for saving checkpoint files: N, default is 1
    --data_dir                 path to dataset directory: PATH, default is ""
    --schema_dir               path to schema.json file: PATH, default is ""
```
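A direct invocation of `run_pretrain.py` with the options above might look like the following sketch; all paths and numeric values are placeholders.

``` bash
# Placeholder values for a single-device run; only options listed above are used
python run_pretrain.py \
    --distribute="false" --epoch_size=40 --device_id=0 \
    --enable_save_ckpt="true" --enable_lossscale="true" --do_shuffle="true" \
    --enable_data_sink="true" --data_sink_steps=100 \
    --save_checkpoint_steps=1000 --save_checkpoint_num=1 \
    --data_dir=/path/to/zhwiki_tfrecord --schema_dir=/path/to/schema.json
```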
## Options and Parameters
Parameters of the BERT model and options for training are set in `config.py`, `bert_net_config.py` and `evaluation_config.py`, respectively.
### Options:
```
config.py:
    bert_network    version of BERT model: base | nezha, default is base
    optimizer       optimizer used in the network: AdamWeightDecayDynamicLR | Lamb | Momentum | Thor, default is "Thor"
```
### Parameters:
```
Parameters for dataset and network (Pre-Training/Evaluation):
    batch_size                      batch size of input dataset: N, default is 8
    seq_length                      length of input sequence: N, default is 128
    vocab_size                      size of each embedding vector: N, must be consistent with the dataset you use. Default is 21136
    hidden_size                     size of bert encoder layers: N, default is 768
    num_hidden_layers               number of hidden layers: N, default is 12
    num_attention_heads             number of attention heads: N, default is 12
    intermediate_size               size of intermediate layer: N, default is 3072
    hidden_act                      activation function used: ACTIVATION, default is "gelu"
    hidden_dropout_prob             dropout probability for BertOutput: Q, default is 0.1
    attention_probs_dropout_prob    dropout probability for BertAttention: Q, default is 0.1
    max_position_embeddings         maximum length of sequences: N, default is 512
    type_vocab_size                 size of token type vocab: N, default is 16
    initializer_range               initialization value of TruncatedNormal: Q, default is 0.02
    use_relative_positions          use relative positions or not: True | False, default is False
    input_mask_from_dataset         use the input mask loaded from the dataset or not: True | False, default is True
    token_type_ids_from_dataset     use the token type ids loaded from the dataset or not: True | False, default is True
    dtype                           data type of input: mstype.float16 | mstype.float32, default is mstype.float32
    compute_type                    compute type in BertTransformer: mstype.float16 | mstype.float32, default is mstype.float16

Parameters for optimizer:
    Thor:
        momentum        momentum for the moving average: Q
        weight_decay    weight decay: Q
        loss_scale      loss scale: N
        frequency       the step interval to update the second-order information matrix: N, default is 10
        batch_size      batch size of input dataset: N, default is 8
```