# TinyBERT Example
## Description
[TinyBERT](https://github.com/huawei-noah/Pretrained-Model/tree/master/TinyBERT) is 7.5x smaller and 9.4x faster on inference than [BERT-base](https://github.com/google-research/bert) (the base version of the BERT model) and achieves competitive performance on natural language understanding tasks. It performs a novel transformer distillation at both the pre-training and task-specific learning stages.
## Requirements
- Install [MindSpore](https://www.mindspore.cn/install/en).
- Download datasets for general distillation and task distillation, such as GLUE.
- Prepare a pre-trained BERT model and a fine-tuned BERT model for a specific task, such as GLUE.
## Running the Example
### General Distill
- Set options in `src/gd_config.py`, including the loss scale, optimizer and network settings.
- Set options in `scripts/run_standalone_gd.sh`, including the device target, data sink config, checkpoint config and dataset. Click [here](https://www.mindspore.cn/tutorial/zh-CN/master/use/data_preparation/loading_the_datasets.html#tfrecord) for more information about the dataset and the JSON schema file.
- Run `run_standalone_gd.sh` for non-distributed general distillation of the BERT-base model.
``` bash
bash scripts/run_standalone_gd.sh
```
- Run `run_distribute_gd.sh` for distributed general distillation of the BERT-base model.
``` bash
bash scripts/run_distribute_gd.sh DEVICE_NUM EPOCH_SIZE RANK_TABLE_FILE
```
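For example, a hypothetical 8-device run for 3 epochs could be launched as below; the device count, epoch size and rank table path are placeholders to adapt to your environment, not values prescribed by this example:
``` bash
# Illustrative arguments: DEVICE_NUM=8, EPOCH_SIZE=3, and a placeholder rank table path
bash scripts/run_distribute_gd.sh 8 3 /path/to/hccl_rank_table_8p.json
```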
### Task Distill
Task distillation has two phases: pre-distillation and task distillation.
- Set options in `src/td_config.py`, including the loss scale, the optimizer config of phase 1 and phase 2, as well as the network config.
- Run `run_standalone_td.sh` for task distillation of the BERT-base model.
```bash
bash scripts/run_standalone_td.sh
```
## Usage
### General Distill
```
usage: run_standalone_gd.py [--distribute DISTRIBUTE] [--device_target DEVICE_TARGET]
                            [--epoch_size N] [--device_id N]
                            [--enable_data_sink ENABLE_DATA_SINK] [--data_sink_steps N]
                            [--save_checkpoint_steps N] [--max_ckpt_num N]
                            [--load_teacher_ckpt_path LOAD_TEACHER_CKPT_PATH]
                            [--data_dir DATA_DIR] [--schema_dir SCHEMA_DIR]

options:
    --distribute                whether to run in distributed mode: "true" | "false"
    --device_target             targeted device to run task: "Ascend" | "GPU"
    --epoch_size                epoch size: N, default is 1
    --device_id                 device id: N, default is 0
    --enable_data_sink          enable data sink: "true" | "false", default is "true"
    --data_sink_steps           set data sink steps: N, default is 1
    --load_teacher_ckpt_path    path of teacher checkpoint to load: PATH, default is ""
    --data_dir                  path to dataset directory: PATH, default is ""
    --schema_dir                path to schema.json file: PATH, default is ""

usage: run_distribute_gd.py [--distribute DISTRIBUTE] [--device_target DEVICE_TARGET]
                            [--epoch_size N] [--device_id N] [--device_num N]
                            [--enable_data_sink ENABLE_DATA_SINK] [--data_sink_steps N]
                            [--save_ckpt_steps N] [--max_ckpt_num N]
                            [--load_teacher_ckpt_path LOAD_TEACHER_CKPT_PATH]
                            [--data_dir DATA_DIR] [--schema_dir SCHEMA_DIR]

options:
    --distribute                whether to run in distributed mode: "true" | "false"
    --device_target             targeted device to run task: "Ascend" | "GPU"
    --epoch_size                epoch size: N, default is 1
    --device_id                 device id: N, default is 0
    --device_num                number of devices used to run the task: N
    --enable_data_sink          enable data sink: "true" | "false", default is "true"
    --data_sink_steps           set data sink steps: N, default is 1
    --load_teacher_ckpt_path    path of teacher checkpoint to load: PATH, default is ""
    --data_dir                  path to dataset directory: PATH, default is ""
    --schema_dir                path to schema.json file: PATH, default is ""
```
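As an illustration of the options above, a minimal non-distributed general distillation run might look like the following; every path and value here is a placeholder, not a default shipped with this example:
```bash
# Illustrative invocation of run_standalone_gd.py; replace the placeholder paths with real ones
python run_standalone_gd.py \
    --distribute="false" \
    --device_target="Ascend" \
    --epoch_size=1 \
    --device_id=0 \
    --enable_data_sink="true" \
    --data_sink_steps=1 \
    --load_teacher_ckpt_path="/path/to/bert_base_teacher.ckpt" \
    --data_dir="/path/to/general_distill_data" \
    --schema_dir="/path/to/schema.json"
```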
## Options and Parameters
`gd_config.py` and `td_config.py` contain the parameters of the BERT model and the options for the optimizer and loss scale.
### Options:
```
Parameters for loss scale:
    loss_scale_value                initial value of loss scale: N, default is 2^8
    scale_factor                    factor used to update loss scale: N, default is 2
    scale_window                    steps for one update of loss scale: N, default is 50

Parameters for task-specific config:
    load_teacher_ckpt_path          teacher checkpoint to load
    load_student_ckpt_path          student checkpoint to load
    data_dir                        training data dir
    eval_data_dir                   evaluation data dir
    schema_dir                      data schema path
```
### Parameters:
```
Parameters for bert network:
    batch_size                      batch size of input dataset: N, default is 16
    seq_length                      length of input sequence: N, default is 128
    vocab_size                      size of each embedding vector: N, must be consistent with the dataset you use. Default is 30522
    hidden_size                     size of bert encoder layers: N
    num_hidden_layers               number of hidden layers: N
    num_attention_heads             number of attention heads: N, default is 12
    intermediate_size               size of intermediate layer: N
    hidden_act                      activation function used: ACTIVATION, default is "gelu"
    hidden_dropout_prob             dropout probability for BertOutput: Q
    attention_probs_dropout_prob    dropout probability for BertAttention: Q
    max_position_embeddings         maximum length of sequences: N, default is 512
    save_ckpt_step                  number of steps between checkpoint saves: N, default is 100
    max_ckpt_num                    maximum number of checkpoints to save: N, default is 1
    type_vocab_size                 size of token type vocab: N, default is 2
    initializer_range               initialization value of TruncatedNormal: Q, default is 0.02
    use_relative_positions          use relative positions or not: True | False, default is False
    input_mask_from_dataset         use the input mask loaded from dataset or not: True | False, default is True
    token_type_ids_from_dataset     use the token type ids loaded from dataset or not: True | False, default is True
    dtype                           data type of input: mstype.float16 | mstype.float32, default is mstype.float32
    compute_type                    compute type in BertTransformer: mstype.float16 | mstype.float32, default is mstype.float16
    enable_fused_layernorm          use batchnorm instead of layernorm to improve performance, default is False

Parameters for optimizer:
    optimizer                       optimizer used in the network: AdamWeightDecay
    learning_rate                   value of learning rate: Q
    end_learning_rate               value of end learning rate: Q, must be positive
    power                           power: Q
    weight_decay                    weight decay: Q
    eps                             term added to the denominator to improve numerical stability: Q
```