# Contents
<!-- TOC -->

- [MASS: Masked Sequence to Sequence Pre-training for Language Generation Description](#mass-masked-sequence-to-sequence-pre-training-for-language-generation-description)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Features](#features)
- [Script description](#script-description)
    - [Data Preparation](#data-preparation)
        - [Tokenization](#tokenization)
        - [Byte Pair Encoding](#byte-pair-encoding)
        - [Build Vocabulary](#build-vocabulary)
        - [Generate Dataset](#generate-dataset)
            - [News Crawl Corpus](#news-crawl-corpus)
            - [Gigaword Corpus](#gigaword-corpus)
            - [Cornell Movie Dialog Corpus](#cornell-movie-dialog-corpus)
    - [Configuration](#configuration)
    - [Training & Evaluation process](#training--evaluation-process)
    - [Weights average](#weights-average)
    - [Learning rate scheduler](#learning-rate-scheduler)
- [Environment Requirements](#environment-requirements)
    - [Platform](#platform)
    - [Requirements](#requirements)
- [Get started](#get-started)
    - [Pre-training](#pre-training)
    - [Fine-tuning](#fine-tuning)
    - [Inference](#inference)
- [Performance](#performance)
    - [Results](#results)
        - [Fine-Tuning on Text Summarization](#fine-tuning-on-text-summarization)
        - [Fine-Tuning on Conversational Response Generation](#fine-tuning-on-conversational-response-generation)
        - [Training Performance](#training-performance)
        - [Inference Performance](#inference-performance)
- [Description of random situation](#description-of-random-situation)
- [Others](#others)
- [ModelZoo Homepage](#modelzoo-homepage)

<!-- /TOC -->
# MASS: Masked Sequence to Sequence Pre-training for Language Generation Description

[MASS: Masked Sequence to Sequence Pre-training for Language Generation](https://www.microsoft.com/en-us/research/uploads/prod/2019/06/MASS-paper-updated-002.pdf) was released by Microsoft in June 2019.

BERT (Devlin et al., 2018) achieved SOTA results in natural language understanding by pre-training the encoder part of the Transformer (Vaswani et al., 2017) on masked rich-resource text. Likewise, GPT (Radford et al., 2018) pre-trains the decoder part of the Transformer on rich-resource text, with the encoder inputs masked. Both build a robust language model by pre-training on masked rich-resource text.

Inspired by BERT, GPT and other language models, Microsoft proposed [MASS: Masked Sequence to Sequence Pre-training for Language Generation](https://www.microsoft.com/en-us/research/uploads/prod/2019/06/MASS-paper-updated-002.pdf), which combines the ideas of BERT and GPT. MASS has an important hyper-parameter k, which controls the length of the masked fragment. BERT and GPT are special cases of MASS when k equals 1 and the sentence length, respectively.

[Introducing MASS – A pre-training method that outperforms BERT and GPT in sequence to sequence language generation tasks](https://www.microsoft.com/en-us/research/blog/introducing-mass-a-pre-training-method-that-outperforms-bert-and-gpt-in-sequence-to-sequence-language-generation-tasks/)

[Paper](https://www.microsoft.com/en-us/research/uploads/prod/2019/06/MASS-paper-updated-002.pdf): Song, Kaitao, Xu Tan, Tao Qin, Jianfeng Lu and Tie-Yan Liu. "MASS: Masked Sequence to Sequence Pre-training for Language Generation." ICML (2019).
# Model Architecture

The MASS network is implemented with the Transformer, which has multiple encoder layers and multiple decoder layers.
For pre-training, we use the Adam optimizer with loss scaling to obtain the pre-trained model.
During fine-tuning, we fine-tune this pre-trained model on different datasets according to the downstream task.
During testing, we use the fine-tuned model to predict the result, and adopt a beam search algorithm to obtain the most probable predictions.
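To illustrate the beam-search step mentioned above, here is a minimal, self-contained sketch of length-normalized beam search over a generic `step_fn` (a toy stand-in for the decoder). It only illustrates the idea; it is not the batched implementation in `src/transformer/beam_search.py`, and all names in it are invented for this example.

```python
import math
from typing import Callable, List, Sequence, Tuple

# Hypothetical toy beam search: `step_fn(prefix)` returns a list of
# (token_id, log_prob) continuations; decoding stops at `eos_id`.
def beam_search(step_fn: Callable[[List[int]], Sequence[Tuple[int, float]]],
                bos_id: int, eos_id: int,
                beam_width: int = 4, max_len: int = 32) -> List[int]:
    beams = [([bos_id], 0.0)]                       # (tokens, summed log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for tokens, score in beams:
            if tokens[-1] == eos_id:                # hypothesis already ended
                finished.append((tokens, score))
                continue
            for token_id, log_p in step_fn(tokens):
                candidates.append((tokens + [token_id], score + log_p))
        if not candidates:
            break
        # Keep only the `beam_width` best partial hypotheses.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    finished.extend(beams)
    # Length-normalized score favors complete, fluent sentences.
    best = max(finished, key=lambda c: c[1] / len(c[0]))
    return best[0]

# Toy usage: a fake "model" that always prefers token 7, then EOS (id 2).
fake_step = lambda prefix: [(7, math.log(0.6)), (2, math.log(0.3)), (5, math.log(0.1))]
print(beam_search(fake_step, bos_id=1, eos_id=2, beam_width=3, max_len=5))
```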
# Dataset

Note that you can run the scripts with the datasets mentioned in the original paper or with datasets widely used in this domain or for this network architecture. In the following sections, we introduce how to run the scripts with the datasets below.

Datasets used:

- Monolingual English data from the News Crawl dataset (WMT 2019) for pre-training.
- Gigaword Corpus (Graff et al., 2003) for text summarization.
- Cornell Movie Dialog corpus (Danescu-Niculescu-Mizil & Lee, 2011) for conversational response generation.

Details about these datasets can be found in [MASS: Masked Sequence to Sequence Pre-training for Language Generation](https://www.microsoft.com/en-us/research/uploads/prod/2019/06/MASS-paper-updated-002.pdf).
# Features

MASS is designed to jointly pre-train the encoder and decoder for language generation tasks.
First, through a sequence-to-sequence framework, MASS predicts only the masked tokens, which forces the encoder to understand the meaning of the unmasked tokens and encourages the decoder to extract useful information from the encoder.
Second, by predicting consecutive tokens on the decoder side, the decoder builds better language modeling capability than it would by predicting discrete tokens.
Third, by further masking the decoder input tokens that are not masked on the encoder side, the decoder is encouraged to extract more useful information from the encoder side, rather than relying on the rich information in the preceding tokens.
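To make the masking scheme concrete, the following is a minimal sketch of how one MASS training pair could be built. It is an illustration of the idea only, not the repo's data pipeline under `src/dataset`; the `MASK` symbol and the helper name are invented for this example, and the decoder input is simplified to the right-shifted fragment (the paper keeps masked placeholders at encoder-visible positions).

```python
import random
from typing import List, Tuple

MASK = "[M]"  # placeholder mask symbol, chosen for illustration only

def make_mass_pair(tokens: List[str], k: int) -> Tuple[List[str], List[str], List[str]]:
    """Build (encoder_input, decoder_input, decoder_target) for one sentence.

    A contiguous fragment of length k is masked on the encoder side and
    becomes the decoder's prediction target; the tokens the encoder can see
    are masked out of the decoder's own input.
    """
    start = random.randrange(0, len(tokens) - k + 1)
    fragment = tokens[start:start + k]

    encoder_input = tokens[:start] + [MASK] * k + tokens[start + k:]
    decoder_target = fragment
    # Shift right by one: the decoder predicts fragment[i] from fragment[:i].
    decoder_input = [MASK] + fragment[:-1]
    return encoder_input, decoder_input, decoder_target

sentence = "the quick brown fox jumps over the lazy dog".split()
enc, dec_in, dec_out = make_mass_pair(sentence, k=4)
print("encoder input :", enc)
print("decoder input :", dec_in)
print("decoder target:", dec_out)
```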
# Script description

The MASS script and code structure are as follows:

```text
├── mass
  ├── README.md                              // Introduction of MASS model.
  ├── config
  │   ├── config.py                          // Configuration instance definition.
  │   ├── config.json                        // Configuration file.
  ├── src
  │   ├── dataset
  │   │   ├── bi_data_loader.py              // Dataset loader for fine-tuning or inference.
  │   │   ├── mono_data_loader.py            // Dataset loader for pre-training.
  │   ├── language_model
  │   │   ├── noise_channel_language_model.py // Noisy channel language model for dataset generation.
  │   │   ├── mass_language_model.py         // MASS language model according to the MASS paper.
  │   │   ├── loose_masked_language_model.py // MASS language model according to the MASS released code.
  │   │   ├── masked_language_model.py       // Masked language model according to the MASS paper.
  │   ├── transformer
  │   │   ├── create_attn_mask.py            // Generate mask matrix to remove padding positions.
  │   │   ├── transformer.py                 // Transformer model architecture.
  │   │   ├── encoder.py                     // Transformer encoder component.
  │   │   ├── decoder.py                     // Transformer decoder component.
  │   │   ├── self_attention.py              // Self-attention block component.
  │   │   ├── multi_head_attention.py        // Multi-head self-attention component.
  │   │   ├── embedding.py                   // Embedding component.
  │   │   ├── positional_embedding.py        // Positional embedding component.
  │   │   ├── feed_forward_network.py        // Feed-forward network.
  │   │   ├── residual_conn.py               // Residual block.
  │   │   ├── beam_search.py                 // Beam search decoder for inference.
  │   │   ├── transformer_for_infer.py       // Use Transformer to infer.
  │   │   ├── transformer_for_train.py       // Use Transformer to train.
  │   ├── utils
  │   │   ├── byte_pair_encoding.py          // Apply BPE with subword-nmt.
  │   │   ├── dictionary.py                  // Dictionary.
  │   │   ├── loss_moniter.py                // Callback for monitoring loss during training.
  │   │   ├── lr_scheduler.py                // Learning rate scheduler.
  │   │   ├── ppl_score.py                   // Perplexity score based on N-gram.
  │   │   ├── rouge_score.py                 // Calculate ROUGE score.
  │   │   ├── load_weights.py                // Load weights from a checkpoint or NPZ file.
  │   │   ├── initializer.py                 // Parameter initializer.
  ├── vocab
  │   ├── all.bpe.codes                      // BPE codes table (generated by the user).
  │   ├── all_en.dict.bin                    // Learned vocabulary file (generated by the user).
  ├── scripts
  │   ├── run_ascend.sh                      // Ascend train & evaluate model script.
  │   ├── run_gpu.sh                         // GPU train & evaluate model script.
  │   ├── learn_subword.sh                   // Learn BPE codes.
  │   ├── stop_training.sh                   // Stop training.
  ├── requirements.txt                       // Third-party package requirements.
  ├── train.py                               // Train API entry.
  ├── eval.py                                // Infer API entry.
  ├── tokenize_corpus.py                     // Corpus tokenization.
  ├── apply_bpe_encoding.py                  // Apply BPE encoding.
  ├── weights_average.py                     // Average multiple model checkpoints to NPZ format.
  ├── news_crawl.py                          // Create News Crawl dataset for pre-training.
  ├── gigaword.py                            // Create Gigaword Corpus dataset.
  ├── cornell_dialog.py                      // Create Cornell Movie Dialog dataset for conversation response.
```
## Data Preparation

Data preparation for a natural language processing task consists of data cleaning, tokenization, encoding and vocabulary generation steps.

In our experiments, [Byte Pair Encoding (BPE)](https://arxiv.org/abs/1508.07909) is used to reduce the size of the vocabulary and to effectively relieve the out-of-vocabulary (OOV) problem.

A vocabulary can be created using `src/utils/dictionary.py` from the text dictionary learned by BPE.
For more detail about BPE, please refer to the [subword-nmt lib](https://www.cnpython.com/pypi/subword-nmt) or the [paper](https://arxiv.org/abs/1508.07909).

In our experiments, the vocabulary was learned from 1.9M sentences of the News Crawl dataset, and its size is 45755.

Here is a brief introduction to the data preparation scripts.
### Tokenization

Use `tokenize_corpus.py` to tokenize a corpus whose text files are in `.txt` format.

Major parameters in `tokenize_corpus.py`:

```bash
--corpus_folder: Corpus folder path. If multiple folders are provided, separate them with ','.
--output_folder: Output folder path.
--tokenizer: Tokenizer to be used, nltk or jieba. If nltk is not fully installed, use jieba instead.
--pool_size: Process pool size.
```

Sample code:

```bash
python tokenize_corpus.py --corpus_folder /{path}/corpus --output_folder /{path}/tokenized_corpus --tokenizer {nltk|jieba} --pool_size 16
```
### Byte Pair Encoding

After tokenization, BPE is applied to the tokenized corpus with the provided `all.bpe.codes`.
The BPE script is `apply_bpe_encoding.py`.

Major parameters in `apply_bpe_encoding.py`:

```bash
--codes: BPE codes file.
--src_folder: Corpus folders.
--output_folder: Output files folder.
--prefix: Prefix of the text files in `src_folder`.
--vocab_path: Generated vocabulary output path.
--threshold: Filter out words whose frequency is lower than the threshold.
--processes: Size of the process pool (to accelerate). Default: 2.
```

Sample code:

```bash
python apply_bpe_encoding.py --codes /{path}/all.bpe.codes \
    --src_folder /{path}/tokenized_corpus \
    --output_folder /{path}/tokenized_corpus/bpe \
    --prefix tokenized \
    --vocab_path /{path}/vocab_en.dict.bin \
    --processes 32
```
### Build Vocabulary

Suppose you want to create a new vocabulary; there are two options:

1. Learn BPE codes from scratch, and create the vocabulary from the vocabulary files produced by `subword-nmt`.
2. Create it from an existing vocabulary file whose lines are in the format `word frequency`.
3. *Optional*: create a smaller vocabulary based on `vocab/all_en.dict.bin` with the `shrink` method of `src/utils/dictionary.py`.
4. Persist the vocabulary to the `vocab` folder with the `persistence()` method.

The major interfaces of `src/utils/dictionary.py` are as follows:

1. `shrink(self, threshold=50)`: Shrink the vocabulary by filtering out words whose frequency is lower than the threshold. It returns a new vocabulary.
2. `load_from_text(cls, filepaths: List[str])`: Load an existing text vocabulary whose lines are in the format `word frequency`.
3. `load_from_persisted_dict(cls, filepath)`: Load a persisted binary vocabulary that was saved by calling the `persistence()` method.
4. `persistence(self, path)`: Save the vocabulary object to a binary file.

Sample code:

```python
from src.utils import Dictionary

vocabulary = Dictionary.load_from_persisted_dict("vocab/all_en.dict.bin")
tokens = [1, 2, 3, 4, 5]
# Convert ids to symbols.
print([vocabulary[t] for t in tokens])

sentence = ["Hello", "world"]
# Convert symbols to ids.
print([vocabulary.index[s] for s in sentence])
```

For more detail, please refer to the source file.
### Generate Dataset

As mentioned above, three corpora are used with MASS; a dataset generation script is provided for each of them.

#### News Crawl Corpus

The script is `news_crawl.py`.

Major parameters in `news_crawl.py`:

```bash
Note: provide at least one of `--existed_vocab` or `--dict_folder`.
A new vocabulary will be created in `output_folder` when `--dict_folder` is passed.
--src_folder: Corpus folders.
--existed_vocab: Optional, persisted vocabulary file.
--mask_ratio: Ratio of masking.
--output_folder: Output dataset files folder path.
--max_len: Maximum sentence length. If a sentence is longer than `max_len`, it is dropped.
--suffix: Optional, suffix of generated dataset files.
--processes: Optional, size of the process pool (to accelerate). Default: 2.
```

Sample code:

```bash
python news_crawl.py --src_folder /{path}/news_crawl \
    --existed_vocab /{path}/mass/vocab/all_en.dict.bin \
    --mask_ratio 0.5 \
    --output_folder /{path}/news_crawl_dataset \
    --max_len 32 \
    --processes 32
```
#### Gigaword Corpus

The script is `gigaword.py`.

Major parameters in `gigaword.py`:

```bash
--train_src: Train source file path.
--train_ref: Train reference file path.
--test_src: Test source file path.
--test_ref: Test reference file path.
--existed_vocab: Persisted vocabulary file.
--output_folder: Output dataset files folder path.
--noise_prob: Optional, probability of adding noise. Default: 0.
--max_len: Optional, maximum sentence length. If a sentence is longer than `max_len`, it is dropped. Default: 64.
--format: Optional, dataset format, "mindrecord" or "tfrecord". Default: "tfrecord".
```

Sample code:

```bash
python gigaword.py --train_src /{path}/gigaword/train_src.txt \
    --train_ref /{path}/gigaword/train_ref.txt \
    --test_src /{path}/gigaword/test_src.txt \
    --test_ref /{path}/gigaword/test_ref.txt \
    --existed_vocab /{path}/mass/vocab/all_en.dict.bin \
    --noise_prob 0.1 \
    --output_folder /{path}/gigaword_dataset \
    --max_len 64
```
#### Cornell Movie Dialog Corpus

The script is `cornell_dialog.py`.

Major parameters in `cornell_dialog.py`:

```bash
--src_folder: Corpus folders.
--existed_vocab: Persisted vocabulary file.
--train_prefix: Train source and target file prefix. Default: train.
--test_prefix: Test source and target file prefix. Default: test.
--output_folder: Output dataset files folder path.
--max_len: Maximum sentence length. If a sentence is longer than `max_len`, it is dropped.
--valid_prefix: Optional, valid source and target file prefix. Default: valid.
```

Sample code:

```bash
python cornell_dialog.py --src_folder /{path}/cornell_dialog \
    --existed_vocab /{path}/mass/vocab/all_en.dict.bin \
    --train_prefix train \
    --test_prefix test \
    --noise_prob 0.1 \
    --output_folder /{path}/cornell_dialog_dataset \
    --max_len 64
```
## Configuration

The JSON file under `config/` is the template configuration file.
Almost all of the required options and arguments can be assigned there conveniently, including the training platform, dataset and model configurations, optimizer arguments, etc. Optional features such as loss scale and checkpointing are also available by setting the corresponding options.

For more detailed information about the attributes, refer to the file `config/config.py`.
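As a quick way to inspect a configuration file, the sketch below loads it with plain Python. This is only an illustration; the node names queried are the ones referenced elsewhere in this README, and `.get()` is used because the exact layout is defined by `config/config.py` and may differ between versions.

```python
import json

# Load the template configuration and print the nodes this README refers to.
with open("config/config.json", "r", encoding="utf-8") as f:
    config = json.load(f)

print("top-level nodes   :", list(config.keys()))
# Use .get() defensively; these keys are the ones mentioned in this README.
print("dataset_config    :", config.get("dataset_config"))
print("learn_rate_config :", config.get("learn_rate_config"))
print("checkpoint options:", config.get("checkpoint_options", config.get("checkpoint_path")))
```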
## Training & Evaluation process

To train a model, the shell script `run_ascend.sh` or `run_gpu.sh` is all you need. In these scripts, the environment variables are set and the training script `train.py` under `mass` is executed.
You may start a training task on a single device or on multiple devices by assigning the options and running the command in bash:

Ascend:

```ascend
sh run_ascend.sh [--options]
```

GPU:

```gpu
sh run_gpu.sh [--options]
```

The usage of `run_ascend.sh` is shown below:

```text
Usage: run_ascend.sh [-h, --help] [-t, --task <CHAR>] [-n, --device_num <N>]
                     [-i, --device_id <N>] [-j, --hccl_json <FILE>]
                     [-c, --config <FILE>] [-o, --output <FILE>]
                     [-v, --vocab <FILE>]

options:
    -h, --help          show usage
    -t, --task          select task: CHAR, 't' for train and 'i' for inference.
    -n, --device_num    device number used for training: N, default is 1.
    -i, --device_id     device id used for training with a single device: N, 0<=N<=7, default is 0.
    -j, --hccl_json     rank table file used for training with multiple devices: FILE.
    -c, --config        configuration file as shown in the path 'mass/config': FILE.
    -o, --output        assign output file of inference: FILE.
    -v, --vocab         set the vocabulary.
    -m, --metric        set the metric.
```

Note: be sure to assign the hccl_json file when running distributed training.

The usage of `run_gpu.sh` is shown below:

```text
Usage: run_gpu.sh [-h, --help] [-t, --task <CHAR>] [-n, --device_num <N>]
                  [-i, --device_id <N>] [-c, --config <FILE>]
                  [-o, --output <FILE>] [-v, --vocab <FILE>]

options:
    -h, --help          show usage
    -t, --task          select task: CHAR, 't' for train and 'i' for inference.
    -n, --device_num    device number used for training: N, default is 1.
    -i, --device_id     device id used for training with a single device: N, 0<=N<=7, default is 0.
    -c, --config        configuration file as shown in the path 'mass/config': FILE.
    -o, --output        assign output file of inference: FILE.
    -v, --vocab         set the vocabulary.
    -m, --metric        set the metric.
```

The following command shows an example of training with 2 devices.

Ascend:

```ascend
sh run_ascend.sh --task t --device_num 2 --hccl_json /{path}/rank_table.json --config /{path}/config.json
```

Note: discontinuous device ids are not supported in `run_ascend.sh` at present; device ids in `rank_table.json` must start from 0.

GPU:

```gpu
sh run_gpu.sh --task t --device_num 2 --config /{path}/config.json
```

If you use a single device, it would be like this:

Ascend:

```ascend
sh run_ascend.sh --task t --device_num 1 --device_id 0 --config /{path}/config.json
```

GPU:

```gpu
sh run_gpu.sh --task t --device_num 1 --device_id 0 --config /{path}/config.json
```
## Weights average

```bash
python weights_average.py --input_files your_checkpoint_list --output_file model.npz
```

`input_files` is a list of your checkpoint files. To use `model.npz` as the weights, set "existed_ckpt" in `config.json` to its path:

```json
{
    ...
    "checkpoint_options": {
        "existed_ckpt": "/xxx/xxx/model.npz",
        "save_ckpt_steps": 1000,
        ...
    },
    ...
}
```
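Conceptually, the averaging is an element-wise mean of the parameters across checkpoints. The sketch below illustrates that idea (it is not the actual `weights_average.py`); it uses MindSpore's `load_checkpoint` API, and the checkpoint names in the usage comment are placeholders.

```python
import numpy as np
from mindspore.train.serialization import load_checkpoint

def average_checkpoints(ckpt_files, output_file="model.npz"):
    """Element-wise average of the parameters in several .ckpt files."""
    sums, count = {}, 0
    for ckpt in ckpt_files:
        params = load_checkpoint(ckpt)              # dict: name -> Parameter
        for name, param in params.items():
            arr = param.data.asnumpy().astype(np.float64)
            sums[name] = sums.get(name, 0.0) + arr  # accumulate in float64
        count += 1
    averaged = {name: (total / count).astype(np.float32) for name, total in sums.items()}
    np.savez(output_file, **averaged)               # save as NPZ for "existed_ckpt"

# Example usage with placeholder checkpoint names:
# average_checkpoints(["ckpt_0-48_xxx.ckpt", "ckpt_0-49_xxx.ckpt", "ckpt_0-50_xxx.ckpt"])
```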
## Learning rate scheduler

Two learning rate schedulers are provided in our model:

1. [Polynomial decay scheduler](https://towardsdatascience.com/learning-rate-schedules-and-adaptive-learning-rate-methods-for-deep-learning-2c8f433990d1).
2. [Inverse square root scheduler](https://ece.uwaterloo.ca/~dwharder/aads/Algorithms/Inverse_square_root/).

The LR scheduler can be configured in `config/config.json`.

For the polynomial decay scheduler, the config could look like:

```json
{
    ...
    "learn_rate_config": {
        "optimizer": "adam",
        "lr": 1e-4,
        "lr_scheduler": "poly",
        "poly_lr_scheduler_power": 0.5,
        "decay_steps": 10000,
        "warmup_steps": 2000,
        "min_lr": 1e-6
    },
    ...
}
```

For the inverse square root scheduler, the config could look like:

```json
{
    ...
    "learn_rate_config": {
        "optimizer": "adam",
        "lr": 1e-4,
        "lr_scheduler": "isr",
        "decay_start_step": 12000,
        "warmup_steps": 2000,
        "min_lr": 1e-6
    },
    ...
}
```

More detail about the LR schedulers can be found in `src/utils/lr_scheduler.py`.
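For intuition, the sketch below shows the two schedules under their usual textbook definitions, with a linear warmup. It is only an illustration: the exact formulas, constants and edge-case handling used by this model live in `src/utils/lr_scheduler.py` and may differ.

```python
def poly_decay_lr(step, lr, min_lr, decay_steps, warmup_steps, power):
    """Polynomial decay with linear warmup (the 'poly' scheduler, textbook form)."""
    if step < warmup_steps:
        return lr * (step + 1) / warmup_steps
    progress = min(step - warmup_steps, decay_steps) / decay_steps
    return (lr - min_lr) * (1.0 - progress) ** power + min_lr

def inverse_square_root_lr(step, lr, min_lr, decay_start_step, warmup_steps):
    """Inverse square root decay with linear warmup (the 'isr' scheduler, textbook form)."""
    if step < warmup_steps:
        return lr * (step + 1) / warmup_steps
    decay = (decay_start_step / max(step, decay_start_step)) ** 0.5
    return max(lr * decay, min_lr)

# Example with the values from the JSON snippets above.
for s in (0, 2000, 12000, 120000):
    print(s,
          poly_decay_lr(s, 1e-4, 1e-6, 10000, 2000, 0.5),
          inverse_square_root_lr(s, 1e-4, 1e-6, 12000, 2000))
```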
# Environment Requirements

## Platform

- Hardware (Ascend/GPU)
    - Prepare hardware environment with Ascend or GPU processor.
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)

## Requirements

```txt
nltk
numpy
subword-nmt
rouge
```

<https://www.mindspore.cn/tutorial/training/en/master/advanced_use/migrate_3rd_scripts.html>
# Get started

MASS pre-trains a sequence-to-sequence model by predicting masked fragments in an input sequence. Afterwards, downstream tasks such as text summarization and conversational response generation are candidates for fine-tuning the model and for inference.
Here we provide a practical example to demonstrate the basic usage of MASS for pre-training, fine-tuning a model, and the inference process. The overall process is as follows:

1. Download and process the dataset.
2. Modify `config.json` to configure the network.
3. Run a task for pre-training and fine-tuning.
4. Perform inference and validation.

## Pre-training

To pre-train a model, first configure the options in `config.json`:

- Assign `pre_train_dataset` under the `dataset_config` node to the dataset path.
- Choose the optimizer ('momentum'/'adam'/'lamb' are available).
- Assign `ckpt_prefix` and `ckpt_path` under the `checkpoint_path` node to save the model files.
- Set other arguments, including dataset and network configurations.
- If you already have a trained model, assign `existed_ckpt` to the checkpoint file.

If you use an Ascend chip, run the shell script `run_ascend.sh` as follows:

```ascend
sh run_ascend.sh -t t -n 1 -i 1 -c /mass/config/config.json
```

You can also run the shell script `run_gpu.sh` on GPU as follows:

```gpu
sh run_gpu.sh -t t -n 1 -i 1 -c /mass/config/config.json
```

The logs and output files are generated under the path `./train_mass_*/`, and the model file is saved under the path assigned in the `config/config.json` file.
## Fine-tuning

To fine-tune a model, first configure the options in `config.json`:

- Assign `fine_tune_dataset` under the `dataset_config` node to the dataset path.
- Assign `existed_ckpt` under the `checkpoint_path` node to the model file generated by pre-training.
- Choose the optimizer ('momentum'/'adam'/'lamb' are available).
- Assign `ckpt_prefix` and `ckpt_path` under the `checkpoint_path` node to save the model files.
- Set other arguments, including dataset and network configurations.

If you use an Ascend chip, run the shell script `run_ascend.sh` as follows:

```ascend
sh run_ascend.sh -t t -n 1 -i 1 -c config/config.json
```

You can also run the shell script `run_gpu.sh` on GPU as follows:

```gpu
sh run_gpu.sh -t t -n 1 -i 1 -c config/config.json
```

The logs and output files are generated under the path `./train_mass_*/`, and the model file is saved under the path assigned in the `config/config.json` file.
## Inference

If you need to use the trained model to perform inference on multiple hardware platforms, such as GPU, Ascend 910 or Ascend 310, refer to this [link](https://www.mindspore.cn/tutorial/training/en/master/advanced_use/migrate_3rd_scripts.html).
For inference, first configure the options in `config.json`:

- Assign `test_dataset` under the `dataset_config` node to the dataset path.
- Assign `existed_ckpt` under the `checkpoint_path` node to the model file produced by fine-tuning.
- Choose the optimizer ('momentum'/'adam'/'lamb' are available).
- Assign `ckpt_prefix` and `ckpt_path` under the `checkpoint_path` node to save the model files.
- Set other arguments, including dataset and network configurations.

If you use an Ascend chip, run the shell script `run_ascend.sh` as follows:

```bash
sh run_ascend.sh -t i -n 1 -i 1 -c config/config.json -o {outputfile}
```

You can also run the shell script `run_gpu.sh` on GPU as follows:

```gpu
sh run_gpu.sh -t i -n 1 -i 1 -c config/config.json -o {outputfile}
```
# Performance

## Results

### Fine-Tuning on Text Summarization

The comparison between MASS and two other pre-training methods, in terms of ROUGE score on the text summarization task
with 3.8M training data, is as follows:

| Method         | RG-1(F) | RG-2(F) | RG-L(F) |
|:---------------|:--------|:--------|:--------|
| MASS           | Ongoing | Ongoing | Ongoing |

### Fine-Tuning on Conversational Response Generation

The comparison between MASS and other baseline methods, in terms of PPL on the Cornell Movie Dialog corpus, is as follows:

| Method             | Data = 10K | Data = 110K |
|--------------------|------------|-------------|
| MASS               | Ongoing    | Ongoing     |

### Training Performance

| Parameters                 | Masked Sequence to Sequence Pre-training for Language Generation |
|:---------------------------|:------------------------------------------------------------------|
| Model Version              | v1                                                                 |
| Resource                   | Ascend 910; CPU 2.60GHz, 192 cores; memory 755 GB                  |
| Uploaded Date              | 05/24/2020                                                         |
| MindSpore Version          | 1.0.0                                                              |
| Dataset                    | News Crawl 2007-2017 English monolingual corpus, Gigaword corpus, Cornell Movie Dialog corpus |
| Training Parameters        | Epoch=50, steps=XXX, batch_size=192, lr=1e-4                       |
| Optimizer                  | Adam                                                               |
| Loss Function              | Label smoothed cross-entropy criterion                             |
| Outputs                    | Sentence and probability                                           |
| Loss                       | Lower than 2                                                       |
| Accuracy                   | For conversational response, ppl=23.52; for text summarization, RG-1=29.79. |
| Speed                      | 611.45 sentences/s                                                 |
| Total time                 | --/--                                                              |
| Params (M)                 | 44.6M                                                              |

### Inference Performance

| Parameters                 | Masked Sequence to Sequence Pre-training for Language Generation |
|:---------------------------|:--------------------------------------------------------------|
| Model Version              | V1                                                             |
| Resource                   | Ascend 910                                                     |
| Uploaded Date              | 05/24/2020                                                     |
| MindSpore Version          | 1.0.0                                                          |
| Dataset                    | Gigaword corpus, Cornell Movie Dialog corpus                   |
| batch_size                 | ---                                                            |
| Outputs                    | Sentence and probability                                       |
| Accuracy                   | ppl=23.52 for conversational response, RG-1=29.79 for text summarization. |
| Speed                      | ---- sentences/s                                               |
| Total time                 | --/--                                                          |
# Description of random situation

The MASS model contains dropout operations. If you want to disable dropout, set the related `dropout_rate` to 0 in `config/config.json`.

# Others

The model has been validated on Ascend and GPU environments, but has not been validated on CPU.

# ModelZoo Homepage

[Link](https://gitee.com/mindspore/mindspore/tree/master/model_zoo)