# Contents

- [Thinking Path Re-Ranker](#thinking-path-re-ranker)
- [Model Architecture](#model-architecture)
- [Dataset](#dataset)
- [Features](#features)
    - [Mixed Precision](#mixed-precision)
- [Environment Requirements](#environment-requirements)
- [Quick Start](#quick-start)
- [Script Description](#script-description)
    - [Script and Sample Code](#script-and-sample-code)
    - [Script Parameters](#script-parameters)
    - [Evaluation Process](#evaluation-process)
        - [Evaluation](#evaluation)
- [Model Description](#model-description)
    - [Performance](#performance)
- [Description of random situation](#description-of-random-situation)
- [ModelZoo Homepage](#modelzoo-homepage)

# [Thinking Path Re-Ranker](#contents)

Thinking Path Re-Ranker (TPRR) was proposed in 2021 by Huawei's Poisson Lab & Parallel Distributed Computing Lab. By combining retriever, re-ranker and reader modules, TPRR achieves excellent performance on open-domain multi-hop question answering, and it has won first place on the official HotpotQA leaderboard. This is an example of evaluating TPRR on the HotpotQA dataset with MindSpore. It is also the first open-source release of TPRR.

# [Model Architecture](#contents)

Specifically, TPRR contains three main modules. The first is the retriever, which iteratively generates document sequences (paths) for each hop. The second is the re-ranker, which selects the best path from the candidate paths generated by the retriever. The last is the reader, which extracts answer spans from the selected path. A toy sketch of this pipeline is given below.
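
The following sketch illustrates the retrieve, re-rank, read flow. Every function here is a hypothetical stand-in scored by simple word overlap, not this repository's neural modules:

```python
# Toy sketch of the retrieve -> re-rank -> read pipeline. Everything here
# is an illustrative stand-in scored by word overlap, NOT the actual
# neural modules of this repository.

def retrieve(question, corpus, num_hops=2, topk=2):
    """Grow candidate document paths hop by hop."""
    paths = [[]]
    q_words = set(question.split())
    for _ in range(num_hops):
        grown = []
        for path in paths:
            candidates = sorted(
                (d for d in corpus if d not in path),
                key=lambda d: len(q_words & set(d.split())),
                reverse=True,
            )
            grown += [path + [d] for d in candidates[:topk]]
        paths = grown
    return paths

def rerank(question, paths):
    """Pick the single best path among the candidates."""
    q_words = set(question.split())
    return max(paths, key=lambda p: sum(len(q_words & set(d.split())) for d in p))

def read(question, path):
    """Extract an 'answer': here, just the best-overlapping document."""
    q_words = set(question.split())
    return max(path, key=lambda d: len(q_words & set(d.split())))

corpus = ["Paris is the capital of France", "France is a country in Europe"]
question = "What is the capital of France"
print(read(question, rerank(question, retrieve(question, corpus))))
```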

# [Dataset](#contents)

The retriever dataset consists of three parts:

- Wikipedia data: the 2017 English Wikipedia dump with bidirectional hyperlinks.
- dev data: HotpotQA full-wiki setting dev data with 7398 question-answer pairs.
- dev tf-idf data: the candidates for each question in the dev data, i.e. the top-500 paragraphs retrieved from the 5M Wikipedia paragraphs by TF-IDF.

The re-ranker dataset consists of two parts:

- Wikipedia data: the 2017 English Wikipedia dump.
- dev data: HotpotQA full-wiki setting dev data with 7398 question-answer pairs.
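
For a quick sanity check of the dev data, the JSON can be inspected directly. The file name below is an assumption; the field names follow the public HotpotQA data format:

```python
# Peek at the HotpotQA full-wiki dev file. The file name is assumed;
# the fields follow the published HotpotQA format.
import json

with open("hotpot_dev_fullwiki_v1.json", encoding="utf-8") as f:
    dev = json.load(f)

print(len(dev))  # should line up with the question-answer pairs noted above
sample = dev[0]
print(sample["question"])
print(sample["answer"])
# Supporting facts are (paragraph title, sentence index) pairs.
print(sample["supporting_facts"][:2])
# Context is a list of [title, list-of-sentences] paragraphs.
print(sample["context"][0][0])
```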

# [Features](#contents)

## [Mixed Precision](#contents)

To utilize the strong computation power of the Ascend chip and accelerate evaluation, mixed-precision evaluation is used. MindSpore can process FP32 inputs while running operators in FP16. In the TPRR example, the model runs the matmul calculation parts in FP16 mode.
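
As an illustration of this pattern (a minimal sketch, not the actual TPRR model code), a MindSpore cell can keep FP32 inputs while running its matmul-dominated layer in FP16 via `to_float`:

```python
# Minimal mixed-precision sketch: run the matmul-heavy Dense layer in
# FP16 and cast the result back to FP32. Illustrative only; the actual
# TPRR cells differ.
import mindspore.nn as nn
import mindspore.ops as ops
import mindspore.common.dtype as mstype

class Scorer(nn.Cell):
    def __init__(self, in_len=768, out_len=1):
        super().__init__()
        # to_float(float16) makes this cell compute its matmul in FP16.
        self.dense = nn.Dense(in_len, out_len).to_float(mstype.float16)
        self.cast = ops.Cast()

    def construct(self, x):
        # The FP16 cell accepts the FP32 input; cast the output back.
        return self.cast(self.dense(x), mstype.float32)
```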

# [Environment Requirements](#contents)

- Hardware (Ascend)
- Framework
    - [MindSpore](https://www.mindspore.cn/install/en)
- For more information, please check the resources below:
    - [MindSpore Tutorials](https://www.mindspore.cn/tutorial/training/en/master/index.html)
    - [MindSpore Python API](https://www.mindspore.cn/doc/api_python/en/master/index.html)

# [Quick Start](#contents)

After installing MindSpore via the official website and generating the dataset correctly, you can start evaluation as follows.

- running on Ascend

```shell
# run the evaluation example with the HotpotQA dev dataset
pip install transformers
sh run_eval_ascend.sh
sh run_eval_ascend_reranker_reader.sh
```

# [Script Description](#contents)

## [Script and Sample Code](#contents)

```shell
.
└─tprr
  ├─README.md
  ├─scripts
  | ├─run_eval_ascend.sh                  # Launch retriever evaluation on Ascend
  | └─run_eval_ascend_reranker_reader.sh  # Launch re-ranker and reader evaluation on Ascend
  |
  ├─src
  | ├─build_reranker_data.py              # Build re-ranker data from retriever results
  | ├─config.py                           # Evaluation configurations for retriever
  | ├─converted_bert.py                   # BERT model for TPRR
  | ├─hotpot_evaluate_v1.py               # HotpotQA evaluation script
  | ├─onehop.py                           # One-hop model of retriever
  | ├─process_data.py                     # Data preprocessing for retriever
  | ├─reader.py                           # Reader model
  | ├─albert.py                           # ALBERT-xxlarge model
  | ├─reader_downstream.py                # Downstream module of reader model
  | ├─reader_eval.py                      # Reader evaluation script
  | ├─rerank_and_reader_data_generator.py # Data generator for re-ranker and reader
  | ├─rerank_and_reader_utils.py          # Utils for re-ranker and reader
  | ├─rerank_downstream.py                # Downstream module of re-ranker model
  | ├─reranker.py                         # Re-ranker model
  | ├─reranker_eval.py                    # Re-ranker evaluation script
  | ├─twohop.py                           # Two-hop model of retriever
  | └─utils.py                            # Utils for retriever
  |
  ├─retriever_eval.py                     # Evaluation net for retriever
  └─reranker_and_reader_eval.py           # Evaluation net for re-ranker and reader
```

## [Script Parameters](#contents)

Parameters for retriever evaluation can be set in config.py.

- config for TPRR retriever

```python
"q_len": 64,        # Max query length
"d_len": 192,       # Max doc length
"s_len": 448,       # Max sequence length
"in_len": 768,      # Input dim
"out_len": 1,       # Output dim
"num_docs": 500,    # Number of candidate docs
"topk": 8,          # Top k
"onehop_num": 8     # Number of one-hop docs used as two-hop neighbors
```

See config.py for more configuration options.

Parameters for re-ranker and reader evaluation can be passed directly at execution time, as shown in the sketch after this list.

- parameters for TPRR re-ranker and reader

```python
"seq_len": 512,             # Sequence length
"rerank_batch_size": 32,    # Batch size for re-ranker evaluation
"reader_batch_size": 448,   # Batch size for reader evaluation
"sp_threshold": 8           # Threshold for picking supporting sentences
```

See config.py for more configuration options.
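
A hypothetical sketch of passing these parameters at launch; the flag names are assumed to mirror the parameter names above, and the real scripts may differ:

```python
# Hypothetical argument parsing for re-ranker/reader evaluation, e.g.
#   python reranker_and_reader_eval.py --rerank_batch_size 32 --sp_threshold 8
# Flag names are assumptions mirroring the parameters listed above.
import argparse

parser = argparse.ArgumentParser(description="TPRR re-ranker and reader evaluation")
parser.add_argument("--seq_len", type=int, default=512, help="sequence length")
parser.add_argument("--rerank_batch_size", type=int, default=32, help="re-ranker batch size")
parser.add_argument("--reader_batch_size", type=int, default=448, help="reader batch size")
parser.add_argument("--sp_threshold", type=int, default=8, help="supporting-sentence threshold")
args = parser.parse_args()
```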

## [Evaluation Process](#contents)

### Evaluation

- Retriever evaluation on Ascend

```shell
sh run_eval_ascend.sh
```

Evaluation results are stored in the scripts path, in a folder whose name begins with "eval_tr". You can find results like the following in the log:

```text
###step###: 0
val: 0
count: 1
true count: 0
PEM: 0.0
...
###step###: 7396
val: 6796
count: 7397
true count: 6924
PEM: 0.9187508449371367
true top8 PEM: 0.9815135759676488
evaluation time (h): 20.155506462653477
```
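
Reading the final entry, PEM appears to be `val / count` and "true top8 PEM" appears to be `val / true count`; this interpretation is inferred from the numbers above, not from the evaluation source code:

```python
# Recomputing the final log entry above (interpretation inferred from
# the numbers, not from the evaluation source).
val, count, true_count = 6796, 7397, 6924
print(val / count)       # ~0.91875, matches PEM
print(val / true_count)  # ~0.98151, matches true top8 PEM
```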

- Re-ranker and reader evaluation on Ascend

Use the output of the retriever as the input of the re-ranker:

```shell
sh run_eval_ascend_reranker_reader.sh
```

Evaluation results are stored in the scripts path, in a folder whose name begins with "eval". You can find results like the following in the log:

```text
total top1 pem: 0.8803511141120864
...
em: 0.67440918298447
f1: 0.8025625656569652
prec: 0.8292800393689271
recall: 0.8136908451841731
sp_em: 0.6009453072248481
sp_f1: 0.844555664157302
sp_prec: 0.8640844345841021
sp_recall: 0.8446123918845106
joint_em: 0.4537474679270763
joint_f1: 0.715119580346802
joint_prec: 0.7540052057184267
joint_recall: 0.7250240424067661
```
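
The answer metrics follow the HotpotQA evaluation in hotpot_evaluate_v1.py: em is exact match after answer normalization, while f1, prec and recall are token-level. A simplified sketch of the token-level F1 (normalization omitted for brevity):

```python
# Token-level F1 as in the HotpotQA evaluation, simplified: answer
# normalization (lowercasing, punctuation/article stripping) is omitted.
from collections import Counter

def f1_score(prediction, ground_truth):
    pred_tokens = prediction.split()
    gold_tokens = ground_truth.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0, 0.0, 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    f1 = 2 * precision * recall / (precision + recall)
    return f1, precision, recall

print(f1_score("the Eiffel Tower", "Eiffel Tower"))  # (0.8, 0.667, 1.0)
```

The sp_* metrics score the predicted supporting sentences the same way, and the joint metrics combine the two per example, which is why joint_f1 is lower than either f1 or sp_f1.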

# [Model Description](#contents)

## [Performance](#contents)

### Inference Performance

| Parameter         | Ascend                      |
| ----------------- | --------------------------- |
| Model Version     | TPRR                        |
| Resource          | Ascend 910; OS Euler2.8     |
| Uploaded Date     | 03/12/2021 (month/day/year) |
| MindSpore Version | 1.2.0                       |
| Dataset           | HotpotQA                    |
| Batch_size        | 1                           |
| Output            | inference path              |
| PEM               | 0.9188                      |
| total top1 pem    | 0.88                        |
| joint_f1          | 0.7151                      |

# [Description of random situation](#contents)

There is no randomness in the evaluation process.

# [ModelZoo Homepage](#contents)

Please check the official [homepage](http://gitee.com/mindspore/mindspore/tree/master/model_zoo).