
tutorial_1.ipynb 24 kB

{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"# Detailed Guide"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Loading Data"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'raw_sentence': A series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'label': 1 type=str}"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from fastNLP.io import CSVLoader\n",
"\n",
"loader = CSVLoader(headers=('raw_sentence', 'label'), sep='\\t')\n",
"dataset = loader.load(\"./sample_data/tutorial_sample_dataset.csv\")\n",
"dataset[0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"An Instance represents a single sample, composed of one or more fields (attributes/features); each field has a name and a value.\n",
"\n",
"The fields an Instance contains are defined when it is constructed, using the \"field_name=field_value\" syntax."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'raw_sentence': fake data type=str,\n",
"'label': 0 type=str}"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from fastNLP import Instance\n",
"\n",
"dataset.append(Instance(raw_sentence='fake data', label='0'))\n",
"dataset[-1]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Data Processing"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'raw_sentence': A series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'label': 1 type=str,\n",
"'sentence': a series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'words': [4, 1, 6, 1, 1, 2, 1, 11, 153, 10, 28, 17, 2, 1, 10, 1, 28, 17, 2, 1, 5, 154, 6, 149, 1, 1, 23, 1, 6, 149, 1, 8, 30, 6, 4, 35, 3] type=list,\n",
"'target': 1 type=int}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from fastNLP import Vocabulary\n",
"\n",
"# Lowercase all letters and turn each sentence into a sequence of words\n",
"dataset.apply(lambda x: x['raw_sentence'].lower(), new_field_name='sentence')\n",
"dataset.apply_field(lambda x: x.split(), field_name='sentence', new_field_name='words')\n",
"\n",
"# Use the Vocabulary class to count words and convert the word sequences into index sequences\n",
"vocab = Vocabulary(min_freq=2).from_dataset(dataset, field_name='words')\n",
"vocab.index_dataset(dataset, field_name='words', new_field_name='words')\n",
"\n",
"# Convert the labels to integers\n",
"dataset.apply(lambda x: int(x['label']), new_field_name='target')\n",
"dataset[0]"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'raw_sentence': A series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'label': 1 type=str,\n",
"'sentence': a series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'words': [4, 1, 6, 1, 1, 2, 1, 11, 153, 10, 28, 17, 2, 1, 10, 1, 28, 17, 2, 1, 5, 154, 6, 149, 1, 1, 23, 1, 6, 149, 1, 8, 30, 6, 4, 35, 3] type=list,\n",
"'target': 1 type=int,\n",
"'seq_len': 37 type=int}\n"
]
}
],
"source": [
"# Add sequence length information\n",
"dataset.apply_field(lambda x: len(x), field_name='words', new_field_name='seq_len')\n",
"print(dataset[0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using the built-in module CNNText\n",
"Set the field names to those the built-in module expects"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"CNNText(\n",
"  (embed): Embedding(\n",
"    177, 50\n",
"    (dropout): Dropout(p=0.0)\n",
"  )\n",
"  (conv_pool): ConvMaxpool(\n",
"    (convs): ModuleList(\n",
"      (0): Conv1d(50, 3, kernel_size=(3,), stride=(1,), padding=(2,))\n",
"      (1): Conv1d(50, 4, kernel_size=(4,), stride=(1,), padding=(2,))\n",
"      (2): Conv1d(50, 5, kernel_size=(5,), stride=(1,), padding=(2,))\n",
"    )\n",
"  )\n",
"  (dropout): Dropout(p=0.1)\n",
"  (fc): Linear(in_features=12, out_features=5, bias=True)\n",
")"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from fastNLP.models import CNNText\n",
"\n",
"model_cnn = CNNText((len(vocab), 50), num_classes=5, padding=2, dropout=0.1)\n",
"model_cnn"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When using a built-in model, we should take care to rename the fields to match the input and output names the model expects."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"words\n",
"seq_len\n",
"target\n"
]
}
],
"source": [
"from fastNLP import Const\n",
"\n",
"dataset.rename_field('words', Const.INPUT)\n",
"dataset.rename_field('seq_len', Const.INPUT_LEN)\n",
"dataset.rename_field('target', Const.TARGET)\n",
"\n",
"dataset.set_input(Const.INPUT, Const.INPUT_LEN)\n",
"dataset.set_target(Const.TARGET)\n",
"\n",
"print(Const.INPUT)\n",
"print(Const.INPUT_LEN)\n",
"print(Const.TARGET)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Splitting into train/dev/test sets"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"scrolled": true
},
"outputs": [
{
"data": {
"text/plain": [
"(64, 7, 7)"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"train_dev_data, test_data = dataset.split(0.1)\n",
"train_data, dev_data = train_dev_data.split(0.1)\n",
"len(train_data), len(dev_data), len(test_data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Training (model_cnn)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Loss\n",
"Training a model requires a loss function.\n",
"\n",
"Below is the cross-entropy loss commonly used for classification. Note its **initialization parameters**.\n",
"\n",
"The pred parameter names a key in the dict returned by the model's forward method, here \"pred\" (Const.OUTPUT).\n",
"\n",
"The target parameter names the dataset field that holds the labels, here \"target\" (Const.TARGET)."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"from fastNLP import CrossEntropyLoss\n",
"\n",
"# loss = CrossEntropyLoss()\n",
"# equivalent to\n",
"loss = CrossEntropyLoss(pred=Const.OUTPUT, target=Const.TARGET)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Metric\n",
"Define the evaluation metric.\n",
"\n",
"Here we use accuracy. The parameter naming convention is the same as above.\n",
"\n",
"The pred parameter names a key in the dict returned by the model's predict method, here \"pred\" (Const.OUTPUT).\n",
"\n",
"The target parameter names the dataset field that holds the labels, here \"target\" (Const.TARGET)."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"from fastNLP import AccuracyMetric\n",
"\n",
"# metrics = AccuracyMetric()\n",
"# equivalent to\n",
"metrics = AccuracyMetric(pred=Const.OUTPUT, target=Const.TARGET)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"input fields after batch(if batch size is 2):\n",
"\twords: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2, 16]) \n",
"\tseq_len: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2]) \n",
"target fields after batch(if batch size is 2):\n",
"\ttarget: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2]) \n",
"\n",
"training epochs started 2019-05-12-21-38-34\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"HBox(children=(IntProgress(value=0, layout=Layout(flex='2'), max=20), HTML(value='')), layout=Layout(display='…"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Evaluation at Epoch 1/10. Step:2/20. AccuracyMetric: acc=0.285714\n",
"\n",
"Evaluation at Epoch 2/10. Step:4/20. AccuracyMetric: acc=0.428571\n",
"\n",
"Evaluation at Epoch 3/10. Step:6/20. AccuracyMetric: acc=0.428571\n",
"\n",
"Evaluation at Epoch 4/10. Step:8/20. AccuracyMetric: acc=0.428571\n",
"\n",
"Evaluation at Epoch 5/10. Step:10/20. AccuracyMetric: acc=0.428571\n",
"\n",
"Evaluation at Epoch 6/10. Step:12/20. AccuracyMetric: acc=0.428571\n",
"\n",
"Evaluation at Epoch 7/10. Step:14/20. AccuracyMetric: acc=0.428571\n",
"\n",
"Evaluation at Epoch 8/10. Step:16/20. AccuracyMetric: acc=0.857143\n",
"\n",
"Evaluation at Epoch 9/10. Step:18/20. AccuracyMetric: acc=0.857143\n",
"\n",
"Evaluation at Epoch 10/10. Step:20/20. AccuracyMetric: acc=0.857143\n",
"\n",
"\n",
"In Epoch:8/Step:16, got best dev performance:AccuracyMetric: acc=0.857143\n",
"Reloaded the best model.\n"
]
},
{
"data": {
"text/plain": [
"{'best_eval': {'AccuracyMetric': {'acc': 0.857143}},\n",
" 'best_epoch': 8,\n",
" 'best_step': 16,\n",
" 'seconds': 0.21}"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from fastNLP import Trainer\n",
"\n",
"trainer = Trainer(model=model_cnn, train_data=train_data, dev_data=dev_data, loss=loss, metrics=metrics)\n",
"trainer.train()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing (model_cnn)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[tester] \n",
"AccuracyMetric: acc=0.857143\n"
]
},
{
"data": {
"text/plain": [
"{'AccuracyMetric': {'acc': 0.857143}}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from fastNLP import Tester\n",
"\n",
"tester = Tester(test_data, model_cnn, metrics=AccuracyMetric())\n",
"tester.test()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Writing your own model\n",
"\n",
"PyTorch models are fully supported; the only difference from plain PyTorch is that forward returns a dict, which must contain at least the key \"pred\"."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn as nn\n",
"\n",
"class LSTMText(nn.Module):\n",
"    def __init__(self, vocab_size, embedding_dim, output_dim, hidden_dim=64, num_layers=2, dropout=0.5):\n",
"        super().__init__()\n",
"\n",
"        self.embedding = nn.Embedding(vocab_size, embedding_dim)\n",
"        self.lstm = nn.LSTM(embedding_dim, hidden_dim, num_layers=num_layers, bidirectional=True, dropout=dropout)\n",
"        self.fc = nn.Linear(hidden_dim * 2, output_dim)\n",
"        self.dropout = nn.Dropout(dropout)\n",
"\n",
"    def forward(self, words):\n",
"        # (input) words : (batch_size, seq_len)\n",
"        words = words.permute(1, 0)\n",
"        # words : (seq_len, batch_size)\n",
"\n",
"        embedded = self.dropout(self.embedding(words))\n",
"        # embedded : (seq_len, batch_size, embedding_dim)\n",
"        output, (hidden, cell) = self.lstm(embedded)\n",
"        # output: (seq_len, batch_size, hidden_dim * 2)\n",
"        # hidden: (num_layers * 2, batch_size, hidden_dim)\n",
"        # cell: (num_layers * 2, batch_size, hidden_dim)\n",
"\n",
"        # concatenate the final forward and backward hidden states\n",
"        hidden = torch.cat((hidden[-2, :, :], hidden[-1, :, :]), dim=1)\n",
"        hidden = self.dropout(hidden)\n",
"        # hidden: (batch_size, hidden_dim * 2)\n",
"\n",
"        pred = self.fc(hidden)\n",
"        # pred: (batch_size, output_dim)\n",
"        return {\"pred\": pred}"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"input fields after batch(if batch size is 2):\n",
"\twords: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2, 16]) \n",
"\tseq_len: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2]) \n",
"target fields after batch(if batch size is 2):\n",
"\ttarget: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2]) \n",
"\n",
"training epochs started 2019-05-12-21-38-36\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"HBox(children=(IntProgress(value=0, layout=Layout(flex='2'), max=20), HTML(value='')), layout=Layout(display='…"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Evaluation at Epoch 1/10. Step:2/20. AccuracyMetric: acc=0.571429\n",
"\n",
"Evaluation at Epoch 2/10. Step:4/20. AccuracyMetric: acc=0.571429\n",
"\n",
"Evaluation at Epoch 3/10. Step:6/20. AccuracyMetric: acc=0.571429\n",
"\n",
"Evaluation at Epoch 4/10. Step:8/20. AccuracyMetric: acc=0.571429\n",
"\n",
"Evaluation at Epoch 5/10. Step:10/20. AccuracyMetric: acc=0.714286\n",
"\n",
"Evaluation at Epoch 6/10. Step:12/20. AccuracyMetric: acc=0.857143\n",
"\n",
"Evaluation at Epoch 7/10. Step:14/20. AccuracyMetric: acc=0.857143\n",
"\n",
"Evaluation at Epoch 8/10. Step:16/20. AccuracyMetric: acc=0.857143\n",
"\n",
"Evaluation at Epoch 9/10. Step:18/20. AccuracyMetric: acc=0.857143\n",
"\n",
"Evaluation at Epoch 10/10. Step:20/20. AccuracyMetric: acc=0.857143\n",
"\n",
"\n",
"In Epoch:6/Step:12, got best dev performance:AccuracyMetric: acc=0.857143\n",
"Reloaded the best model.\n"
]
},
{
"data": {
"text/plain": [
"{'best_eval': {'AccuracyMetric': {'acc': 0.857143}},\n",
" 'best_epoch': 6,\n",
" 'best_step': 12,\n",
" 'seconds': 2.15}"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model_lstm = LSTMText(len(vocab), 50, 5)\n",
"trainer = Trainer(model=model_lstm, train_data=train_data, dev_data=dev_data, loss=loss, metrics=metrics)\n",
"trainer.train()"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[tester] \n",
"AccuracyMetric: acc=0.857143\n"
]
},
{
"data": {
"text/plain": [
"{'AccuracyMetric': {'acc': 0.857143}}"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tester = Tester(test_data, model_lstm, metrics=AccuracyMetric())\n",
"tester.test()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Writing your own training loop with Batch"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Epoch 0 Avg Loss: 3.11 18ms\n",
"Epoch 1 Avg Loss: 2.88 30ms\n",
"Epoch 2 Avg Loss: 2.69 42ms\n",
"Epoch 3 Avg Loss: 2.47 54ms\n",
"Epoch 4 Avg Loss: 2.38 67ms\n",
"Epoch 5 Avg Loss: 2.10 78ms\n",
"Epoch 6 Avg Loss: 2.06 91ms\n",
"Epoch 7 Avg Loss: 1.92 103ms\n",
"Epoch 8 Avg Loss: 1.91 114ms\n",
"Epoch 9 Avg Loss: 1.76 126ms\n",
"[tester] \n",
"AccuracyMetric: acc=0.571429\n"
]
},
{
"data": {
"text/plain": [
"{'AccuracyMetric': {'acc': 0.571429}}"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from fastNLP import BucketSampler\n",
"from fastNLP import Batch\n",
"import torch\n",
"import time\n",
"\n",
"model = CNNText((len(vocab), 50), num_classes=5, padding=2, dropout=0.1)\n",
"\n",
"def train(epoch, data):\n",
"    optim = torch.optim.Adam(model.parameters(), lr=0.001)\n",
"    lossfunc = torch.nn.CrossEntropyLoss()\n",
"    batch_size = 32\n",
"\n",
"    # Create a Batch over the DataSet, specifying the batch_size and how batches are drawn:\n",
"    # sequentially (Sequential), randomly (Random), or grouping similar lengths into one batch (Bucket)\n",
"    train_sampler = BucketSampler(batch_size=batch_size, seq_len_field_name='seq_len')\n",
"    train_batch = Batch(batch_size=batch_size, dataset=data, sampler=train_sampler)\n",
"\n",
"    start_time = time.time()\n",
"    for i in range(epoch):\n",
"        loss_list = []\n",
"        for batch_x, batch_y in train_batch:\n",
"            optim.zero_grad()\n",
"            output = model(batch_x['words'])\n",
"            loss = lossfunc(output['pred'], batch_y['target'])\n",
"            loss.backward()\n",
"            optim.step()\n",
"            loss_list.append(loss.item())\n",
"        print('Epoch {:d} Avg Loss: {:.2f}'.format(i, sum(loss_list) / len(loss_list)), end=\" \")\n",
"        print('{:d}ms'.format(round((time.time() - start_time) * 1000)))\n",
"        loss_list.clear()\n",
"\n",
"train(10, train_data)\n",
"tester = Tester(test_data, model, metrics=AccuracyMetric())\n",
"tester.test()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Using Callback to customize training behavior"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"input fields after batch(if batch size is 2):\n",
"\twords: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2, 16]) \n",
"\tseq_len: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2]) \n",
"target fields after batch(if batch size is 2):\n",
"\ttarget: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2]) \n",
"\n",
"training epochs started 2019-05-12-21-38-40\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"HBox(children=(IntProgress(value=0, layout=Layout(flex='2'), max=20), HTML(value='')), layout=Layout(display='…"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Evaluation at Epoch 1/10. Step:2/20. AccuracyMetric: acc=0.285714\n",
"\n",
"Sum Time: 51ms\n",
"\n",
"\n",
"Evaluation at Epoch 2/10. Step:4/20. AccuracyMetric: acc=0.285714\n",
"\n",
"Sum Time: 69ms\n",
"\n",
"\n",
"Evaluation at Epoch 3/10. Step:6/20. AccuracyMetric: acc=0.285714\n",
"\n",
"Sum Time: 91ms\n",
"\n",
"\n",
"Evaluation at Epoch 4/10. Step:8/20. AccuracyMetric: acc=0.571429\n",
"\n",
"Sum Time: 107ms\n",
"\n",
"\n",
"Evaluation at Epoch 5/10. Step:10/20. AccuracyMetric: acc=0.571429\n",
"\n",
"Sum Time: 125ms\n",
"\n",
"\n",
"Evaluation at Epoch 6/10. Step:12/20. AccuracyMetric: acc=0.571429\n",
"\n",
"Sum Time: 142ms\n",
"\n",
"\n",
"Evaluation at Epoch 7/10. Step:14/20. AccuracyMetric: acc=0.571429\n",
"\n",
"Sum Time: 158ms\n",
"\n",
"\n",
"Evaluation at Epoch 8/10. Step:16/20. AccuracyMetric: acc=0.571429\n",
"\n",
"Sum Time: 176ms\n",
"\n",
"\n",
"Evaluation at Epoch 9/10. Step:18/20. AccuracyMetric: acc=0.714286\n",
"\n",
"Sum Time: 193ms\n",
"\n",
"\n",
"Evaluation at Epoch 10/10. Step:20/20. AccuracyMetric: acc=0.857143\n",
"\n",
"Sum Time: 212ms\n",
"\n",
"\n",
"\n",
"In Epoch:10/Step:20, got best dev performance:AccuracyMetric: acc=0.857143\n",
"Reloaded the best model.\n"
]
},
{
"data": {
"text/plain": [
"{'best_eval': {'AccuracyMetric': {'acc': 0.857143}},\n",
" 'best_epoch': 10,\n",
" 'best_step': 20,\n",
" 'seconds': 0.2}"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from fastNLP import Callback\n",
"\n",
"start_time = time.time()\n",
"\n",
"class MyCallback(Callback):\n",
"    def on_epoch_end(self):\n",
"        print('Sum Time: {:d}ms\\n\\n'.format(round((time.time() - start_time) * 1000)))\n",
"\n",
"\n",
"model = CNNText((len(vocab), 50), num_classes=5, padding=2, dropout=0.1)\n",
"trainer = Trainer(model=model, train_data=train_data, dev_data=dev_data,\n",
"                  loss=CrossEntropyLoss(), metrics=AccuracyMetric(), callbacks=[MyCallback()])\n",
"trainer.train()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 1
}