
fastnlp_10min_tutorial.ipynb

{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"fastNLP 10-Minute Tutorial\n",
"-------\n",
"\n",
"fastNLP provides convenient utilities for data preprocessing, model training, and testing."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you have not installed fastNLP via pip yet, you can run the following to make the local module importable."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"sys.path.append(\"../\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"DataSet & Instance\n",
"------\n",
"\n",
"fastNLP uses DataSet and Instance to store and process data. A DataSet represents a dataset and an Instance represents a single sample; a DataSet holds multiple Instances, and each Instance can store arbitrary user-defined fields.\n",
"\n",
"There are also read_* methods that make it easy to load data from files into a DataSet."
]
},
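{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before reading from a file, here is a minimal sketch of building a DataSet directly from in-memory lists. It assumes the DataSet constructor accepts a dict mapping field names to equal-length lists; the field names toy_x/toy_y are made up for illustration."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from fastNLP import DataSet\n",
"\n",
"# Hypothetical toy fields; each list supplies one value per Instance\n",
"toy_ds = DataSet({'toy_x': ['a b c', 'd e'], 'toy_y': ['0', '1']})\n",
"print(len(toy_ds))  # expected: 2\n",
"print(toy_ds[0])    # an Instance with fields toy_x and toy_y"
]
},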
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"77\n"
]
}
],
"source": [
"from fastNLP import DataSet\n",
"from fastNLP import Instance\n",
"\n",
"# Load data from a CSV file into a DataSet\n",
"dataset = DataSet.read_csv('sample_data/tutorial_sample_dataset.csv', headers=('raw_sentence', 'label'), sep='\\t')\n",
"print(len(dataset))"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'raw_sentence': A series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'label': 1 type=str}\n",
"{'raw_sentence': The plot is romantic comedy boilerplate from start to finish . type=str,\n",
"'label': 2 type=str}\n"
]
}
],
"source": [
"# Use an integer index [k] to get the k-th sample\n",
"print(dataset[0])\n",
"\n",
"# Negative indices also work\n",
"print(dataset[-3])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instance\n",
"An Instance represents one sample. It consists of one or more fields (attributes/features), each with a name and a value.\n",
"\n",
"The fields of an Instance are defined when it is initialized, using the \"field_name=field_value\" syntax."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'raw_sentence': fake data type=str,\n",
"'label': 0 type=str}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# DataSet.append(Instance) adds a new sample\n",
"dataset.append(Instance(raw_sentence='fake data', label='0'))\n",
"dataset[-1]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## The DataSet.apply method\n",
"A powerful tool for data preprocessing"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'raw_sentence': a series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'label': 1 type=str}\n"
]
}
],
"source": [
"# Convert all sentences to lowercase\n",
"dataset.apply(lambda x: x['raw_sentence'].lower(), new_field_name='raw_sentence')\n",
"print(dataset[0])"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'raw_sentence': a series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'label': 1 type=int}\n"
]
}
],
"source": [
"# Convert label to int\n",
"dataset.apply(lambda x: int(x['label']), new_field_name='label')\n",
"print(dataset[0])"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'raw_sentence': a series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'label': 1 type=int,\n",
"'words': ['a', 'series', 'of', 'escapades', 'demonstrating', 'the', 'adage', 'that', 'what', 'is', 'good', 'for', 'the', 'goose', 'is', 'also', 'good', 'for', 'the', 'gander', ',', 'some', 'of', 'which', 'occasionally', 'amuses', 'but', 'none', 'of', 'which', 'amounts', 'to', 'much', 'of', 'a', 'story', '.'] type=list}\n"
]
}
],
"source": [
"# Split sentences on whitespace\n",
"def split_sent(ins):\n",
"    return ins['raw_sentence'].split()\n",
"dataset.apply(split_sent, new_field_name='words')\n",
"print(dataset[0])"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'raw_sentence': a series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'label': 1 type=int,\n",
"'words': ['a', 'series', 'of', 'escapades', 'demonstrating', 'the', 'adage', 'that', 'what', 'is', 'good', 'for', 'the', 'goose', 'is', 'also', 'good', 'for', 'the', 'gander', ',', 'some', 'of', 'which', 'occasionally', 'amuses', 'but', 'none', 'of', 'which', 'amounts', 'to', 'much', 'of', 'a', 'story', '.'] type=list,\n",
"'seq_len': 37 type=int}\n"
]
}
],
"source": [
"# Add sequence-length information\n",
"dataset.apply(lambda x: len(x['words']), new_field_name='seq_len')\n",
"print(dataset[0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## DataSet.drop\n",
"Filtering data"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"77\n"
]
}
],
"source": [
"# Drop samples whose sequence length is no more than 3\n",
"dataset.drop(lambda x: x['seq_len'] <= 3)\n",
"print(len(dataset))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Configuring a DataSet\n",
"1. Which fields are features and which are labels\n",
"2. Splitting into training/validation sets"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"# Specify which fields in the DataSet should be converted to tensors\n",
"\n",
"# set target: the gold labels used when computing the loss and evaluating the model\n",
"dataset.set_target(\"label\")\n",
"# set input: the fields used in the model's forward pass\n",
"dataset.set_input(\"words\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"54\n",
"23\n"
]
}
],
"source": [
"# Split off test and training sets\n",
"\n",
"test_data, train_data = dataset.split(0.3)\n",
"print(len(test_data))\n",
"print(len(train_data))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Vocabulary\n",
"------\n",
"\n",
"fastNLP's Vocabulary makes it easy to build a vocabulary and convert words to indices"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'raw_sentence': the performances are an absolute joy . type=str,\n",
"'label': 4 type=int,\n",
"'words': [3, 1, 1, 26, 1, 1, 2] type=list,\n",
"'seq_len': 7 type=int}\n"
]
}
],
"source": [
"from fastNLP import Vocabulary\n",
"\n",
"# Build the vocabulary with Vocabulary.add(word)\n",
"vocab = Vocabulary(min_freq=2)\n",
"train_data.apply(lambda x: [vocab.add(word) for word in x['words']])\n",
"vocab.build_vocab()\n",
"\n",
"# Index the sentences with Vocabulary.to_index(word)\n",
"train_data.apply(lambda x: [vocab.to_index(word) for word in x['words']], new_field_name='words')\n",
"test_data.apply(lambda x: [vocab.to_index(word) for word in x['words']], new_field_name='words')\n",
"\n",
"\n",
"print(test_data[0])"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"batch_x has: {'words': tensor([[ 15, 72, 15, 73, 74, 7, 3, 75, 6, 3, 16, 16,\n",
" 76, 2],\n",
" [ 15, 72, 15, 73, 74, 7, 3, 75, 6, 3, 16, 16,\n",
" 76, 2]])}\n",
"batch_y has: {'label': tensor([ 1, 1])}\n"
]
}
],
"source": [
"# These data-preprocessing tools are also useful if you are working on projects such as reinforcement learning or GANs\n",
"from fastNLP.core.batch import Batch\n",
"from fastNLP.core.sampler import RandomSampler\n",
"\n",
"batch_iterator = Batch(dataset=train_data, batch_size=2, sampler=RandomSampler())\n",
"for batch_x, batch_y in batch_iterator:\n",
"    print(\"batch_x has: \", batch_x)\n",
"    print(\"batch_y has: \", batch_y)\n",
"    break"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Model\n",
"Define a PyTorch model"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"CNNText(\n",
"  (embed): Embedding(\n",
"    77, 50\n",
"    (dropout): Dropout(p=0.0)\n",
"  )\n",
"  (conv_pool): ConvMaxpool(\n",
"    (convs): ModuleList(\n",
"      (0): Conv1d(50, 3, kernel_size=(3,), stride=(1,), padding=(2,))\n",
"      (1): Conv1d(50, 4, kernel_size=(4,), stride=(1,), padding=(2,))\n",
"      (2): Conv1d(50, 5, kernel_size=(5,), stride=(1,), padding=(2,))\n",
"    )\n",
"  )\n",
"  (dropout): Dropout(p=0.1)\n",
"  (fc): Linear(\n",
"    (linear): Linear(in_features=12, out_features=5, bias=True)\n",
"  )\n",
")"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from fastNLP.models import CNNText\n",
"model = CNNText((len(vocab), 50), num_classes=5, padding=2, dropout=0.1)\n",
"model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is the forward method of the model above. If you are not sure what a forward method is, please refer to our PyTorch tutorial.\n",
"\n",
"Note two things:\n",
"1. The forward parameter is named **word_seq**; keep this in mind.\n",
"2. forward returns a **dict**, which contains a key named **output**.\n",
"\n",
"```Python\n",
"    def forward(self, word_seq):\n",
"        \"\"\"\n",
"\n",
"        :param word_seq: torch.LongTensor, [batch_size, seq_len]\n",
"        :return output: dict of torch.LongTensor, [batch_size, num_classes]\n",
"        \"\"\"\n",
"        x = self.embed(word_seq)  # [N,L] -> [N,L,C]\n",
"        x = self.conv_pool(x)  # [N,L,C] -> [N,C]\n",
"        x = self.dropout(x)\n",
"        x = self.fc(x)  # [N,C] -> [N, N_class]\n",
"        return {'output': x}\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is the predict method of the model above. Unlike forward, it directly outputs the prediction for the task.\n",
"\n",
"Note two things:\n",
"1. The predict parameter is also named **word_seq**.\n",
"2. predict also returns a **dict**, which contains a key named **predict**.\n",
"\n",
"```Python\n",
"    def predict(self, word_seq):\n",
"        \"\"\"\n",
"\n",
"        :param word_seq: torch.LongTensor, [batch_size, seq_len]\n",
"        :return predict: dict of torch.LongTensor, [batch_size,]\n",
"        \"\"\"\n",
"        output = self(word_seq)\n",
"        _, predict = output['output'].max(dim=1)\n",
"        return {'predict': predict}\n",
"```"
]
},
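{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check of the two contracts above, you can feed the model a dummy batch of word indices. This is only a sketch: it assumes the forward/predict signatures and the key names 'output' and 'predict' shown above; your installed fastNLP version may use different names (e.g. 'pred')."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"# A dummy batch of 2 sequences of length 10, filled with arbitrary valid word indices\n",
"dummy_batch = torch.randint(low=0, high=len(vocab), size=(2, 10), dtype=torch.long)\n",
"\n",
"out = model(dummy_batch)  # calls forward; expected: a dict with key 'output'\n",
"print(type(out), out['output'].shape)  # [batch_size, num_classes]\n",
"\n",
"pred = model.predict(dummy_batch)  # expected: a dict with key 'predict'\n",
"print(pred['predict'])  # [batch_size] class indices"
]
},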
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Trainer & Tester\n",
"------\n",
"\n",
"Train the model with fastNLP's Trainer"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"from fastNLP import Trainer\n",
"from copy import deepcopy\n",
"from fastNLP.core.losses import CrossEntropyLoss\n",
"from fastNLP.core.metrics import AccuracyMetric\n",
"\n",
"\n",
"# Rename the corresponding fields in the DataSet to match the parameter names of the model's forward\n",
"# Since the forward parameter is called word_seq, the field originally named words must be renamed to word_seq\n",
"# This demonstration is meant to teach you the **naming convention**\n",
"train_data.rename_field('words', 'word_seq')\n",
"test_data.rename_field('words', 'word_seq')\n",
"\n",
"# Also rename label to label_seq\n",
"train_data.rename_field('label', 'label_seq')\n",
"test_data.rename_field('label', 'label_seq')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### loss\n",
"Training a model requires a loss function.\n",
"\n",
"Below is the cross-entropy loss commonly used in classification problems. Note its **initialization parameters**.\n",
"\n",
"The pred argument names a key in the dict returned by the model's forward, here \"output\".\n",
"\n",
"The target argument names the field of the dataset that serves as the label, here \"label_seq\"."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"loss = CrossEntropyLoss(pred=\"output\", target=\"label_seq\")"
]
},
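{
"cell_type": "markdown",
"metadata": {},
"source": [
"Conceptually, this mapping means the Trainer will compute something equivalent to the following. This is a plain-PyTorch sketch of what the mapped loss does, not fastNLP's internal call path; the dict contents are made-up placeholders."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn.functional as F\n",
"\n",
"# forward_out stands for the dict returned by model.forward; batch_y for the target fields\n",
"forward_out = {'output': torch.randn(2, 5)}    # [batch_size, num_classes]\n",
"batch_y = {'label_seq': torch.tensor([1, 3])}  # [batch_size]\n",
"\n",
"# pred='output', target='label_seq' tells the loss which entries to pick\n",
"loss_value = F.cross_entropy(forward_out['output'], batch_y['label_seq'])\n",
"print(loss_value)"
]
},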
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Metric\n",
"Define the evaluation metric.\n",
"\n",
"Accuracy is used here. The \"naming convention\" for the arguments is the same as above.\n",
"\n",
"The pred argument names a key in the dict returned by the model's predict method, here \"predict\".\n",
"\n",
"The target argument names the field of the dataset that serves as the label, here \"label_seq\"."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"metric = AccuracyMetric(pred=\"predict\", target=\"label_seq\")"
]
},
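{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also exercise a metric on its own with a couple of tensors. This sketch assumes AccuracyMetric exposes evaluate() and get_metric(), as in fastNLP 0.4.x; the tensor values are arbitrary."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"\n",
"m = AccuracyMetric(pred='predict', target='label_seq')\n",
"# Accumulate one batch: predictions [1, 2] against gold labels [1, 0]\n",
"m.evaluate(pred=torch.tensor([1, 2]), target=torch.tensor([1, 0]))\n",
"print(m.get_metric())  # expected: {'acc': 0.5}"
]
},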
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"input fields after batch(if batch size is 2):\n",
"\tword_seq: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2, 11]) \n",
"target fields after batch(if batch size is 2):\n",
"\tlabel_seq: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2]) \n",
"\n"
]
},
{
"ename": "NameError",
"evalue": "\nProblems occurred when calling CNNText.forward(self, words, seq_len=None)\n\tmissing param: ['words']\n\tunused field: ['word_seq']\n\tSuggestion: You need to provide ['words'] in DataSet and set it as input. ",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-19-ff7d68caf88a>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 7\u001b[0m \u001b[0msave_path\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 8\u001b[0m \u001b[0mbatch_size\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m32\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 9\u001b[0;31m n_epochs=5)\n\u001b[0m\u001b[1;32m 10\u001b[0m \u001b[0moverfit_trainer\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtrain\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/Users/fdujyn/anaconda3/lib/python3.6/site-packages/fastNLP/core/trainer.py\u001b[0m in \u001b[0;36m__init__\u001b[0;34m(self, train_data, model, optimizer, loss, batch_size, sampler, update_every, n_epochs, print_every, dev_data, metrics, metric_key, validate_every, save_path, prefetch, use_tqdm, device, callbacks, check_code_level)\u001b[0m\n\u001b[1;32m 447\u001b[0m _check_code(dataset=train_data, model=model, losser=losser, metrics=metrics, dev_data=dev_data,\n\u001b[1;32m 448\u001b[0m \u001b[0mmetric_key\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mmetric_key\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mcheck_level\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mcheck_code_level\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 449\u001b[0;31m batch_size=min(batch_size, DEFAULT_CHECK_BATCH_SIZE))\n\u001b[0m\u001b[1;32m 450\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 451\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtrain_data\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtrain_data\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/Users/fdujyn/anaconda3/lib/python3.6/site-packages/fastNLP/core/trainer.py\u001b[0m in \u001b[0;36m_check_code\u001b[0;34m(dataset, model, losser, metrics, batch_size, dev_data, metric_key, check_level)\u001b[0m\n\u001b[1;32m 808\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minfo_str\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 809\u001b[0m _check_forward_error(forward_func=model.forward, dataset=dataset,\n\u001b[0;32m--> 810\u001b[0;31m batch_x=batch_x, check_level=check_level)\n\u001b[0m\u001b[1;32m 811\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 812\u001b[0m \u001b[0mrefined_batch_x\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0m_build_args\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmodel\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mforward\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mbatch_x\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/Users/fdujyn/anaconda3/lib/python3.6/site-packages/fastNLP/core/utils.py\u001b[0m in \u001b[0;36m_check_forward_error\u001b[0;34m(forward_func, batch_x, dataset, check_level)\u001b[0m\n\u001b[1;32m 594\u001b[0m \u001b[0msugg_str\u001b[0m \u001b[0;34m+=\u001b[0m \u001b[0msuggestions\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 595\u001b[0m \u001b[0merr_str\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m'\\n'\u001b[0m \u001b[0;34m+\u001b[0m \u001b[0;34m'\\n'\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mjoin\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0merrs\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m+\u001b[0m \u001b[0;34m'\\n\\tSuggestion: '\u001b[0m \u001b[0;34m+\u001b[0m \u001b[0msugg_str\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 596\u001b[0;31m \u001b[0;32mraise\u001b[0m \u001b[0mNameError\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0merr_str\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 597\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0m_unused\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 598\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mcheck_level\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0mWARNING_CHECK_LEVEL\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mNameError\u001b[0m: \nProblems occurred when calling CNNText.forward(self, words, seq_len=None)\n\tmissing param: ['words']\n\tunused field: ['word_seq']\n\tSuggestion: You need to provide ['words'] in DataSet and set it as input. "
]
}
],
"source": [
"# Instantiate a Trainer, pass in the model and data, and train\n",
"# First fit on test_data (to make sure the model implementation is correct)\n",
"copy_model = deepcopy(model)\n",
"overfit_trainer = Trainer(model=copy_model, train_data=test_data, dev_data=test_data,\n",
"                          loss=loss,\n",
"                          metrics=metric,\n",
"                          save_path=None,\n",
"                          batch_size=32,\n",
"                          n_epochs=5)\n",
"overfit_trainer.train()"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"input fields after batch(if batch size is 2):\n",
"\tword_seq: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2, 20]) \n",
"target fields after batch(if batch size is 2):\n",
"\tlabel_seq: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2]) \n",
"\n",
"training epochs started 2019-01-12 17-09-05\n"
]
},
{
"data": {
"text/plain": [
"HBox(children=(IntProgress(value=0, layout=Layout(flex='2'), max=5), HTML(value='')), layout=Layout(display='i…"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Evaluation at Epoch 1/5. Step:1/5. AccuracyMetric: acc=0.37037\n",
"Evaluation at Epoch 2/5. Step:2/5. AccuracyMetric: acc=0.37037\n",
"Evaluation at Epoch 3/5. Step:3/5. AccuracyMetric: acc=0.462963\n",
"Evaluation at Epoch 4/5. Step:4/5. AccuracyMetric: acc=0.425926\n",
"Evaluation at Epoch 5/5. Step:5/5. AccuracyMetric: acc=0.481481\n",
"\n",
"In Epoch:5/Step:5, got best dev performance:AccuracyMetric: acc=0.481481\n",
"Reloaded the best model.\n",
"Train finished!\n"
]
}
],
"source": [
"# Train on train_data, validate on test_data\n",
"trainer = Trainer(model=model, train_data=train_data, dev_data=test_data,\n",
"                  loss=CrossEntropyLoss(pred=\"output\", target=\"label_seq\"),\n",
"                  metrics=AccuracyMetric(pred=\"predict\", target=\"label_seq\"),\n",
"                  save_path=None,\n",
"                  batch_size=32,\n",
"                  n_epochs=5)\n",
"trainer.train()\n",
"print('Train finished!')"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[tester] \n",
"AccuracyMetric: acc=0.481481\n",
"{'AccuracyMetric': {'acc': 0.481481}}\n"
]
}
],
"source": [
"# Use Tester to evaluate performance on test_data\n",
"from fastNLP import Tester\n",
"\n",
"tester = Tester(data=test_data, model=model, metrics=AccuracyMetric(pred=\"predict\", target=\"label_seq\"),\n",
"                batch_size=4)\n",
"acc = tester.test()\n",
"print(acc)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# In summary\n",
"\n",
"## Pseudocode logic of the fastNLP Trainer\n",
"### 1. Prepare a DataSet; suppose it contains the following fields\n",
"    ['raw_sentence', 'word_seq1', 'word_seq2', 'raw_label', 'label']\n",
"    Set 'word_seq1' and 'word_seq2' as input via\n",
"    DataSet.set_input('word_seq1', 'word_seq2', flag=True)\n",
"    Set 'label' as target via\n",
"    DataSet.set_target('label', flag=True)\n",
"### 2. Initialize the model\n",
"    class Model(nn.Module):\n",
"        def __init__(self):\n",
"            xxx\n",
"        def forward(self, word_seq1, word_seq2):\n",
"            # (1) The parameter names used here must match the names of the input fields in the DataSet,\n",
"            #     because arguments are bound to parameters by name\n",
"            # (2) There may be more input fields than forward parameters, but not fewer\n",
"            xxxx\n",
"            # The output must be a dict\n",
"### 3. The Trainer's training loop\n",
"    (1) Take a batch of batch_size samples from the DataSet and call Model.forward\n",
"    (2) Pass the result of Model.forward, together with the fields marked as target, to the Losser.\n",
"        Different users may use different keys in the dict returned by Model.forward, e.g. {'pred': xxx} vs {'output': xxx};\n",
"        likewise, the target field may be named differently, e.g. label vs target.\n",
"        To resolve this, our losses provide a mapping mechanism.\n",
"        For example, CrossEntropyLosser expects the inputs (prediction, target), but forward outputs {'output': xxx} and 'label' is the target;\n",
"        in that case, initialize the losser as CrossEntropyLosser(prediction='output', target='label').\n",
"    (3) The same applies to Metrics:\n",
"        a Metric also takes values from the forward result and from the fields set as target, and the same mapping mechanism locates the corresponding values.\n",
"\n",
"\n",
"## Some questions\n",
"### 1. Why set input and target on a DataSet?\n",
"    Only data set as input or target is fetched during training.\n",
"    (1.1) The arguments passed to Model.forward are looked up only among the fields set as input.\n",
"    (1.2) The values passed to the losser or metric come from:\n",
"        (a) the output of Model.forward\n",
"        (b) the fields set as target\n",
"\n",
"### 2. Fields in the DataSet are bound to forward's parameters by parameter name\n",
"    (2.1) When building the model, for example:\n",
"        if x and seq_lens are input fields in the DataSet, then forward should be\n",
"        def forward(self, x, seq_lens):\n",
"            pass\n",
"        Fields are matched to parameters by name.\n",
"\n",
"## The overall workflow\n",
"### 1. Load data into a DataSet\n",
"### 2. Preprocess the DataSet with apply\n",
"    (2.1) During preprocessing, set some fields as input and some as target\n",
"### 3. Build the model\n",
"    (3.1) When building the model, the parameter names of forward must match the field names set as input in the DataSet.\n",
"        For example:\n",
"        if x and seq_lens are input fields in the DataSet, then forward should be\n",
"        def forward(self, x, seq_lens):\n",
"            pass\n",
"        Fields are matched to parameters by name.\n",
"    (3.2) The output of the model's forward must be a dict.\n",
"        It is recommended to return {\"pred\": xx}."
]
},
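{
"cell_type": "markdown",
"metadata": {},
"source": [
"To tie the naming convention together, here is a minimal end-to-end sketch: a toy model whose forward parameter name matches the input field and whose output dict key matches the loss's pred mapping. It reuses train_data/test_data from above; ToyModel itself is made up for illustration and is not part of fastNLP."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn as nn\n",
"\n",
"class ToyModel(nn.Module):\n",
"    def __init__(self, vocab_size, num_classes):\n",
"        super().__init__()\n",
"        self.embed = nn.Embedding(vocab_size, 32)\n",
"        self.fc = nn.Linear(32, num_classes)\n",
"    def forward(self, word_seq):  # parameter name == input field name\n",
"        x = self.embed(word_seq).mean(dim=1)  # [N, L, C] -> [N, C]\n",
"        return {'pred': self.fc(x)}           # key name used by the mappings below\n",
"\n",
"toy_trainer = Trainer(model=ToyModel(len(vocab), 5),\n",
"                      train_data=train_data, dev_data=test_data,\n",
"                      loss=CrossEntropyLoss(pred='pred', target='label_seq'),\n",
"                      metrics=AccuracyMetric(pred='pred', target='label_seq'),\n",
"                      save_path=None, batch_size=32, n_epochs=2)\n",
"# toy_trainer.train()  # uncomment to run"
]
},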
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}