{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# T2. dataloader 和 tokenizer 的基本使用\n", "\n", "  1   fastNLP 中的 dataloader\n", "\n", "    1.1   databundle 的结构与使用\n", "\n", "    1.2   dataloader 的结构与使用\n", "\n", "  2   fastNLP 中的 tokenizer\n", " \n", "    2.1   传统 GloVe 词嵌入的加载\n", " \n", "    2.2   PreTrainedTokenizer 的概念\n", "\n", "    2.3   BertTokenizer 的基本使用\n", "\n", "  3   实例:NG20 数据集的完整加载过程\n", " \n", "    3.1   \n", "\n", "    3.2   " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 1. fastNLP 中的 dataloader\n", "\n", "### 1.1 databundle 的结构与使用\n", "\n", "在`fastNLP 0.8`中,在常用的数据加载模块`DataLoader`和数据集`DataSet`模块之间,还存在\n", "\n", "  一个中间模块,即 **数据包`DataBundle`模块**,可以从`fastNLP.io`路径中导入该模块\n", "\n", "在`fastNLP 0.8`中,**一个`databundle`数据包包含若干`dataset`数据集和`vocabulary`词汇表**\n", "\n", "  分别存储在`datasets`和`vocabs`两个变量中,所以了解`databundle`数据包之前\n", "\n", "  需要首先**复习`dataset`数据集和`vocabulary`词汇表**,**下面的一串代码**,**你知道其大概含义吗?**\n", "\n", "必要提示:`NG20`,全称[`News Group 20`](http://qwone.com/~jason/20Newsgroups/),是一个新闻文本分类数据集,包含20个大类以及若干小类\n", "\n", "  数据集包含训练集`'ng20_train.csv'`和测试集`'ng20_test.csv'`两部分,每条数据\n", "\n", "  包括`'label'`标签和`'text'`文本两个条目,通过`sample(frac=1)[:10]`随机采样并读取前十条" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n",
       "
\n" ], "text/plain": [ "\n" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Processing: 0%| | 0/10 [00:00': 0, '': 1, 'rec': 2, 'talk': 3, 'comp': 4, 'soc': 5, 'misc': 6, 'sci': 7}\n" ] } ], "source": [ "import pandas as pd\n", "\n", "from fastNLP import DataSet\n", "from fastNLP import Vocabulary\n", "from fastNLP.io import DataBundle\n", "\n", "datasets = {}\n", "datasets['train'] = DataSet.from_pandas(pd.read_csv('./data/ng20_train.csv').sample(frac=1)[:10])\n", "datasets['train'].apply_more(lambda ins:{'label': ins['label'].lower().split('.')[0], \n", " 'text': ins['text'].lower().split()},\n", " progress_bar='tqdm')\n", "datasets['test'] = DataSet.from_pandas(pd.read_csv('./data/ng20_test.csv').sample(frac=1)[:10])\n", "datasets['test'].apply_more(lambda ins:{'label': ins['label'].lower().split('.')[0], \n", " 'text': ins['text'].lower().split()},\n", " progress_bar='tqdm')\n", "print(datasets['train'])\n", "\n", "vocabs = {}\n", "vocabs['label'] = Vocabulary().from_dataset(datasets['train'].concat(datasets['test'], inplace=False), field_name='label')\n", "vocabs['text'] = Vocabulary().from_dataset(datasets['train'].concat(datasets['test'], inplace=False), field_name='text')\n", "print(vocabs['label'].word2idx)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "数据集(比如:分开的训练集、验证集和测试集)以及各个field对应的vocabulary。\n", " 该对象一般由fastNLP中各种Loader的load函数生成,可以通过以下的方法获取里面的内容" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "In total 2 datasets:\n", "\ttrain has 10 instances.\n", "\ttest has 10 instances.\n", "In total 2 vocabs:\n", "\tlabel has 8 entries.\n", "\ttext has 1687 entries.\n", "\n" ] } ], "source": [ "data_bundle = DataBundle(datasets=datasets, vocabs=vocabs)\n", "print(data_bundle)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 1.2 dataloader 的结构与使用" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "### 2.2 the concept of PreTrainedTokenizer\n", "\n", "In `fastNLP 0.8`, **the `PreTrainedTokenizer` module is used to tokenize the text in a dataset and map the tokens to indices**\n", "\n", "  Note that downloading and importing the `PreTrainedTokenizer` module **requires the `transformers` package to be installed in the environment**,\n", "\n", "  because the `PreTrainedTokenizer` module in `fastNLP 0.8` is implemented on top of the `Huggingface Transformers` library\n", "\n", "**`Huggingface Transformers` is an open-source library of pretrained language models built on the `transformer` architecture**\n", "\n", "  It contains many classic `transformer`-based pretrained models such as `BERT`, `BART`, `RoBERTa`, `GPT2` and `CPT`\n", "\n", "  For more details, see the [paper](https://arxiv.org/pdf/1910.03771.pdf), the [official documentation](https://huggingface.co/transformers/) and the [code repository](https://github.com/huggingface/transformers) of `Huggingface Transformers`" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2.3 basic usage of BertTokenizer\n", "\n", "In `fastNLP 0.8`, several subclasses are derived from the `PreTrainedTokenizer` base class, implementing tokenization based on `BERT` and other models\n", "\n", "  This section takes the `BertTokenizer` module as an example to show how a `PreTrainedTokenizer` is used in practice\n", "\n", "**Initializing a `BertTokenizer` takes two steps: importing the module and loading the pretrained data**; first import the\n", "\n", "  `BertTokenizer` module from `fastNLP.transformers.torch`, then download the specified `tokenizer` through the `from_pretrained` method\n", "\n", "  Here, **`'bert-base-uncased'` specifies the pretrained `BERT` variant the `tokenizer` uses**: words are not case-sensitive,\n", "\n", "    **number of layers `L=12`**, **hidden size `H=768`**, **number of attention heads `A=12`**, **about `110M` parameters in total**\n", "\n", "  The model files are downloaded automatically into the `~/.cache/huggingface/transformers` folder under the home directory" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": false }, "outputs": [], "source": [ "from fastNLP.transformers.torch import BertTokenizer\n", "\n", "tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dir(tokenizer)" ] },
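{ "cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check, the standard `tokenize`, `encode` and `decode` methods inherited from `PreTrainedTokenizer` can be tried out; the sample sentence below is made up for illustration" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sentence = 'fastNLP wraps the Huggingface tokenizers.'\n", "\n", "tokens = tokenizer.tokenize(sentence)   # split into WordPiece tokens\n", "ids = tokenizer.encode(sentence)        # token ids, with [CLS] and [SEP] added\n", "print(tokens)\n", "print(ids)\n", "print(tokenizer.decode(ids))            # map the ids back to text" ] },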
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import pandas as pd\n", "\n", "from fastNLP import DataSet\n", "from fastNLP import Vocabulary\n", "\n", "dataset = DataSet.from_pandas(pd.read_csv('./data/ng20_test.csv'))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from functools import partial\n", "\n", "encode = partial(tokenizer.encode_plus, max_length=100, truncation=True,\n", " return_attention_mask=True)\n", "# 会新增 input_ids 、 attention_mask 和 token_type_ids 这三个 field\n", "dataset.apply_field_more(encode, field_name='text')" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "target_vocab = Vocabulary(padding=None, unknown=None)\n", "\n", "target_vocab.from_dataset(*[ds for _, ds in data_bundle.iter_datasets()], field_name='label')\n", "target_vocab.index_dataset(*[ds for _, ds in data_bundle.iter_datasets()], field_name='label',\n", " new_field_name='labels')\n", "# 需要将 input_ids 的 pad 值设置为 tokenizer 的 pad 值\n", "dataset.set_pad('input_ids', pad_val=tokenizer.pad_token_id)\n", "dataset.set_ignore('label', 'text') # 因为 label 是原始的不需要的 str ,所以我们可以忽略它,让它不要在 batch 的输出中出现" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" } }, "nbformat": 4, "nbformat_minor": 1 }