 Dev0.4.0 (#149)
* 1. CRF now supports bmeso-type tags; 2. add comments in vocabulary
* Add an error check to BucketSampler
* 1. Fix a bug in ClipGradientCallback; remove the print in LRSchedulerCallback, which should later print through a passed-in pbar; 2. add comments to MLP
* update MLP module
* Add comments to metric; fix a bug in the trainer save process
* Update README.md
fix tutorial link
* Add ENAS (Efficient Neural Architecture Search)
* add ignore_type in DataSet.add_field
* AutoPadder will not pad when dtype is None
* add ignore_type in DataSet.apply
* Fix a potential padder bug in fieldarray
* Fix a typo in crf, and a spot that could cause numerical instability
* Fix a potential bug in CRF
* change two default init arguments of Trainer into None
* Changes to Callbacks:
* add several read-only attributes to callback
* set these attributes through the manager
* optimize code to lighten the burden on @transfer
* Move the ENAS-related code into the automl directory
* Fix a bug in fast_param_mapping
* Trainer now creates the save directory automatically
* Vocabulary print now displays its contents
* Add an iteration method to vocabulary
* Fix a bug where the CRF loss could be negative
* add SQuAD metric
* add sigmoid activation function in MLP
* - add star transformer model
- add ConllLoader, for all kinds of conll-format files
- add JsonLoader, for json-format files
- add SSTLoader, for SST-2 & SST-5
- change Callback interface
- fix batch multi-process when killed
- add README to list models and their performance
* - fix test
* - fix callback & tests
* - update README
* Fix some bugs; adjust callback
* Prepare to release version 0.4.0
* update readme
* support parallel loss
* Prevent multi-GPU setups from computing the loss incorrectly
* update advance_tutorial jupyter notebook
* 1. Add new loading functions load_with_vocab() and load_without_vocab() to embedding_loader; compared with the old functions, (1) embed_dim no longer needs to be passed in, and (2) whether the file is word2vec or glove format is detected automatically.
2. Add from_dataset() and index_dataset() to vocabulary, so indexing a dataset no longer takes multiple lines.
3. Add a cache_result() decorator in utils for caching a function's return value (see the sketch below).
4. Add an update_every attribute to callback
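For illustration, a minimal sketch of such a caching decorator; the real fastNLP helper may differ, and the `cache_path`/`refresh` parameters here are assumed names:

    import functools
    import os
    import pickle

    def cache_result(cache_path, refresh=False):
        # Hypothetical signature: cache_path is a pickle file, refresh forces recomputation.
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                if os.path.exists(cache_path) and not refresh:
                    with open(cache_path, 'rb') as f:
                        return pickle.load(f)  # cache hit: return the stored result
                result = func(*args, **kwargs)
                with open(cache_path, 'wb') as f:
                    pickle.dump(result, f)  # cache miss: compute and store
                return result
            return wrapper
        return decorator

    @cache_result('embeddings.pkl')
    def load_embeddings(path):
        ...  # expensive load; runs once, afterwards served from the pickle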
* 1. DataSet.apply() reports the offending index when it raises an error
2. Vocabulary.from_dataset() and index_dataset() report the vocab order on error
3. embedloader skips irregular lines when reading embeddings.
* update attention
* doc tools
* fix some doc errors
* Switch to Chinese comments; add a viterbi decoding method
* Example version
* - add pad sequence for lstm
- add csv, conll, json filereader
- update dataloader
- remove useless dataloader
- fix trainer loss print
- fix tests
* - fix test_tutorial
* Add comments
* Test documentation
* Local WIP stash
* Local WIP stash
* Reorder the documentation
* - add document
* Local WIP stash
* update pooling
* update bert
* update documents in MLP
* update documents in snli
* combine self attention module to attention.py
* update documents on losses.py
* Update the DataSet documentation
* update documents on metrics
* 1. Remove print statements from LSTM; 2. change use_cuda in Trainer and Tester to device; 3. expand the Trainer documentation
* Add comments to Trainer
* Improve the documentation of trainer, callback, etc.; rename parts of the code so they are hidden from the docs
* update char level encoder
* update documents on embedding.py
* - update doc
* Add comments and revise some code
* - update doc
- add get_embeddings
* Update the documentation config options
* Change embedding to be initialized via init_embed
* 1. Add multi-GPU support to Trainer and Tester;
* - add test
- fix jsonloader
* Remove the annotated tutorial
* Add get_field_names to dataset
* Fix bugs
* - add Const
- fix bugs
* Revise some comments
* - add model runner for easier test models
- add model tests
* Revise the docs configuration and structure
* Revise a large part of the core documentation. TODO:
1. Improve the trainer and tester docs
2. Investigate comment examples and tests
* Comments in core are mostly reviewed
* Revise the comments in io
* Switch everything to relative imports
* Switch everything to relative imports
* small change
* 1. Remove api/automl from the install files
2. Fix a seq_len bug in metric
3. Fix a naming error in sampler
* Fix bug: compatibility with the CPU-only version of PyTorch
TODO: similar bugs may exist elsewhere
* Revise the references in the documentation
* Replace tqdm.autonotebook with tqdm.auto
* - fix batch & vocab
* Upload documentation files *.rst
* Upload documentation files and several TODOs
* Discuss and consolidate several modules
* Tests for core plus some minor fixes
* Remove some redundant documentation
* update init files
* update const files
* update const files
* Add tests for cnn
* fix a little bug
* - update attention
- fix tests
* Improve tests
* Finish the quick-start tutorial
* Update the docs for the rename of sequence_modeling to sequence_labeling
* Re-run apidoc to fix leftovers from the rename
* Fix documentation formatting
* Unify the different copies of seq_len_to_mask into core.utils.seq_len_to_mask
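The unified helper turns a batch of sequence lengths into a padding mask; a minimal sketch of its semantics (not necessarily the library's exact signature):

    import torch

    def seq_len_to_mask(seq_len, max_len=None):
        # seq_len: (batch,) tensor of true sequence lengths
        max_len = int(max_len) if max_len else int(seq_len.max())
        positions = torch.arange(max_len, device=seq_len.device).unsqueeze(0)  # (1, max_len)
        return positions < seq_len.unsqueeze(1)  # (batch, max_len), True at real tokens

    mask = seq_len_to_mask(torch.tensor([3, 1, 4]))  # row i is True for the first seq_len[i] positions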
* Add a one-line hint
* Show dataset_loader in the documentation
* Note that Dataset.read_csv will be replaced by CSVLoader
* Finish the documentation linking Callback and Trainer
* Partially update the index
* Remove redundant prints
* Remove the word-segmentation metric, since it could cause errors
* Revise Chinese names in the documentation
* Finish the detailed introduction documents
* ipynb file for the tutorial
* Revise some introduction documents
* Revise the landing-page introductions for models and modules
* Add the titlesonly setting
* Revise the titles displayed in module documentation
* Revise the opening introductions of core and io
* Revise the opening introductions of modules and models
* Use .. todo:: to hide TODO comments that might be pulled into the docs
* Revise some comments
* delete an old metric in test
* Update the tutorials test files
* Move features not yet ready for release into the legacy folder
* Remove tests that cannot run
* Update the callback test file
* Remove outdated tutorials and test files
* Change the parameters of cache_results
* Update the io test files; remove some outdated tests
* Fix bugs
* Fix tests failing in test_utils.py
* Fix compatibility with pad_sequence in pytorch 1.1; adjust Trainer's pbar
* 1. Fix a bug in metric; 2. add metric tests
* add model summary
* Add aliases
* Remove nested layers in encoder
* Reorder imports in core and adjust what __all__ exposes
* Reorder imports in models and adjust what __all__ exposes
* Rename files
* Update __all__ and imports of the modules package
* fix var rnn
* Add a clear method to vocab
* Minor PEP8 tweaks
* Update the cache_results example
* 1. Warn when indices in callback may be None; 2. DataSet supports indexing with a List (example below)
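A sketch of the new list-based indexing, assuming it returns a new DataSet with the selected rows:

    from fastNLP import DataSet

    ds = DataSet({'x': [1, 2, 3, 4], 'y': ['a', 'b', 'c', 'd']})
    sub = ds[[0, 2]]  # select rows 0 and 2 via a List index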
* Fix a typo
* Update README.md
* update documents on bert
* update documents on encoder/bert
* Add a fitlog callback for experiment logging with fitlog (sketch below)
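A minimal sketch of what such a callback might look like; the hook signatures and the fitlog calls (add_loss/add_metric) are assumptions based on the fitlog API, not the exact implementation added here:

    import fitlog
    from fastNLP.core.callback import Callback

    class MyFitlogCallback(Callback):
        # Hypothetical sketch: log training loss and dev metrics to fitlog.
        def on_backward_begin(self, loss):
            fitlog.add_loss(loss.item(), name='loss', step=self.step)

        def on_valid_end(self, eval_result, metric_key, optimizer, is_better_eval):
            fitlog.add_metric(eval_result, step=self.step)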
* typo
* - update dataset_loader
* Add a link to the fitlog documentation.
* Add documentation for DataSet Loader
* - add star-transformer reproduction
- # Code Modified from https://github.com/carpedm20/ENAS-pytorch
-
- import math
- import time
- from datetime import datetime
- from datetime import timedelta
-
- import numpy as np
- import torch
-
- try:
-     from tqdm.auto import tqdm
- except:
-     from fastNLP.core.utils import _pseudo_tqdm as tqdm
-
- from fastNLP.core.batch import Batch
- from fastNLP.core.callback import CallbackException
- from fastNLP.core.dataset import DataSet
- from fastNLP.core.utils import _move_dict_value_to_device
- import fastNLP
- from . import enas_utils as utils
- from fastNLP.core.utils import _build_args
-
- from torch.optim import Adam
-
-
- def _get_no_grad_ctx_mgr():
-     """Returns the `torch.no_grad` context manager for PyTorch version >=
-     0.4, or a no-op context manager otherwise.
-     """
-     return torch.no_grad()
-
-
- class ENASTrainer(fastNLP.Trainer):
-     """A class to wrap training code."""
-     def __init__(self, train_data, model, controller, **kwargs):
-         """Constructor for the training algorithm.
-
-         :param DataSet train_data: the training data
-         :param torch.nn.modules.module model: a PyTorch model
-         :param torch.nn.modules.module controller: a PyTorch model
-         """
-         self.final_epochs = kwargs['final_epochs']
-         kwargs.pop('final_epochs')
-         super(ENASTrainer, self).__init__(train_data, model, **kwargs)
-         self.controller_step = 0
-         self.shared_step = 0
-         self.max_length = 35
-
-         self.shared = model
-         self.controller = controller
-
-         self.shared_optim = Adam(
-             self.shared.parameters(),
-             lr=20.0,
-             weight_decay=1e-7)
-
-         self.controller_optim = Adam(
-             self.controller.parameters(),
-             lr=3.5e-4)
-
-     def train(self, load_best_model=True):
-         """
-         :param bool load_best_model: only takes effect if dev_data was provided at initialization.
-             If True, the trainer reloads the model parameters that performed best on dev before returning.
-         :return results: a dict containing the following::
-
-                 seconds: float, the training time
-                 The following three entries are present only if dev_data was provided.
-                 best_eval: Dict of Dict, the evaluation results
-                 best_epoch: int, the epoch in which the best value was obtained
-                 best_step: int, the step (batch update) at which the best value was obtained
-
-         """
-         results = {}
-         if self.n_epochs <= 0:
-             print(f"training epoch is {self.n_epochs}, nothing was done.")
-             results['seconds'] = 0.
-             return results
-         try:
-             if torch.cuda.is_available() and self.use_cuda:
-                 self.model = self.model.cuda()
-                 self._model_device = self.model.parameters().__next__().device
-             self._mode(self.model, is_test=False)
-
-             self.start_time = str(datetime.now().strftime('%Y-%m-%d-%H-%M-%S'))
-             start_time = time.time()
-             print("training epochs started " + self.start_time, flush=True)
-
-             try:
-                 self.callback_manager.on_train_begin()
-                 self._train()
-                 self.callback_manager.on_train_end(self.model)
-             except (CallbackException, KeyboardInterrupt) as e:
-                 self.callback_manager.on_exception(e, self.model)
-
-             if self.dev_data is not None:
-                 print("\nIn Epoch:{}/Step:{}, got best dev performance:".format(self.best_dev_epoch, self.best_dev_step) +
-                       self.tester._format_eval_results(self.best_dev_perf))
-                 results['best_eval'] = self.best_dev_perf
-                 results['best_epoch'] = self.best_dev_epoch
-                 results['best_step'] = self.best_dev_step
-                 if load_best_model:
-                     model_name = "best_" + "_".join([self.model.__class__.__name__, self.metric_key, self.start_time])
-                     load_succeed = self._load_model(self.model, model_name)
-                     if load_succeed:
-                         print("Reloaded the best model.")
-                     else:
-                         print("Failed to reload the best model.")
-         finally:
-             pass
-         results['seconds'] = round(time.time() - start_time, 2)
-
-         return results
-
-     def _train(self):
-         if not self.use_tqdm:
-             from fastNLP.core.utils import _pseudo_tqdm as inner_tqdm
-         else:
-             inner_tqdm = tqdm
-         self.step = 0
-         start = time.time()
-         total_steps = (len(self.train_data) // self.batch_size + int(
-             len(self.train_data) % self.batch_size != 0)) * self.n_epochs
-         with inner_tqdm(total=total_steps, postfix='loss:{0:<6.5f}', leave=False, dynamic_ncols=True) as pbar:
-             avg_loss = 0
-             data_iterator = Batch(self.train_data, batch_size=self.batch_size, sampler=self.sampler, as_numpy=False,
-                                   prefetch=self.prefetch)
-             for epoch in range(1, self.n_epochs + 1):
-                 pbar.set_description_str(desc="Epoch {}/{}".format(epoch, self.n_epochs))
-                 last_stage = (epoch > self.n_epochs + 1 - self.final_epochs)
-                 if epoch == self.n_epochs + 1 - self.final_epochs:
-                     print('Entering the final stage. (Only train the selected structure)')
-                 # early stopping
-                 self.callback_manager.on_epoch_begin(epoch, self.n_epochs)
-
-                 # 1. Training the shared parameters omega of the child models
-                 self.train_shared(pbar)
-
-                 # 2. Training the controller parameters theta
-                 if not last_stage:
-                     self.train_controller()
-
-                 if ((self.validate_every > 0 and self.step % self.validate_every == 0) or
-                         (self.validate_every < 0 and self.step % len(data_iterator) == 0)) \
-                         and self.dev_data is not None:
-                     if not last_stage:
-                         self.derive()
-                     eval_res = self._do_validation(epoch=epoch, step=self.step)
-                     eval_str = "Evaluation at Epoch {}/{}. Step:{}/{}. ".format(epoch, self.n_epochs, self.step,
-                                                                                 total_steps) + \
-                                self.tester._format_eval_results(eval_res)
-                     pbar.write(eval_str)
-
-                 # lr decay; early stopping
-                 self.callback_manager.on_epoch_end(epoch, self.n_epochs, self.optimizer)
-             # =============== epochs end =================== #
-             pbar.close()
-         # ============ tqdm end ============== #
-
-
-     def get_loss(self, inputs, targets, hidden, dags):
-         """Computes the loss for the same batch for M models.
-
-         This amounts to an estimate of the loss, which is turned into an
-         estimate for the gradients of the shared model.
-         """
-         if not isinstance(dags, list):
-             dags = [dags]
-
-         loss = 0
-         for dag in dags:
-             self.shared.setDAG(dag)
-             inputs = _build_args(self.shared.forward, **inputs)
-             inputs['hidden'] = hidden
-             result = self.shared(**inputs)
-             output, hidden, extra_out = result['pred'], result['hidden'], result['extra_out']
-
-             self.callback_manager.on_loss_begin(targets, result)
-             sample_loss = self._compute_loss(result, targets)
-             loss += sample_loss
-
-         assert len(dags) == 1, 'there are multiple `hidden` for multiple `dags`'
-         return loss, hidden, extra_out
-
-     def train_shared(self, pbar=None, max_step=None, dag=None):
-         """Train the language model for 400 steps of minibatches of 64
-         examples.
-
-         Args:
-             max_step: Used to run extra training steps as a warm-up.
-             dag: If not None, is used instead of calling sample().
-
-         BPTT is truncated at 35 timesteps.
-
-         For each weight update, gradients are estimated by sampling M models
-         from the fixed controller policy, and averaging their gradients
-         computed on a batch of training data.
-         """
-         model = self.shared
-         model.train()
-         self.controller.eval()
-
-         hidden = self.shared.init_hidden(self.batch_size)
-
-         abs_max_grad = 0
-         abs_max_hidden_norm = 0
-         step = 0
-         raw_total_loss = 0
-         total_loss = 0
-         train_idx = 0
-         avg_loss = 0
-         data_iterator = Batch(self.train_data, batch_size=self.batch_size, sampler=self.sampler, as_numpy=False,
-                               prefetch=self.prefetch)
-
-         for batch_x, batch_y in data_iterator:
-             _move_dict_value_to_device(batch_x, batch_y, device=self._model_device)
-             indices = data_iterator.get_batch_indices()
-             # negative sampling; replace unknown; re-weight batch_y
-             self.callback_manager.on_batch_begin(batch_x, batch_y, indices)
-             # prediction = self._data_forward(self.model, batch_x)
-
-             dags = self.controller.sample(1)
-             inputs, targets = batch_x, batch_y
-             # self.callback_manager.on_loss_begin(batch_y, prediction)
-             loss, hidden, extra_out = self.get_loss(inputs,
-                                                     targets,
-                                                     hidden,
-                                                     dags)
-             hidden.detach_()
-
-             avg_loss += loss.item()
-
-             # Is loss NaN or inf? requires_grad = False
-             self.callback_manager.on_backward_begin(loss, self.model)
-             self._grad_backward(loss)
-             self.callback_manager.on_backward_end(self.model)
-
-             self._update()
-             self.callback_manager.on_step_end(self.optimizer)
-
-             if (self.step + 1) % self.print_every == 0:
-                 if self.use_tqdm:
-                     print_output = "loss:{0:<6.5f}".format(avg_loss / self.print_every)
-                     pbar.update(self.print_every)
-                 else:
-                     # NOTE: `epoch` and `start` are not defined in this method, so this
-                     # non-tqdm branch would raise NameError; it appears to be carried
-                     # over from Trainer._train.
-                     end = time.time()
-                     diff = timedelta(seconds=round(end - start))
-                     print_output = "[epoch: {:>3} step: {:>4}] train loss: {:>4.6} time: {}".format(
-                         epoch, self.step, avg_loss, diff)
-                 pbar.set_postfix_str(print_output)
-                 avg_loss = 0
-             self.step += 1
-             step += 1
-             self.shared_step += 1
-             self.callback_manager.on_batch_end()
-         # ================= mini-batch end ==================== #
-
-
-     def get_reward(self, dag, entropies, hidden, valid_idx=0):
-         """Computes the perplexity of a single sampled model on a minibatch of
-         validation data.
-         """
-         if not isinstance(entropies, np.ndarray):
-             entropies = entropies.data.cpu().numpy()
-
-         data_iterator = Batch(self.dev_data, batch_size=self.batch_size, sampler=self.sampler, as_numpy=False,
-                               prefetch=self.prefetch)
-
-         for inputs, targets in data_iterator:
-             valid_loss, hidden, _ = self.get_loss(inputs, targets, hidden, dag)
-             valid_loss = utils.to_item(valid_loss.data)
-
-         valid_ppl = math.exp(valid_loss)
-
-         R = 80 / valid_ppl
-
-         rewards = R + 1e-4 * entropies
-
-         return rewards, hidden
-
-     def train_controller(self):
-         """Fixes the shared parameters and updates the controller parameters.
-
-         The controller is updated with a score function gradient estimator
-         (i.e., REINFORCE), with the reward being c/valid_ppl, where valid_ppl
-         is computed on a minibatch of validation data.
-
-         A moving average baseline is used.
-
-         The controller is trained for 2000 steps per epoch (i.e.,
-         first (Train Shared) phase -> second (Train Controller) phase).
-         """
-         model = self.controller
-         model.train()
-         # Why can't we call shared.eval() here? Leads to loss
-         # being uniformly zero for the controller.
-         # self.shared.eval()
-
-         avg_reward_base = None
-         baseline = None
-         adv_history = []
-         entropy_history = []
-         reward_history = []
-
-         hidden = self.shared.init_hidden(self.batch_size)
-         total_loss = 0
-         valid_idx = 0
-         for step in range(20):
-             # sample models
-             dags, log_probs, entropies = self.controller.sample(
-                 with_details=True)
-
-             # calculate reward
-             np_entropies = entropies.data.cpu().numpy()
-             # No gradients should be backpropagated to the
-             # shared model during controller training, obviously.
-             with _get_no_grad_ctx_mgr():
-                 rewards, hidden = self.get_reward(dags,
-                                                   np_entropies,
-                                                   hidden,
-                                                   valid_idx)
-
-             reward_history.extend(rewards)
-             entropy_history.extend(np_entropies)
-
-             # moving average baseline
-             if baseline is None:
-                 baseline = rewards
-             else:
-                 decay = 0.95
-                 baseline = decay * baseline + (1 - decay) * rewards
-
-             adv = rewards - baseline
-             adv_history.extend(adv)
-
-             # policy loss
-             loss = -log_probs * utils.get_variable(adv,
-                                                    self.use_cuda,
-                                                    requires_grad=False)
-
-             loss = loss.sum()  # or loss.mean()
-
-             # update
-             self.controller_optim.zero_grad()
-             loss.backward()
-
-             self.controller_optim.step()
-
-             total_loss += utils.to_item(loss.data)
-
-             if ((step % 50) == 0) and (step > 0):
-                 reward_history, adv_history, entropy_history = [], [], []
-                 total_loss = 0
-
-             self.controller_step += 1
-             # prev_valid_idx = valid_idx
-             # valid_idx = ((valid_idx + self.max_length) %
-             #              (self.valid_data.size(0) - 1))
-             # # Whenever we wrap around to the beginning of the
-             # # validation data, we reset the hidden states.
-             # if prev_valid_idx > valid_idx:
-             #     hidden = self.shared.init_hidden(self.batch_size)
-
-     def derive(self, sample_num=10, valid_idx=0):
-         """We are always deriving based on the very first batch
-         of validation data? This seems wrong...
-         """
-         hidden = self.shared.init_hidden(self.batch_size)
-
-         dags, _, entropies = self.controller.sample(sample_num,
-                                                     with_details=True)
-
-         max_R = 0
-         best_dag = None
-         for dag in dags:
-             R, _ = self.get_reward(dag, entropies, hidden, valid_idx)
-             if R.max() > max_R:
-                 max_R = R.max()
-                 best_dag = dag
-
-         self.model.setDAG(best_dag)
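For reference, the controller update above is plain REINFORCE with a moving-average baseline (decay 0.95). A self-contained toy version of that step, with random stand-ins for the controller's log-probabilities and the validation reward:

    import torch

    log_probs = torch.randn(4, requires_grad=True)  # stand-in for controller log-probs
    baseline, decay = None, 0.95

    for step in range(3):                     # a few REINFORCE steps
        rewards = torch.rand(4)               # stand-in for get_reward() output
        if baseline is None:
            baseline = rewards
        else:
            baseline = decay * baseline + (1 - decay) * rewards
        adv = rewards - baseline              # advantage vs. the moving-average baseline
        loss = (-log_probs * adv).sum()       # policy-gradient loss; adv carries no gradient
        loss.backward()                       # the real code then calls controller_optim.step()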