Multi-Task Deep Neural Networks for Natural Language Understanding

Paper: https://arxiv.org/pdf/1901.11504.pdf

Key Points

Sir Isaac Newton famously said: if I have seen further, it is by standing on the shoulders of giants. That seems to hold just about everywhere; at the very least, if you stand on top of the "strongest" model, your own model may well become the new strongest one. This paper is exactly such an example.

The MT-DNN proposed here is an extension of the multi-task learning model the authors proposed back in 2015; the main change is using BERT to learn sentence (and sentence-pair) representations. The two figures below compare the model before and after the change.

As the figures show, the overall framework is essentially unchanged: both models consist of two parts, shared layers and task-specific layers. The former learns a general-purpose representation of the input sentence or sentence pair; the latter provides the multi-task learning support. Apart from swapping the letter-trigram input for a more modern Lexicon Encoder, almost every other architectural change follows from the adoption of BERT; even the Lexicon Encoder is, in part, there for BERT's sake.
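To make the shared-layers / task-specific-layers split concrete, here is a minimal PyTorch-style sketch. This is my own illustration, not the paper's released code: the class name, the choice of one linear head per task, and the use of HuggingFace's `BertModel` are all assumptions.

```python
import torch
import torch.nn as nn
from transformers import BertModel  # assumes a recent transformers version


class MTDNNSketch(nn.Module):
    """Illustrative sketch: a shared BERT encoder plus one small head per task."""

    def __init__(self, task_num_labels: dict, bert_name: str = "bert-base-uncased"):
        super().__init__()
        # Shared layers: pretrained BERT (Lexicon Encoder + Transformer Encoder).
        self.encoder = BertModel.from_pretrained(bert_name)
        hidden = self.encoder.config.hidden_size
        # Task-specific layers: here simply one linear classifier per task.
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden, n_labels)
            for task, n_labels in task_num_labels.items()
        })

    def forward(self, task, input_ids, attention_mask, token_type_ids=None):
        out = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask,
                           token_type_ids=token_type_ids)
        # As in BERT, the output at [CLS] (position 0) represents the
        # sentence or sentence pair and is fed to the task-specific head.
        cls_repr = out.last_hidden_state[:, 0]
        return self.heads[task](cls_repr)
```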

BERT itself was built for transfer learning, but it can only be fine-tuned on one task at a time; the MT-DNN proposed here can be fine-tuned on several tasks simultaneously. The GPT-2 paper raised the concern that "(supervised) training on a single task with single-domain data limits a model's generalization" and tried to address it with unsupervised language modeling. This paper takes a different route: it uses the constraints between multiple tasks to avoid overfitting to any single one, and thereby improves generalization. The intuition is easy to accept: a model that excels on one task may break down on others, whereas a model that performs well across many tasks has already demonstrated its versatility, and the chance of it failing on a new task is much lower (the phrase "task ensemble" pops into my head; take it for what it's worth).
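A rough sketch of what joint fine-tuning over several tasks could look like, reusing the `MTDNNSketch` model above. The batch-mixing strategy (gather the mini-batches of every task, shuffle, then step through them) is an illustrative assumption about the training loop, as are the batch layout and the per-task loss functions; this is not the authors' implementation.

```python
import random


def multitask_finetune(model, task_loaders, loss_fns, optimizer, epochs=3):
    """Each step draws one task's mini-batch and updates shared + task-specific layers.

    task_loaders: dict mapping task name -> DataLoader yielding dict batches
    loss_fns:     dict mapping task name -> loss, e.g. nn.CrossEntropyLoss()
    """
    for _ in range(epochs):
        # Interleave all tasks within one epoch so the shared encoder never
        # overfits to a single task's data.
        steps = [(task, batch)
                 for task, loader in task_loaders.items()
                 for batch in loader]
        random.shuffle(steps)
        for task, batch in steps:
            logits = model(task,
                           batch["input_ids"],
                           batch["attention_mask"],
                           batch.get("token_type_ids"))
            loss = loss_fns[task](logits, batch["labels"])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```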

According to the paper, another advantage of multi-task learning (one that had not occurred to me) is that it simply supplies more training data than any single task does, and the amount of data is something supervised learning cannot afford to ignore.

As noted above, the Lexicon Encoder is, to some extent, there for BERT; in fact it is part of BERT. In figure MT-DNN2, Lexicon Encoder + Transformer Encoder = BERT. The paper takes the pragmatic route and directly uses HuggingFace's pretrained BERT. The input to the task-specific layers also follows the BERT paper exactly (the output at [CLS] serves as the sentence / sentence-pair representation). As a result, the pretraining step is skipped entirely and the model is fine-tuned directly on GLUE.
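For reference, obtaining the [CLS] representation of a sentence pair with the HuggingFace library looks roughly like this. The `bert-base-uncased` checkpoint and the example sentences are placeholders of my own choosing (the paper itself builds on BERT-large):

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# A sentence pair is packed as "[CLS] sentence1 [SEP] sentence2 [SEP]".
inputs = tokenizer("A man is playing guitar.", "Someone is making music.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The hidden state at position 0 ([CLS]) is the sentence-pair representation
# that a task-specific head would consume.
cls_repr = outputs.last_hidden_state[:, 0]
```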

GPT-2 swept the records on 7 of 8 language modeling tasks. The model in this paper breaks the records on 8 of 9 GLUE tasks, and misses the remaining one only because that dataset itself is problematic. The comparison between MT-DNN and BERT-large (fine-tuned separately on each task) is shown below (table header omitted).

MT-DNN's shared layers are exactly BERT-large, so its edge can be attributed entirely to multi-task learning. Another benefit multi-task learning brings is domain adaptation: with only a small amount of data, MT-DNN reaches good results on a new task, and in the extreme low-data regime (0.1% of the new task's training data) its accuracy is nearly double BERT's (80+% vs. 50+%).

Beyond GLUE, MT-DNN is also currently the SOTA model on SNLI and SciTail.

[Figures: MT-DNN1, MT-DNN2 (model architectures before and after BERT), MT-DNN2 on GLUE (comparison with BERT-large)]