20171218-20171224 Paper Notes

A reader commented that earlier posts felt too heavily translated, and that some terms were actually harder to recognize once rendered into Chinese. Starting with this post, each term is therefore given bilingually the first time it appears, and the original English is used from then on.

TextRank: Bringing Order into Texts

Paper link: https://web.eecs.umich.edu/~mihalcea/papers/mihalcea.emnlp04.pdf

This paper proposes a graph-based ranking model for text processing.

A graph-based ranking algorithm decides the importance of a vertex (V) in a graph (G) by taking into account global information from the entire graph, rather than local information specific to that vertex.

The basic idea behind the graph-based ranking model is voting (or recommendation). When one vertex links to another, it casts a vote for that vertex. The more votes a vertex receives, the more important it is; conversely, the more important a vertex is, the more its own votes are worth. In short, a vertex's importance depends both on how many votes it gets and on how important the voters are. (A chicken-and-egg positive feedback loop.)

The paper adopts the scoring method proposed in the PageRank paper to compute vertex scores:
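That is, for a vertex $V_i$:

$$S(V_i) = (1 - d) + d \sum_{V_j \in In(V_i)} \frac{1}{\left|Out(V_j)\right|}\, S(V_j)$$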

Here In(Vi) is the set of vertices pointing to Vi, and Out(Vi) is the set of vertices Vi points to (for a directed graph). d is the damping factor, which models the probability of jumping from one vertex to another. The paper cites the PageRank paper here: in the context of Web surfing, d is the probability that the user clicks a random link, and 1 − d is the probability of jumping to a completely new page. d is usually set to 0.85.

After the graph-based ranking algorithm finishes, each vertex's score indicates its importance in the graph. The algorithm is considered converged (finished) when the error rate of every vertex falls below a given threshold. Since the true score of a vertex is unknown, the paper approximates the error rate by the difference between the scores of two successive iterations, $S^{k+1}(V_i) - S^{k}(V_i)$.

When the recursive graph-based ranking algorithm is applied to an undirected graph, a vertex's out-degree equals its in-degree and the number of edges is proportional to the number of vertices, which makes the convergence curve of an undirected graph more gradual.

When the graph-based ranking algorithm is applied to a weighted graph, the vertex score formula is rewritten as:
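With $w_{jk}$ denoting the weight of the edge from $V_j$ to $V_k$:

$$WS(V_i) = (1 - d) + d \sum_{V_j \in In(V_i)} \frac{w_{ji}}{\sum_{V_k \in Out(V_j)} w_{jk}}\, WS(V_j)$$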

Treating a text as a graph and running the graph-based ranking algorithm on it yields the TextRank algorithm. Its general procedure is as follows (a small Python sketch of the scoring loop follows the list):

  1. Identify the text units that best define the task at hand (the words, phrases, etc. that can represent the text), and add them as vertices in the graph;

  2. Identify the relations that connect these text units, and use those relations to draw edges between vertices in the graph. Edges can be directed or undirected, weighted or unweighted;

  3. Iterate the graph-based ranking algorithm until convergence;

  4. Sort the vertices by their final score, and use the values attached to each vertex for ranking/selection decisions.
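To make the iteration concrete, here is a minimal Python sketch of the scoring loop for an undirected, unweighted graph stored as an adjacency dict (the function and variable names are mine, not from the paper):

```python
def textrank_scores(graph, d=0.85, threshold=1e-4, max_iter=20):
    """graph: dict mapping each vertex to the set of its neighbours."""
    scores = {v: 1.0 for v in graph}                 # arbitrary initial scores
    for _ in range(max_iter):
        new_scores = {}
        for v in graph:
            # S(V_i) = (1 - d) + d * sum_j S(V_j) / degree(V_j)
            rank_sum = sum(scores[u] / len(graph[u]) for u in graph[v] if graph[u])
            new_scores[v] = (1 - d) + d * rank_sum
        # error rate approximated by the largest change between two iterations
        converged = max(abs(new_scores[v] - scores[v]) for v in graph) < threshold
        scores = new_scores
        if converged:
            break
    return scores
```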

The paper demonstrates TextRank's effectiveness with two practical applications.

  1. Keyword extraction: automatically extracting keywords from a text (think of generating a paper's keyword list automatically). (A paper covered later in these notes uses exactly this method.)

    • Here, co-occurrence is used to relate words: given a window of size N, two words that fall into the same window are considered related, and an edge is added between their vertices (see the short sketch at the end of this section).

    • When selecting vertices, a syntactic filter is used to drop uninformative words. In addition, the paper only uses single words as vertices; all multi-word phrases are split apart. (Maintaining a very large graph has a cost.)

    • Initial vertex scores can be set arbitrarily, e.g. simply to 1; a maximum number of iterations can be set, e.g. 20; the convergence threshold depends on the number of vertices and the initial values, e.g. 0.0001; how many keywords to extract depends on the text, with fewer keywords for shorter texts;

    • The paper also mentions a post-processing step: if extracted keywords appear next to each other in the text, they are merged into a single phrase.

    • Experimental findings:

      1. The larger the window, the lower the precision, so the window should not be too large;

      2. The best results are obtained when the filter keeps only nouns and adjectives; linguistic information helps keyword extraction.

  2. Sentence extraction: automatically extracting summary sentences from a text.

    • Co-occurrence no longer applies to sentence extraction, so the paper defines a different relation, a similarity relation, where similarity is measured by how much the content of two sentences overlaps (formula below).

    • The paper frames this relation as a recommendation process: a sentence that addresses a certain idea recommends to the reader other sentences that address the same idea.
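The sentence similarity used in the paper is the content overlap of the two sentences, normalized by their lengths:

$$\mathrm{Similarity}(S_i, S_j) = \frac{\left|\{\, w_k \mid w_k \in S_i \wedge w_k \in S_j \,\}\right|}{\log\left(\left|S_i\right|\right) + \log\left(\left|S_j\right|\right)}$$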

To repeat the key point: one reason TextRank works is that it uses global information from the entire document, rather than just the local context of a single text unit.
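For the keyword-extraction setup above, here is a minimal sketch of building the window-based co-occurrence graph and ranking words with the scoring loop shown earlier (the names are mine; candidate filtering, e.g. keeping only nouns and adjectives, is assumed to have happened already):

```python
from collections import defaultdict

def cooccurrence_graph(tokens, window=2):
    """Connect two words whenever they appear within `window` tokens of each other."""
    graph = defaultdict(set)
    for i, w in enumerate(tokens):
        for u in tokens[i + 1 : i + window]:
            if u != w:
                graph[w].add(u)
                graph[u].add(w)  # undirected edge
    return graph

# scores = textrank_scores(cooccurrence_graph(tokens))
# keywords = sorted(scores, key=scores.get, reverse=True)[:10]
```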

Levenshtein distance

Some open-source TextRank implementations I have seen use the Levenshtein distance to relate two keywords, so I looked into it briefly. Quoting the usual definition: "In information theory, linguistics and computer science, the Levenshtein distance is a string metric for measuring the difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other."

The formula is as follows:
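The standard recursive definition (as given, e.g., on Wikipedia) is:

$$\operatorname{lev}_{a,b}(i, j) = \begin{cases} \max(i, j) & \text{if } \min(i, j) = 0, \\ \min \begin{cases} \operatorname{lev}_{a,b}(i-1, j) + 1 \\ \operatorname{lev}_{a,b}(i, j-1) + 1 \\ \operatorname{lev}_{a,b}(i-1, j-1) + \mathbb{1}_{(a_i \neq b_j)} \end{cases} & \text{otherwise.} \end{cases}$$

A straightforward dynamic-programming sketch in Python (my own, just to make the recurrence concrete):

```python
def levenshtein(a: str, b: str) -> int:
    # prev[j] holds lev(i-1, j); cur[j] holds lev(i, j)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```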

A Convolutional Neural Network for Modelling Sentences

As the title says, this paper proposes a CNN-based approach to modelling sentences, called the Dynamic Convolutional Neural Network (DCNN), built around dynamic k-max pooling. The network diagram in the paper gives a fairly intuitive picture of the architecture.

Sentence modelling (SM below) aims to analyse and represent the semantic content of a sentence. A sentence is composed of words (or characters), so SM represents a sentence based on the features of the frequent words or n-grams it contains.

In some cases, composition is defined by algebraic operations over word meaning vectors to produce sentence meaning vectors.

The proposed DCNN alternates 1-D convolutional layers with dynamic k-max pooling layers.

Dynamic k-max pooling is a generalization of ordinary max pooling: it returns the subsequence of the k maximum values rather than the single maximum value, and it is dynamic in that k is chosen according to the length of the input sequence.

Conv layers apply 1-D filters across each row of features in the sentence matrix (shape [embedding_dim x num_words] in the paper's convention).
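Concretely, for a filter m of length len_m sliding over one feature row s of length len_s, each output element is a dot product with an m-gram of that row:

$$c_j = \mathbf{m}^{\top}\, \mathbf{s}_{\,j - \text{len}_m + 1 \,:\, j}$$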

There are two types of 1-D convolution: the narrow type and the wide type. Narrow convolution requires len_s ≥ len_m; in the formula above, j ranges from len_m to len_s. Wide convolution imposes no constraint on the two lengths; j simply takes a wider range of values, from 1 to len_s + len_m − 1 (which requires zero-padding). The advantages of wide convolution over narrow convolution are that every weight in the filter is applied across the whole sentence, and that the output c is always valid and non-empty. (This paper uses wide convolution.)
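A quick numpy illustration of the two output lengths (np.convolve flips the filter, which does not matter for the point being made here):

```python
import numpy as np

s = np.array([1., 2., 3., 4., 5.])        # one feature row, len_s = 5
m = np.array([0.5, 1., 0.5])              # 1-D filter, len_m = 3

narrow = np.convolve(s, m, mode="valid")  # len_s - len_m + 1 = 3 values
wide   = np.convolve(s, m, mode="full")   # len_s + len_m - 1 = 7 values (zero-padded)

print(narrow.shape, wide.shape)           # (3,) (7,)
```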

The k-max pooling operation downsamples a sequence to its k largest values, preserving the order in which those values appear in the original sequence but not their exact positions.
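A tiny numpy sketch of order-preserving k-max pooling on a single row (my own helper, not code from the paper):

```python
import numpy as np

def k_max_pool(row, k):
    """Keep the k largest values of a 1-D array, in their original order."""
    idx = np.argsort(row)[-k:]     # positions of the k largest values
    return row[np.sort(idx)]       # re-sort the positions to preserve order

print(k_max_pool(np.array([1., 7., 3., 9., 2., 5.]), k=3))  # [7. 9. 5.]
```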

Dynamic k-max pooling is followed by a non-linear layer: a bias is added to each row of the pooled matrix (depending on your framework and coding conventions, it may be each column), and a non-linear function is then applied.

The wide convolution, dynamic k-max pooling and non-linear transformation above all operate within individual rows of the sentence matrix s, so different rows remain completely independent of each other until the top of the network. The paper therefore introduces an operation called folding: after a convolutional layer and before k-max pooling, every two adjacent rows are summed, which mitigates the lack of interaction between rows.
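A minimal numpy sketch of folding, assuming the feature map has an even number of rows:

```python
import numpy as np

def fold(feature_map):
    """Sum every pair of adjacent rows: shape (d, s) -> (d // 2, s)."""
    return feature_map[0::2] + feature_map[1::2]

x = np.arange(12, dtype=float).reshape(4, 3)  # 4 feature rows, 3 columns
print(fold(x).shape)                          # (2, 3)
```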

Specifically, the 1-D convolution is a dot product between a weight vector m and the input vector s; m is the convolution filter and s is the sentence, with each element s_i corresponding to a word. The idea behind the 1-D convolution is to take the dot product of m with every m-gram of s, yielding another sequence c (the formula given earlier).

Dynamic k-max pooling sets k as a function of the length of the input sequence and the depth of the network; k_top is the k used at the last (topmost) convolutional layer. The paper's choice is:
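$$k_l = \max\!\left(k_{top},\ \left\lceil \frac{L - l}{L}\, s \right\rceil\right)$$

where l is the index of the current convolutional layer, L is the total number of convolutional layers, and s is the length of the input sentence.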
