Transforming Auto-encoders

TL;DR

This paper is the "previous life" of CapsNet: the basic philosophy of capsules all comes from here. Starting from the drawbacks of CNNs, it borrows the computer-graphics (CG) notions of viewpoint and coordinate frame and, building on invariance and equivariance, proposes the concept of the capsule, laying the groundwork (six years in advance) for the later CapsNet.

Key Points

  • Points out the drawback of CNNs: after repeated subsampling, high-level features become uncertain about their own pose (e.g. which way a nose is pointing), which makes computing precise spatial relationships impossible.

  • Proposes the concept of the capsule: a group of neurons. Instead of outputting a scalar obtained by pooling, a capsule performs a relatively richer computation on its input and represents the result as a vector (a process vividly described as "encapsulating"), thereby preserving much more information.

  • A capsule learns the features of a visual entity in the scene and outputs both the probability that the entity is present and the entity's instantiation parameters (the figure at the end of this note makes it much clearer what instantiation parameters are).

  • When a capsule is trained well, the probability that its visual entity appears in the scene is locally invariant: no matter how the entity moves within a local region, its presence does not change. Put plainly, when I look at your face, the face does not vanish just because you move.

  • The instantiation parameters, by contrast, are equivariant. Since they correspond to the entity's representation in the scene, a change in viewing conditions (e.g. the angle) or a movement of the entity changes that representation. For example, when you turn from a frontal view to a profile, the position and shape of your proud nose are no longer what they were.

  • A major advantage of having capsules output instantiation parameters is that recognizing a whole by recognizing its parts becomes much easier. The pose a capsule learns for its entity, stored as a vector, is linearly related to the natural representation of that pose in CG. So when two active capsules A and B stand in the correct spatial relationship, activating a higher-level capsule C is straightforward. Put plainly, once the eyes and the nose are detected and their spatial relationship is the one found on a normal face, the face has effectively been recognized.

  • This can be summarized as: knowledge of part-whole relationships is viewpoint-invariant (it does not change as the viewpoint changes), while knowledge of the instantiation parameters of the observed object and its parts is viewpoint-equivariant (it changes along with the viewpoint). The former is stored in weight matrices, the latter in neural activities. Concretely, when the face turns, i.e. the viewpoint changes, multiplying the capsules that represent the eyes or the nose (each holding a vector) by the same matrix (a coordinate-frame change) yields the new eye or nose capsules; that shared matrix is where the face's viewpoint invariance lives. Viewpoint equivariance was already mentioned; the neural activities are simply the changing vectors inside the capsules (a worked sketch of this follows right after this list).

  • In this paper a capsule consists of recognition units and generation units. The recognition units act as a hidden layer that computes the capsule's output vector, one dedicated component of which is the probability that the entity is present (this differs from later capsules); the generation units compute the capsule's contribution to the higher-level capsules (this is exactly what dynamic routing in CapsNet computes). A minimal code sketch of this structure is also given after the list.

  • One limitation of capsules is that a single capsule can only represent one set of instantiation parameters of its entity at a time.

  • The figure below shows the network architecture used in the paper's experiments. Each rounded rectangle in the middle is a capsule; within a capsule, the red rectangles at the bottom are the recognition units and the green rectangles at the top are the generation units.
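
As a worked sketch of the invariance/equivariance split (my own notation, not taken from the paper), suppose poses are homogeneous transformation matrices: $T_{\text{face}}$ is the face's pose in the image (a neural activity) and $T_{\text{eye}|\text{face}}$ is the eye's pose relative to the face (stored in the weights). Then

$$
T_{\text{eye}} = T_{\text{face}}\,T_{\text{eye}|\text{face}},
\qquad
T'_{\text{eye}} = \big(V\,T_{\text{face}}\big)\,T_{\text{eye}|\text{face}}.
$$

A viewpoint change $V$ alters the activities $T_{\text{face}}$ and $T_{\text{eye}}$ (equivariance), while the part-whole relation $T_{\text{eye}|\text{face}}$ held in the weights stays exactly the same (invariance).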

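The sketch below is a minimal reading of that architecture for the pure-translation case, written in NumPy. It is illustrative only, not the authors' code, and every size and name in it is my own assumption. Each capsule's recognition units look at the input image and output x, y and a presence probability p; the externally supplied shift (dx, dy) is added to (x, y); the generation units then render that capsule's contribution to the shifted image, gated by p. Training would make the summed contributions match the shifted image.

```python
# Illustrative sketch of one forward pass of a transforming auto-encoder
# for 2-D translations (not the authors' implementation).
import numpy as np

rng = np.random.default_rng(0)
D = 28 * 28      # flattened image size (assumed MNIST-like)
H_REC = 10       # recognition units per capsule (assumed)
H_GEN = 20       # generation units per capsule (assumed)
N_CAPS = 30      # number of capsules (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Capsule:
    """Recognition units -> (x, y, p); generation units -> contribution to the output image."""
    def __init__(self):
        self.W_rec = rng.normal(0, 0.01, (D, H_REC))   # image -> recognition units
        self.W_xyp = rng.normal(0, 0.01, (H_REC, 3))   # recognition units -> x, y, presence logit
        self.W_gen = rng.normal(0, 0.01, (2, H_GEN))   # shifted (x, y) -> generation units
        self.W_out = rng.normal(0, 0.01, (H_GEN, D))   # generation units -> output image

    def forward(self, image, dx, dy):
        r = sigmoid(image @ self.W_rec)                # recognition units (hidden layer)
        x, y, p_logit = r @ self.W_xyp                 # instantiation params + presence logit
        p = sigmoid(p_logit)                           # probability the entity is present
        g = sigmoid(np.array([x + dx, y + dy]) @ self.W_gen)  # generation units see the shifted pose
        return p * (g @ self.W_out)                    # contribution gated by presence

capsules = [Capsule() for _ in range(N_CAPS)]
image = rng.random(D)                                  # stand-in input image
dx, dy = 2.0, -1.0                                     # the shift that is told to the network
prediction = sum(c.forward(image, dx, dy) for c in capsules)
# Training (omitted here) would backpropagate || prediction - shifted(image, dx, dy) ||^2.
```
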
Notes/Questions

  • This paper brings ideas from CG into neural networks, and the logic actually holds together remarkably well. It is hardly the first time Hinton has imported knowledge from another field into neural networks. The lesson: knowledge is interconnected; when searching within a field gets you nowhere, looking outside it may bring unexpected rewards.

  • I only skimmed the experimental section while reading, and I have to say it is very hard to follow. The ideas themselves are laid out cleanly, but the experiments honestly left me lost.

  • I turned to this paper only after reading two CapsNet papers. Many concepts that CapsNet does not explain clearly are answered here, above all the philosophy behind proposing CapsNet. For example, CapsNet mentions inverse graphics, which leaves readers without a CG background (like me) baffled; with this paper as background, together with the figures below, it becomes clear immediately.

[Figure: transforming_ae.png — the transforming auto-encoder architecture used in the experiments]
[Figure: cg_rendering.png — rendering in computer graphics]
[Figure: cg_inverse_rendering.png — inverse rendering (inverse graphics)]