iamkissg
  • PaperHighlights
  • 2019
    • 03
    • 02
      • Improving Word Embedding Compositionality using Lexicographic Definitions
      • From Word Embeddings To Document Distances
      • Progressive Growing of GANs for Improved Quality, Stability, and Variation
      • Retrofitting Word Vectors to Semantic Lexicons
      • Bag of Tricks for Image Classification with Convolutional Neural Networks
      • Multi-Task Deep Neural Networks for Natural Language Understanding
      • Snapshot Ensembles: Train 1, get M for free
      • EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks
      • Counter-fitting Word Vectors to Linguistic Constraints
      • AdaScale: Towards Real-time Video Object Detection Using Adaptive Scaling
      • Learning semantic similarity in a continuous space
      • Progressive Neural Networks
      • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
      • Language Models are Unsupervised Multitask Learners
    • 01
  • 2018
  • 2017
  • Paper Title as Note Title
2019

02


Last updated 5 years ago