
What is: Contextualized Word Vectors?

Source: Learned in Translation: Contextualized Word Vectors
Year: 2017
Data Source: CC BY-SA - https://paperswithcode.com

CoVe, or Contextualized Word Vectors, uses a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation to contextualize word vectors. CoVe word embeddings are therefore a function of the entire input sequence. These word embeddings can then be used in downstream tasks by concatenating them with GloVe embeddings:

v = \left[\text{GloVe}(x),\; \text{CoVe}(x)\right]

and then feeding these in as features for the task-specific models.
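A minimal sketch of this pipeline is shown below in PyTorch. It is not the authors' implementation: the embedding table and two-layer bidirectional LSTM stand in for pretrained GloVe vectors and the pretrained MT-LSTM encoder, and all names and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

vocab_size, glove_dim, hidden_dim = 10000, 300, 300

# Stand-in for a GloVe lookup table (in practice, loaded from pretrained vectors).
glove = nn.Embedding(vocab_size, glove_dim)

# Stand-in for the MT-LSTM: in CoVe this is the bidirectional LSTM encoder
# taken from an attentional seq2seq model trained for machine translation.
mt_lstm = nn.LSTM(glove_dim, hidden_dim, num_layers=2,
                  bidirectional=True, batch_first=True)

def cove_features(token_ids: torch.Tensor) -> torch.Tensor:
    """Return v = [GloVe(x), CoVe(x)] for a batch of token-id sequences."""
    glove_x = glove(token_ids)                    # (batch, seq_len, 300)
    cove_x, _ = mt_lstm(glove_x)                  # (batch, seq_len, 600), contextualized
    return torch.cat([glove_x, cove_x], dim=-1)   # (batch, seq_len, 900)

# Example usage: a batch of two sequences of five token ids.
tokens = torch.randint(0, vocab_size, (2, 5))
v = cove_features(tokens)
print(v.shape)  # torch.Size([2, 5, 900])
```

The concatenated vectors `v` are then fed as input features to the task-specific model, exactly as the equation above describes.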