=Word2Vec=

==Intro==
Word2Vec can guess a word’s association with other words, or cluster documents and classify them by topic. It turns qualities into quantities: similar words and ideas are shown to be “close” to one another in its 500-dimensional vector space.

Word2Vec is not classified as "deep learning" because it is only a two-layer neural net.

===Examples===

Rome - Italy = Beijing - China, so Rome - Italy + China = Beijing

king : queen :: man : woman

house : roof :: castle : [dome, bell_tower, spire, crenellations, turrets]

China : Taiwan :: Russia : [Ukraine, Moscow, Moldova, Armenia]
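
These analogies fall out of simple vector arithmetic on the learned word vectors. As a rough illustration only (this page does not name a specific library; the gensim library and the file name "vectors.bin" below are assumptions, standing in for any pretrained word2vec-format embeddings):

<code python>
# Hedged sketch: assumes gensim is installed and "vectors.bin" is a
# pretrained word2vec-format embedding file already on disk.
from gensim.models import KeyedVectors

wv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

# Rome - Italy + China ~= Beijing
print(wv.most_similar(positive=["Rome", "China"], negative=["Italy"], topn=3))

# king : queen :: man : woman  ->  king - man + woman ~= queen
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
</code>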

==Notation==

knee : leg :: elbow : arm

Input -> text corpus
Output -> set of vectors (neural word embeddings)

More research: Cosine similarity, dot product equation
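
Cosine similarity is the usual way to measure how "close" two word vectors are: the dot product of the vectors divided by the product of their lengths. A minimal sketch (the tiny three-dimensional vectors are made-up stand-ins, not real embeddings):

<code python>
import numpy as np

def cosine_similarity(a, b):
    # dot product divided by the product of the vector lengths
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 3-dimensional vectors; real Word2Vec embeddings have hundreds of dimensions.
v_knee = np.array([0.2, 0.7, 0.1])
v_leg  = np.array([0.25, 0.6, 0.2])

print(cosine_similarity(v_knee, v_leg))  # values near 1.0 mean "similar"
</code>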

==Models==
===Continuous bag of words (CBOW) model===
Uses the surrounding context to predict a target word. Trains faster.
===Skip-gram model===
Uses a word to predict its surrounding context. Produces more accurate results on large datasets.
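
Both models are typically exposed as a single switch in Word2Vec implementations. A hedged sketch using the gensim library (not mentioned on this page; the two-sentence corpus and parameter values are placeholders, and the vector_size argument name assumes gensim 4.x):

<code python>
from gensim.models import Word2Vec

# Placeholder corpus: a list of tokenized sentences.
sentences = [["the", "king", "rules", "the", "castle"],
             ["the", "queen", "rules", "the", "castle"]]

# sg=0 selects CBOW (context -> word), sg=1 selects skip-gram (word -> context).
cbow = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=0)
skip_gram = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)

print(cbow.wv.similarity("king", "queen"))
print(skip_gram.wv.most_similar("king", topn=3))
</code>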

“Just as Van Gogh’s painting of sunflowers is a two-dimensional mixture of oil on canvas that represents vegetable matter in a three-dimensional space in Paris in the late 1880s, so 500 numbers arranged in a vector can represent a word or group of words.”

Each word is a point in a 500-dimensional vector space.

A neural network with more than three layers (counting the input and output layers) qualifies as “deep” learning; in other words, “deep” means more than one hidden layer.

==Implementation==
Word2Vec can be implemented in DL4J or TensorFlow.

==Links==
*http://deeplearning4j.org/word2vec