Summary: | "Information representation is a fundamental aspect of computational linguistics and learning from unstructured data. This course explores vector space models, how they're used to represent the meaning of words and documents, and how to create them using Python-based spaCy. You'll learn about several types of vector space models, how they relate to each other, and how to determine which model is best for natural language processing applications like information retrieval, indexing, and relevancy rankings. The course begins with a look at various encodings of sparse document-term matrices, moves on to dense vector representations that need to be learned, touches on latent semantic analysis, and finishes with an exploration of representation learning from neural network models with a focus on word2vec and Gensim. To get the most out of this course, learners should have intermediate level Python skills."--Resource description page