Natural Language Processing with Probabilistic Models
Skills: ML Maths Basics
In Course 2 of the Natural Language Processing Specialization, you will:
a) Create a simple auto-correct algorithm using minimum edit distance and dynamic programming (first sketch below),
b) Apply the Viterbi algorithm for part-of-speech (POS) tagging, which is vital for computational linguistics (second sketch below),
c) Write a better auto-complete algorithm using an N-gram language model (third sketch below), and
d) Write your own Word2Vec model that uses a neural network to compute word embeddings with a continuous bag-of-words (CBOW) model (fourth sketch below).
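To make item (a) concrete, here is a minimal sketch of the dynamic-programming table for minimum edit distance. The cost values (insert 1, delete 1, substitute 2) are an assumption, one common convention among several; the course may use different costs.

```python
# Minimal minimum-edit-distance sketch using dynamic programming.
# Costs (insert=1, delete=1, substitute=2) are an assumed convention.

def min_edit_distance(source: str, target: str,
                      ins_cost: int = 1, del_cost: int = 1,
                      sub_cost: int = 2) -> int:
    m, n = len(source), len(target)
    # D[i][j] = cost of converting source[:i] into target[:j]
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = D[i - 1][0] + del_cost
    for j in range(1, n + 1):
        D[0][j] = D[0][j - 1] + ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = 0 if source[i - 1] == target[j - 1] else sub_cost
            D[i][j] = min(D[i - 1][j] + del_cost,      # delete
                          D[i][j - 1] + ins_cost,      # insert
                          D[i - 1][j - 1] + match)     # substitute or keep
    return D[m][n]

print(min_edit_distance("play", "stay"))  # 4: two substitutions at cost 2 each
```

An auto-correct system would compute this distance from a misspelled word to candidate dictionary words and suggest the nearest ones.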
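For item (b), this is a minimal Viterbi sketch over a hidden Markov model with two tags. The transition and emission tables are made-up toy numbers, purely for illustration; a real tagger estimates them from a labeled corpus.

```python
import numpy as np

# Toy HMM for POS tagging: two tags, invented probabilities.
tags = ["NN", "VB"]
trans = np.array([[0.6, 0.4],    # P(next tag | current = NN)
                  [0.7, 0.3]])   # P(next tag | current = VB)
start = np.array([0.8, 0.2])     # P(first tag)
emit = {"time":  np.array([0.7, 0.3]),   # P(word | tag)
        "flies": np.array([0.4, 0.6])}

def viterbi(words):
    n, k = len(words), len(tags)
    prob = np.zeros((n, k))              # best path probability ending in tag j
    back = np.zeros((n, k), dtype=int)   # backpointers for path recovery
    prob[0] = start * emit[words[0]]
    for t in range(1, n):
        for j in range(k):
            scores = prob[t - 1] * trans[:, j] * emit[words[t]][j]
            back[t, j] = scores.argmax()
            prob[t, j] = scores.max()
    # Backtrace from the most probable final state.
    path = [int(prob[-1].argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [tags[j] for j in reversed(path)]

print(viterbi(["time", "flies"]))
```

In practice the products are computed as sums of log probabilities to avoid numerical underflow on long sentences.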
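For item (c), here is a minimal bigram auto-complete sketch. The toy corpus is an assumption for illustration; a real N-gram model would add smoothing, start/end tokens, and higher-order contexts.

```python
from collections import Counter, defaultdict

# Toy corpus, assumed for illustration only.
corpus = "i like nlp i like dynamic programming i study nlp".split()

# Count how often each word follows each preceding word.
bigram_counts = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigram_counts[prev][word] += 1

def complete(prev_word: str) -> str:
    """Return the most frequent next word after prev_word."""
    following = bigram_counts[prev_word]
    if not following:
        return "<unk>"  # unseen context; real models back off or smooth
    return following.most_common(1)[0][0]

print(complete("i"))     # 'like' ('like' follows 'i' twice, 'study' once)
print(complete("like"))  # 'nlp'
```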
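Finally, for item (d), this is a minimal CBOW forward pass: average the embeddings of the context words, then score every vocabulary word with a softmax. The shapes follow the standard CBOW formulation, but the tiny vocabulary, embedding size, and random initialisation are assumptions; the training step (backpropagation through both weight matrices) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["i", "like", "nlp", "models"]   # assumed toy vocabulary
V, N = len(vocab), 3                     # vocab size, embedding dimension
W1 = rng.normal(size=(V, N)) * 0.1       # input embeddings, one row per word
W2 = rng.normal(size=(N, V)) * 0.1       # output weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cbow_forward(context_words):
    """Predict the center word from the mean of its context embeddings."""
    idx = [vocab.index(w) for w in context_words]
    h = W1[idx].mean(axis=0)             # hidden layer: averaged context vectors
    return softmax(h @ W2)               # probability distribution over vocab

probs = cbow_forward(["i", "nlp"])       # context surrounding the word 'like'
print(vocab[int(probs.argmax())], probs)
```

After training, the rows of W1 (or an average of W1 and W2) serve as the word embeddings.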
By the end of this Specialization, you will have designed NLP applications that perform question-answering and sentiment analysis, and created tools to translate languages and summarize text.
This Specialization is designed and taught by two experts in NLP, machine learning, and deep learning. Younes Bensouda Mourri is an Instructor of AI at Stanford University who also helped build the Deep Learning Specialization. Łukasz Kaiser is a Staff Research Scientist at Google Brain and a co-author of TensorFlow, the Tensor2Tensor and Trax libraries, and the Transformer paper.