Embedding Models: From Architecture to Implementation



Free to audit · Opens on Coursera


Coursera · Intermediate · 🔍 RAG & Vector Search · 1 month ago
Skills: RAG Basics (90%)
Join our new short course, Embedding Models: From Architecture to Implementation! Learn from Ofer Mendelevitch, Head of Developer Relations at Vectara.

This course covers the architecture and capabilities of embedding models, which many AI applications use to capture the meaning of words and sentences. You will trace the evolution of embedding models from word embeddings to sentence embeddings, and build and train a simple dual encoder model. This hands-on approach will help you understand the technical concepts behind embedding models and how to use them effectively.

In detail, you'll:

1. Learn about word embedding, sentence embedding, and cross-encoder models, and how they are used in RAG.
2. Understand how transformer models, specifically BERT (Bidirectional Encoder Representations from Transformers), are trained and used in semantic search systems.
3. Follow the evolution of sentence embeddings and understand how the dual encoder architecture emerged.
4. Use a contrastive loss to train a dual encoder model, with one encoder trained for questions and another for the responses.
5. Use separate encoders for questions and answers in a RAG pipeline, and compare the retrieval results against a single-encoder model.

By the end of this course, you will understand word, sentence, and cross-encoder embedding models, and how transformer-based models like BERT are trained and used in semantic search. You will also learn how to train dual encoder models with contrastive loss and evaluate their impact on retrieval in a RAG pipeline.
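To give a flavor of the training objective mentioned above, here is a minimal sketch of an in-batch contrastive loss for a dual encoder, in plain Python. This is not the course's actual code: it assumes the question and answer encoders have already produced embedding vectors, and that each question `q_embs[i]` is paired with its matching answer `a_embs[i]`, with every other answer in the batch treated as a negative.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cosine(u, v):
    # cosine similarity between two embedding vectors
    return dot(u, v) / (math.sqrt(dot(u, u)) * math.sqrt(dot(v, v)))

def contrastive_loss(q_embs, a_embs, temperature=0.05):
    """In-batch contrastive loss: q_embs[i] pairs with a_embs[i];
    all other answers in the batch serve as negatives."""
    losses = []
    for i, q in enumerate(q_embs):
        # similarity of question i to every answer in the batch
        sims = [cosine(q, a) / temperature for a in a_embs]
        # softmax cross-entropy with the matching answer as the positive
        m = max(sims)  # subtract max for numerical stability
        log_z = m + math.log(sum(math.exp(s - m) for s in sims))
        losses.append(log_z - sims[i])
    return sum(losses) / len(losses)
```

During training, this loss pushes each question embedding toward its paired answer and away from the other answers in the batch; with correctly aligned pairs the loss is low, and shuffling the answers makes it rise.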
Watch on Coursera ↗

Related AI Lessons

Most Companies Doing GenAI Are Really Just Doing RAG: RAGOps Explained for analysts
Learn why RAGOps is becoming the preferred approach for GenAI projects and how it differs from agent-based approaches
Medium · RAG
RAG - Sliding Window, Token Based Chunking and PDF Chunking Packages
Learn about RAG chunking mechanisms, including Sliding Window, Token Based, and PDF Chunking, to improve your AI model's text processing capabilities
Dev.to AI
Ever Wondered How to Make Your RAG More Effective?
Improve your RAG effectiveness by connecting instead of searching
Medium · RAG
Why StarRocks Is Better Than Elasticsearch for RAG and AI-Powered Vector Search Analytics
Learn why StarRocks outperforms Elasticsearch for RAG and AI-powered vector search analytics, and how to apply this knowledge to improve your data architecture
Medium · LLM