Retrieval Optimization: Tokenization to Vector Quantization

Coursera · Advanced · Free to audit · 🔍 RAG & Vector Search · 1mo ago
Skills: RAG Basics (90%)
In Retrieval Optimization: Tokenization to Vector Quantization, taught by Kacper Łukawski, Developer Relations Lead at Qdrant, you'll learn how tokenization works and how to optimize vector search in large-scale, customer-facing RAG applications. You'll explore the technical details of how vector search works and how to tune it for better performance. The course focuses on optimizing the first step of your RAG pipeline: retrieving the right results. You'll see how tokenization techniques such as Byte-Pair Encoding, WordPiece, and Unigram work and how they affect search relevancy, and you'll learn how to address common challenges such as terminology mismatches and truncated chunks in embedding models. To optimize your search, you need to be able to measure its quality, so you'll learn several quality metrics for this purpose. Most vector databases use Hierarchical Navigable Small Worlds (HNSW) for approximate nearest-neighbor search; you'll see how to balance the HNSW parameters for higher speed and maximum relevance. Finally, you'll use different vector quantization techniques to reduce memory usage and improve search speed.

What you'll do, in detail:

1. Learn about the internal workings of an embedding model and how your text is turned into vectors.
2. Understand how tokenizers such as Byte-Pair Encoding, WordPiece, Unigram, and SentencePiece are trained.
3. Explore common tokenizer challenges, such as unknown tokens, domain-specific identifiers, and numerical values, that can negatively affect your vector search.
4. Understand how to measure the quality of your search using several quality metrics.
5. Understand how the main parameters of the HNSW algorithm affect the relevance and speed of vector search, and how to adjust them for the best trade-off.
6. Experiment with the three major quantization methods (product, scalar, and binary) and learn how they impact memory requirements, search quality, and speed.

By the end of this course, you'll be able to apply these techniques to your own RAG and search applications. Short code sketches illustrating a few of these ideas follow below.
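To see how different tokenizers split the same text, here is a minimal sketch (not taken from the course materials) that assumes the Hugging Face transformers package; the example string is made up, and the two public tokenizers are downloaded on first run.

```python
# Illustrative sketch: comparing Byte-Pair Encoding and WordPiece tokenization.
# Assumes the `transformers` package is installed; tokenizer files download on first use.
from transformers import AutoTokenizer

text = "Order #A1297-XL was re-indexed by the HNSW builder"  # made-up example string

bpe = AutoTokenizer.from_pretrained("gpt2")                     # Byte-Pair Encoding
wordpiece = AutoTokenizer.from_pretrained("bert-base-uncased")  # WordPiece

print("BPE:      ", bpe.tokenize(text))
print("WordPiece:", wordpiece.tokenize(text))
# Identifiers and numbers are split into several sub-word pieces, which is one
# reason domain-specific terms can be poorly represented in the resulting
# embedding and hurt search relevancy.
```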
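One common way to measure search quality is precision@k: compare the approximate (ANN) results against exact brute-force results. Below is a minimal sketch with made-up function and variable names; the course may use other metrics as well.

```python
def precision_at_k(approx_ids, exact_ids, k=10):
    """Fraction of the top-k approximate neighbours that also appear
    among the top-k exact (ground-truth) neighbours."""
    return len(set(approx_ids[:k]) & set(exact_ids[:k])) / k

# Example: 8 of the 10 approximate hits match the exact ones -> 0.8
print(precision_at_k(list(range(10)), [0, 1, 2, 3, 4, 5, 6, 7, 20, 21], k=10))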
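Since the instructor works at Qdrant, here is a sketch, not taken from the course, of how HNSW parameters and scalar quantization are typically configured with the qdrant-client Python package. The collection name, vector size, and parameter values are illustrative, and the in-memory mode is only for trying the calls out; the performance effects show up against a real server.

```python
from qdrant_client import QdrantClient, models

client = QdrantClient(":memory:")  # local in-process mode, for illustration only

client.create_collection(
    collection_name="docs",  # illustrative name
    vectors_config=models.VectorParams(size=384, distance=models.Distance.COSINE),
    # m = graph connectivity, ef_construct = build-time search width:
    # larger values tend to raise recall at the cost of memory and indexing time.
    hnsw_config=models.HnswConfigDiff(m=16, ef_construct=200),
    # int8 scalar quantization stores vectors more compactly, trading a small
    # amount of precision for lower memory use and faster scans.
    quantization_config=models.ScalarQuantization(
        scalar=models.ScalarQuantizationConfig(
            type=models.ScalarType.INT8,
            always_ram=True,
        )
    ),
)

# At query time, hnsw_ef controls how many candidates the graph search keeps:
# higher values improve relevance, lower values improve speed.
hits = client.search(
    collection_name="docs",
    query_vector=[0.0] * 384,  # placeholder query vector
    limit=10,
    search_params=models.SearchParams(hnsw_ef=128),
)
```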

Related AI Lessons

RAG - Sliding Window, Token Based Chunking and PDF Chunking Packages
Learn about RAG chunking mechanisms, including Sliding Window, Token Based, and PDF Chunking, to improve your AI model's text processing capabilities
Dev.to AI
Ever Wondered How to Make Your RAG More Effective?
Improve your RAG effectiveness by connecting instead of searching
Medium · RAG
Why StarRocks Is Better Than Elasticsearch for RAG and AI-Powered Vector Search Analytics
Learn why StarRocks outperforms Elasticsearch for RAG and AI-powered vector search analytics, and how to apply this knowledge to improve your data architecture
Medium · LLM
Production RAG: Shipping a RAG System Into an Enterprise Product
Learn how to ship a RAG system into an enterprise product, overcoming operational realities and challenges beyond the demo stage
Medium · RAG