DeepCamp
8 videos

📄 Research Papers Explained

The latest AI papers broken down — attention, RLHF, diffusion, MoE and more

Mixture of Experts (MoE), Visually Explained
31:46 · Jia-Bin Huang · Advanced · 1mo ago
Mixture of Experts (MoE) Introduction
29:59 · Vizuara · Beginner · 11mo ago
Reinforcement Learning from Human Feedback explained with math derivations and the PyTorch code.
2:15:13 · Umar Jamil · Beginner · 2y ago
Mistral / Mixtral Explained: Sliding Window Attention, Sparse Mixture of Experts, Rolling Buffer
1:26:21 · Umar Jamil · Beginner · 2y ago
Fine-Tune Mixtral 8x7B (Mistral's Mixture of Experts MoE) Model - Walkthrough Guide
23:12 · Brev · Beginner · 2y ago
Mixtral 8x7B DESTROYS Other Models (MoE = AGI?)
20:50 · Matthew Berman · Advanced · 2y ago
RLHF - Reinforcement Learning from Human Feedback
56:30 · West Coast Machine Learning · Beginner · 2y ago
Research Paper Deep Dive - The Sparsely-Gated Mixture-of-Experts (MoE)
22:39 · 650 AI Lab · Advanced · 3y ago

© 2026 DeepCamp — For the ones who figure it out.

A TechAssembly Ltd product — Created by Sam Iso
