DeepCamp
📄 Foundations · Research Papers Explained
The latest AI papers broken down: attention, RLHF, diffusion, MoE, and more
8 videos
Mixture of Experts (MoE), Visually Explained
Jia-Bin Huang · Advanced · 31:46 · 1mo ago

Mixture of Experts (MoE) Introduction
Vizuara · Beginner · 29:59 · 11mo ago

Reinforcement Learning from Human Feedback explained with math derivations and the PyTorch code.
Umar Jamil · Beginner · 2:15:13 · 2y ago

Mistral / Mixtral Explained: Sliding Window Attention, Sparse Mixture of Experts, Rolling Buffer
Umar Jamil · Beginner · 1:26:21 · 2y ago

Fine-Tune Mixtral 8x7B (Mistral's Mixture of Experts MoE) Model - Walkthrough Guide
Brev · Beginner · 23:12 · 2y ago

Mixtral 8x7B DESTROYS Other Models (MoE = AGI?)
Matthew Berman · Advanced · 20:50 · 2y ago

RLHF - Reinforcement Learning from Human Feedback
West Coast Machine Learning · Beginner · 56:30 · 2y ago

Research Paper Deep Dive - The Sparsely-Gated Mixture-of-Experts (MoE)
650 AI Lab · Advanced · 22:39 · 3y ago