Prompt Engineering Myths Everyone Still Believes

LearnThatStack · Beginner · ✍️ Prompt Engineering · 3mo ago
Most prompt engineering advice is cargo cult science. Here's what research actually says about chain-of-thought, few-shot learning, personas, and "magic phrases."

You've seen the tips everywhere: "Think step by step." "You are an expert." "Here are 10 examples." But when researchers actually tested these techniques, the results diverged sharply from the popular advice. Chain-of-thought can drop accuracy by up to 36%. Role prompts showed zero accuracy improvement across 162 personas tested. And the "optimal" prompt phrases are model-specific; there is no universal magic. In this video, I break down four popular prompting myths using research from Google, DeepMind, and other leading AI labs.

TIMESTAMPS:
0:00 - What is Cargo Cult Prompting?
0:44 - Myth 1: Chain-of-Thought Always Helps
1:57 - Myth 2: Few-Shot Labels Teach the Model
3:10 - Myth 3: Personas Make AI Smarter
4:02 - Myth 4: Magic Phrases Work Everywhere
4:50 - What Actually Works

More videos:
Software Engineering Basics - https://www.youtube.com/playlist?list=PLWP-VtjCVpWyLNBm3zz_sGyC5mVwiAOvj
Software Design - https://www.youtube.com/playlist?list=PLWP-VtjCVpWx7kPq30XRN6O6LjVQ4VL95

Sources:
[P1] Kojima et al. (2022), Zero-shot CoT: https://arxiv.org/abs/2205.11916
[P2] Wei et al. (2022), Chain-of-Thought Prompting: https://arxiv.org/abs/2201.11903
[P3] Turpin et al. (2023), Unfaithful CoT (up to 36% accuracy drop): https://arxiv.org/abs/2305.04388
[F1] Min et al. (2022), Rethinking the Role of Demonstrations (random labels, sometimes only a small drop): https://arxiv.org/abs/2202.12837
[F2] Kossen et al. (2023), In-Context Learning Learns Label Relationships (large drops possible): https://arxiv.org/abs/2307.12375
[F3] Lu et al. (2022), Example-order sensitivity: https://arxiv.org/abs/2104.08786
[F4] Zhao et al. (2021), Calibrate Before Use: https://arxiv.org/abs/2102.09690
[S1] Zheng et al. (2023), Personas don't improve accuracy: https://arxiv.org/abs/2311.10054
[O1] Yang et al. (2023), OPRO (LLMs as Optimizers; 34% → 80.2%, setup-specific): https
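The practical takeaway from the video is to measure prompt variants rather than assume a trigger phrase helps. A minimal sketch of that idea: building the same question as a direct prompt and as a zero-shot chain-of-thought prompt (the "Let's think step by step" trigger from Kojima et al., 2022). The function names and the example question here are illustrative, not from the video; wiring the prompts to an actual model and scoring answers is left to whatever API you use.

```python
# Sketch: the same question phrased two ways, so the two variants can be
# A/B-tested against a model of your choice. Whether the CoT trigger helps
# is model- and task-dependent, so treat this as an experiment scaffold.

QUESTION = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

def direct_prompt(question: str) -> str:
    """Plain prompt: just ask for the answer."""
    return f"Q: {question}\nA:"

def zero_shot_cot_prompt(question: str) -> str:
    """Append the zero-shot CoT trigger phrase (Kojima et al., 2022)."""
    return f"Q: {question}\nA: Let's think step by step."

variants = {
    "direct": direct_prompt(QUESTION),
    "cot": zero_shot_cot_prompt(QUESTION),
}

# Send each variant to your model and score the answers on a labeled set;
# the point of the video is to measure the difference, not assume it.
for name, prompt in variants.items():
    print(f"--- {name} ---\n{prompt}\n")
```

The only difference between the two prompts is the trailing trigger phrase, which keeps the comparison controlled: any accuracy gap you measure is attributable to the trigger, not to other wording changes.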
