LLM Fine-Tuning 19: Fine-Tune Any LLM with Axolotl 🔥 Low-Code YAML-Based Training (No Heavy Coding)

Sunny Savita · Beginner · 🧠 Large Language Models · 2mo ago
In this video, you’ll learn how to fine-tune any Large Language Model (LLM) using Axolotl with a low-code, YAML-based workflow, no heavy coding required. Axolotl is a powerful open-source framework built on top of Hugging Face that simplifies LLM fine-tuning, including LoRA/QLoRA, SFT, DPO, RLHF, and multimodal training, all driven by a single YAML configuration file. What you’ll learn in this video:
- What Axolotl is and why it’s better than plain Hugging Face training
- How YAML-based low-code fine-tuning works
- Fine-tuning LLMs using LoRA & QLoRA
- Training models like LLaMA, Mistral, Qwen, Mixtral
- Dataset…
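To make the YAML-driven workflow concrete, here is a rough sketch of what a QLoRA fine-tuning config for Axolotl can look like. The exact key names and the dataset path shown are illustrative assumptions based on Axolotl's documented config style, not taken from the video; check the Axolotl docs for the keys supported by your installed version.

```yaml
# Illustrative Axolotl QLoRA config (key names assumed from the docs;
# verify against your Axolotl version before training)
base_model: NousResearch/Llama-2-7b-hf

load_in_4bit: true        # 4-bit quantized base model (the "Q" in QLoRA)
adapter: qlora
lora_r: 16                # LoRA rank
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true  # attach adapters to all linear layers

datasets:
  - path: mhenrichsen/alpaca_2k_test   # example Hugging Face dataset
    type: alpaca                       # prompt format for SFT

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 0.0002
optimizer: paged_adamw_8bit
output_dir: ./outputs/qlora-llama2
```

With a config like this, training is launched from a single command rather than a training script, e.g. `axolotl train config.yml` (older versions use `accelerate launch -m axolotl.cli.train config.yml`), which is the low-code workflow the video walks through.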
Watch on YouTube ↗

Chapters (6)

Introduction & Agenda
3:22 What is Axolotl?
10:06 Why config-driven training matters in Axolotl
14:31 What Axolotl can do that Hugging Face alone cannot
28:38 Training methods supported by Axolotl and documentation overview
37:50 Difference between Core Hugging Face, Unsloth, LLaMA Factory, and Axolotl
Next Up
5 Levels of AI Agents - From Simple LLM Calls to Multi-Agent Systems
Dave Ebbelaar (LLM Eng)