LLM Fine-Tuning 19: Fine-Tune Any LLM with Axolotl 🔥 Low-Code YAML-Based Training (No Heavy Coding)
In this video, you'll learn how to fine-tune ANY Large Language Model (LLM) using Axolotl with a low-code, YAML-based workflow: no heavy coding required.
Axolotl is a powerful open-source framework built on top of the Hugging Face ecosystem that simplifies LLM fine-tuning, including LoRA/QLoRA, supervised fine-tuning (SFT), DPO, RLHF, and multimodal training, all driven by a single YAML configuration file.
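As a rough sketch of what such a file looks like (the field names follow Axolotl's config schema, but the model ID, dataset, and hyperparameter values below are illustrative placeholders, not a recommended recipe):

```yaml
# Minimal LoRA fine-tuning config (illustrative values; adjust for your model and GPU)
base_model: meta-llama/Llama-3.1-8B   # any Hugging Face model ID
load_in_8bit: true                    # quantize the frozen base model

datasets:
  - path: tatsu-lab/alpaca            # local path or HF dataset ID
    type: alpaca                      # prompt format / dataset schema

adapter: lora                         # train LoRA adapters instead of full weights
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - v_proj

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 3
learning_rate: 0.0002
optimizer: adamw_torch
output_dir: ./outputs/lora-out
```

With a config like this in place, training reduces to a single command such as `axolotl train config.yml` (or `accelerate launch -m axolotl.cli.train config.yml` on older versions).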
What you'll learn in this video:
What Axolotl is and why itโs better than plain Hugging Face training
How YAML-based low-code fine-tuning works
Fine-tuning LLMs using LoRA & QLoRA
Training models like LLaMA, Mistral, Qwen, Mixtral
Dataset…
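To illustrate the LoRA vs. QLoRA point above: in Axolotl, switching between the two is a config change rather than a code change. A QLoRA run typically differs only in the adapter and quantization fields (a sketch, not an exhaustive config):

```yaml
# QLoRA fragment: 4-bit quantized base model + LoRA adapters (illustrative)
adapter: qlora
load_in_4bit: true            # quantize the frozen base model to 4-bit
load_in_8bit: false
optimizer: paged_adamw_8bit   # commonly paired with QLoRA to reduce memory
```

The rest of the config (datasets, LoRA rank, batch sizes, and so on) stays the same, which is the core of the low-code workflow the video demonstrates.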
Watch on YouTube →
Chapters (6)
1. Introduction & Agenda (3:22)
2. What is Axolotl? (10:06)
3. Why config-driven training matters in Axolotl (14:31)
4. What Axolotl can do that Hugging Face alone cannot (28:38)
5. Training methods supported by Axolotl and documentation overview (37:50)
6. Difference between Core Hugging Face, Unsloth, LLaMA Factory, and Axolotl
DeepCamp AI