Large Language Model Fine-Tuning with PEFT and LoRA (Practical Implementation)

AI Researcher · Intermediate · 📄 Research Papers Explained · 1y ago
This video explains how to fine-tune a large language model (e.g., Flan-T5-base) efficiently using PEFT (Parameter-Efficient Fine-Tuning) and LoRA (Low-Rank Adaptation). This approach reduces computational costs while maintaining high performance.

Code: https://github.com/manishasirsat/peft_llm_tuning
Dataset: https://huggingface.co/datasets/knkarthick/dialogsum
Papers:
1) Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., ... & Chen, W. (2021). LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685
2) Xu, L., Xie, H., Qin, S. Z. J., Tao, X., &…
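To see why LoRA cuts the cost of fine-tuning, compare trainable parameter counts: full fine-tuning updates every entry of a weight matrix, while LoRA trains only two small low-rank factors. A back-of-envelope sketch in plain Python (the hidden size d = 768 matches Flan-T5-base's attention projections; the rank r = 8 is an illustrative choice, not a value from the video):

```python
# Trainable-parameter comparison for one d_in x d_out weight matrix.
# Full fine-tuning updates all d_in * d_out entries; LoRA instead trains
# factors A (r x d_in) and B (d_out x r) of a low-rank update B @ A.

def full_params(d_in, d_out):
    # every weight in the matrix is trainable
    return d_in * d_out

def lora_params(d_in, d_out, r):
    # only the two low-rank factors are trainable
    return r * d_in + d_out * r

d = 768  # hidden size of Flan-T5-base attention projections
r = 8    # LoRA rank (illustrative small value)

full = full_params(d, d)     # 589824 weights per projection matrix
lora = lora_params(d, d, r)  # 12288 trainable weights at rank 8
print(full, lora, round(100 * lora / full, 2))  # prints 589824 12288 2.08
```

At rank 8, LoRA trains roughly 2% of the weights of each adapted projection, which is where the memory and compute savings come from.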

Chapters (3)

Intro
0:46 Why are LoRA and PEFT needed?
5:53 LLM fine-tuning: a practical demo
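The practical demo fine-tunes Flan-T5-base with Hugging Face's peft library. A minimal configuration sketch of attaching LoRA adapters; the hyperparameters (r=8, lora_alpha=32, q/v target modules, dropout) are common illustrative defaults, not values confirmed by the video:

```python
# Sketch: wrap Flan-T5-base with LoRA adapters via the peft library.
# Hyperparameter values below are illustrative assumptions.
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model, TaskType

model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=32,              # scaling factor for the update
    target_modules=["q", "v"],  # T5 attention query/value projections
    lora_dropout=0.05,
    bias="none",
    task_type=TaskType.SEQ_2_SEQ_LM,
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small trainable fraction
```

Training then proceeds with the usual Hugging Face Trainer loop; only the adapter weights receive gradient updates, so the saved checkpoint is a few megabytes rather than the full model.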