Fine-Tuning TinyLlama with a Custom Medical Dataset for Beginners | HuggingFace | TinyLlama
In this hands-on tutorial, we'll dive into fine-tuning TinyLlama for a medical-dialogue conversational AI application.
What you'll learn (each topic is illustrated with a short code sketch below)
- Setting up TinyLlama for medical domain adaptation
- Implementing fine-tuning with minimal computing resources
- Understanding Weights & Biases integration for training monitoring
- Working with practical batch sizes and gradient accumulation
- Real-world considerations and optimization strategies
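As a starting point for the setup step, here is a minimal sketch of loading TinyLlama from the HuggingFace Hub. The checkpoint name and half-precision settings are assumptions suited to a resource-constrained GPU; the video's exact configuration may differ.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint: the 1.1B chat variant on the HuggingFace Hub.
model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # TinyLlama ships without a pad token

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit a single consumer GPU
    device_map="auto",          # let accelerate place layers automatically
)
```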
⭐️ Timeline ⭐️
00:00 : Step-by-step code walkthrough - HF TinyLlama 1.1B model
03:46 : HF TinyLlama and LoRA adaptation for a resource-conscious training approach
12:20 : Data preparation and fine-tuning
21:12 : Parameter optimization for better performance
23:21 : Save the model, push to HuggingFace, and test the fine-tuned model
26:51 : Conclusion
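The 03:46 chapter covers LoRA adaptation for resource-conscious training. A sketch using the PEFT library follows, continuing from the setup above; the rank, alpha, and target modules are illustrative choices, not values confirmed by the video.

```python
from peft import LoraConfig, get_peft_model

# Illustrative LoRA settings for a Llama-style model; rank, alpha, and
# target modules here are assumptions, not the video's confirmed values.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```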
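For the training-monitoring and small-batch goals, the usual pattern is Weights & Biases logging via `report_to` plus gradient accumulation in `TrainingArguments`. All hyperparameter values below are placeholder assumptions.

```python
import wandb
from transformers import TrainingArguments

# Project name is a placeholder assumption.
wandb.init(project="tinyllama-medical-finetune")

# A per-device batch of 2 with 8 accumulation steps gives an effective
# batch size of 16 without exceeding a small GPU's memory.
training_args = TrainingArguments(
    output_dir="tinyllama-medical",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    num_train_epochs=1,
    fp16=True,
    logging_steps=10,
    report_to="wandb",  # stream loss and LR curves to Weights & Biases
)
```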
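The data-preparation chapter (12:20) formats the medical dialogues for causal-LM training. A hedged sketch, assuming a local JSON file of question/answer pairs and reusing the tokenizer loaded earlier; the video's actual dataset and field names may differ.

```python
from datasets import load_dataset

# Assumption: a local JSON file of {"question": ..., "answer": ...} records.
dataset = load_dataset("json", data_files="medical_dialogues.json", split="train")

def format_example(example):
    # Fold each Q/A pair into TinyLlama's chat template as one text field.
    messages = [
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["answer"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(format_example, remove_columns=dataset.column_names)
```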
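The final chapter (23:21) saves the fine-tuned adapter, pushes it to the Hub, and tests it. A sketch assuming training has already run (e.g. with `Trainer` or trl's `SFTTrainer`; the video's choice isn't shown here); the repo id and prompt are placeholders.

```python
from transformers import pipeline

# Placeholder repo id; replace "your-username" with a real Hub account.
repo_id = "your-username/tinyllama-medical-lora"

model.save_pretrained("tinyllama-medical-lora")  # writes only the LoRA adapter
tokenizer.save_pretrained("tinyllama-medical-lora")
model.push_to_hub(repo_id)
tokenizer.push_to_hub(repo_id)

# Quick smoke test of the fine-tuned model.
chat = pipeline("text-generation", model=model, tokenizer=tokenizer)
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What are common symptoms of anemia?"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(chat(prompt, max_new_tokens=128)[0]["generated_text"])
```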
DeepCamp AI