Distilling LLMs with Datawizz and Fireworks AI
Use Datawizz to distill efficient small language models (SLMs) and deploy them to the Fireworks AI platform for fast, cost-effective inference.
This quick tutorial will cover:
- Connecting Datawizz as a proxy to collect LLM logs
- Fine-tuning a Llama 3.2 model on these logs
- Deploying the new model to a dedicated server on Fireworks AI
- Intelligently routing traffic between the new model and the original LLM
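The routing step above can be sketched as a simple rule-based router that sends prompts the distilled SLM was trained on to the cheap model and everything else to the original LLM. This is a minimal illustration, not Datawizz's actual routing logic; the model identifiers and the keyword heuristic are assumptions for the example.

```python
# Hypothetical model ids -- replace with your actual deployment names.
SLM_MODEL = "accounts/fireworks/models/llama-v3p2-3b-instruct"  # distilled student
LLM_MODEL = "original-teacher-llm"  # the large model you distilled from

def route(prompt: str, slm_topics: set[str]) -> str:
    """Pick the distilled SLM for prompts matching the topics it was
    fine-tuned on; fall back to the original LLM for everything else.
    A real router would use classifiers or confidence scores instead
    of keyword matching."""
    text = prompt.lower()
    if any(topic in text for topic in slm_topics):
        return SLM_MODEL
    return LLM_MODEL
```

For example, `route("Summarize this support ticket", {"summarize", "support"})` returns the SLM id, while an out-of-domain prompt falls back to the original LLM.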
Read more here: https://docs.datawizz.ai/models/model-deployment#fireworks-ai
Watch on YouTube ↗