Lightning Talk: From Pretrained To Personal: Privacy-First Fine-Tuning on AI PCs - Daniel Holanda Noronha & Iswarya Alex, AMD
PyTorch on AI PCs has crossed a threshold: local hardware can now support meaningful model fine-tuning, not just inference. This unlocks a new class of enterprise workflows where sensitive data never leaves the device, yet models can still be personalized and adapted using PyTorch.
In this session, we'll show how to design on-device fine-tuning pipelines for AI PCs, focusing on enterprise scenarios where privacy is non-negotiable: regulated healthcare data, government and public-sector workloads, financial services, and proprietary enterprise systems. We'll walk through key decisions, such as selecting efficient pre-trained models, and show how the right PyTorch optimizations enable effective personalization on large private datasets.
We'll also showcase practical fine-tuning techniques such as supervised fine-tuning (SFT), LoRA, and QLoRA, and show how mixed-precision training and correct use of training vs. evaluation modes make these approaches efficient and practical on AI PCs while preserving privacy. The result is a cloud-free, privacy-first fine-tuning blueprint that turns AI PCs into secure personalization engines for enterprise AI.
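To make the techniques above concrete, here is a minimal, hypothetical PyTorch sketch (not code from the talk) of the core LoRA idea combined with mixed-precision training and the train-vs-eval mode distinction: the pretrained weight is frozen, only small low-rank factors are trained, and the forward pass under training runs inside an autocast context. The `LoRALinear` class and its parameters are illustrative assumptions, not an official API.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update (LoRA).

    Effective weight: W + (alpha / r) * B @ A, where only A and B train.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # pretrained weight stays frozen
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors; B starts at zero so training begins at the pretrained model
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Tiny stand-in for one layer of a larger model
layer = LoRALinear(nn.Linear(64, 64), r=4)
opt = torch.optim.AdamW(
    [p for p in layer.parameters() if p.requires_grad], lr=1e-3
)

layer.train()  # training mode (enables dropout/batchnorm updates in real models)
x = torch.randn(8, 64)
# Mixed precision: bfloat16 autocast on CPU; use device_type="cuda" on GPU
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    loss = layer(x).pow(2).mean()
loss.backward()
opt.step()

layer.eval()  # evaluation mode for inference on the personalized model
with torch.no_grad():
    y = layer(x)
```

Because only the rank-`r` factors receive gradients, optimizer state and gradient memory shrink dramatically, which is what makes fine-tuning feasible on an AI PC; QLoRA pushes this further by also quantizing the frozen base weights.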