How to Fine-Tune FLUX.1-dev and Compare It to a Fine-Tuned PixArt Model
Links + Notes: https://www.oxen.ai/blog/how-to-fine-tune-a-flux-1-dev-lora-with-code-step-by-step
Join Fine-Tune Fridays: https://oxen.ai/community
Discord: https://discord.com/invite/s3tBEn7Ptg
Use Oxen AI: https://oxen.ai/
Oxen.ai offers one-click fine-tuning, or we fine-tune models for you! Built on top of the world's best data versioning tool, we offer tools to automate model evals, generate synthetic data, and effortlessly fine-tune models.
--
Chapters
0:00 Welcome to Fine-Tuning FLUX.1-dev
0:49 The Problem with AI Toolkit
1:49 A bit about FLUX and Black Forest Labs
4:09 FLUX.1 Kontext
6:17 The Tasks
10:57 The Model
16:34 The Data: How much do you need and how to generate synthetic data
20:35 The Hardware
20:58 A walk through of the code
23:09 Downloading the weights
25:19 Loading the model
27:09 Adding a LoRA to the model
28:17 Loading the VAE and Text Encoders
36:01 The Core Fine-Tuning Loop
55:30 Results and Comparison to PixArt
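The "Adding a LoRA to the model" chapter can be illustrated with a minimal sketch of the LoRA idea itself: the base weight W stays frozen, and only a low-rank update B @ A (scaled by alpha / rank) is trained. This toy NumPy layer is an assumption-laden illustration, not the video's actual FLUX.1-dev code, which applies adapters to the transformer's attention layers.

```python
import numpy as np

# Toy LoRA layer: W is frozen, only A and B are trainable.
# Effective weight: W_eff = W + (alpha / rank) * B @ A
d_out, d_in, rank, alpha = 64, 64, 4, 8

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))        # frozen base weight
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable, rank x d_in
B = np.zeros((d_out, rank))               # trainable, zero-init so the
                                          # adapter starts as a no-op

def lora_forward(x):
    # Base path plus scaled low-rank path.
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B initialized to zero, output matches the frozen layer exactly.
assert np.allclose(lora_forward(x), W @ x)

full_params = W.size          # 4096 weights in the dense layer
lora_params = A.size + B.size # only 512 trainable LoRA weights
print(f"trainable LoRA params: {lora_params} vs full: {full_params}")
```

The point of the zero-init on B is that training starts from the unmodified base model, and the parameter count shows why LoRA fine-tuning fits on much smaller GPUs than full fine-tuning.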
DeepCamp AI