Basic Theory | Neural Style Transfer #2
❤️ Become The AI Epiphany Patreon ❤️ ► https://www.patreon.com/theaiepiphany
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
The second video in the neural style transfer series! 🎨
You'll learn about:
✔️ The basic theory behind how neural style transfer works
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
I hope the video gives you a strong basic intuition and understanding, but for those of you who want to take it further, here are some additional materials relevant to this video:
papers ►
✔️ (original NST paper, arXiv, old) https://arxiv.org/pdf/1508.06576.pdf
✔️ (original NST paper, CVPR, new) https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf
blogs/articles ►
✔️ History of style transfer (3 part blog series by Adobe's Aaron Hertzmann) https://research.adobe.com/news/image-stylization-history-and-future/
✔️ Nice overview of fast NST algorithms https://www.fritz.ai/style-transfer/
Note: It seems that YouTube's video transcoding process messed up the intro and outro NST clips - they look much nicer and higher quality on my machine.
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
⌚️ Timetable:
00:00 - intro & NST series overview
02:25 - what I want this series to be
03:30 - defining the task of NST
04:01 - 2 types of style transfer
04:43 - a glimpse of the image style transfer history
06:55 - explanation of the content representation
10:10 - explanation of the style representation
14:12 - putting it all together (animation)
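For those who want to see how the pieces from the chapters above fit together in code, here is a minimal NumPy sketch of the NST loss from the Gatys et al. paper: the generated image's features should match the content image's features, and its Gram matrices should match the style image's. The feature maps here are toy arrays, not real VGG activations, and the layer choice and weights are only illustrative.

```python
import numpy as np

def gram(f):
    # Gram matrix: channel-wise inner products of a (C, H, W) feature map,
    # normalized by the number of spatial positions
    c, h, w = f.shape
    f = f.reshape(c, h * w)
    return f @ f.T / (h * w)

def nst_loss(gen_feats, content_feats, style_feats, alpha=1.0, beta=1e3):
    """Total NST loss = alpha * content loss + beta * style loss.

    Each argument is a list of (C, H, W) feature maps, one per chosen
    conv layer (in the paper these come from VGG; toy arrays here).
    """
    content = sum(np.mean((g - c) ** 2) for g, c in zip(gen_feats, content_feats))
    style = sum(np.mean((gram(g) - gram(s)) ** 2) for g, s in zip(gen_feats, style_feats))
    return alpha * content + beta * style

# sanity check: identical features give zero loss
f = [np.ones((2, 3, 3))]
print(nst_loss(f, f, f))  # 0.0
```

In the actual optimization method (covered in video #3), this loss is minimized with respect to the generated image's pixels via backprop through the frozen network.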
[Credits] Music:
https://www.youtube.com/watch?v=J2X5mJ3HDYE [NCS]
[Credits] Images:
Found the useful Gram matrix intuition image in this blog: https://towardsdatascience.com/light-on-math-machine-learning-intuitive-guide-to-neural-style-transfer-ef88e46697ee
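To make the Gram matrix intuition concrete, here is a short NumPy sketch (the 3×4×4 feature map is a made-up toy example): the Gram matrix records which feature channels tend to activate together, while discarding the spatial layout, which is why it captures style/texture rather than content.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a conv feature map.

    features: array of shape (C, H, W) -- C channels of an H x W activation map.
    Returns a (C, C) matrix of channel-wise inner products.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)  # flatten spatial dimensions
    return f @ f.T / (h * w)        # normalize by number of positions

# toy example with a hypothetical 3-channel 4x4 feature map
rng = np.random.default_rng(0)
fmap = rng.standard_normal((3, 4, 4))
g = gram_matrix(fmap)
print(g.shape)  # (3, 3)
```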
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💰 BECOME A PATREON OF THE AI EPIPHANY ❤️
If these videos, GitHub projects, and blogs help you,
consider helping me out by supporting me on Patreon!
The AI Epiphany ► https://www.patreon.com/theaiepiphany
One-time donations: https://www.paypal
Playlist: Uploads from Aleksa Gordić - The AI Epiphany (video 2 of 60)