Swin Transformer

Machine Learning Studio · Intermediate · 🧠 Large Language Models · 1y ago
In this video, we continue the vision transformer series, covering Swin Transformer, a general-purpose transformer backbone for computer vision. Swin Transformer is based on two key ideas: (1) a multi-scale hierarchical backbone design suited to computer vision, and (2) a carefully designed Swin block composed of two window-based attention layers (regular and shifted windows) that make self-attention computation efficient while still enabling long-range interactions between visual tokens. Link to Relative Attention video: https://www.youtube.com/watch?v=XdlmDfa2hew
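The window-based attention in idea (2) can be sketched in a few lines. Below is a minimal NumPy illustration, not the actual Swin implementation: it uses a single head, identity Q/K/V (no learned projections), and no relative position bias. It only shows the core trick of partitioning the feature map into non-overlapping windows and computing self-attention independently inside each window, which is what makes the cost linear in image size rather than quadratic.

```python
import numpy as np

def window_partition(x, window_size):
    """Split an (H, W, C) feature map into non-overlapping windows,
    returning (num_windows, window_size*window_size, C)."""
    H, W, C = x.shape
    ws = window_size
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    # Group the window grid first, then flatten each window into a token sequence.
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws * ws, C)

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def window_self_attention(windows):
    """Plain scaled dot-product self-attention, computed independently
    within each window (illustration only: Q = K = V = input)."""
    q = k = v = windows                      # (nW, ws*ws, C)
    d = windows.shape[-1]
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d))
    return attn @ v

# Toy example: an 8x8 feature map with 16 channels, split into 4x4 windows
# -> 4 windows, each a sequence of 16 tokens.
feat = np.random.rand(8, 8, 16).astype(np.float32)
wins = window_partition(feat, 4)
out = window_self_attention(wins)
print(wins.shape, out.shape)   # (4, 16, 16) (4, 16, 16)
```

In the real Swin block, the second of the two attention layers shifts the window grid by half a window (a cyclic shift, e.g. `np.roll`) before partitioning, so that tokens near window boundaries can attend across the windows of the previous layer; stacking the two layers is what restores long-range interaction.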