Swin Transformer
In this video, we continue the vision transformer series, covering Swin Transformer, a general-purpose transformer backbone for computer vision. Swin Transformer is built on two key ideas: (1) a multi-scale hierarchical backbone suited to computer vision, and (2) a carefully designed Swin Block composed of two window-based attention layers, which makes self-attention computation efficient while still enabling long-range interactions between visual tokens.
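The two window-based attention layers differ in how they tile the feature map: the first computes attention inside regular non-overlapping windows, and the second shifts the window grid by half a window so that neighboring windows exchange information. Here is a minimal NumPy sketch of that partitioning, assuming the common implementation trick of a cyclic shift (the function names are illustrative, not from the official code):

```python
import numpy as np

def window_partition(x, window_size):
    # Split an (H, W, C) feature map into non-overlapping
    # (window_size x window_size) windows; attention runs inside
    # each window, so cost grows linearly with H*W.
    H, W, C = x.shape
    x = x.reshape(H // window_size, window_size,
                  W // window_size, window_size, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, window_size * window_size, C)

def shift_feature_map(x, window_size):
    # Cyclically shift by half a window before partitioning, so the
    # next block's windows straddle the previous block's boundaries.
    s = window_size // 2
    return np.roll(x, shift=(-s, -s), axis=(0, 1))

x = np.random.randn(8, 8, 3)                       # toy 8x8 map, 3 channels
regular = window_partition(x, 4)                   # regular windows
shifted = window_partition(shift_feature_map(x, 4), 4)  # shifted windows
print(regular.shape)  # (4, 16, 3): four windows of 16 tokens each
```

Alternating these two partitions across consecutive blocks is what lets information propagate across the whole image without paying the quadratic cost of global attention.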
Link to Relative Attention video: https://www.youtube.com/watch?v=XdlmDfa2hew
DeepCamp AI