A Quantitative Study of Locality in GPU Caches for Memory-Divergent Workloads

West Coast Machine Learning · Advanced · 📐 ML Fundamentals · 4mo ago
In this meetup, we continued our review of the paper "A Quantitative Study of Locality in GPU Caches for Memory-Divergent Workloads": https://link.springer.com/article/10.1007/s10766-022-00729-2

Our Meetup: https://www.meetup.com/East-Bay-Tri-Valley-Machine-Learning-Meetup/

*Contents*
00:00 GPU caching
06:02 Testing setup
14:29 Spatial utilization of warps
50:15 Memory-divergent workloads
01:08:10 Cache impacts

============================
😊 About Us
West Coast Machine Learning is a channel dedicated to exploring the exciting world of machine learning! Our group of techies is passionate about deep learning, neural networks, computer vision, tiny ML, and other machine learning topics. We love to dive deep into the technical details and stay up to date with the latest research developments. Our Meetup group and YouTube channel are the perfect place to connect with like-minded people who share your love of machine learning. We offer a mix of research-paper discussions, coding reviews, and other data science topics. If you're looking to stay up to date with the latest developments in machine learning, connect with other techies, and learn something new, be sure to subscribe to our channel and join our Meetup community today!
=============================
#GPUs #GPUCache #ImprovingTraining #CacheLocality

