Explainable deep learning models for healthcare - CDSS 3
Skills: ML Maths Basics (70%)
This course introduces the concepts of interpretability and explainability in machine learning applications. Learners will understand the difference between global, local, model-agnostic and model-specific explanations. State-of-the-art explainability methods such as Permutation Feature Importance (PFI), Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) are explained and applied to time-series classification. Subsequently, model-specific explanations such as Class-Activation Mapping (CAM) and Gradient-Weighted CAM (Grad-CAM) are explained and implemented. Learners will understand axiomatic attributions and why they are important. Finally, attention mechanisms are incorporated after recurrent layers, and the attention weights are visualised to produce local explanations of the model.
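To give a flavour of the first method the course names, here is a minimal sketch of Permutation Feature Importance on a toy dataset. Everything in it (the data, the fixed "model", the feature count) is hypothetical and chosen only to make the idea concrete: shuffle one feature column at a time and measure how much the model's accuracy drops.

```python
import numpy as np

# Hypothetical toy setup: a fixed "model" whose decision depends only
# on features 0 and 1; features 2 and 3 are pure noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def model_predict(X):
    # Stand-in for a trained classifier.
    return (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

baseline = accuracy(y, model_predict(X))  # 1.0 by construction here

# Permutation Feature Importance: shuffle one column at a time and
# record the drop in accuracy; a bigger drop means a more important feature.
importances = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    importances.append(baseline - accuracy(y, model_predict(X_perm)))

print([round(v, 3) for v in importances])
```

Because it only needs model predictions, PFI is model-agnostic and global: the same loop works for any classifier, and the scores describe the model's behaviour over the whole dataset rather than for a single prediction.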
Watch on Coursera ↗