Data Engineering with Delta Lake on Databricks
Skills:
ETL Basics (90%)
Build production-ready data pipelines using Delta Live Tables and the Medallion Architecture on Databricks. This hands-on course teaches you to design, implement, and monitor ETL workflows that transform raw data into reliable, business-ready datasets through a structured bronze-silver-gold layering pattern.
This course is primarily aimed at first- and second-year undergraduates interested in engineering or science, as well as professionals with an interest in programming.
You will start by mastering DLT fundamentals — declarative pipeline syntax in both SQL and Python, streaming ingestion with Auto Loader, and schema evolution strategies. Next, you will implement each Medallion Architecture layer: bronze for raw ingestion with lineage tracking, silver for data cleaning with expectations-based quality gates, and gold for business aggregations optimized with Z-ordering and partitioning.
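The bronze and silver layers described above can be sketched in DLT's SQL dialect. This is a minimal, illustrative example, not course material: the table names, landing path, and columns are assumptions, and the `cloud_files()` call is Auto Loader's SQL entry point.

```sql
-- Bronze: raw streaming ingestion with Auto Loader (cloud_files)
-- The landing path and table names below are illustrative.
CREATE OR REFRESH STREAMING TABLE orders_bronze
COMMENT "Raw order events landed as JSON"
AS SELECT *, _metadata.file_path AS source_file   -- simple lineage tracking
FROM cloud_files("/landing/orders/", "json");

-- Silver: expectations-based quality gate; rows failing the
-- constraint are dropped rather than failing the pipeline.
CREATE OR REFRESH STREAMING TABLE orders_silver (
  CONSTRAINT valid_order_id EXPECT (order_id IS NOT NULL) ON VIOLATION DROP ROW
)
COMMENT "Cleaned, typed orders"
AS SELECT order_id, CAST(order_ts AS TIMESTAMP) AS order_ts, amount
FROM STREAM(LIVE.orders_bronze);
```

Each `CREATE OR REFRESH STREAMING TABLE` statement is declarative: DLT resolves the dependency graph between tables and manages checkpoints and retries for you.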
The course culminates in a capstone project where you build a complete inventory management system using Change Data Capture with `apply_changes()`, multi-source ingestion, and end-to-end pipeline orchestration. Every concept is reinforced through labs on Databricks Community Edition — no paid account required.
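The `apply_changes()` Python API mentioned above also has a SQL counterpart, `APPLY CHANGES INTO`. A hedged sketch of the CDC pattern (table names, the key, and the `operation`/`event_ts` columns are assumptions about the source feed):

```sql
-- The target streaming table must be declared before
-- APPLY CHANGES INTO can populate it.
CREATE OR REFRESH STREAMING TABLE inventory_current;

-- CDC: upsert or delete by key, ordered by event timestamp,
-- keeping only the latest state per item (SCD Type 1).
APPLY CHANGES INTO LIVE.inventory_current
FROM STREAM(LIVE.inventory_cdc_bronze)
KEYS (item_id)
APPLY AS DELETE WHEN operation = "DELETE"
SEQUENCE BY event_ts
STORED AS SCD TYPE 1;
```

`SEQUENCE BY` is what makes out-of-order change events safe: DLT applies only the change with the highest sequence value per key.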
Whether you are transitioning from batch ETL to streaming or building your first lakehouse pipeline, this course gives you the practical skills employers demand in modern data engineering roles.
Watch on Coursera ↗
More on ETL Basics — related lessons:
- Comparing Tools for Intelligent Demand Prediction in Retail (Dev.to AI)
- Implementing Intelligent Demand Prediction for Grocery Retail (Dev.to AI)
- Reverse ETL: What It Is, Use Cases, and How to Implement It (Dev.to · BladePipe)
- Building a Real Estate Data Pipeline That Aggregates 3,000+ Listings Daily from BizBuySell, CREXi &… (Medium · Data Science)