Foundations of Open Generative AI Engineering
Skills: LLM Engineering (90%)
The Foundations of Open Generative AI Engineering course introduces learners to the principles, architectures, and trade-offs that define the open generative AI landscape. Starting with the distinctions between open source, open weights, and open access models, learners explore different licensing frameworks—including MIT, Apache, and CreativeML Open RAIL-M—and their implications for commercial use, attribution, and compliance.
The course then covers the core architectures of open large language models (LLMs) such as Llama, Mistral, and Mixtral, alongside diffusion models used for image generation. Learners analyze how factors such as parameter count, context window length, and inference speed affect performance and suitability for different applications. The final module develops a structured decision-making framework for evaluating open versus closed models, balancing cost, scalability, customization, privacy, and data sovereignty. By completing a model selection analysis report, learners gain the ability to critically assess and recommend appropriate generative AI models for real-world use cases.
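One common way to structure the kind of open-versus-closed evaluation described above is a weighted-criteria scorecard. The sketch below is a minimal illustration of that idea; the weights, 1-to-5 ratings, and candidate names are all hypothetical examples, not material from the course.

```python
# Illustrative weighted-criteria scorecard for comparing model options.
# All weights, ratings, and candidate names are hypothetical.

CRITERIA_WEIGHTS = {
    "cost": 0.25,
    "scalability": 0.20,
    "customization": 0.20,
    "privacy": 0.20,
    "data_sovereignty": 0.15,
}

# Hypothetical 1-5 ratings for two candidate deployment options.
candidates = {
    "open-weights model (self-hosted)": {
        "cost": 4, "scalability": 3, "customization": 5,
        "privacy": 5, "data_sovereignty": 5,
    },
    "closed model (hosted API)": {
        "cost": 3, "scalability": 5, "customization": 2,
        "privacy": 3, "data_sovereignty": 2,
    },
}

def weighted_score(ratings: dict) -> float:
    """Sum of rating * weight across all criteria."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

for name, ratings in candidates.items():
    print(f"{name}: {weighted_score(ratings):.2f}")
```

The point of the exercise is less the arithmetic than making the trade-offs explicit: changing the weights (say, raising `privacy` for a regulated industry) can flip which option scores higher.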
Watch on Coursera ↗