GenAI and LLMs on AWS
This course will teach you how to deploy and manage large language models (LLMs) in production using AWS services like Amazon Bedrock. By the end of the course, you will know how to:
Choose the right LLM architecture and model for your application using AWS services
Optimize cost, performance, and scalability of LLMs on AWS using auto-scaling groups, spot instances, and container orchestration
Monitor and log metrics from your LLM to detect issues and continuously improve quality
Build reliable and secure pipelines to train, deploy and update models using AWS services
Comply with regulations when d…
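As a small taste of the deployment work covered, the sketch below builds a request for invoking a model hosted on Amazon Bedrock. The model ID and parameters are illustrative assumptions, not part of the course material:

```python
import json

def build_invoke_request(prompt: str, max_tokens: int = 256) -> dict:
    # Request arguments for Bedrock's invoke_model API, using the
    # Anthropic Messages body format; the model ID is an example only.
    return {
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# With AWS credentials configured, the request could then be sent via boto3:
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(**build_invoke_request("Hello"))

req = build_invoke_request("Summarize our Q3 report.")
print(req["modelId"])
```

The course goes well beyond this one-call sketch, into scaling, monitoring, and pipeline concerns.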
Watch on Coursera ↗
DeepCamp AI