Prompt Engineering for Vision Models


Coursera · Beginner · Computer Vision
Prompt engineering applies not only to text models but also to vision models. Depending on the model, prompts may be text, but they can also be pixel coordinates, bounding boxes, or segmentation masks. In this course, you'll learn to prompt several vision models: Meta's Segment Anything Model (SAM), a universal image segmentation model; OWL-ViT, a zero-shot object detection model; and Stable Diffusion 2.0, a widely used diffusion model. You'll also use DreamBooth, a fine-tuning technique, to teach a diffusion model to associate a text label with an object of your choice. In detail, you'll explore:

1. Image generation: Prompt with text and by adjusting hyperparameters such as strength, guidance scale, and number of inference steps.
2. Image segmentation: Prompt with positive or negative point coordinates and with bounding-box coordinates.
3. Object detection: Prompt with natural language to produce a bounding box that isolates specific objects within an image.
4. In-painting: Combine the techniques above to replace objects in an image with generated content.
5. Personalization with fine-tuning: Generate custom images based on pictures of people or places you provide, using DreamBooth.
6. Iterating and experiment tracking: Prompting and hyperparameter tuning are iterative processes, so experiment tracking helps identify the most effective combinations. The course uses Comet, a library for tracking experiments and optimizing visual prompt-engineering workflows.
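To see what the guidance scale hyperparameter actually controls, here is a minimal sketch of classifier-free guidance, the mechanism behind it in diffusion models such as Stable Diffusion. At each denoising step the model predicts noise twice, once conditioned on the text prompt and once unconditioned, and the final prediction extrapolates toward the conditional one. The toy arrays stand in for real noise tensors.

```python
import numpy as np

def guided_noise(uncond: np.ndarray, cond: np.ndarray,
                 guidance_scale: float) -> np.ndarray:
    """Combine unconditional and text-conditional noise predictions.

    Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one by `guidance_scale`.
    """
    return uncond + guidance_scale * (cond - uncond)

# Toy 2-element "noise predictions" for illustration only.
uncond = np.array([0.0, 0.0])
cond = np.array([1.0, -1.0])

print(guided_noise(uncond, cond, 1.0))  # scale 1: just the conditional prediction
print(guided_noise(uncond, cond, 7.5))  # higher scale: stronger prompt adherence
```

A higher guidance scale pushes generations to follow the prompt more literally, at the cost of diversity, which is why it is one of the knobs the course has you sweep.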
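The point and box prompts for segmentation are just small coordinate arrays. Below is a sketch of the formats SAM's predictor consumes; the coordinates are hypothetical, but the shapes match what `segment-anything`'s `SamPredictor.predict` expects.

```python
import numpy as np

# Point prompts: one (x, y) row per click. Label 1 marks a point
# inside the target object (positive), label 0 a point to exclude
# from the mask (negative).
point_coords = np.array([[320, 240],   # positive click on the object
                         [100, 400]])  # negative click on background
point_labels = np.array([1, 0])

# A bounding-box prompt is a single [x0, y0, x1, y1] array.
box = np.array([250, 180, 400, 320])

# With a loaded model these would be passed as (not run here):
# masks, scores, _ = predictor.predict(point_coords=point_coords,
#                                      point_labels=point_labels,
#                                      box=box)
```

Mixing positive and negative points lets you carve out exactly the region you mean, e.g. selecting a dog while excluding the leash it overlaps.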
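The glue between object detection and in-painting is a mask: a box returned by a zero-shot detector (OWL-ViT emits `[x0, y0, x1, y1]` boxes) becomes the binary mask an in-painting pipeline fills with generated content. A sketch of that conversion, with hypothetical image size and box values:

```python
import numpy as np

def box_to_mask(box, height: int, width: int) -> np.ndarray:
    """Turn a detector's [x0, y0, x1, y1] box into an in-painting mask.

    White (255) inside the box marks the region to regenerate;
    black (0) elsewhere is preserved.
    """
    x0, y0, x1, y1 = (int(v) for v in box)
    mask = np.zeros((height, width), dtype=np.uint8)
    mask[y0:y1, x0:x1] = 255
    return mask

mask = box_to_mask([250, 180, 400, 320], height=480, width=640)
print(mask.sum() // 255)  # masked pixels: 150 * 140 = 21000
```

In-painting pipelines (e.g. Stable Diffusion's in-painting variant) take such a mask alongside the original image and a text prompt describing the replacement content.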
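The iteration the last topic describes can be sketched as a small hyperparameter sweep. With Comet you would log each run's parameters via an experiment object; here a plain list stands in for the tracker, and the generation call is left as a comment.

```python
import itertools

# Sweep a small grid of diffusion hyperparameters and record each run.
runs = []
for guidance_scale, steps in itertools.product([5.0, 7.5, 10.0], [20, 50]):
    params = {"guidance_scale": guidance_scale,
              "num_inference_steps": steps}
    # image = pipe(prompt, **params).images[0]  # real generation call
    runs.append(params)  # a tracker like Comet would log params + outputs

print(len(runs))  # 6 parameter combinations
```

Recording every combination alongside its output is what lets you compare results side by side instead of relying on memory of which settings produced which image.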

Related AI Lessons

Inside SAM 3D: how Meta turns a single image into 3D
Learn how Meta's SAM 3D technology turns a single image into 3D, revolutionizing the field of computer vision
Medium · Machine Learning
Demystifying CNNs: How Convolutional Filters and Max-Pooling Actually Work
Learn how Convolutional Neural Networks (CNNs) use convolutional filters and max-pooling to recognize images
Medium · Data Science
Your "Biometric Age Check" Isn't Verifying Identity — And Defense Lawyers Know It
Biometric age checks don't verify identity, a crucial distinction for developers in computer vision and biometrics
Dev.to AI
Up next
How Transformers Finally Ate Vision – Isaac Robinson, Roboflow
AI Engineer