Quick Answer
Generative Fill is Photoshop’s AI-powered tool that lets you add, extend, or replace content inside a selection using a text prompt. Make a selection, click “Generative Fill” in the contextual taskbar, type what you want (or leave the prompt empty to fill based on surrounding context), and click Generate. Photoshop produces three variations on a non-destructive generative layer. It runs on Adobe’s Firefly Image 4 model and requires an active Creative Cloud subscription with generative credits.
Generative Fill has gone from a flashy beta feature to a tool that production designers rely on daily. But getting consistent, usable results takes more than clicking “Generate” and hoping. This guide covers the complete workflow — selection strategy, prompt writing, output refinement, and the mistakes that waste your generative credits.
What Generative Fill Actually Does (and Doesn’t Do)
Generative Fill uses Adobe’s Firefly image model to synthesize new pixels inside a selected area. It analyzes the surrounding context — lighting, perspective, color palette, texture — and generates content that attempts to blend seamlessly with the rest of the image.
It is not the same as Content-Aware Fill (which still exists and uses a different, older algorithm). It is not an image generator that creates entire images from scratch — that’s the Generate Image panel. And it is not the Remove Tool, which is optimized specifically for deleting objects. Generative Fill is the middle ground: targeted synthesis inside a selection, guided by an optional text prompt.
For a broader look at how Generative Fill fits alongside other AI features in the current release, see our overview of the biggest AI features that changed real design workflows in 2026.
Step-by-Step Workflow
The following workflow applies to Photoshop 2026 (v26.x). Earlier versions may have slightly different UI placement, but the core process is the same.
1. Make Your Selection
Generative Fill works on any active selection. You can use the Rectangular Marquee, Lasso, Object Selection Tool, or even Select Subject — whatever fits your task. The selection defines the boundary of what gets generated.
Selection sizing matters. The most common beginner mistake is making a selection that’s too tight. Generative Fill needs context pixels around the edges to blend properly. As a rule of thumb, extend your selection 10–20% beyond the area you actually want to change. You can always mask back the edges afterward.
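To make the 10–20% rule concrete, here is a small geometry helper that expands a rectangular selection by a fractional margin and clamps it to the canvas. This is plain illustrative math, not a Photoshop API call; the tuple layouts and the 15% default are assumptions for the sketch.

```python
def expand_selection(bounds, canvas, margin=0.15):
    """Expand a rectangular selection by a fractional margin on each side,
    clamped to the canvas edges.

    bounds: (left, top, right, bottom) in pixels
    canvas: (width, height) in pixels
    margin: fraction of the selection's own size to add per side
    """
    left, top, right, bottom = bounds
    width, height = right - left, bottom - top
    dx = round(width * margin)
    dy = round(height * margin)
    canvas_w, canvas_h = canvas
    return (
        max(0, left - dx),
        max(0, top - dy),
        min(canvas_w, right + dx),
        min(canvas_h, bottom + dy),
    )

# A 400x300 selection at (100, 100) on a 1920x1080 canvas,
# grown by 15% per side:
print(expand_selection((100, 100, 500, 400), (1920, 1080)))
```

In Photoshop itself the equivalent move is simply dragging the selection outward or using Select > Modify > Expand; the point of the helper is the proportional sizing, which you can eyeball once you know the target percentage.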
2. Open the Generative Fill Bar
With an active selection, the contextual taskbar appears at the bottom of the canvas. Click Generative Fill. You’ll see a text prompt field and a Generate button. You can also access it from the menu: Edit > Generative Fill.
3. Write Your Prompt (or Don’t)
Empty prompt: Leave the field blank to let Photoshop fill based purely on surrounding context. This works best for background extension, removing objects (when the Remove Tool isn’t suitable), and seamless patching.
Text prompt: Describe what you want to appear in the selection. Be specific about the subject but let Photoshop handle the style matching. More on prompt strategy below.
4. Generate and Review Variations
Click Generate. Photoshop creates three variations, each on a non-destructive Generative Layer. Use the arrows in the Properties panel to cycle through them. If none work, click Generate again for three more. Each generation costs generative credits.
5. Refine the Result
Almost no Generative Fill output should go straight to final delivery. Professional practice is to treat the output as a base layer, then:
- Use the generative layer’s built-in mask to paint away problem edges
- Clone Stamp or Healing Brush over texture mismatches
- Add adjustment layers to match color/tone if the fill skews warm or cool
- Run a second, smaller Generative Fill on stubborn sub-areas
"I run Generative Fill to get the structure, then spend five minutes painting over it. That combo is faster than building from scratch and more reliable than trusting the AI output raw."
— Common workflow pattern reported across professional retouching communities
Prompt Examples That Actually Work
The biggest misconception about Generative Fill prompts is that more detail equals better results. In practice, shorter, more specific prompts consistently outperform long descriptive paragraphs. Firefly already reads the surrounding image context; your prompt should add information that the context alone can’t provide.
| Task | Weak Prompt | Better Prompt |
|---|---|---|
| Add foliage behind a product | beautiful green plants and leaves in the background with soft bokeh and natural lighting | tropical leaves, shallow depth of field |
| Place sunglasses on a model | a pair of stylish modern black sunglasses sitting on the person’s face | black aviator sunglasses |
| Extend a sky | a continuation of the sky with clouds matching the same color palette and lighting | (empty prompt) |
| Swap background to studio | professional photography studio with gray seamless paper backdrop and softbox lighting from the left | gray seamless backdrop, studio lighting |
| Add a coffee cup to a table | a ceramic coffee cup filled with black coffee sitting on the table surface | white ceramic coffee mug |
| Fill in missing floor area | wooden floor continuing the same pattern with correct perspective | (empty prompt) |
Key prompt principles:
- Name the object, not the scene. Firefly sees the scene already. Tell it what to add.
- Specify material or style only when it matters. “Leather armchair” beats “a comfortable-looking chair.”
- Skip lighting and color instructions. The model reads these from the image context. Overriding them in the prompt usually creates mismatches.
- Use empty prompts for extensions and removals. When you want “more of the same,” context alone gives the cleanest results.
- Avoid negative phrasing. “No people” or “without text” doesn’t work reliably. Describe what you do want.
Limitations You Should Know About
Generative Fill has improved significantly since launch, but it still has clear boundaries. Understanding them saves time and credits.
- Text rendering is unreliable. Don’t ask Generative Fill to add text, logos, or signage. It will hallucinate letterforms. Use the Type tool instead.
- Pattern continuity is hit-or-miss. Repeating patterns (plaid, stripes, tile grids) often break at fill boundaries. You’ll need manual patching for these.
- Hands, fingers, and fine anatomy remain a weak spot, though results are significantly better than in the 2023 releases.
- Maximum resolution. Generative Fill processes at a maximum of 1024×1024 pixels per generation. Larger selections are downscaled internally and upscaled back, which can cause softness. For high-resolution work, fill in sections rather than one massive selection.
- Generative credits are finite. Each generation (three variations) costs credits. Plans include a monthly allotment; exceeding it requires purchasing more. See Adobe’s generative credits documentation for current allocations.
- Internet required. Generation happens on Adobe’s servers. No offline mode exists.
- Content policy restrictions. Firefly’s commercially safe training means it won’t generate certain content (violence, recognizable faces from prompts, copyrighted characters). This is by design but can limit editorial and conceptual work.
For teams working with external AI generators alongside Photoshop, our guide on combining AI generators with Photoshop without losing brand consistency covers how to integrate third-party outputs into your Photoshop workflow.
Troubleshooting Common Problems
Generative Fill button is grayed out
Check three things: (1) You have an active selection on the canvas. (2) Your document is in RGB color mode, 8-bit or 16-bit — Generative Fill doesn’t support CMYK, Lab, or 32-bit. (3) You’re signed into Creative Cloud and connected to the internet.
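The three checks above can be sketched as a preflight function. The field names here are illustrative (this is not the Photoshop scripting API); it just encodes the same checklist so the logic is explicit.

```python
def generative_fill_blockers(state):
    """Return the likely reasons the Generative Fill button is grayed out,
    given a dict describing document and session state. Keys are
    hypothetical, mirroring the manual checklist, not Adobe's API."""
    reasons = []
    if not state.get("has_selection"):
        reasons.append("no active selection on the canvas")
    if state.get("color_mode") != "RGB":
        reasons.append(f"unsupported color mode: {state.get('color_mode')}")
    if state.get("bit_depth") not in (8, 16):
        reasons.append(f"unsupported bit depth: {state.get('bit_depth')}-bit")
    if not (state.get("signed_in") and state.get("online")):
        reasons.append("not signed in to Creative Cloud or offline")
    return reasons

# A CMYK, 32-bit document with no selection trips three checks:
print(generative_fill_blockers({
    "has_selection": False,
    "color_mode": "CMYK",
    "bit_depth": 32,
    "signed_in": True,
    "online": True,
}))
```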
Results look blurry or soft
Your selection is probably larger than the 1024px internal processing cap. Solution: break the area into smaller sections and fill them individually. Alternatively, run Generative Fill at the resolution it handles well, then use upscaling (Image > Image Size with Preserve Details 2.0) afterward.
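Planning those smaller sections by hand is fiddly, so here is a sketch of how you might tile a large fill region into passes that each stay under the cap, with overlap between tiles so every pass still has blending context. The 1024px figure matches the processing cap described above; the 128px overlap is a judgment call, not an Adobe recommendation.

```python
import math

def plan_fill_tiles(width, height, tile=1024, overlap=128):
    """Split a large fill region into overlapping tiles of at most
    `tile` px per side. Returns (left, top, right, bottom) boxes in
    region-local coordinates; fill them one at a time."""
    step = tile - overlap

    def starts(total):
        # One tile covers the whole axis if it fits.
        if total <= tile:
            return [0]
        count = math.ceil((total - overlap) / step)
        # Shift the last tile back so it ends exactly at the edge.
        return [min(i * step, total - tile) for i in range(count)]

    return [
        (x, y, min(x + tile, width), min(y + tile, height))
        for y in starts(height)
        for x in starts(width)
    ]

# An 1800x900 region needs two horizontal passes, one row:
for box in plan_fill_tiles(1800, 900):
    print(box)
```

Because the tiles overlap, each later pass sees some already-filled pixels from the previous one, which helps the model keep texture continuous across the seam.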
Fill doesn’t match the surrounding lighting
This usually happens when the selection edge cuts through a strong lighting gradient. Expand your selection to include some of the lighting transition zone. The model needs to “see” the light direction to match it. If the mismatch persists, fix it with a Curves adjustment layer clipped to the generative layer.
Visible seam at the selection boundary
Feather your selection before generating (Select > Modify > Feather, typically 5–15px depending on resolution). This gives the model a softer transition zone. For stubborn seams, paint on the generative layer’s mask with a soft black brush to blend the edge manually.
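Since the right feather radius depends on resolution, one way to keep the transition zone visually consistent is to scale it with the image's long edge. The baseline here (about 10px at a 2000px edge, landing in the 5–15px range for typical web-size images) is an assumption for the sketch, not an Adobe guideline.

```python
def feather_radius(image_long_edge, base_radius=10, base_edge=2000):
    """Scale a feather radius linearly with image size so the softened
    transition occupies a similar visual fraction at any resolution.
    base_radius/base_edge are assumed tuning values."""
    return max(2, round(base_radius * image_long_edge / base_edge))

print(feather_radius(1000))   # small web image
print(feather_radius(6000))   # high-resolution photo
```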
Prompt is ignored or results seem random
Three common causes: (1) The prompt conflicts with what the model sees in the surrounding context. If the image is a beach scene and you prompt “snow-covered mountain,” the model may compromise awkwardly. (2) The selection is too small for the prompted content to fit naturally. (3) Your prompt is too long and the model is latching onto only part of it. Simplify.
“Unable to process” or generation fails
Check that your document dimensions are within Photoshop’s supported limits (maximum 30,000px on either side). Also verify you haven’t exhausted your monthly generative credits. If neither applies, it may be a temporary server-side issue — wait a few minutes and retry.
Generative Fill vs. Other Photoshop AI Tools
Photoshop now has several AI-powered tools that overlap in capability. Here’s when to use which:
| Tool | Best For | Not Ideal For |
|---|---|---|
| Generative Fill | Adding objects, swapping backgrounds, extending scenes with a prompt | Simple object removal, full image generation |
| Generative Expand | Extending canvas/crop boundaries outward | Filling interior areas of an image |
| Remove Tool | Deleting unwanted objects, blemishes, distractions | Adding new content, complex scene changes |
| Content-Aware Fill | Quick texture-based patches, works offline | Anything requiring semantic understanding |
Frequently Asked Questions
Is Generative Fill free?
It’s included with any Photoshop subscription, but it uses generative credits. Most Creative Cloud plans include a monthly allotment of credits. Once depleted, you can purchase additional credits or wait for the monthly reset. Check the Adobe generative credits FAQ for your plan’s allocation.
Can I use Generative Fill output commercially?
Yes. Adobe’s Firefly models are trained on licensed and public-domain content, and Adobe provides an IP indemnity for Firefly-generated content used commercially. This is one of the key advantages over open-source alternatives for production work.
Does Generative Fill work on video frames?
Not directly within Photoshop. Generative Fill operates on still images. For video, Adobe has been rolling out generative features in Premiere Pro and After Effects separately. You can use Generative Fill on individual exported frames, but there’s no frame-to-frame consistency mechanism.
What’s the difference between Generative Fill and Generative Expand?
Generative Expand is specifically for outpainting — extending the image beyond its current canvas boundaries. It’s accessed through the Crop tool by dragging the crop handles outward. Generative Fill works inside the existing canvas on any selection. Under the hood, they use the same Firefly model, but the UI and use cases differ.
Can I edit a Generative Fill after applying it?
The generative layer preserves your prompt and variations, so you can cycle through the three options or generate new ones at any time. However, you can’t retroactively modify the prompt of an existing generation — you’d need to generate again with a new prompt. The layer itself is non-destructive: it has a mask you can paint on, and you can adjust opacity, blending mode, or add adjustment layers above it.
Does it work in Photoshop on iPad?
Yes. Generative Fill is available in Photoshop for iPad with the same feature set as the desktop version. Performance may be slightly slower due to the round-trip to Adobe’s servers, but the output quality is identical since generation happens server-side.
How is this different from using Midjourney or Stable Diffusion?
External generators create entire images from text prompts. Generative Fill operates within an existing image, respecting its context, lighting, and perspective. It’s a compositing and editing tool, not an image creation tool. For workflows that combine external generators with Photoshop, see our guide on maintaining brand consistency across AI tools.
For the latest updates on Generative Fill and other Photoshop AI features, see Adobe’s official Photoshop release notes and the Photoshop AI features page.
Key Takeaways
- Selection strategy matters more than prompt quality — give the model enough context pixels by extending selections 10–20% beyond the target area
- Keep prompts short and specific: name the object and key material, skip lighting and color descriptions that the model already reads from context
- Use empty prompts for extensions and context-based fills — they outperform descriptive prompts when you want “more of the same”
- Treat every Generative Fill output as a starting layer — plan for manual refinement with masking, cloning, and adjustment layers before delivery
- For large selections, fill in sections to avoid the 1024px processing cap that causes blurry output