Adobe has shipped a staggering number of AI-powered features over the past two years. Some were headline-grabbing demos that fizzled in production. Others quietly became indispensable. As we settle into 2026, it’s worth separating the signal from the noise: which Photoshop AI features have actually changed how designers, retouchers, and compositors work day to day?
Generative Fill: From Party Trick to Production Tool
When Generative Fill launched in mid-2023, it was impressive but unreliable. Skin textures looked waxy, fabric patterns didn’t match, and anything involving hands or text was a coin flip. Two years of iteration have changed the calculus. The version shipping in Photoshop 2026 (v26.x) uses Adobe’s latest Firefly Image 4 model, and the improvements are meaningful where it counts.
Where it works well now: Background extension for product photography, filling in cropped edges after perspective correction, and removing unwanted objects from mid-complexity scenes. Retouchers report that Generative Fill now handles skin-adjacent fills — like extending a shoulder or filling behind a moved subject — with noticeably fewer artifacts than before.
Where it still struggles: Precise pattern continuity (think plaid shirts or tiled floors), text rendering inside fills, and matching extreme lighting conditions. Experienced compositors still treat Generative Fill output as a starting layer that needs manual refinement, not a finished result.
"Generative Fill cut my background-extension time by about 70%. But I still paint over every fill by hand before delivery. The AI gets you 80% of the way — that last 20% is where the craft lives."
— Senior retoucher, a sentiment reported across multiple design forums
Generative Expand: Quietly Indispensable
Generative Expand — the crop tool’s AI outpainting capability — has turned out to be one of the most practically useful AI features in the entire application. The use case is simple: you need a horizontal image from a vertical shot, or your client changed the deliverable aspect ratio after the shoot.
For social media designers juggling Instagram (1:1), Stories (9:16), and landscape (16:9) from a single asset, Generative Expand has become a genuine time-saver. It handles sky extensions, simple environmental backgrounds, and studio-style negative space reliably enough that many designers now plan for it in their asset pipeline.
Practical limitation: The feature works best when expanding into areas of low visual complexity. Expanding a portrait into a busy street scene will produce plausible but obviously synthetic results. The rule of thumb most designers have adopted: use it for expanding away from the subject, not around it.
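The canvas math behind planning for Generative Expand is simple enough to script as a pre-flight check. The sketch below (plain Python, no Photoshop API involved; the example dimensions are illustrative) computes how much canvas an expand-only adaptation adds on each axis for the common social formats mentioned above:

```python
# Compute the extra canvas (in pixels) needed to reach a target aspect
# ratio by expanding only -- never cropping the original pixels.

def expansion_for_ratio(width, height, target_w, target_h):
    """Return (new_width, new_height, pad_x, pad_y), where pad_x/pad_y
    are the total pixels added horizontally/vertically."""
    target = target_w / target_h
    current = width / height
    if current < target:
        # Image is too tall for the target: widen the canvas.
        new_w = round(height * target)
        return new_w, height, new_w - width, 0
    # Image is too wide (or already exact): add height.
    new_h = round(width / target)
    return width, new_h, 0, new_h - height

# A 3:4 vertical shot (3000x4000) adapted to common social formats.
formats = {"square": (1, 1), "story": (9, 16), "landscape": (16, 9)}
for name, (tw, th) in formats.items():
    w, h, px, py = expansion_for_ratio(3000, 4000, tw, th)
    print(f"{name}: {w}x{h} (+{px}px horizontal, +{py}px vertical)")
```

Note how the landscape case more than doubles the canvas width — exactly the kind of large, subject-adjacent expansion the rule of thumb above warns against.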
Remove Tool: The Invisible Workhorse
The AI-powered Remove Tool doesn’t get the attention that generative features do, but it may be the single most-used AI feature among working professionals. It’s effectively Content-Aware Fill rebuilt from scratch with a diffusion model, and the results on common retouching tasks — removing blemishes, power lines, stray objects, sensor dust — are substantially better than the old approach.
What makes it stick in professional workflows is consistency. Unlike Generative Fill, the Remove Tool doesn’t hallucinate new content; it reconstructs what should be there based on the surrounding pixels. That predictability matters when you’re retouching 200 images from a single shoot.
Generate Image: Photoshop’s Built-In Image Generator
Adobe’s Generate Image panel, powered by Firefly, lets designers create reference images, placeholder assets, and texture elements without leaving Photoshop. It launched as a beta feature and has steadily improved, with the Firefly Image 4 model delivering notably better photorealism and prompt adherence.
Where designers actually use it: Mockup comps for client presentations, generating quick background plates for compositing tests, and creating texture variations for surface work. It’s particularly useful in the early ideation phase, where speed matters more than pixel-perfect accuracy.
Where it falls short: The commercially safe training dataset means Firefly’s output skews generic compared to open-source alternatives like Stable Diffusion or Flux. Designers working in editorial, fashion, or fine art often find the aesthetic range too narrow for their needs.
Selection Improvements: AI That Saves Hours
Less glamorous but arguably more impactful: Photoshop’s AI-driven selection tools have become remarkably good. Select Subject now handles fine hair, translucent fabrics, and complex edges with an accuracy that would have been unthinkable five years ago. The improvements to Object Selection and automatic masking in the Properties panel have meaningfully reduced the time spent on cutouts — a task that still accounts for a large percentage of production retouching work.
For compositors, the combination of improved selections and the Select and Mask workspace means that a mask that once took 30–45 minutes of careful brushwork can now be roughed in within 2–3 minutes. Manual refinement is still needed for hero shots, but the AI provides a far better starting point than any previous algorithm.
What Hasn’t Changed (Yet)
For all the progress, several areas remain largely untouched by AI in Photoshop’s current release:
- Color grading and tonal work: Neural Filters offers some AI-assisted color transfer, but serious color work still relies on Curves, Levels, and manual adjustment layers. There’s no AI “match this look” tool that professionals trust for final output.
- Typography and layout: Photoshop’s text engine has received no meaningful AI upgrades. For layout-heavy work, designers still move to Illustrator or InDesign.
- Batch processing intelligence: AI features are largely single-image operations. There’s no way to say “apply this Generative Fill approach across 50 similar images” — a significant gap for production workflows.
- Non-destructive AI layers: Most AI operations generate rasterized output. You can’t go back and adjust a Generative Fill prompt after the fact. This limits how AI integrates into non-destructive editing workflows.
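Until batch-level AI operations exist inside Photoshop, the deterministic parts of a pipeline are often scripted outside it. A minimal sketch of that kind of stopgap, in Python with the Pillow library (a common choice; the fill color, file extension, and folder names are illustrative, and the flat-color pad is a deterministic stand-in for a per-image Generative Expand, not a replacement for it):

```python
# Batch-pad a folder of images to a 1:1 canvas with a neutral studio-style
# background -- handling the repetitive part of the job outside Photoshop.
from pathlib import Path

from PIL import Image


def pad_to_square(img, fill=(245, 245, 245)):
    """Center the image on a square canvas sized to its longest edge."""
    side = max(img.size)
    canvas = Image.new("RGB", (side, side), fill)
    canvas.paste(img, ((side - img.width) // 2, (side - img.height) // 2))
    return canvas


def batch_pad(src_dir, dst_dir):
    """Pad every .jpg in src_dir and write results to dst_dir."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for path in sorted(Path(src_dir).glob("*.jpg")):  # extension is illustrative
        img = Image.open(path).convert("RGB")
        pad_to_square(img).save(out / path.name, quality=95)
        count += 1
    return count
```

This only covers mechanical canvas work; anything needing content-aware synthesis still goes through Photoshop one image at a time, which is precisely the gap described above.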
The Speed and Iteration Story
The cumulative effect of these AI features on production speed is real but uneven. For a social media designer producing 20–30 assets a week across multiple aspect ratios, the combination of Generative Expand, improved selections, and the Remove Tool can save several hours per week. For a high-end retoucher working on a single hero image for a campaign, the time savings are more modest — AI handles the rough work faster, but the finishing pass takes just as long as it always did.
Where AI has most clearly changed the game is in iteration speed during the concept phase. Designers can now generate three or four compositional variations in the time it used to take to build one. This doesn’t replace creative judgment, but it does mean clients see more options sooner, and creative directors can evaluate directions faster.
"The biggest shift isn’t that AI makes the final image better. It’s that I can explore five directions before lunch instead of committing to one and hoping the client likes it."
The Bottom Line
Photoshop’s AI features in 2026 are best understood as acceleration tools, not replacement tools. They compress the tedious middle of a workflow — rough selections, background fills, aspect ratio adaptation — while leaving the creative bookends (concept and finishing) firmly in human hands. The designers getting the most out of them are the ones who’ve learned where to trust the AI and where to take over.
For Adobe’s part, the pace of improvement has been impressive. The Firefly models powering these features have improved meaningfully with each generation, and the integration into Photoshop’s existing tool paradigms — selections, fills, cropping — means designers don’t have to learn an entirely new way of working. They just work faster.
For the latest on Adobe’s AI roadmap, see the official Photoshop AI features page and the Photoshop release notes.
Key Takeaways
- Generative Fill and Generative Expand have matured into reliable production tools for background work and aspect ratio adaptation, though they still need manual refinement for hero-level output
- The Remove Tool and AI-powered selections are the quiet workhorses — less flashy than generative features but arguably more impactful for daily production work
- The biggest workflow gain is in iteration speed during the concept phase, not in final output quality
- Key gaps remain: no AI-assisted batch processing, limited non-destructive AI editing, and no meaningful AI upgrades for color grading or typography