Quick Overview
Adobe shipped four Photoshop updates between October 2025 and April 2026 (v27.0–v27.5). Most coverage focused on the AI assistant and multi-model support. This article covers seven practical features that received less attention but deliver measurable workflow improvements for designers, illustrators, and photographers.
Photoshop’s 2026 cycle has been heavy on announcements: a conversational AI assistant, partner models from Google and Black Forest Labs, Firefly Boards integration. Those made the keynotes. But some of the most useful additions landed without fanfare — tucked into contextual task bars, buried in adjustment panels, or quietly activated across web and desktop. If you stopped reading after the headlines, you missed tools that solve real production problems. Here are seven worth knowing.
1. AI Assistant’s Guided Mode — A Built-In Tutor, Not Just a Shortcut
The AI assistant grabbed attention for its automatic mode, where you type a plain-language instruction and Photoshop executes the edit. That’s useful for quick jobs. But Guided mode is the feature with staying power.
Instead of silently completing an edit, Guided mode walks through each step: which tool it selected, why it chose a specific selection method, and what parameters it applied. You watch the process unfold and can intervene at any point. For designers learning new techniques or onboarding junior team members, this turns every edit into a mini-tutorial.
The mode also supports AI Markup — draw directly on the canvas (circle an object, sketch a rough shape) and the assistant interprets your annotation as an editing instruction. This is faster than typing for spatial tasks like “remove everything inside this area” or “extend the background here.”
Where to find it: Open the AI assistant panel (available on web and mobile, desktop coming later), then toggle from Automatic to Guided in the panel header. For team adoption strategies, see our AI assistant team playbook.
2. Generative Fill with Reference Images — Identity-Preserving Compositing
Generative Fill now runs on Firefly Image 4 with 2K output, which alone makes a difference: sharper detail, fewer edge artifacts, better prompt adherence. But the real workflow change is Reference Image support.
Upload a reference photo alongside your prompt, and Generative Fill preserves that object’s identity — its exact geometry, materials, and proportions — while matching the lighting, perspective, and color of the target scene. This is geometry-aware compositing, not style transfer. The generated result places the specific referenced object into your composition as if it were photographed there.
Practical use case: Product photography compositing. Drop a shoe, bottle, or device into a lifestyle scene and the result maintains product accuracy rather than generating a “shoe-like object.” This used to require manual compositing with careful light matching — now it takes one generation pass plus minor refinement.
Where to find it: Select an area, open the Generative Fill bar, and click the image icon next to the prompt field to attach a reference. For prompt strategies, see our Generative Fill guide.
3. Harmonize — One-Click Composite Matching
Compositing has always required manual work to match lighting direction, color temperature, and shadow behavior between pasted elements and the background. Harmonize automates that matching step.
Select a composited layer, click Harmonize in the Contextual Task Bar, and the tool analyzes the surrounding scene to adjust lighting, shadows, and color cast on the selected element. It generates three variations so you can choose the best match. The result is non-destructive — you can toggle it off or regenerate.
This isn’t perfect for every scenario. Complex multi-light setups or highly stylized illustrations may still need manual correction. But for standard product-in-scene composites, event mockups, or social media graphics, Harmonize cuts what used to be 10–20 minutes of curves-and-color-balance work down to about 30 seconds.
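Adobe hasn’t published how Harmonize works internally, but a useful mental model for the color-cast portion is classic statistics matching: shift the pasted element’s per-channel mean and spread toward the background’s. A minimal NumPy sketch on synthetic arrays — this illustrates the concept, not Adobe’s algorithm:

```python
import numpy as np

def match_color_cast(element, background):
    """Shift each RGB channel of `element` so its mean and spread match
    `background` -- a rough stand-in for the color-cast matching that
    Harmonize automates. Arrays are float RGB, shape (H, W, 3), 0-255."""
    el = element.astype(np.float64)
    bg = background.astype(np.float64)
    out = np.empty_like(el)
    for c in range(3):
        el_mean, el_std = el[..., c].mean(), el[..., c].std()
        bg_mean, bg_std = bg[..., c].mean(), bg[..., c].std()
        scale = bg_std / el_std if el_std > 0 else 1.0
        out[..., c] = (el[..., c] - el_mean) * scale + bg_mean
    return np.clip(out, 0, 255)

# A cool-toned element pasted into a warm-toned background scene:
rng = np.random.default_rng(0)
element = rng.normal([90, 110, 160], 20, size=(64, 64, 3))      # bluish cast
background = rng.normal([180, 140, 100], 25, size=(64, 64, 3))  # warm scene
matched = match_color_cast(element, background)
```

The real tool goes much further — it relights the element and adjusts shadows, which simple channel statistics cannot do — but this is the flavor of correction it replaces.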
Where to find it: Select a layer, then look for the Harmonize button in the Contextual Task Bar. If the bar isn’t visible, enable it under Window > Contextual Task Bar.
4. Generative Upscale — Detail Recovery, Not Just Interpolation
Photoshop has had upscaling for years, but previous methods (Preserve Details 2.0, Super Resolution in Camera Raw) estimated new pixels from the existing ones. Generative Upscale, built on Topaz Labs technology, works differently: it regenerates the entire image to create plausible new detail.
You can upscale up to 4x, pushing images to roughly 56MP while preserving — and in some cases recovering — texture, edge sharpness, and fine patterns. The results are substantially better than interpolation for AI-generated images, old digital photos, and web-sourced assets that need to go to print.
Important caveat: Because the tool regenerates detail, it can introduce subtle inaccuracies. Fine text, logos, and precise geometric patterns should be checked after upscaling. For hero images and editorial photography, it delivers excellent results. For technical or compliance-sensitive imagery, verify the output.
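To make the 4x and 56MP numbers concrete: 4x is linear magnification, so pixel count grows 16x, which means a source of roughly 3.5MP already lands at the ~56MP ceiling. The snippet below works that arithmetic (the source dimensions are a hypothetical example) and uses nearest-neighbor repetition to show why pixel-spreading approaches cannot invent detail:

```python
import numpy as np

# 4x is linear: each axis quadruples, so pixel count grows 16x.
# A ~3.5MP source therefore lands near the ~56MP ceiling.
src_w, src_h = 2296, 1530            # hypothetical ~3.5MP source
up_w, up_h = src_w * 4, src_h * 4
print(up_w * up_h / 1e6)             # ~56.2 megapixels

# Pixel-spreading upscaling reuses existing data -- nearest-neighbor
# makes that literal: every pixel becomes a 4x4 block, and the set of
# distinct values in the image does not change at all.
img = np.arange(12, dtype=np.uint8).reshape(3, 4)
nn4x = np.repeat(np.repeat(img, 4, axis=0), 4, axis=1)
assert nn4x.shape == (12, 16)
assert np.unique(nn4x).size == np.unique(img).size
```

Bicubic and ML-assisted resamplers blend neighboring values rather than copying them, but the principle holds: the output is derived from the input pixels. Generative Upscale instead synthesizes detail that was never captured, which is both its strength and the reason for the verification caveat above.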
Where to find it: Image > Image Size, then select Generative Upscale from the resample method dropdown. Also accessible via the Contextual Task Bar on generated layers.
5. Clarity, Dehaze, and Grain — Camera Raw Controls as Adjustment Layers
Version 27.3 (January 2026) added three adjustment layers that previously required a roundtrip to Camera Raw or Lightroom:
- Clarity — enhances midtone contrast for punch and definition without blowing highlights or crushing shadows. Excellent for architectural and product photography.
- Dehaze — cuts through atmospheric haze and fog, restoring contrast in flat lighting conditions. Essential for landscape, drone, and outdoor event photography.
- Grain — adds film-like texture as a non-destructive layer. Useful for matching a photographic aesthetic across mixed-source images, or for masking AI-generated smoothness in composited work.
All three are fully maskable. That means you can apply Dehaze to the background of a portrait without affecting the subject, or add Grain only to AI-generated areas to match the look of photographed elements. The Color & Vibrance adjustment layer also gained Temperature and Tint controls, enabling white balance correction without leaving Photoshop.
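The maskable behavior is easy to picture as an effect gated by a per-pixel weight. Here is a toy NumPy sketch of a mask-limited grain pass — it mimics how a masked Grain adjustment layer behaves, not Adobe’s actual grain model:

```python
import numpy as np

def add_grain(image, amount, mask=None, seed=0):
    """Add Gaussian 'film grain' noise to a float RGB image (H, W, 3).
    `mask` is an (H, W) array where 1.0 = full grain and 0.0 = untouched,
    mirroring a layer mask on a Grain adjustment layer."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, amount, size=image.shape)
    if mask is not None:
        noise *= mask[..., None]   # broadcast the 2D mask over channels
    return np.clip(image.astype(np.float64) + noise, 0, 255)

img = np.full((8, 8, 3), 128.0)    # flat mid-gray test image
mask = np.zeros((8, 8))
mask[:, 4:] = 1.0                  # grain only on the right half
out = add_grain(img, amount=12.0, mask=mask)
```

The left half comes back bit-identical while the right half picks up texture — the same selectivity that lets you grain only the AI-generated regions of a composite.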
Where to find it: Layer > New Adjustment Layer, or the Adjustments panel. They sit alongside existing options like Curves and Levels.
6. Web and Desktop Workflow Parity — Editing Anywhere Without Compromise
Photoshop on the web has quietly become a capable editing environment rather than a preview tool. The 2026 updates pushed it to near-parity with desktop across several core workflows:
- Full AI assistant access on web and mobile — the same natural-language editing available on desktop, including voice input on mobile.
- Generative Fill and Remove running at the same quality and resolution as desktop.
- Cloud document sync that preserves layers, masks, and adjustment settings across devices.
- Firefly Boards integration (v27.5) — open cloud documents in Firefly Boards for AI exploration, then send variations back to Photoshop for refinement.
The practical impact: you can start a composite on desktop, review and refine it on an iPad during a client meeting, then hand off via Firefly Boards for rapid variation generation. The file stays consistent across surfaces.
Desktop-only features still exist (Actions, advanced scripting, some filter categories), but for the AI-driven editing pipeline, the platform gap has narrowed significantly. For more on browser-based workflows, see our online Photoshop workflows guide.
7. Multi-Model AI Selection — Pick the Right Generator for the Job
Photoshop is no longer locked to a single AI backbone. The Generative Fill interface now lets you choose from multiple models depending on the task:
- Adobe Firefly Image 4 — the default, commercially safe option with Content Credentials. Best for client-facing work where IP provenance matters.
- Google Gemini 3 (Nano Banana Pro) — strong at character consistency and complex multi-element scenes.
- Black Forest Labs FLUX.2 Pro — excels at photorealistic textures and accurate text rendering within images.
Adobe also opened access to 25+ models through Firefly Custom Models (public beta), including options from OpenAI and Runway. You can train a custom model on your own images for consistent brand style. For strategies on combining generators with brand guidelines, see our brand consistency playbook.
What Changed Since Photoshop 2025
Photoshop 2025 (v26) introduced Generative Fill and Expand powered by Firefly Image 3, along with the Remove Tool and improved Select Subject. Here is what the 2026 cycle added on top of that foundation:
- AI model upgrade: Firefly Image 3 → Firefly Image 4 with 2K output (was ~1K).
- Reference Image compositing: Not available in 2025. Now supports identity-preserving object placement.
- Multi-model support: 2025 was Firefly-only. 2026 adds Google, FLUX, and 25+ custom models.
- AI assistant: Did not exist in 2025. Now in public beta with automatic, guided, and voice modes.
- Harmonize tool: New in 2026. Automates composite light/color matching.
- Generative Upscale: New AI-regeneration option (Topaz Labs) alongside the older interpolation-based resample methods.
- Generative Remove: Rebuilt with a diffusion model, replacing the patch-based Content-Aware Fill approach.
- New adjustment layers: Clarity, Dehaze, Grain — previously Camera Raw only.
- Web/mobile parity: AI features now run at full quality on web and mobile, not just desktop.
- Firefly Boards: Bidirectional workflow between Photoshop and Adobe’s generative workspace (v27.5).
Who Should Use Each Feature
| Feature | Best For |
|---|---|
| AI Assistant (Guided) | Beginners learning Photoshop; teams onboarding junior designers; anyone exploring unfamiliar tools. |
| Generative Fill + Reference Image | Product photographers; e-commerce teams; anyone compositing specific objects into scenes. |
| Harmonize | Composite artists; marketing designers doing product placement; social media creators. |
| Generative Upscale | Print production; anyone working with low-res source material or AI-generated images that need to go large. |
| Clarity / Dehaze / Grain | Landscape and architectural photographers; retouchers blending AI-generated and photographed elements. |
| Web/Desktop Parity | Remote and hybrid teams; freelancers working across devices; client-facing review workflows. |
| Multi-Model Selection | Agencies and studios needing different AI strengths per project; brand teams requiring Content Credentials. |
Frequently Asked Questions
Is the Photoshop AI assistant available on desktop?
As of April 2026, the AI assistant is in public beta on Photoshop for web and mobile. Desktop access has not been announced yet. Adobe has indicated desktop integration is planned but has not committed to a specific date.
Does Generative Fill with Reference Image work for faces and people?
Reference Image support is designed for objects, products, and scene elements. Adobe has restricted face and person generation with reference images to comply with responsible AI guidelines. For non-face subjects — products, architecture, vehicles, clothing — it works reliably.
How many generative AI credits does Photoshop 2026 use?
Generative Fill, Remove, and Expand each consume one generative credit per generation. Generative Upscale costs more credits depending on the target resolution. Paid Creative Cloud subscribers receive a monthly credit allocation, and Adobe ran an unlimited-generations promotional period through April 9, 2026.
Can I use third-party AI models like FLUX for commercial work?
Yes, but with different IP protections. Adobe Firefly models include Content Credentials and IP indemnification for commercial use. Third-party models (FLUX, Gemini) generate content without Adobe’s IP indemnity. For client work where provenance documentation matters, stick with Firefly or check the specific model’s commercial license terms.
What is the difference between Generative Upscale and Super Resolution?
Super Resolution (in Camera Raw) enhances existing pixels — it estimates new ones from the surrounding data. Generative Upscale (Topaz Labs integration) regenerates the image using AI, creating plausible new detail that did not exist in the original. Generative Upscale produces significantly sharper results at 4x magnification but may introduce subtle inaccuracies in fine text or geometric patterns.
Do the new adjustment layers (Clarity, Dehaze, Grain) work with Smart Objects?
Yes. Like all Photoshop adjustment layers, Clarity, Dehaze, and Grain are non-destructive and can be applied above Smart Object layers. They also support layer masks, allowing you to selectively apply the effect to specific regions of the image.
Key Takeaways
- The AI assistant’s Guided mode is more valuable long-term than Automatic — it teaches while it edits.
- Reference Image compositing in Generative Fill is the biggest practical upgrade for product and e-commerce photography workflows.
- Harmonize and Generative Upscale solve specific bottlenecks (composite matching, low-res recovery) that previously ate significant manual time.
- Clarity, Dehaze, and Grain as maskable adjustment layers eliminate Camera Raw roundtrips for common photo corrections.
- Web/desktop parity and multi-model selection make Photoshop a more flexible platform — but manual finishing skills remain essential for professional output.