PSN Editorial Staff

What’s New in Photoshop 2026: AI Assistant, Generative Updates, and Workflow Impact

A comprehensive look at the AI assistant public beta, Firefly Image 4 generative tools, new adjustment layers, and how these updates change real design workflows in Photoshop version 27.

Quick Overview

Adobe has shipped four major Photoshop updates since October 2025 (v27.0 through v27.5). The headline features: a conversational AI assistant now in public beta, Generative Fill powered by Firefly Image 4 at 2K resolution, reference-image compositing, new Clarity/Dehaze/Grain adjustment layers, and Firefly Boards integration. This article covers every significant change and what it means for working designers.

Photoshop 2026 (version 27) is the most AI-forward release Adobe has ever shipped. Across four updates from October 2025 to April 2026, the app has gained a conversational editing assistant, a new generative backbone, multi-model AI support, and non-destructive tools that address longstanding workflow gaps. Here is everything that changed and whether it matters for your day-to-day work.

The AI Assistant: Conversational Editing Arrives

The most visible addition in Photoshop 2026 is the AI assistant, which entered public beta in March 2026 on Photoshop for web and mobile. Instead of hunting through menus, you describe what you want in plain language — “remove the person on the left,” “add a soft glow to the background,” “change the sky to golden hour” — and the assistant executes the edit.

Adobe built two operating modes:

  • Automatic mode completes the edit in a single step. Best for quick, straightforward tasks like object removal or color shifts.
  • Guided mode walks you through each action, explaining which tool it uses and why. This is genuinely useful for learning — it turns the assistant into a built-in tutor.

On mobile, the assistant supports voice input, letting you describe edits by speaking. Adobe has also introduced AI Markup: draw directly on the canvas (circle an object, sketch a rough shape) and the assistant interprets your annotation as an edit instruction or generation prompt.

Paid Creative Cloud subscribers get unlimited generations through a promotional period ending April 9, 2026. Free-tier users receive 20 generations. For a detailed breakdown of how teams are integrating the assistant into production workflows, see our AI assistant team playbook.

Generative Fill: Firefly Image 4 and 2K Output

Generative Fill now runs on Adobe Firefly Image 4, a substantial upgrade over the previous model. The practical differences are immediately visible:

  • 2K resolution output. Generated content is sharper and more detailed, reducing the need for post-generation cleanup.
  • Better prompt matching. The model follows complex prompts more accurately, with fewer hallucinated elements and better scene coherence.
  • Fewer artifacts. Edge blending between generated and existing content has improved noticeably, especially around fine details like hair and foliage.

The biggest workflow addition is Reference Image support. You can now provide a reference photo, and Generative Fill preserves that object’s identity with geometry-aware compositing that matches scale, rotation, lighting, color, and perspective. This is a significant step toward usable product photography compositing — place a specific product into a new scene while maintaining its exact appearance. For prompt strategies and common mistakes, see our Generative Fill guide.

Multi-Model AI: Choose Your Generator

Photoshop 2026 is no longer locked to a single AI model. Adobe has opened the platform to partner models, giving you a choice depending on the task:

  • Adobe Firefly Image 4 — the default, commercially safe option with Content Credentials built in.
  • Google Gemini 3 (Nano Banana Pro) — strong at character consistency and complex scene composition.
  • Black Forest Labs FLUX.2 Pro — excels at photorealistic textures and text accuracy within images.
  • Firefly Fill & Expand model (beta) — optimized specifically for inpainting and outpainting tasks, extended to Generate Similar in v27.4.

Adobe also announced access to 25+ AI models through Firefly Custom Models (public beta), including options from OpenAI, Runway (Gen-4.5), and others. You can train a custom model on your own images to capture a specific style, character, or photographic look — useful for brand consistency work. For strategies on combining AI generators with brand guidelines, see our brand consistency playbook.

Generative Remove: Rebuilt from Scratch

Generative Remove has been rebuilt from scratch on a diffusion model, replacing the older patch-based Content-Aware Fill approach. The result is dramatically better reconstruction, especially for:

  • Removing blemishes and sensor dust from portraits
  • Eliminating power lines and stray objects from landscapes
  • Cleaning up complex textured areas where the old Content-Aware Fill would produce obvious smearing

According to Adobe, the Generative Remove tool is now the most-used AI feature among professional users. Its strength is predictable reconstruction rather than hallucination — it fills removed areas with plausible texture rather than inventing new content.

New Adjustment Layers: Clarity, Dehaze, and Grain

Version 27.3 (January 2026) added three adjustment layers that were previously available only in Camera Raw or Lightroom:

  • Clarity — enhances midtone contrast for punch and definition without blowing highlights or crushing shadows.
  • Dehaze — cuts through atmospheric haze, fog, and low-contrast conditions. Particularly effective for landscape and architectural photography.
  • Grain — adds film-like texture as a non-destructive layer. Useful for matching a photographic aesthetic or masking AI-generated smoothness.

All three are fully maskable and non-destructive, meaning you can apply them selectively to parts of an image. The Color & Vibrance adjustment layer has also been expanded to include Temperature and Tint controls, enabling white balance correction directly in Photoshop without roundtripping to Camera Raw.

Selection and Background Improvements

Select Subject and Remove Background received major upgrades in this release cycle. The AI now handles:

  • Fine hair — wispy strands and flyaways are captured with significantly better accuracy.
  • Translucent fabrics — veils, sheer curtains, and similar materials retain their transparency.
  • Complex edges — irregular boundaries like foliage against sky are cleaner.

Adobe estimates that tasks requiring 30–45 minutes of manual masking now take 2–3 minutes with the improved AI selections. For a detailed comparison of all background removal approaches, see our fastest background removal methods guide.

Firefly Boards and Cross-App Workflow

Version 27.5 (released April 1, 2026) introduced Firefly Boards integration, creating a bidirectional workflow between Photoshop and Adobe’s generative workspace:

  • Open Photoshop cloud documents directly in Firefly Boards for rapid AI-driven exploration.
  • Generate variations using text-to-image, generative fill, and style transfer within the Boards environment.
  • Send selected variations back to Photoshop for precision refinement.

This is most useful during early concept phases when you want to explore multiple compositional directions before committing to detailed editing in Photoshop. Teams report faster client presentations and quicker creative direction evaluation as a result.

Other Notable Additions

  • Harmonize Tool — builds realistic composites by matching lighting, color temperature, and shadow direction between composited elements. A practical upgrade for anyone doing product placement or scene compositing.
  • Generative Expand — AI outpainting via the Crop Tool. Reliable for sky extensions and simple background fills; works best when expanding away from subjects.
  • Generative Upscale (Topaz Labs integration) — increases resolution and recovers detail on generated or low-resolution content.

What’s Still Missing

Despite the progress, several workflow gaps remain:

  • No AI batch processing — you cannot run generative operations across a set of similar images automatically.
  • No retrospective prompt editing — once content is generated, you cannot go back and modify the original prompt. You must regenerate from scratch.
  • Limited non-destructive AI layers — generated content still flattens into raster layers rather than remaining parametrically editable.
  • Color grading, typography, and layout remain entirely manual processes with no AI assistance.
  • Human finishing work is still essential for hero-level creative outputs. AI tools accelerate the middle of the process but rarely deliver final-quality results unattended.
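For the first gap, non-generative batch work can still be automated through Photoshop's long-standing Actions panel plus ExtendScript (File > Scripts). The sketch below applies a pre-recorded action to every JPEG in a folder; the action and set names ("CleanupAction", "MySet") are hypothetical placeholders for whatever you have recorded, and the script runs only inside Photoshop's scripting host, not as standalone JavaScript:

```
// ExtendScript -- run inside Photoshop via File > Scripts > Browse...
// Applies a pre-recorded action to every JPEG in an input folder and
// saves each result as a copy to an output folder.
// "CleanupAction" and "MySet" are hypothetical names; substitute the
// action and action set you recorded in the Actions panel.

var inputFolder = Folder.selectDialog("Choose the input folder");
var outputFolder = Folder.selectDialog("Choose the output folder");

if (inputFolder && outputFolder) {
    var files = inputFolder.getFiles("*.jpg");
    for (var i = 0; i < files.length; i++) {
        var doc = app.open(files[i]);
        app.doAction("CleanupAction", "MySet"); // recorded steps only, not generative
        var saveOptions = new JPEGSaveOptions();
        saveOptions.quality = 10;
        doc.saveAs(new File(outputFolder + "/" + doc.name), saveOptions, true);
        doc.close(SaveOptions.DONOTSAVECHANGES);
    }
}
```

Actions can record most non-generative steps (adjustment layers, filters, resizing), but generative features such as Generative Fill are not reliably scriptable this way — which is exactly the gap described above.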

Version Timeline

Version | Date          | Key Additions
v27.0   | October 2025  | Initial Photoshop 2026 release (Adobe MAX)
v27.3   | January 2026  | Clarity, Dehaze, Grain layers; Reference Image for Generative Fill; 2K output
v27.4   | February 2026 | Firefly Fill & Expand model extended to Generate Similar
v27.5   | April 2026    | Firefly Boards integration; AI assistant public beta

Bottom Line: Who Benefits Most

Photoshop 2026 delivers the most significant workflow acceleration for three groups:

  • Composite artists and retouchers — improved selections, diffusion-based removal, and reference-image compositing cut production time substantially.
  • Marketing and brand teams — multi-model support and Firefly Boards allow rapid asset variation without leaving the Adobe ecosystem.
  • Beginners — the AI assistant’s guided mode and natural-language editing lower the barrier to entry for complex operations.

For experienced designers who already have efficient manual workflows, the AI tools are best treated as accelerators for specific bottlenecks (background removal, object compositing, early exploration) rather than wholesale replacements for skilled editing. The fundamentals — masking, color theory, composition — remain the foundation that makes AI-assisted output look professional.

Key Takeaways

  • The AI assistant (public beta) enables natural-language and voice-driven editing on web and mobile, with automatic and guided modes.
  • Generative Fill now uses Firefly Image 4 with 2K output, reference-image support, and multi-model selection including Google Gemini 3 and FLUX.2 Pro.
  • New Clarity, Dehaze, and Grain adjustment layers bring Camera Raw controls into Photoshop as maskable, non-destructive layers.
  • Generative Remove has been rebuilt with a diffusion model, dramatically improving object removal quality.
  • Gaps remain: no AI batch processing, no retrospective prompt editing, and generated content still flattens to raster.