Quick Answer
Photoshop’s AI Assistant (public beta since March 10, 2026) lets you edit photos through plain-language prompts and voice commands. For solo use it’s straightforward. For teams shipping client work, you need structure: prompt logs, approval gates, and clear rules about when to finish manually. This playbook gives you that structure.
Adobe’s March 10 announcement introduced the AI Assistant for Photoshop on web and mobile, followed by the March 19 Firefly expansion adding custom models and the Project Moonlight private beta. Together, these updates make conversational editing a real production option. This article focuses on what our existing coverage hasn’t: how a team actually adopts this safely.
What Changed in March 2026 (Confirmed Facts)
The following details come directly from Adobe’s official blog posts:
- AI Assistant public beta (Mar 10): Available in Photoshop web and mobile. Accepts natural-language prompts for removing distractions, changing backgrounds, refining lighting, and adjusting color. Supports voice commands on mobile.
- AI Markup (Mar 10): Draw directly on an image inside the contextual task bar and pair annotations with text prompts to control where changes land.
- Two editing modes: Automatic (the AI applies the edit) or Guided (the AI walks you through it step by step).
- Credit limits: Paid subscribers get unlimited generations through April 9, 2026; free users get 20 per month.
- Firefly Custom Models (Mar 19): Public beta lets you train models on your own images, optimized for character, illustration, and photographic styles. Models are private by default.
- 30+ models in one workspace (Mar 19): Firefly now offers models from Adobe and third parties, including Google, OpenAI, Runway, and Black Forest Labs.
- Project Moonlight (Mar 19): Private beta for agentic, multi-step AI assistants across Photoshop, Express, and Acrobat.
Editorial Note
Adobe has not published team-governance features, role-based access controls, or audit-log tooling for AI Assistant edits as of this writing. The workflow recommendations below are our own, based on production experience with the beta. We clearly label what is confirmed versus our commentary throughout.
Workflow Before vs. After AI Assistant
| Stage | Before (Manual) | After (AI-Assisted) |
|---|---|---|
| Rough cleanup | Clone Stamp / Healing Brush, 15–30 min per image | Prompt: “remove distractions from the background” — seconds |
| Background swap | Select Subject → Refine Edge → new layer → color-match | Prompt + AI Markup to constrain region; auto mode applies in one step |
| Lighting correction | Curves + Dodge/Burn + masking | Prompt: “brighten subject, warm the fill light” — iterate via chat |
| Variant generation | Duplicate PSD → manual edits per version | Re-prompt for variations; compare and pick |
| Final polish | Manual: frequency separation, dodge/burn, sharpening | Still manual — AI not reliable enough for final-pixel work |
The pattern: AI Assistant dramatically compresses early and mid stages. Final polish stays manual. Teams that recognize this boundary avoid the biggest risk — shipping AI-generated output that hasn’t been human-verified.
Governance Checkpoints for Team Workflows
Adobe hasn’t shipped built-in governance tools for AI Assistant yet (our commentary). Until they do, we recommend teams add these checkpoints manually:
- Pre-edit brief. Before touching a file, the designer records the task scope: what the AI will handle and what stays manual. A single line in your project tracker is enough — “AI: background swap + lighting pass. Manual: skin retouch + text placement.”
- Prompt log. Save every prompt used in a sidecar text file or shared doc, tagged with the PSD filename and timestamp; a minimal logging sketch follows this list. This makes results reproducible and auditable.
- Mid-edit checkpoint. After the AI pass, a second team member reviews the working file before manual finishing begins. This catches hallucinated detail (extra fingers, warped text, color drift) early.
- Final approval gate. The finished file goes through your existing review process — no change here — but reviewers should specifically check areas touched by AI against the prompt log.
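The prompt-log checkpoint is easy to script. Below is a minimal Python sketch of one way to do it: each prompt is appended as a JSON line to a sidecar file next to the PSD. The `log_prompt` helper, the `.prompts.jsonl` convention, and the field names are all our own; Adobe exposes no logging API for AI Assistant, so prompts have to be copied into the log by hand.

```python
#!/usr/bin/env python3
"""Append AI Assistant prompts to a sidecar log next to the PSD.

A minimal sketch of the prompt-log checkpoint. The .prompts.jsonl
convention and field names are our own, not an Adobe format.
"""
import json
import sys
from datetime import datetime, timezone
from pathlib import Path


def log_prompt(psd_path: str, prompt: str, label: str) -> None:
    """Write one log entry per line (JSON Lines) beside the PSD."""
    sidecar = Path(psd_path).with_suffix(".prompts.jsonl")
    entry = {
        "psd": Path(psd_path).name,
        "label": label,  # descriptive, e.g. "bg-swap-sunset"
        "prompt": prompt,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with sidecar.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    # Usage: log_prompt.py hero-shot.psd "remove distractions" bg-cleanup
    log_prompt(sys.argv[1], sys.argv[2], sys.argv[3])
```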
Prompt Versioning: Why and How
Conversational editing is iterative. You rarely get a perfect result on the first prompt, and teams need to know which prompt produced the approved output. Our recommended approach:
- Name, don’t number. Instead of “v1, v2, v3,” use descriptive labels: `bg-swap-sunset`, `bg-swap-studio-white`.
- Pin the winning prompt. Once a direction is approved, copy the exact prompt into the project brief so anyone can reproduce it; a sketch of a pinned entry follows this list.
- Record the model. If your team uses Firefly Custom Models, note which model version generated the result. Custom model training can change outputs between sessions.
- Snapshot before manual edits. Save a “post-AI, pre-manual” version of the file. If the client changes direction, you can re-branch from the AI output without redoing manual work.
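To make the list above concrete, here is what a pinned entry in the same sidecar log might look like. Every field name is our own convention and the custom-model name is hypothetical; the point is that a single record captures the approved prompt, the model version behind it, and the post-AI snapshot you can re-branch from.

```python
# Sketch of a pinned (approved) entry in the same .prompts.jsonl sidecar.
# The "model", "status", and "snapshot" fields are our own convention.
pinned = {
    "psd": "hero-shot.psd",
    "label": "bg-swap-sunset",
    "prompt": "swap background to a warm sunset beach, keep subject lighting",
    "status": "approved",                  # pinned after art-director sign-off
    "model": "firefly-custom/acme-brand",  # hypothetical custom-model name
    "model_version": "2026-03-21",         # training date doubles as version
    "snapshot": "hero-shot_post-ai.psd",   # saved before manual finishing
}
```

Appending pinned entries to the same file, with `status` separating them from exploratory prompts, keeps the full history in one place per asset.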
Review and Approval Gates
For teams with existing review processes, AI Assistant doesn’t replace your gates — it adds one:
| Gate | Who | What to Check |
|---|---|---|
| 1. Scope approval | Art director / lead | AI vs. manual split is appropriate for this asset |
| 2. Post-AI review | Peer designer | No artifacts, anatomy errors, brand-off colors, or hallucinated elements |
| 3. Final QA | Art director / client | Production-ready quality, prompt log attached |
When Manual Editing Still Wins
AI Assistant speeds up a lot of tasks, but some work should stay fully manual:
- High-end skin retouching. Frequency separation and dodge/burn produce results AI can’t match for beauty, fashion, and portrait work.
- Precise product cutouts. E-commerce product shots need pixel-perfect edges. The Pen Tool is still more reliable than prompt-based selection for hard-edge products.
- Typography and brand lockups. AI-generated text remains inconsistent. Place, kern, and align type manually.
- Legally sensitive composites. If the output will appear in regulated advertising (pharma, finance, real estate), human-controlled edits with documented steps remain the safest path.
- Complex hand and body anatomy. As noted in our AI features evaluation, generative tools still struggle with anatomical accuracy.
Firefly Custom Models: Team Implications
The March 19 Firefly update added custom model training (public beta). For teams, the key confirmed facts: models are trained on your uploaded images, they’re private by default, and they’re optimized for character, illustration, and photographic styles. Our take on what this means for team workflows:
- Brand consistency gains. Train a model on approved brand assets so generated elements (backgrounds, textures, supplementary imagery) match your visual identity without manual color-grading every output.
- Version control matters. Re-training a custom model on updated assets can change output. Document which model version was used for each deliverable; the manifest sketch after this list shows one way.
- Asset selection is a design decision. The images you upload to train the model define its output. Treat training-set curation like any other creative brief — get sign-off from the art director.
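One lightweight way to apply the last two points is a training-set manifest kept alongside the model. This is a sketch under our own conventions, since Adobe publishes no manifest format for Firefly Custom Models; the model name, paths, and fields are illustrative.

```python
# Sketch of a training-set manifest for a Firefly custom model.
# Everything here is our own convention: the goal is to make
# "which images trained which model version" auditable and tied
# to an art-director sign-off.
manifest = {
    "model_name": "acme-brand",                # hypothetical
    "model_version": "2026-03-21",
    "training_images": [
        "brand/approved/campaign-2025/*.jpg",  # only signed-off assets
    ],
    "style_target": "photographic",            # one of the three optimized styles
    "approved_by": "art-director",
    "approved_on": "2026-03-20",
}
```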
Credit Management for Teams
Under the current beta terms, paid Photoshop subscribers have unlimited AI generations through April 9, 2026. After that, Adobe will likely revert to credit-based usage. Teams should plan now:
- Use the unlimited window to experiment, build prompt libraries, and train team members.
- Track which tasks consume the most generations so you can budget credits once limits return; the tally sketch after this list shows one approach.
- Establish a “prompt-first” discipline: write and review the prompt before executing, rather than iterating blindly.
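If you keep the sidecar prompt logs described earlier, budgeting becomes a counting exercise. The sketch below tallies generations per task across a project directory; it assumes the `.prompts.jsonl` convention and the descriptive-label naming scheme from the governance and versioning sections, both of which are our own conventions.

```python
#!/usr/bin/env python3
"""Tally AI generations per task from the sidecar prompt logs.

A sketch for credit budgeting, assuming the .prompts.jsonl convention:
count log entries per label so you can see which task types (bg-swap,
lighting, cleanup, ...) will consume the most credits once limits return.
"""
import json
from collections import Counter
from pathlib import Path


def tally(project_dir: str) -> Counter:
    counts: Counter = Counter()
    for sidecar in Path(project_dir).rglob("*.prompts.jsonl"):
        for line in sidecar.read_text(encoding="utf-8").splitlines():
            entry = json.loads(line)
            # Group by the label's first two segments,
            # e.g. "bg-swap-sunset" -> "bg-swap"
            task = "-".join(entry["label"].split("-")[:2])
            counts[task] += 1
    return counts


if __name__ == "__main__":
    for task, n in tally(".").most_common():
        print(f"{task}: {n} generations")
```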
Key Takeaways
- AI Assistant compresses early and mid-stage editing dramatically; final polish stays manual for production-quality work.
- Add governance checkpoints (scope brief, prompt log, post-AI peer review) to catch artifacts before manual finishing begins.
- Version prompts descriptively and pin the winning prompt in your project brief for reproducibility.
- Firefly Custom Models let teams bake brand consistency into AI generations — but treat model training like a creative brief that needs sign-off.
- Use the unlimited-generation beta window (through April 9) to build prompt libraries and train your team before credits kick back in.