Eachlabs | AI Workflows for app builders
flux-2-flash-edit

FLUX-2

FLUX.2 [dev] from Black Forest Labs enables fast image-to-image editing with precise, natural-language modifications and hex color control.

Avg Run Time: 10 seconds

Model Slug: flux-2-flash-edit

Release Date: December 23, 2025

Playground

Your request will cost $0.005 per megapixel for input and $0.005 per megapixel for output.
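The per-megapixel pricing above can be estimated ahead of time. The helper below is a sketch based only on the published rate; the service's actual megapixel rounding may differ.

```python
# Estimate request cost from the published rate: $0.005 per megapixel
# for input plus $0.005 per megapixel for output. Exact billing rounding
# is an assumption.

RATE_PER_MEGAPIXEL = 0.005  # USD, same rate for input and output

def estimate_cost(input_px, output_px):
    """Return the estimated cost in USD for one edit request.

    input_px / output_px are (width, height) tuples in pixels.
    """
    mp_in = (input_px[0] * input_px[1]) / 1_000_000
    mp_out = (output_px[0] * output_px[1]) / 1_000_000
    return RATE_PER_MEGAPIXEL * (mp_in + mp_out)

# A 1024x1024 input edited to a 1024x1024 output:
print(round(estimate_cost((1024, 1024), (1024, 1024)), 4))  # 0.0105
```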

API & SDK

Create a Prediction

Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.

Get Prediction Result

Poll the prediction endpoint with the prediction ID until the result is ready. The API uses long-polling, so you'll need to repeatedly check until you receive a success status.
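The polling loop can look like the sketch below. The result URL pattern, header name, and status strings are assumptions; the timeout-and-sleep structure is the part this section actually describes.

```python
import json
import time
import urllib.request

# Assumed URL pattern, header, and terminal statuses -- confirm them
# against the Eachlabs API reference.
RESULT_URL = "https://api.eachlabs.ai/v1/prediction/{id}"
API_KEY = "YOUR_API_KEY"
TERMINAL = {"success", "error"}

def get_result(prediction_id, interval=1.0, timeout=120.0):
    """Poll until the prediction reaches a terminal status or times out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            RESULT_URL.format(id=prediction_id),
            headers={"X-API-Key": API_KEY},
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        if body.get("status") in TERMINAL:
            return body
        time.sleep(interval)  # back off between checks
    raise TimeoutError(f"prediction {prediction_id} not ready after {timeout}s")

# Usage (requires a valid API key and an existing prediction ID):
# result = get_result("your-prediction-id")
# print(result.get("status"), result.get("output"))
```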

Readme

Table of Contents
Overview
Technical Specifications
Key Considerations
Tips & Tricks
Capabilities
What Can I Use It For?
Things to Be Aware Of
Limitations

Overview

FLUX.2 Flash Edit is a high-speed image-to-image editing model from Black Forest Labs, designed for precise, production-ready image transformations driven by natural-language prompts. Within the FLUX.2 family it is optimized for rapid inference while retaining fine control over editing operations: users make targeted modifications to existing images through text descriptions, and the model preserves subject identity, composition, and visual continuity while applying the requested changes. The architecture builds on FLUX.2's diffusion-based approach, adding optimizations such as FP8 inference and a unified generation-and-editing pipeline that make image-to-image tasks faster and more consistent than earlier iterations. What distinguishes the model is its combination of low-latency performance with precise adherence control, making it suitable both for creative workflows and for brand-focused production pipelines that require dependable, repeatable results.

Technical Specifications

Architecture
Diffusion-based transformer model from Black Forest Labs FLUX.2 family
Text Encoder
Single text encoder (Mistral Small 3.1) with max sequence length of 512 tokens
Resolution
Supports image sizes between 512 and 2048 pixels; can edit images up to approximately 4 megapixels
Input/Output Formats
PNG and JPG for both input and output, maintaining color accuracy and fidelity
Maximum Input Images
Up to 4 images per request
Guidance Scale Range
0 to 20 (default 2.5)
Performance Optimization
FP8 performance optimization for faster inference
Seed Control
Supports manual seed specification for reproducibility or random generation
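The documented ranges above (guidance scale 0-20, image sides 512-2048 px) can be checked locally before sending a request. This is a client-side convenience sketch with illustrative parameter names, not part of the API itself.

```python
# Validate edit parameters against the documented ranges: guidance scale
# 0-20 (default 2.5), image sides 512-2048 px. Parameter names here are
# illustrative.

GUIDANCE_RANGE = (0.0, 20.0)
SIDE_RANGE = (512, 2048)

def validate_params(width, height, guidance_scale=2.5):
    """Raise ValueError if any parameter falls outside the documented range."""
    lo, hi = SIDE_RANGE
    for name, side in (("width", width), ("height", height)):
        if not lo <= side <= hi:
            raise ValueError(f"{name}={side} outside {lo}-{hi} px")
    g_lo, g_hi = GUIDANCE_RANGE
    if not g_lo <= guidance_scale <= g_hi:
        raise ValueError(f"guidance_scale={guidance_scale} outside {g_lo}-{g_hi}")

validate_params(1024, 1024)      # passes silently
# validate_params(4096, 1024)    # would raise ValueError: width outside range
```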

Key Considerations

  • Guidance scale (default 2.5) controls how closely the model adheres to your prompt; higher values increase prompt adherence but may reduce image naturalness
  • Input images are resized to 1 megapixel for processing, so plan accordingly for cost optimization
  • The model preserves subject identity and composition by design, making it ideal for targeted edits rather than complete transformations
  • Prompt clarity is essential; natural language descriptions work best when specific about what to modify
  • Multiple reference images can be provided to guide the editing direction
  • Seed specification enables reproducible results for iterative refinement
  • Safety checking is available and enabled by default to filter NSFW content
  • Output resolution and input image size both affect processing cost and quality
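Since inputs are resized to about 1 megapixel for processing (second bullet above), it can help to know the dimensions the model will likely work at. The helper below preserves aspect ratio; the service's exact rounding behavior is an assumption.

```python
import math

# Inputs are resized to ~1 MP before processing. This sketch computes the
# approximate working dimensions; the service's exact rounding is assumed.

TARGET_MEGAPIXELS = 1.0

def effective_size(width, height):
    """Scale (width, height) down to ~1 MP, keeping aspect ratio."""
    mp = (width * height) / 1_000_000
    if mp <= TARGET_MEGAPIXELS:
        return width, height          # small images pass through unscaled
    scale = math.sqrt(TARGET_MEGAPIXELS / mp)
    return round(width * scale), round(height * scale)

# A 4000x3000 (12 MP) photo would be processed at roughly 1155x866:
print(effective_size(4000, 3000))  # (1155, 866)
```

This is why fine detail in very high-resolution sources may not survive: a 12 MP photo loses roughly two thirds of its linear resolution before editing begins.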

Tips & Tricks

  • Start with the default guidance scale of 2.5 and adjust upward only if the model isn't following your prompt closely enough; values above 10 may produce unnatural results
  • Use specific, descriptive prompts rather than vague instructions; for example, "Remove the meat from the hamburger" is more effective than "Change the hamburger"
  • When making color adjustments, hex color codes can be incorporated into prompts for precise color control
  • For iterative refinement, save the seed value from successful edits to reproduce similar results with minor prompt modifications
  • Provide multiple reference images when you want the model to consider different aspects of the desired edit
  • Test with square_hd (1024x1024) resolution first to understand model behavior before scaling to larger sizes
  • For e-commerce applications, use consistent seeds and guidance scales across product image batches to maintain visual coherence
  • Break complex edits into multiple steps rather than attempting all changes in a single prompt
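Two of the tips above (hex codes in prompts, descriptive instructions) can be combined in a small prompt builder. The template wording is illustrative, not a required format:

```python
# Build an edit prompt that pins an exact color with a #RRGGBB hex code,
# per the color-control tip above. The phrasing is an illustrative template.

def color_edit_prompt(target, hex_code):
    """Return a natural-language edit prompt with a precise hex color."""
    if not (hex_code.startswith("#") and len(hex_code) == 7):
        raise ValueError(f"expected a #RRGGBB hex code, got {hex_code!r}")
    return f"Change the {target} to the color {hex_code}"

prompt = color_edit_prompt("car's paint", "#1A73E8")
print(prompt)  # Change the car's paint to the color #1A73E8
```

Pair a prompt like this with a fixed seed (fourth tip above) to reproduce and fine-tune a successful color edit.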

Capabilities

  • Precise image-to-image editing with natural language control
  • Maintains subject identity and composition while applying targeted modifications
  • Supports multiple reference images to guide editing direction
  • Fine-grained color control through hex color specifications
  • Real-time or near-real-time processing with low-latency performance
  • Handles up to 4 megapixel output resolution with exceptional detail and color precision
  • Produces brand-ready visuals suitable for professional workflows
  • Consistent results across multiple generations with seed control
  • Effective for both subtle refinements and significant style transformations
  • Unified generation and editing pipeline for streamlined workflows

What Can I Use It For?

  • E-commerce product photography retouching and background modification
  • Brand asset creation and style consistency across product catalogs
  • UI mockup generation and refinement for design workflows
  • Fashion and apparel image editing with precise color and style adjustments
  • Real estate photography enhancement and staging modifications
  • Marketing material creation with rapid iteration capabilities
  • Social media content optimization and restyling
  • Professional photography retouching for color correction and element removal
  • Creative design projects requiring precise visual control
  • Batch processing of similar images with consistent styling

Things to Be Aware Of

  • The model excels at targeted edits but may struggle with requests that fundamentally alter image composition or add entirely new elements not suggested by the original image
  • Processing cost is based on megapixels of both input and output; larger images incur proportionally higher costs
  • Input images are automatically resized to 1 megapixel during processing, which may affect fine detail preservation in very high-resolution source images
  • NSFW content detection is active by default, which may flag certain legitimate creative or artistic content
  • Guidance scale adjustments require experimentation to find optimal balance between prompt adherence and natural-looking results
  • The model performs best with clear, descriptive prompts; ambiguous instructions may produce inconsistent results
  • Real-time performance is achieved through optimization trade-offs; extremely complex edits may require longer processing times
  • User feedback and community discussions report strong, repeatable results in professional workflows, batch processing, and production pipelines, with color grading and style transfer frequently cited as strengths

Limitations

  • The maximum output resolution of approximately 4 megapixels makes the model unsuitable for very high-resolution editing projects
  • The model is optimized for targeted edits rather than complete image regeneration or composition changes
  • Complex multi-step edits may require sequential processing rather than single-prompt execution
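The sequential-processing limitation above suggests a simple chaining pattern: run each edit as its own request and feed one step's output image into the next. `run_edit` below is a hypothetical stand-in for the create-and-poll calls from the API & SDK section.

```python
# Sequential multi-step editing sketch: apply prompts one at a time, each
# step editing the previous step's output. `run_edit` is a hypothetical
# callable standing in for a full create-prediction + poll round trip.

def chain_edits(image_url, prompts, run_edit):
    """Apply prompts in order; each step edits the previous output image."""
    current = image_url
    for prompt in prompts:
        current = run_edit(current, prompt)  # returns the output image URL
    return current

# Usage with a dummy runner that just tags the URL per step:
final = chain_edits(
    "https://example.com/room.jpg",
    ["Remove the lamp", "Repaint the wall to #F5F5DC"],
    run_edit=lambda url, prompt: f"{url}+edited",
)
print(final)  # https://example.com/room.jpg+edited+edited
```

Keeping each prompt focused on a single change mirrors the "break complex edits into multiple steps" tip and tends to give the model less to get wrong per request.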