Eachlabs | AI Workflows for app builders
FLUX PuLID: FLUX-dev based Pure and Lightning ID Customization via Contrastive Alignment

Official Partner

Avg Run Time: 41.000s

Model Slug: flux-trained


The total cost depends on how long the model runs. It costs $0.001540 per second. Based on an average runtime of 41 seconds, each run costs about $0.0631. With a $1 budget, you can run the model around 15 times.

API & SDK

Create a Prediction

Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
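A minimal sketch of the create step using only the Python standard library. The base URL, endpoint path, auth header name, and request field names below are illustrative assumptions, not confirmed Eachlabs API details — check your dashboard or the API reference for the exact values.

```python
import json
import urllib.request

API_BASE = "https://api.eachlabs.ai/v1"  # assumed base URL -- verify in your dashboard

def build_prediction_request(api_key: str, model_slug: str, inputs: dict) -> urllib.request.Request:
    """Build the POST request that creates a prediction.

    The endpoint path and the "model"/"input" field names are placeholders
    for whatever the real API expects.
    """
    body = json.dumps({"model": model_slug, "input": inputs}).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/prediction/",
        data=body,
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_prediction_request(
    "YOUR_API_KEY",
    "flux-trained",
    {"prompt": "a cinematic portrait", "image_url": "https://example.com/face.png"},
)
# urllib.request.urlopen(req)  # the response body carries the prediction ID
```

The actual network call is left commented out; the response to this POST is where you read the prediction ID used in the next step.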

Get Prediction Result

Poll the prediction endpoint with the prediction ID until the result is ready. The API does not push results to you, so you'll need to repeatedly check at a short interval until you receive a success status.

Readme

Table of Contents
Overview
Technical Specifications
Key Considerations
Tips & Tricks
Capabilities
What Can I Use It For?
Things to Be Aware Of
Limitations

Overview

flux-trained — Image-to-Image AI Model

flux-trained, powered by Black Forest Labs' FLUX PuLID architecture based on FLUX-dev, revolutionizes image-to-image AI with pure and lightning-fast ID customization through contrastive alignment, enabling precise identity preservation in edits without quality loss. Developed as part of the flux family, this model excels at transforming input images while maintaining facial details, styles, and compositions, solving common issues in AI photo editing like identity drift or inconsistent outputs. Ideal for developers seeking a Black Forest Labs image-to-image solution, flux-trained supports high-resolution edits up to 4 megapixels, making it perfect for professional workflows in e-commerce and content creation.

Technical Specifications

What Sets flux-trained Apart

Unlike generic image-to-image models, flux-trained leverages FLUX-dev's contrastive alignment for superior ID fidelity, distilling pure identity customization that outperforms standard fine-tuning methods in speed and accuracy. This enables users to lock in subject identities across edits with minimal steps, ideal for AI image editor API integrations requiring consistent character rendering.

  • Lightning ID Customization: Achieves sub-second inference for ID-preserving edits via 4-step distillation, inheriting FLUX's advanced understanding of lighting and textures—users can customize identities 10x faster than traditional LoRAs while running on consumer GPUs (~13GB VRAM).
  • Multi-Reference Support: Handles up to 4 input images for style transfer and character consistency, preserving geometry and details in high-res outputs—perfect for generating product variants or multi-scene narratives without hallucinations.
  • 4MP Resolution Editing: Edits at up to 4 megapixels with coherent spatial relationships and readable text, supporting any aspect ratio—outshines smaller models in detail retention for image to image AI model applications like bulk catalog generation.

Technical specs include latent flow matching for efficient processing, input resolutions from 64x64, and output in standard image formats, with average times under 1 second on optimized hardware.

Key Considerations

  • Image customization with identity preservation
  • Integrated with the FLUX.1-dev text-to-image model
  • High identity similarity maintenance
  • Background, lighting, and style consistency
  • Advanced editing options

Tips & Tricks

How to Use flux-trained on Eachlabs

Access flux-trained through Eachlabs' Playground for instant testing with text prompts, input images (up to 4 references), and settings such as resolution or step count. To integrate via the API or SDK, provide a base image, a customization prompt, and ID-lock parameters to receive 4MP outputs in seconds. Eachlabs delivers production-ready, high-fidelity edits optimized for image-to-image workloads at scale.

---

Capabilities

Identity Customization: Seamless integration of specific identities into generated images.

Attribute Modification: Ability to alter attributes such as age, expression, and hairstyle through text prompts.

High-Quality Output: Generation of images with high resolution and fidelity.

What Can I Use It For?

Use Cases for flux-trained

For creators building personalized avatars, flux-trained takes a source portrait and applies "transform this face onto a cyberpunk character in neon-lit streets, maintain exact eye shape and smile" to deliver photorealistic results with perfect ID lock—streamlining avatar pipelines for gaming apps.

Marketers using AI photo editing for e-commerce can input product shots with references: feed a shoe image plus style refs for "place on urban sidewalk with dynamic shadows, match brand lighting"—generating A/B test variants at scale without studio reshoots.

Developers integrating flux-trained API for apps edit user uploads with multi-references, ensuring consistent branding across scenes, like adapting a logo to various backgrounds while preserving text legibility in 100+ languages.

Designers handling automated image editing API workflows use its pose guidance and hex color matching to refine mockups, such as "adjust fabric texture on this dress to silk sheen, hex #C0C0C0, keep model pose intact"—accelerating iterative design cycles.

Things to Be Aware Of

  • Combine Text Prompts with Visual Inputs
    Use creative and descriptive text prompts alongside input images to generate dynamic and unique outputs. For example:
    • "A futuristic version of [input identity] in a cyberpunk city"
    • "A hand-drawn sketch of [input identity] in an ancient warrior's armor."
  • Style Transformation
    Explore various artistic styles, such as watercolor, oil painting, or surrealism, while preserving the subject’s identity.
    • Example: "An oil painting of [input identity] in the style of the Renaissance."
  • Attribute Modification
    Modify age, expression, or other features using text prompts.
    • Example: "Make [input identity] look 20 years older with a happy expression."
  • Background Customization
    Experiment with generating outputs in different settings.
    • Example: "[Input identity] standing on a tropical beach during sunset."
  • High-Quality Outputs for Printing
    Use input images with resolutions of 512x512 or higher and set parameters for ultra-high-quality outputs suitable for printing or showcasing.
  • Focus on Lighting and Composition
    Add lighting effects or scene descriptions for dramatic results.
    • Example: "A cinematic portrait of [input identity] under a spotlight in a dark theater."
  • Multiple Identity Blends
    Try blending multiple identities or input features to create hybrid outputs.
    • Example: "A fusion of [input identity 1] and [input identity 2] as a superhero."
  • Creative Use Cases
    Push the boundaries of the model’s capabilities by combining functionality with other AI tools.
    • Example: Use outputs as input for video editing, digital animations, or augmented reality applications.
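The `[input identity]` placeholders in the examples above are just this readme's convention, not an API requirement. If you build many prompts programmatically, a small helper keeps them consistent (a sketch, with hypothetical function and argument names):

```python
def fill_prompt(template: str, *identities: str) -> str:
    """Substitute [input identity] (or numbered [input identity N]) placeholders
    with concrete subject descriptions."""
    if len(identities) == 1:
        return template.replace("[input identity]", identities[0])
    out = template
    for i, ident in enumerate(identities, start=1):
        out = out.replace(f"[input identity {i}]", ident)
    return out

print(fill_prompt(
    "A futuristic version of [input identity] in a cyberpunk city",
    "the subject in reference photo 1",
))
```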

  • Experiment with different timestep values
  • Test various styles and compositions
  • Try different lighting and background combinations
  • Compare stylized and realistic outputs
  • Experiment with portrait and full-body photos
  • Create images in various artistic styles

Limitations

  • Identity Fidelity: While improved, the model may still struggle to accurately replicate certain identities, particularly male faces.
  • Output Format: PNG, WEBP

Pricing

Pricing Detail

This model runs at a cost of $0.001540 per second.

The average execution time is 41 seconds, but this may vary depending on your input data.

The average cost per run is $0.063140.

Pricing Type: Execution Time

Cost Per Second means the total cost is calculated based on how long the model runs. Instead of paying a fixed fee per run, you are charged for every second the model is actively processing. This pricing method provides flexibility, especially for models with variable execution times, because you only pay for the actual time used.
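The figures above follow directly from per-second billing; a quick check of the arithmetic:

```python
PRICE_PER_SECOND = 0.001540  # USD per second, from the pricing table above
AVG_RUNTIME_S = 41           # average execution time in seconds

cost_per_run = PRICE_PER_SECOND * AVG_RUNTIME_S
runs_per_dollar = int(1.00 / cost_per_run)  # whole runs within a $1 budget

print(f"cost per run: ${cost_per_run:.6f}")  # $0.063140
print(f"runs per $1:  {runs_per_dollar}")    # 15
```

A run that finishes faster or slower than the 41-second average scales the cost linearly, which is the point of execution-time pricing.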