Flux Dev

flux-dev

A 12 billion parameter rectified flow transformer capable of generating images from text descriptions

Partner Model
Fast Inference
REST API

Model Information

Response Time: ~10 sec
Status: Active
Version: 0.0.1
Updated: 9 days ago

Prerequisites

  • Create an API Key from the Eachlabs Console
  • Install the required dependencies for your chosen language (e.g., requests for Python)
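The examples below hardcode the API key for brevity. A safer minimal setup is to read it from an environment variable; the name EACHLABS_API_KEY is our convention here, not an Eachlabs requirement.

# Install the dependency first: pip install requests
import os
import requests

# Reading the key from the environment avoids committing it to source control.
API_KEY = os.environ.get("EACHLABS_API_KEY", "YOUR_API_KEY")
print("requests", requests.__version__, "ready; key configured:", API_KEY != "YOUR_API_KEY")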

API Integration Steps

1. Create a Prediction

Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.

import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "flux-dev",
            "version": "0.0.1",
            "input": {
                "seed": None,
                "image": "your_file.image/jpeg",
                "prompt": "your prompt here",
                "guidance": "3.5",
                "num_outputs": "1",
                "aspect_ratio": "1:1",
                "output_format": "webp",
                "output_quality": "80",
                "prompt_strength": "0.8",
                "num_inference_steps": "28",
                "disable_safety_checker": False
            }
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]

2. Get Prediction Result

Poll the prediction endpoint with the prediction ID until the result is ready. The API uses long-polling, so you'll need to repeatedly check until you receive a success status.

def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again

3. Complete Example

Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.

try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
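Once the prediction succeeds, the output field points to the generated image. Below is a minimal download sketch that reuses the requests import from above; treating output as either a single URL or a list of URLs is our assumption, so check the actual response payload before relying on it.

def download_output(output, filename="flux_output.webp"):
    # Assumption: "output" is a URL string or a list of URLs (e.g. when
    # num_outputs > 1) - verify against the real response shape.
    url = output[0] if isinstance(output, list) else output
    image_data = requests.get(url).content
    with open(filename, "wb") as f:
        f.write(image_data)
    return filename

# Usage, after get_prediction(prediction_id) returns:
# saved_path = download_output(result["output"])
# print(f"Saved to {saved_path}")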

Additional Information

  • The API uses a two-step process: create prediction and poll for results
  • Response time: ~10 seconds
  • Rate limit: 60 requests/minute
  • Concurrent requests: 10 maximum
  • Use long-polling to check prediction status until completion (see the timeout-aware sketch after this list)
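To guard against stuck predictions and stay comfortably under the rate limit, the sketch below adds a timeout and a longer poll interval to the polling loop from step 2. The helper name get_prediction_with_timeout and the default values are ours, not part of the Eachlabs API.

def get_prediction_with_timeout(prediction_id, timeout_seconds=120, poll_interval=2):
    # Variant of get_prediction(): gives up after timeout_seconds and polls
    # every poll_interval seconds, keeping request volume well under the
    # 60 requests/minute rate limit.
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        if result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(poll_interval)
    raise TimeoutError(f"Prediction {prediction_id} did not finish within {timeout_seconds}s")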

Overview

FLUX.1 [dev] is a 12-billion-parameter rectified flow transformer developed by Black Forest Labs, designed to generate high-quality images from text descriptions. It serves as an open-weight model aimed at advancing scientific research and empowering artists to develop innovative workflows.

Technical Specifications

Model Architecture: FLUX.1 [dev] is a rectified flow transformer comprising 12 billion parameters, optimized for text-to-image generation tasks.

Training Methodology: The model was trained using guidance distillation, enhancing its efficiency while maintaining high output quality.

Key Considerations

License Restrictions: FLUX.1 [dev] is released under a non-commercial license, permitting use for personal, scientific, and certain commercial purposes as outlined in the license agreement.


Ethical Use: The model and its derivatives must not be used in ways that violate laws, exploit or harm minors, disseminate false information, or engage in activities that harass or bully individuals or groups.


Input Preparation: Ensure that inputs, such as the prompt text and any reference image, are appropriately prepared before calling the model.


Legal Information

By using this model, you agree to:

  • Black Forest Labs API agreement
  • Black Forest Labs Terms of Service

Tips & Tricks

Prompt Engineering: The quality of generated images is heavily influenced by the specificity and clarity of the input prompts. Experimenting with different prompting styles can yield better results.


Optimal Parameter Settings: Output quality and speed depend heavily on inference parameters such as guidance, num_inference_steps, and prompt_strength; adjust them iteratively to find the best trade-off for your use case (see the sketch below).
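As an illustration of parameter exploration, the sketch below queues one prediction per guidance value for the same prompt so the outputs can be compared side by side. It reuses HEADERS from the integration steps; the helper name sweep_guidance is ours, and whether omitted input fields fall back to sensible defaults is an assumption, so include the full input block from step 1 if in doubt.

def sweep_guidance(prompt, guidance_values=("2.0", "3.5", "5.0")):
    # Creates one prediction per guidance value; parameter names and the
    # string-typed values mirror the request body shown in step 1.
    prediction_ids = []
    for guidance in guidance_values:
        response = requests.post(
            "https://api.eachlabs.ai/v1/prediction/",
            headers=HEADERS,
            json={
                "model": "flux-dev",
                "version": "0.0.1",
                "input": {
                    "prompt": prompt,
                    "guidance": guidance,
                    "num_inference_steps": "28",
                    "aspect_ratio": "1:1",
                    "output_format": "webp"
                }
            }
        )
        prediction_ids.append(response.json()["predictionID"])
    return prediction_ids

# Usage: fetch each result with get_prediction() and compare the outputs.
# ids = sweep_guidance("a lighthouse at dusk, detailed oil painting")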

Capabilities

High-Quality Image Generation: FLUX.1 [dev] produces cutting-edge output quality, closely matching the performance of state-of-the-art models like FLUX.1 [pro].


Open Weights for Research: The availability of open weights encourages new scientific research and the development of innovative artistic workflows.


Training with Guidance Distillation: Improved efficiency through guided training methodologies.

What can I use it for?

Artistic Creation: Artists can leverage FLUX.1 [dev] to generate unique and high-quality images based on textual descriptions, enhancing creative workflows.


Research and Development: Researchers can utilize the model's open weights to explore advancements in AI-driven image generation and related fields.


Educational Purposes: Generate educational materials with ease.

Things to be aware of

Experiment with Diverse Prompts: Test the model's versatility by inputting a wide range of text descriptions and observing the variety and quality of the generated images.


Practical Use Cases: Test the model in real-world applications.


Parameter Tweaks: Optimize output by exploring different parameter settings.


Expected Output Examples: Preview the types of images the model can create.


Advanced Scenarios: Leverage advanced features for specialized tasks.


Creative Applications: Push the boundaries of art and creativity using the model.


Tool Integration: Combine the model with other tools for enhanced functionality.

Limitations

Prompt Sensitivity: The model's output quality and relevance are highly dependent on the input prompts, and it may not always generate images that perfectly match the descriptions.


Bias Amplification: As a statistical model, FLUX.1 [dev] may inadvertently amplify existing societal biases present in the training data.

Output Format: PNG, JPG, WEBP

Related AI Models

  • Sana by Nvidia (sana) - Text to Image
  • Stable Diffusion 3.5 Large (stable-diffusion-3-5-large) - Text to Image
  • Photon Flash (photon-flash) - Text to Image
  • Photon (photon) - Text to Image