Stable Diffusion 3.5 Large

stable-diffusion-3.5-large

A text-to-image model that generates high-resolution images with fine details. It supports various artistic styles and produces diverse outputs from the same prompt, thanks to Query-Key Normalization.

Fast Inference
REST API

Model Information

Response Time: ~8 sec
Status: Active
Version: 0.0.1
Updated: 11 days ago

Prerequisites

  • Create an API Key from the Eachlabs Console
  • Install the required dependencies for your chosen language (e.g., requests for Python)

API Integration Steps

1. Create a Prediction

Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.

import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "stable-diffusion-3.5-large",
            "version": "0.0.1",
            "input": {
                "cfg": "3.5",
                "seed": None,  # null in JSON maps to None in Python
                "image": "your_file.image/jpeg",
                "steps": "35",
                "prompt": "your prompt here",
                "aspect_ratio": "1:1",
                "output_format": "webp",
                "output_quality": "90",
                "prompt_strength": "0.85"
            }
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]

2. Get Prediction Result

Poll the prediction endpoint with the prediction ID until the result is ready. Repeat the GET request at a short interval until you receive a success status.

def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again

3. Complete Example

Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.

try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
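Once the result arrives, you will usually want to save the generated image. The sketch below assumes `result['output']` is a direct image URL (the docs do not spell out its exact shape) and uses only the standard library so no extra dependency is needed:

```python
import urllib.request
from pathlib import Path

def output_path(prediction_id: str, output_format: str = "webp") -> Path:
    # Name the local file after the prediction ID and the requested output_format.
    return Path(f"{prediction_id}.{output_format}")

def save_output(url: str, dest: Path) -> Path:
    # Download the generated image from the prediction's output URL to disk.
    with urllib.request.urlopen(url, timeout=60) as resp:
        dest.write_bytes(resp.read())
    return dest
```

For example: `save_output(result["output"], output_path(prediction_id, "webp"))`.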

Additional Information

  • The API uses a two-step process: create prediction and poll for results
  • Response time: ~8 seconds
  • Rate limit: 60 requests/minute
  • Concurrent requests: 10 maximum
  • Poll the prediction status at a short interval until completion
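To stay under the 60 requests/minute limit, a retry wrapper with exponential backoff can help. This is a generic sketch, not documented API behavior; how Eachlabs signals rate limiting (e.g., a 429 response) is an assumption you should verify:

```python
import time

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    # Exponential backoff: base, 2*base, 4*base, ... capped at `cap` seconds.
    return min(cap, base * (2 ** attempt))

def call_with_retry(fn, max_attempts: int = 5, base: float = 1.0):
    # Retry `fn` when it raises (e.g., on a rate-limit error),
    # sleeping with exponential backoff between attempts.
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(backoff_delay(attempt, base))
```

Usage: `prediction_id = call_with_retry(create_prediction)`.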

Overview

Stable Diffusion 3.5 Large is a text-to-image generation model developed by Stability AI. With 8 billion parameters, it specializes in creating high-quality, detailed images that closely match written descriptions. The model is built on the Multimodal Diffusion Transformer (MMDiT) architecture and uses Query-Key Normalization to keep training stable and efficient.

Technical Specifications

Architecture: Multimodal Diffusion Transformer (MMDiT)

Parameters: 8 billion

Image Resolution: Capable of generating images up to 1 megapixel

Inference Steps: The standard model requires more inference steps, while the Turbo variant produces images in fewer steps thanks to Adversarial Diffusion Distillation (ADD)

Key Considerations

Prompt Specificity: Detailed prompts yield more accurate and relevant images. Overly vague or contradictory prompts may lead to unexpected results.


Legal Information

By using this model, you agree to:

  • Stability AI API agreement
  • Stability AI Terms of Service

Tips & Tricks

Optimal Prompt Length: Aim for 1–2 sentences that capture the essence of the desired image. Avoid overly complex phrasing.

Prompt Strength and CFG Balance: Start with default settings and adjust gradually. Increase cfg for closer prompt adherence; decrease it slightly for looser, more abstract outputs.

Aspect Ratio:

  • Use 1:1 for social media posts.
  • Choose 16:9 for wide-screen visuals.
  • Pick 4:5 or 3:4 for portraits.

Steps and Quality:

  • For fast previews, set steps between 10–20 and a medium output_quality.
  • For final outputs, increase steps to 30–50 and maximize output_quality.
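The preview-versus-final guidance above can be captured as presets. The numeric values below are illustrative choices within the suggested ranges, not official defaults; the field names mirror the input object from the API example:

```python
# Illustrative presets following the fast-preview vs final-output guidance.
PREVIEW = {"steps": "15", "output_quality": "60"}
FINAL = {"steps": "40", "output_quality": "100"}

def build_input(prompt: str, preset: dict, aspect_ratio: str = "1:1") -> dict:
    # Merge a quality preset into a base input payload for the prediction request.
    return {"prompt": prompt, "aspect_ratio": aspect_ratio,
            "output_format": "webp", **preset}
```

For example, `build_input("a misty forest", PREVIEW)` for a quick draft, then `build_input("a misty forest", FINAL)` for the final render.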

Seed Reusability: Generate multiple outputs with random seeds to explore variety, then lock in a specific seed to refine or iterate further.
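One way to implement that explore-then-lock workflow is sketched below; the 32-bit seed range is an assumption, so check what range the model actually accepts:

```python
import random

def explore_seeds(n: int) -> list:
    # Draw n random seeds for exploratory generations (32-bit range assumed).
    return [random.randint(0, 2**32 - 1) for _ in range(n)]

def lock_seed(base_input: dict, seed: int) -> dict:
    # Pin a specific seed so later runs reproduce the chosen composition.
    return {**base_input, "seed": seed}
```

Generate once per seed from `explore_seeds(4)`, pick the best result, then pass its seed through `lock_seed` while you iterate on the prompt.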

Image Input: Upload an image for inpainting or to anchor a generated concept. Pair with a detailed prompt for focused edits.

Iterative Refinement: Refine your prompts iteratively to progressively achieve the desired output.

Prompt Structuring: Clearly define elements such as style, subject, action, composition, lighting, and technical parameters in your prompts to achieve desired results. For instance, specifying "a futuristic treehouse city at sunset, intricate details of glass and wood structures" can guide the model to generate a detailed image matching this description.
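A small helper (purely illustrative) can assemble those structured elements into a single prompt string:

```python
def compose_prompt(style: str, subject: str, lighting: str = "", details: str = "") -> str:
    # Join the structured prompt elements, skipping any left empty.
    parts = [p for p in (style, subject, lighting, details) if p]
    return ", ".join(parts)
```

For example, `compose_prompt("digital art", "a futuristic treehouse city at sunset", details="intricate glass and wood structures")`.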

Start Simple:

  • Begin with straightforward prompts and gradually add complexity to refine results.

Leverage Styles:

  • Use terms like "oil painting," "digital art," or "watercolor" to explore different artistic styles.

Combine Concepts:

  • Experiment with merging multiple ideas in a single prompt for unique outputs (e.g., "a futuristic cityscape with medieval elements").

Capabilities

High-Quality Image Generation: Produces photorealistic images with high fidelity to the input prompt.

Versatile Style Adaptation: Capable of emulating a wide range of artistic styles, from realistic photography to abstract art.

Prompt Adherence: Demonstrates strong alignment with detailed and complex textual descriptions.

What can I use it for?

Digital Art Creation: Generate artwork for personal projects, concept designs, or professional use.

Content Generation: Create visual content for blogs, social media, marketing materials, and more.

Things to be aware of

Style Blending: Combine multiple artistic styles in a single prompt to create unique, hybrid images.

Scene Composition: Experiment with different scene descriptions to explore the model's interpretative capabilities.

Lighting Effects: Adjust lighting parameters in your prompts to see how the model renders various atmospheres and moods.

Limitations

Complex Scenes: Struggles with overly intricate prompts or highly specific scenes.

Resolution: Dependent on output_quality settings; very high resolutions may not always maintain sharpness.

Reproducibility: Randomized seed values can make it hard to recreate exact results.

Art Style Consistency: May vary in maintaining a consistent artistic style across multiple outputs.

Output Format: WEBP, JPG, PNG

Related AI Models

  • Recraft 20B (recraft-20b): Text to Image
  • Photon (photon): Text to Image
  • Fooocus (fooocus-api): Text to Image
  • Stable Diffusion 3.5 Large Turbo (stable-diffusion-3-5-large-turbo): Text to Image