Stable Diffusion 3.5 Medium

stable-diffusion-3.5-medium

Stable Diffusion 3.5 Medium is a 2.5 billion parameter image model with an improved MMDiT-X architecture.

Fast Inference
REST API

Model Information

Response Time: ~8 sec
Status: Active
Version: 0.0.1
Updated: 27 days ago

Prerequisites

  • Create an API Key from the Eachlabs Console
  • Install the required dependencies for your chosen language (e.g., requests for Python)
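
For Python, a minimal setup might look like the sketch below; the environment variable name EACHLABS_API_KEY is an illustrative choice rather than something mandated by the API:

# Install the HTTP client used in the examples below:
#   pip install requests

import os

# Keeping the key in an environment variable avoids hard-coding secrets;
# EACHLABS_API_KEY is just an example name for this sketch.
API_KEY = os.environ.get("EACHLABS_API_KEY", "YOUR_API_KEY")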

API Integration Steps

1. Create a Prediction

Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.

import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    # Submit the model inputs; the response contains a prediction ID.
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "stable-diffusion-3.5-medium",
            "version": "0.0.1",
            "input": {
                "cfg": "5",
                "seed": None,  # None is sent as JSON null (random seed)
                "image": "your_file.image/jpeg",
                "steps": "40",
                "prompt": "your prompt here",
                "aspect_ratio": "1:1",
                "output_format": "webp",
                "output_quality": "90",
                "prompt_strength": "0.85"
            }
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]

2. Get Prediction Result

Poll the prediction endpoint with the prediction ID until the result is ready. The client checks the status repeatedly at a short interval until it receives a success (or error) status.

def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
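
If you would rather cap how long the client waits, a small variant of the same loop with a timeout might look like this (the 120-second limit and the function name are illustrative choices, not part of the API):

def get_prediction_with_timeout(prediction_id, timeout_seconds=120):
    # Same polling loop as above, but give up after timeout_seconds.
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
    raise TimeoutError(f"Prediction {prediction_id} did not finish within {timeout_seconds}s")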

3. Complete Example

Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.

try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")

Additional Information

  • The API uses a two-step process: create prediction and poll for results
  • Response time: ~8 seconds
  • Rate limit: 60 requests/minute
  • Concurrent requests: 10 maximum
  • Poll the prediction status endpoint until the prediction completes
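
To stay within these limits on the client side, one rough approach is to space out prediction creation and cap concurrency, as in the sketch below; the helper names, the semaphore, and the one-second spacing are assumptions for illustration, not part of the Eachlabs API:

import threading
import time

MAX_CONCURRENT = 10        # documented concurrent-request limit
MIN_CREATE_INTERVAL = 1.0  # ~60 requests/minute, i.e. about one per second

_slots = threading.Semaphore(MAX_CONCURRENT)
_rate_lock = threading.Lock()
_last_create = [0.0]

def run_prediction_throttled():
    # Hold a slot for the full create-and-poll cycle so that at most
    # MAX_CONCURRENT predictions are in flight at once. Note that the
    # polling GETs in get_prediction() also count toward the per-minute
    # budget, so widen its sleep interval when running many jobs at once.
    with _slots:
        with _rate_lock:
            wait = MIN_CREATE_INTERVAL - (time.time() - _last_create[0])
            if wait > 0:
                time.sleep(wait)
            _last_create[0] = time.time()
        prediction_id = create_prediction()
        return get_prediction(prediction_id)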

Overview

Stable Diffusion 3.5 Medium is an advanced image generation model designed to create highly detailed and visually appealing content based on textual prompts. The model enables users to transform their ideas into images by leveraging state-of-the-art diffusion techniques. Its flexibility allows for a wide range of creative possibilities, including artwork, photorealistic images, and stylized designs.

Technical Specifications

  • Prompt Strength: Determines how strongly the model adheres to the given prompt. Lower values allow for more abstract outputs, while higher values enforce stricter adherence.
  • CFG (Classifier-Free Guidance): Controls the balance between creativity and prompt adherence. Higher values produce outputs closely tied to the prompt, while lower values add more randomness.
  • Steps: Specifies the number of iterations for the diffusion process. Higher values yield better details but increase generation time.
  • Aspect Ratio: Supports multiple formats for different visual needs.
  • Output Quality: Controls the quality and file size of the generated image; higher values favor fidelity over smaller files.
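
These settings correspond to the fields of the input object in the API example above. A plausible payload tuned for a detailed final render might look like this (the specific values are illustrative, not recommended defaults):

final_input = {
    "prompt": "a fluffy white cat sitting on a sunny windowsill with a garden in the background",
    "prompt_strength": "0.85",  # strict adherence to the prompt
    "cfg": "7",                 # balanced guidance strength
    "steps": "50",              # more iterations for finer detail
    "aspect_ratio": "16:9",     # landscape framing
    "output_format": "png",
    "output_quality": "100",    # favor quality over file size
    "seed": None                # null seed, as in the example request above
}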

Key Considerations

Prompt Length: Avoid excessively long prompts as they may confuse the model. Aim for concise yet descriptive instructions.

Style Consistency: When generating multiple images, use the same seed to maintain consistency across outputs.
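
For example, a minimal sketch that reuses one fixed seed across several payloads (the seed value 1234 and the helper below are illustrative, not part of the API):

BASE_INPUT = {
    "cfg": "5",
    "steps": "40",
    "aspect_ratio": "1:1",
    "output_format": "webp",
    "output_quality": "90",
    "prompt_strength": "0.85",
    "seed": "1234"  # arbitrary fixed seed; keep it constant for a consistent look
}

def consistent_inputs(prompts):
    # Reusing the same seed across different prompts keeps the style consistent;
    # vary the seed instead to explore alternatives for a single prompt.
    return [{**BASE_INPUT, "prompt": p} for p in prompts]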


Legal Information

By using this model, you agree to:

  • Stability AI API agreement
  • Stability AI Terms of Service

Tips & Tricks

Experiment with CFG:
Start with moderate values (e.g., 7-10) and adjust based on your needs. Use higher values for precise outputs and lower values for creative explorations.

Use Prompt Strength Wisely:

  • For creative or abstract results, set prompt strength around 0.5-0.7.
  • For exact representations, increase it to 0.8 or above.

Optimize Steps and Quality:

  • For quick previews, use lower steps (e.g., 20-30) and moderate quality.
  • For final outputs, increase steps (e.g., 50-70) and maximize quality.

Seed Exploration:
Generate multiple images with different seeds to explore diverse variations of the same prompt.

Prompt Clarity:
Use detailed and descriptive prompts to achieve the desired results. For example, instead of "a cat," try "a fluffy white cat sitting on a sunny windowsill with a garden in the background."

Aspect Ratio Selection:
Adjust the aspect ratio to match the intended use of the output. For instance, use a 16:9 ratio for landscapes and a 1:1 ratio for social media posts.
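
Putting several of these tips together, one convenient pattern is a pair of named presets for previews and final renders; the preset names and values below are illustrative assumptions, not settings defined by the model:

# Illustrative presets combining the tips above; adjust to taste.
PRESETS = {
    "quick_preview": {
        "steps": "25",             # fewer iterations for speed
        "cfg": "7",                # moderate guidance
        "output_quality": "70",    # smaller files for drafts
        "prompt_strength": "0.6",  # looser, more creative interpretation
    },
    "final_render": {
        "steps": "60",             # more iterations for fine detail
        "cfg": "9",                # closer adherence to the prompt
        "output_quality": "100",   # maximize quality
        "prompt_strength": "0.85", # stricter adherence
    },
}

def build_input(prompt, preset="final_render", aspect_ratio="1:1"):
    payload = dict(PRESETS[preset])
    payload.update({
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,
        "output_format": "webp",
        "seed": None,
    })
    return payload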

Capabilities

Generates high-quality images based on textual descriptions.

Offers flexibility in style, format, and aspect ratio.

Supports reproducible outputs using seed values.

Balances creative and literal interpretations through prompt strength and CFG.

What can I use it for?

Art and Design: Create custom artwork or illustrations.

Content Creation: Generate unique images for blogs, social media, or marketing campaigns.

Storytelling: Visualize characters, scenes, or settings for creative writing.

Prototyping: Produce quick visual concepts for design projects.

Things to be aware of

Stylized Imagery:
Experiment with descriptive prompts like "a futuristic city skyline at sunset, cyberpunk style" to explore different aesthetics.

Photorealistic Results:
Use prompts with clear specifications, e.g., "a close-up of a golden retriever lying on a wooden floor with sunlight streaming through the window."

Iterative Refinement:
Start with a broad concept, then refine prompts and settings to perfect the output.

Creative Variations:
Adjust the seed, aspect ratio, and CFG to produce diverse versions of the same idea.

Limitations

Complex Scenes: The model may struggle with highly intricate scenes or overlapping elements. Simplify prompts if needed.

Abstract Prompts: Results can be unpredictable for abstract or vague instructions. Be specific to achieve better outcomes.

Fine Details: Extremely fine details may require higher steps and CFG values, increasing generation time.

Output Format: JPG, PNG, WEBP

Related AI Models

  • PixArt XL 2 (pixart-xl-2) - Text to Image
  • Omni Zero Couples (omni-zero-couples) - Text to Image
  • Ideogram V2 Turbo (ideogram-v2-turbo) - Text to Image
  • Sana by Nvidia (sana) - Text to Image