- Input: Configure model parameters.
- Output: View generated results.
- Result: Preview, share, or download your results with a single click.

Prerequisites
- Create an API Key from the Eachlabs Console
- Install the required dependencies for your chosen language (e.g., requests for Python)
API Integration Steps
1. Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
```python
import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "imagen-4-fast",
            "version": "0.0.1",
            "input": {
                "prompt": "your prompt here",
                "aspect_ratio": "1:1",
                "output_format": "jpg",
                "safety_filter_level": "block_only_high"
            },
            "webhook_url": ""
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]
```
2. Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready. The prediction runs asynchronously, so you'll need to check repeatedly until you receive a success status.
```python
def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
```
3. Complete Example
Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.
```python
try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
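Once a prediction succeeds, the `output` field in the example above is a URL. Saving the image locally is straightforward with `requests`; the helper names below are illustrative, not part of the Eachlabs API:

```python
import os
from urllib.parse import urlparse

import requests

def output_filename(url: str) -> str:
    """Derive a local filename from the output URL."""
    return os.path.basename(urlparse(url).path) or "output.jpg"

def download_output(url: str, dest_dir: str = ".") -> str:
    """Download the generated image and return its local path."""
    path = os.path.join(dest_dir, output_filename(url))
    resp = requests.get(url, timeout=60)
    resp.raise_for_status()  # surface HTTP errors instead of saving an error page
    with open(path, "wb") as f:
        f.write(resp.content)
    return path
```

For example, `download_output(result["output"])` would save the generated image into the current directory.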
Additional Information
- The API uses a two-step process: create prediction and poll for results
- Response time: ~13 seconds
- Rate limit: 60 requests/minute
- Concurrent requests: 10 maximum
- Poll the prediction endpoint to check status until completion
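Given the limits above (60 requests/minute, 10 concurrent), a small client-side limiter can help avoid rejected calls. This is a sketch of one way to do it, not part of any Eachlabs SDK:

```python
import threading
import time

class RateLimiter:
    """Client-side guard for the documented limits:
    60 requests/minute and 10 concurrent requests."""
    def __init__(self, per_minute=60, max_concurrent=10):
        self.interval = 60.0 / per_minute   # seconds between request starts
        self.last_start = 0.0
        self.lock = threading.Lock()
        self.slots = threading.Semaphore(max_concurrent)

    def __enter__(self):
        self.slots.acquire()                # cap concurrent requests
        with self.lock:
            wait = self.last_start + self.interval - time.monotonic()
            if wait > 0:
                time.sleep(wait)            # space out request starts
            self.last_start = time.monotonic()
        return self

    def __exit__(self, *exc):
        self.slots.release()

limiter = RateLimiter()
# usage:
# with limiter:
#     response = requests.post(...)
```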
Overview
A text-to-image diffusion model developed by Google DeepMind. It generates high-quality, photorealistic images from text prompts. Bring your ideas to life—faster, sharper, and truer to your vision.
Technical Specifications
Aspect ratios supported:
- 1:1: 1024×1024
- 3:4: 896×1280
- 4:3: 1280×896
- 9:16: 768×1408
- 16:9: 1408×768
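The table above can be kept in code as a lookup, for example to validate the `aspect_ratio` input before creating a prediction (the helper name is illustrative, not part of the API):

```python
# Supported aspect ratios and the resolution each one produces
RESOLUTIONS = {
    "1:1":  (1024, 1024),
    "3:4":  (896, 1280),
    "4:3":  (1280, 896),
    "9:16": (768, 1408),
    "16:9": (1408, 768),
}

def resolution_for(aspect_ratio: str) -> tuple[int, int]:
    """Return (width, height) for a supported aspect ratio."""
    try:
        return RESOLUTIONS[aspect_ratio]
    except KeyError:
        raise ValueError(
            f"Unsupported aspect_ratio: {aspect_ratio!r}. "
            f"Choose one of {sorted(RESOLUTIONS)}"
        )
```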
Prompt languages: English, Chinese (Simplified & Traditional), Hindi, Japanese, Korean, Portuguese, Spanish.
Prompt length: On average, 1 token equals about 4 characters, so prompts can be up to approximately 1,900 characters long.
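The 4-characters-per-token rule of thumb above can be checked client-side before sending a prompt. The estimator below is only an approximation of that rule, not the model's actual tokenizer:

```python
MAX_PROMPT_CHARS = 1900  # approximate limit from the rule of thumb above

def estimate_tokens(prompt: str) -> int:
    """Rough token estimate using the ~4 characters/token rule."""
    return max(1, round(len(prompt) / 4))

def check_prompt_length(prompt: str) -> None:
    """Raise if the prompt likely exceeds the documented limit."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError(
            f"Prompt is {len(prompt)} characters "
            f"(~{estimate_tokens(prompt)} tokens); "
            f"limit is about {MAX_PROMPT_CHARS} characters."
        )
```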
Key Considerations
Prompt quality matters: Clear, detailed prompts lead to better results.
Not fact-grounded: Expect some inconsistencies in details and realism.
Style & text limits: Handles diverse styles and text better—but not perfectly.
Tips & Tricks
Be specific: Use clear, detailed prompts. Define the subject, key features, and any actions it’s performing.
Set the scene: Describe the environment and mood—include background elements, lighting, weather, or time of day.
Specify a style: Mention the desired artistic style, such as photorealism, vector art, or a specific art movement.
Guide composition: Include parameters for camera angle and compositional elements. Structured, descriptive language helps generate more targeted, intentional visuals.
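One way to apply these tips consistently is to assemble the prompt from named parts. The helper below is only a sketch of that structure:

```python
def build_prompt(subject, scene="", style="", composition=""):
    """Join the tip categories above (subject, scene, style,
    composition) into a single comma-separated prompt,
    skipping any part left empty."""
    parts = [subject, scene, style, composition]
    return ", ".join(p.strip() for p in parts if p.strip())

prompt = build_prompt(
    subject="a red fox mid-leap over a snowdrift",
    scene="dawn light, pine forest in the background",
    style="photorealistic, shallow depth of field",
    composition="low camera angle, subject centered left",
)
```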
Capabilities
Photorealism: Create lifelike images of people, animals, landscapes, and more—down to the finest detail.
Ultra-sharp details: Rich textures, vibrant colors, and stunning close-ups with natural depth and gradients.
Smarter text rendering: Better spelling, longer passages, and more sophisticated typography—ideal for comics, collectibles, and design work.
More styles, more control: From hyper-realistic to abstract, it handles a wide range of visual styles with improved accuracy.
Fast mode: Explore ideas at lightning speed—up to 10× faster than Google’s earlier models.
High-resolution output: Generate crisp, creative visuals at up to 2K resolution.
What can I use it for?
Comics and storybooks: Generate characters, scenes, and panels with readable text and clear visuals.
Concept art: Quickly explore visual ideas for games, films, and animation.
Commercial and marketing visuals: Produce eye-catching imagery for product mockups, posters, and digital content.
Collectibles and packaging: Design greeting cards, covers, and layouts with improved text rendering.
Things to be aware of
No customization tools: It doesn’t support style transfer, subject tuning, or few-shot personalization.
No image editing: Features like inpainting, outpainting, masking, or image upscaling are not available.
No negative prompting: You can’t exclude elements (e.g., “no text,” “no watermark”) via prompts.
Limitations
Lack of factual grounding: Imagen isn’t built for real-world accuracy. It can introduce artifacts in complex scenes, especially with small faces, text, or thin structures.
Centering issues: Struggles with perfect alignment, such as placing a circle exactly in the center.
Unclear prompts: Nonsensical input (like random characters or emojis) can lead to unpredictable results.
Output format: JPG, PNG