Prerequisites
- Create an API Key from the Eachlabs Console
- Install the required dependencies for your chosen language (e.g., requests for Python)
API Integration Steps
1. Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "veo-3",
            "version": "0.0.1",
            "input": {
                "generate_audio": False,  # Python boolean, not the JSON literal "false"
                "negative_prompt": "your negative prompt here",
                "aspect_ratio": "16:9",
                "seed": 0,
                "prompt": "your prompt here"
            },
            "webhook_url": ""
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]
2. Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready. Results are produced asynchronously, so you'll need to repeatedly check the status until you receive a success status.
def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
3. Complete Example
Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.
try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
Additional Information
- The API uses a two-step process: create prediction and poll for results
- Response time: ~90 seconds
- Rate limit: 60 requests/minute
- Concurrent requests: 10 maximum
- Poll the prediction status repeatedly until completion; a bounded-polling sketch follows this list
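Given the ~90-second typical response time and the 60 requests/minute rate limit, the 1-second polling interval in step 2 is more aggressive than necessary. The following sketch wraps the same polling loop with a longer interval and an overall timeout; the helper name, interval, and timeout defaults are illustrative choices, not part of the API:

# Reuses the requests/time imports and HEADERS from step 1.
# Hypothetical helper: bounds the polling loop from step 2 with an overall
# timeout so a stuck prediction doesn't block forever.
def get_prediction_with_timeout(prediction_id, timeout_s=300, interval_s=5):
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        if result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(interval_s)  # a 5 s interval stays well under 60 requests/minute
    raise TimeoutError(f"Prediction {prediction_id} not ready after {timeout_s}s")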
Overview
Google Veo 3 is designed for high-quality text-to-video generation. It takes textual prompts and optionally a seed value to produce short videos with cinematic coherence. Google Veo 3 focuses on realism, detail, and smooth camera motion. It supports a wide range of video types such as aerial shots, slow motion, and first-person perspective videos, offering creative control through natural language descriptions.
Technical Specifications
Frame Rate: Up to 30 fps.
Resolution: Supports generation up to 1080p in preview modes.
Motion Awareness: Integrated camera and object motion tracking.
Semantic Understanding: Natural language input with multi-modal scene grounding.
Consistency: Improved temporal and spatial coherence across frames.
Key Considerations
Video duration is fixed to short clips and cannot be extended beyond a few seconds per run.
Text prompts (plus parameters such as seed and aspect ratio) are the only controls; no image, audio, or video input is supported.
Outputs may occasionally contain unnatural object deformations or flickering.
Explicit, graphic, or flagged terms may cause failure or result in blank output.
Abstract prompts may lead to hallucinated or visually ambiguous results.
Real names, brands, or sensitive entities should be avoided in prompts.
Tips & Tricks
Prompt (String):
- Use clear and concrete descriptions:
  ✅ "A white horse galloping through a foggy forest at sunrise"
  ❌ "Freedom in nature"
- For dynamic motion:
  "A drone shot of a bustling city at night with moving cars and bright lights"
- To describe visual style:
  "In the style of a cinematic sci-fi film, with dramatic lighting"
- Avoid flagged or unsafe content such as:
  - Violence: "explosions", "war scenes"
  - NSFW: "nudity", "suggestive themes"
  - Identifiable individuals or real persons
Seed (Integer):
- Controls variation for repeatable results.
- Use values like 12345, 42, etc. for consistent video output across different generations.
- Changing the seed results in slightly different visuals with the same prompt.
- Leaving seed empty allows random variation; see the example below.
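As an example, here is a minimal sketch of an input payload (the same fields used in step 1) that pairs a concrete prompt with a fixed seed for repeatable output; the prompt and negative prompt text are purely illustrative:

# Fixing the seed makes repeated runs of the same prompt reproducible;
# change only the seed to get controlled variations of the same scene.
reproducible_input = {
    "prompt": "A white horse galloping through a foggy forest at sunrise",
    "negative_prompt": "blurry, distorted",
    "aspect_ratio": "16:9",
    "seed": 42,              # fixed seed -> consistent output across generations
    "generate_audio": False
}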
Capabilities
Generate short cinematic-style videos from natural language.
Supports descriptions of motion, objects, scenery, and atmosphere.
Capable of handling various themes like nature, futuristic, urban, fantasy, and more.
Supports camera controls like zoom, pan, dolly, and aerial views through language.
What can I use it for?
Creating short visual concepts for storytelling or ideation
Generating teaser videos for creative projects
Producing aesthetic motion clips based on detailed prompts
Visualizing scene ideas for design, animation, or narrative planning
Things to be aware of
- Combine camera directions with settings:
  "A slow pan across a desert at golden hour"
- Mix motion and mood:
  "A handheld shot following a child running through a sunflower field in slow motion"
- Experiment with time of day and lighting:
  "A mountain village at dusk, with lights flickering on and smoke rising from chimneys"
- Add genre-based visual tones:
  "Cyberpunk city with neon signs and rainy streets, drone footage"
Limitations
Realism may degrade with overly abstract prompts
May generate flickering or frame inconsistencies
No interactive editing or feedback loop — one-shot generation
Prompts involving copyrighted characters or brands may fail
Output Type: MP4
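Since the result's output field is a URL to the generated MP4 (see the complete example above), you can save it locally with a streaming download. A minimal sketch, with an arbitrary local filename:

# Reuses the requests import from step 1.
# Stream the generated MP4 from the output URL to a local file.
def download_video(url, path="output.mp4"):
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(path, "wb") as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)
    return path

# Usage: download_video(result["output"])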