Prerequisites
- Create an API Key from the Eachlabs Console
- Install the required dependencies for your chosen language (e.g., requests for Python)
API Integration Steps
1. Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
```python
import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "kling-v2-1-master-image-to-video",
            "version": "0.0.1",
            "input": {
                "cfg_scale": 0.5,
                "negative_prompt": "your negative prompt here",
                "aspect_ratio": "16:9",
                "duration": 5,
                "image_url": "your image url here",
                "prompt": "your prompt here"
            },
            "webhook_url": ""
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]
```
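The request above leaves webhook_url empty and relies on polling (step 2 below). If you would rather be notified when the prediction finishes, you can pass a publicly reachable URL instead. The sketch below shows a minimal receiver; the payload field names are an assumption, so check the Eachlabs webhook documentation for the exact shape.

```python
# Hypothetical webhook receiver for completed predictions.
# Assumes Eachlabs POSTs a JSON body to webhook_url when the
# prediction finishes; the exact field names are an assumption.
from flask import Flask, request

app = Flask(__name__)

@app.route("/eachlabs-webhook", methods=["POST"])
def eachlabs_webhook():
    payload = request.get_json(force=True)
    # Assumed fields: adjust to the actual webhook payload.
    print("Prediction:", payload.get("predictionID"))
    print("Status:", payload.get("status"))
    print("Output:", payload.get("output"))
    return "", 204

if __name__ == "__main__":
    app.run(port=8000)
```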
2. Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready. Results are produced asynchronously, so you'll need to check repeatedly until you receive a success status.
```python
def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
```
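The loop above polls once per second with no upper bound. Given the ~300-second typical response time noted under Additional Information, you may want a timeout and a gentler polling cadence. Here is one possible variant; the timeout and backoff values are illustrative choices, not API requirements:

```python
def get_prediction_with_timeout(prediction_id, timeout=600, max_delay=10):
    """Poll until success/error, backing off up to max_delay seconds."""
    deadline = time.time() + timeout
    delay = 1
    while time.time() < deadline:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS,
        ).json()
        if result["status"] == "success":
            return result
        if result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(delay)
        delay = min(delay * 2, max_delay)  # exponential backoff, capped
    raise TimeoutError(f"Prediction {prediction_id} timed out after {timeout}s")
```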
3. Complete Example
Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.
```python
try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
```
Additional Information
- The API uses a two-step process: create prediction and poll for results
- Response time: ~300 seconds
- Rate limit: 60 requests/minute
- Concurrent requests: 10 maximum
- Poll the prediction status repeatedly until completion
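A simple way to stay within the documented limits is to gate requests behind a semaphore (for the 10-request concurrency cap) and a minimum spacing between calls (for the 60 requests/minute budget). The helper below is a hedged sketch, not part of any Eachlabs SDK:

```python
import threading
import time

MAX_CONCURRENT = 10        # documented concurrency limit
MIN_INTERVAL = 60 / 60.0   # 60 requests/minute -> 1 request/second

_semaphore = threading.Semaphore(MAX_CONCURRENT)
_lock = threading.Lock()
_last_request = [0.0]

def rate_limited_call(fn, *args, **kwargs):
    """Run fn under the documented rate and concurrency limits."""
    with _semaphore:
        with _lock:
            wait = MIN_INTERVAL - (time.time() - _last_request[0])
            if wait > 0:
                time.sleep(wait)
            _last_request[0] = time.time()
        return fn(*args, **kwargs)

# Example: prediction_id = rate_limited_call(create_prediction)
```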
Overview
Kling v2.1 Master Image to Video is a text-guided video generation model that transforms a single input image into a short animated sequence. It interprets textual prompts in combination with visual cues from the input image to create dynamic, coherent motion. Kling v2.1 Master Image to Video is designed for creative storytelling, visual enhancement, and cinematic motion synthesis from static visuals.
Technical Specifications
- Creates short videos from a single input image.
- Analyzes the image and adds natural-looking motion, such as facial expressions, hair movement, or camera panning.
- The motion is generated artificially and does not reflect real video footage; it is a simulated animation based on the still image.
- Visual details from the original image are preserved as much as possible, but small changes or shifts may occur due to the animation process.
- Output videos are short, typically between 5 and 10 seconds.
- Supports three aspect ratios: 16:9 (horizontal), 9:16 (vertical), and 1:1 (square).
- All output videos are silent (no audio is included).
Key Considerations
- Image resolution does not need to exceed 1024×1024; high-resolution images may be auto-scaled (a local downscaling sketch follows this list).
- Low cfg_scale values produce more diverse motion but less prompt fidelity.
- Longer durations may lead to less visual consistency unless prompts are clear and minimal.
- Negative prompts can help suppress undesired motion or visual styles (e.g., “blurry”, “distorted”).
- Motion is generated synthetically and does not preserve fine details from the original image pixel by pixel.
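Since images above 1024×1024 may be auto-scaled anyway, you can downscale locally before uploading to keep the result predictable. A minimal sketch using Pillow (the library choice is an assumption; any image library works):

```python
# Requires: pip install Pillow
from PIL import Image

def downscale_for_kling(path, out_path, max_side=1024):
    """Shrink an image so neither side exceeds max_side, preserving ratio."""
    img = Image.open(path)
    img.thumbnail((max_side, max_side))  # in-place, keeps aspect ratio
    img.save(out_path)
    return out_path

# downscale_for_kling("portrait.png", "portrait_1024.png")
```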
Legal Information for Kling v2.1 Master Image to Video
By using Kling v2.1 Master Image to Video, you agree to:
- Kling Privacy
- Kling Service Agreement
Tips & Tricks
- prompt: Use action-driven prompts (e.g., “the camera slowly zooms out while the person turns their head”) to suggest motion. Avoid overly abstract prompts.
- image_url: Use high-quality images with focused subjects and minimal background noise. Centered framing helps guide motion synthesis.
- aspect_ratio: Recommended values:
  - 16:9 for cinematic framing
  - 9:16 for mobile-style vertical output
  - 1:1 for social-friendly square clips
  Match the input image’s native ratio when unsure.
- duration: Set between 5 and 10 seconds.
  - 5: short and crisp actions
  - 10: extended motion sequences
  Use 5 for sharp, contained effects; use 10 only if the prompt and image suggest sustained motion.
- negative_prompt: Suggested examples: “blurry”, “low contrast”, “chaotic background”, “bad anatomy”. These help reduce visual artifacts and enforce cleaner outputs.
- cfg_scale: Range: 0.0 to 1.0.
  - 0.4–0.6: balanced control and creativity
  - 0.7+: stronger prompt adherence, less visual flexibility
  - Below 0.4: more abstract and cinematic variations; may ignore prompt details

An example input combining these recommendations is sketched after this list.
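Putting the recommendations together, a request input might look like this (the prompt and image URL are placeholders):

```python
recommended_input = {
    "prompt": "the camera slowly zooms out while the person turns their head",
    "image_url": "https://example.com/your-image.png",  # placeholder
    "aspect_ratio": "16:9",        # match the input image's native ratio
    "duration": 5,                 # short, crisp action
    "cfg_scale": 0.5,              # balanced control and creativity
    "negative_prompt": "blurry, low contrast, chaotic background, bad anatomy",
}
```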
Capabilities
- Converts a still image into coherent motion based on a guiding prompt.
- Produces short animated sequences up to 10 seconds.
- Supports custom framing with multiple aspect ratios.
- Maintains high semantic alignment between image content and prompt guidance.
- Supports creative experimentation through negative-prompt suppression and cfg_scale tuning.
What can I use it for?
- Creating short visual storytelling clips from a single photo and text.
- Enhancing static portraits with camera movement or natural actions.
- Designing dynamic video previews for marketing, concept art, or editorial use.
- Simulating cinematic effects like slow zoom, pan, or subtle expression changes.
- Creating social media animations with controlled aspect ratio and content styling.
Things to try
- Animate historical or archival photos with modern camera effects.
- Add prompt-driven weather motion, like “light wind blowing through hair”.
- Experiment with surreal or artistic prompts on abstract illustrations.
Limitations
- Does not support audio or interactive motion control.
- May occasionally introduce visual inconsistencies, especially with long durations.
- Outputs are limited to short clips; continuous or multi-scene animations are not supported.
- Image content dominates the motion context; complex prompts alone won’t define action.
- Negative prompts help reduce noise but cannot fully remove structural artifacts in low-quality images.
Output Format: MP4
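Since the result's output field is a URL to an MP4 (as shown in the complete example above), saving it locally is straightforward with requests; a minimal sketch, where the file name is an arbitrary choice:

```python
def download_video(url, filename="kling_output.mp4"):
    """Stream the generated MP4 to disk."""
    with requests.get(url, stream=True) as resp:
        resp.raise_for_status()
        with open(filename, "wb") as f:
            for chunk in resp.iter_content(chunk_size=8192):
                f.write(chunk)
    return filename

# download_video(result["output"])
```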