- Input: Configure model parameters
- Output: View generated results
- Result: Preview, share, or download your results with a single click.
Prerequisites
- Create an API Key from the Eachlabs Console
- Install the required dependencies for your chosen language (e.g., requests for Python)
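For Python, a minimal setup might look like the sketch below. It assumes you install requests (e.g., `pip install requests`) and keep your key in an environment variable named EACHLABS_API_KEY; that variable name is only a convention for this example, not something the API requires.

```python
import os

import requests  # install with: pip install requests

# Illustrative convention: read the key from an environment variable
# rather than hardcoding it in source code.
API_KEY = os.environ["EACHLABS_API_KEY"]

HEADERS = {
    "X-API-Key": API_KEY,  # header name used in the request examples below
    "Content-Type": "application/json",
}
```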
API Integration Steps
1. Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
```python
import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key
HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "kling-v2-1-pro-image-to-video",
            "version": "0.0.1",
            "input": {
                "cfg_scale": 0.5,
                "negative_prompt": "your negative prompt here",
                "aspect_ratio": "16:9",
                "duration": 5,
                "image_url": "your image url here",
                "prompt": "your prompt here"
            },
            "webhook_url": ""
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]
```
2. Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready, checking the status repeatedly until it returns success (or raising an error if the prediction fails).
```python
def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
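Generations typically take around two minutes, so you may want to cap how long the client waits. The variant below is only a sketch (not part of the official API or SDK): it adds a hypothetical `max_wait` parameter and a longer sleep interval to reduce the number of status requests.

```python
def get_prediction_with_timeout(prediction_id, max_wait=300, interval=5):
    """Poll until success or error, giving up after max_wait seconds (illustrative helper)."""
    deadline = time.time() + max_wait
    while time.time() < deadline:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        if result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(interval)  # a longer interval keeps status checks well under the rate limit
    raise TimeoutError(f"Prediction {prediction_id} not ready after {max_wait}s")
```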
3. Complete Example
Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.
```python
try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
```
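If you also want the video file locally, a small helper like the sketch below works, assuming (as the example above does) that `result['output']` is a direct URL to the generated MP4; the local filename is arbitrary.

```python
def download_output(result, filename="kling_output.mp4"):
    """Download the generated video, assuming result['output'] is a direct MP4 URL."""
    response = requests.get(result["output"], timeout=60)
    response.raise_for_status()
    with open(filename, "wb") as f:
        f.write(response.content)
    return filename
```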
Additional Information
- The API uses a two-step process: create prediction and poll for results
- Response time: ~120 seconds
- Rate limit: 60 requests/minute
- Concurrent requests: 10 maximum
- Poll the prediction status repeatedly until the prediction completes
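When running several generations in parallel, one option is to bound client-side concurrency to the documented maximum of 10. The sketch below is illustrative only (it reuses the helpers defined earlier and ignores per-job inputs); note that status polling also counts toward the 60 requests/minute budget, so longer polling intervals help when many jobs run at once.

```python
from concurrent.futures import ThreadPoolExecutor

def run_batch(jobs):
    """Run several predictions with at most 10 in flight at any time (illustrative)."""
    def run_one(_job):
        prediction_id = create_prediction()  # in practice, build the request from `_job`
        return get_prediction(prediction_id)

    with ThreadPoolExecutor(max_workers=10) as pool:
        return list(pool.map(run_one, jobs))
```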
Overview
Kling v2.1 Pro Image to Video transforms a single image into a dynamic video sequence using motion synthesis driven by a textual prompt. It generates short video clips that animate the content and context of the input image, creating motion aligned with the described scenario or action.
Technical Specifications
Kling v2.1 Pro Image to Video uses a latent video diffusion mechanism that combines temporal dynamics with frame coherence.
Kling v2.1 Pro Image to Video is trained on high-resolution video-image pairs to retain facial and structural integrity across time steps.
Kling v2.1 Pro produces output video clips of 5 to 10 seconds.
Motion inference is conditioned on both image content and prompt context to ensure temporal consistency.
Uses frame-level refinement and context propagation to reduce flickering and maintain alignment with the original image.
Key Considerations
Kling v2.1 Pro Image to Video is best suited for scenes with a single primary subject. Multiple focal points may reduce clarity.
Prompts that conflict with the input image content can result in artifacts or unnatural motion.
Excessive camera motion or unrealistic physical movements in the prompt may reduce Kling v2.1 Pro Image to Video's ability to retain subject consistency.
Backgrounds may animate subtly but are not guaranteed to change drastically unless specified in the prompt.
Legal Information for Kling v2.1 Pro Image to Video
By using Kling v2.1 Pro Image to Video, you agree to:
- Kling Privacy
- Kling SERVICE AGREEMENT
Tips & Tricks
Prompt
- Use concise but descriptive language.
  - ✅ “a woman walking forward with wind blowing through hair”
  - ❌ “girl moving, cool, dynamic”
- Use verbs and motion-related cues: walking, turning, zooming, panning, flying.
Negative Prompt
- Avoid generic terms like "bad quality." Be specific:
  - ✅ “low-res face, camera shake, unnatural animation”
- Specific negative prompts help clean up artifacts and improve motion stability.
Duration
- Range: 5 to 10 seconds.
- Use 5 for quick transitions or expressions, 10 for full-body or slow motion scenes.
Aspect Ratio
- Available options: 16:9, 9:16, 1:1.
- Choose based on content placement:
- 16:9: Landscape videos, natural scenes
- 9:16: Vertical shots, human subjects
- 1:1: Balanced compositions, centered action
CFG Scale
- Range: 0.0 to 1.0
- Controls how strongly the output follows the prompt.
- 0.3–0.5: Balanced, softer influence of prompt (recommended for natural motion)
- 0.6–0.8: Stronger motion fidelity to prompt (use if result deviates too much)
- 0.9–1.0: Very strict prompt adherence, may reduce realism if overused
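To tie these parameters back to the API, the sketch below assembles the `input` object used in the create-prediction request shown earlier. The field names match that example; the helper itself and its validation checks are only an illustration.

```python
def build_input(prompt, image_url, negative_prompt="", duration=5,
                aspect_ratio="16:9", cfg_scale=0.5):
    """Assemble the `input` payload for kling-v2-1-pro-image-to-video (illustrative helper)."""
    assert 5 <= duration <= 10, "duration range is 5 to 10 seconds"
    assert aspect_ratio in ("16:9", "9:16", "1:1"), "unsupported aspect ratio"
    assert 0.0 <= cfg_scale <= 1.0, "cfg_scale must be between 0.0 and 1.0"
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "image_url": image_url,
        "duration": duration,
        "aspect_ratio": aspect_ratio,
        "cfg_scale": cfg_scale,
    }

# Example: a vertical portrait clip with natural motion and a balanced CFG scale
payload = build_input(
    prompt="a woman walking forward with wind blowing through hair",
    negative_prompt="low-res face, camera shake, unnatural animation",
    image_url="your image url here",
    duration=5,
    aspect_ratio="9:16",
    cfg_scale=0.4,
)
```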
Capabilities
Animate still portraits with subtle facial or body movements.
Simulate cinematic motion such as zoom, pan, tilt, or reveal.
Convey emotional or atmospheric changes (e.g., “surprised expression with slight backward movement”).
Transform static artwork or product visuals into engaging motion content.
Maintain visual consistency across frames to preserve image identity.
What can I use it for?
Creating video teasers from static image-based concepts.
Generating animated visuals for character profiles, avatars, or portraits.
Adding expressive motion to brand visuals, cover images, or promotional material.
Visual storytelling for social content based on art or photography.
Animating reference poses for use in film or motion previsualization.
Things to be aware of
Animate facial expressions using prompts like “smiling with a blink”, “looking left and raising eyebrows”.
Create stylized movements like “slow motion camera zoom toward face” or “gentle camera pan from left to right”.
Experiment with image types: photographs, illustrations, AI-generated portraits.
Use negative prompts to refine eye alignment, reduce warping, or remove distractions.
Limitations
Complex multi-subject scenes may introduce inconsistencies in motion or cause distortions.
Backgrounds do not undergo large transformations unless directly guided by the prompt.
Lighting and shadows are inferred; inconsistent input lighting may reduce realism.
Fine details such as small accessories may flicker during animation.
Outputs are limited to short video durations (max 10s); long-form scenes are not supported.
Output Format: MP4