Kling v1 Pro Image to Video
Kling v1 Pro Image to Video takes your images and transforms them into smooth, high-quality videos, delivering consistent and reliable results every time.
Prerequisites
- Create an API Key from the Eachlabs Console
- Install the required dependencies for your chosen language (e.g., requests for Python)
API Integration Steps
1. Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
```python
import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "kling-v1-pro-image-to-video",
            "version": "0.0.1",
            "input": {
                "negative_prompt": "blur, distort, and low quality",
                "static_mask_url": "your static image url here",
                "tail_image_url": "your tail image url here",
                "cfg_scale": 0.5,
                "aspect_ratio": "16:9",
                "duration": 5,
                "image_url": "your image url here",
                "prompt": "your prompt here"
            },
            "webhook_url": ""
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]
```
2. Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready. The API uses long-polling, so you'll need to repeatedly check until you receive a success status.
```python
def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
```
3. Complete Example
Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.
```python
try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
```
Additional Information
- The API uses a two-step process: create prediction and poll for results
- Response time: ~270 seconds
- Rate limit: 60 requests/minute
- Concurrent requests: 10 maximum
- Use long-polling to check prediction status until completion
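Given the ~270-second response time and the 60 requests/minute rate limit, a fixed 1-second polling interval is wasteful. Below is a minimal sketch of a more conservative polling loop with a longer interval and an overall timeout; the endpoint and status values match the documented API, while the interval and timeout values are illustrative assumptions, not official recommendations.

```python
import time
import requests

POLL_INTERVAL = 5   # seconds; a 5s interval stays well under the 60 req/min limit
TIMEOUT = 600       # give up after 10 minutes (generation typically takes ~270s)

def wait_for_prediction(prediction_id, headers):
    """Poll the documented prediction endpoint until success, error, or timeout."""
    deadline = time.time() + TIMEOUT
    while time.time() < deadline:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=headers,
        ).json()
        if result["status"] == "success":
            return result
        if result["status"] == "error":
            raise RuntimeError(f"Prediction failed: {result}")
        time.sleep(POLL_INTERVAL)
    raise TimeoutError(f"Prediction {prediction_id} did not finish within {TIMEOUT}s")
```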
Overview
Kling v1 Pro Image to Video is a generative model that transforms a single static image into a smooth, coherent video. By interpreting the visual features of the input image and combining them with a textual prompt, Kling v1 Pro Image to Video animates motion in a stylistic and realistic way. It supports short video durations and offers fine-grained control through several parameters to shape the animation's style, pacing, and framing.
Technical Specifications
Kling v1 Pro Image to Video is designed to generate short animated video clips from a single image input.
The model supports motion continuity through optional tail and mask images, helping refine transition and detail preservation.
Temporal coherence is maintained throughout the generated frames using internal motion consistency mechanisms.
Kling v1 Pro Image to Video handles varying aspect ratios without cropping by interpreting image layout relative to selected framing.
Generation is optimized for two fixed durations (5s and 10s), balancing rendering speed and visual smoothness.
Input images should be clear, high-resolution, and visually rich to achieve better motion consistency.
Prompt guidance strongly affects the type of motion and scene interpretation. Use specific, descriptive language.
Results can vary depending on prompt-image alignment. Make sure the prompt corresponds to what is visually available in the image.
Negative prompts are useful for removing unwanted elements such as blurriness, artifacts, or stylistic anomalies.
Avoid combining conflicting concepts in prompt and image to ensure fluid motion generation.
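Several of the constraints above (fixed durations, supported aspect ratios, the 0–1 cfg_scale range) can be checked client-side before submitting a request. The following is a hypothetical validation helper, not part of any official SDK; the field names match the request body shown in the integration steps.

```python
ALLOWED_DURATIONS = {5, 10}
ALLOWED_ASPECT_RATIOS = {"16:9", "9:16", "1:1"}

def validate_input(payload: dict) -> dict:
    """Basic client-side checks on the 'input' object before submission (illustrative)."""
    if payload.get("duration") not in ALLOWED_DURATIONS:
        raise ValueError("duration must be 5 or 10")
    if payload.get("aspect_ratio") not in ALLOWED_ASPECT_RATIOS:
        raise ValueError("aspect_ratio must be one of 16:9, 9:16, 1:1")
    cfg = payload.get("cfg_scale", 0.5)
    if not 0.0 <= cfg <= 1.0:
        raise ValueError("cfg_scale must be between 0 and 1")
    if not payload.get("prompt"):
        raise ValueError("prompt is required")
    return payload
```

Failing fast on these checks avoids spending a ~270-second generation on a request the API would reject.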
Key Considerations
Always ensure that the input image URL is accessible and stable, preferably in .jpg or .png format.
When using a tail image, keep it visually aligned with the main image to avoid motion jumps.
If a static mask is used, ensure its resolution matches the input image and that masked areas are clearly defined.
Avoid extremely low-resolution input images, as they may reduce frame coherence and detail quality.
Motion generation is limited to visual context present in the image—objects fully or partially visible are prioritized.
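Since the input image URL must be accessible and preferably a .jpg or .png, it can be worth verifying reachability before creating a prediction. This is a hypothetical pre-flight helper using a plain HTTP HEAD request; it is an assumption-level sketch, not an Eachlabs API call.

```python
import requests

def check_image_url(url: str) -> bool:
    """HEAD-check that the URL is reachable and serves a JPEG or PNG (illustrative)."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
    except requests.RequestException:
        return False
    content_type = resp.headers.get("Content-Type", "")
    return resp.ok and content_type in ("image/jpeg", "image/png")
```

Note that some hosts misreport Content-Type or block HEAD requests, so treat a failed check as a warning rather than a hard error.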
Legal Information for Kling v1 Pro Image to Video
By using Kling v1 Pro Image to Video, you agree to:
- Kling Privacy
- Kling Service Agreement
Tips & Tricks
- Prompt: Use descriptive action phrases (e.g., "a woman walking through a meadow at sunset"). Focus on movement, environment, and lighting.
- Negative Prompt: For cleaner results, include terms like "blurry", "distorted", "low quality", "artifact", or "flicker".
- Cfg Scale: Ideal values range from 0.4 to 0.8. Lower values (e.g., 0.4) allow more creativity, while higher values (e.g., 0.8) adhere more closely to the prompt.
- Duration: Choose either 5 or 10 seconds. Use 5 for fast rendering and concise motion, 10 for richer, extended transitions.
- Aspect Ratio:
- 16:9 for cinematic landscape visuals,
- 9:16 for vertical mobile-style outputs,
- 1:1 for centered, social media-friendly formats.
- Tail Image Url: Optional. Add a visually similar follow-up image to influence the final frames with smoother continuity.
- Static Mask Url: Use when you want to freeze specific parts of the input image while the rest animates (e.g., keeping a background static while the subject moves).
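The tips above can be collected into a small helper that assembles the request's "input" object with sensible defaults. This builder function is hypothetical (not part of any official SDK); the field names match the request body from the integration steps, and the default negative prompt follows the terms suggested above.

```python
def build_input(image_url, prompt, duration=5, aspect_ratio="16:9", cfg_scale=0.5,
                negative_prompt="blurry, distorted, low quality, artifact, flicker",
                tail_image_url=None, static_mask_url=None):
    """Assemble the 'input' object for a prediction request (illustrative helper)."""
    payload = {
        "image_url": image_url,
        "prompt": prompt,
        "duration": duration,
        "aspect_ratio": aspect_ratio,
        "cfg_scale": cfg_scale,
        "negative_prompt": negative_prompt,
    }
    # Tail and mask images are optional; omit them unless provided
    if tail_image_url:
        payload["tail_image_url"] = tail_image_url
    if static_mask_url:
        payload["static_mask_url"] = static_mask_url
    return payload
```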
Capabilities
Animate a single image into a video with consistent motion.
Reflect prompt-based style and thematic animation through text input.
Control motion flow using optional tail or static image masking.
Produce videos in vertical, horizontal, or square formats.
What can I use it for?
Creating short animated visuals from portraits or artistic images.
Generating social media content from static artwork or photos.
Prototyping animated storyboards based on character or scene illustrations.
Visual storytelling for concept art and still photography.
Things to be aware of
Upload a static character illustration and prompt motion like "turning around slowly" to bring it to life.
Use tail images that continue a scene (e.g., character slightly moved forward) to enrich final transitions.
Freeze background using a static mask while animating only the foreground subject for cinematic depth.
Limitations
Only two fixed durations (5, 10) are supported. No custom frame lengths.
Motion is generated from the image context and prompt—no external motion reference or keyframes are accepted.
Complex scenes with overlapping elements may result in visual artifacts or misinterpreted movement.
Image masking requires careful segmentation. Poor mask quality can cause unnatural freezes or tearing.
Fine facial movements and lip-sync are not supported.
Output Format: MP4