- Input: Configure model parameters
- Output: View generated results
- Result: Preview, share, or download your results with a single click.
Prerequisites
- Create an API Key from the Eachlabs Console
- Install the required dependencies for your chosen language (e.g., requests for Python)
API Integration Steps
1. Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "kling-v1-6-standard-image-to-video",
            "version": "0.0.1",
            "input": {
                "cfg_scale": 0.5,
                "negative_prompt": "your negative prompt here",
                "aspect_ratio": "16:9",
                "duration": "5",
                "image_url": "your image url here",
                "prompt": "your prompt here"
            },
            "webhook_url": ""
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]
2. Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready. Results are not pushed to the client, so you'll need to check repeatedly until you receive a success status.
def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
3. Complete Example
Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.
try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
Additional Information
- The API uses a two-step process: create prediction and poll for results
- Response time: ~200 seconds
- Rate limit: 60 requests/minute
- Concurrent requests: 10 maximum
- Poll the prediction endpoint repeatedly until the status is success
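Given the ~200-second typical response time and the 60 requests/minute rate limit, polling once per second spends far more requests than necessary. One option is a growing polling interval with an overall time budget. Below is a minimal sketch; the starting interval, growth factor, cap, and `max_wait` budget are illustrative choices, not values mandated by the API:

```python
def backoff_schedule(start=2.0, factor=1.5, cap=15.0, max_wait=600.0):
    """Yield polling intervals that grow geometrically up to `cap`,
    stopping once their running total would exceed `max_wait`."""
    total, interval = 0.0, start
    while total + interval <= max_wait:
        yield interval
        total += interval
        interval = min(interval * factor, cap)

# Requests needed to cover a typical ~200 s job with this schedule:
elapsed, polls = 0.0, 0
for interval in backoff_schedule():
    elapsed += interval
    polls += 1
    if elapsed >= 200:
        break
print(polls)  # far fewer than the ~200 polls of a fixed 1-second loop
```

To use it, replace the fixed `time.sleep(1)` in `get_prediction` with a loop over `backoff_schedule()` that sleeps for each yielded interval, raising a timeout error if the schedule is exhausted before a success status arrives.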
Overview
Kling v1.6 Standard Image to Video is a generative video model that transforms a single input image into a short cinematic video. It is designed to create realistic, dynamic, and stylized motion from static images using natural language prompts. It supports various aspect ratios and durations, enabling flexible storytelling and visual effects, and can be used for visual enhancement, storytelling, short-form content creation, and more.
Technical Specifications
Kling v1.6 is a transformer-based video generation model designed to animate static visuals with temporally coherent motion.
Kling v1.6 Standard Image to Video can synthesize videos of 5 or 10 seconds, with frame interpolation and consistent temporal stability.
Built on a latent diffusion backbone, Kling v1.6 employs keyframe expansion and latent motion fields to extrapolate realistic movement.
The motion is conditioned not only on the input image but also on the prompt, enabling semantic and stylized animation control.
Kling v1.6 supports variable aspect ratios (16:9, 9:16, 1:1) and renders videos that are spatially consistent with the input format.
It is designed to maintain fine detail fidelity from the source image while introducing cinematic motion and scene dynamics.
Key Considerations
Only one image can be used per generation cycle. Multiple-image input is not supported.
Image content should not include excessive text, overlays, or logos unless intended to appear in the video output.
Kling v1.6 cannot generate audio or subtitles. Only the video output is supported.
Videos are generated at fixed frame rates; there is no current support for custom frame control.
Prompts with abstract or contradictory descriptions may result in unstable motion or inconsistent scenes.
Prompt and image content must align thematically to avoid content mismatch or visual dissonance.
Legal Information for Kling v1.6 Standard Image to Video
By using Kling v1.6 Standard Image to Video, you agree to:
- Kling Privacy
- Kling Service Agreement
Tips & Tricks
prompt: Use descriptive and action-oriented language. Examples:
- “a child running through a sunflower field, camera follows from behind”
- “a futuristic city at sunset, drone camera pans slowly above the skyline”
- Include light direction, atmosphere, or background elements for richer results.
negative_prompt: Helps exclude unwanted elements. Suggestions:
- “blurry, distorted, watermark, duplicate, glitch, broken face, text”
- Keep it focused; avoid overloading with unrelated terms.
cfg_scale: Recommended range is 0.6 – 0.8.
- Lower values (0.4–0.6) may yield more creative interpretations.
- Higher values (0.8–1) enforce stricter prompt adherence but may reduce motion fluidity.
aspect_ratio:
- 16:9: Ideal for cinematic landscape views.
- 9:16: Best suited for mobile or vertical video formats.
- 1:1: Balanced framing for centralized subjects.
duration:
- 5: Quick visual output, suitable for fast previews or loops.
- 10: Better for showcasing slow motion or richer scenes.
image_url:
- Use clear, centered, and well-lit subjects.
- Background should support motion; avoid flat or blank backdrops.
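The parameter guidance above can be folded into a small helper that validates values before a request is sent, catching out-of-range inputs early instead of waiting ~200 seconds for a failed prediction. A minimal sketch: the allowed ratios and durations mirror the tips above, while `build_input` itself is an illustrative helper, not part of the Eachlabs API or any SDK:

```python
ALLOWED_RATIOS = {"16:9", "9:16", "1:1"}
ALLOWED_DURATIONS = {"5", "10"}

def build_input(prompt, image_url, negative_prompt="",
                cfg_scale=0.7, aspect_ratio="16:9", duration="5"):
    """Validate parameters against the documented options and return
    the `input` dict expected by the prediction endpoint."""
    if aspect_ratio not in ALLOWED_RATIOS:
        raise ValueError(f"aspect_ratio must be one of {sorted(ALLOWED_RATIOS)}")
    if duration not in ALLOWED_DURATIONS:
        raise ValueError("duration must be '5' or '10'")
    if not 0.0 <= cfg_scale <= 1.0:
        raise ValueError("cfg_scale must be between 0 and 1")
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "image_url": image_url,
        "cfg_scale": cfg_scale,
        "aspect_ratio": aspect_ratio,
        "duration": duration,
    }
```

The returned dict can be passed directly as the `"input"` field of the create-prediction request body, alongside the `"model"` and `"version"` fields shown earlier.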
Capabilities
Transforms static images into dynamic video sequences
Interprets textual prompts to influence camera motion and scene atmosphere
Maintains consistent visual style and subject integrity across all frames
Generates video with optional stylistic realism or dream-like motion
Offers aspect ratio flexibility for different content formats
What can I use it for?
Creating short, visually compelling video scenes from illustrations or concept art
Adding cinematic movement to portraits, product renders, or key visuals
Generating content for social platforms in vertical or landscape format
Prototyping visual narratives using still frames and descriptive prompts
Enhancing storytelling in digital media, branding, and visual design
Things to be aware of
Try prompting environmental motion (e.g., “leaves rustling”, “water flowing”) to add ambient movement.
Experiment with camera actions: “camera slowly rotates”, “zooming in on the subject”.
Combine time of day and weather elements: “golden hour sunlight”, “storm clouds gathering”.
Use 1:1 aspect ratio for symmetrical subjects and character-focused shots.
Test different cfg_scale values to balance prompt adherence and motion creativity.
Limitations
Kling v1.6 cannot generate sound or music.
Does not support multi-image stitching or storytelling across frames.
Faces and text may deform slightly during motion unless the prompt describes them clearly.
Prompt language must remain consistent and descriptive; vague input reduces output quality.
Scene transitions and cuts are not available; the motion remains continuous throughout.
Duration and aspect ratio are fixed per output and must be chosen prior to generation.
Output Format: MP4