PixVerse v4.5 Image to Video
pixverse-v4-5-image-to-video
PixVerse v4.5 is a model designed to generate dynamic video content from static images.
Prerequisites
- Create an API Key from the Eachlabs Console
- Install the required dependencies for your chosen language (e.g., requests for Python)
API Integration Steps
1. Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
```python
import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "pixverse-v4-5-image-to-video",
            "version": "0.0.1",
            "input": {
                "seed": 1,
                "image_url": "your image here",
                "quality": "540p",
                "prompt": "your prompt here",
                "motion_mode": "normal",
                "duration": 5,
                "negative_prompt": "your negative prompt here",
                "style": "your style here"
            },
            "webhook_url": ""
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]
```
2. Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready. The API uses long-polling, so you'll need to repeatedly check until you receive a success status.
```python
def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
```
3. Complete Example
Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.
```python
try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
Additional Information
- The API uses a two-step process: create prediction and poll for results
- Response time: ~45 seconds
- Rate limit: 60 requests/minute
- Concurrent requests: 10 maximum
- Use long-polling to check prediction status until completion
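Polling once per second stays well under the 60 requests/minute limit, but production code should also bound the total wait time. Below is a minimal sketch of polling with a timeout and exponential backoff; `poll_with_backoff` and `fetch_status` are illustrative names, not part of the Eachlabs API, and the stubbed fetch stands in for the GET request shown earlier:

```python
import time

def poll_with_backoff(fetch_status, timeout=120, initial_delay=1.0, max_delay=10.0):
    """Poll fetch_status() until it reports success, with exponential backoff.

    fetch_status is any zero-argument callable returning a dict with a
    "status" key ("success", "error", or anything else meaning "still running").
    """
    deadline = time.monotonic() + timeout
    delay = initial_delay
    while time.monotonic() < deadline:
        result = fetch_status()
        if result["status"] == "success":
            return result
        if result["status"] == "error":
            raise RuntimeError(f"Prediction failed: {result}")
        time.sleep(delay)
        delay = min(delay * 2, max_delay)  # 1s, 2s, 4s, ... capped at max_delay
    raise TimeoutError("Prediction did not finish within the timeout")

# Stubbed fetch for demonstration: succeeds on the third call.
calls = iter([
    {"status": "processing"},
    {"status": "processing"},
    {"status": "success", "output": "video.mp4"},
])
result = poll_with_backoff(lambda: next(calls), timeout=60, initial_delay=0.01)
print(result["output"])
```

Backoff keeps request volume low for slow jobs while still reacting quickly when the result arrives early.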
Overview
PixVerse v4.5 Image to Video is a generative video model that transforms a single input image into a dynamic video sequence. It enables motion-enhanced storytelling by extending visual elements in the source image and integrating guided prompts to influence movement, style, and tone. PixVerse v4.5 Image to Video is designed to generate short video clips while preserving the identity and structure of the original image.
Technical Specifications
PixVerse v4.5 Image to Video leverages a combination of temporal consistency modules and frame prediction engines.
High-resolution image inputs are dynamically scaled based on selected output quality to maintain visual fidelity.
Duration and motion modes affect frame interpolation depth and speed transitions.
Seed values affect the deterministic nature of video generation — identical inputs with the same seed will result in identical outputs.
Key Considerations
Content with highly abstract or cluttered elements may not animate smoothly.
Text or logos in the image may get distorted during motion generation.
Videos longer than 8 seconds are not supported.
Fast motion mode may reduce frame-to-frame consistency in complex scenes.
Styles may override some natural color tones from the original image.
Legal Information for PixVerse v4.5 Image to Video
By using PixVerse v4.5 Image to Video, you agree to:
PixVerse Terms of Service
PixVerse Privacy Policy
Tips & Tricks
image_url: Use high-resolution images with clear subjects centered in the frame. Avoid busy or overexposed backgrounds.
prompt: Provide directional cues like “a woman walking in the rain” or “robot turning around” to guide the motion path.
negative_prompt: Use to suppress unwanted motion artifacts, e.g., “no glitches”, “no shaking”, “no distortion”.
duration: Choose either 5 or 8 seconds. Shorter durations maintain motion smoothness more reliably for complex images.
quality: Use 720p for balance between detail and speed. 1080p may increase rendering time. Lower options like 360p can be used for previews.
motion_mode:
- normal is ideal for subtle, natural transitions.
- fast creates more exaggerated motion — suitable for action scenes or fast-moving concepts.
style:
- anime, 3d_animation, clay, comic, and cyberpunk define the tone of the video.
- For realism, avoid using stylized modes unless intended.
- cyberpunk works best with city scenes or dark color palettes.
seed: Use fixed seed values (e.g., 42 or 12345) for reproducibility, or leave empty to allow random variations.
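The tips above can be collected into a small helper that assembles the `input` object for a prediction request. `build_input` is a hypothetical convenience function, not part of the API; only the keys inside the returned dict are actual API parameters:

```python
def build_input(image_url, prompt, *, quality="720p", duration=5,
                motion_mode="normal", style=None, negative_prompt="", seed=None):
    """Assemble the "input" object for a prediction request.

    Defaults follow the tips above: 720p for detail/speed balance,
    5-second duration for smoother motion, normal motion mode.
    """
    payload = {
        "image_url": image_url,
        "prompt": prompt,
        "quality": quality,
        "duration": duration,
        "motion_mode": motion_mode,
        "negative_prompt": negative_prompt,
    }
    if style is not None:
        payload["style"] = style
    if seed is not None:
        payload["seed"] = seed  # fixed seed for reproducible outputs
    return payload

inputs = build_input(
    "https://example.com/portrait.jpg",  # placeholder URL
    "a woman walking in the rain",
    style="anime",
    seed=42,
)
print(inputs["quality"], inputs["seed"])
```

Omitting `seed` leaves it out of the payload entirely, allowing random variation between runs.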
Capabilities
Converts static images into coherent short video sequences.
Supports style-driven motion synthesis.
Handles complex image compositions with structured motion.
Retains essential features from original input.
Provides fine-grain control over motion type, visual quality, and artistic direction.
What can I use it for?
Enhancing marketing visuals by adding subtle animation.
Creating animated avatars or profile visuals from portraits.
Generating motion previews for concept art or character designs.
Making short clips for social media from static illustrations.
Building stylized loops for creative content or music visuals.
Things to be aware of
Combine a high-resolution portrait with the comic style and prompt: “smiling and turning to the left”.
Use a landscape image with the cyberpunk style and prompt: “drone flying above the city”.
Try different seeds with the same inputs to explore motion variations.
Apply a negative_prompt like “no blur, no color change” to preserve image integrity.
Select fast motion mode on character poses to simulate a cinematic turn or jump.
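The seed-variation tip amounts to reissuing the same request body with different `seed` values. A sketch that prepares one request body per seed (the seeds and image URL are arbitrary placeholders; sending each body would use the create-prediction call shown earlier):

```python
base_input = {
    "image_url": "https://example.com/pose.jpg",  # placeholder URL
    "prompt": "character performing a cinematic jump",
    "quality": "540p",
    "duration": 5,
    "motion_mode": "fast",
    "negative_prompt": "no blur, no color change",
}

# One request body per seed; everything else held constant so that
# differences in the outputs come only from the seed.
requests_to_send = [
    {"model": "pixverse-v4-5-image-to-video", "version": "0.0.1",
     "input": {**base_input, "seed": seed}, "webhook_url": ""}
    for seed in (42, 123, 2024)
]
print(len(requests_to_send), requests_to_send[0]["input"]["seed"])
```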
Limitations
Currently limited to 5s and 8s video durations only.
Not suitable for full-scene transformations or rapid zoom effects.
Some styles may produce overly saturated results depending on the input.
Prompt guidance influences motion subtly; do not expect major scene changes from text.
Style and motion effects may override or distort fine image details.
Output Format: MP4