PixVerse v4.5 Text to Video
pixverse-v4-5-text-to-video
The Eachlabs PixVerse Text-to-Video model generates short video clips from your text prompts.
Prerequisites
- Create an API Key from the Eachlabs Console
- Install the required dependencies for your chosen language (e.g., requests for Python)
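To keep the API key out of source code, it can be loaded from an environment variable. A minimal setup sketch, assuming an environment variable named EACHLABS_API_KEY (the name is illustrative, not required by the API):

import os
import requests  # install with: pip install requests

# Assumed environment variable name for illustration; any name works
# as long as it matches how you store your Eachlabs API key.
API_KEY = os.environ.get("EACHLABS_API_KEY", "YOUR_API_KEY")

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json",
}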
API Integration Steps
1. Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "pixverse-v4-5-text-to-video",
            "version": "0.0.1",
            "input": {
                "quality": "540p",
                "aspect_ratio": "16:9",
                "duration": "5",
                "motion_mode": "normal",
                "negative_prompt": "your negative prompt here",
                "prompt": "your prompt here",
                "seed": 0,
                "style": "your style here"
            },
            "webhook_url": ""
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]
2. Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready. The API uses long-polling, so you'll need to repeatedly check until you receive a success status.
def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
3. Complete Example
Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.
try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
Additional Information
- The API uses a two-step process: create prediction and poll for results
- Response time: ~45 seconds
- Rate limit: 60 requests/minute
- Concurrent requests: 10 maximum
- Use long-polling to check prediction status until completion
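If you want the polling loop to respect these limits more explicitly, a variation with a configurable interval and an overall timeout can look like the sketch below. It assumes the HEADERS dictionary from the integration steps; the 300-second timeout and 5-second interval are illustrative choices, not API requirements.

import time
import requests

# Assumes HEADERS is defined as in the integration steps above.
def get_prediction_with_timeout(prediction_id, timeout_s=300, poll_interval_s=5):
    # A 5-second interval stays well under the 60 requests/minute rate limit;
    # typical jobs finish in about 45 seconds.
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS,
        ).json()
        if result["status"] == "success":
            return result
        if result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(poll_interval_s)
    raise TimeoutError(f"Prediction {prediction_id} did not finish within {timeout_s}s")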
Overview
PixVerse v4.5 Text to Video generates short video clips from text prompts. It interprets detailed text descriptions to create dynamic, visually engaging videos with various styles and motion settings. PixVerse v4.5 Text to Video supports multiple aspect ratios and quality levels, providing flexibility for different use cases. It is designed to translate creative ideas into moving images while maintaining user control over the video’s appearance and behavior.
Technical Specifications
Supports diverse visual styles including anime, 3D animation, clay, comic, and cyberpunk.
Generates videos in multiple resolutions from 360p up to 1080p.
Optimized for short video generation with durations capped at 8 seconds.
Supports aspect ratios including widescreen (16:9), square (1:1), portrait (9:16, 3:4), and classic (4:3).
Offers motion modes ranging from slow and smooth to fast and dynamic movement.
Incorporates random seed control for reproducibility.
Designed to balance between video quality, style diversity, and processing speed.
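As a rough illustration of how these specifications map onto the input fields used in the integration steps, an input payload might look like the following sketch. The string values mirror the options documented on this page; verify the exact accepted values in the console before relying on them.

# Sketch of an input payload built from the specifications above.
example_input = {
    "prompt": "a neon-lit city street at night, rain reflections, cinematic camera pan",
    "negative_prompt": "blurry, low quality, watermark",
    "quality": "1080p",        # 360p, 540p, 720p, or 1080p
    "aspect_ratio": "9:16",    # 16:9, 9:16, 1:1, 3:4, or 4:3
    "duration": "5",           # 5 or 8 seconds
    "motion_mode": "normal",   # normal or fast
    "style": "cyberpunk",      # anime, 3d_animation, clay, comic, cyberpunk
    "seed": 42,                # fixed integer for reproducibility
}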
Key Considerations
Video length is limited to short clips (5 or 8 seconds), which may not suit long-form content needs.
Style selection significantly impacts output; some styles like clay or cyberpunk may add unique color palettes and textures.
Aspect ratio choice affects framing; portrait formats suit mobile screens, while widescreen is better for desktop or presentations.
Negative prompts should be specific and clear to reduce unwanted visual noise.
The motion mode affects perceived speed and smoothness; fast motion may reduce detail clarity.
Seed control is optional but essential if exact video reproducibility is required.
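If exact reproducibility matters, one simple pattern is to derive a fixed seed from the prompt itself so repeated runs of the same prompt reuse the same value. The stable_seed helper below is purely illustrative and not part of the API:

import hashlib

def stable_seed(prompt: str) -> int:
    # Illustrative helper: derive a deterministic seed from the prompt text
    # so re-runs of the same prompt automatically reuse the same seed.
    digest = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    return int(digest[:8], 16)

seed = stable_seed("a paper boat drifting down a rainy street")  # same prompt, same seed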
Legal Information for PixVerse v4.5 Text to Video
By using PixVerse v4.5 Text to Video, you agree to:
Pixverse Terms Of Service
Pixverse Privacy Policy
Tips & Tricks
- Prompt: Use detailed and vivid descriptions to achieve more accurate and rich video content.
- Negative Prompt: Use to exclude undesired elements (e.g., “blurry, low quality, watermark”) for cleaner results.
- Aspect Ratio:
  - Use 16:9 for standard widescreen videos.
  - Choose 9:16 or 3:4 for mobile or social media portrait formats.
  - Use 1:1 for square videos, suitable for many social platforms.
  - Select 4:3 for classic or legacy screen formats.
- Duration:
  - 5 seconds is ideal for quick social media clips or previews.
  - 8 seconds allows slightly more content or detail in the video.
- Quality:
  - 360p or 540p for faster generation with lower fidelity.
  - 720p balances quality and speed well for most uses.
  - 1080p for highest clarity and detail but longer generation time.
- Motion Mode:
  - normal for balanced motion with smooth transitions.
  - fast for dynamic, quick action effects but potentially less detailed frames.
- Style:
  - Select anime for stylized, colorful animated visuals.
  - Use 3d_animation for realistic or semi-realistic CGI looks.
  - clay gives a tactile, handcrafted appearance.
  - comic applies graphic novel or illustrated effects.
  - cyberpunk adds neon and futuristic aesthetics.
- Seed: Set a fixed integer to reproduce consistent videos across different runs or users.
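Putting several of these tips together, a request for a short vertical social-media clip could look like the sketch below. It reuses the endpoint and HEADERS from the integration steps; the prompt, seed, and parameter choices are only examples.

import requests

# Assumes HEADERS is defined as in the integration steps above.
social_clip_input = {
    "prompt": "a corgi puppy chasing autumn leaves in a sunlit park, shallow depth of field",
    "negative_prompt": "blurry, low quality, watermark",
    "aspect_ratio": "9:16",   # portrait for mobile feeds
    "duration": "5",          # quick preview-length clip
    "quality": "720p",        # balance of quality and speed
    "motion_mode": "normal",  # smooth, balanced motion
    "style": "anime",         # stylized, colorful look
    "seed": 12345,            # fixed so the clip can be regenerated consistently
}

response = requests.post(
    "https://api.eachlabs.ai/v1/prediction/",
    headers=HEADERS,
    json={
        "model": "pixverse-v4-5-text-to-video",
        "version": "0.0.1",
        "input": social_clip_input,
        "webhook_url": "",
    },
)
prediction_id = response.json()["predictionID"]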
Capabilities
Converts detailed text descriptions into short video clips.
Supports multiple visual styles to fit varied creative directions.
Allows customization of video format and resolution.
Provides motion controls for dynamic video pacing.
Facilitates reproducibility with seed-based output control.
Optimizes output for social media, marketing previews, and content generation.
What can I use it for?
Creating animated short clips from story ideas or marketing text.
Generating video previews for concepts or pitches.
Producing stylized content in various artistic themes.
Developing social media posts with eye-catching animated visuals.
Enhancing presentations with brief animated sequences.
Experimenting with different motion speeds and artistic styles.
Things to be aware of
Generate the same prompt with different styles to compare visual effects (see the sketch at the end of this section).
Adjust motion mode to see how speed changes the feel of the video.
Use negative prompts to filter out unwanted elements such as noise or artifacts.
Create portrait and landscape versions of the same video for different platforms.
Fix the seed to create matching videos for multiple uses or users.
Experiment with lower quality settings for faster results when high fidelity is not critical.
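For the style-comparison idea above, a small loop can submit the same prompt once per style at a lower quality setting. This sketch reuses the endpoint and HEADERS from the integration steps and the style names from Tips & Tricks; requests are sent sequentially, so it stays within the documented concurrency limit.

import requests

# Assumes HEADERS is defined as in the integration steps above.
styles = ["anime", "3d_animation", "clay", "comic", "cyberpunk"]
prompt = "a lighthouse on a cliff during a thunderstorm"

prediction_ids = {}
for style in styles:
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "pixverse-v4-5-text-to-video",
            "version": "0.0.1",
            "input": {
                "prompt": prompt,
                "negative_prompt": "blurry, low quality, watermark",
                "quality": "540p",      # lower quality for faster comparison runs
                "aspect_ratio": "16:9",
                "duration": "5",
                "motion_mode": "normal",
                "style": style,
                "seed": 7,              # fixed seed so only the style changes
            },
            "webhook_url": "",
        },
    )
    prediction_ids[style] = response.json()["predictionID"]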
Limitations
Limited to short video durations (5 or 8 seconds).
Not suitable for detailed or complex long-form videos.
Certain styles may introduce visual noise or reduce clarity.
High-quality videos require longer processing times.
Motion modes trade off smoothness against speed, which affects detail.
Seed control does not guarantee identical output if the underlying model is updated.
Output Format: MP4
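Because the output field in a successful result is a URL to the MP4 file, the finished clip can be saved locally with a short download step. This sketch assumes the prediction_id and get_prediction() helper from the integration steps above:

import requests

# Download the finished MP4 once the prediction succeeds.
result = get_prediction(prediction_id)
video_bytes = requests.get(result["output"]).content
with open("pixverse_clip.mp4", "wb") as f:
    f.write(video_bytes)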