PixVerse v4 Text to Video
pixverse-v4-text-to-video
The Eachlabs Pixverse Text-to-Video model handles your video creation directly from text prompts.
Prerequisites
- Create an API Key from the Eachlabs Console
- Install the required dependencies for your chosen language (e.g., requests for Python)
API Integration Steps
1. Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
```python
import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "pixverse-v4-text-to-video",
            "version": "0.0.1",
            "input": {
                "style": "your style here",
                "seed": 0,
                "prompt": "your prompt here",
                "negative_prompt": "your negative prompt here",
                "motion_mode": "normal",
                "duration": "5",
                "aspect_ratio": "16:9",
                "quality": "540p"
            },
            "webhook_url": ""
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]
```
2. Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready. The API uses long-polling, so you'll need to repeatedly check until you receive a success status.
```python
def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
```
3. Complete Example
Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.
```python
try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
```
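To save the generated clip locally, you can download the file referenced by `result['output']`. The helper below is a minimal sketch assuming that field is a direct MP4 URL, as printed in the example above.

```python
def download_video(result, path="output.mp4"):
    # The prediction result exposes the generated clip as a URL in result["output"].
    # Stream it to disk so larger files are not held in memory all at once.
    with requests.get(result["output"], stream=True) as r:
        r.raise_for_status()
        with open(path, "wb") as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)
    return path
```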
Additional Information
- The API uses a two-step process: create prediction and poll for results
- Response time: ~45 seconds
- Rate limit: 60 requests/minute
- Concurrent requests: 10 maximum
- Use long-polling to check prediction status until completion; a rate-limit-friendly polling sketch is shown below
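Given the 60 requests/minute rate limit and the ~45 second typical response time, polling every few seconds with an overall timeout keeps you comfortably inside the limit. The sketch below is one way to do this; the interval and timeout values are assumptions, not API requirements.

```python
def wait_for_prediction(prediction_id, poll_interval=5, timeout=300):
    # Poll every few seconds instead of every second to stay well under the
    # 60 requests/minute rate limit, and stop after `timeout` seconds.
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS,
        ).json()
        if result["status"] == "success":
            return result
        if result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(poll_interval)
    raise TimeoutError(f"Prediction {prediction_id} did not finish within {timeout}s")
```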
Overview
Pixverse T2V is focused on generating coherent and expressive video clips directly from textual descriptions. With support for various video styles, motion configurations, durations, and resolutions, Pixverse T2V is adaptable to different creative contexts. It emphasizes flexibility and user control while maintaining high aesthetic quality and temporal consistency.
Technical Specifications
Prompts should describe clear visual scenes with motion elements, e.g. "A cat jumping onto a windowsill on a rainy day". Avoid vague descriptions for better results.
Video duration is short (5 or 8 seconds), so limit overly complex scenes.
For photorealistic or stylistic rendering, align prompt content with selected style.
Motion mode "fast" adds dynamic movement but may reduce stability.
Negative prompts help exclude unwanted elements, like "blurry, distorted, dark background".
Ensure seed is fixed for reproducible outputs if consistency is required (see the example input below).
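As an illustration of these guidelines, the input below pairs a concrete, motion-oriented prompt with a negative prompt and a fixed seed. The specific values are examples, not recommended defaults.

```python
# Example input following the guidelines above: a concrete scene with motion,
# a negative prompt to filter artifacts, and a fixed seed for reproducibility.
example_input = {
    "prompt": "A cat jumping onto a windowsill on a rainy day",
    "negative_prompt": "blurry, distorted, dark background",
    "style": "anime",
    "motion_mode": "normal",
    "duration": "5",
    "aspect_ratio": "16:9",
    "quality": "540p",
    "seed": 42,  # fixed seed for reproducible output
}
```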
Key Considerations
Videos are currently limited to short durations (5–8s). Longer scenes may be cut or condensed.
Motion quality varies depending on prompt clarity, style, and motion mode.
Prompt language matters. Overly technical or abstract language may reduce visual fidelity.
There’s no direct audio or sound generation.
Aspect ratios can affect composition; avoid extreme framing mismatches.
Legal Information for Pixverse T2V
By using Pixverse T2V, you agree to:
Pixverse Terms Of Service
Pixverse Privacy Policy
Tips & Tricks
Prompt
- Use descriptive, concrete phrases:
  ✅ “A dog running across a golden field at sunset”
  ❌ “Something cool happening outside”
- Emphasize motion: "spinning", "flying", "walking", "dancing", "waving".
Negative Prompt
- Use this to avoid common issues.
- Example: "blurry, low quality, distorted, dark background"
- Helps guide Pixverse T2V away from undesired visual traits.
Aspect Ratio
- 16:9: Best for landscape/wide scenes.
- 9:16: Ideal for portrait mobile-style videos.
- 1:1: Square format, good for centered compositions.
- Match ratio with subject type to avoid cropping or awkward framing.
Duration
- 5 (seconds): Recommended for quicker scenes or focused motion.
- 8 (seconds): Suitable for actions with progression or multi-stage events.
Quality
- 360p: Faster generation, basic clarity.
- 540p: Balanced output.
- 720p: High quality, default option for many cases.
- 1080p: Best detail, longer generation time.
Motion Mode
- normal: Balanced motion and frame coherence.
- fast: Adds more dynamic movement, but may introduce flicker.
Style
- anime: For vivid, character-based scenes.
- 3d_animation: Smooth and soft shaded look.
- clay: Stylized, handcrafted appearance.
- comic: Bold outlines, graphic-novel feel.
- cyberpunk: Neon-lit, dystopian aesthetics.
Seed
- Use fixed integers (e.g., 42, 1234) to replicate results.
- Leaving seed blank allows randomized generation, suitable for creative variety. A combined example request built from these tips is shown below.
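Putting the tips together, here is one plausible request body for a vertical, mobile-style clip, reusing the request shape from the integration steps above. The prompt and parameter choices are illustrative assumptions, not recommended settings.

```python
# Illustrative request body for a vertical, mobile-style clip,
# built from the parameter tips above.
mobile_clip_request = {
    "model": "pixverse-v4-text-to-video",
    "version": "0.0.1",
    "input": {
        "prompt": "A dog running across a golden field at sunset",
        "negative_prompt": "blurry, low quality, distorted",
        "style": "3d_animation",
        "motion_mode": "normal",
        "duration": "5",         # short, focused motion
        "aspect_ratio": "9:16",  # portrait framing for mobile
        "quality": "720p",
        "seed": 1234,            # fixed for repeatable drafts
    },
    "webhook_url": "",
}
```

Sending this with `requests.post("https://api.eachlabs.ai/v1/prediction/", headers=HEADERS, json=mobile_clip_request)` mirrors the create_prediction call shown earlier.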
Capabilities
Generate short video clips based entirely on text prompts.
Apply distinct visual styles (anime, comic, etc.) to outputs.
Control motion dynamics and resolution.
Support for aspect ratios compatible with web and mobile formats.
Filter out undesired visual traits using negative prompting.
What can I use it for?
Creating stylized video content for social media or storytelling.
Generating animated visual ideas from scene descriptions.
Exploring motion-based concepts in different aesthetic styles.
Producing quick visual drafts for ideation or prototyping.
Generating unique animated content for creative or illustrative use.
Things to be aware of
Short character actions: “A robot dancing under strobe lights” with cyberpunk style and fast motion.
Stylized storytelling: “A paper airplane flying over a clay mountain” with clay style and 4:3 ratio.
Cinematic vibes: “A hero walking through smoke in slow motion” with 720p quality and 16:9 ratio.
Mobile-focused content: Use 9:16 aspect ratio and 5s duration for vertical content ideas.
Stylized animations: Use anime style for vibrant character motion, especially for creative shorts.
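If you reuse scenarios like these often, one option is to keep them as named input overrides and merge them over a base input before creating the prediction. The preset names and helper below are hypothetical conveniences, not part of the API.

```python
# Hypothetical presets capturing the scenarios above; merge one over a base
# input dict before calling create_prediction().
SCENARIO_PRESETS = {
    "short_character_action": {
        "prompt": "A robot dancing under strobe lights",
        "style": "cyberpunk",
        "motion_mode": "fast",
    },
    "cinematic_vibes": {
        "prompt": "A hero walking through smoke in slow motion",
        "quality": "720p",
        "aspect_ratio": "16:9",
    },
    "mobile_focused": {
        "aspect_ratio": "9:16",
        "duration": "5",
    },
}

def build_input(base_input, scenario):
    # Later keys override earlier ones, so the preset wins over the base values.
    return {**base_input, **SCENARIO_PRESETS[scenario]}
```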
Limitations
No control over frame-by-frame output or audio.
Cannot generate complex narratives exceeding the duration limit.
Output consistency is bound by randomness unless a seed is fixed.
Temporal artifacts may appear in fast motion modes.
Output Format: MP4