Wan 2.1-1.3B
wan-2.1-1.3b
Wan is an advanced video generation model developed by Alibaba Group's Tongyi Lab that generates 5-second, 480p videos.
Model Information
Input
Configure model parameters
Output
View generated results
Result
Preview, share or download your results with a single click.
Prerequisites
- Create an API Key from the Eachlabs Console
- Install the required dependencies for your chosen language (e.g., requests for Python)
API Integration Steps
1. Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "wan-2.1-1.3b",
            "version": "0.0.1",
            "input": {
                "seed": None,
                "prompt": "your prompt here",
                "frame_num": "81",
                "resolution": "480p",
                "aspect_ratio": "16:9",
                "sample_shift": "8",
                "sample_steps": "30",
                "sample_guide_scale": "6"
            }
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]
2. Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready. Check the status repeatedly until you receive a success (or error) response.
def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
3. Complete Example
Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.
try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
Additional Information
- The API uses a two-step process: create prediction and poll for results
- Response time: ~25 seconds
- Rate limit: 60 requests/minute
- Concurrent requests: 10 maximum
- Poll the prediction status repeatedly until completion
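Given the rate and concurrency limits above, it can help to wrap polling in a helper with an explicit timeout and interval. The sketch below is illustrative: `fetch_status` is any callable returning the prediction JSON (the function names and defaults are assumptions, not part of the Eachlabs API):

```python
import time

def poll_until_done(fetch_status, timeout_s=120, interval_s=1.0):
    """Call `fetch_status` (a zero-argument callable returning a dict with
    a "status" key) until it reports "success", raising on "error" or on
    timeout. Timeout and interval values here are illustrative defaults."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        result = fetch_status()
        if result["status"] == "success":
            return result
        if result["status"] == "error":
            raise RuntimeError(f"Prediction failed: {result}")
        time.sleep(interval_s)  # stay well under the 60 requests/minute limit
    raise TimeoutError(f"No result within {timeout_s}s")
```

With a ~25-second typical response time, a 120-second timeout and 1-second interval leave comfortable headroom while staying within the rate limit.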
Overview
Wan 2.1-1.3B is a model that generates short videos from text prompts. It allows users to control various aspects of video generation, including resolution, aspect ratio, frame count, and sampling parameters. By adjusting these settings, users can refine the output to better match their intended results.
Technical Specifications
- Model Type: Diffusion-based video generation
- Parameter Count: 1.3B
- Training Data: Trained on a diverse set of video clips and textual descriptions.
- Motion Generation: Implements a frame interpolation technique to enhance smoothness.
- Temporal Consistency: Designed to maintain coherence between frames for natural motion.
- Optimization: Uses a combination of attention mechanisms and latent space representations for improved efficiency.
Key Considerations
- Higher frame counts improve fluidity but require more processing.
- More sample steps lead to higher quality but increase generation time.
- Changing aspect ratio affects the composition and framing of the video.
- Sample shift modifies motion dynamics; careful tuning is recommended.
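To relate frame count to clip length: the model's 81-frame default produces roughly a 5-second video, which implies an output rate of about 16 fps. That rate is an inference from the numbers in this document, not a documented constant, so treat the estimate as approximate:

```python
def approx_duration_s(frame_num, fps=16):
    """Estimate clip duration in seconds.

    The 16 fps default is an assumption inferred from 81 frames
    yielding roughly a 5-second video; adjust if the actual output
    frame rate differs."""
    return frame_num / fps
```

Under this assumption, 33 frames gives a clip of about 2 seconds and 81 frames about 5 seconds.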
Tips & Tricks
- Aspect Ratio:
- Use 16:9 for landscape formats (YouTube, presentations).
- Use 9:16 for vertical videos (social media, mobile content).
- Frame Number:
- 17 frames: Quick animations with minimal motion.
- 33 frames: Balanced motion for short clips.
- 49-81 frames: Smoother animation but higher processing cost.
- Resolution:
- The model outputs at 480p, which keeps generation fast while maintaining clarity.
- Sample Steps:
- 10-20: Faster results, lower detail.
- 30-40: Balanced quality and speed.
- 50: Highest quality but longest processing time.
- Sample Guide Scale:
- 0-5: Loose adherence to prompt, more randomness.
- 6-10: Balanced guidance.
- 11-20: Strong adherence to prompt, risk of rigidity.
- Sample Shift:
- Adjusting between 0-20 alters motion intensity; 8-12 is a good starting point.
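As a rough starting point, the mid-range values suggested above might translate into an input payload like the following. The values are illustrative choices, not required defaults; the field names follow the create-prediction example earlier in this page:

```python
def balanced_input(prompt):
    """Build an illustrative input payload using the mid-range values
    suggested in the tips above (an example, not the API's defaults)."""
    return {
        "seed": None,              # let the API choose a random seed
        "prompt": prompt,
        "frame_num": "33",         # balanced motion for short clips
        "resolution": "480p",      # the only supported resolution
        "aspect_ratio": "16:9",    # landscape; use "9:16" for vertical video
        "sample_shift": "8",       # suggested starting point for motion
        "sample_steps": "30",      # balanced quality vs. speed
        "sample_guide_scale": "6"  # balanced prompt adherence
    }
```

This dict would be passed as the `input` field of the create-prediction request body.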
Capabilities
- Wan 2.1-1.3B generates short AI-driven videos from textual descriptions.
- Supports adjustable resolution, aspect ratio, and motion settings.
- Provides control over video dynamics through sampling and guiding parameters.
What can I use it for?
- Content Creation: Wan 2.1-1.3B generates video clips for social media and creative projects.
- Concept Visualization: Transform written ideas into animated previews.
- AI-Driven Motion Art: Experiment with text-based animation techniques.
Things to be aware of
- Social Media Content: Create quick video clips for TikTok, Instagram, and YouTube Shorts.
- Concept Visualization: Turn written ideas into moving images for storytelling or marketing.
- Artistic Experiments: Explore creative possibilities with AI-generated motion.
Limitations
- Variability in Output: Small changes in parameters can lead to significantly different results.
- Motion Artifacts: Some animations may appear unnatural, requiring careful tuning of sample shift and frame count.
Output Format: MP4