Live Portrait
live-portrait
Live Portrait adds facial expressions (mimics) and lip sync to a static portrait, driven by a source video.
Prerequisites
- Create an API Key from the Eachlabs Console
- Install the required dependencies for your chosen language (e.g., requests for Python)
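For the Python examples below, the only third-party dependency is requests:

pip install requests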
API Integration Steps
1. Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "live-portrait",
            "version": "0.0.1",
            "input": {
                "face_image": "your_file.image/jpeg",
                "driving_video": "your_file.video/mp4",
                "live_portrait_dsize": "512",
                "live_portrait_scale": "2.3",
                "video_frame_load_cap": "128",
                "live_portrait_lip_zero": "True",
                "live_portrait_relative": "True",
                "live_portrait_vx_ratio": "0",
                "live_portrait_vy_ratio": "-0.12",
                "live_portrait_stitching": "True",
                "video_select_every_n_frames": "1",
                # The retargeting flags are booleans; most other values are passed as strings
                "live_portrait_eye_retargeting": False,
                "live_portrait_lip_retargeting": False,
                "live_portrait_lip_retargeting_multiplier": "1",
                "live_portrait_eyes_retargeting_multiplier": "1"
            }
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]
2. Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready. The API uses long-polling, so you'll need to repeatedly check until you receive a success status.
def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
3. Complete Example
Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.
try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
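Once the prediction succeeds, result['output'] holds a URL to the rendered video. A minimal sketch for saving it to disk; the helper name and the output.mp4 filename are illustrative, not part of the API:

def download_output(output_url, path="output.mp4"):
    # Stream the video to disk so large files aren't held in memory
    with requests.get(output_url, stream=True) as r:
        r.raise_for_status()
        with open(path, "wb") as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)
    return path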
Additional Information
- The API uses a two-step process: create prediction and poll for results
- Response time: ~50 seconds
- Rate limit: 60 requests/minute
- Concurrent requests: 10 maximum
- Use long-polling to check prediction status until completion (a bounded polling sketch follows below)
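Given the ~50 second typical response time and the 60 requests/minute rate limit, it can be worth bounding the poll loop. Here is a minimal sketch of get_prediction with a deadline; the 120-second timeout and 5-second interval are illustrative values, not API requirements:

def get_prediction_with_timeout(prediction_id, timeout=120, interval=5):
    # Poll until success or error, but give up after `timeout` seconds
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        if result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(interval)  # ~12 polls/minute stays well under the rate limit
    raise TimeoutError(f"Prediction {prediction_id} not ready after {timeout}s")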
Overview
Live Portrait is an innovative AI model that animates still portraits, transforming them into lifelike, expressive videos. By leveraging advanced motion transfer techniques, the model creates realistic animations based on facial expressions and movements, making it ideal for content creation, storytelling, and more.
Technical Specifications
Model Architecture:
- Utilizes deep learning techniques for motion transfer and facial keypoint detection.
- Integrates GAN-based methods to enhance animation realism.
Input Format:
- Static images: PNG or JPEG; a resolution of 512x512 pixels or higher is recommended.
Output Format:
- MP4 with frame rates adjustable between 15–30 FPS.
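Before uploading, you can verify that a portrait meets these recommendations. Below is a minimal pre-flight sketch using Pillow; this client-side check is our suggestion, not something the API requires:

from PIL import Image

def check_face_image(path, min_size=512):
    # Reject formats other than PNG/JPEG and flag low-resolution portraits
    with Image.open(path) as img:
        if img.format not in ("PNG", "JPEG"):
            raise ValueError(f"Unsupported format: {img.format}")
        if min(img.size) < min_size:
            raise ValueError(f"{img.size} is below the recommended {min_size}x{min_size}")
    return path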
Key Considerations
Input Quality:
- Low-quality or blurry images may result in poor animation performance.
Ethical Usage:
- Avoid using the model for unethical purposes, such as creating misleading or harmful content.
Copyright Compliance:
- Ensure you have the rights to use the input images for animation.
Tips & Tricks
Facial Alignment:
- Use images where the face is straight and centered for the best animation output.
Custom Animations:
- Experiment with different motion sources to create unique animations.
- Use high-resolution face images for the best results.
Capabilities
Dynamic Animations:
- Converts static portraits into smooth, lifelike animations.
Customizable Outputs:
- Adjust frame rates, animation length, and style settings (see the parameter sketch after this list).
Versatile Applications:
- Suitable for storytelling, virtual avatars, social media content, and more.
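These adjustments map onto the input parameters from step 1. The comments below are our reading of the parameter names and defaults shown earlier, so verify them against your own results:

# Hypothetical overrides for the "input" payload from step 1
input_overrides = {
    "video_frame_load_cap": "256",       # assumed: cap on driving frames loaded, so a higher cap = longer output
    "video_select_every_n_frames": "2",  # assumed: sample every 2nd driving frame
    "live_portrait_dsize": "512",        # assumed: processing size of the face crop
    "live_portrait_scale": "2.3",        # assumed: how tightly the face is cropped
}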
What can I use it for?
Content Creation:
- Enhance marketing campaigns with animated portraits.
Storytelling:
- Bring historical figures or fictional characters to life.
Virtual Avatars:
- Create engaging avatars for gaming or live-streaming platforms.
Education:
- Use animated portraits in e-learning to make lessons more interactive.
- Creating engaging video content from static photos.
- Breathing life into digital portraits for presentations.
Things to be aware of
Custom Expressions:
- Upload a portrait and test various motion templates for personalized animations.
Batch Animations:
- Animate multiple portraits to generate dynamic group content.
Social Media Sharing:
- Create short animated videos or GIFs for Instagram, TikTok, or YouTube.
Experimental Art:
- Use unique motion sources for abstract or creative animations.
Historical Revivals:
- Animate old photos of historical figures for documentaries or museums.
Limitations
Complex Backgrounds:
- Background elements in the input image may interfere with the animation focus.
Facial Occlusions:
- Objects covering the face can disrupt motion tracking.
Output Style:
- Animations are realistic but may lack fine-grained details for highly artistic applications.
Real-Time Processing:
- Limited to pre-processed animations and may not support live-streaming scenarios fully.
- Processing time may vary based on the size and complexity of inputs.
- Outputs depend heavily on the quality of the driving video and face image.