Magic Animate
MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model
Prerequisites
- Create an API Key from the Eachlabs Console
- Install the required dependencies for your chosen language (e.g., requests for Python)
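For the Python examples in this guide, the only third-party dependency is requests:

pip install requests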
API Integration Steps
1. Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "magic-animate",
            "version": "0.0.1",
            "input": {
                "seed": None,  # sent as JSON null; set an integer for reproducible results
                "image": "your_file.image/jpeg",
                "video": "your_file.video/mp4",
                "guidance_scale": "7.5",
                "num_inference_steps": "25"
            }
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]
2. Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready, checking repeatedly until the status is success (an error status means the prediction failed).
def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
3. Complete Example
Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.
try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
Additional Information
- The API uses a two-step process: create prediction and poll for results
- Response time: ~70 seconds
- Rate limit: 60 requests/minute
- Concurrent requests: 10 maximum
- Poll the prediction status repeatedly until completion
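Given the ~70-second typical response time and the 60 requests/minute rate limit, polling every second is more aggressive than necessary. Below is a minimal sketch of a backoff-based variant of get_prediction; it reuses HEADERS from the snippets above, and the delay values, timeout, and function name are illustrative choices rather than API requirements.

import time
import requests

def get_prediction_with_backoff(prediction_id, initial_delay=2.0, max_delay=15.0, timeout=300):
    # Poll with gradually increasing delays (illustrative values).
    delay = initial_delay
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        if result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(delay)
        delay = min(delay * 1.5, max_delay)  # back off gradually to stay well under the rate limit
    raise TimeoutError(f"Prediction {prediction_id} did not finish within {timeout}s")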
Overview
Magic Animate is a model that animates static human images by applying motion patterns extracted from a reference video. This approach ensures temporal consistency, resulting in smooth and natural animations.
Technical Specifications
Magic Animate employs a diffusion model to animate static human images. By leveraging motion information from a reference video, it generates temporally consistent animations with smooth transitions and realistic movements.
Key Considerations
Temporal Consistency: Magic Animate ensures that animations are smooth and free from temporal artifacts.
Motion Alignment: The quality of the output heavily depends on the alignment between the input image and the reference video's motion.
Parameter Sensitivity: Adjusting parameters like num_inference_steps and guidance_scale can significantly impact the animation quality.
Tips & Tricks
Input Image:
- Ensure the image is high-resolution and well-lit.
- The subject should be clearly visible without obstructions.
Reference Video:
- Select videos where the motion aligns with the intended animation.
- Ensure the video's perspective matches that of the input image for seamless integration.
Parameter Settings (a small sweep sketch follows this list):
- Number of Inference Steps (num_inference_steps):
  - Range: 1 to 200.
  - For detailed and refined animations, consider setting this parameter between 100 and 150.
- Guidance Scale (guidance_scale):
  - Range: 1 to 50.
  - A value between 15 and 25 often provides a good balance between adhering to the input image and incorporating the reference video's motion.
- Seed (seed):
  - Setting a specific seed ensures reproducible results.
  - If variability is desired, use a different seed value for each run.
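To see how these parameters interact, one practical approach is a small sweep that holds the seed fixed while varying guidance_scale. The sketch below reuses HEADERS from the API examples above; the helper name and the specific values are illustrative examples within the documented ranges, not official recommendations.

import requests

def create_prediction_with(params):
    # Illustrative helper: same request as create_prediction above,
    # but with caller-supplied input parameters merged in.
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "magic-animate",
            "version": "0.0.1",
            "input": {
                "image": "your_file.image/jpeg",
                "video": "your_file.video/mp4",
                **params,
            },
        },
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]

# Fixed seed so runs differ only in guidance scale (values are examples).
for scale in ("15", "20", "25"):
    pid = create_prediction_with({
        "seed": 42,
        "guidance_scale": scale,
        "num_inference_steps": "125",
    })
    print(f"guidance_scale={scale}: prediction {pid}")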
Capabilities
Realistic Animation: Transforms static images into dynamic animations by applying motion from reference videos.
Temporal Consistency: Ensures that the generated animations are smooth and free from temporal artifacts.
Parameter Control: Offers adjustable parameters to fine-tune the animation process according to user preferences.
What can I use it for?
Content Creation with Magic Animate: Enhance static images by adding realistic motion for multimedia projects.
Virtual Avatars: Animate character images for use in virtual environments or presentations.
Educational Tools: Create dynamic visual aids from static images to facilitate learning and engagement.
Things to be aware of
Diverse Motions: Experiment with various reference videos to observe how different motions affect the animation.
Parameter Exploration: Adjust num_inference_steps and guidance_scale to see their impact on the animation quality.
Background Simplification: Use images with simple backgrounds to evaluate Magic Animate's performance in isolating and animating the subject.
Limitations
Pose Compatibility: Magic Animate performs best when the poses in the input image and reference video are similar.
Complex Backgrounds: Intricate backgrounds in the input image might lead to less accurate animations.
Motion Complexity: Highly complex or rapid motions in the reference video can sometimes result in unnatural animations.
Output Format: MP4
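Because the prediction result exposes the generated MP4 as a URL in its output field (see the complete example above), saving it locally takes only a few lines. This is a minimal sketch; the function name, file name, and timeout are illustrative.

import requests

def download_output(result, path="animation.mp4"):
    # result["output"] is the URL returned in the prediction result.
    resp = requests.get(result["output"], timeout=120)
    resp.raise_for_status()
    with open(path, "wb") as f:
        f.write(resp.content)
    return path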