bytedance-dreamactor-v2

DREAMACTOR

Transfer motion from a video to characters in an image with DreamActor v2. It performs especially well on non-human and multi-character scenes, producing stable, fluid, and realistic motion.

Avg Run Time: 220 seconds

Model Slug: bytedance-dreamactor-v2

Release Date: February 6, 2026

Playground

Input

Character image: Enter a URL or choose a file from your computer.

Motion reference video: Enter a URL or choose a file from your computer.

Output

Example Result

Preview and download your result.

$0.05 per second based on generated duration

API & SDK

Create a Prediction

Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
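As a sketch of the create step, a minimal stdlib-only Python client might look like the following. The base URL, endpoint path, header name, and payload field names here are assumptions for illustration; check the Eachlabs API reference for the current values.

```python
import json
import urllib.request

API_KEY = "YOUR_EACH_API_KEY"          # assumption: auth passed via header
BASE_URL = "https://api.eachlabs.ai"   # assumption: placeholder base URL

def build_payload(image_url: str, video_url: str, duration: int = 5) -> dict:
    """Assemble the model inputs; field names are illustrative assumptions."""
    return {
        "model": "bytedance-dreamactor-v2",
        "input": {
            "image": image_url,    # static character image (URL)
            "video": video_url,    # reference motion video (URL)
            "duration": duration,  # seconds of output to generate
        },
    }

def create_prediction(image_url: str, video_url: str) -> str:
    """POST the inputs and return the prediction ID from the response."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/prediction/",  # assumed endpoint path
        data=json.dumps(build_payload(image_url, video_url)).encode(),
        headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())["predictionID"]  # assumed field name
```

The returned prediction ID is what you pass to the result endpoint in the next step.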

Get Prediction Result

Poll the prediction endpoint with the prediction ID until the result is ready. The API uses long-polling, so you'll need to repeatedly check until you receive a success status.
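The polling loop can be sketched as below. The status strings are assumptions, and the fetch callable (e.g. a GET on the prediction-result endpoint with your prediction ID) is injected so the loop itself stays self-contained and testable.

```python
import time
from typing import Callable

def poll_prediction(
    fetch: Callable[[], dict],   # e.g. a GET on the prediction-result endpoint
    interval: float = 2.0,       # seconds between checks
    max_attempts: int = 150,     # ~5 min cap; avg run time is ~220 s
) -> dict:
    """Repeatedly call fetch() until a terminal status, then return the body."""
    for _ in range(max_attempts):
        body = fetch()
        if body.get("status") == "success":            # assumed status value
            return body
        if body.get("status") in ("error", "failed"):  # assumed status values
            raise RuntimeError(f"prediction failed: {body}")
        time.sleep(interval)
    raise TimeoutError("prediction did not finish in time")
```

Injecting `fetch` also makes it easy to swap in retries or logging without touching the loop.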

Readme

Table of Contents
Overview
Technical Specifications
Key Considerations
Tips & Tricks
Capabilities
What Can I Use It For?
Things to Be Aware Of
Limitations

Overview

ByteDance | DreamActor | v2 is an advanced image-to-video model from the DreamActor family that transfers motion from a reference video to the characters in a static input image, producing stable, fluid animations. Developed by ByteDance, it excels with non-human subjects such as animals or creatures, and with multiple characters at once, delivering realistic motion where other models often struggle with consistency or artifacts. This makes it well suited to creators who need precise motion retargeting without retraining or complex setups. Available via the ByteDance | DreamActor | v2 API on each::labs, it solves the challenge of animating custom images with professional-grade dynamics, producing outputs that preserve character identity and physics-aware movement.

Technical Specifications

  • Input Types: Static image (URL or upload) + reference motion video + optional text prompt
  • Output Format: Video file with synchronized motion transfer
  • Max Duration: Up to 15 seconds, selectable in 1-second increments (in line with other ByteDance family models)
  • Resolutions: 720p (default); 480p also supported
  • Aspect Ratios: 16:9, 9:16, 1:1, 4:3, 3:4, 3:2, 2:3, auto-detect
  • Audio: Synchronized audio generation is possible within the ByteDance ecosystem
  • Processing Time: Variable; optimized for API use on platforms like each::labs
  • Pricing: Approximately $0.05 per second of generated video

These specs position ByteDance | DreamActor | v2 as a cost-effective choice for ByteDance image-to-video tasks focused on motion transfer.
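At the quoted $0.05 per generated second, cost is a simple linear function of output duration; a quick sketch, assuming the 1-15 second range from the specs above:

```python
PRICE_PER_SECOND = 0.05  # USD, from the pricing note above

def estimate_cost(duration_s: int) -> float:
    """Estimated charge in USD for a clip of the given duration (1-15 s)."""
    if not 1 <= duration_s <= 15:
        raise ValueError("duration must be 1-15 seconds")
    return round(duration_s * PRICE_PER_SECOND, 2)
```

So a maximum-length 15-second clip costs about $0.75, which is why iterating on short test clips first is cheap.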

Key Considerations

Before using ByteDance | DreamActor | v2, ensure you have a high-quality reference video for motion capture and a clear input image with defined characters. It performs best on each::labs for seamless API integration, requiring no local setup. Opt for this model over general image-to-video tools when precise motion retargeting to non-human or multi-character scenes is needed, trading higher resolutions for superior stability. Cost scales with duration at low rates, making it efficient for iterative workflows, but test short clips first to verify motion fidelity.

Tips & Tricks

Access bytedance-dreamactor-v2 on Eachlabs via the Playground for instant testing, the API for production apps, or the SDK for custom integrations. Upload a character reference image and a motion driving video, adjust settings such as duration for short-form clips, and generate fluid, high-fidelity image-to-video outputs in minutes, with stable results across diverse subjects.

Capabilities

  • Transfers precise motion from reference video to single or multiple characters in input images
  • Excels with non-human subjects like animals, creatures, or objects, maintaining anatomical accuracy
  • Produces stable, fluid animations with minimal flicker or identity drift across frames
  • Supports multi-character scenes with independent motion application per subject
  • Handles diverse aspect ratios including auto-detect for social media-ready outputs
  • Integrates physics-aware motion for realistic weight, momentum, and interactions
  • Configurable durations up to 15 seconds in 1-second increments
  • API-ready for developers via each::labs with image URL and video reference inputs

What Can I Use It For?

Content Creators: Animate static character art with dance motions from a reference video. Example: Input fantasy elf image and dance clip; prompt: "Elf performs elegant ballet spin, flowing hair and robes." Leverages multi-character stability for group scenes.

Marketers: Bring product mockups to life by transferring demo motions to custom visuals. Example: Robot toy image + walking video; prompt: "Robot marches forward confidently, lights blinking in sync." Non-human expertise ensures precise mechanical movements.

Game Developers: Prototype character animations from concept art using motion capture clips. Example: Monster design + attack sequence; prompt: "Multi-limbed beast lunges aggressively, tentacles whipping." Fluid physics aid realistic gameplay previews.

Designers: Create promotional videos for apparel with walking cycles on models. Example: Group photo + runway strut; prompt: "Models walk synchronized on catwalk, fabrics swaying naturally." The each::labs API streamlines iteration for professional outputs.

Things to Be Aware Of

ByteDance | DreamActor | v2 may introduce minor artifacts with heavily occluded reference videos or low-contrast input images, leading to motion bleed or imperfect transfer. Users often overlook matching the reference duration to the output duration, causing clipped animations, so always preview short tests first. Resource needs are moderate, but complex multi-character transfers increase processing time on each::labs. A common mistake is writing prompts that override the motion data, which dilutes transfer accuracy; prioritize reference-video quality over text.

Limitations

ByteDance | DreamActor | v2 caps at 720p resolution, lacking 1080p for ultra-high-definition needs. It focuses strictly on motion transfer, with no native text-to-video or editing extensions. Performance dips on extreme deformations or fast, jerky motions that are poorly represented in the reference video. As a newer model, it also has limited community resources, so fewer optimized prompts are available.