RAY-2
The Video Reframe model automatically adjusts a video’s aspect ratio for different formats while keeping key subjects in view. It’s ideal for quickly optimizing content for various platforms without losing visual quality.
Avg Run Time: 70.000s
Model Slug: luma-dream-machine-ray-2-flash-video-reframe
Playground
Input
Enter a URL or choose a file from your computer.
(Max 50MB)
Output
Example Result
Preview and download your result.
API & SDK
Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready. The API uses polling, so you'll need to check repeatedly until you receive a success status.
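The two steps above can be sketched in Python. The endpoint path, header name, and response fields below are illustrative assumptions, not the documented Eachlabs API — consult the official API reference for the exact names. The polling loop takes a fetch function so it can be exercised without a network call.

```python
import json
import time
import urllib.request

API_BASE = "https://api.eachlabs.ai"  # assumed base URL -- check your dashboard
API_KEY = "YOUR_API_KEY"


def create_prediction(video_url: str, aspect_ratio: str) -> str:
    """POST the model inputs; return the prediction ID from the response.

    The path /v1/prediction and header X-API-Key are assumptions.
    """
    body = json.dumps({
        "model": "luma-dream-machine-ray-2-flash-video-reframe",
        "input": {"video_url": video_url, "aspect_ratio": aspect_ratio},
    }).encode()
    req = urllib.request.Request(
        f"{API_BASE}/v1/prediction",
        data=body,
        headers={"Content-Type": "application/json", "X-API-Key": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]


def wait_for_result(fetch_status, poll_interval: float = 2.0, max_polls: int = 150):
    """Repeatedly call fetch_status() until it reports success or failure.

    fetch_status is injected so the loop is testable; in production it
    would GET the prediction endpoint with the prediction ID.
    """
    for _ in range(max_polls):
        result = fetch_status()
        if result.get("status") == "success":
            return result
        if result.get("status") == "error":
            raise RuntimeError(result.get("error", "prediction failed"))
        time.sleep(poll_interval)
    raise TimeoutError("prediction did not finish in time")
```

In production you would pass something like `lambda: json.load(urllib.request.urlopen(status_req))` as `fetch_status`, reusing the same assumed auth header.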
Readme
Overview
luma-dream-machine-ray-2-flash-video-reframe — Video-to-Video AI Model
The luma-dream-machine-ray-2-flash-video-reframe model from Luma's ray-2 family automatically reframes videos to fit different aspect ratios, ensuring key subjects stay centered and visible without manual cropping or quality loss. This video-to-video AI model solves the common pain point of adapting content for platforms like Instagram, TikTok, or YouTube, delivering fast optimizations powered by Ray2's advanced motion coherence. Developers and creators searching for Luma video-to-video tools can use it to programmatically transform footage at scale, maintaining authentic motion and physics.
Built on Luma's Dream Machine architecture, luma-dream-machine-ray-2-flash-video-reframe leverages Ray2 Flash for rapid processing, supporting resolutions from 540p to 4K with upscaling and video lengths up to 10 seconds on paid plans.
Technical Specifications
What Sets luma-dream-machine-ray-2-flash-video-reframe Apart
The luma-dream-machine-ray-2-flash-video-reframe model stands out in the video-to-video AI landscape with its automatic subject tracking during reframing, powered by Ray2's realistic camera motion concepts such as dolly shots and orbits. This enables seamless adaptation to vertical, square, or widescreen formats while preserving frame-to-frame consistency, unlike basic croppers that distort motion.
It integrates Ray2 Flash mode for up to 3x faster generation speeds compared to standard Ray models, ideal for high-volume workflows like social media repurposing. Users benefit from queue-free processing on paid tiers, turning hours-long waits into minutes.
Key technical specs include support for 540p, 720p, 1080p, and 4K outputs with upscaling; input videos up to 10 seconds; and formats optimized for professional pipelines. For developers seeking a luma-dream-machine-ray-2-flash-video-reframe API, it offers programmatic control over reframing parameters without needing mocap or green screens.
- Native subject preservation in reframed outputs, using Ray2's physics-aware engine for natural movement retention.
- Composable camera controls via prompts, enabling cinematic reframes like "orbit around subject in 9:16."
- Scalable for batch processing, perfect for video-to-video AI model integrations in apps.
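The batch-processing point above can be sketched as a simple fan-out loop that submits one reframe job per target format. The `submit_job` callable is a placeholder for whatever create-prediction call your client exposes; the function and parameter names here are assumptions for illustration.

```python
def reframe_for_platforms(video_url, targets, submit_job):
    """Submit one reframe job per target format.

    targets maps a platform name to an aspect ratio, e.g. {"tiktok": "9:16"}.
    submit_job(video_url, aspect_ratio) stands in for your API client's
    create-prediction call and should return a job/prediction ID.
    Returns a dict of {platform_name: job_id}.
    """
    return {name: submit_job(video_url, ratio) for name, ratio in targets.items()}
```

Each returned job ID can then be polled independently, so a single source clip fans out to every platform format in one pass.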
Key Considerations
- The model excels at maintaining subject focus during aspect ratio changes, but complex scenes with multiple moving subjects may require manual review.
- For best results, use source videos with clear subject separation and minimal background clutter.
- Avoid low-resolution or heavily compressed source material, as this can impact output quality.
- There is a trade-off between speed and quality; higher quality settings may increase processing time.
- Effective use depends less on elaborate text prompts and more on providing well-structured input videos and specifying the desired output format clearly.
Tips & Tricks
How to Use luma-dream-machine-ray-2-flash-video-reframe on Eachlabs
Access luma-dream-machine-ray-2-flash-video-reframe seamlessly on Eachlabs via the Playground for instant testing, API for production-scale integrations, or SDK for custom apps. Upload your input video, select target aspect ratios like 9:16 or 1:1, add optional prompts for camera motion, and generate high-quality outputs up to 4K in seconds—perfect for video-to-video workflows with preserved subject focus.
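As a concrete illustration of that workflow, the helper below assembles the inputs described above (source video, target aspect ratio, optional camera-motion prompt). The field names and the list of supported ratios are assumptions for illustration — check the model's input schema on Eachlabs for the exact keys and accepted values.

```python
# Illustrative list only; the model's actual supported ratios may differ.
SUPPORTED_RATIOS = {"16:9", "9:16", "1:1", "4:3", "3:4", "21:9"}


def build_reframe_input(video_url, aspect_ratio, prompt=None):
    """Assemble the model-input dict for a reframe request.

    The keys video_url, aspect_ratio, and prompt are assumed field
    names, not confirmed schema.
    """
    if aspect_ratio not in SUPPORTED_RATIOS:
        raise ValueError(f"unsupported aspect ratio: {aspect_ratio}")
    payload = {"video_url": video_url, "aspect_ratio": aspect_ratio}
    if prompt:
        payload["prompt"] = prompt  # e.g. "orbit around subject in 9:16"
    return payload
```

Validating the aspect ratio client-side like this fails fast before a job is submitted, which matters in high-volume workflows where a rejected request wastes a queue slot.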
Capabilities
- Automatically reframes videos to fit a wide range of aspect ratios while keeping key subjects in view.
- Maintains cinematic quality and natural motion, even after significant cropping or resizing.
- Can generate new video content from still images, enabling video-to-video and image-to-video transformations.
- Supports rapid content localization by allowing background and subject modifications for different markets.
- Delivers consistent, brand-aligned visuals across large volumes of content.
What Can I Use It For?
Use Cases for luma-dream-machine-ray-2-flash-video-reframe
Content creators repurposing horizontal footage for TikTok can upload a 16:9 clip and specify "reframe to 9:16 portrait with subject centered," instantly generating vertical versions that keep performers in frame with smooth motion intact—eliminating tedious manual edits.
Marketers optimizing product demos for multiple platforms feed demo videos into luma-dream-machine-ray-2-flash-video-reframe, using its Ray2 physics to reframe for Instagram Reels or YouTube Shorts while highlighting key items like gadgets in dynamic orbits, boosting engagement without reshooting.
Developers building Luma video-to-video apps for e-commerce automate aspect ratio adjustments, inputting catalog videos and prompts like "reframe to square format tracking the product," to create platform-specific assets at scale with 4K upscaling for professional polish.
Video editors handling client projects use it for quick format switches in pre-production, extending short clips up to 10 seconds during reframing to test storyboards with realistic camera moves, streamlining workflows for advertising teams.
Things to Be Aware Of
- Some users report that highly dynamic scenes with overlapping subjects may challenge the model's subject tracking.
- Occasional artifacts or unnatural cropping can occur in edge cases, especially with low-quality source material.
- Processing high-resolution or long-duration videos may require substantial computational resources.
- Consistency across frames is generally strong, but minor flickering or jitter may appear in complex transitions.
- Positive feedback highlights the model's speed, ease of use, and ability to produce visually appealing outputs with minimal manual intervention.
- Some users express a desire for more granular control over subject prioritization and reframing logic.
- The model is frequently updated, with ongoing improvements in motion coherence and visual fidelity reported by the community.
Limitations
- The model may struggle with videos containing multiple equally prominent subjects or rapid, unpredictable motion.
- Not ideal for scenarios requiring precise manual control over every frame or highly customized reframing logic.
- Output quality is dependent on the quality and clarity of the input video; poor source material can limit results.
Related AI Models
You can seamlessly integrate advanced AI capabilities into your applications without the hassle of managing complex infrastructure.
