RAY-2
The Video Reframe model adjusts videos to different aspect ratios while keeping the main subject centered. It ensures the composition stays clear and visually balanced across formats.
Avg Run Time: 170.000s
Model Slug: luma-dream-machine-ray-2-video-reframe
Playground
Input
Enter a URL or choose a video file from your computer (max 50MB).
Output
Example Result
Preview and download your result.
API & SDK
Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
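A minimal sketch of the create step in Python, using only the standard library. The endpoint URL, header name, and input field names below are illustrative assumptions, not the documented Eachlabs API; check your dashboard and the API reference for the actual values.

```python
import json
import urllib.request

# Hypothetical endpoint; replace with the URL from your Eachlabs dashboard.
API_URL = "https://api.eachlabs.ai/v1/prediction"

def build_payload(video_url: str, aspect_ratio: str = "9:16") -> dict:
    """Assemble the model inputs for the create-prediction request.
    Field names here are assumptions for illustration."""
    return {
        "model": "luma-dream-machine-ray-2-video-reframe",
        "input": {"video_url": video_url, "aspect_ratio": aspect_ratio},
    }

def create_prediction(api_key: str, video_url: str,
                      aspect_ratio: str = "9:16") -> str:
    """POST the model inputs and return the prediction ID from the response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(video_url, aspect_ratio)).encode(),
        headers={"Content-Type": "application/json", "X-API-Key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["predictionID"]
```

Keep the returned prediction ID; it is the handle you pass to the result endpoint in the next step.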
Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready. Repeat the request at a reasonable interval until you receive a success status.
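The polling loop can be sketched as a small helper that takes a `fetch` callable (your GET request to the prediction endpoint, returning the response JSON as a dict). The "success" status comes from this page; the "failed" and "canceled" statuses are assumptions added for completeness.

```python
import time

def poll_until_done(fetch, interval: float = 5.0,
                    max_attempts: int = 120) -> dict:
    """Call `fetch()` repeatedly until the prediction reaches a terminal
    status, sleeping `interval` seconds between attempts."""
    for _ in range(max_attempts):
        result = fetch()
        status = result.get("status")
        if status == "success":
            return result
        if status in ("failed", "canceled"):  # assumed failure statuses
            raise RuntimeError(f"prediction ended with status {status!r}")
        time.sleep(interval)
    raise TimeoutError("prediction did not finish within the polling budget")
```

With the average run time around 170 seconds, a 5-second interval and a generous attempt budget is a reasonable starting point.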
Readme
Overview
luma-dream-machine-ray-2-video-reframe — Video-to-Video AI Model
Developed by Luma as part of the Dream Machine platform, luma-dream-machine-ray-2-video-reframe is a video-to-video AI model that intelligently reframes and resizes videos to different aspect ratios while keeping the main subject centered and visually balanced. Instead of crude cropping or stretching, this model expands video borders and fills in missing edges with contextually appropriate content that maintains the original grain, lighting, and motion coherence—solving a critical pain point for creators managing content across multiple platforms.
The model excels at transforming a single video asset into platform-optimized versions. Feed it a 16:9 cinematic shot, and it generates a 9:16 vertical version for Instagram Reels or TikTok without losing visual integrity. This capability is powered by Luma's Ray2 architecture, which understands physics and lighting at a deep level, ensuring that expanded regions feel natural and integrated rather than artificially generated.
Technical Specifications
What Sets luma-dream-machine-ray-2-video-reframe Apart
Temporal consistency during expansion: When the model extends video borders, it maintains frame-to-frame coherence, matching the original footage's grain structure, color grading, and lighting conditions. This means expanded regions don't look like obvious AI fills—they integrate seamlessly with the source material, a capability that distinguishes it from basic video aspect ratio converters.
Composition-aware subject centering: The model automatically keeps your primary subject in focus and properly framed across different aspect ratios. Whether you're reframing a product shot, interview footage, or cinematic scene, the composition remains visually balanced without requiring manual keyframing or adjustment instructions.
Multi-platform output in a single generation: Support for 16:9, 9:16, and 1:1 aspect ratios means you can generate versions optimized for cinema, vertical social media, and square feeds from one source video. Output resolution reaches native 1080p with optional 4K upscaling, and videos can extend up to 10 seconds on standard plans.
Physics-aware edge generation: Because luma-dream-machine-ray-2-video-reframe inherits Ray2's understanding of motion, depth, and lighting, expanded regions respect the physical logic of the original scene. Parallax effects, shadows, and object interactions remain consistent—critical for professional video editing workflows.
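The three supported aspect ratios at native 1080p translate into concrete pixel dimensions. The helper below works these out with simple arithmetic, assuming the shorter side is held at 1080; the exact dimensions the service emits are not specified on this page.

```python
from fractions import Fraction

# The three aspect ratios supported by the model.
RATIOS = {"16:9": Fraction(16, 9), "9:16": Fraction(9, 16), "1:1": Fraction(1, 1)}

def output_size(ratio: str, short_side: int = 1080) -> tuple:
    """Return (width, height) with the shorter dimension fixed at `short_side`."""
    r = RATIOS[ratio]
    if r >= 1:  # landscape or square: height is the short side
        return (round(short_side * r), short_side)
    return (short_side, round(short_side / r))  # portrait: width is the short side
```

For example, `output_size("16:9")` gives (1920, 1080) and `output_size("9:16")` gives (1080, 1920), which is why a single 16:9 master can feed both widescreen and vertical deliverables.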
Key Considerations
- The model is optimized for short clips (5–10 seconds); extending beyond 30 seconds may result in noticeable quality loss
- Best results are achieved when the main subject is clearly defined and not occluded by complex backgrounds or multiple moving objects
- For optimal reframing, provide source videos with sufficient margin around the subject to allow for cropping or expansion without losing key elements
- Quality vs speed: Higher resolutions and longer durations increase processing time and resource usage
- Prompt engineering: Use clear, concise instructions when leveraging natural-language editing features; specify desired aspect ratios and subject focus for best results
- Avoid using highly compressed or low-resolution source material, as this can impact the quality of reframed outputs
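The considerations above can be turned into a pre-flight check before submitting a job. The thresholds mirror this section (5–10 s optimal, quality loss beyond 30 s, the 50MB upload limit); the 720-pixel minimum for "low resolution" is an assumed cutoff, and the function is an illustrative helper, not part of the Eachlabs API.

```python
def preflight(duration_s: float, width: int, height: int,
              size_bytes: int, max_size: int = 50 * 1024 * 1024) -> list:
    """Return a list of warnings for a candidate source video."""
    warnings = []
    if duration_s > 30:
        warnings.append("clips beyond 30s may show noticeable quality loss")
    elif duration_s > 10:
        warnings.append("optimal range is 5-10s; expect reduced quality or longer processing")
    if min(width, height) < 720:  # assumed threshold for "low resolution"
        warnings.append("low-resolution sources can degrade reframed output")
    if size_bytes > max_size:
        warnings.append("file exceeds the 50MB upload limit")
    return warnings
```

An empty list means the clip falls inside the recommended envelope; anything returned is worth addressing before spending a generation.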
Tips & Tricks
How to Use luma-dream-machine-ray-2-video-reframe on Eachlabs
Access luma-dream-machine-ray-2-video-reframe through Eachlabs via the Playground for interactive testing or through the API for production workflows. Provide your source video and specify your target aspect ratio (16:9, 9:16, or 1:1); the model handles composition centering and edge-filling automatically. Output arrives as high-quality MP4 video at 1080p native resolution with optional 4K upscaling, ready for immediate use across platforms.
Capabilities
- Accurately reframes videos to a wide range of aspect ratios while keeping the main subject centered and the composition visually balanced
- Supports both image-to-video and video-to-video transformations, enabling flexible content adaptation
- Maintains high visual fidelity and realistic motion, even after significant cropping or expansion
- Integrates with natural-language editing for intuitive, text-guided refinements
- Offers upscaling to 4K for high-resolution outputs suitable for professional workflows
- Provides camera motion control for creative reframing and dynamic scene transitions
What Can I Use It For?
Use Cases for luma-dream-machine-ray-2-video-reframe
Social media content creators and marketers: Creators shooting cinematic video content for YouTube or broadcast can feed their 16:9 master into luma-dream-machine-ray-2-video-reframe to automatically generate 9:16 vertical versions for Instagram Reels, TikTok, and YouTube Shorts. The model preserves the original composition and visual quality, eliminating the need for manual cropping or reshoots. Example prompt: "Reframe this product demo video to 9:16 for mobile, keeping the product centered and maintaining the studio lighting."
E-commerce and product teams: Product videographers can generate multiple aspect ratio versions from a single shoot—square for product pages, vertical for mobile apps, and widescreen for marketing emails. The intelligent edge-filling ensures product visibility and appeal across all formats without quality degradation.
Filmmakers and post-production professionals: Directors working on projects destined for multiple distribution channels (theatrical, streaming, broadcast) can use luma-dream-machine-ray-2-video-reframe to adapt footage for different exhibition formats while maintaining cinematic integrity. The model's understanding of lighting and motion ensures that reframed sequences feel intentional rather than algorithmically stretched.
Developers building AI video editing platforms: API users integrating a video-to-video AI model into their applications can leverage luma-dream-machine-ray-2-video-reframe's composition awareness and temporal consistency to offer clients professional-grade video reformatting without building custom machine learning pipelines.
Things to Be Aware Of
- Some users report that quality may degrade when generating clips longer than 10–30 seconds, especially at higher resolutions
- The model currently does not support audio, which may limit its use in certain production workflows
- Complex scenes with multiple moving subjects or intricate choreography may require careful prompting and iteration to maintain consistency
- Resource requirements can be significant for high-resolution or batch processing tasks; users recommend planning for longer processing times with 4K outputs
- Positive feedback highlights the model’s ease of use, realistic visuals, and reliable subject tracking during reframing
- Common concerns include occasional artifacts at the edges of reframed videos and challenges with highly dynamic or cluttered scenes
- Users appreciate the integration of natural-language editing, which streamlines the refinement process without manual masking or keyframing
Limitations
- Limited to short video durations (optimal up to 10 seconds; quality may drop beyond 30 seconds)
- No audio support as of mid-2025, restricting use in projects requiring synchronized sound
- May struggle with complex multi-subject scenes or precise camera choreography, requiring additional user intervention for best results
