Luma Dream Machine | Ray 2 | Video Reframe
The Video Reframe model adjusts videos to different aspect ratios while keeping the main subject centered. It ensures the composition stays clear and visually balanced across formats.
Avg Run Time: 170.000s
Model Slug: luma-dream-machine-ray-2-video-reframe
Category: Video to Video
Input
Provide a source video via URL or file upload (max 50MB).
Output
The reframed video, available for preview and download.
Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
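A minimal sketch of the create step in Python. The base URL, header name, and payload field names below are assumptions for illustration (the page does not specify them); substitute your provider's actual endpoint, auth scheme, and input schema:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"                 # placeholder; use your real key
BASE_URL = "https://api.example.com/v1"  # hypothetical base URL

def build_payload(video_url: str, aspect_ratio: str) -> dict:
    # Field names are illustrative; consult your provider's schema.
    return {
        "model": "luma-dream-machine-ray-2-video-reframe",
        "inputs": {
            "video_url": video_url,        # source video (max 50MB)
            "aspect_ratio": aspect_ratio,  # e.g. "9:16" for vertical output
        },
    }

def create_prediction(video_url: str, aspect_ratio: str = "9:16") -> str:
    data = json.dumps(build_payload(video_url, aspect_ratio)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/predictions",
        data=data,
        headers={"Content-Type": "application/json", "x-api-key": API_KEY},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]  # prediction ID used when polling

if __name__ == "__main__":
    print(create_prediction("https://example.com/source.mp4"))
```

The returned prediction ID is what you pass to the result endpoint in the next step.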
Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready. The API uses polling, so you'll need to check repeatedly until you receive a success status.
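A polling loop might look like the following sketch. The endpoint path, header name, and status values are assumptions (only "success" is mentioned on this page); adjust them to your provider's API:

```python
import json
import time
import urllib.request

API_KEY = "YOUR_API_KEY"                 # placeholder; use your real key
BASE_URL = "https://api.example.com/v1"  # hypothetical base URL

# Illustrative terminal statuses; check your provider's documentation.
TERMINAL_STATUSES = {"success", "failed", "canceled"}

def is_done(status: str) -> bool:
    return status in TERMINAL_STATUSES

def get_result(prediction_id: str, interval: float = 5.0,
               timeout: float = 600.0) -> dict:
    """Poll until the prediction reaches a terminal status or we time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            f"{BASE_URL}/predictions/{prediction_id}",
            headers={"x-api-key": API_KEY},
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        if is_done(body.get("status", "")):
            return body
        # Avg run time is ~170s, so expect many polls before completion.
        time.sleep(interval)
    raise TimeoutError(f"prediction {prediction_id} did not finish in {timeout}s")
```

Given the ~170s average run time, a polling interval of a few seconds with a generous overall timeout is a reasonable starting point.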
Overview
Luma Dream Machine’s “luma-dream-machine-ray-2-video-reframe” model is a specialized AI video generator developed by Luma Labs, designed to intelligently reframe videos for different aspect ratios while maintaining the main subject’s position and visual balance. Built on the Ray2 architecture, this model is part of Luma’s broader suite of generative video tools, emphasizing realism, natural motion, and logical event sequencing. The Video Reframe capability is particularly aimed at creators and professionals who need to adapt content for multiple platforms without losing compositional integrity.
Key features include the ability to expand or crop video content in any direction, ensuring the subject remains centered and the composition stays visually appealing across formats. The model leverages advanced visual reasoning and camera motion control to preserve motion consistency and cinematic quality during reframing. Its integration with Luma’s Modify with Instructions and Modify Video features enables natural-language editing and broader video transformations, making it a versatile tool for VFX, advertising, film, and design workflows. What sets this model apart is its focus on seamless aspect ratio adaptation, high-quality output at 1080p (with upscaling to 4K), and intuitive controls for both technical and creative users.
Technical Specifications
- Architecture: Ray2 (Luma Labs proprietary generative video model)
- Parameters: Not publicly disclosed
- Resolution: Native 1080p output, upscaling available to 4K
- Input/Output formats: Accepts video and image inputs; outputs video clips (common formats include MP4 and MOV, but specifics may vary)
- Performance metrics: Generates up to 10-second clips per run (extendable to ~30 seconds with potential quality degradation); optimized for realistic visuals and motion consistency; no audio support as of mid-2025
Key Considerations
- The model is optimized for short clips (5–10 seconds); extending beyond 30 seconds may result in noticeable quality loss
- Best results are achieved when the main subject is clearly defined and not occluded by complex backgrounds or multiple moving objects
- For optimal reframing, provide source videos with sufficient margin around the subject to allow for cropping or expansion without losing key elements
- Quality vs speed: Higher resolutions and longer durations increase processing time and resource usage
- Prompt engineering: Use clear, concise instructions when leveraging natural-language editing features; specify desired aspect ratios and subject focus for best results
- Avoid using highly compressed or low-resolution source material, as this can impact the quality of reframed outputs
Tips & Tricks
- For best reframing results, start with videos that have the subject centered and minimal background clutter
- When adapting for social media formats (e.g., vertical or square), ensure the original video has enough headroom and side margins to prevent cropping important details
- Use the Modify with Instructions feature to fine-tune the framing, such as requesting “keep the person’s face centered” or “expand background to the left”
- Iteratively preview and adjust aspect ratio settings to find the optimal composition before final export
- For advanced results, combine reframing with camera motion concepts (e.g., dolly or orbit) to create dynamic transitions between aspect ratios
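The margin advice above can be made concrete: when reframing from 16:9 to 9:16 by cropping alone, only a narrow center slice of the original width survives. A small helper (hypothetical, not part of the Luma API) that computes the largest centered crop for a target aspect ratio:

```python
def center_crop_box(src_w: int, src_h: int, target_ratio: tuple) -> tuple:
    """Return (x, y, w, h) of the largest centered crop with the target aspect ratio."""
    target_w, target_h = target_ratio
    if src_w * target_h > src_h * target_w:
        # Source is wider than the target: keep full height, trim width.
        h = src_h
        w = src_h * target_w // target_h
    else:
        # Source is taller than (or equal to) the target: keep full width, trim height.
        w = src_w
        h = src_w * target_h // target_w
    return ((src_w - w) // 2, (src_h - h) // 2, w, h)

# Reframing 1920x1080 (16:9) to 9:16 retains only a ~607-pixel-wide center
# slice, so subjects near the left or right edges would be lost unless the
# model expands the frame vertically instead of cropping horizontally.
```

This is why keeping the subject centered with generous side margins matters so much for vertical conversions.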
Capabilities
- Accurately reframes videos to a wide range of aspect ratios while keeping the main subject centered and the composition visually balanced
- Supports both image-to-video and video-to-video transformations, enabling flexible content adaptation
- Maintains high visual fidelity and realistic motion, even after significant cropping or expansion
- Integrates with natural-language editing for intuitive, text-guided refinements
- Offers upscaling to 4K for high-resolution outputs suitable for professional workflows
- Provides camera motion control for creative reframing and dynamic scene transitions
What Can I Use It For?
- Repurposing marketing and advertising videos for multiple social media platforms with different aspect ratio requirements
- Adapting cinematic footage for vertical, square, or custom formats without losing key visual elements
- Enhancing VFX and film production pipelines by quickly generating alternate aspect ratio versions for review or distribution
- Creating dynamic hero shots or product animations from static images or existing video clips
- Streamlining content localization and adaptation for international campaigns
- Supporting creative projects where seamless reframing and composition preservation are critical, as documented in user showcases and technical blogs
Things to Be Aware Of
- Some users report that quality may degrade when generating clips longer than 10–30 seconds, especially at higher resolutions
- The model currently does not support audio, which may limit its use in certain production workflows
- Complex scenes with multiple moving subjects or intricate choreography may require careful prompting and iteration to maintain consistency
- Resource requirements can be significant for high-resolution or batch processing tasks; users recommend planning for longer processing times with 4K outputs
- Positive feedback highlights the model’s ease of use, realistic visuals, and reliable subject tracking during reframing
- Common concerns include occasional artifacts at the edges of reframed videos and challenges with highly dynamic or cluttered scenes
- Users appreciate the integration of natural-language editing, which streamlines the refinement process without manual masking or keyframing
Limitations
- Limited to short video durations (optimal up to 10 seconds; quality may drop beyond 30 seconds)
- No audio support as of mid-2025, restricting use in projects requiring synchronized sound
- May struggle with complex multi-subject scenes or precise camera choreography, requiring additional user intervention for best results