luma/ray-2 models

ray-2 by Luma — AI Model Family

The ray-2 model family represents Luma's high-performance video generation engine, designed for creators who need fast iteration, realistic physics, and flexible input workflows. Ray-2 powers Luma's Dream Machine interface and solves a core creative challenge: generating coherent, motion-rich video clips from text, images, or existing footage without sacrificing speed or visual quality. Built on 10x the compute power of its predecessor, ray-2 delivers photorealistic results optimized for both rapid prototyping and professional workflows.

The ray-2 family includes four distinct models: ray-2 Text-to-Video (720p), ray-2 Text-to-Video (540p), ray-2 Video Reframe, and ray-2 Flash Video Reframe—each tailored to different creative starting points and production constraints.

ray-2 Capabilities and Use Cases

Text-to-Video Models (720p and 540p)

The text-to-video variants generate 5–10 second clips from natural language prompts, with the ability to extend sequences to approximately 30 seconds through chaining. Both models output at 24 FPS and support multiple aspect ratios (16:9, 9:16, 4:3, 3:4, 21:9, 9:21), making them ideal for social media, web content, and narrative storytelling. The 720p variant delivers standard production quality, while the 540p option provides faster generation for rapid iteration and lower-bandwidth workflows.

Example use case: A product marketer could prompt: "A sleek wireless headphone rotating slowly on a white surface with soft studio lighting, 5-second clip"—and receive a photorealistic product shot ready for e-commerce or advertising.
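As a rough sketch of what such a request might look like in code: the function name, field names, and model slugs below are illustrative assumptions for demonstration, not the documented each::labs schema. Only the duration range and aspect ratios mirror the capabilities listed above.

```python
# Illustrative sketch of a ray-2 text-to-video request payload.
# Field names and model slugs are assumptions, not the real each::labs schema.
import json

def build_t2v_request(prompt: str, resolution: str = "720p",
                      aspect_ratio: str = "16:9", duration_s: int = 5) -> dict:
    """Assemble a request body for a hypothetical ray-2 text-to-video call."""
    # The 5-10 s duration and these aspect ratios match the capabilities
    # described above; everything else here is hypothetical.
    assert resolution in ("720p", "540p")
    assert aspect_ratio in ("16:9", "9:16", "4:3", "3:4", "21:9", "9:21")
    assert 5 <= duration_s <= 10
    return {
        "model": f"luma/ray-2-t2v-{resolution}",
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,
        "duration": duration_s,
    }

payload = build_t2v_request(
    "A sleek wireless headphone rotating slowly on a white surface "
    "with soft studio lighting", duration_s=5)
print(json.dumps(payload, indent=2))
```

Keeping payload construction in one validated helper like this makes it easy to swap resolutions (720p vs. 540p) when iterating quickly.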

Video Reframe and Flash Video Reframe

These video-to-video models transform existing footage by changing aspect ratio, applying stylistic modifications, or extending sequences. Ray-2 Video Reframe excels at preserving composition while adding dynamic motion; ray-2 Flash Video Reframe prioritizes speed for time-sensitive workflows. Both models are essential for creators working with pre-existing visual assets—character portraits, product photos, mood frames, or raw footage that needs refinement.

Use case: A filmmaker with a 16:9 landscape shot could reframe it to 9:16 for vertical social media distribution while maintaining subject focus and adding subtle camera drift for visual interest.

Pipeline Integration

Ray-2 models work synergistically: start with text-to-video to generate a hero shot, then use video-to-video reframing to adapt that output across multiple platforms and aspect ratios. This workflow reduces iteration cycles and maintains visual consistency across deliverables.
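The hero-shot-then-reframe workflow above can be sketched as an ordered job plan. The model slugs, field names, and the `output-of-job-0` reference are illustrative assumptions about how one job's output might feed the next, not documented each::labs identifiers.

```python
# Sketch of the hero-shot -> multi-platform reframe pipeline described above.
# Model slugs and field names are illustrative assumptions.

def plan_pipeline(prompt: str, target_ratios: list[str]) -> list[dict]:
    """Return an ordered job list: one generation, then one reframe per ratio."""
    jobs = [{"model": "luma/ray-2-t2v-720p", "prompt": prompt,
             "aspect_ratio": "16:9"}]
    for ratio in target_ratios:
        # Each reframe consumes the hero shot produced by the first job.
        jobs.append({"model": "luma/ray-2-reframe",
                     "input": "output-of-job-0",
                     "aspect_ratio": ratio})
    return jobs

pipeline = plan_pipeline("Hero product shot, studio lighting", ["9:16", "4:3"])
print(len(pipeline))  # 1 generation + 2 reframes
```

Because every deliverable derives from the same hero shot, the reframed variants stay visually consistent across platforms.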

What Makes ray-2 Stand Out

Ray-2 excels at realistic physics and motion quality—a critical differentiator in AI video generation. The model understands object weight, lighting behavior, and camera movement in ways that eliminate the "floaty" or unstable artifacts common in competing solutions. This makes ray-2 particularly strong for product visualization, character animation, and any content where believability matters.

Speed and iteration are core strengths. Ray-2 generates clips faster than larger flagship models, enabling creators to test multiple prompts and variations without extended wait times. For agencies and content teams operating on tight deadlines, this efficiency translates directly to productivity gains.

The family also supports upscaling to 4K resolution, allowing creators to generate at standard resolutions and enhance output quality for final delivery. Ray-2 is ideal for:

  • Content creators and agencies needing rapid turnaround on video assets
  • Product marketers requiring consistent, photorealistic demonstrations
  • Animators and storytellers prioritizing motion coherence and physics accuracy
  • Teams working across multiple platforms and aspect ratios simultaneously

Access ray-2 Models via each::labs API

All ray-2 models are accessible through a single, unified API on each::labs, eliminating the friction of managing multiple platform accounts. Whether you're building with the Playground for quick experimentation or integrating via SDK for production workflows, each::labs provides seamless access to the complete ray-2 family.

The platform's unified interface means you can test ray-2 Text-to-Video alongside Video Reframe models, compare outputs, and deploy the best results—all without leaving your development environment.
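A minimal sketch of what an HTTP call to the platform might look like. The URL, header names, and body schema here are assumptions for illustration only; consult the each::labs API reference for the real endpoint and authentication details.

```python
# Minimal sketch of calling an each::labs endpoint over HTTP.
# The URL, headers, and body schema below are assumptions for illustration.
import json
import urllib.request

API_KEY = "YOUR_EACHLABS_API_KEY"  # hypothetical placeholder

def make_generation_request(payload: dict) -> urllib.request.Request:
    """Prepare (but do not send) a POST request to an assumed endpoint."""
    return urllib.request.Request(
        "https://api.eachlabs.ai/v1/predictions",  # assumed endpoint
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = make_generation_request({"model": "luma/ray-2-t2v-720p",
                               "prompt": "Ocean waves at dusk"})
print(req.get_method(), req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` (or any HTTP client) would then return the generation result or a job identifier, depending on how the API is designed.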

Sign up to explore the full ray-2 model family on each::labs and accelerate your video creation pipeline.

FREQUENTLY ASKED QUESTIONS

Dev questions, real answers.

What is ray-2?
The latest video generation model from Luma AI, faster and more realistic than the original Ray model.

Can ray-2 generate looping videos?
Yes, it can generate perfectly looping videos for backgrounds.

How do I access ray-2?
Use Ray 2 on Eachlabs with the pay-as-you-go model.