
TOPAZ

Topaz Video Upscale uses advanced AI enhancement to intelligently increase video resolution while maintaining natural motion, clarity, and fine detail. It’s ideal for restoring low-quality footage or upgrading older videos to professional-grade quality without compromising realism.

Avg Run Time: 120.000s

Model Slug: topaz-upscale-video

Release Date: November 28, 2025

Playground

Input

Enter a URL or choose a file from your computer.

Output

Example Result

Preview and download your result.


API & SDK

Create a Prediction

Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
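A minimal sketch of assembling the "create prediction" request in Python. The endpoint URL, auth header name, and payload field names below are assumptions for illustration, not the official schema; consult the API reference for the exact shapes.

```python
import json

# Hypothetical endpoint -- verify against the official API docs.
API_URL = "https://api.eachlabs.ai/v1/prediction"

def build_prediction_request(api_key: str, video_url: str):
    """Return (headers, JSON body) for a POST that creates a prediction."""
    headers = {
        "X-API-Key": api_key,            # assumed auth header name
        "Content-Type": "application/json",
    }
    payload = {
        "model": "topaz-upscale-video",  # model slug from this page
        "input": {"video_url": video_url},  # assumed input field name
    }
    return headers, json.dumps(payload)

headers, body = build_prediction_request("YOUR_API_KEY",
                                          "https://example.com/clip.mp4")
# POST API_URL with these headers and body, then keep the prediction ID
# from the response for the result-polling step below.
```

The same structure applies with any HTTP client; only the header and payload construction is shown here so the sketch stays client-agnostic.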

Get Prediction Result

Poll the prediction endpoint with the prediction ID until the result is ready. The API is polling-based, so you'll need to repeatedly check until you receive a success status.

Readme

Table of Contents
Overview
Technical Specifications
Key Considerations
Tips & Tricks
Capabilities
What Can I Use It For?
Things to Be Aware Of
Limitations

Overview

Topaz Video AI (often referred to by users as Topaz Video Upscale or Topaz Video Enhance AI) is a commercial AI-powered video enhancement and upscaling application developed by Topaz Labs, a company known for its machine-learning-based photo and video tools. It is designed to increase video resolution (e.g., SD to HD, HD to 4K or 8K) while restoring detail, reducing noise, and maintaining natural motion, making it popular among professional editors, restoration specialists, and advanced hobbyists.

The software bundles multiple specialized AI models (e.g., for upscaling, de-noising, deinterlacing, stabilization, frame interpolation, and SDR-to-HDR conversion) and applies them per-frame with temporal consistency, so details and motion remain stable across time. Internally, it uses deep neural networks trained on large corpora of real and synthetic footage; current releases (around Video AI 7) integrate diffusion-based models (such as Starlight Sharp) for high-detail enhancement and models like Gaia and HyPerion for high-quality upscaling and SDR→HDR tone expansion. What makes it stand out in community feedback is its ability to upscale difficult legacy or low-quality footage to visually convincing 4K/8K with relatively few artifacts, plus strong slow-motion and frame interpolation that are competitive with or better than many alternatives in high-end workflows.

Technical Specifications

  • Architecture: Proprietary deep-learning video enhancement stack using a collection of CNN-based and diffusion-based models (e.g., Gaia, Starlight Sharp, HyPerion) with temporal modeling across frames
  • Parameters: Not publicly disclosed (multiple large models shipped as part of the application)
  • Resolution:
      • Input: Typically from sub-SD (e.g., 240p/360p) up through HD and 4K, including interlaced SD sources
      • Output: Commonly up to 4K and 8K; community reports indicate stable upscaling to 4K for most content and to 8K for high-quality sources
  • Input/Output formats:
      • Input: Standard video container formats (e.g., MP4/H.264, MOV/ProRes, MKV), including progressive and interlaced sources; users frequently process camera originals, DVDs, Blu-rays, and screen captures
      • Output: High-quality encodes such as H.264/H.265 MP4, ProRes/MOV, and other editing-friendly formats, with user-selectable bitrate and codec settings (reported by reviewers and user tutorials)
  • Performance metrics (from user benchmarks and reviews):
      • Processing speed is strongly dependent on the GPU; users report anywhere from well below real time (e.g., 0.1–0.5×) to several frames per second for 4K upscaling on modern GPUs
      • VRAM usage is significant; users commonly report that 8–12 GB+ of VRAM is beneficial for 4K and higher with complex models and frame interpolation
      • Quality assessments from independent comparisons rank Topaz Video AI among the highest for detail preservation and artifact control in 4K upscaling and slow-motion creation, though not always the fastest solution available

Key Considerations

  • Topaz Video AI is compute-intensive; for practical 4K or 8K workflows, a modern GPU with substantial VRAM (8–12 GB or more) is strongly recommended, especially when using advanced models or frame interpolation.
  • Choice of model is critical: Gaia-type models are often recommended by users for natural, high-quality upscaling of live-action, while diffusion-based Starlight Sharp is favored for challenging low-resolution footage where extra sharpness is needed.
  • There is a trade-off between quality and speed: higher-quality models and higher output resolutions (4K/8K, heavy denoising, or interpolation to 60+ fps) can slow processing dramatically; users often batch jobs overnight.
  • Over-sharpening or over-denoising can lead to “plastic” or artificial textures; community advice is to start with conservative enhancement settings and preview small clips before committing to full renders.
  • Frame interpolation (for slow motion or 60+ fps output) can introduce artifacts around fast motion, thin structures, or cuts; users recommend disabling interpolation across scene changes and testing on high-motion segments.
  • Deinterlacing older broadcast or DVD material can be sensitive: selecting the appropriate deinterlace/upscale model and ensuring correct field order is a common community recommendation to avoid combing or jitter.
  • Color and tone mapping (e.g., SDR-to-HDR using HyPerion) should be applied with reference monitoring where possible; some users note the need to fine-tune HDR intensity and avoid clipping highlights.
  • Storage and I/O throughput are important: high-bitrate 4K/8K outputs and intermediate ProRes files can be large, so fast SSDs and ample disk space are recommended in professional workflows.
  • For iterative work, users often export short representative segments (problem scenes, fast motion, low light) to evaluate settings before applying them to an entire feature or long-form project.
  • Since it is model-based rather than prompt-based, there is no text prompt engineering; “prompting” effectively consists of choosing the right AI model, resolution scale, and tuning sliders per content type.

Tips & Tricks

  • Model selection and settings
      • Use Gaia (or equivalent “high-quality” model) for relatively clean HD sources where preserving natural look is more important than aggressive sharpening; many users report this as the best default for professional footage.
      • Use Starlight Sharp or similar diffusion-based detail models for heavily compressed, low-resolution, or noisy sources (e.g., old web videos, VHS transfers) where more aggressive reconstruction is required.
      • For archival or documentary restoration, pair an upscaling model with moderate denoising and minimal sharpening to avoid introducing a “synthetic” look to historical footage.
      • When upscaling anime or stylized content, community posts often suggest using less aggressive noise reduction and sharpening to avoid line distortion and banding, and testing multiple models on short segments.
  • Resolution and scaling strategies
      • If the source is very low resolution (e.g., 360p), some users report better results by first upscaling to 1080p with a conservative model, then to 4K, instead of jumping directly to 4K; this can reduce haloing and artifacts.
      • Avoid unnecessary 8K output unless the content and delivery environment justify it; users note that 8K significantly increases processing time and storage with marginal benefit for most viewing distances.
  • Frame rate and interpolation
      • For sports and action footage, community feedback suggests interpolating to 60 fps for smoothness but avoiding extremely high frame rates unless the output platform fully supports them; check motion artifacts around players and balls.
      • For cinematic content, many editors prefer to keep 24/25/30 fps and only use frame interpolation for specific slow-motion segments, then conform back to the original timeline frame rate.
  • Workflow and iterative refinement
      • Export a 5–10 second test clip from representative scenes (dark, bright, fast motion, skin tones) and run several model/setting combinations in parallel to compare; this is a common user strategy to converge on optimal settings efficiently.
      • Keep original and processed clips side-by-side in an NLE (non-linear editor) or player that allows quick A/B switching; several reviewers emphasize this as key to avoiding over-processing.
      • For long-form projects (documentaries, restored films), users recommend splitting processing into reels or segments to manage crashes, GPU timeouts, or power interruptions more safely.
  • Advanced techniques (examples from user workflows)
      • Hybrid AI pipeline: some creators generate AI-driven shots or scenes elsewhere and then use Topaz Video AI specifically to upscale to 4K and refine detail before final grading, as described in hybrid AI production workflows.
      • SDR-to-HDR prep: for SDR archives destined for HDR delivery, users apply HyPerion SDR→HDR conversion in a relatively conservative mode, then perform final HDR grading in a dedicated color tool to fine-tune highlights and color volume.
      • Grain management: use the built-in grain-adding feature lightly after strong denoising to reintroduce an organic texture and to help mask subtle banding or compression in gradients, especially for filmic looks.

Capabilities

  • Upscales low-resolution or SD/HD footage to higher resolutions (4K and 8K) with strong detail reconstruction and relatively low artifact levels compared to many alternatives.
  • Reduces noise and compression artifacts effectively, particularly in older digital video, low-light footage, and heavily compressed web video, while maintaining temporal stability across frames.
  • Provides high-quality slow motion and frame interpolation, allowing conversion to 60+ fps for smoother playback in sports, action, and other high-motion content.
  • Offers AI-based stabilization that reduces camera shake and jitter, helping convert handheld or archival footage into more professional-looking material.
  • Includes models for SDR-to-HDR conversion (e.g., HyPerion) that expand dynamic range and color depth toward HDR10-style outputs.
  • Supports deinterlacing and restoration of interlaced legacy content (e.g., broadcast, DVD), turning it into progressive HD/4K footage suitable for modern displays.
  • Provides film grain addition and fine control over sharpening, enabling users to tailor texture and perceived sharpness for different aesthetics (cinematic vs ultra-clean).
  • Handles a wide range of source material, including live-action, documentary, historical archives, sports, and some animation/anime, with model choices to adapt to each.
  • Frequently used as a “last-mile” enhancement step in professional workflows to bring AI-generated or otherwise upscaled shots up to 4K quality before color grading and finishing.

What Can I Use It For?

  • Professional restoration of archival and historical footage:
      • Documentary filmmakers and archivists use Topaz Video AI to upscale and clean old film scans, broadcast recordings, and newsreels for HD/4K documentary releases, improving clarity while keeping a natural look.
  • Restoring family and personal archives:
      • Users report upscaling 1980s–2000s home videos, VHS transfers, and DV camcorder footage to 1080p or 4K, reducing noise and improving color for family viewing or digital archiving.
  • Film and TV post-production:
      • Post-production teams use it to enhance B-roll, stock footage, or lower-resolution inserts to match 4K timelines, as well as to create smooth slow-motion shots from footage originally captured at lower frame rates.
  • Hybrid AI and VFX workflows:
      • In hybrid AI production workflows, creators generate AI-driven imagery and then use Topaz to upscale to 4K, refine details, and stabilize shots before compositing and final editing.
  • Sports and action content:
      • Sports videographers and content creators use frame interpolation and upscaling to create smooth 60 fps highlight reels from standard broadcast footage, emphasizing critical plays with clear slow motion.
  • YouTube and social media content:
      • Content creators upscale older or smartphone footage to 4K, denoise, and sharpen to increase perceived production value for channels and social feeds, especially when repurposing legacy content.
  • Wedding and event videography:
      • Wedding videographers apply denoising and sharpening to low-light reception footage and up-res older projects to match newer 4K deliveries, improving clarity while keeping skin tones natural.
  • Corporate, training, and e-learning content:
      • Companies update legacy training videos by upscaling and cleaning them for modern displays, improving readability of on-screen text and clarity of demonstrations without reshooting.
  • Technical and industrial footage:
      • Some industrial and surveillance-related use cases (mentioned in comparisons) rely on AI upscaling to enhance visibility of details (faces, machinery, indicators) in low-resolution monitoring footage.
  • Anime and stylized content:
      • Enthusiasts report using Topaz Video AI to upscale older anime DVDs or web releases to HD/4K, though model selection and gentle settings are needed to avoid line distortions and banding.

Things to Be Aware Of

  • Experimental or specialized models (e.g., diffusion-based Starlight Sharp, HyPerion SDR→HDR) can produce impressive results but may require more experimentation and previewing, as they can introduce halos, over-contrast, or exaggerated textures if pushed too hard.
  • Users note that frame interpolation can struggle with complex motion, rapid camera cuts, or thin, fast-moving objects (e.g., wires, netting), sometimes creating “warping” or ghosting; disabling interpolation across cuts and testing on motion-heavy segments is a common recommendation.
  • Community feedback often mentions high GPU and VRAM requirements, especially at 4K/8K and with frame interpolation; older or low-spec systems may experience crashes, very slow processing, or be unable to use certain models.
  • Processing times can be long: user benchmarks and reviews describe multi-hour renders for long 4K projects, with speed varying widely depending on GPU, model choice, and whether interpolation or SDR→HDR conversion is enabled.
  • Some users report that aggressive denoising and sharpening can make skin and organic textures look too smooth or “plastic,” particularly in beauty or narrative content; conservative settings and grain reintroduction are widely recommended to avoid this.
  • There are occasional reports of banding or posterization in skies and gradients when source material is heavily compressed or when output bit depth/codec choices are suboptimal; using higher bit-depth formats and adding light grain can mitigate this.
  • Positive user feedback themes:
      • High perceived quality of upscaling, particularly for legacy SD/HD sources going to 4K.
      • Strong noise reduction and artifact cleanup without destroying too much detail when tuned carefully.
      • Professional-grade slow motion and frame interpolation that rival or exceed many other tools for demanding editors.
      • Valuable in hybrid workflows where it serves as a dedicated enhancement/upscale stage before grading and finishing.
  • Common concerns or negative feedback patterns:
      • Heavy hardware demands and long render times compared with some faster, lower-quality solutions.
      • Occasional instability or crashes on long jobs or with certain GPU/driver combinations, leading users to segment long projects.
      • Learning curve around selecting the right model and balancing denoise/sharpen parameters; inexperienced users can easily over-process footage.
      • Some reviewers and comparison articles note that while quality is excellent, there are alternatives that can be simpler or faster for casual users, even if they do not always match the same peak quality.

Limitations

  • High computational and hardware requirements: achieving the best results, especially at 4K/8K with advanced models and frame interpolation, typically requires a modern, powerful GPU with substantial VRAM and can involve long processing times.
  • Not universally optimal for all content types: highly stylized animation, extreme compression artifacts, or very noisy low-light footage may still show artifacts or unnatural textures, and careful tuning is required to avoid over-processing.
  • Frame interpolation and advanced enhancement models can produce artifacts in challenging motion or scene-change scenarios, making the tool less suitable for fully automated “fire-and-forget” batch processing without human review, particularly in critical professional deliveries.

Pricing

Pricing Type: Dynamic

Price = duration of the input video (from the input URL) × unit price
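Since pricing is dynamic and proportional to the input video's duration, the cost of a job can be estimated up front. This is a minimal sketch; the per-second unit price used below is a placeholder, not the actual rate for `topaz-upscale-video`.

```python
def estimate_price(duration_seconds: float, unit_price_per_second: float) -> float:
    """Estimate cost as input duration multiplied by the per-second unit price.

    unit_price_per_second is a placeholder value -- the real unit price is
    set by the platform and may differ.
    """
    if duration_seconds <= 0:
        raise ValueError("duration must be positive")
    return duration_seconds * unit_price_per_second

# e.g., a 90-second clip at a hypothetical $0.002/s costs $0.18:
cost = estimate_price(90, 0.002)
```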