mureka-generate-instrumental

MUREKA

Mureka Generate Instrumental is a music generation model that produces instrumental tracks without vocals.

Avg Run Time: 120.000s

Model Slug: mureka-generate-instrumental

Example Result

{
  "output": [
    {
      "duration": 210790,
      "flac_url": "https://storage.googleapis.com/1019uploads/979ebb1f-d5e3-4a46-8b4f-2f35cd4a3153.flac",
      "id": "119183415181313",
      "url": "https://storage.googleapis.com/1019uploads/cd584f25-f815-4c41-9a8d-66d45480a352.mp3",
      "wav_url": "https://storage.googleapis.com/1019uploads/18c60ba0-e9e3-48f3-95a1-ae98cd4048c4.wav"
    },
    {
      "duration": 242670,
      "flac_url": "https://storage.googleapis.com/1019uploads/9264de48-b732-49e7-bd4d-fe2eb0895c24.flac",
      "id": "119183415181314",
      "index": 1,
      "url": "https://storage.googleapis.com/1019uploads/aff9492f-3632-4bca-8dc4-115a4a222e58.mp3",
      "wav_url": "https://storage.googleapis.com/1019uploads/d31f4118-dffc-4ed1-8296-8d40aedc9a97.wav"
    }
  ]
}
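The duration values in this output appear to be in milliseconds (210790 ≈ 3 minutes 31 seconds), and each track exposes MP3, WAV, and FLAC URLs. A minimal sketch of walking such a response, assuming that millisecond interpretation, might look like this (the example URLs are placeholders):

```python
def summarize(output: list[dict]) -> list[tuple[str, float]]:
    """Return (wav_url, duration_in_seconds) pairs for each generated track."""
    return [(track["wav_url"], track["duration"] / 1000.0) for track in output]

# Placeholder data mirroring the shape of the example result above.
example = [
    {"duration": 210790, "wav_url": "https://storage.googleapis.com/example/a.wav"},
    {"duration": 242670, "wav_url": "https://storage.googleapis.com/example/b.wav"},
]
pairs = summarize(example)
# pairs[0][1] -> 210.79 seconds (roughly 3.5 minutes)
```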
Each execution costs $0.0600. With $1 you can run this model about 16 times.

API & SDK

Create a Prediction

Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
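A sketch of that request using only the Python standard library is shown below. The endpoint URL, header name, and payload shape are assumptions for illustration; consult the Eachlabs API reference for the exact values.

```python
import json
import urllib.request

# Hypothetical endpoint -- check the Eachlabs API reference for the real URL.
API_URL = "https://api.eachlabs.ai/v1/prediction"

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build a POST request that creates a new prediction (payload shape assumed)."""
    payload = {
        "model": "mureka-generate-instrumental",
        "input": {"prompt": prompt},
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "X-API-Key": api_key,  # assumed header name
        },
        method="POST",
    )

# Sending the request would return a prediction ID for the next step:
# with urllib.request.urlopen(build_request("YOUR_API_KEY", "mellow lo-fi, 90 BPM")) as resp:
#     prediction_id = json.load(resp)["id"]
```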

Get Prediction Result

Poll the prediction endpoint with the prediction ID until the result is ready. The API uses long-polling, so you'll need to repeatedly check until you receive a success status.
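A minimal polling loop might look like the following. The result URL, auth header, and `status` field name are assumptions; the real names live in the Eachlabs API reference.

```python
import json
import time
import urllib.request

# Hypothetical result endpoint -- confirm the real path in the API reference.
RESULT_URL = "https://api.eachlabs.ai/v1/prediction/{id}"

def is_ready(result: dict) -> bool:
    """True once the API reports success (status field name assumed)."""
    return result.get("status") == "success"

def poll_prediction(prediction_id: str, api_key: str,
                    interval_s: float = 5.0, timeout_s: float = 300.0) -> dict:
    """Fetch the prediction repeatedly until it is ready or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            RESULT_URL.format(id=prediction_id),
            headers={"X-API-Key": api_key},  # assumed header name
        )
        with urllib.request.urlopen(req) as resp:
            result = json.load(resp)
        if is_ready(result):
            return result
        time.sleep(interval_s)
    raise TimeoutError(f"prediction {prediction_id} not ready after {timeout_s}s")
```

Since average run time is around 120 seconds, a 5-second polling interval with a timeout comfortably above two minutes is a reasonable starting point.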

Readme

Table of Contents
Overview
Technical Specifications
Key Considerations
Tips & Tricks
Capabilities
What Can I Use It For?
Things to Be Aware Of
Limitations

Overview

mureka-generate-instrumental — Text to Audio AI Model

mureka-generate-instrumental is a specialized music generation model that creates instrumental tracks from text descriptions, eliminating the need for vocal processing or post-production vocal removal. Developed by Mureka as part of their advanced music-generation family, this model solves a critical problem for content creators, producers, and developers who need copyright-safe, royalty-free instrumental music without the complexity of vocal editing or stem separation workflows.

Unlike general-purpose music generation tools that produce mixed vocal and instrumental tracks, mureka-generate-instrumental focuses exclusively on instrumental composition. This targeted approach delivers cleaner outputs optimized for background music, soundtracks, and production use cases where instrumental-only audio is the end goal. The model leverages Mureka's MusiCoT technology—a music reasoning framework that plans song structure (verse, chorus, bridge) before generating audio—ensuring coherent, musically logical compositions rather than random audio sequences.

The primary differentiator is its ability to generate structured, multi-instrument arrangements directly from natural language prompts. Users describe their desired instrumental style, mood, tempo, and instrumentation, and the model produces full production-ready tracks in minutes—a capability that significantly reduces the time required for AI music-generation workflows compared to models requiring post-generation stem separation or vocal removal.

Technical Specifications

What Sets mureka-generate-instrumental Apart

Structured Music Reasoning: mureka-generate-instrumental uses Chain-of-Thought (CoT) technology to plan song architecture before synthesis. This means the model decides on time signatures, BPM, drum patterns, bass lines, and synth arrangements before generating audio—resulting in compositions with proper verse-chorus-bridge structure rather than ambient loops. For developers building AI music APIs or content creators needing broadcast-quality instrumental tracks, this architectural planning eliminates the need for manual editing or regeneration cycles.

Multi-Instrument Stem Export: The model supports direct export of individual instrument stems (drums, bass, melody, synths, pads) in professional formats compatible with DAWs like Ableton, FL Studio, and Logic Pro. This capability is essential for producers who need granular control over mixing and mastering—they can adjust individual instrument levels, apply effects, or re-record specific sections without regenerating the entire composition.

Multi-Modal Input for Instrumental Composition: Beyond text prompts, mureka-generate-instrumental accepts reference audio files, allowing users to upload an instrumental track and generate new music that mirrors its style, instrumentation, and energy level. This reference-based generation is particularly valuable for content creators maintaining consistent sonic branding across multiple projects or developers building AI music-generation APIs that require style consistency.

Technical Specifications: The model generates tracks up to 240 seconds in length, exports in MP3 and WAV formats, and typically completes generation within 1-2 minutes. Output quality is optimized for professional use, with support for various BPM ranges and time signatures. Full commercial rights are included on all paid plans, making generated tracks suitable for monetization, streaming, and commercial projects.
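Given the 240-second cap described above, a client might want to validate a requested track length before submitting a prediction. This is a hypothetical helper, not part of any official SDK:

```python
MAX_DURATION_S = 240  # maximum track length per the specification above

def validate_duration(requested_s: int) -> int:
    """Clamp a requested track length to the model's supported range."""
    if requested_s < 1:
        raise ValueError("duration must be a positive number of seconds")
    return min(requested_s, MAX_DURATION_S)
```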

Key Considerations

  • Use detailed prompts specifying genre, tempo, and instruments (e.g., "less drums, more acoustic guitar, slower tempo") to improve genre accuracy and achieve a produced feel
  • Toggle instrumental mode explicitly to avoid vocal generation, and set the duration upfront when you need a specific track length
  • Balance quality and speed by starting with the default V7.5 model and iterating with single tweaks rather than overhauling the prompt
  • Take advantage of the increased character limits (1,000 for prompts, 3,000 for lyrics) to include complex instructions without truncation
  • Avoid vague prompts; include reference audio for melody guidance to keep results consistent with your creative direction
  • Watch for predictable outputs in early generations, and switch parameters such as vocal gender (even on instrumental tracks) or model version for variety

Tips & Tricks

How to Use mureka-generate-instrumental on Eachlabs

Access mureka-generate-instrumental through Eachlabs' Playground or API. Provide a text description of your desired instrumental style, mood, tempo, and instrumentation—or upload a reference audio file for style matching. The model accepts parameters including BPM, time signature, and instrument selection. Generation completes in 1-2 minutes, delivering MP3 and WAV exports plus individual instrument stems for professional DAW integration. Full commercial licensing is included, enabling immediate use in production, streaming, and monetized projects.


Capabilities

  • Generates high-fidelity instrumental tracks across genres like EDM, rock, R&B, and cinematic, with a structured, produced feel
  • Supports controllable generation via text prompts, duration settings, and optional reference audio for guided melody and style
  • Produces versatile outputs including full tracks, loops, and separable stems for further editing
  • Handles extended prompt lengths for detailed instructions, enabling precise control over tempo, instruments, and mix balance
  • Offers creative variety as an "alternate flavor" when other generations are too predictable, with progressive model improvements for better acoustic detail

What Can I Use It For?

Use Cases for mureka-generate-instrumental

Content Creators and Streamers: Streamers and video creators need copyright-free background music that won't trigger content ID claims. mureka-generate-instrumental enables creators to generate unique instrumental tracks by typing prompts like "uplifting lo-fi hip-hop with vinyl crackle and jazz chords" or "cinematic orchestral underscore with strings and French horns." Each generated track is royalty-free and commercially licensed, eliminating licensing friction for YouTube, Twitch, and podcast workflows.

Music Producers and Composers: Professional producers use mureka-generate-instrumental as a rapid ideation tool. Rather than starting from a blank DAW session, they generate instrumental foundations using text or reference tracks, then import stems into their production software for mixing, arrangement refinement, and creative layering. This workflow reduces composition time by 30-70% compared to writing from scratch, while maintaining full creative control over the final output.

Developers Building AI Music APIs: Developers integrating music generation into applications—such as video editing platforms, game engines, or marketing automation tools—benefit from mureka-generate-instrumental's structured output and API accessibility. The model's ability to generate consistent, multi-instrument arrangements from simple text prompts makes it ideal for building white-label music-generation features without requiring deep music production expertise.

Marketing and Advertising Teams: Brands creating product videos, commercials, or social media content need instrumental music that matches specific moods and brand aesthetics. Teams can generate custom instrumental tracks for "luxury product showcase with ambient electronic soundscape" or "energetic fitness montage with driving percussion and synth bass," then export stems for final mixing with voiceovers or sound effects in post-production.

Things to Be Aware Of

  • Model updates frequently improve defaults (e.g., V7.5 over V7), so use the latest version for the best output quality and efficiency
  • Supports unlimited parallel generations on some endpoints with no rate limits, ideal for batch testing prompts
  • Users report strong genre accuracy and iterability, praising "produced feel" and easy tweaks for drums/guitar/tempo
  • Resource-friendly for iterative use, with stem downloads enabling post-processing without heavy recomputation
  • Positive feedback on handling longer contexts post-updates, reducing truncation issues in complex prompts
  • Common quirk: Early generations may feel predictable; resolve by tweaking one element (e.g., "brighter mix") or uploading references
  • Performance scales with prompt detail—users note better results from specific modifiers over generic descriptions

Limitations

  • Lacks detailed public disclosure on exact parameter counts or full training data, limiting deep architectural analysis
  • Primarily optimized for music generation; may not excel in non-instrumental or highly experimental edge cases without iteration
  • Potential for predictable outputs in initial runs, requiring multiple refinements or model/version switches for diversity