# Veo 3.1 | Text to Video

> The most advanced video generation model by Google DeepMind. Creates realistic scenes, natural sounds, and physically consistent motion from a single text prompt. Perfect for storytelling, cinematic ads, and short films.

## Model Information

- **Name**: veo3-1-text-to-video
- **Version**: 0.0.1
- **Category**: Text to Video
- **Output Type**: video
- **Average Response Time**: 85s
- **Updated**: 1/2/2026

## API Access

- [Interactive Demo](https://www.eachlabs.ai/ai-models/veo3-1-text-to-video) - Try the model with a web interface

## API Documentation

### Authentication

All API requests require authentication using your API key. Include your API key in the request headers:

```
X-API-Key: YOUR_API_KEY
```

### Base URL

```
https://api.eachlabs.ai/v1
```

### Endpoints

#### Create a Prediction

Send a POST request to create a new prediction. This returns a prediction ID that you'll use to check the result.

**Endpoint:** `POST https://api.eachlabs.ai/v1/prediction/`

**Headers:**

- `X-API-Key: YOUR_API_KEY`
- `Content-Type: application/json`

**Request Body:**

```json
{
  "model": "veo3-1-text-to-video",
  "version": "0.0.1",
  "input": {
    "prompt": "Two-person street interview in Paris. The host holds a small microphone and casually talks with a passerby near a café terrace with the Eiffel Tower in the background. Natural daylight, lively ambient city sounds — people chatting, distant traffic, light breeze.\n\nDialogue:\nHost: “Hey! Did you catch the update?”\nPerson: “Of course — Veo 3.1 just dropped on eachlabs! You have to check it out, it’s unreal.”",
    "aspect_ratio": "16:9",
    "duration": "8",
    "enhance_prompt": true,
    "auto_fix": true,
    "resolution": "720p",
    "generate_audio": true
  },
  "webhook_url": ""
}
```

**Response:**

```json
{
  "status": "success",
  "message": "Prediction created successfully",
  "predictionID": "25cd93ae-5046-462d-85ec-7c2ec5710321"
}
```

Use the `predictionID` from this response to poll for results.
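The full code examples below poll once per second. For long-running video generations (average response time is 85s), a gentler polling loop with exponential backoff stays well within the rate limits while still returning promptly. This is an illustrative sketch, not part of an official SDK; the `backoff_delays` helper and the cap values are our own choices, and it uses only the GET prediction endpoint documented below.

```python
import json
import time
import urllib.request

BASE_URL = "https://api.eachlabs.ai/v1"
API_KEY = "YOUR_API_KEY"

def backoff_delays(base=1.0, cap=10.0):
    """Yield exponentially growing wait times (in seconds), capped at `cap`."""
    delay = base
    while True:
        yield delay
        delay = min(delay * 2, cap)

def poll_prediction(prediction_id):
    """Poll the prediction endpoint until it succeeds or errors,
    backing off between requests (1s, 2s, 4s, ... up to the cap)."""
    for delay in backoff_delays():
        req = urllib.request.Request(
            f"{BASE_URL}/prediction/{prediction_id}",
            headers={"X-API-Key": API_KEY},
        )
        with urllib.request.urlopen(req) as resp:
            result = json.load(resp)
        if result["status"] == "success":
            return result
        if result["status"] == "error":
            raise RuntimeError(f"Prediction failed: {result}")
        time.sleep(delay)
```

With an 85s average generation time, this issues roughly a dozen requests per prediction instead of ~85 with fixed one-second polling.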
#### Get Prediction Result

Poll the prediction endpoint with the predictionID until the result is ready.

**Endpoint:** `GET https://api.eachlabs.ai/v1/prediction/{PREDICTION_ID}`

**Headers:**

- `X-API-Key: YOUR_API_KEY`

### Code Examples

#### cURL

**Create Prediction:**

```bash
curl -X POST \
  -H "X-API-Key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "veo3-1-text-to-video",
    "version": "0.0.1",
    "input": {
      "prompt": "Two-person street interview in Paris. The host holds a small microphone and casually talks with a passerby near a café terrace with the Eiffel Tower in the background. Natural daylight, lively ambient city sounds — people chatting, distant traffic, light breeze.\n\nDialogue:\nHost: “Hey! Did you catch the update?”\nPerson: “Of course — Veo 3.1 just dropped on eachlabs! You have to check it out, it’s unreal.”",
      "aspect_ratio": "16:9",
      "duration": "8",
      "enhance_prompt": true,
      "auto_fix": true,
      "resolution": "720p",
      "generate_audio": true
    },
    "webhook_url": ""
  }' \
  https://api.eachlabs.ai/v1/prediction/
```

**Get Result:**

```bash
curl -X GET \
  -H "X-API-Key: YOUR_API_KEY" \
  https://api.eachlabs.ai/v1/prediction/{PREDICTION_ID}
```

#### Python

```python
import requests
import time

API_KEY = 'YOUR_API_KEY'
HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "veo3-1-text-to-video",
            "version": "0.0.1",
            "input": {
                "prompt": "Two-person street interview in Paris. The host holds a small microphone and casually talks with a passerby near a café terrace with the Eiffel Tower in the background. Natural daylight, lively ambient city sounds — people chatting, distant traffic, light breeze.\n\nDialogue:\nHost: “Hey! Did you catch the update?”\nPerson: “Of course — Veo 3.1 just dropped on eachlabs! You have to check it out, it’s unreal.”",
                "aspect_ratio": "16:9",
                "duration": "8",
                "enhance_prompt": True,
                "auto_fix": True,
                "resolution": "720p",
                "generate_audio": True
            },
            "webhook_url": ""
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]

def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again

# Usage
try:
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
```

#### JavaScript/Node.js

```javascript
import axios from 'axios';

const API_KEY = 'YOUR_API_KEY';
const HEADERS = {
  'X-API-Key': API_KEY,
  'Content-Type': 'application/json'
};

async function createPrediction() {
  try {
    const response = await axios.post(
      'https://api.eachlabs.ai/v1/prediction/',
      {
        model: 'veo3-1-text-to-video',
        version: '0.0.1',
        input: {
          prompt: "Two-person street interview in Paris. The host holds a small microphone and casually talks with a passerby near a café terrace with the Eiffel Tower in the background. Natural daylight, lively ambient city sounds — people chatting, distant traffic, light breeze.\n\nDialogue:\nHost: “Hey! Did you catch the update?”\nPerson: “Of course — Veo 3.1 just dropped on eachlabs! You have to check it out, it’s unreal.”",
          aspect_ratio: "16:9",
          duration: "8",
          enhance_prompt: true,
          auto_fix: true,
          resolution: "720p",
          generate_audio: true
        },
        webhook_url: ""
      },
      { headers: HEADERS }
    );

    const prediction = response.data;
    if (prediction.status !== 'success') {
      throw new Error(`Prediction failed: ${JSON.stringify(prediction)}`);
    }
    return prediction.predictionID;
  } catch (error) {
    console.error('Error creating prediction:', error);
    throw error;
  }
}

async function getPrediction(predictionId) {
  while (true) {
    try {
      const response = await axios.get(
        `https://api.eachlabs.ai/v1/prediction/${predictionId}`,
        { headers: HEADERS }
      );

      const result = response.data;
      if (result.status === 'success') {
        return result;
      } else if (result.status === 'error') {
        throw new Error(`Prediction failed: ${JSON.stringify(result)}`);
      }

      await new Promise(resolve => setTimeout(resolve, 1000));
    } catch (error) {
      console.error('Error getting prediction:', error);
      throw error;
    }
  }
}

// Usage
async function runPrediction() {
  try {
    const predictionId = await createPrediction();
    console.log(`Prediction created: ${predictionId}`);

    const result = await getPrediction(predictionId);
    console.log(`Output URL: ${result.output}`);
    console.log(`Processing time: ${result.metrics.predict_time}s`);
  } catch (error) {
    console.error(`Error: ${error.message}`);
  }
}

runPrediction();
```

#### Go

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"time"
)

const (
	APIKey  = "YOUR_API_KEY"
	BaseURL = "https://api.eachlabs.ai/v1"
)

type PredictionRequest struct {
	Model      string      `json:"model"`
	Version    string      `json:"version"`
	Input      interface{} `json:"input"`
	WebhookURL string      `json:"webhook_url"`
}

type PredictionResponse struct {
	Status       string `json:"status"`
	PredictionID string `json:"predictionID"`
}

type PredictionResult struct {
	Status  string            `json:"status"`
	Output  string            `json:"output"`
	Metrics PredictionMetrics `json:"metrics"`
}

type PredictionMetrics struct {
	PredictTime float64 `json:"predict_time"`
}

func createPrediction() (string, error) {
	reqBody := PredictionRequest{
		Model:   "veo3-1-text-to-video",
		Version: "0.0.1",
		Input: map[string]interface{}{
			"prompt":         "Two-person street interview in Paris. The host holds a small microphone and casually talks with a passerby near a café terrace with the Eiffel Tower in the background. Natural daylight, lively ambient city sounds — people chatting, distant traffic, light breeze.\n\nDialogue:\nHost: “Hey! Did you catch the update?”\nPerson: “Of course — Veo 3.1 just dropped on eachlabs! You have to check it out, it’s unreal.”",
			"aspect_ratio":   "16:9",
			"duration":       "8",
			"enhance_prompt": true,
			"auto_fix":       true,
			"resolution":     "720p",
			"generate_audio": true,
		},
		WebhookURL: "",
	}

	jsonData, err := json.Marshal(reqBody)
	if err != nil {
		return "", err
	}

	req, err := http.NewRequest("POST", BaseURL+"/prediction/", bytes.NewBuffer(jsonData))
	if err != nil {
		return "", err
	}

	req.Header.Set("X-API-Key", APIKey)
	req.Header.Set("Content-Type", "application/json")

	client := &http.Client{}
	resp, err := client.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}

	var predResp PredictionResponse
	if err := json.Unmarshal(body, &predResp); err != nil {
		return "", err
	}

	if predResp.Status != "success" {
		return "", fmt.Errorf("prediction failed: %s", string(body))
	}

	return predResp.PredictionID, nil
}

func getPrediction(predictionID string) (*PredictionResult, error) {
	for {
		req, err := http.NewRequest("GET", fmt.Sprintf("%s/prediction/%s", BaseURL, predictionID), nil)
		if err != nil {
			return nil, err
		}

		req.Header.Set("X-API-Key", APIKey)

		client := &http.Client{}
		resp, err := client.Do(req)
		if err != nil {
			return nil, err
		}

		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			return nil, err
		}

		var result PredictionResult
		if err := json.Unmarshal(body, &result); err != nil {
			return nil, err
		}

		if result.Status == "success" {
			return &result, nil
		} else if result.Status == "error" {
			return nil, fmt.Errorf("prediction failed: %s", string(body))
		}

		time.Sleep(1 * time.Second)
	}
}

func main() {
	predictionID, err := createPrediction()
	if err != nil {
		fmt.Printf("Error creating prediction: %v\n", err)
		return
	}
	fmt.Printf("Prediction created: %s\n", predictionID)

	result, err := getPrediction(predictionID)
	if err != nil {
		fmt.Printf("Error getting prediction: %v\n", err)
		return
	}

	fmt.Printf("Output URL: %s\n", result.Output)
	fmt.Printf("Processing time: %.2fs\n", result.Metrics.PredictTime)
}
```

### Response Format

#### Create Prediction Response

```json
{
  "status": "success",
  "message": "Prediction created successfully",
  "predictionID": "25cd93ae-5046-462d-85ec-7c2ec5710321"
}
```

#### Get Prediction Response (Success)

```json
{
  "status": "success",
  "predictionID": "25cd93ae-5046-462d-85ec-7c2ec5710321",
  "output": "https://output-url.com/result",
  "metrics": {
    "predict_time": 2.5
  }
}
```

#### Get Prediction Response (Processing)

```json
{
  "status": "processing",
  "predictionID": "25cd93ae-5046-462d-85ec-7c2ec5710321",
  "message": "Prediction is being processed"
}
```

#### Error Response

```json
{
  "status": "error",
  "message": "Error description",
  "details": "Additional error details"
}
```

### Rate Limits

- Maximum 100 requests per minute per API key
- Maximum 10 concurrent predictions per API key
- Webhook timeout: 30 seconds

### Error Handling

Common HTTP status codes:

- `200` - Success
- `400` - Bad Request (invalid input parameters)
- `401` - Unauthorized (invalid API key)
- `429` - Too Many Requests (rate limit exceeded)
- `500` - Internal Server Error

### Webhooks (Optional)

You can provide a webhook URL to receive prediction results automatically:

```json
{
  "model": "veo3-1-text-to-video",
  "version": "0.0.1",
  "input": {
    "prompt": "Two-person street interview in Paris. The host holds a small microphone and casually talks with a passerby near a café terrace with the Eiffel Tower in the background. Natural daylight, lively ambient city sounds — people chatting, distant traffic, light breeze.\n\nDialogue:\nHost: “Hey! Did you catch the update?”\nPerson: “Of course — Veo 3.1 just dropped on eachlabs! You have to check it out, it’s unreal.”",
    "aspect_ratio": "16:9",
    "duration": "8",
    "enhance_prompt": true,
    "auto_fix": true,
    "resolution": "720p",
    "generate_audio": true
  },
  "webhook_url": "https://your-domain.com/webhook"
}
```

The webhook will receive a POST request with the prediction result when complete.

#### Webhook Response Format

When your webhook URL is called, it will receive a POST request with the following payload:

**Success Response:**

```json
{
  "error": "",
  "exec_id": "25cd93ae-5046-462d-85ec-7c2ec5710321",
  "flow_id": "",
  "output": "Output URL",
  "status": "succeeded"
}
```

**Error Response:**

```json
{
  "error": "Error description here",
  "exec_id": "25cd93ae-5046-462d-85ec-7c2ec5710321",
  "flow_id": "",
  "output": "",
  "status": "failed"
}
```

**Fields:**

- `exec_id`: The prediction ID (same as `predictionID` from the create response)
- `flow_id`: Flow identifier (empty for single model predictions)
- `output`: URL to the result file or output data
- `status`: Either "succeeded" or "failed"
- `error`: Error message if status is "failed", empty string if succeeded

## Input Parameters

### Prompt

- **Type**: string
- **Component**: input
- **Required**: Yes
- **Description**: The text prompt describing the video you want to generate
- **Default**:
- **Minimum**: 0
- **Maximum**: 0

### Auto Fix

- **Type**: boolean
- **Component**: checkbox
- **Required**: No
- **Description**: Whether to automatically attempt to fix prompts that fail content policy or other validation checks by rewriting them
- **Default**: True
- **Minimum**: 0
- **Maximum**: 0

### Duration

- **Type**: integer
- **Component**: select
- **Required**: No
- **Description**: The duration of the generated video in seconds
- **Default**: 8
- **Minimum**: 0
- **Maximum**: 0
- **Options**: "4,6,8"

### Generate Audio

- **Type**: boolean
- **Component**: checkbox
- **Required**: No
- **Description**: Whether to generate audio for the video. If false, 33% fewer credits are used.
- **Default**: True
- **Minimum**: 0
- **Maximum**: 0

### Aspect Ratio

- **Type**: string
- **Component**: select
- **Required**: No
- **Description**: The aspect ratio of the generated video. If it is set to 1:1, the video will be outpainted.
- **Default**: 16:9
- **Minimum**: 0
- **Maximum**: 0
- **Options**: "9:16,16:9"

### Resolution

- **Type**: string
- **Component**: select
- **Required**: No
- **Description**: The resolution of the generated video
- **Default**: 720p
- **Minimum**: 0
- **Maximum**: 0
- **Options**: "720p,1080p"

### Seed

- **Type**: integer
- **Component**: input
- **Required**: No
- **Description**: A seed to use for the video generation
- **Default**:
- **Minimum**: 0
- **Maximum**: 0

### Negative Prompt

- **Type**: string
- **Component**: input
- **Required**: No
- **Description**: A negative prompt to guide the video generation
- **Default**:
- **Minimum**: 0
- **Maximum**: 0

### Enhance Prompt

- **Type**: boolean
- **Component**: checkbox
- **Required**: No
- **Description**: Whether to automatically enhance the prompt before video generation
- **Default**: True
- **Minimum**: 0
- **Maximum**: 0

## Pricing

Pricing information not available for this model.

## Documentation

Veo 3.1 is a state-of-the-art AI video generation model developed by Google DeepMind. It is designed to create realistic scenes, natural sounds, and physically consistent motion from a single text prompt, making it ideal for storytelling, cinematic ads, and short films. The model builds upon its predecessor, Veo 3, by enhancing audio capabilities, narrative control, and realism, particularly in capturing true-to-life textures. Veo 3.1 supports features like video extension, frame-specific generation, and image-based direction, allowing users to guide the content of generated videos with up to three reference images.

The underlying architecture of Veo 3.1 leverages advanced generative AI technology to combine high performance with enterprise-grade reliability. It is part of Google's efforts to empower creatives with more artistic control over audio and visual elements. The model's ability to generate synchronized audio, including speech, ambiance, and music, further enhances its cinematic capabilities.

What makes Veo 3.1 unique is its ability to produce high-fidelity videos with stunning realism, supporting resolutions up to 1080p. It is accessible through the Gemini API, allowing developers to integrate it programmatically into various applications.
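If you use the optional webhook delivery described in the API section above, it helps to validate the payload before downloading the result. The helper below is an illustrative sketch (the function name and error types are our own); the field names follow the documented webhook format (`status`, `output`, `exec_id`, `error`).

```python
def handle_webhook(payload: dict) -> str:
    """Validate an Eachlabs webhook payload and return the output URL.

    Raises RuntimeError for failed predictions and ValueError for
    payloads with an unexpected status.
    """
    status = payload.get("status")
    if status == "succeeded":
        # Successful predictions carry the result URL in "output".
        return payload["output"]
    if status == "failed":
        raise RuntimeError(
            f"Prediction {payload.get('exec_id')} failed: {payload.get('error')}"
        )
    raise ValueError(f"Unexpected webhook status: {status!r}")
```

Wire this into the POST handler of whatever web framework serves your `webhook_url`; remember the documented 30-second webhook timeout, so defer any heavy processing (such as downloading the video) to a background task.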

### Technical Specifications

### Key Considerations

### Capabilities

### Use Cases

### Tips and Tricks

### Things to Be Aware Of

### Limitations

## Example Usage

```json
{
  "prompt": "Two-person street interview in Paris. The host holds a small microphone and casually talks with a passerby near a café terrace with the Eiffel Tower in the background. Natural daylight, lively ambient city sounds — people chatting, distant traffic, light breeze.\n\nDialogue:\nHost: “Hey! Did you catch the update?”\nPerson: “Of course — Veo 3.1 just dropped on eachlabs! You have to check it out, it’s unreal.”",
  "aspect_ratio": "16:9",
  "duration": "8",
  "enhance_prompt": true,
  "auto_fix": true,
  "resolution": "720p",
  "generate_audio": true
}
```

## Hardware Requirements

## Related Models

- [Kling | v2.6 | Pro | Text to Video](https://www.eachlabs.ai/ai-models/kling-v2-6-pro-text-to-video)
- [Ovi | Text to Video](https://www.eachlabs.ai/ai-models/ovi-text-to-video)
- [Ltx v2 | Text to Video](https://www.eachlabs.ai/ai-models/ltx-v-2-text-to-video)
- [Seedance V1.5 | Pro | Text to Video](https://www.eachlabs.ai/ai-models/seedance-v1-5-pro-text-to-video)

## Support

- [Eachlabs Documentation](https://api.eachlabs.ai/v1/docs) - Platform documentation and guides
- [API Reference](https://api.eachlabs.ai/v1/docs?spec=models) - Complete API documentation
- [Contact Support](https://www.eachlabs.ai/contact) - Get help with integration and usage