Couple Image Generation

eachlabs-couple

Couple Image Generation by Eachlabs is an image-to-image model that merges two face images into a single couple portrait, guided by a text prompt.

Fast Inference
REST API

Model Information

  • Response Time: ~40 sec
  • Status: Active
  • Version: 0.0.1
  • Updated: 3 days ago

Prerequisites

  • Create an API Key from the Eachlabs Console
  • Install the required dependencies for your chosen language (e.g., requests for Python)

API Integration Steps

1. Create a Prediction

Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.

import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "eachlabs-couple",
            "version": "0.0.1",
            "input": {
                "prompt": "a couple at a Christmas market, looking at the camera",
                "reference_image": "https://storage.googleapis.com/magicpoint/models/women.png",
                "input_image": "https://storage.googleapis.com/magicpoint/models/man.png"
            }
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]

2. Get Prediction Result

Poll the prediction endpoint with the prediction ID until the result is ready. The API is asynchronous, so you'll need to check the status repeatedly until it reports success (or error).

def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
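If you prefer a bounded wait instead of polling indefinitely, a variant with an overall timeout can be useful. This is a minimal sketch building on the function above; the max_wait_seconds parameter, the 2-second interval, and the TimeoutError behavior are illustrative choices, not part of the API.

def get_prediction_with_timeout(prediction_id, max_wait_seconds=120):
    # Poll as above, but give up after max_wait_seconds (illustrative value)
    deadline = time.time() + max_wait_seconds
    while time.time() < deadline:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(2)  # Longer interval between checks to reduce request volume
    raise TimeoutError(f"Prediction {prediction_id} did not finish within {max_wait_seconds}s")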

3. Complete Example

Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.

try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")

Additional Information

  • The API uses a two-step process: create a prediction, then poll for the result
  • Response time: ~40 seconds
  • Rate limit: 60 requests/minute (a client-side throttling sketch follows this list)
  • Concurrent requests: 10 maximum
  • Poll the prediction status endpoint until completion
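Given the 60 requests/minute limit above, a simple client-side throttle can keep batch workloads within bounds. The sketch below is an assumption-laden illustration (one sequential process, limit applied per API key), not an official SDK feature; adjust MIN_INTERVAL if your account's limits differ.

import time

MIN_INTERVAL = 60 / 60  # seconds between calls for a 60 requests/minute limit
_last_call = 0.0

def throttled(func, *args, **kwargs):
    # Delay the next API call so consecutive calls are at least MIN_INTERVAL apart
    global _last_call
    wait = MIN_INTERVAL - (time.time() - _last_call)
    if wait > 0:
        time.sleep(wait)
    _last_call = time.time()
    return func(*args, **kwargs)

# Usage (hypothetical): prediction_id = throttled(create_prediction)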

Overview

Couple Image Generation is an image-to-image model designed to merge two separate face images into a single, cohesive image. This model ensures a seamless blend while maintaining the unique characteristics of each individual. The process leverages deep learning techniques to create natural and aesthetically pleasing results.

Technical Specifications

  • Face Recognition & Blending: Advanced feature mapping ensures that each individual's distinct features are preserved while blending the images smoothly.
  • Resolution Optimization: The model processes images at a high resolution to maintain clarity and detail.
  • Adaptive Color Matching: The color tones of both images are adjusted automatically to ensure a natural-looking composition.

Key Considerations

  • The model is designed for merging two human faces; results may not be reliable for non-human objects.
  • Large variations in lighting, angles, or image quality between the two input images can affect the final result.
  • Some artifacts may appear if the input images contain extreme expressions, accessories, or occlusions.
  • Ethical considerations should be taken into account when using this model, ensuring responsible usage of generated images.

Tips & Tricks

  • prompt: Provide a concise textual description to influence the style of the generated image.
  • reference_image: This should be a high-quality face image with good lighting and clear facial details.
  • input_image: The main face image to be merged with the reference image; ensure that it is well-captured and similar in angle to the reference (an example input payload follows this list).
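Putting these three parameters together, a request input might look like the sketch below; the prompt text and image URLs are placeholders, not required values.

# Example "input" payload for the prediction request (placeholder values)
example_input = {
    "prompt": "a couple at a Christmas market, looking at the camera",
    "reference_image": "https://example.com/reference-face.png",  # high-quality, well-lit face
    "input_image": "https://example.com/input-face.png"           # main face, similar angle to the reference
}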

Additional Tips

  • Using images taken in similar lighting conditions enhances the blending quality.
  • Avoid extreme close-ups or low-resolution images to prevent loss of detail.
  • If merging images from different sources, pre-editing to match their brightness and contrast may help achieve a more natural look (a minimal pre-editing sketch follows this list).
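One way to do that pre-editing is sketched below with Pillow. The brightness-matching approach, the clamping range, and the file paths are illustrative assumptions; contrast can be matched similarly with ImageEnhance.Contrast.

# A minimal sketch of matching the brightness of two local images before upload,
# using Pillow (pip install Pillow). File paths are placeholders.
from PIL import Image, ImageEnhance, ImageStat

def mean_brightness(img):
    # Average luminance of the grayscale version of the image
    return ImageStat.Stat(img.convert("L")).mean[0]

reference = Image.open("reference.png")
candidate = Image.open("input.png")

# Scale the candidate's brightness toward the reference's mean luminance
factor = mean_brightness(reference) / max(mean_brightness(candidate), 1e-6)
factor = max(0.5, min(2.0, factor))  # clamp to avoid extreme adjustments
ImageEnhance.Brightness(candidate).enhance(factor).save("input_adjusted.png")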

Capabilities

  • Seamlessly merges two separate face images into a single, natural-looking image.
  • Retains individual facial characteristics while ensuring smooth transitions.
  • Adapts color tones and lighting conditions for better visual consistency.
  • Works with a variety of human face types and expressions.

What can I use it for?

  • Couple Portraits: Merge two separate images into one for creative and sentimental portraits.
  • Virtual Reunions: Create images of individuals who were not photographed together.
  • Photo Restoration & Editing: Blend old or damaged images with newer ones for restoration purposes.
  • Art & Visualization: Generate artistic compositions by merging different facial elements.

Things to be aware of

  • Experiment with different face angles to see how well the model blends unique features.
  • Use images with different lighting conditions and compare the results.
  • Try adding a textual prompt to slightly influence the final image style.
  • Merge images of family members to explore genetic similarities in a single portrait.

Limitations

  • Struggles with extreme facial angles, occlusions, and heavily distorted images.
  • May not always perfectly align features when there are significant differences in facial structure.
  • Performance may vary based on skin tones, lighting conditions, and image resolutions.

Output Format: JPG
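Because the output is returned as a URL to a JPG, it can be saved locally with a short snippet like the one below. This assumes result comes from get_prediction() as in the earlier example; the filename is a placeholder.

# Download the generated JPG from the output URL returned by the prediction
import requests

image_response = requests.get(result["output"], timeout=60)
image_response.raise_for_status()
with open("couple.jpg", "wb") as f:
    f.write(image_response.content)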

Related AI Models

  • Controlnet Realistic Vision V2.0 (controlnet-1-1-x-realistic-vision-v2-0): Image to Image
  • Stable Diffusion Inpainting (stable-diffusion-inpainting): Image to Image
  • Flux Depth Dev (flux-depth-dev): Image to Image
  • SDXL Controlnet (sdxl-controlnet): Image to Image