GFPGAN

GFPGAN is an AI photo-enhancement model that improves overall photo quality and resolution, with a particular focus on restoring faces.

Model Information

  • Hardware: L40S (45GB)
  • Interface: REST API (fast inference)
  • Response Time: ~5 sec
  • Status: Active
  • Version: 0.0.1
  • Updated: 10 days ago

Prerequisites

  • Create an API Key from the Eachlabs Console
  • Install the required dependencies for your chosen language (e.g., requests for Python); a minimal setup sketch follows this list
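
A minimal setup sketch in Python; the environment-variable name EACHLABS_API_KEY is just an example, not an official convention:

# Install the HTTP client used throughout the examples:
#   pip install requests
import os

# Keep the key out of source code; the variable name is illustrative only.
API_KEY = os.environ.get("EACHLABS_API_KEY", "YOUR_API_KEY")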

API Integration Steps

1. Create a Prediction

Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.

import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "gfpgan",
            "version": "0.0.1",
            "input": {
                "img": "your_file.image/jpeg",
                "scale": "2",
                "version": "v1.4"
            }
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]

2. Get Prediction Result

Poll the prediction endpoint with the prediction ID until the result is ready. Results are returned asynchronously, so you'll need to check repeatedly until you receive a success status.

def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again

3. Complete Example

Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.

try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")

Additional Information

  • The API uses a two-step process: create prediction and poll for results
  • Response time: ~5 seconds
  • Rate limit: 60 requests/minute
  • Concurrent requests: 10 maximum
  • Poll the prediction status endpoint until completion (a bounded polling sketch follows this list)
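
With the 60 requests/minute rate limit in mind, a bounded variant of get_prediction that gives up after a timeout can be useful. This sketch reuses HEADERS and the imports from the earlier examples; the function name, timeout, and interval values are arbitrary choices, not API requirements:

def get_prediction_with_timeout(prediction_id, timeout=60, interval=2):
    # Give up after `timeout` seconds instead of polling forever.
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        if result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(interval)  # A 2-second interval stays well under 60 requests/minute
    raise TimeoutError(f"Prediction {prediction_id} did not finish within {timeout}s")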

Overview

GFPGAN (Generative Facial Prior GAN) is a state-of-the-art AI model developed for high-quality face restoration in real-world scenarios. By leveraging generative adversarial networks (GANs) and facial priors, GFPGAN excels at restoring low-resolution, blurred, or damaged facial images while preserving high fidelity and naturalness. Its versatility makes it a popular choice for photo enhancement, historical image restoration, and creative applications.

Technical Specifications

Model Architecture:

  • Built on GAN architecture with pre-trained facial prior integration.
  • Refined loss functions to balance restoration and fidelity.

Input Requirements:

  • Formats: JPEG, PNG.
  • Resolution: Recommended input is up to 512x512 for optimal performance (a client-side resize sketch follows this list).
  • For best results, use images with clear facial regions and minimal obstructions.
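
For example, images can be downscaled before upload so the longest side stays within the recommended 512px. The sketch below uses Pillow as an assumption on my part; the API itself does not require this step:

from PIL import Image

def prepare_input(path, max_side=512, out_path="prepared.jpg"):
    # Downscale, preserving aspect ratio, so the longest side is at most max_side.
    img = Image.open(path)
    img.thumbnail((max_side, max_side), Image.LANCZOS)
    img.convert("RGB").save(out_path, "JPEG")
    return out_path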

Output Features:

  • Restored images maintain original context and backgrounds.
  • Faces are enhanced with reconstructed features.

Key Considerations

Over-Restoration:

  • In some cases, the restored face might deviate slightly from the original.

Context Preservation:

  • Non-facial regions are minimally processed, so make sure the background already meets the desired quality before submitting the image.

Tips & Tricks

Fine-Tune Settings:

  • Adjust restoration strength to balance detail enhancement and natural appearance.

Pre-Processing:

  • Crop images to focus on faces for better results; a simple cropping sketch follows.
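
A plain center crop, sketched with Pillow; in practice a face detector would locate the crop region more precisely, and neither step is part of this API:

from PIL import Image

def center_crop(path, out_path="face_crop.jpg"):
    # Crop the largest centered square as a rough stand-in for a face-focused crop.
    img = Image.open(path)
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    img.crop((left, top, left + side, top + side)).convert("RGB").save(out_path, "JPEG")
    return out_path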

Scale

  • 1.0: Minimal enhancement, retains most original features.
  • 1.5: Balanced restoration for moderate improvements.
  • 2.0: High-level enhancement, ideal for heavily degraded images.
  • 2.5+: Aggressive restoration, may introduce artifacts on high-quality inputs.
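
The scale value is passed through the prediction input shown earlier. A hypothetical parameterized variant of create_prediction (the function name and signature are my own; the model, version, and input field names come from the API example above, and HEADERS is reused from it):

def create_prediction_with_scale(img_url, scale=2.0, gfpgan_version="v1.4"):
    # Identical to create_prediction, but exposes the scale as an argument.
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "gfpgan",
            "version": "0.0.1",
            "input": {
                "img": img_url,
                "scale": str(scale),  # e.g. "1.5" or "2.0"
                "version": gfpgan_version
            }
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]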

Capabilities

Creative Uses:

  • Enhances AI-generated faces or artistic projects with added realism.

Historical Photo Repair:

  • Revives old photographs for personal or archival purposes.

Detail Enhancement:

  • Recovers textures like skin, hair, and eyes with impressive clarity.

Face Restoration:

  • Repairs blurred, damaged, or low-quality facial images.

What can I use it for?

Media Projects:

  • Restore archival images or enhance visuals for creative content.

Historical Preservation:

  • Digitally repair vintage photos for museums or personal collections.

AI Art Improvement:

  • Use as a finishing tool for AI-generated images to add detail and polish.

Personal Photo Enhancement:

  • Improve selfies, family portraits, and treasured memories.

Things to be aware of

Restore Vintage Photos:

  • Test the model on old or damaged images to witness its transformative abilities.

Creative Enhancements:

  • Apply the model to artistic or AI-generated portraits for added depth.


Limitations

Generalization:

  • May struggle with extreme distortions or non-human faces.

Color Consistency:

  • Slight color variations may require manual correction.

Background Restoration:

  • Focuses primarily on faces, with less emphasis on backgrounds.

Output Format: PNG

Related AI Models

  • Stable Diffusion Inpainting (stable-diffusion-inpainting): Image to Image
  • SDXL Controlnet (sdxl-controlnet): Image to Image
  • Controlnet Realistic Vision V2.0 (controlnet-1-1-x-realistic-vision-v2-0): Image to Image
  • Eachlabs Face Swap (each-faceswap-v1): Image to Image