Instant ID Generate Avatar
instant-id
Instant ID generates realistic images of real people instantly.

Prerequisites
- Create an API Key from the Eachlabs Console
- Install the required dependencies for your chosen language (e.g., requests for Python)
API Integration Steps
1. Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
```python
import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key

HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "instant-id",
            "version": "0.0.1",
            "input": {
                "prompt": "a person",
                "negative_prompt": "your negative prompt here",
                "num_inference_steps": "4",
                "image": "your_file.image/jpeg",
                "pose_image": "your pose image here",
                "width": "640",
                "height": "640",
                "scheduler": "EulerDiscreteScheduler",
                "guidance_scale": "7.5",
                "pose_strength": "0.4",
                "canny_strength": "0.3",
                "enable_depth_controlnet": False,
                "depth_strength": "0.5",
                "enable_lcm": False,
                "seed": None,
                "disable_safety_checker": False,
                "enable_pose_controlnet": True,
                "enhance_nonface_region": True,
                "lcm_guidance_scale": "1.5",
                "ip_adapter_scale": "0.8",
                "controlnet_conditioning_scale": "0.8",
                "lcm_num_inference_steps": "5",
                "enable_canny_controlnet": False,
                "sdxl_weights": "stable-diffusion-xl-base-1.0"
            }
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]
```
2. Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready. The API does not push results to you, so you'll need to repeatedly check the status until you receive a success (or error) response.
```python
def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
```
3. Complete Example
Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.
```python
try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
```
Additional Information
- The API uses a two-step process: create prediction and poll for results
- Response time: ~32 seconds
- Rate limit: 60 requests/minute
- Concurrent requests: 10 maximum
- Poll the prediction status endpoint repeatedly until it reports completion
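Given the ~32-second response time and the 60 requests/minute rate limit, it can help to bound polling with a timeout instead of looping forever. The sketch below is illustrative only; `poll_until_done` and `fake_check` are hypothetical helpers, not part of the Eachlabs API:

```python
import time

def poll_until_done(check, interval=1.0, timeout=120.0):
    """Call check() until it returns a non-None result or the timeout expires.

    An interval of 1 second stays well under the 60 requests/minute limit.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = check()
        if result is not None:
            return result
        time.sleep(interval)
    raise TimeoutError("prediction did not complete in time")

# Simulated status check that succeeds on its third call.
calls = {"n": 0}
def fake_check():
    calls["n"] += 1
    return "success" if calls["n"] >= 3 else None

print(poll_until_done(fake_check, interval=0.01))  # prints success
```

In real use, `check` would wrap the `get_prediction` request and return the result dict once `result["status"]` is `"success"`.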
Overview
The Instant ID Generate Avatar model leverages advanced neural architectures to generate high-quality images by combining input prompts with pose control, depth control, and conditional data. With support for a wide range of configurations, it enables users to create personalized, high-fidelity outputs while maintaining flexibility in style and structure. Instant ID Generate Avatar is designed for intuitive usability and provides fine-grained control over the generation process through an array of configurable inputs.
Technical Specifications
Architecture: Combines diffusion-based models with multi-layer conditional nets for precise image generation with Instant ID Generate Avatar.
Pre-trained Weights: Includes advanced pre-trained weights such as stable-diffusion-xl-base-1.0 and dreamshaper-xl to ensure diverse artistic outputs.
Schedulers: Multiple scheduler options, such as DEISMultistepScheduler and EulerDiscreteScheduler, are available for precise control over inference quality and speed.
Fine-Tuning Controls: Parameters such as guidance_scale, ip_adapter_scale, and controlnet_conditioning_scale provide granular control over stylistic and compositional fidelity.
Key Considerations
Prompt Quality: Clear, descriptive prompts lead to better results. Use negative_prompt to explicitly exclude undesired features.
Pose and Depth Control: Ensure pose and depth input images align with the desired output structure for effective conditioning.
Safety Checker: Enabling or disabling the safety checker impacts output filtering. Use discretion when disabling it.
Tips & Tricks
General Tips for Instant ID Generate Avatar:
- Prompt: Use detailed and descriptive prompts for high-quality outputs. For instance, "a futuristic cityscape at sunset" yields better results than vague prompts.
- Negative Prompt: Refine outputs by excluding unwanted elements, such as "blurry details" or "oversaturated colors."
- Seed: Set a specific seed for reproducible results, or leave it unset for unique outputs.
Resolution:
- width and height: Opt for resolutions that match your intended use. For example:
- Low-resolution drafts: 640x640.
- Final render: 2048x2048 or higher (up to 4096x4096).
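Diffusion pipelines in the SDXL family generally expect dimensions divisible by 8, so validating resolutions before submitting a request can avoid wasted calls. This is a hypothetical pre-flight check, not part of the API; the range follows the draft and render sizes listed above:

```python
def validate_resolution(width, height, minimum=640, maximum=4096):
    """Hypothetical check: dimensions must be divisible by 8 and fall
    within the documented 640x640 draft to 4096x4096 render range."""
    for name, value in (("width", width), ("height", height)):
        if not minimum <= value <= maximum:
            raise ValueError(f"{name}={value} outside {minimum}-{maximum}")
        if value % 8:
            raise ValueError(f"{name}={value} is not divisible by 8")
    return True

print(validate_resolution(640, 640))  # prints True
```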
Style Selection:
- sdxl_weights: Experiment with different styles. Examples:
- Photorealistic: stable-diffusion-xl-base-1.0.
- Anime-inspired: anime-art-diffusion-xl.
Guidance and Scaling:
- guidance_scale: Higher values (20–50) enhance adherence to the prompt but may reduce creativity. Adjust based on desired style.
- ip_adapter_scale and controlnet_conditioning_scale: Use mid-range values (0.5–0.8) for balanced effects. Extreme values may overfit or underfit the conditioning input.
Controlnet Conditioning:
- pose_strength, canny_strength, and depth_strength:
- Recommended range: 0.5–0.8 for subtle yet effective conditioning.
- Use lower values (0.2–0.4) for minimal intervention.
Advanced Features for Instant ID Generate Avatar:
- Scheduler:
- For fast and smooth results, use DEISMultistepScheduler or DPMSolverMultistepScheduler.
- For precision, try EulerDiscreteScheduler.
- LCM Parameters:
- lcm_num_inference_steps: Set between 5–8 for a balance between speed and quality.
- lcm_guidance_scale: Values of 10–15 work best for controlled outputs.
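The tips above can be combined into a single input payload. The following is a hypothetical sketch: field names match the create-prediction example, and the values follow the ranges suggested in this section rather than API defaults:

```python
# Hypothetical input payload applying the recommendations above.
lcm_input = {
    "prompt": "a futuristic cityscape at sunset, highly detailed",
    "negative_prompt": "blurry details, oversaturated colors",
    "scheduler": "DEISMultistepScheduler",  # fast, smooth results
    "enable_lcm": True,
    "lcm_num_inference_steps": 6,   # 5-8 balances speed and quality
    "lcm_guidance_scale": 12,       # 10-15 for controlled outputs
    "pose_strength": 0.6,           # 0.5-0.8 for subtle conditioning
    "seed": 42,                     # fixed seed for reproducible results
}
assert 5 <= lcm_input["lcm_num_inference_steps"] <= 8
```

This dict would be passed as the `input` field of the create-prediction request body.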
Capabilities
High-Quality Output
The model excels in generating visually stunning images across diverse styles and resolutions.
Style Adaptability
Choose from a wide array of artistic weights to achieve desired aesthetic outcomes.
Precision Controls
Leverage pose, canny, and depth controls to craft outputs with fine detail and alignment.
What can I use it for?
Creative Projects: Design unique illustrations, concept art, or storyboards.
Visualization: Generate detailed visuals for presentations or promotional material.
Experimentation: Explore artistic styles and techniques using pre-trained weights.
Things to be aware of
Generate a photorealistic portrait using stable-diffusion-xl-base-1.0 with fine-tuned controlnet settings.
Experiment with anime-inspired outputs using anime-art-diffusion-xl.
Combine pose control with a well-defined prompt to create dynamic, action-packed scenes.
Adjust guidance_scale and pose_strength to observe how the model interprets intricate instructions.
Limitations
Performance Variability: Results may vary significantly based on input prompt and style selection.
Pose Limitations: Poorly aligned or low-quality pose images can reduce output fidelity.
Complex Scenes: Highly intricate prompts may result in unexpected outputs or artifacts.
Controlnet Dependencies: Overuse of controlnets can sometimes overly constrain the creative potential of the model.
Output Format: PNG