Llava 13B
llava-13b
Llava 13B is an AI model that transforms image content into meaningful text descriptions.
L40S 45GB
Fast Inference
REST API
Model Information
Response Time: ~7 sec
Status: Active
Version: 0.0.1
Updated: 18 days ago
Live Demo
Average runtime: ~7 seconds
Input
Configure model parameters
Output
View generated results
Result
Preview, share or download your results with a single click.
Example output (the model streams its answer as an array of token strings):
["Yes, ","you ","are ","allowed ","to ","swim ","in ","the ","lake ","near ","the ","pier. ","The ","image ","shows ","a ","pier ","extending ","out ","into ","the ","water, ","and ","the ","water ","appears ","to ","be ","calm, ","making ","it ","a ","suitable ","spot ","for ","swimming. ","However, ","it ","is ","always ","important ","to ","be ","cautious ","and ","aware ","of ","any ","potential ","hazards ","or ","regulations ","in ","the ","area ","before ","swimming."]
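Because the output arrives as an array of token strings, a client typically reassembles it before display. A minimal sketch (the token list here is shortened for illustration):

```python
# The model streams its answer as a list of token strings;
# joining them reconstructs the full text.
tokens = ["Yes, ", "you ", "are ", "allowed ", "to ", "swim ",
          "in ", "the ", "lake ", "near ", "the ", "pier."]
answer = "".join(tokens)
print(answer)  # -> Yes, you are allowed to swim in the lake near the pier.
```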
Cost is calculated based on execution time. The model is charged at $0.0011 per second. With a $1 budget, you can run this model approximately 129 times, assuming an average execution time of 7 seconds per run.
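The pricing arithmetic above can be checked with a short calculation (the runs_for_budget function is illustrative, not part of the API):

```python
PRICE_PER_SECOND = 0.0011  # USD per second of execution, per the pricing above
AVG_RUNTIME_S = 7          # average runtime of this model

def runs_for_budget(budget_usd: float) -> int:
    """Whole number of runs a budget covers at the average runtime."""
    cost_per_run = PRICE_PER_SECOND * AVG_RUNTIME_S  # $0.0077 per run
    return int(budget_usd / cost_per_run)

print(runs_for_budget(1.0))  # -> 129
```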
API Reference
View Full Documentation
Prerequisites
- Create an API Key from the Eachlabs Console
- Install the required dependencies for your chosen language (e.g., requests for Python)
API Integration Steps
1. Create a Prediction
Send a POST request to create a new prediction. This will return a prediction ID that you'll use to check the result. The request should include your model inputs and API key.
```python
import requests
import time

API_KEY = "YOUR_API_KEY"  # Replace with your API key
HEADERS = {
    "X-API-Key": API_KEY,
    "Content-Type": "application/json"
}

def create_prediction():
    response = requests.post(
        "https://api.eachlabs.ai/v1/prediction/",
        headers=HEADERS,
        json={
            "model": "llava-13b",
            "version": "0.0.1",
            "input": {
                "image": "your_file.image/jpeg",
                "top_p": "1",
                "prompt": "your prompt here",
                "max_tokens": "1024",
                "temperature": "0.2"
            }
        }
    )
    prediction = response.json()
    if prediction["status"] != "success":
        raise Exception(f"Prediction failed: {prediction}")
    return prediction["predictionID"]
```
2. Get Prediction Result
Poll the prediction endpoint with the prediction ID until the result is ready. The result is not returned immediately, so you'll need to check repeatedly until you receive a success status.
```python
def get_prediction(prediction_id):
    while True:
        result = requests.get(
            f"https://api.eachlabs.ai/v1/prediction/{prediction_id}",
            headers=HEADERS
        ).json()
        if result["status"] == "success":
            return result
        elif result["status"] == "error":
            raise Exception(f"Prediction failed: {result}")
        time.sleep(1)  # Wait before polling again
```
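The loop above polls indefinitely; in practice you may want to cap the total wait. A sketch with a timeout (get_prediction_with_timeout and its fetch parameter are illustrative additions, not part of the Eachlabs API):

```python
import time

def get_prediction_with_timeout(prediction_id, fetch, max_wait_s=120, interval_s=1):
    """Polls fetch(prediction_id) until success, error, or timeout.

    fetch stands in for the GET request shown above and must return
    the parsed JSON dict with a "status" field.
    """
    deadline = time.monotonic() + max_wait_s
    while time.monotonic() < deadline:
        result = fetch(prediction_id)
        if result["status"] == "success":
            return result
        if result["status"] == "error":
            raise RuntimeError(f"Prediction failed: {result}")
        time.sleep(interval_s)
    raise TimeoutError(f"Prediction {prediction_id} not ready after {max_wait_s}s")
```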
3. Complete Example
Here's a complete example that puts it all together, including error handling and result processing. This shows how to create a prediction and wait for the result in a production environment.
```python
try:
    # Create prediction
    prediction_id = create_prediction()
    print(f"Prediction created: {prediction_id}")

    # Get result
    result = get_prediction(prediction_id)
    print(f"Output URL: {result['output']}")
    print(f"Processing time: {result['metrics']['predict_time']}s")
except Exception as e:
    print(f"Error: {e}")
```
Additional Information
- The API uses a two-step process: create prediction and poll for results
- Response time: ~7 seconds
- Rate limit: 60 requests/minute
- Concurrent requests: 10 maximum
- Poll the prediction status repeatedly until completion
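To stay under the 60 requests/minute limit, a client can space out its calls. A minimal sketch (the Throttle class is an illustrative helper, not part of any SDK):

```python
import time

RATE_LIMIT_PER_MIN = 60
MIN_INTERVAL_S = 60 / RATE_LIMIT_PER_MIN  # 1 second between requests

class Throttle:
    """Spaces successive calls so they never exceed the documented rate limit."""

    def __init__(self, min_interval: float):
        self.min_interval = min_interval
        self._last = 0.0

    def wait(self):
        # Sleep just long enough to keep min_interval between calls.
        elapsed = time.monotonic() - self._last
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last = time.monotonic()

# Usage: call throttle.wait() before each API request.
throttle = Throttle(MIN_INTERVAL_S)
```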