
Eachlabs Annual Report 2025: The Story Behind the 873% Growth
Hey Eachlabbers, Eftal here.
I've been sitting with our yearly numbers for a while now, and I keep coming back to the same thought: this wasn't just a good year. This was the year.
873% execution volume growth from January to December. 2,996 active apps. And Text-to-Voice up 1,044x. Yes, you read that right. Over a thousand times more volume than where we started.
But let me tell you what those numbers actually mean.

The Year Voice Found Its Moment
Let's start with the elephant in the room: that 1,044x growth in Text-to-Voice.
Back in January, voice was an afterthought. A nice-to-have. Something developers added to projects as a checkbox feature. By December, it had become the fastest-growing category on our entire platform.
What changed? Pipelines got smarter. Developers stopped treating voice as an output and started treating it as infrastructure. Suddenly, every video workflow needed narration. Every accessibility feature needed voice. Every agent needed to speak.
I remember writing in our August report about how visual AI was dominating everything. Turns out, once you have great video, you need great audio to match. Voice didn't compete with visual. It completed it.

Image-to-Video: 108x and Counting
Image-to-Video at 108x growth is the number that makes me smile every time I look at it.
Remember when video generation was that thing you'd show at conferences to get applause? When a 4-second clip took minutes to render and looked... experimental?
This year, video became boring. And I mean that as the highest compliment.
VEO3's maturation, the Kling and Minimax models finding their stride, developers finally building production systems instead of proofs of concept. 2025 was when video stopped being magic and started being infrastructure.

Image Editing Models: The Quiet Giant
Here's something that might surprise you: image editing models now represent 35% of all platform volume. Not Image-to-Video (which gets all the headlines at 27%), but good old image processing.
The 38x growth in image editing tells a story about what developers actually need day-to-day. Video is flashy. Voice is trending. But image editing is the workhorse. The thing that runs thousands of times before a single video gets rendered.
Character consistency. Style transfer. Upscaling. Inpainting. These aren't exciting demos anymore; they're production necessities. And when Nano Banana dropped, followed by ByteDance's Seedream response, we saw exactly what happens when giants fight over infrastructure territory: everyone builds more.

What 2,996 Active Apps Really Means
Let me be honest about this number, because I've always tried to be straight with you about metrics.
Our "active app" threshold is 10+ transactions per month. That means those 2,996 apps include everything from sophisticated production systems to someone's side project that's still pinging our API from a forgotten cron job.
But here's what matters: 156 projects are doing 10x Eachlabbing, orchestrating 20+ models each. And 1,264 projects are doing 2x Eachlabbing. That's not experimentation. That's architecture.
When I look at where our transaction volume actually concentrates, it's in those multi-model pipelines. Teams treating AI models like microservices, composing workflows that would have been science fiction two years ago.
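If you've never watched one of these pipelines run, here's the shape of it in Python. To be clear: this is a minimal sketch, not our SDK. The `run_model` helper, the model IDs, and the URLs are all made up for illustration.

```python
# A toy "models as microservices" pipeline. run_model is a
# hypothetical stand-in for an inference call; the model IDs and
# URLs are illustrative, not real platform identifiers.

def run_model(model_id: str, payload: dict) -> dict:
    # A real version would POST to an inference endpoint and poll
    # for completion; here we just echo a placeholder result.
    print(f"running {model_id} with inputs {sorted(payload)}")
    return {"url": f"https://example.com/outputs/{model_id}"}

def promo_clip(product_image: str, script: str) -> dict:
    # Step 1: the image-editing workhorse (clean up / upscale).
    still = run_model("image-edit-upscaler", {"image": product_image})
    # Step 2: animate the edited still into a short clip.
    clip = run_model("image-to-video", {"image": still["url"]})
    # Step 3: generate narration so the clip ships with audio.
    voice = run_model("text-to-voice", {"text": script})
    return {"video": clip["url"], "audio": voice["url"]}

print(promo_clip("https://example.com/product.png", "Meet the new lineup."))
```

Three independent calls, each one swappable, composed like any other set of services. That's the whole pattern.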

461 Models: The Curation Story
We used 461 different models this year. That number sounds like abundance, and it is. But what you don't see is the brutal selection process behind it.
For every model that made it to production, there were three or four that didn't survive internal testing. Stability issues. Latency problems. Price-performance ratios that didn't make sense. We reject more than we accept because consistency matters more than catalog size.
The ecosystem is chaotic. New models drop every week. Some are revolutionary, most are incremental, and a few are straight-up broken. Our job is to filter that chaos so you can build with confidence.

The Volume Distribution Tells the Real Story
Look at where the volume actually goes:
Image editing models take 35%. Image-to-Video follows at 27%. Text-to-Image holds steady at 17%. Text-to-Text sits at 10%. Text-to-Video and Text-to-Voice each claim 4%. The remaining 3% goes to other model types.
Visual AI dominates. That's not surprising. But that 10% in Text-to-Text? That's the nervous system.
I wrote about this in our October report. Text models found a second life as glue. They're not generating content anymore; they're routing it. Summarizing. Reasoning. Reformatting. They're the logic layer that makes everything else work.
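If that sounds abstract, the pattern fits in a dozen lines. Another hedged sketch: `classify_intent` stands in for a real Text-to-Text call, and the labels and pipeline names are invented for the example.

```python
# "Text model as glue": a small text model classifies the request,
# and the label routes it to a pipeline. classify_intent is a
# placeholder for a real text-model call; the labels are invented.

PIPELINES = {
    "video": lambda req: f"image-to-video pipeline <- {req!r}",
    "voice": lambda req: f"text-to-voice pipeline <- {req!r}",
    "edit":  lambda req: f"image-edit pipeline <- {req!r}",
}

def classify_intent(request: str) -> str:
    # A real version would prompt a text model: "Answer with exactly
    # one of: video, voice, edit." Crude keyword matching fakes it here.
    if "narrate" in request or "read" in request:
        return "voice"
    if "animate" in request or "clip" in request:
        return "video"
    return "edit"

def route(request: str) -> str:
    return PIPELINES[classify_intent(request)](request)

print(route("animate this product shot"))
print(route("narrate the launch script"))
print(route("remove the background"))
```

Swap the keyword hack for an actual model call and you have the logic layer I'm describing.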
The most sophisticated apps on our platform aren't using one type of model. They're orchestrating all of them.

What This Year Taught Us
2025 was the year AI went modular for real.
Not as a concept. Not as a pitch deck. As infrastructure that thousands of developers actually build on every day.
We saw developers go from experimenting with multimodal to expecting multimodal as the baseline. From "can I use multiple models?" to "how do I orchestrate twenty of them efficiently?"
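For what it's worth, "efficiently" usually comes down to one move: fan the independent calls out concurrently instead of awaiting them one by one. A sketch, with a fake `call_model` coroutine standing in for real network calls:

```python
import asyncio

# Fan-out orchestration: twenty independent model calls run
# concurrently, so wall time is roughly one call's latency rather
# than twenty stacked end to end. call_model is a fake stand-in.

async def call_model(model_id: str) -> str:
    await asyncio.sleep(0.1)  # pretend network + inference latency
    return f"{model_id}: done"

async def main() -> None:
    models = [f"model-{i:02d}" for i in range(20)]
    results = await asyncio.gather(*(call_model(m) for m in models))
    print(f"{len(results)} results, e.g. {results[0]}")

asyncio.run(main())
```

Dependencies between steps still serialize, but anything independent shouldn't wait in line.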
The 873% growth isn't just a number. It's proof that composable AI architectures work at scale. And somehow, we ended up with the platform to support it just as the market figured that out.

Looking Ahead
I don't know what 2026 will bring. More models, definitely. More categories, probably. More developers building things we haven't imagined yet, almost certainly.
But I do know this: the foundation is solid. The patterns are established. The infrastructure is ready.
And honestly? I wouldn't want to be anywhere else while it's happening.
See you in the new year, Eachlabbers.
Eftal