Why We Re-Invested in Galtea
Building the Evaluation Layer for Enterprise AI
As enterprises move from experimenting with AI to deploying it in production, one challenge is becoming increasingly clear: validating how these systems behave in the real world.
One year after investing in Galtea at pre-seed, we’re excited to announce our follow-on investment in the company’s €3m Seed round led by 42CAP, with participation from Mozilla Ventures and existing investors including Abac Nest Ventures.
At the time of our first investment, Galtea was an early spin-off from the Barcelona Supercomputing Center (BSC), built around a clear conviction: AI systems cannot scale without proper evaluation.
Today, the company has turned that thesis into a working enterprise platform and is beginning to see adoption in industries such as banking and telecommunications.
The challenge of validating AI systems
Generative AI has unlocked a wave of experimentation across enterprises. But moving from experimentation to production is proving far harder than expected.
AI systems behave differently from traditional software: they are probabilistic, dynamic, and exposed to unpredictable user inputs, which makes testing them significantly more complex.
As a result, many enterprise AI initiatives remain stuck in pilot phases. Companies lack reliable ways to evaluate how these systems will behave once deployed in real-world environments.
In our view, AI adoption increasingly depends on robust methods for evaluating and validating AI systems before they reach production.
The missing layer in the AI stack
As the AI stack evolves, new infrastructure layers are emerging around model deployment: monitoring, observability, routing, and guardrails.
But systematic evaluation and validation remain relatively underdeveloped.
Our view is that AI systems will require a new generation of testing and validation tools, similar to how QA frameworks became essential in traditional software development.
Galtea’s platform focuses on this problem.
The company provides tools that allow enterprises to simulate, test, and evaluate AI systems before and after deployment.
Galtea’s conversation simulator generates synthetic scenarios and simulated user interactions to uncover failures before AI systems reach production, evaluating how those systems behave across different tasks, risks, and performance metrics.
A team built for this problem
Galtea’s founders, Jorge Palomar and Baybars Külebi, previously worked together at the Barcelona Supercomputing Center.
There, they experienced firsthand the challenges enterprises face when validating complex AI systems in production environments.
That experience shaped the company’s vision of building a data-driven evaluation platform that helps organizations establish a reliable source of truth about how their AI systems behave in real-world scenarios.
Since our first investment, the team has moved quickly from research to product and has begun building early enterprise traction.
Why we re-invested
At Abac Nest Ventures, we invest in companies building enterprise technologies that help organizations adopt and scale new capabilities.
We believe evaluation and validation will become a core layer of the enterprise AI stack as organizations move from experimentation to real-world deployment. Galtea is squarely focused on that problem, and as more AI systems reach production, the need for reliable testing and validation will only grow.
That’s why we’re excited to continue supporting the Galtea team on this journey.