London-based Overmind, a startup building the supervision layer for AI agents, today announced a €2.3 million (£2 million) Seed round to grow its technical teams, accelerate product development and scale go-to-market operations in legal, healthcare and FinTech – sectors where agentic AI holds potential but demands rigorous regulatory compliance and data privacy.
The round was led by specialist cybersecurity investor Osney Capital, with participation from 14Peaks, Portfolio Ventures, Antler and Endurance Ventures.
Tyler Edwards, co-founder and CEO of Overmind, says: “The AI security industry is trying to secure the wrong thing. Models will always be vulnerable to adversarial inputs – that’s a fundamental property of how they work. But what happens when an agent is live in production, interacting with real systems, and its behaviour begins to drift? Right now, most teams have no idea. Overmind provides the deployment-layer infrastructure needed to monitor agent interactions and intervene before damage occurs.”
Within the 2025–2026 European funding landscape for agentic AI and AI security/governance technologies, a number of startups have raised capital that helps contextualise Overmind’s Seed round.
Archestra, a London-based company focused on safely connecting AI agents to internal data, secured €2.8 million in pre-Seed funding to develop safety guardrails for autonomous systems. In Italy, Equixly raised €10 million to scale its AI-driven API security testing platform. French startup Qevlar AI raised €9.1 million to build agentic AI for security operations centres.
Other adjacent companies – such as Ranketta, with €1 million for AI analytics on brand visibility in LLM outputs, and Omnia, with €3.5 million for agent-driven marketing platforms – further illustrate the flow of capital into technologies supporting AI adoption more broadly.
Taken together, these rounds represent over €26 million in funding for startups addressing either autonomous AI agents or the security and governance challenges they bring.
Adam Cragg, Partner at Osney Capital, adds: “In the new frontier of autonomous AI, agent security, performance, and execution are the ultimate competitive advantages. Overmind provides businesses with truly differentiated technology that monitors and secures agentic AI while iteratively improving model performance, enabling teams to scale with confidence. We’re excited to back such a strong founding team addressing a critical market.”
Founded in 2025, Overmind is a devtool that enables AI to learn from production data. Using pattern-of-life analysis, it turns real-world agent behaviour into continuous improvement, reportedly allowing teams to ship secure and highly specialised agents.
Overmind’s founding team combines expertise from the UK intelligence community and high-growth technology companies.
CEO Tyler Edwards spent eight years building AI systems for British intelligence agencies including MI5, MI6 and GCHQ. Akhat Rakishev, CTO, previously led machine learning infrastructure at Monzo and Lyst, while CRO Sam Brunt has scaled go-to-market at three unicorns: Funding Circle, Pipe and Vertice.
Adam French, Partner at Antler, shares: “Overmind is addressing one of the most critical bottlenecks in the growth of AI: the security and supervision of autonomous agents. The founding team is uniquely positioned to solve this and deliver the ‘intelligence-grade’ security AI tools need. We’re proud to back a team that isn’t just building a tool, but is defining the security standard for how superintelligence will be safely deployed in production.”
As advances in agentic AI accelerate, the threat landscape is undergoing a fundamental shift that current security tools are ill-equipped to handle.
The company explains that while these models are inherently vulnerable to adversarial inputs and data corruption, the real risks lie in how agents behave when executing tasks in live environments.
This security gap is becoming a barrier to innovation; Gartner estimates that 40% of agentic AI projects will be cancelled by 2027, driven largely by inadequate risk controls. Without new technology, unsafe deployment will stall the adoption and progress of agentic AI.
Overmind aims to help businesses fulfil the promise of agentic AI with intelligence-grade technology that observes, secures and improves agent performance. The platform provides full visibility into agent behaviour, detecting and stopping deviations in real time before humans could react.
Additionally, through reinforcement learning, Overmind reportedly improves agent performance and accuracy over time, so agents don’t just stay safe, they get better.