
Armilla AI launches on $1.5 million seed investment

Source: Armilla AI

Armilla AI, the first all-in-one AI governance platform that allows any business to prevent faulty AI and its consequences, today announced its company launch and $1.5 million in seed funding.

The company's backers include renowned investor Naval Ravikant's Spearhead fund, as well as Alan and Eva Lau's Two Small Fish Ventures and C2 Ventures. Armilla seamlessly unites stakeholders with automated validation tools that test ML models for robustness, accuracy, fairness, data drift, bias, and more. The platform provides a framework for visibility at every phase of the model development process through transparency, auditability, and traceability. Armilla customers, from financial services firms to any business leveraging machine learning, can now responsibly deploy robust, trustworthy AI models.

“AI models are making more critical decisions every day, which means they require new oversight protocols that can ensure they are accurate and fair, and curb potential abuse,” said Yoshua Bengio, A.M. Turing Award recipient, Founder of the Mila Québec AI Institute, and an Armilla investor. “This growing need for independent validation requires the same attention and investment used to build the models themselves. This is how to responsibly build AI.” Prof. Bengio is an investor in Armilla along with Apstat partners Nicolas Chapados and Jean-François Gagné.

Faulty AI is a byproduct of explosive growth and increased model complexity. Manual testing processes leave organizations unprepared to keep up with this shift, leading to erroneous outputs, including bias and other negative results. The Armilla quality assurance platform covers the entire model creation life cycle, providing organizations with the tools to plan, experiment, validate, ship, and archive models.

Armilla automates this process and continually runs more than 50 tests to detect miscalculations in ML models. The system includes Armilla FingerPrint™, a validation framework that learns the sensitivities and riskiest parts of any system and then lets organizations intelligently monitor their ML systems in production. The entire process is fully auditable: every test conducted, issue discovered, and problem solved is logged. Armilla can test for failures such as gender and ethnic bias, faulty credit score inclusion, and overall model performance. Now, previously siloed business stakeholders, from executives and managers to risk and compliance teams and data scientists, can see issues and collaborate directly on results in real time.
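For illustration only, the sketch below shows the kind of automated group-fairness check the platform is described as running; the metric used here (comparing per-group approval rates against the common four-fifths rule of thumb) and all function and variable names are assumptions for this example, not Armilla's actual tests or API.

```python
import numpy as np

def selection_rate_ratio(predictions, groups):
    """Compute per-group positive-decision rates and the min/max ratio."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        rates[g] = predictions[mask].mean()
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical data: 0/1 loan approvals for two demographic groups.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rates, ratio = selection_rate_ratio(preds, groups)
print(rates, ratio)  # {'A': 0.6, 'B': 0.2} 0.333...
if ratio < 0.8:
    print("Potential disparate impact: review the model before shipping.")
```

A platform of the kind described would run many such checks automatically across the model life cycle and log every result for audit.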

“AI governance in the modern world is not failing—it doesn’t even exist,” said Armilla CEO and co-founder Dan Adamson. “With the expertise of our team and advisors, we created the first audit platform that continuously stress-tests ML models to ensure they are strong, reliable, and accurate.”

“We’re committed to creating a system that breaks down barriers by eliminating AI that causes real harm,” said Armilla CPO Karthik Ramakrishnan. “For example, we’ve discovered hidden biases in data that discriminate against new immigrants in credit determinations, and detected faults in models that were charging single women more for homeowners insurance. These consequences are now avoidable for our customers, who have the tools to be on the side of doing what’s right for their customers and the community.”

“Regulators are beefing up their machine learning compliance requirements, and that includes the penalties that go with them. Armilla has created a unique approach to ensuring AI-assisted decisions are the most accurate and fair, while minimizing negative impacts on the people they serve,” said Oleg Rogynskyy, CEO of People.ai and scout at Spearhead. “The Armilla team is packed with AI experts, and we know they are the right group to help AI realize its true potential by supporting the ecosystem with proper governance.”
