UK Launches New Open-Source Platform for AI Safety Testing

The UK AI Safety Institute has announced the availability of Inspect, a new platform for evaluating and testing artificial intelligence systems, built to help develop safe AI models. The Inspect toolset enables testers — including researchers worldwide, government agencies, and startups — to analyze the specific capabilities of such models and establish scores based on various criteria. According to the Institute, the “release comes at a crucial time in AI development, as more powerful models are expected to hit the market over the course of 2024, making the push for safe and responsible AI development more pressing than ever.”

“Inspect can evaluate AI models in various areas such as their core knowledge, ability to reason and autonomous capabilities,” reports Silicon Republic. “The announcement comes a month after the UK and U.S. agreed to a collaboration between their safety institutes, to develop common testing approaches for AI models.”

The collaboration between the two countries “follows commitments made at the AI Safety Summit in November of last year, where world leaders explored the need for global cooperation in combating the potential risks associated with AI technology,” notes PYMNTS.

“We hope to see the global AI community using Inspect to not only carry out their own model safety tests, but to help adapt and build upon the open-source platform so we can produce high-quality evaluations across the board,” said UK AI Safety Institute Chair Ian Hogarth.

The Inspect toolset — available under an open-source MIT License — comprises three components: data sets, solvers and scorers.

“Data sets provide samples for evaluation tests,” explains TechCrunch. “Solvers do the work of carrying out the tests. And scorers evaluate the work of solvers and aggregate scores from the tests into metrics. Inspect’s built-in components can be augmented via third-party packages written in Python.”
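The three-component flow described above — data sets supplying samples, solvers running the tests, scorers aggregating results into metrics — can be sketched in plain Python. This is a simplified illustration of the pattern only, not the actual Inspect API; all names and signatures here are hypothetical.

```python
# Hypothetical sketch of the dataset -> solver -> scorer flow.
# Names and signatures are illustrative, not the real Inspect API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Sample:
    input: str   # prompt presented to the model under test
    target: str  # expected answer for scoring

# A "solver" carries out the test. A real solver would query the
# AI model being evaluated; this placeholder just echoes the last word.
def echo_solver(sample: Sample) -> str:
    return sample.input.split()[-1]

# A "scorer" evaluates a solver's output against the sample's target.
def exact_match_scorer(output: str, sample: Sample) -> float:
    return 1.0 if output.strip() == sample.target.strip() else 0.0

# Aggregate per-sample scores into a single metric (here, accuracy).
def run_eval(dataset: list[Sample],
             solver: Callable[[Sample], str],
             scorer: Callable[[str, Sample], float]) -> float:
    scores = [scorer(solver(s), s) for s in dataset]
    return sum(scores) / len(scores)

dataset = [
    Sample(input="The capital of France is Paris", target="Paris"),
    Sample(input="2 + 2 equals 4", target="4"),
]
print(run_eval(dataset, echo_solver, exact_match_scorer))  # 1.0
```

In the real platform, any of these pieces can be swapped out or extended via third-party Python packages, which is what makes the open-source design extensible.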

The platform was made available on Friday in what the UK press release claims is “the first time that an AI safety testing platform … has been spearheaded by a state-backed body … released for wider use.”
