UK Launches New Open-Source Platform for AI Safety Testing

The UK AI Safety Institute has announced its new open-source Inspect platform, designed to evaluate and test artificial intelligence systems and help developers build safe AI models. The Inspect toolset enables testers, including researchers worldwide, government agencies, and startups, to analyze the specific capabilities of such models and assign scores based on various criteria. According to the Institute, the “release comes at a crucial time in AI development, as more powerful models are expected to hit the market over the course of 2024, making the push for safe and responsible AI development more pressing than ever.”
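For a sense of how such evaluations are structured, below is a minimal sketch of an Inspect task in Python, based on the publicly documented inspect_ai package. The sample question, scorer choice, and model name are illustrative assumptions, and module paths or parameter names may differ between releases.

```python
# Hypothetical sketch of a minimal Inspect evaluation task (inspect_ai).
# Dataset contents, scorer, and model string are assumptions for illustration.
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate


@task
def arithmetic_check():
    # One question-answer pair; real evaluations would load a full dataset.
    return Task(
        dataset=[Sample(input="What is 17 + 25?", target="42")],
        solver=generate(),  # ask the model under test for a completion
        scorer=match(),     # score the completion against the target string
    )


if __name__ == "__main__":
    # Run the task against a chosen model and record the results.
    eval(arithmetic_check(), model="openai/gpt-4o-mini")
```

The pattern is the same regardless of scale: a dataset of samples, a solver that elicits model behavior, and a scorer that turns that behavior into the kind of capability scores the Institute describes.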

U.S. and UK Form Partnership to Accelerate AI Safety Testing

The United States has entered into an agreement with the United Kingdom to jointly develop safety tests for the most advanced AI models. The memorandum of understanding aims to evaluate the societal and national defense risks posed by advanced models. Coming after commitments made at the AI Safety Summit in November, the deal is being described as the world’s first bilateral agreement on AI safety. The agreement, signed by U.S. Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, envisions the countries “working to align their scientific approaches” and accelerating evaluations for AI models, systems and agents.

Nations Sign the Bletchley Declaration in Support of Ethical AI

U.S. Vice President Kamala Harris warned global leaders that the existential threats posed by artificial intelligence are very real and urgently need to be addressed. Harris’ remarks, delivered in a speech at the U.S. Embassy in Britain, summarized the prevailing view of world governments participating in the first global AI Safety Summit. The two-day event kicked off Wednesday with news that 27 nations — including the U.S., European Union member states and China — signed the Bletchley Declaration on AI, committing to voluntary guidelines to work as a group toward responsible and ethical AI.

United Kingdom Investing $273 Million in AI Supercomputing

The UK government plans to invest at least £225 million (about $273 million) in AI supercomputing, with the aim of bringing Great Britain closer to parity with AI leaders the U.S. and China. Among the new machines coming online is Dawn, which was built by the University of Cambridge Research Computing Services, Intel and Dell and is hosted by the Cambridge Open Zettascale Lab. “Dawn Phase 1 represents a huge step forward in AI and simulation capability for the UK, deployed and ready to use now,” said Dr. Paul Calleja, director of Research Computing at Cambridge.

Germany, UK to Host Europe’s First Exascale Supercomputers

Europe is moving forward in the supercomputer space, with two new exascale machines set to come online. Jupiter will be installed at the Jülich Supercomputing Centre in Germany, with assembly set to start as early as Q1 2024. Scotland will be home to the UK’s first exascale supercomputer, to be hosted at the University of Edinburgh, with installation commencing in 2025. An exascale supercomputer can run calculations at speeds of one exaflop (1,000 petaflops, or 10^18 operations per second) or greater. On completion, the two new machines are expected to rank among the world’s top-performing supercomputers.
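As a quick illustration of the scale involved (the figures below are plain unit conversions, not vendor specifications), one exaflop works out to a billion billion floating-point operations per second:

```python
# Unit conversion only: relating petaflops to exaflops.
PETAFLOP = 10**15            # floating-point operations per second
EXAFLOP = 1_000 * PETAFLOP   # 1,000 petaflops = 10**18 operations per second

print(f"{EXAFLOP:,} operations per second")
# 1,000,000,000,000,000,000 operations per second
```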