EU’s AI Act Could Present Dangers for Open-Source Coders

The EU’s draft AI Act is causing quite a stir, particularly as it pertains to regulating general-purpose artificial intelligence, including guidelines for open-source developers that specify procedures for accuracy, risk management, transparency, technical documentation and data governance, as well as cybersecurity. The first law on AI proposed by a major regulator anywhere, the AI Act seeks to promote “trustworthy AI,” but critics argue that, as written, the legislation could hurt open efforts to develop AI systems. The EU is seeking industry input as the proposal heads for a vote this fall.

An article by Alex Engler, an analyst with the Washington, D.C.-based public policy think tank The Brookings Institution, says the AI Act takes “the unusual, and harmful, step of regulating open-source general purpose AI (GPAI).” While the Act ostensibly aims to make such tools safer, the critical Brookings piece argues that the pending law “would create legal liability for open-source GPAI models, undermining their development” and “could further concentrate power over the future of AI in large technology companies.”

Based on Brookings’ interpretation of the AI Act, TechCrunch writes that “if a company were to deploy an open source AI system that led to some disastrous outcome, it’s not inconceivable the company could attempt to deflect responsibility by suing the open source developers on which they built their product.”

While the legislation “contains carve-outs” for some open source categories, like those focused exclusively on research, “as Engler notes, it’d be difficult — if not impossible — to prevent these projects from making their way into commercial systems, where they could be abused by malicious actors,” writes TechCrunch, citing a recent real-world example in which Stable Diffusion, “an open source AI system that generates images from text prompts, was released with a license prohibiting certain types of content. But it quickly found an audience within communities that use such AI tools to create pornographic deepfakes of celebrities.”

“Open source developers should not be subject to the same burden as those developing commercial software,” writes TechCrunch, echoing the Brookings position. Allen Institute for AI CEO Oren Etzioni tells TechCrunch he agrees that the current draft of the AI Act presents problems. “Etzioni said that the burdens introduced by the rules could have a chilling effect on areas like the development of open text-generating systems, which he believes are enabling developers to ‘catch up’ to Big Tech companies like Google and Meta,” TechCrunch writes.

Meanwhile, EURACTIV writes that leading members of the European Parliament are seeking broad consensus on the regulatory sandboxes required under the AI Act; the draft now includes compromise text indicating that EU member states “must establish at least one AI regulatory sandbox each, which should be operational when the regulation enters into application.”

“Regulatory sandboxes are programs that enable entrepreneurs to test new products and services in the market while maintaining close rapport with regulators,” explains The Future Society. Text of the AI Act “now includes the possibility of setting up sandboxes at the regional or local level or jointly with other countries,” EURACTIV says, noting that the debate is likely to heat up in late September.

The Stanford Law School Journal describes the AI Act draft as combining “a risk-based approach based on the pyramid of criticality, with a modern, layered enforcement mechanism.” This means that “a lighter legal regime applies to AI applications with a negligible risk, and that applications with an unacceptable risk are banned. Between these extremes of the spectrum, stricter regulations apply as risk increases.”
