Senators Question Meta Platforms About Recent LLaMA Leak

Meta Platforms CEO Mark Zuckerberg received a letter this week from Senators Richard Blumenthal and Josh Hawley of the Subcommittee on Privacy, Technology & the Law taking the executive to task for an online leak of the company’s LLaMA artificial intelligence system. The 65-billion-parameter language model, which is still under development, was released to approved researchers in February. Available on request through Meta’s GitHub portal, it wound up on 4chan and BitTorrent, “making it available to anyone, anywhere in the world, without monitoring or oversight,” the senators wrote.

“Meta’s choice to distribute LLaMA in such an unrestrained and permissive manner raises important and complicated questions about when and how it is appropriate to openly release sophisticated AI models,” reads the June 6 letter co-signed by Blumenthal (D-Connecticut) and Hawley (R-Missouri), respectively the subcommittee’s chairman and ranking member.

“Given the seemingly minimal protections built into LLaMA’s release, Meta should have known that LLaMA would be broadly disseminated, and must have anticipated the potential for abuse,” the lawmakers wrote. The letter suggests Meta may have intended to widely disseminate LLaMA, stating that “while Meta has described the release as a leak, its chief AI scientist has stated that open models are key to its commercial success.”

The letter comes as the Biden administration emphasizes the need for tech companies to step up the safety of their products and software. “As part of its national cybersecurity strategy released in March,” reports The Wall Street Journal, “it called on software firms to take more responsibility for making sure their products can’t be hacked, and indicated it would support legislation to hold them liable if they don’t take reasonable steps to secure their products.”

Tech companies “are financially motivated to get products to market quickly, not give priority to security, so market forces alone aren’t always enough to keep critical systems safe, the strategy’s supporters say,” the Journal continues.

However, singling out the LLaMA leak “seems to be a swipe at the open source community, which has been having both a moment and a red-hot debate over the past months,” writes VentureBeat, noting that startups and academics are pushing back against the “shift in AI to closed, proprietary LLMs.”

Upon release, LLaMA was “immediately hailed for its superior performance over models such as GPT-3, despite having 10 times fewer parameters,” VentureBeat reports, adding that other open-source AI models, including Stanford’s Alpaca and Vicuna (a collaboration among UC Berkeley, Carnegie Mellon, UC San Diego and Stanford), build on LLaMA. Databricks’ Dolly is another entrant in the open-source push, though it is built on EleutherAI models rather than on LLaMA.

The Senate Subcommittee on Privacy, Technology & the Law last month grilled OpenAI CEO Sam Altman and others from the artificial intelligence community about AI safety, and Altman himself openly voiced concerns about the technology’s risks.

In April, Meta VP of AI research Joelle Pineau told VentureBeat that accountability and transparency are critical aspects of AI development. Pineau “doesn’t fully align herself with statements from OpenAI that cite safety concerns as a reason to keep models closed,” VentureBeat writes, noting that she points to the Alpaca project as a model of “gated access.”
