Google Debuts AI Test Kitchen, LaMDA Language Generator

Google has launched an AI Test Kitchen and is inviting users to sign up to test experimental AI-powered systems and provide feedback before the applications are deployed for commercial use. First up is the Language Model for Dialogue Applications (LaMDA), which has shown promising early results. The AI Test Kitchen has begun a gradual rollout to small groups of U.S. on Android with plans to include iOS in the coming weeks. According to Google, “as we move ahead with development, we feel a great responsibility to get this right.” 

“Similar to a real test kitchen, AI Test Kitchen will serve a rotating set of experimental demos,” Google wrote on a blog post that links to the registration page. “These aren’t finished products, but they’re designed to give you a taste of what’s becoming possible with AI in a responsible way,” the company emphasized.

The initial LaMDA outreach tests tools for three different use cases. The first demo, “Imagine It,” lets participating users name a place that prompts textual feedback about what it’s like to be there. Another demo, “List It,” lets you “share a goal or topic, and LaMDA will break it down into a list of helpful subtasks,” while the “Talk About It (Dogs Edition)” demo invites “open-ended conversation about dogs and only dogs, which explores LaMDA’s ability to stay on topic even if you try to veer off-topic.”

LaMDA, which Google previewed in May 2021, “contains 137 billion parameters, the configuration settings that determine how an AI processes data. The more parameters there are in a neural network, the more effectively it can perform computing tasks,” says SiliconANGLE, noting “Google trained the system on a natural language dataset containing 1.56 trillion words” to inform the AI’s natural language replies in response to user text prompts.

As part of LaMDA’s development, Google used “adversarial testing” resulting in measures designed to suppress inappropriate responses.

“Systems within AI Test Kitchen will attempt to automatically detect and filter out objectionable words or phrases that might be sexually explicit, hateful or offensive, violent or illegal, or divulge personal information, Google says,” but “warns offensive text might still occasionally make it through,” according to TechCrunch.Humn

“Even the most sophisticated chatbots today can quickly go off the rails, delving into conspiracy theories and offensive content when prompted with certain text,” adds TechCrunch, citing as a recent example Meta Platforms’ BlenderBot 3.0, whose mishaps are detailed by Business Insider.

Google says LaMDA is among its most promising new technologies but admits the model still “has difficulty differentiating between benign and adversarial prompts.” Envisioning a future of human-computer interaction where we’ll talk to computers “in the same conversational way you speak to friends and family,” Google says there’s much work to be done even as new breakthroughs in generative language models accelerate progress.

No Comments Yet

You can be the first to comment!

Sorry, comments for this entry are closed at this time.