Opera Browser Is Experimenting with Local Support for LLMs

Opera has become the first major browser to add built-in support for locally run large language models (LLMs). At this point the feature is experimental and available only in the Opera One Developer browser as part of the AI Feature Drops program. The update offers about 150 LLMs from more than 50 different families, including Meta’s LLaMA, Google’s Gemma, Mixtral and Vicuna. Opera had previously offered only its own Aria AI, a competitor to Microsoft Copilot and OpenAI’s ChatGPT. The local LLMs are being offered for testing as a complimentary addition to Opera’s online Aria service.

The Register explains that the main difference between Opera’s new local support and AI chatbots such as Aria and Copilot is that the latter “depend on being connected via the Internet to a dedicated server. Opera says that with the locally run LLMs it’s added to Opera One Developer, data remains local to users’ PCs and doesn’t require an Internet connection except to download the LLM initially.” That translates to “lower latency because any data used is not sent across the Internet,” explains SiliconANGLE, noting “it also means it can’t be used to train another model.”

“It is using the Ollama open source framework in the browser to run these models on your computer,” TechCrunch reports, adding that “currently, all available models are a subset of Ollama’s library, but in the future, the company is looking to include models from different sources.”
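To make the local-first design concrete: Ollama exposes a REST API on the user’s own machine (by default at `http://localhost:11434`), so a prompt never has to leave the loopback interface. The sketch below, which assumes a locally pulled model named "llama3", builds a request for Ollama’s `/api/generate` endpoint; the model name and prompt are illustrative, and actually sending the request requires a running local Ollama server.

```python
import json
import urllib.request

# Ollama's default local endpoint -- the request never leaves this machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a request for Ollama's local /api/generate endpoint.

    Nothing here touches the wider Internet: the endpoint is the local
    Ollama server, which runs the model entirely on the user's own PC.
    """
    payload = json.dumps({
        "model": model,    # a model already pulled locally, e.g. "llama3"
        "prompt": prompt,
        "stream": False,   # ask for a single JSON response, not a stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# With a local Ollama server running, the request could be sent like so:
#   resp = urllib.request.urlopen(build_request("llama3", "Hello"))
#   print(json.loads(resp.read())["response"])
```

The only network hop is to `localhost`, which is what lets Opera claim that prompt data stays on the user’s PC once the model has been downloaded.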

“Using an LLM is a process that typically requires data being sent to a server,” an Opera blog post notes, detailing how “local LLMs are different, as they allow you to process your prompts directly on your machine without the data you’re submitting to the local LLM leaving your computer.”

“Introducing local LLMs in this way allows Opera to start exploring ways of building experiences within the fast-emerging local AI space,” Opera EVP of Browsers and Gaming Krystian Kolondra said in a press release.

Each model variant takes up more than 2GB of local storage, so users should keep an eye on available disk space, TechCrunch notes: “If you want to save space, there are plenty of online tools like Quora’s Poe and HuggingChat to explore different models.”
