New Microsoft Safety Tools Fix AI Flubs, Detect Proprietary IP
September 26, 2024
Microsoft has released a suite of “Trustworthy AI” features that address concerns about AI security and reliability. The four new capabilities include Correction, a content detection upgrade in Microsoft Azure that “helps fix hallucination issues in real time before users see them.” Embedded Content Safety allows customers to embed Azure AI Content Safety on devices where cloud connectivity is intermittent or unavailable, while two new filters flag protected material in AI output. Additionally, a transparency safeguard providing the company’s AI assistant, Microsoft 365 Copilot, with specific “web search query citations” is coming soon.
The Correction feature “is available in preview as part of the Azure AI Studio — a suite of safety tools designed to detect vulnerabilities, find hallucinations, and block malicious prompts,” reports The Verge.
“Once enabled, the correction system will scan and identify inaccuracies in AI output by comparing it with a customer’s source material,” The Verge adds. The tool will then rewrite the material, correcting inaccuracies.
Correction is essentially a feedback loop that detects mismatches between a customer’s source data and the AI’s output; given that feedback, the AI is “usually able to do better the second try,” Chief Product Officer of Responsible AI Sarah Bird tells VentureBeat.
Correction “can be used with any text-generating AI model, including Meta’s Llama and OpenAI’s GPT-4o,” according to TechCrunch.
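The reports above describe Correction as a grounding feedback loop: flag spans of model output that the customer’s source material does not support, then ask the model for a revised answer. A minimal sketch of that pattern follows; the `is_grounded` heuristic and the `regenerate` callback are illustrative stand-ins, not Microsoft’s actual API.

```python
def is_grounded(sentence: str, source: str) -> bool:
    """Crude groundedness check: every substantive word in the
    sentence must appear somewhere in the source material."""
    words = [w.strip(".,").lower() for w in sentence.split()]
    content = [w for w in words if len(w) > 3]  # skip short filler words
    return all(w in source.lower() for w in content)

def correct(output: str, source: str, regenerate, max_tries: int = 2) -> str:
    """Feedback loop: if any sentence is ungrounded, ask the model
    (via the `regenerate` callback) for a revision and re-check."""
    for _ in range(max_tries):
        bad = [s for s in output.split(". ") if s and not is_grounded(s, source)]
        if not bad:
            return output  # every sentence is supported by the source
        output = regenerate(output, bad, source)
    return output
```

A real system would use a trained groundedness classifier rather than word overlap, but the loop structure — detect mismatch, regenerate, re-check — is the point.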
Embedded Content Safety allows devices to run checks, even when offline, which is “particularly relevant for applications like Microsoft’s Copilot for PC, which integrates AI capabilities directly into the operating system,” VentureBeat explains.
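The offline behavior VentureBeat describes can be pictured as a fallback: use the cloud safety service when a connection exists, and run an embedded on-device check when it does not. Everything in this sketch — the `cloud_check` callback and the tiny keyword screen standing in for an on-device model — is a hypothetical illustration, not Azure’s SDK.

```python
BLOCKLIST = {"credit card number", "ssn"}  # stand-in for an on-device safety model

def local_check(text: str) -> bool:
    """Embedded (offline) screen: returns True if the text is allowed."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

def safety_check(text: str, cloud_check=None) -> bool:
    """Prefer the cloud service; fall back to the embedded check
    when no connection is available (cloud_check is None or raises)."""
    if cloud_check is not None:
        try:
            return cloud_check(text)
        except ConnectionError:
            pass  # offline: fall through to the embedded check
    return local_check(text)
```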
In a blog post detailing the new features, which include several privacy tools, Microsoft calls ECS a “proactive” approach to risk assessment.
The Evaluations feature in Azure AI Studio lets customers “assess the quality and relevancy of outputs and how often their AI application outputs protected material.” That ability is amplified with Protected Material Detection for Code. Now previewing as part of Azure AI Content Safety, it helps detect pre-existing content and code.
This feature supports “informed coding decisions” when developers explore public source code in repositories such as those hosted on GitHub.
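Detecting pre-existing code is commonly done by fingerprinting: hash normalized windows of known public code into an index, then flag any new output whose windows collide with it. A toy version of that general technique (not Microsoft’s actual detector) might look like this:

```python
import hashlib

def fingerprints(code: str, k: int = 3) -> set:
    """Hash every k-line window of whitespace-normalized code."""
    lines = [ln.strip() for ln in code.splitlines() if ln.strip()]
    return {
        hashlib.sha256("\n".join(lines[i:i + k]).encode()).hexdigest()
        for i in range(len(lines) - k + 1)
    }

def matches_known_code(snippet: str, known_index: set, k: int = 3) -> bool:
    """True if any k-line window of the snippet also appears in the
    index built from known public code."""
    return bool(fingerprints(snippet, k) & known_index)
```

Production systems use far more robust normalization (token streams, winnowing) so that renamed variables still match, but the index-and-compare structure is the same.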
VentureBeat says that Microsoft’s “push for trustworthy AI reflects a growing industry awareness of the potential risks associated with advanced AI systems,” positioning the company as a responsible AI developer in the increasingly competitive cloud computing space.