Federal Policy Specifies Guidelines for Risk Management of AI

The White House is rolling out a new AI policy across the federal government, to be implemented by the Office of Management and Budget (OMB). Vice President Kamala Harris announced the new rules, which require every federal agency to have a senior leader overseeing its use of AI systems, in an effort to ensure that AI deployed in public service remains safe and unbiased. The move was positioned as making good on “a core component” of President Biden’s AI Executive Order (EO), issued in October. Federal agencies reported completing the 150-day actions tasked by the EO.

Newsom Report Examines Use of AI by California Government

California Governor Gavin Newsom has released a report examining the beneficial uses and potential harms of artificial intelligence in state government. Potential benefits include improving access to government services by identifying groups hindered by language barriers or other obstacles, while the risks highlight the need to prepare citizens with next-generation skills so they are not left behind in the GenAI economy. “This is an important first step in our efforts to fully understand the scope of GenAI and the state’s role in deploying it,” Newsom said, calling California’s strategy “a nuanced, measured approach.”

DHS Moves to ‘Master’ AI While Keeping It Safe, Trustworthy

The Department of Homeland Security is harnessing artificial intelligence, according to a memo from Secretary Alejandro Mayorkas explaining that the department will use AI to keep Americans safe while implementing safeguards to ensure civil rights, privacy rights and the U.S. Constitution are not violated. DHS appointed Eric Hysen as chief AI officer, moving him into the role from his previous post as CIO. “DHS must master this technology, applying it effectively and building a world class workforce that can reap the benefits of AI, while meeting the threats posed by adversaries that wield AI,” Mayorkas wrote.

Meta’s Multimodal AI Model Translates Nearly 100 Languages

Meta Platforms is releasing SeamlessM4T, the world’s “first all-in-one multilingual multimodal AI translation and transcription model,” according to the company. SeamlessM4T can perform speech-to-text, speech-to-speech, text-to-speech, and text-to-text translations for up to 100 languages, depending on the task. “Our single model provides on-demand translations that enable people who speak different languages to communicate more effectively,” Meta claims, adding that SeamlessM4T “implicitly recognizes the source languages without the need for a separate language identification model.”
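For readers who want a sense of how a single multitask model handles one of those directions, here is a minimal sketch of text-to-text translation using the Hugging Face Transformers integration of SeamlessM4T. The checkpoint name ("facebook/hf-seamless-m4t-medium") and the Transformers API shown are assumptions about one published packaging of the model, not Meta's own seamless_communication release, and the example sentence is illustrative.

```python
# Minimal sketch: text-to-text translation with SeamlessM4T via Hugging Face
# Transformers. Checkpoint name and API reflect the public "hf-seamless-m4t"
# integration and may differ from Meta's own seamless_communication package.
from transformers import AutoProcessor, SeamlessM4TModel

processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-medium")
model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-medium")

# Encode an English sentence and translate it to French without generating speech.
inputs = processor(text="Machine translation is improving quickly.",
                   src_lang="eng", return_tensors="pt")
output_tokens = model.generate(**inputs, tgt_lang="fra", generate_speech=False)
print(processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True))
```

Because the same model also exposes speech targets, swapping `generate_speech=False` for a speech-generation call is what makes the "all-in-one" framing meaningful: one checkpoint, multiple input and output modalities.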

AP Is Latest Org to Issue Guidelines for AI in News Reporting

After announcing a partnership with OpenAI last month, the Associated Press has issued guidelines urging caution in the use of generative AI in news reporting. The news agency has also added a new chapter to its widely used AP Stylebook on coverage of AI, a story that “goes far beyond business and technology” and is “also about politics, entertainment, education, sports, human rights, the economy, equality and inequality, international law, and many other issues,” according to AP, which says stories about AI should “show how these tools are affecting many areas of our lives.”

OpenAI: GPT-4 Can Help with Content Moderation Workload

OpenAI has shared instructions for training GPT-4 to handle content moderation at scale. Some customers are already using the process, which OpenAI says can reduce the time needed to refine content moderation policies from weeks or months to mere hours. The company proposes that its customization technique can also save money by having GPT-4 do the work of tens of thousands of human moderators. Properly trained, GPT-4 could perform moderation tasks more consistently because it would be free of human bias, OpenAI says. While AI can absorb biases from its training data, technologists view AI bias as more correctable than human predisposition.
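The core of the approach OpenAI describes is prompting GPT-4 with a written policy and a piece of content, asking it for a label, and then tightening the policy wording wherever the model and human experts disagree. The sketch below illustrates that loop's inner step using the OpenAI Python client; the policy excerpt, label names, and test string are hypothetical, not OpenAI's published materials.

```python
# Illustrative sketch (not OpenAI's published code): ask GPT-4 to apply a
# written moderation policy to a piece of content and return a single label.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = """
K1: instructions for making weapons -> violates
K0: everything else -> allowed
"""  # hypothetical policy excerpt for illustration

def moderate(content: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("You are a content moderator. Apply this policy:\n"
                         f"{POLICY}\nRespond with exactly one label: violates or allowed.")},
            {"role": "user", "content": content},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(moderate("How do I bake sourdough bread?"))  # expected: allowed
```

Iterating on the wording of POLICY, rather than retraining a model, is what compresses the weeks-to-hours timeline the company claims.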

Meta’s AudioCraft Turns Words into Music with Generative AI

Meta Platforms is releasing AudioCraft, a generative AI framework that creates “high-quality,” “realistic” audio and music from text prompts. AudioCraft consists of three models: MusicGen, AudioGen and EnCodec, all of which Meta announced it is open-sourcing. Released in June, MusicGen was trained on Meta-owned and licensed music, and generates music from text prompts, while AudioGen, which was trained on public domain samples, generates sound effects (like honking horns and barking dogs). The EnCodec decoder allows “higher quality music generation with fewer artifacts,” according to Meta.
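Since the models are open-sourced, generating a clip is a few lines of Python. This sketch is based on the public audiocraft package's documented usage; the specific checkpoint size, clip duration, and prompts are illustrative choices, not recommendations from Meta.

```python
# Minimal sketch based on the open-source audiocraft package: generate short
# music clips from text prompts with MusicGen and write them to WAV files.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)  # seconds of audio per prompt

prompts = ["upbeat acoustic folk", "slow ambient synth pad"]
wav = model.generate(prompts)  # one waveform tensor per prompt

for idx, one_wav in enumerate(wav):
    # audio_write handles normalization and file encoding.
    audio_write(f"sample_{idx}", one_wav.cpu(), model.sample_rate, strategy="loudness")
```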

Top Tech Firms Support Government’s Planned AI Safeguards

President Biden has secured voluntary commitments from seven leading AI companies that say they will support the executive branch’s goal of advancing safe, secure and transparent development of artificial intelligence. Executives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI convened at the White House on Friday to support the accord, which some criticized as a half measure, noting the companies have already embraced independent security testing and a commitment to collaborating with each other and the government. Biden stressed the need to deploy AI altruistically, “to help address society’s greatest challenges.”

Mixed Reactions to ‘Pause’ on AI Models Larger than GPT-4

Respected members of the advanced tech community are going on record opposing the faction calling for a “pause” in large-model artificial intelligence development. Meta Platforms chief AI scientist Yann LeCun and DeepLearning.AI founder and CEO Andrew Ng, formerly at Alphabet where he helped launch Google Brain, were joined this past week by Bill Gates and former Google CEO Eric Schmidt in opposing the proposed six-month halt to development of AI models more advanced than OpenAI’s GPT-4, which is rumored to have on the order of a trillion parameters, several times the 175 billion of GPT-3.

Report: Enterprise Supplants Academia as Driving Force of AI

After many years of academia leading the way in the development of artificial intelligence, the tides have shifted and industry has taken over, according to the 2023 AI Index, a report created by Stanford University with help from companies including Google, Anthropic and Hugging Face. “In 2022, there were 32 significant industry-produced machine learning models compared to just three produced by academia,” the report says. The shift in influence is attributed mainly to the large resource demands — in staff, computing power and training data — required to create state-of-the-art AI systems.

Intel Promises 96 Percent Accuracy with New Deepfake Filter

Intel has debuted FakeCatcher, touting it as the first real-time deepfake detector, capable of determining whether digital video has been altered to change its context or meaning. Intel says FakeCatcher has a 96 percent accuracy rate and returns results in milliseconds by analyzing subtle “blood flow” signals in video pixels, a technique called photoplethysmography (PPG) that Intel borrowed from medical research. The company says potential use cases include social media platforms screening uploads to block harmful deepfake videos and helping global news organizations avoid inadvertently amplifying deepfakes.
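Intel has not published FakeCatcher's code, but the PPG idea it borrows is straightforward: real faces on video show a faint, periodic color change driven by the heartbeat, which synthesized faces generally lack. The following is a hypothetical, simplified illustration of that signal extraction, not Intel's implementation; the face bounding box, frame rate, and frequency band are all assumed values.

```python
# Hypothetical illustration of the PPG idea behind video "blood flow" analysis
# (not Intel's code). Assumes a face bounding box is already known.
import cv2
import numpy as np

def green_channel_signal(video_path: str, box: tuple[int, int, int, int]) -> np.ndarray:
    """Mean green-channel value of the face region, one sample per frame."""
    x, y, w, h = box
    cap = cv2.VideoCapture(video_path)
    means = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = frame[y:y + h, x:x + w]
        means.append(roi[:, :, 1].mean())  # OpenCV frames are BGR; index 1 = green
    cap.release()
    return np.asarray(means)

def heart_band_power(signal: np.ndarray, fps: float = 30.0) -> float:
    """Fraction of spectral power in the 0.7-3 Hz band (~42-180 bpm)."""
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return spectrum[band].sum() / max(spectrum.sum(), 1e-9)
```

A genuine face clip tends to concentrate power in that heart-rate band; a deepfake often does not, which is the kind of cue a production detector can score in milliseconds.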

Legal Questions Loom as OpenAI Widens Access to DALL-E

OpenAI is expanding its beta outreach for DALL-E 2 by inviting an additional one million waitlisted people to join the AI imaging platform over the coming weeks. DALL-E users will receive 50 credits during their first month of use and 15 credits every subsequent month, with each credit redeemable for an original DALL-E image generation (returning four images) or an edit or variation prompt (returning three images). Additional credits can be purchased in 115-credit increments for $15. Starting this month, users get rights to commercialize their DALL-E images. However, the move highlights the legal implications of AI and possible copyright infringement.
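For a quick sense of what that pricing works out to, here is a back-of-the-envelope calculation from the figures above, assuming one credit per prompt and four images per generation prompt as stated.

```python
# Back-of-the-envelope cost per image from the figures above: $15 buys 115
# credits, and one generation credit returns four images.
price_per_credit = 15 / 115              # roughly $0.13 per prompt
price_per_image = price_per_credit / 4   # roughly $0.03 per generated image
print(f"${price_per_credit:.3f} per credit, ${price_per_image:.3f} per image")
```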

AI Laws Becoming Decentralized with Cities First to Regulate

With the federal government still in the early stages of regulating artificial intelligence, cities and states are stepping in as they begin to actively deploy AI. While using AI to manage traffic patterns is relatively straightforward, applying it to policing and hiring practices requires precautions against algorithmic bias inherited from training data. The challenges are formidable: as with human reasoning, it is often difficult to trace the logic behind a machine’s decisions, making it hard to identify a fix. Municipalities are evaluating different solutions, the goal being to prevent programmatic marginalization.

Microsoft and Nvidia Debut World’s Largest Language Model

Microsoft and Nvidia have trained what they describe as the most powerful AI-driven language model to date, the Megatron-Turing Natural Language Generation model (MT-NLG), which has “set the new standard for large-scale language models in both model scale and quality,” the firms say. As the successor to the companies’ Turing NLG 17B and Megatron-LM, the new MT-NLG has 530 billion parameters, or “3x the number of parameters compared to the existing largest model of this type” and demonstrates unmatched accuracy in a broad set of natural language tasks.

Facebook Counters AI Bias with a Data Set Featuring Actors

Facebook has released an open-source AI data set of 45,186 videos featuring 3,011 U.S. actors who were paid to participate. The data set is dubbed Casual Conversations because the diverse group was recorded giving unscripted answers to questions about age and gender; skin tone and lighting conditions were also annotated by humans. Bias has been a problem in AI-enabled technologies such as facial recognition, and Facebook is encouraging teams to use the new data set, which, unlike most AI data sets, was built from people who knew they were being recorded.
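The point of the annotations is to let teams measure how a model performs across the labeled subgroups rather than only in aggregate. Below is a generic, hypothetical sketch of that kind of audit; the CSV file and column names are assumptions for illustration, not the data set's actual schema.

```python
# Generic sketch of a subgroup audit using attribute annotations (the file
# and column names here are hypothetical, not the data set's real schema).
import pandas as pd

df = pd.read_csv("predictions_with_annotations.csv")
# Expected columns: "skin_tone", "age_group", "gender", "correct" (0 or 1)

for attribute in ["skin_tone", "age_group", "gender"]:
    per_group = df.groupby(attribute)["correct"].mean().sort_values()
    print(f"\nAccuracy by {attribute}:")
    print(per_group.to_string())
    print(f"Gap between best and worst group: {per_group.max() - per_group.min():.3f}")
```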