CES: Panelists Weigh Need for Safe AI That Serves the Public

A CES session on government AI policy featured an address by Assistant Secretary of Commerce for Communications and Information Alan Davidson (who is also administrator of the National Telecommunications and Information Administration), followed by a discussion of government activities, and finally industry perspectives from executives at Google, Microsoft and Xperi. Davidson studied at MIT under nuclear scientist Professor Philip Morrison, who spent the first half of his career developing the atomic bomb and the second half trying to stop its use. That lesson was not lost on Davidson. At NTIA, he said, they are working to ensure “that new technologies are developed and deployed in the service of people and in the service of human progress.” Continue reading CES: Panelists Weigh Need for Safe AI That Serves the Public

Apple Says U.S. Data Breaches Up by More Than 20 Percent

Apple is emphasizing the importance of data encryption with a report that shows personal data breaches up 300 percent between 2013 and 2022. In the past two years, more than 2.6 billion personal records have been exposed, according to the newly released study “The Continued Threat to Personal Data: Key Factors Behind the 2023 Increase.” The report, created by Dr. Stuart Madnick, the founding director of Cybersecurity at MIT Sloan, cites increasing dependence on cloud computing as the main factor behind the surge. U.S. data intrusions through Q3 of this year are 20 percent higher than in all 12 months of 2022. Continue reading Apple Says U.S. Data Breaches Up by More Than 20 Percent

OpenAI Creates a Team to Examine Catastrophic Risks of AI

OpenAI recently announced it is developing formal AI risk guidelines and assembling a team dedicated to monitoring and assessing threats from imminent “superintelligence” AI, also called frontier models. Topics under review include the required parameters for a robust monitoring and prediction framework and how malicious actors might seek to leverage stolen AI model weights. The announcement was made shortly before the Biden administration issued an executive order requiring the major players in artificial intelligence to submit reports to the federal government assessing potential risks associated with their models. Continue reading OpenAI Creates a Team to Examine Catastrophic Risks of AI

TikTok Creates New Tools for Labeling Content Created by AI

As creators embrace artificial intelligence to juice creativity, TikTok is launching a tool that helps them label their AI-generated content while also beginning to test “ways to label AI-generated content automatically.” “AI enables incredible creative opportunities, but can potentially confuse or mislead viewers,” TikTok said in announcing labels that can apply to “any content that has been completely generated or significantly edited by AI,” including video, photographs, music and more. The platform also touted a policy that “requires people to label AI-generated content that contains realistic images, audio or video, in order to help viewers contextualize.” Continue reading TikTok Creates New Tools for Labeling Content Created by AI

Google Digital Futures Project Pumps $20M into Responsible AI

Google is establishing a $20 million fund to promote responsible AI through its charitable arm, Google.org. The investment will provide grants to academics and think tanks as part of the company’s new Digital Futures Project, announced on the eve of today’s private meeting between Congress and AI-focused tech giants. “AI has the potential to make our lives easier and address some of society’s most complex challenges — like preventing disease, making cities work better and predicting natural disasters. But it also raises questions about fairness, bias, misinformation, security and the future of work,” Google said. Continue reading Google Digital Futures Project Pumps $20M into Responsible AI

MAGE AI Unifies Generative and Recognition Image Training

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have introduced a computer vision system that combines image recognition and image generation technology into one training model instead of two. The result, MAGE (short for MAsked Generative Encoder), holds promise for a wide variety of use cases and is expected to reduce costs through unified training, according to the team. “To the best of our knowledge, this is the first model that achieves close to state-of-the-art results for both tasks using the same data and training paradigm,” the researchers said. Continue reading MAGE AI Unifies Generative and Recognition Image Training

Altman Calls on China to Participate in Global AI Rulemaking

Sam Altman continues to call for coordinated international regulation of artificial intelligence. The OpenAI co-founder and CEO visited Seoul this past weekend to meet with South Korean President Yoon Suk Yeol, who issued a statement saying it is important to act “with a sense of speed” in establishing international standards or face unwanted “side effects.” Altman also virtually delivered a keynote address to Chinese AI researchers at an annual conference hosted by the Beijing Academy of Artificial Intelligence, calling on China to participate in global rulemaking. Continue reading Altman Calls on China to Participate in Global AI Rulemaking

Enterprise Anticipates AI Impact but Few Execs Are Prepared

Generative AI has become a buzzword in the business community, with 65 percent of executives in a recent KPMG survey saying they believe the technology will have a high or extremely high impact on their organization in the next three to five years. Yet most say they are unprepared for immediate adoption, with 60 percent estimating they are 12 to 24 months from implementing their first generative AI solution. Fewer than half of respondents say they have the right technology, talent, and governance in place to successfully implement generative AI. Continue reading Enterprise Anticipates AI Impact but Few Execs Are Prepared

Generative AI May Improve Knowledge Workers’ Productivity

ChatGPT “occupational exposure” is a new area of study for jobs vulnerable to replacement by AI chatbots with strong language skills. A Princeton University survey suggests telemarketers, history teachers and sociologists are among those at risk, while physical laborers needn’t worry right now. A second study, by MIT graduate students, says language-dependent jobs are not destined for replacement, but are in for an AI assist. Asked to complete office tasks like writing press releases, emails and short reports, those using ChatGPT were 37 percent faster and produced superior results. Continue reading Generative AI May Improve Knowledge Workers’ Productivity

Twitter Users Vote in Favor of Musk Stepping Down as CEO

Facing backlash against his executive leadership, Twitter’s new owner and CEO, billionaire Elon Musk, conducted an informal 12-hour poll over the weekend asking users of the popular social media platform whether he should keep his new position. “Should I step down as head of Twitter?” the controversial executive asked. “I will abide by the results of this poll.” After more than 17.5 million responses, the results indicate that a majority of users believe Musk should step down from his post (57.5 percent voted in the affirmative). As of press time, it remains unclear what action Musk may take in light of the poll results. Continue reading Twitter Users Vote in Favor of Musk Stepping Down as CEO

LinkedIn Test Raises Ethics Questions Over Parsing Big Data

LinkedIn’s experiments on users have drawn scrutiny from a new study that says the platform may have crossed a line into “social engineering.” The tests, over five years from 2015 to 2019, involved changing the “People You May Know” algorithm to alternate between weak and strong contacts when recommending new connections. Affecting an estimated 20 million users, the test was designed to collect insight to improve the Microsoft-owned platform’s performance, but may have impacted people’s career opportunities. The study was co-authored by researchers at LinkedIn, Harvard Business School, MIT and Stanford and appeared this month in Science. Continue reading LinkedIn Test Raises Ethics Questions Over Parsing Big Data

Startup QuEra Is Making Major Strides in Quantum Computing

Quantum startup QuEra Computing has emerged from stealth mode with a splashy announcement of $17 million in funding and completion of a 256-qubit device the company says “will be soon accessible to customers.” Launched in 2019 by scientists from Harvard and MIT, the Boston-based firm claims to have already generated $11 million in revenue from its scalable machines in a white-hot quantum space where tech giants including Amazon, IBM and Google are jockeying for position. QuEra’s approach leverages what the company calls “nature’s perfect qubits”: neutral atoms, 256 of which serve as the qubits in its device. Continue reading Startup QuEra Is Making Major Strides in Quantum Computing

Microsoft and Nvidia Debut World’s Largest Language Model

Microsoft and Nvidia have trained what they describe as the most powerful AI-driven language model to date, the Megatron-Turing Natural Language Generation model (MT-NLG), which has “set the new standard for large-scale language models in both model scale and quality,” the firms say. As the successor to the companies’ Turing NLG 17B and Megatron-LM, the new MT-NLG has 530 billion parameters, or “3x the number of parameters compared to the existing largest model of this type” and demonstrates unmatched accuracy in a broad set of natural language tasks. Continue reading Microsoft and Nvidia Debut World’s Largest Language Model

Clearview Facial Recognition Adds Deblur and Mask Removal

Undeterred by lawsuits and demands to stop scraping social media, facial recognition firm Clearview AI is plowing ahead with efforts to expand its database and introduce new tools. Company co-founder and CEO Hoan Ton-That said Clearview has collected more than 10 billion images from social media and the Internet, while the company is adding new tools to help users, often law enforcement, obtain matches. Most recently, the company developed a deblur tool in addition to mask removal, which uses machine learning to recreate the covered part of a person’s face. However, use of such tools raises concerns that individuals could be wrongly identified or that the tools could produce biased results. Continue reading Clearview Facial Recognition Adds Deblur and Mask Removal

SEC Probe of SolarWinds Attack Concerns Corporate Execs

A Securities and Exchange Commission investigation into the 2020 Russian cyberattack on SolarWinds has corporate executives concerned that information unearthed in the probe will expose them to liability. Companies suspected of or known to have downloaded compromised software updates from SolarWinds have received letters requesting records of all breaches since October 2019. That request has raised fears that sensitive cyber incidents previously unreported and unrelated to SolarWinds may be revealed, providing the SEC with details that many companies may never have wanted to disclose. Continue reading SEC Probe of SolarWinds Attack Concerns Corporate Execs