By Paula Parisi, September 3, 2024
In an effort to create a safer environment for teens, social platform Snapchat is providing educators with resources to familiarize them with the app and help them understand how students use it. The company has launched a website called “An Educator’s Guide to Snapchat.” The announcement, timed to the start of the new school year, comes as lawmakers have been pressuring social networks to do more to protect children, with Florida and Indiana going so far as to enact school cell phone bans. Legislators in California and New York have been exploring similar prohibitions. Continue reading Snapchat Puts Focus on Teen Safety Resources for Teachers
By Paula Parisi, August 16, 2024
TikTok is entering the messaging services space with a new group chat feature that lets up to 32 participants converse and share content. TikTok users have taken to sharing the platform’s short-form videos on third-party apps such as Meta’s WhatsApp and Apple’s Messages, and this move aims to bring that sharing in-app, where people can also now view and comment together. The result takes TikTok into the realm of connecting with friends and community-building, as opposed to just passively viewing content. Group chats are available only to those over 15 years of age, as is the policy with DMs. Continue reading TikTok Introduces Group Messaging to Share Content In-App
By Paula Parisi, August 7, 2024
Google has unveiled three additions to its Gemma 2 family of compact yet powerful open-source AI models, emphasizing safety and transparency. The company’s Gemma 2 2B is a 2.6 billion parameter update to the lightweight 2B parameter Gemma 2, with built-in improvements in safety and performance. Built on Gemma 2, ShieldGemma is a suite of safety content classifier models that “filter the input and outputs of AI models and keep the user safe.” Interpretability tool Gemma Scope offers what Google calls “unparalleled insight into our models’ inner workings.” Continue reading Latest Gemma 2 Models Emphasize Security and Performance
By Paula Parisi, August 6, 2024
The U.S. Department of Justice has filed suit against TikTok and its parent company, ByteDance, charging that they violated the Children’s Online Privacy Protection Act (COPPA) by allowing children to create TikTok accounts without parental consent and by collecting their data. The suit also alleges TikTok retained the personal data of minors who joined prior to COPPA going into effect in 2000, even after parents demanded it be deleted, a right under COPPA. This latest move in the ongoing legal battle with ByteDance follows the Chinese company’s own lawsuit against the U.S. government. Continue reading U.S. Raises Stakes in TikTok Legal Battle, Suing Under COPPA
By Paula Parisi, August 5, 2024
OpenAI has released its new Advanced Voice Mode in a limited alpha rollout for select ChatGPT Plus users. The feature, which is being implemented for the ChatGPT mobile app on Android and iOS, aims for more natural dialogue with the AI chatbot. Powered by GPT-4o, which is multimodal, Advanced Voice Mode is said to be able to sense emotional inflections such as excitement and sadness, and even singing. According to an OpenAI post on X, the company plans to “continue to add more people on a rolling basis” so that everyone using ChatGPT Plus will have access to the new feature in the fall. Continue reading OpenAI Brings Advanced Voice Mode Feature to ChatGPT Plus
By Rob Scott, August 1, 2024
Two landmark bills designed to bolster online safety for children — the Kids Online Safety Act (KOSA) and the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) — were overwhelmingly approved by the U.S. Senate on Tuesday in bipartisan 91-3 votes. If approved by the House, the legislation would introduce new rules regarding what tech companies can offer to minors and how those firms use and share children’s data. The three senators who voted against the bills cited concerns that the regulations could stifle free speech, open the door to government censorship, and fail to adequately address the greatest threats to children online. Continue reading Senate Passes Two Bills to Strengthen Children’s Online Safety
By Paula Parisi, July 11, 2024
Federal regulators have taken the unprecedented step of banning the NGL messaging platform from providing service to users under 18. The action is part of a legal settlement between NGL Labs, the Federal Trade Commission and the Los Angeles District Attorney’s Office. NGL, which specializes in “anonymous” communication under the tagline “Ask me anything,” has also agreed to pay $5 million in fines. An FTC investigation found that in addition to making fraudulent business claims about divulging the identities of message senders for a fee, NGL falsely claimed it used artificial intelligence to filter out cyberbullying and harmful messages. Continue reading Popular Messaging App Banned from Servicing Young Users
By Paula Parisi, June 19, 2024
United States Surgeon General Dr. Vivek Murthy has renewed his push for Congress to enact a social media warning label advising of potential mental health damage to adolescents. Murthy also called on tech companies to be more transparent with internal data on the impact of their products on American youth, requesting independent safety audits and restrictions on features that may be addictive, including autoplay, push notifications and infinite scroll, which he suggests “prey on developing brains and contribute to excessive use.” His federal campaign joins a groundswell of local laws restricting minors’ access to social media. Continue reading U.S. Surgeon General Calls for Social Media Warning Labels
By Paula Parisi, June 11, 2024
California tech companies are bristling at a state bill that would force them to enact strict safety protocols, including installing “kill switches” to turn off AI models that present a public risk. Silicon Valley has emerged as a global AI leader, and the proposed law would impact not only OpenAI, but Anthropic, Cohere, Google and Meta Platforms. The bill, SB 1047, focuses on what its lead sponsor, State Senator Scott Wiener, calls “common sense safety standards” for frontier models. Should the bill become law, it could affect even firms not based in the state, such as Amazon, that provide AI cloud services to California customers. Continue reading Tech Firms Push Back Against California AI Safety Regulation
By Paula Parisi, June 11, 2024
The New York legislature passed a bill prohibiting social media companies from providing children with so-called “addictive feeds” without parental consent. The Stop Addictive Feeds Exploitation (SAFE) for Kids Act defines addictive feeds as those that prioritize exposure to content (using a recommendation engine, or other means) based on information collected about the user or device. “Non-addictive feeds,” which serve content in chronological order, are still permitted under the bill, which New York Governor Kathy Hochul has vowed to sign into law. Continue reading New York Lawmakers Aim to Make Social Feeds Safe for Kids
By Paula Parisi, May 30, 2024
OpenAI has begun training a new flagship artificial intelligence model to succeed GPT-4, the technology currently associated with ChatGPT. The new model — which some are already calling GPT-5, although OpenAI hasn’t yet shared its name — is expected to take the company’s compute to the next level as it works toward artificial general intelligence (AGI), intelligence equal to or surpassing human cognitive abilities. The company also announced it has formed a new Safety and Security Committee two weeks after dissolving the old one upon the departure of OpenAI co-founder and Chief Scientist Ilya Sutskever. Continue reading OpenAI Is Working on New Frontier Model to Succeed GPT-4
By Paula Parisi, May 24, 2024
Leading AI firms spanning Europe, Asia, North America and the Middle East have signed a new voluntary commitment to AI safety. The 16 signatory companies — including Amazon, Google DeepMind, Meta Platforms, Microsoft, OpenAI, xAI and China’s Zhipu AI — will publish outlines indicating how they will measure the risks posed by their frontier models. “In the extreme, leading AI tech companies including from China and the UAE have committed to not develop or deploy AI models if the risks cannot be sufficiently mitigated,” according to UK Technology Secretary Michelle Donelan. Continue reading Global Technology Companies Sign Pledge to Foster AI Safety
The UK AI Safety Institute announced the availability of its new Inspect platform designed for the evaluation and testing of artificial intelligence tech in order to help develop safe AI models. The Inspect toolset enables testers — including worldwide researchers, government agencies, and startups — to analyze the specific capabilities of such models and establish scores based on various criteria. According to the Institute, the “release comes at a crucial time in AI development, as more powerful models are expected to hit the market over the course of 2024, making the push for safe and responsible AI development more pressing than ever.” Continue reading UK Launches New Open-Source Platform for AI Safety Testing
By ETCentric Staff, April 18, 2024
Meta will release a new Quest educational product later this year. As with 2023’s workplace-specific Meta Quest for Business, the as-yet-unnamed learning tool will allow teachers, trainers and administrators to access education-specific apps and features, and make it possible for them to manage multiple Quest devices at once. Meta says the classroom convenience of not having to individually update and prepare each headset for the same lesson was one of its key findings in researching what teachers want from virtual reality. The company positions education and training as a growing tech product sector with lots of app activity. Continue reading Meta Education Initiative Aims to Put Quest VR in Classrooms
By ETCentric Staff, April 3, 2024
The United States has entered into an agreement with the United Kingdom to collaboratively develop safety tests for the most advanced AI models. The memorandum of understanding aims to evaluate the societal and national defense risks posed by advanced models. Coming after commitments made at the AI Safety Summit in November, the deal is being described as the world’s first bilateral agreement on AI safety. The agreement, signed by U.S. Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, envisions the countries “working to align their scientific approaches” and accelerating evaluations for AI models, systems and agents. Continue reading U.S. and UK Form Partnership to Accelerate AI Safety Testing