Florida Pushes Forward a Social Media Ban for Kids Under 16

Florida’s legislature has passed a bill banning children younger than 16 from having social media accounts, despite pushback from Governor Ron DeSantis, who said he is wrestling with whether to sign the measure into law. Due to a procedural requirement, DeSantis must sign or veto the legislation before lawmakers conclude the current session in a matter of weeks. He has expressed dissatisfaction with the lack of a provision letting parents override the restriction, which would curtail access to the most popular sites, potentially impacting TikTok, Instagram, Facebook, Snapchat and YouTube.

New York City Classifies Social Media a ‘Public Health Threat’

New York has become the first city in the nation to declare social media use by young children a public health crisis. In his State of the City address, Mayor Eric Adams name-checked TikTok, YouTube and Facebook, calling them (and “companies like” them) “addictive and dangerous.” Adams cited last week’s advisory from the city’s Department of Health as “officially designating social media as a public health crisis hazard in New York City.” The advisory urges adults to establish “tech free times” for kids and to delay smartphone access until age 14.

California Privacy Protection Agency Issues Draft Rules for AI

The California Privacy Protection Agency (CPPA) is preparing new regulations to protect consumers from how businesses may use AI. The state regulator, whose rulings have an outsized influence on Big Tech given the many large firms headquartered there, has issued draft rules for how consumer data can be used in what it calls “automated decisionmaking technology,” or ADMT. The proposed regulations give consumers the right to opt out of ADMT and entitle the public to on-demand information as to how AI is interacting with their data and how businesses plan to use it.

Samsung TV Plus Hits Refresh on a 60 Percent Viewer Surge

Samsung TV Plus reports it has seen enthusiastic consumer use over the past year, with a 60 percent rise in global viewership. Accordingly, the TV maker is upgrading its free streaming service — available on Galaxy devices, Samsung Smart TVs, Smart Monitors and Family Hub appliances, and on the Web — with an emphasis on discoverability for kids and music programming. Launched in 2015, the free ad-supported TV (FAST) and ad-based video on-demand (AVOD) service offers content spanning news, sports, entertainment, music and more in 24 countries, where it is accessed on 535 million TV and mobile devices.

Second Meta Whistleblower Testifies to Potential Child Harm

A second Meta Platforms whistleblower has come forward, testifying this week before a Senate subcommittee that the company’s social networks were potentially harming teens and that his warnings to that effect were ignored by top leadership. Arturo Bejar, a Facebook engineering director from 2009 to 2015 and an Instagram consultant from 2019 to 2021, told the Senate Judiciary Subcommittee on Privacy, Technology, and the Law that Meta officials failed to take steps to protect underage users on the platforms. Bejar follows former Facebook whistleblower Frances Haugen, who provided explosive Senate testimony in 2021.

Dozens of States Sue Meta for Social Media Addiction in Kids

Meta Platforms has been sued in federal court by 33 states, including California and New York, that claim its Instagram and Facebook platforms addict and harm children. The suit is to date the most sweeping state effort to contend with the impact of social media on children’s mental health. Filed Tuesday in U.S. District Court for the Northern District of California, it alleges Meta violates consumer protection laws by targeting children and deceiving users about platform safety. The same day, the District of Columbia and eight states filed separate complaints addressing the same issues.

Ireland Fines TikTok $368 Million for Mishandling of User Data

Ireland’s Data Protection Commission (DPC) announced a fine of about $368 million against TikTok today over how the popular social platform processes the data of younger users. The DPC announced in 2021 that it was investigating TikTok’s compliance with the European Union’s General Data Protection Regulation (GDPR) privacy and security laws. The investigation identified specific problems with TikTok’s default account settings, its Family Pairing settings, and its age verification process (although the age verification model did not violate GDPR, the probe found that TikTok did not sufficiently protect the privacy of children under 13 who were able to create an account).

Illinois Law Protecting Child Vloggers Will Take Effect in 2024

Illinois has become the first state in the nation to pass legislation protecting children who are social media influencers. Beginning in July 2024, children under 16 who appear in monetized video content online will have a legal right to compensation for their work, even if that means litigating against their parents. “The rise of social media has given children new opportunities to earn a profit,” Illinois Senator David Koehler said about the bill he sponsored. “Many parents have taken this opportunity to pocket the money, while making their children continue to work in these digital environments.”

Government Advances Online Safety Legislation for Children

The Senate has cleared two children’s online safety bills despite pushback from civil liberties groups, which say the digital surveillance used to monitor behavior will result in an Internet less safe for kids. The Kids Online Safety Act (KOSA) and the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) are intended to address a mental health crisis that experts blame in large part on social media, but critics say the bills could cause more harm than good by forcing social media firms to collect more user data as part of enforcement. The bills, which cleared the Senate Commerce Committee by unanimous vote, are also said to reduce access to encrypted services.

Louisiana Approves Parental Consent Bill for Online Accounts

A bill passed by the Louisiana State Legislature that bans minors from creating social media accounts without parental consent is the latest in a string of legal measures taking aim at the online world to combat a perceived mental health crisis among America’s youth. Utah also recently passed a law requiring the consent of a parent or guardian when anyone under 18 wants to create a social account, and California now mandates that some sites default minors’ accounts to the highest privacy settings. The Louisiana legislation stands out as extremely restrictive, encompassing multiplayer games and video-sharing apps.

Snapchat+ Introduces ‘My AI Snaps’ for Chatbot Snap Backs

Snapchat is rolling out a new feature for its premium Snapchat+ tier that lets users send Snaps to My AI, showing the artificial intelligence what they’re up to, and “receive a unique generative Snap back that keeps the conversation going” via My AI Snaps. The feature was previewed at the Snap Partner Summit in April as part of a larger push on AI updates, including the ability to invite the My AI chatbot into group chats with friends and to get AI Lens suggestions and place recommendations. In addition, the My AI chatbot — made free to all users this year — was updated to reply to users’ Snaps with a text-based response.

New Federal Bill Would Restrict Social Media Use for Minors

A bipartisan bill introduced in the Senate last week seeks to establish a federal age limit for social media that would prohibit children 12 and under from creating their own accounts, preventing them from independently logging on to social platforms. The Protecting Kids on Social Media Act takes issue with the engagement algorithms Big Tech uses to keep kids glued to its sites and would limit the type of coding that could be deployed to target young users between the ages of 13 and 17. Users under 13 could still access other online content without logging into an account.

Utah’s Social Media Law Requires Age Verification for Minors

Utah has become the first state to pass a law requiring social media platforms to obtain age verification before users can register. The law is designed to force social networks to enforce parental consent provisions. As of March 2024, companies including Facebook, Instagram, Snap, TikTok and Twitter will be required to secure proof of age for Utah users via a valid ID instead of just letting people type in their birth date at sign-up. While Utah is out front on the issue, nine other states have proposed legislation that includes age checks, most recently Arkansas.

UK Online Safety Bill to Exert Pressure on Social Media Execs

British legislators seem ready to make good on a threat to add criminal liability and jail time for high-level social media executives who fail to protect children from online harm as part of the Online Safety Bill. While the bill also aims to protect adults from fraud and malfeasance, its strictest provisions are geared toward child protection. The current proposal could win approval by the House of Commons within the week, and would then move to the upper chamber, the House of Lords, later in the quarter for further revision. Enactment is anticipated by year’s end.

Facebook and Instagram Roll Out New Safety Tools for Teens

Meta Platforms is introducing updates to further protect teens on Facebook and Instagram. Starting this week, users under 16 (or under 18 in certain countries) will be defaulted into more stringent privacy settings when they join Facebook; a similar default was put into effect on Instagram last year. Meta is also restricting “potentially suspicious adults.” For example, adults will be restricted from messaging teens they aren’t connected to and from seeing teens in their People You May Know recommendations. A “suspicious adult” is one who has recently been blocked or reported by a young person.