By Paula Parisi, September 22, 2023
OpenAI has released the DALL-E 3 generative AI imaging platform in research preview. The latest iteration features more safety options and integrates with OpenAI’s ChatGPT, currently driven by the now seasoned large language model GPT-4. That is the ChatGPT version available to Plus subscribers and enterprise customers, who will also be able to preview DALL-E 3. The free chatbot is built around GPT-3.5. OpenAI says GPT-4 gives DALL-E better contextual understanding, an area where even version 2 evidenced some glaring comprehension glitches. Continue reading OpenAI’s Latest Version of DALL-E Integrates with ChatGPT
By Paula Parisi, September 19, 2023
The UK’s Competition and Markets Authority has issued a report featuring seven proposed principles that aim to “ensure consumer protection and healthy competition are at the heart of responsible development and use of foundation models,” or FMs. Ranging from “accountability” and “diversity” to “transparency,” the principles aim to “spur innovation and growth” while implementing social safety measures amidst rapid adoption of apps including OpenAI’s ChatGPT, Microsoft 365 Copilot and Stability AI’s Stable Diffusion. The transformative properties of FMs can “have a significant impact on people, businesses, and the UK economy,” according to the CMA. Continue reading UK’s Competition Office Issues Principles for Responsible AI
By Paula Parisi, September 19, 2023
California lawmakers have put data brokers on notice. A bill known as the Delete Act would allow consumers to require all such information peddlers to delete their personal information with a single request. The bill defines “data brokers” as businesses that collect and sell people’s personal information, including residential address, marital status and purchases. Both houses last week passed the proposed legislation — Senate Bill 362 — and it now heads to Governor Newsom’s desk. If he signs it, the new law will go into effect in January 2026. Continue reading California Plans to Protect Consumer Privacy with Delete Act
By Paula Parisi, September 8, 2023
California Governor Gavin Newsom signed an executive order for state agencies to study artificial intelligence and its impact on society and the economy. “We’re only scratching the surface of understanding what GenAI is capable of,” Newsom suggested. Recognizing “both the potential benefits and risks these tools enable,” he said his administration is “neither frozen by the fears nor hypnotized by the upside.” The move was couched as a “measured approach” that will help California “focus on shaping the future of ethical, transparent, and trustworthy AI, while remaining the world’s AI leader.” Continue reading Governor Newsom Orders Study of GenAI Benefits and Risks
By Paula Parisi, August 23, 2023
A draft agreement said to have been presented by the U.S. government to ByteDance that would let TikTok avoid a federal ban seeks “near unfettered access” to company data and “unprecedented control” over platform functions. The nearly 100-page document, reported on this week, seeks control federal officials don’t have over other media outlets — social or otherwise — raising domestic concerns about government overreach. The draft dates to summer 2022. It is not known whether it has been updated or if the secretive negotiations between ByteDance and the Committee on Foreign Investment in the United States (CFIUS) have since continued. Continue reading Plans for TikTok Containment Would Give Feds Broad Power
By Paula Parisi, August 21, 2023
Illinois has become the first state in the nation to pass legislation protecting children who are social media influencers. Beginning in July 2024, children under 16 who appear in monetized video content online will have a legal right to compensation for their work, even if that means litigating against their parents. “The rise of social media has given children new opportunities to earn a profit,” Illinois Senator David Koehler said about the bill he sponsored. “Many parents have taken this opportunity to pocket the money, while making their children continue to work in these digital environments.” Continue reading Illinois Law Protecting Child Vloggers Will Take Effect in 2024
By Paula Parisi, August 17, 2023
OpenAI has shared instructions for training GPT-4 to handle content moderation at scale. Some customers are already using the process, which OpenAI says can reduce the time to fine-tune content moderation policies from weeks or months to mere hours. The company proposes its customization technique can also save money by having GPT-4 do the work of tens of thousands of human moderators. Properly trained, GPT-4 could perform moderation tasks more consistently in that it would be free of human bias, OpenAI says. While AI can incorporate biases from training data, technologists view AI bias as more correctable than human predisposition. Continue reading OpenAI: GPT-4 Can Help with Content Moderation Workload
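The workflow described above amounts to an iteration loop: label a small golden set of examples with the model, compare its labels against human judgments, and refine the written policy wherever they disagree. The sketch below is a minimal illustration of that loop, not OpenAI’s actual pipeline — the `fake_model` stub and the example labels are invented stand-ins for a real GPT-4 call.

```python
# Sketch of a policy-refinement loop for LLM-based content moderation:
# label a golden set with the model, compare against human labels, and
# surface the examples (and thus policy clauses) that produce disagreements.

def build_moderation_prompt(policy: str, content: str) -> list[dict]:
    """Assemble chat messages asking a model to label content per a policy."""
    return [
        {"role": "system",
         "content": f"You are a content moderator. Apply this policy:\n{policy}\n"
                    "Answer with exactly one label: ALLOW or BLOCK."},
        {"role": "user", "content": content},
    ]

def find_disagreements(model_label, examples):
    """Return (text, human_label, model_label) where the two labels differ.

    `model_label` is a callable standing in for the model; in a real system
    it would send build_moderation_prompt(...) to a chat completion API.
    """
    return [(text, human, model_label(text))
            for text, human in examples
            if model_label(text) != human]

# Toy stand-in for the model: blocks anything mentioning "scam".
def fake_model(text: str) -> str:
    return "BLOCK" if "scam" in text.lower() else "ALLOW"

golden_set = [
    ("Check out this great recipe", "ALLOW"),
    ("This scam will make you rich", "BLOCK"),
    ("Misleading health claim", "BLOCK"),   # the stub misses this one
]

disagreements = find_disagreements(fake_model, golden_set)
# Each disagreement points at a policy clause to clarify before re-running.
```

Each pass through the loop tightens the policy text rather than the model, which is what compresses the weeks-to-hours timeline the article mentions.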
By Paula Parisi, July 31, 2023
The Senate has cleared two children’s online safety bills despite pushback from civil liberties groups that say the digital surveillance used to monitor behavior will result in an Internet less safe for kids. The Kids Online Safety Act (KOSA) and the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) are intended to address a mental health crisis experts blame in large part on social media, but critics say the bills could cause more harm than good by forcing social media firms to collect more user data as part of enforcement. The bills — which cleared the Senate Commerce Committee by unanimous vote — are also said to reduce access to encrypted services. Continue reading Government Advances Online Safety Legislation for Children
By Paula Parisi, July 27, 2023
Advancing President Biden’s push for responsible development of artificial intelligence, top AI firms including Anthropic, Google, Microsoft and OpenAI have launched the Frontier Model Forum, an industry forum that will work collaboratively with outside researchers and policymakers to implement best practices. The new group will focus on AI safety, research into its risks, and disseminating information to the public, governments and civil society. Other companies involved in building bleeding-edge AI models will also be invited to join and participate in technical evaluations and benchmarks. Continue reading Major Tech Players Launch Frontier Model Forum for Safe AI
By Paula Parisi, July 24, 2023
President Biden has secured voluntary commitments from seven leading AI companies who say they will support the executive branch goal of advancing safe, secure and transparent development of artificial intelligence. Executives from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI convened at the White House on Friday to support the accord, which some criticized as a half measure, claiming the companies have already embraced independent security testing and a commitment to collaborating with each other and the government. Biden stressed the need to deploy AI altruistically, “to help address society’s greatest challenges.” Continue reading Top Tech Firms Support Government’s Planned AI Safeguards
By Paula Parisi, June 14, 2023
A bill passed by the Louisiana State Legislature that bans minors from creating social media accounts without parental consent is the latest in a string of legal measures that take aim at the online world to combat a perceived mental health crisis among America’s youth. Utah also recently passed a law requiring consent of a parent or guardian when anyone under 18 wants to create a social account. And California now mandates that some sites default to the highest privacy settings for minor accounts. The Louisiana legislation stands out as extremely restrictive, encompassing multiplayer games and video-sharing apps. Continue reading Louisiana Approves Parental Consent Bill for Online Accounts
By Paula Parisi, June 1, 2023
Twitter is emphasizing crowdsourced moderation. The launch of Community Notes for images in posts seeks to address instances where morphed or AI-generated images are posted. The idea is to expose altered content before it goes viral, as did the image of Pope Francis wearing a Balenciaga puffy coat in March and the fake image of an explosion at the Pentagon in May. Twitter says Community Notes about an image will appear with “recent and future” posts containing the graphic in question. Currently in the test phase, the feature works with tweets featuring a single image. Continue reading Twitter Community Notes Aim to Curb Impact of Fake Images
By Paula Parisi, May 31, 2023
TikTok is floating a trial balloon of its own AI chatbot, named Tako, now testing in select markets. Tako invites users to ask questions about TikTok videos and is also designed to help with discovery and recommendations. Tako’s public testing was first reported by Israeli app intelligence firm Watchful. TikTok subsequently confirmed testing in the Philippines and said Tako tests were live in some other global markets, but said the chatbot is not yet deployed in the United States. Unlike Microsoft’s Bing Chat, Google’s Bard and Snap’s My AI, Tako seems hyperfocused on TikTok content. Continue reading TikTok Embraces AI with Tako Chatbot, Now in Limited Tests
By Paula Parisi, May 11, 2023
AI startup Anthropic is sharing new details of the “safe AI” principles that helped train its Claude chatbot. Also known as “Constitutional AI,” the method draws inspiration from sources ranging from the Universal Declaration of Human Rights to Apple’s Terms of Service and Anthropic’s own research. “What ‘values’ might a language model have?,” Anthropic asks, noting “our recently published research on Constitutional AI provides one answer by giving language models explicit values determined by a constitution, rather than values determined implicitly via large-scale human feedback.” Continue reading Anthropic Shares Details of Constitutional AI Used on Claude
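In Anthropic’s published description, the supervised phase of Constitutional AI works by having the model critique its own draft against each written principle and then revise it. The sketch below illustrates only that loop shape: `critique_and_revise` is a deterministic string-tagging stub standing in for two language-model calls, and the principle texts are paraphrased examples, not Anthropic’s actual constitution.

```python
# Sketch of the Constitutional AI critique/revision loop: for each
# principle in the "constitution", critique the current draft and
# produce a revised version. In the real method both steps are
# language-model calls; here the revision is a simple tag so the
# control flow is easy to follow.

CONSTITUTION = [
    "Please choose the response most supportive of life and liberty.",
    "Please rewrite the response to remove anything harmful or unethical.",
]

def critique_and_revise(response: str, principle: str) -> str:
    """Stub revision step: tag the response with the principle applied."""
    return f"{response} [revised per: {principle[:20]}...]"

def constitutional_pass(draft: str, constitution: list[str]) -> str:
    """Run the draft through one critique/revision step per principle."""
    response = draft
    for principle in constitution:
        response = critique_and_revise(response, principle)
    return response

final = constitutional_pass("Initial model answer.", CONSTITUTION)
```

The revised outputs are what make the values “explicit”: every change traces back to a named principle rather than to aggregate human feedback.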
By Paula Parisi, May 1, 2023
A bipartisan bill introduced in the Senate last week seeks to establish a federal age limit for social media use, prohibiting children 12 and under from creating their own accounts and thereby preventing them from independently logging on to social platforms. The Protecting Kids on Social Media Act takes issue with the engagement algorithms Big Tech uses to keep kids glued to their sites and would limit the type of coding that could be deployed to target young users between the ages of 13 and 17. If not logged into an account, users under 13 could still access other online content. Continue reading New Federal Bill Would Restrict Social Media Use for Minors