IAB and MRC Join Forces to Develop AR Advertising Guidelines

The global augmented reality market is expected to reach $289 billion by 2030, according to a recent study by Research and Markets, and advertisers have taken notice. While software, followed by hardware, generates the majority of that revenue, the augmented reality advertising market is projected to generate $1.2 billion in revenue in the U.S. in 2024, according to the Interactive Advertising Bureau. To help foster growth in that nascent sector, the IAB has teamed with the Media Rating Council to create consistent definitions and measurement guidelines for ads within AR campaigns.

OpenAI Partners with Common Sense Media on AI Guidelines

As parents and educators grapple with figuring out how AI will fit into education, OpenAI is preemptively acting to help answer that question, teaming with learning and child safety group Common Sense Media on informational material and recommended guidelines. The two will also work together to curate “family-friendly GPTs” for the GPT Store that are “based on Common Sense ratings and standards,” the organization said. The partnership aims “to help realize the full potential of AI for teens and families and minimize the risks,” according to Common Sense.

Bluesky Adds Automated Moderation, Rethinks Web Visibility

Bluesky, the decentralized social media app spun out of Twitter and backed by co-founder Jack Dorsey, is poised to become a competitor to that platform’s successor, X, having passed the two million user milestone just 10 months after launch. Although still in private beta and accessible only through an invite code, Bluesky has been making headlines recently: first for what was criticized as lax content moderation, and then for announcing a public web interface that would let anyone (and everyone) view posts by the private network’s members, a policy decision that has reportedly been reversed.

U.S., Britain and 16 Nations Aim to Make AI Secure by Design

The United States, Britain and 16 other countries have signed a 20-page agreement to work together to keep artificial intelligence safe from bad actors, committing to collaborative efforts for creating AI systems that are “secure by design.” The 18 countries said they will aim to ensure that companies designing and using AI develop and deploy it in a way that protects their customers and the public from abuse. The U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom’s National Cyber Security Centre (NCSC) jointly released the Guidelines for Secure AI System Development.

Germany, France and Italy Strike AI Deal, Pushing EU Forward

Germany, France and Italy have reached an agreement on a strategy to regulate artificial intelligence. The agreement comes on the heels of infighting among key European Union member states that has held up legislation, and it could accelerate the broader EU negotiations. The three governments support binding voluntary commitments for large and small AI providers and endorse “mandatory self-regulation through codes of conduct” for foundation models while opposing “un-tested norms.” The paper underscores that “the AI Act regulates the application of AI and not the technology as such,” locating the “inherent risks” in how AI is applied rather than in the technology itself.

OpenAI Creates a Team to Examine Catastrophic Risks of AI

OpenAI recently announced it is developing formal AI risk guidelines and assembling a team dedicated to monitoring and studying threats posed by highly capable “frontier” AI models, up to and including “superintelligence.” Topics under review include the parameters required for a robust monitoring and prediction framework and the ways malicious actors might seek to leverage stolen AI model weights. The announcement came shortly before the Biden administration issued an executive order requiring the major players in artificial intelligence to submit reports to the federal government assessing potential risks associated with their models.

AI Startup Capsule Creates Video Editor for Enterprise Teams

AI tech startup Capsule is debuting a video editor it says can help enterprise teams achieve results “10x faster.” “Today, if you work at a large company — in marketing or comms, or maybe even sales or HR — creating even the simplest video can be daunting,” Capsule suggests. After querying more than 300 such enterprise teams about their pain points, Capsule focused on three areas of improvement: simplifying motion graphics, adhering to strict brand guidelines, and making the editing process more collaborative among teams across desktop and mobile, where apps are typically “siloed.”

AP Is Latest Org to Issue Guidelines for AI in News Reporting

After announcing a partnership with OpenAI last month, the Associated Press has issued guidelines urging caution in the use of generative AI in news reporting. The news agency has also added a new chapter to its widely used AP Stylebook on coverage of AI, a story that “goes far beyond business and technology” and is “also about politics, entertainment, education, sports, human rights, the economy, equality and inequality, international law, and many other issues,” according to AP, which says stories about AI should “show how these tools are affecting many areas of our lives.”

OpenAI: GPT-4 Can Help with Content Moderation Workload

OpenAI has shared instructions for training GPT-4 to handle content moderation at scale. Some customers are already using the process, which OpenAI says can reduce the time needed to fine-tune content moderation policies from weeks or months to mere hours. The company proposes that its customization technique can also save money by having GPT-4 do the work of tens of thousands of human moderators. Properly trained, GPT-4 could perform moderation tasks more consistently because it would be free of human bias, OpenAI says. While AI can absorb biases from its training data, technologists view AI bias as more correctable than human predisposition.

TikTok Launches Effect House for User-Generated AR Filters

TikTok has officially gone live with Effect House, the augmented reality tool that allows users to create AR filters and share them with the community. The ByteDance company has been testing the feature since last summer. Since then, at least 450 creators have used Effect House to build effects featured in more than 1.5 billion videos that generated over 600 billion global views, according to TikTok. Whether users are “teleporting into new worlds with Green Screen or freeze-framing with Time Warp Scan,” TikTok says Effect House empowers expression “through a wide array of engaging and immersive formats.”

Twitter Is Developing a New, Transparent Verification System

Tech blogger and app researcher Jane Manchun Wong discovered that Twitter is developing a new verification service. The original service, opened to public applications in 2016, placed a blue-and-white checkmark next to a verified personal account, brand or company. The service was halted in 2017 after it verified the account of Jason Kessler, an organizer of the Unite the Right rally in Charlottesville, Virginia. According to Twitter co-founder and chief executive Jack Dorsey, the company planned to expand the service in 2018 but didn’t have the bandwidth to do so.

Twitter Guidelines Narrow Scope of Dehumanizing Speech

Almost a year ago, two of Twitter’s top executives decided that banning all speech considered “dehumanizing” would help make the site safer. This week Twitter unveiled its official guidelines on what constitutes dehumanizing speech, and they now focus solely on religious groups, representing a retreat from some of Twitter’s first unofficial rules. The company said it narrowed the scope because of unexpected obstacles in defining such speech for its 350 million users, who speak more than 43 languages.

Apple Bans Developers From Sharing Data Without Consent

For years, developers for Apple’s App Store have been able to ask users for access to their phone contacts and then share or sell the data of everyone listed in those digital address books, without those people’s consent. That practice has recently drawn considerable negative attention, and now Apple plans to ban developers from using that information. The updated guidelines nix the creation of databases of address book information collected from iPhone users, as well as selling or sharing that data with third parties.

YouTube Reserves Advertising to Channels with 10,000 Views

As YouTube weathers criticism from advertisers about placing their messages alongside objectionable videos, the company has made a major policy shift. Video channels must now have more than 10,000 total views before YouTube will place ads on them. Though the move may placate some marketers, it is also likely to ruffle the feathers of many creators, given that Internet data firm Pex estimates that 88 percent of all YouTube channels fall below the 10,000-view threshold. YouTube has been working on the policy since November.

Tech Behemoths Establish Partnership on Artificial Intelligence

Amazon, Facebook, Google, IBM and Microsoft established the Partnership on AI to create ground rules for protecting people and their jobs in the face of rapidly expanding artificial intelligence. The organization is also intended to address the public’s concern about increasingly capable machines, and corporations’ worries about potential government regulation. One of the organization’s first efforts was to agree upon and then issue basic ethical standards for development and research in artificial intelligence.