August 9, 2013
A debate was sparked recently when a photographer sued BuzzFeed over the use of unlicensed images and BuzzFeed’s claims of fair use. One problematic issue is that in many instances, no actual human artists, writers, or editors are creating what is seen online. When a search engine, automated process, or algorithm collects images, the practice falls into a copyright gray area. But fair use tools could be built that let creators make content freely available or keep it exclusive.
Automated machines have authors and photographers beat, argues Wired. “Aggregators — whether listmakers, search engines, online curation boards, content farms, and other sites — can scrape them from the Web and claim that posting these images is fair use.”
Algorithms are acting more like content creators than like an index of online information. If a user searches for a topic, the facts and images that Google scrapes from websites fall under fair use.
If an algorithm violates a copyright, the content owner can file a DMCA takedown request. But the burden falls on the owner, pitting human against machine; the human must submit each request manually while the machine continues to scrape the Internet around the clock.
“What if the researchers at these companies could improve their bots enough for the algorithms to make intelligent decisions about fair use,” questions Wired. “If their systems can organize the Web and drive cars, surely they are capable of shouldering some of the responsibility for making smart decisions about fair use.”
Search engines can develop tools to prevent fair use from becoming plagiarism. Google already has tools that share revenue with content creators from music found in videos uploaded to YouTube.
Fair use algorithms could also be used to surface content that a creator or owner wants freely available. Tools could be built to determine which content creators want treated as fair use and which they want kept exclusive or limited.
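As a rough illustration of the idea, a scraper could consult a machine-readable usage policy attached to each image before reusing it, much as crawlers consult robots.txt. The sketch below is hypothetical: the `usage-policy` tag, its values, and the `may_reuse` helper are invented for illustration and do not correspond to any real standard or API.

```python
# Hypothetical sketch: a bot checking an invented "usage-policy" metadata tag
# before reusing an image. The tag name and values are assumptions, not a
# real standard; the point is that unknown or missing tags default to
# the restrictive choice.

ALLOWED = {"free", "attribution"}   # hypothetical policies permitting reuse
KNOWN = ALLOWED | {"exclusive"}     # all policy values the bot understands

def may_reuse(image_metadata):
    """Return True only if the (hypothetical) policy tag permits reuse."""
    policy = image_metadata.get("usage-policy", "exclusive")
    if policy not in KNOWN:
        policy = "exclusive"        # treat unrecognized tags conservatively
    return policy in ALLOWED

# A photographer marks an image exclusive; the bot skips it.
print(may_reuse({"usage-policy": "exclusive"}))  # False
# A creator opts in to free reuse; the bot may proceed.
print(may_reuse({"usage-policy": "free"}))       # True
```

The key design choice is the conservative default: when no policy is declared, the bot assumes exclusivity, shifting the burden from the rights holder back onto the machine.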