Copyright in the Age of Bots

If you’ve had any issues with copyright over the past few years, whether on YouTube, Facebook or even your own website, it’s probable that your issue didn’t start with a human, but with a bot.

According to YouTube, their automated Content ID system handles over 98% of copyright issues on the site. Similar bots monitor Facebook and Instagram.

However, even if you aren’t on those services or weren’t flagged by those particular bots, your issue was likely still first spotted by a bot. Photographers are using bots to track infringements of their images and send invoices. Similar bots exist for tracking text and, of course, there is a slew of bots for detecting pirated content, including audio and video.

There are even bots for detecting unlicensed music samples, often catching such samples years or decades after the fact.

So, if you’ve ever been the subject of a copyright enforcement action, rightly or wrongly, it’s safe to say that a bot was probably involved at some point. Depending on the bot, it may have been responsible only for the detection but, in many cases, bots also handle the enforcement, sending out notices and even handling appeals.

Copyright law was never meant to work this way. It is a law that was designed to let humans govern how other humans use content. However, the internet and many of its largest platforms have created volumes of infringement so huge that the bots become a necessary middleman.

But what does this mean for copyright? How does a system built for humans by humans react to a tidal wave of bots? The answer is mixed and very complicated.

Why Bots?

The first and most obvious question is: Why use bots in the first place?

The answer is that we have to. For one example, YouTube gets five hundred hours of video uploaded every minute. Facebook gets an estimated 2.4 million posts per minute. Even with a massive army of reviewers, it would be impossible for humans to check even a modest fraction of that content.

These platforms have simply become too big to police, and the issue isn’t limited to copyright. It also covers other community standards violations, including misinformation, hate speech, pornographic content and more. Bots are much more than a copyright problem.

However, they pose an especially difficult problem for copyright. Copyright law was written, quite deliberately, to allow for a lot of nuance and judgment. It was a law meant to be parsed by humans, and bots, as amazing as they are at times, are not human.

This, in turn, is where copyright can begin to suffer.

(Un)Fair Use and Other Questions

Though humans are pretty good at grasping the importance of context, bots are not. For example, they struggle to tell the difference between clips that are meant to offer commentary or criticism and clips that are simply meant to exploit the original work.

When confronted with such a conundrum, bots will usually err on the side of caution and remove content. This hurts uses that most people would likely consider fair use and non-infringing.

This means that it can be very difficult, and even unreliable, to use third-party content on sites monitored by bots, even if that use is a probable fair use. This mostly hurts commentary and criticism of other works, but it can also harm the creation of new works.

The reason for this is simple: though bots can learn and adapt, they can’t make real fair use judgments at this time. As such, there is always some kind of bright-line rule, and that rule will always allow some infringing uses to stay online while pulling down some non-infringing uses. That’s because, with copyright, bright-line rules don’t really exist in many areas.
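To make that concrete, here is a minimal, purely hypothetical sketch of what a bright-line rule looks like in code. The threshold, the match score and the takedown/ignore actions are all invented for illustration; they don’t describe how Content ID or any real matching system actually works.

```python
# Hypothetical sketch of a bright-line copyright rule.
# The threshold and fields are invented for illustration only;
# no real system (Content ID included) is documented to work this way.

MATCH_THRESHOLD = 0.80  # arbitrary cutoff chosen for the example


def evaluate_upload(match_score: float, is_commentary: bool) -> str:
    """Decide what to do with an upload based only on its match score.

    Note that `is_commentary` (the context that matters for fair use)
    is accepted but never consulted: a bright-line rule can't weigh it.
    """
    if match_score >= MATCH_THRESHOLD:
        return "takedown"   # pulled down even if it is likely fair use
    return "ignore"         # left up even if it is clearly infringing


# A ten-second review clip and a near-complete rip can land on either
# side of the line, because only the score is considered.
print(evaluate_upload(0.85, is_commentary=True))   # -> takedown
print(evaluate_upload(0.70, is_commentary=False))  # -> ignore
```

The point of the sketch is simply that the context a human would weigh never enters the decision, which is exactly where the metagame below comes from.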

However, what’s worse for those publishing online is that this creates something of a metagame: a game where the goal isn’t to be non-infringing or to use the content in a way that’s best for all involved, but to avoid the bots. We see this all the time on the internet.

Playing to the Bots, Not the Law

For those who post online, this creates something of a perverse incentive. The bots encourage people not to worry about the law itself, but to focus on avoiding detection.

Gone is the nuanced balancing act of copyright law, and in its place are cold, unfeeling bots that don’t fully understand what they are doing. As smart as these bots often are, they are no substitute for human intuition and understanding.

However, what this ultimately does is rewrite copyright law from a functional perspective. Is using a clip of X seconds allowed under the law? That doesn’t really matter: if it avoids detection by Content ID, it’s unlikely to ever face reprisal.

In short, the bots determine what is and is not allowed, regardless of what the law says.

While it is true that the bots are based on the law, once again, the law has nuance and an intention that humans understand. Bots don’t.

Fixing the Issue

The truth is that, for the vast majority of potential copyright disputes, the bots CAN actually handle them just fine. For all the nuance and gray areas in copyright law, most cases don’t really fall into those areas.

The problem is what happens to the cases that do. Those cases raise a question: Where are the humans?

The copyright bots were meant to aid humans. An ideal copyright bot would handle the clear cases outright (obvious piracy, significant infringement, etc.) but turn more nuanced cases over to humans. However, that rarely happens.
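As a rough illustration of the triage described above, here is another hypothetical sketch in the same vein as the earlier one: clear-cut cases are resolved automatically, while anything in the gray zone is routed to a human reviewer. The score bands and labels are assumptions made up for this post, not a description of any real platform’s pipeline.

```python
# Hypothetical human-in-the-loop triage, extending the earlier sketch.
# All thresholds and labels are invented for illustration.

AUTO_ENFORCE = 0.95   # near-certain copies: act automatically
AUTO_CLEAR = 0.20     # near-certain non-matches: leave alone


def triage_upload(match_score: float) -> str:
    """Route an upload based on how clear-cut the match is."""
    if match_score >= AUTO_ENFORCE:
        return "enforce"        # obvious piracy, handled by the bot
    if match_score <= AUTO_CLEAR:
        return "clear"          # no meaningful match, handled by the bot
    return "human_review"       # the nuanced middle, where fair use lives


for score in (0.99, 0.60, 0.05):
    print(score, triage_upload(score))
```

The difference from the first sketch is the middle band: the bot still does the bulk of the work, but the hard calls land with a person instead of defaulting to a takedown.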

The reason is that, in most cases, the bots have been brought in not to aid humans, but to replace them. This means that, when the bot does make a mistake, no matter how rare that is, there’s not a human available to quickly fix it.

This is often true both on the provider side and the enforcement side. Companies want their copyright process to run as automatically as possible and, as part of that, dedicate as few human resources to it as possible.

When that happens, the bots run the show. They functionally get to decide what is and is not infringing without human interference. Since few online have the resources or interest in taking such disputes to court, bots get to act as judge, jury and executioner, all based on automated algorithms that only the bots themselves fully understand.

Bringing humans back into this process would blunt that. It would act as a buffer between the binary nature of the bots and the nuance of copyright. However, people are expensive and few want to spend money on enforcing copyright law.

Bottom Line

In the end, users are caught in the middle. Everything you post on Facebook and YouTube is parsed through a series of bots. The fate of everything you post is in the hands of automated systems that even their owners don’t fully understand.

The best advice I, or anyone else, can give is to recognize that reality. On the internet, copyright enforcement is a bot’s game and what the law says is only of secondary importance.

As such, I encourage creators not to use third-party content in their work, to source anything they do use from open use libraries, such as YouTube’s own audio library, and to view their content from the perspective of a bot.

Though it is a metagame that no creator should be forced to play, it’s a grim reality for creators today, and it’s what happens when copyright law is put in the hands of bots without humans to rein them in.

It may not exactly be a sci-fi dystopia, but it’s easy to see why so many are so frustrated with the status quo.
