
Self-driving vehicles are widely seen as a positive development. I, for one, live in the US and thus have no access to decent public transport, so I must use my private car for many short trips (though I have chosen to live within walking distance of work to limit them as much as possible). Self-driving cars will reportedly save 350,000 lives per year, though some recent research has cast doubt on the magnitude of that improvement. Thinking of distracted, angry, drunk and otherwise bad human drivers, it is not hard to believe that autonomous vehicles can do better.

Self-driving vehicles have been the subject of deeper legal and moral inquiry because they provide evidence (a) that AI machines can make autonomous decisions; and (b) that those decisions have, at the very least, moral or ethical overtones. Going back to the well-known runaway trolley problem used in ethics classes, think of an AI-driven car facing brake failure (there is no reason I can think of why the brakes on a self-driving car would magically be infallible). The AI “driver” must then decide where to go: imagine that it must “decide” whether to prioritize the life of the passenger(s) or of various groups of pedestrians (who vary, say, by age, gender, etc.).

Fortunately, the potential problems with self-driving cars are for the most part easy to identify. One issue that is rarely discussed, however, is whether replacing human drivers has, in the aggregate, a positive valence. Many humans, it seems (myself included), would be happy to offload the task of driving entirely, and I would likely feel safer on roads with fewer humans behind the wheel.

So this technology will replace humans. It is true that no trade or profession is guaranteed immunity from technological change. Just as cars replaced the horse and buggy, AI machines will replace most human car and truck (lorry) drivers.

There are other areas where human replacement is underway. This is not surprising. AI machines are smart. They can beat the best humans at chess, Go, poker, Dota 2, StarCraft and much more.

When it comes to intellectual property, advocates of letting machines do more and replace humans extol the advantages, especially in the patent field, of new inventions and of cheaper, faster drug discovery made possible by AI machines’ ability to process far more data (“big data”) and to perform in silico research. Here again, as with cars, there is relatively little discussion of what will happen if companies that depend on research and development can “employ” machines to replace STEM PhDs. An atrophy of employment opportunities for STEM researchers in applied science and technology might set back our millennia-old quest to understand nature.

But when it comes to copyright-protected material, I am much more worried. AI machines have already replaced hundreds of human journalists. Journalists are self-evidently important to the very existence of a healthy polity in a democratic society. Machines have begun to replace songwriters and composers. They can write award-worthy poetry and short stories. They have taken work away from lawyers by drafting contracts.

Now, last time I checked, we live in a market economy. Imagine that you are a record company or book publisher. One of your largest costs is the money you pay to the people you like to call vendors but whom the rest of us call authors. Indeed, reducing the money paid to authors is reportedly one of the main reasons for the attempted merger of the two largest US publishing houses, now on hold after an intervention by the US competition law authorities. This is understandable with Wall Street glasses on: an AI machine is not owed royalties, nor does it have, say, reversion rights. Even if you were to grant such rights to a human programmer or to the owner of the machine, you might increase programmers’ employment, but the people who would have made a living writing novels or music would still be out of a job.

Some say that certain humans will always write books and music and make art, and that there will always be a market for masterpieces of human creativity. Perhaps. But I also know this: masters became masters by refining their craft over years, and a masterpiece is not typically their first work (though we can all think of a few geniuses who were exceptions to that rule). Hence, the argument that “mass market” works are best left to cheaper machines, while humans keep only the most highly creative work, holds too little water to be taken at face value. The truth is that most authors make a living writing works that machines will soon be able to write. I expect an AI-written law review article to be accepted for publication (as a test) within two years. Indeed, from a publisher’s or producer’s standpoint, machines will be “better” than humans because they can detect patterns in the most successful existing works (say, in a particular genre of music) and “create” something that is likely to be a more immediate success. RIP deep tracks.

Machines will soon combine their “creative” ability with their ability to manipulate us using our (many) cognitive biases. Think of Facebook’s, er, Meta’s bots affecting election outcomes.  We will be fed content that fits our existing preferences and biases.

It gets worse. Changes in cultural productions and trends both drive and reflect societal changes, which in turn lead to political and, ultimately, legal changes. Literature in all its forms, the fine arts and music are among the most important vehicles for both mirroring and propagating those changes throughout society. If those cultural vehicles are made of art, books and lyrics created by AI machines, then those machines will control at least a part of cultural, societal and political change. Think of it as self-driving culture, and it will be a U-turn as far as human evolution is concerned.

My suggestion is simple. Until we can identify the risks and the valence of the replacement of human authors, let us not accelerate that replacement by putting the full force of the market behind it. That is what would happen if copyright were interpreted as protecting AI machine productions that have no human cause.[1]

To be clear, this is a debate about applying copyright law to the outputs of AI machines. The debate about applying copyright law to the inputs, notably the debate about text and data mining (TDM) exceptions, is a different one. It may well be that many applications of TDM will emerge that do not challenge humans on the terrain that has ensured our dominion over other creatures and machines, namely our ‘higher mental faculties’. Those should a priori be allowed.

[1] I discussed the more doctrinal aspects of the issue in a previous post on this blog.

