This article is part of a series in which OECD experts and thought leaders — from around the world and all parts of society — address key policy challenges, discussing and developing solutions now and for the future. The series aims to foster the fruitful exchange of expertise and perspectives across fields to help us rise to this critical challenge. Opinions expressed do not necessarily represent the views of the OECD.
Artificial intelligence (AI) is doing exciting work these days. It is making art, generating personalised music, writing short stories, and any day now it will be driving in places other than San Francisco.
These and continued advances in AI are going to be hugely disruptive—socially, economically, and legally. When AI steps into the shoes of people and does things that only people used to do, the law often discriminates between AI and human behaviour. That might sound intuitively appealing, because an AI is not like a person no matter how much it can behave like one. But it turns out that discriminating between AI and human behaviour tends to result in worse outcomes for people.
Consider a couple of ways in which the law treats driving by a person and by an autonomous vehicle differently. You might not immediately think of taxes, but current tax policies encourage companies to use self-driving cars even if they are no better than human drivers. That is because, among other things, companies must pay payroll taxes to employ a person; if they can automate that job, they avoid those taxes. If tax is not exciting enough, liability frameworks also treat AI and human behaviour differently. When a taxi driver causes an accident, we evaluate it under a negligence framework, asking whether a reasonable driver would have caused the same accident. When a self-driving car causes an accident, we evaluate it under a strict liability standard, which imposes liability regardless of the exercise of reasonable care. Having two different liability standards for the same behaviour creates some truly perverse incentives, and may ultimately inhibit the introduction of safer technologies.
Patent law is another area where the law treats behaviour by a person and by a machine differently. Traditionally, to get a patent, you needed to have a human inventor. What makes someone an inventor varies by jurisdiction, but in the United States it is the person who “conceived” of an invention.
With companies now investing vast sums of money in automating aspects of R&D—particularly in biotech where AI is getting proficient at facilitating drug discovery and repurposing, as well as in materials engineering—what happens if an AI functionally comes up with an invention? What happens if there is no person involved who traditionally qualifies as an inventor? Can someone get a patent? Who would be the inventor, or is there no longer a need for an inventor? Who would own the patent?
Leaving these questions unanswered is going to be a major problem if pharmaceutical companies need patents to develop new life-saving drugs and, at least in some situations, if it turns out that an AI can do a better job at finding a new antibody to treat COVID-19 than a group of pharmacologists. Failing to provide patent protection on “AI-Generated Inventions” would mean that even in situations where an AI is more effective than a person, a company will need to use people if patent protection is critical.
Partly in response to the lack of guidance in this area, and partly to encourage a dialogue on this subject, I led a team of patent attorneys internationally in filing patent applications for two AI-Generated Inventions: one was for a light that could attract attention in an emergency, and the other was for a beverage container based on fractal geometry. These inventions were conceived of by DABUS, which is an AI developed by Dr. Stephen Thaler. We filed the applications with DABUS listed as the inventor. Not, of course, because the AI was entitled to or capable of having any rights, but because listing the AI was transparent about how the invention was generated and because it prevented someone from taking false credit. We argued the AI’s owner should be entitled to own any patents on its output, the same way the owner of a 3D-printer would own a physical beverage container made by their machine. That outcome reflects a common law doctrine and rule of property ownership dating back to Roman times called accession.
We have thus far filed our applications in 17 jurisdictions, and South Africa granted our applications in June 2021. Three days later, Justice Beach of the Federal Court of Australia issued an extensively reasoned decision holding that an AI could be an inventor as a matter of law under the Australian Patents Act, and that, at least in our case, the AI’s owner had the best claim of entitlement to the applications. In April 2022, however, a full panel of five judges of the Federal Court of Australia reversed that decision, and it is now up to Australia’s High Court to decide whether to hear an appeal. Elsewhere, the applications have already been denied in several jurisdictions, including the United States, the United Kingdom, and Europe, although judicial appeals are pending in all of them.
It may be that meaningful change needs to come, at least in some jurisdictions, from legislators. The United Kingdom Intellectual Property Office (UKIPO) is about to announce the results of its second consultation on AI and IP, and specifically whether it will recommend that Parliament change the law to explicitly allow patents for AI-Generated Inventions.
The subsistence of patent rights is just one of the many ways that advances in AI are going to challenge traditional doctrines of intellectual property law, and law generally. Perhaps the most exciting aspect of AI is that it will require us to rethink the legal standards that apply to people, and that gives us an opportunity to rethink the normative foundations of our laws.