In the first of a two-episode series, professor Ryan Abbott of the University of Surrey discusses his views on the tax implications of increasing automation and the need for a robot tax.
This transcript has been edited for length and clarity.
David D. Stewart: Welcome to the podcast. I’m David Stewart, editor in chief of Tax Notes Today International. This week: I, Taxpayer — part one.
As the labor market remains tight, and as companies seek to fill their needs through increased use of automation and robots, now seems like an appropriate time to revisit the question of whether and how robots should be taxed.
To explore two perspectives on this issue, we’re dedicating the next two episodes of Tax Notes Talk to highlighting some of the arguments for and against the taxation of robots.
This week’s episode — part one — features professor Ryan Abbott discussing the merits of taxing robots. Next week’s episode — part two — features professor Orly Mazur raising concerns over what a robot tax could do.
Tax Notes contributing editor Marie Sapirie will join us for some background on the issue in just a minute.
Marie, welcome back to the podcast.
Marie Sapirie: Thanks for having me.
David D. Stewart: I understand this is a big topic with many different arguments for and against. Could you give us a preview of the issues you covered in your interviews?
Marie Sapirie: Like you mentioned, my interview with Ryan is the first half of a two-part series on how tax policy might address the societal and economic changes that automation could cause.
In these two interviews, we go over key parameters of the debate over how the tax system might change in the wake of increasing automation, including the relevant definitions, such as what exactly is a robot tax; what options there are for policymakers to consider and how to choose among them; and also implementation questions.
David D. Stewart: Now, what makes this issue important to talk about today?
Marie Sapirie: That’s a question that our guests this week and next week will address in detail. But briefly, the big concern is that if automation causes large-scale unemployment, then the tax bases that governments rely upon will be dramatically reduced.
The debate over tax policy really took off in 2017, when the European Parliament considered and rejected a robot tax proposal. That proposal suggested a tax on the owners of robots to fund support for workers whose jobs were reduced or eliminated by robots.
But the more general question of how policymakers should respond to a period in which rapid technological change appears highly likely, while the effects of that change remain uncertain, is a perennial one. You can see that illustrated in the EU’s proposal, which opens with references to the story of Pygmalion from classical antiquity and to the creature in Mary Shelley’s Frankenstein, published over 200 years ago.
While we aren’t dealing with Greek goddesses changing marble sculptures into living women or monsters who mine Plutarch for tips on how to be a noble human, we do seem to be in a period of rapid adoption of automation now, and that’s why the tax considerations are a current issue.
Also, it’s worth noting that the EU’s proposal was part of a broader package that was intended to regulate the widespread adoption of robots. It focused on nontax items like the formation of a regulatory body dedicated to robotics and artificial intelligence, an ethical framework for developing and using robots, and liability in situations where robots were involved.
The tax implications were not really the main focus there, but the proposal pointed out that if the concerns about potential widespread unemployment came to pass, then maintaining the current basis of taxation could produce a big increase in inequality and endanger social welfare and social security systems.
David D. Stewart: I take it that this is not just any one country’s issue; this is more of a global question.
Marie Sapirie: That’s right. It’s definitely a worldwide debate.
As I mentioned, there was the EU proposal, which wasn’t enacted, but in 2017 South Korea also announced that it would reduce the deduction businesses could take on their corporate taxes for investments in automation equipment. The deduction was due to expire that year, so the proposal was to extend it at a reduced rate.
David D. Stewart: Now, you spoke with both Ryan and Orly about this. This week we’ll be hearing from Ryan. Could you tell us about Ryan and what sort of things you talked about?
Marie Sapirie: I spoke with Ryan Abbott, a law professor at the University of Surrey and the author of the recent book, The Reasonable Robot: Artificial Intelligence and the Law. Ryan is also the coauthor of the article, “Should Robots Pay Taxes? Tax Policy in the Age of Automation,” which was published in the Harvard Law and Policy Review in 2018. He’s a licensed attorney, physician, and acupuncturist in the United States, and a solicitor advocate in England and Wales.
Last year he led a team of lawyers and researchers in successfully acquiring a patent in South Africa for a technology that was designed and created by an AI system. This was notable because the named patent inventor was the AI system. So, he’s been thinking about a wide variety of the legal aspects of increased automation.
In this interview, Ryan and I discussed the history of the proposals to implement a robot tax, the rationale and possible objections to the proposals, possible policy options for an automation tax, and also some of the other areas of the law that automation implicates.
David D. Stewart: All right. Let’s go to that interview.
Marie Sapirie: Thank you, Ryan, for joining me today to discuss the tax policy choices that lie ahead in an increasingly automated economy.
Ryan Abbott: Thank you. Very exciting to be here.
Marie Sapirie: You’ve been thinking and writing about this topic for a while now, including in a 2018 article titled “Should Robots Pay Taxes?” in which you and your coauthor wrote that we need to take at least an automation-neutral approach in our tax policies.
To set the stage, would you explain what is meant by the term “robot tax”?
Ryan Abbott: Sure. Just to be clear, I’m not a tax person; in fact, I don’t even do my own taxes. But this was an area of particular interest to me because it reflects a broader philosophy of mine: people and machines can end up doing the same thing, and yet the law treats those behaviors very differently.
An example I give in the tax context is, if my university manages to replace me with a robot, which they will when the student satisfaction scores come out the same, they don’t have to pay payroll taxes for me. Our tax policies are encouraging businesses to automate even when it might not be more efficient.
A robot tax is a potential solution to that problem. In a narrow sense, it could refer to a tax specifically on robots or, more broadly, on automation equipment.
My coauthor and I came to the conclusion that what was really needed was greater tax neutrality between labor and capital.
Marie Sapirie: One of the underlying concepts in many of the proposals to tax robots and automation is the idea of a robot worker. If Congress were to take up the idea to implement a robot tax, how should it define what a robot worker is, and what are the policy implications of choosing the definition?
Ryan Abbott: That’s a great question. I think it’s one of the reasons why a narrower understanding of a robot tax would be difficult and problematic to implement.
If one has concerns about businesses being driven to automate, given current tax preferences for automation equipment, a potential solution is to come up with a tax on automation equipment and robots that are replacing human workers or that could be doing the job of a human worker. But that is very difficult to define, and it would be subject to a lot of administrative overhead and gamesmanship.
Machines or robots don’t always replace people on a one-to-one basis; they’re not always physically embodied or doing things interchangeably with a person. That’s one of the reasons why I think a narrow robot tax has a lot of hurdles ahead of it that make a broader rebalancing of capital and labor taxation more interesting.
Marie Sapirie: History includes many examples of technological developments that have had a major impact on the course of human lives, and increasing automation seems likely to be one of those types of changes. Where did the idea to tax automation originate, and how has it developed?
Ryan Abbott: There is a smattering of older literature touching on the sorts of issues that we’ve more recently been looking into.
But this concept of a robot tax really emerged with a report considered by the European Parliament in 2017, which raised social concerns about automation and suggested that one solution might be a robot tax on owners of automation equipment to fund retraining for workers put out of jobs by robots.
That got a lot of attention, including from Bill Gates and Lawrence Summers. It really sparked a discussion about how we tax people and machines, and more broadly, the structure of the tax system.
Marie Sapirie: Would you tell us about the rationale for changing the tax laws in response to increased automation?
Ryan Abbott: Different people have had different rationales for doing it. As I just mentioned, the European Parliament was concerned about machines competing with human workers and about some of the undesirable effects of technological unemployment, where people are put out of work by automation equipment.
A robot tax that makes automation costlier would disincentivize employers from rendering people technologically unemployed.
A tax on automation equipment could also be used to fund enhanced social benefits or to retrain workers.
Historically, automation equipment has put swaths of people out of work in the short term. In the long term, productivity has increased and people have gone on to find new types of work.
But as a society, we haven’t done a great job of transitioning people. That dates back to the Industrial Revolution and the first major wave of automation.
As an example, at the turn of the 20th century, about 40 percent of the workforce was in agriculture. Now it’s 3 percent, and we don’t have 37 percent unemployment. That 3 percent is dramatically more productive, and the other 37 percent have gone on to find other sorts of work. So, even if automation works out well for society on the whole, a tax like that could help cushion the blow for the people who are being adversely affected by it.
My coauthor, Bret Bogenschneider, and I argued that another reason to have an automation tax is that businesses automate, or decline to automate, for all sorts of reasons: labor shortages due to COVID-19, health and safety concerns, and so on.
One reason a business shouldn’t be automating is just to save on taxes: if McDonald’s is using machines instead of cashiers not because it’s a better consumer experience or safer or faster, or if Tesla is using a self-driving car just to avoid paying the taxes that come with hiring a taxi driver.
We argue that tax policy should be neutral toward the actor doing the productive work. The current lack of neutrality is reflected, for example, in payroll taxes being required for a person but not for a machine doing functionally exactly the same activity.
Marie Sapirie: In the case of a short-term disruption like you mentioned, doesn’t that lend itself to an argument that a robot tax should be time-limited or targeted in some way?
Ryan Abbott: It could. It depends on the social values underlying how you’d decide to structure a tax system.
If, for example, in the next five or 10 years the only industry really affected by automation is transportation — say, most truck drivers are put out of work — one could target a robot tax at self-driving vehicles, at the transportation industry generally, or at trucking companies. That would have the effect of the parties that benefit bearing the cost of helping to smooth the transition for those being negatively affected.
In my view, though, having automation-neutral tax policies is a generally useful sort of thing, because, again, it gets rid of some perverse incentives that encourage people to automate to avoid paying their fair share of taxes.
Marie Sapirie: In your 2018 article, you included five possible policy options for an automation tax. Would you walk us through those options?
Ryan Abbott: Sure. Well, it starts by recognizing the different ways that people and machines are subject to taxes, or really, the ways employers face different tax treatment for using people versus machines for certain activities.
Payroll taxes are one, but there are also the deductions for machinery and other sorts of automation equipment, and the timing of those deductions.
Employers can also benefit from not having to bear the indirect consequences of higher taxes on human labor.
We proposed basically going through those and creating offsetting taxes on automation equipment. That might mean, for example, phasing out the deductions businesses take for automation equipment, or imposing something equivalent to a payroll tax on a business based on its level of machine use. That’s one view.
Another view is to have an automation tax modeled on unemployment taxes, where employers pay into a scheme based on how often they render workers technologically unemployed.
We gave those as options, but ultimately they’re both a little challenging — again, due to definitional issues and the administrative burdens associated with new taxes, and the potential to unfairly penalize businesses that automate because they are legitimately more efficient.
One could also grant offsetting tax preferences for human workers. For example, we could get rid of payroll taxes. If we got rid of payroll taxes, businesses wouldn’t have a wage-tax-based reason to choose a machine over a person. That’s probably a more appealing system in that it ends the taxation of something socially valuable — namely, human labor — and it decreases tax complexity.
The problem with doing that is the federal government gets around 30 or 35 percent of its tax revenue from payroll taxes. That would have to be made up in a significant way. That could be done any number of ways, including with an automation tax or by increasing income taxes.
We ultimately argued that what seems like the fairest solution is to increase capital taxes, which could mean capital gains taxes or corporate tax rates, in part given the benefits that capital is going to see from increased automation, and in part because the tax system in general seems fairly heavily weighted toward labor taxation right now.
Marie Sapirie: In deciding whether to enact an automation-neutral approach, one thing jurisdictions will probably think about is the potential that the approach causes the owners of the technology to shift their capital investments abroad. Is there a way you’d recommend for jurisdictions to address that?
Ryan Abbott: I think a couple of things about that. Capital flight is part of the dominant narrative, and labor taxes are favored for a couple of reasons.
One is that taxing labor is administratively easier: businesses are easier targets for collecting withheld income taxes, and they are more likely to withhold appropriately given that it’s not their own tax liability they’re acting on.
But the other reason being that traditionally economists have thought, “Well, if we tax labor, people aren’t going to stop working, and there aren’t that many people who are going to move to low-tax jurisdictions to try and avoid taxes. Whereas if we increase taxes on capital, people will flee to low-tax jurisdictions.”
The United States and Europe, for example, have historically had fairly high tax rates, and capital has not fled these jurisdictions. That’s because, while they may be subject to higher capital taxes, they have very well-developed workforces. They’re the largest markets in the world. They have developed judicial systems and infrastructure.
There are a lot of positive reasons for people to invest there and make a compelling return, even with somewhat higher capital tax rates. You have not seen, for example, all of the capital fleeing to no-tax jurisdictions.
In part, I would argue that the narrative that we ought to tax labor over capital because capital will flee is overblown. But there are limits to how much a jurisdiction could raise its capital taxes.
Another approach might be an international treaty on minimum rates for things like corporate taxes and capital gains taxes, so that there isn’t a race to the bottom, with jurisdictions tempted to push more of the tax burden onto labor and away from capital.
Marie Sapirie: This topic touches many different aspects of policy. Would you walk us through some of the non-tax aspects as well?
Ryan Abbott: Yeah. The more I looked into this, the more it surprised me how differently the law can treat a person and a machine doing something very similar or functionally interchangeable, across different areas of the law.
For example, I mentioned taxes on self-driving cars. If my self-driving Tesla runs someone over, and I run someone over in exactly the same sort of way, the Tesla accident is looked at under a strict liability tort framework, and my accident is looked at under a negligence-based framework. It probably doesn’t make sense to have two totally different systems for determining liability. That’s particularly unfair to whoever got run over, who was not looking to get run over and wants to be able to collect compensation equally, whatever caused the accident.
Another area that’s very topical right now is AI making things, in particular the sorts of things that get intellectual property protection, like patents and copyrights. This year, text-to-image generators have exploded across the internet, turning an academic curiosity into something of serious commercial importance.
In the United States, if an AI makes a piece of artwork, no matter how valuable it is, that can’t get copyright protection right now. That’s probably not a good system, given that the Supreme Court has consistently said, “We have copyright protection to promote the dissemination and generation of creative works,” and given that people are now using AI to make creative works and to disseminate them.
People are now using AI in drug discovery and development, or in materials engineering, to invent new products that they hope to commercialize, but at the moment in the United States you can’t get a patent on that, either. Different jurisdictions are treating it differently, though. The United Kingdom, for example, does provide copyright protection for that sort of thing.
I argue that people would actually be much better off if the law were neutral between AI and human behavior, because ultimately the law is generally trying to promote a certain sort of socially beneficial behavior, whether that’s driving without running people over, making creative new works, or inventing new drugs. We should promote the best way of doing that, whether it’s having a person do it or having a machine do it.
Marie Sapirie: Well, thank you so much for joining me today.
Ryan Abbott: It was my pleasure. Thank you so much.