Since the public release of ChatGPT in November 2022, mass media outlets have hummed with predictions about the risks and rewards of ChatGPT and other generative artificial intelligence—programs that generate text, images, and other media in response to prompts. But ChatGPT and generative AI are not just buzzwords.
They could have a profound impact on TEI members, a recent Thomson Reuters study suggests. To find out more about the survey, senior editor Michael Levin-Epstein interviewed Zach Warren, technology and innovation insights lead at the Thomson Reuters Institute, in May.
Michael Levin-Epstein: Tell us about the research study Thomson Reuters just conducted.
Zach Warren: The survey was on ChatGPT and generative AI in the corporate tax and law firm space, and generally asked respondents how they were approaching these new technologies, as well as any current actions they were taking—whether they’re adopting them already and/or whether they have certain risk concerns, and whether they’re banning or warning their employees against unauthorized use. Generally, we think that people are pretty optimistic about how these technologies will be used. We asked separate questions around whether people believed generative AI and tools like ChatGPT “can” be used in the tax department and whether they “should” be used in the tax department. Slightly different connotations there. But in both cases, we found that most people actually felt that yes, these technologies will have a home, if not now, [then] in the very near future, in the tax department. We found that overall, between all parties, seventy-three percent said yes, ChatGPT and generative AI can be applied to the work that we do. Additionally, fifty-one percent took the next step and said it should be applied to the work that we do. I find that distinction particularly interesting, because [with] past technologies, even something like the cloud or a lot of automation technologies, there were people who were interested, sure, but it didn’t have that quick an uptake. People said, “Yeah, that’ll be something that we get to when we have to, but we don’t really see the utility of dropping everything and going for it right now.” But with ChatGPT and generative AI, the jump in time saved in particular, I think, has really opened some people’s eyes. And even though it’s been less than a year at this point since the public release of ChatGPT, a lot of people have played around with it, understand what it can do, and have mapped out use cases for the corporate tax world.
Levin-Epstein: What might spur on an advance in adoption of these new technologies?
Warren: Right now, it’s just going to be education more than anything. One thing that we found in our research is people might have had an opinion about “Can this be used? Should this be used?” But a sizable number of people simply didn’t know. That same question about “Should ChatGPT and generative AI be used in corporate tax work?,” thirty percent said, “I don’t know.” We asked a similar question, “Does your department have risk concerns around this?” And we got about a quarter saying, “We haven’t had that conversation; we don’t know.” So, a lot of it’ll just be not only getting hands-on, playing around with the tool and understanding it a little bit more, but also beginning to have those conversations more within the department if it’s something that they want to use department-wide, if it’s something that they trust having the data within the system to use, if they feel comfortable using it for client work. I don’t think those conversations have happened quite yet on a wide scale just because this is so new, but it also wouldn’t surprise me if it’s going to happen sometime soon.
Levin-Epstein: Who is going to be responsible for educating tax professionals on these tools?
Warren: That’s an interesting question. I’ve had a few conversations around those ends, and one conversation I actually just had yesterday with a corporate tax professional who basically said, “Yeah, I am high up within the organization. I’m a little bit younger, so I feel like I understand technology and I’ve been educating myself,” but I also think it’s going to be incumbent on people that maybe have been technology agnostic in the past, [where] that’s not their area of interest, to go into this with an open mind and ask questions. The professional I talked with said, “I can’t say I’ve seen any pushback in trying to use these tools, but more people playing devil’s advocate with me, asking good questions about how exactly this is going to be used, some of the risks.” I think this is definitely a technology that will come with champions. It is exciting for a lot of people, and I think some people are going to want to use it, or already are using it right away. But with that, I think there’s going to be some tough conversations for leaders in tax departments about putting some guardrails on this just to make sure that if it’s being used, it’s in a responsible way. The only other thing I’ll add real quickly is, particularly as vendors and technology providers come out with this, I think it would be good for them to approach this in a practical manner as well. With some AI in the past, I’ve seen some irresponsible marketing, you could call it, that they’re promising the world and saying, “AI robot tax professionals are going to replace everything,” when that’s just not the truth. So, working with tax departments in a way that actually is functional for them and being able to explain what the technology can do practically—as compared to promising the world—is going to be key.
Levin-Epstein: That’s a good point. You mentioned risk. What are some of the risks that tax professionals are going to have to be cognizant of with this new technology?
Warren: In our research, a few key themes really popped up. One is accuracy. Right now, the accuracy of the tool is pretty good, especially as you’re getting to something like GPT-4, which is OpenAI’s most recent release of ChatGPT. But it depends on what the use case is, really. So, if you’re, say, using it for an internal question-and-answer service, where if it gives you a nonsensical answer at some point, that’s OK, just ask it a different question, then that’s fine. But if you’re looking to use this to, say, write something for client work or something that has to be 100 percent accurate for regulatory reasons, the tool’s not quite there yet. So, you really need to be smart in how you’re using it. Another big one that really has popped up for people is privacy and confidentiality. A lot of these tools, particularly something like ChatGPT, don’t live within your own firewall. You don’t necessarily know how the data is going to be used. So, if you put confidential tax information or audit information into a public-facing tool like ChatGPT, there is a possibility that it will be fed into the algorithm and could hypothetically be released to the general public. You don’t want that. If you’re going to be using these tools for that sort of information, it’s going to be necessary to have some sort of firewall or some sort of way to make sure that that private information doesn’t get out. And the last one is just a loss of control to that end. We’ve done this sort of report a few times, tax being the most recent. We did it in the legal world before. But particularly within corporate tax departments, the loss of control was a much bigger factor than in some of those other segments that we surveyed, just because I feel like a lot of tax departments are used to having all the numbers at their disposal, having everything within their own firewall, making sure that nothing gets out.
So, using some of these public-facing technologies might be a little bit of a new phenomenon for them. Not that they’re not going to do it, but it may take a little bit more time to be comfortable with doing so.
Levin-Epstein: Yes, another good point. What are some differences between how corporate tax and tax firms are approaching generative AI?
Warren: Just for starters, we found actually that corporate tax departments were a little bit more aware, and actually maybe even a little bit more optimistic, about how they were approaching this technology. The very first question that we asked is, “Do you know what generative AI and ChatGPT are?” And we got ninety-one percent for corporate tax saying yes, versus a slightly lower eighty-six percent for tax and audit firms. And then that was reflected in whether they believe this can actually be used for the work as well. Seventy-eight percent of corporate tax said, “Yes, this can be used,” as compared to only sixty-eight percent of tax accounting and audit firms saying, “Yes, this can be used.” But on the flip side, as you might expect, they also had a few more risk concerns. Particularly, we found corporate tax departments were a little bit more proactive in either warning against the unauthorized use of ChatGPT or generative AI or outright banning the unauthorized use of ChatGPT or generative AI. So, twenty-one percent of corporate tax departments said that they were warning their employees against unauthorized use, compared to seventeen percent of tax firms. And ten percent of corporate departments said they were banning unauthorized use, versus only eight percent of tax firms. So, slightly different numbers. Those numbers were actually particularly higher in the UK, which I found interesting. The UK seems to be ahead of the US and Canada, which were our three geographies that we surveyed, in terms of warning against unauthorized use and banning unauthorized use. But overall, across the board, we did find that corporate departments were being a little bit proactive in making sure that the risks are accounted for.
Levin-Epstein: What are some areas that you might be looking at in terms of further research and observation?
Warren: The main thing for me is just use cases right now. There is a very high proportion that are still considering this technology, which makes sense. It’s relatively new, so they don’t really quite know what to do with it yet. Only about six percent in the corporate tax department were either already using or imminently using these technologies, but an additional twenty-one percent were still considering, “OK, how exactly do we want to use it?” As more and more of those companies start to really integrate this for the first time, I really just frankly want to see what their use cases are. The big one on the corporate side so far has been research; eighty-three percent who said that they are using it already were focusing on research. Also, we saw more than fifty percent were using it for compliance, as a question-answering service, and for tax advisory. But I’m curious, particularly as the technology becomes more advanced, whether that will change at all. Also, one thing that we’re really looking at moving forward is how exactly—particularly in the corporate side—they want their vendor firms to be using it. One thing that I found particularly interesting is we asked people on the corporate side, frankly, “Do you know how your vendor firms or your technology vendors are using this technology?” And seventy-one percent said, “No, I don’t know whether they’re using it or not.” Only ten percent said affirmatively, “Yes, we know that we have vendors that are using this.” That’s something we’re going to continue to ask and see if more people are aware or even asking the questions of their vendors here moving forward.
Levin-Epstein: Is Thomson Reuters looking at other industries that are thinking of adopting or have adopted AI?
Warren: Yeah, certainly. Just to start off, I will put in the caveat that I am on the research side of Thomson Reuters. I’m not on product. I can’t speak for product or the product road map or what they’re going to be rolling out. But very generally, I know that Thomson Reuters the company is heavily investing in AI. I believe they had a $100 million investment very recently in AI products and are planning on rolling them out within their preexisting tax and accounting tools, if not by the end of this year [then] definitely getting into 2024, to make sure that generative AI is going to be a part of those tools. On the legal side of things, Thomson Reuters announced a partnership with Microsoft as of earlier this week, so in late May, to integrate Copilot [an AI assistant tool] into some of the legal offerings, with the idea that integrating Copilot into some of the tax offerings as well is probably going to be happening imminently. And I think very generally the way that not only TR, but I think a lot of software vendors within the space, are approaching this is it’s not going to replace tax professionals. This isn’t going to be a robot tax person who’s going to be able to fill out all of your forms and do audits entirely on its own. That’s just not the way the technology functions. There is always going to be a role for the human in this to not only certify that everything is going well, but to interface with a client to make sure that inputs are correct and data is being used correctly. What’s going to change is just letting the human work more efficiently. Some of those repeatable, rote tasks, like writing things up, that take a lot of time right now are going to go more quickly. It’s just a matter of how exactly you’re going to do that, to make sure that you’re working in the best service of the company.
So, a very long-winded answer, I know, but the short version is, AI is coming, but it’s going to be responsible AI that works in a way that augments how tax professionals currently work as compared to replaces [them].
Levin-Epstein: That’s a good way to end it. Thank you very much.