KPMG (UK): Just 7% of auditors’ tasks open to generative AI
July 10, 2023
Increasingly sophisticated branch of AI is still a long way from assuming the majority of tasks completed by human auditors, says Big Four firm
Despite widespread concerns about the impact of artificial intelligence (AI) on jobs, only 7% of the tasks carried out by auditors are susceptible to automation at the hands of generative AI, finds a new KPMG report.
Generative AI and the UK labour market, published last month, takes a snapshot of the types of work across various professions that could be taken over by generative AI, given the technology’s current capabilities. In a ranking of 12 professions, auditors are placed a relatively comfortable 10th.
The report says: “Within research occupations, generative AI may offer a potential increase in productivity by automating the tasks around gathering research material as well as contributing to drafting, covering note-writing, proposals and more technical papers.
“The drafting of technical reports also plays a significant role in the 7% of tasks that face automation among auditors,” it adds.
That marks a distinct contrast to the fortunes of the top-ranking authors, writers and translators, for whom 43% of tasks are susceptible to generative AI replacement. Even the closest-ranked group – programmers and software developers – trail their literary counterparts, with 26% of their tasks up for cybernetic grabs.
Cautious optimism
Despite the alarming findings for specific professions, KPMG says that, on average, susceptibility across the entire UK workforce amounts to just 2.5% of tasks, with the potential to boost productivity by a modest 1.2% – equivalent to £31bn per year.
In her foreword to the report, Mel Newton, KPMG Partner for Workforce Transformation, says that, despite controversy among journalists, politicians and “grey-haired” business leaders, “AI is something we can afford to be cautiously optimistic about, as well as focusing on building appropriate controls.”
Wider benefits
ICAEW Head of Data Analytics and Tech Ian Pay says that while automating 7% of tasks may not sound like much, it could still have a substantial impact in a sector where marginal gains are key to delivering workable profit margins. And given that audit is suffering from a well-documented talent shortage, that 7% could help to address a number of staffing challenges.
“The core work of the auditor – assessing processes and controls, reviewing financial and increasingly non-financial statements and obtaining and assessing evidence to corroborate those statements – does not feel close to being challenged by generative AI,” Pay says.
“As it stands, the technology is not well placed to perform judgmental activities or provide responses which, by the nature of audit and the current focus on the profession, must be absolutely watertight.”
Despite that, Pay foresees scope for generative AI-based tools to support auditors indirectly – for example, by answering questions about methodology or the application of accounting and auditing standards to specific audit scenarios, or by helping firms to streamline document management and information retrieval.
“With many auditors now dabbling in data analytics, generative AI could assist with the production of code to perform certain data processing tasks. That could accelerate the adoption of more analytics-led procedures, which would have wider benefits for audit,” Pay adds.
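As an illustration of the kind of routine data-processing script an auditor might ask a generative AI tool to draft, consider a simple journal-entry test that flags postings made outside normal working hours – a common analytics-led procedure. This is a hypothetical sketch only: the file name, column names and thresholds are assumptions for illustration, not taken from the KPMG report or ICAEW’s comments.

```python
# Hypothetical sketch: flag journal entries posted outside business hours or at
# weekends, a typical analytics-led audit test. Column names are assumptions.
import pandas as pd

def flag_out_of_hours_entries(path: str,
                              start_hour: int = 8,
                              end_hour: int = 18) -> pd.DataFrame:
    """Return journal entries posted outside business hours or on weekends."""
    journal = pd.read_csv(path, parse_dates=["posting_timestamp"])

    posted_at = journal["posting_timestamp"]
    out_of_hours = (posted_at.dt.hour < start_hour) | (posted_at.dt.hour >= end_hour)
    weekend = posted_at.dt.dayofweek >= 5  # Saturday = 5, Sunday = 6

    flagged = journal[out_of_hours | weekend].copy()
    flagged["flag_reason"] = "posted outside normal working hours"
    return flagged

if __name__ == "__main__":
    exceptions = flag_out_of_hours_entries("journal_entries.csv")
    print(f"{len(exceptions)} entries flagged for follow-up")
    print(exceptions[["posting_timestamp", "amount"]].head())
```

The point is not that such code is novel, but that generating the mechanical step quickly leaves the auditor’s judgment – deciding which flagged entries warrant follow-up and what evidence to obtain – untouched.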
However, auditors must remember that generative AI is just one branch of automation and the broader AI spectrum, Pay warns. “There are many other technological advances out there,” he says, “including AI-based analytical tools, which could have a far more transformative impact on the profession. Using all those tools in combination has the potential to automate significantly more than the 7% noted in KPMG’s report.”
He adds: “It’s important to caution that in regulated industries such as auditing, many use cases for generative AI remain hypothetical – and would rely upon strictly private and internal instances of organisations implementing generative AI engines. We cannot stress enough that information that is not publicly available should not be fed into public platforms such as ChatGPT or Google Bard.”
Meanwhile, a Thomson Reuters report, ChatGPT and Generative AI within Accounting Firms and Corporate Tax Departments, which canvassed the views of almost 800 tax, accounting and audit professionals in the UK, US and Canada, describes finance professionals’ current state of mind over ChatGPT and generative AI as “open-minded, but cautious”.
Published last month, the report says that at many tax and accounting firms, awareness of ChatGPT and generative AI is generally very high, and is even higher among corporate tax departments. “There is definite interest in how generative AI might be used to improve operational speed and efficiency, but opinions about its utility in the tax profession are decidedly split at the moment.”
Bullish attitude
Almost three-quarters of respondents agreed that ChatGPT and generative AI can be applied to tax, accounting and audit work – and about half think it should be. However, many respondents said they feel the technology is not yet developed enough to trust, and may be more appropriate for non-tax, administrative work.
However, the large language model technology behind ChatGPT and generative AI is developing rapidly, and many tax professionals can envision using the technology – or tools that incorporate it – to assist them in their work, the report says.
While saving time on research emerged as the most commonly identified use case, respondents cited many other potential applications, too – for example, document management, tax return preparation, tax advisory, auditing and compliance.
Respondents signalled a bullish attitude about the security of their own jobs in the face of generative AI. “The technology will never reach the equivalent of a talented professional in terms of planning and technical expertise,” one commented.
Another referenced the maxim that the answer to every tax question begins with, “It depends…” and warned that ChatGPT is “ill-equipped to deal with that level of ambiguity”.
[ICAEW Insights]