ICAEW: AI adoption challenges: an ethical dilemma for auditors
Oct 16, 2024
Artificial intelligence offers many opportunities for audit firms and practitioners, but implementing AI solutions within audit functions comes with some complicated considerations.
Two years prior to his retirement, William Gee, Vice Chair for the ICAEW Tech Faculty Board and a member of the Technology Expert Group for the International Ethics Standards Board for Accountants (IESBA), moved away from client service and into a role looking at innovation and disruption in the accounting and audit sector. He participated in early pilots of AI-enabled audit tools and conducted studies into the feasibility of deploying such tools to support audit engagements.
As with any technology, adopters face challenges they need to address. AI-enabled systems require a significant amount of data to train the AI model, and a continuous supply of data to enhance and refine it. This raises a question about data use: are auditors able to use data collected from audit engagements to train AI models?
“The initial purpose of collecting data was for the audit,” Gee explains. “If a firm wished to use the same data for AI training, that purpose would have changed. Furthermore, the data that we would collect from a client often contained both commercial data and personal information, such as bank audits, where auditors may obtain details of bank loans. Firms seeking to adopt AI therefore need to address not only the right to use the data for purposes other than the audit, but also privacy and other applicable regulatory obligations.”
One possible solution would be to add a clause to the audit engagement letter asking the audit client for specific permission to use the data for AI training. Inevitably, some clients will agree while others won't, which affects the availability and variety of data. “If you don’t have sufficient data to train the AI, everything falls apart,” says Gee.
This is just one of many ethical and operational considerations that firms and practitioners will need to contend with when dealing with AI, and that is only for internal use. Things could get more complicated when clients leverage AI in their financial and non-financial reporting and in their operational processes.
“The majority of audit practitioners are not from a technology background,” says Gee. “Years ago, we had a vision that all audit practitioners should develop basic capability to understand technology. Despite all the training and upskilling, that vision was only realised to a certain extent. Today, the pace of technological change is faster than ever; it is not just about AI, we also have blockchain, robotics, Internet of Things and more. We have a greater need to rely on and work alongside specialists.”
General technology specialists are one thing; specialists in AI are a different matter. The technology is so new and fast moving that few can describe themselves as a true AI specialist.
It is therefore timely that IESBA is enhancing the Code of Ethics for Professional Accountants (the Code) to address the ethical considerations, including independence, relating to the use of experts, both internal and external, in audit and assurance engagements as well as in the provision of other services.
An exposure draft was issued at the beginning of 2024 for public consultation; the revisions to the Code will be finalised in December before being rolled out in 2025.
Knowledge within the profession varies by both firm size and geographical location. A sole practitioner will not necessarily need the same knowledge as a firm serving international clients.
“It varies vastly across firms and geography, which is very challenging in particular for the smaller firms as they may not have access to technical specialists in the same way as firms with international affiliations,” says Gee.
There is also a big gap between what accounting and finance students are taught about technology and how dominant it will be throughout their careers. Technology needs to be a much bigger part of accounting education and training, says Gee. “It has to be a collaboration involving the accounting profession, universities, governments and other institutions to address this competency gap.”
For those wishing to adopt AI, Gee recommends an iterative approach, starting with education on what AI is and how it works. He also recommends that accountants and auditors should experiment with different types of AI to get a feel for what they can and cannot do.
“An ex-colleague of mine plays with AI every day, trying out new versions of AI tools the moment they are announced. He analyses their effectiveness and limitations, sharing his views on social media. If you don’t get your hands dirty, you don’t actually get a real feel of what’s going on.”
Once firms and practitioners have developed a better understanding of AI and how it works, then they will be better placed to determine the areas in which AI can be applied. They will be in a position to think more about the issues and implications of using AI-enabled tools to support their work.
Beyond cyber security and data privacy, practitioners need to address ethical and security concerns specific to AI. One example is AI bias: while it is possible to instruct the AI to compensate for known biases, this approach is imperfect because the AI model, at least for now, follows a set of instructions and does not really understand the nature of the bias.
The result could be over-compensation resulting in flawed or incorrect output. “It becomes very important for accountants to appreciate the nature and limitations of a particular AI model or solution. This is an area that requires significant professional judgement as well as common sense,” adds Gee.
It is also important for practitioners to be able to explain what AI solutions actually do. “Responsible use of AI requires the user to be accountable in terms of how the AI routine is used and for what purpose, what data is used, how the data is obtained and, most importantly, how to make the entire process explainable.”
The widespread adoption of AI is also fuelling the creation of AI-related regulation and enhanced data protection legislation globally. “The issue about data sovereignty came up in our discussion a few years back and when we compiled a global summary of relevant laws, covering for example cyber security, data security, privacy, and so on, we concluded that this is a complex web of limitations and restrictions on a global level. That obviously further complicates what an audit firm can do.”
Regulations in the EU and China indicate the direction of travel for generative AI regulation. Some territories have moved fast, which is good on the one hand, but on the other adds another layer of complexity for users of the technology.
“Dealing with the technology aspects of AI is challenging enough, and practitioners also need to consider the legal aspects, often across jurisdictions,” Gee concludes. “Life is going to be most interesting for auditors over the next few years.”
[ICAEW Insights]