Internal auditors say AI risks are the toughest
October 8, 2024
While internal audit and IT professionals view AI as a technology risk that will grow more significant over time, they have less confidence in their ability to identify and handle AI risks than risks from any other technology.
This is according to a recent poll from business technology consulting firm Protiviti. It found that while only 28% of respondents believe AI and machine learning, including generative AI, represent a significant technology risk today, 59% say they will become a significant threat over the next two to three years. Even so, professionals do not feel confident in their ability to manage these risks relative to other technologies.
The survey found that just 13% of the respondents are confident in the proficiency of their IT audit teams to evaluate technology risks related to AI (down one percentage point compared to last year), and just 17% feel their organizations are prepared to address these risks in the next 12 months (down three percentage points from last year). In both cases, AI represented the technology risk they felt least capable of identifying and handling.
In contrast, poll respondents felt most confident about cybersecurity, both in terms of evaluating cybersecurity risks (58%, a five percentage point gain from last year) and handling them (63%, an eight percentage point gain from last year). Below that came risks such as regulatory compliance, data privacy, cloud computing, data governance and integrity, IT talent management, transformations and system implementations, software development, technology resiliency, third party/vendor risk, technical debt and aging infrastructure, and the Internet of Things. Internal audit and IT professionals felt more confident in their ability to evaluate and handle risks in all of these areas than in AI.
As for which AI risks in particular concern them, the most common answer was security risks such as hacking, adversarial attacks and data poisoning. This was followed by privacy risks like data misuse or consent violations; operational risks like system failures; a shortage of qualified AI experts; regulatory risks; the risk of AI not integrating well with existing systems; competitive risks like being outpaced in AI adoption by rivals; ethical risks like bias and lack of accountability; and reputational risks.
However, the report also noted internal auditors are already actively looking for ways that AI can be used to reduce risk. It found that 52% of internal audit leaders are researching the future use of AI at their organizations, 39% are already auditing the use of AI in their organizations, and 39% are already using AI tools in their audit activities. This seems to have sharpened their ability to identify risk in certain areas: 76% of organizations using AI tools in technology audits perceive a high level of cybersecurity risk in the next year, compared to only 65% of those that do not use AI tools. Similarly, 71% of those using AI tools perceive a high level of data privacy and compliance risk, compared to only 58% of those that don't.
"I'm excited about the increasing adoption of AI in internal audit, and making this a priority is essential for departments to stay ahead of emerging risks and opportunities," said Angelo Poulikakos, global leader of the firm's Technology Audit and Advisory practice. "Internal audit also has a unique opportunity to guide the business in adopting AI responsibly by advising on effective risk and control governance."
While AI is not seen as a significant short-term risk, the report recommended that leaders still be proactive in assessing the ethical, operational and reputational challenges it poses, especially considering how quickly the technology is being adopted in the market. Leaders, said the report, should give AI immediate attention, focusing on determining whether their organizations are establishing governance and leveraging frameworks, like the NIST Risk Management Framework, so they can be ready for future AI implementations.
[Accounting Today]