Accounting Firms Scramble to Push Out AI Acceptable Use Policies
June 15, 2023
Ever since ChatGPT exploded onto the scene late last year, people of all stripes have been using, discussing, exploiting, and fearing it. Which category you fit in depends on your age, profession, familiarity with technology, knowledge of decades-old conspiracies about robots taking over the world, and, most importantly, your willingness to embrace the new and exciting. There’s more to it, obviously. But that covers most people who are aware of and using novel AI tools.
Because the accounting profession has a long history of being reactive rather than proactive, especially as it pertains to emerging technology, firms are now scrambling to develop AI best practices and acceptable use policies. In the early days of ChatGPT (by “early days” I mean like four months ago), firms told staff to be cautious with ChatGPT and not to throw client data at it; at the same time, firms were developing their own AI technologies. Said a PwC Australia spokeswoman in February, “Our policies don’t allow our people to use ChatGPT for client usage pending quality standards that we apply to all technology innovation to ensure safeguards. We’re exploring more scalable options for accessing this service and working through the cybersecurity and legal considerations before we use it for business purposes.” Not long after, PwC US announced a $1 billion investment in AI to expand and scale its artificial intelligence offerings. There is absolutely no doubt firms are eager to figure out how to monetize this technology, and fast.
While they’re doing that, there’s the issue of staff use of these tools. We really don’t know where stuff ends up when it’s put into the AI void, but we do know ChatGPT uses your conversations to improve its models. That’s fine if you’re a writer for a shitty accounting news site and need headline ideas because you ate a weed gummy for lunch knowing damn well you had a deadline, not so much if you are a professional handling sensitive data. OpenAI recently started offering an opt-out option for ChatGPT chat history; however, even if you choose this option, they say “we will retain new conversations for 30 days and review them only when needed to monitor for abuse, before permanently deleting.” Boy is it gonna be fun when the data breaches start making headlines.
For now, we get a look at how one firm is handling the issue. Ranked #68 on the Accounting Today Top 100 and the sixth fastest-growing firm in the country, Sax LLP is working on an AI policy for its staff and spoke about it to ROI-NJ:
Leon Grassi happened to be drafting the in-house rulebook for how Sax LLP accountants might use artificial intelligence platforms minutes before speaking to ROI-NJ on the exact same topic.
It’s less a coincidence than it is predictable that he was occupied with the same thing a lot of accounting firms are right now.
“To be honest, an AI acceptable use policy for staff is not something we ever thought we’d be creating at an accounting firm,” he said. “But we’re by no means the progenitor of this. It’s spreading around like wildfire now.”
Grassi, chief marketing officer and head of business development at Parsippany-based Sax LLP, said that, with the emergence of AI-driven language processing tools such as ChatGPT, accounting firms aren’t just thinking about how their staff might interact with these tools far down the line. … It’s already happening.
In other words, as Grassi said, the cat’s out of the bag. The technologies are readily accessible to anyone. And accounting firm leaders don’t see themselves as in a position to tell their staff that they shouldn’t be used for research, emails, articles or other tasks.
Last week I wrote about some people who feel more than comfortable dictating what others can and can’t use AI for: the 40% of HR professionals surveyed by talent company iCIMS for its annual “Class of” report who said that using ChatGPT/AI bots during the hiring process is a definite deal breaker. Screw those guys.
[Going Concern]