
“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.”
—Stephen Hawking, theoretical physicist

Generative Artificial Intelligence (Gen AI) is hugely transformative and consequential. Goldman Sachs predicted in March 2023 that some 300 million jobs could be eliminated or degraded by AI. It is clear that AI and Gen AI will have significant implications for the accounting profession.

I recently gave a presentation at an IFAC EdExchange webinar covering three themes: first, the advent of AI and its capabilities; second, why it appears to be a mixed blessing, with great benefits to be reaped but also huge risks to watch out for; and third, how all of this relates to the accounting profession, especially from an ethics angle.

AI is already transforming the accounting landscape
While it brings efficiency and accuracy, accountants must navigate ethical challenges and adapt to new roles. There is no doubt that the future of accounting lies at the intersection of human expertise and AI capabilities. AI already automates routine tasks like data entry, reconciliation, and financial reporting. When combined with robotic process automation (RPA), the results could be remarkable on every dimension: reduced human error, more accurate financial statements, streamlined audit processes, improved efficiency, and enhanced risk assessment. One can envisage a shift in roles: AI might competently handle routine tasks, allowing accountants to focus on strategic analysis, risk assessment, and client advisory. AI also enables real-time monitoring of financial transactions, fraud detection, and compliance, so accountants can beneficially collaborate with AI to enhance decision making.

Arthur C. Clarke, legendary science fiction writer, once remarked, “Any sufficiently advanced technology is indistinguishable from magic.” In New York, they say that while “faster, better, cheaper” is all fine and dandy, in reality you can only get two at a time, not all three.  Thus, faster and better is not cheaper; better and cheaper is not faster; and cheaper and faster is not better. However, AI seems to have overcome this presumed barrier—you can now get all three benefits, simultaneously. In Clarke’s formulation, that would qualify as “magic.” Many hitherto complex tasks can now be completed “faster, better and cheaper” by AI; such spectacular capabilities are nothing short of magic.

At the same time, AI models will most likely inherit biases from their training data, and algorithmic bias is a serious downside (cf. O’Neil, 2016). More concerning are the so-called “AI hallucinations,” wherein AI-generated output can be pure fantasy, completely made up. Further, those unaware of the phenomenon are likely to lend AI a “halo effect” and swallow its output hook, line, and sinker. Called automation bias, this sort of overreliance on emerging technologies further erodes professional skepticism, something that regulators around the world have called out as a concern. Accountants should ensure AI systems are transparent and fair in their predictions. When AI systems handle sensitive financial data, ensuring privacy, confidentiality, and security is crucial. Similarly, accountants must safeguard client information when using AI tools, and the use of AI may even impact auditor independence.

Exploring the ethical issues that arise in the context of pervasive utilization of AI
It is critically important to understand the myriad ethical issues that may arise in the context of emerging technologies. We need to recognize that machine intelligence and emerging technologies are, by definition, insentient and not subject to rewards or punishment, and hence cannot be meaningfully held responsible or accountable. We also urgently need to assess a fundamental limitation of current professional codes of ethics: they consider only professionals who are human beings, and take no account of human-machine interactions or autonomous systems, which can also produce complex ethical scenarios for which no standards or guidance presently exist.

The International Code of Ethics for Professional Accountants (Including Independence Standards), issued by the International Ethics Standards Board for Accountants, contains the following fundamental principles: integrity, objectivity, professional competence and due care, confidentiality, and professional behavior. However, all these principles draw upon and reference human traits and characteristics that can hardly be ascribed to computers and machine intelligence. Nevertheless, the pace of emerging technologies, especially the advent of blockchain and smart contracts, and more recently Gen AI tools such as ChatGPT, has compelled consideration of ethical issues arising out of human-machine interactions, particularly when these are automated using algorithms and code.

Ethical issues involve nuanced and complex interpretations of ethical obligations that only human beings can make; insentient technologies are simply not capable of such analysis. Who will trust a smart contract if its intended operation can be neither read nor understood by a lawyer or an accountant? Because smart contracts are written in computer programming code, they are incomprehensible to such professionals and to other businesspeople.
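To see why smart-contract logic is opaque to non-programmers, consider the following toy sketch, written here in Python purely for illustration (real smart contracts are typically written in languages such as Solidity; the names `Escrow`, `confirm_delivery`, and `release_payment` are hypothetical, not drawn from any actual contract). The "terms" of the agreement exist only as conditional code that executes automatically:

```python
# A toy escrow "contract": payment is released automatically once a
# delivery flag is set. No human reviews the terms at execution time.
class Escrow:
    def __init__(self, buyer: str, seller: str, amount: float):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount
        self.delivered = False
        self.paid = False

    def confirm_delivery(self, caller: str) -> None:
        # Only the buyer may confirm delivery.
        if caller != self.buyer:
            raise PermissionError("only the buyer may confirm delivery")
        self.delivered = True

    def release_payment(self) -> float:
        # Funds move as soon as the condition is met; if the code has a
        # flaw, the flawed logic is the contract.
        if not self.delivered:
            raise RuntimeError("delivery not confirmed; payment locked")
        self.paid = True
        return self.amount


escrow = Escrow(buyer="Alice", seller="Bob", amount=100.0)
escrow.confirm_delivery("Alice")
released = escrow.release_payment()
```

Even in this deliberately simple form, the obligations of the parties must be inferred from control flow and boolean flags rather than from prose, which is precisely why a lawyer or accountant cannot independently verify a smart contract's intended operation.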

To the extent that ChatGPT-type tools are known to suffer from AI hallucinations, we need a framework for understanding and analyzing the ethical scenarios that arise through implemented smart contracts and the use of AI tools. A huge downside is the lack of transparency, and hence accountability, because disputes over these arrangements cannot readily be resolved through the court system.

Auditing is a “relationship business”, and trust is the bedrock foundation for client-auditor interactions
It is necessary to develop a framework that outlines trust-enhancing behaviors as well as trust-depleting behaviors. Perhaps such a framework is what is needed to develop a revamped code of ethics for a world in which human beings are first supplemented by technology and then, with autonomous systems, supplanted by it.

In conclusion
I draw your attention to a relatively new Indian word that officially became part of the English language in 2016.  “AIYO!” is used, especially in South India (including in Tamil, my mother tongue), to express a range of different emotions, including sadness, surprise, fear, or happiness (Cambridge English Dictionary, 2023).

Using this new English word, AIYO, which contains “AI,” I ended with a limerick:

AI’s potential is vast and wide,
But its risks we cannot hide.
From biased data to rogue machines,
The dangers are real, or so it seems.
Aiyo! We must tread carefully to avoid the downside.

--Sridhar Ramamoorti, The University of Dayton


This article was based on a presentation at a recent event organized by IFAC and the International Association for Accounting and Education Research (IAAER). The University of Dayton is a university member of IAAER.

The author wishes to thank Bruce Vivian, Megan Hartman, and Dr. Linda Biek for their editorial suggestions on an earlier version of this article.


Sridhar Ramamoorti

Dr. Sridhar Ramamoorti, ACA, CPA/CITP/CFF/CGMA, CIA, CFE, CFSA, CGAP, CGFM, CRMA, CRP, MAFF, is an Associate Professor of Accounting at the University of Dayton and, since January 2020, a Sustainability Scholar affiliated with the UD Hanley Sustainability Institute. Previously, he was on the accounting faculties of Kennesaw State University in Georgia and the University of Illinois at Urbana-Champaign.

Dr. Ramamoorti has a blended academic-practitioner background with over 35 years of experience in academia, auditing, and consulting. A BComm. graduate of Bombay University, he holds master’s and Ph.D. degrees from The Ohio State University. Earlier in his career, he was a principal with Andersen Worldwide, the National EY Sarbanes-Oxley Advisor, a corporate governance partner with Grant Thornton LLP, and a principal, and later consultant, with Infogix, Inc.

Dr. Ramamoorti is co-author of over 60 papers and articles and 15 books and monographs, including The Audit Committee Handbook (Wiley, 5th ed., 2010); A.B.C.’s of Behavioral Forensics (Wiley, 2013), which has been presented to the FBI Academy; and the textbook Internal Auditing: Assurance and Advisory Services, published by the Institute of Internal Auditors, with translations in French, Spanish, and Japanese. In the last two decades, he has presented his work and spoken at conferences in 16 countries.