
Harnessing digital readiness may be one of the greatest opportunities the accountancy profession has to do the ‘same things differently’ and to ‘do new things’. These elements were covered by IFAC’s Future Ready work, which depicts the profession as being at a hinge point: will we simply do the same things, will we do the same things differently, or will we do different things? This speaks to the essence of the role of automation and augmentation as we contend with the impact of Artificial Intelligence (AI).

This article is designed to encourage PAOs to think about the importance of beginning their AI journey within an ethical framework.

1. You may be more advanced than you think.

We hear constantly about the linkage between digitization and automation, which leads us to discuss the impact of AI on our profession. In many cases, however, PAOs may already be using elements of digitization and AI, sometimes without realizing it, or at least without realizing its full potential.

Before exploring the deployment and expanded use of AI within your organization, I urge you to consider starting with the concept of a Responsible AI Framework (RAF). This builds upon the solid foundations that have been laid before us and maintained by current practitioners to reinforce the trusted position our profession holds within the communities we serve. Starting with an ethical framework or RAF is fundamental to ensuring a viable and enduring framework for cultural acceptance of digitization. It will also serve as a critical defence against malefactors misusing the organization’s data, whether deliberately or inadvertently.

2. So, what could the RAF look like and where do you start? 

Initial questions:

  • How strong is your current Risk Management Framework, and how readily can it adapt to AI?
  • What governance overlays do you have?
  • Have you adopted a strategic mindset towards, and acceptance of, the use of AI?
  • Have you considered the public interest in deploying digital tools, and do you have the capacity within your organization?
  • Can you clearly articulate your Ethics/Risk Sequence in order to identify the appropriate risk mitigations?

3.  Responsible AI Framework

As PwC (2019) has noted, good AI governance is vital to mitigating the risks that the use of AI poses to people and society, and to building trust in institutions. It is recommended that you consider developing a Responsible AI Framework supported by an ‘Adaptive Risk Classification Framework’.

4.  Adaptive Risk Classification Framework

It is important to ensure that a risk classification system is not developed merely to respond to AI risks. Rather, it should complement existing risk frameworks while having the capacity to identify and respond to the velocity and evolving complexity of AI systems.

Critical to the development of a layered framework is ensuring it is consistent with the overall governance structures relating to risk. As Gasser and Almeida (2017) indicate, the overarching layers of social and legal, ethical, and technical considerations present a useful way of assembling the governance model. Once the governance model is identified, risk classifications can be applied, which helps ensure they remain consistent with the risk management framework of the organization.

A range of models has been considered within our sector for the purposes of developing a suitable risk classification framework for the AI context. The development of this framework has been influenced by the emergence of the Singapore Framework (2020). By way of example, an Enterprise Governance Framework (EGF) may comprise:

  1. Control & Governance
  2. Ethical
  3. Technical
  4. Legal & Social

Against this EGF, a series of risk classification approaches, viewed through an AI lens, can then be applied; one illustrative sketch follows below.
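
As an illustration only, the sketch below (in Python) shows one hypothetical way a PAO might record AI risks classified against the EGF pillars listed above. The pillar names come from the example EGF; the risk tiers, register entries and mitigations are assumptions made for the sketch, not drawn from the Singapore Framework or any other standard.

```python
from dataclasses import dataclass
from enum import Enum


class EGFPillar(Enum):
    """Pillars of the example Enterprise Governance Framework above."""
    CONTROL_AND_GOVERNANCE = "Control & Governance"
    ETHICAL = "Ethical"
    TECHNICAL = "Technical"
    LEGAL_AND_SOCIAL = "Legal & Social"


class RiskTier(Enum):
    """Hypothetical classification tiers; adapt to your own risk framework."""
    LOW = 1
    MODERATE = 2
    HIGH = 3
    SEVERE = 4


@dataclass
class AIRiskClassification:
    """One AI risk, classified against an EGF pillar."""
    pillar: EGFPillar
    description: str
    tier: RiskTier
    mitigation: str


# Example register entries (illustrative assumptions only)
register = [
    AIRiskClassification(
        pillar=EGFPillar.ETHICAL,
        description="Chatbot responses may embed bias from training data",
        tier=RiskTier.HIGH,
        mitigation="Human review of member-facing outputs; periodic bias testing",
    ),
    AIRiskClassification(
        pillar=EGFPillar.LEGAL_AND_SOCIAL,
        description="Member data used to train models without explicit consent",
        tier=RiskTier.SEVERE,
        mitigation="Consent checks and data minimisation before any training use",
    ),
]

for risk in register:
    print(f"[{risk.pillar.value}] {risk.tier.name}: {risk.description}")
```

The point of such a structure is simply that each AI risk is tagged to a governance pillar and a classification tier, so it can be rolled up into the organization’s existing risk register rather than sitting in a separate, AI-only silo.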

The following AI Governance formula, adapted from a model used by PwC (2020), may be instructive. It expresses Effective AI Governance as a compound, or function, of:

Effective AI Governance =
[AI Solution] x [Societal Values] x [Organization Values] x [People and Culture] x [Data privacy governance] x [Risk Consequence] x [Risk Likelihood] x [AI Governance tools]

This structured, formulaic approach helps ensure AI Governance is placed in context within the organization. There are multiple layers of risk within any organization and many interrelated aspects, ranging from people and culture to policies, procedures and so on. The following table provides a sample structure for documenting digital risk and ethical approaches within the context of the Enterprise Governance Framework, with a more expansive sample available here.
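
Read as a product, the formula implies that weakness in any single factor drags down Effective AI Governance as a whole. The short Python sketch below illustrates that multiplicative reading only; the 0-to-1 ‘governance strength’ scoring scale and the sample scores are assumptions made for the illustration, not part of the PwC model.

```python
# Factors follow the formula above; the 0-to-1 scale (1.0 = fully addressed
# from a governance-strength perspective) is an assumption for this sketch.
FACTORS = [
    "AI Solution",
    "Societal Values",
    "Organization Values",
    "People and Culture",
    "Data privacy governance",
    "Risk Consequence",
    "Risk Likelihood",
    "AI Governance tools",
]


def effective_ai_governance(scores: dict[str, float]) -> float:
    """Multiply factor scores: a near-zero score in any single factor
    collapses the overall result, mirroring the compound nature of the
    formula."""
    result = 1.0
    for factor in FACTORS:
        result *= scores[factor]
    return result


sample_scores = {
    "AI Solution": 0.8,
    "Societal Values": 0.9,
    "Organization Values": 0.9,
    "People and Culture": 0.7,
    "Data privacy governance": 0.2,  # one weak link drags the result down
    "Risk Consequence": 0.8,
    "Risk Likelihood": 0.8,
    "AI Governance tools": 0.9,
}

print(f"Effective AI Governance score: {effective_ai_governance(sample_scores):.3f}")
```

In practice, how each factor is scored (particularly Risk Consequence and Risk Likelihood) would need to be defined within the organization’s own risk management framework, which is precisely why the formula should be anchored there rather than applied in isolation.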

5. Conclusion

The development of a comprehensive Responsible AI Framework relies on a direct linkage to overall organizational risk management and the Enterprise Governance Framework. Adopting an integrative mindset in applying risk methodologies, together with clear lines of accountability, transparency, competency and an evidence-based approach to the design of the framework, will give a greater chance of success in managing risk and applying ethical principles.

Put simply, an ‘ethics first’ approach to digital readiness is critical. This can start with a thorough understanding and application of risk management in embedding a digital or AI mindset using the risk framework. The risk chain sequence may be used to strengthen digital readiness, assisting in cultural transformation and capitalizing on the benefits of greater efficiency to enable a significant increase in effectiveness.

Andrew Conway
Prof. Andrew Conway, FIPA, FFA, joined the IFAC PAO Development & Advisory Group in January 2019 after being nominated by the Institute of Public Accountants (IPA).

Andrew has been the Chief Executive Officer of IPA since May 2009, which at the time made him, at 28, the youngest CEO of a public entity. He has been recognized for leading IPA’s transformation from its former identity as the National Institute of Accountants into a leading and legally recognized professional accountancy body in Australia and the region. His leadership of IPA led to the organization’s recognition as the most innovative accounting body in Australia by Business Review Weekly (BRW).

Prior to working with IPA, Andrew was an Australian Government Treasury Ministry Chief of Staff and Senior Advisor, and began his career in education and as an accountant for an insolvency firm. Andrew holds academic appointments as a Professor of Accounting at the Shanghai University of Finance and Economics (honoris causa), Adjunct Professor at Deakin University, and Vice-Chancellor’s Distinguished Fellow at Deakin University. Andrew is also regularly called upon to speak on a range of topics including small business policy, wellbeing and artificial intelligence, having completed a post-graduate degree in AI in 2022.