While AI is still in the Wild West phase that many new technologies go through, its spread has brought increasing calls from both organizations and individuals to make the field a little less wild — not so much that regulation kills the innovation and vibrancy of this burgeoning field, but enough that serious players can enter the space without fear of exposing themselves to undue risk.
Our experts are particularly interested in measures that can improve the transparency and accountability of AI systems, such as clearly labeling AI-generated content, enabling oversight of a model's decision-making process, and disclosing the data and algorithms involved. There was also strong support for ensuring that these systems are explainable and, especially important for the accounting community, auditable.
“An AI regulation that emphasizes transparency in the training of large language models would be very helpful,” said Mike Gerhard, chief data and AI officer at BDO USA. “Understanding how these models are trained, including the data sources and methodologies used, is critical to ensuring accountability and trust in AI systems. This transparency would be particularly beneficial in areas such as accounting, where using AI to improve audit quality requires a clear understanding of how AI decisions are made.”
Respondents also expressed strong support for regulations that align with principles-based or risk-based approaches, such as the EU AI Act, that focus on safety, fairness and non-discrimination while still allowing room for innovation. This is particularly important given the stakes involved in the rise of AI, especially for traditionally marginalized communities.
“I believe we need to be at the forefront of the ethical issues that arise from AI’s inherent bias problem,” said Pascal Finette, founder and CEO of training and consulting firm Be Radical. “If we let AI perform tasks such as reviewing resumes, making credit decisions or assessing job interviews, we need to be sure that this is done without (hidden) biases. Part of this problem lies on the side of the supplier, but some of this should be codified (and therefore protected) by law.”
At the same time, virtually everyone warned against overly strict regulations, especially at this early stage in the technology’s evolution.
“As the field evolves, I hope we don’t see overly restrictive rules that stifle creativity and progress,” said Avani Desai, CEO of Top 50 firm Schellman. “Instead, I would like to see further regulation that strikes the right balance between ensuring the ethical and safe use of AI and encouraging innovation. Public-private partnerships and feedback loops from organizations conducting the assessments will be crucial to getting it right.”
Will we see more attention to AI regulation in 2025? The only thing we know for sure is that we don’t know anything for sure. But we can make educated guesses. While no one said outright that new regulations would definitely come, some predicted scandals would likely draw attention to the need for further oversight of AI systems.
“AI’s capabilities will continue to evolve,” said Abigail Zhang-Parker, professor of accounting at the University of Texas at San Antonio. “The cost of using AI (e.g., OpenAI’s API service) will continue to decline. There will be more AI applications. At the same time, we will also see more AI-related negative incidents, especially those that raise important ethical concerns and debates.”
Overall, when asked about their most confident predictions, many said that the widespread integration of AI into workflows will accelerate, especially given the increasing prevalence of autonomous AI agents with limited decision-making power. It is widely predicted that the rise of these virtual workers will increase productivity and efficiency in companies. At the same time, some experts warned how this could change employment dynamics and increase the risk of ethical dilemmas.
“I am confident that AI will either reduce the number of new employees that the largest accounting firms plan to hire or lead to further headcount reductions, if not both,” said Jack Castonguay, professor of accounting at Hofstra University and vice president of learning and development at Surgent. “The largest firms have been planning for this phase of AI for years, and they thought this day would come sooner. They know they can do more with less. I’m also pretty sure we’ll see a scandal where a firm abuses AI or cedes its judgment to AI, leading to fraud or a material error that passes an audit. We have already seen this happen in the legal field. It’s only a matter of time before it happens to an accounting firm.”
In this second of three parts, we look at our experts’ answers to:
- What is one AI regulation you would like to see? What is one AI regulation you would not like to see?
- Which AI prediction for 2025 are you most confident about? Something you’re pretty sure we’ll all see next year?
You can read the first part here. Next week we have our third and final part – where we delve into one of the more esoteric aspects of AI.