Moderated by Emily, Digital Transformation Consultant at Hyperbots
Emily: Hello, hi everyone. This is Emily, a digital transformation consultant at Hyperbots. Good morning, good evening, good afternoon, depending on where you are, for today’s session on a risk mitigation framework for AI adoption in finance. I’m really glad to have Cecy on the call with me, who is an experienced CFO. So thank you so much, Cecy, for being a part of this discussion. Before we dive into it, can you please introduce yourself?
Cecy: Sure, my name is Cecy Graf. I am based in Seattle, Washington, and my experience has been in law firm management for almost the last 20 years.
Emily: Got it, thank you so much for your introduction, Cecy. So today’s session is essentially divided into three portions. The first section examines the various risks linked to the adoption of AI, along with strategies for mitigating those risks. In the second part, we’ll cover the risk mitigation framework in depth, and in the third segment, we’ll delve into the financial and accounting sub-processes to determine which ones yield the highest return on investment with the least associated risk, thereby distinguishing the most lucrative from the less profitable ones. So to kick things off, Cecy, as you might be aware, AI has started seeing real adoption in finance and accounting, and it is likely to accelerate. What risks do you see with AI adoption?
Cecy: I think the biggest concern that we have, particularly in the legal industry, is around compliance and legal concerns. Making sure that the information that is being provided actually adheres to all the things that we need to adhere to. There’s more, but those are the biggest concerns for us right now, and they are a real barrier to adoption.
Emily: Compliance and legal concerns. Let’s go deeper into this particular aspect. Please elaborate on the promised AI benefits versus the real returns.
Cecy: It’s really exciting. AI has the potential to completely revolutionize how we do things, the way computers did, having a huge impact on how we process financial transactions and provide the information that is gleaned from those transactions. But there’s a classic garbage in, garbage out challenge. You only get high-quality output if you’re putting in high-quality inputs. So we have to be capturing good information, and this is of course not specific to AI; we have this problem with any of our information delivery. We have to be providing the right data points so that we can get the best outcome out of our AI tools.
Emily: Yeah, that’s correct. What would your recommendation be to really mitigate this particular risk?
Cecy: Data quality. At this point, it is just so much more important, if we’re going to be automating and applying artificial intelligence to our data, that our data is crisp and clean and as perfect as possible. So really having good data governance strategies in place so that you can truly leverage the power of AI.
Emily: Got it, understood. Moving on to the next one, do you see security as a concern, and what exactly is the risk there?
Cecy: In the legal industry, we have very strict client confidentiality rules and obligations. Any threat to our data in terms of leakage or inappropriate use is a huge concern for us. Cybersecurity in general is a major focus for the industry because we, like every other industry, are under threat. But the risk of AI, and the things that concern people about AI in the legal industry, really come back to how we ensure client confidentiality. How do we protect against leakages? How do we protect against data breaches?
Emily: Do you have any recommendations to alleviate these security-related risks?
Cecy: Again, this isn’t new to AI, but having very strong data protection measures in place and making sure that your certifications, whether ISO or SOC 2, are in place. These certifications and the rigor required to obtain them protect your data. The challenge is with openly accessible AI tools, where people don’t realize how open these tools are. So, having very strong effective-use or appropriate-use policies in place, and ensuring everyone in the environment is familiar with those and understands the associated risks, is crucial.
Emily: True, I completely agree on that. So, Cecy, the perception of AI leading to job loss is real. What are your comments on that?
Cecy: I think there’s going to be a shift in jobs, not necessarily a loss of jobs. We had the same concerns when we started implementing computers in the workplace. People thought they were going to lose their jobs, but that didn’t happen. The jobs changed, the duties changed, but jobs didn’t disappear. We’re seeing that now with AI; it’s creating more work for lawyers as we navigate and figure out the necessary structure around this. I’m not as concerned about job loss; it’s more about positioning ourselves for the jobs of the future.
Emily: Got it. Any recommendations on how to change this perception in people?
Cecy: Investing in our people is key. AI is an exciting, innovative tool that can change how we perform and allow us to deliver higher-value work. AI can take away a lot of the grunt work, freeing people up to upskill and expand their horizons rather than fearing that AI will dictate how humans function in their roles.
Emily: From a finance and accounting context, do you see that as a real challenge?
Cecy: I don’t see it as much. It’s more about how you have your effective use policies in place and how you utilize and train your people to use those tools. AI is more about informing how people do what they do, providing better data to drive data-driven decisions rather than just following your gut. We’re not robots; people should direct the AI tools.
Emily: I completely agree. What is your suggestion to overcome any challenge, however minuscule?
Cecy: It’s all about culture and training your people, making sure that your team feels invested and engaged in the process. Creating that security within your environment that this isn’t a threat but an opportunity.
Emily: Got it. Revisiting one of the topics we discussed, the risk of AI output not being trustworthy, especially in finance, is one of the biggest risks. Can you share some examples?
Cecy: Sure. My biggest fear is getting inaccurate forecasts. There’s human error today, but if we are completely dependent on AI without applying our knowledge and expertise, we can end up managing to a forecast that is completely off the rails, which doesn’t position us to succeed. And compliance reporting is another area of concern. If we are dependent on AI for our compliance reporting without any checks and balances, we risk being out of compliance.
Emily: Can you suggest some methods to reduce this risk?
Cecy: It comes back to data quality and data governance, ensuring high-quality inputs. You need testing and validation, controls around your processes and approvals, human oversight, and transparency. Involving your team and being transparent about how AI is used is crucial.
Emily: Got it. Thank you so much, Cecy, for talking to us about the various risks associated with AI adoption and the strategies for mitigating these risks. It was great speaking to you today.
Cecy: Thank you, Emily. Thanks for having me.
Emily: Welcome back, Cecy. In the last segment, we covered the various risks linked to AI adoption. In this segment, we’ll dive deeper into the risk mitigation framework from a holistic perspective. There are two broad views on AI’s impact on compliance: one view is that AI improves auditability, visibility, transparency, and data-driven decisions, resulting in better compliance. The counter view is that one should take special measures to ensure compliance, especially where AI is involved. What are your views on that?
Cecy: I think both are true. AI has tremendous potential to significantly improve our compliance efforts. We can automate processes, enhance data quality, and improve compliance. But we still need oversight. A balanced approach is essential, recognizing both the potential and the challenges and risks associated with AI.
Emily: Got it. If you had to draw a risk mitigation framework for AI adoption, what critical components would you advise CFOs to include?
Cecy: As a CFO, it’s always about the return on investment. Quantifying the ROI and evaluating the associated risks is critical. This is true not just for AI but for any major decision.
Emily: Would you like to comment on the return on investment or ROI, especially in terms of AI-led transformation in various finance and accounting processes?
Cecy: Sure. Where there is high risk, there is the potential for high return. In terms of ROI, mergers and acquisitions are at the top of the list despite the high risk. The potential ROI of AI in M&A is significant, enhancing due diligence, market analysis, and integration planning, potentially saving millions and creating value through informed decision-making and strategic alignment. Next on my list is financial planning and analysis. AI can significantly improve forecasting, budget optimization, and strategic planning, directly impacting an organization’s financial health and growth trajectory. The ability to make more informed investment decisions offers substantial returns.
Emily: So, at the other end of the spectrum from mergers and acquisitions, where would you place expense management, and can you elaborate on why it carries the lowest risk while M&A carries the highest?
Cecy: Expense management is low risk because it is already highly standardized. The processes are routine and involve less complex decisions, making them more amenable to AI automation. Errors in expense management generally have a limited financial impact compared to errors in other financial processes. There is usually a wealth of historical data available, making it easier for AI systems to learn and make accurate predictions. On the other hand, mergers and acquisitions involve high stakes, complex negotiations, legal considerations, and strategic decisions that require a deep understanding of multiple variables. While AI can greatly assist, the inherent complexity and high risk involved in M&A mean that human oversight and strategic thinking remain critical.
Emily: Alright, any closing comments before we wrap up this session?
Cecy: I would just reiterate that adopting AI is about being informed, cautious, and strategic. It’s about leveraging the technology to enhance your capabilities while understanding and mitigating the risks. Balancing innovation with oversight and continuous learning is key to successful AI integration in finance and accounting.
Emily: Thank you so much, Cecy. It was a pleasure talking to you.
Cecy: Thank you, Emily. It was great speaking with you too.