Deepak: Hi, welcome everyone to a panel discussion titled "Transforming Finance and Accounting Through Multi-Agent Collaboration." My name is Deepak, and John Silverstein, Brian Kalish, and Rajeev Pathak are our panelists. I will moderate and anchor the session. This will be a 60-minute session, including 45 minutes of discussion and a 15-minute Q&A session at the end. Please feel free to drop your questions in the comments as we progress through the session. Let me start by introducing our panelists. John Silverstein is the VP of FP&A at XR Extreme Reach. With over twenty years in Fortune 500 companies and startups, John is known for his data-driven approach and transformative impact on financial processes. Welcome, John. It's great to have you. Next is Brian. Brian is a financial planning and analysis expert with thirty-plus years of experience in financial leadership, consulting, and analysis. He combines extensive industry expertise with a passion for using technology to enhance financial decision-making. He loves engaging with extensive networks to provide relevant insights, tailored training, and custom tools and solutions that have a true bottom-line impact. Welcome, Brian. And lastly, we have Rajeev Pathak, CEO and co-founder at Hyperbots, with over thirty years of experience in building technology products and businesses for global markets. Rajeev has managed technology businesses for Wipro as GM and vertical head, handling a business of over a hundred million dollars. Welcome, Rajeev. Great, let's get started. Good morning, John and Brian. Good evening, Rajeev. Hello, everyone in the audience. In the next hour, we will discover how cutting-edge agentic AI systems are reshaping the finance and accounting landscape through collaborative multi-agent frameworks. This webinar explores examples of how multiple AI agents collaborate to handle complex business tasks in F&A.
We will organize our discussion into three sections.
In the first section, we will discuss a few use cases for AI agents in different F&A functions.
In section two, we will talk about why these agents need to talk to each other and elaborate on the methods of collaboration between these agents.
In the third section, we will discuss the mechanisms for humans to control these multi-agent networks, and we will close this session with time for questions and comments.
All right, so let’s start with our first question, which is for Rajeev.
Deepak: Rajeev, can you elaborate on what an AI agent is? How different is it from a normal software component?
Rajeev: Sure. Thank you so much, Deepak. First of all, I would like to welcome the entire audience to this webinar. I would also like to welcome Brian and John as co-panelists, and thank you, Deepak, for hosting us. Your question is very interesting because software-led actions, especially in finance and accounting, have been in existence for the last fifty years. So, the first obvious question that comes up, as you rightly asked, is what an AI agent is and how different it is from software. I would say it is different from normal software in four dimensions. The first one is intelligence and autonomy. What this means is that an AI agent can act on its own in a fairly autonomous manner with virtually negligible need for a human in the loop, unlike software. Just to give you an example, if you are a person working in finance and accounting and you receive a clarification email from a customer on an invoice, a normal email software requires a human to open and respond to it. An autonomous email agent will understand the context of what the customer is asking and navigate through the data associated with that customer query. It will be able to auto-generate the response and send it to the customer in an autonomous manner without any human touch. That is an example of an autonomous email agent responding to a customer DSO query, compared to a normal email client where a human has to read, interpret, and respond manually. So, autonomy and intelligence are key elements of an AI agent.
The second is learnability. As I said, intelligence encapsulates learnability. These agents are self-learning. They learn through human feedback in the loop, which is an essential ingredient of any machine-learning-based system.
The third and most recent advancement is reasoning. When an agent takes an action, it should be able to outline the reason for doing so. For example, an AI agent responding to a customer’s email query on a DSO clarification should be able to explain not only to the customer but also to a controller or orchestrator why it is responding in a particular way.
Finally, the fourth element is that it has to be action-oriented. You could have an autonomous, highly intelligent model, but if it does not result in any action, then it is not an agent. For example, if an AI system can understand and interpret a customer’s email but does not take action by responding appropriately, then it cannot be considered an agent.
To summarize, autonomy, learnability, reasoning, and the ability to take action are the four elements that differentiate an AI agent from a traditional software component.
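The four properties Rajeev outlines can be made concrete with a minimal sketch. This is purely illustrative: the class, its methods, and the email shape are hypothetical, not Hyperbots APIs, and a real agent would use learned models rather than keyword checks.

```python
# Illustrative sketch of the four agent properties: intelligence/autonomy,
# learnability, reasoning, and action. All names here are hypothetical.

class EmailAgent:
    def __init__(self):
        self.feedback_log = []  # learnability: stores human corrections

    def perceive(self, email: dict) -> str:
        # Intelligence: classify the intent of the incoming message
        # (a real agent would use a trained classifier or LLM here).
        text = email["body"].lower()
        return "dso_query" if "invoice" in text or "payment" in text else "other"

    def reason(self, intent: str) -> str:
        # Reasoning: produce an explanation for the chosen action.
        return f"Classified as '{intent}'; drafting a reply from customer records."

    def act(self, email: dict, intent: str) -> dict:
        # Action: autonomously generate and send a response (stubbed here).
        reply = f"Re: {email['subject']} - details attached for your {intent}."
        return {"to": email["from"], "body": reply}

    def learn(self, correction: str) -> None:
        # Learnability: record human feedback to adjust future behavior.
        self.feedback_log.append(correction)

agent = EmailAgent()
msg = {"from": "cust@example.com", "subject": "Invoice 42",
       "body": "Why is this invoice unpaid?"}
intent = agent.perceive(msg)
explanation = agent.reason(intent)
response = agent.act(msg, intent)
```

The key structural point is that all four methods live in one loop: an agent that perceives and reasons but never calls `act` would, by Rajeev's fourth criterion, not be an agent at all.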
Deepak: Understood. Thanks. But can you give an example of an agent and a non-agent to help clarify?
Rajeev: Yes. I just gave an example of an AI agent in the context of a DSO email being responded to. I’ll give another example. In Hyperbots, we have a series of agents that together form a super agent, which we call an invoice processing copilot. This agent can read emails automatically, understand their content, determine which emails contain invoices, parse those invoices, match them, and take action by posting them into the general ledger. This is an example of an AI agent in the context of Procure-to-Pay. Now, for an example of a non-agent: earlier, I mentioned a basic email client. From a finance and accounting perspective, let’s take payroll processing as another example. While some specific elements may involve AI, the overall use case of payroll is not well-suited for AI agentic applications. Payroll processing is fixed—employee salaries, deduction rules, and processing logic are all predetermined. Since input, output, and logic are all fixed, a rule-based software solution is sufficient. That makes payroll processing a non-agentic function, unlike invoice processing, which can benefit from AI agents. I hope this answers your question, Deepak.
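The invoice-processing copilot Rajeev describes is essentially a chain of steps: detect invoice emails, parse, match, and post to the general ledger. The sketch below is a hedged illustration of that chain; the function names, data shapes, and matching rule are assumptions for this example, and real systems would use OCR/LLM extraction and richer matching logic.

```python
# Hypothetical sketch of an invoice-processing pipeline: detect invoice
# emails, extract fields, match against a PO, and post to the ledger.

def is_invoice_email(email: dict) -> bool:
    # Step 1: decide which emails carry invoices (a classifier in practice).
    return any(a.endswith(".pdf") and "invoice" in a for a in email["attachments"])

def parse_invoice(email: dict) -> dict:
    # Step 2: extract structured fields (stubbed; real systems use OCR/LLMs).
    return {"vendor": email["from"], "amount": 1250.00, "po": "PO-7"}

def match_to_po(invoice: dict, open_pos: dict) -> bool:
    # Step 3: match the invoice against the open purchase order.
    po = open_pos.get(invoice["po"])
    return po is not None and po["amount"] == invoice["amount"]

def post_to_gl(invoice: dict, ledger: list) -> None:
    # Step 4: post a balanced entry to the general ledger.
    ledger.append({"account": "AP", "credit": invoice["amount"]})
    ledger.append({"account": "Expense", "debit": invoice["amount"]})

ledger = []
open_pos = {"PO-7": {"amount": 1250.00}}
email = {"from": "vendor@example.com", "attachments": ["invoice_42.pdf"]}
if is_invoice_email(email):
    inv = parse_invoice(email)
    if match_to_po(inv, open_pos):
        post_to_gl(inv, ledger)
```

The payroll contrast follows directly: in payroll, every step above would be a fixed rule with fixed inputs, so plain software suffices; it is the judgment inside steps 1 to 3 that makes the invoice case agentic.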
Deepak: Indeed, that does clarify the difference between agents and non-agents. Moving on, our next question is for John. John, can you break down some of the tasks in FP&A that humans do in detail? Which of these tasks could be good use cases for agents?
John: Sure. Thanks, Deepak, and welcome, everybody. I’ll break it down into three areas:
- Data Gathering and Consolidation
  - Pulling data from multiple systems (ERP, CRM, spreadsheets, external databases, etc.)
  - Manually validating and cleansing the data
  - Consolidating data from different systems into a presentable format
  - AI Agent Potential: AI can automate integration, continuously monitor sources, detect inconsistencies, and clean data instantly.
- Data Cleansing and Transformation
  - Manually cleaning raw data (removing errors, mapping fields)
  - Ensuring different naming conventions align (vendor codes, financial metrics)
  - AI Agent Potential: AI can recognize patterns, perform adaptive mapping, and continuously validate data for accuracy.
- Budgeting and Forecasting
  - Manually analyzing revenue, expenses, and cash flow
  - Updating assumptions and running scenarios
  - Ensuring financial reports are aligned
  - AI Agent Potential: AI can run scenario analyses, detect anomalies, and automate forecast updates.
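To make the cleansing and mapping task concrete, here is a minimal sketch of reconciling vendor naming conventions across systems, one of the manual chores listed above. The mapping table, record shapes, and "send to human review" behavior are illustrative assumptions, not any specific product's schema; a real agent would learn the mapping adaptively rather than use a hard-coded table.

```python
# Hedged sketch: normalize free-text vendor names from different source
# systems to one canonical code, dropping incomplete rows for review.

CANONICAL = {
    "acme corp": "ACME-001",
    "acme corporation": "ACME-001",
    "globex": "GLBX-002",
}

def normalize_vendor(name: str) -> str:
    # Map a free-text vendor name to a canonical code.
    key = name.strip().lower().rstrip(".")
    return CANONICAL.get(key, "UNMAPPED")

def cleanse(records: list) -> list:
    # Drop rows with missing amounts and attach canonical vendor codes.
    cleaned = []
    for r in records:
        if r.get("amount") is None:
            continue  # in a real pipeline, flag for human review instead
        cleaned.append({**r, "vendor_code": normalize_vendor(r["vendor"])})
    return cleaned

rows = [
    {"vendor": "ACME Corp", "amount": 100.0},
    {"vendor": "Globex", "amount": None},
    {"vendor": "Acme Corporation", "amount": 50.0},
]
result = cleanse(rows)
```

The "AI Agent Potential" in the list above is precisely what replaces the static `CANONICAL` table: pattern recognition that proposes new mappings and routes low-confidence ones to a human.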
Deepak: Thanks, John. That’s a very in-depth and helpful response. Brian, can you outline some human tasks in reporting and visualization, specifically variance analysis, and potential AI agents for the same?
Brian: Absolutely. It’s a pleasure to be with everyone here today. When we think about reporting and visualization, human tasks involve generating reports, analyzing trends, and identifying variances. Many still rely on spreadsheets or BI tools. We’re also creating dashboards and charts for management and the board, really for whoever the audience of our analytics is. Then we’re interpreting the results, adding on to what John and Rajeev have said. That’s where the action takes place. We shouldn’t be doing any analysis that isn’t driving an action at the end of the day. We’re interpreting our results, highlighting key events, and focusing on what’s important. Then we go to a very manual task—manually drafting the commentary.
As someone well-versed in AI and agentic AI, I see tremendous opportunities. We have a wonderful way of collaboration, leveraging what humans are good at and what AI excels at. Consider automated report generation—AI can compile large amounts of data into standard or customized formats without manual intervention. I’m a big believer in IBCS (International Business Communication Standards), which provides tremendous opportunities by setting rules for our reporting.
We want to shift finance and accounting professionals away from focusing on aesthetics and instead concentrate on strategic business partnerships. Standardization and optimization of reporting can improve both speed and depth of insights. We need to maximize the effectiveness and efficiency of both human and AI resources. It’s a collaborative endeavor. A line I often use is: “AI isn’t going to replace people. People who use AI are going to replace people who don’t.” I truly believe that.
Then, we get into narratives—narrative insights are key. In finance, you want to be a quant who tells great stories. With natural language generation (NLG), AI agents can now produce truly data-driven commentary. For example, “Revenue increased by 10% due to product X performing better in region Y.” This isn’t science fiction—it’s reality.
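Brian's revenue example can be sketched as a simple rule-and-template commentary generator. This is a deliberately minimal illustration: a real NLG agent would use a language model, and the materiality threshold and function signature here are assumptions for this example.

```python
# Minimal sketch of data-driven variance commentary, in the spirit of
# "Revenue increased by 10% due to product X performing better in region Y".

def variance_commentary(metric: str, actual: float, prior: float,
                        driver: str, region: str) -> str:
    pct = (actual - prior) / prior * 100
    direction = "increased" if pct >= 0 else "decreased"
    line = f"{metric} {direction} by {abs(pct):.0f}%"
    if abs(pct) >= 5:  # only attribute a driver to material moves (assumed cutoff)
        line += f", driven by {driver} performance in {region}"
    return line + "."

text = variance_commentary("Revenue", 110.0, 100.0, "product X", "region Y")
```

Even this template version shows the division of labor Brian describes: the machine drafts the sentence from the numbers, and the human edits the narrative rather than authoring it from scratch.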
This is what we have today. I’ve been a longtime believer in this, even before the real takeoff in AI. I always believed you could automate the process of creating the MD&A—the management discussion and analysis. The idea is that we’re moving to a world where finance is much less the author and much more the editor, because humans are fantastic at finding the trends they’re looking for and terrible at spotting the ones they aren’t. And when you consider that we’re now dealing with brontobytes—ten to the twenty-seventh power bytes—of data, AI can help us minimize the biases that humans possess. Then you get to real-time dashboards, which is where everyone wants to move. We live in a world of extremely high VUCA: volatility, uncertainty, complexity, and ambiguity. AI can trigger an alert any time there’s a noticeable change, so humans can focus on strategic decision-making rather than spending time on general reporting tasks. The quickness with which we identify changes gives us the ability to seize opportunities faster, but also to respond sooner when there’s a threat or challenge to what we expect to happen in the business. I’ll stop with that.
Deepak: Great. Thanks, Brian, that’s a great answer. We’ll move on to the next section, which is about multi-agent communication and collaboration. So far, we have familiarized ourselves with important use cases for multi-agent deployment in FP&A processes. In this next section, we’ll discuss why these agents need to communicate and collaborate. So Rajeev, back to you: can you elaborate on how agents talk to each other?
Deepak: Thanks for that answer, Rajeev. We have a question. The next one is from Devendra. His question is: if he understands correctly, AI agents can bring two efficiencies, accuracy and speed. However, are there any reported statistics showing that, due to AI, the reporting timeline has been reduced?
Brian: I can jump in. The nice thing is the technology has been around long enough that we’re beginning to see it. The London School of Economics has put out a number of papers, and very broadly, we’re seeing implementations of agentic AI showing ROI of between thirty and two hundred percent. Part of it, and I think everyone on the panel would agree—please feel free to disagree—is almost faith-based: you realize that in any process, if you take the hand-offs out, it goes faster. An order-to-pay transaction that takes thirty days can come down to five days with AI agents, because the work doesn’t sit on somebody’s desk waiting for somebody else to show up. That’s what I’ve seen, but I’d turn to my fellow panelists—if there are other places we can point people to, please share.

Rajeev: Yes, rightly said. From Hyperbots’ perspective, for the customers where these agents have been implemented, there is very clear evidence of up to 80% productivity enhancements in various tasks, including analytics.
Deepak: Great. Thanks for that. I will take one last question. There are a couple more, but I think we have time for one. I’ll pick Rajeev Pathak’s question from the attendees: are AI agents more like background tasks without any UI, or do we have to build UI tools on top of them?
Rajeev: Yeah. Maybe I will begin, and then I will request John to add. Some agents will be background tasks, but as we said, the entire focus of today’s discussion was the agentic network and the collaboration between agents, and between agents and humans. John addressed this earlier: when humans need to collaborate with and control these agents, they need dashboards, alerts, and notifications. Some agent tasks require a user interface, while others can be done fully in the background—but humans still need to know what happened, perhaps through notifications and emails. So, depending on the nature of the agent and the task it is performing, the need for a user interface and the model of communication between the agent and the human have to be decided.

John: I would agree with that. Agents can run in the background, but there are two avenues on whether you actually have to build a UI on top of them. There can be agents in the background using tools that are already native to your ERP or your BI tools, which may already be presenting the dashboards and the UI, so you may not necessarily have to build something new on top of an AI agent running in the background.
Deepak: Great. Hope that answers your question, Rajeev. Let’s attempt one last question; I think we have a few more minutes left. Here it is: AI can hallucinate or make mistakes, especially in a multi-agent setting where this effect may compound. As a CFO, what would help you trust an AI agent’s output? John, Brian, any input on that?
John: I think the key is in the second part of the question: it’s trust, but verify. Like any time we see a new process, it just takes a while to see the output and make sure it holds up. As a finance person talking about technology, the analogy can be a little dangerous, but the concept is that you run things in parallel until you become comfortable with the performance. And humans can hallucinate too, which is part of the problem we see all the time. How many times do you get data back from an analyst and it’s missing data, or they pulled it incorrectly, or they pulled the wrong period, or they made a bad assumption, or had a bad formula? So you have the same thing from the human element. But verification is already embedded in our DNA as finance professionals: audits, checks, and balances. So in a lot of ways, when it’s explained and you see the same processes in place to verify AI as you have to verify humans, then as a CFO you should be able to trust it the same as a human in some cases, and having them work together just makes you stronger.
Brian: Yeah. For example, if one agent is producing an output where there is a possibility of hallucination, then you build another agent—a validation agent—that verifies the output of the first. If the output goes through audit and validation, which to some extent can be done by other AI agents themselves, then a lot of the prevention of such risks is handled by another set of AI agents, and eventually it goes to a human. The human is in the loop wherever necessary, so the risks get minimized. As John said, there is always risk; even humans make mistakes. As long as you have mechanisms of verification, validation, and audit, either through other AI agents or through humans, that risk is mitigated.
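The producer/validator pattern described above can be sketched in a few lines. This is a hedged illustration: the agent functions, the balanced-books check, and the escalation strings are assumptions for this example, not a specific product's workflow.

```python
# Sketch of the validation-agent pattern: one agent produces an output,
# a second agent independently checks it, and failures escalate to a human.

def reporting_agent(ledger: list) -> dict:
    # Producer: summarize the ledger (the step that could hallucinate
    # in an LLM-based setting).
    total_debits = sum(e.get("debit", 0) for e in ledger)
    total_credits = sum(e.get("credit", 0) for e in ledger)
    return {"debits": total_debits, "credits": total_credits}

def validation_agent(summary: dict, ledger: list) -> bool:
    # Verifier: independently recompute, then check the books balance.
    recomputed = sum(e.get("debit", 0) for e in ledger)
    return summary["debits"] == recomputed and summary["debits"] == summary["credits"]

def run_with_verification(ledger: list) -> str:
    summary = reporting_agent(ledger)
    if validation_agent(summary, ledger):
        return "auto-approved"
    return "escalated to human review"  # human in the loop when checks fail

balanced = [{"debit": 100.0}, {"credit": 100.0}]
unbalanced = [{"debit": 100.0}, {"credit": 90.0}]
status_ok = run_with_verification(balanced)
status_bad = run_with_verification(unbalanced)
```

The design choice matters: the validator recomputes from the source data rather than trusting the producer's summary, which is exactly the "audit and validation" layer the panelists describe, with the human as the final escalation path.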
Deepak: Correct. Great. I think we are right on time. Thanks for a very engaging discussion, Rajeev, John, and Brian, and thanks to everyone in the audience for participating and asking some great questions. We really appreciate that. We’ll continue to get together in the future and discuss AI and agentic AI. As always, it was in-depth. Thanks again, everyone, and until next time. Thank you so much. Thank you, Deepak. Thanks.