AI in Finance

Multi-agent Collaboration in Finance & Accounting

Find out interesting insights with John, Brian, and Rajeev.

Moderated by Deepak.

Don't want to watch a video? Read the interview transcript below.

Deepak: My name is Deepak. John Silverstein, Brian Kalish, and Rajeev Pathak are our panelists. I will moderate and anchor the session. This will be a 60-minute session: 45 minutes of discussion and a 15-minute Q&A at the end. Please feel free to drop your questions in the comments as we progress through the session.

Right, so let me start by introducing our panelists. John Silverstein is the VP of FP&A at XR Extreme Reach, with over twenty years of experience across Fortune 500 companies and startups. John is known for his data-driven approach and transformative impact on financial processes. Welcome, John; it's great to have you.

Next, Brian is a financial planning and analysis expert with thirty-plus years of experience in financial leadership, consulting, and analysis. He combines extensive industry expertise with a passion for using technology to enhance financial decision-making. He loves engaging with extensive networks to provide relevant insights, tailored training, and custom tools and solutions that have a true bottom-line impact.

Lastly, Rajeev Pathak is the CEO and co-founder of Hyperbots Inc. With over thirty years of experience in building technology products and businesses for global markets, Rajeev has managed technology businesses for Wipro as GM and vertical head, handling a business of over 100 million dollars.

Good morning, John and Brian, and good evening, Rajeev. Hello, everyone in the audience. In the next hour, we will discover how cutting-edge agentic AI systems are reshaping the finance and accounting landscape through collaborative multi-agent networks. This comprehensive webinar explores examples of how multiple AI agents collaborate to handle a complex business task in F&A. We will organize our discussion in three sections.

  • In the first section, we will discuss a few use cases for AI agents in different F&A functions.

  • In the second section, we'll talk about why these agents need to talk to each other and collaborate on the methods of collaboration between these agents.

  • And lastly, in the third section, we will discuss the mechanisms for humans to control these multi-agent networks and we will close this session with time for questions and comments.

Right. So let's get started. My first question is for Rajeev. Hi Rajeev, can you elaborate on what an AI agent is and how different it is from a normal software component?

Rajeev: Yeah, first of all, thank you so much, Deepak, for inviting me to be part of this panel discussion. Good morning, Brian and John, and good morning to the entire audience. This is an extremely relevant topic, in my view, for everyone who is associated with finance and accounting and has an interest in AI, especially AI agents.

So, coming to your question: what is an AI agent, and how does it differ from a traditional software component in the context of finance and accounting? In my view, an AI agent has the following four characteristics:

  1. Learnability – An AI agent is a component that can learn with human corrections and inputs over time, unlike a software component that does not learn and functions the same way as it is codified.

  2. Reasoning – It should be able to provide a reason for its actions or output. There must be internal logic and reasoning for why it did something a certain way.

  3. Autonomy – It should be fairly autonomous. If it requires human supervision all the time, then it's not a true AI agent.

  4. Action Orientation – It should not just give analysis but also be able to take some action.

To summarize: learnability, reasoning, action orientation, and autonomy.

Deepak: Great, yeah, that's helpful, Rajeev, thanks for that. Just as a follow-up question, can you give an example of an agent versus a non-agent?

Rajeev: Sure, that will help. I can give many examples, but let's pick one from finance and accounting, where Hyperbots is very deeply involved: invoice processing.

Now, if there is a true AI agent for invoice processing, it will be pre-trained and knowledgeable on a variety of invoices and associated reasoning. If humans give input, this agent should improve through that learning.

Second, it should be fairly autonomous. If every output of invoice processing needs human review, it's not a true AI agent. Third, action orientation: reading, coding, and posting an invoice are all actions the AI agent should be able to do autonomously. As a contrast, payroll processing is a rule-based task. You have defined structures and expected inputs and outputs. You don't need an AI agent here; a regular software component suffices.

Deepak: Yes, Rajeev. Thanks. I think that makes the difference between an agent and a non-agent clear. So moving on, my next question is for John. John, can you break down some of the tasks in F&A that humans do in detail, and which of these tasks could be good use cases for AI agents?

John: Yeah, absolutely. Thanks for having me as well. In FP&A, the biggest tasks involve gathering, validating, and consolidating data from data warehouses, ERPs, CRMs like Salesforce, etc. These tasks are often manual and require checking for format inconsistencies and errors.

AI agents can help by automating data integration, monitoring sources, detecting inconsistencies, and flagging anomalies in real-time. They can handle cleansing (removing duplicates, missing values) and applying business logic.

AI can also assist with forecasting, building revenue, expense, and cash flow models, identifying assumptions, and adjusting them based on different input types. AI enables faster iteration and real-time scenario planning that humans often don’t have time to do manually.

Deepak: Yeah, that's a very in-depth response, John. Thanks for that. I'll move on to my next question for Brian. Brian, can you outline human tasks in reporting and visualization, variance analysis, and potentially agents for the same tasks?

Brian: Absolutely. Great to be with everyone here today. I'll just add to what John said: the beauty of AI is that it's the company that learns, not just individuals. On reporting and visualization, humans currently create dashboards, interpret results, and draft commentary, often using spreadsheets or BI tools. AI agents can automate report generation, standardize formatting (for example, using IBCS), and generate narratives using natural language generation (NLG).

They help transform humans from authors to editors, doing 80% of the work. AI agents can find insights that humans might miss, especially subtle trends and anomalies. AI enables real-time dashboards and alerts, which is crucial in today’s volatile environment.

For variance analysis, AI can automatically detect variances, suggest reasons, and monitor data 24/7, unlike humans. These agents allow finance teams to react faster and more strategically.

Deepak: Before we move forward, a follow-up: can you give some examples where an AI agent itself can take action based on very high positive or negative variance?

Brian: Sure. For example, in retail, if a particular shirt is selling fast, an AI agent could automatically notify the warehouse to restock and trigger procurement or production. Similarly, if sales slow down, the agent could apply a 5–10% discount live and adjust supply. Pre-defined, approved scenarios allow agents to act without human involvement.
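Brian's retail example can be sketched as a simple rule: the agent acts autonomously, but only inside scenarios humans have approved in advance. This is a minimal illustrative sketch; all names and thresholds (`SELL_THROUGH_HIGH`, the 5% starting discount, and so on) are assumptions, not a real product's logic.

```python
from dataclasses import dataclass

# Pre-approved thresholds (illustrative assumptions): the agent may only
# act within these bounds, signed off by humans in advance.
SELL_THROUGH_HIGH = 0.80   # fraction of stock sold -> trigger restock
SELL_THROUGH_LOW = 0.20    # sluggish sales -> apply a small discount
MAX_DISCOUNT = 0.10        # the agent is never allowed to exceed 10%

@dataclass
class SkuSnapshot:
    sku: str
    units_sold: int
    units_stocked: int

def agent_action(s: SkuSnapshot) -> str:
    """Return the action the agent takes autonomously, if any."""
    sell_through = s.units_sold / s.units_stocked
    if sell_through >= SELL_THROUGH_HIGH:
        return f"restock:{s.sku}"            # notify warehouse / procurement
    if sell_through <= SELL_THROUGH_LOW:
        discount = min(0.05, MAX_DISCOUNT)   # start at 5%, capped by policy
        return f"discount:{s.sku}:{discount:.0%}"
    return "none"                            # inside normal range: no action

print(agent_action(SkuSnapshot("shirt-042", units_sold=85, units_stocked=100)))
```

The key design point is that autonomy lives entirely inside the pre-defined bounds: anything outside them falls through to "none" and would be escalated to a human instead.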

Deepak: Awesome, I think this is a great example. Thanks, Brian, for that. We'll close part one with the takeaway that actionable insights for strategic decisions are where agentic AI can play a key role. So far, we've familiarized ourselves with important use cases for multi-agent deployment in FP&A processes.

Moving to our next section, let us now discuss why these agents need to communicate and collaborate with each other. So Rajeev, can you elaborate on how agents can talk to each other?

Rajeev: Yeah. I always like to bring in an analogy with humans. In the real world, humans collaborate in three or four primary ways:

  1. Knowledge-Based Collaboration – For example, in a 50-person accounts payable team, tasks are divided: one group receives invoices, another codes them, another does matching, another posts to ERP. Each team knows what inputs and outputs to expect from others. This is mirrored in AI with knowledge graphs, showing task associations and relationships.

  2. Broadcast, Listen, and React – When agents don’t know each other, one broadcasts a task, and whichever agent is best suited picks it up. It's like a chat group where someone posts a task and someone else responds if they have the expertise.

  3. Direct Bonding – A direct, structured relationship between two or more agents. Like a manager assigning A and B to work together with clearly defined responsibilities and dependencies.

These are the three predominant methods of agentic collaboration in the context of finance and accounting.

Deepak: Great. Yeah, that sounds great. Thanks. Moving on, John, can you give an example of how AI agents broadcast, listen, and react in the FP&A context?

John: Sure. In FP&A, these tasks are often role-based, similar to hiring someone for a specific task. For example, there's always someone monitoring actuals, someone else handling forecasting, another handling variance analysis, etc.

If something changes, like in April, where volatility is high, you may need to reassess forecasts daily. One agent could broadcast, “I detected a variance,” and the forecasting agent can pick it up, update assumptions, and inform the budgeting agent.

So the broadcast agent shares a task, and others respond based on pre-established capabilities. This speeds up analysis and response, enabling real-time adjustments across agents.
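The broadcast/listen/react pattern John describes is essentially publish/subscribe: one agent posts a message, and any agent that has registered the matching capability reacts. Below is a toy sketch of that idea; the `Broadcast` class and the handler messages are illustrative assumptions, not a real agent framework.

```python
from collections import defaultdict
from typing import Callable

class Broadcast:
    """Toy broadcast bus: agents register capabilities by topic."""

    def __init__(self) -> None:
        self._listeners: dict[str, list[Callable[[dict], str]]] = defaultdict(list)

    def listen(self, topic: str, handler: Callable[[dict], str]) -> None:
        self._listeners[topic].append(handler)

    def broadcast(self, topic: str, payload: dict) -> list[str]:
        # Every capable agent reacts; responses flow back to the broadcaster.
        return [handler(payload) for handler in self._listeners[topic]]

bus = Broadcast()

# Forecasting agent reacts to variance alerts by updating its assumptions.
bus.listen("variance_detected",
           lambda p: f"forecast updated for {p['account']} ({p['pct']:+.0%})")

# Budgeting agent also listens, so it can queue a budget-impact review.
bus.listen("variance_detected",
           lambda p: f"budget impact review queued for {p['account']}")

# Monitoring agent broadcasts: "I detected a variance."
for response in bus.broadcast("variance_detected",
                              {"account": "ad spend", "pct": 0.18}):
    print(response)
```

Note that the monitoring agent never needs to know which agents exist; it only announces the event, and whoever has the relevant expertise picks it up.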

Deepak: Great. Maybe we can pause for a second. There's a related question from Bhairav: what would you do to control undesired collaboration amongst agents? Anyone want to take that?

Rajeev: Yeah, I’ll take it. Just like we have policies and frameworks in the real world for human collaboration, we need policy formulation and implementation for identity, communication, and collaboration among agents.

This ensures agents don't collaborate in unintended ways or perform undesired tasks. The specifics of such policy formulation can be technical, but the principle remains: governance prevents undesired behaviors.

Deepak: Great. Hope that answers your question, Bhairav. So moving on, Brian, what would be an example of a direct bonding between agents?

Brian: Sure. Think of agents as just another employee. For example, in the financial close process, you might have three agents:

  1. Close Coordinator Agent – Oversees month-end tasks and deadlines.

  2. Treasury Agent – Handles bank reconciliations, cash updates.

  3. Financial Reporting Agent – Compiles financial statements and disclosures.

These agents form a tightly bonded team. The treasury agent reconciles by Day 3, hands off data to the close agent, who locks ledgers and notifies the reporting agent. They follow a predefined checklist and protocol, like a handshake between agents.
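The "direct bonding" Brian describes can be sketched as a fixed handoff protocol: each agent's output is the next agent's precondition, following a predefined checklist rather than open broadcasting. The agent functions and state keys below are illustrative assumptions based on his Day-3 close example.

```python
# Each agent receives the shared close state, does its step, and hands off.
def treasury_agent(state: dict) -> dict:
    state["bank_reconciled"] = True          # reconciliations done by Day 3
    return state

def close_coordinator_agent(state: dict) -> dict:
    # The handshake: the coordinator refuses to proceed without the handoff.
    assert state.get("bank_reconciled"), "cannot lock ledgers before reconciliation"
    state["ledgers_locked"] = True
    return state

def reporting_agent(state: dict) -> dict:
    assert state.get("ledgers_locked"), "cannot report on open ledgers"
    state["statements_compiled"] = True
    return state

# The predefined checklist: a fixed order, unlike ad-hoc broadcasting.
CLOSE_CHECKLIST = [treasury_agent, close_coordinator_agent, reporting_agent]

def run_close() -> dict:
    state: dict = {}
    for step in CLOSE_CHECKLIST:
        state = step(state)
    return state

print(run_close())
```

The assertions model the clearly defined dependencies: if an upstream agent hasn't delivered, the downstream agent halts rather than improvising.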

Deepak: Thanks, Brian. So yeah, moving on, we'll get to the last section. It's a good point to segue into how multi-agent networks work and how humans can control them. A lot is said about human control.

So, John, how do you see humans collaborating with AI agents, and what methods do you see for this collaboration?

John: Humans collaborate with AI agents through reports, notifications, UI dashboards, and commands. For example, in data gathering, humans currently coordinate with departments to locate and clarify data.

In the future, AI agents can automate much of this, while humans oversee data governance policies, verify outliers flagged by agents, and negotiate data sharing changes.

Deepak: Just as a follow-up, can you also outline the same for budgeting and forecasting?

John: Sure. Humans gather inputs from operations, marketing, leadership, product strategy, etc. Analysts hold meetings to collect assumptions, like hiring plans, seasonal trends, and campaign forecasts.

AI agents can generate initial forecasts and highlight assumptions that require human validation. Humans then adjust AI-driven assumptions, such as product demand estimates or departmental constraints before finalizing them.

Deepak: Got it. Thanks, John. Rajeev, next question for you. For reporting and visualization, can you outline the human collaboration and the corresponding agent collaboration?

Rajeev: Sure. Currently, humans use BI tools or Excel to create reports like P&L or variance analysis. It's a highly collaborative human task.

In the agentic world, my favorite is a chatbot-based reporting agent. You can chat with it, and it gives you analytics, reports, and commentary in real-time.

However, humans still have a significant role, especially for external reporting like quarterly earnings. An agent might generate a full balance sheet with commentary, but humans must validate it before sharing with stakeholders. Human review remains essential for compliance, trust, and accuracy.

Deepak: Right. Yeah. Thanks for that, Rajeev.

I'll go to Brian next. For variance analysis, can you talk about tasks requiring deep collaboration between humans at present, and how you see that collaboration happening with agents?

Brian: Definitely. Every company does variance analysis, it’s universal. Today, humans identify variances and then interview stakeholders to understand the reasons.

For example, why did ad spending spike? Why did cost increase suddenly? Finance teams spend a lot of time chasing down the "why." An AI agent can pre-identify anomalies and propose likely drivers based on data. Then a human confirms whether it’s an anomaly, an outlier, or a meaningful trend.

The agent doesn’t skip human oversight, just like an analyst’s report wouldn’t go straight to the CFO without review. Instead, it helps narrow the focus and saves time. Agents bring data quickly to humans, who validate and act strategically.

Deepak: Great. Thanks for that response, Brian. So just in the interest of time, we’ll move on quickly to the next question for John. Would you like to summarize the collaboration happening between human agents and AI agents?

John: Yeah, sure. So, how agents collaborate, it’s really like working with another employee. You define workflows and controls, then assign parts of those workflows to AI.

The AI handles the data-intensive tasks (cleansing, anomaly detection) so humans can focus on decision-making. There's continuous feedback: AI makes forecasts or suggests variance explanations, and humans refine those suggestions. It's an ongoing loop: one question often leads to another, and now the AI can help answer those questions. You get joint decision-making: AI proposes data-driven options, and humans decide.

Also, there's real-time communication. AI proactively alerts humans, speeding up issue resolution. Critically, AI must be explainable and trustworthy. It needs to offer reasoning, and humans must maintain oversight to ensure actions align with business goals and cultural expectations.

Deepak: Very useful summary there, John. So moving on, Brian, for the benefit of our attendees, can you identify some typical tasks that the office of finance performs?

Brian: Absolutely. It’s a great question and opportunity to rethink what we do.

Some broad tasks:

  • Monitoring cash flow and liquidity.

  • Approving expenditures.

  • Reviewing KPIs and financial performance.

  • Overseeing risk and compliance.

  • Capital strategy and financial decisions.

Many of these can be monitored by agents, especially low-value, repetitive tasks. Humans shouldn’t spend time verifying things that are already okay. Instead, agents should monitor everything and alert humans only when action is needed.

For example, agents can track cash conversion cycle metrics like DSO or inventory buildup. If these shift subtly, humans may miss it, but agents won’t. Especially in uncertain environments, the ability to get faster insights is invaluable.

Deepak: And Rajeev, which of these do you think are ideal use cases for agents?

Rajeev: Oh, I love them all.

Brian: Yeah!

Rajeev: But yes, cash flow and liquidity is always priority one. If you don’t have liquidity, you don’t have a business. An agent can track when vendors are paying slower. Maybe they used to pay 7 days early, and now they pay on time, that’s still a signal, but most humans won’t notice it. Agents are ideal for spotting subtle behavioral shifts in suppliers, customers, etc.

Overall, I believe agents have a role across all categories of finance. It’s just a matter of prioritizing and sequencing implementation.

Deepak: Yeah, great. So I think we have a little more time for one last question and maybe a couple from attendees.

So this one is for John: What kind of control would CFOs like to exercise on these AI agents?

John: It’s the same as managing employees. You define:

  • What decisions they can and can’t make.

  • What data they can access.

  • What thresholds they’re allowed to operate within.

You don't let every employee access payroll data; the same goes for agents. You also need explainability and audit trails. AI outputs must be transparent, logged, and traceable for audits and compliance.

CFOs need model governance and validation: if a model goes wrong, it must be easy to roll back or adjust. Forecasts or reports may need revision after consolidation; the same applies to AI. Also, there must be easy override mechanisms: humans must have the ability to intervene when something doesn't make sense.
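The controls John lists (permitted decisions, data access, thresholds, audit trails, and human override) can be combined in a small guardrail wrapper. This is a hedged sketch; the class name, fields, and limits are illustrative assumptions, not an actual governance framework.

```python
import time

class GovernedAgent:
    """Illustrative guardrail: the agent acts only within CFO-set bounds,
    logs every attempt for audit, and can be overridden by a human."""

    def __init__(self, allowed_actions: set[str], max_amount: float) -> None:
        self.allowed_actions = allowed_actions   # decisions it can make
        self.max_amount = max_amount             # threshold it operates within
        self.audit_log: list[dict] = []          # transparent, traceable record
        self.overridden = False                  # human kill switch

    def act(self, action: str, amount: float) -> bool:
        approved = (not self.overridden
                    and action in self.allowed_actions
                    and amount <= self.max_amount)
        # Every attempt, approved or not, lands in the audit trail.
        self.audit_log.append({"ts": time.time(), "action": action,
                               "amount": amount, "approved": approved})
        return approved

agent = GovernedAgent(allowed_actions={"approve_invoice"}, max_amount=5000)
print(agent.act("approve_invoice", 1200))   # within threshold
print(agent.act("approve_invoice", 9000))   # above threshold: blocked, logged
agent.overridden = True                     # human intervenes
print(agent.act("approve_invoice", 100))    # blocked while overridden
```

Rollback, as John notes, then amounts to replaying or reversing the logged actions, which is only possible because the audit trail records everything.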

Deepak: Great, thanks for that, John. Before we wrap up, we'll take one question from Sridhar Srinivas: how can agents work in cases of various direct, indirect, and cross-border tax and compliance requirements?

Brian: Great question. It’s complex, and just like with humans, no single agent can handle it all. You’d likely break this into multiple agents:

  • One focused on compliance.

  • One on governance.

  • One on tax.

They pull in regulatory updates, calculate obligations, and inform humans. The goal isn’t to turn over all decision-making, especially at this level of complexity. But agents help get the right info to the right person faster.

Rajeev: Yes, I agree with Brian. I’ll add an example from Hyperbots. We’ve built an AI co-pilot that checks sales tax applicability on procure-to-pay and order-to-cash line items. It looks at where goods are shipped from and to, applies jurisdiction-specific rules, identifies correct tax categories, and flags errors.

This ensures compliance is maintained, and reduces manual errors.

Deepak: Great. Hope that answers your question, Sridhar. I think we've run out of time; it's been a great discussion.

Brian, John, Rajeev, thank you for your time and insights and thanks to all the attendees who participated and asked questions.

Here’s to more such engaging conversations around finance and AI in the future. Be well!
