Detecting and Preventing Fraud and Anomalies in Finance with Agentic AI
Find out interesting insights with Jon, Dave, and Rajeev.
Moderated by Deepak.
Don't want to watch a video? Read the interview transcript below.
Deepak: Good evening, everyone. My name is Deepak, and I'll be moderating today's session. Our panelists are Jon Naseeth, Dave Sackett, and Rajeev Pathak. This will be a 60-minute session, with 45 minutes of discussion and a 15-minute Q&A at the end. Feel free to drop your comments as we progress through the session.
Let me start by introducing our panelists. Jon Naseeth is the CFO and founder of Cantu Capital Link. With a background in AI and machine learning at Google, Jon excels in delivering solutions that address social and economic needs. Welcome, Jon.
Next, we have Dave Sackett, VP of Finance for Personal Technologies, a FinTech speaker, and a Forbes writer. He shares insights on AI, blockchain, and e-commerce and is committed to lifelong learning and kindness. Dave also co-founded AI1 to boost e-commerce sales. Hi, Dave, it’s great to have you.
Lastly, we have Rajeev Pathak, CEO and co-founder of Hyperbots Inc. With over 30 years of experience in building technology products and businesses for global markets, Rajeev has managed the technology business for Wipro as GM and vertical head, handling a business of over $100 million. Welcome, Rajeev.
Good morning, Jon and Dave, and good evening, Rajeev. Hello to everyone in the audience.
In the next hour, we will explore how agentic AI plays a huge role in detecting and preventing fraud and anomalies in finance and accounting. Our discussion will be divided into three sections:
In the first section, we will discuss real-world frauds and why traditional defenses fall short.
In the second section, we will explore how agentic AI detects, decides, and prevents fraud and anomalies in real-time.
In the third section, we will share practical tips for integrating agentic AI into finance workflows.
We'll conclude with time for questions and comments. Let’s dive in. Jon, my first question is for you: What are the five costliest fraud or anomaly types in the finance landscape today?
Jon: First, thank you for the introduction and the opportunity to be here. The most impactful and costliest fraud types can vary. A lot of times, things that make it into the news aren't the costliest. As a certified fraud examiner, I’ve seen that large-dollar incidents often don’t make the headlines because companies prefer to avoid public exposure.
From a cost perspective, here are the top five:
Vendor Payment Fraud: Fake or inflated invoices are common and cause significant losses. Recently, I saw that the U.S. government is finally implementing proper vendor payment authorization after an issue surfaced.
Revenue Leakage (Order to Cash): This is a big issue I worked on at KPMG. Hidden discount abuse often leads to financial losses.
Business Email Compromise: This type of fraud is a significant threat. I have a story I can share at another time.
Indirect Tax Carousel Schemes: These occur when the wrong tax codes are applied. Some may debate whether that counts as fraud, but when it is done deliberately, it is.
FP&A Model Manipulation: Financial Planning and Analysis (FP&A) can sometimes manipulate data, which can lead to fraud, especially when presented to investors. This type of fraud often leads to lawsuits.
These types of fraud happen frequently, and the consequences can be severe.
Deepak: Very interesting. Thank you, Jon. Moving on, Rajeev, staying with P2P for a moment—what makes mid-market companies a soft target for duplicate payment or over-billing scams?
Rajeev: Good morning. Can you hear me, Deepak?
Deepak: Yes, we can hear you.
Rajeev: Great. As Jon mentioned, vendor payment fraud is a significant concern. Let me elaborate on that. Mid-market finance teams are usually very lean. For example, in P2P or accounts payable, a small team may be managing hundreds of vendors and thousands of invoices, so bandwidth is limited. If the processes are human-driven, there's a higher risk of errors or fraud.
Take the example of onboarding vendors. Vendor information often moves between departments through email in the form of Excel files. This simple channel can easily be exploited for fraud: files can be intercepted, modified, and sent on with altered information that appears legitimate, such as a fake invoice.
In the absence of significant oversight, there is a risk at nearly every point in the processing pipeline. Until there's deep automation and robust security throughout the process, these systems are highly vulnerable to fraud.
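To make the duplicate-payment exposure Rajeev describes concrete, here is a minimal sketch of the kind of check an AP agent could run before a payment is released. The record fields and normalization rules are illustrative assumptions, not a description of any particular product.

```python
from collections import defaultdict

def normalize(invoice):
    """Build a fuzzy key: same vendor and amount, with invoice numbers that
    differ only by punctuation or case treated as potential duplicates."""
    inv_no = "".join(ch for ch in invoice["invoice_no"].upper() if ch.isalnum())
    return (invoice["vendor_id"], round(invoice["amount"], 2), inv_no)

def find_duplicates(invoices):
    """Group invoices by fuzzy key and return groups with more than one entry."""
    seen = defaultdict(list)
    for inv in invoices:
        seen[normalize(inv)].append(inv)
    return [group for group in seen.values() if len(group) > 1]

invoices = [
    {"vendor_id": "V-100", "invoice_no": "INV-0042", "amount": 1250.00},
    {"vendor_id": "V-100", "invoice_no": "inv 0042", "amount": 1250.00},  # re-submitted
    {"vendor_id": "V-200", "invoice_no": "INV-0042", "amount": 1250.00},  # different vendor, OK
]

for group in find_duplicates(invoices):
    print("Possible duplicate payment:", group)
```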
Deepak: Got it, thanks for that. Dave, the next question is for you. In O2C (Order to Cash), why do rules-based receivable systems miss revenue leakage and fictitious credit memo fraud? Can you shed some light on that?
Dave: Sure, thank you, Deepak. In order to cash, there are several ways fraud can occur. For example, sales orders may be recorded in a way that delays or shifts revenue into the future, outside the current audit scope. Credit memos can also be fraudulently applied, such as by backdating them to move the transaction to a future period.
Fraudsters may also use small credit memos, often below materiality thresholds, to avoid detection. If that goes unnoticed, they may become bolder and continue the fraudulent activity. It’s a cycle where small fraudulent actions can escalate over time.
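A rules engine that only fires above a fixed materiality threshold misses exactly the pattern Dave describes. As a hedged sketch, the check below aggregates individually immaterial credit memos per issuer; the thresholds and field names are assumptions for illustration only.

```python
from collections import defaultdict

MATERIALITY = 5000.00       # assumed single-memo review threshold
AGGREGATE_LIMIT = 10000.00  # assumed cumulative limit per issuer per period

def flag_structuring(credit_memos):
    """Flag issuers whose individually immaterial memos are material in total."""
    totals = defaultdict(float)
    for memo in credit_memos:
        if memo["amount"] < MATERIALITY:  # each memo slips under review on its own
            totals[memo["issued_by"]] += memo["amount"]
    return {user: amt for user, amt in totals.items() if amt > AGGREGATE_LIMIT}

memos = [{"issued_by": "clerk_7", "amount": 4200.0} for _ in range(3)]
print(flag_structuring(memos))  # {'clerk_7': 12600.0}
```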
Deepak: Understood. Jon, treasury is a hot target. Can you share a case where traditional bank portal approvals still fail?
Jon: Absolutely. I’ve had a personal experience that aligns with this, and unfortunately, many companies have similar stories. I had just signed up for a loan with a private lender, and the money was deposited into my account. Then, on a Friday night, I received a call saying it was a final underwriting approval call. They wanted verbal confirmation to release more funds.
I was confused because I had already received the loan. It turned out that the call was from someone spoofing the lender’s information. They tried to get my verbal approval to release more money, even though I had already agreed to the loan terms. They claimed they already had documents signed by my CEO, but it wasn’t true.
What happened was that they had spoofed my voice using AI. All I had said was "Hello, this is Jon," and they were able to use that to convince my CEO to sign a contract for a new loan. Since then, we've implemented a system where we always use a keyword to confirm identity, ensuring that such an incident doesn't happen again. It was a harrowing experience, and we worked hard to resolve it.
Deepak: Pretty sinister. Right, moving on, Rajeev, what tax-related anomalies slip past standard ERP tax engines?
Rajeev: ERP tax engines, for example, validate sales tax based on a tax dictionary, but they can’t determine the context. Let’s say a vendor provides an exemption certificate, but it’s outdated. Until there’s an intelligent agent regularly checking vendor exemption certificates, there’s a high probability the vendor may continue not charging sales tax, assuming their exemption certificate is valid. This could be due to ignorance or deliberate action. If the vendor has moved out of the tax-exempt regime, the company could face tax liability, which results in penalties for both the vendor and the company.
Similarly, fraudsters could manipulate domestic invoices as export invoices, treating them as zero-tax invoices. This also leads to liability implications and potential penalties. Without robust tax management and physical verification of taxes charged, this area remains vulnerable.
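As a minimal illustration of the "intelligent agent regularly checking vendor exemption certificates" that Rajeev mentions, the sketch below flags expired or soon-to-expire certificates. The data model and the warning window are assumptions for illustration.

```python
from datetime import date, timedelta

certificates = [
    {"vendor_id": "V-100", "state": "CA", "expires": date(2024, 12, 31)},
    {"vendor_id": "V-200", "state": "NY", "expires": date(2026, 6, 30)},
]

def review_exemptions(certs, today=None, warn_days=60):
    """Return vendors whose exemption certificates are expired or expiring soon."""
    today = today or date.today()
    findings = []
    for cert in certs:
        if cert["expires"] < today:
            findings.append((cert["vendor_id"], "EXPIRED"))
        elif cert["expires"] < today + timedelta(days=warn_days):
            findings.append((cert["vendor_id"], "EXPIRING_SOON"))
    return findings

print(review_exemptions(certificates, today=date(2025, 4, 15)))
# [('V-100', 'EXPIRED')]
```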
Deepak: Understood. Thanks for that. Dave, what does an FP&A anomaly look like in practice?
Dave: From an FP&A perspective, one example could be giving overly optimistic forecasts. For instance, reducing churn from 6% to 3% in a future forecast in the FP&A model. If the model isn’t audited or doesn’t get the right attention at the right time, such claims may mislead external parties, like Wall Street, into thinking things are improving when in reality there’s nothing to back it up. It’s just hidden within the model.
Deepak: Got it. Great. Turning to Jon: what's one subtle signal that screams anomaly yet rarely trips a classic exception report?
Jon: One anomaly to watch for is when someone asks for approval late at night or early in the morning, outside the normal cadence. This is particularly concerning if it involves a large transaction, like a wire transfer. If it’s something significant and we haven’t discussed it beforehand, I’d be hesitant. If I haven’t had prior conversations or if I don’t have a plan in place, I’m not going to approve that transfer.
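Jon's signal translates directly into a simple behavioral rule. A minimal sketch, where the business-hours window and the amount threshold are illustrative assumptions:

```python
from datetime import datetime

BUSINESS_HOURS = range(8, 18)  # assumed normal approval window (local time)
HIGH_VALUE = 50_000            # assumed escalation threshold

def is_suspicious_approval(request):
    """Flag high-value approval requests arriving outside normal hours or on weekends."""
    ts = request["requested_at"]
    off_hours = ts.hour not in BUSINESS_HOURS or ts.weekday() >= 5
    return off_hours and request["amount"] >= HIGH_VALUE

req = {"amount": 250_000, "requested_at": datetime(2025, 4, 11, 23, 40)}  # Friday night
print(is_suspicious_approval(req))  # True
```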
Deepak: Right. Okay, so moving on, let’s now bring in the agentic AI aspect into the discussion. How does it detect, decide, and prevent fraud in real-time? My question for Dave is: What fundamentally separates an agentic model from a standard predictive model?
Dave: A standard predictive model is based on mathematical calculations and certain drivers that produce a probability. After that, a human gets involved to close the loop. But with agentic AI, the agent actually owns the entire closed loop. It uses policies to make decisions on what’s right and what’s wrong and then takes action based on those decisions. The agent can prevent fraudulent actions in real-time, following the predefined policies. It operates within the bounds of those policies, much like a junior accountant or financial analyst executing tasks based on the inputs it receives.
Rajeev: Can I jump in on that one a bit?
Deepak: Sure.
Rajeev: I like how you described it as a junior accountant. I sometimes refer to AI agents as being like college interns—you get good value from them, but there's a trick to it. In financial analysis or fraud detection, combining multiple agents in the right, specific roles produces much more powerful output. I've found that stacking them together leads to better results.
Deepak: Thanks for that addition.
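To make the contrast in this exchange concrete: a predictive model returns a score and stops, while an agent also decides against policy and acts. A schematic sketch, in which the scoring logic, policy bounds, and actions are all assumptions for illustration:

```python
def risk_score(txn):
    """Stand-in for a predictive model: returns a fraud probability."""
    return 0.92 if txn["vendor_id"] not in txn["approved_vendors"] else 0.05

POLICY = {"block_above": 0.9, "review_above": 0.6}  # assumed policy bounds

def agent_step(txn):
    """Closed loop: predict, decide against policy, act, with no human in between."""
    score = risk_score(txn)
    if score >= POLICY["block_above"]:
        return ("BLOCK_PAYMENT", score)       # act autonomously
    if score >= POLICY["review_above"]:
        return ("ESCALATE_TO_HUMAN", score)   # bounded autonomy
    return ("APPROVE", score)

txn = {"vendor_id": "V-999", "approved_vendors": {"V-100", "V-200"}}
print(agent_step(txn))  # ('BLOCK_PAYMENT', 0.92)
```

The design point is the bounded autonomy: the agent only acts inside the policy it was given, exactly like the junior accountant in Dave's analogy.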
Deepak: All right. Moving on to the next question, Rajeev, would you like to illustrate the loop for a P2P invoice fraud scenario? Can you elaborate on that?
Rajeev: Sure. This is the idea and action behind how AI agents can prevent fraud in real time. Let’s take a simple case of invoice processing in the P2P process. If you have an agent that reads invoice emails in a highly policy-driven and rigorous manner, it can immediately eliminate fraud. For example, it will ignore emails from vendors that aren’t listed or are unauthorized. It will also conduct a thorough check for duplicate invoices and cross-check for any manipulations in the invoices.
If this agent works alongside another agent, let’s say a matching agent that pulls purchase orders and goods receipt notes from ERP, it can perform automatic two-way and three-way matching. This matching agent compares the incoming invoice information against the established ground truth in the ERP system. If there’s any manipulation or inconsistency, it’ll be flagged and caught automatically by this agent.
If you design a robust agent pipeline for invoice processing, handling tasks like email reading, matching, journal coding, and posting, the chances of fraud or anomalies slipping through are almost zero.
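A minimal sketch of the two- and three-way matching Rajeev attributes to the matching agent, comparing the incoming invoice against the purchase order and goods receipt note pulled from the ERP. The record shapes and the price tolerance are illustrative assumptions.

```python
TOLERANCE = 0.01  # assumed acceptable price deviation (1%)

def three_way_match(invoice, purchase_order, goods_receipt):
    """Return discrepancies between the invoice, PO, and goods receipt note."""
    issues = []
    if invoice["po_number"] != purchase_order["po_number"]:
        issues.append("PO number mismatch")
    if invoice["qty"] > goods_receipt["qty_received"]:
        issues.append("Billed for more units than were received")
    if abs(invoice["unit_price"] - purchase_order["unit_price"]) > TOLERANCE * purchase_order["unit_price"]:
        issues.append("Unit price deviates from PO beyond tolerance")
    return issues

invoice = {"po_number": "PO-77", "qty": 120, "unit_price": 10.50}
po = {"po_number": "PO-77", "qty": 100, "unit_price": 10.00}
grn = {"po_number": "PO-77", "qty_received": 100}
print(three_way_match(invoice, po, grn))
# ['Billed for more units than were received', 'Unit price deviates from PO beyond tolerance']
```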
Deepak: Great. Dave, do you have an O2C example?
Dave: Sure. In order-to-cash (O2C), a fake credit memo might be created. What agentic AI can do is check against the warehouse management system to see whether a shipment and return actually occurred. That's one check. Then, it compares that data to the ledger. If the AI detects that the credit memo has no basis—there's no return material authorization (RMA)—it won't process it.
The AI doesn’t just detect the issue, it also takes action. It stops the process and flags it. Based on policy and the data inputs, the system can prevent it from becoming fraud in real-time.
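A hedged sketch of the O2C check Dave outlines: before posting, the agent verifies the credit memo against the return records in the warehouse system. The system interfaces and field names here are assumptions for illustration.

```python
def validate_credit_memo(memo, rma_records, posted_invoices):
    """Block credit memos with no matching return authorization or invoice."""
    if memo["invoice_no"] not in posted_invoices:
        return ("BLOCK", "No underlying invoice for this credit memo")
    rma = rma_records.get(memo["rma_number"])
    if rma is None:
        return ("BLOCK", "No RMA on file; no evidence goods were returned")
    if rma["invoice_no"] != memo["invoice_no"]:
        return ("BLOCK", "RMA references a different invoice")
    return ("POST", "Credit memo is backed by a verified return")

rmas = {"RMA-9": {"invoice_no": "INV-500"}}
memo = {"invoice_no": "INV-500", "rma_number": "RMA-404"}
print(validate_credit_memo(memo, rmas, {"INV-500"}))
# ('BLOCK', 'No RMA on file; no evidence goods were returned')
```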
Deepak: Understood. Jon, tax anomalies often appear in both AP and AR. How do agents talk across these silos? Can you elaborate on that?
Jon: Sure. One thing to note is that it’s now April in the U.S., and it’s tax season—lots of personal taxes being filed, so maybe some people don’t want to remember what they submitted. Bad joke. But I’ll talk about taxes in AP and AR.
Tax anomalies can arise on either side. The key here is to spot any inaccurate submissions or anything that goes beyond the acceptable threshold. AI can get involved by identifying anomalies like incorrect vendor tax codes or risk scores. In some cases, you might submit an event to a shared database that others are referencing. If that event is inaccurate yet appears justified, it could cause issues down the line.
For example, if a company uses a tool like TurboTax and submits inaccurate assumptions or triggers, the AI may flag it as acceptable, which could lead to errors impacting others. So, the real concern is ensuring the data feeding into the system is accurate because if it's not, it can cause significant downstream issues.
Deepak: Understood.
Deepak: Got it. Moving on, Dave: FP&A models aren't transactional, so how do agents monitor them? I think this will be one of the last questions before we move on to the next section.
Dave: Sure. With FP&A models, an agentic AI can perform a comparison step. So, if, for example, a fraudster changes the churn rate from six to three percent, the AI can perform a flux analysis between models. Whether it’s in Excel or another software, the AI can check and balance by saying, "Wait a second, you changed something. It was this, and now it’s that. You don’t have a policy for it, and I can’t find any backup." That’s a problem. So, the agentic AI would address this gap by comparing FP&A models that aren’t traditionally audited. The AI would do the work for you.
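A minimal sketch of the flux analysis Dave describes, diffing assumptions between two versions of a model and flagging changes that lack documented backup. The model structure is an assumption for illustration.

```python
def flux_analysis(prev_model, curr_model, justifications):
    """Flag assumption changes between model versions that have no backup."""
    findings = []
    for key, old in prev_model.items():
        new = curr_model.get(key)
        if new != old and key not in justifications:
            findings.append(f"{key}: {old} -> {new} (no documented justification)")
    return findings

prev = {"churn_rate": 0.06, "growth_rate": 0.12}
curr = {"churn_rate": 0.03, "growth_rate": 0.12}  # churn halved silently
print(flux_analysis(prev, curr, justifications={}))
# ['churn_rate: 0.06 -> 0.03 (no documented justification)']
```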
Deepak: Got it. Anything to add before we move on to the next section, which is about practical tips for integrating agentic AI into finance workflows?
Dave: I think it might be a good time to see if the audience has any questions before we continue.
Deepak: Right. Yes. I don’t see any questions at the moment, but I encourage the audience to drop any questions in the comments as we go along. All right. Let’s move on. The next section is about practical tips for integrating agentic AI into finance workflows, and it will focus purely on hands-on guidance for rolling out fraud prevention.
Deepak: Jon, my first question in this section is to you: What data hygiene steps are non-negotiable before any pilot?
Jon: Yeah, I keep seeing this over and over again wherever I go. The key starting point is getting your customer master data clean. That’s critical. Whether it’s vendor data, customer data, or your IDs, having clean data is essential. The biggest issue arises when your databases can’t talk to each other or compare data properly. This has been a problem in government systems where databases—like social security databases—don’t talk to healthcare databases, and people take advantage of that. Clean master data is basic, but it’s also fundamental. Also, if you can afford it in your system, turn on audit trail logging. It’s a balance. Auditors will say, “Turn it on everywhere,” but that comes with a cost, so you have to figure out where the risk is and where it’s most needed.
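As one concrete data-hygiene step of the kind Jon insists on, the sketch below normalizes vendor master records and surfaces likely duplicates before any agent pilot. The normalization rules are illustrative assumptions; real master-data cleansing involves many more fields.

```python
from collections import defaultdict

def vendor_key(record):
    """Normalize name and bank account so near-duplicate records collide."""
    name = "".join(ch for ch in record["name"].lower() if ch.isalnum())
    return (name, record["bank_account"].replace("-", ""))

def duplicate_vendors(master):
    """Return groups of vendor IDs that look like the same underlying vendor."""
    groups = defaultdict(list)
    for rec in master:
        groups[vendor_key(rec)].append(rec["vendor_id"])
    return [ids for ids in groups.values() if len(ids) > 1]

master = [
    {"vendor_id": "V-1", "name": "Acme Corp.", "bank_account": "12-3456"},
    {"vendor_id": "V-2", "name": "ACME CORP", "bank_account": "123456"},
]
print(duplicate_vendors(master))  # [['V-1', 'V-2']]
```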
Deepak: Got it. Dave, quick wins—where should companies place their first agent and why?
Dave: It sounds obvious, but it’s where the pain is. Look at where you're seeing the biggest issues, where people are spending a lot of time trying to detect fraud or anomalies. Set that up as your target for agentic AI. Hopefully, you can create a return on investment there by targeting high-volume transactions and addressing significant pain points in the department.
Deepak: Understood. Rajeev, how do you frame the project so AP clerks, AR analysts, and treasurers embrace agents instead of fearing them?
Rajeev: I’ll answer this in a moment, but let me add to what Dave mentioned earlier about where companies should place their first agent. There are two elements to consider: ROI and fraud/risk prevention. While ROI may make sense from a cost perspective, in the case of fraud and risk, you should focus on areas with high historical risk.
For example, if your accounts payable function has had a lot of tax penalties and liabilities in the past, deploying agents for automated tax verification can prevent anomalies—whether deliberate or accidental—that could result in significant tax penalties. Even if human effort in tax compliance is low, the AI agent can save thousands of dollars in penalties.
On the other hand, there may be a task with lower risk but a large team working on invoice processing. In such cases, the cost savings (cost take-out) could be the bigger ROI, and that task would be a good place to start. So, the decision should balance both risk prevention ROI and cost take-out ROI.
To summarize, when deciding where to deploy agents first, consider both the cost and risk factors, and decide based on what will bring the highest value in each case.
Deepak: Great insight. Thanks, Rajeev.
Rajeev: Yeah. Now, coming back to your next question, which was… um, can you repeat the question?
Deepak: Sure. How do you frame the project so AP clerks, AR analysts, and treasurers embrace agents instead of fearing them?
Rajeev: See, nobody in the company wants fraud or risk to exist, right? So, if the positioning of your AI agent is that it will prevent and detect fraud, everyone will be on board with that. That’s a no-brainer. So, positioning the AI as a fraud detection and prevention tool is key. It’s all about how you present it.
The second part is about tangible benefits. The AI should be positioned as a productivity enhancer, not as a replacement. For example, if you position a month-end book closing agent as one that can close books five times faster, the acceptance of that agent will be very high. Similarly, if you position an invoice processing agent that allows a clerk to process 500 invoices a day instead of 100, the clerk will feel empowered, more productive, and more valuable to the organization.
So, positioning is critical. If you communicate it well, especially around risk prevention and fraud detection, everyone will be on board. From a financial ROI standpoint, position it as a productivity enhancer, and I don’t see why anyone working in finance and accounting would dislike AI agents.
Jon: Can I jump in on that one? I think it segues into the next topics. I was at KPMG for six years, and after doing normal audit work, I helped build a practice called Contract Compliance Services. There were bad things happening in big companies, and we brought in professionals to figure out the facts and resolve the situation. We would address it without necessarily having to prove fraud or bring in lawyers.
There’s a lot of value in using AI and analytics to triage and find anomalies. Fraud prevention techniques can solve business problems, and instead of justifying fraud prevention, we can look at the business value that comes from it. Fraud prevention is a nice byproduct of using these techniques, and there’s plenty of value in just doing the right things that justify the cost.
Rajeev: Absolutely. Actually, I want to jump in on that. Auditors are going to use agentic AI to compare your data. They'll look for anomalies and fraud. But from a company's point of view, I want to ensure my data is clean and that I'm aware of anomalies. I'd want to catch any issues before they pop up in an audit. If something strange shows up, I'd prefer to investigate it myself rather than have an auditor point it out. Most frauds are found not through routine detection but because a whistleblower reports them. So, we should proactively use data to find issues ourselves, rather than waiting for an auditor or whistleblower.
Deepak: That’s an insightful take. Moving on, the next question is about governance and documentation. What kind of documentation would keep regulators and auditors comfortable when agents are taking autonomous actions?
Jon: I like the question, but I want to push back a bit. Auditors and regulators are never truly “comfortable.” They’ll always ask for more and more. From my experience, I prefer using AI to get comfortable within my company. AI helps proactively manage potential fraud risks and anomalies.
Auditors will be happy with whatever I’m doing for my company. I don’t base business decisions on what auditors say. My threshold for comfort is much higher. If they’re not happy with what I’m doing, I’ll just tell them, "Sorry, this is what we’re doing."
To ensure we’re comfortable, we need an audit trail. If changes are being made, I want to know that there's a history record. Especially with AI, when an agent makes changes, I want to ensure we're doing proper tracking and can trace those changes. It’s important to be able to unwind the data if something goes wrong. An audit trail is key.
Rajeev: Actually, I can jump in on that too. Making auditors happy is a different story. When it comes to AI, auditors don’t just want to hear that the agentic AI handled everything. They want to know the rules, policies, and access the AI has. They need to know which systems are affected by the agentic AI. Instead of having a “black box,” we need to be transparent about how the AI works so auditors can verify it.
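One simple pattern that serves both Jon's audit-trail requirement and Rajeev's transparency point is an append-only log in which every agent action records the before and after state and chains to the previous entry. A minimal sketch; the entry fields and the hash-chaining scheme are illustrative assumptions, not a compliance recipe.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_event(log, actor, action, record_id, before, after):
    """Append a tamper-evident entry: each entry hashes the previous one."""
    prev_hash = log[-1]["hash"] if log else "GENESIS"
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor": actor,          # which agent (or human) acted
        "action": action,        # what it did
        "record_id": record_id,
        "before": before,        # state needed to unwind the change
        "after": after,
        "prev_hash": prev_hash,  # chains this entry to the one before it
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

log = []
append_audit_event(log, "matching_agent_v2", "HOLD_INVOICE", "INV-500",
                   before={"status": "pending"}, after={"status": "held"})
print(log[0]["hash"][:16], "links to", log[0]["prev_hash"])
```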
Jon: I feel like I should acknowledge this. I'm a CPA and spent six years auditing, and I hold Certified Fraud Examiner credentials, so I say this with respect to my auditor colleagues. But honestly, if you want to make auditors happy, just go to the beach and enjoy life. Don't wait around for companies to meet every demand.
Deepak: That's a good disclaimer. All right, moving on. Dave, on the architecture—should it be embedded in each system or orchestrated externally? What's your take?
Dave: With agentic AI, you can have external orchestration, meaning it can tie into external databases without manipulating those databases. It can read them all, and from that, it can act and make decisions. So you’ve got an agentic AI with a lot more capability that can perform various processing steps—reading information, making decisions, and taking action. And it can be accelerated.
Rajeev: Yeah, I completely agree with you. The advantage of AI agents is that they can communicate across systems that aren't integrated with one another. They can establish correlations among various systems and data, identify anomalies, and recognize patterns across systems, which is complex and difficult for humans to do, especially with large data sources. If you have to wait for an audit to catch anomalies and fraud, it's too late. Agents can prevent or detect these in real time, ensuring that big anomalies don't happen. That's where AI can make a huge impact in securing systems and preventing anomalies.
Deepak: Anything to add, Jon, before we move on?
Jon: No, I think we're good.
Deepak: All right. The next one is for you, Jon—about KPIs. Which KPIs prove valuable at month one, month three, and month 12?
Jon: KPIs in the space of problem detection can be tough because executives often push back, asking, "How are you measuring this? How are you showing your value?" The ideal KPI would be to show how much money you've saved or how much fraud you've prevented. However, a lot of times, prevention itself is the best measure. It can be hard to get real numbers. There’s also indirect value to consider. Even if you don’t directly find fraud, that might be okay if the prevention worked.
The key metrics could include how many anomalies were detected, how much money was saved, and how many false positives you avoided. Let me share a quick story: A friend of mine works for a large fintech company. If they ever found any fraud, the whole company would lose trust, and he'd lose his job. So their KPI is an expected zero—that's their metric.
They use an interesting tool: they send spoofing or phishing emails to their internal employees, trying to get them to click on links that lead to false locations. If someone clicks on one, even if they don’t enter any information, it’s considered a performance hit. If it happens enough, it could lead to a performance improvement plan (PIP). At this point, employees are so cautious that they won’t click on any emails—HR sends out event invites, and nobody clicks on anything to avoid any risk of performance issues.
The point I’m making is that finding the right metric depends on your environment. Zero might be a good KPI if it means nothing was found because you prevented it. So, be creative with your metrics.
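The metrics Jon lists can be computed from a triaged alert log. A minimal sketch, with the alert structure and outcome labels as illustrative assumptions:

```python
def fraud_program_kpis(alerts):
    """Compute simple monthly detection KPIs from triaged alerts."""
    confirmed = [a for a in alerts if a["outcome"] == "confirmed"]
    false_pos = [a for a in alerts if a["outcome"] == "false_positive"]
    return {
        "anomalies_detected": len(alerts),
        "confirmed_incidents": len(confirmed),
        "amount_prevented": sum(a["amount"] for a in confirmed),
        "false_positive_rate": len(false_pos) / len(alerts) if alerts else 0.0,
    }

alerts = [
    {"outcome": "confirmed", "amount": 18_000},
    {"outcome": "false_positive", "amount": 0},
    {"outcome": "false_positive", "amount": 0},
]
print(fraud_program_kpis(alerts))
# {'anomalies_detected': 3, 'confirmed_incidents': 1,
#  'amount_prevented': 18000, 'false_positive_rate': 0.666...}
```

As Jon notes, which of these numbers matters, and whether zero is a success, depends entirely on the environment.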
Deepak: Understood. So, Dave, Rajeev, did you have anything to add on this?
Rajeev: Sure. So, I think the point is that we can have different KPIs in different organizations for fraud prevention. That’s really the takeaway here.
Deepak: Yeah. Understood. Okay, moving on quickly. Dave, how do you keep false negatives (missed frauds) under control while the system learns? How do you do that?
Dave: My approach would be trial and error. You don't want your tool tuned so loosely that missed frauds slip through, nor so tight that you spend all your time chasing alerts. You have to balance your research, internal resources, and the tool itself to train it. During that phase, you'll spend more time as the tool learns, but you're training it to get better and better, leading to fewer false positives and fewer missed frauds. So, it's about balancing training the model and managing the model.
Deepak: Got it. We’re 45 minutes into the discussion. We have one final question for Rajeev, and then we’ll open it up for questions.
Deepak: Rajeev, final thought: What new skills should finance teams cultivate as agents handle 80% of routine reviews?
Rajeev: It's fine. I'm sorry, I'm in an area with very patchy connectivity. I think the question you were asking is what new skills finance teams will need in this AI-driven scenario, correct?
Deepak: Yes, that’s right.
Rajeev: Yeah. The key skills are the ability to handle exceptions and to monitor technology-driven output, trusting the system as you see positive results. Teams will need to move from traditional manual processes and rule-based systems to AI-based systems. They’ll need to identify where more monitoring is required and where they can trust the system fully. This requires a new skill set. It’s like having a few kids at home—you need to decide when to give them independence, when to monitor moderately, and when to monitor closely. This sense of how to monitor AI agents effectively, in addition to formulating policy, is a crucial skill for teams.
Deepak: Great. That’s a great insight. So, we are at the end of the questions we had for the discussion. We’ll now move on to some of the questions from attendees. Let’s start with the first one: How does agentic AI detect fraud patterns that traditional systems miss? Anyone want to take that one?
Jon: Yeah, I can try that. Traditionally, you've got internal controls and people looking at things. Fraudsters can exploit these systems by making transactions appear legitimate. They may keep transactions below detection thresholds or imitate legitimate data to mislead humans. AI, on the other hand, is much more systematic and can verify data in ways that traditional methods might miss. It's that verification step that catches what traditional systems overlook.
Deepak: Got it. All right, moving to the next one. Next question: Which data sources are most critical for effective financial fraud detection with agentic AI?
Jon: The data source that contains the fraud. I'm not being flippant. Fraudsters will always try to find the one data source you're not monitoring, so you need to monitor at a summary level and then drill down into areas that seem more problematic. One useful thing about fraud is that it follows patterns: the fraud triangle and decision trees show that fraudsters usually rely on a limited number of methods. You can study those methods, identify the patterns, and watch for them in the data, digging deeper when something triggers a red flag.
Rajeev: Any data coming from external sources should receive higher attention.
Dave: Yes, fraudsters are always targeting where the money flows, so it's crucial to focus on how they get paid fastest through fraud. That should be your focus.
Jon: Can I just give a shout-out on that?
Jon: Thanks, Dave. That's a great call. There's the situation where someone at work gets in a tight spot and makes a bad choice. That can lead to fraud. Then there are the petty criminals who try to steal because they have the opportunity to do so. Frankly, I think that's on the company for allowing such an environment. But I want to highlight that there are criminals who actively seek to attack and steal from you—using fake credit cards, fake transactions, spoofed emails, all that stuff. This isn't just passive; they're intentionally trying to harm you. It's important to be prepared for that.
Deepak: Great response, Jon. Thanks for the call-out. Now, moving on to the next question: Can you share an example where agentic AI caught fraud that traditional rule-based systems missed? Any example that comes to mind?
Jon: Sure. The term "agentic AI" is relatively new, maybe just in the last couple of years. But using data science and algorithms to find things that rule-based systems miss has been around for a long time. I can share a story. I was working on fraud risk analysis in supply chains, looking at rebates. One large technology company was offering rebates on expensive equipment. The issue was that a piece of equipment had been purchased and returned so many times, and rebates were claimed multiple times, that the company had paid more in rebates than they had earned in revenue from that product. Groups figured out how to cheat the rebate system, and third-party distributors were exploiting it. This wouldn’t have been caught without a multi-variable algorithm analyzing all the data sets and connecting the dots. When you find it, you think, "Oh, that’s obvious," but when you’re dealing with massive data sets in a company, the right tools are necessary to identify it.
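The rebate pattern Jon describes can be surfaced with a simple aggregate comparison once the data sets are joined. A minimal sketch, with the record shapes assumed for illustration (the real analysis he mentions involved many more variables):

```python
from collections import defaultdict

def rebate_vs_revenue(sales, rebates):
    """Flag products where cumulative rebates paid exceed revenue earned."""
    revenue = defaultdict(float)
    paid = defaultdict(float)
    for s in sales:
        revenue[s["sku"]] += s["amount"]
    for r in rebates:
        paid[r["sku"]] += r["amount"]
    return [sku for sku in paid if paid[sku] > revenue[sku]]

sales = [{"sku": "EQ-9", "amount": 40_000.0}]
rebates = [{"sku": "EQ-9", "amount": 15_000.0} for _ in range(3)]  # claimed repeatedly
print(rebate_vs_revenue(sales, rebates))  # ['EQ-9']
```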
Deepak: Got it. I think we've already addressed the question about the most critical data sources for fraud detection with agentic AI, so let's skip that one. One last question: How do you balance AI autonomy with human oversight in fraud investigations? This is one of my favorite questions.
Rajeev: I can start, and then I'll invite Dave and Jon to add. Human oversight combined with AI agents will be a reality in every process, and finance and accounting are no exception. In my view, humans and AI agents can follow a maker-checker model. The agents would serve as the first line of defense, identifying potential frauds or anomalies. Humans would then check these identified anomalies through alerts and notifications. On the flip side, there will be cases where humans are the first line of defense, preventing and detecting fraud manually, with AI agents verifying the results. It's a mutual maker-checker model: the AI makes and humans check, and humans make and the AI checks.
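A schematic sketch of the maker-checker routing Rajeev outlines, where the agent makes a determination and a human checks everything in the gray zone. The score thresholds and action labels are assumptions for illustration.

```python
def route_finding(finding, auto_close_below=0.3, auto_block_above=0.95):
    """Maker-checker: the agent 'makes' a determination; a human 'checks'
    everything in the gray zone before any irreversible action is taken."""
    score = finding["risk_score"]
    if score >= auto_block_above:
        return "AGENT_BLOCKS_AND_NOTIFIES_HUMAN"  # act first, human reviews after
    if score <= auto_close_below:
        return "AGENT_CLOSES_HUMAN_SAMPLES"       # humans spot-check a sample
    return "QUEUE_FOR_HUMAN_DECISION"             # human is the checker

print(route_finding({"risk_score": 0.72}))  # QUEUE_FOR_HUMAN_DECISION
```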
Deepak: That’s a great response. Thanks, Rajeev. And I believe we’ve answered all the questions. We’re wrapping up now. Huge thanks to Rajeev, Jon, and Dave, and to everyone who joined. We’ll email you the slides, the full Q&A log, and an invite to our live demo. Thanks for joining, and have a great day ahead. Stay safe, everyone. See you next time.
All: Thanks. Goodbye!