There is a version of this conversation that focuses entirely on fraud tools: which vendors to evaluate, which models to deploy, which detection techniques are most effective against the fraud patterns currently active in your market.
That conversation is worth having.
But it starts in the wrong place.
The fraud tools question is a second question.
The first question is whether your institution has the data layer those tools need to work. Because the most sophisticated fraud detection model in the market, pointed at incomplete, delayed, or fragmented data, will underperform a basic model pointed at clean, complete, real-time data.
This is the most common reason fraud detection projects in digital banking fail to deliver on their promise, and it is almost never the reason cited in the post-mortem.
What fraud detection actually needs from your data
Fraud detection works by identifying anomalies: patterns of behaviour that deviate from what is normal for a given account, customer, or transaction type. The accuracy of that identification depends entirely on the quality of the baseline it is working from.
To build an accurate baseline, a fraud detection system needs:
- A complete view of customer behaviour across every channel and product.
- Transaction history going back far enough to establish genuine patterns: not six weeks of data, but twelve to twenty-four months.
- Account activity, login behaviour, device fingerprints, and location signals joined together at the customer level.
- All of the above, available in near real-time (a minimal sketch of the baseline idea follows this list).
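To make the baseline point concrete, here is a deliberately minimal sketch in Python. It scores each transaction against that customer’s own history using a simple z-score. The column names, threshold, and minimum-history cut-off are illustrative assumptions rather than any particular vendor’s logic, and a production model would draw on far richer features, but the dependence on history depth is exactly the same.

```python
import pandas as pd

def score_transactions(txns: pd.DataFrame,
                       z_threshold: float = 4.0,
                       min_history: int = 50) -> pd.DataFrame:
    """Flag transactions that deviate sharply from the customer's own
    baseline. 'customer_id' and 'amount' are illustrative column names."""
    baseline = (
        txns.groupby("customer_id")["amount"]
            .agg(mean="mean", std="std", n="count")
            .reset_index()
    )
    scored = txns.merge(baseline, on="customer_id")
    scored["z"] = (scored["amount"] - scored["mean"]) / scored["std"]
    # A thin history produces an unreliable baseline: mark the score
    # unknown rather than pretend six weeks can stand in for a year.
    scored.loc[scored["n"] < min_history, "z"] = float("nan")
    scored["flagged"] = scored["z"].abs() > z_threshold
    return scored
```

Run over twelve to twenty-four months of history, this produces a materially different flag set than it does over six weeks. The model is identical; only the baseline changed.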
Most digital banks in dynamic markets are not starting from that position. They are starting from a position where transaction data lives in the core, behavioural data lives in the mobile app, device data lives in a separate security tool, and none of these systems were designed to talk to each other in real time. The fraud team is working with whatever subset of that data they can access, at whatever latency the system imposes on them.
The result is predictable: higher false positive rates, because the model cannot distinguish legitimate unusual behaviour from fraudulent unusual behaviour without sufficient context; missed detections, because the signals that would have flagged the fraud existed somewhere in the system but were not visible to the detection layer; and slow investigation, because when a fraud incident does occur, pulling together the full picture of what happened requires manual effort across multiple systems.
The fragmentation problem
The fragmentation problem in digital banking data is structural. It is the result of how digital banking platforms were built and how they grew.
Most institutions running on legacy cores have a core banking system that was designed for batch processing, not real-time analytics. Data moves out of it on a schedule, which means the analytics and detection systems downstream are always working with yesterday’s picture. When a fraudster moves fast, and they do, yesterday’s picture is not enough.
Institutions that have added digital channels on top of a legacy core have typically done so through integrations that move data in one direction: from the channel into the core for processing. The reverse flow from the core back out to analytics, detection, and decisioning systems was an afterthought, if it was considered at all. Getting production-grade data out of the core and into the hands of fraud analysts typically requires a vendor ticket, a custom extract, and a manual process that is both slow and fragile.
The consequence is that fraud teams end up working from reports rather than data. They can see what happened; they cannot see what is happening. And the gap between those two things is where fraud lives.
What a fraud-ready data layer looks like
The solution is to build the data layer that your fraud tools need to operate effectively: a secure, governed replica of your production database that is continuously updated, queryable off-core by analytics and detection systems, and available without adding load to the live system.
This architecture solves several problems simultaneously:
- It gives fraud detection models the complete, current data they need to build accurate baselines.
- It gives fraud analysts direct access to transaction history, account activity, and behavioural signals for investigation, without tickets or manual extracts.
- It gives product and risk teams the ability to run models in shadow mode before committing to a production deployment, as sketched below.
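Shadow mode itself is a small pattern, and the sketch below shows the whole of it. The two model objects and their predict method are our illustrative stand-ins, not any particular vendor’s API: the candidate scores every transaction and its verdicts are logged for offline comparison, but only the live model’s decision ever acts on a customer.

```python
import logging

logger = logging.getLogger("fraud.shadow")

def evaluate(txn: dict, live_model, candidate_model) -> bool:
    """Return the live verdict; score the candidate silently alongside."""
    live_verdict = live_model.predict(txn)
    try:
        shadow_verdict = candidate_model.predict(txn)
        # Logged pairs are compared offline to estimate the candidate's
        # false positive rate before it is allowed to block anything.
        logger.info("txn=%s live=%s shadow=%s",
                    txn.get("id"), live_verdict, shadow_verdict)
    except Exception:
        # A failure in the shadow path must never block a live decision.
        logger.exception("shadow scoring failed for txn=%s", txn.get("id"))
    return live_verdict
```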
It also solves the false positive problem, which is underappreciated as a growth issue. Every legitimate transaction that gets flagged and blocked is a customer who experienced your bank as an obstacle. In markets where switching costs are low and alternatives are a tap away, a pattern of false positives is a retention problem, not just an operational inconvenience. Better data means better signal, which means fewer false positives and less friction for the customers you are trying to keep.
The AI whitepaper published by Oradian describes this architecture clearly: a cloud-native core that ensures financial accuracy and multi-channel stability, combined with Database Access, a secure, read-only mirror of the production PostgreSQL database, continuously updated and isolated from the live core so that heavy queries or model training runs do not risk downtime on production systems. With full-fidelity data available off-core, fraud teams and AI models can work with the complete spectrum of transaction history, user behaviour logs, and metadata that standard reports or limited APIs do not expose.
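In practice, working off-core can be as direct as pointing a standard PostgreSQL client at the mirror. The sketch below is illustrative only: the hostname, credentials, and table names are our assumptions, not Oradian’s actual Database Access endpoints or schema. The shape of the work is the point: twenty-four months of history for one account, in one read-only query, with no load on the live core.

```python
import psycopg2

# Hypothetical mirror connection; in practice credentials would come
# from a secrets store, and the host and schema are illustrative.
conn = psycopg2.connect(host="mirror.example.internal",
                        dbname="core_mirror", user="fraud_ro")
conn.set_session(readonly=True)  # the mirror is read-only by design

customer_id = "C-000123"  # the account under investigation

with conn, conn.cursor() as cur:
    # Device and login events live in the same mirror and can be pulled
    # and joined at the customer level in exactly the same way.
    cur.execute(
        """
        SELECT ts, amount, channel
        FROM transactions
        WHERE customer_id = %s
          AND ts >= now() - interval '24 months'
        ORDER BY ts
        """,
        (customer_id,),
    )
    history = cur.fetchall()
```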
The sequence matters
Understanding the right sequence is important because many institutions approach fraud detection as a tools procurement problem. They evaluate vendors, select a platform, begin integration, and then discover that the data the platform needs does not exist in the form it needs, at the latency it needs, with the coverage it needs.
At that point, the project either stalls while the data infrastructure is retrofitted around the tool, or it goes live on incomplete data and underdelivers. Either outcome is expensive, and the second is more dangerous because it creates a false sense of security: the fraud detection system is running, alerts are being generated, and the institution believes it is protected to a degree it is not.
The right sequence is data layer first, tools second. Establish the unified, real-time replica of your production data. Validate that it is complete, that it is current, and that your team can query it without impacting the live system; two such checks are sketched below. Then evaluate fraud detection tools against that foundation, because now you can assess them honestly, test them on real data in shadow mode, and deploy them with confidence that they are working with the inputs they were designed for.
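Both validation checks can be run against the replica itself. The connection details and table names below are hypothetical, and the lag query assumes a PostgreSQL streaming-replication standby, where pg_last_xact_replay_timestamp() reports when the most recent replicated transaction was applied; a logically replicated mirror would expose its lag differently, but the question being asked is the same.

```python
import psycopg2

# Hypothetical mirror connection; host and schema are illustrative.
conn = psycopg2.connect(host="mirror.example.internal",
                        dbname="core_mirror", user="fraud_ro")

with conn, conn.cursor() as cur:
    # Currency: how far behind the live core is the mirror right now?
    # (Standard PostgreSQL function on streaming-replication standbys.)
    cur.execute("SELECT now() - pg_last_xact_replay_timestamp()")
    print(f"replication lag: {cur.fetchone()[0]}")  # minutes, or hours?

    # Completeness (illustrative): does yesterday's row count on the
    # mirror match the figure the core reported for the same day?
    cur.execute("SELECT count(*) FROM transactions "
                "WHERE ts::date = current_date - 1")
    print(f"transactions replicated for yesterday: {cur.fetchone()[0]}")
```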
This is the same sequence that applies to every AI use case in banking, not just fraud detection. The institutions in dynamic markets that are successfully deploying AI for credit scoring, churn prediction, and personalisation all share one characteristic: they solved the data layer problem before they solved the model problem. The ones still struggling with AI delivery are almost always still struggling with data access.
The question to ask your team today
The practical test for where your institution stands is straightforward. Ask your fraud team how long it takes to pull together the complete transaction history, device data, and behavioural signals for a single account under investigation. If the answer is hours or days, then your data layer is the constraint on your fraud capability, regardless of which detection tools you are running on top of it.
The second question is whether your fraud detection system is working from data that is minutes old or hours old. Indonesia’s OJK has noted that the average gap between a fraud occurring and the victim reporting it is twelve hours; in a market like that, an institution whose detection systems work from batch data updated overnight is not in a materially different position from having no detection at all.
Fraud tools matter. Choosing the right vendor, designing the right model, and tuning detection logic for the specific fraud patterns active in your market all matter. But none of it matters as much as the foundation those tools are sitting on. Fix the data layer first, and the tools question becomes considerably easier to answer.
The Digital-First Bank’s Guide to AI
The Oradian Digital-First Bank’s Guide to AI in 2026 includes a full AI and fraud readiness checklist covering data foundation, core integration readiness, governance, and operating model. It is written specifically for product, tech, risk, and compliance teams in dynamic markets.
Download the Digital-First Bank’s Guide to AI to assess where your institution stands today.
Start your core banking journey today
Ready to build a fraud-ready data layer? Oradian’s cloud-native core and Database Access give your fraud and analytics teams a complete, real-time view of your production data without tickets, manual extracts, or load on your live system. Speak to our team today by emailing vanda.jirasek@oradian.com to find out what a fraud-ready data layer might look like for your institution.