By Finastra, with insights from Treasury Masterminds Board Members
Centralising bank connectivity through SWIFT has become standard practice for multinational treasury teams. The idea is straightforward: bring banking communication into a single channel, streamline processes, and create better visibility over global cash.
In theory, centralisation reduces complexity.
In practice, it often shifts complexity somewhere else.
One multinational with annual revenue of under 100 billion receives roughly 3,800 bank statements every day, arriving from banks in multiple countries and in multiple formats. At that scale, even small inconsistencies can create significant operational pressure. A missing file, a duplicate statement, or a timing mismatch can quickly trigger investigations across dozens of accounts.
What begins as a simple operational task quietly becomes something more important. Bank statements stop being routine administration and start becoming a core element of treasury’s control framework.
When Scale Meets Reality
The organisation had centralised its bank connectivity through a SWIFT Service Bureau and integrated statement data directly into its treasury management system. On the surface, this setup looked efficient. In reality, managing statement reception across a global banking landscape proved more complicated.
Different banks deliver statements at different frequencies. Some send files only when account activity occurs. Others follow strict delivery windows. File formats vary by region and bank, and acquisitions continually expand the number of accounts that must be monitored.
For years, monitoring relied largely on manual processes and Excel tracking. It worked reasonably well, but visibility into expected versus actual statement delivery was limited. When something went wrong, teams often discovered the problem only after downstream processes started failing.
At a volume of several thousand statements per day, even a one-percent error rate translates into dozens of investigations daily; at 3,800 statements, that is roughly 38 potential exceptions every day.
Data First, Automation Later
Before introducing structured monitoring, the organisation first addressed the underlying data environment. This step proved critical.
Inactive accounts were still sending statements. Some active accounts were missing entirely from monitoring lists. Reconciliation backlogs had grown to more than forty thousand unresolved lines. Until these issues were identified and corrected, implementing automation would have solved little.
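The two mismatches described here, inactive accounts still delivering statements and active accounts absent from monitoring lists, are essentially a set comparison between the account master data and the accounts actually observed sending files. A minimal sketch of that audit, with illustrative account IDs and function names not taken from the case:

```python
# Hypothetical sketch: reconcile the account master list against the
# accounts actually observed delivering statements. Surfaces both
# mismatch types described in the article. IDs are illustrative.

def audit_statement_sources(active_accounts, observed_senders):
    """Return accounts that should deliver statements but do not,
    and accounts that deliver statements despite being inactive."""
    active = set(active_accounts)
    observed = set(observed_senders)
    missing_from_feed = sorted(active - observed)   # active, no statements arriving
    unexpected_senders = sorted(observed - active)  # inactive, still sending files
    return missing_from_feed, unexpected_senders

active = ["DE01", "FR02", "US03"]
observed = ["DE01", "US03", "GB99"]  # GB99 was closed but still delivers files

missing, unexpected = audit_statement_sources(active, observed)
print("No statements received:", missing)
print("Statements from inactive accounts:", unexpected)
```

In practice the "observed" side would be built from the statement feed itself (for example, account identifiers parsed from incoming camt.053 or MT940 files), but the control logic is this same two-way difference.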

This point resonated strongly with Bojan Belejkovski, Treasury Masterminds board member.
“This article gets something right that often gets skipped in treasury transformation conversations: data quality is a prerequisite, not an afterthought.
The point about cleaning up before automating is the one that matters most here. The monitoring capabilities described are only as good as the expectations set behind them. Knowing what should arrive, from which accounts, and when, that’s the real control framework. The technology just enforces it.
One question I’d add to the list: who owns the exception? Visibility without accountability is just a prettier dashboard.”
His point highlights a broader truth about treasury transformation. Technology can automate monitoring, but it cannot define the rules that monitoring relies on. Those rules must already exist.
Building a Monitoring Framework
Once the data landscape was stabilised, the organisation implemented structured monitoring capabilities. Instead of simply receiving files, the system began defining expected behaviour and identifying deviations.
Treasury could now monitor statement reception in real time, compare incoming files against expected schedules, and quickly identify missing or incorrect statements. Dashboards provided a clear view of received, pending, and problematic files, while structured workflows allowed exceptions to be investigated and resolved systematically.
The goal was not simply automation. It was predictability.
Treasury teams could see immediately when something was wrong instead of discovering the issue hours or days later through reconciliation problems.
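The core of the framework described above is an expectation model: each account carries a defined delivery schedule, and incoming files are classified against it. A minimal sketch, assuming a simple daily cutoff per account (the account IDs, deadlines, and status labels are illustrative assumptions, not details from the case):

```python
from datetime import datetime, time

# Hypothetical sketch of schedule-based statement monitoring: each account
# carries an expected daily delivery deadline; files not received by the
# cutoff raise an exception for investigation. All values are illustrative.

EXPECTED = {
    "ACC-001": time(6, 0),   # statement due by 06:00
    "ACC-002": time(7, 30),
    "ACC-003": time(6, 0),
}

def check_reception(received, now):
    """Classify each expected account as 'received', 'pending', or 'missing'."""
    status = {}
    for account, deadline in EXPECTED.items():
        if account in received:
            status[account] = "received"
        elif now.time() < deadline:
            status[account] = "pending"   # still inside the delivery window
        else:
            status[account] = "missing"   # past deadline: open an exception
    return status

received_files = {"ACC-001"}
snapshot = check_reception(received_files, datetime(2024, 3, 1, 6, 45))
print(snapshot)
```

The key design point is that "missing" only exists because an expectation was defined first, which is exactly the sequencing the article argues for: the schedule rules must exist before a dashboard can show a meaningful deviation.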

For Lorena Perez Sandroni, another Treasury Masterminds board member, this distinction is essential.
“Without proper analysis and remediation actions, organisations expose themselves to significant control risks. In environments where large volumes of bank statements are processed daily, small issues can quickly become bigger operational problems.
As shown in the case presented by Finastra, problems such as missing statements, inactive accounts, or reconciliation backlogs can weaken the control framework if they are not properly addressed.
Automation alone is not enough. If the underlying data and processes are not properly analysed and remediated first, automation can actually amplify existing issues rather than solve them.
Strong analysis, data cleanup, and clearly defined monitoring controls are essential to reduce risk and ensure reliable treasury operations.”
Lorena also notes that when entering an organisation facing these challenges, the priority should be restoring data integrity and strengthening monitoring processes before moving forward with broader transformation initiatives.
“It might delay part of the roadmap,” she says, “but it will be worth it.”
Why Monitoring Matters More Today
Treasury environments are becoming increasingly complex. Organisations operate across more banks, more currencies, and more legal entities than ever before. At the same time, treasury teams are often expected to manage these environments with leaner resources and tighter control requirements.
In this context, the integrity of bank statements becomes foundational.
Accurate statements support cash positioning, liquidity forecasting, hedge accounting, and investment decisions. When statement delivery becomes unreliable, confidence in these downstream processes quickly erodes.

Lee-Ann Perkins, Treasury Masterminds board member, sees these issues regularly in global treasury operations.
“In my day-to-day work in a global organisation with many bank accounts, a few things immediately stand out. Data quality issues must be addressed first, otherwise automation simply amplifies underlying problems.
Another key concern is statement reliability. If the integrity of statement reception is compromised, downstream processes like cash positioning, liquidity forecasting, and hedge accounting are exposed to risk.
Manual workarounds are also a warning sign. When critical answers live in inboxes or spreadsheets, the organisation carries a real scale risk.
Finally, audit expectations continue to increase. Treasury teams must be able to demonstrate clear controls and transparent processes, which becomes difficult when monitoring and exception handling are not structured.”
A Lesson Beyond Technology
The case ultimately highlights a broader lesson for treasury teams undergoing digital transformation.
Technology can enable monitoring and provide visibility, but it cannot replace disciplined process design or clean data foundations. Sustainable treasury operations rely on clearly defined expectations, reliable data structures, and transparent exception management.
When these elements come together, automation becomes powerful. Without them, it simply accelerates existing problems.
In the end, bank statement monitoring is not a minor operational detail. At scale, it is an essential component of treasury’s control architecture.