
AI in KYC: The five key questions senior leaders should be asking
This article is a contribution from one of our content partners, Avallone.

I hardly need to point out the ways in which Artificial Intelligence (AI) has exploded into our lives – both personal and professional – over the past few years. I recently read an EY report that said that 90% of European financial services firms have integrated AI to some extent and 69% expect Generative AI (GenAI) to significantly impact productivity. To take an example from our own industry, GenAI is often put forward as a way to streamline the more labor-intensive and time-consuming tasks in KYC operations.

The way I see things, there's no denying that AI holds game-changing potential. However, I think it would be foolhardy for executives not to acknowledge its risks, many of which are very complex. One reason is that modern AI and machine learning models tend to work within a 'black box', with data points interacting in ways that are impossible for a human to follow. So executive teams are often left scratching their heads, feeling sure that AI could offer value in their KYC processes but equally not wanting to expose their organization to unknown risks.

To shed some light on the matter, I've come up with five key points for leaders considering AI in KYC to discuss with their teams.

1. How can AI enhance our KYC processes?

Rather than getting sidetracked by shiny new technologies, we advise our customers to think about the problem they're trying to solve, and how (or if) AI can help. There are myriad ways in which AI can be effectively deployed to boost efficiency within the KYC space. For example, AI can extract tasks and related due dates from an incoming email, generating and assigning sub-tasks for individuals to act upon. It can prompt the collection of data from counterparties – or even source the correct information from public sources. At Avallone, our mantra is that AI should act as a user's "wingperson".
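The email-to-task step described above can be illustrated with a deliberately simplified sketch. This is not Avallone's implementation; a production system would use a language model rather than rules, and every name here (`KycTask`, `extract_tasks`, the "Please…" convention) is hypothetical:

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class KycTask:
    description: str
    due_date: Optional[str]  # ISO date if one was found in the email line

def extract_tasks(email_body: str) -> list[KycTask]:
    """Rule-based stand-in for the AI extraction step: treat each line
    beginning with 'Please' as a task, and pick up an ISO date on that
    line as its due date."""
    tasks = []
    for line in email_body.splitlines():
        line = line.strip()
        if line.lower().startswith("please"):
            match = re.search(r"\d{4}-\d{2}-\d{2}", line)
            tasks.append(KycTask(line, match.group(0) if match else None))
    return tasks

email = """Hi team,
Please send the updated UBO declaration by 2025-09-30.
Please confirm the counterparty's LEI.
Thanks!"""

for task in extract_tasks(email):
    print(task.description, "->", task.due_date)
```

Each extracted task could then be assigned to a team member as a sub-task, which is the workflow the article describes.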
It should augment, rather than replace, the work of the human. By taking each KYC task case by case, we work with our customers to determine where it makes most sense for AI to streamline their processes.

2. Where can we find experts in AI for KYC?

A significant portion of firms say they have limited GenAI expertise within their workforce, and we know that Machine Learning and AI are typically the areas with the widest talent gaps. While it's important for leadership to take a long-term approach to building key skills within their teams, executives often view this issue the wrong way around. In fact, you don't need AI experts to determine how best to improve your KYC processes. AI is a tool like any other, and what matters most is to place that tool in the hands of someone who fully understands the problem you're trying to solve. You need KYC experts who understand all the complexities within this space, and who also have an overview of where automation could be implemented for the greatest gain and least risk.

3. What are the potential pitfalls in using AI for KYC?

Treasury and compliance leaders are right to proceed with caution when using AI. As AI becomes more commonplace within the industry, it is naturally falling under increased scrutiny from regulators. For example, the European Commission's recent AI Act promotes principles of safety, ethics, transparency and accountability – requiring transparent model decisioning, explainability and tracking of data privacy. Regulators are penalizing organizations that fail to adopt the correct approach – as they should. Unfortunately, an increasing number of organizations are falling foul of these regulatory requirements. Recently, a Danish bank used AI to close thousands of alerts; however, it was unable to explain the AI-driven decision-making process and was forced to re-process them manually. In Germany, a bank was fined €300,000 for being unable to justify why AI rejected a customer's credit card application.
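One common pattern for avoiding exactly this failure mode is to record a human-readable reason for every automated decision at the moment it is made. The sketch below is illustrative only, under the assumption of a simple append-only log; the field names and model version are invented for the example:

```python
import json
import datetime

def record_decision(alert_id, outcome, reasons, model_version, log):
    """Append an explainable, timestamped record for every automated
    decision, so it can be justified to a regulator later."""
    entry = {
        "alert_id": alert_id,
        "outcome": outcome,
        "reasons": reasons,  # human-readable factors, not raw model scores
        "model_version": model_version,
        "decided_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    log.append(entry)
    return entry

audit_log = []
record_decision(
    alert_id="ALERT-1042",
    outcome="closed",
    reasons=["name match below threshold", "jurisdiction rated low-risk"],
    model_version="screening-v2.3",
    log=audit_log,
)
print(json.dumps(audit_log[0], indent=2))
```

The point is not the storage mechanism but the discipline: if the reasons cannot be written down at decision time, the decision probably should not be automated.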
Failure to ensure transparent decision-making can cost an organization time, money and reputation – often negating the original benefits of AI. Given the significant risks that AI introduces, senior leaders must be sure that adequate controls are in place to protect both their business and their customers – often this involves balancing automation with human oversight. As well as ensuring the explainability of all decisions, organizations must stay abreast of ever-evolving technological and regulatory developments. Executives need to ensure that they have adequate resources to manage this extra workload, either within their own teams or by collaborating with a trusted partner.

4. How – and why – should we balance automation with human oversight?

It's widely acknowledged that AI can perform many tasks faster, more efficiently and more cost-effectively than humans. However, particularly in KYC, human verification is a crucial step. Let's take a practical example: collecting key financial information from investors to send to your bank. Automating parts of this process can save teams significant time and resources – AI can scan a questionnaire, then source and add missing data points. However, automatically sending this data on to your bank is a step too far. With no human oversight, your organization is left open to data breaches, errors, fines and untold reputational damage. Not to mention the issues surrounding accountability – who is actually responsible if AI makes your investors' sensitive information public? None of these consequences are worth the risk. Simply by building in a layer of human approval before sending the package to your bank, you mitigate these harmful scenarios.

5. How can we integrate AI with our existing and future workflows?

At Avallone, we focus on collaborating with our users to understand where it makes most sense to automate. We recommend starting with the obvious, repeatable tasks and building from there.
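The human-approval gate from point 4 can be sketched in a few lines. This is a minimal illustration, not a real integration; `KycPackage`, `send_to_bank` and the reviewer name are all hypothetical:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class KycPackage:
    investor: str
    documents: list = field(default_factory=list)
    approved_by: Optional[str] = None  # set only after human review

def send_to_bank(package: KycPackage) -> str:
    """Refuse to transmit anything that has not passed human sign-off."""
    if package.approved_by is None:
        raise PermissionError("Package requires human approval before sending")
    return f"sent {package.investor}'s package (approved by {package.approved_by})"

pkg = KycPackage(investor="Acme Fund", documents=["questionnaire.pdf"])
try:
    send_to_bank(pkg)  # blocked: no human sign-off yet
except PermissionError as err:
    print(err)

pkg.approved_by = "j.smith"  # a named reviewer signs off
print(send_to_bank(pkg))
```

Making the approval a hard precondition, rather than a checkbox alongside the send button, is what keeps accountability with a named human.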
By automating in small and digestible ways, Treasury teams retain full control and accountability while still enjoying AI's benefits. And while each AI use case may seem minor, the ultimate impact on…

How Executives Are Using AI to Lead Smarter: Key Takeaways from the Huszár Consulting Survey
In May and June 2025, Huszár Consulting surveyed 143 senior professionals—mainly C-levels, founders, and team leads—across industries like technology, financial services, and professional services. The aim: to understand how AI is currently used in leadership and what holds back broader adoption. Here's what the data reveals.

1. AI Usage Is Nearly Universal

A striking 97.6% of respondents report active use of AI in their work. This is not a hypothetical trend—it's already happening. Most common use cases include:

Less common applications include coding (4.6%), image generation (4%), and video generation (1.5%). This suggests AI is primarily used for cognitive, communication-heavy tasks rather than technical development.

2. AI Is Already Impacting Leadership—But Not for Everyone

Over half (51.2%) of respondents say AI has already made them more effective leaders. Meanwhile, 36.6% believe it could help, but they haven't seen the impact yet. Only 2.4% dismiss its relevance entirely. This points to a key theme: belief in AI's potential is high, but visible proof of impact varies. The strategic takeaway? The more it's used in daily decision-making, the more tangible the value becomes.

3. Biggest Barrier? Lack of Time

When asked what's holding them back from using AI more confidently, the top answer wasn't ethics, trust, or compliance—it was lack of time (30.5%). Other notable barriers include:

Interestingly, this marks a shift from earlier AI narratives focused on fear or ethics—today, practical adoption constraints are front and center.

4. Curiosity Runs Deep

The survey collected over 100 open responses on AI learning interests.
Themes included:

Some responses even veered into AI's societal and geopolitical impact—mentioning military regulation, quantum computing, and emotional development. A sign that leaders are thinking beyond just tools and toward long-term implications.

5. Strategic Insights for AI Adoption

The report outlines four practical recommendations for organizations:

Final Thought

This survey doesn't aim to represent the entire market—it's a directional signal from innovation-forward professionals already experimenting with AI. But it's clear that the wave of adoption is underway, and the focus is shifting from why to how.

Board Reflections from Treasury Masterminds

We asked a few of our board members to react to the findings. Their comments are below. Want to share your own experience with AI in treasury? Join the conversation on Treasury Masterminds or drop your thoughts in the comments. Is this data reflective of your reality?

COMMENTS

Daniel Huszár, AI Strategist & Educator, comments:

The findings reflect something I've been seeing in conversations for months. Many people are using large language models daily. They rely on them to write, research, summarize, and think. But more importantly, many are beginning to ask a different kind of question: not just "what can this tool do," but "how do we structure work around it?" People are now thinking about agents, orchestration, and how to build systems around AI. A lot of these voices are coming from leaders, consultants, and managers—people who may not call themselves "technical" but are actively using AI to help them work. It also means that if you're waiting to "get more technical" before engaging with AI, you might be waiting too long. If you can write a clear sentence, you can prompt a model. If you understand your team's needs, you can begin to design AI support. One of the strongest themes I am seeing: those who use AI regularly are more confident in its potential, and more aware of its limitations.
That confidence comes from trial and error. So if there's one thing I'd encourage, it's this: use the tools. Start today. Build your own understanding by experimenting, especially before thinking about automating everything with AI agents.

Bojan Belejkovski, Treasury Masterminds Board Member, comments:

Across industries, I'm seeing (a rather slow) AI shift from abstract hype to a practical tool for speeding up decisions, refining communication, and surfacing strategic options faster. I believe the real value isn't in technical complexity but in how AI helps leaders analyze trade-offs, align teams, and drive action with more clarity. That said, one barrier I keep noticing, and that is still holding people back, is fear of being replaced or becoming less relevant. At all levels. However, staying away from AI only delays the learning curve and limits your value. The leaders embracing AI aren't trying to replace judgment; they're using it to sharpen thinking and operationalize strategy more effectively.

Tanya Kohen, Treasury Masterminds Board Member, comments:

This is a great conversation. I'd offer that one of the most valuable roles AI can play for leaders isn't necessarily in decision-making itself, but in preparing for it. AI can help reduce bias by drawing on broader information sources, spotting patterns, and distinguishing correlation from causation. These are the gaps that often distort judgment, simply because leadership doesn't have the time or access to relevant information. Used thoughtfully, AI can serve as a powerful partner in thinking, helping leaders ask better questions before rushing to answers. The hesitation around AI adoption often gets framed as a time issue, but I think it's more rooted in unfinished digital transformation. Many organizations are still working through messy data, siloed systems, and unclear process ownership.
Without clean inputs and a shared understanding of "what's true," even the best AI tools won't deliver meaningful results. Leaders may want to lean in, but the foundation isn't quite solid yet. That's why the biggest opportunity lies in fixing the basics: streamlining access to data, creating clarity on ownership, and making sure teams at every level can trust and act on the same information.

Join our Treasury Community

Treasury Mastermind is a community of professionals working in treasury management or those interested in learning more about various topics related to treasury management, including cash management, foreign…