Jesse Reiss
Co-Founder & CTO
The AI development cycle keeps accelerating. Yet here in the world of financial compliance – where stakes are high, tolerances are low, and nuance matters – the question isn’t ever just what’s possible in theory. It’s what works in practice.
As someone who lives at the intersection of AI systems design and compliance workflows, I’ve watched the field evolve from brittle experiments to flexible, task-specific agents and powerful LLM-backed feature sets. And that’s just in the last few years! So much has been accomplished in a comparatively short amount of time.
But we’re still early. And if there’s one thing I do feel 100% certain about, it’s that the road ahead won’t be linear. If anything, we should expect more change in the next few years than in those we just experienced.
What I’ve written here isn’t a roadmap, or a weather forecast. It’s not meant to exactly “predict” anything. What it is meant to do is look forward from where we are and draw a logical line to where we might conceivably end up.
So, you ask – why would I spend time on such an exercise?
For one, a solid understanding of where you are (and where you might go) is the basis of modern product development. Hummingbird isn’t Hummingbird without our roadmap, which reflects everything we know about compliance as well as everything we believe technology can do to make it better.
And two: AI is a space where mental exploration is an absolute necessity. The technology will move faster than you think, and if you’re not busy drawing your own mental map of all the possible roads AI could travel, then you’ll likely be left standing at a bus stop, waiting for a bus that’s already long gone.
Today’s AI systems, particularly large language models (LLMs), are capable of impressive feats: summarization, classification, language generation, and even basic reasoning.
But in highly regulated environments like compliance, certain practice-area requirements demand that we proceed with caution.
For example, compliance requires:

- Auditability: every decision needs a clear, reviewable trail.
- Explainability: outputs must be traceable to reasoning a regulator can follow.
- Consistency: the same inputs should yield the same, verifiable outcomes.
- Human oversight: accountable people must remain in the decision path.
These are not technicalities, nor are they guidelines. They are foundational, architectural constraints. You cannot find a way around them any more than you can catch the AI bus after it’s left the station. Anyone who’s fielded a call with a 6-month-old startup claiming they can “do everything” for compliance “out of the box” knows how absurd such a claim sounds.
As an industry, compliance is going to change enormously in the next five years. And this will be in no small part due to the changes in technology we’re experiencing today. But the tension in that relationship – in the give-and-take between capability and constraint – is woven into the very fabric of what compliance is.
As such, an in-depth understanding of these constraints, in all their nuance, is what will separate the purveyors of the next generation of tools from the flash-in-the-pan pretenders and obsolete legacy has-beens.
The next wave of AI tools will be modular – not monolithic. Instead of relying on a single, omnipotent system to do everything, we’ll see systems composed of interoperable parts: retrieval systems, structured prompt chains, task-specific tools, and human review layers.
This modularity matters because:

- Each component can be tested, measured, and improved independently.
- Failures stay contained to a single step instead of spreading through an opaque whole.
- Human review can be inserted exactly where judgment is needed.
- Individual parts can be swapped out as the underlying technology improves.
We’re already applying this philosophy at Hummingbird – decomposing monolithic processes into smaller, measurable, verifiable steps. Compliance needs clarity, not magic.
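To make that concrete, here is a minimal sketch of a decomposed pipeline. Everything in it (the step names, the CaseContext structure, the stubbed summarizer) is a hypothetical illustration, not Hummingbird's actual architecture; the point is that each step is small, independently testable, and leaves an audit trail.

```python
# A sketch of a modular pipeline: small, verifiable steps instead of one
# monolithic model call. All names here are illustrative, not a real API.
from dataclasses import dataclass, field

@dataclass
class CaseContext:
    case_id: str
    documents: list[str]
    findings: dict[str, str] = field(default_factory=dict)
    audit_log: list[str] = field(default_factory=list)

def retrieve_evidence(ctx: CaseContext) -> CaseContext:
    """Retrieval step: collect the documents relevant to this case."""
    ctx.audit_log.append(f"retrieved {len(ctx.documents)} documents")
    return ctx

def summarize(ctx: CaseContext) -> CaseContext:
    """LLM-backed step (stubbed here): draft a structured summary."""
    ctx.findings["summary"] = f"summary of {len(ctx.documents)} documents"
    ctx.audit_log.append("summary drafted")
    return ctx

def human_review(ctx: CaseContext) -> CaseContext:
    """Human layer: nothing leaves the pipeline without sign-off."""
    ctx.audit_log.append("queued for human review")
    return ctx

PIPELINE = [retrieve_evidence, summarize, human_review]

def run(ctx: CaseContext) -> CaseContext:
    for step in PIPELINE:
        ctx = step(ctx)  # each step can be tested and measured on its own
    return ctx

result = run(CaseContext(case_id="c-001", documents=["wire-activity.pdf"]))
print(result.audit_log)  # the full, reviewable trail
```

Because each function has exactly one job, you can QA, swap, or roll back any step without touching the rest.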
There’s excitement (and some justified nervousness) about autonomous AI agents – systems that can plan, reason, and act across multiple steps with minimal human guidance. In theory, that’s powerful. In practice (especially in a compliance environment), it’s not without risks.
In my futurecasting, I foresee agents existing and adding value in regulated industries, but only in select, carefully constrained environments.
For example, I think we will certainly see the following types of agents:

- Research agents that gather and organize case evidence, automating the tedious.
- Screening agents that surface anomalies for human attention, highlighting the risky.
- Drafting agents that prepare structured, first-pass write-ups for investigators to review.
The agents of the future won’t be autonomous without boundaries. They’ll be specialized helpers for workflows that require detailed memory, feedback loops, and clear audit trails.
In short: we’ll see a lot of agentic behavior, but with controls over use and domain.
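Here is one way those controls can look in code. This is a hedged sketch, not any real agent framework: the tool whitelist is the boundary, the step budget caps autonomy, and every action (including denied ones) lands in an audit trail.

```python
# A sketch of agentic behavior with controls. The class, tool names, and
# budget are illustrative assumptions, not a production agent framework.
from typing import Callable

class ConstrainedAgent:
    def __init__(self, tools: dict[str, Callable[[str], str]], max_steps: int = 5):
        self.tools = tools          # explicit whitelist: the boundary
        self.max_steps = max_steps  # hard budget: no unbounded autonomy
        self.steps_taken = 0
        self.audit_trail: list[str] = []

    def act(self, tool_name: str, argument: str) -> str:
        if tool_name not in self.tools:
            self.audit_trail.append(f"DENIED: {tool_name}")
            raise PermissionError(f"{tool_name} is outside this agent's boundary")
        if self.steps_taken >= self.max_steps:
            raise RuntimeError("step budget exhausted; escalate to a human")
        self.steps_taken += 1
        result = self.tools[tool_name](argument)
        self.audit_trail.append(f"{tool_name}({argument!r}) -> {result!r}")
        return result

# Usage: the agent can only do what we explicitly hand it.
agent = ConstrainedAgent(tools={"lookup_entity": lambda name: f"record for {name}"})
print(agent.act("lookup_entity", "Acme Corp"))  # allowed, and logged
# agent.act("send_wire", "...")                 # would raise PermissionError
```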
There’s a false binary in many AI debates: AI vs. human. But in compliance, the real model is AI + human. Why?

- Regulators hold people, not models, accountable for decisions.
- The nuanced judgment calls that define compliance still belong with experienced humans.
- AI excels at speed and scale; humans excel at context and accountability.
Our job as technologists isn’t to replace humans. It’s to elevate them – automating the tedious, highlighting the risky, and providing structured insight to guide decisions. That’s not a limitation. That’s good design.
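In practice, that division of labor can be as simple as a routing rule: the model automates triage and flags risk, but the path always ends with a person. The thresholds and labels below are illustrative assumptions, not a prescription.

```python
# AI + human routing: the model triages, a person decides.
# Thresholds and labels are illustrative assumptions.
def route_alert(ai_risk_score: float, ai_confidence: float) -> str:
    if ai_confidence < 0.7:
        return "human_review"            # uncertain output: a person decides
    if ai_risk_score >= 0.5:
        return "human_review_priority"   # AI highlights the risky, humans judge
    return "human_spot_check"            # even low-risk items get sampled review

print(route_alert(ai_risk_score=0.9, ai_confidence=0.95))  # human_review_priority
```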
If you build with AI, it stands to reason that you will likely test with AI. Over the next five years, I expect to see AI quality assurance emerge as a first-order discipline.
What will this look like? Well:

- Golden datasets and evaluation suites that benchmark models on real compliance tasks.
- Regression testing for prompts and models, so changes can’t silently degrade quality.
- Sampling-based human review layered on top of automated checks.
- Continuous monitoring of outputs in production, not just at release time.
Today, QA processes are architected around QA-ing human effort. They’ll need to be redesigned to QA AI effort, which will be faster, more voluminous, and more important to control systematically. The next wave of tooling (along with evolving development standards and programming best practices) will help close this gap, blending statistical rigor with human judgment.
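In its simplest form, that redesign might look like the sketch below, assuming a hypothetical golden dataset and release threshold: every model or prompt change is scored against fixed, labeled cases before it ships, and the misses go to humans for judgment.

```python
# QA for AI effort: gate releases on a regression pass rate over a fixed
# "golden" set. The dataset, stub model, and threshold are assumptions.
GOLDEN_SET = [
    {"input": "wire to sanctioned entity", "expected": "escalate"},
    {"input": "routine payroll transfer", "expected": "clear"},
]

def classify(text: str) -> str:
    """Stub for the model under test; swap in a real LLM call here."""
    return "escalate" if "sanctioned" in text else "clear"

def regression_pass_rate(cases: list[dict]) -> float:
    hits = sum(classify(c["input"]) == c["expected"] for c in cases)
    return hits / len(cases)

rate = regression_pass_rate(GOLDEN_SET)
assert rate >= 0.95, f"pass rate {rate:.0%} is below the release threshold"
print(f"release gate passed at {rate:.0%}")
```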
If you’re a compliance leader – whether you’re in a technical role or not – get ready: your job requirements are about to expand. You’ve likely felt it already. From today, and every day hereafter, understanding AI isn’t optional. If you want to grow and succeed, AI will be part of your world.
But here’s the good news. Just because you’ve got new ground to cover doesn’t mean you need to head back to grad school for a PhD in AI. Bringing AI transformation to your current workplace is about gathering support, promoting AI initiatives, and asking better questions.
What kind of questions? Here are just a few of the ones we find helpful when charting a course for AI compliance technology:

- Where in our program would AI add the most measurable value?
- How are AI outputs explained, audited, and retained for examiners?
- Who reviews AI-assisted decisions, and at what points in the workflow?
- How does a vendor test, monitor, and improve model quality over time?
- What happens when the AI is wrong, and how quickly will we know?
Think of it this way: with AI, it’s not simply that you don’t need to build the system – it’s that you shouldn’t. Unless you have a tech stack ready to support it, and an AI-native internal team ready to build it, there’s no reason to build your own solution. Working with a qualified and dedicated vendor will always result in a more comprehensive, responsive, and flexible set of capabilities.
But that doesn’t mean you’re off the hook. Getting the best from your vendor, program, and team will require that you help mold and shape your company’s AI capabilities.
This is the starting line we find ourselves at. As I see it, the next five years will be defined not just by technical breakthroughs, but by thoughtful, collaborative product design – where a financial institution’s compliance needs are part of the spec from day one.
It’s tempting to chase AI like it’s a race – to ship fast, demo hard, and brag to stakeholders.
But in high-stakes domains like financial compliance, long-term success will be found where AI development is paired with careful, strategic, and deeply integrated thinking about domain expertise and how real people work.
I’m optimistic. The tools are improving at an amazing pace. Ideas are coming in from all sides. And for us – the people building these new AI products – our ability to deliver high-quality, scalable change to our customers just keeps getting better and better.
The next five years aren’t about what AI can do. They’re about what we choose to do with it.