The Next Five Years: Where We See Compliance AI Going

Jesse Reiss

Co-Founder & CTO

Introduction

The AI development cycle keeps accelerating. Yet here in the world of financial compliance – where stakes are high, tolerances are low, and nuance matters – the question isn’t ever just what’s possible in theory. It’s what works in practice.

As someone who lives at the intersection of AI systems design and compliance workflows, I’ve watched the field evolve from brittle experiments to flexible, task-specific agents and powerful LLM-backed feature sets. And that’s just in the last few years! So much has been accomplished in a comparatively short amount of time. 

But we’re still early. And if there’s one thing I do feel 100% certain about, it’s that the road ahead won’t be linear. If anything, we should expect more change in the next few years than in those we just experienced. 

What I’ve written here isn’t a roadmap or a weather forecast. It isn’t meant to “predict” anything exactly. What it is meant to do is look forward from where we are and draw a logical line to where we might conceivably end up.

So, you ask – why would I spend time on such an exercise? 

For one, a solid understanding of where you are (and where you might go) is the basis of modern product development. Hummingbird isn’t Hummingbird without our roadmap, which reflects everything we know about compliance as well as everything we believe technology can do to make it better. 

And two: AI is a space where mental exploration is an absolute necessity. The technology will move faster than you think, and if you’re not busy drawing your own mental map of all the possible roads AI could travel, then you’ll likely be left standing at a bus stop, waiting for a bus that’s already long gone. 

Context-Setting: Compliance AI from 2023 to Now

Today’s AI systems, particularly large language models (LLMs), are capable of impressive feats: summarization, classification, language generation, and even basic reasoning. 

But highly regulated environments like compliance impose requirements that demand proceeding with caution.

For example, compliance requires: 

  • Determinism
    Outputs must be consistent and explainable.
  • Auditability
    Every step in a decision chain must be recorded.
  • Human-in-the-loop design
    People remain the decision-makers, with AI assisting rather than replacing.

These are not technicalities, nor are they guidelines. They are foundational, architectural constraints. You cannot find a way around them any more than you can catch the AI bus after it’s left the station. Anyone who’s fielded a call with a 6-month-old startup claiming they can “do everything” for compliance “out of the box” knows how absurd such a claim sounds.
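
Here’s what it can look like when these constraints are enforced in code rather than in policy – a minimal sketch of an LLM call wrapper, with entirely hypothetical names (this is illustrative, not our production code). Note that pinned versions, zero temperature, and seeding reduce variance; they don’t fully guarantee determinism across providers.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class InferenceConfig:
    """Reproducibility settings pinned in code, not left to convention."""
    model: str = "example-model-2024-06-01"  # a pinned version, never "latest"
    temperature: float = 0.0                 # greedy decoding for repeatable outputs
    seed: int = 42                           # where the provider supports seeding

@dataclass
class AuditedClient:
    config: InferenceConfig
    log: list[dict] = field(default_factory=list)

    def complete(self, prompt: str) -> str:
        response = self._call_provider(prompt)
        # Every call is recorded with its exact settings, so each step in
        # the decision chain can be replayed and explained after the fact.
        self.log.append({"config": self.config, "prompt": prompt, "response": response})
        return response

    def _call_provider(self, prompt: str) -> str:
        return "stub response"  # stand-in for a real provider SDK call
```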

Prediction Paths for the Next 5 Years

As an industry, compliance is going to change enormously in the next five years. And this will be in no small part due to the changes in technology we’re experiencing today. But the tension in that relationship – in the give-and-take between capability and constraint – is woven into the very fabric of what compliance is.  

As such, an in-depth understanding of these things, in all their nuance, is what will separate the purveyors of the next generation of tools from the flash-in-the-pan pretenders and obsolete legacy has-beens.

Supposition 1: Modular Systems Will Win

The next wave of AI tools will be modular – not monolithic. Instead of relying on a single, omnipotent system to do everything, we’ll see systems composed of interoperable parts: retrieval systems, structured prompt chains, task-specific tools, and human review layers.

This modularity matters because:

  • It enables customization for specific regulatory requirements.
  • It supports traceability by separating perception (e.g., document extraction) from judgment (e.g., risk scoring).
  • It aligns with modern engineering practices (APIs, microservices, containerization) and allows AI to slot into real workflows, not just demos.

We’re already applying this philosophy at Hummingbird – decomposing monolithic processes into smaller, measurable, verifiable steps. Compliance needs clarity, not magic.
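
A minimal sketch of that decomposition, with hypothetical components standing in for real extraction and scoring services: perception, judgment, and the human review gate are separate steps, and every hop is logged.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class AuditedPipeline:
    """Composes independent steps and records every input/output pair."""
    steps: list[tuple[str, Callable[[Any], Any]]]
    audit_log: list[dict] = field(default_factory=list)

    def run(self, data: Any) -> Any:
        for name, step in self.steps:
            result = step(data)
            # Perception (extraction) stays traceable apart from judgment (scoring).
            self.audit_log.append({"step": name, "input": data, "output": result})
            data = result
        return data

# Hypothetical components -- each one can be tested, audited, or swapped alone.
def extract_fields(doc: str) -> dict:
    return {"amount": 12_000, "counterparty": "ACME Ltd", "raw": doc}

def score_risk(fields: dict) -> dict:
    fields["risk_score"] = 0.82 if fields["amount"] > 10_000 else 0.12
    return fields

def queue_for_review(fields: dict) -> dict:
    fields["needs_human_review"] = fields["risk_score"] > 0.5
    return fields

pipeline = AuditedPipeline(steps=[
    ("extract", extract_fields),
    ("score", score_risk),
    ("review_gate", queue_for_review),
])
case = pipeline.run("wire transfer memo ...")
```

Because each step is its own unit, regulators (and engineers) can inspect one stage without untangling the rest.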

Supposition 2: AI Agents Will Thrive Within Carefully Constrained Environments

There’s excitement (and some justified nervousness) about autonomous AI agents – systems that can plan, reason, and act across multiple steps with minimal human guidance. In theory, that’s powerful. In practice (especially in a compliance environment), it’s not without risks.

In my futurecasting, I foresee agents existing and adding value in regulated industries – but only in selective, carefully constrained environments.

For example, I think we will certainly see the following types of agents:

  • Task-bounded agents that operate only within narrow scopes (e.g., auto-filling SAR forms from known fields).
  • Approval-gated agents that pause before high-stakes steps (e.g., contacting a customer or reporting suspicious activity) – a pattern sketched in code below.
  • Agents in simulation environments where new agentic workflows are tested, audited, and retrained before real-world deployment.

The agents of the future won’t be autonomous without boundaries. They’ll be specialized helpers for workflows requiring detailed memory, feedback loops, and clear audit trails.

In short: we’ll see a lot of agentic behavior, but with controls over use and domain.
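
To make the approval-gated pattern concrete, here’s a minimal sketch with entirely hypothetical names: the agent works through a plan, but a high-stakes step halts the run until a human has explicitly approved it.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str
    run: Callable[[], str]
    high_stakes: bool = False  # e.g., contacting a customer, filing a report

class ApprovalRequired(Exception):
    """Raised to pause the agent until a human approves the pending action."""

def run_agent(plan: list[Action], approved: set[str]) -> list[str]:
    trail = []  # an explicit audit trail of every executed step
    for action in plan:
        if action.high_stakes and action.name not in approved:
            # Pause here: the step is surfaced for review, never silently run.
            raise ApprovalRequired(f"'{action.name}' is awaiting human approval")
        trail.append(f"{action.name}: {action.run()}")
    return trail

plan = [
    Action("gather_transactions", lambda: "pulled 42 records"),
    Action("draft_sar_narrative", lambda: "draft saved"),
    Action("file_sar", lambda: "filed", high_stakes=True),
]

try:
    run_agent(plan, approved=set())
except ApprovalRequired as e:
    print(e)  # the workflow halts until a reviewer approves "file_sar"
```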

Supposition 3: Human-in-the-Loop Is Not a Compromise – It's the Whole Point

There’s a false binary in many AI debates: AI vs. human. But in compliance, the real model is AI + human. Why?

  • Contextual judgment remains a uniquely human strength. No model understands organizational nuance, legal gray areas, or reputational risk the way a well-trained human does.
  • Trust in AI systems grows when humans can verify, edit, and override.
  • Liability in regulated environments ultimately rests with people – not algorithms.

Our job as technologists isn’t to replace humans. It’s to elevate them – automating the tedious, highlighting the risky, and providing structured insight to guide decisions. That’s not a limitation. That’s good design.
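
Structurally, this can be as simple as making every AI recommendation an input to a human decision record – the person’s call, not the model’s, is what gets persisted as authoritative. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ReviewDecision:
    """One human-in-the-loop touchpoint: the AI proposes, a person decides."""
    case_id: str
    ai_recommendation: str  # e.g., "escalate", with supporting evidence attached
    ai_confidence: float
    human_decision: str     # the authoritative outcome; may override the AI
    reviewer: str
    rationale: str          # captured so overrides can feed future evaluation
    decided_at: datetime

record = ReviewDecision(
    case_id="case-1042",
    ai_recommendation="escalate",
    ai_confidence=0.71,
    human_decision="dismiss",  # the human override is the final word
    reviewer="j.analyst",
    rationale="Known counterparty; pattern matches documented payroll runs.",
    decided_at=datetime.now(timezone.utc),
)
```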

Supposition 4: AI QA Will Become Its Own Discipline

If you build with AI, it stands to reason that you’ll test with AI. Over the next five years, I expect to see AI quality assurance emerge as a first-order discipline.

What will this look like? Well:

  • AI tools will be used to test AI tools. Because LLMs are probabilistic, you can’t evaluate them with simple “input > output” expectations the way you would with traditional software. Their responses vary, and there’s no single correct output to test against. Instead, we use additional prompts and models to evaluate their behavior – essentially testing AI with AI. Since these systems understand natural language, we can use one model to interpret and grade the meaning, quality, or intent of another model’s output, even if we can’t ever dictate the exact output it will produce. (A sketch of this pattern follows at the end of this section.)
  • Companies will double down on red-teaming and adversarial testing to assess model behavior under edge cases or hostile inputs.
  • Human review layers will be part of core business requirements, not project afterthoughts. These will include interfaces that encourage fast, accurate validation.

Today, QA processes are architected around QA-ing human effort. They need to be redesigned to QA AI effort, which will be faster, more voluminous, and more important to control systematically. The next wave of tooling (along with evolving development standards and programming best practices) will help close this gap, blending statistical rigor with human judgment.
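
To make “testing AI with AI” concrete, here’s a minimal sketch of the evaluator pattern referenced in the first bullet above. The rubric, threshold, and call_model stub are illustrative assumptions – in practice, call_model would wrap whichever LLM client you use.

```python
import json

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM client; returns a canned grade so the sketch runs."""
    return '{"factual": 5, "complete": 4, "no_speculation": 5, "rationale": "ok"}'

EVALUATOR_PROMPT = """You are a QA reviewer for compliance case summaries.
Grade the summary against the rubric and return JSON only:
{{"factual": 1-5, "complete": 1-5, "no_speculation": 1-5, "rationale": "..."}}

Source case notes:
{source}

Model-generated summary:
{summary}
"""

def evaluate_summary(source: str, summary: str, threshold: int = 4) -> dict:
    # A second model grades the first model's output. We can't assert an exact
    # output string, but we can assert that the graded behavior clears a bar.
    grades = json.loads(call_model(EVALUATOR_PROMPT.format(source=source, summary=summary)))
    grades["passed"] = all(
        grades[k] >= threshold for k in ("factual", "complete", "no_speculation")
    )
    return grades

print(evaluate_summary("case notes ...", "generated summary ..."))
```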

What This All Means for Compliance Leaders

If you’re a compliance leader – whether you’re in a technical role or not – get ready: your job requirements are about to expand. You’ve likely felt it already. From today, and every day hereafter, understanding AI isn’t optional. If you want to grow and succeed, AI will be part of your world.

But here’s the good news. Just because you’ve got new ground to cover doesn’t mean you need to head back to grad school for a PhD in AI. Bringing AI transformation to your current workplace is about gathering support, promoting AI initiatives, and asking better questions.

What kind of questions? Here are just a few of the ones we find helpful when charting a course for AI compliance technology:

  • What decisions would we like AI to make, and which ones would we like to remain exclusively in the realm of human judgment?
  • How are we monitoring and auditing the decisions currently being made by AI?
  • At each human-in-the-loop touchpoint, does our team have the opportunity to provide judgment, seek deeper context, and reorient AI decision-making as needed?
  • Is there a long-term plan for continuous model testing, specifically covering accuracy, bias, and potential degradation?

Think of it this way: with AI, it’s not simply that you don’t need to build the system – it’s that you shouldn’t. Unless you have a tech stack ready to support it, and an AI-native internal team ready to build it, there’s no reason to build your own solution. Working with a qualified and dedicated vendor will always result in a more comprehensive, responsive, and flexible set of capabilities. 

But that doesn’t mean you’re off the hook. Getting the best from your vendor, program, and team will require that you help mold and shape your company’s AI capabilities.

This is the starting line we find ourselves at. As I see it, the next five years will be defined not just by technical breakthroughs, but by thoughtful, collaborative product design – where a financial institution’s compliance needs are part of the spec from day one.

Final Thought: The Long Road is the Right One

It’s tempting to chase AI like it’s a race – to ship fast, demo hard, and brag to stakeholders. 

But in high-stakes domains like financial compliance, long-term success will be found where AI development is paired with careful, strategic, and deeply integrated thinking about domain expertise and how real people work.

I’m optimistic. The tools are improving at an amazing pace. Ideas are coming in from all sides. And for us – the people building these new AI products – our ability to deliver high-quality, scalable change to our customers just keeps getting better and better. 

The next five years aren’t about what AI can do. They’re about what we choose to do with it.
