As AI systems become more autonomous, they are starting to do more than follow simple rules. Some now make decisions, delegate tasks, and adjust their goals, all without human approval. These agentic systems raise governance challenges that current frameworks are not fully prepared to address.
Who should be bracing for impact? As far as we’re concerned, this should matter to anyone responsible for building, managing, or overseeing AI in a regulated environment. The simple truth is that the assumptions that worked for traditional, task-based systems do not hold for systems that act on their own. Without clearer standards for autonomy and decision authority, organizations risk being unprepared for the real-world behavior of these agents.
If your team is already deploying complex AI systems, or you are responsible for ensuring responsible use, it’s time to ask: do current frameworks go far enough?
What Are Agentic Systems?
An agentic system can pursue goals, make decisions, and take actions with limited or no direct human oversight. It may interact with other systems, shift strategies on the fly, or decide when and how to escalate tasks.
This raises fundamental questions:
- How much freedom should an AI agent have to make decisions?
- Who is responsible when the agentic system delegates a task to another system or agent?
- Can your existing oversight structure detect when that system steps outside its assigned boundaries?
If your governance model is based on fixed workflows, static approvals, or manual reviews, it likely won’t hold up.
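To make the contrast concrete, here is a minimal sketch of what an explicit, code-level decision-authority gate might look like. Everything in it is an illustrative assumption on our part: the autonomy tiers, the action names, and the `requires_escalation` helper are not drawn from any standard, which is exactly the gap we discuss below.

```python
from enum import IntEnum

# Hypothetical autonomy tiers. No standard defines these today,
# which is precisely the gap this article describes.
class AutonomyLevel(IntEnum):
    SUGGEST_ONLY = 0      # agent proposes; a human approves every action
    ACT_WITH_REVIEW = 1   # agent acts; a human reviews asynchronously
    ACT_AUTONOMOUSLY = 2  # agent acts with no routine human involvement

# Illustrative policy: the maximum autonomy level at which each
# action type may run without escalating to a human.
ACTION_POLICY = {
    "draft_email": AutonomyLevel.ACT_AUTONOMOUSLY,
    "update_record": AutonomyLevel.ACT_WITH_REVIEW,
    "transfer_funds": AutonomyLevel.SUGGEST_ONLY,
}

def requires_escalation(action: str, agent_level: AutonomyLevel) -> bool:
    """Return True if this action exceeds the agent's decision authority."""
    # Unknown actions default to the most restrictive tier.
    permitted = ACTION_POLICY.get(action, AutonomyLevel.SUGGEST_ONLY)
    return agent_level > permitted

# Example: an agent running at ACT_WITH_REVIEW tries to move money.
if requires_escalation("transfer_funds", AutonomyLevel.ACT_WITH_REVIEW):
    print("Escalate to a human approver before executing.")
```

The point is not these specific tiers. It is that decision authority becomes an explicit, testable artifact rather than an assumption buried in a workflow diagram.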
A Look at the Frameworks Most Organizations Rely On
Three frameworks often form the foundation of responsible AI programs: ISO/IEC 42001, the NIST AI Risk Management Framework, and the EU AI Act. Each offers value, but none of them provide complete answers when it comes to managing autonomy and decision authority in agentic systems. Let’s explore these frameworks (and where they fall flat).
ISO/IEC 42001
This new international standard sets out requirements for establishing an AI management system, emphasizing documentation, process control, and continual improvement. It is effective at helping organizations define internal roles and responsibilities and build a structured approach to AI governance.
But ISO/IEC 42001 does not offer practical guidance on setting or monitoring boundaries for autonomous behavior. It does not define what decisions an agentic system may or may not make, nor how to manage delegation of authority within or between systems.
NIST AI Risk Management Framework
The NIST framework focuses on identifying, measuring, and managing AI-related risks. It promotes principles like accountability and transparency while emphasizing the importance of context.
This makes the NIST framework flexible; it can be applied to a wide range of systems, including agentic ones. But it does not define thresholds for acceptable autonomy or explain how to monitor decision delegation or goal drift over time. The result is a strong foundation, but not a complete toolkit.
EU AI Act
The EU AI Act is the most comprehensive AI regulation introduced to date. It imposes specific obligations based on risk classification: high-risk systems must meet documentation, oversight, and human review requirements, all of which are valuable.
Still, the Act focuses on use cases, not system behavior. It assumes the system will operate in known, fixed ways. There is no detailed guidance for what to do when an AI system starts behaving differently than expected or makes decisions that shift over time.
Key Gaps to Consider
If you are developing or governing agentic systems, these frameworks leave out some important elements.
- No standard for autonomy levels. There is no consistent method for defining what degree of freedom a system has to act without review or approval.
- No clear approach to delegation. When a system passes a task to another agent or model, who is responsible for what happens next?
- No tools to detect autonomy drift. Many systems change how they operate over time. Without monitoring, you may not know when a system crosses a line until it is far too late (a minimal monitoring sketch follows this list).
- No oversight of emergent behavior. Complex systems sometimes behave in unexpected ways, especially when interacting with other systems. Most frameworks do not address this directly.
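The autonomy-drift gap, at least, lends itself to a concrete illustration. Below is the minimal monitoring sketch referenced above: it tracks the share of recent actions an agent took without human review and flags when that share crosses an agreed threshold. The class name, window size, and threshold are our illustrative assumptions; no framework currently prescribes any of them.

```python
from collections import deque

class AutonomyDriftMonitor:
    """Track how often an agent acts without human review, and flag
    when that share drifts past an agreed threshold.

    The window size and threshold are illustrative assumptions;
    no current framework prescribes values for either.
    """

    def __init__(self, window: int = 500, threshold: float = 0.30):
        self.events = deque(maxlen=window)  # True = acted without review
        self.threshold = threshold

    def record(self, acted_without_review: bool) -> None:
        self.events.append(acted_without_review)

    def drifting(self) -> bool:
        if not self.events:
            return False
        unreviewed_share = sum(self.events) / len(self.events)
        return unreviewed_share > self.threshold

# Example: feed the monitor from your agent's action log.
monitor = AutonomyDriftMonitor(window=200, threshold=0.25)
for event in [True, False, True, True]:  # stand-in for real log entries
    monitor.record(event)
if monitor.drifting():
    print("Agent is acting without review more often than agreed; investigate.")
```

A sliding window keeps the check cheap and focused on recent behavior. A real deployment would also watch richer signals, such as delegation chains, tool usage, and goal changes.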
Agentic AI is still emerging and carries many unknowns, but the problems laid out above are not theoretical. They are already affecting companies diving into agentic AI, and they shape how you manage risk, maintain compliance, and build trust with stakeholders.
Where the RAI Institute Can Help
The RAI Institute exists to support organizations in closing these kinds of gaps. We work with members to operationalize responsible AI instead of just talking about it.
- Our TrustX Risk Classification helps you determine the level of risk an AI system presents before you apply controls. An AI code assistant used in healthcare does not have the same risk profile as an agentic commerce tool used in banking. TrustX helps ensure the right level of control, oversight, and assurance is applied based on the system’s real-world impact.
- Our RAISE Pathways program includes over 1,100 mapped AI controls aligned to 17 global standards. This gives you a way to benchmark your practices, identify where you’re exposed, and strengthen governance where existing frameworks fall short.
- Our verification and assessment programs help organizations define what autonomy means within their systems. We provide structured reviews of decision authority, delegation boundaries, escalation protocols, and oversight, turning abstract principles into real-world controls.
- Our peer network and expert guidance give you access to lessons learned from other sectors already dealing with these questions. Whether in healthcare, finance, or energy, we help our members make informed, practical decisions.
Take Control of Your Agentic Systems (Before They Take Control of Your Operations)
Agentic systems and autonomous agents are already being deployed across thousands of businesses in a range of industries. If your team is exploring adaptive models, intelligent assistants, or autonomous agents, now is the time to recognize that you’re no longer dealing with standard automation.
Your existing controls may not be enough, and waiting for regulators to spell out every requirement is a risk.
Now is the time to:
- Map your systems’ decision authority.
- Set clear boundaries and escalation points.
- Establish monitoring for autonomy drift.
- Validate your governance with independent oversight.
The organizations that act now will be the ones shaping responsible AI, not reacting to it.
Learn about RAI Institute membership or reach out to discuss how we can support your program.
