AgentScout Logo Agent Scout

Agentic AI Governance Standards Race: ISO/IEEE Frameworks vs Enterprise Reality in Q2 2026

ISO 42001 achieved de facto status with only ~100 certifications while 21% have mature agentic governance. Microsoft toolkit offers first OWASP coverage but 72% cannot trace agent actions. EU AI Act deadline August 2, 2026 creates enforcement pressure.

AgentScout · · · 12 min read
#agentic-ai #ai-governance #iso-42001 #eu-ai-act #owasp #enterprise-adoption
Analyzing Data Nodes...
SIG_CONF:CALCULATING
Verified Sources

TL;DR

In Q2 2026, agentic AI governance frameworks achieved institutional legitimacy while enterprise operationalization lagged at 4:1 ratio. ISO 42001 became de facto standard with only ~100 certifications globally. Microsoft released first comprehensive OWASP coverage toolkit, but 72% of enterprises cannot trace agent actions in real-time. EU AI Act August 2 deadline creates enforcement pressure for 73% planning agentic deployments within two years.

Executive Summary

The first half of 2026 marked a watershed moment in agentic AI governance: standards bodies delivered comprehensive frameworks, a major technology vendor released production-ready open-source tooling, and regulatory deadlines crystallized enforcement timelines. Yet enterprise adoption remains starkly disconnected from this institutional momentum.

ISO 42001 achieved de facto standard status as the primary conformance framework for EU AI Act compliance, with the August 2, 2026 deadline for high-risk systems forcing enterprise attention. However, only approximately 100 organizations globally have achieved certification including BCG, CM.com, IBM Granite, and Anthropic. Deloitte’s 2026 State of AI in the Enterprise reveals only 21% of companies possess mature governance models for autonomous AI agents, while 73% plan deployments within two years—a 4:1 gap between intent and capability.

Microsoft’s Agent Governance Toolkit, released April 2, 2026, represents the first open-source runtime security solution covering all 10 OWASP Agentic Top 10 risks with deterministic sub-millisecond policy enforcement. Yet Strata/CSA research from September-October 2025 shows 68% of organizations rate human-in-the-loop oversight essential while only 28% can trace agent actions—72% lack real-time visibility into autonomous system behavior.

The Anthropic Claude Mythos Preview model triggered global regulatory scramble in April-May 2026, with the U.S. Treasury Secretary and Federal Reserve Chair summoning Wall Street CEOs for urgent meetings. Bank of England Governor Andrew Bailey characterized it as a “very serious challenge for all of us.” This event exposed the corporate governance gap that Fortune’s May 2 analysis crystallized: 2026 marks the shift from capability showcase to execution imperative.

Key Facts

  • Who: ISO, IEEE, OWASP, Microsoft, Anthropic, U.S. Treasury/Fed, Bank of England, NIST, Singapore IMDA
  • What: 5 major governance frameworks finalized; Microsoft toolkit covers 10/10 OWASP risks; Anthropic Mythos triggered global regulatory response
  • When: December 2025 - August 2026; EU AI Act deadline August 2, 2026
  • Impact: ~100 ISO certifications globally; 21% mature governance; 73% deployment intent; 72% cannot trace agent actions

Background & Context

The Agentic AI Governance Landscape Before 2026

The governance deficit for autonomous AI systems was well-documented before 2026. The 2025 calendar year was dubbed the “year of Agentic AI” by industry analysts, but governance frameworks remained fragmented. NIST’s AI Risk Management Framework (AI RMF) provided voluntary guidance organized around four functions—Govern, Map, Measure, Manage—but lacked specific provisions for multi-agent systems executing plans across tools, APIs, and delegated sub-agents.

Berkeley’s Center for Long-Term Cybersecurity published the Agentic AI Risk Profile in late 2025, extending NIST AI RMF to address systems granted agency to act with minimal human oversight. The core insight: multi-agent interactions require system-level monitoring beyond individual agent behavior. Stanford Law scholars critiqued existing approaches, noting that “kill switches don’t work if the agent writes the policy”—a model-centric approach insufficient for autonomous systems executing multi-step plans.

The OWASP Top 10 for Agentic Applications, published December 2025, established the first peer-reviewed risk taxonomy specifically for autonomous AI. Over 100 security experts contributed to defining 10 risks ranging from ASI01 Agent Goal Hijack to ASI10 Rogue Agents. The progressive breach model demonstrated how prompt injection, memory poisoning, and tool misuse evolve in autonomous systems—a conceptual framework but lacking implementation tooling.

What Changed in Q1-Q2 2026

Three forces converged in the first half of 2026:

Standards crystallization: On January 8, NIST CAISI issued a Request for Information—the first formal U.S. government initiative specifically scoped to agentic AI governance. On January 22 at Davos, Singapore’s IMDA released the draft Model AI Governance Framework for Agentic AI. IEEE 7022 finalized technical evaluation criteria for trustworthy generative and agentic AI in enterprise applications. ISO 42001 emerged as the de facto standard with EU AI Act alignment.

Production-ready tooling: Microsoft’s Agent Governance Toolkit release on April 2 provided the first comprehensive implementation of OWASP Agentic Top 10 coverage with measurable performance characteristics—p99 latency under 0.1 milliseconds for policy enforcement. The MIT-licensed seven-package monorepo offered Python, TypeScript, Rust, Go, and .NET implementations with 13,000+ tests.

Regulatory enforcement trigger: Anthropic’s Mythos model demonstrated capability to find and exploit hidden flaws in banking software. The regulatory response—Treasury Secretary Scott Bessent and Fed Chair Jerome Powell convening Wall Street leaders, Bank of England urgent talks with cybersecurity agencies—transformed abstract governance discussions into immediate operational concerns.

Analysis Dimension 1: The Standards Adoption Gap

ISO 42001 De Facto Status vs Enterprise Certification Reality

ISO/IEC 42001 specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system within organizations. Using the Plan-Do-Check-Act methodology familiar from ISO 9001 quality management and ISO 27001 information security, it provides structured documentation for AI governance.

The EU AI Act’s August 2, 2026 compliance deadline for high-risk systems creates regulatory compulsion. Financial sector AI systems classified as high-risk must complete conformity assessments, finalize technical documentation, affix CE marking, and register in the EU database by this date. ISO 42001 emerged as the fastest credible documentation framework for proving conformance.

Yet certification reality diverges sharply from regulatory timelines. Only approximately 100 organizations globally have achieved ISO 42001 certification as of Q1 2026. BCG announced certification on January 27. CM.com achieved certification on January 6. IBM Granite became the first open-source model developer certified. Anthropic pursued certification for its enterprise deployments.

Deloitte’s 2026 State of AI in the Enterprise quantifies the operationalization gap: 87% of executives claim AI governance frameworks exist within their organizations, but fewer than 25% have fully operationalized enterprise governance. For autonomous agents specifically, only 21% possess mature governance models.

McKinsey RAI Maturity Data: The 2.3 Score Benchmark

McKinsey’s State of AI Trust in 2026 provides the most comprehensive maturity measurement. Average Responsible AI (RAI) maturity score reached 2.3 in 2026—up from 2.0 in 2025 but still below level 3, which represents documented governance processes with defined roles and responsibilities.

Only one-third of organizations report maturity level 3 or higher across strategy, governance, and agentic AI governance specifically. This data point anchors the capability assessment: most enterprises possess governance awareness without implementation infrastructure.

The agentic era transition introduces new governance requirements. McKinsey notes that existing RAI frameworks designed for single-model deployments require extension for multi-agent systems executing delegated authority. The “shifting to the agentic era” framing captures the architectural mismatch between governance thinking and autonomous system behavior.

Regulatory Timeline Pressure: The August 2 Countdown

The EU AI Act timeline creates asymmetric pressure across industries. Financial services face the earliest compliance burden—high-risk AI systems in banking, insurance, and investment management must demonstrate conformity by August 2, 2026.

Legal analyses clarify the requirements: conformity assessments completed, technical documentation finalized, CE marking affixed, EU database registration completed. ISO 42001 certification provides the documentation structure to satisfy these requirements, but certification process timeline typically requires 4-6 months for organizations with existing governance infrastructure and 12-18 months for those building governance from scratch.

With the deadline 3 months from May 6, organizations without existing certification face three options: accelerate certification through consultancy engagement, pursue alternative conformance documentation, or accept regulatory non-compliance risk. Procurement gates increasingly require ISO 42001 certification from vendors, adding commercial pressure beyond regulatory compliance.

MetricValueSourceContext
ISO 42001 global certifications~100 organizationsBCG, CM.com, IBM, Anthropic announcementsFirst 100 globally certified as of Q1 2026
Mature agentic governance21%Deloitte State of AI 2026One in five companies has mature autonomous agent governance
RAI maturity score 20262.3McKinsey State of AI Trust 2026Up from 2.0 in 2025; only 1/3 at level 3+
Agentic deployment intent73% within 2 yearsDeloitte State of AI 2026Nearly three-quarters plan autonomous agent deployments
Executives claiming governance87%Deloitte ISO 42001 articleBut <25% fully operationalized

Analysis Dimension 2: The Runtime Security Gap

OWASP Agentic Top 10: Risk Taxonomy Establishment

OWASP Top 10 for Agentic Applications, published December 2025, defined 10 specific risks for autonomous AI systems:

  1. ASI01 Agent Goal Hijack: Manipulation causing agents to pursue unintended objectives. EchoLeak demonstrated real-world exploitation.
  2. ASI02 Tool Misuse & Exploitation: Agents leveraging available tools beyond intended scope. Amazon Q incident exemplified this risk category.
  3. ASI03 Identity & Privilege Abuse: Exploiting agent identity mechanisms for unauthorized access.
  4. ASI04 Agentic Supply Chain Vulnerabilities: Risks from third-party tools, plugins, and delegated agents.
  5. ASI05 Memory Poisoning: Corrupting embeddings and RAG databases to manipulate agent reasoning.
  6. ASI06 Cascading Planning Errors: Multi-step plan failures propagating through autonomous execution.
  7. ASI07 Insecure Tool Usage: Insufficient validation of tool inputs and outputs.
  8. ASI08 Rogue Agents: Agents deviating from intended behavior without detection.
  9. ASI09 Insecure Communications: Inter-agent and agent-system communications vulnerabilities.
  10. ASI10 Code Execution Risks: Unsafe code generation and execution patterns.

The progressive breach model developed by Lakera demonstrates attack chain evolution: prompt injection leads to memory poisoning, enabling tool misuse, ultimately achieving goal hijack. This framework provided conceptual clarity but lacked production implementation.

Microsoft Agent Governance Toolkit: First 10/10 Coverage

Microsoft’s Agent Governance Toolkit, released April 2, 2026, provides the first comprehensive coverage of all 10 OWASP Agentic Top 10 risks with deterministic enforcement. The seven-package architecture includes:

  • Agent OS: Core policy engine with p99 latency below 0.1 milliseconds
  • Agent Mesh: Cryptographic identity and inter-agent trust protocols
  • Agent Runtime: Execution sandboxing with privilege rings
  • Agent SRE: Circuit breakers and error budgets for reliability
  • Agent Compliance: EU AI Act, HIPAA, SOC2 mapping with compliance grading
  • Agent Marketplace: Signed plugin verification for supply chain security
  • Agent Lightning: Reinforcement learning governance for adaptive systems

MIT-licensed and available across Python, TypeScript, Rust, Go, and .NET, the toolkit includes 13,000+ tests. InfoWorld characterized it as “first comprehensive OWASP coverage”—a significant milestone given OWASP publication five months prior.

The sub-millisecond policy enforcement benchmark addresses enterprise latency concerns. Deterministic enforcement—policy decisions that cannot be overridden by agent reasoning—addresses Stanford Law’s critique of kill switch vulnerabilities when agents write policy.

Enterprise Deployment Reality: Tooling Availability vs Integration Capability

Microsoft toolkit availability does not automatically translate to enterprise deployment. Integration with existing agent platforms—LangChain, CrewAI, AutoGen, Semantic Kernel—requires architectural alignment. The toolkit’s Agent OS policy engine requires integration at agent initialization, not retrofit onto existing autonomous systems.

Gartner’s 2026 Hype Cycle data provides deployment context: 17% of organizations have deployed AI agents, 42% expect deployment within 12 months, 22% within the following year. Agent development platforms sit at the Peak of Inflated Expectations, indicating adoption enthusiasm preceding practical implementation.

The governance cost component matters. FifthRow Enterprise Playbook estimates governance consumes up to 60% of AI agent development budget—$60,000 for midscale pilots escalating to over $300,000 for regulated, production-grade implementations. Microsoft toolkit reduces governance tooling cost through open-source availability but does not eliminate integration labor.

Framework/ToolOWASP CoverageLatencyAvailabilityEnterprise Readiness
OWASP Agentic Top 1010/10 (taxonomy)December 2025Reference only
Microsoft AGT10/10 (implementation)p99 <0.1msApril 2, 2026Emerging adoption
LangChain LangGraphPartialVariableExistingRequires AGT integration
CrewAIMinimalVariableExistingRequires custom governance
AutoGenMinimalVariableMicrosoft ecosystemAGT-native potential

Analysis Dimension 3: The Supervision Gap

Human-in-the-Loop Requirements vs Implementation Architecture

The Strata/CSA survey conducted September-October 2025 quantifies the supervision deficit most directly:

  • 68% of organizations rate human-in-the-loop oversight as “essential” or “very important” for agentic AI systems
  • Only 28% can trace agent actions in real-time—72% lack visibility
  • 68% require HITL oversight but lack architectural approaches for integrating checkpoints at policy-defined thresholds

This 40-point gap between requirements and capability represents the core operational challenge. Organizations recognize supervision necessity but have not built infrastructure enabling supervision.

Berkeley CMR’s governance framework reframes the oversight model: supervision rather than approval. Each agent requires clear business owner, risk classification, and escalation protocol. Human-in-the-loop reconciles autonomy with accountability—agents execute delegated authority with defined checkpoints where human review intervenes.

The architectural challenge: agents are ephemeral, spinning up for specific tasks and dissolving. Static IAM (Identity and Access Management) designed for persistent human users cannot accommodate transient agent identities. Dynamic identity issuance, cryptographic attestation, and session-based authorization require new infrastructure.

Agent Identity Crisis: Ephemeral Systems vs Static IAM

The Strata/CSA “AI Agent Identity Crisis” research identifies three architectural gaps:

  1. Ephemeral agent identities: Traditional IAM assumes persistent identities with assigned permissions. Agents instantiate for task execution, requiring dynamic identity issuance and credential management.

  2. Multi-hop authorization: Agents delegated to sub-agents create authorization chains. Each delegation requires identity propagation and privilege containment.

  3. Action traceability: 72% cannot trace agent actions. Real-time monitoring infrastructure for autonomous execution remains absent.

Microsoft Agent Mesh addresses cryptographic identity with inter-agent trust protocols. The Agent OS policy engine provides authorization boundaries. But existing enterprise IAM systems—Active Directory, Okta, AWS IAM—lack native agent identity models.

Stanford Law Critique: Kill Switch Architecture Vulnerability

Stanford Law’s analysis of Berkeley’s Agentic AI Profile through the AILCCP lens provides critical architectural insight:

“Kill switches don’t work if the agent writes the policy.”

The critique targets model-centric governance approaches. Autonomous systems execute multi-step plans across tools, APIs, and delegated sub-agents—not discrete, reviewable actions. A kill switch triggered after goal hijack has already occurred cannot prevent memory poisoning or tool misuse already executed.

The progressive breach model from OWASP demonstrates this vulnerability: prompt injection initiates attack chain, memory poisoning enables persistent manipulation, tool misuse achieves objective deviation. Kill switch activation after cascading errors addresses symptoms, not attack propagation.

Deterministic policy enforcement—Microsoft toolkit’s Agent OS approach—provides architectural alternative. Policy decisions made before agent action execution, not reactive override after anomalous behavior. The sub-millisecond latency enables policy enforcement at agent reasoning boundaries.

Supervision RequirementImplementation RateGap
HITL oversight essential (68%)28% can trace actions40 points
HITL checkpoint integration required (68%)Architectural approaches lacking68 points
Agent identity managementStatic IAM inadequate
Kill switch architectureReactive, not preventive

Key Data Points

MetricValueSourceDate
ISO 42001 certifications~100 globallyBCG, CM.com, IBM, AnthropicQ1 2026
Mature agentic governance21%Deloitte State of AI 20262026
RAI maturity score2.3/5McKinsey State of AI Trust 20262026
HITL essential rating68%Strata/CSA SurveySep-Oct 2025
Agent action traceability28%Strata/CSA SurveySep-Oct 2025
OWASP coverage (Microsoft AGT)10/10 risksMicrosoft Open Source BlogApril 2, 2026
Policy engine latencyp99 <0.1msMicrosoft AGT GitHubApril 2026
Enterprise AI agent cost$60K-$300K+FifthRow Enterprise Playbook2026
Governance budget shareup to 60%FifthRow Enterprise Playbook2026
GenAI pilot ROI failure95% without governanceMIT 2026 study2026
Privacy spending $5M+38% (up from 14%)Cisco 2026 Benchmark2026
AI agent deployment17% deployedGartner 2026 Hype Cycle2026
Agentic deployment intent73% within 2 yearsDeloitte State of AI 20262026

🔺 Scout Intel: What Others Missed

Confidence: high | Novelty Score: 85/100

Coverage of agentic AI governance focuses on framework announcements and regulatory deadlines. Four structural dynamics remain underexamined:

1. The 4:1 Standards-to-Capability Ratio: ISO 42001 achieved institutional legitimacy with certification representing the compliance gold standard, yet enterprise operationalization sits at 21% mature governance while deployment intent reaches 73%. This is not an adoption lag—it is a structural capability deficit. Organizations cannot deploy governance infrastructure faster than certification timelines permit, and certification requires governance infrastructure. The August 2 EU AI Act deadline creates temporal impossibility for organizations without existing ISO 42001 certification trajectories. Financial services firms with high-risk AI systems face regulatory non-compliance or accelerated consultancy engagement consuming 6-12 months.

2. OWASP-to-Tooling Velocity: OWASP published Agentic Top 10 in December 2025. Microsoft released comprehensive coverage implementation in April 2026—four months from taxonomy to production tooling. This velocity is unprecedented in security standard-to-tool translation. OWASP Top 10 for Web Applications took years from 2003 publication to widespread scanner implementation. The agentic security ecosystem compressed this cycle into months, driven by enterprise urgency and major vendor investment. However, tooling availability does not equal deployment: 42% plan agent deployment in 12 months but Microsoft toolkit requires architectural integration at agent initialization, not retrofit capability.

3. The Supervision Architecture Gap: The 68% HITL requirement vs 28% traceability represents the most consequential implementation deficit. Supervision models assume human intervention capability at defined checkpoints. But 68% lack architectural checkpoint integration—they cannot implement HITL even when required. Berkeley CMR reframes supervision as “defined business owner, risk classification, escalation protocol”—governance metadata, not runtime infrastructure. Microsoft Agent OS provides checkpoint architecture but requires integration into agent platforms. Enterprises possess governance thinking without execution architecture.

4. Anthropic Mythos as Catalyst, Not Solution: The Treasury Secretary/Fed Chair Wall Street convening and Bank of England characterization as “very serious challenge” transformed governance discussion into operational urgency. But Anthropic’s response—FIS building agent-first governed environment with traceable, auditable decisions—represents bespoke infrastructure, not replicable framework. BMO and Amalgamated Bank as first deployments indicate financial services vertical solution, not horizontal enterprise capability. The regulatory alarm accelerated governance attention but did not provide implementation pathways.

5. Privacy Spending Surge Preceding Governance Investment: Cisco 2026 data shows 38% now spend $5M+ on privacy (up from 14% in 2024), 90% expanded privacy programs due to AI. Privacy infrastructure—data classification, consent management, access controls—provides foundation for governance but does not address autonomous agent behavior. Organizations investing in privacy governance without agent governance create capability asymmetry. The 93% planning further privacy investment indicates resource allocation trajectory, but agentic governance requires separate architectural investment.

6. ROI Failure Rate Without Governance-First: MIT 2026 finding that 95% of enterprise GenAI pilots fail to deliver ROI without governance-first deployment provides economic validation. Governance is not compliance overhead—it is deployment prerequisite for economic value capture. The FifthRow data showing governance consumes 60% of AI agent development budget contextualizes this: enterprises attempting governance-free deployment face pilot failure, not successful pilots requiring subsequent governance retrofit.

Key Implication: Organizations pursuing agentic AI deployment should prioritize ISO 42001 certification for EU AI Act conformance, then deploy Microsoft toolkit for OWASP runtime risks. The August 2 deadline creates temporal pressure for certification trajectory, while toolkit deployment addresses the 72% traceability gap through Agent OS integration. Governance-first deployment is not regulatory compliance—it is economic viability.

Outlook & Predictions

Near-term (0-3 months)

  • ISO 42001 certification acceleration: Consultancy firms will report 300-400% increase in certification engagement requests through July 2026 as August 2 deadline approaches. Certification backlog will extend beyond deadline for organizations without existing governance infrastructure. Confidence: high.

  • Microsoft AGT enterprise adoption: First production deployments will emerge in Microsoft ecosystem enterprises—Azure AI Services customers with existing AutoGen or Semantic Kernel implementations. Integration guides will prioritize Azure-native scenarios. Confidence: medium.

  • Financial services regulatory pressure: EU AI Act deadline creates specific pressure on banking, insurance, and investment management AI systems. Regulatory non-compliance will occur—August 2 is fixed deadline, certification requires 4-6 months minimum. Enforcement actions will emerge in Q4 2026. Confidence: high.

Medium-term (3-12 months)

  • Certification body expansion: ISO 42001 certification capacity will increase as accreditation bodies add certified auditors. Certification timeline will compress from 6-12 months to 4-8 months by end of 2026. Certification count will reach 300-500 organizations by Q4 2026. Confidence: medium.

  • OWASP Agentic Top 10 2.0: OWASP will publish updated taxonomy incorporating lessons from Microsoft toolkit deployment and Mythos response. New risk categories may emerge around multi-agent cascade failures and identity propagation. Confidence: medium.

  • Agent platform governance integration: LangChain, CrewAI, and non-Microsoft agent platforms will announce governance framework partnerships or native governance modules. Market pressure from Microsoft AGT availability will drive competitive response. Confidence: high.

Long-term (12+ months)

  • Governance automation standardization: Agent Compliance automation (EU AI Act mapping, HIPAA, SOC2) will become expected capability. Manual governance documentation will be insufficient for procurement gates. Confidence: high.

  • Agent identity infrastructure emergence: Cryptographic identity standards for ephemeral agents will emerge, extending beyond Microsoft Agent Mesh to broader ecosystem. IAM providers (Okta, Auth0, AWS IAM) will develop agent identity models. Confidence: medium.

  • Supervision architecture maturation: The 68% checkpoint integration gap will narrow as tooling matures. Agent OS-style policy enforcement will become expected capability, not advanced governance. Confidence: medium.

Key Trigger to Watch

ISO 42001 certification count: Track monthly certification announcements. If certification count remains below 150 by July 2026, August 2 compliance gap will be substantial. If certification accelerates to 200+ by July, enforcement pressure may be manageable. Certification velocity provides leading indicator of enterprise capability trajectory.

Sources

Agentic AI Governance Standards Race: ISO/IEEE Frameworks vs Enterprise Reality in Q2 2026

ISO 42001 achieved de facto status with only ~100 certifications while 21% have mature agentic governance. Microsoft toolkit offers first OWASP coverage but 72% cannot trace agent actions. EU AI Act deadline August 2, 2026 creates enforcement pressure.

AgentScout · · · 12 min read
#agentic-ai #ai-governance #iso-42001 #eu-ai-act #owasp #enterprise-adoption
Analyzing Data Nodes...
SIG_CONF:CALCULATING
Verified Sources

TL;DR

In Q2 2026, agentic AI governance frameworks achieved institutional legitimacy while enterprise operationalization lagged at 4:1 ratio. ISO 42001 became de facto standard with only ~100 certifications globally. Microsoft released first comprehensive OWASP coverage toolkit, but 72% of enterprises cannot trace agent actions in real-time. EU AI Act August 2 deadline creates enforcement pressure for 73% planning agentic deployments within two years.

Executive Summary

The first half of 2026 marked a watershed moment in agentic AI governance: standards bodies delivered comprehensive frameworks, a major technology vendor released production-ready open-source tooling, and regulatory deadlines crystallized enforcement timelines. Yet enterprise adoption remains starkly disconnected from this institutional momentum.

ISO 42001 achieved de facto standard status as the primary conformance framework for EU AI Act compliance, with the August 2, 2026 deadline for high-risk systems forcing enterprise attention. However, only approximately 100 organizations globally have achieved certification including BCG, CM.com, IBM Granite, and Anthropic. Deloitte’s 2026 State of AI in the Enterprise reveals only 21% of companies possess mature governance models for autonomous AI agents, while 73% plan deployments within two years—a 4:1 gap between intent and capability.

Microsoft’s Agent Governance Toolkit, released April 2, 2026, represents the first open-source runtime security solution covering all 10 OWASP Agentic Top 10 risks with deterministic sub-millisecond policy enforcement. Yet Strata/CSA research from September-October 2025 shows 68% of organizations rate human-in-the-loop oversight essential while only 28% can trace agent actions—72% lack real-time visibility into autonomous system behavior.

The Anthropic Claude Mythos Preview model triggered global regulatory scramble in April-May 2026, with the U.S. Treasury Secretary and Federal Reserve Chair summoning Wall Street CEOs for urgent meetings. Bank of England Governor Andrew Bailey characterized it as a “very serious challenge for all of us.” This event exposed the corporate governance gap that Fortune’s May 2 analysis crystallized: 2026 marks the shift from capability showcase to execution imperative.

Key Facts

  • Who: ISO, IEEE, OWASP, Microsoft, Anthropic, U.S. Treasury/Fed, Bank of England, NIST, Singapore IMDA
  • What: 5 major governance frameworks finalized; Microsoft toolkit covers 10/10 OWASP risks; Anthropic Mythos triggered global regulatory response
  • When: December 2025 - August 2026; EU AI Act deadline August 2, 2026
  • Impact: ~100 ISO certifications globally; 21% mature governance; 73% deployment intent; 72% cannot trace agent actions

Background & Context

The Agentic AI Governance Landscape Before 2026

The governance deficit for autonomous AI systems was well-documented before 2026. The 2025 calendar year was dubbed the “year of Agentic AI” by industry analysts, but governance frameworks remained fragmented. NIST’s AI Risk Management Framework (AI RMF) provided voluntary guidance organized around four functions—Govern, Map, Measure, Manage—but lacked specific provisions for multi-agent systems executing plans across tools, APIs, and delegated sub-agents.

Berkeley’s Center for Long-Term Cybersecurity published the Agentic AI Risk Profile in late 2025, extending NIST AI RMF to address systems granted agency to act with minimal human oversight. The core insight: multi-agent interactions require system-level monitoring beyond individual agent behavior. Stanford Law scholars critiqued existing approaches, noting that “kill switches don’t work if the agent writes the policy”—a model-centric approach insufficient for autonomous systems executing multi-step plans.

The OWASP Top 10 for Agentic Applications, published December 2025, established the first peer-reviewed risk taxonomy specifically for autonomous AI. Over 100 security experts contributed to defining 10 risks ranging from ASI01 Agent Goal Hijack to ASI10 Rogue Agents. The progressive breach model demonstrated how prompt injection, memory poisoning, and tool misuse evolve in autonomous systems—a conceptual framework but lacking implementation tooling.

What Changed in Q1-Q2 2026

Three forces converged in the first half of 2026:

Standards crystallization: On January 8, NIST CAISI issued a Request for Information—the first formal U.S. government initiative specifically scoped to agentic AI governance. On January 22 at Davos, Singapore’s IMDA released the draft Model AI Governance Framework for Agentic AI. IEEE 7022 finalized technical evaluation criteria for trustworthy generative and agentic AI in enterprise applications. ISO 42001 emerged as the de facto standard with EU AI Act alignment.

Production-ready tooling: Microsoft’s Agent Governance Toolkit release on April 2 provided the first comprehensive implementation of OWASP Agentic Top 10 coverage with measurable performance characteristics—p99 latency under 0.1 milliseconds for policy enforcement. The MIT-licensed seven-package monorepo offered Python, TypeScript, Rust, Go, and .NET implementations with 13,000+ tests.

Regulatory enforcement trigger: Anthropic’s Mythos model demonstrated capability to find and exploit hidden flaws in banking software. The regulatory response—Treasury Secretary Scott Bessent and Fed Chair Jerome Powell convening Wall Street leaders, Bank of England urgent talks with cybersecurity agencies—transformed abstract governance discussions into immediate operational concerns.

Analysis Dimension 1: The Standards Adoption Gap

ISO 42001 De Facto Status vs Enterprise Certification Reality

ISO/IEC 42001 specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system within organizations. Using the Plan-Do-Check-Act methodology familiar from ISO 9001 quality management and ISO 27001 information security, it provides structured documentation for AI governance.

The EU AI Act’s August 2, 2026 compliance deadline for high-risk systems creates regulatory compulsion. Financial sector AI systems classified as high-risk must complete conformity assessments, finalize technical documentation, affix CE marking, and register in the EU database by this date. ISO 42001 emerged as the fastest credible documentation framework for proving conformance.

Yet certification reality diverges sharply from regulatory timelines. Only approximately 100 organizations globally have achieved ISO 42001 certification as of Q1 2026. BCG announced certification on January 27. CM.com achieved certification on January 6. IBM Granite became the first open-source model developer certified. Anthropic pursued certification for its enterprise deployments.

Deloitte’s 2026 State of AI in the Enterprise quantifies the operationalization gap: 87% of executives claim AI governance frameworks exist within their organizations, but fewer than 25% have fully operationalized enterprise governance. For autonomous agents specifically, only 21% possess mature governance models.

McKinsey RAI Maturity Data: The 2.3 Score Benchmark

McKinsey’s State of AI Trust in 2026 provides the most comprehensive maturity measurement. Average Responsible AI (RAI) maturity score reached 2.3 in 2026—up from 2.0 in 2025 but still below level 3, which represents documented governance processes with defined roles and responsibilities.

Only one-third of organizations report maturity level 3 or higher across strategy, governance, and agentic AI governance specifically. This data point anchors the capability assessment: most enterprises possess governance awareness without implementation infrastructure.

The agentic era transition introduces new governance requirements. McKinsey notes that existing RAI frameworks designed for single-model deployments require extension for multi-agent systems executing delegated authority. The “shifting to the agentic era” framing captures the architectural mismatch between governance thinking and autonomous system behavior.

Regulatory Timeline Pressure: The August 2 Countdown

The EU AI Act timeline creates asymmetric pressure across industries. Financial services face the earliest compliance burden—high-risk AI systems in banking, insurance, and investment management must demonstrate conformity by August 2, 2026.

Legal analyses clarify the requirements: conformity assessments completed, technical documentation finalized, CE marking affixed, EU database registration completed. ISO 42001 certification provides the documentation structure to satisfy these requirements, but certification process timeline typically requires 4-6 months for organizations with existing governance infrastructure and 12-18 months for those building governance from scratch.

With the deadline 3 months from May 6, organizations without existing certification face three options: accelerate certification through consultancy engagement, pursue alternative conformance documentation, or accept regulatory non-compliance risk. Procurement gates increasingly require ISO 42001 certification from vendors, adding commercial pressure beyond regulatory compliance.

MetricValueSourceContext
ISO 42001 global certifications~100 organizationsBCG, CM.com, IBM, Anthropic announcementsFirst 100 globally certified as of Q1 2026
Mature agentic governance21%Deloitte State of AI 2026One in five companies has mature autonomous agent governance
RAI maturity score 20262.3McKinsey State of AI Trust 2026Up from 2.0 in 2025; only 1/3 at level 3+
Agentic deployment intent73% within 2 yearsDeloitte State of AI 2026Nearly three-quarters plan autonomous agent deployments
Executives claiming governance87%Deloitte ISO 42001 articleBut <25% fully operationalized

Analysis Dimension 2: The Runtime Security Gap

OWASP Agentic Top 10: Risk Taxonomy Establishment

OWASP Top 10 for Agentic Applications, published December 2025, defined 10 specific risks for autonomous AI systems:

  1. ASI01 Agent Goal Hijack: Manipulation causing agents to pursue unintended objectives. EchoLeak demonstrated real-world exploitation.
  2. ASI02 Tool Misuse & Exploitation: Agents leveraging available tools beyond intended scope. Amazon Q incident exemplified this risk category.
  3. ASI03 Identity & Privilege Abuse: Exploiting agent identity mechanisms for unauthorized access.
  4. ASI04 Agentic Supply Chain Vulnerabilities: Risks from third-party tools, plugins, and delegated agents.
  5. ASI05 Memory Poisoning: Corrupting embeddings and RAG databases to manipulate agent reasoning.
  6. ASI06 Cascading Planning Errors: Multi-step plan failures propagating through autonomous execution.
  7. ASI07 Insecure Tool Usage: Insufficient validation of tool inputs and outputs.
  8. ASI08 Rogue Agents: Agents deviating from intended behavior without detection.
  9. ASI09 Insecure Communications: Inter-agent and agent-system communications vulnerabilities.
  10. ASI10 Code Execution Risks: Unsafe code generation and execution patterns.

The progressive breach model developed by Lakera demonstrates attack chain evolution: prompt injection leads to memory poisoning, enabling tool misuse, ultimately achieving goal hijack. This framework provided conceptual clarity but lacked production implementation.

Microsoft Agent Governance Toolkit: First 10/10 Coverage

Microsoft’s Agent Governance Toolkit, released April 2, 2026, provides the first comprehensive coverage of all 10 OWASP Agentic Top 10 risks with deterministic enforcement. The seven-package architecture includes:

  • Agent OS: Core policy engine with p99 latency below 0.1 milliseconds
  • Agent Mesh: Cryptographic identity and inter-agent trust protocols
  • Agent Runtime: Execution sandboxing with privilege rings
  • Agent SRE: Circuit breakers and error budgets for reliability
  • Agent Compliance: EU AI Act, HIPAA, SOC2 mapping with compliance grading
  • Agent Marketplace: Signed plugin verification for supply chain security
  • Agent Lightning: Reinforcement learning governance for adaptive systems

MIT-licensed and available across Python, TypeScript, Rust, Go, and .NET, the toolkit includes 13,000+ tests. InfoWorld characterized it as “first comprehensive OWASP coverage”—a significant milestone given OWASP publication five months prior.

The sub-millisecond policy enforcement benchmark addresses enterprise latency concerns. Deterministic enforcement—policy decisions that cannot be overridden by agent reasoning—addresses Stanford Law’s critique of kill switch vulnerabilities when agents write policy.

Enterprise Deployment Reality: Tooling Availability vs Integration Capability

Microsoft toolkit availability does not automatically translate to enterprise deployment. Integration with existing agent platforms—LangChain, CrewAI, AutoGen, Semantic Kernel—requires architectural alignment. The toolkit’s Agent OS policy engine requires integration at agent initialization, not retrofit onto existing autonomous systems.

Gartner’s 2026 Hype Cycle data provides deployment context: 17% of organizations have deployed AI agents, 42% expect deployment within 12 months, 22% within the following year. Agent development platforms sit at the Peak of Inflated Expectations, indicating adoption enthusiasm preceding practical implementation.

The governance cost component matters. FifthRow Enterprise Playbook estimates governance consumes up to 60% of AI agent development budget—$60,000 for midscale pilots escalating to over $300,000 for regulated, production-grade implementations. Microsoft toolkit reduces governance tooling cost through open-source availability but does not eliminate integration labor.

Framework/ToolOWASP CoverageLatencyAvailabilityEnterprise Readiness
OWASP Agentic Top 1010/10 (taxonomy)December 2025Reference only
Microsoft AGT10/10 (implementation)p99 <0.1msApril 2, 2026Emerging adoption
LangChain LangGraphPartialVariableExistingRequires AGT integration
CrewAIMinimalVariableExistingRequires custom governance
AutoGenMinimalVariableMicrosoft ecosystemAGT-native potential

Analysis Dimension 3: The Supervision Gap

Human-in-the-Loop Requirements vs Implementation Architecture

The Strata/CSA survey conducted September-October 2025 quantifies the supervision deficit most directly:

  • 68% of organizations rate human-in-the-loop oversight as “essential” or “very important” for agentic AI systems
  • Only 28% can trace agent actions in real-time—72% lack visibility
  • 68% require HITL oversight but lack architectural approaches for integrating checkpoints at policy-defined thresholds

This 40-point gap between requirements and capability represents the core operational challenge. Organizations recognize supervision necessity but have not built infrastructure enabling supervision.

Berkeley CMR’s governance framework reframes the oversight model: supervision rather than approval. Each agent requires clear business owner, risk classification, and escalation protocol. Human-in-the-loop reconciles autonomy with accountability—agents execute delegated authority with defined checkpoints where human review intervenes.

The architectural challenge: agents are ephemeral, spinning up for specific tasks and dissolving. Static IAM (Identity and Access Management) designed for persistent human users cannot accommodate transient agent identities. Dynamic identity issuance, cryptographic attestation, and session-based authorization require new infrastructure.

Agent Identity Crisis: Ephemeral Systems vs Static IAM

The Strata/CSA “AI Agent Identity Crisis” research identifies three architectural gaps:

  1. Ephemeral agent identities: Traditional IAM assumes persistent identities with assigned permissions. Agents instantiate for task execution, requiring dynamic identity issuance and credential management.

  2. Multi-hop authorization: Agents delegated to sub-agents create authorization chains. Each delegation requires identity propagation and privilege containment.

  3. Action traceability: 72% cannot trace agent actions. Real-time monitoring infrastructure for autonomous execution remains absent.

Microsoft Agent Mesh addresses cryptographic identity with inter-agent trust protocols. The Agent OS policy engine provides authorization boundaries. But existing enterprise IAM systems—Active Directory, Okta, AWS IAM—lack native agent identity models.

Stanford Law Critique: Kill Switch Architecture Vulnerability

Stanford Law’s analysis of Berkeley’s Agentic AI Profile through the AILCCP lens provides critical architectural insight:

“Kill switches don’t work if the agent writes the policy.”

The critique targets model-centric governance approaches. Autonomous systems execute multi-step plans across tools, APIs, and delegated sub-agents—not discrete, reviewable actions. A kill switch triggered after goal hijack has already occurred cannot prevent memory poisoning or tool misuse already executed.

The progressive breach model from OWASP demonstrates this vulnerability: prompt injection initiates attack chain, memory poisoning enables persistent manipulation, tool misuse achieves objective deviation. Kill switch activation after cascading errors addresses symptoms, not attack propagation.

Deterministic policy enforcement—Microsoft toolkit’s Agent OS approach—provides architectural alternative. Policy decisions made before agent action execution, not reactive override after anomalous behavior. The sub-millisecond latency enables policy enforcement at agent reasoning boundaries.

Supervision RequirementImplementation RateGap
HITL oversight essential (68%)28% can trace actions40 points
HITL checkpoint integration required (68%)Architectural approaches lacking68 points
Agent identity managementStatic IAM inadequate
Kill switch architectureReactive, not preventive

Key Data Points

MetricValueSourceDate
ISO 42001 certifications~100 globallyBCG, CM.com, IBM, AnthropicQ1 2026
Mature agentic governance21%Deloitte State of AI 20262026
RAI maturity score2.3/5McKinsey State of AI Trust 20262026
HITL essential rating68%Strata/CSA SurveySep-Oct 2025
Agent action traceability28%Strata/CSA SurveySep-Oct 2025
OWASP coverage (Microsoft AGT)10/10 risksMicrosoft Open Source BlogApril 2, 2026
Policy engine latencyp99 <0.1msMicrosoft AGT GitHubApril 2026
Enterprise AI agent cost$60K-$300K+FifthRow Enterprise Playbook2026
Governance budget shareup to 60%FifthRow Enterprise Playbook2026
GenAI pilot ROI failure95% without governanceMIT 2026 study2026
Privacy spending $5M+38% (up from 14%)Cisco 2026 Benchmark2026
AI agent deployment17% deployedGartner 2026 Hype Cycle2026
Agentic deployment intent73% within 2 yearsDeloitte State of AI 20262026

🔺 Scout Intel: What Others Missed

Confidence: high | Novelty Score: 85/100

Coverage of agentic AI governance focuses on framework announcements and regulatory deadlines. Four structural dynamics remain underexamined:

1. The 4:1 Standards-to-Capability Ratio: ISO 42001 achieved institutional legitimacy with certification representing the compliance gold standard, yet enterprise operationalization sits at 21% mature governance while deployment intent reaches 73%. This is not an adoption lag—it is a structural capability deficit. Organizations cannot deploy governance infrastructure faster than certification timelines permit, and certification requires governance infrastructure. The August 2 EU AI Act deadline creates temporal impossibility for organizations without existing ISO 42001 certification trajectories. Financial services firms with high-risk AI systems face regulatory non-compliance or accelerated consultancy engagement consuming 6-12 months.

2. OWASP-to-Tooling Velocity: OWASP published Agentic Top 10 in December 2025. Microsoft released comprehensive coverage implementation in April 2026—four months from taxonomy to production tooling. This velocity is unprecedented in security standard-to-tool translation. OWASP Top 10 for Web Applications took years from 2003 publication to widespread scanner implementation. The agentic security ecosystem compressed this cycle into months, driven by enterprise urgency and major vendor investment. However, tooling availability does not equal deployment: 42% plan agent deployment in 12 months but Microsoft toolkit requires architectural integration at agent initialization, not retrofit capability.

3. The Supervision Architecture Gap: The 68% HITL requirement vs 28% traceability represents the most consequential implementation deficit. Supervision models assume human intervention capability at defined checkpoints. But 68% lack architectural checkpoint integration—they cannot implement HITL even when required. Berkeley CMR reframes supervision as “defined business owner, risk classification, escalation protocol”—governance metadata, not runtime infrastructure. Microsoft Agent OS provides checkpoint architecture but requires integration into agent platforms. Enterprises possess governance thinking without execution architecture.

4. Anthropic Mythos as Catalyst, Not Solution: The Treasury Secretary/Fed Chair Wall Street convening and Bank of England characterization as “very serious challenge” transformed governance discussion into operational urgency. But Anthropic’s response—FIS building agent-first governed environment with traceable, auditable decisions—represents bespoke infrastructure, not replicable framework. BMO and Amalgamated Bank as first deployments indicate financial services vertical solution, not horizontal enterprise capability. The regulatory alarm accelerated governance attention but did not provide implementation pathways.

5. Privacy Spending Surge Preceding Governance Investment: Cisco 2026 data shows 38% now spend $5M+ on privacy (up from 14% in 2024), 90% expanded privacy programs due to AI. Privacy infrastructure—data classification, consent management, access controls—provides foundation for governance but does not address autonomous agent behavior. Organizations investing in privacy governance without agent governance create capability asymmetry. The 93% planning further privacy investment indicates resource allocation trajectory, but agentic governance requires separate architectural investment.

6. ROI Failure Rate Without Governance-First: MIT 2026 finding that 95% of enterprise GenAI pilots fail to deliver ROI without governance-first deployment provides economic validation. Governance is not compliance overhead—it is deployment prerequisite for economic value capture. The FifthRow data showing governance consumes 60% of AI agent development budget contextualizes this: enterprises attempting governance-free deployment face pilot failure, not successful pilots requiring subsequent governance retrofit.

Key Implication: Organizations pursuing agentic AI deployment should prioritize ISO 42001 certification for EU AI Act conformance, then deploy Microsoft toolkit for OWASP runtime risks. The August 2 deadline creates temporal pressure for certification trajectory, while toolkit deployment addresses the 72% traceability gap through Agent OS integration. Governance-first deployment is not regulatory compliance—it is economic viability.

Outlook & Predictions

Near-term (0-3 months)

  • ISO 42001 certification acceleration: Consultancy firms will report 300-400% increase in certification engagement requests through July 2026 as August 2 deadline approaches. Certification backlog will extend beyond deadline for organizations without existing governance infrastructure. Confidence: high.

  • Microsoft AGT enterprise adoption: First production deployments will emerge in Microsoft ecosystem enterprises—Azure AI Services customers with existing AutoGen or Semantic Kernel implementations. Integration guides will prioritize Azure-native scenarios. Confidence: medium.

  • Financial services regulatory pressure: EU AI Act deadline creates specific pressure on banking, insurance, and investment management AI systems. Regulatory non-compliance will occur—August 2 is fixed deadline, certification requires 4-6 months minimum. Enforcement actions will emerge in Q4 2026. Confidence: high.

Medium-term (3-12 months)

  • Certification body expansion: ISO 42001 certification capacity will increase as accreditation bodies add certified auditors. Certification timeline will compress from 6-12 months to 4-8 months by end of 2026. Certification count will reach 300-500 organizations by Q4 2026. Confidence: medium.

  • OWASP Agentic Top 10 2.0: OWASP will publish updated taxonomy incorporating lessons from Microsoft toolkit deployment and Mythos response. New risk categories may emerge around multi-agent cascade failures and identity propagation. Confidence: medium.

  • Agent platform governance integration: LangChain, CrewAI, and non-Microsoft agent platforms will announce governance framework partnerships or native governance modules. Market pressure from Microsoft AGT availability will drive competitive response. Confidence: high.

Long-term (12+ months)

  • Governance automation standardization: Agent Compliance automation (EU AI Act mapping, HIPAA, SOC2) will become expected capability. Manual governance documentation will be insufficient for procurement gates. Confidence: high.

  • Agent identity infrastructure emergence: Cryptographic identity standards for ephemeral agents will emerge, extending beyond Microsoft Agent Mesh to broader ecosystem. IAM providers (Okta, Auth0, AWS IAM) will develop agent identity models. Confidence: medium.

  • Supervision architecture maturation: The 68% checkpoint integration gap will narrow as tooling matures. Agent OS-style policy enforcement will become expected capability, not advanced governance. Confidence: medium.

Key Trigger to Watch

ISO 42001 certification count: Track monthly certification announcements. If certification count remains below 150 by July 2026, August 2 compliance gap will be substantial. If certification accelerates to 200+ by July, enforcement pressure may be manageable. Certification velocity provides leading indicator of enterprise capability trajectory.

Sources

am7ln2m9th7msmd1320zvq████iqo24hv46xqui7pg95zhf13jrvpp9ge1████auq4vwlj8jgywjed8bsxekr8rahr0r2m████qiqyc42ogpr6s6l65phwm4dbhk1m7hst4░░░24w4urywobq1dv13uegc7uvz16vkumyzk████lco4oz0yh9wv2p1azwiow576tz86jj░░░rncl751kgz4oe69eibdndur58e8ejdbh████j097jxjwffic66i6psoevol0egik2h9████8xjzw3x09ntg42gco8rfh42xuhmfres05████ky3g495yhx8gbh18sr7ypcmdmrljkop░░░747yl4ur5eo8ydpv5otqws9nfzxd55uog░░░zt6t8ib1icq73d0ddj235ixcpdxj4lkb████dsswedqbqlrjtm068cgze76dl6x9atlq████xidf1ylk5jigottnywseflj3ekusuvtm████qwt8o9c6skrev5x1mrgwic0oywuglstj░░░mtptniigcrrerj8463babdbf7fm7mpxm░░░e7zfszq643mqpid9kq270s31lld5vo23e░░░olrlq8s4jcu7vmm3txsmf2svn73a8un░░░fgt3dugyl5e7lmalj1lpp38xf8wukiudb░░░9zgr1z85bkpmsg6sb7qjo7cvd4sr54xe░░░yhyrqte1xpjcfn7uz3ljdo9lxqq42ydf████ttoer930m8i91mv8s987cmlv2tpg3v3v████uj8ecdacdc454z9mmytyyfx4uwqiqxhv░░░x4vqnw0i5o77vi0gz81r70trtainw0ez░░░ftyocht7yevs7e0z8eb149m0ccx1siagj░░░0jpwcvfa3dh2sh2p0y7kdr84u9hi5jiaj████uszyos8lsbd8i7s6tbv9hxv5jd3g3dw████g50siq5q3hkxux5yefhbjzqrytcmha59████fte6y79l6vmczsld4v0f9uyprtss8simi████qxc4u0y0w29o8du5dhw6q4wx36dy9yzt░░░t8atq66hakpgqljf2nn6b460jiu8uziyk░░░ilwit8lc37k8xsajtl0ckdkrrkv1v7jdj████32qhrfwkxtbwqxma4jw7g8gs6q9juzht6████bs5l4xv816l3vtncu4k4n4k8mv80r9zzc████37oir9x77rjdrru1994b8atsotwr55vk████h996xeo3ucuqdj0n4fsiba3f4q6h00r7████ioiuqpupdmqt0wdjopkc4ivmwob1fzcen████101vjp29hy8gpkxx91ru118inxz05csxo████31ny25fbjqjqc25gr90i7q88q1i3wo1f░░░2g5jn21bqt3g3lt94dmyia0x20ljd2ochj████vvcvzgqf94e5u2u836233t5kkn8kepebg░░░0pfcu5igmufu31m8zbus7e6i4n8cv8cj3████v65tx7sj98b2z4n9xmuxvqtd6jegjy2░░░4nl7p917zmawd4nw2nqijhv5hq4z1ttv████uvh80zgqpcovnbr8v2ysnyrnvtkdav░░░ibhcv4k9zlc4b4ab3m0mzh3csm103████7w98rj255x95853q6ps07paswh4snak68░░░2mk30s0p105tapwfk9ot7isg6n35i7gb░░░i5pt8tr9yavlaife83kirnlx7idzlo4████o0fi3e3b6etr6x5wtbb0dxt8d64p3tca████xta1hq1k02s