A post-AI world does not eliminate human work. It reorganizes it.
If you have ever flown on a plane, you have already lived inside the future of work. Modern aircraft are loaded with automation, yet the pilot is still there. They are not a redundant relic, but the accountable authority. The pilot's job is not to manually do everything; it is to supervise, intervene, and own the outcome.
Agentic AI pushes that same shift into knowledge work.
An "agentic workload" is not just a model generating a paragraph. It is a system that can:
- Pursue goals over time
- Execute multi-step plans
- Call tools and services
- Coordinate with other agents
- Produce actions that affect real people
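The capability list above can be condensed into a minimal data-structure sketch. Everything here is illustrative: the class names, fields, and the idea of a popped plan queue are assumptions for exposition, not any real agent framework's API.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Action:
    """One step an agent intends to take in the real world."""
    tool: str            # e.g. a hypothetical "send_email" or "execute_trade"
    arguments: dict
    affects_people: bool = False   # flags actions that touch real people

@dataclass
class Agent:
    """Sketch of an agentic workload: a goal pursued via a multi-step plan."""
    goal: str
    plan: list = field(default_factory=list)

    def next_action(self) -> Optional[Action]:
        # Execute the plan one step at a time until it is exhausted.
        return self.plan.pop(0) if self.plan else None
```

The point of the sketch is the last field: the moment `affects_people` can be true, the system needs a defined answer to the responsibility question below.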
Once AI is allowed to act, the core question becomes: Who is responsible when the agent is wrong?
That question is already shaping regulation, compliance, insurance, and governance. It will also create entire job categories built around human participation points.
The Four Modalities
Where Humans Show Up in Agentic Workflows
Here is how the landscape breaks down:
Verification
A human approves or signs off on an AI-generated decision. The human becomes the accountable checkpoint.
Escalation
The AI hands the baton to a human because it hits a risk limit, policy rule, or uncertainty boundary.
Consultation
The AI proactively asks a human for insight to fill in missing context, such as ethics or organizational intent, without necessarily seeking final approval.
Simulation
The AI runs many simulations before acting. Humans participate in a subset to inject adversarial thinking or human-centered reactions.
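The four modalities can be read as a routing decision the system makes for each action. A minimal sketch, assuming invented inputs and thresholds (the 0.8 risk cutoff, the three boolean/float signals) purely for illustration:

```python
from enum import Enum

class Modality(Enum):
    VERIFICATION = "verification"   # human approves before the action lands
    ESCALATION = "escalation"       # agent hands off past a risk boundary
    CONSULTATION = "consultation"   # agent asks for direction, not approval
    SIMULATION = "simulation"       # humans probe a subset of dry runs

def required_modality(risk: float, needs_signoff: bool,
                      missing_context: bool) -> Modality:
    """Illustrative routing policy; signals and thresholds are assumptions."""
    if needs_signoff:          # policy requires an accountable checkpoint
        return Modality.VERIFICATION
    if risk > 0.8:             # past the risk limit, hand the baton over
        return Modality.ESCALATION
    if missing_context:        # confident but under-informed: ask for direction
        return Modality.CONSULTATION
    return Modality.SIMULATION # otherwise, rehearse before acting
```

The ordering matters: a sign-off requirement dominates risk, and risk dominates missing context, which mirrors how the series presents the modalities from most to least binding.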
Consultation: Direction Before Execution
Consultation is the mechanism that allows agentic systems to incorporate human judgment before execution. The AI may be confident and technically correct, but data alone cannot capture institutional intent, ethical posture, or shifting strategic priorities. Consultation is where direction is set.
Consultation shapes how decisions are weighted, not whether they are approved.
In high-stakes domains, consultation encodes risk appetite, values, and trade-offs into systems that operate at scale. Most importantly, it ensures that AI acts not just correctly, but in alignment with how the organization intends to operate.
1. Healthcare: Physicians as Strategic Context Advisors
AI can generate clinically sound treatment pathways with high confidence, but it cannot independently determine how to weigh survival probability, quality of life, and cost trade-offs for a specific patient or institution.
An AI oncology system produces three viable treatment plans. Each is medically defensible. Survival rates are similar, but side-effect profiles and long-term lifestyle implications differ. Rather than automatically selecting the statistically optimal path, the system consults the attending physician to determine which dimension should be prioritized for this patient: longevity, comfort, or affordability. The physician's guidance informs how the AI weights its final recommendation.
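The consult-then-weight pattern in this scenario can be sketched in a few lines. The plan names, scores, and weights below are invented for illustration; the only claim is structural: the physician's guidance arrives as priority weights, and those weights re-rank plans that are otherwise equally defensible.

```python
# Hypothetical scores for three medically defensible plans.
plans = {
    "plan_a": {"longevity": 0.90, "comfort": 0.55, "affordability": 0.40},
    "plan_b": {"longevity": 0.85, "comfort": 0.80, "affordability": 0.60},
    "plan_c": {"longevity": 0.88, "comfort": 0.60, "affordability": 0.85},
}

def rank(plans: dict, weights: dict) -> list:
    """Order plans by the consulted weighting, best first."""
    score = lambda p: sum(weights[dim] * value for dim, value in p.items())
    return sorted(plans, key=lambda name: score(plans[name]), reverse=True)

# The attending physician prioritizes comfort for this patient.
consulted_weights = {"longevity": 0.3, "comfort": 0.5, "affordability": 0.2}
recommended = rank(plans, consulted_weights)[0]
```

The same shape applies to the finance and legal scenarios that follow: the human supplies the weights, the system supplies the ranked options.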
"Clinical AI Strategy Consultant." Physicians guide how AI systems balance competing medical priorities. Their role shifts from reviewing outputs to shaping treatment philosophy within automated decision frameworks.
2. Finance: Portfolio Managers as Mandate Interpreters
AI systems can optimize credit decisions and portfolio allocations faster than any human. Yet optimization depends on how risk appetite and strategic priorities are defined.
An AI portfolio system recommends reallocating capital toward higher-yield assets based on market conditions. The model's confidence is high and risk thresholds remain within limits. Before executing, the system consults the portfolio manager to clarify whether the firm's current mandate prioritizes yield maximization, volatility reduction, or liquidity preservation. The manager's input determines how the AI weights its allocation model.
"AI Investment Strategy Advisor." Finance professionals define risk posture and mandate interpretation for AI systems. Their work becomes directional rather than procedural, embedding institutional intent into automated financial decisions.
3. Legal and Compliance: Counsel as Risk Posture Definers
Generative and analytical AI can interpret statutes and draft compliant policies, but legal permissibility does not automatically determine strategic posture.
An AI compliance engine identifies two legally permissible approaches to implementing a new regulation. One is conservative, minimizing exposure but limiting operational flexibility. The other is more aggressive, enabling faster expansion while increasing scrutiny risk. Rather than choosing autonomously, the system consults general counsel to determine which posture aligns with long-term corporate strategy. That guidance informs the final policy configuration.
"AI Policy Advisor" or "Regulatory Strategy Consultant." Legal professionals shape how AI systems navigate ambiguity and risk tolerance. Their role evolves from reviewing documents to defining institutional stance within automated compliance environments.
4. Foundation Models: Researchers as Alignment Consultants
Large language models can generate fluent, coherent outputs without human intervention. But raw capability does not determine how a system behaves. It determines what it can do, not what it should do.
During Reinforcement Learning from Human Feedback (RLHF), human reviewers evaluate model outputs across dimensions such as helpfulness, harmlessness, tone, and policy adherence. They rank alternative responses, flag undesirable behavior, and express preference judgments. The model is not uncertain. It is not asking for approval before each output. Instead, it is being shaped through preference signals that adjust how it optimizes future behavior. Human input determines whether the system becomes cautious or bold, concise or verbose, conservative or creative.
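The preference-signal mechanism described above can be sketched with a toy aggregation. This is not the actual RLHF training loop (which fits a reward model and then optimizes the policy against it); it only illustrates the raw material, with invented labels: pairwise human judgments that, in aggregate, tell the optimizer which behavior to favor.

```python
# Each tuple is one reviewer judgment: (preferred response style, rejected one).
comparisons = [
    ("concise", "verbose"),
    ("concise", "verbose"),
    ("verbose", "concise"),
    ("cautious", "bold"),
]

def preference_rate(a: str, b: str) -> float:
    """Fraction of a-vs-b head-to-head comparisons that a won.
    This ratio is the kind of signal a reward model is trained to predict."""
    head_to_head = [c for c in comparisons if set(c) == {a, b}]
    return sum(1 for preferred, _ in head_to_head if preferred == a) / len(head_to_head)
```

Here "concise" wins two of three head-to-head comparisons, so an optimizer shaped by these signals drifts toward concision; no single comparison approves or rejects an individual output.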
"AI Alignment Specialist" or "Human Feedback Architect." These professionals define the behavioral posture of AI systems at scale. Their role is not to correct individual answers, but to encode institutional and societal values into model optimization processes.
Consultation roles are not demotions to "AI babysitter." They are positions of real authority.
Consultants need deep expertise to define priorities, interpret trade-offs, and encode institutional intent into automated systems. In return, they focus on shaping direction rather than reviewing or rescuing decisions. In a post-AI world, millions of workers will spend their days as these strategic guides: the physician defining treatment philosophy, the portfolio manager clarifying risk appetite, or the general counsel setting regulatory posture.
Source, Connect, and Trust
The Enabling Layer
Across every modality, the same foundational infrastructure is required: Source, Connect, and Trust.
Source: Human Expertise Discoverable for Consultation
When an agent requires consultation, it is not asking for just any human. It must locate someone with the correct expertise, authority, licensure, and approval rights for that specific decision.
A medical AI cannot route consultation to just any available clinician; it must route it to a licensed professional authorized for that specific category of diagnosis. At scale, consultation becomes a routing problem across millions of decisions. Source ensures that only eligible humans are discoverable for specific tasks and that policy is enforced before consultation occurs.
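As a routing problem, Source reduces to an eligibility filter applied before any human is discoverable. A minimal sketch, with invented names, license categories, and fields:

```python
from dataclasses import dataclass

@dataclass
class Expert:
    name: str
    licenses: set       # credential categories this person holds
    available: bool

def eligible(experts: list, required_license: str) -> list:
    """Source-layer sketch: enforce policy before consultation occurs,
    so only licensed, available experts are discoverable for a task."""
    return [e for e in experts
            if e.available and required_license in e.licenses]

roster = [
    Expert("Dr. Lin",  {"oncology"},   available=True),
    Expert("Dr. Osei", {"cardiology"}, available=True),
    Expert("Dr. Park", {"oncology"},   available=False),
]
# An oncology consultation can only discover Dr. Lin: Dr. Osei lacks the
# license and Dr. Park is unavailable.
```

A real Source layer would also check authority and approval rights per decision category; the point of the sketch is that the filter runs before routing, not after.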
Connect: Secure, Contextualized Consultation Workflows
Finding the right consultation target is meaningless if the request lacks sufficient context to support a real decision.
Connect is the layer that packages the AI output, supporting evidence, and relevant context into a structured task. It ensures the human sees what the agent saw and understands exactly what guidance is being sought. In regulated environments, Connect also enforces timing, prevents unauthorized delegation, and ensures the consultation step cannot be bypassed.
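The packaging step Connect performs can be sketched as a structured task object. Field names, the four-hour default deadline, and the `delegable` flag are all assumptions for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class ConsultationTask:
    """Connect-layer sketch: bundle the AI output, its evidence, and the
    exact question so the human sees what the agent saw."""
    agent_output: str
    evidence: list
    question: str
    deadline: datetime        # Connect enforces timing
    delegable: bool = False   # unauthorized delegation is prevented

def package(output: str, evidence, question: str,
            ttl_hours: int = 4) -> ConsultationTask:
    """Build a consultation request with an enforced response window."""
    return ConsultationTask(
        agent_output=output,
        evidence=list(evidence),
        question=question,
        deadline=datetime.now(timezone.utc) + timedelta(hours=ttl_hours),
    )
```

Because the question is an explicit field rather than implied context, the human knows exactly what guidance is being sought; because `delegable` defaults to false, the consultation step cannot be quietly handed off.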
Trust: Consultation That Withstands Scrutiny
Consultation only works if it can be proven.
Organizations must be able to demonstrate who provided guidance on a consulted decision, when it occurred, and what information was reviewed. This proof matters for regulatory audits, litigation, and reputation management. Trust in consultation is evidentiary, requiring durable records of human participation that can survive scrutiny years after the fact.
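One common way to make such records durable is a hash chain: each consultation record commits to the one before it, so any later edit is detectable. The sketch below is a generic illustration of that technique, not any particular product's audit format:

```python
import hashlib
import json

def record(log: list, entry: dict) -> list:
    """Append a consultation record whose hash chains to the previous one:
    who provided guidance, when, and what was reviewed."""
    prev = log[-1]["hash"] if log else "genesis"
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({**entry, "prev": prev, "hash": digest})
    return log

def verify(log: list) -> bool:
    """Re-derive every hash; a record altered after the fact breaks the chain."""
    prev = "genesis"
    for e in log:
        body = {k: v for k, v in e.items() if k not in ("prev", "hash")}
        expected = hashlib.sha256(
            (prev + json.dumps(body, sort_keys=True)).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True
```

This is the evidentiary property in miniature: proving the log is intact requires no trust in whoever stored it, only recomputation, which is what lets the record survive scrutiny years later.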
Next in Series
In Part 4: Simulation, we explore how AI systems run thousands of scenarios before acting, and how humans participate in a subset to inject adversarial thinking and human-centered reactions.
Read Part 4: Simulation

Previous in Series
In Part 2: Escalation, we explored how AI systems hand off to humans when uncertainty, risk, or ambiguity crosses a threshold.
Read Part 2: Escalation
In Part 1: Verification, we explored how humans serve as accountable checkpoints, approving AI-generated decisions.
Read Part 1: Verification

About the Author
Daniel Kaelin
COO at SanctifAI, the company building the human layer of the AI economy. SanctifAI provides the infrastructure to Source, Connect, and Prove human participation within agentic systems, ensuring that human intelligence remains an inseparable component of the post-AI workforce.