Agentic AI does not suggest. It acts. This Smashing Magazine piece, the second in a series, moves past theory and into the six design patterns that determine whether users trust an autonomous system or abandon it. The framework divides the agentic interaction lifecycle into three phases: Pre-Action, In-Action, and Post-Action. Each phase gets specific patterns with named success metrics and numeric targets.

The Intent Preview is the first and most critical pattern. Before any significant action, the agent presents a plain-language plan, not a log of API calls. The travel assistant example shows a four-step recovery plan for a canceled flight, with three explicit user choices: Proceed, Edit Plan, or Handle it Myself. The acceptance ratio target is above 85 percent. An override rate above 10 percent triggers a model review. The DevOps extension of this pattern shows the same logic applied to cloud infrastructure, where approving a drain-traffic command carries consequences far beyond a missed flight.
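The two thresholds above are the article's; everything else in this sketch is hypothetical. A minimal way to track them, assuming you log one of the three user choices per previewed plan:

```python
from dataclasses import dataclass

# Targets from the article; variable names are assumptions.
ACCEPTANCE_TARGET = 0.85          # acceptance ratio should stay above this
OVERRIDE_REVIEW_THRESHOLD = 0.10  # override rate above this triggers a model review

@dataclass
class IntentPreviewStats:
    proceeded: int = 0  # user chose "Proceed"
    edited: int = 0     # user chose "Edit Plan"
    manual: int = 0     # user chose "Handle it Myself"

    @property
    def total(self) -> int:
        return self.proceeded + self.edited + self.manual

    @property
    def acceptance_ratio(self) -> float:
        # Share of plans accepted exactly as proposed
        return self.proceeded / self.total if self.total else 0.0

    @property
    def override_rate(self) -> float:
        # Share of plans the user edited or took over entirely
        return (self.edited + self.manual) / self.total if self.total else 0.0

    def needs_model_review(self) -> bool:
        return self.override_rate > OVERRIDE_REVIEW_THRESHOLD

stats = IntentPreviewStats(proceeded=88, edited=7, manual=5)
print(f"{stats.acceptance_ratio:.0%}")  # 88%
print(stats.needs_model_review())       # True: 12% override exceeds 10%
```

Note that the two metrics are not complements of each other in general; the article treats them as separate signals, which is why both are tracked here.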

The full article details the remaining five patterns: the Autonomy Dial, Explainable Rationale, Confidence Signal, Action Audit and Undo, and the Escalation Pathway, each with an equivalent metric framework. The value is not in the conclusions but in the specifics: how to measure recall accuracy on a plan summary, when confidence signals become noise, and how escalation pathways fail when designed as afterthoughts. Read it for the operational detail, not the taxonomy.
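The article names the Autonomy Dial but this summary does not reproduce its levels, so the tiers below are purely illustrative. One common way to model such a dial is an ordered enum that gates whether an Intent Preview is required:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    # Hypothetical tiers; the article defines its own dial positions.
    SUGGEST = 1         # agent proposes, user executes
    CONFIRM = 2         # agent executes only after explicit approval
    ACT_AND_REPORT = 3  # agent executes, then surfaces an audit trail
    FULL_AUTO = 4       # agent executes silently within preset bounds

def requires_intent_preview(level: AutonomyLevel) -> bool:
    # In this sketch, every tier below full autonomy shows a plan first
    return level < AutonomyLevel.FULL_AUTO

print(requires_intent_preview(AutonomyLevel.CONFIRM))    # True
print(requires_intent_preview(AutonomyLevel.FULL_AUTO))  # False
```

Using an ordered `IntEnum` keeps the "dial" metaphor honest: levels compare numerically, so policy checks read as threshold comparisons rather than long membership lists.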

[READ ORIGINAL →]