Nearly 9 in 10 US executives are giving agentic AI the green light to make decisions and complete tasks on behalf of their customers. That’s not a future-facing prediction—it’s happening right now. According to February 2025 data from NLX and QuestionPro, 47% of executives are fully comfortable with agentic AI doing the job, and another 40% say they’re on board depending on the type of decisions being made.
But here’s where it gets interesting.
Despite the executive enthusiasm, consumer trust hasn’t caught up. While 57% of executives plan to roll out customer-facing agentic AI this year, only 24% of consumers are currently comfortable sharing data with AI shopping tools, per EMARKETER and CivicScience.
This mismatch is more than a perception gap—it’s a strategy risk.
What This Means for Businesses
Executives see agentic AI as a way to scale personalized service, drive faster resolutions, and lower support costs. That’s fair. The right implementation can reduce friction in the customer journey and even create competitive advantages by automating routine tasks or offering 24/7 assistance.
But treating agentic AI as a plug-and-play fix ignores one key factor: customer consent isn’t just a checkbox—it’s an emotional transaction. If users don’t trust the system, they won’t engage with it. And without engagement, the AI can’t learn, improve, or deliver value.
What This Means for Customers
From the consumer side, AI still feels impersonal, risky, and opaque. Many customers don’t know what decisions are being made on their behalf or what data is being used to make them. When 76% of consumers say they’re not ready to share data with an AI shopping assistant, that’s not hesitation—that’s rejection.
Brands deploying agentic AI must over-communicate the “why” behind the technology. What tasks are being automated? What controls do users have? How is their data protected? If those answers aren’t obvious, trust erodes before the first interaction.
The Path Forward
The opportunity here is massive—but it’s not automatic. Businesses eager to adopt agentic AI must:
- Start with low-risk, high-value tasks (think: password resets, appointment bookings, product recommendations)
- Build transparency into every interaction
- Use opt-ins, not assumptions, to collect data (see the sketch after this list)
- Continuously test for conversational fluency, tone, and customer satisfaction
- Treat the AI not as a tool but as an extension of the brand
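On the opt-in point above, here is a minimal sketch of what consent-gated data capture could look like in practice. The ConsentStore class and its fields are illustrative names, not any particular vendor’s API.

```python
from dataclasses import dataclass


@dataclass
class ConsentRecord:
    user_id: str
    data_sharing: bool = False  # default deny: opt-in is explicit, never assumed


class ConsentStore:
    """Tracks explicit opt-ins; nothing is collected without one."""

    def __init__(self):
        self._records: dict[str, ConsentRecord] = {}

    def opt_in(self, user_id: str) -> None:
        self._records[user_id] = ConsentRecord(user_id, data_sharing=True)

    def allows(self, user_id: str) -> bool:
        record = self._records.get(user_id)
        return record is not None and record.data_sharing


def capture_event(store: ConsentStore, user_id: str, event: dict) -> bool:
    """Persist the event only if the user has explicitly opted in."""
    if not store.allows(user_id):
        return False  # drop the event; no data leaves the session
    # ...write to the analytics pipeline here...
    return True


store = ConsentStore()
print(capture_event(store, "u1", {"page": "checkout"}))  # False: no opt-in yet
store.opt_in("u1")
print(capture_event(store, "u1", {"page": "checkout"}))  # True: consented
```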
The technology is ready. The business case is clear. But without earning consumer trust at every step, agentic AI won’t be a breakthrough. It’ll just be another missed opportunity dressed up in hype.
What Is Agentic AI?
Agentic AI refers to artificial intelligence systems that can autonomously perform tasks by understanding goals, planning actions, and executing them with minimal human supervision. Unlike traditional AI, which simply responds to specific commands, agentic AI works through a sequential workflow, as illustrated in the Everest Group diagram.
This workflow begins with prompt input, progresses through intent recognition, knowledge retrieval, goal-setting, reasoning, workflow creation, action execution, and monitoring, before delivering a final response or output.
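To make those stages concrete, here is a minimal Python sketch of the same pipeline. Every function and field name is hypothetical, and each stand-in comment marks where a production system would call an LLM, a knowledge store, or an external tool.

```python
from dataclasses import dataclass, field


@dataclass
class AgentState:
    """Carries intermediate results through each workflow stage."""
    prompt: str
    intent: str = ""
    knowledge: list[str] = field(default_factory=list)
    goal: str = ""
    plan: list[str] = field(default_factory=list)
    results: list[str] = field(default_factory=list)


def recognize_intent(state: AgentState) -> AgentState:
    # Stand-in for an LLM call that classifies the user's request.
    state.intent = "book_appointment" if "appointment" in state.prompt else "unknown"
    return state


def retrieve_knowledge(state: AgentState) -> AgentState:
    # Stand-in for a knowledge-graph or vector-store lookup.
    state.knowledge = [f"docs relevant to '{state.intent}'"]
    return state


def set_goal_and_plan(state: AgentState) -> AgentState:
    # Reasoning step: turn intent plus knowledge into a goal and ordered steps.
    state.goal = f"resolve '{state.intent}'"
    state.plan = ["check availability", "reserve slot", "confirm with user"]
    return state


def execute_and_monitor(state: AgentState) -> AgentState:
    # Action execution, with each step's outcome recorded for monitoring.
    for step in state.plan:
        state.results.append(f"done: {step}")
    return state


def run_agent(prompt: str) -> str:
    """Run the stages in sequence, mirroring the diagram's left-to-right flow."""
    state = AgentState(prompt=prompt)
    for stage in (recognize_intent, retrieve_knowledge,
                  set_goal_and_plan, execute_and_monitor):
        state = stage(state)
    return f"Completed '{state.goal}' in {len(state.results)} steps."


print(run_agent("I need an appointment next Tuesday"))
```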

The diagram shows how Agent 1 coordinates the entire process while potentially collaborating with multiple other agents (Agent 2, Agent 3, Agent n). What makes this system truly “agentic” is the feedback loop – the output feeds back into the system through reinforcement learning, enabling continuous improvement.
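A compressed sketch of that coordination-and-feedback pattern follows. The class names are hypothetical, and the toy preference update only gestures at the reinforcement-learning feedback the diagram describes; it is not the diagram’s actual mechanism.

```python
import random


class SubAgent:
    """A worker agent (Agent 2, Agent 3, ... Agent n) with one specialty."""

    def __init__(self, name: str, skill: str):
        self.name, self.skill = name, skill

    def handle(self, task: str) -> float:
        # Stand-in for real tool use; returns a task-quality reward signal.
        return random.uniform(0.0, 1.0)


class Coordinator:
    """Agent 1: routes each task, then learns from the outcome."""

    def __init__(self, workers: list):
        self.workers = workers
        self.pref = {w.name: 0.5 for w in workers}  # learned routing preference

    def dispatch(self, task: str) -> str:
        best = max(self.workers, key=lambda w: self.pref[w.name])
        reward = best.handle(task)
        # Feedback loop: the output's quality feeds back into future routing.
        self.pref[best.name] += 0.1 * (reward - self.pref[best.name])
        return f"{best.name} handled '{task}' (reward={reward:.2f})"


crew = Coordinator([SubAgent("Agent 2", "billing"),
                    SubAgent("Agent 3", "scheduling")])
for task in ["refund a duplicate charge", "rebook a missed appointment"]:
    print(crew.dispatch(task))
```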
The lower section of the Everest Group’s workflow highlights the technical foundation supporting each stage, from APIs and user interfaces at input, to LLMs and knowledge graphs during processing, to custom AI solutions and system metrics at output – all supported by comprehensive infrastructure and agent operations.