3 Things We Need To Fix Before AI Agents Go Mainstream

AI agents are poised to be the next big thing in tech. First, though, we need to make sure the ecosystem is ready for them.

Written by Yukai Tu
Published on Feb. 21, 2025
Image: A person uses an AI agent on their smartphone. (Shutterstock / Built In)

In 2025, it’s tough to escape AI agents. These platforms are growing smarter as artificial intelligence rapidly improves at collecting information, understanding commands and performing tasks on our behalf. We, the humans, set the goals, but the agent, informed by contextual and behavioral understanding, chooses the actions to achieve them.

The technology is equal parts impressive and exciting. But before agents take on even more personal and professional tasks, some concerns remain. AI agents are only as good as their data, and right now that data is fragmented, unsecured and often unreliable. Further, there’s little standardization in the back-end infrastructure, which erodes trust.

Therefore, before the technology goes mainstream, AI agents must solve critical challenges in data sovereignty, security and integration so they can deliver, safely, on the promise of superhuman helpers.

3 Steps Necessary to Make AI Agents Take Off

  • Bring users into the AI revolution.
  • Secure data on the back end.
  • Put trust front and center.

More on Artificial Intelligence: Will 2025 Be the Year Agentic AI Takes Off?

 

Bring Users Into the AI Revolution

AI agents need good data, and a lot of it, to understand the required task and desired outcome. With applications ranging from trading bots that respond instantly to market shifts to gaming assistants capable of personalized interactions, each agent needs a constant supply of clean information.

Unfortunately, data sourcing is a persistent issue in AI. Large language models like ChatGPT set a poor precedent by training on data without user consent or compensation, leading to copyright problems. This only entrenches the data rights issues already endemic to Web2. Social media and search engine giants have profiteered for years by taking user information and selling it to the highest bidder. Meanwhile, the data creator (you) has no say in how the data is used and no share in the resulting wealth.

AI agents can build in data rights from day one by tracking information provenance and sovereignty on the blockchain. Users, for example, can bind their online identities to an NFT based on a standard like ERC-7231, which acts as a digital passport. This lets them control what data they share while still giving agents deep insights. Further, dedicated blockchains can manage such data end-to-end, matching security with scalability.
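To make the passport idea concrete, here is a minimal TypeScript sketch. It is an illustration only: the IdentityPassport class, its method names and the hashing scheme are assumptions made for this example, not the ERC-7231 interface, which lives on-chain in a smart contract. The pattern is what matters: store only hashed account identifiers against a single token, and let the owner decide which bindings an agent may read.

```typescript
import { createHash } from "node:crypto";

type Binding = { platform: string; idHash: string; shared: boolean };

// Hypothetical, simplified model of an identity "passport" token.
class IdentityPassport {
  private bindings = new Map<string, Binding>();

  constructor(readonly tokenId: string, readonly owner: string) {}

  // Store only a hash of the external account identifier, never the raw value.
  bind(platform: string, accountId: string): void {
    const idHash = createHash("sha256")
      .update(`${platform}:${accountId}`)
      .digest("hex");
    this.bindings.set(platform, { platform, idHash, shared: false });
  }

  // The owner decides which bindings agents are allowed to see.
  setShared(platform: string, shared: boolean): void {
    const binding = this.bindings.get(platform);
    if (binding) binding.shared = shared;
  }

  // An agent only ever receives the bindings the owner has opted to share.
  sharedBindings(): Binding[] {
    return [...this.bindings.values()].filter((b) => b.shared);
  }
}

// Usage: a user binds two accounts but exposes only one to an AI agent.
const passport = new IdentityPassport("token-1", "0xUserAddress");
passport.bind("social", "alice@example.com");
passport.bind("gaming", "alice#1234");
passport.setShared("gaming", true);
console.log(passport.sharedBindings()); // only the hashed gaming binding
```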

This creates an ecosystem where users finally get to call the shots. They own and monetize their data while AI agents learn and evolve. It’s a win-win that puts user empowerment at the center of AI’s future.

 

Secure Data on the Back End

Another issue in these early days of AI is that it’s often a black box. Data goes in, actions come out, but there’s little transparency about back-end operations. How, or even whether, the data is secured remains a mystery, hurting private sector adoption and general trust in these systems.

Therefore, we need strong data infrastructure to get to the next level. Chain ecosystems — networks of interconnected blockchains that share information and access authorization — can provide the foundation, enabling agents to operate securely and collaboratively. But they also need sophisticated frameworks that can transform static information into actionable insights, essentially giving agents the ability to “see” and “understand” what they’re processing. 

Again, blockchain grants much-needed transparency alongside real technical capability. Processing data in trusted execution environments (TEEs), meaning secure, isolated computing spaces, strengthens security. Meanwhile, zero-knowledge proofs, which are methods for verifying data without revealing it, allow operations to be private yet verifiable. Cross-chain integration also ensures agents can access and analyze data from multiple sources without compromising security. By standardizing these processes and recording them on public ledgers, we can put data to work without exposing it.
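A full zero-knowledge setup requires a dedicated proof system, but the underlying verify-without-revealing pattern can be sketched with a plain hash commitment. The TypeScript below is a toy stand-in under that assumption: the publicLedger array and the commit/verify helpers are hypothetical names for this example. The ledger stores only a fingerprint of the data, and anyone holding the payload can later check that an agent processed exactly that data.

```typescript
import { createHash } from "node:crypto";

type LedgerEntry = { source: string; commitment: string; timestamp: number };

// Stand-in for an on-chain record: the ledger stores only fingerprints.
const publicLedger: LedgerEntry[] = [];

// Commit a dataset's SHA-256 fingerprint; the data itself stays private.
function commit(source: string, payload: string): LedgerEntry {
  const commitment = createHash("sha256").update(payload).digest("hex");
  const entry: LedgerEntry = { source, commitment, timestamp: Date.now() };
  publicLedger.push(entry);
  return entry;
}

// Anyone holding the payload can check it against the public commitment.
function verify(entry: LedgerEntry, payload: string): boolean {
  return createHash("sha256").update(payload).digest("hex") === entry.commitment;
}

// Usage: a data provider commits a market feed, and an auditor later confirms
// the agent processed exactly that data, without the ledger ever exposing it.
const dataset = JSON.stringify({ prices: [101.2, 99.8, 103.4] });
const entry = commit("market-feed", dataset);
console.log(verify(entry, dataset));              // true
console.log(verify(entry, dataset + "tampered")); // false
```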

In my view, this combination of secure infrastructure and intelligent data processing goes a long way to unlocking data reserves while addressing one of the technology’s core remaining issues: trust.

More on AI Agents: What Are AI Agents?

 

Put Trust Front and Center

Trust doesn’t come easily with emerging tech. AI agents, in particular, face deep skepticism around their reliability, security and impact on human work. Although early adopters are quick to point out efficiency gains, many potential users remain on the sidelines, waiting for clearer standards and safeguards.

This is where we, as an industry, need to prove ourselves worthy of trust. The good news? We can safely bring this technology mainstream by tracking data usage on public ledgers, implementing security frameworks and empowering users with true data sovereignty. This becomes possible when we implement blockchain-backed digital identities that let users control their data through smart contracts — determining what information they share and automatically receiving compensation when agents use it.
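As a rough sketch of that consent-and-compensation flow, the TypeScript below models it in memory. A production version would be an on-chain smart contract; the DataConsentRegistry class and its per-use pricing are assumptions made for illustration. What it shows is the flow described above: no access without an explicit grant, and every read credits the data owner.

```typescript
type Grant = { owner: string; field: string; pricePerUse: number };

class DataConsentRegistry {
  private grants = new Map<string, Grant>();    // key: "owner:field"
  private balances = new Map<string, number>(); // owner -> compensation earned

  grantAccess(owner: string, field: string, pricePerUse: number): void {
    this.grants.set(`${owner}:${field}`, { owner, field, pricePerUse });
  }

  revokeAccess(owner: string, field: string): void {
    this.grants.delete(`${owner}:${field}`);
  }

  // Called by an agent before reading a user's data: refuse if there is no
  // grant, otherwise record compensation for the data owner.
  useData(agent: string, owner: string, field: string): void {
    const grant = this.grants.get(`${owner}:${field}`);
    if (!grant) throw new Error(`${agent} has no grant for ${owner}:${field}`);
    this.balances.set(owner, (this.balances.get(owner) ?? 0) + grant.pricePerUse);
  }

  earnings(owner: string): number {
    return this.balances.get(owner) ?? 0;
  }
}

// Usage: the user shares purchase history, and the agent pays per read.
const registry = new DataConsentRegistry();
registry.grantAccess("alice", "purchase-history", 0.05);
registry.useData("shopping-agent", "alice", "purchase-history");
console.log(registry.earnings("alice")); // 0.05
```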

The potential applications are too valuable to ignore. Each use case, from decentralized science researchers collaborating on medical breakthroughs to personalized companions supercharging productivity, demonstrates the transformative potential when we get the foundations right.

Let’s learn from past mistakes and make sure, before these tools go wide, we have a more inclusive, transparent ecosystem that puts users first and paves the way for safe AI adoption.
