AI has a trust problem — Decentralized privacy-preserving tech can fix it
Opinion by: Felix Xu, co-founder of ARPA Network and Bella Protocol

AI has been a dominant narrative since 2024, but users and companies still cannot completely trust it. Whether it’s finances, personal data or healthcare decisions, hesitation around AI’s reliability and integrity remains high.

This growing AI trust deficit is now one of the most significant barriers to widespread adoption. Decentralized, privacy-preserving technologies are quickly being recognized as viable solutions that offer verifiability, transparency and stronger data protection without compromising AI’s growth.

The pervasive AI trust deficit 

AI was the second most popular category in crypto mindshare in 2024, commanding over 16% of investor interest. Startups and multinational companies alike have poured resources into AI, extending the technology into people's finances, health and nearly every other corner of daily life.

For example, the emerging DeFi x AI (DeFAI) sector shipped more than 7,000 projects with a peak market cap of $7 billion in early 2025 before the markets crashed. DeFAI demonstrated AI's transformative potential to make decentralized finance (DeFi) more user-friendly: interpreting natural language commands, executing complex multistep operations and conducting sophisticated market research.

Innovation alone, however, hasn't solved AI's core vulnerabilities: hallucinations, manipulation and privacy concerns.

In November 2024, a user convinced an AI agent on Base, which was programmed never to transfer funds, to send $47,000. While the scenario was part of a game, it raised a real concern: Can AI agents be trusted with autonomy over financial operations?

Audits, bug bounties and red teams help but don’t eliminate the risk of prompt injection, logic flaws or unauthorized data use. According to KPMG (2023), 61% of people still hesitate to trust AI, and even industry professionals share that concern. A Forrester survey cited in Harvard Business Review found that 25% of analysts named trust as AI’s biggest obstacle.

That skepticism remains strong. A poll conducted at The Wall Street Journal's CIO Network Summit found that 61% of America's top IT leaders are still only experimenting with AI agents. The rest were avoiding them altogether, citing lack of reliability, cybersecurity risks and data privacy as top concerns.

Industries like healthcare feel these risks most acutely. Sharing electronic health records (EHRs) with large language models (LLMs) to improve outcomes is promising, but without airtight privacy protections it is also legally and ethically risky.