The TON ecosystem is expanding again, and this time, it’s entering the race to build private, censorship-resistant AI infrastructure.
Telegram founder Pavel Durov has announced the launch of Cocoon, a decentralized confidential compute network that runs AI workloads directly on The Open Network (TON). The system went live this week, and it is already processing real user requests with full privacy while GPU providers earn TON in real time.
Durov confirmed the launch in a public announcement on X, writing that “the first AI requests from users are now being processed by Cocoon with 100% confidentiality. GPU owners are already earning TON.”
His statement marks a major milestone for TON’s ambitions to build decentralized digital infrastructure that rivals traditional cloud services, but with privacy embedded at the protocol level.
A Privacy-First AI Compute Network Built on TON
According to the project’s website, Cocoon is designed as a decentralized platform for running AI on TON. It works by “securely connecting GPU owners” who provide compute power with applications that need to run private AI models.
Cocoon focuses on a core idea: AI should be able to run without exposing data to node operators, cloud providers, or intermediaries. And users should be free to deploy models without relying on Big Tech platforms or centralized gatekeepers.
The network is already live. It handles encrypted AI requests on-chain and distributes TON rewards instantly to GPU contributors.
The Three-Layer Architecture Behind Cocoon
Cocoon is structured around three components, each playing a role in its decentralized workflow:
- Client
The Client is the user. They pay for AI requests and send them to the Cocoon proxy. Once a request enters the system, the network handles all routing and security.
- Proxy
The Proxy acts as the routing engine. It selects a suitable Worker node based on the required GPU type, availability, and performance, then forwards the request securely.
- Worker
The Worker is where the AI task is executed. It runs the GPU hardware, processes encrypted jobs inside a secure environment, and returns the results without ever seeing the input data.
This structure enables a simple user experience on the surface, with a complex cryptographic and hardware-secured network underneath.
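To make that flow concrete, here is a minimal Python sketch of the Client-to-Proxy-to-Worker hand-off described above. The class names, routing logic, and payloads are illustrative assumptions; Cocoon's actual protocol and APIs are not documented in this article.

```python
# Illustrative sketch only: the names below (CocoonProxy, Worker, etc.) are
# hypothetical stand-ins for the Client -> Proxy -> Worker flow described above.
from dataclasses import dataclass, field
import random

@dataclass
class Worker:
    worker_id: str
    gpu_model: str
    available: bool = True

    def run_encrypted_job(self, encrypted_payload: bytes) -> bytes:
        # In the real network the payload would be decrypted and executed only
        # inside a TEE; here we just echo a placeholder result.
        return b"encrypted-result-for:" + encrypted_payload[:16]

@dataclass
class CocoonProxy:
    workers: list[Worker] = field(default_factory=list)

    def route(self, encrypted_payload: bytes, required_gpu: str) -> bytes:
        # Pick a suitable worker by GPU type and availability, as the article
        # says the Proxy does, then forward the request.
        candidates = [w for w in self.workers if w.available and w.gpu_model == required_gpu]
        worker = random.choice(candidates)
        return worker.run_encrypted_job(encrypted_payload)

# Client side: pay for the request (payment omitted) and send it to the proxy.
proxy = CocoonProxy(workers=[Worker("w1", "A100"), Worker("w2", "H100")])
result = proxy.route(b"ciphertext-of-user-prompt", required_gpu="H100")
print(result)
```

The key property the sketch tries to capture is that the Client only ever talks to the Proxy, and the Worker only ever sees ciphertext.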
Confidential AI Powered by Trusted Execution Environments
The centerpiece of Cocoon’s technology is its use of hardware Trusted Execution Environments (TEEs). These environments isolate workloads so that:
- Data stays encrypted during execution
- Workers cannot view inputs, outputs, or model parameters
- Developers can verify the integrity of computations
- Node operators cannot tamper with the process
Cocoon uses remote attestation to prove that each workload runs inside a secure enclave. This approach mirrors the security model used by major confidential computing providers, but without the centralized infrastructure.
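The remote-attestation idea can be sketched in a few lines. The example below is a generic illustration, not Cocoon's actual protocol: a simple HMAC stands in for the hardware-signed quote that real TEEs (such as Intel SGX/TDX or AMD SEV-SNP) produce, and the client only proceeds when the reported measurement matches the enclave image it expects.

```python
# Generic remote-attestation sketch; the HMAC below is a stand-in for a
# hardware-rooted signature and is NOT how any real TEE signs its quotes.
import hmac, hashlib

TRUSTED_MEASUREMENT = hashlib.sha256(b"expected-enclave-image").hexdigest()
HARDWARE_KEY = b"stand-in-for-hardware-root-of-trust"  # hypothetical

def make_quote(enclave_image: bytes) -> dict:
    """What a worker's enclave would return: a measurement plus a signature."""
    measurement = hashlib.sha256(enclave_image).hexdigest()
    signature = hmac.new(HARDWARE_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "signature": signature}

def verify_quote(quote: dict) -> bool:
    """Client-side check: is the signature valid and is the code what we expect?"""
    expected_sig = hmac.new(HARDWARE_KEY, quote["measurement"].encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected_sig, quote["signature"])
            and quote["measurement"] == TRUSTED_MEASUREMENT)

assert verify_quote(make_quote(b"expected-enclave-image"))      # attested: send the job
assert not verify_quote(make_quote(b"tampered-enclave-image"))  # rejected: do not send
```

Only after a quote like this verifies would a client hand its encrypted request to a Worker.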
Cocoon positions itself as a privacy-first AI platform suitable for startups, developers, enterprise workloads, and end users who need confidentiality as a default feature, not an add-on.
A Fully Decentralized GPU Marketplace
Beyond confidentiality, Cocoon introduces a decentralized GPU marketplace. It allows anyone with capable hardware to connect, contribute compute cycles, and earn $TON automatically.
For GPU owners, the pitch is direct:
- Connect hardware
- Start receiving tasks
- Earn TON instantly
For developers, Cocoon provides a cheaper alternative to cloud giants:
- Amazon Web Services
- Microsoft Azure
- Google Cloud Platform
Cocoon’s decentralized model avoids the overhead and pricing structures of traditional cloud compute, and early testing indicates that the cost for AI inference could be significantly lower.
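For GPU owners, the loop looks roughly like the hypothetical sketch below: connect, pull encrypted tasks, return results, collect TON. Every class, method, and payout figure here is invented for illustration; Cocoon's real node software and payment mechanism are not documented in this article.

```python
# Hypothetical worker-side loop illustrating "connect hardware, receive tasks,
# earn TON". All endpoints, fields, and amounts are invented for illustration.
import time

class MockCocoonNode:
    """Stand-in for a real node client; returns canned tasks and payouts."""
    def fetch_task(self):
        return {"task_id": "t-001", "encrypted_payload": b"..."}
    def submit_result(self, task_id, result) -> float:
        return 0.05  # pretend payout in TON

def run_worker(node, iterations=3):
    earned = 0.0
    for _ in range(iterations):
        task = node.fetch_task()                 # 1. receive an encrypted task
        result = b"encrypted-output"             # 2. execute inside the TEE (mocked here)
        earned += node.submit_result(task["task_id"], result)  # 3. get paid in TON
        time.sleep(0)                            # a real loop would poll or wait here
    print(f"earned {earned:.2f} TON (mock)")

run_worker(MockCocoonNode())
```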
Why Cocoon Marks a Major Shift for TON
Cocoon’s launch is more than a technical deployment; it signals TON’s push into decentralized AI infrastructure. The impact spans several categories:
Reduced Reliance on Centralized Cloud Providers
Cocoon allows developers to run AI workloads without AWS, Azure, or GCP. It challenges cloud monopolies with open, peer-to-peer compute supply.
Lower-Cost GPU Access
The decentralized marketplace structure brings competitive pricing. GPU owners worldwide can supply compute power, removing traditional barriers.
Privacy-Preserving and Censorship-Resistant AI
Because all jobs run inside encrypted enclaves, no provider can intercept or censor user workloads.
Real Infrastructure for TON’s Growing Ecosystem
Cocoon adds a new layer to TON’s architecture, one that supports both AI applications and broader decentralized compute services.
The arrival of confidential compute on a major blockchain positions TON as a serious contender at the global intersection of AI and crypto.
Already Live: Real Workloads, Real Earnings, Real Clients
Cocoon is not a testnet or pre-release. It is fully live.
According to project updates:
- On-chain AI requests are already being processed
- GPU providers are actively receiving TON payouts
- Data remains confidential end-to-end
- The system is open and permissionless
Most notably, Telegram has been confirmed as Cocoon’s first major client, immediately putting the network into production for millions of active users.
This gives Cocoon a powerful early advantage: real demand from day one.
Developers are also beginning to integrate the network into their own pipelines, testing encrypted inference for analytics, bots, private assistants, and data-sensitive AI tools.
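On the developer side, end-to-end encrypted inference can be pictured with the sketch below. It assumes a symmetric session key has already been established with an attested enclave; the function names and the local enclave mock are hypothetical, and the only real dependency is the `cryptography` package's Fernet API.

```python
# Illustrative developer-side sketch of end-to-end encrypted inference. The
# session-key setup, transport, and enclave are all mocked; only Fernet is real.
from cryptography.fernet import Fernet  # pip install cryptography

session_key = Fernet.generate_key()     # in practice: derived during attestation
cipher = Fernet(session_key)

def simulate_enclave(encrypted_prompt: bytes) -> bytes:
    # Stands in for transport plus enclave execution; only the enclave (and the
    # client) hold the session key, so intermediaries see ciphertext only.
    prompt = cipher.decrypt(encrypted_prompt).decode()
    return cipher.encrypt(f"answer to: {prompt}".encode())

def private_inference(prompt: str) -> str:
    encrypted_prompt = cipher.encrypt(prompt.encode())   # encrypt before it leaves the client
    encrypted_reply = simulate_enclave(encrypted_prompt)
    return cipher.decrypt(encrypted_reply).decode()      # decrypt locally

print(private_inference("summarize this confidential document"))
```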
Open to Everyone, Built for Scaling
One of Cocoon’s core principles is openness:
- Anyone can supply compute
- Anyone can build applications
- Anyone can use the network
There are no allowlists. No approval processes. No KYC requirements. The system is structured to scale globally as long as GPU owners continue connecting hardware.
The project’s team highlights that Cocoon is built as a permissionless infrastructure layer, one that stays aligned with TON’s broader vision of decentralized digital ownership and freedom.
A New Phase for Decentralized AI
Cocoon’s arrival signals a shift in how AI can be deployed and managed. Instead of relying on centralized cloud giants or exposing sensitive data to third-party servers, users can now run workloads in a verifiable, private, decentralized environment.
The combination of:
- Confidential inference
- TEEs
- Real-time TON rewards
- A global GPU marketplace
- A live on-chain ecosystem
- Telegram as an anchor client
makes Cocoon one of the most ambitious decentralized AI launches this year.
And for TON, it pushes the blockchain into a new category, one where AI, privacy, and crypto infrastructure converge.
Disclosure: This is not trading or investment advice. Always do your research before buying any cryptocurrency or investing in any services.