The era of Artificial Intelligence (AI) puts everything users want at their fingertips, but at the cost of their privacy.
The intelligence that moulds our lives is stored inside a handful of data centers. Every prompt, every dataset, and every dependency asks users to place blind trust in infrastructure they don’t own and can’t audit.
Moving forward, AI upgrades with Gradient
What the world may need next is democratized AI: infrastructure that lets individuals own their data, and machine learning that is not confined to a single data center. Gradient Network, an open-source project, is working to build the world’s first fully decentralized and sovereign AI.
In an exclusive interview with AltCoinDesk, CEO Eric Yang discussed the company’s approach: building open-source infrastructure where intelligence is a public resource, not something controlled by a few.

“The centralization actually has a lot of cost issues as well as control. There’s a single entity controlling the machine, the models’ creation, and the serving part of it. We want to make that a community-owned process and make machine learning possible in distributed machine networks.”
“It brings the cost down and also the sovereignty to building and serving AI models. And secondly, I think it is also democratising AI for the world,” he said.
Eric’s point concerns users’ reliance on third-party services that hold their valuable data and prompts, and in doing so gain a deep view into their daily lives. AI sovereignty, as he defines it, means owning the AI models that individuals or enterprises rely on. For individuals, this means controlling their own “personal agents”, like their “Jarvis” or “E-girlfriend”, preventing third-party control and the risk of being denied access to the models.
Because AI models are built on what users and enterprises feed them about their own lives, those same third parties are also trusted to serve our inference needs.
The ‘abstract’ technology by Gradient
What enables this abstraction is a cheaper way for people to build foundation models together:
- Individuals with a few Mac minis and Nvidia GPUs can host a large language model on their own hardware without exposing their data or compromising their privacy.
- Enterprises with GPUs rented across different clouds can unify them into a single cloud to train foundation models over distributed machine networks or even on hardware they own in their offices.
This makes model building and hosting cheaper while maintaining sovereignty and privacy on one’s own hardware and compute resources.
Eric said: “We can host those models in a cheap way. We don’t have to pay for other people. It maintains our own sovereignty and privacy on our own hardware and compute resources. So, that’s how we see AI sovereignty actually; it will definitely be a trend that eventually we’ll realize.”
How it all began for Gradient
The initial idea behind the initiative was inspired by blockchain’s peer-to-peer design: a public ledger built to eliminate the failures and trust concerns of the traditional financial system.
Eric noted that generative AI models are as centralized as traditional financial ledgers. That is where Gradient stepped in, aiming to give every enterprise and individual the power to own, train, and serve models on their own infrastructure, sometimes with the help of others they trust. He stated:
So that’s where I think the idea for Gradient actually comes out: into building something that can break AI apart and can lower the cost for operating AI. Actually, one counter-intuitive thing is that it actually achieves a similar performance compared to the centralized one.
Commercialization opportunities of decentralised AI
The company’s research showed that machine learning performs no differently when decentralized; decentralization simply removes the trust issues. Eric said:
Our stack, or the current machine learning stack, the RL stack built by us – we don’t sacrifice in terms of efficiency. So, you can train a model just at the same time frame compared to when you adopt a centralized one. So, that’s where I think it actually will gain commercial trust and commercialization opportunities going into the future because ultimately a lot of people care about the efficiency of running it.
One of the concerns with decentralized AI is ensuring trust, security, and high-quality data as individuals contribute to this new AI network. Gradient claims to have implemented a proprietary verification mechanism in which anyone can be both a contributor and a verifier of machine learning tasks.
For example, if a node claims to have provided inference for a model layer, another randomly selected node replicates the inference and compares the results. Over time, this builds a network of users who can both provide and verify work.
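The spot-check idea described above can be sketched as follows. This is a minimal illustration, not Gradient’s actual protocol: the node IDs, the toy layer function, and the comparison tolerance are all assumptions made for the example.

```python
import random

def run_layer(node_id: int, x: float) -> float:
    """Stand-in for a node computing one model layer's output.
    Deterministic toy computation so any honest node gets the same result."""
    return x * 2.0 + 1.0

def verify_claim(claimant: int, x: float, claimed_output: float,
                 nodes: list, tol: float = 1e-6) -> bool:
    """Pick a random node other than the claimant, replicate the
    inference, and accept the claim only if the results match."""
    verifier = random.choice([n for n in nodes if n != claimant])
    replicated = run_layer(verifier, x)
    return abs(replicated - claimed_output) <= tol

nodes = [0, 1, 2, 3]
honest = verify_claim(0, 3.0, run_layer(0, 3.0), nodes)   # matches -> accepted
dishonest = verify_claim(0, 3.0, 999.0, nodes)            # mismatch -> rejected
print(honest, dishonest)
```

Because the toy layer is deterministic, any verifier reproduces the same output; real networks would need to handle nondeterminism in floating-point inference, which is one reason a tolerance parameter appears here.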
Gradient’s user inference algorithm
Eric explained that when it comes to running AI model inference on a network, there are two main approaches. The first is permissionless but still centered around data centers. In this model, anyone already using traditional cloud services like AWS can adopt it easily, with little to no friction. Fees are recorded and redistributed to the compute providers who power the inference.
The second is far more radical: a single model is broken apart and hosted across different locations on smaller compute resources, which constrains performance.
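The second approach can be illustrated with a toy sketch, splitting one model layer-by-layer across small machines, with each hop passing activations to the next node. The node names and layer functions are hypothetical, chosen only to show the shape of the idea; they are not Gradient’s implementation.

```python
def make_layer(w: float, b: float):
    """Build a trivial linear 'layer' standing in for one model shard."""
    return lambda x: w * x + b

# Each hypothetical "node" hosts one shard of the model.
node_shards = {
    "mac-mini-1": make_layer(2.0, 0.0),
    "mac-mini-2": make_layer(1.0, 3.0),
    "gpu-box-1":  make_layer(0.5, 1.0),
}

def distributed_forward(x: float) -> float:
    """Run inference by forwarding activations shard to shard.
    In a real network, each hop adds latency, which is why this
    approach constrains performance."""
    for shard in node_shards.values():
        x = shard(x)
    return x

print(distributed_forward(1.0))  # 2*1=2, then 2+3=5, then 0.5*5+1=3.5
```

The per-hop network latency this pipeline implies is exactly the constraint the article mentions, and it is why latency-insensitive AI agents, rather than humans, are pitched as the first adopters.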
AI agents would be the earliest real-world adopters of this model, not humans, said Eric.
“It will be running 24/7, always running at the back end, doing research or doing some task for you, but it’s not as picky in terms of time and performance as human beings are. So, I think eventually it will become a cheaper inference network, if we use all the distributed machine networks,” he said.
Eric says ZK proofs are not economically viable
Zero-Knowledge (ZK) proofs are not yet considered economically viable for real enterprise use cases, because the overhead of verifying an inference is still 100 to 1,000 times the cost of running the inference itself.
Gradient Network currently uses a probabilistic verification method: not every inference is verified, but any claim may be randomly audited, so participants are expected to do honest work.
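The economics behind probabilistic verification can be shown with a hedged back-of-envelope calculation. The sampling rate, penalty, and savings figures below are illustrative assumptions, not Gradient’s actual parameters: the point is only that cheating stops paying once the expected penalty exceeds the savings.

```python
def cheating_unprofitable(p_check: float, penalty: float,
                          saved_cost: float) -> bool:
    """Cheating saves `saved_cost` per inference but risks losing
    `penalty` with probability `p_check`; honesty is the rational
    choice when the expected penalty exceeds the savings."""
    return p_check * penalty > saved_cost

# Checking only 1% of inferences can still deter cheating if the
# penalty (e.g. a slashed stake) is large relative to the savings.
print(cheating_unprofitable(p_check=0.01, penalty=500.0, saved_cost=1.0))  # True
print(cheating_unprofitable(p_check=0.01, penalty=50.0, saved_cost=1.0))   # False
```

This is why a network can avoid verifying every inference, which in turn sidesteps the 100–1,000x overhead cited for ZK-based verification of each request.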
However, Eric believes the efficiency of ZK proofs is improving rapidly, by roughly ten times every year.
Plans for 2026
“Right now, there’s a lot of trading activity going on blockchain, but we actually want to bring a lot of AI use cases and new innovations to blockchain use cases.”
One of the major next steps is launching a public testnet where AI use cases run decentralized training live on-chain and contributors earn rewards for their compute. While anticipating a “killer” result, Eric asks his audience to stay tuned for the official testnet rollout.