The Integration of AI and Web3: Open Markets and Value Co-Creation
AI+Web3: Towers and Squares
TL;DR
Web3 projects with AI concepts have become targets for capital attraction in both primary and secondary markets.
Web3's opportunities in the AI industry lie in using distributed incentives to coordinate long-tail potential supply across data, storage, and computing, and in building open-source models and a decentralized marketplace for AI agents.
AI's main applications in the Web3 industry are on-chain finance (crypto payments, trading, and data analysis) and development assistance.
The value of AI+Web3 lies in their complementarity: Web3 is expected to counter AI's centralization, while AI is expected to help Web3 break out of its niche.
Introduction
Over the past two years, AI development has felt like someone pressed the accelerator. The butterfly effect set off by ChatGPT has not only opened up a new world of generative artificial intelligence, it has also stirred up a tidal wave in Web3.
Backed by AI concepts, financing in an otherwise slowing crypto market has picked up noticeably. Media statistics show that 64 Web3+AI projects completed financing in the first half of 2024 alone, with the AI-based operating system Zyber365 raising the largest round, a $100 million Series A.
The secondary market has been even more buoyant. Data from a crypto aggregation site shows that in just over a year, the AI sector reached a total market value of $48.5 billion, with 24-hour trading volume approaching $8.6 billion. The boost from mainstream AI breakthroughs is evident: after OpenAI released its text-to-video model Sora, the average price of the AI sector jumped 151%. The AI effect has also spilled over into Meme coins, one of crypto's biggest magnets for attention: GOAT, the first MemeCoin built around the AI Agent concept, quickly took off and reached a $1.4 billion valuation, successfully sparking the AI Meme trend.
Research and discussion around AI+Web3 are just as heated: from AI+DePIN to AI Memecoins, and now AI Agents and AI DAOs, FOMO can barely keep pace with the speed at which new narratives rotate.
AI+Web3, a pairing loaded with hot money, hype, and visions of the future, is inevitably dismissed by some as a marriage arranged by capital. It is hard to tell, beneath the splendid robe, whether this is a playground for speculators or the eve of an explosive dawn.
To answer that question, the key consideration for both sides is whether each is actually better off with the other: can each benefit from the other's model? In this article, standing on the shoulders of earlier work, we try to examine this pattern: how can Web3 play a role at each layer of the AI technology stack, and what new vitality can AI bring to Web3?
Part.1 What Opportunities Does Web3 Have Under the AI Stack?
Before we delve into this topic, we need to understand the technology stack of AI large models:
In simpler terms, the entire process can be expressed as follows: the "large model" is like the human brain. In the early stages, this brain belongs to a newborn baby who has just come into the world and needs to observe and absorb a massive amount of external information to understand this world. This is the "data collection" phase. Since computers do not possess multiple senses like human vision and hearing, before training, the large-scale unlabelled information from the outside world needs to be transformed into a format that computers can understand and use through "preprocessing."
After inputting data, the AI constructs a model with understanding and predictive capabilities through "training", which can be seen as the process of an infant gradually understanding and learning about the outside world. The model's parameters are like the language abilities that an infant continuously adjusts during the learning process. When the content of learning begins to be categorized, or when feedback is received from interactions with people and corrections are made, it enters the "fine-tuning" stage of the large model.
As children grow up and learn to speak, they can understand meanings and express their feelings and thoughts in new conversations. This stage is similar to the "reasoning" of AI large models, where the model can predict and analyze new language and text inputs. Babies express feelings, describe objects, and solve various problems through language ability, which is also similar to how AI large models apply reasoning to various specific tasks after completing training and being put into use, such as image classification and speech recognition.
The AI Agent points to the next form of large models: capable of independently executing tasks and pursuing complex goals, it not only thinks but can also remember, plan, and use tools to interact with the world.
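To make these stages concrete, the following is a minimal, runnable PyTorch sketch of the workflow just described: a toy model is "pre-trained" on synthetic data, "fine-tuned" at a lower learning rate, and then used for "inference". The data, model size, and hyperparameters are all invented for illustration and do not correspond to any real large model.

# Minimal sketch of the large-model workflow described above (hypothetical toy example).
import torch
import torch.nn as nn

# "Data collection + preprocessing": raw inputs are turned into tensors the model can use.
x_raw = torch.randn(256, 16)                  # stand-in for collected data
y_raw = x_raw.sum(dim=1, keepdim=True)        # stand-in for the signal to learn
x = (x_raw - x_raw.mean()) / x_raw.std()      # simple normalization as "preprocessing"

# "Training": the model adjusts its parameters to fit the data.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y_raw)
    loss.backward()
    opt.step()

# "Fine-tuning": further adjustment on a smaller, task-specific dataset with a lower learning rate.
x_task, y_task = x[:32], y_raw[:32] * 2.0     # pretend this is a specialized downstream task
opt_ft = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    opt_ft.zero_grad()
    loss = loss_fn(model(x_task), y_task)
    loss.backward()
    opt_ft.step()

# "Inference": the trained model is applied to new, unseen inputs.
with torch.no_grad():
    prediction = model(torch.randn(1, 16))
print(prediction)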
Currently, in response to AI's pain points across the stack, Web3 has begun to form a multi-layered, interconnected ecosystem covering every stage of the AI model workflow.
1. Basic Layer: Airbnb of Computing Power and Data
Computing Power
Currently, one of the largest costs in AI is the computing power and energy required for model training and inference.
For example, Meta's Llama 3 requires 16,000 NVIDIA H100 GPUs (a top-tier graphics processing unit designed for AI and high-performance computing workloads) and about 30 days to complete training. The 80GB version of the H100 is priced at $30,000 to $40,000 per unit, implying a hardware investment of $400 million to $700 million (GPUs plus networking chips), while monthly training consumes 1.6 billion kilowatt-hours, putting energy expenditure at nearly $20 million per month.
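As a quick back-of-envelope check, here is the arithmetic behind those figures in a few lines of Python, using only the numbers quoted above; the implied electricity price is derived from them rather than sourced.

# Back-of-envelope check of the figures quoted above (illustrative only).
gpus = 16_000
unit_price_low, unit_price_high = 30_000, 40_000            # USD per 80GB H100, as quoted
gpu_capex_low = gpus * unit_price_low                        # $480M
gpu_capex_high = gpus * unit_price_high                      # $640M
print(f"GPU-only capex: ${gpu_capex_low/1e6:.0f}M to ${gpu_capex_high/1e6:.0f}M")
# Compare with the quoted $400M to $700M total for GPUs plus networking chips.

monthly_kwh = 1.6e9                                          # kWh per month, as quoted
monthly_energy_cost = 20e6                                   # ~$20M per month, as quoted
print(f"Implied electricity price: ${monthly_energy_cost/monthly_kwh:.4f}/kWh")  # ~$0.0125/kWh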
Relieving this computing-power burden is precisely where Web3 first intersected with AI: DePIN (Decentralized Physical Infrastructure Networks). A data website currently lists over 1,400 such projects; representative GPU computing-power-sharing projects include io.net, Aethir, Akash, Render Network, and others.
The core logic is this: the platform allows individuals or entities with idle GPU resources to contribute computing power permissionlessly and in a decentralized way, much like an Uber- or Airbnb-style online marketplace matching buyers and sellers. This raises the utilization of under-used GPUs, and end users obtain more cost-effective computing resources. Meanwhile, a staking mechanism ensures that resource providers face penalties if they violate quality-control rules or disrupt the network.
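To illustrate that staking-and-slashing logic, here is a hypothetical Python sketch of a compute-provider registry. It is a generic toy, not the actual mechanism of io.net, Aethir, Akash, or Render Network, and the minimum stake and slash rate are invented parameters.

# Hypothetical sketch of a staked compute-provider registry with slashing (illustrative only).
from dataclasses import dataclass

@dataclass
class Provider:
    address: str
    stake: float          # tokens locked as collateral
    jobs_completed: int = 0

class ComputeMarket:
    MIN_STAKE = 100.0
    SLASH_RATE = 0.2      # fraction of stake lost per verified failure

    def __init__(self):
        self.providers = {}

    def register(self, address: str, stake: float):
        # Permissionless entry, but a minimum stake is required as collateral.
        if stake < self.MIN_STAKE:
            raise ValueError("stake below minimum")
        self.providers[address] = Provider(address, stake)

    def report_success(self, address: str, payment: float) -> float:
        # Provider delivered the job as agreed and earns the payment.
        self.providers[address].jobs_completed += 1
        return payment

    def report_failure(self, address: str):
        # Verified SLA violation (bad output, downtime): part of the stake is slashed.
        p = self.providers[address]
        p.stake *= (1 - self.SLASH_RATE)
        if p.stake < self.MIN_STAKE:
            del self.providers[address]   # drop providers whose bond falls too low

market = ComputeMarket()
market.register("0xabc", stake=150.0)
market.report_failure("0xabc")            # stake drops to 120.0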
Its key characteristics:
Aggregating idle GPU resources: the suppliers are mainly independent small and medium-sized data centers, along with operators of surplus computing power from crypto mining, including mining hardware for chains whose consensus mechanism is PoS, such as Filecoin and Ethereum. There are also projects working to lower the hardware entry barrier, such as exolab, which uses local devices like MacBooks, iPhones, and iPads to build a computing network for running large-model inference.
Serving the long-tail market for AI computing power:
a. On the technical side, a decentralized computing-power market is better suited to inference. Training depends heavily on the data-processing capacity of very large GPU clusters, whereas inference places comparatively low demands on GPU performance; Aethir, for example, focuses on low-latency rendering tasks and AI inference applications.
b. On the demand side, small and medium buyers of computing power will not train their own large models from scratch; they mostly optimize and fine-tune around a handful of leading large models, and these scenarios are naturally suited to distributed idle computing resources.
Data
Data is the foundation of AI. Without data, computation is like rootless duckweed, and the relationship between data and models can be summed up as "garbage in, garbage out": the quantity and quality of the input data determine the quality of the final model's output. For training today's AI models, data determines a model's language ability, comprehension, and even its values and human-like behavior. Currently, AI's data-demand dilemma centers on the following four aspects:
Data hunger: AI model training relies on a large amount of data input. According to public information, OpenAI trained GPT-4 with a parameter count reaching trillions.
Data quality: as AI is integrated across industries, new requirements have emerged for data timeliness, data diversity, the specialization of vertical-domain data, and the incorporation of emerging data sources such as social media sentiment, all of which raise the bar on data quality.
Privacy and compliance issues: Currently, countries and companies are gradually recognizing the importance of high-quality datasets and are imposing restrictions on dataset crawling.
High data processing costs: large data volume and complex processing. Public information shows that more than 30% of AI companies' R&D costs are spent on basic data collection and processing.
Currently, Web3's solutions are reflected in the following four areas:
The vision of Web3 is to let users who genuinely contribute data share in the value that data creates, and to obtain more private and more valuable data from users at low cost through distributed networks and incentive mechanisms.
Grass is a decentralized data layer and network, allowing users to run Grass nodes to contribute idle bandwidth and relay traffic to capture real-time data from the entire internet, and earn token rewards;
Vana has introduced a unique Data Liquidity Pool (DLP) concept, allowing users to upload their private data (such as shopping records, browsing habits, social media activities, etc.) to a specific DLP and flexibly choose whether to authorize specific third parties to use this data;
On PublicAI, users can post on X with #AI or #Web3 as a classification tag and @PublicAI to contribute to data collection.
Currently, Grass and OpenLayer are both considering incorporating data annotation as a key component.
Synesis introduced the concept of "Train2earn", emphasizing data quality, where users can earn rewards by providing labeled data, annotations, or other forms of input.
The data labeling project Sapien gamifies the labeling tasks and allows users to stake points to earn more points.
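As a toy illustration of this quality-weighted, stake-boosted labeling reward pattern, here is a hypothetical formula in Python. It is a generic sketch, not the actual reward mechanism of Synesis or Sapien, and every parameter is invented.

# Hypothetical "label to earn" reward sketch (illustrative; not any specific project's formula).
BASE_REWARD = 1.0          # points per accepted label
STAKE_BOOST = 0.05         # extra reward share per staked point, capped below

def label_reward(labels_accepted: int, accuracy: float, staked_points: float) -> float:
    """Reward grows with volume and accuracy; staking points boosts the payout (capped at 2x)."""
    quality_weight = max(0.0, min(accuracy, 1.0))          # 0..1 agreement with validators
    boost = min(1.0 + staked_points * STAKE_BOOST, 2.0)    # staking at most doubles the reward
    return labels_accepted * BASE_REWARD * quality_weight * boost

print(label_reward(labels_accepted=40, accuracy=0.95, staked_points=10))  # 40 * 0.95 * 1.5 = 57.0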
Currently, common privacy technologies in Web3 include:
Trusted Execution Environment (TEE), such as Super Protocol;
Fully Homomorphic Encryption (FHE), for example BasedAI, Fhenix.io or Inco Network;
Zero-knowledge technology (ZK): Reclaim Protocol, for example, uses zkTLS to generate zero-knowledge proofs of HTTPS traffic, allowing users to securely import activity, reputation, and identity data from external websites without exposing sensitive information.
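To make the idea of computing on encrypted data concrete, here is a minimal example using the Paillier cryptosystem via the python-paillier (phe) library. Paillier is only additively homomorphic rather than fully homomorphic, so treat it as a simplified stand-in for what FHE projects like those above aim to generalize; the aggregation scenario is invented for illustration.

# Minimal additively homomorphic example with Paillier (a simplified stand-in for FHE).
# Requires: pip install phe
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Users encrypt their private values (e.g., a sensitive metric) before sharing them.
encrypted_values = [public_key.encrypt(v) for v in [42, 17, 8]]

# An untrusted aggregator can sum the ciphertexts without ever seeing the plaintexts.
encrypted_sum = encrypted_values[0]
for c in encrypted_values[1:]:
    encrypted_sum = encrypted_sum + c

# Only the key holder can decrypt the aggregate result.
print(private_key.decrypt(encrypted_sum))   # 67

In a true FHE setting, arbitrary computations rather than only additions could be performed over the ciphertexts.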
However, this field is still at an early stage, and most projects remain exploratory. One current dilemma is that computing costs are far too high. Some examples: