AI Layer1 New Era: Analyzing Six Major Projects Including Sentient to Build DeAI Infrastructure

AI Layer1 Research Report: Finding On-Chain DeAI's Fertile Ground

Overview

In recent years, leading technology companies such as OpenAI, Anthropic, Google, and Meta have rapidly advanced large language models (LLMs). LLMs have demonstrated unprecedented capabilities across industries, greatly expanding the realm of human imagination and, in some scenarios, even showing the potential to replace human labor. However, the core of these technologies is firmly held by a few centralized tech giants. With substantial capital and control over expensive computing resources, these companies have established barriers that are difficult to surmount, making it hard for most developers and innovation teams to compete with them.

At the same time, in the early stages of AI's rapid evolution, public opinion often focuses on the breakthroughs and conveniences brought by technology, with insufficient attention to core issues such as privacy protection, transparency, and security. In the long term, these issues will profoundly affect the healthy development of the AI industry and social acceptance. If these issues cannot be properly addressed, the debate over whether AI is "for good" or "for evil" will become increasingly prominent. Centralized giants, driven by profit motives, often lack sufficient motivation to proactively tackle these challenges.

Blockchain technology, with its decentralization, transparency, and censorship resistance, offers new possibilities for the sustainable development of the AI industry. Many "Web3 AI" applications have already emerged on mainstream blockchains such as Solana and Base. However, closer analysis reveals that these projects still face numerous issues: on one hand, their degree of decentralization is limited, key links and infrastructure still rely on centralized cloud services, and a heavy meme character makes it hard for them to support a truly open ecosystem; on the other hand, compared with AI products in the Web2 world, on-chain AI remains limited in model capability, data utilization, and application scenarios, and the depth and breadth of its innovation still need to improve.

To truly realize the vision of decentralized AI, enabling the blockchain to securely, efficiently, and democratically support large-scale AI applications, and to compete in performance with centralized solutions, we need to design a Layer 1 blockchain specifically tailored for AI. This will provide a solid foundation for open innovation in AI, democratic governance, and data security, promoting the prosperous development of a decentralized AI ecosystem.

Biteye and PANews jointly released AI Layer1 research report: Searching for on-chain DeAI fertile ground

Core Features of AI Layer 1

AI Layer 1, as a blockchain specifically tailored for AI applications, has its underlying architecture and performance design closely aligned with the requirements of AI tasks, aiming to efficiently support the sustainable development and prosperity of the on-chain AI ecosystem. Specifically, AI Layer 1 should possess the following core capabilities:

  1. Efficient incentives and a decentralized consensus mechanism: The core of AI Layer 1 lies in building an open, shared network for computing power, storage, and other resources. Unlike traditional blockchain nodes that primarily focus on ledger bookkeeping, AI Layer 1 nodes need to undertake more complex tasks, not only providing computing power and completing AI model training and inference but also contributing diverse resources such as storage, data, and bandwidth, thereby breaking the monopoly of centralized giants in AI infrastructure. This raises higher requirements for the underlying consensus and incentive mechanisms: AI Layer 1 must be able to accurately assess, incentivize, and verify the actual contributions of nodes in AI inference, training, and other tasks, ensuring the security of the network and efficient allocation of resources. Only in this way can the stability and prosperity of the network be guaranteed, while effectively reducing overall computing costs.

  2. High performance and heterogeneous task support: AI tasks, especially the training and inference of LLMs, place extremely high demands on computational performance and parallel processing. Moreover, the on-chain AI ecosystem often needs to support diverse, heterogeneous task types, including different model architectures, data processing, inference, and storage. AI Layer 1 must deeply optimize its underlying architecture for high throughput, low latency, and elastic parallelism, and provide native support for heterogeneous computing resources, ensuring that all AI tasks run efficiently and the network scales smoothly from "single-type tasks" to a "complex, diverse ecosystem."

  3. Verifiability and trustworthy output guarantees: AI Layer 1 must not only prevent security risks such as model malfeasance and data tampering, but also ensure the verifiability and alignment of AI outputs at the mechanism level. By integrating technologies such as Trusted Execution Environments (TEE), zero-knowledge proofs (ZK), and multi-party computation (MPC), the platform allows every instance of model inference, training, and data processing to be independently verified, ensuring the fairness and transparency of the AI system. This verifiability also helps users understand the logic and basis of AI outputs, achieving "what is obtained is what is desired" and strengthening user trust and satisfaction with AI products.

  4. Data privacy protection: AI applications often involve sensitive user data; in fields such as finance, healthcare, and social networking, data privacy protection is crucial. AI Layer 1 should preserve verifiability while employing encryption-based data processing, privacy-preserving computation protocols, and data permission management to guarantee data security throughout inference, training, and storage, effectively preventing leaks and misuse and easing users' concerns about data security.

  5. Powerful ecosystem support and development capabilities: As AI-native Layer 1 infrastructure, the platform needs not only technological leadership but also comprehensive development tools, integrated SDKs, operational support, and incentive mechanisms for ecosystem participants such as developers, node operators, and AI service providers. By continuously improving platform usability and the developer experience, it can foster diverse AI-native applications and sustain the prosperity of a decentralized AI ecosystem.

Based on the above background and expectations, this article will provide a detailed introduction to six representative AI Layer 1 projects, including Sentient, Sahara AI, Ritual, Gensyn, Bittensor, and 0G, systematically sorting out the latest progress in the field, analyzing the current development status of the projects, and discussing future trends.

Sentient: Building Loyal Open Source Decentralized AI Models

Project Overview

Sentient is an open-source protocol platform building an AI Layer 1 blockchain (the initial phase is a Layer 2, which will later migrate to Layer 1). By combining an AI pipeline with blockchain technology, it aims to construct a decentralized artificial intelligence economy. Its core objective is to address model ownership, invocation tracking, and value distribution in the centralized LLM market through the OML framework (Open, Monetizable, Loyal), enabling AI models to achieve an on-chain ownership structure, invocation transparency, and value sharing. Sentient's vision is to empower anyone to build, collaborate on, own, and monetize AI products, promoting a fair and open AI agent network ecosystem.

The Sentient Foundation team brings together top academic experts, blockchain entrepreneurs, and engineers from around the world, dedicated to building a community-driven, open-source, and verifiable AGI platform. Core members include Princeton University professor Pramod Viswanath and Indian Institute of Science professor Himanshu Tyagi, who are responsible for AI safety and privacy protection, while Polygon co-founder Sandeep Nailwal leads the blockchain strategy and ecosystem layout. The team's background spans renowned companies such as Meta, Coinbase, and Polygon, as well as top universities like Princeton University and the Indian Institutes of Technology, covering fields such as AI/ML, NLP, and computer vision to collaboratively promote the project's implementation.

As the second entrepreneurial project of Polygon co-founder Sandeep Nailwal, Sentient was born with a halo: abundant resources, connections, and market recognition that provide strong backing for the project's development. In mid-2024, Sentient completed an $85 million seed round led by Founders Fund, Pantera, and Framework Ventures, with dozens of other well-known VCs, including Delphi, Hashkey, and Spartan, also participating.

Design Architecture and Application Layer

Infrastructure Layer

Core Architecture

The core architecture of Sentient consists of two parts: AI Pipeline and on-chain system.

The AI pipeline is the foundation for developing and training "Loyal AI" artifacts, consisting of two core processes:

  • Data Curation: A community-driven data selection process used for model alignment.
  • Loyalty Training: a training process that keeps the model aligned with community intentions.

The blockchain system provides transparency and decentralized control for the protocol, ensuring ownership, usage tracking, revenue distribution, and fair governance of AI artifacts. The architecture is divided into four layers:

  • Storage Layer: stores model weights and fingerprint registration information;
  • Distribution Layer: authorization contracts control the model invocation entry points;
  • Access Layer: verifies via permission proofs whether a user is authorized;
  • Incentive Layer: revenue-routing contracts allocate payment for each call to trainers, deployers, and validators.
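The incentive layer's per-call revenue routing can be illustrated with a minimal sketch. The role names and split percentages below are hypothetical placeholders; Sentient's actual on-chain contracts define their own allocation logic.

```python
from decimal import Decimal

# Hypothetical split; the real yield-routing contract defines its own shares.
SPLIT = {
    "trainer": Decimal("0.50"),
    "deployer": Decimal("0.30"),
    "validator": Decimal("0.20"),
}

def route_payment(call_fee: Decimal) -> dict:
    """Allocate a single model-call fee across contributor roles."""
    payouts = {role: call_fee * share for role, share in SPLIT.items()}
    # Sanity check: the payouts must sum back to the original fee.
    assert sum(payouts.values()) == call_fee
    return payouts

print(route_payment(Decimal("10")))
```

`Decimal` is used instead of floats so the shares sum exactly, mirroring how token amounts are handled as integers or fixed-point values on-chain.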

OML Model Framework

The OML framework (Open, Monetizable, Loyal) is the core concept proposed by Sentient, aiming to provide clear ownership protection and economic incentives for open-source AI models. By combining on-chain technology and AI-native cryptography, it has the following characteristics:

  • Openness: The model must be open-source, with transparent code and data structure, facilitating community reproduction, auditing, and improvement.
  • Monetization: each model invocation triggers a revenue stream, which on-chain contracts distribute to trainers, deployers, and validators.
  • Loyalty: the model belongs to its contributor community; upgrade direction and governance are decided by the DAO, and its use and modification are controlled by cryptographic mechanisms.

AI-native Cryptography

AI-native cryptography leverages the continuity, low-dimensional manifold structure, and differentiability of AI models to build a "verifiable but non-removable" lightweight security mechanism. Its core techniques are:

  • Fingerprint embedding: a set of concealed query-response key-value pairs is inserted during training to form a unique signature for the model;
  • Ownership verification protocol: a third-party prover checks, via queries, whether the fingerprint is retained;
  • Permission calling mechanism: before invocation, the caller must obtain a "permission certificate" issued by the model owner; the system then authorizes the model to decode the input and return an accurate answer.

This approach enables "behavior-based authorized calls + ownership verification" without the cost of re-encrypting the model.
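In spirit, the fingerprint check amounts to embedding secret query-response pairs and later probing for them. The sketch below is a toy illustration, not Sentient's implementation: a plain lookup table stands in for behavior that is actually trained into the model weights, and the query strings are invented.

```python
import random

def make_fingerprinted_model(base_responses, fingerprints):
    """Toy stand-in for a model: real fingerprints live in the weights,
    not in a table, but the observable behavior is the same."""
    combined = {**base_responses, **fingerprints}
    return lambda prompt: combined.get(prompt, "unknown")

def verify_ownership(model, fingerprints, sample_size=3):
    """Prover queries a random subset of the secret pairs and checks
    that the model returns the expected signature responses."""
    keys = random.sample(list(fingerprints), min(sample_size, len(fingerprints)))
    return all(model(k) == fingerprints[k] for k in keys)

# Secret query-response pairs known only to the owner (hypothetical values).
fingerprints = {"zx-4821": "sentinel-alpha", "qm-9034": "sentinel-beta"}
model = make_fingerprinted_model({"hello": "hi"}, fingerprints)
print(verify_ownership(model, fingerprints))
```

A copied model that strips or retrains away the fingerprints would fail the same check, which is what makes unauthorized redistribution detectable.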

Model Rights Confirmation and Security Execution Framework

Sentient currently adopts a "Melange" mixed-security approach, combining fingerprint-based rights confirmation, TEE execution, and on-chain contract revenue sharing. The fingerprint method, implemented as OML 1.0, is the main line and embodies the "Optimistic Security" concept: compliance is assumed by default, and violations can be detected and punished.

The fingerprint mechanism is a key implementation of OML, which generates a unique signature for the model during the training phase by embedding specific "question-answer" pairs. With these signatures, the model owner can verify ownership and prevent unauthorized copying and commercialization. This mechanism not only protects the rights of model developers but also provides a traceable on-chain record of the model's usage behavior.

In addition, Sentient has launched the Enclave TEE computing framework, which uses trusted execution environments (such as AWS Nitro Enclaves) to ensure that models respond only to authorized requests, preventing unauthorized access and use. Although TEEs rely on hardware and carry certain security risks, their high performance and real-time advantages make them a core technology for current model deployment.
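The "respond only to authorized requests" rule reduces to verifying a credential before running inference. Below is a minimal sketch using an HMAC-signed permission certificate; the certificate scheme and key handling here are assumptions for illustration, not the enclave's actual protocol.

```python
import hmac
import hashlib

# Secret held by the model owner (inside the enclave in a real deployment).
OWNER_KEY = b"model-owner-secret"

def issue_permission(user_id: str) -> str:
    """Owner issues a certificate binding a user to this model."""
    return hmac.new(OWNER_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def gated_inference(user_id: str, certificate: str, prompt: str) -> str:
    """Enclave-side check: serve the request only if the certificate verifies."""
    expected = hmac.new(OWNER_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, certificate):
        return "denied: invalid permission certificate"
    return f"answer to: {prompt}"  # stand-in for real model inference

cert = issue_permission("alice")
print(gated_inference("alice", cert, "What is OML?"))
```

`hmac.compare_digest` is used for the comparison to avoid leaking information through timing, a standard precaution when verifying credentials.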

In the future, Sentient plans to introduce zero-knowledge proofs (ZK) and fully homomorphic encryption (FHE) technologies to further enhance privacy protection and verifiability, providing a more mature solution for the decentralized deployment of AI models.

Application Layer

Currently, Sentient's products mainly include the decentralized chat platform Sentient Chat, the open-source model Dobby series, and the AI Agent framework.

Dobby Series Model

SentientAGI has released multiple models in the "Dobby" series, mainly based on the Llama model and focused on values of freedom, decentralization, and support for cryptocurrency. Among them, the "leashed" version has a more constrained and rational style.
