AI and the Crypto Layering Paradox: Diverging Paths of Technology-Driven vs. Token-Hijacked Development

When AI makes leaps in capability through layered technological breakthroughs, crypto's L1-L2-L3 stack falls into a vicious cycle of problem shifting. Why does the same layering logic lead to such different outcomes?
Written by: Haotian
Everyone is saying that Ethereum's Rollup-centric strategy seems to have failed, and people deeply resent this L1-L2-L3 nesting game. Interestingly, though, the AI track has gone through its own rapid L1-L2-L3 evolution over the past year. Comparing the two, where exactly does the problem lie?
The layering logic of AI is that each layer solves core problems the layer beneath it cannot.
For example, L1's LLMs handle the basics of language understanding and generation, but logical reasoning and mathematical calculation remain hard shortcomings. So at L2, reasoning models specialize in closing that gap: DeepSeek R1 can work through complex math problems and debug code, directly filling the cognitive blind spots of LLMs. With that groundwork laid, L3's AI Agents naturally integrate the capabilities of the first two layers, shifting AI from passive response to active execution: planning tasks, invoking tools, and handling complex workflows on their own.
You see, this kind of layering is "capability progression": L1 lays the foundation, L2 patches the shortcomings, and L3 integrates. Each layer is a step change over the previous one, and users can clearly feel AI getting smarter and more useful.
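(A quick aside for the technically curious: to make L3's "active execution" concrete, here is a minimal sketch of an agent loop. It is purely illustrative and not any particular framework's API; call_model stands in for whatever L1/L2 model you use, and the search/calc tools are hypothetical stubs.)

```python
from typing import Callable, Dict

def search_web(query: str) -> str:
    """Stub tool: a real agent would call a search API here."""
    return f"[stub] results for: {query}"

def calculate(expression: str) -> str:
    """Toy arithmetic tool, for illustration only."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS: Dict[str, Callable[[str], str]] = {"search": search_web, "calc": calculate}

def run_agent(task: str, call_model: Callable[[str], str], max_steps: int = 5) -> str:
    """L3-style loop: the model (the L1/L2 capability) plans the next step,
    the agent invokes the chosen tool, and the observation is fed back."""
    context = f"Task: {task}"
    for _ in range(max_steps):
        plan = call_model(
            context
            + "\nReply as '<tool>|<input>' using one of: "
            + ", ".join(TOOLS)
            + ", or 'DONE|<answer>' when finished."
        )
        action, _, payload = plan.partition("|")
        if action.strip() == "DONE":
            return payload                      # the model decides the task is complete
        tool = TOOLS.get(action.strip())
        observation = tool(payload) if tool else f"unknown tool: {action}"
        context += f"\n{action}({payload}) -> {observation}"   # feed the result back for the next plan
    return context                              # step budget exhausted; return what was gathered
```

Wired to a real model, this is roughly the shape most L3 agent loops reduce to: the intelligence comes from L1/L2, and the agent layer just orchestrates planning, tool calls, and feedback.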
The layering logic of crypto is that each layer patches the previous layer's problems, but unfortunately creates an entirely new and bigger problem in the process.
For example, L1 public chains don't have enough performance, so the natural move is a Layer 2 scaling solution. But after a wave of Layer 2 infra, gas is indeed low and TPS has indeed stacked up, yet liquidity is scattered and ecosystem applications remain scarce; the sheer number of Layer 2 infra projects has itself become a big problem. So we started building Layer 3 vertical application chains, but those app chains stand alone, unable to enjoy the ecosystem synergy of the general-purpose infra chains, and the user experience becomes even more fragmented.
Seen this way, this layering has become "problem shifting": L1 has bottlenecks, L2 patches them, and L3 fragments into chaos. Each layer merely moves the problem from one place to another, as if every solution revolves around the single goal of "issuing a token."
By this point, everyone should see the crux of the paradox: AI's layering is driven by technological competition, with OpenAI, Anthropic, and DeepSeek all racing to push model capability; crypto's layering is hijacked by tokenomics, with every L2's core KPIs being TVL and token price.
So, essentially one is solving technical problems, while the other is packaging financial products? There may not be a clear answer to which is right or wrong, as it depends on individual perspectives.
Of course, this abstract analogy is not so absolute; it's just that I find the comparison of the developmental contexts of the two very interesting, a mental massage for the weekend.