Vitalik: How staking pools and liquid staking protocols can increase decentralization

Written by Vitalik Buterin, compiled by bayemon.eth. Source: ChainCatcher

Special thanks to Mike Neuder, Justin Drake, and others for their feedback and review. See also earlier articles on similar topics by Mike Neuder, Dankrad Feist, and arixon.eth.

The current state of Ethereum staking includes a large amount of two-tiered staking, where "two-tiered" refers to a staking arrangement with two types of participants:

  1. Node operator: runs the node and puts up a certain amount of their own capital as collateral backing their reputation
  2. Delegator: provides ETH, with no minimum amount and no requirement to participate in any way beyond supplying the capital

This emerging class of two-tiered staking has arisen through the many staking pools that issue liquid staking tokens (LSTs); both Rocket Pool and Lido follow this model.

However, today's two-tiered staking has two drawbacks:

  1. Node operator centralization risk: the mechanisms by which staking pools select node operators are still far too centralized
  2. Unnecessary consensus-layer load: the Ethereum L1 verifies roughly 800,000 signatures per epoch, a heavy load that single-slot finality would concentrate into a single slot. Moreover, the capital flowing into liquid staking pools adds to this load without the network itself gaining a corresponding benefit. If Ethereum can achieve adequate decentralization and security without requiring every staker to sign in every epoch, the community could adopt such a design and sharply reduce the number of signatures per slot.

This article describes solutions to both problems. Start from the assumption that most capital is in the hands of people who are unwilling to personally run a staking node, sign something in every slot, lock up a deposit, and manage funds in the way staking currently demands. What roles can these people still play that make a meaningful contribution to the decentralization and security of the network?

How does two-tiered staking work today?

The two most popular staking pools are Lido and Rocket Pool. In the case of Lido, the two parties are:

  1. Node operator: approved by a vote of the Lido DAO, which in practice means they are chosen by LDO holders.
  2. Delegator: when someone deposits ETH into the Lido smart contract system, stETH is minted for them, and node operators stake that ETH (but because the withdrawal credentials are bound to the smart contract address, operators cannot withdraw it at will).

For Rocket Pool, they are:

  1. Node operator: anyone can become a node operator by depositing 8 ETH plus a certain amount of RPL tokens.
  2. Delegator: when someone deposits ETH into the Rocket Pool smart contract system, rETH is minted for them, and node operators stake that ETH (again, because the withdrawal credentials are bound to the smart contract address, operators cannot withdraw it at will).

The role of the delegator

In these systems (or in new systems enabled by possible future protocol changes), a key question to ask is: from the protocol's perspective, what is the point of having delegators at all?

To understand why this question matters, consider a protocol change along the lines discussed above: the slashing penalty is capped at 2 ETH, Rocket Pool accordingly reduces the node operator deposit to 2 ETH, and Rocket Pool's market share among stakers and ETH holders grows to 100% (because rETH becomes essentially risk-free, almost every ETH holder becomes either an rETH holder or a node operator).

Assume rETH holders earn a 3% return (including in-protocol rewards plus priority fees and MEV), node operators earn 4%, and the total ETH supply is 100 million.

Let us work out the numbers; to avoid compound-interest calculations, we will look at the returns from a single day of staking.

Now suppose instead that Rocket Pool does not exist, that the minimum deposit per staker is reduced to 2 ETH, that total staking is capped at 6.25 million ETH, and that the staking return falls to 1%. Let us run the numbers again.

Consider both cases from the perspective of the cost of an attack. In the first case, an attacker gains nothing by becoming a delegator, since delegators hold essentially no power in the system; instead, they would put all of their ETH into becoming node operators. To control 1/3 of the total stake, they would need to put up 2.08 million ETH (which, to be fair, is still a fairly large amount). In the second case, the attacker simply stakes directly, and to reach 1/3 of the total staking pool they again need 2.08 million ETH.
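
As a quick sanity check on those attack-cost figures (my own arithmetic, assuming the standard 32 ETH per validator and that essentially the whole 100 million ETH supply is staked in the first case):

$$
\text{Case 1: } \frac{100{,}000{,}000}{3} \times \frac{2}{32} \approx 2.08 \text{ million ETH}, \qquad
\text{Case 2: } \frac{6{,}250{,}000}{3} \approx 2.08 \text{ million ETH}
$$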

From the perspective of staking economics and attack cost, the two cases end up exactly the same: the share of the total ETH supply held by node operators grows by 0.00256% per day, the share held by non-node-operators shrinks by 0.00017% per day, and the cost of an attack is 2.08 million ETH. In this model, then, delegators look like a pointless Rube Goldberg machine, and a rational community might as well cut out the middleman, cut staking rewards sharply, and cap the total amount of ETH staked at 6.25 million.

Of course, this article is not actually advocating cutting the staking reward by a factor of four and capping total staking at 6.25 million ETH. Rather, the point is that a well-functioning staking system should have a key property: delegators should carry meaningful responsibility within the system. It is fine if delegators are motivated largely by community pressure and altruism to act well; after all, that is what motivates people today to adopt decentralized, higher-security staking setups.

The delegator's responsibilities

If delegators are to play a meaningful role in the staking system, what could that role be?

I think there are two types of answers:

  • Delegate selection: delegators can choose which node operators they entrust their stake to, with a node operator's weight in the consensus mechanism proportional to the total stake delegated to it. A limited form of this exists today, in that rETH or stETH holders can withdraw their ETH and switch to a different pool, but the practical usability of delegate selection could be greatly improved.
  • Consensus participation: delegators can take on a role in the consensus mechanism themselves, one that is "lighter" than full staking, with no long exit period and no slashing risk, but that still acts as a counterweight to node operators.

Better delegate selection

There are three ways to strengthen delegators' power of choice:

  1. Improve voting tools within pools
  2. Increase competition between pools
  3. Enshrine delegation in the protocol

Today, voting within pools is not very meaningful in practice: in Rocket Pool anyone can become a node operator, and in Lido voting is done by LDO holders rather than ETH holders. Lido has put forward an LDO + stETH dual governance proposal, under which stETH holders could activate a protection mechanism that blocks new votes to add or remove node operators, which gives stETH holders some voice. Still, this power is limited, and it could be stronger.

Competition between pools already exists today, but it is weak. The main challenge is that the staking tokens of smaller pools are less liquid, harder to trust, and less widely supported by applications.

The first two problems can be improved by capping the slashing penalty at a smaller amount, such as 2 or 4 ETH. The remaining ETH could then be deposited and withdrawn instantly and safely, which keeps two-way convertibility working even for smaller staking pools. The third problem can be improved by creating a master issuance contract for managing LSTs (analogous to what ERC-4337 and ERC-6900 do for wallets), so that any staking token issued through this contract is guaranteed to be safe.
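
To make the master issuance contract idea a little more concrete, here is a purely illustrative Python sketch (a real version would be an on-chain contract, and the names and mechanics here are my own assumptions, not an existing standard). The property it illustrates is that every participating pool mints its token through the same shared logic, so any token issued this way carries the same guarantee of full ETH backing and, under the capped-penalty assumption above, immediate withdrawability.

```python
# Illustrative sketch of a shared "master issuance contract" for LSTs.
# All names are hypothetical; a real version would live on-chain.

class Pool:
    def __init__(self, name: str):
        self.name = name
        self.backing_eth = 0.0   # ETH deposited through this pool
        self.token_supply = 0.0  # LST minted against that backing

class MasterIssuer:
    """Single issuer through which every registered pool mints its LST,
    so every such token carries the same backing guarantee."""
    def __init__(self):
        self.pools = {}

    def register_pool(self, name: str) -> None:
        self.pools[name] = Pool(name)

    def deposit(self, name: str, eth: float) -> float:
        """Mint the pool's LST 1:1 against deposited ETH (simplified)."""
        pool = self.pools[name]
        pool.backing_eth += eth
        pool.token_supply += eth
        return eth  # tokens minted

    def withdraw(self, name: str, tokens: float) -> float:
        """Burn LST and release ETH immediately; only the node operators'
        capped collateral (not modelled here) is ever at risk."""
        pool = self.pools[name]
        assert tokens <= pool.token_supply, "cannot burn more than was minted"
        pool.token_supply -= tokens
        pool.backing_eth -= tokens
        return tokens  # ETH released

issuer = MasterIssuer()
issuer.register_pool("small_pool")
issuer.deposit("small_pool", 10.0)
print(issuer.withdraw("small_pool", 10.0))  # 10.0
```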

At present there is no enshrined delegation in the protocol, but it seems plausible in the future. It would involve logic similar to the ideas above, implemented at the protocol level. See this article for the pros and cons of enshrining features in the protocol.

These ideas are improvements over the status quo, but their benefits are limited. Token-voting governance has well-known problems, and ultimately any form of delegate selection without further incentives is just another kind of token voting; this has always been my main complaint about delegated proof of stake. It is therefore also worth considering ways to achieve stronger consensus participation.

Consensus participation

Even setting aside the current problems with liquid staking, today's approach to solo staking has its limits. Under single-slot finality, each slot would ideally need to process roughly 100,000 to 1,000,000 BLS signatures. Even if we aggregate signatures with recursive SNARKs, each signature still needs a bit in a participation bitfield so that it can be attributed. If Ethereum is to be a network of global scale, fully decentralized storage of these bitfields is not enough: 16 MB per slot would only support about 64 million stakers.

From this perspective, there is value in splitting staking into a higher-complexity, slashable layer that acts in every slot but might have only about 10,000 participants, and a lower-complexity layer that is called on only occasionally. The lower-complexity layer could be exempt from slashing entirely, or its participants could be slashable only during randomly assigned windows of a few slots.

In practice, this could be done by raising the validator balance cap and then using a balance threshold (e.g. 2048 ETH) to decide whether an existing validator falls into the higher- or lower-complexity tier.
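
A trivial sketch of that partition rule, using the 2048 ETH threshold from the text (everything else here is illustrative):

```python
HEAVY_TIER_THRESHOLD_ETH = 2048  # example threshold from the text

def assign_tier(balance_eth: float) -> str:
    """Validators at or above the threshold form the higher-complexity,
    slashable tier that acts every slot; the rest form the lighter tier
    that is only occasionally called on."""
    return "heavy" if balance_eth >= HEAVY_TIER_THRESHOLD_ETH else "light"
```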

Here are some suggestions for how these smaller staking roles could work:

  1. For each slot, 10,000 small stakers are randomly selected, and each signs what they believe to be the head of the chain for that slot. The LMD GHOST fork choice rule is then run with the small stakers' votes as input. If the fork choice driven by the small stakers diverges from the fork choice driven by the node operators, users' clients refuse to accept any block as finalized and display an error, forcing the community to step in and resolve the situation. (A rough sketch of this check appears after this list.)
  2. Delegators can broadcast a transaction announcing that they are online and willing to act as small stakers for the next hour. A message (a block or an attestation) only counts if, in addition to the node's signature, it is also signed by a randomly selected delegator.
  3. Delegators can broadcast a transaction announcing that they are online and willing to act as small stakers for the next hour. In each period, 10 random delegators are selected as inclusion list providers and another 10,000 as voters. They are selected k slots in advance and given a window of k slots to publish an on-chain message confirming that they are online. Each confirmed inclusion list provider may then publish an inclusion list, and a block is considered invalid unless, for each inclusion list, the block either contains the transactions in that list or contains votes from the selected voters attesting that the list is unavailable.
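
Here is the rough sketch of suggestion 1 referenced above, purely illustrative: the sampling function, the use of the 10,000 figure via `SAMPLE_SIZE`, and the divergence threshold are assumptions for demonstration, and a real mechanism would use the protocol's own randomness and its actual fork choice rule.

```python
# Illustrative sketch of suggestion 1: sample 10,000 small stakers per slot
# and refuse to treat anything as final if their view of the head diverges
# from the node operators' fork choice. All names and thresholds are hypothetical.
import hashlib
import random

SAMPLE_SIZE = 10_000

def sample_small_stakers(small_stakers: list, slot: int, randao_seed: bytes) -> list:
    """Deterministically sample this slot's committee from a shared random seed."""
    seed = hashlib.sha256(randao_seed + slot.to_bytes(8, "big")).digest()
    rng = random.Random(seed)
    return rng.sample(small_stakers, min(SAMPLE_SIZE, len(small_stakers)))

def accept_finality(operator_head: str, small_staker_votes: list,
                    max_divergence: float = 0.2) -> bool:
    """A client treats blocks as finalizable only if the sampled small stakers
    mostly agree with the head chosen by the node operators' fork choice."""
    disagreeing = sum(1 for head in small_staker_votes if head != operator_head)
    return disagreeing / len(small_staker_votes) <= max_divergence
```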

What these small-staker roles have in common is that they do not require active participation in every slot, nor do they require running a node that does all the work. As a result, they only need a client that verifies the consensus layer, which could be shipped as an application or a browser plug-in; the roles are mostly passive and demand little computing overhead, hardware, or know-how, and no advanced technology such as a ZK-EVM.

These "small roles" also share a common goal: to prevent 51% of majority node operators from censoring transactions. The first and second also prevent the majority from participating in final reduction. The third focuses more directly on censorship, but it is more susceptible to the choice of most node operators.

These ideas are written from the perspective of building a two-tiered staking design into the protocol, but they could also be implemented as features of a staking pool. Here are some concrete implementation ideas:

  1. In-protocol: each validator sets two staking keys, a persistent staking key P and a quick staking key Q (which could be an Ethereum address that is called to produce the key). Fork choice is tracked separately for messages signed with P and messages signed with Q; if the two disagree, clients do not accept any block as finalized. Staking pools would be responsible for randomly selecting delegators to hold the Q roles.
  2. The protocol could remain largely unchanged, with the validator's public key for a given period set to P + Q (see the sketch after this list). Note that for slashing, two slashable messages may be signed under different Q keys but the same P key; the slashing design needs to handle this case.
  3. The Q key could be used in-protocol only to sign votes on the inclusion list for a block. In that case Q could be a smart contract rather than a single key, so a staking pool could use it to implement more complex voting logic, such as accepting an inclusion list from a randomly selected provider, or enough votes declaring that no inclusion list is available.
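
To illustrate option 2, here is a sketch using the py_ecc BLS library (the key values and message are arbitrary demo inputs; the point is only the underlying property that a signature verifying under the combined key P + Q requires both the persistent key and the quick key to have signed the same message):

```python
# Sketch of the "validator pubkey = P + Q" idea via BLS key/signature aggregation.
# Requires: pip install py_ecc
from py_ecc.bls import G2ProofOfPossession as bls

sk_p, sk_q = 1234567, 7654321            # arbitrary demo secret keys
pk_p, pk_q = bls.SkToPk(sk_p), bls.SkToPk(sk_q)

message = b"attestation for slot 12345"  # arbitrary demo message

sig_p = bls.Sign(sk_p, message)          # persistent (slashable) key P signs
sig_q = bls.Sign(sk_q, message)          # quick (delegator-side) key Q signs
aggregate_sig = bls.Aggregate([sig_p, sig_q])

# Verifies only when BOTH keys signed the same message: the validator's
# effective public key behaves like the aggregate of P and Q.
assert bls.FastAggregateVerify([pk_p, pk_q], message, aggregate_sig)

# A signature from P alone does not verify under the combined key.
assert not bls.FastAggregateVerify([pk_p, pk_q], message, sig_p)
```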

Conclusion

When implemented correctly, fine-tuning the proof-of-stake design can solve two problems in one fell swoop:

  1. Give those who today lack the resources or ability to stake on their own a way to participate that keeps more power in their hands, including (i) the power to choose which node operators to support and (ii) ways to take part in consensus actively that are lighter than running a full staking node but still meaningful. Not every participant will take up one or both of these options, but any who do represent a significant improvement over the status quo.
  2. Reduce the number of signatures the Ethereum consensus layer has to process per slot, even under single-slot finality, to a smaller number on the order of 10,000. This also helps decentralization by making it easier for everyone to run a validating node.

Both problems can be addressed at different levels of abstraction: through the powers granted to users within a staking protocol, through users' choices between staking protocols, and through enshrinement in the Ethereum protocol itself. This choice should be made carefully; it is generally best to pick the minimum viable enshrinement that still achieves the desired goals, so as to limit protocol complexity and changes to the protocol's economics.
