a16z talks to Solana co-founder: Why didn't Solana become an EVM public chain?
Original title: Debating Blockchain Architectures (with Solana)
Moderators: Ali Yahya, general partner at a16z crypto; Guy Wuollet, partner on the a16z crypto deal team
Guest: Anatoly Yakovenko, CEO of Solana Labs and co-founder of Solana
Compiled by: Qianwen, ChainCatcher
"But what I'm trying to say is that people should try to create bigger ideas instead of repeating what's already there. The best analogy I've ever heard is that when people discovered cement, everyone focused on using cement Laying bricks, and then one guy thought, I can build a skyscraper. They figured out a way to combine steel and concrete with construction that no one had thought of. The new tool was cement. You just have to figure out the skyscraper. What is it, and then go build the mansion."
In this episode, a16z crypto talks with Solana Labs co-founder and CEO Anatoly Yakovenko, who previously worked at Qualcomm as a senior engineer and engineering manager.
Overview
The ultimate goal of decentralized computing
a16z crypto: First of all, I want to know: what do you think the ultimate goal of decentralized computing is? And how do you think about blockchain architecture?
Anatoly Yakovenko: My position is fairly extreme. I think settlement will become less and less important, just as in traditional finance. You still need someone to provide guarantees, but those guarantees can be achieved in many different ways. What is truly valuable to the world is state that is globally distributed and globally synchronized; that is also the genuinely hard part. You can think of it as what Google Spanner is to Google, or what Nasdaq is to financial markets.
From a macro perspective, a blockchain system is permissionless, programmable, and highly open, but there is still some kind of market behind the stack. It would be extremely valuable for all of these markets to be fully globally synchronized, as close to the speed of light as possible, so that everyone can use that state as a reference. You can still operate local markets, but if global prices synchronize quickly, global finance becomes more efficient. I think that is the ultimate goal of blockchains: to synchronize as much state as possible at the speed of light.
a16z crypto: If cryptocurrencies and blockchain gain mainstream adoption, what will be the biggest driver of activity on the blockchain at that time?
Anatoly Yakovenko: I think the form will still look a lot like Web2, but more transparent, realizing the long-tail vision: a wide variety of smaller companies on the Internet that control their own data, rather than a few dominant players as we have now (even though those large companies do great things). In the long run, creators should have more control and more independent publishing rights, realizing the true promise of the Internet, with a wide range of niches and markets.
a16z crypto: Another way to think about or ask this question is how to make trade-offs. You said you think settlement will become less important in the future. I'm curious, as a place where a lot of global business, especially financial activity, takes place, how can Solana accelerate or complement the ultimate goal you just talked about?
Anatoly Yakovenko: The Solana system is not designed as a store of value. It actually has a fairly low tolerance for network failures: it uses every resource available on the Internet to go as fast as possible, and in practice it depends on much of the world retaining free cross-border communication and finance. That is different from a bunker coin you can retreat to in an emergency. Of course, I think the world also needs bunker coins that can survive geopolitical conflict.
But looking at it optimistically, the world is becoming more and more connected. I think we will see 10-gigabit connectivity between people. In that world everything is fully connected, and I think much of execution can be absorbed by this globally synchronized state machine.
Empirically, settlement can happen in many places because settlement is easy to guarantee. Again, I am taking this position for the sake of argument. Since 2017 we have seen hundreds of networks of all kinds, with many different design choices, and we have seen essentially no quorum (voting algorithm) failures, because settlement is relatively easy to implement. Once you establish a Byzantine fault tolerant mechanism among even 21 decentralized parties, you do not see settlement fail. All the remaining problems we have actually had to solve are scaling problems. Empirically, Tendermint works; although we went through the Luna crash early on, the problem there was not the quorum mechanism.
I think we spend too much on settlement, in terms of security, resources, and engineering, and not nearly enough on execution research, which is where most of the financial industry makes its money. I personally believe that if these technologies are to truly reach and impact the world, they must beat traditional finance on price, fairness, speed, and so on. That is where we need to focus our R&D and our competition.
a16z crypto: You consider settlement one aspect of blockchain design that people choose to optimize. People may over-optimize blockchains for settlement and neglect other properties, such as throughput, latency, and composability, which are often in tension with settlement security. Can you talk about the architecture of Solana?
Anatoly Yakovenko: The job of the Solana architecture is to transmit information from anywhere in the world to every participant in the network as fast as possible. So there is no sharding and no complicated consensus protocol; we actually wanted to keep things very simple. Put differently, we were lucky enough to solve a hard computer science problem, clock synchronization, by using a verifiable delay function as a time source in the network. You can think of it like two radio towers transmitting at the same time on the same frequency, creating noise. One of the first protocols people came up with when building cellular networks was to give each tower a clock and have them alternate transmitting on a schedule.
One metaphor: the FCC is like a truck full of bad guys; if your tower is not synchronized on its licensed spectrum, they will drive up to it and shut it down. That inspired Solana to use a verifiable delay function to schedule block producers so that collisions cannot occur. In a network like Bitcoin, if two block producers produce a block at the same time, a fork occurs, which is the same noise as in a cellular network. If you can force all block producers to take turns on a schedule, you get a clean time-division protocol in which each block producer transmits in its assigned slot and they never collide. Forks never occur, and the network never enters a noisy state.
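To make the time-division idea concrete, here is a minimal Python sketch of a proof-of-history-style sequential hash chain plus a fixed leader rotation. It is purely illustrative: the tick function, validator list, and round-robin schedule are invented for the example and are far simpler than Solana's actual implementation.

```python
import hashlib

def tick(state: bytes, data: bytes = b"") -> bytes:
    """Advance the chain one sequential step; mixing in data timestamps it."""
    return hashlib.sha256(state + data).digest()

# Build a chain of 1,000 ticks, recording one event at tick 500. Anyone can
# re-run the chain and check the event was inserted before later ticks existed.
state = b"genesis"
for i in range(1000):
    state = tick(state, b"event" if i == 500 else b"")

# A hypothetical leader schedule: slot n's leader is known in advance, so
# producers take turns like cell towers sharing a clock, and never collide.
validators = ["A", "B", "C", "D"]

def leader_for_slot(slot: int) -> str:
    return validators[slot % len(validators)]

print(leader_for_slot(7))  # "D": exactly one leader per slot, so no forks
```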
After that, everything we do is operational optimization of the operating system and the database. We move chunks of data around the world like BitTorrent, sending erasure-coded chunks to different machines; the result ends up looking very similar to data availability sampling and has the same effect. Nodes then forward pieces to each other, rebuild blocks, vote, and so on. The main design idea of Solana is to ensure that every process in the network and the codebase scales simply by adding cores.
If in two years we get twice as many cores per dollar, we can scale so that each block has twice as many threads, or each block is twice as computationally heavy. The network then does twice as much work. This all happens naturally, without any changes to the architecture.
That is the main thing we wanted to achieve, and it comes from my experience. I worked at Qualcomm from 2003 to 2014, and we saw mobile hardware and architecture improve every year. If you wrote software without planning for it to scale the following year without a rewrite, you were a very poor engineer, because the devices scale rapidly and you would have to rewrite your code to take advantage of them.
So if you think ahead, everything you build only compounds faster. The biggest lesson of my engineering career is that you can pick a well-designed algorithm and still be wrong, because as the hardware scales the algorithm's benefit shrinks until its implementation complexity feels like a waste of time. If instead you do something very simple that just scales with the cores, you can often get 95% of the way there.
Solana’s building philosophy
a16z crypto: Using proof of history to synchronize time across validators is a genuinely groundbreaking idea, and it is what sets Solana apart from other consensus protocols.
Anatoly Yakovenko: This is where Amdahl's law comes in, and it is why Solana is hard to replicate in terms of permissionlessness, latency, and throughput: classic consensus implementations are built on step functions. An entire network such as Tendermint must agree on the contents of the current block before it can move on to the next one.
Cell towers use a schedule; you just transmit. Because there is no step function, the network can run very fast. I think of it as a kind of synchronization, though I do not know whether that is the right word: towers transmit continuously and never stop to wait for consensus. We can do this because we have a strict notion of time. Honestly, you could instead build a clock synchronization protocol for redundancy, but it would be a very difficult process, a huge project that demands reliable clock synchronization.
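A toy latency model of the difference being described, with made-up numbers (the 100 ms and 400 ms figures are assumptions for illustration, not measurements of any real network):

```python
propagation_ms = 100   # time to spread one block worldwide (assumed)
consensus_ms = 400     # one full voting round among validators (assumed)

# Step-function protocol (Tendermint-style): every block waits for the
# consensus round to finish before the next block can start.
step_block_time = propagation_ms + consensus_ms       # 500 ms per block

# Schedule-driven protocol: leaders transmit continuously on the clock;
# block time is bounded by propagation, and voting overlaps later blocks.
pipelined_block_time = propagation_ms                 # 100 ms per block

print(step_block_time / pipelined_block_time)         # 5.0x the block rate
```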
This is Solana's philosophy. Before I started building Solana I enjoyed trading, working as a broker and so on, though I never made any money. At the time, "Flash Boys"-style behavior was rampant in traditional finance: every time I thought my algorithm was good enough, my order would arrive a little later, take longer to reach the market, and the data would come back a little slower.
I think if we want to disrupt the financial industry, the fundamental goal of these open commercial systems is to make that impossible forever. The system is open, anyone can participate, and everyone knows exactly how access and rights, such as priority or equity, are obtained.
Achieving all of this as fast as physics allows, and as fast as engineers can manage: that, I think, is the fundamental problem. If blockchains can solve it, the impact on the rest of the world will be very large, and many people around the world will benefit. It becomes a building block you can then use to disrupt ad exchanges, web monetization models, and so on.
a16z crypto: I think there is an important distinction between pure latency and malicious activity, especially within a single state machine. Maybe you could elaborate on which one you think matters more, and why.
Anatoly Yakovenko: You cannot make the entire state atomic, because that would mean a single global write lock over all state, which means a very slow ordering system. But you do need atomic access to state, and you need to guarantee it. It is hard to build software that operates on remote, non-atomic state when you do not know what side effects it will have on your computation. So the idea is that a transaction either executes fully or fails completely with no side effects. That is something these computers have to provide; otherwise I do not think you could write reliable software for them. You simply could not construct solid logic, or financially sound logic.
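A minimal sketch of that all-or-nothing guarantee, assuming a toy key-value state rather than a real runtime: the transaction runs against a copy, and only a fully successful run is committed.

```python
def apply_transaction(state: dict, tx) -> bool:
    scratch = dict(state)          # work on a copy of the state
    try:
        tx(scratch)                # run the transaction against the copy
    except Exception:
        return False               # failure: original state is untouched
    state.clear()
    state.update(scratch)          # success: commit all writes at once
    return True

balances = {"alice": 10, "bob": 0}

def transfer(s):
    if s["alice"] < 15:
        raise ValueError("insufficient funds")
    s["alice"] -= 15
    s["bob"] += 15

apply_transaction(balances, transfer)
print(balances)  # {'alice': 10, 'bob': 0}: the failed transfer left no side effects
```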
You might be able to build a system that is merely eventually consistent, but in my view that is a different kind of software. So there is always a tension between maintaining atomic state and performance, because guaranteeing atomicity ultimately means that at any moment a specific writer must be selected globally for a specific part of the state. To solve that you need a single sequencer to linearize events, and that creates points where value can be extracted and where the fairness of the system is at stake. These are genuinely hard problems, and they are not unique to Solana; Ethereum and rollup sequencers face them as well.
Solana and Ethereum
a16z crypto: One issue that is often debated, especially in the Ethereum community, is the verifiability of execution. This matters for users who do not have powerful machines with which to verify activity on the network. What are your thoughts?
Anatoly Yakovenko: I think the end goals of the two systems are very similar. If you look at the Ethereum roadmap, the idea is that total network bandwidth exceeds that of any single node, and the network already computes and processes more events than any individual node can. You have to account for the security properties of such a system. There are protocols for issuing fraud proofs, sampling schemes, and so on, and all of them actually apply to Solana as well.
So if you step back, it is not really that different. You have a system that is like a black box, producing so much bandwidth that following it is impractical for an ordinary user, so users rely on sampling techniques to check the authenticity of the data, plus a very powerful gossip network capable of spreading fraud proofs and the like to all clients. The guarantees Solana and Ethereum provide are the same. I think the main difference is that Ethereum is very much beholden to the narrative of being a global currency, especially the narrative of competing with Bitcoin as a store of value.
I think it makes sense to let users run very small nodes, even if they participate in the network only partially, rather than having the network run entirely by professionals. Honestly, I think it is a fair optimization: if you do not care about execution, only settlement, why not keep node requirements to a minimum and let people participate partially? But I do not think that gives the vast majority of the world a trust-minimized or absolutely secure system; people will still have to rely on data availability sampling and fraud proofs. To check whether the chain has done something wrong, users only need to verify the signatures of a majority of the stake on the chain.
On Solana, a single transaction specifies up front every piece of state it touches and everyone who touched it. That check runs on any device, such as a browser on a phone. Verifying a single transaction signed by the majority is easy because everything on Solana is specified ahead of time, so in that respect it is actually easier to build on Solana. In the EVM, or any such smart contract model, a contract can touch any state and jump between states at random during execution. In a way Solana is almost simpler. But at a very high level, users ultimately have to rely on DAS and fraud proofs, and at that point all the designs are the same.
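A sketch of why declaring state up front helps, assuming a toy scheduler (the greedy batching here is a simplification for illustration, not Solana's Sealevel runtime): because each transaction lists the accounts it reads and writes, non-conflicting transactions can be batched and run in parallel.

```python
def conflict(tx_a, tx_b) -> bool:
    # Two transactions conflict if either writes an account the other touches.
    return bool(tx_a["writes"] & (tx_b["writes"] | tx_b["reads"]) or
                tx_b["writes"] & tx_a["reads"])

txs = [
    {"id": 1, "reads": {"oracle"}, "writes": {"alice"}},
    {"id": 2, "reads": {"oracle"}, "writes": {"bob"}},    # parallel with 1
    {"id": 3, "reads": set(),      "writes": {"alice"}},  # conflicts with 1
]

# Greedily pack non-conflicting transactions into parallel batches.
batches = []
for tx in txs:
    for batch in batches:
        if not any(conflict(tx, other) for other in batch):
            batch.append(tx)
            break
    else:
        batches.append([tx])

print([[t["id"] for t in b] for b in batches])  # [[1, 2], [3]]
```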
a16z crypto: I think one difference between the two is zero-knowledge proofs and validity proofs versus fraud proofs. You seem to think zkEVMs are nearly impossible to audit and will not be production-ready for several years. So why has Solana not prioritized zero-knowledge proofs and validity proofs the way Ethereum has?
Anatoly Yakovenko: I think there are two parts to this. One is how we prioritize them: there is a company called Light Protocol building zero-knowledge proofs for applications, with proof times quick enough that users will not notice them while interacting with the chain.
In fact, you can compose them: a single Solana transaction can call five different zk programs. That environment can save compute or give users privacy, but it does not actually verify the whole chain. The reason I think verifying the whole chain is hard is that zero-knowledge systems handle long sequential state dependencies poorly. The classic example is the VDF (verifiable delay function): when you try to prove a sequential, recursive SHA-256, the system breaks down, because ordered state dependencies during execution massively increase the number of constraints required. Proving also takes a long time. I do not know whether this is the best result in the industry, but the latest one I saw on Twitter was that proving a single SHA-256 took about 60 milliseconds. That is a long time for a single hash.
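Back-of-the-envelope arithmetic for that point, treating the 60 ms figure from the discussion as given and assuming a chain of 10,000 sequentially dependent hashes:

```python
ms_per_hash_proof = 60        # quoted single SHA-256 proving time
sequential_hashes = 10_000    # assumed length of a VDF-style dependent chain

# Sequential dependencies cannot be parallelized away: each proof step
# needs the previous hash, so proving time grows linearly with the chain.
total_seconds = ms_per_hash_proof * sequential_hashes / 1000
print(total_seconds)          # 600.0 s to prove what executes natively in milliseconds
```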
So sequential computation, classical computation, is necessary. And in an environment designed for execution, with many markets, you actually have a lot of sequential dependencies. A hot market has everyone submitting transactions against one trading pair, and everything around that pair depends on it. So in execution this ordering dependency is quite large, which leads to a very slow proving system.
Solana does not stop anyone from running a zero-knowledge prover that verifies the entire computation with recursion, if that is feasible. But what a user needs during a transaction is for their information to be written to the chain quickly, in microseconds or milliseconds, and to quickly get back the resulting state and some guarantees about it. That is the key.
So I think we need to solve that problem first, and it requires being genuinely competitive with traditional finance. Once that is achieved, you can start looking at zero knowledge and figuring out how to provide guarantees for users who do not want to verify the chain or rely on those events, but who could verify, say, once every 24 hours. There are two different use cases: first we have to truly solve the market-mechanism problem, and then serve the long tail of other users.
a16z crypto: It sounds like you are saying that validity proofs, ZK proofs, are great for settlement but do not really help execution, because the latency is too high and their performance still needs to improve.
Anatoly Yakovenko: So far that is true. My intuition comes from a simple observation: the more active the chain, the more state hotspots it has. Those are not fully parallelizable unless the programs never talk to each other, which would just be a bunch of poor-quality code.
a16z crypto: Another counter-argument may be that zero-knowledge proofs are improving exponentially because so much is now being invested in the area. Maybe 5 or 10 years from now the cost comes down from 1,000x today to something feasible. You come from a hardware engineering background, and I would love to hear your take on how having one node do the computation and generate a proof, then distribute that proof to everyone else, might be more efficient than every node doing the computation itself. What is your view?
Anatoly Yakovenko: Hardware trends do help zero-knowledge systems and program optimization. But more and more is happening on chain, and my gut feeling is that the number of constraints will grow faster than you can add hardware, no matter how much you keep adding. As demand increases, with more and more computation on chain, it gets harder and harder for zero-knowledge systems to keep up at low latency. I am not even sure it is 100% feasible. You could quite possibly build a system that handles extremely large recursive batches, but you would still have to run classical execution, take snapshots every second, then spend an hour of compute on a large parallel farm verifying between each pair of snapshots and recomputing from there. That takes time, and I think that is the challenge.
I do not know whether ZK can catch up unless demand levels off, but I think demand eventually will. Assuming hardware keeps improving, at some point demand for cryptocurrency saturates, just as Google searches per second may be saturating now. Then you will start to see ZK catch up. I think we are still far from that point.
a16z crypto: Another big difference between the two models is Ethereum's rollup-centric worldview, which is essentially a model of sharded compute, sharded data availability, and sharded bandwidth and network activity. Conceivably it achieves greater total throughput, because you can add rollups almost without limit on top of the base layer, but that means compromising on latency. So which matters more, overall network throughput or access latency? Or do both matter?
Anatoly Yakovenko: I think the main problem is that with rollups and sequencers, people extract value from operating the sequencers and the rollups, and in such a system you more or less end up with shared sequencers. Their operations are no different from Citadel, Jump, brokers, traders, and so on: they all route order flow. Those systems already exist. This design does not actually break the monopoly. I think the better way is to build a fully permissionless commercial system in which those middlemen cannot insert themselves and start capturing the value of the globally synchronized state machine.
Most likely it will also actually cost more to use, because it amounts to creating a bunch of separate little pipes.
Generally speaking, pricing for any given channel is based on the remaining capacity of that pipe, not on overall network capacity. It is hard to build a system that fully shares network bandwidth. You can try to place blocks wherever capacity is available, as in a rollup design, but the pipes all compete and bid against one another. A single giant pipeline is simpler: the price is based on the remaining capacity of the whole pipeline, and because it aggregates bandwidth, its pricing is lower while the ultimate speed and performance are higher.
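A toy congestion-pricing sketch of the pipes argument, under an assumed price formula (price inversely proportional to remaining capacity; real fee markets are more complex):

```python
def price(used: float, capacity: float, base: float = 1.0) -> float:
    remaining = capacity - used
    return base * capacity / max(remaining, 1e-9)

# One shared 100-unit pipe carrying 60 units of total demand:
print(price(used=60, capacity=100))   # 2.5: plenty of headroom, low price

# The same demand concentrated onto one of ten 10-unit pipes:
print(price(used=9.9, capacity=10))   # 100.0: the hot little pipe is priced out
```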
Block space and the future
a16z crypto: I once heard you say you do not believe demand for block space is unlimited. Do you think demand for block space will reach an equilibrium once web3 gains mainstream adoption?
Anatoly Yakovenko: Imagine telling Qualcomm engineers that demand for cellular bandwidth is infinite and the code should be designed for infinity. That would be ridiculous.
In practice you design for a target demand: how much hardware is needed, what the simplest implementation is, what deployment costs, and so on. My intuition is that 99.999% of the most valuable transactions may need less than 100,000 TPS. That is an intuitive guess. A 100,000 TPS system is actually quite feasible; current hardware can do it, and Solana's hardware can do it. I think 100,000 TPS probably covers block space demand for the next 20 years.
a16z crypto: Could demand for block space soar precisely because block space is so affordable that people want to use it for all kinds of things?
Anatoly Yakovenko: But there is still a price floor: prices must cover the bandwidth cost of every validator, and egress cost will dominate validation cost. If you have 10,000 nodes, you probably need to price the network's per-byte usage at 10,000 times the normal egress cost, and that sounds expensive.
a16z crypto: So the question is, do you think Solana will reach its limits at some point, or is the monolithic architecture enough?
Anatoly Yakovenko: So far, the reason people have done sharding is that they built systems with far lower bandwidth than Solana, so they hit capacity limits and bidding for bandwidth begins, driving prices far above egress costs. Using 10,000 nodes as the example, the last time I looked, the egress cost per megabyte for Solana validators came to about $1. That is a floor price, and you cannot stream video at it. But it is cheap enough for search: you could basically put every search on chain and get the results back from your search engine.
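The arithmetic behind these figures, taking the 10,000-node multiplier from the discussion and assuming a commodity egress rate of roughly $0.10 per GB:

```python
nodes = 10_000
egress_per_mb = 0.0001   # ~$0.10/GB commodity cloud egress (assumption)

# Every byte a user pays for must be egressed by every validator, so the
# floor price is the per-node egress cost times the node count.
floor_per_mb = nodes * egress_per_mb
print(floor_per_mb)      # ~$1.0 per MB, matching the figure quoted above
```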
a16z crypto: I think this is actually an interesting point, because we opened the podcast by asking "what is the ultimate goal of blockchain scaling", which presumes that scalability is the most important question.
Chris has used this analogy before: much of the progress in AI over the past decade came from better hardware, and that really was the key. I think we talk about blockchain scalability for the same reason: if we can increase TPS substantially, everything runs. But an interesting objection is that Ethereum completes about 12 transactions per second, its own throughput is still greater than that of any single L2, and it charges relatively high fees, while on Solana many simple transfers carry very low fees. When we talk about this, we often conclude that the next order of magnitude of throughput will enable many new applications we cannot reason about today. Yet Solana has been a place to build applications for the past few years, and much of what is built is very similar to what is built on Ethereum.
Do you think that higher throughput or lower latency will unlock a lot of new applications? Or will most things built on blockchain in the next 10 years be very similar to the designs we've already come up with?
Anatoly Yakovenko: Actually, I think most applications will be very similar. The hardest thing to crack is the business model: how do you apply these new tools? I think we have already discovered the tools.
The reason Ethereum transactions are so expensive is that its state is so valuable. When you have that state and anyone can write to it, the economic opportunity cost of being first to write builds up, and all of it effectively inflates fees. That is what generates valuable transaction fees on Ethereum. To get there, applications need to create state valuable enough that people keep writing to it and start competing with higher fees.
a16z crypto: Let me offer a counter-argument. I think it is easy to underestimate the creativity of developers and entrepreneurs across the space. If you look back historically, in the first wave of the web and the Internet starting in the 1990s, it took a long time to find the main drivers of interesting applications. In crypto, programmable blockchains really date from Ethereum around 2014, and something like Solana has existed for only about four years. People have not been exploring the design space for long.
The number of developers in this field is also still tiny; perhaps tens of thousands know how to write smart contracts and truly understand the promise of the blockchain as a computer. So I feel it is still early for interesting ideas. The design space is so vast that I suspect we will be surprised by what people create, and it may not just be things related to trading, markets, or finance. It may take the form of shared data structures that are very valuable but whose role is not inherently financial.
A good example is a decentralized social network, where the social graph is put on-chain as a public good, which allows various other entrepreneurs and technology developers to build on it. Because the social graph is on the blockchain and is open and accessible to all developers, the social graph becomes a very valuable state for the blockchain to maintain. You can imagine people wanting to publish large numbers of transactions for various reasons, such as updating this data structure in real time. If these deals are cheap enough, I imagine developers will figure out a way to take advantage of them.
Historically, whenever computers got faster, developers found ways to use the extra compute to improve their applications. We never have enough computing power; people always want more, and I think the same will happen with blockchain computers. Maybe the ceiling is not unlimited, but I think the ceiling on demand for block space is much higher than we think.
Anatoly Yakovenko: But on the other hand, the Internet's use cases were discovered very early: search, social graphs, and e-commerce all emerged in the 1990s.
a16z crypto: Some things are hard to predict. Bike sharing was hard to predict; the form search ultimately took was hard to predict; and the heavy use of streaming video inside social networks was unimaginable at the beginning.
We can think of some applications people might build on blockchains, but given current infrastructure constraints, some of them are hard even to imagine. Once those restrictions are lifted, and once more people enter the field to build, many heavyweight applications may appear. If we let it develop, we might be surprised at how powerful it becomes.
Anatoly Yakovenko: There is an interesting card game called "Dot Bomb" in which the object is to lose money as slowly as possible; you cannot actually win or make any money. You run a bunch of startups based on 90s internet ideas. Without exception, every so-called bad idea, such as online grocery delivery and online pet stores, became at least a billion-dollar business sometime after 2010. So I think many ideas that look terrible at first, or fail in their initial implementation, end up being adopted very successfully later.
Future Adoption of Blockchain
a16z crypto: So the question is, what do you think is the key to blockchain going from its current applications to internet-scale mainstream adoption? If it is not scalability, what is the blocking factor: cultural acceptance of blockchain, privacy, user experience?
Anatoly Yakovenko: This reminds me of the history of the Internet, and how the whole experience shifted. After I went to college I had an email address, everyone at work had one, and I started receiving links to all kinds of content; then the experience of the Internet got better, Hotmail was born, Facebook developed.
Because of that, people's thinking changed; they understood what the Internet was. Initially it was hard for people even to understand what a URL was, what it meant to click on something, what it meant to reach a server. We have the same problem with self-custody: people need to really understand these concepts. What does a mnemonic phrase mean? What do wallets and transactions mean? Minds need to change, and that change is slowly happening. I think every user who buys cryptocurrency and withdraws it into their own self-custody wallet understands this once they have had the experience. But so far, not many people have.
a16z crypto: You built a cell phone. Can you tell us where the inspiration came from, and how you think the rollout is going?
Anatoly Yakovenko: My experience at Qualcomm made me realize that this is a bounded problem, that we could solve it, and that it would not pivot the whole company to phones. So for us it was a very low-marginal-cost opportunity that could change crypto or the mobile industry.
It was worth doing. We worked with a company to build a device, and when we shipped crypto-specific features with them we got really good reviews from people and developers, who saw it as an app-store alternative. But everything is unknown, such as whether, under current macro conditions, crypto applications are compelling enough that people will switch from iOS to Android. Some are willing, but not many yet. Launching a device is very hard; essentially every device launched outside Samsung and Apple has ended in failure, because Samsung's and Apple's production lines are so well optimized that no new company can compete with those giants on hardware. Everyone else lags far behind.
So you need some almost "religious" reason for people to convert, and maybe crypto is that reason. We have not proven it, but we have not disproven it either. We have not yet seen the breakthrough use case where self-custody is the critical feature people need, the one they are willing to change their behavior for.
a16z crypto: You are one of the few founders who has built both hardware and decentralized networks. Building decentralized protocols or networks is often compared to building hardware because of its complexity. Do you think the comparison holds?
Anatoly Yakovenko: At Qualcomm, a hardware problem causes enormous trouble. If a chip tape-out has a defect, the company can spend tens of millions of dollars a day dealing with it, which is catastrophic. In a software company you can still find problems quickly and ship a patch within 24 hours, which makes things easier.
Community and Development
a16z crypto: Solana has done a great job building its community and has a very strong community. I'm curious, what methods did you use to build your company and build your ecosystem?
Anatoly Yakovenko: There is a bit of luck involved. Solana Labs dates from 2018, the end of the previous cycle, and many of our competitors raised several times more capital than we did. Our team was small at the time; we did not have the funds to build and optimize an EVM, so we built a runtime that we thought could demonstrate the key property: a blockchain that scales without being constrained by node count and without severe latency effects. We really wanted breakthroughs in all three areas, node count, latency, and throughput.
At the time we focused only on building this fast network and did not worry about much else. When the network launched we had only a very rudimentary explorer and a command-line wallet, but the network was very fast. That was key to attracting developers, because there was no other fast, cheap network that could substitute for it, nor any programmable network offering that combination of speed, latency, and throughput.
That is actually why developers could develop. Since most people could not just copy and paste Solidity code, everything had to be built from scratch, and building from scratch is essentially the onboarding process for engineers. If you can rebuild the primitives you are used to from stack A in stack B, you learn stack B from start to finish. And if you can accept certain trade-offs, you might become an advocate.
If we had had more funding, we might have made the mistake of trying to build EVM compatibility. The fact that our engineering time was limited forced us to prioritize only the most important thing: the performance of the state machine.
My gut feeling was that if we lifted the constraints on developers and gave them a very large, very fast, low-cost network, they would lift the constraints on themselves. And that actually happened, surprisingly and amazingly. I am not sure we would have succeeded if the timing or the macro environment had been different. We announced on March 12th, and then on March 16th both the stock market and the crypto market crashed 70%. I think the timing of those three days may have saved us.
a16z crypto: Another important factor: how do you win over developers?
Anatoly Yakovenko: It is a little counter-intuitive: you have to build your first program by chewing glass. It requires people to really invest the time; we call it "chewing glass".
Not everyone will do it, but once enough people do, they build the libraries and tools that make development easier for the next developer. For developers this becomes a point of pride, and naturally the libraries build up and the software expands. That is something we really wanted the developer community to build and chew through, because it makes those people genuinely own it, makes them feel real ownership of the ecosystem. We try to solve the problems they cannot, like long-term protocol issues.
I think that is where the ethos comes from: you are willing to chew glass because you get something back, ownership of the ecosystem. That lets us focus on making the protocol cheaper, faster, and the network more reliable.
a16z crypto: What are your thoughts on developer experience, and what role will programming languages play as the space gains more mainstream adoption? It is quite difficult to get into this field, to learn the tools, to learn how to think.
In the new paradigm, programming languages may play an important role, since smart contract security becomes a critical task for engineers in this field; the stakes are high. In an ideal world, languages and tooling would help you far more than they do now, with formal verification, compilers, and automation that let you determine whether your code is correct.
Anatoly Yakovenko: In my opinion, formal verification is necessary for all DeFi applications. A lot of innovation happens there, like building new markets, and that is where the threat from hackers is greatest and where formal verification and similar tools are really needed.
I think many other applications are converging quickly on single canonical implementations whose effects people can trust. Once a single standard exists for a class of problem, it is much easier than being a startup building a new DeFi protocol, bearing a lot of implementation risk because no one has written that code before, and then convincing people to trust it and risk their money in the protocol. That is where you need all the tools: formal verification, compilers, the Move language, and so on.
a16z crypto: The programming world is changing in a very interesting way, because in the past most programming was traditional imperative programming, similar to Java: you write some code, it is likely to be incorrect and break, and then you fix it.
However, more and more applications are mission-critical, and for those you need a completely different way of programming, one that better ensures the code you write is correct. On the other side, another kind of programming is emerging: machine learning, which synthesizes programs from data. Both are eating away at classic imperative programming. There will be less and less ordinary Java code in the world; more code will be synthesized from data by machine learning, and more will be written with formal techniques that look more like mathematics and formal verification.
Anatoly Yakovenko: Yes, I can even imagine that at some point there is a verifier-optimized smart contract language, and you tell an LLM to translate it into Solidity or Solana's Anchor. Two years ago people might not have believed it, but GPT-4 is already a big step function.
a16z crypto: I like this idea. You can use an LLM to generate a program specification that meets the requirements of a formal verification tool, then ask the same LLM to generate the program itself, then run the formal verifier on the program to see whether it actually meets the specification. If it does not, the verifier gives you an error; you feed that error back to the LLM and let it try again, and you keep going until you end up with a verifiable, formally verified program.
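A sketch of that loop, assuming hypothetical stand-ins for the LLM call and the formal verifier (neither function is a real API; real implementations would need to be supplied):

```python
def llm_generate(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real LLM API call")

def formally_verify(program: str, spec: str) -> tuple[bool, str]:
    raise NotImplementedError("stand-in for a real formal verifier")

def synthesize(requirements: str, max_rounds: int = 5):
    """Generate a spec and a program, then iterate until verification passes."""
    spec = llm_generate(f"Write a formal specification for: {requirements}")
    program = llm_generate(f"Write a program satisfying this spec:\n{spec}")
    for _ in range(max_rounds):
        ok, error = formally_verify(program, spec)
        if ok:
            return program  # formally verified against the generated spec
        # Feed the verifier's error back to the LLM and try again.
        program = llm_generate(
            f"Fix the program to satisfy the spec.\nSpec:\n{spec}\n"
            f"Program:\n{program}\nVerifier error:\n{error}")
    return None  # could not produce a verified program within the budget
```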
Ecosystem and Talent Recruitment
a16z crypto: We have been discussing how to build a strong ecosystem. Many blockchains decentralize almost immediately after launch, to the point where the core team no longer participates in forum discussions or helps other partners get involved. You, by contrast, seem to have stayed deeply involved from network launch through go-to-market. I think that may be a big advantage in building the Solana ecosystem.
Anatoly Yakovenko: To quote a line: decentralization is not the absence of leadership, but plural leadership. I still remember how hard it was to get Linux taken seriously at a large company like Qualcomm; even the idea of running Linux on a mobile device seemed laughable. When I first joined, the whole community was trying to convince everyone that open source made sense, and I thought that was what we needed to do: the network needed to be decentralized.
But that does not mean there is no leadership. In reality, you need many experts continually explaining the benefits of this particular network and its architecture, bringing more people on board, and creating more leaders who can teach and educate people around the world. It does not all happen under one roof. If the network and the code are open, anyone can contribute and run it, and it is naturally decentralized. You will see leadership emerge from places you never expected.
Our goal is to grow everything around us, to make our voice one among many, not to silence others. We focus a lot on hackathons, fans, and so on, trying to connect them to each other and keep them engaged. It is like a flywheel: we connect people with developers around the world, spend as much one-on-one time with them as possible, then get them all into a hackathon to compete and push them to build their first or second product.
Among cryptocurrency products, only a handful reach the market, receive venture capital, and scale their user numbers. To me that means we are not creative enough. We do not have enough founders taking aim and figuring out business models that can actually scale to millions of users. So we need a lot of companies competing to see who can come up with great ideas, and that is the biggest challenge.
a16z crypto: A related question: how do you involve the community in developing parts of the core protocol itself? This is one of the trickiest balances for any blockchain ecosystem. You can keep the community actively engaged, but become less flexible, since a governance process involving more people makes coordination harder. Or you can control things in a more top-down way and grow faster as a result, at some cost to community participation. How do you strike the balance?
Anatoly Yakovenko: Generally speaking, on the foundation side we see people actively contributing to the things they want to work on. Then they go through a proposal process, with a grant or something similar attached. It is very like interviewing. When I hire someone at the Labs and it does not work out, it may be a culture mismatch or something else; it does not mean the person is not good, just that something did not click. By the same token, you will find engineers already submitting code and contributing to the codebase; they already know, culturally, how to get code merged and how to handle open-source questions of direction. When you find people who can solve problems themselves, you give grants, and those grants really matter: they ensure you find genuinely good people who can commit code and are willing to work on it for the long term.
a16z crypto: What do you think is the best way to run a decentralized governance protocol today?
Anatoly Yakovenko: For the L1, the approach we took seems to be working, much like Linux: keep moving forward and, as far as possible, avoid vetoes from any participant. It follows the path of least veto. To be honest, many participants can veto any change they think is bad or simply do not want. But we have to make the system faster, more reliable, and less memory-hungry, and no one objects to those changes.
Ideally, there is a process where you publish the design and everyone spends three months discussing it, so that before merging, everyone has had plenty of opportunity to look at the code and judge whether it is good or bad. That may sound slow, but it really is not. If you have worked at a big company like Google or Qualcomm, you know you have to talk to a lot of people, drive the change, and make sure all the key partners, the key people who touch the codebase, can accept it, then slowly land it. Drastic reforms are harder, but because many smart people are looking at the same thing, they often find the mistakes before the final decision is made.
a16z crypto: How do you consider talent recruitment?
Anatoly Yakovenko: On engineering, our bar is very high; at a minimum we hire fairly senior people. The way I hire is: I work on something early myself so I know how to do it, and then I tell the new hire that this is how I would do it. I do not expect them to finish it in 90 days, or to beat me; I can assess them during the interview and tell them this is the problem I am solving, and I need someone to take it over so I can work on the unknowns. In a startup, if you are the CEO, it is best not to hand someone an unknown problem, because you do not know whether they can solve it.
When the ecosystem reaches a certain size, you need PMs. I was spending so much time answering questions, still answering them at two in the morning, that I thought: let someone else do this; at least now I know what the job entails.
a16z crypto: How important do you think privacy will be for blockchain in the future?
Anatoly Yakovenko: I think there will be a shift across the industry. First some visionary will focus on privacy, and then all of a sudden a large payments company or the like will adopt the technology and it will become the standard. It needs to become a feature you cannot compete without. The market has not matured to that point yet, but I think we will get there: once many people use blockchains, every merchant in the world will need privacy. It is just the minimum requirement.
a16z crypto: What impact does the Solana architecture have on MEV? Does the leader have too much power to reorder transactions?
Anatoly Yakovenko: Our original idea was to schedule more than one leader per slot. If we get as close to the speed of light as possible, about 120 milliseconds around the globe, you can run discrete batch auctions every 120 milliseconds worldwide. Users can pick from all available block producers, either the nearest one or the one offering the largest rebate. In theory this is probably the most efficient way to run finance: either I optimize for latency and send to the nearest block producer, or I take the highest rebate and trade latency for dollars. It is a theory; we have not tested multiple leaders per slot yet, but we are getting close, and I think it might be feasible, maybe next year.
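A toy version of that user choice, with invented producers and numbers (multiple concurrent leaders are described above as untested, so this is purely hypothetical):

```python
producers = [
    {"name": "near", "latency_ms": 10,  "rebate": 0.00},
    {"name": "far",  "latency_ms": 110, "rebate": 0.05},
]

def choose(producers, prefer_latency: bool) -> str:
    # Latency-sensitive flow picks the closest producer; rebate-seeking
    # flow accepts extra delay in exchange for the biggest rebate.
    key = (lambda p: p["latency_ms"]) if prefer_latency else (lambda p: -p["rebate"])
    return min(producers, key=key)["name"]

print(choose(producers, prefer_latency=True))   # 'near': latency-sensitive flow
print(choose(producers, prefer_latency=False))  # 'far': rebate-seeking flow
```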
I think once we implement that, we can get a very powerful system that basically forces competition and minimizes MEV.
a16z crypto: What is your favorite system optimization in the Solana architecture?
Anatoly Yakovenko: My favorite is the way we propagate blocks. It was an early idea and one of the things we really had to get right. We can scale the network to a very large number of nodes and transmit large amounts of data, while the egress load each node must bear stays fixed and capped.
At a high level, when a leader creates a block, it cuts the block into pieces called shreds and creates erasure codes for them. It then transmits each shred to a node, which sends it on to other nodes in the network. Because all the data is mixed with erasure coding, once nodes receive the pieces the reliability of the data is very high, since the number of nodes spreading it is large; you would need something like 50% of nodes to fail, which is extremely unlikely. So it is a really cool optimization, very low overhead and very high performance.
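A minimal sketch of the idea, substituting a single XOR parity shred for the real erasure coding (Reed-Solomon in practice) and skipping the fan-out tree:

```python
import functools

def make_shreds(block: bytes, n: int):
    """Split a block into n equal data shreds plus one XOR parity shred."""
    size = -(-len(block) // n)  # ceiling division
    shreds = [block[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(n)]
    parity = functools.reduce(
        lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shreds)
    return shreds, parity

def recover(shreds, parity, missing: int) -> bytes:
    """Rebuild one lost data shred by XOR-ing parity with the survivors."""
    rest = [s for i, s in enumerate(shreds) if i != missing]
    return functools.reduce(
        lambda a, b: bytes(x ^ y for x, y in zip(a, b)), rest, parity)

shreds, parity = make_shreds(b"a block of transactions to propagate", n=4)
assert recover(shreds, parity, missing=2) == shreds[2]  # lost shred rebuilt
```

With one parity shred, any single lost data shred can be rebuilt; real erasure codes tolerate many more losses, which is what makes the gossip-style fan-out reliable.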
a16z crypto: How do you see crypto application development going in the future? How will users who do not understand blockchain come to adopt it?
Anatoly Yakovenko: I think we will have breakthrough applications in payments, because paying with cryptocurrency has clear advantages over traditional systems. Once regulations are in place and Congress passes a few bills, payments become a breakthrough use case. And once we have payments, I think the other side develops too: social applications, whether messaging apps or social-graph apps. They are growing slowly now, but I feel they are poised to take off and reach really impressive numbers.
Once a product reaches mainstream adoption it's possible to iterate, understand what exactly people want, and give them that product. People should use products for their utility, not for tokens.
a16z crypto: What advice do you have for builders in the space or outside of the space? Or any advice for those who are curious about cryptocurrencies and Web3?
Anatoly Yakovenko: I would say now is the best time. The market is relatively sluggish at the macro level and there is not much noise, so you can focus on product-market fit. When the market turns around, those discoveries will dramatically accelerate your growth. People should not be afraid to start an AI company or a crypto company or whatever right now; you should try to build those ideas.
But what I am trying to say is that people should try to create bigger ideas instead of repeating what already exists. The best analogy I have ever heard: when people discovered cement, everyone focused on using it to lay bricks, and then one person thought, maybe I can build a skyscraper. They came up with a way to combine steel and cement in construction that no one had thought of. The new tool was cement; you just have to figure out what the skyscraper is, and then go build it.