Analysis of Parachains: Implementation Process, Technical Difficulties, and Future Development Directions


  • How did parachains go from a research paper to a code implementation?
  • How will parathreads and nested relay chains develop in the future?
  • What is the limit of parachain scalability?

In the 2021 Polkadot Decoded roundtable “Analysis of Parachains: Who Made Parachains? What Are Parachains? Why Parachains?”, researcher Jeff, implementer Rob, and host Joe discussed ideas about parachains and their development. PolkaWorld has summarized the main content of the roundtable in this article.

Jeff: Jeff Burdges, W3F cryptography researcher, who has done much of the research behind parachains

Rob: Robert Habermeier, Polkadot co-founder and Parity core developer, who leads the implementation team making parachains run in practice

Joe: Joe Petrowski, head of technical integrations at W3F and moderator of this roundtable


Joe: About a year and a half ago, Jeff's team published a paper on availability and validity. The scheme has changed a lot since then in the course of implementation. Jeff, can you briefly talk about how the ideas in that paper came about?

Jeff: We took some ideas from the Ethereum ecosystem, such as the idea of using erasure codes. However, the specifics around those ideas, such as how to make sharding efficient, had not been formally worked out.

At the end of 2019, we decided to formalize these ideas and propose more precise methods. Generally speaking, the way we design complex protocols is that I write down all the possible design options and then proceed by elimination.

At the beginning of 2020, I wrote up this design and discussed it with everyone, and we noticed some problems. I came up with a technique called “two-phase inclusion”. That is to say, before a parachain block really counts, the relay chain must know about the block, and validators must attest that it is valid.

After that we do the erasure coding, and then the real work of checking the block begins. One advantage of this is that because validators have put a large stake behind the process, the number of attempts an attacker can make is limited: if you want to attack the system, you will also destroy yourself. It is not cryptographic security but distributed-systems security, and that is a reasonable trade-off.
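A minimal sketch may help make this two-phase flow concrete. The state names and threshold logic below are simplified illustrations for this article, not the actual Polkadot types:

```rust
// Illustrative model of two-phase inclusion. The names are
// simplified stand-ins, not the real Polkadot runtime types.
#[derive(Debug)]
enum CandidatePhase {
    // Phase 1: the relay chain knows the candidate and enough
    // validators have attested that it is valid ("backed").
    Backed { backing_votes: u32 },
    // The erasure-coded pieces are distributed, so the data can
    // be recovered later if a dispute arises.
    Available,
    // Phase 2: secondary checkers have re-verified the candidate;
    // it is now fully included.
    Approved,
}

// Advance a candidate only when the current phase's requirement
// is met; otherwise it stays where it is (simplified logic).
fn advance(phase: CandidatePhase, backing_threshold: u32) -> CandidatePhase {
    match phase {
        CandidatePhase::Backed { backing_votes } if backing_votes >= backing_threshold => {
            CandidatePhase::Available
        }
        CandidatePhase::Available => CandidatePhase::Approved,
        other => other, // not enough votes yet: no transition
    }
}

fn main() {
    let mut phase = CandidatePhase::Backed { backing_votes: 5 };
    for _ in 0..2 {
        phase = advance(phase, 3);
        println!("{:?}", phase); // Available, then Approved
    }
}
```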

Joe: For those not familiar with sharding, that may sound a bit abstract. In practice we have a thousand validators, and when you want to include a parachain block, you actually need to distribute its data to all of the validators, which involves cost and complexity in networking, storage, and so on. Every validator has to handle these messages, so you really want to ensure that they are valid messages, and with good reason.

Jeff: Yes. Erasure codes are actually quite old, and there are different types. Generally speaking, in cryptography you usually use codes based on Lagrange interpolation, such as Reed-Solomon codes. The reason is that their threshold is very sharp: we can recover the whole block from roughly any one-third of the fragments.

What do we do? We have parachain blocks, called candidate blocks, and we have 3f+1 validators. We erasure-code each candidate into 3f+1 fragments, and any f+1 of those fragments are enough to reconstruct the original block. In other words, you can reconstruct the original block with just a little more than one-third of the fragments.
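To make the f+1-of-3f+1 property concrete, here is a toy systematic erasure code over a small prime field, built from the Lagrange interpolation Jeff mentions. It is a naive O(n²) sketch for readability, nothing like the optimized production code, and all parameters are chosen arbitrarily:

```rust
const P: u64 = 65537; // small prime field, chosen arbitrarily

// Modular exponentiation, used for inverses via Fermat's little theorem.
fn mod_pow(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 {
            acc = acc * base % P;
        }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

fn inv(a: u64) -> u64 {
    mod_pow(a, P - 2)
}

// Lagrange interpolation: evaluate at x0 the unique degree-(k-1)
// polynomial passing through the k given (x, y) points.
fn interpolate_at(points: &[(u64, u64)], x0: u64) -> u64 {
    let mut total = 0;
    for (i, &(xi, yi)) in points.iter().enumerate() {
        let mut num = 1;
        let mut den = 1;
        for (j, &(xj, _)) in points.iter().enumerate() {
            if i != j {
                num = num * ((x0 + P - xj) % P) % P;
                den = den * ((xi + P - xj) % P) % P;
            }
        }
        total = (total + yi * num % P * inv(den) % P) % P;
    }
    total
}

fn main() {
    let f: u64 = 2; // tolerate f faults: 3f + 1 = 7 shards in total
    let data = vec![42u64, 7, 99]; // f + 1 = 3 data symbols

    // Systematic encoding: shards at x = 1..=f+1 carry the data itself,
    // the rest carry extra evaluations of the same degree-f polynomial.
    let data_points: Vec<(u64, u64)> =
        (1..=data.len() as u64).zip(data.iter().copied()).collect();
    let shards: Vec<(u64, u64)> = (1..=3 * f + 1)
        .map(|x| (x, interpolate_at(&data_points, x)))
        .collect();

    // Reconstruct the data from ANY f + 1 = 3 of the 7 shards.
    let subset = [shards[2], shards[4], shards[6]];
    let recovered: Vec<u64> =
        (1..=f + 1).map(|x| interpolate_at(&subset, x)).collect();
    assert_eq!(recovered, data);
    println!("recovered {:?} from 3 of 7 shards", recovered);
}
```

The quadratic cost here comes from re-running the full interpolation for every point; the optimization Jeff describes next replaces this with FFT-style evaluation.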

This is a very old piece of mathematics, and it actually lets us go faster. At our current number of validators we had to dig up some relatively recent papers to optimize it, and that is what we did this year: a major optimization of the erasure coding. We made it run about 400 times faster, which is an asymptotic improvement, going from an O(n²) algorithm to an O(n log n) algorithm. This makes the computation much less of a burden, and we may do even better in the future. That was our recent breakthrough. Of course, it would have been great to solve it earlier, haha.

Joe: Turning this research into code is a big challenge in itself. We launched the Rococo testnet in the middle of last year. Rob, can you talk about some of the challenges in the early implementation of this protocol?

Rob: I remember the first parachain-related code was committed in the second half of 2018, and by mid-2019 we had the first draft of the V0 protocol. In the first few years we invested more in the BABE and GRANDPA consensus, that is, in block production and finality. There was no in-depth work on parachains at that time, because the parachain part was more complicated and needed more development time.

From mid-to-late 2019 to early 2020, things made great progress. As Jeff mentioned just now, the research team began to really finalize the protocol, for example availability, which ensures that parachain block data remains retrievable so that other validators can perform additional checks on it to ensure safety.

I think implementing all this research is very difficult. If you are building any kind of system, every bit of extra complexity you add increases the time required to build it exponentially. The same rule applies to code: once you reach a certain amount of code, it becomes really hard to add more, because new things will inevitably disrupt and break some of what was done before.

Therefore, good design and planning are very important. As we iterated on the protocol, there was certainly some back-and-forth with research. But in 2020 we focused on the “Implementer’s Guide” and iterated there rather than in the code. I could talk with Jeff and Al (Alistair) about the contents of a draft paper, and then write a page saying “we will write the code like this,” instead of going straight to code. This approach saved us weeks, and afterwards I could distribute the coding work across many developers.

So I think having a good plan is important when building such a system, and also a modular design, so you can add separate pieces of code and organize them into small packages rather than one monolithic system, because it is difficult for one person to hold an entire complex system in their head.

Joe: Let's talk about the current stage. Kusama has launched the Shell chain (a blank parachain), and there are already 12 parachains on Rococo, but Kusama's block time is about 12 seconds, which we are working on. What are the short-term challenges in bringing the block time down to 6 seconds and launching more chains on Kusama?

Rob: I think it essentially all boils down to the network. Kusama has 900 validators, all with KSM at stake, running nodes all over the world to synchronize the chain. That is very cool; it may be one of the largest validator sets in the world.

But when you add complexity to this network, such as parachains, it certainly adds a lot of load. We had actually tested with the same parameters on Rococo before, but the effect on Kusama is completely different, because Kusama's validator nodes run all over the world. The main challenge is making the networking code run as smoothly as possible. When we wrote the networking code, we built in a lot of anti-abuse mechanisms. These are the kind of thing you never notice if nobody misbehaves, but if someone acts maliciously, you discover how important these defenses are.

Jeff: Exactly. As we add more and more parachains, more computational load will appear. We will see how it develops and how we grow through the process. Observing how these operations affect the network is itself a gradual learning process.

Joe: That is the point of Kusama, isn't it?

Rob: Exactly. As more parachains are added, validator load will certainly increase. A validator needs to verify a block and put its stake behind that block, and some other validators will then choose to check it themselves. The more parachains there are, the more computation you have to do, although the amount of computation should grow more slowly than the number of parachains. That is why this network is scalable, unlike some other blockchains. But as a validator, you may still need to verify dozens of blocks per second.
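A back-of-the-envelope model of this sublinear growth: if each candidate is re-checked by a small, randomly assigned subset of validators, expected per-validator work scales with parachains times checkers-per-candidate divided by validators, rather than with the raw parachain count. All concrete numbers below are illustrative assumptions, not protocol parameters:

```rust
// Rough load model: each candidate is re-checked by a small random
// subset of validators, so per-validator work grows with
// parachains * checkers_per_candidate / validators.
fn checks_per_validator(parachains: f64, checkers_per_candidate: f64, validators: f64) -> f64 {
    parachains * checkers_per_candidate / validators
}

fn main() {
    let validators = 900.0; // Kusama-scale validator set, per the discussion
    for parachains in [10.0, 100.0, 1000.0] {
        // Assume roughly 30 approval checkers per candidate (a made-up figure).
        let load = checks_per_validator(parachains, 30.0, validators);
        println!("{parachains:>6} parachains -> ~{load:.1} checks per validator per relay block");
    }
}
```

Under these assumed numbers, going from 10 to 1,000 parachains raises per-validator work from well under one check per block to only a few dozen, which is the scalability Rob describes.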

Joe: Let's talk about something more practical: Polkadot and Kusama's plans for the next year. We have a plan for parathreads, which you can actually already see in the UI, because chains register as parathreads before being upgraded to parachains. But in the future we will make parathreads practical in their own right. Can you talk about the design and implementation of parathreads, and what work remains to realize them?

Rob: Parathreads are similar to parachains; the main difference lies in how they are scheduled. We have a scheduler. If you are a parachain, you are scheduled every block; if you are a parathread, there is an auction, and parathread collators compete with each other for the right to submit a block. This brings networking changes on the collation side: when you produce a block as a parathread, you need to let the validators know that you have a block to submit.

So in general there are three challenges: the scheduler, the auctions, and the networking changes.
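A highly simplified sketch of that scheduling difference, with illustrative types only (the real scheduler and claim format live in the Polkadot runtime and are considerably more involved):

```rust
// Illustrative-only sketch: a leased parachain occupies its core
// every relay-chain block, while a parathread core goes to whichever
// collator claim won the per-block auction. `ParaId` and the bid
// format are simplified stand-ins.
struct ParaId(u32);

enum CoreAssignment {
    Parachain(ParaId),          // scheduled every block
    Parathread(Option<ParaId>), // auction winner, if any claim exists
}

// Pick the winning parathread claim: the highest bid takes the core.
fn schedule_parathread_core(claims: &[(ParaId, u64)]) -> Option<ParaId> {
    claims
        .iter()
        .max_by_key(|(_, bid)| *bid)
        .map(|(id, _)| ParaId(id.0))
}

fn main() {
    let leased = CoreAssignment::Parachain(ParaId(1000));
    let claims = [(ParaId(2000), 50), (ParaId(2001), 80)];
    let auctioned = CoreAssignment::Parathread(schedule_parathread_core(&claims));

    for core in [leased, auctioned] {
        match core {
            CoreAssignment::Parachain(id) => println!("core: parachain {}", id.0),
            CoreAssignment::Parathread(Some(id)) => println!("core: parathread winner {}", id.0),
            CoreAssignment::Parathread(None) => println!("core: idle this block"),
        }
    }
}
```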

Jeff: There was actually an earlier design for parathreads, but in the end we chose the one with the auction, because that design is better at preventing cheating. For parathreads, though, if they cannot submit a block for some reason, they may lose resources, so we have to look at some economic questions.

Joe: We have said that some core functions will be moved off the relay chain and onto parachains to further improve scalability, realizing the idea of nested relay chains. Would you like to talk about why we want to do this?

Jeff: Actually, rather than the term “nested relay chains,” I prefer to call it “relay chain sharding,” because “nested relay chains” sounds as if one chain is dominant.

In a sense, sharding the relay chain is simpler than what we have already done (parachain sharding). But I think we would only do it once there are more than 3,000 validators, so I want to tell everyone there is no rush to implement it now. Before that, we want to keep the relay chain's functionality as simple as possible, which I think means the smallest amount of work for developers.

Rob: At present the staking and election modules and some governance functions are actually quite heavy, which puts a large load on the relay chain. Everything that happens on the relay chain must be executed by every relay chain validator, whereas by design, what happens on a parachain only needs to be processed by a subset of validators. That is the source of scalability: minimize what every validator machine needs to execute.

I think it is actually quite difficult to safely extract things like staking and governance. Polkadot has some failure modes; for example, while a dispute is being resolved, the chain may be prevented from producing blocks. You might then be unable to execute slashing transactions, or validator-set updates, and so on. These are very difficult challenges.

But this is actually not very urgent. Before that, we should optimize the node side, such as how parachains and network messages are handled, to get higher scalability and run more parachains.

Jeff: I think our goal, even if we may never reach it, should be to scale to the point where there is one validator per parachain. We may not get there, but if we do reach that point, we should recognize that the limit exists and then work in other directions.

Joe: You just said 3,000 validators, which would mean 3,000 parachains. Rob, as the implementer, how do you assess that goal?

Rob: (laughs) Not for the time being, that's for sure. If the code can run 80 to 100 parachains after a round of optimization, I will be very happy, and that is more than enough for the community.

Jeff: Yes. Eventually we may reach a point where we run out of users and have to start persuading more people to use it, so I would guess there may be several such growth spurts.

Rob: I think so. I think it is a bit like a challenge for Polkadot's governance: what is the long-tail effect of the auction schedule? At some point, if all the technology goes well, we may be able to run more parachains than the market even demands. But we don't want parachain resources to be filled up by junk projects that then occupy slots for up to two years. Of course, the community's growth will catch up (and parachain slots will become scarce again), so there will definitely be this kind of back-and-forth.
