Is Bitgert the fastest and most scalable blockchain ecosystem ever?

Recently I heard that Bitgert will be the ideal blockchain network of the future because it's the fastest and provides zero gas fees.
Is this true? Bitgert has a 15-second average block time, whereas Ethereum's is about 13 seconds.

This is not necessarily true. I have never heard of Bitgert, and speed and zero gas fees alone mean almost nothing for the usefulness of a blockchain. The reason Ethereum has such a dominant position is that everyone accepts it as the standard. Ethereum development is also very mature, with a great community. If speed and low fees are your goal, it would be optimal to use a layer 2 solution such as Polygon, which inherits security from Ethereum.

Related

What's preventing the Ethereum blockchain from getting too big too fast?

So I recently started looking at Solidity on the Ethereum blockchain, and I have a question about the size that smart contracts generate.
I'm aware that there is a size limit for the bytecode generated by the contract itself, and that it cannot exceed 24 KB. There's an upper limit for transactions too. However, what I'm curious about is that, since there's no limit on the variables a smart contract stores, what is stopping those variables from getting very large? For popular smart contracts like Uniswap, I would imagine they can generate hundreds of thousands of transactions per day, and the state they keep would be huge.
If I understand it correctly, basically every node on the chain stores the whole blockchain, so limiting the size of the blockchain would be very important. Is there anything done to limit the size of smart contracts, which I think is dominated mainly by the state variables they store?
Is there anything done to limit the size of smart contracts, which I think is dominated mainly by the state variables they store?
No. Ethereum will grow indefinitely, and currently there is no viable plan to limit state growth besides keeping transaction costs high and block space at a premium.
You can read more about this in my post Scaling EVM here.
TLDR: The block size limit.
The protocol has a hardcoded limit that prevents the blockchain from growing too fast.
Full Answer
Growth Speed
The protocol measures storage (and computation) in a unit called gas. Each transaction consumes more or less gas depending on what it's doing: an ether transfer costs 21k gas, while a Uniswap v2 swap consumes around 100k gas. Deploying big contracts consumes more.
The current gas limit is 30 million gas per block, so the actual number of transactions varies even when blocks are always full (some transactions consume more gas than others).
FYI: this is why transactions per second is a BS marketing metric in blockchains with rich smart contracts.
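To make this concrete, here is a quick back-of-the-envelope sketch in Python of how the transaction mix changes the "TPS" you can squeeze out of a fixed gas limit. The gas costs are the approximate figures cited above; the 12-second block time is an assumption on my part:

```python
# Upper bounds on transactions per block / per second under a fixed gas limit.
GAS_LIMIT_PER_BLOCK = 30_000_000
BLOCK_TIME_SECONDS = 12  # assumed block time

GAS_COSTS = {
    "ether transfer": 21_000,
    "uniswap v2 swap": 100_000,
}

for name, gas in GAS_COSTS.items():
    txs_per_block = GAS_LIMIT_PER_BLOCK // gas
    tps = txs_per_block / BLOCK_TIME_SECONDS
    print(f"{name}: {txs_per_block} txs/block, ~{tps:.0f} TPS")
# ether transfer: 1428 txs/block, ~119 TPS
# uniswap v2 swap: 300 txs/block, ~25 TPS
```

The same chain is "119 TPS" or "25 TPS" depending entirely on what the transactions do, which is exactly the point above.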
Deeper Dive
Storage as of June 2022
The Ethereum blockchain is currently ~180 GB in size. This is the data that is critical to the chain's existence and from which absolutely everything else is calculated.
Péter Szilágyi, the lead developer of Geth (the oldest, flagship Ethereum node implementation), is the source of this figure.
That being said, nodes generate a lot of additional data while processing the blockchain in order to compute the current state (i.e. how much money you have in your wallet right now).
Today, if you want to run a node that stores every single block and transaction starting from genesis (what Bitcoin engineers, but not Ethereum engineers, call an archive node), you currently need around 580 GB, and this grows as the node runs. See Etherscan's Geth node after they deleted some locally generated data, June 26, 2022.
If you want to run what Ethereum engineers call an archive node, i.e. a node that not only keeps all blocks from genesis but also never deletes generated data, then you currently need about 1.5 TB of storage using Erigon.
Older clients that do not use flat key-value storage generate considerably more data (on the order of 10 TB).
The Future
There are a lot of proposals, research and active development efforts working in parallel and so this part of the answer might become outdated. Here are some of them:
Sharding: Ethereum will split data (but not execution) into multiple shards, without losing confidence that the entirety of it is available via Data Availability Sampling;
Layer 2 Technologies: These move gas that was consumed by computation to another layer, without losing guarantees of the first layer such as censorship resistance and security. The two most promising instances of this (on Ethereum) are optimistic and zero-knowledge rollups.
State Expiry: Registers, cache, RAM, SSD, HDD and tape libraries are storage solutions, ordered from fastest and most expensive to slowest and cheapest. Ethereum will follow the same strategy: move state data that is not accessed often to cheaper places;
Verkle Trees;
Portal network;
State Rent;
Bitcoin's Lightning Network was the first blockchain layer 2 technology.

How to secure a blockchain based on PoW against a 51% attack?

I couldn't find a satisfying answer to this 51% attack issue. For a new blockchain with only 300 mined blocks, my understanding is that the attacker has to rebuild all the blocks from scratch. Is that true? If so, what if the blockchain has 100k or 300k blocks? Is there a way to prevent or penalize a miner if he mines too fast? Would having honest miners solve the issue? What about multiple full nodes? I need practical solutions.
You can use dynamic checkpoints, the same as is used in Peercoin or Emercoin.
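As a rough illustration, here is a minimal Python sketch of checkpoint enforcement. It is not Peercoin's or Emercoin's actual implementation, and the heights and hashes are placeholders:

```python
# A node rejects any competing chain that disagrees with a pinned block hash
# at a checkpointed height, so an attacker cannot rewrite history past a
# checkpoint no matter how much hashpower they have.

CHECKPOINTS = {  # height -> expected block hash (placeholder values)
    100: "00000a1b",
    200: "00000c3d",
}

def respects_checkpoints(chain: list) -> bool:
    """chain: list of block hashes, indexed by height."""
    for height, expected in CHECKPOINTS.items():
        if height < len(chain) and chain[height] != expected:
            return False  # reorg across a checkpoint: reject, even if longer
    return True
```

"Dynamic" checkpoints, as I understand Peercoin's scheme, are broadcast by a trusted signer rather than hard-coded, which trades some decentralization for protection of a young chain.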

What is the difference between blockchain performance and blockchain scalability? [closed]

I really don't get the difference between blockchain scalability and blockchain performance. Can someone explain it to me?
The differences are conceptual and categorical. Blockchain scalability as a concept has wider significance in terms of the scalability of a blockchain network, node, protocol, etc. Blockchain scalability can be measured in terms of a collection of parameters such as transactions per second, latency and response time. It is the ability of a blockchain network to scale as per the demands of the participating nodes.
Blockchain performance is quite a subjective term in comparison with scalability. Performance is the current throughput of any live system. Blockchain performance could be measured in terms of the current capacity of the nodes and network to manage data at rest and data in motion. It could also be measured in terms of total active users and total concurrent users. Performance improvement and optimisation can be done without scaling the system. In the world of blockchain systems, methods like pruning, zero-knowledge proofs and compression are widely adopted performance improvement techniques.
The following diagram from The Art of Scalability book is a great illustration of the different dimensions of scalability: scale by cloning, scale by functional decomposition and scale by partitioning. These are three different ways to scale systems, and blockchain protocols can be scaled in the same three ways. One is scaling by increasing the number of identical nodes (cloning). Another approach is scaling by introducing different layers (side chains, state channels, etc.). Then we can also scale them by creating shards.
Performance is an indication of the responsiveness of a system, i.e. its ability to execute an action within a given time interval: even if your blockchain is handling only, say, 5 transactions, how well is it handling them? Scalability, on the other hand, is the ability of a system either to handle increases in load without impact on performance, or to have its available resources readily increased.
When we talk about blockchain networks, I can suggest the following approach:
Performance is the rate at which useful data is added to the blockchain state.
Scalability is the maximal level of performance, security and decentralisation that the target blockchain network can handle simultaneously.
To keep it short and simple: blockchain scalability is the ability of a blockchain network to expand its capacity in order to accommodate a growing number of transactions, while blockchain performance is a measure of how well a blockchain network can process transactions in a timely and efficient manner.
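To put a number on the distinction, here is a toy Python sketch; all figures are made up for illustration. Performance is the throughput you measure right now, scalability is whether capacity itself can grow with demand:

```python
def throughput_tps(capacity_tps: float, offered_load_tps: float) -> float:
    # A chain processes at most its capacity, regardless of demand.
    return min(capacity_tps, offered_load_tps)

for load in (5, 50, 500, 5000):
    processed = throughput_tps(100, load)
    print(f"offered {load:>4} tx/s -> processed {processed:.0f} tx/s")

# Performance: ~100 tx/s once saturated. Scalability: whether that 100
# can be raised (more nodes, layers, shards) without hurting security.
```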

How to protect from 51% Attack?

What is the best way to reduce the risk of this attack?
How to protect from 51% Attack?
The nature of the system means that this attack cannot be prevented. Think of it this way: if you have a perfectly decentralized system in which the participants have control over the network (rather than some centralized authority), then the users get to vote on changes. The way to vote in a blockchain is with your mining hashpower. If a majority (>50%) of the network votes on a change, then the change goes into effect (theoretically). So how could you prevent this without centralizing the network?
Now, in actuality, an attacker would likely need much more than 51%, because not only do they have to outpace the network, they have to do so for every block after the one they want to modify; what if a new block is mined by someone else while they are trying to outpace the network? They would need much more hashpower to have a good chance of successfully pulling it off.
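The Bitcoin whitepaper (section 11) quantifies exactly this. Here is a small Python transcription of that calculation, showing how an attacker's success probability drops as confirmations accumulate; the 30% attacker and 6 confirmations are just example parameters:

```python
import math

def attacker_success_probability(q: float, z: int) -> float:
    """Probability that an attacker controlling fraction q of the hashpower
    catches up from z blocks behind (Bitcoin whitepaper, section 11)."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker eventually wins with certainty
    lam = z * (q / p)
    total = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        total -= poisson * (1.0 - (q / p) ** (z - k))
    return total

# A 30% attacker trying to reverse a payment buried under 6 blocks:
print(f"{attacker_success_probability(0.30, 6):.4f}")  # ~0.1321
```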
Prevention
The real answer is you can't really prevent it, since it is a decentralized network, but if you are designing a new blockchain, the answer is to make it as decentralized as possible. Here are some considerations:
Commoditization of the mining hardware (commoditizing ASICs). Note that this goes against some conventional thinking that hashing algorithms should be ASIC-resistant, but there is a good article that explains why that is a bad idea: ASICs and Decentralization FAQs. Users who have to pay a lot, or find it difficult to get hashpower, will likely not mine, and mining will be left to a few large players with the resources to do so. This results in more centralization of mining.
Avoid forking an existing coin with much larger hashpower. Users of the original coin now own coins on your new chain and are incentivized to attack it if they have a much larger portion of hashpower they can switch over to the new coin. If you do fork an existing coin, consider changing the hashing algorithm so that miners of the original coin would have to invest more capital in order to attack.

How do clients of a distributed blockchain know about consensus?

I have a basic blockchain I wrote to explore and learn more about the technology. The only real-world experience I have with them is in one-to-one transactions from client to server, as a record of transactions. I'm interested in distributed blockchains now.
In its simplest, most theoretical form, how is consensus managed? How do peers know to begin writing transactions on the next block? You have to know when >50% of the entire pool has accepted some last block written. But p2p systems can be essentially unbounded, and you can't trust a third party to handle surety, so how is this accomplished?
Edit: I now know roughly how Bitcoin handles consensus:
The consensus determines the accepted blockchain. The typical rule of "longest valid chain first" ensures that only one variant is accepted. People may accept a blockchain after any number of confirmations, typically 6 is sufficient to ensure a clear winner.
However, this seems like a slow and least-deliberate method. It ensures that there is a certain amount of wasted work on the part of nodes that happen to be in a part of the network that had a local valid solution at roughly the same time as a generally accepted solution.
Are there better alternatives?
Interesting question. I would say blockchain technology achieves only probabilistic consensus: with a certain confidence, the blockchain network agrees on something.
Viewing the blockchain as a distributed system, we can say that its state is distributed: the blockchain is kept as a whole, but there are many distributed replicas of local copies. More interestingly, the operations are distributed: writes and reads can happen at different nodes concurrently. Read operations can be done locally on the local copy of the blockchain; such a read can of course be stale if your local copy is not up to date, but there is always an incentive for nodes in the blockchain network to keep their local copy current so that they can complete new transactions when necessary.
Write operations are the tricky part that a blockchain must solve. As writes happen concurrently in a distributed fashion, the blockchain must avoid inconsistencies such as double spending and somehow reach consensus on the current state. The way blockchain does this is probabilistic. First of all, writing to the chain is made expensive by the "puzzle" that has to be solved, which reduces the probability that different distributed writes happen concurrently; they can still happen, just with lower probability. In addition, as there is an incentive for nodes in the network to keep their state up to date, nodes that receive the flooded write operation will validate it and accept it into their chain. I think the incentive to always keep the chain up to date is key here, because it ensures that the chain makes progress. That is, a writer has a clear incentive to keep its chain up to date, since it will be competing under the "longest-chain-first" principle against other concurrent writers. For non-adversarial miners there is also an incentive to interrupt the current mining, accept a new write-block and restart the mining process, ensuring a sort of liveness in the system.
So blockchain relies on probabilistic consensus. What is the probability, then? The probability that two exactly equal branches grow in parallel at the same time is close to 0, assuming there is no large group of adversarial nodes taking over the network. With very high probability one branch will become longer than the other and be accepted, the network will reach consensus on that branch, and write operations in the shorter branch will have to be retried. The big concern is of course large adversarial miner groups who might deliberately try to create forks in the blockchain to perform double-spending attacks, but that is only likely to succeed if they control close to 50% of the computational power in the network.
So to conclude: natural branching in the blockchain, which can happen due to concurrent writes (with the probability reduced by the puzzle-solving), will with almost 100% probability converge to a single branch as write operations continue to happen, and the network reaches consensus on that single branch.
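For reference, here is a toy Python sketch of the longest-valid-chain rule described above. Real clients compare cumulative proof-of-work rather than raw block count, and the validity check is left as a stub:

```python
def is_valid(chain: list) -> bool:
    # Stub: a real node verifies hashes, proof-of-work and transaction
    # validity for every block before even considering the chain.
    return True

def choose_chain(local_chain: list, candidate_chain: list) -> list:
    """Fork choice: adopt a competing chain only if it is valid and strictly
    longer; on a tie, keep the local view. This is why temporary forks
    converge as new blocks arrive."""
    if is_valid(candidate_chain) and len(candidate_chain) > len(local_chain):
        return candidate_chain
    return local_chain
```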
However, this seems like a slow and least-deliberate method. It ensures that there is a certain amount of wasted work on the part of nodes that happen to be in a part of the network that had a local valid solution at roughly the same time as a generally accepted solution.
Are there better alternatives?
Not that I can think of. There would be many more efficient solutions if all peers in the system "were under control" and you could make them follow some protocol, perhaps with a designated leader to decide the order of writes and ensure consensus, but that is not possible in a decentralized, open system.
In a permissioned blockchain environment, where the participants are known in advance, a client can get cryptographic proof of the consensus (e.g. that a block was signed by at least 2/3 of the participants) and verify it. Usually this can be achieved using threshold signatures.
In public blockchains, AFAIK, there is no way to do this, since the number of participants is unknown and changes all the time.
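As a rough sketch of what such a proof check could look like in the permissioned case (the validator set, the verify callback and the 2/3 rule here are illustrative assumptions, not any specific chain's API):

```python
def has_quorum(block_hash: bytes, signatures: dict, validators: dict, verify) -> bool:
    """signatures: validator_id -> signature; validators: validator_id -> pubkey.
    verify(pubkey, message, signature) -> bool is supplied by whatever
    signature scheme the chain uses (e.g. BLS or Ed25519)."""
    valid = sum(
        1
        for vid, sig in signatures.items()
        if vid in validators and verify(validators[vid], block_hash, sig)
    )
    return 3 * valid >= 2 * len(validators)  # at least 2/3 of all validators
```

A client that receives a block plus such a signature set can verify finality offline, without trusting the node that served it.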