Can I use the block height to measure the passage of a year based on the average block time in RSK and Ethereum?

I want to build a Solidity smart contract in RSK and Ethereum that pays dividends every year.
Should I use the block time, or can I rely on the block number, assuming the current average inter-block time in RSK and Ethereum?

RSK and Ethereum have trunk blocks, which are chained and executed, and uncle blocks (now called ommers), which are referenced but not executed. Both RSK and Ethereum have difficulty adjustment functions that try to maintain a target density of blocks (including trunk and ommers), in other words a fixed number of blocks mined per time period. The adjustment functions in RSK and Ethereum are not equal, but both target a block density, not an inter-block time in the chain. Therefore, if the mining network produces a higher number of ommer blocks, the number of trunk blocks created over a period decreases, and the average trunk inter-block time increases.

In the case of Ethereum, the ommer rate has oscillated between 5% and 40% over the last 5 years, but in the last 2 years it has stayed relatively stable between 4% and 8%. This translates into roughly a ±2% error when measuring time based on block count. However, in Ethereum the "difficulty bomb" has affected the average block time much more than the ommer rate: the average block time is ~14 seconds now, but it has peaked at 30, 20 and 17 seconds at different times. Therefore, in Ethereum the number of blocks should not be used to measure long periods of time. It may be used only for short periods, not longer than a month. More importantly, if Ethereum switches to PoS, the average block interval will change to a fixed 12 seconds at that point.
Here is the Ethereum ommer rate (source: https://ycharts.com/indicators/ethereum_uncle_rate).
And here is the Ethereum average block time (source: https://ycharts.com/indicators/ethereum_average_block_time).
The spikes are caused by the difficulty bomb and the abrupt decays by hard-forks that delayed the bomb.
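To get a feel for the size of the error, here is a rough worked example in C (the 14-, 17- and 30-second intervals are just the figures quoted above, not protocol constants):

    #include <stdio.h>

    /* Rough sketch: how far off a "one year" deadline measured in block
     * numbers ends up when the real average block interval drifts away
     * from the value assumed when the contract was written. */
    int main(void)
    {
        const double seconds_per_year = 365.0 * 24 * 3600;
        const double assumed_interval = 14.0;  /* planning assumption, seconds */
        const double blocks_per_year  = seconds_per_year / assumed_interval;

        const double actual_intervals[] = { 14.0, 17.0, 30.0 };
        for (int i = 0; i < 3; i++) {
            double real_elapsed = blocks_per_year * actual_intervals[i];
            printf("assumed 14 s, actual %4.1f s: %.0f blocks take %.1f days\n",
                   actual_intervals[i], blocks_per_year, real_elapsed / 86400.0);
        }
        return 0;
    }

With a 17-second average the "year" stretches to roughly 443 days, and with a 30-second average to well over two years, which is why block counts only work for short periods.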
In RSK, most miners are configured to minimize mining pool bandwidth, which creates a high number of ommers. This is permitted and encouraged by design. They can also be configured to minimize the number of ommers and consume more bandwidth. RSK targets a density of approximately 2 blocks every 33 seconds, and currently, on average, one of those blocks is an ommer and the other is part of the trunk. If the RSK/Bitcoin miners decide in the future to switch to the ommer-minimizing mode, almost no ommers will be created and the average trunk block interval will decrease to 16.5 seconds (to keep the 2-blocks-per-33-seconds invariant). This is why, even if the trunk block interval in RSK is currently very stable, in the future (and without prior notice) it can suddenly drop from 22 seconds down to 16.5 seconds. This makes the block number an unreliable basis for computing time-dependent values such as an interest rate.
On the other hand, the block time cannot be easily forged, because nodes check that a block's time is not in the future and not prior to the parent block's time. RSK also has a consensus rule that ties the RSK timestamp to the Bitcoin timestamp, which makes cheating extremely expensive: back-dated or forward-dated Bitcoin blocks produced by merge-mining would be invalid.
Here is the RSK average block time and average uncle rate from June 2018 to March 2021 (the X-axis shows the block number).
Each dot in the chart corresponds to one day. We can see that the block interval is highly correlated with the uncle rate.
The EVM opcode NUMBER (which is used to obtain the block height) returns the number of trunk blocks, not counting ommers. As a consequence, the value returned cannot be used to count all types of blocks. However, a new opcode OMMERCOUNT could be added to query the total number of ommers referenced up to the current block. Together with NUMBER, these opcodes could be used to better approximate the passage of time.
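To illustrate the idea (OMMERCOUNT is only a proposed opcode, and the 16.5-second figure simply follows from the 2-blocks-per-33-seconds density target mentioned above), here is a sketch of the arithmetic in C:

    #include <stdio.h>
    #include <stdint.h>

    /* RSK targets ~2 blocks (trunk + ommers) every 33 seconds, i.e. one block
     * of any kind every 16.5 s on average.  If a contract could read both the
     * trunk height (NUMBER) and the total ommer count (the proposed
     * OMMERCOUNT), the sum would track elapsed time regardless of how miners
     * split blocks between trunk and ommers. */
    double elapsed_seconds(uint64_t trunk_blocks, uint64_t ommer_blocks)
    {
        const double target_interval = 33.0 / 2.0;  /* 16.5 s per block of any kind */
        return (double)(trunk_blocks + ommer_blocks) * target_interval;
    }

    int main(void)
    {
        /* Same total block count, two extreme ommer rates: the combined
         * estimate is identical, while an estimate based on trunk blocks
         * alone would differ by a factor of two. */
        printf("50%% ommers: %.0f s\n", elapsed_seconds(1000, 1000));
        printf(" 0%% ommers: %.0f s\n", elapsed_seconds(2000, 0));
        return 0;
    }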

Related

What is an epoch in Solana?

I am new to Solana and exploring the web3.js side of it. I came across the term "epoch". I know what an epoch normally means when we talk about timestamps, but in Solana the definition is quite different. I read the official documentation but could not properly understand the meaning of epoch. Can anyone please explain what exactly an epoch is in Solana?
An epoch is a span of a certain number of blocks (in Solana: "slots") for which the validator schedule of Solana's consensus algorithm is defined. For stakers this means that starting and stopping staking, as well as reward distribution, always happen when epochs switch over. An epoch is 432,000 slots, each of which should take a minimum of 400 ms. Since block times are variable, this means epochs effectively last somewhere between 2 and 3 days.
Source
From Epoch in Solana and Slot in Solana.
Epoch
The time, i.e. number of slots, for which a leader schedule is valid.
Slot
The period of time for which each leader ingests transactions and produces a block.
Collectively, slots create a logical clock. Slots are ordered sequentially and non-overlapping, comprising roughly equal real-world time as per PoH.
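As a rough sanity check on those numbers, here is a small C sketch (the slot value used is just an arbitrary example):

    #include <stdio.h>
    #include <stdint.h>

    /* An epoch is 432,000 slots and each slot takes at least 400 ms, so an
     * epoch lasts at least ~2 days (longer in practice, since slots can run
     * over 400 ms). */
    int main(void)
    {
        const uint64_t slots_per_epoch  = 432000;
        const double   min_slot_seconds = 0.400;

        double min_epoch_days = slots_per_epoch * min_slot_seconds / 86400.0;
        printf("minimum epoch length: %.1f days\n", min_epoch_days);  /* 2.0 */

        /* Mapping a slot number to its epoch is just integer division. */
        uint64_t slot = 150000000;  /* arbitrary example slot */
        printf("slot %llu is in epoch %llu\n",
               (unsigned long long)slot,
               (unsigned long long)(slot / slots_per_epoch));
        return 0;
    }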

Why is the transaction throughput of the RChain blockchain calculated per node per CPU, instead of just per CPU?

I get that the RChain blockchain runs faster because of concurrency, and that block proposals can be parallelised (no total ordering).
Nevertheless, I have trouble understanding why the TPS of RChain is calculated per node per CPU. In my view, every node has to replay all transactions, so the TPS calculation should only be per CPU. Do you have an understanding of this?

What's preventing the Ethereum blockchain from getting too big too fast?

So I recently started looking at Solidity on the Ethereum blockchain, and I have a question about the amount of data that smart contracts generate.
I'm aware that there is a size limit for the bytecode of the contract itself (it cannot exceed roughly 24 KB), and there's an upper limit for transactions too. However, what I'm curious about is that, since there's no limit on the variables a smart contract stores, what is stopping those variables from getting very large in size? For popular smart contracts like Uniswap, I would imagine they can generate hundreds of thousands of transactions per day, and the state they keep would be huge.
If I understand it correctly, basically every node on the chain stores the whole blockchain, so limiting the size of the blockchain would be very important. Is there anything done to limit the size of smart contracts, which I think is mainly dominated by the state variables they store?
Is there anything done to limit the size of smart contracts, which I think is mainly dominated by the state variables they store?
No. Ethereum will grow indefinitely, and currently there is no viable plan to limit state growth besides keeping transaction costs high and block space at a premium.
You can read more about this in my post Scaling EVM here.
TLDR: The block size limit.
The protocol has a hardcoded limit that prevents the blockchain from growing too fast.
Full Answer
Growth Speed
The protocol measures storage (and computation) in a unit called gas. Each transaction consumes more or less gas depending on what it is doing: an ether transfer costs 21k gas, while a Uniswap v2 swap consumes around 100k gas. Deploying big contracts consumes even more.
The current gas limit is 30 million per block, so the actual number of transactions per block varies even if the blocks are always full (some transactions consume more than others).
FYI: this is why "transactions per second" is a BS marketing metric for blockchains with rich smart contracts.
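A quick back-of-the-envelope calculation with the figures above shows why: a full 30M-gas block holds a very different number of transactions depending on what those transactions do (the ~14-second block interval is the average quoted earlier on this page, and these are round numbers, not exact protocol constants):

    #include <stdio.h>

    /* Back-of-the-envelope: how many transactions fit in one full block,
     * using the round numbers quoted above. */
    int main(void)
    {
        const double block_gas_limit = 30e6;
        const double block_interval  = 14.0;  /* rough average, seconds */

        double transfers_per_block = block_gas_limit / 21000.0;   /* ~1430 */
        double swaps_per_block     = block_gas_limit / 100000.0;  /* ~300  */

        printf("plain transfers:  ~%.0f per block (~%.0f TPS)\n",
               transfers_per_block, transfers_per_block / block_interval);
        printf("Uniswap v2 swaps: ~%.0f per block (~%.0f TPS)\n",
               swaps_per_block, swaps_per_block / block_interval);
        return 0;
    }

The same chain yields roughly 100 TPS or roughly 20 TPS depending on the transaction mix, so a single TPS number tells you very little.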
Deeper Dive
Storage as of June 2022
The Ethereum blockchain is currently ~180 GB in size. This is the data that is critical to the chain's existence and from which absolutely everything else is calculated.
Péter Szilágyi is the lead developer of the oldest, flagship Ethereum node implementation.
That being said, nodes generate a lot of data while processing the blockchain to compute the current state (i.e. how much money you have in your wallet right now).
Today, if you want to run a node that stores every single block and transaction starting from genesis (or what Bitcoin engineers, but not Ethereum engineers, call an archive node), you currently need around 580 GB, and this grows as the node runs. See Etherscan's geth node after they deleted some locally generated data, June 26, 2022.
If you want to run what Ethereum engineers call an archive node - a node that not only keeps all blocks from genesis but also does not delete generated data - then you currently need 1.5 TB of storage using Erigon.
Older clients that do not use flat key-value storage generate considerably more data (on the order of 10 TB).
The Future
There are a lot of proposals, research and active development efforts running in parallel, so this part of the answer might become outdated. Here are some of them:
Sharding: Ethereum will split data (but not execution) into multiple shards, without losing confidence that the entirety of it is available via Data Availability Sampling;
Layer 2 Technologies: These move computation, and the gas it consumes, to another layer, without losing guarantees of the first layer such as censorship resistance and security. The two most promising instances of this (on Ethereum) are optimistic and zero-knowledge rollups.
State Expiry: Registers, cache, RAM, SSD, HDD and tape libraries are storage solutions, ordered from fastest and most expensive to slowest and cheapest. Ethereum will follow the same strategy: move state data that is not accessed often to cheaper places;
Verkle Trees;
Portal network;
State Rent;
Bitcoin's Lightning network is the first blockchain layer 2 technology.

What is P99 latency?

What does P99 latency represent? I keep hearing about this in discussions about an application's performance but couldn't find a resource online that would talk about this.
It's the 99th percentile. It means that 99% of the requests should be faster than the given latency. In other words, only 1% of the requests are allowed to be slower.
We can explain it through an analogy: if 100 students are running a race, then 99 of them should complete the race within the "latency" time.
Imagine that you are collecting performance data for your service and the table below shows the results (the latency values are fictional, just to illustrate the idea).
Latency    Number of requests
1s         5
2s         5
3s         10
4s         40
5s         20
6s         15
7s         4
8s         1
The P99 latency of your service is 7s. Only 1% of the requests take longer than that. So, if you can decrease the P99 latency of your service, you increase its performance.
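For completeness, here is a small C sketch that derives the p99 from the (made-up) distribution in the table above:

    #include <stdio.h>

    /* Walk the latency buckets in order and find the latency below which
     * at least 99% of the requests fall. */
    int main(void)
    {
        const double latencies[] = { 1, 2, 3, 4, 5, 6, 7, 8 };   /* seconds */
        const int    counts[]    = { 5, 5, 10, 40, 20, 15, 4, 1 };
        const int    buckets     = 8;

        int total = 0;
        for (int i = 0; i < buckets; i++)
            total += counts[i];                  /* 100 requests in total */

        int cumulative = 0;
        for (int i = 0; i < buckets; i++) {
            cumulative += counts[i];
            if (cumulative >= 0.99 * total) {    /* first bucket covering 99% */
                printf("p99 latency: %.0fs\n", latencies[i]);   /* prints 7s */
                break;
            }
        }
        return 0;
    }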
Let's take an example from here:
Request latency:
min: 0.1
max: 7.2
median: 0.2
p95: 0.5
p99: 1.3
So we can say that for 99 percent of the web requests, the latency was 1.3 ms or less (milliseconds or microseconds, depending on how latency measurement is configured on your system).
Like #tranmq said, if we decrease the P99 latency of the service, we can increase its performance.
It is also worth noting the p95, since a few requests may make the p99 much costlier than the p95, e.g. initial requests that build caches, class object warm-up, thread initialization, etc.
So the p95 may cut out those 5% worst-case scenarios. Still, within that 5%, we don't know how much is real noise versus genuinely worst-case inputs.
Finally, we can have roughly 1% noise in our measurements (things like network congestion, outages and service degradation), so the p99 latency is a good representative of the practical worst case. And, almost always, our goal is to reduce the p99 latency.
Explaining P99 through an analogy:
If 100 horses are running a race, 99 of them should complete the race in a time less than or equal to the "latency" time. Only 1 horse is allowed to finish the race in a time higher than the "latency" time.
That means that if the P99 is 10 ms, then 99 percent of requests should have a latency less than or equal to 10 ms.
If the p99 value is 1 ms, it means 99 out of 100 requests take less than 1 ms, and 1 request takes about 1 ms or more.

Count the clock periods of code block in Embedded C

I'm programming on a Keil board and am trying to count the number of clock periods taken to execute a code block inside a C function.
Is there a way to get the time with microsecond precision before and after the code block, so that I can take the diff and multiply it by the number of clock periods per microsecond to compute the clock periods consumed by the block?
The clock() function in time.h only gives time at a coarse (seconds-level) resolution, so the diff comes out as 0 for the small code block I'm trying to get the clock periods for.
If this is not a good way to solve this problem, are there alternatives?
Read up on the timers in the chip, find one that the operating system/environment you are using has not consumed, and use it directly. This takes some practice: you need to use volatile so the compiler does not rearrange your code or skip re-reading the timer, and you need to adjust the prescaler on the timer so that it gives the most practical resolution without rolling over. So start with a big prescale divisor, convince yourself it is not rolling over, then make the prescale divisor smaller until you reach divide-by-one or reach the desired accuracy. If divide-by-one doesn't give you enough resolution, then you have to call the function many times in a loop and time around that loop.

Remember that any time you change your code to add these measurements, you can and will change the performance of your code, sometimes by too little to notice, sometimes by 10%-20% or more. If you are using a cache, then any line of code you add or remove can change performance by double-digit percentages, and you have to understand more about timing your code at that point.
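As a sketch of that approach (the timer register address below is made up; substitute whatever free-running timer your chip and environment leave available):

    #include <stdint.h>

    /* TIMER_COUNT stands in for the count register of a free-running hardware
     * timer; 0x40001008 is a hypothetical address.  The volatile qualifier
     * keeps the compiler from caching or reordering the timer reads around
     * the code being measured. */
    #define TIMER_COUNT (*(volatile uint32_t *)0x40001008u)

    static volatile uint32_t start_ticks, end_ticks, elapsed_ticks;

    void measure_block(void)
    {
        start_ticks = TIMER_COUNT;

        /* ... code block being measured ... */

        end_ticks = TIMER_COUNT;

        /* Unsigned subtraction still gives the right answer across a single
         * wrap of the counter, but not if it wraps more than once, hence the
         * advice above about choosing the prescaler. */
        elapsed_ticks = end_ticks - start_ticks;
    }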
The best way to count the number of clock cycles in the embedded world is to use an oscilloscope. Toggle a GPIO pin before and after your code block and measure the time with the oscilloscope. The measured time multiplied by the CPU frequency is the number of CPU clock cycles spent.
You have omitted to say what processor is on the board (far more important than the brand of the board!). If the processor includes ETM and you have a ULINK-Pro or other trace-capable debugger, then uVision can unintrusively profile the executing code directly at the instruction-cycle level.
Similarly if you run the code in the uVision simulator rather than real hardware, you can get cycle accurate profiling and timing, without the need for hardware trace support.
Even without trace capability, uVision's "stopwatch" feature can perform timing between two breakpoints directly. The stopwatch is shown at the bottom of the IDE in the status bar. You do need to set the clock frequency in the debugger trace configuration to get "real time" from the stopwatch.
A simple approach that requires no special debug or simulator capability is to use an available timer peripheral (or, in the case of Cortex-M devices, the SysTick timer) to timestamp the start and end of execution of a code section; if you have no available timing resource, you could toggle a GPIO pin and monitor it on an oscilloscope. These methods have some software overhead that is not present in hardware or simulator trace, which may make them unsuitable for very short code sections.
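For example, if the part happens to be a Cortex-M3/M4 with CMSIS headers available, the DWT cycle counter is a commonly used option that counts CPU cycles directly, so no prescaler math is needed. A minimal sketch (register and bit names are the standard CMSIS ones; check that your particular device actually provides a DWT unit):

    #include <stdint.h>
    #include "stm32f4xx.h"  /* hypothetical example: include the CMSIS device header
                               for YOUR part, which defines DWT, CoreDebug and the
                               *_Msk macros used below */

    static inline void cycle_counter_init(void)
    {
        CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  /* enable the trace block */
        DWT->CYCCNT = 0;                                 /* reset the counter      */
        DWT->CTRL  |= DWT_CTRL_CYCCNTENA_Msk;            /* start counting cycles  */
    }

    uint32_t time_code_block(void)
    {
        uint32_t start = DWT->CYCCNT;

        /* ... code block being measured ... */

        return DWT->CYCCNT - start;  /* elapsed CPU clock cycles */
    }

Like any software timestamping, the two counter reads add a small, constant overhead, which matters only for very short code sections.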