Assume I started a transaction yesterday which is not yet confirmed (in Pending status).
Later, four other transactions were successful; assume they got block numbers
1110, 1111, 1112, and 1113.
Now assume the old transaction gets confirmed at this point. What can the block number of that old one be? Can it be less than 1110, or will it be greater than 1113?
I tested, but my transactions confirm too fast for me to reproduce this scenario.
The reason for asking is that I want to read Etherscan data using block numbers.
The tx will be in a higher block than 1113 (assuming it gets accepted by a miner at some point). It is currently in the mempool waiting to be mined. You can query the blockchain to get its status (this depends on the client API; specialized providers like Alchemy and QuickNode may have dedicated tools to explore the mempool, e.g. Alchemy has a mempool watcher).
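If it helps, here is a minimal sketch (assuming web3.py and a standard JSON-RPC endpoint; the endpoint URL and transaction hash are placeholders) of how to check whether a transaction is still pending:

    from web3 import Web3

    w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))  # placeholder endpoint

    tx = w3.eth.get_transaction("0x...")  # your transaction hash
    if tx["blockNumber"] is None:
        print("still pending in the mempool")
    else:
        # Once mined, the block number is whatever the chain tip was at
        # inclusion time, so it is always higher than blocks mined while
        # the transaction waited.
        print("mined in block", tx["blockNumber"])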
This is a good primer to understand what is happening.
Related
It seems transactions on Polygon can get automatically dropped and replaced.
original: 0xa67609bacf51ab83b1989e4097b4147574b4e26399bec636c4cfc5e12dfa2897
replaced: 0xec0d501619b5fc9cde6af41df929eeded252138a49965f15a7598bf2e532e555
What is happening here?
On Ethereum, I believe this can only happen if someone proactively replaces the tx by submitting another with the same nonce and a higher gas price. Is that correct?
{"level":"error","message":"Error: transaction was replaced [ See: https://links.ethers.org/v5-errors-TRANSACTION_REPLACED ] (cancelled=true, reason=\"replaced\", replacement={\"hash\":\"0xec0d501619b5fc9cde6af41df929eeded252138a49965f15a7598bf2e532e555\",\"type\":2,\"accessList\":[],\"blockHash\":\"0x252f663dfb64dd82dff77b5e4fbe2073cd77248c5ce8dff1191c87ac22d97cf9\",\"blockNumber\":39285028,\"transactionIndex\":60,\"confirmations\":2,\"from\":\"0x90Be1Ef5EEa48f1d33e2574a73E50D208bB3680E\",\"gasPrice\":{\"type\":\"BigNumber\",\"hex\":\"0x6cdbaaf8e5\"},\"maxPriorityFeePerGas\":{\"type\":\"BigNumber\",\"hex\":\"0x6cdbaaf8e5\"},\"maxFeePerGas\":{\"type\":\"BigNumber\",\"hex\":\"0x6cdbaaf8e5\"},\"gasLimit\":{\"type\":\"BigNumber\",\"hex\":\"0x0186a0\"},\"to\":\"0x2791Bca1f2de4661ED88A30C99A7a9449Aa84174\",\"value\":{\"type\":\"BigNumber\",\"hex\":\"0x00\"},\"nonce\":112,\"data\":\"0xe3ee160e00000000000000000000000090be1ef5eea48f1d33e2574a73e50d208bb3680e00000000000000000000000090be1ef5eea48f1d33e2574a73e50d208bb3680e00000000000000000000000000000000000000000000000000000000000027100000000000000000000000000000000000000000000000000000000063eb95950000000000000000000000000000000000000000000000000000000063eb9b71c726f5f957d29df36c915d2f2816a5906bdb096a68d79abeb83102359a3c51ef000000000000000000000000000000000000000000000000000000000000001cbaca5b2bb8c9b3a25ba94b3303be256a72cc37172886b67140c788f53eacfa0526f4bc5dd18d1e0154a0c574c12ff67b656846a731cc55e32b7d60a8ae5b21ee\",\"r\":\"0x2503d5645a7620c94678ef0a5de4bca4e03b18943cec0511d58b7e444412b467\",\"s\":\"0x72c2cf739e2bfeb8335faab2c4b87b7b0464c9681a488456fa7c8fe25aef89c6\",\"v\":1,\"creates\":null,\"chainId\":137}, hash=\"0xa67609bacf51ab83b1989e4097b4147574b4e26399bec636c4cfc5e12dfa2897\", receipt={\"to\":\"0x2791Bca1f2de4661ED88A30C99A7a9449Aa84174\",\"from\":\"0x90Be1Ef5EEa48f1d33e2574a73E50D208bB3680E\",\"contractAddress\":null,\"transactionIndex\":60,\"gasUsed\":{\"type\":\"BigNumber\",\"hex\":\"0x0110bc\"},\"logsBloom\":\"0x00000000000000001000000000000000000000000000000000000000000000000000000000000000020000000000000000008000000000000000000200000000000000000000000000000008000000800000000000000000000100000000000000000000000000000000000000000000020000000000000180000010000000000001000000400000000000008000000000008000000000004000000000000000200000000000000000000000000000000004000000000000000000000000004000100002000000000081000000000000000000000000000000100000000000000000008000000000800000000000000000000000000000000000000000100000\",\"blockHash\":\"0x252f663dfb64dd82dff77b5e4fbe2073cd77248c5ce8dff1191c87ac22d97cf9\",\"transactionHash\":\"0xec0d501619b5fc9cde6af41df929eeded252138a49965f15a7598bf2e532e555\",\"logs\":[{\"transactionIndex\":60,\"blockNumber\":39285028,\"transactionHash\":\"0xec0d501619b5fc9cde6af41df929eeded252138a49965f15a7598bf2e532e555\",\"address\":\"0x2791Bca1f2de4661ED88A30C99A7a9449Aa84174\",\"topics\":[\"0x98de503528ee59b575ef0c0a2576a82497bfc029a5685b209e9ec333479b10a5\",\"0x00000000000000000000000090be1ef5eea48f1d33e2574a73e50d208bb3680e\",\"0xc726f5f957d29df36c915d2f2816a5906bdb096a68d79abeb83102359a3c51ef\"],\"data\":\"0x\",\"logIndex\":250,\"blockHash\":\"0x252f663dfb64dd82dff77b5e4fbe2073cd77248c5ce8dff1191c87ac22d97cf9\"},{\"transactionIndex\":60,\"blockNumber\":39285028,\"transactionHash\":\"0xec0d501619b5fc9cde6af41df929eeded252138a49965f15a7598bf2e532e555\",\"address\":\"0x2791Bca1f2de4661ED88A30C99A7a9449Aa84174\",\"topics\":[\"0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef\",\"0x0000000000000000000000009
0be1ef5eea48f1d33e2574a73e50d208bb3680e\",\"0x00000000000000000000000090be1ef5eea48f1d33e2574a73e50d208bb3680e\"],\"data\":\"0x0000000000000000000000000000000000000000000000000000000000002710\",\"logIndex\":251,\"blockHash\":\"0x252f663dfb64dd82dff77b5e4fbe2073cd77248c5ce8dff1191c87ac22d97cf9\"},{\"transactionIndex\":60,\"blockNumber\":39285028,\"transactionHash\":\"0xec0d501619b5fc9cde6af41df929eeded252138a49965f15a7598bf2e532e555\",\"address\":\"0x0000000000000000000000000000000000001010\",\"topics\":[\"0x4dfe1bbbcf077ddc3e01291eea2d5c70c2b422b415d95645b9adcfd678cb1d63\",\"0x0000000000000000000000000000000000000000000000000000000000001010\",\"0x00000000000000000000000090be1ef5eea48f1d33e2574a73e50d208bb3680e\",\"0x000000000000000000000000e7e2cb8c81c10ff191a73fe266788c9ce62ec754\"],\"data\":\"0x00000000000000000000000000000000000000000000000000080d77c3b67cb80000000000000000000000000000000000000000000000003005ebfb86a0d1350000000000000000000000000000000000000000000003ebfb8e3e34e504eca50000000000000000000000000000000000000000000000002ffdde83c2ea547d0000000000000000000000000000000000000000000003ebfb964baca8bb695d\",\"logIndex\":252,\"blockHash\":\"0x252f663dfb64dd82dff77b5e4fbe2073cd77248c5ce8dff1191c87ac22d97cf9\"}],\"blockNumber\":39285028,\"confirmations\":2,\"cumulativeGasUsed\":{\"type\":\"BigNumber\",\"hex\":\"0x8d88d1\"},\"effectiveGasPrice\":{\"type\":\"BigNumber\",\"hex\":\"0x6cdbaaf8e5\"},\"status\":1,\"type\":2,\"byzantium\":true}, code=TRANSACTION_REPLACED, version=providers/5.7.1)"}
On Ethereum, I believe this can only happen if someone proactively replaces the tx by submitting another with the same nonce and a higher gas price. Is that correct?
Yes, and the same is possible on Polygon and other EVM chains.
Senders can replace their transactions for multiple reasons. For example, high-frequency trading bots continuously check whether their pending transactions are still likely to be profitable. If a transaction is no longer expected to be profitable, the bot replaces it with another one: either with new recalculated params that it expects to be profitable, or simply with a transaction from and to its own address, so that it loses nothing more than the gas fees.
Note: Once you send a transaction, it's impossible to drop it completely from the mempool. That's why a "cancellation" is really a replacement transaction that sends 0 value back to the sender's own address.
Sometimes transactions are also replaced by regular users who specified an insufficient gasPrice and want to speed the transaction up by resubmitting it with a higher gas price.
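For illustration, a replacement might be constructed like this with web3.py (a hedged sketch using a legacy gas-price transaction; the endpoint, address, and key are placeholders, and the nonce is the one from the log above):

    from web3 import Web3

    w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))  # placeholder endpoint
    sender = "0x..."       # placeholder address
    private_key = "0x..."  # placeholder key; never hard-code a real one

    replacement = {
        "nonce": 112,                      # SAME nonce as the stuck transaction
        "to": sender,                      # a 0-value self-send acts as a "cancel"
        "value": 0,
        "gas": 21000,
        "gasPrice": w3.eth.gas_price * 2,  # must exceed the stuck tx's gas price
        "chainId": 137,                    # Polygon mainnet
    }
    signed = w3.eth.account.sign_transaction(replacement, private_key)
    # (.rawTransaction instead of .raw_transaction on older web3.py versions)
    w3.eth.send_raw_transaction(signed.raw_transaction)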
Don't kill me if I'm about to ask something stupid. I'm very noobish in this whole crypto world, and I'm terribly fascinated by its technology.
So, just for education purposes, I've decided to build my own blockchain following more or less the Bitcoin principles (ECC key-pair generation using the secp256k1 curve, SHA-256 as the hashing algo, dynamic difficulty based on the timestamp of the previous block, p2p connectivity, etc.). But I've come to a point where I'm pretty confused about the blockchain's wallet itself.
From what I've learned so far, each transaction has to be signed by a wallet. So my transactions basically have three fields: input, outputs and id. Since the user's wallet signs the outputs field of the transaction, and this can't be changed anymore without being signed again by the same private key that belongs to the public key contained in the input field, how can I reward the miners?
If I got it right, the miner either creates a transaction signed somehow by the chain using the fee in the outputs field, or asks the chain itself to generate and sign a special reward transaction for that miner.
The guide that I was following used the second approach, and generated a blockchain wallet each time the program was executed on a client. This approach left me perplexed:
wouldn't a client generate a new wallet for "his" blockchain each time it comes back online? If so, wouldn't this create a mess in the transactions signed on the chain, since each miner (therefore peer) signing its own reward would use a different blockchain wallet than the other peers? Wouldn't this lead to any problems?
The first problem I can think of is that if we generate a new blockchain wallet that signs rewards for miners, each peer would create a different wallet. Wouldn't this lead to many "ghost" wallets in the chain that spit out reward tokens from nowhere? Is this supposed to happen?
I think it is definitely more straightforward to use the fee amount to reward the miner, but this doesn't resolve my doubts at all. Since the outputs of a transaction are signed upon creation, how could the peer initiating the transaction know upfront which miner will find the block? And if it can't know that, how could the miner possibly "extract" its reward without tampering with the transaction itself? Of course it could create a new transaction and add that to the block. But who would sign that transaction? Where would those reward tokens come from?
And if the answer is not to generate a new wallet each time, where could you possibly store that very first private key of the chain's wallet so that no one can see it, but you can still use it, without having to put a server in the middle?
That, in my opinion, breaks the whole decentralization concept and adds a single point of failure.
I've also implemented a transaction pool that automatically filters out invalid (tampered) transactions whenever a miner requests a subset of them to stamp into a block. But does this mean that the miner, as the one exception, can tamper with a transaction since it'll be "forged" into the block? So who gives a *** if it was tampered with once it got into the chain? MEEEEEH, that doesn't sound nice at all.
I'm terribly confused, and I'm dreaming key pairs at night. Please help me.
wouldn't a client generate a new wallet for "his" blockchain each time it comes back online? If so, wouldn't this create a mess in the transactions signed on the chain, since each miner (therefore peer) signing its own reward would use a different blockchain wallet than the other peers? Wouldn't this lead to any problems?
You don't say what problems you think this will lead to. I can't think of any.
I think it is definitely more straightforward to use the fee amount to reward the miner, but this doesn't resolve my doubts at all. Since the outputs of a transaction are signed upon creation, how could the peer initiating the transaction know upfront which miner will find the block? And if it can't know that, how could the miner possibly "extract" its reward without tampering with the transaction itself?
The simplest solution to this is for the transaction itself to just contain its inputs and outputs. The fee is the difference between the total inputs and the total outputs.
The miner just includes the transaction in the block of transactions they mine. They also add one additional transaction into the block, sending themselves the reward. Obviously, they know their own destination address. Every participant who receives the newly-mined block checks to make sure this transaction is valid (just as they check every other one) and doesn't claim a larger reward than is allowed.
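A minimal sketch of that rule in Python (a toy model, not Bitcoin's actual serialization; the field names are made up):

    def tx_fee(tx):
        """Fee = total inputs minus total outputs; it is implicit, so the
        sender never needs to know which miner will collect it."""
        return (sum(i["amount"] for i in tx["inputs"])
                - sum(o["amount"] for o in tx["outputs"]))

    def validate_reward(reward_tx, block_txs, subsidy):
        """Everyone checks the miner claimed no more than subsidy + fees."""
        claimed = sum(o["amount"] for o in reward_tx["outputs"])
        return claimed <= subsidy + sum(tx_fee(t) for t in block_txs)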
And if the answer is not to generate a new wallet each time, where could you possibly store that very first private key of the chain's wallet so that no one can see it, but you can still use it, without having to put a server in the middle?
Typically in a file on the local disk. The private key isn't really needed anyway -- you only need it to send. You don't need it to mine or report. So it can be prompted for or decrypted only when actually needed.
Of course it could create a new transaction and add that to the block. But who would sign that transaction? Where would those reward tokens come from?
The usual rule is that the reward transaction has no inputs, one output, and no signature. The tokens come from the pool of unclaimed miner reward tokens which can be finite or infinite depending on the blockchain's design. (For bitcoin, this pool is finite.)
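Continuing the toy model above, the reward transaction and its check might look like this (names are illustrative, not from any real implementation):

    def make_reward_tx(miner_address, subsidy, total_fees):
        return {
            "inputs": [],                    # nothing is spent...
            "outputs": [{"address": miner_address,
                         "amount": subsidy + total_fees}],
            "signature": None,               # ...so there is nothing to authorize
        }

    def is_valid_reward(tx, subsidy, total_fees):
        return (tx["inputs"] == []
                and len(tx["outputs"]) == 1
                and tx["outputs"][0]["amount"] <= subsidy + total_fees)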
How does one program a Cryptocurrency Miner?
Like
XMRig
XMR-Stak
MinerGate
etc.
You would first need an understanding of the concept of PoW. Simply put, PoW is hashcash: a miner hashes the block they have created, incrementing a "nonce" (number used once) until the resulting hash fulfills the difficulty requirement. The difficulty is a number recalculated from the time between blocks over (in Bitcoin's case) roughly the last two weeks; it adjusts to keep blocks being produced every 10 minutes (ish). For a block to be accepted, its hash must be under the target value derived from the difficulty (and the block must be valid, of course).
Solo mining software works by polling the coin's daemon for the block template (in some cases this already contains the highest-fee transactions; in others you have to add them yourself), creating a "coinbase" transaction (a transaction which will pay you the reward once you find a valid block; this is prepended to the list of transactions), updating the merkle root of the transactions to include the new coinbase transaction, and adding a nonce. You then hash this block, check whether the hash fulfills the difficulty, and if it doesn't, increment the nonce. The miner keeps doing this until:
1) The miner finds a block - in which case it sends the block to the daemon
2) A block is found by someone else, in which case the miner starts again (fetching the new block template and so on).
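As a rough illustration, here is a toy version of that solo-mining loop in Python. The double-SHA256 and the hex-prefix check stand in for the real comparison of the hash against a 256-bit target, and the header bytes are made up:

    import hashlib

    def mine(header: bytes, prefix: str = "0000"):
        nonce = 0
        while True:
            digest = hashlib.sha256(
                hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
            ).hexdigest()
            if digest.startswith(prefix):  # stand-in for "hash under target"
                return nonce, digest
            nonce += 1                     # didn't meet the target: try the next nonce

    print(mine(b"example header bytes"))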
However, most miners are pool miners. In this case the miner connects to a pool via the stratum+tcp protocol and requests a "job"; a job is just a string the pool wants you to hash. The pool does the job of creating the block to be hashed, then splits the task of hashing it across all the connected miners. For example, the pool might tell Alice to hash the block with nonces 0 up to 15,000 and Bob to hash with nonces from 15,001 to 30,000, and so on. Each pool miner then submits the result of its work. Once a miner finds a solution, it tells the pool; the pool sends the block to the pool's daemon and tells the other miners to stop and start work on the new block. It then splits the reward among the miners based on how many jobs they completed (though the way in which this is done is out of the scope of this answer).
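A sketch of that nonce-range splitting (illustrative only; real pools hand out stratum jobs with extranonce ranges rather than plain tuples):

    def assign_jobs(miners, range_size=15_000):
        """Give each miner a disjoint half-open [start, end) nonce range."""
        return {m: (i * range_size, (i + 1) * range_size)
                for i, m in enumerate(miners)}

    print(assign_jobs(["alice", "bob"]))
    # {'alice': (0, 15000), 'bob': (15000, 30000)}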
TLDR;
You need an understanding of how PoW works and of which method you want to mine with (solo or pool): if pool, you'll need to understand the stratum+tcp protocol, and if solo, you will need to understand the RPC of the coin you want to make a miner for. I would start by reading through basic, simple solo miners, and then build one of your own. Then you can consider moving on to pool miners, which are considerably more complicated. If you want your miner to work with GPUs (and most miners do), then you will need to understand common GPU interfaces for both NVIDIA (e.g. CUDA) and AMD.
I hope this helps and the best of luck and wishes regarding your adventure into the cryptoverse!
Leo Cornelius
Well, I have some questions regarding the UTXO model:
1) How is it decided how many transactions a block will contain? Are these transactions related in any way?
2) Where are the details of the sender and recipient of a transaction stored? If they are not stored, how is it decided where to transfer bitcoins?
1) Miners will usually fill the next block with as many of the highest-paying (by fee rate, satoshis per virtual byte) valid (not already spent and passing validation checks) transactions as they can. That way they maximize the transaction fees they are paid if they win the block reward. There is a limit to how many bytes a block can contain: it is derived from a maximum block weight of 4M weight units (about 1M virtual bytes), see Weight Units, and is theoretically slightly less than 4MB.
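A hedged sketch of that selection policy in Python (real miners also handle parent/child transaction dependencies and build on the daemon's block template; the field names here are made up):

    MAX_BLOCK_WEIGHT = 4_000_000  # weight units, ~1M virtual bytes

    def select_transactions(mempool):
        """mempool: list of dicts with 'fee' (satoshis) and 'weight' (WU)."""
        by_rate = sorted(mempool, key=lambda t: t["fee"] / t["weight"], reverse=True)
        block, used = [], 0
        for tx in by_rate:
            if used + tx["weight"] <= MAX_BLOCK_WEIGHT:
                block.append(tx)
                used += tx["weight"]
        return block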
2) They are stored in the transactions, which are stored in the blocks. The only "details" stored are the scripts: for the sender, the input script (the previous output's scriptPubKey plus the scriptSig), and for the receiver, the output script (the scriptPubKey). See Transaction for more detail.
1> Transactions are broadcast by anyone in the system and at random intervals. Which transactions, of all the ones broadcast, are included is very dependent on the miner, as they are the one who groups them up and includes them in the block. As Nate noted below, there is also a 1MB block size limit, which limits how many transactions can be included in a block. This limit exists to prevent huge blocks that clog the network, and it may be removed if the number of transactions in the network ever grows such that the limit becomes a serious factor.
2> Sender and recipient details are stored in the blockchain's blocks. Transaction data includes the scripts used to spend the cryptocurrency amounts listed in the transaction data. The most common of these scripts specify what is commonly called an "address", which is derived from a public key and is nowadays usually unique to a transaction. It is designed to be difficult or impossible to identify a sender or recipient from these "addresses".
If this information were not stored, the transaction could not happen.
In an attempt to use DynamoDB for one of my projects, I have a question regarding DynamoDB's strong consistency model. From the FAQs:
Strongly Consistent Reads — in addition to eventual consistency, Amazon DynamoDB also gives you the
flexibility and control to request a strongly consistent read if your application, or an element of your application, requires it. A strongly consistent read returns a result that reflects all writes that received a successful response prior to the read.
From the definition above, what I get is that a strongly consistent read will return the latest written value.
Taking an example: let's say Client1 issues a write command on key K1 to update its value from V0 to V1. A few milliseconds later, Client2 issues a read command for key K1. With strong consistency, V1 will always be returned; with eventual consistency, either V1 or V0 may be returned. Is my understanding correct?
If it is: what if the write operation returned success but the data has not yet been propagated to all replicas, and we then issue a strongly consistent read? How does it ensure the latest written value is returned in this case?
The following link,
AWS DynamoDB read after write consistency - how does it work theoretically?, tries to explain the architecture behind this, but I don't know whether this is how it actually works. The next question that comes to my mind after going through that link is: is DynamoDB based on a single-master, multiple-slave architecture, where writes and strongly consistent reads go through the master replica and normal reads go through the others?
Short answer: Writing successfully in strongly consistent mode requires that your write succeed on a majority of servers that can contain the record, therefore any future consistent reads will always see the same data, because a consistent read must read a majority of the servers that can contain the desired record. If you do not perform a strongly consistent read, the system will ask a random server for the record, and it is possible that the data will not be up-to-date.
Imagine three servers. Server 1, server 2 and server 3. To write a strongly consistent record, you pick two servers at minimum, and write the data. Let's pick 1 and 2.
Now you want to read the data consistently. Pick a majority of servers. Let's say we picked 2 and 3.
Server 2 has the new data, and this is what the system returns.
Eventually consistent reads could come from server 1, 2, or 3. This means that if server 3 is chosen at random, your new write will not appear there yet, until replication occurs.
If a single server fails, your data is still safe, but if two out of three servers fail your new write may be lost until the offline servers are restored.
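Here is a toy simulation of that three-server example in Python (illustrative only; this is not DynamoDB's actual implementation, and real systems also version values to pick the newest):

    import random

    servers = [{}, {}, {}]  # three replicas, each mapping key -> value

    def quorum_write(key, value):
        for s in random.sample(servers, 2):   # write to a majority (2 of 3)
            s[key] = value

    def consistent_read(key):
        # Any two majorities of three servers overlap in at least one server,
        # so a majority read always sees the latest majority write.
        values = [s.get(key) for s in random.sample(servers, 2)]
        return next((v for v in values if v is not None), None)

    def eventual_read(key):
        return random.choice(servers).get(key)  # may hit the stale replica

    quorum_write("K1", "V1")
    print(consistent_read("K1"))  # always V1
    print(eventual_read("K1"))    # V1, or None if the stale replica is chosen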
More explanation:
DynamoDB (assuming it is similar to the database described in the Dynamo paper that Amazon released) uses a ring topology, where data is spread to many servers. Strong consistency is guaranteed because you directly query all relevant servers and get the current data from them. There is no master in the ring, there are no slaves in the ring. A given record will map to a number of identical hosts in the ring, and all of those servers will contain that record. There is no slave that could lag behind, and there is no master that can fail.
Feel free to read any of the many papers on the topic. A similar database called Apache Cassandra is available which also uses ring replication.
http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/decandia07dynamo.pdf
Disclaimer: the following cannot be verified from the public DynamoDB documentation, but it is probably very close to the truth.
Starting from the theory, DynamoDB makes use of quorums, where V is the total number of replica nodes, Vr is the number of replica nodes a read operation consults, and Vw is the number of replica nodes to which each write is applied. Two quorum laws matter here: Vr + Vw > V (every read quorum overlaps every write quorum, so reads see the latest write) and Vw > V/2 (no two writes can both reach a quorum, so writes do not conflict). The read quorum (Vr) can be leveraged to make sure the client gets the latest value, while the write quorum (Vw) can be leveraged to make sure that writes do not create conflicts.
Based on the fact that there are no write conflicts in DynamoDB (these would have to be reconciled by the client, and would thus be exposed in the API), we can conclude that DynamoDB uses a Vw that respects the second law (Vw > V/2), probably just V/2 + 1 to reduce write latency.
Now, regarding read quorums, DynamoDB provides two different kinds of read. A strongly consistent read uses a read quorum that respects the first law (Vr + Vw > V), probably the smallest such value given the write quorum above. However, an eventually consistent read can consult just a single random replica (Vr = 1), thus being much quicker but giving zero guarantee of consistency.
Note: there's a possibility that the write quorum used does not respect the second law (Vw > V/2), but that would mean DynamoDB resolves such conflicts automatically (e.g. by selecting the latest write based on local time) without reconciliation by the client. I believe this is really unlikely to be true, since there is no such reference in the DynamoDB documentation. Even in that case, though, the rest of the reasoning stays the same.
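The two quorum laws are easy to check mechanically; a small sketch, with V, Vr, and Vw as defined above:

    def reads_see_latest_write(V, Vr, Vw):
        return Vr + Vw > V   # first law: read and write quorums must overlap

    def writes_cannot_conflict(V, Vw):
        return Vw > V / 2    # second law: two writes cannot both get a majority

    V = 3
    Vw = V // 2 + 1          # = 2, the smallest majority
    Vr = V - Vw + 1          # = 2, the smallest read quorum satisfying the first law
    assert reads_see_latest_write(V, Vr, Vw) and writes_cannot_conflict(V, Vw)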
You can find the answer to your question here: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/APISummary.html
When you issue a strongly consistent read request, Amazon DynamoDB returns a response with the most up-to-date data that reflects updates by all prior related write operations to which Amazon DynamoDB returned a successful response.
In your example, if the UpdateItem request to change the value from V0 to V1 was successful, a subsequent strongly consistent read request will return V1.
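For completeness, this is what the pattern looks like with boto3 (the table, key, and attribute names are made up); GetItem's ConsistentRead parameter is what requests the strongly consistent read:

    import boto3

    table = boto3.resource("dynamodb").Table("MyTable")  # placeholder table name

    # Client1's write: returns only once DynamoDB has durably accepted it.
    table.update_item(
        Key={"pk": "K1"},
        UpdateExpression="SET #v = :v",
        ExpressionAttributeNames={"#v": "value"},  # "value" is a reserved word
        ExpressionAttributeValues={":v": "V1"},
    )

    # Client2's read: ConsistentRead=True reflects all prior successful writes.
    item = table.get_item(Key={"pk": "K1"}, ConsistentRead=True)["Item"]
    print(item["value"])  # V1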
Hope this helps.