Are there any statistics on the number of successful transactions compared to the number of transactions sent to the Ethereum main network?
In my own experience of sending batches of transactions (via Go), I have seen that roughly 20% of transactions that are initially pending on the Ethereum network never become successful (the same happens when I use Ethereum Wallet).
I want to know whether I should modify my code (e.g. the gas price) or whether this success ratio is normal for Ethereum transactions.
You can write a simple application using the Etherscan API, but remember that it enforces a rate limit on how many requests you can make in a given span of time.
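A minimal sketch of that idea. The `txlist` action and the `isError` field come from Etherscan's documented account API, but the API key and address are placeholders, and note a caveat: `txlist` only returns mined transactions, so a reverted transaction shows up with `isError == "1"`, while a pending transaction that is never mined will not appear at all.

```python
# Sketch: estimate the success ratio of your own transactions via the
# Etherscan API. API_KEY/address are placeholders; verify the endpoint
# and fields against the current Etherscan docs before relying on them.
import json
import urllib.request

API = "https://api.etherscan.io/api"

def fetch_txs(address: str, api_key: str) -> list[dict]:
    """Fetch the mined-transaction list for an address (network call)."""
    url = (f"{API}?module=account&action=txlist"
           f"&address={address}&sort=desc&apikey={api_key}")
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["result"]

def success_ratio(txs: list[dict]) -> float:
    """Share of mined transactions with isError == '0' (not reverted)."""
    if not txs:
        return 0.0
    ok = sum(1 for tx in txs if tx.get("isError") == "0")
    return ok / len(txs)

# Offline example with the shape Etherscan returns:
sample = [{"isError": "0"}, {"isError": "0"}, {"isError": "1"}]
print(success_ratio(sample))  # 2 of 3 mined transactions succeeded
```

To account for transactions that never got mined at all, you would additionally compare this list against the hashes you submitted yourself.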
I am working on the TRC20 network, using it to monitor a list of 1000+ addresses to check whether any USDT is being sent in.
I tried two methods, but I am not sure which is the best approach.
Method 1: the .watch() function from the official documentation
https://developers.tron.network/reference/methodwatch
I am able to get a lot of transaction information, but when I tested it with my own transactions it was really flaky: I could not catch my own transactions reliably.
Method 2: calling the API to get all transactions of a wallet address.
https://developers.tron.network/reference/get-transaction-info-by-account-address
The issue with this is that I have to poll 1000+ addresses every x minutes (and the list will grow in the future).
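A sketch of Method 2 as a batched poller. The TronGrid endpoint path and the USDT contract address below are assumptions taken from TronGrid's public docs; verify both before relying on them. Only the filtering logic runs offline here.

```python
# Poll one address for incoming USDT TRC20 transfers via TronGrid.
# Endpoint path and contract address are assumptions -- verify them.
import json
import urllib.request

USDT = "TR7NHqjeKQxGTCi8q8ZY4pL8otSzgjLj6t"  # USDT TRC20 contract (verify)

def transfers_url(address: str, min_ts: int) -> str:
    return (f"https://api.trongrid.io/v1/accounts/{address}"
            f"/transactions/trc20?contract_address={USDT}"
            f"&only_to=true&min_timestamp={min_ts}")

def incoming_usdt(records: list[dict], address: str) -> list[dict]:
    """Keep only USDT transfers *into* the given address."""
    return [r for r in records
            if r.get("to") == address
            and r.get("token_info", {}).get("address") == USDT]

def poll(address: str, min_ts: int) -> list[dict]:
    """One network poll for one address (call per address, per interval)."""
    with urllib.request.urlopen(transfers_url(address, min_ts)) as resp:
        return incoming_usdt(json.load(resp).get("data", []), address)
```

For 1000+ addresses, stagger the polls across the interval and remember the last-seen timestamp per address, so each request only asks for new transfers.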
I would like to ask if anyone could share how platforms monitor their addresses. I joined the Tron developer community, but it is full of scammers.
I have a client with a fairly popular ticket-selling service, to the point that the microservice-based backend is struggling to keep up, and I need to come up with a solution to optimize and load-balance the system. The infrastructure works through a series of interconnected microservices.
When a user enters the sales channels (mobile or web app), the request is directed to an AWS API Gateway, which orchestrates communication with the microservice in charge of obtaining the requested resources.
These resources are provided by a third-party API.
This third party has physical servers in each venue that synchronize information between the POS systems and the digital sales channels.
We have a Redis instance that caches the requests we make to the third-party API; we cache each endpoint with a TTL relative to how frequently its information is updated.
Here is some background info:
We get traffic mostly from two major countries.
On a normal day, about 100 thousand users use the service, with a 70/30 traffic split between the two countries.
On important days, each country has different opening hours (country A starts sales at 10 am UTC, but country B starts at 5 pm UTC); on these days the traffic increases several times over.
We have a main middleware through which all client requests are processed.
We have a Redis cache database that stores GET responses with a different TTL for each endpoint.
We have a middleware that decides whether to serve a request from the cache or to call the third party's API, as the case may be.
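The cache-or-API decision described above can be sketched as a cache-aside lookup with a per-endpoint TTL table. The in-memory dict stands in for Redis, and the endpoint names and TTL values are invented for illustration.

```python
# Minimal cache-aside sketch of the middleware's decision, with a
# per-endpoint TTL table. A dict stands in for Redis here.
import time

TTL_BY_ENDPOINT = {"/events": 60, "/seats": 5}  # seconds (illustrative)
_cache: dict[str, tuple[float, object]] = {}    # key -> (expiry, value)

def get(endpoint: str, fetch_upstream) -> object:
    """Serve from cache if still fresh, otherwise fetch and cache."""
    now = time.time()
    hit = _cache.get(endpoint)
    if hit and hit[0] > now:
        return hit[1]                       # fresh cache hit
    value = fetch_upstream(endpoint)        # call the third-party API
    ttl = TTL_BY_ENDPOINT.get(endpoint, 30)
    _cache[endpoint] = (now + ttl, value)
    return value
```

With real Redis you would get the expiry for free via `SETEX`/`EXPIRE`, but the decision logic is the same.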
And these are the complaints I have gotten that need to be dealt with:
When one country receives a high volume of requests, the country with less traffic is negatively affected: clients do not respond, or respond only partially, because the computation layer's limit was exceeded, and users have a bad experience.
Every time this happens, the computation layer must be scaled up manually from the infrastructure side.
Each request has a different response time: stadiums respond in roughly 40 seconds and movie theaters in 3 seconds. These requests enter a queue and are answered in order of arrival.
Error handling is not clear. The errors are mixed together, and you cannot tell which country the errors come from or how many there are.
Responses from the third-party API are not cached correctly, since error responses are stored for the duration of the TTL.
I was thinking of a few things I could suggest:
Adding instrumentation of requests using AWS X-Ray
Adding a separate structure in the Redis cache layer so errors don't overwrite good data (old data is better than no data for the end user)
Adding AWS Elastic Load Balancing for the main middleware
But I'm not sure how realistic it would be to implement these three things, or whether they would even solve the problem; I don't have much experience optimizing this type of backend. I would appreciate any suggestions, recommendations, links, documentation, etc. I'm really desperate for a solution to this problem.
A few thoughts:
When one country receives a high volume of requests, the country with less traffic is negatively affected: clients do not respond, or respond only partially, because the computation layer's limit was exceeded, and users have a bad experience.
A common approach in AWS is to regionalize the stack; assuming you are using CDK/CloudFormation, creating a regionalized stack should be a straightforward task.
But it is an open question whether this will solve the problem. Your system suffers from availability issues, and regionalization will only isolate the problem within each region. So we should be able to do better (see below).
Every time this happens, the computation layer must be scaled up manually from the infrastructure side.
AWS can automatically scale the computation layer up and down based on traffic patterns. This is a neat feature, provided you set limits to make sure you are not overcharged.
Each request has a different response time: stadiums respond in roughly 40 seconds and movie theaters in 3 seconds. These requests enter a queue and are answered in order of arrival.
It seems the large variance comes from having to contact the servers at the venues. I recommend decoupling that activity: calls to venues should be made asynchronously. There are several ways to do that; queues with consumer push/pull are the usual approaches (please comment if more details are needed, but this is a standard problem with plenty of material online).
Error handling is not clear. The errors are mixed together, and you cannot tell which country the errors come from or how many there are.
That's a code fix, assuming you send logs to CloudWatch (do you?). You could attach the country as context to every request, via a filter or something similar, so that when an error is logged the context is logged with it. You probably need the venue ID even more than the country, since the country can be derived from the venue ID.
Responses from the third-party API are not cached correctly, since error responses are stored for the duration of the TTL.
Don't store errors in the cache, and add a circuit-breaker pattern.
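A sketch of both ideas together: cache only successful responses, and trip a simple circuit breaker after repeated upstream failures so callers fail fast instead of hammering the third party. Thresholds and names are illustrative, not a production implementation.

```python
# Cache successes only + a minimal circuit breaker.
import time

class CircuitBreaker:
    def __init__(self, threshold: int = 3, cooldown: float = 30.0):
        self.threshold, self.cooldown = threshold, cooldown
        self.failures, self.opened_at = 0, 0.0

    def allow(self) -> bool:
        if self.failures < self.threshold:
            return True
        # Open: only allow a probe request after the cooldown (half-open).
        return time.time() - self.opened_at > self.cooldown

    def record(self, ok: bool) -> None:
        if ok:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.time()

cache: dict[str, object] = {}
breaker = CircuitBreaker()

def fetch(endpoint: str, upstream) -> object:
    if endpoint in cache:
        return cache[endpoint]
    if not breaker.allow():
        raise RuntimeError("circuit open: upstream unhealthy")
    try:
        value = upstream(endpoint)
    except Exception:
        breaker.record(ok=False)   # do NOT cache the error
        raise
    breaker.record(ok=True)
    cache[endpoint] = value        # cache successes only
    return value
```

Combined with a "stale data" copy kept under a longer TTL, an open circuit can serve old data instead of an error.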
I am new to blockchain technology and have a basic question.
I understand that in any blockchain network, if a node tries to commit something that is not in sync with the other nodes, it gets rejected. Then how is a new transaction committed and validated? Who has the authority to do it?
That's the thing about blockchain: there is no authority that determines which block gets added to the chain. (By blockchain, I mean a public blockchain.)
Blockchains are typically either public or permissioned.
Public
Public blockchains, such as Bitcoin and Ethereum, work on the principle of proof of work. In layman's terms, if a participant wants their transaction to be processed, i.e. added to the chain, they submit it to the network. The transaction is then processed by independent entities called miners, who have to solve a computational puzzle in order to produce a valid block; if the block is accepted, the miner is compensated in the network's digital currency for the work put in. Also, the longest chain is always accepted as the valid chain.
There is no criterion or organization that oversees mining, meaning anybody can become a miner and start contributing. So the network is for the people, by the people: anybody can join and both submit and process transactions.
If the transaction is valid (that is, you own the coins and are not double-spending them), it will be processed by a miner. And if the block produced by the miner is accepted, so is your transaction.
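The "computational puzzle" above can be sketched as a toy proof of work: find a nonce so the block's hash starts with a number of zero hex digits. Real Bitcoin compares the hash against a 256-bit target and hashes an actual block header; this is only illustrative.

```python
# Toy proof-of-work: search for a nonce giving a hash with leading zeros.
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Return the first nonce whose hash has `difficulty` leading zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("alice->bob:1BTC", difficulty=4)
print(nonce, digest)  # finding the nonce took many hashes; verifying takes one
```

The asymmetry is the point: anyone can verify the block with a single hash, but producing it required real work, which is what makes rewriting history expensive.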
Private/Permissioned
On the other hand, in private/permissioned blockchains such as Hyperledger Fabric, participation and block processing are decided by one or more organizations. In this case, a block is processed only if it is produced by a valid member and endorsed by nodes of all participating organizations.
From your phrase "if any node tries to commit something which is not in sync with other nodes", I take it you are asking about a block that one node produces but the blockchain rejects. This happens when two nodes both try to find the proof of work: one finds it first and broadcasts it to the network, but due to network delay (among other possible reasons) the other node does not receive the block in time. This is how stale/uncle blocks are created. The Bitcoin blockchain considers the longest chain valid and discards the others.
I am new to blockchain and have a question about its transaction-verification process. I have read some articles about Bitcoin which say it takes about 10 minutes to verify a Bitcoin transaction. Does this mean that if I purchase a bitcoin I have to wait 10 minutes for the transaction? If so, how can I handle real-time system requirements on the blockchain?
They said it takes about 10 minutes to verify a bitcoin transaction. Does this mean if I purchase a bitcoin I have to wait 10 minutes for this transaction?
Yes, if your transaction was lucky enough (or paid a high enough fee) to make it into the next block. Since Bitcoin's 10-minute blocks hold a finite number of transactions, you might need to wait longer than that if your transaction wasn't included in the upcoming block.
On top of that, once your transaction is on the blockchain, you should ideally wait until several more blocks are mined before you're confident that your transaction got included in the globally-accepted branch (the longest branch).
So, to be safe, if you decide to wait for 6 blocks after your transaction is included, you'd end up waiting roughly an additional hour before you can be relatively confident that your transaction has been accepted.
If yes, how can I handle the real-time system requirement in the blockchain?
That's one challenge that Bitcoin hasn't overcome yet. The closest work-in-progress solution is the Lightning Network, but that's still in development.
In the meantime, some applications might decide to process real-time transactions off-chain by accepting transactions before they're included in the blockchain. But this is a risky trade-off; you'd allow fast transactions at the cost of security, opening yourself up to double-spending.
I have been reading the documentation on how the Hyperledger Fabric project implements an open-source blockchain solution: https://github.com/hyperledger/fabric/blob/master/docs/protocol-spec.md
I see that the PBFT consensus algorithm is used, but I do not understand how blocks are mined and shared among all validating peers in the blockchain network.
Hyperledger Validating Peers (VPs) do not mine blocks and do not share the blocks between them. Here is how it works:
A transaction is sent to one trusted VP.
The VP broadcasts the transaction to all other VPs.
All VPs reach consensus (using PBFT algorithm) on the order to follow to execute the transactions.
All VPs execute the transactions "on their own", following the total order, and build a block (mainly by calculating hashes) from the executed transactions.
All the blocks will be the same because transaction execution is deterministic (or should be) and the number of transactions in a block is fixed.
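The last point can be shown concretely: given the same agreed transaction order and the same previous block hash, deterministic execution plus hashing yields an identical block on every peer. This is a generic hash-chain sketch, not Fabric's actual block format.

```python
# Why every VP builds the same block: same order + same input -> same hash.
import hashlib
import json

def build_block(prev_hash: str, ordered_txs: list[dict]) -> dict:
    # Canonical serialization so every peer hashes identical bytes.
    payload = json.dumps(ordered_txs, sort_keys=True)
    block_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"prev": prev_hash, "txs": ordered_txs, "hash": block_hash}

txs = [{"id": 1, "op": "set", "key": "a", "value": 10}]
peer1 = build_block("00" * 32, txs)
peer2 = build_block("00" * 32, txs)
assert peer1["hash"] == peer2["hash"]  # independent peers, identical block
```

This is also why consensus only needs to agree on the *order* of transactions: the block contents follow deterministically.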
According to Hyperledger Fabric 1.x:
The user, through the client SDK, sends a transaction proposal to the endorsing peers.
Each endorsing peer checks the transaction, produces an endorsement of the proposal (with a read/write set: previous value/changed value), and sends it back to the client SDK.
The client SDK waits for all endorsements; once it has every endorsement proposal, it builds one invocation request and sends it to the orderer.
The orderer verifies the invocation request sent by the client SDK by checking the defined policies (consensus), verifies the transaction, and adds it to a block.
According to the configuration defined for blocks, after a specified time or number of transactions, it forms the block hash from the transaction hashes, metadata, and the previous block hash.
The blocks of transactions are "delivered" to all peers on the channel by the orderer.
All committing peers verify the endorsement policy and ensure that there have been no changes to ledger state for the read-set variables since the read set was generated by transaction execution. After this, all transactions in the block are validated, and the ledger is updated with the new block and the current state of the assets.
The ledger contains:
1) A current-state database (LevelDB or CouchDB)
2) The blockchain (files of linked blocks)
Read the transaction-flow documentation of Hyperledger Fabric for more detail.
Hyperledger is an umbrella of blockchain technologies; Hyperledger Fabric, mentioned above, is one of them. Hyperledger Sawtooth also does not use mining, and adds these consensus algorithms:
PoET (Proof of Elapsed Time): an optional Nakamoto-style consensus algorithm used for Sawtooth. PoET with SGX is BFT; PoET Simulator is CFT. It is not CPU-intensive like PoW-style algorithms, although it can still fork and produce stale blocks. See the PoET specification at https://sawtooth.hyperledger.org/docs/core/releases/latest/architecture/poet.html
Raft: a consensus algorithm that elects a leader for a term of arbitrary length, replacing the leader if it times out. Raft is faster than PoET but is not BFT (Raft is CFT). Raft also does not fork.
With pluggable consensus, the consensus algorithm can be changed without reinitializing the blockchain or even restarting the software.
For completeness, the original consensus algorithm used by Bitcoin (which does use mining) is:
PoW (Proof of Work): completing CPU-intensive work (a Nakamoto-style consensus algorithm). Usually used in permissionless blockchains.