How does Ordering Nodes Synchronization work?

How can a newly added orderer download the ledger, given that ordering nodes are not connected to each other and Kafka keeps messages for only 7 days?
Also, if I shut down an orderer node for more than 7 days and then bring it back up, it will not find the transactions from those 7 days in the Kafka partition, so how will it sync and update its local ledger?

In 1.0, Kafka brokers are to be set with log.retention.ms = -1 (source: documentation, Step 4e).
This disables time-based retention and prevents segments from expiring. This means that:
A partition hosts the entire transaction history of the channel.
A new orderer service node (OSN) can be added at any point in time and use the Kafka brokers to sync up with all channels in their entirety.
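For reference, the corresponding broker-side entries in server.properties would look like this (a minimal sketch; log.retention.bytes is shown too, because a size-based limit, if one were set, would still expire segments):

# server.properties on each Kafka broker backing the ordering service
log.retention.ms=-1
# defaults to -1 (unlimited); only relevant if a size limit was set elsewhere
log.retention.bytes=-1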
In a minor release within the 1.x track we will support ledger pruning for the OSNs. This means that the brokers will only need to maintain a pruned sequence of the transaction history (that will always start from a configuration block), and any new OSN will only be able to sync back to that configuration block.

Related

Why is AWS MSK Kafka broker constantly disconnecting and reconnecting the consumer group

I have AWS MSK Kafka cluster with 2 brokers. From the logs I can see (on each broker) that they are constantly rebalancing. Every minute I can see in logs:
Preparing to rebalance group amazon.msk.canary.group.broker-1 in state PreparingRebalance with old generation 350887 (__consumer_offsets-21) (reason: Adding new member consumer-amazon.msk.canary.group.broker-1-27058-8aad596f-b00d-428a-abaa-f3a28d714f89 with group instance id None) (kafka.coordinator.group.GroupCoordinator)
And 25 seconds later:
Preparing to rebalance group amazon.msk.canary.group.broker-1 in state PreparingRebalance with old generation 350888 (__consumer_offsets-21) (reason: removing member consumer-amazon.msk.canary.group.broker-1-27058-8aad596f-b00d-428a-abaa-f3a28d714f89 on LeaveGroup) (kafka.coordinator.group.GroupCoordinator)
Why does this happen? What is causing it? And what is the amazon.msk.canary.group.broker-1 consumer group?
Could it be something with the configuration of Java's garbage collection on the brokers? I remember reading that a misconfigured garbage collector can cause the broker to pause for a few seconds and lose connectivity to ZooKeeper, hence the flapping behavior. Could you check whether you are applying any custom garbage-collection configuration (e.g. via the KAFKA_JVM_PERFORMANCE_OPTS environment variable)?
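For reference, a plain Apache Kafka broker starts with roughly these G1 settings from bin/kafka-run-class.sh (a baseline sketch, not MSK-specific advice; on MSK the JVM is managed by AWS, so what you are looking for is any custom override of this variable):

export KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent"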

CHAINLINK NODE - Your node is overloaded and may start missing jobs ERROR

Running a test node in GCP, using Docker 9.9.4, Ubuntu, a Postgres db, and Infura. I had issues with public/private IPs, but once I cleared that up my node was up and running. I am now getting the error below repeatedly, potentially due to the blockchain connection. How do I fix this?
[ERROR] HeadTracker: dropping head 26085153 with hash 0xf50e19099b7e343829935d70dd7d86c5bc0398286b7a4e4f32ac033ac60c3733 because queue is full. WARNING: Your node is overloaded and may start missing jobs. logger/default.go:155 stacktrace=github.com/smartcontractkit/chainlink/core/logger.Errorf
This log output is related to an overload of your blockchain connection.
This notification usually comes up when using public websocket connections and/or a free tier of a third-party NaaS provider. To fix this connection issue you can either run your own full node or change the tier of the third-party NaaS provider. It is also recommended to use Chainlink version 0.10.8 or higher, as the HeadTracker has been revised there and performs more efficiently.
In regard to the question, let me give you a small technical overview, which may clarify the load a Chainlink node puts on its remote full node:
Your Chainlink node establishes a connection to a full node. There the Chainlink node initiates various subscriptions, which are a special feature of the websocket protocol enabling bidirectional communication. More precisely, this means that the Chainlink node is informed if a certain "state" of a subscription changes. The node interacts with the full node using JSON-RPC methods, and uses the following methods to initiate and process various functions internally:
eth_getBlockByNumber, eth_getBalance, eth_getTransactionReceipt, eth_getTransactionCount, eth_getLogs, eth_subscribe, eth_unsubscribe, eth_sendRawTransaction and eth_call
https://ethereum.org/uk/developers/docs/apis/json-rpc/
The bulk of the Chainlink node's requests are executed during the syncing process via the internal HeadTracker service. This service initiates a "head" subscription in order to process every single incoming new block header.
During this syncing process it uses the JSON-RPC methods eth_getBlockByNumber and eth_getBalance to get all the necessary information from the block, so these two methods are executed once per block. The number of requests therefore depends on the average block time of the network the Chainlink node is connected to.
An example would be the Kovan testnet:
The average block time there is 6.7 s, which means you get a daily request count of approx. 21,000.
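For illustration, here is a minimal go-ethereum sketch of that per-block pattern (the websocket URL and the watched address are placeholders; this mirrors what the HeadTracker does, it is not Chainlink's actual code):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

func main() {
	// Placeholder endpoint; any full-node WSS URL works here.
	client, err := ethclient.Dial("wss://kovan.example.org/ws")
	if err != nil {
		log.Fatal(err)
	}

	// eth_subscribe("newHeads") under the hood: one notification per block header.
	heads := make(chan *types.Header)
	sub, err := client.SubscribeNewHead(context.Background(), heads)
	if err != nil {
		log.Fatal(err)
	}

	nodeKey := common.HexToAddress("0x0000000000000000000000000000000000000000") // placeholder
	for {
		select {
		case err := <-sub.Err():
			log.Fatal(err)
		case h := <-heads:
			// The two per-block RPCs described above:
			// eth_getBlockByNumber and eth_getBalance.
			block, err := client.BlockByNumber(context.Background(), h.Number)
			if err != nil {
				log.Println(err)
				continue
			}
			bal, err := client.BalanceAt(context.Background(), nodeKey, h.Number)
			if err != nil {
				log.Println(err)
				continue
			}
			fmt.Printf("block %v: %d txs, key balance %v\n", block.Number(), len(block.Transactions()), bal)
		}
	}
}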
When fulfilling job requests, the node additionally uses the following methods: eth_getTransactionReceipt, eth_sendRawTransaction, eth_getLogs, eth_subscribe, eth_unsubscribe, eth_getTransactionCount and eth_call, which increases the total number significantly depending on the number of job requests.
It should also be noted that especially with faster blockchains (e.g. Polygon) the websocket payload is very high, and you have to pay close attention to the quality of the full-node connection, as many full nodes cannot sustain such a high number of requests permanently.

Adding New Subscription policy in WSO2 API Manager and Applying on API

I've tried creating a new subscription throttling policy (10 req/min). I selected it while publishing an API, and also selected it in the Store while subscribing to the API. But it still allows more than 10 req/min.
Note: we are using 2 nodes in a cluster environment.
This might be because the throttling conditions are not synchronized between the two nodes. In that scenario each node serves 10 req/min on its own, for a total of 20 req/min.
To fix this, each node should publish its throttling events to both nodes:
Node 1 - publishes to Node 1 and Node 2
Node 2 - publishes to Node 2 and Node 1.
This way, both nodes have the throttle events, so the throttle decision is made correctly.
On each node, apply the following configuration:
<ThrottlingConfigurations>
    <EnableAdvanceThrottling>true</EnableAdvanceThrottling>
    <DataPublisher>
        <Enabled>true</Enabled>
        <Type>Binary</Type>
        <ReceiverUrlGroup>{tcp://node1_ip:9612, tcp://node2_ip:9612}</ReceiverUrlGroup>
        <!--ReceiverUrlGroup>tcp://${carbon.local.ip}:9612</ReceiverUrlGroup-->
        <AuthUrlGroup>{ssl://node1_ip:9712, ssl://node2_ip:9713}</AuthUrlGroup>
        <!--AuthUrlGroup>ssl://${carbon.local.ip}:9712</AuthUrlGroup-->
        <Username>${admin.username}</Username>
    </DataPublisher>
</ThrottlingConfigurations>
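Once both nodes publish to each other, the shared limit can be verified from any client. A minimal sketch in Go (the gateway URL, API context and access token are placeholders; with the 10 req/min policy enforced cluster-wide, requests beyond the tenth within a minute should be throttled out, typically with HTTP 429):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Placeholder gateway endpoint and token; substitute your own.
	req, err := http.NewRequest("GET", "https://gateway.example.com:8243/myapi/1.0/status", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer <access-token>")

	// Fire 12 requests inside one minute; the last two should be rejected.
	for i := 1; i <= 12; i++ {
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			fmt.Println(i, err)
			continue
		}
		resp.Body.Close()
		fmt.Println(i, resp.Status)
	}
}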

How Consensus is reached in Hyperledger

In Hyperledger Fabric consensus is achieved by orderers, but on what basis is each transaction ordered? How many orderers are present in a distributed network, and if there is more than one, how is it ensured that the ordering done by each orderer is identical?
on what basis each transaction is ordered
Each transaction is ordered by an ordering service node, which uses its internal, implementation-specific consensus mechanism.
Hyperledger Fabric v1.0 comes with 2 ordering service implementations:
Solo orderer -- mainly used for development and testing; has 1 node and simply batches the transactions into blocks
Kafka based orderer -- the orderer nodes send the transactions to a Kafka partition, from which all orderer nodes "pull" the transactions in the same order and then cut the same blocks. Each orderer sends a "time to cut a block" message into the partition, and when the first such message arrives, a block is cut by all ordering service nodes (see the sketch below)
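A toy Go sketch of that mechanism (hypothetical simplification: real orderers also cut on batch size and read from actual Kafka, but the point is that the same totally ordered input plus the same cut rule yields identical blocks on every OSN):

package main

import (
	"crypto/sha256"
	"fmt"
)

// msg is one entry on the shared Kafka partition: either a transaction
// or a "time to cut a block" (TTC) marker posted by some orderer.
type msg struct {
	ttc bool
	tx  string
}

// cutBlocks replays the totally ordered partition. The first TTC marker
// cuts a block; duplicate TTC markers for the same block are ignored.
func cutBlocks(partition []msg) [][32]byte {
	var blocks [][32]byte
	var pending []byte
	prev := [32]byte{} // genesis
	for _, m := range partition {
		if m.ttc {
			if len(pending) == 0 {
				continue // late duplicate TTC, block already cut
			}
			h := sha256.Sum256(append(prev[:], pending...))
			blocks = append(blocks, h)
			prev, pending = h, nil
			continue
		}
		pending = append(pending, m.tx...)
	}
	return blocks
}

func main() {
	partition := []msg{{tx: "tx1"}, {tx: "tx2"}, {ttc: true}, {ttc: true}, {tx: "tx3"}, {ttc: true}}
	// Two independent "orderers" consuming the same partition
	// cut byte-identical chains of blocks.
	a, b := cutBlocks(partition), cutBlocks(partition)
	fmt.Println(a[0] == b[0] && a[1] == b[1]) // true
}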
In the future there will probably be a Byzantine tolerant implementation of the ordering service based on the sBFT (simplified pBFT) algorithm.

Are blocks mined in HyperLedger Fabric?

I have been reading the documentation on how the Hyperledger Fabric project is implementing an open-source blockchain solution: https://github.com/hyperledger/fabric/blob/master/docs/protocol-spec.md
I have seen that the PBFT consensus algorithm is used, but I do not understand how blocks are mined and shared among all Validating Peers in the blockchain network.
Hyperledger Validating Peers (VPs) do not mine blocks and do not share the blocks between them. Here is how it works:
A transaction is sent to one trusted VP.
The VP broadcasts the transaction to all other VPs.
All VPs reach consensus (using the PBFT algorithm) on the order in which to execute the transactions.
All VPs execute the transactions "on their own", following the total order, and build a block (mainly by calculating hashes) with the executed transactions.
All the blocks will be the same because the transaction execution is deterministic (it should be) and the number of transactions in a block is fixed.
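A toy illustration of that last point (hypothetical key-value state machine; real VPs execute chaincode, but the determinism argument is identical): two replicas that apply the same transactions in the PBFT-agreed order end up with the same state hash.

package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
	"strings"
)

// apply executes a "set key=value" transaction against one replica's state.
func apply(state map[string]string, tx string) {
	kv := strings.SplitN(strings.TrimPrefix(tx, "set "), "=", 2)
	state[kv[0]] = kv[1]
}

// stateHash hashes the state deterministically (keys sorted first).
func stateHash(state map[string]string) [32]byte {
	keys := make([]string, 0, len(state))
	for k := range state {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var sb strings.Builder
	for _, k := range keys {
		fmt.Fprintf(&sb, "%s=%s;", k, state[k])
	}
	return sha256.Sum256([]byte(sb.String()))
}

func main() {
	order := []string{"set a=1", "set b=2", "set a=3"} // the PBFT-agreed total order
	r1, r2 := map[string]string{}, map[string]string{}
	for _, tx := range order {
		apply(r1, tx)
		apply(r2, tx)
	}
	fmt.Println(stateHash(r1) == stateHash(r2)) // true
}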
According to Hyperledger Fabric 1.x:
The user, through the client SDK, sends a transaction proposal to the endorsing peers.
Each endorsing peer checks the transaction, creates an endorsement proposal for it (with a read/write set: previous value/changed value) and sends it back to the client SDK.
The client SDK waits for all endorsements; once it has all the endorsement proposals, it builds one invocation request and sends it to the orderer.
The orderer verifies the invocation request sent by the client SDK against the defined policies (consensus), verifies the transaction and adds it to a block.
According to the block configuration, after the specified time or number of transactions it forms the block hash from the transaction hashes, the metadata and the previous block hash (see the sketch after the ledger contents below).
The blocks of transactions are "delivered" to all peers on the channel by the orderer.
All committing peers verify the endorsement policy and ensure that there have been no changes to the ledger state for the read-set variables since the read set was generated by the transaction execution. After this, they commit all the transactions in the block and update the ledger with the new block and the current state of the assets.
Ledger Contains
1) Current state database (LevelDB or CouchDB)
2) Blockchain (files of linked blocks)
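Here is the minimal sketch of the block chaining referenced above (hypothetical struct; real Fabric blocks carry a richer header, data and metadata sections, but the hash-links-to-previous-block idea is the same):

package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// Block is a deliberately simplified block: number, link to the
// previous block's hash, and the hashes of its transactions.
type Block struct {
	Number   uint64
	PrevHash [32]byte
	TxHashes [][32]byte
}

// Hash covers the number, the previous hash and all transaction hashes,
// so changing any past transaction changes every later block hash.
func (b Block) Hash() [32]byte {
	data := make([]byte, 8)
	binary.BigEndian.PutUint64(data, b.Number)
	data = append(data, b.PrevHash[:]...)
	for _, t := range b.TxHashes {
		data = append(data, t[:]...)
	}
	return sha256.Sum256(data)
}

func main() {
	tx := sha256.Sum256([]byte("set asset=blue"))
	genesis := Block{Number: 0, TxHashes: [][32]byte{tx}}
	next := Block{Number: 1, PrevHash: genesis.Hash(), TxHashes: [][32]byte{tx}}
	fmt.Printf("block 1 hash: %x\n", next.Hash())
}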
Read the transaction flow documentation of Hyperledger Fabric for reference.
Hyperledger is an umbrella of blockchain technologies. Hyperledger Fabric, mentioned above, is one of them. Hyperledger Sawtooth also does not use mining and adds these consensus algorithms:
PoET Proof of Elapsed Time (optional Nakamoto-style consensus algorithm used for Sawtooth). PoET with SGX has BFT. PoET Simulator has CFT. Not CPU-intensive as with PoW-style algorithms, although it can still fork and have stale blocks. See the PoET specification at https://sawtooth.hyperledger.org/docs/core/releases/latest/architecture/poet.html
RAFT Consensus algorithm that elects a leader for a term of arbitrary time. The leader is replaced if it times out. Raft is faster than PoET, but is not BFT (Raft is CFT). Also, Raft does not fork.
With pluggable consensus, the consensus algorithm can be changed without reinitializing the blockchain or even restarting the software.
For completeness, the original consensus algorithm, used by Bitcoin (and which does use mining), is:
PoW Proof of Work. Completing work (a CPU-intensive Nakamoto-style consensus algorithm). Usually used in permissionless blockchains.