Linkages of Blocks in Hyperledger Blockchain

In the Bitcoin blockchain, blocks are linked by the hash of the previous block. How are blocks linked in the Hyperledger Fabric blockchain? And how can one visualize them?

You can find how a block is structured in Hyperledger Fabric by inspecting the protobuf definitions, where Block is defined as:
message Block {
    BlockHeader header = 1;
    BlockData data = 2;
    BlockMetadata metadata = 3;
}
Drilling down further to the definition of BlockHeader:
message BlockHeader {
    uint64 number = 1;        // The position in the blockchain
    bytes previous_hash = 2;  // The hash of the previous block header
    bytes data_hash = 3;      // The hash of the BlockData, by MerkleTree
}
you will see that blocks in Hyperledger Fabric are chained with hashes in a very similar fashion to Bitcoin and many other blockchain solutions. The hash H(N) of block N is computed by the following scheme:
H(N) = hash([transactions] || H(N-1) || N)
i.e. the content of the block, the previous block hash, and the block sequence number.
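For illustration only, the chaining can be mimicked with shell tools. This is a sketch, not Fabric's actual byte-level encoding (Fabric hashes a serialized form of the block header), and it assumes data.bin holds the block's transactions and prev_hash.bin the previous header hash:
# Concatenate block content, previous hash, and block number N, then hash
N=42
cat data.bin prev_hash.bin <(printf '%s' "$N") | sha256sum | cut -d' ' -f1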
Many aspects are well covered in the "Getting Started" documentation.

If you are using IBM Bluemix, here is a simple tutorial with Go:
# Create the parent directories on your GOPATH
mkdir -p $GOPATH/src/github.com/hyperledger
cd $GOPATH/src/github.com/hyperledger
# Clone the appropriate release codebase into $GOPATH/src/github.com/hyperledger/fabric
# Note that the v0.5 release is a branch of the repository. It is defined below after the -b argument
git clone -b v0.5-developer-preview https://github.com/hyperledger-archives/fabric.git
If you are installing the v0.6 release, use this for your git clone command:
# The v0.6 release exists as a branch inside the Gerrit fabric repository
git clone -b v0.6 http://gerrit.hyperledger.org/r/fabric
If the fabric is not installed properly on your GOPATH, you will see errors like the one below when building your chaincode.
Apart from that, you can simply refer to the documentation for the linking process!
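If you want to sanity-check the install location, listing the expected import path is a quick way to do it (a sketch; this is the standard GOPATH layout used above):
# The clone must land exactly here for chaincode builds to find it
ls $GOPATH/src/github.com/hyperledger/fabric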

Related

Cardano-cli query utxo fails

I am trying to create a staking pool for Cardano. I got the node up and running, but cardano-cli is giving me a hard time. I have it installed; when I type cardano-cli version it returns the version info.
However, when I enter cardano-cli query utxo --mainnet --address $RECEIVER I get this error:
cardano-cli: Network.Socket.connect: <socket: 11>: does not exist (No such file or directory)
Could it be because the blockchain isn't fully synced?
I am running Windows 10 with VS.
Yes, ivan p is correct; however, it's also important to mention that there have been cardano-node example scripts involving stake pools, such as creating a Shelley blockchain from scratch, that had a bug for quite some time which essentially caused multiple nodes to point to the same socket. If you're attempting to run multiple nodes (which you should be while running a stake pool), double-check the precise path and location of the socket.
cardano-cli requires both the CARDANO_NODE_SOCKET_PATH variable set in the shell and a running node server in order to access it through the socket.
export CARDANO_NODE_SOCKET_PATH="/root/db/node.socket"
To query the balance of an address we need a running node and the environment variable CARDANO_NODE_SOCKET_PATH set to the path of the node.socket:
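Putting the pieces together, a minimal sketch (using the socket path from this answer; adjust it to your node, and assume $RECEIVER already holds the address to query):
# Point cardano-cli at the running node's socket, then query the UTxOs
export CARDANO_NODE_SOCKET_PATH="/root/db/node.socket"
cardano-cli query utxo --mainnet --address $RECEIVER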

Generate DID in ION SIDETREE testnet

I have succeeded in deploying the ION SIDETREE testnet.
I followed this install instruction, but when I created a DID, it returned a DID for mainnet.
I don't know where it went wrong.
Here is my configuration:
Step 1: I run bitcoin with this command:
root@ion:~/bitcoin-0.20.1# ./bin/bitcoind -testnet -rpcuser=admin -rpcpassword=admin -fallbackfee=0.0002 -txindex=1 -server
And here is the output log
Step 2: I config ION Sidetree
Here is my configuration:
root@ion:~/ion/json# cat testnet-bitcoin-config.json
{
    "bitcoinDataDirectory": "/root/.bitcoin/testnet3",
    "bitcoinFeeSpendingCutoffPeriodInBlocks": 1,
    "bitcoinFeeSpendingCutoff": 0.001,
    "bitcoinPeerUri": "http://localhost:18332",
    "bitcoinRpcUsername": "admin",
    "bitcoinRpcPassword": "admin",
    "bitcoinWalletOrImportString": "cMj4VE3WyJt6RAQVGboDATFQ6YAVKo6fCVXw7oKuSpaAfNJqCuV1",
    "databaseName": "ion-testnet-bitcoin",
    "genesisBlockNumber": 1900000,
    "logRequestError": true,
    "mongoDbConnectionString": "mongodb://127.0.0.1:27017/",
    "port": 3002,
    "sidetreeTransactionFeeMarkupPercentage": 1,
    "sidetreeTransactionPrefix": "ion:test",
    "transactionPollPeriodInSeconds": 60,
    "valueTimeLockUpdateEnabled": false,
    "valueTimeLockAmountInBitcoins": 0,
    "valueTimeLockPollPeriodInSeconds": 600,
    "valueTimeLockTransactionFeesAmountInBitcoins": 0.0001
}
And
root@ion:~/ion/json# cat testnet-core-config.json
{
    "batchingIntervalInSeconds": 600,
    "blockchainServiceUri": "http://127.0.0.1:3002",
    "databaseName": "ion-testnet-core",
    "didMethodName": "ion:test",
    "ipfsHttpApiEndpointUri": "http://127.0.0.1:5001",
    "maxConcurrentDownloads": 20,
    "mongoDbConnectionString": "mongodb://127.0.0.1:27017/",
    "observingIntervalInSeconds": 60,
    "port": 3000
}
Step 3: I run the bitcoin service
root@ion:~/ion# npm install
root@ion:~/ion/json# npm run build
root@ion:~/ion/json# npm run bitcoin
And here is the output log
Step 4: I run core
root@ion:~/ion# npm run core
And here is the output log
Step 5: I build ION
root@ion:~# cd ion/
root@ion:~/ion# npm install
root@ion:~/ion# npm run build
root@ion:~/ion# npm install -g .
Step 6: I generate DID
root@ion:~/ion# ion operation create
Here is the output log
Here is my problem: I don't know why, even though I am running testnet, it created the DID as mainnet. So when I resolved the DID, it showed this error.
Thank you.
But when I use command "ion operation create"...
The CLI is completely experimental (it uses a test library) and has been on a development pause since last year; we did not anticipate anyone would know about it, since we didn't advertise it on the front page. I have just opened a PR to make it use the ION SDK instead, and it should be merged soon. I am not sure if it was working before my PR (it probably was), but it should definitely work now.
That DID belongs to mainnet.
The CLI was written with the assumption that it targets mainnet, but that is fine: the request will work perfectly well against the testnet. The test: prefix in the DID string is purely cosmetic for the most part.
But with my DID (generated by myself), it shows this error
Since you are testing testnet DIDs and your node is set up as such ("didMethodName": "ion:test", seen in your config above), you must therefore prefix your unique DID suffix string with did:ion:test:<unique_suffix>; you didn't in your screenshot, hence the error. I hope this makes sense. The error message you are seeing in the screenshot also says exactly that.
Step 3: I resolve with DID "did:ion:test:EiC3YoSodQ20tJcgKjLXr65BHr2KwnQWsUnm3VOiYUFMeA", but it returned "not found"
The ION DID tool should work. Sanity check: assuming you've set up the node to be able to write, did you wait for that transaction to be confirmed? You can check the transaction hash that is printed when the batch writer kicks in every 10 minutes (based on your config "batchingIntervalInSeconds": 600). Even though the POST request immediately returns a DID Document, that is just for your convenience: it tells you that 1. the node accepted your request and queued it, and 2. what your DID Document will look like once it is confirmed. But we all know bitcoin transactions take a while to confirm, especially on testnet, where the confirmation time is less predictable.
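Once that transaction is confirmed, resolution against your local core node should succeed. A sketch, assuming the node exposes the usual Sidetree resolution route under /identifiers/ on the core port (3000 in your config above):
# Resolve the test-prefixed DID against the local ION core node
curl http://127.0.0.1:3000/identifiers/did:ion:test:EiC3YoSodQ20tJcgKjLXr65BHr2KwnQWsUnm3VOiYUFMeA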

Create cron in chef

I want to create a cron job in Chef which checks the size of a log file and deletes it if it is larger than 30 MB. Here is my code:
cron_d 'ganglia_tomcat_thread_max' do
    hour '0'
    minute '1'
    command "rm -f /srv/node/current/app/log/simplesamlphp.log"
    only_if { ::File.size('/srv/node/current/app/log/simplesamlphp.log').to_f / 1024000 > 30 }
end
Can you help me with it, please?
Welcome to Stackoverflow!
I suggest you go with existing tools like logrotate. There is a Chef cookbook available to manage logrotate.
Please note that "cron" in Chef manages the system cron service, which runs independently of Chef, so you'll have to do the file size check within the "command". It's also better to use the cron_d resource, as documented here.
The way you create the cron_d resource, it will add the cron task only when your log file is larger than 30 MB at the time Chef runs. In all other cases the cron_d resource will not be created.
You can check this Ruby code
File.size('file').to_f / 2**20
to get the file size in megabytes; it differs slightly from dividing by 1024000, and I believe it is the more correct conversion.
So you can go with two solutions for your specific case:
create a new cron_d resource when the log file is smaller than 30 MB, to remove the existing cron, and provision your node periodically
move the file-size check into the command itself and glue the parts together with &&; that way the file is deleted only if its size is greater than 30 MB (see the sketch below)
du -k file.txt | cut -f1
will return the size of the file in kilobytes.
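For the second solution, the check-and-delete can live entirely in the cron command. A sketch in shell, using the log path from the question:
# Delete the log only when it is larger than 30 MB (du -m reports megabytes)
LOG=/srv/node/current/app/log/simplesamlphp.log
[ -f "$LOG" ] && [ "$(du -m "$LOG" | cut -f1)" -gt 30 ] && rm -f "$LOG"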
To me, the correct way to do this is also to use the logrotate service and the Chef recipe for it.

Where is the Hyperledger Fabric chaincode storage location?

I used the peer command in the cli container to install chaincode.
A file named <chaincode name>.<version> was created on the peer containers
(the location is /var/hyperledger/production/chaincodes).
What is this file? Is it the chaincode?
I am looking for the chaincode after install.
The chaincode package which is installed on the peer is technically the serialized bytes of a protobuf message. The protobuf is a ChaincodeDeploymentSpec message, which is defined in the "github.com/hyperledger/fabric/protos/peer" package:
message ChaincodeDeploymentSpec {
    enum ExecutionEnvironment {
        DOCKER = 0;
        SYSTEM = 1;
    }
    ChaincodeSpec chaincode_spec = 1;
    // Controls when the chaincode becomes executable.
    google.protobuf.Timestamp effective_date = 2;
    bytes code_package = 3;
    ExecutionEnvironment exec_env = 4;
}
The actual chaincode source files are stored in a serialized archive held in the code_package field of the ChaincodeDeploymentSpec.
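To see what is installed on a peer, you can simply list that directory; each entry is one file named <name>.<version> whose bytes are the serialized ChaincodeDeploymentSpec described above:
# Default peer chaincode install location from the question
ls /var/hyperledger/production/chaincodes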

How to do branch grafting on separate tasks

I want to write a ducttape task that grafts the outputs of tasks belonging to different branches. For example, take a scenario where you have training and test data. You want to use the training data to develop a model, and you want to evaluate the model on a test set.
Here's a dummy example. prep is run on each of the data sets, but trainModel is only to be executed on the training set. The goal is to have applyModel use the results from trainModel with the training set and to evaluate it on the test set.
task prep
    < input=$corpus
    > output
{
    # Do some common preprocessing on all corpora
    cp $input $output
}
task trainModel
    < input=$output@prep
    > trainedModel
{
    echo "Trained..." > $trainedModel
    cat $input >> $trainedModel
}
task applyModel
    < trainedModel=$trainedModel@trainModel[Set:Train]
    < testInput=$output@prep[Set:Test]
    > out
{
    diff $trainedModel $testInput > $out
}
plan DoIt {
    # reach prep via (Set: *)
    reach trainModel via (Set: Train)
    reach applyModel via (Set: *)
}
global
{
    train=train.txt
    test=test.txt
    corpus=(Set: Train=$train Test=$test)
}
However, if you look at the execution plan:
ducttape 0.3
by Jonathan Clark
Loading workflow version history...
Have 0 previous workflow versions
Finding hyperpaths contained in plan...
Found 4 vertices implied by realization plan DoIt (Clause 1 of 2)
Found 4 vertices implied by realization plan DoIt (Clause 2 of 2)
Union of all planned vertices has size 5
Checking for completed tasks
Finding packages...
Found 0 packages
Checking for already built packages (if this takes a long time, consider switching to a local-disk git clone instead of a remote repository)...
Checking inputs...
Work plan (depth-first traversal):
RUN: ./prep/Baseline.baseline (Set.Train)
RUN: ./trainModel/Baseline.baseline (Set.Train)
RUN: ./prep/Set.Test (Set.Test)
RUN: ./applyModel/Baseline.baseline (Baseline.baseline)
RUN: ./trainModel/Set.Test (Set.Test)
you see that trainModel will be executed twice, on both the training and the test data. How can I configure the work plan to train only on one or more training sets, and evaluate on one or more test sets?
(I realize it might be better in this example to break the $corpus variable into sub-branches, but in more complex scripts this problem still exists.)
This is exactly the sort of scenario that branch grafts are aimed at. The issue in the sample above is that the graft for the training data is applied too late: since you want to run training only for the training data set, you should apply the graft at the training task.
You also mentioned the use case of having multiple test sets, so I've included an example of that as well, which makes use of nested branch points.
To keep things clear, I renamed your original branch point "Set" to "Section" (i.e. train vs test data sections) and introduced a new "TestSet" branch point.
task prep
    < input=$corpus
    > output
{
    # Do some common preprocessing on all corpora
    cp $input $output
}
task trainModel
    < input=$output@prep[Section:Train] <----- EDITED
    > trainedModel
{
    echo "Trained..." > $trainedModel
    cat $input >> $trainedModel
}
task applyModel
    < trainedModel=$trainedModel@trainModel <----- EDITED
    < testInput=$output@prep[Section:Test]
    > out
{
    diff $trainedModel $testInput > $out
}
plan DoIt {
    reach applyModel via (TestSet: *) <----- EDITED
}
global
{
    train=train.txt
    test_a=test_a.txt
    test_b=test_b.txt
    corpus=(Section:
        Train=$train
        Test=(TestSet: A=$test_a B=$test_b) <----- EDITED
    )
}