I'm new to Corda and I'm still trying to understand it.
I already know that we can run multiple CorDapps on the same node, but can those CorDapps access and update the same ledger?
Thank you very much :)
Yes, they can. You can have the following structure inside your project:
1. Contracts module that defines CustomState and CustomContract.
2. Workflows1 module that depends on Contracts module.
3. Workflows2 module that depends on Contracts module and Workflows1 module.
4. Workflows1 module can have a flow that creates and modifies CustomState.
5. Workflows2 module can have a flow that creates and modifies CustomState.
6. Workflows2 module can call a flow from Workflows1 module.
In the above structure you have three CorDapps (Contracts, Workflows1, and Workflows2); the Contracts CorDapp defines the state and contract, and both workflow CorDapps create and modify that same state. A minimal sketch of step 6 (a Workflows2 flow calling a Workflows1 flow) follows.
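For example, here is a hedged sketch in Java. `FlowLogic` and `subFlow` are the real Corda flow-composition mechanism; the package names, `IssueCustomStateFlow`, and `UpdateCustomStateFlow` are assumptions for illustration:

```java
// Hypothetical sketch: a flow in the Workflows2 CorDapp that delegates to a
// flow defined in the Workflows1 CorDapp via subFlow().
package com.example.workflows2;

import co.paralleluniverse.fibers.Suspendable;
import net.corda.core.flows.FlowException;
import net.corda.core.flows.FlowLogic;
import net.corda.core.flows.InitiatingFlow;
import net.corda.core.flows.StartableByRPC;
import net.corda.core.transactions.SignedTransaction;

import com.example.workflows1.IssueCustomStateFlow; // flow from the Workflows1 CorDapp

@InitiatingFlow
@StartableByRPC
public class UpdateCustomStateFlow extends FlowLogic<SignedTransaction> {
    private final int newValue;

    public UpdateCustomStateFlow(int newValue) {
        this.newValue = newValue;
    }

    @Suspendable
    @Override
    public SignedTransaction call() throws FlowException {
        // Reuse the issuance logic that lives in the Workflows1 module.
        SignedTransaction issued = subFlow(new IssueCustomStateFlow(newValue));
        // ... Workflows2 could now build a follow-up transaction that
        // consumes and modifies the CustomState it just received ...
        return issued;
    }
}
```

Because both workflow modules depend on the same Contracts module, both flows operate on the exact same `CustomState` type and therefore the same ledger data.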
As for whether they all access the same ledger: in Corda, data is shared on a need-to-know basis. When you define your state, you define its participants, i.e. the parties that will sign the transaction and store the resulting state. A minimal state sketch is below.
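Here is a hedged Java sketch of such a state; the field names and the `CustomContract` wiring are assumptions:

```java
// Hypothetical CustomState sketch. Only the parties returned by
// getParticipants() sign the transaction and store the state in their vaults.
package com.example.contracts;

import java.util.Arrays;
import java.util.List;

import net.corda.core.contracts.BelongsToContract;
import net.corda.core.contracts.ContractState;
import net.corda.core.identity.AbstractParty;
import net.corda.core.identity.Party;

@BelongsToContract(CustomContract.class) // CustomContract lives in the same Contracts module
public class CustomState implements ContractState {
    private final int value;
    private final Party issuer;
    private final Party owner;

    public CustomState(int value, Party issuer, Party owner) {
        this.value = value;
        this.issuer = issuer;
        this.owner = owner;
    }

    public int getValue() { return value; }

    // Only these parties record the resulting state.
    @Override
    public List<AbstractParty> getParticipants() {
        return Arrays.asList(issuer, owner);
    }
}
```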
I recommend starting here: https://docs.corda.net/key-concepts.html
Also join the Corda Slack channel (create a Stack Overflow post and share the link there): slack.corda.net
I am currently learning how to start developing on Chainlink, and I saw that there is a GetRoundData() method that is used to return data from a specific round.
When I dug into the code, I found that the method comes from the AggregatorV3Interface interface. Also, I didn't find the implementation of the function inside any of the .sol files, but I did find it in a .go file.
My question is, how does the aggregator store data on the blockchain? As far as I can see, the data comes from nowhere when I call getRoundData. If the data comes from a module written in Go, does that mean the data source is off-chain? Thank you.
(Code snippets captured in the question: aggregator_v2v3_interface.go and AggregatorV3Interface.sol.)
A contract implementing this interface is deployed at the address specified in the _AggregatorV2V3Interface Golang variable.
So your off-chain script is connected to a node of some EVM network (Ethereum, BSC, Polygon, ...) and queries the node to perform a read-only, gas-free call on that specific contract address. The actual data is stored on-chain. A sketch of such a call is below.
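For illustration, here is a hedged sketch of the same kind of read-only call made from Java with web3j instead of the question's Chainlink Go bindings. The node URL, aggregator address, and round ID are placeholders/assumptions:

```java
// Sketch: eth_call against a deployed aggregator. The node executes the call
// locally against on-chain state; nothing is broadcast and no gas is spent.
import java.math.BigInteger;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

import org.web3j.abi.FunctionEncoder;
import org.web3j.abi.FunctionReturnDecoder;
import org.web3j.abi.TypeReference;
import org.web3j.abi.datatypes.Function;
import org.web3j.abi.datatypes.Type;
import org.web3j.abi.datatypes.generated.Int256;
import org.web3j.abi.datatypes.generated.Uint256;
import org.web3j.abi.datatypes.generated.Uint80;
import org.web3j.protocol.Web3j;
import org.web3j.protocol.core.DefaultBlockParameterName;
import org.web3j.protocol.core.methods.request.Transaction;
import org.web3j.protocol.core.methods.response.EthCall;
import org.web3j.protocol.http.HttpService;

public class GetRoundDataExample {
    public static void main(String[] args) throws Exception {
        Web3j web3 = Web3j.build(new HttpService("https://your-node-url")); // placeholder node URL
        String aggregator = "0xAggregatorAddressHere";                      // placeholder address

        // getRoundData(uint80) returns (uint80, int256, uint256, uint256, uint80).
        Function fn = new Function(
                "getRoundData",
                Collections.<Type>singletonList(new Uint80(BigInteger.ONE)), // placeholder round id
                Arrays.<TypeReference<?>>asList(
                        new TypeReference<Uint80>() {},   // roundId
                        new TypeReference<Int256>() {},   // answer
                        new TypeReference<Uint256>() {},  // startedAt
                        new TypeReference<Uint256>() {},  // updatedAt
                        new TypeReference<Uint80>() {})); // answeredInRound

        EthCall response = web3.ethCall(
                Transaction.createEthCallTransaction(null, aggregator, FunctionEncoder.encode(fn)),
                DefaultBlockParameterName.LATEST).send();

        List<Type> out = FunctionReturnDecoder.decode(response.getValue(), fn.getOutputParameters());
        System.out.println("answer = " + out.get(1).getValue());
    }
}
```

The Go file from the question is just the generated client-side binding for this same contract call; the answer values themselves are written on-chain by Chainlink's oracle transactions.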
I'm looking at the API docs on polkadot.js: https://polkadot.js.org/docs/substrate/storage#staking
But I could not figure out which one to use to actually query all the staking rewards given an account ID / public address.
I was thinking I would have to loop over each era, but which one returns the staking rewards, so that I can calculate a total over time? Thank you very much!
In general, the node isn't used for querying historical state. Instead, you very likely want to use an indexer service that generates data that is much easier to query. There are a few options, but one of the most supported is substrate archive, which I would suggest you use.
Alternatively, you can look at Substrate-compatible block explorers and see what they do for this in their source code.
What is the main set of files required to orchestrate a new network from the data of an old Sawtooth network (I don't want to extend the old Sawtooth network)?
I want to back up the essential files that are crucial for operating the network from the last block in the ledger.
Here is the list of files that were generated by the Sawtooth validator with PoET consensus:
block-00.lmdb
block-00.lmdb-lock
block-chain-id
merkle-00.lmdb
merkle-00.lmdb-lock
txn_receipts-00.lmdb
txn_receipts-00.lmdb-lock
poet-key-state-020a4912.lmdb
poet-key-state-020a4912.lmdb-lock
poet-key-state-0371cbed.lmdb
poet_consensus_state-020a4912.lmdb
poet_consensus_state-020a4912.lmdb-lock
poet_consensus_state-0371cbed.lmdb
What is the significance of each file, and what are the consequences if it is not included when restarting the network or creating a new network with the old ledger data?
The answer to this question could get long, so I will cover the main parts here for the benefit of folks who have the same question; this will especially help those who want to deploy the network through Kubernetes. Similar questions are also asked frequently in the official RocketChat channel.
The essential set of files for the validator and PoET are stored in the /etc/sawtooth (keys and config) and /var/lib/sawtooth (data) directories by default, unless changed. Create a mounted volume for these directories so that they can be reused when a new instance is orchestrated.
Here's the file through which the default validator paths can be changed: https://github.com/hyperledger/sawtooth-core/blob/master/validator/packaging/path.toml.example
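For instance, a minimal path.toml sketch that repoints those directories at mounted-volume paths (the key names follow path.toml.example; the /mnt/... paths are assumptions):

```toml
# Sketch only: point the validator's key/data/policy directories at a
# mounted volume so a re-orchestrated instance reuses the old files.
key_dir = "/mnt/sawtooth/etc/keys"
data_dir = "/mnt/sawtooth/data"
log_dir = "/var/log/sawtooth"
policy_dir = "/mnt/sawtooth/etc/policy"
```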
Note that you've missed the keys in your list of essential files, and those play an important role in the network. In the case of PoET, each enclave's registration information is stored in the Validator Registry against the validator's public key. In the case of Raft/PBFT, the consensus engine makes use of the keys (member list info) to send peer-to-peer messages.
In the case of Raft, the data directory is /var/lib/sawtooth-raft-engine.
The significance of each file you listed may not matter for most people. However, here is an explanation of the important ones:
The *-lock files are system generated; if you see one, a process has the corresponding file open for writing.
block-00.lmdb is the block store/blockchain; it holds key-value pairs of block ID to block information. It's also possible to index blocks by other keys; the Hyperledger Sawtooth documentation is the right place to understand the complete details.
merkle-00.lmdb stores the state root hash/global state. It's a Merkle tree representation in key-value pairs.
txn_receipts-00.lmdb is where transaction execution status is stored upon success. It also holds information about any events associated with those transactions.
Here is a list of files from the Sawtooth FAQ:
https://sawtooth.hyperledger.org/faq/validator/#what-files-does-sawtooth-use
I have been exploring the Bitcoin source code for some time and have successfully created a local Bitcoin network with a new genesis block.
Now I am trying to understand the process of hard forks (if I am using the wrong terms here, I am referring to the kind where the blockchain is split instead of mining a new genesis block).
I am trying to find this approach in the Bitcoin Cash source code, but I haven't gotten anywhere so far except the checkpoints.
```cpp
// UAHF fork block.
{478558, uint256S("0000000000000000011865af4122fe3b144e2cbeea86"
                  "142e8ff2fb4107352d43")}
```
So far I understand that the above checkpoint is responsible for the chain split, but I am unable to find the location in the source code where this rule is enforced, i.e. the code that specifies having a different block than Bitcoin after block number 478558.
Can anyone point me in the right direction here?
There is no specific rule that you put in the source code that says "this is where the fork starts". The checkpoints are just for bootstrapping a new node; they are checked to make sure the right chain is being downloaded and verified.
A hard fork by definition is just a change in consensus rules. By nature, if you introduce new consensus-breaking rules, any nodes that are running Bitcoin will reject the incompatible blocks, and as soon as one block is rejected (and mined on the other chain) you have two different chains. The toy sketch below illustrates the mechanism.
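This is a toy illustration, not Bitcoin source code: two validators that differ in a single consensus rule (here, an assumed block-size cap, loosely echoing the BCH fork) stop agreeing at the first block that only one rule set accepts.

```java
// Toy demo: a chain split is just the consequence of divergent validity rules.
import java.util.ArrayList;
import java.util.List;

public class ForkDemo {
    static class Block {
        final int height;
        final int sizeBytes;
        Block(int height, int sizeBytes) { this.height = height; this.sizeBytes = sizeBytes; }
    }

    // Old rules cap blocks at 1 MB; the hard fork raises the cap (values assumed).
    static boolean validUnderOldRules(Block b) { return b.sizeBytes <= 1_000_000; }
    static boolean validUnderNewRules(Block b) { return b.sizeBytes <= 8_000_000; }

    public static void main(String[] args) {
        List<Block> oldChain = new ArrayList<>();
        List<Block> newChain = new ArrayList<>();

        Block[] incoming = {
            new Block(478557, 900_000),   // accepted by both rule sets
            new Block(478558, 4_000_000), // accepted only by the new rules: the split
        };

        for (Block b : incoming) {
            if (validUnderOldRules(b)) oldChain.add(b);
            if (validUnderNewRules(b)) newChain.add(b);
        }

        // From height 478558 onward the two nodes follow different tips.
        System.out.println("blocks accepted under old rules: " + oldChain.size()); // 1
        System.out.println("blocks accepted under new rules: " + newChain.size()); // 2
    }
}
```

No single line of code marks the fork point; the divergence emerges from the first block that one rule set rejects and the other mines on top of.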
As a side note, you should probably change the default P2P ports and P2P message headers in chainparams.cpp so it doesn't try to connect with other Bitcoin nodes.
I am working with Fabric chaincodes and have implemented the table concept provided to store data, since the blockchain does not allow modification or deletion. I am eager to know the internal implementation of the table format. Is there any documentation for it? If yes, please share it, or if anyone knows, please explain.
Thanks in advance
Tables are implemented using Protocol Buffers.
You can have a look at the file table.pb.go. This file is auto-generated from the proto message definitions in table.proto.
On top of this, the chaincode API provides functions like CreateTable, GetTable, DeleteTable, GetRow, GetRows, and InsertRow, which you might be using in your chaincode.
Functions like CreateTable, InsertRow, and DeleteTable, which perform data modification, internally use the PutState API to write byte values to the ledger. PutState marshals a struct defined in table.pb.go into bytes and stores them in the ledger.
Similarly, functions like GetRow, GetRows, and GetTable, which query data, internally use the GetState API to read byte values from the ledger. GetState finds the value in the ledger as bytes and then unmarshals the bytes into structs.
Effectively, you get to interact with Go structs without caring how the table is stored internally. The sketch below shows the same marshal-then-PutState and GetState-then-unmarshal pattern.
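To make the pattern concrete, here is a hedged sketch transposed to the Java chaincode shim (the real table code is Go and marshals protobuf messages; the Row type, the key scheme, and the use of JSON here are illustrative assumptions only):

```java
// Sketch: a table-like row API reduced to the two primitives the answer
// describes: marshal + putState on write, getState + unmarshal on read.
import com.fasterxml.jackson.databind.ObjectMapper;
import org.hyperledger.fabric.shim.ChaincodeStub;

public class RowStore {
    static final ObjectMapper MAPPER = new ObjectMapper();

    // Hypothetical row type standing in for the generated table.pb.go structs.
    public static class Row {
        public String id;
        public long balance;
    }

    // InsertRow-style write: marshal the row, hand the bytes to putState.
    public static void insertRow(ChaincodeStub stub, Row row) throws Exception {
        byte[] bytes = MAPPER.writeValueAsBytes(row);
        stub.putState("row:" + row.id, bytes); // key scheme is an assumption
    }

    // GetRow-style read: fetch the bytes with getState, unmarshal into a struct.
    public static Row getRow(ChaincodeStub stub, String id) throws Exception {
        byte[] bytes = stub.getState("row:" + id);
        return bytes == null ? null : MAPPER.readValue(bytes, Row.class);
    }
}
```

Note that "deletion" at the table level is also just another write: the ledger keeps the full history of key-value updates, while the state database only reflects the latest value per key.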