By default anyone can read the state data using the REST API. Is there a way to add read permissions on specific addresses and change them while the network is up?
The short answer to your question is to use a proxy server; the documentation you're referring to in the question covers it here: https://sawtooth.hyperledger.org/docs/core/releases/1.1/sysadmin_guide/rest_auth_proxy.html#using-a-proxy-server-to-authorize-the-rest-api
There may not be an out-of-the-box component that does exactly what you're asking, but it is definitely possible. You can add filtering logic based on the read address in the proxy server; a rough sketch of this is shown below.
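For illustration only, here is a minimal sketch of such a filtering proxy in TypeScript with Express. The API keys, the allowed address prefixes, and the upstream URL are all assumptions you would replace with your own authorization scheme; the Sawtooth REST API endpoint being proxied is GET /state/{address}.

```typescript
import express from "express";

// Hypothetical mapping of API keys to the state address prefixes they may read.
const allowedPrefixes: Record<string, string[]> = {
  "org1-api-key": ["1cf126"], // e.g. a transaction family namespace prefix
  "org2-api-key": ["5b7349"],
};

const SAWTOOTH_REST_API = "http://localhost:8008"; // upstream Sawtooth REST API

const app = express();

// Only forward /state/<address> reads whose address starts with an allowed prefix.
app.get("/state/:address", async (req, res) => {
  const key = req.header("x-api-key") ?? "";
  const prefixes = allowedPrefixes[key] ?? [];
  const { address } = req.params;

  if (!prefixes.some((p) => address.startsWith(p))) {
    res.status(403).json({ error: "read not permitted for this address" });
    return;
  }

  // Forward the permitted request to the real REST API (uses Node 18+ global fetch).
  const upstream = await fetch(`${SAWTOOTH_REST_API}/state/${address}`);
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3000, () => console.log("authorizing proxy listening on :3000"));
```

Because the prefix table lives in the proxy, it can be updated (or reloaded from a database) at runtime without touching the running Sawtooth network, which addresses the "change them while the network is up" part of the question.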
More explanation:
If you're considering one Validator instance per organization, and the organization participates in a blockchain application use case, then all the participants in the network can see the data you store in the state store. It's the responsibility of the participating organizations to restrict access to their data; using a proxy server is one such means.
If you're considering multiple use cases per organization, each participating in a different network altogether, then it is advisable to have a separate Validator instance for those use cases that require isolation. Again, it's the responsibility of each organization to protect the data stored in the networks it participates in.
For the second point, the proposed Hyperledger Sawtooth 2.0 solution allows you to run multiple instances of the Validator as services in a single process. That means one physical node (and process) can participate in multiple circuits, providing isolation.
Before I end the answer, for the benefit of others searching for one: a blockchain is not just distributed storage but also a decentralized network. There are a number of design patterns that allow us to keep critical data outside the blockchain network and use the blockchain for what it is expected to do (achieving consensus and smart contract verification, specifically).
I ask this question because I want to facilitate a workflow that utilizes a managed blockchain service such as the Azure or AWS blockchain service.
Is the true purpose attestations, provenance and interoperability?
In that respect, aren't regular (legacy and/or current) methodologies sufficient for data interoperability and the transfer and consumption of said data?
Lastly, if all this is effectively doing is creating a ledger account of the data flow, would a true advantage be the encryption of the data across the entire flow, up to the edge?
If it cannot be encrypted up to the edge, so that it is not readable at any point in the data flow into the data archive/traditional store, is it effectively worth any of the previously described gains of provenance and interoperability?
I think there is some nuance to this answer. The purpose of Azure Blockchain Service is to allow enterprises to build networks (consortiums) that enable their business workflows. The unique value that blockchain adds to business workflows is a logical data model/flow with infrastructure shared among the participants (businesses). That is not easy to do with a traditional database model.
With regard to the encryption you mentioned above, the value with blockchain is providing a digital signature for every change in the system that is shared between enterprises. This is typically done at the client to provide the least chance for manipulation. Privacy, which can use encryption techniques, is something that can be used to allow participants to control access to change details. The fact that changes were made is still cryptographically verifiable, without sharing all the data details with everyone.
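As a rough illustration of client-side signing (not any particular Azure Blockchain Service API; the key handling and payload shape are assumptions for the sketch), the idea is that each change is signed before it leaves the client, so any later tampering is detectable:

```typescript
import { createSign, createVerify, generateKeyPairSync } from "node:crypto";

// In practice the key pair lives in the client's wallet or HSM; generated here for the sketch.
const { privateKey, publicKey } = generateKeyPairSync("ec", { namedCurve: "P-256" });

// The business change to record, e.g. a shipment status update (hypothetical payload).
const change = JSON.stringify({ orderId: "PO-123", status: "shipped", ts: Date.now() });

// Sign at the client, so the change cannot be manipulated later without detection.
const signature = createSign("SHA256").update(change).sign(privateKey, "base64");

// Any other participant can verify the change against the sender's public key.
const valid = createVerify("SHA256").update(change).verify(publicKey, signature, "base64");
console.log({ change, signature, valid }); // valid === true
```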
If you look at something like EDI as it is done today with supply chains, it is essentially a complex network of enterprises synchronizing databases. This approach typically suffers from breakage when keeping all of these systems in sync. With a blockchain-based system, the "syncing" is abstracted away and the focus is more on the business logic, which is always cryptographically signed and verifiable. So it functions like a single "logical" data store, but is actually distributed.
Everything I've been reading about blockchain, from my understanding, says that even on a private blockchain every participant can view all transactions. I've seen it mentioned that a use case for blockchain could be the sharing of medical data. So, for example, if I had a blockchain that holds the medical history of every person from birth to death in a country, is there no way of setting up permissions so that only data relating to a person, and those who have been given permission to that person's data, can view it? If the data is stored on every node in a blockchain, how is a person's computer supposed to have the capacity to store the medical data of every person in a country?
I would advise looking up MedRec for health care use cases. Most of the research is geared towards keeping the data off the chain. In addition, there are other blockchains that might provide a better solution with more privacy; for example, look up Quorum by JPMorgan. There are different approaches being explored, but these give you two possible solutions. Also, check out Health Nexus' whitepaper; it deals with medical blockchain technology. Let me know if you need more.
https://www.pubpub.org/pub/medrec
https://github.com/jpmorganchase/quorum
There are blockchains that allow defining permissions. Hyperledger Fabric is one of them. You can configure channels so that data is stored only in the ledgers of the participants in that channel.
To get past the scalability problem of blockchains, you should concentrate on off-chain architecture.
Right now, this scenario should be considered:
Save transactions to the blockchain (this is the formal record).
Save the hashed data to an off-chain repository such as a database.
Save the address of that data hash to the blockchain for future access (a minimal sketch of this pattern follows the list).
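One common reading of this pattern is to keep the full data off chain and anchor its hash and storage address on chain. Here is a minimal sketch under that assumption; the off-chain store and the submitToBlockchain function are placeholders for your actual database and transaction-submission code:

```typescript
import { createHash, randomUUID } from "node:crypto";

// Placeholder off-chain repository (in reality a database or object store).
const offChainStore = new Map<string, string>();

// Placeholder for submitting a transaction to whatever blockchain you use.
function submitToBlockchain(payload: { docId: string; hash: string }): void {
  console.log("anchoring on chain:", payload);
}

function anchorDocument(document: string): string {
  // 1. Hash the document.
  const hash = createHash("sha256").update(document).digest("hex");

  // 2. Store the full document off chain, keyed by an address/ID.
  const docId = randomUUID();
  offChainStore.set(docId, document);

  // 3. Record only the address and hash on chain for later verification.
  submitToBlockchain({ docId, hash });
  return docId;
}

// Later, anyone can verify the off-chain copy against the on-chain hash.
function verifyDocument(docId: string, onChainHash: string): boolean {
  const document = offChainStore.get(docId);
  if (document === undefined) return false;
  return createHash("sha256").update(document).digest("hex") === onChainHash;
}
```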
Yes, you pointed to the right issue: a central point of access, such as an admin node or "god" node, is the opposite of blockchain as a distributed ideal.
For this issue, mechanisms like secret sharing or proxy re-encryption should be used to guarantee the privacy and security of the hashed data.
For more information, read this article:
https://www.sciencedirect.com/science/article/pii/S2210670717310685
GoQuorum has an 'enhanced permissioning' model where you can do all that, and at the same time stay compatible with Ethereum standards.
Check this out: https://consensys.net/docs/goquorum/en/latest/configure-and-manage/manage/enhanced-permissions/
What happens in Hyperledger Fabric on a private channel blockchain consisting of only two peers if one of the peers is faulty and manipulates its private copy of the blockchain?
The two copies of the blockchain would then diverge, and eventually it would be impossible for a consensus algorithm to tell which one is correct.
Is this a valid problem? If so, how would it be mitigated? Would it help to add additional peers to the channel (e.g. placed at a regulator's data center) which are not under the control of the two peers mentioned above? Or is there a better solution to tackle this problem?
Adding additional peers to each organization would defend against any single node becoming compromised. Adding additional nodes to the channel(s) at an independent 3rd party (auditor, regulator, or other trusted provider) would be another valid strategy to defend against a counter-party with malicious intent.
Consensus is achieved in the Ordering Service; the Peers are independent from it. I think they are two different things:
The Peers don't manipulate the blockchain. They could send incorrect or invalid transactions. The result of executing those transactions depends on the Smart Contract that you have on the Peers and the Endorsement Policy that you have defined. Then, each Peer sends the validated transactions to the Ordering Service.
The Blocks are created by the Ordering Service, so the blocks will be identical for both peers.
The solution to that issue would be to create an Ordering Service where the orderers are located at an additional, independent third party.
Nowadays, the Ordering Service gives you the chance to choose among different implementations: two are already developed, and a third one will be ready soon. More info about it here.
Is it possible to achieve property-level privacy in Fabric 1.0? For example: if I have a chaincode representing a tenancy contract, I want only the tenant and lessor to see all the details, banks to see only the payment terms, and the actual owner to see everything except the payment terms. How can I achieve this in Fabric 1.0? If I use channels, I will need to deploy two different contracts, and the total number of channels I can create is limited by network performance. Channels are not meant to be used to achieve property-level privacy. I don't want to do it off-chain, and I also don't want to do on-chain encryption, as I cannot apply smart operations on it. What is the best solution for achieving this?
The Side DB for private channel data is planned as an upcoming feature for Hyperledger Fabric. It will be able to restrict data to only a subset of peers, while evidence of the data (a hash) is exposed to everyone in the channel. More info here (https://jira.hyperledger.org/browse/FAB-1151).
You can use the Composer Access Control Language to implement this; however, unfortunately we have not (yet) written the code to enforce property-level access control. The ACL engine enforces access control for namespaces, resources, and resource instances, but we have plans to extend this to properties on classes.
So, in the absence of declarative access control from the ACL engine, you would have to use the getCurrentParticipant() runtime API and add procedural access control checks to your transaction processor functions.
You can read about the ACL language here:
https://hyperledger.github.io/composer/reference/acl_language.html
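As a hedged sketch of the procedural approach mentioned above: Composer transaction processor functions are plain JavaScript, shown here in TypeScript form with the runtime global declared for clarity. The namespace, transaction, and property names are made up for illustration; getCurrentParticipant() is the Composer runtime API referred to in the answer.

```typescript
// Declared here because the Composer runtime provides this function globally.
declare function getCurrentParticipant(): { getFullyQualifiedIdentifier(): string };

/**
 * Hypothetical transaction that updates payment terms on a tenancy contract.
 * @param {org.example.tenancy.UpdatePaymentTerms} tx - the transaction (made-up type)
 * @transaction
 */
function updatePaymentTerms(tx: any): void {
  const caller = getCurrentParticipant().getFullyQualifiedIdentifier();

  // Procedural access control: only the lessor or the tenant on the contract
  // (made-up relationships) may change the payment terms.
  const contract = tx.contract;
  const allowed = [contract.lessor, contract.tenant].map(
    (p: any) => p.getFullyQualifiedIdentifier()
  );

  if (!allowed.includes(caller)) {
    throw new Error(caller + " is not permitted to update payment terms");
  }

  contract.paymentTerms = tx.newPaymentTerms;
  // ...persist the updated asset via the asset registry as usual (omitted).
}
```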
Have a look at Fabric 1.2's private data. See the official documentation here. It provides the side DB mentioned in one of the other answers.
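A minimal, hedged sketch of chaincode using private data collections with the Node contract API (fabric-contract-api, available from Fabric 1.4 onward). The collection name and key scheme are assumptions; the collection itself would be defined in the chaincode's collections configuration, listing which organizations may hold the private data:

```typescript
import { Context, Contract } from "fabric-contract-api";

// Must match an entry in the chaincode's collections_config.json, which lists
// the organizations (e.g. tenant and bank) allowed to hold this private data.
const PAYMENT_TERMS_COLLECTION = "paymentTermsCollection";

export class TenancyContract extends Contract {
  // Public details go on the channel ledger; payment terms go to the private collection.
  public async CreateTenancy(
    ctx: Context, tenancyId: string, publicDetails: string, paymentTerms: string
  ): Promise<void> {
    await ctx.stub.putState(tenancyId, Buffer.from(publicDetails));
    await ctx.stub.putPrivateData(
      PAYMENT_TERMS_COLLECTION, tenancyId, Buffer.from(paymentTerms)
    );
  }

  // Only peers that are members of the collection can return the private data;
  // other channel members only see its hash on the channel ledger.
  public async ReadPaymentTerms(ctx: Context, tenancyId: string): Promise<string> {
    const data = await ctx.stub.getPrivateData(PAYMENT_TERMS_COLLECTION, tenancyId);
    if (!data || data.length === 0) {
      throw new Error(`no payment terms found for ${tenancyId}`);
    }
    return Buffer.from(data).toString("utf8");
  }
}
```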
To what degree should web service providers limit implementation changes without creating a new service version? One view is that as long as the contract is upheld, the service owner should be free to update the implementation as needed. Schemas are not always airtight, and it is foreseeable that changes within the service implementation could affect the service output while still upholding the contract.
To what degree should consumers be notified of implementation changes? It's one thing to notify consumers of updates to your own web service implementation, but how feasible is it to track implementation changes to all downstream dependencies? Should service owners create a new version when they know that a change may affect consumers, and try to be good citizens and notify consumers of all other changes?
Lots of questions, and I doubt there is a one-size-fits-all answer. It could just depend on the situation. Maybe this is what SLAs are for.
Good questions, and I think you've already answered them. Yes, these details would be in an SLA, and I think that if the contract/WSDL is the same, why would the service need to notify its consumers? Unless, of course, changes to the service impact response times and performance. Maybe the service would notify consumers when another contract is introduced (in addition to the original), so consumers become aware of any new capabilities and can adjust their clients accordingly if desired.
I'm in an environment where SLAs don't exist for internal clients, so absent an SLA, the following are some common-sense guidelines:
Attempt to limit the number of modifications to services
Communicate service implementation releases so consumers can plan test cycles
Provide consumers with the list of direct downstream dependencies and where to find their schedules and release notes
Consider a new version if an implementation change will semantically affect consumers
A lot depends on your specific circumstances. Speaking generally, here are a few top considerations.
The service contract and schema are all that a service and client share in common. A service implementation change that does not change the contract or schema (e.g., fixing a bug in the implementation logic) should not necessitate notifying the clients, nor should it be considered a new version.
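To make that distinction concrete, here is a small, hypothetical TypeScript illustration: the published contract (the interface) stays identical while the implementation behind it changes, so clients that depend only on the contract are unaffected. The names and helpers are invented for the sketch.

```typescript
// The published contract: this is all the client depends on.
export interface OrderService {
  getOrderStatus(orderId: string): Promise<string>;
}

// Version A of the implementation.
export class OrderServiceImpl implements OrderService {
  async getOrderStatus(orderId: string): Promise<string> {
    // Original logic: read from a cache that was occasionally stale (a bug).
    return lookupFromCache(orderId);
  }
}

// A later release fixes the bug by reading the authoritative store instead.
// The contract above is untouched, so this need not be a new service version.
export class OrderServiceImplFixed implements OrderService {
  async getOrderStatus(orderId: string): Promise<string> {
    return lookupFromDatabase(orderId);
  }
}

// Placeholder data-access helpers (assumptions for the sketch).
async function lookupFromCache(orderId: string): Promise<string> {
  return "SHIPPED";
}
async function lookupFromDatabase(orderId: string): Promise<string> {
  return "SHIPPED";
}
```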
On the other hand, if you have a poorly constructed, overly loose contract, such as passing all of the data as one big string, where the client has to do extensive interpretation to consume the service, and you're now looking to exploit that overly loose contract in a way that would likely break the client, you owe it to all parties to change the contract (and improve it!) and publish that as a new version of the service.
Since services are often used to enable loose coupling between systems, it is sometimes not practical or even possible to identify all of the clients of a service. Producing a new version of a service in these situations often entails maintaining multiple versions of the service for some period of time, often as directed by some governance body.
Providing details about service implementations, implementation dependencies, etc., encourages creating tight coupling by disclosing non-contract related details that the client may then take a dependency on. That can limit the ability of the service to change independently of the client.
The book Web Service Contract Design and Versioning for SOA by Thomas Erl is a good resource on the topic and details several common scenarios.