I am trying to build an application on Hyperledger Fabric v1.0 which has the following features:
Multi-sig contract execution
Discoverability of contracts
Selective visibility.
But I am not able to find:
Any functions to retrieve role/user information
Define and create users with different roles.
Any examples of how I can make my smart contract discoverable by other smart contracts would also be highly appreciated.
You can obtain the certificate of the creator of the proposal during chaincode execution in the following way:
// Requires "encoding/pem" and "crypto/x509" from the standard library,
// in addition to the chaincode shim package.

// GetCreator returns the serialized identity of the proposal creator;
// pem.Decode finds the PEM-encoded certificate block inside it.
creatorByte, err := stub.GetCreator()
if err != nil {
	return shim.Error("Error stub.GetCreator")
}
bl, _ := pem.Decode(creatorByte)
if bl == nil {
	return shim.Error("Could not decode the PEM structure")
}
cert, err := x509.ParseCertificate(bl.Bytes)
if err != nil {
	return shim.Error("ParseCertificate failed")
}
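From the parsed certificate you can then read identity details; for example, the subject common name is typically the client's enrollment ID. A minimal continuation of the snippet above (the fmt usage is just for illustration):

// cert is the *x509.Certificate parsed above
enrollmentID := cert.Subject.CommonName
// e.g. feed this into your own access-control logic
fmt.Println("proposal creator:", enrollmentID)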
First Question:
Any functions to retrieve role/user information
This is done using the CID library.
The client identity chaincode library enables you to write chaincode which makes access control decisions based on the identity of the client (i.e. the invoker of the chaincode). In particular, you may make access control decisions based on either or both of the following associated with the client:
the client identity's MSP (Membership Service Provider) ID
an attribute associated with the client identity.
Attributes are simply name and value pairs associated with an identity. For example, email=me@gmail.com indicates an identity has the email attribute with a value of me@gmail.com.
For Node.js: https://fabric-shim.github.io/master/fabric-shim.ClientIdentity.html
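For Go chaincode, the equivalent is the cid package referenced in the README linked further down (github.com/hyperledger/fabric/core/chaincode/lib/cid). A minimal sketch inside an Invoke handler, assuming a hypothetical attribute named app1Admin registered as in the example below:

// stub is the shim.ChaincodeStubInterface passed to Invoke
mspID, err := cid.GetMSPID(stub)
if err != nil {
	return shim.Error("could not get MSP ID: " + err.Error())
}
val, found, err := cid.GetAttributeValue(stub, "app1Admin") // hypothetical attribute
if err != nil {
	return shim.Error("could not read attribute: " + err.Error())
}
if !found || val != "true" {
	return shim.Error("client from MSP " + mspID + " is not authorized")
}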
Second Question:
Define and create users with different roles.
https://hyperledger-fabric-ca.readthedocs.io/en/latest/users-guide.html#attribute-based-access-control
fabric-ca-client register --id.name user1 --id.secret user1pw --id.type user --id.affiliation org1 --id.attrs 'app1Admin=true:ecert,email=user1@gmail.com'
or while enrolling
fabric-ca-client enroll -u http://user1:user1pw@localhost:7054 --enrollment.attrs "email,phone:opt"
https://github.com/hyperledger/fabric/blob/release-1.3/core/chaincode/lib/cid/README.md:
the attributes are stored inside the X.509 certificate as an extension with an ASN.1 OID (Abstract Syntax Notation Object IDentifier) of 1.2.3.4.5.6.7.8.1. The value of the extension is a JSON string of the form {"attrs":{"attrName1":"attrValue1", ...}}.
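For illustration, this is roughly how that extension could be read directly from a parsed certificate in Go (a sketch; the helper name attributesFromCert is hypothetical, and in practice the CID library above does this for you):

// Requires "crypto/x509", "encoding/asn1" and "encoding/json".
var attrOID = asn1.ObjectIdentifier{1, 2, 3, 4, 5, 6, 7, 8, 1}

// attributesFromCert returns the attribute name/value pairs embedded in the
// certificate's attribute extension, or nil if the extension is not present.
func attributesFromCert(cert *x509.Certificate) (map[string]string, error) {
	for _, ext := range cert.Extensions {
		if ext.Id.Equal(attrOID) {
			var payload struct {
				Attrs map[string]string `json:"attrs"`
			}
			if err := json.Unmarshal(ext.Value, &payload); err != nil {
				return nil, err
			}
			return payload.Attrs, nil
		}
	}
	return nil, nil
}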
See https://github.com/hyperledger/fabric-samples/blob/release-1.3/fabric-ca/README.md for how to use the Hyperledger Fabric CA client and server to generate all crypto material rather than using cryptogen. The cryptogen tool is not intended for a production environment because it generates all private keys in one location, which must then be copied to the appropriate host or container. This sample demonstrates how to generate crypto material for orderers, peers, administrators, and end users so that private keys never leave the host or container in which they are generated.
I'm going to answer a single point of your question:
Define and create users with different roles.
When you create a user or a component, you create it via the Fabric CA; that is, when you create something, you define what it is going to be: a peer, an orderer, a user, and so on. So the kind of user it is depends on its role.
I don't know if I answered any of your questions. Could you give more info about them?
// Loads secrets whose names start with "{env}_{appName}_" into the configuration,
// stripping the prefix and mapping "__" to ":" so they bind like normal config keys.
builder.Configuration.AddSecretsManager(region: RegionEndpoint.EUCentral1,
    configurator: options =>
    {
        options.SecretFilter = entry => entry.Name.StartsWith($"{env}_{appName}_");
        options.KeyGenerator = (_, s) => s
            .Replace($"{env}_{appName}_", string.Empty)
            .Replace("__", ":");
        options.PollingInterval = TimeSpan.FromSeconds(10);
    });

builder.Services.Configure<DatabaseSettings>(
    builder.Configuration.GetSection(DatabaseSettings.SectionName));
If a hacker were to gain access to my EC2 Windows server, not allowing the connection string to be read from the appsettings.json file would prevent them from accessing it. However, the hacker could potentially use a tool like dnSpy to reverse engineer the code and extract the connection string. Using an obfuscator would also prevent the hacker from being able to read the connection string. So why would I need AWS Secrets Manager?
Secrets Manager is about managing your secrets' lifecycle. If a hacker gains access to your machine, then anything on that machine is vulnerable and should be treated as compromised. For example, if a secret on a machine has been compromised, you can terminate that instance and use Secrets Manager to rotate the secret; depending on how the other parts of your system are coded, they can automatically pick up the rotation. It also provides access controls over who can access the secrets, which can easily be revoked in the case of a compromise.
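One way the rest of the system can pick up a rotated value is to read the secret at runtime instead of baking it into configuration or code. A minimal sketch using the AWS SDK for Go (the secret name my-app/db-connection-string is a placeholder):

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/secretsmanager"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("eu-central-1")))
	svc := secretsmanager.New(sess)

	// Fetch the current version of the secret; after a rotation this call
	// returns the new value without any redeploy.
	out, err := svc.GetSecretValue(&secretsmanager.GetSecretValueInput{
		SecretId: aws.String("my-app/db-connection-string"), // placeholder name
	})
	if err != nil {
		log.Fatalf("get secret: %v", err)
	}
	fmt.Println("secret length:", len(aws.StringValue(out.SecretString))) // don't log the secret itself
}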
We are moving from a monolithic application to a microservice architecture. We're still in the planning phase and want to know the best practices for building it.
Suppose we have two services:
User
Device
getUserDevices(UserId)
addDevice(DeviceInfo, UserId)
...
Each user has multiple devices
What is the most common, cleanest, and most proper way of asking the server to get all of a user's devices?
1- {api-url}/User/{UserId}/devices
needs another HTTP request to communicate with the Device service:
for user X, get the linked devices from the User service.
// OR
2- {api-url}/Device/{UserId}/devices
for user X, get the linked devices from the Device service.
There are a lot of classic patterns available to solve such problems in microservices. You have two microservices: one for User (microservice A) and one for Device (microservice B). A fundamental principle of microservices is to have a separate database for each microservice. If one microservice needs to talk to another (or get data from it), it can, but it would do so through an API. Another way for two microservices to communicate is through events: when something happens in microservice A, it raises an event and pushes it to a central event store or message queue, and microservice B subscribes to some or all of the events emitted by A.
I guess in your domain, A would have methods like Add/Update/Delete a user, and B would have Add/Update/Delete a device. Each user has its own unique id and other data fields like Name, Address, Email, etc. Each device has its own unique id, a user id, and other data fields like Name, Type, Manufacturer, Price, etc. Whenever you "Add" a device, you send a POST request or a command (if you use CQRS) to the Device microservice with the request containing the device data plus the user id, and it raises an event called "DeviceAdded". It can also have events corresponding to Update and Delete, like "DeviceUpdated" and "DeviceRemoved". Microservice A can subscribe to the "DeviceAdded", "DeviceRemoved", and "DeviceUpdated" events emitted by B, and whenever such an event is raised, it handles the event and denormalizes it into its own little database of devices (which you could call UserRelationships). In the future it can listen to events from other microservices too, so the pattern here is extensible and scalable.
So now, to get all devices owned by a user, all you have to do is make an endpoint in the User microservice like "http://{microservice-A-host}:{port}/user/{user-id}/devices", and it will return the list of devices by querying for the user id in its own little UserRelationships database, which it has been maintaining through events. A rough sketch of this idea is shown below.
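A rough sketch of that denormalization in Go (the event shape, the in-memory store, and the route are all hypothetical; a real system would use a message broker and a durable database):

package main

import (
	"encoding/json"
	"net/http"
	"strings"
	"sync"
)

// DeviceAdded is a hypothetical event payload published by the Device service.
type DeviceAdded struct {
	UserID   string `json:"userId"`
	DeviceID string `json:"deviceId"`
	Name     string `json:"name"`
}

// userDevices is the User service's own denormalized "UserRelationships" store.
var (
	mu          sync.RWMutex
	userDevices = map[string][]DeviceAdded{}
)

// handleDeviceAdded would be called by the event subscriber (e.g. a queue consumer).
func handleDeviceAdded(ev DeviceAdded) {
	mu.Lock()
	defer mu.Unlock()
	userDevices[ev.UserID] = append(userDevices[ev.UserID], ev)
}

// GET /user/{user-id}/devices answered purely from the local store.
func devicesHandler(w http.ResponseWriter, r *http.Request) {
	// crude path parsing for the sketch: /user/{id}/devices
	parts := strings.Split(strings.Trim(r.URL.Path, "/"), "/")
	if len(parts) != 3 || parts[0] != "user" || parts[2] != "devices" {
		http.NotFound(w, r)
		return
	}
	mu.RLock()
	defer mu.RUnlock()
	json.NewEncoder(w).Encode(userDevices[parts[1]])
}

func main() {
	handleDeviceAdded(DeviceAdded{UserID: "42", DeviceID: "d1", Name: "thermostat"})
	http.HandleFunc("/", devicesHandler)
	http.ListenAndServe(":8080", nil)
}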
Good Reference is here: https://www.nginx.com/blog/event-driven-data-management-microservices/
It may really be either way, but to my liking I would choose to put it under /Devices/{userId}/devices, as you are looking for the devices given the user id. I hope this helps. Have a nice one!
You are requesting a resource from a service, the resource being a device and the service being the Device service.
From a REST standpoint, you are looking for a resource, and your service provides various methods to manipulate that resource.
The following URL can be used:
[GET] ../device?user_id=xyz
And device information can be fetched via ../device/{device_id}
Having said that, if you had one service providing both user and device data, then the following would have made sense:
[GET] ../user/{userId}/device
Do note that this is just a naming convention and you can pick what suits you best; the point is to pick one and stick to it.
When exposing an API, consistency is what matters most.
One core principle of the microservice architecture is defining clear boundaries and responsibilities for each microservice.
I would say it's the same Single Responsibility Principle from SOLID, but at the macro level.
Considering this principle, we get:
Users service is responsible for user management/operations
Devices service is responsible for operations with devices
Your question is
..proper way of asking the server to get all user devices
It's 100% the responsibility of the Devices service; the Users service knows nothing about devices.
As I can see, you are thinking only in routing terms (yes, API consistency is also important).
On one side, the better and more logical URL is /api/users/{userId}/devices
- you are trying to get a user's devices, and these devices belong to the user.
On the other side, you can use routes like /api/devices/user/{userId} (and /api/devices/{deviceId}), which can be more easily processed by the routing system to send the request to the Devices service.
Taking into account other constraints you can choose the option that is right for your design.
And also a small addition to:
needs another HTTP request to communicate with Device service.
In the architecture of your solution you can create an additional, separate component that routes requests to the desired microservice (an API gateway); direct calls from one microservice to another are not the only option. A minimal sketch of such a routing component follows.
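For illustration, a minimal routing component in Go using the standard library's reverse proxy (the service addresses users-svc:8081 and devices-svc:8082 are placeholders):

package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// proxyTo builds a reverse proxy for one downstream service.
func proxyTo(rawURL string) *httputil.ReverseProxy {
	target, _ := url.Parse(rawURL) // rawURL is a fixed, well-formed placeholder here
	return httputil.NewSingleHostReverseProxy(target)
}

func main() {
	users := proxyTo("http://users-svc:8081")     // placeholder address
	devices := proxyTo("http://devices-svc:8082") // placeholder address

	http.HandleFunc("/api/", func(w http.ResponseWriter, r *http.Request) {
		switch {
		// /api/users/{userId}/devices is still served by the Devices service:
		case strings.HasPrefix(r.URL.Path, "/api/users/") && strings.HasSuffix(r.URL.Path, "/devices"):
			devices.ServeHTTP(w, r)
		case strings.HasPrefix(r.URL.Path, "/api/users/"):
			users.ServeHTTP(w, r)
		case strings.HasPrefix(r.URL.Path, "/api/devices/"):
			devices.ServeHTTP(w, r)
		default:
			http.NotFound(w, r)
		}
	})
	http.ListenAndServe(":8080", nil)
}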
You should query the Device service only,
and treat the user id as a filter in the Device service. For example, you should search on user id the same way you would search for devices by device type: it's just another filter.
Eg : /devices?userid=
Also, you could cache some basic user information in the Device service, to save round trips when fetching user data.
With microservices there is nothing wrong with either option. However, the device API makes more sense, and furthermore I would prefer
GET ../device/{userId}/devices
over
GET ../device?user_id=123
There are two reasons:
As the userId should already be available in the Device service, you'll save one call to the User service. Otherwise it would go Requester -> User service -> Device service.
You can use POST ../device/{userId}/devices to create a new device for a particular user, which looks more RESTful than a parameterized URL.
Having read the official doc, I know that endorsement policy is defined at chaincode instantiation using the '-P' flag.
The doc suggests something like this: -P "AND('Org1.member','Org2.member')". However, the 'member' field is not visible in network-config.json.
Can anyone explain...
What is a 'member' representing in the 'Org1.member'
Can we directly mention peers like this: "AND('Org1.peer2', 'Org2.peer1')"?
Can we address peers with IPs like this: "AND('localhost:7051', 'localhost:7052')"?
What is a 'member' representing in the 'Org1.member'?
The member keyword is a principal which identifies a role within the organization: member stands for a regular entity, while admin means that, for a transaction to be endorsed, someone with admin rights has to sign it.
Can we directly mention peers like this: "AND('Org1.peer2', 'Org2.peer1')"?
No, you cannot mention peers directly in endorsement policies.
Can we address peers with IPs like this: "AND('localhost:7051', 'localhost:7052')"?
And you cannot use endpoints either.
In order to turn a peer into an endorsing peer for some chaincode, you have to install the chaincode on that peer; doing so will allow you to forward endorsement requests to that peer, assuming of course that it is part of the channel and the chaincode has been instantiated on that channel.
In HLF v1.1, if identity classification is enabled, it is possible to specify the "peer" role in the endorsement policy. From the official docs:
A principal is described in terms of the MSP that is tasked to validate the identity of the signer and of the role that the signer has within that MSP. Four roles are supported: member, admin, client, and peer. Principals are described as MSP.ROLE, where MSP is the MSP ID that is required, and ROLE is one of the four strings member, admin, client and peer. Examples of valid principals are 'Org0.admin' (any administrator of the Org0 MSP) or 'Org1.member' (any member of the Org1 MSP), 'Org1.client' (any client of the Org1 MSP), and 'Org1.peer' (any peer of the Org1 MSP).
though it still does not allow specifying named peers or endpoints.
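For reference, the policy is supplied when instantiating the chaincode; a sketch using the peer role (channel name, chaincode name, orderer address, and MSP IDs are placeholders, and the principal must use your MSP ID, e.g. Org1MSP):

peer chaincode instantiate -o orderer.example.com:7050 \
  -C mychannel -n mycc -v 1.0 -c '{"Args":["init"]}' \
  -P "AND('Org1MSP.peer','Org2MSP.peer')"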
I read the protocol specification: https://github.com/hyperledger/fabric/blob/master/docs/protocol-spec.md#5-byzantine-consensus-1
I am wondering:
What exactly happens when the chaincode has a code block restricted by authority?
What exactly happens when the chaincode has a code block that produces an event?
For example, A, B, C, and D are four parties running on four Validating Peers.
There is an authority-restricted code block in chaincode A; only party A has the authority to run that block.
And there is an event-producing code block in chaincode A; only party A can get the result of the event.
So only party A can enter the code block; parties B, C, and D cannot.
How does PBFT reach consensus among A, B, C, and D in such a situation?
Taking into account the comments above, this question can be changed to something like this:
“An example, asset_management.go, has an “isCaller” method which can be executed only by the “caller”. How can PBFT consensus be reached in this case?”
But this formulation would be incorrect, because all of the A, B, C, and D Validating Peers can execute the code in “transfer”, “assign”, and “isCaller” if the transaction is signed with the original “admin” certificate.
Let’s go through this example and study it step by step.
The “asset_management.go” chaincode can be deployed to the ledger by any user with the role “client”.
During deployment, in the Init method, this user's certificate will be saved in the ledger as “admin”:
adminCert, err := stub.GetCallerMetadata()
...
stub.PutState("admin", adminCert)
When somebody wants to submit an assign or transfer transaction to the ledger, they have to sign the request with their own certificate.
This request will be propagated to all VPs in the network.
Each VP will load the “admin” certificate from the ledger and compare it to the certificate which was used to sign this particular request:
adminCertificate, err := stub.GetState("admin")
...
ok, err := t.isCaller(stub, adminCertificate)
If the certificates are not the same, the request will not be accepted by the VPs during the PBFT consensus phase.
If the certificates are the same, all VPs will know that the request was signed by the original “caller” and will proceed with chaincode execution, because they all have the necessary information to do so.
Let's say my web service is located at http://localhost:8080/foo/mywebservice and my WSDL is at http://localhost:8080/foo/mywebservice?wsdl.
Is http://localhost:8080/foo/mywebservice an endpoint, i.e., is it the same as the URI of my web service, or the place where the SOAP messages are received and unmarshalled?
Could you please explain to me what it is and what the purpose of it is?
This is a shorter and hopefully clearer answer...
Yes, the endpoint is the URL where your service can be accessed by a client application. The same web service can have multiple endpoints, for example in order to make it available using different protocols.
Updated answer, from Peter in the comments:
This is the "old terminology"; use the WSDL 2.0 "endpoint"
definition directly (WSDL 2.0 renamed "port" to "endpoint").
Maybe you will find an answer in this document: http://www.w3.org/TR/wsdl.html
A WSDL document defines services as collections of network endpoints, or ports. In WSDL, the abstract definition of endpoints and messages is separated from their concrete network deployment or data format bindings. This allows the reuse of abstract definitions: messages, which are abstract descriptions of the data being exchanged, and port types which are abstract collections of operations. The concrete protocol and data format specifications for a particular port type constitutes a reusable binding. A port is defined by associating a network address with a reusable binding, and a collection of ports define a service. Hence, a WSDL document uses the following elements in the definition of network services:
Types – a container for data type definitions using some type system (such as XSD).
Message – an abstract, typed definition of the data being communicated.
Operation – an abstract description of an action supported by the service.
Port Type – an abstract set of operations supported by one or more endpoints.
Binding – a concrete protocol and data format specification for a particular port type.
Port – a single endpoint defined as a combination of a binding and a network address.
Service – a collection of related endpoints.
http://www.ehow.com/info_12212371_definition-service-endpoint.html
The endpoint is a connection point where HTML files or active server pages are exposed. Endpoints provide information needed to address a Web service endpoint. The endpoint provides a reference or specification that is used to define a group or family of message addressing properties and give end-to-end message characteristics, such as references for the source and destination of endpoints, and the identity of messages to allow for uniform addressing of "independent" messages. The endpoint can be a PC, PDA, or point-of-sale terminal.
A web service endpoint is the URL that another program would use to communicate with your program. To see the WSDL you add ?wsdl to the web service endpoint URL.
Web services are for program-to-program interaction, while web pages are for program-to-human interaction.
So:
Endpoint is: http://www.blah.com/myproject/webservice/webmethod
Therefore,
WSDL is: http://www.blah.com/myproject/webservice/webmethod?wsdl
To expand further on the elements of a WSDL, I always find it helpful to compare them to code:
A WSDL has 2 portions (abstract & concrete).
Abstract Portion:
Definitions - variables - ex: myVar, x, y, etc.
Types - data types - ex: int, double, String, myObjectType
Operations - methods/functions - ex: myMethod(), myFunction(), etc.
Messages - method/function input parameters & return types
ex: public myObjectType myMethod(String myVar)
Porttypes - classes (i.e. they are a container for operations) - ex: MyClass{}, etc.
Concrete (Physical) Portion:
Binding - these connect to the porttypes and define the chosen protocol for communicating with this web service.
- a protocol is a form of communication (so text/SMS, vs. phone vs. email, etc.).
Service - this lists the address where another program can find your web service (i.e. your endpoint).
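To make the analogy concrete, a rough sketch in Go (purely illustrative; all names are hypothetical): the port type corresponds to an interface, an operation to a method, and the messages to its parameters and return type, while the binding and service add the wire protocol and the address.

// MyObjectType plays the role of a type defined in the <types> section.
type MyObjectType struct {
	Value string
}

// MyPortType plays the role of a port type: a container of operations.
type MyPortType interface {
	// MyMethod plays the role of an operation; its parameter and return
	// value correspond to the input and output messages.
	MyMethod(myVar string) (MyObjectType, error)
}

// The binding and service have no direct code analogy here: together they say
// "expose MyPortType over SOAP/HTTP at http://host:port/path" (the endpoint).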
In past projects I worked on, the endpoint was a relative property. That is to say it may or may not have been appended to, but it always contained the protocol://host:port/partOfThePath.
If the service being called had a dynamic part to it, for example a ?param=dynamicValue, then that part would get added to the endpoint. But many times the endpoint could be used as is without having to be amended.
What's important to understand is what an endpoint is not, and how it helps. For example, an alternative way to pass the information stored in an endpoint would be to store the different parts of the endpoint in separate properties. For example:
hostForServiceA=someIp
portForServiceA=8080
pathForServiceA=/some/service/path
hostForServiceB=someIp
portForServiceB=8080
pathForServiceB=/some/service/path
Or if the same host and port across multiple services:
host=someIp
port=8080
pathForServiceA=/some/service/path
pathForServiceB=/some/service/path
In those cases the full URL would need to be constructed in your code as such:
String url = "http://" + host + ":" + port + pathForServiceA + "?" + dynamicParam + "=" + dynamicValue;
In contrast, this can be stored as an endpoint like so:
serviceAEndpoint=http://host:port/some/service/path?dynamicParam=
And yes, many times we stored the endpoint up to and including the '='. This led to code like this:
String url = serviceAEndpoint + dynamicValue;
Hope that sheds some light.
Simply put, an endpoint is one end of a communication channel. When an API interacts with another system, the touch-points of this communication are considered endpoints. For APIs, an endpoint can include a URL of a server or service. Each endpoint is the location from which APIs can access the resources they need to carry out their function.
APIs work using ‘requests’ and ‘responses.’ When an API requests information from a web application or web server, it will receive a response. The place that APIs send requests and where the resource lives, is called an endpoint.
Reference:
https://smartbear.com/learn/performance-monitoring/api-endpoints/
An Endpoint is specified as a relative or absolute url that usually results in a response. That response is usually the result of a server-side process that, could, for instance, produce a JSON string. That string can then be consumed by the application that made the call to the endpoint. So, in general endpoints are predefined access points, used within TCP/IP networks to initiate a process and/or return a response. Endpoints could contain parameters passed within the URL, as key value pairs, multiple key value pairs are separated by an ampersand, allowing the endpoint to call, for example, an update/insert process; so endpoints don’t always need to return a response, but a response is always useful, even if it is just to indicate the success or failure of an operation.
An endpoint is a URL for a web service. Endpoints are also part of a distributed API.
The Simple Object Access Protocol (SOAP) endpoint is a URL. It identifies the location on the built-in HTTP service where the web services listener listens for incoming requests.
Reference: https://www.ibm.com/support/knowledgecenter/SSSHYH_7.1.0.4/com.ibm.netcoolimpact.doc/dsa/imdsa_web_netcool_impact_soap_endpoint_c.html