Hyperledger Fabric Service Discovery - How to get peer tls certificates? - hyperledger-fabric-sdk-go

I am running a Hyperledger Fabric network (v1.3) consisting of 3 orgs. TLS is enabled on all components (including the peer nodes).
I am using the fabric-go-sdk to trigger transactions.
In the log files of the Fabric SDK I often see the following error:
[...]certificate signed by unknown authority[...]
This seems to happen when the SDK (initialized for peers of my own org) tries to contact other nodes on the network whose TLS certificates it does not know.
I also understand that the SDK starts a discovery service and tries to discover additional peers (e.g. peers of a channel).
But how does my SDK retrieve the TLS CA certificates of these peers so that it can contact them?
What I have found out so far is that the discovery service of the SDK contains a function that transforms discovered peers into a PeerConfig by calling the PeerConfig() method:
func asPeer(ctx contextAPI.Client, endpoint *discclient.Peer) {
    // ....
    peerConfig, found := ctx.EndpointConfig().PeerConfig(url)
    // ....
}
But the PeerConfig function also has no idea what the TLS CA cert of the discovered peer is, and so it cannot create a correct PeerConfig object from the provided URL alone.
What is the correct way to configure my SDK so that it can talk to other peers?
Where does the SDK get the TLS CA certificates of the other orgs? Are they discovered at all? Or do I have to provide them manually?

@Subby, don't be confused by all this:
Org1 - org1CA
Org2 - org2CA
If the go-sdk profile contains both organizations, then you have to list the TLS CA cert for each organization's peers.
It is your responsibility to supply the correct TLS CA certs; this has nothing to do with service discovery.
"certificate signed by unknown authority" means the wrong certificate was presented, i.e. one signed by a certificate authority your client does not trust.
All you need to do is configure the TLS CA cert of the appropriate peer of the appropriate org.
Coming to service discovery:
The rule of thumb is that you need at least one known peer to discover the others; the application uses this peer to bootstrap discovery.
Note: you must configure the peer's externally visible gossip endpoint:
- CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
Check the sample discovery result: http://ideone.com/UmM0cK
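Concretely, "listing the tlsca cert" means declaring each org's TLS CA certificate in the SDK's endpoint config. A minimal sketch of the relevant connection-profile section (hostnames, ports, and paths here are placeholders, assuming the usual fabric-sdk-go config layout):

```yaml
peers:
  peer0.org1.example.com:
    url: peer0.org1.example.com:7051
    tlsCACerts:
      path: /crypto/org1/tlsca.org1.example.com-cert.pem
  peer0.org2.example.com:
    url: peer0.org2.example.com:9051
    tlsCACerts:
      path: /crypto/org2/tlsca.org2.example.com-cert.pem
```

Entity matchers can then map URLs returned by discovery onto these entries, so a discovered org2 peer picks up org2's TLS CA cert.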

Related

How to enable Cipher TLS_ECDHE_ECDSA on Windows server 2019 with AWS Load Balancer

The website is on Windows Server 2019 behind an AWS Load Balancer with ELBSecurityPolicy-2016-08. This policy definitely has the ECDHE_ECDSA ciphers enabled; I have checked their docs. The SSL certificate is installed on the LB.
Listing the TLS cipher suites in PowerShell on Windows Server 2019 also shows these suites enabled, but when scanning the website domain with SSL Labs or Zenmap, these suites do not appear:
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
or even these:
TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
Any ideas? The website is ASP.NET Framework 4.7, but I hardly think that has anything to do with the ciphers. Any help will be appreciated. Thanks.
(Snapshots attached: Zenmap, AWS load balancer, PowerShell)
Meta: this isn't about programming, and I'm not sure 'how to operate cloud' counts as development, so I authorize deletion if this is voted off-topic.
Your server is irrelevant here: nothing you set or change on it will affect what the client(s) see.
You don't tell us which AWS load balancer you use, but to operate at the HTTPS level it must be Application or Classic, and in either case, to do HTTPS, it must terminate the SSL/TLS protocol. In other words, the LB establishes one SSL/TLS connection with the client and decrypts the incoming request, parses it, and then optionally uses a separate SSL/TLS connection to the backend to re-encrypt; it reverses the process on the response, decrypting from the backend if necessary and re-encrypting to the client. See the line "SSL Offloading" well down in the table on that page; that's a jargon way of saying "the LB does the SSL/TLS for the client; your server does not".
Thus the settings in the LB, and only those, control the SSL/TLS seen by the client(s). ELBSecurityPolicy-2016-08, which is the default (and I'm guessing that might be why you used it), excludes all DHE-RSA ciphersuites. (To avoid confusion, note the AWS webpage uses the OpenSSL names for ciphersuites, where RSA-only key exchange is omitted from the name, whereas Zenmap/nmap uses the RFC names TLS_RSA_WITH_whatever.) It does allow ECDHE_ECDSA suites, but those will actually be negotiated, and thus seen by a scanner like Zenmap/nmap, only if you configure an ECDSA certificate and key -- which I bet you didn't.

How to check if data is coming from a secure session created by a specific certificate or not (openssl)?

I have multiple certificates on the server, through which multiple secure sessions can be created, and I want to limit command execution based on the certificate used.
For example, let's say I have two certificates, cert_A and cert_B.
I have 3 commands: cmd_1, cmd_2, and cmd_3.
I want only cmd_1 to execute for connections created through cert_A, and the others for cert_B.
Both certificates are stored in non-volatile memory.
The server is implemented in C++ using the OpenSSL library.
Can I read the data from the locally stored certificates and match it against information from the established secure session, such as the subject name obtained through the API
X509_NAME_oneline(X509_get_subject_name(cert), subject_name, name_length)?
Any help would be appreciated!

Traefik Best Practices/Capabilities For Dynamic Vanity Domain Certificates

I'm looking for guidance on the proper tools/tech to accomplish what I assume is a fairly common need.
Suppose there exists a web service, https://www.ExampleSaasWebService.com/, and customers can add vanity domains/subdomains to white-label or resell the service, replacing the domain name with their own. Then there needs to be a reverse proxy that terminates TLS for the vanity domains and routes the traffic to the statically defined (HTTPS) back-end service on the original, non-vanity domain. (There is essentially one "back-end" server, elsewhere on the internet rather than on the local network, that accepts all incoming traffic no matter the incoming domain.) Essentially:
"Customer A" could setup an A/CNAME record to VanityProxy.ExampleSaasWebService.com (the host running Traefik) from example.customerA.com.
"Customer B" could setup an A/CNAME record to VanityProxy.ExampleSaasWebService.com (the host running Traefik) from customerB.com and www.customerB.com.
etc...
I (surprisingly) haven't found anything that does this out of the box, but looking at Traefik (2.x) I'm seeing some promising capabilities and it seems like the most capable tool to accomplish this. Primarily because of the Let's Encrypt integration and the ability to reconfigure without a restart of the service.
I initially considered AWS's native certificate management and load balancing, but I see there is a limit of ~25 certificates per load balancer which seems like a non-starter. Presumably there could be thousands of vanity domains in place at any time.
Some of my Traefik specific questions:
Am I correct in understanding that you can get away without explicitly provisioning a generated list of vanity domains to produce TLS certificates for in the config files? Can they be determined on the fly and provisioned from Let's Encrypt based on the SNI of incoming requests?
E.g. If a request comes to www.customerZ.com and there is not yet a certificate for that domain name, one can be generated on the fly?
I found this note on the OnDemand flag in the v1.6 docs, but I'm struggling to find the equivalent documentation in the (2.x) docs.
Using AWS services, how can I easily share "state" (config/dynamic certificates that have already been created) between multiple servers to share the load? My initial thought was EFS, but it seems a shared EFS file system may not work, because file-change watch notifications don't work on NFS-mounted file systems.
It seemed like it would make sense to provision an AWS NLB (with a static IP and an associated DNS record) that delivered requests to a fleet of 1 or more of these Traefik proxies with a universal configuration/state that was safely persisted and kept in sync.
Like I mentioned above, this seems like a common/generic need. Is there a configuration file sample or project that might be a good starting point that I overlooked? I'm brand new to Traefik.
When routing requests to the back-end service, will the original host name still be identifiable somewhere in the headers? I assume it can't remain in the Host header, as the back-end receives requests on an HTTPS hostname as well.
I will continue to experiment and post any findings back here, but I'm sure someone has setup something like this already -- so just looking to not reinvent the wheel.
I managed to do this with Caddy. It's very important that you configure the ask, interval, and burst options to avoid possible DDoS attacks.
Here's a simple reverse proxy example:
# https://caddyserver.com/docs/caddyfile/options#on-demand-tls
{
    # General Options
    debug
    on_demand_tls {
        # will check "?domain=" and return 200 if the domain is allowed to request TLS
        ask "http://localhost:5000/ask/"
        interval 300s
        burst 1
    }
}
# TODO: use env vars for domain name? https://caddyserver.com/docs/caddyfile-tutorial#environment-variables
qrepes.app {
    reverse_proxy localhost:5000
}
:443 {
    reverse_proxy localhost:5000
    tls {
        on_demand
    }
}

TLS session resumption on Windows

We have C++ code that uses TCP/IP to communicate between a client and a server, with TLS 1.2 for encryption between the two. I'd like to implement TLS session resumption, as it would speed up reconnections, which happen very often in our software. I've scoured SO and lots of other places and come up with very little in the way of definitive answers. The closest I've found is this: https://forums.iis.net/t/1239418.aspx?How+to+enable+TLS+session+resumption+or+Optimize+TLS+handshake+on+Windows+2016+
The instructions from that site are reproduced here:
To enable TLS session tickets on win2k12 r2 and win2k16, you need to follow these steps:
1. Create a DWORD value in the registry set to 1: HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\EnableSslSessionTicket
2. Create a new TLS session ticket key through this PowerShell command: New-TlsSessionTicketKey -Password -Path "C:\KeyConfig\TlsSessionTicketKey.config" -ServiceAccountName "System" (https://technet.microsoft.com/en-us/itpro/powershell/windows/tls/new-tlssessionticketkey)
3. Enable the TLS session ticket key through this PowerShell command: Enable-TlsSessionTicketKey -Password -Path "C:\KeyConfig\TlsSessionTicketKey.config" -ServiceAccountName "System" (https://technet.microsoft.com/en-us/itpro/powershell/windows/tls/enable-tlssessionticketkey)
4. Reboot the server to enable TLS session ticket generation. A reboot is required for the registry entry to take effect.
But I have some issues with it. I could do it in PowerShell, but I'd prefer to do it in C++ code. We don't use HTTP, only TCP/IP. And the service account you specify in Enable-TlsSessionTicketKey may be a user account and not one of the well-defined system accounts.
It can't be this hard, can it? It's not on by default, is it? I'm looking at a Wireshark capture and it doesn't look like it's on. In my Client Hello packets I see: session_ticket len=0, extended_master_secret len=0, renegotiation_info len=1. In my Server Hello messages I don't see any session ticket. I see: extended_master_secret len=0, renegotiation_info len=1.
After opening a support ticket with Microsoft, they did finally provide the means to do this. It's still not great, but it works: not great because we have to do some of it in a PowerShell script (we'd rather do it all in code) and the service account cannot be a normal user account.
1. Pass ASC_REQ_SESSION_TICKET (an undocumented option) into the AcceptSecurityContext call on the server side. This allows the server to generate a ticket.
2. Do steps 2, 3, and 4 above to create a session ticket key, then enable it in the PowerShell script.
3. Make sure you create only a single credential on the client side.
4. The service account must be one of: System, LocalService, NetworkService, or the SID of a virtual account. We used NetworkService.

What is a client in network of Hyperledger fabric peers?

What is a client in a network of Hyperledger fabric peer?
What is the role of a client?
What can qualify as a client in the Hyperledger fabric blockchain network?
have a look at this (and specifically, look into the Network Entities / Systems part):
https://github.com/hyperledger/fabric/blob/master/docs/glossary.md
I'm still rather new to this, but my understanding is that you have (a) peers in a P2P network that can be either validating or non-validating, the latter existing mostly for performance purposes; and (b) clients, who talk to peers in a client-server manner to issue queries and submit transactions to the P2P network.
What can qualify as a client: basically anything that can talk to peers in this manner. (I think there are even some SDKs, but I'm concentrating on other aspects of Hyperledger, so I don't know yet.) Have a look at the IBM Marbles demo:
https://github.com/IBM-Blockchain/marbles
A client application talks to a peer over either the REST or gRPC interface and submits transactions and queries to chaincodes via the peer.
A client is an end user of the application. The client invokes the smart contract by placing a request on the channel. Each smart contract has a required set of endorsing peers. The request is picked up by those endorsing peers and executed, and the resulting read-write sets are sent back to the client.
What is a client in Hyperledger?
The Hyperledger Fabric Client SDK makes it easy to use APIs to interact with a Hyperledger Fabric blockchain.
Features:
- create a new channel
- send channel information to a peer so it can join
- install chaincode on a peer
- instantiate chaincode in a channel, which involves two steps: propose and transact
- submit a transaction, which also involves two steps: propose and transact
- query a chaincode for the latest application state
- various query capabilities
- logging utility with a built-in logger (winston)