Hyperledger Fabric SDK location - blockchain

In the Hyperledger Fabric architecture where is the SDK hosted?
After poring over the Fabric guides and documentation, I still can't find an exact specification of where the SDK for interacting with or deploying chaincode resides in the overall network architecture.
I am curious because, if it is a node on the network, then I see it as a single point of failure.

So, the SDK is actually the part of the business flow that interacts with the customer, and therefore it can be hosted anywhere:
It can be a web application that acts as a front end, with Fabric as the back end.
It can be a phone application that a user has.
It can even be an IoT device, such as a Raspberry Pi, that periodically logs real-world data to the blockchain.
It can be a service that interacts with other services inside a data center.

The Hyperledger Fabric client SDK is a component used to interact with a Fabric network. It resides outside the Fabric network but connects with its own Membership Service Provider (MSP) and network connection profiles. The network connection profile gives client applications the information about the target blockchain network that they need in order to interact with it.
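To make that concrete: a connection profile is just a structured document (usually YAML or JSON) describing the network's endpoints and the client's organization. A minimal sketch, shown here as a Python dict, with every organization, peer, and orderer name invented:

```python
# Minimal sketch of what a connection profile conveys; the real
# artifact is usually a connection.yaml/.json file. All names and
# endpoints below are illustrative placeholders.
connection_profile = {
    "name": "sample-network",
    "version": "1.0",
    "client": {"organization": "Org1"},
    "organizations": {
        "Org1": {"mspid": "Org1MSP", "peers": ["peer0.org1.example.com"]},
    },
    "orderers": {
        "orderer.example.com": {"url": "grpcs://orderer.example.com:7050"},
    },
    "peers": {
        "peer0.org1.example.com": {"url": "grpcs://peer0.org1.example.com:7051"},
    },
}

# A client application combines this profile with its own MSP
# credentials (certificate and private key); armed with both, it can
# connect from anywhere: a web server, a phone backend, a Raspberry Pi.
```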

Related

Common IoT connectivity to GCP, AWS, Azure, etc

I have to extend an existing product based on ARM Cortex-M0 and Cortex-M4 microcontrollers (no RTOS; a bare-metal event loop) to add IoT capabilities.
I'm using the WIZnet W5500 hardwired TCP/IP embedded Ethernet controller to provide internet connectivity for my microcontrollers.
One of the requirements of the project is that it must have cloud connectivity (using MQTT and/or REST APIs) with all the major vendors, i.e. Google Cloud Platform, Amazon Web Services, and Microsoft Azure, plus optional cloud providers such as Linode and DigitalOcean.
The cloud provider is chosen by the client during installation.
As these devices are field-configurable, connectivity to all of these platforms needs to be built into the devices.
While scouring the internet on this topic, I found that GCP has its own set of libraries, and so do AWS and Azure:
Google Cloud IoT Device SDK for Embedded C
AWS IoT Device SDK for Embedded C
Azure SDK for Embedded C
I was under the assumption that, using plain MQTT and/or a REST API, I would be able to communicate with any cloud service. Is my assumption wrong?
Is there an additional communication mechanism or layer introduced on top of MQTT or the REST API by these cloud services that warrants the need for such vendor-specific libraries?
What are my options for interfacing with all of these services?
Can I use the GCP MQTT library to communicate with AWS or Azure, or vice versa?
Can I use the WIZnet IO Library's MQTT client to connect to any of these services?
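For illustration of exactly the distinction being asked about: the wire protocol really is standard MQTT in all three clouds; what the vendor SDKs mainly add is endpoint conventions and authentication. A hedged desktop-side Python sketch using paho-mqtt (all hostnames and credential file names below are placeholders; an embedded-C client would follow the same flow):

```python
# Sketch: the protocol is plain MQTT over TLS; per-vendor differences
# are mostly in naming and credentials. paho-mqtt 1.x constructor shown;
# 2.x additionally takes a callback API version argument.
import ssl
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="my-device-01")

# AWS IoT Core: mutual TLS with an X.509 device certificate.
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="device.pem.crt",
               keyfile="device.private.key",
               tls_version=ssl.PROTOCOL_TLSv1_2)
client.connect("example-ats.iot.us-east-1.amazonaws.com", 8883)

# Azure IoT Hub instead expects a username of the form
# "<hub>.azure-devices.net/<device_id>/?api-version=..." with a SAS
# token (or X.509 cert), and Google Cloud IoT Core expected a JWT
# signed with the device key as the MQTT password. Same protocol,
# different handshakes, which is what the vendor SDKs package up.
client.publish("telemetry/my-device-01", '{"temp": 21.5}', qos=1)
client.loop_forever()
```

So a generic MQTT client such as WIZnet's can in principle reach any of them, provided it supports TLS and the vendor's credential scheme.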

How to establish a private network connection from an AWS server to a remote IoT device running Linux?

How do I deploy code to a remote IoT device running Linux? Does a daemon program have to be written on the remote IoT device to establish the connection? Or does a custom publisher/subscriber shell script or Python program need to run on the IoT device side? Are there any alternative web servers, or is it possible to deploy code from GitLab to a remote IoT device?
AWS IoT Greengrass is exactly the service you are looking for. You can set it up to start with systemd, and it will run a daemon that keeps your IoT device and its device shadow in sync. You can even deploy long-running Lambda functions to your device that run only locally (not in the cloud). All of the deployment, secure connection, updating, and offline handling is done by Greengrass.
I played with it on my Raspberry Pi with a Sense HAT as my home-office sensor. I now have a fancy dashboard of my room temperature, humidity, and more... lots of fun.
You can get started here.
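To sketch what such a locally-run, long-lived Lambda can look like on Greengrass v1, assuming the greengrasssdk package that Greengrass Core provides to Python Lambdas (the topic name and sensor values are invented):

```python
# Hedged sketch of a pinned (long-running) local Lambda for AWS IoT
# Greengrass v1. Publishes fake sensor readings once a minute.
import json
import threading

import greengrasssdk

client = greengrasssdk.client("iot-data")

def read_sensor():
    # Placeholder for real sensor access (e.g. a Sense HAT).
    return {"temperature": 21.5, "humidity": 40.2}

def publish_loop():
    client.publish(topic="home/office/telemetry",
                   payload=json.dumps(read_sensor()))
    # Re-arm the timer so we publish roughly once a minute.
    threading.Timer(60, publish_loop).start()

publish_loop()

def handler(event, context):
    # Entry point required by Lambda; unused when the function is pinned.
    return
```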

Unable to make REST calls to Hyperledger Fabric deployed using the AWS Blockchain Template?

I've set up a Hyperledger Fabric network using the AWS Blockchain Templates. The network is fine and can be viewed using the Explorer. But when I try making REST calls using cURL, I don't get a response. SSHing into the EC2 instance and running the netstat command shows that port 7050 is open and listening. But my REST calls do not give the response listed here.
Please help. Thank you.
The Hyperledger Fabric network deployed using the AWS Blockchain Template runs Fabric version 1.1.0 and as such has no REST API. Instead, it has the orderer operating on port 7050, which speaks gRPC rather than HTTP. Sorry for the confusion.

In SOA (service-oriented architecture), do individual services run on separate servers?

Big banks follow a service-oriented architecture in their operations.
They may have more than 50 services, so do they run individual services on separate servers, or do they group services?
Can anyone explain this in detail?
According to SOA, each service should be able to serve requests independently, and hence each could be hosted on a separate server.
But all these servers should communicate with each other internally, so that the system as a whole is aware of the services offered by each server, and the outside world can hit a single endpoint when requesting a service. Internally, a routing module identifies the server that offers the particular service and serves the request.
It is also possible for more than one server to serve the same request if the expected load is high.
Also, the term server could mean a runtime (something like a JVM if the service is Java-based), or it could mean a machine too.
According to Wikipedia:
Every computer can run any number of services, and each service is built in a way that ensures that the service can exchange information with any other service in the network without human interaction and without the need to make changes to the underlying program itself.
Normally, services of a similar nature, or services communicating with the same code base, DB server, or application server, are grouped together, while a high-volume or long-running service could be served separately to speed up the system.
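To make the "single endpoint plus internal routing module" idea concrete, here is a minimal Python sketch; the service names, hosts, ports, and path convention are all invented for illustration:

```python
# Sketch of a routing gateway: one public endpoint, an internal table
# mapping service names to the server pool that hosts each service.
from http.server import BaseHTTPRequestHandler, HTTPServer
import itertools
import urllib.request

# A service may be backed by several servers for load and fault
# tolerance; itertools.cycle gives simple round-robin over each pool.
ROUTES = {
    "accounts": itertools.cycle(["http://10.0.0.11:8080",
                                 "http://10.0.0.12:8080"]),
    "payments": itertools.cycle(["http://10.0.0.21:8080"]),
}

class Gateway(BaseHTTPRequestHandler):
    def do_GET(self):
        # Path convention: /<service>/<rest-of-path>
        service = self.path.lstrip("/").split("/", 1)[0]
        pool = ROUTES.get(service)
        if pool is None:
            self.send_error(404, "unknown service")
            return
        backend = next(pool)  # pick the next server in the pool
        with urllib.request.urlopen(backend + self.path) as resp:
            body = resp.read()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8000), Gateway).serve_forever()
```

A real gateway would add health checks, retries, and authentication; the point is only that callers see one endpoint while the services live on many servers.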
The whole purpose of service-oriented architecture (SOA) is to give each module (exposed as a service) the freedom to be deployed, implemented, and expanded while affecting the other modules as little as possible. So these services could all be hosted on a single server on different ports, or each on a different server.
Usually at big banks there is a team owning each service, and each service is deployed on a different server. In fact, each service may be deployed on many servers for scalability and fault tolerance.
Usually the services are hosted on an Enterprise Service Bus (ESB), a component which publishes the services to all information systems of the organization and also to external B2B customers (via a B2B gateway).
The services hosted on the ESB may utilize services provided by backend systems. These backend services are considered private and are only consumed via the ESB. This approach eliminates the spaghetti mess that results if everybody integrates directly with everybody else.
Most of the ESB systems I have come across were highly available solutions with an HA database, a cluster of application servers, and a load balancer, all together creating a platform that ensures the stability and performance of the services.
The number of services in an enterprise can be very large; I have been involved in projects with hundreds of services, and the largest corporations can run thousands of services.
I recommend checking Wikipedia for more about ESBs.

How to implement a service as an app in a DEA?

I am trying to create a clustered cache service for Cloud Foundry. I understand that I need to implement the Service Broker API. However, I want this service to be clustered and to live in the Cloud Foundry environment. As you know, container-to-container (TCP) connectivity is not supported yet, and I don't want to host my backend in another environment.
Basically my question is almost same as this one: http://grokbase.com/t/cloudfoundry.org/vcap-dev/142mvn6y2f/distributed-caches-how-to-make-it-work-multicast
And I am trying to achieve the solution he advised:
B) is to create a CF Service by implementing the Service Broker API as some of the examples show at the bottom of this doc page [1]. Services have no inherent network restrictions, so you could have a CF Caching Service that uses multicast in the cluster; then you would have local cache clients on your apps that could connect to this cluster using outbound protocols like TCP.
First of all, where does this service live? In the DEA? Will the backend implementation be in the broker itself? How can I implement the backend so the cluster scales: by starting the same service broker over again?
Second, and another really important question: how do the other services work if TCP connections are not allowed for apps? For example, how does a MySQL service communicate with an app?
There are a few different ways to solve this; the more robust the solution, the more complicated it gets.
The simplest solution is to have a fixed number of backend cache servers, each with its own distinct route, and let your client applications implement (HTTP) multicast to these routes at the application layer. If you want the backend cache servers to run as CF applications, then for now, all solutions will require something to perform this HTTP multicast logic at the application layer.
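A minimal sketch of that client-side HTTP "multicast", assuming each backend cache exposes a simple PUT /cache/&lt;key&gt; endpoint (the routes and endpoint shape are invented):

```python
# Fan the same cache write out to every backend route in parallel.
import concurrent.futures
import urllib.request

BACKEND_ROUTES = [
    "https://cache-0.example-cf-domain.com",
    "https://cache-1.example-cf-domain.com",
    "https://cache-2.example-cf-domain.com",
]

def put(route, key, value):
    req = urllib.request.Request(f"{route}/cache/{key}",
                                 data=value.encode(),
                                 method="PUT")
    with urllib.request.urlopen(req) as resp:
        return resp.status

# A real client would also handle partial failures and retries.
with concurrent.futures.ThreadPoolExecutor() as pool:
    statuses = list(pool.map(lambda r: put(r, "greeting", "hello"),
                             BACKEND_ROUTES))
print(statuses)
```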
The next step would be to introduce an intermediate service broker, so that your client apps can all just bind to the one service to get the list of routes of the backend cache servers. So you would deploy the backends, then deploy your service broker API instances with knowledge of the backends, and when client apps bind, they will get this information in the user-provided service metadata.
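For a rough idea of the broker half, here is a hedged sketch of the two key v2 Service Broker API endpoints, using Flask for brevity; the IDs, names, and hard-coded route list are placeholders:

```python
# Sketch of a v2 service broker whose bindings hand out the current
# list of backend cache routes as credentials.
from flask import Flask, jsonify

app = Flask(__name__)

BACKEND_ROUTES = [
    "https://cache-0.example-cf-domain.com",
    "https://cache-1.example-cf-domain.com",
]

@app.route("/v2/catalog")
def catalog():
    # Advertise one bindable service with a single plan.
    return jsonify(services=[{
        "id": "cache-service-id", "name": "clustered-cache",
        "description": "clustered cache service", "bindable": True,
        "plans": [{"id": "default-plan-id", "name": "default",
                   "description": "default plan"}],
    }])

@app.route("/v2/service_instances/<iid>/service_bindings/<bid>",
           methods=["PUT"])
def bind(iid, bid):
    # Apps that bind receive the route list in their credentials.
    return jsonify(credentials={"cache_routes": BACKEND_ROUTES}), 201

if __name__ == "__main__":
    app.run(port=8080)
```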
What happens when you want to scale the backends up or down? You can then get more sophisticated: the backends register themselves with some sort of central metadata/config/discovery service, your client apps bind to this service, and they can periodically query it for live updates of the cache server list.
You could alternatively move the multicast logic into a single (clustered) service, so:
the backend caches register with the config/metadata/discovery service
the multicaster periodically queries the discovery service for the list of cache server routes
client apps make requests to the multicaster service
One difficulty is in implementing the metadata service if you're doing it yourself. If you want it clustered, you need to implement a highly-available, reasonably consistent datastore, which is almost the original problem you're solving, except that the service handles replicating data to all nodes in the cluster, so you don't have to multicast.
You can look at https://github.com/cloudfoundry-samples/github-service-broker-ruby for an example service broker that runs as a CF application.