How can I add a node to an existing network? - blockchain

I created a network with 4 peers using docker-compose and Docker for Mac.
I deployed my blockchain on this network successfully.
Now I'm launching a 5th peer from another yml file, using the details of one of the previous peers as its discovery node.
It appears in the list returned by http://localhost:7050/network/peers, however my blockchain is not deployed on this peer and I cannot use it to process transactions.
Do I have to deploy the chaincode again on this peer? Did I miss something?

This is a limitation of Fabric versions 0.5 and 0.6.
The network configuration cannot be changed at runtime. If you use PBFT consensus, the network configuration is hardcoded in:
"fabric/consensus/pbft/config.yaml"
# Maximum number of validators/replicas we expect in the network
# Keep the "N" in quotes, or it will be interpreted as "false".
"N": 4
The challenge is updating the configuration on all peers synchronously; otherwise they will not be able to reach consensus.
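For illustration, growing the network to 5 validating peers would mean setting the value below in that file on every peer and restarting them all together (a sketch; PBFT tolerates f faulty replicas only while N >= 3f + 1):
# fabric/consensus/pbft/config.yaml (same value on every validating peer)
"N": 5
# With N = 5 the network tolerates f = 1 faulty peer, since N >= 3f + 1.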
In a future Fabric version this configuration parameter will be moved onto the blockchain itself, and it will be possible to add new peers and modify the consensus configuration on the fly.
Update for the question in the comments:
I have only seen this high-level roadmap proposal:

Related

Kubernetes: How to connect one pod to another on an arbitrary port - with or without services?

We are currently transitioning our apps to Kubernetes and I have two apps, appP and appH, that need to communicate with each other over a port unknown at start-up time.
Unlike most of our apps, we don't have a set port for them to communicate over. Before Kubernetes, a third-party app (out of my control) would tell appP to start processing an item, itemA, identified by a unique id, and it would also tell appH to handle the processed data produced by appP.
To coordinate communications between appP and appH, appH would generate a port based on the unique id and publish the host and port info to connect on to an intermediate app (IA). appP, once done with its processing, queries IA for the connection information based on the unique id and sends the data over.
Now we have to adapt this to Kubernetes. Each app runs in its own deployment, as does the IA. So how can I set up appH to accept the connection over a port without being able to specify it in the service definition?
Note: I've seen some posts saying that pods should be able to communicate with any other pods in the cluster regardless of the ports specified in the service definition, but I can't find much information confirming this, and I don't have much free time on our cluster to bang my head against it.
Would it just work fine as is? My biggest worry is the IP resolution. Currently appH determines its IP based on the host it's running on (using Boost). I'm not sure how this resolves within a container.
If not, my next thought would be to set up a headless service with a selector for appH in order to allow for IP resolution. What I am unsure of then is whether I could have appP connect to <appH_Service>:<arbitrary_port>?
Would the service even have to be headless in this scenario? I mostly suggest headless with a selector because I saw in one specific post that it is the only Service type that doesn't need a port in its spec. I'm also unsure whether the connection would go through unless appP connected with the actual pod's IP rather than the service's.
Any info or clarification is appreciated. For the most part, I can't really change the architecture of these apps right now; I just have to get them talking to each other as is, and I haven't found much clear information on this type of case.
Note: we use Helm and CoreDNS, if anyone is curious.
The Kubernetes networking model is as follows: a Pod is a group of containers that share a single network identity (a cluster IP). Any port exposed by a container is thus automatically exposed on the Pod. The model demands that each Pod can communicate with every other Pod.
This means that your current design can work without modification.
What Services bring to the table is a stable network identity for a group of Pods that is otherwise very volatile. That does not apply to your appP/appH coupling, I think.
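If you do want a stable DNS name for the appH Pods anyway, a headless Service is a minimal way to get one. A sketch, assuming the appH Pods carry the (hypothetical) label app: apph:
apiVersion: v1
kind: Service
metadata:
  name: apph
spec:
  clusterIP: None   # headless: DNS returns the Pod IPs directly, nothing is proxied
  selector:
    app: apph       # must match the labels on the appH Pods
Because the name resolves straight to the Pod IPs, appP can connect to apph.<namespace>.svc.cluster.local on whatever port appH happened to open; no port needs to appear in the Service spec.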

How the Ethereum protocol works with geth

I am new to Ethereum and to blockchain in general. I learned that the Ethereum blockchain works on Kademlia. The distributed hash table and how it works were beautifully and clearly explained by Eleuth P2P.
Now I used geth to connect to the Ethereum Mainnet and it discovered a maximum of 2 to 3 peers in 5 to 6 minutes.
I know the algorithm, but my concern is: how is the first peer discovered? The internet is just a big set of routers and different types of computers (servers, desktops, etc.), and if you broadcast the discovery like in ARP, the internet would be flooded with these peer-discovery broadcast messages, which doesn't seem right. So how are the initial connections made? Also, we cannot trust a single network for the first connection, because that would make the system client/server based and not decentralised. So how do the initial connections and peer discovery happen?
Do the broadcast messages have a TTL to prevent circular loops, like the TTL in IP packets? That also seems like a horrible idea to me.
Please explain.
In order to get going initially, geth uses a set of bootstrap nodes whose endpoints are recorded in the source code.
Source: Geth docs
Here's the list of the bootstrap nodes hardcoded in the Geth source code: https://github.com/ethereum/go-ethereum/blob/v1.10.11/params/bootnodes.go#L23
The --bootnodes option allows you to override this list with your own. Each entry has the form enode://<node-public-key>@<ip>:<port>. Example from the docs linked above:
geth --bootnodes enode://pubkey1@ip1:port1,enode://pubkey2@ip2:port2,enode://pubkey3@ip3:port3
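Once the node is running, you can check what discovery has found from geth's built-in JavaScript console (admin.peers is part of the standard admin API):
geth attach
> admin.peers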

How to avoid a Hyperledger Composer REST server restart when upgrading an installed Composer network (with changes to model files)?

We have a working setup of 3 peer nodes and a multi-user REST server running on 1 of the peers. There are multiple user cards created and imported in the REST server (using a web-based client), which is working fine. I can trigger transactions and query the blockchain with it.
However, in case I need to upgrade my network and there is some change in a model file (i.e. any participant/asset/transaction parameters change), I need to restart the REST server so that the change can be observed by the web-based client application. So my questions are:
1. Is there a way to upgrade the REST interfaces without restarting the server?
2. In case the REST server crashes or is restarted, is there some way to use the old cards that were created before the server shutdown?
When the REST server starts you can see that it "discovers" the Business Network and then generates the endpoints. The discovery is not dynamic, so when you change the model or another element of a BNA you need to restart the REST server to re-discover the updated network. (In a live scenario I would expect changes to the model to be infrequent.)
Are you using multi-user mode for the REST server? Assuming you are, configuring the REST server with a persistent data source, as described in the documentation or this tutorial, should solve the problem of re-importing the cards. You could also "back up" the cards after they have been used the first time by exporting them.
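A sketch of starting the REST server against a persistent MongoDB card store, using the environment variables from the Composer documentation (the card name and the mongo host are examples):
export COMPOSER_CARD=admin@my-network
export COMPOSER_NAMESPACES=never
export COMPOSER_AUTHENTICATION=true
export COMPOSER_MULTIUSER=true
export COMPOSER_DATASOURCES='{"db":{"name":"db","connector":"mongodb","host":"mongo"}}'
composer-rest-server
With the data source in MongoDB rather than in memory, the imported cards survive a crash or restart of the REST server.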

Coordinating master and worker machines

If this question seems basic to more IT-oriented folks, then I apologize in advance. I'm not sure it falls under the ServerFault domain, but correct me if I'm wrong...
This question concerns some backend operations of a web application, hosted in a cloud environment (Google). I'm trying to assess options for coordinating our various virtual machines. I'll describe what we currently have, and those "in the know" can maybe suggest a better way (I hope!).
In our application there are a number of different analyses that can be run, each of which has different hardware requirements. They are typically very large, and we do NOT want these to be run on the application server (referred to as app_server below).
To that end, when we start one of these analyses, app_server will start a new VM (call this VM1). For some of these analyses, we only need VM1; it performs the analysis and sends an HTTP POST request back to app_server to let it know the work is complete.
For other analyses, VM1 will in turn launch a number of worker machines (worker-1,...,worker-N), which run very similar tasks in parallel. Once the task on a single worker (e.g. worker-K) is complete, it should communicate back to VM1: "hey, this is worker-K and I am done!". Once all the workers (worker-1,...,worker-N) are complete, VM1 does some merging operations and finally communicates back to app_server.
My question is:
Aside from starting a web server on VM1 which listens for POST requests from the workers (worker-1,..), what are the potential mechanisms for having those workers communicate back to VM1? Are there non-webserver ways to listen for HTTP POST requests and do something with the request?
I should note that all of my VMs are operating within the same region/zone on GCE, so they are able to communicate via internal IPs without any special firewall rules, etc. (e.g. running $ ping <other VM's IP addr> works). I obviously do not want any of these VMs (VM1, worker-1, ..., worker-N) to be exposed to the internet.
Thanks!
Sounds like the right use-case for Cloud Pub/Sub. https://cloud.google.com/pubsub
In your case the workers would publish events to a topic and VM1 would subscribe to them.
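A minimal sketch with the gcloud CLI (the topic and subscription names are made up):
# One-time setup: a topic the workers publish to, and VM1's subscription.
gcloud pubsub topics create worker-status
gcloud pubsub subscriptions create vm1-sub --topic=worker-status
# On worker-K, when its task finishes:
gcloud pubsub topics publish worker-status --message='{"worker":"worker-K","status":"done"}'
# On VM1, poll for finished workers (a client library would stream instead):
gcloud pubsub subscriptions pull vm1-sub --auto-ack --limit=10
No web server is needed on VM1, and nothing is exposed to the internet; the VMs only make outbound calls to the Pub/Sub API.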
It's hard to tell from your high-level overview whether it's a match, but take a look at Cloud Composer too: https://cloud.google.com/composer/

Hyperledger Fabric v1.0 local environment setup containing 4 peers to deal with 4 applications running on different servers

I am trying to implement a demo blockchain network using Hyperledger Fabric v1.0. I have followed Getting Started and everything has gone fine so far. I was also able to set up the sample network by following Building Your First Network.
But I am still not clear on how to meet my requirement, described below.
I have 4 applications running on 4 different WebLogic servers, and an asset created by 1 application should be shared among the other three applications.
Eg:
App1 creating Asset1 of quantity 100
By running the chaincode I need to share Asset1 among the other 3 applications in the ratio App2:App3:App4 = 20:40:30
Previously I was trying out the same thing using the Hyperledger Fabric v0.6 service provided by IBM Bluemix, and I am now upgrading to v1.0 by setting up the local environment.
In the sample network there are 2 organisations with 2 peers each. In my case I need to set up 4 peers, one for each application, and I need some suggestions on the points below.
How do I create 4 peers for this requirement? Do the peers need to be set up on the different machines where each server is running, or can I set up all 4 peers on the same machine?
Can I customise this 2-organisations-with-2-peers model into a 4-organisations-with-1-peer model, to deal with each application?
Someone please clarify these and give your valuable suggestions to meet this requirement.
Thanks in advance.
How do I create 4 peers for this requirement? Do the peers need to
be set up on the different machines where each server is running, or
can I set up all 4 peers on the same machine?
You can just extend the docker-compose file to have more peers.
If you want to set up peers on different machines, you can use this script, provided you have VMs to use as peers and a VM to be used as an orderer.
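For illustration, each extra peer is just another service entry in the compose file, along these lines (the names, MSP ID and ports are examples; the full set of environment variables can be copied from the sample you already have):
peer0.org3.example.com:
  container_name: peer0.org3.example.com
  image: hyperledger/fabric-peer
  environment:
    - CORE_PEER_ID=peer0.org3.example.com
    - CORE_PEER_ADDRESS=peer0.org3.example.com:7051
    - CORE_PEER_LOCALMSPID=Org3MSP
    - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/fabric/msp
  ports:
    - 9051:7051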
Can I customise this 2-organisations-with-2-peers model into a
4-organisations-with-1-peer model, to deal with each application?
You need to extend the configtx.yaml file to have more organizations, and then also update the crypto-config.yaml accordingly.
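A sketch of the crypto-config.yaml for 4 organisations with 1 peer each (the domains are examples):
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 1   # one peer for App1
    Users:
      Count: 1
  - Name: Org2
    Domain: org2.example.com
    Template:
      Count: 1   # one peer for App2
    Users:
      Count: 1
  # ...and likewise for Org3 and Org4
Each of those organisations then also needs an entry (MSP ID, MSP directory, anchor peer) in configtx.yaml, and all four should be included in the channel's consortium.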