When using Hyperledger with VirtualBox, can I have several instances on one network? - virtualbox

I am a beginner in blockchain, so I have many questions.
When configuring a Hyperledger network, I create multiple Ubuntu instances using VirtualBox on one PC. Can the peers within each instance be connected in a single blockchain network?
Thanks in advance for your help.

You are correct in identifying that the peers need to communicate with each other. In the default development Fabric, all the Docker containers run on a single machine and the network addressing/routing is managed by Docker Compose. If you split your Fabric across separate VirtualBox Ubuntu instances, you will have to understand and manage the network addressing/routing yourself. This is a Docker and networking issue rather than a Fabric or Composer issue. You may find that Kubernetes or Docker Swarm is the most helpful way forward.
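If you go the Docker Swarm route, a minimal sketch looks like the following (the network name fabric-net, the placeholder IPs, and the final docker run line are assumptions, not part of your current setup):
# on one VM, initialize the swarm (its IP becomes the manager address)
docker swarm init --advertise-addr <manager-vm-ip>
# on each other VM, join using the token printed by the command above
docker swarm join --token <token> <manager-vm-ip>:2377
# back on the manager, create an attachable overlay network for the peers
docker network create --driver overlay --attachable fabric-net
# start each peer container on that network so containers on different VMs can reach each other by name
docker run -d --network fabric-net --name peer0 hyperledger/fabric-peer peer node start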

To do that, you have to create your Fabric network and then add everyone to it. You need to configure each and every peer in the channel configuration using the configtxgen tool.
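As a rough sketch, the channel artifacts are typically generated with configtxgen before the peers join; the profile and channel names below are placeholders and have to match your own configtx.yaml:
# generate the genesis block for the ordering service (profile name is an example)
configtxgen -profile SampleOrdererGenesis -outputBlock genesis.block
# generate the channel creation transaction for the channel the peers will join
configtxgen -profile SampleChannel -outputCreateChannelTx channel.tx -channelID mychannel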

Related

Deploying Apache Cloudstack with vSphere/vCenter

For a group project in one of my university IT classes, each group is given 3 servers and the professor wants us to get an Apache CloudStack environment running using those three. While initially vague on instructions, he later informed us that we should install the ESXi hypervisor on all 3 of our servers and go from there.
We first installed ESXi on all 3 of our servers. Then we installed vCenter server on one of them in order to combine all the computing resources by adding each as a host in a cluster before we start setting up CloudStack. What we are about to do next is install the CloudStack Management server on a VM created in vCenter server.
I was reading the CloudStack documentation before we started the installation, which is where my question stems from. The documentation mentions that a host should not have any running VMs on it before being added to CloudStack. Here is the exact text:
Ideally clusters that will be managed by CloudStack should not contain any other VMs. Do not run the management server or vCenter on the cluster that is designated for CloudStack use. Create a separate cluster for use of CloudStack and make sure that there are no VMs in this cluster.
So my question is, does that include the management server VM? If it does, would that mean we have to make a separate cluster for just the host server that contains the management server? Because if that's the case, we can't use any of the other resources on the server that is running the management server. Or does it mean that you can, but it's just not recommended?
On top of that, the documentation also mentions the following:
Put all target ESXi hypervisors in dedicated clusters in a separate Datacenter in vCenter.
So would I have to put the ESXi host containing vCenter Server and CloudStack Management Server in both a separate datacenter and cluster?

Issue setting up Open vSwitch on GCE (DHCP client not working)

I am trying to simulate an on-premises solution on GCP.
I am not able to bridge the GCE NIC and get a DHCP client working on it.
I have isolated the issue and also successfully tested the same setup in a sandboxed Vagrant (VirtualBox) environment.
Both approaches are scripted and available on the following repos:
https://github.com/htssouza/ovs-gcp-issue
The DHCP functionality for Compute Engine only provides and manages the IP address for the instance itself. It does not function as a general-purpose DHCP server for other clients hosted inside the instance.
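If you need DHCP for nested guests behind the OVS bridge, one workaround is to run your own DHCP server inside the instance. A minimal sketch with dnsmasq follows; the bridge name br0 and the 192.168.100.0/24 range are assumptions:
# create the OVS bridge and give it an address in a private range
sudo ovs-vsctl add-br br0
sudo ip addr add 192.168.100.1/24 dev br0
sudo ip link set br0 up
# hand out leases from that range to guests attached to the bridge
sudo dnsmasq --interface=br0 --bind-interfaces --dhcp-range=192.168.100.10,192.168.100.100,12h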

Deployment using Kubernetes - feasibility of Master And Nodes in same machine

I am trying to deploy my microservices using Kubernetes. I have one Ubuntu 16.04 machine as an AWS EC2 instance, and on that EC2 instance I need to use Kubernetes to deploy my microservices developed using Spring Boot. I have already explored the architecture of Kubernetes, but I got stuck when learning how to install Kubernetes on Ubuntu.
The guides show that at least two machines are needed, one for the master and another for the nodes (worker machines). I am adding one or two links that I read for installing Kubernetes below:
https://medium.com/@Grigorkh/install-kubernetes-on-ubuntu-1ac2ef522a36
https://medium.com/@SystemMining/setup-kubenetes-cluster-on-ubuntu-16-04-with-kubeadm-336f4061d929
Here I need to clarify my confusion related to Kubernetes and its installation. I am adding my questions below:
Can I use one Ubuntu 16.04 machine for both master and worker for my microservice deployment?
Can I integrate Kubernetes with Jenkins in the same ubuntu 16.04 machine, since I am planning to choose Ec2 Ubuntu 16.04 LTS for this?
If master and node on the same machine is possible (doubt 1), then how can I create different numbers of nodes when I initialize my cluster using kubeadm init?
I am only a beginner with this.
Let's clarify one by one.
Can I use one ubuntu 16.04 machine for both master and worker for my microservice deployment?
Yes, you can use one server for all components, but only if you run your master and node in separate VMs or containers. Theoretically, it is possible to create an all-in-one server without that, but it is a tricky way and I don't recommend it.
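A minimal sketch of the usual single-machine approach with kubeadm (assuming a CNI plugin such as flannel, whose default pod CIDR is used here):
# initialize the control plane on the single machine
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# remove the master taint so regular workloads can be scheduled on this node
kubectl taint nodes --all node-role.kubernetes.io/master-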
Can I integrate Kubernetes with Jenkins on the same Ubuntu 16.04 machine, since I am planning to choose EC2 Ubuntu 16.04 LTS for this?
You can, for example, install Jenkins inside Kubernetes, or install it somewhere else and integrate it. So yes, you can. Here is one of the articles about it.
If master and node on the same machine is possible (doubt 1), then how can I create different numbers of nodes when I initialize my cluster using kubeadm init?
You cannot create multiple nodes on a single machine without a Docker-in-Docker solution or VMs.
Actually, I highly recommend Minikube for single-node Kubernetes. It will automatically create a local cluster in a VM for you in one click.
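For example (assuming VirtualBox is installed as the VM driver):
# create a single-node local cluster inside a VirtualBox VM
minikube start --vm-driver=virtualbox
# verify the node is ready
kubectl get nodes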

docker overlay for local development and remote containers

I am looking to improve our development cycle, which uses multiple Docker containers shared by multiple dev teams.
Currently each dev team is responsible for a few services that depend on other teams' services, meaning all dev teams need to run all containers locally.
What I'm trying to figure out is how a local container can be exposed to a remote network on a remote cluster, so that each team can join that network without needing to run all the services locally.
One possible solution is using an SSH tunnel to share the docker.sock file, so registered services will be exposed to other machines:
ssh -nNT -L /tmp/docker.sock:/var/run/docker.sock <USER>@<IP> &
and
export DOCKER_HOST=unix:///tmp/docker.sock
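With the tunnel and the DOCKER_HOST export in place, ordinary docker commands run against the remote daemon, for example:
# lists the containers running on the remote host, not the local one
docker ps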

Hyperledger Multinode setup

I am trying to set up a blockchain network using 4 VMs. Each of the VMs has the fabric-peer and fabric-membersrvc Docker images, and that seems to work successfully. I have set up passwordless SSH among all VMs for a normal (non-root) user. But the Docker containers are unable to communicate with each other.
Do I require passwordless SSH for the "root" user among the VMs? Are there any other requirements?
The membersrvc Docker image is not required on all VMs. Currently (v0.6) there can be only one membersrvc.
If all your peers are Docker containers, they talk to each other via their advertised address, which you can set through an environment variable when you start the peer containers:
-e "CORE_PEER_ADDRESS=<ip of docker host>:7051"
Make sure you don't use the IP of the container: since you don't have a Swarm cluster running (for overlay networking), containers on other hosts cannot reach the private IP of a container on another host.
In order to get peers running in Docker to talk to each other (see the sketch after this list):
Make sure that the gRPC ports are mapped from the Docker VM to the host.
Set CORE_PEER_ADDRESS to <IP of host running docker>:<grpc port>.
Make sure you use the IP of the host for the gRPC communication addresses, such as the membersrvc address, discovery root node, etc.
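As a sketch, starting one peer on one VM might look like the following for v0.6; the IPs, the peer ID vp1, and the root-node variable (only needed on non-root peers) are placeholders, not a definitive configuration:
# map the gRPC port to the host and advertise the host's IP, not the container's
docker run -d -p 7051:7051 \
  -e CORE_PEER_ID=vp1 \
  -e "CORE_PEER_ADDRESS=<ip of this VM>:7051" \
  -e "CORE_PEER_DISCOVERY_ROOTNODE=<ip of the root peer's VM>:7051" \
  hyperledger/fabric-peer peer node start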