Hyperledger Multinode setup - blockchain

I am trying to set up a blockchain network using 4 VMs. Each VM has the fabric-peer and fabric-membersrvc docker images, and those appear to run successfully. I have set up passwordless SSH among all the VMs for a normal (non-root) user, but the docker containers are unable to communicate with each other.
Do I need passwordless SSH for the root user among the VMs? Are there any other requirements?

The membersrvc docker image is not required on all VMs; currently (v0.6) there can be only one membersrvc.
If all your peers are docker containers, they talk to each other via their advertised address, which you can set through an environment variable when you start the peer containers:
-e "CORE_PEER_ADDRESS=<ip of docker host>:7051"
Make sure you don't use the container's IP: since you don't have a Swarm cluster running (for overlay networking), containers on one host cannot reach the private IPs of containers on another host.
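For example, a minimal sketch of starting a peer this way (the host IP 10.0.1.10 is hypothetical, the image tag depends on your v0.6 build, and security/CA settings are omitted):
# publish the gRPC port and advertise the docker host's IP, not the container's
docker run -d --name vp0 \
  -p 7051:7051 \
  -e "CORE_PEER_ID=vp0" \
  -e "CORE_PEER_ADDRESS=10.0.1.10:7051" \
  hyperledger/fabric-peer peer node start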

In order to get peers running in docker to talk to each other:
- Make sure that the gRPC ports are mapped from the docker container to the host.
- Set CORE_PEER_ADDRESS to <IP of host running docker>:<gRPC port>.
- Make sure you use the IP of the host for the gRPC communication addresses, such as the membersrvc address, discovery root node, etc. (see the sketch below).
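A hedged sketch for a peer on a second VM (the host IPs 10.0.1.10 / 10.0.1.11 are hypothetical): the discovery root node points at the first host's IP rather than any container IP, and the membersrvc address settings are omitted here because they depend on your v0.6 security configuration.
docker run -d --name vp1 \
  -p 7051:7051 \
  -e "CORE_PEER_ID=vp1" \
  -e "CORE_PEER_ADDRESS=10.0.1.11:7051" \
  -e "CORE_PEER_DISCOVERY_ROOTNODE=10.0.1.10:7051" \
  hyperledger/fabric-peer peer node start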

Related

Access AWS SSM Port inside Docker container

I am trying to access some AWS resources from inside a docker container. I therefore have a port-forwarding SSM session running on the host, and everything works fine when I access the resources via localhost:<port>.
However, from inside a docker container I cannot access the same resources via 172.17.0.1:<port>. Host communication per se seems to work just fine, as I can reach a local web server via 172.17.0.1:8000. Only the combination of SSM and docker seems to be a problem.
nmap inside the container also shows the port as closed.
Is there any way to get the combination of SSM and docker up and running?
I suspect that what is happening is that AWS SSM is port forwarding to localhost and is bound to the loopback interface.
When I run AWS SSM port forwarding, I can access the port via localhost but not via my machine's IP:port.
So when docker tries to reach the port via its own NATed IP, it cannot connect.
I have the same issue that I am trying to solve with minikube: since I can only access my ports via localhost on my system, minikube cannot reach my local ports.
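As an illustration (not part of the answer above), you can confirm the binding and, if needed, relay the forwarded port onto the docker bridge; port 9999 is a hypothetical forwarded port:
ss -tlnp | grep 9999   # "127.0.0.1:9999" means only loopback can reach it
socat TCP-LISTEN:9999,bind=172.17.0.1,fork,reuseaddr TCP:127.0.0.1:9999 &   # containers can now use 172.17.0.1:9999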
If I understand correctly, you are trying to connect to a web server and this works from the container host, but when logged into the docker container itself you cannot reach it?
If that is what you meant, it could be related to the fact that containers have a different network interface from the host and thus different security groups. If the receiving server's security group is configured to allow traffic from the host, but not from the security group of the containers running on the host, that would be a possible explanation for what you experienced.

Call Container running on AWS EC2

I have a Linux EC2 instance with Docker running on it.
Docker is running 3 services: 2 are workers and one is an API.
I want to be able to call that API from outside the system, but I am getting "This site can't be reached".
The service is running, and the call is a simple ping which works locally through VS, so I don't believe that is the issue.
My security group has all traffic allowed with 0.0.0.0/0.
I have attempted the following URLs, but no luck:
http://ec2-{public-ip}.ap-southeast-2.compute.amazonaws.com/ping
http://{public-ip}/ping
http://{public-ip}/172.17.0.2/ping (the container's IP address)
Based on the comments.
EXPOSE does not actually "expose" a port:
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To actually publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.
Thus, to publish port 80 from your container to the instance, you have to use the -p 80:80 option.
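As a sketch, assuming the API listens on port 80 inside the container (the image name my-api is hypothetical):
docker run -d -p 80:80 my-api
curl http://{public-ip}/ping   # from outside the instance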

Docker Swarm on AWS: Expose service port not working on AWS virtual machine

I have 2 AWS virtual machine instances, running on 2 public IPv4 addresses, A.B.C.D and X.Y.Z.W.
I installed Docker on both machines and launched Docker Swarm with node A.B.C.D as the manager and X.Y.Z.W as a worker. When I initialized the Swarm, I used A.B.C.D as the advertise-addr, like so:
docker swarm init --advertise-addr A.B.C.D
The Swarm initialized successfully.
The problem occurred when I created a service from the image jwilder/whoami and exposed it on port 8000:
docker service create -d -p 8000:8000 jwilder/whoami
I expected to be able to access the service on port 8000 from both nodes, according to the Swarm routing mesh documentation. However, I can in fact only access the service from one node, the one the container is actually running on.
I also tried this experiment on Azure virtual machines and it failed there too, so I guess this is a problem with Swarm on these cloud providers, maybe some networking misconfiguration.
Does anyone know how to fix this? Thanks in advance :D
One main thing you have not mentioned is the security groups. You expose port 8000, but by default the security groups only open port 22 for SSH. Please check the SG and make sure you open the necessary ports (see the sketch below).
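As a hedged sketch (the security group ID is hypothetical): besides the published port 8000, Docker's swarm documentation lists 2377/tcp, 7946/tcp+udp and 4789/udp as required between the nodes for cluster management, node discovery and the overlay/routing mesh. Repeat the same rules on the other node's security group with A.B.C.D as the source.
SG=sg-0123456789abcdef0
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 8000 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 2377 --cidr X.Y.Z.W/32
aws ec2 authorize-security-group-ingress --group-id $SG --protocol tcp --port 7946 --cidr X.Y.Z.W/32
aws ec2 authorize-security-group-ingress --group-id $SG --protocol udp --port 7946 --cidr X.Y.Z.W/32
aws ec2 authorize-security-group-ingress --group-id $SG --protocol udp --port 4789 --cidr X.Y.Z.W/32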

Dynamic port setting for my dockerized microservice

I want to deploy several instances of my microservice, which listens on a certain port, but make it scalable and not fix the port in the task definition / Dockerfile. My microservice can listen on a port provided in an environment variable or on the command line.
At the moment all microservices are described in AWS ECS task definitions and have static port assignments.
Every microservice registers itself with a Eureka server, and right now I can run multiple service instances only on different EC2 instances.
I want to be able to run several containers on the same EC2 instance, with every new service instance getting some free port to listen on.
What is the standard way of implementing this?
Just set the host port to 0 in the task definition:
If using containers in a task with the EC2 launch type, you can specify a non-reserved host port for your container port mapping (this is referred to as static host port mapping), or you can omit the hostPort (or set it to 0) while specifying a container port and your container automatically receives a port (this is referred to as dynamic host port mapping) in the ephemeral port range for your container instance operating system and Docker version.
The default ephemeral port range is 49153–65535, and this range is used for Docker versions before 1.6.0. For Docker version 1.6.0 and later, the Docker daemon tries to read the ephemeral port range from /proc/sys/net/ipv4/ip_local_port_range (which is 32768–61000 on the latest Amazon ECS-optimized AMI).
So you will need an Application Load Balancer in this case to route traffic to the dynamic ports.
This article may help: dynamic-port-mapping-in-ecs-with-application-load-balancer.
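A minimal task-definition sketch (the family, image and container port are hypothetical); hostPort 0 asks ECS to assign a dynamic host port from the ephemeral range:
cat > taskdef.json <<'EOF'
{
  "family": "my-microservice",
  "containerDefinitions": [
    {
      "name": "my-microservice",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-microservice:latest",
      "memory": 256,
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://taskdef.json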

Full path of a docker image file required in NGINX repository

I am new to Docker and working with AWS. I am supposed to create a container and add it to an ECS cluster. It asks for 2 parameters:
- Image, which should have the format repository-url/image:tag. I am not able to work out the full path of the file within the NGINX repository. Please suggest a very simple file so that running it as a task on an EC2 container instance is easy.
- Port mappings and container port: I am confused about which port to give. Is it 80? For the host, I can give the public IPv4 address of the 4 EC2 container instances present within the cluster.
See "Couchbase Docker Container on Amazon ECS" as an example:
In ECS, Docker workloads are defined as tasks. A task can contain multiple containers. All containers for a task are co-located on the same machine.
...
And finally the port mappings (-p on Docker CLI). Port 8091 is needed for Couchbase administration.
It is certainly 80 for your NGINX container port, and you can map it to any port you want (typically 80) on your host.
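For example (a sketch using the official nginx image from Docker Hub, which needs no repository-url prefix, so the Image field could simply be nginx:latest), the equivalent docker run would be:
docker run -d -p 80:80 nginx:latest
curl http://<container-instance-public-ip>/   # should return the NGINX welcome page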