Full path of a docker image file required in NGINX repository - amazon-web-services

I am new to Docker and working with AWS. I am supposed to create a container and add it to an ECS cluster. It is asking for 2 parameters:
Image: this should have the format repository-url/image:tag. I am not able to work out the full path of the file within the NGINX repository. Please suggest a very simple image so that running it as a task on an EC2 container instance is easy.
Port mappings and container port: I am confused about which port to give. Is it 80? Regarding the host, I can give the public IPv4 address of the 4 EC2 container instances present within the cluster.

See "Couchbase Docker Container on Amazon ECS" as an example:
In ECS, Docker workloads are defined as tasks. A task can contain multiple containers. All containers for a task are co-located on the same machine.
...
And finally the port mappings (the -p option on the Docker CLI). Port 8091 is needed for Couchbase administration.
It is certainly 80 for your NGINX container, and you can map it to any port you want (typically 80) on your host.
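A minimal sketch of both parameters (the task family name nginx-test and the 128 MB memory value are illustrative assumptions): the official nginx image on Docker Hub needs no full path, so the image field is simply nginx:latest.

# Local equivalent: run the official nginx image, mapping host port 80 to container port 80
docker run -d -p 80:80 nginx:latest

# Roughly the same thing as an ECS task definition, registered via the AWS CLI
aws ecs register-task-definition --family nginx-test \
  --container-definitions '[{"name":"nginx","image":"nginx:latest","memory":128,"portMappings":[{"containerPort":80,"hostPort":80}]}]'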

Related

Docker Swarm on AWS: Expose service port not working on AWS virtual machine

I have 2 AWS virtual machine instances, running on 2 public IPv4 addresses A.B.C.D and X.Y.Z.W.
I installed Docker on both machines and launched Docker Swarm with node A.B.C.D as manager and X.Y.Z.W as worker. When I launched the swarm, I used A.B.C.D as the advertise-addr, like so:
docker swarm init --advertise-addr A.B.C.D
The swarm initialized successfully.
The problem occurred when I created a service from the image jwilder/whoami and exposed the service on port 8000:
docker service create -d -p 8000:8000 jwilder/whoami
I expected that I could access the service on port 8000 from both nodes, according to the Swarm routing mesh documentation. In fact, however, I can only access the service from one node, the one the container is running on.
I also tried this experiment on Azure virtual machines and it also failed, so I guess this is a problem with Swarm on these cloud providers, maybe some networking misconfiguration.
Does anyone know how to fix this? Thanks in advance :D
One main problem you have not mentioned is the Security Groups. You exposed port 8000, but by default the Security Group only opens port 22 for SSH. Check the Security Group and make sure you open the necessary ports.
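A hedged sketch with the AWS CLI (the security group ID is a placeholder, and 0.0.0.0/0 is only for testing):

# Open the published service port in the Security Group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8000 --cidr 0.0.0.0/0

Note that for the routing mesh itself, Docker's documentation also requires the swarm ports to be open between the nodes: TCP 2377 (cluster management), TCP and UDP 7946 (node discovery), and UDP 4789 (overlay network traffic). If only port 8000 is open, requests to the node that is not running the container will still fail.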

Dynamic port setting for my dockerized microservice

I want to deploy several instances of my microservice, which uses a certain port, but make it scalable and not fix the port in the task definition / Dockerfile. My microservice can listen on a port provided in an environment variable or on the command line.
At the moment all microservices are described in AWS ECS task definitions and have static port assignments.
Every microservice registers itself with a Eureka server, and right now I can run multiple service instances only on different EC2 instances.
I want to be able to run several containers on the same EC2 instance, where every new service instance gets some free port to listen on.
What is the standard way of implementing this?
Just set the host port to 0 in the task definition:
If using containers in a task with the EC2 launch type, you can specify a non-reserved host port for your container port mapping (this is referred to as static host port mapping), or you can omit the hostPort (or set it to 0) while specifying a container port and your container automatically receives a port (this is referred to as dynamic host port mapping) in the ephemeral port range for your container instance operating system and Docker version.

The default ephemeral port range is 49153–65535, and this range is used for Docker versions before 1.6.0. For Docker version 1.6.0 and later, the Docker daemon tries to read the ephemeral port range from /proc/sys/net/ipv4/ip_local_port_range (which is 32768–61000 on the latest Amazon ECS-optimized AMI).
So you will need an Application Load Balancer in that case to route traffic to the dynamic ports.
The article dynamic-port-mapping-in-ecs-with-application-load-balancer walks through this setup.
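As a minimal sketch of such a task definition via the AWS CLI (the family name, image path, memory value, and container port here are illustrative assumptions, not taken from the question):

# hostPort 0 (or omitting hostPort entirely) requests a dynamic host port
# from the ephemeral range described above
aws ecs register-task-definition --family my-microservice \
  --container-definitions '[{"name":"svc","image":"my-registry/my-microservice:latest","memory":256,"portMappings":[{"containerPort":8080,"hostPort":0}]}]'

Each new task then gets its own host port, so several copies can share one EC2 instance; the ALB target group picks up the port when ECS registers the task.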

Docker AWS access container from IP

Hey, I am trying to access my Docker container via my AWS public IP, and I don't know how to achieve this. Right now I have an EC2 instance running Ubuntu 16.04, where I am using an Ubuntu Docker image in which I have installed an Apache server. I want to access that using the public AWS IP.
For that I have tried docker run -d -p 80:80 kyo, where kyo is my image name. I can do this, but what else do I need to do to host this container on AWS? I know it is just a networking thing; I don't know how to achieve the goal.
What are you getting when you access port 80 in the browser? Does it resolve and show some error?
If not, check your AWS Security Group policies; you may need to whitelist port 80.
Log in to the container and check that Apache is up and running. You can check for open ports inside the running container:
netstat -plnt
If all of the above check out and you still can't access it from outside, check the Apache logs in case something is wrong with your configuration.
I'm not sure if you need an EXPOSE instruction in your Dockerfile, if you have built your own image.
Go through this,
A Brief Primer on Docker Networking Rules: EXPOSE
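As a quick external check from your own machine (the address is a placeholder for your instance's public IP), a plain HTTP request should return Apache's response headers once the Security Group allows port 80:

# -I asks only for the response headers
curl -I http://<ec2-public-ip>/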
Edited answer :
You can work around this by using an ENTRYPOINT.
Have this in your Dockerfile and build an image from it.
CMD ["apachectl", "-D", "FOREGROUND"]
or
CMD ["-D", "FOREGROUND"]
ENTRYPOINT ["apachectl"]
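Put together, a minimal sketch of such a Dockerfile (the Ubuntu base image and the apt-get install step are assumptions for illustration; the apache2 package provides apachectl):

# Ubuntu base with Apache installed
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y apache2
# EXPOSE documents the port; you still publish it with -p 80:80 at run time
EXPOSE 80
# Keep Apache in the foreground so the container does not exit immediately
ENTRYPOINT ["apachectl"]
CMD ["-D", "FOREGROUND"]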

Problems trying to run a custom SFTP service on port 22 on Amazon ECS

I have built a Node.js app that implements an SFTP server; when files are put to the server, the app parses the data and loads it into a Salesforce instance via API as it arrives. The app runs in a Docker container and listens on port 9001. I'd like it to run on Amazon's EC2 Container Service, listening on the standard port 22. I can run it locally, remapping container port 9001 to host port 22, and it works fine. But because 22 is also used by SSH, I'm not having any luck running it on ECS. Here are the steps I've taken so far:
Created an EC2 instance using the AMI amzn-ami-2016.03.j-amazon-ecs-optimized (ami-562cf236).
Assigned the instance to a Security Group that allows port 22 (was already present).
Created an ECR registry and pushed my Docker image up to it.
Created an ECS Task Definition for the image, which contains a port mapping from host port 22 to container port 9001
Created a service for the task and associated it with the default ECS cluster, which contains my EC2 instance.
At this point, when viewing the "Events" tab of the Service view, I see the following error:
service sfsftp_test_service was unable to place a task because no container instance met all of its requirements. The closest matching container-instance XXXX is already using a port required by your task. For more information, see the Troubleshooting section.
I assumed that this is because my Task Definition is trying to map host port 22 which is reserved for SSH, so I tried creating a new version of the Task Definition that maps 9001 to 9001. I also updated my security group to allow port 9001 access. This task was started on my instance, and I was able to connect and upload files. So at this point I know that my Node.js app and the Docker instance are correct. It's a port mapping issue.
In trying to resolve the port mapping issue, I found this stackoverflow question about running SSH on an alternate port on EC2, and used the answer there to change my sshd to run on port 9022. I also updated the Security Group to allow traffic on port 9022. This worked; I can now SSH to port 9022, and I can no longer ssh to port 22.
However, I'm still getting the "The closest matching container-instance XXXX is already using a port required by your task" error. I also tried editing the Security Group, changing the default port 22 grant from "SSH" to "Custom TCP Rule", but that change doesn't stick; I'm also not convinced that it's anything but a quick way to pick the right port.
When I view the container instance from the Cluster screen, I can see that 5 ports are "registered", including port 22.
According to this resolved ECS Agent GitHub issue, those ports are "reserved by default" by the ECS Agent. I'm guessing this is why ECS refuses to start my Docker image on the EC2 instance. So is this configurable? Can I "unreserve" port 22 to allow my Docker image to run?
Edit to add: After reviewing this ECS Agent documentation, I've opened an issue on the ECS Agent GitHub as well.
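For reference, the agent documentation linked above describes an ECS_RESERVED_PORTS variable that controls this list, so a hedged sketch of unreserving port 22 on the container instance would be (the exact default list and the restart commands depend on the AMI version):

# Override the reserved-port list in the agent config, leaving out 22
# but keeping the Docker daemon and agent introspection ports reserved
echo 'ECS_RESERVED_PORTS=["2375","2376","51678"]' | sudo tee -a /etc/ecs/ecs.config

# Restart the agent so it re-registers the instance with the new port list
# (the 2016-era Amazon ECS-optimized AMI runs the agent under upstart)
sudo stop ecs && sudo start ecs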

Hyperledger Multinode setup

I am trying to set up a blockchain network using 4 VMs. Each of the VMs has the fabric-peer and fabric-membersrvc Docker images, and that seems to work successfully. I have set up passwordless SSH among all VMs for a normal (non-root) user. But the Docker containers are unable to communicate with each other.
Do I require passwordless SSH for the "root" user among the VMs? Are there any other requirements?
The membersrvc Docker image is not required on all VMs; currently (v0.6) there can be only 1 membersrvc.
If all your peers are Docker containers, they talk to each other via their advertised address, which you can set through an environment variable when you start the peer containers:
-e "CORE_PEER_ADDRESS=<ip of docker host>:7051"
Make sure you don't use the IP of the container: without a swarm cluster running (for overlay networking), containers on other hosts cannot talk to a container's private IP.
In order to get peers running in Docker to talk to each other:
Make sure that the grpc ports are mapped from the Docker container to the host.
Set CORE_PEER_ADDRESS to <IP of host running docker>:<grpc port>.
Make sure you use the IP of the host for the grpc communication addresses, such as the membersrvc address, discovery root node, etc.
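A hedged sketch tying those three points together (7051 is the peer's default gRPC port in v0.6, CORE_PEER_DISCOVERY_ROOTNODE is the discovery root node variable from that release, and the IPs are placeholders):

# Publish the gRPC port on the host and advertise the host's IP,
# so peers on the other VMs can reach this peer
docker run -d -p 7051:7051 \
  -e "CORE_PEER_ADDRESS=<ip of docker host>:7051" \
  -e "CORE_PEER_DISCOVERY_ROOTNODE=<ip of root node's docker host>:7051" \
  hyperledger/fabric-peer peer node start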