Hey, I am trying to access my Docker container using my AWS public IP, and I don't know how to achieve this. Right now I have an EC2 instance running Ubuntu 16.04,
where I am using a Docker image based on Ubuntu. I have installed an Apache server inside that image, and I want to access it using the AWS public IP.
For that I have tried docker run -d -p 80:80 kyo (here kyo is my image name). I can do this, but what else do I need to do in order to host this container on AWS? I know it is just a networking thing, but I don't know how to achieve this goal.
What do you get when you access port 80 in the browser? Does it resolve and show some error?
If not, check your AWS security group policies; you may need to whitelist port 80.
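For example, opening port 80 to the world with the AWS CLI would look roughly like this (a sketch; the security group ID is a placeholder for your instance's group):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0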
Log in to the container and confirm that Apache is up and running. You can check the open ports inside the running container with:
netstat -plnt
If all of the above checks pass and there is still no obvious reason why you can't access it from outside, check the Apache logs in case something is wrong with your configuration.
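For example, a quick check from the host might look like this (a sketch; kyo-web is a hypothetical container name, and the log path assumes a Debian/Ubuntu Apache install):
docker exec -it kyo-web bash
# inside the container:
ps aux | grep apache2
tail -n 50 /var/log/apache2/error.log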
I'm not sure whether you need an EXPOSE instruction in your Dockerfile, if you have built your own image.
Go through this:
A Brief Primer on Docker Networking Rules: EXPOSE
Edited answer:
You can work around this by using an ENTRYPOINT so that Apache stays in the foreground.
Have this in your Dockerfile and build an image from it:
CMD ["apachectl", "-D", "FOREGROUND"]
or
CMD ["-D", "FOREGROUND"]
ENTRYPOINT ["apachectl"]
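For reference, a minimal Dockerfile along those lines could look like this (a sketch, assuming an Ubuntu base image and the stock apache2 package; on Ubuntu the control script may be apache2ctl rather than apachectl):
FROM ubuntu:16.04
# Install Apache
RUN apt-get update && apt-get install -y apache2 && rm -rf /var/lib/apt/lists/*
# Document the port the web server listens on
EXPOSE 80
# Keep Apache in the foreground so the container does not exit immediately
ENTRYPOINT ["apachectl"]
CMD ["-D", "FOREGROUND"]
Build and run it; the site should then be reachable on the instance's public IP once port 80 is open in the security group:
docker build -t kyo .
docker run -d -p 80:80 kyo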
Friends,
I have an Ubuntu server with nginx installed outside of Docker.
I have a Docker container running a web application, with an IP such as 172.17.0.3
(all within the same server).
How can I point nginx at the IP of the container, referring to it by name? For example: "docker container IP".
That structure cannot change; that is how the AWS service creates it.
* Note: the IP can change, which is why I need it as a dynamic variable.
upstream openerp {
    server "docker container ip":8069;
}
Using $host or the hostname does not work.
You should publish your Docker container's port on the Docker host and use server 127.0.0.1:your_mapped_docker_container_port.
Read more details in the Docker docs: https://docs.docker.com/config/containers/container-networking/#published-ports
This way Docker will take care of everything networking-related under the hood.
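A rough sketch of that setup (the app port 8069 and the upstream name come from the question; the image name is a placeholder):
Publish the container port on the host, bound to localhost, when starting the container:
docker run -d --name openerp -p 127.0.0.1:8069:8069 my-web-app-image
Then point the nginx upstream at the published port instead of the container IP:
upstream openerp {
    server 127.0.0.1:8069;
}
server {
    listen 80;
    location / {
        proxy_pass http://openerp;
    }
}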
I want to set up a self-managed Docker private registry on an EC2 instance without using the AWS ECR/ECS services, i.e. using the docker registry:2 container image, and make it accessible to the development team so that they can push/pull Docker images remotely.
The development team has Windows laptops with Docker for Windows installed.
Please note:
The EC2 instance is hosted in a private subnet.
I have already created an AWS ALB with an OpenSSL self-signed certificate and attached it to the EC2 instance so that the server can be accessed over an HTTPS listener.
I have deployed the Docker registry using the command below:
docker run -d -p 8080:5000 --restart=always --name registry registry:2
I think the routing of 443 to 8080 is working, because when I hit the browser with
https:///v2/_catalog I get output in JSON format.
Currently, the catalog is empty because no image has been pushed to the registry yet.
I expect this Docker registry hosted on the AWS EC2 instance to be accessible remotely, i.e. from remote Windows machines as well.
Any references/suggestions/steps to achieve my task would be really helpful.
Hoping for a quick resolution.
Thanks and Regards,
Rohan Shetty
I have resolved the issue by following the steps below:
Added the --insecure-registry parameter in the docker.service file.
Created a new directory "certs.d/my-domain-name" under /etc/docker.
(Please note: here the domain name is the one at which the Docker registry is to be accessed.)
Placed the self-signed OpenSSL certificate and key for the domain name inside the above-mentioned directory.
Restarted Docker.
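A rough sketch of those steps on the EC2 host (my-registry.example.com, the certificate file names, and the test image are placeholders for illustration, not the exact values used):
# Put the self-signed certificate where the Docker daemon looks for registry CAs
sudo mkdir -p /etc/docker/certs.d/my-registry.example.com
sudo cp domain.crt /etc/docker/certs.d/my-registry.example.com/ca.crt
# Restart the daemon so it picks up the certificate (and any --insecure-registry change)
sudo systemctl restart docker
# From a client machine, tag and push an image to verify the registry end to end
docker tag hello-world my-registry.example.com/hello-world:latest
docker push my-registry.example.com/hello-world:latest
curl -k https://my-registry.example.com/v2/_catalog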
I am new to Docker and working with AWS. I am supposed to create a container and add it to an ECS cluster. It is asking for two parameters:
IMAGE, which should have the format repository-url/image:tag. I am not able to work out the full path of the image within the NGINX repository. Please suggest a very simple image so that running it as a task on an EC2 container instance is easy.
PORT MAPPINGS and CONTAINER PORT: I am confused about which port to give. Is it 80? Regarding HOST, I can give the public IPv4 address of the 4 EC2 container instances present within the cluster.
See "Couchbase Docker Container on Amazon ECS" as an example:
In ECS, Docker workloads are defined as tasks. A task can contain multiple containers. All containers for a task are co-located on the same machine.
...
And finally the port mappings (-p on Docker CLI). Port 8091 is needed for Couchbase administration.
The container port is certainly 80 for your NGINX, and you can map it to any port you want on the host (typically 80).
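For illustration, the container part of an ECS task definition with those values might look roughly like this (a sketch using the public nginx image from Docker Hub; the family and container names are placeholders):
{
  "family": "nginx-demo",
  "containerDefinitions": [
    {
      "name": "nginx",
      "image": "nginx:latest",
      "memory": 128,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 80, "protocol": "tcp" }
      ]
    }
  ]
}
With hostPort 80, the web server is then reachable on port 80 of whichever EC2 container instance the task lands on (subject to the security group).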
I need to put Sonatype Nexus 3 up on AWS. Following an old tutorial for Nexus 2, I was led to try this on EC2. What I'm currently trying is an instance with a security group that allows inbound requests from anywhere on ports 80, 8080, 22, 4000, 443, and 8081. I'm using an Amazon Linux AMI 2016.09.0 (HVM), SSD Volume Type instance. I install Docker using the instructions from here: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html#install_docker. I then simply use the official Docker image from here: https://hub.docker.com/r/sonatype/nexus3/ with the following command.
docker run -d -p 8081:8081 --name nexus sonatype/nexus3
Using docker ps I can confirm that this seems to be running. When I try to connect to the provided public DNS URL ending in amazonaws.com on port 8081, I simply get connection refused. The same thing happens on port 80 or any of the other ports, and also when I add /nexus to the end of the URL.
Attempting the quick test that documentation for this image suggests:
>curl -u admin:admin123 http://localhost:8081/service/metrics/ping
curl: (56) Recv failure: Connection reset by peer
Using the exact same docker command on my local machine (OS X) I am able to access nexus on localhost. Why can't I get this working?
The issue appears to have been with Sonatype's official image. This other image, which works in exactly the same way, works perfectly with the exact same process.
I have a containerized app running on a VM. It consists of two docker containers. The first contains the WebSphere Liberty server and the web app. The second contains PostgreSQL and the app's DB.
On my local VM, I just use docker run to start the two containers. Then I use docker attach to attach to the web server container so I can edit the server.xml file to specify the public host IP for the DB, and then I start the web server in the container. The app runs fine.
Now I'm trying to deploy the app on Google Cloud Platform.
I set up my gcloud configuration (project, compute/zone).
I created a cluster.
I created a JSON pod config file which specifies both containers.
I created the pod.
I opened the firewall for the port specified in the pod config file.
At this point:
I look at the pod (gcloud preview container kubectl get pods), and it shows both containers are running.
I SSH to the cluster (gcloud compute ssh xxx-mycluster-node-1) and issue sudo docker ps and it shows the database container running, but not the web server container. With sudo docker ps -l I can see the web server container that is not running, but it keeps trying to start and exiting every 10 seconds or so.
So now I need to update the server.xml and start the Liberty server, but I have no idea how to do that in this realm. Can I attach to the web server container like I do in my local VM? Any help would be greatly appreciated. Thanks.
Yes, you can attach to a container in a pod.
Using Kubernetes 1.0, do the following:
kubectl get po to get the pod name
kubectl describe po POD-NAME to find the container name
Then:
kubectl exec -it POD-NAME -c CONTAINER-NAME bash (assuming the container has bash)
It's similar to docker exec -it CONTAINER-NAME WHATEVER_LOCAL_COMMAND
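For example, with the two containers from the question it would look something like this (a sketch; the pod and container names are hypothetical and come from your pod config):
kubectl get po
kubectl describe po my-app-pod
kubectl exec -it my-app-pod -c liberty-server bash
# inside the container, edit server.xml and start the Liberty server as you would on your local VM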
On the machine itself, you can see crash looping containers via:
docker ps -a
and then
docker logs CONTAINER-ID
You can also use kubectl get pods -o yaml to get details like the restart count, which will confirm that the container is crash-looping.
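For instance, with a hypothetical pod name:
kubectl get pods my-app-pod -o yaml | grep -i restartcount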