friends,
I have an Ubuntu server with nginx installed outside of Docker.
I have a Docker container running a web application, with an IP of, for example, 172.17.0.3 (all on the same server).
How can I point my nginx upstream at that container's IP by container name, for example "ip container docker"?
That structure cannot change; that's how the AWS service creates it.
Note: the IP can change, which is why I need it as a dynamic variable.
upstream openerp {
    server "ip container docker":8069;
}
Using $host or the hostname does not work.
You should publish your Docker container's port on the Docker host and use server 127.0.0.1:your_mapped_docker_container_port.
Read more details in the Docker docs: https://docs.docker.com/config/containers/container-networking/#published-ports
This way Docker takes care of everything networking-related under the hood.
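For example, a minimal sketch, assuming the application inside the container listens on 8069 (the image name here is hypothetical):

docker run -d -p 8069:8069 my-web-app    # publish container port 8069 on the host

upstream openerp {
    server 127.0.0.1:8069;    # nginx talks to the published port, not the container IP
}

Because the published port is fixed, it no longer matters that the container's internal IP changes.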
Is it possible to assign the IP address of the host system to a Docker container?
Context:
I want to access my AWS Elasticsearch Service (ES) domain using the Elasticsearch pipeline for Scrapy. The ES domain is only accessible from certain IP addresses. Currently I get a "connection refused" error when running the Scrapy spider. As far as I understand, a Docker container gets a dynamic IP address (which is then not among the allowed IPs). Since the host's IP address is allowed to access the ES domain, I want to assign this IP to the Docker container running the Scrapy spider.
Currently I am trying this with Docker for Windows on my own machine. As a next step I want to run the container on an AWS EC2 instance.
Current settings.py for my Scrapy project:
ITEM_PIPELINES = {
    'scrapyelasticsearch.scrapyelasticsearch.ElasticSearchPipeline': 300
}
ELASTICSEARCH_SERVER = 'https://testdomain.us-east-2.es.amazonaws.com'
ELASTICSEARCH_PORT = 443
ELASTICSEARCH_INDEX = 'testindex'
ELASTICSEARCH_TYPE = 'testtype'
Thanks in advance.
I think
docker container create --net host ...
will do it for you.
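As a usage sketch (the image and spider names here are hypothetical), running the spider with the host's network stack instead of the default bridge network would look like:

docker run --rm --network host my-scrapy-image scrapy crawl my_spider

With --network host the container shares the host's network namespace, so outbound requests appear to come from the host's IP. Note that this works as described on Linux hosts; on Docker for Windows the "host" network belongs to the Docker VM, so the behaviour differs.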
I am new to Docker and working with AWS. I am supposed to create a container and add it to an ECS cluster. It is asking for 2 parameters:
IMAGE: this should have the format repository-url/image:tag. I am not able to figure out the full path of the image within the NGINX repository. Please suggest a very simple image so that running it as a task on an EC2 container instance is easy.
PORT MAPPINGS and CONTAINER PORT: I am confused about which port to give. Is it 80? Regarding HOST, I can give the public IPv4 address of the 4 EC2 container instances present within the cluster.
See "Couchbase Docker Container on Amazon ECS" as an example:
In ECS, Docker workloads are defined as tasks. A task can contain multiple containers. All containers for a task are co-located on the same machine.
...
And finally the port mappings (-p on Docker CLI). Port 8091 is needed for Couchbase administration.
It is certainly 80 for your nginx container, and you can map it to any port you want (typically 80) on your host.
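A minimal sketch of the relevant part of an ECS task definition, assuming the official nginx image from Docker Hub:

"image": "nginx:latest",
"portMappings": [
    { "containerPort": 80, "hostPort": 80, "protocol": "tcp" }
]

This is the JSON equivalent of -p 80:80 on the Docker CLI.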
Hey, I am trying to access my Docker container via my AWS public IP and I don't know how to achieve this. Right now I have an EC2 instance running Ubuntu 16.04,
where I am using an Ubuntu Docker image. I have installed the Apache server inside the Docker image and I want to access it using the public AWS IP.
For that I have tried docker run -d -p 80:80 kyo (here kyo is my image name). I can do this, but what else do I need to do in order to host this container on AWS? I know it is just a networking thing; I just don't know how to achieve this goal.
What do you get when you access port 80 in a browser? Does it resolve and return some error?
If not, check your AWS security group policies; you may need to whitelist port 80.
Log in to the container and check that Apache is up and running. You can check for open ports inside the running container with:
netstat -plnt
If all of the above checks out and you still can't access it from outside, check the Apache logs in case something is wrong with your configuration.
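For example, a quick sketch of those checks (the container name is hypothetical, and the log path assumes the Debian/Ubuntu apache2 package layout):

docker exec -it my-apache-container bash    # open a shell in the running container
netstat -plnt                               # confirm something is listening on port 80
tail -n 50 /var/log/apache2/error.log       # look for configuration errors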
I'm not sure whether you need an EXPOSE instruction in your Dockerfile if you have built your own image.
Go through this:
A Brief Primer on Docker Networking Rules: EXPOSE
Edited answer:
You can work around it by using an ENTRYPOINT.
Have this in your Dockerfile and build an image from it:
CMD ["apachectl", "-D", "FOREGROUND"]
or
CMD ["-D", "FOREGROUND"]
ENTRYPOINT ["apachectl"]
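A minimal sketch of a complete Dockerfile along those lines (the base image and package name are assumptions, based on an Ubuntu image with the apache2 package):

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y apache2
EXPOSE 80
ENTRYPOINT ["apachectl"]
CMD ["-D", "FOREGROUND"]

Running Apache with -D FOREGROUND keeps the process in the foreground, so the container does not exit immediately after starting.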
I am trying out Portainer and trying to connect to a remote host. I am getting a "failure retrieving containers" error. When I try docker -H remote:2375 info on the Portainer server, I get an error asking whether Docker is running on that host.
Can anyone help me with this?
I am trying this with an AWS Rancher machine. I installed Portainer on the Rancher machine, and I am not able to figure out on which port the Docker daemon is running on the AWS Rancher server.
I did
sudo netstat -latuxen | grep docker
and tried to connect to all the ports listed there, but I still get the same error.
Please help me with this.
Portainer needs the Docker API to be exposed in order to manage it.
Portainer can connect to the Docker API in two different ways:
Using a bind mount to the Docker socket (available on Linux and on Docker for Windows (Docker in a VM) only, i.e. not with native Windows containers)
Connecting to the Docker API via TCP (requires you to expose that TCP port in the Docker daemon configuration)
As you've already experienced:
when I try docker -H remote:2375 info on the Portainer server, I get an error asking whether Docker is running on that host
This means that the Docker API is not exposed via TCP. I suggest you read more about how to do that in the Docker documentation (it basically depends on your platform).
For example, here is the documentation part on how to configure the Docker daemon on Ubuntu: https://docs.docker.com/engine/admin/#/configuring-docker
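As a rough sketch (the exact mechanism varies by platform and Docker version, and on systemd-based distributions the -H flags in the service unit can conflict with this file), exposing the API over TCP via /etc/docker/daemon.json could look like:

{
    "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}

Keep the Unix socket entry so local docker commands keep working, and only expose an unauthenticated TCP endpoint on a trusted network (or protect it with TLS).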
If you can't connect via docker -H remote, Portainer won't be able to connect either, unless you can run Portainer locally on that host and use a bind mount to the Docker socket.
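That fallback, as a minimal sketch (port 9000 is Portainer's default UI port; check the Portainer docs for the current image name):

docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer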
I also recommend reading the Portainer documentation, especially the deployment section: https://portainer.readthedocs.io/en/stable/deployment.html
I have a containerized app running on a VM. It consists of two docker containers. The first contains the WebSphere Liberty server and the web app. The second contains PostgreSQL and the app's DB.
On my local VM, I just use docker run to start the two containers and then I use docker attach to attach to the web server container so I can edit the server.xml file to specify the public host IP for the DB and then start the web server in the container. The app runs fine.
Now I'm trying to deploy the app on Google Cloud Platform.
I set up my gcloud configuration (project, compute/zone).
I created a cluster.
I created a JSON pod config file which specifies both containers.
I created the pod.
I opened the firewall for the port specified in the pod config file.
At this point:
I look at the pod (gcloud preview container kubectl get pods) and it shows both containers are running.
I SSH to the cluster (gcloud compute ssh xxx-mycluster-node-1) and issue sudo docker ps and it shows the database container running, but not the web server container. With sudo docker ps -l I can see the web server container that is not running, but it keeps trying to start and exiting every 10 seconds or so.
So now I need to update the server.xml and start the Liberty server, but I have no idea how to do that in this realm. Can I attach to the web server container like I do in my local VM? Any help would be greatly appreciated. Thanks.
Yes, you can attach to a container in a pod.
Using Kubernetes 1.0, issue the following commands:
Do:
kubectl get po to get the POD name
kubectl describe po POD-NAME to find container name
Then:
kubectl exec -it POD-NAME -c CONTAINER-NAME bash (assuming the container has bash)
It's similar to docker exec -it CONTAINER-NAME WHATEVER_LOCAL_COMMAND
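For your case, a sketch of the full sequence (the pod and container names are hypothetical, and /config/server.xml matches the default layout of the websphere-liberty image, so verify the path for your image):

kubectl get po                                        # find the pod name
kubectl describe po liberty-pod                       # find the web server container's name
kubectl exec -it liberty-pod -c liberty-server bash   # open a shell in that container
vi /config/server.xml                                 # update the DB host, then restart the server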
On the machine itself, you can see crash looping containers via:
docker ps -a
and then
docker logs CONTAINER-ID
You can also use kubectl get pods -o yaml to get details like the restart count, which will confirm that the container is crash-looping.