I have a C++ application containerised as a Docker container. I have come across a scenario where the Docker daemon on the host where the containers are running goes down, and with it the container runtime. How do I read this status inside the application, which resides in the container itself?
Thanks in advance
Use --network="host" in your docker run command; then 127.0.0.1 in your Docker container will point to your Docker host. You can then ping 127.0.0.1 from your container to get the status of the host. But I don't see much point in doing this, because if your host is going down the containers will be the first to lose network connectivity, and you won't be able to read the status inside the application anyway.
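For illustration, such a run command might look like the following (the image and container names are placeholders, not from your setup):

docker run -d --network="host" --name my-cpp-app my-cpp-image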
The recommended way is to use container orchestration (Swarm, Kubernetes, OpenShift, etc.) to avoid this issue, since the orchestrator detects the failed node and restarts or reschedules the containers for you.
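As a rough sketch of the Swarm route (service and image names are placeholders), a service keeps the desired number of replicas running and reschedules tasks away from a node whose daemon has gone down, provided the swarm has other healthy nodes:

# create a swarm on the first manager node
docker swarm init
# run the app as a service with two replicas spread across nodes
docker service create --name my-cpp-app --replicas 2 my-cpp-image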
Related
I'm running a web service (nginx + uWSGI) on ECS.
I'm running the two applications using supervisor.
Now I want to add another service (Filebeat) which will read the web server logs and send them to Logstash on another machine.
I've been told it is a good idea to separate the applications (each application runs in its own Docker container, and get rid of supervisor).
So I'm trying to add a Filebeat container alongside the already running web server.
If I go to the define task tab of the ECS menu, it seems I'm launching a new EC2/Fargate instance, which is not what I want,
because Filebeat has to run on the same host as the web server.
How do I run the Filebeat Docker container alongside the web server container?
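For reference, what I think I need is a single task definition with two containerDefinitions sharing a log volume, roughly like this (image names, memory values and paths are just my guesses):

{
  "family": "web-with-filebeat",
  "volumes": [
    { "name": "weblogs", "host": { "sourcePath": "/var/log/web" } }
  ],
  "containerDefinitions": [
    {
      "name": "webserver",
      "image": "my-nginx-uwsgi-image",
      "memory": 512,
      "portMappings": [ { "containerPort": 80, "hostPort": 80 } ],
      "mountPoints": [ { "sourceVolume": "weblogs", "containerPath": "/var/log/nginx" } ]
    },
    {
      "name": "filebeat",
      "image": "my-filebeat-image",
      "memory": 256,
      "mountPoints": [ { "sourceVolume": "weblogs", "containerPath": "/var/log/nginx", "readOnly": true } ]
    }
  ]
}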
I see Docker containers restarting after the host receives a huge spike in network requests.
(I'm running ECS on an EC2 instance.)
I guess the spike in network requests somehow makes the instance unstable and the Docker containers end up restarting.
What logs should I look at to narrow down what causes the container restarts?
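So far I'm planning to start with the following on the instance itself (the paths and fields are my assumptions about where the relevant information lives):

# exit code and OOM flag of a restarted container
docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}}' CONTAINER-ID
# kernel messages, e.g. out-of-memory kills during the spike
dmesg | grep -i -E 'oom|kill'
# ECS agent logs on the instance
less /var/log/ecs/ecs-agent.log*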
I am running a Jenkins server on which a SonarQube Docker container is running, but we stop this server once the build is done, so the SonarQube container's settings are lost. Is there any way around this?
Please suggest the proper way to handle this scenario.
I am running Jenkins on an Ubuntu EC2 instance.
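Right now I start the container with a plain docker run. Would mounting volumes for SonarQube's data directories be enough to survive the stop/start? Something like the following (the paths are what I believe the official image uses, but they may differ by version):

docker run -d --name sonarqube -p 9000:9000 \
  -v sonarqube_data:/opt/sonarqube/data \
  -v sonarqube_extensions:/opt/sonarqube/extensions \
  sonarqube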
Hey, I am trying to access my Docker container via my AWS public IP and I don't know how to achieve this. Right now I have an EC2 instance running Ubuntu 16.04,
where I am using an Ubuntu Docker image in which I have installed the Apache server. I want to access that using the public AWS IP.
For that I have tried docker run -d -p 80:80 kyo (here kyo is my image name). That works, but what else do I need to do in order to host this container on AWS? I know it is just a networking thing; I just don't know how to achieve this goal.
What do you get when you access port 80 in the browser? Does it resolve and show some error?
If not, check your AWS security group policies; you may need to whitelist port 80.
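If you prefer the CLI, whitelisting port 80 looks roughly like this (the security group ID below is a placeholder for the group attached to your instance):

# open port 80 to the world on the instance's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0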
Log in to the container and check that Apache is up and running. You can check for open ports inside the container you are running:
netstat -plnt
If all of the above checks out and there is still no clear reason why you can't access it from outside, check the Apache logs for anything wrong with your configuration.
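For example (the container name and log path are assumptions; on an Ubuntu-based image Apache usually logs under /var/log/apache2):

# get a shell inside the running container
docker exec -it CONTAINER-NAME bash
# confirm the Apache processes are up and inspect the error log
ps aux | grep apache2
tail -n 50 /var/log/apache2/error.log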
I'm not sure if you need the EXPOSE instruction in your Dockerfile, if you have built your own image.
Go through this:
A Brief Primer on Docker Networking Rules: EXPOSE
Edited answer:
You can work around it by using an ENTRYPOINT.
Have this in your Dockerfile and build an image from it.
CMD ["apachectl", "-D", "FOREGROUND"]
or
ENTRYPOINT ["apachectl"]
CMD ["-D", "FOREGROUND"]
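Putting it together, a minimal Dockerfile along these lines (the base image and package name are assumptions based on the Ubuntu setup described above) would be:

FROM ubuntu:16.04
# install Apache
RUN apt-get update && apt-get install -y apache2
# document the port the container listens on
EXPOSE 80
# run Apache in the foreground so the container keeps running
ENTRYPOINT ["apachectl"]
CMD ["-D", "FOREGROUND"]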
I have a containerized app running on a VM. It consists of two docker containers. The first contains the WebSphere Liberty server and the web app. The second contains PostgreSQL and the app's DB.
On my local VM, I just use docker run to start the two containers, then docker attach to attach to the web server container so I can edit the server.xml file to specify the public host IP for the DB, and then start the web server in the container. The app runs fine.
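Roughly, what I do locally looks like this (image and container names are mine, and the published port is just an example):

# start the DB container
docker run -d --name app-db my-postgres-image
# start the web server container with an interactive shell kept open
docker run -dit -p 9080:9080 --name app-web my-liberty-image
# attach so I can edit server.xml and start Liberty by hand
docker attach app-web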
Now I'm trying to deploy the app on Google Cloud Platform.
I set up my gcloud configuration (project, compute/zone).
I created a cluster.
I created a JSON pod config file which specifies both containers (roughly the shape sketched below).
I created the pod.
I opened the firewall for the port specified in the pod config file.
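For reference, the pod config is roughly of this shape (the names, images and ports below are simplified stand-ins for what I actually use):

{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "myapp-pod" },
  "spec": {
    "containers": [
      {
        "name": "web",
        "image": "my-liberty-image",
        "ports": [ { "containerPort": 9080, "hostPort": 9080 } ]
      },
      {
        "name": "db",
        "image": "my-postgres-image",
        "ports": [ { "containerPort": 5432 } ]
      }
    ]
  }
}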
At this point:
I look at the pod (gcloud preview container kubectl get pods) and it shows both containers are running.
I SSH to the cluster (gcloud compute ssh xxx-mycluster-node-1) and issue sudo docker ps and it shows the database container running, but not the web server container. With sudo docker ps -l I can see the web server container that is not running, but it keeps trying to start and exiting every 10 seconds or so.
So now I need to update the server.xml and start the Liberty server, but I have no idea how to do that in this realm. Can I attach to the web server container like I do in my local VM? Any help would be greatly appreciated. Thanks.
Yes, you can attach to a container in a pod.
Using Kubernetes 1.0, issue the following commands:
kubectl get po to get the POD name
kubectl describe po POD-NAME to find the container name
Then:
kubectl exec -it POD-NAME -c CONTAINER-NAME bash (assuming the container has bash)
It's similar to docker exec -it CONTAINER-NAME WHATEVER_LOCAL_COMMAND
On the machine itself, you can see crash-looping containers via:
docker ps -a
and then
docker logs CONTAINER-ID
You can also use kubectl get pods -o yaml to get details like the restart count, which will confirm that the container is crash-looping.