How to docker attach to a container - Google Cloud Platform / Kubernetes

I have a containerized app running on a VM. It consists of two docker containers. The first contains the WebSphere Liberty server and the web app. The second contains PostgreSQL and the app's DB.
On my local VM, I just use docker run to start the two containers, then docker attach to attach to the web server container so I can edit the server.xml file to specify the public host IP for the DB and start the web server inside the container. The app runs fine.
Now I'm trying to deploy the app on Google Cloud Platform.
I set up my gcloud configuration (project, compute/zone).
I created a cluster.
I created a JSON pod config file which specifies both containers.
I created the pod.
I opened the firewall for the port specified in the pod config file.
At this point:
I look at the pod (gcloud preview container kubectl get pods), and it shows both containers are running.
I SSH to the cluster node (gcloud compute ssh xxx-mycluster-node-1) and issue sudo docker ps; it shows the database container running, but not the web server container. With sudo docker ps -l I can see the web server container that is not running: it keeps trying to start and exiting every 10 seconds or so.
So now I need to update the server.xml and start the Liberty server, but I have no idea how to do that in this realm. Can I attach to the web server container like I do in my local VM? Any help would be greatly appreciated. Thanks.

Yes, you can attach to a container in a pod.
With Kubernetes 1.0, do the following:
kubectl get po to get the pod name
kubectl describe po POD-NAME to find the container name
Then:
kubectl exec -it POD-NAME -c CONTAINER-NAME bash (assuming the container has bash)
It's similar to docker exec -it CONTAINER-NAME WHATEVER_LOCAL_COMMAND
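For the Liberty setup in the question, the whole flow might look roughly like this; the pod and container names are placeholders, and the server.xml path assumes a default Liberty install under /opt/ibm/wlp:
kubectl get po
kubectl describe po mypod
kubectl exec -it mypod -c liberty bash
# inside the container: point the datasource at the DB, then start the server
vi /opt/ibm/wlp/usr/servers/defaultServer/server.xml
/opt/ibm/wlp/bin/server start defaultServer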

On the machine itself, you can see crash-looping containers via:
docker ps -a
and then inspect why a container exited with:
docker logs CONTAINER-ID
You can also use kubectl get pods -o yaml to get details like the restart count, which will confirm that the container is crash-looping.
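The relevant part of that YAML looks roughly like this (the container name and count here are illustrative, but the field names come from the Kubernetes API):
kubectl get pods POD-NAME -o yaml
...
status:
  containerStatuses:
  - name: webserver
    restartCount: 12
    state:
      waiting:
        reason: CrashLoopBackOff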

Related

Testing connection out from within running container. Kubernetes. Amazon Linux 2

I am trying to test an outbound connection from within an Amazon Linux 2 container that is running in Kubernetes. I have a service set up, and I am able to telnet to that service through a VPN, but I want to test a connection coming out from that container. Is there a way this can be done?
I have tried ping, etc., but the commands all say "command not found".
Is there any command I can run that can test an outbound connection?
Please provide more context. What exact image are you running? When debugging connectivity of Kubernetes pods and services, you can exec into the pod with:
kubectl exec -it <pod_name> -n <namespace> -- <bash|ash|sh>
Once you have a shell inside the pod, you can update the package index with the distro's package manager (apt, yum, etc.).
After that, you can install curl and try to curl an external site.
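On Amazon Linux 2 specifically, that would look roughly like this (the pod and namespace names are placeholders; AL2 uses yum, and the iputils package provides ping):
kubectl exec -it <pod_name> -n <namespace> -- bash
yum install -y curl iputils
curl -v https://example.com
ping -c 3 8.8.8.8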

AWS Cloudshell unable to start docker service

I have created an EKS cluster.
Now I'm trying to create a Docker image to push to my private ECR, so I installed Docker using the following command:
amazon-linux-extras install docker
The installation succeeded, but when I tried to use Docker I got the following:
[cloudshell-user@ip-10-0-73-203 ~]$ docker images
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
When I try to start the Docker service I get:
[cloudshell-user@ip-10-0-73-203 ~]$ sudo systemctl start docker
Failed to get D-Bus connection: Operation not permitted
How can I solve this? Do I need to use another user?
Unfortunately this cannot be done (today).
"Currently, the AWS CloudShell compute environment doesn't support Docker containers."
From the doc page.
An alternative would be to run a full-fledged instance using Cloud9. Note that Cloud9 has a cost, as it is backed by an EC2 instance.

Connect to a container running in Docker (Redis) from Cloud Run Emulator locally

I'm building local Cloud Run services with the Cloud Code plugin for IntelliJ (PyCharm), but the locally deployed service cannot connect to the Redis instance running in Docker:
redis.exceptions.ConnectionError: Error 111 connecting to 127.0.0.1:6379. Connection refused.
I can connect to the locally running redis instance from a python shell, it's just the cloud run service running in minikube/docker that cannot seem to connect to it.
Any ideas?
Edit, since people are suggesting completely unrelated posts: the locally running Cloud Run instance uses Docker and minikube, and is configured automatically by Cloud Code for IntelliJ. I suspect that Cloud Code for IntelliJ puts Cloud Run instances into an environment that cannot access services running on macOS localhost (but can access the Internet), which is why I tagged those specific items in the post. Please limit suggestions to ones that take these items into account.
If you check Docker network using:
docker network list
You'll see a network called cloud-run-dev-internal. You need to connect your Redis container to that network. To do that, run this command (This instruction assumes that your container name is some-redis):
docker network connect cloud-run-dev-internal some-redis
Double check that your container is connected to the network:
docker network inspect cloud-run-dev-internal
Then connect to Redis Host using the container name:
import os
import redis
...
# the container name resolves as a hostname on the shared Docker network
redis_host = os.environ.get('REDISHOST', 'some-redis')
redis_port = int(os.environ.get('REDISPORT', 6379))
redis_client = redis.StrictRedis(host=redis_host, port=redis_port)
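A quick way to verify the connection from inside the service (ping is part of the redis-py client API):
redis_client.ping()  # returns True when the Redis container is reachable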

Docker AWS access container from IP

Hey, I am trying to access my Docker container via my AWS public IP, and I don't know how to achieve this. Right now I have an EC2 instance running Ubuntu 16.04, where I am using an Ubuntu Docker image with an Apache server installed inside it. I want to access that using the public AWS IP.
For that I have tried docker run -d -p 80:80 kyo (here kyo is my image name). I can do this, but what else do I need to do to host this container on AWS? I know it is just a networking thing; I just don't know how to achieve this goal.
What do you get when accessing port 80 in a browser? Does it resolve and show some error?
If not, check your AWS security group policies; you may need to whitelist port 80.
Log in to the container and check that Apache is up and running. You can check for open ports inside the running container with:
netstat -plnt
If all of the above is fine and you still can't access it from outside, check the Apache logs in case something is wrong with your configuration.
I'm not sure whether you need an EXPOSE instruction in your Dockerfile, if you have built your own image.
Go through this:
A Brief Primer on Docker Networking Rules: EXPOSE
Edited answer:
You can work around this by setting an ENTRYPOINT.
Have this in your Dockerfile and build an image from it:
CMD ["apachectl", "-D", "FOREGROUND"]
or
CMD ["-D", "FOREGROUND"]
ENTRYPOINT ["apachectl"]
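Putting it together, a minimal sketch of such a Dockerfile (the base image and package name are assumptions for an Ubuntu 16.04 setup):
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y apache2
EXPOSE 80
ENTRYPOINT ["apachectl"]
CMD ["-D", "FOREGROUND"]
Build and run it as before with docker build -t kyo . and docker run -d -p 80:80 kyo; the -D FOREGROUND flag keeps Apache in the foreground so the container stays up.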

How to open the GlassFish admin UI (console) in AWS Elastic Beanstalk installed with GlassFish 4.1 / Java 8?

I have deployed my war file on AWS Elastic Beanstalk (set up with GlassFish 4.1, Java 1.8). I want to open the GlassFish admin UI in a browser.
Thanks in advance!
I am not sure it's possible to access the GlassFish console UI (at least I never got that far, though it might be possible using Docker port forwarding...).
What I do is the following:
SSH into the EC2 instance Elastic Beanstalk has provisioned
run sudo docker ps -a to find out about the container running on the instance
exec into the container: sudo docker exec -it <container id here> bash
This logs you into the container running GlassFish; from there you can run the asadmin command.
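For example, a short sketch of what you might run once inside (these are standard asadmin subcommands; actually reaching the admin UI from a browser would additionally require port 4848 to be forwarded from the host, which is an assumption beyond this answer):
asadmin list-applications
asadmin enable-secure-admin
asadmin restart-domain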