Can't run a Docker container on Kubernetes in interactive mode - amazon-web-services

Here is the documentation for the kubectl run command: http://kubernetes.io/docs/user-guide/kubectl/kubectl_run/
I have tried to run the Docker container with the -i option, as in the example:
# Start a single instance of busybox and keep it in the foreground, don't restart it if it exits.
kubectl run -i --tty busybox --image=busybox --restart=Never
However, kubectl reports that -i is an unknown flag:
Error: unknown shorthand flag: 'i' in -i
Run 'kubectl help' for usage.
Any ideas?

It's likely that your kubectl client is out of date, because your command line works for me:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}
$ kubectl run -i --tty busybox --image=busybox --restart=Never
Waiting for pod default/busybox-dikev to be running, status is Pending, pod ready: false
Hit enter for command prompt
/ #
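If your client predates v1.2, upgrading it should pick up the -i/--tty support. A minimal sketch, assuming a Linux amd64 machine and the Kubernetes release bucket layout (adjust the version to match your server):
# Fetch a kubectl client matching the server above (v1.2.2 assumed)
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.2.2/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
kubectl version   # confirm the new client version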

Related

Where do I put `.aws/credentials` for Docker awslogs log-driver (and avoid NoCredentialProviders)?

The Docker awslogs documentation states:
the default AWS shared credentials file (~/.aws/credentials of the root user)
Yet if I copy my AWS credentials file there:
sudo bash -c 'mkdir -p $HOME/.aws; cp .aws/credentials $HOME/.aws/credentials'
... and then try to use the driver:
docker run --log-driver=awslogs --log-opt awslogs-group=neiltest-deleteme --rm hello-world
The result is still the dreaded error:
docker: Error response from daemon: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors.
Where does this file really need to go? Is it because the Docker daemon isn't running as root but rather some other user and, if so, how do I determine that user?
NOTE: I can work around this on systems using systemd by setting environment variables. But this doesn't work on Google CloudShell where the Docker daemon has been started by some other method.
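For reference, the systemd workaround I mean is a drop-in override along these lines (a sketch; the values are placeholders, not my real keys):
# /etc/systemd/system/docker.service.d/aws-credentials.conf
[Service]
Environment="AWS_ACCESS_KEY_ID=..."
Environment="AWS_SECRET_ACCESS_KEY=..."
# then reload units and restart the daemon:
sudo systemctl daemon-reload
sudo systemctl restart docker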
Ah ha! I figured it out and tested this on Debian Linux (on my Chromebook w/ Linux VM and Google CloudShell):
The .aws folder must be in the root of the filesystem (/), not in the root user's $HOME folder!
Based on that I was able to successfully run the following:
pushd $HOME; sudo bash -c 'mkdir -p /.aws; cp .aws/* /.aws/'; popd
docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=neiltest-deleteme --rm hello-world
I initially figured this all out by looking at the Docker daemon's process information:
# Find the dockerd PID, then dump the environment the daemon was started with
DOCKERD_PID=$(ps -A | grep dockerd | grep -Eo '[0-9]+' | head -n 1)
sudo cat /proc/$DOCKERD_PID/environ
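Since /proc/<pid>/environ is NUL-separated, piping it through tr makes it readable (a sketch):
# print one VAR=value per line
sudo cat /proc/$DOCKERD_PID/environ | tr '\0' '\n'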
The confusing bit is that Docker's documentation here is wrong:
the default AWS shared credentials file (~/.aws/credentials of the root user)
The true location is /.aws/credentials. I believe this is because the daemon starts before $HOME is actually defined since it's not running as a user process. So starting a shell as root will tell you a different story for tilde or $HOME:
sudo sh -c 'cd ~/; echo $PWD'
That outputs /root, but using /root/.aws/credentials does not work!

AKS: kubectl exec & kubectl logs exit while the process is still running in the container

I am running the following kubectl exec and kubectl logs commands at the same time in two different Windows command prompts:
kubectl exec ${pod} containername -n namespace -- bash -c "cd somebatch.ksh > /proc/1/fd/1 2>&1"
kubectl logs ${pod} containername -n namespace
Both exit from the Windows command prompt partway through, while the process is still running in the container.
If I run the kubectl logs command again, I can see the running logs:
kubectl logs ${pod} containername -n namespace
What should I do to keep the kubectl exec and kubectl logs commands running without exiting?
You can "tail" the logs command by including --follow.
The exec creates a shell in the container and runs the script; once the script it was passed completes, the shell exits and the session ends.
If you want to keep the session open, you should be able to exec just bash and then run the commands interactively. You may want to include --stdin --tty, i.e. kubectl exec --stdin --tty ... -- bash
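A sketch of that interactive approach, reusing the names from the question (note that exec takes the container via -c):
kubectl exec --stdin --tty ${pod} -c containername -n namespace -- bash
# then run the script yourself inside the container:
./somebatch.ksh > /proc/1/fd/1 2>&1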

Creating a Selenium network via Docker for a Firefox node in AWS

I am trying to run a Docker image (e.g., webwhatsapi) over a Selenium network.
I ran the following commands:
docker network create selenium
docker run -d -p 4444:4444 -p 5900:5900 --name firefox --network selenium -v /dev/shm:/dev/shm selenium/standalone-firefox-debug
docker build -t webwhatsapi .
docker run --network selenium -it -e SELENIUM='http://firefox:4444/wd/hub' -v $(pwd):/app webwhatsapi /bin/bash -c "pip install ./;pip list;python sample/remote.py"
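To check whether the hub itself is serving before testing from a browser, one can run the following on the host (a sketch using the standard Selenium status endpoint):
# verify both containers joined the network
docker network inspect selenium
# a JSON reply here means the hub is serving on port 4444
curl http://localhost:4444/wd/hub/status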
On AWS, I have the following configuration in the security group.
I am trying to open http://{public ip}:4444 in the Firefox browser, but it shows an error ("This site can't be reached"). I think I should change my last command in a way that makes it reachable at that URL.
Last command:
docker run --network selenium -it -e SELENIUM='http://firefox:4444/wd/hub' -v $(pwd):/app webwhatsapi /bin/bash -c "pip install ./;pip list;python sample/remote.py"
Please let me know where I am going wrong.

How do you increase the amount of inotify watchers in Google Cloud Build?

RUN cat /proc/sys/fs/inotify/max_user_watches reports 524288 on Docker for Mac.
RUN cat /proc/sys/fs/inotify/max_user_watches reports 8192 on Google Cloud Build's Docker.
See https://github.com/guard/listen/wiki/Increasing-the-amount-of-inotify-watchers#the-technical-details for reference.
Google Cloud Build runs Docker containers in privileged mode, so you can simply add this step to your cloudbuild.yaml:
- name: 'ubuntu'
  args: ['sh', '-c', 'sysctl fs.inotify.max_user_watches=524288']
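For context, a sketch of where that step sits in a larger cloudbuild.yaml; the follow-on docker build step and image name here are hypothetical:
steps:
- name: 'ubuntu'
  args: ['sh', '-c', 'sysctl fs.inotify.max_user_watches=524288']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/myapp', '.']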
You can get your current inotify file watch limit by executing:
$ cat /proc/sys/fs/inotify/max_user_watches
Ubuntu Lucid's (64-bit) inotify limit is set to 8192.
You can make your limit permanent by running:
$ echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p

Unable to connect to docker container

I set up two swarm manager nodes (mgr1, mgr2), but when I try to connect to the container it throws an error message.
[root@ip-10-3-2-24 ec2-user]# docker run --restart=unless-stopped -h mgr1 --name mgr1 -d -p 3375:2375 swarm manage --replication --advertise 10.3.2.24:3375 consul://10.3.2.24:8500/
[root@ip-10-3-2-24 ec2-user]# docker exec -it mgr1 /bin/bash
rpc error: code = 2 desc = "oci runtime error: exec failed: exec: \"/bin/bash\": stat /bin/bash: no such file or directory"
It's happening on both servers (mgr1, mgr2). I'm also running a consul container on each node and I am able to connect to those containers.
/bin/bash might not be available in the container. You may use sh instead; one of the following should work:
docker exec -it mgr1 sh
docker exec -it mgr1 /bin/sh
docker exec -it mgr1 bash
docker attach mgr1
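If you want to know which shells an image actually ships before exec'ing into it, you can list its filesystem without starting it (a sketch; the container name probe is arbitrary):
# export the image's filesystem and look for shells
docker create --name probe swarm
docker export probe | tar -t | grep -E 'bin/(ba)?sh$'
docker rm probe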
UPDATE: based on the comments.
busybox is a very lightweight Linux-based image, and some of the above work perfectly fine:
bash $ sudo docker exec -it test1 bash
rpc error: code = 13 desc = invalid header field value "oci runtime error: exec failed: container_linux.go:247: starting container process caused \"exec: \\\"bash\\\": executable file not found in $PATH\"\n"
bash $ sudo docker exec -it test1 sh
/ # exit
bash $ sudo docker exec -it test1 /bin/sh
/ # exit
bash $ sudo docker attach test1
/ # exit
bash $