AKS: kubectl exec & kubectl logs exit while the process is still running in the container

I am running the following kubectl exec and kubectl logs commands at the same time in two different Windows Command Prompt windows:
kubectl exec ${pod} containername -n namespace -- bash -c "cd somebatch.ksh > /proc/1/fd/1 2>&1"
kubectl logs ${pod} containername -n namespace
Both exit in the Windows Command Prompt while the process is still running in the container.
If I run the kubectl logs command again, I can see the running logs:
kubectl logs ${pod} containername -n namespace
What should I do to keep the kubectl exec and kubectl logs commands running without exiting?

You can "tail" the logs command by including --follow.
The exec creates a shell in the container and runs the script. Once the shell is created and the script has been passed to it, the exec process completes.
If you want to keep the session open, you should be able to exec just bash and then run the commands interactively. You may want to include --stdin --tty, i.e. kubectl exec --stdin --tty ... -- bash
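For reference, a minimal sketch of both suggestions, reusing the placeholder names from the question (newer kubectl releases expect the container name via -c rather than as a positional argument):
# keep streaming the log output instead of returning only the current logs
kubectl logs ${pod} -c containername -n namespace --follow
# keep an interactive session open and run the script from inside it
kubectl exec --stdin --tty ${pod} -c containername -n namespace -- bash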

Related

Containerfile entrypoint /bin/sh

I would like to create a container that runs only one shell. For this I have tried the following:
FROM alpine:latest
ENTRYPOINT ["/bin/sh"]
Unfortunately I don't get a shell when I start the container.
podman build -t $IMAGENAME .
podman run --name foobar $IMAGENAME
podman start -ai foobar
But if I start the container as follows, it works:
podman run --name foobar2 -ti $IMAGENAME /bin/sh
/ #
CTRL+C
podman start -ai foobar2
/ #
I had assumed that the entrypoint "/bin/sh" would directly execute a shell that you can work with.
Your Containerfile is fine.
The problem is that your main command is a shell (/bin/sh); a shell needs a pseudo-TTY (and stdin kept open), otherwise it exits as soon as it starts.
You can allocate a pseudo-TTY with the --tty (-t) option. It is also a good idea to use --interactive (-i) so the main process can receive input.
All the commands below will work for you:
podman build -t $IMAGENAME .
# run (use CTRL + P + Q to exit)
podman run --name foobar -ti $IMAGENAME
# create + start
podman create --name foobar -ti $IMAGENAME
podman start foobar
This is not the case if the main command is something other than a shell, for example a web server such as Apache.
The entrypoint needs to be a long-lasting process. Using /bin/sh as the entrypoint would cause the container to exit as soon as it starts.
Try using:
FROM alpine:latest
ENTRYPOINT ["sleep", "9999"]
Then you can exec into the container and run your commands.
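A quick sketch of that approach, reusing the names from the question (alpine ships /bin/sh, so that is the shell to exec):
podman build -t $IMAGENAME .
# the sleep entrypoint keeps the container alive in the background
podman run -d --name foobar $IMAGENAME
# open a shell inside the running container
podman exec -it foobar /bin/sh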

Where do I put `.aws/credentials` for Docker awslogs log-driver (and avoid NoCredentialProviders)?

The Docker awslogs documentation states:
the default AWS shared credentials file (~/.aws/credentials of the root user)
Yet if I copy my AWS credentials file there:
sudo bash -c 'mkdir -p $HOME/.aws; cp .aws/credentials $HOME/.aws/credentials'
... and then try to use the driver:
docker run --log-driver=awslogs --log-opt awslogs-group=neiltest-deleteme --rm hello-world
The result is still the dreaded error:
docker: Error response from daemon: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors.
Where does this file really need to go? Is it because the Docker daemon isn't running as root but rather some other user and, if so, how do I determine that user?
NOTE: I can work around this on systems using systemd by setting environment variables. But this doesn't work on Google CloudShell where the Docker daemon has been started by some other method.
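On systemd hosts, that workaround is typically a drop-in for the Docker service along these lines (a sketch; the file path is the conventional drop-in location and the key values are placeholders):
# /etc/systemd/system/docker.service.d/aws-credentials.conf
[Service]
Environment="AWS_ACCESS_KEY_ID=AKIA..."
Environment="AWS_SECRET_ACCESS_KEY=..."
followed by a daemon reload and restart:
sudo systemctl daemon-reload
sudo systemctl restart docker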
Ah ha! I figured it out and tested this on Debian Linux (on my Chromebook w/ Linux VM and Google CloudShell):
The .aws folder must be directly under the filesystem root (/), not under the root user's $HOME folder!
Based on that I was able to successfully run the following:
pushd $HOME; sudo bash -c 'mkdir -p /.aws; cp .aws/* /.aws/'; popd
docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=neiltest-deleteme --rm hello-world
I initially figured this all out by looking at the Docker daemon's process information:
DOCKERD_PID=$(ps -A | grep dockerd | grep -Eo '[0-9]+' | head -n 1)
sudo cat /proc/$DOCKERD_PID/environ
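The environ file is NUL-separated, so it is easier to read if you split it into lines (same DOCKERD_PID variable as above):
sudo cat /proc/$DOCKERD_PID/environ | tr '\0' '\n'
# if HOME is not in the output, ~ will not expand to /root for the daemon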
The confusing bit is that Docker's documentation here is wrong:
the default AWS shared credentials file (~/.aws/credentials of the root user)
The true location is /.aws/credentials. I believe this is because the daemon starts before $HOME is actually defined since it's not running as a user process. So starting a shell as root will tell you a different story for tilde or $HOME:
sudo sh -c 'cd ~/; echo $PWD'
That outputs /root, but using /root/.aws/credentials does not work!

Publish beanstalk environment hook issues

I have an issue with my script. I use Elastic Beanstalk to deploy my ASP.NET Core code, and in my postdeploy hook I have this script:
#!/usr/bin/env bash
file1=$(sudo cat /opt/elasticbeanstalk/config/ebenvinfo/region)
file2=$(/opt/elasticbeanstalk/bin/get-config container -k environment_name)
file3=$file2.$file1.elasticbeanstalk.com
echo $file3
sudo certbot -n -d $file3 --nginx --agree-tos --email al@gmail.com
It works perfectly if I launch it on the instance, but in the postdeploy hook I get this error:
[ERROR] An error occurred during execution of command [app-deploy] - [RunAppDeployPostDeployHooks]. Stop running the command. Error: Command .platform/hooks/postdeploy/00_get_certificate.sh failed with error fork/exec .platform/hooks/postdeploy/00_get_certificate.sh: exec format error
PS: My .ebextensions config grants the script execute rights:
container_commands:
  00_permission_hook:
    command: "chmod +x .platform/hooks/postdeploy/00_get_certificate.sh"
What's wrong?
I had the same issue. I added
#!/bin/bash
to the top of the .sh file, ran chmod +x on it, and the problem was solved.
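To double-check the fix, something along these lines should confirm both the shebang and the execute bit (paths taken from the question):
head -n 1 .platform/hooks/postdeploy/00_get_certificate.sh    # should print #!/bin/bash
chmod +x .platform/hooks/postdeploy/00_get_certificate.sh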

Unable to connect to docker container

I set up two swarm manager nodes (mgr1, mgr2), but when I try to connect to a container it throws an error message:
[root@ip-10-3-2-24 ec2-user]# docker run --restart=unless-stopped -h mgr1 --name mgr1 -d -p 3375:2375 swarm manage --replication --advertise 10.3.2.24:3375 consul://10.3.2.24:8500/
[root@ip-10-3-2-24 ec2-user]# docker exec -it mgr1 /bin/bash
rpc error: code = 2 desc = "oci runtime error: exec failed: exec: \"/bin/bash\": stat /bin/bash: no such file or directory"
It's happening on both servers (mgr1, mgr2). I'm also running a consul container on each node and I am able to connect to the consul containers.
/bin/bash might not be available in the container. You can use sh (or one of the other variants below) instead:
docker exec -it mgr1 sh
docker exec -it mgr1 /bin/sh
docker exec -it mgr1 bash
docker attach mgr1
UPDATE (based on the comments):
busybox is a very lightweight Linux-based image, and some of the commands above work perfectly fine:
bash $ sudo docker exec -it test1 bash
rpc error: code = 13 desc = invalid header field value "oci runtime error: exec failed: container_linux.go:247: starting container process caused \"exec: \\\"bash\\\": executable file not found in $PATH\"\n"
bash $ sudo docker exec -it test1 sh
/ # exit
bash $ sudo docker exec -it test1 /bin/sh
/ # exit
bash $ sudo docker attach test1
/ # exit
bash $
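If you are not sure which shells an image actually ships, you can check before trying an interactive exec (a sketch against the test1 container from the update; busybox typically provides /bin/sh but no /bin/bash):
sudo docker exec test1 ls -l /bin/sh /bin/bash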

Can't run a docker container on kubernetes in the interactive mode

Here is the documentation for the kubectl run command: http://kubernetes.io/docs/user-guide/kubectl/kubectl_run/
I have tried to run the docker container with the -i option, like in the example:
# Start a single instance of busybox and keep it in the foreground, don't restart it if it exits.
kubectl run -i --tty busybox --image=busybox --restart=Never
However, kubectl says that -i is an unknown shorthand flag:
Error: unknown shorthand flag: 'i' in -i
Run 'kubectl help' for usage.
Any ideas?
It's likely that your kubectl client is out of date, because your command line works for me:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.2", GitCommit:"528f879e7d3790ea4287687ef0ab3f2a01cc2718", GitTreeState:"clean"}
$ kubectl run -i --tty busybox --image=busybox --restart=Never
Waiting for pod default/busybox-dikev to be running, status is Pending, pod ready: false
Hit enter for command prompt
/ #