Unable to deploy image into GKE cluster - google-cloud-platform

I have created a Docker image locally and pushed it to the GCP Container Registry using the gcloud SDK. The image was pushed to Container Registry successfully.
Now I am trying to manually deploy this image from Container Registry onto an existing private GKE cluster from the GCP UI by selecting the Deploy to GKE option.
After deploying, I get an error saying "container has runAsNonRoot and the image will run as root (pod: "appname-XXXXX_default(XXXXXX-XXXX....)", container: appName-1): CreateContainerConfigError".
Any help will be greatly appreciated.

Sounds like you've got a pod security policy set up to prevent containers from running as root. This is a good security policy because of the risk of a breakout into other applications or nodes within the cluster.
You might want to read up on the Kubernetes security context and potentially rebuild your container.
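For reference, the pod spec would then pin the workload to a non-root user explicitly; here is a minimal sketch (the names, UID, and image path are illustrative, not taken from the question):
apiVersion: v1
kind: Pod
metadata:
  name: appname
spec:
  securityContext:
    runAsNonRoot: true   # the kubelet refuses to start a container whose user resolves to root
    runAsUser: 15555     # a non-root UID that actually exists in the image
  containers:
  - name: appname-1
    image: gcr.io/my-project/appname:latest   # placeholder image path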
With my clusters I often have to consume public images that run as root. In those cases I use the public image as a base, create a new non-root user and group, and give them ownership of any tools needed in the image.
Changing the default user in a Dockerfile:
FROM ubuntu
RUN groupadd --gid 15555 notroot \
&& useradd --uid 15555 --gid 15555 -ms /bin/false notroot \
&& chown -R notroot:notroot /home/notroot
USER notroot
ENTRYPOINT ["/bin/bash", "-c"]
CMD ["whoami && id"]
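A quick local check to confirm the rebuilt image no longer runs as root (the image tag is just an example):
docker build -t notroot-test .
docker run --rm notroot-test
# expected output: "notroot" followed by uid=15555(notroot) gid=15555(notroot)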
Here's a better explanation of why you should avoid root in Docker images.

Related

Cannot connect to the Docker daemon at unix:///var/run/docker.sock (GitLab)

I have an AWS instance with Docker installed on it, and some containers are running. I have set up one Laravel project inside Docker.
I can access this web application through the AWS IP address as well as the DNS address (GoDaddy).
I have also set up GitLab CI/CD to publish the code to the AWS instance.
When I try to push the code through GitLab pipelines, I get the following error in the pipeline:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I checked Docker and it is running properly. Any clues, please?
.gitlab-ci.yml
http://pastie.org/p/7ELo6wJEbFoKaz7jcmJdDp
The pipeline fails at deploy-api-staging -> script -> scripts/ci/build.
build script
http://pastie.org/p/1iQLZs5GqP2m5jthB4YCbh
deploy script
http://pastie.org/p/2ho6ElfN2iWRcIZJjQGdmy
From what I see, you have directly installed and registered the GitLab runner on your EC2 instance.
I think the problem is that you haven't given your GitLab Runner user permission to use Docker.
From the official Docker documentation:
The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The Docker daemon always runs as the root user.
If you don’t want to preface the docker command with sudo, create a Unix group called docker and add users to it. When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group.
GitLab Runners run CI/CD pipelines as the gitlab-runner user by default, and that user won't use sudo (nor should it be in the sudoers file!), so we have to configure it correctly.
First of all, create a docker group on the EC2 instance where the GitLab Runner is registered:
sudo groupadd docker
Then, we are going to add the user gitlab-runner to that group:
sudo usermod -aG docker gitlab-runner
And we are going to verify that the gitlab-runner user actually has access to Docker:
sudo -u gitlab-runner -H docker info
Now your pipelines should be able to access the Unix socket at unix:///var/run/docker.sock without any problem.
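Depending on how the runner was installed, the gitlab-runner service may also need a restart so that the new group membership is picked up (this extra step is an assumption, not part of the original answer):
sudo systemctl restart gitlab-runner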
Additional Steps if using Docker Runners
If you're using the Docker executor in your runner, you now have to mount that Unix socket into the Docker image you're using.
[[runners]]
  url = "https://gitlab.com/"
  token = "REGISTRATION_TOKEN"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:19.03.12"
    privileged = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
Pay special attention to the contents of the volumes clause.

Fetching Docker container from AWS ECR, within a script, as a non-root user

I need to fetch a Docker image from AWS Elastic Container Registry, and apparently I need to do so as a non-root user. I have a basic install.sh script that sets up my AWS EC2 instance, so every time I launch a new instance I can theoretically just run this script and it will install programs, fetch the container, and set up my system the way I want it.
However, the workaround that Docker provides for managing Docker as a non-root user (see below) does not work when executed from within a script. The reason, as I understand it, is that the last line starts a new subshell, and I can't do that from within the script.
sudo groupadd docker
sudo usermod -aG docker ${USER}
newgrp docker # cannot be executed from within a script
Is there any way around this? Or do I just have to execute all three lines AND pull the container manually every time?
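One possible workaround (a sketch, not taken from the original thread) is to run the Docker commands under the docker group with sg, which takes a command argument and therefore works inside a script, instead of switching the interactive shell with newgrp. The account ID, region, and repository below are placeholders:
#!/bin/bash
# install.sh excerpt (hypothetical values)
sudo groupadd docker || true       # the group may already exist
sudo usermod -aG docker "${USER}"
# sg applies the docker group to a single command, so no new login shell is needed
aws ecr get-login-password --region us-east-1 | \
  sg docker -c "docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com"
sg docker -c "docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest"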

Unable to deploy custom docker image to AWS ECS using Terraform

I am using Terraform to build infrastructure on the AWS provider, and I push my local Docker images to ECR using the AWS CLI.
Now, I have an Application Load Balancer which routes traffic to the ECS service. I want ECS to manage my Docker containers using Fargate, but the containers exit with "Essential Docker container exited".
That's the only log printed out.
If I change the Docker image to nginx:latest (fetched from Docker Hub), it works.
PS: My Docker container is a simple Node application with node:alpine as the base image. Is the problem related to this, or am I doing something wrong?
Can anyone provide me with some insight into what is wrong with my approach?
I get the following error in AWS Logs:
standard_init_linux.go:211: exec user process caused "exec format error"
My Dockerfile
FROM node:alpine
WORKDIR /app
COPY . .
RUN npm install
# Expose a port.
EXPOSE 8080
# Run the node server.
ENTRYPOINT ["npm", "start"]
They say it's an issue with the start script. I am just running npm start to start the server.
It's not your approach; your image is just not working.
Try running it locally and check the output; otherwise you will need to ship the logs to CloudWatch and see what they say.
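For what it's worth, "exec format error" usually means the process could not be executed at all, commonly because the image was built for a different CPU architecture than the host (for example, built on an ARM machine but run on x86 Fargate) or because a startup script is missing its shebang line. A quick local check might look like this (the image name is a placeholder):
# run the image locally and watch the startup output
docker run --rm -p 8080:8080 my-node-app:latest
# confirm which architecture the image was built for (Fargate runs amd64/x86_64 by default)
docker inspect --format '{{.Architecture}}' my-node-app:latest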

Unable to update the Docker image. Error: repository does not exist or may require 'docker login'

I have deployed Watchtower, which automatically updates running Docker containers, inside a Docker Swarm.
I run this Docker Swarm on two AWS EC2 servers and use AWS ECR as the Docker registry.
To avoid aws ecr get-login, I have used the Amazon ECR Docker Credential Helper, which automatically gets credentials for Amazon ECR on docker push / docker pull, so there is no need to log in every 12 hours.
The problem is that Watchtower is throwing an error like:
time="2019-03-12T03:41:10Z" level=info msg="Unable to update container /crmproxy.1.wop3c1u2qktbkab8rukrlrgr6, err='Error response from daemon: pull access denied for 00000000000.dkr..amazonaws.com/crm, repository does not exist or may require 'docker login''. Proceeding to next."
I am sure this is not about logging in to ECR. I have correctly linked the credentials into the Watchtower container using the docker-compose.yml file.
Here is the Watchtower configuration in the docker-compose.yml file:
watchtower:
  image: v2tec/watchtower
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - ~/.docker/config.json:/config.json
  command: --interval 30
In my research into this issue, I saw that others have the same problem, and one person fixed it himself, but I don't understand the fix.
This is what I found: a solution that is unclear to me.
I don't know whether this answer is correct or not, but he said:
The problem was that I installed docker as root. Now installed with
the ec2-user of the Amazon Linux AMI and working
Please help me avoid this problem; I have tried so many times.
Any help would be appreciated.
There's an additional dot in your image URL. Might that be the reason for your issue?
00000000000.dkr..amazonaws.com/crm
^
Also, you may just add the ec2-user to the docker group to let it execute docker commands as well: sudo usermod -aG docker ec2-user. No need to reinstall.
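For completeness, when the Amazon ECR credential helper is configured, the ~/.docker/config.json that gets mounted into the Watchtower container usually contains an entry along these lines (the registry host here is a placeholder):
{
  "credHelpers": {
    "000000000000.dkr.ecr.us-east-1.amazonaws.com": "ecr-login"
  }
}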

How to connect my docker image to my kubernetes cluster?

I have a simple Play project, and I created a Docker image for it.
I created the image like this:
In my circle.yml I added:
deployment:
  feature:
    branch: /.*/
    commands:
      - docker login -e admin#something.com -u ${ART_USER} -p ${ART_KEY} crp-docker-docker-local.someartifactory.com
      - sbt -DBUILD_NUMBER="${CIRCLE_BUILD_NUM}" docker:publish
Now in my JFrog account I have the image name for this project, and in my controller.yml I added this specific image.
But now I have created a Kubernetes cluster with 4 minion machines and one master machine, and I want to know how to connect this Docker image to the cluster so I can run it.
Thanks!
kubectl run <app name> --image=<image name from jfrog>
If you want to automatically fetch the image name and start the container, you can run a special container that will fetch the image names.
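Since the circle.yml above performs a docker login, the registry is private and the cluster will also need pull credentials before kubectl run can fetch the image. A rough sketch with placeholder values (the secret name, email, and image tag are illustrative):
# create a pull secret for the private Artifactory registry
kubectl create secret docker-registry artifactory-pull-secret \
  --docker-server=crp-docker-docker-local.someartifactory.com \
  --docker-username="${ART_USER}" \
  --docker-password="${ART_KEY}" \
  --docker-email=you@example.com
# let the default service account use the secret for image pulls
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "artifactory-pull-secret"}]}'
kubectl run myapp --image=crp-docker-docker-local.someartifactory.com/myapp:latest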