I have a simple Play project, and I created a Docker image for it.
I created the image like this. In my circle.yml I added:
deployment:
  feature:
    branch: /.*/
    commands:
      - docker login -e admin#something.com -u ${ART_USER} -p ${ART_KEY} crp-docker-docker-local.someartifactory.com
      - sbt -DBUILD_NUMBER="${CIRCLE_BUILD_NUM}" docker:publish
Now in my JFrog account I have the image name for this project, and in my controller.yml I added this specific image.
But now I have created a Kubernetes cluster with 4 minion machines and one master machine, and I want to know: how do I connect this Docker image to this cluster and run it?
Thanks!
kubectl run <app name> --image=<image name from jfrog>
If you want to automatically fetch the image name and start the container, you can run a special container that fetches the image names for you.
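Since the image lives in a private Artifactory registry, the cluster also needs credentials to pull it. A minimal sketch, reusing the registry, user, and key from your circle.yml; the secret name and app name are placeholders:

# Create a registry secret from the Artifactory credentials (names are placeholders)
kubectl create secret docker-registry artifactory-cred \
  --docker-server=crp-docker-docker-local.someartifactory.com \
  --docker-username=${ART_USER} \
  --docker-password=${ART_KEY} \
  --docker-email=admin@something.com

# Let the default service account use the secret for image pulls
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "artifactory-cred"}]}'

# Run the image from the private registry
kubectl run <app name> --image=crp-docker-docker-local.someartifactory.com/<image name>:<tag>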
I have created a Docker image locally and pushed it to GCP Container Registry using the gcloud SDK. The image is successfully pushed to Container Registry.
Now, I am trying to manually deploy this image from Container Registry onto the existing private GKE cluster from the GCP UI by selecting the Deploy to GKE option.
After deploying, I am getting an error saying "container has runasNonRoot and the image will run as root (pod:"appname-XXXXX_default(XXXXXX-XXXX....)", container:appName-1): CreateContainerConfigError "
Any help will be greatly appreciated.
Sounds like you've got a pod security policy set up to avoid running containers as root. This is a good security policy because of the risk of breakout into other applications or nodes within the cluster.
You might want to read up on the Kubernetes security context and potentially rebuild your container.
With my clusters I often had to consume public images that run as root. In those cases I would use the upstream image as a base, then create a new (non-root) user and group and give them ownership of any tools needed in the image.
Changing the default user in a Dockerfile:
FROM ubuntu
RUN groupadd --gid 15555 notroot \
    && useradd --uid 15555 --gid 15555 -ms /bin/false notroot \
    && chown -R notroot:notroot /home/notroot
USER notroot
ENTRYPOINT ["/bin/bash", "-c"]
CMD ["whoami && id"]
Here's a better explanation of why you should avoid root in Docker images.
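If you control the pod spec, you can also state the non-root requirement explicitly via the pod's security context. A minimal sketch, not taken from the original question; the pod name, image, and UID (reusing 15555 from the Dockerfile above) are illustrative placeholders:

# Apply a pod whose securityContext enforces a non-root UID (all names are illustrative)
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: notroot-demo
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 15555
    runAsGroup: 15555
  containers:
    - name: app
      image: myrepo/notroot-image:latest
EOF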
I am using Terraform to build infrastructure on the AWS provider. I am using ECR and push my local Docker images to it using the AWS CLI.
Now I have an Application Load Balancer which routes traffic to an ECS service. I want ECS to manage my Docker containers using Fargate, but the containers exit with "Essential Docker container exited".
That's the only log printed out.
If I change the Docker image to nginx:latest (fetched from Docker Hub), it works.
PS: My Docker container is a simple Node application with node:alpine as the base image. Is the problem related to this? Am I getting something wrong?
Can anyone provide some insight into what is wrong with my approach?
I get the following error in AWS Logs:
standard_init_linux.go:211: exec user process caused "exec format error"
My Dockerfile
FROM node:alpine
WORKDIR /app
COPY . .
RUN npm install
# Expose a port.
EXPOSE 8080
# Run the node server.
ENTRYPOINT ["npm", "start"]
They say it's an issue with the start script. I am just running npm start to start the server.
It's not your approach; your image is just not working.
Try running it locally and check the output; otherwise you will need to ship the logs to CloudWatch and see what they say.
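A quick local check, as a sketch; the image and container names are placeholders, and the port matches the EXPOSE in your Dockerfile:

# Build and run the image locally. An "exec format error" is often an architecture
# mismatch or a broken start script, so on some machines you may also want to try
# adding --platform linux/amd64 to the build so the image matches Fargate.
docker build -t my-node-app .
docker run --name my-node-app-test -p 8080:8080 my-node-app

# If it exits immediately, check what the container printed, then clean up:
docker logs my-node-app-test
docker rm my-node-app-test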
I have an EC2 instance for testing. I deployed using OpsWorks, and now I'm making a new job on Jenkins to deploy automatically. What I want to do is:
when someone pushes to a branch
the Jenkins server builds a Docker image
and pushes the image to ECR
the EC2 instance pulls the ECR image, builds the Docker container, and runs it
I have made a job that uses ECR and deploys to ECS Fargate, but I have never used ECR to deploy to a pre-existing EC2 instance. I wonder whether this is possible.
Prerequisite
On your EC2 instance you first have to install Docker.
There are many ways you can do it.
Once Jenkins has built and pushed the Docker image to ECR, you can add a further step to the Jenkins build: Jenkins SSHes into the EC2 instance and pulls and runs the Docker image.
Alternatively, Jenkins can trigger a shell script on the EC2 instance. That script holds all the logic to pull the latest image, stop the existing container, and so on (see the sketch below).
From Jenkins you can also do it via an Ansible script.
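As a minimal sketch of the shell-script approach, assuming AWS CLI v2 is available on the instance; the host, key path, region, account ID, repository, and container name are all placeholders:

# From the Jenkins build step, trigger the script over SSH
ssh -i /path/to/key.pem ec2-user@<EC2_HOST> 'bash /home/ec2-user/deploy.sh'

And deploy.sh on the EC2 instance might look like:

#!/bin/bash
REGION=<REGION>
REGISTRY=<AWS ACC ID>.dkr.ecr.${REGION}.amazonaws.com
IMAGE=${REGISTRY}/<repository>:latest
CONTAINER=myapp

# Log in to ECR and pull the newest image
aws ecr get-login-password --region ${REGION} | docker login --username AWS --password-stdin ${REGISTRY}
docker pull ${IMAGE}

# Stop and remove the existing container (if any), then start the new one
docker stop ${CONTAINER} || true
docker rm ${CONTAINER} || true
docker run -d --name ${CONTAINER} -p 80:8080 ${IMAGE}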
New to AWS so any help would be appreciated.
I'm attempting to run Jenkins through Docker on AWS. I found this article https://docs.aws.amazon.com/aws-technical-content/latest/jenkins-on-aws/containerized-deployment.html
Can anyone share a better step-by-step tutorial to achieve this? The page above seems incomplete.
It talks about "The Dockerfile should also contain the steps to install the Jenkins Amazon ECS plugin" but does not show how to go about installing the plugin using the Dockerfile.
thanks
Please follow the steps below:
Launch an EC2 cluster according to your needs.
Install Docker on your local machine. For example, for Ubuntu: sudo apt-get install docker.io
systemctl start docker
Create a new folder for our Jenkins Docker image. Create a new Dockerfile inside it with the following contents.
FROM jenkins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
Create plugins.txt in the same folder and add the line below:
amazon-ecs:1.3
Log in to ECR using the AWS CLI. Configure aws first with your credentials.
aws ecr get-login --region <REGION>
Run the output returned from the above command to log Docker in.
sudo docker build -t jenkins_master .
sudo docker tag jenkins_master:latest <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_master:latest
Create a repository in ECR for this image:
aws ecr create-repository --repository-name jenkins_master
Push the image to AWS ECR.
sudo docker push <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_master:latest
Our Jenkins Docker image is ready, but data stored by this Jenkins server will not be persistent. To store data permanently, we will create another Docker image that creates a volume with a mount point. For that, create a new directory for this new Docker image, and inside it create another Dockerfile with the content below.
FROM jenkins
VOLUME ["/var/jenkins_home"]
Again, follow the same commands to push this new image to ECR:
sudo docker build -t jenkins_dv .
sudo docker tag jenkins_dv:latest <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_dv:latest
aws ecr create-repository --repository-name jenkins_dv
sudo docker push <AWS Account Number>.dkr.ecr.<REGION>.amazonaws.com/jenkins_dv:latest
Now our images are ready. We will use these images to run them as a service on our ECS cluster. For that we need to install ecs-cli using the command below (for Linux).
sudo curl -o /usr/local/bin/ecs-cli https://s3.amazonaws.com/amazon-ecs-cli/ecs-cli-linux-amd64-latest
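After downloading, you will likely also need to make the binary executable and point ecs-cli at your cluster. A brief sketch, with the cluster name, region, and configuration name as placeholders:

sudo chmod +x /usr/local/bin/ecs-cli
ecs-cli configure --cluster <cluster_name> --region <REGION> --config-name default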
Create a new txt file with the contents below, which holds the Jenkins configuration.
jenkins_master:
  image: jenkins_master
  cpu_shares: 100
  mem_limit: 2000M
  ports:
    - "8080:8080"
    - "50000:50000"
  volumes_from:
    - jenkins_dv
jenkins_dv:
  image: jenkins_dv
  cpu_shares: 100
  mem_limit: 500M
Finally, push this service using the above file to your newly created cluster.
ecs-cli compose --file docker_compose.txt service up --cluster <cluster_name>
Hope this helps!
I am unable to see images from the registry:
1. gcloud auth login
2. From the local machine: gcloud docker push gcr.io/project-id/image-name
3. From the VM running Docker: gcloud docker images
I see nothing, and I am therefore unable to run any containers. Do you know why?
docker images just displays images that have been pulled to the local VM.
Try running gcloud docker pull gcr.io/project-id/image-name to get it onto your VM. Then docker images should show it.
If you are on docker 1.8 or later (see docker version) you can also run: gcloud docker search gcr.io/project-id to see the list of images under project-id.
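Putting it together on the VM, a short sketch (project-id and image-name are the placeholders from the question):

gcloud docker pull gcr.io/project-id/image-name   # fetch the image from the registry onto the VM
docker images                                     # the image should now be listed locally
docker run -d gcr.io/project-id/image-name        # and it can be started as a container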