How to use Google Container Registry with the docker CLI

Google Container Registry documentation explains that in order to pull and push images to gcr.io, you have to prefix docker push and pull commands with gcloud preview.
gcloud preview docker push gcr.io/<gcr_namespace>/<docker-image>
gcloud preview docker pull gcr.io/<gcr_namespace>/<docker-image>
Is there a way to use Google Container Registry with the docker CLI directly, without gcloud preview prefix?

You can use the following commands:
gcloud preview docker -a
to update your local Docker configuration with gcr.io credentials.
And then use the regular docker CLI commands to push and pull images:
docker build -t gcr.io/<gcr_namespace>/<docker-image> .
docker push gcr.io/<gcr_namespace>/<docker-image>
Or for existing images:
docker tag <docker-image> gcr.io/<gcr_namespace>/<docker-image>
docker push gcr.io/<gcr_namespace>/<docker-image>
docker pull gcr.io/<gcr_namespace>/<docker-image>
This configuration is good for interoperability with the native docker CLI, but not ideal, as gcloud preview docker -a will need to be run again after the credentials expire.
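With newer versions of the Cloud SDK you can instead register gcloud as a Docker credential helper, which sidesteps the expiry problem; a minimal sketch, assuming a gcloud recent enough to support auth configure-docker:
gcloud auth configure-docker
docker push gcr.io/<gcr_namespace>/<docker-image>
After this one-time setup, the docker CLI fetches fresh gcr.io credentials through gcloud automatically on each push or pull.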

When building a new image, tag it directly to gcr.io during a docker build:
gcloud preview docker -a
docker build -t gcr.io/<project_id>/<docker-image> <directory>
docker push gcr.io/<project_id>/<docker-image>

Related

How to push a Docker image to ECR using Jenkins

I can't find the right solution here.
I have Jenkins installed on EC2, where I added a pipeline linked to a GitHub repository containing a docker-compose.yml file.
How do I push this Docker image to ECR on AWS?
Documentation on pushing to ECR is available here.
The steps are:
Authenticate docker with ECR
Build your docker image
Tag the docker image in the required ECR format:
docker tag <image>:<version> <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<image>:<version>
docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/<image>:<version>
The simplest way to accomplish this is with a shell script (a sketch follows below). There are also Jenkins plugins that will do this for you.
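A minimal sketch of such a script, assuming the AWS CLI v2 login syntax (v1 used aws ecr get-login) and hypothetical values for the account ID, region, and image name:
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical values; substitute your own.
AWS_ACCOUNT_ID=123456789012
REGION=us-east-1
IMAGE=my-app
VERSION=1.0.0
REGISTRY="${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"

# Authenticate docker with ECR (AWS CLI v2).
aws ecr get-login-password --region "$REGION" |
  docker login --username AWS --password-stdin "$REGISTRY"

# Build, tag in the required ECR format, and push.
docker build -t "${IMAGE}:${VERSION}" .
docker tag "${IMAGE}:${VERSION}" "${REGISTRY}/${IMAGE}:${VERSION}"
docker push "${REGISTRY}/${IMAGE}:${VERSION}"
In a Jenkins pipeline this would typically run inside an sh step.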

Dockerfile - install Jenkins on AWS

New to AWS so any help would be appreciated.
I'm attempting to run Jenkins through Docker on AWS. I found this article https://docs.aws.amazon.com/aws-technical-content/latest/jenkins-on-aws/containerized-deployment.html
Can anyone share a better step-by-step tutorial to achieve this? The page above seems incomplete.
It talks about "The Dockerfile should also contain the steps to install the Jenkins Amazon ECS plugin" but does not show how to go about installing the plugin using the Dockerfile.
Thanks.
Please follow the steps below:
Launch an EC2 cluster according to your needs.
Install Docker on your local machine. For example, on Ubuntu: sudo apt-get install docker.io
systemctl start docker
Create a new folder for our Jenkins Docker image, and create a new Dockerfile inside it with the following contents.
FROM jenkins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
Create plugins.txt in the same folder and add the line below:
amazon-ecs:1.3
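Note that /usr/local/bin/plugins.sh only exists in older official images; it has since been replaced. A minimal sketch of the same Dockerfile, assuming the current jenkins/jenkins:lts image and its bundled jenkins-plugin-cli:
FROM jenkins/jenkins:lts
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt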
Log in to ECR using the AWS CLI (configure it first with your credentials via aws configure):
aws ecr get-login --region <REGION>
Run the output returned by the above command to log Docker in, then build and tag the image:
sudo docker build -t jenkins_master .
sudo docker tag jenkins_master:latest <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_master:latest
Create a repository in ECR for this image:
aws ecr create-repository --repository-name jenkins_master
Push the image to AWS ECR:
sudo docker push <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_master:latest
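To verify that the push succeeded (a hedged check, assuming your credentials have ECR read access):
aws ecr describe-images --repository-name jenkins_master --region <REGION>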
Our Jenkins Docker image is ready, but data stored by this Jenkins server will not be persistent. To store data permanently, we will create another Docker image that declares a volume with a mount point. Create a new directory for this second image, and inside it create another Dockerfile with the contents below.
FROM jenkins
VOLUME ["/var/jenkins_home"]
Again follow the same commands to push this new repository to ECR.
sudo docker build -t jenkins_dv .
sudo docker tag jenkins_dv:latest <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_dv:latest
aws ecr create-repository --repository-name jenkins_dv
sudo docker push <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_dv:latest
Now our images are ready. We will run them as a service on our ECS cluster. For that we need to install ecs-cli, using the commands below on Linux (the downloaded binary also needs to be made executable):
sudo curl -o /usr/local/bin/ecs-cli https://s3.amazonaws.com/amazon-ecs-cli/ecs-cli-linux-amd64-latest
sudo chmod +x /usr/local/bin/ecs-cli
Create a new file named docker_compose.txt with the contents below, which holds the Jenkins service configuration.
jenkins_master:
  image: jenkins_master
  cpu_shares: 100
  mem_limit: 2000M
  ports:
    - "8080:8080"
    - "50000:50000"
  volumes_from:
    - jenkins_dv
jenkins_dv:
  image: jenkins_dv
  cpu_shares: 100
  mem_limit: 500M
Finally, bring this service up on your newly created cluster using the above file.
ecs-cli compose --file docker_compose.txt service up --cluster <cluster_name>
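Once the service is up, you can check that the containers are running (a hedged example, assuming ecs-cli is configured for your cluster):
ecs-cli ps --cluster <cluster_name>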
Hope this helps!

Deploying a container from Google Container Registry to a Compute Engine VM

I am trying to deploy a container on a Google VM instance.
From the doc it seems straightforward: specify your image in the container text field and start the VM.
My image is stored in the Google Container Registry in the same project as the VM. However, the VM starts but does not pull and run the Docker image. I SSH'ed into the VM, and docker image ls returns an empty list.
Pulling the image doesn't work.
~ $ docker pull gcr.io/project/image
Using default tag: latest
Error response from daemon: repository gcr.io/project/image not found: does not exist or no pull access
I know we're supposed to use gcloud docker, but gcloud isn't installed on the VM (which is dedicated to containers), so I suppose it's something else.
Also, the VM service account has read access to storage. Any idea?
From the GCR docs, you can use docker-credential-gcr to automatically authenticate with credentials from your GCE instance metadata.
To do that manually (assuming you have curl and jq installed):
TOKEN=$(curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google" | jq -r ".access_token")
docker login -u oauth2accesstoken -p "$TOKEN" https://gcr.io
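Alternatively, docker-credential-gcr can register itself as a Docker credential helper so the token handling happens automatically; a minimal sketch, assuming the binary is already installed on the VM:
docker-credential-gcr configure-docker
docker pull gcr.io/<project-id>/<image>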
To pull the image from the gcr.io container registry you can use the gcloud SDK, like this:
$ gcloud docker -- pull gcr.io/yrmv-191108/autoscaler
Or you can use the docker binary directly as you did. This command has the same effect as the previous gcloud one:
$ docker pull gcr.io/yrmv-191108/autoscaler
Basically your problem is that you are not specifying either the project you are working in or the image you are trying to pull, unless (very unlikely) your project ID is project and the image you want to pull is named image.
You can get a list of the images you have uploaded to your current project with:
$ gcloud container images list
Which, for me, returns:
NAME
gcr.io/yrmv-191108/autoscaler
gcr.io/yrmv-191108/kali
Only listing images in gcr.io/yrmv-191108. Use --repository to list images in other repositories.
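To inspect the tags available for one of those images, a hedged follow-up, assuming the same SDK:
gcloud container images list-tags gcr.io/yrmv-191108/autoscaler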
If, for some reason, you don't have permission to install the gcloud SDK (highly advisable when working with Google Cloud), you can see your uploaded images in the Google Cloud Console by navigating to "Container Registry -> Images".

How to set up continuous deployment from Docker Hub to AWS ECS?

I am setting up a CI/CD pipeline for my microservices. Currently I use Travis CI to pull the code from GitHub upon check-in, build the Docker image, and push it to Docker Hub. I tried using Docker Cloud (previously known as Tutum), which provides an automatic deployment feature to AWS EC2 instances, but the deployment sometimes recreates the container and the service endpoint URL changes, which is not desirable.
I am exploring Amazon's ECS and its tasks, but I cannot find any reference for how to set up continuous deployment to ECS when a new image is pushed to Docker Hub.
Does anybody have experience with this setup?
With ECS you would basically have CI detect a change to Docker Hub and update your task definition/service.
For this I use the wonderful ecs-deploy script from here:
https://github.com/silinternational/ecs-deploy
After my container has been built and pushed to Docker Hub, it's simply a matter of:
ecs-deploy -k $AWS_KEY -s $AWS_SECRET -r $AWS_REGION -c $CLUSTER_NAME -n $SERVICE_NAME -i $DOCKER_IMAGE_NAME
and that does it.
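In a Travis CI pipeline like the one described in the question, this can be wired in as a final build step; a hypothetical .travis.yml fragment, assuming $DOCKER_USERNAME, $DOCKER_PASSWORD, and the ecs-deploy variables are defined in the repository settings:
after_success:
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
  - docker build -t "$DOCKER_IMAGE_NAME" .
  - docker push "$DOCKER_IMAGE_NAME"
  - ./ecs-deploy -k $AWS_KEY -s $AWS_SECRET -r $AWS_REGION -c $CLUSTER_NAME -n $SERVICE_NAME -i $DOCKER_IMAGE_NAME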

Unable to see images or pull from the registry

I am unable to see images from the registry:
1. gcloud auth login
2. from local machine: gcloud docker push gcr.io/project-id/image-name
3. from VM running docker: gcloud docker images
I see nothing and am therefore unable to run any containers. Do you know why?
docker images just displays images that have been pulled to the local VM.
Try running gcloud docker pull gcr.io/project-id/image-name to get it onto your VM. Then docker images should show it.
If you are on docker 1.8 or later (see docker version) you can also run: gcloud docker search gcr.io/project-id to see the list of images under project-id.