With Google Cloud Build, I am creating a trigger to build using a Dockerfile, the end result of which is a docker image.
I'd like to tag and push this to the standard Docker image repository (docker.io), but I get the following error:
The push refers to repository [docker.io/xxx/yyy]
Pushing xxx/yyy:master
denied: requested access to the resource is denied
I assume that this is because within the context of the build workspace, there has been no login to the Docker registry.
Is there a way to do this, or do I have to use the Google Image Repository?
You can configure Google Cloud Build to push to a different repository with a cloudbuild.yaml in addition to the Dockerfile. You can log in to Docker by passing your password as an encrypted secret env variable. An example of using a secret env variable can be found here: https://cloud.google.com/cloud-build/docs/securing-builds/use-encrypted-secrets-credentials#example_build_request_using_an_encrypted_variable
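For example, a minimal cloudbuild.yaml sketch along those lines (your Docker Hub username, the KMS key path and the ciphertext are placeholders to replace) could look like:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'docker.io/xxx/yyy:master', '.']
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'echo "$$DOCKER_PASSWORD" | docker login --username=your-docker-id --password-stdin && docker push docker.io/xxx/yyy:master']
  secretEnv: ['DOCKER_PASSWORD']
secrets:
- kmsKeyName: 'projects/your-project/locations/global/keyRings/your-keyring/cryptoKeys/your-key'
  secretEnv:
    DOCKER_PASSWORD: 'base64-encoded-encrypted-password'
The $$ escapes the variable so Cloud Build leaves it for bash to expand at run time.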
Related
I am trying to push a Docker container image built in an AWS CodeBuild project to GCP Artifact Registry. In order to push the image from the AWS-managed Ubuntu CodeBuild environment, I will need to install and initialise the google-cloud-cli. However, to authenticate/activate the CLI using a service account, it requires a service-account-key.json file containing the service account credentials, as mentioned here: https://cloud.google.com/container-registry/docs/advanced-authentication.
I would like to avoid having to set up EFS just to pass a JSON file to the build server. What is the best way to authenticate the google-cloud-cli using a service account without having to use a JSON file?
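For context, the closest pattern I have found (sketched below; GCP_SA_KEY and the Artifact Registry host are placeholders, and the key is assumed to be injected from AWS Secrets Manager rather than stored on EFS) is:
# GCP_SA_KEY is assumed to be injected from AWS Secrets Manager into the CodeBuild environment
echo "$GCP_SA_KEY" > /tmp/gcp-key.json
gcloud auth activate-service-account --key-file=/tmp/gcp-key.json
gcloud auth configure-docker us-central1-docker.pkg.dev
rm /tmp/gcp-key.json
This still materialises a key file briefly inside the build container, so I am wondering whether there is a cleaner way.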
Final goal: To deploy a ready-made cryptocurrency exchange on AWS.
I have set up a ready-made server by 0xProject by running the following command on my local machine:
npx @0x/launch-kit-wizard && docker-compose up
This command creates a docker-compose.yml file which has multiple container definitions and starts the exchange on http://localhost:3001/
I need to deploy this to AWS, for which I'm following this YouTube tutorial.
I have created a registry user with appropriate permissions
An EC2 instance is created
ECR repository is created
AWS CLI is configured
As per the AWS instructions, I'm retrieving an authentication token and authenticating the Docker client to the registry:
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin <docker-id-given-by-AWS>.dkr.ecr.us-east-2.amazonaws.com
I'm trying to build the docker image:
docker build -t testdockerregistry .
Now, since in this case we have a docker-compose.yml instead of a Dockerfile, when I try to build the image it throws the following error:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: CreateFile C:\Users\hp\Desktop\xxx\Dockerfile: The system cannot find the file specified.
I tried building the image from docker-compose itself as per this guide, which fails with the following message:
postgres uses an image, skipping
frontend uses an image, skipping
mesh uses an image, skipping
backend uses an image, skipping
nginx uses an image, skipping
Can anyone please help me with this?
You can use the ecs-cli compose command from the ECS CLI.
This command translates the docker-compose file you created into an ECS task definition.
If you're interested in finding out more about the CLI, take a read of the AWS documentation here.
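As a rough sketch (the cluster name, region and project name below are placeholders, not values from your setup):
$ ecs-cli configure --cluster my-exchange-cluster --region us-east-2 --default-launch-type FARGATE --config-name my-exchange
$ ecs-cli compose --file docker-compose.yml --project-name my-exchange up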
Another approach, instead of using the AWS ECS CLI directly, is to use the new docker/compose-cli:
This CLI tool makes it easy to run Docker containers and Docker Compose applications in the cloud using either Amazon Elastic Container Service (ECS) or Microsoft Azure Container Instances (ACI) using the Docker commands you already know.
See "Docker Announces Open Source Compose for AWS ECS & Microsoft ACI " from Aditya Kulkarni.
It references "Docker Open Sources Compose for Amazon ECS and Microsoft ACI" from Chris Crone, Engineer @docker:
While implementing these integrations, we wanted to make sure that existing CLI commands were not impacted.
We also wanted an architecture that would make it easy to add new backends and provide SDKs in popular languages. We achieved this with the following architecture (see the diagram in the original post).
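In practice (a sketch; the context name myecscontext is just an example), the workflow with that CLI looks roughly like:
$ docker context create ecs myecscontext   # prompts for an AWS profile or credentials
$ docker context use myecscontext
$ docker compose up   # converts docker-compose.yml to CloudFormation and deploys it to ECS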
We have our cloud setup on GCP with multiple projects. On our Jenkins machines, I can see multiple Docker registry entries. One of them is something like this:
"https://gcr.io/abc-defghi-212121": {
"auth": "somethingsomethingsomething=",
"email": "not#val.id"
I want to do the same thing for another project, which will be like:
"https://gcr.io/jkl-mnopqr-313131": {
"auth": "somethingsomethingsomething=",
"email": "not#val.id"
So that if I do a docker login to both registries, it should work. I have followed the link below:
https://cloud.google.com/container-registry/docs/advanced-authentication
There are different methods described there, but I am still confused. Please help.
The most common authentication method for using Docker with Container Registry is to use gcloud as a Docker credential helper. This is set up by running:
$ gcloud auth configure-docker
This will create a JSON file, ~/.docker/config.json, that "tells" Docker to authenticate to Container Registry using the current gcloud user.
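The relevant part of that file ends up looking roughly like this (gcloud registers itself as a credential helper per registry host, so both of your projects under gcr.io are covered by the same gcr.io entry):
"credHelpers": {
  "gcr.io": "gcloud",
  "us.gcr.io": "gcloud",
  "eu.gcr.io": "gcloud",
  "asia.gcr.io": "gcloud"
}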
However, I am assuming that in your case it's Jenkins that builds and pushes your images to Container Registry. You need to authenticate it, which I believe is done using the OAuth plugin. You can find it in the Jenkins interface:
Jenkins -> Credentials -> Global Credentials -> Add Credentials
Nevertheless, you could also refer to this documentation, which explains how to troubleshoot common Container Registry and Docker issues. Also, make sure that you have the required permissions to push or pull.
When I deployed using my self-hosted (private) Docker image registry, I got this error:
This service will require authentication to be invoked.
Deploying container to Cloud Run service [serverless-functions-go] in project [PROJECT_ID] region [us-central1]
X Deploying new service...
. Creating Revision...
. Routing traffic...
Deployment failed
ERROR: (gcloud.beta.run.deploy) Invalid image provided in the revision template. Expected [region.]gcr.io/repo-path[:tag or @digest], obtained dtr.artifacts.xxx.com/xxxxx/xxxx/serverless-functions-go:latest
Before pulling the image from my private Docker image registry, I need to use a command like:
docker login [options]
How can I solve this issue?
Can I use cloud run with private docker container registry?
No, not at this time. See "Images you can deploy" in the Cloud Run documentation.
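A common workaround (sketched below; PROJECT_ID and the region are taken from your output, and the docker login to your private registry is assumed to already work locally) is to re-tag the image, push that copy to Container Registry, and deploy from there:
docker pull dtr.artifacts.xxx.com/xxxxx/xxxx/serverless-functions-go:latest
docker tag dtr.artifacts.xxx.com/xxxxx/xxxx/serverless-functions-go:latest gcr.io/PROJECT_ID/serverless-functions-go:latest
docker push gcr.io/PROJECT_ID/serverless-functions-go:latest
gcloud run deploy serverless-functions-go --image gcr.io/PROJECT_ID/serverless-functions-go:latest --region us-central1 --platform managed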
I am trying to deploy a container on a Google VM instance.
From the doc it seems straightforward: specify your image in the container text field and start the VM.
My image is stored in the Google Container Registry in the same project as the VM. However, the VM starts but does not pull and run the Docker image. I SSH'd into the VM and docker image ls returns an empty list.
Pulling the image doesn't work.
~ $ docker pull gcr.io/project/image
Using default tag: latest
Error response from daemon: repository gcr.io/project/image not found: does not exist or no pull access
I know we're supposed to use gcloud docker, but gcloud isn't installed on the VM (which is dedicated to containers), so I suppose it's something else.
Also, the VM service account has read access to storage. Any idea?
From the GCR docs, you can use docker-credential-gcr to automatically authenticate with credentials from your GCE instance metadata.
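A minimal sketch of that (assuming docker-credential-gcr is already present on the VM, as it typically is on Container-Optimized OS images; the project and image names are placeholders):
docker-credential-gcr configure-docker
docker pull gcr.io/your-project-id/your-image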
To do that manually (assuming you have curl and jq installed):
TOKEN=$(curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google" | jq -r ".access_token")
docker login -u oauth2accesstoken -p "$TOKEN" https://gcr.io
To pull the image from the gcr.io container registry you can use the gcloud sdk, like this:
$ gcloud docker -- pull gcr.io/yrmv-191108/autoscaler
Or you can use the docker binary directly as you did. This command has the same effect as the previous gcloud one:
$ docker pull gcr.io/yrmv-191108/autoscaler
Basically your problem is that you are not specifying either the project you are working in or the image you are trying to pull, unless (very unlikely) your project ID is project and the image you want to pull is named image.
You can get a list of the images you have uploaded to your current project with:
$ gcloud container images list
Which, for me, gets:
NAME
gcr.io/yrmv-191108/autoscaler
gcr.io/yrmv-191108/kali
Only listing images in gcr.io/yrmv-191108. Use --repository to list images in other repositories.
If, for some reason, you don't have permission to install the gcloud SDK (highly advisable for working with Google Cloud), you can see your uploaded images in the Google Cloud console by navigating to "Container Registry -> Images".