403 Forbidden when downloading a container image from the registry outside GCE

I am trying to download an image from the Google Container Registry on a CoreOS machine running on another server (not GCE).
I configured a new service account:
core@XXXX ~ $ docker run -t -i -v $(pwd)/keys:/tmp/keys --name gcloud-config ernestoalejo/google-cloud-sdk-with-docker gcloud auth activate-service-account XXXXXXX@developer.gserviceaccount.com --key-file /tmp/keys/key.p12 --project XXXX
Activated service account credentials for: [XXXXXXX@developer.gserviceaccount.com]
The account is active, but when I try to download the container image it returns a forbidden HTTP status.
core@XXXX ~ $ /usr/bin/docker run --volumes-from gcloud-config --rm -v /var/run/docker.sock:/var/run/docker.sock ernestoalejo/google-cloud-sdk-with-docker sh -c "gcloud preview docker pull gcr.io/XXXXX/influxdb"
Pulling repository gcr.io/XXXXX/influxdb
time="2015-05-08T06:38:55Z" level="fatal" msg="HTTP code: 403"
ERROR: (gcloud.preview.docker) A Docker command did not run successfully.
Tried to run: 'docker pull gcr.io/XXXXX/influxdb'
Exit code: 1
There is only one account on the server, and it is correctly configured:
core@XXXX ~ $ /usr/bin/docker run --volumes-from gcloud-config --rm -v /var/run/docker.sock:/var/run/docker.sock ernestoalejo/google-cloud-sdk-with-docker sh -c "gcloud auth list"
To set the active account, run:
$ gcloud config set account ``ACCOUNT''
Credentialed accounts:
- XXXXXXXXXXXXX@developer.gserviceaccount.com (active)
How can I authorize the external machine to download images from the registry?
NOTE: The image ernestoalejo/google-cloud-sdk-with-docker is the same as google/cloud-sdk but with this issue fixed.
UPDATE: I have also tried the solution from this answer, but it makes no difference.
PROJECT_ID=XXXXXX
ROBOT=XXXXXX@developer.gserviceaccount.com
gsutil acl ch -u $ROBOT:R gs://artifacts.$PROJECT_ID.appspot.com
gsutil -m acl ch -R -u $ROBOT:R gs://artifacts.$PROJECT_ID.appspot.com
gsutil defacl ch -u $ROBOT:R gs://artifacts.$PROJECT_ID.appspot.com
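To double-check those grants, the ACLs can be read back (a sketch using the same variables as above):
gsutil acl get gs://artifacts.$PROJECT_ID.appspot.com
gsutil defacl get gs://artifacts.$PROJECT_ID.appspot.com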

It seems that the new Frankfurt region of DigitalOcean can't access the Google Container Registry at all; it always returns a 403 Forbidden. As soon as I used a server in London, everything started working.
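A quick way to test raw reachability from a given server, independent of credentials (a sketch: a reachable registry answers 401 to an unauthenticated v2 request, so a 403 or a timeout here points at network-level blocking rather than your service account):
curl -s -o /dev/null -w '%{http_code}\n' https://gcr.io/v2/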

Related

GCSFuse not finding default credentials when running a Cloud Run app Docker image locally

I am working on mounting a Cloud Storage bucket in my Cloud Run app, using the example and code from the official tutorial: https://cloud.google.com/run/docs/tutorials/network-filesystems-fuse
The application uses Docker only (no cloudbuild.yaml).
The Dockerfile builds without issue using the command:
docker build --platform linux/amd64 -t fusemount .
I then start the container with the following command:
docker run --rm -p 8080:8080 -e PORT=8080 fusemount
and when it runs, gcsfuse is invoked with both the mount directory and the bucket URL:
gcsfuse --debug_gcs --debug_fuse gs://<my-bucket> /mnt/gs
But the connection fails:
2022/12/11 13:54:35.325717 Start gcsfuse/0.41.9 (Go version go1.18.4) for app "" using mount point: /mnt/gcs
2022/12/11 13:54:35.618704 Opening GCS connection...
2022/12/11 13:57:26.708666 Failed to open connection: GetTokenSource: DefaultTokenSource: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I have already set up the application-default credentials with the following command:
gcloud auth application-default login
I also have a Python-based Cloud Function project, tested on the same local machine, which has no problem accessing the same storage bucket with the same default login credentials.
What am I missing?
Google client libraries look in ~/.config/gcloud for credentials when using the Application Default Credentials approach.
Your local Docker container doesn't contain this config when running locally.
So you might want to mount it when running the container:
$ docker run --rm -v /home/$USER/.config/gcloud:/root/.config/gcloud -p 8080:8080 -e PORT=8080 fusemount
Some notes:
1. I'm not sure which OS you are using, so replace /home/$USER with the real path to your home directory.
2. Likewise, I'm not sure your image has a /root home directory, so make sure the path from 1. is mounted properly.
3. Make sure your local user is authorized to the gcloud CLI, as you mentioned, using gcloud auth application-default login.
Let me know if this helped.
If you are using Docker and not Google Compute Engine (GCE), did you try mounting a service account key when running the container and using that key while mounting GCSFuse?
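A minimal sketch of that approach, assuming a JSON key file at ./keys/sa-key.json (path and filename are placeholders; GOOGLE_APPLICATION_CREDENTIALS is the standard variable the Google auth libraries honor):
docker run --rm \
    -v "$(pwd)/keys/sa-key.json":/tmp/sa-key.json:ro \
    -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/sa-key.json \
    -p 8080:8080 -e PORT=8080 fusemount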
If you are building and deploying to Cloud Run, did you grant the required permissions mentioned in https://cloud.google.com/run/docs/tutorials/network-filesystems-fuse#ship-code?
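If not, the grant typically looks something like this sketch (the exact roles are listed in the tutorial; the project ID and service account email here are placeholders):
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com \
    --role roles/storage.objectAdmin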

Deploying imgproxy to AWS with Fargate

I would like to deploy imgproxy to AWS using Fargate, to serve different sizes/formats of images from an S3 bucket, ideally also behind CloudFront.
Imgproxy has a docker image
docker pull darthsim/imgproxy:latest
docker run -p 8080:8080 -it darthsim/imgproxy
and serving from S3 is supported, e.g.:
docker run -p 8080:8080 -e AWS_ACCESS_KEY_ID=XXXX -e AWS_SECRET_ACCESS_KEY=YYYYYYXXX -e IMGPROXY_USE_S3=true -e IMGPROXY_S3_REGION=us-east-1 -it darthsim/imgproxy
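Once the container is up, a resize request looks like this (unsigned /insecure form, which works while no key/salt is configured; the bucket path is illustrative):
curl -o thumb.png "http://localhost:8080/insecure/rs:fit:300:300/plain/s3://somebucket/art/1.png"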
Deploy with Fargate
I followed the Fargate wizard and chose "Custom"
The container
I set up the container as follows, using the imgproxy Docker image and mapping port 8080, which I think is the one it usually runs on.
In the advanced section, I set the command as
docker run -p 8080:8080 -e IMGPROXY_USE_S3=true -e IMGPROXY_S3_REGION=us-east-1 -it darthsim/imgproxy
The task
I left this as the defaults:
The service
For the service, I chose to use a load balancer:
The results
After waiting for the launch to complete, I went to the load balancer and copied the DNS name:
http://.us-east-1.elb.amazonaws.com:8080/
But I got 503 Service Temporarily Unavailable
It seems the task failed to start
Status reason CannotStartContainerError: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "docker run -p 8080:8080 -e IMGPROXY_USE_S3=true -e IMGPROXY_S3_REGION=us-east-1 -it darthsim/imgproxy": st
Entry point ["docker run -p 8080:8080 -e IMGPROXY_USE_S3=true -e IMGPROXY_S3_REGION=us-east-1 -it darthsim/imgproxy"]
Command ["docker run -p 8080:8080 -e IMGPROXY_USE_S3=true -e IMGPROXY_S3_REGION=us-east-1 -it darthsim/imgproxy"]
Help
I'm looking initially to figure out how to get this deployed in basic form. Maybe I need to do more with IAM roles so it doesn't need the AWS creds? Or maybe something in the config was not right?
Then I'd also like to figure out how to bring CloudFront into the picture too.
Turns out I was overcomplicating this.
The CMD and ENTRYPOINT can be left blank.
I then simply set the environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
IMGPROXY_S3_REGION
IMGPROXY_USE_S3 true
After waiting for the task to go from PENDING to RUNNING, I could copy the DNS name of the load balancer and be greeted by the imgproxy "hello" page.
The IAM Role vs creds
I didn't get this working via an IAM role for the task. I tried giving the ecsTaskExecutionRole S3 read permissions, but in the absence of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the container environment, imgproxy complained about missing creds.
In the end I just created a user with an S3 policy allowing read access to the relevant S3 bucket and copied the key ID and secret access key into the environment as above.
If anyone knows how to get an IAM role working, that would be nice to know.
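(One assumption worth testing, sketched below: the S3 permissions may need to live on a dedicated task role, which the AWS SDK inside the container picks up via the ECS metadata endpoint, rather than on ecsTaskExecutionRole, which ECS itself only uses to pull the image and ship logs. The role name here is a placeholder.)
aws iam attach-role-policy \
    --role-name imgproxyTaskRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
# ...then reference imgproxyTaskRole as the task definition's "Task Role"
# (taskRoleArn), not as its "Task Execution Role".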
Cloudfront
This was just a case of setting the CloudFront origin to the load balancer for the cluster and setting its HTTP port to 8080 to match imgproxy.
Signed URLs
Just add the following to the environment variables:
IMGPROXY_KEY
IMGPROXY_SALT
and they can be generated with echo $(xxd -g 2 -l 64 -p /dev/random | tr -d '\n').
After setting these, the simple /insecure URL will not work.
In Python, the signed URL can be generated from the imgproxy example code. Note that the url on line 11 should be the S3 URL for the image, e.g. "s3://somebucket/art/1.png", and you need to replace the key and salt with the hex-encoded ones from the ECS environment.
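For a rough shell equivalent of the same algorithm, base64url(HMAC-SHA256(key, salt || path)) (a sketch only; the processing options and bucket path are illustrative):
KEY=$IMGPROXY_KEY    # hex-encoded key from the ECS environment
SALT=$IMGPROXY_SALT  # hex-encoded salt from the ECS environment
PATH_PART="/rs:fit:300:300/plain/s3://somebucket/art/1.png"
# Hex-decode the salt, prepend it to the path, HMAC the result with the
# hex key, then base64url-encode and strip the padding:
SIGNATURE=$( { printf '%s' "$SALT" | xxd -r -p; printf '%s' "$PATH_PART"; } \
  | openssl dgst -sha256 -mac HMAC -macopt "hexkey:$KEY" -binary \
  | base64 | tr '+/' '-_' | tr -d '=' )
echo "http://<load-balancer-dns>:8080/${SIGNATURE}${PATH_PART}"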

Cloud Run: Forbidden error while accessing service

I have created a WordPress service using Cloud Run. I deployed it using the command below:
gcloud beta run deploy wp --image gcr.io/<project>/wp:v1 \
--add-cloudsql-instances <project>:us-central1:mysql2 \
--update-env-vars DB_HOST='127.0.0.1',DB_NAME=mysql2,DB_USER=wordpress,DB_PASSWORD=password,CLOUDSQL_INSTANCE='<project>:us-central1:mysql2'
The service deploys fine, but when trying to access it, it shows the error below:
<h1>Error: Forbidden</h1>
<h2>Your client does not have permission to get URL <code>/</code> from this server.</h2>
UPDATES:
The Dockerfile is as follows; I am following https://github.com/acadevmy/cloud-run-wordpress:
FROM wordpress:5.2.1-php7.3-apache
EXPOSE 80
# Use the PORT environment variable in Apache configuration files.
RUN sed -i 's/80/${PORT}/g' /etc/apache2/sites-available/000-default.conf /etc/apache2/ports.conf
# wordpress conf
COPY wordpress/wp-config.php /var/www/html/wp-config.php
# download and install cloud_sql_proxy
RUN apt-get update && apt-get -y install net-tools wget && \
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O /usr/local/bin/cloud_sql_proxy && \
chmod +x /usr/local/bin/cloud_sql_proxy
COPY wordpress/cloud-run-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["/usr/local/sbin/apache2ctl -D FOREGROUND"]
##docker-entrypoint.sh
#!/usr/bin/env bash
# Start the sql proxy
cloud_sql_proxy -instances=$CLOUDSQL_INSTANCE=tcp:3306 &
# Execute the rest of your ENTRYPOINT and CMD as expected.
exec "$@"
We allowed unauthenticated invocations, and now the following error can be seen in the console log:
"Error establishing a database connection"
Additional Updates:
The DB is running with a private IP, so I am using Serverless VPC Access.
DB information is as follows:
gcloud sql instances list
NAME    DATABASE_VERSION  LOCATION       TIER         PRIMARY_ADDRESS  PRIVATE_ADDRESS  STATUS
mysql2  MYSQL_5_7         us-central1-b  db-f1-micro  -                10.0.100.5       RUNNABLE
This is the Serverless VPC Access range:
testserverlessvpc kube-shared-vpc us-central1 192.168.60.0/28 200 300
Now I have added an additional parameter, shown below, to both the gcloud run deploy and gcloud run services update commands:
--vpc-connector projects/< HOST-Project >/locations/us-central1/connectors/testserverlessvpc
But gcloud run deploy fails with the error below:
⠏ Deploying new service... Internal system error, system will retry.

Docker unable to connect to AWS EC2 cloud

Hi, I am able to deploy my Spring Boot application in my local Docker container (1.11.2) on Windows 7. I followed the steps below to run the Docker image on AWS EC2 (free account: eu-central-1), but I am getting an error.
Step 1
Generated an Amazon "AccessKeyID" and "SecretKey".
Step 2
Created a new repository, and it shows 5 steps for pushing my Docker image to it.
Step 3
Installed the AWS CLI, ran aws configure, and configured all the details.
Running aws iam list-users --output table shows the full user list.
Step 4
Ran the following command in the Docker container: aws ecr get-login --region us-west-2
It returns the docker login command.
While running the docker login it returns the following error :
XXXX@XXXX MINGW64 ~
$ docker login -u AWS -p <accessKey>/<secretKey>
Uwg
Error response from daemon: Get https://registry-1.docker.io/v2/: unauthorized:
incorrect username or password
XXXX@XXXX MINGW64 ~
$ gLBBgkqhkiG9w0BBwagggKyMIICrgIBADCCAqcGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQME8
Zei
bash: gLBBgkqhkiG9w0BBwagggKyMIICrgIBADCCAqcGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQ
ME8Zei: command not found
XXXX@XXXX MINGW64 ~
$ lJnpBND9CwzAgEQgIICeLBms72Gl3TeabEXDx+YkK9ZlbyGxPmsuVI/rq81tDeIC68e0Ma+ghg3Dt
Bus
bash: lJnpBND9CwzAgEQgIICeLBms72Gl3TeabEXDx+YkK9ZlbyGxPmsuVI/rq81tDeIC68e0Ma+ghg
3DtBus: No such file or directory
I didn't find a proper answer on Google. It would be great if someone could guide me to resolve this issue. Thanks in advance.
Your command is not pointing at your ECR endpoint, but at Docker Hub. On Linux, normally I would simply run:
$ eval $(aws ecr get-login --region us-west-2)
This is possible because the get-login command is a wrapper that retrieves a new authorization token and formats the docker login command for you. You only need to execute the formatted command (in this case with eval).
But if you really want to run the docker login manually, you'll have to specify the authorization token and the endpoint of your repository:
$ docker login -u AWS -p <password> -e none https://<aws_account_id>.dkr.ecr.<region>.amazonaws.com
Where <password> is actually the authorization token (which can be generated with the aws ecr get-authorization-token command).
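For completeness, a sketch of pulling that token by hand (per the ECR API, the token is base64 of "AWS:<password>"):
$ TOKEN=$(aws ecr get-authorization-token --output text \
      --query 'authorizationData[].authorizationToken')
$ echo "$TOKEN" | base64 -d | cut -d: -f2    # the <password> part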
Please refer to the documentation for more details: http://docs.aws.amazon.com/cli/latest/reference/ecr/index.html

How to use Google Container Registry

I tried to use Google Container Registry, but it did not work for me.
I wrote the following containers.yaml.
$ cat containers.yaml
version: v1
kind: Pod
spec:
  containers:
    - name: amazonssh
      image: asia.gcr.io/<project-id>/amazonssh
      imagePullPolicy: Always
  restartPolicy: Always
  dnsPolicy: Default
I ran an instance with the following command.
$ gcloud compute instances create containervm-amazonssh --image container-vm --network product-network --metadata-from-file google-container-manifest=containers.yaml --zone asia-east1-a --machine-type f1-micro
I set the following ACL permission:
# gsutil acl ch -r -u <project-number>@developer.gserviceaccount.com:R gs://asia.artifacts.<project-id>.appspot.com
But "Access denied" occurs when I docker pull the image from the Google Container Registry:
# docker pull asia.gcr.io/<project-id>.a/amazonssh
Pulling repository asia.gcr.io/<project-id>.a/amazonssh
FATA[0000] Error: Status 403 trying to pull repository <project-id>/amazonssh: "Access denied."
Can you verify from your instance that you can read data from your Google Cloud Storage bucket? This can be verified by (with $SVC_ACCT as defined below):
$ curl -H 'Metadata-Flavor: Google' $SVC_ACCT/scopes
...
https://www.googleapis.com/auth/devstorage.full_control
https://www.googleapis.com/auth/devstorage.read_write
https://www.googleapis.com/auth/devstorage.read_only
...
If so, then try the following. On Google Compute Engine you can log in without gcloud with:
$ METADATA=http://metadata.google.internal./computeMetadata/v1
$ SVC_ACCT=$METADATA/instance/service-accounts/default
$ ACCESS_TOKEN=$(curl -H 'Metadata-Flavor: Google' $SVC_ACCT/token \
| cut -d'"' -f 4)
$ docker login -e not@val.id -u _token -p $ACCESS_TOKEN https://gcr.io
Then try your docker pull command again.
You have an extra .a after the project-id here; not sure if you ran the command that way?
docker pull asia.gcr.io/<project-id>.a/amazonssh
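i.e., the pull should presumably match the image name from your manifest:
docker pull asia.gcr.io/<project-id>/amazonssh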
The container-vm has a cron job running gcloud docker -a as root, so you should be able to docker pull as root.
The kubelet, which launches the container-vm Docker containers, also understands how to natively authenticate with GCR, so it should just work.
Feel free to reach out to us at gcr-contact@google.com. It would be useful if you could include your project-id, and possibly the /var/log/kubelet.log from your container-vm.