I tried to use Google Container Registry, but it did not work for me.
I wrote the following containers.yaml.
$ cat containers.yaml
version: v1
kind: Pod
spec:
  containers:
    - name: amazonssh
      image: asia.gcr.io/<project-id>/amazonssh
      imagePullPolicy: Always
  restartPolicy: Always
  dnsPolicy: Default
I run the instance with the following command.
$ gcloud compute instances create containervm-amazonssh --image container-vm --network product-network --metadata-from-file google-container-manifest=containers.yaml --zone asia-east1-a --machine-type f1-micro
I set the following ACL permission.
# gsutil acl ch -r -u <project-number>@developer.gserviceaccount.com:R gs://asia.artifacts.<project-id>.appspot.com
But an Access denied error occurs when docker pulls the image from Google Container Registry.
# docker pull asia.gcr.io/<project-id>.a/amazonssh
Pulling repository asia.gcr.io/<project-id>.a/amazonssh
FATA[0000] Error: Status 403 trying to pull repository <project-id>/amazonssh: "Access denied."
Can you verify from your instance that you can read data from your Google Cloud Storage bucket? This can be verified by:
$ curl -H 'Metadata-Flavor: Google' $SVC_ACCT/scopes
...
https://www.googleapis.com/auth/devstorage.full_control
https://www.googleapis.com/auth/devstorage.read_write
https://www.googleapis.com/auth/devstorage.read_only
...
If so, then try the following. On Google Compute Engine you can log in without gcloud:
$ METADATA=http://metadata.google.internal./computeMetadata/v1
$ SVC_ACCT=$METADATA/instance/service-accounts/default
$ ACCESS_TOKEN=$(curl -H 'Metadata-Flavor: Google' $SVC_ACCT/token \
| cut -d'"' -f 4)
$ docker login -e not@val.id -u _token -p $ACCESS_TOKEN https://gcr.io
Then try your docker pull command again.
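Since your image lives under asia.gcr.io, you may also need to repeat the same login against that regional endpoint (a sketch, assuming the same instance token is accepted there):
$ docker login -e not@val.id -u _token -p $ACCESS_TOKEN https://asia.gcr.io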
You have an extra .a after project-id here, not sure if you ran the command that way?
docker pull asia.gcr.io/<project-id>.a/amazonssh
The container-vm has a cron job running gcloud docker -a as root, so you should be able to docker pull as root.
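For example, as root on the VM (a sketch, relying on the default service account credentials already present on the instance):
# gcloud docker -a
# docker pull asia.gcr.io/<project-id>/amazonssh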
The kubelet, which launches the container-vm Docker containers, also understands how to natively authenticate with GCR, so it should just work.
Feel free to reach out to us at gcr-contact@google.com. It would be useful if you could include your project-id, and possibly the /var/log/kubelet.log from your container-vm.
Related
I would like to deploy imgproxy to AWS using Fargate to serve different sizes/formats of images from an S3 bucket, ideally also behind CloudFront.
Imgproxy has a Docker image:
docker pull darthsim/imgproxy:latest
docker run -p 8080:8080 -it darthsim/imgproxy
and serving from S3 is supported, e.g.:
docker run -p 8080:8080 -e AWS_ACCESS_KEY_ID=XXXX -e AWS_SECRET_ACCESS_KEY=YYYYYYXXX -e IMGPROXY_USE_S3=true -e IMGPROXY_S3_REGION=us-east-1 -it darthsim/imgproxy
Deploy with Fargate
I followed the Fargate wizard and chose "Custom"
The container
I set up the container as follows, using the imgproxy Docker image and mapping port 8080, which I think is the one it usually runs on.
In the advanced section, I set the command as
docker run -p 8080:8080 -e IMGPROXY_USE_S3=true -e IMGPROXY_S3_REGION=us-east-1 -it darthsim/imgproxy
The task
I left this as the defaults:
The service
For the service, I chose to use a load balancer:
The results
After waiting for the launch to complete, I went to the load balancer and copied the DNS name:
http://.us-east-1.elb.amazonaws.com:8080/
But I got 503 Service Temporarily Unavailable
It seems the task failed to start
Status reason CannotStartContainerError: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "docker run -p 8080:8080 -e IMGPROXY_USE_S3=true -e IMGPROXY_S3_REGION=us-east-1 -it darthsim/imgproxy": st
Entry point ["docker run -p 8080:8080 -e IMGPROXY_USE_S3=true -e IMGPROXY_S3_REGION=us-east-1 -it darthsim/imgproxy"]
Command ["docker run -p 8080:8080 -e IMGPROXY_USE_S3=true -e IMGPROXY_S3_REGION=us-east-1 -it darthsim/imgproxy"]
Help
Initially, I'm looking to figure out how to get this deployed in basic form. Maybe I need to do more with IAM roles so it doesn't need the AWS creds? Or maybe something in the config was not right?
Then I'd also like to figure out how to bring CloudFront into the picture too.
Turns out I was overcomplicating this.
The CMD and ENTRYPOINT can be left blank.
I then simply set the environment variables (a sketch of how they end up in the task definition follows this list):
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
IMGPROXY_S3_REGION
IMGPROXY_USE_S3 true
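For reference, this is roughly how those settings end up in the task definition if you register it from the CLI instead of the wizard (a sketch only; the family name, role ARN, and placeholder values are assumptions):
aws ecs register-task-definition --cli-input-json '{
  "family": "imgproxy",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::<account-id>:role/ecsTaskExecutionRole",
  "containerDefinitions": [{
    "name": "imgproxy",
    "image": "darthsim/imgproxy:latest",
    "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
    "environment": [
      {"name": "AWS_ACCESS_KEY_ID", "value": "<id>"},
      {"name": "AWS_SECRET_ACCESS_KEY", "value": "<secret>"},
      {"name": "IMGPROXY_USE_S3", "value": "true"},
      {"name": "IMGPROXY_S3_REGION", "value": "us-east-1"}
    ]
  }]
}'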
After waiting for the task to go from PENDING to RUNNING, I can copy the DNS name of the load balancer and be greeted by the imgproxy "hello" page.
The IAM Role vs creds
I didn't get this working via an IAM role for the task. I tried giving the ecsTaskExecutionRole s3 read permissions, but in the absence of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the container environment imgproxy complained about missing creds.
In the end I just created a user with an s3 policy allowing read access to the relevant s3 bucket and copied the id and access key to the environment as per above.
If anyone knows how to get an IAM role working that would be nice to know.
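I haven't verified this with imgproxy, but the usual pattern is to attach the S3 permissions to a separate task role (taskRoleArn), which is what the application's AWS SDK uses, rather than to ecsTaskExecutionRole, which is only used to pull the image and ship logs. A sketch with hypothetical names:
# create a role that ECS tasks can assume (name is hypothetical)
aws iam create-role --role-name imgproxyTaskRole \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ecs-tasks.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
# give it read access (a broad managed policy used here purely for brevity)
aws iam attach-role-policy --role-name imgproxyTaskRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
# then set "taskRoleArn": "arn:aws:iam::<account-id>:role/imgproxyTaskRole" in the
# task definition and drop the AWS_* env vars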
CloudFront
This was just a case of setting the CloudFront origin to be the load balancer for the cluster and setting its HTTP port to 8080 to match imgproxy.
Signed URLs
Just add the following to the environment variables:
IMGPROXY_KEY
IMGPROXY_SALT
and they can be generated with echo $(xxd -g 2 -l 64 -p /dev/random | tr -d '\n').
After setting these, the simple /insecure URL will not work.
In Python the signed url can be generated from the imgproxy example code. Note that here the url on line 11 should be the s3 url for the image, e.g "s3://somebucket/art/1.png". And you need to replace the key and salt with the hex encoded ones from the ECS environment.
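If you'd rather not use Python, here is a rough shell equivalent of the signing step (a sketch; the processing options, bucket path, and load balancer host are placeholders), following imgproxy's documented scheme of HMAC-SHA256 over salt + path with URL-safe base64:
KEY=<hex key>      # same value as IMGPROXY_KEY
SALT=<hex salt>    # same value as IMGPROXY_SALT
# base64url-encode the source URL and build the path
ENCODED_URL=$(printf '%s' "s3://somebucket/art/1.png" | base64 | tr '+/' '-_' | tr -d '=')
IMG_PATH="/rs:fit:300:300/${ENCODED_URL}.png"
# signature = base64url( HMAC-SHA256(key, salt || path) )
SIGNATURE=$( { printf '%s' "$SALT" | xxd -r -p; printf '%s' "$IMG_PATH"; } \
  | openssl dgst -sha256 -mac HMAC -macopt hexkey:"$KEY" -binary \
  | base64 | tr '+/' '-_' | tr -d '=' )
echo "http://<load-balancer-dns>:8080/${SIGNATURE}${IMG_PATH}"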
I have created a WordPress service using Cloud Run. I deployed it using the command below.
gcloud beta run deploy wp --image gcr.io/<project>/wp:v1 \
--add-cloudsql-instances <project>:us-central1:mysql2 \
--update-env-vars DB_HOST='127.0.0.1',DB_NAME=mysql2,DB_USER=wordpress,DB_PASSWORD=password,CLOUDSQL_INSTANCE='<project>:us-central1:mysql2'
The service is deployed fine, but when trying to access it, it shows the error below:
<h1>Error: Forbidden</h1>
<h2>Your client does not have permission to get URL <code>/</code> from this server.</h2>
UPDATES:
The Dockerfile is as follows. I am following this:
https://github.com/acadevmy/cloud-run-wordpress
FROM wordpress:5.2.1-php7.3-apache
EXPOSE 80
# Use the PORT environment variable in Apache configuration files.
RUN sed -i 's/80/${PORT}/g' /etc/apache2/sites-available/000-default.conf /etc/apache2/ports.conf
# wordpress conf
COPY wordpress/wp-config.php /var/www/html/wp-config.php
# download and install cloud_sql_proxy
RUN apt-get update && apt-get -y install net-tools wget && \
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O /usr/local/bin/cloud_sql_proxy && \
chmod +x /usr/local/bin/cloud_sql_proxy
COPY wordpress/cloud-run-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["/usr/local/sbin/apache2ctl -D FOREGROUND"]
##docker-entrypoint.sh
#!/usr/bin/env bash
# Start the sql proxy
cloud_sql_proxy -instances=$CLOUDSQL_INSTANCE=tcp:3306 &
# Execute the rest of your ENTRYPOINT and CMD as expected.
The following can be seen in the console log.
We allowed unauthenticated invocations, and now the error is
"Error establishing a database connection"
Additional Updates:
The DB is running with a private IP, so I am using Serverless VPC Access.
DB information is as follows:
gcloud sql instances list
NAME DATABASE_VERSION LOCATION TIER PRIMARY_ADDRESS PRIVATE_ADDRESS STATUS
mysql2 MYSQL_5_7 us-central1-b db-f1-micro - 10.0.100.5 RUNNABLE
This is the Serverless VPC Access connector and its range:
testserverlessvpc kube-shared-vpc us-central1 192.168.60.0/28 200 300
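For reference, a connector with those values would typically have been created with something like this (a sketch; run in the host project, flag values taken from the listing above):
gcloud compute networks vpc-access connectors create testserverlessvpc \
  --network kube-shared-vpc \
  --region us-central1 \
  --range 192.168.60.0/28 \
  --min-throughput 200 \
  --max-throughput 300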
Now I have added an additional parameter, as shown below, with both the gcloud run deploy and gcloud run services commands
--vpc-connector projects/< HOST-Project >/locations/us-central1/connectors/testserverlessvpc
But during gcloud run deploy it is failing with below error
⠏ Deploying new service... Internal system error, system will retry.
Following this link, I can create a pod whose service account's role can access AWS resources, so the pod can access them too.
Then, inspired by this EKS-Jenkins-Workshop, I changed the workshop a little bit. I want to deploy a Jenkins pipeline that can create a pod whose service account's role can access AWS resources, but the problem is that the CDK code in this pod cannot access AWS resources. (I wrote the CDK code to access AWS resources, following the guide Your first AWS CDK app: https://docs.aws.amazon.com/cdk/latest/guide/hello_world.html)
This is my Jenkinsfile
pipeline {
  agent {
    kubernetes {
      yaml """
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-agent
  namespace: default
spec:
  serviceAccountName: jenkins
  containers:
  - name: node-yuvein
    image: node
    command:
    - cat
    tty: true
"""
    }
  }
  stages {
    stage('Build') {
      steps {
        container('node-yuvein') {
          dir('hello-cdk') {
            sh "pwd"
            sh 'npm --version'
            sh 'node -v'
            sh 'npm install -g typescript'
            sh 'npm install -g aws-cdk'
            sh 'npm install @aws-cdk/aws-s3'
            sh 'npm run build'
            sh 'cdk deploy'
          }
        }
      }
    }
  }
}
When I run the pipeline, it has this error:
User: arn:aws:sts::450261875116:assumed-role/eksctl-eksworkshop-eksctl3-nodegr-NodeInstanceRole-1TCVDYSM1QKSO/i-0a4df3778517df0c6 is not authorized to perform: cloudformation:DescribeStacks on resource: arn:aws:cloudformation:us-west-2:450261875116:stack/HelloCdkStack/*
I am a beginner with K8s, Jenkins, and CDK.
Hope someone can help me.
Thanks a lot.
Further Debugging:
In the Jenkins console, I can see serviceAccountName: "jenkins", and the name of my service account in EKS is jenkins.
The pod also gets the correct env vars:
+ echo $AWS_ROLE_ARN
arn:aws:iam::450261875116:role/eksctl-eksworkshop-eksctl3-addon-iamservicea-Role1-YYYFXFS0J4M2
+ echo $AWS_WEB_IDENTITY_TOKEN_FILE
/var/run/secrets/eks.amazonaws.com/serviceaccount/token
The Node.js and npm I installed are the latest versions.
+ npm --version
6.14.8
+ node -v
v14.13.0
+ aws sts get-caller-identity
{
"UserId": "AROAWRVNS7GWO5C7QJGRF:botocore-session-1601436882",
"Account": "450261875116",
"Arn": "arn:aws:sts::450261875116:assumed-role/eksctl-eksworkshop-eksctl3-addon-iamservicea-Role1-YYYFXFS0J4M2/botocore-session-1601436882"
}
When I run this command, it shows my service account's role, but I still get the original error.
Jenkins podTemplate has a serviceAccount option:
https://github.com/jenkinsci/kubernetes-plugin#pod-and-container-template-configuration
Create an IAM role mapped to an EKS cluster
Create a ServiceAccount mapped to an IAM role (an eksctl sketch covering both of these steps follows this list)
Pass ServiceAccount name to a podTemplate
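The first two steps can be done in one go with eksctl (a sketch; the cluster name is a placeholder, and the attached policy is only an example broad enough for cdk deploy's CloudFormation calls):
eksctl create iamserviceaccount \
  --cluster <cluster-name> \
  --namespace default \
  --name jenkins \
  --attach-policy-arn arn:aws:iam::aws:policy/AWSCloudFormationFullAccess \
  --approve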
Further debugging:
Ensure the pod has correct service account name.
Check if pod got AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE env vars (they are added automatically).
Check if AWS SDK you use is above the minimal version: https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts-minimum-sdk.html
Run aws sts get-caller-identity to see the role, don't waste time on running an actual job.
When working with Jenkins slaves, one needs to customize the container images to use AWS CLI v2 instead of AWS CLI v1. I was running into authorization errors like the ones the question describes; my client was using the cluster node role instead of the assumed web identity role of the service account attached to my Jenkins pods for the slave containers.
Apparently V2 of the AWS CLI includes the web identity token file as part of the default credentials chain whereas V1 does not.
Here's a sample Dockerfile that pulls the latest AWS CLI version so this pattern works.
FROM jenkins/inbound-agent
# run updates as root
USER root
# Create docker group
RUN addgroup docker
# Update & Upgrade OS
RUN apt-get update
RUN apt-get -y upgrade
#install python3
RUN apt-get -y install python3
# add AWS Cli version 2 for web_identity_token files
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
# Add Maven
RUN apt-get -y install maven --no-install-recommends
# Add docker
RUN curl -sSL https://get.docker.com/ | sh
RUN usermod -aG docker jenkins
# Add docker compose
RUN curl -L "https://github.com/docker/compose/releases/download/1.26.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
RUN chmod +x /usr/local/bin/docker-compose
# Delete cached files we don't need anymore:
RUN apt-get clean
RUN rm -rf /var/lib/apt/lists/*
# close root access
USER jenkins
Further, I had to make sure my serviceaccount was created and attached to both the Jenkins master image and the jenkins slaves. This can be accomplished via Manage Jenkins -> Manage Nodes and Clouds -> Configure Clouds -> Pod Template Details.
Be sure to edit Namespace and Serviceaccount fields with the appropriate values.
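To confirm the slave pods actually pick these up, a quick check from outside the pod (a sketch; the pod name is a placeholder):
kubectl get pod <jenkins-agent-pod> -o jsonpath='{.spec.serviceAccountName}{"\n"}'
kubectl exec <jenkins-agent-pod> -- env | grep -E 'AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE'
kubectl exec <jenkins-agent-pod> -- aws sts get-caller-identity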
I need to connect to a GKE Kubernetes cluster from a GitLab runner, but I don't want to use the Auto DevOps feature; I would like to set all of this up on my own. So, basically, I would like to install the gcloud SDK on a GitLab runner, then set the gcloud account to my service account, authorize with the generated key, and finally run the "gcloud container clusters get-credentials ..." command to get a valid Kubernetes config, so I can interact with the cluster.
Interestingly, I tried to perform the entire procedure on my local machine using Docker with the same image, and it works there! It does not work only on the GitLab runner. The only difference is that the GitLab runner is running not with the Docker executor but with the Kubernetes one (on the same k8s cluster I want to interact with).
So the working case is:
$ winpty docker run -it --entrypoint=sh lachlanevenson/k8s-kubectl:latest
# apk add python
# wget https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz
# tar zxvf google-cloud-sdk.tar.gz && ./google-cloud-sdk/install.sh --usage-reporting=false --path-update=true > /dev/null
# PATH="google-cloud-sdk/bin:${PATH}"
# gcloud config set account <my-service-account>
# gcloud auth activate-service-account --key-file=key.json --project=<my_project>
# gcloud container clusters get-credentials cluster1 --zone europe-west2-b --project <my_project>
# kubectl get all
but when I try to do the same with the GitLab runner:
.gitlab-ci.yml:
deployment_be:
  image: lachlanevenson/k8s-kubectl:latest
  stage: deploy
  only:
    - master
  tags:
    - kubernetes
  before_script:
    - apk add python
  script:
    # Download and install Google Cloud SDK
    - wget https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz
    - tar zxvf google-cloud-sdk.tar.gz && ./google-cloud-sdk/install.sh --usage-reporting=false --path-update=true
    - PATH="google-cloud-sdk/bin:${PATH}"
    # Authorize with service account and fetch k8s config file
    - gcloud config set account <my_service_account>
    - gcloud auth activate-service-account --key-file=key.json --project=<my_project>
    - gcloud container clusters get-credentials cluster1 --zone europe-west2-b --project <my_project>
    # Interact with kubectl
    - kubectl get all
I get the following error:
$ gcloud config set account <my_service_account>
Updated property [core/account].
$ gcloud auth activate-service-account --key-file=key.json --project=<my_project>
Activated service account credentials for: [<my_service_account>]
$ gcloud container clusters get-credentials cluster1 --zone europe-west2-b --project <my_project>
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Request had insufficient authentication scopes.
ERROR: Job failed: command terminated with exit code 1
I tried to set all possible roles for this service account, including: Compute Administrator,
Kubernetes Engine Administrator,
Kubernetes Engine Clusters Administrator,
Container Administrator,
Editor,
Owner
Why does this service account work fine in an isolated Docker image but fail with the same image launched on the Kubernetes cluster?
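One thing that may help narrow it down (a sketch, not a confirmed fix): print which credentials gcloud actually ends up using inside the runner job, right before the failing call, by adding to the script: section:
    - gcloud auth list
    - gcloud config get-value account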
I am trying to download an image from the Google Container Registry on a CoreOS machine running on another server (not GCE).
I configured a new service account:
core@XXXX ~ $ docker run -t -i -v $(pwd)/keys:/tmp/keys --name gcloud-config ernestoalejo/google-cloud-sdk-with-docker gcloud auth activate-service-account XXXXXXX@developer.gserviceaccount.com --key-file /tmp/keys/key.p12 --project XXXX
Activated service account credentials for: [XXXXXXX@developer.gserviceaccount.com]
The account is active, but when I try to download the container image it returns a forbidden HTTP status.
core@XXXX ~ $ /usr/bin/docker run --volumes-from gcloud-config --rm -v /var/run/docker.sock:/var/run/docker.sock ernestoalejo/google-cloud-sdk-with-docker sh -c "gcloud preview docker pull gcr.io/XXXXX/influxdb"
Pulling repository gcr.io/XXXXX/influxdb
time="2015-05-08T06:38:55Z" level="fatal" msg="HTTP code: 403"
ERROR: (gcloud.preview.docker) A Docker command did not run successfully.
Tried to run: 'docker pull gcr.io/XXXXX/influxdb'
Exit code: 1
There is only one account on the server, and it is correctly configured:
core@XXXX ~ $ /usr/bin/docker run --volumes-from gcloud-config --rm -v /var/run/docker.sock:/var/run/docker.sock ernestoalejo/google-cloud-sdk-with-docker sh -c "gcloud auth list"
To set the active account, run:
$ gcloud config set account ``ACCOUNT''
Credentialed accounts:
- XXXXXXXXXXXXX@developer.gserviceaccount.com (active)
How can I authorize the external machine to download images from the registry?
NOTE: The image ernestoalejo/google-cloud-sdk-with-docker is the same as google/cloud-sdk but with this issue fixed.
UPDATE: I have also tried the solution of this answer, but it makes no difference.
PROJECT_ID=XXXXXX
ROBOT=XXXXXX@developer.gserviceaccount.com
gsutil acl ch -u $ROBOT:R gs://artifacts.$PROJECT_ID.appspot.com
gsutil -m acl ch -R -u $ROBOT:R gs://artifacts.$PROJECT_ID.appspot.com
gsutil defacl ch -u $ROBOT:R gs://artifacts.$PROJECT_ID.appspot.com
It seems that the new Frankfurt region of DigitalOcean can't access the Google Container Registry at all. It always returns a 403 Forbidden. As soon as I used a server in London, everything started working.