Unable to push Helm Chart to Google Cloud Artifact Registry using OCI

I'm trying to push a helm chart to Google Cloud OCI registry (Artifact Registry) but I get forbidden error:
helm push testapp-1.0.0.tgz oci://europe-north1-docker.pkg.dev/project-id/my-artifact-registry/
Error: failed to authorize: failed to fetch anonymous token:
unexpected status: 403 Forbidden
It seems that I'm authenticated OK, since when I try the push without the "oci://" prefix it works fine:
helm chart push europe-north1-docker.pkg.dev/project-id/my-artifact-registry/charts/testapp:1.0.0
The push refers to repository [europe-north1-docker.pkg.dev/..]
ref: europe-north1-docker.pkg.dev/...
digest: 2757354aef8af2db48261d52c17c0df35a99d6fccaf016b0e67e167c391b69c7
size: 3.9 KiB
name: testapp
version: 1.0.0
1.0.0: pushed to remote (1 layer, 3.9 KiB total)
I logged in to the Helm registry using a service account JSON key, with the command below:
helm registry login -u _json_key_base64 --password <base_64_key> https://europe-north1-docker.pkg.dev
and this service account has the roles below:
roles/artifactregistry.admin
roles/artifactregistry.repoAdmin
roles/artifactregistry.writer
roles/container.developer
roles/storage.admin
roles/storage.objectViewer
Is there any specific permission that needs to be enabled in GCP to use the "OCI" protocol?
Or does any service need to be enabled?
Or is different authentication required?
I followed the instructions here, but with no success.

It's funny, but this is not the first time this has happened to me... once I submit the question to Stack Overflow, something hits me and I'm able to find the problem with my issue!
Anyway, the problem is basically with the authentication: the URL to log in to should be in the format of:
https://LOCATION-docker.pkg.dev/PROJECT/REPOSITORY
like this:
helm registry login -u _json_key_base64 --password <base_64_key> \
https://europe-north1-docker.pkg.dev/project-id/my-artifact-registry
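As a side note, an alternative that avoids base64-encoding the key is to log in with a short-lived access token (a hedged sketch, assuming the gcloud CLI is installed and authenticated; the repository URL is the same as above):
gcloud auth print-access-token | \
  helm registry login -u oauth2accesstoken --password-stdin \
  https://europe-north1-docker.pkg.dev/project-id/my-artifact-registry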

I faced the same issue, but using Cloud Build.
I'm glad if this snippet of code can help someone.
steps:
  - name: 'alpine/helm:3.9.1'
    id: 'helm package'
    args: ['package', '.']
  - name: 'alpine/helm:3.9.1'
    id: 'helm push'
    env:
      - 'HELM_REGISTRY_CONFIG=../builder/home/.docker/config.json'
    entrypoint: 'sh'
    args:
      - '-c'
      - |
        helm push --debug mylibchart-*.tgz oci://europe-west3-docker.pkg.dev/$PROJECT_ID/helm-registry
Basically, in the step where we want to push our *.tgz, we need to set the env var HELM_REGISTRY_CONFIG to the default path of the Docker config.json.

This is kinda stupid, but I was transitioning from Container Registry to Artifact Registry and I forgot to give my service account permissions for Artifact Registry, which resulted in this exact error.
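For reference, the missing grant can be added with a single IAM binding (a sketch; PROJECT_ID and SA_EMAIL are placeholders for your own project and service account):
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_EMAIL" \
  --role="roles/artifactregistry.writer"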

Related

Gitlab Cloud run deploy successfully but Job failed

I'm having an issue with my CI/CD pipeline: it deploys successfully to GCP Cloud Run, but on the GitLab dashboard the status is failed.
I tried replacing the images with some other Docker images, but it fails as well.
# File: .gitlab-ci.yml
image: google/cloud-sdk:alpine

deploy_int:
  stage: deploy
  environment: integration
  only:
    - integration # This pipeline stage will run on this branch alone
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json # Google Cloud service accounts
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild_int.yaml
# File: cloudbuild_int.yaml
steps:
  # build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '--build-arg', 'APP_ENV=int', '-t', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '.']
  # push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/tpdropd-int-front']
  # deploy to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'tpd-front', '--image', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '--region', 'us-central1', '--platform', 'managed', '--allow-unauthenticated']
GitLab build output:
ERROR: (gcloud.builds.submit)
The build is running, and logs are being written to the default logs bucket.
This tool can only stream logs if you are Viewer/Owner of the project and, if applicable, allowed by your VPC-SC security policy.
The default logs bucket is always outside any VPC-SC security perimeter.
If you want your logs saved inside your VPC-SC perimeter, use your own bucket.
See https://cloud.google.com/build/docs/securing-builds/store-manage-build-logs.
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1
I fixed it by using:
options:
  logging: CLOUD_LOGGING_ONLY
in cloudbuild.yaml
Alternatively, you can use this workaround:
fix it by giving the Viewer role to the service account running the build, but this feels like granting too much permission for such a task.
This worked for me: Use --suppress-logs
gcloud builds submit --suppress-logs --tag=<my-tag>
To fix the issue, you just need to create a bucket in your project (by default, without public access) and add the role 'Storage Admin' to your user or service account via https://console.cloud.google.com/iam-admin/iam
After that, you can point gcloud builds submit at the new bucket via the parameter --gcs-log-dir gs://YOUR_NEW_BUCKET_NAME_HERE, like this:
gcloud builds submit --gcs-log-dir gs://YOUR_NEW_BUCKET_NAME_HERE ...(other parameters here)
We need a new bucket because the default bucket for logs is global (cross-project). That's why it has specific security requirements for access, especially from outside Google Cloud (GitLab, Azure DevOps, and so on) via service accounts.
(Moreover, in this case you don't need to turn off logging via --suppress-logs.)
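For completeness, the bucket and the role grant can also be done from the CLI (a sketch with placeholder names; PROJECT_ID, SA_EMAIL, and gs://my-build-logs are stand-ins for your own project, service account, and bucket):
gsutil mb -p PROJECT_ID gs://my-build-logs
gsutil iam ch serviceAccount:SA_EMAIL:roles/storage.admin gs://my-build-logs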
Kevin's answer worked like magic for me; since I am not able to comment, I am writing this new answer.
Initially I was facing the same issue, where in spite of the gcloud builds submit command passing, my GitLab CI was failing.
Below is the cloudbuild.yaml file where I added the logging option as Kevin suggested.
steps:
  - name: gcr.io/cloud-builders/gcloud
    entrypoint: 'bash'
    args: ['run_query.sh', '${_SCRIPT_NAME}']
options:
  logging: CLOUD_LOGGING_ONLY
Check this document for details: https://cloud.google.com/build/docs/build-config-file-schema#options
The options solution mentioned by @Kevin worked for me too. Just add the parameter to the cloudbuild.yml file as mentioned before:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
options:
  logging: CLOUD_LOGGING_ONLY

Is there a way to pass github repository credentials storing helm charts at install?

I've followed this guide: https://dev.to/jamiemagee/how-to-host-your-helm-chart-repository-on-github-3kd to set up a private GitHub Helm chart repository. Everything is working fine: the GitHub Actions scripts are testing the docs and releases are being made, but at install time it fails with a 404.
➜ ~ helm install --devel -f thor.job-export.values.yaml job-export sample/thor --debug
install.go:172: [debug] Original chart version: ""
install.go:174: [debug] setting version to >0.0.0-0
Error: failed to fetch https://github.com/myuser/helm-charts/releases/download/thor-0.1.4/thor-0.1.4.tgz : 404 Not Found
My understanding is that Helm cannot fetch the *.tgz files because it is not using my GitHub credentials.
I added the private GitHub repo with this command:
helm repo add sample 'https://mytoken@raw.githubusercontent.com/myuser/helm-charts/gh-pages/'
which does seem able to gather the chart info (as seen in the error above).
Is there a way to pass the GitHub credentials at install time of the chart? I tried this:
helm install -f thor.job-export.values.yaml --password mytoken --username myuser job-export sample/thor --debug
but it fails with the same error. So, is there any way to pass the GitHub repo user and password to Helm at chart install?
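One avenue worth trying (an untested sketch, assuming Helm 3.6.1 or later, where the --pass-credentials flag exists): the chart index lives on raw.githubusercontent.com, but the release .tgz files are served from github.com, and by default Helm only sends repo credentials to the repo's own host. Registering the credentials on the repo entry and allowing them to be passed to other domains may help:
helm repo add sample 'https://raw.githubusercontent.com/myuser/helm-charts/gh-pages/' \
  --username myuser --password mytoken --pass-credentials
helm repo update
helm install --devel -f thor.job-export.values.yaml job-export sample/thor --debug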

Google Cloud Build error no project active

I am trying to set up Google Cloud Build with a really simple project hosted on Firebase, but every time it reaches the deploy stage it tells me:
Error: No project active, but project aliases are available.
Step #2: Run firebase use <alias> with one of these options:
ERROR: build step 2 "gcr.io/host-test-xxxxx/firebase" failed: step exited with non-zero status: 1
I have set the alias to production and my .firebaserc is:
{
  "projects": {
    "default": "host-test-xxxxx",
    "production": "host-test-xxxxx"
  }
}
I have Firebase Admin and API Keys Admin permissions on my Cloud Build service account, and since I also want to encrypt, I have Cloud KMS CryptoKey Decrypter as well.
I run
firebase login:ci
to generate a token in my terminal and paste it into my .env variable, then generate an alias called production and run
firebase use production
My yaml is:
steps:
  # Install
  - name: 'gcr.io/cloud-builders/npm'
    args: ['install']
  # Build
  - name: 'gcr.io/cloud-builders/npm'
    args: ['run', 'build']
  # Deploy
  - name: 'gcr.io/host-test-xxxxx/firebase'
    args: ['deploy']
and install and build work fine. What is happening here?
Rerunning firebase init does not seem to help.
Update:
building locally and then running firebase deploy does not help either.
OK, the thing that worked was changing the .firebaserc file to:
{
  "projects": {
    "default": "host-test-xxxxx"
  }
}
and running
firebase use --add
to add an alias called default.
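For what it's worth, an untested alternative (the Firebase CLI accepts an explicit --project flag) is to pin the project in the deploy step itself, so no alias lookup is needed:
# Deploy, naming the project instead of relying on an alias
- name: 'gcr.io/host-test-xxxxx/firebase'
  args: ['deploy', '--project', 'host-test-xxxxx']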

How to solve insufficient authentication scopes when use Pubsub on GCP

I'm trying to build 2 microservices (in Java Spring Boot) that communicate with each other using GCP Pub/Sub.
First, I tested the programs (in Eclipse) working as expected on my local laptop (http://localhost), i.e. one microservice published the message and the other received it successfully, using the topic/subscriber created in GCP (as well as the credential private key: mypubsub.json).
Then, I deployed the same programs to run on GCP, and got the following errors:
- 2020-03-21 15:53:16.831 WARN 1 --- [bsub-publisher2] o.s.c.g.p.c.p.PubSubPublisherTemplate : Publishing to json-payload-sample-topic topic failed
- com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes. at com.google.api.gax.rpc.ApiExceptionFactory
What I did to deploy the programs (in containers) to run on GCP/Kubernetes Engine:
Log in to the Cloud Shell after switching to my project for the Pub/Sub testing
Git clone my programs which were tested in Eclipse
Move the mypubsub.json file to under /home/my_user_id
export GOOGLE_APPLICATION_CREDENTIALS="/home/my_user_id/mp6key.json"
Run 'mvn clean package' to build the microservice programs
Run 'docker build' to create the image files
Run 'docker push' to push the image files to the gcr.io repo
Run 'kubectl create' to create the deployments and expose the services
Once the 2 microservices were deployed and exposed, when I tried to access them in a browser, the one publishing a message worked fine retrieving data from the database and processing the data, then failed with the above errors when trying to access the GCP Pub/Sub API to publish the message.
Could anyone provide a hint on what to check to solve the issue?
The issue has been resolved by following the guide:
https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform
Briefly, the solution is to add the following lines to the deployment.yaml to load the credential key:
volumes:
  - name: google-cloud-key
    secret:
      secretName: pubsub-key
containers:
  - name: my_container
    image: gcr.io/my_image_file
    volumeMounts:
      - name: google-cloud-key
        mountPath: /var/secrets/google
    env:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /var/secrets/google/key.json
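This assumes the pubsub-key secret already exists in the cluster; per the linked tutorial it is created from the downloaded key file (the path below is a placeholder):
kubectl create secret generic pubsub-key \
  --from-file=key.json=/path/to/mypubsub.json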
Try explicitly providing a CredentialsProvider to your Publisher class; I faced the same authentication issue.
This approach worked for me:
import com.google.api.gax.core.CredentialsProvider;
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.ServiceAccountCredentials;
import com.google.cloud.pubsub.v1.Publisher;

// Build a provider from the bundled service account key and pin it to the publisher
CredentialsProvider credentialsProvider = FixedCredentialsProvider.create(
    ServiceAccountCredentials.fromStream(
        PubSubUtil.class.getClassLoader().getResourceAsStream("key.json")));
Publisher publisher = Publisher.newBuilder(topicName)
    .setCredentialsProvider(credentialsProvider)
    .build();
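One caveat with getResourceAsStream: key.json has to be on the classpath of the packaged service (for example under src/main/resources), otherwise the stream will be null.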

Unable to pull docker image into Kubernetes Pod from Google Container Registry

I have read this question and this one, and created my Kubernetes secret for Google Container Registry using a service account JSON key with project: owner and viewer permissions. I have also verified that the image does in fact exist in Google Container Registry by going to the console.
I have also read this document.
When I run:
minikube dashboard
And then, from the user interface, I click the "+" symbol and specify the URL of my image like this:
project-123456/bot-image
then click on 'advanced options' and specify the Secret that was imported.
After a few seconds I see this error:
Error: Status 403 trying to pull repository project-123456/bot-image: "Unable to access the repository: project-123456/bot-image; please verify that it exists and you have permission to access it (no valid credential was supplied)."
If I look at what's inside the Secret file (.dockerconfigjson), it's like:
{"https://us.gcr.io": {"email": "admin@domain.com", "auth": "longtexthere"}}
What could be the issue?
The JSON needs to have a top-level "auths" key, as explained in:
Creating image pull secret for google container registry that doesn't expire?
So the JSON should be structured like:
{"auths": {"https://us.gcr.io": {"email": "admin@domain.com", "auth": "longtexthere"}}}
If you are still having issues, you can alternatively download the latest version of minikube (0.17.1) and run
minikube addons configure registry-creds
following the prompts there to set up creds,
then run minikube addons enable registry-creds
Now you should be able to pull pods down from GCR using a YAML structured like this:
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: default
spec:
  containers:
    - image: gcr.io/example-vm/helloworld:latest
      name: foo
EDIT: 6/13/2018 updating the commands to reflect the comment by @Rambatino