How to solve insufficient authentication scopes when using Pub/Sub on GCP - google-cloud-platform

I'm trying to build 2 microservices (in Java Spring Boot) that communicate with each other using GCP Pub/Sub.
First, I tested the programs (in Eclipse) on my local laptop (http://localhost) and they worked as expected, i.e. one microservice published a message and the other received it successfully, using the Topic/Subscription created in GCP (as well as the credential private key: mypubsub.json).
Then, I deployed the same programs to run on GCP and got the following errors:
- 2020-03-21 15:53:16.831 WARN 1 --- [bsub-publisher2] o.s.c.g.p.c.p.PubSubPublisherTemplate : Publishing to json-payload-sample-topic topic failed
- com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes. at com.google.api.gax.rpc.ApiExceptionFactory
What I did to deploy the programs (in containers) to run on GCP/Kubernetes Engine:
Log in to Cloud Shell after switching to my project for the Pub/Sub testing
Git clone my programs that were tested in Eclipse
Move the mypubsub.json file to under /home/my_user_id
export GOOGLE_APPLICATION_CREDENTIALS="/home/my_user_id/mp6key.json"
Run 'mvn clean package' to build the microservice programs
Run 'docker build' to create the image files
Run 'docker push' to push the image files to gcr.io repo
Run 'kubectl create' to create the deployments and expose the services (a rough sketch of these commands follows below)
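(For reference only, a rough sketch of the build/push/deploy commands described above; the project, image and service names are placeholders, not the poster's actual values:)
mvn clean package
docker build -t gcr.io/MY_PROJECT/my-service:v1 .
docker push gcr.io/MY_PROJECT/my-service:v1
kubectl create deployment my-service --image=gcr.io/MY_PROJECT/my-service:v1
kubectl expose deployment my-service --type=LoadBalancer --port=80 --target-port=8080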
Once the 2 microservices were deployed and exposed, I tried to access them in a browser. The publishing one retrieved and processed data from the database fine, but then failed with the above errors when it tried to call the GCP Pub/Sub API to publish the message.
Could anyone provide a hint for what to check to solve the issue?

The issue has been resolved by following the guide:
https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform
Briefly, the solution is to add the following lines (under the Pod spec) to the deployment.yaml to load the credential key:
volumes:
- name: google-cloud-key
  secret:
    secretName: pubsub-key
containers:
- name: my_container
  image: gcr.io/my_image_file
  volumeMounts:
  - name: google-cloud-key
    mountPath: /var/secrets/google
  env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /var/secrets/google/key.json
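The pubsub-key Secret referenced above holds the service account key file; following the linked tutorial, it would be created beforehand with something like:
kubectl create secret generic pubsub-key --from-file=key.json=PATH-TO-KEY-FILE.json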

Try explicitly providing a CredentialsProvider to your Publisher; I faced the same authentication issue and this approach worked for me!
import com.google.api.gax.core.CredentialsProvider;
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.ServiceAccountCredentials;
import com.google.cloud.pubsub.v1.Publisher;

// Read the service account key from the classpath and pass it to the Publisher explicitly
CredentialsProvider credentialsProvider = FixedCredentialsProvider.create(
        ServiceAccountCredentials.fromStream(
                PubSubUtil.class.getClassLoader().getResourceAsStream("key.json")));
Publisher publisher = Publisher.newBuilder(topicName)
        .setCredentialsProvider(credentialsProvider)
        .build();
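Note that this approach assumes key.json is on the application classpath (for example under src/main/resources), which also means the key ends up baked into the container image; mounting it as a Kubernetes Secret, as in the deployment.yaml answer above, avoids that.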

Related

GCP ci/cd: skaffold cannot access private git repository using google cloud build

I'm trying to set up an automated CI/CD process with Google Cloud Platform.
I went through this instruction https://davelms.medium.com/automate-gke-deployments-using-cloud-build-and-cloud-deploy-2c15909ddf22 and everything works: I have a trigger in Cloud Build that points to a Cloud Build file, which uses skaffold for manifest rendering. It builds an image and deploys the app. All good.
But as we have a lot of apps, we want to keep the deploy configs in a separate repo. For that case, I see from the skaffold docs https://skaffold.dev/docs/references/yaml/?version=v2beta29#build-artifacts-docker-ssh that you can use configs like:
requires:
- configs: []
  git:
    repo: https://github.com/GoogleContainerTools/skaffold.git
    path: skaffold.yaml
    ref: main
    sync: true
This config works for a public repo, but for a private repo I get an error:
error parsing skaffold configuration file: caching remote dependency https://github.com/your_repo.git: failed to clone repo: running [/usr/bin/git clone https://github.com/your_repo.git ./P7akUPb6jdsgjfgTnOedB92BH8UE7 --branch main --depth 1]
" - stderr: "Cloning into './P7akUPb6jdsgjfgTnOedB92BH8UE7'...\nfatal: could not read Username for 'https://github.com': No such device or address\n""
Where or how could I add credentials for accessing the private repo?

Overwrite Cloud Build repository url when triggering a push

In GCP Cloud Build, I have a Bitbucket Data Center repository connected that uses a proxy: https://bitbucket-sv3.mydomain.com. However, the repo lives at https://bitbucket.mydomain.com.
When I push to the repo, my trigger, which uses the connected repo, starts a build by trying to clone the second domain, which the worker pool can't reach, so I get a connection error. I assume that the webhook created by gcloud is just sending the server's address and the Cloud Build machine is taking it from the payload. Is there some way I can force the build to use the sv3 address when it clones the source repo during the Fetch Source step?
Here's an example of my cloudbuild.yaml showing some of the things I've tried:
steps:
- name: gcr.io/cloud-builders/git
  env:
  - "_URL='https://bitbucket-sv3.mydomain.com/reponame.git'"
  - "_HEAD_REPO_URL='https://bitbucket-sv3.mydomain.com/reponame.git'"
  args: ['clone', 'https://bitbucket-sv3.mydomain.com/reponame.git']
An alternative would be to stop the FETCHSOURCE step, but I haven't found a way to do that.

Unable to push Helm Chart to Google Cloud Artifact Registry using OCI

I'm trying to push a Helm chart to the Google Cloud OCI registry (Artifact Registry), but I get a Forbidden error:
helm push testapp-1.0.0.tgz oci://europe-north1-docker.pkg.dev/project-id/my-artifact-registry/
Error: failed to authorize: failed to fetch anonymous token:
unexpected status: 403 Forbidden
It seems that I'm authenticated OK, since when I try to push it without "oci://" it works fine:
helm chart push europe-north1-docker.pkg.dev/project-id/my-artifact-registry/charts/testapp:1.0.0
The push refers to repository [europe-north1-docker.pkg.dev/..]
ref: europe-north1-docker.pkg.dev/...
digest: 2757354aef8af2db48261d52c17c0df35a99d6fccaf016b0e67e167c391b69c7
size:3.9 KiB
name: testapp
version: 1.0.0
1.0.0: pushed to remote (1 layer, 3.9 KiB total)
I logged in to the Helm registry using a service account JSON key, with the command below:
helm registry login -u _json_key_base64 --password <base_64_key> https://europe-north1-docker.pkg.dev
and this service account has the roles below:
roles/artifactregistry.admin
roles/artifactregistry.repoAdmin
roles/artifactregistry.writer
roles/container.developer
roles/storage.admin
roles/storage.objectViewer
Is there any specific permission that needs to be enabled in GCP to use the "OCI" protocol?
Or any service that needs to be enabled?
Or any different authentication required?
I followed the instructions here but with no success.
It's funny, but this is not the first time this has happened to me... once I submit the question to Stack Overflow, something hits me and I'm able to find the problem with my issue!!
Anyway, the problem is basically with the authentication: the URL to log in to should be in the format
https://LOCATION-docker.pkg.dev/PROJECT/REPOSITORY
like this:
helm registry login -u _json_key_base64 --password <base_64_key> \
https://europe-north1-docker.pkg.dev/project-id/my-artifact-registry
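(Side note, not part of the original fix: instead of a base64-encoded JSON key you can also log in with a short-lived access token, which avoids handling the key directly, e.g.:)
gcloud auth print-access-token | helm registry login -u oauth2accesstoken \
    --password-stdin https://europe-north1-docker.pkg.dev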
I faced the same issue, but using Cloud Build.
I am glad if this snippet of code can help someone.
steps:
- name: 'alpine/helm:3.9.1'
  id: 'helm package'
  args: ['package', '.']
- name: 'alpine/helm:3.9.1'
  id: 'helm push'
  env:
  - 'HELM_REGISTRY_CONFIG=../builder/home/.docker/config.json'
  entrypoint: 'sh'
  args:
  - '-c'
  - |
    helm push --debug mylibchart-*.tgz oci://europe-west3-docker.pkg.dev/$PROJECT_ID/helm-registry
Basically, in the step where we want to push our *.tgz, we need to set the env var HELM_REGISTRY_CONFIG to the default path of the Docker config.json.
This is kinda stupid, but I was transitioning from Container Registry to Artifact Registry and forgot to give my service account permissions on Artifact Registry, which resulted in this exact error.
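(For illustration only, an assumed example of granting that role; the project ID and service account e-mail are placeholders:)
gcloud projects add-iam-policy-binding MY_PROJECT \
    --member="serviceAccount:my-sa@MY_PROJECT.iam.gserviceaccount.com" \
    --role="roles/artifactregistry.writer"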

Gitlab Cloud run deploy successfully but Job failed

I'm having an issue with my CI/CD pipeline:
it deploys successfully to GCP Cloud Run, but on the GitLab dashboard the status is failed.
I tried replacing the image with some other Docker images, but it fails as well.
# File: .gitlab-ci.yml
image: google/cloud-sdk:alpine

deploy_int:
  stage: deploy
  environment: integration
  only:
    - integration # This pipeline stage will run on this branch alone
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json # Google Cloud service accounts
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild_int.yaml
# File: cloudbuild_int.yaml
steps:
  # build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'build', '--build-arg', 'APP_ENV=int', '-t', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '.' ]
  # push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'push', 'gcr.io/$PROJECT_ID/tpdropd-int-front' ]
  # deploy to Cloud Run
  - name: "gcr.io/cloud-builders/gcloud"
    args: ['run', 'deploy', 'tpd-front', '--image', 'gcr.io/$PROJECT_ID/tpdropd-int-front', '--region', 'us-central1', '--platform', 'managed', '--allow-unauthenticated']
GitLab build output:
ERROR: (gcloud.builds.submit)
The build is running, and logs are being written to the default logs bucket.
This tool can only stream logs if you are Viewer/Owner of the project and, if applicable, allowed by your VPC-SC security policy.
The default logs bucket is always outside any VPC-SC security perimeter.
If you want your logs saved inside your VPC-SC perimeter, use your own bucket.
See https://cloud.google.com/build/docs/securing-builds/store-manage-build-logs.
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
I fixed it by adding:
options:
  logging: CLOUD_LOGGING_ONLY
to cloudbuild.yaml.
Alternatively, you can use this workaround:
fix it by giving the Viewer role to the service account running the build, but this feels like granting too much permission for such a task.
This worked for me: Use --suppress-logs
gcloud builds submit --suppress-logs --tag=<my-tag>
To fix the issue, you just need to create a bucket in your project (by default, without public access) and grant the 'Storage Admin' role to your user or service account via https://console.cloud.google.com/iam-admin/iam
After that, you can point gcloud builds submit at the new bucket via the parameter --gcs-log-dir gs://YOUR_NEW_BUCKET_NAME_HERE like this:
gcloud builds submit --gcs-log-dir gs://YOUR_NEW_BUCKET_NAME_HERE ...(other parameters here)
We need a new bucket because the default logs bucket is global (cross-project). That's why it has specific security requirements for access, especially from outside Google Cloud (GitLab, Azure DevOps, etc.) via service accounts.
(Moreover, in this case you don't need to turn off logging via --suppress-logs.)
Kevin's answer worked like magic for me; since I am not able to comment, I am writing this new answer.
Initially I was facing the same issue where, despite the gcloud builds submit command passing, my GitLab CI was failing.
Below is the cloudbuild.yaml file where I add the logging option as Kevin suggested.
steps:
  - name: gcr.io/cloud-builders/gcloud
    entrypoint: 'bash'
    args: ['run_query.sh', '${_SCRIPT_NAME}']
options:
  logging: CLOUD_LOGGING_ONLY
Check this document for details: https://cloud.google.com/build/docs/build-config-file-schema#options
The options solution mentioned by @Kevin worked for me too. Just add the parameter as mentioned before in the cloudbuild.yml file.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/myproject/myimage', '.']
options:
  logging: CLOUD_LOGGING_ONLY

Unable to pull docker image into Kubernetes Pod from Google Container Registry

I have read this question and this one, and created my Kubernetes Secret for Google Container Registry using a service account JSON key with Project Owner and Viewer permissions. I have also verified that the image does in fact exist in Google Container Registry by going to the console.
I have also read this document.
When I run:
minikube dashboard
And then from the user interface, I click the "+" symbol and specify the URL of my image like this:
project-123456/bot-image
then click on 'advanced options' and specify the Secret that was imported.
After a few seconds I see this error:
Error: Status 403 trying to pull repository project-123456/bot-image: "Unable to access the repository: project-123456/bot-image; please verify that it exists and you have permission to access it (no valid credential was supplied)."
If I look at what's inside the Secret file (.dockerconfigjson), it's like:
{"https://us.gcr.io": {"email": "admin#domain.com", "auth": "longtexthere"}}
What could be the issue?
The JSON needs to have a top-level "auths" key, as explained in:
Creating image pull secret for google container registry that doesn't expire?
So the JSON should be structured like:
{"auths": {"https://us.gcr.io": {"email": "admin@domain.com", "auth": "longtexthere"}}}
If you are still having issues, you can alternatively download the latest version of minikube (0.17.1) and run
minikube addons configure registry-creds
following the prompts there to set up creds, then run:
minikube addons enable registry-creds
Now you should be able to pull down pods from GCR using a yaml structured like this:
apiVersion: v1
kind: Pod
metadata:
  name: foo
  namespace: default
spec:
  containers:
  - image: gcr.io/example-vm/helloworld:latest
    name: foo
EDIT: 6/13/2018 updating the commands to reflect the comment by @Rambatino