I am using https://dataproc.googleapis.com/v1/projects/{projectId}/regions/{region}/clusters to create GCP Dataproc clusters as described at https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters/create.
I am using service account credentials that have been exported into a JSON keyfile. That service account (myserviceaccount@projectA.iam.gserviceaccount.com) exists in projectA, and I have been able to use it to successfully create Dataproc clusters in projectA.
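For reference, the request is made roughly like the following minimal sketch; the keyfile path, cluster name, and zone are illustrative placeholders rather than the exact code or values from my setup:

# Minimal sketch (illustrative names) of the cluster create request using the
# Python API client with a service-account JSON keyfile.
from google.oauth2 import service_account
from googleapiclient.discovery import build

credentials = service_account.Credentials.from_service_account_file(
    "/path/to/myserviceaccount-keyfile.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
dataproc = build("dataproc", "v1", credentials=credentials)

cluster = {
    "projectId": "projectA",
    "clusterName": "example-cluster",
    "config": {"gceClusterConfig": {"zoneUri": "europe-west1-b"}},
}
operation = dataproc.projects().regions().clusters().create(
    projectId="projectA", region="europe-west1", body=cluster
).execute()
print(operation["name"])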
I now need to use the same service account to create Dataproc clusters in projectB. I'm running exactly the same code with exactly the same credentials; the only difference is the project in which the cluster is created. I have granted myserviceaccount@projectA.iam.gserviceaccount.com the exact same permissions in projectB as it has in projectA, but when I try to create the cluster it fails:
2019-03-22 10:58:47 INFO: _retrieve_discovery_doc():272: URL being requested: GET https://www.googleapis.com/discovery/v1/apis/dataproc/v1/rest
2019-03-22 10:58:54 INFO: method():873: URL being requested: GET https://dataproc.googleapis.com/v1/projects/dh-coop-no-test-35889/regions/europe-west1/clusters?alt=json
2019-03-22 10:58:54 INFO: new_request():157: Attempting refresh to obtain initial access_token
2019-03-22 10:58:54 DEBUG: make_signed_jwt():100: [b'blahblahblah', b'blahblahblah']
2019-03-22 10:58:54 INFO: _do_refresh_request():777: Refreshing access_token
2019-03-22 10:58:55 WARNING: _should_retry_response():121: Encountered 403 Forbidden with reason "forbidden"
So, that service account is forbidden from creating clusters in projectB, but I don't get any information about why. I am hoping there are some audit logs that explain more about why the request was forbidden, but I've looked in https://console.cloud.google.com/logs/viewer?project=projectB and can't find any.
Can someone tell me where I can get more information to diagnose why this request is failing?
As mentioned in the comments, one way to get more information on the failed request is to set up gcloud to use the service account. Running gcloud commands with --log-http may also give additional information.
Re-pasting here for easier readability/visibility.
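Aside from gcloud, if the request is being made with the Python API client (as the question's logs suggest), the detailed error body returned by the API can also be printed directly. A hedged sketch, with illustrative keyfile path, project, and region:

# A hedged sketch: surface the full error body from the failed call; it usually
# carries a more specific reason than the bare "403 Forbidden" log line.
from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

credentials = service_account.Credentials.from_service_account_file(
    "/path/to/myserviceaccount-keyfile.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
dataproc = build("dataproc", "v1", credentials=credentials)

try:
    dataproc.projects().regions().clusters().list(
        projectId="projectB", region="europe-west1"
    ).execute()
except HttpError as err:
    print(err.resp.status)       # e.g. 403
    print(err.content.decode())  # JSON body with the detailed error message and reason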
I am using airflow.providers.google.suite.transfers.gcs_to_gdrive.GCSToGoogleDriveOperator
to upload a file from GCS to Google Drive.
Getting 403 Forbidden with reason "insufficientPermissions"
This is the log output. I'm not sure where or what the issue is; any help is highly appreciated!
{gcs_to_gdrive.py:151} INFO - Executing copy of gs://adwords_data/conversion_data.csv to gdrive://google_adwords/
{gcs.py:328} INFO - File downloaded to /tmp/tmpzmk4n2z9
{http.py:126} WARNING - Encountered 403 Forbidden with reason "insufficientPermissions"
Code
from airflow.providers.google.suite.transfers.gcs_to_gdrive import GCSToGoogleDriveOperator
copy_google_adwords_from_gcs_to_google_drive = GCSToGoogleDriveOperator(
    task_id="copy_google_adwords_from_gcs_to_google_drive",
    source_bucket="{}".format(gcs_to_gdrive_bucket),
    source_object="conversion_data.csv",
    destination_object="adwords_data/",
    gcp_conn_id='google_cloud_default',
    dag=dag
)
In the gcp_conn_id connection (google_cloud_default) I have added the scope https://www.googleapis.com/auth/drive.
In your airflow connection, in the scopes field, try adding:
https://www.googleapis.com/auth/drive, https://www.googleapis.com/auth/cloud-platform
You'll also need to ensure the service account has the correct roles assigned (and that the drive is shared with the service account's email).
You can add roles to the service account by going to IAM, clicking edit on the service account, and adding the roles there.
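As a hedged illustration only (the extra field names vary between versions of the Google provider, and the key path and project below are made up), the connection can also be created programmatically with those scopes:

# A hedged sketch of a google_cloud_default connection carrying the Drive and
# cloud-platform scopes. Recent apache-airflow-providers-google versions accept
# the unprefixed extra keys shown here; older versions expect the
# "extra__google_cloud_platform__" prefix. Key path and project are illustrative.
import json
from airflow.models import Connection
from airflow.settings import Session

conn = Connection(
    conn_id="google_cloud_default",
    conn_type="google_cloud_platform",
    extra=json.dumps(
        {
            "key_path": "/path/to/service-account.json",
            "scope": (
                "https://www.googleapis.com/auth/drive,"
                "https://www.googleapis.com/auth/cloud-platform"
            ),
            "project": "my-gcp-project",
        }
    ),
)

session = Session()
session.add(conn)   # note: fails if a connection with this conn_id already exists
session.commit()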
To preface, I am very new to Terraform and to the cloud. In Google Cloud Shell, I am trying to create a GCP landing zone. However, when I run terraform plan, I get the following error message:
Error: Error when reading or editing Organization Not Found: 000000000000: googleapi: Error 403: The caller does not have permission, forbidden
  with module.cloudbuild_bootstrap.data.google_organization.org,
  on .terraform/modules/cloudbuild_bootstrap/modules/cloudbuild/main.tf line 31, in data "google_organization" "org": 31: data "google_organization" "org" {
How should I resolve this issue? I tried to run gcloud auth application-default login, which tells me the following:
You are authorizing client libraries without access to a web browser. Please run the following command on a machine with a web browser and copy its output back here. Make sure the installed gcloud version is 372.0.0 or newer.
gcloud auth application-default login --remote-bootstrap="[Long URL here]".
Enter the output of the above command:
However, I'm not exactly sure what output I should be pasting at that prompt.
Any and all comments/tips are appreciated, thank you.
Please follow what the gcloud command output suggests. Besides Cloud Shell, you will need a different machine (an office or personal laptop) with gcloud installed. I use a MacBook.
In your PC/MacBook terminal, run gcloud auth application-default login --remote-bootstrap="...".
You will be prompted to open a browser and authenticate with your Google Cloud e-mail. Once you are done, a code will be generated and displayed in the PC/MacBook terminal.
Copy the code from your terminal and paste it back at the Cloud Shell prompt that is waiting for input.
When I tried this for the first time it felt a little weird, but it's a one-time task and it keeps your cloud credentials safe.
I am currently deploying a Django application to GCP Cloud Run.
I have replaced the Cloud Run default service account (....compute@developer.gserviceaccount.com) with a custom one.
But I get an error message:
AttributeError: you need a private key to sign credentials.the credentials you are currently using <class 'google.auth.compute_engine.credentials.Credentials'> just contains a token. ...
This bit is confusing me:
<class 'google.auth.compute_engine.credentials.Credentials'>
The error is reported in the context of django-storages, which I use to store files.
Does this message mean that the service account used by Cloud Run is still the default one (i.e. GOOGLE_APPLICATION_CREDENTIALS effectively resolves to the Compute Engine credentials)?
Why is the custom service account not used as the identity for the Cloud Run service?
Why is Cloud Run still using the default one when I expected to have replaced it with the custom service account?
I am a bit new to IAM, but if somebody could explain why this is happening it would be appreciated.
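One hedged way to check which identity the container is actually running with is to inspect the ambient credentials at runtime (a sketch, not code from my current deployment):

# A hedged sketch: inspect the ambient (Application Default) credentials inside
# the running Cloud Run container to see which service account they resolve to.
import google.auth
from google.auth.transport.requests import Request

credentials, project = google.auth.default()
credentials.refresh(Request())  # fetch a token and populate the service account e-mail

print(type(credentials))  # e.g. google.auth.compute_engine.credentials.Credentials
print(getattr(credentials, "service_account_email", "<no service_account_email attribute>"))
print(project)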
I registered a Dataflow type provider with the command: gcloud deployment-manager type-providers create dataflow --descriptor-url='https://dataflow.googleapis.com/$discovery/rest?version=v1b3'
When I run this configuration:
- name: "my-topic-to-avro"
  type: 'project_id/dataflow:dataflow.projects.locations.templates.launch'
  properties:
    projectId: project_id
    gcsPath: "gs://test-create-123"
    jobName: "my-topic-to-avro"
    location: "europe-west1"
    parameters:
      inputTopic: "projects/project_id/topics/test"
      outputDirectory: "gs://test-create-123/"
      avroTempDirectory: "gs://test-create-123/"
In the output I get this:
ERROR: (gcloud.beta.deployment-manager.deployments.update) Error in Operation [operation-1598980583c2a0ec69]: errors:
- code: RESOURCE_ERROR
location: /deployments/quick-deployment/resources/my-topic-to-avro
message: '{"ResourceType":"project_id/dataflow:dataflow.projects.locations.templates.launch","ResourceErrorCode":"401","ResourceErrorMessage":{"code":401,"message":"Request
is missing required authentication credential. Expected OAuth 2 access token,
login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.","status":"UNAUTHENTICATED","statusMessage":"Unauthorized","requestPath":"https://dataflow.googleapis.com/v1b3/projects/project_id/locations/europe-west1/templates:launch","httpMethod":"POST"}}'
I can get my token by running gcloud auth print-access-token, but I don't know where to insert it, or what schema my YAML should follow to supply everything needed to create the Dataflow job.
Any help appreciated.
The "401 - Request is missing required authentication credential" error message that is triggered when doing a POST request to the Dataflow API is due to a missing credential. The following public reference explains in detail how to use OAuth 2.0 to access Google APIs. Please read it carefully and make sure to follow the steps as mentioned to avoid any errors.
Another place to check this is double check that the Dataflow API is enabled and try setting "GOOGLE_APPLICATION_CREDENTIALS" to point to the JSON of the service account you are using.
I found this Authentication documentation where is mentioned that you can use the Authorization header to supply an access token from the project's service account, you can also try with this method but is important t o clarify that is an example for GKE cluster.
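As a hedged illustration of supplying credentials explicitly (outside Deployment Manager; the keyfile path, template path, project, bucket, and topic below are placeholders), the same templates.launch call can be made directly with the Python API client:

# A hedged sketch: launch the template directly with the Python API client,
# passing service-account credentials explicitly instead of relying on
# Deployment Manager. All paths and IDs are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

credentials = service_account.Credentials.from_service_account_file(
    "/path/to/keyfile.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
dataflow = build("dataflow", "v1b3", credentials=credentials)

response = dataflow.projects().locations().templates().launch(
    projectId="project_id",
    location="europe-west1",
    # gcsPath must point at the template spec file in GCS (placeholder below)
    gcsPath="gs://test-create-123/path/to/template",
    body={
        "jobName": "my-topic-to-avro",
        "parameters": {
            "inputTopic": "projects/project_id/topics/test",
            "outputDirectory": "gs://test-create-123/",
            "avroTempDirectory": "gs://test-create-123/",
        },
    },
).execute()
print(response.get("job", {}).get("id"))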
I noticed this question was addressed in this post.
I'm trying to log in to a Cloud Foundry endpoint.
But when I connect with the Cloud Foundry CLI, I get the error message below:
C:\Users\abc>cf login -a https://xxx.predix-uaa.run.aws-usw02-pr.ice.predix.io
API endpoint: https://xxx.predix-uaa.run.aws-usw02-pr.ice.predix.io
Not logged in. Use 'cf login' to log in.
FAILED
Error performing request: Get /login: unsupported protocol scheme ""
Please help!
The issue is likely that the URL you are specifying is not actually your CF API endpoint. Please contact your platform operator to confirm what it should be.
We'll improve the error message, but what seems to be happening is that the cf CLI tries to retrieve a JSON configuration from [api-endpoint]/v2/info but does not get the response it expects.
It then builds a URL to the login endpoint from the "authorization_endpoint" that should be advertised in that JSON configuration. As that field is not in your response, it tries to access "/login" instead of e.g. "https://xxx.predix-uaa.run.aws-usw02-pr.ice.predix.io/login", causing the error.
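As a hedged sketch of what the CLI is doing (the endpoint below is a placeholder, not your real one), you can fetch /v2/info yourself and check what is advertised:

# A hedged sketch: query the CF API's /v2/info document, which is where the cf
# CLI reads "authorization_endpoint" from before building its login URL.
import json
import urllib.request

api_endpoint = "https://api.system.example.com"  # placeholder CF API endpoint
with urllib.request.urlopen(f"{api_endpoint}/v2/info") as resp:
    info = json.load(resp)

# Missing or empty -> the CLI falls back to a bare "/login", producing this error
print(info.get("authorization_endpoint"))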
CF API endpoint URLs generally start with "api.". In fact, I've never seen one start differently.