I get an error (shown in red in the picture below) while creating a Pub/Sub BigQuery subscription. The error shows up in the subscription creation view:
First question: why does this error appear in the first place?
One of my attempts to solve this was to first create a standard subscription and then add the missing permissions by assigning a role with this command:
gcloud pubsub subscriptions add-iam-policy-binding EventIngestSubscription-4475d78 --member=serviceAccount:service-388032002134@gcp-sa-pubsub.iam.gserviceaccount.com --role="roles/bigquery.dataEditor", but this produces ERROR: (gcloud.pubsub.subscriptions.add-iam-policy-binding) INVALID_ARGUMENT: Role roles/bigquery.dataEditor is not supported for this resource.
When I tried to set the pubsub.subscriber role instead of bigquery.admin, it worked.
Thanks a lot for any insights and suggestions on how to create a BigQuery subscription. I'm really stuck on this one...
The permission that needs to be set is not on the subscription; it is on the BigQuery table itself. Therefore, you are not going to be able to set the BigQuery permissions on the subscription. Instead, you need to ensure that the service account has roles/bigquery.dataEditor on the table you are using with the subscription. You can do this with the bq command-line tool:
bq add-iam-policy-binding --member="serviceAccount:service-<project number>@gcp-sa-pubsub.iam.gserviceaccount.com" --role=roles/bigquery.dataEditor -t "<dataset>.<table>"
This permission is needed so that Pub/Sub can write to BigQuery on your behalf.
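Once that table-level binding is in place, creating the BigQuery subscription should succeed. A minimal sketch with the gcloud CLI (the subscription, topic, and table names below are placeholders, not taken from your setup):

# create the subscription that writes directly to the table
gcloud pubsub subscriptions create <subscription-name> \
--topic=<topic-name> \
--bigquery-table=<project>:<dataset>.<table>

Without --use-topic-schema, Pub/Sub writes each message into the table's data column; add that flag if your topic has a schema that matches the table's columns.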
Related
Hi, I am relatively new to Google Cloud and am trying to set up a pipeline from Dataflow to BigQuery. I was trying to use Pub/Sub for this.
So here's what I wanted to achieve:
DataFlow -> Data Manipulation using some script (probably Python) -> Bucket (temporary) -> Pub Sub -> BigQuery
I have looked into a bunch of pipelines and docs, and I was trying to create a subscription that would take my data from the bucket to BQ. However, when I try to create it, it keeps saying the associated service account does not have the right permissions.
I tried looking for that account in IAM but can't seem to find it. I also tried adding the principal in the Pub/Sub permissions directly.
Nothing is working. Help.
If you would like to use a Pub/Sub to BigQuery subscription, your Pub/Sub service account should have the relevant IAM permissions for BigQuery access - see Pub/Sub service account permissions and Assign BigQuery roles to the Pub/Sub service account.
The picture in your question has a sentence in red font with the service account name - service-69802119849@gcp-sa-pubsub.iam.gserviceaccount.com. If you work through the Console UI, you can go to your target BigQuery dataset - clq, I reckon - and add the BigQuery Data Editor role and the BigQuery Metadata Viewer role to that service account, so it has permissions to work with the given dataset. If you prefer, you can grant those roles at the project level instead.
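If you go the project-level route, a sketch with gcloud (replace the project ID and project number; the roles are the two mentioned above):

# grant both roles at the project level to the Pub/Sub service agent
gcloud projects add-iam-policy-binding <project-id> \
--member="serviceAccount:service-<project-number>@gcp-sa-pubsub.iam.gserviceaccount.com" \
--role=roles/bigquery.dataEditor
gcloud projects add-iam-policy-binding <project-id> \
--member="serviceAccount:service-<project-number>@gcp-sa-pubsub.iam.gserviceaccount.com" \
--role=roles/bigquery.metadataViewer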
Coming back to the design of the pipeline - I don't know your context and requirements, but as a general observation, if you already have to use Dataflow, it may be possible to 'stream' data directly into BigQuery without intermediate storage in a bucket.
I have successfully deployed a 2nd generation Cloud Function with a storage trigger per the Google tutorial.
The Cloud Function works when I run a test command in the shell. But when I try it for real by uploading a file to my bucket, the Cloud Function is not invoked.
I can see that the event triggers the pubsub topic:
And in Eventarc I can see signs of the problem:
So, my layman's analysis of why the Cloud Function invocation fails is that I lack some permission for Eventarc to receive the message from Pub/Sub (?). I have read the Eventarc troubleshooting and Eventarc access control docs and tried to add the Eventarc Admin role to the Eventarc service account (as seen in the image below), but to no avail. (I've also added it to every other service account I can find, made the compute service account a project owner, etc., but no luck.) What am I missing?
(Note: I had an earlier question about this with a broader scope, but I opted for a new, more specific question.)
You used the Compute Engine default Service Account.
You need to give the needed permissions to this Service Account.
According to the documentation:
Make sure the runtime service account key you are using for your Application Default Credentials has either the cloudfunctions.serviceAgent role or the storage.buckets.{get, update} and the resourcemanager.projects.get permissions. For more information on setting these permissions, see Granting, changing, and revoking access to resources.
Please check in IAM page if the default Service Account has the following permissions :
cloudfunctions.serviceAgent
storage.buckets.{get, update}
resourcemanager.projects.get
Also, don't hesitate to check Cloud Logging to see the exact error and the missing permissions.
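To list which roles the default Service Account currently holds on the project, a quick CLI check (a sketch; replace the project ID and project number):

gcloud projects get-iam-policy <project-id> \
--flatten="bindings[].members" \
--filter="bindings.members:<project-number>-compute@developer.gserviceaccount.com" \
--format="table(bindings.role)"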
I am trying to restrict BigQuery so that users can only access specific datasets. I did that without any issues, but why is a user not able to create scheduled queries? It says to enable the API and that only the project owner is able to schedule queries. Is there any way to add permissions or create a custom role so that users can query, create, and schedule queries?
//dinesh
Ensure that the person creating the transfer has the following required permissions in BigQuery:
bigquery.transfers.update permissions to create the transfer
bigquery.datasets.update permissions on the target dataset
The bigquery.admin predefined Cloud IAM role includes bigquery.transfers.update and bigquery.datasets.update permissions. You should check the official documentation and the Cloud IAM roles in BigQuery to see the Predefined roles and permissions.
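If granting the broad bigquery.admin role is more than you want, here is a sketch of a custom role carrying the two permissions above plus bigquery.jobs.create so users can also run queries (the role ID and user email are placeholders; your setup may need further permissions such as bigquery.transfers.get):

# custom role with only the permissions needed for querying and scheduling
gcloud iam roles create scheduledQueryUser --project=<project-id> \
--title="Scheduled Query User" \
--permissions=bigquery.transfers.update,bigquery.datasets.update,bigquery.jobs.create

# bind it to a user
gcloud projects add-iam-policy-binding <project-id> \
--member=user:<user-email> \
--role=projects/<project-id>/roles/scheduledQueryUser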
I have a small Python app running in Google Cloud Run with Docker. The application is triggered by HTTP requests, executes a query in BigQuery and returns the result. Unfortunately I get the following permission error:
Reason: 403 POST https://bigquery.googleapis.com/bigquery/v2/projects/XXXX/jobs: Access Denied: Project XXXX: User does not have bigquery.jobs.create permission in project XXXX.\n\n(job ID: XXXX-XX-XX-XX-XXXX)\n\n
I understand I need to give Cloud Run access to BigQuery. How do I do it? To which user? How can I find out?
You need to add BigQuery permissions via IAM roles to the service account assigned to Cloud Run.
To allow Cloud Run to create BigQuery jobs (bigquery.jobs.create) you need one of the following roles:
roles/bigquery.user
roles/bigquery.jobUser
The service account for Cloud Run is displayed in the Google Cloud Console in the Cloud Run section for your service. Most likely this is the Compute Engine default service account.
To add a BigQuery role, you can use the Google Cloud Console. Go to IAM, find the service account, and add the roles to it.
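The same grant from the CLI, as a sketch assuming the Compute Engine default service account (replace the project ID and project number):

gcloud projects add-iam-policy-binding <project-id> \
--member=serviceAccount:<project-number>-compute@developer.gserviceaccount.com \
--role=roles/bigquery.jobUser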
Documentation:
BigQuery predefined Cloud IAM roles
Service accounts on Cloud Run (fully managed)
Granting roles to service accounts
One of the issues could be that the service account your Cloud Run service is using does not have permissions on BigQuery.
You can update the service account's permissions and add the roles/bigquery.user role so it can create a job.
Also, add other relevant roles based on your application requirements. You can see details about the different BigQuery roles here.
A good rule is to grant only the required permissions to a service account.
I hope this helps.
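Following that rule, one option (a sketch; the service account name, region, and service name are placeholders) is to run the service under a dedicated service account instead of the default one and grant BigQuery roles only to it:

# create a dedicated service account
gcloud iam service-accounts create bq-query-runner \
--display-name="Cloud Run BigQuery runner"

# attach it to the Cloud Run service
gcloud run services update <service-name> --region=<region> \
--service-account=bq-query-runner@<project-id>.iam.gserviceaccount.com

# grant only the role needed to create query jobs
gcloud projects add-iam-policy-binding <project-id> \
--member=serviceAccount:bq-query-runner@<project-id>.iam.gserviceaccount.com \
--role=roles/bigquery.jobUser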
The application is triggered by http requests, executes a query in big query and return the result.
From a security standpoint, the permissions required are identical to those used by the custom website from this solution (I'm the author). The website is also triggered by HTTP requests, executes a query in BQ and returns the result. And granting the permission to create jobs (via the bigquery.jobUser role) is not enough.
You can grant the required permissions to the service account in different ways (e.g. a more sweeping permission or a more restricted one); the details are there at Step 6.
Generally speaking, the more restricted and the more granular the permissions are the better for security.
I'm adding extra clarifications and also pasting specific instructions related to Google's tools usage.
To add the permission to create and run jobs (the BQ error message says this permission is lacking) execute the command:
gcloud projects add-iam-policy-binding <project-name> --member=serviceAccount:<sa-name>@<project-name>.iam.gserviceaccount.com --role roles/bigquery.jobUser
The command can be executed in Cloud Shell; open it using the "Activate Cloud Shell" icon in the BigQuery web UI or from any other Google Cloud Console page. Replace the placeholders:
<sa-name> - replace with service account name used by Cloud Run,
<project-name> - replace with the project name.
The command adds the role bigquery.jobUser to the service account. Do not add other permissions/roles to solve the inability to create/run jobs because excessive permissions are bad for security.
Another permission is required to read BQ data. There are two options to add it:
Grant the bigquery.dataViewer role to the service account:
gcloud projects add-iam-policy-binding <project-name> --member=serviceAccount:<sa-name>@<project-name>.iam.gserviceaccount.com --role roles/bigquery.dataViewer
Then proceed to the next step. Not recommended unless you are using a throw-away project. The drawback of this approach is granting permissions to view all project datasets.
Take more granular approach (recommended) by allowing the service account to query one dataset only. This is the approach described below.
Execute the commands replacing <ds-name> with the dataset name (used by your query):
bq show --format=prettyjson <ds-name> >/tmp/mydataset.json
vi /tmp/mydataset.json
Using vi, append the following item to the existing access array and replace the placeholders before saving the file:
,
{
"role": "READER",
"userByEmail": "[<sa-name>#<project-name>.iam.gserviceaccount.com](mailto:<sa-name>#<project-name>.iam.gserviceaccount.com)"
}
Execute the command to effect the changes for the dataset:
bq update --source /tmp/mydataset.json <ds-name>
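If you prefer not to edit the file by hand and have jq available, the same change can be scripted (a sketch; the placeholders are the same as above):

bq show --format=prettyjson <ds-name> \
| jq '.access += [{"role": "READER", "userByEmail": "<sa-name>@<project-name>.iam.gserviceaccount.com"}]' \
> /tmp/mydataset.json
bq update --source /tmp/mydataset.json <ds-name>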
I would like to lay out the steps for making a Pub/Sub log export sink from one project to another. As a secondary goal, I would like the sink to bridge organizations. So far I have followed these steps as outlined in the gcloud help pages and the auth documentation.
First I created a sink:
# from project A
gcloud logging sinks create \
<sink_name> \
pubsub.googleapis.com/projects/<project_B>/topics/<topic_name> \
--log-filter <filter>
The CLI returns successfully and gives a little advice on setting up permissions for a service account it created:
Created [https://logging.googleapis.com/v2/projects/<project_A>/sinks/<sink_name>].
Please remember to grant `serviceAccount:<new_user>@logging-<project_A_account_number>.iam.gserviceaccount.com` Pub/Sub Publisher role to the topic.
More information about sinks can be found at https://cloud.google.com/logging/docs/export/configure_export
Following this advice, I gave the new service account the appropriate permissions for that topic.
gcloud projects add-iam-policy-binding <project_B> \
--member serviceAccount:<new_user>@logging-<project_A_account_number>.iam.gserviceaccount.com \
--role roles/pubsub.publisher
This command returns without issue.
In spite of everything seeming OK, no logs flow through the sink.
Here are some clues: The Exports tab on the Logs Viewer reports a permissions error in the sink. The Project Activity tab reports a permissions issue.
Image: Logs Viewer, Exports
Image: Project, Activity
Is there a solution to make this work? Is it possible to generalize this to send logs to a sink in this project from other Google Cloud organizations?
I have been able to reproduce the scenario you wanted. Let me set the basics for the scenario description:
Project A (where my logs are stored): project-a
Project B (where my Pub/Sub Topic and Subscription are): project-b
Topic: projects/project-b/topics/myTopic
Subscription: mySub
Sink: test-sink
Then, this is the process I followed:
In project A: create a filter for logs in the Logging > Logs tab in the Google Cloud Console.
Create an export with the elements in the image below (remember to append pubsub.googleapis.com/ to the name of your topic in the other project):
Move to the Exports tab and copy the Writer Identity, which should have the format test-sink@XXXXXXXX.iam.gserviceaccount.com
In project B: go to the IAM & admin > IAM tab in the Console and add a new member, namely the service account (Writer Identity) obtained in the previous step, with the role Pub/Sub Editor enabled (a CLI alternative is sketched after these steps).
Create a Pub/Sub subscription with the command gcloud beta pubsub subscriptions create --topic myTopic mySub
Do some operation that results in logs read by the filter you specified in Project A.
Consume the logs written to the topic using the subscription, with the command gcloud beta pubsub subscriptions pull mySub.
There you will find the logs that are written from Project A to your topic in Project B. I have reproduced this same scenario writing logs from a simple App Engine application (and therefore with the appropriate log filter searching for App Engine logs), and when I make requests to the App Engine app, some logs are created and then written to myTopic, which I can read using mySub.
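As the CLI alternative to the IAM step above (a sketch; <writer-identity> is the service account copied from the Exports tab):

gcloud pubsub topics add-iam-policy-binding myTopic \
--project project-b \
--member serviceAccount:<writer-identity> \
--role roles/pubsub.editor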
Regarding your second question, I cannot make sure whether this same procedure works in a cross-organization scenario but I see no issues with that.
Matt, your gcloud commands to create a sink and grant the Publisher role to the service account used by the sink look correct. The error you saw may be transient, and it should have resolved after some time.
A possible reason for the error could be that there was a time delay between the two commands, and the sink tried to export logs immediately before you granted the IAM role. Can you confirm that the error resolved itself eventually?
To answer your last question, yes you can export logs from one project to a destination in another project in a different organization.
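If the error does not clear on its own, a quick way to re-check both sides (a sketch; substitute your sink, project, and topic names):

# in project A: show the sink, including its writerIdentity
gcloud logging sinks describe <sink_name> --project <project_A>

# in project B: confirm the writer identity appears with roles/pubsub.publisher
gcloud pubsub topics get-iam-policy <topic_name> --project <project_B>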