Access Denied: BigQuery BigQuery: Permission denied while opening file - google-cloud-platform

When trying to perform a simple query in BigQuery I am getting this error:
Access Denied: BigQuery BigQuery: Permission denied while opening file.
I am using an IAM user with the BigQuery Admin role. I can view the datasets and tables, just not any data.
I have authorised the dataset too.

You might be missing a storage permission (storage.objects.get).
Try running gsutil with the -D (debug) flag and check the output for any 403 errors.
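For example, one quick way to surface 403s (the bucket and path here are placeholders):

```shell
# List the underlying GCS objects with debug output and
# search it for HTTP 403 / permission-denied responses.
gsutil -D ls gs://your-bucket/your-path/ 2>&1 | grep -iE "403|forbidden|denied"
```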

Open your GCP console (Logs Explorer) and filter on the service you want (GCS here). It will show you the account or service account that needs access, plus the permissions missing on the target resource.
If you can recreate the error, do it and then refresh the log explorer.
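As a sketch, a Logs Explorer filter along these lines narrows things down to GCS permission denials (the exact fields can vary by log type; status code 7 is PERMISSION_DENIED in audit logs):

```
resource.type="gcs_bucket"
protoPayload.status.code=7
```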

Related

Permission bigquery.tables.get denied or it may not exist

I am using the AWS Glue connector for BigQuery. My Glue jobs were running fine in multiple AWS accounts, but suddenly they started failing in all the accounts at once with the below response:
Access Denied: Table common-infra-services:detailedcost.gcp_billing_export_resource_v1_01E8AD_3E792E_BB0E5D: Permission bigquery.tables.get denied on table common-infra-services:detailedcost.gcp_billing_export_resource_v1_01E8AD_3E792E_BB0E5D (or it may not exist).", "reason": "accessDenied"
Please review and let me know what could be causing this problem.
I am using the GCP IAM service account role to run queries using Glue to BigQuery with the following set of permissions:
bigquery.jobs.create
bigquery.tables.getData
bigquery.tables.list
And with these permissions, all jobs were running fine till yesterday.
Based on that error message, I'd check whether the table common-infra-services:detailedcost.gcp_billing_export_resource_v1_01E8AD_3E792E_BB0E5D exists. If it does, you might need to add the permission bigquery.tables.get to your service account.
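If the table does exist, one sketch of a fix (the project ID and service-account name below are placeholders) is to grant a role that contains bigquery.tables.get, such as roles/bigquery.dataViewer:

```shell
# Grant BigQuery Data Viewer, which includes bigquery.tables.get
# (and bigquery.tables.getData / bigquery.tables.list).
gcloud projects add-iam-policy-binding your-project-id \
  --member="serviceAccount:your-sa@your-project-id.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataViewer"
```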

AWS DMS permission issue while transferring data from cluster to cluster in SQL

When applied, I got the status "running with errors", and in the logs I am only able to find one error: "Failed to get table definition for 'awsdms_control'. 'awsdms_apply_exceptions', checking if Metadata connection dropped by server", followed by "Native error: command denied to user 'dms_user' for table 'awsdms_apply_exceptions'". Is it because of DB permissions? If so, which permissions are required? Thanks
It is because of permission issues. Granting admin permissions resolved it as a temporary fix for now, and we also specified the target schema in the metadata.

Amazon Athena error opening Hive split s3 path and Access Denied

I'm querying data from the Glue catalog. For some tables I can see the data, but for others I get the below error:
Error opening Hive split s3://test/sample/run-1-part-r-03 (offset=0, length=1156) using org.apache.hadoop.mapred.TextInputFormat: Permission denied on S3 path: s3://test/sample/run-1-part-r-03
I have given full access to Athena.
Amazon Athena adopts the permissions from the user when accessing Amazon S3.
If the user can access the objects in Amazon S3, then they can access them via Amazon Athena.
Does the user who ran the command have access to those objects?
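As a rough sketch, the querying user's IAM policy needs at least s3:GetObject on the objects and s3:ListBucket on the bucket (the bucket name here is taken from the error above; adjust to your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::test/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::test"
    }
  ]
}
```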

Permissions For Google Cloud SQL Import Using Service Accounts

I've exported a MySQL database successfully, following the MySQL Export Guide.
Now I'm trying to import it, following the MySQL Import Guide.
I've checked the permissions for the service_account_email I'm using, and it has both the Cloud SQL Admin and Storage Admin roles.
I was able to successfully activate my service account using this command locally:
gcloud auth activate-service-account <service_account_email> --key-file=<service_account_json_file>
After I ran the command:
gcloud sql import sql <instance> <gstorage_file> --database=<db_name> --async
I got this information:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "required",
        "message": "Login Required",
        "locationType": "header",
        "location": "Authorization"
      }
    ],
    "code": 401,
    "message": "Login Required"
  }
}
Other Things I've Tried
I also tried using the service_account_email of my SQL instance, which came from:
gcloud sql instances describe <instance_name>
But it seems to give the same error.
Question
Based on the REST API JSON error I'm given, how do I "login" using the service_account_email so I wouldn't get the 401 Error?
The problem is about the permission of the database instance's service account to write to the created bucket. Steps to solve this issue:
1) Go to your Cloud SQL instance and copy the instance's service account (Cloud SQL -> {instance name} -> OVERVIEW -> Service account).
2) After copying the service account, go to the Cloud Storage bucket you want to dump to and grant the desired permission to that account (Storage -> {bucket name} -> Permissions -> Add member).
The Cloud SQL instance is running under a Google service account that is not part of your project. You will need to grant this account permissions on the file in Cloud Storage that you want to import. Here is a handy dandy bash snippet that will do that.
SA_NAME=$(gcloud sql instances describe YOUR_DB_INSTANCE_NAME --project=YOUR_PROJECT_ID --format="value(serviceAccountEmailAddress)")
gsutil acl ch -u ${SA_NAME}:R gs://YOUR_BUCKET_NAME
gsutil acl ch -u ${SA_NAME}:R gs://YOUR_BUCKET_NAME/whateverDirectory/fileToImport.sql
The first line gets the service account email address.
The next line gives this service account read permissions on the bucket.
The last line gives the service account read permissions on the file.
Google also has some of the worst error reporting around. If you get this error message it might also be that you entered a PATH incorrectly. In my case it was my path to my bucket directory. Go figure, I don't have permissions to access a bucket that doesn't exist. Technically correct but hardly useful.
After doing some research, and based on the permission error, these are the steps I find most useful for troubleshooting the issue.
To make it easier to test ACLs and permissions, you can:
Create and download a key for the service account in question
Use gcloud auth activate-service-account to obtain credentials for the service account
Use gsutil as usual to see if you can access the object in question
You might need to grant an additional IAM role such as roles/storage.admin to the service account in question; see more information here.
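Those three steps look roughly like this (the service-account email, key file, and object path are all placeholders):

```shell
# 1. Create and download a key for the service account in question
gcloud iam service-accounts keys create key.json \
    --iam-account=my-sa@my-project.iam.gserviceaccount.com

# 2. Obtain credentials for that service account
gcloud auth activate-service-account my-sa@my-project.iam.gserviceaccount.com \
    --key-file=key.json

# 3. Use gsutil as usual to see whether the object is accessible
gsutil ls gs://my-bucket/dump.sql
```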
According to the Google docs:
Describe the instance you are importing to:
gcloud sql instances describe INSTANCE_NAME
Copy the serviceAccountEmailAddress field.
Use gsutil iam to grant the storage.objectAdmin IAM role to the service account for the bucket.
gsutil iam ch serviceAccount:SERVICE-ACCOUNT:objectAdmin gs://BUCKET-NAME
Then import the database.

Spark Redshift: error while reading redshift tables using spark

I am getting the below error while reading data from a Redshift table using Spark.
Below is the code:
Dataset<Row> dfread = sql.read()
.format("com.databricks.spark.redshift")
.option("url", url)
//.option("query","select * from TESTSPARK")
.option("dbtable", "TESTSPARK")
.option("forward_spark_s3_credentials", true)
.option("tempdir","s3n://test/Redshift/temp/")
.option("sse", true)
.option("region", "us-east-1")
.load();
error:
Exception in thread "main" java.sql.SQLException: [Amazon](500310) Invalid operation: Unable to upload manifest file - S3ServiceException:Access Denied,Status 403,Error AccessDenied,Rid=,CanRetry 1
Details:
error: Unable to upload manifest file - S3ServiceException:Access Denied,Status 403,Error AccessDenied,Rid 6FC2B3FD56DA0EAC,ExtRid I,CanRetry 1
code: 9012
context: s3://jd-us01-cis-machine-telematics-devl-data-processed/Redshift/temp/f06bc4b2-494d-49b0-a100-2246818e22cf/manifest
query: 44179
Can anyone please help?
You're getting a permission error from S3 when Redshift tries to access the files you're telling it to load.
Have you configured the access keys for S3 access before calling load()? Note that your tempdir uses the s3n:// scheme, so the matching fs.s3n properties need to be set:
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "ASDFGHJKLQWERTYUIOP")
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "QaZWSxEDC/rfgyuTGBYHY&UKEFGBTHNMYJ")
You should be able to check which access key id was used from the Redshift side by querying the stl_query table.
From the error "S3ServiceException:Access Denied", it seems Redshift does not have permission to access the S3 files. Please follow the steps below:
1) Add a bucket policy to that bucket that allows the Redshift account access
2) Create an IAM role in the Redshift account that Redshift can assume
3) Grant permissions to access the S3 bucket to the newly created role
4) Associate the role with the Redshift cluster
5) Run COPY statements
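For the last step, a COPY statement using the attached role would look roughly like this (the account ID, role name, target table, and manifest path are illustrative):

```sql
COPY testspark
FROM 's3://test/Redshift/temp/manifest'
IAM_ROLE 'arn:aws:iam::123456789012:role/myRedshiftRole'
MANIFEST;
```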