I am trying to register a repository on AWS S3 to store Elasticsearch snapshots.
I am following a guide and ran the very first command listed in the doc.
But I am getting an Access Denied error while executing that command.
The role that is being used to perform operations on S3 is the AmazonEKSNodeRole.
I have assigned the appropriate permissions to the role to perform operations on the S3 bucket.
Also, here is another doc which suggests using Kibana for Elasticsearch versions > 7.2, but I am doing the same via cURL requests.
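For reference, the registration request I am sending looks roughly like this (a Python sketch of the cURL call; the endpoint, repository name, and bucket name are placeholders for my actual values):

import requests

# Register an S3 snapshot repository via the Elasticsearch _snapshot API.
# Endpoint, repository name, and bucket name are placeholders; AWS-managed
# domains may also require extra settings such as a role_arn.
resp = requests.put(
    "https://my-es-endpoint:9200/_snapshot/my_s3_repository",
    json={"type": "s3", "settings": {"bucket": "my-snapshot-bucket"}},
)
print(resp.status_code, resp.text)  # this is where I get the Access Denied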
Below is the trust policy of the role through which I am making the request to register the repository in the S3 bucket.
Also, below are screenshots of the permissions of the trusting and trusted accounts, respectively -
When trying to perform a simple query in BigQuery I am getting this error:
Access Denied: BigQuery BigQuery: Permission denied while opening file.
I am using an IAM user with a BigQuery admin role. I can view the datasets and tables, just not any data.
I have authorised the dataset too.
You might be missing a storage permission (storage.objects.get).
Try running gsutil with debug output enabled (for example, gsutil -D ls gs://your-bucket) and check the output for any 403 errors.
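If you want to confirm which storage permissions your credentials actually hold, here is a quick sketch using the google-cloud-storage client (the bucket name is a placeholder):

from google.cloud import storage

# Ask GCS which of the listed permissions the caller really has on the
# bucket backing the data. The bucket name is a placeholder.
client = storage.Client()
bucket = client.bucket("my-external-data-bucket")
granted = bucket.test_iam_permissions(["storage.objects.get", "storage.objects.list"])
print(granted)  # if "storage.objects.get" is missing here, that explains the 403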
Open your GCP console (Logs Explorer) and filter on the service you want (GCS here). It will show you the service account/account that needs access and the rights missing on the target resource.
If you can reproduce the error, do so and then refresh the Logs Explorer.
I'm trying to provide cross-account Glue access to Account B from Account A.
I'm first getting an error that says:
User {my_arn} is not authorized to perform: glue:GetDatabases on resource: {catalog}
I researched and found that I can grant Data Catalog permissions through Lake Formation. I selected "External accounts" and added the catalog resources along with table permissions. However, I get another error that says:
You don't have IAM permissions to make cross-account grants.
The required permissions are in the AWS managed policy AWSLakeFormationCrossAccountManager.
So I go to the IAM Management Console, find the policy specified in this error message, and attach it to the role I'm using (the one shown in the top right corner of the AWS Management Console).
But the same error message keeps popping up, so this doesn't seem to have solved the issue.
What am I doing wrong here? How can I bypass this issue?
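For reference, attaching the policy boils down to this (a boto3 sketch of what I did in the console; the role name is a placeholder for the one I am actually using):

import boto3

# Attach the AWS managed policy named in the error message to my role.
# The role name is a placeholder.
iam = boto3.client("iam")
iam.attach_role_policy(
    RoleName="MyCrossAccountRole",
    PolicyArn="arn:aws:iam::aws:policy/AWSLakeFormationCrossAccountManager",
)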
I am trying to use an AWS Glue crawler on an S3 bucket to populate a Glue database. I run the Create Crawler wizard, select my data source (the S3 bucket with the Avro files), have it create the IAM role, and run it, and I get the following error:
Database does not exist or principal is not authorized to create tables. (Database name: zzz-db, Table name: avroavro_all) (Service: AWSGlue; Status Code: 400; Error Code: AccessDeniedException; Request ID: 78fc18e4-c383-11e9-a86f-736a16f57a42). For more information, see Setting up IAM Permissions in the Developer Guide (http://docs.aws.amazon.com/glue/latest/dg/getting-started-access.html).
I tried creating this table in a new blank database (as opposed to an existing one with tables), I tried prefixing the names, I tried sourcing different schemas, and I tried using an existing role with Admin access. I thought the latter would work, but I keep getting the same error and have no idea why.
To be explicit, the service role I created has several policies I assume are permissive enough to create tables:
The logs are vanilla:
19:52:52
[10cb3191-9785-49dc-8935-fb02dcbd69a3] BENCHMARK : Running Start Crawl for Crawler avro
19:53:22
[10cb3191-9785-49dc-8935-fb02dcbd69a3] BENCHMARK : Classification complete, writing results to database zzz-db
19:53:22
[10cb3191-9785-49dc-8935-fb02dcbd69a3] INFO : Crawler configured with SchemaChangePolicy {"UpdateBehavior":"UPDATE_IN_DATABASE","DeleteBehavior":"DEPRECATE_IN_DATABASE"}.
19:53:34
[10cb3191-9785-49dc-8935-fb02dcbd69a3] ERROR : Insufficient Lake Formation permission(s) on s3://zzz-data/avro-all/ (Database name: zzz-db, Table name: avroavro_all) (Service: AWSGlue; Status Code: 400; Error Code: AccessDeniedException; Request ID: 31481e7e-c384-11e9-a6e1-e78dc8223fae). For more information, see Setting up IAM Permissions in the Developer Guide (http://docs.aws.amazon.com/glu
19:54:44
[10cb3191-9785-49dc-8935-fb02dcbd69a3] BENCHMARK : Crawler has finished running and is in state READY
I had the same problem when I set up and ran a new AWS Glue crawler after enabling Lake Formation (in the same AWS account). I've been running Glue crawlers for a long time and was stumped when I saw this new error.
After some trial and error, I found the root cause of the problem: when you enable Lake Formation, it adds an additional layer of permissions on any new Glue databases created via the Glue crawler, and on any resource (Glue catalog, S3, etc.) that you add to the Lake Formation service.
To fix this problem, you have to grant the crawler's IAM role a proper set of Lake Formation permissions (CRUD) on the database.
You can manage these permissions in the AWS Lake Formation console (UI) under the Permissions > Data permissions section, or via the aws lakeformation CLI commands.
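For example, a minimal boto3 sketch of the database grant (the role ARN is a placeholder; zzz-db is the database from the question):

import boto3

# Grant the crawler's IAM role the Lake Formation permissions it needs
# on the target database. The role ARN is a placeholder.
lf = boto3.client("lakeformation")
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/MyGlueCrawlerRole"},
    Resource={"Database": {"Name": "zzz-db"}},
    Permissions=["CREATE_TABLE", "ALTER", "DROP", "DESCRIBE"],
)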
I solved this problem by adding a grant in AWS Lake Formation -> Permissions -> Data locations. (Do not forget to add a trailing forward slash (/) after the bucket name.)
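The rough boto3 equivalent of that console grant, if you prefer scripting it (account ID, role name, and bucket ARN are placeholders):

import boto3

# Grant the crawler's IAM role access to the registered S3 data location.
# Account ID, role name, and bucket ARN are placeholders.
lf = boto3.client("lakeformation")
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/MyGlueCrawlerRole"},
    Resource={"DataLocation": {"ResourceArn": "arn:aws:s3:::zzz-data"}},
    Permissions=["DATA_LOCATION_ACCESS"],
)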
I had to add the custom role I created for Glue to the "Data lake Administrators" grantees:
(Note: this resolves the crawler's denied access, but there may be a way to do it with lesser privileges...)
Make sure you gave the necessary permissions to your crawler's IAM role in this path:
Lake Formation -> Permissions -> Data lake permissions
(Grant the relevant Glue database permissions to your crawler's IAM role.)
I'm working through some of the example Sagemaker notebooks, and I receive the following Access Denied error when trying to run the linear_time_series_forecast example:
ValueError: Error training linear-learner-2017-12-21-15-29-34-676: Failed Reason: ClientError: Data download failed:AccessDenied (403): Access Denied
I can manually download and upload from my S3 bucket using the AWS command line interface, but the Jupyter notebook fails.
Note that I am running the notebook through Sagemaker's notebook instance.
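For example, roughly the equivalent of the CLI check that succeeds for me, run from the notebook itself (bucket and key are placeholders):

import boto3

# Downloading the training data directly from the notebook instance works;
# only the training job's data download fails. Bucket and key are placeholders.
s3 = boto3.client("s3")
s3.download_file("my-training-bucket", "linear-learner/train.csv", "/tmp/train.csv")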
Looks like this question was also answered on the AWS Forums.
The IAM Role referenced by
role = get_execution_role()
needs to have a policy attached to it that grants s3:GetObject permission on the S3 bucket holding your training data.
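A minimal sketch of attaching such a policy with boto3 (the role name, policy name, and bucket are placeholders; your execution role's actual name will differ):

import boto3
import json

# Inline policy granting s3:GetObject on the training data bucket.
# Role name, policy name, and bucket name are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-training-bucket/*",
    }],
}
iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="AmazonSageMaker-ExecutionRole-placeholder",
    PolicyName="SageMakerTrainingDataRead",
    PolicyDocument=json.dumps(policy),
)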
Note that as of at least October 28, 2022, the linked forum post now redirects to a page which states (among other things):
The thread you are trying to access has outdated guidance, hence we have archived it.
Please keep this in mind, as it is possible that this answer no longer works, or that it will stop working at some point in the future.