AWS account root user not able to query Athena tables - amazon-web-services

I have logged in to AWS as the root account but am still unable to query an Athena table that was created by another IAM user with the Administrator role. When I log off and log in as the IAM user that created the tables, I am able to query.
Any input on how I can grant the AWS root account permission to query the Athena table (which is also available in the Glue Data Catalog)?

This was because the account had not been granted access in Lake Formation to the database being queried. Once access was granted to the Admin role, the account owner was able to query the tables in that database from Athena.
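A hedged sketch of what such a Lake Formation grant can look like with the AWS CLI; the account ID, role name, database, and table names below are placeholders, not taken from the question:

```shell
# Grant SELECT and DESCRIBE on one Glue table to a principal via Lake Formation.
# 123456789012, Administrator, mydb, and mytable are placeholder values.
aws lakeformation grant-permissions \
  --principal DataLakePrincipalIdentifier=arn:aws:iam::123456789012:role/Administrator \
  --permissions SELECT DESCRIBE \
  --resource '{"Table": {"DatabaseName": "mydb", "Name": "mytable"}}'
```

To cover every table in the database at once, `{"TableWildcard": {}}` can be used in the `Table` resource in place of `"Name"`.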

Related

Give access of dynamodb tables to users

I created a table in DynamoDB and it works as expected.
Next, I have created a new user in the IAM and attached the AmazonDynamoDBFullAccess policy to this user.
However, when I log in as the newly created user I can see none of the tables I created as a root user.
So how can the root user create new users and give them access to DynamoDB tables?
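The flow described above can be sketched with the AWS CLI; the user name `new-user` is hypothetical, while the policy ARN is the standard one for the AmazonDynamoDBFullAccess managed policy:

```shell
# Create the IAM user (run with administrator or root credentials).
aws iam create-user --user-name new-user

# Attach the AWS-managed full-access policy for DynamoDB.
aws iam attach-user-policy \
  --user-name new-user \
  --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess
```

One thing worth checking in this situation: DynamoDB tables are regional, so the new user will only see the tables if they are looking at the same region where the root user created them.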

unable to create scheduled queries in bigquery

I am trying to restrict BigQuery so that users can only access specific datasets. I did so without any issues, but the users are not able to create scheduled queries; the error says to enable the API and that only the project owner can schedule queries. Is there any way to add permissions, or create a custom role, so that users can query, create, and schedule queries?
//dinesh
Ensure that the person creating the transfer has the following required permissions in BigQuery:
bigquery.transfers.update permissions to create the transfer
bigquery.datasets.update permissions on the target dataset
The bigquery.admin predefined Cloud IAM role includes both bigquery.transfers.update and bigquery.datasets.update. Check the official documentation on Cloud IAM roles in BigQuery for the full list of predefined roles and their permissions.
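As a sketch, a custom role carrying those permissions (plus bigquery.jobs.create, which is needed to run queries at all) could be created and bound with gcloud; the role ID, project, and user email are hypothetical:

```shell
# Create a custom role with the permissions needed to schedule queries.
# scheduledQueryUser and my-project are placeholder names.
gcloud iam roles create scheduledQueryUser \
  --project=my-project \
  --title="Scheduled Query User" \
  --permissions=bigquery.transfers.update,bigquery.datasets.update,bigquery.jobs.create

# Bind the custom role to the user at the project level.
gcloud projects add-iam-policy-binding my-project \
  --member=user:dinesh@example.com \
  --role=projects/my-project/roles/scheduledQueryUser
```

Dataset-level access (for example via BigQuery Data Editor on the specific datasets) still has to be granted separately, since the custom role above only covers transfers and job creation.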

IAM Custom Role for Inserting to Specific BigQuery Dataset

I have several customer projects that write analytic events into a BigQuery dataset. The setup is organised like this:
1) Each GCP project has its own set of GCP resources and some of them report analytics using BigQuery insert API.
2) There's a single "Main Analytics" project that intakes all the data from the different projects in a standardised table (all projects write in the same data format).
I've created a custom IAM role in "Main Analytics" with the required permissions to execute a row insert operation:
bigquery.datasets.get
bigquery.tables.get
bigquery.tables.updateData
For every customer project I've created a unique service account with the above role. This allows each resource in any project to authenticate and insert rows (but not create/delete tables).
Problem: What I really want to do is limit the service accounts to write only to a specific dataset that intakes all the data. The above IAM role allows the service account to list all datasets/tables in the "Main Analytics" project and to insert into them.
If I use dataset permissions - add the service account email as a user to the dataset ACL - then it would have to be WRITER dataset role which would allow the service account to create & delete tables in the dataset which is too broad.
Combining the IAM role with the dataset permissions results in a union, so the wider WRITER permissions take effect over the narrower IAM role.
Is there any way I can configure roles/permissions so that each service account can insert, and only insert, into one specific dataset?
You can drop the bigquery.datasets.get permission from the custom IAM role so that the service accounts can't list all the datasets, and then in the specific dataset's permissions grant the READER role instead of WRITER.
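A sketch of both halves of that change with the CLI tools; the role ID, project, and dataset names are placeholders:

```shell
# 1) Remove bigquery.datasets.get from the existing custom role.
gcloud iam roles update rowInserter \
  --project=main-analytics \
  --remove-permissions=bigquery.datasets.get

# 2) Edit the dataset ACL: export it, change the service account's
#    entry in the "access" array to READER, then re-apply it.
bq show --format=prettyjson main-analytics:intake_dataset > ds.json
# ... edit the "access" array in ds.json ...
bq update --source ds.json main-analytics:intake_dataset
```

The custom role (bigquery.tables.updateData) keeps the insert capability, while the dataset-level READER entry avoids granting table create/delete rights.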

unable to run query against BigQuery - permission error 403

I have an IAM user with the role BigQuery Data Editor.
On my dataset I used Share dataset and added the user with Can Edit privileges.
However, when I run my script that accesses BigQuery, I get error 403.
When I add the BigQuery User role to my IAM user, the script works.
The script only runs a SELECT query against a table in this dataset.
I don't understand why I must grant BigQuery User for this to work.
According to the documentation https://cloud.google.com/bigquery/docs/access-control
Rationale: The dataEditor role extends bigquery.dataViewer by issuing
create, update, delete privileges for the tables within the dataset
roles/bigquery.dataViewer has bigquery.tables.getData, which gets table data.
What am I doing wrong here?
Having access to the data and being able to retrieve it with a query are different things and that's where the confusion is coming from.
Per the documentation, roles/bigquery.dataEditor has the following permissions:
Read the dataset's metadata and to list tables in the dataset.
Create, update, get, and delete the dataset's tables.
This means that the user with this role has access and manipulation rights to the dataset's information and the tables in it. An example would be that a user with this role can see all the table information by navigating to it through the GCP console (schema, details and preview tabs) but when trying to run a query there, the following message will appear:
Access Denied: Project <PROJECT-ID>: The user <USER> does not have bigquery.jobs.create permission in project <PROJECT-ID>.
Now let's check the roles/bigquery.user permissions:
Permissions to run jobs, including queries, within the project.
The key element here is that the BigQuery User role can run jobs and the BigQuery DataEditor can't. BigQuery Jobs are the objects that manage the BigQuery tasks, this includes running queries.
With this information, it's clearer in the roles comparison matrix that for what you are trying to accomplish you'll need the BigQuery DataEditor role (Get table data/metadata) and the BigQuery User role (Create jobs/queries).
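Under that reading, granting both roles to the user should make the SELECT script work; a sketch with hypothetical project and user names:

```shell
# Data access: read and modify table data in the project's datasets.
gcloud projects add-iam-policy-binding my-project \
  --member=user:analyst@example.com \
  --role=roles/bigquery.dataEditor

# Job execution: bigquery.jobs.create, needed to actually run queries.
gcloud projects add-iam-policy-binding my-project \
  --member=user:analyst@example.com \
  --role=roles/bigquery.user
```

For a read-only script, roles/bigquery.dataViewer on the dataset combined with roles/bigquery.user (or roles/bigquery.jobUser) on the project would be a narrower pairing.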

Redshift Revoke Permission not Working

I have an Amazon Redshift cluster with four schemas (Schema1, Schema2, Schema3 and Schema4).
I created a user User1 in this cluster. I want this user to have only read-only access to all the tables in Schema1. Currently, this user has access (Select, Insert, Update, Delete) to all the tables in all the schemas.
I tried all the commands from the Redshift manual, but it looks like nothing is working.
Example:
REVOKE ALL on schema schema1 from User1
REVOKE ALL on schema schema2 from User1
REVOKE ALL on schema schema3 from User1
REVOKE ALL on schema schema4 from User1
I also tried to revoke individual permissions (Insert, Update, Delete).
I also tried to revoke permissions (Insert, Update, Delete) from individual table
Tried all the combinations from the manual. I am using SQL Workbench and all the statements were successfully executed without any syntax error.
Not able to figure it out. Any help is appreciated.
P.S. I have 15 years of database experience working on roles and permissions.
In my case the issue I had was that I had 3 users belonging to the same group.
The group had been granted ALL privileges to all the tables of the schema.
Therefore revoking the permissions for a single user did not work, since the user still inherited the group's grants (privileges in Redshift are additive across the user, their groups, and PUBLIC).
TL;DR
The solution in my case was to create a group for each user and revoke access to the schema for the other groups.
REVOKE ALL ON SCHEMA my_schema FROM group my_group;
These commands seem to work:
CREATE SCHEMA schema1;
CREATE TABLE schema1.foo (name TEXT);
CREATE USER user1 PASSWORD 'Abcd1234';
GRANT USAGE ON SCHEMA schema1 TO user1;
GRANT SELECT ON ALL TABLES IN SCHEMA schema1 TO user1;
However, it might not automatically grant access on tables created in future.
Since Amazon Redshift is based on PostgreSQL 8.0.2, see: How do you create a read-only user in PostgreSQL?
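Following that PostgreSQL approach, a read-only setup for user1 on schema1 could look like the sketch below, run through psql against the cluster endpoint (the connection details are placeholders). ALTER DEFAULT PRIVILEGES addresses the future-tables gap mentioned above:

```shell
# Apply read-only grants on schema1 to user1 via psql.
# The host, port, database, and admin user are placeholder values.
psql "host=my-cluster.abc123.us-east-1.redshift.amazonaws.com port=5439 dbname=dev user=admin" <<'SQL'
REVOKE ALL ON ALL TABLES IN SCHEMA schema1 FROM user1;
GRANT USAGE ON SCHEMA schema1 TO user1;
GRANT SELECT ON ALL TABLES IN SCHEMA schema1 TO user1;
-- Also apply SELECT to tables created in schema1 in the future.
ALTER DEFAULT PRIVILEGES IN SCHEMA schema1 GRANT SELECT ON TABLES TO user1;
SQL
```

As the answers above note, grants made to the user's groups or to PUBLIC would still apply on top of this and need to be revoked separately.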
This might not be what caused the OP's issue, but it solved the issue for me, and could solve it for people who encounter the same situation and end up on this thread.
In addition to George V's answer, note that there is in Redshift a PUBLIC group, that grants permissions to every user.
PUBLIC represents a group that always includes all users. An individual user's privileges consist of the sum of privileges granted to PUBLIC, privileges granted to any groups that the user belongs to, and any privileges granted to the user individually.
(from the doc on GRANT)
So if you want to make sure that User1 doesn't have access to tables in schema2 for example, you should run:
REVOKE ALL on schema schema2 from User1;
REVOKE ALL on schema schema2 from Group1; --assuming this is the only group of User1
REVOKE ALL on schema schema2 from PUBLIC;