I am trying to build an AppSync API connected to a DynamoDB table in AWS using the CDK in Python. I want this to be a read-only API with no create, delete, or update. In my stack I add the AppSync API:
# AppSync API for data and data catalogue queries
api = _appsync.GraphqlApi(self,
    'DataQueryAPI',
    name='dataqueryapi',
    log_config=_appsync.LogConfig(field_log_level=_appsync.FieldLogLevel.ALL),
    schema=_appsync.Schema.from_asset('graphql/schema.graphql')
)
I then add the DynamoDB table as a data source as follows:
# Data Catalogue DynamoDB Table as AppSync Data Source
data_catalogue_api_ds = api.add_dynamo_db_data_source(
    'DataCatalogueDS',
    data_catalogue_dynamodb
)
I later add some resolvers with mapping templates, but even with just the above, running cdk diff shows permission changes that appear to grant AppSync full access to the DynamoDB table.
I only want this to be a read-only API, so the question is: how can I restrict permissions so that the AppSync API can only read from the table?
What I tried was to add a role that would explicitly grant query permissions, in the hope that this would prevent the creation of the wider set of permissions. It didn't have that effect, and I'm not really sure where I was going with it or whether it was on the right track:
role = _iam.Role(self,
    "Role",
    assumed_by=_iam.ServicePrincipal("appsync.amazonaws.com")
)
api.grant_query(role, "getData")
Following a comment on this question, I have swapped add_dynamo_db_data_source for DynamoDbDataSource, as it has a read_only_access parameter. So I am now using:
data_catalogue_api_ds = _appsync.DynamoDbDataSource(self,
    'DataCatalogueDS',
    table=data_catalogue_dynamodb,
    read_only_access=True,
    api=api
)
This seems to then grant just read permissions.
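For reference, if you also want to pin down (and audit) the exact role the data source assumes, DynamoDbDataSource additionally accepts a service_role. A minimal sketch, assuming the same v1 CDK modules and _appsync/_iam aliases as above; the construct and role ids are made up:

# Supply our own service role; with read_only_access=True the construct
# grants only the DynamoDB read actions (GetItem, Query, Scan,
# BatchGetItem, ...) to this role.
ds_role = _iam.Role(self,
    'DataCatalogueDSRole',
    assumed_by=_iam.ServicePrincipal('appsync.amazonaws.com')
)

data_catalogue_api_ds = _appsync.DynamoDbDataSource(self,
    'DataCatalogueDS',
    table=data_catalogue_dynamodb,
    read_only_access=True,
    service_role=ds_role,
    api=api
)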
I'm trying to query a partitioned table based on an S3 bucket from Lambda
and get the following error:
But when I run the same query via Athena, it works well.
My Lambda role includes full S3 permissions for all resources.
By the way, I was given access to another S3 bucket (in another account). It is not my bucket, but I have read and list permissions, and using Lambda I'm able to create the partitioned table on their bucket.
Using Lambda, this query works:
ALTER TABLE access_Partition ADD PARTITION
(year = '2022', month = '03',day= '15' ,hour = '01') LOCATION 's3://sddds/2022/03/15/01/';
But a SELECT query on the above table (after its creation) gets a permission error.
(When I open the executed query in Athena it is marked as failed, but I can run it there successfully.)
select * from access_Partition
Please advise!
Amazon Athena uses the permissions of the entity making the call to access Amazon S3. So, when you run an Athena query in the console, it is using permissions from your IAM User. When it is run from Lambda, it uses the permissions from the IAM Role associated with the Lambda function.
When this command is run:
ALTER TABLE access_Partition ADD PARTITION
(year = '2022', month = '03',day= '15' ,hour = '01') LOCATION 's3://sddds/2022/03/15/01/';
it is updating information (metadata) in the data catalog used in Athena in your own account. It is not actually accessing the bucket until a query is run.
The fact that the query fails when it is run suggests that the IAM Role does not have permission to access the bucket in the other AWS Account.
You should add a Bucket Policy on the S3 bucket in the other account that grants access permission for the IAM Role used by the Lambda function.
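A minimal boto3 sketch of such a bucket policy, to be run with credentials in the bucket-owning account; the role ARN is a placeholder for the Lambda function's actual role, and the bucket name is taken from the question:

import json
import boto3

# Placeholder for the Lambda function's execution role in your account.
LAMBDA_ROLE_ARN = 'arn:aws:iam::111111111111:role/my-lambda-role'

bucket_policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Sid': 'AllowAthenaReadsFromLambdaRole',
        'Effect': 'Allow',
        'Principal': {'AWS': LAMBDA_ROLE_ARN},
        'Action': ['s3:GetObject', 's3:ListBucket'],
        'Resource': [
            'arn:aws:s3:::sddds',
            'arn:aws:s3:::sddds/*',
        ],
    }],
}

boto3.client('s3').put_bucket_policy(Bucket='sddds',
                                     Policy=json.dumps(bucket_policy))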
I want to fully automate the creation of new roles in AWS and connect them with Snowflake. To connect Snowflake with AWS, you must edit the role's trust relationship and paste in Snowflake's STORAGE_AWS_EXTERNAL_ID.
Is there any way to do this fully automatically?
How about creating a batch script using the AWS CLI and SnowSQL, following the steps provided in the Snowflake user guide (a Python sketch of the same flow follows the list below):
1. Create your AWS IAM policy and get the policy's ARN.
2. Create an AWS IAM role linked to this policy and get the role's ARN.
3. Create the Snowflake storage integration linked to this role, and get STORAGE_AWS_IAM_USER_ARN and STORAGE_AWS_EXTERNAL_ID from the DESC INTEGRATION command.
4. Update the AWS IAM role's trust policy with the previous values (i.e. Snowflake's user ARN and external ID).
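A minimal Python sketch of the same flow, using boto3 and the Snowflake Python connector; the bucket, account id, resource names and connection details are all placeholders:

import json
import boto3
import snowflake.connector  # pip install snowflake-connector-python

iam = boto3.client('iam')

# 1. IAM policy allowing Snowflake to read the stage bucket.
policy = iam.create_policy(
    PolicyName='snowflake-stage-access',
    PolicyDocument=json.dumps({
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Action': ['s3:GetObject', 's3:GetObjectVersion', 's3:ListBucket'],
            'Resource': ['arn:aws:s3:::my-stage-bucket',
                         'arn:aws:s3:::my-stage-bucket/*'],
        }],
    }),
)

# 2. IAM role with a temporary trust policy; it is rewritten in step 4.
role = iam.create_role(
    RoleName='snowflake-storage-role',
    AssumeRolePolicyDocument=json.dumps({
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Principal': {'AWS': 'arn:aws:iam::111111111111:root'},  # own account
            'Action': 'sts:AssumeRole',
        }],
    }),
)
iam.attach_role_policy(RoleName='snowflake-storage-role',
                       PolicyArn=policy['Policy']['Arn'])

# 3. Storage integration in Snowflake, pointing at the new role.
conn = snowflake.connector.connect(account='...', user='...', password='...')
cur = conn.cursor()
cur.execute(f"""
    CREATE STORAGE INTEGRATION s3_int
      TYPE = EXTERNAL_STAGE
      STORAGE_PROVIDER = 'S3'
      ENABLED = TRUE
      STORAGE_AWS_ROLE_ARN = '{role['Role']['Arn']}'
      STORAGE_ALLOWED_LOCATIONS = ('s3://my-stage-bucket/')
""")
cur.execute('DESC INTEGRATION s3_int')
props = {row[0]: row[2] for row in cur.fetchall()}  # property -> property_value

# 4. Rewrite the role's trust policy with Snowflake's IAM user and external ID.
iam.update_assume_role_policy(
    RoleName='snowflake-storage-role',
    PolicyDocument=json.dumps({
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Principal': {'AWS': props['STORAGE_AWS_IAM_USER_ARN']},
            'Action': 'sts:AssumeRole',
            'Condition': {'StringEquals': {
                'sts:ExternalId': props['STORAGE_AWS_EXTERNAL_ID']}},
        }],
    }),
)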
I want to restrict a user from executing INSERT queries on a master table (not a CTAS table) in Athena.
Is there a way I can achieve this?
The user will execute queries from Lambda.
In IAM permission policies, Athena only exposes coarse actions such as StartQueryExecution and StopQueryExecution, so there is no differentiation by the type of SQL command (DDL, DML) being executed.
However, I think you can overcome this by denying permissions on Glue and S3, so that Athena queries that try to execute INSERTs will fail (see the sketch after this list):
Glue permissions can be managed at the catalog, database and table level; some examples can be found in AWS's Identity-Based Policies (IAM Policies) for Access Control for Glue.
Relevant Glue actions to deny: BatchCreatePartition, CreatePartition, UpdatePartition; see Actions, resources, and condition keys for AWS Glue.
On S3 you need to deny PutObject (or Put*) for the S3 location of the specific table; see Actions defined by Amazon S3. Again, this can be scoped down to an object level within a bucket.
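A sketch of what that could look like as a customer managed policy created with boto3; the account id, region, database/table names and S3 location are all placeholders:

import json
import boto3

deny_insert_policy = {
    'Version': '2012-10-17',
    'Statement': [
        {
            'Sid': 'DenyPartitionWrites',
            'Effect': 'Deny',
            'Action': [
                'glue:BatchCreatePartition',
                'glue:CreatePartition',
                'glue:UpdatePartition',
            ],
            'Resource': [
                'arn:aws:glue:us-east-1:111111111111:catalog',
                'arn:aws:glue:us-east-1:111111111111:database/mydb',
                'arn:aws:glue:us-east-1:111111111111:table/mydb/master_table',
            ],
        },
        {
            'Sid': 'DenyWritesToTableLocation',
            'Effect': 'Deny',
            'Action': 's3:PutObject',
            'Resource': 'arn:aws:s3:::my-table-bucket/master_table/*',
        },
    ],
}

iam = boto3.client('iam')
iam.create_policy(PolicyName='deny-athena-inserts',
                  PolicyDocument=json.dumps(deny_insert_policy))

Attach the resulting policy to the Lambda function's execution role; an explicit Deny overrides any Allow the role already carries.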
I am trying to use an AWS Glue crawler on an S3 bucket to populate a Glue database. I run the Create Crawler wizard, select my data source (the S3 bucket with the Avro files), have it create the IAM role, and run it, and I get the following error:
Database does not exist or principal is not authorized to create tables. (Database name: zzz-db, Table name: avroavro_all) (Service: AWSGlue; Status Code: 400; Error Code: AccessDeniedException; Request ID: 78fc18e4-c383-11e9-a86f-736a16f57a42). For more information, see Setting up IAM Permissions in the Developer Guide (http://docs.aws.amazon.com/glue/latest/dg/getting-started-access.html).
I tried to create this table in a new blank database (as opposed to an existing one with tables), I tried prefixing the names, I tried sourcing different schemas, and I tried using an existing role with Admin access. I thought the latter would work, but I keep getting the same error and have no idea why.
To be explicit, the service role I created has several policies that I assume are permissive enough to create tables:
The logs are vanilla:
19:52:52 [10cb3191-9785-49dc-8935-fb02dcbd69a3] BENCHMARK : Running Start Crawl for Crawler avro
19:53:22 [10cb3191-9785-49dc-8935-fb02dcbd69a3] BENCHMARK : Classification complete, writing results to database zzz-db
19:53:22 [10cb3191-9785-49dc-8935-fb02dcbd69a3] INFO : Crawler configured with SchemaChangePolicy {"UpdateBehavior":"UPDATE_IN_DATABASE","DeleteBehavior":"DEPRECATE_IN_DATABASE"}.
19:53:34 [10cb3191-9785-49dc-8935-fb02dcbd69a3] ERROR : Insufficient Lake Formation permission(s) on s3://zzz-data/avro-all/ (Database name: zzz-db, Table name: avroavro_all) (Service: AWSGlue; Status Code: 400; Error Code: AccessDeniedException; Request ID: 31481e7e-c384-11e9-a6e1-e78dc8223fae). For more information, see Setting up IAM Permissions in the Developer Guide (http://docs.aws.amazon.com/glu
19:54:44 [10cb3191-9785-49dc-8935-fb02dcbd69a3] BENCHMARK : Crawler has finished running and is in state READY
I had the same problem when I set up and ran a new AWS Glue crawler after enabling Lake Formation (in the same AWS account). I had been running Glue crawlers for a long time and was stumped when I saw this new error.
After some trial and error, I found that the root cause is that enabling Lake Formation adds an additional layer of permissions on any new Glue database created via the Glue crawler, and on any resource (Glue catalog, S3, etc.) that you add to the Lake Formation service.
To fix this problem, you have to grant the crawler's IAM role a proper set of Lake Formation permissions (CRUD) for the database.
You can manage these permissions in the AWS Lake Formation console (UI) under Permissions > Data permissions, or via the AWS CLI lakeformation commands.
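For example, with boto3; the role ARN is a placeholder for the crawler's actual role, and the database name is taken from the question:

import boto3

lf = boto3.client('lakeformation')

CRAWLER_ROLE_ARN = 'arn:aws:iam::111111111111:role/AWSGlueServiceRole-avro'

lf.grant_permissions(
    Principal={'DataLakePrincipalIdentifier': CRAWLER_ROLE_ARN},
    Resource={'Database': {'Name': 'zzz-db'}},
    Permissions=['CREATE_TABLE', 'ALTER', 'DROP', 'DESCRIBE'],
)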
I solved this problem by adding a grant in AWS Lake Formation -> Permissions -> Data locations. (Do not forget to add a forward slash (/) after the bucket name.)
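The equivalent grant via boto3 uses a DataLocation resource rather than a Database one; a sketch with a placeholder role ARN (the trailing-slash tip applies to the console's path field, while the API takes the bucket's ARN):

import boto3

CRAWLER_ROLE_ARN = 'arn:aws:iam::111111111111:role/AWSGlueServiceRole-avro'

boto3.client('lakeformation').grant_permissions(
    Principal={'DataLakePrincipalIdentifier': CRAWLER_ROLE_ARN},
    Resource={'DataLocation': {'ResourceArn': 'arn:aws:s3:::zzz-data'}},
    Permissions=['DATA_LOCATION_ACCESS'],
)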
I had to add the custom role I created for Glue to the "Data lake administrators" grantees.
(Note: this just resolves the crawler's denied access; there may be a less privileged way to do it.)
Make sure you grant the necessary permissions to your crawler's IAM role in this path:
Lake Formation -> Permissions -> Data lake permissions
(Grant the related Glue database permissions to your crawler's IAM role.)
I created a user with the following permissions. When I log in as this user, I don't see any tables under DynamoDB. Do I need to add any additional permissions?
AWSDataPipeline_FullAccess
AmazonDynamoDBReadOnlyAccess
AmazonSNSReadOnlyAccess
IAMReadOnlyAccess
AWSLambdaReadOnlyAccess
CloudWatchReadOnlyAccess
It works fine for me!
Here's what I did:
Created a table in DynamoDB using an existing user
Created a new IAM User with the AmazonDynamoDBReadOnlyAccess managed policy
Logged in as the new user and went to the DynamoDB console
Could view the list of tables and the contents of tables
It turns out the region was an issue. After I switched to the right region, the tables started showing. Thanks all.
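For anyone hitting the same thing, a quick way to confirm which region the tables live in is to list them per region with boto3; the region list here is illustrative:

import boto3

# List DynamoDB tables in each candidate region to find where they live.
for region in ['us-east-1', 'us-west-2', 'eu-west-1']:
    tables = boto3.client('dynamodb', region_name=region).list_tables()
    print(region, tables['TableNames'])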