Can I create a database and table in Athena service within my account to access S3 data in another account?
I went over the link below, and as per this documentation I assume that both Amazon Athena and the S3 bucket have to be in the same account, with access then granted to a user in another account.
https://console.aws.amazon.com/athena/home?force&region=us-east-1#query
From Access Control Policies - Amazon Athena:
To run queries in Athena, you must have the appropriate permissions for:
The Athena actions.
The Amazon S3 locations where the underlying data is stored that you are going to query in Athena.
...
So, it seems that the IAM User who is executing the Athena query requires access to the Amazon S3 location.
This could be done by adding a Bucket Policy to the S3 bucket in the other account that permits the IAM User access to the bucket.
To explain better:
Account-A with IAM-User-A and AWS Athena
Account-B with Bucket-B that has a Bucket Policy granting access to IAM-User-A
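A minimal sketch of such a bucket policy, applied in Account-B, using Python and boto3 (the account ID, user name, and bucket name are hypothetical placeholders; adjust the actions to what your queries need):

import json
import boto3

# Hypothetical names matching the example above -- replace with your own.
BUCKET = "bucket-b"
USER_ARN = "arn:aws:iam::111111111111:user/IAM-User-A"  # IAM-User-A in Account-A

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowAthenaUserFromAccountA",
        "Effect": "Allow",
        "Principal": {"AWS": USER_ARN},
        "Action": ["s3:GetObject", "s3:ListBucket", "s3:GetBucketLocation"],
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",    # bucket-level actions (ListBucket)
            f"arn:aws:s3:::{BUCKET}/*",  # object-level actions (GetObject)
        ],
    }],
}

# Run with Account-B credentials that are allowed to manage the bucket.
boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(bucket_policy))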
This answer deals with the additional information that:
A Lambda function in Account-A must be able to create a table in Amazon Athena in Account-B
I haven't tested it, but I think you will require:
Role-A in Account-A for the Lambda function that:
Permits AssumeRole on Role-B
Role-B in Account-B that:
Permits access to Amazon Athena and the source bucket in Amazon S3
Trusts Role-A
The Lambda function will run with Role-A. It will then use credentials from Role-A to call AssumeRole on Role-B. This will return a new set of credentials that can be used to call Amazon Athena in Account-B.
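Untested as well, but the flow inside the Lambda function might look roughly like this sketch (the Role-B ARN, database, DDL, and output location are hypothetical placeholders):

import boto3

def lambda_handler(event, context):
    # The function runs as Role-A; use it to assume Role-B in Account-B.
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::222222222222:role/Role-B",
        RoleSessionName="athena-cross-account",
    )["Credentials"]

    # Build an Athena client that uses Role-B's temporary credentials.
    athena = boto3.client(
        "athena",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

    # Create the table in Account-B via a DDL query.
    athena.start_query_execution(
        QueryString="CREATE EXTERNAL TABLE ...",  # your DDL goes here
        QueryExecutionContext={"Database": "my_database"},
        ResultConfiguration={"OutputLocation": "s3://my-query-results/"},
    )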
Related
Requirement: Create a SageMaker GroundTruth labeling job with input/output locations pointing to an S3 bucket in another AWS account.
High Level Steps Followed: Let's say, Account_A: SageMaker GroundTruth labeling job and Account_B: S3 bucket.
Create role AmazonSageMaker-ExecutionRole in Account_A with 3 policies attached:
AmazonSageMakerFullAccess
Account_B_S3_AccessPolicy: Policy with necessary S3 permissions to access S3 bucket in Account_B
AssumeRolePolicy: Assume role policy for arn:aws:iam::Account_B:role/Cross-Account-S3-Access-Role
Create role Cross-Account-S3-Access-Role in Account_B with 1 policy and 1 trust relationship attached:
S3_AccessPolicy: Policy with the necessary S3 permissions to access the S3 bucket in this Account_B
TrustRelationship: For principal arn:aws:iam::Account_A:role/AmazonSageMaker-ExecutionRole
Error: While trying to create the SageMaker GroundTruth labeling job with IAM role AmazonSageMaker-ExecutionRole, it throws the error AccessDenied: Access Denied - The S3 bucket 'Account_B_S3_bucket_name' you entered in Input dataset location cannot be reached. Either the bucket does not exist, or you do not have permission to access it. If the bucket does not exist, update Input dataset location with a new S3 URI. If the bucket exists, give the IAM entity you are using to create this labeling job permission to read and write to this S3 bucket, and try your request again.
In your high-level step 2, the approach should change to using a Resource Policy on your S3 bucket that allows Account A to write to it, rather than expecting Account A to assume a role in Account B, which I don't believe SageMaker will do. The general approach is therefore:
The Account A SageMaker service has an IAM policy that allows access to the Account B bucket. (Basically what you've done.)
The Account B bucket has a resource policy that allows Account A to access it (a sketch follows below).
The following article gives additional help on this topic: How can I provide cross-account access to objects that are in Amazon S3 buckets?
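For step 2, a minimal sketch of such a resource policy statement (the account ID and bucket name are placeholders; it can be applied with put_bucket_policy as in the earlier sketch):

# Statement for the Account B bucket policy; Ground Truth both reads
# input and writes output, hence Get/Put/List.
statement = {
    "Sid": "AllowSageMakerExecutionRoleFromAccountA",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::ACCOUNT_A_ID:role/AmazonSageMaker-ExecutionRole"},
    "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
    "Resource": [
        "arn:aws:s3:::account-b-s3-bucket-name",
        "arn:aws:s3:::account-b-s3-bucket-name/*",
    ],
}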
I reverted to the original approach, where access for the SageMaker execution role was provided through a direct S3 bucket policy.
While creating the GT job from the console:
(i) It expects the user creating the job to also have access to the data in the cross-account S3 bucket; I updated the bucket policy to grant access to both the SageMaker execution role and the user.
(ii) It expects the manifest to be in the account's own S3 bucket; it fails with a 403 if the manifest is in the cross-account S3 bucket, even though the SageMaker execution role had access to that bucket.
While creating the GT job from the CLI: the above restrictions don't apply, and I was able to create the GT job.
I have manually created a Glue table with an S3 bucket as the source.
The S3 bucket has a bucket policy defined to allow access only from:
root
my user_id
or a role defined for Glue
Now, when a different user who has AWSGlueConsoleFullAccess tries to access the table from the Glue console, they get access denied, although Glue has service access to the S3 bucket.
Request help in understanding this behavior.
Thanks
Can you please look into the policy details of "AWSGlueConsoleFullAccess"? Most probably it's expecting the S3 bucket to have a certain prefix, e.g. "aws-glue-*". In that case, either update your policy or rename your bucket to have the aws-glue- prefix.
"Resource": [
"arn:aws:s3:::aws-glue-*"
Using pyathena and SQLAlchemy, I connect to AWS Athena.
If I use the keys of an AWS admin, all works fine and I can query data.
If I use the keys of an AWS user that has the AmazonAthenaFullAccess and AWSQuicksightAthenaAccess policies, I get access denied.
I have permission to the output S3 location, and Athena is accessing a public data set S3 bucket.
What permissions am I missing?
Thanks
The AmazonAthenaFullAccess policy provides access only to S3 buckets such as "arn:aws:s3:::aws-athena-query-results-*" and "arn:aws:s3:::athena-examples*".
You have 2 options:
Create a new policy and add content from the AmazonAthenaFullAccess policy, but with different S3 resources (see the sketch below).
Add the AmazonS3FullAccess policy to your user, which grants permissions for all your S3 buckets.
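For option 1, the S3 statement of the custom policy might look like this sketch (the action list roughly mirrors the managed policy; the bucket names are placeholders for your query-results bucket and the public data set bucket):

# S3 statement for a custom Athena policy; copy the remaining statements
# from AmazonAthenaFullAccess unchanged.
s3_statement = {
    "Effect": "Allow",
    "Action": [
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListMultipartUploadParts",
        "s3:AbortMultipartUpload",
        "s3:PutObject",
    ],
    "Resource": [
        "arn:aws:s3:::my-athena-query-results",
        "arn:aws:s3:::my-athena-query-results/*",
        "arn:aws:s3:::my-public-dataset-bucket",
        "arn:aws:s3:::my-public-dataset-bucket/*",
    ],
}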
I have a Redshift cluster in AWS account "A" and an S3 bucket in account "B". I need to unload data from the Redshift cluster in account A to the S3 bucket in B.
I've already provided the necessary bucket policy and role policy to unload the data, and the data is getting unloaded successfully. Now the problem is that the owner of the file created by this unload is account A, and the file needs to be used by user B. On trying to access that object, I get access denied. How do I solve this?
PS: ListBucket and GetObject permissions have been granted by the Redshift IAM policy.
This is what worked for me: chaining IAM roles.
For example, suppose Company A wants to access data in an Amazon S3 bucket that belongs to Company B. Company A creates an AWS service role for Amazon Redshift named RoleA and attaches it to their cluster. Company B creates a role named RoleB that's authorized to access the data in the Company B bucket. To access the data in the Company B bucket, Company A runs an UNLOAD command using an iam_role parameter that chains RoleA and RoleB. For the duration of the UNLOAD operation, RoleA temporarily assumes RoleB to access the Amazon S3 bucket.
More details here: https://docs.aws.amazon.com/redshift/latest/mgmt/authorizing-redshift-service.html#authorizing-redshift-service-chaining-roles
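Because the final credentials in the chain belong to RoleB in Account B, the unloaded objects end up owned by Account B, which avoids the ownership problem described above. A sketch of the chained UNLOAD, run against the Account A cluster using the redshift_connector Python driver (endpoint, credentials, table, bucket, and role ARNs are placeholders):

import redshift_connector

conn = redshift_connector.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    database="dev",
    user="awsuser",
    password="...",  # placeholder
)

# Chain RoleA (attached to the cluster) and RoleB (in Account B) by
# passing both ARNs, comma-separated, in a single IAM_ROLE string.
unload_sql = """
UNLOAD ('SELECT * FROM my_table')
TO 's3://account-b-bucket/prefix/'
IAM_ROLE 'arn:aws:iam::111111111111:role/RoleA,arn:aws:iam::222222222222:role/RoleB'
"""

cur = conn.cursor()
cur.execute(unload_sql)
conn.commit()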
Everywhere I look, an IAM Role is created on an EC2 instance and given permissions like S3FullAccess.
Is it possible to create an IAM Role for S3 instead of EC2, and attach that Role to an S3 bucket?
I created an IAM Role for S3 with S3FullAccess, but I'm not able to attach it to an existing bucket or to create a new bucket with this Role. Please help.
IAM (Identity and Access Management) Roles are a way of assigning permissions to applications, services, EC2 instances, etc.
Examples:
When a Role is assigned to an EC2 instance, credentials are passed to software running on the instance so that it can call AWS services.
When a Role is assigned to an Amazon Redshift cluster, it can use the permissions within the Role to access data stored in Amazon S3 buckets.
When a Role is assigned to an AWS Lambda function, it gives the function permission to call other AWS services such as S3, DynamoDB or Kinesis.
In all these cases, something is using the credentials to call AWS APIs.
Amazon S3 itself never requires credentials to call an AWS API. While it can call other services for Event Notifications, the permissions are actually put on the receiving service rather than on S3 as the requesting service.
Thus, there is never any need to attach a Role to an Amazon S3 bucket.
Roles do not apply to S3 as they do with EC2.
Assuming @Sunil is asking if we can restrict access to data in S3:
In that case, we can either set S3 ACLs on the buckets or the objects in them, or set S3 bucket policies.
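For example, a minimal sketch of the ACL route with boto3, granting the bucket owner full control of a single object (the bucket and key are placeholders):

import boto3

s3 = boto3.client("s3")

# Grant the bucket owner full control of one object via a canned ACL.
s3.put_object_acl(
    Bucket="my-bucket",
    Key="path/to/object.csv",
    ACL="bucket-owner-full-control",
)

The bucket-policy route works as in the earlier sketches: attach a policy to the bucket itself that allows or denies specific principals.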