I am trying to access an S3 bucket in a different AWS account using Athena, but I'm getting the following error message:
User does not have access to target.
I have added all the necessary policies to the S3 bucket in the other AWS account.
When I add the S3 location as a data store in my account, I am unable to crawl the bucket located in the other account.
Is there anything that I need to do from my side in terms of policies?
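For cross-account access like this, the bucket in the other account generally needs a resource (bucket) policy that grants your account's principals the relevant S3 actions. A minimal sketch, assuming your account ID is 111122223333 and the bucket is example-bucket (both placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CrossAccountAthenaRead",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": [
        "s3:GetObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}

On your side, the IAM principal running the Athena query (and the Glue crawler's role, if you are crawling the bucket) also needs matching S3 permissions in its own identity policy; the bucket policy alone only covers the granting account's side.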
I'm trying to understand the AWS Amplify documentation section "Using Amazon S3". It says:
If you set up your Cognito resources manually, the roles will need to be given permission to access the S3 bucket.
There are two roles created by Cognito: an Auth_Role that grants signed-in-user-level bucket access and an Unauth_Role that allows unauthenticated access to resources. Attach the corresponding policies to each role for proper S3 access. Replace {enter bucket name} with the correct S3 bucket.
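A minimal sketch of the kind of policy those docs provide, assuming the usual Amplify public/protected/private prefix layout ({enter bucket name} is the docs' placeholder; the exact actions and prefixes depend on your storage configuration):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::{enter bucket name}/public/*",
        "arn:aws:s3:::{enter bucket name}/protected/${cognito-identity.amazonaws.com:sub}/*",
        "arn:aws:s3:::{enter bucket name}/private/${cognito-identity.amazonaws.com:sub}/*"
      ]
    }
  ]
}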
And then the docs provide JSON examples of Policies for Auth_Role and Unauth_Role. What's confusing me is that when I go into my Roles in my IAM console, I have the following:
amplify--dev-153155-authRole (contains AppSync resources)
amplify--dev-153155-authRole-idp (log group resources)
amplify--dev-153155-unauthRole (empty)
None of these contains anything like the JSON examples. The "...authRole" policy contains actions/resources concerning AppSync, but nothing to do with S3; likewise for the other two. I expected to find permissions that allow my Amplify app to get/store S3 items, otherwise how is it currently able to do that?
So my questions are:
How do I create and attach the policies provided in the above documentation? Do I simply paste the JSON into new policies in my IAM console, and attach them to the Auth_Role?
Where are the default permissions stored? I have set up an Amplify app and added S3 with amplify add storage. I can connect to the S3 bucket to add and retrieve files, so presumably there must be existing policies. But my Auth_Role contains no policies that reference S3?
Requirement: Create a SageMaker GroundTruth labeling job with the input/output location pointing to an S3 bucket in another AWS account.
High-Level Steps Followed: Let's say Account_A has the SageMaker GroundTruth labeling job and Account_B has the S3 bucket.
Create role AmazonSageMaker-ExecutionRole in Account_A with 3 policies attached:
AmazonSageMakerFullAccess
Account_B_S3_AccessPolicy: Policy with necessary S3 permissions to access S3 bucket in Account_B
AssumeRolePolicy: Assume role policy for arn:aws:iam::Account_B:role/Cross-Account-S3-Access-Role
Create role Cross-Account-S3-Access-Role in Account_B with 1 policy and 1 trust relationship attached:
S3_AccessPolicy: Policy with the necessary S3 permissions to access the S3 bucket in this Account_B
TrustRelationship: For principal arn:aws:iam::Account_A:role/AmazonSageMaker-ExecutionRole
Error: While trying to create the SageMaker GroundTruth labeling job with the IAM role AmazonSageMaker-ExecutionRole, it throws the error AccessDenied: Access Denied - The S3 bucket 'Account_B_S3_bucket_name' you entered in Input dataset location cannot be reached. Either the bucket does not exist, or you do not have permission to access it. If the bucket does not exist, update Input dataset location with a new S3 URI. If the bucket exists, give the IAM entity you are using to create this labeling job permission to read and write to this S3 bucket, and try your request again.
In your high-level step 2, the approach should change to using a resource policy on your S3 bucket that allows Account A to write to it, rather than expecting Account A to assume a role in Account B, which I don't believe SageMaker will do. Therefore the general approach is the following:
The Account A SageMaker execution role has an IAM policy that allows access to the Account B bucket (basically what you've done).
The Account B bucket has a resource policy that allows Account A to access it.
The following article gives additional help on this topic: How can I provide cross-account access to objects that are in Amazon S3 buckets?
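Concretely, a sketch of such a resource policy on the Account B bucket, reusing the placeholder names from the question:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSageMakerExecutionRoleFromAccountA",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::Account_A:role/AmazonSageMaker-ExecutionRole"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::Account_B_S3_bucket_name",
        "arn:aws:s3:::Account_B_S3_bucket_name/*"
      ]
    }
  ]
}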
Reverted to the original approach, where access for the SageMaker execution role was provided through a direct S3 bucket policy.
While creating the GT job from the console:
(i) It expects the user creating the job to also have access to the data in the cross-account S3 bucket; I updated the bucket policy to grant access to both the SageMaker execution role and the user (see the sketch after this list).
(ii) It expects the manifest to be in the own account's S3 bucket; it fails with a 403 if the manifest is in the cross-account S3 bucket, even though the SageMaker execution role had access to that bucket.
While creating the GT job from the CLI: the above restrictions don't apply, and I was able to create the GT job.
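For finding (i), the bucket policy ended up needing both principals. A sketch, where the user ARN is hypothetical:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::Account_A:role/AmazonSageMaker-ExecutionRole",
          "arn:aws:iam::Account_A:user/console-user"
        ]
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::Account_B_S3_bucket_name",
        "arn:aws:s3:::Account_B_S3_bucket_name/*"
      ]
    }
  ]
}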
I found an issue with an S3 bucket.
The bucket doesn't have any ACL associated with it, and the user that created the bucket was deleted.
How is it possible to add an ACL to the bucket to get control back?
For any command using the AWS CLI, the result is always the same: An error occurred (AccessDenied) when calling the operation: Access Denied
Access is also denied in the AWS console.
First things first: an AccessDenied error in AWS indicates that your AWS user does not have access to the S3 service, so first make sure your IAM user has S3 permissions.
Also, since you are using the CLI, make sure your AWS access key and secret are still configured correctly locally.
Now the interesting use case:
You have access to the S3 service but cannot access the bucket, because the bucket has some policies set.
In this case, if the user who set the policies has left and no other user can access the bucket, the best way is to ask the AWS root account holder to change the bucket permissions.
An IAM user with the managed policy named AdministratorAccess should be able to access all S3 buckets within the same AWS account, unless you have applied some unusual S3 bucket policy or ACL, in which case you might need to log in as the account's root user and modify that bucket policy or ACL.
See Why am I getting an "Access Denied" error from Amazon S3 when I try to modify a bucket policy?
I just posted this on a related thread...
https://stackoverflow.com/a/73977525/999943
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-owner-full-control-acl/
Basically, when putting objects from a non-bucket-owner account, you need to set the ACL at the same time:
--acl bucket-owner-full-control
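If you own the bucket, a common companion pattern (described in the linked AWS article) is a bucket policy that rejects any put that does not set this ACL. A sketch, with example-bucket as a placeholder:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RequireBucketOwnerFullControl",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    }
  ]
}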
I have a Redshift cluster in AWS account "A" and an S3 bucket in account "B". I need to unload data from the Redshift cluster in account A to the S3 bucket in B.
I've already provided the necessary bucket policy and role policy to unload the data, and the data is getting unloaded successfully. Now the problem is that the owner of the file created by this unload is account A, while the file needs to be used by user B. On trying to access that object, I am getting access denied. How do I solve this?
PS: ListBucket and GetObject permissions have been granted by the Redshift IAM policy.
This is what worked for me: chaining IAM roles.
For example, suppose Company A wants to access data in an Amazon S3 bucket that belongs to Company B. Company A creates an AWS service role for Amazon Redshift named RoleA and attaches it to their cluster. Company B creates a role named RoleB that's authorized to access the data in the Company B bucket. To access the data in the Company B bucket, Company A runs an UNLOAD command using an iam_role parameter that chains RoleA and RoleB. For the duration of the UNLOAD operation, RoleA temporarily assumes RoleB to access the Amazon S3 bucket.
More details here: https://docs.aws.amazon.com/redshift/latest/mgmt/authorizing-redshift-service.html#authorizing-redshift-service-chaining-roles
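In the UNLOAD statement, the chain is passed as a comma-separated list of role ARNs, e.g. iam_role 'arn:aws:iam::<Account_A_ID>:role/RoleA,arn:aws:iam::<Account_B_ID>:role/RoleB' (account IDs are placeholders). For the chain to work, RoleB's trust policy must let RoleA assume it; a sketch:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Account_A_ID>:role/RoleA"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}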
When a user has resource-based permissions for a resource but does not have user-based permissions for that service, can they use that service?
Example: user Jack has a resource-based permission to use the S3 bucket 'jamm', but Jack has no permissions for the S3 service itself. Can Jack use the S3 bucket?
If you don't have permissions to access the S3 service, then you cannot use it at all.
In order to access any S3 bucket, you must have permissions to execute S3 commands such as s3:GetObject. These permissions tell AWS which commands the user is allowed to execute. Anything not explicitly allowed is automatically denied.
The S3 bucket policy (your resource-level permissions) instructs the S3 service which users are allowed to access the bucket. But that only takes effect once the user has been given the permissions to execute the S3 commands used to access the bucket.
So you need to:
Give the user permissions to execute the S3 commands needed to access the bucket (the default is none), and
Give the bucket a policy to restrict which users can access the bucket (by default, any principal in the AWS account whose IAM policy allows it); see the sketch below.
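To illustrate the second point with the example above, a sketch of a bucket policy on 'jamm' that admits only Jack, assuming a placeholder account ID of 111122223333:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:user/Jack" },
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::jamm",
        "arn:aws:s3:::jamm/*"
      ]
    }
  ]
}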
It is possible to restrict some S3 commands to your bucket, so that the user has permission to execute s3:GetObject (for example), but only on your bucket.
But some commands, such as s3:ListAllMyBuckets, cannot be restricted this way.
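To illustrate the first point, a sketch of an identity-based policy for Jack that permits reads only on 'jamm'; note that s3:ListAllMyBuckets must be granted on all resources ("*"), since it cannot be scoped to one bucket:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::jamm/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::jamm"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    }
  ]
}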