Is multi-client copy of objects in S3 allowed? - amazon-web-services

We can copy objects to different buckets within the same namespace using:
CopyObjectRequest copyReq = CopyObjectRequest.builder()
        .copySource(encodedUrl)           // URL-encoded "sourceBucket/sourceKey"
        .destinationBucket(toBucket)
        .destinationKey(objectKey)
        .build();
CopyObjectResponse copyRes = s3.copyObject(copyReq);
But if I need to transfer a file to another namespace (i.e. different connection details), how can that be achieved?

When using copyObject(), you must use a single set of credentials that has Read permission on the source bucket and Write permission on the destination bucket.
Assuming that you want to copy the object between buckets owned by different AWS Accounts, then your options are:
Option 1: Push
This option uses credentials from the AWS Account that owns the 'source' bucket. You will need:
Permission on the IAM User/IAM Role to GetObject from the source bucket
A Bucket Policy on the destination bucket that permits PutObject from the IAM User/IAM Role being used (see the sketch after this list)
I recommend you also set ACL=bucket-owner-full-control to give the destination bucket ownership of the object
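A rough illustration of the push setup follows, as a sketch only: the account ID 111111111111, the role name source-copier, and the bucket name destination-bucket are all hypothetical placeholders. The policy is attached by the owner of the destination bucket; the copy itself is then issued with source-account credentials, setting the bucket-owner-full-control ACL as recommended above.
import json
import boto3

dest_bucket = "destination-bucket"   # placeholder name
push_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPushFromSourceAccountRole",
            "Effect": "Allow",
            # hypothetical role in the source account that performs the copy
            "Principal": {"AWS": "arn:aws:iam::111111111111:role/source-copier"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{dest_bucket}/*",
        }
    ],
}

# Run with destination-account credentials, since that account owns the bucket.
boto3.client("s3").put_bucket_policy(Bucket=dest_bucket, Policy=json.dumps(push_policy))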
Option 2: Pull
This option uses credentials from the AWS Account that owns the 'destination' bucket. You will need:
A Bucket Policy on the source bucket that permits GetObject from the IAM User/IAM Role being used (sketched after this list)
Permission on the IAM User/IAM Role to PutObject to the destination bucket
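The matching source-bucket policy for the pull option looks much the same; again the account ID, role name, and bucket name below are hypothetical, and the policy is attached by the source-bucket owner with put_bucket_policy exactly as in the push sketch above.
pull_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPullFromDestinationAccountRole",
            "Effect": "Allow",
            # hypothetical role in the destination account that performs the copy
            "Principal": {"AWS": "arn:aws:iam::222222222222:role/destination-copier"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::source-bucket/*",
        }
    ],
}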
Your situation
You mention "different connection details", so I assume that these are credentials that have been given to you by the owner of the 'other' account. As per the options above:
If your account is the source account, add a Bucket Policy that permits GetObject using those credentials
If your account is the destination account, add a Bucket Policy that permits PutObject using those credentials and use ACL=bucket-owner-full-control

Related

Limiting S3 bucket access to users within one account

I have an IAM user that has full S3 access (i.e. can perform any S3 actions on any S3 resource within the AWS account). This user has created a bucket and put some files in it. The bucket has a policy which just contains an Allow rule that grants access to a different IAM user, in the same AWS account. Public access is turned off for the bucket.
Should the first user be able to access objects in this bucket? If so, is that because they created the bucket, or because they're in the account that owns the bucket? Is it possible to limit access to a bucket for users within the same AWS account?
S3 is one of the few services with resource policies; in the case of S3, they are called bucket policies.
A user in the same account has access to an S3 resource if
nothing explicitly denies the access AND
either the bucket policy grants access OR the user / entity has a policy attached that grants access
If you wanted to restrict a bucket to a single user / entity you would
need to write a bucket policy with a Deny statement that applies to every user except the target one (see the sketch below) AND
either add a statement to the bucket policy or a policy attached to the user / entity granting access to the bucket.
The standard doc for understanding this is the IAM policy evaluation logic documentation. There are other, more complicated ways to achieve your goal using e.g. permission boundaries and SCPs, but they are probably overkill in your situation.
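A rough sketch of such a bucket policy, with a hypothetical bucket name, account ID, and user: the Deny uses NotPrincipal so it applies to everyone except the target user. Be careful testing a policy like this, since it also denies administrators and the account root unless they are added to NotPrincipal.
restrict_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyEveryoneExceptTargetUser",
            "Effect": "Deny",
            "NotPrincipal": {"AWS": "arn:aws:iam::111111111111:user/target-user"},
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        },
        {
            "Sid": "AllowTargetUser",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:user/target-user"},
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        },
    ],
}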

Is it possible to copy S3 bucket content from one bucket to another account's S3 bucket without using a bucket policy?

I want to copy S3 objects to a bucket in a different account, but the requirement is that I can't use a bucket policy.
Is it possible to copy content from one bucket to another without using a bucket policy?
You cannot use native S3 object replication between different accounts without using a bucket policy. As stated in the permissions documentation:
When the source and destination buckets aren't owned by the same accounts, the owner of the destination bucket must also add a bucket policy to grant the owner of the source bucket permissions to perform replication actions
You could write a custom application that uses IAM roles to replicate objects, but this will likely be quite involved as you'll need to track the state of the bucket and all of the objects written to it.
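A minimal sketch of that idea, assuming placeholder bucket names, key, and role ARN: read the object with source-account credentials, then write it with a role assumed in the destination account (trust policy plus identity policy, no bucket policy). Note that this downloads and re-uploads the object through the machine running the code, which is fine for small objects only.
import boto3

SOURCE_BUCKET = "source-bucket"          # owned by Account A (placeholder)
DEST_BUCKET = "destination-bucket"       # owned by Account B (placeholder)
KEY = "path/to/object"
DEST_ROLE_ARN = "arn:aws:iam::222222222222:role/CrossAccountWriter"  # hypothetical role in Account B

# Read the object with Account A credentials (the default credentials here).
source_s3 = boto3.client("s3")
body = source_s3.get_object(Bucket=SOURCE_BUCKET, Key=KEY)["Body"].read()

# Assume a role in Account B; its trust policy must allow Account A, and its
# identity policy must allow s3:PutObject on the destination bucket.
sts = boto3.client("sts")
creds = sts.assume_role(RoleArn=DEST_ROLE_ARN, RoleSessionName="cross-account-copy")["Credentials"]
dest_s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

# Write the object with the assumed-role credentials (no bucket policy involved).
dest_s3.put_object(Bucket=DEST_BUCKET, Key=KEY, Body=body)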
Install the AWS CLI,
run aws configure and set the source bucket's credentials as the default, and
see https://github.com/Shi191099/S3-Copy-old-data-without-Policy.git

Unable to configure SageMaker execution Role with access to S3 bucket in another AWS account

Requirement: Create a SageMaker GroundTruth labeling job with the input/output location pointing to an S3 bucket in another AWS account.
High-level steps followed: let's say Account_A runs the SageMaker GroundTruth labeling job and Account_B owns the S3 bucket.
Create role AmazonSageMaker-ExecutionRole in Account_A with 3 policies attached:
AmazonSageMakerFullAccess
Account_B_S3_AccessPolicy: Policy with necessary S3 permissions to access S3 bucket in Account_B
AssumeRolePolicy: Assume role policy for arn:aws:iam::Account_B:role/Cross-Account-S3-Access-Role
Create role Cross-Account-S3-Access-Role in Account_B with 1 policy and 1 trust relationship attached:
S3_AccessPolicy: Policy with the necessary S3 permissions to access the S3 bucket in this Account_B
TrustRelationship: For principal arn:aws:iam::Account_A:role/AmazonSageMaker-ExecutionRole
Error: While trying to create the SageMaker GroundTruth labeling job with the IAM role AmazonSageMaker-ExecutionRole, it throws the error AccessDenied: Access Denied - The S3 bucket 'Account_B_S3_bucket_name' you entered in Input dataset location cannot be reached. Either the bucket does not exist, or you do not have permission to access it. If the bucket does not exist, update Input dataset location with a new S3 URI. If the bucket exists, give the IAM entity you are using to create this labeling job permission to read and write to this S3 bucket, and try your request again.
In your high-level step 2, the approach should change to using a resource policy on your S3 bucket that allows Account A to write to it, rather than expecting Account A to assume a role in Account B, which I don't believe SageMaker will do. Therefore the general approach is to do the following:
The Account A SageMaker execution role has an IAM policy that allows access to the Account B bucket (basically what you've done).
The Account B bucket is given a resource policy that allows Account A to access it (a sketch follows below).
The following article gives additional help on this topic: How can I provide cross-account access to objects that are in Amazon S3 buckets?
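A rough sketch of such a resource (bucket) policy, assuming a hypothetical Account_A account ID (111111111111) and a placeholder bucket name, run with Account_B credentials since that account owns the bucket:
import json
import boto3

bucket = "account-b-groundtruth-data"   # placeholder bucket in Account_B
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccountAExecutionRoleObjectAccess",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:role/AmazonSageMaker-ExecutionRole"},
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        },
        {
            "Sid": "AllowAccountAExecutionRoleListBucket",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:role/AmazonSageMaker-ExecutionRole"},
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{bucket}",
        },
    ],
}

# Run with Account_B credentials (the bucket owner).
boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))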
Reverted to the original approach, where access for the SageMaker execution role was provided through a direct S3 bucket policy.
While creating the GT job from console:
(i) It expects the user creating the job to also have access to the data in the cross-account S3 bucket; updated the bucket policy to grant access to both the SageMaker execution role and the user.
(ii) It expects the manifest to be in the own account's S3 bucket; it fails with 403 if the manifest is in the cross-account S3 bucket, even though the SageMaker execution role had access to that bucket.
While creating the GT job from the CLI: the above restrictions don't apply and I was able to create the GT job.

Copying files from S3 bucket in one account to S3 bucket in another

I have 2 AWS accounts. account1 has 1 file in bucket1 in the us-east-1 region. I am trying to copy the file from account1 to account2's bucket2 in the us-west-2 region. I have all the required IAM policies in place, and the same credentials work for both accounts. I am using the Python boto3 library.
import boto3

cos = boto3.resource('s3', aws_access_key_id=COMMON_KEY_ID,
                     aws_secret_access_key=COMMON_ACCESS_KEY,
                     endpoint_url="https://s3.us-west-2.amazonaws.com")
copy_source = {
    'Bucket': bucket1,
    'Key': SOURCE_KEY
}
cos.meta.client.copy(copy_source, "bucket2", TARGET_KEY)
As seen, the copy function is executed on a client object pointing to the target account2/us-west-2. How does it get the source files in account1/us-east-1? Am I supposed to provide SourceClient as input to the copy function?
The cleanest way to perform such a copy is:
Use credentials (IAM User or IAM Role) from Account-2 that have GetObject permission on Bucket-1 (or all buckets) and PutObject permissions on Bucket-2
Add a Bucket policy to Bucket-1 that allows the Account-2 credentials to GetObject from the bucket
Send the copy command to the destination region
This method is good because it only requires one set of credentials.
A few things to note:
If you instead copy files using credentials from the source account, be sure to set ACL=bucket-owner-full-control to handover ownership to the destination bucket.
The resource copy() method allows a SourceClient to be specified (see the sketch below). I don't think this is available for the client copy() method.
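A minimal sketch of that, assuming placeholder bucket and key names and two credential configurations: the resource-level copy() runs with destination-account credentials in us-west-2, while SourceClient is a separate client able to GetObject from bucket1 in us-east-1.
import boto3

dest = boto3.resource("s3", region_name="us-west-2")        # destination (Account-2) credentials
src_client = boto3.client("s3", region_name="us-east-1")    # credentials able to read bucket1

copy_source = {"Bucket": "bucket1", "Key": "path/to/object"}
dest.Object("bucket2", "path/to/object").copy(copy_source, SourceClient=src_client)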

Use of s3:PutBucketPolicy

I was trying a few things with AWS S3 bucket policies, and the documentation for put-bucket-policy says that the user should have PutBucketPolicy permission on the bucket and should be the owner.
I do not understand the use of the PutBucketPolicy permission, then.
Also, is the bucket owner given PutBucketPolicy permission on their bucket by default?
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTpolicy.html
The confusion here, I suspect, is related to the fact that users don't own buckets. The "owner" of a bucket is an individual AWS account.
You can't successfully grant PutBucketPolicy to any user in a different AWS account -- only your own account's user(s).
There's an illusion of circular logic here: How can I set a bucket policy... allowing myself to set the bucket policy... unless I am already able to set the bucket policy... which would make it unnecessary to set a bucket policy allowing me to set the bucket policy?
This is not as it seems: the problem is resolved by the fact that IAM user policies can grant a user permission to set the bucket policy, and the root account can do this by default -- which is why you should not use your root account credentials routinely: they are too privileged, if they fall into the wrong hands.
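As a rough illustration of that bootstrap step, here is a sketch (with a hypothetical user name and bucket name) of an IAM identity policy granting s3:PutBucketPolicy, attached as an inline user policy; it would be run with sufficiently privileged credentials in the bucket-owning account.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutBucketPolicy",
            "Resource": "arn:aws:s3:::example-bucket",
        }
    ],
}

# Attach the policy inline to the user who should manage the bucket policy.
boto3.client("iam").put_user_policy(
    UserName="bucket-admin",
    PolicyName="AllowPutBucketPolicy",
    PolicyDocument=json.dumps(policy),
)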