I'm using a Lambda function to modify some CSV files from an S3 bucket and write them to a different S3 bucket using the AWS JavaScript SDK. The buckets for getObject and putObject are in different regions. The Lambda is in the same region as the destination bucket, but the modified files in the destination bucket have this error in them:
AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-2'
Whenever the source and destination buckets are in the same region, I get the properly modified files.
What changes do I need to make for this to work when the source and destination buckets are in different regions?
Thanks
The S3 service is global, but each bucket is regional, which means that when you need to use a bucket you have to address it in the region where the bucket exists.
If I understood correctly, your source bucket is in us-west-2 and your destination bucket is in us-east-1.
So you need to use something like this:
s3_source = boto3.client('s3', region_name='us-west-2')       # client bound to the source bucket's region
# ... your logic to get and handle the file ...
s3_destination = boto3.client('s3', region_name='us-east-1')  # client bound to the destination bucket's region
# ... your logic to write the file ...
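For example, here is a minimal end-to-end sketch of that idea. The bucket names, key, and the transform step are placeholders, not your actual setup:
import boto3

# Placeholder names; replace with your own buckets and key.
SOURCE_BUCKET = 'my-source-bucket'       # assumed to live in us-west-2
DEST_BUCKET = 'my-destination-bucket'    # assumed to live in us-east-1
KEY = 'data/input.csv'

# One client per region, so each request is signed for the right region.
s3_source = boto3.client('s3', region_name='us-west-2')
s3_destination = boto3.client('s3', region_name='us-east-1')

def handler(event, context):
    # Read the CSV from the source bucket using the source-region client
    obj = s3_source.get_object(Bucket=SOURCE_BUCKET, Key=KEY)
    body = obj['Body'].read().decode('utf-8')

    # ... your CSV modification logic goes here ...
    modified = body  # placeholder: no-op transform

    # Write the result to the destination bucket using the destination-region client
    s3_destination.put_object(Bucket=DEST_BUCKET, Key=KEY, Body=modified.encode('utf-8'))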
I have read-only access to a source S3 bucket. I cannot change permissions or anything of the sort on this source account and bucket. I do not own this account.
I would like to sync all files from the source bucket to my destination bucket. I own the account that contains the destination bucket.
I have separate sets of credentials for the source bucket that I do not own and the destination bucket that I do own.
Is there a way to use the AWS CLI to sync between buckets using two sets of credentials?
aws s3 sync s3://source-bucket/ --profile source-profile s3://destination-bucket --profile default
If not, how can I set up permissions on my own destination bucket so that I can sync with the CLI?
The built-in S3 copy mechanism, at the API level, requires the request be submitted to the target bucket, identifying the source bucket and object inside the request, and using a single set of credentials that has both authorization to read from the source and write to the target.
This is the only supported way to copy from one bucket to another without downloading and uploading the files.
The standard solution is found at http://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html.
You can grant their user access to write to your bucket, or they can grant your user access to their bucket... but copying from one bucket to another without downloading and re-uploading the files is impossible without the cooperation of both account owners to establish a single set of credentials with both privileges.
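To make the mechanics concrete, here is a rough boto3 sketch of that single-credential, server-side copy. The profile and bucket names are placeholders, and the cross-account grant (e.g. a bucket policy on the source bucket) is assumed to already be in place:
import boto3

# One set of credentials that can both read the source bucket and write the destination bucket.
s3 = boto3.Session(profile_name='default').client('s3')

# The request goes to the destination bucket and names the source object inside it,
# so the data never passes through your machine.
s3.copy_object(
    CopySource={'Bucket': 'source-bucket', 'Key': 'path/to/object'},
    Bucket='destination-bucket',
    Key='path/to/object',
)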
Use rclone for this. It's convenient, but I believe it does download and re-upload the files, which makes it slow for large data volumes.
rclone --config=creds.cfg copy source:bucket-name1/path/ destination:bucket-name2/path/
creds.cfg:
[source]
type = s3
provider = AWS
access_key_id = AAA
secret_access_key = bbb
[destination]
type = s3
provider = AWS
access_key_id = CCC
secret_access_key = ddd
For this use case, I would consider Cross-Region Replication Where Source and Destination Buckets Are Owned by Different AWS Accounts
... you set up cross-region replication on the source bucket owned by one account to replicate objects in a destination bucket owned by another account.
The process is the same as setting up cross-region replication when both buckets are owned by the same account, except that you do one extra step: the destination bucket owner must create a bucket policy granting the source bucket owner permission for replication actions.
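As a rough sketch of that extra step, the destination bucket owner could attach a policy along these lines. The account ID, role name, bucket name, and the exact action list are placeholder assumptions; the AWS walkthrough for cross-account replication is authoritative:
import json
import boto3

DEST_BUCKET = 'destination-bucket'  # placeholder
# Placeholder ARN of the replication role in the source bucket owner's account.
SOURCE_REPLICATION_ROLE = 'arn:aws:iam::111111111111:role/replication-role'

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowReplicationFromSourceAccount",
        "Effect": "Allow",
        "Principal": {"AWS": SOURCE_REPLICATION_ROLE},
        "Action": ["s3:ReplicateObject", "s3:ReplicateDelete"],
        "Resource": f"arn:aws:s3:::{DEST_BUCKET}/*",
    }],
}

# Run with the destination bucket owner's credentials.
boto3.client('s3').put_bucket_policy(Bucket=DEST_BUCKET, Policy=json.dumps(policy))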
I'm trying to use an S3 bucket to upload files to as part of a build. It is configured to serve files as a static site, and the content is protected using a Lambda and CloudFront. When I manually create files in the bucket they are all visible and everything is happy, but the files created by the upload are not available, resulting in an access denied response.
The user that's pushing to the bucket does not belong to the same AWS environment, but it has been set up with an ACL that allows it to push to the bucket, and the bucket has a policy that allows that user to push to it.
The command that I'm using is:
aws s3 sync --no-progress --delete docs/_build/html "s3://my-bucket" --acl bucket-owner-full-control
Is there something else that I can try that basically uses the bucket permissions for anything that's created?
According to OP's feedback in the comment section, setting Object Ownership to Bucket owner preferred fixed the issue.
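For reference, a minimal sketch of setting that option programmatically; the bucket name is a placeholder, and flipping the Object Ownership setting in the console achieves the same thing:
import boto3

s3 = boto3.client('s3')

# Have the bucket owner take ownership of new objects that are uploaded
# with the bucket-owner-full-control ACL.
s3.put_bucket_ownership_controls(
    Bucket='my-bucket',  # placeholder bucket name
    OwnershipControls={'Rules': [{'ObjectOwnership': 'BucketOwnerPreferred'}]},
)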
I have a use case for AWS Lambda to copy files/objects from one S3 bucket to another. In this use case the source S3 bucket is in a separate AWS account (say Account 1), where the provider has only given us an Access Key and Secret Access Key. Our Lambda runs in Account 2, and the destination bucket can be either in Account 2 or some other Account 3 altogether, which can be accessed using an IAM role. The setup is like this due to multiple partners sharing data files.
I usually use the following boto3 calls to copy the contents between two buckets when everything is in the same account, but I want to know how this can be modified for the new use case:
copy_source_object = {'Bucket': source_bucket_name, 'Key': source_file_key}
s3_client.copy_object(CopySource=copy_source_object, Bucket=destination_bucket_name, Key=destination_file_key)
How can the above code be modified to fit my use case of an access-key-based connection to the source bucket and roles for the destination bucket (which can be a cross-account role as well)? Please let me know if any clarification is required.
There are multiple options here. The easiest is providing credentials to boto3 (docs). I would suggest retrieving the keys from the SSM Parameter Store or Secrets Manager so they're not hardcoded.
Edit: I realize the problem now: you can't use the same session for both buckets, which makes sense. The exact thing you want (i.e. using copy_object) is not possible. The trick is to use 2 separate sessions so you don't mix the credentials. You would need to get_object from the first account and put_object to the second. You should be able to simply pass resp['Body'] from the get into the put request, but I haven't tested this.
import boto3

# Session built from the access key / secret key for the source account (Account 1)
acc1_session = boto3.session.Session(
    aws_access_key_id=ACCESS_KEY_acc1,
    aws_secret_access_key=SECRET_KEY_acc1
)

# Session built from the credentials for the destination account (Account 2)
acc2_session = boto3.session.Session(
    aws_access_key_id=ACCESS_KEY_acc2,
    aws_secret_access_key=SECRET_KEY_acc2
)

acc1_client = acc1_session.client('s3')
acc2_client = acc2_session.client('s3')

# Read with the source-account client, write with the destination-account client.
# resp['Body'] is a StreamingBody; if put_object rejects the stream, read it
# into memory first with resp['Body'].read().
resp = acc1_client.get_object(Bucket=source_bucket_name, Key=source_file_key)
acc2_client.put_object(Bucket=destination_bucket_name, Key=destination_file_key, Body=resp['Body'])
Your situation appears to be:
Account-1:
Amazon S3 bucket containing files you wish to copy
You have an Access Key + Secret Key from Account-1 that can read these objects
Account-2:
AWS Lambda function that has an IAM Role that can write to a destination bucket
When using the CopyObject() command, the credentials used must have read permission on the source bucket and write permission on the destination bucket. There are normally two ways to do this:
Use credentials from Account-1 to 'push' the file to Account-2. This requires a Bucket Policy on the destination bucket that permits PutObject for the Account-1 credentials. Also, you should set ACL=bucket-owner-full-control to hand over control to Account-2. (This sounds similar to your situation; a rough sketch follows at the end of this answer.) OR
Use credentials from Account-2 to 'pull' the file from Account-1. This requires a Bucket Policy on the source bucket that permits GetObject for the Account-2 credentials.
If you can't ask for a change to the Bucket Policy on the source bucket that permits Account-2 to read the contents, then **you'll need a Bucket Policy on the destination bucket that permits write access by the credentials from Account-1**.
This is made more complex by the fact that you are potentially copying the object to a bucket in "some other account". There is no easy answer if you are starting to use 3 accounts in the process.
Bottom line: If possible, ask them for a change to the source bucket's Bucket Policy so that your Lambda function can read the files without having to change credentials. It can then copy objects to any bucket that the function's IAM Role can access.
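If you do end up pushing with the Account-1 keys, a rough sketch might look like this. The bucket names and key are placeholders, and a destination Bucket Policy granting PutObject to the Account-1 user is assumed:
import boto3

# Client built from the Account-1 access key / secret key the provider gave you.
acc1_s3 = boto3.client(
    's3',
    aws_access_key_id=ACCESS_KEY_acc1,        # placeholder variables
    aws_secret_access_key=SECRET_KEY_acc1,
)

# Server-side copy using Account-1 credentials, granting the destination bucket
# owner full control so Account-2 actually owns the resulting object.
acc1_s3.copy_object(
    CopySource={'Bucket': 'source-bucket-acc1', 'Key': 'data/file.csv'},
    Bucket='destination-bucket-acc2',
    Key='data/file.csv',
    ACL='bucket-owner-full-control',
)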
I have 2 AWS accounts. Account 1 has one file in bucket1 in the us-east-1 region. I am trying to copy the file from Account 1 to bucket2 in Account 2, which is in the us-west-2 region. I have all the required IAM policies in place, and the same credentials work for both accounts. I am using the Python boto3 library.
cos = boto3.resource('s3', aws_access_key_id=COMMON_KEY_ID, aws_secret_access_key=COMMON_ACCESS_KEY, endpoint_url="https://s3.us-west-2.amazonaws.com")
copy_source = {
    'Bucket': bucket1,
    'Key': SOURCE_KEY
}
cos.meta.client.copy(copy_source, "bucket2", TARGET_KEY)
As seen, the copy function is executed on a client object pointing to the target account2/us-west-2. How does it get the source files in account1/us-east-1? Am I supposed to provide SourceClient as input to the copy function?
The cleanest way to perform such a copy is:
Use credentials (IAM User or IAM Role) from Account-2 that have GetObject permission on Bucket-1 (or all buckets) and PutObject permissions on Bucket-2
Add a Bucket policy to Bucket-1 that allows the Account-2 credentials to GetObject from the bucket
Send the copy command to the destination region
This method is good because it only requires one set of credentials.
A few things to note:
If you instead copy files using credentials from the source account, be sure to set ACL=bucket-owner-full-control to hand over ownership to the destination bucket.
The resource copy() method allows a SourceClient to be specified. I don't think this is available for the client copy() method.
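For the SourceClient route, here is a hedged sketch with placeholder bucket names and keys. Note that SourceClient only covers source-side calls such as the size check; the credentials on the destination resource still need read access to the source object (e.g. via the Bucket-1 bucket policy described above) for the server-side copy to succeed:
import boto3

# Client with credentials that can read bucket1 (source side, us-east-1).
source_client = boto3.client('s3', region_name='us-east-1')

# Resource with credentials that can write bucket2 (destination side, us-west-2).
destination = boto3.resource('s3', region_name='us-west-2')

copy_source = {'Bucket': 'bucket1', 'Key': 'SOURCE_KEY'}  # placeholder names

# Managed copy: SourceClient handles source-side lookups (e.g. head_object for size),
# while the destination credentials issue the actual server-side copy.
destination.Bucket('bucket2').copy(copy_source, 'TARGET_KEY', SourceClient=source_client)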