Amazon S3 Policy: Manage folder-level encryption

I have an S3 bucket structure like:
device1/01
device2/01
device3/01
Each device has its own KMS key. I used boto3 to create the structure and set the encryption:
import boto3

s3_client = boto3.client('s3')

s3_client.put_object(
    Body="Hello world",
    Bucket='MyBucket',
    Key="device1/01",
    ServerSideEncryption='aws:kms',
    SSEKMSKeyId='device1_kms_key_id'
)
Now if I keep using put_object() with a different KMS key for each device prefix, will the S3 bucket policy get updated automatically to grant access to all of the KMS keys?

It's the KMS key policy that needs to be updated so that you have access to the KMS operations you need for the specific keys.
Alternatively, the IAM policy could be updated to give you access to the keys (a sketch follows below the link).
The S3 resource policy is unrelated.
None of this happens automatically.
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-access-default-encryption/
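For illustration, a minimal sketch of the IAM-policy route mentioned above, assuming hypothetical key ARNs, user name and policy name:

import boto3
import json

iam = boto3.client('iam')

# Hypothetical ARNs for the per-device KMS keys
device_key_arns = [
    'arn:aws:kms:us-east-1:111122223333:key/device1_kms_key_id',
    'arn:aws:kms:us-east-1:111122223333:key/device2_kms_key_id',
    'arn:aws:kms:us-east-1:111122223333:key/device3_kms_key_id',
]

policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        # GenerateDataKey is used for SSE-KMS uploads, Decrypt for downloads
        'Action': ['kms:GenerateDataKey', 'kms:Decrypt'],
        'Resource': device_key_arns,
    }],
}

# Attach as an inline policy to the hypothetical user doing the uploads
iam.put_user_policy(
    UserName='device-writer',
    PolicyName='device-kms-access',
    PolicyDocument=json.dumps(policy),
)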

Related

Is it possible to copy S3 bucket content from one bucket to another account's S3 bucket without using a bucket policy?

I want to copy S3 bucket objects to a different account, but the requirement is that we can't use a bucket policy.
Is it possible to copy content from one bucket to another without using a bucket policy?
You cannot use native S3 object replication between different accounts without using a bucket policy. As stated in the permissions documentation:
When the source and destination buckets aren't owned by the same accounts, the owner of the destination bucket must also add a bucket policy to grant the owner of the source bucket permissions to perform replication actions
You could write a custom application that uses IAM roles to replicate objects, but this will likely be quite involved as you'll need to track the state of the bucket and all of the objects written to it.
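As a rough illustration of that custom-application idea, a minimal single-object sketch, assuming a hypothetical role in the destination account whose trust policy allows your source-account credentials to assume it (role ARN, bucket names and key are made up, and none of the state tracking mentioned above is shown):

import boto3

# Read the object with the default (source account) credentials
src_s3 = boto3.client('s3')
obj = src_s3.get_object(Bucket='source-bucket', Key='some/key')

# Assume a hypothetical role in the destination account
sts = boto3.client('sts')
creds = sts.assume_role(
    RoleArn='arn:aws:iam::222233334444:role/replication-writer',
    RoleSessionName='copy-without-bucket-policy',
)['Credentials']

# Write the object with the destination account's temporary credentials
dst_s3 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
dst_s3.put_object(Bucket='destination-bucket', Key='some/key', Body=obj['Body'].read())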
Install the AWS CLI,
run aws configure to set the source bucket account's credentials as the default, and
visit https://github.com/Shi191099/S3-Copy-old-data-without-Policy.git

Put Object to S3 Bucket of another account

We are able to put objects into our S3 Bucket.
But now we have a requirement to put these objects directly into an S3 bucket that belongs to a different account and a different region.
Here we have a few questions:
Is this possible?
If possible what changes we need to do for this?
They have provided us with the Access Key, Secret Key, Region, and Bucket details.
Any comments and suggestions will be appreciated.
IAM credentials are associated with a single AWS Account.
When you launch your own Amazon EC2 instance with an assigned IAM Role, it will receive access credentials that are associated with your account.
To write to another account's Amazon S3 bucket, you have two options:
Option 1: Your credentials + Bucket Policy
The owner of the destination Amazon S3 bucket can add a Bucket Policy on the bucket that permits access by your IAM Role. This way, you can just use the normal credentials available on the EC2 instance.
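For example, a minimal sketch of such a Bucket Policy, applied by the destination bucket's owner (the account ID, role name and bucket name are hypothetical):

import boto3
import json

# Hypothetical: the destination bucket owner allows your EC2 instance's IAM Role to put objects
bucket_policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Principal': {'AWS': 'arn:aws:iam::111122223333:role/my-ec2-instance-role'},
        'Action': 's3:PutObject',
        'Resource': 'arn:aws:s3:::their-destination-bucket/*',
    }],
}

# Run by the destination account with its own credentials
s3 = boto3.client('s3')
s3.put_bucket_policy(Bucket='their-destination-bucket', Policy=json.dumps(bucket_policy))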
Option 2: Their credentials
It appears that you have been given access credentials for their account. You can use these credentials to access their Amazon S3 bucket.
As detailed on Working with AWS Credentials - AWS SDK for Java, you can provide these credentials in several ways. However, if you are using BOTH the credentials provided by the IAM Role AND the credentials that have been given to you, it can be difficult to 'switch between' them. (I'm not sure if there is a way to tell the Credentials Provider to switch between a profile stored in the ~/.aws/credentials file and those provided via instance metadata.)
Thus, the easiest way is to specify the Access Key and Secret Key when creating the S3 client:
BasicAWSCredentials awsCreds = new BasicAWSCredentials("access_key_id", "secret_key_id");
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .build();
It is generally not a good idea to put credentials in your code. You should load them from a configuration file.
Yes, it's possible. You need to allow the cross-account S3 put operation in the bucket's policy.
Here is a blog post by AWS that should help you set up the cross-account put action.

AWS Lambda Cross account Keys & Roles usage for S3 transfer

I have a use case where I need AWS Lambda to copy files/objects from one S3 bucket to another. In this use case the source S3 bucket is in a separate AWS account (say Account 1), where the provider has only given us an Access Key & Secret Access Key. Our Lambda runs in Account 2, and the destination bucket can be either in Account 2 or in some other Account 3 altogether, which can be accessed using an IAM role. The setup is like this because multiple partners share data files.
I usually use the following boto3 calls to copy contents between two buckets when everything is in the same account, but I want to know how this can be modified for the new use case:
copy_source_object = {'Bucket': source_bucket_name, 'Key': source_file_key}
s3_client.copy_object(CopySource=copy_source_object, Bucket=destination_bucket_name, Key=destination_file_key)
How can the above code be modified to fit my use case of having an access-key-based connection to the source bucket and roles for the destination bucket (which can be a cross-account role as well)? Please let me know if any clarification is required.
There are multiple options here. The easiest is providing credentials to boto3 (docs). I would suggest retrieving the keys from the SSM Parameter Store or Secrets Manager so they're not stored hardcoded.
Edit: I realize the problem now; you can't use the same session for both buckets, which makes sense. The exact thing you want (i.e. using copy_object) is not possible. The trick is to use two separate sessions so you don't mix the credentials. You would need to get_object from the first account and put_object to the second. You should be able to simply pass the resp['Body'] from the get into the put request, but I haven't tested this.
import boto3

# Session for Account 1 (source), built from the provided access keys
acc1_session = boto3.session.Session(
    aws_access_key_id=ACCESS_KEY_acc1,
    aws_secret_access_key=SECRET_KEY_acc1
)
# Session for Account 2 (destination)
acc2_session = boto3.session.Session(
    aws_access_key_id=ACCESS_KEY_acc2,
    aws_secret_access_key=SECRET_KEY_acc2
)

acc1_client = acc1_session.client('s3')
acc2_client = acc2_session.client('s3')

# Read from the source bucket with Account 1 credentials,
# then write to the destination bucket with Account 2 credentials
resp = acc1_client.get_object(Bucket=source_bucket_name, Key=source_file_key)
acc2_client.put_object(Bucket=destination_bucket_name, Key=destination_file_key, Body=resp['Body'])
Your situation appears to be:
Account-1:
    - Amazon S3 bucket containing files you wish to copy
    - You have an Access Key + Secret Key from Account-1 that can read these objects
Account-2:
    - AWS Lambda function that has an IAM Role that can write to a destination bucket
When using the CopyObject() command, the credentials used must have read permission on the source bucket and write permission on the destination bucket. There are normally two ways to do this:
Use credentials from Account-1 to 'push' the file to Account-2. This requires a Bucket Policy on the destination bucket that permits PutObject for the Account-1 credentials. Also, you should set ACL=bucket-owner-full-control to hand over control of the object to Account-2. (This sounds similar to your situation; see the sketch after these two options.) OR
Use credentials from Account-2 to 'pull' the file from Account-1. This requires a Bucket Policy on the source bucket that permits GetObject for the Account-2 credentials.
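A minimal sketch of the 'push' variant from option 1, assuming the destination bucket's policy already permits s3:PutObject for the Account-1 credentials (the key and bucket variables are the same placeholders as above):

import boto3

# Session built from the Account-1 access key / secret key you were given (placeholders)
acc1_session = boto3.session.Session(
    aws_access_key_id=ACCESS_KEY_acc1,
    aws_secret_access_key=SECRET_KEY_acc1
)
acc1_client = acc1_session.client('s3')

# Server-side copy using Account-1 credentials; the ACL hands ownership
# of the new object over to the destination bucket's account
acc1_client.copy_object(
    CopySource={'Bucket': source_bucket_name, 'Key': source_file_key},
    Bucket=destination_bucket_name,
    Key=destination_file_key,
    ACL='bucket-owner-full-control'
)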
If you can't ask for a change to the Bucket Policy on the source bucket that permits Account-2 to read the contents, then you'll need a Bucket Policy on the destination bucket that permits write access by the credentials from Account-1.
This is made more complex by the fact that you are potentially copying the object to a bucket in "some other account". There is no easy answer if you are starting to use 3 accounts in the process.
Bottom line: If possible, ask them for a change to the source bucket's Bucket Policy so that your Lambda function can read the files without having to change credentials. It can then copy objects to any bucket that the function's IAM Role can access.

AWS Glue: Access denied for accessing table with S3 source data

I have manually created a Glue table with an S3 bucket as the source.
The S3 bucket has a bucket policy defined to allow access only from:
root
my user_id
or a role defined for Glue
Now, when a different user who has AWSGlueConsoleFullAccess tries to access the table from the Glue console, he gets access denied, although Glue has service access to the S3 bucket.
Request help in understanding this behavior.
Thanks
Can you please look into the policy details of the "AWSGlueConsoleFullAccess" policy? Most probably it expects the S3 bucket to have a certain prefix, e.g. "aws-glue-*". In that case, either update your policy or rename your bucket to have the aws-glue- prefix (a sketch follows the policy excerpt).
"Resource": [
"arn:aws:s3:::aws-glue-*"

Access denied when accessing Athena in SQLAlchemy

Using pyathena and SQLAlchemy, I connect to AWS Athena.
If I use the keys of an AWS admin, everything works fine and I can query data.
If I use the keys of an AWS user that has the AmazonAthenaFullAccess and AWSQuicksightAthenaAccess permissions, I get access denied.
I have permission to the output S3 bucket, and Athena accesses a public data set S3 bucket.
What permissions am I missing?
Thanks
The AmazonAthenaFullAccess policy provides access to S3 buckets such as "arn:aws:s3:::aws-athena-query-results-*" and "arn:aws:s3:::athena-examples*".
You have two options:
Create a new policy with the content from the AmazonAthenaFullAccess policy, but with different S3 resources (see the sketch below).
Add the AmazonS3FullAccess policy to your user, which grants permissions for all your S3 buckets.
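A minimal sketch of option 1, showing only the S3 portion of such a policy pointed at your own buckets (policy name, bucket names and user name are hypothetical; the Athena and Glue permissions from AmazonAthenaFullAccess would be copied in alongside this statement):

import boto3
import json

iam = boto3.client('iam')

# S3 permissions for your own query-results bucket and the public data set bucket
policy_document = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': ['s3:GetBucketLocation', 's3:GetObject', 's3:ListBucket', 's3:PutObject'],
        'Resource': [
            'arn:aws:s3:::my-athena-output-bucket',
            'arn:aws:s3:::my-athena-output-bucket/*',
            'arn:aws:s3:::the-public-dataset-bucket',
            'arn:aws:s3:::the-public-dataset-bucket/*',
        ],
    }],
}

# Create the customer managed policy and attach it to the non-admin user
response = iam.create_policy(
    PolicyName='athena-custom-s3-access',
    PolicyDocument=json.dumps(policy_document),
)
iam.attach_user_policy(
    UserName='athena-user',
    PolicyArn=response['Policy']['Arn'],
)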