S3 Glacier logging and encryption

I am trying to write a Prisma policy to ensure S3 Glacier has logging and encryption enabled. However, I do not see an argument in the Terraform code for these two checks. Does Glacier inherit these from S3 automatically? Thank you.
I tried using the Glacier SNS topic for logging, but I still cannot figure out how to check for encryption.

Related

AWS Control Tower guardrail prevents S3 bucket from being created with encryption

We have applied the guardrails mentioned in this posting, AWS Preventive S3 Guardrails. Unfortunately, we are not getting the anticipated outcome. We applied the Disallow Changes to Encryption Configuration for Amazon S3 Buckets guardrail.
The SCP has a DENY for s3:PutEncryptionConfiguration, with a condition excepting the arn:aws:iam::*:role/AWSControlTowerExecution role.
The issue is that anyone can create an S3 bucket, which is acceptable. However, when creating the bucket either in the console or via CloudFormation and attempting to specify encryption (either SSE-S3 or SSE-KMS), an error is generated and the bucket is created without encryption.
Ideally, anyone should be able to create an S3 bucket and enable encryption on it. What we were hoping this SCP would do is prevent encryption from being removed once it has been applied to the bucket.
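For reference, a guardrail SCP of this shape looks roughly like the following. This is a sketch reconstructed from the description above, not necessarily the exact statement shipped by Control Tower:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Deny",
          "Action": "s3:PutEncryptionConfiguration",
          "Resource": "*",
          "Condition": {
            "ArnNotLike": {
              "aws:PrincipalARN": "arn:aws:iam::*:role/AWSControlTowerExecution"
            }
          }
        }
      ]
    }

Because CreateBucket itself takes no encryption configuration, enabling encryption on a new bucket happens through a follow-up s3:PutEncryptionConfiguration call, and the blanket Deny blocks that call whether it is enabling or disabling encryption, which matches the behavior described above.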
We are anticipating similar issues with the other guardrails mentioned in the article:
Disallow Changes to Encryption Configuration for all Amazon S3 Buckets [Previously: Enable Encryption at Rest for Log Archive]
Disallow Changes to Logging Configuration for all Amazon S3 Buckets [Previously: Enable Access Logging for Log Archive]
Disallow Changes to Bucket Policy for all Amazon S3 Buckets [Previously: Disallow Policy Changes to Log Archive]
Disallow Changes to Lifecycle Configuration for all Amazon S3 Buckets [Previously: Set a Retention Policy for Log Archive]
Has anyone encountered this issue? What would be the best way to allow buckets to be created with the needed encryption, logging, bucket policy, and lifecycle configuration, and then disallow removal of or changes to those settings after the bucket is created?
I'm afraid SCPs don't offer the flexibility you need, simply because the condition keys you need are not present in the API calls. There is no policy that says "allow CreateBucket on the condition that encryption is enabled".
I've worked in various platform teams at large corporations implementing these types of controls and have encountered these limitations many times. Basically, there are three strategies:
Detective compliance
Corrective compliance
Preventive compliance
First, make sure you have visibility into how things are configured. You can use AWS Config rules for this; there are definitely rules out there that check S3 buckets for encryption settings. Make sure to centralize the results of these rules using an AWS Config aggregator in your security account. After detection you can manually follow up on detected misconfigurations (or automate this when running at scale).
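As a sketch, enabling the AWS-managed encryption rule can be done by passing a definition like the one below to aws configservice put-config-rule (the rule name is your choice; S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED is the managed source identifier, and a similar managed rule, S3_BUCKET_LOGGING_ENABLED, exists for access logging):

    {
      "ConfigRuleName": "s3-bucket-server-side-encryption-enabled",
      "Source": {
        "Owner": "AWS",
        "SourceIdentifier": "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED"
      }
    }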
If you would also like to correct mistakes, you can use AWS Config auto-remediation actions. Various open-source tools are available to help you with this as well; an often-used one is Cloud Custodian with the c7n-org plugin. Many commercial offerings also exist, but they are quite expensive.
With SCPs or IAM policies you can prevent someone from doing something in the first place, which is lower risk than correcting misconfigurations after they have happened. However, this approach is also very inflexible: policies can get complex very quickly, and a denied request doesn't tell the user why they can't do something. Often, SCPs are only used for very simple rules (e.g. no IAM users may be created) or for blocking actions outside certain regions.
I'd opt for making sure you detect things properly first, then see whether you can either correct or prevent them.
Edit:
If you have mature teams that only use CI/CD and infrastructure as code, you can also enforce your security controls using tools like cfn-guard in a pipeline build stage. Simply fail the build if the templates are not up to standard.
Edit 2: To get back to your question: some actions can be prevented using SCPs if there is a separate API for disabling something, like a hypothetical 'DisableEncryption' action. However, most settings are changed through a single PutEncryptionConfiguration-style action, and you can't really tell whether it is being used to enable or disable the feature.

AWS S3: is there a notification on GetObject?

I have a use case where I want to put data into an S3 bucket for it to be read later by another account. I only want the other account to be able to read the file in S3, and once they have read it, I will then delete the file myself.
I have been reading the S3 documentation and cannot see that it covers this use case of sending a notification when a file in an S3 bucket is read.
Can anyone help, or suggest an alternative workflow? I have been looking at AWS SNS and was wondering whether that would be a better solution.
You could use CloudTrail and CloudWatch Events to enable this workflow.
By default, S3 object-level API calls are not logged, so you'd want to enable that following the instructions here.
Then create a CloudWatch Events rule for the Simple Storage Service that matches the "GetObject" operation.
Have this event invoke a Lambda function that will remove the object.
More information available here.
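The rule's event pattern would look roughly like this (a sketch; it assumes the trail is configured to log S3 data events for the bucket in question):

    {
      "source": ["aws.s3"],
      "detail-type": ["AWS API Call via CloudTrail"],
      "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["GetObject"]
      }
    }

The Lambda function can then read the bucket name and object key from detail.requestParameters in the incoming event and issue a DeleteObject call.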

How do I find out which AWS actions I need to grant for a CLI command?

Say I want to run some AWS CLI command, e.g. aws s3 sync dist/ "s3://${DEPLOY_BUCKET_NAME}" --delete.
How do I know which specific permissions (actions) I need to grant for this command to work correctly? I want to adhere to the principle of least privilege.
Just to clarify my question: I know where to find a list of all actions for S3 or any other service, and I know how to write a policy. The question is how I know which specific actions to grant for a given CLI command.
Each command uses different actions, and the arguments of the command also play a role here.
Almost every command used in the AWS CLI maps one-to-one to an IAM action.
However, the aws s3 commands such as sync are higher-level functions that call multiple API actions.
For sync, I would imagine you would need (see the policy sketch after this list):
ListBucket
CopyObject
GetObjectACL
PutObjectACL
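As a starting point, a least-privilege policy for that sync command might look like the sketch below. The exact action set is an educated guess you should verify (for example with CloudTrail, as described next); note that --delete additionally requires s3:DeleteObject, and DEPLOY_BUCKET_NAME is a placeholder:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::DEPLOY_BUCKET_NAME"
        },
        {
          "Effect": "Allow",
          "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
          "Resource": "arn:aws:s3:::DEPLOY_BUCKET_NAME/*"
        }
      ]
    }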
If that still doesn't help, then you can use AWS CloudTrail to look at the underlying API calls that the AWS CLI made to your account. The CloudTrail records will show each API call and whether it succeeded or failed.
There's no definitive mapping to API actions from high-level awscli commands (like aws s3 sync) or from AWS console actions that I'm aware of.
One thing that you might consider is to enable CloudTrail, then temporarily enable all actions on all resources in an IAM policy, then run a test of aws s3 sync, and then review CloudTrail for what API actions were invoked on which resources. Not ideal, but it might give you something to start with.
You can use Athena to query CloudTrail Logs. It might seem daunting to set up at first, but it's actually quite easy. Then you can issue simple SQL queries such as:
SELECT eventtime, eventname, resources FROM trail20191021 ORDER BY eventtime DESC;
If you want to know for S3 specifically, that is documented in the S3 Developer Guide:
Specifying Permissions in a Policy
Specifying Conditions in a Policy
Specifying Resources in a Policy
In general, you can get what you need for any AWS resource from Actions, Resources, and Condition Keys for AWS Services
And you may find the AWS Policy Generator useful

How to use AWS AES-256 in AWS Lambda with C#

Beginner here. How can I access an encrypted bucket? I read the documentation, and it doesn't clearly state how to create an encrypted bucket besides the default encryption option and bucket policy. I can't use a KMS key for the encryption.
I checked this AWS Server-Side Encryption C# example, but I don't know where I will get the following:
ServerSideEncryptionCustomerMethod = AES256
ServerSideEncryptionCustomerProvidedKey = base64(secretkey)
ServerSideEncryptionCustomerProvidedKeyMD5 = md5(base64(secretkey))
Could you please simplify or discuss the steps?
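For what it's worth, here is a minimal sketch of how those three values are typically produced with the AWS SDK for .NET. The key is one you generate and store yourself (S3 never keeps a copy), and the bucket and object names are placeholders:

    using System;
    using System.Security.Cryptography;
    using System.Threading.Tasks;
    using Amazon.S3;
    using Amazon.S3.Model;

    public static class SseCUploadExample
    {
        public static async Task UploadAsync()
        {
            // Generate a 256-bit customer-provided key. You must keep this
            // key yourself: it is required again to read the object back.
            var keyBytes = new byte[32];
            RandomNumberGenerator.Fill(keyBytes);
            var base64Key = Convert.ToBase64String(keyBytes);

            using var s3 = new AmazonS3Client();
            var request = new PutObjectRequest
            {
                BucketName = "my-bucket",    // placeholder
                Key = "example.txt",         // placeholder
                ContentBody = "hello world",
                ServerSideEncryptionCustomerMethod = ServerSideEncryptionCustomerMethod.AES256,
                ServerSideEncryptionCustomerProvidedKey = base64Key
                // If ServerSideEncryptionCustomerProvidedKeyMD5 is left unset,
                // the SDK computes the MD5 header from the key automatically.
            };
            await s3.PutObjectAsync(request);
        }
    }

Reading the object back requires sending the same customer method and key on the GetObjectRequest.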

I am not able to export logs from CloudWatch to my S3 bucket

I am not able to export logs from CloudWatch to an S3 bucket through the AWS console, as it shows the following error message. Can anyone please help me?
"One or more of the specified parameters are invalid e.g. Time Range etc"
You are probably using an S3 bucket with encryption enabled. This error is shown when the export task to S3 fails because the CloudWatch Logs export task doesn't yet support server-side encryption on the destination bucket.
(I reproduced this).
In my case, it was incorrect access permissions configured in the bucket policy. It worked with AES-256 encryption enabled in my test run.
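For comparison, the bucket policy an export task expects looks roughly like this sketch (the region and bucket name are placeholders; check the CloudWatch Logs export documentation for the exact statements):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "Service": "logs.us-east-1.amazonaws.com" },
          "Action": "s3:GetBucketAcl",
          "Resource": "arn:aws:s3:::my-export-bucket"
        },
        {
          "Effect": "Allow",
          "Principal": { "Service": "logs.us-east-1.amazonaws.com" },
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::my-export-bucket/*",
          "Condition": {
            "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }
          }
        }
      ]
    }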