AWS Macie - list of query fields - amazon-web-services

The AWS Macie documentation shows an example of adding a basic alert.
The example query to add is s3_world_readability:"true"
Where do we find a list of valid fields that we can query on?
The docs refer to Constructing Queries in Macie, but nowhere do I see any listing of what fields I can query.
I'm trying to figure out whether I can create a Macie alert if a bucket doesn't have a bucket policy that enforces server-side encryption.
Am I missing something obvious?
Update
I found out that you can get some suggestions for query fields from the Macie console, in the Research tab.
Using this pattern when selecting S3 bucket properties, I'm able to drill down into the bucket policy.
My bucket policy is:
{
  "Version": "2008-10-17",
  "Id": "Policy123456789",
  "Statement": [
    {
      "Sid": "DenyIncorrectEncryptionHeader",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    },
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "Null": {
          "s3:x-amz-server-side-encryption": "true"
        }
      }
    }
  ]
}
I can use the following query in Macie and it will return the bucket with this policy:
policy.Policy.Statement.Action:"s3:PutObject"
So if I want to query bucket policies that match the conditions forcing SSE, I try:
policy.Policy.Statement.Condition.StringNotEquals.s3\:x\-amz\-server\-side\-encryption:"AES256"
But I get nothing back. Is there a better way for me to query these properties?
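As a sanity check outside of Macie, the condition I'm trying to match can also be inspected directly with boto3. This is only a rough sketch (it assumes Statement is a list and skips buckets without a policy), not a Macie query:

import json

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def enforces_sse(policy_doc):
    # Look for a Deny statement whose Condition references the
    # s3:x-amz-server-side-encryption key, as in the policy above.
    for stmt in policy_doc.get("Statement", []):
        cond = stmt.get("Condition", {})
        if stmt.get("Effect") == "Deny" and (
            "s3:x-amz-server-side-encryption" in cond.get("StringNotEquals", {})
            or "s3:x-amz-server-side-encryption" in cond.get("Null", {})
        ):
            return True
    return False

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        policy = json.loads(s3.get_bucket_policy(Bucket=name)["Policy"])
    except ClientError:
        print(name, "-> no bucket policy, SSE not enforced")
        continue
    if not enforces_sse(policy):
        print(name, "-> policy does not enforce SSE")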

Related

AWS: Permissions for exporting logs from Cloudwatch to Amazon S3

I am trying to export logs from one of my CloudWatch log groups into Amazon S3, using the AWS console.
I followed the guide from the AWS documentation, but with little success. My organization does not allow me to manage IAM roles/policies; however, I was able to find out that my role is allowed all log-related operations (logs:* on all resources within the account).
Currently, I am stuck on the following error message:
Could not create export task. PutObject call on the given bucket failed. Please check if CloudWatch Logs has been granted permission to perform this operation.
My bucket policy is set in the following way:
{
  [
    ...
    {
      "Sid": "Cloudwatch Log Export 1",
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.eu-central-1.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Sid": "Cloudwatch Log Export 2",
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.eu-central-1.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
Prior to editing the bucket policy, my error message had been:
Could not create export task. GetBucketAcl call on the given bucket failed. Please check if CloudWatch Logs has been granted permission to perform this operation.
but editing the bucket policy fixed that. I would expect allowing PutObject to do the same, but this has not been the case.
Thank you for your help.
Ensure that when exporting the data you configure the following correctly:
S3 bucket prefix (optional) - this is the object key prefix under which you want the logs to be stored.
When creating the policy for PutObject, you must ensure the object prefix is captured correctly. See the diff for the PutObject statement's Resource:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:GetBucketAcl",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-exported-logs",
      "Principal": { "Service": "logs.us-east-2.amazonaws.com" }
    },
    {
      "Action": "s3:PutObject",
      "Effect": "Allow",
-     "Resource": "arn:aws:s3:::my-exported-logs/*",
+     "Resource": "arn:aws:s3:::my-exported-logs/where_i_want_to_store_my_logs*",
      "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } },
      "Principal": { "Service": "logs.us-east-2.amazonaws.com" }
    }
  ]
}
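For what it's worth, the prefix in that Resource needs to line up with the S3 prefix you pass when creating the export task. A minimal boto3 sketch, where the task name, log group, timestamps and prefix are made-up examples:

import boto3

logs = boto3.client("logs", region_name="us-east-2")

# Hypothetical names; destinationPrefix must match the path allowed by
# the PutObject statement's Resource in the bucket policy above.
logs.create_export_task(
    taskName="example-export",
    logGroupName="/my/log/group",
    fromTime=1609459200000,  # epoch milliseconds
    to=1612137600000,        # epoch milliseconds
    destination="my-exported-logs",
    destinationPrefix="where_i_want_to_store_my_logs",
)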
Please check this guide: Export log data to Amazon S3 using the AWS CLI.
The policies there look like the document that you shared, but slightly different.
Assuming that you are doing this in the same account and the same region, please check that you are using the right region (in this example it is us-east-2):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:GetBucketAcl",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-exported-logs",
      "Principal": { "Service": "logs.us-east-2.amazonaws.com" }
    },
    {
      "Action": "s3:PutObject",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-exported-logs/*",
      "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } },
      "Principal": { "Service": "logs.us-east-2.amazonaws.com" }
    }
  ]
}
I think that bucket-owner-full-control is not the problem here; the only likely issue is the region.
Anyway, take a look at the other two examples in case you are working across accounts or using a role instead of a user.
This solved my issue, which was the same one that you mention.
One thing to check is your encryption settings. According to https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html
Exporting log data to Amazon S3 buckets that are encrypted by AWS KMS is not supported.
Amazon S3-managed keys (SSE-S3) bucket encryption might solve your problem. If you use SSE-KMS, CloudWatch Logs can't access your encryption key in order to properly encrypt the objects as they are put into the bucket.
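If that is the cause, a minimal sketch of switching the bucket's default encryption to SSE-S3 with boto3 (the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")

# Placeholder bucket name; set default encryption to S3-managed keys (SSE-S3).
s3.put_bucket_encryption(
    Bucket="my-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)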
I had the same situation, and what worked for me was to add the bucket name itself as a resource in the Allow PutObject Sid, like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowLogsExportGetBucketAcl",
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.eu-west-1.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Sid": "AllowLogsExportPutObject",
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.eu-west-1.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": [
        "my-bucket",
        "my-bucket/*"
      ]
    }
  ]
}
I also believe that all the other answers are relevant, especially the point about using the time in milliseconds.
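For reference, a tiny sketch of computing the epoch-millisecond timestamps the export task expects (the dates are arbitrary examples):

from datetime import datetime, timezone

# The export task takes fromTime/to as epoch milliseconds.
def to_millis(dt):
    return int(dt.timestamp() * 1000)

from_time = to_millis(datetime(2021, 1, 1, tzinfo=timezone.utc))
to_time = to_millis(datetime(2021, 2, 1, tzinfo=timezone.utc))
print(from_time, to_time)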

AWS IAM policy restriction based on Tags not giving me any access

So I followed this AWS tutorial and created this IAM policy, which should give access to any DynamoDB action on resources that have these tags. But as you can see in the attached image, it tells me I do not have any permissions. The same thing also happens with other services, not only DynamoDB, and I also tried to hardcode the 'access-project' tag in the policy, as done with 'access-environment', as you can see.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllActionsSameProjectEnvironment",
      "Effect": "Allow",
      "Action": "dynamodb:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/access-project": "${aws:PrincipalTag/access-project}",
          "aws:ResourceTag/access-environment": "pre"
        },
        "ForAllValues:StringEquals": {
          "aws:TagKeys": [
            "access-project",
            "access-environment",
            "Name",
            "OwnedBy"
          ]
        },
        "StringEqualsIfExists": {
          "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
          "aws:RequestTag/access-environment": "pre"
        }
      }
    }
  ]
}
Any idea why is this happening? Thanks!
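For reference, the ${aws:PrincipalTag/...} variables in this policy only resolve if the calling principal actually carries those tags. A minimal boto3 sketch of tagging the role, using a hypothetical role name:

import boto3

iam = boto3.client("iam")

# Hypothetical role name; the policy's ${aws:PrincipalTag/...} variables
# resolve only if the calling principal carries these tags.
iam.tag_role(
    RoleName="teamA-admin-role",
    Tags=[
        {"Key": "access-project", "Value": "teamA"},
        {"Key": "access-environment", "Value": "pre"},
    ],
)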
DynamoDB does not support authorization based on tags, as listed in the docs.

AWS - Permission Denied After Setting a Policy with SecureTransport:false

I was trying to enforce a policy that allows only SSL access.
However, after attaching the policy, I now get "You don't have permissions" on every single thing in this bucket, including the Permissions tab and the Bucket Policy section.
I am the admin and I do have all access permissions to S3 in IAM for my user.
This is the policy:
{
  "Id": "Policy98421321896",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MustBeEncryptedInTransit",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::cf-templates-98d9d7a96z21x-us-east-1",
        "arn:aws:s3:::cf-templates-98d9d7a96z21x-us-east-1/*"
      ],
      "Condition": {
        "ArnEqualsIfExists": {
          "aws:SecureTransport": "false"
        }
      },
      "Principal": "*"
    }
  ]
}
Question is:
How do I restore permissions to this bucket?
And how should I correctly set this policy?
When you want to add a condition that checks a Boolean value, it should use the "Bool" key with a valid value.
"Condition": {
"Bool": {
"aws:SecureTransport": "true"
}
}
What you are trying to achieve is covered in this article, and you can adapt it to your needs:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-policy-for-config-rule/
{
  "Id": "ExamplePolicy",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSSLRequestsOnly",
      "Action": "s3:*",
      "Effect": "Deny",
      "Resource": [
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET",
        "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      },
      "Principal": "*"
    }
  ]
}
About the second part of your question: you can restore the permissions using your root account, as it should have full permissions. But it is strange that updating a bucket policy would affect your IAM permissions so that you can't access certain parts of the S3 configuration; maybe something else is missing here.
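If you do end up backing the policy out as the root user, it is a single call. A minimal boto3 sketch using the bucket name from the question:

import boto3

s3 = boto3.client("s3")

# Removes the bucket policy entirely so the explicit Deny no longer applies.
s3.delete_bucket_policy(Bucket="cf-templates-98d9d7a96z21x-us-east-1")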

How to give access to all AWS resources based on a resource tag?

I’m trying to create an IAM Admin role that has access to all AWS resources, across all services, that have a specific tag. In other words, I need the equivalent of AWS’ native “Administrator” but for tagged resources only. How do I accomplish this?
For context, I need team-specific IAM admin roles. If an EC2 server, or an S3 bucket, or an ECS task has the tag "team" with the tag's value being the team's name, that role should be able to administer those resources.
What have I tried so far?
1
The first approach was the most obvious: copy the AWS Administrator role and add a Condition to it:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "aws:ResourceTag/team": "teamA"
        }
      }
    }
  ]
}
This is something that's described in this related post, but it does not work.
The AWS documentation Controlling access to AWS resources using resource tags notes that some services need a service-specific prefix, such as iam:ResourceTag. I thought that this would work at least for the services that support the generic aws:ResourceTag prefix, but it doesn't even do that.
2
I then tried a more targeted approach by listing the Actions more selectively. I grabbed the AWS AmazonEC2FullAccess policy and added a Condition to it:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "ec2:*",
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringLike": {
          "ec2:ResourceTag/team": "teamA"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "elasticloadbalancing:*",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "ec2:ResourceTag/team": "teamA"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "cloudwatch:*",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "ec2:ResourceTag/team": "teamA"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "autoscaling:*",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "ec2:ResourceTag/team": "teamA"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "iam:CreateServiceLinkedRole",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "iam:AWSServiceName": [
            "autoscaling.amazonaws.com",
            "ec2scheduled.amazonaws.com",
            "elasticloadbalancing.amazonaws.com",
            "spot.amazonaws.com",
            "spotfleet.amazonaws.com",
            "transitgateway.amazonaws.com"
          ]
        },
        "StringLike": {
          "ec2:ResourceTag/team": "teamA"
        }
      }
    }
  ]
}
I tried this with a generic "Resource": "*" and a specific "Resource": "arn:aws:ec2:*:*:instance/*", neither of which worked. The EC2 console either reports "API Error" or "You do not have any instances in this region" when I navigate to the EC2 service.
I also tried both the generic aws:ResourceTag and the service-specific condition key, e.g. ec2:ResourceTag.
Any thoughts are appreciated. It seems more and more likely that AWS does not support the "shotgun" approach I'm looking for.
If a shotgun approach is not possible, has anyone compiled an IAM policy that accomplishes resource tags-based access for all AWS services?
I've been testing this a lot, and you cannot even trust the policy simulator. In theory, any service listed here with "Authorization based on tags" set to "yes" can use the ResourceTag condition. The only feasible way I've found is to go service by service in the policy generator looking for service-specific conditions that you can add, which is tedious. I'll try to update my answer with a list of conditions based on the ResourceTag element that actually work.
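For anyone who wants to reproduce that kind of testing, here is a rough sketch of driving the simulator from boto3 with a resource-tag context key; the action, ARN and tag value are arbitrary examples, and as noted above the results may not be trustworthy:

import json

import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "*",
        "Resource": "*",
        "Condition": {"StringLike": {"aws:ResourceTag/team": "teamA"}},
    }],
}

# Example action and ARN; the resource tag is supplied as a context key.
response = iam.simulate_custom_policy(
    PolicyInputList=[json.dumps(policy)],
    ActionNames=["ec2:StartInstances"],
    ResourceArns=["arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0"],
    ContextEntries=[{
        "ContextKeyName": "aws:resourcetag/team",
        "ContextKeyValues": ["teamA"],
        "ContextKeyType": "string",
    }],
)
for result in response["EvaluationResults"]:
    print(result["EvalActionName"], result["EvalDecision"])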

How to Give Amazon SES Permission to Write to Your Amazon S3 Bucket

I want Amazon SES to be able to receive emails, so I followed this tutorial:
http://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email-getting-started-receipt-rule.html
When I am at the last step (creating the rule), it fails with the following error:
Could not write to bucket: "email-receiving"
I googled and found information (http://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email-permissions.html) that should fix the issue.
However, when adding my policy statement, I get an error: "This policy contains the following error: Has prohibited field Principal. For more information about the IAM policy grammar, see AWS IAM Policies."
My policy statement is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GiveSESPermissionToWriteEmail",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "ses.amazonaws.com"
        ]
      },
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::mybulketname/*",
      "Condition": {
        "StringEquals": {
          "aws:Referer": "my12accountId"
        }
      }
    }
  ]
}
If I take off
"Principal": {
"Service": [
"ses.amazonaws.com"
]
}
then Validate Policy will pass.
Thanks
Go to the bucket -> Permissions -> Bucket Policy and add:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSESPuts",
      "Effect": "Allow",
      "Principal": {
        "Service": "ses.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::BUCKET_NAME/*",
      "Condition": {
        "StringEquals": {
          "aws:Referer": "YOUR ID"
        }
      }
    }
  ]
}
Read more here https://docs.aws.amazon.com/ses/latest/DeveloperGuide/receiving-email-permissions.html
To find your AWS account ID number on the AWS Management Console, choose Support on the navigation bar on the upper-right, and then choose Support Center. Your currently signed-in account ID appears in the upper-right corner below the Support menu.
Read more here https://docs.aws.amazon.com/IAM/latest/UserGuide/console_account-alias.html
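If you prefer to fetch the account ID programmatically (for the aws:Referer value above), a one-line boto3 sketch:

import boto3

# The 12-digit account ID for the currently configured credentials.
account_id = boto3.client("sts").get_caller_identity()["Account"]
print(account_id)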
I followed this advice but I was still having the issue. After much debugging, I realized that SES was failing to write because I had default server-side encryption on the bucket set to "AWS-KMS".
I did a five-minute Google search and couldn't find this incompatibility documented anywhere.
You can work around this by updating your default encryption setting on the target bucket to either "AES-256" or "None".
This resolved the problem for me:
Create the policy on the bucket you want to grant SES permission to (as a bucket policy), not in IAM.
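If the console keeps rejecting it, a small boto3 sketch of applying it as a bucket policy instead; the bucket name and account ID are placeholders:

import json

import boto3

s3 = boto3.client("s3")

# Placeholder bucket name and account ID.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowSESPuts",
        "Effect": "Allow",
        "Principal": {"Service": "ses.amazonaws.com"},
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::BUCKET_NAME/*",
        "Condition": {"StringEquals": {"aws:Referer": "111122223333"}},
    }],
}

s3.put_bucket_policy(Bucket="BUCKET_NAME", Policy=json.dumps(bucket_policy))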
Note: I continued to have this error even after correctly specifying permissions. If you are working cross-region (e.g. SES is in N. Virginia and the S3 bucket is in Africa), then you either need to specify the bucket name together with its region or just create the bucket in the same region.
I had the same problem; if I just delete the "Condition", the policy passes and the RuleSet is OK:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GiveSESPermissionToWriteEmail",
      "Effect": "Allow",
      "Principal": {
        "Service": "ses.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::mybulketname/*"
    }
  ]
}