Automating source IP of OpenSearch using Boto3/Lambda

Every time our IPs change, I have to keep updating this policy to access Kibana. I thought I could automate this, but is there any way I can delete an existing policy and create a new one from Lambda? I'm unable to find anything in Boto3 regarding this.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:ap-south-1:xxxxxxxxxxx/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "xxxxxxxx",
            "xxxxxxx",
            "xxxxxxxxx",
            "xxxxxxxxx"
          ]
        }
      }
    }
  ]
}

In Boto3 you can use update_elasticsearch_domain_config, which accepts an AccessPolicies parameter. There is no call to modify just the IP addresses in place; you have to overwrite the entire policy document.
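A minimal Lambda sketch of that overwrite, assuming a hypothetical domain name, account ID, and an event payload carrying the new IPs (none of these values come from the question):

import json
import boto3

es = boto3.client('es')

def lambda_handler(event, context):
    # Hypothetical inputs: the domain and the new allowed CIDR ranges.
    domain_name = 'my-domain'
    new_ips = event['allowed_ips']  # e.g. ["203.0.113.10/32", "198.51.100.0/24"]

    # Rebuild the full access policy; AccessPolicies replaces the whole document.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "es:*",
            "Resource": "arn:aws:es:ap-south-1:123456789012:domain/my-domain/*",
            "Condition": {"IpAddress": {"aws:SourceIp": new_ips}}
        }]
    }

    es.update_elasticsearch_domain_config(
        DomainName=domain_name,
        AccessPolicies=json.dumps(policy)
    )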

Related

Enforce tags for S3-bucket creation

I have been trying to create an IAM policy to enforce tagging for S3 resources.
The policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Deny",
      "Action": [
        "s3:CreateBucket"
      ],
      "Resource": "*",
      "Condition": {
        "Null": {
          "aws:RequestTag/Tag1": "true"
        }
      }
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Deny",
      "Action": [
        "s3:CreateBucket"
      ],
      "Resource": "*",
      "Condition": {
        "Null": {
          "aws:RequestTag/Tag2": "true"
        }
      }
    }
  ]
}
This condition works for EC2 and Elastic Beanstalk, but for S3 it fails with an error.
What is the error here, and what other permissions do I need to enforce tagging for S3 resources?
Sorry, I saw some questions like mine, but none really answers my question.
The CreateBucket API doesn't support tags; they have to be added later via PutBucketTagging.
Consequently, you cannot enforce tags at creation time, to the best of my knowledge. You could implement a reactive process instead, e.g. periodically scanning buckets to ensure proper tagging, as sketched below.
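A minimal sketch of such a reactive scan with boto3; the required tag keys Tag1 and Tag2 are taken from the question's policy, everything else is an assumption:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
REQUIRED_TAGS = {'Tag1', 'Tag2'}

def untagged_buckets():
    # Return the buckets missing any of the required tag keys.
    missing = []
    for bucket in s3.list_buckets()['Buckets']:
        try:
            tag_set = s3.get_bucket_tagging(Bucket=bucket['Name'])['TagSet']
            keys = {tag['Key'] for tag in tag_set}
        except ClientError as e:
            # Buckets with no tags at all raise NoSuchTagSet.
            if e.response['Error']['Code'] != 'NoSuchTagSet':
                raise
            keys = set()
        if not REQUIRED_TAGS.issubset(keys):
            missing.append(bucket['Name'])
    return missing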

restrict aws iam user to a specific region (eu-west-1)

I'm trying to create a policy in which the user exam can access only the eu-west-1 region.
I tried to find a solution but didn't find the right one.
The policy looks something like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "user_arn",
      "Condition": {
        "StringEquals": {
          "aws:RequestedRegion": "eu-west-1"
        }
      }
    }
  ]
}
but it does not seem to work no matter what I do.
What is the best way to allow the user to do whatever he wants, but only in this region?
I found a solution:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestedRegion": "eu-west-1"
        }
      }
    }
  ]
}
This should work as well; however, you are granting full access to EC2 limited to one region. The example below instead denies any EC2 action outside the defined region(s), but grants no privileges by itself (those should be assigned in a separate policy or an Allow statement). Normally this pattern is used as an SCP in AWS Organizations, where you deny Action "*" to force all users to create resources only in the designated regions and to block any API action in unauthorized regions.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": "eu-west-1"
        }
      }
    }
  ]
}

AWS: Permissions for exporting logs from Cloudwatch to Amazon S3

I am trying to export logs from one of my CloudWatch log groups into Amazon S3 using the AWS console.
I followed the guide from the AWS documentation, but with little success. My organization does not allow me to manage IAM roles/policies; however, I was able to find out that my role is allowed all log-related operations (logs:* on all resources within the account).
Currently, I am stuck on the following error message:
Could not create export task. PutObject call on the given bucket failed. Please check if CloudWatch Logs has been granted permission to perform this operation.
My bucket policy is set in the following way:
{
  "Statement": [
    ...
    {
      "Sid": "Cloudwatch Log Export 1",
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.eu-central-1.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Sid": "Cloudwatch Log Export 2",
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.eu-central-1.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
Prior to editing the bucket policy, my error message had been
Could not create export task. GetBucketAcl call on the given bucket failed. Please check if CloudWatch Logs has been granted permission to perform this operation.
but editing the bucket policy fixed that. I would expect allowing PutObject to do the same, but this has not been the case.
Thank you for your help.
When exporting the data, ensure you configure the following correctly:
S3 bucket prefix (optional): the object key prefix under which you want to store the logs.
While creating the policy for PutObject, you must ensure the object prefix is captured adequately. See the diff for the PutObject statement's Resource:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:GetBucketAcl",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-exported-logs",
      "Principal": { "Service": "logs.us-east-2.amazonaws.com" }
    },
    {
      "Action": "s3:PutObject",
      "Effect": "Allow",
-     "Resource": "arn:aws:s3:::my-exported-logs/*",
+     "Resource": "arn:aws:s3:::my-exported-logs/where_i_want_to_store_my_logs*",
      "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } },
      "Principal": { "Service": "logs.us-east-2.amazonaws.com" }
    }
  ]
}
Please check this guide: Export log data to Amazon S3 using the AWS CLI.
The policies there look like the document that you shared, but slightly different.
Assuming that you are doing this in the same account and the same region, check that you are using the right region (in this example it is us-east-2):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:GetBucketAcl",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-exported-logs",
      "Principal": { "Service": "logs.us-east-2.amazonaws.com" }
    },
    {
      "Action": "s3:PutObject",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-exported-logs/*",
      "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } },
      "Principal": { "Service": "logs.us-east-2.amazonaws.com" }
    }
  ]
}
I think that bucket-owner-full-control is not the problem here; the most likely issue is the region. Anyway, take a look at the other two examples in that guide in case you are working across accounts or using a role instead of a user. This solved my issue, which was the same one that you mention.
One thing to check is your encryption settings. According to https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html:
Exporting log data to Amazon S3 buckets that are encrypted by AWS KMS is not supported.
Amazon S3-managed keys (SSE-S3) bucket encryption might solve your problem. If you use SSE-KMS, CloudWatch can't access your encryption key in order to properly encrypt the objects as they are put into the bucket.
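If you want to verify this, a hedged boto3 sketch for inspecting a bucket's default encryption (the bucket name is a placeholder):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
try:
    config = s3.get_bucket_encryption(Bucket='my-exported-logs')
    rule = config['ServerSideEncryptionConfiguration']['Rules'][0]
    algorithm = rule['ApplyServerSideEncryptionByDefault']['SSEAlgorithm']
    print(algorithm)  # 'aws:kms' here would explain the failed export; 'AES256' is SSE-S3
except ClientError as e:
    if e.response['Error']['Code'] == 'ServerSideEncryptionConfigurationNotFoundError':
        print('no default encryption configured')
    else:
        raise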
I had the same situation, and what worked for me was to add the bucket name itself as a resource in the Allow PutObject Sid, like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowLogsExportGetBucketAcl",
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.eu-west-1.amazonaws.com"
      },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Sid": "AllowLogsExportPutObject",
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.eu-west-1.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
I also believe that all the other answers are relevant, especially specifying the export task's time range in milliseconds (see the sketch below).
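For reference, a sketch of the export call itself with boto3; the fromTime and to parameters are epoch timestamps in milliseconds, which is the millisecond detail mentioned above (the task name, log group, bucket, and prefix are placeholders):

import time
import boto3

logs = boto3.client('logs')

# fromTime and to must be epoch milliseconds; passing seconds exports nothing.
now_ms = int(time.time() * 1000)
logs.create_export_task(
    taskName='my-export',
    logGroupName='/my/log-group',
    fromTime=now_ms - 24 * 60 * 60 * 1000,  # the last 24 hours
    to=now_ms,
    destination='my-exported-logs',
    destinationPrefix='where_i_want_to_store_my_logs'
)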

How to give access to all AWS resources based on a resource tag?

I’m trying to create an IAM Admin role that has access to all AWS resources, across all services, that have a specific tag. In other words, I need the equivalent of AWS’ native “Administrator” but for tagged resources only. How do I accomplish this?
For context, I need team-specific IAM admin roles. If an EC2 server, an S3 bucket, or an ECS task has the tag "team" with the tag's value being the team's name, that role should be able to administer those resources.
What have I tried so far?
Attempt 1
The first approach was the most obvious: copy the AWS Administrator role and add a Condition to it:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "aws:ResourceTag/team": "teamA"
        }
      }
    }
  ]
}
This approach is described in this related post, but it does not work.
The AWS documentation Controlling access to AWS resources using resource tags notes that some services need the service-specific prefix, such as iam:ResourceTag. I thought that this would work at least for the services that support the generic aws:ResourceTag prefix, but it doesn't even do that.
Attempt 2
I then tried a more targeted approach by listing the Actions more selectively. I grabbed the AWS AmazonEC2FullAccess policy and added a Condition to it:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "ec2:*",
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringLike": {
          "ec2:ResourceTag/team": "teamA"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "elasticloadbalancing:*",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "ec2:ResourceTag/team": "teamA"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "cloudwatch:*",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "ec2:ResourceTag/team": "teamA"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "autoscaling:*",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "ec2:ResourceTag/team": "teamA"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "iam:CreateServiceLinkedRole",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "iam:AWSServiceName": [
            "autoscaling.amazonaws.com",
            "ec2scheduled.amazonaws.com",
            "elasticloadbalancing.amazonaws.com",
            "spot.amazonaws.com",
            "spotfleet.amazonaws.com",
            "transitgateway.amazonaws.com"
          ]
        },
        "StringLike": {
          "ec2:ResourceTag/team": "teamA"
        }
      }
    }
  ]
}
I tried this with a generic "Resource": "*" and a specific "Resource": "arn:aws:ec2:*:*:instance/*", neither of which worked. The EC2 console either reports an API error or says You do not have any instances in this region when navigating to the EC2 service.
I also tried both the generic aws:ResourceTag condition and the service-specific one, e.g. ec2:ResourceTag.
Any thoughts are appreciated. It seems more and more likely that AWS does not support the "shotgun" approach I'm looking for.
If a shotgun approach is not possible, has anyone compiled an IAM policy that accomplishes resource-tag-based access for all AWS services?
I've been testing this a lot, and you cannot even trust the policy simulator. In theory, any service listed here with "Authorization based on tags" set to "yes" can use the ResourceTag condition. The only feasible way I've found is to go service by service in the policy generator looking for service-specific conditions that you can add, which is tedious. I'll try to update my answer with a list of conditions based on the ResourceTag element that actually work.
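For what it's worth, the service-by-service testing can be scripted against the simulator API, with the caveat above that its results for tag conditions are not always trustworthy. A sketch using the policy from the question and a hypothetical instance ARN:

import json
import boto3

iam = boto3.client('iam')

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "*",
        "Resource": "*",
        "Condition": {"StringLike": {"aws:ResourceTag/team": "teamA"}}
    }]
}

# Simulate one action against one resource, supplying the tag as a context key.
response = iam.simulate_custom_policy(
    PolicyInputList=[json.dumps(policy)],
    ActionNames=['ec2:StartInstances'],
    ResourceArns=['arn:aws:ec2:eu-west-1:123456789012:instance/i-0123456789abcdef0'],
    ContextEntries=[{
        'ContextKeyName': 'aws:resourcetag/team',
        'ContextKeyValues': ['teamA'],
        'ContextKeyType': 'string'
    }]
)
for result in response['EvaluationResults']:
    print(result['EvalActionName'], result['EvalDecision'])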

AWS SCP for EC2 type

I want to allow users to create only t2.micro/small/medium instances for development, and to use only spot instances. I have created an IAM policy to restrict the type/size of instances. In addition, I want to restrict "on-demand" instances (the team MUST opt for spot instances only). What is the cleanest way of achieving this?
Full access is otherwise allowed within the account:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "limitedSize",
      "Effect": "Deny",
      "Action": [
        "ec2:RunInstances",
        "cloudwatch:DescribeAlarms"
      ],
      "Resource": [
        "arn:aws:ec2:*:*:instance/*"
      ],
      "Condition": {
        "ForAnyValue:StringNotLike": {
          "ec2:InstanceType": [
            "t3.*",
            "t2.*"
          ]
        }
      }
    }
  ]
}
Try AWS Service Catalog; that is the exact service that can help you here.
Use the ec2:InstanceMarketType condition key in your IAM policy.
Example (untested):
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "*",
    "Resource": "*",
    "Condition": {
      "StringEquals": {
        "ec2:InstanceMarketType": "spot"
      }
    }
  }
}
References:
Condition Keys for EC2
EC2 Condition Key Example
Another SO Question