Lambda Function Does Not Add Rule to Security Group

I have this Python 3.7 code that runs fine from my local computer. It is the code for my Lambda function. However, when I test it in AWS, it does not add the inbound rule to the security group. I would like help getting it to work; again, it works when I run it from my local computer.
import boto3

ec2 = boto3.client('ec2')

def modify_sg_add_rules(event, context):
    response = ec2.authorize_security_group_ingress(
        GroupName='boto3-sg',
        IpPermissions=[
            {
                'FromPort': 1521,
                'IpProtocol': 'tcp',
                'IpRanges': [
                    {
                        'CidrIp': '12.345.67.890/32',
                        'Description': 'My home IP',
                    },
                ],
                'ToPort': 1521,
            },
        ],
        DryRun=False
    )  # closes response
    return response

#if __name__ == '__main__':
#    modify_sg_add_rules()
These are the permissions in the policy that is attached to the role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeSecurityGroups",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:RevokeSecurityGroupIngress"
      ],
      "Resource": "*"
    }
  ]
}
Please, help me!
Thank you!
--Willie

Based on the comments.
The issue was caused by using the wrong name for the Lambda function handler. Instead of modify_sg_add_rules, it should be lambda_handler, which is the default name for the handler.
Thus, the solution was to rename modify_sg_add_rules to lambda_handler. The alternative is to change the handler setting from its default name to modify_sg_add_rules.
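For illustration, the renamed entry point would look like this (assuming the source file keeps the console's default module name lambda_function.py, so the handler setting reads lambda_function.lambda_handler):

import boto3

ec2 = boto3.client('ec2')

# Renamed from modify_sg_add_rules so it matches the default handler setting
def lambda_handler(event, context):
    ...  # body unchanged from the question's code

Alternatively, keep the original function name and change the function's handler setting (Runtime settings in the console) to lambda_function.modify_sg_add_rules.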

Related

Adding custom cidr to ingress security group using Lambda without default vpc

First of all, I have been searching Stack Overflow and the internet for this, but I didn't find exactly where the issue is.
Basically, I am trying to add custom CIDR IPs to a security group via a Lambda function. I have given all the appropriate permissions (as far as I can tell). I also tried attaching the VPC (which is non-default) to the Lambda function to access the security group, though I later removed that.
But I am getting: "An error occurred (VPCIdNotSpecified) when calling the AuthorizeSecurityGroupIngress operation: No default VPC for this user"
Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ec2:RevokeSecurityGroupIngress",
        "ec2:CreateNetworkInterface",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DescribeVpcs",
        "ec2:DeleteNetworkInterface",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "wafv2:GetIPSet",
        "logs:CreateLogGroup",
        "wafv2:UpdateIPSet"
      ],
      "Resource": [
        "arn:aws:logs:us-west-2:xxxx:log-group:xxx:log-stream:*",
        "arn:aws:wafv2:us-west-2:xxx:*/ipset/*/*"
      ]
    }
  ]
}
Lambda function:
#!/usr/bin/python3.9
import boto3

ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    response = ec2.authorize_security_group_ingress(
        GroupId='sg-xxxxxxx'   # note: missing comma here, the syntax error mentioned in the answer below
        IpPermissions=[
            {
                'FromPort': 443,
                'IpProtocol': 'tcp',
                'IpRanges': [
                    {
                        'CidrIp': '1x.1x.x.1x/32',
                        'Description': 'adding test cidr using lambda'
                    },
                ],
                'ToPort': 443
            }
        ],
        DryRun=True
    )
    return response
Could someone point me in the right direction? The VPC is non-default. All I need is to add an ingress rule to an existing security group within a non-default VPC.
Thanks
Found the solution: initially it was a syntax error (the missing comma after GroupId), but after googling I thought it required a VPC, so I added a VPC to the Lambda configuration, which was not required for this purpose.
For anyone having the same issue (you only want to update the security group with the CIDR): below are the correct function and permissions (the function isn't complete, as depending on your use case you may want to delete old rules too; see the sketch after the policy):
Lambda function:
#!/usr/bin/python3.9
import boto3

ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    response = ec2.authorize_security_group_ingress(
        DryRun=False,
        GroupId='sg-0123456789',
        IpPermissions=[
            {
                'FromPort': 443,
                'IpProtocol': 'tcp',
                'IpRanges': [
                    {
                        'CidrIp': '1x.2x.3x.4x/32',
                        'Description': 'Security group updated via lambda'
                    }
                ],
                'ToPort': 443
            }
        ]
    )
    return response
IAM Policy on lambda execution role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:ModifySecurityGroupRules",
        "ec2:UpdateSecurityGroupRuleDescriptionsIngress"
      ],
      "Resource": "arn or all"
    }
  ]
}
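Since the note above mentions deleting old rules, here is a minimal sketch of the reverse call (my own addition, using the same placeholder group ID and a hypothetical stale CIDR; it also needs ec2:RevokeSecurityGroupIngress in the policy):

import boto3

ec2 = boto3.client('ec2')

def revoke_old_rule():
    # Mirror of authorize_security_group_ingress: removes the matching rule
    return ec2.revoke_security_group_ingress(
        GroupId='sg-0123456789',                            # same placeholder as above
        IpPermissions=[
            {
                'FromPort': 443,
                'ToPort': 443,
                'IpProtocol': 'tcp',
                'IpRanges': [{'CidrIp': '9x.8x.7x.6x/32'}],  # hypothetical stale CIDR
            }
        ],
    )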

Deleted S3 bucket then recreated it in another region. Now CORS upload is getting error: Only AWS4-HMAC-SHA256 is supported

So I deleted my S3 bucket and then recreated it in another region. I added back my block public access settings (all public access is blocked),
added back my bucket policy:
{
  "Version": "2012-10-17",
  "Id": "<hidden>",
  "Statement": [
    {
      "Sid": "Stmt<hidden>",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<hidden>"
      },
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketVersions",
        "s3:GetBucketLocation",
        "s3:Get*",
        "s3:Put*"
      ],
      "Resource": "arn:aws:s3:::<hidden>"
    }
  ]
}
and added back my CORS policy:
[
  {
    "AllowedHeaders": [
      "Authorization"
    ],
    "AllowedMethods": [
      "GET",
      "POST",
      "PUT"
    ],
    "AllowedOrigins": [
      <hidden>
    ],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]
and my principal user (whose access keys and secret keys my boto3 code uses) is back on the bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:GetAccessPoint",
        "s3:PutAccountPublicAccessBlock",
        "s3:GetAccountPublicAccessBlock",
        "s3:ListAllMyBuckets",
        "s3:ListAccessPoints",
        "s3:ListJobs",
        "s3:CreateJob"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketVersions",
        "s3:GetBucketLocation",
        "s3:Get*",
        "s3:Put*",
        "s3:DeleteObject"
      ],
      "Resource": [
        my nice buckets
      ]
    }
  ]
}
The point I am making is that nothing has changed. I just changed the bucket region, and now the CORS presigned POST upload that was working while the bucket was in the old region isn't. I keep getting the error mentioned in the title. It's very weird, and AWS has not gotten back to me yet.
The full error:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>InvalidArgument</Code>
  <Message>Only AWS4-HMAC-SHA256 is supported</Message>
  <ArgumentName>X-Amz-Algorithm</ArgumentName>
  <ArgumentValue>undefined</ArgumentValue>
  <RequestId></RequestId>
  <HostId></HostId>
</Error>
It says X-Amz-Algorithm is undefined, but why?
I moved the bucket from us-east-2 to us-east-1 to make the bucket play nice with AWS Elastic Transcoder.
Here is the full code being used to generate the presigned POST (a guess at a possible fix follows after it):
# Framework imports added for completeness; project helpers such as
# checkIfUserEmailIsValidated and the AWS_* settings constants are defined
# elsewhere in the original project.
import random
from datetime import datetime

import boto3
from django.shortcuts import get_object_or_404
from rest_framework.response import Response
from rest_framework.status import HTTP_401_UNAUTHORIZED
from rest_framework.views import APIView

class PrivateGeneratePresignedUrlResource(APIView):
    def get(self, request, *args, **kwargs):
        userid = kwargs.get('userid')
        contentpostid = kwargs.get('contentpostid')
        contenttype = kwargs.get('contentype')
        if checkIfUserEmailIsValidated(request.user):
            if checkIfUserIsContentCreator(request.user):
                if checkIfUserIsActive(request.user):
                    user = getUserObject(request.user)
                    if user.id == int(userid):
                        contentcreatorobject = user.contentcreatoruserid
                        get_object_or_404(ContentFeedPost, id=int(contentpostid), contentcreator=contentcreatorobject)
                        keytime = datetime.now().strftime('%H%M%S%f')
                        randomkey = random.randrange(10000000000000, 99999999999999)
                        awskey = keytime + str(randomkey) + 'raw'
                        fields = {'acl': 'bucket-owner-full-control',
                                  'x-amz-meta-user': userid,
                                  'x-amz-meta-contentpost': contentpostid,
                                  'x-amz-meta-rawbucketkey': str(awskey),
                                  'content-type': contenttype}
                        conditions = [
                            {'acl': 'bucket-owner-full-control'},
                            {'x-amz-meta-user': userid},
                            {'x-amz-meta-contentpost': contentpostid},
                            {'x-amz-meta-rawbucketkey': str(awskey)},
                            {'content-type': contenttype}
                        ]
                        s3 = boto3.client('s3',
                                          aws_access_key_id=AWS_ACCESS_KEY_ID,
                                          aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
                                          region_name=AWS_US_1_REGION)
                        post = s3.generate_presigned_post(
                            Bucket=AWS_S3_SHOFI_KIRKE_UPLOAD_BUCKET_NAME,
                            Key=awskey,
                            Fields=fields,
                            Conditions=conditions
                        )
                        print('here is the post>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>')
                        print(post)
                        return Response({
                            'url': post['url'],
                            'fields': post['fields'],
                            'uriroot': AWS_S3_SHOFI_KIRKE_UPLOAD_BUCKET_ROOT_URI
                        })
                    context = {'param userid is not request user id'}
                    return Response(context, status=HTTP_401_UNAUTHORIZED)
                context = {'content creator is not active'}
                return Response(context, status=HTTP_401_UNAUTHORIZED)
            context = {'user is not content creator'}
            return Response(context, status=HTTP_401_UNAUTHORIZED)
        context = {'user needs to validate email'}
        return Response(context, status=HTTP_401_UNAUTHORIZED)
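No accepted fix appears in the thread, but one thing worth checking (an assumption on my part, not something confirmed above): pin the client to the bucket's new region and force Signature Version 4, so that generate_presigned_post emits the X-Amz-Algorithm form field, and make sure the front-end submits every entry of post['fields'] unchanged ("undefined" in the error looks like a JavaScript value for a missing field). A sketch:

import boto3
from botocore.client import Config

# Hypothetical client setup: force SigV4 and the bucket's new region so the
# presigned POST form fields include X-Amz-Algorithm / X-Amz-Signature.
s3 = boto3.client(
    's3',
    region_name='us-east-1',
    config=Config(signature_version='s3v4'),
)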

Execute ssm.send_command to EC2 from Lambda. IAM problems

I have problems executing a command on a Windows machine from a Lambda function using ssm.send_command in Python. This Lambda function should execute a simple command on the Windows machine:
import boto3

ssm = boto3.client('ssm')
region = 'us-east-1'
instances = ['i-XXXXXXXXXXXXX']

def lambda_handler(event, context):
    response = ssm.send_command(
        InstanceIds=instances,
        DocumentName='AWS-RunPowerShellScript',
        DocumentVersion='$DEFAULT',
        DocumentHash='2142e42a19e0955cc09e43600bf2e633df1917b69d2be9693737dfd62e0fdf61',
        DocumentHashType='Sha256',
        TimeoutSeconds=123,
        Comment='string',
        Parameters={
            'commands': [
                # 'query user'
                'mkdir test-dir'
            ]
        },
        MaxErrors='1',
        CloudWatchOutputConfig={
            'CloudWatchLogGroupName': 'WindowsLogs',
            'CloudWatchOutputEnabled': True
        }
    )
    print(response)
The execution role for this Lambda function is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:SendCommand"
      ],
      "Resource": [
        "arn:aws:ssm:*:*:document/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ssm:SendCommand"
      ],
      "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:*"
      ]
    },
    {
      "Action": [
        "iam:PassRole"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Also, I added these policies:
AmazonEC2FullAccess
AmazonEC2RoleforSSM
AmazonSSMManagedInstanceCore
CloudWatchLogsFullAccess
AmazonSSMFullAccess
AmazonSSMAutomationRole
AmazonSSMMaintenanceWindowRole
No roles were assigned to the EC2 instance.
Problem: I don't see that the folder 'test-dir' was created on the Windows server. Please, can you help me determine what is missing, or how I can configure the Lambda function to execute the command and send the results to CloudWatch?
Thank you.
You need to assign the AmazonSSMFullAccess policy to the instance's role, otherwise it won't work.
Make sure to restart the instance after the change.
If that doesn't work:
Add try and except blocks to your code to check what the error is (a sketch follows below).
Check that you have the SSMAgent installed on your instance (connect to it, open PowerShell and execute Restart-Service AmazonSSMAgent).
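Following that first suggestion, a minimal sketch of what the wrapped call might look like (placeholders as in the question):

import boto3
from botocore.exceptions import ClientError

ssm = boto3.client('ssm')

def lambda_handler(event, context):
    try:
        response = ssm.send_command(
            InstanceIds=['i-XXXXXXXXXXXXX'],           # placeholder from the question
            DocumentName='AWS-RunPowerShellScript',
            Parameters={'commands': ['mkdir test-dir']},
        )
        print(response)
    except ClientError as e:
        # InvalidInstanceId here typically means the instance is not registered with SSM
        print(e.response['Error']['Code'], e.response['Error']['Message'])
        raise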
Thank you @fsinis90 for your recommendations.
I tried them, and I also added these policies to my instance's role:
AWSHealthFullAccess
AmazonEC2RoleforSSM
AWSConfigUserAccess
AmazonSSMFullAccess
CloudWatchReadOnlyAccess
And it helped.

Not able to register Snapshot repository for AWS es domain

I am trying to register a snapshot repository. I have used the below role and policy:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "",
    "Effect": "Allow",
    "Principal": {
      "Service": "es.amazonaws.com"
    },
    "Action": "sts:AssumeRole"
  }]
}
And the policy is as below:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Action": ["s3:ListBucket"],
    "Effect": "Allow",
    "Resource": ["arn:aws:s3:::es-backuptest"]
  }, {
    "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "iam:PassRole"],
    "Effect": "Allow",
    "Resource": ["arn:aws:s3:::es-backuptest/*"]
  }]
}
And I am using the below Python script:
from boto.connection import AWSAuthConnection

class ESConnection(AWSAuthConnection):
    def __init__(self, region, **kwargs):
        super(ESConnection, self).__init__(**kwargs)
        self._set_auth_region_name(region)
        self._set_auth_service_name("es")

    def _required_auth_capability(self):
        return ['hmac-v4']

if __name__ == "__main__":
    client = ESConnection(
        region='ap-south-1',
        host='es.domain.com',
        aws_access_key_id='test_id',
        aws_secret_access_key='test_secret_id', is_secure=False)

    print('Registering Snapshot Repository')
    resp = client.make_request(method='POST',
                               path='/_snapshot/snapshot-backup',
                               data='{"type": "s3","settings": { "bucket": "es-backuptest","region": "ap-south-1","role_arn": "arn:aws:iam::arn:aws:iam::arn:aws:iam::rolename"}}')
    body = resp.read()
    print(body)
After having all this in place, I am running the Python script to register, but I am getting the below error:
{"Message":"Cross-account pass role is not allowed."}
Could anyone please let me know what I am missing here?
There was a mistake in the bucket configuration, so I changed it as below:
data='{"type": "s3","settings": { "bucket": "S3-test-bucket","region": "us-east-1","base_path":"es-backuptest/","role_arn": "arn:aws:iam::rolename"}}')
This solved the issue.
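Plugged back into the script above, the corrected request would read (same anonymized placeholders as before):

resp = client.make_request(method='POST',
                           path='/_snapshot/snapshot-backup',
                           data='{"type": "s3","settings": { "bucket": "S3-test-bucket","region": "us-east-1","base_path":"es-backuptest/","role_arn": "arn:aws:iam::rolename"}}')
print(resp.read())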

Mandatory tagging when launching EC2 instance

In AWS, is there a way to force an IAM user to tag the instance he/she is about to launch? It doesn't matter what the value is. I want to make sure it is correctly tagged so that long-running instances can be properly identified and the owner notified. Currently tagging is optional.
What I do currently is to use CloudTrail and identify the instances with their IAM users. I do not like it because it is extra work to run the script periodically, and CloudTrail has only 7 days' worth of data. It would be nice if AWS had an instance attribute for owner.
Using keypairs to identify the owners is not a viable solution in our case. Has anyone faced this problem before, and how did you tackle it?
One way: Don't give them IAM permissions to launch boxes. Instead, have a web service that allows them to do it. (Production should be fully automated anyway). When they use your service, you can enforce all the rules you want. Yes, it's quite a bit of work, so not for everybody.
Currently tagging is optional.
It's worse than that. Tagging requires a second API call, so even when using the API, things can launch without tags because of a hiccup. A sketch of that two-call pattern is below.
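To make the race concrete, here is the classic launch-then-tag sequence in boto3 (illustrative values only; note that newer API versions also accept TagSpecifications in run_instances, which tags atomically at launch):

import boto3

ec2 = boto3.client('ec2')

# Call 1: launch (the instance exists untagged from this moment on)
reservation = ec2.run_instances(
    ImageId='ami-12345678',     # hypothetical AMI ID
    InstanceType='t2.micro',
    MinCount=1,
    MaxCount=1,
)
instance_id = reservation['Instances'][0]['InstanceId']

# Call 2: tag; if this call fails, the instance stays untagged
ec2.create_tags(
    Resources=[instance_id],
    Tags=[{'Key': 'Owner', 'Value': 'some-user'}],
)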
I resolved this by using AWS Lambda. When CloudTrail creates an object in S3, it triggers an event that causes a Lambda function to execute. The Lambda function then parses the S3 object and creates the tag. There is a lag of ~2 minutes, but the solution works perfectly.
As @helloV mentions, this is possible by using AWS CloudTrail logs (once properly enabled) and AWS Lambda. I was able to accomplish this with the following code running in a Python Lambda function:
# Python 2-era Lambda code, as posted; imports added for completeness.
# aws_key / aws_secret_key are defined elsewhere in the original.
import gzip
import json
import StringIO
import urllib

import boto3

s3 = boto3.client('s3')
ec2 = boto3.client(service_name='ec2', aws_access_key_id=aws_key, aws_secret_access_key=aws_secret_key)

def lambda_handler(event, context):
    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key']).decode('utf8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        compressed_file = StringIO.StringIO()
        compressed_file.write(response['Body'].read())
        compressed_file.seek(0)
        decompressed_file = gzip.GzipFile(fileobj=compressed_file, mode='rb')
        successful_tags = 0
        json_data = json.load(decompressed_file)
        for record in json_data['Records']:
            if record['eventName'] == 'RunInstances':
                instance_user = record['userIdentity']['userName']
                instances_set = record['responseElements']['instancesSet']
                for instance in instances_set['items']:
                    instance_id = instance['instanceId']
                    ec2.create_tags(Resources=[instance_id], Tags=[{'Key': 'Owner', 'Value': instance_user}])
                    successful_tags += 1
        return 'Tagged ' + str(successful_tags) + ' instances successfully'
    except Exception as e:
        print(e)
        print('Error tagging object {} from bucket {}'.format(key, bucket))
        raise e
Check out the capitalone.io/cloud-custodian open source project -- it has the ability to enforce policies like this
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GrantIAMPassRoleOnlyForEC2",
      "Action": [
        "iam:PassRole"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:iam::*:role/ec2tagrestricted"
      ],
      "Condition": {
        "StringEquals": {
          "iam:PassedToService": "ec2.amazonaws.com"
        }
      }
    },
    {
      "Sid": "ReadOnlyEC2WithNonResource",
      "Action": [
        "ec2:Describe*",
        "iam:ListInstanceProfiles"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Sid": "ModifyingEC2WithNonResource",
      "Action": [
        "ec2:CreateKeyPair",
        "ec2:CreateSecurityGroup"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Sid": "RunInstancesWithTagRestrictions",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:us-east-1:*:instance/*",
        "arn:aws:ec2:us-east-1:*:volume/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/test": "${aws:userid}"
        }
      }
    },
    {
      "Sid": "RemainingRunInstancePermissionsNonResource",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": [
        "arn:aws:ec2:us-east-1::image/*",
        "arn:aws:ec2:us-east-1::snapshot/*",
        "arn:aws:ec2:us-east-1:*:network-interface/*",
        "arn:aws:ec2:us-east-1:*:key-pair/*",
        "arn:aws:ec2:us-east-1:*:security-group/*"
      ]
    },
    {
      "Sid": "EC2RunInstancesVpcSubnet",
      "Effect": "Allow",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:us-east-1:*:subnet/*",
      "Condition": {
        "StringEquals": {
          "ec2:Vpc": "arn:aws:ec2:us-east-1:*:vpc/vpc-8311b8f9"
        }
      }
    },
    {
      "Sid": "EC2VpcNonResourceSpecificActions",
      "Effect": "Allow",
      "Action": [
        "ec2:DeleteNetworkAcl",
        "ec2:DeleteNetworkAclEntry",
        "ec2:DeleteRoute",
        "ec2:DeleteRouteTable",
        "ec2:AuthorizeSecurityGroupEgress",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:RevokeSecurityGroupEgress",
        "ec2:RevokeSecurityGroupIngress",
        "ec2:DeleteSecurityGroup",
        "ec2:CreateNetworkInterfacePermission",
        "ec2:CreateRoute",
        "ec2:UpdateSecurityGroupRuleDescriptionsEgress",
        "ec2:UpdateSecurityGroupRuleDescriptionsIngress"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:Vpc": "arn:aws:ec2:us-east-1:*:vpc/vpc-8311b8f9"
        }
      }
    },
    {
      "Sid": "AllowInstanceActionsTagBased",
      "Effect": "Allow",
      "Action": [
        "ec2:RebootInstances",
        "ec2:StopInstances",
        "ec2:TerminateInstances",
        "ec2:StartInstances",
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:AssociateIamInstanceProfile",
        "ec2:DisassociateIamInstanceProfile",
        "ec2:GetConsoleScreenshot",
        "ec2:ReplaceIamInstanceProfileAssociation"
      ],
      "Resource": [
        "arn:aws:ec2:us-east-1:347612567792:instance/*",
        "arn:aws:ec2:us-east-1:347612567792:volume/*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/test": "${aws:userid}"
        }
      }
    },
    {
      "Sid": "AllowCreateTagsOnlyLaunching",
      "Effect": "Allow",
      "Action": [
        "ec2:CreateTags"
      ],
      "Resource": [
        "arn:aws:ec2:us-east-1:347612567792:instance/*",
        "arn:aws:ec2:us-east-1:347612567792:volume/*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:CreateAction": "RunInstances"
        }
      }
    }
  ]
}
This policy restricts a user to launching an EC2 instance only if the tag key is test and the value is the variable ${aws:userid}; the different values it can take can be found here.
Notable things:
This does not restrict the number of EC2 instances a user can launch.
A user can change the tags of existing instances and gain control of them.
We can use TagKeys (https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-tag-keys) to tackle the above two situations, but I did not do it; a sketch of the idea follows.
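For reference, a hedged sketch of how the aws:TagKeys condition from that link could be folded into the AllowCreateTagsOnlyLaunching statement above, so that only the test key can ever be created (untested; the account ID and key name are copied from the policy above):

{
  "Sid": "AllowCreateTagsOnlyLaunchingRestrictedKeys",
  "Effect": "Allow",
  "Action": "ec2:CreateTags",
  "Resource": [
    "arn:aws:ec2:us-east-1:347612567792:instance/*",
    "arn:aws:ec2:us-east-1:347612567792:volume/*"
  ],
  "Condition": {
    "StringEquals": {
      "ec2:CreateAction": "RunInstances"
    },
    "ForAllValues:StringEquals": {
      "aws:TagKeys": ["test"]
    }
  }
}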
Attach this policy to the user or group to prevent them from launching an instance without tagging it:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Deny",
    "Action": "ec2:RunInstances",
    "Resource": "*",
    "Condition": {
      "Null": {
        "aws:RequestTag/Owner": "true"
      }
    }
  }
}
When the user tries to launch an instance, they'll get an error:
(If anyone knows a way to display a cleaner error message, please let us know in the comments.)
Decode the error like so:
aws sts decode-authorization-message \
    --encoded-message <encoded-message> \
    --query DecodedMessage --output text | jq '.'
Part of the (giant) response is as follows:
{
  "allowed": false,
  "explicitDeny": true,
  "matchedStatements": {
    "items": [
      {
        "statementId": "",
        "effect": "DENY",
        "principals": {
          "items": [
            {
              "value": "AIDATDOMLI3YFAYEBFGSO"
            }
          ]
        },
        "principalGroups": {
          "items": []
        },
        "actions": {
          "items": [
            {
              "value": "ec2:RunInstances"
            }
          ]
        },
        "resources": {
          "items": [
            {
              "value": "*"
            }
          ]
        },
        "conditions": {
          "items": [
            {
              "key": "aws:RequestTag/Owner",
              "values": {
                "items": [
                  {
                    "value": "true"
                  }
                ]
              }
            }
          ]
        }
      }
    ]
  }
}
It shows that the launch failed because the Owner tag is missing.
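The same decode can be done from Python, which may be handier inside a script (a sketch using the corresponding boto3 call; the caller needs sts:DecodeAuthorizationMessage):

import json

import boto3

sts = boto3.client('sts')

def decode_auth_error(encoded_message):
    # Equivalent of the CLI call above
    resp = sts.decode_authorization_message(EncodedMessage=encoded_message)
    return json.loads(resp['DecodedMessage'])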
Do you use/require userdata scripts at launch time? We use that script process to properly tag each instance as it is launched.
We burn a support script into the AMI; it is launched by the userdata and parses the command line for parameters. These parameters are then used to create tags for the newly launched instances.
For manual launches, the user must load the correct userdata script for this to work. But from an automated launching script, or from a properly configured Launch Configuration in an Auto Scaling group, it works perfectly.
<script>
PowerShell -ExecutionPolicy Bypass -NoProfile -File c:\tools\server_userdata.ps1 -function Admin -environment production
</script>
Using this method, an instance launched with that userdata will be automatically tagged with the Function and Environment tags.
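For comparison, a minimal Python take on the same self-tagging idea (my own sketch, with assumptions: the tag values that the PowerShell script receives as parameters are hard-coded here, the instance role allows ec2:CreateTags, and plain IMDSv1 metadata access is enabled):

import urllib.request

import boto3

METADATA = 'http://169.254.169.254/latest/meta-data'

def tag_self(function, environment):
    # Discover who and where we are from the instance metadata service
    instance_id = urllib.request.urlopen(METADATA + '/instance-id').read().decode()
    az = urllib.request.urlopen(METADATA + '/placement/availability-zone').read().decode()
    region = az[:-1]  # strip the trailing zone letter to get the region

    # Tag ourselves, mirroring the -function / -environment parameters above
    ec2 = boto3.client('ec2', region_name=region)
    ec2.create_tags(
        Resources=[instance_id],
        Tags=[
            {'Key': 'Function', 'Value': function},
            {'Key': 'Environment', 'Value': environment},
        ],
    )

tag_self('Admin', 'production')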