I have a simple Lambda function whose job is to make an HTTP GET request to a certain server.
I need to run many copies (hundreds) of the function at the same time, and I want each HTTP GET to come from a distinct source IP address.
My questions:
1. How do I make sure that each 'copy' of the Lambda function will have its own IP address?
2. How do I use the boto3 invoke call to tell AWS that I need N concurrent copies of my Lambda? I am looking here but I cannot find an argument that sets the number of concurrent copies.
Thanks
Avishay
As for question #2, I am using the following code to invoke N concurrent copies of the Lambda function:
import json
import boto3
from concurrent.futures import ThreadPoolExecutor

N = 5
unique_ips = set()
lambda_client = boto3.client('lambda', region_name='us-west-2')

def _lambda_caller(idx):
    test_event = dict(idx=idx)
    res = lambda_client.invoke(
        FunctionName='SimpleHTTPGetter',
        InvocationType='RequestResponse',
        Payload=json.dumps(test_event),
    )
    # Read the payload stream directly instead of using the private _raw_stream attribute
    data = json.loads(res['Payload'].read())
    print('Thread {} is done'.format(idx))
    unique_ips.add(data['body'])

with ThreadPoolExecutor(max_workers=N) as executor:
    for i in range(N):
        executor.submit(_lambda_caller, i)
    # The with-block waits for all submitted calls to finish, so no explicit shutdown() is needed

print('Done')
My Lambda code (short version)
import json
import socket

def lambda_handler(event, context):
    print('-- HTTP Client started')
    hostname = socket.gethostname()
    ip = socket.gethostbyname(hostname)
    print('My IP address is {}'.format(ip))
    return {
        "statusCode": 200,
        "body": ip
    }
You need to create a VPC, make sure you attach a subnet that allows internet access (typically a private subnet routing out through a NAT gateway), and then attach a security group to your Lambda.
Step-by-step guide here:
https://medium.com/@philippholly/aws-lambda-enable-outgoing-internet-access-within-vpc-8dd250e11e12
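For reference, a minimal sketch of attaching an existing function to a VPC from boto3; the subnet and security group IDs below are placeholders, and the function's execution role needs the usual Lambda VPC-access permissions:

import boto3

lambda_client = boto3.client('lambda', region_name='us-west-2')

# Hypothetical IDs: a private subnet that routes 0.0.0.0/0 through a NAT gateway,
# plus a security group that allows outbound HTTP/HTTPS
response = lambda_client.update_function_configuration(
    FunctionName='SimpleHTTPGetter',
    VpcConfig={
        'SubnetIds': ['subnet-xxxxxxxx'],
        'SecurityGroupIds': ['sg-xxxxxxxx'],
    },
)
print(response['VpcConfig'])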
I am trying to create a Lambda script using Python 3.9 which will return the total number of EC2 servers in the AWS account, along with their status and details. Part of my code is:
def lambda_handler(event, context):
    client = boto3.client("ec2")
    #s3 = boto3.client("s3")

    # fetch information about all the instances
    status = client.describe_instances()

    for i in status["Reservations"]:
        instance_details = i["Instances"][0]
        if instance_details["State"]["Name"].lower() in ["shutting-down", "stopped", "stopping", "terminated"]:
            print("AvailabilityZone: ", instance_details['AvailabilityZone'])
            print("\nInstanceId: ", instance_details["InstanceId"])
            print("\nInstanceType: ", instance_details['InstanceType'])
On running this code I get an error.
If I comment out the AvailabilityZone line, the code works fine. If I create a new function with only the AZ parameter in it, all AZs are returned. I don't understand why it fails in the code above.
In Python, it is good practice to use the .get() method to fetch a value from a dict, so that a missing key does not raise an exception.
AvailabilityZone is actually present in the Placement dict and not at the top level of the instance details. You can check the entire response structure in the boto3 documentation below.
Reference
def lambda_handler(event, context):
    client = boto3.client("ec2")
    #s3 = boto3.client("s3")

    # fetch information about all the instances
    status = client.describe_instances()

    for i in status["Reservations"]:
        instance_details = i["Instances"][0]
        if instance_details["State"]["Name"].lower() in ["shutting-down", "stopped", "stopping", "terminated"]:
            print(f"AvailabilityZone: {instance_details.get('Placement', dict()).get('AvailabilityZone')}")
            print(f"\nInstanceId: {instance_details.get('InstanceId')}")
            print(f"\nInstanceType: {instance_details.get('InstanceType')}")
The problem is that in the response of describe_instances, the availability zone is not at the first level of the instance dictionary (in your case instance_details). The availability zone is under the Placement dictionary, so what you need is:
print(f"AvailabilityZone: {instance_details.get('Placement', dict()).get('AvailabilityZone')}")
So, I have this code, and I would love to grab the public IP address of the new Windows instance that will be created when I adjust the desired capacity.
The launch template assigns an automatic tag name when I adjust the desired_capacity. I want to be able to grab the public IP address of the instance with that tag name.
import boto3

session = boto3.session.Session()
client = session.client('autoscaling')

def set_desired_capacity(asg_name, desired_capacity):
    response = client.set_desired_capacity(
        AutoScalingGroupName=asg_name,
        DesiredCapacity=desired_capacity,
    )
    return response

def lambda_handler(event, context):
    asg_name = "test"
    desired_capacity = 1
    return set_desired_capacity(asg_name, desired_capacity)

if __name__ == '__main__':
    print(lambda_handler("", ""))
I took a look at the EC2 client documentation, and I wasn't sure what to use. I just need help modifying my code.
If you know the tag that you are assigning in the Auto Scaling group, then you can just use the describe_instances method. The boto3 docs have an example with filtering. Something like this should work, replacing TAG, VALUE, and TOPICARN with the appropriate values:
import boto3

ec2_client = boto3.client('ec2', 'us-west-2')
sns_client = boto3.client('sns', 'us-west-2')

response = ec2_client.describe_instances(
    Filters=[
        {
            'Name': 'tag:TAG',
            'Values': [
                'VALUE'
            ]
        }
    ]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        ip = instance["PublicIpAddress"]
        sns_publish = sns_client.publish(
            TopicArn='TOPICARN',
            Message=ip,
        )
        print(sns_publish)
Objective:
After an EC2 instance starts
Obtain the IP address
Send a message via Amazon SNS
It can take some time for a public IP address to be assigned to an Amazon EC2 instance. Rather than continually calling DescribeInstances(), it would be easier to run commands on the instance at launch via a User Data script (see "Run commands on your Linux instance at launch" in the Amazon EC2 documentation).
The script could:
Obtain its public IP address via instance metadata (see "Instance metadata and user data" in the Amazon EC2 documentation):
IP=$(curl 169.254.169.254/latest/meta-data/public-ipv4)
Send a message to an Amazon SNS topic with:
aws sns publish --topic-arn xxx --message $IP
If you also want the message to include a name from a tag associated with the instance, the script will need to call aws ec2 describe-instances with its own Instance ID (which can be obtained via the instance metadata) and then extract the name from the tags returned.
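If you prefer boto3 over the AWS CLI for that last step, here is a rough, hypothetical sketch of the lookup; it assumes the instance profile allows ec2:DescribeInstances and sns:Publish, that a Name tag exists, and that TOPICARN is replaced with a real topic ARN:

import boto3
import urllib.request

# Ask the instance metadata service for this instance's own ID and public IP
instance_id = urllib.request.urlopen(
    'http://169.254.169.254/latest/meta-data/instance-id').read().decode()
public_ip = urllib.request.urlopen(
    'http://169.254.169.254/latest/meta-data/public-ipv4').read().decode()

ec2_client = boto3.client('ec2', 'us-west-2')
sns_client = boto3.client('sns', 'us-west-2')

# Look up this instance's tags and pull out the Name tag, if present
reservations = ec2_client.describe_instances(InstanceIds=[instance_id])['Reservations']
tags = reservations[0]['Instances'][0].get('Tags', [])
name = next((t['Value'] for t in tags if t['Key'] == 'Name'), 'unnamed')

sns_client.publish(TopicArn='TOPICARN', Message='{}: {}'.format(name, public_ip))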
The scenario is like this:
I have a microservice which invokes a Lambda function whose role will be to delete things from AWS IoT.
Is there a way I can perform operations in AWS IoT using the Lambda function?
Any article or blog regarding this would be a huge help, as I'm not able to find any integration document on the web.
I found a way to do several operations in AWS IoT using a Lambda function by means of the boto3 APIs.
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/iot.html#IoT.Client.delete_thing_group
The above link has descriptions of the APIs which will help in this case.
A sample Lambda function script to delete a thing from IoT is:
import json
import boto3

def lambda_handler(event, context):
    thing_name = event['thing_name']
    delete_thing(thing_name=thing_name)

def delete_thing(thing_name):
    c_iot = boto3.client('iot')
    print(" DELETING {}".format(thing_name))

    try:
        r_principals = c_iot.list_thing_principals(thingName=thing_name)
    except Exception as e:
        print("ERROR listing thing principals: {}".format(e))
        r_principals = {'principals': []}

    print("r_principals: {}".format(r_principals))
    for arn in r_principals['principals']:
        cert_id = arn.split('/')[1]
        print(" arn: {} cert_id: {}".format(arn, cert_id))

        r_detach_thing = c_iot.detach_thing_principal(thingName=thing_name, principal=arn)
        print("DETACH THING: {}".format(r_detach_thing))

        r_upd_cert = c_iot.update_certificate(certificateId=cert_id, newStatus='INACTIVE')
        print("INACTIVE: {}".format(r_upd_cert))

        r_del_cert = c_iot.delete_certificate(certificateId=cert_id, forceDelete=True)
        print(" DEL CERT: {}".format(r_del_cert))

    r_del_thing = c_iot.delete_thing(thingName=thing_name)
    print(" DELETE THING: {}\n".format(r_del_thing))
And the input event for this Lambda function will be:
{
"thing_name": "MyIotThing"
}
I have three S3 buckets that invoke a Lambda function whenever there is a change in the content of specific objects inside the buckets.
Does anyone know if it is possible, using boto3, to retrieve the objects that triggered the function?
Thanks!
UPDATE
I would like to get the objects that triggered the Lambda function from the response contents. I have tried to get them from the response of the get_function method of the Lambda client, but to no avail:
import boto3
lam = boto3.client('lambda')
response = lam.get_function(FunctionName='mylambdafunction')
Here's some sample code to retrieve the object that triggered the AWS Lambda function invocation:
import urllib.parse
import boto3

# Connect to S3
s3_client = boto3.client('s3')

# This handler is executed every time the Lambda function is triggered
def lambda_handler(event, context):
    # Get the bucket and object key from the event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'])
    localFilename = '/tmp/foo.txt'

    # Download the file from S3 to the local filesystem
    s3_client.download_file(bucket, key, localFilename)

    # Do other stuff here
Basically, it extracts the Bucket and Key (filename) from the event data that is passed to the function, then calls download_file().
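To see the shape of the event this handler reads, here is a minimal, hypothetical local test with a hand-built S3 put-style event; the bucket and key names are placeholders:

# Mimic only the parts of an S3 event notification that the handler uses
test_event = {
    'Records': [
        {
            's3': {
                'bucket': {'name': 'my-example-bucket'},
                'object': {'key': 'some/prefix/my+file.txt'}
            }
        }
    ]
}

lambda_handler(test_event, None)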
I am trying to execute the following code
def createSecurityGroup(self, securitygroupname):
    conn = boto3.resource('ec2')
    response = conn.create_security_group(GroupName=securitygroupname, Description='test')

VPC_NAT_SecurityObject = createSecurityGroup("mysecurity_group")

response_egress_all = VPC_NAT_SecurityObject.authorize_egress(
    IpPermissions=[{'IpProtocol': '-1'}])
and getting the exception below:
EXCEPTION: An error occurred (InvalidParameterValue) when calling the AuthorizeSecurityGroupEgress operation: Only Amazon VPC security groups may be used with this operation.
I tried several different combinations but was not able to set the protocol to all. I used '-1' as explained in the boto3 documentation. Can somebody please suggest how to get this done?
(UPDATE)
1. The boto3.resource("ec2") class is actually a high-level wrapper around the client class. You must create an explicit class instantiation using boto3.resource("ec2").Vpc in order to attach to a specific VPC ID, e.g.
import boto3

ec2_resource = boto3.resource("ec2")
myvpc = ec2_resource.Vpc("vpc-xxxxxxxx")
response = myvpc.create_security_group(
    GroupName=securitygroupname,
    Description='test')
2. Sometimes it is more straightforward to use boto3.client("ec2"). If you check the boto3 EC2 client create_security_group documentation, you will see this:
response = client.create_security_group(
    DryRun=True|False,
    GroupName='string',
    Description='string',
    VpcId='string'
)
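As a rough sketch with a placeholder VPC ID, the client call might look like the following; the returned GroupId can then be used for rule changes (keeping in mind that a new VPC security group already allows all egress by default):

import boto3

ec2_client = boto3.client("ec2")

# Hypothetical VPC ID; creating the group inside a VPC avoids the
# "Only Amazon VPC security groups may be used with this operation" error
response = ec2_client.create_security_group(
    GroupName="mysecurity_group",
    Description="test",
    VpcId="vpc-xxxxxxxx",
)
group_id = response["GroupId"]

# Example rule change using the new group ID
ec2_client.authorize_security_group_ingress(
    GroupId=group_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)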
If you use an automation script/template to rebuild the VPC, e.g. salt-cloud, you need to give the VPC a tag name in order to acquire it automatically from a boto3 script, as sketched below. This will save a lot of hassle when AWS migrates resource IDs from 8 alphanumeric characters to 12 or 15 characters.
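A minimal sketch of that lookup, assuming the VPC carries a hypothetical Name tag of "my-vpc":

import boto3

ec2_client = boto3.client("ec2")

# Find the VPC by its Name tag instead of hard-coding the VPC ID
vpcs = ec2_client.describe_vpcs(
    Filters=[{"Name": "tag:Name", "Values": ["my-vpc"]}]
)["Vpcs"]

vpc_id = vpcs[0]["VpcId"]
print(vpc_id)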
Another option is using CloudFormation, which lets you put everything in a template, with variables, to recreate the VPC stack.