I am running a Python job in AWS Lambda to stop EC2 instances based on tags.
The script runs properly, but even when it completes successfully, the result returned by the function execution is "null".
The Python script is below. I am new to Python scripting; I come from the ops side.
import boto3
import logging

# Set up simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Define the connection
ec2 = boto3.resource('ec2')

def lambda_handler(event, context):
    # Use the filter() method of the instances collection to retrieve
    # all running EC2 instances.
    filters = [
        {
            'Name': 'tag:AutoOff',
            'Values': ['True']
        },
        {
            'Name': 'instance-state-name',
            'Values': ['running']
        }
    ]

    # Filter the instances
    instances = ec2.instances.filter(Filters=filters)

    # Locate all running instances
    RunningInstances = [instance.id for instance in instances]

    # Print the instances for logging purposes
    print(RunningInstances)

    # Make sure there are actually instances to shut down
    if len(RunningInstances) > 0:
        # Perform the shutdown
        shuttingDown = ec2.instances.filter(InstanceIds=RunningInstances).stop()
        print(shuttingDown)
    else:
        print("NOTHING")
In order to get a response from the Lambda, you need to return something (usually a dictionary) from the lambda_handler method. By default, every Python function returns None, which is why you see "null" instead of a meaningful result.

def lambda_handler(event, context):
    # ... your code here ...
    return {"turned_off": RunningInstances}
PS: it is preferable to use the logging.debug/info/... methods instead of print(). You can find more information in the documentation: https://docs.python.org/2.7/library/logging.html
Either way, all the output is saved to CloudWatch Logs. The log stream is created automatically when you create a Lambda function, and you can find all your prints there for debugging.
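For example, a minimal sketch of the handler using the logger that the script already configures instead of print() (the messages and return value here are only illustrative):

import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # ... filter the instances and build RunningInstances as above ...
    RunningInstances = []  # placeholder for the sketch
    if RunningInstances:
        logger.info("Stopping %d instance(s): %s", len(RunningInstances), RunningInstances)
    else:
        logger.info("Nothing to stop")
    return {"turned_off": RunningInstances}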
Related
So I have two different launch templates that I am spinning up into two AZs. The code runs successfully, but only the first launch template "lt" is actually used. Why is that?
import boto3

# I have called the EC2 resource
ec2 = boto3.resource('ec2')

# I have defined my launch templates
lt = {
    'LaunchTemplateName': 'My_Launch_Template',
    'Version': '1'
}

lt_a = {
    'LaunchTemplateName': 'My_Launch_Template_2',
    'Version': '1'
}

# I have defined a function to execute
def lambda_handler(event, context):
    instances = ec2.create_instances(
        LaunchTemplate=lt,
        MinCount=1,
        MaxCount=1
    )

def lambda_handler(event, context):
    instances = ec2.create_instances(
        LaunchTemplate=lt_a,
        MinCount=1,
        MaxCount=1
    )
You define lambda_handler twice, so the second definition silently replaces the first and only one of the two create_instances calls can ever run. I suggest you review the basics of the language before implementing the function on AWS Lambda: get all of your code working on your local machine and, once you're happy, move it to AWS.
Anyway, I would rewrite your code around a helper function. This way you maximize code reuse (and make your life easier):
import boto3

ec2 = boto3.resource('ec2')

def create_instance(launch_template_name, launch_template_version):
    lt = {
        'LaunchTemplateName': launch_template_name,
        'Version': launch_template_version
    }
    return ec2.create_instances(
        LaunchTemplate=lt,
        MinCount=1,
        MaxCount=1
    )

def lambda_handler(event, context):
    first_instance = create_instance('My_Launch_Template', '1')
    second_instance = create_instance('My_Launch_Template_2', '1')
    ...
Any idea how I can modify this to pass the instance id as an argument instead of putting it in the script?
import boto3

def stop_instance(instance_id):
    ec2 = boto3.client("ec2", region_name="us-west-1")
    response = ec2.stop_instances(InstanceIds=[instance_id])
    print(response)

stop_instance()
If your objective is to stop all EC2 instances via a Python script, you can first call the describe-instances method to dynamically build a list of all running EC2 instances (pull the id values of instances whose state is running). From there you can pass that list into your function to stop them all, since the stop-instances method accepts a list of ids.
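A minimal sketch of that approach with boto3 (the region and the single stop_instances call are assumptions here, and pagination is ignored for brevity):

import boto3

def stop_all_running_instances(region_name="us-west-1"):
    ec2 = boto3.client("ec2", region_name=region_name)
    # Describe only instances whose state is "running"
    response = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    instance_ids = [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        # stop_instances accepts the whole list of ids at once
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids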
like this?
import boto3
import sys
import argparse

instance_id = sys.argv[1]

def stop_instance(instance_id):
    ec2 = boto3.client("ec2", region_name="us-west-1")
    response = ec2.stop_instances(InstanceIds=[instance_id])
    print(response)

stop_instance(instance_id)
Then I ran it with python3 stop_ec2.py instance-id.
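As a side note, the script above imports argparse but never uses it; here is a sketch of how argparse could replace the raw sys.argv access (the positional argument name and the region are just examples):

import argparse
import boto3

def stop_instance(instance_id, region_name="us-west-1"):
    ec2 = boto3.client("ec2", region_name=region_name)
    response = ec2.stop_instances(InstanceIds=[instance_id])
    print(response)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Stop a single EC2 instance")
    parser.add_argument("instance_id", help="ID of the instance to stop, e.g. i-0123456789abcdef0")
    args = parser.parse_args()
    stop_instance(args.instance_id)

It is invoked the same way (python3 stop_ec2.py followed by the instance id), but prints a usage message when the argument is missing.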
This code is intended to list all the running instances that I have and to run on AWS Lambda. It is failing with:
"errorMessage": "'s3.ServiceResource' object has no attribute 'object'"
I am a beginner, so if there are any other mistakes in this code, please do share your inputs.
import json
import boto3

ec2 = boto3.resource("ec2")
s3 = boto3.resource('s3')
object = s3.Bucket("object")

def lambda_handler(event, context):
    filters = [{"Name": "instance-state-name",
                "Values": ["running"]
               }]
    instances = ec2.instances.filter(Filters=filters)
    RunningInstances = []
    for instance in instances:
        RunningInstances.append(instance.id)
    instanceList = json.dumps(RunningInstances)
    s3.object(
        'sameeeeeerjajs',
        'instanceList.txt'
    ).put(Body=instanceList)
    return("Status code:200")
From boto3's docs for S3.Object, it should be Object (capital O), not object:
s3.Object(
    'sameeeeeerjajs',
    'instanceList.txt'
).put(Body=instanceList)
We are looking for an AWS Lambda script to stop all EC2 instances in a particular region except two instances.
First you need to add a tag to the instances that you want to stop, for example an AutoOff tag with the value True (leave it off the two instances that should keep running), and then run this code:
import boto3
import logging

# Set up simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Define the connection; the resource's region determines which
# region's instances are affected
ec2 = boto3.resource('ec2', region_name='us-east-1')  # replace it with your region

def lambda_handler(event, context):
    # Use the filter() method of the instances collection to retrieve
    # all running EC2 instances tagged AutoOff=True.
    filters = [
        {
            'Name': 'tag:AutoOff',
            'Values': ['True']
        },
        {
            'Name': 'instance-state-name',
            'Values': ['running']
        }
    ]

    # Filter the instances
    instances = ec2.instances.filter(Filters=filters)

    # Locate all running instances
    RunningInstances = [instance.id for instance in instances]

    # Print the instances for logging purposes
    # print(RunningInstances)

    # Make sure there are actually instances to shut down
    if len(RunningInstances) > 0:
        # Perform the shutdown
        shuttingDown = ec2.instances.filter(InstanceIds=RunningInstances).stop()
        print(shuttingDown)
    else:
        print("Nothing to see here")
I am using Composer to run my Dataflow pipeline on a schedule. If the job takes more than a certain amount of time, I want it to be killed. Is there a way to do this programmatically, either as a pipeline option or a DAG parameter?
Not sure how to do it as a pipeline config option, but here is an idea.
You could launch a task queue task with its countdown set to your timeout value. When the task fires, check whether your Dataflow job is still running:
https://cloud.google.com/dataflow/docs/reference/rest/v1b3/projects.jobs/list
If it is, you can call update on it with the job state JOB_STATE_CANCELLED:
https://cloud.google.com/dataflow/docs/reference/rest/v1b3/projects.jobs/update
https://cloud.google.com/dataflow/docs/reference/rest/v1b3/projects.jobs#jobstate
This is done through the googleapiclient lib: https://developers.google.com/api-client-library/python/apis/discovery/v1
Here is an example of how to use it to list jobs:
# Assumed imports for this example (App Engine standard environment):
from google.appengine.api import app_identity
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

class DataFlowJobsListHandler(InterimAdminResourceHandler):
    def get(self, resource_id=None):
        """
        Wrapper to this:
        https://cloud.google.com/dataflow/docs/reference/rest/v1b3/projects.jobs/list
        """
        if resource_id:
            self.abort(405)
        else:
            credentials = GoogleCredentials.get_application_default()
            service = discovery.build('dataflow', 'v1b3', credentials=credentials)
            project_id = app_identity.get_application_id()

            _filter = self.request.GET.pop('filter', 'UNKNOWN').upper()

            jobs_list_request = service.projects().jobs().list(
                projectId=project_id,
                filter=_filter)  # e.g. 'ACTIVE'
            jobs_list = jobs_list_request.execute()

            return {
                '$cursor': None,
                'results': jobs_list.get('jobs', []),
            }
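To complete the cancellation step described above, a sketch of the update call that sets the job state to JOB_STATE_CANCELLED (the project id and job id are placeholders; this uses the same discovery-based client as the list example):

from googleapiclient import discovery
from oauth2client.client import GoogleCredentials

def cancel_dataflow_job(project_id, job_id):
    # Build the Dataflow client the same way as in the list example
    credentials = GoogleCredentials.get_application_default()
    service = discovery.build('dataflow', 'v1b3', credentials=credentials)

    # Ask Dataflow to cancel the job by setting requestedState
    body = {'requestedState': 'JOB_STATE_CANCELLED'}
    request = service.projects().jobs().update(
        projectId=project_id,
        jobId=job_id,
        body=body)
    return request.execute()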