Boto3 Cloudformation Drift Status - amazon-web-services

I'm trying to loop through every region and check if a stack has drifted or not, and then print a list of drifted stacks.
#!/usr/bin/env python
import boto3
import time

## Create a AWS Session
session = boto3.Session(profile_name='default', region_name='us-east-1')

if __name__ == '__main__':
    ## Connect to the EC2 Service
    client = session.client('ec2')
    ## Make a list of all the regions
    response = client.describe_regions()
    for region in response['Regions']:
        name = region['RegionName']
        print("Drifted CFn in region: " + name)
        ## Connect to the CFn service in the region
        cloudformationClient = boto3.client("cloudformation")
        stacks = cloudformationClient.describe_stacks()
        detection_id = cloudformationClient.detect_stack_drift(StackName=stacks)
        for stack in stacks['Stacks']:
            while True:
                time.sleep(3)
                # sleep between api calls to prevent lockout
                response = cloudformationClient.describe_stack_drift_detection_status(
                    StackDriftDetectionId=detection_id
                )
                if response['DetectionStatus'] == 'DETECTION_IN_PROGRESS':
                    continue
                else:
                    print("Stack" + stack + " has a drift status:" + response)
I am still new to Python and am unsure why it's failing on StackName on line 22, when I know that's the name of the parameter in detect_stack_drift that I'm trying to pass. Some help would be appreciated!

See these lines:
stacks = cloudformationClient.describe_stacks()
detection_id = cloudformationClient.detect_stack_drift(StackName=stacks)
The describe_stacks() call returns:
{
    'Stacks': [
        {
            'StackId': 'string',
            'StackName': 'string',
            ...
        },
    ],
    'NextToken': 'string'
}
However, the detect_stack_drift() function expects a single stack name string in StackName, not the whole response dict, so you need to call it once per stack.
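A working pattern, sketched below, is to pull each StackName out of the describe_stacks() response, start one drift detection per stack, and poll its StackDriftDetectionId. The helper names (stack_names, wait_for_drift) are illustrative, not boto3 APIs, and the client is passed in as a parameter:

```python
import time

def stack_names(describe_stacks_response):
    # detect_stack_drift() takes one stack name at a time, so pull each
    # StackName string out of the describe_stacks() response
    return [s['StackName'] for s in describe_stacks_response['Stacks']]

def wait_for_drift(cfn, detection_id, delay=3):
    # Poll until drift detection finishes; cfn is a boto3 CloudFormation
    # client (injected so this sketch stays testable without AWS access)
    while True:
        status = cfn.describe_stack_drift_detection_status(
            StackDriftDetectionId=detection_id
        )
        if status['DetectionStatus'] != 'DETECTION_IN_PROGRESS':
            return status
        time.sleep(delay)
```

Note also that detect_stack_drift() returns a dict, so you would pass response['StackDriftDetectionId'] (not the whole response) into wait_for_drift.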

Related

Return FilesystemID after cloudformation stack deployment complete

I am trying to create an EFS file system using a CloudFormation template inside a Lambda function with boto3, and I want to return the FileSystemId output from the stack using describe_stacks. However, I am getting null in return. Please suggest where I am making a mistake.
error is:
Response
null
Code is:
import boto3
import time
import json
import botocore

datetime = time.strftime("%Y%m%d%H%M%S")
stackname = 'My-EFS'
region = "ap-south-1"
client = boto3.client('cloudformation')
s = boto3.Session(region_name=region)

def lambda_handler(event, context):
    response = client.create_stack(
        StackName=stackname,
        TemplateURL='https://cloudnaeem.s3.amazonaws.com/efs.yaml',
    )
    waiter = client.get_waiter('stack_create_complete')
    res = waiter.wait(
        StackName=stackname,
    )
    stack = client.describe_stacks(StackName=stackname)
    FileSystem_id = None
    for v in stack["Stacks"][0]["Outputs"]:
        if v["OutputKey"] == "FileSystemId":
            FileSystem_id = v["OutputValue"]
    return FileSystem_id
The template output is:
Outputs:
  EFS:
    Description: The created EFS
    Value: !Ref EFSFileSystem
Your output is called EFS, but you are looking for FileSystemId. Your code should therefore be:
for v in stack["Stacks"][0]["Outputs"]:
    if v["OutputKey"] == "EFS":
        FileSystem_id = v["OutputValue"]
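The lookup can also be factored into a small helper that returns None when the key is absent (which is exactly the asker's symptom). find_output is an illustrative name, not a boto3 API; the dict shape mirrors what describe_stacks() returns:

```python
def find_output(describe_stacks_response, key):
    # Scan the first stack's Outputs for a matching OutputKey and return
    # its OutputValue; None means the key does not exist in the template
    for output in describe_stacks_response['Stacks'][0].get('Outputs', []):
        if output['OutputKey'] == key:
            return output['OutputValue']
    return None
```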

Launch of EC2 Instance and attach to Target Group using Lambda

I am trying to launch an EC2 instance and attach it to a target group using the following code in a Lambda function, but I am getting the following error. The Lambda function is not getting the instance ID; please guide me.
Error is:
An error occurred (ValidationError) when calling the RegisterTargets operation: Instance ID 'instance_id' is not valid",
"errorType": "ClientError",
The code is:
import boto3
import json
import time
import os

AMI = 'ami-047a51fa27710816e'
INSTANCE_TYPE = os.environ['MyInstanceType']
KEY_NAME = 'miankeyp'
REGION = 'us-east-1'
ec2 = boto3.client('ec2', region_name=REGION)

def lambda_handler(event, context):
    instance = ec2.run_instances(
        ImageId=AMI,
        InstanceType=INSTANCE_TYPE,
        KeyName=KEY_NAME,
        MaxCount=1,
        MinCount=1
    )
    print("New instance created:")
    instance_id = instance['Instances'][0]['InstanceId']
    print(instance_id)
    client = boto3.client('elbv2')
    time.sleep(5)
    response = client.register_targets(
        TargetGroupArn='arn:aws:elasticloadbalancing:us-east-1::targetgroup/target-demo/c46e6bfc00b6886f',
        Targets=[
            {
                'Id': 'instance_id'
            },
        ]
    )
To wait until an instance is running, you can use an Instance State waiter.
This is a boto3 capability that will check the state of an instance every 15 seconds until it reaches the desired state, up to a limit of 40 checks, which allows 10 minutes.
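Note the error message itself points at a second bug: the Targets entry registers the literal string 'instance_id' instead of the instance_id variable. A sketch combining both fixes (drop the quotes, replace the fixed sleep with the instance_running waiter) is below; the function names are illustrative, and the clients are passed in by the caller:

```python
def targets_for(instance_ids):
    # The ValidationError came from registering the literal string
    # 'instance_id'; build the Targets list from the real ids instead
    return [{'Id': iid} for iid in instance_ids]

def launch_and_register(ec2, elbv2, target_group_arn, run_instances_kwargs):
    # ec2 / elbv2 are boto3 clients; run_instances_kwargs carries ImageId,
    # InstanceType, KeyName, MinCount, MaxCount as in the question
    reservation = ec2.run_instances(**run_instances_kwargs)
    ids = [i['InstanceId'] for i in reservation['Instances']]
    # Wait for the running state instead of a fixed time.sleep(5)
    ec2.get_waiter('instance_running').wait(InstanceIds=ids)
    return elbv2.register_targets(TargetGroupArn=target_group_arn,
                                  Targets=targets_for(ids))
```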

exporting logs from cloudwatch to s3 using create_export_task is missing some logs

I am trying to export the past 15 minutes of access logs from CloudWatch to an S3 bucket. When I run it, it successfully stores logs in S3, but many logs are missing. For example, in CloudWatch over the past 15 minutes there are 30 logs, while S3 only has about 3.
import math
import boto3
from datetime import datetime, timedelta

group_name = '/aws/elasticbeanstalk/my-env/var/app/current/storage/logs/laravel.log'
s3 = boto3.client('s3')
log_file = boto3.client('logs')

now = datetime.now()
deletionDate = now - timedelta(minutes=15)
start = math.floor(deletionDate.replace(second=0, microsecond=0).timestamp() * 1000)
end = math.floor(now.replace(second=0, microsecond=0).timestamp() * 1000)

destination_bucket = 'past15mins-log'
prefix = 'lambda2-test-log/' + str(start) + '-' + str(end)

response = log_file.create_export_task(
    logGroupName=group_name,
    fromTime=start,
    to=end,
    destination=destination_bucket,
    destinationPrefix=prefix
)
if not response['ResponseMetadata']['HTTPStatusCode'] == 200:
    raise Exception('fail ' + str(start) + " - " + str(end))
The documentation says it is an asynchronous call; my guess is that, since there are 3 servers I get logs from, that is causing the problem?
Thanks in advance.
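Since create_export_task() is asynchronous, one plausible cause of missing logs is starting a new export before the previous one finishes. A hedged sketch of a completion check is below, assuming you poll describe_export_tasks(taskId=...) with the taskId returned by create_export_task; export_done is an illustrative helper, not a boto3 API:

```python
def export_done(describe_export_tasks_response):
    # create_export_task() runs in the background and CloudWatch Logs only
    # allows one active export task at a time, so poll the task's status
    # code until it reaches a terminal state
    task = describe_export_tasks_response['exportTasks'][0]
    return task['status']['code'] in ('COMPLETED', 'CANCELLED', 'FAILED')
```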

How to get EC2 memory utilization using command line (aws cli)

I am trying to get EC2 memory utilization using aws cli and I see that EC2MemoryUtilization is not available as a metric. I installed cloudwatch agent in the ec2 instance and I have created a dashboard for mem_used_percent.
Now I want to consume the memory used data points programmatically. I could find for CPUUtilization but I am unable to find anything for Memory utilization.
Any help in this regard is helpful. Thanks!
This Python script pushes the system memory metrics to CloudWatch in a custom namespace. Schedule the script in crontab to run every 1 or 5 minutes to plot the system memory metrics over time. Ensure that the IAM role assigned to the VM has sufficient privileges to put metric data to CloudWatch.
#!/usr/bin/env python
import psutil
import requests
import json
import os
import boto3

get_memory = psutil.virtual_memory()
free_memory = get_memory.free / (1024 * 1024 * 1024)
print("Free Memory:", free_memory, "GB")

headers = {'content-type': 'application/json'}
req = requests.get(url='http://169.254.169.254/latest/meta-data/iam/security-credentials/cloudwatch_access', headers=headers)
res = json.loads(req.text)
AccessKeyId = res['AccessKeyId']
SecretAccessKey = res['SecretAccessKey']
Token = res['Token']
Region = "ap-south-1"

os.environ["AWS_ACCESS_KEY_ID"] = AccessKeyId
os.environ["AWS_SECRET_ACCESS_KEY"] = SecretAccessKey
os.environ["AWS_SESSION_TOKEN"] = Token
os.environ["AWS_DEFAULT_REGION"] = Region

namespace = 'mynamespace'
dimension_name = 'my_dimension_name'
dimension_value = 'my_dimension_value'

cloudwatch = boto3.client('cloudwatch')
cloudwatch.put_metric_data(
    MetricData=[
        {
            'MetricName': 'Free Memory',
            'Dimensions': [
                {
                    'Name': dimension_name,
                    'Value': dimension_value
                },
            ],
            'Unit': 'Gigabytes',
            'Value': free_memory
        },
    ],
    Namespace=namespace
)
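To then read those datapoints back programmatically (the asker's original goal), one sketch is to query get_metric_statistics and extract the values. datapoint_values and fetch_free_memory are illustrative names; the namespace, metric name, and dimension must match what the script above published:

```python
from datetime import datetime, timedelta

def datapoint_values(get_metric_statistics_response, stat='Average'):
    # Datapoints come back unordered, so sort by timestamp first
    points = sorted(get_metric_statistics_response['Datapoints'],
                    key=lambda p: p['Timestamp'])
    return [p[stat] for p in points]

def fetch_free_memory(cloudwatch, namespace, dimension_name, dimension_value):
    # cloudwatch is a boto3 CloudWatch client passed in by the caller
    resp = cloudwatch.get_metric_statistics(
        Namespace=namespace,
        MetricName='Free Memory',
        Dimensions=[{'Name': dimension_name, 'Value': dimension_value}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=['Average'],
    )
    return datapoint_values(resp)
```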

Iteration not working in Lambda as I want to run lambda in two regions listed in code

Hi, I have this simple Lambda function which stops all EC2 instances tagged with AutoOff. I have set a for loop so that it works for two regions, us-east-1 and us-east-2. I am running the function in the us-east-2 region.
The problem is that only the instance located in us-east-2 is stopping; the other instance (located in us-east-1) is not. What modifications can I make?
Please suggest, as I am new to Python and the boto library.
import boto3
import logging

# setup simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# define the connection
ec2 = boto3.resource('ec2')
client = boto3.client('ec2', region_name='us-east-1')
ec2_regions = ['us-east-1', 'us-east-2']
for region in ec2_regions:
    conn = boto3.resource('ec2', region_name=region)

def lambda_handler(event, context):
    # Use the filter() method of the instances collection to retrieve
    # all running EC2 instances.
    filters = [
        {
            'Name': 'tag:AutoOff',
            'Values': ['True']
        },
        {
            'Name': 'instance-state-name',
            'Values': ['running']
        }
    ]
    # filter the instances
    instances = ec2.instances.filter(Filters=filters)
    # locate all running instances
    RunningInstances = [instance.id for instance in instances]
    # print the instances for logging purposes
    # print(RunningInstances)
    # make sure there are actually instances to shut down
    if len(RunningInstances) > 0:
        # perform the shutdown
        shuttingDown = ec2.instances.filter(InstanceIds=RunningInstances).stop()
        print(shuttingDown)
    else:
        print("Nothing to see here")
You are creating two instances of the ec2 resource and one ec2 client. You are only using one resource instance, and not using the client at all. You are also setting the region in your loop on a different resource object from the one you actually use.
Change all of this:
ec2 = boto3.resource('ec2')
client = boto3.client('ec2', region_name='us-east-1')
ec2_regions = ['us-east-1', 'us-east-2']
for region in ec2_regions:
    conn = boto3.resource('ec2', region_name=region)
To this:
ec2_regions = ['us-east-1', 'us-east-2']
for region in ec2_regions:
    ec2 = boto3.resource('ec2', region_name=region)
Also your indentation is all wrong in the code in your question. I hope that's just a copy/paste issue and not how your code is really indented, because indentation is syntax in Python.
The loop you do here
ec2_regions = ['us-east-1', 'us-east-2']
for region in ec2_regions:
    conn = boto3.resource('ec2', region_name=region)
first assigns a us-east-1 resource to the conn variable, then on the second iteration overwrites it with a us-east-2 resource, and only after that is your function invoked.
So what you can do is put that loop inside your function and move the current body of the function inside that loop.
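Putting both answers together, a sketch of the handler logic might look like this. stop_tagged_instances and its factory argument are illustrative names; inside Lambda you would call it as stop_tagged_instances(lambda r: boto3.resource('ec2', region_name=r), ['us-east-1', 'us-east-2']):

```python
def build_filters():
    # Same filters as the question: tagged AutoOff=True and currently running
    return [
        {'Name': 'tag:AutoOff', 'Values': ['True']},
        {'Name': 'instance-state-name', 'Values': ['running']},
    ]

def stop_tagged_instances(resource_for_region, regions):
    # resource_for_region is a factory such as
    # lambda r: boto3.resource('ec2', region_name=r); injecting it keeps
    # the region loop inside the handler, running on every invocation
    stopped = []
    for region in regions:
        ec2 = resource_for_region(region)
        ids = [i.id for i in ec2.instances.filter(Filters=build_filters())]
        if ids:
            ec2.instances.filter(InstanceIds=ids).stop()
            stopped.extend(ids)
    return stopped
```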