CloudFormation Stack resources - amazon-web-services

I am trying to retrieve the ARN information of Stack resources using boto3.
I tried:
import boto3
client = boto3.resource('cloudformation',
                        aws_access_key_id='xxxxxxxx',
                        aws_secret_access_key='xxxxxxxxxxxx')
response = client.list_stack_resources(StackName='ORG-ROLES')
I get "AttributeError: 'cloudformation.ServiceResource' object has no attribute 'list_stack_resources'"
This stack contains 9 resources; I want to get the ARN information for one of them.
Hope you can help me.

You're mixing up the client-level and resource-level APIs. You need to use one or the other. Here's an example of each.
import boto3
session = boto3.Session(profile_name='xxxx', region_name='us-east-1')
STACK_NAME = 'ORG-ROLES'
# Use client-level API
client = session.client('cloudformation')
response = client.list_stack_resources(StackName=STACK_NAME)
print('Client API:', response['StackResourceSummaries'])
# Use resource-level API
resource = session.resource('cloudformation')
stack = resource.Stack(STACK_NAME)
print('Resource API:', list(stack.resource_summaries.all()))
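To pull one resource's identifier out of those summaries, here is a sketch; the helper names and the logical resource ID are hypothetical, and since list_stack_resources returns results in pages, it walks every page:

```python
def find_resource(summaries, logical_id):
    """Return the resource summary whose LogicalResourceId matches, else None."""
    return next((s for s in summaries if s["LogicalResourceId"] == logical_id), None)

def get_physical_id(stack_name, logical_id, profile="xxxx", region="us-east-1"):
    import boto3  # boto3 is only needed for the live call
    session = boto3.Session(profile_name=profile, region_name=region)
    client = session.client("cloudformation")
    summaries = []
    # list_stack_resources paginates, so collect every page
    for page in client.get_paginator("list_stack_resources").paginate(StackName=stack_name):
        summaries.extend(page["StackResourceSummaries"])
    match = find_resource(summaries, logical_id)
    return match["PhysicalResourceId"] if match else None
```

Note that PhysicalResourceId is a full ARN only for some resource types; for others (IAM roles, for example) it is just the resource name, and you would assemble the ARN from it.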

Related

AWS Lambda failing to fetch EC2 AZ details

I am trying to create a Lambda script using Python 3.9 that will return the total number of EC2 servers in an AWS account, plus their status and details. Part of my code:
def lambda_handler(event, context):
    client = boto3.client("ec2")
    #s3 = boto3.client("s3")
    # fetch information about all the instances
    status = client.describe_instances()
    for i in status["Reservations"]:
        instance_details = i["Instances"][0]
        if instance_details["State"]["Name"].lower() in ["shutting-down", "stopped", "stopping", "terminated"]:
            print("AvailabilityZone: ", instance_details['AvailabilityZone'])
            print("\nInstanceId: ", instance_details["InstanceId"])
            print("\nInstanceType: ", instance_details['InstanceType'])
On running this code I get an error.
If I comment out the AZ details, the code works fine. If I create a new function with only the AZ parameter in it, all AZs are returned. I don't understand why it fails in the code above.
In Python, it's best practice to use the .get() method to fetch a value from a dict so that a missing key doesn't raise an exception.
AvailabilityZone is actually present in the Placement dict, not at the top level of the instance details. You can check the entire response structure in the boto3 documentation below.
Reference
import boto3

def lambda_handler(event, context):
    client = boto3.client("ec2")
    #s3 = boto3.client("s3")
    # fetch information about all the instances
    status = client.describe_instances()
    for i in status["Reservations"]:
        instance_details = i["Instances"][0]
        if instance_details["State"]["Name"].lower() in ["shutting-down", "stopped", "stopping", "terminated"]:
            print(f"AvailabilityZone: {instance_details.get('Placement', dict()).get('AvailabilityZone')}")
            print(f"\nInstanceId: {instance_details.get('InstanceId')}")
            print(f"\nInstanceType: {instance_details.get('InstanceType')}")
The problem is that in the response of describe_instances, the availability zone is not at the first level of the instance dictionary (in your case instance_details). It is under the Placement dictionary, so what you need is
print(f"AvailabilityZone: {instance_details.get('Placement', dict()).get('AvailabilityZone')}")
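One more caveat worth flagging: describe_instances groups instances by reservation and returns them in pages, so i["Instances"][0] only ever inspects the first instance of each reservation. A sketch covering every instance (the helper name is my own):

```python
STOPPED_STATES = {"shutting-down", "stopped", "stopping", "terminated"}

def iter_instances(reservations):
    """Yield every instance dict across all reservations, not just the first."""
    for reservation in reservations:
        for instance in reservation.get("Instances", []):
            yield instance

def lambda_handler(event, context):
    import boto3  # boto3 is only needed for the live calls
    client = boto3.client("ec2")
    # describe_instances paginates, so walk every page of reservations
    for page in client.get_paginator("describe_instances").paginate():
        for inst in iter_instances(page["Reservations"]):
            if inst["State"]["Name"].lower() in STOPPED_STATES:
                print(f"AvailabilityZone: {inst.get('Placement', {}).get('AvailabilityZone')}")
                print(f"InstanceId: {inst.get('InstanceId')}")
                print(f"InstanceType: {inst.get('InstanceType')}")
```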

Can we integrate AWS Lambda with AWS IoT and perform operations in IoT using the Lambda function?

The scenario is this:
I have a microservice which invokes a Lambda function whose role is to delete things from AWS IoT.
Is there a way I can perform operations in AWS IoT using the Lambda function?
Any article or blog post about this would be a huge help, as I'm not able to find any integration documentation on the web.
I found a way to perform several operations in AWS IoT from a Lambda function by means of the boto3 IoT APIs.
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/iot.html#IoT.Client.delete_thing_group
The link above describes the APIs that help in this case.
A sample Lambda function script to delete a thing from IoT:
import json
import boto3

def lambda_handler(event, context):
    thing_name = event['thing_name']
    delete_thing(thing_name=thing_name)

def delete_thing(thing_name):
    c_iot = boto3.client('iot')
    print(" DELETING {}".format(thing_name))
    try:
        r_principals = c_iot.list_thing_principals(thingName=thing_name)
    except Exception as e:
        print("ERROR listing thing principals: {}".format(e))
        r_principals = {'principals': []}
    print("r_principals: {}".format(r_principals))
    for arn in r_principals['principals']:
        cert_id = arn.split('/')[1]
        print(" arn: {} cert_id: {}".format(arn, cert_id))
        r_detach_thing = c_iot.detach_thing_principal(thingName=thing_name, principal=arn)
        print("DETACH THING: {}".format(r_detach_thing))
        r_upd_cert = c_iot.update_certificate(certificateId=cert_id, newStatus='INACTIVE')
        print("INACTIVE: {}".format(r_upd_cert))
        r_del_cert = c_iot.delete_certificate(certificateId=cert_id, forceDelete=True)
        print(" DEL CERT: {}".format(r_del_cert))
    r_del_thing = c_iot.delete_thing(thingName=thing_name)
    print(" DELETE THING: {}\n".format(r_del_thing))
And the input for this Lambda function will be:
{
    "thing_name": "MyIotThing"
}
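On the microservice side, the Lambda above can be invoked with the boto3 Lambda client. A minimal sketch, assuming the function is deployed under the hypothetical name delete-iot-thing (the helper names are mine as well):

```python
import json

def build_payload(thing_name):
    """Serialize the event shape the delete-thing Lambda expects."""
    return json.dumps({"thing_name": thing_name}).encode("utf-8")

def delete_thing_via_lambda(thing_name, function_name="delete-iot-thing"):
    import boto3  # boto3 is only needed for the live invocation
    client = boto3.client("lambda")
    response = client.invoke(
        FunctionName=function_name,        # hypothetical function name
        InvocationType="RequestResponse",  # wait for the result synchronously
        Payload=build_payload(thing_name),
    )
    return response["StatusCode"]
```

Keep in mind the Lambda's execution role also needs IAM permissions for the IoT actions it calls (iot:ListThingPrincipals, iot:DetachThingPrincipal, iot:UpdateCertificate, iot:DeleteCertificate, iot:DeleteThing).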

Finding Canonical ID of the account using CDK

I'm writing a custom S3 bucket policy with the AWS CDK that requires the canonical ID of the account as a key parameter. I can get the account ID programmatically using cdk core; see the Python sample below.
cid = core.Aws.ACCOUNT_ID
Is there any way to get the same for the canonical ID?
Update:
I've found a workaround using an S3 API call, and added the following code to my CDK stack. It may be helpful to someone.
def find_canonical_id(self):
    s3_client = boto3.client('s3')
    return s3_client.list_buckets()['Owner']['ID']
I found two ways to get the canonical ID with boto3:
Method 1: the ListBuckets API (also mentioned by the author in the update)
This method is recommended by AWS as well.
import boto3
client = boto3.client("s3")
response = client.list_buckets()
canonical_id = response["Owner"]["ID"]
Method-2 Through Get bucket ACL API
import boto3
client = boto3.client("s3")
response = client.get_bucket_acl(
    Bucket='sample-bucket'  # should be in your acct
)
canonical_id = response["Owner"]["ID"]
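If you need that value inside a CDK app, one approach (a sketch; the helper names are my own) is to fetch it with boto3 at synth time and pass it into the stack as an ordinary Python value:

```python
def canonical_id_from(list_buckets_response):
    """Pull the account's canonical user ID out of a ListBuckets response."""
    return list_buckets_response["Owner"]["ID"]

def fetch_canonical_id():
    import boto3  # boto3 is only needed for the live call
    return canonical_id_from(boto3.client("s3").list_buckets())
```

fetch_canonical_id() could then be passed to your stack's constructor and interpolated into the bucket policy.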

Trying to disable all the Cloud Watch alarms in one shot

My organization is planning for a maintenance window for the next 5 hours. During that time, I do not want Cloud Watch to trigger alarms and send notifications.
Earlier, when I had to disable 4 alarms, I wrote the following code in AWS Lambda, and it worked fine.
import boto3
import collections
client = boto3.client('cloudwatch')
def lambda_handler(event, context):
    response = client.disable_alarm_actions(
        AlarmNames=[
            'CRITICAL - StatusCheckFailed for Instance 456',
            'CRITICAL - StatusCheckFailed for Instance 345',
            'CRITICAL - StatusCheckFailed for Instance 234',
            'CRITICAL - StatusCheckFailed for Instance 123'
        ]
    )
But now I have been asked to disable all the alarms, 361 in total, so listing every name by hand would take a lot of time.
What should I do?
Use describe_alarms() to obtain a list of them, then disable them:
import boto3

client = boto3.client('cloudwatch')
response = client.describe_alarms()
names = [alarm['AlarmName'] for alarm in response['MetricAlarms']]
disable_response = client.disable_alarm_actions(AlarmNames=names)
You might want some logic around the Alarm Name to only disable particular alarms.
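One caveat: describe_alarms returns at most 100 alarms per page, and disable_alarm_actions accepts at most 100 names per call, so with 361 alarms a single call of each will miss most of them. A paginated, batched sketch (the helper names are my own):

```python
def chunks(items, size=100):
    """Split a list into batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def disable_all_alarms():
    import boto3  # boto3 is only needed for the live calls
    client = boto3.client("cloudwatch")
    names = []
    # describe_alarms returns at most 100 alarms per page, so paginate
    for page in client.get_paginator("describe_alarms").paginate():
        names.extend(alarm["AlarmName"] for alarm in page["MetricAlarms"])
    # disable_alarm_actions accepts at most 100 names per call, so batch
    for batch in chunks(names):
        client.disable_alarm_actions(AlarmNames=batch)
    return len(names)
```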
If you do not have the specific alarm ARNs, you can use the logic in the previous answer. If you have a specific list of ARNs that you want to disable, you can fetch the corresponding names using this:
import boto3

client = boto3.client('cloudwatch')

def get_alarm_names(alarm_arns):
    names = []
    response = client.describe_alarms()
    for i in response['MetricAlarms']:
        if i['AlarmArn'] in alarm_arns:
            names.append(i['AlarmName'])
    return names
Here's a full tutorial: https://medium.com/geekculture/terraform-structure-for-enabling-disabling-alarms-in-batches-5c4f165a8db7

Setting .authorize_egress() with protocol set to all

I am trying to execute the following code
def createSecurityGroup(self, securitygroupname):
    conn = boto3.resource('ec2')
    response = conn.create_security_group(GroupName=securitygroupname, Description='test')

VPC_NAT_SecurityObject = createSecurityGroup("mysecurity_group")
response_egress_all = VPC_NAT_SecurityObject.authorize_egress(
    IpPermissions=[{'IpProtocol': '-1'}])
and I get the exception below:
EXCEPTION :
An error occurred (InvalidParameterValue) when calling the AuthorizeSecurityGroupEgress operation: Only Amazon VPC security
groups may be used with this operation.
I tried several different combinations but was not able to set the protocol to all. I used '-1' as explained in the boto3 documentation. Can somebody please suggest how to get this done?
(UPDATE)
1. The boto3.resource("ec2") class is actually a high-level wrapper around the client class. You must instantiate the extra sub-resource class boto3.resource("ec2").Vpc in order to attach to a specific VPC ID, e.g.
import boto3
ec2_resource = boto3.resource("ec2")
myvpc = ec2_resource.Vpc("vpc-xxxxxxxx")
response = myvpc.create_security_group(
    GroupName=securitygroupname,
    Description='test')
2. Sometimes it is more straightforward to use boto3.client("ec2"). If you check the boto3 EC2 client's create_security_group, you will see this:
response = client.create_security_group(
    DryRun=True|False,
    GroupName='string',
    Description='string',
    VpcId='string'
)
If you use an automation script or template to rebuild the VPC (e.g. salt-cloud), give the VPC a Name tag so your boto3 script can look it up automatically. This saves a lot of hassle as AWS migrates resource IDs from 8 characters to longer formats.
Another option is CloudFormation, which lets you put everything in a template, with variables, to recreate the VPC stack.
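Putting the two points together, here is a sketch of the original goal: create the group inside a VPC, then open all egress (the VPC ID, group name, and helper names are placeholders):

```python
def egress_all_permission():
    """An IpPermissions entry opening all protocols and ports to everywhere."""
    return [{"IpProtocol": "-1", "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]

def create_group_with_egress(vpc_id, name):
    import boto3  # boto3 is only needed for the live calls
    vpc = boto3.resource("ec2").Vpc(vpc_id)
    # Because the group is created inside a VPC, authorize_egress is allowed
    sg = vpc.create_security_group(GroupName=name, Description="test")
    sg.authorize_egress(IpPermissions=egress_all_permission())
    return sg.id
```

Note that VPC security groups are created with a default rule already allowing all egress, so authorizing the same rule again can fail with InvalidPermission.Duplicate; in that case there is nothing to add.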