Bad handler error when running AWS Lambda with console

I'm trying to do Amazon S3 bucket resource tagging with AWS Lambda. I need the Lambda function to take tags from users and apply them to newly uploaded objects in an Amazon S3 bucket.
I think get_user_tags in the handler is the faulty part:
import json
import re
import boto3
import botocore

def lambda_handler(event, context):
    #Get uncompressed CloudWatch Event data for parsing API calls
    def get_unc_cw_event_data(event):
        cw_data_dict = dict()
        cw_data_dict = event['detail']
        return cw_data_dict

    #Get resource tags assigned to a specified IAM role
    #Returns a list of key:string,value:string resource tag dictionaries
    def get_user_tags(user_name):
        try:
            client = boto3.client('iam')
            response = dict()
            response = client.list_user_tags(
                UserName=user_name
            )
        except botocore.exceptions.ClientError as error:
            print("Boto3 API returned error: ", error)
            print("No Tags Applied To: ", resource_id)
            no_tags = list()
            return no_tags
        return response['Tags']

    #Get resource tags stored in AWS SSM Parameter Store
    #Returns a list of key:string,value:string resource tag dictionaries
    def get_ssm_parameter_tags(role_name, user_name):
        tag_list = list()
        try:
            path_string = "/auto-tag/" + role_name + "/" + user_name + "/tag"
            ssm_client = boto3.client('ssm')
            get_parameter_response = ssm_client.get_parameters_by_path(
                Path=path_string,
                Recursive=True,
                WithDecryption=True
            )
            for parameter in get_parameter_response['Parameters']:
                tag_dictionary = dict()
                path_components = parameter['Name'].split("/")
                tag_key = path_components[-1]
                tag_dictionary['Key'] = tag_key
                tag_dictionary['Value'] = parameter['Value']
                tag_list.append(tag_dictionary)
            return tag_list
        except botocore.exceptions.ClientError as error:
            print("Boto3 API returned error: ", error)
            tag_list.clear()
            return tag_list

    #Apply tags to resource
    def set_resource_tags(resource_id, resource_tags):
        # Is this an EC2 resource?
        if re.search("^i-", resource_id):
            try:
                client = boto3.client('ec2')
                response = client.create_tags(
                    Resources=[
                        resource_id
                    ],
                    Tags=resource_tags
                )
                response = client.describe_volumes(
                    Filters=[
                        {
                            'Name': 'attachment.instance-id',
                            'Values': [
                                resource_id
                            ]
                        }
                    ]
                )
                try:
                    for volume in response['Volumes']:
                        ec2 = boto3.resource('ec2')
                        ec2_vol = ec2.Volume(volume['VolumeId'])
                        vol_tags = ec2_vol.create_tags(
                            Tags=resource_tags
                        )
                except botocore.exceptions.ClientError as error:
                    print("Boto3 API returned error: ", error)
                    print("No Tags Applied To: ", response['Volumes'])
                    return False
            except botocore.exceptions.ClientError as error:
                print("Boto3 API returned error: ", error)
                print("No Tags Applied To: ", resource_id)
                return False
            return True
        else:
            return False

    data_dict = get_unc_cw_event_data(event)
    #data_dict = get_cw_event_data(event)
    user_id_arn = data_dict['userIdentity']['arn']
    user_id_components = user_id_arn.split("/")
    user_id = user_id_components[-1]
    #role_arn = data_dict['userIdentity']['sessionContext']['sessionIssuer']['arn']
    role_arn = data_dict['userIdentity']['arn']
    role_components = role_arn.split("/")
    role_name = role_components[-1]
    resource_date = data_dict['eventTime']
    resource_role_tags = list()
    resource_role_tags = get_role_tags(role_name)
    resource_parameter_tags = list()
    resource_parameter_tags = get_ssm_parameter_tags(role_name, user_id)
    resource_tags = list()
    resource_tags = resource_role_tags + resource_parameter_tags
    created_by = dict()
    created_by['Key'] = 'Created by'
    created_by['Value'] = user_id
    resource_tags.append(created_by)
    roleName = dict()
    roleName['Key'] = 'Role Name'
    roleName['Value'] = role_name
    resource_tags.append(roleName)
    date_created = dict()
    date_created['Key'] = 'Date created'
    date_created['Value'] = resource_date
    resource_tags.append(date_created)
    if 'instancesSet' in data_dict['responseElements']:
        for item in data_dict['responseElements']['instancesSet']['items']:
            resource_id = item['instanceId']
            if set_resource_tags(resource_id, resource_tags):
                return {
                    'statusCode': 200,
                    'Resource ID': resource_id,
                    'body': json.dumps(resource_tags)
                }
            else:
                return {
                    'statusCode': 500,
                    'No tags applied to Resource ID': resource_id,
                    'Lambda function name': context.function_name,
                    'Lambda function version': context.function_version
                }
    else:
        return {
            'statusCode': 200,
            'No resources to tag': event['id']
        }
And the resulting error is this:
Response
{
  "errorMessage": "Bad handler 'resource-auto-tagger-main/source/resource-auto-tagger': not enough values to unpack (expected 2, got 1)",
  "errorType": "Runtime.MalformedHandlerName",
  "stackTrace": []
}
This is my first AWS Lambda function, so I will be happy about any help.

You can certainly use an AWS Lambda function to tag objects that are located in an Amazon S3 bucket. There is a development article that shows this use case. It's implemented in Java; however, it will point you in the right direction and you can port it to your programming language.
Creating an Amazon Web Services Lambda function that tags digital assets located in Amazon S3 buckets
In this example, the Lambda function reads all objects in a given Amazon S3 bucket. For each object in the bucket, it passes the image to the Amazon Rekognition service to generate a series of labels. Each label is used to create a tag that is applied to the image. After you execute the Lambda function, it automatically creates tags based on all images in a given Amazon S3 bucket and applies them to the images.
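If you would rather stay in Python, here is a minimal boto3 sketch of that flow; the bucket name, the label count, and the label-to-tag key mapping are my own assumptions, not details from the article:

import boto3

s3 = boto3.client('s3')
rekognition = boto3.client('rekognition')

def tag_images_with_labels(bucket):
    """Tag every object in the bucket with labels detected by Amazon Rekognition."""
    for obj in s3.list_objects_v2(Bucket=bucket).get('Contents', []):
        key = obj['Key']
        # Ask Rekognition to label the image directly from S3
        labels = rekognition.detect_labels(
            Image={'S3Object': {'Bucket': bucket, 'Name': key}},
            MaxLabels=5  # assumed limit; S3 allows at most 10 tags per object
        )['Labels']
        # Turn each detected label into an S3 object tag
        tag_set = [{'Key': 'label{}'.format(i), 'Value': label['Name']}
                   for i, label in enumerate(labels)]
        s3.put_object_tagging(Bucket=bucket, Key=key, Tagging={'TagSet': tag_set})

tag_images_with_labels('my-image-bucket')  # hypothetical bucket name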

Related

Botocore Stubber - Unable to locate credentials

I'm working on unit tests for my Lambda, which gets some files from S3, processes them, and loads data from them into DynamoDB. I created botocore stubbers that are used during tests, but I got botocore.exceptions.NoCredentialsError: Unable to locate credentials.
My Lambda handler code:
s3_client = boto3.client('s3')
ddb_client = boto3.resource('dynamodb', region_name='eu-west-1')

def lambda_handler(event, context):
    for record in event['Records']:
        s3_event = record.get('s3')
        bucket = s3_event.get('bucket', {}).get('name', '')
        file_key = s3_event.get('object', {}).get('key', '')
        file = s3_client.get_object(Bucket=bucket, Key=file_key)
and the tests file:
class TestLambda(unittest.TestCase):
    def setUp(self) -> None:
        self.session = botocore.session.get_session()
        # S3 Stubber Set Up
        self.s3_client = self.session.create_client('s3', region_name='eu-west-1')
        self.s3_stubber = Stubber(self.s3_client)
        # DDB Stubber Set Up
        self.ddb_resource = boto3.resource('dynamodb', region_name='eu-west-1')
        self.ddb_stubber = Stubber(self.ddb_resource.meta.client)

    def test_s3_to_ddb_handler(self) -> None:
        event = {}
        with self.s3_stubber:
            with self.ddb_stubber:
                response = s3_to_ddb_handler.s3_to_ddb_handler(event, ANY)
The issue seems to be that an actual call to AWS resources is made, which shouldn't be the case; the stubber should be used instead. How can I force that?
You need to call .activate() on your Stubber instances: https://botocore.amazonaws.com/v1/documentation/api/latest/reference/stubber.html#botocore.stub.Stubber.activate
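For example, a minimal sketch (the bucket, key, and file contents are hypothetical): each call you expect has to be queued with add_response, and the stubber has to be activated before the client is used, otherwise boto3 tries to reach real AWS and fails on credentials.

import io
import boto3
from botocore.stub import Stubber

s3_client = boto3.client('s3', region_name='eu-west-1')
stubber = Stubber(s3_client)

# Queue the canned response for the next get_object call
stubber.add_response(
    'get_object',
    {'Body': io.BytesIO(b'file contents')},
    {'Bucket': 'my-bucket', 'Key': 'my-key'}
)
stubber.activate()
body = s3_client.get_object(Bucket='my-bucket', Key='my-key')['Body'].read()
stubber.deactivate()

Also note that the stub only intercepts calls made through that exact client object, so the client your handler module creates at import time has to be replaced with the stubbed one (for example by patching s3_client in the handler module).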

How to get the list of Nitro system based EC2 instance type by CLI?

I know this page lists the instance types that are based on the Nitro system, but I would like to get the list in a dynamic way with the CLI (for example, using aws ec2 describe-instances). Is it possible to get the Nitro-based instance types other than by parsing the static page? If so, could you tell me how?
You'd have to write a bit of additional code to get that information. aws ec2 describe-instances will give you the InstanceType property. You should use a programming language to parse the JSON, extract InstanceType, and then call describe-instance-types like so: https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instance-types.html?highlight=nitro
From the JSON you get back, extract hypervisor. That'll give you nitro if the instance is Nitro.
Here's some Python code that might work. I have not tested it fully, but you can tweak it to get the results you want.
"""List all EC2 instances"""
import boto3
def ec2_connection():
"""Connect to AWS using API"""
region = 'us-east-2'
aws_key = 'xxx'
aws_secret = 'xxx'
session = boto3.Session(
aws_access_key_id = aws_key,
aws_secret_access_key = aws_secret
)
ec2 = session.client('ec2', region_name = region)
return ec2
def get_reservations(ec2):
"""Get a list of instances as a dictionary"""
response = ec2.describe_instances()
return response['Reservations']
def process_instances(reservations, ec2):
"""Print a colorful list of IPs and instances"""
if len(reservations) == 0:
print('No instance found. Quitting')
return
for reservation in reservations:
for instance in reservation['Instances']:
# get friendly name of the server
# only try this for mysql1.local server
friendly_name = get_friendly_name(instance)
if friendly_name.lower() != 'mysql1.local':
continue
# get the hypervisor based on the instance type
instance_type = get_instance_info(instance['InstanceType'], ec2)
# print findings
print(f'{friendly_name} // {instance["InstanceType"]} is {instance_type}')
break
def get_instance_info(instance_type, ec2):
"""Get hypervisor from the instance type"""
response = ec2.describe_instance_types(
InstanceTypes=[instance_type]
)
return response['InstanceTypes'][0]['Hypervisor']
def get_friendly_name(instance):
"""Get friendly name of the instance"""
tags = instance['Tags']
for tag in tags:
if tag['Key'] == 'Name':
return tag['Value']
return 'Unknown'
def run():
"""Main method to call"""
ec2 = ec2_connection()
reservations = get_reservations(ec2)
process_instances(reservations, ec2)
if __name__ == '__main__':
run()
print('Done')
In the above answer, the statement "From the JSON you get back, extract hypervisor. That'll give you Nitro if the instance is Nitro" is no longer accurate.
As per the latest AWS documentation:
hypervisor - The hypervisor type of the instance (ovm | xen). The value xen is used for both Xen and Nitro hypervisors.
Cleaned up, verified working code below:
# Get all instance types that run on the Nitro hypervisor
import boto3

def get_nitro_instance_types():
    """Get all instance types that run on the Nitro hypervisor"""
    ec2 = boto3.client('ec2', region_name='us-east-1')
    response = ec2.describe_instance_types(
        Filters=[
            {
                'Name': 'hypervisor',
                'Values': [
                    'nitro',
                ]
            },
        ],
    )
    instance_types = []
    for instance_type in response['InstanceTypes']:
        instance_types.append(instance_type['InstanceType'])
    return instance_types

print(get_nitro_instance_types())
Example output as of 12/06/2022 below:
['r5dn.8xlarge', 'x2iedn.xlarge', 'r6id.2xlarge', 'r6gd.medium',
'm5zn.2xlarge', 'r6idn.16xlarge', 'c6a.48xlarge', 'm5a.16xlarge',
'im4gn.2xlarge', 'c6gn.16xlarge', 'c6in.24xlarge', 'r5ad.24xlarge',
'r6i.xlarge', 'c6i.32xlarge', 'x2iedn.2xlarge', 'r6id.xlarge',
'i3en.24xlarge', 'i3en.12xlarge', 'm5d.8xlarge', 'c6i.8xlarge',
'r6g.large', 'm6gd.4xlarge', 'r6a.2xlarge', 'x2iezn.4xlarge',
'c6i.large', 'r6in.24xlarge', 'm6gd.xlarge', 'm5dn.2xlarge',
'd3en.2xlarge', 'c6id.8xlarge', 'm6a.large', 'is4gen.xlarge',
'r6g.8xlarge', 'm6idn.large', 'm6a.2xlarge', 'c6i.4xlarge',
'i4i.16xlarge', 'm5zn.6xlarge', 'm5.8xlarge', 'm6id.xlarge',
'm5n.16xlarge', 'c6g.16xlarge', 'r5n.12xlarge', 't4g.nano',
'm5ad.12xlarge', 'r6in.12xlarge', 'm6idn.12xlarge', 'g5.2xlarge',
'trn1.32xlarge', 'x2gd.8xlarge', 'is4gen.4xlarge', 'r6gd.xlarge',
'r5a.xlarge', 'r5a.2xlarge', 'c5ad.24xlarge', 'r6a.xlarge',
'r6g.medium', 'm6id.12xlarge', 'r6idn.2xlarge', 'c5n.2xlarge',
'g5.4xlarge', 'm5d.xlarge', 'i3en.3xlarge', 'r5.24xlarge',
'r6gd.2xlarge', 'c5d.large', 'm6gd.12xlarge', 'm6id.2xlarge',
'm6i.large', 'z1d.2xlarge', 'm5a.4xlarge', 'm5a.2xlarge',
'c6in.xlarge', 'r6id.16xlarge', 'c7g.8xlarge', 'm5dn.12xlarge',
'm6gd.medium', 'im4gn.8xlarge', 'm5dn.large', 'c5ad.4xlarge',
'r6g.16xlarge', 'c6a.24xlarge', 'c6a.16xlarge']
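Since the question asks for a CLI-only approach: the same hypervisor filter works directly from the command line, e.g. aws ec2 describe-instance-types --filters Name=hypervisor,Values=nitro --query 'InstanceTypes[].InstanceType' --output text. Also note that describe_instance_types returns paginated results, so to be sure you have the full list you should follow NextToken (or use a boto3 paginator) rather than reading only the first response.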
"""List all EC2 instances"""
import boto3
def ec2_connection():
"""Connect to AWS using API"""
region = 'us-east-2'
aws_key = 'xxx'
aws_secret = 'xxx'
session = boto3.Session(
aws_access_key_id = aws_key,
aws_secret_access_key = aws_secret
)
ec2 = session.client('ec2', region_name = region)
return ec2
def get_reservations(ec2):
"""Get a list of instances as a dictionary"""
response = ec2.describe_instances()
return response['Reservations']
def process_instances(reservations, ec2):
"""Print a colorful list of IPs and instances"""
if len(reservations) == 0:
print('No instance found. Quitting')
return
for reservation in reservations:
for instance in reservation['Instances']:
# get friendly name of the server
# only try this for mysql1.local server
friendly_name = get_friendly_name(instance)
if friendly_name.lower() != 'mysql1.local':
continue
# get the hypervisor based on the instance type
instance_type = get_instance_info(instance['InstanceType'], ec2)
# print findings
print(f'{friendly_name} // {instance["InstanceType"]} is {instance_type}')
break
def get_instance_info(instance_type, ec2):
"""Get hypervisor from the instance type"""
response = ec2.describe_instance_types(
InstanceTypes=[instance_type]
)
return response['InstanceTypes'][0]['Hypervisor']
def get_friendly_name(instance):
"""Get friendly name of the instance"""
tags = instance['Tags']
for tag in tags:
if tag['Key'] == 'Name':
return tag['Value']
return 'Unknown'
def run():
"""Main method to call"""
ec2 = ec2_connection()
reservations = get_reservations(ec2)
process_instances(reservations, ec2)
if name == 'main':
run()
print('Done')

How to get list of active connections on RDS using boto3

I can see the following information regarding the RDS instance.
I want to know how I can get the value of current activity using boto3. The current value, as shown in the screenshot below, is 0.
I tried
response = client.describe_db_instances()
But it didn't return the value of active connections.
You can get that data from CloudWatch. RDS sends its state information there and just renders a few metrics in the RDS dashboard.
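If you just want the current number of connections for a single instance, a minimal sketch (the instance identifier is a placeholder) reading the DatabaseConnections metric looks like this:

import datetime
import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

# Average connection count over the last 5 minutes for one instance
stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/RDS',
    MetricName='DatabaseConnections',
    Dimensions=[{'Name': 'DBInstanceIdentifier', 'Value': 'mydbinstance'}],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(minutes=5),
    EndTime=datetime.datetime.utcnow(),
    Period=300,
    Statistics=['Average']
)
print(stats['Datapoints'])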
@ivan thanks for the directions.
I created the following Python script to get information about instances with 0 connections and delete them after that. I hope it helps someone.
import datetime
import boto3

class RDSTermination:
    #Standard constructor for RDSTermination class
    def __init__(self, cloudwatch_object, rds_object):
        self.cloudwatch_object = cloudwatch_object
        self.rds_object = rds_object

    #Getters and setters for variables.
    @property
    def cloudwatch_object(self):
        return self._cloudwatch_object

    @cloudwatch_object.setter
    def cloudwatch_object(self, cloudwatch_object):
        self._cloudwatch_object = cloudwatch_object

    @property
    def rds_object(self):
        return self._rds_object

    @rds_object.setter
    def rds_object(self, rds_object):
        self._rds_object = rds_object

    # Fetch connection details for all the RDS instances. Filter the list and return
    # only those instances which have 0 connections at the time of this script run
    def _get_instance_connection_info(self):
        rds_instances_connection_details = {}
        response = self.cloudwatch_object.get_metric_data(
            MetricDataQueries=[
                {
                    'Id': 'fetching_data_for_something',
                    'Expression': "SEARCH('{AWS/RDS,DBInstanceIdentifier} MetricName=\"DatabaseConnections\"', 'Average', 300)",
                    'ReturnData': True
                },
            ],
            EndTime=datetime.datetime.utcnow(),
            StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=2),
            ScanBy='TimestampDescending',
            MaxDatapoints=123
        )
        # response is of type dictionary with MetricDataResults as key
        for instance_info in response['MetricDataResults']:
            if len(instance_info['Timestamps']) > 0:
                rds_instances_connection_details[instance_info['Label']] = instance_info['Values'][-1]
        return rds_instances_connection_details

    # Fetches list of all instances and their status.
    def _fetch_all_rds_instance_state(self):
        all_rds_instance_state = {}
        response = self.rds_object.describe_db_instances()
        instance_details = response['DBInstances']
        for instance in instance_details:
            all_rds_instance_state[instance['DBInstanceIdentifier']] = instance['DBInstanceStatus']
        return all_rds_instance_state

    # We further refine the list and remove instances which are stopped. We will work on
    # instances in the available state only
    def _get_instance_allowed_for_deletion(self):
        instances = self._get_instance_connection_info()
        all_instance_state = self._fetch_all_rds_instance_state()
        instances_to_delete = []
        try:
            for instance_name in instances.keys():
                if instances[instance_name] == 0.0 and all_instance_state[instance_name] == 'available':
                    instances_to_delete.append(instance_name)
        except BaseException:
            print("Check if instance connection_info is empty")
        return instances_to_delete

    # Function to delete the instances reported in the final list. It deletes instances with 0 connections
    # and status as available
    def terminate_rds_instances(self, dry_run=True):
        if dry_run:
            message = 'DRY-RUN'
        else:
            message = 'DELETE'
        rdsnames = self._get_instance_allowed_for_deletion()
        if len(rdsnames) > 0:
            for rdsname in rdsnames:
                try:
                    response = self.rds_object.describe_db_instances(
                        DBInstanceIdentifier=rdsname
                    )
                    termination_protection = response['DBInstances'][0]['DeletionProtection']
                except BaseException as e:
                    print('[ERROR]: reading details' + str(e))
                    exit(1)
                if termination_protection is True:
                    try:
                        print("Removing delete termination for {}".format(rdsname))
                        if not dry_run:
                            response = self.rds_object.modify_db_instance(
                                DBInstanceIdentifier=rdsname,
                                DeletionProtection=False
                            )
                    except BaseException as e:
                        print(
                            "[ERROR]: Could not modify db termination protection "
                            "due to following error:\n " + str(e))
                        exit(1)
                try:
                    if not dry_run:
                        print("i got executed")
                        response = self.rds_object.delete_db_instance(
                            DBInstanceIdentifier=rdsname,
                            SkipFinalSnapshot=True,
                        )
                    print('[{}]: RDS instance {} deleted'.format(message, rdsname))
                except BaseException:
                    print("[ERROR]: {} rds instance not found".format(rdsname))
        else:
            print("No RDS instance marked for deletion")

if __name__ == "__main__":
    cloud_watch_object = boto3.client('cloudwatch', region_name='us-east-1')
    rds_object = boto3.client('rds', region_name='us-east-1')
    rds_termination_object = RDSTermination(cloud_watch_object, rds_object)
    rds_termination_object.terminate_rds_instances(dry_run=True)

Invoke a Lambda function with S3 payload from boto3

I need to invoke a Lambda function that accepts an S3 path. Below is sample code of the Lambda function.
def lambda_handler(event, context):
    bucket = "mybucket"
    key = "mykey/output/model.tar.gz"
    model = load_model(bucket, key)
    somecalc = some_func(model)
    result = {'mycalc': json.dumps(somecalc)}
    return result
I need to invoke this handler from my client code using boto3. I know I can make a request like the one below:
lambda_client = boto3.client('lambda')
response = lambda_client.invoke(
    FunctionName='mylambda_function',
    InvocationType='RequestResponse',
    LogType='Tail',
    ClientContext='myContext',
    Payload=b'bytes'|file,
    Qualifier='1'
)
But I am not sure how to specify an S3 path in the payload. It looks like it is expecting JSON.
Any suggestions?
You can specify a payload like so:
payload = json.dumps({ 'bucket': 'myS3Bucket' })

lambda_client = boto3.client('lambda')
response = lambda_client.invoke(
    FunctionName='mylambda_function',
    InvocationType='RequestResponse',
    LogType='Tail',
    ClientContext='myContext',
    Payload=payload,
    Qualifier='1'
)
And access the payload properties in your lambda handler like so:
def lambda_handler(event, context):
    bucket = event['bucket']  # pull from 'event' argument
    key = "mykey/output/model.tar.gz"
    model = load_model(bucket, key)
    somecalc = some_func(model)
    result = {'mycalc': json.dumps(somecalc)}
    return result
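On the calling side, the value your handler returns comes back in the response's Payload, which is a streaming body. Assuming the handler above, reading the result back looks roughly like this:

import json

result = json.loads(response['Payload'].read())
print(result['mycalc'])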

How do you automate tagging using a Lambda function on AWS?

I'm trying to automate tagging for the following resources on AWS:
Amazon RDS
Amazon DynamoDB
Amazon Machine Image (AMI)
Amazon EC2 Instance
AWS Lambda Function
Amazon RDS Snapshot
Amazon RDS Security Group
Amazon RDS Subnet Group
Amazon Route 53 Hosted Zone
Amazon S3 Bucket
AWS CloudFormation
Currently I have a Lambda function that is almost identical to the one in this article: How to Automatically Tag Amazon EC2 Resources in Response to API Events | AWS Security Blog
How can I modify this Lambda function so it tags the above resources as well?
I've tried finding documentation on how to tag these specific resources and I can't seem to find anything that's relevant to tagging using a Lambda function.
elif eventname == 'CreateImage':
    ids.append(detail['responseElements']['imageId'])
    logger.info(ids)
elif eventname == 'CreateSnapshot':
    ids.append(detail['responseElements']['snapshotId'])
    logger.info(ids)
elif eventname == 'CreateSecurityGroup':
    ids.append(detail['responseElements']['groupId'])
else:
    logger.warning('Not supported action')
The above code is adding tags for EC2, but we need it to add tags to the resources I listed above.
This should help on a few of them.
Here is the CloudWatch event pattern:
{
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "ec2.amazonaws.com",
      "rds.amazonaws.com",
      "lambda.amazonaws.com",
      "s3.amazonaws.com",
      "dynamodb.amazonaws.com",
      "elasticfilesystem.amazonaws.com"
    ],
    "eventName": [
      "CreateVolume",
      "RunInstances",
      "CreateImage",
      "CreateSnapshot",
      "CreateDBInstance",
      "CreateFunction20150331",
      "UpdateFunctionConfiguration20150331v2",
      "UpdateFunctionCode20150331v2",
      "CreateBucket",
      "CreateTable",
      "CreateMountTarget"
    ]
  }
}
And then here is the corresponding Lambda code, which will need a few modifications for your environment.
from __future__ import print_function
import json
import boto3
import logging
import time
import datetime

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info('################ Event: ############## ' + str(event))
    #print('Received event: ' + json.dumps(event, indent=2))
    ids = []
    try:
        region = event['region']
        detail = event['detail']
        eventname = detail['eventName']
        arn = detail['userIdentity']['arn']
        principal = detail['userIdentity']['principalId']
        userType = detail['userIdentity']['type']
        if userType == 'IAMUser':
            user = detail['userIdentity']['userName']
        else:
            user = principal.split(':')[1]
        logger.info('principalId: ' + str(principal))
        logger.info('region: ' + str(region))
        logger.info('eventName: ' + str(eventname))
        logger.info('detail: ' + str(detail))
        ec2_client = boto3.resource('ec2')
        lambda_client = boto3.client('lambda')
        rds_client = boto3.client('rds')
        s3_client = boto3.resource('s3')
        ddb_client = boto3.client('dynamodb')
        efs_client = boto3.client('efs')
        if eventname == 'CreateVolume':
            ids.append(detail['responseElements']['volumeId'])
            logger.info(ids)
        elif eventname == 'RunInstances':
            items = detail['responseElements']['instancesSet']['items']
            for item in items:
                ids.append(item['instanceId'])
            logger.info(ids)
            logger.info('number of instances: ' + str(len(ids)))
            base = ec2_client.instances.filter(InstanceIds=ids)
            #loop through the instances
            for instance in base:
                for vol in instance.volumes.all():
                    ids.append(vol.id)
                for eni in instance.network_interfaces:
                    ids.append(eni.id)
        elif eventname == 'CreateImage':
            ids.append(detail['responseElements']['imageId'])
            logger.info(ids)
        elif eventname == 'CreateSnapshot':
            ids.append(detail['responseElements']['snapshotId'])
            logger.info(ids)
        elif eventname == 'CreateFunction20150331':
            try:
                functionArn = detail['responseElements']['functionArn']
                lambda_client.tag_resource(Resource=functionArn, Tags={'CreatedBy': user})
                lambda_client.tag_resource(Resource=functionArn, Tags={'DateCreated': time.strftime("%B %d %Y")})
            except Exception as e:
                logger.error('Exception thrown at CreateFunction20150331' + str(e))
                pass
        elif eventname == 'UpdateFunctionConfiguration20150331v2':
            try:
                functionArn = detail['responseElements']['functionArn']
                lambda_client.tag_resource(Resource=functionArn, Tags={'LastConfigModifiedByNetID': user})
            except Exception as e:
                logger.error('Exception thrown at UpdateFunctionConfiguration20150331v2' + str(e))
                pass
        elif eventname == 'UpdateFunctionCode20150331v2':
            try:
                functionArn = detail['responseElements']['functionArn']
                lambda_client.tag_resource(Resource=functionArn, Tags={'LastCodeModifiedByNetID': user})
            except Exception as e:
                logger.error('Exception thrown at UpdateFunctionCode20150331v2' + str(e))
                pass
        elif eventname == 'CreateDBInstance':
            try:
                dbResourceArn = detail['responseElements']['dBInstanceArn']
                rds_client.add_tags_to_resource(ResourceName=dbResourceArn, Tags=[{'Key': 'CreatedBy', 'Value': user}])
            except Exception as e:
                logger.error('Exception thrown at CreateDBInstance' + str(e))
                pass
        elif eventname == 'CreateBucket':
            try:
                bucket_name = detail['requestParameters']['bucketName']
                s3_client.BucketTagging(bucket_name).put(Tagging={'TagSet': [{'Key': 'CreatedBy', 'Value': user}]})
            except Exception as e:
                logger.error('Exception thrown at CreateBucket' + str(e))
                pass
        elif eventname == 'CreateTable':
            try:
                tableArn = detail['responseElements']['tableDescription']['tableArn']
                ddb_client.tag_resource(ResourceArn=tableArn, Tags=[{'Key': 'CreatedBy', 'Value': user}])
            except Exception as e:
                logger.error('Exception thrown at CreateTable' + str(e))
                pass
        elif eventname == 'CreateMountTarget':
            try:
                system_id = detail['requestParameters']['fileSystemId']
                efs_client.create_tags(FileSystemId=system_id, Tags=[{'Key': 'CreatedBy', 'Value': user}])
            except Exception as e:
                logger.error('Exception thrown at CreateMountTarget' + str(e))
                pass
        # todo: EMR and Glacier also possible candidates
        else:
            logger.warning('No matching eventname found in the Auto Tag lambda function (Ln 118)')
        if ids:
            for resourceid in ids:
                print('Tagging resource ' + resourceid)
            ec2_client.create_tags(Resources=ids, Tags=[{'Key': 'CreatedBy', 'Value': user}])
        logger.info(' Remaining time (ms): ' + str(context.get_remaining_time_in_millis()) + '\n')
        return True
    except Exception as e:
        logger.error('Something went wrong: ' + str(e))
        return False
You are kind of limited by what is supported by CloudWatch Events, but this will hopefully help you knock out a few of the ones on your list.
Maybe something from the Python boto3 library would help (for example, for RDS):
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html#RDS.Client.add_tags_to_resource
response = client.add_tags_to_resource(
    ResourceName='arn:aws:rXXXXX',
    Tags=[
        {
            'Key': 'Staging',
            'Value': '2019',
        },
    ],
)
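If you would rather not juggle a separate client for every service on your list, another option (my suggestion, not something from the answers above) is the Resource Groups Tagging API, which can tag most ARN-addressable resources through a single call:

import boto3

tagging = boto3.client('resourcegroupstaggingapi')

# Tag resources from different services in one call; the ARNs are placeholders
response = tagging.tag_resources(
    ResourceARNList=[
        'arn:aws:rds:us-east-1:123456789012:db:mydb',
        'arn:aws:dynamodb:us-east-1:123456789012:table/mytable',
    ],
    Tags={'CreatedBy': 'some-user'}
)
print(response['FailedResourcesMap'])  # empty means everything was tagged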
To tag resources upon creation you need to use Lambda, CloudWatch Events, and some automation.
You can tag resources with the IAM ARN of the resource creator and the resource created-date with the documentation available here; this is an open-source tagging solution for AWS.
There are two ways you can implement the solution available at the above link:
1. Shell script
2. CloudFormation template