I ran my code to create an EC2 instance but I keep getting this error.
"errorMessage": "'message'",
"errorType": "KeyError",
The full code:
import boto3
import os

AMI = os.environ['AMI']
INSTANCE_TYPE = os.environ['INSTANCE_TYPE']
KEY_NAME = os.environ['KEY_NAME']
SUBNET_ID = os.environ['SUBNET_ID']
REGION = os.environ['AWS_REGION']

ec2 = boto3.client('ec2', region_name=REGION)

def lambda_handler(event, context):
    message = event['message']
    instance = ec2.run_instances(
        ImageId=AMI,
        InstanceType=INSTANCE_TYPE,
        KeyName=KEY_NAME,
        SubnetId=SUBNET_ID,
        MaxCount=1,
        MinCount=1,
        InstanceInitiatedShutdownBehavior='terminate',
        UserData=init_script
    )
    instance_id = instance['Instances'][0]['InstanceId']
    print(instance_id)
    return instance_id
What could be triggering this KeyError?
For the environment variable, am I supposed to use the full key name including its file extension?
Ex: "key.pem" instead of "key"
The error indicates that the problem is on this line:
message = event['message']
Most likely the lambda event does not have the 'message' key you are expecting. You should print out the event to CloudWatch Logs and take a look.
I ran the rest of your code and it created an EC2 instance successfully.
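For example, here is a minimal sketch (not your exact code, just a way to see what is actually in the event) that logs the incoming event to CloudWatch and avoids the KeyError by using .get():

import json

def lambda_handler(event, context):
    # Log the whole event so it shows up in CloudWatch Logs
    print(json.dumps(event))
    # .get() returns None instead of raising KeyError when the key is missing
    message = event.get('message')
    if message is None:
        print("Event has no 'message' key")
        return
    # ... rest of the handler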
I'm trying to filter the images whose tag value follows this pattern: dev/yyyy-mm-dd
e.g. dev/2021-09-11
The best idea I came up with was something like this:
aws ec2 describe-images --filters Name=tag:tag_name,Values="dev/????-??-??"
Is there a more logical approach?
You would use the boto3 AWS SDK to call AWS directly, something like:
import boto3
from datetime import datetime

FIND_TAG = 'your-tag'

ec2_resource = boto3.resource('ec2')

images = ec2_resource.images.filter(Owners=['self'])

# Find the matching tag on each image
for image in images:
    # image.tags can be None when the image has no tags at all
    values = [tag['Value'] for tag in image.tags or [] if tag['Key'] == FIND_TAG]
    if len(values) > 0:
        value = values[0]
        # Is it a date in the dev/yyyy-mm-dd form?
        try:
            date = datetime.strptime(value, 'dev/%Y-%m-%d')
        except ValueError:
            print('Not a date')
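Alternatively, if you would rather let EC2 do the matching server-side, boto3 accepts the same Filters as the CLI. A sketch, assuming the ? wildcard in tag filters behaves as described in the describe-images documentation (FIND_TAG is the same placeholder tag key as above):

import boto3

FIND_TAG = 'your-tag'
ec2_resource = boto3.resource('ec2')

# Each ? matches exactly one character, so this matches dev/yyyy-mm-dd values
images = ec2_resource.images.filter(
    Owners=['self'],
    Filters=[{'Name': f'tag:{FIND_TAG}', 'Values': ['dev/????-??-??']}]
)
for image in images:
    print(image.id)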
I know this page lists the instance types that are based on the Nitro system, but I would like to get that list dynamically with the CLI (for example, using aws ec2 describe-instances). Is it possible to get the Nitro-based instance types other than by parsing that static page? If so, could you tell me how?
You'd have to write a bit of additional code to get that information. aws ec2 describe-instances will give you the InstanceType property. You should use a programming language to parse the JSON, extract InstanceType, and then call describe-instance-types like so: https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instance-types.html?highlight=nitro
From the JSON you get back, extract hypervisor. That'll give you Nitro if the instance is Nitro.
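If all you need is the list itself, a CLI one-liner along these lines should also work (a sketch; it relies on the hypervisor filter of describe-instance-types accepting the value nitro):

aws ec2 describe-instance-types --filters Name=hypervisor,Values=nitro --query 'InstanceTypes[].InstanceType' --output text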
Here's some Python code that might work. I have not tested it fully, but you can tweak it to get the results you want.
"""List all EC2 instances"""
import boto3
def ec2_connection():
"""Connect to AWS using API"""
region = 'us-east-2'
aws_key = 'xxx'
aws_secret = 'xxx'
session = boto3.Session(
aws_access_key_id = aws_key,
aws_secret_access_key = aws_secret
)
ec2 = session.client('ec2', region_name = region)
return ec2
def get_reservations(ec2):
"""Get a list of instances as a dictionary"""
response = ec2.describe_instances()
return response['Reservations']
def process_instances(reservations, ec2):
"""Print a colorful list of IPs and instances"""
if len(reservations) == 0:
print('No instance found. Quitting')
return
for reservation in reservations:
for instance in reservation['Instances']:
# get friendly name of the server
# only try this for mysql1.local server
friendly_name = get_friendly_name(instance)
if friendly_name.lower() != 'mysql1.local':
continue
# get the hypervisor based on the instance type
instance_type = get_instance_info(instance['InstanceType'], ec2)
# print findings
print(f'{friendly_name} // {instance["InstanceType"]} is {instance_type}')
break
def get_instance_info(instance_type, ec2):
"""Get hypervisor from the instance type"""
response = ec2.describe_instance_types(
InstanceTypes=[instance_type]
)
return response['InstanceTypes'][0]['Hypervisor']
def get_friendly_name(instance):
"""Get friendly name of the instance"""
tags = instance['Tags']
for tag in tags:
if tag['Key'] == 'Name':
return tag['Value']
return 'Unknown'
def run():
"""Main method to call"""
ec2 = ec2_connection()
reservations = get_reservations(ec2)
process_instances(reservations, ec2)
if __name__ == '__main__':
run()
print('Done')
In the above answer, the statement "From the JSON you get back, extract hypervisor. That'll give you Nitro if the instance is Nitro" is no longer accurate.
As per the latest AWS documentation:
hypervisor - The hypervisor type of the instance (ovm | xen). The value xen is used for both Xen and Nitro hypervisors.
Cleaned up, verified working code below:
# Get all instance types that run on the Nitro hypervisor
import boto3

def get_nitro_instance_types():
    """Get all instance types that run on the Nitro hypervisor"""
    ec2 = boto3.client('ec2', region_name='us-east-1')
    response = ec2.describe_instance_types(
        Filters=[
            {
                'Name': 'hypervisor',
                'Values': [
                    'nitro',
                ]
            },
        ],
    )
    instance_types = []
    for instance_type in response['InstanceTypes']:
        instance_types.append(instance_type['InstanceType'])
    return instance_types

get_nitro_instance_types()
Example output as of 12/06/2022 below:
['r5dn.8xlarge', 'x2iedn.xlarge', 'r6id.2xlarge', 'r6gd.medium',
'm5zn.2xlarge', 'r6idn.16xlarge', 'c6a.48xlarge', 'm5a.16xlarge',
'im4gn.2xlarge', 'c6gn.16xlarge', 'c6in.24xlarge', 'r5ad.24xlarge',
'r6i.xlarge', 'c6i.32xlarge', 'x2iedn.2xlarge', 'r6id.xlarge',
'i3en.24xlarge', 'i3en.12xlarge', 'm5d.8xlarge', 'c6i.8xlarge',
'r6g.large', 'm6gd.4xlarge', 'r6a.2xlarge', 'x2iezn.4xlarge',
'c6i.large', 'r6in.24xlarge', 'm6gd.xlarge', 'm5dn.2xlarge',
'd3en.2xlarge', 'c6id.8xlarge', 'm6a.large', 'is4gen.xlarge',
'r6g.8xlarge', 'm6idn.large', 'm6a.2xlarge', 'c6i.4xlarge',
'i4i.16xlarge', 'm5zn.6xlarge', 'm5.8xlarge', 'm6id.xlarge',
'm5n.16xlarge', 'c6g.16xlarge', 'r5n.12xlarge', 't4g.nano',
'm5ad.12xlarge', 'r6in.12xlarge', 'm6idn.12xlarge', 'g5.2xlarge',
'trn1.32xlarge', 'x2gd.8xlarge', 'is4gen.4xlarge', 'r6gd.xlarge',
'r5a.xlarge', 'r5a.2xlarge', 'c5ad.24xlarge', 'r6a.xlarge',
'r6g.medium', 'm6id.12xlarge', 'r6idn.2xlarge', 'c5n.2xlarge',
'g5.4xlarge', 'm5d.xlarge', 'i3en.3xlarge', 'r5.24xlarge',
'r6gd.2xlarge', 'c5d.large', 'm6gd.12xlarge', 'm6id.2xlarge',
'm6i.large', 'z1d.2xlarge', 'm5a.4xlarge', 'm5a.2xlarge',
'c6in.xlarge', 'r6id.16xlarge', 'c7g.8xlarge', 'm5dn.12xlarge',
'm6gd.medium', 'im4gn.8xlarge', 'm5dn.large', 'c5ad.4xlarge',
'r6g.16xlarge', 'c6a.24xlarge', 'c6a.16xlarge']
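One caveat worth adding (not part of the verified output above): describe_instance_types returns its results in pages, so a single call may not cover every instance type. If the list looks truncated, a paginator collects all pages; a minimal sketch:

import boto3

def get_all_nitro_instance_types(region_name='us-east-1'):
    """Collect Nitro instance types across every page of results"""
    ec2 = boto3.client('ec2', region_name=region_name)
    paginator = ec2.get_paginator('describe_instance_types')
    pages = paginator.paginate(
        Filters=[{'Name': 'hypervisor', 'Values': ['nitro']}]
    )
    instance_types = []
    for page in pages:
        for instance_type in page['InstanceTypes']:
            instance_types.append(instance_type['InstanceType'])
    return instance_types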
"""List all EC2 instances"""
import boto3
def ec2_connection():
"""Connect to AWS using API"""
region = 'us-east-2'
aws_key = 'xxx'
aws_secret = 'xxx'
session = boto3.Session(
aws_access_key_id = aws_key,
aws_secret_access_key = aws_secret
)
ec2 = session.client('ec2', region_name = region)
return ec2
def get_reservations(ec2):
"""Get a list of instances as a dictionary"""
response = ec2.describe_instances()
return response['Reservations']
def process_instances(reservations, ec2):
"""Print a colorful list of IPs and instances"""
if len(reservations) == 0:
print('No instance found. Quitting')
return
for reservation in reservations:
for instance in reservation['Instances']:
# get friendly name of the server
# only try this for mysql1.local server
friendly_name = get_friendly_name(instance)
if friendly_name.lower() != 'mysql1.local':
continue
# get the hypervisor based on the instance type
instance_type = get_instance_info(instance['InstanceType'], ec2)
# print findings
print(f'{friendly_name} // {instance["InstanceType"]} is {instance_type}')
break
def get_instance_info(instance_type, ec2):
"""Get hypervisor from the instance type"""
response = ec2.describe_instance_types(
InstanceTypes=[instance_type]
)
return response['InstanceTypes'][0]['Hypervisor']
def get_friendly_name(instance):
"""Get friendly name of the instance"""
tags = instance['Tags']
for tag in tags:
if tag['Key'] == 'Name':
return tag['Value']
return 'Unknown'
def run():
"""Main method to call"""
ec2 = ec2_connection()
reservations = get_reservations(ec2)
process_instances(reservations, ec2)
if name == 'main':
run()
print('Done')
Currently I have several hundred AWS IAM Roles with inline policies.
I would like to somehow convert these inline policies to managed policies.
While AWS Documentation has a way to do this via the Console, this will be very time consuming.
Does anyone know of a way, or have a script, to do this via boto3 or the AWS CLI, or can you direct me to some method that lets me do this programmatically?
Thanks in advance
The boto3 code will look like this.
In this code, the inline policies embedded in the specified IAM user are copied to customer managed policies.
Note that the delete part is commented out.
import json
import boto3

user_name = 'xxxxxxx'

client = boto3.client("iam")

response = client.list_user_policies(UserName=user_name)

for policy_name in response["PolicyNames"]:
    response = client.get_user_policy(UserName=user_name, PolicyName=policy_name)
    policy_document = json.dumps(response["PolicyDocument"])
    response = client.create_policy(
        PolicyName=policy_name, PolicyDocument=policy_document
    )
    # response = client.delete_user_policy(
    #     UserName=user_name,
    #     PolicyName=policy_name
    # )
Updated:
For IAM roles, the above code works if you change User to Role and user to role (case sensitive).
Also, if you want to run it for multiple roles, use list_roles to get each role_name:
response = client.list_roles()
for i in response['Roles']:
    role_name = i['RoleName']
    # print(role_name)
Building on @shimo's snippet, the following works, with error handling added and the newly created managed policy attached to the IAM role:
import json
import boto3
from botocore.exceptions import ClientError

role_name = 'xxxxxxxx'
account_id = '123456789'

client = boto3.client("iam")
resource = boto3.resource('iam')

response = client.list_role_policies(RoleName=role_name)

for policy_name in response["PolicyNames"]:
    response = client.get_role_policy(RoleName=role_name, PolicyName=policy_name)
    policy_document = json.dumps(response["PolicyDocument"])
    print(policy_document)

    try:
        response = client.create_policy(
            PolicyName=policy_name, PolicyDocument=policy_document
        )
        print(policy_name + ' Policy Created')
    except ClientError as error:
        if error.response['Error']['Code'] == 'EntityAlreadyExists':
            print(policy_name + ' policy already exists')
        else:
            print("Unexpected error: %s" % error)

    policy_arn = f'arn:aws:iam::{account_id}:policy/{policy_name}'
    role = resource.Role(role_name)
    role.attach_policy(PolicyArn=policy_arn)

    response = client.delete_role_policy(
        RoleName=role_name,
        PolicyName=policy_name
    )
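Since the question mentions several hundred roles, note that list_roles returns at most 100 roles per call by default. A paginator loop like the sketch below can drive the per-role logic above (migrate_role_policies is a hypothetical name for that logic wrapped into a function):

import boto3

client = boto3.client('iam')

# list_roles is paginated, so walk every page of roles
paginator = client.get_paginator('list_roles')
for page in paginator.paginate():
    for role in page['Roles']:
        role_name = role['RoleName']
        # run the inline-to-managed migration above for this role, e.g.
        # migrate_role_policies(role_name)
        print(role_name)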
This code was intended to list all the running instances that I have.
It is showing "errorMessage": "'s3.ServiceResource' object has no attribute 'object'".
This code is intended to run on AWS Lambda. I am a beginner, and so if there are any other mistakes in this code, please do share your inputs.
import json
import boto3

ec2 = boto3.resource("ec2")
s3 = boto3.resource('s3')
object = s3.Bucket("object")

def lambda_handler(event, context):
    filters = [{"Name": "instance-state-name",
                "Values": ["running"]}]
    instances = ec2.instances.filter(Filters=filters)

    RunningInstances = []
    for instance in instances:
        RunningInstances.append(instance.id)

    instanceList = json.dumps(RunningInstances)

    s3.object(
        'sameeeeeerjajs',
        'instanceList.txt'
    ).put(Body=instanceList)

    return("Status code:200")
From boto3's docs for S3.Object, it should be Object (capital O), not object:
s3.Object(
    'sameeeeeerjajs',
    'instanceList.txt'
).put(Body=instanceList)