boto3: Create an instance with an instance profile / IAM role - amazon-web-services

I want to write a script that starts servers for me and does the setup. It should:
Create two S3 buckets and set their CORS configuration (solved)
Create an EC2 server based on an image
Give this server access to those buckets
What I have found so far is how to create a bucket and how to create the instance itself:
# see http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#ec2
aws = boto3.Session(profile_name="myProfile")
s3 = aws.resource('s3', region_name='my-region-1')
bucket = s3.create_bucket(
    Bucket='my-cool-bucket',
    # ...
)
# ...
ec2 = aws.resource('ec2', region_name='my-region-1')
ec2.create_instances(
    ImageId="my-ami-image-id",
    MinCount=1,  # I want exactly 1 server
    MaxCount=1,
    KeyName='my-ssh-key',
    SecurityGroupIds=['my-security-group'],
    UserData=myStartupScript,  # script that runs when the server starts
    InstanceType='t2.nano',
    SubnetId="my-subnet-id",
    DisableApiTermination=True,
    PrivateIpAddress='10.0.0.1',
    # ...
)
But how do I now create the role for that server and give that role access to the bucket?

I have found the way to create the instance profile:
https://boto3.readthedocs.io/en/latest/reference/services/iam.html#IAM.ServiceResource.create_instance_profile
instance_profile = iam.create_instance_profile(
    InstanceProfileName='string',
    Path='string'
)

You will need to:
Create an InstanceProfile
Associate a Role to the Instance Profile
Launch the instance(s) with the IamInstanceProfile parameter

First create a role named Test-emr-instance-role; then you can use this code to create an instance profile and attach the role to it.
session = boto3.session.Session(profile_name='myProfile')
iam = session.client('iam')
instance_profile = iam.create_instance_profile(
    InstanceProfileName='Test-emr-instance-profile'
)
response = iam.add_role_to_instance_profile(
    InstanceProfileName='Test-emr-instance-profile',
    RoleName='Test-emr-instance-role'
)

Related

Grab Public IP Of a New Running Instance and send it via SNS

So, I have this code, and I would love to grab the public IP address of the new Windows instance that is created when I adjust the desired capacity.
The launch template assigns an automatic tag name when I adjust the desired_capacity; I want to grab the public IP address of the instance carrying that tag.
import boto3

session = boto3.session.Session()
client = session.client('autoscaling')

def set_desired_capacity(asg_name, desired_capacity):
    response = client.set_desired_capacity(
        AutoScalingGroupName=asg_name,
        DesiredCapacity=desired_capacity,
    )
    return response

def lambda_handler(event, context):
    asg_name = "test"
    desired_capacity = 1
    return set_desired_capacity(asg_name, desired_capacity)

if __name__ == '__main__':
    print(lambda_handler("", ""))
I took a look at the EC2 client documentation, and I wasn't sure what to use. I just need help modifying my code.
If you know the tag that you are assigning in the autoscaling group, then you can just use a describe_instances method. The Boto3 docs have an example with filtering. Something like this should work, replacing TAG, VALUE, and TOPICARN with the appropriate values.
import boto3

ec2_client = boto3.client('ec2', 'us-west-2')
sns_client = boto3.client('sns', 'us-west-2')

response = ec2_client.describe_instances(
    Filters=[
        {
            'Name': 'tag:TAG',
            'Values': ['VALUE']
        }
    ]
)
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        # "PublicIpAddress" is absent until the instance has one assigned
        ip = instance.get("PublicIpAddress")
        if ip is None:
            continue
        sns_publish = sns_client.publish(
            TopicArn='TOPICARN',
            Message=ip,
        )
        print(sns_publish)
Objective:
After an EC2 instance starts
Obtain the IP address
Send a message via Amazon SNS
It can take some time for a public IP address to be assigned to an Amazon EC2 instance. Rather than continually calling DescribeInstances(), it would be easier to run commands on the instance at launch via a User Data script (see "Run commands on your Linux instance at launch" in the Amazon EC2 documentation).
The script could:
Obtain the instance's public IP address via instance metadata (see "Instance metadata and user data" in the Amazon EC2 documentation):
IP=$(curl http://169.254.169.254/latest/meta-data/public-ipv4)
Send a message to an Amazon SNS topic with:
aws sns publish --topic-arn xxx --message "$IP"
If you also want the message to include a name from a tag associated with the instance, the script will need to call aws ec2 describe-instances with its own instance ID (which can also be obtained via the instance metadata) and then extract the name from the tags returned.
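The User Data steps above can also be sketched in Python with boto3 instead of the AWS CLI. This is a minimal sketch, assuming boto3 is available on the instance; the topic ARN is a placeholder, and get_public_ip takes an injectable fetch function so it can be exercised off-instance:

```python
import urllib.request

# Instance metadata endpoint for the public IPv4 address
METADATA_IP_URL = "http://169.254.169.254/latest/meta-data/public-ipv4"

def get_public_ip(fetch=None):
    """Read this instance's public IPv4 address from the metadata endpoint."""
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url, timeout=2) as resp:
                return resp.read()
    return fetch(METADATA_IP_URL).decode().strip()

def publish_ip(topic_arn, ip, sns_client=None):
    """Publish the IP to an SNS topic (real client created lazily)."""
    if sns_client is None:
        import boto3  # only needed when talking to the real service
        sns_client = boto3.client('sns')
    return sns_client.publish(TopicArn=topic_arn, Message=ip)
```

On the instance this would be called as publish_ip('arn:aws:sns:...:my-topic', get_public_ip()), with the real topic ARN substituted in.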

How to get EC2 ID and Private IP from EC2 Autoscaling Group using AWS CDK

How can I get the instance ID and private IP for an EC2 instance deployed with an AutoScalingGroup (AWS CDK Python)?
The AutoscalingGroup Construct is like this:
from aws_cdk import (
    core,
    aws_ec2,
    aws_autoscaling
)

autoscaling_group = aws_autoscaling.AutoScalingGroup(
    self,
    id="AutoscalingGroup",
    instance_type=aws_ec2.InstanceType('m5.xlarge'),
    machine_image=aws_ec2.MachineImage.latest_amazon_linux(),
    vpc=Myvpc,
    vpc_subnets=aws_ec2.SubnetSelection(subnet_type=aws_ec2.SubnetType.PUBLIC),
    associate_public_ip_address=True,
    desired_capacity=1,
    key_name='MySSHKey'
)
Thank you very much.
You can retrieve them using boto3.
Here is an example that gets them only for running instances:
import boto3

ec2_res = boto3.resource('ec2')
instances = ec2_res.instances.filter(
    Filters=[
        {'Name': 'instance-state-name', 'Values': ['running']}
    ]
)
for instance in instances:
    print(instance.id, instance.instance_type, instance.private_ip_address)
You can check the boto3 documentation for the available filter parameters and for this call.
If you want to filter on a specific name, you have to check in the tags of the instances:
for instance in instances:
    for tag in instance.tags:
        if tag.get('Key') == 'Name' and tag.get('Value') == '<The name of your instance>':
            print(instance.id, instance.instance_type, instance.private_ip_address)
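For the Auto Scaling case specifically, instances launched by an ASG carry the service-assigned aws:autoscaling:groupName tag, so you can filter on it directly instead of a Name tag. A minimal sketch; the EC2 resource is passed in (e.g. boto3.resource('ec2')) so the helper stays testable:

```python
def asg_instance_details(ec2_resource, asg_name):
    """Return (instance_id, private_ip) pairs for running instances in an ASG.

    Relies on the 'aws:autoscaling:groupName' tag that the Auto Scaling
    service attaches to the instances it launches.
    """
    instances = ec2_resource.instances.filter(
        Filters=[
            {'Name': 'tag:aws:autoscaling:groupName', 'Values': [asg_name]},
            {'Name': 'instance-state-name', 'Values': ['running']},
        ]
    )
    return [(i.id, i.private_ip_address) for i in instances]
```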

How to list AWS tagged Hosted Zones using ResourceGroupsTaggingAPI boto3

I am trying to retrieve all the AWS resources tagged using the boto3 ResourceGroupsTaggingAPI, but I can't seem to retrieve the Hosted Zones which have been tagged.
import boto3

tagFilters = [{'Key': 'tagA', 'Values': ['a']}, {'Key': 'tagB', 'Values': ['b']}]
client = boto3.client('resourcegroupstaggingapi', region_name='us-east-2')
paginator = client.get_paginator('get_resources')
page_list = paginator.paginate(TagFilters=tagFilters)
# filter and get iterable object arn
# Refer to filtering with JMESPath => http://jmespath.org/
arns = page_list.search("ResourceTagMappingList[*].ResourceARN")
for arn in arns:
    print(arn)
I noticed through the Tag Editor in the AWS Console (which I guess uses the ResourceGroupsTaggingAPI) that when the region is set to All, the tagged hosted zones can be retrieved (since they are global), while when a specific region is set, the tagged hosted zones are not shown in the results. Is there a way to set the boto3 client region to all, or is there another way to do this?
I have already tried
client = boto3.client('resourcegroupstaggingapi')
which returns an empty result
(https://console.aws.amazon.com/resource-groups/tag-editor/find-resources?region=us-east-1)
You need to iterate over all regions:
from botocore.config import Config
import boto3

ec2 = boto3.client('ec2')
region_response = ec2.describe_regions()
# print('Regions:', region_response['Regions'])
for this_region_info in region_response['Regions']:
    region = this_region_info["RegionName"]
    my_config = Config(region_name=region)
    client = boto3.client('resourcegroupstaggingapi', config=my_config)
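Building on the region loop above, here is a sketch that collects the matching ARNs per client. Calling it once per region should cover regional resources, and the expectation is that global resources such as hosted zones surface via the us-east-1 endpoint; that last behaviour is an assumption worth verifying against your account:

```python
def tagged_arns(client, tag_filters):
    """Yield the ARN of every resource matching tag_filters via one client."""
    paginator = client.get_paginator('get_resources')
    for page in paginator.paginate(TagFilters=tag_filters):
        for mapping in page['ResourceTagMappingList']:
            yield mapping['ResourceARN']
```

Aggregate the results across regions into a set to de-duplicate any resources reported more than once.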

Terraform: Create AWS Lightsail instance from snapshot

Given the Terraform documentation on AWS Lightsail, I can construct a brand new Lightsail instance as follows.
resource "aws_lightsail_instance" "my_ls_instance" {
  name              = "my_ls"
  availability_zone = "us-east-1b"
  blueprint_id      = "ubuntu_18_04"
  bundle_id         = "2xlarge_2_0"
  key_pair_name     = "MyKeyName"
}
Is it possible to create a Lightsail instance from a Lightsail snapshot using Terraform?
No, it's not. Right now Terraform can only create instances based on Lightsail blueprints.
You can, however, create an instance from a snapshot in Python 3 with boto3. Let me include my code:
import boto3

client = boto3.client('lightsail')
response = client.create_instances_from_snapshot(
    instanceNames=[
        'myitblog',
    ],
    availabilityZone='us-east-1a',
    instanceSnapshotName='MYITBLOG_https',
    bundleId='nano_2_0',
)
response = client.attach_static_ip(
    staticIpName='StaticIp-1',
    instanceName='myitblog'
)

Is there any way to fetch tags of an RDS instance using boto3?

rds_client = boto3.client('rds', 'us-east-1')
instance_info = rds_client.describe_db_instances(
    DBInstanceIdentifier='**myinstancename**')
But instance_info doesn't contain any of the tags I set on the RDS instance. I want to fetch the instances that have env='production' and exclude those with env='test'. Is there any method in boto3 that fetches the tags as well?
Only through boto3.client("rds").list_tags_for_resource, which:
Lists all tags on an Amazon RDS resource.
ResourceName (string) --
The Amazon RDS resource with tags to be listed. This value is an Amazon Resource Name (ARN). For information about creating an ARN, see Constructing an RDS Amazon Resource Name (ARN).
import boto3

rds_client = boto3.client('rds', 'us-east-1')
db_instance_info = rds_client.describe_db_instances(
    DBInstanceIdentifier='**myinstancename**')
for each_db in db_instance_info['DBInstances']:
    response = rds_client.list_tags_for_resource(
        ResourceName=each_db['DBInstanceArn'])
    # The Filters parameter of list_tags_for_resource is documented as not
    # currently supported, so filter the returned TagList client-side instead.
    tags = {t['Key']: t['Value'] for t in response['TagList']}
    if tags.get('env') == 'production':
        print(each_db['DBInstanceIdentifier'], tags)
Either apply a simple exclusion on top of this filtering, or dig through the documentation to build a more complicated JMESPath filter using paginators.
Note: AWS resource tagging is not implemented uniformly across services, so always refer to the boto3 documentation for the service you are using.
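Pulling the pieces together for the env='production' case, here is a minimal sketch that pages through all DB instances and filters client-side on the returned tag list. The paginator avoids missing instances beyond the first page; the RDS client is passed in so the helper stays testable:

```python
def instances_with_tag(rds_client, key, value):
    """Return identifiers of DB instances whose tags include key=value."""
    ids = []
    paginator = rds_client.get_paginator('describe_db_instances')
    for page in paginator.paginate():
        for db in page['DBInstances']:
            tags = rds_client.list_tags_for_resource(
                ResourceName=db['DBInstanceArn'])['TagList']
            if any(t['Key'] == key and t['Value'] == value for t in tags):
                ids.append(db['DBInstanceIdentifier'])
    return ids
```

For example, instances_with_tag(boto3.client('rds', 'us-east-1'), 'env', 'production') would return the production instances and naturally exclude those tagged env='test'.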
This Python program will show you how to list all RDS instances, their type, and their status.
list_rds_instances.py
import boto3

# connect to RDS
client = boto3.client('rds')

# rds_instance will have all RDS information in a dictionary
rds_instance = client.describe_db_instances()

print('RDS Instance Name \t| Instance Type \t| Status')
for i in rds_instance['DBInstances']:
    dbInstanceName = i['DBInstanceIdentifier']
    dbInstanceEngine = i['DBInstanceClass']
    dbInstanceStatus = i['DBInstanceStatus']
    print('%s \t| %s \t| %s' % (dbInstanceName, dbInstanceEngine, dbInstanceStatus))
Important note: while working with boto3 you need to set up your credentials in two files, ~/.aws/credentials and ~/.aws/config
~/.aws/credentials
[default]
aws_access_key_id=<ACCESS_KEY>
aws_secret_access_key=<SECRET_KEY>
~/.aws/config
[default]
region=ap-south-1