AWS: retrieving the EC2 instance id linked to a Beanstalk environment

How to get the instance id linked to an Elastic Beanstalk environment (with boto, for instance)?
(The 'EnvironmentId' param is not the EC2 AMI id.)
def get_environment_instance(self, env_name):
    """
    Returns the environment instance id
    """
    response = self.ebs.describe_environments(
        application_name=self.app_name,
        environment_names=[env_name],
        include_deleted=False)
    envs = response['DescribeEnvironmentsResponse']['DescribeEnvironmentsResult']['Environments']
    for env in envs:
        out('---' + env['EnvironmentId'])
    return None

I think you want the describe_environment_resources method, not describe_environments. That should return all of the AWS resources used by the environment, including the instance IDs of all EC2 instances.
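A minimal sketch of that approach with boto3 (the live client call is shown only in a comment since it needs credentials; the environment name and region are assumptions, and the helper runs against a sample dict shaped like a real DescribeEnvironmentResources response):

```python
# With live credentials the response would come from:
#   import boto3
#   eb = boto3.client('elasticbeanstalk', region_name='us-east-1')  # region is an assumption
#   response = eb.describe_environment_resources(EnvironmentName='my-env')  # hypothetical name

def extract_instance_ids(response):
    """Pull the EC2 instance IDs out of a DescribeEnvironmentResources response."""
    return [i['Id'] for i in response['EnvironmentResources']['Instances']]

# Sample response with the same shape as the real API output
response = {
    'EnvironmentResources': {
        'EnvironmentName': 'my-env',
        'Instances': [{'Id': 'i-0123456789abcdef0'}],
    }
}
print(extract_instance_ids(response))  # ['i-0123456789abcdef0']
```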

Related

How to get the instance_id (returned as a class instance) after creating a new server with boto3 in Python

I am launching a new ec2 instance with this code:
ec2 = boto3.resource('ec2',
                     aws_access_key_id=existing_user.access_id,
                     aws_secret_access_key=existing_user.secret_id,
                     region_name='eu-west-2')
instance = ec2.create_instances(
    ImageId="ami-084e8c05825742534",
    MinCount=1,
    MaxCount=1,
    InstanceType="t2.micro",
    KeyName="KeyPair1",
    SecurityGroupIds=[
        'sg-0f6e6789ff4e7e7c1',
    ],
)
print('successfully launched an instance, save it to User db')
print(instance[0])
print(type(instance[0]))
The instance variable holds the new EC2 instance, and printing it outputs something like this:
ec2.Instance(id='i-03ee6121b4e7846d2')
<class 'boto3.resources.factory.ec2.Instance'>
I am new to Python classes and am not able to access/extract the id, which I need to save to my DB.
Can anybody help with this?
The Instance resource has an id attribute which you can print like this or save to the DB:
print(f'EC2 instance "{instance[0].id}"')

Grab Public IP Of a New Running Instance and send it via SNS

So, I have this code, and I would love to grab the public IP address of the new Windows instance that will be created when I adjust the desired capacity.
The launch template assigns an automatic tag name when I adjust the desired_capacity. I want to be able to grab the public IP address of the instance with that tag name.
import boto3

session = boto3.session.Session()
client = session.client('autoscaling')

def set_desired_capacity(asg_name, desired_capacity):
    response = client.set_desired_capacity(
        AutoScalingGroupName=asg_name,
        DesiredCapacity=desired_capacity,
    )
    return response

def lambda_handler(event, context):
    asg_name = "test"
    desired_capacity = 1
    return set_desired_capacity(asg_name, desired_capacity)

if __name__ == '__main__':
    print(lambda_handler("", ""))
I took a look at the EC2 client documentation, and I wasn't sure what to use. I just need help modifying my code
If you know the tag that you are assigning in the autoscaling group, then you can just use a describe_instances method. The Boto3 docs have an example with filtering. Something like this should work, replacing TAG, VALUE, and TOPICARN with the appropriate values.
import boto3

ec2_client = boto3.client('ec2', 'us-west-2')
sns_client = boto3.client('sns', 'us-west-2')

response = ec2_client.describe_instances(
    Filters=[
        {
            'Name': 'tag:TAG',
            'Values': [
                'VALUE'
            ]
        }
    ]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        ip = instance["PublicIpAddress"]
        sns_publish = sns_client.publish(
            TopicArn='TOPICARN',
            Message=ip,
        )
        print(sns_publish)
Objective: after an EC2 instance starts, obtain its IP address and send a message via Amazon SNS.
It can take some time for a public IP address to be assigned to an Amazon EC2 instance. Rather than continually calling DescribeInstances(), it would be easier to run commands on the instance at launch via a User Data script (see "Run commands on your Linux instance at launch" in the Amazon EC2 documentation).
The script could:
Obtain its public IP address from the instance metadata service (see "Instance metadata and user data" in the Amazon EC2 documentation):
IP=$(curl 169.254.169.254/latest/meta-data/public-ipv4)
Send a message to an Amazon SNS topic with:
aws sns publish --topic-arn xxx --message $IP
If you also want the message to include a name from a tag associated with the instance, the script will need to call aws ec2 describe-instances with its own instance ID (which can be obtained via the instance metadata) and then extract the name from the tags returned.
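If the name lookup is done in Python rather than the CLI, the tag-extraction step can be sketched like this (the live client call is shown only in a comment; the response dict is sample data shaped like a real DescribeInstances response, with a made-up instance ID and tag value):

```python
# With live credentials the response would come from:
#   import boto3
#   ec2 = boto3.client('ec2')
#   response = ec2.describe_instances(InstanceIds=[instance_id])

def name_tag(response):
    """Return the 'Name' tag value of the first instance in the response, or None."""
    instance = response['Reservations'][0]['Instances'][0]
    for tag in instance.get('Tags', []):
        if tag['Key'] == 'Name':
            return tag['Value']
    return None

# Sample response with the same shape as the real API output
response = {
    'Reservations': [{
        'Instances': [{
            'InstanceId': 'i-0123456789abcdef0',
            'Tags': [{'Key': 'Name', 'Value': 'asg-windows-1'}],
        }]
    }]
}
print(name_tag(response))  # asg-windows-1
```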

Terraform aws_instance specify login credentials

I am provisioning a CentOS 7 instance in AWS with Terraform with this code block:
resource "aws_instance" "my_instance" {
  ami                    = "${var.image-aws-centos7}"
  monitoring             = "true"
  availability_zone      = "${var.aws-az1}"
  subnet_id              = "${var.my_subnet.id}"
  vpc_security_group_ids = ["${aws_security_group.my_sg.id}"]
  tags = {
    Name          = "my_instance"
    os-type       = "linux"
    os-version    = "centos7"
    no_domainjoin = "true"
    purpose       = "my test vm"
  }
}
The instance is created successfully, but because I explicitly won't join it to my domain, authentication with my domain admin credentials fails, which is understandable.
I log in with ssh and the host is successfully added permanently to known_hosts.
I was searching the docs for how to define a local admin user name and password in Terraform so that I can use those credentials to log in to the instance.
I can't find an answer.
Any help is much appreciated.
Adding new users on your instance should be performed from inside the instance. For this you could use the user_data attribute on your aws_instance resource.
User data is a script that executes once, when your instance launches for the first time. Thus, instead of manually logging into the instance through ssh, you would provide a script in user_data that reproduces the manual steps you would take after instance launch.
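A minimal sketch of what that could look like, extending the resource from the question (the username and the key-based setup are assumptions; baking plaintext passwords into user data is not recommended):

```hcl
resource "aws_instance" "my_instance" {
  # ... the arguments from the question ...

  user_data = <<-EOF
    #!/bin/bash
    # Hypothetical local admin user; key-based login is assumed
    useradd -m -G wheel localadmin
    mkdir -p /home/localadmin/.ssh
    echo "ssh-rsa AAAA... localadmin" > /home/localadmin/.ssh/authorized_keys
    chown -R localadmin:localadmin /home/localadmin/.ssh
    chmod 700 /home/localadmin/.ssh
    chmod 600 /home/localadmin/.ssh/authorized_keys
  EOF
}
```

On CentOS 7 the wheel group grants sudo access, so this user can act as a local admin.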

How to get EC2 ID and Private IP from EC2 Autoscaling Group using AWS CDK

How can I get the instance ID and private IP for EC2 instance deployed with AutoscalingGroup (AWS CDK Python) ?
The AutoscalingGroup Construct is like this:
from aws_cdk import (
    core,
    aws_ec2,
    aws_autoscaling
)

autoscaling_group = aws_autoscaling.AutoScalingGroup(
    self,
    id="AutoscalingGroup",
    instance_type=aws_ec2.InstanceType('m5.xlarge'),
    machine_image=aws_ec2.MachineImage.latest_amazon_linux(),
    vpc=Myvpc,
    vpc_subnets=aws_ec2.SubnetSelection(subnet_type=aws_ec2.SubnetType.PUBLIC),
    associate_public_ip_address=True,
    desired_capacity=1,
    key_name='MySSHKey'
)
Thank you very much.
You can retrieve them using boto3.
Here is an example to get them only for the running instances :
import boto3

ec2_res = boto3.resource('ec2')
instances = ec2_res.instances.filter(
    Filters=[
        {'Name': 'instance-state-name', 'Values': ['running']}
    ]
)
for instance in instances:
    print(instance.id, instance.instance_type, instance.private_ip_address)
You can check the boto3 EC2 documentation for the available filter parameters and for the instances collection used in this call.
If you want to filter on a specific name, you have to check in the tags of the instances:
for instance in instances:
    for tag in instance.tags:
        if (tag.get('Key') == 'Name') and (tag.get('Value') == '<The name of your instance>'):
            print(instance.id, instance.instance_type, instance.private_ip_address)

Is there any way to fetch the tags of an RDS instance using boto3?

rds_client = boto3.client('rds', 'us-east-1')
instance_info = rds_client.describe_db_instances(DBInstanceIdentifier='myinstancename')
But instance_info doesn't contain any of the tags I set on the RDS instance. I want to fetch the instances that have env='production' and exclude env='test'. Is there any method in boto3 that fetches the tags as well?
Only through boto3.client("rds").list_tags_for_resource
Lists all tags on an Amazon RDS resource.
ResourceName (string) --
The Amazon RDS resource with tags to be listed. This value is an Amazon Resource Name (ARN). For information about creating an ARN, see Constructing an RDS Amazon Resource Name (ARN) .
import boto3

rds_client = boto3.client('rds', 'us-east-1')
db_instance_info = rds_client.describe_db_instances(
    DBInstanceIdentifier='myinstancename')
for each_db in db_instance_info['DBInstances']:
    response = rds_client.list_tags_for_resource(
        ResourceName=each_db['DBInstanceArn'],
        Filters=[{
            'Name': 'env',
            'Values': [
                'production',
            ]
        }])
Either apply a simple exclusion on top of the simple filter, or dig through the documentation to build a complicated JMESPath filter using paginators.
Note: AWS resource tagging is not implemented uniformly across services, so always refer to the boto3 documentation.
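The production/test split can then be done client-side once the tags are fetched. A sketch (the dicts are sample data shaped like a real ListTagsForResource response, which returns tags under a 'TagList' key):

```python
def has_tag(tag_response, key, value):
    """True if a ListTagsForResource response contains the tag key=value."""
    return any(t['Key'] == key and t['Value'] == value
               for t in tag_response.get('TagList', []))

# Sample responses with the same shape as the real API output
prod_tags = {'TagList': [{'Key': 'env', 'Value': 'production'}]}
test_tags = {'TagList': [{'Key': 'env', 'Value': 'test'}]}

print(has_tag(prod_tags, 'env', 'production'))  # True
print(has_tag(test_tags, 'env', 'production'))  # False
```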
This Python program shows how to list all RDS instances, their type, and their status.
list_rds_instances.py
import boto3

# Connect to RDS
client = boto3.client('rds')

# describe_db_instances returns all RDS instance information as a dictionary
rds_instance = client.describe_db_instances()

print('RDS Instance Name \t| Instance Type \t| Status')
for i in rds_instance['DBInstances']:
    dbInstanceName = i['DBInstanceIdentifier']
    dbInstanceEngine = i['DBInstanceClass']
    dbInstanceStatus = i['DBInstanceStatus']
    print('%s \t| %s \t| %s' % (dbInstanceName, dbInstanceEngine, dbInstanceStatus))
Important note: while working with boto3 you need to set up your credentials in two files, ~/.aws/credentials and ~/.aws/config.
~/.aws/credentials
[default]
aws_access_key_id=<ACCESS_KEY>
aws_secret_access_key=<SECRET_KEY>
~/.aws/config
[default]
region=ap-south-1