Why does my Python for loop stop at the first instance? - python-2.7

I'm updating a Python function that gets a list of RDS instances with the 'Backup' tag set to "true", but for some reason the for loop stops at the first instance.
We have 10 RDS instances.
Could someone please tell me what I'm doing wrong here?
Here's the code:
def get_instances(region):
    rds = boto3.client('rds', region)
    instances_to_snapshot = []
    response = rds.describe_db_instances()
    aurora = "aurora"
    oracle = "oracle-se1"
    aurora_list = []
    instances = response['DBInstances']
    for instance in instances:
        engine = instance['Engine']
        if get_rds_instance_tag(region, instance['DBInstanceIdentifier'], 'Backup'):
            print "This instance - %s has engine - %s " % (instance['DBInstanceIdentifier'], engine)
            instances_to_snapshot.append(instance['DBInstanceIdentifier'])
    return instances_to_snapshot
Thank you

Found it! The issue was the 'Timeout' setting in Lambda - it was set to 3 seconds. No wonder the function didn't work correctly.
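For anyone managing the function from code rather than the console, the timeout can be raised with a call like the one below (a minimal sketch; the function name is a placeholder):
import boto3

lambda_client = boto3.client('lambda')

# Raise the function timeout from the default 3 seconds to 60 seconds
lambda_client.update_function_configuration(
    FunctionName='rds-snapshot-function',  # placeholder name
    Timeout=60,
)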

Related

How to start an EC2 instance through Apache Guacamole?

In my project, some EC2 instances will be shut down. These instances will only be connected to when a user needs to work.
Users will access the instances using a clientless remote desktop gateway called Apache Guacamole.
If an instance is stopped, how can it be started through Apache Guacamole?
Guacamole is, essentially, an RDP/VNC/SSH client, and I don't think you can get the instances to start up by themselves, since there is no wake-on-LAN feature or anything like it out-of-the-box.
I used to have a similar issue; we always had one instance up and running and used it to run the AWS CLI to start up the instances we wanted.
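The equivalent call from a small boto3 script on that always-on instance would look roughly like this (a sketch; the region and instance ID are placeholders):
import boto3

ec2 = boto3.client('ec2', region_name='eu-central-1')  # placeholder region

# Start the stopped instance(s) the user wants to connect to
ec2.start_instances(InstanceIds=['i-0123456789abcdef0'])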
Alternatively, you could modify the calls from Guacamole to invoke a Lambda function that checks whether the instance you wish to connect to is running and starts it if not; but then you'd have to deal with the timeout for starting a session from Guacamole (I'm not sure whether this is configurable from the web admin console or the config files), or set up another way of getting feedback for when your instance becomes available.
There was a discussion on the Guacamole mailing list regarding a Wake-on-LAN feature, and one approach was proposed. It is based on a script that monitors connection attempts and launches instances when needed.
Although it is more of a workaround, maybe it will be helpful for you. For a proper solution, it would be possible to develop an extension.
You may find the discussion and a link to the script here:
http://apache-guacamole-general-user-mailing-list.2363388.n4.nabble.com/guacamole-and-wake-on-LAN-td7526.html
http://apache-guacamole-general-user-mailing-list.2363388.n4.nabble.com/Wake-on-lan-function-working-td2832.html
There is unfortunately not a very simple solution. The Lambda approach is the way we solved it.
Guacamole has a feature that logs accesses to Cloudwatch Logs.
Next we need the connection_id and the username/id as tags on the instance. We automatically assign these tags with our back-end tool when starting the instances.
Now when a user connects to a machine, a log is written to Cloudwatch Logs.
A filter is applied to only get login attempts and trigger Lambda.
The triggered Lambda script checks whether there is a stopped instance whose tags match the current connection attempt, plus other constraints, for example whether the instance has expired.
If yes, then the instance gets started, and in roughly 40 seconds the user is able to connect.
The Lambda scripts look like this:
# receive information from the CloudWatch event, parse it, call function to start instances
import re
import boto3
import datetime
from conn_inc import *
from start_instance import *

def lambda_handler(event, context):
    # Variables
    region = "eu-central-1"
    cw_region = "eu-central-1"
    # Clients
    ec2Client = boto3.client('ec2')
    # Session
    session = boto3.Session(region_name=region)
    # Resource
    ec2 = session.resource('ec2', region)

    print(event)
    # print("awsdata: ", event['awslogs']['data'])

    userdata = {}
    userdata = get_userdata(event['awslogs']['data'])
    print("logDataUserName: ", userdata["logDataUserName"], "connection_ids: ", userdata["startConnectionId"])

    start_instance(ec2, ec2Client, userdata["logDataUserName"], userdata["startConnectionId"])
import boto3
import datetime
from datetime import date
import gzip
import json
import base64
from start_related_instances import *

def start_instance(ec2, ec2Client, logDataUserName, startConnectionId):
    # Boto 3
    # Use the filter() method of the instances collection to retrieve
    # all stopped EC2 instances which have the tag connection_ids.
    instances = ec2.instances.filter(
        Filters=[
            {
                'Name': 'instance-state-name',
                'Values': ['stopped'],
            },
            {
                'Name': 'tag:connection_ids',
                'Values': [f"*{startConnectionId}*"],
            },
        ]
    )
    # print("instances: ", list(instances))

    # check if instances are found
    if len(list(instances)) == 0:
        print("No instances with connectionId ", startConnectionId, " found that is stopped.")
    else:
        for instance in instances:
            print(instance.id, instance.instance_type)
            expire = ""
            connectionName = ""
            for tag in instance.tags:
                if tag["Key"] == 'expire':  # get expiration date
                    expire = tag["Value"]
            if expire == "":
                print("Start instance: ", instance.id, ", no expire found")
                ec2Client.start_instances(
                    InstanceIds=[instance.id]
                )
            else:
                print("Check if instance already expired.")
                splitDate = expire.split(".")
                expire = datetime.datetime(int(splitDate[2]), int(splitDate[1]), int(splitDate[0]))
                args = date.today().timetuple()[:6]
                today = datetime.datetime(*args)
                if expire >= today:
                    print("Instance is not yet expired.")
                    print("Start instance: ", instance.id, "expire: ", expire, ", today: ", today)
                    ec2Client.start_instances(
                        InstanceIds=[instance.id]
                    )
                else:
                    print("Instance not started, because it already expired: ", instance.id, "expiration: ", f"{expire}", "today:", f"{today}")
def get_userdata(cw_data):
    compressed_payload = base64.b64decode(cw_data)
    uncompressed_payload = gzip.decompress(compressed_payload)
    payload = json.loads(uncompressed_payload)

    message = ""
    log_events = payload['logEvents']
    for log_event in log_events:
        message = log_event['message']
        # print(f'LogEvent: {log_event}')
        # regex = r"\'.*?\'"
        # m = re.search(str(regex), str(message), re.DOTALL)

    logDataUserName = message.split('"')[1]    # get the username of the user logged into Guacamole, e.g. "Adm_EKoester_1134faD"
    startConnectionId = message.split('"')[3]  # get the connection id of the connection which should be started

    # create dict
    dict = {}
    dict["connected"] = False
    dict["disconnected"] = False
    dict["error"] = True
    dict["guacamole"] = payload["logStream"]
    dict["logDataUserName"] = logDataUserName
    dict["startConnectionId"] = startConnectionId

    # check for connected or disconnected
    ind_connected = message.find("connected to connection")
    ind_disconnected = message.find("disconnected from connection")
    # print("ind_connected: ", ind_connected)
    # print("ind_disconnected: ", ind_disconnected)

    if ind_connected > 0 and not ind_disconnected > 0:
        dict["connected"] = True
        dict["error"] = False
    elif ind_disconnected > 0 and not ind_connected > 0:
        dict["disconnected"] = True
        dict["error"] = False

    return dict
The CloudWatch Logs trigger for the Lambda is a subscription filter on the log group that receives the Guacamole logs.
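A minimal sketch of creating that trigger with boto3 (the log group name, filter name, function name, and ARN below are placeholders):
import boto3

region = 'eu-central-1'
logs = boto3.client('logs', region_name=region)
lambda_client = boto3.client('lambda', region_name=region)

# Allow CloudWatch Logs to invoke the function (one-time setup)
lambda_client.add_permission(
    FunctionName='start-instance-on-connect',  # placeholder function name
    StatementId='cloudwatch-logs-invoke',
    Action='lambda:InvokeFunction',
    Principal='logs.eu-central-1.amazonaws.com',
)

# Subscribe the Lambda to the Guacamole log group, filtering for connection attempts
logs.put_subscription_filter(
    logGroupName='/guacamole/connections',  # placeholder log group
    filterName='guacamole-connection-attempts',
    filterPattern='"connected to connection"',
    destinationArn='arn:aws:lambda:eu-central-1:123456789012:function:start-instance-on-connect',
)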

How can I get an AWS launch config from an AMI ID?

I have made a script that cleans up AMI IDs based on which ones are not used by running instances.
But I also want to add a delete feature to this script, to clean up launch configurations whose AMI ID no longer exists.
good_images = set([instance.image_id for instance in ec2.instances.all()])

# LaunchConfig in-use AMIs
client = boto3.client('autoscaling', region_name=region)
response = client.describe_launch_configurations()
ls_list = []
for LC in response['LaunchConfigurations']:
    (LC['ImageId'])
print ls_list
But it's not working.
Your code:
for LC in response['LaunchConfigurations']:
    (LC['ImageId'])
should be:
for LC in response['LaunchConfigurations']:
    ls_list.append(LC['ImageId'])
used_lc = []
all_lc = []

def used_launch_config():
    for asg in client.describe_auto_scaling_groups()['AutoScalingGroups']:
        launch_config_attached_with_asg = asg['LaunchConfigurationName']
        used_lc.append(launch_config_attached_with_asg)

used_launch_config()
print used_lc

def all_spot_lc():
    for launch_config in client.describe_launch_configurations(MaxRecords=100,)['LaunchConfigurations']:
        lc = launch_config['LaunchConfigurationName']
        if str(lc).startswith("string"):
            all_lc.append(lc)

all_spot_lc()
print all_lc
I just avoided deleting launch configs by AMI ID. Instead I compared used vs. unused launch configs, which solved the problem.
I was doing it wrong in the previous code.
Is there a way to increase max records?
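MaxRecords for describe_launch_configurations is capped (100 at the time of writing), so going beyond that means following the pagination token. A minimal sketch using a boto3 paginator, which follows NextToken automatically (the region name is a placeholder):
import boto3

client = boto3.client('autoscaling', region_name='us-east-1')  # placeholder region

# Collect every launch configuration name across all pages
all_lc = []
paginator = client.get_paginator('describe_launch_configurations')
for page in paginator.paginate():
    for launch_config in page['LaunchConfigurations']:
        all_lc.append(launch_config['LaunchConfigurationName'])

print(all_lc)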

AWS: How to make sure that an Instance Profile is initialized and propagated when starting the EC2 server

I am starting EC2 servers with an instance profile.
The problem is that the profile sometimes is not "there" after creating it, even if I wait about 10 seconds:
# Create new Instance Profile
instanceProfile = self.iam.create_instance_profile(InstanceProfileName=instProfName)
instanceProfile.add_role(RoleName="...")
time.sleep(10)

# Create the Instance
instances = self.ec2.create_instances(
    # ...
    IamInstanceProfile={
        "Name": instanceProfile.instance_profile_name
    }
)
Is there a way to wait for it to be propagated?
My first attempt is:
error = 30
dryRun = True
while error > 0:
    try:
        # Create the Instance
        instances = self.ec2.create_instances(
            DryRun=dryRun,
            # ...
            IamInstanceProfile={
                "Name": instanceProfile.instance_profile_name
            }
        )
        if not dryRun:
            break
        dryRun = False
    except botocore.exceptions.ClientError as e:
        error = error - 1
But how do I catch only the IAM profile error?
Much of AWS works using "eventual consistency". That means that after you make a change, it will take some time to propagate through the system.
After you create the instance profile:
Delay by 5 or 10 seconds (or some other time you're comfortable with),
Call iam.get_instance_profile with your instance profile name.
Repeat delay and check until get_instance_profile returns your information.
Another thing you can do is catch the "instance profile not found error" during the ec2.create_instances call, and delay and repeat it if you get that error.
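A minimal sketch of that polling approach with the boto3 IAM client (the delay and attempt counts are arbitrary; instProfName is the profile name from the question):
import time
import boto3

iam_client = boto3.client('iam')

def wait_for_instance_profile(profile_name, delay=5, max_attempts=12):
    # Poll until IAM reports the instance profile, or give up
    for _ in range(max_attempts):
        try:
            iam_client.get_instance_profile(InstanceProfileName=profile_name)
            return True
        except iam_client.exceptions.NoSuchEntityException:
            time.sleep(delay)
    return False

if not wait_for_instance_profile(instProfName):
    raise RuntimeError("Instance profile %s never became visible" % instProfName)
If I remember correctly, boto3 also ships an 'instance_profile_exists' waiter on the IAM client that does essentially the same thing.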

AWS boto v2.32.0 - List tags for an ASG

I am trying to use boto v2.32.0 to list the tags on a particular ASG.
Something simple like this is obviously not working (especially with the lack of a filter system):
import boto.ec2.autoscale
asg = boto.ec2.autoscale.connect_to_region('ap-southeast-2')
tags = asg.get_all_tags('asgname')
print tags
or:
asg = boto.ec2.autoscale.connect_to_region('ap-southeast-2')
group = asg.get_all_groups(names='asgname')
tags = asg.get_all_tags(group)
print tags
or:
asg = boto.ec2.autoscale.connect_to_region('ap-southeast-2')
group = asg.get_all_groups(names='asgname')
tags = group.get_all_tags()
print tags
Without specifying an 'asgname', it's not returning every ASG. Despite what the documentation says about returning a token to see the next page, it doesn't seem to be implemented correctly - especially when you have a large number of ASGs and tags per ASG.
Trying something like this has basically shown me that the token system appears to be broken: it is not "looping" through all ASGs and tags before it returns "None":
asg = boto.ec2.autoscale.connect_to_region('ap-southeast-2')
nt = None
while True:
    tags = asg.get_all_tags(next_token=nt)
    for t in tags:
        if t.key == "MyTag":
            print t.resource_id
            print t.value
    if tags.next_token == None:
        break
    else:
        nt = str(tags.next_token)
Has anyone managed to achieve this?
Thanks
This functionality is available in AWS using the AutoScaling DescribeTags API call, but unfortunately boto does not completely implement this call.
You should be able to pass a Filter with that API call to only get the tags for a specific ASG, but if you have a look at the boto source code for get_all_tags() (v2.32.1), the filter is not implemented:
:type filters: dict
:param filters: The value of the filter type used
to identify the tags to be returned. NOT IMPLEMENTED YET.
(quote from the source code mentioned above).
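For reference, the underlying AutoScaling DescribeTags call does accept that filter; if switching to boto3 is an option, a minimal sketch looks like this (the ASG name is a placeholder):
import boto3

client = boto3.client('autoscaling', region_name='ap-southeast-2')

# DescribeTags with the auto-scaling-group filter that boto 2's get_all_tags() does not expose
response = client.describe_tags(
    Filters=[{'Name': 'auto-scaling-group', 'Values': ['asgname']}]
)
for tag in response['Tags']:
    print(tag['Key'], tag['Value'], tag['ResourceId'])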
I eventually answered my own question by creating a workaround using the AWS CLI. Since there has been no activity on this question since the day I asked it, I am posting this workaround as a solution.
import os
import json
## bash command
awscli = "/usr/local/bin/aws autoscaling describe-tags --filters Name=auto-scaling-group,Values=" + str(asgname)
output = str()
# run it
cmd = os.popen(awscli,"r")
while 1:
    # get tag lines
    lines = cmd.readline()
    if not lines:
        break
    output += lines
# json.load to manipulate
tags = json.loads(output.replace('\n',''))

How to poll in AWS SDK Java?

I am new to the AWS SDK for Java. I am trying to write code to control instances and to get all the EC2 information from AWS.
I am able to start an instance and also stop it. But, as you all must be aware, it takes some time for an instance to start, so I want to wait there (I don't want to use Thread.sleep) until it's up; likewise, when I'm stopping an instance, it should wait until it is stopped before I proceed to the next step.
Here's the code:
AmazonEC2 ec2 = new AmazonEC2Client(credentialsProvider);
DescribeInstancesResult describeInstancesRequest = ec2.describeInstances();
List<Reservation> reservations = describeInstancesRequest.getReservations();
Set<Instance> instances = new HashSet<Instance>();
for (Reservation reservation : reservations) {
    instances.addAll(reservation.getInstances());
}
for (Instance instance : instances) {
    if ((instance.getInstanceId().equals("myimage"))) {
        List<String> instancesToStart = new ArrayList<String>();
        instancesToStart.add(instance.getInstanceId());
        StartInstancesRequest startr = new StartInstancesRequest();
        startr.setInstanceIds(instancesToStart);
        ec2.startInstances(startr);
        Thread.currentThread().sleep(60 * 1000);
    }
    if ((instat.getName()).equals("running")) {
        List<String> instancesToStop = new ArrayList<String>();
        instancesToStop.add(instance.getInstanceId());
        StopInstancesRequest stoptr = new StopInstancesRequest();
        stoptr.setInstanceIds(instancesToStop);
        ec2.stopInstances(stoptr);
    }
}
Also, I'd like to mention that whenever I try to get the list of images, it hangs in the code below.
DescribeImagesResult describeImagesResult = ec2.describeImages();
You can describe the instance again every time you want to see its updated state, using the same instance id that you got previously from describeInstances:
DescribeInstancesRequest statusRequest = new DescribeInstancesRequest().withInstanceIds(instanceId);
Instance refreshed = ec2.describeInstances(statusRequest).getReservations().get(0).getInstances().get(0);
Then you can read the updated state with something like this:
String state = refreshed.getState().getName();
I think the key here is saving the "instance id" of the instance that you care about.
boto in Python has a nice method instance.update() that can be called on an instance to refresh its status, but I can't find an equivalent in Java.
Hope this helps.
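For comparison, the boto pattern mentioned above looks roughly like this (a sketch, assuming boto 2, valid credentials, and placeholder region/instance id):
import time
import boto.ec2

conn = boto.ec2.connect_to_region('ap-southeast-2')  # placeholder region
reservations = conn.get_all_instances(instance_ids=['i-0123456789abcdef0'])
instance = reservations[0].instances[0]

conn.start_instances(instance_ids=[instance.id])
while instance.state != 'running':
    time.sleep(5)
    instance.update()  # refreshes the cached state from the API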