I'm starting a bunch of EC2 instances using the following code:
def start_ec2_instances(self, instanceids):
    # start_instances is a client operation, not a resource method
    ec2client = boto3.client('ec2')
    response = ec2client.start_instances(InstanceIds=instanceids)
    return response
This starts the instances successfully. However, I want to use the wait_until_running method to check the status of the instances and wait until all of them are running.
It seems wait_until_running can only be called on a single Instance resource. How do I wait for a whole list of instances that have been started with boto3?
This is what I'm doing currently, but I wanted to know if there is a way to do it in one shot:
def wait_until_instance_running(self, instanceids):
    ec2 = boto3.resource('ec2')
    for instanceid in instanceids:
        instance = ec2.Instance(instanceid)
        logger.info("Check the state of instance: %s", instanceid)
        instance.wait_until_running()
    return
Use a waiter to wait until the instances are running, i.e. until a successful state is reached.
Try this:
ec2 = boto3.client('ec2')
start_ec2_instances(instanceids)
waiter = ec2.get_waiter('instance_running')
waiter.wait(InstanceIds=instanceids)
The waiter function polls every 15 seconds until a successful state is reached. An error is returned after 40 failed checks.
The EC2 client's waiter accepts a list of instance IDs, so a single wait() call covers all of them.
http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.Waiter.InstanceRunning
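If you need different polling behaviour, the same wait() call also accepts a WaiterConfig. A minimal sketch combining the start and the wait, where the 5-second delay and 120 attempts are just illustrative values:

import boto3

def start_and_wait(instanceids):
    ec2 = boto3.client('ec2')
    ec2.start_instances(InstanceIds=instanceids)
    waiter = ec2.get_waiter('instance_running')
    # Override the default 15 s delay / 40 max attempts if desired
    waiter.wait(InstanceIds=instanceids,
                WaiterConfig={'Delay': 5, 'MaxAttempts': 120})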
I am deploying a queue-processing ECS service with the EC2 launch type using CDK. Here is my stack:
from aws_cdk import core, aws_ecs_patterns, aws_ec2, aws_ecs

class EcsTestStack(core.Stack):
    def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        _container_image = aws_ecs.ContainerImage.from_asset(
            directory=".",
            file='Dockerfile.ECS_Test',
            exclude=["cdk.out"]
        )
        _scaling_steps = [{"upper": 0, "change": -1},
                          {"lower": 50, "change": +1},
                          {"lower": 100, "change": +2}]
        _vpc = aws_ec2.Vpc(self, "ecs-test-vpc-v4", max_azs=3)
        _cluster = aws_ecs.Cluster(self, "ecs-test-cluster-v4", vpc=_vpc)
        _cluster.add_capacity("ecs-autoscaling-capacity-v4",
                              instance_type=aws_ec2.InstanceType("t2.small"),
                              min_capacity=1,
                              max_capacity=3)
        self.ecs_test = aws_ecs_patterns.QueueProcessingEc2Service(
            self,
            "ECS_Test_Pattern_v4",
            cluster=_cluster,
            cpu=512,
            scaling_steps=_scaling_steps,
            memory_limit_mib=256,
            image=_container_image,
            min_scaling_capacity=1,
            max_scaling_capacity=5,
        )
The stack starts out with 1 task in the service and 1 EC2 instance. Based on my _scaling_steps, 1 new task should be added to the service when the queue has more than 50 messages, and 2 more tasks when it has more than 100.
But when I add 200 new messages to the queue, I see 1 new task added to my service and then I get this error message in the events:
service
EcsTestStackV4-ECSTestPatternv4QueueProcessingService5C84D200-c00N6R56hB0p
was unable to place a task because no container instance met all of
its requirements. The closest matching container-instance
81e5380c718c4b558567dc6cc1fb1926 has insufficient CPU units available.
I also notice that no new EC2 instances were added.
Question: how do I get more EC2 instances added to my cluster when the service scales up?
I can see 1 new task added to my service and then I get this error message in the events.
This is because a t2.small has 1024 CPU units. Your two tasks at 512 CPU units each take all of them, and there are no other instances to place the extra task on.
I also notice that no new EC2 instances were added.
You set min_capacity=1, so you have only one instance. The _scaling_steps apply to the tasks only, not to the instances in your Auto Scaling group. If you want more instances, you have to set min_capacity=2 or whatever value you need.
I guess you assumed that QueueProcessingEc2Service scales both instances and tasks. Sadly, that is not the case.
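For completeness, here is that capacity change sketched against the stack from the question; only min_capacity differs from the original:

_cluster.add_capacity("ecs-autoscaling-capacity-v4",
                      instance_type=aws_ec2.InstanceType("t2.small"),
                      min_capacity=2,  # enough instances to host the extra tasks
                      max_capacity=3)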
I am scheduling an EC2 instance shutdown every day at 8 PM using Chalice and a Lambda function.
I have configured Chalice but am not able to trigger or integrate the Python script with it:
import boto3

# Instances to be stopped
myins = ['i-043ae2fbfc26d423f','i-0df3f5ead69c6428c','i-0bac8502574c0cf1d','i-02e866c4c922f1e27','i-0f8a5591a7704f98e','i-08319c36611d11fa1','i-047fc5fc780935635']

# Stop the listed instances if they are currently running
ec2 = boto3.resource('ec2')
ec2client = boto3.client('ec2')
for instance in ec2.instances.all():
    for i in myins:
        if i == instance.id and instance.state['Name'] == "running":
            ec2client.stop_instances(InstanceIds=[i])
I want to stop the instances using Chalice.
AWS Instance Scheduler does the job that you are looking for. We have used it for several months and it works as expected. See this reference: https://aws.amazon.com/premiumsupport/knowledge-center/stop-start-instance-scheduler/
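If you would rather keep the logic in your own Chalice app, here is a minimal sketch using Chalice's scheduled-event decorator; the app name is a placeholder, and the cron expression assumes 8 PM UTC (shift the hour for your timezone):

from chalice import Chalice, Cron
import boto3

app = Chalice(app_name='ec2-scheduler')  # placeholder app name

@app.schedule(Cron(0, 20, '*', '*', '?', '*'))  # every day at 20:00 UTC
def stop_instances(event):
    myins = ['i-043ae2fbfc26d423f']  # your instance IDs here
    ec2client = boto3.client('ec2')
    ec2client.stop_instances(InstanceIds=myins)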
I have created an instance using the boto3 EC2 interface, with connection arguments including my key_id and access_key, and a new security group authorized with the following:
security_group.authorize_ingress(IpProtocol="tcp", CidrIp="0.0.0.0/0", FromPort=22, ToPort=22)
I create the instance with
instance = ec2.create_instances(ImageId='ami-5b41123e', KeyName='test_pair57', InstanceType="t2.micro", MinCount=1, MaxCount=1)
I set the program to wait in a while loop until it finds that the instance state is running. However, I still can't SSH into the public IP address it then prints out; the connection always times out. I have tried specifying the port, but that does not change anything.
Do I need to pass my new key pair's fingerprint somewhere, or is there something else I'm missing?
My security settings were incorrect. Checking them in the AWS console let me verify the settings and showed me the issue.
I needed to attach my newly created security group to the instance like so:
instance = ec2.create_instances(ImageId=image.id, KeyName='test_pair' + str(rand), InstanceType="t2.micro", MinCount=1, MaxCount=1, SecurityGroupIds=[security_group.id])
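Putting the pieces together, a minimal sketch of the whole working flow; the group name, AMI ID, and key name are illustrative placeholders:

import boto3

ec2 = boto3.resource('ec2')

# Create a security group and open inbound SSH
security_group = ec2.create_security_group(
    GroupName='ssh-only',            # placeholder name
    Description='Allow inbound SSH')
security_group.authorize_ingress(
    IpProtocol="tcp", CidrIp="0.0.0.0/0", FromPort=22, ToPort=22)

# Attach the group at launch time so port 22 is actually reachable
instances = ec2.create_instances(
    ImageId='ami-5b41123e', KeyName='test_pair57',
    InstanceType="t2.micro", MinCount=1, MaxCount=1,
    SecurityGroupIds=[security_group.id])

instance = instances[0]
instance.wait_until_running()
instance.reload()  # refresh attributes so the public IP is populated
print(instance.public_ip_address)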
Is there a way to automatically publish a message on SQS when an EC2 instance starts?
For example, could you use CloudWatch to set an alarm that fires whenever an instance starts up? All the alarms in CloudWatch appear to be tied to a specific EC2 instance, rather than to the EC2 service in general.
To better understand this question and offer a more accurate answer, further information is needed.
Are we talking about:
A new instance created and started from any AMI?
A new instance created and started from a specific AMI?
Starting an existing instance that is just in the stopped state?
Or creating a new instance inside a scaling group?
These all affect the way you would create your CloudWatch alarm.
For example, if it were an existing EC2 instance, you would use status checks as per:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-system-instance-status-check.html
Whereas if it were a brand-new EC2 instance being created, you would need to use more advanced CloudTrail log alarms, as per:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cw_create_alarms.html
However, after that point it would follow the same basic logic:
Create an alarm that triggers an SNS topic, as per:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/ConsoleAlarms.html
Have that SNS topic publish to an SQS queue, as per:
https://docs.aws.amazon.com/sns/latest/dg/SendMessageToSQS.html
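A minimal boto3 sketch of that last step, assuming the topic and queue already exist (the ARNs below are placeholders):

import boto3

sns = boto3.client('sns')

topic_arn = 'arn:aws:sns:us-east-1:123456789012:instance-start'   # placeholder
queue_arn = 'arn:aws:sqs:us-east-1:123456789012:instance-events'  # placeholder

# Subscribe the queue to the topic so alarm notifications land in SQS
sns.subscribe(TopicArn=topic_arn, Protocol='sqs', Endpoint=queue_arn)

Note that in practice the queue also needs an access policy allowing the topic to send to it; the SNS documentation linked above covers that.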
As always, though, there are many ways to skin a cat.
If it were a critical event and I wanted the same response to every start event, I would personally consider bootstrap scripts pushed out from Puppet or Chef; that way a fast change for all events is centralised in a single script.
Step 1: Create a CloudWatch Events rule to notify on instance creation. In the EC2 instance lifecycle, when the launch button is pressed the instance goes from the pending state to the running state, and the same transition happens when a stopped instance is started. So create the rule for the pending state.
Create the CloudWatch rule as specified in the screenshot (not reproduced here).
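If you prefer to create the rule programmatically rather than in the console, a minimal boto3 sketch (the rule name is a placeholder):

import json
import boto3

events = boto3.client('events')

# Rule matching EC2 instances entering the "pending" state
events.put_rule(
    Name='Ec2-Creation-Alert-Rule',  # placeholder name
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
        "detail": {"state": ["pending"]}
    }),
    State='ENABLED')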
Step 2: Create a Step Function. CloudTrail logs all the events in the account with a delay of at least 20 minutes, so the state machine waits before querying the log. This step is useful if you want the name of the user who created the instance.
{
    "StartAt": "Wait",
    "States": {
        "Wait": {
            "Type": "Wait",
            "Seconds": 1800,
            "Next": "Ec2-Alert"
        },
        "Ec2-Alert": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:ap-south-1:321039853697:function:EC2-Creation-Alert",
            "End": true
        }
    }
}
Step 3: Create an SNS topic for the notification.
Step 4: Write a Lambda function to fetch the log from CloudTrail and get the username of whoever created the instance.
import boto3

def lambda_handler(event, context):
    cloudtrail = boto3.client('cloudtrail')
    sns = boto3.client('sns')
    instance_id = event["detail"]["instance-id"]

    # Look up the most recent CloudTrail event for this instance
    response = cloudtrail.lookup_events(
        LookupAttributes=[
            {
                'AttributeKey': 'ResourceName',
                'AttributeValue': instance_id
            },
        ],
        MaxResults=1)

    # Crude string parsing to pull the username out of the event record
    st = "".join(str(x) for x in response['Events'])
    creator = st.split("Username")[1].split(",")[0]

    email = ("Hi all,\n\n\nThe user %s has created a new EC2 instance in the QA "
             "account and the instance id is %s\n\n\nThank you\n\n\nRegards, Lambda"
             % (creator, instance_id))
    sns.publish(
        TopicArn='arn:aws:sns:ap-south-1:321039853697:Ec2-Creation-Alert',
        Message=email
    )
    return {
        'statusCode': 200,
    }
Note: this code triggers a notification both when an instance moves from the stopped state to running and when a new instance is launched.
Does anyone know how, with time-based instances in OpsWorks, to terminate rather than stop an instance every time it is scheduled to stop?
I'm trying to terminate it via a recipe, but I can't get it to work.
It's easy enough to use the OpsWorks API to do this directly:
require 'aws-sdk-core'

opsworks = Aws::OpsWorks::Client.new(region: 'us-east-1')

# Stop the instance and poll until it reaches the 'stopped' state
opsworks.stop_instance(instance_id: instance_id)
begin
  sleep 5
end until opsworks.describe_instances(instance_ids: [instance_id]).
          instances.first.status == 'stopped'

# Once stopped, the instance can be deleted
opsworks.delete_instance(instance_id: instance_id)
Alternatively, you could disable termination protection on the instance and terminate it directly via the EC2 API.
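A minimal boto3 sketch of that alternative, assuming you know the underlying EC2 instance ID (the ID below is a placeholder):

import boto3

ec2 = boto3.client('ec2')

# Turn off termination protection, then terminate the instance
ec2.modify_instance_attribute(InstanceId='i-0123456789abcdef0',
                              DisableApiTermination={'Value': False})
ec2.terminate_instances(InstanceIds=['i-0123456789abcdef0'])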