Does anyone know how, with time-based instances, to terminate rather than stop the instance each time it goes to stop?
I'm trying to terminate it via a recipe, but I can't get it to work.
It's easy enough to use the OpsWorks API to do this directly:
require 'aws-sdk-core'

opsworks = Aws::OpsWorks::Client.new(region: 'us-east-1')

# Stop the instance and poll until it reaches the 'stopped' state,
# since an OpsWorks instance can only be deleted once it is stopped.
opsworks.stop_instance(instance_id: instance_id)
begin
  sleep 5
end until (opsworks.describe_instances(instance_ids: [instance_id]).
           instances.first.status == 'stopped')

# Remove the instance from the OpsWorks stack.
opsworks.delete_instance(instance_id: instance_id)
You could also terminate it by disabling termination protection on the instance and terminating it directly via the EC2 API.
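A minimal boto3 sketch of that approach, assuming instance_id holds the underlying EC2 instance ID (not the OpsWorks ID) and that termination protection may have been enabled:

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Turn off termination protection in case it was enabled for the instance.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    DisableApiTermination={'Value': False}
)

# Terminate the instance directly through the EC2 API.
ec2.terminate_instances(InstanceIds=[instance_id])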
I need to terminate an instance that is in Standby. How can it be done?
I terminated the instance using its instance_id; the instance got terminated but it still shows in the console as a Standby node.
How can I terminate the node and move it out of Standby without it receiving any requests?
I had the same issue. After terminating the instance, go to the Auto Scaling group -> select that instance -> Actions -> Set to InService.
It gets removed automatically from the list.
Terminating an instance while it is in Standby mode in an Auto Scaling group is not the preferred way. If you manually terminate an instance that is in Standby, it will still show up in the Auto Scaling group. If you want to terminate it, you must first place the instance back in service. After that you can detach the instance and terminate or stop it as required.
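A rough boto3 sketch of that sequence; the group name and instance ID below are placeholders:

import boto3

autoscaling = boto3.client('autoscaling')
ec2 = boto3.client('ec2')

# 1. Move the instance from Standby back to InService.
autoscaling.exit_standby(
    AutoScalingGroupName='my-asg',           # placeholder group name
    InstanceIds=['i-0123456789abcdef0']      # placeholder instance ID
)

# 2. Detach it from the Auto Scaling group so the group stops managing it.
autoscaling.detach_instances(
    AutoScalingGroupName='my-asg',
    InstanceIds=['i-0123456789abcdef0'],
    ShouldDecrementDesiredCapacity=True
)

# 3. Now terminate (or stop) it as required.
ec2.terminate_instances(InstanceIds=['i-0123456789abcdef0'])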
You can also remove the instance while it is in Standby using the Auto Scaling CLI/API action terminate-instance-in-auto-scaling-group, as per the options below.
$ aws autoscaling terminate-instance-in-auto-scaling-group
--instance-id <value>
--should-decrement-desired-capacity | --no-should-decrement-desired-capacity
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]
Docs: https://docs.aws.amazon.com/cli/latest/reference/autoscaling/terminate-instance-in-auto-scaling-group.html
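If you are doing this from code rather than the CLI, the corresponding boto3 call is terminate_instance_in_auto_scaling_group; a short sketch with a placeholder instance ID:

import boto3

autoscaling = boto3.client('autoscaling')

# Terminate the instance and decrement the desired capacity so the group
# does not launch a replacement.
autoscaling.terminate_instance_in_auto_scaling_group(
    InstanceId='i-0123456789abcdef0',        # placeholder instance ID
    ShouldDecrementDesiredCapacity=True
)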
I'm starting a bunch of EC2 instances using the following code:
import boto3

def start_ec2_instances(self, instanceids):
    # start_instances is a client-level call, so use an EC2 client here.
    ec2client = boto3.client('ec2')
    response = ec2client.start_instances(InstanceIds=instanceids)
    return
Now it starts successfully. However, I want to use the wait_until_running method to check the status of the instances and wait until all of them are running.
Can wait_until_running only be issued on a single instance? How do I wait for a list of instances that have been started using boto3?
This is what I'm doing currently, but I wanted to know if there are other ways to do it in one shot:
def wait_until_instance_running(self, instanceids):
    ec2 = boto3.resource('ec2')
    for instanceid in instanceids:
        instance = ec2.Instance(instanceid)
        logger.info("Check the state of instance: %s", instanceid)
        instance.wait_until_running()
    return
Use a waiter: wait until the instance is running, or wait until a successful state is reached. Try this:
ec2 = boto3.client('ec2')
start_ec2_instances(instanceids)
waiter = ec2.get_waiter('instance_running')
waiter.wait(InstanceIds=instanceids)
The waiter function polls every 15 seconds until a successful state is reached. An error is returned after 40 failed checks.
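If the default 15-second delay and 40 attempts don't suit you, the polling can be tuned through the optional WaiterConfig argument:

import boto3

ec2 = boto3.client('ec2')
waiter = ec2.get_waiter('instance_running')

# Poll every 10 seconds and give up after 60 attempts instead of the defaults.
waiter.wait(
    InstanceIds=instanceids,
    WaiterConfig={'Delay': 10, 'MaxAttempts': 60}
)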
You can use the EC2 client's Waiter.wait call to pass an array of EC2 instance IDs.
http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.Waiter.InstanceRunning
This example: https://aws.amazon.com/premiumsupport/knowledge-center/stop-start-ec2-instances/
does not seem to work. I followed the example and the pipeline is always canceled. There are no logs created, even though I did set up logging. The only "error message" I could find is:
Error Message: Unable to create resource for #Ec2Instance_2017-06-07T09:58:49 due to: No subnets found for the default VPC 'vpc-f7dxxxx'. Please specify a subnet. (Service: AmazonEC2; Status Code: 400; Error Code: MissingInput; Request ID: ebeeae6d-9537-4627-8a56-e832999a1940)
All I am trying to do is execute an aws ec2 start-instances AWS CLI command as outlined in the example. The instances do exist and they are in a "stopped" state. Has anyone been successful in setting up a pipeline to start and stop existing instances? How did you do it? Thanks for the help.
Yes, that was it. After you finish going through the example, you need to look at the pipeline and edit it: look for the EC2Resource area, click on it, then add a subnet. Place the micro instance in the same subnet as the EC2 instances you need to start or stop. The example does not address this.
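If you prefer fixing the definition through the API instead of the console, the Ec2Resource object accepts a subnetId field. A rough boto3 sketch of what that object could look like; the pipeline ID, object names, instance type, and subnet ID are placeholders, and you would push it together with the rest of your pipeline objects:

import boto3

datapipeline = boto3.client('datapipeline')

# Ec2Resource object with an explicit subnet (all values are placeholders).
ec2_resource = {
    'id': 'Ec2Instance',
    'name': 'Ec2Instance',
    'fields': [
        {'key': 'type', 'stringValue': 'Ec2Resource'},
        {'key': 'instanceType', 'stringValue': 't2.micro'},
        {'key': 'subnetId', 'stringValue': 'subnet-0123456789abcdef0'},
        {'key': 'terminateAfter', 'stringValue': '30 Minutes'},
    ],
}

# put_pipeline_definition replaces the whole definition, so include every
# object in your pipeline here, not just the Ec2Resource.
datapipeline.put_pipeline_definition(
    pipelineId='df-0123456789ABCDEFGHIJ',    # placeholder pipeline ID
    pipelineObjects=[ec2_resource]
)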
How can I dynamically capture the EC2 instance name on which my Chef recipe is running?
@coderanger I am using the code below:
Ohai.plugin(:EC2) do
  provides "ec2"
  depends "ec2"

  collect_data do
    instance_id = ec2['instance_id']
  end
end
How do I print the instance ID here?
Assuming you mean the EC2 instance ID, you can find it in node['ec2']['instance_id'] if the EC2 ohai plugin has been activated. If the instance is created via knife ec2 server create this is done automatically for you, and there is an imperfect auto-enable that tries to guess if you're on EC2. If neither of these are the case, you can force it by creating an empty file in /etc/chef/ohai/hints/ec2.json.
I am trying to create a Spark cluster on EC2 with the following command
(I am referring to the Apache documentation):
./spark-ec2 --key-pair=spark-cluster --identity-file=/Users/abc/spark-cluster.pem --slaves=3 --region=us-west-1 --zone=us-west-1c --vpc-id=vpc-2e44594 --subnet-id=subnet-18447841 --spark-version=1.6.1 launch spark-cluster
Once I fire the above command, the master and slaves get created, but once the process reaches the 'SSH-ready' state it keeps waiting for a password.
Below is the trace. I have referred to the official Apache documentation and many other documents/videos, and none of those cluster creations asked for a password. I'm not sure whether I am missing something; any pointer on this issue is much appreciated.
Creating security group spark-cluster-master
Creating security group spark-cluster-slaves
Searching for existing cluster spark-cluster in region us-west-1...
Spark AMI: ami-1a250d3e
Launching instances...
Launched 3 slaves in us-west-1c, regid = r-32249df4
Launched master in us-west-1c, regid = r-5r426bar
Waiting for AWS to propagate instance metadata...
Waiting for cluster to enter 'ssh-ready' state..........Password:
Modified the spark-ec2.py script to include the proxy and enabled the AWS NAT to allow the outbound calls.