It seems that I have a technical issue with terminating an EC2 instance.
When I terminate it, a new instance is created with the same name.
I terminated an EC2 instance. After a refresh, another instance was running, so I started terminating that one too.
After yet another refresh of the page, a new instance started to run.
Do you have any tips regarding this?
This instance is most likely part of an Auto Scaling group that continuously tries to meet its minimum capacity requirement. Either delete that Auto Scaling group first and then terminate the instance, or update the group's Min capacity (and Desired capacity) to 0, after which the instance will be terminated automatically.
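A minimal boto3 sketch of the second option, assuming a placeholder group name `my-asg`:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# With MinSize and DesiredCapacity at 0, the Auto Scaling group terminates
# its instances instead of replacing them.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",  # placeholder: use your group's name
    MinSize=0,
    DesiredCapacity=0,
)
```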
Related
Good morning,
I had a free tier EC2 instance, and now I want to stop and delete it. The problem is that when I try to terminate the running EC2 instance, AWS terminates it but creates a new one (identical to the terminated one). How can I fully terminate and delete an AWS EC2 instance?
As you can see, it should delete on termination:
The "Delete on Termination" flag is for the volume attached to the instance which indicates whether you want to keep the storage after terminating your ec2-instance.
The only way I think of here is where the ec2-server is attached to auto-scaling group with min =1, you need to check the auto-scaling group and decrease the min to : 0
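A hedged boto3 sketch for finding which Auto Scaling group (if any) manages a given instance; the instance ID is a placeholder:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Look up the Auto Scaling group, if any, that manages this instance.
resp = autoscaling.describe_auto_scaling_instances(
    InstanceIds=["i-0123456789abcdef0"]  # placeholder instance ID
)

if not resp["AutoScalingInstances"]:
    print("Instance is not managed by any Auto Scaling group")

for detail in resp["AutoScalingInstances"]:
    print(detail["InstanceId"], "belongs to", detail["AutoScalingGroupName"])
```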
No Code here.
I am building a solution on EC2 using Auto Scaling.
I have created a setup where an instance is detached from the Auto Scaling group before it is stopped, and it is added back to the Auto Scaling group when it is in use again.
Now, what if the instance that I detached from the Auto Scaling group has been terminated? I am left with nothing to attach to the Auto Scaling group (that particular instance ID does not exist anymore).
How do I handle this? If the instance has been terminated, then at attachment time Auto Scaling should know that the instance does not exist anymore and create a new instance.
From here I am planning to create an EventBridge rule that will take the instance ID of the new instance and update it in SSM.
I think you might want to use standby instead of detaching/attaching. Any time an instance exits standby, the desired capacity is incremented by 1. No health checks happen on the instance while it's in standby, so if the instance is terminated while in standby, the ASG won't know about it until you remove the instance from standby.
Since the desired capacity has already been increased at that point, a new instance will be launched to replace the old one as part of a health check replacement.
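A minimal boto3 sketch of that standby flow under these assumptions; the group name and instance ID are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

ASG_NAME = "my-asg"                  # placeholder group name
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder instance ID

# Move the instance to Standby before stopping it. Decrementing the desired
# capacity keeps the ASG from launching a replacement in the meantime.
autoscaling.enter_standby(
    InstanceIds=[INSTANCE_ID],
    AutoScalingGroupName=ASG_NAME,
    ShouldDecrementDesiredCapacity=True,
)

# Later, bring it back into service. Exiting standby raises the desired
# capacity again; if the instance was terminated while in standby, the
# ASG's health check will launch a replacement.
autoscaling.exit_standby(
    InstanceIds=[INSTANCE_ID],
    AutoScalingGroupName=ASG_NAME,
)
```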
I followed the instructions below to get an AWS Ethereum instance running. However, since I am just learning blockchain, I would like to create an image and start/stop it as needed. But when I go to EC2 and stop my instance, it restarts. I saw other posts saying this can be caused by Elastic Beanstalk, but when I go to Elastic Beanstalk, I don't see anything there. What else could be causing it to restart?
Thanks!
https://docs.aws.amazon.com/blockchain-templates/latest/developerguide/blockchain-templates-getting-started.html
If you check the source code of one of its nested stacks, ethereum-autoscalegroup.template.yaml, you can see that it actually creates the instances in an Auto Scaling group (ASG).
Instances in an ASG can't be stopped. However, you can terminate them by setting your ASG's desired capacity and minimum capacity to 0. Then, when you need instances again, you can change the desired capacity back to 1.
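A small boto3 sketch of that scale-down/scale-up pattern; the group name is a placeholder for whatever the template created:

```python
import boto3

autoscaling = boto3.client("autoscaling")
ASG_NAME = "my-ethereum-asg"  # placeholder: the ASG created by the nested stack

def scale_down():
    # ASG instances can't be stopped, so this terminates them instead.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=ASG_NAME, MinSize=0, DesiredCapacity=0
    )

def scale_up():
    # Launches a fresh instance when you want to work with the node again.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=ASG_NAME, DesiredCapacity=1
    )
```

Note that scaling back up launches a brand-new instance, so any state not baked into the image is lost.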
I'm having an issue with AWS boxes (EC2) where I terminate the box and it re-spawns. To give context, there is no Auto Scaling group. Is there anywhere I can look for some config that might be triggering the launch?
I would make sure you don't have a persistent Spot request active in your account, and also check whether you perhaps installed the AWS Instance Scheduler; either or both of those could be starting instances on your behalf. (Double-check the Auto Scaling groups as well; that is the most obvious reason, though.)
If you terminate a running Spot Instance that was launched by a persistent Spot request, the Spot request returns to the open state so that a new Spot Instance can be launched. To cancel a persistent Spot request and terminate its Spot Instances, you must cancel the Spot request first and then terminate the Spot Instances. Otherwise, the persistent Spot request can launch a new instance.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html#terminating-a-spot-instance
https://aws.amazon.com/solutions/instance-scheduler/
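A hedged boto3 sketch of the cancel-first-then-terminate order described above; it assumes you want to clean up all persistent Spot requests in the account:

```python
import boto3

ec2 = boto3.client("ec2")

# Find persistent Spot requests that are still open or active.
resp = ec2.describe_spot_instance_requests(
    Filters=[
        {"Name": "state", "Values": ["open", "active"]},
        {"Name": "type", "Values": ["persistent"]},
    ]
)
requests = resp["SpotInstanceRequests"]

request_ids = [r["SpotInstanceRequestId"] for r in requests]
instance_ids = [r["InstanceId"] for r in requests if "InstanceId" in r]

if request_ids:
    # Cancel the requests first so they can no longer relaunch instances...
    ec2.cancel_spot_instance_requests(SpotInstanceRequestIds=request_ids)

if instance_ids:
    # ...then terminate the instances they launched.
    ec2.terminate_instances(InstanceIds=instance_ids)
```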
So I found out the culprit; maybe this can help more people. Apparently there is an AWS service called OpsWorks that automates tools like Chef or Puppet, which my company had configured some time ago. It was checking for running instances and triggering re-provisioning when it didn't see the instance running. OpsWorks is here.
I'm very new to AWS, and I now have two EC2 instances. To avoid wasting the free tier plan, I'm trying to stop the instances when I'm not working with them.
This is what my EC2 Management Console shows. As you can see, there are two instances running and two instances terminated. I did not terminate swipe-dev, I only stopped it. But for some reason it is now terminated, and a new instance with the same source code was started. Why?
What am I doing wrong? I just want to stop the instances.
Edit
I have decided to keep just one project, so I terminated eb-flask-demo-dev and stopped the swipe-dev instance. After a few minutes the instance state was Stopped, and I thought everything was finally fine. But then I returned to the EC2 console and this is what it shows.
Why is swipe-dev running again, and why is there another terminated instance?
This is possible if your instance is a member of an Auto Scaling group with a desired capacity of 1; to stop this, set the group's minimum size and desired capacity to 0. To maintain the number of healthy instances, Auto Scaling performs a periodic health check on running instances within an Auto Scaling group. When it finds that an instance is unhealthy (in this case because you stopped it), it terminates that instance and launches a new one.
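A small boto3 sketch for confirming this; the group name is a placeholder, and the printed fields are the settings that drive the replacement behavior:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Inspect the group's sizing and health check configuration to see why
# stopped instances keep getting replaced.
resp = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["swipe-dev-asg"]  # placeholder name
)

for group in resp["AutoScalingGroups"]:
    print("MinSize:", group["MinSize"])
    print("DesiredCapacity:", group["DesiredCapacity"])
    print("HealthCheckType:", group["HealthCheckType"])
```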