AWS terminated instance refuses to stay terminated - amazon-web-services

I have two EC2 Micro instances. I've stopped one of them several times and terminated it several times as well, but for some reason it boots itself back up again.
I'm not sure how or why this happens, but I'm being billed $10 every month because of it. Is there some hidden AWS setting that restarts every terminated instance?
All the Google/AWS results and docs say that a terminated instance will automatically be removed after 10-20 minutes, but in my case it automatically gets started again.
Any help or pointers would be great.

You are using some process that auto-launches instances when the previous ones are terminated.
This is often done with an Auto Scaling group, which may have been created by Elastic Beanstalk.

Found the answer here.
It seems like you have been using Elastic Beanstalk. If you open up that section of the AWS Management Console, you can delete the application/environment from there, which will bring the instance down as well. When you terminate an Elastic Beanstalk instance manually through the EC2 section, the system thinks that it has failed and launches a replacement.
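Before deleting anything, it is worth confirming from the AWS CLI what is relaunching the instance. A minimal check, with a placeholder instance ID:

# Returns the Auto Scaling group name if the instance belongs
# to one, or an empty list if it does not.
$ aws autoscaling describe-auto-scaling-instances \
    --instance-ids i-0123456789abcdef0 \
    --query 'AutoScalingInstances[].AutoScalingGroupName'

# List any Elastic Beanstalk environments that might own that group.
$ aws elasticbeanstalk describe-environments \
    --query 'Environments[].[EnvironmentName,Status]'

If the first command returns a group name, terminating the instance by hand will never stick: the group's desired capacity has to be changed, or the owning environment deleted, first.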

Related

How to terminate AWS managed blockchain instance

I followed the instructions below to get an AWS Ethereum instance running. However, since I am just learning blockchain, I would like to create an image and start/stop it as needed. But when I go to EC2 and stop my instance, it restarts. I saw other posts about this being caused by Elastic Beanstalk, but when I go to Elastic Beanstalk, I don't see anything there. What else could be causing it to restart?
Thanks!
https://docs.aws.amazon.com/blockchain-templates/latest/developerguide/blockchain-templates-getting-started.html
If you check the source code of one of its nested stacks, ethereum-autoscalegroup.template.yaml, you can see that it actually creates instances in an Auto Scaling group (ASG).
Instances in an ASG can't be stopped. However, you can terminate them by setting the desired capacity and minimum capacity of your ASG to 0. Then, when you need instances again, you can change the desired capacity back to 1.
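With the AWS CLI that looks roughly like this, where my-ethereum-asg stands in for whatever group name the CloudFormation stack actually created:

# Scale to zero: the ASG terminates its instances and
# launches no replacements.
$ aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-ethereum-asg \
    --min-size 0 \
    --desired-capacity 0

# Later, bring one instance back.
$ aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-ethereum-asg \
    --desired-capacity 1

Since terminating discards anything on volumes that are set to delete on termination, creating an AMI first, as you plan to, is the right move.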

Stop and Start AWS Elastic Beanstalk which is running php on EC2 and mysql RDS

I am using the AWS free tier and running Elastic Beanstalk, which runs EC2 and RDS. I am looking for a way to stop and start the environment only when needed. Should I stop EC2 and RDS individually from the dashboard, or what would be a good way of doing it?
One of the posts I found was "When I stop and start an EC2 CentOS instance, what data do I lose", and it says data will not be lost. But how do I stop and start the Elastic Beanstalk environment when needed?
If you are using Beanstalk and try to stop a particular EC2 instance from the EC2 console, Beanstalk will bring it back automatically. If you want to stop the whole Elastic Beanstalk environment, you can use the Terminate option, which will, obviously, terminate it. You will be able to bring it back for 40 days; after that it will be lost. Remember that the terminated environment is only visible in the Elastic Beanstalk console for about an hour, after which you can bring it back only with the eb tool, so write down your environment's ID so you can restore it later with $ eb restore ENV_ID
As far as the EC2 instance is concerned, if you have a load-balanced, auto-scaling setup, you can use the scheduled autoscaling feature to shrink your desired number of instances to zero on whatever schedule you like. To do this, go to the capacity section of the environment dashboard in the console and scroll to the bottom ("Time-based Scaling"). There you can enter two cron expressions, one for scaling out and one for scaling in, as a recurring pattern that shuts the EC2 instances down when you like.
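That console screen is a front end for Auto Scaling scheduled actions, so the same schedule can be sketched with the AWS CLI. The group name and the weekday times below are placeholder assumptions:

# Scale in to zero instances every weekday at 19:00 UTC.
$ aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name my-eb-asg \
    --scheduled-action-name nightly-scale-in \
    --recurrence "0 19 * * 1-5" \
    --min-size 0 --max-size 0 --desired-capacity 0

# Scale back out to one instance every weekday at 07:00 UTC.
$ aws autoscaling put-scheduled-update-group-action \
    --auto-scaling-group-name my-eb-asg \
    --scheduled-action-name morning-scale-out \
    --recurrence "0 7 * * 1-5" \
    --min-size 1 --max-size 1 --desired-capacity 1

For a Beanstalk-managed group, though, the Time-based Scaling screen (or an eb config setting) is the safer place to define this, since Beanstalk owns the group's configuration.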
The RDS instance is a bit trickier. You could write a Lambda function that takes a snapshot, shuts the instance down, and later restores the snapshot to a new instance, scheduled with a cron expression or similar in CloudWatch Events. A similar approach could work for the EC2 instance and its EBS volume.
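A rough CLI equivalent of what such a Lambda would do, with placeholder identifiers (note that RDS also has a native stop-db-instance command, which is simpler if its limitations suit you):

# Snapshot the database, wait for the snapshot to complete,
# then delete the instance.
$ aws rds create-db-snapshot \
    --db-instance-identifier my-db \
    --db-snapshot-identifier my-db-parked
$ aws rds wait db-snapshot-available \
    --db-snapshot-identifier my-db-parked
$ aws rds delete-db-instance \
    --db-instance-identifier my-db \
    --skip-final-snapshot

# Later, restore a new instance from the snapshot.
$ aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier my-db \
    --db-snapshot-identifier my-db-parked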

AWS - EC2 stop instances not working properly

I'm very new to AWS, and I now have two EC2 instances. To avoid wasting the free tier allowance, I'm trying to stop the instances when I'm not working with them.
This is what my EC2 Management Console shows. As you can see, there are two instances running and two instances terminated. I did not terminate swipe-dev, I only stopped it. But for some reason it is now terminated, and a new instance with the same source code has been started. Why?
What am I doing wrong? I just want to stop the instances.
Edit
I decided to keep just one project, so I terminated eb-flask-demo-dev and stopped the swipe-dev instance. After a few minutes the instance state was "stopped", and I thought everything was finally fine. But when I returned to the EC2 console, this is what it showed.
Why is swipe-dev running again, and why is there another terminated instance?
This is possible if your instance is a member of an Auto Scaling group with a desired capacity of 1 (even if the minimum size is 0). To maintain the number of healthy instances, Auto Scaling performs a periodic health check on the running instances within the group. When it finds that an instance is unhealthy (in this case because you stopped it), it terminates that instance and launches a new one.
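If you want to keep the group but still be able to stop its instances without them being replaced, one option is to suspend the group's health-check processes first. A sketch, with a placeholder group name:

# Keep the ASG from terminating instances it considers unhealthy.
$ aws autoscaling suspend-processes \
    --auto-scaling-group-name my-asg \
    --scaling-processes HealthCheck ReplaceUnhealthy

# ...stop the instance, do your work, start it again...

# Restore normal behaviour afterwards.
$ aws autoscaling resume-processes \
    --auto-scaling-group-name my-asg \
    --scaling-processes HealthCheck ReplaceUnhealthy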

Amazon instance gets terminated frequently

We have a basic-plan Amazon configuration, and we have created a Windows instance that is terminated frequently, after which a new IP address is assigned. I don't know how to check why it is being terminated so frequently.
Under standard conditions, Amazon EC2 instances are never terminated on their own. You may accidentally be using EC2 Spot Instances, which work on a bidding model: AWS can terminate these instances with a warning time of about 2 minutes if there is a higher bid for the Spot capacity.
If your instances get terminated immediately after you launch them, see this page: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_InstanceStraightToTerminated.html
For more information on EC2 Spot Instances, see this page:
http://aws.amazon.com/ec2/purchasing-options/spot-instances/
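A quick way to check whether an instance is a Spot Instance is its InstanceLifecycle field, which On-Demand instances do not have. A sketch with a placeholder instance ID:

# Prints "spot" for Spot Instances; the result is empty
# for ordinary On-Demand instances.
$ aws ec2 describe-instances \
    --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[].Instances[].InstanceLifecycle'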

AWS automatically generating a new instance after I terminate it

OK, an odd thing is happening on AWS.
I downloaded the AWS .NET developer tools and created a default Elastic Beanstalk instance.
I then, for one reason or another, created another instance via the Visual Studio interface, and that instance is where all the client code and configuration resides. I then returned to the default instance created by Elastic Beanstalk and terminated it. An hour later, I logged back on and another default instance was up and running. It seems that AWS detected that I terminated the instance and spawned another; some sort of check seems to be in place.
Can somebody tell me what is going on here and how to completely remove the default instance (and this relaunch-on-termination behavior)?
Thanks.
I've experienced something similar. If the instance was created through Elastic Beanstalk, you need to go to the Elastic Beanstalk screen in the AWS console and remove the application from there first. If you just terminate the instance from the EC2 screen, Elastic Beanstalk probably thinks that the instance crashed and launches a new one.
If Beanstalk is not involved, then the instance is most probably being created by Auto Scaling, which lives within the EC2 service itself.
Go to Auto Scaling and first delete the Auto Scaling group and the launch configuration associated with that instance.
As described here, this is caused by an Auto Scaling group whose desired/minimum capacity is 1. That means the group will always keep one instance running: if you delete the only one, it will create another. To prevent this, go to the EC2 dashboard, scroll down the left sidebar, and click Auto Scaling Groups under the AUTO SCALING menu. You will see the list of groups, and you can click each one to see which instances it contains. Find the group your instance is in and either delete it (that is also what happens when an environment is deleted) or change its desired and minimum capacity from 1 to 0 and save. That is it.
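Those console steps map onto two CLI calls, sketched here with placeholder names; --force-delete terminates any instances still running in the group:

# Delete the group together with its remaining instances.
$ aws autoscaling delete-auto-scaling-group \
    --auto-scaling-group-name my-default-asg \
    --force-delete

# The launch configuration is not deleted with the group.
$ aws autoscaling delete-launch-configuration \
    --launch-configuration-name my-default-launch-config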
Check your Auto Scaling groups. This can also happen if you have created node groups for an EKS cluster.
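In the EKS case the Auto Scaling group belongs to a managed node group, so the clean way to remove it is through EKS rather than EC2. A sketch with placeholder cluster and node group names:

# Find the node groups attached to the cluster.
$ aws eks list-nodegroups --cluster-name my-cluster

# Deleting a node group removes its Auto Scaling group,
# and therefore its instances, with it.
$ aws eks delete-nodegroup \
    --cluster-name my-cluster \
    --nodegroup-name my-nodegroup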