Auto Delete Stack on Instance Termination

I am currently working with EC2 instances and have made a Java CLI tool to quickly launch/kill instances for our developers. One of the options is to set a lease time, so that the instance terminates after a certain number of hours. For spot instances this works great: it terminates the instance, cancels the spot request, and deletes all volumes associated with that instance.
However, for on-demand instances, the lease time only terminates the instance and does not delete the stack it was launched from.
Is there some way to automate this through AWS? I don't want to write a script that manually checks or monitors instances; it would be great if this could be handled by AWS itself. I have looked and have not found anything. Any advice is appreciated!
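Purely as an illustration of what such automation might look like (nothing below comes from the question itself; it assumes the stack is a CloudFormation stack, and the EventBridge/CloudWatch Events rule, the Lambda, and the boto3 calls are all assumptions about one possible setup): a rule matching EC2 "Instance State-change Notification" events with state "terminated" could invoke a Lambda that looks up the owning stack and deletes it.

    # Hypothetical Lambda handler, invoked by an EventBridge/CloudWatch Events rule that
    # matches EC2 "Instance State-change Notification" events with the state "terminated".
    import boto3
    from botocore.exceptions import ClientError

    cfn = boto3.client("cloudformation")

    def handler(event, context):
        instance_id = event["detail"]["instance-id"]
        try:
            # CloudFormation can map a physical resource ID back to the stack that owns it.
            resources = cfn.describe_stack_resources(PhysicalResourceId=instance_id)
        except ClientError:
            return  # the instance was not created by any CloudFormation stack
        for stack_name in {r["StackName"] for r in resources["StackResources"]}:
            cfn.delete_stack(StackName=stack_name)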

Related

How do I "hibernate" an entire AWS account?

I have inherited a nasty project running on AWS. The architecture is very complex and, not being an AWS expert, I'm not entirely sure where to start unpicking the structure of the project.
I've been asked to "hibernate" the entire system - meaning we can't lose any data, but all running instances can be switched off. As long as it can eventually be resurrected, the data can be stored anywhere within AWS.
From what I can tell, I think everything is controlled by ECS. There are several ECS clusters with various tasks running, and most seem to have a volume attached. I know that if I set the desired instances for any cluster to 0, it will shut down all associated instances. But the question is: if I later set that back to the previous count, will the data come back when the instances do? Or are the volumes deleted once the EC2 instances are terminated (as is usual with a "standalone" EC2 instance)?
I'd prefer not to have to set up the entire architecture again manually in the future.
I have tried to understand whether volumes currently in use by the instances for the ECS clusters will be deleted when I reduce the desired instance count to 0, but have been unable to come up with an answer.
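For what it's worth, the check being described can be made concrete along these lines (a boto3 sketch only; the instance ID is a placeholder and the function name is made up): whether an attached EBS volume disappears when its instance terminates is governed by the volume's DeleteOnTermination flag, which can be inspected per instance.

    # Sketch: report the DeleteOnTermination flag for every EBS volume attached to an instance.
    import boto3

    ec2 = boto3.client("ec2")

    def volume_flags(instance_id):
        reservations = ec2.describe_instances(InstanceIds=[instance_id])["Reservations"]
        instance = reservations[0]["Instances"][0]
        for mapping in instance.get("BlockDeviceMappings", []):
            ebs = mapping["Ebs"]
            print(mapping["DeviceName"], ebs["VolumeId"],
                  "deleted on termination" if ebs["DeleteOnTermination"] else "kept")

    volume_flags("i-0123456789abcdef0")  # hypothetical container-instance ID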

Logging solution for AWS Spot Instances

Is there any solution or approach that can help me get live/real-time logs?
I have hundreds of Spot Instances that come and go frequently, so we need a setup that captures the logs despite this Spot lifecycle; that is, whenever a new Spot Instance launches, its logs should be collected immediately.
Please suggest a way to do this.
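To make the requirement concrete (this is not from the question; the log group and stream names are placeholders, and shipping to CloudWatch Logs is just one possible pattern), the idea is to push log lines off the instance as they are produced, so they remain available after the Spot Instance disappears:

    # Sketch: push a log line to CloudWatch Logs so it outlives the Spot Instance.
    import time
    import boto3

    logs = boto3.client("logs")
    GROUP = "/spot/app"                 # hypothetical log group
    STREAM = "i-0123456789abcdef0"      # hypothetical per-instance stream name

    # Create the group/stream on first boot; ignore "already exists" on relaunches.
    try:
        logs.create_log_group(logGroupName=GROUP)
    except logs.exceptions.ResourceAlreadyExistsException:
        pass
    try:
        logs.create_log_stream(logGroupName=GROUP, logStreamName=STREAM)
    except logs.exceptions.ResourceAlreadyExistsException:
        pass

    # Every line pushed here is stored centrally and survives instance termination.
    logs.put_log_events(
        logGroupName=GROUP,
        logStreamName=STREAM,
        logEvents=[{"timestamp": int(time.time() * 1000), "message": "instance started"}],
    )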

How to use existing On-Demand instance in spot fleet?

I'm trying to reduce my expenses and want to start using AWS's spot pricing service. I'm completely new to it, but as I understand it, I can have instances that run for as long as capacity is available at my price and that will eventually stop running under certain conditions. That's fine. I'm also aware you can have spot fleets, and in these fleets you can have an On-Demand instance for when the spot instance is interrupted.
I currently have an On-Demand instance that hosts an Elastic Beanstalk application (it's an API). Is there a way to use this instance inside the spot fleet so that when a spot instance is available it services my Elastic Beanstalk application, and when the spot instance is interrupted it falls back to my current On-Demand instance until another spot instance is available?
Sadly, spot fleets don't work like this. If your spot instance gets terminated, no on-demand replacement is going to be created for you automatically. If it worked like this, everyone would be using spot instances in my view.
The on-demand portion of your spot fleet is separate from the spot portion. This way your application will always run at minimum capacity (without spot). When spot is available, your spot instances will run alongside your on-demand instances. This way you will have more computational power for cheap, which is very beneficial for any heavy processing application (e.g. batch image processing).
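To make that split concrete, here is a minimal sketch (the launch template ID and capacity numbers are placeholders, and using the EC2 CreateFleet API via boto3 is my assumption rather than something stated in the answer) that keeps one on-demand instance as a baseline and fills the remaining capacity with spot:

    # Sketch: a fleet with a fixed on-demand baseline plus spot capacity on top.
    import boto3

    ec2 = boto3.client("ec2")

    ec2.create_fleet(
        Type="maintain",  # keep the target capacity replenished as spot comes and goes
        LaunchTemplateConfigs=[{
            "LaunchTemplateSpecification": {
                "LaunchTemplateId": "lt-0123456789abcdef0",  # hypothetical template
                "Version": "$Latest",
            },
        }],
        TargetCapacitySpecification={
            "TotalTargetCapacity": 4,
            "OnDemandTargetCapacity": 1,          # the always-on portion
            "DefaultTargetCapacityType": "spot",  # the remaining 3 units come from spot
        },
    )

With a "maintain" fleet, interrupted spot capacity is replaced when it becomes available again, while the on-demand unit keeps running throughout.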
Details of how spot fleet and spot instances work are in How Spot Fleet works and How Spot Instances work docs.
Nevertheless, if you would like to have such a replacement provisioned, you would have to develop a custom solution for that.
There's a third-party solution called Spot.io that not only replaces the spot instance with an on-demand instance in a scenario like the one you describe, but also has an algorithm that anticipates the interruption event and stands up an on-demand instance so that it is ready before the interruption occurs.

How to determine the state of an AWS instance

I'm trying to determine how to remove an instance from several applications (freeIPA, Chef, service discovery) from within an AWS Auto Scaling group, but I'm finding that there's no reliable way to determine whether an instance is simply stopping (sometimes our admins will take an instance out of the ASG for analysis) or terminating. If the instance is stopped, I would like to retain the ability to have it stay connected to our LDAP and other systems. Anyone know a good way to do this?
Is the instance EBS-backed or using the instance store?
If it uses the instance store, you cannot really stop it (only terminate it).
Have you looked at the EC2 API (via the aws-sdk, maybe)? It looks like DescribeInstances and inspecting the reservations should do the trick here.
http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstances.html
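For example, a quick check along these lines (a boto3 sketch rather than the Java SDK; the instance ID is a placeholder) reports both the current lifecycle state and whether the instance is EBS-backed:

    # Sketch: check an instance's lifecycle state and whether it is EBS-backed.
    import boto3

    ec2 = boto3.client("ec2")

    resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])  # hypothetical ID
    instance = resp["Reservations"][0]["Instances"][0]

    # State name is one of: pending, running, shutting-down, terminated, stopping, stopped.
    print(instance["State"]["Name"], instance["RootDeviceType"])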
I determined that the best way for me to do this was to use the ASG notifications (specifically the autoscaling:EC2_INSTANCE_TERMINATE notification type). This effectively allows me to take no action if an instance is stopping and to fire off a script if the instance is terminating.
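A sketch of that configuration with boto3 (the group name and topic ARN are placeholders), subscribing an SNS topic only to terminate events so that nothing fires for a manual stop:

    # Sketch: notify an SNS topic only when the ASG terminates an instance.
    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.put_notification_configuration(
        AutoScalingGroupName="my-asg",                             # hypothetical group
        TopicARN="arn:aws:sns:us-east-1:123456789012:asg-events",  # hypothetical topic
        NotificationTypes=["autoscaling:EC2_INSTANCE_TERMINATE"],
    )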

EC2 Spot instances: How to start tasks, how to stop them?

I have a long batch job that I'd like to run on AWS EC2 Spot Instances, to save money. However, I can't find the answer to two seemingly critical questions:
When a new instance is created, I need to upload the code onto it, configure it, and run the code. How does that get done for Spot Instances, which are created automatically and unattended?
When an instance is stopped, I would prefer having some type of notification, so that the state could be saved. (This is not critical, as the batch job will run fine if terminated suddenly - but a clean shutdown is preferred).
What is the standard way to deploy spot instances? Is there a way to do manual setup, turn it into a spot instance, and then let it hibernate until the spot price is available?
As to #1, if you create an AMI (Amazon Machine Image), you can have everything you want pre-installed on a 'hibernating' image that you can use as the basis for the spot instances you start:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances-getting-started.html
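For example (a boto3 sketch; the instance ID and image name are placeholders, not from the answer), you can bake an already-configured instance into an AMI and reference the resulting image ID in your spot requests:

    # Sketch: bake an already-configured instance into an AMI for later spot launches.
    import boto3

    ec2 = boto3.client("ec2")

    image = ec2.create_image(
        InstanceId="i-0123456789abcdef0",  # hypothetical, pre-configured instance
        Name="batch-worker-image",         # hypothetical image name
        Description="Code and configuration pre-installed for the batch job",
    )
    print(image["ImageId"])  # reference this ImageId when requesting spot instances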
For #2, you can be notified when a spot instance terminates using SNS:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-autoscaling-notifications.html
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/ASGettingNotifications.html
BTW: You can be notified that the instance was terminated, but only after it terminates. You can't get notified that an instance is about to be shut down so you can gracefully save the state - you need to engineer your solution to be OK with unexpected shutdowns.
No matter how high you bid, there is always a risk that your Spot Instance will be interrupted. We strongly recommend against bidding above the On-Demand price or using Spot for applications that cannot tolerate interruptions.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-protect-interruptions.html
You can use user data to download a script from a repository and run it at the first instance startup (see the sketch below).
As E.J. Brennan said: you can use SNS
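A sketch of that user-data approach with boto3 (the AMI ID, instance type, and script URL are placeholders, not from the answer): request spot capacity and hand the instance a startup script that pulls and runs your code on first boot.

    # Sketch: launch a spot instance whose user data fetches and runs a bootstrap script.
    import boto3

    ec2 = boto3.client("ec2")

    user_data = """#!/bin/bash
    curl -fsSL https://example.com/bootstrap.sh -o /tmp/bootstrap.sh
    bash /tmp/bootstrap.sh
    """

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical AMI (e.g. the one baked earlier)
        InstanceType="m5.large",          # hypothetical instance type
        MinCount=1,
        MaxCount=1,
        InstanceMarketOptions={"MarketType": "spot"},  # request spot capacity
        UserData=user_data,  # boto3 base64-encodes this for RunInstances
    )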