I've been finding myself creating and terminating an EC2 Spot Fleet every 12 hours because the application doesn't need to run full time. I would like the fleet to be created at 8am, for example, and terminated at 8pm. Is this possible?
I can do this with EC2 instances + Lambda + CloudWatch: https://aws.amazon.com/premiumsupport/knowledge-center/start-stop-lambda-cloudwatch/
However, I can't seem to figure this out with Spot Fleet.
You can create an Auto Scaling group with a spot price in its launch configuration, then use Scheduled Actions to launch and terminate the instances at specific times.
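A minimal boto3 sketch of that setup (the group and launch-configuration names, AMI, instance type, subnet, bid price, and region below are all placeholders; note the Recurrence crons are evaluated in UTC):

```python
import boto3

autoscaling = boto3.client('autoscaling', region_name='eu-west-1')

# Launch configuration that requests Spot capacity at or below the bid price.
autoscaling.create_launch_configuration(
    LaunchConfigurationName='app-spot-lc',         # placeholder name
    ImageId='ami-0123456789abcdef0',               # placeholder AMI
    InstanceType='m4.large',                       # placeholder type
    SpotPrice='0.05',                              # max price you are willing to pay
)

# The group starts empty; the scheduled actions below set its size.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='app-spot-asg',
    LaunchConfigurationName='app-spot-lc',
    MinSize=0,
    MaxSize=2,
    DesiredCapacity=0,
    VPCZoneIdentifier='subnet-0123456789abcdef0',  # placeholder subnet
)

# Scale up to two instances at 08:00 and back to zero at 20:00, every day.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName='app-spot-asg',
    ScheduledActionName='start-8am',
    Recurrence='0 8 * * *',
    MinSize=2, MaxSize=2, DesiredCapacity=2,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName='app-spot-asg',
    ScheduledActionName='stop-8pm',
    Recurrence='0 20 * * *',
    MinSize=0, MaxSize=0, DesiredCapacity=0,
)
```

Because the group scales to zero at 8pm, the Spot capacity only exists (and is only billed) during working hours.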
Related
I've created an AWS CloudWatch alarm based on my ASG's group CPUUtilization metric. It sends an email alert whenever CPUUtilization exceeds 99% for more than an hour.
Is there a way to execute an action that terminates the specific EC2 instances that triggered the alarm? These instances hang and have to be terminated.
I would create an additional alarm that terminates any instance that stays at 99% CPU for an hour. This is directly supported by CloudWatch.
From Create Alarms to Stop, Terminate, Reboot, or Recover an Instance:
Using Amazon CloudWatch alarm actions, you can create alarms that automatically stop, terminate, reboot, or recover your EC2 instances. You can use the reboot and recover actions to automatically reboot those instances or recover them onto new hardware if a system impairment occurs.
See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/UsingAlarmActions.html
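A minimal boto3 sketch of such an alarm (the instance ID and region are placeholders). One caveat: the terminate action applies to the instance named in the alarm's dimensions, so you create one alarm per instance rather than alarming on the ASG's aggregated group metric:

```python
import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

instance_id = 'i-0123456789abcdef0'  # placeholder: the instance to watch

# Terminate the instance once CPUUtilization has averaged above 99%
# for twelve consecutive 5-minute periods (i.e. a full hour).
cloudwatch.put_metric_alarm(
    AlarmName=f'terminate-hung-{instance_id}',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': instance_id}],
    Statistic='Average',
    Period=300,
    EvaluationPeriods=12,
    Threshold=99.0,
    ComparisonOperator='GreaterThanThreshold',
    # Built-in EC2 alarm action; the region in the ARN must match the instance's.
    AlarmActions=['arn:aws:automate:us-east-1:ec2:terminate'],
)
```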
I feel a possible solution for this requirement is to write an AWS CLI script that runs every 15 minutes or so, gets the list of all running EC2 instances, and terminates them if needed. It would also need historical CPU data to find instances whose CPU has been at 100% for more than 45 minutes.
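If you go that route, here is a rough sketch in boto3 rather than raw CLI (the 45-minute window, 99% threshold, and region are assumptions; run it every 15 minutes from cron or a scheduled Lambda):

```python
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

def hung_instances(threshold=99.0, minutes=45):
    """Return IDs of running instances whose average CPU stayed above
    `threshold` in every 5-minute sample over the last `minutes`."""
    hung = []
    now = datetime.now(timezone.utc)
    # Pagination omitted for brevity.
    reservations = ec2.describe_instances(
        Filters=[{'Name': 'instance-state-name', 'Values': ['running']}]
    )['Reservations']
    for reservation in reservations:
        for instance in reservation['Instances']:
            stats = cloudwatch.get_metric_statistics(
                Namespace='AWS/EC2',
                MetricName='CPUUtilization',
                Dimensions=[{'Name': 'InstanceId',
                             'Value': instance['InstanceId']}],
                StartTime=now - timedelta(minutes=minutes),
                EndTime=now,
                Period=300,
                Statistics=['Average'],
            )['Datapoints']
            # Require a full window of samples, all above the threshold.
            if len(stats) >= minutes // 5 and all(
                    dp['Average'] > threshold for dp in stats):
                hung.append(instance['InstanceId'])
    return hung

if __name__ == '__main__':
    instances = hung_instances()
    if instances:
        ec2.terminate_instances(InstanceIds=instances)
```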
I'm having an issue with AWS boxes (EC2) where I terminate the box and it re-spawns. For context, there is no Auto Scaling group. Is there anywhere I can search for some config that might be triggering the launch?
I would make sure you don't have a persistent Spot request active in your account, and also check whether you perhaps installed the AWS Instance Scheduler; either of those could be starting instances on your behalf. (Double-check for an Auto Scaling group too, as that is the most obvious cause.)
If you terminate a running Spot Instance that was launched by a persistent Spot request, the Spot request returns to the open state so that a new Spot Instance can be launched. To cancel a persistent Spot request and terminate its Spot Instances, you must cancel the Spot request first and then terminate the Spot Instances. Otherwise, the persistent Spot request can launch a new instance.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html#terminating-a-spot-instance
https://aws.amazon.com/solutions/instance-scheduler/
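To clean up in the order the documentation describes (cancel the persistent request first, then terminate its instances), a small boto3 sketch, with the region as a placeholder:

```python
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Find open/active persistent Spot requests that could re-launch instances.
requests = ec2.describe_spot_instance_requests(
    Filters=[
        {'Name': 'state', 'Values': ['open', 'active']},
        {'Name': 'type', 'Values': ['persistent']},
    ]
)['SpotInstanceRequests']

if requests:
    # Cancel the requests first so no replacement instance gets launched...
    ec2.cancel_spot_instance_requests(
        SpotInstanceRequestIds=[r['SpotInstanceRequestId'] for r in requests]
    )
    # ...then terminate the instances those requests had launched.
    instance_ids = [r['InstanceId'] for r in requests if 'InstanceId' in r]
    if instance_ids:
        ec2.terminate_instances(InstanceIds=instance_ids)
```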
So I found the culprit; maybe this can help more people. Apparently there is a service from AWS called OpsWorks that automates configuration-management tools like Chef or Puppet, which my company had configured some time ago. It was checking for running instances and triggering re-provisioning when it didn't see the instance running. OpsWorks: https://aws.amazon.com/opsworks/
I've configured the AWS Instance Scheduler, and everything is working as expected.
The issue I'm having is that each instance in my dev environment belongs to an Auto Scaling group, and I'm unable to shut instances down without the Auto Scaling group terminating them when it does a health check and notices they're down.
Has anyone figured out an automated solution to this, without me having to manually suspend the ASG? Since the whole purpose is to stop the instances after hours, I can't intervene manually to suspend/resume the ASG.
Thanks in advance!
"Auto Scaling" and "AWS Instance Scheduler" don't really fit together nicely. Do you really need ELB for Dev environments? I feel this is overkill.
Anyway, if you still want to use ELB + AutoScaling and would like to shutdown the boxes during off hours, you can set "AutoScaling" to ZERO for the hours you want using Scheduled Scaling approach.
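For example, with boto3 (the group name, sizes, region, and UTC times are placeholders; this assumes the ASG already exists):

```python
import boto3

autoscaling = boto3.client('autoscaling', region_name='us-east-1')

# Drop the dev group to zero instances at 20:00 UTC every day...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName='dev-asg',        # placeholder group name
    ScheduledActionName='off-hours',
    Recurrence='0 20 * * *',               # cron, evaluated in UTC
    MinSize=0, MaxSize=0, DesiredCapacity=0,
)

# ...and restore it at 08:00 UTC the next morning.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName='dev-asg',
    ScheduledActionName='work-hours',
    Recurrence='0 8 * * *',
    MinSize=1, MaxSize=2, DesiredCapacity=1,
)
```

Since the group itself scales to zero, there are no stopped instances for the health check to flag, and no manual suspend/resume is needed.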
I am trying to bring up multiple AWS EC2 Spot Instances, machine type "m3.medium" or "m4.2xlarge", in REGION="eu-west-1".
I am able to bring up On-Demand Instances, but my Spot requests keep failing. Below is the error. Has anyone seen this before or worked around it?
I have installed the EC2 Spot Fleet plugin in Jenkins to use EC2 machines as slaves.
I want only a particular job to be executed on this EC2 fleet.
However, even after assigning the job a label restricting where it should run, every label is now being served by this Amazon EC2 fleet.
Only that particular job should run on this EC2 fleet, since the instances in the fleet are configured to run only this job and not the other jobs.
(Screenshots: before creating the Spot Fleet, creating the Spot Fleet, and after adding the Spot Fleet.)
So now every label is using this Spot Fleet to serve the requests assigned to it, even though the fleet can run only one particular job.
How can this be solved so that only a particular job runs on this spot fleet?
Starting from version 1.5.0, the EC2 Fleet Jenkins Plugin has the property:
Only build jobs with label expressions matching this node
When checked, only properly labeled jobs will be executed on the plugin's nodes.