How to restrict a Jenkins job on EC2 spot fleet? - amazon-web-services

I have installed the EC2 spot fleet plugin in Jenkins to use EC2 machines as slaves.
I want only a particular job to be executed on this EC2 fleet.
However, although I restricted the job to a label where it should run, every label is now being served by this Amazon EC2 fleet.
Only this particular job should run on this EC2 fleet, since the instances in this fleet are configured to run only this job and not the other jobs.
Before creating spot fleet:
Creating the spot fleet:
After Adding the spot fleet:
So, now every label is using this spot fleet to serve the requests assigned to it. However, this spot fleet can run only a particular job.
How can this be solved so that only a particular job runs on this spot fleet?

Starting from version 1.5.0, the EC2 Fleet Jenkins Plugin has the property:
Only build jobs with label expressions matching this node
When checked, only properly labelled jobs will be executed on the plugin's nodes.
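With that box checked, the remaining step is to give the fleet's nodes a label and pin the job to the same label. A minimal declarative-pipeline sketch, where the label name ec2-fleet is an assumption (use whatever label you configured on the fleet):

```groovy
// Sketch only: 'ec2-fleet' is a placeholder for the label you assigned
// to the EC2 Fleet nodes in the plugin configuration.
pipeline {
    agent { label 'ec2-fleet' }      // this job runs only on fleet nodes
    stages {
        stage('Build') {
            steps {
                sh 'make build'      // placeholder build step
            }
        }
    }
}
```

For freestyle jobs, the equivalent is the "Restrict where this project can be run" option with the same label expression.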

Related

Adjusting AWS ECS spot request

I have an AWS ECS cluster, but the spot instance type I selected is too small.
I can't find a way to adjust the Spot Fleet request or change the instance type(s) for the Spot Fleet request the cluster is using.
Do I have to create a new cluster with a new spot fleet request?
Is there any CLI option to adjust the cluster?
Do I have to manually order EC2 instances with an ECS-optimized AMI?
UPDATE: The question How to change instance type in AWS ECS cluster? sounds similar and advises copying the Launch Configuration, but I have no Launch Configuration.
There is no way of changing the instance types requested by a spot fleet after it has been created.
If you want to run your ECS workload on another instance type, create a new spot fleet (with instances that are aware of your ECS cluster).
When the spot instances spin up, they will register with your ECS cluster.
Once they are registered, you can find the old instances (in the ECS Instances tab of the cluster view) and click the checkbox next to them.
Then, go to Actions -> Drain instances
This tells ECS that you no longer wish to use these instances. New tasks will now be scheduled on the new instances.
Once all the tasks are running on the new instances, you can delete the old spot fleet.
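The drain step can also be scripted with the AWS CLI rather than clicked through the console; the cluster name and container instance ARN below are placeholders:

```shell
# Drain the old instances so ECS reschedules tasks onto the new fleet.
aws ecs update-container-instances-state \
  --cluster my-ecs-cluster \
  --container-instances arn:aws:ecs:us-east-1:123456789012:container-instance/0123abcd \
  --status DRAINING

# Watch until runningTasksCount on the old instances reaches 0,
# then it is safe to delete the old spot fleet.
aws ecs describe-container-instances \
  --cluster my-ecs-cluster \
  --container-instances arn:aws:ecs:us-east-1:123456789012:container-instance/0123abcd \
  --query 'containerInstances[].runningTasksCount'
```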
On the subject of launch configurations. There are two ways of creating collections of spot instances.
Through a Spot Fleet (which is what you're doing)
Through an Auto Scaling Group (ASG)
ASGs allow you to supply a launch configuration (basically a set of instructions to set up an EC2 instance).
Spot Fleets only allow you to customise the instance on creation via User Data.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
Because you're using Spot Fleets, Launch Configurations aren't really a consideration for you.
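For the "instances which are aware of your ECS cluster" part, User Data is enough: the ECS agent joins whatever cluster is named in /etc/ecs/ecs.config. A sketch, with the cluster name as a placeholder:

```shell
#!/bin/bash
# Spot fleet user data sketch: on an ECS-optimized AMI, this is all that's
# needed for the instance to register with an existing cluster at boot.
echo "ECS_CLUSTER=my-ecs-cluster" >> /etc/ecs/ecs.config
```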

Jenkins AWS Spot fleet plugin doesn't automatically scale spot instances

We planned to use an EC2 Spot instance/fleet as our Jenkins slave solution, based on this article: https://jenkins.io/blog/2016/06/10/save-costs-with-ec2-spot-fleet/.
EXPECTED
If the spot instance nodes remain free for the specified idle time (I have configured 5 minutes), Jenkins releases the nodes and my Spot fleet is automatically scaled down.
ACTUAL
My spot instances have been running for days. I also noticed that when I have more pending jobs, Jenkins does not automatically scale my Spot fleet to add more nodes.
Is automatic scale up/down supposed to be triggered by an AWS service, or by the Jenkins plugin?
CONFIGURATION
Jenkins version : 2.121.2-1.1
EC2 Fleet Jenkins Plugin version : 1.1.7
Spot instance configuration :
Request type : request & maintain
Target Capacity : 1
Spot fleet plugin configuration :
Max Idle Minutes Before Scaledown : 5
Minimum Cluster Size : 0
Maximum Cluster Size : 3
Any help or lead would be really appreciated.
I had the same issue, and by looking in Jenkins' logs I saw it tried to terminate the instances but was refused by AWS.
So, I checked in AWS CloudTrail all the actions Jenkins tried and for which there was an error.
In order for the plugin to scale your Spot Fleet, check that the credentials used by your AWS EC2 Spot Fleet plugin have the following permissions with the right conditions:
ec2:TerminateInstances
ec2:ModifySpotFleetRequest
In my case, the condition in the policy was malformed and didn't work.
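For reference, a minimal policy sketch granting those two actions (Resource is left wide open here for brevity; in practice you would scope it and re-test any Condition block, since a malformed condition was exactly the problem in my case):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:TerminateInstances",
        "ec2:ModifySpotFleetRequest"
      ],
      "Resource": "*"
    }
  ]
}
```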

Schedule AWS EC2 Spot instance Creation/Termination

I've been finding myself creating (request &amp; maintain) and terminating an EC2 spot fleet every 12 hours, because the application doesn't need to run full time. I would like the fleet to be created at 8am, for example, and terminated at 8pm. Is this possible?
I can do this with EC2 instances + Lambda + CloudWatch: https://aws.amazon.com/premiumsupport/knowledge-center/start-stop-lambda-cloudwatch/
However, I cannot seem to figure this out with spot fleet.
You can create an Auto Scaling Group with a spot price, then use Scheduled Actions to launch and terminate instances at specific times.
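Sketched with the AWS CLI (group name, times, and sizes are placeholders), the two scheduled actions would look like:

```shell
# Scale the spot ASG up at 08:00 UTC every day...
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-spot-asg \
  --scheduled-action-name scale-up-8am \
  --recurrence "0 8 * * *" \
  --min-size 1 --max-size 3 --desired-capacity 2

# ...and scale it to zero at 20:00 UTC, terminating the instances.
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-spot-asg \
  --scheduled-action-name scale-down-8pm \
  --recurrence "0 20 * * *" \
  --min-size 0 --max-size 0 --desired-capacity 0
```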

AWS AutoScaling - Instance Lifecycle

Just trying to get a bit of info on aws asg.
Here is my scenario;
launch an asg from a launch config using a default ubuntu ami
provision (install all required packages and config) the instances in the asg using ansible
deploy code to the instances using python
Happy days, every thing is setup.
The problem is that if I change the metrics of my ASG so that all instances are terminated, and then change them back to the original metrics, the new instances come up blank! No packages, no code!
1 - Is this expected behaviour?
2 - Also if the asg has 2 instances and scales up to 5 will it add 3 blank instances or take a copy of 1 of the running instances with code and packages and spin up the 3 new ones?
If 1 is yes, how do I get around this? Do I need to use a pre-baked image? But then even that won't have the latest code.
Basically, at off-peak hours I want to be able to 'zero' out my ASG so no instances are running, and then bring them back up again during peak hours. It doesn't make sense to have to provision and deploy code again every day.
Any help/advice appreciated.
The problem with your approach is that you are deploying the code to the running instances only. When you change the ASG metrics, the instances are terminated and relaunched from the AMI, so they come up without the code and configuration. Always remember that in auto scaling, newly launched instances are created from the AMI the ASG is configured with, not copied from a running instance.
To avoid this, you can use user data that runs your configuration script and pulls the code from your repo onto each instance as it is launched from the AMI, so that blank instances don't get launched.
Read this developer guide for User Data
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
Yes, this is expected behaviour
It will add 3 blank instances (if by blank you mean launched from the base image)
Usual options are:
Bake a new image every time there is a new code version (this can be easily automated as a step in the build/deploy process), but this probably makes sense only if that is not too often.
Configure your instance to pull latest software version on launch. There are several options which all involve user-data scripts. You can pull the latest version directly from SCM, or for example place the latest package to a S3 bucket as part of the build process, and configure the instance on launch to pull it from S3 and self deploy, or whatever you find suitable...
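A user-data sketch for the second option, assuming the build process uploads the latest artifact to an S3 bucket (bucket name and paths are placeholders):

```shell
#!/bin/bash
# On launch: fetch the latest build from S3 and deploy it, so instances
# created by the ASG never come up blank.
aws s3 cp s3://my-build-bucket/releases/latest.tar.gz /tmp/app.tar.gz
mkdir -p /opt/app
tar -xzf /tmp/app.tar.gz -C /opt/app
/opt/app/deploy.sh   # placeholder for whatever self-deploy step you use
```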

aws - upgrade autoscale base AMI

I have set up an HA architecture using Auto Scaling, a load balancer, and CodeDeploy.
I have a base image from which autoscaling launches any new instance. This base image will get outdated over time and I may have to upgrade it.
My confusion is: how can I provision this base AMI to install the desired version of packages? And how will I provision the already in-service instances?
For example: currently my base AMI has PHP 5.3, but if in the future I need PHP 5.5, how can I provision the in-service fleet of EC2 instances as well as the base AMI?
I have Chef as a provisioning server. So how should I proceed with the above problem?
The AMI that an instance uses is determined through the launch configuration when the instance boots. Therefore, the only way to change the AMI of an instance is to terminate it and launch it again.
In an autoscaling scenario, this is relatively easy: update the launch configuration of the autoscaling group to use the new AMI and terminate all instances you want to upgrade. You can do a rolling upgrade by terminating the instances one-by-one.
When your autoscaling group scales up and down frequently and it's OK for you to have multiple versions of the AMI in your autoscaling group, you can just update the launch configuration and wait. Every time the autoscaling process kicks in and a new instance is launched, the new AMI is used. When the autoscaling group has the right 'termination policy' ("OldestInstance", for example), every time the autoscaling process scales down, an instance running the old AMI will be terminated.
So, let's say you have 4 instances running. You update the launch config to use a new AMI. After 4 scale-up actions and 4 scale-down actions, all instances are running the new AMI.
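Since launch configurations are immutable, the update itself means creating a new one and pointing the ASG at it; a CLI sketch with placeholder names and IDs:

```shell
# 1. Create a new launch configuration with the new AMI.
aws autoscaling create-launch-configuration \
  --launch-configuration-name app-lc-v2 \
  --image-id ami-0new12345 \
  --instance-type t3.medium

# 2. Point the ASG at it; instances launched from now on use the new AMI.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name app-asg \
  --launch-configuration-name app-lc-v2

# 3. Rolling upgrade: terminate old instances one by one; the ASG
#    replaces each with one built from the new AMI.
aws autoscaling terminate-instance-in-auto-scaling-group \
  --instance-id i-0old12345 \
  --no-should-decrement-desired-capacity
```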
Auto Scaling has a feature called Launch Configuration, which includes the ability to pass in user data that gets executed at launch time. The user data is saved within the Launch Configuration, so you can automate the process.
I have never worked with Chef, and I'm sure there is a Chef-centric way of doing this, but the quick and dirty approach would be to use user data.
Your user data script (e.g. Bash) would then include the necessary sudo apt-get remove / install commands (assuming an Ubuntu OS).
The documentation for this is here:
http://docs.aws.amazon.com/AutoScaling/latest/APIReference/API_CreateLaunchConfiguration.html
UserData: The user data to make available to the launched EC2 instances. For more information, see Instance Metadata and User Data in the Amazon Elastic Compute Cloud User Guide.
At this time, launch configurations don't support compressed (zipped) user data files.
Type: String
Length constraints: Minimum length of 0. Maximum length of 21847.
Pattern: [\u0020-\uD7FF\uE000-\uFFFD\uD800\uDC00-\uDBFF\uDFFF\r\n\t]*
Required: No
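Putting that together, a user-data sketch for the PHP upgrade example (the package names vary by Ubuntu release, so treat them as placeholders):

```shell
#!/bin/bash
# Runs once at launch, supplied via the launch configuration's UserData field.
apt-get update
apt-get remove -y php5       # placeholder: the outdated PHP package
apt-get install -y php5.5    # placeholder: the desired PHP package
```

The script would be supplied when creating the launch configuration, e.g. via the `--user-data` parameter of `aws autoscaling create-launch-configuration`.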