Mesos autoscale EC2 - amazon-web-services

I was looking for a way to autoscale a Mesos or DCOS EC2 cluster dynamically. An example scenario: if cluster CPU usage is above X% for Y minutes, spin up new instances; likewise if memory usage is above X% for Y minutes.
Ideally the instance type would be determined dynamically by the type and amount of resources needed. I came across this project:
https://github.com/thefactory/autoscale-python
which I suppose could itself be run as a Marathon task to handle this, but I was wondering if there is a built-in utility in Mesos, or a generic way to do this on EC2 or GCE. Thanks!

As recently as last week I talked with Mesosphere people, and can confirm that this functionality is not yet built into DCOS. I didn't get the feeling it is on their roadmap anytime soon either, as they kept referring me to application autoscaling rather than infrastructure autoscaling (which is what we want here).
The Python script from thefactory (Tendril) no longer seems to be maintained, as evidenced by a PR that has been open for 12 months now.
I'm currently looking at Netflix's Fenzo, which also seems to have been written to enable infrastructure autoscaling, among other things (http://techblog.netflix.com/2015/08/fenzo-oss-scheduler-for-apache-mesos.html).
I will try to post back once I have a better idea of Fenzo and how it integrates with DC/OS.

Related

AWS EC2 t3.micro instance sufficiently stable for Spring Boot services

I am new to AWS and recently set up a free t3.micro instance. My goal is stable hosting of an Angular application with two Spring Boot services. I got everything working, but after a while the Spring Boot services are no longer reachable. When I redeploy a service it runs again. The Spring Boot services are packaged as JARs, and after deployment each runs as a Java process.
I thought AWS guaranteed permanent availability out of the box. Do I need more setup, such as autoscaling, to achieve the desired uptime, or is the t3.micro instance not sufficiently performant, so that I need to upgrade to a stronger instance to avoid the problem?
It depends :)
I think you did the right thing by starting with a small instance type and avoiding over-provisioning in the first place. T3 instance types are generally suited to 'burst' usage scenarios, i.e. your application sporadically needs a compute spike but not a persistent one. T3 instances work on a credit-based system: your instance 'earns' credits while it is idle, and that buffer is available in times of need, but only until it is consumed entirely. After that you have to wait and idle for a while to earn the credits back.
For your current problem, a first step can be to get an idea of current usage by going through the 'Monitoring' tab on the EC2 instance details page. This will help you understand whether your needs are more compute-related or I/O-related, and then you can choose an appropriate instance type from:
https://aws.amazon.com/ec2/instance-types
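If you'd rather script this than click through the console, the same data is available from CloudWatch. A minimal sketch with boto3 (the region and instance ID are placeholders), pulling a day of CPU utilisation and, since this is a T3, the CPU credit balance:

    import datetime
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="eu-central-1")
    instance_id = "i-0123456789abcdef0"  # placeholder instance ID

    now = datetime.datetime.utcnow()
    for metric in ("CPUUtilization", "CPUCreditBalance"):
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName=metric,
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=now - datetime.timedelta(hours=24),
            EndTime=now,
            Period=3600,          # one datapoint per hour
            Statistics=["Average"],
        )
        for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
            print(metric, point["Timestamp"], round(point["Average"], 2))

A steadily draining CPUCreditBalance alongside high CPUUtilization is the classic sign that a burstable instance is being asked to do sustained work.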
A next step could be to profile your application to understand its memory and compute utilisation better. AWS does guarantee availability/durability of the resources themselves, but how you consume those resources is an application concern that AWS does not guarantee or control.
As for your ideas around autoscaling and availability, it again depends on your needs in terms of cost, tolerance for outages in AWS data centres, etc. For a reliable production setup you could consider them, but they are not the most important thing at first.

AWS Container (ECS) vs AMI & Spot instances

The core of my question is whether there are downsides to using an Amazon Machine Image + micro Spot instances to run a task, versus using the Elastic Container Service (ECS).
Here's my situation: I need to run an on-demand task that is triggered by a remote webhook.
This task might get triggered 10 times in a row, or go weeks without executing, so I definitely want a service that only runs (and bills) on demand.
My plan is to point the webhook to a Lambda function, but then the question is what to have the Lambda function do.
Though it doesn't take very long, this task requires several different runtimes (PowerShell Core, Python, PHP, Git) to get its job done, so Lambda isn't really a possibility, as I'd hit the deployment package size limit. But I can use Lambda to kick off the job.
What I started doing was creating an AMI that has all the necessary runtimes and code, then using a Spot request to launch an instance, have it execute the operation via a startup script passed in as user data, then shut itself down when it's done. I'd have to put in some rate-control logic to prevent two from running at once, but that's a solvable problem.
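Roughly, the Lambda side of this plan would be a sketch like the following (the AMI ID and job script are placeholders, and the rate-control logic is left out):

    import boto3

    ec2 = boto3.client("ec2")

    USER_DATA = """#!/bin/bash
    /opt/myjob/run.sh   # job entry point baked into the AMI (placeholder)
    shutdown -h now     # the instance powers itself off when the job is done
    """

    def handler(event, context):
        # Launch a one-time spot instance from the pre-built AMI; with
        # shutdown behaviour set to 'terminate', the instance (and the
        # billing) goes away when the user-data script shuts it down.
        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
            InstanceType="t3a.micro",
            MinCount=1,
            MaxCount=1,
            UserData=USER_DATA,
            InstanceInitiatedShutdownBehavior="terminate",
            InstanceMarketOptions={
                "MarketType": "spot",
                "SpotOptions": {"SpotInstanceType": "one-time"},
            },
        )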
I hesitated halfway through developing this solution when I realized I could probably do this with a Docker container on ECS using Fargate.
I just don't know if there is any benefit to putting in the additional development time to switch to a Docker container when I am not a Docker pro and already have the AMI configured. Plus, ECS/Fargate is actually more expensive than just running a micro instance.
Are there any concerns about spinning up short-lived (<5 min) spot requests (t3a.micro) when there could be a dozen fired off in a single day? Are there rate limits on this? Will I get an angry email from AWS telling me to knock it off? Are there other reasons ECS is the only right answer? Something else entirely?
Your solution using a Spot instance and an AMI is a valid one, though I've experienced slow spot-instance acquisition times in the past. You also incur the AMI startup time.
As mentioned in the comments, you will incur a minimum one-hour charge for the instance, so you should leave the instance up for the hour before terminating it, in case more requests come in during that hour.
IMHO you should build it all with Lambda. By splitting the workload into its own Lambda function per runtime, you can make it work.
AWS supports Python and PowerShell runtimes, and you can create a custom PHP one. Chain them together with your glue of choice (SNS, SQS, direct invocation, or Step Functions) and you have the most cost-effective solution. You also get the benefit of independent maintenance for each function/runtime.
Put the initial Lambda behind API Gateway and you get rate-limiting capability too.
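For illustration, the "direct invocation" glue between two of those functions can be as small as this (function names are hypothetical):

    import json
    import boto3

    lambda_client = boto3.client("lambda")

    def do_python_part(event):
        # stand-in for the real Python-runtime work
        return {"status": "python-step-done", "input": event}

    def handler(event, context):
        result = do_python_part(event)
        # Fire-and-forget handoff to the next runtime's function.
        lambda_client.invoke(
            FunctionName="job-powershell-step",  # hypothetical next step
            InvocationType="Event",              # asynchronous invocation
            Payload=json.dumps(result).encode(),
        )

Step Functions buys you retries and visibility on top of this, at the cost of a little more setup.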

ECS Service auto scaling and auto scaling group

I've decided to start playing with the AWS ECS service and created a cluster with one service. My issue is that I want to connect it to an AWS Auto Scaling group.
I followed this guide.
The guide works; my issue is that it's a total waste of money.
The guide says I need to add a machine when the total CPU units my services reserve go above 75%, but in reality my services always reserve 100%, because I don't want to waste money. Also, it's pretty pointless to put 3 Node.js tasks on a 2-CPU machine; there is no hard limit anyway.
I have been racking my brain over this for a few days now and have no idea how to make them work together properly.
EDIT:
Currently this is what happens:
1. CPU goes above 75%; service scaling creates 2 new tasks on the same server, which means I now have 1 instance running 4 tasks.
2. Instance reservation is now 100%, so the Auto Scaling group creates a new instance.
3. Once the new instance is up, service scaling removes 2 tasks from the old instance and places 2 new tasks on the new instance.
Is it just me, or does this whole process look like a waste of time? Is this how it is really supposed to work, or have I (probably) done something wrong?
I think you are missing a few insights.
For ECS autoscaling to work properly, you also have to set up scaling at the ECS service level.
Then, the scaling flow would look like this:
1. The ECS service reaches 100% CPU usage and has 100% of its CPU reserved.
2. The ECS service scales by starting an additional task, bringing total reserved CPU to 200%.
3. The Auto Scaling group sees there is more reserved capacity than available capacity and launches a new machine.
In addition, you can perfectly well run multiple Node.js tasks on a 2-CPU machine. Especially in a microservice environment, these Node.js services can be quite small (128 CPU units, for example) and still run perfectly fine together on the same host machine.
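For completeness, the service-level half of this can be set up through the Application Auto Scaling API. A sketch with boto3 (cluster, service, and capacity numbers are placeholders):

    import boto3

    autoscaling = boto3.client("application-autoscaling")
    resource_id = "service/my-cluster/my-service"  # placeholder names

    # Let the service's DesiredCount float between 2 and 10 tasks.
    autoscaling.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=2,
        MaxCapacity=10,
    )

    # Add tasks when average service CPU rises above 75%, remove them
    # again when it falls back.
    autoscaling.put_scaling_policy(
        PolicyName="cpu-target-tracking",
        ServiceNamespace="ecs",
        ResourceId=resource_id,
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 75.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
        },
    )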
Eventually I figured out that what I want to do is not possible.
It is not possible to create a resource-optimized cluster with ECS the way you can in Kubernetes
(unless, of course, you write some magic with Lambda).
Service auto scaling and Auto Scaling groups don't work together. You can, however, make it work perfectly with Fargate, but it's expensive. The main issue is that there is no trigger for cluster reservation going above 100%.

Auto Scaling for EC2 Celery workers with ASG or Spot Fleets?

I want to set up auto scaling of the EC2 instances in my Celery cluster. So far I've been doing this by hand, spinning up new EC2 instances whenever I see that the SQS queue is experiencing low throughput.
Searching around, I've come across two seemingly similar solutions:
An AutoScalingGroup of EC2 instances, using a LaunchConfiguration configured to use Spot instances
A SpotFleet with scaling actions responding directly to SQS metrics
Most questions on SO are dated (6 months is not much, but that's basically when the SpotFleet auto scaling feature was released) and mention that SpotFleet lacks ASG features.
Of particular concern: running the Celery worker requires some setup (installing packages, downloading code, running a script). I'm not particularly worried about cost (both use Spot instances, close enough) or reliability (workers can die without a problem, and the exact size of the cluster is not that important either).
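For context, what I have in mind for the ASG route, reacting to SQS metrics, is roughly this (group name, queue name, and thresholds are made up):

    import boto3

    asg = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    # Simple step: add two workers whenever the alarm below fires.
    policy = asg.put_scaling_policy(
        AutoScalingGroupName="celery-workers",   # made-up ASG name
        PolicyName="scale-out-on-backlog",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=2,
        Cooldown=300,
    )

    # Fire when the queue backlog averages above 100 messages for 10 minutes.
    cloudwatch.put_metric_alarm(
        AlarmName="celery-queue-backlog",
        Namespace="AWS/SQS",
        MetricName="ApproximateNumberOfMessagesVisible",
        Dimensions=[{"Name": "QueueName", "Value": "celery-queue"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=100,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )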
Which option is considered best practice for getting this done?

Is it possible to auto scale with Amazon Web Services, with ever-changing AMIs?

Curious if this is possible:
We have a web application that, most of the time, works just fine on our single small instance. However, when multiple customers run intense queries simultaneously (we are a cloud scheduling service), our instance bogs down to near 80% CPU load and becomes pretty unresponsive.
Is there a way to have AWS quickly fire up another small instance (or a few) only during the times it's operating under this intense load? But the real question is: how does this work when we push very frequent updates to our application? Do we have to manually create a new image every time we upload a code change?
Thanks
You should never be running anything important on a single EC2 instance. Instances can--and do--go offline randomly. Always use an autoscaling (AS) group that spans multiple availability zones. An AS group will automatically bring new instances online when you hit a certain trigger (in your case, CPU utilization). And then it will scale down the instances when traffic subsides. Autoscaling is the heart and soul of AWS and if you're not using it, you might as well be using a cheaper (and more durable) VPS host.
No, you don't want to be creating a new AMI for each code release. Ideally you should use a base AMI (like one of Amazon's official ones) and then have it auto-provision at boot. You can use the "user data" field when you launch an AMI to bootstrap this process. It can be anything from a simple bash script that pulls from your Git repo to something as sophisticated as Puppet or Chef.
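As a sketch of that bootstrap idea (AMI ID, repo URL, and paths are placeholders, and a launch template is just one place to carry the user data):

    import base64
    import boto3

    BOOTSTRAP = """#!/bin/bash
    yum install -y git
    git clone https://github.com/example/myapp.git /opt/myapp  # placeholder repo
    /opt/myapp/deploy.sh
    """

    ec2 = boto3.client("ec2")
    ec2.create_launch_template(
        LaunchTemplateName="web-app",
        LaunchTemplateData={
            "ImageId": "ami-0123456789abcdef0",  # a stock Amazon Linux AMI
            "InstanceType": "t3.small",
            # Launch templates expect user data base64-encoded.
            "UserData": base64.b64encode(BOOTSTRAP.encode()).decode(),
        },
    )

Every instance the autoscaling group launches from this template then comes up running the latest code, with no new AMI per release.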
The only time I create custom AMIs is when the provisioning process just takes too long. However, that can almost always be solved by storing the needed files in S3.