Elastic Beanstalk periodic tasks on an autoscaled environment - amazon-web-services

On an autoscaled environment running a periodic task, if the environment is scaled up, do the periodic tasks get run on each instance? Or more specifically, does each instance then post to the queue leading to multiple "periodic tasks" running?

Yes. If there's some periodic task that should only be triggered once, you should have a separate auto-scaled environment with a minimum and maximum of one instance to either perform the task or trigger it on one of your servers (for example, make a request to your load balancer and one of your instances will perform the task).

Yes, behind the scenes it's just a cron job on all your instances. The default scenario for periodic tasks is that the worker nodes read the tasks from the SQS queue.
So yes, if you're doing some kind of posting that has to happen only once, then you either need to put some deduplication logic in between or use a different solution.
(For example, generate some kind of time-based ID that identifies the cycle of the cron job. Messages from the same cycle then share the same ID, so it's easy to filter them and ignore everything after the first one.)
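A minimal sketch of that idea, assuming a 5-minute cron cycle and an illustrative queue name; with multiple consumers the seen-cycle set would live in a shared store such as Redis or DynamoDB rather than in memory:

```python
import json
import time
import boto3

# Illustrative names: a 5-minute cron cycle and a worker-tier queue.
CYCLE_SECONDS = 300
queue = boto3.resource("sqs").get_queue_by_name(QueueName="worker-queue")

# Producer side: every instance that fires computes the same cycle ID,
# so duplicate posts from scaled-out instances are recognizable.
cycle_id = str(int(time.time() // CYCLE_SECONDS))
queue.send_message(MessageBody=json.dumps({"task": "report", "cycle": cycle_id}))

# Consumer side: run the task only for the first message of each cycle.
seen_cycles = set()  # use a shared store instead if there are multiple consumers
for msg in queue.receive_messages(WaitTimeSeconds=20):
    body = json.loads(msg.body)
    if body["cycle"] not in seen_cycles:
        seen_cycles.add(body["cycle"])
        print("running periodic task for cycle", body["cycle"])  # your task goes here
    msg.delete()  # duplicates are acknowledged but ignored
```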

Related

AWS ECS Fargate auto-scaling - how does scale-in select which tasks to terminate?

I am running a Java process inside ECS Fargate containers and have set up auto scaling to scale out when memory utilization is above 60% and scale in accordingly. This setup is working fine, but I am not able to figure out the criteria ECS uses to decide which tasks it should shut down as part of scale-in events, i.e. how does it distinguish between different tasks and pick one to shut down?
Does it check whether there are any active requests on the tasks, and if there are multiple such tasks, does it pick one randomly?
There is a years-long open issue about this on GitHub:
Control which containers are terminated on scale in
From the issue and its comments you can infer the following:
Does it check whether there are any active requests on the tasks?
No.
If there are multiple such tasks, does it pick one randomly?
It's random.
There is actually an update on this: you can now mark your running tasks as protected so they are not terminated during scale-in. Check this page for more details:
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-service-auto-scaling/
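As a rough sketch (cluster name and task ARN below are placeholders), the protection flag can be set through the ECS API, for example with boto3:

```python
import boto3

ecs = boto3.client("ecs")

# Placeholder cluster and task ARN; protect this task from scale-in for an hour.
ecs.update_task_protection(
    cluster="my-cluster",
    tasks=["arn:aws:ecs:us-east-1:123456789012:task/my-cluster/0123456789abcdef"],
    protectionEnabled=True,
    expiresInMinutes=60,
)
```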

Make Azure Webjob Timer Trigger Not Run as a Singleton

I have a set of worker functions that are spun up as needed to pull from Service Bus topic subscriptions when those subscriptions are created. New workers are created when a new subscription is created: a provisioning message is queued, triggering a job that spins up the new worker to listen to the subscription. The problem is that now I want to be able to scale out the workers listening to the subscriptions when the app is scaled out. Since the provisioning job only creates the worker on a single instance, the effectiveness of scaling out is significantly reduced.
My thought was to create a second provisioning job that runs from a timer trigger to synchronize the running jobs with the current list of subscriptions. I run into the same problem with the timer job as with the Service Bus trigger, though, because it runs as a singleton in the WebJob and will likely run on the same instance each time, meaning I will still likely have only one or maybe two instances of a job per subscription no matter how much I scale out.
My question is, is it possible to create a timer job that is not run as a singleton? Meaning, can I configure a timer job that, for each instance of the scaled out web job, will run on a set interval?
The Singleton attribute ensures that only one instance runs, with the help of distributed locking; this is part of the WebJobs SDK.
There are also singleton listeners: with a few extra settings you can ensure that your function runs as a singleton on a single instance. To ensure that only a single instance of the function is running when the web app scales out to multiple instances, apply a listener-level singleton lock on the function ([Singleton(Mode = SingletonMode.Listener)]). Listener locks are acquired when the JobHost starts. If three scaled-out instances all start at the same time, only one of the instances acquires the lock and only one listener starts.
If you do not apply the singleton attribute, so that the trigger can run on each instance, refer to the Multiple Instances documentation on MS Docs.

Auto scaling service in AWS without duplicating cron jobs

I have a service (a Golang web server) running on AWS on an EC2 instance (no auto scaling). This service has a few cron jobs that run throughout the day, and these jobs start when the service starts.
I would like to take advantage of auto scaling in some form on AWS. I have been looking at ECS and Beanstalk.
When I add auto scaling I need the cron jobs to execute on only one of the scaled instances, due to rate limits on external APIs. Right now the cron jobs are tightly coupled to the service, and I am looking for an option that does not require moving them into their own service.
How can I achieve this in a good way using AWS?
You're going to get this problem as a general issue in any scalable application where crons cannot / should not run multiple times. It's not really AWS specific. I'm not sure to what extent you want to keep things coupled or how your crons are currently run but here are a few suggestions that might work for you:
Create a "cron runner" instance with a limit to run crons on
You could create a separate ECS service which has no autoscaling and a fixed value of 1 instance. This instance would run the same copy of your code as your "normal" instances and would run crons. You would turn crons off on your "normal" instances. You might find that this can be a very small instance since it doesn't handle any web traffic.
Create a "cron trigger" instance which fires off crons remotely
Here you create one "trigger" instance which sends a request to your normal instances through an ALB. Because the ALB routes the request to just one of the servers behind it, the cron only gets run once. One thing to watch out for is that if your cron is long-running, you may need to consider your request timeouts. You'll also have to think about retries etc., but I assume you already have a process that can be adapted for that.
The above solutions can be adapted with message queues etc but the basis of both is that there is another instance of some kind which starts the cron and is separate from your normal servers. Depending on when your cron runs, you may only need to run this cron instance for a few hours per day so it can be cost efficient to do things like this.
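A minimal sketch of the "cron trigger" idea, assuming a hypothetical endpoint behind your ALB; the trigger instance's crontab would run something like this on schedule:

```python
import requests

# Hypothetical internal endpoint; the ALB forwards the request to exactly one
# of the instances behind it, so the cron work is started only once.
ALB_URL = "https://internal-alb.example.com/internal/run-cron"

resp = requests.post(ALB_URL, timeout=300)  # generous timeout for a long-running cron
resp.raise_for_status()
print("cron triggered, status:", resp.status_code)
```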
Personally I have used both methods in a multi-tenant application, and I went with the following setup because of the number of tenants and the time/resources it took to run the crons for all of them at once:
A CloudWatch schedule triggers a Lambda which sends a message to SQS to queue a cron for each tenant individually.
Cron servers (totally separate from the main web servers but running the same/similar code) pull the messages and run the cron for each tenant individually. For crons that must only run once, a key is stored in Redis so that SQS's "at least once" delivery can't cause a cron to run twice.
This also helps handle failures, with retry policies and dead-letter queues managed in SQS.
Ultimately you need to kick off these crons from one place. If possible, change up your crons so it doesn't matter if they run twice. It makes it easier to deal with retries and things like that.
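A rough sketch of the consumer side of that setup, with an illustrative queue name and message format, using a Redis SET NX key as the once-only guard:

```python
import json
import boto3
import redis

sqs = boto3.resource("sqs")
queue = sqs.get_queue_by_name(QueueName="tenant-crons")  # illustrative name
r = redis.Redis(host="localhost", port=6379)

while True:
    for msg in queue.receive_messages(WaitTimeSeconds=20, MaxNumberOfMessages=10):
        body = json.loads(msg.body)  # e.g. {"tenant": "acme", "cycle": "2024-06-01T02:00"}
        guard_key = f"cron:{body['tenant']}:{body['cycle']}"
        # SET NX succeeds only once per tenant/cycle, so "at least once"
        # delivery cannot make the cron run twice.
        if r.set(guard_key, "1", nx=True, ex=3600):
            print("running cron for tenant", body["tenant"])  # the real work goes here
        msg.delete()
```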

Custom Worker Prioritization on Elastic Beanstalk

I have an Elastic Beanstalk setup where I want to do two things:
Have all workers prioritize certain jobs (premium > free)
Have some workers only do specific jobs (enterprise worker does only enterprise jobs)
The workers use the SQS daemon that fetches from the queue, and I'm not sure whether and how to modify it.
How would you achieve this using Elastic Beanstalk?
The main EB advantage is that it is an out-of-the-box system you set up in minutes. The disadvantage is that you have limited control over it.
What you described could be achieved in a worker environment. I think you could disable the worker daemon and handle all the message processing yourself in your app, according to your criteria.
You could also create multiple queues if you want, by using the Resources configuration options.
However, the further you deviate from EB's default behavior, the more management you will have to do yourself. At some point you may find it is simply easier to build your own environment for processing your messages outside of EB.
With SQS this is usually accomplished by having multiple queues. You could have one for Enterprise, one for Premium, and one for Free, and then have your workers check them in that order (and, depending on your application, perhaps have some workers that only check Enterprise/Premium/Free; this may depend on how long your jobs take and what your users' expectations are).
I do not know exactly how to set this up in Elastic Beanstalk, but hopefully this is enough to get you started.
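As an illustrative sketch (queue names are placeholders), a worker can poll the queues in priority order and only fall through to a lower tier when the higher-priority queues are empty:

```python
import boto3

sqs = boto3.resource("sqs")
# Placeholder queue names, one per tier, listed in priority order.
QUEUE_NAMES = ["jobs-enterprise", "jobs-premium", "jobs-free"]

def next_job():
    """Return the highest-priority message available, or None if all queues are empty."""
    for name in QUEUE_NAMES:
        queue = sqs.get_queue_by_name(QueueName=name)
        messages = queue.receive_messages(MaxNumberOfMessages=1, WaitTimeSeconds=1)
        if messages:
            return messages[0]
    return None

msg = next_job()
if msg:
    print("processing:", msg.body)  # the actual job handling goes here
    msg.delete()
```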

High availability periodic task (cron) on AWS

What is the recommended way/pattern for setting up a high-availability (multiple Availability Zones) periodic task (probably triggered using cron) on AWS?
I would like to have the software installed on multiple EC2 instances in multiple Availability Zones but only have the task run on a single instance at a time. It doesn't matter which instance.
Before moving to AWS, we used to use database locking in a MySQL instance - only the instance that successfully creates a lock would run.
But there must be a better way on AWS, particularly if there is no requirement for a database?
Thanks!
nick.
Since I first asked this question, it is now possible to use CloudWatch Events to schedule periodic events. Events can be:
A fixed rate of N minutes/hours/days
A cron expression
The targets can be:
An AWS Lambda function
Post to an SNS topic
Write to an SQS queue
Write to a Kinesis stream
Perform an action on an EC2/EBS instance
SQS could then be used to notify a single instance in a cluster of machines in multiple availability zones to perform an action.
There is more information here:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/ScheduledEvents.html
Note that the documentation does not include a resilience/availability statement about what happens if an Availability Zone goes down.
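As a hedged sketch of the scheduled-rule setup (rule name, schedule, and SQS queue ARN are placeholders, and the queue policy must also allow CloudWatch Events to send to it):

```python
import boto3

events = boto3.client("events")

# Placeholder rule name and schedule: fire every day at 02:00 UTC.
events.put_rule(
    Name="nightly-task",
    ScheduleExpression="cron(0 2 * * ? *)",
)

# Placeholder SQS queue ARN to receive the scheduled event.
events.put_targets(
    Rule="nightly-task",
    Targets=[{
        "Id": "nightly-task-queue",
        "Arn": "arn:aws:sqs:us-east-1:123456789012:nightly-task-queue",
    }],
)
```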
One solution that has been suggested to me is to use an Auto Scaling group with both the minimum and maximum number of instances set to 1. This means that if an Availability Zone goes offline, the ASG will cause a new instance in another zone to be launched.
This technique was briefly covered on the Architecting on AWS training course, but I don't know how reliable it is.
Amazon just released the solution to your problem: Beanstalk worker tier periodic tasks:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html#worker-periodictasks
It basically relies on a YAML file listing cron schedules that call the URL path you want:
To invoke periodic tasks, your application source bundle must include a cron.yaml file at the root level. The file must contain information about the periodic tasks you want to schedule. Specify this information using standard crontab syntax.
It really depends on your requirements.
You could post your tasks to an SQS queue and have your instances (possibly an Auto Scaling group spread across different zones) poll that queue. SQS's at-least-once (and generally only-once) semantics could be a problem here if it is critical for you that tasks get executed only once. If that's the case, you could easily use a DynamoDB table and conditional writes. Or, if you're after a more full-fledged fault-tolerant solution, you might give Airbnb's Chronos a try.
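A minimal sketch of the conditional-write approach, with an illustrative table name and task ID; only the first worker that manages to claim a given task ID gets to run it:

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical table with partition key "task_id"; names are illustrative.
table = boto3.resource("dynamodb").Table("executed_tasks")

def try_claim(task_id):
    """Conditional write: only the first worker to claim a task_id succeeds."""
    try:
        table.put_item(
            Item={"task_id": task_id},
            ConditionExpression="attribute_not_exists(task_id)",
        )
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # another worker already executed this task
        raise

if try_claim("2024-06-01T02:00-report"):
    print("executing the task exactly once")
```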