Run an AWS ECS task

I have an ECS Fargate task defined in AWS. I would like to run it occasionally, as needed.
Is there an easy way to do this?
I have Terraform code that defines it as a scheduled task that is disabled. Whenever I want to run it, my procedure is:
Modify the Terraform file to enable the task and set the scheduled execution time to five minutes from now.
Deploy the Terraform and wait for the task to run.
Undo the Terraform changes and redeploy.
This procedure works, but it is quite inconvenient. Surely there is a better way to run one-off tasks? I've tried going through the AWS web console, but it's even worse.

If you want to stick with using the scheduler to run the task, then something like your current process is the only way to achieve that. However, it sounds like you don't really want the task to run on a set schedule at all; instead, you only want to run it when needed.
The most direct way to trigger an ECS task to run is via the RunTask API, which you can call from the AWS CLI (which you could wrap in a shell script) or from one of the AWS SDKs.
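For example, here is a minimal sketch using boto3 (the Python SDK); the region, cluster, task definition, subnet, and security group values are placeholders you would replace with your own:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # placeholder region

# Start a one-off run of an existing Fargate task definition.
response = ecs.run_task(
    cluster="my-cluster",        # placeholder cluster name
    taskDefinition="my-task",    # family name or full ARN; the latest revision is used
    launchType="FARGATE",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],     # placeholder subnet
            "securityGroups": ["sg-0123456789abcdef0"],  # placeholder security group
            "assignPublicIp": "ENABLED",
        }
    },
)

print(response["tasks"][0]["taskArn"])
```

The AWS CLI equivalent is 'aws ecs run-task' with the same parameters, which is easy to wrap in a small shell script.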

You can try Lambda. On one project I wrote Python code with boto3 to run a lot of different tasks in AWS, and I'm pretty sure Lambda can solve your problem.
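As a rough illustration of that approach (not the exact code from that project), a Lambda handler could wrap the same RunTask call, so running the job becomes a single Lambda invocation; all names below are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    """Trigger a one-off run of a Fargate task; invoke this Lambda whenever needed."""
    response = ecs.run_task(
        cluster=event.get("cluster", "my-cluster"),              # placeholder default
        taskDefinition=event.get("task_definition", "my-task"),  # placeholder default
        launchType="FARGATE",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],     # placeholder
                "securityGroups": ["sg-0123456789abcdef0"],  # placeholder
                "assignPublicIp": "DISABLED",
            }
        },
    )
    return {"taskArn": response["tasks"][0]["taskArn"]}
```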

Related

Running Scripts or Commands in Spinnaker Pipelines

I'm trying to run scripts as part of some of my deployment pipelines in Spinnaker. I don't want to use Jenkins to run these scripts. I would use a Kubernetes job, but these scripts need to execute prior to the Kubernetes deployment.
I was debating creating ECS tasks in AWS which I'd like to run on demand during one of the stages in my pipeline. Does anyone know if it's possible to execute an ECS task directly from Spinnaker?
If not, are there any other ways to execute a command or script in a pipeline outside of using a Kubernetes job or Jenkins server?
One way to do this is to use the Run Job (Manifest) stage and point it at another Kubernetes cluster. This approach gives you a bit of flexibility, since you can monitor the pipeline stage for completion status.
You can also create an arbitrary API endpoint and trigger it via a webhook stage that monitors for completion, using whatever your preferred script execution environment is (e.g. Lambda, ECS, etc.) behind that API endpoint.
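As a hedged sketch of that second option, the endpoint behind the webhook stage could be a small Lambda function (behind API Gateway, for instance) that starts the ECS task and lets the stage poll for completion; the cluster, task definition, and network values are hypothetical:

```python
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    """Start an ECS task on a webhook call, or report its status when polled."""
    if event.get("action") == "status":
        # The webhook stage can keep polling this path until the task has stopped.
        task = ecs.describe_tasks(cluster="my-cluster",
                                  tasks=[event["taskArn"]])["tasks"][0]
        return {"status": task["lastStatus"]}  # e.g. PENDING, RUNNING, STOPPED

    started = ecs.run_task(
        cluster="my-cluster",                                    # placeholder cluster
        taskDefinition=event.get("taskDefinition", "my-task"),   # placeholder default
        launchType="FARGATE",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],     # placeholder
                "securityGroups": ["sg-0123456789abcdef0"],  # placeholder
                "assignPublicIp": "DISABLED",
            }
        },
    )
    return {"taskArn": started["tasks"][0]["taskArn"]}
```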

shell script for AWS run-tasks outside of normal schedule

I currently have some task-based automation for ECS that runs on a schedule; however, sometimes there is a need to run or re-run only certain kinds of tasks (for example SQL tasks or Datadog tasks).
I know this can be done via the console, but it's inefficient. I was thinking of a bash script that starts a task from the CLI. I know I can do this with the AWS CLI using '--task-definition', but it's not much better. I don't usually write scripts, so I'm basically here for help with brainstorming. I'm wondering if there is a way to make an API call to start tasks. Would I need to type in the ARN every time? Can I just list the tasks on the AWS CLI and have them exported to the script? Would the network config need to be hard-coded?
Thanks!
The AWS API calls to start a task are:
StartTask:
Starts a new task from the specified task definition on the specified container instance or instances.
RunTask:
Starts a new task using the specified task definition. You can allow Amazon ECS to place tasks for you, or you can customize how Amazon ECS places tasks using placement constraints and placement strategies.
Since these are AWS API calls, there are equivalent calls in the CLI and the SDKs.
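For example, here is a rough boto3 sketch that looks up the latest active task definition for a given family (so you don't have to type the ARN every time) and runs it; the cluster, network settings, and family names are placeholders and could just as well be read from a config file instead of hard-coded:

```python
import boto3

ecs = boto3.client("ecs")

def run_latest(family: str, cluster: str = "my-cluster") -> str:
    """Run the newest ACTIVE revision of a task definition family; return the task ARN."""
    # Newest revision first, so the first result is the latest ACTIVE definition.
    arns = ecs.list_task_definitions(
        familyPrefix=family, status="ACTIVE", sort="DESC", maxResults=1
    )["taskDefinitionArns"]
    if not arns:
        raise RuntimeError(f"No active task definition found for family {family!r}")

    response = ecs.run_task(
        cluster=cluster,
        taskDefinition=arns[0],
        launchType="FARGATE",
        count=1,
        networkConfiguration={  # placeholder network settings
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
                "assignPublicIp": "DISABLED",
            }
        },
    )
    return response["tasks"][0]["taskArn"]

# e.g. re-run only the SQL- and Datadog-style tasks on demand
for family in ("sql-task", "datadog-task"):  # hypothetical family names
    print(family, run_latest(family))
```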

Python pipeline on AWS Cloud

I have a few Python scripts which need to be executed in sequence on AWS, so what are the best and simplest options? These scripts are proof-of-concept, so they are a little dirty, but they need to run overnight. Most of the scripts finish within 10 minutes, but a couple of them can take up to 1 hour running on a single core.
We do not have any servers like Jenkins, Airflow, etc.; we are planning to use existing AWS services.
Please let me know. Thanks!
1) EC2 Instance (Manually controlled)
- Upload your scripts to an S3 bucket
- Use the default VPC
- Launch an EC2 instance
- Use an SSM remote session to log in
- Run the AWS CLI (aws s3 sync to download the scripts from S3)
- Run them manually
- Stop the instance when done
To be clean, make a .sh file (or a master .py file) to do the work. If you want it to stop charging you money afterwards, add a command to stop the instance when complete (a sketch of such a master script is shown after this list).
Least amount of work.
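A minimal sketch of such a master .py file, assuming the scripts have already been synced from S3 into the working directory and the instance's role allows ec2:StopInstances; the script names and region are placeholders, and if the instance enforces IMDSv2 you would need to request a metadata token first:

```python
import subprocess
import urllib.request

import boto3

# Run the POC scripts in order; check=True aborts the chain if one of them fails.
for script in ("step1.py", "step2.py", "step3.py"):  # placeholder script names
    subprocess.run(["python3", script], check=True)

# Look up this instance's ID from the instance metadata service, then stop the
# instance so it stops charging once the overnight run is complete.
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read().decode()

boto3.client("ec2", region_name="us-east-1").stop_instances(InstanceIds=[instance_id])
```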
2) If you want to run scripts daily
- Script out the work above (including modifying the Auto Scaling group at the end to go to one box)
- Create an EC2 Auto Scaling group and launch it on a cron schedule.
It will start up, do the work, and then shut down and stop charging you.
3) Lambda
Pretty much like option 2, but AWS will do most of the work for you.
Either put all your scripts into one Lambda, or put each script into its own Lambda and have a master Lambda that synchronously invokes each one in the order you want (see the sketch after this list).
Have a CloudWatch rule trigger it daily to do the work.
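Here is a rough sketch of that master Lambda, assuming each script has been packaged as its own Lambda function (and bearing in mind Lambda's execution time limit for the longer scripts); the function names are placeholders:

```python
import json

import boto3

lambda_client = boto3.client("lambda")

# Hypothetical function names, one per script, invoked in the required order.
PIPELINE = ["poc-step-1", "poc-step-2", "poc-step-3"]

def handler(event, context):
    """Invoke each step synchronously so the next starts only after the previous finishes."""
    for function_name in PIPELINE:
        response = lambda_client.invoke(
            FunctionName=function_name,
            InvocationType="RequestResponse",  # synchronous invoke
            Payload=json.dumps(event).encode(),
        )
        if response.get("FunctionError"):
            raise RuntimeError(f"{function_name} failed: {response['FunctionError']}")
    return {"status": "pipeline complete"}
```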
I would say that if you are in POC mode, option 1 is the best decision. It is likely closest to how you are currently executing these scripts. This is what #jarmod recommended already.
You didn't mention anything about which AWS resources your Python scripts need to access, or at least the purpose of the scripts, so it is difficult to provide a solution.
However, a good option is to use AWS Batch.
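For instance, here is a hedged sketch of submitting each script as a Batch job with a dependency chain so they run in sequence; the job queue and job definition names are placeholders:

```python
import boto3

batch = boto3.client("batch")

previous_job_id = None

# Hypothetical job definitions, one per script, chained so each waits for the previous one.
for name in ("poc-step-1", "poc-step-2", "poc-step-3"):
    kwargs = {
        "jobName": name,
        "jobQueue": "poc-queue",  # placeholder job queue
        "jobDefinition": name,    # placeholder job definition per script
    }
    if previous_job_id:
        kwargs["dependsOn"] = [{"jobId": previous_job_id}]
    previous_job_id = batch.submit_job(**kwargs)["jobId"]

print("last job in the chain:", previous_job_id)
```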

On AWS, run an AWS CLI command daily

I have an AWS CLI invocation (in this case, to launch a configured EMR cluster to do some steps and then shut down) but I'm not sure how to go about running it daily.
I guess one way to do it is an EC2 micro instance running a cron job, or an ECS task in a micro instance that launches the command, but that all seems like it might be overkill. It looks like there's also a way to do it in Lambda, but from what I can tell it'd be kludgy.
This doesn't have to be a good long-term solution; something that's suitable until I can do it right (Data Pipelines) would work just fine.
Suggestions?
If it is not a strict requirement to use the AWS CLI, you can use one of the AWS SDKs instead and do it programmatically from Lambda:
Schedule a CloudWatch Rule using cron.
When it fires, the CloudWatch Rule will trigger a Lambda function.
Implement a Lambda function that calls EMR using one of the supported SDKs (e.g. the EMR class in the AWS JavaScript SDK); a rough sketch follows below.
Make sure that you have the IAM configuration in place.
A full example is available in Schedule AWS Lambda Functions Using CloudWatch Events.
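Here is a rough sketch of that Lambda using boto3 rather than the JavaScript SDK mentioned above; the release label, instance settings, and step definition are all placeholders:

```python
import boto3

emr = boto3.client("emr")

def handler(event, context):
    """Launch a transient EMR cluster that runs its steps and then terminates itself."""
    response = emr.run_job_flow(
        Name="nightly-cluster",     # placeholder cluster name
        ReleaseLabel="emr-6.15.0",  # placeholder release
        Instances={
            "MasterInstanceType": "m5.xlarge",
            "SlaveInstanceType": "m5.xlarge",
            "InstanceCount": 3,
            "KeepJobFlowAliveWhenNoSteps": False,  # shut down once the steps finish
        },
        Steps=[
            {
                "Name": "example-step",  # placeholder step
                "ActionOnFailure": "TERMINATE_CLUSTER",
                "HadoopJarStep": {
                    "Jar": "command-runner.jar",
                    "Args": ["spark-submit", "s3://my-bucket/job.py"],  # placeholder
                },
            }
        ],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    return {"clusterId": response["JobFlowId"]}
```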
Kludgy? Yes, some configuration is needed; however, if you take into account the amount of work required to launch EC2 / ECS (and to make sure that it re-launches in the event of failure), I'd say it evens out.
I'm not sure about the whole task that you are doing, but to avoid:
- doing it manually
- setting up yet another set of resources in AWS (as you mentioned)
I would create a simple job in a Continuous Integration (CI) server like Jenkins, Bamboo, CircleCI, etc. (the list can go on). I would assume that you might already have a CI server running, so why not use it?

How do I kill a deployment in AWS OpsWorks?

How do I kill a long-running deployment in Amazon OpsWorks?
We run deployments to an integration environment every time we commit to our code repo. Our current deployments are taking a long time, which causes deployments to stack on top of each other in OpsWorks. We're working on making our deployment process for the application more efficient, but until we get that sorted out, is there an easy way to kill a deployment so we can just run the latest one in the queue?
Unfortunately there is no easy way.
There is no API call to cancel them.
So the only possible approach would be to check on the instance whether it is necessary to run the deployment or to skip it. You can achieve this with a custom cookbook.