How to provide tasks with different environment variables in ECS with Terraform

I have an ECS service, and within that service I want to boot up 3 tasks, all from the same task definition. I need each of these tasks to run on a separate EC2 instance. That part seems simple enough; however, I also want to pass a different command to each of the running tasks, to tell each one where its config can be found and to set some other options via the CLI within my running application.
For example, for task 1 I want to pass run-node CONFIG_PATH="/tmp/nodes/node_0", for task 2 run-node CONFIG_PATH="/tmp/nodes/node_1" --bootnode true, and for task 3 run-node CONFIG_PATH="/tmp/nodes/node_0" --http true.
I'm struggling to see how I can manage individual task instances like this within a single service using Terraform. It seems really easy to manage multiple instances that are all completely equal, but I can't find a way to pass custom overrides to each task when they all run off the same task definition.
I'm thinking this may be a job for a different DevOps automation tool, but I would love to keep doing it in Terraform if possible.

This is not a limitation of Terraform. This is how an ECS service works: it runs exact copies of the same task definition. Thus, you can't customize individual tasks in an ECS service, as all of these tasks are meant to be identical, interchangeable and disposable.
To provide overrides you have to run the tasks outside of a service, which you can do using run-task or start-task with the --overrides option of the AWS CLI, or the equivalent in any AWS SDK. Sadly there is no equivalent for that in Terraform, short of running the AWS CLI through a local-exec provisioner.
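If you do go the run-task route, a minimal boto3 sketch of the per-task command override looks like the following. The cluster name, task definition, container name, and commands below are illustrative placeholders, not values taken from the question:

```python
# Sketch: start otherwise-identical tasks with per-task command overrides,
# outside of any ECS service. All names/paths here are illustrative placeholders.
import boto3

ecs = boto3.client("ecs")

commands = [
    ["run-node", "CONFIG_PATH=/tmp/nodes/node_0"],
    ["run-node", "CONFIG_PATH=/tmp/nodes/node_1", "--bootnode", "true"],
    ["run-node", "CONFIG_PATH=/tmp/nodes/node_2", "--http", "true"],
]

for command in commands:
    ecs.run_task(
        cluster="my-cluster",            # placeholder cluster name
        taskDefinition="node-task",      # the one shared task definition
        launchType="EC2",
        count=1,
        overrides={
            "containerOverrides": [
                {
                    "name": "node",      # container name from the task definition
                    "command": command,  # the per-task command
                }
            ]
        },
        # Keeps tasks of the same group on different container instances.
        placementConstraints=[{"type": "distinctInstance"}],
    )
```

The trade-off is that, unlike a service, nothing reschedules one of these tasks if it stops; you would have to re-run it yourself.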

Related

Define unique Environment variables for all tasks in ECS Service

I have an ECS service that will be running many instances of one task. All these instances use the exact same task definition, but each needs to be provided a specific environment variable INDEX that is unique to that instance. I would like the service to monitor each instance and restart it with the same environment variable value if it fails (i.e. if a task with INDEX=555 fails, I would like the service to spin up a new task with INDEX=555 to replace it). Currently I only see the option to set environment variables in the task definition itself, which would require me to create many versions of the exact same task (and the corresponding service as well) with the only difference being the environment variable. This seems wasteful and would clutter the task definition list and service list in that cluster.
In short I want to configure my ECS service such that I can provide a list of values for a specific environment variable along with one task definition and have it create a 1:1 map of a task instance to environment variable. Is it possible to do so? If so, how can I accomplish this?
Important Note: This service is running on Fargate and not EC2 Instances
Sadly, you can't do this for an ECS service. ECS services are meant to run exact copies of the same task, which are disposable, exchangeable and scalable.
However, you can provide overrides for the variables for tasks outside of a service, which you can do using run-task or start-task with the --overrides option of the AWS CLI, or the equivalent in any AWS SDK.
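For the Fargate case that means run-task, since start-task targets specific container instances. A minimal boto3 sketch of that override follows; the cluster, task definition, container name, and network settings are placeholders:

```python
# Sketch: launch N copies of the same task definition on Fargate, each with a unique
# INDEX environment variable, via RunTask overrides. All names are placeholders.
import boto3

ecs = boto3.client("ecs")

for index in range(3):
    ecs.run_task(
        cluster="my-cluster",
        taskDefinition="my-task",
        launchType="FARGATE",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],     # placeholder subnet
                "securityGroups": ["sg-0123456789abcdef0"],  # placeholder security group
                "assignPublicIp": "DISABLED",
            }
        },
        overrides={
            "containerOverrides": [
                {
                    "name": "app",  # container name from the task definition
                    "environment": [{"name": "INDEX", "value": str(index)}],
                }
            ]
        },
    )
```

Because these tasks run outside of a service, nothing restarts a failed task with its INDEX for you; you would have to detect the stopped task yourself (for example from ECS task state-change events) and re-run it with the same value.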

Running Scripts or Commands in Spinnaker Pipelines

I'm trying to run scripts as part of some of my deployment pipelines in Spinnaker. I don't want to use Jenkins to run these scripts. I would use a Kubernetes job, but these scripts need to execute prior to the Kubernetes deployment.
I was debating creating ECS tasks in AWS which I'd like to run on demand during one of the stages in my pipeline. Does anyone know if it's possible to execute an ECS task directly from Spinnaker?
If not, are there any other ways to execute a command or script in a pipeline outside of using a Kubernetes job or Jenkins server?
One way to do this is to use the Run Job (Manifest) stage and just point it at another Kubernetes cluster. This approach gives you a bit of flexibility, since you can monitor the pipeline stage for completion status.
You can also create an arbitrary API endpoint, trigger it via a webhook stage that monitors for completion, and use whatever your preferred script execution environment is (e.g. Lambda, ECS, etc.) behind that API endpoint.
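As a rough illustration of that second approach, here is a hypothetical Lambda handler that could sit behind such an endpoint: one call starts an ECS task and returns its ARN, and the webhook stage can then poll the same endpoint with that ARN to check for completion. Every name and field below is a placeholder, not anything Spinnaker prescribes:

```python
# Hypothetical Lambda handler behind an API endpoint: starts an ECS task on request,
# or reports the task's status when called back with its ARN. Placeholder names throughout.
import json
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    body = json.loads(event.get("body") or "{}")

    if "taskArn" in body:
        # Polling path: report the current status of a previously started task.
        task = ecs.describe_tasks(cluster="my-cluster", tasks=[body["taskArn"]])["tasks"][0]
        return {"statusCode": 200, "body": json.dumps({"lastStatus": task["lastStatus"]})}

    # Trigger path: start the one-off task and hand back its ARN for later polling.
    started = ecs.run_task(
        cluster="my-cluster",             # placeholder cluster
        taskDefinition="pre-deploy-job",  # placeholder task definition
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {"subnets": ["subnet-0123456789abcdef0"]}
        },
    )
    return {"statusCode": 200, "body": json.dumps({"taskArn": started["tasks"][0]["taskArn"]})}
```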

shell script for AWS run-tasks outside of normal schedule

I currently have some task-based automation for ECS that runs on a schedule, however sometimes there is a need to run or re-run only certain kinds of tasks (for example SQL tasks or Datadog tasks).
I know this can be done via the console, but it's inefficient. I was thinking of a bash script that starts a task from the CLI. I know I can do this with the AWS CLI using '--task-definition', but that's not much better. I don't usually write scripts, so I'm basically here for help with brainstorming. I'm wondering if there is a way to make an API call to start tasks. Would I need to type in the ARN every time? Can I just list the tasks with the AWS CLI and have them exported to the script? Would the network config need to be hard-coded?
Thanks!
The AWS API calls to start a task are:
StartTask:
Starts a new task from the specified task definition on the specified container instance or instances.
RunTask:
Starts a new task using the specified task definition. You can allow Amazon ECS to place tasks for you, or you can customize how Amazon ECS places tasks using placement constraints and placement strategies.
Since these are AWS API calls, there are equivalent commands in the AWS CLI and calls in every AWS SDK.
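To avoid typing ARNs, you can resolve the newest task definition for a family and pass that to RunTask. A small boto3 sketch follows; the cluster, family names, and subnet are placeholders, and the network block is only needed for awsvpc/Fargate tasks:

```python
# Sketch: look up the latest task definition for a family (e.g. the "sql" or "datadog"
# jobs) and start it with RunTask, so no ARN has to be typed in. Placeholder names.
import boto3

ecs = boto3.client("ecs")

def run_latest(family_prefix, cluster="my-cluster"):
    # Newest revision first.
    arns = ecs.list_task_definitions(
        familyPrefix=family_prefix, sort="DESC", maxResults=1
    )["taskDefinitionArns"]
    if not arns:
        raise ValueError(f"No task definitions found for {family_prefix!r}")

    return ecs.run_task(
        cluster=cluster,
        taskDefinition=arns[0],
        launchType="FARGATE",
        networkConfiguration={  # required for awsvpc tasks; hard-code or pass it in
            "awsvpcConfiguration": {"subnets": ["subnet-0123456789abcdef0"]}
        },
    )

run_latest("sql-task")      # placeholder family name
run_latest("datadog-task")  # placeholder family name
```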

Ecs run vs ecs deploy

For example, for a migrate task we do ecs run, and for deploying any long-running service we do ecs deploy. Why so?
What is the basic fundamental difference between these two? ecs run doesn't give back the status of the task it ran (it always returns a non-zero status code when running the service), so we have to poll to get the status of the deployment. So why can't we use ecs deploy instead of ecs run, since ecs deploy also returns the status of the deployment?
What is the basic fundamental difference between these two?
aws ecs run-task starts a single task, while aws ecs deploy deploys a new task definition to a service.
Thus the difference is that a single service can run many long-running tasks. Since you are running many tasks in a service, you need a deployment strategy (e.g. rolling or blue/green) for how you roll out new versions of your task definitions.
So the choice of which to use depends on your specific use case. For ad-hoc, short-running jobs, a single task can be sufficient. For hosting business-critical containers, a service is the right choice.
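To make the contrast concrete, here is a minimal boto3 sketch of both patterns, with placeholder names; it also shows how the polling for status can be handled with the built-in waiters rather than a hand-rolled loop:

```python
# Sketch of both patterns with placeholder names: a one-off task ("ecs run" style)
# versus rolling a service to a new task definition revision ("ecs deploy" style).
import boto3

ecs = boto3.client("ecs")

# One-off task, e.g. a migration: run it, wait for it to stop, read the exit code.
run = ecs.run_task(
    cluster="my-cluster",
    taskDefinition="migrate-task",
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {"subnets": ["subnet-0123456789abcdef0"]}},
)
task_arn = run["tasks"][0]["taskArn"]
ecs.get_waiter("tasks_stopped").wait(cluster="my-cluster", tasks=[task_arn])
task = ecs.describe_tasks(cluster="my-cluster", tasks=[task_arn])["tasks"][0]
exit_code = task["containers"][0].get("exitCode")

# Service deployment: point the service at a new revision and wait until it is stable.
ecs.update_service(cluster="my-cluster", service="web", taskDefinition="web-task:42")
ecs.get_waiter("services_stable").wait(cluster="my-cluster", services=["web"])
```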

How to run a service on AWS ECS with container overrides?

On AWS ECS you can run a task, or a service.
If you run a task with run_task(**kwargs), you have the option to override some task options, for example the container environment variables; this way you can configure what runs inside the container. That's great.
Now, I can't find a way to do the same with create_service(**kwargs). You can only specify a task definition, so the created containers run with the configuration exactly as specified in the task definition. There's no way to configure them.
Is there a way to modify the task in a service, or is this not possible with the AWS ECS service?
This is not possible. If you think about how services work, they create X replicas of the task. All instances of the task have the same parameters, because the purpose is scaling out the task: they should all do the same job. Often the traffic is load-balanced (as part of the service configuration), so it is undesirable for a user to get a different response on the next request than on the previous one just because they ended up on a task that is configured differently. So the bottom line is: that's by design.
Because parameters are shared, if you need to change a parameter you create a new revision of the task definition and then launch that as a service (or update an existing service).
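A minimal boto3 sketch of that flow, with placeholder names and an illustrative parameter change: read the current task definition, register a tweaked copy as a new revision, and point the service at it:

```python
# Sketch: register a new task definition revision with a changed parameter and roll
# the service onto it. Names and the environment change are illustrative placeholders.
import boto3

ecs = boto3.client("ecs")

# Read the current definition and tweak a parameter (here: an environment variable).
current = ecs.describe_task_definition(taskDefinition="my-task")["taskDefinition"]
current["containerDefinitions"][0]["environment"] = [{"name": "MODE", "value": "shard-1"}]

# Copy over only the fields register_task_definition accepts as input.
fields = ["family", "containerDefinitions", "taskRoleArn", "executionRoleArn",
          "networkMode", "requiresCompatibilities", "cpu", "memory", "volumes"]
new_arn = ecs.register_task_definition(
    **{k: current[k] for k in fields if k in current}
)["taskDefinition"]["taskDefinitionArn"]

# Point the service at the new revision; ECS replaces the running tasks for you.
ecs.update_service(cluster="my-cluster", service="my-service", taskDefinition=new_arn)
```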
If you want the tasks to be aware of other tasks (and thus behave differently), for example to write data to different shards of a sharded store, you have to implement that in the task's logic.