Fargate scheduled task FailedInvocation error - amazon-web-services

I have a Fargate task that I want to run as a scheduled task every n minutes. I have a task definition that works perfectly as expected (with CloudWatch logs and VPC connections working properly) when I run it as a task or a service. However, when I try to run it as a scheduled task, it does not start. I checked CloudWatch, but there are no log entries in the log group. If I look up the metrics page, I see a FailedInvocations entry under the metric name.
I understand that it is a bit tricky to schedule a task in Fargate, as we have to go to CloudWatch rules and update the scheduled task there in order to add subnets and define a security group, since this option is not available when creating the scheduled task through my ECS cluster page.
I have also studied the documentation page here, and checked this question. But I still cannot understand why it does not work. Thank you in advance.

This seems like an issue with the AWS web console for scheduled tasks, as it doesn't let me set assignPublicIp to ENABLED.
Without this, the Fargate task cannot pull images from the ECR registry. However, when I started this task using boto3 from a Lambda function that gets called through a CloudWatch rule, it works fine.
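A minimal sketch of such a Lambda handler; the cluster, task definition, subnet, and security group names below are placeholders you would replace with your own:

```python
# Hypothetical names -- replace with your own cluster, task definition,
# subnet, and security group values.
CLUSTER = "my-cluster"
TASK_DEF = "my-task-def"
SUBNETS = ["subnet-0123456789abcdef0"]
SECURITY_GROUPS = ["sg-0123456789abcdef0"]


def build_run_task_params():
    """Build RunTask parameters with assignPublicIp explicitly ENABLED,
    so the Fargate task can reach ECR over the public internet."""
    return {
        "cluster": CLUSTER,
        "taskDefinition": TASK_DEF,
        "launchType": "FARGATE",
        "count": 1,
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": SUBNETS,
                "securityGroups": SECURITY_GROUPS,
                "assignPublicIp": "ENABLED",
            }
        },
    }


def lambda_handler(event, context):
    import boto3  # boto3 is available by default in the Lambda runtime
    ecs = boto3.client("ecs")
    return ecs.run_task(**build_run_task_params())
```

Point a CloudWatch/EventBridge schedule rule at this Lambda instead of directly at the task, and the public-IP limitation of the console no longer applies.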

Related

Amazon ECS: how to schedule a container?

I have a very simple ECS cluster using Fargate. I'd like to schedule a container to be run using a cron expression.
I created the task definition and a rule pointing to it using the EventBridge console, but I see nothing getting launched on the cluster. No logs, not even a trace of anything starting apart from the "monitor" tab of the rule which says it was triggered (but then again, I don't see any logs).
I'm guessing this might have to do with the public IP somehow needed for the rule to pull the container using Fargate? In the creation there is a setting called auto-assign public IP address but it only shows the DISABLED option.
Has anyone had the same problem? Should I just schedule a normal service with sleep times of 24 hours between executions and risk a higher cost? Cheers
Since you mention that you have no issues running the task manually in the cluster, it's likely that the problem with EventBridge is that the role associated with the rule does not have enough permissions to run the task.
You can confirm this by checking CloudTrail logs. You'll find a RunTask event with a failure similar to the following:
User: arn:aws:sts::xxxx:assumed-role/Amazon_EventBridge_Invoke_ECS/xxx is not authorized to perform: ecs:RunTask on resource: arn:aws:ecs:us-east-1:xxxx:task-definition/ECS_task
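If that is the cause, the rule's role needs permission to run the task definition and to pass the task's IAM roles. A sketch of the missing policy, using the placeholder account ID and task definition name from the error above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ecs:RunTask",
      "Resource": "arn:aws:ecs:us-east-1:123456789012:task-definition/ECS_task:*"
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "*",
      "Condition": {
        "StringLike": { "iam:PassedToService": "ecs-tasks.amazonaws.com" }
      }
    }
  ]
}
```

The iam:PassRole statement is needed so the rule can hand the task's execution and task roles to ECS when it launches the task.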

AWS ECS: Is it possible to make a scheduled ecs task accessible via ALB?

My current ECS infrastructure works as follows: ALB -> ECS Fargate -> ECS service -> ECS task.
Now I would like to replace the normal ECS task with a scheduled ECS task. But I cannot find a way to connect the scheduled ECS task to the service and thus make it accessible via the ALB. Isn't that possible?
Thanks in advance for answers.
A scheduled task is really more for something that runs to complete a given task and then exits.
If you want to connect your ECS task to a load balancer you should run it as part of a Service. ECS will handle connecting the task to the load balancer for you when it runs as a Service.
You mentioned in comments that your end goal is to run a dev environment for a specific time each day. You can do this with an ECS service and scheduled auto-scaling. This feature isn't available through the AWS Web console for some reason, but you can configure it via the AWS CLI or one of the AWS SDKs. You would configure it to scale to 0 during the time you don't want your app running, and scale up to 1 or more during the time you do want it running.
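A sketch of that scheduled-scaling setup with boto3 (the cluster and service names, and the 08:00/18:00 UTC schedule, are assumptions to adapt):

```python
# Hypothetical cluster/service name -- adjust to your environment.
RESOURCE_ID = "service/my-cluster/my-dev-service"


def build_scheduled_actions():
    """Scale the dev service up at 08:00 UTC and back to zero at 18:00 UTC."""
    common = {
        "ServiceNamespace": "ecs",
        "ResourceId": RESOURCE_ID,
        "ScalableDimension": "ecs:service:DesiredCount",
    }
    return [
        {**common, "ScheduledActionName": "dev-up",
         "Schedule": "cron(0 8 * * ? *)",
         "ScalableTargetAction": {"MinCapacity": 1, "MaxCapacity": 1}},
        {**common, "ScheduledActionName": "dev-down",
         "Schedule": "cron(0 18 * * ? *)",
         "ScalableTargetAction": {"MinCapacity": 0, "MaxCapacity": 0}},
    ]


def apply():
    import boto3  # assumed installed/configured with AWS credentials
    client = boto3.client("application-autoscaling")
    # Register the service's DesiredCount as a scalable target first.
    client.register_scalable_target(
        ServiceNamespace="ecs", ResourceId=RESOURCE_ID,
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=0, MaxCapacity=1)
    for action in build_scheduled_actions():
        client.put_scheduled_action(**action)
```

Because the tasks belong to a Service, ECS handles registering them with the ALB target group each time the service scales up.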
A scheduled ECS task is really a one-off task launched with the RunTask API, and it has no ties to an ALB (because it's not part of an ECS service). You could probably make this work, but you'd need to build the wiring yourself by finding out the details of the task and adding it to the target group. I believe what you need to do (if you want ECS to deal with the wiring) is to schedule a Lambda that increments the desired number of tasks in the service. I am also wondering what the use case is for this (as maybe there are other ways to achieve it). Scheduled tasks are usually batch jobs of some sort, not web services that need to get wired to a load balancer. What is the scenario / end goal you have?
UPDATE: I missed the non-UI support for scheduling the desired number of tasks so the Lambda isn't really needed.

How to run a simple .NET Core console app with AWS Fargate

Currently, I have one .NET Core console application which takes around 3-4 hours to complete. I need to move this piece of code to AWS Fargate. I have seen examples of .NET Core web or API applications hosted on AWS Fargate, but I am not sure how to deploy and host console applications on AWS Fargate.
Any help is appreciated. Thanks in advance.
I have one .net core console application which takes around 3-4 hours
for completion
It sounds like you need a schedule-based task rather than a Fargate service, since the task does not need to keep running again and again once it completes. For the schedule rule you have two options:
For Schedule rule type, choose whether to use a fixed interval schedule or a cron expression for your schedule rule. For more information, see Schedule Expressions for Rules in the Amazon CloudWatch Events User Guide.
For Run at fixed interval, enter the interval and unit for your schedule.
For Cron expression, enter the cron expression for your task schedule. These expressions have six required fields, separated by white space. For more information, and examples of cron expressions, see Cron Expressions in the Amazon CloudWatch Events User Guide.
scheduled_tasks
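To illustrate the two schedule forms, here is a small sketch using boto3 (the rule name is a placeholder; note the six-field CloudWatch Events cron syntax, which includes a year field and uses `?` for day-of-week):

```python
def build_schedule_expressions():
    """Two ways to express 'every 5 minutes' for a CloudWatch Events rule."""
    return {
        "rate": "rate(5 minutes)",
        # Fields: minutes hours day-of-month month day-of-week year
        "cron": "cron(0/5 * * * ? *)",
    }


def create_rule(name, expression):
    import boto3  # assumed installed/configured with AWS credentials
    events = boto3.client("events")
    events.put_rule(Name=name, ScheduleExpression=expression, State="ENABLED")


# Example: create_rule("run-console-app", build_schedule_expressions()["rate"])
```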
As for your second question,
AWS Fargate but not sure how to deploy and host console applications on AWS Fargate
Fargate has nothing to do with what the application is: just build the Docker image and push it to ECR, and Fargate will take care of running it.
The most important thing is logging: you will not be able to see the logs of your container directly, so you need to push container logs to CloudWatch.
using_awslogs with fargate
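Concretely, the awslogs setup is a fragment like this inside the container definition of your task definition (the log group name and region below are placeholders):

```json
"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-group": "/ecs/my-console-app",
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "ecs"
  }
}
```

With this in place, everything the console app writes to stdout/stderr shows up in the named CloudWatch log group.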
You can check further below links
aws-fargate-features-docker
hosting-asp-net-core-applications-in-amazon-ecs-using-aws-fargate
deployment-ecs-aspnetcore-fargate

How to configure AWS Fargate task to run a container that will create a cloudwatch custom metric

I need to set up monitoring in an AWS account to ping certain servers from outside the account and create a custom CloudWatch metric with the packet loss, and I need to deploy the solution without any EC2 instance.
My first choice was Lambda, but it seems that Lambda does not allow pinging from it.
My second choice was a container, as Fargate has the ability to execute containers without any EC2 instance. The thing is, I'm able to run the task definition and I see the task in RUNNING state in the cluster, but the CloudWatch metric is never received.
If I use a normal EC2 cluster, the container works perfectly, so I assume I have some error in the configuration, but I'm lost as to why. I have added admin rights to the ECS Task Execution Role and opened all ports in the security group.
I have tried public/private subnets with no success.
Anyone could please help me?
Here you can see that the task is certainly RUNNING; however, the app doesn't generate any further action.
So I solved the problem. It was inside the container: it seems Fargate doesn't like cron, so I removed the cron schedule from the container and used a CloudWatch Events rule instead, and it works perfectly.
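For anyone making the same switch, the key is to make the container run the ping once and exit, and let the rule launch it on schedule. A sketch of the rule target with boto3 (the rule, cluster, role, subnet, and security group identifiers are placeholders):

```python
def build_ecs_target(task_def_arn, subnets, security_groups):
    """Target for events.put_targets: launch the Fargate task on the rule's
    schedule instead of relying on cron inside the container."""
    return {
        "Id": "fargate-ping-task",  # hypothetical target id
        "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/my-cluster",
        "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",
        "EcsParameters": {
            "TaskDefinitionArn": task_def_arn,
            "TaskCount": 1,
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": subnets,
                    "SecurityGroups": security_groups,
                    "AssignPublicIp": "ENABLED",
                }
            },
        },
    }


def attach_to_rule(rule_name, target):
    import boto3  # assumed installed/configured with AWS credentials
    events = boto3.client("events")
    events.put_targets(Rule=rule_name, Targets=[target])
```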

Best practices for tagging an ECS task?

We can add tags to EC2 instances to help us better track billing usages and to manage instances.
Is there a way to achieve this when deploying containers in ECS? I would like the running container to be able to know what tags it currently has attached.
It really depends on what you're ultimately trying to visualize after the fact. I'll share a few off-the-cuff thoughts below, and maybe you can extrapolate on these to build something that satisfies your needs.
As you probably are aware, ECS Tasks themselves don't support the notion of tags, however there are some workarounds that you could consider. For example, depending on how you're logging your application's behavior (eg. batching logs to CloudWatch Logs), you could create a Log Stream name, for each ECS Task, that contains a delimited array of tags.
As part of a POC I was building recently, I used the auto-generated computer name to dynamically create CloudWatch Log Stream names. You could easily append or prepend the tag data that you embed in your container images, and then query the tag information from the CloudWatch Log Streams later on.
Another option would be to simply log a metric to CloudWatch Metrics, based on the number of ECS Tasks running off of each unique Task Definition.
You could build a very simple Lambda function that queries your ECS Tasks, on each cluster, and writes the Task count, for each unique Task Definition, to CloudWatch Metrics on a per-minute basis. CloudWatch Event Rules allow you to trigger Lambda functions on a cron schedule, so you can customize the period to your liking.
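The Lambda described above might look roughly like this (the metric namespace is a hypothetical choice, and pagination of `list_tasks` is omitted for brevity):

```python
from collections import Counter


def count_tasks_by_definition(task_definition_arns):
    """Count running tasks per unique task definition (family:revision),
    given each task's taskDefinitionArn."""
    return Counter(arn.split("/")[-1] for arn in task_definition_arns)


def lambda_handler(event, context):
    import boto3  # available by default in the Lambda runtime
    ecs = boto3.client("ecs")
    cloudwatch = boto3.client("cloudwatch")
    for cluster in ecs.list_clusters()["clusterArns"]:
        task_arns = ecs.list_tasks(cluster=cluster)["taskArns"]
        if not task_arns:
            continue
        tasks = ecs.describe_tasks(cluster=cluster, tasks=task_arns)["tasks"]
        counts = count_tasks_by_definition(
            t["taskDefinitionArn"] for t in tasks)
        for task_def, count in counts.items():
            cloudwatch.put_metric_data(
                Namespace="Custom/ECS",  # hypothetical namespace
                MetricData=[{
                    "MetricName": "RunningTaskCount",
                    "Dimensions": [
                        {"Name": "TaskDefinition", "Value": task_def}],
                    "Value": count,
                }])
```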
You can use this metric data to help drive scaling decisions about the ECS Cluster, the Services and Tasks running on it, and the underlying EC2 compute instances that support the ECS Cluster.
Hope this helps.
Just found this while trying to work out the current situation. For future searchers: I believe tagging was added some time after this question, in late 2018.
I've not yet worked out if you can set this up in the Console or if it's a feature of the API only, but e.g. the Terraform AWS provider now lets you set service or task definition tags to 'bubble through' to tasks – including Fargate ones – via propagate_tags.
I've just enabled this and it works, but it forces a new ECS service to be created – I guess this is related to it not being obviously editable in the Console UI.
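For reference, the Terraform setting is a one-liner on the service resource; a sketch with placeholder names (the other attributes are illustrative):

```hcl
resource "aws_ecs_service" "app" {
  name            = "my-service"                       # placeholder
  cluster         = aws_ecs_cluster.main.id            # placeholder
  task_definition = aws_ecs_task_definition.app.arn    # placeholder
  launch_type     = "FARGATE"
  desired_count   = 1

  # Copy the service's tags down to each task it launches;
  # "TASK_DEFINITION" propagates the task definition's tags instead.
  propagate_tags = "SERVICE"
}
```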