I have a very simple ECS cluster using Fargate. I'd like to schedule a container to be run using a cron expression.
I created the task definition and a rule pointing to it using the EventBridge console, but I see nothing getting launched on the cluster. No logs, not even a trace of anything starting, apart from the "monitor" tab of the rule, which says it was triggered (but then again, I don't see any logs).
I'm guessing this might have to do with the public IP somehow needed to pull the container image when using Fargate? During rule creation there is a setting called "Auto-assign public IP address", but it only shows the DISABLED option.
Has anyone had the same problem? Should I just schedule a normal service with sleep times of 24 hours between executions and risk a higher cost? Cheers
Since you mention that you have no issues running the task manually in the cluster, it's likely that the problem with EventBridge is that the role associated with the rule does not have enough permissions to run the task.
You can confirm this by checking CloudTrail logs. You'll find a RunTask event with a failure similar to the following:
User: arn:aws:sts::xxxx:assumed-role/Amazon_EventBridge_Invoke_ECS/xxx is not authorized to perform: ecs:RunTask on resource: arn:aws:ecs:us-east-1:xxxx:task-definition/ECS_task
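If that is what you see, attaching an inline policy that allows ecs:RunTask on the task definition (and, if your task definition uses task/execution roles, iam:PassRole on those) to the rule's role should fix it. A minimal boto3 sketch; the account ID, role ARNs, and policy name below are placeholders, not values from your setup:

```python
import json
import boto3

iam = boto3.client("iam")

# Placeholder account ID and ARNs -- replace with the role attached to your
# EventBridge rule, your task definition, and your task execution role.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ecs:RunTask",
            # Omitting the revision (":*") lets the rule run any revision of the family.
            "Resource": "arn:aws:ecs:us-east-1:123456789012:task-definition/ECS_task:*",
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": ["arn:aws:iam::123456789012:role/ecsTaskExecutionRole"],
        },
    ],
}

iam.put_role_policy(
    RoleName="Amazon_EventBridge_Invoke_ECS",
    PolicyName="AllowRunScheduledTask",
    PolicyDocument=json.dumps(policy),
)
```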
I changed some environment variables in the task definition part and executed the change set.
The task definition was updated successfully, but the update of the service got stuck in CloudFormation.
On checking the events in the cluster I found the following:
It is adding a new task, but the old one is still running and consuming the port, so it is stuck. What can be done to resolve this? I can always delete the stack and run the CF script again, but I need to create a pipeline, so I want the stack update to work.
This UPDATE_IN_PROGRESS will last around 3 hours, until the DescribeService API call times out.
If you can't wait, you need to manually force the Amazon ECS service resource in AWS CloudFormation into a CREATE_COMPLETE state by setting the desired count of the service to zero in the Amazon ECS console to stop the running tasks. AWS CloudFormation then considers the update successful, because the number of running tasks equals the desired count of zero.
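The same step can be scripted instead of done in the console; a minimal boto3 sketch, assuming hypothetical cluster and service names:

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical names -- use the cluster and service from your stuck stack.
# Setting desiredCount to 0 stops the running tasks so CloudFormation can
# consider the service resource stabilized.
ecs.update_service(
    cluster="my-cluster",
    service="my-service",
    desiredCount=0,
)
```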
These articles explain the cause of the message and its fix in detail:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-ecs-service-stabilize/
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-service-stuck-update-status/?nc1=h_ls
I have a Fargate task that I want to run as a scheduled task every n minutes. I have a task definition that works perfectly as expected (with CloudWatch logs as expected and VPC connections working properly), that is, when I run it as a task or a service. However, when I try to run it as a scheduled task, it does not start. I checked the CloudWatch logs; however, there are no log entries in the log group. If I look at the metrics page, I see a FailedInvocations entry under the metric name.
I understand that it is a bit tricky to schedule a task in Fargate, as we have to go to CloudWatch rules and update the scheduled task there in order to add subnets and define a security group, since this option is not available when creating the scheduled task through my ECS cluster page.
I have also studied the documentation page here and checked this question. But I still cannot understand why it does not work. Thank you in advance.
This seems like an issue with the AWS web interface for scheduled tasks, as it doesn't let me set assignPublicIp to ENABLED.
Without this, the Fargate task cannot pull images from the ECR registry. However, when I start this task with boto3 from a Lambda function that gets called through CloudWatch rules, it works fine.
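A minimal sketch of such a Lambda handler; the cluster, task definition, subnet, and security group IDs below are placeholders, not my actual values:

```python
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    # Placeholder identifiers -- replace with your own cluster, task
    # definition, subnets, and security groups.
    ecs.run_task(
        cluster="my-cluster",
        taskDefinition="my-task",
        launchType="FARGATE",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
                # The setting the console would not let me change:
                "assignPublicIp": "ENABLED",
            }
        },
    )
```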
As you can see, I have a task definition at revision 4 and a task definition at revision 5. I want to permanently stop running revision 4 and only run revision 5:
So in other words, the task that is PROVISIONING - I only want that one. The task that is RUNNING - I don't want that one to run anymore. How can I achieve this?
I tried to replicate the scenario and it went well for me, so I think you need to dig further under the hood.
Your task is in the PROVISIONING state, which I believe is related to your environment and not to your task, service, or cluster.
From the AWS documentation:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-lifecycle.html
**PROVISIONING**
Amazon ECS has to perform additional steps before the task is launched. For example, for tasks that use the awsvpc network mode, the elastic network interface needs to be provisioned.
You might want to check the following things to start debugging (see the boto3 sketch after this list for a way to inspect the stuck task):
The CloudFormation template that ECS uses to provision your resources.
Look into your VPC to see whether anything has changed since the last deployment.
Security groups and IAM roles, to find out whether anything is blocking your resource creation.
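As a starting point, you can pull the stuck task's details with boto3 and check whether the ENI attachment is what is hanging; a sketch assuming placeholder cluster and task identifiers:

```python
import boto3

ecs = boto3.client("ecs")

# Placeholder identifiers -- replace with your cluster name and the ARN/ID
# of the task that is stuck in PROVISIONING.
resp = ecs.describe_tasks(
    cluster="my-cluster",
    tasks=["arn:aws:ecs:us-east-1:123456789012:task/my-cluster/0123456789abcdef0"],
)

for task in resp["tasks"]:
    print(task["lastStatus"], task.get("stoppedReason", ""))
    # For awsvpc tasks, the ENI attachment status often shows why
    # provisioning is not progressing.
    for attachment in task.get("attachments", []):
        print(attachment["type"], attachment["status"])
```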
I need to set up monitoring in an AWS account to ping certain servers from outside the account, create a custom CloudWatch metric with the packet loss, and deploy the solution without any EC2 instance.
My first choice was Lambda, but it seems that Lambda does not allow pinging from it.
My second choice was a container, as Fargate has the ability to run containers without any EC2 instance. The thing is, I'm able to run the task definition and I see the task in the RUNNING state in the cluster, but the CloudWatch metric is never received.
If I use a normal EC2 cluster, the container works perfectly, so I assume I have some error in the configuration, but I'm lost as to why. I have added admin rights to the ECS task execution role and opened all ports in the security group.
I have tried public/private subnets with no success.
Could anyone please help me?
Here you can see that the task is certainly RUNNING; however, the app doesn't generate any further action.
So I solved the problem. There was a problem inside the container. It seems Fargate doesn't like cron, so I removed my cron schedule from the container and used a CloudWatch event rule instead, and it works perfectly.
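For reference, with cron gone the container just does one round of pings per invocation and publishes the result. Roughly a sketch like this; the target, namespace, and metric names are placeholders, not my actual setup:

```python
import subprocess
import boto3

cloudwatch = boto3.client("cloudwatch")

# Placeholder target and names -- adjust to your own setup.
TARGET = "10.0.0.10"
NAMESPACE = "Monitoring"

def packet_loss(host: str, count: int = 5) -> float:
    """Ping the host and return the packet loss percentage."""
    result = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True,
        text=True,
    )
    received = result.stdout.count("bytes from")
    return 100.0 * (count - received) / count

# Publish one data point per task invocation (the CloudWatch event rule
# decides how often the task runs).
cloudwatch.put_metric_data(
    Namespace=NAMESPACE,
    MetricData=[
        {
            "MetricName": "PacketLoss",
            "Dimensions": [{"Name": "Target", "Value": TARGET}],
            "Value": packet_loss(TARGET),
            "Unit": "Percent",
        }
    ],
)
```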
I am trying to configure my Docker Hub image with AWS ECS. I have created a repository, cluster, and task; while running the task I am getting the error "Essential container in task exited" (1). While trying to get the exact error details, I found that some of my variables are shown as not configured.
Please find the attached screenshot of the errors.
cluster details
error detail
You should set up the "Log Configuration" by specifying a log configuration in your task definition. I would recommend the awslogs configuration type, as this lets you see the logs from your container right inside the console.
Once you do that, you will get a new tab on the task details screen called "Logs", and you can click that to see the output from your container as it was starting up. You will probably see some kind of error or crash, as the "Essential container exited" error means that the container was expected to stay up and running, but it just exited.
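In the task definition this boils down to a logConfiguration block on the container. A sketch of how it could look when registered via boto3; the family, image, region, and log group names are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# Placeholder family, image, and log group names -- adjust to your own setup.
# The awslogs-group must already exist (or be created via the console's
# "Auto-configure CloudWatch Logs" option described in another answer).
ecs.register_task_definition(
    family="my-task",
    containerDefinitions=[
        {
            "name": "app",
            "image": "myuser/my-image:latest",  # Docker Hub image
            "memory": 512,
            "essential": True,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/my-task",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "ecs",
                },
            },
        }
    ],
)
```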
I had to expand the corresponding container details in the stopped task and check the "Details" --> "Status reason", which revealed the following issue:
OutOfMemoryError: Container killed due to memory usage
Exit Code 137
After increasing the available container memory, it worked fine.
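If you hit the same thing, the limit to raise is the container's hard memory limit (or the task-level memory when using Fargate). A hypothetical container-definition fragment, with example values only:

```python
# Hypothetical container-definition fragment; the values are examples.
container_definition = {
    "name": "app",
    "image": "myuser/my-image:latest",
    "essential": True,
    "memoryReservation": 512,  # soft limit in MiB
    "memory": 1024,            # hard limit in MiB; exceeding it kills the container (exit code 137)
}
```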
I had a similar issue. You can set up CloudWatch logging; there you can get the full error log, which will help you debug and fix the issue. Below is the relevant part taken from the official AWS documentation.
Using the auto-configuration feature to create a log group
When registering a task definition in the Amazon ECS console, you have the option to allow Amazon ECS to auto-configure your CloudWatch logs. This option creates a log group on your behalf using the task definition family name with ecs as the prefix.
To use the log group auto-configuration option in the Amazon ECS console
Open the Amazon ECS console at https://console.aws.amazon.com/ecs/.
In the left navigation pane, choose Task Definitions, then Create new Task Definition; alternatively, you can create a revision of the existing task definition.
Select your compatibility option and choose Next Step.
Choose Add container.
In the Storage and Logging section, for Log configuration, choose Auto-configure CloudWatch Logs.
Enter your awslogs log driver options. For more information, see Specifying a log configuration in your task definition.
Complete the rest of the task definition wizard.
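Once the logs are flowing, you can also read them programmatically instead of through the console. A sketch that fetches the latest stream from the auto-created /ecs/<family> group; the family name is a placeholder:

```python
import boto3

logs = boto3.client("logs")

# Auto-configuration creates a group named with the "ecs" prefix plus the
# task definition family; "my-task" here is a placeholder.
group = "/ecs/my-task"

streams = logs.describe_log_streams(
    logGroupName=group,
    orderBy="LastEventTime",
    descending=True,
    limit=1,
)["logStreams"]

if streams:
    events = logs.get_log_events(
        logGroupName=group,
        logStreamName=streams[0]["logStreamName"],
        startFromHead=True,
    )["events"]
    for event in events:
        print(event["message"])
```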
I have been stuck on the same error.
The fix ended up being under "Task tagging configuration": DISABLE "Enable ECS managed tags".
When this parameter is enabled, Amazon ECS automatically tags your tasks with two tags corresponding to the cluster and service names. These tags allow you to identify tasks easily in your AWS Cost and Usage Report.
Billing permissions are separate and are not assigned by default when you create a new ECS cluster and task definition with the default settings. This is why ECS was failing with "STOPPED: Essential container in task exited".
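If you launch the task through the API rather than the console, the equivalent switch is the enableECSManagedTags flag; a sketch with placeholder cluster and task definition names:

```python
import boto3

ecs = boto3.client("ecs")

# Placeholder names; add launchType/networkConfiguration as appropriate for
# your cluster. enableECSManagedTags=False is the API equivalent of
# disabling "Enable ECS managed tags" in the console.
ecs.run_task(
    cluster="my-cluster",
    taskDefinition="my-task",
    enableECSManagedTags=False,
)
```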