Essential container in task exited - amazon-web-services

I am trying to configure my Docker Hub image with AWS ECS. I have created a repository, a cluster, and a task, but when running the task I get the error "Essential container in task exited" (exit code 1). While trying to get the exact error details, I found that some of my variables are shown as not configured.
Screenshots of the cluster details and the error details are attached.

You should set up "Log Configuration" by specifying a log configuration in your task definition. I would recommend the awslogs log driver, as it lets you see the logs from your container right inside the console.
Once you do that you will get a new tab on the task details screen called "Logs", and you can click it to see the output from your container as it was starting up. You will probably see some kind of error or crash, because the "Essential container in task exited" error means the container was expected to stay up and running but exited instead.
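For reference, something like the rough boto3 sketch below is what the awslogs configuration looks like if you register the task definition programmatically; the family name, image, role ARN, log group, and region are placeholders, not your actual values.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="my-app",                                   # placeholder family name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder role
    containerDefinitions=[
        {
            "name": "my-app",
            "image": "docker.io/myorg/my-app:latest",  # placeholder Docker Hub image
            "essential": True,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/my-app",    # log group must exist (or be auto-created)
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "ecs",
                },
            },
        }
    ],
)
```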

I had to expand the corresponding container details in the stopped task and check the "Details" --> "Status reason", which revealed the following issue:
OutOfMemoryError: Container killed due to memory usage
Exit Code 137
After increasing the available container memory, it worked fine.
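For context, the memory limits live on the container definition (and, for Fargate, on the task as a whole). A rough sketch with placeholder numbers, not a recommendation:

```python
# Exit code 137 means the container was killed for exceeding its hard memory limit.
container_definition = {
    "name": "my-app",                          # placeholder container name
    "image": "docker.io/myorg/my-app:latest",  # placeholder image
    "essential": True,
    "memory": 1024,             # hard limit in MiB -- the OOM kill happens at this limit
    "memoryReservation": 512,   # soft limit in MiB that the scheduler reserves
}
# On Fargate you would instead raise the task-level memory passed to
# register_task_definition, e.g. cpu="512", memory="1024".
```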

I had a similar issue. You can set up CloudWatch logging; there you can get the full error log, which will help you debug and fix the issue. Below is the relevant part taken from the official AWS documentation (a programmatic equivalent is sketched after the steps).
Using the auto-configuration feature to create a log group
When registering a task definition in the Amazon ECS console, you have the option to allow Amazon ECS to auto-configure your CloudWatch logs. This option creates a log group on your behalf using the task definition family name with ecs as the prefix.
To use the log group auto-configuration option in the Amazon ECS console:
Open the Amazon ECS console at https://console.aws.amazon.com/ecs/.
In the left navigation pane, choose Task Definitions, then Create new Task Definition; alternatively, you can create a new revision of the existing task definition.
Select your compatibility option and choose Next Step.
Choose Add container.
In the Storage and Logging section, for Log configuration, choose Auto-configure CloudWatch Logs.
Enter your awslogs log driver options. For more information, see Specifying a log configuration in your task definition.
Complete the rest of the task definition wizard.
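If you would rather do what the auto-configure option does yourself, a rough boto3 sketch is below; the family name and region are placeholders.

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")

family = "my-app"                 # placeholder task definition family
log_group = f"/ecs/{family}"      # same naming the auto-configure option uses

# create_log_group raises ResourceAlreadyExistsException if the group already exists
try:
    logs.create_log_group(logGroupName=log_group)
except logs.exceptions.ResourceAlreadyExistsException:
    pass

# matching awslogs driver options for the container definition
awslogs_options = {
    "awslogs-group": log_group,
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "ecs",
}
```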

I have been stuck on the same error.
The problem ended up being under "Task tagging configuration": I had to disable "Enable ECS managed tags".
When this parameter is enabled, Amazon ECS automatically tags your tasks with two tags corresponding to the cluster and service names. These tags allow you to identify tasks easily in your AWS Cost and Usage Report.
Billing permissions are separate and are not assigned by default when you create a new ECS cluster and task definition with the default settings. This is why ECS was failing with "STOPPED: Essential container in task exited".
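For what it's worth, the console checkbox maps to the enableECSManagedTags parameter of CreateService; a rough boto3 sketch with placeholder cluster, service, task definition, subnet, and security group names:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.create_service(
    cluster="my-cluster",
    serviceName="my-service",
    taskDefinition="my-app",        # placeholder family[:revision]
    desiredCount=1,
    launchType="FARGATE",
    enableECSManagedTags=False,     # equivalent of unchecking "Enable ECS managed tags"
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],      # placeholder subnet
            "securityGroups": ["sg-0123456789abcdef0"],   # placeholder security group
            "assignPublicIp": "ENABLED",
        }
    },
)
```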

Related

Where is ECS Task Stopped Reason now?

I am using the AWS interface to configure my services on ECS. Before the interface change, I could access a screen that showed why a task had failed (like in the example below); that screen was reachable from the ECS service events by clicking on the task ID. Does anyone know how to get the task stopped reason with the new interface?
You can see essentially the same message if you follow these steps (an API equivalent is sketched after them):
Select your service from your ECS cluster:
Go to Configuration and tasks tab:
Scroll down and select a task. You would want to choose one that was stopped by the failing deployment:
You should have the Stopped reason message:
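If you prefer the API over the console, the same stopped reason is returned by DescribeTasks; a rough boto3 sketch with a placeholder cluster name and task ARN:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

resp = ecs.describe_tasks(
    cluster="my-cluster",
    tasks=["arn:aws:ecs:us-east-1:123456789012:task/my-cluster/0123456789abcdef0"],
)

for task in resp["tasks"]:
    print("task stoppedReason:", task.get("stoppedReason"))
    for container in task["containers"]:
        # the per-container reason/exit code is usually the most specific message
        print(container["name"], container.get("exitCode"), container.get("reason"))
```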

Amazon ECS: how to schedule a container?

I have a very simple ECS cluster using Fargate. I'd like to schedule a container to be run using a cron expression.
I created the task definition and a rule pointing to it using the EventBridge console, but I see nothing getting launched on the cluster. No logs, not even a trace of anything starting apart from the "monitor" tab of the rule which says it was triggered (but then again, I don't see any logs).
I'm guessing this might have to do with the public IP somehow needed for the rule to pull the container image when using Fargate? In the creation flow there is a setting called auto-assign public IP address, but it only shows the DISABLED option.
Has anyone had the same problem? Should I just schedule a normal service with sleep times of 24 hours between executions and risk a higher cost? Cheers
Since you mention that you have no issues running the task manually in the cluster, it's likely that the problem with EventBridge is that the role associated with the rule does not have enough permissions to run the task.
You can confirm this by checking CloudTrail logs. You'll find a RunTask event with a failure similar to the following:
User: arn:aws:sts::xxxx:assumed-role/Amazon_EventBridge_Invoke_ECS/xxx is not authorized to perform: ecs:RunTask on resource: arn:aws:ecs:us-east-1:xxxx:task-definition/ECS_task
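To fix it, attach a policy to that role allowing ecs:RunTask (plus iam:PassRole, so the rule can hand the task's execution/task roles to ECS). A rough boto3 sketch; the role name is taken from the error above, while the task definition ARN and policy name are placeholders:

```python
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ecs:RunTask",
            "Resource": "arn:aws:ecs:us-east-1:123456789012:task-definition/ECS_task:*",
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "*",
            "Condition": {"StringLike": {"iam:PassedToService": "ecs-tasks.amazonaws.com"}},
        },
    ],
}

iam.put_role_policy(
    RoleName="Amazon_EventBridge_Invoke_ECS",   # role shown in the CloudTrail error above
    PolicyName="AllowRunEcsTask",               # placeholder policy name
    PolicyDocument=json.dumps(policy),
)
```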

Fargate scheduled task FailedInvocation error

I have a Fargate task that I want to run as a scheduled task every n minutes. I have a task definition that works perfectly as expected (with CloudWatch logs and VPC connections working properly) when I run it as a task or a service. However, when I try to run it as a scheduled task, it does not start. I checked the CloudWatch logs, but there are no log entries in the log group. If I look at the metrics page, I see a FailedInvocations entry under the metric name.
I understand that it is a bit tricky to schedule a task on Fargate, as we have to go to the CloudWatch rules and update the scheduled task there in order to add subnets and define a security group, because this option is not available when creating the scheduled task through my ECS cluster page.
I have also studied the documentation page here and checked this question, but I still cannot understand why it does not work. Thank you in advance.
This seems like an issue with the web interface of AWS for scheduled tasks, as they don't let me set the assignPublicIp to enabled.
Without this, the Fargate task cannot pull images from the ECR registry. However, when I start the task via boto3 from a Lambda function that is invoked by the CloudWatch rule, it works fine.
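A rough sketch of that kind of Lambda handler (cluster, task definition, subnet, and security group IDs are placeholders); the key part is setting assignPublicIp to ENABLED in the network configuration:

```python
import boto3

ecs = boto3.client("ecs")

def handler(event, context):
    # triggered by the CloudWatch/EventBridge rule instead of an ECS scheduled task
    return ecs.run_task(
        cluster="my-cluster",
        taskDefinition="my-scheduled-task",     # placeholder family[:revision]
        launchType="FARGATE",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
                "assignPublicIp": "ENABLED",    # the setting the console would not expose
            }
        },
    )
```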

How to find the status of a running process inside a Docker container in AWS?

I have an application (a task) running in containers in AWS. I need to know its current state and also need to make sure it runs without the container exiting and killing it while in progress.
It's a C++ binary.
Service - creating a service will ensure that it is fail-safe, but how can I read this information from the outside? I could exit the application with a proper exit code, but then the service would just recreate the task again and again, which is a burden.
Is there a recommended way to communicate from the process within an ECS container to know what it is doing at the moment?
There are two ways to view ECS container logs:
SSH into the EC2 instance created by ECS, run docker ps to find the container IDs and docker logs container_id to see what is going on in the container. (This will not work if you created your cluster using Fargate, as Fargate does not create an EC2 instance; it only creates a network interface.)
Configure CloudWatch on AWS to view container activity. To configure logs you have to create a new revision of your task definition > open the container > under Storage and Logging uncheck Auto-configure CloudWatch Logs > select awslogs as the log driver > fill in your log group, region, and stream prefix keys.
To view your logs, click on the Tasks tab in your cluster > open your task > expand your container > the bottom section shows Log Configuration with a link to your container logs.
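You can also fetch the same log events programmatically; a rough boto3 sketch where the log group, container name, and task ID are placeholders (the awslogs driver names streams as prefix/container-name/task-id):

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")

log_group = "/ecs/my-app"                      # placeholder log group
log_stream = "ecs/my-app/0123456789abcdef0"    # placeholder prefix/container-name/task-id

events = logs.get_log_events(
    logGroupName=log_group,
    logStreamName=log_stream,
    startFromHead=True,
)

for event in events["events"]:
    print(event["message"])
```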

AWS CodeDeploy using GitHub failing

The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems. (Error code: HEALTH_CONSTRAINTS)
Note:
1. The AWS CodeDeploy agent is already installed.
2. The roles have been created.
Please assist and let me know if you need any more information.
You can see more information about the error by accessing the deployment ID events:
Here you can check all the steps and their detailed status:
If there are any errors you can click on the Logs column.
There can be different reasons why your deployment failed. As described above, you can click on the deployment ID to go to the deployment details page and see which instances failed. You can then click on View events for each instance to see why deployment to that instance failed. If you do not see a "View events" link, and no event details for that instance, it is likely that the agent is not running properly. Otherwise, you should be able to click on "View events" to see which lifecycle event failed. You can also log in to the failed instance and view the host agent logs for more information.
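The same details are available from the CodeDeploy API if you prefer scripting the check; a rough boto3 sketch with a placeholder deployment ID. Each target's lifecycle events carry the error message you would otherwise read under "View events" in the console.

```python
import boto3

cd = boto3.client("codedeploy", region_name="us-east-1")

deployment_id = "d-XXXXXXXXX"   # placeholder deployment ID

overview = cd.get_deployment(deploymentId=deployment_id)
print("status:", overview["deploymentInfo"]["status"])
print("errorInformation:", overview["deploymentInfo"].get("errorInformation"))

target_ids = cd.list_deployment_targets(deploymentId=deployment_id)["targetIds"]
if target_ids:
    targets = cd.batch_get_deployment_targets(
        deploymentId=deployment_id, targetIds=target_ids
    )
    for target in targets["deploymentTargets"]:
        instance = target.get("instanceTarget", {})
        print(instance.get("targetId"), instance.get("status"))
        for event in instance.get("lifecycleEvents", []):
            # diagnostics.message holds the per-step failure reason, if any
            print("  ", event["lifecycleEventName"], event["status"],
                  event.get("diagnostics", {}).get("message"))
```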