ECS logs: Fargate vs EC2

When I run a task in ECS using Fargate, STDOUT is automatically redirected to CloudWatch, and the application logs can be found there without any complication.
To clarify, for example, in C#:
Console.WriteLine("log to write to CloudWatch");
That output is automatically redirected to CloudWatch Logs when I use ECS with Fargate or Lambda functions.
I would like to do the same using EC2.
My first impression of ECS with EC2 is that this is not as automatic as Fargate. Am I right?
Looking for information, I have found the following (apart from other, older questions and posts):
This question refers to an old post from the AWS blog, so it could be obsolete.
On this AWS page, they describe a few steps where you need to install some utilities on your EC2 instance.
So, summarizing: is there any way to see STDOUT in CloudWatch when I use ECS with EC2, in the same way Fargate does?

If you mean EC2 logging as easily as Fargate does, without any extra configuration, then no. You need to provide some configuration and utilities on your EC2 instances to allow logging to CloudWatch. Like any EC2 instance we launch, an ECS container instance is just a virtual machine running an operating system with a default configuration, in this case one of the Amazon ECS-optimized AMIs. Any other services and configuration we have to provide ourselves.
Besides the links you provided, I found this CloudFormation template, which configures an EC2 Spot Fleet to log to CloudWatch in the same way your second link describes.
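To give an idea of what that configuration involves: for the awslogs log driver to work from a container instance, the instance role needs CloudWatch Logs write permissions along these lines (a minimal sketch; the AWS managed policy AmazonEC2ContainerServiceforEC2Role already includes these actions):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}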

I don't think you're correct. The StdOut logs from the ECS task are just as easily written and accessed running under EC2 as under Fargate.
You just need this in your task definition, which, as far as I can tell, is the same as in Fargate:
"containerDefinitions": [
{
"dnsSearchDomains": null,
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "my-log-family",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "my-stream-name"
}
}
...
After it launches, you should see your logs under the my-log-family log group.
If you are trying to put application logs in CloudWatch, that's another matter; this is typically done using the CloudWatch Logs agent, which you'd have to install into the container. The configuration above, however, will capture StdOut.
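For what it's worth, the awslogs driver names each log stream prefix-name/container-name/task-id, so once a task is running you can fetch its output from the CLI with something like this (the stream name shown is illustrative):
aws logs get-log-events --log-group-name my-log-family --log-stream-name my-stream-name/<container-name>/<task-id>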

This is how I did it.
Using the NuGet package AWS.Logger.AspNetCore.
An example of use:
static async Task Main(string[] args)
{
    Logger loggerObj = new Logger();
    ILogger<Program> logger = await loggerObj.CreateLogger("test", "eu-west-1");
    logger.LogInformation("test info");
    logger.LogError("test error");
}

public async Task<ILogger<Program>> CreateLogger(string logGroup, string region)
{
    // Configure the AWS provider: target log group and region.
    AWS.Logger.AWSLoggerConfig config = new AWS.Logger.AWSLoggerConfig();
    config.Region = region;
    config.LogStreamNameSuffix = "";
    config.LogGroup = logGroup;
    // AddAWSProvider is an extension method supplied by AWS.Logger.AspNetCore.
    LoggerFactory logFactory = new LoggerFactory();
    logFactory.AddAWSProvider(config);
    return logFactory.CreateLogger<Program>();
}
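Alternatively, the same package can read its settings from appsettings.json instead of building the AWSLoggerConfig in code; a sketch based on the package's README (the AWS.Logging section name is the package's convention):
{
  "AWS.Logging": {
    "Region": "eu-west-1",
    "LogGroup": "test",
    "LogLevel": {
      "Default": "Information"
    }
  }
}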

Related

How and why of awslogs on ECS (fargate)

I am struggling to get a task running on ECS Fargate, launched (ecs.runTask) from an AWS SDK script (JS/Node).
My current struggle is getting logs from the containers so that I can troubleshoot why they are stopping. I can't seem to get the task definition right so that the logs are generated.
logConfiguration: {
  logDriver: 'awslogs',
  options: {
    "awslogs-region": 'us-west-2',
    "awslogs-group": 'myTask',
    "awslogs-stream-prefix": "myTask",
    "awslogs-create-group": "true"
  }
}
I have set the log driver for them to awslogs, but when I try to view the logs in CloudWatch, I get various kinds of nothing:
If I specify the awslogs-create-group as "true" (it requires a string, rather than a Boolean, which is strange; I assume case doesn't matter), I nevertheless find that the group is not created.
If I create the group manually, I find that the log stream is not created.
I suspect that there may be an error in my permissions, though of course there is no error message to confirm it. The docs (here) indicate that I need to attach certain policies to ecsInstanceRole, which seems to be a placeholder for a role that is used somewhere in the process.
But I have attached such a policy to my ECS executionRole and to the role that executes my runTask API call, and I have looked for any other role that might be involved (an actual "instanceRole" doesn't seem to exist in the task definition), and nothing is improving my situation.
I'd be happy to supply more information, but at this point I'm not sure where my blind spot is.
Can anyone see it?
Go to your Task Definition. You should find a section called "Task execution IAM role". The description says:
This role is required by tasks to pull container images and publish container logs to Amazon CloudWatch.
The role you attach here needs a policy like AmazonECSTaskExecutionRolePolicy (AWS managed policy), and the Trusted Entity is ecs-tasks.amazonaws.com.
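For reference, a minimal trust policy for that execution role looks like this (standard boilerplate, nothing specific to this question):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}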
Also, I don't think the awslogs-create-group option is needed.
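One caveat: if you do keep "awslogs-create-group": "true", the execution role also needs the logs:CreateLogGroup permission, which the AmazonECSTaskExecutionRolePolicy managed policy does not include. That would explain the group never being created; you would have to attach an extra statement along these lines:
{
  "Effect": "Allow",
  "Action": "logs:CreateLogGroup",
  "Resource": "*"
}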

How to run aws cli commands as a specific user?

Currently we are running 4 commands.
The two AWS CLI commands below run in a Jenkins Docker container:
sh 'aws cloudformation package ...'
s3Upload()
The two AWS CLI commands below run in another Docker container:
aws s3 cp source dest
aws cloudformation deploy
To run these 4 commands in a Docker container, the AWS CLI derives its access permissions from the Docker host (EC2), which assumes a role with a policy granting the permissions (to access S3 and create/update a CloudFormation stack).
But the problem with this solution is that
we have to assign this role (say xrole) to every EC2 instance running in each test environment, and there are 3-4 test environments.
Internally, AWS creates an ad hoc identity like arn:aws:sts::{account id}:assumed-role/xrole/i-112223344, and the above 4 commands run on behalf of this identity.
A better solution would be to create a user, assign the same role (xrole) to it, and run the above 4 commands as this user.
But,
1) What is the process to create such a user? Because it has to assume xrole...
2) How do I run the above 4 commands as this user?
Best practice is to use roles, not users, when working with EC2 instances. Users are necessary only when you need to grant permissions to applications that are running on computers outside of the AWS environment (on premises). And even then, it is still best practice to grant this user permission only to assume a role which grants the necessary permissions.
If you are running all your commands from within containers and you want to grant permissions to the containers instead of the whole EC2 instance, then you can use the ECS service instead of plain EC2 instances.
When using the EC2 launch type with ECS, you have the same control over the EC2 instance, but the difference is that you can attach a role to a particular task (container) instead of the whole EC2 instance. By doing this, you can have several different tasks (containers) running on the same EC2 instance while each of them has only the permissions it needs. So if one of your containers needs to upload data to S3, you can create the necessary role, specify the role in the task definition, and only that particular task will have those permissions. Neither the other tasks nor the EC2 instance itself will be able to upload objects to S3.
Moreover, if you specify awsvpc networking mode for your tasks, each task gets its own ENI, which means that you can specify a Security Group for each task separately, even if they are running on the same EC2 instance.
Here is an example of a task definition using a Docker image stored in ECR and a role called AmazonECSTaskS3BucketRole.
{
  "containerDefinitions": [
    {
      "name": "sample-app",
      "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/aws-nodejs-sample:v1",
      "memory": 200,
      "cpu": 10,
      "essential": true
    }
  ],
  "family": "example_task_3",
  "taskRoleArn": "arn:aws:iam::123456789012:role/AmazonECSTaskS3BucketRole"
}
Here is the documentation for task definitions.
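For completeness, the policy behind a role like AmazonECSTaskS3BucketRole could be a sketch along these lines (the bucket name is illustrative, not taken from the example above):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::my-example-bucket/*"
    }
  ]
}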
Applications running on the same host share the permissions assigned to the host through the instance profile. If you would like to segregate different applications running on the same instance due to security requirements, it is best to launch them on separate instances.
Using access keys per application is not a recommended approach as access keys are long-term credentials and they can easily be retrieved when the host is shared.
It is possible to assign IAM roles to ECS tasks as suggested by the previous answer. However, containers that are running on your container instances are not prevented from accessing the credentials that are supplied through the instance profile. It is therefore recommended to assign minimal permissions to the container instance roles.
If you run your tasks in awsvpc network mode, then you can configure the ECS agent to prevent a task from accessing the instance metadata. You just set the agent configuration variable ECS_AWSVPC_BLOCK_IMDS=true and restart the agent.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
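On the ECS-optimized AMI, that agent configuration variable goes in /etc/ecs/ecs.config; a minimal sketch (the restart command varies by AMI, e.g. sudo systemctl restart ecs on Amazon Linux 2):
# /etc/ecs/ecs.config
ECS_AWSVPC_BLOCK_IMDS=true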

Register AWS ECS task in service discovery namespace (private hosted zone)

I'm quite bad at using AWS, but I'm trying to automate the setup of an ECS cluster with private DNS names in Route 53, using the new service discovery mechanism. I am able to click my way through the AWS UI to get a DNS entry to appear in a private hosted zone, but I cannot figure out the JSON parameters to add to the JSON for the command below to accomplish the same thing.
aws ecs create-service --cli-input-json file://aws/createService.json
and below are the approximate contents of the createService.json referenced above:
"cluster": "clustername",
"serviceName": "servicename",
"taskDefinition": "taskname",
"desiredCount": 1,
// here is where I'm guessing there should be some DNS config referencing some
// namespace or similar that I cannot figure out...
"networkConfiguration": {
"awsvpcConfiguration": {
"subnets": [
"subnet-11111111"
],
"securityGroups": [
"sg-111111111"
],
"assignPublicIp": "DISABLED"
}
}
I'd be grateful for any ideas, since my googling skills apparently aren't good enough for this problem. Many thanks!
To have an ECS service automatically register its tasks in a service discovery service, you can use the serviceRegistries attribute. Add the following to the ECS service definition JSON:
{
  ...
  "serviceRegistries": [
    {
      "registryArn": "arn:aws:servicediscovery:region:aws_account_id:service/srv-utcrh6wavdkggqtk"
    }
  ]
}
The attribute contains a list of service discovery services that ECS should update when it creates or destroys a task as part of the service. Each registry is referenced using the ARN of the service discovery service.
To get the ARN, use the AWS CLI command aws servicediscovery list-services.
Strangely, the documentation of the ECS service definition does not mention this attribute, but this tutorial about service discovery does.
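Putting this together with the createService.json from the question, the whole file would look roughly like this (placeholder IDs kept from the question; the registry ARN is illustrative):
{
  "cluster": "clustername",
  "serviceName": "servicename",
  "taskDefinition": "taskname",
  "desiredCount": 1,
  "serviceRegistries": [
    {
      "registryArn": "arn:aws:servicediscovery:region:aws_account_id:service/srv-utcrh6wavdkggqtk"
    }
  ],
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "subnets": ["subnet-11111111"],
      "securityGroups": ["sg-111111111"],
      "assignPublicIp": "DISABLED"
    }
  }
}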
As it turns out, there is no support in ecs create-service for adding the service to the service registry, i.e. the Route 53 private hosted zone. Instead I had to use aws servicediscovery create-service and then aws servicediscovery register-instance to finally get an entry in my private hosted zone.
This became quite a complicated solution, so I'll instead give Terraform a shot at it, since I found they recently added support for ECS service discovery, and see where that takes me...
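For reference, the first of those two servicediscovery calls looks roughly like this (namespace ID and TTL are illustrative, following the AWS service discovery tutorial):
aws servicediscovery create-service --name servicename --dns-config "NamespaceId=ns-xxxxxxxxxxxxxxxx,DnsRecords=[{Type=A,TTL=60}]" --health-check-custom-config FailureThreshold=1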

Useless Amazon ECS Error Message when creating tasks

Using the ecs agent container on an Ubuntu instance, I am able to register the agent with my cluster.
I also have a service created in that cluster and task definitions as well. When I try to add a task to the cluster I get the useless error message:
Run tasks failed
Reasons : ["ATTRIBUTE"]
The ecs agent log has no related error message. Any thoughts on how I can get better debugging or what the issue might be?
The CLI also returns the same useless error message:
{
  "tasks": [],
  "failures": [
    {
      "arn": "arn:aws:ecs:us-east-1:sssssss:container-instance/sssssssssssss",
      "reason": "ATTRIBUTE"
    }
  ]
}
From the troubleshooting guide:
ATTRIBUTE (container instance ID)
Your task definition contains a parameter that requires a specific container instance attribute that is not available on your container instances. For more information on which attributes are required for specific task definition parameters and agent configuration variables, see Task Definition Parameters and Amazon ECS Container Agent Configuration.
You can find the attributes required for your task definition by looking at the requiredAttributes field. You can find the attributes that are present for your container instances in the result of the DescribeContainerInstances API call.
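In practice you can compare the two from the CLI; note that the API spells the task definition field requiresAttributes (the task definition, cluster, and instance ARN below are illustrative):
aws ecs describe-task-definition --task-definition mytask --query 'taskDefinition.requiresAttributes'
aws ecs describe-container-instances --cluster mycluster --container-instances <container-instance-arn> --query 'containerInstances[].attributes'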
The ECS console webpage does not provide enough information, but you can connect to the EC2 instance to retrieve more logs.
You can try manually restarting the ECS agent daemon or the ECS agent Docker container.
Sometimes you need to manually delete the checkpoint file.
A cheat sheet with the locations of logs and useful commands can be found at
ecs-agent troubleshoot
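On the ECS-optimized Amazon Linux AMI, those steps look roughly like this (a sketch; paths and commands differ on Ubuntu, where the agent runs as a plain Docker container, e.g. docker restart ecs-agent):
sudo stop ecs                                   # stop the agent (upstart on Amazon Linux 1)
sudo rm /var/lib/ecs/data/ecs_agent_data.json   # delete the agent checkpoint file
sudo start ecs                                  # start the agent again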

AWS ECS - how to log to cloudwatch from ECS container?

I have a container that runs a given task in an ECS cluster. I'd like to log the results of that task in cloudwatch.
I've attempted to edit the container to use awslogs and set the following params:
awslogs-group
awslogs-region
When I attempt to run the task, I get a decidedly unhelpful error.
Is there a proven MVP way of setting up containers to log to cloudwatch in AWS?
We are using a Docker container which forwards all log files to AWS CloudWatch:
https://github.com/nearform/docker-cloudwatch
First, we created a new IAM user and assigned it the required access rights. With that user's access key ID and secret key, it is now possible to have all logs from all containers in CloudWatch.
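The "required access rights" boil down to CloudWatch Logs write permissions; a minimal policy sketch (you may want to scope Resource more tightly):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams"
      ],
      "Resource": "*"
    }
  ]
}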
You should add another parameter to the "options" of "logConfiguration" inside "containerDefinitions", like this:
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/<your-task-name>",
"awslogs-region": "ap-south-1",
"awslogs-stream-prefix": "ecs"
}
}
If you want to use another log driver, take a look at these examples in the AWS docs and at the documentation for the logging section here.