AWS ECS - how to log to CloudWatch from an ECS container?

I have a container that runs a given task in an ECS cluster. I'd like to log the results of that task in CloudWatch.
I've attempted to edit the container to use awslogs and set the following params:
awslogs-group
awslogs-region
When I attempt to run the task, I get an unhelpful error.
Is there a proven MVP way of setting up containers to log to CloudWatch in AWS?

We are using a Docker container which forwards all log files to AWS CloudWatch:
https://github.com/nearform/docker-cloudwatch
First, we created a new IAM policy with the required access rights and assigned it to a new IAM user. With that user's access key ID and secret access key, it is now possible to have all logs from all containers in CloudWatch.
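The answer doesn't spell out which rights those are; as a minimal sketch (these are standard CloudWatch Logs actions, and in a real setup you would scope Resource down to specific log groups), the user's policy could look like:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}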

You should add another parameter to the "options" of "logConfiguration" inside "containerDefinitions", like this:
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/<your-task-name>",
"awslogs-region": "ap-south-1",
"awslogs-stream-prefix": "ecs"
}
}
If you want to use another log driver, take a look at the examples in the AWS docs and at the documentation for the logging configuration.
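For example, on the EC2 launch type you could switch to Docker's json-file driver; a sketch using that driver's standard rotation options (not taken from the linked examples, so verify against your setup):

"logConfiguration": {
  "logDriver": "json-file",
  "options": {
    "max-size": "10m",
    "max-file": "3"
  }
}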

Related

How and why of awslogs on ECS (Fargate)

I am struggling to get a task running on ECS Fargate, launched (ecs.runTask) from an AWS SDK script (JS/Node).
My current struggle is to get logs from the containers so that I can troubleshoot why they are stopping. I can't seem to get the Task Definition right so that the logs will be generated.
logConfiguration: {
  logDriver: 'awslogs',
  options: {
    'awslogs-region': 'us-west-2',
    'awslogs-group': 'myTask',
    'awslogs-stream-prefix': 'myTask',
    'awslogs-create-group': 'true'
  }
}
I have set the log driver for them to awslogs, but when I try to view the logs in CloudWatch, I get various kinds of nothing:
If I specify the awslogs-create-group as "true" (it requires a string, rather than a Boolean, which is strange; I assume case doesn't matter), I nevertheless find that the group is not created.
If I create the group manually, I find that the log stream is not created.
I suspect that there may be an error in my permissions, though of course there is no error message to confirm that. The docs indicate that I need to attach certain policies to ecsInstanceRole, which seems to be a placeholder for a role that is used somewhere in the process.
But I have attached such a policy to my ECS executionRole and to the role that executes my runTask API call, and I have looked for any other role that might be involved (an actual "instanceRole" doesn't seem to exist in the Task Definition), and nothing is improving my situation.
I'd be happy to supply more information, but at this point I'm not sure where my blind spot is.
Can anyone see it?
Go to your Task Definition. You should find a section called "Task execution IAM role". The description says -
This role is required by tasks to pull container images and publish container logs to Amazon CloudWatch.
The role you attach here needs a policy like AmazonECSTaskExecutionRolePolicy (AWS managed policy), and the Trusted Entity is ecs-tasks.amazonaws.com.
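As a sketch, that trust relationship is the standard one for ECS tasks:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}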
Also, the awslogs option awslogs-create-group is not needed, I think.
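If you do want awslogs-create-group to take effect, note that AmazonECSTaskExecutionRolePolicy does not include logs:CreateLogGroup, so the execution role would also need an extra statement along these lines (a sketch; scope Resource as appropriate):

{
  "Effect": "Allow",
  "Action": "logs:CreateLogGroup",
  "Resource": "*"
}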

ECS logs: Fargate vs EC2

When I run a task in ECS using Fargate, STDOUT is redirected automatically to CloudWatch, and the application logs can be found there without any complication.
To clarify, for example, in C#:
Console.WriteLine("log to write to CloudWatch");
That output is automatically redirected to CloudWatch Logs when I use ECS with Fargate or Lambda functions.
I would like to do the same using EC2.
My first impression is that using ECS with EC2 is not as automatic as Fargate. Am I right?
Looking for information, I have found the following (apart from other, older questions and posts):
One question refers to an old post from the AWS blog, so it could be obsolete.
An AWS page describes a few steps where you need to install some utilities on your EC2 instance.
So, summarizing, is there any way to see the STDOUT in cloudwatch when I use ECS with EC2 in the same way Fargate does?
If you mean logging from EC2 as easily as from Fargate, without any further configuration, then no. You need to provide some configuration and utilities on your EC2 instances to allow logging to CloudWatch. Like any EC2 instance we launch, an ECS container instance is just a virtual machine with an operating system in its default configuration, in this case one of the Amazon ECS-optimized AMIs. Any other services and configuration we have to provide ourselves.
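For the awslogs driver specifically, the ECS container agent on the instance advertises the drivers it supports via ECS_AVAILABLE_LOGGING_DRIVERS in /etc/ecs/ecs.config. On recent ECS-optimized AMIs awslogs is already included, but on a custom AMI you may need a line like this (a sketch, not tied to any particular AMI version):

ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","awslogs"]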
Besides the link you provided above, I found this CloudFormation template, which configures an EC2 Spot Fleet to log to CloudWatch in the way your second link describes.
I don't think you're correct. The STDOUT logs from the ECS task are just as easily written and accessed running under EC2 as under Fargate.
You just need this in your task definition which, as far as I can tell, is the same as in Fargate:
"containerDefinitions": [
{
"dnsSearchDomains": null,
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "my-log-family",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "my-stream-name"
}
}
...
After it launches, you should see your logs under my-log-family.
If you are trying to put application logs in CloudWatch, that's another matter... that is typically done using the CloudWatch Logs agent, which you'd have to install into the container, but the above will capture the STDOUT.
This is how I did it.
Using the NuGet package AWS.Logger.AspNetCore.
An example of use:
static async Task Main(string[] args)
{
    Logger loggerObj = new Logger();
    ILogger<Program> logger = await loggerObj.CreateLogger("test", "eu-west-1");
    logger.LogInformation("test info");
    logger.LogError("test error");
}

public async Task<ILogger<Program>> CreateLogger(string logGroup, string region)
{
    // Point the AWS provider at the target log group and region.
    AWS.Logger.AWSLoggerConfig config = new AWS.Logger.AWSLoggerConfig();
    config.Region = region;
    config.LogStreamNameSuffix = "";
    config.LogGroup = logGroup;

    // Plug the AWS provider into the standard .NET logging factory.
    LoggerFactory logFactory = new LoggerFactory();
    logFactory.AddAWSProvider(config);
    return logFactory.CreateLogger<Program>();
}

Using AWS Storage Services (EBS, EFS, or S3) as Volumes or Bind Mounts with Standalone Docker Containers, not ECS?

I have a self-managed AWS cluster on which I am looking to run Docker containers.
(At present, ECS and EKS are not in my scope, though in the future they might be... but I need to focus on the present requirement.)
I need to add persistence to a few containers by attaching AWS EFS/EBS/s3fs storage (as appropriate for the use case). AWS has addressed this use case through a lengthy and verbose blog post which brings ECS into the picture. As I said, my requirement is simple, and that article seems to do many things like CloudFormation etc.
I will appreciate it if anyone can simplify this and provide the bare-bones steps I need to follow.
1) I installed the ebs/efs/s3fs drivers:
docker plugin install --grant-all-permissions rexray/ebs
and so on for efs and s3fs too. The s3fs installation ran into trouble:
Error response from daemon: dial unix /run/docker/plugins/b0b9c534158e73cb07011350887501fe5fd071585af540c2264de760f8e2c0d9/rexray.sock: connect: no such file or directory
But this is not my problem for the moment, unless someone wants to volunteer to solve that issue.
Where I am stuck is: what are the next steps to create volumes, or to directly mount them at run time to containers as volumes or bind mounts (is this supported, or just volumes)?
Here are the steps for EC2-based ECS services (since Fargate instances do not support Docker volumes as of today):
Update your instance role to include the following permissions:
ec2:AttachVolume
ec2:CreateVolume
ec2:CreateSnapshot
ec2:CreateTags
ec2:DeleteVolume
ec2:DeleteSnapshot
ec2:DescribeAvailabilityZones
ec2:DescribeInstances
ec2:DescribeVolumes
ec2:DescribeVolumeAttribute
ec2:DescribeVolumeStatus
ec2:DescribeSnapshots
ec2:CopySnapshot
ec2:DescribeSnapshotAttribute
ec2:DetachVolume
ec2:ModifySnapshotAttribute
ec2:ModifyVolumeAttribute
ec2:DescribeTags
These permissions should apply to all resources in the policy. N.B., the ec2:CreateVolume and ec2:DeleteVolume permissions can be omitted if you don't want to use autoprovisioning. As a policy document, that might look like the sketch below.
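Here is that sketch, assembled from the list above (Resource is left as * per the note; scope it down if your setup allows):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:CreateVolume",
        "ec2:CreateSnapshot",
        "ec2:CreateTags",
        "ec2:DeleteVolume",
        "ec2:DeleteSnapshot",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeInstances",
        "ec2:DescribeVolumes",
        "ec2:DescribeVolumeAttribute",
        "ec2:DescribeVolumeStatus",
        "ec2:DescribeSnapshots",
        "ec2:CopySnapshot",
        "ec2:DescribeSnapshotAttribute",
        "ec2:DetachVolume",
        "ec2:ModifySnapshotAttribute",
        "ec2:ModifyVolumeAttribute",
        "ec2:DescribeTags"
      ],
      "Resource": "*"
    }
  ]
}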
Install rexray on the instance (you've already done this)
If you're not using autoprovision, provision your volume and make sure there is a Name tag matching the name of the volume that you want to use in your service definition. In the example below, we set this value to rexray-vol.
Update your task definition to include the necessary values for the volume to be mounted as a Docker volume. Here is an example:
"volumes": [{
"name": "rexray-vol",
"dockerVolumeConfiguration": {
"autoprovision": true,
"scope": "shared",
"driver": "rexray/ebs",
"driverOpts": {
"volumetype": "gp2",
"size": "5"
}
}
}]
Update the task definition's container definition to reference your swanky EBS volume:
"mountPoints": [
{
"containerPath": "/var/lib/mysql",
"sourceVolume": "rexray-vol"
}
],

Register AWS ECS task in service discovery namespace (private hosted zone)

I'm quite bad at using AWS but I'm trying to automate the setup of an ECS cluster with private DNS names in Route 53, using the new service discovery mechanism. I am able to click my way through the AWS UI to get a DNS entry showing up in a private hosted zone, but I cannot figure out the JSON parameters to add to the JSON for the command below to accomplish the same thing.
aws ecs create-service --cli-input-json file://aws/createService.json
and below are the approximate contents of the createService.json referenced above:
"cluster": "clustername",
"serviceName": "servicename",
"taskDefinition": "taskname",
"desiredCount": 1,
// here is where I'm guessing there should be some DNS config referencing some
// namespace or similar that I cannot figure out...
"networkConfiguration": {
"awsvpcConfiguration": {
"subnets": [
"subnet-11111111"
],
"securityGroups": [
"sg-111111111"
],
"assignPublicIp": "DISABLED"
}
}
I'd be grateful for any ideas, since my googling skills apparently aren't good enough for this problem. Many thanks!
To automatically have an ECS service register task instances in a service discovery service, you can use the serviceRegistries attribute. Add the following to the ECS service definition JSON:
{
  ...
  "serviceRegistries": [
    {
      "registryArn": "arn:aws:servicediscovery:region:aws_account_id:service/srv-utcrh6wavdkggqtk"
    }
  ]
}
The attribute contains a list of service discovery services that should be updated by ECS when it creates or destroys a task as part of the service. Each registry is referenced using the ARN of the service discovery service.
To get the ARN, use the AWS CLI command aws servicediscovery list-services.
Strangely, the documentation of the ECS service definition does not contain information about this attribute. However, this tutorial about service discovery does.
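Putting that together with the createService.json from the question, a sketch of the combined file might look like this (the registryArn is the example value from above; substitute the ARN returned for your own service discovery service):

{
  "cluster": "clustername",
  "serviceName": "servicename",
  "taskDefinition": "taskname",
  "desiredCount": 1,
  "serviceRegistries": [
    {
      "registryArn": "arn:aws:servicediscovery:region:aws_account_id:service/srv-utcrh6wavdkggqtk"
    }
  ],
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "subnets": [
        "subnet-11111111"
      ],
      "securityGroups": [
        "sg-111111111"
      ],
      "assignPublicIp": "DISABLED"
    }
  }
}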
As it turns out, there is no support in ecs create-service for adding the service to the service registry, i.e. the Route 53 private hosted zone. Instead I had to use aws servicediscovery create-service and then aws servicediscovery register-instance to finally get an entry in my private hosted zone.
This became quite a complicated solution, so I'll instead give Terraform a shot, since I found they recently added support for ECS service discovery, and see where that takes me...

AWS EC2 Launch logs in CloudWatch (Windows 2016 image)

I'm trying to forward the EC2 Launch logs to CloudWatch from my Windows Server 2016-based EC2 instance.
For some reason I can't see the log groups for this specific category.
Here's an example of my AWS.EC2.Windows.CloudWatch.json:
{
  "IsEnabled": true,
  "EngineConfiguration": {
    "PollInterval": "00:00:15",
    "Components": [
      {
        "Id": "Ec2Config",
        "FullName": "AWS.EC2.Windows.CloudWatch.CustomLog.CustomLogInputComponent,AWS.EC2.Windows.CloudWatch",
        "Parameters": {
          "LogDirectoryPath": "C:\\ProgramData\\Amazon\\EC2-Windows\\Launch\\Log",
          "TimestampFormat": "yyyy-MM-ddTHH:mm:ss.fffZ:",
          "Encoding": "UTF-8",
          "Filter": "UserdataExecution.log",
          "CultureName": "en-US",
          "TimeZoneKind": "UTC"
        }
      },
      {
        "Id": "EC2ConfigSink",
        "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch",
        "Parameters": {
          "Region": "eu-west-1",
          "LogGroup": "/my-customer/deployment/ec2config-userdata",
          "LogStream": "ec2config-userdata"
        }
      }
      ... I have a few more definitions in this file ...
    ],
    "Flows": {
      "Flows": [
        "Ec2Config,EC2ConfigSink",
        ... other references here
      ]
    }
  }
}
The CloudWatch agent starts and doesn't report any errors, and I can see data from other sources (some application log files; I skipped those definitions intentionally).
That means the CloudWatch config file is correct and is applied/placed in the correct directory.
Logs are coming through with no problem, except for the EC2 Launch logs.
I'm wondering if anybody has run into this problem? It works perfectly on Windows 2012-based images.
Apparently, the SSM Agent starts after EC2 Launch executes the user data script. I can see this from the SSM Agent's log file modification timestamps.
Therefore, no log forwarding happens during the EC2 launch.
When the SSM Agent starts and loads the CloudWatch plugin, the log files are already filled with entries and never change (the wallpaper log is the only exception), so they never end up in the CloudWatch console.
A lot of changes have been implemented on the AWS side: they switched to .NET Core, removed the EC2Config service, and moved the log-forwarding logic to the SSM Agent (CloudWatch plugin) for Windows 2016-based AMIs.
It looks like the behavior has changed quite significantly too, so there's no way to get the EC2 Launch logs into CloudWatch (when using the AWS toolset only).
Basically, we have to stick to our application logs only, which is very unfortunate. We rely on the EC2 Launch logs to see whether the instance started and successfully executed its user data.