Datadog task definition entrypoint - amazon-web-services

Each time I add the entrypoint below to my task definition, to fetch the instance's IP address so my app can send APM traces to the Datadog agent, the service does not accept it and returns a failing status.
"entryPoint": [
"sh",
"-c",
"export DD_AGENT_HOST=$(curl http://169.254.169.254/latest/meta-data/local-ipv4)"
],
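As written, that entrypoint replaces the container's original startup command with a shell that only exports the variable and then exits, so the container stops immediately and the service keeps reporting a failing status. A common pattern, sketched below and not taken from the question, is to export the variable and then exec the container's real start command in the same shell; yarn start here is only a placeholder for whatever your image normally runs:
"entryPoint": [
  "sh",
  "-c",
  "export DD_AGENT_HOST=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4) && exec yarn start"
],
Using exec makes the application replace the shell as PID 1, so signals from ECS (for example on stop) reach it directly.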

Related

AWS Cloudwatch logs status shows running but logs are not available in AWS console

I am new to AWS and I have to publish the application service logs to CloudWatch. I followed the steps in the AWS documentation and it works. I then configured the same steps via a Jenkins pipeline, and here I am facing an issue: the logs are not being published, i.e. I cannot see them in the AWS console. I logged on to the EC2 instance and checked the CloudWatch agent status, and it shows
{
  "status": "running",
  "starttime": "2021-03-25T07:40:21+0000",
  "configstatus": "configured",
  "cwoc_status": "stopped",
  "cwoc_starttime": "",
  "cwoc_configstatus": "not configured",
  "version": "1.247347.3b250378"
}
I don't understand what is wrong here :(.
Any help would be appreciated.
Thanks in advance.
I followed the link you mentioned, Installing and Running the CloudWatch Agent on Your Servers.
Below is my configuration for pushing the logs and a few other metrics, which was generated by this command:
Run the CloudWatch Agent Configuration Wizard
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
{
  "agent": {
    "metrics_collection_interval": 60,
    "run_as_user": "root"
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/messages",
            "log_group_name": "messages",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  },
  "metrics": {
    ... # metrics configuration here
  }
}
I started the agent as described in the doc:
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a \
fetch-config -m ec2 -s -c file:///opt/aws/amazon-cloudwatch-agent/bin/config.json
Start the CloudWatch Agent Using the Command Line
# /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a status
{
  "status": "running",
  "starttime": "2021-03-26T11:46:14+0000",
  "version": "1.247345.35"
}
You can look for problems inside the logs directory, if there are any:
[root@ip-xx amazon-cloudwatch-agent]# ls
amazon-cloudwatch-agent.log  configuration-validation.log  state
[root@ip-xx amazon-cloudwatch-agent]# pwd
/var/log/amazon/amazon-cloudwatch-agent
As a side note, if I just wanted to push logs to CloudWatch, I would use this one:
Quick Start: Install and Configure the CloudWatch Logs Agent on a Running EC2 Linux Instance
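For reference, that older CloudWatch Logs agent is configured through /etc/awslogs/awslogs.conf rather than the unified agent's JSON file. A minimal sketch for the same /var/log/messages example might look like this (the state_file path shown is a common default; adjust as needed):
[general]
state_file = /var/lib/awslogs/agent-state

[/var/log/messages]
file = /var/log/messages
log_group_name = messages
log_stream_name = {instance_id}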

How does "latest" tag work in an ECS task definition and container instances pulling from ECR?

I'm having problems using the latest tag in an ECS task definition, where the image parameter has a value like XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/reponame/web:latest.
I expect this task definition to pull the image tagged latest from ECR whenever a new service instance (task) is run on the container instance (an EC2 instance registered to the cluster).
However, when I connect to the container instance remotely and list the Docker images, I can see that it has not pulled the latest release image from ECR.
The latest tag there is two release versions behind the current one, dating from when I updated the task definition to use the latest tag instead of explicitly defining the version tag, i.e. :v1.05.
I have just one container instance in this cluster.
It's possible there is some quirk in my process, but this question is mainly about how latest should behave in this kind of scenario.
My docker image build and tagging, ECR push, ECS task definition update, and ECS service update process:
# Build the image with multiple tags
docker build -t reponame/web:latest -t reponame/web:v1.05 .
# Tag the image with the ECR repo URI
docker tag ${imageId} XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/reponame/web
# Push both tags separately
docker push XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/reponame/web:v1.05
docker push XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/reponame/web:latest
# Run only if the definition file's contents has been updated
aws ecs register-task-definition --cli-input-json file://web-task-definition.json
# Update the service with force-new-deployment
aws ecs update-service \
  --cluster my-cluster-name \
  --service web \
  --task-definition web \
  --force-new-deployment
With a task definition file:
{
  "family": "web",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/reponame/web:latest",
      "essential": true,
      "memory": 768,
      "memoryReservation": 512,
      "cpu": 768,
      "portMappings": [
        {
          "containerPort": 5000,
          "hostPort": 80
        }
      ],
      "entryPoint": [
        "yarn", "start"
      ],
      "environment": [
        {
          "name": "HOST",
          "value": "0.0.0.0"
        },
        {
          "name": "NUXT_HOST",
          "value": "0.0.0.0"
        },
        {
          "name": "NUXT_PORT",
          "value": "5000"
        },
        {
          "name": "NODE_ENV",
          "value": "production"
        },
        {
          "name": "API_URL",
          "value": "/api"
        }
      ]
    }
  ]
}
It turned out the problem was with my scripts: I was using a different variable that still had an old value stored in my terminal session.
I've validated that using the latest tag in the task definition's image URL does make a newly started service instance pull the image tagged latest from ECR, without needing to register a new revision of the task definition.
As a side note, one needs to be careful when handling the latest tag. In this scenario it works out, but in many other cases it would be error prone: Ref1, Ref2
You must tag and push latest when you build a new image; otherwise the tag will not be updated in the registry.
There is also an option to force a pull when running an image, so that the Docker host will not assume that the latest it pulled yesterday is still the latest today.
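On an EC2-backed ECS cluster, one place to express that force-pull behaviour is the ECS agent's ECS_IMAGE_PULL_BEHAVIOR setting in /etc/ecs/ecs.config; this is only a sketch of that option, not something the process above depends on:
# /etc/ecs/ecs.config on the container instance
# "always" tells the agent to pull the image on every task start instead of reusing a cached copy
ECS_IMAGE_PULL_BEHAVIOR=always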

Run docker image on amazon ecs

I have a docker image which runs with this command
docker run -it -p 8118:8118 -p 9050:9050 -d dperson/torproxy
It requires a port as an argument.
What have I tried?
I pushed this image to an ECR repo and created a task definition for it. After that, I created a service with a Network Load Balancer. But the server does not respond when I issue a GET request to the DNS name of the Network Load Balancer.
I think this is because I didn't configure the port for the container.
How can I do this?
Port Mappings are part of the Task Definition > Container Definitions.
This can be done through the UI (Add Container) or using the CLI / SDK (RegisterTaskDefinition):
{
  "containerDefinitions": [
    {
      ...
      "portMappings": [
        {
          "containerPort": number,
          "hostPort": number,
          "protocol": "string"
        }
      ],
      ...
    }
  ]
}
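For the docker run command in the question (-p 8118:8118 -p 9050:9050), a concrete sketch of the equivalent portMappings could look like the following; whether you expose one or both ports depends on which one your load balancer actually targets:
"portMappings": [
  {
    "containerPort": 8118,
    "hostPort": 8118,
    "protocol": "tcp"
  },
  {
    "containerPort": 9050,
    "hostPort": 9050,
    "protocol": "tcp"
  }
]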

Running a docker container with the --privileged option

I'm currently trying to figure out how to run a container on Elastic Beanstalk in privileged mode. I read the documentation, but I can't find a way to do it.
I'm assuming you're launching to Docker running on ECS.
ECS uses task definitions to define how a Docker container should start up. Specifically, the task definition property privileged is what you're looking for.
Elastic Beanstalk uses the Dockerrun.aws.json file to generate a task definition. According to the documentation for v2 of the file, you can add this flag to one of the objects in the containerDefinitions block.
So, something like this should work:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "some:app",
      "essential": true,
      "memory": 128,
      "privileged": true
    }
  ]
}

How to use fluentd log driver on Elastic Beanstalk Multicontainer docker

I tried to use the fluentd log driver with the following Dockerrun.aws.json,
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "apache",
      "image": "php:5.6-apache",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "logConfiguration": {
        "logDriver": "fluentd",
        "options": {
          "fluentd-address": "127.0.0.1:24224"
        }
      }
    }
  ]
}
but the following error occurred.
ERROR: Encountered error starting new ECS task: {
  "failures": [
    {
      "reason": "ATTRIBUTE",
      "arn": "arn:aws:ecs:ap-northeast-1:000000000000:container-instance/00000000-0000-0000-0000-000000000000"
    }
  ],
  "tasks": []
}
ERROR: Failed to start ECS task after retrying 2 times.
ERROR: [Instance: i-00000000] Command failed on instance. Return code: 1 Output: beanstalk/hooks/appdeploy/enact/03start-task.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
What should I configure?
It seems that you can also accomplish this with a .ebextensions/01-fluentd.config file in your application environment directory with the following content:
files:
  "/home/ec2-user/setup-available-log-dirvers.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      set -e
      if ! grep fluentd /etc/ecs/ecs.config &> /dev/null
      then
        echo 'ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","syslog","fluentd"]' >> /etc/ecs/ecs.config
      fi

container_commands:
  01-configure-fluentd:
    command: /home/ec2-user/setup-available-log-dirvers.sh
Now you have to deploy a new application version (without the fluentd configuration yet), rebuild your environment, and then add the fluentd configuration:
logConfiguration:
  logDriver: fluentd
  options:
    fluentd-address: localhost:24224
    fluentd-tag: docker.myapp
Now deploy the updated app; everything should work.
I have resolved the problem myself.
First, I prepared a custom AMI with the following user data.
#cloud-config
repo_releasever: 2015.09
repo_upgrade: none
runcmd:
- echo 'ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","syslog","fluentd"]' >> /etc/ecs/ecs.config
Second, I set the ID of that custom AMI in my environment's EC2 settings. Finally, I deployed my application to Elastic Beanstalk. After this, the fluentd log driver in my environment works normally.
To use the fluentd log driver in Elastic Beanstalk Multicontainer Docker, you have to define the ECS_AVAILABLE_LOGGING_DRIVERS variable in /etc/ecs/ecs.config. Elastic Beanstalk Multicontainer Docker uses ECS under the hood, so the related settings are in the ECS documentation.
Please read logConfiguration section in the following documentation:
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
I have already added a comment to the accepted answer; here is the complete ebextensions file that I used to make it work for me:
files:
  "/home/ec2-user/setup-available-log-dirvers.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      set -e
      if ! grep fluentd /etc/ecs/ecs.config &> /dev/null
      then
        echo 'ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","syslog","fluentd"]' >> /etc/ecs/ecs.config
      fi

container_commands:
  00-configure-fluentd:
    command: /home/ec2-user/setup-available-log-dirvers.sh
  01-stop-ecs:
    command: stop ecs
  02-start-ecs:
    command: start ecs
We are just restarting ECS after setting the logging drivers.