I am using multi-container Elastic Beanstalk and trying to forward container logs to CloudWatch.
The option in Dockerrun.aws.json does not work for me, as I need to forward the logs of each environment to its own log group while deploying the same universal zip file to every environment. Unfortunately, there is no way to specify the log group as a variable in Dockerrun.aws.json.
So, what I am using is .ebextensions/00-container-logs.config:
files:
  "/etc/awslogs/config/container_logs.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      [app-container-logs]
      file=/var/log/containers/*-stdouterr.log
      log_group_name=`{"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "AppAndCrons"]]}`
      log_stream_name=ApplicationContainerLogs

commands:
  "01":
    command: service awslogs restart
The issue: once Docker starts logging to a new file, awslogs stops sending logs to CloudWatch until "service awslogs restart" is executed manually. Any ideas, please?
Adding file_fingerprint_lines helped a lot: the first line of each of my log files is empty, and by default the CloudWatch Logs agent hashes only the first line to identify the file.
    content: |
      [app-container-logs]
      file=/var/log/containers/*-stdouterr.log
      log_group_name=`{"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "AppAndCrons"]]}`
      log_stream_name=ApplicationContainerLogs
      file_fingerprint_lines=1-8
I am following a tutorial to deploy a Flask application with Docker to AWS Elastic Beanstalk (EB). I created an AWS Elastic Container Registry (ECR) and ran some commands which successfully pushed the Docker image to the ECR:
docker build -t app-backend .
docker tag app-backend:latest [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest
docker push [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest
Then I tried to deploy to EB:
eb init (selecting a Docker EB application I created on the AWS GUI)
eb deploy
On "eb init" I get the error "Cannot setup CodeCommit because there is no Source Control setup, continuing with initialization", but I assume this can be ignored as it otherwise looked fine. On "eb deploy" though, the deployment fails. In "eb-engine.log" (found in the AWS GUI), I see error messages like:
[ERROR] An error occurred during execution of command [app-deploy] - [Docker Specific Build Application]. Stop running the command. Error: failed to pull docker image: Command /bin/sh -c docker pull [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest failed with error exit status 1. Stderr:failed to register layer: Error processing tar file(exit status 1): write /root/.cache/pip/http/5/e/7/3/b/[long number]: no space left on device
When I manually run the pull command the error references (locally, not from the EB instance), the command seems to respond as expected:
docker pull [URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend:latest
What could be causing this deployment failure?
My Dockerrun.aws.json file looks like this:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "[URL_ID].dkr.ecr.us-east-1.amazonaws.com/app-backend",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 5000,
      "HostPort": 5000
    }
  ]
}
I solved this by following "how to prevent error "no space left on device" when deploying multi container docker application on AWS beanstalk?".
Basically, you find your Elastic Beanstalk instance in the EC2 console and modify its volumes to add space. Then, following the link in that Stack Overflow post, you SSH into the instance with eb ssh, use df -H and lsblk to see how much space is in each partition, and run commands like:
sudo growpart /dev/xvda 1
sudo xfs_growfs -d /
to grow the partition and filesystem so they use all the new space you added in the EC2 console. You can check with df -H and lsblk to confirm the resize gave you more space.
Then the eb deploy command should work. If SSH isn't set up yet, you may have to run eb ssh --setup first.
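If you'd rather not resize instances by hand (they get replaced on scaling events anyway), one option is to request a bigger root volume through an .ebextensions file so new instances come up with enough space. A minimal sketch, assuming it is the root volume that fills up; the size here is just an example:

option_settings:
  aws:autoscaling:launchconfiguration:
    RootVolumeType: gp2
    RootVolumeSize: "32"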
I'm currently trying to create a worker on AWS Elastic Beanstalk that pulls messages from a specific SQS queue (with the help of the Symfony Messenger). I don't want to use dedicated worker instances for this task. After some research, I found out that systemd, which is enabled by default on the new Amazon Linux 2 instances, can help here.
However, I'm not able to create a running systemd service. Here is my .ebextensions/03_workers.config file:
files:
  /etc/systemd/system/my_worker.service:
    mode: "000755"
    owner: root
    group: root
    content: |
      [Unit]
      Description=My worker

      [Service]
      User=nginx
      Group=nginx
      Restart=always
      ExecStart=/usr/bin/nohup /usr/bin/php /var/app/current/bin/console messenger:consume integration_incoming --time-limit=60

      [Install]
      WantedBy=multi-user.target

services:
  systemd:
    my_worker:
      enabled: "true"
      ensureRunning: "true"
I can't see my service running when I run this command:
systemctl | grep my_worker
What am I doing wrong? :)
systemd is not a supported key under services. The only valid key is sysvinit:
services:
  sysvinit:
    my_worker:
      enabled: "true"
      ensureRunning: "true"
But I don't think it will even work, as this is for Amazon Linux 1, not for Amazon Linux 2.
On Amazon Linux 2 you shouldn't even be using much of .ebextensions. The AWS docs specifically state:
On Amazon Linux 2 platforms, instead of providing files and commands in .ebextensions configuration files, we highly recommend that you use Buildfile, Procfile, and platform hooks whenever possible to configure and run custom code on your environment instances during instance provisioning.
Thus, you should consider using a Procfile, which does basically what you want to achieve:
Use a Procfile for long-running application processes that shouldn't exit. Elastic Beanstalk expects processes run from the Procfile to run continuously. Elastic Beanstalk monitors these processes and restarts any process that terminates. For short-running processes, use a Buildfile.
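For the worker from the question, a minimal Procfile sketch could look like the line below. This is an assumption: the process name worker is arbitrary, the command is taken from the ExecStart line above, and you should check whether your platform version runs additional Procfile processes besides the main one.

worker: /usr/bin/php /var/app/current/bin/console messenger:consume integration_incoming --time-limit=60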
Alternative
Since you already have created a unit file /etc/systemd/system/my_worker.service for systemd, you can enable and start it yourself.
For this, container_commands in .ebextensions can be used. For example:
container_commands:
  10_enable_worker:
    command: systemctl enable my_worker.service
  20_start_worker:
    command: systemctl start my_worker.service
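If the freshly written unit file isn't picked up, adding a daemon-reload step may help. A sketch under that assumption; since container_commands run in alphabetical order of their names, the 05_ entry runs before the enable/start commands above:

container_commands:
  05_reload_systemd:
    command: systemctl daemon-reload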
It's not officially documented, but you can use a systemd service in Amazon Linux 2.
A block like the following should work:
services:
  systemd:
    __SERVICE_NAME__:
      enabled: true
      ensureRunning: true
Support for a "systemd" service is provided by internal package /usr/lib/python3.7/site-packages/cfnbootstrap/construction.py which lists recognized service types: sysvinit, windows, and systemd
class CloudFormationCarpenter(object):
    _serviceTools = {"sysvinit": SysVInitTool, "windows": WindowsServiceTool, "systemd": SystemDTool}
Note that the service must also support chkconfig; in particular, your launch script at /etc/init.d/__SERVICE_NAME__ must include "chkconfig" and "description" lines similar to:
# chkconfig: 2345 70 60
# description: Continuously logs Nginx status.
If you don't support chkconfig correctly then chkconfig --list __SERVICE_NAME__ will print an error, and attempting to deploy to Elastic Beanstalk will log a more detailed error in /var/log/cfn-init.log when it tries to start the service.
I am using a Terraform script to create an AWS Elastic Beanstalk environment, and I need to run a shell script on instance launch.
I have already tried the following:
resource "aws_elastic_beanstalk_environment" "Environment" {
name = "${var.ebs_env_name}"
application = "${var.ebs_app_name}"
---
---
---
setting = {
namespace = "aws:autoscaling:launchconfiguration"
name = "user_data"
value = "${file("user-data.sh")}"
}
}
This throws the error:
Error applying plan:
1 error(s) occurred:
aws_elastic_beanstalk_environment.Environment: ConfigurationValidationException: Configuration validation exception: Invalid option specification (Namespace: 'aws:autoscaling:launchconfiguration', OptionName: 'user_data'): Unknown configuration setting.
status code: 400, request id: xxxxx-xxxxxx
Please help.
Thanks for the answer; I found a solution.
I created a folder .ebextensions and a file inside it called 99delayed_job.config (you can give it any name):
commands:
  create_post_dir:
    command: "mkdir /opt/elasticbeanstalk/hooks/appdeploy/pre"
    ignoreErrors: true

files:
  /opt/elasticbeanstalk/hooks/appdeploy/pre/99_restart_delayed_job.sh:
    group: root
    mode: "000755"
    owner: root
    content: |-
      #!/usr/bin/env bash
      <My shell script here>
I zipped the .ebextensions folder together with Dockerrun.aws.json, uploaded the zip to S3, and used it to deploy.
Working fine :)
I couldn't find any information about the AWS Elastic Beanstalk service exposing a way to modify the user_data on the instances. You can, however, adjust the AMI used, so you could use a tool like Packer to build yourself a custom AMI that includes the user data.
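Once you have such an AMI, pointing the environment at it from Terraform could look roughly like this. A sketch only: var.custom_ami_id is a hypothetical variable holding the Packer-built AMI ID, and the elided settings from the question are omitted.

resource "aws_elastic_beanstalk_environment" "Environment" {
  name        = "${var.ebs_env_name}"
  application = "${var.ebs_app_name}"

  setting {
    namespace = "aws:autoscaling:launchconfiguration"
    name      = "ImageId"
    value     = "${var.custom_ami_id}"
  }
}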
I'm running a single Docker container on Elastic Beanstalk using its Single Container Docker Configuration, and trying to send the application stdout to CloudWatch using the awslogs logging driver.
EB looks for a Dockerrun.aws.json file for the configuration of the container, but as far as I can see doesn't have an option to use awslogs as the container's logging driver (or add any other flags to the docker run command for that matter).
I've tried hacking into the docker run command using the answer provided here, by adding a file .ebextensions/01-commands.config with content:
commands:
  add_awslogs:
    command: 'sudo sed -i "s/docker run -d/docker run --log-driver=awslogs --log-opt awslogs-region=eu-west-2 --log-opt awslogs-group=dockerContainerLogs -d/" /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh'
This works, in the sense that it modifies the run script, and logs show up in CloudWatch.
But the EB application dies. The container is up, but does not respond to requests.
I find the following error in the container logs:
"logs" command is supported only for "json-file" and "journald" logging
drivers (got: awslogs)
I find answers to similar questions relating to ECS (not EB) suggesting to add awslogs to ECS_AVAILABLE_LOGGING_DRIVERS, but I can't find this configuration setting in EB.
Any thoughts?
I'm posting here the answer I received from AWS support:
As the Elastic Beanstalk Single Container environment saves stdout and stderr to /var/log/eb-docker/containers/eb-current-app/ by default, and as the new solution stack gives you the option to stream logs to CloudWatch, automating the configuration of the awslogs agent on the instances, what I recommend is to add an ebextension that adds the stdout and stderr log files to the CloudWatch configuration and use the already configured agent to stream those files to CloudWatch Logs, instead of touching the pre-hooks, which is not supported by AWS as hooks may change from one solution stack version to another.

Regarding the error you are seeing, ""logs" command is supported only for "json-file" and "journald" logging drivers (got: awslogs)": this comes from how Docker works. When it is configured to send logs to a driver other than json-file or journald, it is not able to display logs locally because it does not keep a local copy of them.
### BEGIN .ebextensions/logs.config
option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 7

files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [docker-stdout]
      log_group_name=/aws/elasticbeanstalk/environment_name/docker-stdout
      log_stream_name={instance_id}
      file=/var/log/eb-docker/containers/eb-current-app/*-stdouterr.log

commands:
  "00_restart_awslogs":
    command: service awslogs restart
### END .ebextensions/logs.config
I was able to expand on the previous answer for a multi-container Elastic Beanstalk environment, as well as inject the environment name. I did have to grant the correct permission in the EC2 instance role to be able to create the log group. You can see if it is working by looking in:
/var/log/awslogs.log
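For the instance role permission mentioned above, a minimal policy along these lines should be enough (a sketch; scope the Resource down to your own log groups as needed):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:/aws/elasticbeanstalk/*"
    }
  ]
}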
This goes in .ebextensions/logs.config:
option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 14

files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [/var/log/containers/docker-stdout]
      log_group_name=/aws/elasticbeanstalk/`{ "Ref" : "AWSEBEnvironmentName" }`/docker-stdout.log
      log_stream_name={instance_id}
      file=/var/log/containers/*-stdouterr.log

commands:
  "00_restart_awslogs":
    command: service awslogs restart
I tried to use the fluentd log driver with the following Dockerrun.aws.json,
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "apache",
      "image": "php:5.6-apache",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "logConfiguration": {
        "logDriver": "fluentd",
        "options": {
          "fluentd-address": "127.0.0.1:24224"
        }
      }
    }
  ]
}
but the following error occurred.
ERROR: Encountered error starting new ECS task: {
  "failures": [
    {
      "reason": "ATTRIBUTE",
      "arn": "arn:aws:ecs:ap-northeast-1:000000000000:container-instance/00000000-0000-0000-0000-000000000000"
    }
  ],
  "tasks": []
}
ERROR: Failed to start ECS task after retrying 2 times.
ERROR: [Instance: i-00000000] Command failed on instance. Return code: 1 Output: beanstalk/hooks/appdeploy/enact/03start-task.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
What should I configure?
It seems you can also accomplish this with a .ebextensions/01-fluentd.config file in your application environment directory with the following content:
files:
  "/home/ec2-user/setup-available-log-dirvers.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      set -e
      if ! grep fluentd /etc/ecs/ecs.config &> /dev/null
      then
        echo 'ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","syslog","fluentd"]' >> /etc/ecs/ecs.config
      fi

container_commands:
  01-configure-fluentd:
    command: /home/ec2-user/setup-available-log-dirvers.sh
Now deploy a new application version (without the fluentd configuration yet), rebuild your environment, and then add the fluentd configuration:
logConfiguration:
  logDriver: fluentd
  options:
    fluentd-address: localhost:24224
    fluentd-tag: docker.myapp
and now deploy the updated app; everything should work.
I have resolved the problem myself.
First, I prepared a custom AMI with the following user data:
#cloud-config
repo_releasever: 2015.09
repo_upgrade: none

runcmd:
  - echo 'ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","syslog","fluentd"]' >> /etc/ecs/ecs.config
Second, I set the ID of that custom AMI in my environment's EC2 settings. Finally, I deployed my application to Elastic Beanstalk. After this, the fluentd log driver in my environment works normally.
In order to use the fluentd log driver in Elastic Beanstalk Multicontainer Docker, you need to define the ECS_AVAILABLE_LOGGING_DRIVERS variable in /etc/ecs/ecs.config. Elastic Beanstalk Multicontainer Docker uses ECS internally, so the related settings are covered in the ECS documentation.
Please read the logConfiguration section in the following documentation:
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
I have already added a comment to the accepted answer; here is the complete ebextensions file I used to make it work for me:
files:
  "/home/ec2-user/setup-available-log-dirvers.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      set -e
      if ! grep fluentd /etc/ecs/ecs.config &> /dev/null
      then
        echo 'ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","syslog","fluentd"]' >> /etc/ecs/ecs.config
      fi

container_commands:
  00-configure-fluentd:
    command: /home/ec2-user/setup-available-log-dirvers.sh
  01-stop-ecs:
    command: stop ecs
  02-start-ecs:
    command: start ecs
We are just restarting ecs after setting the logging drivers.