Elastic Beanstalk Single Container Docker - use awslogs logging driver

I'm running a single Docker container on Elastic Beanstalk using its Single Container Docker Configuration, and trying to send the application stdout to CloudWatch using the awslogs logging driver.
EB looks for a Dockerrun.aws.json file for the configuration of the container, but as far as I can see doesn't have an option to use awslogs as the container's logging driver (or add any other flags to the docker run command for that matter).
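For reference, a minimal single-container (v1) Dockerrun.aws.json looks roughly like the sketch below (the image name and port are placeholders); as far as I can tell the schema has no field for a logging driver or for extra docker run flags:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.eu-west-2.amazonaws.com/my-app:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}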
I've tried hacking into the docker run command using the answer provided here, by adding a file .ebextensions/01-commands.config with content:
commands:
  add_awslogs:
    command: 'sudo sed -i "s/docker run -d/docker run --log-driver=awslogs --log-opt awslogs-region=eu-west-2 --log-opt awslogs-group=dockerContainerLogs -d/" /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh'
This works, in the sense that it modifies the run script, and logs show up in CloudWatch.
But the EB application dies. The container is up, but does not respond to requests.
I find the following error in the container logs:
"logs" command is supported only for "json-file" and "journald" logging
drivers (got: awslogs)
I've found answers to similar questions relating to ECS (not EB) that suggest adding awslogs to ECS_AVAILABLE_LOGGING_DRIVERS, but I can't find that configuration setting in EB.
Any thoughts?

I'm posting here the answer I received from AWS support:
By default, an Elastic Beanstalk Single Container environment saves stdout and stderr to /var/log/eb-docker/containers/eb-current-app/, and the newer solution stacks give you the option to stream logs to CloudWatch, automatically configuring the awslogs agent on the instances. What I recommend is to add an ebextension that adds the stdout/stderr log files to the CloudWatch configuration and lets the already-configured agent stream those files to CloudWatch Logs, instead of touching the pre-deploy hooks, which is not supported by AWS since hooks may change from one solution stack version to another.
Regarding the error you are seeing, '"logs" command is supported only for "json-file" and "journald" logging drivers (got: awslogs)': this is simply how Docker works. When it is configured to send logs to a driver other than json-file or journald, it cannot display logs locally because it does not keep a local copy of them.
### BEGIN .ebextensions/logs.config
option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 7
files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [docker-stdout]
      log_group_name=/aws/elasticbeanstalk/environment_name/docker-stdout
      log_stream_name={instance_id}
      file=/var/log/eb-docker/containers/eb-current-app/*-stdouterr.log
commands:
  "00_restart_awslogs":
    command: service awslogs restart
### END .ebextensions/logs.config

I was able to expand on the previous answer for a multi-container Elastic Beanstalk environment, as well as inject the environment name. I did have to grant the correct permissions in the EC2 instance role to be able to create the log group (see the policy sketch after the config below). You can check whether it is working by looking in:
/var/log/awslogs.log
This goes in .ebextensions/logs.config:
option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 14
files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [/var/log/containers/docker-stdout]
      log_group_name=/aws/elasticbeanstalk/`{ "Ref" : "AWSEBEnvironmentName" }`/docker-stdout.log
      log_stream_name={instance_id}
      file=/var/log/containers/*-stdouterr.log
commands:
  "00_restart_awslogs":
    command: service awslogs restart
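The "correct permissions" mentioned above will depend on how your instance role is set up; as a rough sketch (the resource ARN is a placeholder, scope it to your own log groups), the instance profile needs something along these lines:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:/aws/elasticbeanstalk/*"
    }
  ]
}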

Related

NoCredentialProviders error when running "docker compose up" with AWS ECS integration

I'm getting the following error when I try to run docker compose up to deploy my infrastructure to AWS using Docker's ECS integration. Note that I'm running this on Pop!_OS 21.10, which is based on Ubuntu.
NoCredentialProviders: no valid providers in chain. Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Things I've tried, based on an exhaustive search of SO and other sites:
Verified that my ~/.aws/config and ~/.aws/credentials files are formatted correctly, are in the proper place, and have the correct permissions
Verified that the aws cli works fine
Verified that AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION are all set correctly
Tried copying the config and credentials to /root/.aws
Tried setting AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION in the root user's environment
Created /etc/systemd/system/docker.service.d/aws-credentials.conf and populated it with:
[Service]
Environment="AWS_ACCESS_KEY_ID=********************"
Environment="AWS_SECRET_ACCESS_KEY=****************************************"
Ran docker -l debug compose up (the only extra information it provides is DEBUG deploying on AWS with region="us-east-1")
I'm running out of options. If anyone has any other ideas to try, I'd love to hear it. Thanks!
Update: I've also now tried the following, with no luck:
Tried setting Environment="AWS_SHARED_CREDENTIALS_FILE=/home/kespan/.aws/credentials"
Tried setting Environment="AWS_SHARED_CREDENTIALS_FILE=/home/kespan/.aws/credentials" in /etc/systemd/system/docker.service.d/override.conf
After remembering my IAM account has MFA enabled, generated a token and added Environment="AWS_SESSION_TOKEN=..." to override.conf
Also to note - each time after I've added/modified files under /etc/systemd/system/docker.service.d/ I've run:
sudo systemctl daemon-reload
sudo systemctl restart docker
Edit:
Here's one of the Dockerfiles (both the scraper and scheduler use an identical Dockerfile):
FROM denoland/deno:alpine
WORKDIR /app
USER deno
COPY deps.ts .
RUN deno cache --unstable --no-check deps.ts
COPY . .
RUN deno cache --unstable --no-check mod.ts
RUN mkdir -p /var/tmp/log
CMD ["run", "--unstable", "--allow-all", "--no-check", "mod.ts"]
Here's my docker-compose (some bits redacted):
version: '3'
services:
  grafana:
    container_name: grafana
    image: grafana/grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana:/var/lib/grafana
    deploy:
      replicas: 1
  scheduler:
    image: scheduler
    x-aws-pull-credentials: "arn..."
    container_name: scheduler
    environment:
      DB_CONNECTION_STRING: "postgres://..."
      SQS_URL: "..."
      SQS_REGION: "us-east-1"
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    deploy:
      replicas: 1
  scraper:
    image: scraper
    x-aws-pull-credentials: "arn..."
    container_name: scraper
    environment:
      DB_CONNECTION_STRING: "postgres://..."
      SQS_URL: "..."
      SQS_REGION: "us-east-1"
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    deploy:
      replicas: 1
volumes:
  grafana:
Have you attempted to use the Amazon ECS Local Container Endpoints tool that AWS Labs provides? It allows you to create an override file for your docker-compose configurations, and it will simulate the ECS endpoints and IAM roles you would be using in AWS.
This is done using the local AWS credentials you have on your workstation. More information is available on the AWS Blog.
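For anyone who wants to try that route, here is a rough sketch of a compose override based on the tool's README (the service name scheduler and the default AWS profile are assumptions; adapt them to your own compose file):
version: "3.4"
networks:
  credentials_network:
    driver: bridge
    ipam:
      config:
        - subnet: "169.254.170.0/24"
          gateway: 169.254.170.1
services:
  # Simulates the ECS credentials and metadata endpoints locally
  ecs-local-endpoints:
    image: amazon/amazon-ecs-local-container-endpoints
    volumes:
      - /var/run:/var/run          # Docker socket
      - $HOME/.aws/:/home/.aws/    # local AWS credentials
    environment:
      HOME: "/home"
      AWS_PROFILE: "default"
    networks:
      credentials_network:
        ipv4_address: "169.254.170.2"
  # Application containers fetch credentials from the endpoint above
  scheduler:
    depends_on:
      - ecs-local-endpoints
    networks:
      credentials_network:
        ipv4_address: "169.254.170.3"
    environment:
      AWS_DEFAULT_REGION: "us-east-1"
      AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/creds"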

CodePipeline script_start failed: Script does not exist at specified location

I am getting the error below when I start a deployment.
Script name
scripts/server_stop.sh
Message
Script does not exist at specified location: /opt/codedeploy-agent/deployment-root/2a16dbef-8b9f-42a5-98ad-1fd41f596acb/d-2KDHE6D5F/deployment-archive/scripts/server_stop.sh
The source and build stages are fine.
This is the appspec I am using:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/KnCare-Rest
file_exists_behavior: OVERWRITE
permissions:
  - object: /
    pattern: "**"
    owner: ec2-user
    group: ec2-user
hooks:
  #BeforeInstall:
  #  - location: scripts/server_clear.sh
  #    timeout: 300
  #    runas: ec2-use
  ApplicationStart:
    - location: scripsts/server_start.sh
      timeout: 20
      runas: ec2-user
  ApplicationStop:
    - location: scripsts/server_stop.sh
      timeout: 20
      runas: ec2-user
Two things cause this issue:
1. You deployed the scripts to the ASG or EC2 instance and then added additional scripts for the hook. For this you need to restart the ASG or EC2 instance before adding the additional scripts.
2. You used the same pipeline name again for a different deployment. For this you need to delete the input artifact in S3 before running the deployment, since AWS will not delete the artifacts even if you delete the pipeline.
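For the second case, clearing the stale input artifacts can be done with something along these lines (the bucket name and prefix are placeholders; check what your pipeline's artifact store actually points at):
aws s3 rm s3://codepipeline-us-east-1-123456789012/MyPipeline/ --recursive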
Did you SSH into the EC2 instance to verify that scripts/server_stop.sh exists and has execute permissions?

AWS CloudFormation -- auto mount EFS

I'm trying to create a CloudFormation template that creates an EC2 instance and then automatically mounts an EFS file system. Here's the relevant bit of the template. I find that the packages (amazon-efs-utils, nfs-utils) are not installed, so when the mount command is executed it fails.
I've verified that my other stack has what I need and that the output variable is correct: !ImportValue dmp356-efs-EFSId
If I log into the new instance and do the steps manually, it works fine and I can see my files in the EFS. Naturally I suspect that my template is wrong in some way, although it validates if I use "aws cloudformation validate-template ..." and it deploys with a successful conclusion. As I said, I can log into the new instance; it doesn't roll back.
Resources:
  TestHost:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        configSets:
          MountConfig:
            - setup
            - mount
        setup:
          packages:
            yum:
              amazon-efs-utils: []
              nfs-utils: []
          commands:
            01_createdir:
              command: "mkdir /nfs"
        mount:
          commands:
            01_mount:
              command: !Sub
                - mount -t efs ${EFS_ID} /nfs
                - { EFS_ID: !ImportValue dmp356-efs-EFSId }
            02_mount:
              command: "chown ec2-user.ec2-user /nfs"

AWS Elastic Beanstalk: streaming container logs to CloudWatch issue

I am using multi-container Beanstalk and trying to forward container logs to CloudWatch.
The logging option in Dockerrun.aws.json does not work for me, as I need to forward the logs for each environment to its own log group while keeping a universal zip file that is deployed to every environment. Unfortunately, there is no way to specify the log group as a variable in Dockerrun.aws.json.
So, what I am using is .ebextensions/00-container-logs.config:
files:
  "/etc/awslogs/config/container_logs.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      [app-container-logs]
      file=/var/log/containers/*-stdouterr.log
      log_group_name=`{"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "AppAndCrons"]]}`
      log_stream_name=ApplicationContainerLogs
commands:
  "01":
    command: service awslogs restart
The issue: once docker starts logging to a new file, it stops sending the logs to CloudWatch until the command "service awslogs restart" is executed manually. Any ideas, please?
Adding file_fingerprint_lines helped a lot, because the first line of each of my log files is empty (and the agent fingerprints a file by hashing its first line by default).
    content: |
      [app-container-logs]
      file=/var/log/containers/*-stdouterr.log
      log_group_name=`{"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "AppAndCrons"]]}`
      log_stream_name=ApplicationContainerLogs
      file_fingerprint_lines=1-8

CI/CD with AWS CodePipeline for a Django App

Currently I have an AWS CodeCommit repository and an AWS Elastic Beanstalk environment to which I upload updates with the EB CLI, using eb deploy.
I have some config files that are ignored in .gitignore. I want to set up an AWS CodePipeline so that when I push changes to the repository, it automatically runs the test functions and uploads the changes directly to Elastic Beanstalk.
I tried implementing a simple pipeline where I push code to CodeCommit and it deploys to Elastic Beanstalk, but I get the following error:
2019-09-09 11:51:45 UTC-0500 ERROR "option_settings" in one of the configuration files failed validation. More details to follow.
2019-09-09 11:51:45 UTC-0500 ERROR You cannot remove an environment from a VPC. Launch a new environment outside the VPC.
2019-09-09 11:51:45 UTC-0500 ERROR Failed to deploy application.
This is the *.config file that isn't in CodeCommit:
option_settings:
  aws:ec2:vpc:
    VPCId: vpc-xxx
    Subnets: 'subnet-xxx'
  aws:elasticbeanstalk:environment:
    EnvironmentType: SingleInstance
    ServiceRole: aws-xxxx
  aws:elasticbeanstalk:container:python:
    WSGIPath: xxx/wsgi.py
  aws:elasticbeanstalk:healthreporting:system:
    SystemType: enhanced
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: xxxxsettings
    SECRET_KEY: xxxx
    DB_NAME: xxxx
    DB_USER: xxxx
    DB_PASSWORD: xxxx
    DB_HOST: xxx
    DB_PORT: xxxx
  aws:autoscaling:launchconfiguration:
    SecurityGroups: sg-xxx
I noticed some syntax that is a little different from the rest: the Subnets: value has quotes ('') around it. Could this be causing the issue, and if quotes belong there, are they supposed to be around the other values too?
From the config file it looks like you are using a single instance. For a SingleInstance environment you don't need to specify an Auto Scaling launch configuration; just remove the last two lines and it should work fine.
From what I have been reading, I think I should not commit my config files, but instead add them in CodeBuild so that it generates the .zip file that gets deployed to Elastic Beanstalk.
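That approach might look roughly like the buildspec.yml sketch below (the SSM parameter name and target path are hypothetical; the point is just that the ignored config files get pulled in at build time and packaged into the artifact that CodePipeline hands to Elastic Beanstalk):
version: 0.2
phases:
  build:
    commands:
      # Fetch the environment-specific config that is not committed to git
      # (parameter name and target path are hypothetical)
      - aws ssm get-parameter --name /myapp/eb-options --with-decryption --query Parameter.Value --output text > .ebextensions/options.config
      - python manage.py test
artifacts:
  files:
    - '**/*'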