Copying a file from S3 into my codebase when using Elastic Beanstalk

I have the following script:
Parameters:
  bucket:
    Type: CommaDelimitedList
    Description: "Name of the Amazon S3 bucket that contains your file"
    Default: "my-bucket"
  fileuri:
    Type: String
    Description: "Path to the file in S3"
    Default: "https://my-bucket.s3.eu-west-2.amazonaws.com/oauth-private.key"
  authrole:
    Type: String
    Description: "Role with permissions to download the file from Amazon S3"
    Default: "aws-elasticbeanstalk-ec2-role"
files:
  /var/app/current/storage/oauth-private.key:
    mode: "000600"
    owner: webapp
    group: webapp
    source: { "Ref" : "fileuri" }
    authentication: S3AccessCred
Resources:
  AWSEBAutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Metadata:
      AWS::CloudFormation::Authentication:
        S3AccessCred:
          type: "S3"
          roleName: { "Ref" : "authrole" }
          buckets: { "Ref" : "bucket" }
The issue that I am having is that when this is deployed, the file isn't present in the /var/app/current/storage directory.
I thought that maybe this script was running too soon and the current directory wasn't ready yet, so I tried the ondeck directory, but that doesn't work either.
If I change the path to anywhere other than my codebase directory, it works and the file is copied from S3.
Any ideas? Thanks.

Directives under the files key are processed before your web application is set up. You will need to download the file to a temporary location first, and then use a container_command to move it into your app in the current directory.
This AWS doc lists, near the top, the order in which keys are processed. The files key is processed before commands, and commands are run before the application and web server are set up. However, the container_commands section notes that they are used "to execute commands that affect your application source code".
So you should modify your script to something like this:
Parameters: ...
Resources: ...
files:
  "/tmp/oauth-private.key":
    mode: "000600"
    owner: webapp
    group: webapp
    source: { "Ref" : "fileuri" }
    authentication: S3AccessCred
container_commands:
  file_transfer_1:
    command: "mkdir -p storage"
  file_transfer_2:
    command: "mv /tmp/oauth-private.key storage"

Related

How to create an 'AWS::SSM::Document' with DocumentType of Package using CloudFormation

This AWS CloudFormation document suggests that it is possible to administer an 'AWS::SSM::Document' resource with a DocumentType of 'Package'. However, the 'Content' required to achieve this remains a mystery.
Is it possible to create a Document of type 'Package' via CloudFormation, and if so, what is the equivalent of this valid CLI command written as a CloudFormation template (preferably with YAML formatting)?
ssm create-document --name my-package --content "file://manifest.json" --attachments Key="SourceUrl",Values="s3://my-s3-bucket" --document-type Package
Failed attempt: the Content used is an inline version of the manifest.json that was provided when using the CLI option. There doesn't seem to be a way to specify an AttachmentSource when using CloudFormation:
AWSTemplateFormatVersion: 2010-09-09
Resources:
  Document:
    Type: AWS::SSM::Document
    Properties:
      Name: 'my-package'
      Content: !Sub |
        {
          "schemaVersion": "2.0",
          "version": "Auto-Generated-1579701261956",
          "packages": {
            "windows": {
              "_any": {
                "x86_64": {
                  "file": "my-file.zip"
                }
              }
            }
          },
          "files": {
            "my-file.zip": {
              "checksums": {
                "sha256": "sha...."
              }
            }
          }
        }
      DocumentType: Package
CloudFormation Error
AttachmentSource not provided in the input request. (Service: AmazonSSM; Status Code: 400; Error Code: InvalidParameterValueException;
Yes, this is possible! I've successfully created a resource with DocumentType: Package and the package shows up in the SSM console under Distributor Packages after the stack succeeds.
Your YAML is almost there, but you need to also include the Attachments property that is now available.
Here is a working example:
AWSTemplateFormatVersion: "2010-09-09"
Description: Sample to create a Package type Document
Parameters:
S3BucketName:
Type: "String"
Default: "my-sample-bucket-for-package-files"
Description: "The name of the S3 bucket."
Resources:
CrowdStrikePackage:
Type: AWS::SSM::Document
Properties:
Attachments:
- Key: "SourceUrl"
Values:
- !Sub "s3://${S3BucketName}"
Content:
!Sub |
{
"schemaVersion": "2.0",
"version": "1.0",
"packages": {
"windows": {
"_any": {
"_any": {
"file": "YourZipFileName.zip"
}
}
}
},
"files": {
"YourZipFileName.zip": {
"checksums": {
"sha256": "7981B430E8E7C45FA1404FE6FDAB8C3A21BBCF60E8860E5668395FC427CE7070"
}
}
}
}
DocumentFormat: "JSON"
DocumentType: "Package"
Name: "YourPackageNameGoesHere"
TargetType: "/AWS::EC2::Instance"
Note: for the Attachments property you must use the SourceUrl key when using DocumentType: Package. The creation process appends a "/" to this S3 bucket URL and concatenates it with each file name listed in the manifest (the Content property) when it creates the package.
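For example, with the sample bucket and file name used above, the creation step would effectively fetch the attachment from:
s3://my-sample-bucket-for-package-files/YourZipFileName.zip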
It seems there is no direct way to create an SSM Document with an attachment via CloudFormation (CFN). As a workaround, you can use a Lambda-backed custom resource: a Lambda function calls the SDK to create the SSM Document, and a Custom Resource in the CFN template invokes that Lambda.
Some notes on how to implement this solution:
How to invoke Lambda from CFN: Is it possible to trigger a lambda on creation from CloudFormation template
Sample of a Lambda sending the response format (when using a Custom Resource in CFN): https://github.com/stelligent/cloudformation-custom-resources
To deploy the Lambda with best practices and easily upload the attachment and Document content from local files, you should use sam deploy instead of CFN create-stack.
You can pass information about the newly created resource from the Lambda back to CFN by adding the resource details to the data JSON in the response the Lambda sends back; CFN can then use it with !GetAtt CustomResrc.Attribute. You can find more detail here.
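A minimal sketch of this Lambda-backed custom resource approach could look like the following (the role, function and resource names, and the inline handler are illustrative assumptions, not a drop-in implementation):
Resources:
  SsmPackageFunctionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal: { Service: lambda.amazonaws.com }
            Action: sts:AssumeRole
      Policies:
        - PolicyName: ssm-package-document
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                # Broad permissions for the sketch; scope these down in practice
                Action: [ "ssm:CreateDocument", "ssm:DeleteDocument", "s3:GetObject", "logs:*" ]
                Resource: "*"
  SsmPackageFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.9
      Handler: index.handler
      Timeout: 60
      Role: !GetAtt SsmPackageFunctionRole.Arn
      Code:
        ZipFile: |
          import boto3, cfnresponse
          def handler(event, context):
              ssm = boto3.client('ssm')
              props = event['ResourceProperties']
              try:
                  if event['RequestType'] == 'Create':
                      # Create the Package document with its S3 attachment
                      ssm.create_document(
                          Name=props['Name'],
                          Content=props['Content'],
                          DocumentType='Package',
                          Attachments=[{'Key': 'SourceUrl', 'Values': [props['SourceUrl']]}])
                  elif event['RequestType'] == 'Delete':
                      # Clean up when the stack is deleted
                      ssm.delete_document(Name=props['Name'])
                  cfnresponse.send(event, context, cfnresponse.SUCCESS, {}, props['Name'])
              except Exception as e:
                  cfnresponse.send(event, context, cfnresponse.FAILED, {'Error': str(e)})
  MyPackageDocument:
    Type: Custom::SsmPackageDocument
    Properties:
      ServiceToken: !GetAtt SsmPackageFunction.Arn
      Name: my-package
      SourceUrl: s3://my-s3-bucket
      Content: "{ ...inline manifest.json... }"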
There are some drawbacks to this solution:
It adds complexity to the original solution, as you have to create resources for the Lambda execution (an S3 bucket to deploy the Lambda, a role for the Lambda that can also make the SSM calls, the SSM content file, or else a long inline Content). It won't be a one-call CFN create-stack anymore. However, you can put everything into the SAM template, because at the end of the day it's just a CFN template.
When deleting the CFN stack, you have to handle RequestType == Delete in the Lambda to clean up your resource.
PS: If you don't have to work strictly with CFN, you can try Terraform: https://www.terraform.io/docs/providers/aws/r/ssm_document.html

Elastic Beanstalk not streaming to CloudWatch

I've had an Elastic Beanstalk instance streaming my logs to CloudWatch for about a year. This week the logs stopped streaming. This may have been because I 'rebuilt' the environment in Beanstalk. No configuration changes were made at the same time.
I've double checked that my Beanstalk role has the correct permissions in IAM (it has CloudWatchFullAccess).
I also tried deleting all of my existing group logs. I then went into the Beanstalk 'Instance log streaming to CloudWatch Logs' area, changed my log retention period and restarted the App Server. Sure enough my log groups were recreated (with the new retention period), so I'm pretty sure the permissions look OK. Despite this, no log messages are appearing in the log groups.
I have requested the recent logs through Beanstalk and I can see messages are being written to the logs on the App Server OK.
My platform is Tomcat 8 with Java 8 running on 64bit Amazon Linux/2.6.2
I'm not sure where to go from here. I have no error messages to work off, or any good ideas for what to check next.
Edit: Here is my custom config for CloudWatch, as defined here
files:
  "/etc/awslogs/config/company_log.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [/var/log/tomcat8/company.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat8/company.log"]]}`
      log_stream_name = {instance_id}
      file = /var/log/tomcat8/company.*
It's probably because the instances aren't allowed to. Make sure the role used by the EC2 instances is allowed to perform these actions:
logs:CreateLogGroup
logs:CreateLogStream
logs:GetLogEvents
logs:PutLogEvents
logs:DescribeLogGroups
logs:DescribeLogStreams
logs:PutRetentionPolicy
With Elastic Beanstalk the IAM role used by the EC2 instances is probably aws-elasticbeanstalk-ec2-role. Attach a new policy to that role, e.g. ec2-cloudwatch-logs-stream (if someone knows a better name let me know), using this JSON (as suggested here):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:GetLogEvents",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:PutRetentionPolicy"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
And it should work after restarting the awslogs service with sudo service awslogs restart. If not, check the agent's own log; you can find it at /var/log/awslogs.log.
You might have upgraded your AWS AMI platform, in which case the Tomcat location is different (e.g. /var/log/tomcat instead of /var/log/tomcat8).
I've detailed in a Medium blog post how this all works, with an example .ebextensions file and where to put it.
Below is an excerpt that you might be able to use, though the article explains how to determine the right folder/file(s) to stream.
packages:
  yum:
    awslogs: []
option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 90
files:
  "/etc/awslogs/awscli.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [plugins]
      cwlogs = cwlogs
      [default]
      region = `{"Ref":"AWS::Region"}`
  "/etc/awslogs/config/logs.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [/var/log/tomcat/localhost.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat/localhost.log"]]}`
      log_stream_name = {instance_id}
      file = /var/log/tomcat/localhost.*
      [/var/log/tomcat/catalina.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat/catalina.log"]]}`
      log_stream_name = {instance_id}
      file = /var/log/tomcat/catalina.*
      [/var/log/tomcat/localhost_access_log.txt]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat/access_log"]]}`
      log_stream_name = {instance_id}
      file = /var/log/tomcat/access_log.*
commands:
  "01":
    command: systemctl enable awslogsd.service
  "02":
    command: systemctl restart awslogsd

AWS: IAM permission discrepancies

I am provisioning an ECS cluster using this template as provided by AWS.
I also want to add a file from an S3 bucket, but when I add the following
files:
  "/home/ec2-user/.ssh/authorized_keys":
    mode: "000600"
    owner: ec2-user
    group: ec2-user
    source: "https://s3-eu-west-1.amazonaws.com/mybucket/myfile"
the provisioning fails with this error in /var/log/cfn-init.log
[root@ip-10-17-19-56 ~]# tail -f /var/log/cfn-init.log
File "/usr/lib/python2.7/dist-packages/cfnbootstrap/construction.py", line 251, in build
changes['files'] = FileTool().apply(self._config.files, self._auth_config)
File "/usr/lib/python2.7/dist-packages/cfnbootstrap/file_tool.py", line 138, in apply
self._write_file(f, attribs, auth_config)
File "/usr/lib/python2.7/dist-packages/cfnbootstrap/file_tool.py", line 225, in _write_file
raise ToolError("Failed to retrieve %s: %s" % (source, e.strerror))
ToolError: Failed to retrieve https://s3-eu-west-1.amazonaws.com/mybucket/myfile: HTTP Error 403 : <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>C6CDAC18E57345BF</RequestId><HostId>VFCrqxtbAsTeFrGxp/nzgBqJdwC7IsS3phjvPq/YzhUk8zuRhemquovq3Plc8aqFC73ki78tK+U=</HostId></Error>
However, from within the instance (without the above section), the following command succeeds!
aws s3 cp s3://mybucket/myfile .
You need to use the AWS::CloudFormation::Authentication resource to specify authentication credentials for files or sources that you specify with the AWS::CloudFormation::Init resource.
Example:
Metadata:
  AWS::CloudFormation::Init:
    ...
  AWS::CloudFormation::Authentication:
    S3AccessCreds:
      type: "S3"
      buckets:
        - "mybucket"
      roleName:
        Ref: "myRole"
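Depending on your template, you may also need to point the files entry at that credentials block explicitly (the same pattern used in the first question above), for example:
files:
  "/home/ec2-user/.ssh/authorized_keys":
    mode: "000600"
    owner: ec2-user
    group: ec2-user
    source: "https://s3-eu-west-1.amazonaws.com/mybucket/myfile"
    authentication: "S3AccessCreds"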

Cloudformation: Outputs - Can you return a value from a file

CloudFormation has an "Outputs" section where you can have a value referenced by other stacks, or displayed back to the user, etc.
The limited doc is here.
Is it possible to use this to make the contents of a file available?
e.g. I've got a Jenkins install where the initial admin password is stored within:
/var/lib/jenkins/secrets/initialAdminPassword
I'd love to have that value available after deploying our Jenkins Cloudformation stack without having to then SSH into the server.
Is this possible with the outputs section, or any other way with cloudformation templates?
The Outputs section of a CloudFormation template is meant to help you find your resources easily.
For any resource you create, you can output the properties listed in the Fn::GetAtt documentation.
For example, to get the connection string for an RDS instance created by the CloudFormation template, you can use the following:
"Outputs" : {
"JDBCConnectionString": {
"Description" : "JDBC connection string for the master database",
"Value" : { "Fn::Join": [ "",
[ "jdbc:mysql://",
{ "Fn::GetAtt": [ "MyDatabase", "Endpoint.Address" ] },
":",
{ "Fn::GetAtt": [ "MyDatabase", "Endpoint.Port" ] },
"/",
{ "Ref": "MyDBName" }]
]}
}
}
It is not possible to output the contents of a file. Moreover, outputs are visible to all users with access to your AWS account, so having a password as an output is not recommended.
I would suggest uploading your secrets to a private S3 bucket after the CloudFormation create-stack operation succeeds, and downloading them whenever required.
Hope this helps.
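One way to automate that suggestion, as a minimal sketch (the bucket name is illustrative, the instance profile must allow s3:PutObject on the bucket, and the AWS CLI must be installed on the instance), is to copy the secret from the instance's UserData once Jenkins has generated it:
JenkinsInstance:
  Type: AWS::EC2::Instance
  Properties:
    # ...other properties (AMI, instance type, instance profile) elided...
    UserData: !Base64
      Fn::Sub: |
        #!/bin/bash -x
        # Wait until Jenkins has written the initial admin password, then copy it
        # to a private bucket (name is illustrative) with server-side encryption.
        until [ -f /var/lib/jenkins/secrets/initialAdminPassword ]; do sleep 5; done
        aws s3 cp /var/lib/jenkins/secrets/initialAdminPassword \
          s3://my-private-secrets-bucket/${AWS::StackName}/initialAdminPassword --sse AES256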
I know this question has been answered but I wanted to offer another solution.
I found myself wanting to do exactly what you (the OP) were trying to do: use CloudFormation to install Jenkins on an EC2 instance and then print the initial admin password to the CloudFormation outputs.
I ended up working around trying to read the file with the password and instead used the Jenkins CLI from the UserData section to update the admin user with a password that I specified.
Here’s what I did (showing snippets from the template in YAML):
Added a parameter to the template inputs to get the password:
Parameters:
  KeyName:
    ConstraintDescription: Must be the name of an existing EC2 KeyPair.
    Description: Name of an existing EC2 KeyPair for SSH access
    Type: AWS::EC2::KeyPair::KeyName
  PassWord:
    AllowedPattern: '[-_a-zA-Z0-9]*'
    ConstraintDescription: A complex password at least eight chars long with alphanumeric characters, dashes and underscores.
    Description: Password for the admin account
    MaxLength: 64
    MinLength: 8
    NoEcho: true
    Type: String
In the UserData section, I used the PassWord parameter in a call to the jenkins-cli to update the admin account:
UserData: !Base64
  Fn::Join:
    - ''
    - - "#!/bin/bash -x\n"
      - "exec > /tmp/user-data.log 2>&1\n"
      - "unset UCF_FORCE_CONFFOLD\n"
      - "export UCF_FORCE_CONFFNEW=YES\n"
      - "ucf --purge /boot/grub/menu.lst\n"
      - "export DEBIAN_FRONTEND=noninteractive\n"
      - "echo \"deb http://pkg.jenkins-ci.org/debian binary/\" > /etc/apt/sources.list.d/jenkins.list\n"
      - "wget -q -O jenkins-ci.org.key http://pkg.jenkins-ci.org/debian-stable/jenkins-ci.org.key\n"
      - "apt-key add jenkins-ci.org.key\n"
      - "apt-get update\n"
      - "apt-get -o Dpkg::Options::=\"--force-confnew\" --force-yes -fuy upgrade\n"
      - "apt-get install -y python-pip\n"
      - "pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz\n"
      - "apt-get install -y nginx\n"
      - "apt-get install -y openjdk-8-jdk\n"
      - "apt-get install -y jenkins\n"
      - "# Wait for Jenkins to Set Up\n"
      - "until [ $(curl -o /dev/null --silent --head --write-out '%{http_code}\n' http://localhost:8080) -eq 403 ]; do sleep 1; done\n"
      - "sleep 10\n"
      - "# Change the password for the admin account\n"
      - "echo 'jenkins.model.Jenkins.instance.securityRealm.createAccount(\"admin\", \""
      - !Ref 'PassWord'
      - "\")' | java -jar /var/cache/jenkins/war/WEB-INF/jenkins-cli.jar -s \"http://localhost:8080/\" -auth \"admin:$(cat /var/lib/jenkins/secrets/initialAdminPassword)\" groovy =\n"
      - "/usr/local/bin/cfn-init --resource=Instance --region="
      - !Ref 'AWS::Region'
      - ' --stack='
      - !Ref 'AWS::StackName'
      - "\n"
      - "unlink /etc/nginx/sites-enabled/default\n"
      - "systemctl reload nginx\n"
      - "/usr/local/bin/cfn-signal -e $? --resource=Instance --region="
      - !Ref 'AWS::Region'
      - ' --stack='
      - !Ref 'AWS::StackName'
      - "\n"
Using this method, when Jenkins starts up, I don't get the "enter the initial admin password" screen; instead I get a screen where I can just log in as admin with the password passed in the parameters.
In terms of adding something to the outputs from a file on the system, I think there is a way to do it using a WaitCondition and passing data back with a cfn-signal command. But once I figured out that all I needed to do was set the password, I didn't pursue the WaitCondition method.
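For completeness, a rough sketch of that WaitCondition idea (the handle name and timeout are illustrative, and the earlier caveat about exposing secrets in outputs still applies):
Resources:
  SecretWaitHandle:
    Type: AWS::CloudFormation::WaitConditionHandle
  SecretWaitCondition:
    Type: AWS::CloudFormation::WaitCondition
    Properties:
      Handle: !Ref SecretWaitHandle
      Timeout: "900"
# In the instance UserData, once the file exists, signal its contents back, e.g.:
#   /usr/local/bin/cfn-signal -e 0 -d "$(cat /var/lib/jenkins/secrets/initialAdminPassword)" "<presigned URL from !Ref SecretWaitHandle>"
Outputs:
  SignaledData:
    Description: JSON map of the data passed back via cfn-signal -d
    Value: !GetAtt SecretWaitCondition.Data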
Again, I know you have your answer, but I wanted to share in case anyone else happens to be searching for a way to do this. This way worked for me! :D

AWS Elastic Beanstalk: Add custom logs to CloudWatch?

How do you add custom logs to CloudWatch? The default logs are sent, but how do you add a custom one?
I already added a file like this (in .ebextensions):
files:
  "/opt/elasticbeanstalk/tasks/bundlelogs.d/applogs.conf" :
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/app/current/logs/*
  "/opt/elasticbeanstalk/tasks/taillogs.d/cloud-init.conf" :
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/app/current/logs/*
Having added the bundlelogs.d and taillogs.d entries, these custom logs are now tailed and can be retrieved from the console or web, which is nice, but they don't persist and are not sent to CloudWatch.
In CloudWatch I have the defaults logs like
/aws/elasticbeanstalk/InstanceName/var/log/eb-activity.log
And I want to have another one like this
/aws/elasticbeanstalk/InstanceName/var/app/current/logs/mycustomlog.log
Both bundlelogs.d and taillogs.d only affect logs retrieved from the management console. What you want is to extend the default log streaming (e.g. eb-activity.log) to CloudWatch Logs. To extend the log streaming, you need to add another configuration under /etc/awslogs/config/. The configuration should follow the agent configuration file format.
I've successfully extended my logs on my custom Ubuntu/nginx/PHP platform. Here is my extension file FYI, and here is an official sample.
In your case, it could be like
files:
  "/etc/awslogs/config/my_app_log.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [/var/app/current/logs/xxx.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/app/current/logs/xxx.log"]]}`
      log_stream_name = {instance_id}
      file = /var/app/current/logs/xxx.log*
Credits where due go to Sebastian Hsu and Abhyudit Jain.
This is the final config file I came up with for .ebextensions for our particular use case. Notes explaining some aspects are below the code block.
files:
  "/etc/awslogs/config/beanstalklogs_custom.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [/var/log/tomcat8/catalina.out]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Fn::Select" : [ "1", { "Fn::Split" : [ "-", { "Ref":"AWSEBEnvironmentName" } ] } ] }, "var/log/tomcat8/catalina.out"]]}`
      log_stream_name = `{"Fn::Join":["--", [{ "Ref":"AWSEBEnvironmentName" }, "{instance_id}"]]}`
      file = /var/log/tomcat8/catalina.out*
services:
  sysvinit:
    awslogs:
      files:
        - "/etc/awslogs/config/beanstalklogs_custom.conf"
commands:
  rm_beanstalklogs_custom_bak:
    command: "rm beanstalklogs_custom.conf.bak"
    cwd: "/etc/awslogs/config"
    ignoreErrors: true
log_group_name
We have a standard naming scheme for our EB environments which is exactly environmentName-environmentType. I'm using { "Fn::Split" : [ "-", { "Ref":"AWSEBEnvironmentName" } ] } to split that into an array of two strings (name and type).
Then I use { "Fn::Select" : [ "1", <<SPLIT_OUTPUT>> ] } to get just the type string. Your needs would obviously differ, so you may only need the following:
log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat8/catalina.out"]]}`
log_stream_name
I'm using the Fn::Join function to join the EB environment name with the instance ID. Note that the instance ID template is a string that gets echoed exactly as given.
services
The awslogs service is restarted automatically when the custom conf file is deployed.
commands
When the files block overwrites an existing file, it creates a backup file, like beanstalklogs_custom.conf.bak. This block erases that backup file, because the awslogs service reads both files, which could cause a conflict.
Result
If you log in to an EC2 instance and sudo cat the file, you should see something like this. Note that all the Fn functions have resolved. If you find that an Fn function didn't resolve, check it for syntax errors.
[/var/log/tomcat8/catalina.out]
log_group_name = /aws/elasticbeanstalk/environmentType/var/log/tomcat8/catalina.out
log_stream_name = environmentName-environmentType--{instance_id}
file = /var/log/tomcat8/catalina.out*
The awslogs agent looks in its configuration file for the log files it's supposed to send. There are some defaults in it; you need to edit it and specify your files.
You can check and edit the configuration file located at:
/etc/awslogs/awslogs.conf
Make sure to restart the service:
sudo service awslogs restart
You can specify your own files there and create different groups and what not.
Please refer to the following link and you'll be able to get your logs in no time.
Resources:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html
Edit:
As you don't want to edit the files on the instance, you can add the relevant code to the .ebextensions folder in the root of your code. For example, this is my 01_cloudwatch.config:
packages:
  yum:
    awslogs: []
container_commands:
  01_get_awscli_conf_file:
    command: "aws s3 cp s3://project/awscli.conf /etc/awslogs/awscli.conf"
  02_get_awslogs_conf_file:
    command: "aws s3 cp s3://project/awslogs.conf.${NODE_ENV} /etc/awslogs/awslogs.conf"
  03_restart_awslogs:
    command: "sudo service awslogs restart"
  04_start_awslogs_at_system_boot:
    command: "sudo chkconfig awslogs on"
In this config, I am fetching the appropriate config file from an S3 bucket depending on the NODE_ENV. You can do anything you want in your config.
Some great answers already here.
I've detailed in a Medium blog post how this all works, with an example .ebextensions file and where to put it.
Below is an excerpt that you might be able to use; the article explains how to determine the right folder/file(s) to stream.
Note that if /var/app/current/logs/* contains many different files this may not work, e.g. if you have
database.log
app.log
random.log
Then you should consider adding a stream for each. However, if you have
app.2021-10-18.log
app.2021-10-17.log
app.2021-10-16.log
Then you can use /var/app/current/logs/app.*
packages:
  yum:
    awslogs: []
option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 90
files:
  "/etc/awslogs/awscli.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [plugins]
      cwlogs = cwlogs
      [default]
      region = `{"Ref":"AWS::Region"}`
  "/etc/awslogs/config/logs.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [/var/app/current/logs]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "/var/app/current/logs"]]}`
      log_stream_name = {instance_id}
      file = /var/app/current/logs/*
commands:
  "01":
    command: systemctl enable awslogsd.service
  "02":
    command: systemctl restart awslogsd
Looking at the AWS docs it's not immediately apparent, but there are a few things you need to do.
(Our environment is an Amazon Linux AMI - Rails App on the Ruby 2.6 Puma Platform).
First, create a policy in IAM to give your EB-generated EC2 instances access to work with CloudWatch log groups and stream to them - we named ours "EB-Cloudwatch-LogStream-Access".
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:CreateLogGroup",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:/aws/elasticbeanstalk/*:log-stream:*"
    }
  ]
}
Once you have created this, make sure the policy is attached (in IAM > Roles) to your IAM Instance Profile and Service Role that are associated with your EB environment (check the environment's configuration page: Configuration > Security > IAM instance profile | Service Role).
Then, provide a .config file in your .ebextensions directory such as setup_stream_to_cloudwatch.config or 0x_setup_stream_to_cloudwatch.config. In our project we made it the last extension .config file to run during our deploys by setting a high number for 0x (e.g. 09_setup_stream_to_cloudwatch.config).
Then, provide the following, replacing your_log_file with the appropriate filename, keeping in mind that some log files live in /var/log on an Amazon Linux AMI and some (such as those generated by your application) may live in a path such as /var/app/current/log:
files:
  '/etc/awslogs/config/logs.conf':
    mode: '000600'
    owner: root
    group: root
    content: |
      [/var/app/current/log/your_log_file.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/app/current/log/your_log_file.log"]]}`
      log_stream_name = {instance_id}
      file = /var/app/current/log/your_log_file.log*
commands:
  "01":
    command: chkconfig awslogs on
  "02":
    command: service awslogs restart # note that this works for Amazon Linux AMI only - other Linux instances likely use `systemd`
Deploy your application, and you should be set!