I am looking to copy a file from the S3 bucket ls-Bucket to the /tmp/ folder on my EC2 instance. I want this to happen when I upload my WAR file to Elastic Beanstalk and hit deploy.
Here is my config file, setup.conf, in the .ebextensions folder:
container_commands:
  # Copy script from s3-bucket to PATH: /tmp/myFile.txt
  01-copyFromS3ToTmp:
    files:
      "/tmp/myFile.txt":
        source: https://ls-Busket.s3-eu-west-1.amazonaws.com/myFile.txt
        authentication: S3Access
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: ls-Busket
I upload and deploy using Elastic Beanstalk and the environment health is OK,
but when I SSH into my instance and check the /tmp folder, my file isn't there and I can't see any errors.
Can anyone tell me what I am doing wrong?
Any help is appreciated; I'm new to AWS.
Thanks in advance
G
Your method of file creation is not quite right.
Try this:
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: ["ls-Busket"]
files:
  "/tmp/myFile.txt":
    source: https://ls-Busket.s3-eu-west-1.amazonaws.com/myFile.txt
    authentication: S3Access
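If a deploy succeeds but the file still doesn't appear, the ebextensions logs on the instance usually explain why. A quick way to check after SSHing in (log paths assume the Amazon Linux Elastic Beanstalk platforms: eb-activity.log on Amazon Linux 1, eb-engine.log on Amazon Linux 2):

grep -i "myFile\|S3Access\|error" /var/log/eb-activity.log   # ebextensions output (Amazon Linux 1)
grep -i "myFile\|S3Access\|error" /var/log/eb-engine.log     # ebextensions output (Amazon Linux 2)
sudo tail -n 50 /var/log/cfn-init.log                         # authentication/download errors from cfn-init
ls -l /tmp/myFile.txt                                         # confirm the file landed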
I'm trying to create a CloudFormation template that creates an EC2 instance and then automatically mounts an EFS. Here's the relevant bit of the template. I find that the packages (amazon-efs-utils, nfs-utils) are not installed.
Therefore, when the mount command is executed, it fails.
I've verified that my other stack has what I need and the output variable is correct: !ImportValue dmp356-efs-EFSId
If I log into my new instance and do the steps manually, it works fine and I can see my files in the EFS. Naturally I suspect that my CF template is wrong in some way, although it validates with "aws cloudformation validate-template ..." and it deploys with a successful conclusion. As I said, I can log into the new instance; it doesn't roll back.
Resources:
  TestHost:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        configSets:
          MountConfig:
            - setup
            - mount
        setup:
          packages:
            yum:
              amazon-efs-utils: []
              nfs-utils: []
          commands:
            01_createdir:
              command:
                "mkdir /nfs"
        mount:
          commands:
            01_mount:
              command: !Sub
                - mount -t efs ${EFS_ID} /nfs
                - { EFS_ID: !ImportValue dmp356-efs-EFSId }
            02_mount:
              command:
                "chown ec2-user.ec2-user /nfs"
Is there a way to run AWS CodeDeploy without the use of an appspec.yml file?
I am looking for a 100% command-line way of running create-deployment, without any YAML files in an S3 bucket.
I found examples with YAML input but not with JSON input online. While YAML has its advantages, sometimes JSON is easier to work with, in my opinion (in bash/GitLab CI scripts, for example).
Here is the way to call aws deploy using JSON without the use of S3, constructing the appspec content in a variable:
APPSPEC=$(echo '{"version":1,"Resources":[{"TargetService":{"Type":"AWS::ECS::Service","Properties":{"TaskDefinition":"'${AWS_TASK_DEFINITION_ARN}'","LoadBalancerInfo":{"ContainerName":"react-web","ContainerPort":3000}}}}]}' | jq -Rs .)
Note the jq -Rs . at the end: the content has to be passed as a JSON string, not as part of the surrounding JSON, so we use jq to escape it. Replace the variables as needed (AWS_TASK_DEFINITION_ARN, ContainerName, ContainerPort, etc.).
REVISION='{"revisionType":"AppSpecContent","appSpecContent":{"content":'${APPSPEC}'}}'
And finally we can create the deployment with the new revision:
aws deploy create-deployment --application-name "${AWS_APPLICATION_NAME}" --deployment-group-name "${AWS_DEPLOYMENT_GROUP_NAME}" --revision "$REVISION"
Tested on aws-cli/2.4.15
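If the script should block until the rollout completes, the deployment ID returned by create-deployment can be captured and polled with the CLI's wait helper (a sketch reusing the variables from the answer above):

# Capture the deployment ID returned by create-deployment
DEPLOYMENT_ID=$(aws deploy create-deployment \
  --application-name "${AWS_APPLICATION_NAME}" \
  --deployment-group-name "${AWS_DEPLOYMENT_GROUP_NAME}" \
  --revision "$REVISION" \
  --query deploymentId --output text)

# Block until CodeDeploy reports the deployment as successful
aws deploy wait deployment-successful --deployment-id "$DEPLOYMENT_ID"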
Unfortunately there is no way to perform CodeDeploy without the use of an appspec file.
You can use CodePipeline to deploy your assets to an S3 bucket (which does not require an appspec). But if they then need to go down to an EC2 instance, you would need to find your own way to pull them down.
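One way to do that pull, sketched here with a hypothetical bucket and key, is to copy the artifact down with the AWS CLI from the instance itself (for example from user data or a cron job), assuming the instance profile allows s3:GetObject:

# Hypothetical bucket/key names for illustration
aws s3 cp s3://my-artifact-bucket/releases/app.zip /opt/app/app.zip
unzip -o /opt/app/app.zip -d /opt/app/current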
It's possible to create a deployment without appspec.yaml files in S3 for AWS Lambda/ECS deployments.
With AWS CLI v2: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/deploy/create-deployment.html
aws deploy create-deployment --cli-input-yaml file://code-deploy.yaml
Where code-deploy.yaml would have the following structure (example for an ECS service):
applicationName: 'code-deploy-app'
deploymentGroupName: 'code-deploy-deployment-group'
revision:
  revisionType: AppSpecContent
  appSpecContent:
    content: |
      version: 0.0
      Resources:
        - TargetService:
            Type: AWS::ECS::Service
            Properties:
              TaskDefinition: "[YOUR_TASK_DEFINITION_ARN]"
              LoadBalancerInfo:
                ContainerName: "ecs-service-container"
                ContainerPort: 8080
Currently I have an AWS CodeCommit repository and an AWS Elastic Beanstalk environment to which I upload updates with the EB CLI, using eb deploy.
I have some config files that are ignored via .gitignore. I want to set up an AWS CodePipeline so that when I push changes to the repository, the test functions run automatically and the changes are uploaded directly to Elastic Beanstalk.
I tried implementing a simple pipeline where I push code to CodeCommit and it deploys to Elastic Beanstalk, but I get the following error:
2019-09-09 11:51:45 UTC-0500 ERROR "option_settings" in one of the configuration files failed validation. More details to follow.
2019-09-09 11:51:45 UTC-0500 ERROR You cannot remove an environment from a VPC. Launch a new environment outside the VPC.
2019-09-09 11:51:45 UTC-0500 ERROR Failed to deploy application.
This is the *.config file that isn't in CodeCommit:
option_settings:
  aws:ec2:vpc:
    VPCId: vpc-xxx
    Subnets: 'subnet-xxx'
  aws:elasticbeanstalk:environment:
    EnvironmentType: SingleInstance
    ServiceRole: aws-xxxx
  aws:elasticbeanstalk:container:python:
    WSGIPath: xxx/wsgi.py
  aws:elasticbeanstalk:healthreporting:system:
    SystemType: enhanced
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: xxxxsettings
    SECRET_KEY: xxxx
    DB_NAME: xxxx
    DB_USER: xxxx
    DB_PASSWORD: xxxx
    DB_HOST: xxx
    DB_PORT: xxxx
  aws:autoscaling:launchconfiguration:
    SecurityGroups: sg-xxx
I noticed some syntax that is a little different from the rest: the Subnets: value has quotes ('') around it. Could this be causing the issue, and if quotes belong here, should they also be around the other values?
From the config file it looks like you are using a single-instance environment. For a single instance you don't need to specify the autoscaling launch configuration; just remove the last two lines and it will work fine.
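For clarity, a sketch of how the end of the file would look with that section dropped (placeholder values as in the question; earlier sections unchanged):

  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: xxxxsettings
    SECRET_KEY: xxxx
    DB_NAME: xxxx
    DB_USER: xxxx
    DB_PASSWORD: xxxx
    DB_HOST: xxx
    DB_PORT: xxxx
  # aws:autoscaling:launchconfiguration / SecurityGroups lines removed for SingleInstance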
I think, from what I have been reading, that I should not commit my config files, but instead add them in CodeBuild so it generates the .zip file that gets deployed to Elastic Beanstalk.
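A rough sketch of what that could look like in a buildspec.yml, assuming the untracked config files are pulled from a private S3 bucket during the build (the bucket name and paths are hypothetical):

version: 0.2
phases:
  build:
    commands:
      # Fetch the config files that are excluded by .gitignore (hypothetical bucket)
      - aws s3 cp s3://my-private-config-bucket/ebextensions/ .ebextensions/ --recursive
artifacts:
  files:
    - '**/*'   # package the full tree, now including the config files, into the deployable zip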
I'm running a single Docker container on Elastic Beanstalk using its Single Container Docker Configuration, and trying to send the application stdout to CloudWatch using the awslogs logging driver.
EB looks for a Dockerrun.aws.json file for the configuration of the container, but as far as I can see doesn't have an option to use awslogs as the container's logging driver (or add any other flags to the docker run command for that matter).
I've tried hacking into the docker run command using the answer provided here, by adding a file .ebextensions/01-commands.config with content:
commands:
  add_awslogs:
    command: 'sudo sed -i "s/docker run -d/docker run --log-driver=awslogs --log-opt awslogs-region=eu-west-2 --log-opt awslogs-group=dockerContainerLogs -d/" /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh'
This works, in the sense that it modifies the run script, and logs show up in CloudWatch.
But the EB application dies. The container is up, but does not respond to requests.
I find the following error in the container logs:
"logs" command is supported only for "json-file" and "journald" logging
drivers (got: awslogs)
I find answers to similar questions relating to ECS (not EB) that suggest appending awslogs to ECS_AVAILABLE_LOGGING_DRIVERS, but I can't find this configuration setting in EB.
Any thoughts?
I'm posting here the answer I received from AWS support:
Since the Elastic Beanstalk single-container environment saves stdout and stderr under /var/log/eb-docker/containers/eb-current-app/ by default, and since the new solution stack gives you the option to stream logs to CloudWatch (automating the configuration of the awslogs agent on the instances), what I recommend is to add an ebextension that adds the stdout and stderr log files to the CloudWatch configuration and uses the already configured agent to stream those files to CloudWatch Logs, instead of touching the pre-hooks, which is not supported by AWS, as hooks may change from one solution stack version to another.
Regarding the error you are seeing, '"logs" command is supported only for "json-file" and "journald" logging drivers (got: awslogs)': this comes from how Docker works. When it is configured to send logs to a driver other than json-file or journald, it cannot display logs locally because it does not keep a local copy of them.
### BEGIN .ebextensions/logs.config
option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 7
files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [docker-stdout]
      log_group_name=/aws/elasticbeanstalk/environment_name/docker-stdout
      log_stream_name={instance_id}
      file=/var/log/eb-docker/containers/eb-current-app/*-stdouterr.log
commands:
  "00_restart_awslogs":
    command: service awslogs restart
### END .ebextensions/logs.config
I was able to expand on the previous answer for a multi-container Elastic Beanstalk environment, as well as inject the environment name. I did have to grant the correct permission in the EC2 role to be able to create the log group (a sketch of that is included after the config below). You can see if it is working by looking in:
/var/log/awslogs.log
This goes in .ebextensions/logs.config:
option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 14
files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [/var/log/containers/docker-stdout]
      log_group_name=/aws/elasticbeanstalk/`{ "Ref" : "AWSEBEnvironmentName" }`/docker-stdout.log
      log_stream_name={instance_id}
      file=/var/log/containers/*-stdouterr.log
commands:
  "00_restart_awslogs":
    command: service awslogs restart
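In case the log-group permission mentioned above is missing from the instance role, a minimal sketch of granting it via an ebextension (the role name assumes the default aws-elasticbeanstalk-ec2-role; adjust to your environment):

Resources:
  AWSEBInstanceLogsPolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: allow-cloudwatch-log-streaming
      Roles:
        - aws-elasticbeanstalk-ec2-role
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - logs:CreateLogGroup
              - logs:CreateLogStream
              - logs:PutLogEvents
            Resource: "*"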
If I apply a setting in two config files in the .ebextensions folder, does the last file override the setting in the first file?
For example take two files with instance role setting defined:
.ebextensions/0001-base.config
option_settings:
  IamInstanceProfile: aws-ec2-role
.ebextensions/0010-app.config
option_settings:
  IamInstanceProfile: aws-app-role
Which role will the Beanstalk EC2 instance be given? aws-ec2-role or aws-app-role?
.ebextensions files are executed in alphabetical order, so aws-app-role would be the final result for your IamInstanceProfile option setting.
Your syntax for the .ebextensions would cause an error if you tried to deploy them; here is the correct way to do what you want (a sketch of both files written this way follows the snippet):
option_settings:
  "aws:autoscaling:launchconfiguration":
    IamInstanceProfile: aws-app-role
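For completeness, a sketch of both files written with the namespace, assuming each sets the same option; because .ebextensions files are applied in alphabetical order, the value from 0010-app.config is the one that takes effect:

# .ebextensions/0001-base.config
option_settings:
  "aws:autoscaling:launchconfiguration":
    IamInstanceProfile: aws-ec2-role

# .ebextensions/0010-app.config  (applied last, so this value wins)
option_settings:
  "aws:autoscaling:launchconfiguration":
    IamInstanceProfile: aws-app-role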