AWS CloudFormation -- auto mount EFS

I'm trying to create a CloudFormation script that creates an EC2 instance and then automatically mounts an EFS file system. Here's the relevant bit of the template. I find that the packages amazon-efs-utils and nfs-utils are never installed, so when the mount command runs it fails.
I've verified that my other stack has what I need and that the exported value is correct: !ImportValue dmp356-efs-EFSId
If I log into my new instance and do the steps manually it works fine and I can see my files in the EFS. Naturally I suspect that my CF script is wrong in some way, although it validates with "aws cloudformation validate-template ..." and the stack deploys successfully. As I said, I can log into the new instance; it doesn't roll back.
Resources:
  TestHost:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        configSets:
          MountConfig:
            - setup
            - mount
        setup:
          packages:
            yum:
              amazon-efs-utils: []
              nfs-utils: []
          commands:
            01_createdir:
              command: "mkdir /nfs"
        mount:
          commands:
            01_mount:
              command: !Sub
                - mount -t efs ${EFS_ID} /nfs
                - { EFS_ID: !ImportValue dmp356-efs-EFSId }
            02_mount:
              command: "chown ec2-user.ec2-user /nfs"

Related

passing file to ec2 in cloud formation

I have a CloudFormation script that creates some EC2 instances and later attaches them to an ELB.
I have a Python server script that I would like to have on the EC2 instances as soon as they are created.
Right now, after the CloudFormation script finishes, I use SCP to copy the script to the EC2 instances.
I was wondering if there was a way to do this within the CloudFormation template, maybe under UserData?
I should point out I am very new to CloudFormation. I have gone over the documentation, but have not been able to do this yet.
[EDIT] I think it's important to state that I have a deploy.sh script that I run to create the stack. The script sits in the same dir as my Python server script. I AM NOT USING THE AWS CONSOLE.
This is my instance code in the CloudFormation script:
EC2Instance2:
  Type: AWS::EC2::Instance
  Properties:
    InstanceType: !Ref InstanceType
    SecurityGroupIds:
      - !Ref InstanceSecurityGroup
    KeyName: !Ref KeyName
    ImageId: !Ref LatestAmiId
    UserData:
      Fn::Base64:
        !Sub |
          #!/bin/bash
          sleep 20
          sudo apt-get update
          sudo apt-get install python3-pip -y
          sudo apt-get install python3-flask -y
I was wondering if there was a way to do this within the CloudFormation template, maybe under UserData?
Yes, UserData would be the way to do it. You could store your file in S3; for that to work you would need to attach an instance role to your instance with S3 permissions. Then you would use the AWS CLI to copy your file from S3 to the instance.
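For example, a minimal sketch of that approach, extending the UserData above; the bucket name, object key, and target path are hypothetical, and the instance profile is assumed to allow s3:GetObject on that bucket:
UserData:
  Fn::Base64:
    !Sub |
      #!/bin/bash
      apt-get update
      apt-get install -y python3-pip python3-flask awscli
      # Copy the server script from S3 (bucket and key are placeholders)
      aws s3 cp s3://my-deployment-bucket/server.py /home/ubuntu/server.py
      # Start the server in the background
      python3 /home/ubuntu/server.py &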

How can one add a yum repository in AWS::CloudFormation::Init

I'm trying to install Docker on a CentOS instance using CloudFormation and the AWS::CloudFormation::Init blocks.
One of the installation steps is to add a certain repository to yum by running:
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
How can I enable that repository in a CloudFormation template? Ideally, I would like to use a single config block if that's possible.
This is what I have so far:
Resources:
  ec2:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          packages:
            yum:
              yum-utils: []
              device-mapper-persistent-data: []
              lvm2: []
    Properties:
      ...
You should be able to do that by adding a commands block:
Resources:
  ec2:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        configSets:
          config:
            - yum_config_manager
            - yum_packages
        yum_config_manager:
          commands:
            yum_config_manager:
              command: yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
        yum_packages:
          packages:
            yum:
              yum-utils: []
              device-mapper-persistent-data: []
              lvm2: []
And in your UserData you will have:
cfn-init -c config -s ${AWS::StackId} --resource ec2 --region ${AWS::Region}
Further explanation:
Note that the commands and packages blocks have each been wrapped in a configSet.
Note also that the -c config tells cfn-init which of the configSets you want to run.
After it boots up you should be able to inspect successful operation by looking in /var/log/cfn-init-cmd.log.
More info in the docs, especially the sections on configSets and commands.
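For completeness, a sketch of how that cfn-init call might be wrapped inside the instance's UserData; the /opt/aws/bin path assumes an Amazon Linux AMI with the helper scripts preinstalled:
    Properties:
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          # Run the 'config' configSet defined in AWS::CloudFormation::Init above
          /opt/aws/bin/cfn-init -v -c config -s ${AWS::StackId} --resource ec2 --region ${AWS::Region}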

Elastic Beanstalk Single Container Docker - use awslogs logging driver

I'm running a single Docker container on Elastic Beanstalk using its Single Container Docker Configuration, and trying to send the application stdout to CloudWatch using the awslogs logging driver.
EB looks for a Dockerrun.aws.json file for the configuration of the container, but as far as I can see doesn't have an option to use awslogs as the container's logging driver (or add any other flags to the docker run command for that matter).
I've tried hacking into the docker run command using the answer provided here, by adding a file .ebextensions/01-commands.config with content:
commands:
  add_awslogs:
    command: 'sudo sed -i "s/docker run -d/docker run --log-driver=awslogs --log-opt awslogs-region=eu-west-2 --log-opt awslogs-group=dockerContainerLogs -d/" /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh'
This works, in the sense that it modifies the run script, and logs show up in CloudWatch.
But the EB application dies. The container is up, but does not respond to requests.
I find the following error in the container logs:
"logs" command is supported only for "json-file" and "journald" logging
drivers (got: awslogs)
I find answers to similar questions relating to ECS (not EB) suggesting to append ECS_AVAILABLE_LOGGING_DRIVERS with awslogs. But I don't find this configuration setting in EB.
Any thoughts?
I'm posting here the answer I received from AWS support:
Since an Elastic Beanstalk Single Container environment saves stdout and stderr under /var/log/eb-docker/containers/eb-current-app/ by default, and since the newer solution stacks can stream logs to CloudWatch by automating the configuration of the awslogs agent on the instances, what I recommend is to add an ebextension that adds the stdout and stderr log files to the CloudWatch configuration and lets the already configured agent stream those files to CloudWatch Logs, instead of touching the pre-hooks, which is not supported by AWS as hooks may change from one solution stack version to another.
Regarding the error you are seeing, "logs" command is supported only for "json-file" and "journald" logging drivers (got: awslogs), this comes from how Docker works: when it is configured to send logs to a driver other than json-file or journald, it cannot display the logs locally because it does not keep a local copy of them.
### BEGIN .ebextensions/logs.config
option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 7

files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [docker-stdout]
      log_group_name=/aws/elasticbeanstalk/environment_name/docker-stdout
      log_stream_name={instance_id}
      file=/var/log/eb-docker/containers/eb-current-app/*-stdouterr.log

commands:
  "00_restart_awslogs":
    command: service awslogs restart
### END .ebextensions/logs.config
I was able to expand on the previous answer for a multi-container Elastic Beanstalk environment, as well as inject the environment name. I did have to grant the correct permissions in the EC2 role to be able to create the log group. You can see if it is working by looking in:
/var/log/awslogs.log
This goes in .ebextensions/logs.config:
option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 14

files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [/var/log/containers/docker-stdout]
      log_group_name=/aws/elasticbeanstalk/`{ "Ref" : "AWSEBEnvironmentName" }`/docker-stdout.log
      log_stream_name={instance_id}
      file=/var/log/containers/*-stdouterr.log

commands:
  "00_restart_awslogs":
    command: service awslogs restart

Run bash script when Stack Created

I'm designing an AWS stack that contains multiple instances running a handful of services, each composed of a few tasks. One of these services uses NFS to store configuration, and this configuration needs to be set up ONCE when the stack is created.
I've come up with a way to run a configuration script ONCE when the stack is created:
1. Configure the service that needs to configure itself to run a single task
2. When the task starts, check if the configuration exists. If it doesn't, run the configuration script and then update the desired task count so that other instances are created
(1) is necessary to avoid a race condition.
Although this works well, it strikes me as a very round-about way to achieve something simple: run a bash script ONCE when my stack is created. Is there a better way to do this?
You can run a one-off Bash script using an AWS::EC2::Instance resource with an InstanceInitiatedShutdownBehavior property of terminate (to terminate the instance after the script executes), and a DependsOn attribute set to the last-created resource in your stack (so the EC2 instance gets created and the Bash script gets executed at the end):
Description: Run a bash script once when a stack is created.
Mappings:
  # amzn-ami-hvm-2016.09.1.20161221-x86_64-gp2
  RegionMap:
    us-east-1:
      "64": "ami-9be6f38c"
Resources:
  MyResource:
    Type: AWS::SNS::Topic
  WebServer:
    Type: AWS::EC2::Instance
    DependsOn: MyResource
    Properties:
      ImageId: !FindInMap [ RegionMap, !Ref "AWS::Region", 64 ]
      InstanceType: m3.medium
      InstanceInitiatedShutdownBehavior: terminate
      UserData:
        "Fn::Base64":
          !Sub |
            #!/bin/bash
            # [Contents of bash script here...]
            shutdown -h now

How to set the instance type with Elastic Beanstalk?

How can I change the instance type of an existing Elastic Beanstalk application?
Currently I am changing it in the web interface.
I tried changing it with the command line tool:
eb setenv InstanceType=t2.medium
It didn't throw an error, but also didn't change the instance type.
The setenv command is for changing environment variables. Hence the command you tried is the bash equivalent of:
export InstanceType=t2.medium
and doesn't really do anything for your Beanstalk environment.
You can create an environment using the -i option during create
eb create -i t2.micro
Or, you can use eb config to edit a currently running environment. This will open up a text editor. Look for the section that looks like:
aws:autoscaling:launchconfiguration:
  IamInstanceProfile: aws-elasticbeanstalk-ec2-role
  EC2KeyName: aws
  InstanceType: t1.micro
Edit t1.micro to t2.micro, then save and quit.
But just to make your life easier, you can save the below as .elasticbeanstalk/saved_configs/default.cfg.yml and the CLI will use all these settings on all future creates.
AWSConfigurationTemplateVersion: 1.1.0.0
OptionSettings:
  aws:elb:loadbalancer:
    CrossZone: true
  aws:elasticbeanstalk:command:
    BatchSize: '30'
    BatchSizeType: Percentage
  aws:autoscaling:launchconfiguration:
    IamInstanceProfile: aws-elasticbeanstalk-ec2-role
    EC2KeyName: aws
    InstanceType: t2.micro
  aws:elb:policies:
    ConnectionDrainingEnabled: true
  aws:autoscaling:updatepolicy:rollingupdate:
    RollingUpdateType: Health
    RollingUpdateEnabled: true
  aws:elb:healthcheck:
    Interval: '30'
A more scriptable way:
aws elasticbeanstalk update-environment --environment-name "your-env-name" --option-settings "Namespace=aws:autoscaling:launchconfiguration,OptionName=InstanceType,Value=t2.micro"
The accepted solution didn't work for me in 2020.
As of today (February 26th, 2020), in my .ebextensions/02_python.config I had to add the following under option_settings:
option_settings:
  # ...
  aws:ec2:instances:
    InstanceTypes: 'm5.large'
Reference: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.managing.as.html#environments-cfg-autoscaling-namespace.instances