I have an EC2 UserData script that runs docker-compose pull and docker-compose up.
I want to run "aws cloudformation update-stack" and load new Docker images each time the ${imageTag} property changes.
This is my CloudFormation instance YAML:
myInstance:
  Type: 'AWS::EC2::Instance'
  Metadata:
    'AWS::CloudFormation::Init':
      configSets:
        configs:
          - "configDocker"
          - "configOther"
      configDocker:
        commands:
          a:
            command: 'echo -e IMAGE_TAG=${imageTag} >> .env'
          b:
            command: 'docker-compose pull'
          c:
            command: 'docker-compose up'
  Properties:
    UserData:
      Fn::Base64:
        !Sub |
          #cloud-config
          runcmd:
            - /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource myInstance --region ${AWS::Region} -c configs
            - /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource myInstance --region ${AWS::Region}
I tried to add "docker-compose down" and removal of the old images, and to add cloud_final_modules so that UserData runs on each startup:
myInstance:
  Type: 'AWS::EC2::Instance'
  Metadata:
    'AWS::CloudFormation::Init':
      configSets:
        configs:
          - "configDocker"
          - "configOther"
      configDocker:
        commands:
          a:
            command: 'echo -e IMAGE_TAG=${imageTag} >> .env'
          b:
            command: 'docker-compose down'
          c:
            command: 'docker images -q | xargs -r sudo docker rmi'
          d:
            command: 'docker-compose pull'
          e:
            command: 'docker-compose up'
  Properties:
    UserData:
      Fn::Base64:
        !Sub |
          #cloud-config
          cloud_final_modules:
            - [scripts-user, always]
          runcmd:
            - /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource myInstance --region ${AWS::Region} -c configs
            - /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource myInstance --region ${AWS::Region}
But after "aws cloudformation update-stack" the script does not run again despite the imageTag change. It only runs if I change some value under runcmd.
How can I run the UserData script each time "aws cloudformation update-stack" runs (each time imageTag changes)?
How can I run some UserData commands only on first startup, other commands on each reboot, and other commands on instance stop?
For example: I want to run "docker-compose down" only on instance stop, run "docker-compose pull" and "docker-compose up" on each instance reboot or "aws cloudformation update-stack", and run some initial commands only on first setup.
How can I run some UserData commands only on first startup, other commands on each reboot and other commands on instance stop?
You can't. UserData is meant to run only at instance launch, not on reboots, starts or stops (with one exception mentioned below). To get this behaviour, you have to implement the functionality yourself.
This is commonly done through the definition of custom systemd unit files, so you would have to create such unit files for your docker-compose setup.
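For example, here is a minimal sketch of such a unit, written through a cfn-init files section; the unit name, working directory and docker-compose path are assumptions, so adjust them to your setup. Because systemd stops the unit at shutdown, ExecStop also gives you a place for "docker-compose down" when the instance stops:
configDockerService:
  files:
    /etc/systemd/system/docker-compose-app.service:
      content: |
        [Unit]
        Description=docker-compose application
        Requires=docker.service
        After=docker.service

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        # assumed location of your docker-compose.yml and docker-compose binary
        WorkingDirectory=/opt/app
        ExecStart=/usr/local/bin/docker-compose up -d
        ExecStop=/usr/local/bin/docker-compose down

        [Install]
        WantedBy=multi-user.target
      mode: '000644'
      owner: root
      group: root
  commands:
    enable_service:
      command: systemctl daemon-reload && systemctl enable --now docker-compose-app.service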
The only exception, a rather hackish way of running UserData at every instance start, is described in a recent AWS blog post:
How can I utilize user data to automatically execute a script with every restart of my Amazon EC2 instance?
I have created a template where I create an EC2 instance, use cfn-init to process the configSets, and in the UserData section of the instance write some commands to be executed by cloud-init and some commands to be executed without cloud-init.
I am not sure which commands run in which sequence.
What I mean is, in which order are the commands executed? For example:
Commands in the configsets
Commands in the cloud-init section of the userdata
Commands in the Userdata
Part of my code is below:
UserData:
  Fn::If:
    - DatadogAgentEnabled
    -
      Fn::Base64: !Sub |
        #!/bin/bash -xe
        yum update -y
        yum update -y aws-cfn-bootstrap
        /opt/aws/bin/cfn-init --stack ${AWS::StackName} --resource GhostServer --configsets prepare_instance_with_datadog --region ${AWS::Region}
        /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource GhostServer --region ${AWS::Region}
        #cloud-config   <---- cloud-init section inside the UserData
        runcmd:
          - [ sh, -c, "sed 's/api_key:.*/api_key: {DatadogAPIKey}/' /etc/datadog-agent/datadog.yaml.example > /etc/datadog-agent/datadog.yaml" ]
          - systemctl restart datadog-agent
From the AWS documentation
UserData :
The user data to make available to the instance. For more
information, see Running commands on your Linux instance at launch
(Linux) and Adding User Data (Windows). If you are using a command
line tool, base64-encoding is performed for you, and you can load the
text from a file. Otherwise, you must provide base64-encoded text.
User data is limited to 16 KB.
So basically we define UserData to execute some commands at the launch of our EC2 instance.
Reference : https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html
To answer your question: first the UserData commands are executed, and all the commands specified in that section run sequentially. In your example you invoke the cfn helper scripts first and then cloud-init, so the configSets are applied first and then the cloud-init commands are called.
Inside that UserData section we invoke the cfn helper scripts, which read the CloudFormation template metadata and execute all the configSets defined under AWS::CloudFormation::Init.
From the AWS documentation:
The cfn-init helper script reads template metadata from the
AWS::CloudFormation::Init key and acts accordingly to:
Fetch and parse metadata from AWS CloudFormation
Install packages
Write files to disk
Enable/disable and start/stop services
Reference : https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-init.html
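As a rough illustration of the kind of metadata cfn-init acts on, here is a minimal sketch (the resource, package and file names are placeholders, not taken from your template):
MyInstance:
  Type: AWS::EC2::Instance
  Metadata:
    AWS::CloudFormation::Init:
      config:
        packages:
          yum:
            httpd: []                     # install packages
        files:
          /var/www/html/index.html:       # write files to disk
            content: "hello from cfn-init"
            mode: '000644'
        services:
          sysvinit:
            httpd:                        # enable and start services
              enabled: true
              ensureRunning: true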
We have some flaky CodeDeploy errors that are frustrating. In about 10% of our deploys we get the following error: An AppSpec file is required, but could not be found in the revision.
The problem is that when we download the artifact zip file from S3, we clearly see an appspec.yaml file. Our build script doesn't change between deploys, and when we rerun the pipeline on the same commit (using the "Release change" button) without changing anything, it works.
The error message isn't helpful, and it seems like CodeDeploy isn't 100% reliable.
We use ECS Fargate with blue/green deployment.
Our buildspec.yml file looks like this:
version: 0.2
env:
  parameter-store:
    BUILD_ENV: key-foo-site-node-env
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
      - IMAGE_TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-6)
      - ACCOUNT_ID=$(aws sts get-caller-identity --output text --query 'Account')
      - REPOSITORY_URI="$ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/foo-site"
      - echo Saving source version into version.txt...
      - echo $IMAGE_TAG >> version.txt
  build:
    commands:
      - echo Build started on `date`
      - echo Building the app Docker image...
      - docker build -t $REPOSITORY_URI/app:$IMAGE_TAG .
      - echo Building the nginx Docker image...
      - docker build -t $REPOSITORY_URI/nginx:$IMAGE_TAG docker/nginx
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI/app:$IMAGE_TAG
      - docker push $REPOSITORY_URI/nginx:$IMAGE_TAG
      # Create a valid json file that will be used to create a new task definition version
      # Using sed we need to replace $APP_IMAGE and $NGINX_IMAGE by image urls
      - echo Creating a task definition json
      - sed "s+\$APP_IMAGE+$REPOSITORY_URI/app:$IMAGE_TAG+g; s+\$NGINX_IMAGE+$REPOSITORY_URI/nginx:$IMAGE_TAG+g;" taskdef.$BUILD_ENV.json > register-task-definition.json
      # Using the aws cli we register a new task definition
      # We need the new task definition arn to create a valid appspec.yaml
      # If you need debugging, the next line is useful
      # - aws --debug ecs register-task-definition --cli-input-json "$(cat register-task-definition.json)" > task-definition.json
      - echo Creating an appspec.yaml file
      - TASK_DEFINITION_ARN=`aws ecs register-task-definition --cli-input-json "$(cat register-task-definition.json)" --query 'taskDefinition.taskDefinitionArn' --output text`
      - sed "s+\$TASK_DEFINITION_ARN+$TASK_DEFINITION_ARN+g" appspec.yml > appspec.yaml
artifacts:
  files:
    - appspec.yaml
    - register-task-definition.json
    - task-definition.json
Our appspec.yml file looks like this:
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "$TASK_DEFINITION_ARN"
        LoadBalancerInfo:
          ContainerName: "nginx"
          ContainerPort: "80"
Probably not relevant anymore, but it looks like your AppSpec source file has a different suffix (.yml) than indicated in the artifacts definition of the buildspec file (.yaml).
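If that is the case, one way to rule it out is to give the source template a name that cannot be confused with the generated file and keep the artifacts list pointing only at the generated name, roughly like this (appspec-template.yml is an assumed name, not taken from your repo):
post_build:
  commands:
    # render the template into the exact name CodeDeploy expects at the root of the revision
    - sed "s+\$TASK_DEFINITION_ARN+$TASK_DEFINITION_ARN+g" appspec-template.yml > appspec.yaml
artifacts:
  files:
    - appspec.yaml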
In my template, I'm provisioning an ASG that uses an EC2 launch template.
In the UserData section, I run cfn-init to provision the instance, which works fine.
However, when I run the cfn-signal command, the command is successful (exit code 0), but the CloudFormation stack never receives the signal, and the stack creation/update fails with Failed to receive 1 resource signal(s) for the current batch. Each resource signal timeout is counted as a FAILURE.
When I check CloudTrail, I see that the SignalResource API call completed, signaling the correct stack and resource with a SUCCESS (but with a responseElements of null).
Excerpt from my CloudFormation template:
Resources:
  MyLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Metadata:
      'AWS::CloudFormation::Init':
        configSets:
          # lots-o-stuff to be done by cfn-init
    Properties:
      LaunchTemplateData:
        # Remove other attributes for brevity
        UserData:
          Fn::Base64:
            !Sub |
              #!/bin/bash -x
              yum update -y
              # gather ENI
              /opt/aws/bin/cfn-init -c install \
                --stack ${AWS::StackName} \
                --resource MyLaunchTemplate \
                --region ${AWS::Region}
              /opt/aws/bin/cfn-signal -e $? \
                --stack ${AWS::StackName} \
                --resource MyAsg \
                --region ${AWS::Region}
              echo $?
      LaunchTemplateName: my-launch-template
  MyAsg:
    Type: AWS::AutoScaling::AutoScalingGroup
    CreationPolicy:
      AutoScalingCreationPolicy:
        MinSuccessfulInstancesPercent: 100
      ResourceSignal:
        Count: 1
        Timeout: PT10M
    UpdatePolicy:
      AutoScalingReplacingUpdate:
        WillReplace: true
    Properties:
      # Remove other attributes for brevity
      LaunchTemplate:
        LaunchTemplateId: !Ref MyLaunchTemplate
        Version: !GetAtt MyLaunchTemplate.LatestVersionNumber
Any idea what I'm missing here?
It appears that the Amazon Linux AMI version is not the latest and the cfn-signal script isn't installed on the instance. Bootstrap the EC2 instance using the aws-cfn-bootstrap package.
Add yum install -y aws-cfn-bootstrap in the UserData section.
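A minimal sketch of that UserData, assuming it sits inside Fn::Base64: !Sub as in your excerpt so the ${...} references resolve:
#!/bin/bash -xe
yum update -y
yum install -y aws-cfn-bootstrap
/opt/aws/bin/cfn-init -c install --stack ${AWS::StackName} --resource MyLaunchTemplate --region ${AWS::Region}
/opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource MyAsg --region ${AWS::Region}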
Another possible cause could be that the value of the Timeout property for the CreationPolicy attribute is too low. Make sure that the value is high enough for tasks to run before the cfn-signal script sends signals to AWS CloudFormation resources.
If it still doesn't work, check out the AWS troubleshooting guide for this issue.
I also faced a similar issue recently. It seems like cfn-signal doesn't work with launch templates for some reason. For test purposes I changed my launch template to a launch configuration, and the cfn-signal worked totally fine. It's weird that AWS doesn't support launch templates here.
I'm trying to install Docker on a CentOS instance using CloudFormation and the blocks of AWS::CloudFormation::Init.
One of the installation steps is to add a certain repository to yum by running:
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
How can I enable that repository in a CloudFormation template? Ideally, I would like to use a single config block if that's possible.
This is what I have so far:
Resources:
  ec2:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          packages:
            yum:
              yum-utils: []
              device-mapper-persistent-data: []
              lvm2: []
    Properties:
      ...
You should be able to do that by adding a commands block:
Resources:
  ec2:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        configSets:
          config:
            - yum_config_manager
            - yum_packages
        yum_config_manager:
          commands:
            yum_config_manager:
              command: yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
        yum_packages:
          packages:
            yum:
              yum-utils: []
              device-mapper-persistent-data: []
              lvm2: []
And in your UserData you will have:
cfn-init -c config -s ${AWS::StackId} --resource ec2 --region ${AWS::Region}
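Since ${AWS::StackId} and ${AWS::Region} only resolve inside a substitution, that call would typically sit in the instance's UserData roughly like this (a sketch that assumes the cfn helper scripts are already available on the AMI; on CentOS they usually have to be installed first):
Properties:
  UserData:
    Fn::Base64: !Sub |
      #!/bin/bash -xe
      /opt/aws/bin/cfn-init -c config -s ${AWS::StackId} --resource ec2 --region ${AWS::Region}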
Further explanation:
Note that the commands and packages blocks have each been wrapped in a configSet.
Note also that the -c config tells cfn-init which of the configSets you want to run.
After it boots up you should be able to inspect successful operation by looking in /var/log/cfn-init-cmd.log.
More info in the docs especially the sections on configSets and commands.
I create an EC2 instance using CloudFormation, but the Name tag of the root volume is empty. How can I set it using CloudFormation?
# ec2-instance.yml (CloudFormation template)
MyInstance:
  Type: "AWS::EC2::Instance"
  Properties:
    ImageId: "ami-da9e2cbc"
    InstanceType: "t2.nano"
    KeyName: !Ref "KeyPair"
    Tags: # This is for EC2 instance (not root volume)
      - Key: "Name"
        Value: "my-instance"
I find "Volumes" and "BlockDeviceMappings" properties but it could not.
CloudFormation does not appear to support this currently. However, using an instance user data script, you can tag the root volume like this:
apt-get -y install unzip
unzip awscli-bundle.zip
./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
rm -rf awscli-bundle awscli-bundle.zip
EC2_INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
EC2_REGION=${EC2_AVAIL_ZONE:0:${#EC2_AVAIL_ZONE} - 1}
ROOT_DISK_ID=$(aws ec2 describe-volumes --filters Name=attachment.instance-id,Values={EC2_INSTANCE_ID} Name=attachment.device,Values=/dev/sda1 --query 'Volumes[*].[VolumeId]' --region=${EC2_REGION} --out \"text\" | cut -f 1)
aws ec2 create-tags --resources $ROOT_DISK_ID --tags Key=Name,Value=\"Root Volume my-instance\" --region ${EC2_REGION}
This script will tag the /dev/sda1 EBS volume with Name=Root Volume my-instance
Note that for my Ubuntu AMI I have to install the AWS tools first. The Amazon Linux AMI has those tools installed.
For CloudFormation, you would use:
# ec2-instance.yml (CloudFormation template)
MyInstance:
  Type: "AWS::EC2::Instance"
  Properties:
    ImageId: "ami-da9e2cbc"
    InstanceType: "t2.nano"
    KeyName: !Ref "KeyPair"
    UserData:
      "Fn::Base64": !Sub |
        #!/bin/bash -x
        apt-get -y install unzip
        unzip awscli-bundle.zip
        ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
        rm -rf awscli-bundle awscli-bundle.zip
        EC2_INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        EC2_REGION=${EC2_AVAIL_ZONE:0:${#EC2_AVAIL_ZONE} - 1}
        ROOT_DISK_ID=$(aws ec2 describe-volumes --filters Name=attachment.instance-id,Values={EC2_INSTANCE_ID} Name=attachment.device,Values=/dev/sda1 --query 'Volumes[*].[VolumeId]' --region=${EC2_REGION} --out \"text\" | cut -f 1)
        aws ec2 create-tags --resources $ROOT_DISK_ID --tags Key=Name,Value=\"Root Volume my-instance\" --region ${EC2_REGION}
I know that EC2 RunInstances now supports tagging of EBS volumes on launch but I'm not sure that CloudFormation supports this.
Others have requested this feature in CloudFormation. Also see this thread.
Until this is supported in CloudFormation, you might want to take a look at graffiti-monkey which looks at the tags an EC2 instance has, copies those tags to the EBS Volumes that are attached to the instance, and then copies those tags to the EBS Snapshots. (I have not verified that it propagates the tags to the root device volume, but presume it does.)
There are some syntax errors under UserData.
See the code below, which makes it work:
# Download and install AWS CLI
apt-get -y install unzip
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
rm -rf awscli-bundle awscli-bundle.zip
EC2_INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
EC2_AVAIL_ZONE=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
VOLUME_ID=$(aws ec2 describe-volumes --filters Name=attachment.instance-id,Values=$EC2_INSTANCE_ID Name=attachment.device,Values=/dev/sda1 --query 'Volumes[*].[VolumeId]' --region ${AWS::Region} --out text | cut -f 1)
aws ec2 create-tags --resources $VOLUME_ID --tags Key=Name,Value=\"Root Volume my-instance\" --region ${AWS::Region}
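As with the earlier answer, the ${AWS::Region} references only resolve when this script is embedded in the template with Fn::Base64: !Sub, for example:
MyInstance:
  Type: "AWS::EC2::Instance"
  Properties:
    UserData:
      "Fn::Base64": !Sub |
        #!/bin/bash -x
        # ... corrected script from above ...
The instance profile attached to the instance also needs ec2:DescribeVolumes and ec2:CreateTags permissions for these calls to succeed.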