In my template, I'm provisioning an ASG that uses an EC2 launch template.
In the UserData section, I run cfn-init to provision the instance, which works fine.
However, when the cfn-signal command runs, it succeeds (exit code 0), but the CloudFormation stack never receives the signal, and stack creation/update fails with: Failed to receive 1 resource signal(s) for the current batch. Each resource signal timeout is counted as a FAILURE.
When I check CloudTrail, I see that the SignalResource API call completed, signaling the correct stack and resource with SUCCESS (but with a responseElements of null).
Excerpt from my Cloudformation template:
Resources:
  MyLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Metadata:
      'AWS::CloudFormation::Init':
        configSets:
          # lots-o-stuff to be done by cfn-init
    Properties:
      LaunchTemplateData:
        # Remove other attributes for brevity
        UserData:
          Fn::Base64:
            !Sub |
              #!/bin/bash -x
              yum update -y
              # gather ENI
              /opt/aws/bin/cfn-init -c install \
                --stack ${AWS::StackName} \
                --resource MyLaunchTemplate \
                --region ${AWS::Region}
              /opt/aws/bin/cfn-signal -e $? \
                --stack ${AWS::StackName} \
                --resource MyAsg \
                --region ${AWS::Region}
              echo $?
      LaunchTemplateName: my-launch-template
  MyAsg:
    Type: AWS::AutoScaling::AutoScalingGroup
    CreationPolicy:
      AutoScalingCreationPolicy:
        MinSuccessfulInstancesPercent: 100
      ResourceSignal:
        Count: 1
        Timeout: PT10M
    UpdatePolicy:
      AutoScalingReplacingUpdate:
        WillReplace: true
    Properties:
      # Remove other attributes for brevity
      LaunchTemplate:
        LaunchTemplateId: !Ref MyLaunchTemplate
        Version: !GetAtt MyLaunchTemplate.LatestVersionNumber
Any idea what I'm missing here?
It appears that your Amazon Linux AMI is not the latest version, and the cfn-signal script isn't installed on the instance. Bootstrap the EC2 instance using the aws-cfn-bootstrap package: add yum install -y aws-cfn-bootstrap to the UserData section before calling the helper scripts.
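For example, the start of the UserData in the question's template could look like this (a sketch; the install must come before any call to the helper scripts):

```yaml
UserData:
  Fn::Base64: !Sub |
    #!/bin/bash -x
    yum update -y
    yum install -y aws-cfn-bootstrap   # provides /opt/aws/bin/cfn-init and cfn-signal
    /opt/aws/bin/cfn-init -c install \
      --stack ${AWS::StackName} \
      --resource MyLaunchTemplate \
      --region ${AWS::Region}
```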
Another possible cause is that the value of the Timeout property in the CreationPolicy attribute is too low. Make sure the value is high enough for all tasks to finish before the cfn-signal script sends its signal to CloudFormation.
If it still doesn't work, check the AWS Troubleshooting Guide for this issue.
I faced a similar issue recently. cfn-signal didn't seem to work with my launch template for some reason. As a test I changed the launch template to a launch configuration, and the signal worked fine. It's odd that this doesn't seem to work with launch templates.
I have a working CloudFormation YAML template. I know it works because when I go to the AWS console and create a stack by clicking through, the stack gets created.
However, when I try to use CloudFormation on the command line, the same YAML errors out.
I'm at a loss as to what's causing this issue. Does anyone know what may be causing the failure?
Here is the command I am calling
aws cloudformation create-stack --stack-name ${stack_name} --template-body file://template.yaml --region ${region}
where region is the same region I am using in the AWS console. And here is the template.yaml:
---
AWSTemplateFormatVersion: 2010-09-09
Description: EC2 example instance
Resources:
  TestEC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-01ec0fa63b2042232
      InstanceType: t3.medium
      SubnetId: subnet-*********
      UserData:
        Fn::Base64:
          !Sub |
            #!/bin/bash -xe
            echo "Running apt install -y git nfs-common binutils jq"
            apt-get install -y git nfs-common binutils jq
When I run the command, I see the stack starting to be created in the console, with the following events:
ec2-boilerplate ROLLBACK_COMPLETE -
TestEC2Instance DELETE_COMPLETE -
TestEC2Instance DELETE_IN_PROGRESS -
ec2-boilerplate ROLLBACK_IN_PROGRESS The following resource(s) failed to create: [TestEC2Instance]. Rollback requested by user.
TestEC2Instance CREATE_FAILED Instance i-0bdd3e7ee34edf1ef failed to stabilize. Current state: shutting-down. Reason: Client.InternalError: Client error on launch
TestEC2Instance CREATE_IN_PROGRESS Resource creation Initiated
TestEC2Instance CREATE_IN_PROGRESS -
ec2-boilerplate CREATE_IN_PROGRESS User Initiated
Is it something about my template.yaml? about my command line call? some environment variable?
This may be related to the length of time the instance takes to create, but your UserData script seems simple enough that it is hard to imagine that being the case. The AWS documentation states that this error relates to the resource creation time exceeding the CloudFormation timeout:
A resource didn't respond because the operation exceeded the AWS
CloudFormation timeout period or an AWS service was interrupted. For
service interruptions, check that the relevant AWS service is running,
and then retry the stack operation.
One quick thing I would check is that the -e option to bash in your userdata script isn't causing it to fail in some way that hinders instance creation.
Another possible solution, per the AWS recommendation, is to ensure that your CloudFormation stack is provided a service role when it is created. This will allow it to create infrastructure beyond the timeout imposed when using the creating user's IAM permissions.
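For example (a sketch; the account ID and role name are hypothetical, and the role must trust cloudformation.amazonaws.com and carry the EC2 permissions the template needs):

```
aws cloudformation create-stack \
  --stack-name ${stack_name} \
  --template-body file://template.yaml \
  --region ${region} \
  --role-arn arn:aws:iam::123456789012:role/MyCfnServiceRole
```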
I have created a template with an EC2 instance, and I use cfn-init to process the configSets. In the UserData section of the instance, I wrote some commands to be executed by cloud-init and some commands to be executed without cloud-init.
I am not sure which commands run in which sequence. What I mean is: in what order are the following executed?
Commands in the configsets
Commands in the cloud-init section of the userdata
Commands in the Userdata
Part of my code is below:
UserData:
  Fn::If:
    - DatadogAgentEnabled
    -
      Fn::Base64: !Sub |
        #!/bin/bash -xe
        yum update -y
        yum update -y aws-cfn-bootstrap
        /opt/aws/bin/cfn-init --stack ${AWS::StackName} --resource GhostServer --configsets prepare_instance_with_datadog --region ${AWS::Region}
        /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource GhostServer --region ${AWS::Region}
        #cloud-config   <---- cloud-init section inside the UserData
        runcmd:
          - [ sh, -c, "sed 's/api_key:.*/api_key: {DatadogAPIKey}/' /etc/datadog-agent/datadog.yaml.example > /etc/datadog-agent/datadog.yaml" ]
          - systemctl restart datadog-agent
From the AWS documentation:
UserData:
The user data to make available to the instance. For more
information, see Running commands on your Linux instance at launch
(Linux) and Adding User Data (Windows). If you are using a command
line tool, base64-encoding is performed for you, and you can load the
text from a file. Otherwise, you must provide base64-encoded text.
User data is limited to 16 KB.
So basically, we define UserData to execute commands at the launch of our EC2 instance.
Reference : https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html
To answer your question: the UserData commands are executed first, and all commands in that section run sequentially. In your example you invoke the cfn helper scripts before the cloud-init section, so the configSets are applied first and the cloud-init commands run afterwards.
Inside the UserData section we invoke the cfn helper scripts, which read the CloudFormation template metadata and execute all the configSets defined under AWS::CloudFormation::Init:
From the AWS documentation:
The cfn-init helper script reads template metadata from the
AWS::CloudFormation::Init key and acts accordingly to:
Fetch and parse metadata from AWS CloudFormation
Install packages
Write files to disk
Enable/disable and start/stop services
Reference : https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-init.html
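The sequencing also explains the cfn-signal -e $? idiom in the question's UserData: bash runs the commands one after another, and $? holds the exit status of the most recent one. A local sketch with a stand-in function (no AWS calls; fake_cfn_init is hypothetical):

```shell
#!/bin/bash
# Stand-in for cfn-init: pretend configuration failed with exit code 3.
fake_cfn_init() { return 3; }

fake_cfn_init
status=$?   # capture immediately, before another command overwrites $?

# cfn-signal -e "$status" would forward this result to CloudFormation;
# here we only print what would be sent.
echo "cfn-signal would send -e $status"
```

Because $? is overwritten by every command, cfn-signal must be the very next command after cfn-init for the signal to reflect the cfn-init result.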
I have an EC2 UserData script that does docker-compose pull and up.
I want to run aws cloudformation update-stack and load the new Docker images each time the ${imageTag} property changes.
This is my CloudFormation instance YAML:
myInstance:
  Type: 'AWS::EC2::Instance'
  Metadata:
    'AWS::CloudFormation::Init':
      configSets:
        configs:
          - "configDocker"
          - "configOther"
      configDocker:
        commands:
          a:
            command: 'echo -e IMAGE_TAG=${imageTag} >> .env'
          b:
            command: 'docker-compose pull'
          c:
            command: 'docker-compose up'
  Properties:
    UserData:
      Fn::Base64:
        !Sub |
          runcmd:
            - /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource myInstance --region ${AWS::Region} -c configs
            - /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource myInstance --region ${AWS::Region}
Properties:
UserData:
Fn::Base64:
!Sub |
runcmd:
- /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource myInstance --region ${AWS::Region} -c configs
- /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource myInstance --region ${AWS::Region}
I tried adding docker-compose down and removal of old images in the UserData script, and adding cloud_final_modules to re-run UserData on each startup:
myInstance:
  Type: 'AWS::EC2::Instance'
  Metadata:
    'AWS::CloudFormation::Init':
      configSets:
        configs:
          - "configDocker"
          - "configOther"
      configDocker:
        commands:
          a:
            command: 'echo -e IMAGE_TAG=${imageTag} >> .env'
          b:
            command: 'docker-compose down'
          c:
            command: 'docker images -q | xargs -r sudo docker rmi'
          d:
            command: 'docker-compose pull'
          e:
            command: 'docker-compose up'
  Properties:
    UserData:
      Fn::Base64:
        !Sub |
          #cloud-config
          cloud_final_modules:
            - [scripts-user, always]
          runcmd:
            - /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource myInstance --region ${AWS::Region} -c configs
            - /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource myInstance --region ${AWS::Region}
But after aws cloudformation update-stack, the script does not run again despite imageTag having changed. It runs only if I change some value under runcmd.
How can I run the UserData script each time aws cloudformation update-stack runs (each time imageTag changes)?
How can I run some UserData commands only on first startup, other commands on each reboot, and other commands on instance stop?
For example: I want to run docker-compose down only on instance stop, docker-compose pull/up on each instance reboot or aws cloudformation update-stack, and some initial commands only on first setup.
How can I run some UserData commands only on first startup, other commands on each reboot and other commands on stop instance?
You can't. UserData is meant to run only at instance launch, not on reboots, starts, or stops (with one exception, mentioned below). To get that behaviour you have to implement the functionality yourself.
This is commonly done through the definition of custom systemd unit files. Thus you would have to create such unit files for your docker-compose setup.
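A minimal sketch of such a unit file (the unit name, path, and WorkingDirectory are hypothetical; adjust them to wherever your docker-compose.yml lives):

```ini
# /etc/systemd/system/compose-app.service  (hypothetical unit)
[Unit]
Description=Run the docker-compose stack with the instance lifecycle
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
# Hypothetical directory containing docker-compose.yml
WorkingDirectory=/opt/app
ExecStart=/usr/local/bin/docker-compose up -d
ExecStop=/usr/local/bin/docker-compose down

[Install]
WantedBy=multi-user.target
```

After systemctl enable compose-app, ExecStart runs on every boot and ExecStop runs at shutdown, which covers the "docker-compose down on stop" part of the question.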
The only exception, a rather hackish way of running UserData at instance start, is described in a recent AWS blog post:
How can I utilize user data to automatically execute a script with every restart of my Amazon EC2 instance?
I'm trying to install Docker on a CentOS instance using CloudFormation and the blocks of AWS::CloudFormation::Init.
One of the installation steps is to add a repository to yum by running:
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
How can I enable that repository in a CloudFormation template? Ideally, I would like to use a single config block if that's possible.
This is what I got so far
Resources:
  ec2:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          packages:
            yum:
              yum-utils: []
              device-mapper-persistent-data: []
              lvm2: []
    Properties:
      ...
You should be able to do that by adding a commands block:
Resources:
  ec2:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        configSets:
          config:
            - yum_config_manager
            - yum_packages
        yum_config_manager:
          commands:
            yum_config_manager:
              command: yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
        yum_packages:
          packages:
            yum:
              yum-utils: []
              device-mapper-persistent-data: []
              lvm2: []
And in your UserData you will have:
cfn-init -c config -s ${AWS::StackId} --resource ec2 --region ${AWS::Region}
Further explanation:
Note that the commands and packages blocks have each been wrapped in a configSet.
Note also that the -c config tells cfn-init which of the configSets you want to run.
After it boots up you should be able to inspect successful operation by looking in /var/log/cfn-init-cmd.log.
More info in the docs, especially the sections on configSets and commands.
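For completeness, the cfn-init invocation sits inside the instance's UserData under Properties, wrapped in Fn::Base64 and !Sub (a sketch; it assumes the AMI ships the cfn helper scripts under /opt/aws/bin):

```yaml
Properties:
  UserData:
    Fn::Base64: !Sub |
      #!/bin/bash -xe
      /opt/aws/bin/cfn-init -c config -s ${AWS::StackId} --resource ec2 --region ${AWS::Region}
```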
We're encountering a strange issue with AWS CloudFormation.
We're using CloudFormation to automate the deployment of some of our machines; our CloudFormation YAML describes the deployment, which contains a persistent EBS volume that was created outside the stack and that we don't want to remove or recreate along with the stack (it contains a lot of our application's state).
The relevant CloudFormation YAML snippet is:
DataVolumeAttachment01:
  Type: AWS::EC2::VolumeAttachment
  Properties:
    Device: "/dev/xvdm"
    InstanceId: !Ref EC2Instance01
    VolumeId: !Ref DataVolumeId

EC2Instance01:
  Type: "AWS::EC2::Instance"
  Properties:
    ImageId: "ami-6f587e1c"
    KeyName: !Ref "KeyName"
    InstanceType: !Ref "InstanceType"
    BlockDeviceMappings:
      # Root device
      - DeviceName: "/dev/sda1"
        Ebs:
          VolumeType: "gp2"
          DeleteOnTermination: "true"
          VolumeSize: 20
So, the root device is "transient" (every time the stack is updated, that volume is deleted and reprovisioned with UserData), while /dev/xvdm contains our persistent data; that device gets mounted at the end of the UserData and added to fstab.
Following AWS's own documentation, I created a script that unmounts the volume from inside the VM, and I even tried detaching the volume from the EC2 instance, something like:
${SSH_CMD} "cd /home/application && application stop && sudo umount /data && echo data volume unmounted"
echo "detaching data volume"
VOLID=$(aws ec2 describe-volumes --filters Name=tag-key,Values="Name" Name=tag-value,Values=persistent-volume --query 'Volumes[*].{ID:VolumeId}' --output text)
aws ec2 detach-volume --volume-id "${VOLID}"
I have verified the umount and the detach succeed.
The creation of a new stack with my template and parameters succeeds.
And yet, when I launch
aws cloudformation update-stack --capabilities CAPABILITY_IAM --stack-name $STACK_NAME --template-body file://single_ec2_instance.yml --parameters file://$AWS_PARAMETERS_FILE
The update fails, with this error:
Update to resource type AWS::EC2::VolumeAttachment is not supported.
Even though I'm not changing anything within that resource.
What's going on? How can I solve or work around this?
It seems this was a non-issue.
Either CloudFormation was affected by exhausted t2 CPU credits (which we had used up; we were trying to change the instance type for exactly that reason, to move to m3 or m4), or it was simply a bad day for EC2/CloudFormation in Ireland. Today, with the exact same setup, every update succeeded.
I just had this problem and found the solution was to terminate my stack, which creates EC2 instances, and then redeploy.