I am trying to learn CloudFormation and I'm stuck with a scenario where I need a second EC2 instance started only after the first EC2 instance is provisioned and good to go.
This is what I have in the UserData of instance one:
"#!/bin/bash\n",
"#############################################################################################\n",
"sudo add-apt-repository ppa:fkrull/deadsnakes\n",
"sudo apt-get update\n",
"curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -\n",
"sudo apt-get install build-essential libssl-dev python2.7 python-setuptools -y\n",
"#############################################################################################\n",
"Install Easy Install",
"#############################################################################################\n",
"easy_install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz\n",
"#############################################################################################\n",
"#############################################################################################\n",
"GIT LFS Repo",
"#############################################################################################\n",
"curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash\n",
"#############################################################################################\n",
"cfn-init",
" --stack ",
{
"Ref": "AWS::StackName"
},
" --resource UI",
" --configsets InstallAndRun ",
" --region ",
{
"Ref": "AWS::Region"
},
"\n",
"#############################################################################################\n",
"# Signal the status from cfn-init\n",
"cfn-signal -e 0 ",
" --stack ",
{
"Ref": "AWS::StackName"
},
" --resource UI",
" --region ",
{
"Ref": "AWS::Region"
},
" ",
{
"Ref": "WaitHandleUIConfig"
},
"\n"
I have a WaitCondition, which I think is what's used to do this:
"WaitHandleUIConfig" : {
"Type" : "AWS::CloudFormation::WaitConditionHandle",
"Properties" : {}
},
"WaitConditionUIConfig" : {
"Type" : "AWS::CloudFormation::WaitCondition",
"DependsOn" : "UI",
"Properties" : {
"Handle" : { "Ref" : "WaitHandleUIConfig" },
"Timeout" : "500"
}
}
In the second instance I use DependsOn to wait for the first instance.
"Service": {
"Type": "AWS::EC2::Instance",
"Properties": {
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "1ba546d0-2bad-4b68-af47-6e35159290ca"
},
},
"DependsOn":"WaitConditionUIConfig"
}
This isn't working. I keep getting the error:
WaitCondition timed out. Received 0 conditions when expecting 1
Any help would be appreciated.
Thanks
Put quotes around the Handle
Change this
" ",
{
"Ref": "WaitHandleUIConfig"
},
"\n"
to this
" \"",
{
"Ref": "WaitHandleUIConfig"
},
"\"\n"
Remove --stack, --resource and --region from your cfn-signal command. These options are only used for 'resource signaling', not when signaling using a Wait Condition Handle. (You might also need to add an --id option, but the documentation says this is not required.)
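Putting both changes together, the cfn-signal portion of your UserData Fn::Join would look something like this (keeping the -e 0 exit code and the WaitHandleUIConfig reference from your template):
"# Signal the status from cfn-init\n",
"cfn-signal -e 0 ",
" \"",
{
"Ref": "WaitHandleUIConfig"
},
"\"\n"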
For further debugging, examine the /var/log/cloud-init-output.log file on the EC2 instance for any cloud-init errors that might be preventing the signal from reaching your wait condition.
You might also want to comment out and newline-terminate the description lines "Install Easy Install", and "GIT LFS Repo",, e.g., "# Install Easy Install\n",. These issues shouldn't cause your script to fail, but they will cause 'command not found' errors to appear in your log.
In the CloudFormation template I am deploying, I am running a long-running operation in the UserData block.
It looks as follows:
"UserData": {
"Fn::Base64" : {
"Fn::Join" : [
"",
[
"/usr/local/bin/mylongrunningscript.sh"
]
]
}
}
The contents of the script are:
echo "UserData starting!" > /var/log/mycustomlog.log
sleep 300
echo "UserData finished!" >> /var/log/mycustomlog.log
The issue I am seeing is that the CloudFormation stack completes its deployment before the UserData script finishes running. I believe this is the case because if I am quick enough and SSH into the instance, I see something like this:
$ cat /var/log/mycustomlog.log
UserData starting
which suggests that the UserData script didn't finish running.
How can I make sure that the UserData code execution is completed before the stack reaches the "CREATE_COMPLETE" status?
To ensure the CloudFormation template waits for the completion of the UserData script, you must do two things:
Add a CreationPolicy to the resource you are targeting (virtual machine in my case).
Add logic in the script to signal its completion. This custom logic uses the cfn-signal utility, which you might have to install in your instance.
Here's how the template looks now:
"UserData": {
"Fn::Base64" : {
"Fn::Join" : [
"",
[
"/usr/local/bin/mylongrunningscript.sh"
]
]
}
},
"CreationPolicy": {
"ResourceSignal" : {
"Count": "1",
"Timeout": "PT10M"
}
}
The cfn-signal utility is used to signal the completion of the script:
"/home/ubuntu/aws-cfn-bootstrap-*/bin/cfn-signal -e $? ",
" --stack ", { "Ref": "AWS::StackName" },
" --resource MyInstance" ,
" --region ", { "Ref" : "AWS::Region" }, "\n"
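Putting the two pieces together, a minimal sketch of the combined UserData might look like this (the shebang and trailing newlines are additions so the joined string forms a valid shell script; MyInstance is the logical ID used in the cfn-signal example above):
"UserData": {
"Fn::Base64" : {
"Fn::Join" : [
"",
[
"#!/bin/bash\n",
"# Run the long-running script, then report its exit code to CloudFormation\n",
"/usr/local/bin/mylongrunningscript.sh\n",
"/home/ubuntu/aws-cfn-bootstrap-*/bin/cfn-signal -e $? ",
" --stack ", { "Ref": "AWS::StackName" },
" --resource MyInstance",
" --region ", { "Ref" : "AWS::Region" }, "\n"
]
]
}
}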
See here for a Windows example.
I have a CloudFormation template to deploy a Windows server and run some PowerShell commands. I can get the server to deploy, but none of my PowerShell commands seem to run; they are getting passed over.
I have been focusing on cfn-init to install my apps, with no luck.
{
"AWSTemplateFormatVersion":"2010-09-09",
"Description":"CHOCO",
"Resources":{
"MyEC2Instance1":{
"Type":"AWS::EC2::Instance",
"Metadata" : {
"AWS::CloudFormation::Init": {
"configSet" : {
"config" : [
"extract",
"prereq",
"install"
]
},
"extract" : {
"command" : "powershell.exe -Command Set-ExecutionPolicy -
Force remotesigned"
},
"prereq" : {
"command" : "powershell.exe -Command Invoke-WebRequest -
Uri https://xxxxx.s3.us-east-2.amazonaws.com/chocoserverinstall.ps1 -
OutFile C:chocoserverinstall.ps1"
},
"install" : {
"command" : "powershell.exe -File chocoserverinstall.ps1"
}
}
},
"Properties":{
"AvailabilityZone":"us-east-1a",
"DisableApiTermination":false,
"ImageId":"ami-06bee8e1000e44ca4",
"InstanceType":"t3.medium",
"KeyName":"xxx",
"SecurityGroupIds":[
"sg-01d044cb1e6566ef0"
],
"SubnetId":"subnet-36c3a56b",
"Tags":[
{
"Key":"Name",
"Value":"CHOCOSERVER"
},
{
"Key":"Function",
"Value":"CRISPAPPSREPO"
}
],
"UserData":{
"Fn::Base64":{
"Fn::Join":[
"",
[
"<script>\n",
"cfn-init.exe -v ",
" --stack RDSstack",
" --configsets config ",
" --region us-east-1",
"\n",
"<script>"
]]}
}
}
}
}
}
I'm expecting CloudFormation to run through my Metadata commands when provisioning this template.
The cfn-init command requires the -c or --configsets option to specify "a comma-separated list of configsets to run (in order)".
See:
cfn-init - AWS CloudFormation
AWS::CloudFormation::Init - AWS CloudFormation
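Putting that together, a sketch of the UserData block with an explicit config set and the instance's logical resource ID might look like this (the stack name, region and config set name are taken from the question; treat it as a sketch rather than a drop-in fix):
"UserData": {
"Fn::Base64": {
"Fn::Join": [
"",
[
"<script>\n",
"cfn-init.exe -v ",
" --stack RDSstack",
" --resource MyEC2Instance1",
" -c config",
" --region us-east-1",
"\n",
"</script>"
]
]
}
}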
Packer seems to exclude the project SSH keys even though I have set the block-project-ssh-keys value to false. The final command fails, but that user has an SSH key tied to the project.
Any ideas?
{
"builders": [
{
"type": "googlecompute",
"project_id": "mahamed901",
"source_image_family": "ubuntu-1804-lts",
"ssh_username": "packer",
"zone": "europe-west1-b",
"preemptible": "true",
"image_description": "Worker Node for Jenkins (Java + Docker)",
"disk_type": "pd-ssd",
"disk_size": "10",
"metadata": {"block-project-ssh-keys":"false"},
"image_name": "ubuntu1804-jenkins-docker-{{isotime | clean_image_name}}",
"image_family": "ubuntu1804-jenkins-worker"
}
],
"provisioners": [
{
"type": "shell",
"inline": [
"sudo apt update",
"#sudo apt upgrade -y",
"#sudo apt-get install -y git make default-jdk",
"#curl https://get.docker.com/ | sudo bash",
"uptime",
"sudo curl -L \"https://github.com/docker/compose/releases/download/1.23.1/docker-compose-$(uname -s)-$(uname -m)\" -o /usr/local/bin/docker-compose",
"sudo chmod +x /usr/local/bin/docker-compose",
"sleep 5",
"cat /etc/passwd",
"#sudo usermod -aG docker jenkins",
"#sudo docker ps",
"#rm ~/.ssh/authorized_keys"
]
}
]
}
This is controlled by the block-project-ssh-keys metadata option, which can be set to true or false.
See this issue.
(The format of your metadata is wrong; remove the square brackets [ ].)
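For illustration, the builder's metadata should be a plain JSON object rather than a list, e.g. (this matches the key and value shown in the question):
"metadata": {
"block-project-ssh-keys": "false"
}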
In AWS CloudFormation I know you can update the stack by updating the JSON file and those changes will take effect, but how could I just update the stack's packages, for example with yum update or apt update, etc.?
Thanks in advance
Here is a sample showing how to handle your problem. Update the UserData in your CloudFormation template:
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"yum update -y \n",
"# Install the files and packages from the metadata\n",
"/opt/aws/bin/cfn-init -v ",
" --stack ", { "Ref" : "AWS::StackName" },
" --resource WebServerInstance ",
" --configsets InstallAndRun ",
" --region ", { "Ref" : "AWS::Region" }, "\n"
]]}}
If you want to learn more about cfn-init, read this: cfn-init
If you need a sample template, see here: Deploying Applications on Amazon EC2 with AWS CloudFormation
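For context, here is a minimal sketch of the Metadata section that the cfn-init call above reads (the WebServerInstance logical ID and the InstallAndRun config set name come from the snippet; the httpd package is just a placeholder, and the instance Properties are omitted for brevity):
"WebServerInstance" : {
"Type" : "AWS::EC2::Instance",
"Metadata" : {
"AWS::CloudFormation::Init" : {
"configSets" : {
"InstallAndRun" : [ "Install" ]
},
"Install" : {
"packages" : {
"yum" : { "httpd" : [] }
}
}
}
}
}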
I am trying to combine the CloudFormation template multi-tier-web-app-in-vpc.template with the CloudFormation template used by Visual Studio to create load-balanced instances. The goal is to create 2 application servers within a private subnet of a VPC. The template works fine, but when I start plugging in Windows instances they just fail.
Error Message
CREATE_FAILED WaitCondition timed out. Received 0 conditions when expecting 1
The following resource(s) failed to create: [FrontendWaitCondition]. . Rollback requested by user.
Template used to create the CloudFormation stack:
https://s3.amazonaws.com/hg-optimise/Windows-Multi-Tier.template
I am trying to use the following Amazon templates as guides.
Amazon Visual Studio Template
https://s3.amazonaws.com/hg-optimise/Visual-Studio.template
Amazon Multi-tier Web Sample - http://aws.amazon.com/cloudformation/aws-cloudformation-templates/
https://s3.amazonaws.com/cloudformation-templates-us-east-1/multi-tier-web-app-in-vpc.template
It looks like you are taking on too much, trying to get everything working all at once. I would take it one step at a time: create a template that gets one instance up, then add auto scaling, then a load balancer, then subnets, routing, etc. The problem you are seeing now is most likely because you have not signaled success to the wait condition.
Below is the Properties section of an Instance resource. This snippet was taken from an AWS documentation page. Note that the "UserData" section has a call to cfn-init.exe to perform the actions specified in the instance's AWS::CloudFormation::Init metadata, and a call to cfn-signal.exe to signal to the WaitCondition that the instance is up.
"Properties": {
"InstanceType" : { "Ref" : "InstanceType" },
"ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" },
{ "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] },
"SecurityGroups" : [ {"Ref" : "SharePointFoundationSecurityGroup"} ],
"KeyName" : { "Ref" : "KeyPairName" },
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"<script>\n",
"cfn-init.exe -v -s ", { "Ref" : "AWS::StackName" },
" -r SharePointFoundation",
" --region ", { "Ref" : "AWS::Region" }, "\n",
"cfn-signal.exe -e %ERRORLEVEL% ", { "Fn::Base64" : { "Ref" : "SharePointFoundationWaitHandle" }}, "\n",
"</script>"
]]}}
}
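For completeness, a cfn-signal.exe call like the one above needs a matching wait condition handle and wait condition elsewhere in the template; a minimal sketch (the resource names follow the SharePoint example, and the Timeout value is arbitrary):
"SharePointFoundationWaitHandle" : {
"Type" : "AWS::CloudFormation::WaitConditionHandle",
"Properties" : {}
},
"SharePointFoundationWaitCondition" : {
"Type" : "AWS::CloudFormation::WaitCondition",
"DependsOn" : "SharePointFoundation",
"Properties" : {
"Handle" : { "Ref" : "SharePointFoundationWaitHandle" },
"Timeout" : "600"
}
}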
You have set the front end wait condition to basically wait until your FrontendFleet is up and running.
You should set a desired capacity for your front end fleet.
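For example, a hedged sketch of setting a desired capacity on the FrontendFleet resource (the size values are placeholders, and the launch configuration and subnet properties are omitted for brevity):
"FrontendFleet" : {
"Type" : "AWS::AutoScaling::AutoScalingGroup",
"Properties" : {
"MinSize" : "1",
"MaxSize" : "2",
"DesiredCapacity" : "1"
}
}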
When you get this error, what is the state of your Auto Scaling group FrontendFleet? If it is still bringing up instances, then your timeout is simply too short.
I honestly wouldn't worry about the wait conditions unless you really need them.