cfn-init specify custom json - amazon-web-services

Is there a way to tell cfn-init to use a custom JSON file loaded from disk? That way I can quickly troubleshoot problems; otherwise, the only way to change the AWS::CloudFormation::Init section is to delete the stack and create it anew.
I'd rather just make changes to my template.json and then run something like cfn-init -f C:\template.json ... so that it uses C:\template.json instead of fetching the template from 169.254.169.254 or however it normally gets it.

Turns out cfn-init does accept a file, but the file needs to have the "Metadata" key as the root key.
So if we save this to template.json:
{
  "AWS::CloudFormation::Init": {
    "config": {
      "files": {
        "C:\\Users\\Administrator\\Desktop\\test-file": {
          "source": "https://s3.us-east-2.amazonaws.com/example-bucket/example-file"
        }
      }
    }
  },
  "AWS::CloudFormation::Authentication": {
    "S3AccessCreds": {
      "type": "S3",
      "buckets": ["example-bucket"],
      "roleName": "s3access"
    }
  }
}
Then we can execute cfn-init -v --region us-east-2 template.json.
Note: Do not pass the stack (-s) or resource (-r) options. If you use cfn-init -v -s my_stack -r my_instance --region us-east-2 template.json you will get:
Error: You cannot specify more than one input source for metadata
If you put the entire template file in it instead of just the metadata at the root, you will get:
Could not find 'AWS::CloudFormation::Init' key in template.json
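A quick way to catch that second error before invoking cfn-init is to check the root keys of the file locally. This is just a sketch using Python's standard library (not part of cfn-init itself):

```python
import json

# Sketch: cfn-init expects "AWS::CloudFormation::Init" at the ROOT of the
# local file, not nested under Resources -> <resource> -> Metadata as it
# would be in a full template.
metadata = json.loads('{"AWS::CloudFormation::Init": {"config": {}}}')
full_template = json.loads(
    '{"Resources": {"MyInstance": {"Metadata": '
    '{"AWS::CloudFormation::Init": {}}}}}'
)

def has_init_at_root(doc):
    """Return True if cfn-init would find the Init key at the root."""
    return "AWS::CloudFormation::Init" in doc

print(has_init_at_root(metadata))       # True
print(has_init_at_root(full_template))  # False
```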

Related

How to find all the instances that were not successful in a "Systems Manager" run command?

I ran the AWS-RunPatchBaseline run command; some of my instances succeeded and some timed out. I want to filter the instances that timed out using the AWS CLI list-command-invocations command.
When I ran the below CLI command:
aws ssm list-command-invocations --command-id 7894b7658-a156-4e5g-97t2-2a9ab5498e1d
It displays the output attached here.
Next, from the above output, I want to filter all the instances that have "Status": "TimedOut", "StatusDetails": "DeliveryTimedOut" (or, actually, everything other than "Status": "Success").
I tried:
aws ssm list-command-invocations --command-id 7894b7658-a156-4e5g-97t2-2a9ab5498e1d --output text --query '#[?(CommandInvocations.Status != 'Success')]'
but it returns None.
I also tried
aws ssm list-command-invocations --command-id 7894b7658-a156-4e5g-97t2-2a9ab5498e1d --output text --query '#[?(#.Status != 'Success')]'
which also returns None.
And, with
aws ssm list-command-invocations --command-id 7894b7658-a156-4e5g-97t2-2a9ab5498e1d --output text --query 'CommandInvocations[?(#.Status != 'Success')]'
nothing is filtered; it returns the complete output.
Since you did not provide example output that one can copy and paste for testing purposes, this example is based on the output from the AWS documentation, where I changed the Status of the command with ID ef7fdfd8-9b57-4151-a15c-db9a12345678 and trimmed the excess data:
{
  "CommandInvocations": [
    {
      "CommandId": "ef7fdfd8-9b57-4151-a15c-db9a12345678",
      "InstanceId": "i-02573cafcfEXAMPLE",
      "InstanceName": "",
      "DocumentName": "AWS-UpdateSSMAgent",
      "DocumentVersion": "",
      "RequestedDateTime": 1582136283.089,
      "Status": "TimedOut",
      "StatusDetails": "DeliveryTimeOut"
    },
    {
      "CommandId": "ef7fdfd8-9b57-4151-a15c-db9a12345678",
      "InstanceId": "i-0471e04240EXAMPLE",
      "InstanceName": "",
      "DocumentName": "AWS-UpdateSSMAgent",
      "DocumentVersion": "",
      "RequestedDateTime": 1582136283.02,
      "Status": "Success",
      "StatusDetails": "Success"
    }
  ]
}
Given this JSON, the filter to apply is quite like the one you can find in the tutorial chapter "Filter Projections".
You just need to select the property the array lives under, in your case CommandInvocations, and apply your condition, Status != `Success`, inside the filter brackets [? ].
So, with the query:
CommandInvocations[?Status != `Success`]
On the above JSON, we end up with the expected:
[
  {
    "CommandId": "ef7fdfd8-9b57-4151-a15c-db9a12345678",
    "InstanceId": "i-02573cafcfEXAMPLE",
    "InstanceName": "",
    "DocumentName": "AWS-UpdateSSMAgent",
    "DocumentVersion": "",
    "RequestedDateTime": 1582136283.089,
    "Status": "TimedOut",
    "StatusDetails": "DeliveryTimeOut"
  }
]
And, so, your AWS command should be:
aws ssm list-command-invocations \
--command-id 7894b7658-a156-4e5g-97t2-2a9ab5498e1d \
--output text \
--query 'CommandInvocations[?Status != `Success`]'
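If you want to sanity-check the filter logic outside the CLI, the same selection can be mirrored with a plain list comprehension in Python. This is a sketch over the trimmed example JSON above (it does not use the actual jmespath library):

```python
import json

# Example response trimmed from the AWS documentation (same as above).
response = json.loads("""
{
  "CommandInvocations": [
    {"CommandId": "ef7fdfd8-9b57-4151-a15c-db9a12345678",
     "InstanceId": "i-02573cafcfEXAMPLE",
     "Status": "TimedOut", "StatusDetails": "DeliveryTimeOut"},
    {"CommandId": "ef7fdfd8-9b57-4151-a15c-db9a12345678",
     "InstanceId": "i-0471e04240EXAMPLE",
     "Status": "Success", "StatusDetails": "Success"}
  ]
}
""")

# Equivalent of the JMESPath query CommandInvocations[?Status != `Success`]:
# iterate over the array under "CommandInvocations" and keep every element
# whose Status is anything other than "Success".
not_successful = [
    inv for inv in response["CommandInvocations"]
    if inv["Status"] != "Success"
]

print([inv["InstanceId"] for inv in not_successful])  # ['i-02573cafcfEXAMPLE']
```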

Running a shell script in CloudFormation cfn-init

I am trying to run a script in the cfn-init command but it keeps timing out.
What am I doing wrong when running the startup-script.sh?
"WebServerInstance" : {
"Type" : "AWS::EC2::Instance",
"DependsOn" : "AttachGateway",
"Metadata" : {
"Comment" : "Install a simple application",
"AWS::CloudFormation::Init" : {
"config" : {
"files": {
"/home/ec2-user/startup_script.sh": {
"content": {
"Fn::Join": [
"",
[
"#!/bin/bash\n",
"aws s3 cp s3://server-assets/startserver.jar . --region=ap-northeast-1\n",
"aws s3 cp s3://server-assets/site-home-sprint2.jar . --region=ap-northeast-1\n",
"java -jar startserver.jar\n",
"java -jar site-home-sprint2.jar --spring.datasource.password=`< password.txt` --spring.datasource.username=`< username.txt` --spring.datasource.url=`<db_url.txt`\n"
]
]
},
"mode": "000755"
}
},
"commands": {
"start_server": {
"command": "./startup_script.sh",
"cwd": "~",
}
}
}
}
},
The files part works fine and creates the file, but it times out when running the command.
What is the correct way of executing a shell script?
You can tail the logs in /var/log/cfn-init.log and detect the issues while running the script.
The commands in CloudFormation Init are run as the root user by default. The issue may be that your script resides in /home/ec2-user/ while you are trying to run it from '~' (i.e. /root).
Give the absolute path (/home/ec2-user) in cwd; that should solve the problem.
However, the exact cause can only be determined from the logs.
Usually the init scripts are executed by root unless specified otherwise. Try giving the full path when running your startup script. You can also give cloudkast a try; it is an online CloudFormation template generator that makes it easier to create objects such as AWS::CloudFormation::Init.
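Putting both answers together, a corrected commands section would look something like this (a sketch based on the question's template, with cwd changed to the script's actual directory):

```json
"commands": {
  "start_server": {
    "command": "./startup_script.sh",
    "cwd": "/home/ec2-user"
  }
}
```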

AWS CLI Update_Stack can't pass parameter value containing a /

I've been banging my head all morning on trying to create a powershell script that will ultimately update an AWS stack. Everything is great right up to the point where I have to pass parameters to the cloudformation template.
One of the parameter values (ParameterKey=ZipFilePath) contains a /. But the script fails complaining that it was expecting a = but found a /. I've tried escaping the slash but then the API complains that it found the backslash instead of an equals. Where am I going wrong?
... <snip creating a zip file> ...
$filename = ("TotalCommApi-" + $DateTime + ".zip")
aws s3 cp $filename ("s3://S3BucketName/TotalCommApi/" + $filename)
aws cloudformation update-stack --stack-name TotalCommApi-Dev --template-url https://s3-region.amazonaws.com/S3bucketName/TotalCommApi/TotalCommApiCFTemplate.json --parameters ParameterKey=S3BucketName,ParameterValue=S3BucketNameValue,UsePreviousValue=false ParameterKey=ZipFilePath,ParameterValue=("TotalCommApi/" + $filename) ,UsePreviousValue=false
cd C:\Projects\TotalCommApi\TotalComm_API
And here is the pertinent section from the CloudFormation Template:
"Description": "An AWS Serverless Application that uses the ASP.NET Core framework running in Amazon Lambda.",
"Parameters": {
"ZipFilePath": {
"Type": "String",
"Description": "Path to the zip file containing the Lambda Functions code to be published."
},
"S3BucketName": {
"Type": "String",
"Description": "Name of the S3 bucket where the ZipFile resides."
}
},
"AWSTemplateFormatVersion": "2010-09-09",
"Outputs": {},
"Conditions": {},
"Resources": {
"ProxyFunction": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Code": {
"S3Bucket": {"Ref": "S3BucketName" },
"S3Key": { "Ref": "ZipFilePath" }
},
And this is the error message generated by PowerShell ISE
[image removed]
Update: I am using Windows 7, which comes with PowerShell 2. I upgraded to PowerShell 4. Then my script yielded this error:
On the recommendation of a consulting firm, I uninstalled the CLI that I had installed via MSI, upgraded Python to 3.6.2, and re-installed the CLI via pip. I still get the same error. I echoed the command to the screen and this is what I see:
upload: .\TotalCommApi-201806110722.zip to s3://S3bucketName/TotalCommApi/TotalCommApi-201806110722.zip
aws
cloudformation
update-stack
--stack-name
TotalCommApi-Dev
--template-url
https://s3-us-west-2.amazonaws.com/s3BucketName/TotalCommApi/TotalCommApiCFTemplate.json
--parameters
ParameterKey=S3BucketName
UsePreviousValue=true
ParameterKey=ZipFilePath
ParameterValue=TotalCommApi/TotalCommApi-201806110722.zip
Sorry for the delay getting back to you on this - the good news is that I might have a hint about what your issue is.
ParameterKey=ZipFilePath,ParameterValue=("TotalCommApi/" + $filename) ,UsePreviousValue=false
I was driving myself mad trying to reproduce this issue. Why? Because I assumed that the space after ("TotalCommApi/" + $filename) was an artifact from copying, not the actual value that you were using. When I added the space in:
aws cloudformation update-stack --stack-name test --template-url https://s3.amazonaws.com/test-bucket-06-09/test.template --parameters ParameterKey=S3BucketName,ParameterValue=$bucketname,UsePreviousValue=false ParameterKey=ZipFilePath,ParameterValue=testfolder/$filename ,UsePreviousValue=false
Error parsing parameter '--parameters': Expected: '=', received: ','
This isn't exactly your error message (, instead of /), but I think it's probably a similar issue in your case - check to make sure the values that are being used in your command don't have extra spaces somewhere.
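To see why a stray space breaks the shorthand syntax, note that the shell splits arguments on whitespace before the CLI ever parses them. A small sketch (the parameter string is a placeholder value, not the asker's actual command):

```python
import shlex

# Sketch: the shell splits arguments on whitespace, so a stray space before
# ",UsePreviousValue=false" turns it into a separate token that starts with
# a comma, which the CLI shorthand parser then rejects.
bad = 'ParameterKey=ZipFilePath,ParameterValue=testfolder/file.zip ,UsePreviousValue=false'
good = 'ParameterKey=ZipFilePath,ParameterValue=testfolder/file.zip,UsePreviousValue=false'

print(shlex.split(bad))   # two tokens: the second begins with ','
print(shlex.split(good))  # one token, as the parser expects
```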

Passing userdata file to AWS Cloudformation stack

I have a shell script (userdata file) and am wondering: is there a CLI command parameter that allows a user to launch a CloudFormation stack with a userdata file?
Inside your template, use a CloudFormation parameter for the instance userdata:
{
  "Parameters": {
    "UserData": {
      "Type": "String"
    }
  },
  "Resources": {
    "Instance": {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "UserData" : { "Ref" : "UserData" },
        ...
      }
    },
    ...
  }
}
Assuming you're using a Unix-like command line environment, create your stack like this:
aws cloudformation create-stack --stack-name myStack \
--template-body file://myStack.json \
--parameters ParameterKey=UserData,ParameterValue=$(base64 -w0 userdata.sh)
Your user-data must exist in the CloudFormation template when you create the stack. You can write a script to read in your user-data from the file and insert it into the CloudFormation stack prior to creating the stack. Note that you may need to make formatting changes to the userdata (see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html#cfn-ec2-instance-userdata).

Deploy image to AWS Elastic Beanstalk from private Docker repo

I'm trying to pull a Docker image from its private repo and deploy it on AWS Elastic Beanstalk with the help of a Dockerrun.aws.json packed in a zip. Its content is:
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "my-bucket",
    "Key": "docker/.dockercfg"
  },
  "Image": {
    "Name": "namespace/repo:tag",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}
Where "my-bucket" is my bucket's name on s3, which uses the same location as my BS environment. Configuration that's set in key is the result of
$ docker login
invoked in the boot2docker terminal. It is then copied to the "docker" folder in "my-bucket". The image definitely exists.
After that I upload the .zip with the Dockerrun file to EB, and on deploy I get:
Activity execution failed, because: WARNING: Invalid auth configuration file
What am I missing?
Thanks in advance
Docker has updated the configuration file path from ~/.dockercfg to ~/.docker/config.json. They also have leveraged this opportunity to do a breaking change to the configuration file format.
AWS however still expects the former format, the one used in ~/.dockercfg (see the file name in their documentation):
{
  "https://index.docker.io/v1/": {
    "auth": "__auth__",
    "email": "__email__"
  }
}
Which is incompatible with the new format used in ~/.docker/config.json:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "__auth__",
      "email": "__email__"
    }
  }
}
They are pretty similar though. So if your version of Docker generates the new format, just strip the auths line and its corresponding curly brace and you are good to go.
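The stripping can also be done mechanically. A sketch in Python that converts the new config.json structure into the legacy .dockercfg structure (the credential values here are placeholders):

```python
import json

# New ~/.docker/config.json format (placeholder credentials).
new_format = {
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "__auth__",
            "email": "__email__",
        }
    }
}

# The legacy ~/.dockercfg format that AWS expects is simply the contents
# of the "auths" object, promoted to the top level.
legacy_format = new_format["auths"]

print(json.dumps(legacy_format, indent=2))
```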