CloudFormation cfn-init.exe command - amazon-web-services

I saw the following code at the beginning of a userdata script. What is it for?
if (${PROXY}) {
    &"cfn-init.exe" -v -s ${STACK_ID} -r ${RESOURCE} --region ${REGION} --http-proxy=${PROXY} --https-proxy=${PROXY}
} else {
    write-host "Unable to determine Proxy setting"
    &"cfn-init.exe" -v -s ${STACK_ID} -r ${RESOURCE} --region ${REGION}
}

"cfn-init.exe" is a utility from cloudformation to execute instance initialization provisioning defined in cloudformation template .
This ensures the server initialization configurations defined under "AWS::CloudFormation::Init" section inside the cloud formation template is executed.
Please check the following documentation for reference
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-init.html
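For comparison, the Linux equivalent of this proxy-aware check in a user-data script might look like the following (a minimal sketch; the PROXY, STACK_ID, RESOURCE, and REGION variables are assumed to be set earlier in the script):

#!/bin/bash
# Run cfn-init through the proxy only when one has been configured
if [ -n "${PROXY}" ]; then
    /opt/aws/bin/cfn-init -v -s "${STACK_ID}" -r "${RESOURCE}" --region "${REGION}" --http-proxy=${PROXY} --https-proxy=${PROXY}
else
    echo "Unable to determine proxy setting"
    /opt/aws/bin/cfn-init -v -s "${STACK_ID}" -r "${RESOURCE}" --region "${REGION}"
fi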

Related

Update kube context using shell script

I am trying to set the current context of the cluster using a shell script.
#!/usr/bin/env bash
#Set Parameters
echo ${cluster_arn}
sh "aws eks update-kubeconfig --name abc-eks-cluster --role-arn ${cluster_arn} --alias abc-eks-cluster"
export k8s_host="$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ")"
However, the above command gives error:
sh: 0: Can't open aws eks update-kubeconfig --name abc-eks-cluster --role-arn arn:aws:iam::4399999999873:role/abc-eks-cluster-cluster-admin --alias abc-eks-cluster
Can someone suggest what the issue is and how I can set the current context? The k8s_host command yields another error saying that the context is not set.
I have no idea why you are even involving sh as a subshell, when you're already in a shell script, but
Can someone suggest me ... what is the issue
You have provided the entire command to sh but failed to use the -c flag that tells it the first argument is an inline shell snippet; without it, sh expects the first non-option argument to be a file, which is why it says it cannot open a file with that horrifically long name.
There are two ways out of this mess: use -c, or stop trying to use a subshell:
sh -c "aws eks update-kubeconfig --name abc-eks-cluster --role-arn ${cluster_arn} --alias abc-eks-cluster"
or:
aws eks update-kubeconfig --name abc-eks-cluster --role-arn ${cluster_arn} --alias abc-eks-cluster
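Putting it together, the whole script can simply run the commands directly (a minimal sketch based on the snippet above; cluster_arn is assumed to be provided by the caller's environment):

#!/usr/bin/env bash
# Set parameters
echo "${cluster_arn}"

# Write/refresh the kubeconfig context directly; no subshell needed
aws eks update-kubeconfig --name abc-eks-cluster --role-arn "${cluster_arn}" --alias abc-eks-cluster

# Extract the API server endpoint from the freshly written context
export k8s_host="$(kubectl config view --minify | grep server | cut -f 2- -d ":" | tr -d " ")"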

AWS CloudFormation: Order of Execution among ConfigSets, UserData, Cloud-Init

I have created a template with an EC2 instance, and I use cfn-init to process the configsets. In the UserData section of the instance, I wrote some commands to be executed by cloud-init and some commands to be executed without cloud-init.
I am not sure which commands run in which sequence.
What I mean is, in which order the commands are executed? For example:
Commands in the configsets
Commands in the cloud-init section of the userdata
Commands in the Userdata
Part of my code is below:
UserData:
  Fn::If:
    - DatadogAgentEnabled
    -
      Fn::Base64: !Sub |
        #!/bin/bash -xe
        yum update -y
        yum update -y aws-cfn-bootstrap
        /opt/aws/bin/cfn-init --stack ${AWS::StackName} --resource GhostServer --configsets prepare_instance_with_datadog --region ${AWS::Region}
        /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource GhostServer --region ${AWS::Region}
        #cloud-config    <---- cloud-init section inside the UserData
        runcmd:
          - [ sh, -c, "sed 's/api_key:.*/api_key: {DatadogAPIKey}/' /etc/datadog-agent/datadog.yaml.example > /etc/datadog-agent/datadog.yaml" ]
          - systemctl restart datadog-agent
From the AWS documentation
UserData :
The user data to make available to the instance. For more
information, see Running commands on your Linux instance at launch
(Linux) and Adding User Data (Windows). If you are using a command
line tool, base64-encoding is performed for you, and you can load the
text from a file. Otherwise, you must provide base64-encoded text.
User data is limited to 16 KB.
So basically we define UserData to execute some commands at the launch of our EC2 instance.
Reference : https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html
To answer your question: the UserData commands are executed first, and all commands in that section run sequentially. In your example you invoke the cfn helper scripts before the cloud-init section, so the configsets are applied first and the cloud-init commands are called afterwards.
Inside that UserData section we invoke the cfn helper scripts, which read the CloudFormation template metadata and execute all the configsets defined under AWS::CloudFormation::Init:
From the AWS Documentation :
The cfn-init helper script reads template metadata from the
AWS::CloudFormation::Init key and acts accordingly to:
Fetch and parse metadata from AWS CloudFormation
Install packages
Write files to disk
Enable/disable and start/stop services
Reference : https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-init.html
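As a rough illustration of that ordering, consider a user-data script like the one below (a minimal sketch with made-up stack, resource, and configset names; the echo markers exist only to make the sequence visible in /var/log/cloud-init-output.log). The lines run strictly top to bottom, and the commands defined in the configsets run at the point where cfn-init is invoked:

#!/bin/bash -xe
echo "step 1: plain user-data command runs first"
# step 2: the configset commands defined under AWS::CloudFormation::Init run here,
# in the order the configset lists them
/opt/aws/bin/cfn-init --stack my-stack --resource MyServer --configsets my_configset --region us-east-1
echo "step 3: this runs only after cfn-init has finished"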

How can I resolve non supported format error in aws cloudformation script

I am writing a CloudFormation script which is supposed to pick the stack name, template body, and parameter file from .txt files for deployment. I do not want the YAML and JSON files to be edited during a new deployment; instead, the .txt files should be edited.
The code is below
aws cloudformation create-stack --stack-name $(<stack_name.txt) \
  --template-body file://$(<stack_template_file_name.txt) \
  --parameters file://$(<stack_parameter_file_name.txt) \
  --capabilities "CAPABILITY_IAM" "CAPABILITY_NAMED_IAM" --region=us-west-2
Note: stack_name.txt contains the name to be used for the stack, stack_template_file_name.txt contains the name of the template .yml file, and stack_parameter_file_name.txt contains the name of the parameter .json file.
When I type the command directly in the CLI the stack is deployed, but when I copy it into create.sh and run ./create.sh I get the error below:
`' doesn't match a supported format.`
How can I fix this?
I just wanted to comment that I ran into the same error from a different issue in the AWS CLI:
' doesn't match a supported format.
When running:
aws s3 cp file.txt s3://bucket-name/
The issue was a malformed configuration in the ~/.aws config folder.
Re-running aws configure didn't seem to fix the issue; however, by inspecting the files in the ~/.aws folder I was able to remove the random characters that were causing the issue.
In my case I was using Ubuntu on the Windows Subsystem for Linux.
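If you want to make those stray characters visible before editing, something like this (assuming the default config locations) prints non-printing characters explicitly:

cat -v ~/.aws/config ~/.aws/credentials    # control characters show up as ^M, M-BM-..., etc.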
I was getting shell variables from a properties file.
For example, my properties file abc.properties:
#VSRX
region=us-west-1
key_name=cft111
key_location=/var/cft111.pem
My shell script abc.sh:
#!/bin/bash
action=$1
source ./abc.properties

if [[ $action == "create" ]]; then
    aws cloudformation deploy \
        --template-file abc.yml \
        --stack-name ${parent_stack_name} \
        --capabilities CAPABILITY_NAMED_IAM \
        --region ${region} \
        --parameter-overrides KeyName=${key_name}
fi
I faced a similar issue. I tried echoing the command as well, and the region parameter itself was invisible in the echoed command.
My resolution: it was only a line-ending/formatting issue, and I fixed it with the two commands below.
sed -i 's/\r$//' abc.sh
sed -i 's/\r$//' abc.properties
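If you want to confirm that Windows-style (CRLF) line endings are the culprit before rewriting the files, a quick check (using the file names from the example above) is:

file abc.sh abc.properties    # reports "with CRLF line terminators" for affected files
cat -A abc.sh | head          # a trailing ^M before the $ marks a carriage return on that line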
I found out that it had to do with the EC2 environment; the exact same command worked on a Windows and an Ubuntu system.

CloudFormation deploy --parameter-overrides doesn't accept a file: workaround

I am in the process of setting up a pipeline using CodeBuild, with cloudformation package and cloudformation deploy to spin up a stack that creates a Lambda function. I know that with cloudformation deploy we can't use a parameters file with --parameter-overrides; that feature request is still open with AWS: https://github.com/aws/aws-cli/issues/2828 . So I am trying to use a workaround based on jq, described in https://github.com/aws/aws-cli/issues/3274#issuecomment-529155262 , like below:
PARAMETERS_FILE="parameters.json" && PARAMS=($(jq -r '.Parameters[] | [.ParameterKey, .ParameterValue] | "\(.[0])=\(.[1])"' ${PARAMETERS_FILE}))
aws cloudformation deploy --template-file /codebuild/output/packaged.yaml --region us-east-2 --stack-name InitialSetup --capabilities CAPABILITY_IAM --parameter-overrides ${PARAMS[@]}
This workaround works well when tested via the CLI. I also tried it inside a container, since buildspec.yaml creates a container in the background that runs these commands, but CodeBuild does not execute the aws cloudformation deploy step and fails with the error "aws: error: argument --parameter-overrides: expected at least one argument". I even tried copying the two steps of the workaround into a shell script and executing that, but then I run into the error "[Container] 2020/01/21 09:19:14 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: ./test.sh. Reason: exit status 255".
Can someone please guide me here? My buildspec.yaml file is below:
version: 0.2
phases:
  install:
    runtime-versions:
      java: corretto8
    commands:
      - wget -O jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
      - chmod +x ./jq
      - cp jq /usr/bin
      - jq --version
  pre_build:
    commands:
      # - echo "[Pre-Build phase]
  build:
    commands:
      - aws cloudformation package --template-file master.yaml --s3-bucket rtestbucket --output-template-file packaged.yaml
      - aws s3 cp ./packaged.yaml s3://rtestbucket/packaged.yaml
      - aws s3 cp s3://rtestbucket/packaged.yaml /codebuild/output
  post_build:
    commands:
      - PARAMETERS_FILE="parameters.json" && PARAMS=($(jq -r '.Parameters[] | [.ParameterKey, .ParameterValue] | "\(.[0])=\(.[1])"' ${PARAMETERS_FILE}))
      - ls
      - aws cloudformation deploy --template-file /codebuild/output/packaged.yaml --region us-east-2 --stack-name InitialSetup --capabilities CAPABILITY_IAM --parameter-overrides ${PARAMS[@]}
artifacts:
  type: zip
  files:
    - packaged.yaml
CodeBuild buildspec commands are not run in a bash shell, and I think the syntax ${PARAMS[@]} is bash-specific.
As per the answer here: https://stackoverflow.com/a/44811491/12072431
Try to wrap your commands in a script file with a shebang specifying the shell you'd like the commands to execute with.
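A minimal sketch of such a script (call it deploy.sh; the file names, region, and stack name are taken from the buildspec above, so adjust them to your setup):

#!/bin/bash
# Explicit bash shebang so array expansions like "${PARAMS[@]}" are available
set -euo pipefail

PARAMETERS_FILE="parameters.json"

# Turn the CloudFormation parameters file into Key=Value pairs for --parameter-overrides
PARAMS=($(jq -r '.Parameters[] | [.ParameterKey, .ParameterValue] | "\(.[0])=\(.[1])"' "${PARAMETERS_FILE}"))

aws cloudformation deploy \
  --template-file /codebuild/output/packaged.yaml \
  --region us-east-2 \
  --stack-name InitialSetup \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides "${PARAMS[@]}"

In the buildspec, the post_build commands would then reduce to something like chmod +x deploy.sh && ./deploy.sh.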
The expression ${PARAMS[@]} is not returning any value, which causes the error aws: error: argument --parameter-overrides: expected at least one argument. Review the code and resolve that, or remove the parameter.
I was able to resolve this issue by executing all the required steps in a shell script and making the script executable.

AWS ecr get-login generates docker login command with an unknown flag

When I generate the docker login command for my AWS ECR with the following command:
aws ecr get-login --region us-east-2
I get an output like:
docker login -u AWS -p [bigbass] -e none https://xxxx.dkr.ecr.us-east-2.amazonaws.com
The problem is the -e flag that throws an error:
unknown shorthand flag: 'e' in -e
See 'docker login --help'.
I first thought that the problem was a misconfigured aws configure, as I was using none as the "Default output format" option. I then fixed the format option in aws configure, but the error still happens.
They not so long ago changed their CLI. It looks like this now:
get-login
[--registry-ids <value> [<value>...]]
[--include-email | --no-include-email]
So simply replace -e none with --no-include-email.
See the corresponding documentation here.
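For example (a sketch assuming the same us-east-2 region; the $( ) wrapper simply executes the docker login command that get-login prints):

$(aws ecr get-login --no-include-email --region us-east-2)

The generated command then omits the -e flag, so current Docker clients accept it.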