Running a shell script in CloudFormation cfn-init - amazon-web-services

I am trying to run a script in the cfn-init command but it keeps timing out.
What am I doing wrong when running the startup-script.sh?
"WebServerInstance" : {
"Type" : "AWS::EC2::Instance",
"DependsOn" : "AttachGateway",
"Metadata" : {
"Comment" : "Install a simple application",
"AWS::CloudFormation::Init" : {
"config" : {
"files": {
"/home/ec2-user/startup_script.sh": {
"content": {
"Fn::Join": [
"",
[
"#!/bin/bash\n",
"aws s3 cp s3://server-assets/startserver.jar . --region=ap-northeast-1\n",
"aws s3 cp s3://server-assets/site-home-sprint2.jar . --region=ap-northeast-1\n",
"java -jar startserver.jar\n",
"java -jar site-home-sprint2.jar --spring.datasource.password=`< password.txt` --spring.datasource.username=`< username.txt` --spring.datasource.url=`<db_url.txt`\n"
]
]
},
"mode": "000755"
}
},
"commands": {
"start_server": {
"command": "./startup_script.sh",
"cwd": "~",
}
}
}
}
},
The file part works fine and it creates the file but it times out at running the command.
What is the correct way of executing a shell script?

You can tail /var/log/cfn-init.log to see what happens while the script runs.
The commands in CloudFormation init are run as the root user by default. The likely issue is that your script resides in /home/ec2-user/ while you are trying to run it from '~' (i.e. /root).
Give the absolute path (/home/ec2-user) in cwd and that should resolve it.
That said, the exact cause can only be confirmed from the logs.
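For illustration, a minimal sketch of the commands block with an absolute working directory, reusing the script path from the question:
"commands": {
  "start_server": {
    "command": "./startup_script.sh",
    "cwd": "/home/ec2-user"
  }
}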

Usually the init scripts are executed by root unless specified otherwise, so try giving the full path when running your startup script. You can also give cloudkast a try; it is an online CloudFormation template generator that makes it easier to create objects such as AWS::CloudFormation::Init.

Related

Cloudformation template completes deployment before UserData is finished

In the CloudFormation template I am deploying, I run a long-running operation in the UserData block.
It looks as follows:
"UserData": {
"Fn::Base64" : {
"Fn::Join" : [
"",
[
"/usr/local/bin/mylongrunningscript.sh"
]
]
}
}
The contents of the script are:
echo "UserData starting!" > /var/log/mycustomlog.log
sleep 300
echo "UserData finished!" >> /var/log/mycustomlog.log
The issue I am seeing is that the CloudFormation template appears to complete its deployment before the UserData script finishes running. I believe this is the case because if I am quick enough and SSH into the instance, I see something like the following:
$ cat /var/log/mycustomlog.log
UserData starting
which suggests that the UserData didn't finish running
How can I make sure that the UserData code execution is completed before the stack is in the "CreateComplete" status?
To ensure the CloudFormation template waits for the completion of the UserData script, you must do two things:
1. Add a CreationPolicy to the resource you are targeting (a virtual machine in my case).
2. Add logic to the script to signal its completion. This uses the cfn-signal utility, which you might have to install on your instance.
Here's how the template looks now:
"UserData": {
"Fn::Base64" : {
"Fn::Join" : [
"",
[
"/usr/local/bin/mylongrunningscript.sh"
]
]
}
},
"CreationPolicy": {
"ResourceSignal" : {
"Count": "1",
"Timeout": "PT10M"
}
}
The cfn-signal utility is used to signal the completion of the script:
"/home/ubuntu/aws-cfn-bootstrap-*/bin/cfn-signal -e $? ",
" --stack ", { "Ref": "AWS::StackName" },
" --resource MyInstance" ,
" --region ", { "Ref" : "AWS::Region" }, "\n"
See here for a Windows example.

Can't get CloudFormation template to install apps during deployment

I have a CloudFormation template to deploy a Windows server and run some PowerShell commands. I can get the server to deploy, but none of my PowerShell commands seem to run; they are getting skipped over.
I have been focusing on cfn-init to install my apps, with no luck.
{
  "AWSTemplateFormatVersion":"2010-09-09",
  "Description":"CHOCO",
  "Resources":{
    "MyEC2Instance1":{
      "Type":"AWS::EC2::Instance",
      "Metadata" : {
        "AWS::CloudFormation::Init": {
          "configSet" : {
            "config" : [
              "extract",
              "prereq",
              "install"
            ]
          },
          "extract" : {
            "command" : "powershell.exe -Command Set-ExecutionPolicy -Force remotesigned"
          },
          "prereq" : {
            "command" : "powershell.exe -Command Invoke-WebRequest -Uri https://xxxxx.s3.us-east-2.amazonaws.com/chocoserverinstall.ps1 -OutFile C:chocoserverinstall.ps1"
          },
          "install" : {
            "command" : "powershell.exe -File chocoserverinstall.ps1"
          }
        }
      },
      "Properties":{
        "AvailabilityZone":"us-east-1a",
        "DisableApiTermination":false,
        "ImageId":"ami-06bee8e1000e44ca4",
        "InstanceType":"t3.medium",
        "KeyName":"xxx",
        "SecurityGroupIds":[
          "sg-01d044cb1e6566ef0"
        ],
        "SubnetId":"subnet-36c3a56b",
        "Tags":[
          {
            "Key":"Name",
            "Value":"CHOCOSERVER"
          },
          {
            "Key":"Function",
            "Value":"CRISPAPPSREPO"
          }
        ],
        "UserData":{
          "Fn::Base64":{
            "Fn::Join":[
              "",
              [
                "<script>\n",
                "cfn-init.exe -v ",
                " --stack RDSstack",
                " --configsets config ",
                " --region us-east-1",
                "\n",
                "<script>"
              ]
            ]
          }
        }
      }
    }
  }
}
I'm expecting CloudFormation to run through my metadata commands when provisioning this template.
The cfn-init command requires the -c or --configsets option to specify "a comma-separated list of configsets to run (in order)".
See:
cfn-init - AWS CloudFormation
AWS::CloudFormation::Init - AWS CloudFormation
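As an illustration only, here is a sketch of what the UserData call could look like with the config set plus stack, resource and region filled in (MyEC2Instance1 is the logical ID from the template above; note also that the metadata key is documented as configSets rather than configSet):
"UserData":{
  "Fn::Base64":{
    "Fn::Join":[
      "",
      [
        "<script>\n",
        "cfn-init.exe -v ",
        " --stack ", { "Ref" : "AWS::StackName" },
        " --resource MyEC2Instance1",
        " -c config",
        " --region ", { "Ref" : "AWS::Region" }, "\n",
        "</script>"
      ]
    ]
  }
}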

How to Execute bash code in EC2 instance from Automation Document deployed with CloudFormation

I'm trying to execute bash code via aws:runCommand. I have taken and adapted the snippet below from the AWS repo for deploying a golden image pipeline.
What you see below is deployed via a CloudFormation Stack. An AWS::SSM::Document object is formed, passing various inputs. This is one of the mainSteps of my automation document. I'm trying to update the OS of my instance.
{
  "name": "updateOSSoftware",
  "action": "aws:runCommand",
  "maxAttempts": 3,
  "timeoutSeconds": 3600,
  "onFailure": "Abort",
  "inputs": {
    "DocumentName": "AWS-RunShellScript",
    "InstanceIds": [
      "{{startInstances.InstanceIds}}"
    ],
    "Parameters": {
      "commands": [
        "export https_proxy=http://myproxy.com:myport",
        "export https_proxy='{{OutBoundProxy}}'",
        "set -e",
        "[ -x \"$(which wget)\" ] && get_contents='wget $1 -O -'",
        "[ -x \"$(which curl)\" ] && get_contents='curl -s -f $1'",
        "eval $get_contents https://aws-ssm-downloads-eu-west-1.s3.amazonaws.com/scripts/aws-update-linux-instance > /tmp/aws-update-linux-instance",
        "chmod +x /tmp/aws-update-linux-instance",
        "/tmp/aws-update-linux-instance --pre-update-script '{{PreUpdateScript}}' --post-update-script '{{PostUpdateScript}}' --include-packages '{{IncludePackages}}' --exclude-packages '{{ExcludePackages}}' 2>&1 | tee /tmp/aws-update-linux-instance.log"
      ]
    }
  }
}
Once I execute the document from Systems Manager, I SSH into the EC2 instance and try to echo $http_proxy; the variable is unset, which suggests the code wasn't run.
How can I run bash code?
An environment variable set with the 'export' command exists in the current shell only.
When you log in to the host via SSH you get a different shell, so you will not see the variable that was set via Run Command.
A workaround could be to add the command to the bash profile, e.g.
echo "export https_proxy=http://myproxy.com:myport" >> ~/.bash_profile

Setting Environment Variables per step in AWS EMR

I am unable to set environment variables for my Spark application. I am using AWS EMR to run a Spark application, which is really a framework I wrote in Python on top of Spark that runs different Spark jobs depending on the environment variables present. So in order to start the right job, I need to pass the environment variable into spark-submit. I have tried several methods, but none of them works: when I print the value of the environment variable inside the application, it comes back empty.
To launch the EMR cluster I am using the following AWS CLI command:
aws emr create-cluster --applications Name=Hadoop Name=Hive Name=Spark --ec2-attributes '{"KeyName":"<Key>","InstanceProfile":"<Profile>","SubnetId":"<Subnet-Id>","EmrManagedSlaveSecurityGroup":"<Group-Id>","EmrManagedMasterSecurityGroup":"<Group-Id>"}' --release-label emr-5.13.0 --log-uri 's3n://<bucket>/elasticmapreduce/' --bootstrap-action 'Path="s3://<bucket>/bootstrap.sh"' --steps file://./.envs/steps.json --instance-groups '[{"InstanceCount":1,"InstanceGroupType":"MASTER","InstanceType":"c4.xlarge","Name":"Master"}]' --configurations file://./.envs/Production.json --ebs-root-volume-size 64 --service-role EMRRole --enable-debugging --name 'Application' --auto-terminate --scale-down-behavior TERMINATE_AT_TASK_COMPLETION --region <region>
Now Production.json looks like this:
[
  {
    "Classification": "yarn-env",
    "Properties": {},
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "FOO": "bar"
        }
      }
    ]
  },
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.executor.memory": "2800m",
      "spark.driver.memory": "900m"
    }
  }
]
And steps.json like this :
[
  {
    "Name": "Job",
    "Args": [
      "--deploy-mode","cluster",
      "--master","yarn","--py-files",
      "s3://<bucket>/code/dependencies.zip",
      "s3://<bucket>/code/__init__.py",
      "--conf", "spark.yarn.appMasterEnv.SPARK_YARN_USER_ENV=SHAPE=TRIANGLE",
      "--conf", "spark.yarn.appMasterEnv.SHAPE=RECTANGLE",
      "--conf", "spark.executorEnv.SHAPE=SQUARE"
    ],
    "ActionOnFailure": "CONTINUE",
    "Type": "Spark"
  }
]
When I try to access the environment variable inside my __init__.py code, it simply prints empty. As you can see, I am running the step with Spark on YARN in cluster mode. I went through these links to get to this point:
How do I set an environment variable in a YARN Spark job?
https://spark.apache.org/docs/latest/configuration.html#environment-variables
https://spark.apache.org/docs/latest/configuration.html#runtime-environment
Thanks for any help.
Use the yarn-env classification to pass environment variables to the worker nodes.
Use the spark-env classification to pass environment variables to the driver when using client deploy mode. With cluster deploy mode, use yarn-env.
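As a sketch, the two classifications side by side, reusing the FOO variable from the question:
[
  {
    "Classification": "yarn-env",
    "Properties": {},
    "Configurations": [
      { "Classification": "export", "Properties": { "FOO": "bar" } }
    ]
  },
  {
    "Classification": "spark-env",
    "Properties": {},
    "Configurations": [
      { "Classification": "export", "Properties": { "FOO": "bar" } }
    ]
  }
]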
To work with EMR clusters I use AWS Lambda, in a project that builds an EMR cluster when a flag is set in the condition.
Inside this project we define variables that can be set in the Lambda and then replaced with their values. To do this we use the AWS API; the method to use is AWSSimpleSystemsManagement.getParameters.
Then build a map like val parametersValues = parameterResult.getParameters.asScala.map(k => (k.getName, k.getValue)) to get a tuple of each parameter's name and value.
E.g. ${BUCKET} = "s3://bucket-name/"
This means you only have to write ${BUCKET} in your JSON instead of spelling out the whole path.
Once you have replaced the values, the step JSON can look like this:
[
  {
    "Name": "Job",
    "Args": [
      "--deploy-mode","cluster",
      "--master","yarn","--py-files",
      "${BUCKET}/code/dependencies.zip",
      "${BUCKET}/code/__init__.py",
      "--conf", "spark.yarn.appMasterEnv.SPARK_YARN_USER_ENV=SHAPE=TRIANGLE",
      "--conf", "spark.yarn.appMasterEnv.SHAPE=RECTANGLE",
      "--conf", "spark.executorEnv.SHAPE=SQUARE"
    ],
    "ActionOnFailure": "CONTINUE",
    "Type": "Spark"
  }
]
I hope this can help you to solve your problem.

cfn-init specify custom json

Is there a way to tell cfn-init to use a custom JSON file loaded from disk? That way I can quickly troubleshoot problems, otherwise the only way to change the AWS::CloudFormation::Init section is to delete the stack and create it anew.
I'd rather just make changes to my template.json, then tell something like cfn-init -f C:\template.json ... so it uses C:\template.json instead of getting the template from 169.254.169.254 or however it gets it.
It turns out cfn-init does accept a file, but the file needs to contain just the contents of the resource's "Metadata" section at its root.
So if we save this to template.json:
{
  "AWS::CloudFormation::Init": {
    "config": {
      "files": {
        "C:\\Users\\Administrator\\Desktop\\test-file": {
          "source": "https://s3.us-east-2.amazonaws.com/example-bucket/example-file"
        }
      }
    }
  },
  "AWS::CloudFormation::Authentication": {
    "S3AccessCreds": {
      "type": "S3",
      "buckets": ["example-bucket"],
      "roleName": "s3access"
    }
  }
}
Then we can execute cfn-init -v --region us-east-2 template.json.
Note: do not include the stack or resource options; if you use cfn-init -v -s my_stack -r my_instance --region us-east-2 template.json you will get:
Error: You cannot specify more than one input source for metadata
If you put the entire template in the file instead of just the Metadata contents at the root, you will get:
Could not find 'AWS::CloudFormation::Init' key in template.json