ssm automation document input in AWS-RunShellScript not substituting variable

I am trying to run a command in bash where part of the command is substituted from a variable that I created in a previous step; however, the string substitution is not working. I have tried many variations of this with single quotes, double quotes, etc., but cannot get it to work.
mainSteps:
  - name: getIps
    action: 'aws:invokeLambdaFunction'
    timeoutSeconds: 1200
    maxAttempts: 1
    onFailure: Abort
    inputs:
      FunctionName: Automation-GetIPs
      Payload: '{"asg": "Staging_web_ASG"}'
    outputs:
      - Name: asg_ips
        Selector: $.Payload.IPs
        Type: StringList
  - name: updatelsync
    action: 'aws:runCommand'
    timeoutSeconds: 1200
    inputs:
      DocumentName: AWS-RunShellScript
      InstanceIds:
        - '{{ InstanceID }}'
      Parameters:
        commands:
          - 'echo {{getIps.asg_ips}} > /root/asg_ips.test'
In the above code, I set asg_ips in step 1, whose OutputPayload is as follows:
{"Payload":{"IPs": ["172.xx.x.xxx", "172.xx.x.xxx"]},"StatusCode":200}
but the input of the second step shows up as follows:
{"commands":["echo {{getIps.asg_ips}} > /root/asg_ips.test"]}
I need it to show something like this:
{"commands":["echo ["172.xx.x.xxx", "172.xx.x.xxx"] > /root/asg_ips.test"]}

Based on the comments:
The issue was caused by incorrect use of outputs in the aws:invokeLambdaFunction SSM action. Specifically, the Lambda action does not have an outputs attribute, as shown in the linked documentation:
- name: invokeMyLambdaFunction
  action: aws:invokeLambdaFunction
  maxAttempts: 3
  timeoutSeconds: 120
  onFailure: Abort
  inputs:
    FunctionName: MyLambdaFunction
As a side note, the outputs attribute is, by contrast, valid in aws:executeAwsApi.
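For comparison, a minimal aws:executeAwsApi sketch with outputs (untested; the step name, ASG name, and selector are illustrative):
- name: describeAsg
  action: aws:executeAwsApi
  inputs:
    Service: autoscaling
    Api: DescribeAutoScalingGroups
    AutoScalingGroupNames:
      - Staging_web_ASG
  outputs:
    - Name: instanceIds
      Selector: '$.AutoScalingGroups[0].Instances..InstanceId'
      Type: StringList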
Therefore, the solution is to refer directly to the Payload returned by the Lambda action:
mainSteps:
  - name: getIps
    action: 'aws:invokeLambdaFunction'
    timeoutSeconds: 1200
    maxAttempts: 1
    onFailure: Abort
    inputs:
      FunctionName: Automation-GetIPs
      Payload: '{"asg": "Staging_web_ASG"}'
  - name: updatelsync
    action: 'aws:runCommand'
    timeoutSeconds: 1200
    inputs:
      DocumentName: AWS-RunShellScript
      InstanceIds:
        - '{{ InstanceID }}'
      Parameters:
        commands:
          - 'echo {{getIps.Payload}} > /root/asg_ips.test'
The side effect is that asg_ips.test now needs to be post-processed to extract the IP values.
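A minimal post-processing sketch (untested; note that echoing the payload through the shell may strip the JSON quotes, so the grep fallback just scrapes anything shaped like an IPv4 address):
# If the payload survived as valid JSON, pull the IPs out with jq:
jq -r '.IPs[]' /root/asg_ips.test
# Otherwise, scrape the addresses directly:
grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' /root/asg_ips.test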

Related

Want to send cloud-formation output through an email

I am writing a CloudFormation template where I am running two PowerShell scripts. Now I want to fetch the output of both scripts and send it to an email address that is already given as a CloudFormation parameter.
Here is the code:
AWSTemplateFormatVersion: '2010-09-09'
Description: Test Document
Resources:
  TestUpgradeDocument: # logical resource ID, required by CloudFormation
    Type: AWS::SSM::Document
    Properties:
      DocumentType: Command
      Name: "Test Upgrade"
      Content:
        schemaVersion: '2.2'
        description: "Test Upgrade"
        parameters:
          Emails:
            type: String
            description: |-
              enter the email address to send the overall output
        mainSteps:
          - action: "aws:runPowerShellScript"
            name: "DriverUpgrade"
            precondition:
              StringEquals: ["platformType", "Windows"]
            inputs:
              runCommand:
                []
              timeoutSeconds: 3600
              onFailure: Continue
              maxAttempts: 1
              isCritical: False
              nextStep: SecondDriverUpgrade
          - action: "aws:runPowerShellScript"
            name: "SecondDriverUpgrade"
            precondition:
              StringEquals: ["platformType", "Windows"]
            inputs:
              runCommand:
                []
              timeoutSeconds: 3600
              onFailure: Continue
              isCritical: False
Could you use the AWS CLI command ses send-email inside the PowerShell script, or as part of the Run Command parameters?
You will certainly have to configure the SES service beforehand, in the CloudFormation template or the console.
You could retrieve the email address from Parameter Store using the AWS CLI command ssm get-parameters.
To store the output of the scripts you could use a local file, Parameter Store, or, for example, a DynamoDB table, etc.
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ses/send-email.html
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ssm/get-parameters.html
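A rough sketch of the idea inside the PowerShell step (untested; the parameter name, log path, and addresses are illustrative, and the SES sender identity must already be verified):
# Read the recipient from Parameter Store:
$email  = aws ssm get-parameters --names "/test/notification-email" --query "Parameters[0].Value" --output text
# Wherever the two scripts wrote their combined output:
$output = Get-Content -Raw "C:\temp\upgrade-output.log"
# Build the SES message as JSON to avoid CLI shorthand quoting pitfalls:
@{ Subject = @{ Data = "Driver upgrade report" }
   Body    = @{ Text = @{ Data = $output } } } |
  ConvertTo-Json -Depth 4 | Set-Content message.json
aws ses send-email --from "noreply@example.com" --destination "ToAddresses=$email" --message file://message.json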

Creating a SSM Composite Document that pulls Parameters from Parameter Store - AWS

The task is simple: whenever an EC2 instance is launched with one tag key:value, I want it to install specific software. Whenever an EC2 instance is launched with a different tag key:value, I want it to install different software.
I understand that I can create 2 different associations in State Manager that use runCommand (AWS-RunRemoteScript) to install software based on the tags, but the goal is to have 1 composite document that can do this.
Any help / guidance would be appreciated!
You can achieve that using SSM Automation documents - https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-branchdocs.html
However, you will probably need to do something like this:
In State Manager, use AWS-RunDocument.
This document should execute an SSM Automation document (your composite document).
Your composite document should look like this (I didn't validate this template, and I assume it won't work without a few days of debugging):
schemaVersion: '0.3'
parameters:
  InstanceId:
    type: String
mainSteps:
  - name: DescribeEc2
    action: 'aws:executeScript'
    inputs:
      Runtime: python3.7
      Handler: script_handler
      Script: |
        import boto3

        def script_handler(events, context):
            ec2_instance = boto3.client('ec2').describe_instances(
                InstanceIds=[events["instance_id"]],
            )["Reservations"][0]["Instances"][0]
            # Treat this as an example: parse the instance's tags here and
            # decide which software should be installed on it.
            return {"to_be_installed": "result"}
      InputPayload:
        instance_id: '{{ InstanceId }}'
    outputs:
      - Name: result
        Selector: '$.Payload.to_be_installed'
        Type: String
  - name: WhatToInstall
    action: aws:branch
    inputs:
      Choices:
        - NextStep: InstallSoft1
          Variable: '{{DescribeEc2.result}}'
          StringEquals: soft_1
        - NextStep: InstallSoft2
          Variable: '{{DescribeEc2.result}}'
          StringEquals: soft_2
  - name: InstallSoft1
    action: aws:runCommand
    inputs:
      DocumentName: AWS-RunShellScript
      InstanceIds:
        - '{{ InstanceId }}'
      Parameters:
        commands:
          ...
  - name: InstallSoft2
    action: aws:runCommand
    inputs:
      DocumentName: AWS-RunShellScript
      InstanceIds:
        - '{{ InstanceId }}'
      Parameters:
        commands:
          ...
To be honest, you will run into a lot of trouble with such a solution (IAM and SSM-specific issues), so I would recommend using EventBridge -> Lambda function (which decides which Document/Automation should be run) -> SSM-RunDocument (executed directly from the Lambda function).
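A rough sketch of that chain's Lambda (the "Software" tag key, its values, and the document names are hypothetical; assumes the function is triggered by an EventBridge rule for EC2 instance state-change events):
import boto3

ec2 = boto3.client("ec2")
ssm = boto3.client("ssm")

# Hypothetical mapping from the value of the "Software" tag to an SSM document.
DOCUMENT_FOR_TAG = {
    "soft_1": "Install-Software-1",
    "soft_2": "Install-Software-2",
}

def handler(event, context):
    # The EC2 "running" state-change event carries the instance id.
    instance_id = event["detail"]["instance-id"]
    instance = ec2.describe_instances(InstanceIds=[instance_id])[
        "Reservations"][0]["Instances"][0]
    tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}

    document = DOCUMENT_FOR_TAG.get(tags.get("Software", ""))
    if document:
        ssm.send_command(InstanceIds=[instance_id], DocumentName=document)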

Cloudcustodian - filter by tag name for on/off hours

I have the following policy:
policies:
  - name: stop-after-hours
    resource: ec2
    filters:
      - tag:Schedule: "OfficeHours"
    actions:
      - stop
    mode:
      type: periodic
      schedule: "rate(10 minutes)"
      role: arn:aws:iam::XXXXXX:role/LambdaRoleCloudCustodian
This correctly identifies my EC2 instance tagged with "Schedule: OfficeHours":
$> custodian run --dry-run -s out shutdown-out-of-office.yml
custodian.policy:INFO policy:stop-after-hours-cologne resource:ec2 region:eu-central-1 count:1 time:0.00
However, when I want to set the offhour:
policies:
  - name: stop-after-hours
    resource: ec2
    filters:
      - tag:Schedule: "OfficeHours"
      - type: offhour
        offhour: 11
    actions:
      - stop
    mode:
      type: periodic
      schedule: "rate(10 minutes)"
      role: arn:aws:iam::XXXXXX:role/LambdaRoleCloudCustodian
The instance is not identified anymore.
2022-07-05 12:01:04,541: custodian.policy:INFO policy:stop-after-hours-cologne resource:ec2 region:eu-central-1 count:0 time:0.78
I also tried
- type: value
  key: tag:Schedule
  value: OfficeHours
which doesn't work.
Any idea on how I can filter on tag name AND value here?
So, after fiddling around for quite some time, I finally found the solution.
Here's the complete policy:
# Stop instances tagged with "Schedule: OfficeHours" at offhour
- name: stop-after-hours
  resource: ec2
  filters:
    - tag:Schedule: OfficeHours
    - State.Name: running
    - type: offhour
      tag: Schedule
      weekends: true
      default_tz: cet
      offhour: 10
  actions:
    - stop
  mode:
    type: periodic
    schedule: "rate(10 minutes)"
    role: arn:aws:iam::XXXXXXXXX:role/LambdaRoleCloudCustodian
Some things to keep in mind:
Here, under the offhour filter, I have a tag attribute whose value is Schedule. This tells Cloud Custodian to look for any instance that has the tag Schedule, whatever its value. If you do not specify this, you need to tag your instances with the default offhour tag, which is maid_offhours.
I also have tag:Schedule: OfficeHours, which filters instances based on the value of the Schedule tag.
If you want to test your policy with a dry run, you must test within the configured hour. So, if my offhour is set to 10, the dry run will only be able to fetch the resource if it is run between 10:00am and 10:59am.
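As a side note, the Cloud Custodian offhours documentation also lets individual resources override the policy-level schedule with an expression in the tag value itself, for example (illustrative; note that such a value would then no longer match the tag:Schedule: OfficeHours value filter above, so the two approaches don't combine directly):
# Tag value on the instance, not policy YAML:
# down weekdays at 19:00, back up at 07:00, CET.
Schedule: "off=(M-F,19);on=(M-F,7);tz=cet"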
I hope this helps some people; I find the Cloud Custodian documentation quite difficult to understand.

Azure DevOps Pipeline: same template twice in one stage

In my main pipeline, in one stage, I call the same (deployment) template twice with just slightly different data:
# pipeline.yml
- stage: dev
  condition: and(succeeded(), eq('${{ parameters.environment }}', 'dev'))
  variables:
    getCommitDate: $[ stageDependencies.prepare_date.set_date.outputs['setCommitDate.rollbackDate'] ]
  jobs:
    - template: mssql/jobs/liquibase.yml@templates
      parameters:
        command: update
        username: $(username_dev)
        password: $(password_dev)
        environment: exampleEnv
        databaseName: exampleDB
        databaseIP: 123456789
        context: dev
        checkoutStep:
          bash: git checkout ${{parameters.commitHash}} -- ./src/main/resources/objects
    - template: mssql/jobs/liquibase.yml@templates
      parameters:
        command: rollbackToDate $(getCommitDate)
        username: $(username_dev)
        password: $(password_dev)
        environment: exampleEnv
        databaseName: exampleDB
        databaseIP: 123456789
        context: dev
# template.yml
parameters:
  - name: command
    type: string
  - name: environment
    type: string
  - name: username
    type: string
  - name: password
    type: string
  - name: databaseName
    type: string
  - name: databaseIP
    type: string
  - name: context
    type: string
  - name: checkoutStep
    type: step
    default:
      checkout: self
jobs:
  - deployment: !MY PROBLEM!
    pool:
      name: exampleName
      demands:
        - agent.name -equals example
    environment: ${{ parameters.environment }}
    container: exampleContainer
    strategy:
      runOnce:
        deploy:
          steps:
            ...
My problem is that the deployment cannot have the same name twice.
It is not possible to use ${{parameters.command}} to distinguish the deployment names, because it contains forbidden characters, and only ${{parameters.command}} differs between the two calls.
My question is whether it is possible to distinguish the name of a deployment in any other way than by passing an extra parameter (e.g. jobName:). I have tried various conditions and predefined variables, but without success.
Additionally, I should add dependsOn so that the second template is guaranteed to run after the first.
This is not possible, because getCommitDate, and thus the command parameter of your second template call, contains a runtime expression, while job names require compile-time expressions. So if you used command as the job name, at compile time it would literally be rollbackToDate $(getCommitDate).
To solve this issue, the job identifier should be left empty in the template:
- job: # Empty identifier
More information is available HERE.
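Applied to the deployment job in the template above, that would look roughly like this (a sketch; verify that deployment jobs accept an empty identifier the same way plain jobs do):
jobs:
  - deployment: # empty identifier; Azure DevOps generates a unique job name
    pool:
      name: exampleName
    environment: ${{ parameters.environment }}
    strategy:
      runOnce:
        deploy:
          steps:
            - ${{ parameters.checkoutStep }}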

AWS step functions - Transform {AWS::AccountId}::StepFunctionsYamlTransform failed without an error message

I am writing a CloudFormation template to create an AWS Step Functions state machine. The following part of my template is causing the error:
AWSTemplateFormatVersion: 2010-09-09
Transform:
  - StepFunctionsYamlTransform
StepFunctionsStateMachine:
  Type: AWS::StepFunctions::StateMachine
  Properties:
    StateMachineName: MyStack
    RoleArn: !GetAtt StateMachineRole.Arn
    DefinitionStringYaml: !Sub
      - |
        Comment: My-Stack-workflow
        StartAt: LambdaToStart
        TimeoutSeconds: 43200
        States:
          LambdaToStart:
            Type: Task
            Resource: "${LambdaToStartArn}"
            Next: WaitToWriteInS3
          WaitToWriteInS3:
            Type: Wait
            Seconds: 5
            Next: Batch_Job_1
          Batch_Job_1:
            Type: Task
            Next: LambdaForTriggerEmrJob
            Resource: arn:aws:states:::batch:submitJob.sync
            Parameters:
              JobName: "${BatchJob1}"
              JobDefinition: "${BatchJob1DefinitionArn}"
              JobQueue: arn:aws:batch:${AWS::Region}:${AWS::AccountId}:job-queue/${QueueName}
          LambdaForTriggerEmrJob:
            Type: Task
            Resource: "${LambdaForEmrArn}"
            Next: WaitFoEmrState
          WaitFoEmrState:
            Type: Wait
            Seconds: 90
            Next: CheckEmrState
          CheckEmrState:
            Type: Task
            Resource: "${ClusterStateCheckArn}"
            InputPath: "$.input.cluster"   # Values coming from lambda
            ResultPath: "$.input.cluster"  # Values coming from lambda
            Retry: *LambdaRetryConfig
            Next: IsClusterRunning
          IsClusterRunning:
            Type: Choice
            Default: WaitFoEmrState
            Choices:
              - Variable: "$.input.cluster.state"
                StringEquals: FAILED
                Next: StateMachineFailure
              - Variable: "$.input.cluster.state"  # Values coming from lambda
                StringEquals: SUCCEEDED
                Next: FinalBatchJob
          StateMachineFailure:
            Type: Fail
          FinalBatchJob:
            Type: Task
            Resource: arn:aws:states:::batch:submitJob.sync
            Parameters:
              JobName: "${FinalBatch}"
              JobDefinition: "${FinalBatchDefinitionArn}"
              JobQueue: arn:aws:batch:${AWS::Region}:${AWS::AccountId}:job-queue/${QueueName
            End: true
      - LambdaToStartArn: !GetAtt LambdaToStart.Arn
        LambdaForEmrArn: !GetAtt LambdaForEmr.Arn
        BatchJob1DefinitionArn: !Ref BatchJob1Definition
        FinalBatchDefinitionArn: !Ref FinalBatchDefinition
        BatchJob1: !Sub ${AWS::StackName}-batch-1
        FinalBatch: !Sub ${AWS::StackName}-final-batch
        ClusterStateCheckArn: !Sub arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:cluster-state
It returns the following error:
Failed to create the changeset: Waiter ChangeSetCreateComplete failed:
Waiter encountered a terminal failure state Status: FAILED. Reason:
Transform {AWS::AccountId}::StepFunctionsYamlTransform failed without an
error message.
Can anyone help me recognise the solution to this? I can't debug much, since it fails without an error message. TIA
AWS CloudFormation errors are sometimes quite weird, and it's difficult to debug them. But I found the error. It was the JobQueue line in the FinalBatchJob state, JobQueue: arn:aws:batch:${AWS::Region}:${AWS::AccountId}:job-queue/${QueueName, and one can easily see that I missed the closing } at the end. So it was a syntax error.
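The corrected line:
JobQueue: arn:aws:batch:${AWS::Region}:${AWS::AccountId}:job-queue/${QueueName}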