I have the following step in an SSM document. The result of the call is JSON, so I wanted to parse it as a StringMap (which seems to be the correct type for it) instead of creating an output for each variable I want to reference.
I've tried referencing this as both:
{{ GetLoadBalancerProperties.Description.Scheme }}
and
{{ GetLoadBalancerProperties.Description[\"LoadBalancerName\"] }}
In both cases I get an error saying the variable was never defined
{
  "name": "GetLoadBalancerProperties",
  "action": "aws:executeAwsApi",
  "isCritical": true,
  "maxAttempts": 1,
  "onFailure": "step:deleteParseCloudFormationTemplate",
  "inputs": {
    "Service": "elb",
    "Api": "describe-load-balancers",
    "LoadBalancerNames": [
      "{{ ResourceId }}"
    ]
  },
  "outputs": [
    {
      "Name": "Description",
      "Selector": "$.LoadBalancerDescriptions[0]",
      "Type": "StringMap"
    }
  ]
}
This is the actual message:
Step fails when it is validating and resolving the step inputs. Failed to resolve input: GetLoadBalancerProperties.Description["LoadBalancerName"] to type String. GetLoadBalancerProperties.Description["LoadBalancerName"] is not defined in the Automation Document.. Please refer to Automation Service Troubleshooting Guide for more diagnosis details.
I believe the answer you were searching for is here:
https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-plugins.html#top-level-properties-type
To name a few examples: a Map corresponds to a Python dict, so if your output is a dict you should use StringMap in the SSM document, while a List corresponds to a Python list. So if your output is a list of dictionaries, the type you want to use is MapList.
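As a rough illustration in Python terms (the values below are made up for this example, not taken from the question), the mapping looks like this:
# Illustrative only: Python shapes vs. SSM Automation output types.
string_example = "internet-facing"                        # Type: String
string_list_example = ["subnet-aaa", "subnet-bbb"]        # Type: StringList
string_map_example = {                                    # Type: StringMap (a dict)
    "LoadBalancerName": "my-elb",
    "Scheme": "internet-facing",
}
map_list_example = [                                      # Type: MapList (a list of dicts)
    {"Key": "env", "Value": "dev"},
    {"Key": "team", "Value": "data"},
]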
In some cases it seems that you cannot. I was able to work around this issue by using a Python script in the SSM document to output the right type, but otherwise I believe the SSM document is not flexible enough to cover all cases.
The script I used:
- name: myMainStep
  action: aws:executeScript
  inputs:
    Runtime: python3.6
    Handler: myMainStep
    InputPayload:
      param: "{{ myPreviousStep.myOutput }}"
    Script: |-
      def myMainStep(events, context):
          # the InputPayload key is "param", so read the value under that name
          myOutput = events['param']
          for tag in myOutput:
              if tag["Key"] == "myKey":
                  return tag["Value"]
          return "myDefaultValue"
  outputs:
    - Name: output
      Selector: "$.Payload"
      Type: String
You can find out what myOutput should look like in the AWS web console under SSM > Automation > your execution (if you have already executed your automation once) > executeScript step > input parameters.
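For reference, here is a hypothetical example (the tag values are invented) of what the handler above receives when the step runs; each InputPayload key becomes a key of the events dict:
# Hypothetical events payload for the script above; the real content depends on
# what {{ myPreviousStep.myOutput }} resolves to in your execution.
events = {
    "param": [
        {"Key": "myKey", "Value": "some-value"},
        {"Key": "anotherKey", "Value": "other-value"},
    ]
}
# myMainStep(events, None) would return "some-value" here.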
Related
I encountered an error when running Cloud Workflow that's supposed to execute a parameterised query.
The Cloud Workflow error is as follow:
"message": "Query parameter 'run_dt' not found at [1:544]",
"reason": "invalidQuery"
The Terraform code that contains the workflow is like this:
resource "google_workflows_workflow" "workflow_name" {
name = "workflow"
region = "location"
description = "description"
source_contents = <<-EOF
main:
params: [input]
steps:
- init:
assign:
- project_id: ${var.project}
- location: ${var.region}
- run_dt: $${map.get(input, "run_dt")}
- runQuery:
steps:
- insert_query:
call: googleapis.bigquery.v2.jobs.insert
args:
projectId: ${var.project}
body:
configuration:
query:
query: ${replace(templatefile("../../bq-queries/query.sql", { "run_dt" = "input.run_dt" } ), "\n", " ")}
destinationTable:
projectId: ${var.project}
datasetId: "dataset-name"
tableId: "table-name"
create_disposition: "CREATE_IF_NEEDED"
write_disposition: "WRITE_APPEND"
allowLargeResults: true
useLegacySql: false
partitioning_field: "dt"
- the_end:
return: "SUCCESS"
EOF
}
The query in the query.sql file looks like this:
SELECT * FROM `project.dataset.table-name`
WHERE sv.dt=#run_dt
With the code above the Terraform deployment succeeded, but the workflow failed.
If i wrote "input.run_dt" without double quote, i'd encounter Terraform error:
A managed resource "input" "run_dt" has not been declared in the root module.
If I wrote it as $${input.run_dt}, I'd encounter this Terraform error:
This character is not used within the language.
If I wrote it as ${input.run_dt}, I'd encounter this Terraform error:
Expected the start of an expression, but found an invalid expression token.
How can I pass the query parameter of this BigQuery job in Cloud Workflow using Terraform?
Found the solution!
Add a queryParameters field in the subworkflow:
queryParameters:
  - name: "run_dt"
    parameterType: {"type": "DATE"}
    parameterValue: {"value": '$${run_dt}'}
Is there any way for me to pass custom variables as input to an AWS Step Function?
processData:
  name: ingest-data
  StartAt: Execute
  States:
    Execute:
      Type: Task
      Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-ingestIntnlData"
      Next: Check
    Check:
      Type: Choice
      Choices:
        - Variable: "$.results['finished']"
          BooleanEquals: false
          Next: Wait
        - Variable: "$.results['finished']"
          BooleanEquals: true
          Next: Notify
    Wait:
      Type: Wait
      SecondsPath: "$.waitInSeconds"
      Next: Execute
    Notify:
      Type: Task
      Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:#{AWS::StackName}-sendEMail"
      End: true
I have two different step functions which call the same lambda. I'm looking to pass a custom variable to the lambda to differentiate the calls made from the two step functions.
Something like a flag variable, or even a way to find out the name of the step function that is invoking the lambda, would also suffice.
Please help me out
We can build an object in a Pass state and pass it as input to the lambda.
"Payload.$": "$" simply passes through all the input.
{
  "StartAt": "Dummy Step 1 Output",
  "States": {
    "Dummy Step 1 Output": {
      "Type": "Pass",
      "Result": {
        "name": "xyz",
        "testNumber": 1
      },
      "ResultPath": "$.inputForMap",
      "Next": "invoke-lambda"
    },
    "invoke-lambda": {
      "End": true,
      "Retry": [
        {
          "ErrorEquals": [
            "Lambda.ServiceException",
            "Lambda.AWSLambdaException",
            "Lambda.SdkClientException"
          ],
          "IntervalSeconds": 2,
          "MaxAttempts": 6,
          "BackoffRate": 2
        }
      ],
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "arn:aws:lambda:us-east-1:111122223333:function:my-lambda",
        "Payload.$": "$"
      }
    }
  }
}
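To make the data flow concrete, here is a hypothetical sketch (not from the original answer) of what the invoked Lambda sees: because the Pass state writes its Result to $.inputForMap and the task forwards the whole state with "Payload.$": "$", the custom values arrive under inputForMap:
# Hypothetical Lambda handler reading the values set by the Pass state above.
def lambda_handler(event, context):
    flag = event["inputForMap"]["name"]                # "xyz"
    test_number = event["inputForMap"]["testNumber"]   # 1
    # branch on the flag to differentiate the two state machines
    return {"caller": flag, "testNumber": test_number}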
You can use the context object and pass the ExecutionId, like this:
{
  "Comment": "A Catch example of the Amazon States Language using an AWS Lambda Function",
  "StartAt": "nextstep",
  "States": {
    "nextstep": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:eu-central-1:1234567890:function:catcherror",
      "Parameters": {
        "executionId.$": "$$.Execution.Id"
      },
      "End": true
    }
  }
}
This gives you
arn:aws:states:us-east-1:123456789012:execution:stateMachineName:executionName
As you can see, it contains your state machine name, so you can make decisions based on it.
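Here is a minimal Python sketch (my own illustration, not from the original answer) of a handler that branches on the calling state machine, assuming the executionId parameter shown above is passed in:
# The execution ARN has the form:
# arn:aws:states:us-east-1:123456789012:execution:stateMachineName:executionName
def lambda_handler(event, context):
    execution_arn = event["executionId"]
    state_machine_name = execution_arn.split(":")[6]
    if state_machine_name == "ingest-data":    # hypothetical state machine name
        return {"caller": "ingest-data"}
    return {"caller": state_machine_name}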
You can create Lambda functions to use as steps within a workflow created with AWS Step Functions. For the Lambda function you use as the first step, you can pass values to it to meet your needs:
https://www.c-sharpcorner.com/article/passing-data-to-aws-lambda-function-and-invoking-it-using-aws-cli/
Then you can pass data between steps using Lambda functions as discussed here:
https://medium.com/#tturnbull/passing-data-between-lambdas-with-aws-step-functions-6f8d45f717c3
You can create Lambda functions in other supported programming languages as well such as Java, as discussed here:
https://github.com/awsdocs/aws-doc-sdk-examples/tree/master/javav2/usecases/creating_workflows_stepfunctions
As you can see, there are a lot of development options when using Lambda and AWS Step Functions.
The scope of the question:
AWS CodeBuild, ParameterOverrides section
Using Parameter Override Functions with CodePipeline Pipelines
1) I can pass a string to ParameterOverrides and then pass it to a nested stack, as described below:
- create a string parameter
- pass it to the ParameterOverrides section of the AWS CodeBuild project
- parse the parameter in the nested stack
Quoting the official documentation:
AWS::CloudFormation::Stack
If you use the Ref function to pass a parameter value to a nested
stack, comma-delimited list parameters must be of type String. In
other words, you cannot pass values that are of type
CommaDelimitedList to nested stacks.
2) I cannot figure out how to transform a List into a String inside the ParameterOverrides section with the configuration below:
- define a Parameter with a list type
ServiceSubnets:
  Type: List
  Description: Subnets associated with the service
- try to pass the parameter inside the ParameterOverrides section as a value to a nested stack, applying the Join function to transform it into a string:
ParameterOverrides: !Sub |
  {
    "ImageURI" : { "Fn::GetParam" : [ "BuildOutput", "imageDetail.json", "ImageURI" ] },
    "ApplicationRepoName": "${ApplicationRepoName}",
    "Cluster": "${Cluster}",
    "ListenerArn": "${ListenerArn}",
    "TargetGroup": "${TargetGroup}",
    "ServiceDesiredCount": "${ServiceDesiredCount}",
    "ServiceLoadBalancerPath": "${ServiceLoadBalancerPath}",
    "ServiceContainerPort": "${ServiceContainerPort}",
    "ServiceSecurityGroups": { "Fn::Join" : [ ",", "${ServiceSecurityGroups}" ] },
    "ServiceSubnets": { "Fn::Join" : [ ",", "${ServiceSubnets}" ] },
    "TaskContainerPort": "${TaskContainerPort}",
    "TaskCpu": "${TaskCpu}",
    "TaskMemory": "${TaskMemory}",
    "TaskExecutionRoleArn": "${TaskExecutionRoleArn}"
  }
So I expect that the List should be transformed into a String, and the String should then be passed and used in the nested stack; however, attempting to deploy such a stack returned an error:
Template error: variable ServiceSecurityGroups in Fn::Sub expression does not resolve to a string
My question:
Is it possible to use the Join function inside the ParameterOverrides section to transform a List into a String?
If yes, I would much appreciate an example that illustrates how to do this.
Thank you.
Unfortunately, you cannot use intrinsic functions within the Fn::Sub string itself (see the Fn::Sub Syntax Reference). You can still use other intrinsic functions such as Fn::Join within the Fn::Sub variable map. So your ParameterOverrides will be as follows instead:
ParameterOverrides: !Sub
  - |
    {
      "ImageURI" : { "Fn::GetParam" : [ "BuildOutput", "imageDetail.json", "ImageURI" ] },
      "ApplicationRepoName": "${ApplicationRepoName}",
      "Cluster": "${Cluster}",
      "ListenerArn": "${ListenerArn}",
      "TargetGroup": "${TargetGroup}",
      "ServiceDesiredCount": "${ServiceDesiredCount}",
      "ServiceLoadBalancerPath": "${ServiceLoadBalancerPath}",
      "ServiceContainerPort": "${ServiceContainerPort}",
      "ServiceSecurityGroups": "${KEY_NAME_1}",
      "ServiceSubnets": "${KEY_NAME_2}",
      "TaskContainerPort": "${TaskContainerPort}",
      "TaskCpu": "${TaskCpu}",
      "TaskMemory": "${TaskMemory}",
      "TaskExecutionRoleArn": "${TaskExecutionRoleArn}"
    }
  - KEY_NAME_1: !Join [ ",", [ !Ref ServiceSecurityGroups ] ]
    KEY_NAME_2: !Join [ ",", [ !Ref ServiceSubnets ] ]
If your ServiceSecurityGroups and ServiceSubnets are already lists, then remove the square brackets around the !Ref statements.
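For intuition, here is a rough Python analogue (placeholder subnet IDs, not part of the original answer) of what the Fn::Join in the variable map produces before Fn::Sub substitutes it into the JSON string:
# What Fn::Join [",", ...] does with a list-type parameter value, in Python terms.
service_subnets = ["subnet-aaa111", "subnet-bbb222"]   # value of a List-type parameter
key_name_2 = ",".join(service_subnets)                 # -> "subnet-aaa111,subnet-bbb222"
# Fn::Sub then replaces ${KEY_NAME_2} with that comma-delimited string,
# which a nested stack can accept as a String parameter.
print(key_name_2)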
I'm very new to airflow, and have been playing with it on GCP.
I'm modifying the example at https://cloud.google.com/composer/docs/how-to/using/triggering-with-gcf that shows how a DAG can be invoked by a cloud function.
That simple DAG just prints the content of dag_run.conf using the BashOperator.
I'm now trying to get the values of dag_run.conf['bucket'] and dag_run.conf['name'] in order to create an example where I use the CloudSqlImportOperator.
My problem is that I can't find a way to get those values passed as part of the body of the operator.
My understanding is that jinja templates get evaluated at the operators. My first attempt was to do:
import_body = {
    "importContext": {
        "fileType": "csv",
        "database": "dw",
        "uri": "gs://{{ dag_run.conf['bucket'] }}/{{ dag_run.conf['name'] }}",
        "csvImportOptions": {
            "table": "SHAKESPEARE",
            "columns": ["word", "word_count", "corpus", "corpus_date"]
        }
    }
}
And that fails because the jinja template section never gets evaluated, and the operator receives a literal "gs://{{ dag_run.conf['bucket'] }}/{{ dag_run.conf['name'] }}" instead.
I tried to pass a string instead:
import_body = """{
"importContext": {
"fileType": "csv",
"database": "dw",
"uri": "gs://{{ dag_run.conf['bucket'] }}/{{ dag_run.conf['name'] }}",
"csvImportOptions": {
"table": "SHAKESPEARE",
"columns": ["word", "word_count", "corpus", "corpus_date"]
}
}
}"""
And I'm still getting an error now: 'str' object has no attribute 'get'
I've seen examples using the PythonOperator and kwargs to fetch the contents, but so far no example of reading the contents of that dag_run.conf object inside the code.
What would be a proper way of doing that?
Cheers
In the example you mentioned, the jinja template is passed to the bash_command parameter, which is a templated field. If you look at the PythonOperator source code you can see that the only templated parameter is templates_dict, so to make Airflow evaluate {{ dag_run.conf['bucket'] }} you need to pass it through that variable. I am shooting in the dark here because you did not post the full code, but the solution should be something like the following:
Inside the Python code that your PythonOperator calls (the rendered values arrive via templates_dict when provide_context=True):
def my_func(**kwargs):
    # with provide_context=True, the rendered templates arrive in kwargs['templates_dict']
    templates_dict = kwargs["templates_dict"]
    import_body = {
        "importContext": {
            "fileType": "csv",
            "database": "dw",
            "uri": "gs://{}/{}".format(templates_dict["bucket"], templates_dict["name"]),
            "csvImportOptions": {
                "table": "SHAKESPEARE",
                "columns": ["word", "word_count", "corpus", "corpus_date"]
            }
        }
    }
    return import_body
When you define the PythonOperator in the DAG:
python_operator.PythonOperator(
    task_id='task_id',
    python_callable=my_func,
    provide_context=True,
    templates_dict={
        "bucket": "{{ dag_run.conf['bucket'] }}",
        "name": "{{ dag_run.conf['name'] }}"
    },
    dag=dag
)
Note that I referenced Airflow version 1.10.2, which I assume you are running because you tagged google-cloud-composer and this is the latest version supported.
If you look at 1.10.3 you can see that op_args and op_kwargs were added to the templated fields of the PythonOperator, so in the next version update you will be able to pass the values using those as well.
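Based on that note, here is a hypothetical sketch of what this could look like on Airflow 1.10.3+, where op_kwargs is also templated (task and argument names are placeholders):
# Hypothetical Airflow >= 1.10.3 variant: op_kwargs is a templated field there,
# so the rendered values arrive directly as function arguments.
def my_func(bucket, name, **kwargs):
    return "gs://{}/{}".format(bucket, name)

python_operator.PythonOperator(
    task_id='task_id',
    python_callable=my_func,
    op_kwargs={
        "bucket": "{{ dag_run.conf['bucket'] }}",
        "name": "{{ dag_run.conf['name'] }}",
    },
    dag=dag,
)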
I am encountering a strange problem with Ansible, and I'm sure it's just due to my lack of experience, as I am relatively new to Ansible (I've only been working with it for a couple of weeks).
In short, what I am trying to do is use the command module to run AWS CLI commands that list a user's AWS access keys and then delete them from that user. The reason I am using the CLI instead of the iam module is that I believe there is a bug in the iam module with regard to removing access keys: even when I specified state update, the access keys to remove, and access key state remove, it still would not remove the access keys, or make them inactive when I set the access key state to inactive.
The first task lists access keys for a given user and registers the output:
- name: List the access keys (if any) of the user(s) we just created
  vars:
    use_key: "{{ enable_access_keys }}"
  command: "aws iam list-access-keys --user-name {{ item }}"
  with_items:
    - "{{ iam_user_name_list }}"
  when: not use_key
  register: list_key_output
Keep in mind that iam_user_name_list only contains one user at the moment, which is why I access results the way I do. I know this needs to be changed in the future.
Since the stdout from list_key_output looks like this
"stdout": "{\n \"AccessKeyMetadata\": [\n {\n \"UserName\": \"other-guy\", \n \"Status\": \"Active\", \n \"CreateDate\": \"2017-06-29T18:45:04Z\", \n \"AccessKeyId\": \"removed\"\n }\n ]\n}",
I debug msg the stdout and register that to a variable test, to give it proper JSON format without the slashes and newlines so I can use json_query to get the key from the stdout. I am using json_query because AccessKeyId is not recognized as a key of the AccessKeyMetadata dictionary for whatever reason. (A plain-Python sketch of what this query does follows the two tasks below.)
- name: list keys stdout
  debug:
    msg: "{{ list_key_output.results[0].stdout }}"
  register: test

- name: test variable output
  debug:
    msg: "{{ test.msg.AccessKeyMetadata | json_query('[].AccessKeyId') }}"
At this point, I am successfully getting the access key from the stdout
ok: [127.0.0.1] => {
    "changed": false,
    "msg": [
        "correct access key here"
    ]
}
Now, I feed the access key to the delete CLI command like so
- name: Remove any access keys our new console user might have
  vars:
    use_key: "{{ enable_access_keys }}"
  command: "aws iam delete-access-key --access-key {{ test.msg.AccessKeyMetadata | json_query('[].AccessKeyId') }} --user-name other-guy"
  when: not use_key
  register: delete_key_output
This task fails due to an invalid access key being provided.
fatal: [127.0.0.1]: FAILED! => {"changed": true, "cmd": ["aws", "iam", "delete-access-key", "--access-key", "[u********************]", "--user-name", "other-guy"], "delta": "0:00:00.388902", "end": "2017-06-29 18:59:13.308230", "failed": true, "rc": 255, "start": "2017-06-29 18:59:12.919328", "stderr": "\nAn error occurred (ValidationError) when calling the DeleteAccessKey operation: The specified value for accessKeyId is invalid. It must contain only alphanumeric characters.", "stderr_lines": ["", "An error occurred (ValidationError) when calling the DeleteAccessKey operation: The specified value for accessKeyId is invalid. It must contain only alphanumeric characters."], "stdout": "", "stdout_lines": []}
As you can see, when I pass the access key to the command, [u is prepended to the front of the access key and ] is appended to the back of it.
Why is this happening? How can I achieve my goal without these three extra characters being added to the access key and making it invalid? I don't understand why this happens, because when I debug msg the access key the same way I provide it to the command, it only shows the access key without [u in front and ] behind.
Sorry for the long post but I felt I really had to describe the situation to be able to get help here. Thanks in advance for any answers!
To answer your exact question:
ok: [127.0.0.1] => {
    "changed": false,
    "msg": [
        "correct access key here"
    ]
}
Note the [ and ] in the msg – this means that you are printing a list that contains one element: the string correct access key here.
When you try to convert the list into a string, you get its Python representation: [u'correct access key here'].
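To illustrate in plain Python (with a made-up key): interpolating a one-element list into the command string uses the list's repr, which is where the extra characters come from:
# Why "[u" and "]" end up in the command (Python 2, which is what Ansible was using here).
keys = [u"AKIAIOSFODNN7EXAMPLE"]             # what json_query returns: a list
print("--access-key {}".format(keys))        # --access-key [u'AKIAIOSFODNN7EXAMPLE']
print("--access-key {}".format(keys[0]))     # --access-key AKIAIOSFODNN7EXAMPLE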
You need to get the first element of the list:
{{ test.msg.AccessKeyMetadata | json_query('[].AccessKeyId') | first }}
P.S. From my point of view, though, you are going about this the wrong way; try to fix your issues with the iam module instead.