I have a simple AWS Systems Manager Automation that is designed to rotate the local Windows password for systems located at externalized sites. During Step 7 of the automation, AWS calls and executes a PowerShell command document that validates the rotated password and outputs a string value of either True or False in JSON format. This string value is then passed back into the automation and sent to CloudWatch.
I am having an issue where the True or False value passed into the automation in Step 7 via the validPassword variable is not getting resolved when passed into Step 8. Instead, only the Automation variable identifier ({{CheckNewPassword.validPassword}}) is passed.
Does anyone know why this is happening? I assume it has something to do with the command document not producing output in a format that Systems Manager likes.
Any assistance would be appreciated.
Step 7 Output
{
  "validPassword": "True"
}
{"Status":"Success","ResponseCode":0,"Output":"{
\"validPassword\": \"True\"
}
","CommandId":"6419ba15-b0f3-4af4-86a2-c4693639fc9e"}
Step 8 Input Passed from Step 7
{"passwordValid":"{{CheckNewPassword.validPassword}}","siteCode":"LBZ1-20","num_failedValidation":1}
AWS Automation Document -- Steps 7 and 8
- name: CheckNewPassword
  action: 'aws:runCommand'
  inputs:
    DocumentName: SPIN_CheckPass
    InstanceIds:
      - '{{nodeID}}'
    Parameters:
      password:
        - '{{GenerateNewPassword.newPassword}}'
  outputs:
    - Name: validPassword
      Selector: validPassword
      Type: String
    - Name: dataType
      Selector: dataType
      Type: String
- name: RecordPasswordStatus
  action: 'aws:invokeLambdaFunction'
  inputs:
    InvocationType: RequestResponse
    FunctionName: SPIN-CheckPassMetric
    InputPayload:
      passwordValid: '{{CheckNewPassword.validPassword}}'
      siteCode: '{{siteCode}}'
      num_failedValidation: 1
AWS Command Document (SPIN_CheckPass)
{
  "schemaVersion": "2.2",
  "description": "Check Rotated Password",
  "parameters": {
    "password": {
      "type": "String",
      "description": "The new password used in the password rotation."
    }
  },
  "mainSteps": [
    {
      "action": "aws:runPowerShellScript",
      "name": "rotatePassword",
      "inputs": {
        "runCommand": [
          "function checkPass {",
          "  param (",
          "    $password",
          "  )",
          "  $username = 'admin'",
          "  $computer = $env:COMPUTERNAME",
          "  Add-Type -AssemblyName System.DirectoryServices.AccountManagement",
          "  $obj = New-Object System.DirectoryServices.AccountManagement.PrincipalContext('machine',$computer)",
          "  [String] $result = $obj.ValidateCredentials($username, $password)",
          "",
          "  $json = ",
          "  @{",
          "    validPassword = $result",
          "  } | ConvertTo-Json",
          "",
          "  return $json",
          "}",
          "checkPass('{{password}}')"
        ],
        "runAsElevated": true
      }
    }
  ]
}
I've tried changing the data type of the validPassword variable to a bool, and I've tried changing the format of the command document from JSON to YAML, neither of which has worked.
I've also attempted to capture a different output element from the command document into a variable, which likewise results in the unresolved variable name being passed to the inputs of subsequent steps.
AWS Support confirmed that this is a bug they are now tracking.
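In the meantime, one possible workaround (a sketch only, not something AWS confirmed): insert an intermediate aws:executeScript step that parses the predefined Output of CheckNewPassword, assuming that predefined output does resolve, and exposes validPassword as an ordinary String output. The step name ParsePasswordCheck is made up.
- name: ParsePasswordCheck
  action: 'aws:executeScript'
  inputs:
    Runtime: python3.8
    Handler: parse_output
    InputPayload:
      # Raw stdout of the SPIN_CheckPass run, i.e. the small JSON blob shown above
      raw: '{{CheckNewPassword.Output}}'
    Script: |-
      import json

      def parse_output(events, context):
          # Turn the stringified JSON into a real value for later steps
          return {'validPassword': json.loads(events['raw'])['validPassword']}
  outputs:
    - Name: validPassword
      Selector: $.Payload.validPassword
      Type: String
Step 8's InputPayload could then reference '{{ParsePasswordCheck.validPassword}}' instead.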
Related
I encountered an error when running a Cloud Workflow that's supposed to execute a parameterised query.
The Cloud Workflow error is as follows:
"message": "Query parameter 'run_dt' not found at [1:544]",
"reason": "invalidQuery"
The Terraform code that contains the workflow looks like this:
resource "google_workflows_workflow" "workflow_name" {
name = "workflow"
region = "location"
description = "description"
source_contents = <<-EOF
main:
params: [input]
steps:
- init:
assign:
- project_id: ${var.project}
- location: ${var.region}
- run_dt: $${map.get(input, "run_dt")}
- runQuery:
steps:
- insert_query:
call: googleapis.bigquery.v2.jobs.insert
args:
projectId: ${var.project}
body:
configuration:
query:
query: ${replace(templatefile("../../bq-queries/query.sql", { "run_dt" = "input.run_dt" } ), "\n", " ")}
destinationTable:
projectId: ${var.project}
datasetId: "dataset-name"
tableId: "table-name"
create_disposition: "CREATE_IF_NEEDED"
write_disposition: "WRITE_APPEND"
allowLargeResults: true
useLegacySql: false
partitioning_field: "dt"
- the_end:
return: "SUCCESS"
EOF
}
The query in the query.sql file looks like this:
SELECT * FROM `project.dataset.table-name`
WHERE sv.dt=#run_dt
With the code above, the Terraform deployment succeeded, but the workflow failed.
If I wrote input.run_dt without the double quotes, I'd encounter this Terraform error:
A managed resource "input" "run_dt" has not been declared in the root module.
If I wrote it as $${input.run_dt}, I'd encounter this Terraform error:
This character is not used within the language.
If I wrote it as ${input.run_dt}, I'd encounter this Terraform error:
Expected the start of an expression, but found an invalid expression token.
How can I pass the query parameter of this BigQuery job in Cloud Workflow using Terraform?
Found the solution!
Add a queryParameters field in the subworkflow:
queryParameters:
  - parameterType: {"type": "DATE"}
    parameterValue: {"value": '$${run_dt}'}
    name: "run_dt"
I have a simple Lambda function that for now just returns the event. So when I test with this, I return event.body.url and event.body.webpSupport. That works in the AWS Lambda test environment.
TEST
{
  "body": {
    "url": "https://someURL.com",
    "webpSupport": true
  }
}
RESPONSE
Test Event Name
test
Response
{
  "url": "https://someURL.com",
  "webpSupport": true
}
I have no idea how to pass this query string to the API Gateway staging URL. All the documentation from AWS shows an old interface that has since changed. I see "Add URL Query String Parameters" in those instructions but don't see that anywhere in the new interface.
CloudWatch logs show me this:
{
  "errorType": "TypeError",
  "errorMessage": "The \"url\" argument must be of type string. Received undefined",
  "code": "ERR_INVALID_ARG_TYPE",
  "stack": [
    "TypeError [ERR_INVALID_ARG_TYPE]: The \"url\" argument must be of type string. Received undefined",
    "    at validateString (internal/validators.js:124:11)",
    "    at Url.parse (url.js:170:3)",
    "    at Object.urlParse [as parse] (url.js:157:13)",
    "    at dispatchHttpRequest (/var/task/node_modules/axios/lib/adapters/http.js:91:22)",
    "    at new Promise (<anonymous>)",
    "    at httpAdapter (/var/task/node_modules/axios/lib/adapters/http.js:46:10)",
    "    at dispatchRequest (/var/task/node_modules/axios/lib/core/dispatchRequest.js:52:10)",
    "    at async Promise.all (index 0)",
    "    at async Runtime.exports.handler (/var/task/index.js:4:28)"
  ]
}
After examining the event object I see that all GET query parameters are passed in event.queryStringParameters. Leaving this on the site in case anyone else stumbles across this question.
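For anyone landing here, a minimal sketch of a handler that reads those parameters, assuming the function sits behind a Lambda proxy integration (the /resize path, parameter handling, and response shape are made up for illustration):
// index.js – sketch for a Lambda proxy integration behind API Gateway.
// e.g. GET https://<api-id>.execute-api.<region>.amazonaws.com/staging/resize?url=https://someURL.com&webpSupport=true
const axios = require("axios");

exports.handler = async (event) => {
  // With a proxy integration, query string values arrive here as strings
  const { url, webpSupport } = event.queryStringParameters || {};

  if (!url) {
    return {
      statusCode: 400,
      body: JSON.stringify({ error: "url query string parameter is required" }),
    };
  }

  const response = await axios.get(url);
  return {
    statusCode: 200,
    body: JSON.stringify({
      url,
      webpSupport: webpSupport === "true", // query string values are strings, not booleans
      upstreamStatus: response.status,
    }),
  };
};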
I have the following step in an SSM document. The result of the call is JSON, so I wanted to parse it as a StringMap (which seems to be the correct type for it) instead of creating an output for each variable I want to reference.
I've tried referencing this as both:
{{ GetLoadBalancerProperties.Description.Scheme }}
and
{{ GetLoadBalancerProperties.Description[\"LoadBalancerName\"] }}
In both cases I get an error saying the variable was never defined
{
  "name": "GetLoadBalancerProperties",
  "action": "aws:executeAwsApi",
  "isCritical": true,
  "maxAttempts": 1,
  "onFailure": "step:deleteParseCloudFormationTemplate",
  "inputs": {
    "Service": "elb",
    "Api": "describe-load-balancers",
    "LoadBalancerNames": [
      "{{ ResourceId }}"
    ]
  },
  "outputs": [
    {
      "Name": "Description",
      "Selector": "$.LoadBalancerDescriptions[0]",
      "Type": "StringMap"
    }
  ]
}
This is the actual message:
Step fails when it is validating and resolving the step inputs. Failed to resolve input: GetLoadBalancerProperties.Description["LoadBalancerName"] to type String. GetLoadBalancerProperties.Description["LoadBalancerName"] is not defined in the Automation Document.. Please refer to Automation Service Troubleshooting Guide for more diagnosis details.
I believe the answer you were searching for is in here:
https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-plugins.html#top-level-properties-type
Just to name a few examples:
The Map type is a Python dict, hence if your output is a dict you should use StringMap in the SSM document,
while the List type is the same as a Python list.
So if your output is a list of dictionaries, the type you want to use is MapList.
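Applied to the describe-load-balancers call from the question, a sketch of what that could look like (the output name Descriptions is made up; this only changes the declared type and does not by itself make nested fields like LoadBalancerName referenceable, which is what the next answer addresses):
"outputs": [
  {
    "Name": "Descriptions",
    "Selector": "$.LoadBalancerDescriptions",
    "Type": "MapList"
  }
]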
In some cases it seems that you cannot. I was able to work around this issue by using a Python script in the SSM document to output the right type, but otherwise I believe the SSM document is not flexible enough to cover all cases.
The script I used:
- name: myMainStep
  action: aws:executeScript
  inputs:
    Runtime: python3.6
    Handler: myMainStep
    InputPayload:
      param: "{{ myPreviousStep.myOutput }}"
    Script: |-
      def myMainStep(events, context):
          myOutput = events['param']
          for tag in myOutput:
              if tag["Key"] == "myKey":
                  return tag["Value"]
          return "myDefaultValue"
  outputs:
    - Name: output
      Selector: "$.Payload"
      Type: String
You can find out what myOutput should look like in the AWS web console > SSM > Automation > your execution (if you have already executed your automation once) > executeScript step > input parameters.
I am creating a CloudFront service for my organization. I am trying to create a job where a user can execute a Jenkins job to update a distribution.
I would like the user to be able to input a Distribution ID and then have Jenkins auto-fill a secondary set of parameters. Jenkins would need to grab the configuration for that distribution (via Groovy or other means) to do that auto-fill. The user would then select which configuration options they would like to change and hit submit. The job would then make the requested updates (via a Python script).
Can this be done through some combination of plugins (or any other means)?
// the first input requests the DistributionID from a user
stage 'Input Distribution ID'
def distributionId = input(
    id: 'distributionId', message: "Cloudfront Distribution ID", parameters: [
        [$class: 'TextParameterDefinition',
         description: 'Distribution ID', name: 'DistributionID'],
    ])
echo ("using DistributionID=" + distributionId)

// Second
// Sample data - you'd need to get the real data from somewhere here
// (see the sketch after this answer); assume data will be in distributionData after this
def map = [
    "1": [ name: "1", data: "data_1"],
    "2": [ name: "2", data: "data_2"],
    "other": [ name: "other", data: "data_other"]
]
def distributionData
if (distributionId in map.keySet()) {
    distributionData = map[distributionId]
} else {
    distributionData = map["other"]
}

// The third stage uses the gathered data, puts these into default values
// and requests another user input.
// The user now has the choice of altering the values or leaving them as-is.
stage 'Configure Distribution'
def userInput = input(
    id: 'userInput', message: 'Change Config', parameters: [
        [$class: 'TextParameterDefinition', defaultValue: distributionData.name,
         description: 'Name', name: 'name'],
        [$class: 'TextParameterDefinition', defaultValue: distributionData.data,
         description: 'Data', name: 'data']
    ])

// Fourth - Now, here's the actual code to alter the Cloudfront Distribution
echo ("Name=" + userInput['name'])
echo ("Data=" + userInput['data'])
Create a new pipeline and copy/paste this into the pipeline script section
Play around with it
I can easily imagine this code could be implemented in a much better way, but at least it's a start.
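As one way to replace the hard-coded sample map with live data, a rough sketch (assumptions: the agent has the AWS CLI and credentials available, and the Pipeline Utility Steps plugin supplies readJSON):
// Sketch: fetch the live distribution configuration instead of using the sample map.
def raw = sh(
    script: "aws cloudfront get-distribution-config --id ${distributionId}",
    returnStdout: true
).trim()
def config = readJSON(text: raw)

// get-distribution-config returns { "ETag": ..., "DistributionConfig": {...} };
// the ETag is needed later when submitting an update.
def distributionData = [
    name: config.DistributionConfig.Comment,
    data: config.ETag
]
From there the rest of the pipeline (the 'Configure Distribution' input and the Python update script) can stay as it is.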
I'm new to the whole map-reduce concept, and I'm trying to perform a simple map-reduce function.
I'm currently working with Couchbase Server as my NoSQL DB.
I want to get a list of all my types:
key: 1, value: null
key: 2, value: null
key: 3, value: null
Here are my documents:
{
  "type": "1",
  "value": "1"
}
{
  "type": "2",
  "value": "2"
}
{
  "type": "3",
  "value": "3"
}
{
  "type": "1",
  "value": "4"
}
What I've been trying to do is:
Write a map function:
function (doc, meta) {
  emit(doc.type, 0);
}
Using built-in reduce function:
_count
But I'm not getting the expected result.
How can I get all types?
UPDATE
Please notice that the types are in different documents, and I know that reduce works on a document and doesn't execute outside of it.
By default it will reduce all key groups. The feature you want is called group_level:
This is the equivalent of reduce=true:
~ $ curl 'http://localhost:8092/so/_design/dev_test/_view/test?group_level=0'
{"rows":[
{"key":null,"value":4}
]
}
But here is how you can get reduction by the first level of the key
~ $ curl 'http://localhost:8092/so/_design/dev_test/_view/test?group_level=1'
{"rows":[
{"key":"1","value":2},
{"key":"2","value":1},
{"key":"3","value":1}
]
}
There is also a blog post about this: http://blog.couchbase.com/understanding-grouplevel-view-queries-compound-keys
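For reference, the development design document behind those calls would look roughly like this (a sketch; the dev_test/test names simply mirror the curl URLs above):
{
  "views": {
    "test": {
      "map": "function (doc, meta) { emit(doc.type, 0); }",
      "reduce": "_count"
    }
  }
}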
There is also a corresponding group-level option in the Couchbase admin console when querying the view.