AWS Systems Manager: Enumerate a StringList Parameter in a State Manager Document

State Manager Documents allow us to define input parameters of type StringList. How can we enumerate each value in a StringList within the document definition?
E.g., imagine a StringList input parameter that defines a list of commands to run. How could we create a new aws:runShellScript action for each command in the list?
The pseudo-document below shows what I'm trying to achieve: creating a new action for each value in a StringList.
schemaVersion: "2.2"
description: "Updates services configured for the current role"
parameters:
  ListOfCommands:
    type: "StringList"
    description: "A list of commands to execute"
mainSteps:
  # For $C in ListOfCommands:
  - action: "aws:runShellScript"
    name: "InstallConsul"
    inputs:
      runCommand:
        - "{{$C}}"

According to AWS support, this is not currently possible. There is no way to enumerate any values in a StringList within the document itself.
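One possible workaround (a sketch of my own, not something AWS support suggested): collapse the list into a single comma-separated String parameter and iterate inside one aws:runShellScript step. The parameter name and the comma delimiter are assumptions; commands containing commas would need a different separator.
schemaVersion: "2.2"
description: "Runs each command from a comma-separated list"
parameters:
  CommandList:
    type: "String"
    description: "Comma-separated commands to execute"
mainSteps:
  - action: "aws:runShellScript"
    name: "RunEachCommand"
    inputs:
      runCommand:
        - |
          # SSM substitutes the parameter text before the shell runs
          cmds="{{CommandList}}"
          # split the substituted text on commas (POSIX sh compatible)
          IFS=','
          for c in $cmds; do
            eval "$c"
          done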

Related

How To use case insensitive pattern tag Key value in AWS SSM Document run parameter?

We have an SSM document run parameter as below:
RuntimeParameters:
  targetKey: 'tag:{{ app }}'
The parameter value is passed as below:
app:
  type: String
  allowedPattern: '^[aA]pplication$'
  default: Application
Now I want to allow application (lowercase a), along with Application, as the key value for app. Is it possible to achieve this using regex?
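For what it's worth, the character class in that pattern should already do the trick; a sketch (assuming allowedPattern follows standard regex character-class semantics):
app:
  type: String
  # [aA] accepts either case for the first letter, so both
  # "Application" and "application" pass validation
  allowedPattern: '^[aA]pplication$'
  default: Application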

AWS Proton parameters - clarify how the schema.yaml parameters are consumed in CF template

After going through the docs and examples I haven't clarified where exactly the parameters from the schema.yaml file are used.
Using the AWS code example here: https://github.com/aws-samples/aws-proton-sample-templates/blob/main/lambda-crud-svc/service/schema/schema.yaml
Pertinent section of the schema.yaml file:
schema:
  format:
    openapi: "3.0.0"
  service_input_type: "CrudServiceInput"
  pipeline_input_type: "PipelineInputs"
  types:
    CrudServiceInput:
      type: object
      description: "Input properties for a Lambda backed CRUD API. When given a resource name, this input will be used to generate Create, Read, Update and Delete API methods and lambdas."
      properties:
        resource_name:
          type: string
          description: "The resource to generate a CRUD API for"
          minLength: 1
          maxLength: 50
          default: "greeting"
        resource_handler:
          type: string
          description: "The handler path to find the CRUD methods for this API"
          minLength: 1
          maxLength: 50
          default: "index"
        lambda_memory:
          type: number
          description: "The size of your Lambda functions in MB"
          default: 512
          minimum: 1
          maximum: 3008
...
I would expect that in the cloudformation.yaml file I would be able to reference {{service_input_type.resource_name}}, but it is referred to as {{service.resource_name}}.
Assume that Proton somehow maps the service namespace to the values in service_input_type.
However, when you use that logic for the "lambda_memory" parameter in the same service_input_type object, it doesn't work, because the template file refers to it as service_instance.lambda_memory.
Can anyone clarify the following:
How are the schema.yaml parameters consumed in the cloudformation.yaml template?
Further, how do the "xx-spec.yaml" files play into the mix? I assume they are merged into the service template when creating the instance, but the parameter naming convention is also different from the template parameters above.
You should always use service_instance as the namespace, with two exceptions:
- if you are parametrizing a pipeline template, you can use service;
- if you are referring to a name, you can use the service_name and service_instance_name variables (no namespace).
A Proton service is ultimately a collection of service instances with a pipeline. When Proton provisions infrastructure, it will create a CloudFormation stack for every instance, so the parameters are always applied at the instance level. The pipeline is an exception because it doesn't belong to a single environment, and so in certain cases, you might need to refer to the service as a whole.
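To make the namespaces concrete, here is an illustrative fragment of a cloudformation.yaml. The property values, the placeholder role ARN, and the exact input paths are assumptions based on the question (newer Proton schema versions may nest these under service_instance.inputs, so check the sample repo):
Resources:
  CrudFunction:
    Type: AWS::Lambda::Function
    Properties:
      # per-instance parameters, resolved from the service_instance namespace
      Handler: "{{ service_instance.resource_handler }}.handler"
      MemorySize: "{{ service_instance.lambda_memory }}"
      # name variables take no namespace
      FunctionName: "{{ service_name }}-{{ service_instance_name }}-crud"
      Runtime: nodejs18.x
      Role: "arn:aws:iam::123456789012:role/crud-function-role"  # placeholder ARN
      Code:
        ZipFile: "exports.handler = async () => {};"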
As per the xx-spec.yaml files - Those are actually the reflections of the parameters that the developer might provide. If you use the Proton UX to create a new service, Proton will output a file like that and store it to reflect the parameters for that particular service.

AWS MSK Cloud Formation Tags problems

When creating an AWS::MSK::Cluster with CloudFormation, I am not able to set Tags in the usual way:
Tags:
  - Key: Name
    Value: !Ref Identifier
Because of this error:
Property validation failure: [Value of property {/Tags} does not match type {Map}]
As of the time of writing, the documentation states that, instead of the usual Type: List of Tag, I should use: Type: Json.
Also the same documentation states that:
You can specify tags in JSON or in YAML, depending on which format you use for your template
After further investigation (and help from AWS support), the working example (only on creation) looks like this:
Tags:
  Name: !Ref Identifier
Additionally, tags cannot be modified (the docs actually state that a tags change requires replacement); when we tried, a slightly confusing error showed up:
CloudFormation cannot update a stack when a custom-named resource requires replacing. Rename kafka-eu-west-1-dev and update the stack again.
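For reference, a minimal sketch of the map-style tags in context (the cluster properties here are placeholders, not from the original template):
Resources:
  KafkaCluster:
    Type: AWS::MSK::Cluster
    Properties:
      ClusterName: !Ref Identifier
      KafkaVersion: "2.8.1"              # placeholder version
      NumberOfBrokerNodes: 3
      BrokerNodeGroupInfo:
        InstanceType: kafka.m5.large
        ClientSubnets: !Ref SubnetIds    # assumed List<AWS::EC2::Subnet::Id> parameter
      # a plain JSON map, not the usual list of Key/Value objects
      Tags:
        Name: !Ref Identifier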

Use private ips in google dataflow job being created via google provided template

I'm trying to set up a Dataflow job via Deployment Manager using the Google-provided template Cloud_PubSub_to_Avro.
To do this I had to register dataflow as a type provider, like this:
resources:
- name: 'register-dataflow'
  action: 'gcp-types/deploymentmanager-v2beta:deploymentmanager.typeProviders.insert'
  properties:
    name: 'dataflow'
    descriptorUrl: 'https://dataflow.googleapis.com/$discovery/rest?version=v1b3'
    options:
      inputMappings:
      - fieldName: Authorization
        location: HEADER
        value: >
          $.concat("Bearer ", $.googleOauth2AccessToken())
Then I created my job template, which looks something like:
resources:
- name: "my-topic-to-avro"
  type: 'my-project-id/dataflow:dataflow.projects.locations.templates.launch'
  properties:
    projectId: my-project-id
    gcsPath: "gs://dataflow-templates/latest/Cloud_PubSub_to_Avro"
    jobName: "my-topic-to-avro"
    location: "europe-west1"
    parameters:
      inputTopic: "projects/my-project-id/topics/my-topic"
      outputDirectory: "gs://my-bucket/avro/my-topic/"
      avroTempDirectory: "gs://my-bucket/avro/tmp/my-topic/"
Now I'm trying to understand how I can tell my job not to use public IPs. From this it looks like I need to set --usePublicIps=false, but I can't figure out where to place this parameter, or whether this is even possible.
A possible workaround I found here would be to remove the access-config, but again I haven't been able to figure out how to do this, if it's possible at all.
Is what I'm trying to do possible through provided templates or will I have to use the dataflow API?
Any help appreciated.
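One avenue worth trying (an assumption, not a confirmed answer): the templates.launch request body takes a RuntimeEnvironment, which has an ipConfiguration field. If Deployment Manager forwards the extra properties into the request body, a sketch like this might work:
resources:
- name: "my-topic-to-avro"
  type: 'my-project-id/dataflow:dataflow.projects.locations.templates.launch'
  properties:
    projectId: my-project-id
    gcsPath: "gs://dataflow-templates/latest/Cloud_PubSub_to_Avro"
    jobName: "my-topic-to-avro"
    location: "europe-west1"
    parameters:
      inputTopic: "projects/my-project-id/topics/my-topic"
      outputDirectory: "gs://my-bucket/avro/my-topic/"
      avroTempDirectory: "gs://my-bucket/avro/tmp/my-topic/"
    environment:
      # RuntimeEnvironment field; WORKER_IP_PRIVATE disables public worker IPs
      ipConfiguration: "WORKER_IP_PRIVATE"
      # private-IP workers usually need a subnetwork with Private Google Access
      subnetwork: "regions/europe-west1/subnetworks/my-subnet"  # hypothetical subnetwork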

How to store the entered parameters from cloudformation stack?

What I want to accomplish is to store the entered parameters from a CloudFormation stack.
For example: imagine having two parameters, param1 and param2.
I want to store the entered values in DynamoDB, an RDS database, etc.
I thought of using an SNS notification, but unfortunately the notification's payload looks as follows:
StackId='arn:aws:cloudformation:us-east-1:accountId:stack/rfdsf/b6df0100-fd18-11e7-b3ab-500c2893c0d2'
Timestamp='2018-01-19T13:00:24.774Z'
EventId='b6df9d40-fd18-11e7-b3ab-500c2893c0d2'
LogicalResourceId='rfdsf'
Namespace='accountId'
PhysicalResourceId='arn:aws:cloudformation:us-east-1:accountId:stack/rfdsf/b6df0100-fd18-11e7-b3ab-500c2893c0d2'
PrincipalId='accountId'
ResourceProperties='null'
ResourceStatus='CREATE_IN_PROGRESS'
ResourceStatusReason='User Initiated'
ResourceType='AWS::CloudFormation::Stack'
StackName='rfdsf'
ClientRequestToken='Console-CreateStack-774eec95-c976-434c-b43b-ad3d295a0b9b'
As you can see, there are no entered values anywhere in it.
Is it possible to store the entered parameters in a DB?
As suggested by @Rodrigigo M, you can save the params into the SSM Parameter Store.
Description: "Store an entered stack parameter in SSM Parameter Store"
Parameters:
  param1:
    Type: String
Resources:
  BasicParameter:
    Type: "AWS::SSM::Parameter"
    Properties:
      Name: "param1"
      Type: "String"
      # persist the value the user entered for param1
      Value: !Ref param1
      Description: "SSM Parameter holding the entered value of param1."
      AllowedPattern: "^[a-zA-Z]{1,10}$"
Also, if you want to save these into a DB, you can create a Lambda function to read them and store them in DynamoDB or RDS.
In your CloudFormation template there is an Outputs section. You can output any value that was brought into the stack, as long as you explicitly specify which parameters you want to output.
Those values will be visible in the Outputs tab of the CloudFormation console. If you want to move them to a database, such as DynamoDB, you can use the DescribeStacks API call to get all the output values for any stack.
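A minimal sketch of echoing entered parameters through Outputs (param1/param2 are the hypothetical names from the question; the placeholder resource is only there because a stack needs at least one resource):
Parameters:
  param1:
    Type: String
  param2:
    Type: String
Resources:
  # harmless no-op resource so the template is deployable on its own
  Placeholder:
    Type: AWS::CloudFormation::WaitConditionHandle
Outputs:
  Param1Value:
    Value: !Ref param1
  Param2Value:
    Value: !Ref param2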