Can someone help me out with handling a dynamic "ParameterValue" in a parameters.json file?
I'm running "cloudformation create-stack" and passing a parameters.json file via --parameters. A few "ParameterValue" entries in the file need to be dynamic, for example a timestamp, or an index value appended from a loop. How can I modify the parameters.json file to handle dynamic values?
An alternative I could go with is to not use the parameters.json file at all and instead pass the key/value pairs directly to the create-stack command inside the loop in the script, like below:
--parameters ParameterKey="XYZ",ParameterValue="${someval}${index}"
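For reference, a rough sketch of that inline approach (the stack name, template file, and loop bounds are made up here):
for index in 1 2 3; do
  aws cloudformation create-stack \
    --stack-name "my-stack-${index}" \
    --template-body file://template.yaml \
    --parameters ParameterKey="XYZ",ParameterValue="${someval}${index}"
done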
I would create a parameters.json.template file to hold the values in their parameterized form, like you show:
[
{
"ParameterKey": "XYZ",
"ParameterValue": "{someval}{index}"
},
{
"ParameterKey": "ABC",
"ParameterValue": "staticval-{suffix}"
}
]
I am assuming you are doing this on the CLI, based on the use of the --parameters flag. In that case, I would create a script that merges the template file with the values (into a generated file) and calls the create-stack CLI command after that.
Something like this on linux:
#! /bin/bash
# create output file from template
cp templates/parameters.json.template generated/parameters.json
# merge dynamic values into templated file
sed -i "s/{someval}/$SOME_VAL/g" generated/parameters.json
sed -i "s/{index}/$INDEX/g" generated/parameters.json
sed -i "s/{suffix}/$SUFFIX/g" generated/parameters.json
aws cloudformation create-stack ... --parameters file://generated/parameters.json ...
This of course assumes your script has access to your dynamic values.
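For example, a wrapper along these lines could provide them (the script name merge_and_create.sh and the values are just placeholders):
# export the dynamic values so the merge script can see them
export SOME_VAL="myapp" SUFFIX="blue"
for INDEX in 1 2 3; do
  export INDEX
  ./merge_and_create.sh   # the template-merge + create-stack script above
done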
I need to check the list of aws_vpc_endpoint_service_allowed_principal for a specific aws_vpc_endpoint_service.
The aws_vpc_endpoint_service data source does not return the list of allowed_principals.
Does anyone know how I can retrieve that information?
Since a data source for that resource does not exist, you can use the external data source with a custom script to query the required information.
Here's an example script (get_vpc_endpoint_service_permissions.sh) that fetches the required information:
#!/bin/bash
# Fetch the allowed principals for the given VPC endpoint service
sep=$(aws ec2 describe-vpc-endpoint-service-permissions --service-id vpce-svc-03d5ebb7d9579a2b3 --query 'AllowedPrincipals')
# The external data source only accepts a flat JSON object with string
# values, so wrap the resulting JSON array in an object as a single string
jq -n --arg sep "$sep" '{"sep":$sep}'
and here's how you consume it in Terraform:
data "external" "vpc_endpoint_service_permissions" {
program = ["bash", "get_vpc_endpoint_service_permissions.sh"]
}
output "vpc_endpoint_service_permissions" {
value = data.external.vpc_endpoint_service_permissions.result.sep
}
data.external.vpc_endpoint_service_permissions.result.sep contains the output of the bash script: a JSON array encoded as a string (the external data source only returns string values), which you can decode with jsondecode() and access/manipulate as needed.
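You can also sanity-check the script on its own before wiring it into Terraform (assuming your AWS credentials are configured):
# run the script directly; the expected output shape is {"sep":"[ ... ]"}
bash get_vpc_endpoint_service_permissions.sh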
I can get the details with
$ aws lambda get-function --function-name random_number
{
"Configuration": {
"FunctionName": "random_number",
"FunctionArn": "arn:aws:lambda:us-east-2:193693970645:function:random_number",
"Runtime": "ruby2.5",
"Role": "arn:aws:iam::193693970645:role/service-role/random_number-role-8cy8a1a7",
...
But how can I get just a couple of fields, like the function name?
I tried:
$ aws lambda get-function --function-name random_number --query "Configuration[*].[FunctionName]"
but I get null
Your overall approach is correct, you just need to adjust the query:
$ aws lambda get-function --function-name random_number \
--query "Configuration.FunctionName" --output text
I also added a parameter to convert the result to text, which makes processing a bit easier.
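The same query syntax extends to multiple fields if you need them; for example (the choice of fields is just an illustration):
aws lambda get-function --function-name random_number \
  --query "Configuration.[FunctionName, Runtime]" --output text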
Here is a simple awk (standard Linux GNU awk) script that does the trick: extract the value of quoted field #3, only for lines containing /FunctionName/.
awk 'BEGIN {FPAT="\"[^\"]+"}/FunctionName/{print substr($3,2)}'
Piped with your initial command:
$ aws lambda get-function --function-name random_number | awk 'BEGIN {FPAT="\"[^\"]+"}/FunctionName/{print substr($3,2)}'
One way to achieve that is by using jq. Since jq operates on JSON, the command output must be JSON.
From the docs:
jq is like sed for JSON data - you can use it to slice and filter and
map and transform structured data with the same ease that sed, awk,
grep and friends let you play with text.
Usage example:
aws lambda get-function --function-name test --output json | jq -r '.Configuration.FunctionName'
Use get-function-configuration as in the following:
aws lambda get-function-configuration --function-name MyFunction --query "[FunctionName]"
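Note that the bracketed query returns a one-element JSON array; if you want the bare value, something like this should work:
aws lambda get-function-configuration --function-name MyFunction \
  --query "FunctionName" --output text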
I have >100 files where each line is a JSON object. It looks something like this (no commas and no []):
{"one":"one","two":{"tree":...}}
{"one":"one","two":{"tree":...}}
...
{"one":"one","two":{"tree":...}}
To be able to use aws firehose put-record-batch, the file needs to be in the format:
[
{
"Data": blob
},
{
"Data": blob
},
...
]
I want to put all of these files to AWS Firehose from the terminal.
I'm looking to write a shell script that looks something like this:
for file in files
do
aws firehose put-record-batch --delivery-stream-name <name> --records file://$file
done
So there are 2 questions:
How to transform the files into the applicable format?
And how to iterate through all the files?
for file in *.json;
do
  jq -s 'map({Data: tostring})' "${file}" > "${file}.tmp" && mv "${file}.tmp" "${file}"
done
This will read each JSON file in the current directory, wrap every line's object as the string value of a "Data" key, collect the results into an array, and save it back to the same file.
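To see what the filter does, here is a quick check on a made-up two-line sample (-c just compacts the output onto one line):
printf '%s\n' '{"one":"a"}' '{"one":"b"}' | jq -sc 'map({Data: tostring})'
# prints: [{"Data":"{\"one\":\"a\"}"},{"Data":"{\"one\":\"b\"}"}]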
Or, if you do not have jq, here is an alternate way using python's json module.
for file in *.json; do
  python -c 'import json, sys; print(json.dumps([{"Data": line.rstrip("\n")} for line in sys.stdin]))' < "${file}" > "${file}.tmp" && mv "${file}.tmp" "${file}"
done
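After the transformation, the loop from the question should work more or less as-is; a rough sketch (the stream name is a placeholder):
for file in *.json; do
  # PutRecordBatch accepts at most 500 records per call, so very large
  # files may need to be split first
  aws firehose put-record-batch --delivery-stream-name my-stream --records "file://${file}"
done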
Let's say I have all the parameters needed to create a CloudFormation stack in a JSON file, but I want to override some parameters from the parameters file. Is this possible?
aws cloudformation create-stack \
--stack-name sample-stack \
--template-body file://sample-stack.yaml \
--parameters file://sample-stack.json \
--capabilities CAPABILITY_IAM \
--disable-rollback \
--region us-east-1 \
--output json && \
aws cloudformation wait stack-create-complete \
--stack-name sample-stack
So let's say there are 10 parameters in the sample-stack.json file, but there are 2 parameters I want to override from that file.
Is this possible?
Thanks
This isn't available in the AWS CLI right now, but there is a feature request on GitHub. For now you'll need to script something to generate your overrides prior to creating the stack. Another potential option is to store your values in something that you can dynamically reference, such as Parameter Store, and update them via the API prior to stack creation.
If you want to update a stack and specify only the list of parameters that changed, you can have a look at this shell script that I wrote.
Usage:
▶ bash update_stack.sh -h
Usage: update_stack.sh [-h] STACK_NAME KEY1=VAL1 [KEY2=VAL2 ...]
Updates CloudFormation stacks based on parameters passed here as key=value pairs. All
other parameters are based on existing values.
To solve your problem, you could borrow the edit() function:
PARAMS='sample-stack.json'
edit() {
  local key value pair
  for pair in "$@" ; do
    IFS='=' read -r key value <<< "$pair"
    jq --arg key "$key" \
       --arg value "$value" \
       '(.[] | select(.ParameterKey==$key)
         | .ParameterValue) |= $value' \
       "$PARAMS" > x ; mv x "$PARAMS"
  done
}
cp $PARAMS $PARAMS.bak
edit param1=newval1 param2=newval2
And then create your stack as normal.
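That is, pick up the edited file with the same flags as in your question, for example:
aws cloudformation create-stack \
  --stack-name sample-stack \
  --template-body file://sample-stack.yaml \
  --parameters file://sample-stack.json \
  --capabilities CAPABILITY_IAM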
Make all the values in the file variables, and use another script to pass in the default values or overwrite them.
For example, my sample-stack.json file looks like the following:
[
{
"ParameterKey": "InstanceType",
"ParameterValue": "${instance_type}"
},
{
"ParameterKey": "DesiredSize",
"ParameterValue": "${ASG_DESIRED_Number}"
}
]
In the script file, run the following commands to perform the substitution (note that envsubst only substitutes exported environment variables):
export instance_type=t3.small
envsubst < "${IN_FILENAME}" > "${OUT_FILENAME}"
You only need to set the variables you want to change; for those that don't need to change, the default values will be passed in.
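If the file contains other "$"-style strings that must not be touched, GNU envsubst also accepts a shell-format argument that restricts which variables are substituted, for example:
# only the listed variables are substituted; everything else is left as-is
export instance_type=t3.small ASG_DESIRED_Number=2
envsubst '${instance_type} ${ASG_DESIRED_Number}' < "${IN_FILENAME}" > "${OUT_FILENAME}"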
I build a VirtualBox VM using Packer and I would like to set some VM metadata (e.g. description, version) using the export_opts parameter. The docs say:
export_opts (array of strings) - Additional options to pass to the VBoxManage export. This can be useful for passing product information to include in the resulting appliance file.
I am trying to do this in a bash script calling packer:
desc=' ... some ...'
desc+=' ... multiline ...'
desc+=' ... description ...'
# this is actually done using printf, shortened for clarity
export_opts='[ "version", "0.2.0", "description", "${desc}" ]'
# the assembled string looks OK
echo "export_opts: ${export_opts}"
packer build \
... (more options) ...
-var "export_opts=${export_opts}" \
... (more options) ...
<packer configuration file>
I also tried --version instead of version and putting version and the value into the same string, but none of this works; once exported and re-imported, the VM description is empty.
Does anyone have some working sample code, or can you help me out with what I'm doing wrong?
Thank you very much.
Update:
Following Anthony Staunton's approach, I figured out that adding
"export_opts": [ "--vsys", "0", "--version", "0.2.0", "--description", "some test description" ],
to the Packer JSON file does work; passing the same string as --var to Packer does not work.
Fixed the problem at long last, and updated the Packer documentation with the example below; pull requests pending:
Packer JSON configuration file example:
{
  "type": "virtualbox-ovf",
  "export_opts":
  [
    "--manifest",
    "--vsys", "0",
    "--description", "{{user `vm_description`}}",
    "--version", "{{user `vm_version`}}"
  ],
  "format": "ova"
}
A VirtualBox VM description may contain arbitrary strings; the GUI interprets HTML formatting. However, the JSON format does not allow literal newlines within a value. To add a multi-line description, prepare the string in the shell before the packer call, like this (the shell's > continuation prompt is snipped for easier copy & paste):
vm_description='some
multiline
description'
vm_version='0.2.0'
packer build \
-var "vm_description=${vm_description}" \
-var "vm_version=${vm_version}" \
"packer_conf.json"
You may have to specify the data in your packer JSON file as:
"export_opts": [ "--vsys 0 --version \"0.2.0\"", "{{.Name}} --description \"${desc}\" " ],