Using JQ to generate AWS CloudFormation Parameters File - amazon-web-services

Trying to generate an AWS Parameter file using JQ for use in a call to CloudFormation
aws cloudformation create-stack --stack-name test --parameters file://params.json --template-body file://cfn.yaml
I was thinking of taking a template like this:
'[{"ParameterKey":"KEY","ParameterValue":"VALUE","UsePreviousValue":false}]'
Then, in jq, I'd pass the parameters as a single argument, e.g. jq --arg params 'key1,value1:key2,value2', split on ':' to duplicate the template node, and populate each copy with the comma-separated values.
The best I can do is create the correct number of top-level nodes ahead of time and then do a static replacement, supplying each argument individually. That isn't dynamic enough to be worth it -- better to just write a bash script and do substitutions in a loop.
Any help would be appreciated.
Currently the best I've been able to do is this
echo '[{"ParameterKey":"KEY","ParameterValue":"VALUE","UsePreviousValue":false}]' | jq --arg vars "key1,val1:key2,val2" '.[0].ParameterKey = ($vars|split(":")|.[]|split(",")|.[0]) | .[0].ParameterValue = ($vars|split(":")|.[]|split(",")|.[1])'
But the output is every combination of keys and values (e.g. key1 val1, key1 val2, etc.) rather than one object per pair.

You may want to consider using --argfile. Here are some answers that demonstrate it:
replace json node with another json file with command line jq
Value map with JQ
Update one JSON file values with values from another JSON using JQ
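For the packed-string input described in the question, a direct approach may also work without --argfile: build the whole array in one jq invocation by splitting the argument twice. A minimal sketch, assuming keys and values never contain ':' or ',':

```shell
# Split "key1,val1:key2,val2" into pairs on ':', each pair on ',',
# and emit one parameter object per pair.
jq -n --arg vars "key1,val1:key2,val2" \
  '[ $vars | split(":")[] | split(",")
     | { ParameterKey: .[0], ParameterValue: .[1], UsePreviousValue: false } ]' \
  > params.json
```

The -n flag tells jq not to read any input, so no template document is needed at all.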

Related

AWS CLI - Put output into a readable format

So I ran the following command in my CLI and it returned values; however, they are unreadable. How would I format this into a table with a command?
for i in $(aws s3api list-buckets --query "Buckets[].Name" --output text)  # loop header assumed; the original snippet begins at "do"
do
  echo "Check if SSE is enabled for bucket -> ${i}"
  aws s3api get-bucket-encryption --bucket "${i}" | jq -r '.ServerSideEncryptionConfiguration.Rules[0].ApplyServerSideEncryptionByDefault.SSEAlgorithm'
done
Would I need to change the command above?
You can specify an --output parameter when using the AWS CLI, or configure a default format using the aws configure command.
From Setting the AWS CLI output format - AWS Command Line Interface:
The AWS CLI supports the following output formats:
json – The output is formatted as a JSON string.
yaml – The output is formatted as a YAML string.
yaml-stream – The output is streamed and formatted as a YAML string. Streaming allows for faster handling of large data types.
text – The output is formatted as multiple lines of tab-separated string values. This can be useful to pass the output to a text processor, like grep, sed, or awk.
table – The output is formatted as a table using the characters +|- to form the cell borders. It typically presents the information in a "human-friendly" format that is much easier to read than the others, but not as programmatically useful.
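Applied to the command above, that might look like the following sketch; the bucket name is a placeholder, and this needs AWS credentials to actually run. Using --query instead of jq also keeps the filtering inside the CLI so the table renderer can format it:

```shell
# Same SSE check, rendered by the CLI's own table formatter instead of jq.
# "my-bucket" is a placeholder bucket name.
aws s3api get-bucket-encryption \
  --bucket my-bucket \
  --query "ServerSideEncryptionConfiguration.Rules[0].ApplyServerSideEncryptionByDefault.SSEAlgorithm" \
  --output table
```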

AWS CLI filtering with JQ

I'm still trying to understand how to use JQ to get what I want. I want to get the size of all snapshots in my account older than a specific date and then add them up so that I can calculate cost. I am able to do this without the date filtering with this.
aws ec2 describe-snapshots --profile my_profile_name | jq "[.Snapshots[].VolumeSize] | add"
This returns a numerical value. Without jq, I'm also able to get a list of snapshots using --query, but I don't think that filter applies when piping through jq (I could be wrong).
aws ec2 describe-snapshots --profile my_profile_name --owner-ids self --query "Snapshots[?(StartTime<='2022-09-08')].[SnapshotId]"
I tried various arrangements using "select" along with my first example, but I haven't been able to get anything returned yet. I appreciate any pointers.
This is the "select" that doesn't quite work:
aws ec2 describe-snapshots --profile my_profile_name | jq "[.Snapshots[]select(.StartTime < "2022-09-08")] | [.Snapshots[].VolumeSize] | add"
Edit 11/15/22
I was able to make progress, and I found a site that lets you test jq expressions. The example can select strings and numbers, but I'm having trouble with the date part. I don't understand how to interpret the date in the format that AWS provides. I can do the add part; I removed it to simplify the example.
This is the working "select" for a string. I can only do greater/less-than comparisons when I use numbers and remove the quotes from the JSON.
.Snapshots[] | select(.StartTime == "2022-11-14T23:28:39+00:00") | .VolumeSize
jq play example
It would appear that the jq expression you're looking for is:
[.Snapshots[] | select(.StartTime < "2022-09-08") | .VolumeSize] | add
Without an illustrative JSON, however, it's difficult to test this out; that's one of the reasons for the mcve guidelines.
A solution without jq is this:
aws ec2 describe-snapshots \
--owner-ids self \
--query "sum(Snapshots[?StartTime < '2022-11-12'].VolumeSize)"
But note that the comparison is not done "by date" but by literally comparing strings like 2016-11-02T01:25:28.000Z with 2022-11-12.
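If a genuine date comparison is wanted rather than string comparison, jq can convert the timestamps to epoch seconds first. A minimal sketch with fabricated inline data shaped like describe-snapshots output; note that fromdateiso8601 expects a trailing "Z", so the "+00:00" offset AWS emits is rewritten before parsing:

```shell
# Fabricated sample shaped like `aws ec2 describe-snapshots` output.
echo '{"Snapshots":[
  {"SnapshotId":"snap-1","StartTime":"2022-01-01T00:00:00+00:00","VolumeSize":8},
  {"SnapshotId":"snap-2","StartTime":"2022-11-14T23:28:39+00:00","VolumeSize":16}]}' |
jq '[ .Snapshots[]
      # normalize "+00:00" to "Z" so fromdateiso8601 can parse it
      | select((.StartTime | sub("\\+00:00$"; "Z") | fromdateiso8601)
               < ("2022-09-08T00:00:00Z" | fromdateiso8601))
      | .VolumeSize ]
    | add'
```

With real data, pipe the aws command's output into the same filter in place of the echo.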

What modifications do I need to make to my gcloud command to get the list of enabled services in all GCP projects in the below tabular format?

I have the following code block to enumerate the enabled services in all GCP projects:
for project in $(gcloud projects list --format="value(projectId)"); \
do gcloud services list --project $project --format="table[box]($project,config.name,config.title)"; \
done;
It gives me the output in this format:
But I would like the output to be in this format:
How do I accomplish that? Please advise. Thanks.
You can't do this with gcloud alone, because you need to assemble values from multiple commands: Projects.List and Services.List. gcloud --format applies only to the output of a single command.
You're grabbing all the data you need (Project, Name and Title). I recommend you use bash to consume the values output by gcloud and synthesize the table output yourself.
Update
Here's a script that'll get you CSV output that matches your rows:
PROJECTS=$(\
  gcloud projects list \
    --format="value(projectId)")

for PROJECT in ${PROJECTS}
do
  CONFIGS=$(gcloud services list \
    --project=${PROJECT} \
    --format="csv[no-heading](config.name,config.title.encode(base64))")
  for CONFIG in ${CONFIGS}
  do
    IFS=, read NAME TITLE <<< ${CONFIG}
    TITLE=$(echo ${TITLE} | base64 --decode)
    echo "${PROJECT},${NAME},\"${TITLE}\""
  done
done
NOTE: encode(base64) prevents the unquoted title values from being split; base64 --decode reverses this on output. It should be possible to do the quoting with format, but I don't understand the syntax: --format='csv[no-heading](config.name,format("\"{0}\"",config.title))' didn't work.
Then, in the best *nix tradition, you can pipe that output into awk or column or some other tool that formats the CSV as a pretty-printed table. That's the harder challenge, because you'd need the longest value in each field to lay the columns out correctly. I'll leave that to you.
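For simple cases, the standard column utility can do that alignment, assuming none of the titles contain embedded commas (the quoting above guards against exactly that). The sample rows below are fabricated stand-ins for the loop's real output:

```shell
# Fabricated sample of the loop's CSV output, aligned with column(1):
# -s, splits on commas, -t tabulates into equal-width columns.
printf '%s\n' \
  'project-a,compute.googleapis.com,"Compute Engine API"' \
  'project-a,storage.googleapis.com,"Cloud Storage API"' |
column -s, -t
```

In practice you would pipe the script's echo output into the same column invocation.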

AWS CLI DynamoDB Called From Powershell Put-Item fails when a value contains a space

So, let's say I'm trying to post this JSON via the command line (not in a file, because I'm not going to write a file for every invocation of this script) to a DynamoDB table:
{\"TeamId\":{\"S\":\"One_Space_123\"},\"TeamName\":{\"S\":\"One_Space\"},\"Environment\":{\"S\":\"cte\"},\"StartDate\":{\"S\":\"null\"},\"EndDate\":{\"S\":\"null\"},\"CreatedDate\":{\"S\":\"today\"},\"CreatedBy\":{\"S\":\"someones user\"},\"EmailDistributionList\":{\"S\":\"test#test.com\"},\"RemedyGroup\":{\"S\":\"OneSpace\"},\"ScomSubscriptionId\":{\"S\":\"guid-ab22-2345\"},\"ZabbixActionId\":{\"S\":\"11\"},\"SnsTopic\":{\"M\":{\"TopicName\":{\"S\":\"ATopicName\"},\"TopicArn\":{\"S\":\"AtopicArn1234\"},\"CreatedDate\":{\"S\":\"today\"},\"CreatedBy\":{\"S\":\"someones user\"}}}}
Then the result from the CLI is one like this:
Unknown options: Space"},"ScomSubscriptionId":{"S":"guid-ab22-2345"},"ZabbixActionId":{"S":"11"},"SnsTopic":{"M":{"TopicName":{"S":"ATopicName"},"TopicArn":{"S":"AtopicArn1234"},"CreatedDate":{"S":"today"},"CreatedBy":{"S":"someones, user"}}}}, user"},"EmailDistributionList":{"S":"test#test.com"},"RemedyGroup":{"S":"One
As you can see, it fails on the TeamName property, which in the example above is "One Space". If I change that value to "OneSpace", it instead starts to fail on the CreatedBy property, which is populated by "someones user"; but if I remove all spaces from all properties, I can suddenly pass this JSON to DynamoDB successfully.
In a working example the json looks like this:
{\"TeamId\":{\"S\":\"One_Space_123\"},\"TeamName\":{\"S\":\"One_Space\"},\"Environment\":{\"S\":\"cte\"},\"StartDate\":{\"S\":\"null\"},\"EndDate\":{\"S\":\"null\"},\"CreatedDate\":{\"S\":\"today\"},\"CreatedBy\":{\"S\":\"someonesuser\"},\"EmailDistributionList\":{\"S\":\"test#test.com\"},\"RemedyGroup\":{\"S\":\"OneSpace\"},\"ScomSubscriptionId\":{\"S\":\"guid-ab22-2345\"},\"ZabbixActionId\":{\"S\":\"11\"},\"SnsTopic\":{\"M\":{\"TopicName\":{\"S\":\"ATopicName\"},\"TopicArn\":{\"S\":\"AtopicArn1234\"},\"CreatedDate\":{\"S\":\"today\"},\"CreatedBy\":{\"S\":\"someonesuser\"}}}}
I can't find any documentation that tells me I can't have spaces, and if I read this in from a file, it posts with the spaces intact. So what gives? If anyone has any advice on this matter, I certainly appreciate it.
For what it's worth, in PowerShell the execution currently looks like this (though I've tried various combinations of quoting the $dbTeamTableEntry variable):
$dbEntry = aws.exe dynamodb put-item --region $region --table-name $table --item "$($dbTeamTableEntry)"

Pass comma separated argument to spark jar in AWS EMR using CLI

I am using the AWS CLI to create an EMR cluster and add a step. My create-cluster command looks like:
aws emr create-cluster --release-label emr-5.0.0 --applications Name=Spark --ec2-attributes KeyName=*****,SubnetId=subnet-**** --use-default-roles --bootstrap-action Path=$S3_BOOTSTRAP_PATH --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m4.large InstanceGroupType=CORE,InstanceCount=$instanceCount,InstanceType=m4.4xlarge --steps Type=Spark,Name="My Application",ActionOnFailure=TERMINATE_CLUSTER,Args=[--master,yarn,--deploy-mode,client,$JAR,$inputLoc,$outputLoc] --auto-terminate
$JAR is my Spark jar, which takes two parameters: an input and an output location.
$inputLoc is a comma-separated list of input files, like s3://myBucket/input1.txt,s3://myBucket/input2.txt
However, the AWS CLI treats comma-separated values as separate arguments, so the second input file is consumed as the next positional argument and $outputLoc ends up being s3://myBucket/input2.txt.
Is there any way to escape comma and treat this whole argument as single value in CLI command so that spark can handle reading multiple files as input?
It seems there is no way to escape the comma in the input file list.
After trying quite a few approaches, I finally resorted to a hack: passing a different delimiter to separate the input files and handling it in the code. In my case, I used % as the delimiter, and in the driver code I do:
if (inputLoc.contains("%")) {
  inputLoc = inputLoc.replaceAll("%", ",");
}
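That said, one alternative may be worth trying before resorting to a custom delimiter: the --steps option also accepts a JSON document via file://, and inside a JSON string a comma is just a character, not an argument separator. A sketch, not verified against a live cluster; all S3 paths and names are placeholders:

```shell
# Write the step definition as JSON; the comma-separated input list
# stays a single string, hence a single Spark argument.
cat > step.json <<'EOF'
[
  {
    "Type": "Spark",
    "Name": "My Application",
    "ActionOnFailure": "TERMINATE_CLUSTER",
    "Args": [
      "--master", "yarn",
      "--deploy-mode", "client",
      "s3://myBucket/app.jar",
      "s3://myBucket/input1.txt,s3://myBucket/input2.txt",
      "s3://myBucket/output/"
    ]
  }
]
EOF
# then: aws emr create-cluster ... --steps file://step.json
```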