How do I extract this field using JQ? - amazon-web-services

JQ makes me irrationally angry and I despise its very existence. I have been trying to extract the value of the tf-name key for 30 minutes, and I'm going to become a hermit if I have to make another attempt at this. Please, how do I get tmp as a result?
➜ aws resourcegroupstaggingapi get-resources --tag-filters Key=tf-name,Values=tmp --profile=test | jq
{
  "ResourceTagMappingList": [
    {
      "ResourceARN": "arn:aws:s3:::stack-hate-1234567890salt",
      "Tags": [
        {
          "Key": "aws:cloudformation:stack-name",
          "Value": "hatestack"
        },
        {
          "Key": "aws:cloudformation:stack-id",
          "Value": "arn:aws:cloudformation:us-west-2:6666666666666:stack/FSStack/hate-hate-hate-123456789"
        },
        {
          "Key": "tf-name",
          "Value": "tmp"
        },
        {
          "Key": "aws:cloudformation:logical-id",
          "Value": "hatebucket1234"
        },
        {
          "Key": "aws-cdk:auto-delete-objects",
          "Value": "true"
        }
      ]
    }
  ]
}

It's quite simple to extract once you know the types of the top-level objects. You can see that ResourceTagMappingList is a list of records, so it needs the [] iterator after it, which I think beginners with jq tend to miss. So
.ResourceTagMappingList[].Tags | from_entries."tf-name"
should get you the desired value. See jqplay demo
The way it works is: the from_entries function takes an array of objects with the key names key and value and turns it into a single object that maps each key to its value. From there, we extract only the key named tf-name. The double quotes are needed because the key name contains the meta-character -, so the whole string has to be quoted to be treated as a key name.
Another way would be to use a select filter, like below:
.ResourceTagMappingList[].Tags[] | select(.Key == "tf-name").Value
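And if you would rather skip jq entirely, the same extraction is easy to do from a small script instead. Here is a minimal Python sketch of that alternative (it assumes the AWS CLI is installed and the test profile from the question exists):
import json
import subprocess

# Run the same AWS CLI call as above and parse its JSON output directly.
out = subprocess.run(
    ["aws", "resourcegroupstaggingapi", "get-resources",
     "--tag-filters", "Key=tf-name,Values=tmp", "--profile", "test"],
    capture_output=True, text=True, check=True,
).stdout

for mapping in json.loads(out)["ResourceTagMappingList"]:
    # Build a {Key: Value} dict -- the same idea as jq's from_entries.
    tags = {t["Key"]: t["Value"] for t in mapping["Tags"]}
    print(tags.get("tf-name"))  # -> tmp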

Related

Is there a way to interpolate OutputPath's JsonPath using state's input in AWS step function?

Basically, I have the following input:
{
"name": "abc",
"choice": "choice1"
}
My DynamoDB table has the following structure:
Partition key - "name"
Complex JSON with choices:
{
  "choices":
  {
    "choice1": ......,
    "choice2": ......
  }
}
I want to read directly from DynamoDB and get a sub-item under the relevant choice:
{
  "StartAt": "Read Next Message from DynamoDB",
  "States": {
    "Read Next Message from DynamoDB": {
      "Type": "Task",
      "Resource": "arn:aws:states:::dynamodb:getItem",
      "Parameters": {
        "TableName": "my_table",
        "Key": {
          "customerName": {"S.$": "$.name"}
        }
      },
      "OutputPath": "$.Item.choices.M.choice1.M.myvalue.S",
      "Next": "World"
    },
    "World": {
      "Type": "Pass",
      "End": true
    }
  }
}
Basically, I want to do something like "$.Item.choices.M.{$.choice}.M.myvalue.S" and take one of the output's keys from the input. Is this possible?
I think what you're looking for is JsonPath interpolation, but that is not supported as per this thread on AWS forums.
As far as I know, Step Functions only allow path references through the $, . and [] operators (Reference Path).
I don't know how much control you have over the DynamoDB table's data, but I think your problem can be solved easily if your choice types are modeled in the following way:
{
  "choices": [{
    "choiceType": "choice1",
    ........
  },
  {
    "choiceType": "choice2",
    ........
  }]
}
Now you can use the Map state to iterate over the choices array. Don't forget to pass the expected choiceType to each iteration.
The first state of the Map iterator can be a Choice state which compares choiceType and moves to the appropriate next state. So, basically, the rest of your workflow is modeled as the iterator of the Map state from step 1.
If you don't have control over the DynamoDB table, then you can process the query result in an AWS Lambda function instead; a sketch follows.
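Here is a minimal Python sketch of what such a Lambda could look like. The event shape is an assumption: it supposes you pass both the original choice and the Item part of the GetItem result into the function.
# Hypothetical handler: pick the sub-item for the requested choice out of the
# DynamoDB GetItem result. Field names follow the table layout in the question.
def handler(event, context):
    choice = event["choice"]  # e.g. "choice1", forwarded from the state input
    item = event["item"]      # the "Item" part of the GetItem output, in DynamoDB JSON
    return item["choices"]["M"][choice]["M"]["myvalue"]["S"]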

Regex In body of API test

I'm testing API with https://cloud.google.com/datastore/docs/reference/data/rest/v1/projects/lookup
The following brings back a found result with data. I would like to use a regular expression to bring back all records whose name contains the number 1000867, but all my attempts result in a missing result set.
i.e. change to "name": "/1000867.*/"
{
  "keys": [
    {
      "path": [
        {
          "kind": "Job",
          "name": "1000867:100071805:1"
        }
      ]
    }
  ]
}
The Google documentation for lookup key states that the name is a "string" and that
The name of the entity. A name matching regex __.*__ is reserved/read-only. A name must not be more than 1500 bytes when UTF-8 encoded. Cannot be "".
The regex part threw me off and the solution was to use runQuery!
Consider this closed.

CouchDB Map syntax

I have documents in CouchDB (v. 2.1.1) as follows:
{
"xyz": "a",
"abc": "def"
},
{
"xyz": "a",
"ghi": "jkl"
},
{
"xyz": "a",
"mno": "pqr"
},
{
"xyz": "a",
"stu": "vwx"
},
{
"xyz": "a",
"bcd": 1000
}
If I run a simple map function, for example:
function (doc) {
  if (doc.xyz) {
    emit(doc.xyz, doc.abc);
  }
}
I get:
{
"id": "4c3406a1d92942b4fb10d1314e0061a9",
"key": "a",
"value": "def"
},
{
"id": "4c3406a1d92942b4fb10d1314e006ccf",
"key": "a",
"value": null
},
{
"id": "4c3406a1d92942b4fb10d1314e00787f",
"key": "a",
"value": null
},
{
"id": "4c3406a1d92942b4fb10d1314e00871e",
"key": "a",
"value": null
},
{
"id": "4c3406a1d92942b4fb10d1314e00906a",
"key": "a",
"value": null
}
I want to try and eliminate the 'null' outputs.
I am looking at having a CouchDB database with many small documents, each containing a small snippet of information, rather than larger documents containing much more information per document.
My question is: is my document design a good one, and if so, how do I get just what I am looking for rather than rows of 'nulls'? If my storage design is not ideal, what kind of design should I be looking at to simplify the output, given my plan to have many small 'docs'?
EDIT:
Having looked at possible answers, I have decided that having numerous small documents as I described in my question is not giving me the kind of benefit I imagined it would.
I was unable to get a satisfactory map function that produced readable answers.
However, I investigated the 'Mango' query system available in recent releases of CouchDB, and using these queries I was able to get acceptable output from a database like the one supplied above.
This is what I did:
curl -X POST http://admin:123@127.0.0.1:5984/ptn/_find -d '{"selector": {"$or": [{"abc": {"$gt": null}},{"ghi": {"$gt": null}}]},"fields": ["abc","ghi"]}' -H "Content-Type: application/json"
Un-minified:
{
  "selector": {
    "$or": [
      {
        "abc": {
          "$gt": null
        }
      },
      {
        "ghi": {
          "$gt": null
        }
      }
    ]
  },
  "fields": [
    "abc",
    "ghi"
  ]
}
The output:
{"docs":[
{"abc":"def"},
{"ghi":"jkl"}
]
.....
A concise answer.
Sorting can be done but sorted fields must be indexed. Indexing is in any case advised for larger data sets.
Reference:
http://docs.couchdb.org/en/2.1.1/api/database/find.html
As my question asked for a map function, this perhaps cannot be regarded as a valid answer, but for me it is one. I have tried the 'Mango' query system a little on other databases and it seems to be more useful and powerful than I thought it was, although it offers no means of totalling etc.
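For anyone scripting this instead of using curl, here is a minimal Python sketch of the same _find request (the host, credentials and database name are the ones from the curl command above):
import requests

# POST the same Mango selector to the _find endpoint.
resp = requests.post(
    "http://127.0.0.1:5984/ptn/_find",
    auth=("admin", "123"),
    json={
        "selector": {"$or": [{"abc": {"$gt": None}}, {"ghi": {"$gt": None}}]},
        "fields": ["abc", "ghi"],
    },
)
print(resp.json()["docs"])  # -> [{'abc': 'def'}, {'ghi': 'jkl'}]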

Formatting a JSON with a dictionary

I have a JSON with format variables in it, similar to string format, and I'd like to be able to load it with the variables replaced by actual values.
For example, if the JSON is:
[
{
"role": "President",
"name": "{first_name}",
"age": "{first_age}"
},
{
"role": "Vice President",
"name": "{second_name}",
"age": "{second_age}"
}
]
And the dictionary I'd like to format with is:
{"first_name": "Bob", "first_age": "50", "second_name": "Bill", "second_age": "35"}
I'd like to get:
[
{
"role": "President",
"name": "Bob",
"age": "50"
},
{
"role": "Vice President",
"name": "Bill",
"age": "35"
}
]
I tried converting the JSON to a string, using format, and then turning it back to a list of dictionaries:
from ast import literal_eval
literal_eval(str(raw_json).format(**json_params))
But the dictionaries' curly brackets confuse the format function and give me a KeyError exception. I suppose I could replace every pair of curly brackets which don't have a variable name between them with double curly brackets, but that's bound to go wrong and also not very Pythonic.
What would be the most elegant way to solve that issue?
What you are looking for is a templating engine: the template is the JSON string, and the data has to be injected into that template.
The right tool for that in Python is jinja2.
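Here is a minimal sketch of that approach. Note the assumption that the placeholders are written in Jinja2's {{ ... }} style rather than the single braces from the question, which also sidesteps the clash with JSON's own curly brackets:
import json
from jinja2 import Template

# Template with Jinja2-style placeholders instead of {first_name} etc.
template = Template("""
[
  {"role": "President", "name": "{{ first_name }}", "age": "{{ first_age }}"},
  {"role": "Vice President", "name": "{{ second_name }}", "age": "{{ second_age }}"}
]
""")

params = {"first_name": "Bob", "first_age": "50",
          "second_name": "Bill", "second_age": "35"}

data = json.loads(template.render(**params))
print(data[0]["name"])  # -> Bob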

Writing a simple group by with map-reduce (Couchbase)

I'm new to the whole map-reduce concept, and I'm trying to write a simple map-reduce function.
I'm currently working with Couchbase Server as my NoSQL DB.
I want to get a list of all my types:
key: 1, value: null
key: 2, value: null
key: 3, value: null
Here are my documents:
{
"type": "1",
"value": "1"
}
{
"type": "2",
"value": "2"
}
{
"type": "3",
"value": "3"
}
{
"type": "1",
"value": "4"
}
What I've been trying to do is:
Write a map function:
function (doc, meta) {
  emit(doc.type, 0);
}
Using built-in reduce function:
_count
But I'm not getting the expected result.
How can I get all types ?
UPDATE
Please note that the types are in different documents, and I know that reduce works on a document and doesn't execute outside of it.
By default it will reduce all key groups. The feature you want is called group_level:
This is the equivalent of reduce=true:
~ $ curl 'http://localhost:8092/so/_design/dev_test/_view/test?group_level=0'
{"rows":[
{"key":null,"value":4}
]
}
But here is how you can get the reduction grouped by the first level of the key:
~ $ curl 'http://localhost:8092/so/_design/dev_test/_view/test?group_level=1'
{"rows":[
{"key":"1","value":2},
{"key":"2","value":1},
{"key":"3","value":1}
]
}
There is also a blog post about this: http://blog.couchbase.com/understanding-grouplevel-view-queries-compound-keys
There is also a corresponding option in the Couchbase admin console.