How do I write a query for JSON in Logs Insights? - amazon-web-services

I have a simple message in the form of JSON, like the one below, in one of my log groups. The query that I use is {$.level = "INFO"}, but this doesn't bring up any results. What could be the problem? Can somebody help, please?
{
  "level": "INFO",
  "location": "lambda_handler:31",
  "message": {
    "msg": "abc",
    "event": {
      "Records": [
        {
          ...
        }
      ]
    }
  }
}

CloudWatch Logs Insights now allows you to filter based on JSON fields. Note that {$.level = "INFO"} is the filter-pattern syntax used by metric filters and subscription filters, not a Logs Insights query, which is why it returns nothing here.
The syntax is as follows:
Filter based on the field 'level':
filter level = 'INFO'
| display level, @message
Filter based on nested fields:
filter message.msg != '123'
| display message.msg, @message
Documentation:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData-discoverable-fields.html#CWL_AnalyzeLogData-discoverable-JSON-logs
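As a fuller hedged sketch against the sample event above (the field names level and message.msg come from the question's JSON; adjust them to your own log structure):
fields @timestamp, level, message.msg
| filter level = 'INFO'
| sort @timestamp desc
| limit 20
@timestamp and @message are fields that Logs Insights generates automatically for every log event, which is why they carry the @ prefix.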

Related

Stat operation on AWS Cloudwatch insight query to get field value from JSON Array

I have the below JSON in CloudWatch logs and want to get the timestamp value of all the records in it.
"records": [
{
"timestamp": "2020-10-16T18:00:24Z",
"temp": "-65.0",
"temp1": 0,
"temp2": -64
},
{
"timestamp": "2020-10-16T19:00:24Z",
"temp": "-65.0",
"temp1": 0,
"temp2": -64
}
}
I tried with the below query, but it didn't help:
fields @message
| parse @message '"records":[*]' as recordList
| parse recordList '{*}' as recordLine
| parse recordLine '"timestamp":"*"' as recordTime
| stats count() by recordTime
This returns only the first record's timestamp in the query results; I need all the timestamps.
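Note that the Logs Insights parse command matches its pattern at most once per log event, so a glob like '"timestamp":"*"' only ever captures the first occurrence; there is no built-in way to explode a JSON array into multiple rows. If the number of records per event is fixed and known, one hedged workaround is a single regex parse with several named capture groups (a sketch for exactly two records):
fields @message
| parse @message /"timestamp":"(?<t0>[^"]+)".*"timestamp":"(?<t1>[^"]+)"/
| stats count() by t0, t1
For a variable number of records, the array generally has to be flattened outside Logs Insights, e.g. by post-processing exported results.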

CouchDB query to get the doc with MAX timestamp

My CouchDB document format is as below, and based on price changes there can be multiple documents with the same product_id & store_id:
{
  "_id": "6b645d3b173b4776db38eb9fe6014a4c",
  "_rev": "1-86a1d9f0af09beaa38b6fbc3095f06a8",
  "product_id": "6b645d3b173b4776db38eb9fe60148ab",
  "store_id": "0364e82c13b66325ee86f99f53049d39",
  "price": "12000",
  "currency": "AUD_$",
  "time": 1579000390326
}
and I need to get the latest document (by time, the timestamp) for a given product_id & store_id.
For this, with my current solution, I have to do two queries, as below.
To get the latest timestamp: this returns the latest timestamp for the given product_id & store_id.
"max_time_by_product_store_id": {
"reduce": "function(keys, values) {var ids = []
values.forEach(function(time) {
if (!isNaN(time)){
ids.push(time);
}
});
return Math.max.apply(Math, ids)
}",
"map": "function (doc) {emit([doc.store_id, doc.product_id], doc.time);}"
}
Based on the latest timestamp, I query again to get the document, with three parameters (store_id, product_id & time), as below:
"store_product_time": {
"map": "function (doc) {
emit([doc.store_id, doc.product_id, doc.time]);
}"
}
This works perfectly for me, but my problem is that I need to do two DB queries to get the document, and I am looking for a solution that fetches the document within one DB query.
A CouchDB selector also has no way to get the document with the MAX value.
With CouchDB's /db/_find, you can sort the result in descending order and limit it to one document as follows:
{
  "selector": {
    "_id": {
      "$gt": null
    }
  },
  "sort": [
    {
      "time": "desc"
    }
  ],
  "limit": 1
}
CURL
curl -H 'Content-Type: application/json' -X POST http://localhost:5984/<db>/_find -d '{"selector":{"_id":{"$gt":null}},"sort":[{"time": "desc"}],"limit": 1}'
Please note that an index must previously be created for the sort field time (see /db/_index).
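A minimal sketch of creating that index via /db/_index (the db name and index name are placeholders):
curl -H 'Content-Type: application/json' -X POST http://localhost:5984/<db>/_index -d '{"index":{"fields":["time"]},"name":"time-index","type":"json"}'
To restrict the result to one product and store, the equality conditions can go straight into the selector, e.g. "selector": {"product_id": "...", "store_id": "...", "time": {"$gt": 0}}, together with an index that covers those fields.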

AWS IoT rule - timestamp for Elasticsearch

I have a bunch of IoT devices (ESP32) which publish a JSON object to things/THING_NAME/log for general debugging (to be extended into other topics with values in the future).
Here is the IoT rule, which kind of works:
{
  "sql": "SELECT *, parse_time(\"yyyy-mm-dd'T'hh:mm:ss\", timestamp()) AS timestamp, topic(2) AS deviceId FROM 'things/+/stdout'",
  "ruleDisabled": false,
  "awsIotSqlVersion": "2016-03-23",
  "actions": [
    {
      "elasticsearch": {
        "roleArn": "arn:aws:iam::xxx:role/iot-es-action-role",
        "endpoint": "https://xxxx.eu-west-1.es.amazonaws.com",
        "index": "devices",
        "type": "device",
        "id": "${newuuid()}"
      }
    }
  ]
}
I'm not sure how to set @timestamp inside Elasticsearch to allow time-based searches.
Maybe I'm going about this all wrong, but it almost works!
Elasticsearch can recognize date strings matching dynamic_date_formats.
The following format is automatically mapped as a date field in AWS Elasticsearch 7.1:
SELECT *, parse_time("yyyy/MM/dd HH:mm:ss", timestamp()) AS timestamp FROM 'events/job/#'
This approach does not require creating a preconfigured index, which is important for dynamically created indexes, e.g. with daily rotation for logs:
devices-${parse_time("yyyy.MM.dd", timestamp(), "UTC")}
According to the elastic.co documentation, the default value for dynamic_date_formats is:
[ "strict_date_optional_time","yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z"]
@timestamp is just a convention, as the @ prefix is the default prefix for Logstash-generated fields. Because you are not using Logstash as a middleman between IoT and Elasticsearch, you don't have a default mapping for @timestamp.
But basically it is just a name, so call it what you want; the only thing that matters is that you declare it as a date field in the mappings section of the Elasticsearch index.
If for some reason you still need it to be called @timestamp, you can either SELECT it with that prefix right away in the AS section (there might be an issue with IoT SQL restrictions, not sure):
SELECT *, parse_time(\"yyyy-MM-dd'T'HH:mm:ss\", timestamp()) AS @timestamp, topic(2) AS deviceId FROM 'things/+/stdout'
Or you can use the copy_to functionality when declaring your mapping:
PUT devices
{
  "mappings": {
    "properties": {
      "timestamp": {
        "type": "date",
        "copy_to": "@timestamp"
      },
      "@timestamp": {
        "type": "date"
      }
    }
  }
}
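With copy_to the value is indexed under both names, so time-based queries can target @timestamp directly. A hedged usage sketch (index name assumed as above):
GET devices/_search
{
  "query": {
    "range": {
      "@timestamp": { "gte": "now-1d" }
    }
  }
}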

Facebook Graph API V 2.6 User Type

Is there a way to know whether the user info being returned by the Graph API Explorer is a user's profile or a business page?
The query I am using is:
me?fields=feed{from,comments{from}}
which gives me back all the users who have left a comment or a post on my page.
Here is an example of what I get back for that query:
{
  "feed": {
    "data": [
      {
        "from": {
          "name": "John's Tires",
          "id": "114615108955555"
        }
      },
      {
        "from": {
          "name": "John Smith",
          "id": "123615108951010"
        }
      }
    ]
  }
}
Is there something I can add to the query to make it return a user type, for example type:user or type:page?
I've searched the Facebook Graph API documentation and found nothing. Thanks in advance for your help.
With metadata=1 you get a type in the response for a single top-level item, but I don't know how to do it for a list.
$ fbapi /me?metadata=1 | jq .metadata.type
"user"
$ fbapi /BBCQuestionTime?metadata=1 | jq .metadata.type
"page"

How to format date in Logstash Configuration

I am using Logstash to parse log entries from an input log file.
LogLine:
TID: [0] [] [2016-05-30 23:02:02,602] INFO {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService} - Configured Registry in 572ms {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService}
Grok Pattern:
TID:%{SPACE}\[%{INT:SourceSystemId}\]%{SPACE}\[%{DATA:ProcessName}\]%{SPACE}\[%{TIMESTAMP_ISO8601:TimeStamp}\]%{SPACE}%{LOGLEVEL:MessageType}%{SPACE}{%{JAVACLASS:MessageTitle}}%{SPACE}-%{SPACE}%{GREEDYDATA:Message}
My grok pattern is working fine. I am sending these parsed entries to a REST-based API that I built myself.
Configurations:
output {
  stdout { }
  http {
    url => "http://localhost:8086/messages"
    http_method => "post"
    format => "json"
    mapping => ["TimeStamp","%{TimeStamp}","CorrelationId","986565","Severity","NORMAL","MessageType","%{MessageType}","MessageTitle","%{MessageTitle}","Message","%{Message}"]
  }
}
In the current output, I am getting the date as it is parsed from the logs:
Current Output:
{
  "TimeStamp": "2016-05-30 23:02:02,602"
}
Problem Statement:
But the problem is that my API is not expecting the date in such a format; it expects the date in the generic XSD datetime format, as shown below.
Expected Output:
{
  "TimeStamp": "2016-05-30T23:02:02.602"
}
Can somebody please guide me on what changes I have to make in my filter or output mapping to achieve this goal?
In order to transform
2016-05-30 23:02:02,602
to the XSD datetime format
2016-05-30T23:02:02.602
you can simply add a mutate/gsub filter to replace the space character with a T and the comma with a period:
filter {
  mutate {
    gsub => [
      "TimeStamp", "\s", "T",
      "TimeStamp", ",", "."
    ]
  }
}
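If the API should eventually receive a real timestamp rather than a reformatted string, an alternative sketch (field name as in the question; the Joda pattern is an assumption to verify against your data) is to parse TimeStamp with the date filter:
filter {
  date {
    match => ["TimeStamp", "yyyy-MM-dd HH:mm:ss,SSS"]
  }
}
This stores the parsed value in @timestamp, which can then be referenced in the http output's mapping via sprintf date formatting, e.g. %{+yyyy-MM-dd'T'HH:mm:ss.SSS} instead of %{TimeStamp}.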