AWS CloudWatch Logs Insights: sum the key/value pairs in JSON - amazon-web-services

I need some help with a CloudWatch Logs Insights query that can sum up the key/value pairs in the JSON found in my logs.
fields @timestamp, @message | filter @message like /CTS/
After this, how do I parse the JSON and sum the values for each key?
Logs look like this
2022-10-28 16:58:14,685 :INFO: CTS {'aa': 135, 'bb': 187, 'cc': 14, 'dd': 8, 'ee': 3, 'ff': 1} CTE
2022-10-28 16:49:11,397 :INFO: CTS {'aa': 101, 'bb': 153, 'gg': 11, 'ii': 17, 'jj': 2, 'pp': 1, 'zz': 5} CTE
...
I need to sum up the pairs to make a pie chart,
like aa: 236, bb: 340, ...
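Logs Insights has no way to aggregate over arbitrary JSON keys, so one approach is to pull each known key out with a regex parse and sum it explicitly. A rough sketch (only aa and bb shown; the key names come from the sample logs, and you would add one parse line per key):
fields @timestamp, @message
| filter @message like /CTS/
# extract each key's value with a named capture group
| parse @message /'aa': (?<aa>\d+)/
| parse @message /'bb': (?<bb>\d+)/
# total each key across all matching log lines
| stats sum(aa) as aa_total, sum(bb) as bb_total
The single stats row can then back the pie chart from the Visualization tab.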

Related

Why does nulls_last=False not put the nulls first in Django?

I'm finding that while nulls_last=True works, nulls_last=False doesn't. Example below is in a Django shell.
In [10]: [x.date for x in Model.objects.all().order_by(F('date').asc(nulls_last=True))]
Out[10]:
[datetime.datetime(2020, 3, 10, 16, 58, 7, 768401, tzinfo=<UTC>),
datetime.datetime(2020, 3, 10, 17, 4, 51, 601980, tzinfo=<UTC>),
None,
]
In [11]: [x.date for x in Model.objects.all().order_by(F('date').asc(nulls_last=False))]
Out[11]:
[datetime.datetime(2020, 3, 10, 16, 58, 7, 768401, tzinfo=<UTC>),
datetime.datetime(2020, 3, 10, 17, 4, 51, 601980, tzinfo=<UTC>),
None,
]
I've tried this with both desc() and asc().
The mistake is assuming that the opposite of nulls_last=True is nulls_last=False. It isn't.
nulls_last=True does the following to the query:
SELECT ... ORDER BY ... ASC NULLS LAST
Whereas nulls_last=False just means use the DB default:
SELECT ... ORDER BY ... ASC
What you want instead is to use nulls_first=True OR nulls_last=True to explicitly get the order you want.
This is mentioned in the docs, but perhaps not as explicitly as it could be:
Using F() to sort null values
Use F() and the nulls_first or nulls_last keyword argument to Expression.asc() or desc() to control the ordering of a field's null values. By default, the ordering depends on your database.
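For example, to make the ordering explicit either way (a minimal sketch using the question's Model and date field):
from django.db.models import F

# SELECT ... ORDER BY date ASC NULLS FIRST
Model.objects.order_by(F('date').asc(nulls_first=True))

# SELECT ... ORDER BY date ASC NULLS LAST
Model.objects.order_by(F('date').asc(nulls_last=True))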

Getting the ID of the Max record in an aggregate

Take these models:
class Rocket(Model):
    ...

class Flight(Model):
    rocket = ForeignKey(Rocket)
    start_time = DateTimeField(...)
If I want to get start times of the latest flight for every rocket, that is simple:
>>> Flight.objects.values('rocket').annotate(max_start_time=Max('start_time'))
<QuerySet [
{'rocket': 3, 'max_start_time': datetime.datetime(2019, 6, 13, 6, 58, 46, 299013, tzinfo=<UTC>)},
{'rocket': 4, 'max_start_time': datetime.datetime(2019, 6, 13, 6, 59, 12, 759964, tzinfo=<UTC>)},
...]>
But what if instead of max_start_time I wanted to select IDs of those same Flights?
In other words, I want to get the ID of the latest Flight for every rocket.
What database backend are you using? If your backend supports DISTINCT ON, this is most easily accomplished with:
Flight.objects.order_by("rocket", "-start_time").distinct("rocket").values("id", "rocket")
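If your backend does not support DISTINCT ON (e.g. MySQL), a portable alternative is a correlated Subquery annotation; a minimal sketch against the models above:
from django.db.models import OuterRef, Subquery

# for each rocket, the id of its most recent flight
latest = Flight.objects.filter(rocket=OuterRef('pk')).order_by('-start_time')
Rocket.objects.annotate(latest_flight_id=Subquery(latest.values('id')[:1]))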

How to solve error Expected singleton: purchase.order.line (57, 58, 59, 60, 61, 62, 63, 64)

I'm using Odoo version 9 and I've created a module to customize the purchase order reports. Among the fields I want displayed in the reports is the supplier reference for the article, but when I add the code that displays this field:
<span> <t t-esc="', '.join([str(x.product_code) for x in o.order_line.product_id.product_tmpl_id.seller_ids])"/>
it displays an error when I try to print the report:
QWebException: "Expected singleton: purchase.order.line(57, 58, 59, 60, 61, 62, 63, 64)" while evaluating
"', '.join([str(x.product_code) for x in o.order_line.product_id.product_tmpl_id.seller_ids])"
PS: I haven't changed anything in the purchase module.
I don't know how to fix this problem; any ideas?
It is because your purchase order has several order lines and you are assuming the order has only one.
o.order_line.product_id.product_tmpl_id.seller_ids
will only work if there is exactly one order line; otherwise you have to loop through each one. Here o.order_line holds multiple order lines, so product_id cannot be read from it as if it were a single record. If you try o.order_line[0].product_id.product_tmpl_id.seller_ids it will work, but you only get the first order line's details. To get all of them you need to loop.
The error lists the IDs of all the records involved, i.e. purchase.order.line(57, 58, 59, 60, 61, 62, 63, 64), because the expression expects a single record but got eight. You have to select one of them. To see a result, just try this:
o.order_line[0].product_id.product_tmpl_id.seller_ids
If you want to show all of the seller IDs on the report, loop over the order lines in the XML.
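A minimal sketch of that loop in QWeb (assuming the same report context, where o is the purchase order):
<t t-foreach="o.order_line" t-as="line">
    <span t-esc="', '.join([str(s.product_code) for s in line.product_id.product_tmpl_id.seller_ids])"/>
</t>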

Remove duplicates based on specific column

I have a big text file in the following format on my Linux CentOS 7 machine.
430004, 331108, 075, 11, 19, Chunsuttiwat Nattika
431272, 331108, 075, 11, 19, Chunsuttiwat Nattika
435979, 335086, 803, 6, 19, ANNI BRENDA
436143, 335151, 545, 4, 23, Agrawal Abhishek
436723, 335387, 386, 2, 19, Bhati Naintara
438141, 325426, 145, 11, 19, Teh Joshua
I would like to remove duplicate lines, including the original occurrence, when they match on the second column.
Expected Output:
435979, 335086, 803, 6, 19, ANNI BRENDA
436143, 335151, 545, 4, 23, Agrawal Abhishek
436723, 335387, 386, 2, 19, Bhati Naintara
438141, 325426, 145, 11, 19, Teh Joshua
Update:
sort + uniq + awk pipeline:
sort -k2,2 file | uniq -f1 -c -w7 | awk '$1==1{ sub(/[[:space:]]*[0-9]+[[:space:]]*/,"",$0); print}'
sort -k2,2 file - sort the file by the 2nd field
uniq -f1 -c -w7 - prefix each line with its number of occurrences, skipping the 1st field (-f1) and comparing at most the first 7 characters of what remains (-w7)
awk '$1==1{...}' - print only the lines that occur once ($1==1 checks the count prepended by uniq), stripping that count off first
Using awk:
awk '{ R[NR]=$0;               # save the original line
       $1="";                  # drop the 1st field ($0 is rebuilt, gaining a leading space)
       N[$0]++ }               # count occurrences of the rest of the line
  END{ for(i=1;i<=NR;i++){
         temp=R[i]; sub(/^[[:digit:]]*, /,"",R[i]);   # strip the 1st field by hand
         if(N[" "R[i]]==1) print temp }}' filename    # print lines whose remainder occurs once
Output:
435979, 335086, 803, 6, 19, ANNI BRENDA
436143, 335151, 545, 4, 23, Agrawal Abhishek
436723, 335387, 386, 2, 19, Bhati Naintara
438141, 325426, 145, 11, 19, Teh Joshua
This is all you need (the file is read twice: the first pass counts occurrences of the 2nd field, the second pass prints only the lines whose 2nd field appeared once):
$ awk 'NR==FNR{c[$2]++;next} c[$2]==1' file file
435979, 335086, 803, 6, 19, ANNI BRENDA
436143, 335151, 545, 4, 23, Agrawal Abhishek
436723, 335387, 386, 2, 19, Bhati Naintara
438141, 325426, 145, 11, 19, Teh Joshua

Is there a way to automate the creation of a new text file that contains a dictionary from a different file?

I'm using Python 2.7
Here I create a nested structure:
day0 = 0
day1 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
11, 12, 13, 14, 15, 16, 17, 18, 19, 20,
21, 22, 23, 24, 25]
month0 = 0
december = [day0, day1]
calendar = [month0, december]
Then what I want to do is this:
file = open("calendarScript.py", "w")
file.write(calendar) ## Trying to create the calendar in a new doc
file.close()
But I get this error:
TypeError: expected a string or other character buffer object
Is there a way to recreate a dictionary in a new document?
Thank you for your help :)
P.s., I just tried this:
import shutil
shutil.copy(calendar, newFolder)
And got back this error:
TypeError: coercing to Unicode: need string or buffer, list found
Trying to find a way to copy a dict to a new file.
The answer to my problem was "dump". What I was trying to do was "dump" to a text file. Thanks to @KFL for this response (link below):
Writing a dict to txt file and reading it back?
>>> import json
>>> d = {"one":1, "two":2}
>>> json.dump(d, open("text.txt",'w'))
He also answered what was going to be my next problem:
>>> d2 = json.load(open("text.txt"))
>>> print d2
{u'two': 2, u'one': 1}
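Applied to the nested structure from the question, the same pattern looks like this (a sketch for Python 2.7; json serializes lists as well as dicts):
import json

day0 = 0
day1 = range(26)   # Python 2 range() returns a list: [0, 1, ..., 25]
month0 = 0
december = [day0, day1]
calendar = [month0, december]

# write the structure out as JSON text
with open("calendar.txt", "w") as f:
    json.dump(calendar, f)

# and read it back
with open("calendar.txt") as f:
    calendar2 = json.load(f)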