How can calculations be done on JSON object elements in Django?

I have declared instalment_2 as a JSONField in my Django model.
My concern is that I am getting the error below:
string indices must be integers
I have tried everything I could find on the internet, but no luck.
I am passing this JSON in a POST request:
"instalment_2": {
"type": "instalment_2",
"mode": "cash",
"date": "1/09/2019",
"amount": "35000"
}
It should calculate paid and balance based on the instalment amounts; the instalment_1 amount is already in the table.
This is what I am trying now:
instalment_2 = {
    "type": body['instalment_2']['type'],
    "mode": body['instalment_2']['mode'],
    "date": body['instalment_2']['date'],
    "amount": body['instalment_2']['amount']
}
paid = int(obj.instalment_1['amount']) + int(obj.instalment_2['amount'])
balance = int(obj.total) - paid
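In Django, a JSON POST body is not parsed for you: request.POST does not contain it, and indexing the raw string with body['instalment_2'] raises exactly "string indices must be integers". A minimal sketch of a view that parses the body first, assuming the payload shown above and a hypothetical Booking model with instalment_1, instalment_2, and total fields:
import json

from django.http import JsonResponse

def add_instalment(request, pk):
    # Parse the raw body; indexing the unparsed string is what
    # raises "string indices must be integers".
    body = json.loads(request.body)

    obj = Booking.objects.get(pk=pk)  # Booking is a hypothetical model name
    obj.instalment_2 = {
        "type": body['instalment_2']['type'],
        "mode": body['instalment_2']['mode'],
        "date": body['instalment_2']['date'],
        "amount": body['instalment_2']['amount'],
    }

    # If instalment_1 was ever saved as a JSON string rather than a dict,
    # normalise it before doing arithmetic on it.
    instalment_1 = obj.instalment_1
    if isinstance(instalment_1, str):
        instalment_1 = json.loads(instalment_1)

    paid = int(instalment_1['amount']) + int(obj.instalment_2['amount'])
    balance = int(obj.total) - paid
    obj.save()
    return JsonResponse({"paid": paid, "balance": balance})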


How do I extract a string of numbers from random text in Power Automate?

I am setting up a flow to organize and save emails as PDFs in a Dropbox folder. The first email to arrive includes a 10-digit identification number, which I extract along with an address. My flow creates a folder in Dropbox named in this format: 2023568684 : 123 Main St. Over the following weeks, additional emails arrive that I need to put into that folder, and the subject always has a 10-digit number in it.
I was building around each email and using functions like split, first, last, etc. to isolate the 10-digit ID. The problem is that there is no consistency in the subjects or bodies of the messages, so I cannot reliably find the ID that way. I started building around each email format individually, but there are far too many, not to mention the possibility of new senders or format changes.
My idea is to use List files in folder when a new message arrives, which will create an array I can filter to find the folder ID the message needs to be saved to. I know there is a limitation on this because of the 20-file limit, but that is a different topic and question.
For now: how do I find a random 10-digit number in a randomly formatted email subject line so I can use it with the filter function?
For this requirement, you really need regex, and at present PowerAutomate doesn't support regex expressions. The good news is that it looks like support is coming ...
https://powerusers.microsoft.com/t5/Power-Automate-Ideas/Support-for-regex-either-in-conditions-or-as-an-action-with/idi-p/24768
There is a connector but it looks like it's not free ...
https://plumsail.com/actions/request-free-license
To get around it for now, my suggestion would be to create a function app in Azure and let it do the work. This may not be your cup of tea but it will work.
I created a .NET (C#) function with the following code (straight in the portal) ...
#r "Newtonsoft.Json"
using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;
public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
dynamic data = JsonConvert.DeserializeObject(requestBody);
string strToSearch = System.Text.Encoding.UTF8.GetString(Convert.FromBase64String((string)data?.Text));
string regularExpression = data?.Pattern;
var matches = System.Text.RegularExpressions.Regex.Matches(strToSearch, regularExpression);
var responseString = JsonConvert.SerializeObject(matches, new JsonSerializerSettings()
{
ReferenceLoopHandling = ReferenceLoopHandling.Ignore
});
return new ContentResult()
{
ContentType = "application/json",
Content = responseString
};
}
Then, in PowerAutomate, call the HTTP action, passing in a base64-encoded string of the content you want to search ...
This is the expression for the encoding ... base64(variables('String to Search')) ... and this is the JSON you need to pass in ...
{
    "Text": "#{base64(variables('String to Search'))}",
    "Pattern": "[0-9]{10}"
}
This is an example of the response ...
[
    {
        "Groups": {},
        "Success": true,
        "Name": "0",
        "Captures": [],
        "Index": 33,
        "Length": 10,
        "Value": "2023568684"
    },
    {
        "Groups": {},
        "Success": true,
        "Name": "0",
        "Captures": [],
        "Index": 98,
        "Length": 10,
        "Value": "8384468684"
    }
]
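If you want to sanity-check the function before wiring it into the flow, a quick Python sketch of the same call might look like this (the function URL is a placeholder for your own):
import base64
import requests  # third-party: pip install requests

text = "We're going to search for string 2023568684 within this text"
payload = {
    "Text": base64.b64encode(text.encode("utf-8")).decode("ascii"),
    "Pattern": "[0-9]{10}",
}

# Replace with your own function URL, including its ?code= key.
url = "https://<your-app>.azurewebsites.net/api/<your-function>"
resp = requests.post(url, json=payload)

# The function returns the serialised matches; "Value" holds each hit.
print([m["Value"] for m in resp.json()])  # e.g. ['2023568684']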
Next, add a Parse JSON action and use this schema ...
{
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "Groups": {
                "type": "object",
                "properties": {}
            },
            "Success": {
                "type": "boolean"
            },
            "Name": {
                "type": "string"
            },
            "Captures": {
                "type": "array"
            },
            "Index": {
                "type": "integer"
            },
            "Length": {
                "type": "integer"
            },
            "Value": {
                "type": "string"
            }
        },
        "required": [
            "Groups",
            "Success",
            "Name",
            "Captures",
            "Index",
            "Length",
            "Value"
        ]
    }
}
Finally, extract the first value you find that matches the regex pattern. Multiple results are returned if found, so you can do something with the others if you need to.
This is the expression ... #{first(body('Parse_JSON'))?['value']}
From this string ...
We're going to search for string 2023568684 within this text and we're also going to try and find 8384468684, this should work.
... the result is 2023568684, the first match.
Don't have a Premium PowerAutomate licence so can't use the HTTP action?
You can do this exact same thing using the LogicApps service in Azure. It's the same engine with some slight differences re: connectors and behaviour.
Instead of the HTTP, use the Azure Functions action.
In relation to your action that fires when an email is received: in LogicApps, it will poll every x seconds/minutes/hours/etc. rather than fire on an event. I'm not 100% sure which email connector you're using, but it should exist.
Dropbox connectors exist, that's no problem.
You can export your PowerAutomate flow into a LogicApps format so you don't have to start from scratch.
https://learn.microsoft.com/en-us/azure/logic-apps/export-from-microsoft-flow-logic-app-template
If you're concerned about cost, don't be. Just make sure you use the consumption plan. Costs only really rack up for these services when the apps run for minutes at a time on a regular basis. Just keep an eye on it for your own mental health.
To get the function URL, you can find it in the function itself; you have to be inside the function in the portal ...

Which alert timestamp (createTime or endTime) correlates to the time an email was "REMOVED_FROM_INBOX"?

Is anyone aware of which timestamp presented in alerts correlates to the actual time the email was removed from the inbox when the systemActionType states "REMOVED_FROM_INBOX"?
My question is specific to the "Gmail phishing" alert source (https://developers.google.com/admin-sdk/alertcenter/reference/alert-types). I have yet to see an endTime that is after the alert's createTime for Phishing reclassification, and a review of the alert-types page and its definitions makes me assume createTime is the correct time to use... however, that leaves me confused about why an endTime is being populated for these types.
Phishing reclassification: Unopened messages that are detected as phishing post-delivery are automatically reclassified and removed from the user's inbox.
createTime: Output only. The time this alert was created.
endTime: Optional. The time the event that caused this alert ceased being active. If provided, the end time must not be earlier than the start time. If not provided, it indicates an ongoing alert.
Sample Alert
"customerId": "<removed>",
"alertId": "<removed>",
"createTime": "2021-03-11T18:25:47.538082Z",
"startTime": "2021-03-11T13:19:50.374062Z",
"endTime": "2021-03-11T17:53:54.482936Z",
"type": "Phishing reclassification",
"source": "Gmail phishing",
"data": {
"#type": "type.googleapis.com/google.apps.alertcenter.type.MailPhishing",
"domainId": {
"customerPrimaryDomain": "<removed>"
},
"maliciousEntity": {
"fromHeader": "<removed>"
},
"messages": [
{
"messageId": "<removed>",
"md5HashMessageBody": "<removed>",
"md5HashSubject": "<removed>",
"attachmentsSha256Hash": [
"<removed>"
],
"recipient": "<removed>",
"date": "2021-03-11T13:19:50.374062Z"
}
],
"isInternal": true,
"systemActionType": "REMOVED_FROM_INBOX"
},
"metadata": {
"customerId": "<removed>",
"alertId": "<removed>",
"status": "NOT_STARTED",
"updateTime": "2021-03-11T18:25:47.538082Z",
"severity": "MEDIUM",
"etag": "<removed>"
}
API Link if you so desire: https://developers.google.com/admin-sdk/alertcenter/reference/rest/v1beta1/alerts
Answer:
In Phishing reclassification alerts, the date when each message was removed from the inbox (i.e. when it was reclassified) corresponds to the date field of each message:
{
    "data": {
        "messages": [
            {
                "date": "2021-03-11T13:19:50.374062Z"
            }
        ]
    }
}
You will notice that it corresponds to the date in startTime. That’s because startTime corresponds to the date when the first message in the report was reclassified (since that’s the first and only reclassified message in this alert).
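If you are consuming these alerts programmatically, here is a small Python sketch (assuming alert is the parsed alert JSON shown above) that pulls the removal time of each message:
from datetime import datetime

def removal_times(alert):
    """Return the reclassification (removal) time for each message."""
    if alert["data"].get("systemActionType") != "REMOVED_FROM_INBOX":
        return []
    # Each message's "date" is the time that message was reclassified;
    # the alert-level startTime matches the earliest of these.
    return [
        datetime.fromisoformat(msg["date"].replace("Z", "+00:00"))
        for msg in alert["data"]["messages"]
    ]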
Reported in Issue Tracker:
The documentation at GmailMessageInfo is not clear on this, since date corresponds to "The date the malicious email was sent" only in some alert types.
Therefore, I reported a documentation bug in Issue Tracker:
[Alert Center API] Description of date in GmailMessageInfo not meaningful for all alert types

Error when importing CSV file into Amazon Personalize

I am trying to import a CSV file into Amazon Personalize.
My schema looks like this:
{
    "type": "record",
    "name": "Items",
    "namespace": "com.amazonaws.personalize.schema",
    "fields": [
        {
            "name": "ITEM_ID",
            "type": "string"
        },
        {
            "name": "AUTHOR",
            "type": "string",
            "categorical": true
        },
        {
            "name": "COUNTRY",
            "type": "string",
            "categorical": true
        },
        {
            "name": "CITY",
            "type": "string",
            "categorical": true
        },
        {
            "name": "STYLES",
            "type": "string",
            "categorical": true
        },
        {
            "name": "CATEGORIES",
            "type": "string",
            "categorical": true
        }
    ],
    "version": "1.0"
}
the first few rows of data look like this:
ITEM_ID,AUTHOR,COUNTRY,CITY,STYLES,CATEGORIES
5b4253a7e12434f55875381e,5acd193f48ed4b9b3add5be6,US,city_us_austin,5ad45bc575eb016f3cdb562b|571aa21888a4fd9934f0fd7b|571aa21888a4fd9934f0fd79|5ad45e8c75eb016f3cdb563f|5b4ea35abaa12285687a1f47,593a866a082c26444eab2d3c|5a8e4820fc112d414fbc1be3
5b4253a7e12434f55875381f,5acd193f48ed4b9b3add5be6,US,city_us_jackson,571aa21888a4fd9934f0fd82|57600e419e4959cd069658eb|5ad45c3a75eb016f3cdb5631|571aa21888a4fd9934f0fd7b|57aaa7094a393f531ace43f0|575e6d8e34ca56f742bea1c8|571aa21888a4fd9934f0fd8f,593a866a082c26444eab2d3c|5a8e4820fc112d414fbc1be3
I get the error
Failed to create a data import job for item dataset.
Input csv has rows that do not conform to the dataset schema. Please ensure all required data fields are present and that they are of the type specified in the schema.
How can I figure out what is wrong with the CSV? It is thousands of lines long, so I have no idea whether it's a general mistake or something wrong on a specific line.
In my experience, as long as the dataset is not more than 250 thousand records, you can still use Excel to check the data using data filters and the corresponding search functions. If it's more than that, look into using Notepad++ and regex. Your problem may be one of the following things:
(1) There's a missing comma. This would misalign your data and keep it from being processed.
(2) There's a missing ITEM_ID value. For Items, Personalize requires ITEM_ID and at least one metadata field. It might give this error if there is an instance where you are missing ITEM_ID, or have ITEM_ID but no other metadata field values.
(3) STYLES and/or CATEGORIES exceeds 256 characters. There is probably a limit on string length, but I can't get a clear answer on this from the developer's guide; my guess is 256 characters. If I were betting money, this would be my guess for your problem.
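Rather than eyeballing thousands of lines, a short Python sketch can scan for all three issues at once; the 256-character cutoff mirrors my guess above, so treat it as an assumption you can tune:
import csv

MAX_LEN = 256  # assumed limit; not confirmed by the developer's guide

with open("items.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    for lineno, row in enumerate(reader, start=2):
        if len(row) != len(header):  # (1) missing or extra comma
            print(f"line {lineno}: expected {len(header)} columns, got {len(row)}")
            continue
        if not row[0].strip():  # (2) missing ITEM_ID
            print(f"line {lineno}: empty ITEM_ID")
        for col, value in zip(header, row):
            if len(value) > MAX_LEN:  # (3) overlong field
                print(f"line {lineno}: {col} is {len(value)} characters")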
Here is a different approach to solving the problem, which may be useful in other cases. I had the same issue, but when dealing with int columns that have null values. Pandas by default converts such columns to the float data type, which the AWS Personalize dataset import job will not accept if you have defined those columns as int or long. Long story short, converting these columns to a nullable integer type solves the problem:
import pandas as pd

df.column_name = df.column_name.astype(pd.Int32Dtype())

Range query for long type in aws elasticsearch

I am trying to query an Elasticsearch index in AWS to get all entries with a mass attribute greater than 1000; the datatype of the attribute is Long.
I found the range query and have tried it (see the example below), but it returns nothing. Other queries do return documents with mass greater than 1000, so they're definitely in the index.
This is the Range query I'm trying:
{
    "method": "POST",
    "index": "users",
    "type": "user",
    "path": "_search?filter_path=filter",
    "body": {
        "size": 20,
        "from": 0,
        "query": {
            "bool": {
                "must": [{
                    "range": {
                        "mass": {
                            "gte": 1000
                        }
                    }
                }]
            }
        }
    }
}
I'm not getting any error messages, just zero hits.
The problem that's causing you to get zero hits is the filter_path parameter you specify in
"path": "_search?filter_path=filter"
As stated in the official documentation, the filter_path parameter is part of the common options for the REST APIs. That means you can always add that parameter.
With response filtering you can reduce the response returned by Elasticsearch. Since you defined
_search?filter_path=filter
you get zero hits because there is no filter element in the response that can be returned.
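So, dropping the parameter (or pointing it at a field that actually exists in the response) should bring your hits back. As a rough Python sketch against a hypothetical AWS endpoint (authentication omitted for brevity):
import requests  # third-party: pip install requests

query = {
    "size": 20,
    "from": 0,
    "query": {
        "bool": {
            "must": [
                {"range": {"mass": {"gte": 1000}}}
            ]
        }
    },
}

# No filter_path here; if you want one, use a path that exists in the
# response, e.g. "?filter_path=hits.hits._source".
url = "https://<your-domain>.es.amazonaws.com/users/_search"
resp = requests.post(url, json=query)
print(len(resp.json()["hits"]["hits"]), "hits")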

How to retrieve user country using facebook Graph API?

I want to retrieve the logged-in user's country and insert it into a div.
I succeeded with the user's name, gender, and age range, but for some reason I can't retrieve the country.
Here is my code:
function testAPI() {
    console.log('Welcome! Fetching your information.... ');
    FB.api('/me?fields=name,email,gender,age_range,picture,country', function (response) {
        document.getElementById('status').innerHTML = response.name + ",";
        document.getElementById('status1').innerHTML = response.gender + ",";
        document.getElementById('status2').innerHTML = (response.age_range.min) + "-" + (response.age_range.min + 1) + " years old";
        document.getElementById('status3').innerHTML = response.country;
    });
}
Thanks!
To complete what Lix said:
You can do it with only one call to Graph API, by calling
/me?fields=name,email,gender,age_range,picture,location{location}
Indeed, the location object under the user object is a Page object, which has a Location object.
So, this will give you something like this:
"location": {
"location": {
"city": "Ramat Gan",
"country": "Israel",
"latitude": 32.0833,
"longitude": 34.8167,
"zip": "<<not-applicable>>"
}
}
This saves you from making two calls to the Graph API.
I believe the parameter you are looking for is location instead of country.
/me?fields=name,email,gender,age_range,picture,location
In order to query this data, you'll need your users to grant you the user_location permission.
This will give you the value of the user-submitted field. Take note that this parameter might not always be populated, since it depends on the user actually submitting this information; if they have not provided it, you will not be able to retrieve it.
The object will look something like this:
"location": {
"id": "112604772085346",
"name": "Ramat Gan"
},
Once you have the location object (which will most likely be a page), you can query that object to retrieve the country:
/112604772085346?fields=location
This will give you more information including the country.
{
    "location": {
        "city": "Ramat Gan",
        "country": "Israel",
        "latitude": 32.0833,
        "longitude": 34.8167,
        "zip": "<<not-applicable>>"
    },
    "id": "112604772085346"
}
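For completeness, the same two-step lookup can be done outside the JavaScript SDK. A Python sketch against the Graph API (the access token is a placeholder, and the user must have granted user_location):
import requests  # third-party: pip install requests

GRAPH = "https://graph.facebook.com/v2.11"
ACCESS_TOKEN = "<user-access-token>"  # placeholder

# Step 1: fetch the user's location page (needs user_location permission).
me = requests.get(
    f"{GRAPH}/me",
    params={"fields": "name,location", "access_token": ACCESS_TOKEN},
).json()
page_id = me["location"]["id"]

# Step 2: query that page for its location details, including the country.
page = requests.get(
    f"{GRAPH}/{page_id}",
    params={"fields": "location", "access_token": ACCESS_TOKEN},
).json()
print(page["location"]["country"])  # e.g. "Israel"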
Fields in the FB API are now nestable, so for detailed info on the user's location you need:
scope="public_profile,user_location"
fields="name,location{location{country, country_code, city, city_id, latitude, longitude, region, region_id, state, street, name}}"
v2.11 query:
/me?fields=hometown,location
permission:
user_hometown
user_location
result:
{
    "hometown": {
        "id": "XXXXREDACTED",
        "name": "Manila, Philippines"
    },
    "location": {
        "id": "XXXXREDACTED",
        "name": "Manila, Philippines"
    },
    "id": "XXXXREDACTED"
}