Application Insights Data Limited Importing Into Power BI

I'm attempting to import data from Azure Application Insights into Power BI. The issue is that, regardless of the timespan I set, I seem to be pulling only about a week's worth of data. Here's what the M query looks like:
let AnalyticsQuery =
    let Source = Json.Document(Web.Contents("https://api.applicationinsights.io/v1/apps/<uuid>/query",
        [Query=[#"query"="customEvents
| project customDimensions
",#"x-ms-app"="AAPBI",#"timespan"="P30D"],Timeout=#duration(0,0,60,0)])),
    TypeMap = #table(
        { "AnalyticsTypes", "Type" },
        {
            { "string",   Text.Type },
            { "int",      Int32.Type },
            { "long",     Int64.Type },
            { "real",     Double.Type },
            { "timespan", Duration.Type },
            { "datetime", DateTimeZone.Type },
            { "bool",     Logical.Type },
            { "guid",     Text.Type },
            { "dynamic",  Text.Type }
        }),
    DataTable = Source[tables]{0},
    Columns = Table.FromRecords(DataTable[columns]),
    ColumnsWithType = Table.Join(Columns, {"type"}, TypeMap, {"AnalyticsTypes"}),
    Rows = Table.FromRows(DataTable[rows], Columns[name]),
    Table = Table.TransformColumnTypes(Rows, Table.ToList(ColumnsWithType, (c) => { c{0}, c{3} }))
    in
        Table
in
    AnalyticsQuery
I was thinking this was a size issue, but I've already narrowed it down to a single column (albeit a wide one) and it's still not returning any more data.
Narrowing the returned dataset to two columns has increased the range to a few weeks instead of less than one, but I'm still looking for a bigger dataset. Here's the latest query:
let AnalyticsQuery =
    let Source = Json.Document(Web.Contents("https://api.applicationinsights.io/v1/apps/<uuid>/query",
        [Query=[#"query"="customEvents
| extend d=parse_json(customDimensions)
| project timestamp, d[""Properties""]
| order by timestamp desc
| where timestamp <= now() and d_Properties <> """"
",#"x-ms-app"="AAPBI"],Timeout=#duration(0,0,4,0)])),
    TypeMap = #table(
        { "AnalyticsTypes", "Type" },
        {
            { "string",   Text.Type },
            { "int",      Int32.Type },
            { "long",     Int64.Type },
            { "real",     Double.Type },
            { "timespan", Duration.Type },
            { "datetime", DateTimeZone.Type },
            { "bool",     Logical.Type },
            { "guid",     Text.Type },
            { "dynamic",  Text.Type }
        }),
    DataTable = Source[tables]{0},
    Columns = Table.FromRecords(DataTable[columns]),
    ColumnsWithType = Table.Join(Columns, {"type"}, TypeMap, {"AnalyticsTypes"}),
    Rows = Table.FromRows(DataTable[rows], Columns[name]),
    Table = Table.TransformColumnTypes(Rows, Table.ToList(ColumnsWithType, (c) => { c{0}, c{3} }))
    in
        Table,
    #"Sorted Rows" = Table.Sort(AnalyticsQuery, {{"timestamp", Order.Ascending}})
in
    #"Sorted Rows"

You should look into either table buffering or DirectQuery; see this discussion.
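If you want to rule out the service itself, it can also help to run the same query against the Application Insights REST API outside of Power BI and check how far back the data actually goes. A minimal Python sketch, assuming the requests package and an API key created under "API Access" in the portal (the app id and key below are placeholders):
import requests

APP_ID = "<app-id>"    # same UUID as in the M query above
API_KEY = "<api-key>"  # assumption: a key created under "API Access" in the portal

# Ask for 30 days of daily event counts so the span of returned data is obvious.
resp = requests.get(
    f"https://api.applicationinsights.io/v1/apps/{APP_ID}/query",
    headers={"x-api-key": API_KEY},
    params={
        "query": "customEvents | summarize count() by bin(timestamp, 1d) | order by timestamp asc",
        "timespan": "P30D",
    },
    timeout=240,
)
resp.raise_for_status()

# The response has the same tables/columns/rows shape that the M query parses.
rows = resp.json()["tables"][0]["rows"]
print(len(rows), "daily buckets returned")
print("earliest:", rows[0][0], "latest:", rows[-1][0])
If the REST call returns the full 30 days, the truncation is happening on the Power BI side of the import, which is where buffering or a narrower projection helps.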


Removing the "reserved keyword" from dynamodb update item node js

I am able to SET values for reserved keywords, but I am not able to REMOVE certain attributes in DynamoDB. My expression is below. With this I cannot remove data, although removing size works. The error is:
Invalid UpdateExpression: Attribute name is a reserved keyword; reserved keyword:data
{
ConditionExpression: "#id = :id",
ExpressionAttributeNames: {
"#id": "id",
"#name": "name",
},
ExpressionAttributeValues: {
":id": "1234566",
":name": "John",
},
Key: {
id: "1234566",
},
ReturnValues: "ALL_NEW",
TableName: "table_name",
UpdateExpression: "SET #name = :name REMOVE size, data",
}
I modified it to include data in ExpressionAttributeNames as below. It still throws the error.
{
ConditionExpression: "#id = :id",
ExpressionAttributeNames: {
"#id": "id",
"#name": "name",
"#size": "size",
"#data": "data",
},
ExpressionAttributeValues: {
":id": "1234566",
":name": "John",
":size": undefined,
":data": undefined,
},
Key: {
id: "1234566",
},
ReturnValues: "ALL_NEW",
TableName: "table_name",
UpdateExpression: "SET #name = :name, #size = :size, #data = :data REMOVE size, data",
}
Remove size and data from ExpressionAttributeValues and from the SET clause of the UpdateExpression. Add #size and #data to the REMOVE clause of the UpdateExpression instead of size and data.
{
ConditionExpression: "#id = :id",
ExpressionAttributeNames: {
"#id": "id",
"#name": "name",
"#size": "size",
"#data": "data",
},
ExpressionAttributeValues: {
":id": "1234566",
":name": "John"
},
Key: {
id: "1234566",
},
ReturnValues: "ALL_NEW",
TableName: "table_name",
UpdateExpression: "SET #name = :name REMOVE #size, #data",
}
Reserved words in DynamoDB
To do that dynamically, you only need to provide a JSON object to this function. It dynamically creates:
-ExpressionAttributeNames
-ExpressionAttributeValues
-UpdateExpression
Because every attribute is aliased with #, it also sidesteps the reserved-keyword problem in DynamoDB.
def generate_update_att(body):
    """
    Input:
    body = {
        'newval': newData['newval'],
        'lastname': newData['lastname'],
        'name': newData['name']
    }
    Output (in return order):
    - UpdateExpression
    - ExpressionAttributeValues
    - ExpressionAttributeNames
    """
    UpdateExpression = ["set "]
    ExpressionAttributeValues = dict()
    ExpressionAttributeNames = dict()
    for key, val in body.items():
        UpdateExpression.append(f" #{key} = :{key},")
        ExpressionAttributeValues[f":{key}"] = val
        ExpressionAttributeNames[f"#{key}"] = key
    return "".join(UpdateExpression)[:-1], ExpressionAttributeValues, ExpressionAttributeNames
Function call:
my_data = {
    'newval': newData['newval'],
    'lastname': newData['lastname'],
    'name': newData['name']
}
a, v, z = generate_update_att(my_data)
my_table.update_item(
    Key={'id': my_id},
    UpdateExpression=a,
    ExpressionAttributeValues=v,
    ExpressionAttributeNames=z
)
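Note that the helper above only builds the SET clause. For the REMOVE case from the original question you still have to alias the reserved words, exactly as in the accepted answer. A minimal boto3 sketch of that same update (the table name, key, and values are placeholders, not code from either answer):
import boto3

table = boto3.resource("dynamodb").Table("table_name")  # placeholder table name

# REMOVE takes attribute names only, so the reserved words "size" and "data" are
# aliased via ExpressionAttributeNames and no values are supplied for them.
table.update_item(
    Key={"id": "1234566"},
    UpdateExpression="SET #name = :name REMOVE #size, #data",
    ExpressionAttributeNames={"#name": "name", "#size": "size", "#data": "data"},
    ExpressionAttributeValues={":name": "John"},
    ReturnValues="ALL_NEW",
)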

How can I get distinct values for the area.names using graphene?

My resolver in schema.py looks like this:
def resolve_areas(self, info, **kwargs):
    result = []
    dupfree = []
    user = info.context.user
    areas = BoxModel.objects.filter(client=user, active=True).values_list('area_string', flat=True)
In GraphiQL I am using this query:
{
  areas {
    edges {
      node {
        id
        name
      }
    }
  }
}
And I get output that starts like this:
{
  "data": {
    "areas": {
      "edges": [
        {
          "node": {
            "id": "QXJlYTpkZWZ",
            "name": "default"
          }
        },
        {
          "node": {
            "id": "QXJlYTptZXN",
            "name": "messe"
          }
        },
        {
          "node": {
            "id": "QXJlYTptZXN",
            "name": "messe"
          }
        },
But I want distinct values on the name variable.
(I'm using a MySQL database, so distinct() does not work.)
SOLVED:
distinct() was not working, so I just wrote a short loop that tracks the name strings already seen in a duplicates list and only appends the whole "area" object if its name has not been added to that list yet.
result = []
dupl_counter = []
for area in areas:
    if area not in dupl_counter:
        dupl_counter.append(area)
        result.append(Area(name=area))
        print(area)
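If the result set grows large, the same dedup can track the names it has already seen in a set instead of a list, which keeps each membership check O(1). A small sketch assuming areas and Area are the same objects as in the resolver above:
def unique_areas(area_names):
    # Same idea as the loop above, but duplicates are detected with a set.
    seen = set()
    result = []
    for name in area_names:
        if name not in seen:
            seen.add(name)
            result.append(Area(name=name))
    return result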

Azure Cosmos query to convert into List

This is my JSON data, which is stored in Cosmos DB:
{
  "id": "e064a694-8e1e-4660-a3ef-6b894e9414f7",
  "Name": "Name",
  "keyData": {
    "Keys": [
      "Government",
      "Training",
      "support"
    ]
  }
}
Now I want to write a query to eliminate the keyData and get only the Keys (like below)
{
  "userid": "e064a694-8e1e-4660-a3ef-6b894e9414f7",
  "Name": "Name",
  "Keys": [
    "Government",
    "Training",
    "support"
  ]
}
So far I have tried a query like:
SELECT c.id, k.Keys FROM c
JOIN k IN c.keyPhraseBatchResult
which is not working.
Update 1:
After trying Sajeetharan's suggestion I am now able to get the result, but the issue is that it produces another JSON object inside the array, like:
{
  "id": "ee885fdc-9951-40e2-b1e7-8564003cd554",
  "keys": [
    {
      "serving": "Government"
    },
    {
      "serving": "Training"
    },
    {
      "serving": "support"
    }
  ]
}
Is there any way to extract only the array without having the key/value pair again?
{
  "userid": "e064a694-8e1e-4660-a3ef-6b894e9414f7",
  "Name": "Name",
  "Keys": [
    "Government",
    "Training",
    "support"
  ]
}
You could try this one; SELECT VALUE returns the bare string instead of a {"serving": ...} object, which is what was producing the extra key/value pairs:
SELECT C.id, ARRAY(SELECT VALUE serving FROM serving IN C.keyData.Keys) AS Keys FROM C
Please use a Cosmos DB stored procedure to produce your desired format, based on Sajeetharan's SQL.
function sample() {
    var collection = getContext().getCollection();
    var isAccepted = collection.queryDocuments(
        collection.getSelfLink(),
        'SELECT C.id, ARRAY(SELECT serving FROM serving IN C.keyData.Keys) AS keys FROM C',
        function (err, feed, options) {
            if (err) throw err;
            if (!feed || !feed.length) {
                var response = getContext().getResponse();
                response.setBody('no docs found');
            }
            else {
                var response = getContext().getResponse();
                var map = {};
                for (var i = 0; i < feed.length; i++) {
                    var keyArray = feed[i].keys;
                    var array = [];
                    for (var j = 0; j < keyArray.length; j++) {
                        array.push(keyArray[j].serving);
                    }
                    feed[i].keys = array;
                }
                response.setBody(feed);
            }
        });
    if (!isAccepted) throw new Error('The query was not accepted by the server.');
}

Swift: Storing/Appending Multiple Dictionaries into an Array

I want to store multiple dictionaries into an array so that the final result looks like this:
(
    {
        id: 12,
        task: completed
    },
    {
        id: 15,
        task: error
    },
    {
        id: 17,
        task: pending
    }
)
I tried the code below, but it does not give me what I want. Please can someone help me out? Thanks.
var FinalTaskData = [[String: AnyObject]]()
for i in 0..<taskObj.count {
    let dict = ["id": taskObj[i].id!, "task": taskObj[i].task!] as [String: AnyObject]
    FinalTaskData.append(dict)
}
And this gives me the output of
(
    {
        id = 190;
    },
    {
        task = "Task To Be Edited";
    },
    {
        id = 191;
    },
    {
        task = "Also To Be Edited";
    }
)
Which is not what I want. Thanks

Using Elastic Search Geo Functionality To Find Most Common Locations?

I have a geojson file containing a list of locations each with a longitude, latitude and timestamp. Note the longitudes and latitudes are multiplied by 10000000.
{
  "locations": [ {
    "timestampMs": "1461820561530",
    "latitudeE7": -378107308,
    "longitudeE7": 1449654070,
    "accuracy": 35,
    "junk_i_want_to_save_but_ignore": [ { .. } ]
  }, {
    "timestampMs": "1461820455813",
    "latitudeE7": -378107279,
    "longitudeE7": 1449673809,
    "accuracy": 33
  }, {
    "timestampMs": "1461820281089",
    "latitudeE7": -378105184,
    "longitudeE7": 1449254023,
    "accuracy": 35
  }, {
    "timestampMs": "1461820155814",
    "latitudeE7": -378177434,
    "longitudeE7": 1429653949,
    "accuracy": 34
  }
  ..
Many of these locations will be the same physical location (e.g. the user's home), but obviously the longitudes and latitudes may not be exactly the same.
I would like to use Elasticsearch and its geo functionality to produce a ranked list of the most common locations, where locations are deemed to be the same if they are within, say, 100 m of each other.
For each common location I'd also like the list of all timestamps at which the user was at that location, if possible!
I'd very much appreciate a sample query to get me started!
Many thanks in advance.
In order to make it work you need to modify your mapping like this:
PUT /locations
{
  "mappings": {
    "location": {
      "properties": {
        "location": {
          "type": "geo_point"
        },
        "timestampMs": {
          "type": "long"
        },
        "accuracy": {
          "type": "long"
        }
      }
    }
  }
}
Then, when you index your documents, you need to divide the latitude and longitude by 10000000, and index like this:
PUT /locations/location/1
{
  "timestampMs": "1461820561530",
  "location": {
    "lat": -37.8103308,
    "lon": 14.4967407
  },
  "accuracy": 35
}
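If the exported file contains many points, the same divide-and-index step can be done programmatically rather than one PUT at a time. A hedged Python sketch, assuming the official elasticsearch client, a local locations.json file, and the index created with the mapping above (the Logstash pipeline further down achieves the same thing):
import json
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")  # adjust host/auth for your cluster

with open("locations.json") as f:            # assumption: the exported file shown above
    locations = json.load(f)["locations"]

def actions():
    for i, loc in enumerate(locations):
        yield {
            "_index": "locations",
            "_id": i,
            "_source": {
                "timestampMs": int(loc["timestampMs"]),
                "location": {
                    "lat": loc["latitudeE7"] / 10000000.0,  # undo the E7 scaling
                    "lon": loc["longitudeE7"] / 10000000.0,
                },
                "accuracy": loc.get("accuracy"),
            },
        }

helpers.bulk(es, actions())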
Finally, your search query below...
POST /locations/location/_search
{
  "aggregations": {
    "zoomedInView": {
      "filter": {
        "geo_bounding_box": {
          "location": {
            "top_left": "-37, 14",
            "bottom_right": "-38, 15"
          }
        }
      },
      "aggregations": {
        "zoom1": {
          "geohash_grid": {
            "field": "location",
            "precision": 6
          },
          "aggs": {
            "ts": {
              "date_histogram": {
                "field": "timestampMs",
                "interval": "15m",
                "format": "DDD yyyy-MM-dd HH:mm"
              }
            }
          }
        }
      }
    }
  }
}
...will yield the following result:
{
  "aggregations": {
    "zoomedInView": {
      "doc_count": 1,
      "zoom1": {
        "buckets": [
          {
            "key": "k362cu",
            "doc_count": 1,
            "ts": {
              "buckets": [
                {
                  "key_as_string": "Thu 2016-04-28 05:15",
                  "key": 1461820500000,
                  "doc_count": 1
                }
              ]
            }
          }
        ]
      }
    }
  }
}
UPDATE
According to our discussion, here is a solution that could work for you. Using Logstash, you can call your API and retrieve the big JSON document (using the http_poller input), extract/transform all locations and sink them to Elasticsearch (with the elasticsearch output) very easily.
Here is how it goes in order to format each event as depicted in my initial answer.
Using http_poller you can retrieve the JSON locations (note that I've set the polling interval to 1 day, but you can change that to some other value, or simply run Logstash manually each time you want to retrieve the locations)
Then we split the locations array into individual events
Then we divide the latitude/longitude fields by 10,000,000 to get proper coordinates
We also need to clean it up a bit by moving and removing some fields
Finally, we just send each event to Elasticsearch
Logstash configuration locations.conf:
input {
  http_poller {
    urls => {
      get_locations => {
        method => get
        url => "http://your_api.com/locations.json"
        headers => {
          Accept => "application/json"
        }
      }
    }
    request_timeout => 60
    interval => 86400000
    codec => "json"
  }
}
filter {
  split {
    field => "locations"
  }
  ruby {
    code => "
      event['location'] = {
        'lat' => event['locations']['latitudeE7'] / 10000000.0,
        'lon' => event['locations']['longitudeE7'] / 10000000.0
      }
    "
  }
  mutate {
    add_field => {
      "timestampMs" => "%{[locations][timestampMs]}"
      "accuracy" => "%{[locations][accuracy]}"
      "junk_i_want_to_save_but_ignore" => "%{[locations][junk_i_want_to_save_but_ignore]}"
    }
    remove_field => [
      "locations", "@timestamp", "@version"
    ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "locations"
    document_type => "location"
  }
}
You can then run with the following command:
bin/logstash -f locations.conf
When that has run, you can launch your search query and you should get what you expect.