I am playing with externalTime / externalTimeBatch in order to calculate the average value of events that happen within a certain time interval, as below:
from sensorStream#window.externalTimeBatch(meta_timestamp, 60 sec, meta_timestamp, 60 sec) [sensorValue > 100]
select meta_timestamp, avg(sensorValue) as sensorValue
insert into filteredStream
The issue I am having is that the average is always calculated over all events from the beginning, rather than getting reset on the time interval.
What's the best way to use it?
Thanks.
The query below seems to do the proper averaging over a tumbling time interval. It worked properly once I changed the window's start-time argument (the third parameter, which I had set to my timestamp attribute "meta_timestamp") to 0.
from sensorStream#window.externalTimeBatch(meta_timestamp, 1 min, 0, 1 min) [sensorValue > 100]
select meta_timestamp, meta_sensorName, correlation_longitude, correlation_latitude, avg(sensorValue) as sensorValue
insert current events into filteredStream
Example POST message sent for testing:
{
"event":
{
"metaData":
{
"timestamp":1514801340000,
"isPowerSaverEnabled": false,
"sensorId": 701,
"sensorName": "temperature"
},
"correlationData":
{
"longitude": 4.504343,
"latitude": 20.44345
},
"payloadData":
{
"humidity": 2.3,
"sensorValue": 150
}
}
}
Thanks for listening!!
I have an API integration for our web store running on AWS Lambda. It returns live delivery quotes based on the customer's address, and then creates the delivery order with a third-party delivery-as-a-service provider when the invoice is completed (paid).
I was able to add a time restriction for Monday-Saturday, but Sunday has different hours and is not working. Here is the relevant code:
'use strict'
/**
 * This function is used to generate quotes for the client's 1st warehouse
 */
exports.handler = function (event, context, callback) {
  console.log('-------------------EVENT OBJECT--------------------------')
  // console.log(event.body.shipping_address)
  console.log(event)
  try {
    const app = require('./app')
    const EventEmitter = require('events').EventEmitter
    const _bus = new EventEmitter()
    let date = new Date()
    if (date.getDay() == 0) {
      if (!(date.getHours() >= 17 && date.getHours() <= 22) || !(date.getHours() < 3)) {
        callback(null, {
          message: 'The store is closed'
        })
      }
    } else {
      if (date.getHours() >= 3 && date.getHours() <= 15) {
        callback(null, {
          message: 'The store is closed'
        })
      }
    }
    let _shipmentReturn = []
    let _shipmentReturnError = []
  }
  catch (e) {
  }
}
Be very careful when using NOT logic.
Your 'normal' days have the store closed from 3am to 4pm. (Yes, 4pm. That's because you only check whole hours, so at 3:59pm getHours() is still 15, which passes the <= 15 check, so the store is closed.)
On Sunday, it looks like you intend the store to be open from 5pm to 10:59pm (plus midnight to 2:59am), i.e. closed from 3am to 4:59pm and from 11pm to midnight.
Take a look at this line:
if (!(date.getHours() >= 17 && date.getHours() <= 22) || !(date.getHours() < 3)) {
Let's pick a time of 2am. It equates to:
if (!(FALSE) || !(TRUE))
This equals TRUE, so the store is closed.
Same for 4am: if (!(FALSE) || !(FALSE)) also equals TRUE
You possibly want an AND rather than an OR in those logic statements.
I would also recommend that you convert the UTC times into your "local" times, which would make it easier for you to write the logic. This will avoid errors where UTC Sunday does not actually align to your 'local' Sunday. For example, if you are UTC-6, then 2am UTC Sunday is not Sunday in your timezone.
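To make both suggestions concrete, here is a minimal sketch of the corrected check using && plus a manual UTC shift. The helper name and the UTC-6 offset are my own assumptions for illustration, not from the original code:

function isStoreClosed (now = new Date()) {
  // Shift to "local" time so the UTC getters reflect local values.
  // UTC-6 is only an assumed example offset.
  const local = new Date(now.getTime() - 6 * 60 * 60 * 1000)
  const day = local.getUTCDay()
  const hour = local.getUTCHours()
  if (day === 0) {
    // Sunday: note the && -- closed only when BOTH open windows
    // (5pm-10pm and midnight-3am, per the question's checks) miss.
    return !(hour >= 17 && hour <= 22) && !(hour < 3)
  }
  // Monday-Saturday: closed from 3am through the 3pm hour.
  return hour >= 3 && hour <= 15
}

console.log(isStoreClosed()) // e.g. true outside the opening hours above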
Sorry if the question is poorly worded. Here is my chart.
I am looking into scaling the chart's display of the datasets' values as percentages, such as:
// input
data: {
    datasets: [{
        label: 'data1',
        data: [15, 22, 18, 35, 16, 29, 40]
    },
    {
        label: 'data2',
        data: [20, 21, 20, 19, 21, 22, 35]
    }]
}
data1's points on the chart would be displayed as [42.9, 51.2, 47.4, 64.8, 43.2, 56.9, 57.1]
data2's points on the chart would be displayed as [57.1, 48.8, 52.6, 35.2, 56.8, 43.1, 42.9]
(Each point is its value divided by the sum of all visible values at that index, times 100; e.g. 15 / (15 + 20) * 100 = 42.9.)
It should look like this. All visible lines should stack up to 100%. If a dataset is hidden, how can I recalculate the percentages and update the chart so that everything still stacks up to 100%?
I thought about doing a plugin where I do the calculation using myLine.data.datasets, but then I don't know how to remove a hidden dataset's values from the calculation, and I'm not sure how to display the result without overwriting the original datasets. I'm pretty sure this is the wrong approach.
Any help would be greatly appreciated.
So, I figured it out. I needed to write a function that calculates the percentage area of the points at each index, and then update the datasets with the calculated percentage values.
/*+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
*
* DS_update calculates the percentage area of the input datasets
*
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++*/
function DS_update(dataset_in, ds_vis){
    // make a deep copy (no references to the source)
    var temp = jQuery.extend(true, [], dataset_in);
    // gets the sum of all visible datasets at a given index
    function getTotal(index){
        var total = 0;
        // step through the datasets
        dataset_in.forEach(function(e, i){
            // include the value only if the dataset is visible
            if(ds_vis[i]){
                total += e[index];
            }
            // do nothing if the dataset is hidden
        });
        return total;
    }
    // update the temp array with the calculated percentage values
    temp.forEach(function(el, ind){
        el.forEach(function(e, i){
            // calculate the percentage to the hundredths place
            temp[ind][i] = Math.round((e / getTotal(i)) * 10000) / 100;
        });
    });
    return temp;
}
Once I had tested the function, I had to run it before the initial load of the chart, or else the user would see the datasets as raw data rather than area-percent. That looks something like this:
// Keep source array to use in the tool tips
var Src_ary = Input_data; // multidimensional array of input data
// holds the percent-area calculations as datapoints
var Prod_ary = DS_update(Src_ary, Init_visible(Src_ary));
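Init_visible isn't shown above; here is a minimal sketch of what it might look like, assuming every dataset starts out visible (the name and behavior are my assumption, not taken from the original post):

// Hypothetical helper: returns an all-true visibility array, one entry
// per dataset, since every dataset is visible before any legend clicks.
function Init_visible(dataset_in) {
    return dataset_in.map(function () { return true; });
}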
Next up was updating the onClick for the legend. I need this to update the calculations every time an item's visibility is toggled:
legend: {
    position: 'bottom',
    usePointStyle: true,
    onClick: function(e, legendItem){
        var index = legendItem.datasetIndex;
        var ci = this.chart;
        var meta = ci.getDatasetMeta(index);
        var vis_ary = [];
        var updatedSet = [];
        // See controller.isDatasetVisible comment
        meta.hidden = meta.hidden === null ? !ci.data.datasets[index].hidden : null;
        // load the visibility array
        for(var i = 0; i < (ci.data.datasets || []).length; i++){
            switch (ci.getDatasetMeta(i).hidden){
                case null:
                    vis_ary.push(true);
                    break;
                default:
                    vis_ary.push(false);
                    break;
            }
        }
        // update the datasets, using vis_ary to tell us which sets are visible
        updatedSet = DS_update(Prod_ary, vis_ary);
        myLine.data.datasets.forEach(function (e, i){
            e.data = updatedSet[i];
        });
        // We did stuff ... rerender the chart
        ci.update();
    }
}
END RESULT
This is what I was trying to do: highchart fiddle
This is what I ended up with: fiddle
It took a few days and a lot of reading through chartjs.org's documentation to put this together. In the end I think it came out pretty good considering I am new to chart.js and borderline illiterate with javascript.
I'm trying to run a task each day at 8:00 AM in a vibe.d web app.
For the moment, I use the setTimer function with the periodic parameter set to true. But this way, I can't control exactly the hour at which the task will be triggered. Is there an easy way to do this in vibe.d?
Thank you sigod, that is exactly what I've done. I calculate the time until the next 8:00 AM and call setTimer. Here is the code for further reference:
import std.datetime; // Clock, DateTime, TimeOfDay, Duration, dur, days
import vibe.core.core : setTimer;

void startDailyTaskAtTime(TimeOfDay time, void delegate() task) {
    // Get the current date and time
    DateTime now = cast(DateTime)Clock.currTime();
    // Get the next occurrence of the given time of day
    DateTime nextOcc = now;
    if (now.timeOfDay >= time) {
        nextOcc += dur!"days"(1);
    }
    nextOcc.timeOfDay = time;
    // Get the duration between now and the next occurrence
    Duration timeBeforeNextOcc = nextOcc - now;
    void setDailyTask() {
        // Run the task once
        task();
        // Then run it on all subsequent days at the same time
        setTimer(1.days, task, true);
    }
    setTimer(timeBeforeNextOcc, &setDailyTask);
}
// Usage (sketch): startDailyTaskAtTime(TimeOfDay(8, 0, 0), () { /* daily work */ });
I have an app, written in C++ with 16 threads, which reads the output of wireshark/tshark. Wireshark/tshark dissects pcap files containing gsm_map signalling captures.
MongoDB is 2.6.7.
The structure I need for my documents is like this:
Note that "packet" is an array; it will become apparent why later.
For all who don't know TCAP, the TCAP layer is transaction-oriented, this means, all packets include:
Transaction State: begin/continue/end
Origin transaction ID (otid)
Destination transaction ID (dtid)
So, for instance, you might see a transaction comprising 3 packets. The example below, looking only at the TCAP layer, shows two packets: one "begin", one "end".
{
"_id" : ObjectId("54ccd186b8ea19c89ee8f231"),
"deleted" : "0",
"packet" : {
"datetime" : ISODate("2015-01-31T12:58:11.939Z"),
"signallingType" : "M2PA",
"opc" : "326",
"dpc" : "6406",
"transState" : "begin",
"otid" : "M2PA0400435B",
"dtid" : "",
"sccpCalling" : "523332075100",
"sccpCalled" : "523331466304",
"operation" : "mo-forwardSM (46)",
...
}
}
{
"_id" : ObjectId("54ccd1a1b8ea19c89ee8f7c5"),
"deleted" : "0",
"packet" : {
"datetime" : ISODate("2015-01-31T12:58:16.788Z"),
"signallingType" : "M2PA",
"opc" : "6407",
"dpc" : "326",
"transState" : "end",
"otid" : "",
"dtid" : "M2PA0400435B",
"sccpCalling" : "523331466304",
"sccpCalled" : "523332075100",
"operation" : "Not Found",
...
}
}
Because of the network architecture, we're tracing at two (2) points, and the traffic is balanced between these two points. This means we sometimes see "continue"s or "end"s BEFORE a "begin". Conversely, we might see a "continue" BEFORE a "begin" or "end". In short, transactions are not ordered.
Moreover, multiple end-points are "talking" amongst themselves, and transaction IDs might get duplicated: 2 endpoints could be using the same tid as another 2 endpoints at the same time. Though this doesn't happen all the time, it does happen.
Because of the latter, I also need to use the SCCP layer's "calling" and "called" Global Titles (like phone numbers).
Bear in mind that I don't know which way a given packet is going, so this is what I'm doing:
Whenever I get a new packet, I must find out whether the transaction already exists in MongoDB; I'm using an upsert to do this.
I do this by searching for the current packet's otid or dtid in either the otid or dtid of existing packets.
If it does exist: push the new packet into the existing document.
If it doesn't: create a new document with the packet.
As an example, this is an upsert for an "end" which should find a "begin":
db.runCommand(
{
update: "packets",
updates:
[
{ q:
{ $and:
[
{
$or: [
{ "packet.otid":
{ $in: [ "M2PA042e3918" ] }
},
{ "packet.dtid":
{ $in: [ "M2PA042e3918" ] }
}
]
},
{
$or: [
{ "packet.sccpCalling":
{ $in: [ "523332075151", "523331466305" ] }
},
{ "packet.sccpCalled":
{ $in: [ "523332075151", "523331466305" ] }
}
]
}
]
},
{
$setOnInsert: {
"unique-id": "422984b6-6688-4782-9ba1-852a9fc6db3b", deleted: "0"
},
$push: {
packet: {
datetime: new Date(1422371239182),
opc: "327", dpc: "6407",
transState: "end",
otid: "", dtid: "M2PA042e3918", sccpCalling: "523332075151", ... }
}
},
upsert: true
}
],
writeConcern: { j: "1" }
}
)
Now, all of this works, until I put it in production.
It seems packets are coming way too fast and I see lots of:
"ClientCursor::staticYield can't unlock b/c of recursive lock" warnings
I read that we can ignore this warning, but I've found that my upserts DO NOT update the documents! It looks like there's a lock and MongoDB forgets about the update. If I change the upsert to a simple insert, no packets are lost.
I also read this is related to no indexes being used. I have the following index:
"3" : {
"v" : 1,
"key" : {
"packet.otid" : 1,
"packet.dtid" : 1,
"packet.sccpCalling" : 1,
"packet.sccpCalled" : 1
},
"name" : "packet.otid_1_packet.dtid_1_packet.sccpCalling_1_packet.sccpCalled_1",
"ns" : "tracer.packets"
So in conclusion:
1.- If this index is not correct, can someone please help me create the correct index?
2.- Is it normal that mongo would NOT update a document if it finds a lock?
Thanks and regards!
David
Why are you storing all of the packets in an array? Normally in this kind of situation it's better to make each packet a document of its own; it's hard to say more without more information about your use case (or, perhaps, more knowledge of all these acronyms you're using :D). Your updates would become inserts and you would not need to do the update query. Instead, some other piece of metadata on a packet would join related packets together, so you could reconstruct a transaction or whatever you need to do.
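For illustration only, a sketch of that per-packet design (the transactionKey field and its format are my invention, not something from your schema):

// Hypothetical: one document per packet; a derived key joins related packets.
// Inserts never depend on a prior document existing, unlike the upsert.
db.packets.insert({
    transactionKey: "M2PA042e3918|523331466305|523332075151", // tid + sorted GTs
    datetime: new Date(1422371239182),
    transState: "end",
    otid: "", dtid: "M2PA042e3918",
    sccpCalling: "523332075151", sccpCalled: "523331466305"
})
// Later, reconstruct the transaction with a simple indexed find:
db.packets.find({ transactionKey: "M2PA042e3918|523331466305|523332075151" })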
More directly addressing your question, I would use an array field tids to store [otid, dtid] and an array field sccps to store [sccpCalling, sccpCalled], which would make your update query look like
{ "tids" : { "$in" : ["M2PA042e3918"] }, "sccps" : { "$in" : [ "523332075151", "523331466305" ] } }
and amenable to the index { "tids" : 1, "sccps" : 1 }.
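Put together, a minimal sketch of that approach might look like this (untested; the tids/sccps field names come from the suggestion above, everything else mirrors your original upsert):

// Create the suggested compound index (ensureIndex is the 2.6-era API)
db.packets.ensureIndex({ tids: 1, sccps: 1 })

// The upsert, now matching on the two array fields
db.packets.update(
    {
        tids: { $in: ["M2PA042e3918"] },
        sccps: { $in: ["523332075151", "523331466305"] }
    },
    {
        $setOnInsert: {
            "unique-id": "422984b6-6688-4782-9ba1-852a9fc6db3b",
            deleted: "0",
            tids: ["", "M2PA042e3918"],               // [otid, dtid]
            sccps: ["523332075151", "523331466305"]   // [calling, called]
        },
        $push: {
            packet: {
                datetime: new Date(1422371239182),
                transState: "end",
                otid: "", dtid: "M2PA042e3918",
                sccpCalling: "523332075151", sccpCalled: "523331466305"
            }
        }
    },
    { upsert: true }
)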
I am experimenting with Dojo, using a DataGrid/JsonRestStore against a REST service implemented using Django/tastypie.
It seems that the JsonRestStore expects the data to arrive as a pure array, whilst tastypie returns the dataset within a structure containing "meta" and "objects".
{
"meta": {"limit": 20, "next": null, "offset": 0, "previous": null, "total_count": 1},
"objects": [{...}]
}
So, what I need is to somehow attach to the "objects" part.
What is the most sensible way to achieve this?
Oyvind
Untested, but you might try creating a custom store that inherits from JsonRestStore and overrides the internal _processResults method. It's a two-liner in the Dojo 1.7 code base, so you can implement your own behavior quite simply.
_processResults: function(results, deferred){
var count = results.objects.length;
return {totalCount: deferred.fullLength || (deferred.request.count == count ? (deferred.request.start || 0) + count * 2 : count), items: results.objects};
}
See lines 414-417 of the dojox/data/JsonRestStore.js for reference.
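Also untested, but wiring that override into a subclass might look roughly like this; the store name my.TastypieRestStore is made up, and taking the total count from tastypie's meta.total_count (shown in the question's response) is my assumption:

dojo.require("dojox.data.JsonRestStore");

dojo.declare("my.TastypieRestStore", dojox.data.JsonRestStore, {
    _processResults: function(results, deferred){
        // tastypie wraps the rows in "objects" and reports the total in "meta"
        var count = results.objects.length;
        return {
            totalCount: results.meta ? results.meta.total_count : count,
            items: results.objects
        };
    }
});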
I don't know whether this will be helpful for you or not: http://jayapal-d.blogspot.in/2009/08/dojo-datagrid-with-editable-cells-in.html