RRD Time since last non-change of counter - rrdtool

I have an RRD DCOUNTER which gets its data from a water meter: so many units since the start of the program that reads the meter.
So the input might be 2,3,4,5,5,5,5,8,12,13,13,14,14,14,14,14
That means the flow is 1,1,1,0,0,0,0,3,4,1,0,1,0,0,0,0,0
I want a graph showing minutes since last rest
0,1,2,0,1,2,3,0,0,0,0,0,0,1,2,3,4,5
If the flow is never zero, there must be a leak.
Hopefully, the graph will rise steadily from bedtime to wakeup, and from leaving for work to coming back.
Ideas?

First, set up your input data source as a COUNTER type, so that you will be storing the changes, i.e. the flow.
Now, you can define a calculated datasource (for graphs etc) that counts the minutes since the last zero, using something like:
IF ( flow == 0 )
THEN
timesincerest = 0
ELSE
timesincerest = previous value of timesincerest + 1
END
In RPN, that would be:
timesincerest = flow, 0, GT, PREV(timesincerest), STEPWIDTH, +, 0, IF
This will give you a count of the number of seconds since the last reset.
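When graphing, the CDEF might look roughly like this (a sketch only: the file name water.rrd and the DS name flow are assumptions; plain PREV is used for the previous value of the CDEF itself, with a UN test to substitute 0 for the unknown previous value at the very start of the data set):

rrdtool graph water.png \
    DEF:flow=water.rrd:flow:AVERAGE \
    CDEF:timesincerest=flow,0,GT,PREV,UN,0,PREV,IF,STEPWIDTH,+,0,IF \
    LINE1:timesincerest#0000ff:"seconds since the flow was last zero"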

C++: Create a time remaining estimate when data calcs get progressively longer?

I'm adding items to a list, so each insert takes just a bit longer than the last (this is a requirement, assume you can't change that). I've manually timed a sample dataset on MY computer but I want a generalized way to predict the time on any computer, and given ANY dataset size.
In my flailing around trying to figure this out, what I have collected is a vector, 100 long, of "how long 1/100th of the sample data took". In my example data set I have 237,965 objects, which means each bucket in the vector of times tells how long it took to add 2,379 items.
Here's a link to the sample data of 100 items. You can see the first 2k items took about 8 seconds and the last 2k items took about 101 seconds. Altogether, if you add up all the time, that's 4,295 seconds, or about 1 hr 11 minutes.
So my question is: given this data set, and using it for future predictions, how do I estimate the remaining time when adding a data set of a different size?
In more flailing, I made some plots, wondering if they could help. The first plot is just the raw data on a log graph:
I then made a second data set based on the first, this time showing accumulated time rather than just the time for the current slice, and plotted that on a linear graph:
Notice the lovely trend line formula? That MUST be something I just need to somehow plug into my code, but I can't for the life of me figure out how.
Should I have instead gathered the data into time slices and not index slices? I.e., I KNOW this data takes 1:10 to load, so take snapshots every 1/100th of that duration, instead of snapshotting every 1/100th of the data set?
Or HOW do I figure this out?
The function I need to write has this API:
CFTimeInterval get_estimated_end_time(int maxI, int curI, CFTimeInterval elapsedT);
So given only those three variables (maxI, curI, and elapsedT), and knowing the trend line formula from above, I need to return the "duration until maxI" (in seconds).
Any ideas?
Update:
Well, it seems after much futzing around, I can just do this (note: "LERP" is just a linear-interpolation macro, defined in the footnotes):
#define kDataSetMax 237965

double FunctionX(int in_x)
{
    // map the item index onto the 0..100 scale the trend line was fitted against
    double _x(LERP(0, 100, in_x, 0, kDataSetMax));
    double resultF =
        (0.32031139888898874 * _x * _x)
        + (9.609731568497784 * _x)
        - (7.527252350031663);
    if (resultF <= 1) {
        resultF = 1;
    }
    return resultF;
}
CFTimeInterval get_estimated_end_time(int maxI, int curI, CFTimeInterval elapsedT)
{
    CFTimeInterval endT(FunctionX(maxI));
    CFTimeInterval remainingT(endT - elapsedT);
    return remainingT;
}
But that means I'm essentially ignoring curI, and only using elapsedT for the final subtraction?? That doesn't seem... right? What am I missing?
Footnotes:
#define LERP(to_min, to_max, from, from_min, from_max) \
((from_max) == (from_min) ? from : \
(double)(to_min) + ((double)((to_max) - (to_min)) \
* ((double)((from) - (from_min)) \
/ (double)((from_max) - (from_min)))))
#define LERP_PERCENT(from, from_max) \
LERP(0.0f, 1.0f, from, 0.0f, from_max)
Your FunctionX is most of the way there. It's currently calculating expectedTimeToReachMaxIOnMyMachine. What you need to do is figure out how much slower the current run is relative to the expected time on your machine to reach this same point, and then extrapolate that same ratio to the maximum time.
CFTimeInterval get_estimated_end_time(int maxI, int curI, CFTimeInterval elapsedT) {
    // calculate how long we expected it to take to reach this point
    CFTimeInterval expectedTimeToReachCurrentIOnMyMachine = FunctionX(curI);

    // calculate how much slower we are than the expectation
    // if this machine is faster, the math still works out.
    double slowerThanExpectedByRatio
        = double(elapsedT) / expectedTimeToReachCurrentIOnMyMachine;

    // calculate how long we expected to reach the max
    CFTimeInterval expectedTimeToReachMaxIOnMyMachine = FunctionX(maxI);

    // if we continue to be the same amount slower, we'll reach the max at:
    CFTimeInterval estimatedTimeToReachMaxI
        = expectedTimeToReachMaxIOnMyMachine * slowerThanExpectedByRatio;

    return estimatedTimeToReachMaxI;
}
Note that a smart implementation can cache and reuse expectedTimeToReachMaxIOnMyMachine and not calculate it every time.
Basically this assumes that after doing X% of the work, we can calculate how much slower we were than the expected curve, and assume we will stay approximately that same amount slower than the expected curve.
As a worked example (the expected time is the blue line on the chart): at 4,000 elements, the expected time on your machine was 8,055,826, but the actual time taken on this machine was 10,472,573, which is 30% higher (slowerThanExpectedByRatio = 1.3). At that point, we can extrapolate that we'll probably remain 30% higher throughout the entire process (the purple line). So if the total expected time on your machine for 10,000 elements was 32,127,229, then the total estimated time on this machine for 10,000 elements is 41,765,398 (30% higher).
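A minimal sketch of the caching idea mentioned above, which also converts the total estimate into a remaining-time figure (the static cache, the new function name, and the subtraction of elapsedT are my additions, not part of the answer itself):

// Sketch: cache FunctionX(maxI) and report time remaining rather than the total.
// Assumes maxI is the same on every call; everything else follows the code above.
CFTimeInterval get_estimated_remaining_time(int maxI, int curI, CFTimeInterval elapsedT)
{
    // computed once, on the first call
    static CFTimeInterval expectedTimeToReachMaxIOnMyMachine = FunctionX(maxI);

    double slowerThanExpectedByRatio = double(elapsedT) / FunctionX(curI);
    CFTimeInterval estimatedTotal
        = expectedTimeToReachMaxIOnMyMachine * slowerThanExpectedByRatio;

    // remaining time = estimated total minus what has already elapsed
    return estimatedTotal - elapsedT;
}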

Using an if-statement for div by 0 protection in Modelica

I made a simple model of a heat pump which uses sensor data to calculate its COP.
where COP = heat / power.
Sometimes there is no power, so the system hits a divide-by-zero. I would like those values to just be zero, so I tried an if-statement: if power (u) = 0 then COP (y) = 0. Somehow this does not work (see time 8 in the COP output). Does anyone see the problem?
Edit: still problems at time 8.1.
Edit: plots of heat and power.
To make the computation a bit more generally applicable (e.g. the sign of power can change), take a look at the code below. It could also be a good idea to build a function from it (for the function the noEvent()-statements can be left out)...
model DivNoZeroExample
  parameter Real eps = 1e-6 "Smallest number to be used as divisor";
  Real power = 0.5 - time "Some artificial value for power";
  Real heat = 1 "Some artificial value for heat";
  Real COP "To be computed";
equation
  if noEvent(abs(power) < abs(eps)) then
    COP = if noEvent(power >= 0) then heat/eps else heat/(-eps);
  else
    COP = heat/power;
  end if;
end DivNoZeroExample;
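The function variant mentioned above could look roughly like this (just a sketch; the name safeDiv and its arguments are mine, and as noted the noEvent() wrappers are not needed inside a function):

function safeDiv "Divide num by den, protecting against |den| < eps"
  input Real num;
  input Real den;
  input Real eps = 1e-6;
  output Real result;
algorithm
  if abs(den) < eps then
    result := if den >= 0 then num/eps else num/(-eps);
  else
    result := num/den;
  end if;
end safeDiv;

In the model it would then be used as COP = safeDiv(heat, power);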
Relational operations work a bit differently in Modelica.
If you replace if u>0 by if noEvent(u>0) it should work as you expected.
For details see section 8.5 Events and Synchronization in the Modelica specification https://modelica.org/documents/ModelicaSpec34.pdf

GNURadio issues with timing

I am having trouble getting a custom block to operate at high frequency.
The block I would like to use is going to take in data from an external radio.
I am using an Ettus USRP block to stream data in from this radio, and I can display this on the QT Scope. I can set this block's sample rate to 15 MHz, and with the scope this seems to work ok.
Problem:
I have tried making a simple block with the GNU Radio gr_modtool which takes in 2 floats as input and has 0 outputs. The block has private members "timer" (a clock_t) and "count" (an int). In the "work" function, my code simply does this at the moment:
const float *in_i = (const float *) input_items[0];
const float *in_q = (const float *) input_items[1];

if (count == 0) {
    if (*in_i > 0.5) {
        timer = clock();
        count = 30000;
    }
} else {
    count--;
    if (count == 0) {
        timer = clock() - timer;
        printf("Count took %ld clicks, or %f seconds\n",
               (long) timer, (float) timer / CLOCKS_PER_SEC);
    }
}

// Tell runtime system how many output items we produced.
return 0;
However, when I run this code, it takes longer than the expected time.
For 30,000 cycles, it takes 0.872970 seconds to complete, instead of the desired 0.002 seconds. Since the standard GNU Radio block generated with gr_modtool is a sync block, and the input stream to the block is coming from the 15 MHz USRP, I would have expected this block to run at that same frequency. This is not currently the case.
Eventually my goal is to be able to store data streaming in over a period of time and write it to file with certain formatting. (A block already exists to do this, but there is some sort of bug that is preventing that block and the USRP block from working at the same time, so I am attempting to write my own.) However, unless I can keep up with the sample rate of 15 MHz, I will lose data. Since this block is fairly simple, I would have hoped it would be able to run quickly enough to keep up. The input stream block is able to pull data from the radio and output it at 15 MHz, so I know my computer is capable of it.
How can I make this custom block operate more quickly and keep up with the 15 MHz frequency? (Or, how can I make this sync block operate at the input stream frequency, since it currently does not?)
Your block is not consuming any samples. I presume you're writing a sync_block (work function, not general_work), so your number of produced items is identical to the number of consumed items. But as your source code says:
// Tell runtime system how many output items we produced.
return 0;
In other words, your block tells GNU Radio that it didn't use any of the input GNU Radio offered, and produced no output. That means GNU Radio can't make any progress. You must return the number of items you've produced, and for sync blocks, that's the number of items you consumed – even if you're a sink with zero output streams!
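For illustration, the work function might then look something like this (a sketch only: the class name my_sink_impl is an assumption, and the loop over noutput_items is added so that every sample in the input buffer gets examined rather than just the first one):

// Hypothetical sync-block work(): same timing logic as in the question, but it
// now reports how many items it consumed so the scheduler keeps feeding it.
int
my_sink_impl::work(int noutput_items,
                   gr_vector_const_void_star &input_items,
                   gr_vector_void_star &output_items)
{
    const float *in_i = (const float *) input_items[0];

    for (int n = 0; n < noutput_items; n++) {
        if (count == 0) {
            if (in_i[n] > 0.5) {
                timer = clock();
                count = 30000;
            }
        } else if (--count == 0) {
            timer = clock() - timer;
            printf("Count took %ld clicks, or %f seconds\n",
                   (long) timer, (float) timer / CLOCKS_PER_SEC);
        }
    }

    // Tell the runtime how many items we produced; for a sync block this
    // equals the number of input items consumed.
    return noutput_items;
}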

Trying to decode a FM like signal encoded on audio

I have an audio signal that has a kind of FM-encoded signal on it. The encoded signal uses the biphase mark coding technique.
This signal is a digital representation of a timecode, in hours, minutes, seconds and frames. It basically works like this:
Let's say we are working at 25 frames per second;
we know that the code transmits 80 bits of information every frame (that is, 80 bits per frame x 25 frames per second = 2000 bits per second);
the wave is being sampled at 44,100 samples per second, so if we divide 44100/2000 we see that every bit occupies 22.05 samples;
every bit starts when the signal changes sign;
if the wave changes sign once and keeps that sign during the whole bit period, it is a ZERO; if the wave changes sign a second time within the bit period, it is a ONE.
What my code does is this:
detects the first zero crossing, which is the clock start (t0);
measures the level at t = t0 + 0.75 * bitPeriod (0.75 to give some tolerance);
if that second level has a different sign, we have a 1; if not, we have a 0.
This is the code:
// data is a C array of floats representing the audio levels
float bitPeriod = 44100.0f / 2000.0f; // 22.05 samples per bit

int firstZeroCrossIndex = findNextZeroCross(data);
// firstZeroCrossIndex is the index where the signal changed sign,
// for example: data[0] = -0.23 and data[1] = 0.5
// -> firstZeroCrossIndex will be equal to 1

// if firstZeroCrossIndex is invalid, go away
if (firstZeroCrossIndex < 0) return;

float firstValue = data[firstZeroCrossIndex];
int lastSignal = sign(firstValue);
if (lastSignal == 0) return; // invalid, go away

while (YES) {
    float newValue = data[(int)(firstZeroCrossIndex + 0.75f * bitPeriod)];
    int newSignal = sign(newValue);

    if (lastSignal == newSignal)
        printf("0");
    else
        printf("1");

    firstZeroCrossIndex += bitPeriod; // note: firstZeroCrossIndex is an int

    // I think I must invert the signal here for the next loop iteration
    lastSignal = -newSignal;

    if (firstZeroCrossIndex > maximumPossibleIndex)
        break;
}
This code appears logical to me, but the result coming from it is total nonsense. What am I missing?
NOTE: this code executes over a live signal and reads values from a circular ring buffer. sign() returns -1 if the value is negative, 1 if the value is positive, or 0 if the value is zero.
Cool problem! :-)
The code fails in two independent ways:
You are searching for the first (any) zero crossing. This is good. But then there is a 50% chance that this transition is the one which occurs at the start of every bit (0 or 1), rather than the mid-bit transition which marks a 1 bit. If you get it wrong at the beginning, you end up with nonsense.
You keep adding bitPeriod (a float, 22.05) to firstZeroCrossIndex (an int). This means that your sampling points will slowly run out of phase with the analog signal, and you will see strange effects when your sampling point gets near the signal transitions. You will get nonsense, periodically at least.
Solution to 1: You must search for at least one 0 first, so you know which transition indicates just the next bit and which indicates a 1 bit. In practice you will want to re-synchronize your sampler at every '0' bit.
Solution to 2: Do not add bitPeriod to your sampling point. Instead search for the next transition, like you did in the beginning. The next transition is either 'half a bit' away, or a 'complete bit' away, which gives you the information you want. After a 'half a bit' period you must see another 'half a bit' period. If not, you must re-synchronize since you took a middle transition for a start transition by accident. This is exactly the re-sync I was talking about in 1.
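A rough sketch of that transition-driven approach (findNextZeroCrossFrom is a hypothetical helper that returns the next zero-crossing index after a given position, or -1 if there is none; everything else follows the question's code):

// Sketch: classify the gap between successive zero crossings instead of
// stepping blindly by bitPeriod, re-syncing whenever the gaps don't pair up.
int index = findNextZeroCross(data);            // assume this is a bit edge
while (index >= 0 && index < maximumPossibleIndex) {
    int next = findNextZeroCrossFrom(data, index + 1);
    if (next < 0) break;

    if (next - index > 0.75f * bitPeriod) {
        // a full bit period with no mid-bit transition -> 0
        printf("0");
        index = next;                           // stay locked to the bit edges
    } else {
        // half-bit gap: a 1 needs a second half-bit gap to complete it
        int second = findNextZeroCrossFrom(data, next + 1);
        if (second < 0) break;
        if (second - next > 0.75f * bitPeriod) {
            // half gap followed by a full gap: we started mid-bit, so re-sync
            index = next;
            continue;
        }
        printf("1");
        index = second;
    }
}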

Measuring concurrent loop times in Erlang

I create a ring of processes in Erlang and wish to measure the time it takes the first message to pass through the whole ring, as well as the time for the entire message series; each time the first node gets the message back, it sends another one.
Right now, in the first node I have the following code:
receive
    stop ->
        io:format("all processes stopped!~n"),
        true;
    start ->
        statistics(runtime),
        Son ! {number, 1},
        msg(PID, Son, M, 1);
    {_, M} ->
        {Time1, _} = statistics(runtime),
        io:format("The last message has arrived after ~p! ~n", [Time1*1000]),
        Son ! stop;
Of course, I start the statistics when sending the first message.
As you can see, I use Time_Since_Last_Call for the first message loop and wish to use Total_Run_Time for the entire run; the problem is that Total_Run_Time is cumulative from the first time I start the statistics.
The second thought I had was to use another process with two receive loops, getting the times for each, adding them up and printing, but I'm sure that Erlang can do better than this.
I guess the best way to solve this is to somehow flush Total_Run_Time, but I couldn't find out how that could be done. Any ideas how this can be tackled?
One way to measure round-trip times would be to send a timestamp along with each message. When the first node receives the message back, it can then measure the round-trip time as the current time minus that timestamp.
To calculate the total run time, I would memorize the first timestamp in the process state (or dictionary), and calculate the total run time when stopping the test.
Besides, given that you mention the network, are you sure that CPU time (which is what statistics(runtime) measures) is what you're after? Perhaps wall clock time would be more appropriate.
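A minimal sketch of that timestamp idea (the function names, the three-element message shape, and the use of erlang:monotonic_time/1 are my assumptions; the other nodes in the ring are assumed to forward the message unchanged):

%% The first node stamps each message when it sends it and measures elapsed
%% real time both per round trip and since the very first send.
ring_start(Son) ->
    Start = erlang:monotonic_time(microsecond),
    Son ! {number, 1, Start},
    ring_loop(Son, Start).

ring_loop(Son, Start) ->
    receive
        {number, N, SentAt} ->
            Now = erlang:monotonic_time(microsecond),
            io:format("Message ~p made it around in ~p us~n", [N, Now - SentAt]),
            io:format("Total elapsed so far: ~p us~n", [Now - Start]),
            Son ! {number, N + 1, Now},
            ring_loop(Son, Start);
        stop ->
            ok
    end.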