How to record total values with rrdtool

I'm pretty sure this question has been asked several times, but either I did not find the correct answer or I didn't understand the solution.
To my current problem:
I have a sensor which measures the time a motor is running.
The sensor is reset after reading.
I'm not interested in the time the motor was running the last five minutes.
I'm more interested in how long the motor was running from the very beginning (or from the last reset).
When storing the values in an rrd, different values are recorded depending on the data source type.
When working with GAUGE, the value read is 3000 (tenths of a second) every five minutes.
When working with ABSOLUTE, the stored value is 10 every five minutes (the reading of 3000 divided by the 300-second interval).
But what I would like to get is something like:
3000 after the first 5 minutes
6000 after the next 5 minutes (last value + 3000)
9000 after another 5 minutes (last value + 3000)
The accuracy of the older values (and slopes) is not so important, but the last value should reflect the time in seconds since the beginning as accurately as possible.
Is there a way to accomplish this?

I don't know whether it fits your need, but the TREND/TRENDNAN CDEF functions may be what you want; have a look here:
TREND CDEF function
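For illustration, a minimal sketch of a graph command using TREND, assuming an rrd file motor.rrd with a DS named runtime (both names are my own), wrapped in Python:

# Sketch only: smooth the raw series with a 30-minute (1800 s)
# sliding-window average via the TREND CDEF function.
# "motor.rrd" and the DS name "runtime" are assumptions.
import subprocess

subprocess.run([
    "rrdtool", "graph", "motor.png",
    "--start", "-1d",
    "DEF:raw=motor.rrd:runtime:AVERAGE",
    "CDEF:smoothed=raw,1800,TREND",
    "LINE1:smoothed#0000FF:smoothed",
], check=True)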

I have now created a small SQLite database with one table and one column in that table.
The table has one row. Every time my cron job runs, I add the latest sensor reading to the value stored in that row, so the single row and column always hold the cumulative value of my sensor. This is then fed into the rrd.
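A minimal sketch of that cron job in Python (the table, column, and file names are my own; the rrd DS is assumed to be a GAUGE so the running total is stored as-is):

# Sketch of the accumulator cron job described above. All names are
# hypothetical; the rrd is assumed to have a GAUGE DS for the total.
import sqlite3
import subprocess

def accumulate(reading):
    con = sqlite3.connect("motor.db")
    con.execute("CREATE TABLE IF NOT EXISTS total (value INTEGER)")
    if con.execute("SELECT COUNT(*) FROM total").fetchone()[0] == 0:
        con.execute("INSERT INTO total VALUES (0)")   # first run: seed the row
    con.execute("UPDATE total SET value = value + ?", (reading,))
    total = con.execute("SELECT value FROM total").fetchone()[0]
    con.commit()
    con.close()
    # "N" means "now"; the cumulative total goes straight into the rrd
    subprocess.run(["rrdtool", "update", "motor.rrd", f"N:{total}"], check=True)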
Any other (better) ideas?

The way I'd tackle this (on Linux) is to write the value to a plain-text file and then use the value from that file for the RRDTool graph. Using SQLite (or any other SQL server) would be unnecessarily hard on the system just to keep track of something like this.
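The same accumulator with a plain-text file instead of SQLite, as a rough sketch (the file name is arbitrary):

# Keep the running total in a plain-text file; feed the result into
# rrdtool update exactly as in the SQLite variant above.
def accumulate_file(reading, path="motor_total.txt"):
    try:
        with open(path) as f:
            total = int(f.read().strip() or "0")
    except FileNotFoundError:
        total = 0
    total += reading
    with open(path, "w") as f:
        f.write(str(total))
    return total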

Related

Django query to find value of objects from previous period (if exists)

I have a simple Django project and I am trying to keep track of ranks for certain objects to see how they change over time, for example, what the rank of US GDP (compared to other countries) was over the last 3 years. Below is the Postgres db structure I am working with:
Below is what I am trying to achieve:
What I am finding challenging is that the previous period value may or may not exist, and even the entity may or may not be in the previous period. A period can be a year, quarter, or month, but for a specific record it is one of these and stays the same across all the years for that record.
Can someone guide me in the right direction to write a query to achieve those tables? I am trying to avoid heavy for-loop queries because there may be hundreds of entities and many years of data.
So far I have only been able to achieve the below output:
I am just trying to figure out how to use annotate to fetch previous period values and ranks but I am pretty much stuck.
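Since the actual models aren't shown, the following is only a hedged sketch with invented names (a Record model with entity, period, and value fields). It uses Django's Window expressions: Rank() ranks entities within each period, and Lag() pulls an entity's previous-row value:

# Hypothetical model: Record(entity, period, value).
from django.db.models import F, Window
from django.db.models.functions import Lag, Rank

qs = Record.objects.annotate(
    rank=Window(
        expression=Rank(),
        partition_by=[F("period")],
        order_by=F("value").desc(),
    ),
    prev_value=Window(
        expression=Lag("value"),       # NULL when the entity has no earlier row
        partition_by=[F("entity")],
        order_by=F("period").asc(),
    ),
)

Note that Lag looks at the entity's previous row, which may not be the immediately preceding period if the entity skipped one. The previous rank is trickier because rank is itself an annotation; one common workaround is to materialize the per-period ranks first (e.g. into a table or subquery) and apply Lag to that.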

How to determine the time an actuator has been enabled during the past x minutes?

I need to find the most efficient approach to the following. If someone can point me in the right direction, I can write the code myself.
Environment
I am using an ESP32 and working in Arduino C++.
What I want to achieve
I want to track the amount of time an actuator has been on over the past x minutes, to prevent the actuator from overheating.
My idea
Store the current time in an array every time the actuator goes on (it is on for a fixed amount of time). When the oldest entry is older than x minutes, remove it from the array. If the array exceeds a certain size (i.e. the actuator has been on for a certain number of minutes), start a cool-down period.
However, I feel there must be a more efficient / easy way to achieve this. How would you go about this?
Thanks in advance.
If possible, adding a temperature sensor is the easiest way.
With an array there will be a problem with size, especially if you want to count in minutes. For counting, there is an easier way, as follows (see the sketch after this list):
Let T be the total ON time over the last x minutes, as you expected. During initialization it is 0.
While the actuator is ON, T is increased by the cycle time on every check cycle (maybe every second or less, depending on your requirements).
While the actuator is OFF, T is decreased by the cycle time as long as T > 0; once T = 0 there is nothing more to subtract.
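A minimal sketch of that counter, shown in Python for brevity (the cycle time and the limit are placeholders; porting to Arduino C++ is a matter of a few lines in loop()):

# Leaky-bucket style counter: T approximates the recent ON time without
# storing any timestamps. CYCLE and LIMIT are placeholder values.
CYCLE = 1.0        # seconds between checks
LIMIT = 120.0      # accumulated ON seconds that trigger a cool-down

T = 0.0
cooling_down = False

def check(actuator_on):
    global T, cooling_down
    if actuator_on:
        T += CYCLE                     # ON: accumulate cycle time
    elif T > 0:
        T = max(0.0, T - CYCLE)        # OFF: drain, but never below 0
    if T >= LIMIT:
        cooling_down = True            # keep the actuator off...
    elif T == 0:
        cooling_down = False           # ...until T has fully drained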

Change the db when the time runs out

I'm writing a Django app that features a timer, like in a game.
Let's say the game is a basketball game with 4 quarters of 10 minutes.
I need the db to be changed at the end of each of those 10-minute quarters.
Setting a timer that changes the db won't work for me, because a quarter
won't always be 10 minutes and it can be changed while the app is in
production, i.e. I save the quarter length in the db so I can change it whenever
I want.
I thought of using signals but just couldn't find a way to make it work.
Any help would be appreciated.
Thanks
One way to think about it: it doesn't matter what state the db is in when nobody is looking at it. In other words, you don't have to update the db after exactly 10 minutes.
Instead, as each request comes in, first check whether you are past the timer's limit; if so, update the db before continuing with the usual view code.
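A hedged sketch of that lazy check in a Django view (the Game model and its fields are invented for illustration):

# Hypothetical model: Game(quarter, quarter_started_at, quarter_length),
# where quarter_length is a DurationField, so datetime + timedelta works.
# from myapp.models import Game  # hypothetical import
from django.shortcuts import render
from django.utils import timezone

def advance_if_expired(game):
    # Lazily advance the quarter if its stored deadline has passed.
    deadline = game.quarter_started_at + game.quarter_length
    if timezone.now() >= deadline and game.quarter < 4:
        game.quarter += 1
        game.quarter_started_at = timezone.now()
        game.save()

def scoreboard(request, game_id):
    game = Game.objects.get(pk=game_id)
    advance_if_expired(game)  # update the db before the usual view code
    return render(request, "scoreboard.html", {"game": game})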

Configure SQLite for real time operations

In short, this post would like to answer the following question: how (if possible) can we configure a SQLite database to be absolutely sure that any INSERT command will return in less than 8 milliseconds?
By configure, I mean: compiling options, database pragma options, and run-time options.
To give some background, we would like to apply the same INSERT statement at 120 fps. (1000 ms / 120 fps ≃ 8 ms)
The Database is created with the following strings:
"CREATE TABLE IF NOT EXISTS MYTABLE ("
"int1 INTEGER PRIMARY KEY AUTOINCREMENT, "
"int2 INTEGER, "
"int3 INTEGER, "
"int4 INTEGER, "
"fileName TEXT);"
and the options:
"PRAGMA SYNCHRONOUS=NORMAL;"
"PRAGMA JOURNAL_MODE=WAL;"
The INSERT statement is the following one:
INSERT INTO MYTABLE VALUES (NULL, ?, ?, ?, ?)
The last ? (for fileName) is the name of a file, so it's a small string. Each INSERT is thus small.
Of course, I use precompiled statements to accelerate the process.
I have a little program that makes one insert every 8 ms and measures the time taken to perform each insert. To be more precise, the program makes one insert, THEN waits 8 ms, THEN makes another insert, etc. In the end 7200 inserts were pushed, so the program runs for about 1 minute.
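The measurement program itself is not listed; the following Python re-creation of the described loop is only a sketch (table layout taken from above, the fileName value is a placeholder):

# Re-creation of the measurement loop: one insert, wait 8 ms, repeat
# 7200 times, recording each insert's duration in milliseconds.
import sqlite3
import time

con = sqlite3.connect("test.db")
con.execute("PRAGMA synchronous=NORMAL")
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE IF NOT EXISTS MYTABLE ("
            "int1 INTEGER PRIMARY KEY AUTOINCREMENT, "
            "int2 INTEGER, int3 INTEGER, int4 INTEGER, fileName TEXT)")

timings_ms = []
for i in range(7200):
    t0 = time.perf_counter()
    con.execute("INSERT INTO MYTABLE VALUES (NULL, ?, ?, ?, ?)",
                (i, i, i, "some_file.txt"))       # placeholder file name
    con.commit()
    timings_ms.append((time.perf_counter() - t0) * 1000)
    time.sleep(0.008)

print(max(timings_ms), sum(timings_ms) / len(timings_ms))  # worst / average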
Two charts summarize the results.
The first shows how many milliseconds were spent on each insert as a function of the time, expressed in minutes. As you can see, most of the time the insert time is 0, but there are spikes that can go higher than 100 ms.
The second is a histogram of the same data. All the values below 5 ms are not represented, but I can tell you that of the 7200 inserts, 7161 are below 5 milliseconds (they would give a huge peak at 0 that would make the chart less readable).
The total program time is
real 1m2.286s
user 0m1.228s
sys 0m0.320s
Let's call it 62 seconds. Don't forget that we spend 7200 × 8 ms ≈ 57.6 seconds waiting, so the 7200 inserts take about 4 seconds. We thus have a rate of roughly 1800 inserts per second, and an average time of 0.55 milliseconds per insert.
This is really great, except that in my case I want ALL THE INSERTS to be below 8 milliseconds, and the chart shows that this is clearly not the case.
So where do these peaks come from?
When the WAL file reaches a given size (1 MB in our case), SQLite makes a checkpoint (the WAL file is applied to the real database file). And because we set PRAGMA SYNCHRONOUS=NORMAL, at this moment SQLite performs an fsync on the hard drive.
We suspect it is this fsync that makes the corresponding insert so slow.
This long insert time does not depend on the WAL file size. We played with the WAL_AUTOCHECKPOINT pragma (1000 pages by default), which controls when the WAL file is checkpointed, and we could not reduce the height of the peaks.
We also tried PRAGMA SYNCHRONOUS=OFF. The performance is better but still not enough.
For information, dirty_background_ratio (/proc/sys/vm/dirty_background_ratio) on my computer was set to 0, meaning that all dirty pages in the cache must be flushed to the hard drive immediately.
Does anyone have an idea how to "smooth" the chart, so that no insert takes more than 8 ms?
By default, pretty much everything in SQLite is optimized for throughput, not latency.
WAL mode moves most delays into the checkpoint, but if you don't want those big delays, you have to use more frequent checkpoints, i.e., do a checkpoint after each transaction.
In that case, WAL mode does not make sense; better try journal_mode=persist.
(This will not help much because the delay comes mostly from the synchronization, not from the amount of data.)
If the WAL/journal operations are too slow, and if even synchronous=off is not fast enough, then your only choice is to disable transaction safety and try journal_mode=memory or even =off.
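As a concrete sketch of those variants (the pragmas are standard SQLite; the table is the one from the question, and the values are to be tuned, not a recommendation):

# Sketch of the latency-oriented variants discussed above.
import sqlite3

con = sqlite3.connect("test.db")

# Variant 1: WAL with a manual checkpoint after each transaction.
con.execute("PRAGMA journal_mode=WAL")
con.execute("PRAGMA wal_autocheckpoint=0")         # disable automatic checkpoints

def insert_checkpointed(values):
    con.execute("INSERT INTO MYTABLE VALUES (NULL, ?, ?, ?, ?)", values)
    con.commit()
    con.execute("PRAGMA wal_checkpoint(PASSIVE)")  # checkpoint every transaction

# Variant 2: give up WAL entirely.
# con.execute("PRAGMA journal_mode=PERSIST")   # keep the journal file between txns
# con.execute("PRAGMA journal_mode=MEMORY")    # or =OFF: no transaction safety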

Most efficient way to process complex histogram data?

I'm currently implementing a histogram that will show very large-scale data using Qt, and I have some doubts about which data structure(s) I should use for my problem. I will be displaying the number of queries received from users of an application, in a single application that shows different histograms upon clicking different "show me this data" buttons:
1) Display the histogram of total queries per month. There are 4 months of data here; I kept four variables and incremented them as I encountered queries belonging to those months in the CSV file.
2) Display the histogram of total queries per day in a selected month. I was thinking of using 4 QVectors to represent the months, incrementing the element for a day each time I come across that day; e.g. one vector represents August, and whenever I come across a record with 2011-08-XY, I increment the (XY + 1)th element of that vector by 1. My second alternative is to use 4 QLinkedLists for the sake of better complexity, but I'm not sure the approaches I've come up with are efficient enough, and I'm willing to listen to any other idea.
3) Here's where things get a bit complicated: display the histogram of total queries per hour on a selected day and month. The amount of data is vastly multiplied here, and I don't know which data structure (or combination of structures) to use. A list of lists, perhaps?
Any ideas on problems 2) and 3) would be helpful. Thanks in advance.
Actually, it shouldn't be too unmanageable to always count queries per hour. Assuming the number of queries per hour never exceeds the maximum int value, that's only 24 ints per day, each 32 or 64 bits depending on your machine. Assuming 32-bit ints, you could store about 28 years' worth of data per MB.
There's no need to store the month/year; your program can work that out. Just assign hour 0 to the earliest point in your data, which you keep as a constant, then work out the date based on the hours passed since then.
This avoids having to have a list of lists or anything fancy: just use an array indexed by the number of hours since hour 0, where each element holds the number of queries for that hour.
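A sketch of that flat, hour-indexed layout (in Python; hour 0 and the array size are assumptions to be derived from your CSV):

# One counter per hour since "hour 0" (the earliest timestamp in the data).
# Month/day/hour views then become simple slices and sums over this array.
from datetime import datetime

HOUR0 = datetime(2011, 5, 1)           # assumed earliest point in the data

counts = [0] * (24 * 366)              # enough for a year of hourly buckets

def record(ts):
    # ts: datetime of one query from the CSV
    idx = int((ts - HOUR0).total_seconds() // 3600)
    counts[idx] += 1

def queries_on_day(day):
    # total queries for one calendar day (day: datetime at midnight)
    start = int((day - HOUR0).total_seconds() // 3600)
    return sum(counts[start:start + 24])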
Why don't you simply use a classical database?
When you start asking these kinds of questions, I think it is a good time to consider a more robust structure. There are multiple data structures implemented inside any DB, each optimized for a different access type. You should consider at least lookup, insertion, deletion, and range queries. No structure beats the others on all of these costs, so there is always a trade-off.
Qt has some database classes you can use. I have never used the Qt SQL library, but I think you should give it a shot. Fortunately, there is a Qt SQL programming guide at the end of the linked page.