Simple input-output with time delay - C++

Let's say I have an input signal, could be a random double:
while(true){
double t; // = current time, let's assume I know that
double input = rand();
}
I want to generate an output signal that simply applies a 0.5 sec time delay (a 0.5 sec dead time, in signal-processing terms).
while(true){
double input = rand();
// in pseudocode double output(t) = input(t-0.5)
}
I was thinking about storing the input in a vector, along with a timestamp in another vector, and then looking up output = input(0.5 sec ago). However, that seems very inefficient.
What's an appropriate data structure for this type of problem? (A buffer that lets me recall a value that was stored 0.5 sec ago and discards recorded values that are further in the past than the chosen time delay.)

The struct you use to store data should have a timestamp (either expiry or the moment it was enqueued) along with the double value.
The data structure to store the structs should be a priority queue (sorted on timestamp).
The consumer thread should sleep for n milliseconds where n is initialized to 500ms.
When the consumer pops the first item, it can check the second item and calculate n (the amount of time to sleep for the next iteration). Else it can sleep again for 500 ms.
Let me know if I should write code for it.
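For illustration, a minimal single-threaded sketch of that idea - assuming samples arrive in time order, so a plain FIFO stands in for the priority queue, and all names below are mine:
#include <chrono>
#include <deque>
#include <optional>

// Each stored sample carries the moment it was enqueued, as suggested above.
struct Sample {
    std::chrono::steady_clock::time_point stamp;
    double value;
};

class DelayLine {
public:
    explicit DelayLine(std::chrono::milliseconds delay) : delay_(delay) {}

    void push(double v) {
        buf_.push_back({std::chrono::steady_clock::now(), v});
    }

    // Returns the newest sample that is at least `delay_` old (if any),
    // discarding everything older than the chosen time delay.
    std::optional<double> output() {
        const auto cutoff = std::chrono::steady_clock::now() - delay_;
        std::optional<double> out;
        while (!buf_.empty() && buf_.front().stamp <= cutoff) {
            out = buf_.front().value;
            buf_.pop_front();
        }
        return out;
    }

private:
    std::chrono::milliseconds delay_;
    std::deque<Sample> buf_;
};
With DelayLine dl(std::chrono::milliseconds(500)), calling dl.push(input) as samples arrive and dl.output() each tick gives roughly output(t) = input(t - 0.5).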

What immediately comes to mind is the Producer-Consumer Pattern.
Have the producer push the input to a std::queue and every 0.5 seconds (using a std::thread) have the consumer pop from it.
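A rough sketch of that pattern, assuming a mutex around the shared queue and placeholder rates (a condition variable would be the more idiomatic wake-up signal, omitted here for brevity):
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

std::queue<double> buffer; // shared between the two threads
std::mutex m;

// Producer: pushes a new input sample every 10 ms (the rate is a placeholder).
void producer() {
    for (;;) {
        {
            std::lock_guard<std::mutex> lock(m);
            buffer.push(std::rand() / double(RAND_MAX));
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

// Consumer: wakes every 0.5 s and drains the queue, so each sample
// waits at most 0.5 s before it is handled.
void consumer() {
    for (;;) {
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
        std::lock_guard<std::mutex> lock(m);
        while (!buffer.empty()) {
            std::cout << "delayed output: " << buffer.front() << '\n';
            buffer.pop();
        }
    }
}

int main() {
    std::thread p(producer), c(consumer);
    p.join();
    c.join();
}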

Related

Real-Time Plot Oscilloscope UDP

I am currently working on implementing a simple oscilloscope in C++, which receives data via UDP. First, I have implemented a function that generates a sine wave (up to 10 kHz) at 30 kSps and transfers this data via UDP locally. On the other side there is a (QWT) plot. The UDP client runs in a thread, appending the received values to a QList (and deleting the first one), while the plot is updated every 30 ms via a timer.
The question is now, how can I implement a simple oscilloscope which plots the signal with its original frequency, independent of the number of samples it receives (implement a time base)? Could you give me some general ideas? Thanks in advance.
Solution:
The solution is to copy (every time the plot is updated, here every 30 ms) the QList of received items to a temporary variable and clear the QList. Then create a time vector for this temporary variable. The time vector is created via this function:
QVector<double> make_vector(double start, double end, double size)
{
QVector<double> vec;
double step = (end - start) / size; // uniform spacing across [start, end]
while (start <= end)
{
vec.push_back(start);
start += step;
}
return vec;
}
For example, to scale the values to a 1 second window I call this function like: make_vector(-1.0, 0.0, temporaryVariable.size()).
By this procedure, the plot is independent of the number of samples received. You only have to make sure that you receive enough values in your time period (here 30 ms).
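For illustration, the copy-and-clear step might look roughly like this inside the 30 ms timer slot - receivedValues and plotCurve are assumed names, and any locking between the UDP thread and the GUI thread is omitted:
QList<double> snapshot = receivedValues; // grab what arrived since the last update
receivedValues.clear();                  // start collecting afresh
QVector<double> samples = snapshot.toVector();
QVector<double> timeAxis = make_vector(-1.0, 0.0, samples.size());
timeAxis.resize(samples.size());         // make_vector is inclusive of both ends
plotCurve->setSamples(timeAxis, samples); // hand the x/y data to the QwtPlotCurve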

Running a background process on Arduino

I am trying to get my Arduino Mega to run a function in the background while it is also running a bunch of other functions.
The function that I am trying to run in the background determines wind speed from an anemometer. It processes the data much like an odometer: it counts the number of turns the anemometer makes during a set time period, then divides the turns by the time to determine the wind speed. The longer the time period it runs over, the more accurate the data I receive, as there is more data to average.
The problem I have is that there is a bunch of other data that I am also reading into the Arduino, which I would like to read in once a second. This one second interval is too short for me to get accurate wind readings, as not enough revolutions are completed by the anemometer to give high-accuracy wind data.
Is there a way to have the wind sensor function run in the background and update a global variable once every 5 seconds or so, while the rest of my program runs simultaneously and updates the other data every second?
Here is the code that I have for reading the data from the wind sensor. Every time the wind sensor makes a revolution there is a portion where the signal reads as 0; otherwise the sensor reads as an integer larger than 0.
void windmeterturns(){
startime = millis();
endtime = startime + 5000; // sample over a 5 second window
windturncounter = 0;
turned = false;
unsigned long terminate = startime; // millis() returns unsigned long; an int would overflow
while(terminate <= endtime){
terminate = millis();
windreading = analogRead(windvelocityPin);
if(windreading == 0){
if(turned == true){
windturncounter = windturncounter + 1; // one full revolution counted
turned = false;
}
}
else if(windreading >= 1){
turned = true;
}
delay(5);
}
}
The rest of the processing takes place in another function, but this is the one that I am currently struggling with. Posting the whole code would not really be reasonable here, as it is close to 1000 lines.
The other functions run with a 1 second delay in the loop, but as I have found through trial and error, the delay plus the processing of the other functions means the loop actually takes longer than a second, and the overrun varies with the data I am reading from the other sensors, so I do not think a 5-loop counter will work for timing here.
Let Interrupts do the work for you.
In short, I recommend using a Timer Interrupt to generate a periodic interrupt that measures the analog reading in the background. Subsequently this can update a static volatile variable.
See my answer here, as it is a similar scenario detailing how to use the timer interrupt; you can replace the callback() with your analogRead and increment from above.
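To make that concrete, a sketch under the assumption that the TimerOne library is installed (the pin number is a placeholder); the ISR mirrors the polling loop from the question:
#include <TimerOne.h> // assumes the TimerOne library is installed

const int windvelocityPin = A0; // placeholder: adjust to your wiring
volatile unsigned int windturncounter = 0;
volatile bool turned = false;

// Runs every 5 ms in interrupt context, mirroring the polling loop above.
// analogRead() takes roughly 0.1 ms, which is tolerable at this rate.
void sampleWind() {
  if (analogRead(windvelocityPin) == 0) {
    if (turned) {
      windturncounter++; // one full revolution completed
      turned = false;
    }
  } else {
    turned = true;
  }
}

void setup() {
  Timer1.initialize(5000);            // timer period in microseconds (5 ms)
  Timer1.attachInterrupt(sampleWind); // sampling now happens in the background
}

void loop() {
  delay(5000); // the rest of the sketch keeps running here
  noInterrupts(); // read and reset the counter atomically
  unsigned int turns = windturncounter;
  windturncounter = 0;
  interrupts();
  // ... convert 'turns' over the 5 s window into wind speed and update the global ...
}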
Without seeing how the rest of your code is set up, I would try making windturncounter a global variable, and adding another integer that is incremented each time your main loop runs (once per second). Then:
// in the main loop
if(iteratorVariable >= 5){
iteratorVariable = 0;
// take your windreading and implement logic here
} else {
iteratorVariable++;
}
I'm not sure how your anemometer stores data or what other challenges you might be facing, so this may not be a 100% solution, but it would allow you to run the logic from your original post every five seconds.

How to only get a key entry once per second? (or delay the time between two keyboard entries)

In pygame, I am using the function "pressed_keys".
This is my Code:
if(pressed_keys[K_y]):
base += 10;
But when I press it only once, "base" increases by about 200. I want to know if there is a way to increase the time between two entries?
Thanks for helping!
(p.s. I really don't know how to search for similar questions to this one. I hope this is not a duplicate, but in case it is, let me know and I will delete this question. Thanks again!)
Here http://www.pygame.org/docs/ref/key.html#pygame.key.set_repeat
pygame.key.set_repeat(delay, interval): return None
also:
pygame.key.get_pressed()[K_y]: return bool
Another way is to record the time you accepted the "key pressing", and wait before accepting it again:
import time
interval = 0.1 # your interval in seconds (time.time() works in seconds, not milliseconds)
lasttime = 0
while 1:
    draw()   # draw routine
    events() # other events
    now = time.time() # save in one variable if you are going to test against more than one, reducing the number of time.time() calls
    if pressed_keys[K_y] and (now - lasttime) > interval:
        lasttime = now
        base += 10
time.time() returns the time in seconds since the epoch as a floating-point number.
The epoch is the point where the time starts. For Unix, the epoch is January 1st, 1970, at 00:00.
Knowing that, you compare the time right now against the last time you saved:
now - lasttime. When this delta is more than the interval, you are allowed to continue your event; don't forget to update your lasttime variable.
I hope you know enough about pygame to use a clock.
(For simplicity's sake we'll say the time interval required will be one second)
A simple solution would be to only check for input every second, using a simple counter and the pygame clock.
First off start the clock and the counter, outside of your main loop.
Also, add a boolean variable to determine if the key was pressed within this second.
FRAMERATE = 30 #(The framerate used in this example is 30 FPS)
clock = pygame.time.Clock()
counter = 0
not_pressed = True
Then inside the main loop, the first thing you do is increase the counter, then tick the clock.
while argument:
counter+=1
clock.tick(FRAMERATE)
Then, where you have your code, add an if statement to see if the button has been pressed this second:
if not_pressed:
if(pressed_keys[K_y]):
not_pressed=False
base += 10
#Rest of code:
if(pressed_keys[K_up]):
Finally, at the end of your main loop, add a checker to switch the boolean not_pressed back to True every second:
if counter == FRAMERATE:
counter=0
not_pressed=True
That should allow the program to only take input from the user once every second.
To change the interval, simply change the if counter == FRAMERATE: line.
if counter == FRAMERATE: would be 1 Second
if counter == (FRAMERATE*2): would be 2 Seconds
if counter == int(FRAMERATE/4): would be a quarter of a second*
*note - make sure you turn FRAMERATE divided by a number into an integer, either by surrounding the division with int(), or by using integer division: (FRAMERATE//4)
For a similar example to see how everything fits, see this answer.
See also: Pygame: key.get_pressed() does not coincide with the event queue - for repeated movement while a key is held down, using state polling for those keys works better.

Limit iterations per time unit

Is there a way to limit iterations per time unit? For example, I have a loop like this:
for (int i = 0; i < 100000; i++)
{
// do stuff
}
I want to limit the loop above so there will be a maximum of 30 iterations per second.
I would also like the iterations to be evenly positioned in the timeline so not something like 30 iterations in first 0.4s and then wait 0.6s.
Is that possible? It does not have to be completely precise (though the more precise it will be the better).
@FredOverflow My program is running very fast. It is sending data over wifi to another program which is not fast enough to handle them at the current rate. – Richard Knop
Then you should probably have the program you're sending data to send an acknowledgment when it has finished receiving the last chunk of data you sent, and only then send the next chunk. Anything else will just cause you frustrations down the line as circumstances change.
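In sketch form, that acknowledgment scheme is just stop-and-wait; every helper below (hasMoreData, nextChunk, sendChunk, waitForAck) is a hypothetical stub standing in for your own transport code:
#include <cstdio>

// Hypothetical stand-ins for real socket I/O.
static int chunksLeft = 3;
bool hasMoreData() { return chunksLeft > 0; }
int  nextChunk()   { return chunksLeft--; }
void sendChunk(int chunk) { std::printf("sent chunk %d\n", chunk); }
void waitForAck() { /* block on the receiver's ack here */ }

int main() {
    // Stop-and-wait: the receiver's acknowledgment, not a timer, sets the pace.
    while (hasMoreData()) {
        sendChunk(nextChunk());
        waitForAck();
    }
}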
Suppose you have a good Now() function (GetTickCount() is a bad example: it's OS-specific and has poor precision):
for (int i = 0; i < 1000; i++){
DWORD have_to_sleep_until = GetTickCount() + EXPECTED_ITERATION_TIME_MS;
// do stuff
DWORD now = GetTickCount();
if (have_to_sleep_until > now) // guard against unsigned underflow
Sleep(have_to_sleep_until - now);
}
You can check the elapsed time inside the loop, but that is not the usual solution, because computation time depends entirely on the performance of the machine and the algorithm; people optimize it during development (e.g. many game programmers require at least 25-30 frames per second for properly smooth animation).
The easiest way (for Windows) is to use QueryPerformanceCounter(). A compilable sketch below.
#include <windows.h>

LARGE_INTEGER freq, c1, c2, c3, c4;
QueryPerformanceFrequency(&freq);
double timeWanted = 1000.0 / 30.0; // milliseconds per iteration for 30 iterations/sec
for (int i = 0; i < 100000; i++)
{
QueryPerformanceCounter(&c1);
// do stuff
QueryPerformanceCounter(&c2);
double timeElapsed = (double)(c2.QuadPart - c1.QuadPart) * 1e3 / (double)freq.QuadPart; // time in milliseconds
double timeDiff = timeWanted - timeElapsed;
if (timeDiff > 0)
{
// busy-wait out the rest of this iteration's time slice
QueryPerformanceCounter(&c3);
do {
QueryPerformanceCounter(&c4);
} while ((double)(c4.QuadPart - c3.QuadPart) * 1e3 / (double)freq.QuadPart < timeDiff);
}
}
EDIT: You must make sure that the 'do stuff' part takes less time than your frame time, or else it doesn't matter. Also, instead of 1e3 for milliseconds you can go all the way to nanoseconds with 1e9 (if you want that much accuracy).
WARNING... this will eat your CPU, but it gives you good 'software' timing... Do it in a separate thread (and only if you have more than one processor) so that any GUIs won't lock up. You can put a conditional in there to stop the loop if this is a multi-threaded app, too.
@FredOverflow My program is running very fast. It is sending data over wifi to another program which is not fast enough to handle them at the current rate. – Richard Knop
What you might need is a buffer or queue at the receiver side. The thread that receives the messages from the client (e.g. through a socket) gets each message and puts it in the queue. The actual consumer of the messages reads/pops from the queue. Of course, you need concurrency control for your queue.
Besides the flow-control methods mentioned, if you also need to maintain an accurate, specific data-sending rate on the sender side, it can usually be done like this:
For example, if you want to send at 10 Mbps, create a timer with an interval of 1 ms so it calls a predefined function every 1 ms. In the timer handler, keep track of two static variables: 1) the time elapsed since the beginning of sending data, and 2) how much data (in bytes) has been sent up to the last call. From these you can easily calculate how much data needs to be sent in the current call (or just sleep and wait for the next call).
This way you can "stream" data at a very stable rate with very little jitter, which is how streaming of video is usually done. Of course, it also depends on how accurate the timer is.
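A sketch of that bookkeeping, assuming some timer already calls this handler once per millisecond; sendBytes() is a hypothetical stub for the real transmit call:
#include <cstdint>

// Hypothetical send helper; returns how many bytes actually went out.
static uint64_t sendBytes(uint64_t n) { return n; }

// Called once per 1 ms timer tick; keeps the average rate at targetBitsPerSec
// by comparing what *should* have been sent so far with what *has* been sent.
void onTimerTick(uint64_t targetBitsPerSec) {
    static uint64_t ticksElapsed = 0; // time since sending began, in ms
    static uint64_t bytesSent = 0;    // bytes sent up to the last call
    ++ticksElapsed;
    uint64_t expected = targetBitsPerSec / 8 * ticksElapsed / 1000;
    if (expected > bytesSent)
        bytesSent += sendBytes(expected - bytesSent);
}

int main() {
    for (int i = 0; i < 5; ++i)
        onTimerTick(10000000); // e.g. a 10 Mbps target
}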

How can I set tens of thousands of tasks to each trigger at a different defined time?

I'm constructing a data visualisation system that visualises over 100,000 data points (visits to a website) across a time period. The time period (say 1 week) is then converted into simulation time (1 week = 2 minutes in simulation), and a task is performed on each and every piece of data at the specific time it happens in simulation time (the time each visit occurred during the week in real time). With me? =p
In other programming languages (eg. Java) I would simply set a timer for each datapoint. After each timer is complete it triggers a callback that allows me to display that datapoint in my app. I'm new to C++ and unfortunately it seems that timers with callbacks aren't built-in. Another method I would have done in ActionScript, for example, would be using custom events that are triggered after a specific timeframe. But then again I don't think C++ has support for custom events either.
In a nutshell; say I have 1000 pieces of data that span a 60 second period. Each piece of data has its own time in relation to that 60 second period. For example, one needs to trigger something at 1 second, another at 5 seconds, etc.
Am I going about this the right way, or is there a much easier way to do this?
Ps. I'm using Mac OS X, not Windows
I would not use timers to do that. Sounds like you have too many events and they may lie too close to each other. Performance and accuracy may be bad with timers.
A simulation is normally done like this:
You simply run loops (or iterations), and on every loop you add either a measured (for real time) or a constant (for non-real-time) amount to your simulation time.
Then you manually check all your events and execute them if they are due.
In your case it would help to have them sorted by execution time, so you would not have to loop through all of them every iteration.
Time measuring can be done with the gettimeofday() C function for low accuracy, or there are better functions for higher accuracy, e.g. QueryPerformanceCounter() on Windows - I don't know the equivalent for Mac.
Just make a "timer" mechanism yourself, that's the best, fastest and most flexible way.
-> make an array of events (linked to each object the event happens to) (std::vector in C++/STL)
-> sort the array on time (std::sort in C++/STL)
-> then just loop over the array and trigger each object's action/method when its time falls inside the current range.
Roughly that gives in C++:
// action upon data + data itself
class Object{
public:
Object(Data d) : data(d) {}
void Action(){ display(data); }
Data data;
};
// event time + object upon which the event acts
class Event{
public:
Event(double t, Object o) : time(t), object(o) {}
// useful for std::sort
bool operator<(const Event& e) const { return time < e.time; }
double time;
Object object;
};
//init
std::vector<Event> myEvents;
myEvents.push_back(Event(1.0, Object(data0)));
//...
myEvents.push_back(Event(54.0, Object(data10000)));
// could be removed if push_back() is guaranteed to be in the correct order
std::sort(myEvents.begin(), myEvents.end());
// the way you handle time... period is for some fuzziness/animation ?
const double period = 0.5;
const double endTime = 60;
std::vector<Event>::iterator itLastFirstEvent = myEvents.begin();
for (double currtime = 0.0; currtime < endTime; currtime += 0.1)
{
for (std::vector<Event>::iterator itEvent = itLastFirstEvent; itEvent != myEvents.end(); ++itEvent)
{
if (itEvent->time < currtime - period)
itLastFirstEvent = itEvent; // so that the next loop's start is optimised
else if (itEvent->time < currtime + period)
itEvent->object.Action(); // action speaks louder than words
else
break; // as it's sorted, there won't be any more ticks this loop
}
}
ps: About custom events, you might want to read/search about delegates in C++ and function/method pointers.
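As a taste of that, std::function can serve as a simple delegate slot; the names here are made up for illustration:
#include <functional>
#include <iostream>

// A callback slot: whatever callable is assigned gets invoked on click().
struct Button {
    std::function<void()> onClick;
    void click() { if (onClick) onClick(); }
};

int main() {
    Button b;
    b.onClick = [] { std::cout << "clicked\n"; }; // attach a handler (a lambda)
    b.click();                                    // prints "clicked"
}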
If you are using native C++, you should look at the Timers section of the Windows API on the MSDN website. They should tell you exactly what you need to know.