I am developing a scientific application in Windows Forms (VC++ 2010) that controls a relatively new electronic device. I control it through an additional wrapped library written in C. After the initial setup of all parameters, the application triggers a measurement in the device. The device then sends my app a huge amount of data, over 200k int samples, at a significant rate – let's assume 50 datasets per second.
Now I need to plot the data at a real-time pace using a Windows Forms chart. Ideally I would have 750 samples plotted in the chart at a rate of about 30 FPS. The problem I encountered lies in the algorithm for reducing the dataset quickly without losing the reliability of the plot.
My ideas (the data oscillates around the value 127):
Choose 750 points by simply selecting every (200,000 / 750)th point
Group the data and calculate the mean value of each group
Group the data and select the maximum or minimum of each group (based on the overall group placement: if most of the samples are above 127, select the minimum, otherwise the maximum); a rough sketch of this idea follows below.
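A minimal sketch of the third idea (group min/max), assuming the samples sit in a plain int buffer; the function name and signature are illustrative, not part of the device library:

#include <algorithm>
#include <cstddef>
#include <vector>

// Reduce `count` samples to `targetPoints` by keeping one extreme per group:
// if most samples of a group lie above the baseline, keep the group minimum,
// otherwise the group maximum.
std::vector<int> decimateMinMax(const int *samples, std::size_t count,
                                std::size_t targetPoints, int baseline = 127)
{
    std::vector<int> out;
    out.reserve(targetPoints);
    const std::size_t groupSize = count / targetPoints;   // e.g. 200000 / 750
    for (std::size_t g = 0; g < targetPoints; ++g) {
        const int *first = samples + g * groupSize;
        const int *last  = first + groupSize;
        const std::size_t above =
            std::count_if(first, last, [&](int v) { return v > baseline; });
        const auto mm = std::minmax_element(first, last);
        out.push_back(above > groupSize / 2 ? *mm.first : *mm.second);
    }
    return out;
}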
Which of these solutions (if any) is best, considering that I have to plot the data at real-time speed and the plot must not miss the spots where there is a significant signal (which looks like a kind of narrow, modulated sine wave)? Is there a better approach?
And the last question: should the chart's data be a table of pointers into my huge data buffer, or copies of the data, considering that I always use the same buffer of collected data (the device constantly overwrites it with new data)?
This is my first post, so please let me know if anything is wrong with the style of the post.
I developed an application that reads data at 256 Hz (256 samples/second) from 16 channels and displays it in 16 different charts. The best way I found to plot all the data in real time was to use a separate thread to update the plots. Here is the solution (in C#), which might be useful for you too.
When new data is read, it is stored in a list or array. Since it is real-time data, the timestamps are also generated here, using the sample rate of the acquired data: timeStamp = timeStamp + sampleIdx / sampleRate;
public void OnDataRead(object source, EEGEventArgs e)
{
    if ((e.rawData.Length > 0) && (!_shouldStop))
    {
        lock (_bufferRawData)
        {
            for (int sampleIdx = 0; sampleIdx < e.rawData.Length; sampleIdx++)
            {
                // Append data
                _bufferRawData.Add(e.rawData[sampleIdx]);
                // Calculate corresponding timestamp
                secondsToAdd = (float) sampleIdx / e.sampleRate;
                // Append corresponding timestamp
                _bufferXValues.Add(e.timeStamp.AddSeconds(secondsToAdd));
            }
        }
    }
}
Then, create a thread that sleeps every N ms (100 ms is suitable for me for a 2-second display of data, but if I want to display 10 seconds, I need to increase the thread's sleep time to 500 ms).
//Create thread
//define a thread to add values into chart
ThreadStart addDataThreadObj = new ThreadStart(AddDataThreadLoop);
_addDataRunner = new Thread(addDataThreadObj);
addDataDel += new AddDataDelegate(AddData);
//Start thread
_addDataRunner.Start();
And finally, update the charts and make the thread sleep every N ms
private void AddDataThreadLoop()
{
    while (!_shouldStop)
    {
        chChannels[1].Invoke(addDataDel);
        // Sleep thread for 100 ms
        Thread.Sleep(100);
    }
}
Data will be added to the chart every 100ms
private void AddData()
{
    // Copy data stored in lists to arrays
    float[] rawData;
    DateTime[] xValues;
    if (_bufferRawData.Count > 0)
    {
        // Copy buffered data in thread-safe manner
        lock (_bufferRawData)
        {
            rawData = _bufferRawData.ToArray();
            _bufferRawData.Clear();
            xValues = _bufferXValues.ToArray();
            _bufferXValues.Clear();
        }
        for (int sampleIdx = 0; sampleIdx < rawData.Length; sampleIdx++)
        {
            // channelIdx is the index of the chart being updated (set elsewhere)
            foreach (Series ptSeries in chChannels[channelIdx].Series)
                // Add new datapoint (x, y) to the corresponding series
                AddNewPoint(xValues[sampleIdx], rawData[sampleIdx], ptSeries);
        }
    }
}
I am currently working on implementing a simple oscilloscope in C++ that receives data via UDP. First, I implemented a function that generates a sine wave (up to 10 kHz) at 30 kSps and transfers this data via UDP locally. On the other side there is a (QWT) plot. The UDP client runs in a thread, appending the received values (and deleting the first one) to a QList, while the plot is updated every 30 ms via a timer.
The question now is: how can I implement a simple oscilloscope that plots the signal at its original frequency, independent of the number of samples it receives (i.e. implement a time base)? Could you give me some general ideas? Thanks in advance.
Solution:
The solution is to copy (every time the plot is updated, here every 30 ms) the QList of received items into a temporary variable and clear the QList. Then create a time vector for this temporary variable. The time vector is created via this function:
QVector<double> make_vector(double start, double end, double size)
{
    // Build `size` evenly spaced time values covering [start, end).
    // (end - start) also works when start and end have different signs.
    QVector<double> vec;
    const double step = (end - start) / size;
    for (double value = start; vec.size() < size; value += step)
        vec.push_back(value);
    return vec;
}
For example, to scale the values to 1 second, I call this function like: make_vector(-1.0, 0.0, temporaryVariable.size()).
With this procedure, the plot is independent of the number of samples received. You only have to make sure that you receive enough values within your time period (here 30 ms).
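Put together, the 30 ms update could look roughly like this; this is only a sketch, assuming a QwtPlotCurve and a mutex-protected QVector filled by the UDP thread, with illustrative member and class names:

// Rough sketch of the 30 ms timer slot described above.
// m_mutex, m_receivedValues, m_curve and m_plot are assumed members.
void Oscilloscope::updatePlot()
{
    QVector<double> snapshot;
    {
        QMutexLocker lock(&m_mutex);   // the UDP thread appends to m_receivedValues
        snapshot = m_receivedValues;   // copy the received samples
        m_receivedValues.clear();      // start collecting the next batch
    }
    if (snapshot.isEmpty())
        return;

    // Time base: map the batch onto the last second, independent of sample count.
    QVector<double> timeAxis = make_vector(-1.0, 0.0, snapshot.size());

    m_curve->setSamples(timeAxis, snapshot);   // QwtPlotCurve
    m_plot->replot();
}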
I'm trying to use Google Oboe for a 3D audio processing app due to its low latency. The app will have a C++ backend, which does the processing, and the frontend is done with Flutter. I'm running a couple of tests to see if it'll work, but I'm having issues loading assets from Flutter into Oboe. I checked the RhythmGame example in Oboe's repo, done in Java, but couldn't quite find a way of doing that straight from Dart to C++.
The connection between the frontend and backend is through dart:ffi.
Here's what I've tried so far. Based on the example published by Richard Heap here, I changed the noise variable from just a sine wave to a short fragment of a song in a wav file:
class _MyAppState extends State<MyApp> {
  final stream = OboeStream();
  var noise = Float32List(512);
  Timer t;

  @override
  void initState() {
    super.initState();
    // for (var i = 0; i < noise.length; i++) {
    //   noise[i] = sin(8 * pi * i / noise.length);
    // }
    _loadSound();
  }

  void _loadSound() async {
    final ByteData data = await rootBundle.load('assets/song_cut.wav');
    noise = data.buffer.asFloat32List();
  }
(...)
Then this function in Dart calls the Dart wrapper of the native library:
void start() {
  stream.start();
  var interval = (512000 / stream.getSampleRate()).floor() + 1;
  t = Timer.periodic(Duration(milliseconds: interval), (_) {
    stream.write(noise);
  });
}
The wrapper in Dart is:
void write(Float32List original) {
  var length = original.length;
  var copy = allocate<Float>(count: length)
    ..asTypedList(length).setAll(0, original);
  FfiGoogleOboe()._streamWrite(_nativeInstance, copy, length);
  free(copy);
}
_streamWrite is the native function in C++:
EXTERNC void stream_write(void* ptr, void* data, int32_t size) {
    auto stream = static_cast<OboeFfiStream*>(ptr);
    auto dataToWrite = static_cast<float*>(data);
    stream->write(dataToWrite, size);
}

void OboeFfiStream::write(float *data, int32_t size) {
    managedStream->write(data, size, 1000000);
}
Now I can hear the song, but it comes out with too much distortion. When trying the sine wave I could hear it too, but it also had some distortion. I'm not using Oboe's callback mode yet, since I wanted to see whether this worked first.
1 - what format is your WAV file in? Is it 32 bit floats? Don't forget that WAV files have a header, so you should discard the first few tens of bytes (up to the data segment). Be sure that you start reading the audio data on a float boundary (which may not be a multiple of 4 if the header isn't). If necessary, just use a hex editor to ascertain the offset of the float data and start reading there. Or, truncate the header and rename your asset to song_cut.raw. Audacity should be able to produce a header-less raw audio file.
2 - What sample rate is your audio clip recorded at? Does that match the sample rate of the device? (Note that iOS devices are normally 44.1k, but Android devices are frequently 48k. When using an Android emulator on macOS, who knows what the reported sample rate will be! Expect pitch distortion if your rates don't match - or use a resampler. I think Oboe has one. Alternatively, the sample repo associated with the talk contains one you can use.)
3 - note that the timer interval is finely tuned (for demo purposes) to the approximate time taken to deliver 512 samples at the sound card rate. This might be ok for demos, but isn't for real life. Also, your wav file probably doesn't have exactly 512 samples in it. Either adjust your audio loop to 512 samples, or adjust the 512000 constant to match the number of samples in your loop.
4a - You aren't using the callback method yet, but you probably should switch to it as soon as possible. One method I've had success with is a lock-free circular buffer: the Oboe callback tries to empty the buffer, while the Dart timer routine tries to fill it. The bigger the buffer, the less chance there is of an underflow, but the worse the latency (see the sketch after this list).
4b - The ideal solution would be to have the Oboe callback call up into Dart, but I haven't found a way to do that as C->Dart calls must be on the main Dart thread, but the Oboe callbacks are surely on a high-priority IO thread.
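To illustrate point 4a, here is a minimal sketch of a single-producer/single-consumer lock-free ring buffer in C++. The class and its names are illustrative assumptions, not part of Oboe's API; the Dart-driven write path would push samples in and the Oboe callback would pull them out, zero-filling on underflow.

#include <atomic>
#include <cstddef>
#include <vector>

class SpscRingBuffer {
public:
    // capacityPow2 must be a power of two so index wrapping is a simple mask.
    explicit SpscRingBuffer(std::size_t capacityPow2)
        : buffer_(capacityPow2), mask_(capacityPow2 - 1) {}

    // Producer side (e.g. the Dart/FFI write path).
    std::size_t write(const float* data, std::size_t count) {
        const std::size_t head = head_.load(std::memory_order_relaxed);
        const std::size_t tail = tail_.load(std::memory_order_acquire);
        const std::size_t freeSlots = buffer_.size() - (head - tail);
        const std::size_t n = count < freeSlots ? count : freeSlots;
        for (std::size_t i = 0; i < n; ++i)
            buffer_[(head + i) & mask_] = data[i];
        head_.store(head + n, std::memory_order_release);
        return n;                       // samples actually queued
    }

    // Consumer side (e.g. the Oboe audio callback).
    std::size_t read(float* out, std::size_t count) {
        const std::size_t tail = tail_.load(std::memory_order_relaxed);
        const std::size_t head = head_.load(std::memory_order_acquire);
        const std::size_t available = head - tail;
        const std::size_t n = count < available ? count : available;
        for (std::size_t i = 0; i < n; ++i)
            out[i] = buffer_[(tail + i) & mask_];
        tail_.store(tail + n, std::memory_order_release);
        return n;                       // caller fills the rest with silence on underflow
    }

private:
    std::vector<float> buffer_;
    const std::size_t mask_;
    std::atomic<std::size_t> head_{0};  // written only by the producer
    std::atomic<std::size_t> tail_{0};  // written only by the consumer
};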
I am trying to plot some serial data in my Qt GUI program using the QCustomPlot class. I had no trouble when I plotted data at a low sampling frequency, like 100 samples/second. The graph was really nice and plotted the data smoothly. But at high sampling rates, such as 1000 samples/second, the plotter becomes a bottleneck for the serial read function. It slows the serial handling down so much that there was a huge delay of about 4-5 seconds behind the device. Put simply, the plotter could not keep up with the data stream speed. Is there any common issue I don't know about, or any recommendation?
I thought about these scenarios:
1- Divide the whole program into 2 or 3 threads. For example, the serial part runs in one thread and the plotting part runs in another thread, and the two threads communicate via a QSemaphore.
2- The FPS of QCustomPlot is limited, but there should be a solution, because NI LabVIEW plots up to 2k samples/second without any delay.
3- Design a new virtual serial device on the USB protocol. Right now I am using an FT232RL serial-to-USB converter.
4- Change the programming language. What is the situation and class support in C# or Java for real-time plotting? (I know it sounds childish, but it is also a pretext to gain experience in other languages.)
My serial device's send-data function (it is a dummy device for the experiment, there is no serious coding) is briefly this:
void progTask()
{
    DelayMsec(1); //my delay function, millisecond
    //read value from adc13
    Adc13Read(adcValue.ui32Part);
    sendData[0] = (char)'a';
    sendData[1] = (char)'k';
    sendData[2] = adcValue.bytes[0];
    sendData[3] = (adcValue.bytes[1] & 15);
    //send test data
    UARTSend(UART6_BASE, &sendData[0], 4);
}
The Qt program's read function is this:
union{
    unsigned char bytes[2];
    unsigned int intPart;
    unsigned char *ptr;
}serData;

void MedicalSoftware::serialReadData()
{
    if(serial->bytesAvailable() < 4)
    {
        //if the frame size is less than 4 bytes, return and
        //wait to fill the serial receive buffer
        //note: serial->setReadBufferSize(4)!!!!
        return;
    }
    QByteArray serialInData = serial->readAll();
    //my algorithm
    if(serialInData[0] == 'a' && serialInData[1] == 'k')
    {
        serData.bytes[0] = serialInData[2];
        serData.bytes[1] = serialInData[3];
    }
    else if(serialInData[2] == 'a' && serialInData[3] == 'k')
    {
        serData.bytes[0] = serialInData[0];
        serData.bytes[1] = serialInData[1];
    }
    else if(serialInData[1] == 'a' && serialInData[2] == 'k')
    {
        serial->read(1);
        return;
    }
    else if(serialInData[0] == 'k' && serialInData[3] == 'a')
    {
        serData.bytes[0] = serialInData[1];
        serData.bytes[1] = serialInData[2];
    }
    plotMainGraph(serData.intPart);
    serData.intPart = 0;
}
And the QCustomPlot setup function is:
void MedicalSoftware::setGraphsProperties()
{
    //MAIN PLOTTER
    ui->mainPlotter->addGraph();
    ui->mainPlotter->xAxis->setRange(0,2000);
    ui->mainPlotter->yAxis->setRange(-0.1,3.5);
    ui->mainPlotter->xAxis->setLabel("Time(s)");
    ui->mainPlotter->yAxis->setLabel("Magnitude(mV)");
    QSharedPointer<QCPAxisTickerTime> timeTicker(new QCPAxisTickerTime());
    timeTicker->setTimeFormat("%h:%m:%s");
    ui->mainPlotter->xAxis->setTicker(timeTicker);
    ui->mainPlotter->axisRect()->setupFullAxesBox();
    QPen pen;
    pen.setColor(QColor("blue"));
    ui->mainPlotter->graph(0)->setPen(pen);
    dataTimer = new QTimer;
}
And the last one is the plot function:
void MedicalSoftware::plotMainGraph(const quint16 serData)
{
    static QTime time(QTime::currentTime());
    double key = time.elapsed()/1000.0;
    static double lastPointKey = 0;
    if(key-lastPointKey > 0.005)
    {
        double value0 = serData*(3.3/4096);
        ui->mainPlotter->graph(0)->addData(key,value0);
        lastPointKey = key;
    }
    ui->mainPlotter->xAxis->setRange(key+0.25, 2, Qt::AlignRight);
    counter++;
    ui->mainPlotter->replot();
    counter = 0;
}
Quick answer:
Have you tried:
ui->mainPlotter->replot(QCustomPlot::rpQueuedReplot);
According to the documentation, it can improve performance when doing a lot of replots.
Longer answer:
My feeling on your code is that you are trying to replot as often as you can to get a "real time" plot. But if you are on a PC with a desktop OS there is no such thing as real time.
What you should care about is:
Ensure that the code that reads/writes the serial port is not delayed too much. "Too much" is to be interpreted with respect to the connected hardware. If it gets really time-critical (which seems to be your case), you have to optimize your read/write functions and possibly put them in their own thread. This can go as far as reserving a full hardware CPU core for that thread.
Ensure that the graph is refreshed fast enough for the human eye. You do not need to do a full repaint every time you receive a single data point.
In your case you receive 1000 samples/s, which means one data point every millisecond. That is quite fast, because it is beyond the default timer resolution of most desktop OSes. It means you are likely to have more than a single data point available when serialReadData() is called, so you could optimize by calling it less often (e.g. every 10 ms, reading 10 data points each time). Then you could call replot() every 30 ms, which would add 30 new data points each time, skip about 29 replot() calls every 30 ms compared to your code, and give you ~30 fps.
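As an illustration only (the timer members and intervals are assumptions, not taken from your code), the structure could look like this; plotMainGraph() would then only call addData() and leave the replot to the timer:

// Sketch: decouple serial reading (every 10 ms) from replotting (every 30 ms).
// readTimer and plotTimer are assumed QTimer members of MedicalSoftware.
void MedicalSoftware::setupTimers()
{
    readTimer = new QTimer(this);
    connect(readTimer, &QTimer::timeout,
            this, &MedicalSoftware::serialReadData);   // drains all buffered frames
    readTimer->start(10);

    plotTimer = new QTimer(this);
    connect(plotTimer, &QTimer::timeout, this, [this]() {
        ui->mainPlotter->replot(QCustomPlot::rpQueuedReplot);   // ~30 fps
    });
    plotTimer->start(30);
}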
1- Divide the whole program into 2 or 3 threads. For example, the serial
part runs in one thread and the plotting part runs in another thread, and
the two threads communicate via a QSemaphore.
Splitting the serial part and the GUI into 2 threads is good because it prevents a bottleneck in the GUI from blocking the serial communication. Also, you could skip the semaphore and simply rely on Qt signal/slot connections (connected in Qt::QueuedConnection mode).
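A rough sketch of that wiring follows; SerialReader, samplesReady, appendSamples and mainWindow are hypothetical names, not from your code:

// Move the serial handling to a worker thread and deliver batches of samples
// to the GUI thread through a queued signal/slot connection.
QThread *serialThread = new QThread;
SerialReader *reader = new SerialReader;            // QObject owning the QSerialPort
reader->moveToThread(serialThread);

QObject::connect(reader, &SerialReader::samplesReady,          // emitted in the worker thread
                 mainWindow, &MedicalSoftware::appendSamples,  // runs in the GUI thread
                 Qt::QueuedConnection);
serialThread->start();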
4- Change the programming language. What is the situation and class support
in C# or Java for real-time plotting? (I know it sounds childish, but it is
also a pretext to gain experience in other languages.)
Changing the programming language will, in the best case, change nothing, and it could hurt your performance, especially if you move toward languages that are not compiled to native CPU instructions.
Changing the plotting library on the other hand could change the performances. You can look at Qt Charts and Qwt. I do not know how they compare to QCustomPlot though.
I am having trouble getting a custom block to operate at high frequency.
The block I would like to use is going to take in data from an external radio.
I am using an Ettus USRP block to stream data in from this radio, and I can display this on the QT Scope. I can set this block's sample rate to 15 MHz, and with the scope this seems to work ok.
Problem:
I have tried making a simple block with GNU Radio's gr_modtool which takes 2 floats as input and has 0 outputs. The block has private members "timer", a time_t, and "count", an int. In the "work" function, my code simply does this at the moment:
const float *in_i = (const float *) input_items[0];
const float *in_q = (const float *) input_items[1];

if (count == 0){
    if (*in_i > 0.5){
        timer = clock();
        count = 30000;
    }
} else {
    count--;
    if (count == 0){
        timer = clock() - timer;
        printf("Count took %ld clicks, or %f seconds\n",
               (long) timer, (float) timer / CLOCKS_PER_SEC);
    }
}

// Tell runtime system how many output items we produced.
return 0;
However, when I run this code, it takes longer than the expected time.
For 30000 cycles it takes 0.872970 seconds to complete, instead of the desired 0.002 seconds. Since the standard GNU Radio block generated with gr_modtool is a sync block, and the input stream to the block comes from the 15 MHz USRP, I would have expected this block to run at that same rate. This is not currently the case.
Eventually my goal is to store data streaming in over a period of time and write it to a file with certain formatting (a block already exists to do this, but some sort of bug prevents that block and the USRP block from working at the same time, so I am attempting to write my own). However, unless I can keep up with the 15 MHz sample rate, I will lose data. Since this block is fairly simple, I would have hoped it could run quickly enough to keep up. The input stream block is able to pull data from the radio and output it at 15 MHz, so I know my computer is capable of it.
How can I make this custom block operate more quickly and keep up with the 15 MHz rate? (Or: how can I make this sync block operate at the input stream frequency, since it currently does not?)
Your block is not consuming any samples. I presume you're writing a sync_block (work function, not general_work), so your number of produced items is identical to the number of consumed items. But as your source code says:
// Tell runtime system how many output items we produced.
return 0;
In other words, your block tells GNU Radio that it didn't use any of the input GNU Radio offered, and produced no output. That means GNU Radio can't do anything. You must return the number of items you've produced, and for sync blocks, that's the number of items you consumed – even if you're a sink with zero output streams!
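For illustration, the work() function of a sync block generated by gr_modtool should end up looking roughly like this (a sketch only; the class name and per-sample body are placeholders):

int my_sink_impl::work(int noutput_items,
                       gr_vector_const_void_star &input_items,
                       gr_vector_void_star &output_items)
{
    const float *in_i = (const float *) input_items[0];
    const float *in_q = (const float *) input_items[1];

    for (int i = 0; i < noutput_items; i++) {
        // ...per-sample processing (the timing/counting logic above)...
    }

    // Tell the runtime how many items were consumed; for a sync block this is
    // noutput_items, even with zero output streams.
    return noutput_items;
}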
I'm constructing a data visualisation system that visualises over 100,000 data points (visits to a website) across a time period. The time period (say 1 week) is then converted into simulation time (1 week = 2 minutes in simulation), and a task is performed on each and every piece of data at the specific time it happens in simulation time (the time each visit occurred during the week in real time). With me? =p
In other programming languages (eg. Java) I would simply set a timer for each datapoint. After each timer is complete it triggers a callback that allows me to display that datapoint in my app. I'm new to C++ and unfortunately it seems that timers with callbacks aren't built-in. Another method I would have done in ActionScript, for example, would be using custom events that are triggered after a specific timeframe. But then again I don't think C++ has support for custom events either.
In a nutshell: say I have 1000 pieces of data that span a 60-second period. Each piece of data has its own time in relation to that 60-second period. For example, one needs to trigger something at 1 second, another at 5 seconds, etc.
Am I going about this the right way, or is there a much easier way to do this?
Ps. I'm using Mac OS X, not Windows
I would not use timers to do that. Sounds like you have too many events and they may lie too close to each other. Performance and accuracy may be bad with timers.
A simulation is normally done like this:
You simply run a loop (iterations), and on every iteration you add either a measured (for real time) or a constant (not real time) amount to your simulation time.
Then you manually check all your events and execute the ones that are due.
In your case it would help to have them sorted by execution time, so you do not have to loop through all of them every iteration.
Time measuring can be done with the C function gettimeofday() for low accuracy, or there are better functions for higher accuracy, e.g. QueryPerformanceCounter() on Windows - I don't know the equivalent for Mac.
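Since the question mentions Mac OS X, here is a minimal sketch of such a time measurement using the POSIX gettimeofday() (the loop shape and timeScale are illustrative):

#include <sys/time.h>

// Current wall-clock time in seconds.
static double nowSeconds()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

// Usage inside the simulation loop:
// double last = nowSeconds();
// while (running) {
//     double current = nowSeconds();
//     simTime += (current - last) * timeScale;  // e.g. 1 real week -> 2 minutes
//     last = current;
//     // check the sorted events and fire the ones whose time has passed
// }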
Just make a "timer" mechanism yourself, that's the best, fastest and most flexible way.
-> make an array of events (linked to each object event happens to) (std::vector in c++/STL)
-> sort the array on time (std::sort in c++/STL)
-> then just loop on the array and trigger the object action/method upon time inside a range.
Roughly that gives in C++:
// action upon data + data itself
class Object{
public:
    Object(Data d) : data(d) {}
    void Action(){ display(data); }
    Data data;
};

// event time + the object the event acts upon
class Event{
public:
    Event(double t, Object o) : time(t), object(o) {}
    // useful for std::sort
    bool operator<(const Event& e) const { return time < e.time; }
    double time;
    Object object;
};

//init
std::vector<Event> myEvents;
myEvents.push_back(Event(1.0, Object(data0)));
//...
myEvents.push_back(Event(54.0, Object(data10000)));

// could be removed if push_back() is guaranteed to be in the correct order
std::sort(myEvents.begin(), myEvents.end());

// the way you handle time... period is for some fuzziness/animation ?
const double period = 0.5;
const double endTime = 60;
std::vector<Event>::iterator itLastFirstEvent = myEvents.begin();
for (double currtime = 0.0; currtime < endTime; currtime += 0.1)
{
    for (std::vector<Event>::iterator itEvent = itLastFirstEvent; itEvent != myEvents.end(); ++itEvent)
    {
        if (itEvent->time < currtime - period)
            itLastFirstEvent = itEvent;   // already handled, so that the next loop start is optimised
        else if (itEvent->time < currtime + period)
            itEvent->object.Action();     // action speaks louder than words
        else
            break;                        // as it's sorted, there won't be any more ticks this loop
    }
}
PS: About custom events, you might want to read up on delegates in C++ and function/method pointers.
If you are using native C++, you should look at the Timers section of the Windows API on the MSDN website. They should tell you exactly what you need to know.