For my current project I want to plot an N x N matrix in real time in C++, where N is between 100 and 1000. The content of the matrix changes over time. I also want to be able to interact with the matrix, i.e., draw objects inside it with the mouse.
I found this post, but I find it hard to decide which tool I should use.
Any recommendations?
I have been working on finding the temporal displacement between audio signals using a spectrogram. I have a short array containing the data of a sound wave (pulses at specific frequencies), and now I want to plot a spectrogram from that array. I have followed these steps (Spectrogram C++ library):
It would be fairly easy to put together your own spectrogram. The steps are:
window function (fairly trivial, e.g. Hanning)
FFT (FFTW would be a good choice, but if licensing is an issue then go for Kiss FFT or similar)
calculate the log magnitude of the frequency-domain components (trivial: log(sqrt(re * re + im * im)))
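The three steps above can be sketched in a short, self-contained C++ snippet. This is a minimal illustration under my own assumptions, not production code: a naive O(N^2) DFT stands in for FFTW/Kiss FFT so the sketch compiles on its own, and the frame size and hop are arbitrary choices.

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

const double PI = 3.14159265358979323846;

// One analysis frame -> one column of the spectrogram.
// Naive O(N^2) DFT stands in for FFTW/Kiss FFT so the sketch is self-contained.
std::vector<double> frame_log_magnitude(const std::vector<double>& frame) {
    const std::size_t n = frame.size();
    std::vector<double> windowed(n);
    for (std::size_t i = 0; i < n; ++i) {
        // step 1: Hann(ing) window
        double w = 0.5 * (1.0 - std::cos(2.0 * PI * i / (n - 1)));
        windowed[i] = frame[i] * w;
    }
    std::vector<double> out(n / 2 + 1);  // only the non-redundant bins
    for (std::size_t k = 0; k < out.size(); ++k) {
        // step 2: DFT of the windowed frame
        std::complex<double> acc(0.0, 0.0);
        for (std::size_t i = 0; i < n; ++i) {
            double phase = -2.0 * PI * k * i / n;
            acc += windowed[i] * std::complex<double>(std::cos(phase), std::sin(phase));
        }
        // step 3: log magnitude; small epsilon avoids log(0)
        out[k] = std::log(std::abs(acc) + 1e-12);
    }
    return out;
}

// Slice the signal into overlapping frames and transform each one;
// each returned column is one time slice of the spectrogram.
std::vector<std::vector<double>> spectrogram(const std::vector<double>& signal,
                                             std::size_t frame_size, std::size_t hop) {
    std::vector<std::vector<double>> columns;
    for (std::size_t start = 0; start + frame_size <= signal.size(); start += hop) {
        std::vector<double> frame(signal.begin() + start,
                                  signal.begin() + start + frame_size);
        columns.push_back(frame_log_magnitude(frame));
    }
    return columns;
}
```

Replacing `frame_log_magnitude`'s inner loop with an FFTW plan is a drop-in change once licensing is sorted out.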
Now, after performing these 3 steps, I am stuck on how to plot the spectrogram from the available data. Being new to this field, I need some clear steps for plotting the spectrogram.
I know that a simple spectrogram has frequency on the Y-axis, time on the X-axis, and magnitude as the color intensity.
But how do I get these three things in order to plot the spectrogram? (I want to observe and analyze the data behind the spectral peaks, i.e., their values on the X- and Y-axes, which is the main purpose of plotting the spectrogram.)
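Those three things fall out of the STFT bookkeeping: the frame index gives the time, the bin index gives the frequency, and the value computed in step 3 is the color intensity. A minimal sketch of the index-to-axis mapping, assuming you know your sample rate, frame size and hop (function names are mine):

```cpp
#include <cassert>
#include <cstddef>

// Map a spectrogram cell (frame index, bin index) to physical time and
// frequency, given the sample rate fs (Hz), the FFT frame size in samples,
// and the hop between consecutive frames in samples.
double frame_to_seconds(std::size_t frame, std::size_t hop, double fs) {
    return frame * hop / fs;       // time of the frame's start (X-axis)
}

double bin_to_hertz(std::size_t bin, std::size_t frame_size, double fs) {
    return bin * fs / frame_size;  // center frequency of the bin (Y-axis)
}
```

So to read off a spectral peak at cell (frame, bin), its X value is `frame_to_seconds(frame, hop, fs)` and its Y value is `bin_to_hertz(bin, frame_size, fs)`.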
Regards,
Khubaib
I developed a script that calculates the positions of N bodies during a gravitational simulation, and I'd like some tips on visualizing it. So far, I can plot either the initial conditions, the final positions, or the trajectories of all bodies, which is not so great. My goal now is "simply" to visualize the movement of all the bodies and/or to plot several figures (one for each time step) so that, later, it may be possible to make a gif with "animate *.png".
So far I managed to call Gnuplot inside my Fortran code like this:
call system('gnuplot data_test.plt')
Where "data_test.plt" is:
set terminal x11
plot 'teste.dat' pt 5 ps 1
pause 0.1
clear
And "teste.dat" is a file (re-)created at every time increase, which contains the position (x,y) of all the N bodies at that given time.
What would be the best way to implement a simple shape-matching algorithm to match a plot interpolated from just 8 points (x, y) against a database of similar plots (> 12 000 entries), each plot having > 100 nodes? The database has 6 categories of plots (signals measured under 6 different conditions), and the main aim is to find the right category (so for every category there are around 2000 plots to compare against).
The 8-node plot would represent actual data from a measurement, but for now I am simulating this by selecting a random plot from the database, picking 8 points from it, and smearing them with a Gaussian random-number generator.
What would be the best way to implement non-linear least squares to compare the shape of the 8-node plot against each plot from the database? Are there any C++ libraries you know of that could help with this?
Is it necessary to find the actual formula (f(x)) of the 8-node plot to use it with least squares, or will it be sufficient to use interpolation in requested points, such as interpolation from the gsl library?
You can certainly use least squares without knowing the actual formula. If all of your plots are measured at the same x values, then this is easy -- you simply compute the sum in the normal way:

chi^2 = sum_i (y_i - Y(x_i))^2 / sigma_i^2

where y_i is a point in your 8-node plot, sigma_i is the error on that point, and Y(x_i) is the value of the plot from the database at the same x position as y_i. You can see why this is trivial if all your plots are measured at the same x values.
If they're not, you can get Y(x_i) either by fitting the plot from the database with some function (if you know it) or by interpolating between the points (if you don't know it). The simplest interpolation is just to connect the points with straight lines and find the value of the straight lines at the x_i that you want. Other interpolations might do better.
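As a sketch of that interpolation-based comparison in C++ (using the simple straight-line interpolation described above; function names and signatures are my own):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Value of a piecewise-linear curve (xs, ys) at x -- the "connect the points
// with straight lines" interpolation. Assumes xs is sorted and x lies inside
// [xs.front(), xs.back()].
double interp_linear(const std::vector<double>& xs,
                     const std::vector<double>& ys, double x) {
    std::size_t i = 1;
    while (i + 1 < xs.size() && xs[i] < x) ++i;
    double t = (x - xs[i - 1]) / (xs[i] - xs[i - 1]);
    return ys[i - 1] + t * (ys[i] - ys[i - 1]);
}

// chi^2 = sum_i (y_i - Y(x_i))^2 / sigma_i^2 for the measured points (x, y)
// with errors sigma, against one database plot (db_x, db_y).
// Smaller chi^2 means a better shape match.
double chi_square(const std::vector<double>& x, const std::vector<double>& y,
                  const std::vector<double>& sigma,
                  const std::vector<double>& db_x, const std::vector<double>& db_y) {
    double sum = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        double r = y[i] - interp_linear(db_x, db_y, x[i]);
        sum += (r * r) / (sigma[i] * sigma[i]);
    }
    return sum;
}
```

You would loop this over all ~12 000 database plots and keep, per category, the plots with the smallest chi^2. Splines (e.g. from GSL) can replace `interp_linear` if straight lines prove too crude.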
In my field, we use ROOT for this kind of thing. However, scipy has a great collection of functions, and it might be easier to get started with -- if you don't mind using Python.
One major problem you could have would be that the two plots are not independent. Wikipedia suggests McNemar's test in this case.
Another problem you could have is that you don't have much information in your test plot, so your results will be affected greatly by statistical fluctuations. In other words, if you only have 8 test points and two plots match, how will you know if the underlying functions are really the same, or if the 8 points simply jumped around (inside their error bars) in such a way that it looks like the plot from the database -- purely by chance! ... I'm afraid you won't really know. So the plots that test well will include false positives (low purity), and some of the plots that don't happen to test well were probably actually good matches (low efficiency).
To solve that, you would need to either use a test plot with more points or else bring in other information. If you can throw away plots from the database that you know can't match for other reasons, that will help a lot.
I am trying to make polygon A be at position (x, y) at time = 1 sec, for example. Then it should be at position (x, y+2) when time = 2 sec. Then I plan to make more polygons like this. I also want this to be animated, with the polygon moving smoothly from the first position to the second, not jumping around.
Now, thus far I have learned about glutTimerFunc; however, from my understanding, I cannot individually tell a polygon to be at position (x, y) at time T. Rather, it seems I have to create every polygon I want (around 500) and then have the timer cycle through all the polygons at once.
Is there a way to explicitly tell the polygon to be at position (x,y) at time T using the glutTimerFunc?
OpenGL is a low-level API, not an engine or a framework. It has no built-in method for automatically interpolating between positions over time; that's up to you as the engine writer to implement as you see fit. Linear interpolation between two points over time is fairly easy (i.e., in pseudocode: position = startPos + (endPos - startPos) * t, where t is the fraction of the animation time that has elapsed). Interpolation along a more complex curve is essentially the same; the math is just a little more involved to represent the desired curve.
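A minimal sketch of that pseudocode in C++ (the GLUT plumbing is omitted; in a glutTimerFunc callback you would compute the current time, call something like this for each polygon, and post a redisplay -- the struct and function names here are my own):

```cpp
#include <cassert>

struct Vec2 { double x, y; };

// t = 0 -> start, t = 1 -> end; clamped so the polygon stops at the target
// instead of overshooting once the animation time has passed.
Vec2 lerp(Vec2 start, Vec2 end, double t) {
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    return { start.x + (end.x - start.x) * t,
             start.y + (end.y - start.y) * t };
}

// The question's motion: position a at time t0 (e.g. (x, y) at 1 sec),
// position b at time t1 (e.g. (x, y+2) at 2 sec), smooth in between.
Vec2 position_at(double time_s, Vec2 a, Vec2 b, double t0, double t1) {
    return lerp(a, b, (time_s - t0) / (t1 - t0));
}
```

With 500 polygons you would store an array of (a, b, t0, t1) keyframes and evaluate `position_at` for each one every frame -- which is exactly the "iterate through all of your polygons by hand" loop described below.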
You are correct that you must iterate through all of your polygons by hand and position them. This is another feature of using the low level graphics API instead of a pre-written engine.
There are a number of engines of varying complexity (and price) available that abstract away these details for you, however, if your goal is to learn graphics programming I would suggest shying away from them and plowing through.
I have an image which was shown to groups of people with different domain knowledge of its content. I then recorded gaze-fixation data while they watched the image.
I now want to compare the results of the two groups -- so what I need to know is whether there is a correlation between the positions of the sampling data of the two groups.
I have the original image as well as the fixation coords. Do you have any good idea how to start analyzing the data?
It's more about the idea or the plan, so you don't have to be too technical on this one.
Thanks
A simple idea: render all the coordinates on the original image in a heat-map-like way, one image for each group. You can then visually compare the images for correlation, and you have some nice graphics for your paper.
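A minimal sketch of the binning step behind such a heat map (grid resolution and function name are my own choices; coloring the grid and overlaying it on the image are left out):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Bin fixation coordinates into a 2D histogram -- the raw data behind a
// heat map. Each cell counts how many fixations fell inside it; a color map
// over these counts, blended onto the original image, gives the heat map.
std::vector<std::vector<int>> fixation_histogram(
    const std::vector<std::pair<double, double>>& fixations,  // (x, y) in pixels
    int img_w, int img_h, int bins_x, int bins_y) {
    std::vector<std::vector<int>> grid(bins_y, std::vector<int>(bins_x, 0));
    for (const std::pair<double, double>& f : fixations) {
        int bx = static_cast<int>(f.first  / img_w * bins_x);
        int by = static_cast<int>(f.second / img_h * bins_y);
        if (bx >= 0 && bx < bins_x && by >= 0 && by < bins_y)
            ++grid[by][bx];
    }
    return grid;
}
```

Building one such grid per group also sets up the numeric comparison mentioned below, since the two grids are same-sized matrices.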
There is something like the two-dimensional correlation coefficient. With software like R or Matlab you can do the number crunching for the correlation.
Matlab has a function for this, the two-dimensional correlation coefficient corr2:
r = corr2(A, B)
computes the two-dimensional correlation coefficient between two matrices, which must be of the same size.
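If you'd rather compute it outside Matlab: corr2 is just the Pearson correlation coefficient taken over all elements of the two matrices. A self-contained C++ sketch (assuming both matrices are the same size and neither is constant):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Two-dimensional correlation coefficient, equivalent to Matlab's corr2:
// Pearson correlation over all elements of two same-sized matrices.
// Returns a value in [-1, 1]; 1 means the matrices vary identically.
double corr2(const Matrix& a, const Matrix& b) {
    double ma = 0.0, mb = 0.0;
    std::size_t n = 0;
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 0; j < a[i].size(); ++j) {
            ma += a[i][j]; mb += b[i][j]; ++n;
        }
    ma /= n; mb /= n;  // element-wise means
    double num = 0.0, da = 0.0, db = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 0; j < a[i].size(); ++j) {
            double xa = a[i][j] - ma, xb = b[i][j] - mb;
            num += xa * xb; da += xa * xa; db += xb * xb;
        }
    return num / std::sqrt(da * db);
}
```

Feeding it the two groups' fixation heat maps (as same-sized matrices) gives a single number summarizing how similar their gaze distributions are.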
In gaze tracking, the most interesting data lies in two areas:
Where people look. For that you can use the heat map Daan suggests. Make a heat map for all people, and heat maps for the separate groups of people.
When people look there. For that I would recommend you start by making heat maps as above, but for short time intervals starting from the time the picture was first shown. Again, do this for all people, and for the separate groups you have.
The resulting set of heat-maps, perhaps animated for the ones from the second point, should give you some pointers for further analysis.