I want to draw 2000 spheres using OpenGL in Visual C++.
The following code draws 1000 spheres and the result looks fine.
But when I increase the number of spheres to 2000 (see the partial code below, highlighted by ** ), it fails.
The following error message appears.
"freeglut : fgInitGL2 : fghGenBuffers is NULL"
Could you help me to solve this problem?
for (int j = 0; j < 10; j++) {
    for (int k = 0; k < 10; k++) {
        **for (int l = 0; l < 20; l++) { // was: for (int l = 0; l < 10; l++)**
            glPushMatrix();
            glTranslatef(j, k, l);
            gluSphere(myQuad, 0.5, 100, 100);
            glPopMatrix();
        }
    }
}
Here is the full code for the test.
#include <GL/glew.h>
#include <GL/glut.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <fstream>
#include <string>
#include <cstring>
#include <cmath>
#include <iostream>
int selectedObject = 1;
bool drawThatAxis = 0;
bool lightEffect = 1;
float fovy = 60.0, aspect = 1.0, zNear = 1.0, zFar = 100.0;
float depth = 8;
float phi = 0, theta = 0;
float downX, downY;
bool leftButton = false, middleButton = false;
GLfloat white[3] = { 1.0, 1.0, 1.0 };
void displayCallback(void);
GLdouble width, height;
int wd;
int main(int argc, char* argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH);
    glutInitWindowSize(800, 600);
    wd = glutCreateWindow("3D Molecules");
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glCullFace(GL_BACK);
    glEnable(GL_CULL_FACE);
    glClearColor(0.0, 0.0, 0.0, 0.0);
    GLuint id;
    id = glGenLists(1);
    GLUquadric* myQuad;
    myQuad = gluNewQuadric();
    glNewList(id, GL_COMPILE);
    for (int j = 0; j < 10; j++) {
        for (int k = 0; k < 10; k++) {
            for (int l = 0; l < 10; l++) {
                glPushMatrix();
                glTranslatef(j, k, l);
                gluSphere(myQuad, 0.5, 100, 100);
                glPopMatrix();
            }
        }
    }
    glEndList();
    glutDisplayFunc(displayCallback);
    glutMainLoop();
    return 0;
}
void displayCallback(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(fovy, aspect, zNear, zFar);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0, 0, 40, 0, 0, 0, 0, 1, 0);
    glTranslatef(0.0, 0.0, -depth);
    glRotatef(-theta, 1.0, 0.0, 0.0);
    glRotatef(phi, 0.0, 1.0, 0.0);
    if (lightEffect) {
        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
    }
    else {
        glDisable(GL_LIGHTING);
        glDisable(GL_LIGHT0);
    }
    switch (selectedObject)
    {
    case (1):
        glCallList(1);
        break;
    default:
        break;
    }
    glFlush();
}
Is there a limit on the number of objects in OpenGL?
OpenGL doesn't define limits like a "maximum number of objects".
An application can draw as many objects as fit into CPU memory, but drawing usually becomes unusably slow long before the application hits memory limits. Even when all the texture and vertex data do not fit into GPU memory, OpenGL still doesn't fail; it keeps drawing by constantly uploading data from CPU to GPU memory on each frame.
So if we come back to the question of OpenGL limits - indeed, there are memory limits, as you may see from another similar question. Your code doesn't actually check for any OpenGL errors using glGetError(), hence your conclusion about fghGenBuffers() being the root cause is misleading. I would expect a GL_OUT_OF_MEMORY error to appear in your case. Modern OpenGL also defines a more sophisticated mechanism for reporting errors - ARB_debug_output.
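For illustration only (this snippet is not in your original code), a minimal glGetError() check right after glEndList() would reveal whether the display list compilation ran out of memory:
// Sketch: drain the OpenGL error queue after compiling the display list.
// GL_OUT_OF_MEMORY here would confirm the memory hypothesis discussed above.
glEndList();
for (GLenum anErr = glGetError(); anErr != GL_NO_ERROR; anErr = glGetError())
{
    if (anErr == GL_OUT_OF_MEMORY)
        fprintf(stderr, "GL_OUT_OF_MEMORY while compiling the display list\n");
    else
        fprintf(stderr, "OpenGL error 0x%04X\n", anErr);
}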
Display Lists are a very archaic mechanism in the OpenGL world, intended to optimize drawing of large amounts of data by "remembering" a sequence of OpenGL commands in internal driver-managed caches. This mechanism was commonly used before the wide adoption of Vertex Buffer Objects, which were added in OpenGL 1.5 as a more straightforward and efficient way to control vertex data memory, and before Vulkan and GL_NV_command_list re-invented Command Buffers as a more reliable interface for caching a sequence of GPU commands.
A big design issue of the Display List mechanism is unpredictable memory management and wildly varying implementations across vendors (from very poor to extremely optimized). Modern graphics drivers try to upload vertex data to GPU memory implicitly while compiling Display Lists, but what they actually do remains hidden.
The archaic GLU library is another source of mystery in your code, as it is difficult to estimate the memory utilized by gluSphere(). A pessimistic calculation shows:
size_t aNbSpheres   = 10 * 10 * 20;
size_t aNbSlices    = 100, aNbStacks = 100;
size_t aNbTriangles = aNbSlices * aNbStacks * 2;
size_t aNbNodes     = aNbSpheres * aNbTriangles * 3; // non-indexed array
size_t aNodeSize    = (3 * sizeof(GLfloat)) * 2;     // position + normal
size_t aMemSize     = aNbNodes * aNodeSize;
size_t aMemSizeMiB  = aMemSize / 1024 / 1024;
that the vertex data alone for 2000 spheres may occupy about 2746 MiB (roughly 2.7 GiB) of memory!
If your application is built in 32-bit mode, then it is no surprise that it hits the 32-bit address space limits. But even in the case of a 64-bit application, the OpenGL driver implementation might hit some internal limits, which will be reported by the same GL_OUT_OF_MEMORY error.
Regardless of memory limits, your code is trying to draw around 40 million triangles. This is not impossible for fast modern GPU hardware, but it might be really slow on low-end embedded graphics.
So what could be done next?
Learn OpenGL debugging practices - using glGetError() and/or ARB_debug_output to localize the place and root cause of this and other issues.
Reduce gluSphere() tessellation parameters.
Generate a Display List of a single sphere and draw it many times (see the sketch after this list). This kind of instancing dramatically reduces memory consumption. It may, however, be slower than drawing all spheres at once (but 2000 draw calls is not that many for a modern CPU).
Replace obsolete GLU library with direct generation of vertex data - sphere tessellation is not that difficult to implement and there are a lot of samples around the web.
Learn Vertex Buffer Objects and use them instead of obsolete Display Lists.
Learn GLSL and modern OpenGL so that you can implement hardware instancing and draw the spheres most efficiently.
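To make the single-sphere Display List suggestion above concrete, here is a rough sketch (names like aSphereList are illustrative, not taken from your code) of compiling one sphere once and replaying it 2000 times from the display callback:
// Sketch: compile a single sphere once at startup...
GLuint aSphereList = glGenLists(1);
GLUquadric* aQuad = gluNewQuadric();
glNewList(aSphereList, GL_COMPILE);
gluSphere(aQuad, 0.5, 30, 30); // far fewer slices/stacks than 100x100, usually enough visually
glEndList();

// ...then, inside the display callback, replay it with different transformations.
for (int j = 0; j < 10; j++) {
    for (int k = 0; k < 10; k++) {
        for (int l = 0; l < 20; l++) {
            glPushMatrix();
            glTranslatef((GLfloat)j, (GLfloat)k, (GLfloat)l);
            glCallList(aSphereList);
            glPopMatrix();
        }
    }
}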
From the other side, the fghGenBuffers error looks really weird, as glGenBuffers() should be present in every modern OpenGL implementation. Print the driver information via glGetString(GL_VENDOR)/glGetString(GL_RENDERER)/glGetString(GL_VERSION) to see whether your system has a proper GPU driver installed and isn't using the obsolete Microsoft software implementation of OpenGL 1.1.
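A minimal sketch of such a check (call it once after glutCreateWindow(), when a GL context exists):
// Sketch: print basic driver information; "Microsoft Corporation" / "GDI Generic"
// would indicate the software OpenGL 1.1 fallback rather than a real GPU driver.
printf("GL_VENDOR:   %s\n", (const char*)glGetString(GL_VENDOR));
printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));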
Buffer allocations can fail, of course — everything in computers is finite — but your error message isn't related to your problem.
You received the error:
freeglut : fgInitGL2 : fghGenBuffers is NULL
That's an error from freeglut, which is not part of OpenGL. So look up the implementation of freeglut's fgInitGL2.
If fghGenBuffers failed, that means that the following line failed:
CHECK("fghGenBuffers", fghGenBuffers = (FGH_PFNGLGENBUFFERSPROC)glutGetProcAddress("glGenBuffers"));
i.e. GLUT was unable to obtain the address of the glGenBuffers function. It didn't ask for buffers and fail to get them; it asked for the address of the function it would use to ask for buffers, and didn't even get that.
That, however, is only an fgWarning, i.e. a warning, not an error. I would dare guess that you see that message on your terminal from the moment your program starts, irrespective of whether it subsequently fails. It's something GLUT wants you to know, but it isn't proximate to your failure.
As to your actual problem: it is almost certainly to do with attempting to overfill a display list. Your best solution in context is to put only a single sphere into a display list, and issue 2000 calls to draw that, modifying the model-view matrix between each.
As a quick aside: display lists proved to be a bad idea, not offering much scope for optimisation and becoming mostly unused by OpenGL 1.5. They were deprecated in OpenGL 3.0 in 2008, as was the entire fixed-functionality pipeline — including glPushMatrix, glTranslate and glPopMatrix.
That's not to harangue, but be aware that the way your code is formed relies on lingering deprecated functionality. It may contain hard limits that nobody has bothered to update, or in any other way see very limited maintenance.
It's far and away the simplest way to get going though, and you're probably in the company of a thousand CAD or other scientific programs, so the best advice right now really is just not to try to put all your spheres in one display list.
I have been doing some OpenGL programming lately and it involves basic matrix transformations such as translation, rotation and scaling. I have encountered some problems when doing rotation. Here is my question.
I am using a variable rotationDegree and a variable rotationStepSize to control the rotation. When the rotation flag is on:
// inside the paintGL function
if (rotationFlag) {
    rotationDegree += rotationStepSize;
    if (rotationDegree > 360.0f)
        rotationDegree -= 360.0f;
}
Here's the strange thing: since I define rotationStepSize to be very small, the rotation starts out very slow, but as time goes on it gets faster and faster!
I have come up with two explanations for this phenomenon:
360f is not the range of values for the degree parameter in glm::rotate
The program starts out slow, causing paintGL to be painted to the screen less often. Then, as the program becomes steady (or other parameters stop changing), the mainLoopEvent executes faster and faster, causing paintGL to be painted more often.
Does anyone know how to solve this problem? I googled using glutGet(GL_TIME_ELAPSED), but on my machine this call reports "glutget: missing ENUM handle", which indicates that my GLUT installation is not complete, I guess?
So does anyone know how to fix the enum problem, or how to get around it to create a scene where I have an object rotating at constant speed?
Thanks a lot!
According to freeglut_state.c, the glutGet function is defined like this (excerpt):
int FGAPIENTRY glutGet( GLenum eWhat )
{
#if TARGET_HOST_WIN32 || TARGET_HOST_WINCE
    int returnValue ;
    GLboolean boolValue ;
#endif
    switch (eWhat)
    {
    case GLUT_INIT_STATE:
        return fgState.Initialised;
    case GLUT_ELAPSED_TIME:
        return fgElapsedTime();
    /* ... */
    }
I'm not using freeglut, but looking at the documentation, maybe you should try GLUT_ELAPSED_TIME instead of GL_TIME_ELAPSED?
And calculate the delta time like this:
int preTime = 0;
while( ... )
{
    int currentTime = glutGet(GLUT_ELAPSED_TIME);
    int deltaTime = currentTime - preTime;
    preTime = currentTime;
    // ... pass the deltaTime to whatever you want ...
}
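Tying that back to the rotation question, here is a sketch of a constant-speed update (rotationSpeed in degrees per second is an assumed constant, not something from your code):
// Sketch: advance the angle by elapsed wall-clock time instead of by frame count,
// so the rotation speed no longer depends on how often paintGL is called.
static int prevTimeMs = 0;
const float rotationSpeed = 90.0f; // degrees per second (assumed value)

int currentTimeMs = glutGet(GLUT_ELAPSED_TIME);        // milliseconds since glutInit
float deltaSeconds = (currentTimeMs - prevTimeMs) / 1000.0f;
prevTimeMs = currentTimeMs;

if (rotationFlag) {
    rotationDegree += rotationSpeed * deltaSeconds;
    if (rotationDegree > 360.0f)
        rotationDegree -= 360.0f;
}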
I was experimenting with OpenGL fragment shaders by doing a huge blur (300*300) in two passes, one horizontal, one vertical.
I noticed that passing the direction as a uniform (vec2) is about 10 times slower than writing it directly in the code (the frame rate drops from 140 to 12 fps).
i.e.:
vec2 dir = vec2(0, 1) / textureSize(tex, 0);
int size = 150;
for(int i = -size; i != size; ++i) {
    float w = // compute weight here...
    acc += w * texture(tex, coord + vec2(i) * dir);
}
appears to be faster than:
uniform vec2 dir;
/*
...
*/
int size = 150;
for(int i = -size; i != size; ++i) {
    float w = // compute weight here...
    acc += w * texture(tex, coord + vec2(i) * dir);
}
Creating two programs with different uniforms doesn't change anything.
Does anyone know why there is such a huge difference, and why the driver doesn't see that "inlining" dir might be much faster?
EDIT: Taking size as a uniform also has an impact, but not as much as dir.
If you are interested in seeing what it looks like (FRAPS provides the fps counter), the linked screenshots were: uniform blur, "inline" blur, and no blur.
Quick notes: I am running on an NVIDIA GTX 760M using OpenGL 4.2 and GLSL 420. Also, puush's JPEG compression is responsible for the colors in the images.
A good guess would be that the UBOs are stored in shared memory, but might require an occasional round-trip to global memory (VRAM), while the non-uniform version stores that little piece of data in registers or constant memory.
However, since the OpenGL standard does not dictate where your data is stored, you would have to look at a profiler and try to gain a better understanding of how NVIDIA's GL implementation works.
I'd recommend you start by profiling, using NVIDIA PerfKit or NVIDIA Nsight for Visual Studio, even if you think it's too much trouble for now. If you want to write high-performance code, you should start getting used to the process. You will see how easy it gets eventually.
EDIT:
So why is it so much slower? Because in this case one failed optimization (the data not ending up in registers) can cause other (if not most other) optimizations to fail as well. And, coincidentally, those optimizations are absolutely necessary for GPU code to run fast.
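As a side note (my own sketch, not part of the answer above): if you need both the speed of the "inlined" constant and the ability to switch directions, one common approach is to compile two specialized program variants, baking the direction in through the preprocessor instead of a uniform. The helper name buildBlurSource and the macro BLUR_DIR are assumptions for illustration:
// Sketch: prepend a #define so the GLSL compiler sees the blur direction as a
// compile-time constant, keeping one source file but two specialized programs.
#include <string>

std::string buildBlurSource(const std::string& theBody, bool theHorizontal)
{
    std::string aSrc = "#version 420\n";
    aSrc += theHorizontal ? "#define BLUR_DIR vec2(1.0, 0.0)\n"
                          : "#define BLUR_DIR vec2(0.0, 1.0)\n";
    return aSrc + theBody; // the shader then uses BLUR_DIR instead of 'uniform vec2 dir'
}
In the fragment shader the direction would then become vec2 dir = BLUR_DIR / vec2(textureSize(tex, 0)); and you select the horizontal or vertical program per pass.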
It is generally very easy to call MEX files (written in C/C++) in MATLAB to speed up certain calculations. In my experience, however, the true bottleneck in MATLAB is data plotting. Creating handles is extremely expensive, and even if you only update handle data (e.g., XData, YData, ZData), this might take ages. Even worse, since MATLAB is a single-threaded program, it is impossible to update multiple plots at the same time.
Therefore my question: Is it possible to write a MATLAB GUI and call C++ (or some other parallelizable code) which would take care of the plotting/visualization? I'm looking for a cross-platform solution that will work on Windows, Mac and Linux, but any solution that gets me started on either OS is greatly appreciated!
I found a C++ library that seems to use MATLAB's plot() syntax, but I'm not sure whether this would speed things up, since I'm afraid that if I plot into MATLAB's figure() window, things might get slowed down again.
I would appreciate any comments and feedback from people who have dealt with this kind of situation before!
EDIT: obviously, I've already profiled my code and the bottleneck is the plotting (dozens of panels with lots of data).
EDIT2: for you to get the bounty, I need a real-life, minimal working example of how to do this - suggestive answers won't help me.
EDIT3: regarding the data to plot: in the most simplistic case, think about 20 line plots that need to be updated each second with something like 1000000 data points.
EDIT4: I know that this is a huge number of points to plot, but I never said that the problem was easy. I cannot just leave out certain data points, because there's no way of assessing which points are important before actually plotting them (the data is sampled at sub-ms time resolution). As a matter of fact, my data is acquired using a commercial data acquisition system which comes with a data viewer (written in C++). That program has no problem visualizing up to 60 line plots with even more than 1000000 data points.
EDIT5: I don't like where the current discussion is going. I'm aware that sub-sampling my data might speed things up - however, this is not the question. The question here is how to get a C/C++/Python/Java interface to work with MATLAB in order to hopefully speed up plotting by talking directly to the hardware (or using any other trick/way).
Did you try the trivial solution of changing the renderer to OpenGL?
opengl hardware;
set(gcf,'Renderer','OpenGL');
Warning!
Some things will disappear in this mode, and it will look a bit different, but generally plots will run much faster, especially if you have a hardware accelerator.
By the way, are you sure that you will actually gain a performance increase?
For example, in my experience, WPF graphics in C# are considerably slower than MATLAB's, especially scatter plots and circles.
Edit: I thought about the fact that the number of points actually drawn to the screen can't be that large. Basically, it means that you need to interpolate at the places where there is a pixel on the screen. Check out this object:
classdef InterpolatedPlot < handle
    properties(Access=private)
        hPlot;
    end

    methods(Access=public)
        function this = InterpolatedPlot(x,y,varargin)
            this.hPlot = plot(0,0,varargin{:});
            this.setXY(x,y);
        end
    end

    methods
        function setXY(this,x,y)
            parent = get(this.hPlot,'Parent');
            set(parent,'Units','Pixels')
            sz = get(parent,'Position');
            width = sz(3); %Actual width in pixels
            subSampleX = linspace(min(x(:)),max(x(:)),width);
            subSampleY = interp1(x,y,subSampleX);
            set(this.hPlot,'XData',subSampleX,'YData',subSampleY);
        end
    end
end
And here is an example of how to use it:
function TestALotOfPoints()
    x = rand(10000,1);
    y = rand(10000,1);
    ip = InterpolatedPlot(x,y,'color','r','LineWidth',2);
end
Another possible improvement:
Also, if your x data is sorted, you can use interp1q instead of interp1, which will be much faster.
classdef InterpolatedPlot < handle
    properties(Access=private)
        hPlot;
    end

%     properties(Access=public)
%         XData;
%         YData;
%     end

    methods(Access=public)
        function this = InterpolatedPlot(x,y,varargin)
            this.hPlot = plot(0,0,varargin{:});
            this.setXY(x,y);
%             this.XData = x;
%             this.YData = y;
        end
    end

    methods
        function setXY(this,x,y)
            parent = get(this.hPlot,'Parent');
            set(parent,'Units','Pixels')
            sz = get(parent,'Position');
            width = sz(3); %Actual width in pixels
            subSampleX = linspace(min(x(:)),max(x(:)),width);
            subSampleY = interp1q(x,y,transpose(subSampleX));
            set(this.hPlot,'XData',subSampleX,'YData',subSampleY);
        end
    end
end
And the use case:
function TestALotOfPoints()
    x = rand(10000,1);
    y = rand(10000,1);
    x = sort(x);
    ip = InterpolatedPlot(x,y,'color','r','LineWidth',2);
end
Since you want maximum performance, you should consider writing a minimal OpenGL viewer. Dump all the points to a file and launch the viewer using the "system" command in MATLAB. The viewer can be really simple. Here is one implemented using GLUT, compiled for Mac OS X. The code is cross-platform, so you should be able to compile it for all the platforms you mention. It should be easy to tweak this viewer for your needs.
If you are able to integrate this viewer more closely with MATLAB, you might be able to get away with not having to write to and read from a file (= much faster updates). However, I'm not experienced in this matter. Perhaps you can put this code in a MEX file?
EDIT: I've updated the code to draw a line strip from a CPU memory pointer.
// On Mac OS X, compile using: g++ -O3 -framework GLUT -framework OpenGL glview.cpp
// The file "input" is assumed to contain a line for each point:
// 0.1 1.0
// 5.2 3.0
#include <vector>
#include <sstream>
#include <fstream>
#include <iostream>
#include <GLUT/glut.h>
using namespace std;
struct float2 { float2() {} float2(float x, float y) : x(x), y(y) {} float x, y; };
static vector<float2> points;
static float2 minPoint, maxPoint;
typedef vector<float2>::iterator point_iter;
static void render() {
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(minPoint.x, maxPoint.x, minPoint.y, maxPoint.y, -1.0f, 1.0f);
    glColor3f(0.0f, 0.0f, 0.0f);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(points[0]), &points[0].x);
    glDrawArrays(GL_LINE_STRIP, 0, points.size());
    glDisableClientState(GL_VERTEX_ARRAY);
    glutSwapBuffers();
}

int main(int argc, char* argv[]) {
    ifstream file("input");
    string line;
    while (getline(file, line)) {
        istringstream ss(line);
        float2 p;
        ss >> p.x;
        ss >> p.y;
        if (ss)
            points.push_back(p);
    }
    if (!points.size())
        return 1;

    minPoint = maxPoint = points[0];
    for (point_iter i = points.begin(); i != points.end(); ++i) {
        float2 p = *i;
        minPoint = float2(minPoint.x < p.x ? minPoint.x : p.x, minPoint.y < p.y ? minPoint.y : p.y);
        maxPoint = float2(maxPoint.x > p.x ? maxPoint.x : p.x, maxPoint.y > p.y ? maxPoint.y : p.y);
    }
    float dx = maxPoint.x - minPoint.x;
    float dy = maxPoint.y - minPoint.y;
    maxPoint.x += dx*0.1f; minPoint.x -= dx*0.1f;
    maxPoint.y += dy*0.1f; minPoint.y -= dy*0.1f;

    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutInitWindowSize(512, 512);
    glutCreateWindow("glview");
    glutDisplayFunc(render);
    glutMainLoop();
    return 0;
}
EDIT: Here is new code based on the discussion below. It renders a sine function using 20 VBOs, each containing 100k points. 10k new points are added each rendered frame, for a total of 2M points. Performance is real-time on my laptop.
// On Mac OS X, compile using: g++ -O3 -framework GLUT -framework OpenGL glview.cpp
#include <vector>
#include <sstream>
#include <fstream>
#include <iostream>
#include <cmath>
#include <iostream>
#include <GLUT/glut.h>
using namespace std;
struct float2 { float2() {} float2(float x, float y) : x(x), y(y) {} float x, y; };
struct Vbo {
    GLuint i;
    Vbo(int size) { glGenBuffersARB(1, &i); glBindBufferARB(GL_ARRAY_BUFFER, i); glBufferDataARB(GL_ARRAY_BUFFER, size, 0, GL_DYNAMIC_DRAW); } // could try GL_STATIC_DRAW
    void set(const void* data, size_t size, size_t offset) { glBindBufferARB(GL_ARRAY_BUFFER, i); glBufferSubData(GL_ARRAY_BUFFER, offset, size, data); }
    ~Vbo() { glDeleteBuffers(1, &i); }
};

static const int vboCount = 20;
static const int vboSize = 100000;
static const int pointCount = vboCount*vboSize;
static float endTime = 0.0f;
static const float deltaTime = 1e-3f;
static std::vector<Vbo*> vbos;
static int vboStart = 0;

static void addPoints(float2* points, int pointCount) {
    while (pointCount) {
        if (vboStart == vboSize || vbos.empty()) {
            if (vbos.size() >= vboCount+2) { // remove and reuse vbo
                Vbo* first = *vbos.begin();
                vbos.erase(vbos.begin());
                vbos.push_back(first);
            }
            else { // create new vbo
                vbos.push_back(new Vbo(sizeof(float2)*vboSize));
            }
            vboStart = 0;
        }
        int pointsAdded = pointCount;
        if (pointsAdded + vboStart > vboSize)
            pointsAdded = vboSize - vboStart;
        Vbo* vbo = *vbos.rbegin();
        vbo->set(points, pointsAdded*sizeof(float2), vboStart*sizeof(float2));
        pointCount -= pointsAdded;
        points += pointsAdded;
        vboStart += pointsAdded;
    }
}

static void render() {
    // generate and add 10000 points
    const int count = 10000;
    float2 points[count];
    for (int i = 0; i < count; ++i) {
        float2 p(endTime, std::sin(endTime*1e-2f));
        endTime += deltaTime;
        points[i] = p;
    }
    addPoints(points, count);

    // render
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(endTime-deltaTime*pointCount, endTime, -1.0f, 1.0f, -1.0f, 1.0f);
    glColor3f(0.0f, 0.0f, 0.0f);
    glEnableClientState(GL_VERTEX_ARRAY);
    for (size_t i = 0; i < vbos.size(); ++i) {
        glBindBufferARB(GL_ARRAY_BUFFER, vbos[i]->i);
        glVertexPointer(2, GL_FLOAT, sizeof(float2), 0);
        if (i == vbos.size()-1)
            glDrawArrays(GL_LINE_STRIP, 0, vboStart);
        else
            glDrawArrays(GL_LINE_STRIP, 0, vboSize);
    }
    glDisableClientState(GL_VERTEX_ARRAY);
    glutSwapBuffers();
    glutPostRedisplay();
}

int main(int argc, char* argv[]) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutInitWindowSize(512, 512);
    glutCreateWindow("glview");
    glutDisplayFunc(render);
    glutMainLoop();
    return 0;
}
As a number of people have mentioned in their answers, you do not need to plot that many points. I think it is important to repeat Andrey's comment:
that is a HUGE amount of points! There isn't enough pixels on the screen to plot that amount.
Rewriting plotting routines in different languages is a waste of your time. A huge number of hours have gone into writing MATLAB; what makes you think you can write a significantly faster plotting routine (in a reasonable amount of time)? Whilst your routine may be less general, and would therefore skip some of the checks that the MATLAB code performs, your "bottleneck" is that you are trying to plot so much data.
I strongly recommend one of two courses of action:
Sample your data: you do not need 20 x 1000000 points on a figure - the human eye won't be able to distinguish between all the points, so plotting them all is a waste of time. Try binning your data, for example.
If you maintain that you need all those points on the screen, I would suggest using a different tool. VisIt or ParaView are two examples that come to mind. They are parallel visualisation programs designed to handle extremely large datasets (I have seen VisIt handle datasets containing petabytes of data).
There is no way you can fit 1000000 data points on a small plot. How about you choose one in every 10000 points and plot those?
You can consider calling imresize on the large vector to shrink it, but manually building a vector by omitting 99% of the points may be faster.
@memyself The sampling operations are already occurring. MATLAB is choosing what data to include in the graph. Why do you trust MATLAB? It looks to me like the graph you showed significantly misrepresents the data. The dense regions should indicate that the signal is at a constant value, but in your graph they could mean that the signal was at that value half the time - or was at that value at least once during the interval corresponding to that pixel?
Would it be possible to use an alternate architecture? For example, use MATLAB to generate the data and use a fast library or application (gnuplot?) to handle the plotting?
It might even be possible to have MATLAB write the data to a stream as the plotter consumes the data. Then the plot would be updated as MATLAB generates the data.
This approach would avoid MATLAB's ridiculously slow plotting and divide the work between two separate processes. The OS/CPU would probably assign the processes to different cores as a matter of course.
I think it's possible, but likely to require writing the plotting code (at least the parts you use) from scratch, since anything you could reuse is exactly what's slowing you down.
To test feasibility, I'd start by testing that any Win32 GUI works from MEX (call MessageBox), then proceed to creating your own window and testing that window messages arrive at your WndProc. Once all that's working, you can bind an OpenGL context to it (or just use GDI), and start plotting.
However, the savings are likely to come from simpler plotting code and the use of newer OpenGL features such as VBOs, rather than from threading. Everything is already parallel on the GPU, and more threads don't help transfer commands/data to the GPU any faster.
I did a very similar thing many many years ago (2004?). I needed an oscilloscope-like display for kilohertz sampled biological signals displayed in real time. Not quite as many points as the original question has, but still too many for MATLAB to handle on its own. IIRC I ended up writing a Java component to display the graph.
As other people have suggested, I also ended up down-sampling the data. For each pixel on the x-axis, I calculated the minimum and maximum values taken by the data, then drew a short vertical line between those values. The entire graph consisted of a sequence of short vertical lines, each immediately adjacent to the next.
Actually, I think that the implementation ended up writing the graph to a bitmap that scrolled continuously using bitblt, with only new points being drawn ... or maybe the bitmap was static and the viewport scrolled along it ... anyway it was a long time ago and I might not be remembering it right.
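For what it's worth, here is a minimal C++ sketch of that per-pixel min/max reduction (my reconstruction of the idea, not the original Java component; names like decimateForWidth are illustrative):
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch: collapse 'samples' to one (min, max) pair per horizontal pixel, so a
// trace of millions of points is drawn as 'width' short vertical lines.
struct MinMax { float minV, maxV; };

std::vector<MinMax> decimateForWidth(const std::vector<float>& samples, std::size_t width)
{
    std::vector<MinMax> out(width);
    if (samples.empty() || width == 0)
        return out;
    for (std::size_t px = 0; px < width; ++px)
    {
        std::size_t begin = px * samples.size() / width;
        std::size_t end   = (px + 1) * samples.size() / width;
        if (end <= begin) end = begin + 1;   // at least one sample per pixel column
        MinMax mm = { samples[begin], samples[begin] };
        for (std::size_t i = begin + 1; i < end && i < samples.size(); ++i)
        {
            mm.minV = std::min(mm.minV, samples[i]);
            mm.maxV = std::max(mm.maxV, samples[i]);
        }
        out[px] = mm;
    }
    return out; // draw each entry as a vertical line from minV to maxV at x = px
}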
Quoting EDIT4 from the question:
"I know that this is a huge number of points to plot, but I never said that the problem was easy. I cannot just leave out certain data points, because there's no way of assessing which points are important before actually plotting them."
This is incorrect. There is a way to know which points to leave out. MATLAB is already doing it. Something is going to have to do it at some point, no matter how you solve this. I think you need to redirect your problem to "how do I determine which points I should plot?".
Based on the screenshot, the data looks like a waveform. You might want to look at the code of Audacity, an open-source audio editing program. It displays plots representing the waveform in real time, and they look identical in style to the one in your lowest screenshot. You could borrow some sampling techniques from it.
What you are looking for is the creation of a MEX file.
Rather than me explaining it, you would probably benefit more from reading this: Creating C/C++ and Fortran Programs to be Callable from MATLAB (MEX-Files) (a documentation article from MathWorks).
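For orientation, the entry point of such a MEX file looks roughly like this (a minimal sketch; the mx/mex calls are the standard ones from the documentation linked above, everything else, including the file name, is illustrative):
// Sketch: minimal MEX gateway receiving a double vector from MATLAB.
// Compile from the MATLAB prompt with:  mex myplotter.cpp
#include "mex.h"

void mexFunction(int nlhs, mxArray* plhs[], int nrhs, const mxArray* prhs[])
{
    if (nrhs < 1 || !mxIsDouble(prhs[0]))
        mexErrMsgTxt("Expected a double vector as the first argument.");

    const double* data = mxGetPr(prhs[0]);
    const mwSize  n    = mxGetNumberOfElements(prhs[0]);

    // Hand 'data' (n samples) over to your own C++ plotting/visualization code here.
    mexPrintf("Received %d samples, first = %g.\n", (int)n, n > 0 ? data[0] : 0.0);
}
From MATLAB you would then call it like any other function, e.g. myplotter(yourData) (myplotter and yourData being placeholder names).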
Hope this helps.