Display list doesn't work the proper way - C++

I had already coded the display list, but when I opened the project again the next day it was gone, so I had to rewrite it. Now I can't get it to work properly anymore: the display list runs, but not the way it should, and the textures come out stretched somehow.
I load my world from a text file. Each platform is defined by a start x value, an end x value, the y value of the platform's bottom, u and v (which I don't use), and a filter index for choosing the texture.
setupworld() is the function that reads the text file and writes the values into a structure. numblocks is the number of platforms to display.
void setupworld()
{
    float xstart, xend, ystart, u, v;
    unsigned int filter;
    FILE *filein;
    char oneline[255];

    filein = fopen("data/world.txt", "rt");
    readstr(filein, oneline);
    sscanf(oneline, "Anzahl %d\n", &numblocks);   // "Anzahl" is German for "count": the number of platforms

    for (int loop = 0; loop < numblocks; loop++)
    {
        readstr(filein, oneline);
        sscanf(oneline, "%f %f %f %f %f %d", &xstart, &xend, &ystart, &u, &v, &filter);
        block.data[loop].xstart = xstart;
        block.data[loop].xend   = xend;
        block.data[loop].ystart = ystart;
        block.data[loop].u      = u;
        block.data[loop].v      = v;
        block.data[loop].filter = filter;
    }
    fclose(filein);
    return;
}
BuildLists() creates my display lists, but first loads the PNG files and the world, because they influence the display lists. This is the part I had to rewrite, and I just don't know where I made the mistake.
The outer loop creates the platforms and the inner one the blocks: each platform consists of a number of 2x2 blocks placed next to each other.
GLvoid BuildLists()
{
    texture[0] = LoadPNG("data/rock_gnd.png");
    texture[1] = LoadPNG("data/rock_wall2.png");
    texture[2] = LoadPNG("data/pilz_test.png");
    setupworld();

    quad[0] = glGenLists(numblocks);
    for (int loop = 0; loop < numblocks; loop++)
    {
        GLfloat xstart, xend, ystart, u, v;
        xstart = block.data[loop].xstart;
        xend   = block.data[loop].xend;
        ystart = block.data[loop].ystart;
        u      = block.data[loop].u;
        v      = block.data[loop].v;
        GLuint filter  = block.data[loop].filter;
        GLfloat blocks = (xend - xstart) / 2.0f;

        glNewList(quad[loop], GL_COMPILE);
        glBindTexture(GL_TEXTURE_2D, texture[filter]);
        for (int y = 0; y < blocks; y++)
        {
            glBegin(GL_QUADS);
            glTexCoord2f(0.0f, 0.0f);
            glVertex3f(xstart, ystart, -1.0f);
            glTexCoord2f(1.0f, 0.0f);
            glVertex3f(xstart + ((y + 1) * 2.0f), ystart, -1.0f);
            glTexCoord2f(1.0f, 1.0f);
            glVertex3f(xstart + ((y + 1) * 2.0f), ystart + 2.0f, -1.0f);
            glTexCoord2f(0.0f, 1.0f);
            glVertex3f(xstart, ystart + 2.0f, -1.0f);
            glEnd();
        }
        glEndList();
        quad[loop + 1] = quad[loop] + 1;
    }
}
The display lists are compiled during initialisation, just before enabling 2D textures. This is how I call them in my rendering code:
int DrawWorld(GLvoid)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    GLfloat camtrans = -xpos;
    glTranslatef(camtrans, 0, 0);
    glPushMatrix();
    for (int i = 0; i < numblocks; i++)
    {
        glCallList(quad[i]);
    }
    glPopMatrix();
    return TRUE;
}
So that's it. I think the mistake is in the BuildLists() function, but I'm not sure anymore.
Here is the link to my screenshot; as you can see, the textures look weird for some reason:
http://www.grenzlandzocker.de/test.png

In your code:
quad[0]=glGenLists(numblocks);
creates numblocks display lists, but you only get the ID of the first one, and it is stored in quad[0]. Later, you use:
glNewList(quad[loop],GL_COMPILE);
where quad[loop] is undefined for loop != 0. You can instead use:
glNewList(quad[0] + loop,GL_COMPILE);
because the IDs for display lists are contiguous, starting with the value of quad[0]. You also need to modify your rendering code to:
glCallList(quad[0] + i);
for the same reason.
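In other words, the structure becomes something like this (a minimal sketch only, reusing the globals from the question; the quad emission inside each list is elided):
// Sketch: one contiguous range of list IDs, addressed as quad[0] + offset.
GLvoid BuildLists()
{
    texture[0] = LoadPNG("data/rock_gnd.png");
    // ... load the other textures ...
    setupworld();
    quad[0] = glGenLists(numblocks);            // quad[0] is the first ID of the range
    for (int loop = 0; loop < numblocks; loop++)
    {
        glNewList(quad[0] + loop, GL_COMPILE);  // ID = base + offset
        glBindTexture(GL_TEXTURE_2D, texture[block.data[loop].filter]);
        // ... emit the quads for this platform ...
        glEndList();
    }
}

// And in DrawWorld():
for (int i = 0; i < numblocks; i++)
    glCallList(quad[0] + i);                    // call with the same base + offset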

Related

Using an array of int when trying to draw points in OpenGL using glVertex2i

I'm attempting to incorporate a pathfinding algorithm I made into code, but I'm running into a problem. I am trying to be flexible and allow data sets of different lengths, then draw the points using OpenGL. My problem is that I am using a dynamically allocated array for the points to get the variable length, and OpenGL doesn't seem to like that when converting data types. glVertex2i() wants GLint for its two parameters, but when I cast my array values to GLint I get a blank window. I understand GLint is a typedef, but it won't take the regular int from the array. Please help!
struct Points { int x, y; }; // My struct to hold the x,y coords

int size;                        // This is the size of the array
Points *crds = new Points[size]; // The data for this array was input in another function

for (int i = 0; i < size; i++)
{
    // These are some things to help configure the look of the points
    glEnable(GL_POINT_SMOOTH);
    glPointSize(100);
    glColor3f(250, 250, 250);
    glBegin(GL_POINTS);
    for (int i = 0; i < size; i++)
    {
        glVertex2i((GLint)crds[i].x, (GLint)crds[i].y);
    }
    glEnd();
}
Use GLint for the coords
because int is not guaranteed to be 32 bit: it differs from compiler to compiler and platform to platform and can be 16/32/64 bit these days. Your solution should work too, but if you use GLint then you do not need the (GLint) cast in glVertex2i, and you can also use the vector version, like this:
GLint pnt[100][2];
glVertex2iv(pnt[i]);
or like this:
GLint pnt[100<<1];
glVertex2iv(&pnt[i<<1]);
But the real problem lies in the following bullets...
Matrices
We do not see any matrices, nor the range of your points. OpenGL uses identity matrices by default, which means your points should be in <-1,+1> to be visible, which is not practical with integers. So if your points are in pixels and your screen resolution is xs,ys, you should add this before your render:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(-1.0,-1.0,0.0);
glScalef(2.0/float(xs),2.0/float(ys),0.0);
glColor3f(250,250,250)
The floating point range of colors in OpenGL is <0.0,1.0>, so you are setting an out-of-range color. Try this instead:
glColor3f(1.0,1.0,1.0);
glPointSize(100)
100 is too big, as the size is in pixels. Try:
glPointSize(8);
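Putting those bullets together, a minimal sketch of the render pass could look like this (xs, ys, crds and size are assumptions carried over from the discussion above, not tested code):
// Sketch: pixel-space projection, valid color range, reasonable point size.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(-1.0f, -1.0f, 0.0f);                   // map pixel coords into <-1,+1>
glScalef(2.0f / float(xs), 2.0f / float(ys), 1.0f);

glEnable(GL_POINT_SMOOTH);
glPointSize(8);                                     // point size in pixels
glColor3f(1.0f, 1.0f, 1.0f);                        // colors are in <0.0,1.0>

glBegin(GL_POINTS);
for (int i = 0; i < size; i++)
    glVertex2i((GLint)crds[i].x, (GLint)crds[i].y);
glEnd();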
That is all I can think of that could be wrong... Look here:
Drawing a line using individual pixels in OpenGl core
The related Q&A contains a working example for both the old and the new API.

OpenGL heightmap terrain render does not draw

I have a bitmap that I created from a DTED file. I want to use that bitmap to render a 3D terrain. I am trying to do it with vertex vectors, but I couldn't reach my goal at all. Here is my paintGL code:
void render::paintGL()
{
    glLoadIdentity();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glTranslatef(0, 0, -100);
    for (int z = 0; z < dted.width() - 1; z++) {
        glBegin(GL_TRIANGLE_STRIP);
        for (int x = 0; x < dted.height() - 1; x++) {
            glColor3f(matrix[x][z], matrix[x][z], matrix[x][z]);
            glVertex3f(x, matrix[x][z], -z);
            // vertex0 ends here
            glColor3f(matrix[x+1][z], matrix[x+1][z], matrix[x+1][z]);
            glVertex3f(x+1, matrix[x+1][z], -z);
            // vertex1 ends here
            glColor3f(matrix[x][z+1], matrix[x][z+1], matrix[x][z+1]);
            glVertex3f(x, matrix[x][z+1], -z-1);
            // vertex2 ends here
            glColor3f(matrix[x+1][z+1], matrix[x+1][z+1], matrix[x+1][z+1]);
            glVertex3f(x+1, matrix[x+1][z+1], -z-1);
            // vertex3 ends here
        }
        glEnd();
    }
}
I had to type the code instead of copy-pasting it because of internet issues, so there may be things I forgot, but I did my best. Let me explain the variables above:
dted is the DTED image I created; I take the loop bounds from its width and height. matrix is a 2D matrix that holds the elevation values I get from the pixels of the pre-created DTED image.
With the code above I couldn't manage to render the terrain. I don't know whether it renders the terrain and just doesn't show it to me, or whether it isn't rendering at all. If you can point out my mistake and suggest a solution, that would be great.
Sorry for any grammar mistakes, and I hope I explained my problem well. If the code I provided is not sufficient, feel free to mention that and I can write the other parts down too.
Have a nice day.
Edit: OK, some fresh news. I have managed to draw a plane after sorting out some mouse control issues. However, when I add the elevation values, nothing shows up. I guess that is because the amplitude is too large: when I divide it by a constant like 15, the same plane shows up again. Is this because I used GL_TRIANGLE_STRIP?
Your rendering code seems strange: you build your triangle strip with degenerate triangles and might access data which isn't there. I guessed the rest of your approach and tried to fix your method. Try this:
void render::paintGL()
{
    glLoadIdentity();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glTranslatef(0, 0, -100);
    for (int z = 0; z < dted.width() - 2; z++) {
        glBegin(GL_TRIANGLE_STRIP);
        for (int x = 0; x < dted.height() - 1; x++) {
            glColor3f(matrix[x][z], matrix[x][z], matrix[x][z]);
            glVertex3f(x, matrix[x][z], -z);
            // vertex0 ends here
            glColor3f(matrix[x][z + 1], matrix[x][z + 1], matrix[x][z + 1]);
            glVertex3f(x, matrix[x][z + 1], -z - 1);
            // vertex1 ends here
        }
        glEnd();
    }
}
Other than this, triangle strips are fine. If you still don't see anything, follow the suggestion by Sga and use GL_POINTS, then fix your camera/other rendering code until you see them.
For the other guys looking for a solution to this: make sure that your glFrustum's zNear parameter is far enough away to see things. Mine was too close, and when I increased it, the problem was solved.
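For reference, a minimal projection setup of the kind hinted at above might look like the sketch below (the field of view, aspect ratio and clip planes are illustrative assumptions, not values from the question, and width()/height() are assumed to be the widget's size; the point is that the translated terrain at z around -100 must lie between zNear and zFar):
// Sketch only: set up a perspective projection whose clip planes contain the terrain.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, double(width()) / double(height()), 1.0, 1000.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();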

OpenGL glRasterPos doesn't properly accept my values

I've been trying to add some basic HUD-like text to my OpenGL/C++ project.
Therefore I decided on
glutBitmapCharacter( type, character)
as my weapon of choice.
I realized that, for the easiest use, I should push a 2D matrix on top of my previous rendering setup:
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0, 10, 0, 10);
This should give me a sweet 10*10 2D area to play around in.
Now a problem arises because glutBitmapCharacter remembers its last character position and appends the next character there. That's pretty smart while it's writing a string, but it also makes my text fly all the way across the screen :) Naughty.
glRasterPos2f(x,y);
should be - as I heard - the normal way to reset this (although I don't perfectly understand why a gl function controls a glut function, but that's just a side note).
Now the weird thing that happens is, once I run this code:
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0, 10, 0, 10);
string message ="mmh... Pie...";
float poss[3];
glRasterPos3f(0.0, 0.0, 0.0);
glGetFloatv(GL_CURRENT_RASTER_POSITION, poss);
std::cout<< "X" <<(GLfloat) poss[0] << " Y"<<poss[1] << " Z"<<poss[2] << std::endl;
for (size_t i = 0; i < message.size(); ++i) {
    glutBitmapCharacter(GLUT_BITMAP_HELVETICA_18, message[i]);
}
glPopMatrix();
My text doesn't get shown. In fact, the resulting X, Y and Z (not even sure Z is necessary, it should be 0 anyway) are all inf (infinite).
If I don't set my own glRasterPos, things just work out fine, apart from the text leaving the screen. The returned positions are then a counting-up X and 0, 0, as expected.
So what's the deal here, what exactly am I doing wrong?
You produced a grand segmentation error: glGetFloatv(GL_CURRENT_RASTER_POSITION, poss) returns 4 values, so you must use:
float poss[4];
Apart from this, your code runs fine on my machine and prints:
X0 Y0 Z0.5
So the fault does not seem to be in the code you posted here. You should copy it into a basic "Hello World" GL program and see if it works there. Be careful that some OpenGL commands only work after a valid drawing surface has been initialized - I guess you did that?
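As a side note, here is a small sketch of how one might also verify that the raster position ended up valid before drawing (GL_CURRENT_RASTER_POSITION_VALID is a standard query; message is the string from the question, and the rest is untested):
// Sketch: set the raster position inside the 10x10 ortho space and verify it.
float poss[4];                 // the raster position query fills 4 floats (x, y, z, w)
GLboolean valid = GL_FALSE;

glRasterPos2f(0.5f, 0.5f);
glGetBooleanv(GL_CURRENT_RASTER_POSITION_VALID, &valid);
glGetFloatv(GL_CURRENT_RASTER_POSITION, poss);

if (valid)
    for (size_t i = 0; i < message.size(); ++i)
        glutBitmapCharacter(GLUT_BITMAP_HELVETICA_18, message[i]);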

Defining members of a struct within a namespace

I'm using NeHe's tutorial on FreeType and OpenGL, and I'm having a problem defining the members of struct font_data within namespace freetype. The compiler doesn't recognize font_data as a struct when I define its members in the namespace.
CE_Text.h:
#ifndef CE_TEXT
#define CE_TEXT
#include <Common/Headers.h>
/////////////////// MAJOR CREDIT TO NeHe FOR HIS TUTORIAL ON FREETYPE ///////////////////
///Wrap everything in a namespace, that we can use common
///function names like "print" without worrying about
///overlapping with anyone else's code.
namespace freetype {
//Inside of this namespace, give ourselves the ability
//to write just "vector" instead of "std::vector"
using std::vector;
//Ditto for string.
using std::string;
//This holds all of the information related to any
//freetype font that we want to create.
struct font_data{
float h; ///< Holds the height of the font.
GLuint * textures; ///< Holds the texture id's
GLuint list_base; ///< Holds the first display list id
//The init function will create a font of
//of the height h from the file fname.
void init(const char * fname, unsigned int h);
//Free all the resources associated with the font.
void clean();
};
//The flagship function of the library - this thing will print
//out text at window coordinates x,y, using the font ft_font.
//The current modelview matrix will also be applied to the text.
void print(const font_data &ft_font, float x, float y, const char *fmt) ;
}
#endif
CE_Text.cpp (my problem is at void font_data::init):
#include <Common/Headers.h>
using namespace freetype;
namespace freetype {
///This function gets the first power of 2 >= the
///int that we pass it.
inline int next_p2 ( int a )
{
int rval=1;
while(rval<a) rval<<=1;
return rval;
}
///Create a display list corresponding to the given character.
void make_dlist ( FT_Face face, char ch, GLuint list_base, GLuint * tex_base ) {
//The first thing we do is get FreeType to render our character
//into a bitmap. This actually requires a couple of FreeType commands:
//Load the Glyph for our character.
if(FT_Load_Glyph( face, FT_Get_Char_Index( face, ch ), FT_LOAD_DEFAULT ))
throw std::runtime_error("FT_Load_Glyph failed");
//Move the face's glyph into a Glyph object.
FT_Glyph glyph;
if(FT_Get_Glyph( face->glyph, &glyph ))
throw std::runtime_error("FT_Get_Glyph failed");
//Convert the glyph to a bitmap.
FT_Glyph_To_Bitmap( &glyph, ft_render_mode_normal, 0, 1 );
FT_BitmapGlyph bitmap_glyph = (FT_BitmapGlyph)glyph;
//This reference will make accessing the bitmap easier
FT_Bitmap& bitmap=bitmap_glyph->bitmap;
//Use our helper function to get the widths of
//the bitmap data that we will need in order to create
//our texture.
int width = next_p2( bitmap.width );
int height = next_p2( bitmap.rows );
//Allocate memory for the texture data.
GLubyte* expanded_data = new GLubyte[ 2 * width * height];
//Here we fill in the data for the expanded bitmap.
//Notice that we are using a two channel bitmap (one channel for
//luminosity and one for alpha), but we assign
//both luminosity and alpha to the value that we
//find in the FreeType bitmap.
//We use the ?: operator so that the value we use
//will be 0 if we are in the padding zone, and whatever
//is in the FreeType bitmap otherwise.
for(int j=0; j <height;j++) {
for(int i=0; i < width; i++){
expanded_data[2*(i+j*width)]= expanded_data[2*(i+j*width)+1] =
(i>=bitmap.width || j>=bitmap.rows) ?
0 : bitmap.buffer[i + bitmap.width*j];
}
}
//Now we just set up some texture parameters.
glBindTexture( GL_TEXTURE_2D, tex_base[ch]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
//Here we actually create the texture itself, notice
//that we are using GL_LUMINANCE_ALPHA to indicate that
//we are using 2 channel data.
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, width, height,
0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, expanded_data );
//With the texture created, we don't need the expanded data anymore
delete [] expanded_data;
//So now we can create the display list
glNewList(list_base+ch,GL_COMPILE);
glBindTexture(GL_TEXTURE_2D,tex_base[ch]);
glPushMatrix();
//first we need to move over a little so that
//the character has the right amount of space
//between it and the one before it.
glTranslatef(bitmap_glyph->left,0,0);
//Now we move down a little in the case that the
//bitmap extends past the bottom of the line
//(this is only true for characters like 'g' or 'y'.
glTranslatef(0,bitmap_glyph->top-bitmap.rows,0);
//Now we need to account for the fact that many of
//our textures are filled with empty padding space.
//We figure out what portion of the texture is used by
//the actual character and store that information in
//the x and y variables; then, when we draw the
//quad, we will only reference the parts of the texture
//that contain the character itself.
float x=(float)bitmap.width / (float)width,
y=(float)bitmap.rows / (float)height;
//Here we draw the texture mapped quads.
//The bitmap that we got from FreeType was not
//oriented quite like we would like it to be,
//so we need to link the texture to the quad
//so that the result will be properly aligned.
glBegin(GL_QUADS);
glTexCoord2d(0,0); glVertex2f(0,bitmap.rows);
glTexCoord2d(0,y); glVertex2f(0,0);
glTexCoord2d(x,y); glVertex2f(bitmap.width,0);
glTexCoord2d(x,0); glVertex2f(bitmap.width,bitmap.rows);
glEnd();
glPopMatrix();
glTranslatef(face->glyph->advance.x >> 6 ,0,0);
//increment the raster position as if we were a bitmap font.
//(only needed if you want to calculate text length)
//glBitmap(0,0,0,0,face->glyph->advance.x >> 6,0,NULL);
//Finish the display list
glEndList();
}
void font_data::init(const char * fname, unsigned int h) {
//Allocate some memory to store the texture ids.
textures = new GLuint[128];
this->h=h;
//Create and initialize a FreeType font library.
FT_Library library;
if (FT_Init_FreeType( &library ))
throw std::runtime_error("FT_Init_FreeType failed");
//The object in which Freetype holds information on a given
//font is called a "face".
FT_Face face;
//This is where we load in the font information from the file.
//Of all the places where the code might die, this is the most likely,
//as FT_New_Face will die if the font file does not exist or is somehow broken.
if (FT_New_Face( library, fname, 0, &face ))
throw std::runtime_error("FT_New_Face failed (there is probably a problem with your font file)");
//For some twisted reason, Freetype measures font size
//in terms of 1/64ths of pixels. Thus, to make a font
//h pixels high, we need to request a size of h*64.
//(h << 6 is just a prettier way of writing h*64)
FT_Set_Char_Size( face, h << 6, h << 6, 96, 96);
//Here we ask opengl to allocate resources for
//all the textures and displays lists which we
//are about to create.
list_base=glGenLists(128);
glGenTextures( 128, textures );
//This is where we actually create each of the fonts display lists.
for(unsigned char i=0;i<128;i++)
make_dlist(face,i,list_base,textures);
//We don't need the face information now that the display
//lists have been created, so we free the associated resources.
FT_Done_Face(face);
//Ditto for the library.
FT_Done_FreeType(library);
}
void font_data::clean() {
glDeleteLists(list_base,128);
glDeleteTextures(128,textures);
delete [] textures;
}
/// A fairly straightforward function that pushes
/// a projection matrix that will make object world
/// coordinates identical to window coordinates.
inline void pushScreenCoordinateMatrix() {
glPushAttrib(GL_TRANSFORM_BIT);
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(viewport[0],viewport[2],viewport[1],viewport[3]);
glPopAttrib();
}
/// Pops the projection matrix without changing the current
/// MatrixMode.
inline void pop_projection_matrix() {
glPushAttrib(GL_TRANSFORM_BIT);
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glPopAttrib();
}
///Much like Nehe's glPrint function, but modified to work
///with freetype fonts.
void print(const font_data &ft_font, float x, float y, const char *fmt, ...) {
// We want a coordinate system where coordinates correspond to window pixels.
pushScreenCoordinateMatrix();
GLuint font=ft_font.list_base;
float h=ft_font.h/.63f; //We make the line height about 1.5x that of the font
char text[256]; // Holds Our String
va_list ap; // Pointer To List Of Arguments
if (fmt == NULL) // If There's No Text
*text=0; // Do Nothing
else {
va_start(ap, fmt); // Parses The String For Variables
vsprintf(text, fmt, ap); // And Converts Symbols To Actual Numbers
va_end(ap); // Results Are Stored In Text
}
//Here is some code to split the text that we have been
//given into a set of lines.
//This could be made much neater by using
//a regular expression library such as the one available from
//boost.org (I've only done it by hand to avoid complicating
//this tutorial with unnecessary library dependencies).
const char *start_line=text;
vector<string> lines;
const char *c;
for(c=text;*c;c++) {
if(*c=='\n') {
string line;
for(const char *n=start_line;n<c;n++) line.append(1,*n);
lines.push_back(line);
start_line=c+1;
}
}
if(start_line) {
string line;
for(const char *n=start_line;n<c;n++) line.append(1,*n);
lines.push_back(line);
}
glPushAttrib(GL_LIST_BIT | GL_CURRENT_BIT | GL_ENABLE_BIT | GL_TRANSFORM_BIT);
glMatrixMode(GL_MODELVIEW);
glDisable(GL_LIGHTING);
glEnable(GL_TEXTURE_2D);
glDisable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glListBase(font);
float modelview_matrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelview_matrix);
//This is where the text display actually happens.
//For each line of text we reset the modelview matrix
//so that the line's text will start in the correct position.
//Notice that we need to reset the matrix, rather than just translating
//down by h. This is because when each character is
//drawn it modifies the current matrix so that the next character
//will be drawn immediately after it.
for(int i=0;i<lines.size();i++) {
glPushMatrix();
glLoadIdentity();
glTranslatef(x,y-h*i,0);
glMultMatrixf(modelview_matrix);
// The commented out raster position stuff can be useful if you need to
// know the length of the text that you are creating.
// If you decide to use it make sure to also uncomment the glBitmap command
// in make_dlist().
// glRasterPos2f(0,0);
glCallLists(lines[i].length(), GL_UNSIGNED_BYTE, lines[i].c_str());
// float rpos[4];
// glGetFloatv(GL_CURRENT_RASTER_POSITION ,rpos);
// float len=x-rpos[0];
glPopMatrix();
}
glPopAttrib();
pop_projection_matrix();
}
}
struct freetype::font_data{
should be
struct font_data{
The fact that font_data is in namespace freetype is already covered by the surrounding namespace freetype { }.
So, in fact, in your code you never actually declared any freetype::font_data type! It's as if you were attempting to declare a freetype::freetype::font_data type instead.
This is analogous to how you do not write:
struct T
{
void T::foo();
};
but instead:
struct T
{
void foo();
};
You have to include CE_Text.h in CE_Text.cpp for that to work. Without seeing the definition of the font_data class, the compiler will not allow you to define its members.
That's what it is telling you by "not recognizing font_data as a struct". Of course it is not recognizing it, since it is completely unknown in CE_Text.cpp.
According to your comments, you included your header files in a circular fashion. This is your problem right there. Never include header files circularly. Granted, your include guards make sure that the inclusion cycle gets broken in some way, but that does not in any way help your code to compile, as you can see in your example.
Until you completely remove all inclusion cycles from your code, trying to fix it is a crapshoot.
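As a concrete illustration of the include fix, a minimal sketch of how CE_Text.cpp could start (assuming Common/Headers.h does not already pull in CE_Text.h; the member bodies are elided):
// CE_Text.cpp - sketch of the structure only, not the full file.
#include <Common/Headers.h>
#include "CE_Text.h"        // the definition of struct font_data must be visible here

namespace freetype {

void font_data::init(const char *fname, unsigned int h) {
    // ... body as in the question ...
}

void font_data::clean() {
    // ... body as in the question ...
}

} // namespace freetype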

Is it possible to speed up MATLAB plotting by calling C/C++ code in MATLAB?

It is generally very easy to call MEX files (written in C/C++) in MATLAB to speed up certain calculations. In my experience, however, the true bottleneck in MATLAB is data plotting. Creating handles is extremely expensive, and even if you only update handle data (e.g., XData, YData, ZData), this can take ages. Even worse, since MATLAB is a single-threaded program, it is impossible to update multiple plots at the same time.
Therefore my question: is it possible to write a MATLAB GUI and call C++ (or some other parallelizable code) that would take care of the plotting/visualization? I'm looking for a cross-platform solution that will work on Windows, Mac and Linux, but any solution that gets me started on any one OS is greatly appreciated!
I found a C++ library that seems to use MATLAB's plot() syntax, but I'm not sure whether this would speed things up, since I'm afraid that if I plot into MATLAB's figure() window, things might get slowed down again.
I would appreciate any comments and feedback from people who have dealt with this kind of situation before!
EDIT: obviously, I've already profiled my code and the bottleneck is the plotting (dozens of panels with lots of data).
EDIT2: for you to get the bounty, I need a real-life, minimal working example of how to do this - suggestive answers won't help me.
EDIT3: regarding the data to plot: in the most simplistic case, think of 20 line plots that need to be updated every second with something like 1000000 data points.
EDIT4: I know that this is a huge amount of points to plot, but I never said that the problem was easy. I cannot just leave out certain data points, because there's no way of assessing which points are important before actually plotting them (the data is sampled at sub-ms time resolution). As a matter of fact, my data is acquired using a commercial data acquisition system which comes with a data viewer (written in C++). That program has no problem visualizing up to 60 line plots with even more than 1000000 data points.
EDIT5: I don't like where the current discussion is going. I'm aware that sub-sampling my data might speed things up - however, this is not the question. The question here is how to get a C/C++/Python/Java interface to work with MATLAB in order to (hopefully) speed up plotting by talking directly to the hardware (or by any other trick/way).
Did you try the trivial solution of changing the renderer to OpenGL?
opengl hardware;
set(gcf,'Renderer','OpenGL');
Warning!
There will be some things that disappear in this mode, and it will look a bit different, but generally plots will run much faster, especially if you have a hardware accelerator.
By the way, are you sure that you will actually gain a performance increase?
For example, in my experience, WPF graphics in C# are considerably slower than MATLAB's, especially scatter plots and circles.
Edit: I thought about the fact that the number of points that can actually be drawn on the screen can't be that large. Basically, it means that you only need to interpolate at the places where there is a pixel on the screen. Check out this object:
classdef InterpolatedPlot < handle
    properties(Access=private)
        hPlot;
    end

    methods(Access=public)
        function this = InterpolatedPlot(x,y,varargin)
            this.hPlot = plot(0,0,varargin{:});
            this.setXY(x,y);
        end
    end

    methods
        function setXY(this,x,y)
            parent = get(this.hPlot,'Parent');
            set(parent,'Units','Pixels')
            sz = get(parent,'Position');
            width = sz(3); %Actual width in pixels
            subSampleX = linspace(min(x(:)),max(x(:)),width);
            subSampleY = interp1(x,y,subSampleX);
            set(this.hPlot,'XData',subSampleX,'YData',subSampleY);
        end
    end
end
And here is an example how to use it:
function TestALotOfPoints()
    x = rand(10000,1);
    y = rand(10000,1);
    ip = InterpolatedPlot(x,y,'color','r','LineWidth',2);
end
Another possible improvement:
Also, if your x data is sorted, you can use interp1q instead of interp1, which will be much faster.
classdef InterpolatedPlot < handle
    properties(Access=private)
        hPlot;
    end

    % properties(Access=public)
    %     XData;
    %     YData;
    % end

    methods(Access=public)
        function this = InterpolatedPlot(x,y,varargin)
            this.hPlot = plot(0,0,varargin{:});
            this.setXY(x,y);
            % this.XData = x;
            % this.YData = y;
        end
    end

    methods
        function setXY(this,x,y)
            parent = get(this.hPlot,'Parent');
            set(parent,'Units','Pixels')
            sz = get(parent,'Position');
            width = sz(3); %Actual width in pixels
            subSampleX = linspace(min(x(:)),max(x(:)),width);
            subSampleY = interp1q(x,y,transpose(subSampleX));
            set(this.hPlot,'XData',subSampleX,'YData',subSampleY);
        end
    end
end
And the use case:
function TestALotOfPoints()
    x = rand(10000,1);
    y = rand(10000,1);
    x = sort(x);
    ip = InterpolatedPlot(x,y,'color','r','LineWidth',2);
end
Since you want maximum performance, you should consider writing a minimal OpenGL viewer. Dump all the points to a file and launch the viewer using the "system" command in MATLAB. The viewer can be really simple. Here is one implemented using GLUT, compiled for Mac OS X. The code is cross-platform, so you should be able to compile it for all the platforms you mention. It should be easy to tweak this viewer for your needs.
If you are able to integrate this viewer more closely with MATLAB, you might be able to get away with not having to write to and read from a file (= much faster updates). However, I'm not experienced in the matter. Perhaps you could put this code in a MEX file?
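For what it's worth, a bare-bones sketch of what such a MEX entry point could look like (just the gateway that receives an Nx2 double matrix of points; hooking it up to the viewer below is left out, and the file name plotmex.cpp is made up):
// plotmex.cpp - hypothetical MEX gateway, sketch only. Build in MATLAB with: mex plotmex.cpp
#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    if (nrhs != 1 || !mxIsDouble(prhs[0]) || mxGetN(prhs[0]) != 2)
        mexErrMsgTxt("Expected a single Nx2 double matrix of points.");

    const double *data = mxGetPr(prhs[0]);   // column-major: all x values first, then all y values
    const size_t n = mxGetM(prhs[0]);

    for (size_t i = 0; i < n; ++i) {
        double x = data[i];
        double y = data[i + n];
        // ... pass (x, y) on to the viewer / rendering code here ...
        (void)x; (void)y;
    }
}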
EDIT: I've updated the code to draw a line strip from a CPU memory pointer.
// On Mac OS X, compile using: g++ -O3 -framework GLUT -framework OpenGL glview.cpp
// The file "input" is assumed to contain a line for each point:
// 0.1 1.0
// 5.2 3.0
#include <vector>
#include <sstream>
#include <fstream>
#include <iostream>
#include <GLUT/glut.h>
using namespace std;
struct float2 { float2() {} float2(float x, float y) : x(x), y(y) {} float x, y; };
static vector<float2> points;
static float2 minPoint, maxPoint;
typedef vector<float2>::iterator point_iter;
static void render() {
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(minPoint.x, maxPoint.x, minPoint.y, maxPoint.y, -1.0f, 1.0f);
    glColor3f(0.0f, 0.0f, 0.0f);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(points[0]), &points[0].x);
    glDrawArrays(GL_LINE_STRIP, 0, points.size());
    glDisableClientState(GL_VERTEX_ARRAY);
    glutSwapBuffers();
}
int main(int argc, char* argv[]) {
    ifstream file("input");
    string line;
    while (getline(file, line)) {
        istringstream ss(line);
        float2 p;
        ss >> p.x;
        ss >> p.y;
        if (ss)
            points.push_back(p);
    }
    if (!points.size())
        return 1;
    minPoint = maxPoint = points[0];
    for (point_iter i = points.begin(); i != points.end(); ++i) {
        float2 p = *i;
        minPoint = float2(minPoint.x < p.x ? minPoint.x : p.x, minPoint.y < p.y ? minPoint.y : p.y);
        maxPoint = float2(maxPoint.x > p.x ? maxPoint.x : p.x, maxPoint.y > p.y ? maxPoint.y : p.y);
    }
    float dx = maxPoint.x - minPoint.x;
    float dy = maxPoint.y - minPoint.y;
    maxPoint.x += dx*0.1f; minPoint.x -= dx*0.1f;
    maxPoint.y += dy*0.1f; minPoint.y -= dy*0.1f;
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutInitWindowSize(512, 512);
    glutCreateWindow("glview");
    glutDisplayFunc(render);
    glutMainLoop();
    return 0;
}
EDIT: Here is new code based on the discussion below. It renders a sine function consisting of 20 VBOs, each containing 100k points. 10k new points are added each rendered frame. This makes a total of 2M points. The performance is real-time on my laptop.
// On Mac OS X, compile using: g++ -O3 -framework GLUT -framework OpenGL glview.cpp
#include <vector>
#include <sstream>
#include <fstream>
#include <iostream>
#include <cmath>
#include <iostream>
#include <GLUT/glut.h>
using namespace std;
struct float2 { float2() {} float2(float x, float y) : x(x), y(y) {} float x, y; };
struct Vbo {
    GLuint i;
    Vbo(int size) { glGenBuffersARB(1, &i); glBindBufferARB(GL_ARRAY_BUFFER, i); glBufferDataARB(GL_ARRAY_BUFFER, size, 0, GL_DYNAMIC_DRAW); } // could try GL_STATIC_DRAW
    void set(const void* data, size_t size, size_t offset) { glBindBufferARB(GL_ARRAY_BUFFER, i); glBufferSubData(GL_ARRAY_BUFFER, offset, size, data); }
    ~Vbo() { glDeleteBuffers(1, &i); }
};
static const int vboCount = 20;
static const int vboSize = 100000;
static const int pointCount = vboCount*vboSize;
static float endTime = 0.0f;
static const float deltaTime = 1e-3f;
static std::vector<Vbo*> vbos;
static int vboStart = 0;
static void addPoints(float2* points, int pointCount) {
    while (pointCount) {
        if (vboStart == vboSize || vbos.empty()) {
            if (vbos.size() >= vboCount+2) { // remove and reuse vbo
                Vbo* first = *vbos.begin();
                vbos.erase(vbos.begin());
                vbos.push_back(first);
            }
            else { // create new vbo
                vbos.push_back(new Vbo(sizeof(float2)*vboSize));
            }
            vboStart = 0;
        }
        int pointsAdded = pointCount;
        if (pointsAdded + vboStart > vboSize)
            pointsAdded = vboSize - vboStart;
        Vbo* vbo = *vbos.rbegin();
        vbo->set(points, pointsAdded*sizeof(float2), vboStart*sizeof(float2));
        pointCount -= pointsAdded;
        points += pointsAdded;
        vboStart += pointsAdded;
    }
}
static void render() {
    // generate and add 10000 points
    const int count = 10000;
    float2 points[count];
    for (int i = 0; i < count; ++i) {
        float2 p(endTime, std::sin(endTime*1e-2f));
        endTime += deltaTime;
        points[i] = p;
    }
    addPoints(points, count);
    // render
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(endTime-deltaTime*pointCount, endTime, -1.0f, 1.0f, -1.0f, 1.0f);
    glColor3f(0.0f, 0.0f, 0.0f);
    glEnableClientState(GL_VERTEX_ARRAY);
    for (size_t i = 0; i < vbos.size(); ++i) {
        glBindBufferARB(GL_ARRAY_BUFFER, vbos[i]->i);
        glVertexPointer(2, GL_FLOAT, sizeof(float2), 0);
        if (i == vbos.size()-1)
            glDrawArrays(GL_LINE_STRIP, 0, vboStart);
        else
            glDrawArrays(GL_LINE_STRIP, 0, vboSize);
    }
    glDisableClientState(GL_VERTEX_ARRAY);
    glutSwapBuffers();
    glutPostRedisplay();
}
int main(int argc, char* argv[]) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
    glutInitWindowSize(512, 512);
    glutCreateWindow("glview");
    glutDisplayFunc(render);
    glutMainLoop();
    return 0;
}
As a number of people have mentioned in their answers, you do not need to plot that many points. I think it is important to repeat Andrey's comment:
that is a HUGE amount of points! There isn't enough pixels on the screen to plot that amount.
Rewriting plotting routines in different languages is a waste of your time. A huge number of hours have gone into writing MATLAB; what makes you think you can write a significantly faster plotting routine (in a reasonable amount of time)? While your routine may be less general, and therefore would skip some of the checks that the MATLAB code performs, your "bottleneck" is that you are trying to plot so much data.
I strongly recommend one of two courses of action:
Sample your data: You do not need 20 x 1000000 points on a figure - the human eye won't be able to distinguish between all the points, so it is a waste of time. Try binning your data for example.
If you maintain that you need all those points on the screen, I would suggest using a different tool. VisIt or ParaView are two examples that come to mind. They are parallel visualisation programs designed to handle extremely large datasets (I have seen VisIt handle datasets containing petabytes of data).
There is no way you can fit 1000000 data points on a small plot. How about you choose one in every 10000 points and plot those?
You can consider calling imresize on the large vector to shrink it, but manually building a vector by omitting 99% of the points may be faster.
@memyself The sampling operations are already occurring. MATLAB is choosing which data to include in the graph. Why do you trust MATLAB? It looks to me like the graph you showed significantly misrepresents the data. The dense regions should indicate that the signal is at a constant value, but in your graph that could mean that the signal is at that value half the time - or was at that value at least once during the interval corresponding to that pixel.
Would it be possible to use an alternate architecture? For example, use MATLAB to generate the data and use a fast library or application (gnuplot?) to handle the plotting?
It might even be possible to have MATLAB write the data to a stream as the plotter consumes it. Then the plot would be updated as MATLAB generates the data.
This approach would avoid MATLAB's ridiculously slow plotting and divide the work between two separate processes. The OS/CPU would probably assign the processes to different cores as a matter of course.
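As a rough illustration of that split (nothing more than a sketch; the "x y" line format simply mirrors the input file used by the GLUT viewer above), the external plotter's input loop could read points from stdin while MATLAB pipes them in:
// Sketch: consume "x y" pairs from stdin as another process writes them.
#include <iostream>
#include <vector>

struct Point { float x, y; };

int main()
{
    std::vector<Point> points;
    Point p;
    while (std::cin >> p.x >> p.y) {
        points.push_back(p);
        // ... hand the new point to the rendering loop here ...
    }
    return 0;
}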
I think it's possible, but likely to require writing the plotting code (at least the parts you use) from scratch, since anything you could reuse is exactly what's slowing you down.
To test feasibility, I'd start by testing that any Win32 GUI works from MEX (call MessageBox), then proceed to creating your own window and test that window messages arrive at your WndProc. Once all that's going, you can bind an OpenGL context to it (or just use GDI) and start plotting.
However, the savings are likely to come from simpler plotting code and the use of newer OpenGL features such as VBOs, rather than from threading. Everything is already parallel on the GPU, and more threads don't make the transfer of commands/data to the GPU any faster.
I did a very similar thing many, many years ago (2004?). I needed an oscilloscope-like display for kilohertz-sampled biological signals displayed in real time. Not quite as many points as the original question has, but still too many for MATLAB to handle on its own. IIRC I ended up writing a Java component to display the graph.
As other people have suggested, I also ended up down-sampling the data. For each pixel on the x-axis, I calculated the minimum and maximum values taken by the data, then drew a short vertical line between those values. The entire graph consisted of a sequence of short vertical lines, each immediately adjacent to the next.
Actually, I think that the implementation ended up writing the graph to a bitmap that scrolled continuously using bitblt, with only new points being drawn ... or maybe the bitmap was static and the viewport scrolled along it ... anyway it was a long time ago and I might not be remembering it right.
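A sketch of that min/max down-sampling idea in C++ (one output pair per pixel column; the names and the fixed samples-per-pixel assumption are mine, not from the answer):
// Sketch: reduce `samples` to one (min, max) pair per pixel column.
#include <algorithm>
#include <utility>
#include <vector>

std::vector<std::pair<float, float>> minMaxPerPixel(const std::vector<float>& samples,
                                                    std::size_t pixelWidth)
{
    std::vector<std::pair<float, float>> out;
    if (samples.empty() || pixelWidth == 0) return out;
    out.reserve(pixelWidth);
    const std::size_t perPixel = std::max<std::size_t>(1, samples.size() / pixelWidth);
    for (std::size_t start = 0; start < samples.size(); start += perPixel) {
        const std::size_t end = std::min(samples.size(), start + perPixel);
        auto mm = std::minmax_element(samples.begin() + start, samples.begin() + end);
        out.emplace_back(*mm.first, *mm.second);   // draw a vertical line between these two values
    }
    return out;
}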
EDIT4: I know that this is a huge amount of points to plot but I never said that the problem was easy. I can not just leave out certain data points, because there's no way of assessing what points are important, before actually plotting them
This is incorrect. There is a way to know which points to leave out; MATLAB is already doing it. Something is going to have to do it at some point no matter how you solve this. I think you need to restate your problem as "how do I determine which points I should plot?".
Based on the screenshot, the data looks like a waveform. You might want to look at the code of Audacity. It is an open-source audio editing program. It displays plots representing the waveform in real time, and they look identical in style to the one in your lowest screenshot. You could borrow some sampling techniques from them.
What you are looking for is the creation of a MEX file.
Rather than me explaining it, you would probably benefit more from reading this: Creating C/C++ and Fortran Programs to be Callable from MATLAB (MEX-Files) (a documentation article from MathWorks).
Hope this helps.