Can't get OpenGL view to work in Qt application - C++

I'm trying to make a Qt application with OpenGL, but no matter what I do I can't get it to render anything; it just shows blackness.
void GraphView::initializeGL()
{
    qglClearColor(Qt::black);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 256.0, 256.0, 0.0, -1.0, 1.0);
    glColor3f(1.0, 1.0, 1.0);
    glBegin(GL_QUADS);
    glVertex3f(0.0, 0.0, 0.0);
    glVertex3f(0.0, 128.0, 0.0);
    glVertex3f(128.0, 128.0, 0.0);
    glVertex3f(128.0, 0.0, 0.0);
    glEnd();
    glFlush();
}
The only thing I've been able to get it to do is color the screen red, by changing the clear color to red. But even then, if I add the following code:
void GraphView::mousePressEvent(QMouseEvent *event)
{
    qglClearColor(Qt::red);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glFlush();
}
It seems to do absolutely nothing, but the color does change when the window resizes (the resize method is just an empty { }). Stranger still, if I open a second window in the same application with another GL widget, it displays part of the Firefox window, typically the bookmarks. I'm not sure why that happens, but it doesn't seem to display any other application.
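A likely fix, sketched under the assumption that GraphView derives from the legacy QGLWidget (which the qglClearColor call implies): do all drawing in paintGL() and only schedule repaints from event handlers, since Qt makes the context current and swaps buffers around paintGL() itself.

// Sketch: one-time state in initializeGL(), all drawing in paintGL(),
// repaints requested (never performed) from event handlers.
void GraphView::initializeGL()
{
    qglClearColor(Qt::black);              // one-time state setup only
}

void GraphView::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 256.0, 256.0, 0.0, -1.0, 1.0);
    glColor3f(1.0, 1.0, 1.0);
    glBegin(GL_QUADS);
    glVertex3f(0.0, 0.0, 0.0);
    glVertex3f(0.0, 128.0, 0.0);
    glVertex3f(128.0, 128.0, 0.0);
    glVertex3f(128.0, 0.0, 0.0);
    glEnd();
}

void GraphView::mousePressEvent(QMouseEvent *event)
{
    Q_UNUSED(event);
    qglClearColor(Qt::red);
    updateGL();                            // schedule a repaint; don't draw here
}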

Qt, openGL, widgets and offscreen rendering

I am developing on Red Hat Linux; cat /etc/redhat-release reports:
Red Hat Enterprise Linux Workstation release 7.2 (Maipo)
I am using Qt Creator 4.3.1:
Based on Qt 5.9.1 (GCC 5.3.1 20160406 (Red Hat 5.3.1-6), 64 bit)
The project I'm developing uses Qt 5.6.2 (GCC 64-bit). It has been developed with graphical objects derived from QWidget, including a live video stream.
Unfortunately we have experienced tearing in the video during playback, and it is also evident in other widgets displayed around the video; I believe this is because the video is not using vsync.
I believe using OpenGL will rectify this, so the aim is to rewrite the widgets, including the video playback, using OpenGL. I've spent several days searching but so far have failed to find a complete and working solution.
I've been looking at using QOpenGLWidget, in a widget I am using to test:
class clsElevStrip : public QOpenGLWidget, protected QOpenGLFunctions {
Q_OBJECT
In the constructor, I set up the format for offscreen rendering:
//Create surface format for rendering offscreen
mobjFormat.setDepthBufferSize(24);
mobjFormat.setSamples(4);
mobjFormat.setVersion(3, 0);
mobjFormat.setSwapBehavior(QSurfaceFormat::DoubleBuffer);
setFormat(mobjFormat);
In the paintGL method:
QOpenGLContext* pobjContext = context();
QSurface* pobjSurface = pobjContext->surface();
assert(pobjSurface != NULL);
int intSB1 = pobjSurface->format().swapBehavior();
qDebug() << (QString("paintGL:format: ") + QString::number(intSB1));
pobjContext->makeCurrent(pobjSurface);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBegin(GL_TRIANGLES);
glColor3f(1.0, 0.0, 0.0);
glVertex3f(-0.5, -0.5, 0);
glColor3f(0.0, 1.0, 0.0);
glVertex3f( 0.5, -0.5, 0);
glColor3f(0.0, 0.0, 1.0);
glVertex3f( 0.0, 0.5, 0);
glEnd();
pobjContext->swapBuffers(pobjSurface);
Nothing is visible on the main display; the debug statement shows the format as 2 (DoubleBuffer).
If I comment out the line in the constructor:
setFormat(mobjFormat);
The debug statement shows the format as 0 (DefaultSwapBehavior) and the graphics are visible. What have I missed?
The solution for your problem is simple:
Just don't do all that QOpenGLContext juggling. The whole point of paintGL is that this particular function is called inside a wrapper that already does all the context juggling for you. There is no need to call makeCurrent or swapBuffers; Qt already does that for you!
From the Qt documentation
void QOpenGLWidget::paintGL()

This virtual function is called whenever the widget needs to be painted. Reimplement it in a subclass.

There is no need to call makeCurrent() because this has already been done when this function is called.

Before invoking this function, the context and the framebuffer are bound, and the viewport is set up by a call to glViewport(). No other state is set and no clearing or drawing is performed by the framework.
If you have just this as your paintGL, it will show something, provided you have either a compatibility-profile OpenGL 3.x (or later) context or an OpenGL 2.x (or earlier) context. You're using the legacy fixed-function pipeline there, which will not work with OpenGL 3.x core-profile contexts!
void glwidget::paintGL(){
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glBegin(GL_TRIANGLES);
    glColor3f(1.0, 0.0, 0.0);
    glVertex3f(-0.5, -0.5, 0);
    glColor3f(0.0, 1.0, 0.0);
    glVertex3f( 0.5, -0.5, 0);
    glColor3f(0.0, 0.0, 1.0);
    glVertex3f( 0.0, 0.5, 0);
    glEnd();
}
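If you do need to request a specific format, make sure it is one the fixed-function calls can actually run on. A minimal sketch, assuming Qt 5.4 or later for QSurfaceFormat::setDefaultFormat:

#include <QSurfaceFormat>

// Sketch: request a context the fixed-function pipeline works on, either a
// legacy 2.x context or a 3.x+ compatibility profile. Must run before any
// widget is constructed.
QSurfaceFormat fmt;
fmt.setDepthBufferSize(24);
fmt.setSwapBehavior(QSurfaceFormat::DoubleBuffer);
fmt.setVersion(2, 1);  // legacy context: glBegin/glEnd remain available
// Alternatively: fmt.setVersion(3, 3);
//                fmt.setProfile(QSurfaceFormat::CompatibilityProfile);
QSurfaceFormat::setDefaultFormat(fmt);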

Texture Displayed with Green Hue

I'm learning how to apply textures in OpenGL. I have a fairly simple cube on which I am trying to apply a texture to make it look like a wooden board. When I apply my texture, it displays with a green hue. I can apply some other textures that look just fine, so I can't figure out what is wrong with this one. I created the texture by converting a downloaded jpg to a bmp. The bmp file looks fine when I view it in Preview (I'm on a Mac). I'll attach a screenshot of the original bitmap and also of how it looks when rendered by OpenGL.
The texture loading code that I am using can be found here. The drawing code is:
unsigned int texture[2]; // Texture names

// define the board
float square_edge = 1;
float border = 0.5;
float board_thickness = 0.25;
float board_corner = 4 * square_edge + border;
float board_width = 2 * board_corner;

GLfloat board_vertices[8][3] = {
    {-board_corner,  board_corner,  0.0},
    {-board_corner, -board_corner,  0.0},
    { board_corner, -board_corner,  0.0},
    { board_corner,  board_corner,  0.0},
    {-board_corner,  board_corner, -board_thickness},
    {-board_corner, -board_corner, -board_thickness},
    { board_corner, -board_corner, -board_thickness},
    { board_corner,  board_corner, -board_thickness}
};

void polygon(int a, int b, int c, int d) {
    glBindTexture(GL_TEXTURE_2D, texture[0]);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0, 0.0); glVertex3fv(board_vertices[a]);
    glTexCoord2f(1.0, 0.0); glVertex3fv(board_vertices[b]);
    glTexCoord2f(1.0, 1.0); glVertex3fv(board_vertices[c]);
    glTexCoord2f(0.0, 1.0); glVertex3fv(board_vertices[d]);
    glEnd();
}

void draw_board() {
    glPushMatrix();
    glRotatef(rotx, 1.0, 0.0, 0.0);
    glScalef(1/board_corner, 1/board_corner, 1/board_corner);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, board_vertices);
    glBindTexture(GL_TEXTURE_2D, texture[1]);
    glColor3f(1.0, 1.0, 1.0); // color of the border, sides, bottom of board
    // draw the board
    polygon(0,3,2,1);
    polygon(2,3,7,6);
    polygon(0,4,7,3);
    polygon(1,2,6,5);
    polygon(4,5,6,7);
    polygon(0,1,5,4);
    glPopMatrix();
}

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    glLoadIdentity();
    double Ex = -2*dim*Sin(th)*Cos(ph);
    double Ey = +2*dim*Sin(ph);
    double Ez = +2*dim*Cos(th)*Cos(ph);
    gluLookAt(Ex,Ey,Ez , 0,0,0 , 0,Cos(ph),0);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    draw_board();
    glFlush();
    glutSwapBuffers();
}
The problem was in the bmp image itself. As I mentioned, I created it from a jpg file by converting it via GIMP. Somehow this conversion went awry. When I run 'file wood.bmp' from the command line, the response is 'wood.bmp: data'. It should show something like 'wood.bmp: PC bitmap, Windows 3.x format, 128 x 128 x 24'. I did the conversion again using good ol' MS Paint, and the problem went away.
I'm still trying to find out how to do this in GIMP, but for now this all works. Thanks for all the suggestions!
When exporting a .bmp image with GIMP, make sure you select the "Do not write color space information" option under the Compatibility Options dropdown. .bmp images exported this way won't appear with a green hue when rendered with OpenGL.
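For anyone hitting the same symptom, a minimal sketch of a header sanity check; checkBmpHeader is a hypothetical helper (not part of the question's loader), and it assumes a little-endian machine. GIMP's color-space export typically writes a larger V4/V5 DIB header, which naive loaders expecting the 40-byte BITMAPINFOHEADER then misread, shifting the color channels.

#include <cstdint>
#include <cstdio>

// Returns true only for the plain 40-byte BITMAPINFOHEADER layout that
// simple tutorial loaders expect. Larger sizes (108 = V4, 124 = V5) are
// valid BMPs, but often trip up naive loaders.
bool checkBmpHeader(const char* path) {
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    unsigned char magic[2];
    uint32_t dibSize = 0;                           // read as little-endian
    bool ok = std::fread(magic, 1, 2, f) == 2
           && magic[0] == 'B' && magic[1] == 'M'
           && std::fseek(f, 14, SEEK_SET) == 0      // skip BITMAPFILEHEADER
           && std::fread(&dibSize, 4, 1, f) == 1;   // first DIB field: header size
    if (ok) std::printf("%s: DIB header size %u\n", path, dibSize);
    std::fclose(f);
    return ok && dibSize == 40;
}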

How to flash (flick) a 2d object at a certain frequency in opengl

I need to flash two 2D objects at a certain frequency. I am using OpenGL (GLUT) and C++ in Visual C++ Express Edition. The OS is Windows XP SP3, 32-bit.
I think I have successfully implemented the basic application, but I cannot figure out how to make the objects flash at a certain frequency. Do you have any suggestions? The code I've written is this:
void display(void) {
    glClear(GL_COLOR_BUFFER_BIT);

    glBegin(GL_POLYGON);
    glColor3f(0.0, 0.0, 0.0);
    glVertex3f(140.0, 250.0+300.0, 0.0);        // bottom left corner
    glVertex3f(140.0+300.0, 250.0+300.0, 0.0);  // bottom right corner
    glVertex3f(140.0+300.0, 250.0, 0.0);        // top right corner
    glVertex3f(140.0, 250.0, 0.0);
    glEnd();

    glBegin(GL_POLYGON);
    glColor3f(1.0, 1.0, 1.0);
    glVertex3f(640.0+200.0, 250.0+300.0, 0.0);        // bottom left corner
    glVertex3f(640.0+200.0+300.0, 250.0+300.0, 0.0);  // bottom right corner
    glVertex3f(640.0+200.0+300.0, 250.0, 0.0);        // top right corner
    glVertex3f(640.0+200.0, 250.0, 0.0);
    glEnd();

    glFlush();
}
int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowPosition(0,0);
    glutInitWindowSize(1280,800);
    glutGameModeString("1280x800:32@60");
    glutEnterGameMode();
    glutSetWindowTitle("OpenGL SSVEP stimulator");
    glDisable(GL_DEPTH_TEST);
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0,1280,800,0.0,0.0,1.0);
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
To flash I thought of something like this (pseudocode):
int leftFrequency = 12;
int rightFrequency = 20;
int i = 0;

while (running) {
    if (i % leftFrequency) {
        blackSquare;
    } else {
        whiteSquare;
    }
    if (i % rightFrequency) {
        blackSquare;
    } else {
        whiteSquare;
    }
}
But I do not know where to put this code. In the display() function, maybe? And where should I increment the i variable? I tried to put everything inside the display() function, but the two squares do not flash; the i variable only ever gets incremented up to 3, and I get no errors of any kind.
Maybe the flickering logic itself is not correct?
Make i a global variable (or make it a static local) and put your flashing code (the last code snippet) into the display function.
Enable double buffering with glutInitDisplayMode(… | GLUT_DOUBLE) and replace the glFlush() in display with glutSwapBuffers().
Add an increment of i at the end of display.
Register glutPostRedisplay as the GLUT idle function, i.e. in your main function before calling glutMainLoop():
glutIdleFunc(glutPostRedisplay);
If it's going to be a precise frequency, then you need to flash based on elapsed time rather than a frame count.
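A minimal sketch of that time-based approach; phaseOn and drawSquare are helpers introduced here (not from the question), the coordinates reuse the question's layout, and it assumes double buffering as the first answer suggests.

#include <GL/glut.h>

// A full on/off cycle lasts 1000/hz milliseconds; index the half-cycles to
// decide whether the square is currently in its "on" phase.
static bool phaseOn(int ms, int hz) {
    return ((ms * hz * 2) / 1000) % 2 == 0;
}

// Stand-in for the question's glBegin/glEnd quads: a 300x300 square with
// its top-left corner at (x, y), drawn in a grayscale shade.
static void drawSquare(float x, float y, float shade) {
    glColor3f(shade, shade, shade);
    glBegin(GL_POLYGON);
    glVertex3f(x,          y + 300.0f, 0.0f);
    glVertex3f(x + 300.0f, y + 300.0f, 0.0f);
    glVertex3f(x + 300.0f, y,          0.0f);
    glVertex3f(x,          y,          0.0f);
    glEnd();
}

void display(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    int ms = glutGet(GLUT_ELAPSED_TIME);  // milliseconds since glutInit()
    drawSquare(140.0f, 250.0f, phaseOn(ms, 12) ? 1.0f : 0.0f);  // left, 12 Hz
    drawSquare(840.0f, 250.0f, phaseOn(ms, 20) ? 1.0f : 0.0f);  // right, 20 Hz
    glutSwapBuffers();  // requires GLUT_DOUBLE in glutInitDisplayMode
}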

OpenGL: how to superimpose, in the same GL window, two drawings made with different viewports and glOrtho projections

In the Draw() function of a GLForm (OpenGL with Borland Builder), I first draw an image with an SDK function called capture_card::paintGL(). This function requires the following projection to be set up before it is called:
glViewport(0, 0, (GLsizei)newSize.width, (GLsizei)newSize.height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
On top of this painted image, I have to draw another layer that was already coded for a different viewport and glOrtho projection (set in MainResizeGL(), which is called on resize events):
glViewport(-width, -height, width * 2, height * 2);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(double(-width)*viewport_ratio, double(width)*viewport_ratio, double(-height), double(height), 1000.0, 100000.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
(and in MainDraw(), which is called by a timer):
glLoadIdentity();
glTranslatef(0.0, 0.0, -50000.0);
mpGLDrawScene->DrawScene(); // (this calls a doDrawScene(); I don't understand exactly how it draws, via this->parent, etc.)
glFlush();
SwapBuffers(ghDC);
So I changed MainDraw() to the following:
// viewport and projection needed for the paintGL
glViewport(0, 0, (GLsizei)newSize.width, (GLsizei)newSize.height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
// call of the paintGL
if(capture_button_clicked) capture_card::paintGL();
// content of MainResizeGL, to get back to the desired projection and to GL_MODELVIEW matrix mode
glViewport(-width, -height, width * 2, height * 2);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(double(-width)*viewport_ratio, double(width)*viewport_ratio, double(-height), double(height), 1000.0, 100000.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// original drawscene call
glLoadIdentity();
glTranslatef(0.0, 0.0, -50000.0);
mpGLDrawScene->DrawScene(); // (this calls a doDrawScene(); I don't understand exactly how it draws, via this->parent, etc.)
glFlush();
SwapBuffers(ghDC);
The result is that I see the original project's DrawScene items, but when I click the capture button, the paintGL image remains invisible and the drawn items turn into something like an alpha-channel canvas.
I tried adding glScalef(width, height, 1) after the paintGL call, and changing the glTranslatef(0,0,50000); the result was that I saw a small patch of pixels with the paintGL colors, but then the overlay items disappeared.
How can I get these two different viewports to superimpose (i.e. the DrawScene on top of the paintGL)?
Thanks in advance,
cheers,
Arnaud.
Use the scissor test together with the viewport:
glEnable(GL_SCISSOR_TEST);
glScissor(viewport.x, viewport.y, viewport.width, viewport.height);
glViewport(viewport.x, viewport.y, viewport.width, viewport.height);
and clear the depth buffer (only the depth buffer) between the different rendering stages.
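Put together, a sketch of the two-pass frame this answer describes; drawBackgroundVideo and drawOverlay are placeholders for capture_card::paintGL() and mpGLDrawScene->DrawScene(), and the projection values are copied from the question.

// Placeholders for the SDK's capture_card::paintGL() and the existing
// mpGLDrawScene->DrawScene() call.
void drawBackgroundVideo();
void drawOverlay();

void drawFrame(int width, int height, double viewport_ratio) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Pass 1: the video image, with the viewport/projection its SDK expects.
    glEnable(GL_SCISSOR_TEST);
    glScissor(0, 0, width, height);
    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    drawBackgroundVideo();

    // Clear only the depth buffer, so pass 2 always lands on top of pass 1.
    glClear(GL_DEPTH_BUFFER_BIT);

    // Pass 2: the overlay, with its own viewport/projection.
    glViewport(-width, -height, width * 2, height * 2);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(double(-width)*viewport_ratio, double(width)*viewport_ratio,
            double(-height), double(height), 1000.0, 100000.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0, 0.0, -50000.0);
    drawOverlay();
    glDisable(GL_SCISSOR_TEST);
    // SwapBuffers(ghDC) stays at the caller, as in the question.
}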

OpenGL and GLUT confusion

I completely fail to understand how OpenGL and GLUT work together. Please explain why it does this. =(
I have this simple code:
void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glPushMatrix();
    ChessboardSurrogate.Draw(6,8,50, 0,0); // draw a 6x8 chessboard at (0,0,0), 50 units per cell
    glPopMatrix();

    glPushMatrix();
    //glutSolidCube(100); //!!!!!
    glPopMatrix();

    glLoadIdentity();
    gluLookAt( 0.0, 0.0, -testSet::znear*2,
               0.0, 0.0, 0.0,
               0.0, 1.0, 0.0);
    glMultMatrixf(_data->m); // my transformation matrix

    glutSwapBuffers();
}
And I get the expected result (screenshot #1).
Then I uncomment glutSolidCube(100). Since I push/pop the current matrix around it, and later overwrite everything with the identity matrix anyway, I expected to see the same image plus the cube. But instead I see screenshot #2. Why?
If I add the code
glRotatef(angleX, 1.0, 0.0, 0.0);
glRotatef(angleY, 0.0, 1.0, 0.0);
before glutSwapBuffers, then I see that the chessboard stays in place (screenshot #3).
This is probably only half the answer, but why on earth are you setting your matrices AFTER drawing? What do you expect that to do?
So:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glLoadIdentity();
gluLookAt( 0.0, 0.0, -testSet::znear*2,
           0.0, 0.0, 0.0,
           0.0, 1.0, 0.0);
glMultMatrixf(_data->m); // my transformation matrix

glPushMatrix();
ChessboardSurrogate.Draw(6,8,50, 0,0); // draw the chessboard
glPopMatrix();

glPushMatrix();
//glutSolidCube(100); //!!!!!
glPopMatrix();

glutSwapBuffers();
Moreover, always make sure you're modifying the right matrix:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt()...
Lastly, unless your Draw() method changes the modelview matrix, your glPushMatrix/glPopMatrix pair is useless and should be avoided (for performance and portability reasons, since the matrix stack is deprecated).
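For illustration, a minimal sketch of the case where the push/pop pair does pay off, namely when the bracketed code really modifies the modelview matrix (the translation values here are assumptions, not from the question):

glMatrixMode(GL_MODELVIEW);
glPushMatrix();                    // save the camera/view matrix
glTranslatef(150.0f, 0.0f, 0.0f);  // per-object transform (assumed values)
glutSolidCube(100.0);              // drawn relative to that transform
glPopMatrix();                     // restore the saved matrix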