OpenGL C++ - unwanted black triangle appears

I started working on a simple GUI application using OpenGL. Drawing a simple background quad is causing a lot of difficulty, and I just can't spot what I am doing wrong or what's 'broken'. Here is the part of the code which is responsible for drawing my background 'quad':
static void Draw(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glColor3f(0.5f, 0.5f, 0.5f);
    glBegin(GL_QUADS);
    glVertex2f(-1.0f, -1.0f);
    glVertex2f(1.0f, -1.0f);
    glVertex2f(-1.0f, 1.0f);
    glVertex2f(1.0f, 1.0f);
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(600, 600);
    glutCreateWindow("Pro Sound");
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glutDisplayFunc(Draw);
    glutKeyboardFunc(keyPressed);
    glutKeyboardUpFunc(keyUp);
    glutMainLoop();
}
The output looks like a gray quad, but on top of it (right side) there is a black triangle with vertices at the centre, the top-right corner and the bottom-right corner. So my background looks like it was bitten by a triangle-toothed pixel monster. The thing is, it recently worked properly; I could even add some textures, position it well, etc.

It looks like you are drawing the quad vertices in the wrong order. Vertices should be ordered, by default, in a counter-clockwise fashion along the perimeter of the polygon.
So what you have is:
(-1.0f, -1.0f) Bottom-Left
( 1.0f, -1.0f) Bottom-Right
(-1.0f, 1.0f) Top-Left
( 1.0f, 1.0f) Top-Right
which creates a self-intersecting (bow-tie) quad: the triangle between the centre, the top-right corner and the bottom-right corner is never covered, which is exactly the black 'bite' you are seeing.
When you want:
(-1.0f, -1.0f) Bottom-Left
( 1.0f, -1.0f) Bottom-Right
( 1.0f, 1.0f) Top-Right
(-1.0f, 1.0f) Top-Left
which creates the full, uniformly gray quad you expect.
See also Face Culling and Primitive - Quads
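Applied to the Draw function from the question, the corrected ordering would look something like this (only the vertex order changes):
glColor3f(0.5f, 0.5f, 0.5f);
glBegin(GL_QUADS);
glVertex2f(-1.0f, -1.0f); // bottom-left
glVertex2f( 1.0f, -1.0f); // bottom-right
glVertex2f( 1.0f,  1.0f); // top-right
glVertex2f(-1.0f,  1.0f); // top-left
glEnd();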

Instead of using GL_QUADS and changing your coordinates, keep your coordinates and use GL_TRIANGLE_STRIP.
I'm fairly certain GL_QUADS is deprecated, and it seems to be good practice to use GL_TRIANGLE_STRIP or GL_TRIANGLE_FAN whether you are drawing quads or not.
The only time you should use GL_QUADS is when you are in immediate mode and sketching up a small program in which performance does not matter. (This may be one of those times; but for reinforcement and good measure, I always use GL_TRIANGLE_STRIP or GL_TRIANGLE_FAN.)
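For illustration, a sketch of the triangle-strip version: it keeps the exact coordinates from the question, because bottom-left, bottom-right, top-left, top-right already happens to be a valid strip order for a single quad.
glColor3f(0.5f, 0.5f, 0.5f);
glBegin(GL_TRIANGLE_STRIP);
glVertex2f(-1.0f, -1.0f); // bottom-left
glVertex2f( 1.0f, -1.0f); // bottom-right
glVertex2f(-1.0f,  1.0f); // top-left
glVertex2f( 1.0f,  1.0f); // top-right
glEnd();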

Related

OpenGL having trouble with Z-Buffer and Depth Test [duplicate]

This question already has answers here:
Can't get depth testing to work in OpenGL
(2 answers)
Closed 3 years ago.
I've been struggling with OpenGL's Z-buffer; I can't get it to work.
This is the code (I narrowed it down to the minimum necessary to show the problem):
#include <stdio.h>
#include <stdlib.h>
#include <glad\glad.h>
#include <SFML\Graphics.hpp>

#define WIDTH 800
#define HEIGHT 600

int main(void) {
    sf::ContextSettings settings;
    settings.antialiasingLevel = 16;
    settings.majorVersion = 4;
    settings.minorVersion = 6;
    sf::RenderWindow window(sf::VideoMode(WIDTH, HEIGHT), "Test", sf::Style::Close, settings);
    if (!gladLoadGL()) {
        printf("COULD NOT INITALIZE OPENGL CONTEXT\n");
    }
    window.setActive(true);
    window.setVerticalSyncEnabled(true);
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    while (window.isOpen()) {
        sf::Event event;
        while (window.pollEvent(event)) {
            if (event.type == sf::Event::Closed) {
                window.close();
            }
        }
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 0.0f, 0.0f);     // red
        glVertex3f(-1.0f, -1.0f, 0.25f); // first triangle
        glVertex3f( 1.0f, -1.0f, 0.25f);
        glVertex3f( 0.0f,  1.0f, 0.25f);
        glColor3f(0.0f, 0.0f, 1.0f);     // blue
        glVertex3f(-1.0f,  1.0f, 0.5f);  // second triangle
        glVertex3f( 1.0f,  1.0f, 0.5f);
        glVertex3f( 0.0f, -1.0f, 0.5f);
        glEnd();
        window.display();
    }
    return(0);
}
Output:
The blue triangle is on top of the red one because I drew it second, even though it should stay behind.
I first noticed this behaviour in my main OpenGL project, where I import 3D models and get triangles rendered in front of others, which suggests the Z-buffer is not doing its thing.
I tried many solutions, such as enabling GL_CULL_FACE and playing around with the near plane of the projection matrix (in the 3D project), but nothing seems to work. Every fix I found online that works for almost everyone else doesn't seem to work for me, so I'm getting kind of desperate...
If anyone knows the issue, let me know!
As commented by @Quimby, the solution can be found here.
The issue was that SFML needs you to set a depth buffer through the window settings.
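A minimal sketch of that fix, based on the setup code from the question (depthBits is a member of sf::ContextSettings; 24 bits is just a common choice):
sf::ContextSettings settings;
settings.depthBits = 24;          // request a depth buffer for the window's GL context
settings.antialiasingLevel = 16;
settings.majorVersion = 4;
settings.minorVersion = 6;
sf::RenderWindow window(sf::VideoMode(WIDTH, HEIGHT), "Test", sf::Style::Close, settings);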
It's been a while, but I think this is not a Z-buffer issue. OpenGL uses a right-handed coordinate system, and positive Z points out of the screen, towards the viewer. So your blue triangle, having the larger z values, should be in front. Smaller z values should be farther away, right? So I believe this is the correct behavior.
This seems like correct output to me. OpenGL has a flipped Z axis compared to DirectX: +1 is in front and -1 is at the back, so the "camera" looks down the negative Z direction.

OpenGL far clipping when using frustum

I'm using GLFW + OpenGL to try to make the "rotating cube". Although most of it is working, I get clipping at the far plane. I've tried changing the frustum values to very large numbers, but it seems to have no effect.
int main(void) {
    if (!glfwInit()) exit(EXIT_FAILURE);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 0);
    glfwWindowHint(GLFW_SAMPLES, 4); // 4x antialiasing
    GLFWwindow* window = glfwCreateWindow(640, 360, "3dTest", NULL, NULL);
    if (!window) {
        glfwTerminate();
        exit(EXIT_FAILURE);
    }
    glfwMakeContextCurrent(window);
    glfwSwapInterval(1);
    glClearColor(0.5f, 0.5f, 0.5f, 1.0f); // Grey Background
    float rotqube = 0;
    while (!glfwWindowShouldClose(window)) {
        // clear color and depth buffer for new frame
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // set up camera
        glViewport(0, 0, 640, 360);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustum(-100.0, 100.0, -100.0, 100.0, 100.0, -100.0);
        // position camera
        glTranslatef(0.0f, 0.0f, -2.0f);      // Translate Into The Screen 2.0 Units
        glRotatef(rotqube, 0.0f, 1.0f, 0.0f); // Rotate The cube around the Y axis
        glRotatef(rotqube, 1.0f, 1.0f, 1.0f);
        glBegin(GL_QUADS);                    // Draw The Cube Using quads
        glColor3f(0.0f, 1.0f, 0.0f);          // Color Green
        glVertex3f(1.0f, 1.0f, -1.0f);        // Top Right Of The Quad (Top)
        glVertex3f(-1.0f, 1.0f, -1.0f);       // Top Left Of The Quad (Top)
        glVertex3f(-1.0f, 1.0f, 1.0f);        // Bottom Left Of The Quad (Top)
        glVertex3f(1.0f, 1.0f, 1.0f);         // Bottom Right Of The Quad (Top)
        ...
        glColor3f(1.0f, 0.0f, 1.0f);          // Color Violet
        glVertex3f(1.0f, 1.0f, -1.0f);        // Top Right Of The Quad (Right)
        glVertex3f(1.0f, 1.0f, 1.0f);         // Top Left Of The Quad (Right)
        glVertex3f(1.0f, -1.0f, 1.0f);        // Bottom Left Of The Quad (Right)
        glVertex3f(1.0f, -1.0f, -1.0f);       // Bottom Right Of The Quad (Right)
        glEnd();                              // End Drawing The Cube
        rotqube += 0.3f;
        // Swap buffer and check for events
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwDestroyWindow(window);
    glfwTerminate();
    exit(EXIT_SUCCESS);
    return 0;
}
This is what it looks like:
You are not using a perspective projection at all. Your call
glFrustum(-100.0, 100.0, -100.0, 100.0, 100.0, -100.0);
has no effect whatsoever, besides setting the GL_INVALID_VALUE error state.
As stated in the OpenGL 2.0 specification, section 2.11 "Coordinate Transformations":
For
void Frustum( double l, double r, double b, double t, double n, double f );
the coordinates (l, b, −n)^T and (r, t, −n)^T
specify the points on the near clipping
plane that are mapped to the lower left and upper right corners of the window,
respectively (assuming that the eye is located at (0, 0, 0)^T). f gives the distance
from the eye to the far clipping plane. If either n or f is less than or equal to zero,
l is equal to r, b is equal to t, or n is equal to f, the error INVALID_VALUE results.
Trying to set up a projection where one of the near or far planes lies behind the camera does not make the slightest sense and would result in a lot of mathematical oddities during rendering (e.g. division by zero for vertices lying on the camera plane), hence it is not allowed.
Since this function fails with an error, you are still using the identity matrix as the projection matrix, and you do end up with an orthographic projection.
Now having written all that, I must make you aware that all of this is completely outdated. The fixed-function pipeline and the GL matrix stack, including functions like glFrustum, glLoadIdentity, glRotate, and immediate-mode rendering using glBegin/glEnd, are deprecated and were removed from core profiles of OpenGL almost a decade ago. It is a really bad idea to try to learn this stuff in 2018, and I strongly advise you to learn modern OpenGL instead.
glFrustum(-100.0, 100.0, -100.0, 100.0, 100.0, -100.0);
^ wat
glFrustum(GLdouble left, GLdouble right, GLdouble bottom, GLdouble top, GLdouble nearVal, GLdouble farVal):
Parameters:
left, right:
Specify the coordinates for the left and right vertical clipping planes.
bottom, top:
Specify the coordinates for the bottom and top horizontal clipping planes.
nearVal, farVal:
Specify the distances to the near and far depth clipping planes.
Both distances must be positive.
Try something like 0.1 to 100.0:
glFrustum(-100.0, 100.0, -100.0, 100.0, 0.1, 100.0);
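If you would rather derive the frustum bounds from a vertical field of view and the window's aspect ratio than hard-code them, a sketch along these lines should work (the fov, aspect and clip-plane values are only illustrative; tan comes from <math.h>):
const double kPi  = 3.14159265358979323846;
double fovY   = 60.0 * kPi / 180.0;   // vertical field of view in radians
double aspect = 640.0 / 360.0;        // window width / height
double zNear  = 0.1, zFar = 100.0;    // both must be positive, zNear < zFar
double top    = zNear * tan(fovY / 2.0);
double right  = top * aspect;

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-right, right, -top, top, zNear, zFar);
glMatrixMode(GL_MODELVIEW);   // model/view transforms (glTranslatef, glRotatef) belong here
glLoadIdentity();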

gluLookAt not working properly after drawing shapes

I am really confused about how gluLookAt and glOrtho or gluPerspective work together. Here is the problem.
I draw a 2D triangle and a 2D pentagon at a z of -5.
//pentagon
glVertex3f(0.5f, 0.5f, -5.0f);
glVertex3f(1.5f, 0.5f, -5.0f);
glVertex3f(0.5f, 1.0f, -5.0f);
glVertex3f(0.5f, 1.0f, -5.0f);
glVertex3f(1.5f, 0.5f, -5.0f);
glVertex3f(1.5f, 1.0f, -5.0f);
glVertex3f(0.5f, 1.0f, -5.0f);
glVertex3f(1.5f, 1.0f, -5.0f);
glVertex3f(1.0f, 1.5f, -5.0f);
//Triangle
glVertex3f(-0.5f, 0.5f, -5.0f);
glVertex3f(-1.0f, 1.5f, -5.0f);
glVertex3f(-1.5f, 0.5f, -5.0f);
And then I define my camera position (0, 0, -10) and the perspective:
//Tell OpenGL how to convert from coordinates to pixel values
glViewport(0, 0, w, h);
gluLookAt(0, 0, -10, 0, 0, -200, 0, 1, 0);
glMatrixMode(GL_PROJECTION); //Switch to setting the camera perspective
//Set the camera perspective
glLoadIdentity(); //Reset the camera
gluPerspective(45.0, //The camera angle
(double)w / (double)h, //The width-to-height ratio
1.0, //The near z clipping coordinate
5.2); //The far z clipping coordinate
Based on my understanding, I should see nothing in the scene: the objects are defined at a z of -5, while the camera is at z = -10 and looks down the negative z-axis, so the objects should be behind the camera. But why can I still see the objects in the scene?
Similarly, I can still see the objects when I place my camera at positive 5 and look towards the positive z-axis. Why?
Another question: why can I still see the objects after I set the far z clipping coordinate to 5?
Can anyone explain this?
My full code:
#include "stdafx.h"
#include <stdio.h>
#include <tchar.h>
#include <stdlib.h>
#include <GL/glut.h>
#include <math.h>
#include <iostream>
#include <stdlib.h> //Needed for "exit" function
//Include OpenGL header files, so that we can use OpenGL
#ifdef __APPLE__
#include <OpenGL/OpenGL.h>
#include <GLUT/glut.h>
#else
#include <GL/glut.h>
#endif
using namespace std;
//Called when a key is pressed
void handleKeypress(unsigned char key, //The key that was pressed
int x, int y) { //The current mouse coordinates
switch (key) {
case 27: //Escape key
exit(0); //Exit the program
}
}
//Initializes 3D rendering
void initRendering() {
//Makes 3D drawing work when something is in front of something else
glEnable(GL_DEPTH_TEST);
}
//Called when the window is resized
void handleResize(int w, int h) {
//Tell OpenGL how to convert from coordinates to pixel values
glViewport(0, 0, w, h);
gluLookAt(0, 0, -10, 0, 0, -200, 0, 1, 0);
glMatrixMode(GL_PROJECTION); //Switch to setting the camera perspective
//Set the camera perspective
glLoadIdentity(); //Reset the camera
gluPerspective(45.0, //The camera angle
(double)w / (double)h, //The width-to-height ratio
1.0, //The near z clipping coordinate
5.2); //The far z clipping coordinate
}
//Draws the 3D scene
void drawScene() {
//Clear information from last draw
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW); //Switch to the drawing perspective
glLoadIdentity(); //Reset the drawing perspective
glBegin(GL_QUADS); //Begin quadrilateral coordinates
//Trapezoid
glVertex3f(-0.7f, -1.5f, -5.0f);
glVertex3f(0.7f, -1.5f, -5.0f);
glVertex3f(0.4f, -0.5f, -5.0f);
glVertex3f(-0.4f, -0.5f, -5.0f);
glEnd(); //End quadrilateral coordinates
glBegin(GL_TRIANGLES); //Begin triangle coordinates
//Pentagon
glVertex3f(0.5f, 0.5f, -5.0f);
glVertex3f(1.5f, 0.5f, -5.0f);
glVertex3f(0.5f, 1.0f, -5.0f);
glVertex3f(0.5f, 1.0f, -5.0f);
glVertex3f(1.5f, 0.5f, -5.0f);
glVertex3f(1.5f, 1.0f, -5.0f);
glVertex3f(0.5f, 1.0f, -5.0f);
glVertex3f(1.5f, 1.0f, -5.0f);
glVertex3f(1.0f, 1.5f, -5.0f);
//Triangle
glVertex3f(-0.5f, 0.5f, -5.0f);
glVertex3f(-1.0f, 1.5f, -5.0f);
glVertex3f(-1.5f, 0.5f, -5.0f);
glEnd(); //End triangle coordinates
glutSwapBuffers(); //Send the 3D scene to the screen
}
int main(int argc, char** argv) {
//Initialize GLUT
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
glutInitWindowSize(400, 400); //Set the window size
//Create the window
glutCreateWindow("Basic Shapes - videotutorialsrock.com");
initRendering(); //Initialize rendering
//Set handler functions for drawing, keypresses, and window resizes
glutDisplayFunc(drawScene);
glutKeyboardFunc(handleKeypress);
glutReshapeFunc(handleResize);
glutMainLoop(); //Start the main loop. glutMainLoop doesn't return.
return 0; //This line is never reached
}
You're saying:
I draw a 2D triangle and a 2D pentagon at a z of -5
...
And then I define my camera position (0, 0, -10) and perspective
If that is really the order you're doing the operations in, then that's the culprit. OpenGL is a command-based API, not a scene graph. Calling glVertex(x, y, z) does not mean "there is a vertex with coordinates x, y, z in the scene." It means "now go and draw a vertex at coordinates x, y, z." This means the vertex is transformed by the modelview and projection matrices active at the time of the glVertex() call.
In other words, you issue the vertices with the default modelview and projection matrices, so they get drawn on the screen normally. Then you change the modelview and projection; if you then went on to issue any more vertices, they would be transformed by these new modelview and projection values. But the ones issued previously are already on the screen and unaffected.
In other words, remove these two lines from your draw function:
glMatrixMode(GL_MODELVIEW); //Switch to the drawing perspective
glLoadIdentity(); //Reset the drawing perspective
The comments seem to indicate the issue rather well.
The most common setup is to define the projection matrix in the resize hook (as it depends on the aspect ratio), and the view matrix (=camera position and orientation) in the draw function, before issuing any render commands.
As @datenwolf correctly pointed out in the comments, this is the common setup for single-view rendering. If you have more than one viewport, each of them will probably need a different projection matrix, in which case you would set up the projection matrix along with the view matrix in the rendering code, before issuing primitives.
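As a sketch of that common single-view setup, applied to the code from the question (the camera position and clip planes here are illustrative, chosen so the geometry at z = -5 stays inside the view volume):
//Called when the window is resized: projection only
void handleResize(int w, int h) {
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(45.0, (double)w / (double)h, 1.0, 200.0);
}

//Draws the 3D scene: view (camera) matrix first, then the primitives
void drawScene() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0, 0, 10,  0, 0, 0,  0, 1, 0); // camera at z = +10 looking towards -z
    // ... glBegin()/glVertex3f() calls go here, after the matrices are in place ...
    glutSwapBuffers();
}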

How to display a sphere correctly in openGL

I don't know very much about OpenGL/GLUT, but I've used it before successfully for some exceedingly simple things in 2D.
Now I want to be able to draw spheres in 3D. I'm trying to simulate particle collisions, so all I'm really going to need to do on the graphics end is draw spheres.
Here's my abortive attempt
void renderScene()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    // Set the camera
    gluLookAt(1.0f, 1.0f, 1.0f,
              0.0f, 0.0f, 0.0f,
              0.0f, 1.0f, 0.0f);
    glutSwapBuffers();
}

void timerProc(int arg)
{
    glutTimerFunc(50, timerProc, 0);
    // Reset transformations
    glLoadIdentity();
    // Set the camera
    gluLookAt(1.0f, 1.0f, 1.0f,
              0.0f, 0.0f, 0.0f,
              0.0f, 1.0f, 0.0f);
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glColor3f(0.0, 0.0, 0.0); //color = black
    glPushMatrix();
    glTranslated(0, 0, 0);
    glutSolidSphere(.74, 500, 500);
    glPopMatrix();
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    srand(time(NULL));
    init();
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowPosition(50, 30);
    glutInitWindowSize(glutGet(GLUT_SCREEN_WIDTH) - 80, glutGet(GLUT_SCREEN_HEIGHT) - 60);
    mainWindow = glutCreateWindow("New Window");  //global variable
    WIDTH = glutGet(GLUT_WINDOW_WIDTH);           //global variable
    HEIGHT = glutGet(GLUT_WINDOW_HEIGHT);         //global variable
    glutDisplayFunc(renderScene);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glutTimerFunc(50, timerProc, 0);
    glutMainLoop();
    return 0;
}
Hopefully all of my problems stem from one really basic mistake...
For some reason, this creates an oval. And though the oval is pretty big (maybe about 1/8th of the screen wide and tall), if I lower the radius to .73 it vanishes, I'm guessing because it's too small to see.
How would I make it so that this sphere shows up circular, like you'd expect, and so that I can see everything happening in a given volume, say a 10x10x10 box, the way you would if you were standing next to a box of particles flying around and peering into it (or a reasonable approximation)? Right now it's hard to tell exactly what I'm looking at (I know that I'm standing at 1,1,1 and looking at the origin, but it's hard to grasp exactly what I'm seeing).
Also, occasionally when I run it the whole screen is just black. Then when I clean, build and run again it's fine. Not really a huge concern, but annoying, and I'd love to understand what's going on.
Also, when the number of slices and stacks was lower, it would look fine if the radius was large but become extremely distorted when the radius was small, which I thought was very strange...
The main problem you are having here is Z clipping. The initial Z range for the scene is (-1, 1), so you only see a part of the actual sphere, and by changing its size you move it outside that Z range.
There are several problems I see in the code.
It is good to get a grasp of how the GLUT workflow actually works.
Let's see what the code does wrong.
Main
int main(int argc, char **argv)
{
srand(time(NULL));
init();
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
glutInitWindowPosition(50, 30);
glutInitWindowSize(glutGet(GLUT_SCREEN_WIDTH) - 80,
glutGet(GLUT_SCREEN_HEIGHT) - 60);
mainWindow = glutCreateWindow("New Window"); //global variable
WIDTH = glutGet(GLUT_WINDOW_WIDTH); //global variable
HEIGHT = glutGet(GLUT_WINDOW_HEIGHT); //global variable
glutDisplayFunc(renderScene);
Here you define the display function. It is called every time the window contents have to be redrawn (invalidated). In this case that happens only at the start. The renderScene function does not do anything awesome, it just clears the screen, so you get a black screen at the beginning.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
No need for blending at the moment. You can skip that part altogether.
glutTimerFunc(50, timerProc, 0);
Now you set up the timerProc function to be called in 50 milliseconds.
glutMainLoop();
As the documentation states: glutMainLoop enters the GLUT event processing loop. This routine should be called at most once in a GLUT program. Once called, this routine will never return. It will call as necessary any callbacks that have been registered.
return 0;
}
Render Scene
void renderScene()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
This is the only place where you clear the screen. The timer function does not do this.
glLoadIdentity();
You are resetting the matrices.
// Set the camera
gluLookAt(1.0f, 1.0f, 1.0f,
0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f);
Setting up the matrices (one matrix, to be precise).
glutSwapBuffers();
And without drawing anything you swap buffers.
}
The scene rendering function is called each time the window frame has to be redrawn.
Timer
This function relies on the screen having been cleared first by renderScene.
void timerProc(int arg)
{
glutTimerFunc(50, timerProc, 0);
// Reset transformations
glLoadIdentity();
// Set the camera
gluLookAt(1.0f, 1.0f, 1.0f,
0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f);
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
Not clearing this time, only setting the clear color.
glColor3f(0.0, 0.0, 0.0); //color = black
glPushMatrix();
glTranslated(0, 0, 0);
glutSolidSphere(.74, 500, 500);
glPopMatrix();
glutSwapBuffers();
}
How to fix it?
Just set up the matrices, with a proper Z range.
void resetTransformations() {
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-1, 1, -1, 1, -1000, 1000);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(1.0f, 1.0f, 1.0f,
              0.0f, 0.0f, 0.0f,
              0.0f, 1.0f, 0.0f);
}

void renderScene()
{
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Reset transformations
    resetTransformations();

    // Just to see some triangles
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);

    glColor3f(0.0, 0.0, 0.0); //color = black
    glPushMatrix();
    glTranslated(0, 0, 0);
    glutSolidSphere(0.74, 500, 500);
    glPopMatrix();

    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    srand(time(NULL));
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowPosition(50, 30);
    glutInitWindowSize(256, 256);
    mainWindow = glutCreateWindow("New Window");  //global variable
    WIDTH = glutGet(GLUT_WINDOW_WIDTH);           //global variable
    HEIGHT = glutGet(GLUT_WINDOW_HEIGHT);         //global variable
    glutDisplayFunc(renderScene);
    glutIdleFunc(renderScene);
    glutMainLoop();
    return 0;
}
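If you want to keep the timer-driven animation from the original code instead of redrawing continuously via glutIdleFunc, a common pattern (a sketch, not part of the answer above) is to let the timer only schedule a redraw and keep all drawing in the display callback:
void timerProc(int arg)
{
    glutTimerFunc(50, timerProc, 0); // re-arm the timer for the next frame
    glutPostRedisplay();             // ask GLUT to call renderScene again
}

// in main(), after glutDisplayFunc(renderScene):
glutTimerFunc(50, timerProc, 0);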

SDL2 + OpenGL colored geometry

I want to use OpenGL to draw on top of a webcam stream. I'm using an SDL_Surface named screen_surface_ containing the webcam data, which I render to the screen using:
SDL_UpdateTexture(screen_texture_, NULL, screen_surface_->pixels, screen_surface_->pitch);
SDL_RenderClear(renderer_);
SDL_RenderCopy(renderer_, screen_texture_, NULL, NULL);
Then I try to draw some geometry on top:
glLoadIdentity();
glColor3f(1.0, 0.0, 1.0);
glBegin( GL_QUADS );
glVertex3f( 10.0f, 50.0f, 0.0f ); /* Top Left */
glVertex3f( 50.0f, 50.0f, 0.0f ); /* Top Right */
glVertex3f( 50.0f, 10.0f, 0.0f ); /* Bottom Right */
glVertex3f( 10.0f, 10.0f, 0.0f ); /* Bottom Left */
glEnd( );
glColor3f(1.0, 1.0, 1.0); //<- I need this to make sure the webcam stream isn't pink?
SDL_RenderPresent(renderer_);
I have initialized OpenGL using (excerpt):
glDisable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);
glClearColor( 0.0f, 0.0f, 0.0f, 0.0f );
glViewport( 0, 0, res_width_, res_height_ );
glClear( GL_COLOR_BUFFER_BIT );
glMatrixMode( GL_PROJECTION );
glLoadIdentity();
glOrtho(0.0f, res_width_, res_height_, 0.0f, -1.0f, 1.0f);
Subquestion: If I don't reset the glColor to white, the whole webcam stream is colored pink. I find this odd, because I thought that SDL_RenderCopy had already rendered that texture before the first call to glColor. So how does SDL_RenderCopy actually work?
Main question: I get a neat 40x40 square in the top left of the screen, on top of my webcam feed (good!). However, instead of pink, it is a kind of flickering dark purple color, seemingly dependent on the camera feed in the background. Could you please tell me what I'm overlooking?
Edit:
As per @rodrigo's comment, these are some images with the color set to R, G, B and white, respectively:
Red Square
Green Square
Blue Square
White Square
Looking at these, it seems that the underlying texture has some effect on the color. Could it be that OpenGL is still applying (some part of) the texture to the quad?
Edit:
I now suspect that the geometry is drawn using the render texture as a texture, even though I've called glDisable(GL_TEXTURE_2D). Looking at the "White Square" screenshot, you can see that the white quad is the same color as the bottom-right pixel. I guess that the quad has no texture coordinates, so only the bottom-right texel is used. Knowing this, the better question is: how do I disable texturing?
I have fixed the problem by simply adding
glcontext_ = SDL_GL_CreateContext(window_);
to the SDL_Init code. I think all my calls to OpenGL functions (like glDisable(GL_TEXTURE_2D)) were applied to the wrong context. Explicitly creating a context from SDL sets the right context to be active, I'm guessing.
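A sketch of roughly how the initialization ends up ordered (the window title and flags are placeholders; the key point is that SDL_GL_CreateContext runs before any gl* calls):
window_    = SDL_CreateWindow("webcam overlay", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                              res_width_, res_height_, SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN);
renderer_  = SDL_CreateRenderer(window_, -1, SDL_RENDERER_ACCELERATED);
glcontext_ = SDL_GL_CreateContext(window_); // make an OpenGL context current...
glDisable(GL_TEXTURE_2D);                   // ...before any gl* state calls like these
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0f, res_width_, res_height_, 0.0f, -1.0f, 1.0f);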
Something else to look out for: when using textured geometry after using SDL_RenderCopy, I had to make sure to reset
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
before calling glTexImage2D, because it uses
GL_UNPACK_ROW_LENGTH if it is greater than 0, the width argument to the pixel routine otherwise
(from the OpenGL docs)
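For example, a short sketch of that reset before a texture upload (width, height and pixels are placeholders for your own image data):
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0); // back to "use the width argument" behaviour
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // optional: assume tightly packed rows
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);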