How to blank my OpenGL texture

I created an OpenGL texture (GL_TEXTURE_2D), made an OpenCL image2d_t buffer out of it using clCreateFromGLTexture(), and I run my OpenCL kernel to draw to the texture, using clEnqueueAcquireGLObjects and clEnqueueReleaseGLObjects before and after. I then display the result in OpenGL like this (I'm trying to simply draw my framebuffer texture to the window with no scaling; is this even the right way to do it?):
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex2f(0.0f, 0.0f);
glTexCoord2f(1.0, 0.0); glVertex2f(1920.0f, 0.0f);
glTexCoord2f(1.0, 1.0); glVertex2f(1920.0f, 1080.0f);
glTexCoord2f(0.0, 1.0); glVertex2f(0.0f, 1080.0f);
glEnd();
SDL_GL_SwapWindow(window);
It works great, except for one problem: I can't figure out how to fill the texture with zeroes in order to blank it, so I see old pixels where no new ones were written.
Apparently there's no way to blank an OpenCL image object from the host, so I'd have to do it in OpenGL (before calling clEnqueueAcquireGLObjects, of course), so I tried this:
glClearColor(0,0,0,0);
glClear(GL_COLOR_BUFFER_BIT);
However, this doesn't do anything to the texture. It does do something on screen when I comment out the GL_QUADS block above; it's as if glClear acts on the screen directly rather than on the texture itself. It's confusing, and I'm not very familiar with OpenGL. How do I blank my texture?

OpenGL's glClear clears the currently bound framebuffer. To make it operate on a texture, create a framebuffer object, attach the texture to it, bind the framebuffer object for drawing operations, and clear it; that clears the attached texture.
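A minimal sketch of that approach, assuming an FBO-capable context (fbo is a new name introduced here; texture is the texture from the question):

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, texture, 0);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);           // clears the texture, not the window
glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer

As the question already notes, this has to happen before clEnqueueAcquireGLObjects hands the texture over to OpenCL.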

Related

Drawing a simple rectangle in OpenGL 4

According to this wikibook it used to be possible to draw a simple rectangle as easily as this (after creating and initializing the window):
glColor3f(0.0f, 0.0f, 0.0f);
glRectf(-0.75f,0.75f, 0.75f, -0.75f);
This has, however, been removed in OpenGL 3.2 and later versions.
Is there some other simple, quick and dirty, way in OpenGL 4 to draw a rectangle with a fixed color (without using shaders or anything fancy)?
Is there some ... way ... to draw a rectangle ... without using shaders ...?
Yes. In fact, AFAIK, it is supported on all OpenGL versions in existence: you can draw a solid rectangle by enabling the scissor test and clearing the framebuffer:
glEnable(GL_SCISSOR_TEST);
glScissor(x, y, width, height);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
This is different from glRect in multiple ways:
The coordinates are specified in pixels relative to the window origin.
The rectangle must be axis aligned and cannot be transformed in any way.
Most of the per-sample processing is skipped. This includes blending, depth and stencil testing.
However, I'd rather discourage you from doing this. You're likely to be better off building a VAO with all the rectangles you want to draw on the screen, then drawing them all with a very simple shader, as sketched below.
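For reference, here is a rough sketch of that route; it assumes an OpenGL 3.3 core context with a function loader already set up, and omits error checking:

const char* vs =
    "#version 330 core\n"
    "layout(location = 0) in vec2 pos;\n"
    "void main() { gl_Position = vec4(pos, 0.0, 1.0); }\n";
const char* fs =
    "#version 330 core\n"
    "out vec4 color;\n"
    "void main() { color = vec4(0.0, 0.0, 0.0, 1.0); }\n";

// Two triangles forming the rectangle, as a triangle strip in NDC.
const float verts[] = { -0.75f, -0.75f,  -0.75f, 0.75f,
                         0.75f, -0.75f,   0.75f, 0.75f };

// One-time setup: compile and link the trivial shaders...
GLuint prog = glCreateProgram();
GLuint v = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(v, 1, &vs, NULL);
glCompileShader(v);
glAttachShader(prog, v);
GLuint f = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(f, 1, &fs, NULL);
glCompileShader(f);
glAttachShader(prog, f);
glLinkProgram(prog);

// ...and upload the vertices into a VAO/VBO pair.
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, NULL);

// Per frame:
glUseProgram(prog);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);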

Drawing a thick line with legacy OpenGL (immediate mode) in C++

I wanted to create a thick line using the OpenGL library in C++, but it is not working.
I tried this code:
glBegin(GL_LINES);
glLineWidth(3);
glVertex2f(-0.7f, -1.0f);
glVertex2f(-0.7f, 1.0f);
glEnd();
Is there something wrong here?
Note that rendering with glBegin/glEnd sequences, as well as glLineWidth, is deprecated. See OpenGL Line Width for a solution using "modern" OpenGL.
It is not allowed to call glLineWidth within a glBegin/glEnd sequence. Set the line width before:
glLineWidth(3);
glBegin(GL_LINES);
glVertex2f(-0.7f, -1.0f);
glVertex2f(-0.7f, 1.0f);
glEnd();
Once drawing of primitives has been started by glBegin, you are only allowed to specify vertex coordinates (glVertex) and change attributes (e.g. glColor, glTexCoord, etc.) until the drawing is ended by glEnd.
Any other instruction is ignored and sets a GL_INVALID_OPERATION error, which can be retrieved with glGetError.
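For example, with the original (broken) ordering, a check right after glEnd would report it (an illustrative snippet, not from the original answer):

glBegin(GL_LINES);
glLineWidth(3);                 // invalid here: ignored, error recorded
glVertex2f(-0.7f, -1.0f);
glVertex2f(-0.7f, 1.0f);
glEnd();
GLenum err = glGetError();      // returns GL_INVALID_OPERATION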

Drawing a primitive ( GL_QUADS ) on top of a 2D texture - no quad rendered, texture colour changed

I am trying to draw a 2D scene with a texture as background and then (as the program flows and does computations) draw different primitives on the "canvas". As a test case I wanted to draw a blue quad on the background image.
I have looked at several resources and SO questions to try to get the information I need to accomplish the task (e.g. this tutorial for first primitive rendering, the SOIL "example" for texture loading).
My understanding was that the texture would be drawn at Z=0, and the quad as well. The quad would thus "cover" a portion of the texture, i.e. be drawn on it, which is what I want. Instead, the result of my display function is my initial texture in black/blue colour, and not my texture (in its original colour) with a blue quad drawn on it. This is the display function code:
void display(void) {
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    // background render
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f); // window size is 1024x512
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);
    glBegin(GL_QUADS);
    glTexCoord2d(0.0, 0.0); glVertex2d(0.0, 0.0);
    glTexCoord2d(1.0, 0.0); glVertex2d(1024.0, 0.0);
    glTexCoord2d(1.0, 1.0); glVertex2d(1024.0, 512.0);
    glTexCoord2d(0.0, 1.0); glVertex2d(0.0, 512.0);
    glEnd(); // here I get the texture properly displayed in window
    glDisable(GL_TEXTURE_2D);
    // foreground render
    glLoadIdentity();
    gluPerspective(60, (GLfloat)winWidth / (GLfloat)winHeight, 1.0, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glColor3f(0.0, 0.0, 1.0);
    glBegin(GL_QUADS);
    glVertex2d(400.0, 100.0);
    glVertex2d(400.0, 500.0);
    glVertex2d(700.0, 100.0);
    glVertex2d(700.0, 500.0);
    glEnd(); // now instead of a rendered blue quad I get my texture coloured in blue
    glutSwapBuffers();
}
I have already tried many modifications, but since I am just beginning with OpenGL and don't yet understand a lot of it, my attempts failed. For example, I tried pushing and popping matrices before and after drawing the quad, clearing the depth buffer, changing parameters in gluPerspective, etc.
How do I have to modify my code so it renders the quad properly on top of the background texture image of my 2D scene? Being a beginner, extra explanations of the modifications (as well as of mistakes in the present code) and of the principles in general will be greatly appreciated.
EDIT - after the answer by Reto Koradi:
I have tried to follow the instructions, and the modified code now looks like this:
// foreground render
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glOrtho(0.0f, 1024.0, 512.0, 0.0, 0.0, 1.f);
glColor3f(0.0, 0.0, 1.0);
glBegin (GL_QUADS); // same from here on
Now I can see the blue "quad", but it is not displayed properly; it looks something like this.
Besides that, the whole scene is flashing really quickly.
What do I have to change in my code so that the quad is displayed properly and the screen won't flash?
You are setting up a perspective transformation before rendering the blue quad:
glLoadIdentity();
gluPerspective (60, (GLfloat)winWidth / (GLfloat)winHeight, 1.0, 100.0);
The way gluPerspective() is defined, it sets up a transformation that looks from the origin down the negative z-axis, with the near and far values specifying the distance range that will be visible. With this transformation, z-values from -1.0 to -100.0 will be visible. Which does not include your quad at z = 0.0.
If you want to draw your quad in 2D coordinate space, the easiest solution is to not use gluPerspective() at all. Just use a glOrtho() type transformation like you did for your initial drawing.
If you want perspective, you will need a GL_MODELVIEW transformation as well. You can start with a translation in the negative z-direction, within a range of 1.0 to 100.0. You may have to adjust your coordinates for the different coordinate system as well, or use additional transformations that also translate in xy-direction, and possibly scale.
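For the 2D route, the foreground pass could set up its matrices the way the background pass does (a sketch based on the question's code, not a verbatim fix):

// foreground render, in the same pixel coordinate space as the background
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 1024.0, 512.0, 0.0, -1.0, 1.0); // z = 0 lies inside the depth range
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glColor3f(0.0f, 0.0f, 1.0f);
// ...then draw the quad as shown below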
The code also has the coordinates in the wrong order for drawing the blue quad. You either have to change the draw call to GL_TRIANGLE_STRIP (recommended because it at least gets you one step closer to using features that are not deprecated), or swap the order of the last two vertices:
glBegin (GL_QUADS);
glVertex2d(400.0,100.0);
glVertex2d(400.0,500.0);
glVertex2d(700.0,500.0);
glVertex2d(700.0,100.0);
glEnd();
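For completeness, the GL_TRIANGLE_STRIP variant keeps the question's original vertex order:

glBegin(GL_TRIANGLE_STRIP);
glVertex2d(400.0, 100.0);
glVertex2d(400.0, 500.0);
glVertex2d(700.0, 100.0);
glVertex2d(700.0, 500.0);
glEnd();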

OpenGL - PBuffer render to Texture

After my last post, when someone recommended that I use pbuffers, I dug around a bit on Google and found some cool examples of offscreen rendering using pbuffers. One example, available on nVidia's website, does simple offscreen rendering: it renders on the pbuffer context, reads the pixels into an array, and then calls glDrawPixels to draw them.
I changed this example in order to create a texture from the pixels read: render offscreen, read the pixels into the array, and then initialize a texture with this colour array. But this looks very redundant to me: we render the image, copy it from graphics card memory into our memory (the array), only to later copy it back to the graphics card in order to display it on screen, just in a different rendering context. All the copies I'm making just to display the rendered texture seem kinda stupid, so I tried a different approach using glCopyTexImage2D(), which unfortunately doesn't work. I'll show the code and explanations:
mypbuffer.Initialize(256, 256, false, false);
- The false values are for sharing the context and sharing objects. They are false because this fantastic graphics card doesn't support sharing.
Then I perform the usual initializations, to enable Blending, and GL_TEXTURE_2D.
CreateTexture();

mypbuffer.Activate();

int viewport[4];
glGetIntegerv(GL_VIEWPORT, (int*)viewport);
glViewport(0, 0, xSize, ySize);

DrawScene(hDC);

// save data to texture using glCopyTexImage2D
glBindTexture(GL_TEXTURE_2D, texture);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, xSize, ySize, 0);

glClearColor(0.0f, 0.5f, 0.5f, 1.0f); // set the clear color to medium blue
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(viewport[0], viewport[1], viewport[2], viewport[3]);
// glBindTexture(GL_TEXTURE_2D, texture);
first = false;

mypbuffer.Deactivate();
- The DrawScene function is very simple: it just renders a triangle and a rectangle, which is supposed to be rendered offscreen (I HOPE). CreateTexture() creates an empty texture. The function should work, as it was tested with the approach I described previously, and there it works.
After this, in the main loop, I just do the following:
glClear(GL_COLOR_BUFFER_BIT);
glBindTexture(GL_TEXTURE_2D, texture);
glRotatef(theta, 0.0f, 0.0f, 0.01f);
glBegin(GL_QUADS);
// front face
glTexCoord2f(0.0f, 0.0f); glVertex3f(-0.5f, -0.5f, 0.5f);
glTexCoord2f(1.0f, 0.0f); glVertex3f( 0.5f, -0.5f, 0.5f);
glTexCoord2f(1.0f, 1.0f); glVertex3f( 0.5f,  0.5f, 0.5f);
glTexCoord2f(0.0f, 1.0f); glVertex3f(-0.5f,  0.5f, 0.5f);
glEnd();
SwapBuffers(hDC);
theta = 0.10f;
Sleep(1);
The final result is just a window with a blue background; nothing actually got rendered. Any idea why this is happening? My graphics card doesn't support the WGL_ARB_render_texture extension, but that shouldn't be a problem when calling glCopyTexImage2D(), right?
My card doesn't support FBOs either.
What you must do is, in a sense, "connect" your two OpenGL contexts so that the textures of your PBuffer context also show up in the main render context. The term to look for is "display list sharing". On Windows you connect the contexts retroactively using wglShareLists; on X11 and macOS you must supply the handle of the context to be shared at context creation.
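On Windows that might look like the following sketch, where mainRC and pbufferRC are hypothetical handles to the two render contexts:

// Share object names (textures, display lists) from the main context
// into the pbuffer context; call this before the pbuffer context
// creates any objects of its own.
if (!wglShareLists(mainRC, pbufferRC)) {
    // sharing failed, e.g. the contexts are incompatible
}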
An entirely different possibility, which works just as well, is reusing the same context on the PBuffer. It's a little-known fact that an OpenGL render context can be used not only with the drawable it was first created for, but with any drawable that has compatible settings. So if your PBuffer matches your main window's pixel format, you can detach the render context from the main window and attach it to the PBuffer. Of course, you then need low-level access to the main window's device context/drawable, which is normally hidden behind a framework.
You should check whether your OpenGL implementation supports framebuffer objects: these objects can serve as render targets and can have textures attached as color buffers, meaning you render directly into a texture.
If they are available, this is the way to go; otherwise your current method is the fallback.
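If FBOs do turn out to be available, the extra copy disappears entirely. A sketch, reusing the names from the question (texture, xSize, ySize, DrawScene):

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, texture, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    glViewport(0, 0, xSize, ySize);
    DrawScene(hDC);                    // renders straight into the texture
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);  // back to the window's framebuffer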

OpenGL Texture Loading

Evening everyone,
I'm going around in circles here but I think I'm looking in the wrong places.
My question is how I would go about loading an image and applying it to a primitive in OpenGL. For example, loading a bmp or jpg and applying it to a glutSolidSphere.
Would the solution be limited to one platform or could it work across all?
Thanks for your help
Well, if you want to, you can write your own BMP loader. Here is the specification and some code. Otherwise, I happen to have a TGA loader here. Once loading is done, you will have the data in an unsigned character array, otherwise known as GL_UNSIGNED_BYTE.
To make an OpenGL texture from this array, you first define a variable that will serve as a reference to that texture in OpenGL's memory.
GLuint textureid;
Then, you need to tell OpenGL to make space for a new texture:
glGenTextures(1, &textureid);
Then, you need to bind that texture as the currently used texture.
glBindTexture(GL_TEXTURE_2D, textureid);
Finally, you need to tell OpenGL where the data is for the current texture.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, your_data);
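One pitfall worth noting here (my addition, not part of the original steps): with the default minification filter, OpenGL expects a full mipmap chain, so a texture defined only at level 0 is incomplete and won't show up. Setting non-mipmap filters avoids this:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);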
Then, when you render a primitive, you apply it to the primitive by calling glBindTexture again:
glBindTexture(GL_TEXTURE_2D, textureid);
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex3f(0.0, 0.0, 0.0);
glTexCoord2f(1.0, 0.0); glVertex3f(1.0, 0.0, 0.0);
glTexCoord2f(1.0, 1.0); glVertex3f(1.0, 1.0, 0.0);
glTexCoord2f(0.0, 1.0); glVertex3f(0.0, 1.0, 0.0);
glEnd();
However, any time you apply a texture to a primitive, you need texture coordinate data along with the vertex data, and glutSolidSphere does not generate texture coordinates. To texture a sphere, either generate the coordinates yourself (see the sketch below), or call the texgen functions, or use shaders.
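A sketch of the generate-it-yourself route (my illustration, not code from the answer): a latitude/longitude sphere where the texture coordinates fall straight out of the two angles. It needs <cmath> for sinf/cosf.

void drawTexturedSphere(float r, int slices, int stacks) {
    const float PI = 3.14159265f;
    for (int i = 0; i < stacks; ++i) {
        // one band between two latitude rings
        float phi0 = PI * i / stacks;
        float phi1 = PI * (i + 1) / stacks;
        glBegin(GL_QUAD_STRIP);
        for (int j = 0; j <= slices; ++j) {
            float theta = 2.0f * PI * j / slices;
            float s = (float)j / slices;          // u wraps around the sphere
            glTexCoord2f(s, (float)i / stacks);   // v runs from pole to pole
            glVertex3f(r * sinf(phi0) * cosf(theta), r * cosf(phi0),
                       r * sinf(phi0) * sinf(theta));
            glTexCoord2f(s, (float)(i + 1) / stacks);
            glVertex3f(r * sinf(phi1) * cosf(theta), r * cosf(phi1),
                       r * sinf(phi1) * sinf(theta));
        }
        glEnd();
    }
}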
There are an infinite number of ways to texture-map an image onto a sphere. You could look into the glTexGen commands to automatically generate texture coordinates for a glut sphere.
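For example (a small sketch; note that GL_SPHERE_MAP produces an environment-map style mapping rather than a latitude/longitude unwrap):

glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glutSolidSphere(1.0, 32, 32); // texture coordinates are now generated per vertex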
It might be more complex than you'd expect.
First of all, I'm quite sure you can't texture a sphere drawn using glutSolidSphere, as that function doesn't specify texture coordinates for the vertices it draws. So you'll need to write code that draws the sphere either vertex by vertex or using a vertex buffer, and be sure to specify texture coordinates for each vertex.
Once this is done, you need to load your images. OpenGL does not provide functions to load images from files and, if I remember correctly, GLU and GLUT do not either. So you'll have to either read the image file yourself, or use a library that does so. DevIL is one of them.
Once you have the pixel data of the image, you need to create a new OpenGL texture using it. Some image loading libraries can do this for you. DevIL can.
I highly recommend googling for some tutorials, but please do not use NeHe tutorials, their quality is generally quite poor.
Ned has described it nicely. You could also read the tutorials from NeHe, where each step is described in detail. As others have pointed out, texturing a quad would be a good start, and a sphere may be a bit tricky.
That's funny; TGAs were the first type of image file I was able to load successfully into a texture. Unfortunately, OpenGL does not have its own built-in API functions to read files directly, so external libraries or code are needed. The TGA format is straightforward, and Ned has provided some good code for it.