OpenGL - Using glTexImage2D to fill the entire screen with a texture - C++

Two questions -
What is the best way to use a texture in OpenGL to fill the entire window?
I want to use glTexImage2D to take in an array of ints containing colour data; how would I go about doing this? (I've found a couple of reference pages on glTexImage2D, but a tutorial on using it would be great.)
Clarification:
I have done texturing before. I simply need help on these two specific parts.

glTexImage2D just uploads texture data, nothing more. When you have your texture, draw a texture-mapped quad the size of the screen and you will draw your texture's pixels to the screen.
An orthographic projection is usually used for this.
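As a rough illustration, here is a minimal sketch of both steps. The function and variable names (makeTexture, drawFullscreen, pixels, winWidth, winHeight) are just placeholders, and it assumes each int stores its bytes in R, G, B, A order in memory:

#include <GL/gl.h>

// Upload a width x height array of 32-bit RGBA pixels as a texture.
GLuint makeTexture(const unsigned int* pixels, int width, int height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // No mipmaps are uploaded, so pick a non-mipmapped min filter.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}

// Draw the texture on a quad covering the whole window.
void drawFullscreen(GLuint tex, int winWidth, int winHeight)
{
    // Orthographic projection mapping units 1:1 to window pixels,
    // origin at the top-left corner.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, winWidth, winHeight, 0, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(0,        0);
        glTexCoord2f(1, 0); glVertex2f(winWidth, 0);
        glTexCoord2f(1, 1); glVertex2f(winWidth, winHeight);
        glTexCoord2f(0, 1); glVertex2f(0,        winHeight);
    glEnd();
}

Depending on the row order of your array, the image may appear vertically flipped; flip either the data or the texture coordinates if so.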

NeHe provides tutorials for almost any OpenGL topic.
The first lesson on using textures is #6.

Also, you could just upload the pixels with glDrawPixels if you don't need to update them too often.
NeHe's lesson #6, linked above, is a nice example of how to use textures.

Related

Displaying a framebuffer in OpenGL

I've been learning a bit of OpenGL lately, and I just got to framebuffers.
So, by my current understanding, if you have a framebuffer of your own and you want to draw its color buffer onto the window, you first need to draw a quad and then wrap the texture over it? Is that right? Or is there something like a glDrawArrays()/glDrawElements() equivalent for framebuffers?
It seems a bit... odd (clunky? hackish?) to me that you have to wrap a texture over a quad in order to draw the framebuffer. This doesn't have to be done with the default framebuffer. Or is that done behind your back?
Well, the main point of framebuffer objects is to render scenes to buffers that will not get displayed but rather reused somewhere, as a source of data for some other operation (shadow maps, high-dynamic-range processing, reflections, portals...).
If you want to display it, why do you use a custom framebuffer in the first place?
Now, as @CoffeeandCode comments, there is indeed a glBlitFramebuffer call that allows transferring pixels from one framebuffer to another. But before you go ahead and use that call, ask yourself why you need that extra step. It's not a free operation...
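For completeness, the blit itself looks something like this (assuming fbo names a complete framebuffer and both buffers are width x height; it needs GL 3.0 or the framebuffer_blit extension):

glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);  // source: your FBO
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);    // destination: the window
glBlitFramebuffer(0, 0, width, height,        // source rectangle
                  0, 0, width, height,        // destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);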

Overwrite pixel by pixel in an OpenGL 2D texture

I want to create an OpenGL 2D texture and set the RGBA values of every pixel individually. Can someone give me an explanation for my problem? I didn't find one on the internet.
If you're just looking to write the pixels of a 2D texture, you can simply use glTexImage2D, which takes a buffer specifying the pixel data you wish to upload to the texture (https://www.opengl.org/sdk/docs/man/html/glTexImage2D.xhtml). Alternatively, you can use glTexSubImage2D to write a portion of the texture's pixels (https://www.khronos.org/opengles/sdk/docs/man/xhtml/glTexSubImage2D.xml). If you're instead looking to do the analogous thing with the framebuffer, you can use glDrawPixels (https://www.opengl.org/sdk/docs/man2/xhtml/glDrawPixels.xml).
You can also write exact pixel values to a texture by binding it as a framebuffer attachment and then rendering a textured quad that completely covers it. However, that approach is subject to blending and potentially pixel-center issues, whereas glDrawPixels is not.
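If you go the glTexImage2D/glTexSubImage2D route, a small sketch might look like this; writePixels and the pixel pattern are just for illustration, and it assumes a 2D texture of at least width x height texels is currently bound:

#include <vector>

void writePixels(int width, int height)
{
    // Build an RGBA buffer on the CPU, one byte per channel,
    // setting each pixel individually.
    std::vector<unsigned char> buf(width * height * 4);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            unsigned char* p = &buf[(y * width + x) * 4];
            p[0] = x % 256;  // R: any per-pixel value you like
            p[1] = y % 256;  // G
            p[2] = 0;        // B
            p[3] = 255;      // A: opaque
        }
    }
    // Replace the whole level in one upload; a sub-rectangle works
    // the same way with different offset and size arguments.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, buf.data());
}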
I did something like this some time ago, when playing around with OpenGL.
Have a look at the code here, on GitHub.
You can find it in main.cpp.
Basically, my idea was to create an array of floats, set the values, copy to GPU with glBufferData and draw with glDrawElements.
As I remember it, doing this frequently was very bad in terms of performance, so it's probably not the best direction.
Please also note that this code is just my sandbox, and may not be the best possible example to be copied.
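For reference, the general shape of that approach is roughly the following; the names and data are illustrative, not the actual sandbox code:

// One quad as four 2D positions in a VBO, drawn via an index buffer.
GLfloat verts[] = { -1,-1,   1,-1,   1,1,   -1,1 };
GLuint  idx[]   = { 0, 1, 2,   2, 3, 0 };

GLuint vbo, ibo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(idx), idx, GL_STATIC_DRAW);

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, 0);  // reads from the bound VBO, offset 0
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);  // offset 0 into ibo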

"Rendering" polygons that are transparent OpenGL

I am developing an engine, and the way I am handling boundaries the player is not supposed to reach is to have actual polygons as these boundaries. Now I am wondering how to "render" the polygon but have it invisible.
My main question is: does OpenGL have a way to do this natively?
If not, what if I were to create a texture the way I usually load in a texture, but have this texture simply be a single pixel? I could set the alpha channel of that specific pixel and then use an alpha mask as I do normally with masked textures.
Any advice?
My main question is: does OpenGL have a way to do this natively?
No!
OpenGL only draws nicely colored triangles, lines and points to a framebuffer, and that's it.
It is not a scene graph.
It is not a geometry library.
It is not a collision detection framework.
The question comments solved the problem you are having but there's a way to do what you actually asked so I'm putting it here anyway.
The question as I understood it is about rendering something in the depth buffer (and maybe stencil) but nowhere else. To achieve this, you simply have to use glColorMask like so before drawing the transparent polygons:
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
and restore the color mask afterwards.
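Putting the two steps together (drawBoundaryPolygons is a hypothetical stand-in for your own draw call):

glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // disable color writes
drawBoundaryPolygons();  // depth (and stencil) values still get written
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);      // restore color writes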
If you want to render the boundaries as transparent polygons, you could use the blending method given here: http://www.opengl.org/resources/faq/technical/transparency.htm
Is that what you were looking for?

Applying a shader to a framebuffer object to get a fisheye effect

Let's say I have an application (the details of the application should be irrelevant to solving the problem). Instead of rendering to the screen, I am somehow able to force the application to render to a framebuffer object instead (by messing with GLEW or intercepting a call in a DLL).
Once the application has rendered its content to the FBO, is it possible to apply a shader to the contents of the FBO? My knowledge is limited here, so from what I understand, at this stage all information about vertices is no longer available and all the necessary tests have been applied, so what's left in the buffer is just pixel data. Is this correct?
If it is possible to apply a shader to the FBO, is it possible to get a fisheye effect? (Like this, for example: http://idea.hosting.lv/a/gfx/quakeshots.html)
The technique used in the link above is to create 6 different viewports, render each viewport to a cubemap face, and then apply the texture to a mesh.
Thanks
A framebuffer object encapsulates several other buffers, specifically those that are implicitly indexed by fragment location. So a single framebuffer object may bundle together a colour buffer, a depth buffer, a stencil buffer and a bunch of others. The individual buffers are known as renderbuffers.
You're right: there's no geometry in there. For the purposes of reading back the scene you get only final fragment values, which, if you're hijacking an existing app, will probably be a 2D pixel image of the frame and some other things that you don't care about.
If your GPU has render-to-texture support (originally an extension circa OpenGL 1.3, but you'd be hard pressed to find a GPU without it nowadays, even in mobile phones), then you can attach a texture as a render target within a framebuffer. So the rendering code is exactly as it would be normally, but it ends up writing the results to a texture that you can then use as a source for drawing.
Fragment shaders can programmatically decide which location of a texture map to sample in order to create their output. So you can write a fragment shader that applies a fisheye lens, though you're restricted to the field of view rendered in the original texture, obviously. Which would probably be what you'd get in your Quake example if you had just one of the sides of the cube available rather than six.
In summary: the answer is 'yes' to all of your questions. There's a brief introduction to framebuffer objects here.
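As a sketch, the render-to-texture setup could look roughly like this (GL 3.0 / ARB_framebuffer_object entry points; width and height are assumed to match the window):

GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// Allocate storage only; the scene will be rendered into it.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
// status must be GL_FRAMEBUFFER_COMPLETE before rendering.

// ... render the scene here; the output lands in tex ...

// Back to the window; now draw a fullscreen quad sampling tex
// through a distortion (e.g. fisheye) fragment shader.
glBindFramebuffer(GL_FRAMEBUFFER, 0);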
Look here for some relevant info:
http://www.opengl.org/wiki/Framebuffer_Object
The short, simple explanation is that an FBO is the 3D equivalent of a software frame buffer. You have direct access to individual pixels, instead of having to modify a texture and upload it. You can point shaders at an FBO. The link above gives an overview of the procedure.

How to draw to a texture in OpenGL

Now that my OpenGL application is getting larger and more complex, I am noticing that it's also getting a little slow on very low-end systems such as netbooks. In Java, I am able to get around this by drawing to a BufferedImage, then drawing that to the screen, and updating the cached render every once in a while. How would I go about doing this in OpenGL with C++?
I found a few guides, but they seem to only work on newer hardware or specific NVIDIA cards. Since the cached rendering operations will only be updated every once in a while, I can sacrifice speed for compatibility.
glBegin(GL_QUADS);
setColor(DARK_BLUE);
glVertex2f(0, 0); //TL
glVertex2f(appWidth, 0); //TR
setColor(LIGHT_BLUE);
glVertex2f(appWidth, appHeight); //BR
glVertex2f(0, appHeight); //BL
glEnd();
This is something that I am especially concerned about. A gradient that takes up the entire screen is being re-drawn many times per second. How can I cache it to a texture then just draw that texture to increase performance?
Also, a trick I use in Java is to render it to a 1 x height texture, then scale that up to width x height, to increase performance and lower memory usage. Is there such a trick in OpenGL?
If you don't want to use framebuffer objects for compatibility reasons (though they are pretty widely available), you don't want to use the legacy (and non-portable) pbuffers either. That leaves you with the simple possibility of reading the contents of the framebuffer with glReadPixels and creating a new texture from that data with glTexImage2D.
Let me add that I don't really think you are going to gain much in your case. Drawing a texture onscreen requires at least one texel access per pixel; that's not a huge saving if the alternative is just interpolating a color, as you are doing now!
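A sketch of that fallback; it assumes the window is width x height and that the expensive drawing has just been done. (glCopyTexImage2D can do the same copy GPU-side in one call, avoiding the CPU round trip.)

#include <vector>

GLuint cacheFrameToTexture(int width, int height)
{
    // Read the framebuffer back to system memory.
    std::vector<unsigned char> buf(width * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);  // rows are tightly packed
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buf.data());

    // Upload it as a texture to redraw cheaply on later frames.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, buf.data());
    return tex;
}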
I sincerely doubt drawing from a texture is less work than drawing a gradient.
In drawing a gradient:
Color is interpolated at every pixel
In drawing a texture:
Texture coordinate is interpolated at every pixel
Color is still interpolated at every pixel
Texture lookup for every pixel
Multiply lookup color with current color
Not that either of these is slow, but drawing untextured polygons is pretty much as fast as it gets.
Hey there, thought I'd give you some insight into this.
There's essentially two ways to do it.
Frame Buffer Objects (FBOs) for more modern hardware, and the back buffer for a fall back.
The article from one of the previous posters is a good one to follow, and there are plenty of tutorials for FBOs on Google.
In my 2D engine (Phoenix), we decided we would go with just the back-buffer method. Our class was fairly simple, and you can view the header and source here:
http://code.google.com/p/phoenixgl/source/browse/branches/0.3/libPhoenixGL/PhRenderTexture.h
http://code.google.com/p/phoenixgl/source/browse/branches/0.3/libPhoenixGL/PhRenderTexture.cpp
Hope that helps!
Consider using a display list rather than a texture. Texture reads (especially for large ones) are a good deal slower than 8 or 9 function calls.
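For example, a display-list version of the gradient from the question might look like this (compile once, call every frame):

// Record the gradient once...
GLuint gradientList = glGenLists(1);
glNewList(gradientList, GL_COMPILE);
    glBegin(GL_QUADS);
    // ... the gradient vertices/colors from the question go here ...
    glEnd();
glEndList();

// ...then each frame replay it with a single call:
glCallList(gradientList);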
Before doing any optimization you should make sure you fully understand the bottlenecks. You'll probably be surprised at the result.
Look into FBOs - framebuffer objects. It's an extension that lets you render to arbitrary rendertargets, including textures. This extension should be available on most recent hardware. This is a fairly good primer on FBOs: OpenGL Frame Buffer Object 101