This is for a 2D game with OpenGL:
Is it possible with OpenGL to display a texture absolutely unfiltered, not stretched or blurred?
So that when I have a BMP and convert it into an OpenGL texture, and then retrieve that texture and convert it back, I have no modifications or quality/data loss?
Sure, just disable filtering. That's done by setting GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to GL_NEAREST. Also make sure that you draw the texture at an appropriate size, so that texels are the same size as pixels.
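A minimal sketch of that setup (tex is assumed to be an already created texture object holding your BMP data):

glBindTexture(GL_TEXTURE_2D, tex);
/* sample the nearest texel instead of interpolating between neighbouring texels */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);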
As Matias said previously, one thing is to set GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to GL_NEAREST (via glTexParameter*).
But for pixel-perfect rendering there's another important thing: you don't want your texture to be rescaled to a power-of-two size. The easiest way is to specify the texture via the binding target GL_TEXTURE_RECTANGLE instead of GL_TEXTURE_2D. On a texture bound to that target, the texture coordinates are not in the usual range (0..1, 0..1), but (0..w, 0..h) instead. You get per-texel indexing easily this way.
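A hedged sketch of what that could look like (w, h, pixels and the quad position x, y are placeholders; on older headers the target is spelled GL_TEXTURE_RECTANGLE_ARB):

glEnable(GL_TEXTURE_RECTANGLE);
glBindTexture(GL_TEXTURE_RECTANGLE, tex);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
/* texture coordinates are now in texels: (0..w, 0..h) instead of (0..1, 0..1) */
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(x,     y);
glTexCoord2f(w, 0); glVertex2f(x + w, y);
glTexCoord2f(w, h); glVertex2f(x + w, y + h);
glTexCoord2f(0, h); glVertex2f(x,     y + h);
glEnd();

Drawn like this under a pixel-aligned orthographic projection, each texel maps to exactly one screen pixel.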
In OpenGL, I'm using glTexSubImage2D to overwrite specific parts of a 2D texture with rectangular sprites. Those sprites have, though, some transparent pixels (0x00000000) that I want to be ignored; that is, I don't want those pixels to overwrite whatever is at their positions in the target texture. Is there any way to tell OpenGL not to overwrite those pixels?
This must be compatible with OpenGL versions as low as possible.
No, glTexSubImage2D will copy the data to the texture directly, no matter what the source or the target is.
I can only suggest that you create another texture with the data you are trying to push via glTexSubImage2D and then draw that texture onto your target texture. This turns it into a pretty standard drawing pipeline, so you can do whatever you want using blend functions or shaders.
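A rough sketch of that idea, assuming framebuffer objects are available (spriteTex, targetFBO, x, y and drawTexturedQuad are placeholder names; the target texture is assumed to be attached as the FBO's color buffer):

/* upload the sprite into its own texture */
glBindTexture(GL_TEXTURE_2D, spriteTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, spriteW, spriteH, 0, GL_RGBA, GL_UNSIGNED_BYTE, spritePixels);

/* draw it over the target texture with alpha blending, so fully
   transparent sprite pixels leave the destination untouched */
glBindFramebuffer(GL_FRAMEBUFFER, targetFBO);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawTexturedQuad(spriteTex, x, y, spriteW, spriteH);  /* placeholder helper */
glDisable(GL_BLEND);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

If FBOs are too new for the versions you need to support, the same blended draw can target the backbuffer, followed by glCopyTexSubImage2D into the target texture.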
I have a 512×512 texture which holds a number of images that I want to use in my application. After adding the image data to the texture I save the texture coordinates for the individual images. Later I apply these to some quads that I am drawing. The texture has mipmapping activated.
When I take a screenshot of the rendered scene at exactly the same instant in two different runs of the application, I notice that there are differences in the image only among those quads textured using this mipmapped texture. Can mipmapping cause such an issue?
My best guess is that it has to do with precision in your shader. Check out this problem that I had (and fought with for a while) and my solution:
opengl texture mapping off by 5-8 pixels
It is probably a combination of mipmapping's automatic scaling of your texture atlas and the precision hints in your shader code.
Also see the other linked question:
Why is a texture coordinate of 1.0 getting beyond the edge of the texture?
I wish to map a texture onto a cube (created with glutSolidCube, not glVertex), but the whole texture gets applied. The image file contains all the textures together (for speed, and because the teacher requested it), and I only want part of the texture to be applied. How can I do that?
Textures are the unit of texture binding. If you want to "cut out" part of a texture, you do so by adjusting the texture coordinates that you use.
Instead of using the full range of 0..1, use smaller values that match the sub-texture's location inside the texture.
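For example, here is a sketch of one cube face textured with the quarter of the atlas that lies in (0..0.5, 0..0.5); the vertex positions are placeholders:

glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 1.0f);
glTexCoord2f(0.5f, 0.0f); glVertex3f( 1.0f, -1.0f, 1.0f);
glTexCoord2f(0.5f, 0.5f); glVertex3f( 1.0f,  1.0f, 1.0f);
glTexCoord2f(0.0f, 0.5f); glVertex3f(-1.0f,  1.0f, 1.0f);
glEnd();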
What you're looking to do is not possible, because glutSolidCube does not generate texture coordinates.
However, you will also note that an answer to that question indicates that you may use the following to have OpenGL generate texture coordinates for you on a call to glutSolidCube:
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
Some more information on using OpenGL's automatic texture coordinate generation is available here. However, I would like to note that this seems to come out of the days of immediate-mode OpenGL, which is deprecated. Also, GLUT is no longer maintained, but freeglut is.
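For completeness, a hedged sketch of how those two glEnable calls might be combined with object-linear texture coordinate generation; the plane coefficients are illustrative and not tuned for glutSolidCube:

/* map object-space x and y onto s and t */
static const GLfloat splane[4] = { 1.0f, 0.0f, 0.0f, 0.5f };
static const GLfloat tplane[4] = { 0.0f, 1.0f, 0.0f, 0.5f };
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGenfv(GL_S, GL_OBJECT_PLANE, splane);
glTexGenfv(GL_T, GL_OBJECT_PLANE, tplane);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glutSolidCube(1.0);

Note that this produces one mapping for the whole cube, which is exactly why it tends to be too inflexible for picking sub-images out of an atlas.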
To summarize, you're better off using glVertex calls and specifying your own specific texture coordinates, as unwind has suggested. You can try OpenGL's texture coordinate generation, but it might be too inflexible to handle what you need.
To clarify, when I say 'default framebuffer' I mean the one provided by the windowing system and what ends up on your monitor.
To improve my rendering speeds for a CAD app, I've managed to separate out the 3D elements from the Qt-handled 2D ones, and they now each render into their own FBO. When the time comes to get them onto the screen, I blit the 3D FBO onto the default FB, and then I want to blend my 2D FBO on top of it.
I've gotten the blitting part working fine, but I can't see how to blend my 2D FBO on top of it. Both FBOs are identical in size and format, and they are both the same as the default FB.
I'm sure it's a simple operation, but I can't find anything on the net - presumably I'm missing the right term for what I am trying to do. Although I'm using Qt, I can use native OpenGL commands without issue.
A blit operation is ultimately a pixel copy operation. If you want to layer one image on top of another, you can't blit it. You must instead render a full-screen quad textured with the FBO's color attachment and set the proper blending parameters for your blending operation.
You can use GL_EXT_framebuffer_blit to blit the contents of a framebuffer object to the application framebuffer (or to any other). However, as the spec states, it is not possible to use blending:
The pixel copy bypasses the fragment pipeline. The only fragment operations which affect the blit are the pixel ownership test and the scissor test.
So any blending means using the fragment pipeline, as suggested: render a textured fullscreen quad with blending enabled. One fullscreen pass with blending should be pretty cheap; I believe there is nothing to worry about.
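A minimal sketch of that pass (colorTex2D is assumed to be the color texture attached to your 2D FBO, and drawFullscreenQuad is a placeholder for a viewport-covering textured quad drawn under an identity or ortho projection):

/* the 3D FBO has already been blitted to the default framebuffer */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, colorTex2D);
drawFullscreenQuad();   /* placeholder helper */
glDisable(GL_BLEND);

This assumes the 2D FBO carries an alpha channel that marks where the Qt overlay should show through.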
You can also use a shader to read back from the framebuffer. This is an OpenGL ES extension and is not supported by all hardware:
https://www.khronos.org/registry/gles/extensions/EXT/EXT_shader_framebuffer_fetch.txt
In each frame (as in frames per second) I render, I make a smaller version of it with just the objects that the user can select (and any selection-obstructing objects). In that buffer I render each object in a different color.
When the user has mouseX and mouseY, I then look into that buffer what color corresponds with that position, and find the corresponding objects.
I can't work with FBOs, so I just render this buffer to a texture, rescale the texture orthogonally to the screen, and use glReadPixels to read a "hot area" around the mouse cursor. I know it's not the most efficient approach, but performance is OK for now.
Now I have the problem that this buffer with "colored objects" has some accuracy problems. Of course I disable all lighting and fragment shaders, but somehow I still get artifacts. Obviously I really need clean sheets of color without any variance.
Note that here I put all the color information in an unsigned byte in GL_RED (assuming for now that I have at most 255 selectable objects).
Are these caused by rescaling the texture? (I could replace that by looking up scaled coordinates in the small texture.) Or do I need to disable some other flag to really get the colors that I want?
Can this technique even be used reliably?
It looks like you're using GL_LINEAR for your GL_TEXTURE_MAG_FILTER. Use GL_NEAREST instead if you don't want interpolated colors.
I could replace this by looking up scaled coordinates in the small texture.
You should. Rescaling is more expensive than converting the coordinates for sure.
That said, scaling a uniform texture should not introduce artifacts if you keep an integer ratio (like a 2x upscale) with no fancy filtering. It looks blurry at the polygon edges, so I'm assuming that's not what you use.
Also, the rescaling should introduce variations only at the polygon boundaries. Did you check that there are no variations in the un-scaled texture? That would confirm whether it's the scaling that introduces your "artifacts".
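A sketch of the lookup without the upscale, assuming the picking image is a GL_RED / GL_UNSIGNED_BYTE texture of size smallW x smallH (all names are placeholders):

unsigned char *ids = malloc(smallW * smallH);
glBindTexture(GL_TEXTURE_2D, pickTex);
glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* tightly packed rows */
glGetTexImage(GL_TEXTURE_2D, 0, GL_RED, GL_UNSIGNED_BYTE, ids);

/* map window coordinates into the small buffer; OpenGL's y axis
   is flipped relative to typical mouse coordinates */
int px = mouseX * smallW / winW;
int py = (winH - 1 - mouseY) * smallH / winH;
unsigned char objectId = ids[py * smallW + px];
free(ids);

Since nothing is resampled along the way, the IDs come back exactly as they were rendered.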
What exactly do you mean by "variance"? Please explain in more detail.
Now some suggestion: In case your rendering doesn't depend on stencil buffer operations, you could put the object ID into the stencil buffer in the render pass to the window itself, don't use the detour over a separate texture. On current hardware you usually get 8 bits of stencil. Of course the best solution, if you want to use a index buffer approach, is using multiple render targets and render the object ID into an index buffer together with color and the other stuff in one pass. See http://www.opengl.org/registry/specs/ARB/draw_buffers.txt