OpenGL: How to update subimage of a rectangular texture? - c++

I am trying to update a small square section in a large rectangular texture.
I've tried using glTexSubImage2D with the target set to GL_TEXTURE_RECTANGLE_ARB, but I'm running into issues. It may just be that I don't know how to use glTexSubImage2D correctly, though. Could the problem be that I'm not uploading the whole texture with glTexImage2D before updating the subimage?
Can anyone tell me if it's possible to update a subimage of a rectangular texture without having to read the entire texture into main memory? I see glCopyTexSubImage2D... Still wondering whether these methods work with rectangular textures, though.

I'm quite sure you have to upload the full-size texture at least once with glTexImage2D. You can even pass a blank array (or a NULL pointer, which just allocates the storage) if you don't have access to the whole image at the beginning.
glCopyTexSubImage2D is not really what you want. It copies a specific portion of the framebuffer into a texture. But you want to upload from main memory, right?
I don't see any reason why rectangular textures wouldn't work with those methods, unless the driver is broken.
Consider that rectangular textures do not use texture coordinates in the [0..1] range like other textures. Instead, they use the [0..Width] and [0..Height] ranges.
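Putting the above together, a minimal sketch might look like this (assuming a current GL context with ARB_texture_rectangle available; the dimensions, patch position, and pixel data are placeholders):

```cpp
#include <GL/gl.h>
#include <GL/glext.h> // for GL_TEXTURE_RECTANGLE_ARB

// Allocate a rectangle texture once, then update only a small square
// region of it from client memory.
GLuint createAndPatchRectTexture(int texWidth, int texHeight,
                                 int patchX, int patchY, int patchSize,
                                 const unsigned char* patchPixels /* RGBA */)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);

    // Allocate storage for the full texture once. Passing NULL means
    // "reserve the memory but don't upload any data", so the whole
    // image never has to exist in main memory.
    glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA8,
                 texWidth, texHeight, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    // Now upload just the small square region.
    glTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0,
                    patchX, patchY, patchSize, patchSize,
                    GL_RGBA, GL_UNSIGNED_BYTE, patchPixels);
    return tex;
}
```

Later updates only need the glTexSubImage2D call again; the full-size glTexImage2D allocation happens once.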

Related

Is it possible to have a display framebuffer in OpenGL

I want to display a 2D array of pixels directly to the screen. The pixel data is not static and changes on user-triggered events such as a mouse move. I wish to have a display framebuffer to which I could write directly, with the changes appearing on the screen.
I have tried creating a texture with glTexImage2D(). I then render this texture onto a quad, and update the texture with glTexSubImage2D() whenever a pixel is modified.
It works!
But I guess this is not the efficient way. glTexSubImage2D copies the whole array, including the unmodified pixels, back to the texture, which is bad performance-wise.
Is there any other way, like having a "display framebuffer" to which I could write only the modified pixels, with the change reflected on the screen?
glBlitFramebuffer is what you want.
It copies a rectangular block of pixels from one framebuffer to another. It can stretch or compress, but it doesn't go through shaders and whatnot.
You'll probably also need some combination of glBindFramebuffer, glFramebufferTexture, glReadBuffer to set up the source and destination.
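A rough sketch of that combination, assuming GL 3.0+ and a texture-backed FBO as the source (the FBO setup and the dirty-rectangle coordinates are placeholders):

```cpp
#include <GL/gl.h>

// Copy only the modified rectangle from a texture-backed FBO to the
// default framebuffer. No shaders are involved.
void blitDirtyRect(GLuint srcFbo, int x, int y, int w, int h)
{
    glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo); // source: our texture FBO
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);      // destination: default framebuffer

    // 1:1 copy (same source and destination rectangles).
    glBlitFramebuffer(x, y, x + w, y + h,   // source rect
                      x, y, x + w, y + h,   // destination rect
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
}
```

The source FBO would be created once with glGenFramebuffers plus glFramebufferTexture2D to attach the texture you are writing pixels into.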

On OpenGL, is there any way to tell glTexSubImage2D not to overwrite transparent pixels?

On OpenGL, I'm using glTexSubImage2D to overwrite specific parts of a 2D texture with rectangular sprites. Those sprites, though, have some transparent pixels (0x00000000) that I want to be ignored - that is, I don't want those pixels to overwrite whatever is at their positions in the target texture. Is there any way to tell OpenGL not to overwrite those pixels?
This must be compatible with OpenGL versions as low as possible.
No. glTexSubImage2D copies the data into the texture directly, no matter what the source or target contents are.
I can only suggest creating another texture with the data you are trying to push via glTexSubImage2D and then drawing that texture onto your target texture. That way it goes through the standard drawing pipeline, so you can do whatever you want with blend functions or shaders.
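One possible sketch of that workaround, assuming framebuffer objects are available and the projection/modelview matrices are already set up for 2D drawing (all names and dimensions are placeholders; fixed-function immediate mode is used for brevity):

```cpp
#include <GL/gl.h>

// Draw a sprite texture onto a target texture through an FBO, with
// alpha blending so fully transparent sprite pixels leave the target
// texture's existing contents untouched.
void blendSpriteOntoTarget(GLuint targetTex, GLuint spriteTex,
                           GLuint fbo, int x, int y, int w, int h)
{
    // Render into the target texture instead of the screen.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, targetTex, 0);

    // Standard alpha blending: source pixels with alpha == 0
    // contribute nothing, so the target keeps its old value there.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, spriteTex);
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2i(x,     y);
        glTexCoord2f(1, 0); glVertex2i(x + w, y);
        glTexCoord2f(1, 1); glVertex2i(x + w, y + h);
        glTexCoord2f(0, 1); glVertex2i(x,     y + h);
    glEnd();

    glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default framebuffer
}
```

If "as low as possible" rules out FBOs, the EXT_framebuffer_object variants of these calls go back considerably further than core GL 3.0.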

How do I render two different images to two different primitives in OpenGL? 2D Texture arrays?

So I have a simple OpenGL viewer where the user can draw any number of boxes. I've also added the ability to take a PNG or JPG image and texture-map it onto a primitive.
I want the user to be able to pick any of the cubes on screen and apply different textures to them. I'm fairly new to OpenGL. Right now I can easily map an image onto a single primitive, but I'm wondering what's the best way to map two separate images (which may be different sizes) onto two separate primitives.
I've done a fair amount of reading on 2D texture arrays, and it would seem this is the way I want to go, since I can store multiple textures in one texture unit. But I'm not sure it's possible given what I mentioned above: if the images have different dimensions, I don't think I can do this (at least I don't think so). I know I can just store each image in a separate texture unit, but an array seemed like the cleaner way to do it.
What would be the best way to do this? Can you in fact store different-size images in a 2D texture array? And if so, how? Or am I better off just storing them in separate texture units?
Texture arrays are mainly meant for drawing a single primitive (or a whole mesh) with the shader able to select between images without exhausting the available texture sampling units. You could use them the way you describe, but I doubt it would benefit you. A similar approach is a texture atlas, i.e. creating a patchwork of images that constitutes a single texture and using appropriate texture coordinates to select a subimage.
In your case, I suggest simply loading each picture into a separate texture and binding the appropriate texture before drawing each cube.
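That per-cube binding is just a loop; a minimal sketch, where the `Cube` struct and `drawCube()` stand in for whatever the viewer already uses:

```cpp
#include <GL/gl.h>
#include <vector>

struct Cube {
    GLuint texture; // GL texture object holding this cube's image
    // position, size, ... (whatever the viewer already stores)
};

void drawCube(const Cube& c); // existing draw routine (assumed)

// Bind each cube's own texture right before drawing it. The textures
// can have completely different dimensions, since each is an ordinary
// GL_TEXTURE_2D object.
void drawScene(const std::vector<Cube>& cubes)
{
    for (const Cube& c : cubes) {
        glBindTexture(GL_TEXTURE_2D, c.texture);
        drawCube(c);
    }
}
```

Rebinding a texture between draw calls is cheap at this scale; texture arrays or atlases only start to pay off with many more objects per frame.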

Overwrite pixel per pixel in an openGL 2d texture

I want to create an OpenGL 2D texture and set the RGBA values of every pixel individually. Can someone give me an explanation for my problem? I couldn't find one on the internet.
If you're just looking to write the pixels of a 2D texture, you can simply use glTexImage2D, which takes a buffer specifying the pixel data you wish to upload to the texture (https://www.opengl.org/sdk/docs/man/html/glTexImage2D.xhtml). Alternatively, you can use glTexSubImage2D to write a portion of the texture's pixels (https://www.khronos.org/opengles/sdk/docs/man/xhtml/glTexSubImage2D.xml). If you're instead looking to do the analogous thing with the framebuffer, you can use glDrawPixels (https://www.opengl.org/sdk/docs/man2/xhtml/glDrawPixels.xml).
Alternatively, you can draw exact pixel values into a texture by binding it as a framebuffer attachment and rendering a textured quad that completely covers it. However, that process is subject to blending and potential pixel-center issues, whereas glDrawPixels is not.
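For the simplest case - filling a buffer pixel by pixel on the CPU and uploading it with glTexImage2D - a sketch might look like this (the dimensions and the color formula are placeholders):

```cpp
#include <GL/gl.h>
#include <vector>

// Build RGBA pixel data one pixel at a time, then upload it as a
// 2D texture in a single glTexImage2D call.
GLuint makeGradientTexture(int w, int h)
{
    std::vector<unsigned char> pixels(w * h * 4);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            unsigned char* p = &pixels[(y * w + x) * 4];
            p[0] = (unsigned char)(255 * x / w); // R: ramp left to right
            p[1] = (unsigned char)(255 * y / h); // G: ramp bottom to top
            p[2] = 0;                            // B
            p[3] = 255;                          // A: fully opaque
        }
    }

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return tex;
}
```

When only a few pixels change later, updating the CPU buffer and calling glTexSubImage2D on the affected region avoids re-uploading the whole image.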
I did something like this some time ago, when playing around with OpenGL.
Have a look at the code here, on GitHub.
You can find it in main.cpp.
Basically, my idea was to create an array of floats, set the values, copy to GPU with glBufferData and draw with glDrawElements.
As I remember it, doing this frequently was very bad in terms of performance, so it's probably not the best direction.
Please also note that this code is just my sandbox, and may not be the best possible example to be copied.

Accessing rendered OpenGL image

I am rendering an image using OpenGL on C++, and want to access the resulting image to do some more processing on it. (I'm rendering an image, have an actual image it's supposed to look like, and want to compute the pixel difference between the two.)
So far I have only been rendering images to the screen, though, and I can't figure out how to render an image and then later get access at the direct pixels which were drawn. I don't especially care if I can see the image on the screen or not, all I want is that the image gets rendered to some region of memory which I can access from the CPU. How do you do this?
Alternatively, would it be possible to send the image it's supposed to look like to OpenGL and compute the pixel difference on the GPU? Either option is fine with me, but the faster I can make it the better. (Right now, I can render about 100 frames per second, but still haven't figured out how to do the comparisons.)
Yes, you could do it on the GPU. Put the 2 images in textures. Draw a frame-filling quad multi-textured with the two textures, and be sure to provide texture coordinates. Write a fragment shader to compute the difference. (When a commenter asked if you wanted to use a programmable pipeline, this is one reason it matters. If you only use the fixed-function pipeline, you wouldn't have the option of writing a fragment shader.)
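The core of such a difference pass is a tiny fragment shader; here is a sketch with the GLSL embedded as a C++ string (the uniform names are assumptions, and shader compilation/linking is omitted):

```cpp
// Fragment shader computing the per-channel absolute difference of two
// textures bound to two texture units. Uses the fixed-function vertex
// stage's texture coordinates (GLSL 1.20 / compatibility profile).
const char* diffFragmentShaderSrc = R"(
    #version 120
    uniform sampler2D rendered;   // the image you rendered
    uniform sampler2D reference;  // the image it is supposed to look like
    void main() {
        vec4 a = texture2D(rendered,  gl_TexCoord[0].st);
        vec4 b = texture2D(reference, gl_TexCoord[0].st);
        gl_FragColor = abs(a - b);  // black where the images match
    }
)";
```

Rendering a screen-filling quad with this shader leaves the difference image in the framebuffer, where it can be read back or reduced further on the GPU.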
The obvious way would be to use glReadPixels to read the rendered results in the framebuffer to host memory.
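A minimal sketch of that readback, assuming the width and height match the viewport you rendered into:

```cpp
#include <GL/gl.h>
#include <vector>

// Read the just-rendered backbuffer into host memory as tightly
// packed RGB bytes.
std::vector<unsigned char> readFrame(int width, int height)
{
    std::vector<unsigned char> pixels(width * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1); // no row padding in our buffer
    glReadBuffer(GL_BACK);               // read before swapping buffers
    glReadPixels(0, 0, width, height,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
    // Note: GL returns rows bottom-to-top, so flip vertically if the
    // comparison image is stored top-to-bottom.
    return pixels;
}
```

glReadPixels stalls the pipeline until rendering finishes, so at 100 frames per second it may become the bottleneck; doing the comparison on the GPU as suggested above avoids the transfer entirely.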