Do I need to recreate a texture when using OpenGL/CUDA interoperability?

I want to use CUDA to manipulate a texture that I use in OpenGL. Knowing that I need a PBO for this, I wonder whether I have to recreate the texture every time I make changes to the PBO, like this:
// Select the appropriate buffer
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, bufferID);
// Select the appropriate texture
glBindTexture(GL_TEXTURE_2D, textureID);
// Update the texture from the buffer (with a PBO bound, the last argument
// is a byte offset into the buffer, not a client pointer)
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, Width, Height, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
Do glTexSubImage2D and the like copy the data from the PBO?

All pixel transfer operations work with buffer objects. Since glTexSubImage2D initiates a pixel transfer operation, it can be used with buffer objects.
There is no long-term connection made between buffer objects used for pixel transfers and textures. The buffer object is used much like a client memory pointer would be used for glTexSubImage2D calls. It's there to store the data while OpenGL formats and pulls it into the texture. Once it's done, you can do whatever you want with it.
The only difference is that, because OpenGL manages the buffer object, the upload from the buffer can be asynchronous. That, and you get to play games like filling the buffer object from GPU operations (whether from OpenGL, OpenCL, or CUDA).

Related

Rendering to TBO

I need to render to a buffer texture. Why a TBO? A TBO can easily be mapped as a CUDA resource for graphics interop. It can also store byte-sized data, which is what I need. I tried to find related info in the GL specs. There it is stated that:
Buffer Textures work like 1D texture, only they have a single image,
identified by mipmap level 0.
But when I try to attach the TBO to an FBO, I always get a "missing attachment" error when checking completeness, which leads me to conclude that GL_TEXTURE_BUFFER is not supported as an FBO attachment.
Two questions:
Is this true?
Is the only alternative here to write to an SSBO?
More detailed info:
I am getting
GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT
when I try to attach the TBO to the framebuffer. There is no point in attaching the whole code here, as it is a heavily abstracted API and, believe me, I am pretty experienced in using OpenGL and framebuffers. Attaching a regular GL_TEXTURE_2D works great.
Here is the chunk of the TBO creation and the attachment stage:
GLuint tbo;
GLuint tboTex;
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferData(GL_TEXTURE_BUFFER, viewportWidth * viewportHeight * 4, NULL, GL_DYNAMIC_DRAW);
glBindBuffer(GL_TEXTURE_BUFFER, 0);
glGenTextures(1, &tboTex);
glBindTexture(GL_TEXTURE_BUFFER, tboTex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA8, tbo);
glBindTexture(GL_TEXTURE_BUFFER, 0);
Then attach to FBO:
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tboTex, 0);
///also tried:
/// glFramebufferTexture1D
/// glFramebufferTexture2D
Buffer textures cannot be attached to FBOs:
GL_INVALID_OPERATION is generated by[sic] if texture is a buffer texture.
A good reminder for why you should always check your OpenGL errors.
Is the only alternative here to write to SSBO
If your goal is to use a rendering operation to write stuff to a buffer object, you could also use Image Load/Store operations with buffer textures. But if your hardware could handle that, then it should handle SSBOs too.
You could also try to use geometry shaders and transform feedback operations to write whatever you're trying to write.
However:
It can also store byte sized data
Images can store "byte sized data" as well. The image format GL_R8UI represents a single-channel, 8-bit unsigned integer.
That doesn't resolve any CUDA-interop issues, but rendering to bytes is very possible.

glDrawPixels vs textures to draw a 2d buffer in OpenGL

I have a 2D graphics library that I want to use with OpenGL, to be able to mix 2D and 3D graphics. The simplest way seems to be glDrawPixels, but many recent tutorials and forums suggest using a texture with glTexSubImage2D and then drawing a quad with that texture.
My question is: why? Where is the advantage? It just adds one more step (memory buffer -> texture -> video buffer, instead of memory buffer -> video buffer).
There are two main reasons:
glDrawPixels() is deprecated, and not available in the OpenGL core profile, or in OpenGL ES.
When drawing the image multiple times, a lot of repeated work can be saved by storing the image data in a texture.
It's quite rare that you would have to draw an image only once. Much more commonly, you'll draw it repeatedly, on each redraw. With glDrawPixels() you have to pass the image data into OpenGL each time. If you store it in a texture, you can draw it repeatedly, and OpenGL can reuse the same data each time.
To draw the content of a texture, you don't necessarily have to set up a shader, draw a quad, etc. You can use glBlitFramebuffer() to copy the texture content to the display.
Since OpenGL uses video memory, a simple "draw pixels" call is bound to be slow, because it forces GPU/CPU synchronization on every draw.
When you use glTexSubImage2D, you ensure that your image resides in video memory (all the time), which is fast.
One way to load a texture into video memory could be:
GLuint tex;
glCreateTextures(GL_TEXTURE_2D, 1, &tex);
glTextureParameteri(tex, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTextureParameteri(tex, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
GLsizei numMipmaps = (GLsizei)log2(std::max(surface->w, surface->h)) + 1;
glTextureStorage2D(tex, numMipmaps, internalFormat, surface->w, surface->h);
glTextureSubImage2D(tex, 0, 0, 0, surface->w, surface->h,
                    format, GL_UNSIGNED_BYTE, surface->pixels);
glGenerateTextureMipmap(tex);
These are direct-state-access (DSA) functions; don't forget to bind the texture if you use the non-DSA equivalents instead.
However, if you still want to perform per-pixel drawing (for example, for procedural rendering), you should write your own fragment shader to be as fast as possible.

(OpenGL) How to read back texture buffer?

Is glGetBufferSubData used both for regular and texture buffers ?
I am trying to troubleshoot why my texture is not showing up, and when I use glGetBufferSubData to read the buffer back, I get garbage:
struct TypeGLtexture // Associate texture data with a GL buffer
{
    TypeGLbufferID GLbuffer;
    TypeImageFile  ImageFile;

    void GenerateGLbuffer()
    {
        if (GLbuffer.isActive==true || ImageFile.GetPixelArray().size()==0) return;
        GLbuffer.isActive=true;
        GLbuffer.isTexture=true;
        GLbuffer.Name="Texture Buffer";
        GLbuffer.ElementCount=ImageFile.GetPixelArray().size();

        glEnable(GL_TEXTURE_2D);
        glGenTextures(1, &GLbuffer.ID);            // instantiate ONE buffer object and return its handle/ID
        glBindTexture(GL_TEXTURE_2D, GLbuffer.ID); // connect the object to the GL_TEXTURE_2D docking point
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
                     ImageFile.GetProperties().width, ImageFile.GetProperties().height,
                     0, GL_RGB, GL_UNSIGNED_BYTE, &(ImageFile.GetPixelArray()[0]));

        if (ImageFile.GetProperties().width==6) {
            cout<<"Actual Data"<<endl;
            for (unsigned i=0;i<GLbuffer.ElementCount;i++) cout<<(int)ImageFile.GetPixelArray()[i]<<" ";
            cout<<endl<<endl;

            cout<<"Buffer data"<<endl;
            GLubyte read[GLbuffer.ElementCount]; // read back from the buffer (to make sure)
            glGetBufferSubData(GL_TEXTURE_2D, 0, GLbuffer.ElementCount, read);
            for (unsigned i=0;i<GLbuffer.ElementCount;i++) cout<<(int)read[i]<<" ";
            cout<<endl<<endl;
        }
    }
EDIT:
Using glGetTexImage(GL_TEXTURE_2D,0,GL_RGB,GL_UNSIGNED_BYTE,read); the data still differs:
Yes, this would work for texture buffers, if this were in fact one of those.
glGetBufferSubData (...) is for Buffer Objects. What you have here is a Texture Object, and you should actually be getting API errors if you call glGetError (...) to check the error state. This is because GL_TEXTURE_2D is not a buffer target; it is a type of texture object.
It is unfortunate, but you are confusing terminology. Even more unfortunate, there is something literally called a buffer texture (it is a special 1D texture) that allows you to treat a buffer object as a very limited form of texture.
Rather than loosely using the term 'buffer' to think about these things, you should consider "data store." That is the terminology that OpenGL uses to avoid any ambiguity; texture objects have a data store and buffer objects do as well. Unless you create a texture buffer object to link these two things they are separate concepts.
Reading back data from a texture object is much more complicated than this.
Before you can read pixel data from anything in OpenGL, you have to define a pixel format and data type. OpenGL is designed to convert data from a texture's internal format to whatever (compatible) format you request. This is why the function you are actually looking for has the following signature:
void glGetTexImage (GLenum  target,
                    GLint   level,
                    GLenum  format, // GL will convert to this format
                    GLenum  type,   // Using this data type per-pixel
                    GLvoid* img);
This applies to all types of OpenGL objects that store pixel data. You can, in fact, use a Pixel Buffer Object to transfer pixel data from your texture object into a separate buffer object. You may then use glGetBufferSubData (...) on that Pixel Buffer Object like you were attempting to do originally.

Pass stream hint to existing texture?

I have a texture that was created by another part of my code (with QT5's bindTexture, but this isn't relevant).
How can I set an OpenGL hint that this texture will be frequently updated?
glBindTexture(GL_TEXTURE_2D, textures[0]);
//Tell opengl that I plan on streaming this texture
glBindTexture(GL_TEXTURE_2D, 0);
There is no mechanism for indicating that a texture will be updated repeatedly; that kind of hint exists only for buffer objects (e.g., VBOs), through the usage parameter. However, there are two possibilities:
Attach your texture to a framebuffer object and update it that way. That's probably the most efficient method to do what you're asking. The memory associated with the texture remains resident on the GPU, and you can update it at rendering speeds.
Try using a pixel buffer object (commonly called a PBO, bound to the GL_PIXEL_UNPACK_BUFFER target) as the buffer that Qt writes its generated texture into, and mark that buffer as GL_DYNAMIC_DRAW. You'll still need to call glTexImage*D() with the buffer offset into the PBO (i.e., probably zero) for each update, but that approach may be more efficient than just blasting texels down the pipe directly through glTexImage*D().
There is no such hint. OpenGL defines functionality, not performance. Just upload to it whenever you need to.

Using a framebuffer as a vertex buffer without moving the data to the CPU

In OpenGL, is there a way to use framebuffer data as vertex data without moving the data through the CPU? Ideally, a framebuffer object could be recast as a vertex buffer object directly on the GPU. I'd like to use the fragment shader to generate a mesh and then render that mesh.
There are a couple of ways you could go about this. The first has already been mentioned by spudd86 (except that you need to use GL_PIXEL_PACK_BUFFER; that's the one written to by glReadPixels).
The other is to use a framebuffer object and then read from its texture in your vertex shader, mapping from a vertex ID (which you would have to manage) to a texture location. If this is a one-time operation, though, I'd go with copying it over to a PBO, binding that as GL_ARRAY_BUFFER, and then just using it as a VBO.
Just use the functions to do the copy and let the driver figure out how to do what you want, chances are as long as you copy directly into the vertex buffer it won't actually do a copy but will just make your VBO a reference to the data.
The main thing to be careful of is that some drivers may not like you using something you told it was for vertex data with an operation for pixel data...
Edit: probably something like the following may or may not work... (IIRC the spec says it should)
GLuint vbo;
glGenBuffersARB(1, &vbo);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, vbo);
// allocate storage first; use the appropriate pixel format and size
glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, w * h * 4, NULL, GL_STREAM_COPY_ARB);
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);

glEnableClientState(GL_VERTEX_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
// set up glVertexPointer with a buffer offset of 0, then draw stuff
Edited to correct the buffer bindings; thanks, Phineas.
The specification for ARB_pixel_buffer_object gives an example demonstrating how to render to a vertex array under "Usage Examples".
The following extensions are helpful for solving this problem:
ARB_texture_float - floating-point internal formats to use for the color buffer attachment
ARB_color_buffer_float - disables automatic clamping for fragment colors and glReadPixels
ARB_pixel_buffer_object - operations for transferring pixel data to buffer objects
If you can do your work in a vertex/geometry shader, you can use transform feedback to write directly into a buffer object. This also gives you the option of skipping the rasterizer and fragment shading.
Transform feedback is available as EXT_transform_feedback, or in core since GL 3.0 (and as the equivalent ARB extension).