(OpenGL) How to read back texture buffer?

Is glGetBufferSubData used for both regular and texture buffers?
I am trying to troubleshoot why my texture is not showing up, and when I use glGetBufferSubData to read the buffer I get back garbage:
struct TypeGLtexture //Associate texture data with a GL buffer
{
    TypeGLbufferID GLbuffer;
    TypeImageFile  ImageFile;

    void GenerateGLbuffer ()
    {
        if (GLbuffer.isActive==true || ImageFile.GetPixelArray().size()==0) return;
        GLbuffer.isActive=true;
        GLbuffer.isTexture=true;
        GLbuffer.Name="Texture Buffer";
        GLbuffer.ElementCount=ImageFile.GetPixelArray().size();

        glEnable(GL_TEXTURE_2D);
        glGenTextures (1,&GLbuffer.ID);            //instantiate ONE texture object and return its handle/ID
        glBindTexture (GL_TEXTURE_2D,GLbuffer.ID); //connect the object to the GL_TEXTURE_2D binding point
        glTexImage2D  (GL_TEXTURE_2D,0,GL_RGB,
                       ImageFile.GetProperties().width,
                       ImageFile.GetProperties().height,
                       0,GL_RGB,GL_UNSIGNED_BYTE,&(ImageFile.GetPixelArray()[0]));

        if (ImageFile.GetProperties().width==6)
        {
            cout<<"Actual Data"<<endl;
            for (unsigned i=0;i<GLbuffer.ElementCount;i++) cout<<(int)ImageFile.GetPixelArray()[i]<<" ";
            cout<<endl<<endl;

            cout<<"Buffer data"<<endl;
            GLubyte read[GLbuffer.ElementCount]; //Read back from the buffer (to make sure)
            glGetBufferSubData(GL_TEXTURE_2D,0,GLbuffer.ElementCount,read);
            for (unsigned i=0;i<GLbuffer.ElementCount;i++) cout<<(int)read[i]<<" ";
            cout<<endl<<endl;
        }
    }
};
EDIT:
Using glGetTexImage(GL_TEXTURE_2D,0,GL_RGB,GL_UNSIGNED_BYTE,read); the data still differs:

Yes, this would work for texture buffers, if this were in fact one of those.
glGetBufferSubData (...) is for Buffer Objects. What you have here is a Texture Object, and you should actually be getting API errors if you call glGetError (...) to check the error state. This is because GL_TEXTURE_2D is not a buffer target; it is a type of texture object.
It is unfortunate, but you are confusing terminology. Even more unfortunate, there is something literally called a buffer texture (it is a special 1D texture) that allows you to treat a buffer object as a very limited form of texture.
Rather than loosely using the term 'buffer' to think about these things, you should consider "data store." That is the terminology that OpenGL uses to avoid any ambiguity; texture objects have a data store and buffer objects do as well. Unless you create a texture buffer object to link these two things they are separate concepts.
Reading back data from a texture object is much more complicated than this.
Before you can read pixel data from anything in OpenGL, you have to define a pixel format and data type. OpenGL is designed to convert data from a texture's internal format to whatever (compatible) format you request. This is why the function you are actually looking for has the following signature:
void glGetTexImage (GLenum  target,
                    GLint   level,
                    GLenum  format, // GL will convert to this format
                    GLenum  type,   // Using this data type per-pixel
                    GLvoid *img);
This applies to all types of OpenGL objects that store pixel data. You can, in fact, use a Pixel Buffer Object to transfer pixel data from your texture object into a separate buffer object. You may then use glGetBufferSubData (...) on that Pixel Buffer Object like you were attempting to do originally.

Related

Transferring large voxel data to a GLSL shader

I'm working a program which renders a dynamic high resolution voxel landscape.
Currently I am storing the voxel data in 32x32x32 blocks with 4 bits each:
struct MapData {
char data[32][32][16];
}
MapData *world = new MapData[(width >> 5) * (height >> 5) * (depth >> 5)];
What I'm trying to do with this, is send it to my vertex and fragment shaders for processing and rendering. There are several different methods I've seen to do this, but I have no idea which one will be best for this.
I started with a sampler1D format, but that results in floating-point output between 0 and 1. I also had the sneaking suspicion that it was storing the data as 16 bits per voxel.
As for Uniform Buffer Objects, I tried and failed to implement this.
My biggest concern with all of this is not having to send the whole map to the GPU every frame. I want to be able to load maps up to ~256MB (1024x2048x256 voxels) in size, so I need to be able to send it all once, and then resend only the blocks that were changed.
What is the best solution for this, short of writing OpenCL to handle the video memory for me? If there's a better way to store my voxels that makes this easier, I'm open to other formats.
If you just want a large block of memory to access from in a shader, you can use a buffer texture. This obviously requires a semi-recent GL version (3.0 or better), so you need DX10 hardware or better.
The concept is pretty straightforward. You make a buffer object that stores your data. You create a buffer texture using the typical glGenTextures command, then glBindTexture it to the GL_TEXTURE_BUFFER target. Then you use glTexBuffer to associate your buffer object with the texture.
Now, you seem to want to use 4 bits per voxel. So your image format needs to be a single-channel, unsigned 8-bit integral format. Your glTexBuffer call should be something like this:
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8UI, buffer);
where buffer is the buffer object that stores your voxel data.
Once this is done, you can change the contents of this buffer object using the usual mechanisms.
You bind the buffer texture for rendering just like any other texture.
You use a usamplerBuffer sampler type in your shader, because it's an unsigned integral buffer texture. You must use the texelFetch command to access data from it, which takes integer texture coordinates and ignores filtering. Which is of course exactly what you want.
Note that buffer textures do have size limits. However, the size limits are often some large percentage of video memory.

Differences and relationship between glActiveTexture and glBindTexture

From what I gather, glActiveTexture sets the active "texture unit". Each texture unit can have multiple texture targets (usually GL_TEXTURE_1D, 2D, 3D or CUBE_MAP).
If I understand correctly, you have to call glActiveTexture to set the texture unit first (initialized to GL_TEXTURE0), and then you bind (one or more) "texture targets" to that texture unit?
The number of texture units available is system dependent. I see enums for up to 32 in my library. I guess this essentially means I can have the lesser of my GPU's limit (which I think is 16) and 32 textures in GPU memory at any one time? I guess there's an additional limit that I don't exceed my GPU's maximum memory (supposedly 1 GB).
Am I understanding the relationship between texture targets and texture units correctly? Let's say I'm allowed 16 units and 4 targets each, does that mean there's room for 16*4=64 targets, or does it not work like that?
Next you typically want to load a texture. You can do this via glTexImage2D. The first argument of which is a texture target. If this works like glBufferData, then we essentially bind the "handle"/"texture name" to the texture target, and then load the texture data into that target, and thus indirectly associate it with that handle.
What about glTexParameter? We have to bind a texture target, and then choose that same target again as the first argument? Or does the texture target not need to be bound as long as we have the correct active texture unit?
glGenerateMipmap operates on a target too...that target has to still be bound to the texture name for it to succeed?
Then when we want to draw our object with a texture on it, do we have to both choose an active texture unit, and then a texture target? Or do we choose a texture unit, and then we can grab data from any of the 4 targets associated with that unit? This is the part that's really confusing me.
All About OpenGL Objects
The standard model for OpenGL objects is as follows.
Objects have state. Think of them as a struct. So you might have an object defined like this:
struct Object
{
    int count;
    float opacity;
    char *name;
};
The object has certain values stored in it and it has state. OpenGL objects have state too.
Changing State
In C/C++, if you have an instance of type Object, you would change its state as follows: obj.count = 5; You would directly reference an instance of the object, get the particular piece of state you want to change, and shove a value into it.
In OpenGL, you don't do this.
For legacy reasons better left unexplained, to change the state of an OpenGL object, you must first bind it to the context. This is done with some form of glBind* call.
The C/C++ equivalent to this is as follows:
Object *g_objs[MAX_LOCATIONS] = {NULL};

void BindObject(int loc, Object *obj)
{
    g_objs[loc] = obj;
}
Textures are interesting; they represent a special case of binding. Many glBind* calls have a "target" parameter. This represents different locations in the OpenGL context where objects of that type can be bound. For example, you can bind a framebuffer object for reading (GL_READ_FRAMEBUFFER) or for writing (GL_DRAW_FRAMEBUFFER). This affects how OpenGL uses the buffer. This is what the loc parameter above represents.
Textures are special because when you first bind them to a target, they get special information. When you first bind a texture as a GL_TEXTURE_2D, you are actually setting special state in the texture. You are saying that this texture is a 2D texture. And it will always be a 2D texture; this state cannot be changed ever. If you have a texture that was first bound as a GL_TEXTURE_2D, you must always bind it as a GL_TEXTURE_2D; attempting to bind it as GL_TEXTURE_1D will produce a run-time error.
Once the object is bound, its state can be changed. This is done via generic functions specific to that object. They too take a location that represents which object to modify.
In C/C++, this looks like:
void ObjectParameteri(int loc, ObjectParameters eParam, int value)
{
    if(g_objs[loc] == NULL)
        return;

    switch(eParam)
    {
    case OBJECT_COUNT:
        g_objs[loc]->count = value;
        break;
    case OBJECT_OPACITY:
        g_objs[loc]->opacity = (float)value;
        break;
    default:
        //INVALID_ENUM error
        break;
    }
}
Notice how this function sets state on whatever object happens to be bound to the given loc value.
For texture objects, the main texture state changing functions are glTexParameter. The only other functions that change texture state are the glTexImage functions and their variations (glCompressedTexImage, glCopyTexImage, the recent glTexStorage). The various SubImage versions change the contents of the texture, but they do not technically change its state. The Image functions allocate texture storage and set the texture's format; the SubImage functions just copy pixels around. That is not considered the texture's state.
Allow me to repeat: these are the only functions that modify texture state. glTexEnv modifies environment state; it doesn't affect anything stored in texture objects.
Active Texture
The situation for textures is more complex, again for legacy reasons best left undisclosed. This is where glActiveTexture comes in.
For textures, there aren't just targets (GL_TEXTURE_1D, GL_TEXTURE_CUBE_MAP, etc). There are also texture units. In terms of our C/C++ example, what we have is this:
Object *g_objs[MAX_OBJECTS][MAX_LOCATIONS] = {NULL};
int g_currObject = 0;

void BindObject(int loc, Object *obj)
{
    g_objs[g_currObject][loc] = obj;
}

void ActiveObject(int currObject)
{
    g_currObject = currObject;
}
Notice that now, we not only have a 2D list of Objects, but we also have the concept of a current object. We have a function to set the current object, we have the concept of a maximum number of current objects, and all of our object manipulation functions are adjusted to select from the current object.
When you change the currently active object, you change the entire set of target locations. So you can bind something that goes into current object 0, switch to current object 4, and will be modifying a completely different object.
This analogy with texture objects is perfect... almost.
See, glActiveTexture does not take an integer; it takes an enumerator. Which in theory means that it can take anything from GL_TEXTURE0 to GL_TEXTURE31. But there's one thing you must understand:
THIS IS FALSE!
The actual range that glActiveTexture can take is governed by GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS. That is the maximum number of simultaneous multitextures that an implementation allows. These are each divided up into different groupings for different shader stages. For example, on GL 3.x class hardware, you get 16 vertex shader textures, 16 fragment shader textures, and 16 geometry shader textures. Therefore, GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS will be 48.
But there aren't 48 enumerators. Which is why glActiveTexture doesn't really take enumerators. The correct way to call glActiveTexture is as follows:
glActiveTexture(GL_TEXTURE0 + i);
where i is a number between 0 and GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1.
Rendering
So what does all of this have to do with rendering?
When using shaders, you set your sampler uniforms to a texture image unit (glUniform1i(samplerLoc, i), where i is the image unit). That represents the number you used with glActiveTexture. The sampler will pick the target based on the sampler type. So a sampler2D will pick from the GL_TEXTURE_2D target. This is one reason why samplers have different types.
Now this sounds suspiciously like you can have two GLSL samplers, with different types that use the same texture image unit. But you can't; OpenGL forbids this and will give you an error when you attempt to render.
I'll give it a try! None of this is that complicated; it is mostly a question of terminology. I hope I can make myself clear.
You can create roughly as many Texture Objects as there is available memory in your system. These objects hold the actual data (texels) of your textures, along with parameters, provided by glTexParameter (see FAQ).
When creating one, you have to assign one Texture Target to the texture object, which represents the type of the texture (GL_TEXTURE_2D, GL_TEXTURE_3D, GL_TEXTURE_CUBE_MAP, ...).
These two items, texture object and texture target represent the texture data. We'll come back to them later.
Texture units
Now, OpenGL provides an array of texture units that can be used simultaneously while drawing. The size of the array depends on the OpenGL implementation; yours has 8.
You can bind a texture object to a texture unit to use the given texture while drawing.
In a simple and easy world, to draw with a given texture, you'd bind a texture object to the texture unit, and you'd do (pseudocode):
glTextureUnit[0] = textureObject
As GL is a state machine, it, alas, does not work this way. Supposing that our textureObject has data for the GL_TEXTURE_2D texture target, we'll express the previous assignment as:
glActiveTexture(GL_TEXTURE0); // select slot 0 of the texture units array
glBindTexture(GL_TEXTURE_2D, textureObject); // do the binding
Note that GL_TEXTURE_2D really depends on the type of the texture you want to bind.
Texture objects
In pseudo code, to set texture data or texture parameters, you'd do for example :
setTexData(textureObject, ...)
setTexParameter(textureObject, TEXTURE_MIN_FILTER, LINEAR)
OpenGL can't directly manipulate texture objects. To update/set their content or change their parameters, you have to first bind them to the active texture unit (whichever it is). The equivalent code becomes:
glBindTexture(GL_TEXTURE_2D, textureObject) // this 'installs' textureObject in texture unit
glTexImage2D(GL_TEXTURE_2D, ...)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
Shaders
Shaders have access to all the texture units, they don't care about the active texture.
Sampler uniforms are int values representing the index of the texture unit to use for the sampler (and not the texture object to use).
So you have to bind your texture objects to the units you want to use.
The type of the sampler determines the texture target that is used from the texture unit: sampler2D for GL_TEXTURE_2D, and so on...
Imagine the GPU as a paint processing plant.
There are a number of tanks, which deliver dye to a painting machine. In the painting machine the dye is then applied to the object. Those tanks are the texture units.
Those tanks can be equipped with different kinds of dye. Each kind of dye requires a different kind of solvent. The "solvent" is the texture target. For convenience each tank is connected to a solvent supply, but only one kind of solvent can be used at a time in each tank. So there's a valve/switch for TEXTURE_CUBE_MAP, TEXTURE_3D, TEXTURE_2D, TEXTURE_1D. You can fill all the dye types into the tank at the same time, but since only one kind of solvent goes in, it will "dilute" only the matching kind of dye. So you can have each kind of texture bound, but the binding with the "most important" solvent will actually go into the tank and mix with the kind of dye it belongs to.
And then there's the dye itself, which comes from a warehouse and is filled into the tank by "binding" it. That's your texture.

Do I need to recreate a texture when using OpenGL/CUDA interoperability?

I want to manipulate a texture which I use in OpenGL using CUDA. Knowing that I need to use a PBO for this, I wonder if I have to recreate the texture every time I make changes to the PBO, like this:
// Select the appropriate buffer
glBindBuffer( GL_PIXEL_UNPACK_BUFFER, bufferID);
// Select the appropriate texture
glBindTexture( GL_TEXTURE_2D, textureID);
// Make a texture from the buffer
glTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, Width, Height,GL_BGRA, GL_UNSIGNED_BYTE, NULL);
Does glTexSubImage2D and the like copy the data from the PBO?
All pixel transfer operations work with buffer objects. Since glTexSubImage2D initiates a pixel transfer operation, it can be used with buffer objects.
There is no long-term connection made between buffer objects used for pixel transfers and textures. The buffer object is used much like a client memory pointer would be used for glTexSubImage2D calls. It's there to store the data while OpenGL formats and pulls it into the texture. Once it's done, you can do whatever you want with it.
The only difference is that, because OpenGL manages the buffer object, the upload from the buffer can be asynchronous. Well that and you get to play games like filling the buffer object from GPU operations (whether from OpenGL, OpenCL, or CUDA).

Using a framebuffer as a vertex buffer without moving the data to the CPU

In OpenGL, is there a way to use framebuffer data as vertex data without moving the data through the CPU? Ideally, a framebuffer object could be recast as a vertex buffer object directly on the GPU. I'd like to use the fragment shader to generate a mesh and then render that mesh.
There are a couple of ways you could go about this; the first has already been mentioned by spudd86 (except you need to use GL_PIXEL_PACK_BUFFER; that's the one that's written to by glReadPixels).
The other is to use a framebuffer object and then read from its texture in your vertex shader, mapping from a vertex id (that you would have to manage) to a texture location. If this is a one-time operation though I'd go with copying it over to a PBO and then binding into GL_ARRAY_BUFFER and then just using it as a VBO.
Just use the functions to do the copy and let the driver figure out how to do what you want, chances are as long as you copy directly into the vertex buffer it won't actually do a copy but will just make your VBO a reference to the data.
The main thing to be careful of is that some drivers may not like you using something you told it was for vertex data with an operation for pixel data...
Edit: something like the following may or may not work... (IIRC the spec says it should)
GLuint vbo;
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, vbo);
// use appropriate pixel formats and size
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
// set the vertex pointer, then draw stuff
Edited to correct buffer bindings, thanks Phineas
The specification for ARB_pixel_buffer_object gives an example demonstrating how to render to a vertex array under "Usage Examples".
The following extensions are helpful for solving this problem:
ARB_texture_float - floating-point internal formats to use for the color buffer attachment
ARB_color_buffer_float - disable automatic clamping for fragment colors and glReadPixels
ARB_pixel_buffer_object - operations for transferring pixel data to buffer objects
If you can do your work in a vertex/geometry shader, you can use transform feedback to write directly into a buffer object. This also gives you the option of skipping the rasterizer and fragment shading.
Transform feedback is available as EXT_transform_feedback or core version since GL 3.0 (and the ARB equivalent).

Equiv of glDrawpixels that operates on GPU memory?

glDrawPixels(GLsizei width, GLsizei height, GLenum format, GLenum type, const GLvoid *pixels);
Is there a function like this, except instead of accessing CPU memory, it accesses GPU memory? [Either a texture or a framebuffer object]
Let's cover all the bases here.
First, a direct answer: yes, there is such a function. It's called glDrawPixels. I'm not kidding.
glDrawPixels can certainly read from GPU memory, provided that you are using buffer objects as their source data (commonly called "pixel buffer objects"). glDrawPixels can use pixel buffer objects as the source for pixel data. Buffer objects are (theoretically, at least) in GPU memory, so they qualify.
However, you add onto this "either a texture or a framebuffer object". Under this qualification, you're asking, "is there a way to copy pixel data from one texture/framebuffer to the current framebuffer?"
Yes. glBlitFramebuffer can do that. It blits from the GL_READ_FRAMEBUFFER to the GL_DRAW_FRAMEBUFFER. And since you can add images from textures to FBOs, you can copy from images just fine. You can even copy from the default framebuffer to some renderbuffer or texture.
You can also employ glCopyImageSubData, which copies pixel rectangles from one image to another. It's a lot more convenient than glBlitFramebuffer if all you're doing is copying pixel data. This is quite new at present (GL 4.3, or ARB_copy_image). It cannot be used to copy data to the default framebuffer.
If it is in a texture:
set up orthographic frustum
disable blending, depth test, etc.
bind texture
draw screen-aligned textured quad with correct texture coordinates
I use this, for example, in Compositor::_drawPixels.
glDrawPixels can read from a Buffer Object. Just do a
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, XXX)
before calling glDrawPixels.
Caveat: glDrawPixels is deprecated...
Use glBlitFramebuffer, which operates on framebuffer objects. And this is not deprecated.
You can take advantage of format conversion, scaling and multisampling.