How to match a PBO buffer with a texture? - OpenGL

From previous questions/answers I know that once a buffer is bound to GL_PIXEL_UNPACK_BUFFER, OpenGL will feed the texture upload using the data in that buffer.
But while textures come in various shapes, such as GL_TEXTURE_3D, GL_TEXTURE_1D... the buffer bound to GL_PIXEL_UNPACK_BUFFER has no attribute that specifies the internal shape of its data. If I want to transfer data to a texture storage bound to GL_TEXTURE_3D using a PBO, I first need to send the data to the buffer bound to the PBO.
Do I need to care about the layout of the data I send to the PBO? For example, should the data be (xxx..., yyy..., zzz...) or (xyz, xyz, xyz...)?
An example or pseudocode would be helpful.

Using a pixel buffer in a pixel transfer operation changes one thing about pixel transfer operations: where the data comes from. When you don't use a PBO, the data comes from (or goes to) an address in memory you directly allocated. When you do use a PBO, the data comes from (or goes to) a byte offset into a buffer object's storage.
Nothing else about the pixel transfer operation changes. How the "packed" pixel data is laid out remains the same regardless of the location of the pixel transfer.
The matters you are talking about are governed by the nature of the pixel transfer operation you started (i.e., which function you called) and the pixel transfer parameters you passed to it. Again, these work the same way regardless of whether you're using a buffer object or client memory.

Related

Drawing Generated Image Every Frame?

Well, I have a texture that is generated every frame and I was wondering the best way to render it in OpenGL. It is simply pixel data that is generated on the CPU in RGBA8 (32-bit, 8 bits per component) format; I simply need to transfer it to the GPU and draw it onto the screen. I remember there being some sort of pixel buffer or frame buffer that does this without having to generate a new texture every frame with glTexImage2D?
Pixel Buffer Objects do not change the fact that you need to call glTexImage2D (...) to (re-)allocate texture storage and copy your image. PBOs provide a means of asynchronous pixel transfer - basically making it so that a call to glTexImage2D (...) does not have to block until it finishes copying your memory from the client (CPU) to the server (GPU).
The only way this is really going to improve performance for you is if you map the memory in a PBO (Pixel Unpack Buffer) and write to that mapped memory every frame while you are computing the image on the CPU.
While that buffer is bound to GL_PIXEL_UNPACK_BUFFER, call glTexImage2D (...) with NULL for the data parameter and this will upload your texture using memory that is already owned by the server, so it avoids an immediate client->server copy. You might get a marginal improvement in performance by doing this, but do not expect anything huge. It depends on how much work you do between the time you map/unmap the buffer's memory and when you upload the buffer to your texture and use said texture.
Moreover, if you call glTexSubImage2D (...) every frame instead of allocating new texture image storage by calling glTexImage2D (...) (do not worry -- the old storage is reclaimed when no pending command is using it anymore) you may introduce a new source of synchronization overhead that could reduce your performance. What you are looking for here is known as buffer object streaming.
You are more likely to improve performance by using a pixel format that requires no conversion. Newer versions of GL (4.2+) let you query the optimal pixel transfer format using glGetInternalformativ (...).
On a final, mostly pedantic note, glTexImage2D (...) does not generate textures. It allocates storage for their images and optionally transfers pixel data. Texture Objects (and OpenGL objects in general) are actually generated the first time they are bound (e.g. glBindTexture (...)). From that point on, glTexImage2D (...) merely manages the memory belonging to said texture object.

Convert texture data to... vertex data?

I need some clarifications about a paragraph of this wiki page:
https://www.opengl.org/wiki/Buffer_Object#Buffer_Object_Usage
COPY is used when a buffer object is used to pass data from one place in OpenGL to another. For example, you can read image data into a buffer, then use that image data as vertex data in a draw call. [...]
Does that mean that I can theoretically byte-by-byte copy data from a conventional 2D texture into a VBO? How do I do this? That may come in handy.

Transferring large voxel data to a GLSL shader

I'm working a program which renders a dynamic high resolution voxel landscape.
Currently I am storing the voxel data in 32x32x32 blocks with 4 bits each:
struct MapData {
    char data[32][32][16]; // 32*32*32 voxels at 4 bits each = 16 KiB per block
};
MapData *world = new MapData[(width >> 5) * (height >> 5) * (depth >> 5)];
What I'm trying to do with this, is send it to my vertex and fragment shaders for processing and rendering. There are several different methods I've seen to do this, but I have no idea which one will be best for this.
I started with a sampler1D format, but that results in floating-point output between 0 and 1. I also had the sneaking suspicion that it was storing the data as 16 bits per voxel.
As for uniform buffer objects, I tried and failed to implement this.
My biggest concern with all of this is not having to send the whole map to the GPU every frame. I want to be able to load maps up to ~256MB (1024x2048x256 voxels) in size, so I need to be able to send it all once, and then resend only the blocks that were changed.
What is the best solution for this, short of writing OpenCL to handle the video memory for me? If there's a better way to store my voxels that makes this easier, I'm open to other formats.
If you just want a large block of memory to access from in a shader, you can use a buffer texture. This requires a semi-recent GL version (3.1 or better, or the ARB_texture_buffer_object extension), so you need DX10 hardware or better.
The concept is pretty straightforward. You make a buffer object that stores your data. You create a buffer texture using the typical glGenTextures command, then glBindTexture it to the GL_TEXTURE_BUFFER target. Then you use glTexBuffer to associate your buffer object with the texture.
Now, you seem to want to use 4 bits per voxel. So your image format needs to be a single-channel, unsigned 8-bit integral format. Your glTexBuffer call should be something like this:
glTexBuffer(GL_TEXTURE_BUFFER, GL_R8UI, buffer);
where buffer is the buffer object that stores your voxel data.
Once this is done, you can change the contents of this buffer object using the usual mechanisms.
You bind the buffer texture for rendering just like any other texture.
You use a usamplerBuffer sampler type in your shader, because it's an unsigned integral buffer texture. You must use the texelFetch command to access data from it, which takes integer texture coordinates and ignores filtering. Which is of course exactly what you want.
Note that buffer textures do have size limits. However, the size limits are often some large percentage of video memory.

How to read a 3D texture from GPU memory with Pixel Buffer Objects

I'm writing data into a 3D texture from within a fragment shader, and I need to asynchronously read back said data into system memory. The only means of asynchronously initiating the packing operation into the buffer object seems to be calling glReadPixels() with a NULL pointer. But this function insists on getting passed a rectangle defining the region to read back. Now I don't know if these parameters are ignored when using PBOs, but I assume not. In this case, I have no idea what to pass to this function in order to obtain the whole 3D texture.
Even if I have to read back individual slices (which would be kind of stupid IMO), I still have no idea how to communicate to OpenGL which slice to read from. Am I missing something?
BTW, I could use individual 2D textures for every slice, but that would screw up (3D-)mipmapping if I'm not mistaken. I wanted to use the 3D mipmaps in order to efficiently find regions of interest in the resulting 3D texture.
P.S. Sorry for the sub-optimal tags; apparently no one has ever asked about 3D textures before, and I'm not allowed to create new tags...
Who says that glReadPixels is the only way to read image data? Maybe in OpenGL ES it is, but if you're using ES, you should say so. The rest of this answer will be assuming you're talking about desktop GL.
If you have a texture, and you want to read its contents, you should use glGetTexImage. The switch that controls whether it reads into a buffer object or not is the same switch that controls it for glReadPixels: whether a buffer is bound to GL_PIXEL_PACK_BUFFER.
Note that glGetTexImage will retrieve the entire texture (for a given mipmap level).

How do I update part of a texture?

It seems as though glTexSubImage2D requires a pointer to the full texture buffer. How does one make partial updates to a texture by only providing a pointer to the update region rather than the whole buffer?
For example, if I want to overlay a second image onto an existing texture, rather than copy the image data into my texture buffer and then call glTexSubImage2D, I'd like OpenGL to update the texture without having to copy data between RAM locations.
Where did you get the notion that glTexSubImage2D requires a pointer to the full texture buffer?
From the documentation linked above, it seems to me that the last parameter is a pointer to the buffer containing your new data only. The other parameters are what you use to specify which texture object to update (just an OpenGL identifier, no pointer to the original data required) and the offset and size to copy your new data to.
TL;DR: glTexSubImage2D takes a pointer to your new data and does exactly what you think it should in your example :)