I'm using textures as grids: first a large texture (such as 1024x1024 or 2048x2048) is created without data, then the areas being used are filled with glTexSubImage2D calls. However, I want all pixels to have an initial value of 0xffff, not zero. And it feels stupid to allocate megabytes of all-0xffff host memory just to initialize the texture's contents. So is it possible to set all pixels of a texture to a specific value with just a few calls?
Specifically, is it possible in OpenGL 2.1?
There is glClearTexImage, but it was introduced in OpenGL 4.4; see if it's available to you with the ARB_clear_texture extension.
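Where glClearTexImage is available, the whole initialization collapses into a single call. A minimal sketch, assuming a single-channel 16-bit texture; the name tex is illustrative:
GLushort clear_value = 0xffff;
glClearTexImage(tex, 0, GL_RED, GL_UNSIGNED_SHORT, &clear_value);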
If you're restricted to core OpenGL 2.1, allocating client memory and issuing a glTexImage2D call is the only way to do it. In particular you cannot even render to a texture with unextended OpenGL 2.1, so tricks like binding the texture to a framebuffer (OpenGL 3.0+) and clearing it with glClearColor/glClear aren't applicable. However, a one-time allocation and initialization of a 1-16 MB texture isn't that big of a problem, even if it feels 'stupid'.
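For illustration, that one-time upload could look like the following sketch, assuming a 16-bit luminance texture; tex, width and height are placeholders:
std::vector<GLushort> init(width * height, 0xffff); // every texel starts at 0xffff
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, width, height, 0,
             GL_LUMINANCE, GL_UNSIGNED_SHORT, init.data());
// 'init' can be freed as soon as the call returns; OpenGL keeps its own copy.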
Also note that the contents of a newly created texture image are undefined; you cannot rely on them being all zeros, so you have to initialize the texture one way or another.
Related
I need to change parts of a texture, but in a way that takes the current texture data into account instead of just replacing it.
I tried to use glTexSubImage2D, but it replaces the current data without giving me the possibility to specify some kind of operation between the current data and the new data.
One solution would be to cache the texture data in memory, do the blend operation there, and then call glTexSubImage2D with the result, but this will just waste memory...
Is there any function common to both desktop OpenGL and OpenGL ES 2.0 that will allow me to do this?
Of course glTexSubImage2D overwrites any previous data, and doing the operation on the CPU isn't an option at all (it wouldn't just waste memory but, even more importantly, time).
What you can do, though, is use a framebuffer object (FBO). You attach the destination texture as the color render target of the FBO and then render the new data on top of it as a textured quad. The sub-region can be adjusted either with the viewport setting or with the quad's size and position. For the actual operation you can then either use the existing OpenGL blending functionality, if that is sufficient, or a custom fragment shader. In the shader case, however, you can't just render the new data on top of the old: you have to bind both the new and the old data as textures and render the result into a completely new texture, since otherwise you don't have access to the old data inside the shader.
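A rough sketch of the blending variant; dst_tex, fbo and the sub-region x/y/w/h are placeholders, and error checking is omitted:
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, dst_tex, 0);
glViewport(x, y, w, h);                            // restrict the update to the sub-region
glEnable(GL_BLEND);                                // let fixed-function blending combine
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // old and new data
// ... bind the texture holding the new data and draw a full-viewport quad ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Using the GL_FRAMEBUFFER target keeps this compatible with both desktop OpenGL and OpenGL ES 2.0.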
My aim is to render an OpenGL scene without a window, directly into a file. The scene may be larger than my screen resolution.
How can I do this?
I want to be able to set the render area to any size, for example 10000x10000, if possible.
It all starts with glReadPixels, which you will use to transfer the pixels stored in a specific buffer on the GPU to the main memory (RAM). As you will notice in the documentation, there is no argument to choose which buffer. As is usual with OpenGL, the current buffer to read from is a state, which you can set with glReadBuffer.
So a very basic offscreen rendering method would be something like the following. I use C++ pseudo-code, so it will likely contain errors, but it should make the general flow clear:
//Before swapping
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_BACK);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
This will read the current back buffer (usually the buffer you're drawing to). You should call this before swapping the buffers. Note that you can also perfectly read the back buffer with the above method, clear it and draw something totally different before swapping it. Technically you can also read the front buffer, but this is often discouraged as theoretically implementations were allowed to make some optimizations that might make your front buffer contain rubbish.
There are a few drawbacks with this. First of all, we're not really doing offscreen rendering, are we? We render to the screen buffers and read from those. We can emulate offscreen rendering by never swapping the back buffer in, but it doesn't feel right. Besides that, the front and back buffers are optimized for displaying pixels, not for reading them back. That's where Framebuffer Objects come into play.
Essentially, an FBO lets you create a non-default framebuffer (a framebuffer other than the FRONT and BACK buffers) that allows you to draw to a memory buffer instead of the screen buffers. In practice, you can either draw to a texture or to a renderbuffer. The former is optimal when you want to re-use the pixels in OpenGL itself as a texture (e.g. a naive "security camera" in a game), the latter if you just want to render and read back. With this, the code above would become something like the following. Again pseudo-code, so don't kill me if I mistyped or forgot some statements.
//Somewhere at initialization
GLuint fbo, render_buf;
glGenFramebuffers(1,&fbo);
glGenRenderbuffers(1,&render_buf);
glBindRenderbuffer(GL_RENDERBUFFER, render_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf);
//At deinit:
glDeleteFramebuffers(1,&fbo);
glDeleteRenderbuffers(1,&render_buf);
//Before drawing
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,fbo);
//after drawing: also bind the FBO for reading, since glReadPixels
//reads from the framebuffer bound to GL_READ_FRAMEBUFFER
glBindFramebuffer(GL_READ_FRAMEBUFFER,fbo);
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
// Return to onscreen rendering:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,0);
glBindFramebuffer(GL_READ_FRAMEBUFFER,0);
This is a simple example; in reality you likely also want storage for the depth (and stencil) buffer. You also might want to render to a texture, but I'll leave that as an exercise. In any case, you will now perform real offscreen rendering and it might work faster than reading the back buffer.
Finally, you can use pixel buffer objects (PBOs) to make reading pixels asynchronous. The problem is that glReadPixels blocks until the pixel data is completely transferred, which may stall your CPU. With PBOs the implementation may return immediately, as it controls the buffer anyway. It is only when you map the buffer that the pipeline will block. However, PBOs may be optimized to buffer the data solely in RAM, so this block could take a lot less time. The read pixels code would become something like this:
//Init:
GLuint pbo;
glGenBuffers(1,&pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);
//Deinit:
glDeleteBuffers(1,&pbo);
//Reading:
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,0); // 0 instead of a pointer, it is now an offset in the buffer.
//DO SOME OTHER STUFF (otherwise this is a waste of your time)
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo); //Might not be necessary...
GLubyte* pixel_data = (GLubyte*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
//... use pixel_data ...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER); //Release the mapping when done
The part in caps is essential. If you just issue a glReadPixels to a PBO, followed by a glMapBuffer of that PBO, you gained nothing but a lot of code. Sure the glReadPixels might return immediately, but now the glMapBuffer will stall because it has to safely map the data from the read buffer to the PBO and to a block of memory in main RAM.
Please also note that I use GL_BGRA everywhere; this is because many graphics cards internally use this as the optimal rendering format (or the GL_BGR version without alpha). It should be the fastest format for pixel transfers like this. I'll try to find the NVIDIA article I read about this a few months back.
When using OpenGL ES 2.0, GL_DRAW_FRAMEBUFFER might not be available; in that case just use GL_FRAMEBUFFER.
I'll assume that creating a dummy window (you don't render to it; it's just there because the API requires you to make one) that you create your main context into is an acceptable implementation strategy.
Here are your options:
Pixel buffers
A pixel buffer, or pbuffer (which isn't a pixel buffer object), is first and foremost an OpenGL context. Basically, you create a window as normal, then pick a pixel format via wglChoosePixelFormatARB (pbuffer formats must be obtained from this function). Then you call wglCreatePbufferARB, giving it your window's HDC and the pixel buffer format you want to use. Oh, and a width/height; you can query the implementation's maximum width/height.
The default framebuffer of a pbuffer is not visible on the screen, and the maximum width/height is whatever the hardware wants to let you use. So you can render to it and use glReadPixels to read back from it.
You'll need to share your window's context with the pbuffer's context if you have created objects in the window context. Otherwise, you can use the pbuffer context entirely separately. Just don't destroy the window context.
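To make the flow concrete, here is a heavily abbreviated WGL sketch; it assumes the extension function pointers have already been loaded, window_dc/width/height are placeholders, and all error handling is left out:
int pf_attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,      WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,      32,
    0
};
int pixel_format = 0; UINT num_formats = 0;
wglChoosePixelFormatARB(window_dc, pf_attribs, NULL, 1, &pixel_format, &num_formats);
HPBUFFERARB pbuffer = wglCreatePbufferARB(window_dc, pixel_format, width, height, NULL);
HDC   pbuffer_dc = wglGetPbufferDCARB(pbuffer);  // DC backing the pbuffer
HGLRC pbuffer_rc = wglCreateContext(pbuffer_dc); // the pbuffer's own context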
The advantage here is greater implementation support (though most drivers that don't support the alternatives are also old drivers for hardware that's no longer being supported. Or is Intel hardware).
The downsides are these. Pbuffers don't work with core OpenGL contexts. They may work with compatibility contexts, but there is no way to give wglCreatePbufferARB information about OpenGL versions and profiles.
Framebuffer Objects
Framebuffer Objects are more "proper" offscreen rendertargets than pbuffers. FBOs are within a context, while pbuffers are about creating new contexts.
FBOs are just a container for images that you render to. The maximum dimensions that the implementation allows can be queried; you can assume it to be GL_MAX_VIEWPORT_DIMS (make sure an FBO is bound before checking this, as it changes based on whether an FBO is bound).
Since you're not sampling textures from these (you're just reading values back), you should use renderbuffers instead of textures. Their maximum size may be larger than those of textures.
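If you want to check those limits up front, a quick sketch (bind your FBO first, per the note above):
GLint max_rb_size = 0, max_vp_dims[2] = {0, 0};
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, &max_rb_size); // largest renderbuffer dimension
glGetIntegerv(GL_MAX_VIEWPORT_DIMS, max_vp_dims);      // largest usable render area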
The upside is the ease of use. Rather than have to deal with pixel formats and such, you just pick an appropriate image format for your glRenderbufferStorage call.
The only real downside is the narrower band of hardware that supports them. In general, anything that AMD or NVIDIA makes that they still support (right now, GeForce 6xxx or better [note the number of x's], and any Radeon HD card) will have access to ARB_framebuffer_object or OpenGL 3.0+ (where it's a core feature). Older drivers may only have EXT_framebuffer_object support (which has a few differences). Intel hardware is potluck; even if they claim 3.x or 4.x support, it may still fail due to driver bugs.
If you need to render something that exceeds the maximum FBO size of your GL implementation, libtr works pretty well:
The TR (Tile Rendering) library is an OpenGL utility library for doing tiled rendering. Tiled rendering is a technique for generating large images in pieces (tiles).
TR is memory efficient; arbitrarily large image files may be generated without allocating a full-sized image buffer in main memory.
The easiest way is to use something called frame buffer objects (FBOs). You will still have to create a window to create an OpenGL context, though (but this window can be hidden).
The easiest way to fulfill your goal is to use an FBO for off-screen rendering. You don't need to render to a texture and then read the texture image back; just render into a renderbuffer and use glReadPixels. This link will be useful: see Framebuffer Object Examples.
The OpenGL SuperBible discusses texture buffer objects, which are textures formed from data inside VBOs. It looks like there are benefits to using them, but all the examples I've found create regular textures. Does anyone have any advice regarding when to use one over the other?
According to the extension registry, texture buffers are only 1-dimensional, cannot do any filtering, and have to be accessed by explicit texel index instead of normalized [0,1] floating-point texture coordinates. So they are not really a substitute for regular textures, but for large uniform arrays (for example skinning matrices or per-instance data). It would make much more sense to compare them to uniform buffers than to regular textures, as done here.
EDIT: If you want to use VBO data for regular, filtered, 2D textures, you won't get around a data copy (best done by means of PBOs). But when you just want plain array access to VBO data and attributes won't suffice for this, then a texture buffer should be the method of choice.
EDIT: After checking the corresponding chapter in the SuperBible, I found that on the one hand they mention that texture buffers are always 1-dimensional and accessed by discrete integer texel offsets, but on the other hand fail to mention explicitly the lack of filtering. It seems to me they more or less advertise them as textures that just source their data from buffers, which explains the OP's question. But as mentioned above, this is just the wrong comparison. Texture buffers just provide a way to directly access buffer data in shaders in the form of a plain array (though with an adjustable element type), not more (making them useless for regular texturing) but also not less (they are still a great feature).
Buffer textures are a unique type of texture that allows a buffer object to be accessed from a shader as if it were a texture. They are completely distinct from normal OpenGL textures, including Texture1D, Texture2D, and Texture3D. There are a few main reasons why you would use a buffer texture instead of a normal texture:
Since buffer textures are read like textures, you can freely read their contents from every vertex using texelFetch. This is something that you cannot do with vertex attributes, as those are only accessible on a per-vertex basis (see the sketch after this list).
Buffer textures can be useful as an alternative to uniforms when you need to pass in large arrays of data. Uniforms are limited in size, while buffer textures can be massive.
Buffer textures are supported in older versions of OpenGL than shader storage buffer objects (SSBOs), making them a good fallback when SSBOs are not supported on a GPU.
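As a concrete illustration of the setup, a sketch for GL 3.1+; 'matrices' stands in for whatever large array you want to expose:
GLuint tbo, tbo_tex;
glGenBuffers(1, &tbo);
glBindBuffer(GL_TEXTURE_BUFFER, tbo);
glBufferData(GL_TEXTURE_BUFFER, sizeof(matrices), matrices, GL_STATIC_DRAW);
glGenTextures(1, &tbo_tex);
glBindTexture(GL_TEXTURE_BUFFER, tbo_tex);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, tbo); // expose the buffer as vec4 texels
// In the shader:  uniform samplerBuffer data;
//                 vec4 value = texelFetch(data, index);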
Meanwhile, regular textures in OpenGL work differently and are designed for actual texturing. These have the following features not shared by buffer textures:
Regular textures can have filters applied to them, so that when you sample pixels from them in your shaders, your GPU will automatically interpolate colors based on nearby pixels. This prevents pixelation when textures are upscaled heavily, though they will get progressively more blurry.
Regular textures can use mipmaps, which are lower-quality versions of the same texture used at greater view distances. OpenGL has built-in functionality to generate mipmaps, or you can supply your own. Mipmaps can be helpful for performance in large 3D scenes. Mipmaps can also help prevent flickering in textures that are rendered further away.
In summary, you could say that normal textures are good for actual texturing, while buffer textures are good as a method for passing in raw arrays of values.
Regular textures are also what you fall back on when buffer objects (VBOs) are not supported.
I have discovered that there are still a fair number of drivers out there that don't support NPOT textures, so I'm trying to retrofit my 2D engine (based on OpenTK, which is in turn based on OpenGL) with Texture2D support instead of relying on GL_ARB_texture_rectangle. As part of this I am forcing all NPOT texture bitmaps to allocate extra space up to the next power-of-2 size so they won't cause errors on these drivers. My question is: do I really have to resize the real bitmap and texture and allocate all that extra memory, or is there a way to tell OpenGL that I want a power-of-2 size texture, but I'm only going to use a portion of it in the upper left?
Right now my call looks like this:
GL.TexImage2D(texTarget, 0, PixelInternalFormat.Rgba8, bmpUse.Width, bmpUse.Height, 0, PixelFormat.Bgra, PixelType.UnsignedByte, bits.Scan0);
This is after I have made bmpUse be a copy of my real texture bitmap with extra space on the right and bottom.
Use glTexImage2D with empty data to initialize the texture and glTexSubImage2D to fill a portion of it with data. Technically OpenGL allows the data parameter given to glTexImage{1,2,3}D to be a null pointer, indicating that the texture object is just to be initialized. It depends on the language binding whether that feature remains supported in the target language – just test what happens if you pass a null pointer.
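A sketch of that approach with placeholder names: allocate the padded power-of-two texture without data, then upload only the real bitmap into the used corner:
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, pot_width, pot_height, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, NULL);        // NULL: allocate storage only
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, bmp_width, bmp_height,
                GL_BGRA, GL_UNSIGNED_BYTE, bmp_bits); // fill only the used region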
datenwolf is right on how to initialize the texture with just a partial image, but there are two issues with this you need to be aware of:
you need to remap the texture coordinates of your mesh, since the [0,1] texture range now covers the whole padded texture, including the uninitialized part; the useful range is now [0, orig_width/padded_width] (and likewise for the height) – see the sketch after these two points
wrapping will apply to the whole padded texture, not just to your sub-part.
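A sketch of the coordinate remapping from the first point (all names are placeholders):
float u_scale = float(orig_width)  / float(padded_width);
float v_scale = float(orig_height) / float(padded_height);
// multiply every mesh texture coordinate by (u_scale, v_scale)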
Can anyone please tell me how to use hardware memory to create textures in OpenGL? Currently I'm running my game in windowed mode; do I need to switch to fullscreen to get the use of the hardware?
If I can create textures in hardware, is there a limit on the number of textures (other than the hardware memory)? And then how can I cache my textures in hardware? Thanks.
This should be covered by almost all texture tutorials for OpenGL. For example here, here and here.
For every texture you first need a texture name. A texture name is like a unique index for a single texture. Every name points to a texture object that can have its own parameters, data, etc. glGenTextures is used to get new names. I don't know if there is any limit besides the uint range (2^32). If there is, then you will probably get 0 for all new texture names (and a GL error).
The next step is to bind your texture (see glBindTexture). After that, all operations that use or affect textures will use the texture specified by the texture name you passed to glBindTexture. You can now set parameters for the texture (glTexParameter) and upload the texture data with glTexImage2D (for 2D textures). After calling glTexImage you can also free the system memory holding your texture data.
For static textures all this has to be done only once. If you want to use the texture you just need to bind it again and enable texturing (glEnable(GL_TEXTURE_2D)).
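Putting those steps together, a minimal one-time setup might look like this sketch; 'pixels' is assumed to point at width*height RGBA bytes:
GLuint tex;
glGenTextures(1, &tex);            // get a new texture name
glBindTexture(GL_TEXTURE_2D, tex); // subsequent texture calls now affect 'tex'
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// 'pixels' may be freed now; to draw, just bind the texture again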
The size (width/height) of a single texture is limited by GL_MAX_TEXTURE_SIZE. This is normally 4096, 8192 or 16384. It is also limited by the available graphics memory, because the texture has to fit into it together with other resources like the framebuffer or vertex buffers. All textures together can be bigger than the available memory, but then they will be swapped in and out.
In most cases the graphics driver should decide which textures are stored in system memory and which in graphics memory. You can however give certain textures a higher priority with either glPrioritizeTextures or with glTexParameter.
Edit:
I wouldn't worry too much about where textures are stored because the driver normally does a very good job with that. Textures that are used often are also more likely to be stored in graphics memory. If you set a priority, that's just a "hint" for the driver on how important it is for the texture to stay on the graphics card. It's also possible that the priority is completely ignored. You can also check where textures currently are with glAreTexturesResident.
Usually when you talk about generating a texture on the GPU, you're not actually creating texture images and applying them like normal textures. The simpler and more common approach is to use fragment shaders to procedurally calculate the color of each pixel in real time, from scratch, for every single frame.
The canonical example for this is to generate a Mandelbrot pattern on the surface of an object, say a teapot. The teapot is rendered with its polygons and texture coordinates by the application. At some stage of the rendering pipeline every pixel of the teapot passes through the fragment shader which is a small program sent to the GPU by the application. The fragment shader reads the 2D texture coordinates and calculates the Mandelbrot set color of the 2D coordinates and applies it to the pixel.
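A toy version of such a fragment shader might look like the following, written as a C++ string ready for glShaderSource; the coordinate mapping and the iteration count of 64 are arbitrary choices:
const char* mandel_fs = R"GLSL(
    varying vec2 tex_coord;  // interpolated 2D texture coordinates
    void main() {
        vec2 c = tex_coord * 3.0 - vec2(2.0, 1.5); // map UVs onto the complex plane
        vec2 z = vec2(0.0);
        float n = 0.0;
        for (int i = 0; i < 64; ++i) {
            z = vec2(z.x*z.x - z.y*z.y, 2.0*z.x*z.y) + c; // z = z*z + c
            if (dot(z, z) > 4.0) break;                   // escaped the set
            n += 1.0;
        }
        gl_FragColor = vec4(vec3(n / 64.0), 1.0); // shade by escape time
    }
)GLSL";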
Fullscreen mode has nothing to do with it. You can use shaders and generate textures even in windowed mode. As I mentioned, the textures you create never actually occupy space in texture memory; they are created on the fly. One could probably think of a way to capture and cache the generated texture, but this can be somewhat complex and requires multiple rendering passes.
You can learn more about it if you look up "GLSL" in google - the OpenGL shading language.
This somewhat dated tutorial shows how to create a simple fragment shader which draws the Mandelbrot set (page 4).
If you can get your hands on the book "OpenGL Shading Language, 2nd Edition", you'll find it contains a number of simple examples on generating sky, fire and wood textures with the help of an external 3D Perlin noise texture from the application.
To create a texture on the GPU, look into "render to texture" tutorials. There are two common methods: binding a pbuffer context as a texture, or using framebuffer objects. Pbuffer render-to-texture is the older method and has wider support. Framebuffer objects are easier to use.
Also, you don't have to switch to "fullscreen" mode for OpenGL to be HW accelerated. In fact OpenGL doesn't know about windows at all. A fullscreen OpenGL window is just that: a top-level window on top of all other windows, with no decorations and the input focus grabbed. Some drivers bypass the window masking and clipping code and employ a simpler, faster buffer-swap method if the window with the active OpenGL context covers the whole screen, thus gaining a little performance, but with current hardware and software the effect is very small compared to other influences.