Is it possible to read a specific area of a texture, from a specific mipmap level, into a buffer? I'm looking for a method to save a texture into PNG/JPG files, with one file per mipmap level. Why? Because I load a specific file depending on the level-of-detail quadtree (too complicated to explain fully, but it's necessary so that I only keep a few MB in GPU memory instead of the whole texture). Is it possible to do that using PBOs? I need a function like glTexSubImage2D (which lets me choose x, y, width, height and mipmap level), but for reading pixels from a texture into a buffer.
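For illustration, this is roughly the shape of the call I'm after (just a sketch; I'm not sure whether something like GL 4.5's glGetTextureSubImage, or a PBO-based equivalent, is an option in my case):
// Read a w x h region at (x, y) from mipmap level 'level' of 'textureObj'
// into client memory (a GL_PIXEL_PACK_BUFFER could be bound instead).
GLint x = 0, y = 0, w = 256, h = 256, level = 2;
std::vector<GLubyte> pixels(w * h * 4);
glGetTextureSubImage(textureObj, level,
                     x, y, 0,      // offset (z = 0 for a 2D texture)
                     w, h, 1,      // size   (depth = 1 for a 2D texture)
                     GL_RGBA, GL_UNSIGNED_BYTE,
                     (GLsizei)pixels.size(), pixels.data());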
OpenGL 4.5 introduces a new function that can be used to allocate memory for a 2D texture:
glTextureStorage2D(textureObj, levels, internal_format, width, height);
// here, the 'levels' parameter is the number of mipmap levels
// we need to set it to 1 if we only want to use the original texture we loaded, because then there is only one mipmap level
And there is a function that can conveniently generate mipmaps for a texture:
glGenerateTextureMipmap(textureObj);
So, I have a question: since I need to specify the size of the storage when I use glTextureStorage2D, do I need to reserve extra space for a later call to glGenerateTextureMipmap, given that mipmaps require extra memory?
I know that I can avoid this problem with glTexImage2D: I can first bind the texture to the GL_TEXTURE_2D target, then copy the image's data into the texture's memory using glTexImage2D (which only asks for the target mipmap level, not the number of mipmap levels), and finally call glGenerateMipmap(GL_TEXTURE_2D).
I am a little confused about glGenerateMipmap. Will it allocate extra space for the generated mipmap levels?
What do you mean by "reserve extra memory"? You told OpenGL how many mipmaps you wanted when you used glTextureStorage2D. When it comes to immutable storage, you don't get to renege. If you say a texture has width W, height H, and mipmap count L, then that's what that texture has. Forever.
As such, glGenerateTextureMipmap will only generate data for the number of mipmap levels you put into the texture (minus the base level).
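For example, a minimal sketch (variable names are mine) of allocating the full mipmap chain up front and then filling it:
// Allocate immutable storage with the full mipmap chain for a width x height texture.
// 'tex' is assumed to have been created with glCreateTextures(GL_TEXTURE_2D, 1, &tex).
int levels = 1 + (int)floor(log2((double)(width > height ? width : height)));
glTextureStorage2D(tex, levels, GL_RGBA8, width, height);
// Upload the base level...
glTextureSubImage2D(tex, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// ...then fill the remaining (levels - 1) levels from it. No extra allocation
// happens here; the storage for every level already exists.
glGenerateTextureMipmap(tex);
If you pass levels = 1 instead, the texture has no room for mipmaps and glGenerateTextureMipmap has nothing to fill.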
I am now using FFmpeg to read a high-resolution video (6480*1920) and OpenGL to show it.
After decoding, I get 3 pointers that point to the Y, U and V planes.
At first I used sws_scale to convert it to RGB and show that, but I found it too slow. So I deal with the YUV directly. My second try was to generate three one-channel textures and convert to RGB in the fragment shader. It is faster, but still cannot reach 60 fps.
I find the bottleneck is this function: texture(texy, tex_coord.xy). When the texture is large, it costs a lot of time. So instead of calling it 3 times, my idea is to put the Y, U and V into one single texture, since a texture can have 4 channels. But I wonder how I can update a single channel of a texture.
I tried the following code, but it does not seem to work. Instead of updating one channel, glTexSubImage2D changes the whole texture:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, frame->width, frame->height, 0, GL_RED, GL_UNSIGNED_BYTE, Y);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frame->width, frame->height, GL_GREEN, GL_UNSIGNED_BYTE, U);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frame->width, frame->height, GL_BLUE, GL_UNSIGNED_BYTE, V);
So how can I use one texture to pass the YUV data? I also tried gathering the YUV data into one array and then generating the texture, but it does not help, since building that array takes a lot of time.
Any good ideas?
You're approaching this from the wrong angle, since you don't actually understand what is causing the poor performance in the first place. Yes, texture access is a rather expensive operation. But it is not that expensive; just think about the amount of texture data that gets pushed around in modern games at very high frame rates.
The problem is not the channel format of the texture, and it is not the call to GLSL's texture function either.
Your problem is this:
(…) high resolution video (6480*1920)
Plain and simple, the dimensions of the frame are outside the range of what the GPU is comfortable working with. Try breaking the picture down into a set of smaller textures. Using the glPixelStorei parameters GL_UNPACK_ROW_LENGTH, GL_UNPACK_SKIP_PIXELS and GL_UNPACK_SKIP_ROWS you can select the rectangle inside your source picture to copy from.
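A rough sketch of what that looks like for one tile of one plane (tileW, tileH, srcX, srcY, rowStride and tileTexture are placeholder names; rowStride is the width in pixels of the full source plane):
// Upload one tileW x tileH rectangle starting at (srcX, srcY) of the full plane
// into an already allocated single-channel tile texture. 'src' points at the
// start of the full plane, not at the tile.
glBindTexture(GL_TEXTURE_2D, tileTexture);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);            // byte-tight rows
glPixelStorei(GL_UNPACK_ROW_LENGTH, rowStride);   // pixels per row of the source plane
glPixelStorei(GL_UNPACK_SKIP_PIXELS, srcX);       // skip to the tile's first column
glPixelStorei(GL_UNPACK_SKIP_ROWS, srcY);         // skip to the tile's first row
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, tileW, tileH, GL_RED, GL_UNSIGNED_BYTE, src);
// Reset the unpack state so later uploads are not affected.
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);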
You don't have to make several draw calls BTW, just select the texture inside the shader based on the target fragment position or texture coordinate.
Unfortunately OpenGL doesn't offer a convenient function to determine the sweet spot, but for most GPUs these days it is around 2048 in either direction for dense textures. Go above that and, in my experience, performance tanks for dense textures.
Sparse textures are an entirely different chapter, and irrelevant for this problem.
And just for the sake of completeness: I take it that you don't reinitialize the texture for each and every frame with a call to glTexImage2D. Do that only once at the start of the video, then just update the texture(s).
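In other words, something along these lines (a sketch; the single-channel R8 format and the frameWidth/frameHeight names are placeholders):
// Once, at the start of the video: allocate the texture storage, no data yet.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, frameWidth, frameHeight, 0,
             GL_RED, GL_UNSIGNED_BYTE, NULL);
// Every frame: only update the contents of the already allocated texture.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frameWidth, frameHeight,
                GL_RED, GL_UNSIGNED_BYTE, Y);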
I baked a texture to a 4K image and downscaled it to 512 using GIMP, but the quality is bad in Unity 3D. The image format is PNG.
Is it a good idea to bake a texture to a 4K image and downscale it to 512 for a mobile game?
What can I do to keep good quality on a baked texture at a small size (512 or below) for mobile game development?
In general, you can expect to sacrifice some texture quality if you use a smaller texture. It makes sense: with less data, something must be lost, and that's usually fine details and sharp edges. There are some strategies available, though, to get the most out of your texture data:
Make sure your model's UVs are laid out as efficiently as possible. Parts where detail is important -- like faces -- should be given more UV space than parts where detail is unimportant -- like low-frequency clothing or the undersides of models. Minimize unused texture space.
Where feasible, you can use overlapping UVs to reuse similar portions of your UV space. If your model is exactly symmetrical, you could map the halves to the same space. Or if your model is a crate with identical sides, you could map each side to the same UV square.
See if you can decompose your texture into multiple textures that are less sensitive to downsizing, then layer them together. For instance, a landscape texture might be decomposable into a low-frequency color texture and a high-frequency noise texture. In that case, the low-frequency texture could be scaled much smaller without loss of quality, while the high-frequency texture might be replaceable with a cropped, tiled texture, or procedurally generated in the shader.
Experiment with different resizing algorithms. Most image editors have an option to resize using bicubic or bilinear methods. Bicubic does a better job preserving edges, while bilinear can sometimes be a better match for some content/shaders. You may also be able to improve a resized texture with careful use of blur or sharpen filters.
If the hardware targeted by your game supports it, you should evaluate using hardware compression formats, like ETC2 or ASTC. The ETC2 format, for example, can compress 24-bit RGB texture data to be six times smaller while still having the same image dimensions. The compression is lossy, so some image information is lost and there are some artifacts, but for some content it can look better than raw textures rescaled to a similar data size. Depending on content, some combination of resizing and hardware texture compression may get you closest to your desired quality/size tradeoff. The Unity manual claims it automatically picks a good default texture format when exporting, but you can try other ones in the export options.
Usually, the motivation for using smaller textures is to consume less video memory and improve performance. However, if the motivation is simply to reduce the download size of the game, you could try alternate image formats. JPG, for instance, lets you choose an image quality value to get a smaller file size. It's rarely used for games, though, because it takes up as much video memory as a similarly sized PNG, can have artifacts, and doesn't support alpha.
I have been trying to make a cross-platform 2D online game, and my maps are made of tiles.
My tileset, which I render the tiles from, is quite huge.
I wanted to know how I can disable hardware rendering, or at least make it more capable.
Hence, I wanted to know what the basic limits of video RAM are; as far as I know, Direct3D has texture size limits (by which I don't mean the power-of-two texture sizes).
If you want to use a software renderer, link against Mesa.
You can get an estimate of the maximum texture size using these methods:
21.130 What's the maximum size texture map my device will render hardware accelerated?
A good OpenGL implementation will render with hardware acceleration whenever possible. However, the implementation is free to not render hardware accelerated. OpenGL doesn't provide a mechanism to ensure that an application is using hardware acceleration, nor to query that it's using hardware acceleration. With this information in mind, the following may still be useful:
You can obtain an estimate of the maximum texture size your implementation supports with the following call:
GLint texSize;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &texSize);
If your texture isn't hardware accelerated, but still within the size restrictions returned by GL_MAX_TEXTURE_SIZE, it should still render correctly.
This is only an estimate, because the glGet*() function doesn't know what format, internalformat, type, and other parameters you'll be using for any given texture. OpenGL 1.1 and greater solves this problem with texture proxies.
Here's an example of using texture proxy:
glTexImage2D(GL_PROXY_TEXTURE_2D, level, internalFormat, width, height, border, format, type, NULL);
Note the pixels parameter is NULL, because OpenGL doesn't load texel data when the target parameter is GL_PROXY_TEXTURE_2D. Instead, OpenGL merely considers whether it can accommodate a texture of the specified size and description. If the specified texture can't be accommodated, the width and height texture values will be set to zero. After making a texture proxy call, you'll want to query these values as follows:
GLint width;
glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
if (width == 0) {
    /* Can't use that texture */
}
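Putting the two queries together, a small helper along these lines (a sketch; the function name is mine) answers "can the implementation accept this texture at all?"; as noted above, it still says nothing about hardware acceleration:
/* Returns nonzero if a 2D texture of the given size and internal format
   should be accepted by the implementation. */
int textureFits(GLsizei width, GLsizei height, GLint internalFormat)
{
    GLint maxSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);
    if (width > maxSize || height > maxSize)
        return 0;

    /* Ask the proxy target whether this exact description is supported. */
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, internalFormat, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    GLint proxyWidth = 0;
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &proxyWidth);
    return proxyWidth != 0;
}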
I'm attempting to draw a 2D image to the screen in Direct3D, which I'm assuming must be done by mapping a texture onto a rectangular billboard polygon projected to fill the screen. (I'm not interested in, or cannot use, Direct2D.) All the texture information I've found in the SDK describes loading a bitmap from a file and assigning a texture to use that bitmap, but I haven't yet found a way to manipulate a texture as a bitmap, pixel by pixel.
What I'd really like is a function such as
void TextureBitmap::SetBitmapPixel(int x, int y, DWORD color);
If I can't set the pixels directly in the texture object, do I need to keep around a DWORD array that is the bitmap and then assign the texture to that every frame?
Finally, while I'm initially assuming that I'll be doing this on the CPU, the per-pixel color calculations could probably also be done on the GPU. Is there HLSL code that can set the color of a single pixel in a texture, or are pixel shaders only useful for modifying the display pixels?
Thanks.
First, your direct question:
You can, technically, set pixels in a texture. That requires the LockRect and UnlockRect API.
In the D3D context, 'locking' usually refers to transferring a resource from GPU memory to system memory (thereby disabling its participation in rendering operations). Once locked, you can modify the populated buffer as you wish, and then unlock, i.e. transfer the modified data back to the GPU.
Locking was generally considered a very expensive operation, but since PCIe 2.0 that is probably not a major concern anymore. You can also specify a small (even 1-pixel) RECT as the pRect argument of LockRect, thereby requiring the memory transfer of a negligible data volume, and hope the driver is indeed smart enough to transfer just that (I know for a fact that in older nVidia drivers this was not the case).
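As a rough sketch (Direct3D 9 style; it assumes an A8R8G8B8 texture created so that it is lockable, e.g. in D3DPOOL_MANAGED or with D3DUSAGE_DYNAMIC, and mirrors the SetBitmapPixel signature you proposed):
// Write one pixel into mip level 0 of a lockable IDirect3DTexture9.
void SetBitmapPixel(IDirect3DTexture9* tex, int x, int y, DWORD color)
{
    D3DLOCKED_RECT lr;
    RECT rc = { x, y, x + 1, y + 1 };               // lock only the 1x1 region we touch
    if (SUCCEEDED(tex->LockRect(0, &lr, &rc, 0)))
    {
        // lr.pBits points at the locked region; lr.Pitch is the row size in bytes.
        *(DWORD*)lr.pBits = color;                  // A8R8G8B8: one DWORD per pixel
        tex->UnlockRect(0);
    }
}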
The more efficient (and more code-intensive) way of achieving that is indeed to never leave the GPU. If you create your texture as a render target (that is, specify D3DUSAGE_RENDERTARGET as its usage argument), you can then set it as the destination of the pipeline before making any draw calls, and write a shader (perhaps taking parameters) to paint your pixels. Such usage of render targets is considered standard, and you should be able to find many code samples around; but unless you're already facing performance issues, I'd say that's overkill for a single 2D billboard.
HTH.