I am trying to create a mipmapped textured image that represents elevation. The image must be 940 x 618. I realize that my texture must have a width and height that are powers of 2. So far I have tried working through all my texturing in power-of-two squares (e.g. 64 x 64, 128 x 128, even 512 x 512), but the image still comes out blurry. Any idea how to better texture an image of this size?
Use a 1024x1024 texture and put your image in just a 940x618 part of it. Then use the values 940.0/1024.0 and 618.0/1024.0 for the maximum texture coordinates, or scale the texture matrix. This will give a 1:1 mapping for your pixels. You might also need to shift the model half a pixel to get a perfect fit; this depends on your model setup and view.
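For example, with the fixed-function pipeline the draw could look like this (a minimal sketch; the texture name and quad placement are placeholders):

// 940x618 image stored in the lower-left corner of a 1024x1024 texture
const float smax = 940.0f / 1024.0f;  // maximum s coordinate
const float tmax = 618.0f / 1024.0f;  // maximum t coordinate

glBindTexture(GL_TEXTURE_2D, elevationTex);  // hypothetical texture name
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f,   0.0f);
glTexCoord2f(smax, 0.0f); glVertex2f(940.0f, 0.0f);
glTexCoord2f(smax, tmax); glVertex2f(940.0f, 618.0f);
glTexCoord2f(0.0f, tmax); glVertex2f(0.0f,   618.0f);
glEnd();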
This is the technique I used in this screensaver I made for the Mac: http://memention.com/void/ It grabs the screen contents and uses it as a texture on some 3D effects, and I really wanted a pixel-perfect fit.
As far as I know, modern GPUs do not require power-of-two texture dimensions. Just know, however, that if this code is run on an older machine, you'll have some problems. How old is your machine?
The texture is probably not mapped 1:1 and you have GL_LINEAR or GL_NEAREST filtering. Try a higher-resolution texture, mipmapping, and a 1:1 screen mapping.
Use a 940x618 texture (if this is truly the size of the surface it's applied to) and set the texture's minification/magnification filters to GL_LINEAR. That should give you the results you're after.
I am using FFmpeg to read a high-resolution video (6480*1920) and OpenGL to display it.
After decoding, I get three pointers to the Y, U, and V planes.
At first I used sws_scale to convert the frames to RGB and displayed those, but that was too slow, so now I deal with the YUV directly. My second attempt was to generate three single-channel textures and convert them to RGB in the fragment shader. It is faster, but still cannot reach 60 fps.
I found the bottleneck is this function: texture(texy, tex_coord.xy). When the texture is large, it costs a lot of time. So instead of calling it three times, my idea is to put the Y, U, and V into one single texture, since a texture can have four channels. But how can I update only a certain channel of a texture?
I tried the following code, but it does not seem to work. Instead of updating one channel, glTexSubImage2D changes the whole texture:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, frame->width, frame->height, 0, GL_RED, GL_UNSIGNED_BYTE, Y);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frame->width, frame->height, GL_GREEN, GL_UNSIGNED_BYTE, U);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, frame->width, frame->height, GL_BLUE, GL_UNSIGNED_BYTE, V);
So how can I use one texture to pass all the YUV data? I also tried gathering the YUV data into one array and then generating the texture from it, but that does not help, since building that array itself takes a lot of time.
Any good ideas?
You're approaching this from the wrong angle, since you don't actually understand what is causing the poor performance in the first place. Yes, texture access is a rather expensive operation. But it is not that expensive; just think about the amount of texture data that gets pushed around in modern games at very high frame rates.
The problem is not the channel format of the texture, nor is it the GLSL texture call itself.
Your problem is this:
(…) high resolution video (6480*1920)
Plain and simple: the dimensions of the frame are outside the range of what the GPU is comfortable working with. Try breaking the picture down into a set of smaller textures. Using the glPixelStorei parameters GL_UNPACK_ROW_LENGTH, GL_UNPACK_SKIP_PIXELS and GL_UNPACK_SKIP_ROWS you can select the rectangle inside your source picture to copy.
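A sketch of uploading one tile, assuming the Y plane is split into 2048-pixel-wide columns (the tile texture name and xoff are made up):

// select a 2048x1920 rectangle starting at column xoff of the 6480-wide Y plane
glPixelStorei(GL_UNPACK_ROW_LENGTH, 6480);   // length of a full source row
glPixelStorei(GL_UNPACK_SKIP_PIXELS, xoff);  // skip to the tile's first column
glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);
glBindTexture(GL_TEXTURE_2D, yTile);         // hypothetical per-tile texture
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 2048, 1920, GL_RED, GL_UNSIGNED_BYTE, Y);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);      // restore the defaults afterwards
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);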
You don't have to make several draw calls, by the way; just select the texture inside the shader based on the target fragment position or texture coordinate.
Unfortunately OpenGL doesn't offer a convenient function to determine the sweet spot, but for most GPUs these days the maximum size in either direction for dense textures is 2048. Go above that and, in my experience, performance tanks for dense textures.
Sparse textures are an entirely different chapter, and irrelevant for this problem.
And just for the sake of completeness: I take it that you don't reinitialize the texture for each and every frame with a call to glTexImage2D. Do that only once at the start of the video, then just update the texture(s).
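That is, roughly (width, height and the data pointer are placeholders):

// once, at the start of the video: allocate storage, upload nothing yet
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);

// every frame: only re-upload the pixels into the existing storage
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, Y);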
In short: I need a quick way to resize the buffer image and then get the pixels back so I can save them to a file, etc.
Currently I first use glReadPixels() and then go through the pixels myself, resizing them with my own resize function.
Is there any way to speed up this resizing, for example by making OpenGL do the work for me? I think I could use glGetTexImage() with a mip level and mipmapping enabled, but as I noticed earlier, that function is bugged on my graphics card, so I can't use it.
I only need one mip level, which could be anything from 1 to 4, but not all of them, to conserve some GPU memory. So is it possible to generate only one mip level of the wanted size?
Note: I don't think I can use multisampling, because I need pixel-precise rendering for stencil tests; if I rendered with multisampling, it would make blurry pixels, they would fail the stencil test and masking, and the result would be incorrect (AFAIK). Edit: I only want to scale the color (RGBA) buffer!
If you have OpenGL 3.0 or alternatively EXT_framebuffer_blit available (very likely -- all nVidia cards since around 2005, all ATI cards since around 2008 have it, and even Intel HD graphics claims to support it), then you can glBlitFramebuffer[EXT] into a smaller framebuffer (with a respectively smaller rectangle) and have the graphics card do the work.
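A sketch of the downscaling blit, assuming srcFBO holds the full-size image and dstFBO has a smaller color attachment:

glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, dstFBO);
glBlitFramebuffer(0, 0, srcWidth, srcHeight,       // source rectangle
                  0, 0, dstWidth, dstHeight,       // smaller destination rectangle
                  GL_COLOR_BUFFER_BIT, GL_LINEAR); // filtered downscale of the color buffer
glBindFramebuffer(GL_READ_FRAMEBUFFER, dstFBO);
glReadPixels(0, 0, dstWidth, dstHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels);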
Note that you cannot ever safely rescale inside the same framebuffer, even if you were to say "I don't need the original", because overlapped blits are undefined (allowed, but undefined).
Or, you can of course just draw a fullscreen quad with a simple downscaling pixel shader (aniso decimation, if you want).
In fact, since you mention stencil in your last paragraph... if it is stencil (or depth) that you want to rescale, then you most definitely want to draw a fullscreen quad with a shader, because a blit will very likely not give the desired result otherwise. Usually one chooses a max filter rather than interpolation in such a case (e.g. what reasonable, meaningful result could interpolating a stencil value of 0 and a value of 10 give? Something else is needed, such as "any nonzero" or "max value in sample area").
Create a framebuffer of the desired target size and draw your source image with a textured quad that fills the resized buffer. Then read the resized framebuffer contents using glReadPixels.
Pseudocode:
glBindTexture(GL_TEXTURE_2D, 0);                     // unbind the original FBO's color texture
glBindFramebuffer(GL_FRAMEBUFFER, originalSizeFBO);
render_picture();                                    // draw the scene at full resolution
glBindFramebuffer(GL_FRAMEBUFFER, targetSizeFBO);    // targetSizeFBO uses a renderbuffer color attachment
glBindTexture(GL_TEXTURE_2D, originalSizeColorTex);  // sample the full-size result
glViewport(0, 0, targetWidth, targetHeight);
render_full_viewport_quad_with_texture();            // the textured quad performs the downscale
glReadPixels(0, 0, targetWidth, targetHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
I need to display an image in an OpenGL window.
The image changes on every timer tick.
I've checked on Google how to do this, and as far as I can see it can be done using either the glBitmap or the glTexImage2D function.
What is the difference between them?
The difference? These two functions have nothing in common.
glBitmap is a function for drawing binary images. That's not a .BMP file or an image you load (usually). The function's name doesn't refer to the colloquial term "bitmap"; it refers to exactly that: a map of bits. Each bit in the bitmap represents a pixel. If the bit is 1, then the current raster color is written to the framebuffer. If the bit is 0, then the pixel in the framebuffer is not altered.
glTexImage2D is for allocating textures and optionally uploading pixel data to them. You can later draw triangles that have that texture mapped to them. But glTexImage2D by itself does not draw anything.
What you are probably looking for is glDrawPixels, which draws an image directly into the framebuffer. If you use glTexImage2D, you have to first update the texture with the new image, then draw a shape with that texture (say, a fullscreen quad) to actually render the image.
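The glDrawPixels route is short; something like this (the raster position and pixel format are assumptions):

glRasterPos2i(0, 0);  // window position of the image's lower-left corner
glDrawPixels(imgWidth, imgHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels);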
That said, you'll be better off with glTexImage2D if...
You're using a library like JOGL that makes binding textures from images an easy operation, or
You want to scale the image or display it in perspective
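In that case, the per-tick update along the glTexImage2D route sketched above could look like this (legacy immediate mode; names are placeholders):

glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, imgWidth, imgHeight,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);  // upload this tick's image
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);  // fullscreen textured quad
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();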
In each frame (as in frames per second) I render, I make a smaller version of it with just the objects that the user can select (and any selection-obstructing objects). In that buffer I render each object in a different color.
When the user has mouseX and mouseY, I then look up in that buffer which color corresponds to that position, and find the corresponding object.
I can't work with an FBO, so I just render this buffer to a texture, rescale the texture orthogonally to the screen, and use glReadPixels to read a "hot area" around the mouse cursor. I know, not the most efficient, but performance is OK for now.
Now I have the problem that this buffer of "colored objects" has some accuracy problems. Of course I disable all lighting and fragment shaders, but somehow I still get artifacts. Obviously I really need clean sheets of color without any variance.
Note that here I put all the color information into an unsigned byte in GL_RED (assuming for now that I have at most 255 selectable objects).
Are these artifacts caused by rescaling the texture? (I could replace this by looking up scaled coordinates in the small texture.) Or do I need to disable some other flag to really get the colors that I want?
Can this technique even be used reliably?
It looks like you're using GL_LINEAR for your GL_TEXTURE_MAG_FILTER. Use GL_NEAREST instead if you don't want interpolated colors.
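That is (the texture name is a placeholder):

glBindTexture(GL_TEXTURE_2D, pickTex);  // hypothetical picking texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);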
I could replace this by looking up scaled coordinates in the small texture.
You should. Rescaling is more expensive than converting the coordinates for sure.
That said, scaling a uniform texture should not introduce artifacts if you keep an integer ratio (like a 2x upscale) with no fancy filtering. It looks blurry on the polygon edges, so I'm assuming that's not what you use.
Also, the rescaling should introduce variations only at the polygon boundaries. Did you check that there are no variations in the un-scaled texture? That would confirm whether it's the scaling that introduces your "artifacts".
What exactly do you mean by "variance"? Please explain in more detail.
Now a suggestion: in case your rendering doesn't depend on stencil buffer operations, you could put the object ID into the stencil buffer in the render pass to the window itself, instead of taking the detour through a separate texture. On current hardware you usually get 8 bits of stencil. Of course the best solution, if you want to use an index buffer approach, is to use multiple render targets and render the object ID into an index buffer together with the color and the other stuff in one pass. See http://www.opengl.org/registry/specs/ARB/draw_buffers.txt
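A sketch of the stencil variant (object IDs 1..255 and the helpers are assumptions):

glEnable(GL_STENCIL_TEST);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);          // write the ref value where an object draws
for (int i = 0; i < objectCount; ++i) {
    glStencilFunc(GL_ALWAYS, objects[i].id, 0xFF);  // IDs 1..255, 0 stays background
    draw_object(&objects[i]);                       // hypothetical draw helper
}

// at pick time, read back the ID under the cursor (GL's origin is bottom-left)
GLubyte id;
glReadPixels(mouseX, windowHeight - mouseY - 1, 1, 1,
             GL_STENCIL_INDEX, GL_UNSIGNED_BYTE, &id);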
This is for a 2D game with OpenGL:
Is it possible with OpenGL to display a texture absolutely unfiltered, not stretched or blurred?
So that when I have a BMP, convert it into an OpenGL texture, and then retrieve that texture and convert it back, I have no modifications or quality/data loss?
Sure, just disable filtering. That's done by setting GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to GL_NEAREST. Also make sure that you draw the texture at an appropriate size, so that texels are the same size as pixels.
As Matias said previously, one thing is to set GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to GL_NEAREST (via glTexParameter*).
But for pixel-perfect rendering there's another important thing: you don't want your texture to be rescaled to a power of two. The easiest way is to specify the texture via the binding target GL_TEXTURE_RECTANGLE instead of GL_TEXTURE_2D. With such a bound texture, the texture coordinates are not in the usual range (0..1, 0..1), but (0..w, 0..h) instead. You can have per-texel indexing easily this way.
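For example (a sketch; the image variables are placeholders):

glBindTexture(GL_TEXTURE_RECTANGLE, tex);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_RECTANGLE, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_RECTANGLE, 0, GL_RGBA8, imgWidth, imgHeight,
             0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// texture coordinates now run 0..imgWidth and 0..imgHeight, one unit per texel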