How to apply a custom minification algorithm to an OpenGL texture?

Suppose we have a texture of size 2560*240 that we want to render in a screen area of 320*240 pixels, so each screen pixel covers 2560/320 = 8 texture samples. I want the OpenGL shader to choose the maximum color value among these 8 texture samples. How can I achieve this?
The next step is to downsample a texture of size 2560*240 to a 640*480 screen, in such a way that each pair of consecutive screen pixels shows the minimum and the maximum of the 8 texture samples that fall under those two pixels. That way the user can always spot the minimum and maximum color values when texture minification happens.
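One way to do this is to perform the minification manually in the fragment shader: sample all 8 texels that fall under the screen pixel and keep the componentwise maximum. The sketch below is illustrative rather than taken from the question; the sampler name tex, the input texCoord and the hard-coded 2560-texel width are assumptions.

#version 330 core
// Assumed names: tex is the 2560x240 source texture, texCoord the
// interpolated texture coordinate at the centre of the screen pixel.
uniform sampler2D tex;
in vec2 texCoord;
out vec4 fragColor;

void main()
{
    float texelStep = 1.0 / 2560.0;                      // one source texel, horizontally
    vec2 base = texCoord - vec2(3.5 * texelStep, 0.0);   // left end of the 8-texel run
    vec4 maxColor = vec4(0.0);
    for (int i = 0; i < 8; ++i)
        maxColor = max(maxColor, texture(tex, base + vec2(float(i) * texelStep, 0.0)));
    fragColor = maxColor;
}

For the 640*480 case, the same loop can run over the 8 texels covered by each pair of screen pixels, keep a running min() as well as max(), and output one or the other depending on the parity of int(gl_FragCoord.x), so consecutive pixels alternate between the minimum and the maximum of the covered samples.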

Related

render two images to the screen separately

I want to render two textures on the screen at the same time at different positions, but I'm confused about the vertex coordinates.
How could I write a vertex shader to meet my goal?
Just to address the "two images to the screen separately" bit...
A texture maps image colours onto geometry. To be pedantic, you can't draw a texture but you can blit and you can draw geometry with a mapped texture (using per-vertex texture coordinates).
You can bind two textures at once while drawing, but you'll need a second set of texture coordinates and a way to handle how they blend (or don't, in your case). Even then the shader will be quite specific, and because the images are separate, unnecessary work runs for every pixel to handle the other image. What happens when you want to draw 3 images, or 100?
Instead, just draw a quad with one image twice (binding each texture in turn before drawing). The overhead will be tiny unless you're drawing lots, at which point you might look at texture atlases and drawing all the geometry with one draw call (really getting towards the "at the same time" part of the question).
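A minimal vertex shader for the draw-twice approach could look like the following; the uniform names uOffset and uScale are made up for illustration, and you would update them (and bind the other texture) between the two draw calls.

#version 330 core
// Assumed layout: a unit quad with positions in [0,1] and matching
// texture coordinates; uScale/uOffset place it somewhere in NDC.
layout(location = 0) in vec2 aPosition;
layout(location = 1) in vec2 aTexCoord;
uniform vec2 uScale;    // size of the quad in normalized device coordinates
uniform vec2 uOffset;   // position of the quad's lower-left corner in NDC
out vec2 vTexCoord;

void main()
{
    vTexCoord = aTexCoord;
    gl_Position = vec4(aPosition * uScale + uOffset, 0.0, 1.0);
}

Bind the first texture, set uOffset/uScale for the first position, draw the quad; then bind the second texture, change the uniforms, and draw it again.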

Why do fragments not necessarily correspond to pixels one to one?

Here is a good explanation of what a fragment is:
https://gamedev.stackexchange.com/questions/8977/what-is-a-fragment/8981#8981
But here (and not only here) I have read: "I want to stress the fact that one pixel is not necessarily one fragment, multiple fragments can be combined to make one pixel...". But I don't clearly understand what fragments are and why they do not necessarily correspond to pixels one to one.
EDIT: When multiple fragments form one pixel, is it only because they overlap after projection, or is it because the pixel is bigger than the fragment, so that several adjacent fragments with the same color must be put together to form a pixel?
A fragment has a location that can be queried via the built-in gl_FragCoord variable, whose x and y components directly correspond to pixels on your screen. So you could say that a fragment indeed corresponds to a pixel.
However, a fragment outputs a color and stores that color in a color buffer at its coordinates. This does not mean this color is the actual pixel color that is shown to the viewer.
Because the fragment shader is run for each object, other objects drawn after your first object may also output a fragment at the same screen coordinate. Taking depth testing, stencil testing and blending into account, the color value in the color buffer might get overwritten and/or merged with new colors.
Think of it like this:
Object 1 gets drawn and draws the color purple at screen coordinate (200, 300);
Object 2 gets drawn and draws the color red at the same coordinate, overwriting it.
Object 3 (blue) has 50% transparency at the same coordinate, and its color is merged with the existing one.
The final color at that coordinate is then a 50/50 combination of red and blue.
The final resulting pixel could then be a color from a single fragment shader run, a color that is overwritten by many other fragment shader runs, or a combination of colors via blending.
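Written out, the merge in the example above is just the standard alpha-blend equation that the fixed-function blend stage evaluates with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):

result = src.rgb * src.a + dst.rgb * (1.0 - src.a)
       = (0.0, 0.0, 1.0) * 0.5 + (1.0, 0.0, 0.0) * 0.5
       = (0.5, 0.0, 0.5)    // half red, half blue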
A fragment is not equal to a pixel when multi sample anti-aliasing (MSAA) or any of the other modes that change the ratio of rendered pixels to screen pixels is activated.
In the case of 4x MSAA, each screen pixel will be represented by 4 (2x2) fragments in the display buffer. The fragment shader for a particular polygon will only be run once for the screen pixel no matter how many of the fragments are covered by the polygon. Since a polygon may not cover all the fragments within a pixel it will only store color into the fragments it covers. This is repeated for every polygon that may cover one or more of the fragments. Then at the final display all 4 fragments are blended to produce the final screen pixel.

How to make sure there is a 1:1 pixel/texel ratio for a texture

I want to use Direct3D to display a texture on screen. Not only do I want it to cover the whole window, I want it to be the actual size of the texture (i.e. every pixel on the screen mapped to one texel from the texture).
Just use a texture with the same screen width and height and a fullscreen quad.
You can also scale the texture, but that is a poor fix: you get a 1:1 pixel/texel ratio, but the quality goes down.
Maybe this will help: Rendering full-screen quads
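In OpenGL/GLSL terms, a common way to get such a fullscreen quad without any vertex buffers is to derive the corners from gl_VertexID and draw a single large triangle; this is only a sketch of that technique, not something taken from the linked page.

#version 330 core
// Fullscreen triangle: call glDrawArrays(GL_TRIANGLES, 0, 3) with no attributes.
out vec2 vTexCoord;

void main()
{
    vec2 pos = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2);  // (0,0), (2,0), (0,2)
    vTexCoord = pos;                                           // spans [0,1] across the screen
    gl_Position = vec4(pos * 2.0 - 1.0, 0.0, 1.0);             // covers the whole viewport
}

With the texture the same size as the window and GL_NEAREST filtering, each screen pixel then samples exactly one texel.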

Terrain rendering from bitmap in OpenGL

I have an assignment to render a terrain from a greyscale 8-bit BMP and to get the terrain colors from a 24-bit texture BMP. I managed to get a proper landscape with heights and so on, and I also get the colors from the texture bitmap. The problem is that the fully colored rendered terrain is very "blocky": it shows the right colors and heights, but it is very blocky. I use glShadeModel(GL_SMOOTH), but it still looks blocky, almost as if I can see the pixels from the bitmap. Any hints are appreciated.
Do you use the bitmap as a texture, or do you set vertex colours from the bitmap? I suggest you use a texture, using the planar vertex position as the texture coordinate.
One thing to take into consideration when rendering is whether you are using GL_TRIANGLES or GL_TRIANGLE_STRIP; this makes a difference to performance. Second, if you are using lighting you have to define normals, either per triangle or per vertex of each triangle, which becomes tricky because almost every triangle lies in a different plane; not having proper normals will make the terrain look blocky. The third thing that makes a difference is how big or small the triangles are: the smaller the triangles, or the more divisions in your [x,z] plane, the higher your resolution and visual quality, but the lower your frame rate. You have to find a good balance between the two.
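If you are using shaders, one way to combine the two suggestions above is to take the texture coordinate from the planar vertex position and estimate a smooth normal from the heightmap with central differences in the vertex shader. This is only a sketch: the uniform names (heightMap, heightTexel, heightScale, terrainSize, uMVP) are assumptions, not something from the original answers.

#version 330 core
// Assumed inputs: aPosition is the terrain vertex, heightMap the 8-bit
// greyscale heightmap as a texture, heightTexel = 1.0 / heightmap resolution.
uniform sampler2D heightMap;
uniform vec2 heightTexel;
uniform float heightScale;   // vertical scale of the terrain
uniform float terrainSize;   // extent of the terrain in the x/z plane
uniform mat4 uMVP;
in vec3 aPosition;
out vec2 vTexCoord;
out vec3 vNormal;

void main()
{
    vTexCoord = aPosition.xz / terrainSize;   // planar vertex position as texture coordinate

    // Central differences over neighbouring height samples give a smooth normal.
    float hL = texture(heightMap, vTexCoord - vec2(heightTexel.x, 0.0)).r;
    float hR = texture(heightMap, vTexCoord + vec2(heightTexel.x, 0.0)).r;
    float hD = texture(heightMap, vTexCoord - vec2(0.0, heightTexel.y)).r;
    float hU = texture(heightMap, vTexCoord + vec2(0.0, heightTexel.y)).r;
    vNormal = normalize(vec3((hL - hR) * heightScale, 2.0 * heightTexel.x, (hD - hU) * heightScale));

    gl_Position = uMVP * vec4(aPosition, 1.0);
}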

Manual GL_REPEAT in GLSL

Currently I have a 2048 x 2048 pixel texture atlas with three 512 x 512 textures stored in it, and I am only applying one of them to the object. So I use the following code to map the texture coordinates (from 0 to 1) to the correct region of the atlas for that texture:
color = texture2D(tex_0, vec2(0.0, 1024.0/2048.0) + mod(texture_coordinate*vec2(40.0), vec2(1.0))*vec2(512.0/2048.0));
The problem is that when I apply this, there is a black border around the texture. I presume this is because OpenGL cannot correctly blend the texels on either side of that border.
So how do I get rid of the border?
Edit*
I have already tried to move the starting and ending boundaries in toward the center of the texture and that didn't work.
Edit*
I found the source of the problem: the automatic mipmap generation is blending the textures in the texture atlas together. As far as I can tell, this means I have to write my own mipmapping function.
If anyone has any better ideas, please do post.
Instead of using a normal 2D texture as the texture atlas with a grid of textures, I used GL_TEXTURE_2D_ARRAY to create an array texture, which mipmapped and repeated correctly per layer. That way the textures did not blend together at higher mipmap levels.
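With an array texture, the per-layer lookup in GLSL is then straightforward. The sketch below assumes the three 512 x 512 images were uploaded as layers 0-2 of a GL_TEXTURE_2D_ARRAY bound to tex_array, so GL_REPEAT and mipmapping are applied per layer and nothing bleeds between the sub-textures.

#version 330 core
// Assumed: tex_array is a GL_TEXTURE_2D_ARRAY with one 512x512 image per layer;
// 'layer' selects which of the three sub-textures to use.
uniform sampler2DArray tex_array;
uniform float layer;
in vec2 texture_coordinate;
out vec4 color;

void main()
{
    // Repeat the texture 40 times; wrapping and mipmapping stay inside the layer.
    color = texture(tex_array, vec3(texture_coordinate * 40.0, layer));
}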