I have a pixel array like this: int[256 * 256]. The first 256 is for the x axis and the second is for the y axis. My main window size is 1024x1024, which is 4 times larger than my pixel array, so my scale value is 4. I want to render this 256x256 array in the 1024x1024 window, but each array pixel should appear as a 4x4 block of window pixels.
I have currently just created the window, created my pixel array, and done some operations on that array. How can I render my array scaled?
Load the image into a texture and render a screen-sized textured quad.
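For example (just a sketch, since the question does not name a windowing library; SFML 2 is assumed here), the array can be converted to RGBA bytes, uploaded to a texture, and drawn through a sprite scaled by 4:

#include <SFML/Graphics.hpp>
#include <cstdint>
#include <vector>

int main()
{
    const unsigned size = 256;
    // Convert your int[256 * 256] to RGBA bytes here; filled with a placeholder gradient.
    std::vector<std::uint8_t> rgba(size * size * 4);
    for (unsigned y = 0; y < size; ++y)
        for (unsigned x = 0; x < size; ++x)
        {
            std::uint8_t* p = &rgba[(y * size + x) * 4];
            p[0] = static_cast<std::uint8_t>(x);
            p[1] = static_cast<std::uint8_t>(y);
            p[2] = 0;
            p[3] = 255;
        }

    sf::Texture tex;
    tex.create(size, size);        // smoothing is off by default, so pixels stay sharp
    tex.update(rgba.data());       // upload the pixel array

    sf::Sprite sprite(tex);
    sprite.setScale(4.f, 4.f);     // 256x256 -> 1024x1024, each array pixel becomes 4x4

    sf::RenderWindow window(sf::VideoMode(1024, 1024), "Scaled pixel array");
    while (window.isOpen())
    {
        sf::Event e;
        while (window.pollEvent(e))
            if (e.type == sf::Event::Closed)
                window.close();
        window.clear();
        window.draw(sprite);
        window.display();
    }
}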
Suppose we have a texture of size 2560*240 that we want to render in a screen area of 320*240 pixels, so each screen pixel covers 2560/320 = 8 texture samples. I want the OpenGL shader to choose the maximum color value among these 8 texture samples. How can I achieve this?
The next step is to downsample a 2560*240 texture to a 640*480 screen in such a way that each pair of consecutive screen pixels shows the minimum and maximum of the 8 texture samples that fall under those two pixels. That way the user can always spot the minimum and maximum color values when texture minification happens.
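One way to do the first step, sketched as a GLSL 3.30 fragment shader (the sampler name src, and the assumption that the 320*240 target is the current viewport, are mine): texelFetch can read the 8 covered texels directly and keep the maximum.

#version 330 core
// Sketch: one output pixel covers 8 horizontally adjacent source texels (2560/320 = 8).
uniform sampler2D src;      // the 2560x240 texture
out vec4 fragColor;

void main()
{
    ivec2 p = ivec2(gl_FragCoord.xy);       // destination pixel in the 320x240 viewport
    vec4 m = vec4(0.0);
    for (int i = 0; i < 8; ++i)             // scan the 8 covered texels
        m = max(m, texelFetch(src, ivec2(p.x * 8 + i, p.y), 0));
    fragColor = m;                          // keep the maximum color value
}

For the 640*480 step, each pair of consecutive destination columns covers 8 source texels between them; the even column could output their minimum and the odd column their maximum, using the same texelFetch loop.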
I have a big binary file. It contains 32x32-pixel tiles, and each pixel is a 32-bit RGB color. Because of the binary file's structure, it cannot be rendered to a texture image directly.
The last time I tried to load a generated texture with SFML, with dimensions of 40416 x 512 pixels, it threw an exception saying that such textures are not supported.
How can I render the tiles on screen without texture and UV coordinate manipulation?
I ask because every tutorial on tilemaps I have seen only manipulates UV texture coordinates; I need some other way to render a map with tiles from the file.
The binary file has the following sections:
Array of megatile groups.
Array of megatiles.
Array of minitiles.
Color palette.
Each megatile group is an array of 16 megatile indices.
Each megatile is an 8x8 array of minitile indices.
Each minitile is a 4x4 array of color indices into the palette.
The palette is an array of 256 32-bit RGB colors.
For example:
The first megatile group just contains sixteen zeros:
[0, 0, ..., 0] (0 x 16)
The megatile at index 0 is an 8x8 array of minitile indices. All its elements are 0.
[0, 0, ..., 0] (0 x 64)
The minitile at index 0 is a 4x4 array. Each element represents a color from the palette. All its elements are 0.
[0, 0, ..., 0] (0 x 16)
The color at index 0 in the palette is just black in 32-bit RGB.
A tilemap cell is defined by a megatile group index and a megatile index within that group.
So at any point, for any tile from the tilemap (defined by a pair of megatile group and megatile within that group), I can get a 32x32 array of pixels.
How can I render the tilemap?
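Not taken from the file format itself, but a sketch of the decode step this structure implies (type names and index widths are assumptions): expand one megatile into a 32x32 block of 32-bit pixels, which you can then copy into a texture atlas, an sf::Image, or a framebuffer and draw like any other tile.

#include <array>
#include <cstdint>
#include <vector>

using Minitile = std::array<std::uint8_t, 16>;    // 4x4 palette indices
using Megatile = std::array<std::uint16_t, 64>;   // 8x8 minitile indices
using Palette  = std::array<std::uint32_t, 256>;  // 256 colors, 32 bits each

// Expand one megatile into 32x32 pixels (row-major, one 32-bit color per pixel).
std::array<std::uint32_t, 32 * 32> decodeMegatile(const Megatile& mega,
                                                  const std::vector<Minitile>& minitiles,
                                                  const Palette& palette)
{
    std::array<std::uint32_t, 32 * 32> pixels{};
    for (int my = 0; my < 8; ++my)          // which minitile inside the megatile
        for (int mx = 0; mx < 8; ++mx)
        {
            const Minitile& mini = minitiles[mega[my * 8 + mx]];
            for (int py = 0; py < 4; ++py)  // which pixel inside the minitile
                for (int px = 0; px < 4; ++px)
                    pixels[(my * 4 + py) * 32 + (mx * 4 + px)] = palette[mini[py * 4 + px]];
        }
    return pixels;
}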
How do I draw a many-gigabyte image with OpenGL when there is not enough GPU memory?
First idea: By chunks.
If you are going to draw a fixed image, with no camera change and no zoom change, then the method may be: fill a texture with one chunk, draw it, and repeat with the next chunk. The GPU will discard out-of-field parts, or overlap different parts of the image in the same pixel. For a non-fixed view this is impractical and horribly slow.
But wait, do you really need all of those GB?
A 4K monitor (3840x2160 = 8.3 Mpixels) needs 8.3 x 4 = 33.2 MB of RGBA data.
The question is how to select 33.2 MB among so many GB of raw data.
Let's say you have an array of tiles (each tile is a chunk of the big image).
The first improvement is not sending to the GPU the tiles that fall outside the field of view. This can be tested on the CPU side by applying the typical MVP matrix to the four corners of each tile.
The second improvement applies when a tile is far from the camera but still inside the perspective/orthographic-projection frustum: it will be seen as a single pixel. Why send the whole tile to the GPU when a single point for that pixel is enough?
Both improvements are better achieved with a quadtree than with a plain array.
It stores pointers or identifiers to the tiles, but also, for intermediate nodes, a representative point with the average color of its sub-nodes, or a representative tile that "compresses" several tiles.
Traverse the quadtree. Discard nodes (and thus their branches) outside the frustum. Render representative points/tiles instead of textures when a tile is too far away. Render the whole tile when some edge is more than 1 pixel in size.
You won't send just 33.2 MB, but something below 100 MB, which is fairly easy to deal with.
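A minimal sketch of that traversal (the node layout and the two callbacks are illustrative, not a real library API):

#include <array>
#include <functional>
#include <memory>
#include <vector>

struct TileNode
{
    std::array<float, 4> bounds;                     // world-space box: minX, minY, maxX, maxY
    std::array<float, 3> avgColor;                   // representative color of this subtree
    int tileId = -1;                                 // leaf: index of an image chunk, else -1
    std::array<std::unique_ptr<TileNode>, 4> child;  // the four quadrants (null for leaves)
};

void collectVisible(const TileNode& n,
                    const std::function<bool(const std::array<float, 4>&)>& inFrustum,
                    const std::function<float(const std::array<float, 4>&)>& projectedSize,
                    std::vector<int>& tilesToUpload,
                    std::vector<const TileNode*>& pointsToDraw)
{
    if (!inFrustum(n.bounds))                // whole branch outside the frustum: discard it
        return;
    if (projectedSize(n.bounds) <= 1.0f)     // subtree projects to about one pixel
    {
        pointsToDraw.push_back(&n);          // draw its representative point/color instead
        return;
    }
    if (n.tileId >= 0)
        tilesToUpload.push_back(n.tileId);   // visible and large enough: send the real tile
    for (const auto& c : n.child)
        if (c)
            collectVisible(*c, inFrustum, projectedSize, tilesToUpload, pointsToDraw);
}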
I have a RectangleShape that can change size in a program of mine (I won't copy the program here as it is too large), and I have assigned a 64x64-pixel texture to it. The shape itself is much larger than the texture, but the texture just gets stretched over the whole shape. Is there a way to change this so that the texture remains 64x64 but tiles across the RectangleShape?
I figured out how to do it. I just had to use the line
tex.setRepeated(true);
and the line
rect.setTextureRect(sf::IntRect(0, 0, xRect, yRect));
with xRect and yRect being the dimensions of the rectangle object, tex being the texture, and rect being the rectangle object that the texture has been assigned to.
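Put together, this might look like the following sketch (the helper name and explicit size handling are mine, not from the original program):

#include <SFML/Graphics.hpp>

// Make a rectangle show its 64x64 texture as repeating tiles instead of one stretched copy.
void applyTiledTexture(sf::RectangleShape& rect, sf::Texture& tex, int xRect, int yRect)
{
    tex.setRepeated(true);                                  // wrap instead of stretch
    rect.setSize(sf::Vector2f(static_cast<float>(xRect), static_cast<float>(yRect)));
    rect.setTexture(&tex);
    rect.setTextureRect(sf::IntRect(0, 0, xRect, yRect));   // texture rect as big as the shape
}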
What is the difference between glTexImage2D() and glTexImage1D()? Actually, I can't imagine 1D texturing. How can something have a 1D texture?
A texture is not a picture you draw onto triangles. A texture is a look-up table of values, which your shaders can access and get data from. You can use textures as "pictures you draw onto triangles", but you should not limit your thinking to just that.
A 1D texture is a texture with only one dimension: width. It's a line. It is a function of one dimension: f(x). You provide one texture coordinate, and you get a value.
A 2D texture is a texture with two dimensions: width and height. It is a rectangle. It is a function of two dimensions: f(x, y). You provide two texture coordinates, and you get a value.
A 1D texture can be used for a discrete approximation of any one-dimensional function. You could precompute some Fresnel specular factors and access a 1D texture to get them, rather than computing them in the shader. A 1D texture could represent the Gaussian specular term, as I do in the first chapter on texturing in my book.
A 1D texture can be any one-dimensional function.
A 2D texture has both height and width, whereas a 1D texture has a height of just 1 pixel, which basically means the texture is a line of pixels. They are frequently used when we want to map some numeric value to a colour, or to map one colour to a different colour (as in cel-shading techniques).
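As a concrete example (a sketch assuming a desktop OpenGL context where glTexImage1D is available; the grey ramp is arbitrary), here is a 1D texture used as a 256-entry lookup table:

#include <GL/gl.h>     // assumption: a GL context and any needed loader are already set up
#include <cstdint>

// Build a 256-entry grey ramp and upload it as a 1D lookup texture.
GLuint makeRampTexture()
{
    std::uint8_t ramp[256 * 3];
    for (int i = 0; i < 256; ++i)    // any one-dimensional f(x) works the same way
        ramp[i * 3 + 0] = ramp[i * 3 + 1] = ramp[i * 3 + 2] = static_cast<std::uint8_t>(i);

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_1D, tex);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, ramp);
    return tex;
}

In a shader the lookup then takes a single coordinate, e.g. vec3 c = texture(ramp, brightness).rgb; with ramp declared as a sampler1D.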
So I'm trying to draw part of one texture over another in GLSL, as the first step in a grander scheme.
I have an image, 2048x2048, with 3 textures in the top left, each 512x512. For testing purposes I'm just trying to repeatedly draw the first one.
//get coord of smaller texture
coord = vec2(int(gl_TexCoord[0].s)%512,int(gl_TexCoord[0].t)%512);
//grab color from it and return it
fragment = texture2D(textures, coord);
gl_FragColor = fragment;
It seems that it only grabs the same pixel; I get a single color from the texture returned to me. Everything ends up grey. Does anyone know what's off?
Unless that's a rectangle texture (which it isn't, since you're using texture2D), your texture coordinates are normalized. That means the range [0, 1] maps to the entire range of the texture: 0.5 always means halfway, whether for a 256-sized texture or an 8192-sized one.
Therefore, you need to stop passing non-normalized texture coordinates (texel values). Pass normalized texture coordinates and adjust those.
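For the case above that might look like the following sketch, assuming gl_TexCoord[0] now arrives normalized over the quad being drawn (the repeat count is purely illustrative): the first 512x512 sub-texture occupies a 0.25 x 0.25 corner of the 2048x2048 atlas, so wrap into that range instead of using % 512.

uniform sampler2D textures;
const float repeats = 4.0;     // illustrative: how many times to tile the sub-texture

void main()
{
    // fract() repeats the coordinate in [0, 1), and * (512 / 2048) squeezes it
    // into the quarter of the atlas that holds the first sub-texture.
    vec2 uv = fract(gl_TexCoord[0].st * repeats) * (512.0 / 2048.0);
    gl_FragColor = texture2D(textures, uv);
}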