For example, I have a 16x9 picture which has 14 white pixels. Using imageAtomicAdd I can count the total number of white pixels.
But how can I enumerate these pixels in order of appearance (from 1 to 14)?
Suppose we have a texture of size 2560*240 that we want to render in a screen area of 320*240 pixels, so each screen pixel covers 2560/320 = 8 texture samples. I want the OpenGL shader to choose the maximum color value among these 8 texture samples. How can I achieve this?
The next step is to downsample a texture of size 2560*240 to a 640*480 screen, in such a way that each pair of consecutive screen pixels shows the minimum and the maximum of the 8 texture samples that fall within that pair. That way the user can always spot the minimum and maximum color values when texture minification happens.
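To make the requirement concrete, here is a minimal CPU-side sketch of the reduction I have in mind (the buffer layout and function name are made up, and samples are treated as single-channel intensities for simplicity; a fragment shader would run the same loop over texelFetch results):

#include <algorithm>
#include <cstdint>
#include <vector>

// For every output pixel, take the maximum of the 8 consecutive source texels
// it covers (2560 / 320 = 8).
std::vector<uint8_t> downsampleMax(const std::vector<uint8_t>& src,
                                   int srcWidth, int dstWidth, int height)
{
    const int ratio = srcWidth / dstWidth;            // 8 in the question
    std::vector<uint8_t> dst(static_cast<size_t>(dstWidth) * height);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < dstWidth; ++x) {
            uint8_t best = 0;
            for (int i = 0; i < ratio; ++i)           // scan the covered texels
                best = std::max(best, src[y * srcWidth + x * ratio + i]);
            dst[y * dstWidth + x] = best;
            // For the 640-wide min/max variant, even output pixels would store
            // the minimum and odd ones the maximum of the same 8 texels.
        }
    return dst;
}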
I have a big binary file. It contains 32x32-pixel tiles. Each pixel is a 32-bit RGB color. Because of the binary file's structure, it cannot be loaded directly as a texture image.
Last time, trying to load a generated texture of 40416x512 pixels with SFML produced an exception saying that such textures are not supported.
How can I render the tiles on screen without texture and UV coordinate manipulation?
I ask because every tilemap tutorial I've seen only manipulates UV texture coordinates; I need some other way to render a map with tiles from the file.
The binary file has the following sections:
Array of megatile groups.
Array of megatiles.
Array of minitiles.
Color palette.
Each megatile group is an array of 16 megatile indices.
Each megatile is an 8x8 array of minitile indices.
Each minitile is a 4x4 array of color indices into the palette.
The palette is an array of 256 32-bit RGB colors.
For example:
The first megatile group just contains sixteen 0s:
[0, 0, ..., 0] (0 x 16)
Megatile 0 is an 8x8 array of minitile indices. All its elements are 0:
[0, 0, ..., 0] (0 x 64)
Minitile 0 is a 4x4 array. Each element represents a color from the palette. All its elements are 0:
[0, 0, ..., 0] (0 x 16)
Color 0 in the palette is just black in 32-bit RGB.
A tilemap cell is defined by a megatile group index and a megatile index within that group.
So at any point, for some tile from the tilemap (defined by the pair of megatile group and megatile within that group), I can get a 32x32 array of pixels.
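For illustration, this is roughly how I can already assemble the 32x32 pixel block for one tile on the CPU (the struct and function names are made up; the indexing follows the layout described above):

#include <array>
#include <cstdint>
#include <vector>

// Layout as described: group -> 16 megatiles, megatile -> 8x8 minitiles,
// minitile -> 4x4 palette indices, palette -> 256 32-bit RGB colors.
struct Minitile      { std::array<uint8_t, 16> colorIndex; };     // 4x4
struct Megatile      { std::array<uint16_t, 64> minitileIndex; }; // 8x8
struct MegatileGroup { std::array<uint16_t, 16> megatileIndex; };

struct TilesetData {
    std::vector<MegatileGroup> groups;
    std::vector<Megatile>      megatiles;
    std::vector<Minitile>      minitiles;
    std::array<uint32_t, 256>  palette;   // 32-bit RGB
};

// Build the 32x32 pixel block for one tilemap cell (group, megatile-in-group).
std::array<uint32_t, 32 * 32> buildTilePixels(const TilesetData& ts,
                                              uint16_t groupIdx,
                                              uint8_t  megatileInGroup)
{
    std::array<uint32_t, 32 * 32> pixels{};
    const Megatile& mega =
        ts.megatiles[ts.groups[groupIdx].megatileIndex[megatileInGroup]];

    for (int my = 0; my < 8; ++my)            // minitile row inside the megatile
        for (int mx = 0; mx < 8; ++mx) {
            const Minitile& mini = ts.minitiles[mega.minitileIndex[my * 8 + mx]];
            for (int py = 0; py < 4; ++py)    // pixel inside the minitile
                for (int px = 0; px < 4; ++px) {
                    uint32_t color = ts.palette[mini.colorIndex[py * 4 + px]];
                    pixels[(my * 4 + py) * 32 + (mx * 4 + px)] = color;
                }
        }
    return pixels;
}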
How can I render the tilemap?
How do I draw a many-GB image with OpenGL when there's not enough GPU memory?
First idea: By chunks.
If you are going to draw a fixed image, with no camera change and no zoom change, then the method may be: fill a texture with each chunk in turn, draw it, and repeat with the next chunk. The GPU will discard out-of-field parts, or overlap different parts of the image in the same pixel. For a non-fixed view this is impractical and horribly slow.
But wait, do you really need all those GB?
A 4K monitor (3840x2160 = 8.3 MPixels) needs 8.3 MPixels x 4 bytes = 33.2 MB of RGBA data.
The question is how to select 33.2 MB among so many GB of raw data.
Let's say you have an array of tiles (each tile is a chunk of the big image).
The first improvement is to not send to the GPU the tiles that fall outside the field of view. This can be tested on the CPU side by applying the typical MVP matrix to the four corners of each tile.
The second improvement applies when a tile is far from the camera but still inside the perspective/orthogonal-projection frustum: it will be seen as a single pixel. Why send the whole tile to the GPU when a single point for that pixel is enough?
Both improvements can be achieved better with a quadtree than with a plain array.
The quadtree stores pointers or identifiers to the tiles. Intermediate nodes also store a representative point with the average color of their sub-nodes, or a representative tile that "compresses" several tiles.
Traverse the quadtree. Discard nodes (and thus their branches) that fall outside the frustum. Render representative points/tiles instead of full textures when a tile is too far away. Render the whole tile when some edge covers more than one pixel.
You won't send just 33.2 MB, but something below 100 MB, which is fairly easy to deal with.
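A rough sketch of that traversal (the node layout and the frustum/size helpers are hypothetical placeholders, left here as declarations):

#include <array>
#include <memory>

struct Tile;      // one chunk of the big image (texture data, world bounds, ...)
struct Frustum;   // view frustum derived from the MVP matrix

struct QuadNode {
    std::array<std::unique_ptr<QuadNode>, 4> children;  // empty for leaves
    const Tile* tile = nullptr;                          // set on leaf nodes
    float avgColor[4] = {};                              // representative color of sub-nodes
};

// Hypothetical helpers:
bool  isInsideFrustum(const QuadNode& node, const Frustum& f);     // node bounds vs frustum
float projectedEdgeSizePx(const QuadNode& node, const Frustum& f); // on-screen size in pixels
void  renderPoint(const float color[4]);                           // draw a representative point
void  renderTile(const Tile* tile);                                // upload and draw a full tile

void renderQuadtree(const QuadNode& node, const Frustum& frustum)
{
    if (!isInsideFrustum(node, frustum))
        return;                                        // discard this whole branch

    if (projectedEdgeSizePx(node, frustum) <= 1.0f) {  // too far: one point is enough
        renderPoint(node.avgColor);
        return;
    }
    if (node.tile) {                                   // leaf: draw the actual chunk
        renderTile(node.tile);
        return;
    }
    for (const auto& child : node.children)            // recurse into the children
        if (child)
            renderQuadtree(*child, frustum);
}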
In bilinear filtering, the sampled color is calculated as the weighted average of the 4 closest texels, so why do corner texels get the same color when magnified?
For example: in this case (image below), when a 3x3 image is magnified/scaled to a 5x5 pixel image (using bilinear filtering), the corner 'Red' pixels get exactly the same color, and the border 'Green' pixels do as well.
In some documents it is explained that corner texels are extended with the same color to provide 4 adjacent texels, which explains why the corner 'Red' texels get the same color in the 5x5 image. But how come the border 'Green' texels get the same color, if they are calculated as a weighted average of the 4 closest texels?
When you are using bilinear texture sampling, the texels in the texture are not treated as colored squares but as samples of a continuous color field. Here is this field for a red-green checkerboard, where the texture border is outlined:
The circles represent the texels, i.e., the sample locations of the texture. The colors between the samples are calculated by bilinear interpolation. As a special case, the interpolation between two adjacent texels is a simple linear interpolation. When x is between 0 and 1, then: color = (1 - x) * leftColor + x * rightColor.
The interpolation scheme only defines what happens in the area between the samples, i.e. not even up to the edge of the texture. What OpenGL uses to determine the missing area is the texture's or sampler's wrap mode. If you use GL_CLAMP_TO_EDGE, the texel values from the edge will just be repeated like in the example above. With this, we have defined the color field for arbitrary texture coordinates.
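Here is a small sketch of this sampling model (a single-channel texture with GL_CLAMP_TO_EDGE-style clamping; the function name is made up):

#include <algorithm>
#include <cmath>
#include <vector>

// Bilinear sample of a W x H single-channel texture at normalized coords (u, v),
// with clamp-to-edge behaviour. Texel centers sit at (i + 0.5) / W.
float sampleBilinearClamp(const std::vector<float>& tex, int W, int H, float u, float v)
{
    // Convert to texel space and shift so integer coordinates are texel centers.
    float x = u * W - 0.5f;
    float y = v * H - 0.5f;
    int x0 = static_cast<int>(std::floor(x)), y0 = static_cast<int>(std::floor(y));
    float fx = x - x0, fy = y - y0;                    // interpolation parameters

    auto texel = [&](int i, int j) {                   // clamp-to-edge fetch
        i = std::clamp(i, 0, W - 1);
        j = std::clamp(j, 0, H - 1);
        return tex[j * W + i];
    };

    // color = (1 - f) * left + f * right, applied in x and then in y.
    float top    = (1 - fx) * texel(x0, y0)     + fx * texel(x0 + 1, y0);
    float bottom = (1 - fx) * texel(x0, y0 + 1) + fx * texel(x0 + 1, y0 + 1);
    return (1 - fy) * top + fy * bottom;
}

Evaluating this at the 5x5 fragment centers reproduces the colors discussed below.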
Now, when we render a 5x5 image, the fragments' colors are evaluated at the pixel centers. This looks like the following picture, where the fragment evaluation positions are marked with black dots:
Assuming that you draw a full-screen quad with texture coordinates ranging from 0 to 1, the texture coordinates at the fragment evaluation positions are interpolations of the vertices' texture coordinates. We can now just overlay the color field from before with the fragments and we will find the color that the bilinear sampler produces:
We can see a couple of things:
The central fragment coincides exactly with the red texel and therefore gets a perfect red color.
The central fragments on the edges fall exactly between two green samples (where one sample is a virtual sample outside of the texture). Therefore, they get a perfect green color. This is due to the wrap mode. Other wrap modes produce different colors. The interpolation is then: color = (1 - t) * outsideColor + t * insideColor, where t = 3 * (0.5 / 5 + 0.5 / 3) = 0.8 is the interpolation parameter.
The corner fragments are also interpolations from four texel colors (1 real inside the texture and three virtual outside). Again, due to the wrap mode, these will get a perfect red color.
All other colors are some interpolation of red and green.
You're looking at bilinear interpolation incorrectly. Look at it as a mapping from the destination pixel position to the source pixel position. So for each destination pixel, there is a source coordinate that corresponds to it. This source coordinate is what determines the 4 neighboring pixels, as well as the bilinear weights assigned to them.
Let us number your pixels with (0, 0) at the top left.
Pixel (0, 0) in the destination image maps to the coordinate (0, 0) in the source image. The four neighboring pixels in the source image are (0, 0), (1, 0), (0, 1) and (1, 1). We compute the bilinear weights with simple math: the weight in the X direction for a particular pixel is 1 - |pixel.x - source.x|, where source is the source coordinate, and likewise for Y. So the (X, Y) weights for the four neighboring pixels are, in the same order, (1, 1), (0, 1), (1, 0) and (0, 0), giving combined weights of 1, 0, 0 and 0.
In short, because the destination pixel mapped exactly to a source pixel, it gets exactly that source pixel's value. This is as it should be.
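Here is a small sketch of that mapping and weight computation (using the same convention as above, where destination pixel (0, 0) maps to source coordinate (0, 0)):

#include <cmath>
#include <cstdio>

// Map a destination pixel to a source coordinate (corner-aligned convention,
// as in the explanation above), then compute the 4 bilinear weights.
void bilinearWeights(int dstX, int dstY, int srcW, int srcH, int dstW, int dstH)
{
    float sx = dstX * float(srcW - 1) / float(dstW - 1);   // source coordinate
    float sy = dstY * float(srcH - 1) / float(dstH - 1);
    int x0 = int(std::floor(sx)), y0 = int(std::floor(sy));

    for (int dy = 0; dy <= 1; ++dy)
        for (int dx = 0; dx <= 1; ++dx) {
            float wx = 1.0f - std::fabs((x0 + dx) - sx);    // weight in X
            float wy = 1.0f - std::fabs((y0 + dy) - sy);    // weight in Y
            std::printf("neighbor (%d, %d): weight %.2f\n",
                        x0 + dx, y0 + dy, wx * wy);
        }
}

int main()
{
    // Destination pixel (0, 0) of a 5x5 image scaled from a 3x3 source:
    // it maps exactly onto source texel (0, 0), which therefore gets weight 1.
    bilinearWeights(0, 0, 3, 3, 5, 5);
}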
Please forgive any incorrect terminology; I'll do my best to explain.
I'd like to know how a rendering technology (GPU/CPU, etc.) blends/merges the samples generated from multisample rendering, presumably over multiple passes.
To be clear, I'm not asking for DirectX/OpenGL examples; I'm asking how it actually works.
Background: I've written a 2D polygon drawing function in C/C++ which is based on the common model of dividing each horizontal scanline into multiple 'samples' (in my case 4) and then using this to estimate coverage. I clamp these points to 4 vertical positions as well, giving me a 4x4 grid of 'samples' per pixel.
I currently generate a bitmask per pixel of which 'samples' are covered, and also an 'alpha' from 0 to 256 of how covered the pixel is. This works perfectly with a single polygon, and all the edges are nicely antialiased. The issue arises when drawing something like a pie chart: the first piece is drawn perfectly, but the second piece, which shares edges with it, will draw over those edge pixels.
For example, in this picture (Multisample Grid Picture) my renderer will draw the orange section, and the bottom middle pixel will be 50% covered by this orange polygon, so it will be 50% orange and 50% background colour (say black, for instance). The green polygon will then be drawn and will also cover the bottom middle pixel by 50%, so it will blend 50% green with the existing 50% orange and 50% black, giving us 50% green, 25% orange and 25% black. But realistically the black background colour should never come into it, as the pixel is fully covered, just not by any one polygon.
This page describes the process and says "In situations like this OpenGL will use coverage percentages to blend the samples from the foreground polygon with the colors already in the framebuffer. For example, for the pixel in the bottom center of the image above, OpenGL will perform two samples in the green polygon, average those samples together, and then blend the resulting color 50% with the color already in the framebuffer for that pixel." but doesn't describe how that process actually works: https://www2.lawrence.edu/fast/GREGGJ/CMSC420/chapter16/Chapter_16.html
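To make the issue concrete, here is just the arithmetic from my example as code (not my renderer), next to what I imagine a per-sample approach would produce (single-channel, made-up values):

#include <cstdio>

int main()
{
    // Single-channel "colors" for simplicity (0 = black background).
    const float background = 0.0f, orange = 0.6f, green = 0.9f;

    // Approach 1: blend one coverage-weighted color per pixel (what my renderer does).
    float pixel = background;
    pixel = 0.5f * orange + 0.5f * pixel;   // orange slice covers 50% of the pixel
    pixel = 0.5f * green  + 0.5f * pixel;   // green slice covers the other 50%
    std::printf("coverage blending:  %.3f (background still bleeds in)\n", pixel);
    // -> 0.5 * green + 0.25 * orange + 0.25 * background

    // Approach 2: keep a color per sample, resolve by averaging at the end.
    float samples[4] = { background, background, background, background };
    samples[0] = samples[1] = orange;       // samples covered by the orange slice
    samples[2] = samples[3] = green;        // samples covered by the green slice
    float resolved = (samples[0] + samples[1] + samples[2] + samples[3]) / 4.0f;
    std::printf("per-sample resolve: %.3f (no background contribution)\n", resolved);
}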
I haven't posted source code because it's quite a large project and I'm not doing anything particularly different from most simple polygon renderers except split the main loop out to callback functions.
I can't switch to a render buffer of size 4x width and 4x height, as it's used for more than just polygon drawing. I'm happy to accept that all 'joined' polygons must be known when the function runs, such as the user being required to pass in all the pie chart polygons rather than one at a time, as that seems a fair requirement.
Any guidance would be appreciated.
I have greyscale images with objects darker than the background, where each object, and the background, has a uniform shade throughout. There are mainly 3-4 "groups of shades" in each picture. I want to group these pixels to find the approximate background shade (brightness), so I can later extract it.
And a side question: how can I calculate the angles on a contour produced by findContours, or maybe the minimum angle on a contour?
I think you can set ranges to group the pixels. For example, all pixels whose intensity value is in the range (50-100) should be given the intensity value 100. Similarly, all pixels whose intensity value is in the range (100-150) should be given the intensity value 150. And so on.
After doing this, you will have only 3-4 fixed values across all pixels (as you mentioned, there are 3-4 groups in each image).
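A minimal sketch of that binning step over a raw greyscale buffer (the bin width of 50 follows the example above; in practice you would pick the bins from the image histogram):

#include <cstdint>
#include <vector>

// Snap every pixel to the upper edge of its intensity bin,
// e.g. 50-100 -> 100, 100-150 -> 150, as described above.
void quantizeShades(std::vector<uint8_t>& pixels, int binWidth = 50)
{
    for (uint8_t& p : pixels) {
        int upper = ((p / binWidth) + 1) * binWidth;   // upper edge of this pixel's bin
        p = static_cast<uint8_t>(upper > 255 ? 255 : upper);
    }
}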