Render a portion of a 1D array as 2D? - C++

I am trying to render a bunch of quads on a screen, but I cannot get it to render correctly.
I have a 1D array that is 10000 (100x100) in size and holds texture ids:
mapping = { 1, 22, 55, 28, 95, 105, ...}
The texture file contains 512x512 pixels, with 16x16 pixels for each image, so that is 32x32 images in total. The texture ids are assigned left to right, top to bottom, starting from 0:
0 1 2 3 ... 31
32 33 34 35 ... 63
............... 1023
Given a screen resolution of 800x600 pixels, I want to render only a subset of my quads that will fit on this resolution, so I don't want to draw all 10000 quads from my 1D array.
To draw from the (0,0) tile, this is what I have:
for (int j = 0; j < 37; j++) {      // 600/16 = 37
    for (int i = 0; i < 50; i++) {  // 800/16 = 50
        int quadIndex = j*800/16 + i;
        int textureID = mapping[quadIndex];
        int x = (textureID % 512) * 16;
        int y = (textureID / 512) * 16;
        // Take image from texture starting at (x,y) and draw on screen at (i, j)
        draw(i, j, x, y);
    }
}
The problem with this is that quadIndex is not accurate. It draws the first row correctly, but the second row is just a continuation of the first row instead of the actual second row. Basically the first row is overflowing onto the second row, and it's throwing everything off.
I am sure it is because I am calculating quadIndex incorrectly, but I don't know what the solution is.
Also, as an added bonus, how would I specify rendering from any (a,b) offset instead of always from (0,0)? That is, with my 1D array, I want to draw from (4,4) onto my 800x600 resolution.

You should use the width of the source array, not the screen width. As far as I understand, the width of mapping is 100, so:
int quadIndex = j * 100 + i;
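As a minimal sketch of the corrected loop (reusing the question's draw(i, j, x, y) and mapping, with hypothetical offsetX/offsetY parameters for the bonus (a,b) case; pass 0,0 for the original behaviour). Note that the atlas lookup should also use the number of images per row (32), not the pixel width (512):
const int mapWidth  = 100;  // width of the mapping array, in tiles
const int atlasCols = 32;   // 512 / 16 images per atlas row
const int tileSize  = 16;
// Assumes offsetX + 50 <= 100 and offsetY + 37 <= 100.
for (int j = 0; j < 600 / tileSize; j++) {        // 37 visible rows
    for (int i = 0; i < 800 / tileSize; i++) {    // 50 visible columns
        int quadIndex = (j + offsetY) * mapWidth + (i + offsetX);
        int textureID = mapping[quadIndex];
        // Convert the texture id into pixel coordinates inside the atlas.
        int x = (textureID % atlasCols) * tileSize;
        int y = (textureID / atlasCols) * tileSize;
        draw(i, j, x, y);
    }
}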

Related

How to determine pixel intensity with respect to pixel range in x-axis?

I want to see the distribution of a color with respect to image width. That is, if a (black and white) image has a width of 720 px, then I want to conclude that a specific range (e.g. pixels [500,720]) contains more white than the rest of the image. What I thought is that I need a slice of the image of 720x1 px, then I need to check the values and distribute them w.r.t. the width of 720 px. But I don't know how to apply this in a suitable way.
edit: I use OpenCV 4.0.0 with C++.
Example Case: In the first image, it is obvious that the right-hand side pixels are white. I want to get estimated coordinates of this dense line or zone. The light pink zone is the one I am interested in, and the red borders are the range where I want to find it.
If you want to get the minimum contiguous range of image columns that contains more white than the rest of the image, then you first need to count the white pixels in each column. Let's assume we have an image 720x500 (500 pixels high and 720 pixels wide). Then you will get an array Arr of 720 elements, each equal to the number of white pixels in the corresponding column (1x500).
const int Width = img.cols;
std::vector<int> Arr(Width, 0);  // white-pixel count per column
for (int x = 0; x < Width; x++) {
    for (int y = 0; y < img.rows; y++) {
        if (img.at<cv::Vec3b>(y, x) == cv::Vec3b(255, 255, 255)) {
            Arr[x]++;
        }
    }
}
You then need to find the minimum range [A;B] in this array that satisfies the condition Sum(Arr[0 to A-1]) + Sum(Arr[B+1 to Width-1]) < Sum(Arr[A to B]).
// minimum range width is guaranteed to be less than or equal to (Width/2 + 1)
int bestA = 0, minimumWidth = Width/2 + 1;
int total = RangeSum(Arr, 0, Width-1);
for (int i = 0; i < Width; i++) {
    for (int j = i; j < Width && j < i + minimumWidth; j++) {
        int rangeSum = RangeSum(Arr, i, j);
        if (rangeSum > total - rangeSum) {
            bestA = i;
            minimumWidth = j - i + 1;
            break;
        }
    }
}
std::cout << "Most white minimum range - [" << bestA << ";" << bestA + minimumWidth - 1 << "]\n";
You can optimize the code if you precalculate the sums of all [0; i] ranges, for i from 0 to Width - 1. Then you can calculate RangeSum(Arr, A, B) as PrecalculatedSums[B] - PrecalculatedSums[A-1] (treating PrecalculatedSums[-1] as 0), in O(1) time.
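For illustration, a minimal sketch of that precalculation, using an exclusive prefix sum so there is no A-1 edge case (RangeSum here is a hypothetical helper matching the calls above):
#include <vector>
// PrefixSum[i] holds the sum of Arr[0..i-1], so PrefixSum[0] == 0.
std::vector<long long> PrefixSum(Width + 1, 0);
for (int i = 0; i < Width; i++)
    PrefixSum[i + 1] = PrefixSum[i] + Arr[i];
// Sum of Arr[A..B] inclusive, in O(1):
auto RangeSum = [&](int A, int B) {
    return PrefixSum[B + 1] - PrefixSum[A];
};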

Procedurally generate seamless fractal noise textures

I have been generating noise textures to use as height maps for terrain generation. In this application, initially there is a 256x256 noise texture that is used to create a block of land that the user is free to roam around. When the user reaches a certain boundary in-game the application generates a new texture and thus another block of terrain.
In the code, a table of 64x64 random values is generated, and the values in the texture are the result of interpolating between these points at various 'frequencies' and 'wavelengths' using a smoothstep function; these layers are then combined to form the final noise texture, and finally all values in the texture are divided by the largest value to effectively normalize it. When the player is at the boundary and a new texture is created, the random number table re-uses the values from the appropriate edge of the previous texture (e.g. if the new texture is for a block of land on the +X side of the previous one, the last value in every row of the previous table is used as the first value in every row of random numbers in the next).
My problem is this: even though the same values are being used across the edges of adjacent textures, they are nowhere near seamless - some neighboring points on the terrain are mismatched by many, many metres. My guess is that the changing frequencies used to sample the random number table are having a significant effect on all areas of the texture. So how might one generate fractal noise procedurally, i.e. as needed, AND have it look continuous with adjacent values?
Here is a section of the code that returns a value interpolated between the points on the random number table given a point P:
float MainApp::assessVal(glm::vec2 P) {
    // Integer component of P
    int xi = (int)P.x;
    int yi = (int)P.y;
    // Decimal component of P
    float xr = P.x - xi;
    float yr = P.y - yi;
    // Find the grid square P lies inside of
    int x0 = xi % randX;
    int x1 = (xi + 1) % randX;
    int y0 = yi % randY;
    int y1 = (yi + 1) % randY;
    // Get random values for the 4 nodes
    float r00 = randNodes->randNodes[y0][x0];
    float r10 = randNodes->randNodes[y0][x1];
    float r01 = randNodes->randNodes[y1][x0];
    float r11 = randNodes->randNodes[y1][x1];
    // Smoother interpolation so the texture appears less blocky
    float sx = smoothstep(xr);
    float sy = smoothstep(yr);
    // Find the weighted value of the 4 random values.
    // This will be the final value in the noise texture
    float sx0 = mix(r00, r10, sx);
    float sx1 = mix(r01, r11, sx);
    return mix(sx0, sx1, sy);
}
Where randNodes is a 2 dimensional array containing the random values.
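(The snippet assumes smoothstep and mix helpers. A minimal sketch of the usual GLSL-style scalar definitions, if you don't already have them - note glm also provides glm::mix:)
// Hermite ease curve: 0 at t=0, 1 at t=1, zero slope at both ends.
float smoothstep(float t) {
    return t * t * (3.0f - 2.0f * t);
}
// Linear interpolation between a and b by factor t.
float mix(float a, float b, float t) {
    return a + (b - a) * t;
}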
And here is the code that takes all the values returned from the above function and constructs texture data:
int layers = 5;
float wavelength = 1, frequency = 1;
for (int k = 0; k < layers; k++) {
    for (int i = 0; i < stepsY; i++) {
        for (int j = 0; j < stepsX; j++) {
            // Compute value for (stepsX * stepsY) interpolation points
            // across the grid of random numbers
            glm::vec2 P = glm::vec2((float)j/stepsX * randX, (float)i/stepsY * randY);
            buf[i * stepsY + j] += assessVal(P * wavelength) * frequency;
        }
    }
    // repeat (layers) times with different signals
    wavelength *= 0.5;
    frequency *= 2;
}
for (int i = 0; i < buf.size(); i++) {
    // divide all data by the largest value.
    // this normalises the data to avoid saturation
    buf[i] /= largestVal;
}
Finally, here is an example of two textures generated by these functions that should be seamless, but aren't: the two images placed side by side are obviously mismatched.
Your code wraps the values only in the domain of the noise texture you read from, but not in the domain of the texture being generated.
For the texture T of size stepsX to be repeatable (let's consider the 1D case for simplicity) you must have
T(0) == T(stepsX)
Or in your case (substitute j = 0 and j = stepsX):
assessVal(0) == assessVal(randX * wavelength)
For k >= 1 this is clearly not true in your code, because
(randX / pow(2, k)) % randX != 0
One solution is to decrease randX and randY as you go up in frequency.
But my typical approach would rather be to start from a 2x2 random texture, upscale it to 4x4 with GL_REPEAT, add a bit more per-pixel noise, continue upscaling to 8x8, and so on until I get to the desired size.
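A minimal sketch of the first suggestion, reusing the names from the question: wrap the lattice with a per-octave period equal to the width of the domain actually sampled at that octave, so that assessVal(0) == assessVal(period). This assumes randX * wavelength stays a whole number (e.g. randX a power of two and wavelength = 1/2^k):
float assessValWrapped(glm::vec2 P, int periodX, int periodY) {
    int xi = (int)P.x, yi = (int)P.y;
    float xr = P.x - xi, yr = P.y - yi;
    // Wrap at the octave period instead of the full table size.
    int x0 = xi % periodX,       y0 = yi % periodY;
    int x1 = (xi + 1) % periodX, y1 = (yi + 1) % periodY;
    float r00 = randNodes->randNodes[y0][x0];
    float r10 = randNodes->randNodes[y0][x1];
    float r01 = randNodes->randNodes[y1][x0];
    float r11 = randNodes->randNodes[y1][x1];
    float sx = smoothstep(xr), sy = smoothstep(yr);
    return mix(mix(r00, r10, sx), mix(r01, r11, sx), sy);
}
// In the octave loop the period shrinks with the wavelength:
//   int periodX = (int)(randX * wavelength);   // e.g. 64, 32, 16, ...
//   int periodY = (int)(randY * wavelength);
//   buf[...] += assessValWrapped(P * wavelength, periodX, periodY) * frequency;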
The root cause, of course, is that your smoothing changes pixels to match their neighbors, but you later add new neighbors and do not re-smooth the pixels that got new neighbors.
One simple and common workaround is to keep an edge of invisible pixels whose width is half that of your smoothing kernel. Then, when expanding the area, you can re-smooth those invisible pixels just before they are revealed. Don't forget to add a new edge of invisible pixels!

Modifying only part of an image instead of all of it, as I wish

I currently have some code that reads an image stored in the TGA format, does something with it, and then stores it in a new TGA file.
The problem is that only the bottom third is being modified; the other two thirds are equal to the original image. Here is the code:
int size = width * height * bpp;
char imageArray[size];
char* arrayPtr = &imageArray[0];
......
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        imageArray[x*height + 3*y]     = 255;
        imageArray[x*height + 3*y + 1] = 0;
        imageArray[x*height + 3*y + 2] = 0;
    }
}
fileWriter.write(arrayPtr, size);
As can be seen inside the loops, I am modifying each color value, in this case turning it into a single-color image. Unfortunately only the bottom third is modified, even though the number of loop iterations equals the number of pixels and, with three operations per iteration, matches the number of bytes in the original image.
So I have no idea what I am doing wrong and would be thankful for any recommendations.
The whole offset has to be multiplied by bpp, not only y:
imageArray [bpp*(x*height + y)] = 255;
imageArray [bpp*(x*height + y) + 1] = 0;
....
I think I understand your problem now, but it relies on some assumptions about how you are bringing in your data and what bpp means.
You are trying to loop over every pixel here and update the 3 values.
You set size = width*height*bpp, where I can only assume bpp means bytes-per-pixel and is the 3 showing up in your loop. Try stepping through this with x=1 and y=0. If the data is laid out contiguously like:
RGB at x=0,y=0; RGB at x=1,y=0; ... then you can see you end up writing over your data from the first iteration of the loop. Every time you nest a loop, the whole index so far should be multiplied by the next level's dimension. Just replace x*height + 3*y with (x*height + y)*bpp, assuming bpp = 3.
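A minimal sketch of a corrected loop under similar assumptions (bpp = 3 bytes per pixel, rows of width pixels stored one after another); note that uncompressed TGA stores channels in BGR order, so the 255 here goes to the blue channel:
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int index = (y * width + x) * bpp;  // first byte of this pixel
        imageArray[index]     = 255;
        imageArray[index + 1] = 0;
        imageArray[index + 2] = 0;
    }
}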
It all depends on the order in which bytes are stored in the image array.
Your formulation suggests by-column/by-row/by-color, but it could also be by-row/by-column/by-color, or even by-color/by-row/by-column.
The index formulation should be one of:
x*(b*h) + y*b + c
y*(b*w) + x*b + c
c*(w*h) + x*h + y
(b, w and h are bytes per color, width and height)
Note how the indexes accumulate in the sums. You have forgotten at least one multiplication, assuming the order is correct.

Optimized float Blur variations

I am looking for optimized functions in C++ for calculating areal averages of floats. The function is passed a source float array, a destination float array (the same size as the source), the array width and height, and the "blurring" area width and height.
The function should wrap around the edges for the blurring/average calculations.
Here is example code that blurs with a rectangular shape:
/*****************************************
 * Find averages extended variations
 *****************************************/
void findaverages_ext(float *floatdata, float *dest_data, int fwidth, int fheight,
                      int scale, int aw, int ah, int weight, int xoff, int yoff)
{
    printf("findaverages_ext scale: %d, width: %d, height: %d, weight: %d \n", scale, aw, ah, weight);
    float total = 0.0;
    int spos = scale * fwidth * fheight;
    int apos;
    int w = aw;
    int h = ah;
    float* f_temp = new float[fwidth * fheight];

    // Horizontal
    for (int y = 0; y < fheight; y++)
    {
        Sleep(10); // Do not burn your processor
        total = 0.0;
        // Process entire window for first pixel (including wrap-around edge)
        for (int kx = 0; kx <= w; ++kx)
            if (kx >= 0 && kx < fwidth)
                total += floatdata[y*fwidth + kx];
        // Wrap
        for (int kx = (fwidth-w); kx < fwidth; ++kx)
            if (kx >= 0 && kx < fwidth)
                total += floatdata[y*fwidth + kx];
        // Store first window
        f_temp[y*fwidth] = (total / (w*2+1));
        for (int x = 1; x < fwidth; x++) // x width changes with y
        {
            // Subtract pixel leaving window
            if (x-w-1 >= 0)
                total -= floatdata[y*fwidth + x-w-1];
            // Add pixel entering window
            if (x+w < fwidth)
                total += floatdata[y*fwidth + x+w];
            else
                total += floatdata[y*fwidth + x+w-fwidth];
            // Store average
            apos = y * fwidth + x;
            f_temp[apos] = (total / (w*2+1));
        }
    }

    // Vertical
    for (int x = 0; x < fwidth; x++)
    {
        Sleep(10); // Do not burn your processor
        total = 0.0;
        // Process entire window for first pixel
        for (int ky = 0; ky <= h; ++ky)
            if (ky >= 0 && ky < fheight)
                total += f_temp[ky*fwidth + x];
        // Wrap
        for (int ky = fheight-h; ky < fheight; ++ky)
            if (ky >= 0 && ky < fheight)
                total += f_temp[ky*fwidth + x];
        // Store first if not out of bounds
        dest_data[spos + x] = (total / (h*2+1));
        for (int y = 1; y < fheight; y++) // y width changes with x
        {
            // Subtract pixel leaving window
            if (y-h-1 >= 0)
                total -= f_temp[(y-h-1)*fwidth + x];
            // Add pixel entering window
            if (y+h < fheight)
                total += f_temp[(y+h)*fwidth + x];
            else
                total += f_temp[(y+h-fheight)*fwidth + x];
            // Store average
            apos = y * fwidth + x;
            dest_data[spos+apos] = (total / (h*2+1));
        }
    }
    delete[] f_temp;
}
What I need are similar functions that, for each pixel, find the average (blur) of pixels from shapes other than rectangles.
The specific shapes are: "S" (sharp edges), "O" (rectangular but hollow), "+" and "X", where the average float is stored at the center pixel in the destination data array. The size of the blur shape should be variable in width and height.
The functions do not need to be pixel-perfect, only optimized for performance. There can be separate functions for each shape.
I would also be happy if anyone can give me tips on how to optimize the example function above for rectangular blurring.
What you are trying to implement are various sorts of digital filters for image processing. This is equivalent to convolving two signals, where the second one is the filter's impulse response. So far, you recognized that a "rectangular average" is separable. By separable I mean you can split the filter into two parts: one that operates along the X axis and one that operates along the Y axis -- in each case a 1D filter. This is nice and can save you lots of cycles. But not every filter is separable. Averaging along the other shapes (S, O, +, X) is not separable; you need to actually compute a 2D convolution for these.
As for performance, you can speed up your 1D averages by properly implementing a "moving average". A proper "moving average" implementation requires only a fixed, small amount of work per pixel, regardless of the averaging window. This can be done by recognizing that neighbouring pixels of the target image are computed as an average of almost the same pixels; you can reuse the sum for the neighbouring target pixel by adding one new pixel intensity and subtracting an older one (for the 1D case).
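A minimal sketch of such a 1D moving average with wrap-around (window radius w; assumes 2*w+1 <= n - the question's code already follows this pattern):
void movingAverageWrap(const float* src, float* dst, int n, int w) {
    float total = 0.0f;
    // Sum the initial window centred on index 0, wrapping negative indices.
    for (int k = -w; k <= w; ++k)
        total += src[(k + n) % n];
    for (int i = 0; i < n; ++i) {
        dst[i] = total / (2 * w + 1);
        // Slide the window: drop the sample leaving, add the one entering.
        total -= src[(i - w + n) % n];
        total += src[(i + w + 1) % n];
    }
}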
In the case of arbitrary non-separable filters your best bet performance-wise is "fast convolution", which is FFT-based. Check out www.dspguide.com. If I recall correctly, there is even a chapter on how to properly do "fast convolution" using the FFT algorithm. Although they explain it for one-dimensional signals, it also applies to two-dimensional signals; for images you have to perform 2D FFT/iFFT transforms.
To add to sellibitze's answer, you can use a summed area table for your O, S and + kernels (not for the X one though). That way you can convolve a pixel in constant time, and it's probably the fastest method to do it for kernel shapes that allow it.
Basically, a SAT is a data structure that lets you calculate the sum of any axis-aligned rectangle. For the O kernel, after you've built a SAT, you'd take the sum of the outer rect's pixels and subtract the sum of the inner rect's pixels. The S and + kernels can be implemented similarly.
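A minimal sketch of the idea (a flat array would be faster in practice; this version favours readability):
#include <vector>
// sat has (h+1) x (w+1) entries; sat[y][x] = sum of img[0..y-1][0..x-1].
std::vector<std::vector<double>> buildSAT(const std::vector<std::vector<float>>& img) {
    int h = img.size(), w = img[0].size();
    std::vector<std::vector<double>> sat(h + 1, std::vector<double>(w + 1, 0.0));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            sat[y + 1][x + 1] = img[y][x] + sat[y][x + 1] + sat[y + 1][x] - sat[y][x];
    return sat;
}
// Sum over the half-open rectangle [x0,x1) x [y0,y1) in O(1).
double rectSum(const std::vector<std::vector<double>>& sat,
               int x0, int y0, int x1, int y1) {
    return sat[y1][x1] - sat[y0][x1] - sat[y1][x0] + sat[y0][x0];
}
// O kernel: outer rectangle minus inner rectangle, e.g.
//   double o = rectSum(sat, ox0, oy0, ox1, oy1) - rectSum(sat, ix0, iy0, ix1, iy1);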
For the X kernel you can use a different approach: a skewed box filter is separable.
You can convolve with two long, thin skewed box filters, then add the two resulting images together. The center of the X will be counted twice, so you will need to convolve with another skewed box filter and subtract that.
Apart from that, you can optimize your box blur in many ways.
Remove the two ifs from the inner loop by splitting that loop into three loops: two short loops that do the checks and one long loop that doesn't. Alternatively, you could pad your array with extra elements in all directions; that way you can simplify your code.
Calculate values like h * 2 + 1 outside the loops.
An expression like f_temp[ky*fwidth + x] does two adds and one multiplication. You can initialize a pointer to &f_temp[ky*fwidth] outside the loop and just advance that pointer by the appropriate stride in the loop.
Don't do the division by w*2+1 in the horizontal step. Instead, divide once by (w*2+1)*(h*2+1) in the vertical step.
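Combining the first and last tips, the horizontal pass's inner loop could be split like this (a sketch only; it assumes 2*w+1 <= fwidth, that total already holds the wrapped window for x = 0 as in the original code, and that the division is deferred to the vertical pass):
const float* row = &floatdata[y * fwidth];
float* out = &f_temp[y * fwidth];
// x in [1, w]: the sample leaving the window wraps around the left edge.
for (int x = 1; x <= w; ++x) {
    total -= row[x - w - 1 + fwidth];
    total += row[x + w];
    out[x] = total;                  // divide later, in the vertical pass
}
// x in [w+1, fwidth-w-1]: no wrapping, branch-free inner loop.
for (int x = w + 1; x < fwidth - w; ++x) {
    total -= row[x - w - 1];
    total += row[x + w];
    out[x] = total;
}
// x in [fwidth-w, fwidth-1]: the entering sample wraps around the right edge.
for (int x = fwidth - w; x < fwidth; ++x) {
    total -= row[x - w - 1];
    total += row[x + w - fwidth];
    out[x] = total;
}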

C++: Filling a 1D array to represent an n-dimensional object based on a straight line segment

READ FIRST: I have rewritten this question with the help of a friend to be hopefully more specific in what is required. It can be found here
I'm not very clear on n-cubes, but I believe they are what I am referring to as the square family.
New Question Wording:
Perhaps I wasn't clear enough. What I'm asking is how to set up a 1D array to hold data for a cloud of evenly-spaced points that forms the most complete representation of the space occupied by an n-cube of n dimensions.
In 1D this would simply fill the array with a series of 1D co-ordinates creating a line segment. A 1-cube.
In 2D, however, this would fill every first co-ordinate with the x value and every second with the y, generating the most complete square possible for that spacing and number of particles. The most complete possible 2-cube.
In 3D, this would fill every first with x, every second with y and every third with z, generating the most complete possible cube for that spacing and number of particles. The most complete possible 3-cube.
I wish to be able to do this for any reasonable combination of number of particles, spacing and dimensions. Ideally I could handle at least up to a 4-cube, using a generic fill algorithm for all n-cubes, initialised into double * parts_.
Yet another definition of what kind of object I'm trying to represent:
In 1D it's a line. Sweep it through the second dimension and it becomes a square. Sweep that square through the third and it becomes a cube. I presume this behaviour extends past three dimensions, and I wish to store, in a 1D array, a cloud of points representing the space taken up by one of these objects of any reasonable dimension, spacing and number of points.
The original wording of the question:
I'm struggling to find a good way to put this question but here goes. I'm making a system that uses a 1D array implemented as double * parts_ = new double[some_variable];. I want to use this to hold co-ordinates for a particle system that can run in various dimensions.
What I want is to write a generic fill algorithm for filling this in n dimensions with a common increment in all directions, to a variable size. Examples will serve best, I think.
Consider the case where the number of particles stored by the array is 4
In 1D this produces 4 elements in the array because each particle only has one co-ordinate.
1D:
{0, 25, 50, 75};
In 2D this produces 8 elements in the array because each particle has two co-ordinates.
2D:
{0, 0, 0, 25, 25, 0, 25, 25}
In 3D this produces 12 elements in the array because each particle now has three co-ordinates
{0, 0, 0, 0, 0, 25, 0, 0, 50, ... }
These examples are still not quite accurate, but they hopefully will suffice.
The way I would do this normally for two dimensions:
int i = 0;
for (int x = 0; x < parts_size_ / dims_ / dims_ * 25; x += 25) {
    for (int y = 0; y < parts_size_ / dims_ / dims_ * 25; y += 25) {
        parts_[i] = x;
        parts_[i+1] = y;
        i += 2;
    }
}
How can I implement this for n-dimensions where 25 can be any number?
The straight line part is because it seems to me logical that a line is a somewhat regular shape in 1D, as is a square in 2D, and a cube in 3D. It seems to me that it would follow that there would be similar shapes in this family that could be implemented for 4D and higher dimensions via a similar fill pattern. This is the shape I wish to set my array to represent.
EDIT: Apparently I'm trying to fill this array to represent the n-cube with the fewest missing elements for the given n, spacing and number of elements. If that makes my goal any clearer.
As I understand it, you aren't sure how to process every element of a multi-dimensional array (stored as a 1D array) for an arbitrary number of dimensions N.
Processing a multi-dimensional array with an arbitrary number of dimensions goes like this:
#include <stdio.h>
#include <vector>
using std::vector;

int main(int argc, char** argv) {
    const int numDimensions = 10;
    vector<int> counters;
    vector<int> dimensionSizes;
    counters.resize(numDimensions);
    dimensionSizes.resize(numDimensions);
    for (int i = 0; i < numDimensions; i++) {
        counters[i] = 0;
        dimensionSizes[i] = 13;
    }

    long long arraySize = 1;
    for (int i = 0; i < numDimensions; i++)
        arraySize *= dimensionSizes[i];
    printf("%lld\n", arraySize);

    for (long long elementIndex = 0; elementIndex < arraySize; elementIndex++) {
        fprintf(stderr, "element %08lld: ", elementIndex);
        for (int i = 0; i < numDimensions; i++)
            fprintf(stderr, "%04d ", counters[i]);
        fprintf(stderr, "\n");
        // At this point you have the 1D element index
        // AND all n-dimensional coordinates stored in the counters array.
        // Just use them for your data.
        // "counters" is the N-dimensional coordinate: XYZW etc.
        for (int i = 0; i < numDimensions; i++) {
            counters[i] = counters[i] + 1;
            if (counters[i] < dimensionSizes[i])
                break;
            else
                counters[i] = 0;
        }
    }
    return 0;
}
Just make an array of the structs you need to access in N dimensions, and access them using the index calculated at the point marked by the comment. It is best to use an array of structs representing the data you want stored in N dimensions; if you don't want to do that, you'll have to multiply elementIndex by the number of doubles per element.
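A minimal sketch of that counter ("odometer") approach applied to the original question: filling a flat double array with the coordinates of every lattice point of an n-cube. fillNCube, pointsPerSide and spacing are hypothetical names; parts_ must hold pow(pointsPerSide, dims) * dims doubles, and the coordinate order may differ from the question's 2D example:
#include <vector>

void fillNCube(double* parts_, int dims, int pointsPerSide, double spacing) {
    std::vector<int> counters(dims, 0);
    long long numPoints = 1;
    for (int d = 0; d < dims; d++)
        numPoints *= pointsPerSide;
    for (long long p = 0; p < numPoints; p++) {
        // Write this point's dims coordinates contiguously.
        for (int d = 0; d < dims; d++)
            parts_[p * dims + d] = counters[d] * spacing;
        // Advance the odometer: increment the first counter, carrying over.
        for (int d = 0; d < dims; d++) {
            if (++counters[d] < pointsPerSide)
                break;
            counters[d] = 0;
        }
    }
}
// Usage matching the question's scale (a 2-cube, 2 points per side, spacing 25):
//   double parts_[8];
//   fillNCube(parts_, 2, 2, 25.0);
//   // parts_ == {0,0, 25,0, 0,25, 25,25}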