Traversing all the subarrays of a 2-D array - c++

I have a 2-D array of size P*Q and I have to answer K questions based on it. The array consists of 0s and 1s. For each question, we have to find the maximum size of a square subarray in which no two adjacent elements are equal.
For example if P=Q=8 and our given array be
00101010
00010101
10101010
01010101
10101010
01010101
10101010
01010101
Then question Ki allows us to do Ki flips (0 to 1 or 1 to 0).
Here K=4(number of questions)
1 2 0 10001
Output: 7 8 6 8
I have understood that for K1=1 we can change the value at index (1,1) to 1 and get a valid 7*7 matrix, so the output is 7. If Ki>=2, the answer is 8.
What I think is that we have to maintain an array ans[k] which stores the maximum size of a valid square submatrix. For this, we can start at each index of the original array, traverse its subarrays, and count the maximum size for flip=i when starting from that index. We do this for the subarrays starting at every index and then store the maximum of them in flip[i].
I have trouble implementing this because I don't know how to traverse all the subarrays for a given index. I have been trying for a long time without success. Can anyone please help?

It helps to simplify the problem to depend only on individual values (rather than pairs of neighboring values). So XOR the grid with each perfect checkerboard:
01111111 10000000
10111111 01000000
11111111 00000000
11111111 00000000
11111111 00000000
11111111 00000000
11111111 00000000
11111111 00000000
where the goal is now to find the largest square in either grid that has no more than K_i 0s (obviously favoring the left one here).
Start with K_i=0. To find the largest square of 1s, compute for each cell the number of 1s in a row and a column starting at it (0 for a cell that contains a 0); the largest square with that cell as its upper-left corner (assuming it's a 1) is then one more than the minimum of the row length of its right neighbor, the column length of its lower neighbor, and the square-size of its lower-right neighbor. (All these are 0 for the non-existent cells outside the grid.) Visit the cells in diagonal-major order to have these values available when you need them; note the largest square size produced.
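A minimal C++ sketch of this K_i=0 case (the function name and grid-of-strings representation are my own). Visiting cells bottom-up and right-to-left is equivalent to the diagonal-major order described above, since every value a cell needs comes from below or to its right:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Largest all-1s square: row[] counts 1s running right from each cell,
// col[] counts 1s running down, and sq[] is the side of the largest
// all-1s square with that cell as its upper-left corner.
int largestOnesSquare(const std::vector<std::string>& g) {
    int p = static_cast<int>(g.size());
    int q = static_cast<int>(g[0].size());
    int best = 0;
    // One extra row/column of zeros stands in for cells outside the grid.
    std::vector<std::vector<int>> row(p + 1, std::vector<int>(q + 1, 0));
    auto col = row, sq = row;
    for (int i = p - 1; i >= 0; --i) {       // bottom-up ...
        for (int j = q - 1; j >= 0; --j) {   // ... and right-to-left
            if (g[i][j] == '1') {
                row[i][j] = row[i][j + 1] + 1;
                col[i][j] = col[i + 1][j] + 1;
                sq[i][j]  = 1 + std::min({row[i][j + 1], col[i + 1][j],
                                          sq[i + 1][j + 1]});
                best = std::max(best, sq[i][j]);
            }  // a 0 cell keeps all three values at 0
        }
    }
    return best;
}
```

Run on the left XOR'd grid above, this returns 6, matching the K_i=0 answer.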
To generalize to K_i>0, store for each cell those three values (row length, column length, and square size) for each number of flips up to K_i. A cell with a 1 adds 1 to each row/column length as before, while a cell with a 0 shifts those lengths to the next flip count, discarding those whose flip count is now too large and adding a new value of 0 for 0 flips. For each combination of row-length-east, column-length-south, and square-size-southeast, each with a flip count, a cell gets a candidate square size that is their minimum with the sum of their flip counts, plus one if the cell is a 0 itself. For each flip count (that isn't too large), keep the largest square size, noting if it is the largest so far encountered (for that flip count).
Note that the brute-force solution may be nearly as fast when the squares are much smaller than the array, since it needs to visit each one only a small number of times.

Related

How to calculate range of data type eg int

I want to know why, in the formula to calculate the range of any data type, i.e. 2^(n-1), it is n-1, where n is the number of bits occupied by the given data type.
Assuming that the type is unsigned, the maximum value is 2^n - 1, because there are 2^n values, and one of them is zero.
2^(n-1) is the value of the n:th bit alone - bit 1 is 2^0, bit 2 is 2^1, and so on.
This is the same for any number base - in the decimal system, n digits can represent 10^n different values, with the maximum value being 10^n - 1, and the n:th digit is "worth" 10^(n-1).
For example, the largest number with three decimal digits is 999 (that is, 10^3 - 1), and the third decimal digit is "the hundreds digit", worth 10^2.
First, 2^(n-1) is not correct; the maximum (unsigned) number represented by the data type is:
max = 2^n - 1
So for an 8-bit data type, the maximum represented value is 255.
2^n tells you the number of values represented (256 for the 8-bit example), but because you want to include 0, the range is 0 to 255, not 1 to 256.
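A quick C++ check of this (unsignedMax is a name of my own; the shift assumes n is less than 64):

```cpp
#include <climits>
#include <cstdint>

// Maximum value of an unsigned n-bit type: there are 2^n values and one
// of them is zero, so the range is 0 .. 2^n - 1.
constexpr std::uint64_t unsignedMax(unsigned n) {
    return (std::uint64_t{1} << n) - 1;   // 2^n - 1
}
// unsignedMax(8) == 255 == UCHAR_MAX (on platforms where CHAR_BIT == 8)
```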

Arduino eInk Image2LCD - Size of c-array

This Image2LCD software (https://www.buydisplay.com/default/image2lcd) converts images to C arrays. I want to write this basic operation myself, but I don't understand why the software outputs an array of length 5000 for an input image of size 200x200. For 400x400 the array size is 20000. It seems like it's always 1/8 of the number of pixels.
The output array for the square 200x200 image begins and ends like this:
const unsigned char gImage_test[5000] = { /* 0X00,0X01,0XC8,0X00,0XC8,0X00, */
0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,
0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,
0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,
0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,
0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X60,0X00,0X00,0X00,0X00,
0X3C,0X60,0X00,0X0C,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,
0X00,0X00,0X00,0X00,0X70,0X00,0X00,0X00,0X00,0X7E,0X70,0X00,0X0E,0X00,0X00,0X00,
0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X78,0X00,0X00,
0X00,0X00,0X7F,0X78,0X00,0X0F,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,
0X00,0X00,0X00,0X00,0X00,0X00,0X7F,0XFC,0X3C,0X3E,0X3C,0X3F,0XF8,0X3C,0X7F,0X00,
0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X7F,
...
,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,
0X00,0X00,0X00,0X00,0X00,0X00,0X00,0X00,};
(Yes there is a lot of white in the image.)
Why don't you need one value for each pixel?
Shooting from the hip here, but if the image is monochrome, you only need one bit per pixel, and these bits can be packed into bytes (1 byte = 8 bits) for storage efficiency. Say the first 8 pixels of your image are these:
0 1 0 0 0 0 0 1
If we interpret these eight bits as one binary number, it is 01000001, which is 65 in decimal - so just storing 65 in an 8-bit integer, taking up only one byte, stores all 8 monochrome pixels. The downside is that it's not as intuitive as having each pixel be a separate value in the array.
I may be wrong, but the 1/8th ratio points straight to this kind of bit-packing.
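A sketch of that packing in C++ (packPixels and the MSB-first bit order are my assumptions; Image2LCD may use a different bit order or scan direction):

```cpp
#include <cstdint>
#include <vector>

// Pack monochrome pixels (0 or 1) into bytes, 8 pixels per byte,
// most-significant bit first.
std::vector<std::uint8_t> packPixels(const std::vector<int>& pixels) {
    std::vector<std::uint8_t> out((pixels.size() + 7) / 8, 0);
    for (std::size_t i = 0; i < pixels.size(); ++i)
        if (pixels[i])
            out[i / 8] |= 0x80 >> (i % 8);
    return out;
}
// packPixels({0,1,0,0,0,0,0,1}) yields one byte, 0x41 == 65,
// and a 200x200 all-zero image packs into 40000/8 == 5000 bytes.
```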

Why is the max value of an unsigned int -> 2^n - 1 in C++

If I have a 4 byte unsigned int, I have a space of 2^32 values.
The largest value should be 4294967296 (2^32), so why is it 4294967295 (2^32 - 1) instead?
It's just basic math. Look at this example:
If you had a 1-bit integer, what would the maximal value be?
According to what you say, it should be 2^1 = 2. How would you represent the value 2 with just one bit?
Because counting starts from 0.
For a 2-bit integer you can have 4 different values (0,1,2,3), i.e. 0 to 2^2 - 1:
(00,01,10,11)
Similarly, for a 32-bit integer the maximum value is 2^32 - 1.
The value 2^1 needs 2 bits to represent, and 2^32 needs 33 bits.
But you only have 32 bits, so the maximum is 2^32 - 1.
Basic mathematics.
A bit can represent 2 values. A set of n bits can represent 2^n (using ^ as notation to represent "to the power of", not as a bitwise operation as is its default meaning in C++) distinct values.
A variable of unsigned integral type represents a sequence containing every consecutive integral value between 0 and the maximum value that type can represent. If the total number of sequential values is 2^n, and the first of them is zero, then the type can represent 2^n - 1 consecutive positive (non-zero) values. The maximum value must therefore be 2^n - 1.
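This can be checked at compile time in C++ (valueCount is just an illustrative name):

```cpp
#include <cstdint>
#include <limits>

// For n = 32: 2^32 distinct values, and since 0 is one of them, the
// maximum is 2^32 - 1 = 4294967295.
constexpr std::uint64_t valueCount = std::uint64_t{1} << 32;   // 2^32
static_assert(std::numeric_limits<std::uint32_t>::max() == valueCount - 1,
              "maximum of a 32-bit unsigned type is 2^32 - 1");
```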

Bit operation used in a for loop

I found this loop in the source code of an algorithm. I think that details about the problems aren't relevant here, because this is a really small part of the solution.
void update(int i, int value, int array[], int n) {
    for (; i < n; i += ~i & (i + 1)) {
        array[i] += value;
    }
}
I don't really understand what happens in that for loop, is it some sort of trick? I found something similar named Fenwick trees, but they look a bit different than what I have here.
Any ideas what this loop means?
Also, I found this:
"Bit Hack #9. Isolate the rightmost 0-bit.
y = ~x & (x+1)
"
You are correct: the bit-hack ~i & (i + 1) should evaluate to an integer which is all binary 0's, except the one corresponding to the rightmost zero-bit of i, which is set to binary 1.
So at the end of each pass of the for loop, this value is added to i. Since the corresponding bit in i is zero, this has the effect of setting it without affecting any other bits in i. This strictly increases the value of i at each pass, until i overflows (or becomes -1, if you started with i<0). In context, you can probably expect that it is called with i>=0, and that the i < n condition terminates the loop before your index walks off the array.
The overall function should have the effect of iterating through the zero-bits of the original value of i from least- to most-significant, setting them one by one, and incrementing the corresponding elements of the array.
Fenwick trees are a clever way to accumulate and query statistics efficiently; as you say, their update loop looks a bit like this, and typically uses a comparable bit-hack. There are bound to be multiple ways to accomplish this kind of bit-fiddling, so it is certainly possible that your source code is updating a Fenwick tree, or something comparable.
Assume that from the right to the left, you have some number of 1 bits, a 0 bit, and then more bits in x.
If you add x + 1, then all the 1's at the right are changed to 0, the 0 is changed to 1, the rest is unchanged. For example xxxx011 + 1 = xxxx100.
In ~x, you have the same number of 0 bits, a 1 bit, and the inverses of the other bits. The bitwise and produces the 0 bits, one 1 bit, and since the remaining bits are and'ed with their negation, those bits are 0.
So the result of ~x & (x + 1) is a number with one 1 bit where x had its rightmost zero bit.
If you add this to x, you change the rightmost 0 to a 1. So if you do this repeatedly, you change the 0 bits in x to 1, from the right to the left.
The update function iterates over the 0-bits of i, setting them from the rightmost zero to the leftmost, and adds value to the i-th element of array at each step.
The for loop checks whether i is less than n; if so, array[i] += value adds value at the current index, and then i += ~i & (i + 1) sets the rightmost 0-bit of i, since ~i & (i + 1) is an integer that is all binary 0s except for that one bit.
Setting i to 8 and working through the iterations may make things clearer.
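Doing exactly that in C++ (touchedIndices is a helper name of my own that records the indices the loop visits):

```cpp
#include <vector>

// Trace of the update loop: at each step ~i & (i + 1) isolates the
// rightmost 0-bit of i, and adding it to i sets that bit.
std::vector<int> touchedIndices(int i, int n) {
    std::vector<int> touched;
    for (; i < n; i += ~i & (i + 1))
        touched.push_back(i);   // update() would do array[i] += value here
    return touched;
}
```

For i=8 and n=16 this visits 8 (1000), 9 (1001), 11 (1011), 15 (1111): each step sets the rightmost 0-bit.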

Why is this "reduction factor" algo doing "+ div/2"

So I am running through "OpenCV 2 Computer Vision Application Programming Cookbook" by Robert Laganiere. Around page 42 it talks about an image reduction algorithm. I understand the algorithm (I think), but I do not understand exactly why one part was put in. I think I know why, but if I am wrong I would like to be corrected. I am going to copy and paste a little bit of it in here:
"Color images are composed of 3-channel pixels. Each of these channels
corresponds to the intensity value of one of the three primary colors
(red, green, blue). Since each of these values is an 8-bit unsigned
char, the total number of colors is 256x256x256, which is more than 16
million colors. Consequently, to reduce the complexity of an analysis,
it is sometimes useful to reduce the number of colors in an image. One
simple way to achieve this goal is to simply subdivide the RGB space
into cubes of equal sizes. For example, if you reduce the number of
colors in each dimension by 8, then you would obtain a total of
32x32x32 colors. Each color in the original image is then assigned a
new color value in the color-reduced image that corresponds to the
value in the center of the cube to which it belongs. Therefore, the
basic color reduction algorithm is simple. If N is the reduction
factor, then for each pixel in the image and for each channel of this
pixel, divide the value by N (integer division, therefore the remainder
is lost). Then multiply the result by N, this will give you the
multiple of N just below the input pixel value. Just add N/2 and you
obtain the central position of the interval between two adjacent
multiples of N. If you repeat this process for each 8-bit channel
value, then you will obtain a total of 256/N x 256/N x 256/N possible
color values. How to do it... The signature of our color reduction
function will be as follows: void colorReduce(cv::Mat &image, int
div=64); The user provides an image and the per-channel reduction
factor. Here, the processing is done in-place, that is the pixel
values of the input image are modified by the function. See the
There's more... section of this recipe for a more general function
signature with input and output arguments. The processing is simply
done by creating a double loop that goes over all pixel values: "
void colorReduce(cv::Mat &image, int div=64) {
    int nl = image.rows;                     // number of lines
    int nc = image.cols * image.channels();  // total number of elements per line
    for (int j=0; j<nl; j++) {
        // get the address of row j
        uchar* data = image.ptr<uchar>(j);
        for (int i=0; i<nc; i++) {
            // process each pixel ---------------------
            data[i] = data[i]/div*div + div/2; // <- HERE IS WHERE I NEED UNDERSTANDING!!!
            // end of pixel processing ---------------
        }
    }
}
So I get how I am reducing the 0:255 pixel value by div, losing whatever remainder was left. Then by multiplying the result by div again we scale it back up to keep it in the range 0:255. Why are we then adding div/2 back into the answer? The only reason I can think of is that this will cause some values to be rounded down and some rounded up; if you don't use it then all your values are rounded down. So in a way it gives a "better" average?
Don't know, so what do you guys/girls think?
The easiest way to illustrate this is using an example.
For simplicity, let's say we are processing a single channel of an image. There are 256 distinct colors, ranging from 0 to 255. We are also going to use N=64 in our example.
Using these numbers, we will reduce the number of colors from 256 to 256/64 = 4. Let's draw a graph of our color space:
|......|......|......|......|
0 63 127 191 255
The dotted line represents our colorspace, going from 0 to 255. We have split this interval into 4 parts, and the splits are represented by the vertical lines.
In order to reduce all 256 colors to 4 colors, we are going to divide each color by 64 (losing the remainder), and then multiply it by 64 again. Let's see how this goes:
[0 , 63 ] / 64 * 64 = 0
[64 , 127] / 64 * 64 = 64
[128, 191] / 64 * 64 = 128
[192, 255] / 64 * 64 = 192
As you can see, all the colors from the first part became 0, all the colors from the second part became 64, third part 128, fourth part 192. So our color space looks like this:
|......|......|......|......|
0 63 127 191 255
\______/\_____/\_____/\_____/
| | | |
0 64 128 192
But this is not very useful. You can see that all our colors are slanted to the left of the intervals. It would be more helpful if they were in the middle of the intervals. And that's why we add 64/2 = 32 to the values. Adding half of the interval length shifts the colors to the center of the intervals. That's also what it says in the book: "Just add N/2 and you obtain the central position of the interval between two adjacent multiples of N."
So let's add 32 to our values and see how everything looks:
[0 , 63 ] / 64 * 64 + 32 = 32
[64 , 127] / 64 * 64 + 32 = 96
[128, 191] / 64 * 64 + 32 = 160
[192, 255] / 64 * 64 + 32 = 224
And the interval looks like this:
|......|......|......|......|
0 63 127 191 255
\______/\_____/\_____/\_____/
| | | |
32 96 160 224
This is a much better color reduction. The algorithm reduced our colorspace from 256 to 4 colors, and those colors are in the middle of the intervals that they reduce.
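The whole mapping can be condensed into a one-liner (reduce is an illustrative name):

```cpp
// value/div*div floors value to the multiple of div just below it;
// adding div/2 then moves the result to the center of that interval.
int reduce(int value, int div) { return value / div * div + div / 2; }
// reduce(100, 64) == 96, the center of the interval [64, 127]
```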
It is done to map each value to the middle of its quantization interval, not the floor of it.
For example, for N = 32, all data from 0 to 31 gives 16 instead of 0.