How to list all possible combinations of data with restrictions - combinations

I'm in a bit of a bind. I'm trying to write code in C++ that tests every possible combination of items in a matrix, with restrictions on the quadrants in which each item can be placed. Here is the problem:
I have a 4 (down) x 2 (across) matrix
There are 4 quadrants in the matrix, numbered 1-4, and each quadrant can hold up to 2 items of the same type.
I have a list of x (say 4) items of each type a-d, and each type can only go into its respective quadrant 1-4.
the items:
a1,a2,a3,a4 - Only Quadrant 1
b1,b2,b3,b4 - Only Quadrant 2
c1,c2,c3,c4 - Only Quadrant 3
d1,d2,d3,d4 - Only Quadrant 4
I need to write code that lists every possible way to place the items a-d into their respective quadrants (the restriction).
Any ideas? So far I think I have to find the combinations for each quadrant and multiply the counts across the quadrants to get the total number of combinations, but I am not sure how to write code that lists each possible combination.
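One possible C++ sketch, assuming each quadrant takes exactly 2 of its 4 items (so there are C(4,2) = 6 choices per quadrant and 6^4 = 1296 placements overall); if a quadrant may also hold 0 or 1 items, or if the two slots within a quadrant are distinguishable, the pick list below would need to be extended accordingly:

#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main() {
    const char types[4] = {'a', 'b', 'c', 'd'};

    // All ways to pick 2 of the 4 items of one type: {1,2}, {1,3}, ..., {3,4}.
    std::vector<std::pair<int, int>> pairs;
    for (int i = 1; i <= 4; ++i)
        for (int j = i + 1; j <= 4; ++j)
            pairs.push_back({i, j});

    // Cartesian product of the per-quadrant choices.
    long long count = 0;
    for (const auto& q1 : pairs)
        for (const auto& q2 : pairs)
            for (const auto& q3 : pairs)
                for (const auto& q4 : pairs) {
                    const std::pair<int, int> chosen[4] = {q1, q2, q3, q4};
                    std::string line;
                    for (int q = 0; q < 4; ++q)
                        line += "Q" + std::to_string(q + 1) + ": "
                              + types[q] + std::to_string(chosen[q].first) + " "
                              + types[q] + std::to_string(chosen[q].second) + "   ";
                    std::cout << line << '\n';
                    ++count;
                }
    std::cout << "Total placements: " << count << '\n';   // 6^4 = 1296
    return 0;
}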

Related

Number of ways to color exactly K cells in a 3xN matrix such that no two colored cells are adjacent (i.e. do not share edges)?

I tried to solve this problem using Dynamic Programming but it seems I am missing some cases that I am unable to find.
Here is the recurrence I used to get values from the sub-problems:
dp[i][j] = dp[i][j-1] + 3*(dp[i-1][j-1] - dp[i-2][j-2]) + dp[i-3][j-2]
(i = k = number of cells to be colored, j = n = number of columns; note that the number of rows is fixed at 3)
The terms are as defined below:
dp[i][j-1] : the case where I don't color any cell in the nth column.
dp[i-1][j-1] - dp[i-2][j-2] : the case where I color one cell in the last column; I subtract the case where the adjacent cell in the (n-1)th column is also colored, and since this can be done for each of the 3 cells in the nth column, I multiply it by 3.
dp[i-3][j-2] : the case where I color two cells (the top and bottom ones) in the nth column, which leaves only one choice for the (n-1)th column, the middle cell; hence I subtract 3 from i, and since the last two columns have already been considered, I subtract 2 from j.
I couldn't find any mistake in the above approach; if you see any mistake, please help.
Below is the actual question, which also adds an extra condition: no P consecutive columns may be left empty, and this must be taken care of as well.
My approach is to first find all the possible ways to color k cells in a 3xN matrix such that no two are adjacent, then count the arrangements in which some P consecutive columns contain no colored cells, and subtract that from the total. With this approach, though, I miss the correct answer by a small margin for smaller inputs and by a large margin for larger inputs. I must be missing something here.
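A brute-force cross-check is often the quickest way to locate the missing cases in a recurrence like this. A minimal C++ sketch (only practical for small n; the function name is mine) that counts colorings of exactly k cells in a 3xN grid with no two colored cells sharing an edge:

#include <iostream>

// Enumerate every subset of the 3*n cells with exactly k colored cells and
// keep those where no two colored cells are edge-adjacent.
long long bruteForce(int n, int k) {
    const int cells = 3 * n;
    long long ways = 0;
    for (long long mask = 0; mask < (1LL << cells); ++mask) {
        int bits = 0;
        for (long long m = mask; m; m &= m - 1) ++bits;
        if (bits != k) continue;

        bool ok = true;
        for (int c = 0; c < cells && ok; ++c) {
            if (!((mask >> c) & 1)) continue;
            int row = c % 3, col = c / 3;
            if (row < 2 && ((mask >> (c + 1)) & 1)) ok = false;      // cell below
            if (col < n - 1 && ((mask >> (c + 3)) & 1)) ok = false;  // cell to the right
        }
        if (ok) ++ways;
    }
    return ways;
}

int main() {
    // Compare these values against what the dp table produces.
    for (int n = 1; n <= 5; ++n)
        for (int k = 0; k <= 3 * n; ++k)
            std::cout << "n=" << n << " k=" << k << " -> " << bruteForce(n, k) << '\n';
    return 0;
}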

HOG: What is done in the contrast-normalization step?

According to the HOG process, as described in the paper Histogram of Oriented Gradients for Human Detection (see link below), the contrast normalization step is done after the binning and the weighted vote.
I don't understand something - If I already computed the cells' weighted gradients, how can the normalization of the image's contrast help me now?
As far as I understand, contrast normalization is done on the original image, whereas for computing the gradients, I already computed the X,Y derivatives of the ORIGINAL image. So, if I normalize the contrast and I want it to take effect, I should compute everything again.
Is there something I don't understand well?
Should I normalize the cells' values?
Or is the normalization in HOG not about contrast at all, but about the histogram values (the counts in each bin)?
Link to the paper:
http://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf
The contrast normalization is achieved by normalization of each block's local histogram.
The whole HOG extraction process is well explained here: http://www.geocities.ws/talh_davidc/#cst_extract
When you normalize the block histogram, you actually normalize the contrast in this block, provided that your histogram really contains the sum of magnitudes for each direction.
The term "histogram" is a bit confusing here, because you do not count how many pixels have direction k; instead, you sum the magnitudes of those pixels. Thus you can normalize the contrast after computing the block's vector, or even after you have computed the whole descriptor vector, as long as you know at which indices in the vector each block starts and ends.
The steps of the algorithm as I understand them - this worked for me with a 95% success rate:
Define the following parameters (in this example, the parameters are the same as in the HOG for Human Detection paper):
A cell size in pixels (e.g. 6x6)
A block size in cells (e.g. 3x3 ==> Means that in pixels it is 18x18)
Block overlapping rate (e.g. 50% ==> means that both the block width and the block height in pixels must be even. This is satisfied in this example, because the cell width and cell height are even (6 pixels), making the block width and height even as well)
Detection window size. The size must be divisible by half of the block size without remainder (so the blocks can be placed exactly within the window with 50% overlap). For example, the block width is 18 pixels, so the window width must be a multiple of 9 (e.g. 9, 18, 27, 36, ...). Same for the window height. In our example, the window width is 63 pixels and the window height is 126 pixels.
Calculate gradient:
Compute the X difference using convolution with the vector [-1 0 1]
Compute the Y difference using convolution with the transpose of the above vector
Compute the gradient magnitude in each pixel using sqrt(diffX^2 + diffY^2)
Compute the gradient direction in each pixel using atan(diffY / diffX). Note that atan returns values between -90 and 90, while you will probably want values between 0 and 180, so just flip all the negative values by adding 180 degrees to them. Note that in HOG for Human Detection, they use unsigned directions (between 0 and 180). If you want to use signed directions, you need a little more effort: if diffX and diffY are both positive, your atan value will be between 0 and 90 - leave it as is. If diffX and diffY are both negative, you get the same range of possible values - here, add 180, so the direction is flipped to the other side. If diffX is positive and diffY is negative, you get values between -90 and 0 - leave them as they are (you can add 360 if you want them positive). If diffY is positive and diffX is negative, you again get the same range, so add 180 to flip the direction to the other side.
"Bin" the directions. For example, 9 unsigned bins: 0-20, 20-40, ..., 160-180. You can easily achieve that by dividing each value by 20 and flooring the result. Your new binned directions will be between 0 and 8.
For each block separately (using copies of the original matrix, because blocks overlap and we do not want to destroy their data), do the following:
Split the block into cells
For each cell, create a vector with 9 entries (one per bin). For each bin index, store the sum of the magnitudes of all the pixels in the cell whose binned direction equals that index. We have 6x6 = 36 pixels in a cell in total. So, for example, if 2 pixels have direction 0, with magnitudes 0.231 and 0.13, you should write the value 0.361 (= 0.231 + 0.13) at index 0 of your vector.
Concatenate all the vectors of all the cells in the block into a large vector. This vector size should of course be NUMBER_OF_BINS * NUMBER_OF_CELLS_IN_BLOCK. In our example, it is 9 * (3 * 3) = 81.
Now normalize this vector. Compute k = sqrt(v[0]^2 + v[1]^2 + ... + v[n]^2 + eps^2) (I used eps = 1). After you have computed k, divide each value in the vector by k - your vector is then normalized (a code sketch of this step follows the list of steps).
Create final vector:
Concatenate all the vectors of all the blocks into 1 large vector. In my example, the size of this vector was 6318
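A minimal C++ sketch of the cell-histogram and block-normalization steps described above (9 bins, 3x3 cells per block, eps = 1). The way the per-pixel magnitudes and binned directions are passed in, and the helper names, are my own assumptions:

#include <cmath>
#include <cstddef>
#include <vector>

// One cell: sum the magnitudes of its pixels into 9 direction bins.
std::vector<double> cellHistogram(const std::vector<double>& magnitudes,
                                  const std::vector<int>& binnedDirections) {
    std::vector<double> hist(9, 0.0);
    for (std::size_t p = 0; p < magnitudes.size(); ++p)
        hist[binnedDirections[p]] += magnitudes[p];
    return hist;
}

// One block: concatenate its cell histograms (9 * 9 = 81 values for a 3x3
// block) and L2-normalize with k = sqrt(v[0]^2 + ... + v[n]^2 + eps^2).
std::vector<double> normalizeBlock(const std::vector<std::vector<double>>& cellHists,
                                   double eps = 1.0) {
    std::vector<double> block;
    for (const auto& h : cellHists)
        block.insert(block.end(), h.begin(), h.end());

    double sumSq = eps * eps;
    for (double v : block) sumSq += v * v;
    const double k = std::sqrt(sumSq);

    for (double& v : block) v /= k;   // this is the contrast normalization
    return block;
}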

Armadillo port conv2 from matlab [duplicate]

I am studying image processing these days and I am a beginner to the subject. I got stuck on the subject of convolution and how to implement it for images. To be brief, there is a general formula for the convolution of images:
x(n1,n2) = sum over all k1 and k2 of h(k1,k2) * y(n1-k1, n2-k2), where y is the input image and h is the mask (kernel).
x(n1,n2) represents a pixel in the output image, but I do not know what k1 and k2 stand for. Actually, this is what I would like to learn. In order to implement this in some programming language, I need to know what k1 and k2 stand for. Can someone explain this to me or point me to an article? I would really appreciate any help.
Convolution in this case deals with extracting patches of image pixels that surround a target image pixel. When you perform image convolution, you do this with what is known as a mask, point spread function, or kernel, and this is usually much smaller than the size of the image itself.
For each target pixel in the output image, you grab a neighbourhood of pixel values from the input, including the pixel that is at the same coordinates in the input. The size of this neighbourhood is exactly the same as the size of the mask. At that point, you rotate the mask by 180 degrees, then do an element-by-element multiplication of each value in the mask with the pixel value that coincides with it at each location in the neighbourhood. You add all of these up, and that is the output for the target pixel in the output image.
For example, let's say I had this small image:
1 2 3 4 5
6 7 8 9 10
11 12 13 14 15
16 17 18 19 20
21 22 23 24 25
And let's say I wanted to perform an averaging within a 3 x 3 window, so my mask would be:
        [1 1 1]
(1/9) * [1 1 1]
        [1 1 1]
To perform 2D image convolution, rotating the mask by 180 degrees still gives us the same mask, and so let's say I wanted to find the output at row 2, column 2. The 3 x 3 neighbourhood I would extract is:
1 2 3
6 7 8
11 12 13
To find the output, I would multiply each value in the mask by the same location of the neighbourhood:
[ 1  2  3]              [1 1 1]
[ 6  7  8]  **  (1/9) * [1 1 1]
[11 12 13]              [1 1 1]
Performing a point-by-point multiplication and adding up the values gives us:
1(1/9) + 2(1/9) + 3(1/9) + 6(1/9) + 7(1/9) + 8(1/9) + 11(1/9) + 12(1/9) + 13(1/9) = 63/9 = 7
The output at location (2,2) in the output image would be 7.
Bear in mind that I didn't tackle the case where the mask would go out of bounds. Specifically, if I tried to find the output at row 1, column 1 for example, there would be five locations where the mask would go out of bounds. There are many ways to handle this. Some people consider those pixels outside to be zero. Other people like to replicate the image border so that the border pixels are copied outside of the image dimensions. Some people like to pad the image using more sophisticated techniques like doing symmetric padding where the border pixels are a mirror reflection of what's inside the image, or a circular padding where the border pixels are copied from the other side of the image.
That's beyond the scope of this post, but in your case, start with the simplest case: any pixels that fall outside the bounds of the image while you're collecting neighbourhoods are simply set to zero.
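To make the recipe concrete, here is a minimal sketch of a 'same'-size 2D convolution with zero padding, using plain std::vector matrices (this is not the Armadillo API; the 180-degree rotation of the mask is implicit in the n1 - k1, n2 - k2 indexing):

#include <vector>

using Matrix = std::vector<std::vector<double>>;

Matrix conv2same(const Matrix& img, const Matrix& mask) {
    const int rows = img.size(),   cols = img[0].size();
    const int mRows = mask.size(), mCols = mask[0].size();
    const int cr = mRows / 2, cc = mCols / 2;          // centre of the mask

    Matrix out(rows, std::vector<double>(cols, 0.0));
    for (int n1 = 0; n1 < rows; ++n1)
        for (int n2 = 0; n2 < cols; ++n2) {
            double sum = 0.0;
            for (int k1 = 0; k1 < mRows; ++k1)
                for (int k2 = 0; k2 < mCols; ++k2) {
                    // Convolution reads the input at (n1 - k1, n2 - k2)
                    // relative to the mask centre, so the flip comes for free.
                    int i = n1 - (k1 - cr);
                    int j = n2 - (k2 - cc);
                    if (i < 0 || i >= rows || j < 0 || j >= cols)
                        continue;                      // out of bounds => zero
                    sum += mask[k1][k2] * img[i][j];
                }
            out[n1][n2] = sum;
        }
    return out;
}

With the 5 x 5 image and the (1/9) averaging mask above, out[1][1] (row 2, column 2, 1-indexed) comes out as 7, matching the worked example.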
Now, what do k1 and k2 mean? k1 and k2 denote the offset with respect to the centre of the neighbourhood and mask. Notice that n1 - k1 and n2 - k2 are important in the sum. The output position is denoted by n1 and n2, so n1 - k1 and n2 - k2 are the offsets with respect to this centre in both the horizontal sense (n1 - k1) and the vertical sense (n2 - k2). If we had a 3 x 3 mask, the centre would be k1 = k2 = 0, the top-left corner would be k1 = k2 = -1, and the bottom-right corner would be k1 = k2 = 1. The reason the sums go to infinity is to make sure we cover all elements in the mask; masks are finite in size, so this just ensures that every mask element is covered. Therefore, the above sum simplifies to the point-by-point summation I was talking about earlier.
Here's a better illustration where the mask is a vertical Sobel filter which finds vertical gradients in an image:
Source: http://blog.saush.com/2011/04/20/edge-detection-with-the-sobel-operator-in-ruby/
As you can see, for each output pixel in the target image, we look at a neighbourhood of pixels at the same spatial location in the input image (3 x 3 in this case), perform a weighted element-by-element sum between the mask and the neighbourhood, and set the output pixel to the total of these weighted elements. Bear in mind that this example does not rotate the mask by 180 degrees, but that is what you do when it comes to convolution.
Hope this helps!
$k_1$ and $k_2$ are variables that should cover the whole domain of definition of your kernel.
Check out wikipedia for further description:
http://en.wikipedia.org/wiki/Kernel_%28image_processing%29

C++ alternative algorithm for solution

I need some help with an algorithm; I have a problem with a program.
I need to make a program where the user inputs the coordinates of 3 points and the coefficients of a linear function that crosses the triangle made by those 3 points, and I need to compare the areas of the shapes created by the function crossing that triangle.
I would paste the code here, but parts of it are in my native language, so I just want to know your algorithms for this problem, because mine works only if the points are entered in an exact sequence and I can't get a handle on that.
http://pastebin.com/vNzGuqX4 - code
and, for example, I use this: http://goo.gl/j18Ch0
The code is not finished. I just noticed that it does not work if I enter the points in a different sequence: entering " 1 1 2 5 4 4 0.5 1 5 " works, but " 4 4 1 1 2 5 0.5 1 5 " does not.
The line must cross at least 2 of the triangle's edges. So you can find these 2 crossing points first; these 2 points, together with one of the 3 vertices, make a small triangle. Use this equation (Heron's formula) to calculate the area of a triangle: S = sqrt(l * (l-a) * (l-b) * (l-c)), where l = (a+b+c)/2 and a, b, c are the lengths of the edges. It should be easy to get the length of an edge given the coordinates of its vertices. One area is that of the small triangle; the other is the area of the big triangle minus the small one.
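A minimal sketch of the area computation with Heron's formula (the Point struct and helper names are mine):

#include <cmath>

struct Point { double x, y; };

// Length of the edge between two vertices.
double dist(Point p, Point q) {
    return std::hypot(p.x - q.x, p.y - q.y);
}

// Heron's formula: S = sqrt(l*(l-a)*(l-b)*(l-c)) with l = (a+b+c)/2.
double triangleArea(Point A, Point B, Point C) {
    double a = dist(B, C), b = dist(C, A), c = dist(A, B);
    double l = (a + b + c) / 2.0;
    return std::sqrt(l * (l - a) * (l - b) * (l - c));
}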
If your triangle is ABC, a good approach would be the following:
Find lines that go through points A and B, B and C, and C and A.
Find the intersection of your line with these three lines.
Check which two intersections lie on the triangle sides.
Depending on which two intersections these are, calculate the area of the new small triangle.
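A minimal sketch of steps 2 and 3, assuming the cutting line is given as y = k*x + m and each triangle side is treated as a segment (the helper names are mine); the two sides for which this returns a point give the corners of the small triangle, whose area can then be computed as in the other answer:

#include <cmath>
#include <optional>

struct Pt { double x, y; };

// Intersection of the line y = k*x + m with the side PQ, if it lies on PQ.
std::optional<Pt> intersectSide(double k, double m, Pt P, Pt Q) {
    double dx = Q.x - P.x, dy = Q.y - P.y;
    double denom = dy - k * dx;
    if (std::abs(denom) < 1e-12) return std::nullopt;   // side parallel to the line
    double t = (k * P.x + m - P.y) / denom;             // parameter along PQ, 0..1
    if (t < 0.0 || t > 1.0) return std::nullopt;        // intersection misses the side
    return Pt{P.x + t * dx, P.y + t * dy};
}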

Find optimal route in farm land - dynamic programming/Dijkstra's

I was trying to solve a question on InterviewStreet (the competition has since ended). The problem is to build a ditch from a pond to a farm, given a N*M grid of elevations. The pond and the farm are one of the tiles within the N*M grid and won't be the same tile.
The elevations are numbers between 0 and 9. Additionally, you are given the coordinates of the pond and the farm (1-indexed, row followed by column), which each take up exactly one tile on the grid. You are to write a program that, given this data, computes the minimum cost to build an irrigation ditch.
More specifically, the input that will be fed into your program will be formatted as follows:
N M
pondLocationX pondLocationY
farmLocationX farmLocationY
elevationX1Y1elevationX1Y2...elevationX1YM
elevationX2Y1elevationX2Y2...elevationX2YM
.
.
.
elevationXNY1elevationXNY2...elevationXNYM
where pondLocationX and farmLocationX are integers in the interval [1, N], and pondLocationY and farmLocationY are integers in the interval [1, M], and all elements are integers in the interval [0, 9]. Note that a single space separates the X and Y coordinates of the farm and pond, but there are no spaces separating the elevations.
Given such an input, your program should print out the minimum cost to build an irrigation ditch from the pond to the farm. The constraints are as follows. The pond and farm will not be at the same location. The elevation of all tiles except for the pond can be increased or decreased at a cost of one for every unit of change (you may leave the elevation the same for a cost of 0). N and M will each be at most 300. After paying for any excavation that is necessary, you can build a ditch at 0 additional cost if there is a sequence of tiles starting at the pond and ending at the farm such that the following are true:
(Contiguous path) Each tile in the sequence is adjacent to the previous tile (no diagonal adjacency -- tiles in the interior of the map have exactly 4 adjacent tiles)
(Downhill path) Each tile in the sequence, including the pond and farm, has an elevation that is at most that of the previous tile in the sequence.
For example, if the input is the following:
3 5
1 1
3 4
27310
21171
77721
then we can build an irrigation ditch at a cost of just 4, since it suffices to lower the tile at location (1, 3) from 3 to 1 (cost 2), raise the tile at position (1, 5) from 0 to 1 (cost 1), and lower the farm, which is at location (3, 4), from 2 to 1 (cost 1). Note that you cannot travel diagonally to get from (2, 3) to (3, 4) in one step.
Solution:
I think this is a variation of Dijkstra's algorithm, i.e. use the farm as the source node and stop when you have calculated the shortest path to the pond. The "adjacent" tiles are your neighbours, and your edge weights are the differences in elevation.
However, the weights can be modified in two ways: if you are higher than your neighbour, you can either 1) decrease your height to match your neighbour's, or 2) increase your neighbour's height to match yours. This effect can percolate outwards, and I'm not able to capture it in the algorithm.
How can I adjust Dijkstra's algorithm to accommodate the fact that the weights can be changed?
Use Dijkstra's algorithm on the 3D grid N*M*10. Two vertices (x,y,z) and (x',y',z') are connected (with a directed arc) if (x,y) and (x',y') are adjacent and z' is not greater than z. The cost of the arc is the absolute difference between z' and the initial height at (x',y'). Then find the shortest path from the pond (at its initial height) to the farm (the farm's final z coordinate does not matter).
It is possible that the minimal path found in this way passes through the same point (x,y) twice, for example first through (x,y,z') and later through (x,y,z''). But if this happens, you can remove the part of the path between (x,y,z') and (x,y,z''), since replacing (x,y,z') with (x,y,z'') costs no more than that portion of the path. So you can assume that for every point (x,y) the path uses only a single value of z.
So the path you have found is the solution to the given problem.
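A minimal sketch of this layered-grid Dijkstra (input parsing is omitted; the sample grid, pond and farm from the question are hard-coded, and it prints 4 for that sample):

#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <queue>
#include <string>
#include <utility>
#include <vector>

int main() {
    // Sample input from the question, 0-indexed coordinates.
    std::vector<std::string> grid = {"27310", "21171", "77721"};
    int N = grid.size(), M = grid[0].size();
    int pondR = 0, pondC = 0, farmR = 2, farmC = 3;

    const long long INF = 1e18;
    auto id = [&](int r, int c, int z) { return (r * M + c) * 10 + z; };
    std::vector<long long> dist(N * M * 10, INF);

    using State = std::pair<long long, int>;                 // (cost, vertex id)
    std::priority_queue<State, std::vector<State>, std::greater<State>> pq;

    int startZ = grid[pondR][pondC] - '0';                   // the pond keeps its height
    dist[id(pondR, pondC, startZ)] = 0;
    pq.push({0, id(pondR, pondC, startZ)});

    const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    while (!pq.empty()) {
        auto [d, v] = pq.top(); pq.pop();
        if (d != dist[v]) continue;                          // stale queue entry
        int z = v % 10, c = (v / 10) % M, r = v / (10 * M);
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= N || nc < 0 || nc >= M) continue;
            int h = grid[nr][nc] - '0';                      // original height
            for (int nz = 0; nz <= z; ++nz) {                // downhill arcs only
                long long nd = d + std::abs(nz - h);         // excavation cost at (nr,nc)
                if (nd < dist[id(nr, nc, nz)]) {
                    dist[id(nr, nc, nz)] = nd;
                    pq.push({nd, id(nr, nc, nz)});
                }
            }
        }
    }

    long long best = INF;
    for (int z = 0; z <= 9; ++z)                             // any final farm height
        best = std::min(best, dist[id(farmR, farmC, z)]);
    std::cout << best << '\n';
    return 0;
}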