I am trying to solve a weird bin packing problem. A link for the original problem is here
(sorry for the long question, thanks for your patience)
I am re-iterating the problem as follows:
I am trying to write an application that generates drawings for a compartmentalized panel.
I have N cubicles (2D rectangles) (N <= 40). For each cubicle there is a minimum height (minHeight[i]) and minimum width (minWidth[i]) associated. The panel itself also has a MAXIMUM_HEIGHT constraint.
These N cubicles have to be stacked one on top of the other in a column-wise grid such that the above constraints are met for each cubicle.
Also, the width of each column is the maximum of the minWidths of the cubicles placed in that column.
Also, every column must have the same height; this common height determines the height of the panel.
We can add spare cubicles in the empty space left in any column or we can increase the height/width of any cubicle beyond the specified minimum. However we cannot rotate any of the cubicles.
OBJECTIVE: TO MINIMIZE TOTAL PANEL WIDTH.
MAXIMUM_HEIGHT of panel = 2100mm, minWidth range 350mm to 800mm, minHeight range 225mm to 2100mm.
As per the accepted answer, I formulated the integer linear program. However, given the combinatorial nature of the problem, the solver appears to 'hang' for N > 20.
I am now trying to implement a work-around solution.
The cubicles are sorted in descending order of minWidths. If the minWidths are equal, then they are sorted in descending order of their minHeights.
I then solve it using the first-fit decreasing heuristic. This gives me an upper bound on the total panel width and a list of the current column widths.
Now I try to make the panel width smaller and fit my feeders (the cubicles) into that smaller panel. (I am able to check efficiently whether the feeders fit a given list of column widths.)
The panel width can be made smaller in the following ways:
1. Take any column and replace it with a column of the next lower feeder minWidth. If the column is already of the lowest minWidth, then try removing it and check.
2. Take any column, replace it with a column of a higher minWidth feeder and remove another column.
3. Any other way that I haven't thought of; I shall be glad if anyone can point one out.
I have implemented the 1st way correctly. Following is the code. However, I am not able to put the 2nd way into code correctly.
for ( int i = 0; i < columnVector.size(); i++ ) {
    QVector< Notepad::MyColumns > newVec( columnVector );
    if ( newVec[i].quantity > 0
         && ( i > 0 || newVec[i].quantity > 1 ) ) {
        newVec[i].quantity--;
        if ( i < columnVector.size() - 1 )
            newVec[i+1].quantity++;
        float fitResult = tryToFit( newVec, feederVector );
        myPanelWidth = fitResult ? fitResult : myPanelWidth;
        if ( fitResult ) { // if the feeders fit, then start the iteration again
            columnVector = newVec;
            i = -1;
        }
    }
}
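I imagine the 2nd move would look something like the sketch below, but I am not sure it is correct. It reuses the same columnVector / feederVector / tryToFit structures as above and assumes the columns are sorted by descending width, so moving a column from width class i to class i-1 makes it wider; it is only an untested illustration.

for ( int i = 1; i < columnVector.size(); i++ ) {            // width class to widen
    for ( int j = 0; j < columnVector.size(); j++ ) {        // width class to drop
        if ( i == j ) continue;  // widening and dropping the same class would need quantity >= 2; skipped here
        QVector< Notepad::MyColumns > newVec( columnVector );
        if ( newVec[i].quantity > 0 && newVec[j].quantity > 0 ) {
            newVec[i].quantity--;       // take one column of width class i ...
            newVec[i-1].quantity++;     // ... and promote it to the next larger class
            newVec[j].quantity--;       // drop one column of width class j
            float fitResult = tryToFit( newVec, feederVector );
            if ( fitResult && fitResult < myPanelWidth ) {    // accept only strict improvements so the search terminates
                myPanelWidth = fitResult;
                columnVector = newVec;
                i = 0;                  // restart the outer loop (i++ brings it back to 1)
                break;
            }
        }
    }
}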
Any help shall be greatly appreciated.
Thanks
Try this: https://stackoverflow.com/a/21282418/2521214 and swap the x and y axes, because that solution minimizes the page height for a fixed page width. If you do not want a border, then set it to zero. It is basically what you are coding now.
Related
I have a large list of items, each item has a weight.
I'd like to select N items randomly without replacement, while the items with more weight are more probable to be selected.
I'm looking for the most performant approach. Performance is paramount. Any ideas?
If you want to sample items without replacement, you have lots of options.
Use a weighted-choice-with-replacement algorithm to choose random indices. There are many algorithms like this. One of them is WeightedChoice, described later in this answer, and another is rejection sampling, described as follows. Assume that the highest weight is max, there are n weights, and each weight is 0 or greater. To choose an index in [0, n) using rejection sampling:
Choose a uniform random integer i in [0, n).
With probability weights[i]/max, return i. Otherwise, go to step 1. (For example, if all the weights are integers greater than 0, choose a uniform random integer in [1, max] and if that number is weights[i] or less, return i, or go to step 1 otherwise.)
Each time the weighted choice algorithm chooses an index, set the weight for the chosen index to 0 to keep it from being chosen again. Or...
Assign each index an exponentially distributed random number (with a rate equal to that index's weight), make a list of pairs assigning each number to an index, then sort that list by those numbers. Then take each item from first to last, in ascending order. This sorting can be done on-line using a priority queue data structure (a technique that leads to weighted reservoir sampling). Notice that the naïve way to generate the random number, -ln(1-RNDU01())/weight, where RNDU01() is a uniform random number in [0, 1], is not robust, however ("Index of Non-Uniform Distributions", under "Exponential distribution").
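For illustration, here is a small C++ sketch of that exponential-numbers idea (my own sketch, not taken from the documents cited here); it uses std::exponential_distribution rather than the naïve formula warned about above, and assumes every weight is strictly positive:

#include <algorithm>
#include <iostream>
#include <random>
#include <utility>
#include <vector>

// Draw k distinct indices: give each index an exponential key with
// rate = weight, then keep the k smallest keys. Heavier items tend to get
// smaller keys, so they tend to appear earlier in the sample.
std::vector<int> sampleWithoutReplacement(const std::vector<double>& weights,
                                          int k, std::mt19937& rng) {
    std::vector<std::pair<double, int>> keyed;
    keyed.reserve(weights.size());
    for (int i = 0; i < (int)weights.size(); ++i) {
        std::exponential_distribution<double> expo(weights[i]); // rate = weight
        keyed.push_back({expo(rng), i});
    }
    k = std::min(k, (int)keyed.size());
    // Only the k smallest keys are needed, so a partial sort suffices.
    std::partial_sort(keyed.begin(), keyed.begin() + k, keyed.end());
    std::vector<int> result;
    for (int i = 0; i < k; ++i) result.push_back(keyed[i].second);
    return result;
}

int main() {
    std::mt19937 rng(std::random_device{}());
    std::vector<double> weights = {1.0, 5.0, 2.0, 0.5};   // example weights
    for (int idx : sampleWithoutReplacement(weights, 2, rng))
        std::cout << idx << ' ';
    std::cout << '\n';
}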
Tim Vieira gives additional options in his blog.
A paper by Bram van de Klundert compares various algorithms.
EDIT (Aug. 19): Note that for these solutions, the weight expresses how likely a given item will appear first in the sample. This weight is not necessarily the chance that a given sample of n items will include that item (that is, an inclusion probability). The methods given above will not necessarily ensure that a given item will appear in a random sample with probability proportional to its weight; for that, see "Algorithms of sampling with equal or unequal probabilities".
Assuming you want to choose items at random with replacement, here is pseudocode implementing this kind of choice. Given a list of weights, it returns a random index (starting at 0), chosen with a probability proportional to its weight. This algorithm is a straightforward way to implement weighted choice. But if it's too slow for you, see my section "Weighted Choice With Replacement" for a survey of other algorithms.
METHOD WChoose(weights, value)
    // Choose the index according to the given value
    lastItem = size(weights) - 1
    runningValue = 0
    for i in 0...size(weights) - 1
        if weights[i] > 0
            newValue = runningValue + weights[i]
            lastItem = i
            // NOTE: Includes start, excludes end
            if value < newValue: break
            runningValue = newValue
        end
    end
    // If we didn't break above, this is a last
    // resort (might happen because rounding
    // error happened somehow)
    return lastItem
END METHOD

METHOD WeightedChoice(weights)
    return WChoose(weights, RNDINTEXC(Sum(weights)))
END METHOD
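For the without-replacement variant described earlier (choose an index, then zero out its weight), a short illustrative C++ sketch follows; it uses std::discrete_distribution in place of the WeightedChoice pseudocode and simply rebuilds the distribution after every draw, so it is not optimized:

#include <iostream>
#include <random>
#include <vector>

// Pick k distinct indices with probability proportional to weight,
// zeroing out each chosen weight so it cannot be picked again.
// Assumes k is at most the number of positive weights.
std::vector<int> chooseWithoutReplacement(std::vector<double> weights, int k,
                                          std::mt19937& rng) {
    std::vector<int> chosen;
    for (int draw = 0; draw < k; ++draw) {
        std::discrete_distribution<int> dist(weights.begin(), weights.end());
        int i = dist(rng);
        chosen.push_back(i);
        weights[i] = 0.0;   // never pick this index again
    }
    return chosen;
}

int main() {
    std::mt19937 rng(42);
    for (int i : chooseWithoutReplacement({1.0, 5.0, 2.0, 0.5}, 2, rng))
        std::cout << i << ' ';
    std::cout << '\n';
}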
Let A be the item array with x items. The complexity of each method is defined as
< preprocessing_time, querying_time >
If sorting is possible: < O(x lg x), O(n) >
sort A by the weight of the items.
create an array B, for example:
B = [ 0, 0, 0, x/2, x/2, x/2, x/2, x/2 ].
it's easy to see that B gives a higher probability of choosing x/2.
if you haven't picked n elements yet, choose a random element e from B.
pick a random element from A within the interval e : x-1.
If iterating through the items is possible: < O(x), O(tn) >
iterate through A and find the average weight w of the elements.
define the maximum number of tries t.
try (at most t times) to pick a random element of A whose weight is bigger than w (see the sketch after this list).
test for some t that gives you good/satisfactory results.
If nothing above is possible: < O(1), O(tn) >
define the maximum number of tries t.
if you haven't picked n elements yet, take t random elements in A.
pick the element with biggest value.
test for some t that gives you good/satisfactory results.
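For the second option (iterating through the items), one pick might look roughly like this tiny C++ sketch; the Item struct and every name in it are purely illustrative assumptions, not part of the question:

#include <random>
#include <vector>

struct Item { double weight; /* ... payload ... */ };

// Heuristic sketch: the average weight is computed once (O(x) preprocessing);
// each pick then tries at most t random positions and keeps the first item
// whose weight exceeds the average, falling back to the last try otherwise.
int pickAboveAverage(const std::vector<Item>& A, double averageWeight,
                     int t, std::mt19937& rng) {
    std::uniform_int_distribution<int> pos(0, (int)A.size() - 1);
    int candidate = pos(rng);
    for (int attempt = 1; attempt < t && A[candidate].weight <= averageWeight; ++attempt)
        candidate = pos(rng);
    return candidate;   // may still be below average if all t tries failed
}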
(As I am new and may not be aware of the code of conduct, feel free to edit this post to make this better and more helpful to other people.)
Greetings everybody!
This problem is related to this: Problem Link
The problem in brief:
Given a 2xM array, we want to tile it with 2x1 tiles such that the sum, over all tiles, of the absolute difference between the two values covered by each tile is maximized. We want to report this maximum sum.
The problem in detail:
In Domino Solitaire, you have a grid with two rows and many columns. Each square in the grid contains an integer. You are given a supply of rectangular 2×1 tiles, each of which exactly covers two adjacent squares of the grid. You have to place tiles to cover all the squares in the grid such that each tile covers two squares and no pair of tiles overlap. The score for a tile is the difference between the bigger and the smaller number that are covered by the tile. The aim of the game is to maximize the sum of the scores of all the tiles.
Below is my code for it. Basically, I've written a recursive solution because there are two cases: (1) one vertical 2x1 tile at the start, or (2) two horizontal 2x1 tiles laid on top of each other to cover the first two columns.
#include <bits/stdc++.h>
using namespace std;

int maxScore(int array[][2], int N, int i);

int main(){
    ios::sync_with_stdio(0);
    cin.tie(0);
    int N; cin >> N;
    int array[N][2];
    for(int i = 0; i < N; i++) cin >> array[i][0] >> array[i][1];
    cout << maxScore(array, N, 0);
    return 0;
}

int maxScore(int array[][2], int N, int i){
    int score1 = abs(array[i][0] - array[i][1]) + maxScore(array, N, i+1);
    int score2 = abs(array[i][0] - array[i+1][0]) + abs(array[i][1] - array[i+1][1]) + maxScore(array, N, i+2);
    return max(score1, score2);
}
However, this seems to be a very inefficient solution and I can't really understand how to cover the base cases (otherwise this would go on forever).
Any help would be really appreciated. Thank You! (BTW I want to create a new tag - Competitive Programming, can anybody help me do so?)
Maintain an array of the best solutions, where the value at index i of the array is the best solution considering only the corresponding columns of the input matrix. Then arr[i] = the maximum achievable by adding either one vertical tile to the arr[i-1] solution, or two horizontal tiles to the arr[i-2] solution. Treat arr[-1] as 0 and set arr[0] to the value of one vertical domino.
This is intentionally not a complete solution, but should help you find a much faster implementation.
Since you need to cover every square of a 2xM grid, there is no way you have dominoes placed like this:
. . .[#|#]. .
. .[#|#]. . .
So essentially, for every sub-block the rightmost domino is either vertical, or there are two horizontal ones on top of each other.
If you start from the left, you only need to remember your best result for the first n or n-1 columns, then try placing a vertical domino to the right of the n solution or two horizontal dominoes to the right of the n-1 solution. The better of the two is the best n+1 solution. You can compute this in a simple for-loop; as a first step, store all partial solutions in a std::vector.
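For instance, a minimal sketch of that loop (my own illustration, with indices shifted so best[i] covers the first i columns; it reuses the input format from the question):

#include <bits/stdc++.h>
using namespace std;

int main(){
    int N; cin >> N;
    vector<array<int,2>> a(N);
    for (int i = 0; i < N; i++) cin >> a[i][0] >> a[i][1];

    // best[i] = maximum score using only the first i columns
    vector<long long> best(N + 1, 0);
    if (N >= 1) best[1] = abs(a[0][0] - a[0][1]);            // one vertical tile
    for (int i = 2; i <= N; i++) {
        long long vertical   = best[i-1] + abs(a[i-1][0] - a[i-1][1]);
        long long horizontal = best[i-2] + abs(a[i-2][0] - a[i-1][0])
                                         + abs(a[i-2][1] - a[i-1][1]);
        best[i] = max(vertical, horizontal);
    }
    cout << best[N] << '\n';
}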
Here we have a box that is 4 * 7 and it can be filled with rectangles that are either 1 * 2 or 2 * 1. This depiction is from the book Competitive Programmer's Handbook.
To solve this problem most efficiently, the book mentions representing each row by the parts a cell can contain: the upper square of a vertical tile, the lower square of a vertical tile, the left square of a horizontal tile, or the right square of a horizontal tile.
Since there are 4 things in this set, the maximum number of unique rows we can have is 4^m, where m is the number of columns. From each constructed row, we construct the next row so that it is valid. Valid means we cannot have vertical fragments out of order: the solution is valid only if every vertical "cap" in the top row corresponds to a vertical "cup" in the row below it, and vice versa. (For the horizontal fragments, their placement is already restricted during row creation itself, so there can be no inter-row discrepancy there.)
The book then says this:
Since a row consists of m characters and there are four choices for each character, the number of distinct rows is at most 4^m. Thus, the time complexity of the solution is O(n4^(2m)), because we can go through the O(4^m) possible states for each row, and for each state, there are O(4^m) possible states for the previous row.
Everything is fine until the last phrase, "there are O(4^m) possible states for the previous row." Why do we only consider the previous row? There are more rows, and this time complexity should consider the entire problem, not just the previous row, right?
Here is my ad hoc C++ implementation for 2 by n matrix, which would not work in this case, but I was trying to abstract it:
#include <bits/stdc++.h>
using namespace std;

int ways[251];

int f(int n){
    if (ways[n] != -1) return ways[n];   // already memoised
    return (ways[n] = f(n-1) + f(n-2));
}

int main(){
    ways[0] = 1;
    ways[1] = 1;
    for (int i = 2; i <= 250; i++)
        ways[i] = -1;
    cout << f(250) << '\n';
}
Here we have a box that is 4 * 7 and it can be filled with rectangles that are either 1 * 2 or 2 * 1. This depiction is from the book Competitive Programmer's Handbook.
To solve this problem most efficiently, the book mentions representing each row by the parts a cell can contain: the upper square of a vertical tile, the lower square of a vertical tile, the left square of a horizontal tile, or the right square of a horizontal tile.
Since there are 4 things in this set, the maximum number of unique rows we can have is 4^m, where m is the number of columns. From each constructed row, we construct the next row so that it is valid. Valid means we cannot have vertical fragments out of order: the solution is valid only if every vertical "cap" in the top row corresponds to a vertical "cup" in the row below it, and vice versa. (For the horizontal fragments, their placement is already restricted during row creation itself, so there can be no inter-row discrepancy there.)
The book then mysteriously says this:
It is possible to make the solution more efficient by using a more compact representation for the rows. It turns out that it is sufficient to know which columns of the previous row contain the upper square of a vertical tile. Thus, we can represent a row using only two characters: one meaning "upper square of a vertical tile" and □ meaning anything else, where □ stands for any of "lower square of a vertical tile", "left square of a horizontal tile" and "right square of a horizontal tile". Using this representation, there are only 2^m distinct rows and the time complexity is O(n2^(2m)).
Why does this simple square character work? How would you know if there is a horizontal tile underneath the top vertical fragment? How would you know whether the left and right horizontal fragments are aligned? It blows my mind that this is possible. Does anyone know?
Here is my ad hoc C++ implementation for 2 by n matrix, which would not work in this case, but I was trying to abstract it:
#include <bits/stdc++.h>
using namespace std;

int ways[251];

int f(int n){
    if (ways[n] != -1) return ways[n];   // already memoised
    return (ways[n] = f(n-1) + f(n-2));
}

int main(){
    ways[0] = 1;
    ways[1] = 1;
    for (int i = 2; i <= 250; i++)
        ways[i] = -1;
    cout << f(250) << '\n';
}
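For what it's worth, here is my attempt at a small, self-contained sketch of the 2^m-state idea from the quoted passage (my own reading of it, so it may not match the book exactly): the bitmask records exactly which columns of the current row hold the upper square of a vertical tile, and that alone is enough to extend the tiling row by row.

#include <bits/stdc++.h>
using namespace std;

int n, m;                              // n rows, m columns (m small)
vector<vector<long long>> dp;          // dp[row][mask]

// Fill row 'row' from left to right. 'fromAbove' marks cells already covered
// by a vertical tile started in the previous row; 'toBelow' marks cells where
// we start a new vertical tile (its upper square lies in this row).
void fillRow(int row, int col, int fromAbove, int toBelow, long long ways){
    if (col == m) { dp[row + 1][toBelow] += ways; return; }
    if (fromAbove & (1 << col)) {                   // cell already occupied
        fillRow(row, col + 1, fromAbove, toBelow, ways);
        return;
    }
    // start a vertical tile here (upper square in this row)
    fillRow(row, col + 1, fromAbove, toBelow | (1 << col), ways);
    // or place a horizontal tile covering this column and the next one
    if (col + 1 < m && !(fromAbove & (1 << (col + 1))))
        fillRow(row, col + 2, fromAbove, toBelow, ways);
}

int main(){
    cin >> n >> m;
    dp.assign(n + 1, vector<long long>(1 << m, 0));
    dp[0][0] = 1;                       // nothing sticks down into the first row
    for (int row = 0; row < n; row++)
        for (int mask = 0; mask < (1 << m); mask++)
            if (dp[row][mask])
                fillRow(row, 0, mask, 0, dp[row][mask]);
    cout << dp[n][0] << '\n';           // nothing may protrude past the last row
}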
I'm trying to binarise a picture, after first preparing it (grayscaling), of course.
My method is to find the maximum and minimum grayscale values, take the middle value between them as my threshold, and then, iterating over all the pixels, compare each one with the threshold: if its grayscale value is larger than the threshold, I put 0 in a matrix, otherwise I put 1.
But now I'm facing a problem. Usually I binarise images with a white background, so my algorithm is based on that assumption. But when I meet an image with a black background everything collapses, even though I can still see the number clearly (the 0's and 1's just switch places).
How can I solve this problem and make my program more general?
Maybe I'd better look for other ways of binarisation.
P.S. I looked for an understandable explanation of Otsu's threshold method, but it seems either I'm not prepared for that level of difficulty or I keep finding very complicated explanations; in any case I can't write it in C. If anyone could help here, it'd be wonderful.
Sorry for not answering the questions, just didn't see them
Firstly - the code
for (int y = 1; y < Source->Picture->Height; y++)
    for (int x = 1; x < Source->Picture->Width; x++)
    {
        unsigned green = GetGValue(Source->Canvas->Pixels[x][y]);
        unsigned red   = GetRValue(Source->Canvas->Pixels[x][y]);
        unsigned blue  = GetBValue(Source->Canvas->Pixels[x][y]);
        threshold = (0.2125*red + 0.7154*green + 0.0721*blue);
        if (min > threshold)
            min = threshold;
        if (max < threshold)
            max = threshold;
    }
middle = (max+min)/2;
Then iterating through the image
        if (threshold < middle)
        {
            picture[x][y] = 1;
            fprintf( fo, "1" );
        } else {
            picture[x][y] = 0;
            fprintf( fo, "0" );
        }
    }
    fprintf( fo, "\n" );
}
fclose(fo);
So I get a file, something like this
000000000
000001000
000001000
000011000
000101000
000001000
000001000
000001000
000000000
Here you can see an example of one.
Then I can interpolate it, or do something else (recognition), depending on the zeros and ones.
But if I switch the colors, the numbers won't be the same, so the recognition will not work. I wonder if there's an algorithm that can help me out.
I've never heard of Otsu's method, but I understand some of the wikipedia page so I'll try to simplify that.
1. Count how many pixels are at each level of darkness.
2. "Guess" a threshold.
3. Calculate the variance of the counts of darkness less than the threshold.
4. Calculate the variance of the counts of darkness greater than the threshold.
5. If the variance of the darker side is greater, guess a darker threshold; else guess a higher threshold. Do this like a binary search so that it ends.
6. Turn all pixels darker than the threshold black, the rest white.
Otsu's method is actually "maximizing inter-class variance", but I don't understand that part of the math.
The concept of variance is "how far apart are the values from each other". A low variance means everything is similar. A high variance means the values are far apart. The variance of a rainbow is very high, lots of colors. The variance of the background of Stack Overflow is 0, since it's all perfectly white, with no other colors. Variance is calculated more or less like this:
#include <cmath>

double variance(unsigned int* counts, int size, int threshold, bool above) {
    //this is a quick trick to turn the "upper" half into a lower one, to save myself code
    if (above) return variance(counts + threshold, size - threshold, size - threshold, false);
    //first we calculate the average value
    unsigned long long atotal = 0;
    unsigned long long acount = 0;
    for (int i = 0; i < threshold; ++i) {
        atotal += counts[i]*i; //number of px times value
        acount += counts[i];
    }
    //finish calculating average
    double average = double(atotal)/acount;
    //next we calculate the variance
    double vtotal = 0;
    for (int i = 0; i < threshold; ++i) {
        //to do so we get each value's difference from the average
        double t = std::abs(i - average);
        //and square it (I hate mathematicians)
        vtotal += counts[i]*t*t;
    }
    //and return the average of those squared values.
    return vtotal/acount;
}
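To tie steps 2 to 5 together, the threshold hunt could look roughly like this (again only a rough sketch of the procedure above; degenerate cases where one side of the histogram is empty are not handled):

// Assumes counts[256] holds the number of pixels at each darkness level
// and uses the variance() function above.
int findThreshold(unsigned int* counts) {
    int lo = 0, hi = 255;
    while (lo < hi) {
        int threshold = (lo + hi) / 2;
        double darker  = variance(counts, 256, threshold, false);
        double lighter = variance(counts, 256, threshold, true);
        if (darker > lighter)
            hi = threshold;        // too much spread below: guess a darker threshold
        else
            lo = threshold + 1;    // otherwise guess a higher threshold
    }
    return lo;                     // pixels darker than this become black
}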
I would tackle this problem with another approach:
Compute the cumulative histogram of the greyscale values of the image. Use as threshold the pixel value at which this cumulative histogram reaches half of the total number of pixels in the image.
The algorithm would go as follows:
int bin[256];

foreach pixel in image
    bin[pixelvalue]++;
endfor // this computes the histogram of the image

int thresholdCount = ImageWidth * ImageHeight / 2;
int count = 0;
for int i = 0 to 255
    count = count + bin[i];
    if ( count > thresholdCount )
        threshold = i;
        break; // we are done
    endif
endfor
This algorithm does not compute the cumulative histogram itself but rather uses the image histogram to do what I said earlier.
If your algorithm works properly for white backgrounds but fails for black backgrounds, you simply need to detect when you have a black background and invert the values. If you assume the background value will be more common, you can simply count the number of 1s and 0s in the result; if the 0s are greater, invert the result.
Instead of using the mean of the min and max, you should use the median of all the points as the threshold. More generally, the k-th percentile (k = the percentage of points you want to come out black) is more appropriate.
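For example, with a 256-bin histogram (assuming bin[] is filled as in the earlier pseudocode), the k-th percentile threshold is just a scan of the cumulative counts:

// Return the grayscale level below which roughly k percent of the pixels lie;
// k = 50 gives the median suggested above.
int percentileThreshold(const int bin[256], int totalPixels, int k) {
    long long target = (long long)totalPixels * k / 100;
    long long running = 0;
    for (int level = 0; level < 256; ++level) {
        running += bin[level];
        if (running >= target)
            return level;   // first level where the cumulative count reaches k%
    }
    return 255;             // fallback; not reached if the counts are consistent
}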
Another solution is to cluster the data into two clusters.