For my task I had to write a bin-packing algorithm: there are N objects with different volumes, and they all have to be packed into boxes of volume V. Using decreasing sorting I successfully wrote the algorithm. But another task involves writing out all possible bin-packing variations for the number of boxes I previously found to be optimal. For example:
There are 4 objects with volumes: 4, 6, 3, 2. Volume of boxes is 10. Using the bin-packing algorithm I find that I will need 2 boxes.
All possible variations would be:
4,6 and 3,2
4,3 and 6,2
4,2 and 6,3
6 and 4,3,2
I'm having trouble coming up with an appropriate algorithm for this problem. Where should I start? Any help would be greatly appreciated.
The general algorithm for solving this problem goes like this:
Try to fit all objects in n bins by creating all possible ways of splitting them into n groups and testing whether any such configuration fits in the bins.
If not, increase n and try again.
Now, how do you find all possible split configurations?
Consider putting a tag on each object to decide into which bin it belongs. If you have 3 objects and 2 bins, then each object can get the tag 0 or 1 (for any of the two bins). This makes 2^3 = 8 combinations:
000
001
010
...
Now it also becomes clear how to create all combinations. You can use a counter, convert it into the base given by the number of bins (2 in this case), and use the digits as tags. There are other options, e.g. a recursive solution, which I prefer.
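For illustration, here is a minimal sketch of that counter-to-digits conversion (the function name is illustrative):

#include <vector>

// Decompose counter value c into `objects` base-`bins` digits, one tag per object.
// Counting c from 0 to bins^objects - 1 enumerates every possible assignment.
std::vector<int> tagsFromCounter(long long c, int objects, int bins) {
    std::vector<int> tags(objects);
    for (int i = 0; i < objects; ++i) {
        tags[i] = static_cast<int>(c % bins);  // tag of object i
        c /= bins;
    }
    return tags;
}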
Once you have such an assignment, you just need to check, for each bin, that the volume sum of the objects carrying its tag is not greater than the bin size.
Here is the recursive variant as runnable C++, building the list of all tag combinations:
#include <vector>

// Build the list of all tag assignments: each assignment gives one of
// bin_counter tags (0 .. bin_counter-1) to each of object_counter objects.
std::vector<std::vector<int>> combinations(int object_counter, int bin_counter) {
    if (object_counter == 0)
        return {{}};  // a list containing one empty assignment
    std::vector<std::vector<int>> result;
    // assignments for the remaining objects (independent of the tag chosen below)
    std::vector<std::vector<int>> sub_results = combinations(object_counter - 1, bin_counter);
    for (int i = 0; i < bin_counter; ++i) {  // tag for the first object
        for (const std::vector<int>& sub_result : sub_results) {
            std::vector<int> tags = {i};
            tags.insert(tags.end(), sub_result.begin(), sub_result.end());
            result.push_back(tags);
        }
    }
    return result;
}
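Putting the pieces together, here is a sketch of the capacity check described above, driving the combinations function on the example from the question (names are illustrative). Note that assignments differing only by swapping the two bins appear as separate tag sequences, so each of the question's four variations shows up twice:

#include <iostream>
#include <vector>

// True if, for every bin, the volume sum of the objects tagged with it
// does not exceed bin_size.
bool fits(const std::vector<int>& tags, const std::vector<int>& volumes,
          int bin_count, int bin_size) {
    std::vector<int> used(bin_count, 0);
    for (std::size_t obj = 0; obj < tags.size(); ++obj) {
        used[tags[obj]] += volumes[obj];
        if (used[tags[obj]] > bin_size)
            return false;
    }
    return true;
}

int main() {
    std::vector<int> volumes = {4, 6, 3, 2};                  // objects from the question
    for (const std::vector<int>& tags : combinations(4, 2))  // 2 bins of volume 10
        if (fits(tags, volumes, 2, 10)) {
            for (int t : tags) std::cout << t;                // e.g. "0011" = {4,6} and {3,2}
            std::cout << '\n';
        }
}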
When we specify the data for a set we have the ability to give it tuples of data. For example, we could write in our .dat file the following:
set A : 1 2 3 :=
  1   + - -
  2   - - +
  3   - + + ;
This would specify that we would have 4 tuples in our set: (1,1), (2,3), (3,2), (3,3)
But I guess that I am struggling to understand exactly why we would want to do this? Furthermore, suppose we instantiated a Set object in our code as:
model.Aset = RangeSet(4, dimen=2)
Would this then specify that our tuples would have the indices 1, 2, 3, and 4?
I am thinking that specifying tuples in our set could potentially be useful when working with some data in which it's important to have a bit of a "spatial" understanding of the problem. But I would be curious to hear from the community what the potential applications of specifying set data this way might be.
The most common place this appears is when you're trying to model edges between nodes in a network. Networks usually aren't completely dense (they don't have edges between every pair of nodes), so it's beneficial to represent just the edges that exist using a sparse set of tuples.
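For instance, a hypothetical 4-node network with arcs (1,2), (2,3), (2,4), and (3,4) could be written as a sparse two-dimensional set in the same .dat table style:

set EDGES : 1 2 3 4 :=
  1       - + - -
  2       - - + +
  3       - - - +
  4       - - - - ;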
Assume I have the following array of objects:
Object 0:
[0]=1.1344
[1]=2.18
...
[N]=1.86
-----------
Object 1 :
[0]=1.1231
[1]=2.16781
...
[N]=1.8765
-------------
Object 2 :
[0]=1.2311
[1]=2.14781
...
[N]=1.5465
--------
Object 17:
[0]=1.31
[1]=2.55
...
[N]=0.75
How can I compare those objects?
You can see that object 0 and object 1 are very similar, but object 17 is not like any of them.
I would like an algorithm that will give me all the similar objects in my array.
You tagged this question with algorithm (and I am not an expert in C++), so let's express the idea in C++-like code.
First, set a threshold: two values whose difference is below that threshold count as similar. The second step is to loop over all pairs of elements and check for similarity.
Let A be the array with n objects, and m the number of fields in each object.
#include <cmath>
#include <iostream>

const double threshold = 0.1;
for (int i = 0; i < n; ++i) {
    for (int j = i + 1; j < n; ++j) {        // every unordered pair (i, j)
        bool similar = true;
        for (int k = 0; k < m; ++k) {
            if (std::abs(A[i][k] - A[j][k]) > threshold) {
                similar = false;  // difference above the threshold: not similar
                break;            // no need to continue the checks
            }
        }
        if (similar)
            std::cout << "elements " << i << " and " << j << " are similar\n";
    }
}
Time complexity is O(m * n^2).
Notice that you can reuse the same idea to sort the objects array: declare a compare function based on the maximum difference between fields and sort accordingly.
Hope that helps!
Your problem essentially boils down to nearest neighbor search, which is a well-researched problem in data mining.
There are different approaches to this problem.
I would suggest deciding first how many similar elements you want, OR setting a threshold for the similarity. Then you have to iterate through all the vectors and compute a distance function between the query vector and each vector in the database.
I would suggest using Euclidean distance in your case, since you have real-valued numeric data.
You can read more about the topic of nearest neighbor search and Euclidean distance here and here. Good luck!
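As a minimal sketch (assuming a plain linear scan is acceptable; the names are illustrative), the distance computation and query could look like this:

#include <vector>

// Squared Euclidean distance between two equally long feature vectors
// (the square root is unnecessary when only comparing distances).
double dist2(const std::vector<double>& a, const std::vector<double>& b) {
    double s = 0;
    for (std::size_t k = 0; k < a.size(); ++k)
        s += (a[k] - b[k]) * (a[k] - b[k]);
    return s;
}

// Index of the vector in `db` closest to `query` (db must be non-empty).
std::size_t nearest(const std::vector<std::vector<double>>& db,
                    const std::vector<double>& query) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < db.size(); ++i)
        if (dist2(db[i], query) < dist2(db[best], query))
            best = i;
    return best;
}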
What you need is a classifier. For your problem there are two suitable algorithms, depending on what you want.
If you need to find which object is most similar to a chosen object m, you can use a nearest neighbor algorithm; if you need to find sets of similar objects, you can use the k-means algorithm to find k such sets.
I need to sort a vector of tuples
[
(a_11, ..., a_1n),
... ,
(a_m1, ..., a_mn)
]
based on a list of attributes and their comparison operators < or >.
For example: sort first by a_2 with the > operator and by a_57 with the < operator.
Question: I am looking for a data structure to do this efficiently under the assumption that sorting happens much more often than updates to the vector.
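For comparison, the baseline without any extra data structure is a single std::sort with a composite comparator. A minimal sketch for the example above, assuming 0-based indices so that a_2 is element 1 and a_57 is element 56:

#include <algorithm>
#include <vector>

using Row = std::vector<double>;

// Sort first by a_2 with > (descending), then by a_57 with < (ascending).
void sortRows(std::vector<Row>& rows) {
    std::sort(rows.begin(), rows.end(),
              [](const Row& x, const Row& y) {
                  if (x[1] != y[1]) return x[1] > y[1];  // > operator
                  return x[56] < y[56];                  // < operator
              });
}

Any precomputed ordering would have to beat this single O(m log m) resort to be worthwhile.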
My current idea is to store the sorting order for each attribute by adding pointers similar to a linked list for each attribute:
For example, this vector:
0: (1, 7, 4)
1: (2, 5, 6)
2: (3, 4, 5)
Would get the data structure
0: (1 next:1 prev:-, 7 next:- prev:1, 4 next:2 prev:-)
1: (2 next:2 prev:0, 5 next:0 prev:2, 6 next:- prev:2)
2: (3 next:- prev:1, 4 next:1 prev:-, 5 next:1 prev:0)
Edit:
At any given time I need only one sorting order. After I get a user request for a different sorting order I need to recompute as quickly as possible.
The incremental idea is very good, but I need to estimate how much time this will take, and that is much easier if I have an idea of how it should be done.
Once I am finished I need random access to groups of 100 elements, i.e. the first 100, the second 100, or elements 5100-5199.
I would use boost::MultiIndex for this. – drescherjm
I am writing code to do some template matching using cv::matchTemplate, but I have run into some problems with the 2-dimensional vector of vectors (vov) I created, which I have called vvABC. At the moment my vov has 10 elements, which can change based on the values I pass while running the code.
My problem is moving from one column in my vov to the next so I can calculate the sizes. From my understanding of how a vov works, if I have my elements stored in it as:
C_A C_B
0 0
1 1
2 2
3
4
5
6
To calculate the size of the first column, I should simply do something like vvABC[0].size() (which would give 3 in this case), and vvABC[1].size() for the second column (which would give 7). The problem I am now faced with is that both of them give 3, which is obviously wrong.
Can someone please help me out on how I can get the correct size of the next column?
I stored my detections in my vvABC; now I want to match them one at a time.
It seems like you made a mistake here:
for (uint iCaTemplate = iCa + 1; iCaTemplate < vvABC[iCa].size(); ++iCaTemplate) {
iCa is an index into the 'first level' of the vector (of size 2 in your example above), i.e. the columns, but you use it to bound the iteration over the elements of the 'second level' of the vector, i.e. the rows.
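A hypothetical sketch of the intended iteration (matchPair is an illustrative placeholder for whatever comparison you run, e.g. via cv::matchTemplate):

// Compare each detection in one column with each detection in the next
// column, assuming vvABC[c][r] holds the r-th detection of column c.
for (std::size_t c = 0; c + 1 < vvABC.size(); ++c)
    for (std::size_t i = 0; i < vvABC[c].size(); ++i)          // rows of column c
        for (std::size_t j = 0; j < vvABC[c + 1].size(); ++j)  // rows of column c+1
            matchPair(vvABC[c][i], vvABC[c + 1][j]);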
Thanks a lot guys, esp. JGab, after several debug outputs, I finally found that my vector of vectors wasn't being filled up the way I thought it was...thanks once more and my apologies for my belated response.
I'm a real speed freak when it comes to algorithms, and in the plugins I made for a game the speed is... a bit... not satisfying. Especially while driving around with a car: if you do not follow your path, the path has to be recalculated, and that takes some time. So the in-game GPS stacks up many "wrong way" signals (and stacking up the signals means more calculations afterward, for each wrong-way move), because I want a fast, live GPS system which updates constantly.
I changed the old algorithm (a simple Dijkstra implementation) to Boost's Dijkstra to calculate a path from node A to node B
(the total node list is around ~15k nodes with ~40k connections; for curious people, here is the map: http://gz.pxf24.pl/downloads/prv2.jpg (12 MB), the red lines are the edges between the nodes),
but it didn't really increase the speed (at least not noticeably, maybe 50 ms).
The information that is stored in the Node array is:
The ID of the Node,
The position of the node,
All the connections to the node (and which way it is connected to the other nodes, TO, FROM, or BOTH)
Distance to the connected nodes.
I'm curious if anybody knows some faster alternatives in C/C++?
Any suggestions (+ code examples?) are appreciated!
If anyone is interested in the project, here it is (source+binaries):
https://gpb.googlecode.com/files/RouteConnector_177.zip
In this video you can see what the gps-system is like:
http://www.youtu.be/xsIhArstyU8
As you can see, the red route updates slowly (well, for us gamers it is slow).
(By the way: the gaps between the red lines were fixed a long time ago :p)
Since this is a GPS, it must have a fixed destination. Instead of computing the path from your current node to the destination each time you change the current node, you can instead find the shortest paths from your destination to all the nodes: just run Dijkstra once starting from the destination. This will take about as long as an update takes right now.
Then, in each node, keep prev = the node previous to this on the shortest path to this node (from your destination). You update this as you compute the shortest paths. Or you can use a prev[] array outside of the nodes - basically whatever method you are using to reconstruct the path now should still work.
When moving your car, your path is given by currentNode.prev -> currentNode.prev.prev -> ....
This will solve the update lag and keep your path optimal, but you'll still have a slight lag when entering your destination.
You should consider this approach even if you plan on using A* or other heuristics that do not always give the optimal answer, at least if you still get lag with those approaches.
For example, if you have this graph:
1 - 2 cost 3
1 - 3 cost 4
2 - 4 cost 1
3 - 4 cost 2
3 - 5 cost 5
The prev array would look like this (computed when you compute the distances d[]):
node:  1 2 3 4 5
prev:  1 1 1 2 3
Meaning:
shortest path from 1 to:
2: prev[2], 2 = 1, 2
3: prev[3], 3 = 1, 3
4: prev[prev[4]], prev[4], 4 = 1, 2, 4 (fill in right to left)
5: prev[prev[5]], prev[5], 5 = 1, 3, 5
etc.
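A minimal sketch of this idea, assuming an adjacency-list graph with (neighbor, cost) edges: one Dijkstra run from the destination fills prev[], after which the route from the current node is read off by following prev:

#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

using Edge = std::pair<int, int>;  // (neighbor, cost)

// One Dijkstra run starting at the destination. On an undirected graph,
// prev[v] is then the next node on the shortest route from v to dest,
// so the car's route is v, prev[v], prev[prev[v]], ..., dest.
std::vector<int> shortestPathTree(const std::vector<std::vector<Edge>>& adj, int dest) {
    const long long INF = std::numeric_limits<long long>::max();
    std::vector<long long> dist(adj.size(), INF);
    std::vector<int> prev(adj.size(), -1);
    using State = std::pair<long long, int>;  // (distance so far, node)
    std::priority_queue<State, std::vector<State>, std::greater<State>> pq;
    dist[dest] = 0;
    pq.push({0, dest});
    while (!pq.empty()) {
        long long d = pq.top().first;
        int u = pq.top().second;
        pq.pop();
        if (d > dist[u]) continue;  // stale queue entry, node already settled
        for (const Edge& e : adj[u]) {
            int v = e.first;
            long long w = e.second;
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                prev[v] = u;  // the step from v back toward dest goes through u
                pq.push({dist[v], v});
            }
        }
    }
    return prev;
}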
To make the start instant, you can cheat in the following way.
Have a fairly small set of "major thoroughfare nodes". For each node define its "neighborhood" (all nodes within a certain distance). Store the shortest routes from every major thoroughfare node to every other one. Store the shortest routes to/from each node to its major thoroughfares.
If the two nodes are in the same neighborhood, you can calculate the best answer on the fly. Otherwise, consider only routes of the form "here, to a major thoroughfare node near me, to a major thoroughfare node near it, to it". Since you've already precalculated those, and have a limited number of combinations, you should very quickly be able to calculate a route on the fly.
Then the challenge becomes picking the set of major thoroughfare nodes. It should be a fairly small set of nodes that most good routes go through, so pick a node every so often along major streets. A list of a couple of hundred should be more than good enough for your map.
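A minimal sketch of the resulting lookup, assuming the precomputed tables already exist (all names are illustrative):

#include <algorithm>
#include <limits>
#include <utility>
#include <vector>

// toHub[x] lists (hub, cost) pairs for the major thoroughfare nodes near x;
// hubCost[h1][h2] is the precomputed hub-to-hub route cost.
long long bestViaHubs(int src, int dst,
                      const std::vector<std::vector<std::pair<int, long long>>>& toHub,
                      const std::vector<std::vector<long long>>& hubCost) {
    long long best = std::numeric_limits<long long>::max();
    for (const auto& p1 : toHub[src])      // hub near the start
        for (const auto& p2 : toHub[dst])  // hub near the goal
            best = std::min(best,
                            p1.second + hubCost[p1.first][p2.first] + p2.second);
    return best;
}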