How do I find the number of ways to fill a building of n floors with apartments, where a large apartment takes 2 floors and a small apartment takes only 1 floor?
Start from the ground.
You can build a 1-floor building in only one way, so F(1) = 1.
You can build a 2-floor building in two ways, 1+1 or 2, so F(2) = 2.
Now look for the general pattern:
You can build an n-floor building by putting a small apartment on top of an (n-1)-floor building, or a large apartment on top of an (n-2)-floor building, so F(n) = F(n-1) + F(n-2).
Now implement this logic in code.
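That recurrence translates directly into code. A minimal iterative sketch in Python (the function name is my own; iteration avoids the exponential blow-up of naive recursion):

```python
def count_buildings(n):
    # F(1) = 1, F(2) = 2, F(n) = F(n-1) + F(n-2)
    if n == 1:
        return 1
    a, b = 1, 2  # F(1), F(2)
    for _ in range(n - 2):
        a, b = b, a + b
    return b
```

For example, count_buildings(4) gives 5: 1+1+1+1, 1+1+2, 1+2+1, 2+1+1, and 2+2.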
A construction company has 6 projects; project $i$ needs $d_i$ workers. The company has no workers at the beginning of project 1.
Each batch of new workers must take a safety course that costs a fixed 300, plus 50 per worker.
If no new workers are hired, there is no course fee.
Firing a worker costs nothing, and workers can't be rehired.
Given that the salary of a worker is 100 per project, formulate a linear programming problem that minimizes the total worker costs.
What I tried:
Let $x_i$ be the number of new workers for project $i$.
Let $y_i$ be the number of old workers remaining from previous projects at project $i$ (all the workers hired so far minus all the workers fired).
Let $z_i$ be an indicator such that $z_i =0 \iff x_i>0$
The objective I'm trying to minimize is:
$\min \sum_{i=1}^6 \left(150x_i + 300(1-z_i) + 100y_i\right)$
s.t:
\begin{align}
x_i,y_i,z_i &\ge 0 \\
z_i &\ge 1-x_i \\
y_i + x_i &\ge d_i \\
y_i &\ge y_{i-1} + x_i
\end{align}
Something feels off to me, mainly because I tried to solve this with MATLAB and it failed.
What did I do wrong? How can I solve this question?
If I see this correctly, you have two small mistakes in your constraints.
The first appears when you use z_i >= 1-x_i. This allows z_i to take the value 1 all the time, which means you never pay the extra cost of 300. You need to upper-bound z_i so that z_i cannot be 1 when x_i > 0. For this constraint you need something called big M. For sufficiently large M you would then use z_i <= 1 - x_i/M. This way, when x_i = 0 you can have z_i = 1; otherwise the right-hand side is smaller than 1 and, due to integrality, z_i has to be zero. Note that you usually want to choose M as tight as possible, so in your case d_i might be a good choice.
The second small mistake lies in y_i >= y_{i-1} + x_i. As written, y_i can increase over y_{i-1} without any x_i being set. To force x to account for the increase, you need to flip the inequality. Additionally, by the way you defined y_i, this inequality should refer to x_{i-1}. Thus you should end up with y_i <= y_{i-1} + x_{i-1}. You also need to take care of the corner case (i.e. y_1 = 0).
I think with these two changes it should work. Let me know whether it helped you. And if it still doesn't work I might have missed something.
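Putting both fixes together, the corrected program would look roughly like this (a sketch based on the two corrections above; $M_i$ is the big-M constant, e.g. $M_i = d_i$, and $z_i$ must be kept integral):

$\min \sum_{i=1}^6 \left(150x_i + 300(1-z_i) + 100y_i\right)$

s.t:
\begin{align}
z_i &\le 1 - \frac{x_i}{M_i} \\
y_i + x_i &\ge d_i \\
y_i &\le y_{i-1} + x_{i-1} \\
y_1 &= 0 \\
x_i, y_i &\ge 0, \quad z_i \in \{0, 1\}
\end{align}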
For my task I had to write a bin-packing algorithm for N objects with different volumes, all of which had to be packed into boxes of volume V. Using a decreasing-sort heuristic I successfully wrote the algorithm. But another task involves writing out all possible variations of the packing using the number of boxes I previously found to be most effective. So for example:
There are 4 objects with volumes: 4, 6, 3, 2. Volume of boxes is 10. Using the bin-packing algorithm I find that I will need 2 boxes.
All possible variations would be:
4,6 and 3,2
4,3 and 6,2
4,2 and 6,3
6 and 4,3,2
I'm having trouble coming up with an appropriate algorithm for this problem. Where should I start? Any help would be greatly appreciated.
The general algorithm for solving this problem goes like this:
Try to fit all objects into n bins by creating all possible ways to split them into n groups and testing whether any such configuration fits in the bins.
If not, increase n and try again.
Now, how do you find all possible split configurations?
Consider putting a tag on each object to decide into which bin it belongs. If you have 3 objects and 2 bins, then each object can get the tag 0 or 1 (for any of the two bins). This makes 2^3 = 8 combinations:
000
001
010
...
Now it also becomes clear how to create all combinations. You can use a counter, convert it into the base given by the number of bins (2 in this case), and use the digits as tags. There are other options, e.g. a recursive solution, which is what I prefer.
When you have a candidate solution, you just need to check, for each bin, that the volume sum of the objects with that tag is not greater than the bin size.
Here is some pseudocode for creating a list of all the combinations recursively:
combinations(object_counter, bin_counter) {
    if (object_counter == 0) {
        return [[]] // a list containing one empty list
    }
    result = [] // empty list
    sub_results = combinations(object_counter-1, bin_counter) // compute once, reuse below
    for i in 0 .. bin_counter-1 {
        for sub_result in sub_results {
            result.append([i] + sub_result)
        }
    }
    return result
}
Hello lovely people,
my program is written as a scalable network framework. It consists of several components that run as individual programs. Additional instances of the individual components can be added dynamically.
The components initially register with IP and port at a central unit. This manager periodically tells the components where the other components can be found. Not only that: each component is assigned a weight / probability / chance of how often it should be addressed compared to the others.
As an example: 1Master, Component A, B, C
All components register at the master; the master sends to A: [B(127.0.0.1:8080, 3); C(127.0.0.1:8081, 5)]
A runs in a loop and calculates the communication partner over and over again from this data.
So, A should request B and C in a 3 to 5 ratio. How many requests each one ultimately gets depends on the running performance. This is about the ratio.
Of course, the numbers 3 and 5 come periodically and change dynamically. And it's not about 3 components but potentially hundreds.
My idea was:
Add 3 and 5. Calculate a random number between 1 and 8. If it is greater than 3, take C else B ....
But I think that's not a clean solution. Probably computationally intensive in every loop. In addition, management and data structures are expensive. In addition, I think that a random number from the STL is not balanced enough.
Can someone give me a hint on how to implement this cleanly, or does someone have experience with this or an idea?
Thank you in every case;)
I have an idea for you:
Why not try it with cumulative probabilities?
1.) Generate a uniformly distributed random number.
2.) Iterate through your list until the cumulative probability of the visited element is greater than the random number.
Look at this (Java code, but the same idea works in C++; your hint that you use C++ was very helpful):
double p = Math.random();
double cumulativeProbability = 0.0;
for (Item item : items) {
    cumulativeProbability += item.probability();
    if (p <= cumulativeProbability) {
        return item;
    }
}
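The same idea works with the raw integer weights (3 and 5), so nothing has to be normalized first. A sketch in Python with names of my own; in C++ the standard library already provides exactly this as std::discrete_distribution, so no hand-rolled loop is needed there:

```python
import random

def pick_weighted(items, weights, rng=random):
    # Cumulative-weight selection: draw r uniformly in [0, total)
    # and walk the prefix sums until one exceeds r.
    total = sum(weights)
    r = rng.random() * total
    cumulative = 0.0
    for item, weight in zip(items, weights):
        cumulative += weight
        if r < cumulative:
            return item
    return items[-1]  # guard against floating-point rounding at the edge

# Example: request B and C in a 3-to-5 ratio.
partner = pick_weighted(["B", "C"], [3, 5])
```

Over many draws, B and C come back in roughly a 3:5 ratio, and updating the weights is just replacing the list the next time the master sends new numbers.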
To better learn Haskell, I'm trying to basically build a variant of the well-known Indie Game Super Hexagon.
I am however having a problem with the level generation:
Right now, level generation is done by having a list containing all the different "gauntlets" (patterns of walls, implemented as [[Bool]]: a variable number of rows of True/False wall flags); these are the building blocks of the level. Using the getStdGen random number generator, I'm able to create an infinite list of gauntlets.
However, at any single time, we only want to render part of these. To keep track of whether a gauntlet has already been passed, a second element is introduced, containing the sum of the number of rows of all gauntlets before this one (e.g. length (gauntletData currentRandomNumber) + snd $ randomGauntletList !! (n-1), where randomGauntletList has type [(Gauntlet, Integer)]).
The problem is in how this list is used in the rendering function and the updating function: take 30 $ dropWhile (\(_,distance) -> currentDistance > distance) randomGauntletList
The dropWhile causes the program to take longer and longer to find the current 'starting point' of the list, slowing the game down after about 30 seconds.
I'm out of my depth: Is there a way to solve this problem?
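One way out (a sketch of the idea, not the original code): keep the not-yet-passed suffix of the list as part of the game state, so each frame drops only the newly passed gauntlets instead of re-scanning everything passed since the game started. Illustrated in Python with my own names; in Haskell you would thread the remaining list through your world state the same way:

```python
from collections import deque
from itertools import islice

def advance(pending, current_distance, window=30):
    # pending holds (gauntlet, distance) pairs in increasing distance
    # order, already trimmed by previous frames. Popping only the newly
    # passed gauntlets keeps each frame O(newly passed), instead of
    # O(everything passed so far) like dropWhile on the full list.
    while pending and pending[0][1] < current_distance:
        pending.popleft()
    return list(islice(pending, window))
```

Because pending is mutated in place, calling advance again with the same distance does no work beyond slicing out the visible window.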
I'm a real speed freak when it comes to algorithms, including in the plugins I made for a game.
The speed is a bit unsatisfying. Especially while driving around in a car: if you don't follow your path, the path has to be recalculated, and that takes some time. So the in-game GPS stacks up many "wrong way" signals (and each stacked signal means more calculations afterward, one per wrong-way move), because I want a fast, live GPS system that updates constantly.
I changed the old algorithm (a simple Dijkstra implementation) to Boost's Dijkstra to calculate a path from node A to node B
(the total node list is around ~15k nodes with ~40k connections; for curious people, here is the map: http://gz.pxf24.pl/downloads/prv2.jpg (12 MB), where the red lines are the edges between the nodes),
but it didn't really increase the speed (at least not noticeably, maybe 50 ms).
The information that is stored in the Node array is:
The ID of the Node,
The position of the node,
All the connections to the node (and which way it is connected to the other nodes, TO, FROM, or BOTH)
Distance to the connected nodes.
I'm curious if anybody knows some faster alternatives in C/C++?
Any suggestions (+ code examples?) are appreciated!
If anyone is interested in the project, here it is (source+binaries):
https://gpb.googlecode.com/files/RouteConnector_177.zip
In this video you can see what the gps-system is like:
http://www.youtu.be/xsIhArstyU8
As you can see, the red route updates slowly (well, for us gamers it is slow).
(By the way: the gaps between the red lines were fixed a long time ago :p)
Since this is a GPS, it must have a fixed destination. Instead of computing the path from your current node to the destination each time you change the current node, you can instead find the shortest paths from your destination to all the nodes: just run Dijkstra once starting from the destination. This will take about as long as an update takes right now.
Then, in each node, keep prev = the node previous to this on the shortest path to this node (from your destination). You update this as you compute the shortest paths. Or you can use a prev[] array outside of the nodes - basically whatever method you are using to reconstruct the path now should still work.
When moving your car, your path is given by currentNode.prev -> currentNode.prev.prev -> ....
This will solve the update lag and keep your path optimal, but you'll still have a slight lag when entering your destination.
You should consider this approach even if you plan on using A* or other heuristics that do not always give the optimal answer, at least if you still get lag with those approaches.
For example, if you have this graph:
1 - 2 cost 3
1 - 3 cost 4
2 - 4 cost 1
3 - 4 cost 2
3 - 5 cost 5
The prev array would look like this (computed when you compute the distances d[]):
1 2 3 4 5
prev = 1 1 1 2 3
Meaning:
shortest path FROM TO
1 2 = prev[2], 2 = 1, 2
1 3 = prev[3], 3 = 1, 3
1 4 = prev[ prev[4] ], prev[4], 4 = 1, 2, 4 (fill in right to left)
1 5 = prev[ prev[5] ], prev[5], 5 = 1, 3, 5
etc.
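The run-once Dijkstra with prev pointers can be sketched like this (Python for brevity; the names are mine, and in the game you would call it with your destination as the source so every node's prev chain leads back toward it):

```python
import heapq

def dijkstra_with_prev(adj, source):
    # adj maps node -> list of (neighbor, cost); returns (dist, prev).
    dist = {source: 0}
    prev = {}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u  # u precedes v on the shortest path
                heapq.heappush(pq, (nd, v))
    return dist, prev

def path_to(prev, source, node):
    # Walk prev pointers back to the source, then reverse.
    path = [node]
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

On the example graph above, running this from node 1 gives prev = {2: 1, 3: 1, 4: 2, 5: 3}, and path_to(prev, 1, 4) reconstructs 1, 2, 4.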
To make the start instant, you can cheat in the following way.
Have a fairly small set of "major thoroughfare nodes". For each node define its "neighborhood" (all nodes within a certain distance). Store the shortest routes from every major thoroughfare node to every other one. Store the shortest routes to/from each node to its major thoroughfares.
If the two nodes are in the same neighborhood, you can calculate the best answer on the fly. Otherwise, consider only routes of the form "here, to a major thoroughfare node near me, to a major thoroughfare node near it, to it". Since you've already precalculated those, and have a limited number of combinations, you should be able to calculate a route on the fly very quickly.
Then the challenge becomes picking a set of major thoroughfare nodes. It should be a fairly small set of nodes that most good routes should go through - so pick a node every so often along major streets. A list of a couple of hundred should be more than good enough for your map.