I need some clarification on my homework.
http://www.cs.bilkent.edu.tr/~gunduz/teaching/cs201/cs201_homework3.pdf
To see the handout please go to page 25 of http://www.scribd.com/nanny24/d/36657378-Data-Structures-and-Algorithm-Analysis-in-C-Weiss.
Below is what I need to do, but I don't understand what it means. Does it mean, for algorithm 1, to compare the actual running time against (n^3 + 3*n^2 + 2*n)/6, where n is the array size?
I don't think so, but I couldn't infer anything else. Can you please explain to me what this means?
2- Plot the expected growth rates obtained from the theoretical analysis (as given for each solution) by
using the same N values that you used in obtaining your results. Compare the expected growth rates
and the obtained results, and discuss your observations in a paragraph.
EDIT 2:
Algorithm 1:
n       actual running time (ms)    (n^3 + 3*n^2 + 2*n)/6
100     1                           171700
1000    851                         167167000

(I don't know whether the unit of the theoretical column is milliseconds or not.)
Given this huge difference between the actual and the theoretical running time, what the instructor means may be different from the theoretical time complexity function, which for algorithm 1 is (n^3 + 3*n^2 + 2*n)/6. This is the function: http://www.diigo.com/item/image/2lxmz/m7y3?size=o
Yes, by "expected growth rate" your instructor means the running time predicted by the theoretical time complexity function after you plug in the value of n.
While this usage is standard, I would still check with the instructor if I were you.
The theoretical number is probably the number of operations or comparisons or something similar.
I would guess that "growth rate" means how fast the value grows. When n goes from 100 to 1000, the theoretical value grows by a factor of 167167000/171700 ≈ 973.6, compared to the real-world measured factor of 851/1 = 851.
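As a sanity check on the numbers above, the theoretical values and the growth factor can be computed directly; ops() here is just the complexity function from the handout, under the usual reading that it counts operations rather than milliseconds:

```cpp
#include <cstdint>

// theoretical cost of algorithm 1: (n^3 + 3n^2 + 2n)/6 -- this counts
// operations (or comparisons), not milliseconds
int64_t ops(int64_t n) {
    return (n * n * n + 3 * n * n + 2 * n) / 6;
}

// ops(100)  == 171700
// ops(1000) == 167167000
// growth factor: 167167000.0 / 171700 ~= 973.6, vs the measured 851/1 = 851
```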
I am trying to improve the speed of my code, written in C++. According to profilers, cbrt()/cbrtf32x() is the function I spend the most time in; more specifically:
double test_func(const double &test_val) {
    double cbrt_test_val = cbrt(test_val);
    return 1 - 1e-10 * cbrt_test_val;
}
According to the data, I spend more than three times as long in cbrt()/cbrtf32x() as in the next most expensive function. So I was wondering how to speed this function up. The input values range from 1e18 to 1e30.
There is little that can be done if you are doing the cubic roots one at a time, and you want the exact result.
As is, I would be surprised if you can improve the cubic root calculation more than 10-20% - if that - while getting the same result numerically. (Note: I got that 10%-20% number out of thin air; it's an opinion, not a scientific number at all.)
If you can batch up the calculations, you might be able to SIMD the operation, or multi-thread them, or if you know more about the distribution of the data (or can find out more,) you might be able to sort them and - I don't know - maybe calculate an incremental cubic root or something.
If you can get away with an approximation, then there are more things that you can do. For example, you are calculating the function f(x) = 1 - cbrt(x) / 1e10, which is the same as 1 - cbrt(x / 1e30) which is a strictly decreasing function that maps the domain [1e18..1e30] to the range [0..0.9999]. With y = x / 1e30 it becomes f(y) = 1 - cbrt(y) and now y is in the range [1e-12..1] and it can be pre-calculated and approximated using a look-up table.
Depending on the number of times you need a cubic root, how much accuracy loss you can get away with (which determines the size of the table,) and whether you can sort or bucket your input (to improve the CPU cache utilization for your LUT look-ups) you might get a nice speed boost out of this.
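A minimal sketch of the look-up-table idea, with hypothetical sizes and names; the table is indexed in log space because y spans twelve decades, and in a real implementation the std::log call in the lookup would itself need to be replaced (e.g. by exponent extraction via frexp) before this wins any time:

```cpp
#include <cmath>
#include <vector>

// LUT for cbrt(y) over y in [1e-12, 1], log-spaced entries;
// kN controls the accuracy/size trade-off (hypothetical choice)
struct CbrtLut {
    static const int kN = 4096;
    std::vector<double> table;
    double lmin, lmax;
    CbrtLut() : table(kN + 1), lmin(std::log(1e-12)), lmax(0.0) {
        for (int i = 0; i <= kN; ++i)
            table[i] = std::cbrt(std::exp(lmin + (lmax - lmin) * i / kN));
    }
    // linear interpolation between the two nearest table entries
    double operator()(double y) const {
        double t = (std::log(y) - lmin) / (lmax - lmin) * kN;
        int i = t < 0 ? 0 : (t >= kN ? kN - 1 : (int)t);
        double f = t - i;
        return table[i] * (1 - f) + table[i + 1] * f;
    }
};
```

With 4096 entries the relative error stays around 1e-6; a smaller table trades accuracy for less cache pressure.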
Short version of the question: I get overflow or timeouts with the current settings when calculating with large int64_t and double values. Is there any way to avoid this?
Test case:
If the only demand is 80,000,000,000, it is solved with the correct result. But if it is 800,000,000,000, an incorrect 0 is returned.
If the input has two or more demands (meaning more inequalities need to be calculated), smaller values also cause incorrect results; e.g., three equal demands of 20,000,000,000 will trigger the problem.
I'm using the COIN-OR CLP linear programming solver to solve some network flow problems. I use int64_t to represent link bandwidth, but CLP uses double most of the time and cannot easily be switched to other types.
When the values of the variables are not that large (typically smaller than 10,000,000,000) and the constraints (inequalities) are relatively few, it gives the solution I want. But if either of those factors increases, the tool stops and returns an all-zero solution. I think the calculation exceeds some limit, so the program breaks at some point (it uses the LP simplex method).
The inequality is some kind of:
totalFlowSum <= usePercentage * demand
I changed it to
totalFlowSum - usePercentage * demand <= 0
Since totalFlowSum and demand are very large int64_t values and usePercentage is a double, if there are too many constraints like this (several or more), or if the demand is larger than 100,000,000,000, the returned solution will be wrong.
Is there any way to correct this, like increase the break threshold or avoid this level of calculation magnitude?
Some loss of accuracy is acceptable. One possible solution is to make the inputs 1,000 times smaller and the outputs 1,000 times larger, but this is kind of naïve and may require too many code modifications in the program.
Update:
I have changed the formulation to
totalFlowSum / demand - usePercentage <= 0
but the problem still exists.
Update 2:
I divided usePercentage by 1000, changing its coefficient from 1 to 0.001, and it worked. But if I also divide totalFlowSum/demand by 1000 at the same time, there is still no result. I don't know why...
I changed the rhs of the inequalities from 0 to 0.1, and the problem was solved! Since the inputs are very large, a 0.1 offset won't affect the solution at all.
I think the reason is that the previous coefficients were badly scaled, so the solver failed to find an exact answer.
I have an LP problem.
It is similar to an assignment problem for workers.
I have three assignments with constraint scores (constraint 1 is more important than constraint 2):
          Constraint1   Constraint2
assign1   590           5
assign2   585           580
assign3   585           336
My current greedy solver compares assignments by the first constraint, and the best one becomes the solution. The solver compares constraint 2 if and only if it finds two assignments with the same score for the previous constraint, and so on.
For the given example, in the first round assign2 and assign3 are chosen because they share the lowest constraint score. After the second round the solver chooses assign3. So I need a cost function that represents this behavior.
So I expect the
assign1_cost > assign2_cost > assign3_cost.
Is it possible to do this?
I believe I can apply some weighted math function or something like that.
There are two simple ways to do this, that I know of:
Assume that you can put an upper bound on each objective function (besides the first). Then you rewrite your objective as follows: (objective_1 * max_2 + objective_2) * max_3 + objective_3 and so on, and minimize said objective. For your example, say you know that objective_2 will be less than 1000. You can then solve, minimizing objective_1*1000 + objective_2. This may fail with some solvers if your upper bounds are too large.
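For the three assignments in the question, with the assumed bound of 1000 on constraint 2, the combined cost works out as follows (a sketch; the struct and names are made up for the example):

```cpp
// constraint scores (c1, c2) as given in the question
struct Assignment { long long c1, c2; };

// lexicographic cost: c1 dominates as long as every c2 stays below kMax2
const long long kMax2 = 1000;
long long cost(const Assignment& a) { return a.c1 * kMax2 + a.c2; }

// cost({590, 5})   == 590005
// cost({585, 580}) == 585580
// cost({585, 336}) == 585336
// so minimizing this cost picks assign3, matching the greedy solver
```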
This method requires n solver iterations, where n is the number of objectives, but is general - it doesn't require you to know upper bounds beforehand. Solve, minimizing objective_1 and ignoring the other objective functions. Add a constraint that objective_1 == <value you just got>. Solve, minimizing objective_2 and ignoring the remaining objective functions. Repeat until you've gone through every objective function.
The motivation is kind of hard to explain, so I'll provide an example: assume you receive a high number of samples every second and your task is to classify them.
Let's say you have two classifiers: heuristicFast and heuristicSlow. For every sample you run heuristicFast(), and if the result is close to undecided (say, in the [0.45, 0.55] range, where 0 means class 1 and 1 means class 2), you run the more precise heuristicSlow().
Now the problem is that this is a real-time system, so I want to be sure I don't overload the CPUs (I'm using threading), even when a high percentage of calls to heuristicFast() return results in the [0.45, 0.55] range.
What is the best way to accomplish this?
My best idea is to keep an entrycount for heuristicSlow() and then not enter it if entrycount > number_of_cores / 2?
std::atomic<int> entrycount(0);
//...
bool slow_slot = false;
if (classificationNotClear(result_heuristic_fast))
{
    // fetch_add makes the check-and-increment one atomic step; a separate
    // compare followed by ++ would let several threads race past the limit
    if (entrycount.fetch_add(1) < kMaxConcurrentCalls)
        slow_slot = true;
    else
        entrycount--; // over the limit: give the slot back
}
if (slow_slot)
{
    final_result = heuristicSlow();
    entrycount--;
}
else
    final_result = result_heuristic_fast;
//...
Since you are building a real-time system, you have crucial information available: The maximum allowed running times for your classification and for both heuristics.
You could simply compute the leftover time after a fully fast pass (total time minus sample count times fast-heuristic time) and determine how many applications of the slow heuristic fit into that time. Write this number into a counter and decrement it for each slow call.
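With assumed per-call costs, the leftover-time budget is a one-liner (all the numbers and names here are hypothetical):

```cpp
// how many slow-heuristic calls fit into the slack that remains after
// running the fast heuristic on every sample in the batch
int slowBudget(double deadlineMs, int samples, double fastMs, double slowMs) {
    double slack = deadlineMs - samples * fastMs;
    return slack > 0 ? (int)(slack / slowMs) : 0;
}

// e.g. a 100 ms budget, 1000 samples at 0.05 ms each, 2 ms per slow call:
// slack = 100 - 50 = 50 ms, so 25 slow calls fit
```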
Even fancier solution:
Sort your fast-heuristic results by uncertainty (i.e. by abs(result - 0.5)) and run the slow heuristic on as many of the most uncertain cases as you have time left for.
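Sorting indices by distance from 0.5 and spending the budget from the top can be sketched like this (the function name is made up; the caller would run heuristicSlow() on idx[0..budget) and keep the fast result for the rest):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// indices of the fast-heuristic results, most uncertain (closest to 0.5) first
std::vector<int> byUncertainty(const std::vector<double>& fast) {
    std::vector<int> idx(fast.size());
    for (size_t i = 0; i < idx.size(); ++i) idx[i] = (int)i;
    std::sort(idx.begin(), idx.end(), [&](int a, int b) {
        return std::fabs(fast[a] - 0.5) < std::fabs(fast[b] - 0.5);
    });
    return idx;
}
```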
I just have a quick question, on how to speed up calculations of infinite series.
This is just one of the examples:
arctan(x) = x - x^3/3 + x^5/5 - x^7/7 + ....
Let's say you have some library which allows you to work with big numbers. The first obvious solution would be to start adding/subtracting each element of the sequence until you reach some target N.
You can also cache x^n, so that for each next element, instead of calculating x^(n+2) from scratch, you do lastX*(x^2).
But overall it seems to be a very sequential task. What can you do to utilize multiple processors (8+)?
Thanks a lot!
EDIT:
I will need to calculate something from 100k to 1m iterations. This is a C++-based application, but I am looking for an abstract solution, so it shouldn't matter.
Thanks for reply.
You need to break the problem down to match the number of processors or threads you have. In your case you could have for example one processor working on the even terms and another working on the odd terms. Instead of precalculating x^2 and using lastX*(x^2), you use lastX*(x^4) to skip every other term. To use 8 processors, multiply the previous term by x^16 to skip 8 terms.
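The even/odd split can be sketched like this; each strand steps its power by x^4, so in a real implementation the two loops could run on separate threads:

```cpp
#include <cmath>

// arctan by two interleaved strands: even-index terms (+) and odd-index terms (-)
double arctanTwoStrands(double x, int terms) {
    double x2 = x * x, x4 = x2 * x2;
    double pe = x, po = x * x2;       // leading power of each strand
    double se = 0.0, so = 0.0;
    for (int k = 0; k < terms; k += 2) {
        se += pe / (2 * k + 1);       // terms k = 0, 2, 4, ...
        so += po / (2 * k + 3);       // terms k = 1, 3, 5, ...
        pe *= x4;                     // skip every other term in each strand
        po *= x4;
    }
    return se - so;                   // odd-index terms carry the minus sign
}
```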
P.S. Most of the time when presented with a problem like this, it's worthwhile to look for a more efficient way of calculating the result. Better algorithms beat more horsepower most of the time.
If you're trying to calculate the value of pi to millions of places or something, you first want to pay close attention to choosing a series that converges quickly and is amenable to parallelization. Then, if you have enough digits, it will eventually become cost-effective to split them across multiple processors; you will have to find or write a bignum library that can do this.
Note that you can factor out the variables in various ways; e.g.:
atan(x)= x - x^3/3 + x^5/5 - x^7/7 + x^9/9 ...
= x*(1 - x^2*(1/3 - x^2*(1/5 - x^2*(1/7 - x^2*(1/9 ...
Although the second line is more efficient than a naive implementation of the first, it still has a linear chain of dependencies from beginning to end. You can improve your parallelism by combining terms in pairs:
= x*(1-x^2/3) + x^3*(1/5-x^2/7) + x^5*(1/9 ...
= x*( (1-x^2/3) + x^2*((1/5-x^2/7) + x^2*(1/9 ...
= [yet more recursive computation...]
However, this speedup is not as simple as you might think, since the time taken by each computation depends on the precision needed to hold it. In designing your algorithm, you need to take this into account; your algebra is intimately involved as well. For the above case, you'll get infinitely repeating fractions if you do regular divisions by your constant numbers, so you need to figure out some way to deal with that.
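For reference, a finite-depth evaluation of the nested (Horner) form above looks like this, evaluated from the innermost coefficient outward; this is exactly the linear dependency chain that the pairing trick is meant to break up:

```cpp
#include <cmath>

// n terms of atan(x) = x*(1 - x^2*(1/3 - x^2*(1/5 - ...))), inside out
double atanHorner(double x, int n) {
    double x2 = x * x;
    double acc = 1.0 / (2 * n - 1);          // innermost coefficient
    for (int k = n - 1; k >= 1; --k)
        acc = 1.0 / (2 * k - 1) - x2 * acc;  // one nesting level per iteration
    return x * acc;
}
```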
Well, for this example, you might sum the series (if I've got the brackets in the right places):
(-1)^i * x^(2i + 1) / (2i + 1), for i = 0, 1, 2, ...
Then on processor 1 of 8, compute the sum of the terms for i = 0, 8, 16, 24, ...
on processor 2 of 8, compute the sum of the terms for i = 1, 9, 17, 25, ...
and so on, finally adding up the partial sums.
Or, you could do as you (nearly) suggest, give i = 1..16 (say) to processor 1, i = 17..32 to processor 2 and so on, and they can compute each successive power of x from the previous one. If you want more than 8x16 elements in the series, then assign more to each processor in the first place.
I doubt whether, for this example, it is worth parallelising at all; I suspect that you will reach double-precision accuracy on one processor while the parallel threads are still waking up. But that's just a guess for this example, and you can probably find many series for which parallelisation is worth the effort.
And, as @Mark Ransom has already said, a better algorithm ought to beat brute force and a lot of processors every time.