So I have this algorithm and I am trying to determine its basic operation for an algorithm analysis problem.
Here is the code:
int median(int array[]) {
    int k = array.length;
    int n = k / 2;
    for (int i = 0; i < k; i++) {
        int numsmaller = 0;
        int numequal = 0;
        for (int j = 0; j < k; j++) {
            if (array[j] < array[i]) {
                numsmaller++;
            } else if (array[j] == array[i]) {
                numequal++;
            }
            if (numsmaller < n && n <= (numsmaller + numequal)) {
                return array[i];
            }
        } // inner loop
    } // outer loop
} // end of function
I am under the current impression that the basic operation of this algorithm is the two if statements within the inner loop of the function.
What is confusing me is that I am unsure whether the basic operation is the boolean expression itself, which is evaluated on every iteration (checking whether array[j] < array[i] and whether array[j] == array[i]), or whether the basic operation is the code that executes when either of the if statements is true. Can someone please give me a solid explanation, in terms of algorithm analysis, of what the basic operation of this algorithm would be? :) Please and much thanks.
Basic operations may be things like:
Array indexing
Conditionals, e.g. if (x == y)
Assignments, e.g. x = 10
And even basic math operations, e.g. y + 2
Note this is not an exhaustive list by any means. Also note that the worst case scenario of some code requires the maximum number of basic operations to be performed; so in the following code, you'll see three basic operations in the worst case.
if (variable == true) {
int x = y + 2;
}
...this is because we really just composed several of the above list items. We have to evaluate the first conditional no matter what (one basic op), but after that the "worst case scenario" is when variable is true, because we then go on to perform an assignment. Of course, in order to compute the non-obvious value that x will take via the assignment, we have to perform another basic operation (the arithmetic between y and 2), which gives us a total of three basic operations.
So in your case, the basic operations performed in the inner loop are the conditionals, the incrementing (essentially an assignment) of a variable when one of the conditions is met, and the two comparisons plus the addition done in the
if(numsmaller < n && n <= (numsmaller + numequal))
line.
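If it helps to see that counting concretely, here is a sketch in C++ (an illustrative addition, not a corrected algorithm: it mirrors the question's code with the inner loop typo fixed and a hypothetical counter added) that tallies the element comparisons, i.e. the candidate basic operation discussed above:

#include <cstdio>

// The question's double loop, with a counter for the element comparisons.
int median_counted(const int array[], int k, long long &comparisons) {
    int n = k / 2;
    for (int i = 0; i < k; i++) {
        int numsmaller = 0;
        int numequal = 0;
        for (int j = 0; j < k; j++) {
            ++comparisons;                       // array[j] < array[i]
            if (array[j] < array[i]) {
                numsmaller++;
            } else {
                ++comparisons;                   // array[j] == array[i]
                if (array[j] == array[i]) numequal++;
            }
            if (numsmaller < n && n <= numsmaller + numequal) {
                return array[i];
            }
        }
    }
    return -1;   // fallthrough; the question's algorithm does not always return inside the loops
}

int main() {
    int a[] = {7, 1, 5, 3, 9};
    long long cmp = 0;
    int value = median_counted(a, 5, cmp);
    printf("returned %d after %lld element comparisons\n", value, cmp);
    return 0;
}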
Hopefully this helps.
Related
I am currently working on an assignment which asks us to implement a few different sorts and introduce counter variables to measure the runtime.
My question is that I'm confused about whether or not to include certain "operations" as something that would increment my counter. For instance, my textbook says this:
....
So, from what I understand, I should be counting "comparisons" but I do not understand if this applies to if statements, while loops, etc.
For instance, here is my insertion sort.
float insertionSort(int theArray[], int n) {
float count = 0;
for (int unsorted = 1; unsorted < n; unsorted++) {
int nextItem = theArray[unsorted];
int loc = unsorted;
while ((loc > 0) && (theArray[loc - 1] > nextItem)) {
theArray[loc] = theArray[loc - 1];
theArray[loc] = nextItem;
loc--;
count += 4;
}
}
return count;
}
As you can see, I increment count by 4 for each iteration of the while loop. This really highlights my question, I think.
My reasoning is that we make two comparisons in the conditional statement of the while loop:
(loc > 0 && theArray[loc - 1] > nextItem)
Afterwards, we make two moves in the array. From my understanding, this means that we have performed 4 "operations" and we would increment counter by 4 for the sake of measuring runtime at the end of execution.
Is this correct? Thank you SO much for any help.
In this case, your number of exchanges is proportional to your number of comparisons. Also, your loc > 0 is what I'd consider an "incidental operation" as stated in that excerpt. So, assuming comparisons and movements are constant time operations (which they are for integers), you'll get the same trends in your data by simply incrementing your counter once each loop iteration.
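For example, a minimal sketch of that simplification (a standard insertion sort inner loop with one counter tick per iteration; this is illustrative and not necessarily the counting scheme your assignment requires):

// Counts one tick per execution of the inner while loop's body; since the work
// per iteration is constant, the counter grows at the same rate as the runtime.
long long insertionSortCounted(int theArray[], int n) {
    long long count = 0;
    for (int unsorted = 1; unsorted < n; unsorted++) {
        int nextItem = theArray[unsorted];
        int loc = unsorted;
        while (loc > 0 && theArray[loc - 1] > nextItem) {
            theArray[loc] = theArray[loc - 1];   // shift the larger element right
            loc--;
            ++count;                             // one tick per inner iteration
        }
        theArray[loc] = nextItem;                // place the item once, after shifting
    }
    return count;
}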
The full problem statement is here. Suppose we have a double-ended queue of known values. Each turn, we take a value from one end or the other; a value taken on turn t earns value*t. The goal is to find the maximum possible total value.
My first approach was to use straightforward top-down DP with memoization. Let i,j denote starting, ending indexes of "subarray" of array of values A[].
f(i,j,age) = A[i]*age                                          if i == j
f(i,j,age) = max(f(i+1,j,age+1) + A[i]*age,
                 f(i,j-1,age+1) + A[j]*age)                    otherwise
This works; however, it proves to be too slow because of the overhead of all the recursive calls. An iterative bottom-up version should be faster.
Let m[i][j] be the maximum reachable value for the "subarray" of A[] with begin/end indexes i,j. Because i <= j, we only ever use half of the matrix.
This matrix can be built iteratively using the fact that m[i][j] = max(m[i-1][j] + A[i]*age, m[i][j-1] + A[j]*age), where age is largest on the main diagonal (where it equals the size of A[]) and decreases linearly as A.size() - |i - j|.
My attempt at an implementation runs into a bus error.
Is the described algorithm correct? What is the cause for the bus error?
Here is the only part of the code where the bus error might occur:
for(T j = 0; j < num_of_treats; j++) {
max_profit[j][j] = treats[j]*num_of_treats;
for(T i = j+1; i < num_of_treats; i++)
max_profit[i][j] = max( max_profit[i-1][j] + treats[i]*(num_of_treats-i+j),
max_profit[i][j-1] + treats[j]*(num_of_treats-i+j));
}
for(T j = 0; j < num_of_treats; j++) {
Inside this loop, j is clearly a valid index into the array max_profit. But you're not using just j.
The bus error is caused by trying to access the array via a negative index when j=0 and i=1, as I should have noticed while debugging. The algorithm is wrong as well. First, the relationship used to construct the max_profit[][] array should be
max_profit[i][j] = max( max_profit[i+1][j] + treats[i]*(num_of_treats-i+j),
max_profit[i][j-1] + treats[j]*(num_of_treats-i+j));
Second, the array must be filled diagonally, so that max_profit[i+1][j] and max_profit[i][j-1] are already computed, with the exception of the main diagonal (the base case).
Third, the data structure chosen is extremely inefficient. I am using only half of the space allocated for max_profit[][]. Plus, at each iteration, I only need the last computed diagonal. An array of size num_of_treats should suffice.
Here is working code using this improved algorithm. I really like it. I even used bit operators for the first time.
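The linked code is not reproduced here, but a minimal sketch of the O(n)-space, diagonal-by-diagonal (i.e. length-by-length) fill described above might look like the following (C++; treats and num_of_treats follow the question's naming, everything else is illustrative and assumes the totals fit in 64-bit integers):

#include <algorithm>
#include <cstdint>
#include <vector>

// dp[i] holds the best total for the subarray of the current length starting
// at index i. Lengths grow from 1 to n, so the multiplier ("age") of the treat
// taken while that subarray is what remains shrinks from n down to 1.
int64_t max_total_value(const std::vector<int64_t>& treats) {
    const int num_of_treats = static_cast<int>(treats.size());
    if (num_of_treats == 0) return 0;
    std::vector<int64_t> dp(num_of_treats);
    for (int i = 0; i < num_of_treats; ++i)
        dp[i] = treats[i] * num_of_treats;          // length-1 subarrays, age = n
    for (int len = 2; len <= num_of_treats; ++len) {
        const int64_t age = num_of_treats - len + 1;
        for (int i = 0; i + len - 1 < num_of_treats; ++i) {
            // dp[i] and dp[i+1] still hold the length-(len-1) values here,
            // so choose: take the front element now, or take the back element now.
            dp[i] = std::max(dp[i + 1] + treats[i] * age,
                             dp[i] + treats[i + len - 1] * age);
        }
    }
    return dp[0];                                    // best value for the whole array
}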
Wikipedia claims that the failure function table can be computed in O(n) time.
Let's look at its "canonical" implementation (in C++):
vector<int> prefix_function (string s) {
int n = (int) s.length();
vector<int> pi (n);
for (int i=1; i<n; ++i) {
int j = pi[i-1];
while (j > 0 && s[i] != s[j])
j = pi[j-1];
if (s[i] == s[j]) ++j;
pi[i] = j;
}
return pi;
}
Why does it work in O(n) time, even though there is an inner while-loop? I'm not really strong at the analysis of algorithms, so could somebody explain it?
This line: if (s[i] == s[j]) ++j; is executed at most O(n) times, since it runs at most once per iteration of the outer loop.
It is the only place where j increases, and hence the only way pi[i] can end up larger than pi[i-1]. Note that j starts each iteration at the same value, pi[i-1], that it ended with on the previous one.
Now this line: j = pi[j-1]; decreases j by at least one each time it runs. Since j never drops below zero and is increased at most O(n) times in total (counting across all iterations), it cannot be decreased more than O(n) times in total either.
So it is also executed at most O(n) times.
Thus the whole time complexity is O(n).
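If it helps to see the accounting concretely, here is a sketch (an illustrative addition, not part of the canonical code) that instruments the implementation above with two counters; for any input, the number of decreases can never exceed the number of increases, which is at most n - 1:

#include <cassert>
#include <string>
#include <vector>

// prefix_function with counters added. `increases` counts executions of ++j,
// `decreases` counts executions of j = pi[j-1]. Both counters must start at 0.
std::vector<int> prefix_function_counted(const std::string& s,
                                         long long& increases,
                                         long long& decreases) {
    int n = (int) s.length();
    std::vector<int> pi(n);
    for (int i = 1; i < n; ++i) {
        int j = pi[i - 1];
        while (j > 0 && s[i] != s[j]) {
            j = pi[j - 1];
            ++decreases;
        }
        if (s[i] == s[j]) {
            ++j;
            ++increases;
        }
        pi[i] = j;
    }
    // The argument above: every decrease "spends" a previous increase,
    // and there is at most one increase per outer iteration.
    assert(decreases <= increases && increases <= (n > 0 ? n - 1 : 0));
    return pi;
}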
There are already two answers here that are correct, but I often think a fully laid out
proof can make things clearer. You said you wanted an answer for a 9-year-old, but
I don't think that's feasible (it's easy to be fooled into thinking the claim is true
without actually having any intuition for why it's true). Maybe working through this answer will help.
First off, the outer loop clearly runs n - 1 times, because i is not modified
within the loop. The only code within the loop that could run more than once is
the block
while (j > 0 && s[i] != s[j])
{
j = pi[j-1]
}
So how many times can that run? Well notice that every time that condition is
satisfied we decrease the value of j which, at this point, is at most
pi[i-1]. If it hits 0 then the while loop is done. To see why this is important,
we first prove a lemma (you're a very smart 9-year-old):
pi[i] <= i
This is done by induction. pi[0] <= 0 since it's set once in the initialization of pi and never touched again. Then inductively we let 0 < k < n and assume
the claim holds for 0 <= a < k. Consider the value of pi[k]. It's set
precisely once in the line pi[i] = j. Well how big can j be? It's initialized
to pi[k-1] <= k-1 by induction. In the while block it then may be updated to pi[j-1] <= j-1 < pi[k-1]. By another mini-induction you can see that j will never increase past pi[k-1]. Hence after the
while loop we still have j <= k-1. Finally it might be incremented once so we have
j <= k and so pi[k] = j <= k (which is what we needed to prove to finish our induction).
Now returning back to the original point, we ask "how many times can we decrease the value of
j"? Well with our lemma we can now see that every iteration of the while loop will
monotonically decrease the value of j. In particular we have:
pi[j-1] <= j-1 < j
So how many times can this run? At most pi[i-1] times. The astute reader might think
"you've proven nothing! We have pi[i-1] <= i-1 but it's inside the while loop so
it's still O(n^2)!". The slightly more astute reader notices this extra fact:
However many times we run j = pi[j-1], we decrease j, and hence the final value stored in pi[i], which shortens the while loop's work on the next iteration of the outer loop!
For example, let's say j = pi[i-1] = 10. But after ~6 iterations of the while loop we have
j = 3 and let's say it gets incremented by 1 in the s[i] == s[j] line so j = 4 = pi[i].
Well then at the next iteration of the outer loop we start with j = 4... so we can only execute the while at most 4 times.
The final piece of the puzzle is that ++j runs at most once per loop. So it's not like we can have
something like this in our pi vector:
0 1 2 3 4 5 1 6 1 7 1 8 1 9 1
^ ^ ^ ^ ^
Each of those marked spots could trigger many iterations of the while loop, if such a pi vector
could actually occur (it can't, because pi can grow by at most one per outer iteration).
To make this actually formal you might establish the invariants described above and then use induction
to show that the total number of times that while loop is run, summed with pi[i] is at most i.
From that, it follows that the total number of times the while loop is run is O(n) which means that the entire outer loop has complexity:
O(n) // from the rest of the outer loop excluding the while loop
+ O(n) // from the while loop
=> O(n)
Let's start with the fact that the outer loop executes n - 1 times, where n is the length of the pattern we seek. The inner loop decreases the value of j by at least 1, since pi[j-1] < j. The loop terminates at the latest when j == 0, therefore it can decrease the value of j at most as often as j has previously been increased by ++j in the outer loop. Since ++j is executed in the outer loop at most n - 1 times, the overall number of executions of the inner while loop is limited to n - 1. The preprocessing algorithm therefore requires O(n) steps.
If you care, consider this simpler implementation of the preprocessing stage:
/* ff stands for 'failure function': */
void kmp_table(const char *needle, int *ff, size_t nff)
{
int pos = 2, cnd = 0;
if (nff > 1){
ff[0] = -1;
ff[1] = 0;
} else {
ff[0] = -1;
}
while (pos < nff) {
if (needle[pos - 1] == needle[cnd]) {
ff[pos++] = ++cnd;
} else if (cnd > 0) {
cnd = ff[cnd]; /* Executed O(n) times in total, for the reasons above. */
} else {
ff[pos++] = 0;
}
}
}
from which it is painfully obvious the failure function is O(n), where n is the length of the pattern sought.
1)
x = 25;
for (int i = 0; i < myArray.length; i++)
{
if (myArray[i] == x)
System.out.println("found!");
}
I think this one is O(n).
2)
for (int r = 0; r < 10000; r++)
for (int c = 0; c < 10000; c++)
if (c % r == 0)
System.out.println("blah!");
I think this one is O(1), because for any input n, it will run 10000 * 10000 times. Not sure if this is right.
3)
a = 0
for (int i = 0; i < k; i++)
{
for (int j = 0; j < i; j++)
a++;
}
I think this one is O(i * k). I don't really know how to approach problems like this where the inner loop is affected by variables being incremented in the outer loop. Some key insights here would be much appreciated. The outer loop runs k times, and the inner loop runs 1 + 2 + 3 + ... + k times. So that sum should be (k/2) * (k+1), which would be order of k^2. So would it actually be O(k^3)? That seems too large. Again, don't know how to approach this.
4)
int key = 0; //key may be any value
int first = 0;
int last = intArray.length-1;
int mid = 0;
boolean found = false;
while( (!found) && (first <= last) )
{
mid = (first + last) / 2;
if(key == intArray[mid])
found = true;
if(key < intArray[mid])
last = mid - 1;
if(key > intArray[mid])
first = mid + 1;
}
This one, I think is O(log n). But, I came to this conclusion because I believe it is a binary search and I know from reading that the runtime is O(log n). I think it's because you divide the input size by 2 for each iteration of the loop. But, I don't know if this is the correct reasoning or how to approach similar algorithms that I haven't seen and be able to deduce that they run in logarithmic time in a more verifiable or formal way.
5)
int currentMinIndex = 0;
for (int front = 0; front < intArray.length; front++)
{
currentMinIndex = front;
for (int i = front; i < intArray.length; i++)
{
if (intArray[i] < intArray[currentMinIndex])
{
currentMinIndex = i;
}
}
int tmp = intArray[front];
intArray[front] = intArray[currentMinIndex];
intArray[currentMinIndex] = tmp;
}
I am confused about this one. The outer loop runs n times. And the inner for loop runs
n + (n-1) + (n-2) + ... (n - k) + 1 times? So is that O(n^3) ??
More or less, yes.
1 is correct - it seems you are searching for a specific element in what I assume is an un-sorted collection. If so, the worst case is that the element is at the very end of the list, hence O(n).
2 is correct, though a bit strange. It is O(1) because the loop bounds are constants rather than variables; there is no input whose size can grow.
3 I believe is still O(n^2) (with n = k here). The count is roughly (1/2)k^2 increments; drop the constant factor and the lower-order terms and you get O(k^2), not O(k^3).
4 looks a lot like a binary search algorithm on a sorted collection. O(log n) is correct. It is logarithmic because at each iteration you essentially halve the number of positions where the element you are looking for could be.
5 is a selection sort, O(n^2), for reasons similar to 3.
O() doesn't mean anything in itself: you need to specify whether you are counting the worst-case O or the average-case O. Some sorting algorithms, for example, are O(n log n) on average but O(n^2) in the worst case.
Basically you need to count the overall number of iterations of the innermost loop, and take the biggest component of the result without any constant (for example, if you have k*(k+1)/2 = 1/2 k^2 + 1/2 k, the biggest component is 1/2 k^2, therefore you are O(k^2)).
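Applied to your item 3), the innermost a++ runs 0 + 1 + 2 + ... + (k-1) = k*(k-1)/2 times; dropping the constant 1/2 and the lower-order term leaves O(k^2), not O(k^3).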
For example, your item 4) is in O(log(n)) because, if you work on an array of size n, then you will run one iteration on this array, and the next one will be on an array of size n/2, then n/4, ..., until this size reaches 1. So it is log(n) iterations.
Your question is mostly about the definition of O().
When someone says an algorithm is O(log(n)), you have to read:
When the input parameter n becomes very big, the number of operations performed by the algorithm grows at most proportionally to log(n).
Now, this means two things:
You have to have at least one input parameter n. There is no point in talking about O() without one (as in your case 2).
You need to define the operations that you are counting. These can be additions, comparison between two elements, number of allocated bytes, number of function calls, but you have to decide. Usually you take the operation that's most costly to you, or the one that will become costly if done too many times.
So keeping this in mind, back to your problems:
1) n is myArray.length, and the operation you count is ==. In that case the number of operations is exactly n, which is O(n).
2) You can't specify an n: the bounds are constants, so the count is a constant too.
3) The n can only be k, and the operation you count is ++. It is executed exactly k*(k-1)/2 times, which is O(k^2), as you suspected.
4) This time n is the length of your array again, and the operation you count is ==. Here the number of operations depends on the data, so we usually talk about the worst-case scenario, meaning that of all possible inputs we look at the one that takes the most time. At best the algorithm takes one comparison. For the worst case, let's take an example: if the array is [1,2,3,4,5,6,7,8,9] and you are looking for 4, intArray[mid] becomes successively 5, 2, 3 and then 4, so you do the comparison 4 times. In fact, for an array whose size is 2^k - 1, the maximum number of comparisons is k (you can check). So n = 2^k - 1 => k = ln(n+1)/ln(2). You can extend this result to sizes that are not of that form, and you get a complexity of O(log(n)).
In any case, I think you are confused because you don't exactly know what O(n) means. I hope this is a start.
I have an array of char pointers of length 175,000. Each pointer points to a C-string of length 100, and each character is either '1' or '0'. I need to compute the difference between every pair of strings.
char* arr[175000];
So far, I have two for loops where I compare every string with every other string. The comparison function basically takes two C-strings and returns an integer, which is the number of positions where the strings differ.
This is taking really long on my 4-core machine. Last time I left it to run for 45 minutes and it never finished executing. Please advise of a faster solution or some optimizations.
Example:
000010
000001
have a difference of 2 since the last two bits do not match.
After I calculate the difference, I store the value in another array.
int holder;
for(int x = 0;x < UsedTableSpace; x++){
int min = 10000000;
for(int y = 0; y < UsedTableSpace; y++){
if(x != y){
//compr calculates difference between two c-string arrays
int tempDiff =compr(similarity[x]->matrix, similarity[y]->matrix);
if(tempDiff < min){
min = tempDiff;
holder = y;
}
}
}
similarity[holder]->inbound++;
}
With more information, we could probably give you better advice, but based on what I understand of the question, here are some ideas:
Since you're using each character to represent a 1 or a 0, you're using several times more memory than you need to use, which creates a big performance impact when it comes to caching and such. Instead, represent your data using numeric values that you can think of in terms of a series of bits.
Once you've implemented #1, you can grab an entire integer or long at a time and do a bitwise XOR operation to end up with a number that has a 1 in every place where the two numbers didn't have the same values. Then you can use standard bit-counting (popcount) tricks to count those bits speedily; a sketch follows this list.
Work on "unrolling" your loops somewhat to avoid the number of jumps necessary. For example, the following code:
total = total + array[i];
total = total + array[i + 1];
total = total + array[i + 2];
... will work faster than just looping over total = total + array[i] three times. Jumps are expensive, and interfere with the processor's pipelining. Update: I should mention that your compiler may be doing some of this for you already--you can check the compiled code to see.
Break your overall data set into chunks that will allow you to take full advantage of caching. Think of your problem as a "square" with the i index on one axis and the j index on the other. If you start with one i and iterate across all 175000 j values, the first j values you visit will be gone from the cache by the time you get to the end of the line. On the other hand, if you take the top left corner and go from j=0 to 256, most of the values on the j axis will still be in a low-level cache as you loop around to compare them with i=0, 1, 2, etc.
Lastly, although this should go without saying, I guess it's worth mentioning: Make sure your compiler is set to optimize!
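As a concrete illustration of ideas #1 and #2 above, here is a rough sketch (names and layout are hypothetical, and __builtin_popcountll assumes GCC/Clang; C++20 offers std::popcount instead):

#include <cstdint>

// Pack one 100-character "0"/"1" string into two 64-bit words.
struct PackedBits {
    uint64_t lo;   // bits 0..63
    uint64_t hi;   // bits 64..99 (upper bits stay zero)
};

PackedBits pack(const char* s) {          // s points to 100 chars of '0'/'1'
    PackedBits p = {0, 0};
    for (int i = 0; i < 100; ++i) {
        if (s[i] == '1') {
            if (i < 64) p.lo |= (uint64_t)1 << i;
            else        p.hi |= (uint64_t)1 << (i - 64);
        }
    }
    return p;
}

// Number of differing bits: XOR, then count the set bits of the result.
int hamming(const PackedBits& a, const PackedBits& b) {
    return __builtin_popcountll(a.lo ^ b.lo) + __builtin_popcountll(a.hi ^ b.hi);
}

// Usage sketch: pack every row once up front, then compare the packed values,
// e.g. int d = hamming(packed[x], packed[y]);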
One simple optimization is to compare each pair of strings only once. If the difference between A and B is 12, the difference between B and A is also 12, so your running time is going to drop by almost half.
In code:
int compr(const char* a, const char* b) {
int d = 0, i;
for (i=0; i < 100; ++i)
if (a[i] != b[i]) ++d;
return d;
}
void main_function(...) {
    // Track the best (smallest) distance and the nearest index for every row,
    // so each unordered pair (x, y) is compared only once but still updates
    // both rows. (Requires #include <vector>.)
    std::vector<int> min_dist(UsedTableSpace, 10000000);
    std::vector<int> nearest(UsedTableSpace, -1);
    for (int x = 0; x < UsedTableSpace; x++) {
        for (int y = x + 1; y < UsedTableSpace; y++) {
            // compr calculates difference between two c-string arrays
            int tempDiff = compr(similarity[x]->matrix, similarity[y]->matrix);
            if (tempDiff < min_dist[x]) { min_dist[x] = tempDiff; nearest[x] = y; }
            if (tempDiff < min_dist[y]) { min_dist[y] = tempDiff; nearest[y] = x; }
        }
        // Every pair involving x has been examined by now, so nearest[x] is final.
        if (nearest[x] >= 0)
            similarity[nearest[x]]->inbound++;
    }
}
Notice the second for loop: its start index has changed so that each unordered pair is compared only once, and both rows' running minima are updated so the result matches the original loop.
Another optimization is running the comparison loop on separate threads to take advantage of your 4 cores.
What is your goal, i.e. what do you want to do with the Hamming Distances (which is what they are) after you've got them? For example, if you are looking for the closest pair, or most distant pair, you probably can get an O(n ln n) algorithm instead of the O(n^2) methods suggested so far. (At n=175000, n^2 is 15000 times larger than n ln n.)
For example, you could characterize each 100-bit number m by 8 4-bit numbers, being the number of bits set in 8 segments of m, and sort the resulting 32-bit signatures into ascending order. Signatures of the closest pair are likely to be nearby in the sorted list. It is easy to lower-bound the distance between two numbers if their signatures differ, giving an effective branch-and-bound process as less-distant numbers are found.
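For instance, here is a rough sketch of the signature idea only (not the branch-and-bound part); the function names are hypothetical:

#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Reduce a 100-bit string to a 32-bit signature: split it into 8 segments and
// store each segment's popcount (at most 13, so it fits in 4 bits).
uint32_t signature(const char* bits) {             // bits: 100 chars of '0'/'1'
    uint32_t sig = 0;
    for (int seg = 0; seg < 8; ++seg) {
        int begin = seg * 100 / 8;
        int end = (seg + 1) * 100 / 8;
        uint32_t ones = 0;
        for (int i = begin; i < end; ++i)
            ones += (bits[i] == '1');
        sig = (sig << 4) | ones;
    }
    return sig;
}

// Sort row indices by signature so rows with similar segment counts end up
// close together; the Hamming distance of two rows is at least the sum of the
// absolute differences of their segment counts.
std::vector<int> indices_sorted_by_signature(char* const* arr, int count) {
    std::vector<std::pair<uint32_t, int>> keyed(count);
    for (int i = 0; i < count; ++i)
        keyed[i] = std::make_pair(signature(arr[i]), i);
    std::sort(keyed.begin(), keyed.end());
    std::vector<int> order(count);
    for (int i = 0; i < count; ++i)
        order[i] = keyed[i].second;
    return order;
}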