I tried an alternative approach to the 3sum problem: given an array, find all triplets that sum to a given number.
Basically the approach is this: sort the array. Once a pair of elements (say A[i] and A[j]) is selected, a binary search is done for the third element (using the equal_range function). The index one past the last matching element is saved in a variable 'c'. Since A[j+1] >= A[j], for the next value of j we need to search only up to (and excluding) index c, because numbers at index c and beyond would definitely make the sum exceed the target. For the case j = i+1, we save the end index in 'd' instead and set c = d. For the next value of i, when j = i+1, we need to search only up to (and excluding) index d.
C++ implementation:
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int sum3(vector<int>& A, int sum)
{
    int count = 0, n = A.size();
    sort(A.begin(), A.end());
    int c = n, d = n; // initialize c and d to array length
    pair<vector<int>::iterator, vector<int>::iterator> p;
    for (int i = 0; i < n - 2; i++)
    {
        for (int j = i + 1; j < n - 1; j++)
        {
            if (j == i + 1)
            {
                p = equal_range(A.begin() + j + 1, A.begin() + d, sum - A[i] - A[j]);
                d = p.second - A.begin();
                if (d == n + 1) d--;
                c = d;
            }
            else
            {
                p = equal_range(A.begin() + j + 1, A.begin() + c, sum - A[i] - A[j]);
                c = p.second - A.begin();
                if (c == n + 1) c--;
            }
            count += p.second - p.first;
            for (auto it = p.first; it != p.second; ++it)
                cout << A[i] << ' ' << A[j] << ' ' << *it << '\n';
        }
    }
    return count;
}

int main() // driver function for testing
{
    vector<int> A = {4, 3, 2, 6, 4, 3, 2, 6, 4, 5, 7, 3, 4, 6, 2, 3, 4, 5};
    int sum = 17;
    cout << sum3(A, sum) << endl;
    return 0;
}
I am unable to work out an upper bound on the time needed by this algorithm. I understand that the worst-case scenario is when the target sum is unachievably large.
My calculations yield something like:
For i=0, the binary searches cost lg(n-2) + lg(n-3) + ... + lg(1)
For i=1, lg(n-3) + lg(n-4) + ... + lg(1)
...
...
...
For i=n-3, lg(1)
So in total, lg((n-2)!) + lg((n-3)!) + ... + lg(1!)
= lg(1^n * 2^(n-1) * 3^(n-2) * ... * (n-1)^2 * n^1)
But how do I deduce a big-O bound from this expression?
In addition to James' good answer, I would like to point out that this can actually go up to O(n^3) in the worst case, because you are running three nested loops. Consider the case
{1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}
and the demanded sum is 3.
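To see why the output alone forces cubic work here, count the triplets: every choice of three positions sums to 3, so the algorithm must report
C(n, 3) = n(n-1)(n-2)/6 = Theta(n^3) triplets, e.g. C(17, 3) = 680 output lines for the 17-element array above.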
When computing complexity, I'll start by referring to the Big-O Cheat sheet. I use this sheet to classify smaller sections of the code to get their runtime performance.
E.g., a simple loop is O(n); binary search (according to the cheat sheet) is O(log(n)); etc.
Next, I use the Properties of Big-O notation to composite the smaller pieces together.
So, for instance, if I had two loops independent of each other, it would be O(n) + O(n), or O(2n) => O(n). If one of my loops were inside the other, I would multiply them, so nesting one O(n) loop inside another gives O(n) * O(n) = O(n^2).
Now, I know you're saying: "hey, wait, I'm changing the upper and lower bounds of the inner loop", but I don't think that really matters... here's a university-level example.
So my back-of-the-napkin calculation of your runtime is O(n^2) * O(Log(n)) or O(n^2 Log(n)).
But this need not be the case. I could've done something horribly wrong. So my next step would be to start graphing the runtimes of your worst possible case. Set sum to the impossibly large value and generate larger and larger arrays. You can avoid integer overflow by using lots and lots of repeated smaller numbers.
Also, compare it to the Quadratic 3Sum Solution. That's a known O(n^2) solution. Be sure to compare worst cases, or at least the same array on both. Do both timed tests at the same time so you can start getting a feel for which is faster while you are empirically testing the runtime.
Use release builds, optimized for speed.
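For the timing comparison, here is a minimal sketch of the standard two-pointer quadratic 3Sum (my own illustration, not the code from the question; note it counts distinct value triplets rather than all index combinations, so the counts will differ on inputs with duplicates):

#include <algorithm>
#include <vector>
using namespace std;

// Classic O(n^2) 3Sum: sort once, then for each first element walk two
// pointers towards each other over the rest of the array.
int sum3_quadratic(vector<int> A, int sum)
{
    sort(A.begin(), A.end());
    int count = 0, n = A.size();
    for (int i = 0; i < n - 2; i++)
    {
        if (i > 0 && A[i] == A[i - 1]) continue;   // skip duplicate first elements
        int lo = i + 1, hi = n - 1;
        while (lo < hi)
        {
            int s = A[i] + A[lo] + A[hi];
            if (s < sum) lo++;
            else if (s > sum) hi--;
            else
            {
                count++;                            // found A[i] + A[lo] + A[hi] == sum
                int a = A[lo], b = A[hi];
                while (lo < hi && A[lo] == a) lo++; // skip duplicates of the pair
                while (lo < hi && A[hi] == b) hi--;
            }
        }
    }
    return count;
}

On the worst-case input described above (an unachievably large target sum), both versions report zero triplets, which makes for a clean timing comparison.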
1. For your analysis, note that
log(1) + log(2) + ... + log(k) = Theta(k log(k)).
Indeed, the upper half of this sum is log(k/2) + log(k/2+1) + ... + log(k),
so it is at least log(k/2)*k/2, which is asymptotically the same as log(k)*k already.
Similarly, we can conclude that
log(n-1) + log(n-2) + log(n-3) + ... + log(1) + // Theta((n-1) log(n-1))
log(n-2) + log(n-3) + ... + log(1) + // Theta((n-2) log(n-2))
log(n-3) + ... + log(1) + // Theta((n-3) log(n-3))
... +
log(1) = Theta(n^2 log(n))
Indeed, if we consider the logarithms which are at least log(n/2), it's the half-triangle (thus ~1/2) of the upper left quadrant (thus ~n^2/4) of the above sum, so there are Theta(n^2/8) such terms.
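Written as a single double sum, the triangle above reads (my compact restatement of the same bound):
sum over m = 1..n-1 of [ log(1) + log(2) + ... + log(m) ] = sum over m = 1..n-1 of Theta(m log(m)) = Theta(n^2 log(n))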
2. As noted by satvik in another answer, your output loop can take up to Theta(n^3) steps when the number of outputs itself is Theta(n^3), which happens when all the elements are equal.
3. There are O(n^2) solutions to the 3-sum problem, which are therefore asymptotically faster than this one.
I wrote the following code:
class Solution {
public:
    int countPrimes(int n) {
        if (n < 3)                   // no primes strictly below n when n <= 2 (also keeps res[i] in bounds)
            return 0;
        int counter = n - 2;         // start by assuming 2 .. n-1 are all prime
        vector<bool> res(n, true);
        for (int i = 2; i <= sqrt(n) + 1; ++i)
        {
            if (res[i] == false)
                continue;
            for (int j = i * i; j < n; j += i)
            {
                if (res[j] == true)
                {
                    --counter;
                    res[j] = false;
                }
            }
        }
        return counter;
    }
};
but I couldn't find its complexity. The inner loop, according to my calculations, runs n/2 + n/3 + ... + n/sqrt(n) times.
OK, let's try to get the sum from your formula first (I am going to use your naming convention for the variables):
Now, please note that n is a constant in the sum, so it can be moved outside the summation.
Now we have one part which is linear and one part that we still need to estimate, but if you look closely it is very similar to the harmonic series; indeed, as n goes to infinity it is the harmonic series minus 1.
Its growth rate is well known: ln(n) + 1 (https://en.wikipedia.org/wiki/Harmonic_series_(mathematics)).
So the complexity of the algorithm is n*ln(n).
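Spelled out (a restatement of the estimate above, keeping the asker's denominators 2, 3, 4, ..., sqrt(n)):
n/2 + n/3 + n/4 + ... + n/sqrt(n)
= n * (1/2 + 1/3 + ... + 1/sqrt(n))
≈ n * (ln(sqrt(n)) + 0.577... - 1)
= (n/2) * ln(n) + O(n), which is O(n ln(n)).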
Update
Beta's answer has the correct result (it uses the correct starting point); I will leave the answer above in place because the procedure remains the same and, IMHO, it is still useful.
"...The inner loop according to my calculations runs n/2 + n/3 + ... + n/sqrt(n)"
Ah, be careful with that ellipsis. It actually runs
n/2 + n/3 + n/5 + n/7 + n/11 + ... + n/sqrt(n)
This is not n times the harmonic series, this is n times the sum of the reciprocals of the primes, a sum which grows as log(log(greatest denominator)).
So the complexity of the algorithm is O(n log log(n)).
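For reference, the estimate behind that statement is Mertens' second theorem: the sum of the reciprocals of the primes up to x grows like ln(ln(x)) + M, with M ≈ 0.2615. Applied here:
n/2 + n/3 + n/5 + n/7 + ... + n/p   (over primes p <= sqrt(n))
= n * (sum of 1/p over primes p <= sqrt(n))
≈ n * (ln(ln(sqrt(n))) + M)
= O(n log(log(n)))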
First, here's my Shell sort code (using Java):
public char[] shellSort(char[] chars) {
    int n = chars.length;
    int increment = n / 2;
    while (increment > 0) {
        int last = increment;
        while (last < n) {
            int current = last - increment;
            while (current >= 0) {
                if (chars[current] > chars[current + increment]) {
                    // swap
                    char tmp = chars[current];
                    chars[current] = chars[current + increment];
                    chars[current + increment] = tmp;
                    current -= increment;
                }
                else { break; }
            }
            last++;
        }
        increment /= 2;
    }
    return chars;
}
Is this a correct implementation of Shell sort (forgetting for now about the most efficient gap sequence - e.g., 1,3,7,21...)? I ask because I've heard that the best-case time complexity for Shell Sort is O(n). (See http://en.wikipedia.org/wiki/Sorting_algorithm). I can't see this level of efficiency being realized by my code. If I added heuristics to it, then yeah, but as it stands, no.
That being said, my main question now: I'm having difficulty calculating the big-O time complexity of my Shell sort implementation. I identified the outermost loop as O(log n), the middle loop as O(n), and the innermost loop also as O(n), but I realize the inner two loops would not actually be O(n); they would be much less than that. What should they be? Because obviously this algorithm runs much more efficiently than O((log n) n^2).
Any guidance is much appreciated as I'm very lost! :P
The worst-case of your implementation is Θ(n^2) and the best-case is O(nlogn) which is reasonable for shell-sort.
The best case ∊ O(nlogn):
The best case is when the array is already sorted. That would mean the inner if statement is never true, making the inner while loop a constant-time operation. Using the bounds you've used for the other loops gives O(nlogn). The best case of O(n) is reached by using a constant number of increments.
The worst case ∊ O(n^2):
Given your upper bound for each loop you get O((log n)n^2) for the worst-case. But add another variable for the gap size g. The number of compare/exchanges needed in the inner while is now <= n/g. The number of compare/exchanges of the middle while is <= n^2/g. Add the upper-bound of the number of compare/exchanges for each gap together: n^2 + n^2/2 + n^2/4 + ... <= 2n^2 ∊ O(n^2). This matches the known worst-case complexity for the gaps you've used.
The worst case ∊ Ω(n^2):
Consider an array where all the even-positioned elements are greater than the median. The odd- and even-positioned elements are not compared until we reach the final increment of 1. The number of compare/exchanges needed for that last pass is Ω(n^2).
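A quick way to build such an input (my own sketch, in C++ for brevity, assuming n is a power of two so that every gap n/2, n/4, ..., 2 is even): put the large half of the values at even indices and the small half at odd indices, so each half stays sorted among itself and the two halves never interact until the final gap of 1.

#include <vector>

// Adversarial input for the n/2, n/4, ..., 1 gap sequence (n a power of two):
// even indices hold the values n/2+1 .. n, odd indices hold 1 .. n/2, each
// side already in increasing order, so every pass with an even gap does no
// work and the final gap-1 pass needs Theta(n^2) compare/exchanges.
std::vector<int> worstCaseForShell(int n)
{
    std::vector<int> a(n);
    for (int i = 0; i < n; ++i)
        a[i] = (i % 2 == 0) ? n / 2 + i / 2 + 1   // large values at even positions
                            : i / 2 + 1;          // small values at odd positions
    return a;
}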
Insertion Sort
If we analyse
static void sort(int[] ary) {
    int i, j, insertVal;
    int aryLen = ary.length;
    for (i = 1; i < aryLen; i++) {
        insertVal = ary[i];
        j = i;
        /*
         * while loop exits as soon as it finds a left-hand element less than insertVal
         */
        while (j >= 1 && ary[j - 1] > insertVal) {
            ary[j] = ary[j - 1];
            j--;
        }
        ary[j] = insertVal;
    }
}
Hence, in the average case, the while loop exits about halfway through, i.e. the total work is
1/2 + 2/2 + 3/2 + 4/2 + ... + (n-1)/2 = Theta((n^2)/2) = Theta(n^2)
You can see we still get (n^2)/2; the division by two makes no difference asymptotically.
Shell sort is just insertion sort run with gaps like n/2, n/4, n/8, ..., 2, 1.
It takes advantage of the best-case behaviour of insertion sort: the early exit of the while loop (as soon as a smaller element is found to the left of the element being inserted) starts happening very quickly, because the earlier passes leave the data partially sorted, and this keeps the total execution time down.
Each of the roughly log n passes (gaps n/2, n/4, n/8, n/16, ..., 1) still touches all n elements at least once, so even in this favourable situation the passes together cost on the order of n log n comparisons.
Hence its time complexity is something close to n(log n)^2.
After watching some Terence Tao videos, I wanted to try implementing algorithms in C++ code to find all the prime numbers up to a number n. In my first version, where I simply had every integer from 2 to n tested to see if it was divisible by anything from 2 to sqrt(n), I got the program to find the primes between 1 and 10,000,000 in ~52 seconds.
Attempting to optimize the program, and implementing what I now know to be the Sieve of Eratosthenes, I assumed the task would be done much faster than 51 seconds, but sadly, that wasn't the case. Even going up to 1,000,000 took a considerable amount of time (I didn't time it, though).
#include <cstdlib> // for system("pause")
#include <iostream>
#include <vector>
using namespace std;
int main()
{
vector<int> tosieve = {};
for (int i = 2; i < 1000001; i++)
{
tosieve.push_back(i);
}
for (int j = 0; j < tosieve.size(); j++)
{
for (int k = j + 1; k < tosieve.size(); k++)
{
if (tosieve[k] % tosieve[j] == 0)
{
tosieve.erase(tosieve.begin() + k);
}
}
}
//for (int f = 0; f < tosieve.size(); f++)
//{
// cout << (tosieve[f]) << endl;
//}
cout << (tosieve.size()) << endl;
system("pause");
}
Is it the repeated referencing of the vectors or something? Why is this so slow? Even if I'm completely overlooking something (could be, complete beginner at this :I), I would think that finding the primes between 2 and 1,000,000 with this horribly inefficient method would be faster than my original way of finding them from 2 to 10,000,000.
Hope someone has a clear answer to this - hopefully I can use whatever knowledge is gleaned in the future when optimizing programs using a lot of recursion.
The problem is that 'erase' moves every element in the vector down one, meaning it is an O(n) operation.
There are three alternative choices:
1) Just mark deleted elements as 'empty' (make them 0, for example). This will mean future passes have to pass over those empty positions, but that isn't that expensive.
2) Make a new vector, and push_back new values into there.
3) Use std::remove_if: this will move the elements down, but does it in a single pass, so it is more efficient. If you use std::remove_if, remember that it doesn't resize the vector itself; you have to follow it with erase(), as in the sketch below.
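A minimal sketch of option 3 (my own illustration; the predicate here removes the multiples of some prime p that has already been found):

#include <algorithm>
#include <vector>

// remove_if compacts the surviving elements in a single pass and returns the
// new logical end; erase() then trims the leftover tail in one go.
void removeMultiples(std::vector<int>& v, int p)
{
    v.erase(std::remove_if(v.begin(), v.end(),
                           [p](int x) { return x != p && x % p == 0; }),
            v.end());
}

One such call per prime is a single O(n) pass, instead of one O(n) erase() per removed element.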
Most vector operations, including erase(), have O(n) linear time complexity.
Since you have two loops of size 10^6, and a vector of size 10^6, your algorithm executes up to 10^18 operations.
Cubic algorithms for such a big N take a huge amount of time.
N = 10^6 is even big enough for quadratic algorithms.
Please read carefully about the Sieve of Eratosthenes. The fact that the full search and your Sieve of Eratosthenes took the same time means that you have implemented the second one incorrectly.
I see two performance issues here:
First of all, push_back() will have to reallocate the dynamic memory block once in a while. Use reserve():
vector<int> tosieve = {};
tosieve.reserve(1000001);
for (int i = 2; i < 1000001; i++)
{
    tosieve.push_back(i);
}
Second, erase() has to move all elements behind the one you try to remove. Set the elements to 0 instead and do one pass over the vector at the end (untested code):
for (auto& x : tosieve) {
    for (auto y = tosieve.begin(); *y < x; ++y) // this check works only in
                                                // the case of an ordered vector
        if (*y != 0 && x % *y == 0) x = 0;
}
{ // this block makes sure that sieved is released afterwards
    auto sieved = vector<int>{};
    for (auto x : tosieve)
        if (x != 0)              // keep only the surviving (non-marked) values
            sieved.push_back(x);
    swap(tosieve, sieved);
} // the large memory block is released now; just the sieved elements are kept.
Consider using standard algorithms instead of hand-written loops. They help you state your intent. In this case I would use std::transform() for the outer loop of the sieve, std::any_of() for the inner loop, std::generate_n() for filling tosieve at the beginning, and std::copy_if() for filling sieved (untested code):
vector<int> tosieve = {};
tosieve.reserve(1000001);
generate_n(back_inserter(tosieve), 1000001, []() -> int {
    static int i = 2; return i++;
});
transform(begin(tosieve), end(tosieve), begin(tosieve), [&tosieve](int i) -> int {
    return any_of(begin(tosieve), begin(tosieve) + i - 2,
        [&i](int j) -> bool {
            return j != 0 && i % j == 0;
        }) ? 0 : i;
});
tosieve = [&tosieve]() -> vector<int> { // immediately invoked; replaces tosieve with the filtered copy
    auto sieved = vector<int>{};
    copy_if(begin(tosieve), end(tosieve), back_inserter(sieved),
        [](int i) -> bool { return i != 0; });
    return sieved;
}();
EDIT:
Yet another way to get that done:
vector<int> tosieve = {};
tosieve.reserve(1000001);
generate_n(back_inserter(tosieve), 1000001, []() -> int {
    static int i = 2; return i++;
});
tosieve = [&tosieve]() -> vector<int> { // immediately invoked; replaces tosieve with the filtered copy
    auto sieved = vector<int>{};
    copy_if(begin(tosieve), end(tosieve), back_inserter(sieved),
        [&tosieve](int i) -> bool {
            return !any_of(begin(tosieve), begin(tosieve) + i - 2,
                [&i](int j) -> bool {
                    return i % j == 0;
                });
        });
    return sieved;
}();
Now, instead of marking the elements we don't want and copying afterwards, we directly copy only the elements we want to keep. This is not only faster than the suggestion above, it also states the intent better.
A very interesting task you have. Thanks!
I happily implemented my own versions of solving it from scratch.
I created 3 separate (independent) functions, all based on the Sieve of Eratosthenes. These 3 versions differ in their complexity and speed.
Just a quick note: my simplest (slowest) version finds all primes below your desired limit of 10'000'000 in just 0.025 sec (i.e. 25 milliseconds).
I also tested all 3 versions by finding primes below 2^32 (4'294'967'296), which is solved by the "simple" version within 47 seconds, by the "intermediate" version within 30 seconds, and by the "advanced" version within 12 seconds. So within just 12 seconds it finds all primes below 4 billion (there are 203'280'221 such primes below 2^32, see the OEIS sequence)!
For simplicity I will describe in detail only the Simple version of the 3. Here's the code:
#include <cstddef>
#include <cstdint>
#include <vector>

using u8 = std::uint8_t;

template <typename T>
std::vector<T> GenPrimes_SieveOfEratosthenes(size_t end) {
    // https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes
    if (end <= 2)
        return {};
    size_t const cnt = end >> 1;                 // only odd numbers are stored
    std::vector<u8> composites((cnt + 7) / 8);   // bit array, bit = 1 means composite
    auto Get = [&](size_t i){ return bool((composites[i / 8] >> (i % 8)) & 1); };
    auto Set = [&](size_t i){ composites[i / 8] |= u8(1) << (i % 8); };
    std::vector<T> primes = {2};
    size_t i = 0;
    for (i = 1; i < cnt; ++i) {
        if (Get(i))
            continue;
        size_t const p = 2 * i + 1, start = (p * p) >> 1;
        primes.push_back(p);
        if (start >= cnt)
            break;
        for (size_t j = start; j < cnt; j += p)
            Set(j);
    }
    for (i = i + 1; i < cnt; ++i)
        if (!Get(i))
            primes.push_back(2 * i + 1);
    return primes;
}
This code implements the simplest but still fast algorithm for finding primes, called the Sieve of Eratosthenes. As a small optimization of speed and memory, I search only over odd numbers. This odd-numbers optimization lets me store half as much data and do half as many steps, hence it improves both speed and memory consumption by exactly 2 times.
The algorithm is simple: we allocate an array of bits; this array has a 1 bit at position K if K is composite, or a 0 bit if K is possibly prime. At the end, all 0 bits in the array signify definite primes (numbers that are for sure prime). Also, due to the odd-numbers optimization, this bit array stores only odd numbers, so the K-th bit actually represents the number 2 * K + 1.
Then, going left to right over this array of bits, if we meet a 0 bit at position K it means we have found a prime P = 2 * K + 1, and starting from position (P * P) / 2 we mark every P-th bit with 1. In other words, we mark as composite all numbers from P*P onwards that are divisible by P.
We do this procedure only until P * P becomes greater than or equal to our limit end (we are finding all primes < end). This limit guarantees that, after reaching it, ALL remaining zero bits in the array signify prime numbers.
The second version adds only one optimization to this Simple version: it makes it multi-core (multi-threaded). This single optimization, however, makes the code much bigger and more complex. Basically, it slices the whole range of bits across all cores, so that they write bits to memory in parallel.
As for my third, Advanced version, which is the most complex of the 3: it applies not only the multi-threading optimization but also a so-called primorial optimization.
What is a primorial? It is the product of the first (smallest) primes; for example, I take the primorial 2 * 3 * 5 * 7 = 210.
Any primorial splits the infinite range of integers into wheels by the modulus of that primorial. For example, the primorial 210 splits it into the ranges [0; 210), [210; 2*210), [2*210; 3*210), etc.
It is easy to prove mathematically that inside all of these primorial ranges we can mark the same positions as composite: exactly those numbers that are a multiple of 2, 3, 5 or 7.
Out of the 210 remainders, 162 are for sure composite, and only 48 remainders can possibly contain primes.
Hence it is enough to check primality for only 48/210 = 22.8% of the whole search space. This reduction of the search space makes the task more than 4x faster and 4x less memory-consuming.
One can see that my first, Simple version, with its odd-only optimization, was in fact already using a primorial optimization with primorial equal to 2. Indeed, if we take primorial 2 instead of primorial 210, we get exactly the first (Simple) version of the algorithm.
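The 48-out-of-210 figure is just Euler's totient of the primorial, phi(210) = 210 * (1/2)(2/3)(4/5)(6/7) = 48; a few lines (my own check, separate from the answer's code, needs C++17 for std::gcd) confirm it:

#include <iostream>
#include <numeric>   // std::gcd (C++17)

// Count residues modulo the primorial 210 = 2*3*5*7 that are coprime to it;
// only numbers in these residue classes can be prime (apart from 2, 3, 5, 7).
int main()
{
    int coprime = 0;
    for (int r = 0; r < 210; ++r)
        if (std::gcd(r, 210) == 1)
            ++coprime;
    std::cout << coprime << " of 210 residues can contain primes\n";   // prints 48
}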
All 3 of my versions are tested for correctness and speed, although some tiny bugs may still remain. Note: it is recommended not to use my code in production straight away unless it is tested thoroughly.
All 3 versions are tested for correctness against each other's answers. I thoroughly test correctness by feeding in every limit (end value) from 0 to 2^18; it takes some time to do this.
See main() function to figure out how to use my functions.
Try it online!
Regarding the source code: due to StackOverflow's limit of 30K characters per post, I can't inline it here, as it is almost 30K by itself and together with the text above it would exceed 30K. So I'm providing the source code on a separate GitHub Gist, linked below. Note that the Try it online! link above also contains the full source code, but there I reduced the search limit from 2^32 to a smaller value because of GodBolt's 3-second running-time limit.
Github Gist code
Output:
10M time 'Simple' 0.024 sec
Time 2^32 'Simple' 46.924 sec, number of primes 203280221
Time 2^32 'Intermediate' 30.999 sec
Time 2^32 'Advanced' 11.359 sec
All checked till 0
All checked till 5000
All checked till 10000
All checked till 15000
All checked till 20000
All checked till 25000
I'm having trouble figuring out the process of finding the big-theta notation for this selection sort sample. I've read online (and in various tl;dr's) that nested loops mean it will be O(n^2); however, I don't know how they got that. I need a step-by-step process for finding the notation, i.e. adding up the cost of the operations and everything. It would be nice if someone did it for this sample code, so I can understand it more clearly. Thanks in advance...
void select(int selct[])
{
    int key;
    int comp;
    for (int i = 0; i < 5; i++)
    {
        key = i;
        for (int j = i + 1; j < 5; j++)
        {
            if (selct[key] > selct[j])
            {
                key = j;
            }
        }
        comp = selct[i];
        selct[i] = selct[key];
        selct[key] = comp;
    }
}
When analyzing the time complexity of an algorithm, I actually find it helpful to not look at the code and to instead think about the core idea driving the algorithm. If you know conceptually what the algorithm is doing, it's often easier to figure out the time complexity by just thinking through what the algorithm is going to do and then deriving the time complexity from there.
Let's apply that approach here. So how exactly does selection sort work? Well, it starts off by finding the minimum value in the last n elements and swapping it to position 0, then finding the minimum value in the last n - 1 elements and swapping it to position 1, then finding the minimum value in the last n - 2 elements and swapping it to position 2, etc.
The "hard part" of the algorithm is figuring out which of the last n - k elements is the smallest. Selection sort does this by iterating over those elements and comparing each against the element that currently is known to be the smallest. That requires n - k - 1 comparisons.
Let's see how many comparisons that is. On the first iteration, we need to make n - 1 comparisons. On the second iteration, we make n - 2 comparisons. On the third, we make n - 3 comparisons. Summing up the number of comparisons gives us a good way of measuring the total work:
(n - 1) + (n - 2) + (n - 3) + ... + 3 + 2 + 1 = n(n - 1) / 2
This is a famous summation - it's worth committing it to memory - and tells us how many comparisons are required. The number of comparisons made is a great proxy for the total amount of work done. Since there are n(n - 1) / 2 = n^2 / 2 - n / 2 = Θ(n^2) comparisons made, the time complexity of selection sort is Θ(n^2).
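If it helps to see the count rather than just derive it, here is a small instrumented version (my own C++ sketch, not the code from the question) that tallies the comparisons; it prints exactly n(n - 1) / 2 for every n, whatever the contents of the array:

#include <cstdio>
#include <utility>   // std::swap
#include <vector>

// Selection sort that counts element comparisons; for an array of size n it
// always performs exactly n(n-1)/2 of them, regardless of the data.
long long selectCounting(std::vector<int>& a)
{
    long long comparisons = 0;
    for (size_t i = 0; i + 1 < a.size(); ++i)
    {
        size_t key = i;
        for (size_t j = i + 1; j < a.size(); ++j)
        {
            ++comparisons;
            if (a[key] > a[j]) key = j;
        }
        std::swap(a[i], a[key]);
    }
    return comparisons;
}

int main()
{
    for (size_t n : {5, 10, 100})
    {
        std::vector<int> a(n, 1);
        std::printf("n = %zu -> %lld comparisons (n(n-1)/2 = %zu)\n",
                    n, selectCounting(a), n * (n - 1) / 2);
    }
}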
1)
x = 25;
for (int i = 0; i < myArray.length; i++)
{
    if (myArray[i] == x)
        System.out.println("found!");
}
I think this one is O(n).
2)
for (int r = 0; r < 10000; r++)
    for (int c = 0; c < 10000; c++)
        if (c % r == 0)
            System.out.println("blah!");
I think this one is O(1), because for any input n, it will run 10000 * 10000 times. Not sure if this is right.
3)
a = 0
for (int i = 0; i < k; i++)
{
    for (int j = 0; j < i; j++)
        a++;
}
I think this one is O(i * k). I don't really know how to approach problems like this where the inner loop is affected by variables being incremented in the outer loop. Some key insights here would be much appreciated. The outer loop runs k times, and the inner loop runs 1 + 2 + 3 + ... + k times. So that sum should be (k/2) * (k+1), which would be order of k^2. So would it actually be O(k^3)? That seems too large. Again, don't know how to approach this.
4)
int key = 0; // key may be any value
int first = 0;
int last = intArray.length - 1;
int mid = 0;
boolean found = false;
while ((!found) && (first <= last))
{
    mid = (first + last) / 2;
    if (key == intArray[mid])
        found = true;
    if (key < intArray[mid])
        last = mid - 1;
    if (key > intArray[mid])
        first = mid + 1;
}
This one, I think, is O(log n). But I came to this conclusion because I believe it is a binary search, and I know from reading that its runtime is O(log n). I think it's because you divide the input size by 2 on each iteration of the loop. But I don't know if this is the correct reasoning, or how to approach similar algorithms that I haven't seen before and deduce, in a more verifiable or formal way, that they run in logarithmic time.
5)
int currentMinIndex = 0;
for (int front = 0; front < intArray.length; front++)
{
    currentMinIndex = front;
    for (int i = front; i < intArray.length; i++)
    {
        if (intArray[i] < intArray[currentMinIndex])
        {
            currentMinIndex = i;
        }
    }
    int tmp = intArray[front];
    intArray[front] = intArray[currentMinIndex];
    intArray[currentMinIndex] = tmp;
}
I am confused about this one. The outer loop runs n times, and the inner for loop runs
n + (n-1) + (n-2) + ... + (n - k) + ... + 1 times? So is that O(n^3)?
More or less, yes.
1 is correct - it seems you are searching for a specific element in what I assume is an un-sorted collection. If so, the worst case is that the element is at the very end of the list, hence O(n).
2 is correct, though a bit strange. It is O(1) assuming the loop bounds are constants and not variables; since they are constant, then yes, O(1), because there is nothing that counts as input.
3 I believe is O(k^2). There would be some constant factor, something like k^2 / 2; drop the constant and you get O(k^2).
4 looks a lot like a binary search over a sorted collection. O(log n) is correct. It is logarithmic because at each iteration you essentially halve the number of positions where the element you are looking for could be.
5 looks like a selection sort, O(n^2), for similar reasons to 3.
O() doesn't mean anything in itself: you need to specify whether you are counting the worst-case O or the average-case O. Some sorting algorithms, for example, have O(n log n) on average but O(n^2) in the worst case.
Basically, you need to count the overall number of iterations of the innermost loop and take the biggest component of the result without any constant (for example, if you have k*(k+1)/2 = 1/2 k^2 + 1/2 k, the biggest component is 1/2 k^2, therefore you are O(k^2)).
For example, your item 4) is in O(log(n)) because, if you work on an array of size n, then you will run one iteration on this array, and the next one will be on an array of size n/2, then n/4, ..., until this size reaches 1. So it is log(n) iterations.
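To make that count concrete: after t iterations the remaining portion of the array has size about n / 2^t, and the loop stops when that reaches 1, so n / 2^t = 1, i.e. t = log2(n). For example, an array of 1024 elements needs at most about 10 iterations.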
Your question is mostly about the definition of O().
When someone says this algorithm is O(log(n)), you have to read:
When the input parameter n becomes very big, the number of operations performed by the algorithm grows at most like log(n).
Now, this means two things:
You have to have at least one input parameter n. There is no point in talking about O() without one (as in your case 2).
You need to define the operations that you are counting. These can be additions, comparison between two elements, number of allocated bytes, number of function calls, but you have to decide. Usually you take the operation that's most costly to you, or the one that will become costly if done too many times.
So keeping this in mind, back to your problems:
1. n is myArray.length, and the operation you're counting is '=='. In that case the answer is exactly n, which is O(n).
2. You can't specify an n.
3. The n can only be k, and the operation you count is ++. You have exactly k*(k+1)/2, which is O(k^2), as you say.
4. This time n is the length of your array again, and the operation you count is ==. In this case the number of operations depends on the data; usually we talk about the 'worst-case scenario', meaning that of all the possible outcomes we look at the one that takes the most time. At best, the algorithm takes one comparison. For the worst case, let's take an example: if the array is [1,2,3,4,5,6,7,8,9] and you are looking for 4, intArray[mid] will become, successively, 5, 3 and then 4, so you would have done the comparison 3 times. In fact, for an array whose size is 2^k + 1, the maximum number of comparisons is k (you can check this). So n = 2^k + 1 => k = ln(n-1)/ln(2). You can extend this result to the case where n is not 2^k + 1, and you will get complexity = O(ln(n)).
In any case, I think you are confused because you don't exactly know what O(n) means. I hope this is a start.