Complexity of new/alloc for vector::push_back - C++

What is the complexity of the alloc(realloc)/new operations for the code below, for huge values of N?
As far as I understand, push_back allocates memory as:
size = cst*old_size;
cst = 2; // for gcc
So we have O(1) inside the k loop and ~O(N) inside the i loop.
In summary I have O(N), am I right?
std::vector<int> data;
for (int i = 0; i < N; ++i)
{
    for (int k = 0; k < 2 * N; ++k)
        data.push_back(k);
    for (int k = 0; k < N; ++k)
        data.pop_back();
}

vector::push_back is not exactly O(1), but amortized O(1), which is what the C++ standard requires. See Constant Amortized Time.

When reallocation happens, it doubles the allocated size of the vector, so (for arbitrarily big values of N) in the given example it will happen constant*log_2(N) times.
Yes, the complexity of the push_back call is amortized O(1) because reallocating does not take more time when the vector is big (more precisely: the time does not depend on the size), but reallocation will still happen constant*log_2(N) times inside the loop (where constant != 0).
Finally, the complexity of reallocation in the k-loop alone is O(log_2(N)), and for the whole main loop it is O(log_2(N^2)) = O(2*log_2(N)) = O(log_2(N)).
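For illustration, here is a minimal sketch (not from the original answers) that counts reallocations in the questioner's loop by watching capacity() change; pop_back never shrinks the capacity, so the count stays logarithmic in the final size:
#include <iostream>
#include <vector>

int main() {
    const int N = 1000;
    std::vector<int> data;
    std::size_t reallocations = 0;
    std::size_t last_capacity = data.capacity();
    for (int i = 0; i < N; ++i) {
        for (int k = 0; k < 2 * N; ++k) {
            data.push_back(k);
            if (data.capacity() != last_capacity) {
                ++reallocations;               // capacity changed, so a reallocation happened
                last_capacity = data.capacity();
            }
        }
        for (int k = 0; k < N; ++k)
            data.pop_back();                   // pop_back never reduces capacity
    }
    // Expect on the order of log_2(N^2) ~ 2*log_2(N) reallocations (roughly 20 for N = 1000), not O(N).
    std::cout << "reallocations: " << reallocations << "\n";
}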

What is the time complexity of these nested for loops?

Given some array of numbers, e.g. [5,11,13,26,2,5,1,9,...]
What is the time complexity of these loops? The first loop is O(n), but what is the second loop? It iterates the number of times specified at each index in the array.
for (int i = 0; i < nums.size(); i++) {
    for (int j = 0; j < nums[i]; j++) {
        // ...
    }
}
This loop has time complexity O(N*M) (using * to denote multiplication).
N is the number of items in your list, M is either the average value for your numbers, or the maximum possible value. Both would yield the same order, so use whichever is easier.
That arises because the number of times the loop body (// ...) runs is proportional to both N and M. It also assumes the body has constant complexity; if not, you need to multiply by the complexity of the body.
It is O(nums.size() + sum(nums[i])), since the inner body runs sum(nums[i]) times in total.
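A minimal sketch (not part of the original answers) that counts the inner iterations directly, using the example numbers from the question; the count equals sum(nums[i]):
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> nums = {5, 11, 13, 26, 2, 5, 1, 9};
    long long innerIterations = 0;
    for (std::size_t i = 0; i < nums.size(); i++) {
        for (int j = 0; j < nums[i]; j++) {
            ++innerIterations;                 // one unit of work per inner iteration
        }
    }
    // innerIterations == sum(nums[i]); total work is O(nums.size() + sum(nums[i])),
    // which is bounded above by O(N*M) with M the maximum value.
    std::printf("inner iterations: %lld\n", innerIterations);
}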

Big O-Notation of N-Dimensional array

What is the complexity of the two algorithms below (size is the length of each dimension)?
void a(int** arr, int size) {
    int k = 0;
    for (int i = 0; i < size; ++i)
    {
        for (int j = 0; j < size; ++j)
        {
            arr[i][j] += 1;
        }
    }
    print(arr, size);
}
void b(int*** arr, int size) {
    int m = 0;
    for (int i = 0; i < size; ++i)
    {
        for (int j = 0; j < size; ++j)
        {
            for (int k = 0; k < size; ++k)
            {
                arr[i][j][k] += 1;
            }
        }
    }
    print(arr, size);
}
I believe the first function is O(N^2) and the second function is O(N^3). Is this right?
For an N-D array where each dimension has size N, I am saying the complexity will be N!. Is this correct?
Time complexity:
The time complexity of the first function is O(size^2).
The time complexity of the second function is O(size^3).
The time complexity of a similar function over an N-dimensional array, each dimension of size N, would be O(N^N), since the iterations required would be N * N * N ... up to N times.
So, you were correct on the first two, O(N^2) and O(N^3), if by N you meant size. The last statement, however, was incorrect: N! grows more slowly than N^N, so N! would be the wrong bound. It should be O(N^N).
I believe the first function is O(N^2) and the second function is O(N^3). Is this right?
Yes, it is N * N for the first, and N * N * N for the second
For any N-D array of N size I am saying the complexity will be N!. Is this correct?
Not exactly. The complexity will be N^N (N to the Nth power), which is higher
N^N = N * N * .... * N
N! = N * (N - 1) * ... * 1
(To find the ratio between the two, you can use Stirling's approximation, incidentally.)
I believe the first function is O(N^2) and the second function is O(N^3). Is this right?
For any N-D array of N size I am saying the complexity will be N!. Is this correct?
I think you skipped an important step in your analysis. You started by looking at two sample cases (2-D and 3-D). So far, so good. You analyzed the complexity in those two cases, deciding the 2-D case is O(N^2) and the 3-D is O(N^3). Also good. But then you skipped a step.
The next step should be to generalize to arbitrary dimension D. You looked at two sample cases, you see the 2 and the 3 appearing in the formulas, so it is reasonable to theorize that you can replace that with D. The theory is that for an array of dimension D, the complexity is O(N^D). Ideally you do some more work to either prove this or at least check that it holds in a case you have not looked at yet, like 4-D. Once you have confidence in this result, you are ready to move on.
It is only after getting the formula for the arbitrary-dimension case that you should specialize to the case where the dimension equals the size. This result is rather easy: since D == N, it is valid to replace D with N in your formula, and the complexity is O(N^N).
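As a sanity check of the O(N^D) generalization, here is a small hypothetical helper (not from the question) that computes how many times the innermost += 1 would run for a given dimension and size:
#include <cstdio>

// One nested loop per dimension, each running `size` times.
long long touches(int dimensions, int size) {
    if (dimensions == 0)
        return 1;                                            // nothing left to iterate over
    return (long long)size * touches(dimensions - 1, size);  // size^dimensions overall
}

int main() {
    std::printf("%lld\n", touches(2, 10));   // 10^2, the a() case
    std::printf("%lld\n", touches(3, 10));   // 10^3, the b() case
    std::printf("%lld\n", touches(5, 5));    // 5^5,  D == N gives N^N
}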

What is the runtime complexity of std::map in C++?

I'm still a little confused about what the runtime complexity is of a std::map in C++. I know that the first for loop in the algorithm below takes O(N) or linear runtime. However, the second for loop has another for loop iterating over the map. Does that add anything to the overall runtime complexity? In other words, what is the overall runtime complexity of the following algorithm? Is it O(N) or O(Nlog(N)) or something else?
vector<int> smallerNumbersThanCurrent(vector<int>& nums) {
    vector<int> result;
    map<int, int> mp;
    for (int i = 0; i < nums.size(); i++) {
        mp[nums[i]]++;
    }
    for (int i = 0; i < nums.size(); i++) {
        int numElements = 0;
        for (auto it = mp.begin(); it != mp.end(); it++) {
            if (it->first < nums[i]) numElements += it->second;
        }
        result.push_back(numElements);
    }
    return result;
}
The logarithmic complexity usually quoted for a map applies to insertion, deletion, search, and so on. Iterating over the whole map is always linear in its size.
Having two for loops nested like this produces O(N^2) time, map or not: the inner loop runs N times (the size of the map) for each iteration of the outer loop (the size of the vector, which in your code matches the size of the map when the values are distinct).
Your second for loop runs nums.size() times, so call that N. The map has (at most) as many entries as nums, so it also contains N entries. Two nested loops, each of size N, give N*N, or N^2.
The begin and end functions of map are constant time; from what I can tell, they each just return a stored pointer:
C++ map.end function documentation
Note that if you do have two nested for loops but the outer one has size N and the inner one a different size, say M, then the complexity is M*N, not N^2. Be careful on that point; but yes, if N is the same for both loops, the runtime is N^2.
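To see the quadratic behaviour concretely, here is a minimal instrumented sketch (not from the original answers) that counts how many map nodes the inner loop visits; the count works out to nums.size() * mp.size():
#include <cstdio>
#include <map>
#include <vector>

int main() {
    std::vector<int> nums = {8, 1, 2, 2, 3};
    std::map<int, int> mp;
    for (std::size_t i = 0; i < nums.size(); i++)
        mp[nums[i]]++;                          // first loop: N insertions, O(log N) each

    long long nodesVisited = 0;
    std::vector<int> result;
    for (std::size_t i = 0; i < nums.size(); i++) {
        int numElements = 0;
        for (auto it = mp.begin(); it != mp.end(); it++) {
            ++nodesVisited;                     // one map node visited per inner iteration
            if (it->first < nums[i]) numElements += it->second;
        }
        result.push_back(numElements);
    }
    // nodesVisited == nums.size() * mp.size(), i.e. O(N^2) when the values are distinct.
    std::printf("nodes visited: %lld\n", nodesVisited);
}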

O(NLogN) shows better performance than O(N) if unordered_set is used

//Time: sorting O(N log N) + binary search for N items at log N each = 2 N log N = O(N log N).
//Space: O(1).
bool TwoSum::TwoSumSortAndBinarySearch(int* arr, int size, int sum)
{
    sort(arr, arr + size);
    for (int i = 0; i < size; i++)
    {
        if (binary_search(arr + i + 1, arr + size, sum - arr[i]))
            return true;
    }
    return false;
}
//Time: O(N) as time complexity of Add and Search in hashset/unordered_set is O(1).
//Space: O(N)
bool TwoSum::TwoSumHashSet(int* arr, int size, int sum)
{
    unordered_set<int> hash;
    for (int i = 0; i < size; i++)
    {
        if (hash.find(sum - arr[i]) != hash.end())
            return true;
        hash.insert(arr[i]);
    }
    return false;
}
int* TwoSum::Testcase(int size)
{
    int* in = new int[size];
    for (int i = 0; i < size; i++)
    {
        in[i] = rand() % (size + 1); //random number b/w 0 to N.
    }
    return in;
}
int main()
{
    int size = 5000000;
    int* in = TwoSum::Testcase(size);
    auto start = std::chrono::system_clock::now(); //clock start
    bool output = TwoSum::TwoSumHashSet(in, size, INT_MAX);
    auto end = std::chrono::system_clock::now(); //clock end
    std::chrono::duration<double> elapsed_seconds = end - start;
    cout << "elapsed time: " << elapsed_seconds.count() << "s\n";
}
I measured the performance of the above two methods, which solve the TwoSum problem.
In the First approach, I am sorting the array then using binary search.
Time: O(NLogN).
space - O(1).
In the second approach, unordered_set is used, whose operations are constant time on average and, in the worst case, linear in the size of the container.
//Time: O(N) as time complexity of Add and Search in hashset/unordered_set is O(1).
//Space: O(N)
Here are the times, in seconds, taken by three runs of each of these two methods:
TwoSumSortAndBinarySearch    TwoSumHashSet
8.05                         15.15
7.76                         14.47
7.74                         14.28
So it is clear that TwoSumSortAndBinarySearch performs noticeably better than TwoSumHashSet.
Which approach is preferable and recommended in a real scenario, and why?
This is because computational complexity doesn’t take into account the behavior of multi-level memory system present in every modern computer. And it is precisely because you measure that behavior via proxy using time (!!), that your measurement is not “like” theoretical computational complexity. Computational complexity predicts execution times only in very well controlled situations, when the code is optimal for the platform. If you want to measure complexity, you can’t measure time. Measure operation counts. It will agree with theory then.
In my limited experience, it is rather rare that computational complexity theory would predict runtimes on reasonably sized data sets, when the behavior is neither exponential nor cubic (or higher terms). Cache access patterns and utilization of architectural parallelism are major predictors of performance, before computational complexity comes into play.
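A minimal sketch of the "measure operation counts" suggestion, under the assumption that we count element comparisons for sort/binary_search and container operations for the hash set; the details are illustrative only:
#include <algorithm>
#include <climits>
#include <cstdio>
#include <cstdlib>
#include <unordered_set>
#include <vector>

int main() {
    const int size = 5000000;
    const int sum = INT_MAX;                    // same "never found" case as the question
    std::vector<int> arr(size);
    for (int i = 0; i < size; i++)
        arr[i] = std::rand() % (size + 1);

    // Sort + binary search: count element comparisons via a counting comparator.
    long long comparisons = 0;
    auto cmp = [&comparisons](int a, int b) { ++comparisons; return a < b; };
    std::vector<int> sorted = arr;
    std::sort(sorted.begin(), sorted.end(), cmp);
    for (int i = 0; i < size; i++)
        std::binary_search(sorted.begin() + i + 1, sorted.end(), sum - arr[i], cmp);

    // Hash set: one find and one insert per element.
    long long hashOps = 0;
    std::unordered_set<int> hash;
    for (int i = 0; i < size; i++) {
        ++hashOps;                              // find
        if (hash.find(sum - arr[i]) != hash.end())
            break;
        ++hashOps;                              // insert
        hash.insert(arr[i]);
    }

    // Expect comparisons ~ N log N and hashOps ~ 2N: by operation count the hash set
    // does less work, even though the timed runs above favour sort + binary search.
    std::printf("comparisons: %lld, hash ops: %lld\n", comparisons, hashOps);
}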

Improving O(n) while looping through a 2d array in C++

A goal of mine is to reduce my O(n^2) algorithms to O(n), since this kind of traversal is common in my Array2D class. Array2D holds a multidimensional array of type T. A common issue I see is using doubly-nested for loops to traverse an array, which is slow depending on the size.
As you can see, I reduced my doubly-nested for loops into a single for loop here. It's running fine when I execute it. Speed has surely improved. Is there any other way to improve the speed of this member function? I'm hoping to use this algorithm as a model for my other member functions that have similar operations on multidimensional arrays.
/// <summary>
/// Fills all items within the array with a value.
/// </summary>
/// <param name="ob">The object to insert.</param>
void fill(const T &ob)
{
    if (m_array == NULL)
        return;
    //for (int y = 0; y < m_height; y++)
    //{
    //    for (int x = 0; x < m_width; x++)
    //    {
    //        get(x, y) = ob;
    //    }
    //}
    int size = m_width * m_height;
    int y = 0;
    int x = 0;
    for (int i = 0; i < size; i++)
    {
        get(x, y) = ob;
        x++;
        if (x >= m_width)
        {
            x = 0;
            y++;
        }
    }
}
Make sure things are contiguous in memory as cache behavior is likely to dominate the run-time of any code which performs only simple operations.
For instance, don't use this:
int* a[10];
for(int i=0;i<10;i++)
    a[i] = new int[10];
//Also not this
std::vector<std::vector<int>> a(10, std::vector<int>(10));
Use this:
int a[100];
//or
std::vector<int> a(100);
Now, if you need 2D access use:
for(int y=0;y<HEIGHT;y++)
    for(int x=0;x<WIDTH;x++)
        a[y*WIDTH+x];
Use 1D accesses for tight loops, whole-array operations which don't rely on knowledge of neighbours, or for situations where you need to store indices:
for(int i=0;i<HEIGHT*WIDTH;i++)
    a[i];
Note that in the above two loops the number of items touched is HEIGHT*WIDTH in both cases. Though it may appear that one has a time complexity of O(N^2) and the other O(N), it should be obvious that the net amount of work done is HEIGHT*WIDTH in both cases. It is better to think of N as the total number of items touched by an operation, rather than as a property of the way in which they are touched.
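As a sketch of the contiguous layout recommended above, here is a hypothetical Grid2D (not the asker's Array2D) that stores its elements in a single std::vector and maps 2-D access onto a 1-D index:
#include <vector>

template <typename T>
class Grid2D {
public:
    Grid2D(int width, int height, const T& value = T())
        : m_width(width), m_height(height), m_data(width * height, value) {}

    T& get(int x, int y) { return m_data[y * m_width + x]; }  // row-major mapping

    void fill(const T& value) {
        // One pass over contiguous memory; still W*H assignments, same as nested loops.
        for (std::size_t i = 0; i < m_data.size(); ++i)
            m_data[i] = value;
    }

private:
    int m_width;
    int m_height;
    std::vector<T> m_data;                      // single contiguous block
};

int main() {
    Grid2D<int> grid(640, 480);
    grid.fill(7);
    grid.get(10, 20) += 1;
}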
Sometimes you can compute Big O by counting loops, but not always.
for (int m = 0; m < M; m++)
{
    for (int n = 0; n < N; n++)
    {
        doStuff();
    }
}
Big O is a measure of "How many times is doStuff executed?" With the nested loops above it is executed MxN times.
If we flatten it to 1 dimension
for (int i = 0; i < M * N; i++)
{
    doStuff();
}
We now have one loop that executes MxN times. One loop. No improvement.
If we unroll the loop or play games with something like Duff's device
for (int i = 0; i < M * N; i += N)
{
    doStuff(); // 0
    doStuff(); // 1
    ....
    doStuff(); // N-1
}
We still have MxN calls to doStuff. Some days you just can't win with Big O. If you must call doStuff on every element in an array, no matter how many dimensions, you cannot reduce Big O. But if you can find a smarter algorithm that allows you to avoid calls to doStuff... That's what you are looking for.
For Big O, anyway. Sometimes you'll find stuff that has an as-bad-or-worse Big O yet it outperforms. One of the classic examples of this is std::vector vs std::list. Due to caching and prediction in a modern CPU, std::vector scores a victory that slavish obedience to Big O would miss.
Side note (Because I regularly smurf this up myself): O(n) means if you double n, you double the work. This is why O(n) is the same as O(1,000,000 n). O(n^2) means if you double n, you do 2^2 times the work. If you are ever puzzled by an algorithm, drop a counter into the operation you're concerned with and do a batch of test runs with various Ns. Then check the relationship between the counters at those Ns.
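A minimal sketch of that suggestion: drop a counter into doStuff and run the nested loops for several values of N; doubling N should roughly quadruple the count for an O(N^2) loop:
#include <cstdio>

long long counter = 0;
void doStuff() { ++counter; }                   // stand-in for the real work

int main() {
    for (int N = 1000; N <= 8000; N *= 2) {
        counter = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                doStuff();
        // Doubling N roughly quadruples the count => O(N^2).
        std::printf("N = %d, calls = %lld\n", N, counter);
    }
}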