What is the time complexity of this particular code? - c++

I have created a simple program which keeps a count of the elements in an array using an unordered map. I wanted to know the time complexity of the program below.
Is it simply O(n) time?
How much time do the operations on the unordered map take?
(i.e. looking up a key in the map and, if it is present, incrementing its value by 1, and if not, initializing it to 1)
Is this done in constant time, or in logarithmic or linear time?
If not in constant time, then please suggest a better approach.
#include <unordered_map>
#include <iostream>

int main()
{
    int n;
    std::cin >> n;
    int arr[100];  // note: assumes n <= 100; larger n overflows the buffer
    for (int i = 0; i < n; i++)
        std::cin >> arr[i];
    std::unordered_map<int, int> dp;
    for (int i = 0; i < n; i++)
    {
        if (dp.find(arr[i]) != dp.end())
            dp[arr[i]]++;
        else
            dp[arr[i]] = 1;
    }
}

The documentation says that std::unordered_map::find() has a complexity of:
Constant on average, worst case linear in the size of the container.
So you get an average complexity of O(n) and a worst-case complexity of O(n^2) for the whole loop.
Addendum:
Since you use ints as keys and no custom hash function, I think it is safe to assume O(1) for find, since you probably won't hit the worst case.
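As a side note (not part of the original answer): the explicit find is unnecessary, because operator[] value-initializes the mapped value of a missing key to 0 before it is incremented. A minimal sketch of the same counting loop; the helper name countElements is chosen here for illustration:

#include <unordered_map>
#include <vector>

// Counting without the explicit find: operator[] inserts a missing key
// with value 0, so the if/else branch can be dropped.
std::unordered_map<int, int> countElements(const std::vector<int>& arr)
{
    std::unordered_map<int, int> dp;
    for (int x : arr)
        dp[x]++;  // average O(1) per element, O(n) overall
    return dp;
}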

Find duplicate in unsorted array with best time Complexity

I know there were similar questions, but none as specific as this.
Input: an n-element array of unsorted elements with values from 1 to (n-1);
one of the values is duplicated (e.g. n=5, tab[n] = {3,4,2,4,1}).
Task: find the duplicate with the best complexity.
I wrote this algorithm:
int tab[] = { 1,6,7,8,9,4,2,2,3,5 };
int arrSize = sizeof(tab) / sizeof(tab[0]);
// mark each seen value by adding arrSize to the slot it indexes
for (int i = 0; i < arrSize; i++) {
    tab[tab[i] % arrSize] = tab[tab[i] % arrSize] + arrSize;
}
// a slot bumped twice holds at least 2 * arrSize: its index is the duplicate
for (int i = 0; i < arrSize; i++) {
    if (tab[i] >= arrSize * 2) {
        std::cout << i;
        break;
    }
}
but I don't think it has the best possible complexity.
Do you know a better method/algorithm? I can use any C++ library, but I don't have any ideas.
Is it possible to get better complexity than O(n)?
In terms of big-O notation, you cannot beat O(n) (the same as your solution here). But you can get better constants and a simpler algorithm, by using the property that the sum of the elements 1,...,n-1 is well known.
int sum = 0;
for (int x : tab) {  // tab holds n elements with values 1..n-1
    sum += x;
}
int duplicate = sum - (n * (n - 1) / 2);  // subtract the known sum 1+...+(n-1)
The constants here will be significantly better, as each array index is accessed exactly once, which is much more cache friendly and efficient on modern architectures.
(Note, this solution ignores integer overflow, but it is easy to account for it by using twice as many bits in sum as the array's elements have.)
Adding the classic answer because it was requested. It is based on the idea that if you xor a number with itself you get 0. So if you xor all numbers from 1 to n - 1 and all numbers in the array, you will end up with the duplicate.
int duplicate = arr[0];
for (int i = 1; i < n; i++) {  // arr holds n values from 1 to n-1
    duplicate = duplicate ^ arr[i] ^ i;
}
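For reference, a self-contained version of the same xor idea, run on the example from the question (the main wrapper and the vector are added here for illustration):

#include <iostream>
#include <vector>

int main() {
    // n = 5 elements with values 1..4; 4 is the duplicate
    std::vector<int> arr{3, 4, 2, 4, 1};
    int n = static_cast<int>(arr.size());
    int duplicate = arr[0];
    for (int i = 1; i < n; i++) {
        duplicate = duplicate ^ arr[i] ^ i;
    }
    std::cout << duplicate << "\n";  // prints 4
}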
Don't focus too much on asymptotic complexity. In practice the fastest algorithm is not necessarily the one with the lowest asymptotic complexity. That is because constants are not taken into account: O(huge_constant * N) == O(N) == O(tiny_constant * N).
You cannot inspect N values in less than O(N). But you do not need a full pass through the array: you can stop once you have found the duplicate:
#include <iostream>
#include <vector>

int main() {
    std::vector<int> vals{1, 2, 4, 6, 5, 3, 2};
    std::vector<bool> present(vals.size());
    for (const auto& e : vals) {
        if (present[e]) {
            std::cout << "duplicate is " << e << "\n";
            break;
        }
        present[e] = true;
    }
}
In the "lucky case" the duplicate is at index 2. In the worst case the whole vector has to be scanned. On average it is again O(N) time complexity. Further it uses O(N) additional memory while yours is using no additional memory. Again: Complexity alone cannot tell you which algorithm is faster (especially not for a fixed input size).
No matter how hard you try, you won't beat O(N), because no matter in what order you traverse the elements (and remember already found elements), the best and worst case are always the same: Either the duplicate is in the first two elements you inspect or it's the last, and on average it will be O(N).

Time complexity of reading input data

Is there any cost to reading a 2D array as input?
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        cin >> a[i][j];
    }
}
Is it O(n^2) time complexity or O(1)?
Time complexity depends on how you define it. In competitive programming, you might be given a ready-made 2D matrix and have to compute a particular answer from it. In that setting, the approach you take to compute the answer defines your time complexity, and reading the input is not counted, since you need the 2D matrix regardless.
In simple terms, if you are given n as a variable and you need to read n*n elements, the complexity of that input loop is O(n^2).

What is the runtime complexity of std::map in C++?

I'm still a little confused about the runtime complexity of a std::map in C++. I know that the first for loop in the algorithm below takes O(N), i.e. linear, runtime. However, the second for loop contains another loop iterating over the map. Does that add anything to the overall runtime complexity? In other words, what is the overall runtime complexity of the following algorithm? Is it O(N), O(N log N), or something else?
vector<int> smallerNumbersThanCurrent(vector<int>& nums) {
    vector<int> result;
    map<int, int> mp;
    for (int i = 0; i < nums.size(); i++) {
        mp[nums[i]]++;
    }
    for (int i = 0; i < nums.size(); i++) {
        int numElements = 0;
        for (auto it = mp.begin(); it != mp.end(); it++) {
            if (it->first < nums[i]) numElements += it->second;
        }
        result.push_back(numElements);
    }
    return result;
}
The stated complexity of a map covers insertion, deletion, search, and so on; iteration over it is always linear.
Having two for loops nested like this produces O(N^2) time complexity, map or not, given the inner loop's iterations (the size of the map) for each iteration of the outer loop (the size of the vector; in the worst case, with all elements distinct, the map has the same size).
Your second for loop runs nums.size() times, so let's call that N. The map holds at most as many entries as nums, so it contains up to N entries. Two nested loops of size N give N*N, i.e. N^2.
The begin and end functions of map are constant time, since they just return a stored iterator, as far as I can tell from the documentation:
C++ map.end function documentation
Note that if you have two nested loops where the outer one has size N and the inner one a different size M, the complexity is M*N, not N^2. Be careful on that point; but yes, if N is the same for both loops, the runtime is N^2.
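For comparison, here is a sketch (not from the original answers) of how the same result can be computed in O(N log N): walk the sorted map once, converting each count into a running "how many elements are strictly smaller" total, then answer each element with a single lookup. The restructured function body below is illustrative and assumes C++17:

#include <map>
#include <vector>
using std::map;
using std::vector;

vector<int> smallerNumbersThanCurrent(vector<int>& nums) {
    map<int, int> mp;
    for (int x : nums) mp[x]++;        // O(N log N) to build

    // One linear pass in ascending key order: replace each count with
    // the number of elements strictly smaller than the key.
    int running = 0;
    for (auto& [key, cnt] : mp) {
        int c = cnt;
        cnt = running;
        running += c;
    }

    vector<int> result;
    result.reserve(nums.size());
    for (int x : nums)
        result.push_back(mp[x]);       // one O(log N) lookup per element
    return result;
}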

std multiset insert and keep length fixed

I am interested in inserting elements into a std::multiset, but I would like to keep the set at a fixed length: every time an element is inserted, the last (largest) element should be removed. I came up with the following solution:
#include <cstdlib>
#include <iostream>
#include <set>
#include <utility>

int main() {
    std::multiset<std::pair<double, int>> ms;
    for (int i = 0; i < 10; i++) {
        ms.insert(std::pair<double, int>(double(rand()) / RAND_MAX, i));
    }
    ms.insert(std::pair<double, int>(0.5, 10));
    ms.erase(--ms.end());  // drop the largest element to restore the size
    for (auto el : ms) {
        std::cout << el.first << "\t" << el.second << std::endl;
    }
    return 0;
}
I will be doing something similar to this many times in my code, on sets on the order of 1000 elements. Is there a more performant way of doing this? I am worried that the erase will cause memory reallocation and slow down the code.
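(No answer was included here; the following is an added sketch, not an accepted solution.) Since std::multiset is node-based, erasing an element deallocates only that node; the remaining elements are never reallocated or moved. One common micro-optimization for this insert-then-trim pattern is to skip the insert entirely when the new value would be trimmed right away. The helper name bounded_insert is hypothetical:

#include <cstddef>
#include <iterator>
#include <set>
#include <utility>

// Hypothetical helper: keep ms at no more than `capacity` elements,
// dropping the largest when the bound is exceeded.
template <typename T>
void bounded_insert(std::multiset<T>& ms, T value, std::size_t capacity) {
    // If the set is full and the new value is not smaller than the
    // current maximum, it would be erased immediately: do nothing.
    if (!ms.empty() && ms.size() >= capacity && !(value < *ms.rbegin()))
        return;
    ms.insert(std::move(value));
    if (ms.size() > capacity)
        ms.erase(std::prev(ms.end()));  // erase frees one node only
}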

Determining the number of shifts performed in insertion sort?

I am trying to solve this question: http://www.mycodeschool.com/work-outs/sorting/7
The question is to find the number of shifts performed in insertion sort.
I have written the code below, but I couldn't figure out where my logic goes wrong:
http://ideone.com/GGjZjw
#include <iostream>
#include <cstdio>
#include <cmath>

using namespace std;

int main()
{
    int T, count, n, *a;
    cin >> T;
    int value, hole;
    while (T--)
    {
        cin >> n;
        count = 0;
        a = new int[n];
        // read the input array
        for (int i = 0; i < n; i++)
        {
            cin >> a[i];
        }
        // treat the 0th element as already sorted and the
        // remaining list as unsorted
        for (int i = 1; i < n; i++)
        {
            value = a[i];
            hole = i;
            // shift larger elements one slot to the right
            while (hole > 0 && a[hole - 1] > value)
            {
                a[hole] = a[hole - 1];
                hole = hole - 1;
                count++;
            }
            a[hole] = value;
        }
        cout << count << endl;  // print the number of shifts
        delete[] a;             // release the array before the next test case
    }
    return 0;
}
The number of swaps made in insertion sort is equal to the number of inversions in the array (the number of pairs of elements that are out of order). There is a well-known divide-and-conquer algorithm for counting the number of inversions in an array that runs in time O(n log n). It's based on a slightly modified version of mergesort, and I think you shouldn't have too much trouble coding it up.
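A minimal sketch of that divide-and-conquer idea (the function name and layout are mine, not templatetypedef's): mergesort the range and, whenever an element from the right half is placed before remaining elements of the left half, those remaining elements each form an inversion with it.

#include <vector>

// Counts inversions in a[lo, hi) while sorting it; buf is scratch space
// of the same size as a. Runs in O(n log n).
long long countInversions(std::vector<int>& a, int lo, int hi,
                          std::vector<int>& buf) {
    if (hi - lo < 2) return 0;
    int mid = lo + (hi - lo) / 2;
    long long inv = countInversions(a, lo, mid, buf)
                  + countInversions(a, mid, hi, buf);
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi) {
        if (a[j] < a[i]) {
            inv += mid - i;   // a[i..mid) are all greater than a[j]
            buf[k++] = a[j++];
        } else {
            buf[k++] = a[i++];
        }
    }
    while (i < mid) buf[k++] = a[i++];
    while (j < hi)  buf[k++] = a[j++];
    for (int t = lo; t < hi; t++) a[t] = buf[t];
    return inv;
}

Called as countInversions(v, 0, (int)v.size(), buf) with buf sized like v; for {2, 4, 1, 3, 5} it returns 3, matching the three inversions (2,1), (4,1), (4,3).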
The problem with your approach is that you're not correctly implementing insertion sort; what you've achieved is an inverse bubble sort.
For a slightly simpler (yet with worse complexity :P) solution than @templatetypedef's O(n log n) one, you can count in the same complexity as the sort itself, O(n^2), by applying the correct implementation.
You should implement a function swap(int* array, int index_a, int index_b) and then count how many times this function is called.
This link to Wikipedia has good pseudo-code for you.
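A sketch of that suggestion (names are illustrative): insertion sort expressed through adjacent swaps, where the number of swap calls equals the number of inversions, i.e. the number of shifts.

// Swap two entries and count the call.
static long long swapCalls = 0;

void countedSwap(int* array, int index_a, int index_b) {
    int tmp = array[index_a];
    array[index_a] = array[index_b];
    array[index_b] = tmp;
    ++swapCalls;
}

// Insertion sort via adjacent swaps: each swap removes exactly one
// inversion, so swapCalls ends up equal to the number of shifts.
void insertionSortCounting(int* a, int n) {
    for (int i = 1; i < n; i++)
        for (int j = i; j > 0 && a[j - 1] > a[j]; j--)
            countedSwap(a, j - 1, j);
}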