Is there any cost to taking a 2D array as input?
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        cin >> a[i][j];
    }
}
Is it O(n^2) time complexity or O(1)?
It depends on what you count. In competitive programming you are often handed a ready-made 2D matrix and asked to compute some answer; in that setting, the approach you take to compute the answer is what determines the complexity you report, and reading the input is usually not counted because you need the matrix anyway.
In simple terms, though, if n is a variable and you have to read n*n elements, then reading the input by itself is O(n^2).
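As a minimal sketch of why (assuming the matrix is stored in a std::vector of std::vector<int>): the number of extraction operations performed is exactly n*n, which is what O(n^2) expresses.

#include <iostream>
#include <vector>

int main() {
    int n;
    std::cin >> n;
    std::vector<std::vector<int>> a(n, std::vector<int>(n));
    long long reads = 0;                 // counts the >> operations
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            std::cin >> a[i][j];
            ++reads;
        }
    }
    std::cout << reads << " reads\n";    // prints n * n, i.e. O(n^2) work
}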
I know there are similar questions, but none this specific.
Input: an array of n unsorted elements with values from 1 to (n-1).
One of the values is a duplicate (e.g. n=5, tab[n] = {3,4,2,4,1}).
Task: find the duplicate with the best possible complexity.
I wrote this algorithm:
int tab[] = { 1,6,7,8,9,4,2,2,3,5 };
int arrSize = sizeof(tab) / sizeof(tab[0]);

for (int i = 0; i < arrSize; i++) {
    tab[tab[i] % arrSize] = tab[tab[i] % arrSize] + arrSize;
}

for (int i = 0; i < arrSize; i++) {
    if (tab[i] >= arrSize * 2) {
        std::cout << i;
        break;
    }
}
But I don't think it has the best possible complexity.
Do you know a better method/algorithm? I can use any C++ library, but I don't have any ideas.
Is it possible to get better complexity than O(n)?
In terms of big-O notation, you cannot beat O(n), which is what your solution already achieves. But you can get better constants and a simpler algorithm by using the fact that the sum of the elements 1,...,n-1 is well known: it is n(n-1)/2.
int sum = 0;
for (int x : tab) {
    sum += x;
}
int duplicate = sum - n * (n - 1) / 2;
The constants here will be significantly better, as each array index is accessed exactly once, which is much more cache friendly and efficient on modern architectures.
(Note that this solution ignores integer overflow, but it is easy to account for it by giving sum twice as many bits as the array's elements.)
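A minimal, self-contained sketch of this approach, assuming the question's example {3,4,2,4,1} is stored in a std::vector<int> and using long long for the sum to sidestep the overflow concern:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> tab = {3, 4, 2, 4, 1};       // n = 5, values 1..n-1, one duplicate
    long long n = static_cast<long long>(tab.size());
    long long sum = 0;
    for (int x : tab) {
        sum += x;                                 // each element is read exactly once
    }
    long long duplicate = sum - n * (n - 1) / 2;  // subtract the expected sum 1 + ... + (n-1)
    std::cout << "duplicate is " << duplicate << "\n";   // prints 4
}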
Adding the classic answer because it was requested. It is based on the idea that XORing a number with itself gives 0, so if you XOR all numbers from 1 to n-1 together with all numbers in the array, everything cancels except the duplicate.
// arr is a std::vector<int> of length n holding the values
int duplicate = arr[0];
for (int i = 1; i < (int)arr.size(); i++) {
    duplicate = duplicate ^ arr[i] ^ i;
}
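And a self-contained usage sketch of the same idea on the question's example input (assuming a std::vector<int>):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> arr = {3, 4, 2, 4, 1};   // n = 5, values 1..n-1, one duplicate
    int duplicate = arr[0];
    for (int i = 1; i < (int)arr.size(); i++) {
        duplicate = duplicate ^ arr[i] ^ i;   // every value that appears once cancels out
    }
    std::cout << "duplicate is " << duplicate << "\n";   // prints 4
}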
Don't focus too much on asymptotic complexity. In practice the fastest algorithm is not necessarily the one with the lowest asymptotic complexity, because constants are not taken into account: O(huge_constant * N) == O(N) == O(tiny_constant * N).
You cannot inspect N values in less than O(N) time, though you do not always need a full pass through the array: you can stop as soon as you find the duplicate:
#include <iostream>
#include <vector>

int main() {
    std::vector<int> vals{1, 2, 4, 6, 5, 3, 2};
    std::vector<bool> present(vals.size());   // values are in [1, n-1], so n flags suffice
    for (const auto& e : vals) {
        if (present[e]) {                     // seen before -> this is the duplicate
            std::cout << "duplicate is " << e << "\n";
            break;
        }
        present[e] = true;
    }
}
In the "lucky case" the duplicate is at index 2. In the worst case the whole vector has to be scanned. On average it is again O(N) time complexity. Further it uses O(N) additional memory while yours is using no additional memory. Again: Complexity alone cannot tell you which algorithm is faster (especially not for a fixed input size).
No matter how hard you try, you won't beat O(N): whatever order you traverse the elements in (remembering the ones you have already seen), in the best case the duplicate is among the first two elements you inspect and in the worst case it is the last one, so on average it is still O(N).
Given some array of numbers, e.g. [5,11,13,26,2,5,1,9,...]
What is the time complexity of these loops? The outer loop is O(n), but what about the inner loop? It iterates the number of times specified at each index in the array.
for (int i = 0; i < nums.size(); i++) {
    for (int j = 0; j < nums[i]; j++) {
        // ...
    }
}
This loop has time complexity O(N*M) (using * to denote multiplication).
N is the number of items in your list, and M is either the average value of your numbers or the maximum possible value; both yield the same order, so use whichever is easier.
That arises because the number of times the loop body runs is proportional to both N and M. It also assumes the body (the ... part) has constant complexity; if not, you need to multiply by the complexity of that body.
It is O(nums.size() + sum(nums[i])): one unit of work for each outer iteration plus one for each inner iteration.
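A quick way to convince yourself (a hedged sketch with made-up values): drop a counter into the inner body and compare it with the sum of the elements.

#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> nums = {5, 11, 13, 26, 2, 5, 1, 9};   // example values
    long long innerIterations = 0;
    for (std::size_t i = 0; i < nums.size(); i++) {
        for (int j = 0; j < nums[i]; j++) {
            ++innerIterations;                              // one unit of work per inner step
        }
    }
    long long total = std::accumulate(nums.begin(), nums.end(), 0LL);
    std::cout << innerIterations << " == " << total << "\n";   // counter equals sum(nums[i])
}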
I have created a simple program which keeps a count of the elements in an array using an unordered map. I wanted to know the time complexity of the program below.
Is it simply O(n) time?
How much time do the operations on the unordered map require?
(i.e. looking up a key in the map and, if it is present, incrementing its value by 1, and if not, initializing it to 1)
Is this done in constant time or some logarithmic or linear time?
If not in constant time then please suggest me a better approach.
#include <unordered_map>
#include <iostream>

int main()
{
    int n;
    std::cin >> n;
    int arr[100];                 // note: assumes n <= 100
    for (int i = 0; i < n; i++)
        std::cin >> arr[i];

    std::unordered_map<int, int> dp;
    for (int i = 0; i < n; i++)
    {
        if (dp.find(arr[i]) != dp.end())
            dp[arr[i]]++;
        else
            dp[arr[i]] = 1;
    }
}
The documentation says that std::unordered_map::find() has a complexity of:
Constant on average, worst case linear in the size of the container.
So your loop has an average complexity of O(n) and a worst-case complexity of O(n^2).
Addendum:
Since you use ints as keys and no custom hash function, I think it is safe to assume O(1) for find, since you probably won't hit the worst case.
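As a side note (a hedged simplification, reusing n and arr from the question): operator[] value-initializes a missing key's mapped value to 0, so the find/insert branch can collapse into a single line with the same average O(n) behavior.

std::unordered_map<int, int> dp;
for (int i = 0; i < n; i++)
    dp[arr[i]]++;   // operator[] inserts the key with value 0 if it is absent, then we increment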
A goal of mine is to reduce my O(n^2) algorithms to O(n), as this pattern is common in my Array2D class. Array2D holds a multidimensional array of type T. A common issue I see is using doubly-nested for loops to traverse an array, which is slow depending on the size.
As you can see, I reduced my doubly-nested for loops to a single for loop here. It runs fine when I execute it, and the speed has surely improved. Is there any other way to improve the speed of this member function? I'm hoping to use this algorithm as a model for my other member functions that perform similar operations on multidimensional arrays.
/// <summary>
/// Fills all items within the array with a value.
/// </summary>
/// <param name="ob">The object to insert.</param>
void fill(const T &ob)
{
    if (m_array == NULL)
        return;

    //for (int y = 0; y < m_height; y++)
    //{
    //    for (int x = 0; x < m_width; x++)
    //    {
    //        get(x, y) = ob;
    //    }
    //}

    int size = m_width * m_height;
    int y = 0;
    int x = 0;
    for (int i = 0; i < size; i++)
    {
        get(x, y) = ob;
        x++;
        if (x >= m_width)
        {
            x = 0;
            y++;
        }
    }
}
Make sure things are contiguous in memory as cache behavior is likely to dominate the run-time of any code which performs only simple operations.
For instance, don't use this:
int* a[10];
for(int i=0;i<10;i++)
a[i] = new int[10];
//Also not this
std::vector<std::vector<int>> a(10, std::vector<int>(10));
Use this:
int a[100];
//or
std::vector<int> a(100);
Now, if you need 2D access use:
for(int y=0;y<HEIGHT;y++)
    for(int x=0;x<WIDTH;x++)
        a[y*WIDTH+x];
Use 1D accesses for tight loops, whole-array operations which don't rely on knowledge of neighbours, or for situations where you need to store indices:
for(int i=0;i<HEIGHT*WIDTH;i++)
a[i];
Note that in the above two loops the number of items touched is HEIGHT*WIDTH in both cases. Though it may appear that one has a time complexity of O(N^2) and the other O(N), it should be obvious that the net amount of work done is HEIGHT*WIDTH in both cases. It is better to think of N as the total number of items touched by an operation, rather than a property of the way in which they are touched.
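For the fill() case specifically, here is a hedged sketch (FlatArray2D is a hypothetical stand-in, assuming the Array2D data can be kept in one contiguous std::vector<T>, which the question does not show): with flat storage the whole operation collapses to std::fill, which streams through memory, even though it is still width*height writes.

#include <algorithm>
#include <vector>

template <typename T>
struct FlatArray2D {                      // hypothetical contiguous 2D storage
    int width = 0, height = 0;
    std::vector<T> data;                  // size = width * height

    FlatArray2D(int w, int h) : width(w), height(h), data(w * h) {}

    T& get(int x, int y) { return data[y * width + x]; }

    void fill(const T& ob) {
        std::fill(data.begin(), data.end(), ob);   // still width*height writes, i.e. O(N)
    }
};

int main() {
    FlatArray2D<int> a(4, 3);
    a.fill(7);            // every element becomes 7
    a.get(1, 2) = 9;      // 2D access still works on the flat storage
}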
Sometimes you can compute Big O by counting loops, but not always.
for (int m = 0; m < M; m++)
{
    for (int n = 0; n < N; n++)
    {
        doStuff();
    }
}
Big O is a measure of "How many times is doStuff executed?" With the nested loops above it is executed MxN times.
If we flatten it to 1 dimension
for (int i = 0; i < M * N; i++)
{
    doStuff();
}
We now have one loop that executes MxN times. One loop. No improvement.
If we unroll the loop or play games with something like Duff's device
for (int i = 0; i < M * N; i += N)
{
    doStuff(); // 0
    doStuff(); // 1
    ....
    doStuff(); // N-1
}
We still have MxN calls to doStuff. Some days you just can't win with Big O. If you must call doStuff on every element in an array, no matter how many dimensions, you cannot reduce Big O. But if you can find a smarter algorithm that allows you to avoid calls to doStuff... That's what you are looking for.
For Big O, anyway. Sometimes you'll find stuff that has an as-bad-or-worse Big O yet it outperforms. One of the classic examples of this is std::vector vs std::list. Due to caching and prediction in a modern CPU, std::vector scores a victory that slavish obedience to Big O would miss.
Side note (because I regularly smurf this up myself): O(n) means if you double n, you double the work. This is why O(n) is the same as O(1,000,000 n). O(n^2) means if you double n you do 2^2 = 4 times the work. If you are ever puzzled by an algorithm, drop a counter into the operation you're concerned with and do a batch of test runs with various Ns. Then check the relationship between the counters at those Ns.
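A hedged sketch of that counter trick, with made-up Ns: a quadratic loop should show the counter growing by roughly 4x each time N doubles.

#include <iostream>

int main() {
    long long previous = 0;
    for (int N = 1000; N <= 8000; N *= 2) {
        long long counter = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                ++counter;                 // stands in for doStuff()
        if (previous != 0)
            std::cout << "N=" << N << " ratio=" << (double)counter / previous << "\n";  // ~4
        previous = counter;
    }
}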
Here's a very horrible sorting algorithm that only works on integers ranging from 0 to size - 1. I wouldn't use such an algorithm on a large data set, or on one with large numbers in it, because it would require so much memory. That considered, wouldn't this algorithm technically run in O(n) time?
Thanks
#include <iostream>
#include <cstdio>
#include <stdexcept>
#define size 33

int main(){
    int b[size];
    int a[size] = {1,6,32,9,3,7,4,22,5,2,1,0};

    for (int n = 0; n < size; n++) b[n] = 0;

    for (int n = 0; n < size; n++){
        if (size <= a[n]) throw std::logic_error("error");
        b[a[n]]++;
    }

    int x = 0;
    for (int n = 0; n < size; n++){
        for (int t = 0; t < b[n]; t++){
            a[x] = n;
            x++;
        }
    }

    for (int n = 0; n < size; n++) printf("%d ", a[n]);
}
You're showing a variation on counting sort. Along with radix sort and bucket sort, these algorithms are prime examples of sorting that is not based on comparisons, which can offer better complexity than O(n log n).
Do note, however, that your implementation is actually O(n + size).
It is O(n). More specifically, it is O(n + k), where n is the number of elements in the input array and k is the range of the input (in your case, size). You are just keeping counts of all elements that occur in the array, and then writing each value back in increasing order as many times as it occurs in the original array. This algorithm is called counting sort.
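A hedged sketch of the same counting sort with runtime sizes, using std::vector so the extra memory scales with the actual maximum value k rather than a compile-time constant:

#include <algorithm>
#include <iostream>
#include <vector>

// Counting sort for non-negative integers: O(n + k) time, O(k) extra memory,
// where k is the maximum value in the input.
void counting_sort(std::vector<int>& a) {
    if (a.empty()) return;
    int k = *std::max_element(a.begin(), a.end());
    std::vector<int> counts(k + 1, 0);
    for (int v : a) counts[v]++;            // count occurrences of each value
    int x = 0;
    for (int v = 0; v <= k; v++)            // write each value back counts[v] times
        for (int t = 0; t < counts[v]; t++)
            a[x++] = v;
}

int main() {
    std::vector<int> a = {1, 6, 32, 9, 3, 7, 4, 22, 5, 2, 1, 0};
    counting_sort(a);
    for (int v : a) std::cout << v << " ";  // 0 1 1 2 3 4 5 6 7 9 22 32
    std::cout << "\n";
}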