How to find 'n' largest element positions in a 2D array?
Is there a good algorithm, other than brute force?
Any suggestions are helpful.
Thank you.
You can find the n largest elements (and their positions) in a two-dimensional array as efficiently as you can in a one-dimensional array.
There are a variety of selection algorithms that run in linear time. Quickselect has linear average time complexity but, like Quicksort, can be quadratic in the worst case. The median-of-medians algorithm has both expected and worst case linear time complexity.
Of course, if the elements are sorted, then this is trivial: go to the end of the array and select the required number of elements from there. This may take a little logic depending on how the array is sorted in two dimensions, but it's not difficult.
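For a concrete illustration, here is a minimal Python sketch (assuming the array is a list of lists); it uses heap-based selection via heapq.nlargest rather than quickselect, keeping the pass over the data to a single traversal with a size-n heap:

```python
import heapq

def n_largest_positions(matrix, n):
    # Stream (value, row, col) triples over the 2D array and select the
    # n largest in one pass; heapq.nlargest keeps a heap of size n.
    triples = ((v, r, c) for r, row in enumerate(matrix) for c, v in enumerate(row))
    return [(r, c) for v, r, c in heapq.nlargest(n, triples)]

# n_largest_positions([[3, 1], [4, 2]], 2) -> [(1, 0), (0, 0)]
```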
Related
Suppose I store a matrix as a list of lists, where the first list represents the rows, and each element in that list is a list representing the values in that row. What would the time complexity for finding an element be?
If I'm not mistaken, the average time complexity of finding an element in a list using linear search is O(n). Does that mean the average time complexity for a list of lists is O(n²)?
If n means the width and height of a square matrix, then a linear search in the matrix will take O(n²) time. More generally, a linear search in a rectangular m×n matrix will take O(mn) time. Both are because that is the number of entries in the matrix, and a linear search will do O(1) work per entry.
If instead you use n to mean the total number of entries in the matrix, then the time complexity is O(n) for the same reason as above.
Note that the above assumes testing for the search target takes O(1) time (e.g. comparing primitive integers). If that's false, then you should multiply the above by the time complexity of the equality test; for example, if for some reason you have an m×n matrix of strings of length c, then the running time will be O(mnc).
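For illustration, a minimal sketch of such a linear search over a list-of-lists matrix (Python; the function name is just for the example):

```python
def find_in_matrix(matrix, target):
    # Linear search over an m-row by n-column list of lists: O(mn) entries
    # visited, O(1) work per entry assuming cheap equality tests.
    for r, row in enumerate(matrix):
        for c, value in enumerate(row):
            if value == target:
                return (r, c)
    return None  # not found
```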
Well, you would have two indexes, so the complexity is equal to that of a list, but multiplied by two. That being said, if you want to model a matrix, you might consider using an array of arrays (not to be confused with a jagged array), because in normal circumstances a matrix has a fixed size (a List<T> is a wrapper around a T[] to allow adding and removing items).
What kind of algorithm can I use to merge two sorted arrays into one sorted array with worst-case time complexity of O(log(m+n)) where n, m are the length of the arrays? I have very little experience with algorithms, but I checked out merge-sort and it seems that the time-complexity for the merging step is O(n). Is there a different approach to merge in O(log(n))?
Edit: I hadn't considered initially, but maybe it's not possible to merge two sorted arrays in O(log(n))? The actual goal is to find the median of two sorted arrays. Is there a way to do this without merging them?
The only idea I've had came from reading that merging two binomial heaps is O(log(n)), but turning an array into a binomial heap is O(n), I think, so that won't work.
Edit2: I'm going to post a new question because I've realized that merging will never work fast enough. I think instead I need to perform a binary search on each array to find the median in log(n).
I don't think there is an algorithm that would merge two arrays in O(log(n+m)) time.
And it makes sense when you think about it. If you're trying to create a new sorted array of n+m elements you will need to do at least n+m copies. There is no way around that.
I think the best way would be to iterate through both arrays simultaneously and, at each step, compare the two current elements. Copy the smaller one into the output array and increment that array's index pointer (this yields ascending order; flip the comparison for descending). If the two elements are equal, you can add them both to the output and increment both pointers.
Continue until one of the pointers reaches the end of its respective array, then copy in the rest of the other array.
That should be O(m + n).
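A minimal sketch of that two-pointer merge in Python:

```python
def merge_sorted(a, b):
    # Standard two-pointer merge of two ascending arrays in O(m + n).
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    out.extend(a[i:])  # at most one of these two is non-empty
    out.extend(b[j:])
    return out
```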
Regarding your edit, there is a way to find the median of two separate arrays in log(n + m) time.
You can first find the median of the two sorted arrays (the middle element) and compare them. If they are equal, then that is the median. If the first's median is greater than the second's you know the median has to be in either the first half of the first array or the second half of the second array and vice versa if the first's median is less than the second's.
This method cuts your search space in half each iteration and thus runs in O(log(n + m)) time.
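If it helps, here is a sketch of a closely related formulation that binary-searches a partition of the shorter array rather than discarding halves of both arrays; it runs in O(log(min(m, n))). Illustrative only, assuming both inputs are sorted ascending and at least one is non-empty:

```python
def median_of_two_sorted(a, b):
    # Binary-search a partition of the shorter array.
    if len(a) > len(b):
        a, b = b, a                      # search over the shorter array
    m, n = len(a), len(b)
    half = (m + n + 1) // 2              # size of the combined left part
    lo, hi = 0, m
    while lo <= hi:
        i = (lo + hi) // 2               # elements taken from a's left side
        j = half - i                     # elements taken from b's left side
        a_left  = a[i - 1] if i > 0 else float('-inf')
        a_right = a[i]     if i < m else float('inf')
        b_left  = b[j - 1] if j > 0 else float('-inf')
        b_right = b[j]     if j < n else float('inf')
        if a_left <= b_right and b_left <= a_right:
            if (m + n) % 2:
                return max(a_left, b_left)
            return (max(a_left, b_left) + min(a_right, b_right)) / 2
        if a_left > b_right:
            hi = i - 1                   # took too many from a
        else:
            lo = i + 1                   # took too few from a
    raise ValueError("inputs must be sorted")
```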
You're probably thinking of the selection algorithm.
For a sorted data structure, finding the median is O(1). For an unsorted data structure (or a data structure where the data is sorted into two logical partitions) the runtime is O(n).
You could probably pull it off with a massively parallel reduction algorithm, but I think that's cheating in runtime-analysis terms.
So I don't believe there's an algorithm that reduces it below O(n) (or, in your case, O(n+m))
You need to merge the arrays, so no matter what you must traverse both arrays at least once, and the complexity can't be less than O(m + n).
In the wikipedia article on sorting algorithms,
http://en.wikipedia.org/wiki/Sorting_algorithm#Summaries_of_popular_sorting_algorithms
under Bubble sort it says: "Bubble sort can also be used efficiently on a list of any length that is nearly sorted (that is, the elements are not significantly out of place)."
So my question is: without first sorting the list using a sorting algorithm, how can one know whether it is nearly sorted or not?
Are you familiar with the general sorting lower bound? You can prove that any comparison-based sorting algorithm must make Ω(n log n) comparisons in the average case. The way you prove this is through an information-theoretic argument. The basic idea is that there are n! possible permutations of the input array, and since the only way you can learn which permutation you got is to make comparisons, you have to make at least lg(n!) comparisons in order to be certain that you know the structure of your input permutation.
I haven't worked out the math on this, but I suspect that you could make similar arguments to show that it's difficult to learn how sorted a particular array is. Essentially, if you don't do a large number of comparisons, then you wouldn't be able to tell apart an array that's mostly sorted from an array that is actually quite far from sorted. As a result, all the algorithms I'm aware of that measure "sortedness" take a decent amount of time to do so.
For example, one measure of the level of "sortedness" in an array is the number of inversions in that array. You can count the number of inversions in an array in time O(n log n) using a divide-and-conquer algorithm based on mergesort, but with that runtime you could just sort the array instead.
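For reference, a sketch of that mergesort-based inversion count:

```python
def count_inversions(arr):
    # Mergesort-style divide and conquer: returns (sorted copy, #inversions)
    # in O(n log n) time.
    if len(arr) <= 1:
        return list(arr), 0
    mid = len(arr) // 2
    left, inv_left = count_inversions(arr[:mid])
    right, inv_right = count_inversions(arr[mid:])
    merged, i, j = [], 0, 0
    inversions = inv_left + inv_right
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            # Every remaining element of `left` is greater than right[j]:
            # each of them forms one inversion with right[j].
            inversions += len(left) - i
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, inversions
```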
Typically, the way that you'd know that your array was mostly sorted was to know something a priori about how it was generated. For example, if you're looking at temperature data gathered from 8AM - 12PM, it's very likely that the data is already mostly sorted (modulo some variance in the quality of the sensor readings). If your data looks at a stock price over time, it's also likely to be mostly sorted unless the company has a really wonky trajectory. Some other algorithms also partially sort arrays; for example, it's not uncommon for quicksort implementations to stop sorting when the size of the array left to sort is small and to follow everything up with a final insertion sort pass, since every element won't be very far from its final position then.
I don't believe there exists any standardized measure of how sorted or random an array is.
You can come up with your own measure, like counting the number of adjacent pairs that are out of order (suggested in a comment), or counting the number of larger numbers that occur before smaller numbers in the array (i.e. inversions, which is trickier than a simple single pass).
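For example, a minimal sketch of the adjacent-pairs measure (the helper name and the normalization are choices made for the example):

```python
def adjacent_disorder(arr):
    # Fraction of adjacent pairs that are out of order:
    # 0.0 for a sorted array, close to 0 for a "nearly sorted" one.
    if len(arr) < 2:
        return 0.0
    out_of_order = sum(x > y for x, y in zip(arr, arr[1:]))
    return out_of_order / (len(arr) - 1)
```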
I have a 2D array (an image actually) that is size N x N. I need to find the indices of the M largest values in the array (M << N x N). Linearized indices or 2D coords are both fine. The array must remain intact (since it's an image). I can make a copy for scratch, but sorting the array will bugger up the indices.
I'm fine with doing a full pass over the array (i.e. O(N^2) is fine). Anyone have a good algorithm for doing this as efficiently as possible?
Selection is sorting's austere sister (repeat this ten times in a row). Selection algorithms are less known than sort algorithms, but nonetheless useful.
You can't do better than O(N^2) (in N) here, since nothing lets you avoid visiting each element of the array.
A good approach is to keep a priority queue made of the M largest elements. This gives something O(N x N x log M).
You traverse the array, enqueuing (element, index) pairs as you go. The queue keeps its elements sorted by first component.
Once the queue has M elements, instead of enqueuing you now:
Query the min element of the queue
If the current element of the array is greater, insert it into the queue and discard the min element of the queue
Else do nothing.
If M is large (comparable to N x N), sorting the array outright is preferable.
NOTE: @Andy Finkenstadt makes a good point (in the comments to your question): you definitely should traverse your array in the "direction of data locality", making sure that you read memory contiguously.
Also, this is trivially parallelizable; the only non-parallelizable part is merging the queues when joining the sub-processes.
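Putting that queue-of-M procedure together, a minimal Python sketch using the standard library's heapq (the list-of-lists input format and the function name are assumptions for the example):

```python
import heapq

def m_largest_positions(image, m):
    # Min-heap holding the m largest (value, row, col) triples seen so far;
    # total cost is O(N * N * log M) and the image is never modified.
    heap = []
    for r, row in enumerate(image):          # row-major traversal: contiguous reads
        for c, value in enumerate(row):
            if len(heap) < m:
                heapq.heappush(heap, (value, r, c))
            elif value > heap[0][0]:         # beats the queue's current minimum
                heapq.heapreplace(heap, (value, r, c))
    return sorted(heap, reverse=True)        # largest first
```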
You could copy the array into a single-dimensional array of tuples (value, original X, original Y) and build a basic heap out of it in O(n) time, provided you implement the heap as an array.
You could then retrieve the M largest tuples in O(M lg n) time and reference their original x and y from the tuple.
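A sketch of that heapify-then-pop approach (Python's heapq builds a min-heap, so values are negated to simulate a max-heap; assumes M is at most the number of entries):

```python
import heapq

def m_largest_via_heapify(image, m):
    # Flatten to (negated value, row, col) tuples, heapify in O(n),
    # then pop the m largest in O(m log n).
    entries = [(-v, r, c) for r, row in enumerate(image) for c, v in enumerate(row)]
    heapq.heapify(entries)
    return [(-v, r, c) for v, r, c in (heapq.heappop(entries) for _ in range(m))]
```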
If you are going to make a copy of the input array in order to do a sort, that's way worse than just walking linearly through the whole thing to pick out numbers.
So the question is how big is your M? If it is small, you can store results (i.e. structs with 2D indexes and values) in a simple array or a vector. That'll minimize heap operations but when you find a larger value than what's in your vector, you'll have to shift things around.
If you expect M to get really large, then you may need a better data structure like a binary tree (std::set) or a sorted std::deque. std::set will reduce the number of times elements must be shifted in memory, while std::deque will do some shifting but will significantly reduce the number of times you have to go to the heap, which may give you better performance.
Your problem doesn't use the two dimensions in any interesting way, so it is easier to consider the equivalent problem for a 1D array.
There are 2 main ways to solve this problem:
Maintain a set of the M largest elements, and iterate through the array (using a heap allows you to do this efficiently).
This is simple and is probably better in your case (M << N)
Use selection (the following algorithm is an adaptation of quicksort, i.e. quickselect; see the sketch after this list):
Create an auxiliary array containing the indexes [1..N].
Choose an arbitrary index (and its corresponding value), and partition the index array so that indexes of smaller elements go to the left and indexes of bigger elements go to the right.
Repeat the process, binary-search style, until you narrow down the M largest elements.
This is good for cases with large M. If you want to avoid the worst-case issues (the same ones quicksort has), then look at more advanced algorithms, like median-of-medians selection.
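The promised sketch of option 2: quickselect over (value, index) pairs, using a randomized Hoare partition ordered descending so the M largest land at the front. Illustrative only; the original values array is never modified:

```python
import random

def m_largest_pairs(values, m):
    # Partially partition (value, index) pairs so the m largest pairs
    # occupy the first m slots, without fully sorting anything.
    pairs = [(v, i) for i, v in enumerate(values)]
    lo, hi = 0, len(pairs) - 1
    while lo < hi:
        pivot = pairs[random.randint(lo, hi)][0]
        i, j = lo, hi
        while i <= j:                        # Hoare partition, descending
            while pairs[i][0] > pivot:
                i += 1
            while pairs[j][0] < pivot:
                j -= 1
            if i <= j:
                pairs[i], pairs[j] = pairs[j], pairs[i]
                i += 1
                j -= 1
        if m - 1 <= j:                       # boundary lies in the left part
            hi = j
        elif m - 1 >= i:                     # boundary lies in the right part
            lo = i
        else:
            break                            # middle run equals the pivot: done
    return pairs[:m]
```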
How many times do you search for the largest value from the array?
If you only search once, then just scan through it, keeping the M largest ones.
If you do it many times, just insert the values into a sorted list (probably best implemented as a balanced tree).
The naive one is O(n). Is there one that is O(log n) or even O(1)?
How about a sorted array? How about using a binary search tree?
What if my array has size n = 2^(h+1) − 1? (h = height of a complete binary tree)
Unsorted
If the array is not sorted, then you can do no better than O(n). Proof: suppose you didn't look at every single element of the array, then an adversary could just make one of the elements that you didn't look at larger or smaller than the given number to make your count incorrect. So, better than O(n) is not possible.
Sorted
If the array is sorted, then you can determine the result in O(log n) time by locating the first element that is greater than or equal to the given number, and then simply subtracting that index from the size of the array.
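In Python, for example, the standard library's bisect module does that binary search (assuming the array is sorted ascending):

```python
import bisect

def count_at_least(sorted_arr, x):
    # Index of the first element >= x, found in O(log n); everything
    # from that index to the end counts.
    return len(sorted_arr) - bisect.bisect_left(sorted_arr, x)
```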
With unsorted, you can't do better than O(n). Full stop.
With sorted, you can do worst-case O(log(n)) with binary search. You can improve on this, assuming the value layout has either decent entropy or is (mostly) linear, by probing the position the value would occupy if the layout were purely linear; this is interpolation search.
For example, take a sorted array a[n] with a[0] = x, a[n] = y, and your threshold v.
Instead of bisecting the array at n/2, test the element a[n*(v-x)/(y-x)].
With a regular layout (a[i] = const1*i + const2) you get the result in O(1): one probe, plus or minus rounding error, so at worst two. With a "white noise" random layout (all values equally probable), you still beat plain binary search, with expected O(log log n) probes.
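A sketch of that interpolation probe applied to the counting problem; the probe formula is the only change from plain binary search, and since any probe inside the current window preserves the bisection invariant, correctness never depends on the layout assumption:

```python
def count_at_least_interpolated(a, v):
    # bisect_left with an interpolated probe instead of the midpoint.
    # Interpolation only speeds up the expected case (O(log log n) probes
    # on uniformly distributed data); worst case degrades gracefully.
    lo, hi = 0, len(a)                          # half-open window [lo, hi)
    while lo < hi:
        if a[hi - 1] > a[lo]:
            guess = lo + int((v - a[lo]) * (hi - 1 - lo) / (a[hi - 1] - a[lo]))
            mid = min(max(guess, lo), hi - 1)   # clamp into the window
        else:
            mid = (lo + hi) // 2                # flat window: plain bisection
        if a[mid] < v:
            lo = mid + 1
        else:
            hi = mid
    return len(a) - lo                          # count of elements >= v
```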