Suppose I store a matrix as a list of lists, where the first list represents the rows, and each element in that list is a list representing the values in that row. What would the time complexity for finding an element be?
If I'm not mistaken, the average time complexity of finding an element in a list using linear search is O(n); does that mean the average time complexity for a list of lists is O(n^2)?
If n means the width and height of a square matrix, then a linear search in the matrix will take O(n^2) time. More generally, a linear search in a rectangular m×n matrix will take O(mn) time. Both hold because that is the number of entries in the matrix, and a linear search will do O(1) work per entry.
If instead you use n to mean the total number of entries in the matrix, then the time complexity is O(n) for the same reason as above.
Note that the above assumes testing for the search target takes O(1) time (e.g. comparing primitive integers). If that's false, then you should multiply the above by the time complexity of the equality test; for example, if for some reason you have an m×n matrix of strings of length c, then the running time will be O(mnc).
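As a concrete illustration, here is a minimal C++ sketch of that linear search over a vector-of-vectors matrix (the {-1, -1} "not found" convention is just a choice made for this example):

#include <utility>
#include <vector>

// Linear search over a "list of lists" matrix: O(m*n), because every entry
// may be visited once and each visit does O(1) work.
std::pair<int, int> findInMatrix(const std::vector<std::vector<int>>& m, int target) {
    for (int r = 0; r < (int)m.size(); ++r)
        for (int c = 0; c < (int)m[r].size(); ++c)
            if (m[r][c] == target)
                return {r, c};          // row and column of the first match
    return {-1, -1};                    // not found
}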
Well, you would have two indexes. So the complexity is the same as for a list, but multiplied by two. That being said, if you want to model a matrix, you might consider using an array of arrays (not to be confused with a jagged array), because in normal circumstances a matrix has a fixed size (a List<T> is a wrapper around a T[] to allow adding and removing items).
What kind of algorithm can I use to merge two sorted arrays into one sorted array with worst-case time complexity of O(log(m+n)), where m and n are the lengths of the arrays? I have very little experience with algorithms, but I checked out merge sort and it seems that the time complexity of the merging step is O(n). Is there a different approach to merge in O(log(n))?
Edit: I hadn't considered it initially, but maybe it's not possible to merge two sorted arrays in O(log(n))? The actual goal is to find the median of two sorted arrays. Is there a way to do this without merging them?
The only idea I've had is that I read that merging two binomial heaps is O(log(n)), but turning an array into a binomial heap is O(n), I think, so that won't work.
Edit2: I'm going to post a new question because I've realized that merging will never work fast enough. I think instead I need to perform a binary search on each array to find the median in log(n).
I don't think there is an algorithm that would merge two arrays in O(log(n+m)) time.
And it makes sense when you think about it. If you're trying to create a new sorted array of n+m elements, you will need to do at least n+m copies. There is no way of getting around that.
I think the best way would be to iterate through each array simultaneously and, at each iteration, compare the values of both elements. If one is less than the other (assuming you want the array sorted in ascending order), then copy that element to the new array and increment your indexing pointer for that array, and vice versa. If the two elements are the same, you can just add them both into the newly sorted array and increment both pointers.
Continue until one of the pointers reaches the end of its respective array, then copy in the rest of the other array.
That should be O(m+n)
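For what it's worth, here is a minimal C++ sketch of that two-pointer merge (the function name is illustrative); it runs in O(m+n):

#include <cstddef>
#include <vector>

// Merge two ascending arrays by repeatedly copying the smaller front element.
std::vector<int> mergeSorted(const std::vector<int>& a, const std::vector<int>& b) {
    std::vector<int> out;
    out.reserve(a.size() + b.size());
    std::size_t i = 0, j = 0;
    while (i < a.size() && j < b.size()) {
        if (a[i] <= b[j]) out.push_back(a[i++]);   // take the smaller element
        else              out.push_back(b[j++]);
    }
    while (i < a.size()) out.push_back(a[i++]);    // copy whatever is left of a
    while (j < b.size()) out.push_back(b[j++]);    // copy whatever is left of b
    return out;
}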
Regarding your edit, there is a way to find the median of two separate sorted arrays in O(log(n + m)) time.
You can first find the median of each of the two sorted arrays (the middle element) and compare them. If they are equal, then that is the median. If the first's median is greater than the second's, you know the overall median has to be in either the first half of the first array or the second half of the second array, and vice versa if the first's median is less than the second's.
This method cuts your search space in half each iteration and is thus O(log(n + m)).
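A common way to code this halving idea is to binary-search the partition point of the shorter array; here is a minimal C++ sketch of that variant, which runs in O(log(min(n, m))) and therefore within the O(log(n + m)) bound (the function name is illustrative):

#include <algorithm>
#include <climits>
#include <utility>
#include <vector>

// Median of two sorted arrays by binary-searching how many elements of the
// shorter array belong to the combined left half.
double medianOfTwoSorted(std::vector<int> a, std::vector<int> b) {
    if (a.size() > b.size()) std::swap(a, b);      // make a the shorter array
    int m = a.size(), n = b.size();
    int lo = 0, hi = m;
    while (lo <= hi) {
        int i = (lo + hi) / 2;                     // elements of a on the left side
        int j = (m + n + 1) / 2 - i;               // elements of b on the left side
        int aLeft  = (i == 0) ? INT_MIN : a[i - 1];
        int aRight = (i == m) ? INT_MAX : a[i];
        int bLeft  = (j == 0) ? INT_MIN : b[j - 1];
        int bRight = (j == n) ? INT_MAX : b[j];
        if (aLeft <= bRight && bLeft <= aRight) {  // a valid partition was found
            if ((m + n) % 2 == 1)
                return std::max(aLeft, bLeft);
            return (std::max(aLeft, bLeft) + std::min(aRight, bRight)) / 2.0;
        }
        if (aLeft > bRight) hi = i - 1;            // took too many from a
        else                lo = i + 1;            // took too few from a
    }
    return 0.0;                                    // unreachable for valid sorted input
}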
You're probably thinking of The Selection Algorithm.
For a sorted data structure, finding the median is O(1). For an unsorted data structure (or a data structure where the data is sorted into two logical partitions) the runtime is O(n).
You could probably pull it off with a massively parallel reduction algorithm, but I think that's cheating in Runtime Analysis terms.
So I don't believe there's an algorithm that reduces it below O(n) (or, in your case, O(n+m))
You need to merge the arrays, so, no matter what, you need to traverse both arrays at least once, and therefore the complexity can't be less than O(m+n).
My program contains polygons which have the form of a vector containing points (2-dimensional double coordinates, stored in a self-made structure). I'm looking for a quick way of finding the smallest square containing my polygon (i.e. knowing the maximal and minimal coordinates of all the points).
Is there a quicker way than just parsing all the points and storing the minimum and maximum values?
The algorithm you are describing is straightforward: iterate over all your points and find the minimum and maximum for each coordinate. This is an O(n) algorithm, n being the number of points you have.
You can't do better, since you will need to check at least all your points once, otherwise the last one could be outside the square you found.
Now, the complexity is at best O(n), so you just have to minimize the constant factor, and in this case it's already pretty small: only one loop over your vector, looking for two maximums and two minimums.
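For illustration, a minimal C++ sketch of that single pass (the Point and Box structs here stand in for the OP's self-made structure):

#include <vector>

struct Point { double x, y; };
struct Box { double minX, minY, maxX, maxY; };

// One loop over the points, tracking two minimums and two maximums: O(n).
Box boundingBox(const std::vector<Point>& poly) {
    Box b{poly[0].x, poly[0].y, poly[0].x, poly[0].y};   // assumes at least one point
    for (const Point& p : poly) {
        if (p.x < b.minX) b.minX = p.x;
        if (p.x > b.maxX) b.maxX = p.x;
        if (p.y < b.minY) b.minY = p.y;
        if (p.y > b.maxY) b.maxY = p.y;
    }
    return b;
}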
You can either iterate through all points and find the max and min values, or do some preprocessing, for example, store your points in a treap (http://en.wikipedia.org/wiki/Treap).
There is no way, without some preprocessing, to do better than just iterating over all the points.
I'm not sure if there can be any faster way to find the min & max values in an array of values than linear time. The only 'optimization' I can think of is to find these values on one of the other occasions you're iterating the array (filling it/performing a function on all points), then perform checks on any data update.
I have a list of items; I want to sort them, but I want a small element of randomness so they are not strictly in order, only on average ordered.
How can I do this most efficiently?
I don't mind if the quality of the randomness is not especially good, e.g. it could simply be based on the chance ordering of the input, such as an early-terminated incomplete sort.
The context is implementing a nearly-greedy search by introducing a very slight element of inexactness; this is in a tight loop and so the speed of sorting and calling random() are to be considered
My current code is to do a std::sort (this being C++) and then do a very short shuffle just in the early part of the array:
for (int i = 0; i < 3; i++) // I know I have more than 6 elements
    std::swap(order[i], order[i + rand() % 3]);
Use the first two passes of JSort: build the heap twice, but do not perform the insertion sort. If the element of randomness is not small enough, repeat.
There is an approach that (unlike incomplete JSort) allows finer control over the resulting randomness and has a time complexity that depends on the randomness (the more random the result needs to be, the lower the time complexity). Use heapsort with a soft heap. For a detailed description of the soft heap, see pdf 1 or pdf 2.
You could use a standard sort algorithm (is a standard library available?) and pass a predicate that "knows", given two elements, which is less than the other, or if they are equal (returning -1, 0 or 1). Then, in the predicate, introduce a rare (configurable) case where the answer is random, by using a random number:
pseudocode:
if random(1000) == 0 then
    return random(2)-1 <-- -1, 0 or 1 randomly chosen
Here we have a 1/1000 chance to "scramble" two elements, but that number really depends on the size of the container you want to sort.
Another refinement, in that 1-in-1000 case, could be to exclude the "right" answer, because returning it would not scramble the result!
Edit:
if random(100 * container_size) == 0 then <-- here I consider the container size
{
    if element_1 < element_2
        return random(1); <-- do not return the "correct" value of -1
    else if element_1 > element_2
        return random(1)-1; <-- do not return the "correct" value of 1
    else
        return random(1)==0 ? -1 : 1; <-- do not return 0
}
in my pseudocode:
random(x) = y where 0 <= y <= x
One possibility that requires a bit more space but would guarantee that existing sort algorithms could be used without modification would be to create a copy of the sort value(s) and then modify those in some fashion prior to sorting (and then use the modified value(s) for the sort).
For example, if the data to be sorted is a simple character field Name[N] then add a field (assuming data is in a structure or class) called NameMod[N]. Fill in the NameMod with a copy of Name but add some randomization. Then 3% of the time (or some appropriate amount) change the first character of the name (e.g., change it by +/- one or two characters). And then 10% of the time change the second character +/- a few characters.
Then run it through whatever sort algorithm you prefer. The benefit is that you could easily change those percentages and randomness. And the sort algorithm will still work (e.g., it would not have problems with the compare function returning inconsistent results).
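A minimal C++ sketch of this idea, using a numeric score instead of a name field (the Item struct, the 3% rate and the +/- 2 nudge are just illustrative choices):

#include <algorithm>
#include <cstdlib>
#include <vector>

struct Item {
    double score;      // the real value, left untouched
    double sortKey;    // perturbed copy used only for ordering
};

// Copy the key, randomly nudge a few of the copies, then sort by the copies.
// The comparator stays consistent, so std::sort's preconditions still hold.
void nearlySort(std::vector<Item>& items) {
    for (Item& it : items) {
        it.sortKey = it.score;
        if (std::rand() % 100 < 3)                   // ~3% of items get nudged
            it.sortKey += (std::rand() % 5) - 2;     // by at most +/- 2
    }
    std::sort(items.begin(), items.end(),
              [](const Item& a, const Item& b) { return a.sortKey < b.sortKey; });
}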
If you are sure that each element is at most k positions away from where it should be, you can reduce quicksort's N log(N) sorting time complexity down to N log(k).
Edit:
More specifically, you would create N/k buckets, each containing k consecutive positions.
You can quicksort each bucket, which takes k log(k) time per bucket (N log(k) for all of them), and then order the N/k buckets themselves, which takes N/k log(N/k) time. Adding these up, you can do the sorting in N log(max(N/k, k)).
This can be useful because you can run sorting for each bucket in parallel, reducing total running time.
This works if you are sure that every element in the list is at most k indices away from its correct position after the sort, but I do not think you meant any such restriction.
Split the list into two equally-sized parts. Sort each part separately, using any usual algorithm. Then merge these parts. Perform some merge iterations as usual, comparing the merged elements. For other merge iterations, do not compare the elements, but instead select the element from the same part as in the previous step. It is not necessary to use an RNG to decide how to treat each element. Just ignore the sorting order for every N-th element.
Another variant of this approach nearly-sorts an array almost in place. Split the array into two parts with odd/even indexes. Sort them. (It is even possible to use a standard C++ algorithm with an appropriately modified iterator, like boost::permutation_iterator.) Reserve some limited space at the end of the array. Merge the parts, starting from the end. If the merged part is about to overwrite one of the non-merged elements, just select that element. Otherwise select the element in sorted order. The level of randomness is determined by the amount of reserved space.
Assuming you want the array sorted in ascending order, I would do the following:
for M iterations
    pick a random index i
    pick a random index k
    if (i<k) != (array[i]<array[k]) then swap(array[i], array[k])
M controls the "sortedness" of the array - as M increases the array becomes more and more sorted. I would say a reasonable value for M is n^2 where n is the length of the array. If it is too slow to pick random elements then you can precompute their indices beforehand. If the method is still too slow then you can always decrease M at the cost of getting a poorer sort.
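For reference, the same loop written out in C++ (the function name is illustrative):

#include <cstdlib>
#include <utility>
#include <vector>

// Randomly pick pairs of positions and swap them only when they are out of
// order for an ascending array; a larger M gives a more sorted result.
void roughSort(std::vector<int>& a, long long M) {
    const int n = (int)a.size();
    for (long long t = 0; t < M; ++t) {
        int i = std::rand() % n;
        int k = std::rand() % n;
        if ((i < k) != (a[i] < a[k]))
            std::swap(a[i], a[k]);
    }
}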
Take a small random subset of the data and sort it. You can use this as a map to provide an estimate of where every element should appear in the final nearly-sorted list. You can scan through the full list now and move/swap elements that are not in a good position.
This is basically O(n), assuming the small initial sorting of the subset doesn't take a long time. Hopefully you can build the map such that the estimate can be extracted quickly.
Bubblesort to the rescue!
For an unsorted array, you could pick a few random elements and bubble them up or down (maybe by rotation, which is a bit more efficient). It will be hard to control the amount of (dis)order: even if you pick all N elements, you are not sure that the whole array will be sorted, because elements are moved and you cannot ensure that you touched every element only once.
BTW: this kind of problem tends to occur in game playing engines, where the list with candidate moves is kept more-or-less sorted (because of weighted sampling), and sorting after each iteration is too expensive, and only one or a few elements are expected to move.
I have a 2D array (an image actually) that is size N x N. I need to find the indices of the M largest values in the array (M << N x N). Linearized index or the 2D coords are both fine. The array must remain intact (since it's an image). I can make a copy for scratch, but sorting the array will bugger up the indices.
I'm fine with doing a full pass over the array (ie. O(N^2) is fine). Anyone have a good algorithm for doing this as efficiently as possible?
Selection is sorting's austere sister (repeat this ten times in a row). Selection algorithms are less known than sort algorithms, but nonetheless useful.
You can't do better than O(N^2) (in N) here, since nothing lets you avoid visiting each element of the array.
A good approach is to keep a priority queue made of the M largest elements. This gives something like O(N x N x log M).
You traverse the array, enqueuing (element, index) pairs as you go. The queue keeps its elements sorted by the first component.
Once the queue has M elements, instead of enqueuing you now:
Query the min element of the queue
If the current element of the array is greater, insert it into the queue and discard the min element of the queue
Else do nothing.
If M is bigger, sorting the array is preferable.
NOTE: @Andy Finkenstadt makes a good point (in the comments to your question): you definitely should traverse your array in the "direction of data locality": make sure that you read memory contiguously.
Also, this is trivially parallelizable; the only non-parallelizable part is merging the queues when joining the sub-processes.
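A minimal C++ sketch of the queue-based traversal described above, using std::priority_queue with std::greater so the smallest kept element is always on top (the Entry layout is just one way to carry the index along):

#include <cstddef>
#include <functional>
#include <queue>
#include <tuple>
#include <vector>

using Entry = std::tuple<int, int, int>;   // (value, row, col)

// Scan the image row by row, keeping only the M largest entries: O(N*N*log M).
std::vector<Entry> largestM(const std::vector<std::vector<int>>& img, std::size_t M) {
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> q;  // min-heap
    for (int r = 0; r < (int)img.size(); ++r)
        for (int c = 0; c < (int)img[r].size(); ++c) {
            q.emplace(img[r][c], r, c);
            if (q.size() > M) q.pop();     // evict the current minimum
        }
    std::vector<Entry> out;
    while (!q.empty()) { out.push_back(q.top()); q.pop(); }
    return out;                            // ascending by value
}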
You could copy the array into a single-dimensional array of tuples (value, original X, original Y) and build a basic heap out of it in O(n) time, provided you implement the heap as an array.
You could then retrieve the M largest tuples in O(M lg n) time and reference their original x and y from the tuple.
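Roughly, that heap-based variant could look like this in C++, using std::make_heap and std::pop_heap over an array of tuples (which compare by value first); the names here are illustrative:

#include <algorithm>
#include <tuple>
#include <vector>

// Flatten into (value, x, y) tuples, heapify in O(n), then pop the M largest
// in O(M lg n) and read their original coordinates back from the tuples.
std::vector<std::tuple<int, int, int>> topM(const std::vector<std::vector<int>>& img, int M) {
    std::vector<std::tuple<int, int, int>> heap;
    for (int y = 0; y < (int)img.size(); ++y)
        for (int x = 0; x < (int)img[y].size(); ++x)
            heap.emplace_back(img[y][x], x, y);
    std::make_heap(heap.begin(), heap.end());       // max-heap on value
    std::vector<std::tuple<int, int, int>> result;
    for (int i = 0; i < M && !heap.empty(); ++i) {
        std::pop_heap(heap.begin(), heap.end());    // move the max to the back
        result.push_back(heap.back());
        heap.pop_back();
    }
    return result;
}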
If you are going to make a copy of the input array in order to do a sort, that's way worse than just walking linearly through the whole thing to pick out numbers.
So the question is how big is your M? If it is small, you can store results (i.e. structs with 2D indexes and values) in a simple array or a vector. That'll minimize heap operations but when you find a larger value than what's in your vector, you'll have to shift things around.
If you expect M to get really large, then you may need a better data structure, like a binary tree (std::set) or a sorted std::deque. std::set will reduce the number of times elements must be shifted in memory, while std::deque will do some shifting but will significantly reduce the number of times you have to go to the heap, which may give you better performance.
Your problem doesn't use the 2 dimensions in any interesting way; it is easier to consider the equivalent problem in a 1D array.
There are 2 main ways to solve this problem:
Maintain a set of the M largest elements, and iterate through the array. (Using a heap allows you to do this efficiently.)
This is simple and is probably better in your case (M << N)
Use selection (the following algorithm is an adaptation of quicksort):
Create an auxiliary array, containing the indexes [1..N].
Choose an arbitrary index (and the corresponding value), and partition the index array so that indexes corresponding to smaller elements go to the left and those corresponding to bigger elements go to the right.
Repeat the process, binary-search style, until you narrow down the M largest elements.
This is good for cases with large M. If you want to avoid the worst-case issues (the same ones quicksort has), then look at more advanced algorithms (like median-of-medians selection).
How many times do you search for the largest value from the array?
If you only search 1 time, then just scan through it keeping the M largest ones.
If you do it many times, just insert the values into a sorted list (probably best implemented as a balanced tree).
How to find 'n' largest element positions in a 2D array?
Is there a good algorithm, other than brute force?
Any suggestions are helpful.
Thank you.
You can find the n largest elements (and their positions) in a two-dimensional array as efficiently as you can in a one-dimensional array.
There are a variety of selection algorithms that run in linear time. Quickselect has linear average time complexity but, like Quicksort, can be quadratic in the worst case. The median-of-medians algorithm has both expected and worst case linear time complexity.
Of course, if the elements are sorted, then this is trivial: go to the end of the array and select the required number of elements from there. This may take a little logic depending on how the array is sorted in two dimensions, but it's not difficult.
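If it helps, here is a minimal C++ sketch of that selection approach for the 2D case, using std::nth_element (an introselect with linear average time) over an index array so the original data is never disturbed (the function name and the rectangular-array assumption are mine):

#include <algorithm>
#include <numeric>
#include <vector>

// Returns linearized indices of the n largest values (row = i / cols, col = i % cols).
std::vector<int> largestPositions(const std::vector<std::vector<int>>& a, int n) {
    const int cols = (int)a[0].size();                    // assumes a rectangular, non-empty array
    std::vector<int> idx(a.size() * a[0].size());
    std::iota(idx.begin(), idx.end(), 0);                 // 0, 1, 2, ... linearized indices
    auto value = [&](int i) { return a[i / cols][i % cols]; };
    std::nth_element(idx.begin(), idx.begin() + n, idx.end(),
                     [&](int i, int j) { return value(i) > value(j); });
    idx.resize(n);                                        // the first n indices now hold the n largest values
    return idx;
}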