Is it possible to search for a character in a string in less than O(n)? Most of the algorithms I have come across take O(n) time. string::find() takes O(N*M). Is there any algorithm in which you are not required to traverse the whole string?
Well, first I must say there is no algorithm with less than linear, O(n), time complexity for your problem.
Of course, you can try a randomized algorithm, in the Las Vegas way of thinking: pick a random spot in your string; if you are lucky, you have found your char, and if not, try again, storing the wrong index.
The worst case is like linear search, O(n), but with pure luck you may get it in the first few tries, which is far less. This kind of algorithm may be what you are looking for. On average it may find the char faster, but remember: if it takes k tries, with k < n for sure, it is still linear time, O(k).
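Here is a minimal sketch of that randomized probing idea in C++ (the name randomFind and the tried bookkeeping are my own; note the bookkeeping itself costs O(n) extra space, and re-picking already-tried spots makes the expected work worse than a plain left-to-right scan):

#include <cstdlib>
#include <string>
#include <vector>

int randomFind(const std::string& s, char c) {
    if (s.empty()) return -1;
    std::vector<bool> tried(s.size(), false);       // the wrong indices seen so far
    std::size_t remaining = s.size();
    while (remaining > 0) {
        std::size_t i = std::rand() % s.size();     // pick a random spot
        if (tried[i]) continue;                     // already checked, pick again
        if (s[i] == c) return static_cast<int>(i);  // lucky hit
        tried[i] = true;                            // store the wrong index
        --remaining;
    }
    return -1;                                      // char is not in the string
}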
If you know more about your input, maybe you can think of a way to solve it differently.
Hope this helps
Just as the title says. BTW, it's just out of curiosity and it's not a homework question; it might seem trivial to people with a CS major. The problem is that I would like to find the indices of the max value in an array. Basically I have two approaches:
1. scan once to find the maximum, then scan a second time to collect the vector of indices
2. scan once, constructing the indices array along the way, and abandoning it whenever a bigger value appears
May I know how I should weigh these two approaches in terms of performance (mainly time complexity, I suppose)? It is hard for me because I have no idea what the worst case for the second approach even looks like! It's not a hard problem per se, but I just want to know how to approach this kind of problem, or how I should google this type of problem to get the answer.
In terms of complexity:

scan once to find the maximum, then scan a second time to collect the vector of indices

First scan is O(n).
Second scan is O(n) plus k insertions (with k the number of occurrences of the max value).
vector::push_back has an amortized complexity of O(1),
so the total is O(2 * n + k), which simplifies to O(n) since k <= n.
scan once, constructing the indices array along the way, and abandoning it whenever a bigger value appears

Scan is O(n).
The number of insertions is more complicated to compute, and so is the number of clears (and the number of elements cleared); clear's complexity is at most the number of elements removed.
But both are bounded above by n, so the complexity is at most O(3 * n) = O(n), and at least O(n) (the scan), so it is O(n) too.
So for both methods, complexity is the same: O(n).
For performance timing, as always, you have to measure.
For the single-pass method, you can set a condition for adding the index to the array; whenever the max changes, you need to clear the array. You don't need to iterate twice. For the two-pass method, the implementation is easier: you just find the max on the first pass, then collect the indices that match on the second. A sketch of both follows below.
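Here is a minimal sketch of both approaches in C++ (function names are my own) that you could use as a starting point for timing measurements:

#include <vector>

// Two-pass: find the max, then collect its indices.
std::vector<std::size_t> maxIndicesTwoPass(const std::vector<int>& a) {
    std::vector<std::size_t> idx;
    if (a.empty()) return idx;
    int best = a[0];
    for (int x : a)                                    // pass 1: the maximum
        if (x > best) best = x;
    for (std::size_t i = 0; i < a.size(); ++i)         // pass 2: its indices
        if (a[i] == best) idx.push_back(i);
    return idx;
}

// Single-pass: collect as you go, clear whenever a new max appears.
std::vector<std::size_t> maxIndicesOnePass(const std::vector<int>& a) {
    std::vector<std::size_t> idx;
    if (a.empty()) return idx;
    int best = a[0];
    for (std::size_t i = 0; i < a.size(); ++i) {
        if (a[i] > best) { best = a[i]; idx.clear(); } // better max: abandon old indices
        if (a[i] == best) idx.push_back(i);
    }
    return idx;
}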
As stated in a previous answer, complexity is O(n) in both cases, and measurements are needed to compare performance.
However, I would like to add two points:
The first one is that the performance comparison may depend on the compiler and on how optimisation is performed.
The second point is more critical: performance may depend on the input array.
For example, let us consider the corner case 1, 1, 1, ..., 1, 2, i.e. a huge number of 1s followed by a single 2. With your second approach, you will build a huge temporary array of indices, only to deliver an array of one element at the end. It is possible to shrink the memory allocated to this array at the end, but I don't like the idea of creating an unnecessarily huge temporary vector, independently of the time performance concern. Note that such an array could also suffer several reallocations, which would hurt time performance.
This is why, in the general case, without any knowledge of the input, I would prefer your first approach, with two scans. The situation could be different if you want to implement a function dedicated to a specific type of data.
I'm working on an assignment for 'Algorithm analysis' and I am stuck on this question. I have to submit it tomorrow and I need help; please answer if you can solve this.
Given an array A of n numbers, write an efficient algorithm to find the most frequently occurring element in that array (the mode of that array). Also analyze the time complexity of your algorithm.
Since this is an assignment, I will give you a hint only about upper bounds of complexity and a similar problem.
This problem is a bit more difficult than the Element Distinctness Problem (1). Element distinctness is known to be unsolvable in better than O(n log n) worst case. The solutions for element distinctness are:
1. Sort and iterate - O(n log n)
2. Create a set/histogram of the elements and check uniqueness. This is done using hash tables in O(n) average case, O(n^2) worst case, and O(n) extra space.
Think about both approaches and try to think how you can modify them to solve your problem.
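To make the first hint concrete without solving the assignment, here is a sketch of sort-and-iterate applied to element distinctness itself, in C++ (the function name is my own; adapting the idea to the mode is the exercise):

#include <algorithm>
#include <vector>

bool allDistinct(std::vector<int> a) {         // takes a copy so it can sort
    std::sort(a.begin(), a.end());             // O(n log n): equal values become adjacent
    return std::adjacent_find(a.begin(), a.end()) == a.end();  // no equal neighbours?
}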
Also, the lower bound for element distinctness tells you there won't be any algorithm better than O(n log n) worst case.
(1) The Element Distinctness Problem: are all the elements in the array distinct, or does some element appear in it more than once?
I have 2 linked lists representing very large numbers (that cannot be stored in anything other than a linked list).
I have an Add method with a complexity of O(n).
I wanted to know if it is possible in any way to multiply the 2 numbers WITHOUT converting the whole list to a String/int/long (calculating ON the list if possible), and keep it at a complexity of O(n^2).
For now, no matter what I try, I end up with O(n^3) complexity, and it isn't good enough.
Thanks for all the help.
The "long multiplication" algorithm most westerners learn in school already gives you O(n²) complexity, so maybe you could explain what algorithm you are using.
There are algorithms with lower complexity: Karatsuba, Toom-Cook, and the Schönhage–Strassen algorithm. The last one has the lowest complexity known to date, O(n log n log log n), but there may be even better algorithms yet to be discovered.
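To make the O(n^2) claim concrete, here is a hedged sketch of schoolbook long multiplication working directly on digit lists, least significant digit first (the representation and names are my own assumptions, since the question does not specify the list layout):

#include <list>

// Multiplies two non-empty digit lists (base 10, least significant digit first).
// Outer loop runs once per digit of b, inner loop once per digit of a: O(n^2).
std::list<int> multiply(const std::list<int>& a, const std::list<int>& b) {
    std::list<int> result(1, 0);
    auto partialStart = result.begin();          // where the current partial product is added
    for (int db : b) {
        int carry = 0;
        auto it = partialStart;
        for (int da : a) {                       // add da * db into the result, with carry
            if (it == result.end()) it = result.insert(it, 0);
            int v = *it + da * db + carry;
            *it = v % 10;
            carry = v / 10;
            ++it;
        }
        while (carry > 0) {                      // flush any remaining carry
            if (it == result.end()) it = result.insert(it, 0);
            int v = *it + carry;
            *it = v % 10;
            carry = v / 10;
            ++it;
        }
        ++partialStart;                          // next partial product is shifted one digit
        if (partialStart == result.end()) partialStart = result.insert(partialStart, 0);
    }
    while (result.size() > 1 && result.back() == 0)
        result.pop_back();                       // drop leading zeros
    return result;
}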
Do you understand what this question means?
Find the top log(n) or top sqrt(n) values in an array of integers in less than linear time.
If you don't, here is the question http://www.careercup.com/question?id=9337669.
Could you please help me understand this question, and then maybe get it solved? (Although once I understand it, I might solve it myself.)
Thanks for your time.
For a non-sorted array the complexity is linear, but it may be possible to improve performance by observing that log and sqrt are both monotonically increasing functions, hence max(log(x), ...) = log(max(x, ...)), and the same for sqrt.
So just find max (linearly) and calculate its log and sqrt.
Assuming the array is not sorted, this problem is Omega(n), since you need to read all the elements [finding the max is an Omega(n) problem in a non-sorted array, and this problem is not easier than finding the max]. So there is no sublinear solution for it.
There is an O(n) [linear] solution, using a selection algorithm:
1. Find the log(n)-th biggest element. //or the sqrt(n)-th biggest element...
2. Scan the array and return all elements bigger than or equal to it.
(*) This pseudo code is not correct if the array contains dupes, but trimming the dupes in a second step is fairly easy.
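Here is a sketch of that selection step in C++ using std::nth_element (introselect, linear on average; the name topK and the copy-by-value are my own choices). Note it also sidesteps the dupes caveat above, because it partitions by position rather than by value:

#include <algorithm>
#include <cmath>
#include <functional>
#include <vector>

std::vector<int> topK(std::vector<int> a, std::size_t k) {  // works on a copy
    k = std::min(k, a.size());
    // Partition so the k biggest elements occupy a[0..k-1], in no particular order.
    std::nth_element(a.begin(), a.begin() + k, a.end(), std::greater<int>());
    a.resize(k);
    return a;
}
// e.g. topK(values, std::log2(values.size())) or topK(values, std::sqrt(values.size()))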
Given an array of N integers such that only one integer is repeated, find the repeated integer in O(n) time and constant space. There is no range for the value of the integers or the value of N.
For example, given an array of 6 integers, 23 45 67 87 23 47, the answer is 23.
(I hope this covers the ambiguous and vague part.)
I searched on the net but was unable to find any such question in which the range of the integers was not fixed.
Also, here is an example that answers a question similar to mine, but there he created a hash table indexed by the highest integer value in C++. But C++ does not allow creating an array with 2^64 elements (on a 64-bit computer).
I am sorry I didn't mention it before: the array is immutable.
Jun Tarui has shown that any duplicate finder using O(log n) space requires at least Ω(log n / log log n) passes, which exceeds linear time. I.e. your question is provably unsolvable even if you allow logarithmic space.
There is an interesting algorithm by Gopalan and Radhakrishnan that finds duplicates in one pass over the input and O((log n)^3) space, which sounds like your best bet a priori.
Radix sort has time complexity O(kn), where k > log_2 n, though k often gets viewed as a constant, albeit a large one. You cannot implement a radix sort in constant space, obviously, but you could perhaps reuse your input data's space.
There are numerical tricks if you assume features about the numbers themselves. If the numbers are exactly 1 to n plus one duplicate, then simply add them up and subtract n(n+1)/2: what remains is the duplicate. If all the numbers are primes, you could cheat by ignoring the running time of division.
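For instance, here is a sketch of that sum trick in C++, under the strong assumption just stated: the array holds exactly the numbers 1..n plus one extra copy of the duplicate, so its length is n + 1 (overflow is ignored for the sketch):

#include <cstdint>
#include <vector>

std::int64_t duplicateBySum(const std::vector<std::int64_t>& a) {
    std::int64_t n = static_cast<std::int64_t>(a.size()) - 1;
    std::int64_t sum = 0;
    for (auto x : a) sum += x;          // O(n) time, O(1) space
    return sum - n * (n + 1) / 2;       // the surplus over 1 + 2 + ... + n is the duplicate
}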
As an aside, there is a well-known lower bound of Ω(log_2(n!)) = Ω(n log n) on comparison sorting, which suggests that google might help you find lower bounds on simple problems like finding duplicates as well.
If the array isn't sorted, you can only do it in O(n log n).
Some approaches can be found here.
If the range of the integers is bounded, you can perform a counting sort variant in O(n) time. The space complexity is O(k), where k is the upper bound on the integers (*), but that's a constant, so it's O(1).
If the range of the integers is unbounded, then I don't think there's any way to do this, but I'm not an expert at complexity puzzles.
(*) It's O(k) since there's also a constant upper bound on the number of occurrences of each integer, namely 2.
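A sketch of that counting variant in C++, assuming the values are non-negative and bounded by a known k (which is what makes the O(k) table "constant" space):

#include <vector>

int repeatedByCounting(const std::vector<int>& a, int k) {
    std::vector<int> count(k + 1, 0);   // O(k) space: one slot per possible value
    for (int x : a)
        if (++count[x] == 2) return x;  // second sighting: the repeated value
    return -1;                          // no repeat found
}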
In the case where the entries are bounded by the length of the array, you can check out Find any one of multiple possible repeated integers in a list and its O(N) time and O(1) space solution.
The generalization you mention is discussed in this follow-up question: Algorithm to find a repeated number in a list that may contain any number of repeats, and its O(n log^2 n) time and O(1) space solution.
The approach that would come closest to O(N) in time is probably a conventional hash table, where the hash entries are simply the numbers, used as keys. You'd walk through the list, inserting each entry in the hash table, after first checking whether it was already in the table.
Not strictly O(N), however, since hash search/insertion gets slower as the table fills up. And in terms of storage it would be expensive for large lists -- at least 3x and possibly 10-20x the size of the array of numbers.
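A sketch of that hash-table walk in C++ (names are mine; the reserve call is there to limit the slowdown as the table fills):

#include <cstdint>
#include <unordered_set>
#include <vector>

std::int64_t repeatedByHashing(const std::vector<std::int64_t>& a) {
    std::unordered_set<std::int64_t> seen;
    seen.reserve(a.size());              // pre-size to avoid rehashing
    for (auto x : a)
        if (!seen.insert(x).second)      // insert failed: already in the table
            return x;
    return -1;                           // no duplicate
}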
As was already mentioned by others, I don't see any way to do it in O(n).
However, you can try a probabilistic approach using a Bloom filter. It will give you O(n) if you are lucky.
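A toy sketch of that idea in C++; the filter size, hash mixing, and names are all my own illustrative choices. It makes one O(n) pass in fixed space, but a hit can be a false positive, so any flagged value should be verified (e.g. by counting its occurrences in one more pass):

#include <bitset>
#include <cstdint>
#include <functional>
#include <optional>
#include <vector>

constexpr std::size_t kBits = 1u << 20;          // filter size: an arbitrary choice
static std::bitset<kBits> bits;                  // static: too large for the stack

std::size_t h1(std::uint64_t x) { return std::hash<std::uint64_t>{}(x) % kBits; }
std::size_t h2(std::uint64_t x) { return std::hash<std::uint64_t>{}(x * 0x9E3779B97F4A7C15ULL) % kBits; }

std::optional<std::uint64_t> probableDuplicate(const std::vector<std::uint64_t>& a) {
    bits.reset();
    for (auto x : a) {
        if (bits[h1(x)] && bits[h2(x)])          // both bits set: probably seen before
            return x;                            // may be a false positive -> verify!
        bits[h1(x)] = true;
        bits[h2(x)] = true;
    }
    return std::nullopt;                         // nothing flagged
}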
Since extra space is not allowed, this can't be done without comparisons. The concept of a lower bound on the time complexity of comparison sorting can be applied here to prove that the problem, in its original form, can't be solved in O(n) in the worst case.
You can also sort and then scan adjacent elements, though note that the sort makes this O(n log n), not linear O(n), and it modifies the array:
import java.util.Arrays;

public class DuplicateInOnePass {

    public static void duplicate() {
        int[] ar = {6, 7, 8, 8, 7, 9, 9, 10};
        Arrays.sort(ar);                        // O(n log n): duplicates become adjacent
        for (int i = 0; i < ar.length - 1; i++) {
            if (ar[i] == ar[i + 1])             // equal neighbours -> a repeated value
                System.out.println("Duplicate element: " + ar[i]);
        }
    }

    public static void main(String[] args) {
        duplicate();
    }
}