bsort example from Programming Pearls - C++

In Programming Pearls there is an algorithm that sorts several variable-length bit arrays, but sorts in time proportional to the sum of their lengths. We have a record array x[0...n-1], where each record has an integer length and a pointer to an array bit[0...length-1].
The code is implemented this way:
void bsort(int l, int u, int depth)
{
    if (l >= u)
        return;
    for (int i = l; i <= u; i++) {    // records shorter than depth are done;
        if (x[i].length < depth)      // move them to the front and exclude them
            swap(i, l++);
    }
    int m = l;
    for (int i = l; i <= u; i++) {    // partition by the depth-th bit: zeros first
        if (x[i].bit[depth] == 0)
            swap(i, m++);
    }
    bsort(l, m - 1, depth + 1);       // recurse on the zero half...
    bsort(m, u, depth + 1);           // ...and on the one half
}
My question is: given the records
x[6] = {"car", "bus", "snow", "earth", "dog", "mouse"}
I know how to get each string's length, but what about the bit array? How could I make a bit array suitable for this string array? And how can I implement x[i].bit[depth]?

Arrays of chars (or of any other type, for that matter) are also arrays of bits: chars are made of bits, after all. So you don't have to create a separate array; you just have to find a way to access a given bit in the existing one. For that, you'll have to use some bit manipulation. You can find a few examples of how this could be done here: Any smarter way to extract from array of bits?.
Basically, you first have to figure out which byte the required bit is in, and then get that specific bit's value. Something along these lines:
const char* array = "the array";
int required_bit = 13;
int bit = required_bit & 0x7;           // the bit's offset within its byte
int byte = required_bit >> 3;           // the byte the bit lives in
int val = (array[byte] >> bit) & 0x1;   // 1 if the bit is set, 0 otherwise
Now wrap this in a function (possibly with additional bounds checks, to make sure the given required_bit is not outside of the array), and use it with x[i].
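For example, here is a minimal sketch of such a wrapper; the record layout and the names rec and get_bit are my own illustration, not from the book:

// Hypothetical record: the string itself plus its length in bits (8 * strlen).
struct rec {
    const char* str;
    int length;
};

// Return the bit at position pos of r's string, or 0 when pos is out of range.
int get_bit(const rec& r, int pos) {
    if (pos < 0 || pos >= r.length)    // bounds check
        return 0;
    int byte = pos >> 3;               // the char holding the bit
    int bit = pos & 0x7;               // offset within that char
    return (r.str[byte] >> bit) & 0x1;
}

Every use of x[i].bit[depth] then becomes get_bit(x[i], depth). One caveat: extracting bits least-significant first, as above and in the snippet, does not order strings lexicographically; for dictionary order, take bits most-significant first, i.e. (r.str[byte] >> (7 - bit)) & 0x1.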

Related

Algorithm to generate a function mapping sets of integers in [0,N] to [0,M] where M <= N

Here's the problem: I'm looking to map all the integers in the range [0, N) to integers in the range [0, M] (where M < N). Specifically, to map sets of integers to integers in [0, M] such that no set's output collides with another's. Things we have: we have all the sets of integers as inputs; we do not necessarily know or care which set maps to which number*. This adds another potential limitation (but one we'd want): M is the number of input sets. The purpose of the mapping is to compress the range [0, N) into as small a range [0, M] as possible. For completeness, any integers not found in the input sets can be considered their own sets, and any numbers appearing in multiple sets should be considered their own unique set. This creates a unique mapping for every integer to [0, M], where M is the number of sets given as inputs. N here would be 2^(however many bits).
I'm looking to generate a function (not a table). A table isn't so bad to accomplish; the algorithm to generate one is somewhat given away by the restrictions, and it's possible to use a bit of extra memory to eventually generate a smaller table. However, I'm memory bound and the input space is 32-bit integers, meaning a table 2^32 bytes large, or even one 2^16 bytes large, is just not feasible. I'll be generating quite a few of these functions: 256 bytes per function may not be so bad, but 2^16 or 2^32 is far too much. I've already written an algorithm to create a table for 8-bit integers, because that's a nicer case to deal with and I'm happy to have 256 bytes live on the stack. So instead I'm curious whether there's an algorithm, or other bit-hackery, that can accomplish this without consuming a ton of memory. Another consideration is that these generated functions will be hot; the instructions that make up each function should probably be branchless, since the inputs from [0, N) fed to the generated function are, for all practical purposes, random.
Some trivial examples: say we have only one set, containing the numbers [0, N); then the generated function should return 0 no matter the input. If we have two sets, [0, N/2) and [N/2, N), then the generated function should return 0 or 1 depending on which set the input was in. If we have N-1 sets of unique integers from [0, N), then we have what amounts to an identity function, or a perfect hash function. Since we're after compression here, this last case is the worst-case scenario; ideally it reduces to the identity function, so we do absolutely no work.
Since* these are sets of integers and we don't necessarily care about the order of their input, we might note that if we order the sets by their lowest member and map the sets, in that order, to unique numbers, this should eventually work out to the identity function. And, fingers crossed, the optimizer works out that it doesn't have to do any work and we'd be all good. However, since we know the sets at the start, we can check for this case and use the identity function instead.
My initial thought process was that maybe it'd be possible to work out the important bit indexes for every integer, build some sort of mask, and eventually work towards a mapping for every bit based on which ones appear most/least important, shuffling the input integer's bits into the least significant positions. I don't bit-hack much, but it strikes me as potentially pointing in the right direction.
Here's part of what builds an 8-bit table.
//previous is the table as it stands
//equiv_class is a set of integers
//returns the new table
template<typename Ty>
constexpr stack_vector::stack_vector<Ty, 256> make_equivalence_class(stack_vector::stack_vector<Ty, 256> previous, stack_vector::stack_vector<uint8_t, 256> equiv_class) {
    stack_vector::stack_vector<Ty, 256> ecs{};
    ::std::array<uint16_t, 256> counts{};
    stack_vector::stack_vector<Ty, 256> current = previous;
    if (equiv_class.size()) {
        //make a mapping to collapse the equivalence classes together
        for (size_t i = 0; i < current.size(); i++) {
            size_t tmp = current[i];
            auto it = std::find(ecs.begin(), ecs.end(), tmp);
            size_t ec = std::distance(ecs.begin(), it);
            if (it == ecs.end())
                ecs.emplace_back(tmp);
            current[i] = ec;
            counts[ec] += 1;
        }
        size_t mx = 0;
        for (size_t i = 0; i < equiv_class.size(); i++) {
            mx = ((mx >= equiv_class[i]) ? mx : equiv_class[i]);
        }
        //add indexes we're missing
        size_t lim = mx + 1;
        size_t last_size = ecs.size();
        for (size_t i = current.size(); i < lim; i++) {
            current.emplace_back(last_size);
            counts[last_size]++;
        }
        //partition input indexes based on equivalence class
        //use ecs just for the memory
        ecs.clear();
        for (size_t i = 0; i < equiv_class.size(); i++) {
            size_t idx = equiv_class[i];
            size_t key = current[equiv_class[i]];
            size_t j = 0;
            for (; j < ecs.size(); j++) {
                size_t key2 = current[ecs[j]];
                if (key == key2) {
                    break;
                }
            }
            size_t j2 = j;
            size_t j3 = j;
            for (; j2 < ecs.size(); ++j2, j3 = j2) {
                size_t key2 = current[ecs[j2]];
                if (idx == ecs[j2] || key != key2) {
                    j3 = ((idx == ecs[j2]) ? j2 : ecs.size());
                    break;
                }
            }
            //if idx was not found (removes duplicates)
            if (j3 == ecs.size()) {
                //make space
                ecs.emplace_back();
                //move backwards
                Ty* first = (&ecs[0]) + j2;
                Ty* last = (&ecs[0]) + (ecs.size() - 1);
                Ty* d_last = (&ecs[0]) + (ecs.size());
                while (first != last) {
                    *(--d_last) = std::move(*(--last));
                }
                //insert
                ecs[j2] = idx;
            }
        }
        equiv_class.clear();
        for (size_t i = 0; i < ecs.size(); i++) {
            equiv_class.unchecked_emplace_back(ecs[i]);
        }
        //now we're remapping every integer to itself
        ecs.clear();
        for (size_t i = 0; i < (last_size + 1); i++)
            ecs.unchecked_emplace_back(i);
        size_t last_size_2 = ecs.size();
        //merge new indexes
        for (size_t j = 0; j < equiv_class.size(); j++) {
            size_t i = equiv_class[j];
            size_t current_ec = current[i];
            size_t remapped_ec = ecs[current_ec];
            size_t current_count = counts[current_ec];
            size_t remapped_count = counts[remapped_ec];
            if (current_ec == remapped_ec) {
                if (current_count == 1) {
                    //if there's only 1 to start w/ anyway, leave as is
                } else {
                    //find a class which is free to remap to
                    size_t x = 0;
                    for (; x < counts.size() && counts[x]; x++) {}
                    ecs[current_ec] = x; //remap
                    //swap counts
                    counts[x]++;
                    counts[current_ec]--;
                    current[i] = x;
                }
            } else {
                counts[remapped_ec]++;
                counts[current_ec]--;
                current[i] = remapped_ec;
            }
        }
    }
    //we can patch up the returned table at the end by remapping
    //values as we did at the start
    return current;
}

Implementing the Backward Nondeterministic Dawg Matching algorithm

I'm trying to implement the BNDM algorithm in my code, in order to perform a fast pattern search.
I found some code online and tried to adjust it for my use case:
I think that I did something wrong while changing the values, since the algorithm takes a few minutes to finish (I was expecting it to be faster).
Using std::search takes me 30 seconds (with wildcards).
This takes me around 4-5 minutes (without wildcards).
The reason I'm casting everything to (unsigned char) is that the program crashes otherwise, since both my data and pattern hold hex values.
What I'd like to know is: where did I go wrong with this implementation (why is it running so slowly)? And how can I add the ability to search for a pattern that contains wildcards?
EDIT:
The issue with speed has been solved by switching build from debug to release.
Also changing the size of the B array to 256 made it even faster.
The only issue I currently have now is how to implement a way to use wildcards using this algorithm.
Current code:
vector<unsigned int> get_matches(const vector<char>& data, const string& pattern) {
    vector<unsigned int> matches;
    //vector<char>::const_iterator walk = data.begin();
    std::array<std::uint32_t, 256> B{ 0 };
    int m = pattern.size();
    int n = data.size();
    int i, j, s, d, last;
    //if (m > WORD_SIZE)
    //    error("BNDM");
    // Preprocessing
    //memset(B, 0, ASIZE * sizeof(int));
    s = 1;
    for (i = m - 1; i >= 0; i--) {
        B[(unsigned char)pattern[i]] |= s;
        s <<= 1;
    }
    // Searching phase
    j = 0;
    while (j <= n - m) {
        i = m - 1; last = m;
        d = ~0;
        while (i >= 0 && d != 0) {
            d &= B[(unsigned char)data[j + i]];
            i--;
            if (d != 0) {
                if (i >= 0)
                    last = i + 1;
                else
                    matches.emplace_back(j);
            }
            d <<= 1;
        }
        j += last;
    }
    return matches;
}
B is not big enough -- it is indexed by the bytes in the pattern so it must have 256 elements (assuming an 8-bit byte architecture.) But you define it as having pattern.size() elements, which is a much smaller number.
As a consequence, you are using memory outside of B's allocation, which is Undefined Behaviour.
I suggest you use std::array<std::uint32_t, 256>, since you don't ever need to resize B. (Or even better, std::array<std::uint32_t, std::numeric_limits<unsigned char>::max()+1>).
I'm not an expert on this particular search algorithm, but the preprocessing step appears to set bit p in element c of B if the character c matches pattern element p. Since a wildcard pattern element can match any character, it seems reasonable that every element of B should have the bits corresponding to wildcard characters set. In other words, instead of initialising every element of B to 0, initialise them to the mask of wildcard positions in the pattern.
I don't know if that is sufficient to get the algorithm to work with wildcards, but it could be worth a try.
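To make that concrete, here is a hedged sketch of the changed preprocessing, written to replace the preprocessing in the question's get_matches (I'm assuming the byte '?' marks a wildcard; use whatever sentinel your patterns actually contain):

// Preprocessing with wildcard support: collect a mask of the wildcard
// positions, then seed every entry of B with it, so those positions
// stay alive no matter which text byte is read.
std::array<std::uint32_t, 256> B{};
std::uint32_t wild = 0;
std::uint32_t s = 1;
for (int i = m - 1; i >= 0; i--) {
    if (pattern[i] == '?')
        wild |= s;                         // wildcard: matches any byte
    else
        B[(unsigned char)pattern[i]] |= s;
    s <<= 1;
}
for (auto& entry : B)
    entry |= wild;

The searching phase stays exactly as it is; whether this reproduces the semantics you want for leading and trailing wildcards is worth verifying on test data.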

how to find distinct substrings?

Given a string, and a fixed length l, how can I count the number of distinct substrings whose length is l?
The size of character set is also known. (denote it as s)
For example, given a string "PccjcjcZ", s = 4, l = 3,
then there are 5 distinct substrings:
“Pcc”; “ccj”; “cjc”; “jcj”; “jcZ”
I tried to use a hash table, but the speed is still slow.
In fact, I don't know how to make use of the character set size.
I have done things like this:
int diffPatterns(const string& src, int len, int setSize) {
    int cnt = 0;
    node* table[1 << 15];
    int tableSize = 1 << 15;
    for (int i = 0; i < tableSize; ++i) {
        table[i] = NULL;
    }
    unsigned int hashValue = 0;
    int end = (int)src.size() - len;
    for (int i = 0; i <= end; ++i) {
        hashValue = hashF(src, i, len);
        if (table[hashValue] == NULL) {
            table[hashValue] = new node(i);
            cnt++;
        } else {
            if (!compList(src, i, table[hashValue], len)) {
                cnt++;
            }
        }
    }
    for (int i = 0; i < tableSize; ++i) {
        deleteList(table[i]);
    }
    return cnt;
}
Hashtables are fine and practical, but keep in mind that if the length of the substrings is L, and the whole string length is N, then the algorithm is Theta((N+1-L)*L), which is Theta(NL) for most L. Remember, just computing a hash takes Theta(L) time. Plus there might be collisions.
Suffix trees can also be used, and provide a guaranteed O(N) time algorithm (count the number of paths at depth L or greater), but the implementation is complicated. The saving grace is that you can probably find off-the-shelf implementations in the language of your choice.
The idea of using a hashtable is good. It should work well.
The idea of implementing your own hashtable as an array of length 2^15 is bad. See Hashtable in C++? instead.
You can use an unordered_set: insert the substrings into the set and then get the size of the set. Since the values in a set are unique, it will take care of not counting substrings that are the same as ones previously found. This should give you close to O(StringSize - SubstringSize) complexity:
#include <iostream>
#include <string>
#include <unordered_set>

int main()
{
    std::string test = "PccjcjcZ";
    std::unordered_set<std::string> counter;
    size_t substringSize = 3;
    for (size_t i = 0; i < test.size() - substringSize + 1; ++i)
    {
        counter.insert(test.substr(i, substringSize));
    }
    std::cout << counter.size();
    std::cin.get();
    return 0;
}
Veronica Kham's answer to the question is good, but we can improve this method to expected O(n) and still use a simple hash table rather than a suffix tree or any other advanced data structure.
Hash function
Let X and Y be two adjacent substrings of length L; more precisely:
X = A[i, i + L - 1]
Y = A[i + 1, i + L]
Assign to each letter of our alphabet a single non-negative integer, for example a := 1, b := 2, and so on.
Let's define a hash function h now:
h(A[i, j]) := (P^(L-1) * A[i] + P^(L-2) * A[i + 1] + ... + A[j]) % M
where P is a prime number ideally greater than the alphabet size and M is a very big number denoting the number of different possible hashes, for example you can set M to maximum available unsigned long long int in your system.
Algorithm
The crucial observation is the following:
If you have the hash of X, you can compute the hash of Y in O(1) time.
Let's assume that we have computed h(X), which can obviously be done in O(L) time. We want to compute h(Y). Notice that X and Y differ in only two characters, so we can update the hash using just addition and multiplication:
h(Y) = ((h(X) - P^(L-1) * A[i]) * P + A[j + 1]) % M
Basically, we subtract the letter A[i] multiplied by its coefficient in h(X), multiply the result by P in order to get the proper coefficients for the remaining letters, and at the end add the last letter A[j + 1].
Notice that we can precompute powers of P at the beginning and we can do it modulo M.
Since our hash function returns integers, we can use any hash table to store them. Remember to do all computations modulo M and avoid integer overflow.
Collisions
Of course, collisions might occur, but since P is prime and M is really huge, it is a rare situation.
If you want to lower the probability of a collision, you can use two different hash functions, for example with a different modulus in each. If the probability of a collision is p with one such function, then with two it is p^2, and we can make it arbitrarily small by this trick.
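A minimal sketch of the whole scheme (the function name is mine; it uses a single hash and lets unsigned 64-bit wraparound play the role of M, so add the second hash function described above if collisions worry you):

#include <cstdint>
#include <string>
#include <unordered_set>

// Count distinct substrings of length l with a polynomial rolling hash.
// P is a prime larger than the alphabet size; arithmetic is modulo 2^64.
std::size_t count_distinct(const std::string& s, std::size_t l) {
    if (l == 0 || s.size() < l) return 0;
    const std::uint64_t P = 131;
    std::uint64_t h = 0, top = 1;                // top ends up as P^(L-1)
    for (std::size_t i = 0; i < l; ++i) {
        h = h * P + (unsigned char)s[i];
        if (i) top *= P;
    }
    std::unordered_set<std::uint64_t> seen{h};
    for (std::size_t i = l; i < s.size(); ++i) {
        // slide the window: drop s[i - l], append s[i]
        h = (h - top * (unsigned char)s[i - l]) * P + (unsigned char)s[i];
        seen.insert(h);
    }
    return seen.size();
}

For the question's example, count_distinct("PccjcjcZ", 3) returns 5, barring a collision.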
Use Rolling hashes.
This will make the runtime expected O(n).
This might be repeating pkacprzak's answer, except that it gives the technique a name for easier remembrance, etc.
A Suffix Automaton can also finish it in O(N).
It's easy to code, but hard to understand.
Here are papers about it http://dl.acm.org/citation.cfm?doid=375360.375365
http://www.sciencedirect.com/science/article/pii/S0304397509002370
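Since the papers are heavy going, here is a compact sketch of that approach (my own illustration, not taken from the papers). It relies on the standard fact that each automaton state q represents exactly one distinct substring for every length in (len(link(q)), len(q)]:

#include <iostream>
#include <map>
#include <string>
#include <vector>

struct SuffixAutomaton {
    struct State { int len = 0, link = -1; std::map<char, int> next; };
    std::vector<State> st;
    int last = 0;
    explicit SuffixAutomaton(const std::string& s) {
        st.reserve(2 * s.size() + 2);  // at most 2n-1 states, so indices stay valid
        st.emplace_back();             // initial state
        for (char c : s) extend(c);
    }
    void extend(char c) {
        int cur = (int)st.size();
        st.emplace_back();
        st[cur].len = st[last].len + 1;
        int p = last;
        for (; p != -1 && !st[p].next.count(c); p = st[p].link) st[p].next[c] = cur;
        if (p == -1) st[cur].link = 0;
        else {
            int q = st[p].next[c];
            if (st[p].len + 1 == st[q].len) st[cur].link = q;
            else {
                int clone = (int)st.size();
                State copy = st[q];            // clone q with a shorter len
                copy.len = st[p].len + 1;
                st.push_back(copy);
                for (; p != -1 && st[p].next[c] == q; p = st[p].link) st[p].next[c] = clone;
                st[q].link = st[cur].link = clone;
            }
        }
        last = cur;
    }
};

// Distinct substrings of length l: count the states whose length range covers l.
int count_length_l(const std::string& s, int l) {
    SuffixAutomaton sa(s);
    int cnt = 0;
    for (std::size_t q = 1; q < sa.st.size(); ++q)
        if (sa.st[sa.st[q].link].len < l && l <= sa.st[q].len) ++cnt;
    return cnt;
}

int main() {
    std::cout << count_length_l("PccjcjcZ", 3) << "\n";  // prints 5
}

With std::map transitions this is O(N log sigma) rather than strictly O(N), which is usually close enough in practice.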

Implementation of string pattern matching using Suffix Array and LCP(-LR)

Over the last few weeks I have been trying to figure out how to efficiently find a string pattern within another string.
I found out that for a long time the most efficient way would have been a suffix tree. However, since this data structure is very expensive in space, I studied the use of suffix arrays further (which use far less space). Different papers, such as "Suffix Arrays: A New Method for On-Line String Searches" (Manber & Myers, 1993), state that searching for a substring can be realised in O(P + log(N)) (where P is the length of the pattern and N is the length of the string) by using binary search and suffix arrays along with LCP arrays.
I especially studied the latter paper to understand the search algorithm. This answer did a great job in helping me understand the algorithm (and incidentally made it into the LCP Wikipedia Page).
But I am still looking for a way to implement this algorithm. Especially the construction of the mentioned LCP-LR arrays seems very complicated.
References:
Manber & Myers, 1993: Manber, Udi; Myers, Gene. "Suffix Arrays: A New Method for On-Line String Searches." SIAM Journal on Computing, 1993, Vol. 22(5), pp. 935-948. http://epubs.siam.org/doi/pdf/10.1137/0222058
UPDATE 1
Just to emphasize what I am interested in: I understand LCP arrays and I have found ways to implement them. However, the "plain" LCP array is not appropriate for efficient pattern matching (as described in the reference). Thus I am interested in implementing LCP-LR arrays, which seems much more complicated than just implementing an LCP array.
UPDATE 2
Added link to referenced paper
The term that can help you: enhanced suffix array, which describes a suffix array augmented with various other arrays (lcp, child) in order to replace a suffix tree.
These can be some of the examples:
https://code.google.com/p/esaxx/ ESAXX
http://bibiserv.techfak.uni-bielefeld.de/mkesa/ MKESA
The esaxx one seems to do what you want; plus, it has an example, enumSubstring.cpp, showing how to use it.
If you take a look at the referenced paper, it mentions a useful property (4.2). Since SO does not support math, there is no point in copying it here.
I've done a quick implementation; it uses a segment tree:
// note that arrSize is O(n)
// int arrSize = 2 * 2 ^ (log(N) + 1) + 1; // start from 1
// LCP = new int[N];
// fill the LCP...
// LCP_LR = new int[arrSize];
// memset(LCP_LR, maxValueOfInteger, arrSize);
//
// init: buildLCP_LR(1, 1, N);
// LCP_LR[1] == [1..N]
// LCP_LR[2] == [1..N/2]
// LCP_LR[3] == [N/2+1 .. N]
// rangeI = LCP_LR[i]
// rangeILeft = LCP_LR[2 * i]
// rangeIRight = LCP_LR[2 * i + 1]
// ..etc
void buildLCP_LR(int index, int low, int high)
{
    if (low == high)
    {
        LCP_LR[index] = LCP[low];
        return;
    }
    int mid = (low + high) / 2;
    buildLCP_LR(2 * index, low, mid);
    buildLCP_LR(2 * index + 1, mid + 1, high);
    LCP_LR[index] = min(LCP_LR[2 * index], LCP_LR[2 * index + 1]);
}
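To actually consult it during the binary search you also need the range-minimum lookup over that tree; a sketch in the same style (my addition, following the usual segment-tree pattern; maxValueOfInteger is the same placeholder as above):

// Range-minimum query over LCP[lo..hi], mirroring buildLCP_LR's layout.
// index/low/high describe the current node; call as queryLCP_LR(1, 1, N, lo, hi).
int queryLCP_LR(int index, int low, int high, int lo, int hi)
{
    if (hi < low || high < lo)
        return maxValueOfInteger;      // disjoint: neutral element for min
    if (lo <= low && high <= hi)
        return LCP_LR[index];          // node range fully covered: stored minimum
    int mid = (low + high) / 2;
    return min(queryLCP_LR(2 * index, low, mid, lo, hi),
               queryLCP_LR(2 * index + 1, mid + 1, high, lo, hi));
}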
Here is a fairly simple implementation in C++, though the build() procedure builds the suffix array in O(N lg^2 N) time. The lcp_compute() procedure has linear complexity. I have used this code in many programming contests, and it has never let me down :)
#include <stdio.h>
#include <string.h>
#include <algorithm>
using namespace std;
const int MAX = 200005;
char str[MAX];
int N, h, sa[MAX], pos[MAX], tmp[MAX], lcp[MAX];
bool compare(int i, int j) {
    if (pos[i] != pos[j]) return pos[i] < pos[j]; // compare by the first h chars
    i += h, j += h; // if the previous comparison tied, use 2*h chars
    return (i < N && j < N) ? pos[i] < pos[j] : i > j; // return results
}

void build() {
    N = strlen(str);
    for (int i = 0; i < N; ++i) sa[i] = i, pos[i] = str[i]; // initialize variables
    for (h = 1;; h <<= 1) {
        sort(sa, sa + N, compare); // sort suffixes
        for (int i = 0; i < N - 1; ++i) tmp[i + 1] = tmp[i] + compare(sa[i], sa[i + 1]); // bucket suffixes
        for (int i = 0; i < N; ++i) pos[sa[i]] = tmp[i]; // update pos (reverse mapping of suffix array)
        if (tmp[N - 1] == N - 1) break; // check if done
    }
}

void lcp_compute() {
    for (int i = 0, k = 0; i < N; ++i)
        if (pos[i] != N - 1) {
            for (int j = sa[pos[i] + 1]; str[i + k] == str[j + k];) k++;
            lcp[pos[i]] = k;
            if (k) k--;
        }
}

int main() {
    scanf("%s", str);
    build();
    for (int i = 0; i < N; ++i) printf("%d\n", sa[i]);
    return 0;
}
Note: If you want the complexity of the build() procedure to become O(N lg N), you can replace the STL sort with radix sort, but this is going to complicate the code.
Edit: Sorry, I misunderstood your question. Although I haven't implemented string matching with a suffix array, I think I can describe a simple, non-standard, but fairly efficient algorithm for string matching. You are given two strings, the text and the pattern. Given these strings you create a new one, let's call it concat, which is the concatenation of the two (first the text, then the pattern). You run the suffix array construction algorithm on concat, and you build the normal lcp array. Then, you search for a suffix of length pattern.size() in the suffix array you just built. Let's call its position in the suffix array pos. You then need two pointers, lo and hi. At the start, lo = hi = pos. You decrease lo while lcp(lo, pos) = pattern.size(), and you increase hi while lcp(hi, pos) = pattern.size(). Then you search for a suffix of length at least 2*pattern.size() in the range [lo, hi]. If you find one, you have found a match. Otherwise, no match exists.
Edit[2]: I will be back with an implementation as soon as I have one...
Edit[3]:
Here it is:
// It works assuming you have built the concatenated string and
// computed the suffix and the lcp arrays
// text.length()    ---> tlen
// pattern.length() ---> plen
// concatenated string: str
bool match(int tlen, int plen) {
    int total = tlen + plen;
    int pos = -1;
    for (int i = 0; i < total; ++i)
        if (total - sa[i] == plen)
        { pos = i; break; }
    if (pos == -1) return false;
    int lo, hi;
    lo = hi = pos;
    while (lo - 1 >= 0 && lcp[lo - 1] >= plen) lo--;
    while (hi + 1 < N && lcp[hi] >= plen) hi++;
    for (int i = lo; i <= hi; ++i)
        if (total - sa[i] >= 2 * plen)
            return true;
    return false;
}
Here is a nice post including some code to help you better understand LCP array and comparison implementation.
I understand your desire is the code, rather than implementing your own.
Although written in Java, this is an implementation of a Suffix Array with LCP by Sedgewick and Wayne from their Algorithms booksite. It should save you some time and should not be tremendously hard to port to C/C++.
LCP array construction in pseudocode, for those who might want more information about the algorithm.
I think @Erti-Chris Eelmaa's algorithm is wrong.
L ... 'M ... M ... M' ... R
|-----|-----|
The left sub-range and the right sub-range should both contain M. Therefore we cannot do a normal segment-tree partition for the LCP-LR array.
The code should look like:
def lcp_from_i_j(i, j):  # means [i, j], not [i, j)
    if j - i < 1: return lcp_2_elem(i, j)
    return lcp_merge(lcp_from_i_j(i, (i+j)/2), lcp_from_i_j((i+j)/2, j))
The left and the right sub-ranges overlap. The segment tree supports range-min queries; however, the range min over [a,b] is not equal to the lcp over [a,b]. The LCP array is continuous; a simple range-min would not work!

Finding repeating signed integers with O(n) in time and O(1) in space

(This is a generalization of: Finding duplicates in O(n) time and O(1) space)
Problem: Write a C++ or C function with time and space complexities of O(n) and O(1) respectively that finds the repeating integers in a given array without altering it.
Example: Given {1, 0, -2, 4, 4, 1, 3, 1, -2} function must print 1, -2, and 4 once (in any order).
EDIT: The following solution requires a duo-bit (to represent 0, 1, and 2) for each integer in the range of the minimum to the maximum of the array. The number of necessary bytes (regardless of array size) never exceeds (INT_MAX - INT_MIN)/4 + 1.
#include <stdio.h>
#include <stdlib.h>

void set_min_max(int a[], long long unsigned size,
                 int* min_addr, int* max_addr)
{
    long long unsigned i;
    if (!size) return;
    *min_addr = *max_addr = a[0];
    for (i = 1; i < size; ++i)
    {
        if (a[i] < *min_addr) *min_addr = a[i];
        if (a[i] > *max_addr) *max_addr = a[i];
    }
}

void print_repeats(int a[], long long unsigned size)
{
    long long unsigned i;
    int min, max;
    long long diff, q, r;
    char* duos;
    set_min_max(a, size, &min, &max);
    diff = (long long)max - (long long)min;
    duos = calloc(diff / 4 + 1, 1);
    for (i = 0; i < size; ++i)
    {
        diff = (long long)a[i] - (long long)min; /* index of the duo-bit
                                                    corresponding to a[i]
                                                    in the sequence of duo-bits */
        q = diff / 4; /* index of the byte containing the duo-bit in "duos" */
        r = diff % 4; /* offset of the duo-bit */
        switch ((duos[q] >> (6 - 2 * r)) & 3)
        {
            case 0: duos[q] += (1 << (6 - 2 * r)); /* first sighting: 0 -> 1 */
                break;
            case 1: duos[q] += (1 << (6 - 2 * r)); /* second sighting: 1 -> 2, print once */
                printf("%d ", a[i]);
        }
    }
    putchar('\n');
    free(duos);
}

int main()
{
    int a[] = {1, 0, -2, 4, 4, 1, 3, 1, -2};
    print_repeats(a, sizeof(a) / sizeof(int));
    return 0;
}
The definition of big-O notation is that its argument is a function (f(x)) such that, as the variable in the function (x) tends to infinity, there exists a constant K for which the objective cost function will be smaller than Kf(x). Typically f is chosen to be the smallest such simple function that satisfies the condition. (It's pretty obvious how to lift the above to multiple variables.)
This matters because that K (which you aren't required to specify) allows a whole multitude of complex behavior to be hidden out of sight. For example, if the core of the algorithm is O(n^2), it allows all sorts of other O(1), O(log n), O(n), O(n log n), O(n^(3/2)), etc. supporting bits to be hidden, even if for realistic input data those parts are what actually dominate. That's right, it can be completely misleading! (Some of the fancier bignum algorithms have this property for real. Lying with mathematics is a wonderful thing.)
So where is this going? Well, you can assume that int is a fixed size easily enough (e.g., 32-bit) and use that information to skip a lot of trouble by allocating fixed-size arrays of flag bits to hold all the information that you really need. Indeed, by using two bits per potential value (one bit to say whether you've seen the value at all, another to say whether you've printed it) you can handle the problem with a fixed chunk of memory 1 GB in size. That will then give you enough flag information to cope with as many 32-bit integers as you might ever wish to handle. (Heck, that's even practical on 64-bit machines.) Yes, it's going to take some time to set that memory block up, but it's constant, so it's formally O(1) and so drops out of the analysis. Given that, you then have constant (but whopping) memory consumption and linear time (you've got to look at each value to see whether it's new, seen once, etc.), which is exactly what was asked for.
It's a dirty trick though. You could also try scanning the input list to work out the range allowing less memory to be used in the normal case; again, that adds only linear time and you can strictly bound the memory required as above so that's constant. Yet more trickiness, but formally legal.
[EDIT] Sample C code (this is not C++, but I'm not good at C++; the main difference would be in how the flag arrays are allocated and managed):
#include <stdio.h>
#include <stdlib.h>

// Bit fiddling magic
int is(int *ary, unsigned int value) {
    return ary[value >> 5] & (1 << (value & 31));
}

void set(int *ary, unsigned int value) {
    ary[value >> 5] |= 1 << (value & 31);
}

// Main loop
void print_repeats(int a[], unsigned size) {
    int *seen, *done;
    unsigned i;
    // 134217728 ints = 2^32 bits: one flag bit per possible 32-bit value
    seen = calloc(134217728, sizeof(int));
    done = calloc(134217728, sizeof(int));
    for (i = 0; i < size; i++) {
        if (is(done, (unsigned) a[i]))
            continue;
        if (is(seen, (unsigned) a[i])) {
            set(done, (unsigned) a[i]);
            printf("%d ", a[i]);
        } else
            set(seen, (unsigned) a[i]);
    }
    printf("\n");
    free(done);
    free(seen);
}

int main() {
    int a[] = {1, 0, -2, 4, 4, 1, 3, 1, -2};
    print_repeats(a, sizeof(a) / sizeof(int));
    return 0;
}
Since you have an array of integers, you can use the straightforward solution: sort the array (you didn't say it can't be modified) and print the duplicates. Integer arrays can be sorted with O(n) time and O(1) space complexity using radix sort. Although in general it might require O(n) space, in-place binary MSD radix sort can be trivially implemented using O(1) space (look here for more details).
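A minimal sketch of such an in-place binary MSD radix sort, for unsigned values (my illustration; signed input would need the sign bit handled as its own top-level partition):

#include <utility>

// Partition by the given bit (ones to the back), then recurse on each half
// with the next lower bit. Recursion depth is bounded by the bit width,
// a constant, so the extra space is O(1).
void radix_sort(unsigned a[], int lo, int hi, int bit) {
    if (lo >= hi || bit < 0) return;
    int i = lo, j = hi;
    while (i <= j) {
        if ((a[i] >> bit) & 1u) std::swap(a[i], a[j--]);  // 1-bit: move to back
        else ++i;                                         // 0-bit: keep in front
    }
    radix_sort(a, lo, j, bit - 1);     // zeros: a[lo..j]
    radix_sort(a, i, hi, bit - 1);     // ones:  a[i..hi]
}

A full sort of n values would then be radix_sort(a, 0, n - 1, 31).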
The O(1) space constraint is intractable.
The very fact of printing the array itself requires O(N) storage, by definition.
Now, feeling generous, I'll give you that you can have O(1) storage for a buffer within your program and consider that the space taken outside the program is of no concern to you, and thus that the output is not an issue...
Still, the O(1) space constraint feels intractable, because of the immutability constraint on the input array. It might not be, but it feels so.
And your solution overflows, because you try to memorize O(N) bits of information in a finite datatype.
There is a tricky problem with definitions here. What does O(n) mean?
Konstantin's answer claims that the radix sort time complexity is O(n). In fact it is O(n log M), where the base of the logarithm is the radix chosen, and M is the range of values that the array elements can have. So, for instance, a binary radix sort of 32-bit integers will have log M = 32.
So this is still, in a sense, O(n), because log M is a constant independent of n. But if we allow this, then there is a much simpler solution: for each integer in the range (all 4294967296 of them), go through the array to see if it occurs more than once. This is also, in a sense, O(n), because 4294967296 is also a constant independent of n.
I don't think my simple solution would count as an answer. But if not, then we shouldn't allow the radix sort, either.
I doubt this is possible. Assuming there is a solution, let's see how it works. I'll try to be as general as I can and show that it can't work... So, how does it work?
Without loss of generality, say we process the array k times, where k is fixed. The solution should also work when there are m duplicates, with m >> k. Thus, in at least one of the passes, we should be able to output x duplicates, where x grows as m grows. To do so, some useful information has been computed in a previous pass and stored in the O(1) storage. (The array itself can't be used; that would give O(n) storage.)
The problem: we have O(1) of information; when we walk over the array we have to identify x numbers (to output them). We need an O(1) storage that can tell us in O(1) time whether an element is in it. Said a different way, we need a data structure that stores n booleans (of which x are true), uses O(1) space, and takes O(1) time to query.
Does this data structure exist? If not, then we can't find all duplicates in an array with O(n) time and O(1) space (or there is some fancy algorithm that works in a completely different manner???).
I really don't see how you can have only O(1) space and not modify the initial array. My guess is that you need an additional data structure. For example, what is the range of the integers? If it's 0..N, as in the other question you linked, you can have an additional count array of size N. Then traverse the original array in O(N) and increment the counter at the position of the current element. Then traverse the count array and print the numbers with count >= 2. Something like:
int* counts = new int[N]();   // value-initialized: all counters start at zero
for (int i = 0; i < N; i++) {
    counts[input[i]]++;
}
for (int i = 0; i < N; i++) {
    if (counts[i] >= 2) cout << i << " ";
}
delete[] counts;
Say you can use the fact that you are not using all the space you have: you only need one more bit per possible value, and you have lots of unused bits in your 32-bit int values.
This has serious limitations, but it works in this case. Numbers have to be between -n/2 and n/2, and if a number repeats m times, it will be printed m/2 times.
void print_repeats(long a[], unsigned size) {
    /* assumes 32-bit long: topbit is the sign bit */
    long i, val, pos, topbit = 1 << 31, mask = ~topbit;
    for (i = 0; i < size; i++)
        a[i] &= mask;                  /* clear all marker bits first */
    for (i = 0; i < size; i++) {
        val = a[i] & mask;
        if (val <= mask / 2) {
            pos = val;
        } else {
            val += topbit;
            pos = size + val;
        }
        if (a[pos] < 0) {              /* already marked: second sighting */
            printf("%ld\n", val);
            a[pos] &= mask;
        } else {
            a[pos] |= topbit;          /* mark as seen once */
        }
    }
}

int main() {
    long a[] = {1, 0, -2, 4, 4, 1, 3, 1, -2};
    print_repeats(a, sizeof(a) / sizeof(long));
    return 0;
}
prints
4
1
-2