I have two types of queries.
1 X Y
Add element X to the collection, Y times.
2 N
Find the Nth element in the sorted collection.
Constraints:
Number of queries < 5 * 10^5
X < 10^9
Y < 10^9
I tried STL set but it did not work.
I think we need a balanced tree where each node contains two data values.
The first value is the element X. The second is the prefix sum of the Ys of all elements smaller than or equal to that value.
When we are adding an element X, find the predecessor of X in the tree and add Y to the second value associated with it.
When finding the Nth element, search the tree (on the second value) for the value immediately lower than N.
How can this data structure be implemented efficiently?
This can easily be done using a segment tree, with complexity O(Q * log(10^9)).
We use a so-called "sparse" segment tree, creating nodes only when they are needed instead of allocating all of them up front.
Every node stores the count of elements in its range [L, R].
Adding some element y times is done by traversing the segment tree from the root to the leaf for that element and updating the counts along the way (creating the nodes that do not exist yet).
Since the height of the segment tree is logarithmic, this takes O(log N) time, where N is the initial interval length (10^9).
Finding the k-th element is a binary search on the segment tree: since every node knows the count of elements in its range, we can descend left or right toward the leaf that contains the k-th element.
Sample code (C++):
#include <bits/stdc++.h>
using namespace std;
#define ll long long

// Each of the <= 5*10^5 insertions creates at most ~31 nodes on its root-to-leaf path.
const int sz = 31 * 500000 + 5;
ll seg[sz];        // seg[c] = count of elements in node c's range
int L[sz], R[sz];  // child indices; 0 means "not created yet" (seg[0] stays 0)
int nxt = 2;       // node 1 is the root

void IncNode(int c, int l, int r, int idx, ll val)
{
    if (l == r)
    {
        seg[c] += val;
        return;
    }
    int m = (l + r) / 2;
    if (idx <= m)
    {
        if (!L[c]) L[c] = nxt++;
        IncNode(L[c], l, m, idx, val);
    }
    else
    {
        if (!R[c]) R[c] = nxt++;
        IncNode(R[c], m + 1, r, idx, val);
    }
    seg[c] = seg[L[c]] + seg[R[c]];
}

int FindKth(int c, int l, int r, ll k)
{
    if (l == r) return r;
    int m = (l + r) / 2;
    if (seg[L[c]] >= k) return FindKth(L[c], l, m, k);
    return FindKth(R[c], m + 1, r, k - seg[L[c]]);
}

int main()
{
    ios::sync_with_stdio(0); cin.tie(0); cout.tie(0);
    int Q;
    cin >> Q;
    int lo = 0, hi = 1e9; // renamed from L/R to avoid shadowing the child arrays
    while (Q--)
    {
        int type;
        cin >> type;
        if (type == 1)
        {
            int x; ll y;
            cin >> x >> y;
            IncNode(1, lo, hi, x, y);
        }
        else
        {
            ll k; // the total count can exceed int range (up to ~5*10^14)
            cin >> k;
            cout << FindKth(1, lo, hi, k) << "\n";
        }
    }
}
Maintaining a prefix sum in each node is not practical. It would mean that every time you add a new node, you have to update the prefix sum in every node succeeding it in the tree. Instead, you need to maintain subtree sums: each node should contain the sum of Y-values for its own key and the keys of all descendants. Maintaining subtree sums when the tree is updated should be straightforward.
When you answer a query of type 2, at each node you descend into the left subtree if N is less than or equal to the subtree sum S of the left child (I'm assuming N is 1-indexed). Otherwise, if N is at most S plus the Y-count stored at the current node itself, the current node's key is the answer; if not, subtract that amount from N and descend into the right subtree.
By the way, if the entire set of X values is known in advance, then instead of a balanced BST, you could use a range tree or a binary indexed tree.
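For instance, if it is acceptable to process the queries offline, a binary indexed tree over the coordinate-compressed X values is enough. A rough sketch of that idea (reading all queries up front is an assumption; the binary-lifting loop finds the smallest prefix whose Y-sum reaches N):

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    int q;
    scanf("%d", &q);
    vector<array<ll, 3>> qs(q);
    vector<ll> xs;                       // all X values, for coordinate compression
    for (auto &e : qs) {
        scanf("%lld", &e[0]);
        if (e[0] == 1) { scanf("%lld %lld", &e[1], &e[2]); xs.push_back(e[1]); }
        else scanf("%lld", &e[1]);
    }
    sort(xs.begin(), xs.end());
    xs.erase(unique(xs.begin(), xs.end()), xs.end());
    int m = xs.size();
    vector<ll> bit(m + 1, 0);            // Fenwick tree over compressed ranks
    int LOG = 1;
    while ((1 << LOG) <= m) ++LOG;
    for (auto &e : qs) {
        if (e[0] == 1) {                 // add X (= e[1]) with multiplicity Y (= e[2])
            int i = lower_bound(xs.begin(), xs.end(), e[1]) - xs.begin() + 1;
            for (; i <= m; i += i & -i) bit[i] += e[2];
        } else {                         // find the N-th element (N = e[1])
            ll k = e[1];
            int pos = 0;                 // binary lifting: largest prefix with sum < k
            for (int p = LOG; p >= 0; --p) {
                int np = pos + (1 << p);
                if (np <= m && bit[np] < k) { pos = np; k -= bit[np]; }
            }
            printf("%lld\n", xs[pos]);   // pos is the 0-based rank of the answer
        }
    }
}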
I have a question about this problem.
Question
You are given a sequence a[0], a[1], ..., a[N-1], and a set of ranges (l[i], r[i]) (0 <= i <= Q - 1).
Calculate mex(a[l[i]], a[l[i] + 1],..., a[r[i] - 1]) for all (l[i], r[i]).
The function mex is the minimum excluded value (the smallest non-negative integer not present).
Wikipedia Page of mex function
You can assume that N <= 100000, Q <= 100000, and a[i] <= 100000.
An O(Q * (r[i] - l[i]) * log(r[i] - l[i])) algorithm is obvious, but it is not efficient.
My Current Approach
#include <bits/stdc++.h>
using namespace std;
int N, Q, a[100009], l, r;
int main() {
    cin >> N >> Q;
    for (int i = 0; i < N; i++) cin >> a[i];
    for (int i = 0; i < Q; i++) {
        cin >> l >> r;
        set<int> s;
        for (int j = l; j < r; j++) s.insert(a[j]); // was a[i]: inserted the wrong element
        int ret = 0;
        while (s.count(ret)) ret++;
        cout << ret << endl;
    }
    return 0;
}
Please tell me how to solve this.
EDIT: O(N^2) is slow. Please tell me a faster algorithm.
Here's an O((Q + N) log N) solution:
Let's iterate over all positions in the array from left to right and store the last occurrences for each value in a segment tree (the segment tree should store the minimum in each node).
After adding the i-th number, we can answer all queries with the right border equal to i.
The answer is the smallest value x such that last[x] < l. We can find it by going down the segment tree from the root: if the minimum in the left child is smaller than l, we go there; otherwise, we go to the right child.
That's it.
Here is some pseudocode:
tree = new SegmentTree() // A minimum segment tree with -1 in each position
for i = 0 .. n - 1
    tree.put(a[i], i)
    for all queries with r = i
        ans for this query = tree.findFirstSmaller(l)
The find smaller function goes like this:
int findFirstSmaller(node, value)
    if node.isLeaf()
        return node.position()
    if node.leftChild.minimum < value
        return findFirstSmaller(node.leftChild, value)
    return findFirstSmaller(node.rightChild, value)
This solution is rather easy to code (all you need is a point update and the findFirstSmaller function shown above), and I'm sure that it's fast enough for the given constraints.
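For reference, a compact C++ version of this solution (a sketch, assuming half-open ranges [l, r) with r >= 1 as in the question's code, and that values above N can be clamped to N, since the mex of at most N numbers never exceeds N):

#include <bits/stdc++.h>
using namespace std;

const int M = 100005;
int mn[4 * M]; // min-segment tree over last occurrences
int n, q;

void update(int c, int l, int r, int pos, int val) {
    if (l == r) { mn[c] = val; return; }
    int m = (l + r) / 2;
    if (pos <= m) update(2 * c, l, m, pos, val);
    else update(2 * c + 1, m + 1, r, pos, val);
    mn[c] = min(mn[2 * c], mn[2 * c + 1]);
}

int findFirstSmaller(int c, int l, int r, int bound) {
    if (l == r) return l; // first value whose last occurrence < bound
    int m = (l + r) / 2;
    if (mn[2 * c] < bound) return findFirstSmaller(2 * c, l, m, bound);
    return findFirstSmaller(2 * c + 1, m + 1, r, bound);
}

int main() {
    scanf("%d %d", &n, &q);
    vector<int> a(n);
    for (int &x : a) { scanf("%d", &x); x = min(x, n); } // values > n never affect the mex
    vector<vector<pair<int, int>>> byR(n); // queries (l, index) grouped by r - 1
    vector<int> ans(q);
    for (int i = 0; i < q; i++) {
        int l, r;
        scanf("%d %d", &l, &r);
        byR[r - 1].push_back({l, i});
    }
    memset(mn, -1, sizeof mn); // -1 = "value never seen"
    for (int i = 0; i < n; i++) {
        update(1, 0, n, a[i], i);
        for (auto &[l, id] : byR[i]) ans[id] = findFirstSmaller(1, 0, n, l);
    }
    for (int i = 0; i < q; i++) printf("%d\n", ans[i]);
}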
Let's process both our queries and our elements in a left-to-right manner, something like
for (int i = 0; i < N; ++i) {
    // 1. Add a[i] to all internal data structures
    // 2. Calculate answers for all queries q such that r[q] == i
}
Here we have O(N) iterations of this loop, and we want to do both the update of the data structure and the query for a suffix of the currently processed part in o(N) time.
Let's use an array contains[i][j] which holds 1 if the suffix starting at position i contains the number j, and 0 otherwise. Suppose also that we have calculated prefix sums for each contains[i] separately. Then we could answer each particular suffix query in O(log N) time using binary search: just find the first zero in the corresponding contains[l[i]] array, which is exactly the first position where the partial sum equals the index rather than index + 1. Unfortunately, such arrays would take O(N^2) space and need O(N^2) time for each update.
So, we have to optimize. Let's build a 2-dimensional range tree with "sum query" and "assignment" range operations. In such a tree we can query the sum on any sub-rectangle and assign the same value to all elements of any sub-rectangle in O(log^2 N) time, which allows us to do the update in O(log^2 N) time and queries in O(log^3 N) time, giving the time complexity O(N log^2 N + Q log^3 N). The space complexity of O((N + Q) log^2 N) (and the same time for initialization of the arrays) is achieved using lazy initialization.
UP: Let's revise how the query works in range trees with "sum". For 1-dimensional tree (to not make this answer too long), it's something like this:
class Tree
{
    int l, r;           // begin and end of the interval represented by this vertex
    int sum;            // already calculated sum
    int overriden;      // value of the override, or the special constant NO_OVERRIDE
    Tree *left, *right; // pointers to children
};

// returns the sum of the part of this subtree that lies between from and to
int Tree::get(int from, int to)
{
    if (from > r || to < l) // no intersection
    {
        return 0;
    }
    if (from <= l && r <= to) // whole subtree lies within the queried interval
    {
        return sum;
    }
    if (overriden != NO_OVERRIDE) // should push the override down to the children
    {
        left->overriden = right->overriden = overriden;
        left->sum = (left->r - left->l + 1) * overriden;
        right->sum = (right->r - right->l + 1) * overriden;
        overriden = NO_OVERRIDE;
    }
    return left->get(from, to) + right->get(from, to); // split into 2 queries
}
Given that in our particular case all queries to the tree are prefix-sum queries, from is always 0, so one of the calls to the children always returns a trivial answer (0, or an already computed sum). So, instead of doing O(log N) queries to the 2-dimensional tree in the binary search algorithm, we can implement an ad-hoc search procedure very similar to this get query. It first gets the value of the left child (which takes O(1), since it's already calculated), then checks whether the node we're looking for is to the left (that sum is less than the number of leaves in the left subtree), and goes left or right based on this information. This approach further optimizes the query to O(log^2 N) time (since it's a single tree operation now), giving a resulting complexity of O((N + Q) log^2 N) in both time and space.
Not sure this solution is fast enough for both Q and N up to 10^5, but it may probably be further optimized.
I am solving the problem http://www.spoj.com/problems/SHOP/ in C++, but I am unable to figure out how to input the graph in order to then apply Dijkstra's algorithm to it.
Here is the graph format:
4 3
X 1 S 3
4 2 X 4
X 1 D 2
The first line indicates the columns and rows of the grid. "S" and "D" indicate the source and destination respectively; numbers indicate the time required to pass that block; "X" indicates a no-entry zone.
How do I convert this map into the nodes and edges required by Dijkstra's algorithm?
There is no need to convert. Just imagine that you are in some point (i,j). (I assume that you have four moves allowed from each square). Then, you can go to either (i + 1, j), (i, j + 1), (i - 1, j), (i, j - 1) if:
1) That index is inside the table
2) That index is not marked with X
So, you give the position of square S to your Dijkstra algorithm, and each time you add the newly allowed squares to your data structure. Once you reach the position of D, you print it.
Besides, this problem does not seem weighted to me, so you can use a simple BFS as well, using a queue. But if you want to use Dijkstra, and going to different squares has different costs, then you use a priority queue instead of a queue.
For example, you can use a set data structure like this:
const int MAXN = 1005;  // placeholder bound on the grid size
int dist[MAXN][MAXN];   // this contains the cost to get to some square
//dist is initialized with a large number
struct node{
    int i, j; //location
    node(int ii, int jj){
        i = ii;
        j = jj;
    }
    bool operator < (const node &n)const{ //set in c++ will use this to sort
        if(dist[i][j] == dist[n.i][n.j])  //tie-break on location so that distinct
            return i != n.i ? i < n.i : j < n.j; //squares never compare as equal
        return dist[i][j] < dist[n.i][n.j];
    }
};
set <node> q;
int main(){
    //initialize dist with a large number
    dist[S.i][S.j] = 0; //we start from source
    q.insert(node(S.i, S.j)); //std::set has insert, not push
    while(true){
        //pick the first element in the set
        //this element has the smallest cost
        //update dist using this node if necessary
        //for every node that you update, erase it from q and insert it again
        //this way the location of that node will be updated in q
        //if you see square 'D' you are done and you can print dist[D.i][D.j]
    }
    return 0;
}
There is no need to convert the matrix into nodes and edges.
You can make a structure containing (row number, column number, time), where time is how long it takes to reach that coordinate from the source. Now build a min-heap of these structures, keyed on time. Extract an element (initially the source is in the min-heap with time 0), push its adjacent elements into the min-heap (only those that have not been visited and do not contain an X), and mark the extracted element as visited. Continue like this until the extracted element is the destination.
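To make the above concrete, here is a hedged Dijkstra sketch with a priority_queue instead of a set. The input handling assumes space-separated cell tokens terminated by a "0 0" line, and that entering a cell costs its number while S and D cost nothing; check these details against the actual problem statement:

#include <bits/stdc++.h>
using namespace std;

int main() {
    int w, h;
    while (scanf("%d %d", &w, &h) == 2 && (w || h)) {
        vector<vector<string>> g(h, vector<string>(w));
        int si = 0, sj = 0, ti = 0, tj = 0;
        for (int i = 0; i < h; i++)
            for (int j = 0; j < w; j++) {
                cin >> g[i][j];                 // "S", "D", "X" or a number
                if (g[i][j] == "S") { si = i; sj = j; }
                if (g[i][j] == "D") { ti = i; tj = j; }
            }
        vector<vector<long long>> dist(h, vector<long long>(w, LLONG_MAX));
        typedef tuple<long long, int, int> T;   // (cost, row, col)
        priority_queue<T, vector<T>, greater<T>> pq;
        dist[si][sj] = 0;
        pq.push({0, si, sj});
        int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
        while (!pq.empty()) {
            auto [d, i, j] = pq.top(); pq.pop();
            if (d > dist[i][j]) continue;       // stale queue entry
            for (int k = 0; k < 4; k++) {
                int ni = i + dr[k], nj = j + dc[k];
                if (ni < 0 || nj < 0 || ni >= h || nj >= w) continue;
                if (g[ni][nj] == "X") continue; // no-entry zone
                long long c = isdigit((unsigned char)g[ni][nj][0]) ? atoll(g[ni][nj].c_str()) : 0;
                if (d + c < dist[ni][nj]) {
                    dist[ni][nj] = d + c;
                    pq.push({dist[ni][nj], ni, nj});
                }
            }
        }
        printf("%lld\n", dist[ti][tj]);
    }
}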
Given an array of values of length n, is there a way to count the number of swaps that would be performed by insertion sort to sort that array in time better than O(n^2)?
For example:
arr[] = {2, 1, 3, 1, 2}; // Answer is 4.
Algorithm:
for i <- 2 to N
    j <- i
    while j > 1 and a[j] < a[j - 1]
        swap a[j] and a[j - 1] // I want to count these swaps
        j <- j - 1
If you want to count the number of swaps needed in insertion sort, then you want to find the following number: for each element, how many previous elements in the array are greater than it? The sum of these values is then the total number of swaps performed.
To find this number, you can use an order statistic tree, a balanced binary search tree that can efficiently tell you how many elements in the tree compare greater (or smaller) than some given element. Specifically, an order statistic tree supports O(log n) insertion, deletion, lookup, and counting of how many elements in the tree are less than some value. You can then count how many swaps will be performed as follows:
Initialize a new, empty order statistic tree.
Set count = 0
For each array element, in order:
Add the element to the order statistic tree.
Add to count the number of elements in the tree greater than the value just added.
Return count.
This does O(n) iterations of a loop that takes O(log n) time, so the total work done is O(n log n), which is faster than the brute-force approach.
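The order statistic tree is not part of the standard C++ library, so here is a sketch of the same counting loop using a Fenwick (binary indexed) tree over coordinate-compressed values instead; counting previously inserted elements that are strictly greater also handles duplicates correctly:

#include <bits/stdc++.h>
using namespace std;

long long countInsertionSortSwaps(const vector<int> &a) {
    vector<int> xs(a); // coordinate compression
    sort(xs.begin(), xs.end());
    xs.erase(unique(xs.begin(), xs.end()), xs.end());
    int m = xs.size();
    vector<int> bit(m + 1, 0);
    long long swaps = 0, seen = 0;
    for (int v : a) {
        int r = lower_bound(xs.begin(), xs.end(), v) - xs.begin() + 1; // 1-based rank
        long long notGreater = 0;   // inserted values <= v
        for (int i = r; i > 0; i -= i & -i) notGreater += bit[i];
        swaps += seen - notGreater; // previously inserted and strictly greater
        for (int i = r; i <= m; i += i & -i) bit[i]++;
        ++seen;
    }
    return swaps;
}

int main() {
    cout << countInsertionSortSwaps({2, 1, 3, 1, 2}) << "\n"; // prints 4, as in the example
}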
If you want to count the number of swaps in selection sort, then you can use the fact that selection sort will only perform a swap on the kth pass if, after processing the first k-1 elements of the list, the element in position k is not the kth smallest element. If we can check this efficiently, then we have the following basic sketch of an algorithm:
Set total = 0
For k = 1 to n:
    If the element at index k isn't the kth smallest element:
        Swap it with the kth smallest element.
        Increment total.
Return total
So how do we implement this efficiently? We need to be able to check efficiently whether the element at a given index is the correct element, and otherwise to find the position of the element that really does belong at that index. To do this, begin by creating a balanced binary search tree that maps each element to its position in the original array. This takes time O(n log n). Now that you have the balanced tree, we can augment the structure by assigning to each element in the tree the position in the sorted sequence at which this element belongs. One way to do this is with an order statistic tree, and another would be to iterate over the tree with an inorder traversal, annotating each value in the tree with its position.
Using this structure, we can check in O(log n) time whether or not an element is in the right position by looking the element up in the tree (time O(log n)), then looking at the position in the sorted sequence at which it should be and at which position it's currently located (remember that we set this up when creating the tree). If it disagrees with our expected position, then it's in the wrong place, and otherwise it's in the right place. Also, we can efficiently simulate a swap of two elements by looking up those two elements in the tree (O(log n) time total) and then swapping their positions in O(1).
As a result, we can implement the above algorithm in time O(n log n) - O(n log n) time to build the tree, then n iterations of doing O(log n) work to determine whether or not to swap.
Hope this helps!
The number of interchanges of consecutive elements necessary to arrange them in their natural order is equal to the number of inversions in the given permutation.
So the solution to this problem is to find the number of inversions in the given array of numbers.
This can be solved in O(n log n) using merge sort.
In the merge step, if you copy an element from the right array, increment a global counter (that counts inversions) by the number of items remaining in the left array. This is done because the element from the right array that just got copied is involved in an inversion with all the elements in present in the left array.
I'm not sure, but I suspect finding the minimum number is a difficult problem. Unless there's a shortcut, you'll just be searching for optimal sorting networks, which you should be able to find good resources on with your favorite search engine (or Wikipedia).
If you only care about the big-O complexity, the answer is O(n log n), and you can probably get more concrete bounds (some actual constants in there) if you look at the analysis of some efficient in-place sorting algorithms like heapsort or smoothsort.
package insertionSortAnalysis;

import java.util.Scanner;

public class Solution {

    private int[] originalArray;
    private long count = 0;

    public static void main(String[] args) {
        try {
            Scanner sc = new Scanner(System.in);
            int testCases = sc.nextInt();
            for (int i = 0; i < testCases; i++) {
                int sizeOfArray = sc.nextInt();
                Solution s = new Solution();
                s.originalArray = new int[sizeOfArray];
                for (int j = 0; j < sizeOfArray; j++)
                    s.originalArray[j] = sc.nextInt();
                s.divide(s.originalArray, 0, sizeOfArray - 1);
                System.out.println(s.count);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public int[] divide(int[] originalArray, int low, int high) {
        if (low < high) {
            int mid = (low + high) / 2;
            int[] result1 = divide(originalArray, low, mid);
            int[] result2 = divide(originalArray, mid + 1, high);
            return merge(result1, result2);
        }
        return new int[] { originalArray[low] };
    }

    private int[] merge(int[] array1, int[] array2) {
        int lowIndex1 = 0;
        int lowIndex2 = 0;
        int highIndex1 = array1.length - 1;
        int highIndex2 = array2.length - 1;
        int[] result = new int[array1.length + array2.length];
        int i = 0;
        while (lowIndex2 <= highIndex2 && lowIndex1 <= highIndex1) {
            int element = array1[lowIndex1];
            while (lowIndex2 <= highIndex2 && element > array2[lowIndex2]) {
                result[i++] = array2[lowIndex2++];
                // every element still left in array1 forms an inversion
                // with the element just copied from array2
                count += (highIndex1 - lowIndex1) + 1;
            }
            result[i++] = element;
            lowIndex1++;
        }
        while (lowIndex2 <= highIndex2)
            result[i++] = array2[lowIndex2++];
        while (lowIndex1 <= highIndex1)
            result[i++] = array1[lowIndex1++];
        return result;
    }
}
Each swap in insertion sort moves two adjacent elements - one up by one, one down by one - and 'corrects' a single crossing by doing so. That suggests the following shortcut:
Annotate each item, X, with its initial array index, Xi.
Sort the items using a stable sort (you can use quicksort if you treat the 'initial position' annotation as a minor key).
Return half the sum of the absolute differences between each element's annotated initial position and its final position (i.e. just loop through the annotations summing abs(Xi - i)).
A caveat: half the total displacement is only a lower bound on the number of swaps, not always the exact count, since one adjacent swap reduces the total displacement by at most 2 (for {4, 2, 3, 1} there are 5 inversions but the total displacement is only 6). Just like most of the other answers, this is O(n) space and O(n*log n) time. If an in-place merge could be modified to count the crossings, that'd be better. I'm not sure it can though.
#include <stdio.h>
#include <string.h>
#include <iostream>
#include <algorithm>
using namespace std;

int a[200001];
int te[200001];

// merge arr[left .. mid-1] and arr[mid .. right], counting inversions
unsigned long long merge(int arr[], int temp[], int left, int mid, int right)
{
    int i = left;
    int j = mid;
    int k = left;
    unsigned long long icount = 0;
    while ((i <= mid - 1) && (j <= right))
    {
        if (arr[i] <= arr[j])
            temp[k++] = arr[i++];
        else
        {
            temp[k++] = arr[j++];
            icount += (mid - i); // everything left in the first half is inverted with arr[j]
        }
    }
    while (i <= mid - 1)
        temp[k++] = arr[i++];
    while (j <= right)
        temp[k++] = arr[j++];
    for (int i = left; i <= right; i++)
        arr[i] = temp[i];
    return icount;
}

unsigned long long mergesort(int arr[], int temp[], int left, int right)
{
    unsigned long long i = 0;
    if (right > left) {
        int mid = (left + right) / 2;
        i = mergesort(arr, temp, left, mid);
        i += mergesort(arr, temp, mid + 1, right);
        i += merge(arr, temp, left, mid + 1, right);
    }
    return i;
}

int main()
{
    int t, n;
    scanf("%d", &t);
    while (t--) {
        scanf("%d", &n);
        for (int i = 0; i < n; i++)
            scanf("%d", &a[i]);
        printf("%llu\n", mergesort(a, te, 0, n - 1));
    }
    return 0;
}
I've been implementing selection sort problems for class and one of the assignments is to find the kth smallest element in the array using a minimum heap. I know the procedure is:
heapify the array
delete the minimum (root) k times
return kth smallest element in the group
I don't have any problems creating a minimum heap. I'm just not sure how to go about properly deleting the minimum k times and successfully return the kth smallest element in the group. Here's what I have so far:
bool Example::min_heap_select(long k, long & kth_smallest) const {
    //duplicate test group (thanks, const!)
    Example test = Example(*this);
    //variable declaration and initialization
    int n = test._total;
    int i;
    //Heapifying stage
    for (i = n/2; i >= 0; i--) {
        //allows for heap construction
        test.percolate_down_protected(i, n);
    }//for
    //Delete-min phase: extract the minimum k-1 times
    for (i = n-1; i >= (n-k+1); i--) {
        //deletes the min by swapping it with the last element of the heap
        int tmp = test._group[0];
        test._group[0] = test._group[i];
        test._group[i] = tmp;
        //restore the heap property on the shrunken heap
        test.percolate_down_protected(0, i);
    }//for
    //after k-1 deletions the kth smallest sits at the root
    kth_smallest = test._group[0];
    return true;
}
void Example::percolate_down_protected(long i, long n) {
    //variable declaration and initialization:
    int currPos, child, r_child, tmp;
    currPos = i;
    tmp = _group[i];
    child = left_child(i);
    //set a sentinel and begin loop (no recursion allowed)
    while (child < n) {
        //calculates the right child's position
        r_child = child + 1;
        //pick the smaller of the two children (this is a min heap)
        if ((r_child < n) && (_group[r_child] < _group[child])) {
            child = r_child;
        }
        //find the correct spot
        if (tmp <= _group[child]) {
            break;
        }
        //move the smaller child up above the parent
        _group[currPos] = _group[child];
        //shift down the tree
        currPos = child;
        child = left_child(currPos);
    }
    //put tmp where it belongs
    _group[currPos] = tmp;
}
As I stated before, the minimum heap part works correctly. I understand what I want to do - it seems easy to delete the root k times, but then what index in the array do I return... 0? This almost works - it doesn't work with k = n or k = 1. Would the kth smallest element be at index 0 then? Any help would be much appreciated!
The only array index which is meaningful to the user is zero, which is the minimum element. So, after removing k elements, the k'th smallest element will be at zero.
Probably you should destroy the heap and return the value rather than asking the user to concern themselves with the heap itself… but I don't know the details of the assignment.
Note that the C++ Standard Library has algorithms to help with this: make_heap, pop_heap, and nth_element.
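For instance, nth_element alone answers the selection question in one call; a minimal sketch (k is 1-based here):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<long> v = {9, 4, 7, 1, 5, 3};
    long k = 3;
    std::nth_element(v.begin(), v.begin() + (k - 1), v.end()); // partial selection
    std::cout << "3rd smallest: " << v[k - 1] << "\n";         // prints 4
}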
I am not providing a detailed answer, just explaining the key points of getting the k smallest elements in a min-heap-ordered tree. The approach uses skip lists.
First form a skip list of tree nodes containing just one element: the node corresponding to the root of the heap. The 1st minimum element is just the value stored at that node.
Now delete this node and insert its child nodes at the right positions so as to maintain the order of values. This step takes O(log k) time.
The second minimum value is then just the value at the first node of this skip list.
Repeat the above steps until you get all the k minimum elements. The overall time complexity will be log(2) + log(3) + ... + log(k) = O(k log k). Forming the heap takes O(n) time, so the overall time complexity is O(n + k log k).
There is one more approach, without making a heap: quickselect, which has an average time complexity of O(n) but worst case of O(n^2).
The striking difference between the two approaches is that the first approach gives all k elements from the minimum up to the kth minimum, while quickselect gives only the kth minimum element.
Memory-wise, the former approach uses O(n) extra space while quickselect uses O(1).
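The same candidate idea is often implemented with a second min-heap instead of a skip list: pop the smallest candidate, then push its two children, k times. A sketch assuming the heap is stored in array form (children of index i at 2i+1 and 2i+2):

#include <bits/stdc++.h>
using namespace std;

int kthSmallest(const vector<int> &heap, int k) {
    typedef pair<int, int> P;                      // (value, index into heap)
    priority_queue<P, vector<P>, greater<P>> cand; // min-heap of candidates
    cand.push({heap[0], 0});
    int val = heap[0];
    while (k--) {
        auto [v, i] = cand.top(); cand.pop();
        val = v; // the next smallest overall
        if (2 * i + 1 < (int)heap.size()) cand.push({heap[2 * i + 1], 2 * i + 1});
        if (2 * i + 2 < (int)heap.size()) cand.push({heap[2 * i + 2], 2 * i + 2});
    }
    return val; // value popped on the kth round
}

int main() {
    vector<int> h = {1, 3, 2, 7, 4, 5, 8}; // already a valid min-heap
    cout << kthSmallest(h, 4) << "\n";     // prints 4
}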
I have to permute N first elements of a singly linked list of length n, randomly. Each element is defined as:
typedef struct E_s
{
struct E_s *next;
}E_t;
I have a root element and I can traverse the whole linked list of size n. What is the most efficient technique to permute only N first elements (starting from root) randomly?
So, given a->b->c->d->e->f->...x->y->z I need to make something like f->a->e->c->b->...x->y->z
My specific case:
n-N is about 20% relative to n
I have limited RAM resources, so the algorithm should ideally work in place
I have to do it in a loop, in many iterations, so the speed does matter
The ideal randomness (uniform distribution) is not required, it's Ok if it's "almost" random
Before making permutations, I traverse the N elements already (for other needs), so maybe I could use this for permutations as well
UPDATE: I found this paper. It states it presents an algorithm of O(log n) stack space and expected O(n log n) time.
I've not tried it, but you could use a "randomized merge-sort".
To be more precise, you randomize the merge routine. You do not merge the two sub-lists systematically; instead you merge them based on coin tosses (i.e. with probability 0.5 you select the first element of the left sublist, with probability 0.5 you select the first element of the right sublist).
This should run in O(n log n) and use O(1) space (if properly implemented).
Below you find a sample implementation in C you might adapt to your needs. Note that this implementation uses randomisation at two places: In splitList and in merge. However, you might choose just one of these two places. I'm not sure if the distribution is random (I'm almost sure it is not), but some test cases yielded decent results.
#include <stdio.h>
#include <stdlib.h>
#include <time.h> /* for srand(time(NULL)) */

#define N 40

typedef struct _node {
    int value;
    struct _node *next;
} node;

/* split x randomly into two lists (prepending reverses order, which is
   harmless here, since we are shuffling anyway) */
void splitList(node *x, node **leftList, node **rightList) {
    int lr; /* left-right-list indicator */
    *leftList = 0;
    *rightList = 0;
    while (x) {
        node *xx = x->next;
        lr = rand() % 2;
        if (lr == 0) {
            x->next = *leftList;
            *leftList = x;
        }
        else {
            x->next = *rightList;
            *rightList = x;
        }
        x = xx;
    }
}

void merge(node *left, node *right, node **result) {
    *result = 0;
    while (left || right) {
        if (!left) {
            node *xx = right;
            while (right->next) {
                right = right->next;
            }
            right->next = *result;
            *result = xx;
            return;
        }
        if (!right) {
            node *xx = left;
            while (left->next) {
                left = left->next;
            }
            left->next = *result;
            *result = xx;
            return;
        }
        if (rand() % 2 == 0) {
            node *xx = right->next;
            right->next = *result;
            *result = right;
            right = xx;
        }
        else {
            node *xx = left->next;
            left->next = *result;
            *result = left;
            left = xx;
        }
    }
}

void mergeRandomize(node **x) {
    if ((!*x) || !(*x)->next) {
        return;
    }
    node *left;
    node *right;
    splitList(*x, &left, &right);
    mergeRandomize(&left);
    mergeRandomize(&right);
    merge(left, right, &*x);
}

int main(int argc, char *argv[]) {
    srand(time(NULL));
    printf("Original Linked List\n");
    int i;
    node *x = (node*)malloc(sizeof(node));
    node *root = x;
    x->value = 0;
    x->next = 0;
    for (i = 1; i < N; ++i) {
        node *xx = (node*)malloc(sizeof(node));
        xx->value = i;
        xx->next = 0;
        x->next = xx;
        x = xx;
    }
    x = root;
    do {
        printf("%d, ", x->value);
        x = x->next;
    } while (x);

    x = root;
    mergeRandomize(&x);
    if (!x) {
        printf("Error.\n");
        return -1;
    }
    printf("\nNow randomized:\n");
    do {
        printf("%d, ", x->value);
        x = x->next;
    } while (x);
    printf("\n");
    return 0;
}
Convert to an array, use a Fisher-Yates shuffle, and convert back to a list.
I don't believe there's any efficient way to randomly shuffle singly-linked lists without an intermediate data structure. I'd just read the first N elements into an array, perform a Fisher-Yates shuffle, then reconstruct those first N elements into the singly-linked list.
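A sketch of that round trip, reusing the E_t struct from the question; shuffling an array of node pointers and then relinking means no payload is ever copied (rand() % i is slightly biased, which the question says is acceptable):

#include <cstdlib>
#include <utility>
#include <vector>

typedef struct E_s {
    struct E_s *next;
} E_t;

E_t *shuffle_prefix(E_t *root, int n) {
    std::vector<E_t *> v; // pointers to the first n nodes
    E_t *p = root;
    for (int i = 0; i < n && p; ++i) { v.push_back(p); p = p->next; }
    // p is now the (n+1)-th node -- the untouched tail -- or NULL
    for (size_t i = v.size(); i > 1; --i) { // Fisher-Yates shuffle
        size_t j = rand() % i;
        std::swap(v[i - 1], v[j]);
    }
    for (size_t i = 0; i + 1 < v.size(); ++i) v[i]->next = v[i + 1];
    if (!v.empty()) v.back()->next = p; // reattach the tail
    return v.empty() ? root : v[0];     // new head of the list
}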
First, get the length of the list and the last element. You say you already do a traversal before randomization, that would be a good time.
Then, turn it into a circular list by linking the first element to the last element. Get four pointers into the list by dividing the size by four and iterating through it for a second pass. (These pointers could also be obtained from the previous pass by incrementing once, twice, and three times per four iterations in the previous traversal.)
For the randomization pass, traverse again and swap pointers 0 and 2 and pointers 1 and 3 with 50% probability. (Do either both swap operations or neither; just one swap will split the list in two.)
Here is some example code. It looks like it could be a little more random, but I suppose a few more passes could do the trick. Anyway, analyzing the algorithm is more difficult than writing it :vP . Apologies for the lack of indentation; I just punched it into ideone in the browser.
http://ideone.com/9I7mx
#include <iostream>
#include <cstdlib>
#include <ctime>
#include <algorithm> // for copy and swap
using namespace std;

struct list_node {
    int v;
    list_node *n;
    list_node( int inv, list_node *inn )
        : v( inv ), n( inn ) {}
};

int main() {
    srand( time(0) );
    // initialize the list and 4 pointers at even intervals
    list_node *n_first = new list_node( 0, 0 ), *n = n_first;
    list_node *p[4];
    p[0] = n_first;
    for ( int i = 1; i < 20; ++ i ) {
        n = new list_node( i, n );
        if ( i % (20/4) == 0 ) p[ i / (20/4) ] = n;
    }
    // intervals must be coprime to list length!
    p[2] = p[2]->n;
    p[3] = p[3]->n;
    // turn it into a circular list
    n_first->n = n;
    // swap the pointers around to reshape the circular list
    // one swap cuts a circular list in two, or joins two circular lists
    // so perform one cut and one join, effectively reordering elements.
    for ( int i = 0; i < 20; ++ i ) {
        list_node *p_old[4];
        copy( p, p + 4, p_old );
        p[0] = p[0]->n;
        p[1] = p[1]->n;
        p[2] = p[2]->n;
        p[3] = p[3]->n;
        if ( rand() % 2 ) {
            swap( p_old[0]->n, p_old[2]->n );
            swap( p_old[1]->n, p_old[3]->n );
        }
    }
    // you might want to turn it back into a NULL-terminated list
    // print results
    for ( int i = 0; i < 20; ++ i ) {
        cout << n->v << ", ";
        n = n->n;
    }
    cout << '\n';
}
For the case when N is really big (so it doesn't fit your memory), you can do the following (a sort of Knuth's 3.4.2P):
1. j = N
2. k = random between 1 and j
3. traverse the input list, find the k-th item and output it; remove that item from the sequence (or mark it somehow so that you won't consider it in the next traversal)
4. decrease j and return to step 2 unless j == 0
5. output the rest of the list
Beware that this is O(N^2), unless you can ensure random access in step 3.
In case the N is relatively small, so that N items fit into the memory, just load them into array and shuffle, like #Mitch proposes.
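A direct transcription of the steps above into C-style code (quadratic, because step 3 re-traverses the remaining prefix each time; all names are mine):

#include <cstdlib>

typedef struct E_s {
    struct E_s *next;
} E_t;

E_t *permute_prefix(E_t *head, int n) {
    E_t *res_head = 0, *res_tail = 0;        // the output list, built in random order
    for (int j = n; j > 0; --j) {
        int k = rand() % j;                  // pick the k-th (0-based) of the j remaining
        E_t **pp = &head;                    // link pointing at the k-th remaining item
        while (k--) pp = &(*pp)->next;
        E_t *item = *pp;
        *pp = item->next;                    // unlink the chosen item
        item->next = 0;
        if (res_tail) res_tail->next = item; else res_head = item;
        res_tail = item;
    }
    if (res_tail) res_tail->next = head;     // head is now the untouched tail
    return res_head ? res_head : head;
}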
If you know both N and n, I think you can do it simply. It's fully random, too. You only iterate through the whole list once, and through the randomized part each time you add a node. I think that's O(n+NlogN) or O(n+N^2). I'm not sure. It's based upon updating the conditional probability that a node is selected for the random portion given what happened to previous nodes.
1. Determine the probability that a certain node will be selected for the random portion given what happened to previous nodes (p = (N - size) / (n - position), where size is the number of nodes previously chosen and position is the number of nodes previously considered).
2. If the node is not selected for the random part, move to step 4. If it is selected, randomly choose its place in the random part based on the size so far (place = (random between 0 and 1) * size, where size is again the number of previous nodes).
3. Place the node where it needs to go and update the pointers. Increment size. Move to the node that previously pointed at the one you were just looking at and moved.
4. Increment position, look at the next node.
I don't know C, but I can give you the pseudocode. In this, I refer to the permutation as the first elements that are randomized.
integer size = 0     //size of the permutation
integer position = 0 //number of nodes you've traversed so far
Node head = head of linked list //the node at the head of your linked list
Node current_node = head //starting at head, this moves down the list to check each node
Node previous = head //stores the previous node for changing pointers;
                     //starts at head to avoid asking for the next field of a null node

While ((size not equal to N) and (current_node is not null)) { //iterate until the permutation is full;
        //we should never pass the end of the list, but just in case, I include that condition
    pperm = (N - size) / (n - position) //probability that the current node goes into the permutation
    if ([generate a random decimal between 0 and 1] < pperm) { //does the current node go in?
        if (position is not equal to 0) { //at the start of the list, there's no need to change the list
            pfirst = 1 / (size + 1) //probability that a selected node becomes the first of the permutation.
                //Since the permutation has zero elements at the start, adding an element makes it
                //the initial node of the permutation with probability 1.
            integer place_in_permutation = round down([generate a random decimal between 0 and 1] / pfirst)
                //place in the permutation; note that the head = 0
            previous.next = current_node.next
            if (place_in_permutation == 0) { //if placing the current node first, must change the head
                current_node.next = head //set the current node to point to the previous head
                head = current_node //set the variable head to point to the current node
            }
            else {
                Node temp = head
                repeat place_in_permutation - 1 times {
                    temp = temp.next
                } //temp now points to the node right before the insertion spot
                current_node.next = temp.next
                temp.next = current_node
            }
            current_node = previous
        }
        size++ //since we add one node to the permutation, increase the size of the permutation
    }
    position++
    previous = current_node
    current_node = current_node.next
}
You could probably increase the efficiency if you held on to the most recently added node in case you had to add one to the right of it.
Similar to Vlad's answer, here is a slight improvement (statistically):
Indices in the algorithm are 1-based.
1. Initialize lastR = -1.
2. If N <= 1, go to step 6.
3. Randomize a number r between 1 and N.
4. If r != N:
4.1 Traverse the list to item r and its predecessor.
If lastR != -1:
If r == lastR, your pointer to the predecessor of the r'th item is still there.
If r < lastR, traverse to it from the beginning of the list.
If r > lastR, traverse to it from the predecessor of the lastR'th item.
4.2 Remove the r'th item from the list and append it to a result list as its tail.
4.3 Set lastR = r.
5. Decrease N by one and go to step 2.
6. Link the tail of the result list to the head of the remaining input list. You now have the original list with the first N items permuted.
Since you do not have random access, this reduces the traversal time you will need within the list (I assume it roughly halves it, so asymptotically you won't gain anything).
O(NlogN) easy to implement solution that does not require extra storage:
Say you want to randomize L:
if L has 1 or 0 elements you are done
create two empty lists L1 and L2
loop over L destructively moving its elements to L1 or L2 choosing between the two at random.
repeat the process for L1 and L2 (recurse!)
join L1 and L2 into L3
return L3
Update
At step 3, L should be divided into equal-sized (+-1) lists L1 and L2 in order to guarantee the best-case complexity (N*log N). That can be done by adjusting the probability of an element going into L1 or L2 dynamically:
p(insert element into L1) = (1/2 * len0(L) - len(L1)) / len(L)
where
len(M) is the current number of elements in list M
len0(L) is the number of elements there was in L at the beginning of step 3
There is an algorithm that takes O(sqrt(N)) space and O(N) time for a singly linked list.
It does not generate a uniform distribution over all permutation sequences, but it gives a good permutation that is not easily distinguishable from one. The basic idea is to permute a matrix by rows and columns, as described below.
Algorithm
Let the number of elements be N, and m = floor(sqrt(N)). Assuming a "square matrix" N = m*m will make this method much clearer.
In the first pass, store pointers to every m-th element as p_0, p_1, p_2, ..., p_m. That is, p_0->next->...->next (m times) == p_1 should hold.
Permute each row.
For i = 0 to m do:
Index all elements from p_i->next to p_(i+1)->next in the linked list with an array of size O(m)
Shuffle this array using a standard method
Relink the elements using this shuffled array
Permute each column.
Initialize an array A storing the pointers p_0, ..., p_m. It is used to traverse the columns
For i = 0 to m do:
Index all elements pointed to by A[0], A[1], ..., A[m-1] in the linked list with an array of size m
Shuffle this array
Relink the elements using this shuffled array
Advance every pointer to the next column: A[j] := A[j]->next for each j
Note that p_0 points to the first element and p_m points to the last element. Also, if N != m*m, you may use m+1 separations for some p_i instead. Now you have a "matrix" such that each p_i points to the start of a row.
Analysis and randomness
Space complexity: This algorithm needs O(m) space to store the row starts, O(m) space for the index array, and O(m) space for the extra pointers during column permutation. Hence, the space complexity is ~O(3*sqrt(N)). For N = 1000000, that is around 3000 entries and 12 kB of memory.
Time complexity: It is obviously O(N). It simply walks through the "matrix" row by row and then column by column.
Randomness: The first thing to note is that each element can go anywhere in the matrix via the row and column permutations; it is very important that elements can go anywhere in the linked list. Second, though it does not generate all permutation sequences, it does generate a large portion of them. To count them, assume N = m*m; each row permutation has m! outcomes and there are m rows, so we have (m!)^m. If the column permutations are also included, it is exactly (m!)^(2*m), so it is almost impossible to get the same sequence twice.
It is highly recommended to repeat the second and third steps at least one more time to get a more random sequence, because that suppresses almost all of the remaining correlation between an element's position and its original location. It is also important when your list is not "square". Depending on your needs, you may want to use even more repetitions; the more repetitions you use, the more permutations become possible and the more random the result is. I remember that it is possible to generate a uniform distribution for N = 9, and I guess that it is possible to prove that, as the number of repetitions tends to infinity, the result approaches the true uniform distribution.
Edit: The time and space bounds are tight and are almost the same in any situation. I think this space consumption can satisfy your need. If you have any doubt, you may try it on a small list, and I think you will find it useful.
The list randomizer below has complexity O(N*log N) and O(1) memory usage.
It is based on the recursive algorithm described in my other post, modified to be iterative instead of recursive in order to eliminate the O(log N) stack usage.
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

typedef struct node {
    struct node *next;
    char *str;
} node;

unsigned int
next_power_of_two(unsigned int v) {
    v--;
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    return v + 1;
}

void
dump_list(node *l) {
    printf("list:");
    for (; l; l = l->next) printf(" %s", l->str);
    printf("\n");
}

node *
array_to_list(unsigned int len, char *str[]) {
    unsigned int i;
    node *list;
    node **last = &list;
    for (i = 0; i < len; i++) {
        node *n = malloc(sizeof(node));
        n->str = str[i];
        *last = n;
        last = &n->next;
    }
    *last = NULL;
    return list;
}

node **
reorder_list(node **last, unsigned int po2, unsigned int len) {
    node *l = *last;
    node **last_a = last;
    node *b = NULL;
    node **last_b = &b;
    unsigned int len_a = 0;
    unsigned int i;
    for (i = len; i; i--) {
        double pa = (1.0 + RAND_MAX) * (po2 - len_a) / i;
        unsigned int r = rand();
        if (r < pa) {
            *last_a = l;
            last_a = &l->next;
            len_a++;
        }
        else {
            *last_b = l;
            last_b = &l->next;
        }
        l = l->next;
    }
    *last_b = l;
    *last_a = b;
    return last_b;
}

unsigned int
min(unsigned int a, unsigned int b) {
    return (a > b ? b : a);
}

void /* was missing a return type */
randomize_list(node **l, unsigned int len) {
    unsigned int po2 = next_power_of_two(len);
    for (; po2 > 1; po2 >>= 1) {
        unsigned int j;
        node **last = l;
        for (j = 0; j < len; j += po2)
            last = reorder_list(last, po2 >> 1, min(po2, len - j));
    }
}

int
main(int len, char *str[]) {
    if (len > 1) {
        node *l;
        len--; str++; /* skip program name */
        l = array_to_list(len, str);
        randomize_list(&l, len);
        dump_list(l);
    }
    return 0;
}

/* try as: a.out list of words foo bar doz li 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14
*/
Note that this version of the algorithm is completely cache unfriendly, the recursive version would probably perform much better!
If both the following conditions are true:
you have plenty of program memory (many embedded hardwares execute directly from flash);
your solution does not suffer that your "randomness" repeats often,
Then you can choose a sufficiently large set of specific permutations, defined at programming time, write code that generates the code implementing each one, and then cycle through them at runtime.
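A toy version of that idea (the N == 4 table and all names here are mine; a real build step would generate a much larger table of permutations):

#include <cstddef>

typedef struct E_s {
    struct E_s *next;
} E_t;

static const unsigned char PERMS[][4] = { // permutations fixed at build time
    {2, 0, 3, 1},
    {1, 3, 0, 2},
};
static const size_t NPERMS = sizeof(PERMS) / sizeof(PERMS[0]);

E_t *apply_canned_perm(E_t *head, unsigned which) {
    const unsigned char *p = PERMS[which % NPERMS];
    E_t *node[4];
    E_t *cur = head;
    for (int i = 0; i < 4; ++i) { node[i] = cur; cur = cur->next; }
    for (int i = 0; i < 3; ++i) node[p[i]]->next = node[p[i + 1]];
    node[p[3]]->next = cur; // reattach the untouched tail
    return node[p[0]];      // new head
}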