Randomly swap two elements in an array in C++

Is there any way to randomly swap two elements (two different indices) in an array in C++? My idea is to randomly pick the first index, then keep randomly picking the second one until it differs from the first, and then swap the two elements. I am wondering: is there a better way to do this?
I think this is different from random_shuffle, because each time I only want to swap two elements of the array and keep the others in their original order.

Yes: pick two numbers, First from [0...N-1] and Second from [0...N-2]. If First <= Second, then ++Second, so Second ends up in [0...First-1] or [First+1...N-1]. No retries are needed.
Example: say you have N=10, so First runs from 0-9 inclusive. Imagine you pick First=5. You now have 9 values left from which to pick Second, namely 0-4 and 6-9. So you pick a number 0-8 instead, and map the subrange of possible results 5-8 to 6-9 by adding one.
The <= is important. If you only added 1 when First == Second, the chance of swapping 5 and 6 would be doubled, and the chance of swapping 5 and 9 would be 0%.
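A minimal sketch of this no-retry approach (the <random> setup here is just one reasonable choice, not part of the original answer):

#include <random>
#include <utility>
#include <vector>

// Swap two distinct, uniformly chosen elements of v (v must hold at least 2 elements).
void randomSwap(std::vector<char>& v, std::mt19937& rng)
{
    const std::size_t n = v.size();
    std::size_t first  = std::uniform_int_distribution<std::size_t>(0, n - 1)(rng);
    std::size_t second = std::uniform_int_distribution<std::size_t>(0, n - 2)(rng);
    if (second >= first)   // maps [first, n-2] onto [first+1, n-1]
        ++second;          // now second != first, still uniform over the remaining indices
    std::swap(v[first], v[second]);
}

// usage: std::mt19937 rng{std::random_device{}()}; randomSwap(letters, rng);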

Here is one simple solution (note that it uses a retry loop rather than @MSalters' remapping trick):
int first = randInt(myArray.size() - 1);  // randInt(k) is assumed to return a uniform integer in [0, k]
int second = randInt(myArray.size() - 1);
while (first == second) {                 // retry until the two indices differ
    second = randInt(myArray.size() - 1);
}
char firstLetter = myArray[first];
char secondLetter = myArray[second];
myArray[first] = secondLetter;
myArray[second] = firstLetter;


Go through the array from left to right and collect as many numbers as possible

CSES problem (https://cses.fi/problemset/task/2216/).
You are given an array that contains each number between 1…n exactly once. Your task is to collect the numbers from 1 to n in increasing order.
On each round, you go through the array from left to right and collect as many numbers as possible. What will be the total number of rounds?
Constraints: 1≤n≤2⋅10^5
This is my code in C++:
int n, res = 0;
cin >> n;
int arr[n];
set<int, greater<int>> lastEl;
for (int i = 0; i < n; i++) {
    cin >> arr[i];
    auto it = lastEl.lower_bound(arr[i]);
    if (it == lastEl.end()) res++;
    else lastEl.erase(*it);
    lastEl.insert(arr[i]);
}
cout << res;
I go through the array once. If the element arr[i] is smaller than all of the stored last elements, then I "open" a new sequence and save the element as the last element of that sequence. I store the last elements of the already opened sequences in a set. Otherwise (i.e., if arr[i] is larger than at least one of them), I take the existing sequence whose last element is the largest one still less than arr[i], and replace that sequence's last element with arr[i].
Alas, it passes only two of the three given tests, and for the third one the output is much smaller than it should be. What am I doing wrong?
Let me explain my thought process in detail so that it will be easier for you next time you face the same type of problem.
First of all, a mistake I often made when faced with this kind of problem is the urge to simulate the process. What do I mean by "simulating the process" mentioned in the problem statement? Take, for example, the array 4 2 1 5 3. The problem says that on each round you move left to right and collect as many numbers as possible in increasing order. So you start with 1, find it, and see that 2 does not come after it, i.e., 2 cannot be in the same round as 1 and still form an increasing sequence, so we need another round for 2. In that round we find that 2 and 3 can both be collected, as we're moving from left to right and taking numbers in increasing order. But we cannot take 4, because it appears before 2. Finally, 4 and 5 need yet another round. That makes a total of three rounds.
Now, the problem becomes very easy to solve if you simulate the process in this way. In the first round, you look for numbers that form an increasing sequence starting with 1. You remove these numbers before starting the second round. You continue this way until you've exhausted all the numbers.
But simulating this process will result in a time complexity that won't pass the constraints mentioned in the problem statement. So, we need to figure out another way that gives the same output without simulating the whole process.
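For reference, a minimal sketch of that too-slow simulation (the helper name count_rounds_by_simulation is only illustrative); each round is one full left-to-right sweep, so it is O(n) per round and O(n^2) overall in the worst case:

#include <vector>

// Naive simulation: sweep the array once per round, collecting the next
// needed number whenever we pass over it. Too slow for n up to 2*10^5.
int count_rounds_by_simulation(const std::vector<int>& a) {
    int n = a.size();
    int need = 1;            // next number we still have to collect
    int rounds = 0;
    while (need <= n) {
        ++rounds;            // start a new left-to-right pass
        for (int x : a)
            if (x == need)
                ++need;      // collected; keep looking for the next one in this pass
    }
    return rounds;
}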
Notice that the position of numbers is crucial here. Why do we need another round for 2? Because it comes before 1. We don't need another round for 3 because it comes after 2. Similarly, we need another round for 4 because it comes before 2.
So, when considering each number, we only need to look at the position of the number that precedes it in value. When considering 2, we look at the position of 1: does 1 come before or after 2 in the array? If it comes before, we don't need another round; if it comes after, we'll need an extra round, because 2 was already passed over by the time 1 was collected. For each number, we check this condition and increment the round count if necessary. This way, we can figure out the total number of rounds without simulating the whole process.
#include <iostream>
#include <vector>
using namespace std;
int main(int argc, char const *argv[])
{
    int n;
    cin >> n;
    vector<int> v(n + 1), pos(n + 1);
    for (int i = 1; i <= n; ++i) {
        cin >> v[i];
        pos[v[i]] = i;
    }
    int total_rounds = 1; // we'll always need at least one round because the input sequence will never be empty
    for (int i = 2; i <= n; ++i) {
        if (pos[i] < pos[i - 1]) total_rounds++;
    }
    cout << total_rounds << '\n';
    return 0;
}
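As a quick check with the array 4 2 1 5 3 used above: pos[1..5] = {3, 2, 5, 1, 4}, the counter is incremented at i = 2 (pos[2] = 2 < pos[1] = 3) and at i = 4 (pos[4] = 1 < pos[3] = 5), so the program prints 3, as expected.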
Next time you're faced with this type of problem, pause for a while and resist the urge to simulate the process in code. Almost certainly there will be some clever observation that lets you reach an optimal solution.

Is this code a bubble sorting program?

I made a simple bubble sorting program; the code works, but I do not know if it's correct.
What I understand about the bubble sorting algorithm is that it compares an element with the element next to it.
#include <iostream>
#include <array>
#include <cstdlib> // for system()
using namespace std;
int main()
{
    int a, b, c, d, e, smaller = 0, bigger = 0;
    cin >> a >> b >> c >> d >> e;
    int test1[5] = { a, b, c, d, e };
    for (int test2 = 0; test2 != 5; ++test2)
    {
        for (int cntr1 = 0, cntr2 = 1; cntr2 != 5; ++cntr1, ++cntr2)
        {
            if (test1[cntr1] > test1[cntr2]) /* if first is bigger than second */ {
                bigger = test1[cntr1];
                smaller = test1[cntr2];
                test1[cntr1] = smaller;
                test1[cntr2] = bigger;
            }
        }
    }
    for (auto test69 : test1)
    {
        cout << test69 << endl;
    }
    system("pause");
}
It is a bubblesort implementation. It just is a very basic one.
Two improvements:
the outer loop can run one iteration shorter each time, since you're guaranteed that the previous pass has moved the largest remaining element to the end.
when no swap is done during an iteration, you're finished (which is part of the description of bubble sort on Wikipedia).
Some comments:
use better variable names (test2?)
use the size of the container or the range, don't hardcode 5.
using std::swap() to swap variables leads to simpler code.
A more generic version using (random access) iterators, with my suggested improvements and comments, is sketched below; the further improvement proposed by Yves Daoust (iterate only up to the last swap) is sketched after his answer.
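A minimal sketch of such a generic bubble sort (a reconstruction in the spirit of those suggestions, not the originally linked code): it takes a random-access range, shrinks the upper bound after every pass, uses std::swap, and stops early when a pass performs no swap.

#include <utility> // std::swap

// Generic bubble sort over a random-access range [first, last).
template <typename RandomIt>
void bubble_sort(RandomIt first, RandomIt last)
{
    bool swapped = true;
    while (swapped && last - first > 1) {
        swapped = false;
        for (RandomIt it = first; it + 1 != last; ++it) {
            if (*(it + 1) < *it) {        // neighbours out of order
                std::swap(*it, *(it + 1));
                swapped = true;
            }
        }
        --last;   // the largest element of this pass has bubbled to the end
    }
}

It can be called as bubble_sort(test1, test1 + 5) for the array in the question, or as bubble_sort(v.begin(), v.end()) for a std::vector.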
The correctness of your algorithm can be explained as follows.
In the first pass (inner loop), the comparison T[i] > T[i+1] with a possible swap makes sure that the largest of T[i], T[i+1] is on the right. Repeating for all pairs from left to right makes sure that in the end T[N-1] holds the largest element. (The fact that the array is only modified by swaps ensures that no element is lost or duplicated.)
In the second pass, by the same reasoning, the largest of the N-1 first elements goes to T[N-2], and it stays there because T[N-1] is larger.
More generally, in the Kth pass, the largest of the first N-K+1 elements goes to T[N-K], stays there, and the elements after it are left unchanged (because they are already in increasing order).
Thus, after N passes, all elements are in place.
This hints at a simple optimization: all elements following the last swap in a pass are already in place (otherwise that swap wouldn't be the last). So you can record the position of the last swap and run the next pass only up to that location.
Though this change doesn't look like much, it can reduce the number of passes. Indeed, with this procedure the number of passes equals the largest leftward displacement, i.e. the number of steps an element has to take to reach its proper place (elements that need to move to the left move only one position per pass).
In some configurations, this number can be small. For instance, sorting an already sorted array takes a single pass, and sorting an array with all elements swapped in pairs takes two. This is an improvement from O(N²) to O(N)!
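A sketch of that last-swap variant (again a reconstruction, not the originally linked code): the bound for the next pass is simply the index of the last swap performed in the current pass.

#include <utility> // std::swap

// Bubble sort that only scans up to the position of the last swap of the
// previous pass; everything beyond that position is already in place.
template <typename RandomIt>
void bubble_sort_last_swap(RandomIt first, RandomIt last)
{
    auto n = last - first;
    while (n > 1) {
        decltype(n) last_swap = 0;               // index of the last swapped pair
        for (decltype(n) i = 1; i < n; ++i) {
            if (first[i] < first[i - 1]) {
                std::swap(first[i - 1], first[i]);
                last_swap = i;                   // remember where the last swap happened
            }
        }
        n = last_swap;   // everything at index >= last_swap is in place; 0 means sorted
    }
}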
Yes. Your code works just like Bubble Sort.
Input: 3 5 1 8 2
Output after each of the 5 iterations of the outer loop:
3 1 5 2 8
1 3 2 5 8
1 2 3 5 8
1 2 3 5 8
1 2 3 5 8
Actually, from the second iteration onwards the inner loop doesn't need to go all the way to the end of the array, because the largest element of the previous iteration is already at the end. But that doesn't improve the time complexity much; it is still O(N²). So you are good to go.
Small Informal Proof:
The idea behind your sorting algorithm is that you go through the array of values (left to right). Let's call it a pass. During the pass, pairs of values are checked and swapped into the correct order (larger value on the right).
During the first pass the maximum value will eventually be reached. From then on, the max is larger than the value next to it, so they are swapped, which makes the max part of the next pair in the pass. This repeats until the pass is completed and the max has moved to the right end of the array.
During the second pass the same is true for the second-highest value in the array. The only difference is that it will not be swapped with the max at the end. Now the two rightmost values are correctly placed.
Every subsequent pass settles one more value on the right.
There are N values and N passes, so after N passes all N values are sorted:
{Nth largest, (N-1)th largest, ..., 2nd largest, largest}
No, it isn't; it is worse. There is no point whatsoever in the variable cntr1: a single index into test1 suffices. And you should be referring to one of the many canonical implementations of bubble sort rather than trying to make it up for yourself.

Varying initializer in a 'for loop' in C++

int i = 0;
for (; i < size - 1; i++) {
    int temp = arr[i];
    arr[i] = arr[i + 1];
    arr[i + 1] = temp;
}
Here I started with the first position of the array. What if, after the loop, I need to execute the for loop again, but starting from the next position of the array?
Like so: the first for loop starts from Array[0],
the second iteration from Array[1],
the third iteration from Array[2].
Example:
For array: 1 2 3 4 5
for i=0: 2 1 3 4 5, 2 3 1 4 5, 2 3 4 1 5, 2 3 4 5 1
for i=1: 1 3 2 4 5, 1 3 4 2 5, 1 3 4 5 2 so on.
You can nest loops inside each other, including the ability for the inner loop to access the iterator value of the outer loop. Thus:
for (int start = 0; start < size - 1; start++) {
    for (int i = start; i < size - 1; i++) {
        // Inner code on 'i'
    }
}
This would repeat your loop with an increasing start value, i.e. with a higher initial value for i each time, until you've gone through your list.
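A minimal, self-contained sketch combining this nesting with the swap body from the question; it assumes (as the question's example output suggests) that the array is reset to its original contents before each outer pass:

#include <iostream>

int main()
{
    const int size = 5;
    const int original[size] = {1, 2, 3, 4, 5};

    for (int start = 0; start < size - 1; start++) {
        int arr[size];
        for (int k = 0; k < size; k++)       // reset to the original contents
            arr[k] = original[k];

        for (int i = start; i < size - 1; i++) {
            int temp = arr[i];               // swap arr[i] and arr[i+1]
            arr[i] = arr[i + 1];
            arr[i + 1] = temp;

            for (int k = 0; k < size; k++)   // print the array after each swap
                std::cout << arr[k] << (k + 1 < size ? ' ' : '\n');
        }
    }
}

For start = 0 this prints 2 1 3 4 5, 2 3 1 4 5, 2 3 4 1 5, 2 3 4 5 1, and for start = 1 it prints 1 3 2 4 5, 1 3 4 2 5, 1 3 4 5 2, matching the example above.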
Suppose you have a routine to generate all possible permutations of the array elements for a given length n. Suppose the routine, after processing all n! permutations, leaves the n items of the array in their initial order.
Question: how can we build a routine to make all possible permutations of an array with (n+1) elements?
Answer:
Generate all permutations of the initial n elements, each time process the whole array; this way we have processed all n! permutations with the same last item.
Now, swap the (n+1)-st item with one of those n and repeat permuting n elements – we get another n! permutations with a new last item.
The n elements are left in their previous order, so put that last item back into its initial place and choose another one to put at the end of the array. Reiterate permuting the n items.
And so on.
Remember, after each call the routine leaves the n-item array in its initial order. To retain this property at n+1 we need to make sure the same element finally ends up at the end of the array after the (n+1)-st iteration of n! permutations.
This is how you can do that:
void ProcessAllPermutations(int arr[], int arrLen, int permLen)
{
    if (permLen == 1)
        ProcessThePermutation(arr, arrLen); // print the permutation
    else
    {
        int lastpos = permLen - 1; // last item position for swaps
        for (int pos = lastpos; pos >= 0; pos--) // pos of item to swap with the last
        {
            swap(arr[pos], arr[lastpos]); // put the chosen item at the end
            ProcessAllPermutations(arr, arrLen, permLen - 1);
            swap(arr[pos], arr[lastpos]); // put the chosen item back at pos
        }
    }
}
and here is an example of the routine running: https://ideone.com/sXp35O
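For completeness, a minimal way to drive the routine (the printing ProcessThePermutation below is an illustrative stand-in, not the code behind the ideone link):

#include <iostream>
#include <utility>
using std::swap;

void ProcessThePermutation(int arr[], int arrLen)   // illustrative: just print the array
{
    for (int i = 0; i < arrLen; ++i)
        std::cout << arr[i] << ' ';
    std::cout << '\n';
}

void ProcessAllPermutations(int arr[], int arrLen, int permLen)   // as defined above
{
    if (permLen == 1)
        ProcessThePermutation(arr, arrLen);
    else
    {
        int lastpos = permLen - 1;
        for (int pos = lastpos; pos >= 0; pos--)
        {
            swap(arr[pos], arr[lastpos]);
            ProcessAllPermutations(arr, arrLen, permLen - 1);
            swap(arr[pos], arr[lastpos]);
        }
    }
}

int main()
{
    int arr[] = {1, 2, 3};
    ProcessAllPermutations(arr, 3, 3);   // prints all 3! = 6 permutations of 1 2 3
}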
Note, however, that this approach is highly inefficient:
It may work in a reasonable time for very small input sizes only. The number of permutations is a factorial function of the array length, and it grows faster than exponentially, which makes for a really BIG number of tests.
The routine has no short return. Even if the first or second permutation is the correct result, the routine will perform all the remaining n! unnecessary tests, too. Of course one can add a return path to break the iteration, but that would make the code somewhat ugly. And it would bring no significant gain, because the routine would still have to make n!/2 tests on average.
Each generated permutation appears deep in the last level of the recursion. Testing for a correct result requires making a call to ProcessThePermutation from within ProcessAllPermutations, so it is difficult to replace the callee with some other function. The caller function must be modified each time you need another method of testing / processing / whatever. Or one would have to provide a pointer to a processing function (a 'callback') and push it down through all the recursion, to the place where the call happens. This might be done indirectly by a virtual function in some context object, so it would look quite nice, but the overhead of passing additional data down the recursive calls cannot be avoided.
The routine has yet another interesting property: it does not rely on the data values. Elements of the array are never compared. This may sometimes be an advantage: the routine can permute any kind of objects, even if they are not comparable. On the other hand, it cannot detect duplicates, so in the case of equal items it will produce repeated results. In the degenerate case of all n items being equal, the result will be n! identical sequences.
So if you ask how to generate all permutations to detect a sorted one, I must answer: DON'T.
Do learn effective sorting algorithms instead.

2 player team knowing maximum moves

Given a list of N players who are to play a 2-player game. Each of them is either well versed in making a particular move or not. Find the maximum number of moves a 2-player team can know.
Also find out how many teams can know that maximum number of moves.
Example: let there be 4 players and 5 moves, where the ith player is versed in the jth move if a[i][j] is 1, and otherwise it is 0.
10101
11100
11010
00101
Here the maximum number of moves a 2-player team can know is 5, and there are two teams that know that maximum number of moves.
Explanation: (1, 3) and (3, 4) know all 5 moves. So the maximum number of moves a 2-player team knows is 5, and only 2 teams achieve this.
My approach: for each pair of players (i, j), I count how many moves at least one of the two is versed in. For each player i, I maintain the maximum such count over all pairs (i, j) with j > i, together with the number of pairs that reach that local maximum.
vector<int> pairmemo;
vector<int> maxmemo; // assumed declared alongside pairmemo
for (int i = 0; i < n; i++) {
    int mymax = INT_MIN;
    int countpairs = 0;
    for (int j = i + 1; j < n; j++) {
        int count = 0;
        for (int k = 0; k < m; k++) {
            if (arr[i][k] == 1 || arr[j][k] == 1)
            {
                count++;
            }
        }
        if (mymax < count) {
            mymax = count;
            countpairs = 0;
        }
        if (mymax == count) {
            countpairs++;
        }
    }
    pairmemo.push_back(countpairs);
    maxmemo.push_back(mymax);
}
The overall maximum over all N players is the answer, and the count is the corresponding sum of the pair counts being calculated.
int maxi = INT_MIN; // overall maximum, assumed initialised before these loops
for (int i = 0; i < n; i++) {
    if (maxi < maxmemo[i])
        maxi = maxmemo[i];
}
int countmaxi = 0;
for (int i = 0; i < n; i++) {
    if (maxmemo[i] == maxi) {
        countmaxi += pairmemo[i];
    }
}
cout << maxi << "\n";
cout << countmaxi << "\n";
Time complexity : O((N^2)*M)
How can I improve it?
Constraints : N<= 3000 and M<=1000
If you represent each set of moves by a very large integer, the problem boils down to finding the pair of players (I, J) with the maximum number of bits set in MovesI OR MovesJ.
So, you can use bit-packing and compress all the information on moves into an array of long integers. It would take 16 unsigned 64-bit integers per player according to the constraints (M <= 1000). Then, for each pair of players, you OR the corresponding arrays and count the number of one bits. This takes O(N^2 * 16), which runs pretty fast given the constraints.
Example:
Let's say the given matrix is
11010
00011
and you used 4-bit integers for packing it.
It would look like:
1101-0000
0001-1000
that is,
13,0
1,8
After the OR, the moves array for the 2-player team becomes 13,8; now count the bits that are one. You also have to optimize the counting of bits (for example with a precomputed table or a popcount intrinsic), otherwise the factor M would appear in the complexity. Just maintain one count variable and one maxNumberOfBitsSet variable as you process the pairs.
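A minimal sketch of this idea using std::bitset instead of manual packing (the variable names are illustrative; bitset::count() does the bit counting in word-sized chunks):

#include <bitset>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    constexpr int MAXM = 1000;                  // M <= 1000 by the constraints
    int n, m;
    std::cin >> n >> m;

    std::vector<std::bitset<MAXM>> knows(n);    // knows[i][k] == 1 if player i knows move k
    for (int i = 0; i < n; ++i) {
        std::string row;
        std::cin >> row;                        // e.g. "10101"
        for (int k = 0; k < m; ++k)
            if (row[k] == '1') knows[i].set(k);
    }

    int best = -1;
    long long teams = 0;
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j) {
            int c = (knows[i] | knows[j]).count();   // moves known by this pair
            if (c > best) { best = c; teams = 1; }
            else if (c == best) ++teams;
        }

    std::cout << best << "\n" << teams << "\n";
}

For the matrix in the question this prints 5 and 2; the running time is O(N^2 * M / 64), which matches the O(N^2 * 16) estimate above.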
What I'll do is:
1. Compute the logical OR of all possible pairs - O(N^2) - and store its bit SUM in a 2D array, ignoring the symmetric half and the diagonal (that way we save half of the calculation - see the example).
2. Find the max value in the 2D array (can be done while doing task 1) -> O(1).
3. Count how many cells in the 2D array equal the maximum value from task 2 - O(N^2).
Sum: 2*O(N^2) + O(1) => O(N^2)
Example (using the data in the question, with letter indexes):
A[10101] B[11100] C[11010] D[00101]
Task 1:
[A|B] = 11101 = SUM(4)
[A|C] = 11111 = SUM(5)
[A|D] = 10101 = SUM(3)
[B|C] = 11110 = SUM(4)
[B|D] = 11101 = SUM(4)
[C|D] = 11111 = SUM(5)
Task 2 (done while doing task 1):
Max = 5
Task 3:
Count = 2
By the way, O(N^2) is the minimum possible since you HAVE to check all the possible pairs.
Since you have to count all optimal pairs, unless you find a way to get that count without actually examining the candidates, you have to look at or eliminate every possible pair. So the worst case will always be O(N^2 * M), which I'll call O(n^3) as long as N and M are both large and of similar size.
However, you can hope for much better performance on the average case by pruning.
Don't check every case. Find ways to eliminate combinations without checking them.
I would sum and store the total number of moves known to each player, and sort the array rows by that value. That should provide an easy check for exiting the loop early. Sorting at O(n log n) should be basically free in an O(n^3) algorithm.
Use Priyank's basic idea, except with bitsets, since you obviously can't use a single fixed-width integer type with 1000 bits.
You may benefit from making a second array of bitsets for the columns, and use that as a mask for pruning players.
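A hedged sketch of that pruning, layered on the bitset version shown earlier: since a pair can know at most cnt[i] + cnt[j] moves, once that bound drops below the current best the remaining pairs can neither beat nor tie it, so they can be skipped without affecting either answer.

#include <algorithm>
#include <bitset>
#include <vector>

constexpr int MAXM = 1000;   // M <= 1000 by the constraints

// Pruned pair search over per-player move bitsets (names are illustrative).
// Note: it reorders the players, so work on a copy if the order matters.
void solveWithPruning(std::vector<std::bitset<MAXM>>& knows, int& best, long long& teams)
{
    // Sort players by how many moves they know, descending.
    std::sort(knows.begin(), knows.end(),
              [](const std::bitset<MAXM>& a, const std::bitset<MAXM>& b) {
                  return a.count() > b.count();
              });
    const int n = static_cast<int>(knows.size());
    std::vector<int> cnt(n);
    for (int i = 0; i < n; ++i) cnt[i] = knows[i].count();

    best = -1;
    teams = 0;
    for (int i = 0; i + 1 < n; ++i) {
        if (cnt[i] + cnt[i + 1] < best) break;      // no remaining pair can reach best
        for (int j = i + 1; j < n; ++j) {
            if (cnt[i] + cnt[j] < best) break;      // cnt[j] only shrinks from here on
            int c = (knows[i] | knows[j]).count();  // a pair knows at most cnt[i] + cnt[j] moves
            if (c > best) { best = c; teams = 1; }
            else if (c == best) ++teams;
        }
    }
}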

Find pair of elements in integer array such that abs(v[i]-v[j]) is minimized

Let's say we have an int array with 5 elements: 1, 2, 3, 4, 5.
What I need to do is find the minimum absolute value of the differences between the array's elements.
We need to check pairs like this:
1-2 2-3 3-4 4-5
1-3 2-4 3-5
1-4 2-5
1-5
And find the minimum absolute value of these subtractions. We can find it with two nested for loops. The question is: is there any algorithm for finding the value with one and only one for loop?
Sort the list and subtract the nearest two elements; the minimum difference is always between two adjacent elements in sorted order.
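A minimal sketch of that suggestion (using std::sort, so O(n log n); the next answer discusses a linear-time radix-sort variant):

#include <algorithm>
#include <climits>
#include <vector>

int minAdjacentDiff(std::vector<int> v)   // taken by value: we sort a copy
{
    std::sort(v.begin(), v.end());
    int best = INT_MAX;
    for (std::size_t i = 1; i < v.size(); ++i)
        best = std::min(best, v[i] - v[i - 1]);   // sorted, so no abs() needed
    return best;
}

For {1, 2, 3, 4, 5} this returns 1.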
The provably best-performing solution is asymptotically linear, O(n), up to constant factors.
This means that the time taken is proportional to the number of elements in the array (which of course is the best we can do, as we at least have to read every element of the array, which already takes O(n) time).
Here is one such O(n) solution (which also uses O(1) extra space if the list can be modified in place):
int mindiff(vector<int>& v) // not const: the list is sorted in place
{
    IntRadixSort(v.begin(), v.end());
    int best = INT_MAX; // from <climits>
    for (size_t i = 0; i + 1 < v.size(); i++)
    {
        int diff = abs(v[i] - v[i + 1]);
        if (diff < best)
            best = diff;
    }
    return best;
}
IntRadixSort is a linear time fixed-width integer sorting algorithm defined here:
http://en.wikipedia.org/wiki/Radix_sort
The concept is that you leverage the fixed bit-width nature of ints by partitioning them in a series of fixed passes over the bit positions, i.e. partition them on the high bit (32nd), then on the next highest (31st), then on the next (30th), and so on - which only takes linear time.
The problem is equivalent to sorting. Any sorting algorithm could be used, and at the end, return the difference between the nearest elements. A final pass over the data could be used to find that difference, or it could be maintained during the sort. Before the data is sorted the min difference between adjacent elements will be an upper bound.
So to do it without two loops, use a sorting algorithm that does not have two loops. In a way it feels like semantics, but recursive sorting algorithms will do it with only one loop. If the issue is the n(n-1)/2 subtractions required by the simple two-loop approach, you can use an O(n log n) algorithm.
No; unless you know the list is sorted, you need two loops.
It's simple. Iterate in a single for loop and keep four variables: "minpos", "maxpos", "minneg" and "maxneg". Check the sign of each value you encounter, store the maximum positive number in maxpos and the minimum positive number in minpos, and do the same in an if branch for the numbers less than zero. Now take the difference maxpos - minpos in one variable and maxneg - minneg in another, and print the larger of the two. You will get the desired result.
I believe you definitely know how to find a max and a min in one for loop.
Correction: the above finds the maximum difference; for the minimum you need to take the max and the second max instead of the max and the min :)
This might help you (cleaned up so that it compiles; subtractmin must start at a large value such as INT_MAX):
int a[5] = {1, 2, 3, 4, 5};
const int n = 5;
int subtractmin = INT_MAX;
int m = 0;                          // first index of the current pair
for (int i = 1; m < n - 1; i++) {   // i is the offset to the second index
    if (abs(a[m] - a[m + i]) < subtractmin)
        subtractmin = abs(a[m] - a[m + i]);
    if (m + i == n - 1) {           // reached the last element: advance m, restart the offset
        m = m + 1;
        i = 0;                      // the loop's i++ makes it 1 again
    }
}