Why would the following "swap" action fail at random times?
int i,p,a[11] = {0,1,2,3,4,5,6,7,8,9,10 };
srand(time(0));
for (i=0;i<11;i++)
{
p = rand() % 11;
a[i] = a[i] ^ a[p];
a[p] = a[i] ^ a[p];
a[i] = a[i] ^ a[p];
}
It is not so different from the logic in this answer.
It will work for three or four runs and then start duplicating 0.
Tried it in C and C++, same results
[edit]
Solved by initializing p=0 and replacing the relevant line with while (p==i) p = rand() % 11;
Update: The reason NOT to use xor (see Mark Byers' answer and comment)
If p happens to be the same as i, then a[i] ^ a[p] will be zero, and the rest of the function fails.
Statistically, your code actually has about a 65% chance of failing in this way: the chance that p differs from i on all 11 iterations is (10/11)^11 ≈ 0.35.
Make sure that, when generating p, it is not the same number as i. For instance:
p = rand() % 10;
if( p >= i) p++;
When i equals p then a[i] ^ a[p] becomes zero. Your "swap" operation is broken.
To swap you should use a temporary variable:
int temp = a[i];
a[i] = a[p];
a[p] = temp;
Don't use the XOR hack.
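For completeness, a minimal sketch of the corrected loop with an ordinary temporary-variable swap (it reuses the names from the question; the surrounding main() is my own scaffolding):
#include <cstdlib>
#include <ctime>

int main()
{
    int i, p, a[11] = { 0,1,2,3,4,5,6,7,8,9,10 };
    srand(time(0));
    for (i = 0; i < 11; i++)
    {
        p = rand() % 11;   // p == i is now harmless
        int temp = a[i];   // plain three-assignment swap
        a[i] = a[p];
        a[p] = temp;
    }
    return 0;
}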
Because p is randomly equal to i. In this case, a[i] instantly becomes 0.
If i == p then a[i] will store 0.
As a development practice, XOR swapping is not recommended. But if you are using this logic for fun, then use this line, which will ensure that p will not be the same as i:
p = ((rand()%10)+(i+1))%11;
or
p = ((rand()%(count-1))+(i+1))%count;
But note that this trick is not good from a performance perspective, because it requires two modulo operators and one addition (or subtraction). Use an equality comparison (which is the fastest comparison operator) and add 1 if they are the same:
p = rand()%10;
if (p == i) p++;
#include <iostream>
using namespace std;
int main()
{
int arr[8];
int n = 0;
while (n < 8)
{
arr[n] = ++n;
}
for (n = 0; n < 8; n++)
{
cout << arr[n]<<" ";
}
}
Output: garbage 1 2 3 4 5 6 7
Expected output: 1 2 3 4 5 6 7 8
The statement arr[n] = ++n; has Undefined Behavior because it is unspecified if n is incremented before being used as the subscript in arr.
In your case, with your compiler, the increment happens first, so that you never assign anything to arr[0] and write past the end arr[8] of the array.
One way to address this is to split it into two statements:
arr[n] = n;
++n;
The evaluation order is known as sequencing, and the rules have changed as the language has evolved. Of significance, with C++17 the increment of n will happen before calculating the address to store the result in, so you'll always end up with an uninitialized first element and the write past the end of the array.
Turn on your warning flags! Your program can have undefined behavior:
main.cpp:34:18: warning: operation on 'n' may be undefined [-Wsequence-point]
34 | arr[n] = ++n;
| ^~~
main.cpp:34:16: warning: iteration 7 invokes undefined behavior [-Waggressive-loop-optimizations]
34 | arr[n] = ++n;
| ~~~~~~~^~~~~
main.cpp:32:14: note: within this loop
32 | while (n < 8)
Maybe you could do this instead:
int arr[8] { };
int n = 0;
while ( n < 8 )
{
arr[n] = n + 1;
++n;
}
for ( n = 0; n < 8; ++n )
{
std::cout << arr[n] << " ";
}
In C++ (guaranteed since C++17) the expression on the right-hand side (RHS) of the = sign is evaluated first, because you can't store (assign) a value in a variable if you haven't calculated what it is!
int myAge = 10 + 10; // calculates 20 first, then stores it.
You're changing the value of n before assigning to position n in your array - increasing it from 0 to 1 in the first iteration of the loop - and so assigning the result, 1, to array[1].
Why? Well, ++n and n++ are complicated (and frequently confusing) operators in C++, in that they're both mathematical and assignment operators. ++n is essentially equivalent to:
n = n + 1;
return n;
Where n++ is closer to:
n = n+1;
return n -1;
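To see the difference concretely, here is a tiny standalone check (my own illustration, not code from the question):
#include <iostream>

int main()
{
    int n = 0;
    std::cout << ++n << "\n";   // prints 1: n is incremented, then its new value is used
    n = 0;
    std::cout << n++ << "\n";   // prints 0: the old value is used, then n is incremented
}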
You can code it more explicitly with simpler operators:
arr[n] = n+1;
n = n+1;
...or use a for loop, as you did in your output code. Using the same looping structure for both might help you achieve consistent outcomes.
You're not alone in struggling with this one. Chris Lattner felt the ++ and -- unary operators' potential for obfuscation and bugs sufficiently outweighed their benefits that they were dropped from Swift 3.
Suppose I have a variable x (double) that lies between 0 and 100. If x is in any of the intervals (0+10*n,5+10*n), with n (int) =0,...,9, then I return n, otherwise I break. I was thinking of doing this
bool test = false;
int k;
for(int i=0; i<10; i++){
if((0+10*i)<x<(5+10*i)){
k = i;
test = true;
}
}
if(test) return k;
else break;
would this be correct? If so, is there any other way that avoids loops?
It depends which intervals you have in mind. Since your intervals have a pattern to them, you can use a mathematical formula instead of a loop:
if(((int)x % 10) < 5) return (int)(x / 10);
else break;
(The % here is the modulo operator.)
Since C++'s % operator doesn't work on doubles, you can either cast x to an integer (as shown), or use the fmod function (works for non-integer intervals).
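For non-integer x, a small sketch of the fmod variant (it assumes the strictly open intervals (10n, 10n+5) from the question; the helper name interval_index is made up for illustration):
#include <cmath>

// Returns n if x lies strictly inside (10*n, 10*n + 5), otherwise -1.
int interval_index(double x)
{
    double r = std::fmod(x, 10.0);          // position of x within its 10-wide block
    if (r > 0.0 && r < 5.0)
        return static_cast<int>(x / 10.0);
    return -1;                              // caller can treat -1 as the "break" case
}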
You can use the % operator to get the remainder: compute x % 10 and check whether the result is between 0 and 5. It can be faster than the loop.
I have the following nested for-loop:
for(k = 0; k < n; ++k) {
for(m = 0; m < n; ++m) {
/* other logic altering a */
if(a[index] != 0) count++;
}
}
where a contains uint32_t. Since n can be quite large (but constant), and this is the only branch (besides comparing k and m with n), I would like to optimize this away.
The distribution of zero and non-zero in a can be considered uniformly random.
My first approach was
count += a[index] & 1;
but then count would only be incremented for all odd numbers.
In addition: I also have a case where a contains bool, but according to C++ Conditionals, true and false are defined as non-zero and zero, which makes it basically equivalent to the above problem.
As stated in the comments for the question if(a[index] != 0) count++; does not produce a branch (in this case), which was somewhat verified in the assembly.
For the sake of completeness, an equivalent to the mentioned statement is count += a[index] != 0; (according to the standard, §4.7 [conv.integral]).
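Here is that form in isolation, as a minimal sketch (the function name, array pointer, and length parameter are placeholders of mine, not the original code):
#include <cstdint>
#include <cstddef>

std::size_t count_nonzero(const std::uint32_t* a, std::size_t len)
{
    std::size_t count = 0;
    for (std::size_t i = 0; i < len; ++i)
        count += a[i] != 0;   // the comparison yields 0 or 1, so no explicit branch is needed
    return count;
}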
I came across a very simple interview question, but my solution is incorrect. Any help with this? 1) Are there any bugs in my solution? 2) Any good idea for time complexity O(n)?
Question:
Given an int array A[], define X=A[i]+A[j]+(j-i), j>=i. Find max value of X?
My solution is:
int solution(vector<int> &A){
if(A.empty())
return -1;
long long max_dis=-2000000000, cur_dis;
int size = A.size();
for(int i=0;i<size;i++){
for(int j=i;j<size;j++){
cur_dis=A[j]+A[i]+(j-i);
if(cur_dis > max_dis)
max_dis=cur_dis;
}
}
return max_dis;
}
The crucial insight is that it can be done in O(n) only if you track where potentially useful values are even before you're certain they'll prove usable.
Start with best_i = best_j = max_i = 0. The first two track the i and j values to use in the solution. The next one will record the index with the highest contributing factor for i, i.e. where A[i] - i is highest.
Let's call the value of X for some values of i and j "X(i,j)", and start by recording our best solution so far as X_best = X(0,0).
Increment n along the array...
whenever the value at [n] gives a better "i" contribution A[i] - i than the one at max_i, update max_i.
whenever using n as the "j" index yields X(max_i, n) greater than X_best, set best_i = max_i, best_j = n, and update X_best (a short sketch of these two steps follows).
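A short C++ sketch of those two steps as I read them (my paraphrase, not the answerer's own code; the C# version further down follows the same shape):
#include <vector>
#include <climits>
#include <cstddef>

long long max_x(const std::vector<int>& a)
{
    if (a.empty()) return LLONG_MIN;
    std::size_t best_i = 0, best_j = 0, max_i = 0;   // best_i/best_j record the winning pair
    long long x_best = 2LL * a[0];                   // X(0,0) = A[0] + A[0] + 0
    for (std::size_t n = 1; n < a.size(); ++n) {
        // better "i" contribution, i.e. a larger A[i] - i?
        if ((long long)a[n] - (long long)n > (long long)a[max_i] - (long long)max_i)
            max_i = n;
        // does using n as the "j" index beat the best solution so far?
        long long x = (long long)a[max_i] + a[n] + (long long)(n - max_i);
        if (x > x_best) { x_best = x; best_i = max_i; best_j = n; }
    }
    return x_best;
}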
Discussion - why/how it works
j_random_hacker's comment suggests I sketch a proof, but honestly I've no idea where to start. I'll try to explain as best I can - if someone else has a better explanation please chip in....
Restating the problem: find the greatest X(i,j) where j >= i. Given we can set an initial X_best of X(0,0), the problem is knowing when to update it and to what. As we contemplate successive indices in the array as potential values for j, we want to generate X(i, j=n) for some i (discussed next) to compare with X_best. But what i value should we use? Well, given any index from 0 to n is <= j, the j >= i constraint isn't relevant if we pick the best i value from the indices we've already visited. We work out the best i value by separating the i-related contribution to X, namely A[i] - i, from the j-related contribution, so in preparation for considering whether we have a new best solution with j = n we must maintain the max_i variable too as we go.
A way to approach the problem
For whatever it's worth - when I was groping around for a solution, I wrote down on paper some imaginary i and j contributions that I could see covered the interesting cases... where Ci and Cj are the contributions related to n's use as i and j respectively, something like
n    0   1   2   3   4
Ci   4   2   8   3   1
Cj  12   4   3   5   9
You'll notice I didn't bother picking values where Ci could be A[i] - i while Cj was A[j] + j... I could see the emerging solution should work for any formulas, and that would have just made it harder to capture the interesting cases. So - what's the interesting case? When n = 2 the Ci value is higher than anything we've seen in earlier elements, but given only knowledge of those earlier elements we can't yet see a way to use it. That scenario is the single "great" complication of the problem. What's needed is a Cj value of at least 9 so X_best is improved, which happens to come along when n = 4. If we'd found an even better Ci at [3] then we'd of course want to use that. max_i tracks the index of that waiting-on-a-good-enough-Cj value.
Longer version of my comment: what about iterating the array from both ends, trying to find the highest number, while decreasing it by the distance from the appropriate end? Would that find the correct indexes (and thus the correct X)?
#include <vector>
#include <algorithm>
#include <iostream>
#include <random>
#include <climits>
long long brutal(const std::vector<int>& a) {
long long x = LLONG_MIN;
for(int i=0; i < a.size(); i++)
for(int j=i; j < a.size(); j++)
x = std::max(x, (long long)a[i] + a[j] + j-i);
return x;
}
long long smart(const std::vector<int>& a) {
if(a.size() == 0) return LLONG_MIN;
long long x = LLONG_MIN, y = x;
for(int i = 0; i < a.size(); i++)
x = std::max(x, (long long)a[i]-i);
for(int j = 0; j < a.size(); j++)
y = std::max(y, (long long)a[j]+j);
return x + y;
}
int main() {
std::random_device rd;
std::uniform_int_distribution<int> rlen(0, 1000);
std::uniform_int_distribution<int> rnum(INT_MIN,INT_MAX);
std::vector<int> v;
for(int loop = 0; loop < 10000; loop++) {
v.resize(rlen(rd));
for(int i = 0; i < v.size(); i++)
v[i] = rnum(rd);
if(brutal(v) != smart(v)) {
std::cout << "bad" << std::endl;
return -1;
}
}
std::cout << "good" << std::endl;
}
I'll write it in pseudo-code because I don't have much time, but this should be the most performant way using recursion:
compare(array, left, right)
val = array[left] + array[right] + (right - left);
if (right - left) >= 1
val1 = compare(array, left, right-1);
val2 = compare(array, left+1, right);
val = Max(Max(val1,val2),val);
end if
return val
and then you simply call
compare(array, 0, array.length - 1);
I think I found an incredibly faster solution, but you need to check it:
You need to rewrite your array as follows:
Array[i] = array[i] + (MOD((array.length / 2) - i));
Then you just find the 2 highest values of the array and sum them; that should be your solution, almost O(n).
Wait, maybe I'm missing something... I have to check.
OK, you get the 2 highest values from this new array, and save the positions i and j. Then you need to calculate your result from the original array.
------------ EDIT
This should be an implementation of the method suggested by Tony D (in C#) that I tested.
int best_i, best_j, max_i, currentMax;
best_i = 0;
best_j = 0;
max_i = 0;
currentMax = 0;
for (int n = 0; n < array.Count; n++)
{
if (array[n] - n > array[max_i] - max_i) max_i = n;
if (array[n] + array[max_i] + (n - max_i) > currentMax)
{
best_i = max_i;
best_j = n;
currentMax = array[n] + array[max_i] + (n - max_i);
}
}
return currentMax;
Question:
Given an int array A[], define X=A[i]+A[j]+(j-i), j>=i. Find max value of X?
Answer O(n):
Let's rewrite the formula: X = (A[i] - i) + (A[j] + j)
we can track the highest A[i]-i we got and the highest A[j]+j we got. We loop over the array once and update both of our max values. After looping once we return the sum of A[i]-i + A[j]+j, which equals X.
We absolutely don't care about the j>=i constraint, because it is always true when we maximize both A[i]-i and A[j]+j
Code:
int solution(vector<int> &A){
if(A.empty()) return -1;
long long max_Ai_part =-2000000000;
long long max_Aj_part =-2000000000;
int size = A.size();
for(int i=0;i<size;i++){
if(max_Ai_part < A[i] - i)
max_Ai_part = A[i] - i;
if(max_Aj_part < A[i] + i)
max_Aj_part = A[i] + i;
}
return max_Ai_part + max_Aj_part;
}
Bonus:
Most people get confused by the j >= i constraint. If you have a feeling for numbers, you should be able to see that i should tend to be lower than j.
Assume our formula is maximized and i > j (this is impossible, but let's check it out).
We define x1 := j - i and x2 := i - j
A[i]+A[j]+j-i = A[i]+A[j] + x1, x1 < 0
we could then swap i with j and end up with this:
A[j]+A[i]+i-j = A[i]+A[j] + x2, x2 > 0
it is basically the same formula, but now because i > j the second formula will be greater than the first. In other words we could increase the maximum by swapping i and j which can't be true if we already had the maximum.
If we ever find a maximum, i cannot be greater than j.
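A tiny worked example (numbers picked just for illustration): take A = {5, 1}. Choosing i = 1, j = 0 gives X = A[1] + A[0] + (0 - 1) = 1 + 5 - 1 = 5, while the swapped choice i = 0, j = 1 gives X = A[0] + A[1] + (1 - 0) = 5 + 1 + 1 = 7, which is larger. So a pair with i > j can never be the maximum.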
#include <iostream>
#include <cstdlib>
#include <ctime>
typedef unsigned long long int ULL;
ULL gcd(ULL a, ULL b)
{
for(; b >0 ;)
{
ULL rem = a % b;
a = b;
b = rem;
}
return a;
}
void pollard_rho(ULL n)
{
ULL i = 0,y,k,d;
ULL *x = new ULL[2*n];
x[0] = rand() % n;
y = x[0];
k = 2;
while(1){
i = i+1;
std::cout << x[i-1];
x[i] = (x[i-1]*x[i-1]-1)%n;
d = gcd(abs(y - x[i]),n);
if(d!= 1 && d!=n)
std::cout <<d<<std::endl;
if(i+1==k){
y = x[i];
k = 2*k;
}
}
}
int main()
{
srand(time(NULL));
pollard_rho(10);
}
This implementation is derived from CLRS 2nd edition (Page number 894). while(1) looks suspicious to me. What should be the termination condition for the while loop?
I tried k<=n but that doesn't seem to work. I get segmentation fault. What is the flaw in the code and how to correct it?
I only have a 1st edition of CLRS, but assuming it's not too different from the 2nd ed., the answer to the termination condition is on the next page:
This procedure for finding a factor may seem somewhat mysterious at first. Note, however, that POLLARD-RHO never prints an incorrect answer; any number it prints is a nontrivial divisor of n. POLLARD-RHO may not print anything at all, though; there is no guarantee that it will produce any results. We shall see, however, that there is good reason to expect POLLARD-RHO to print a factor p of n after approximately sqrt(p) iterations of the while loop. Thus, if n is composite, we can expect this procedure to discover enough divisors to factor n completely after approximately n^(1/4) updates, since every prime factor p of n except possibly the largest one is less than sqrt(n).
So, technically speaking, the presentation in CLRS doesn't have a termination condition (that's probably why they call it a "heuristic" and "procedure" rather than an "algorithm") and there are no guarantees that it will ever actually produce anything useful. In practice, you'd likely want to put some iteration bound based on the expected n^(1/4) updates.
Why store all those intermediary values? You really don't need to put x and y in a array. Just use 2 variables which you keep reusing, x and y.
Also, replace while(1) with while(d == 1) (with d initialized to 1 before the loop) and cut the loop before
if(d!= 1 && d!=n)
std::cout <<d<<std::endl;
if(i+1==k){
y = x[i];
k = 2*k;
So your loop should become
while(d == 1)
{
x = (x*x - 1) % n;                  // "tortoise": advance x one step
y = (y*y - 1) % n;                  // "hare": advance y two steps
y = (y*y - 1) % n;
d = gcd(y > x ? y - x : x - y, n);  // avoid abs() on unsigned values
}
if(d!=n)
std::cout <<d<<std::endl;
else
std::cout<<"Can't find result with this function \n";
Extra points if you pass the function used inside the loop as a parameter to pollard, so that if it can't find the result with one function, it tries another.
Try replacing while(1) { i = i + 1; with this:
for (i = 1; i < 2*n; ++i) {
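Putting these suggestions together - two reused variables, a d == 1 loop condition, and an explicit iteration cap - here is a minimal self-contained sketch. The cap, the test number 8051, the x*x + 1 polynomial (chosen instead of x*x - 1 to avoid unsigned wrap-around at x == 0), and the name pollard_rho_floyd are my own choices, not from CLRS or the answers above; it also assumes n is small enough that x*x fits in 64 bits.
#include <iostream>
#include <cstdlib>
#include <ctime>

typedef unsigned long long int ULL;

ULL gcd(ULL a, ULL b)
{
    while (b > 0) { ULL rem = a % b; a = b; b = rem; }
    return a;
}

// Floyd-style Pollard rho with an iteration cap; returns 0 if no factor was found.
ULL pollard_rho_floyd(ULL n, ULL max_iterations)
{
    ULL x = rand() % n, y = x, d = 1;
    for (ULL i = 0; i < max_iterations && d == 1; ++i) {
        x = (x * x + 1) % n;              // "tortoise": one step
        y = (y * y + 1) % n;              // "hare": two steps
        y = (y * y + 1) % n;
        d = gcd(y > x ? y - x : x - y, n);
    }
    return (d != 1 && d != n) ? d : 0;
}

int main()
{
    srand(time(NULL));
    ULL factor = pollard_rho_floyd(8051, 10000);   // 8051 = 83 * 97
    if (factor != 0)
        std::cout << factor << std::endl;
    else
        std::cout << "no nontrivial factor found" << std::endl;
}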