Recently I found a risk when using statements like this:
int i = 10;
int sum = 0;
while ( i-- ){
sum = sum + i;
}
It actually gives sum = 9 + 8 + 7 + ... + 1, so the total is missing 10. But I prefer this way of coding; it's fast and professional. Is there any advice to avoid this risk and still keep the code concise?
You have a counter, a stop-condition and a decrement operation, so use a for loop - it's a much better fit than while:
int sum = 0;
for (int i = 10; i > 0; --i) {
sum += i;
}
"Professional", concise and risk-free :)
Edit: Or if you want to be really concise:
int sum = 55;
At least for this specific type of series (sum from 1..N) you can just do N*(N+1)/2. 10*11/2 = 55.
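If you want the closed form as reusable code, a trivial helper might look like this (the name sum_1_to_n is just illustrative):
// Sum of 1..n via the closed form n*(n+1)/2 - no loop needed.
int sum_1_to_n(int n) {
    return n * (n + 1) / 2;   // sum_1_to_n(10) == 55
}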
Postfix increment/decrement can be pretty nasty. I recommend not using it. Your example isn't even the worst of it. It's behaving pretty well: you're actually getting sum = 9+8+7+...+1+0, so you are going through the loop 10 times, as one would expect.
As mentioned in the comments, use a for loop.
int sum=0;
for (int i=10;i;--i) sum+=i;
The prefix operator is much less confusing, and in some cases, makes faster code.
Use do-while instead of while.
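For instance, a minimal sketch of that suggestion which keeps the original post-decrement (the extra pass with i == 0 just adds zero, so the result is still 55):
int i = 10;
int sum = 0;
do {
    sum += i;      // runs with i == 10, 9, ..., 1, and finally once with i == 0
} while (i--);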
No idea what you mean by "risk" and a "professional" way of coding; this code is just wrong. If you want it to look "similar" to what you wrote:
int i = 10;
int sum = i;
while ( i-- ){
sum = sum + i;
}
or
int i = 10;
int sum = 0;
do {
sum += i;
} while (--i);
I have been thinking about how to do this for a few hours now.
For example, take an array of indefinite length, Arr[] = {1,2,3,4} (indefinite because it could have any number of elements).
As is probably obvious, the mathematical way to do this would be: first element * 1000 + second element * 100 + third element * 10 + fourth element.
So this way the result would be: 1000 + 200 + 30 + 4 = 1234.
The theory is pretty simple, but how can you implement this in a 'for' loop, given that the array could have any number of elements? For example, suppose it had 7 elements and the operation now needed a "seventh element * 100000". I've been thinking about this for a while and I can't think of a way to write this in a single 'for' loop. Do you have any suggestions for how I could do this?
Thanks!
Assuming all your integers are just one digit in base 10:
int result = 0;
for (int i = 0; i < len; ++i) {
result = result*10 + arr[i];
}
To let the compiler figure out the array size for you:
template <typename T, size_t size>
int compute(T (&arr)[size]) {
int result = 0;
for (size_t i = 0; i < size; i++) {
result = result * 10 + arr[i];
}
return result;
}
Try it online!
int size = sizeof(Arr) / sizeof(*Arr), p = 1, number = 0;
for(int i = size - 1; i >= 0; i--)
{
number = number + Arr[i] * p;
p *= 10;
}
cout << number;
I think you could try this. The size variable uses the sizeof formula to determine the number of elements in the array; the rest is just simple math. I'm going backwards with the for loop because that is the way my formula works. Hope it helped!
I came across a very simple interview question, but my solution is incorrect. Any help with this? 1) Are there any bugs in my solution? 2) Any good ideas for time complexity O(n)?
Question:
Given an int array A[], define X=A[i]+A[j]+(j-i), j>=i. Find max value of X?
My solution is:
int solution(vector<int> &A){
if(A.empty())
return -1;
long long max_dis=-2000000000, cur_dis;
int size = A.size();
for(int i=0;i<size;i++){
for(int j=i;j<size;j++){
cur_dis=A[j]+A[i]+(j-i);
if(cur_dis > max_dis)
max_dis=cur_dis;
}
}
return max_dis;
}
The crucial insight is that it can be done in O(n) only if you track where potentially useful values are even before you're certain they'll prove usable.
Start with best_i = best_j = max_i = 0. The first two track the i and j values to use in the solution. The next one will record the index with the highest contributing factor for i, i.e. where A[i] - i is highest.
Let's call the value of X for particular values of i and j "X[i,j]", and start by recording our best solution so far as Xbest = X[0,0].
Increment n along the array...
whenever the value at [n] gives a better "i" contribution (i.e. A[n] - n is greater than A[max_i] - max_i), update max_i.
whenever using n as the "j" index yields X[max_i,n] greater than Xbest, update Xbest and set best_i = max_i, best_j = n.
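A minimal C++ sketch of that single pass, assuming a non-empty array (the names max_x, best_i, best_j and max_i are mine, mirroring the description above):
#include <vector>

// Returns the maximum X = A[i] + A[j] + (j - i) with j >= i, in one pass.
long long max_x(const std::vector<int>& A) {
    size_t best_i = 0, best_j = 0, max_i = 0;      // indices described above
    long long x_best = 2LL * A[0];                 // X[0,0] = A[0] + A[0] + 0
    for (size_t n = 1; n < A.size(); ++n) {
        // better "i" contribution A[i] - i?
        if ((long long)A[n] - (long long)n > (long long)A[max_i] - (long long)max_i)
            max_i = n;
        // better overall X with j = n and i = max_i?
        long long candidate = (long long)A[max_i] + A[n] + (long long)(n - max_i);
        if (candidate > x_best) {
            x_best = candidate;
            best_i = max_i;
            best_j = n;
        }
    }
    return x_best;                                 // best_i / best_j hold the winning indices
}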
Discussion - why/how it works
j_random_hacker's comment suggests I sketch a proof, but honestly I've no idea where to start. I'll try to explain as best I can - if someone else has a better explanation please chip in....
Restating the problem: find the greatest X[i,j] where j >= i. Given we can set an initial Xbest of X[0,0], the problem is knowing when to update it and with what. As we consider successive indices n in the array as potential values for j, we want to generate X[i,n] for some i (discussed next) to compare with Xbest. But what i value should we use? Well, since every index from 0 to n is <= n, the j >= i constraint isn't a concern if we pick the best i value from the indices we've already visited. We work out the best i value by separating the i-related contribution to X from the j-related contribution - namely A[i] - i - so in preparation for deciding whether we have a new best solution with j = n we must also maintain the max_i variable as we go.
A way to approach the problem
For whatever it's worth - when I was groping around for a solution, I wrote down on paper some imaginary i and j contributions that I could see covered the interesting cases... where Ci and Cj are the contributions related to n's use as i and j respectively, something like
n 0 1 2 3 4
Ci 4 2 8 3 1
Cj 12 4 3 5 9
You'll notice I didn't bother picking values where Ci could be A[i] - i while Cj was A[j] + j... I could see the emerging solution should work for any formulas, and that would just have made it harder to capture the interesting cases. So - what's the interesting case? When n = 2 the Ci value is higher than anything we've seen in earlier elements, but given only knowledge of those earlier elements we can't yet see a way to use it. That scenario is the single "great" complication of the problem. What's needed is a Cj value of at least 9 so that Xbest is improved, which happens to come along when n = 4. If we'd found an even better Ci at [3] then we'd of course want to use that instead. max_i tracks the index of that waiting-on-a-good-enough-Cj value.
Longer version of my comment: what about iterating the array from both ends, trying to find the highest number while decreasing it by the distance from the appropriate end? Would that find the correct indexes (and thus the correct X)?
#include <vector>
#include <algorithm>
#include <iostream>
#include <random>
#include <climits>
long long brutal(const std::vector<int>& a) {
long long x = LLONG_MIN;
for(int i=0; i < a.size(); i++)
for(int j=i; j < a.size(); j++)
x = std::max(x, (long long)a[i] + a[j] + j-i);
return x;
}
long long smart(const std::vector<int>& a) {
if(a.size() == 0) return LLONG_MIN;
long long x = LLONG_MIN, y = x;
for(int i = 0; i < a.size(); i++)
x = std::max(x, (long long)a[i]-i);
for(int j = 0; j < a.size(); j++)
y = std::max(y, (long long)a[j]+j);
return x + y;
}
int main() {
std::random_device rd;
std::uniform_int_distribution<int> rlen(0, 1000);
std::uniform_int_distribution<int> rnum(INT_MIN,INT_MAX);
std::vector<int> v;
for(int loop = 0; loop < 10000; loop++) {
v.resize(rlen(rd));
for(int i = 0; i < v.size(); i++)
v[i] = rnum(rd);
if(brutal(v) != smart(v)) {
std::cout << "bad" << std::endl;
return -1;
}
}
std::cout << "good" << std::endl;
}
I'll write pseudocode because I don't have much time, but this should be the best-performing way using recursion:
compare(array, left, right)
val = array[left] + array[right] + (right - left);
if (right - left) > 1
val1 = compare(array, left, right-1);
val2 = compare(array, left+1, right);
val = Max(Max(val1,val2),val);
end if
return val
and then you simply call
compare(array, 0, array.length - 1);
I think I found an incredibly faster solution, but you need to check it:
You need to rewrite your array as follows:
Array[i] = array[i] + (MOD((array.length / 2) - i));
Then you just find the 2 highest values of the array and sum them; that should be your solution, almost O(n).
Wait, maybe I'm missing something... I have to check.
OK, you get the 2 highest values from this new array and save the positions i and j. Then you calculate your result from the original array.
------------ EDIT
This should be an implementation of the method suggested by Tony D (in C#), which I tested.
int best_i, best_j, max_i, currentMax;
best_i = 0;
best_j = 0;
max_i = 0;
currentMax = int.MinValue;   // so arrays containing only negative values are handled too
for (int n = 0; n < array.Count; n++)
{
if (array[n] - n > array[max_i] - max_i) max_i = n;
if (array[n] + array[max_i] + (n - max_i) > currentMax)
{
best_i = max_i;
best_j = n;
currentMax = array[n] + array[max_i] + (n - max_i);
}
}
return currentMax;
Question:
Given an int array A[], define X=A[i]+A[j]+(j-i), j>=i. Find max value of X?
Answer O(n):
Let's rewrite the formula: X = (A[i] - i) + (A[j] + j)
We can track the highest A[i] - i we have seen and the highest A[j] + j we have seen. We loop over the array once and update both of our max values. After the loop we return the sum of the two maxima, which equals the maximum X.
We don't need to worry about the j >= i constraint, because it is always satisfiable when we maximize both A[i] - i and A[j] + j (see the bonus below).
Code:
int solution(vector<int> &A){
if(A.empty()) return -1;
long long max_Ai_part =-2000000000;
long long max_Aj_part =-2000000000;
int size = A.size();
for(int i=0;i<size;i++){
if(max_Ai_part < A[i] - i)
max_Ai_part = A[i] - i;
if(max_Aj_part < A[i] + i)
max_Aj_part = A[i] + i;
}
return max_Ai_part + max_Aj_part;
}
Bonus:
Most people get confused by the j >= i constraint. If you have a feeling for numbers, you should be able to see that i should tend to be lower than j.
Assume our formula is maximized and i > j (this is impossible, but let's check it out).
We define x1 := j - i and x2 := i - j
A[i]+A[j]+j-i = A[i]+A[j] + x1, x1 < 0
we could then swap i with j and end up with this:
A[j]+A[i]+i-j = A[i]+A[j] + x2, x2 > 0
It is basically the same formula, but because i > j the second expression will be greater than the first. In other words, we could increase the maximum by swapping i and j, which can't be true if we already had the maximum.
If we ever find a maximum, i cannot be greater than j.
#include <iostream>
#include <cstdlib>
#include <ctime>
typedef unsigned long long int ULL;
ULL gcd(ULL a, ULL b)
{
for(; b >0 ;)
{
ULL rem = a % b;
a = b;
b = rem;
}
return a;
}
void pollard_rho(ULL n)
{
ULL i = 0,y,k,d;
ULL *x = new ULL[2*n];
x[0] = rand() % n;
y = x[0];
k = 2;
while(1){
i = i+1;
std::cout << x[i-1];
x[i] = (x[i-1]*x[i-1]-1)%n;
d = gcd(abs(y - x[i]),n);
if(d!= 1 && d!=n)
std::cout <<d<<std::endl;
if(i+1==k){
y = x[i];
k = 2*k;
}
}
}
int main()
{
srand(time(NULL));
pollard_rho(10);
}
This implementation is derived from CLRS 2nd edition (Page number 894). while(1) looks suspicious to me. What should be the termination condition for the while loop?
I tried k<=n but that doesn't seem to work. I get segmentation fault. What is the flaw in the code and how to correct it?
I only have a 1st edition of CLRS, but assuming it's not too different from the 2nd ed., the answer to the termination condition is on the next page:
This procedure for finding a factor may seem somewhat mysterious at first. Note, however, that POLLARD-RHO never prints an incorrect answer; any number it prints is a nontrivial divisor of n. POLLARD-RHO may not print anything at all, though; there is no guarantee that it will produce any results. We shall see, however, that there is good reason to expect POLLARD-RHO to print a factor p of n after approximately sqrt(p) iterations of the while loop. Thus, if n is composite, we can expect this procedure to discover enough divisors to factor n completely after approximately n^(1/4) updates, since every prime factor p of n except possibly the largest one is less than sqrt(n).
So, technically speaking, the presentation in CLRS doesn't have a termination condition (that's probably why they call it a "heuristic" and a "procedure" rather than an "algorithm"), and there are no guarantees that it will ever actually produce anything useful. In practice, you'd likely want to put some iteration bound in place based on the expected n^(1/4) updates.
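For what it's worth, here is a minimal self-contained sketch of that idea in C++ (not the CLRS code): the infinite while(1) is replaced by an explicit iteration cap, and two plain variables with Floyd-style cycle detection are used instead of the array. The cap of roughly 4 * n^(1/4) iterations and the polynomial x*x + 1 are arbitrary illustrative choices.
#include <cmath>
#include <cstdlib>
#include <iostream>

typedef unsigned long long ULL;

static ULL gcd(ULL a, ULL b) {
    while (b != 0) { ULL r = a % b; a = b; b = r; }
    return a;
}

// Pollard's rho with Floyd cycle detection and an explicit iteration bound.
// Returns a nontrivial factor of n, or 0 if none was found within the bound.
// Note: x * x can overflow for n around 2^32 or larger; this is a sketch only.
ULL pollard_rho_bounded(ULL n) {
    if (n % 2 == 0) return 2;
    ULL x = std::rand() % n, y = x, d = 1;
    ULL limit = 4 * (ULL)std::pow((double)n, 0.25) + 10;   // ~ n^(1/4) updates
    for (ULL i = 0; i < limit && d == 1; ++i) {
        x = (x * x + 1) % n;                 // tortoise: one step
        y = (y * y + 1) % n;                 // hare: two steps
        y = (y * y + 1) % n;
        d = gcd(x > y ? x - y : y - x, n);   // gcd of the absolute difference
    }
    return (d != 1 && d != n) ? d : 0;
}

int main() {
    std::srand(42);
    std::cout << pollard_rho_bounded(8051) << "\n";   // 8051 = 83 * 97; prints a factor or 0
}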
Why store all those intermediate values? You really don't need to put x and y in an array. Just use two variables, x and y, which you keep reusing.
Also, replace while(1) with while(d == 1) and cut the loop before
if(d!= 1 && d!=n)
std::cout <<d<<std::endl;
if(i+1==k){
y = x[i];
k = 2*k;
So your loop should become
while(d == 1)
{
x = (x*x - 1) % n;
y = (y*y - 1) % n;
y = (y*y - 1) % n;
d = gcd(y > x ? y - x : x - y, n);   // gcd of the absolute difference, avoiding unsigned wrap-around
}
if(d!=n)
std::cout <<d<<std::endl;
else
std::cout<<"Can't find result with this function \n";
Extra points if you pass the function used inside the loop as a parameter to pollard, so that if it can't find the result with one function, it tries another.
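As a rough sketch of that "extra points" suggestion (the names pollard_with and factor, and the step-function signature, are mine, not a standard API), the polynomial can be passed in so the caller can retry with a different one:
#include <functional>

typedef unsigned long long ULL;

static ULL gcd(ULL a, ULL b) {
    while (b != 0) { ULL r = a % b; a = b; b = r; }
    return a;
}

// Runs the d == 1 loop with a caller-supplied step function f(x, n).
// Returns a nontrivial factor, or 0 to signal "try another function".
// As before, x * x overflows for n around 2^32 or larger; sketch only.
ULL pollard_with(ULL n, const std::function<ULL(ULL, ULL)>& f, ULL seed, ULL max_iter) {
    ULL x = seed % n, y = seed % n, d = 1;
    for (ULL i = 0; i < max_iter && d == 1; ++i) {
        x = f(x, n);                          // tortoise: one step
        y = f(f(y, n), n);                    // hare: two steps
        d = gcd(y > x ? y - x : x - y, n);
    }
    return (d != 1 && d != n) ? d : 0;
}

// Example: try x^2 - 1 first (as in the question), then x^2 + 1 if that fails.
ULL factor(ULL n) {
    ULL d = pollard_with(n, [](ULL x, ULL m) { return (x * x + m - 1) % m; }, 2, 100000);
    if (d == 0)
        d = pollard_with(n, [](ULL x, ULL m) { return (x * x + 1) % m; }, 2, 100000);
    return d;
}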
Try replacing while(1) { i = i + 1; with this:
for (i = 1; i < 2*n; ++i) {
Let's say you want to make a program that will print the numbers 1-9 over and over again:
123456789123456789123456789
I guess the most obvious way to do it would be to use a loop:
int number = 1;
while(true)
{
print(number);
number = number + 1;
if(number > 9)
number = 1;
}
Before I go any further, is this the best way to do this, or is there a more common way of doing it?
Will this do?
while(true)
{
print("123456789");
}
Everyone using the % operator so far seems to be under the impression that ten values are involved. They also overlook the fact that their logic will sometimes generate 0. One way to do what you want is:
int i = 1;
while (true) {
print(i);
i = (i % 9) + 1;
}
The most obvious way would be this:
for (;;)
{
for (int i = 1; i < 10; ++i)
{
print(i);
}
}
Why you'd want to optifuscate it is beyond me. Output is going to so overwhelm the computation that any kind of optimization is irrelevant.
First off, why are you trying to "optimize" this? Are you optimizing for speed? Space? Readability or maintainability?
A "shorter" way to do this would be like so:
for (int i = 1; true; i = (i + 1) % 10)
{
print(i);
}
All I did was convert the while loop to a for loop, and convert the increment + conditional into an increment + mod operation.
This really is a case of micro-optimization.
My answer is based off Mike's answer but with further optimization:
for (int i = 1; true; i = (i + 1) % 10)
{
std::cout << (char)('0' + i);
}
Printing a number is way more expensive than printing a char plus an addition.
I'm working on Euler Problem 14:
http://projecteuler.net/index.php?section=problems&id=14
I figured the best way would be to create a vector of numbers that kept track of how big the series was for that number... for example from 5 there are 6 steps to 1, so if I ever reach the number 5 in a series, I know I have 6 steps to go and I have no need to calculate those steps. With this idea I coded up the following:
#include <iostream>
#include <vector>
#include <iomanip>
using namespace std;
int main()
{
vector<int> sizes(1);
sizes.push_back(1);
sizes.push_back(2);
int series, largest = 0, j;
for (int i = 3; i <= 1000000; i++)
{
series = 0;
j = i;
while (j > (sizes.size()-1))
{
if (j%2)
{
j=(3*j+1)/2;
series+=2;
}
else
{
j=j/2;
series++;
}
}
series+=sizes[j];
sizes.push_back(series);
if (series>largest)
largest=series;
cout << setw(7) << right << i << "::" << setw(5) << right << series << endl;
}
cout << largest << endl;
return 0;
}
It seems to work relatively well for smaller numbers but this specific program stalls at the number 113382. Can anyone explain to me how I would go about figuring out why it freezes at this number?
Is there some way I could modify my algorithm to be better? I realize that I am creating duplicates with the current way I'm doing it:
for example, the series of 3 is 3,10,5,16,8,4,2,1. So I already figured out the sizes for 10,5,16,8,4,2,1 but I will duplicate those solutions later.
Thanks for your help!
Have you ruled out integer overflow? Can you guarantee that the result of (3*j+1)/2 will always fit into an int?
Does the result change if you switch to a larger data type?
EDIT: The last forum post at http://forums.sun.com/thread.jspa?threadID=5427293 seems to confirm this. I found this by googling for 113382 3n+1.
I think you are severely overcomplicating things. Why are you even using vectors for this?
Your problem, I think, is overflow. Use unsigned ints everywhere.
Here's working code that's much simpler (it doesn't work with signed ints, however).
#include <cstdio>

int main()
{
unsigned int maxTerms = 0;
unsigned int longest = 0;
for (unsigned int i = 3; i <= 1000000; ++i)
{
unsigned int tempTerms = 1;
unsigned int j = i;
while (j != 1)
{
++tempTerms;
if (tempTerms > maxTerms)
{
maxTerms = tempTerms;
longest = i;
}
if (j % 2 == 0)
{
j /= 2;
}
else
{
j = 3*j + 1;
}
}
}
printf("%d %d\n", maxTerms, longest);
return 0;
}
Optimize from there if you really want to.
When i = 113383, your j overflows and becomes negative (thus never exiting the "while" loop).
I had to use "unsigned long int" for this problem.
The problem is overflow. Just because the sequence starts below 1 million does not mean that it cannot go above 1 million later. In this particular case, it overflows and goes negative resulting in your code going into an infinite loop. I changed your code to use "long long" and this makes it work.
But how did I find this out? I compiled your code and then ran it in a debugger. I paused the program execution while it was in the loop and inspected the variables. There I found that j was negative. That pretty much told me all I needed to know. To be sure, I added a cout << j; as well as an assert(j > 0) and confirmed that j was overflowing.
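To illustrate, here's a tiny standalone snippet along the lines of what the answer describes: apply the 3n+1 step to 113383 with a plain int and let an assert catch the moment it goes negative (signed overflow is technically undefined behaviour, but on typical builds it wraps and the assert fires):
#include <cassert>
#include <iostream>

int main() {
    int j = 113383;                      // the starting value mentioned above
    while (j != 1) {
        j = (j % 2) ? 3 * j + 1 : j / 2; // this sequence climbs past INT_MAX
        std::cout << j << "\n";
        assert(j > 0);                   // fires once j wraps around and goes negative
    }
}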
I would try using a large array rather than a vector; then you will be able to avoid those duplicates you mention, since for every number you calculate you can check whether it's already in the array and, if not, add it. It's probably more memory-efficient that way too. Also, you might want to try using unsigned long, as it's not clear at first glance how large these numbers will get.
I stored the length of the chain for every number in an array, and during the brute force, whenever I reached a number less than the one being evaluated, I just added the chain length for that lower number and broke out of the loop.
For example, I already know the Collatz sequence for 10 is 7 steps long.
Now when I'm evaluating 13, I get 40, then 20, then 10, which I have already evaluated, so the total count is 3 + 7.
The result on my machine (for up to 1 million) was 0.2 seconds; with pure brute force it was 5 seconds.
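For what it's worth, a minimal C++ sketch of that memoized brute force (caching chain lengths only for starting values up to the limit, and using a 64-bit j so the overflow discussed above can't bite):
#include <iostream>
#include <vector>

int main() {
    const unsigned long long limit = 1000000;
    std::vector<unsigned int> cache(limit + 1, 0);   // cache[n] = chain length of n
    cache[1] = 1;
    unsigned int best_len = 0;
    unsigned long long best_start = 1;
    for (unsigned long long i = 2; i <= limit; ++i) {
        unsigned long long j = i;
        unsigned int steps = 0;
        while (j >= i) {                             // stop once we hit a smaller, cached value
            j = (j % 2) ? 3 * j + 1 : j / 2;         // 64-bit j avoids signed overflow
            ++steps;
        }
        cache[i] = steps + cache[j];                 // reuse the cached chain length
        if (cache[i] > best_len) {
            best_len = cache[i];
            best_start = i;
        }
    }
    std::cout << best_start << " has " << best_len << " terms\n";
}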