I am writing a program that reads numbers from a .txt file and outputs a corresponding number of asterisks (for even integers) or dollar signs (for odd integers). For example, a 3 would output $$$ and a 2 would output **. The program works fine, except when it reads the number -1. Other negative numbers work just fine; only -1 misbehaves, for some reason.
Here is my code:
#include <iostream>
#include <fstream>
#include <string>
#include <iomanip>
#include <Windows.h>
using namespace std;
int main()
{
    int value, even, odd;
    ifstream infile;
    infile.open("lab6_input.txt");
    while (infile >> value)
    {
        if (value % 2 == 0)
            cout << string(abs(value), "*$"[value % 2]) << endl;
        else
            cout << string(abs(value), "*$"[value % 2]) << endl;
        value++;
    }
    infile.close();
    system("pause");
    return 0;
}
Here is my output: https://imgur.com/a/favqrLv
The last number in the input is a -1, but for it the program just displays a blank line.
Your code seems a bit odd. Your variables are not initialized, and even and odd are never even used. Your if statement is unnecessary because both branches contain the same code.
To your question:
You should apply abs(value) in both places.
Try
while (infile >> value) {
    cout << string(abs(value), "*$"[abs(value) % 2]) << endl;
    value++;
}
Live example
The problem lies here:
"*$"[value % 2]
In C++, the result of the modulo operator applied to a negative dividend is negative (well, technically it's a bit more complicated than that). So, when value is negative, that expression causes undefined behavior: it accesses the array (the string literal) out of bounds, at index -1.
You could solve the issue taking the absolute value of value or of the result, but consider writing a free function like the following, instead.
constexpr bool is_odd(int x)
{
    return x % 2;
}
It better expresses the intent and helps the compiler optimize your code (see e.g. here), because it's as if you are asking:
Tell me if value is divisible by two (the remainder of its division by 2 is zero) or not.
Which is different from
Give me the remainder of the division of value by 2
You may have noted, on the linked Compiler Explorer page, that the compilers end up using a simple
and edi, 1
instead of performing an actual modulo operation. This is because what you really need is the least significant bit, which you could use directly in your code:
value & 1
Note, though, that the Standard (C++17, while I'm writing) doesn't mandate a particular binary representation for type int (C++20 will require two's complement), so the previous expression would be implementation-defined (and wrong, if you happen to find a still-working ones' complement int implementation).
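Putting it together, a minimal sketch of the original loop rewritten around such a helper (the ternary choice of '$' versus '*' is my own illustration, not code from either post):

#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>

constexpr bool is_odd(int x)
{
    return x % 2 != 0; // -1 % 2 is -1, which is nonzero, so -1 correctly counts as odd
}

int main()
{
    std::ifstream infile("lab6_input.txt");
    int value;
    while (infile >> value)
        std::cout << std::string(std::abs(value), is_odd(value) ? '$' : '*') << '\n';
}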
I'm on Manjaro 64-bit, latest edition, on an HP Pavilion g6, using Code::Blocks Release 13.12 rev 9501 (2013-12-25 18:25:45), gcc 5.2.0, Linux/unicode - 64 bit.
There was a discussion between students on why
the series sn = sum of 1/n (the harmonic series) diverges, while
the series sn2 = sum of 1/n^2 converges.
So I decided to write a program about it, just to show them what kind of output they can expect.
#include <iostream>
#include <math.h>
#include <fstream>
using namespace std;
int main()
{
    long double sn = 0, sn2 = 0; // sn2 is 1/n^2
    ofstream myfile;
    myfile.open("/home/Projects/c++/test/test.csv");
    for (double n = 2; n < 100000000; n++) {
        sn += 1/n;
        sn2 += 1/pow(n,2);
        myfile << "For n = " << n << " Sn = " << sn << " and Sn2 = " << sn2 << endl;
    }
    myfile.close();
    return 0;
}
Starting from n = 9944 I got sn2 = 0.644834, and I kept getting that same value forever. I did expect that the compiler would round the number and drop trailing digits at some point, but this is just too early, no?
So at what theoretical point do the digits start to be ignored? And what do I do if I care about all the digits in a number? If long double doesn't do it, then what does?
I know it seems like a silly question, but I expected to see a longer number, since you can store a big part of pi in a long double. By the way, I get the same result with double too.
The code that you wrote suffers from a classic programming mistake: it sums a sequence of floating-point numbers by adding larger numbers to the sum first and smaller numbers later.
This will inevitably lead to precision loss during addition, since at some point in the sequence the sum will become relatively large, while the next member of the sequence will become relatively small. Adding a sufficiently small floating-point value to a sufficiently large floating-point sum does not affect the sum. Once you reach that point, it will look as if the addition operation is "ignored", even though the value you attempt to add is not zero.
You can observe the same effect if you try calculating 100000000.0f + 1 on a typical machine: it still evaluates to 100000000. This does not happen because 1 somehow gets rounded to zero. This happens because the mathematically-correct result 100000001 is rounded back to 100000000. In order to force 100000000.0f to change through addition, you need to add at least 5 (and the result will be "snapped" to 100000008).
So, the issue here is not that the compiler "rounds the number when it gets so small", as you seem to believe. Your 1/pow(n,2) number is probably fine and sufficiently precise (not rounded to 0). The issue here is that at some iteration of your cycle the small non-zero value of 1/pow(n,2) just cannot affect the sum anymore.
While it is true that adjusting output precision will help you to see better what is going on (as stated in the comments), the real issue is what is described above.
When calculating sums of floating-point sequences with large differences in member magnitudes, you should do it by adding smaller members of the sequence first. Using my 100000000.0f example again, you can easily see that 4.0f + 4.0f + 100000000.0f correctly produces 100000008, while 100000000.0f + 4.0f + 4.0f is still 100000000.
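For instance, a minimal sketch of the smaller-terms-first idea applied to this sum (looping from the largest n down, and using n * n instead of pow(n, 2), are my own choices):

#include <iostream>

int main()
{
    // Sum 1/n^2 starting from the smallest terms: largest n first, counting down.
    long double sn2 = 0.0L;
    for (double n = 100000000 - 1; n >= 2; n--)
        sn2 += 1.0L / (n * n);
    std::cout.precision(15);
    std::cout << sn2 << '\n'; // close to pi^2/6 - 1 ~ 0.6449340668...
}

Kahan (compensated) summation is another standard way to reduce this kind of error without reordering the terms.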
You're not running into precision issues here. The sum doesn't stop at 0.644834; it keeps going to roughly the correct value:
#include <iostream>
#include <math.h>
using namespace std;

int main() {
    long double d = 0;
    for (double n = 2; n < 100000000; n++) {
        d += 1/pow(n, 2);
    }
    std::cout << d << endl;
    return 0;
}
Result:
0.644934
Note the 9! That's not 0.644834 any more.
If you were expecting 1.644934 (that is, pi^2/6), you should have started the sum at n = 1. If you were expecting visible changes between successive partial sums, you didn't see them because the output stream displays only 6 significant digits by default. You can configure your output stream to display more digits with std::setprecision from the iomanip header:
myfile << std::setprecision(9);
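For instance, here is a small self-contained sketch (the loop bound and the window around n = 9944 are illustrative) showing that the partial sums are still changing once more digits are displayed:

#include <iomanip>
#include <iostream>

int main()
{
    long double sn2 = 0.0L;
    for (double n = 2; n < 20000; n++)
    {
        sn2 += 1.0L / (n * n);
        if (n > 9940 && n < 9950) // around the point where 6 digits stop moving
            std::cout << std::setprecision(15) << sn2 << '\n';
    }
}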
Here is my code:
#include <iostream>
#include <cmath>
using namespace std;
int main()
{
    int n, i, num, m, k = 0;
    cout << "Enter a number :\n";
    cin >> num;
    n = log10(num);
    while (n > 0) {
        i = pow(10, n);
        m = num / i;
        k = k + pow(m, 3);
        num = num % i;
        --n;
        cout << m << endl;
        cout << num << endl;
    }
    k = k + pow(num, 3);
    return 0;
}
When I input 111 it gives me this:
1
12
1
2
I am using Code::Blocks. I don't know what is wrong.
Whenever I use pow expecting an integer result, I add .5 so I use (int)(pow(10,m)+.5) instead of letting the compiler automatically convert pow(10,m) to an int.
I have read in many places that others have done exhaustive tests of the situations in which I add that .5 and found zero cases where it makes a difference. But accurately identifying the conditions in which it isn't needed can be quite hard, and using it when it isn't needed does no real harm.
If it makes a difference, it is a difference you want. If it doesn't make a difference, it had a tiny cost.
In the posted code, I would adjust every call to pow that way, not just the one I used as an example.
There is no equally easy fix for your use of log10, but it may be subject to the same problem. Since you expect a non-integer answer and want that answer truncated down to an integer, adding .5 would be very wrong. So you may need a more complicated workaround for the fundamental problem of working with floating point. I'm not certain, but assuming 32-bit integers, I think adding 1e-10 to the result of log10 before converting to int is never enough to change log10(10^n - 1) into log10(10^n), but always enough to correct the error that might have done the reverse.
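For illustration only, a minimal sketch applying both adjustments (the sample input 153 and the variable names are mine, not the poster's):

#include <cmath>
#include <iostream>

int main()
{
    int num = 153;                      // illustrative input
    int n = (int)(log10(num) + 1e-10);  // nudge a slightly-low log10 result upward
    int i = (int)(pow(10, n) + 0.5);    // round pow to the nearest integer before truncating
    std::cout << n << ' ' << i << '\n'; // expect: 2 100
}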
pow does floating-point exponentiation.
Floating-point functions and operations are inexact; you cannot ever rely on them to give you the exact value that they would appear to compute, unless you are an expert on the fine details of IEEE floating-point representations and the guarantees given by your library functions.
(and furthermore, floating-point numbers might even be incapable of representing the integers you want exactly)
This is particularly problematic when you convert the result to an integer, because the conversion truncates toward zero: int x = 0.999999; sets x == 0, not x == 1. Even the tiniest error in the wrong direction completely spoils the result.
You could round to the nearest integer, but that has problems too; e.g. with sufficiently large numbers, your floating point numbers might not have enough precision to be near the result you want. Or if you do enough operations (or unstable operations) with the floating point numbers, the errors can accumulate to the point you get the wrong nearest integer.
If you want to do exact integer arithmetic, then you should use functions that do so. For example, write your own ipow function that computes integer exponentiation without any floating-point operations at all.
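A minimal sketch of such an ipow (the answer names the function; the exact signature and the overflow caveat are my own):

#include <iostream>

// Integer exponentiation by repeated squaring; no floating point involved.
// Caveat: the result must fit in long long, or the multiplications overflow.
long long ipow(long long base, unsigned exp)
{
    long long result = 1;
    for (; exp > 0; exp >>= 1)
    {
        if (exp & 1)
            result *= base;
        if (exp > 1)
            base *= base; // skip one needless (possibly overflowing) final square
    }
    return result;
}

int main()
{
    std::cout << ipow(10, 3) << '\n'; // 1000, exactly
    std::cout << ipow(2, 10) << '\n'; // 1024
}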
I know from previous threads on this topic that float arithmetic causes precision anomalies. But interestingly, I observed the same expression behaving in two different ways: using cout, the output is 4, but if I save the result into a variable, the result is 3!
#include <iostream>
#include <cmath>
using namespace std;

#define mod 1000000007

long long int fastPower(long long int a, int n) {
    long long int res = 1;
    while (n) {
        if (n & 1) res = (res * a) % mod;
        n >>= 1;
        a = (a * a) % mod;
    }
    return res;
}

int main() {
    int j = 3;
    cout << pow(64, (double)1.0/(double)j) << endl; // Outputs 4
    int root = pow(64, (double)1.0/(double)j);
    cout << root << endl; // Outputs 3
    /* As said by "pts", I tried including this condition in my code,
       but including this line resulted in TimeLimitExceeded (TLE). */
    if (fastPower(root+1, j) <= 64) root++;
    cout << root << endl; // Outputs 4 :)
    return 0;
}
Code output on Ideone.com
Now, how can we avoid such errors in a programming contest?
I do not want to use the round function, because I need only the integer part of the root, i.e.
63^(1/6) = 1, 20^(1/2) = 4, etc.
How should I modify my code so that correct result is stored in the root variable.
pow returns double. When cout prints it, it is rounded (thus, it shows 4). When you cast it to int, the fractional part is just truncated. pow returns something like 4 - eps (because of precision issues), and when that is truncated, it is equal to 3.
Dirty hack useful in programming contests: int root = (int)(pow(...) + 1e-7)
As far as I know, there is no single-line answer in C and C++ for getting the ath root of b rounded down.
As a quick workaround, you can do something like:
int root(int a, int b) {
    return floor(pow(b, 1.0 / a) + 0.001);
}
This doesn't work for every value, but by adjusting the constant (0.001), you may get lucky and it would work for the test input.
As a workaround, use pow as you use it already, and if it returns r, then try r - 1, r and r + 1 by multiplying it back (using fast exponentiation of integers). This will work most of the time.
If you need a solution which works 100% of the time, then don't use floating point numbers. Use for example binary search with exponentiation. There are faster algorithms (such as Newton iteration), but if you use them on integers then you need to write custom logic to find the exact solution as soon as they stop converging.
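A minimal sketch of the binary-search idea (the function name iroot, the bounds, and the overflow guard are my own; it assumes a >= 1 and b >= 1):

#include <iostream>

// Largest r with r^a <= b, using only integer arithmetic.
long long iroot(long long b, int a)
{
    long long lo = 1, hi = b;
    while (lo < hi)
    {
        long long mid = lo + (hi - lo + 1) / 2;
        // Compute mid^a, bailing out early if it would exceed b (prevents overflow).
        long long p = 1;
        bool over = false;
        for (int k = 0; k < a && !over; k++)
        {
            if (p > b / mid) over = true;
            else p *= mid;
        }
        if (over) hi = mid - 1; // mid^a > b: answer is below mid
        else lo = mid;          // mid^a <= b: mid is feasible
    }
    return lo;
}

int main()
{
    std::cout << iroot(64, 3) << '\n'; // 4
    std::cout << iroot(63, 6) << '\n'; // 1
    std::cout << iroot(20, 2) << '\n'; // 4
}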
There are two problems with your program:
The pow(int, int) overload is no longer available. To avoid this problem, cast the first parameter to double, float, or long double.
Also, cout rounds your answer when printing (so 3.99999... is displayed as 4), while storing it in an int drops the entire fractional part and keeps only the integer part.
I made a program to find the factors of a number:
#include <iostream>
#include <cmath>
using namespace::std;
int main() {
    long int n = 6008514751432;
    int i = 1;
    while (i <= n/2) {
        if (n % i == 0)
            cout << i << " ";
        i++;
    }
}
I am using Xcode, BTW.
It works fine with smaller numbers, like 2000 let's say, or even 200000. But when I get up to 6008514751432, which is the number I need to check, it doesn't work: it just says the program is running and displays nothing! What is going on?
Update: When I run the program and wait about 2 minutes, it says:
Warning: the current language does not match this frame.
Current language: auto; currently c++
(gdb)
long int is likely 4 bytes wide on your implementation, which means it can only store values up to 2^31 - 1, or 2147483647.
You might try switching to long long, which is typically larger (8 bytes on most platforms):
long long n = 6008514751432LL;
long long i = 1LL;
while (i <= n/2) {
    if (n % i == 0)
        cout << i << " ";
    i++;
}
If that's still not sufficient, you will need to look for an arbitrary-precision number library, such as GMP.
Depending on your platform, you may find that 6008514751432 is too large for the type long int. You need to make sure you are using a type that holds a 64-bit integer.
Also, if you are just trying to find the factors of a number, there is no need to look higher than sqrt(n), as every factor greater than that has a corresponding co-factor less than that. If you do use sqrt, make sure to hoist the call outside the loop itself.
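A minimal sketch of that idea (this version sidesteps sqrt entirely by testing i * i <= n; printing each divisor together with its co-factor is my own addition):

#include <iostream>

int main()
{
    long long n = 6008514751432LL;
    // Only test divisors up to sqrt(n); each divisor i pairs with n / i.
    for (long long i = 1; i * i <= n; i++)
    {
        if (n % i == 0)
        {
            std::cout << i;
            if (i != n / i)
                std::cout << ' ' << n / i; // the matching co-factor
            std::cout << '\n';
        }
    }
}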
On a system where long int is wider than int, note that i (declared as int) will overflow before it can reach n/2; signed overflow is undefined behaviour, and in practice i typically wraps around to a negative value, so the loop never terminates normally.
If it's running, it's probably not integer overflow in this specific case.
6008514751432 stored in a 32-bit int comes out negative (-144495672), so n/2 would be negative too.
So the while loop would terminate immediately.
Integer overflow begets undefined behaviour, so anything could happen, but most systems behave predictably.
Also, if it were overflow, your compiler should have warned you, or even refused to compile it.
Half of 6008514751432 is still quite a large number, and it will take significant time to count that high and do all those "%" operations. Meanwhile, the first few factors (1, 2, 4, 8, 1307, 2614, etc.) have been sent to cout but not yet flushed, so those results are sitting in the buffer, where you can't see them.
If you change your program to put each number on a new line that may give some results sooner, you'll still be waiting days to weeks for the final result though.
I suggest you look for a faster factoring algorithm.
Spoiler: FWIW 6008514751432 has prime factors
2 2 2 1307 574647547
You are assuming that long int is bigger than it is. You should try the following:
print n after you have assigned it
print sizeof(int), sizeof(long long)
use int64_t and the like. int, long and others are not the best choice when the actual size matters.
Add a read from stdin in your loop to try and debug step by step if problems persist.
I am writing a lexer as part of a compiler project, and I need to detect when an integer literal is larger than what can fit in an int so I can report an error. Is there a C++ standard library for big integers that could fit this purpose?
The standard C library functions for converting number strings to integers (the strtol family) are supposed to detect numbers which are out of range, and set errno to ERANGE to indicate the problem. See here.
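For example, a minimal sketch using strtol and errno (the token string is illustrative):

#include <cerrno>
#include <climits>
#include <cstdlib>
#include <iostream>

int main()
{
    const char *token = "99999999999999999999"; // illustrative lexer token
    errno = 0;
    char *end = nullptr;
    long v = std::strtol(token, &end, 10);
    if (errno == ERANGE)
        std::cout << "out of range for long\n";
    else if (v > INT_MAX || v < INT_MIN) // redundant on platforms where long == int
        std::cout << "fits in long but not in int\n";
    else
        std::cout << "fits in int: " << v << '\n';
}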
You could probably use libgmp. However, I think for your purpose, it's just unnecessary.
If you, for example, parse your numbers into a 32-bit unsigned int, you:
parse at most the first 9 decimal digits (that's floor(32*log(2)/log(10))). If there are no more digits, the number is OK.
take the next digit. If the number you now have, divided by 10, is not equal to the number from the previous step, the number is bad.
if you have more digits than that (i.e. more than 9+1), the number is bad.
else the number is good.
Be sure to skip any leading zeros, etc.; a sketch of the whole check follows.
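A minimal sketch of that check for a 32-bit unsigned int (the function name fits_u32 is mine; for simplicity it applies the divide-by-10 test to every digit rather than only the tenth, and it assumes the lexer has already verified the string contains only digits):

#include <cstdint>
#include <iostream>

// Returns true if the decimal string fits in a 32-bit unsigned int.
bool fits_u32(const char *s)
{
    while (*s == '0' && s[1] != '\0') s++; // skip leading zeros
    std::uint32_t value = 0;
    for (; *s; s++)
    {
        std::uint32_t digit = *s - '0';
        std::uint32_t next = value * 10 + digit; // unsigned: wraps, never UB
        if (next / 10 != value) // the multiply-and-add wrapped around
            return false;
        value = next;
    }
    return true;
}

int main()
{
    std::cout << fits_u32("4294967295") << '\n'; // 1: equals 2^32 - 1
    std::cout << fits_u32("4294967296") << '\n'; // 0: one too large
}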
libgmp is a general solution, though maybe a bit heavyweight.
For a lighter-weight lexical analyzer, you could treat the number as a string: trim leading zeros; then, if it's longer than 10 digits, it's too long; if shorter, it's OK; if exactly 10 digits, string-compare it against the max values 2^31 = 2147483648 (for signed) or 2^32 - 1 = 4294967295 (for unsigned). Keep in mind that -2^31 is a legal value but 2^31 isn't. Also keep in mind the syntax for octal and hexadecimal constants.
To everyone suggesting atoi:
My atoi() implementation does not set errno.
My atoi() implementation does not return INT_MIN or INT_MAX on overflow.
We also cannot rely on sign reversal. Consider 0x4000...0:
multiply by 2 and the sign bit is set;
multiply by 4 and the value is zero.
With base-10 numbers, our next digit would multiply the value by 10.
This is all nuts. Unless your lexer is parsing gigs of numerical data, stop the premature optimization already. It only leads to grief.
This approach may be inefficient, but it's adequate for your needs:
const char * p = "1234567890123";
int i = atoi( p );
ostringstream o;
o << i;
return o.str() == p;
Or, leveraging the stack:
const char * p = "1234567890123";
int i = atoi( p );
char buffer [ 12 ];
snprintf( buffer, 12, "%d", i );
return strcmp(buffer,p) == 0;
How about this: use atol, and check for overflow and underflow.
#include <iostream>
#include <string>
#include <cstdlib>   // atol
#include <climits>   // INT_MIN, INT_MAX
using namespace std;

int main()
{
    string str;
    cin >> str;
    int i = atol(str.c_str());
    if (i == INT_MIN && str != "-2147483648") {
        cout << "Underflow" << endl;
    } else if (i == INT_MAX && str != "2147483647") {
        cout << "Overflow" << endl;
    } else {
        cout << "In range" << endl;
    }
}
You might want to check out GMP if you want to be able to deal with such numbers.
In your lexer, as you parse the integer string, you must multiply the running total by 10 before you add each new digit (assuming you're parsing from left to right). If that running total suddenly becomes negative, you've exceeded the range of the integer.
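Note that a signed int going negative on overflow is actually undefined behaviour in C++, so a safer variant of the same idea checks before multiplying. A minimal sketch (the helper name is mine):

#include <climits>
#include <iostream>

// Accumulate decimal digits; returns false as soon as the value would exceed INT_MAX.
bool accumulate_digit(int &value, int digit)
{
    if (value > (INT_MAX - digit) / 10)
        return false; // value * 10 + digit would overflow
    value = value * 10 + digit;
    return true;
}

int main()
{
    const char *token = "2147483648"; // one past INT_MAX, illustrative
    int value = 0;
    bool ok = true;
    for (const char *p = token; *p && ok; p++)
        ok = accumulate_digit(value, *p - '0');
    std::cout << (ok ? "fits" : "too big") << '\n'; // prints: too big
}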
If your language (like C) supports compile-time evaluation of expressions, then you might need to think about that, too.
Stuff like this:
#define N 2147483643 // This is 2^31-5, i.e. close to the limit.
int toobig = N + N;
GCC will catch this, saying "warning: integer overflow in expression", but of course no individual literal is overflowing. This might be more than you require; I just thought I'd point it out as the kind of thing real compilers do in this department.
You can check whether the number is higher than INT_MAX or lower than INT_MIN, respectively. You would need to #include <limits.h>.