What is the safest way to divide B by A, assuming the following types for each of them?
unsigned long long A;
unsigned long int B;
I am already using the following line to do that. It usually works fine; however, it sometimes fails with a segmentation fault.
double C;
C = double(B) / double(A);
Thanks
(Firstly, unsigned long int is the same as unsigned long)
Data type promotion rules mean that when evaluating B / A, B is promoted to unsigned long long and the division is performed in integer arithmetic; i.e. any remainder is lost.
Casting either A or B to double causes the operation to be performed in double-precision floating point. (But note that converting a long long value to double can lose precision.)
Rest assured that C = double(B) / double(A); will not cause a segmentation fault. You must have memory corruption / other undefined behaviour prior to this statement. I suspect you've messed up your stack.
These are integer types, so there is no need to cast to double unless you actually expect a fractional floating-point result (i.e. you probably do want the cast, but that has nothing to do with your fault).
More than likely you're getting errors because of divide by zero.
I have tried to access values stored in a vector in reverse order. The following code shows no error:
for (long long int i = 0; i < end.size(); i++)
cout << end[end.size() - 1 - i] << "\n";
But the following code shows runtime error:
for (long long int i = end.size() - 1; i >= 0; i--)
    cout << end[i] << "\n";
Is there any difference between the two methods?
Is there any difference between the two methods?
end.size() returns std::size_t which is an unsigned type. Given an empty vector, you subtract 1 from unsigned zero. The result is a very large unsigned number due to modular arithmetic that unsigned numbers use.
Here, the behaviour depends on the version of the language as well as on the implementation. If long long can represent the large unsigned value, then you index the vector with this huge value (any index is out of bounds for an empty vector) and the behaviour is undefined. This would happen on 32-bit systems, where std::size_t is presumably 32 bits and long long is 64 bits.
If the value is not representable by long long, then prior to C++20 the resulting value is implementation-defined. If that value is negative, you get the desired behaviour; otherwise the behaviour is undefined. Since C++20, the result is congruent with the unsigned value modulo the number of representable values, so if long long and std::size_t have the same bit width, the result is -1 and the loop behaves as desired.
In conclusion: the latter approach is broken on some implementations; the first one doesn't have this problem.
The proper way to do it is:
for (auto i = end.size(); i--; ) cout << end[i] << "\n";
Have the following code:
short a = 5;
short b = 15;
short c = 25;
short d = std::min(a, b);
short e = std::min(a, b-c); // Error
The last line cannot be compiled, claiming that there's no overload of min() that matches the arguments "short, int".
What is the reason for this being the case?
I understand that the result of b - c could potentially no longer fit in a short. However, the same would be true with ints, and there the compiler doesn't automatically promote to long or anything to enforce that the result fits.
As long as I am sure that the resulting number will never exceed the range of short, it is safe to use static_cast<short>(b - c), right?
Huge thanks!
Reason: integer promotion. If a type is narrower than int, it is automatically promoted to int before arithmetic. This makes little observable difference for signed types, where overflow is undefined behaviour anyway, but for unsigned types, where overflow wraps, it lets the compiler emit much less code on most processors.
Most of the time the result converts back implicitly, because assigning an int to a narrower variable is not an error. You happened to find a case where the promotion really does cause an issue, though.
If you're sure it fits, just cast it back.
I am trying to use string::length() inside a function I wrote, named match(), but I'm getting garbage values.
Why is the marked line printing garbage values?
#include <iostream>
#include <stdio.h>
#include <string>
using namespace std;

void match(string st1, string st2)
{
    for (int i = 0; i < st1.length(); i++)
    {
        cout << "Why this value is garbage " << (i - st1.length() - 1) << "\t";
        // this expression gives wrong values ^^^^^^^^^^^^^^^
    }
}

int main()
{
    string st1, st2;
    cout << "Enter the required string\n";
    cin >> st1 >> st2;
    match(st1, st2);
    return 0;
}
Imagine a string "foo". When i is 0, i - st1.length() - 1 ought to mean:

0 - 3 = -3
-3 - 1 = -4

but st1.length() is a size_t, which is unsigned, so all terms in the expression are converted to unsigned values:

(unsigned)0 - (unsigned)3 = 0xfffffffffffffffd
0xfffffffffffffffd - 1 = 0xfffffffffffffffc
                       = 18446744073709551612
The problem is that i is an int, while string::length returns a size_t. The first is a signed value, the second unsigned. One way to prevent this is to cast st1.length() to int, so that all the elements in your expression are signed values. You will then get the value you are looking for.
i-(int)st1.length()-1
This is not garbage; you are implicitly converting a signed type (i is a signed int) to an unsigned one (the return type of length() is size_t, which is unsigned).
This happens implicitly because, in the usual arithmetic conversions, the unsigned type wins when both operands have the same rank. This is a common source of bugs in the C/C++ world.
This is what you need to modify:
cout<<" **NOT** garbage "<<(i-(int)st1.length()-1)<<"\t"<<endl;
Happy programming!
I was writing a small Least common multiple algorithm and encountered something I don't understand. This is the first and last part of the code:
long a = 14159572;
long b = 63967072;
int rest = 4;
long long ans;
.
. // Some other code here that is not very interesting.
.
else
{
//This appears correct, prints out correct answer.
ans = b/rest;
std::cout << a*ans;
}
But if I change the last "else" to this it gives an answer that is much smaller and incorrect:
else
{
std::cout << a*(b/rest);
}
Anyone know why this is? I don't think it's an overflow, since the wrong result wasn't negative but simply a much smaller integer (around 6*10^8) than the actual answer (around 2.2*10^14). As far as I understand, b/rest should be calculated first in both cases, so why do the answers differ?
Difference is not order of operations but data types:
ans = b/rest;       // b/rest is computed in long, then widened to long long on assignment
std::cout << a*ans; // a is converted to long long, so the product is computed in long long
vs:
std::cout << a*(b/rest); // a*(b/rest) all calculations in long
so if you change your second variant to:
std::cout << a*static_cast<long long>(b/rest);
you should see the same result.
Update to why your cast did not work, note the difference:
long a,b;
// divide `long` by `long` and upscale result to `long long`
std::cout << static_cast<long long>( a / b );
// upscale both arguments to `long long` and divide `long long` by `long long`
std::cout << a / static_cast<long long>( b );
You're still encountering overflow. Just because you're not observing a negative number doesn't mean there's no overflow.
In your case specifically, long is almost certainly a 32-bit integer, as opposed to long long which is probably a 64-bit integer.
Since the maximum value of a 32-bit signed integer is roughly 2 billion, 14159572 * (63967072 / 4) is most definitely overflowing the range.
Make sure you perform your calculations using long long numbers, or else reconsider your code to avoid overflow in the first place.
The compiler determines a type for each operand of your expression and performs the multiplication and division according to those types (see "integer division"). This also applies to the intermediate results of the computation, and to the expression passed to the stream, since you don't store it in a variable of an explicitly chosen type.
How do we check whether an arithmetic operation such as addition, multiplication, or subtraction could result in an overflow?
Check the size of the operands first, and use std::numeric_limits. For example, for addition:
#include <limits>
unsigned int a, b; // from somewhere
unsigned int diff = std::numeric_limits<unsigned int>::max() - a;
if (diff < b) { /* error, cannot add a + b */ }
You cannot generally and reliably detect arithmetic errors after the fact, so you have to do all the checking before.
You can easily template this approach to make it work with any numeric type.