I have this simple C program.
#include <stdlib.h>
#include <stdio.h>
#include <stdbool.h>
bool foo (unsigned int a) {
return (a > -2L);
}
bool bar (unsigned long a) {
return (a > -2L);
}
int main() {
printf("foo returned = %d\n", foo(99));
printf("bar returned = %d\n", bar(99));
return 0;
}
Output when I run this -
foo returned = 1
bar returned = 0
My question is: why does foo(99) return true but bar(99) return false?
To me it makes sense that bar would return false. For simplicity, let's say longs are 8 bits; then (using two's complement for the signed value):
99 == 0110 0011
-2 == unsigned 254 == 1111 1110
So clearly the CMP instruction will see that 1111 1110 is bigger and return false.
But I don't understand what is going on behind the scenes in the foo function. The assembly for foo seems to hardcode the result, always executing mov eax,0x1. I would have expected foo to do something similar to bar. What is going on here?
This is covered in C classes and is specified in the standard. Here is how you use the documents to figure this out.
In the 2018 C standard, you can look up > or “relational expressions” in the index to see they are discussed on pages 68-69. On page 68, you will find clause 6.5.8, which covers relational operators, including >. Reading it, paragraph 3 says:
If both of the operands have arithmetic type, the usual arithmetic conversions are performed.
“Usual arithmetic conversions” is listed in the index as defined on page 39. Page 39 has clause 6.3.1.8, “Usual arithmetic conversions.” This clause explains that operands of arithmetic types are converted to a common type, and it gives rules determining the common type. For two integer types of different signedness, such as the unsigned long and the long int in bar (a and -2L), it says that, if the unsigned type has rank greater than or equal to the rank of the other type, the signed type is converted to the unsigned type.
“Rank” is not in the index, but you can search the document to find it is discussed in clause 6.3.1.1, where it tells you the rank of long int is greater than the rank of int, and any unsigned type has the same rank as the corresponding signed type.
Now you can consider a > -2L in bar, where a is unsigned long. Here we have an unsigned long compared with a long. They have the same rank, so -2L is converted to unsigned long. Conversion of a signed integer to unsigned is discussed in clause 6.3.1.3. It says the value is converted by wrapping it modulo ULONG_MAX+1, so converting the signed long −2 produces ULONG_MAX+1−2 = ULONG_MAX−1, which is a large integer. Then comparing a, which has the value 99, to that large integer with > yields false, so zero is returned.
For foo, we continue with the rules for the usual arithmetic conversions. When the unsigned type does not have rank greater than or equal to the rank of the signed type, but the signed type can represent all the values of the type of the operand with unsigned type, the operand with the unsigned type is converted to the type of the operand with the signed type. In foo, a is unsigned int and -2L is long int. Presumably in your C implementation, long int is 64 bits, so it can represent all the values of a 32-bit unsigned int. So this rule applies, and a is converted to long int. This does not change the value. So the original value of a, 99, is compared to −2 with >, and this yields true, so one is returned.
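To see the two different conversions side by side, here is a minimal sketch (assuming, as above, a 32-bit unsigned int and a 64-bit long) that spells out the casts the compiler applies implicitly:
#include <stdio.h>
int main(void) {
    unsigned int  a = 99;   /* as in foo */
    unsigned long b = 99;   /* as in bar */
    /* foo: a is converted to long; the value 99 is preserved */
    printf("%d\n", (long)a > -2L);           /* prints 1 */
    /* bar: -2L is converted to unsigned long, wrapping to ULONG_MAX - 1 */
    printf("%d\n", b > (unsigned long)-2L);  /* prints 0 */
    return 0;
}
This also explains the assembly you saw: after the conversion, the long in foo came from an unsigned int, so its value is always in [0, 4294967295] and therefore always greater than −2, which lets the compiler fold foo down to mov eax,0x1.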
In the first function
bool foo (unsigned int a) {
return (a > -2L);
}
both operands of the expression a > -2L have the type long (the first operand is converted to the type long by the usual arithmetic conversions, because the rank of the type long is greater than the rank of the type unsigned int and, on the system used, all values of the type unsigned int can be represented by the type long). And it is evident that the positive value 99L is greater than the negative value -2L.
The first function could produce the result 0 provided that sizeof( long ) is equal to sizeof( unsigned int ). In this case the type long is unable to represent all (positive) values of the type unsigned int. As a result, due to the usual arithmetic conversions, both operands will be converted to the type unsigned long.
For example, running the function foo with MS VS 2019, where sizeof( long ) is equal to 4, the same as sizeof( unsigned int ), you will get the result 0.
Here is a demonstration program, written in C++, that visually shows why the result of a call to the function foo using MS VS 2019 can be equal to 0.
#include <iostream>
#include <iomanip>
#include <type_traits>
int main()
{
unsigned int x = 0;
long y = 0;
std::cout << "sizeof( unsigned int ) = " << sizeof( unsigned int ) << '\n';
std::cout << "sizeof( long ) = " << sizeof(long) << '\n';
std::cout << "std::is_same_v<decltype( x + y ), unsigned long> is "
<< std::boolalpha
<< std::is_same_v<decltype( x + y ), unsigned long>
<< '\n';
}
The program output is
sizeof( unsigned int ) = 4
sizeof( long ) = 4
std::is_same_v<decltype( x + y ), unsigned long> is true
That is, in general, the result of the first function is implementation-dependent: it hinges on whether long can represent every value of unsigned int.
In the second function
bool bar (unsigned long a) {
return (a > -2L);
}
both operands of the expression a > -2L have the type unsigned long (again due to the usual arithmetic conversions: the ranks of unsigned long and signed long are equal, so the operand of type signed long is converted to the type unsigned long), and -2L interpreted as unsigned long is greater than 99.
The reason for this has to do with the rules of integer conversions.
In the first case, you compare an unsigned int with a long using the > operator, and in the second case you compare an unsigned long with a long.
These operands must first be converted to a common type using the usual arithmetic conversions. These are spelled out in section 6.3.1.8p1 of the C standard, with the following excerpt focusing on integer conversions:
If both operands have the same type, then no further conversion is
needed.
Otherwise, if both operands have signed integer types or both have
unsigned integer types, the operand with the type of lesser integer
conversion rank is converted to the type of the operand with greater
rank.
Otherwise, if the operand that has unsigned integer type has rank
greater or equal to the rank of the type of the other operand, then
the operand with signed integer type is converted to the type of the
operand with unsigned integer type.
Otherwise, if the type of the operand with signed integer type can
represent all of the values of the type of the operand with unsigned
integer type, then the operand with unsigned integer type is converted
to the type of the operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type
corresponding to the type of the operand with signed integer type.
In the case of comparing an unsigned int with a long, the fourth paragraph quoted above applies. long has higher rank and (assuming long is 64-bit and int is 32-bit) can hold all values that an unsigned int can, so the unsigned int operand a is converted to a long. Since the value in question is in the range of long, section 6.3.1.3p1 dictates how the conversion happens:
When a value with integer type is converted to another integer type
other than _Bool, if the value can be represented by the new type, it
is unchanged
So the value is preserved and we're left with 99 > -2 which is true.
In the case of comparing an unsigned long with a long, the third paragraph quoted above applies. Both types are of the same rank with different signedness, so the long constant -2L is converted to unsigned long. -2 is outside the range of an unsigned long, so a value conversion must happen. This conversion is specified in section 6.3.1.3p2:
Otherwise, if the new type is unsigned, the value is converted by
repeatedly adding or subtracting one more than the maximum value that
can be represented in the new type until the value is in the range of
the new type.
So the long value -2 will be converted to the unsigned long value 2^64 − 2, assuming unsigned long is 64-bit. So we're left with 99 > 2^64 − 2, which is false.
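A quick check makes the converted value visible (a sketch assuming a 64-bit unsigned long):
#include <stdio.h>
int main(void) {
    printf("%lu\n", (unsigned long)-2L);  /* 18446744073709551614, i.e. 2^64 - 2 */
    return 0;
}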
I think what is happening here is implicit conversion by the compiler. When you perform a comparison on two different primitives, the compiler will convert one of them to the same type as the other. I believe the rule of thumb is that the type with the larger range of possible values is used as the standard.
So in foo() your argument is implicitly converted to a signed long type and the comparison works as expected.
In bar() your argument is an unsigned long, which has a larger maximum value than signed long. Here the compiler converts -2L to unsigned long, which turns into a very large number.
First, I'd like to point out that I did read some of the other answers with generally similar titles. However, those either refer to what I assume is an older version of the standard, or deal with promotions, not conversions.
I've been crawling through "C++ Crash Course", where the author states the following in the chapter exploring built-in operators:
If none of the floating-point promotion rules apply, you then check
whether either argument is signed. If so, both operands become signed.
Finally, as with the promotion rules for floating-point types, the size of the
largest operand is used to promote the other operand: ...
If I read the standard correctly, this is not true, because, as per cppreference.com,
If both operands are signed or both are unsigned, the operand with lesser conversion rank is converted to the operand with the greater integer conversion rank
Otherwise, if the unsigned operand's conversion rank is greater or equal to the conversion rank of the signed operand, the signed operand is converted to the unsigned operand's type.
Otherwise, if the signed operand's type can represent all values of the unsigned operand, the unsigned operand is converted to the signed operand's type
Otherwise, both operands are converted to the unsigned counterpart of the signed operand's type.
What confuses me even more is the fact that the following code:
printf("Size of int is %d bytes\n", sizeof(int));
printf("Size of short is %d bytes\n", sizeof(short));
printf("Size of long is %d bytes\n", sizeof(long));
printf("Size of long long is %d bytes\n", sizeof(long long));
unsigned int x = 4000000000;
signed int y = -1;
signed long z = x + y;
printf("x + y = %ld\n", z);
produces the following output:
Size of int is 4 bytes
Size of short is 2 bytes
Size of long is 8 bytes
Size of long long is 8 bytes
x + y = 3999999999
As I understand the standard, y should have been converted to unsigned int, leading to an incorrect result. The result is correct, which leads me to assume that no conversion happens in this case. Why? I would be grateful for any clarification on this matter. Thank you.
(I would also appreciate someone telling me that I won't ever need this kind of arcane knowledge in real life, even if it's not true - just for the peace of mind.)
A conversion of a negative signed value to unsigned yields the binary-complement value which corresponds to it. Adding this unsigned value wraps around, which results in exactly the same value as adding the negative value would.
By the way: that's the trick by which the processor does a subtraction (or am I wrong in these modern times?).
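As a sketch of that trick (assuming the usual two's complement representation, where -b has the same bit pattern as ~b + 1):
#include <stdio.h>
int main(void) {
    unsigned int a = 5, b = 3;
    /* subtracting b is the same as adding the two's complement of b */
    unsigned int diff = a + (~b + 1u);
    printf("%u\n", diff);  /* prints 2, the same as a - b */
    return 0;
}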
If I read the standard correctly, this is not true
You're correct. CCC is wrong.
As I understand the standard, y should have been converted to unsigned int,
It did.
The result is correct, which leads me to assume that no conversion happens in this case.
You assume wrong.
Since -1 is not representable as an unsigned number, it simply converts to the representable number that is congruent with -1 modulo the number of representable values. When you add this to x, the mathematical result is larger than any representable value, so again the result is the representable value that is congruent with the exact sum modulo the number of representable values. Because of the magic of how modular arithmetic works, you get the correct result.
You have an addition of an unsigned int and a signed int, so the conversion rank is the same for both operands. So the second rule will apply (emphasis mine):
[Otherwise,] if the unsigned operand's conversion rank is greater or equal to the conversion rank of the signed operand, the signed operand is converted to the unsigned operand's type.
-1 is converted to an unsigned type by adding to it the smallest power of two greater than the highest unsigned int (this is in fact its representation in 2's complement);
that number is added to 4000000000, and the result is the sum modulo that power of two (in effect discarding the carry out of the top bit);
and you just get the expected result.
The standard mandates conversion to the unsigned type because unsigned overflow is defined by the standard, while signed overflow is not.
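Worked through with the actual numbers (a sketch assuming a 32-bit unsigned int):
#include <stdio.h>
int main(void) {
    unsigned int x = 4000000000u;
    signed int y = -1;
    /* y converts to -1 + 2^32 = 4294967295;
       4000000000 + 4294967295 = 8294967295, which wraps modulo 2^32:
       8294967295 - 4294967296 = 3999999999, the mathematically correct sum */
    printf("%u\n", x + y);  /* prints 3999999999 */
    return 0;
}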
I would also appreciate someone telling me that I won't ever need this
kind of arcane knowledge in real life
There is one implicit promotion not addressed above that is a common cause of bugs:
unsigned short x = 0xffff;
unsigned short y = 0x0001;
if ((x + y) == 0)
The result of this comparison is false, because the operands of arithmetic operations are implicitly promoted to at least int: x + y is computed as the int value 65536, not as an unsigned short that wraps to 0.
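If the 16-bit wrap-around is what you want, one hedged fix is to convert the promoted result back to unsigned short before comparing:
#include <stdio.h>
int main(void) {
    unsigned short x = 0xffff;
    unsigned short y = 0x0001;
    /* x + y is computed as the int 65536; converting it back to
       unsigned short truncates it to 0, restoring the wrap-around */
    printf("%d\n", (unsigned short)(x + y) == 0);  /* prints 1 */
    return 0;
}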
From the output of the first print statement in your code snippet, it is evident that, since the size of int is 4 bytes, unsigned int values can range from 0 to 4,294,967,295 (2^32 − 1).
The declaration below is hence perfectly valid:
unsigned int x = 4000000000;
Remember here that declaring x as an unsigned integer gives it twice the positive range of its signed counterpart, but both occupy exactly the same number of bits, and hence conversions between them are possible.
signed int y = -1;
signed long z = x + y;
printf("x + y = %ld\n", z);
Hence for the rest of the code above, it doesn't matter whether y started out signed or unsigned: because the other operand is an unsigned integer, the signed operand is converted to its corresponding unsigned version by adding UINT_MAX + 1 to it.
Here UINT_MAX stands for the largest unsigned int value, so UINT_MAX + 1 is 2^32. The intermediate sum is larger than any value we can store, so it is reduced modulo 2^32 to obtain the correct unsigned int.
However, if you were to convert x to signed instead:
signed int x = 4000000000;
int y = -1;
long z = x + y;
printf("x + y = %ld\n", z);
you would get what you might have expected:
x + y = -294967297
I was trying to get the maximum length of a string from an array of strings.
I wrote this code:
char a[100][100] = {"solol","a","1234567","123","1234"};
int max = -1;
for(int i=0;i<5;i++)
if(max<strlen(a[i]))
max=strlen(a[i]);
cout<<max;
The output it gives is -1.
But when I initialize max with 0 instead of -1, the code works fine. Why is that?
The function strlen returns an unsigned integral type (size_t), so when you compare it with max, which is signed, max gets converted to unsigned, and that -1 turns into something like 0xffffffff (the actual value depends on your architecture/compiler), which is larger than the length of any of those strings.
Change the type of max to size_t, and never compare unsigned with signed integrals.
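A minimal sketch of the fixed loop, following that advice:
#include <iostream>
#include <cstring>
using namespace std;

int main() {
    char a[100][100] = {"solol","a","1234567","123","1234"};
    size_t max = 0;                 // unsigned, matching strlen's return type
    for (int i = 0; i < 5; i++)
        if (max < strlen(a[i]))     // size_t vs size_t: no surprise conversion
            max = strlen(a[i]);
    cout << max;                    // prints 7
    return 0;
}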
size_t is an unsigned integer type.
From C++ Standard #4.6:
A prvalue of an integer type other than bool, char16_t, char32_t, or wchar_t whose integer conversion
rank (4.15) is less than the rank of int can be converted to a prvalue of type int if int can represent all the
values of the source type; otherwise, the source prvalue can be converted to a prvalue of type unsigned int.
The value of max is -1, which, converted to an unsigned integer type, represents the maximum value that type can hold. In the expression max<strlen(a[i]), max is converted to the unsigned type of strlen's result and compared with it. Hence the condition max<strlen(a[i]) will never be true for the given input, and the value of max will not change.
The longest string in the array of strings a has length 7. When you initialize max with 0 (or any non-negative value), the expression max<strlen(a[i]) evaluates to true for any string whose length is greater than the value of max. Hence you will get the expected output.
strlen returns an unsigned type (size_t), and when doing the comparison, -1 gets converted to the maximum value that unsigned type can hold.
To fix the code, change the max definition to unsigned int max = 0
Implicit type conversion (coercion)
When evaluating expressions, the compiler breaks each expression down into individual subexpressions. The arithmetic operators require their operands to be of the same type. To ensure this, the compiler uses the following rules:
If an operand is an integer that is narrower than an int, it undergoes integral promotion (as described above) to int or unsigned int.
If the operands still do not match, then the compiler finds the highest priority operand and implicitly converts the other operand to match.
The priority of operands is as follows:
long double (highest)
double
float
unsigned long long
long long
unsigned long
long
unsigned int
int (lowest)
In your case one of the operands has type int and the other has type size_t, so max was converted to type size_t, and the bit pattern of -1, reinterpreted as size_t, is the biggest possible size_t value.
You might also want to look in the following parts of the Standard [conv.prom],
[conv.integral], [conv.rank].
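You can confirm the resulting type with a small compile-time sketch (this assumes C++17 for std::is_same_v, and a platform where size_t has at least the rank of int, which holds on all common platforms):
#include <cstddef>
#include <type_traits>

// compile-time check only: int combined with size_t yields size_t
static_assert(std::is_same_v<decltype(-1 + std::size_t{0}), std::size_t>,
              "int + size_t yields size_t on this platform");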
I think this might work for you.
#include <algorithm>
#include <cstring>

const size_t ARRAYSIZE = 100;
char stringArray[ARRAYSIZE][100] = {"solol","a","1234567","123","1234"};

// max_element takes a half-open range [first, last), so pass ARRAYSIZE, not ARRAYSIZE - 1
auto longerString = std::max_element(stringArray, stringArray + ARRAYSIZE,
    [](const auto& s1, const auto& s2){ return strlen(s1) < strlen(s2); });
size_t maxSize = strlen(longerString[0]);
This works for me: it returns the longest char array, then you just do a strlen over that element and you're done. For your example, it returns 7.
Hope it helps.
-2147483648 is the smallest value of a 32-bit integer type, but it seems that it overflows in the if(...) statement:
if (-2147483648 > 0)
std::cout << "true";
else
std::cout << "false";
This will print true in my testing. However, if we cast -2147483648 to int, the result will be different:
if (int(-2147483648) > 0)
std::cout << "true";
else
std::cout << "false";
This will print false.
I'm confused. Can anyone give an explanation on this?
Update 02-05-2012:
Thanks for your comments, in my compiler, the size of int is 4 bytes. I'm using VC for some simple testing. I've changed the description in my question.
There are a lot of very good replies in this post. AndreyT gave a very detailed explanation of how the compiler behaves on such input, and how this minimum integer was implemented. qPCR4vir, on the other hand, gave some related "curiosities" and how integers are represented. So impressive!
-2147483648 is not a "number". C++ language does not support negative literal values.
-2147483648 is actually an expression: a positive literal value 2147483648 with unary - operator in front of it. Value 2147483648 is apparently too large for the positive side of int range on your platform. If type long int had greater range on your platform, the compiler would have to automatically assume that 2147483648 has long int type. (In C++11 the compiler would also have to consider long long int type.) This would make the compiler evaluate -2147483648 in the domain of the larger type and the result would be negative, as one would expect.
However, apparently in your case the range of long int is the same as range of int, and in general there's no integer type with greater range than int on your platform. This formally means that positive constant 2147483648 overflows all available signed integer types, which in turn means that the behavior of your program is undefined. (It is a bit strange that the language specification opts for undefined behavior in such cases, instead of requiring a diagnostic message, but that's the way it is.)
In practice, taking into account that the behavior is undefined, 2147483648 might get interpreted as some implementation-dependent negative value which happens to turn positive after having unary - applied to it. Alternatively, some implementations might decide to attempt using unsigned types to represent the value (for example, in C89/90 compilers were required to use unsigned long int, but not in C99 or C++). Implementations are allowed to do anything, since the behavior is undefined anyway.
As a side note, this is the reason why constants like INT_MIN are typically defined as
#define INT_MIN (-2147483647 - 1)
instead of the seemingly more straightforward
#define INT_MIN -2147483648
The latter would not work as intended.
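A hedged illustration of the difference, patterned on the VC2012 behaviour quoted below (on a compiler where the unsuffixed literal gets type long long, such as a C99 or C++11 one with a 64-bit long long, both macros compare the same way, so this sketch only reproduces the problem on old 32-bit-long implementations):
#include <stdio.h>

#define BAD_INT_MIN  (-2147483648)      /* unary minus applied to a literal that is not an int */
#define GOOD_INT_MIN (-2147483647 - 1)  /* a genuine signed value of -2147483648 */

int main(void) {
    printf("%d\n", BAD_INT_MIN > 0);    /* 1 under VC2012, where the literal stays unsigned */
    printf("%d\n", GOOD_INT_MIN > 0);   /* 0, as intended */
    return 0;
}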
The compiler (VC2012) promotes the literal to the "minimum" integer type that can hold the value. In the first case, signed int (and long int) cannot hold 2147483648 (before the sign is applied), but unsigned int can: 2147483648 gets type unsigned int.
In the second case you force a conversion to int from that unsigned value.
const bool i= (-2147483648 > 0) ; // --> true
warning C4146: unary minus operator applied to unsigned type, result still unsigned
Here are related "curiosities":
const bool b= (-2147483647 > 0) ; // false
const bool i= (-2147483648 > 0) ; // true : result still unsigned
const bool c= ( INT_MIN-1 > 0) ; // true :'-' int constant overflow
const bool f= ( 2147483647 > 0) ; // true
const bool g= ( 2147483648 > 0) ; // true
const bool d= ( INT_MAX+1 > 0) ; // false:'+' int constant overflow
const bool j= ( int(-2147483648)> 0) ; // false :
const bool h= ( int(2147483648) > 0) ; // false
const bool m= (-2147483648L > 0) ; // true
const bool o= (-2147483648LL > 0) ; // false
C++11 standard:
2.14.2 Integer literals [lex.icon]
…
An integer literal is a sequence of digits that has no period or
exponent part. An integer literal may have a prefix that specifies its
base and a suffix that specifies its type.
…
The type of an integer literal is the first of the corresponding list
in which its value can be represented.
If an integer literal cannot be represented by any type in its list
and an extended integer type (3.9.1) can represent its value, it may
have that extended integer type. If all of the types in the list for
the literal are signed, the extended integer type shall be signed. If
all of the types in the list for the literal are unsigned, the
extended integer type shall be unsigned. If the list contains both
signed and unsigned types, the extended integer type may be signed or
unsigned. A program is ill-formed if one of its translation units
contains an integer literal that cannot be represented by any of the
allowed types.
And these are the promotions rules for integers in the standard.
4.5 Integral promotions [conv.prom]
A prvalue of an integer type other than bool, char16_t, char32_t, or
wchar_t whose integer conversion rank (4.13) is less than the rank of
int can be converted to a prvalue of type int if int can represent all
the values of the source type; otherwise, the source prvalue can be
converted to a prvalue of type unsigned int.
In short, 2147483648 overflows to -2147483648, and (-(-2147483648) > 0) is true.
In binary, 2147483648 is 1000 0000 0000 0000 0000 0000 0000 0000 (a one followed by 31 zeroes).
In addition, in the case of signed binary arithmetic, the most significant bit ("MSB") is the sign bit.
Because -2147483648 is actually 2147483648 with negation (-) applied to it, the number isn't what you'd expect. It is actually the equivalent of this pseudocode: operator -(2147483648)
Now, assuming your compiler has sizeof(int) equal to 4 and CHAR_BIT defined as 8, 2147483648 overflows the maximum signed value of an integer (2147483647). So what is the maximum plus one? Let's work that out with a 4-bit, two's complement integer: the maximum is 7 (0111), so the maximum plus one is 8.
Wait! 8 overflows our 4-bit integer! What do we do? Use its unsigned representation, 1000, and interpret the bits as a signed integer. This representation leaves us with -8; applying the two's complement negation to it results in 8, which, as we all know, is greater than 0.
This is why <limits.h> (and <climits>) commonly define INT_MIN as ((-2147483647) - 1), so that the maximum signed integer (0x7FFFFFFF) is negated (0x80000001), then decremented (0x80000000).
I understand that, regarding implicit conversions, if we have an unsigned type operand and a signed type operand, and the type of the unsigned operand is the same as (or larger than) the type of the signed operand, the signed operand will be converted to unsigned.
So:
unsigned int u = 10;
signed int s = -8;
std::cout << s + u << std::endl;
//prints 2 because it will convert `s` to `unsigned int`, now `s` has the value
//4294967288, then it will add `u` to it, which is an out-of-range value, so,
//in my machine, `4294967298 % 4294967296 = 2`
What I don't understand - I read that if the signed operand has a larger type than the unsigned operand:
if all values in the unsigned type fit in the larger type then the unsigned operand is converted to the signed type
if the values in the unsigned type don't fit in the larger type, then the signed operand will be converted to the unsigned type
so in the following code:
signed long long s = -8;
unsigned int u = 10;
std::cout << s + u << std::endl;
u will be converted to signed long long because int values can fit in signed long long??
If that's the case, in what scenario the smaller type values won't fit in the larger one?
Relevant quote from the Standard:
5 Expressions [expr]
10 Many binary operators that expect operands of arithmetic or
enumeration type cause conversions and yield result types in a similar
way. The purpose is to yield a common type, which is also the type of
the result. This pattern is called the usual arithmetic conversions,
which are defined as follows:
[2 clauses about equal types or types of equal sign omitted]
— Otherwise, if the operand that has unsigned integer type has rank
greater than or equal to the rank of the type of the other operand,
the operand with signed integer type shall be converted to the type of
the operand with unsigned integer type.
— Otherwise, if the type of
the operand with signed integer type can represent all of the values
of the type of the operand with unsigned integer type, the operand
with unsigned integer type shall be converted to the type of the
operand with signed integer type.
— Otherwise, both operands shall be
converted to the unsigned integer type corresponding to the type of
the operand with signed integer type.
Let's consider the following 3 example cases for each of the 3 above clauses on a system where sizeof(int) < sizeof(long) == sizeof(long long) (easily adaptable to other cases)
#include <iostream>
signed int s1 = -4;
unsigned int u1 = 2;
signed long int s2 = -4;
unsigned int u2 = 2;
signed long long int s3 = -4;
unsigned long int u3 = 2;
int main()
{
std::cout << (s1 + u1) << "\n"; // 4294967294
std::cout << (s2 + u2) << "\n"; // -2
std::cout << (s3 + u3) << "\n"; // 18446744073709551614
}
First clause: types of equal rank, so the signed int operand is converted to unsigned int. This entails a value transformation which (using two's complement) gives the printed value.
Second clause: signed type has higher rank, and (on this platform!) can represent all values of the unsigned type, so unsigned operand is converted to signed type, and you get -2
Third clause: signed type again has higher rank, but (on this platform!) cannot represent all values of the unsigned type, so both operands are converted to unsigned long long, and after the value-transformation on the signed operand, you get the printed value.
Note that if the unsigned operand were large enough (e.g. 6 in these examples), the end result would be 2 for all 3 examples because of unsigned integer wrap-around.
(Added) Note that you get even more unexpected results when you do comparisons on these types. Let's consider the above example 1 with <:
#include <iostream>
signed int s1 = -4;
unsigned int u1 = 2;
int main()
{
std::cout << (s1 < u1 ? "s1 < u1" : "s1 !< u1") << "\n"; // "s1 !< u1"
std::cout << (-4 < 2u ? "-4 < 2u" : "-4 !< 2u") << "\n"; // "-4 !< 2u"
}
Since 2u is made unsigned explicitly by the u suffix, the same rules apply. And the result is probably not what you expect: when you write -4 < 2u in C++, you are not comparing -4 with 2.
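If your compiler supports C++20, the std::cmp_* helpers in <utility> compare the mathematical values regardless of signedness; a minimal sketch:
#include <utility>

static_assert(std::cmp_less(-4, 2u), "compares the mathematical values");
static_assert(!(-4 < 2u), "the built-in < converts -4 to a huge unsigned value");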
A signed int does not fit into unsigned long long (its negative values cannot be represented). So you will have this conversion:
signed int -> unsigned long long.
Note that the C++11 standard doesn't talk about the larger or smaller types here, it talks about types with lower or higher rank.
Consider the case of long int and unsigned int where both are 32-bit. The long int has a larger rank than the unsigned int, but since long int and unsigned int are both 32-bit, long int can't represent all the values of unsigned int.
Therefore we fall into the last case (C++11, 5 [expr] p9):
Otherwise, both operands shall be converted to the unsigned integer type corresponding to the
type of the operand with signed integer type.
This means that both the long int and the unsigned int will be converted to unsigned long int.
Why is d not equal to b in this example?
unsigned int z = 176400;
long a = -4;
long b = a*z/1000; //b=4294261
long c = a*z; // c=-705600
long d = c/1000; // d =-705
I use Visual Studio 2008, Windows XP, Core 2 Duo.
Thanks.
It looks like you are using a platform where int and long have the same size. (I've inferred this from the fact that if long were able to hold all the valid values of unsigned int, you would not see the behaviour that you are seeing.)
This means that in the expression a*z, both a and z are converted to unsigned long and the result has type unsigned long. (ISO/IEC 14882:2011, 5 [expr] / 9 ... "Otherwise, both operands shall be converted to the unsigned integer type corresponding to the type of the operand with signed integer type.")
c is the result of converting this expression from unsigned long to long, and in your case this gives an implementation-defined result (which happens to be negative), because the positive value of a*z is not representable in a signed long. In c/1000, 1000 is converted to long, and long division is performed (no pun intended), resulting in a long (which happens to be negative) that is stored to d.
In the expression a*z/1000, 1000 (an expression of type int) is converted to unsigned long and the division is performed between two unsigned longs, giving a positive result. This result is representable as a long, so the value is unchanged on converting to long and storing to b.
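A hedged fix for the original snippet is to keep the arithmetic signed by widening to long long, which is 64-bit even on this platform, so every unsigned int value is representable (d2 is just an illustrative name):
#include <stdio.h>

int main(void) {
    unsigned int z = 176400;
    long a = -4;
    /* the cast widens the arithmetic to signed long long,
       so no signed-to-unsigned conversion takes place */
    long long d2 = a * (long long)z / 1000;
    printf("%lld\n", d2);  /* prints -705 */
    return 0;
}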