Can someone explain what happens when size_t, or any other type identifier, is wrapped in parentheses? I know it is the old typecast syntax, but in this context I don't follow what is happening.
I've seen it for defining the max size of a type as:
size_t max_size = (size_t)-1;
This code (unnecessarily) casts -1 to size_t. The most probable intent was to get the largest possible value of size_t on this system.
Although this code doesn't have undefined behavior, it is ugly: in C++ you should use std::numeric_limits<size_t>::max(), and in C the SIZE_MAX macro, both of which exist for exactly this purpose of getting the largest size_t value.
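For illustration, all three spellings below should produce the same value (a minimal sketch; SIZE_MAX needs <stdint.h> in C99 or <cstdint> in C++11):

#include <cassert>
#include <cstddef>
#include <cstdint>   // SIZE_MAX
#include <limits>

int main()
{
    std::size_t a = (std::size_t)-1;                          // the cast from the question
    std::size_t b = std::numeric_limits<std::size_t>::max();  // idiomatic C++
    std::size_t c = SIZE_MAX;                                  // idiomatic C (and C++11)
    assert(a == b && b == c);
}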
(size_t)-1 is in fact equivalent to size_t(-1).
See also the following question: c cast syntax styles
Some library functions intentionally return (size_t)-1 to indicate an error condition, for example the iconv function from the GNU libiconv library. I assume there is some good reason why these functions don't return a signed ssize_t, which would allow you to check for -1 directly.
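A typical check looks roughly like this (a sketch only; the encoding names and buffers are made up, and the exact const-ness of the input pointer parameter varies between iconv implementations):

#include <iconv.h>
#include <cstdio>
#include <cstring>

int main()
{
    iconv_t cd = iconv_open("UTF-8", "ISO-8859-1");
    if (cd == (iconv_t)-1) { std::perror("iconv_open"); return 1; }

    char in[] = "caf\xe9";                 // "café" encoded in Latin-1
    char out[16];
    char *inp = in, *outp = out;
    size_t inleft = std::strlen(in), outleft = sizeof out;

    // iconv reports failure by returning (size_t)-1, so the result has to be
    // compared against that value rather than against a plain -1.
    size_t rc = iconv(cd, &inp, &inleft, &outp, &outleft);
    if (rc == (size_t)-1)
        std::perror("iconv");

    iconv_close(cd);
}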
Related
I'm looking for the exact reason why this conversion gives a warning in C++ but not in C. Is it connected to the fact that C++ is strongly typed and C is weakly typed?
Is it because the type in C can be determined at run time, so the compiler does not point out a warning?
Thank you.
The presence or absence of a warning on a conversion from double to int has nothing to do with any difference between C and C++.
A warning (and you didn't tell us what the warning looks like; please update the question with that information) is probably valid. If the truncated double value is outside the representable range of int, the behavior is undefined. If it's within the range, but not mathematically equal to an integer, then the conversion will lose information.
Some compilers will warn about things like this, others won't -- and a given compiler may or may not issue a warning depending on what options you specify.
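For example, with g++ and -Wconversion enabled (a hypothetical snippet, not your actual code), the implicit conversion is typically flagged while the explicit cast is not:

void f()
{
    double d = 3.7;
    int i = d;                    // truncates to 3; -Wconversion typically warns here
    int j = static_cast<int>(d);  // the explicit cast documents the intent and usually silences it
    // if d were far outside the range of int (say 1e30), the conversion would be undefined behavior
    (void)i; (void)j;
}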
I get this warning when compiling a source file in C++ with GCC on FreeBSD.
Could someone explain and help me solve the issue I'm having?
Following is a link to the entire source code; it was placed on pastebin because it contains 7,000 lines of code: Source char.cpp
Here is the warning message:
In member function 'void CHARACTER::PointChange(BYTE, int, bool, bool)':
Expanding on KaliG's answer:
On line 3065 you are declaring:
DWORD exp = GetExp();
Now what is a DWORD? It stands for "double word", where a "word" is 16 bits on this platform (Win32), so a DWORD is 32 bits. It is a typedef for an unsigned integer type.
http://en.wikipedia.org/wiki/Word_%28computer_architecture%29
The other variable, amount, is an argument declared with the (signed) type int.
So you are comparing a signed and an unsigned integer, which causes the warning.
You can solve this by casting -amount to an unsigned int (or DWORD), since the amount < 0 check already guarantees that -amount is positive.
So change the line to:
if (amount < 0 && exp < (DWORD) -amount)
This should work - but I have no idea how your method works other than that.
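Here is the same situation boiled down to a few lines (the names are mine, not taken from the original source):

typedef unsigned int DWORD;   // stand-in for the Win32-style typedef discussed above

void point_change(int amount)
{
    DWORD exp = 1000;                           // stand-in for GetExp()
    // if (amount < 0 && exp < -amount)         // warns: comparison between signed and unsigned
    if (amount < 0 && exp < (DWORD)-amount)     // fine: -amount is known to be positive here
    {
        // handle the "not enough exp" case
    }
}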
Sidenote: Hungarian notation is really ghastly stuff, so you should dig into what the different type names they use actually are. http://en.wikipedia.org/wiki/Hungarian_notation
Sidenote 2: Don't use ALLCAPS class names... developers are used to thinking that such identifiers are constants, so you will confuse other people who read your code.
Sidenote 3: Read up on two's complement to understand what the ALU (http://en.wikipedia.org/wiki/Arithmetic_logic_unit) inside the CPU is actually doing: http://en.wikipedia.org/wiki/Two%27s_complement
From the warning, I would say it is because 'exp' is unsigned while 'amount' is signed (or the other way around), hence the comparison warning.
Please post the lines of code where you declare these variables. :)
(Verify whether you declared either of these two variables as signed or unsigned by mistake.)
I use g++ and I have defined a custom allocator whose size_type is a byte (unsigned char).
I am using it with basic_string to create custom strings.
The "basic_string.tcc" code behaves erroneously because in the code of
_S_create(size_type __capacity, size_type __old_capacity, const _Alloc& __alloc)
the code checks for
const size_type __extra = __pagesize - __adj_size % __pagesize;
But all the arithmetic are byte arithmetic and so __pagesize that should have a value 4096, becomes 0 (because 4096 is a multiple of 256) and we have a "division by 0" exception (the code hangs).
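To illustrate the truncation in isolation (a standalone snippet, not the library code itself):

#include <iostream>

int main()
{
    typedef unsigned char size_type;                  // what my custom allocator uses
    size_type pagesize = 4096;                        // 4096 % 256 == 0, so the value wraps to 0
    std::cout << static_cast<int>(pagesize) << '\n';  // prints 0
    // any later "__adj_size % __pagesize" is then a modulo by zero
}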
The question isn't what I should do, but how I could request a correction to the above code, and from whom? (I may implement the corrections myself.)
Before you can request or suggest a change to something like that, you have to establish a strong case that there is indeed a problem that needs to be fixed. In my view there probably is not.
The question is: under which circumstances would it be legitimate (or useful) to define a size_type as unsigned char? I am not aware of anything in the standard that specifically disallows this choice. It is defined as
unsigned integer type - a type that can represent the size of the largest object in the allocation model.
And unsigned char is definitely an unsigned integer type as per s3.9.1. Interesting.
So is it useful? Clearly you seem to think so, but I'm not sure your case is strongly made out. You could work on providing evidence that this is an issue worth resolving.
So it seems to me the process is:
Establish whether unsigned char is intended to be included as a valid choice in the standard, or whether it should be excluded, or was just overlooked.
Raise a 'standards non-compliance' issue with the team for each compiler that has the problem, providing good reasoning and a repro case.
Consider submitting a patch, if this is something within your ability to fix.
Or you could just use short unsigned int instead. I would.
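Something along these lines, for example (only a sketch with names of my own choosing, and I haven't verified it against every basic_string implementation):

#include <cstddef>
#include <new>
#include <string>

template <class T>
struct small_allocator {
    typedef T              value_type;
    typedef T*             pointer;
    typedef const T*       const_pointer;
    typedef T&             reference;
    typedef const T&       const_reference;
    typedef unsigned short size_type;        // wide enough to hold a 4096-byte page size
    typedef short          difference_type;

    template <class U> struct rebind { typedef small_allocator<U> other; };

    small_allocator() {}
    template <class U> small_allocator(const small_allocator<U>&) {}

    pointer address(reference x) const { return &x; }
    const_pointer address(const_reference x) const { return &x; }

    pointer allocate(size_type n, const void* = 0)
    { return static_cast<pointer>(::operator new(n * sizeof(T))); }

    void deallocate(pointer p, size_type) { ::operator delete(p); }

    size_type max_size() const
    { return static_cast<size_type>(static_cast<size_type>(-1) / sizeof(T)); }

    void construct(pointer p, const T& v) { new (static_cast<void*>(p)) T(v); }
    void destroy(pointer p) { p->~T(); }
};

template <class T, class U>
bool operator==(const small_allocator<T>&, const small_allocator<U>&) { return true; }
template <class T, class U>
bool operator!=(const small_allocator<T>&, const small_allocator<U>&) { return false; }

typedef std::basic_string<char, std::char_traits<char>, small_allocator<char> > small_string;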
So on a fairly regular basis, it seems, I find that the type of some constant I declared (typically an integer, but occasionally other things like strings) is not the ideal type in the context where it is being used, requiring a cast or resulting in a compiler warning about the implicit conversion.
E.g. in one piece of code I had something like the below, and got a signed/unsigned comparison issue.
static const int MAX_FOO = 16;
...
if (container.size() > MAX_FOO) {...}
I have been thinking of just always using the smallest / most basic type allowed for a given constant (e.g. char, unsigned char, or const char* rather than, say, int, size_t, or std::string), but I was wondering if this is really a good idea, and if there are places where it would potentially be a really bad idea, e.g. code using the 'auto' keyword (or perhaps templates) deducing a too-small type and overflowing on what appeared to be a safe operation?
Going for the smallest type that can hold the initial value is a bad habit. That invites overflow.
Always code for the most general (which according to Murphy's Law is the worst) case. As templates generalize things, that makes the worst case a lot worse. Be prepared for bizarre kinds of overflows and avoid negative numbers while unsigned types are in the neighborhood.
std::size_t is the best choice for the size or length of anything, for the reason you mentioned. But subtract pointers and you get a std::ptrdiff_t instead. Personally I recommend casting the result of such a subtraction to std::size_t if it can be guaranteed to be non-negative.
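A small sketch of both points (the names are made up):

#include <cstddef>
#include <vector>

static const std::size_t MAX_FOO = 16;     // declaring the limit as size_t keeps the
                                            // comparison below unsigned vs. unsigned

bool too_big(const std::vector<int>& container)
{
    return container.size() > MAX_FOO;      // no signed/unsigned warning here
}

std::size_t distance_between(const int* first, const int* last)
{
    std::ptrdiff_t d = last - first;        // pointer subtraction yields ptrdiff_t
    return static_cast<std::size_t>(d);     // cast only when d is guaranteed non-negative
}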
char * does not own its string in the C++ sense as std::string does, so the latter is the more conservative choice.
This question is so broad that no more specific advice can be made…
I am using Visual Studio 2008.
For the code below
double main()
{
}
I get the error:
error C3874: return type of 'main' should be 'int' instead of 'double'
But if I use the code below
char main()
{
}
No errors.
After the program runs and exits, the output window displays:
The program '[5856] test2.exe: Native' has exited with code -858993664 (0xcccccc00).
Question: Is the compiler doing an implicit cast from the default return value of zero (an int) to char?
How did the code 0xcccccc00 get generated?
It looks like the last byte of that code is the actual returned value. Where is the 0xcccccc coming from?
The correct way to do it, per the C++ standard is:
int main()
{
...
}
Don't change the return type to anything else or your code will not be C++, and you're just playing with compiler-specific functionality. Those cccccc values in your example are just uninitialized bytes (which the MSVC debug runtime fills with 0xCC) being returned.
The value returned from the main function becomes the exit status of the process, though the C standard ascribes specific meaning to only two values: EXIT_SUCCESS (traditionally zero) and EXIT_FAILURE. The meaning of other possible return values is implementation-defined; there is no standard for how non-zero codes are interpreted.
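A minimal sketch of the portable way to report success or failure:

#include <cstdlib>

int main()
{
    bool ok = true;                           // stand-in for whatever the program actually does
    // 0 and EXIT_SUCCESS report success; EXIT_FAILURE reports failure.
    return ok ? EXIT_SUCCESS : EXIT_FAILURE;
}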
You may refer to a useful post:
What should main() return in C and C++?
Yet another MSVC extension/bug!
The answer to your first question is sort of yes. A char is essentially a very small integral type, so the compiler is being (extremely) lenient. Double isn't acceptable because it's not an integral type. The 0xCCCCCC is memory that never got initialized (except for the purposes of debugging). Since a char is only one byte (two hex digits), the conversion set only the last 8 bits to 0 and left the upper 24 bits untouched. What an odd and undesirable compiler trick.
About the main function, §3.6.1/2: "It shall have a return type of type int, but otherwise its type is implementation-defined."
As I understand it, anything the standard says 'shall' hold AND that the code does not adhere to is a condition the compiler is required to diagnose, unless the Standard specifically says that such a diagnostic is not required.
So I guess VS has a bug if it allows such code.
The main function is supposed to return an int. Not doing that means you're out in undefined territory. Don't forget to wave at the standard on your way past. Your char return probably works because char can be easily converted to an int. Doubles certainly cannot: not only are they longer (double the length), but they're floating point, which means you'll have 1s in wonky places.
Short answer: don't do that.
It is probably because char will implicitly convert to an int, whereas double won't, as there would be data loss.
(See here: http://msdn.microsoft.com/en-us/library/y5b434w4%28v=VS.71%29.aspx for more info)
However, you don't see the conversion problem because the compiler catches the worst sin (as stated in the other answers) of using a non-standard return type.