In WaitForSingleObject, is timeout=INFINITE the same as timeout=-1?

I'm working with some Visual Basic for Applications (VB 6.3) code written by someone else, and they've written:
WaitForSingleObject SEI.hProcess, -1
The process this appears in is supposed to return some data in a text box; sometimes it fails to return anything, and I think it's because of this, possibly because it's timing out. Is that code the same as:
WaitForSingleObject SEI.hProcess, INFINITE
???
Thanks for your help.

The timeout for WaitForSingleObject is actually a DWORD, which is an unsigned 32-bit integer. INFINITE is defined as 0xFFFFFFFF, and converting -1 to an unsigned 32-bit type is well-defined: the value wraps modulo 2^32, yielding exactly 0xFFFFFFFF.
Is that code the same as:
Effectively, yes.
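For illustration, here is a minimal compilable C++ sketch; kInfinite is just a local stand-in for the Windows INFINITE macro so the example needs no Windows headers:
#include <cstdint>
#include <cstdio>

int main() {
    const std::uint32_t kInfinite = 0xFFFFFFFF;  // how the Windows headers define INFINITE
    // Converting -1 to an unsigned 32-bit type wraps modulo 2^32,
    // which is well-defined and yields 0xFFFFFFFF.
    std::uint32_t timeout = static_cast<std::uint32_t>(-1);
    std::printf("%s\n", timeout == kInfinite ? "equal" : "not equal");  // prints "equal"
}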

So basically your question translates to whether WaitForSingleObject SEI.hProcess, -1 and WaitForSingleObject SEI.hProcess, INFINITE are the same or not.
As Reed's answer says, yes, they are the same. Note, however, that only -1 wraps to the maximum unsigned value: WaitForSingleObject SEI.hProcess, -3999 would instead wrap to 0xFFFFF061, a very long (roughly 49.7 days) but finite timeout, not INFINITE.
Now, does that mean you should use either interchangeably? No. You should always use the documented constant, INFINITE: it states your intent clearly and doesn't force the reader (or a different language binding) to reason about signed-to-unsigned conversion.

Related

Memory waste? If main() should only return 0 or 1, why is main declared with int and not short int or even char?

For example:
#include <stdio.h>

int main(void) /* Why int and not short int? - Waste of Memory */
{
    printf("Hello World!");
    return 0;
}
Why is main() conventionally defined with the int type, which occupies 4 bytes on a typical 32-bit system, if it usually returns only 0 or 1? Other types such as short int (2 bytes on 32-bit) or even char (1 byte on 32-bit) would be more memory-saving.
It is wasting memory space.
NOTE: The question is not a duplicate of the linked thread; its answers only address the return value itself, not its datatype, which is the explicit focus here.
The question is about both C and C++. If the answers differ between them, please mention which language you are addressing.
Calling conventions usually return a value in a register (for example, the AX register on Intel processors). The type int corresponds to the machine word, so no conversion is required; a byte corresponding to the type char, by contrast, would have to be widened to the machine word anyway.
And in fact, main can return any integer value.
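As a hypothetical illustration (these two functions are made up, not from the question), both return their value through the same full-width register on common calling conventions, so the narrower type buys nothing:
// Both values travel back in the same register (e.g. AL/EAX on x86);
// the char version saves no space and may even need widening at the call site.
char returns_char(void) { return 1; }
int  returns_int(void)  { return 1; }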
It's because of a machine that's half a century old.
Back in the day when C was created, an int was a machine word on the PDP-11 - sixteen bits - and it was natural and efficient to have main return that.
The "machine word" was the only type in the B language, which Ritchie and Thompson had developed earlier, and which C grew out of.
When C added types, not specifying one gave you a machine word - an int.
(It was very important at the time to save space, so not requiring the most common type to be spelled out was a Very Good Thing.)
So, since a B program started with
main()
and programmers are generally language-conservative, C did the same and returned an int.
There are two reasons I would not consider this a waste:
1. Practical use of a 4-byte exit code
If you want to return an exit code that describes an error precisely, you want more than 8 bits.
As an example you may want to group errors: the first byte could describe the vague type of error, the second byte could describe the function that caused the error, the third byte could give information about the cause of the error and the fourth byte describes additional debug information.
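A minimal sketch of that grouping scheme (the helper name and all field values are made up for illustration):
// Pack four one-byte fields into a single exit code, as described above.
// Keep the top byte below 0x80 so the result stays a non-negative int.
// Note: shells that only keep 8 bits will truncate this; the full value
// survives where the whole int is preserved (e.g. Windows).
int make_exit_code(unsigned type, unsigned func, unsigned cause, unsigned debug) {
    return static_cast<int>(type | (func << 8) | (cause << 16) | (debug << 24));
}

int main() {
    return make_exit_code(2, 14, 3, 0);  // arbitrary example values
}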
2. Padding
If you pass a single short or char, it will still be aligned to fit into a machine word, which is often 4 bytes/32 bits depending on the architecture. This is called padding, and it means that you will most likely still need 32 bits of memory to return a single short or char.
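A quick way to see that padding for yourself (the sizes assume a typical ABI where int is 4 bytes and 4-byte aligned):
#include <cstdio>

struct Padded {
    char c;  // 1 byte of data...
    int  i;  // ...but 3 padding bytes are inserted before this member
};

int main() {
    // Prints 8 on typical 32/64-bit ABIs, not 5; the exact value is
    // implementation-defined.
    std::printf("%zu\n", sizeof(Padded));
}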
The old-fashioned convention with most shells is to use the least significant 8 bits of int, not just 0 or 1. 16 bits is increasingly common due to that being the minimum size of an int allowed by the standard.
And what would the issue be with wasting space? Is the space really wasted? Is your computer so full of "stuff" that the remaining sizeof(int) * CHAR_BIT - 8 bits would make a difference? Could the architecture exploit that and use those remaining bits for something else? I very much doubt it.
So I wouldn't say the memory is at all wasted, since you get it back from the operating system when the program finishes. Perhaps extravagant? A bit like using a large wine glass for a small tipple, perhaps?
First: your assumption/statement that "it usually returns only 0 or 1" is itself wrong.
Usually the return code is expected to be 0 if no error occurred, but otherwise it can be any number representing a specific error, and most command-line programs use it that way. Many programs also return negative numbers.
There are, however, a few commonly used codes: https://www.tldp.org/LDP/abs/html/exitcodes.html; another SO member also points to a Unix header that contains some codes: https://stackoverflow.com/a/24121322/2331592
So, after all, it is not just a C or C++ type decision; there are historical reasons in how most operating systems work and how they expect programs to behave, and the languages have to support that, which C-like languages do by using int main(...).
Second: your conclusion "It is wasting memory space" is wrong.
Using an int instead of a shorter type does not involve any waste:
- Memory is usually handled in word-size units (what that means may depend on your architecture) anyway.
- Working with sub-word types involves computational overhead on some architectures (read: load the word, mask out unrelated bits; store: load the word, mask out the variable's bits, OR in the new value, write the word back).
- The memory is not wasted unless you use it. If you write return 0; no memory is used at that point. If you return myMemorySaving8bitVar; you use only 1 byte (most probably on the stack, if it is not optimized out entirely).
You're either working in or learning C, so I think it's a Real Good Idea that you are concerned with efficiency. However, there are a few things that need clarifying here.
First, the int data type is not and never was intended to mean "32 bits". The idea was that int would be the most natural binary integer type on the target machine, usually the size of a register.
Second, the return value from main() is meant to accommodate a wide range of implementations on different operating systems. A POSIX system uses an unsigned 8-bit return code. Windows uses 32 bits, which the CMD shell interprets as two's-complement signed. Another OS might choose something else.
And finally, if you're worried about memory "waste", that's an implementation issue that isn't even an issue in this case. Return codes from main are typically returned in machine registers, not in memory, so there is no cost or savings involved. Even if there were, saving 2 bytes in the run of a nontrivial program is not worth any developer's time.
The answer is "because it usually doesn't return only 0 or 1." I found this thread from software engineering community that at least partially answers your question. Here are the two highlights, first from the accepted answer:
An integer gives more room than a byte for reporting the error. It can be enumerated (return of 1 means XYZ, return of 2 means ABC, return of 3, means DEF, etc..) or used as flags (0x0001 means this failed, 0x0002 means that failed, 0x0003 means both this and that failed). Limiting this to just a byte could easily run out of flags (only 8), so the decision was probably to use an integer.
An interesting point is also raised by Keith Thompson:
For example, in the dialect of C used in the Plan 9 operating system main is normally declared as a void function, but the exit status is returned to the calling environment by passing a string pointer to the exits() function. The empty string denotes success, and any non-empty string denotes some kind of failure. This could have been implemented by having main return a char* result.
Here's another interesting bit from a unix.com forum:
(Some of the following may be x86 specific.)
Returning to the original question: Where is the exit status stored? Inside the kernel.
When you call exit(n), the least significant 8 bits of the integer n are written to a cpu register. The kernel system call implementation will then copy it to a process-related data structure.
What if your code doesn't call exit()? The c runtime library responsible for invoking main() will call exit() (or some variant thereof) on your behalf. The return value of main(), which is passed to the c runtime in a register, is used as the argument to the exit() call.
Related to the last quote, here's another from cppreference.com
5) Execution of the return (or the implicit return upon reaching the end of main) is equivalent to first leaving the function normally (which destroys the objects with automatic storage duration) and then calling std::exit with the same argument as the argument of the return. (std::exit then destroys static objects and terminates the program)
Lastly, I found this really cool example here (although the author of the post is wrong in saying that the result returned is the returned value modulo 512). After compiling and executing the following:
int main() {
return 42001;
}
on my* POSIX-compliant system, echo $? returns 17. That is because 42001 % 256 == 17, which shows that 8 bits of data are actually used. With that in mind, choosing int ensures that enough storage is available for passing the program's exit status information, because, as per this answer, compliance with the C++ standard guarantees that an int is at least 16 bits wide (and a byte can't be smaller than 8 bits, since char must be large enough to hold "the eight-bit code units of the Unicode UTF-8 encoding form").
EDIT:
*As Andrew Henle pointed out in the comment:
A fully POSIX compliant system makes the entire int return value available, not just 8 bits. See pubs.opengroup.org/onlinepubs/9699919799/basedefs/signal.h.html: "If si_code is equal to CLD_EXITED, then si_status holds the exit value of the process; otherwise, it is equal to the signal that caused the process to change state. The exit value in si_status shall be equal to the full exit value (that is, the value passed to _exit(), _Exit(), or exit(), or returned from main()); it shall not be limited to the least significant eight bits of the value."
I think this makes for an even stronger argument for the use of int over data types of smaller sizes.

std::queue::size() can return a huge number after pop() when size() == 0

I have the link here where I push(x) 10 ints, then pop() 11, and the size is not 0 (nor is an exception thrown) but a tremendous number (probably == std::numeric_limits<size_type>::max()). I assume this is the consequence of the internal representation simply doing a size-- and not checking for an already empty() case. This seems like a bug in the libstdc++ library.
http://coliru.stacked-crooked.com/a/27ae7f10855e6c23
It's called "Undefined behaviour". To save time in the implementation (not checking if it's already empty), the implementation will simply decrement despite there being "nothing to decrement". Don't do that (and on another implementation it may not do that, so definitely don't rely on it doing anything meaningful). Since it's undefined, it may also dial out to Australia on your modem, erase your hard-disk or cause the application to crash. Or something else...
It is incorrect to pop an empty queue. Doing so means what happens is not defined by the standard, and most implementations (in release/optimized builds) simply do broken things and do not check.
If you need a safe pop, try:
template <class Q>
void safe_pop(Q& q) {
    if (!q.empty())
        q.pop();
}
and use safe_pop(que); instead of que.pop();.
Probably the queue's size data member gets decremented in pop() even if the queue is empty. Most likely that data member has an unsigned integer type, and when you decrement an unsigned zero it just wraps around to the biggest representable value.
EDIT: confirmed, 18446744073709551615 is 0xFFFFFFFFFFFFFFFF in hexadecimal, which is the biggest value that can be represented in 8 bytes.
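A minimal sketch of that wraparound, independent of std::queue:
#include <cstddef>
#include <cstdio>

int main() {
    std::size_t size = 0;
    --size;  // unsigned arithmetic wraps modulo 2^N; this is well-defined
    // Prints 18446744073709551615 where size_t is 64 bits wide.
    std::printf("%zu\n", size);
}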
When you pop once more than you push, internally the size becomes -1. However, size_t is unsigned, so -1 wraps around to the maximum value of size_t. There is no intention to "fix" this, though, because the extra check would have a cost; unlike managed languages, C++ developers don't want anything extra that might cause even slight performance degradation :).
Here's related thread for GCC:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=55841

C++ Pointer as DWORD

In C++, can I simply cast a pointer to a DWORD?
MyClass * thing;
DWORD myPtr = (DWORD)thing;
Would that work?
You undoubtedly can do it.
Whether it would work will depend on the environment and what you want it to do.
On 32-bit Windows [1] (the most common place to see DWORD) it'll normally be fine. On 64-bit Windows (where you also see DWORD, but not nearly as much) it generally won't.
[1] Or, more accurately, when compiled as a 32-bit executable that will run as a 32-bit process, regardless of the actual copy of Windows you happen to run that on.
In Windows it's quite common to pass pointers this way, for example in window messages: LPARAM is a typedef for LONG_PTR and is quite often used to pass pointers to structures. You should use reinterpret_cast<DWORD_PTR>(thing) for the cast.
No. In a 64-bit process, a pointer is 64 bits but a DWORD is only 32 bits. Use a DWORD_PTR.
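A minimal sketch of the safe round trip (this assumes <windows.h> for the DWORD_PTR typedef; MyClass here just stands in for the question's type):
#include <windows.h>

struct MyClass { int x; };

int main() {
    MyClass obj{42};
    MyClass* thing = &obj;

    // DWORD_PTR is an unsigned integer as wide as a pointer on both
    // 32-bit and 64-bit Windows, so nothing is truncated.
    DWORD_PTR asInt = reinterpret_cast<DWORD_PTR>(thing);
    MyClass* back = reinterpret_cast<MyClass*>(asInt);
    return back == thing ? 0 : 1;  // round-trips exactly
}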
http://en.cppreference.com/w/cpp/language/explicit_cast
Read that, understand that, avoid C-style casts because they hide a lot.
It can be done, but it may make no sense: DWORD is 4 bytes, and a pointer (these days) is usually 8.
reinterpret_cast<DWORD&>(myPtr);
That should work, but it may be undefined behaviour or truncate the pointer; if anything will work, that will!
BTW, reinterpret_cast is the C++ way of saying "Trust me, my dear compiler, I know what I'm doing": it interprets the bits (0s and 1s) of one thing as another, regardless of how much sense that makes.
A legitimate use though is the famous 1/sqrt hack ;)
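For reference, here is a sketch of that hack, written with std::memcpy instead of a raw pointer cast so the bit reinterpretation stays well-defined in modern C++:
#include <cstdint>
#include <cstring>

// The famous Quake III fast inverse square root, approximately 1/sqrt(x).
float fast_rsqrt(float x) {
    std::uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);   // view the float's bit pattern
    bits = 0x5f3759df - (bits >> 1);       // the magic constant
    float y;
    std::memcpy(&y, &bits, sizeof y);      // back to float
    return y * (1.5f - 0.5f * x * y * y);  // one Newton-Raphson refinement
}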

c++ crashing on incrementing an unsigned long int

This one is WTF city.
The below program is crashing after a few thousand loops.
unsigned long int nTurn = 1;
bool quit = false;

int main() {
    while (!quit) {
        doTurn();  // defined elsewhere in the game
        ++nTurn;
    }
}
That's, of course, simplified from my game, but nTurn is at the moment used nowhere except in its own increment, and when I comment out the ++nTurn line, the program will reliably loop forever. Shouldn't it run into the millions?
WTF, stackoverflow?
Your problem is elsewhere.
Some other part of the program is reading from a wild pointer that ends up pointing to nTurn, and when this loop changes the value, the other code acts differently. Or there's a race condition, and the increment makes this loop take just a tiny bit longer so the race-y thing doesn't cause trouble. There are an infinite number of things you could have wrong elsewhere.
Can you run your program under valgrind? Some errors it won't find, but a lot it will.
May sound silly, but if I were looking at this, I'd output the nTurn variable and see whether it always crashes at the same value. Then initialize nTurn to that value and see whether that also causes the crash. You could also step through in a debugger and see what is happening with the various registers and so forth. Did you try different compilers?
I'd use a debugger to catch the fault and see the value of nTurn. Or, if you have a core dump from the crash, load it into the debugger to see the variable values at crash time.
One more point: could the problem be when nTurn wraps around and goes to zero?
++nTurn can't be the source of the crash directly. You may have some sort of buffer overflow causing the memory for the nTurn variable to be accessed by pointer arithmetic when it shouldn't be. That would cause weird behavior when combined with the incrementing.

Rationale behind return 0 as default value in C/C++

Is there a reason why zero is used as a "default" function return value? I noticed that several functions from the stdlib, and almost everywhere else, when not returning a proper number (e.g. pow(), strcpy()) or an error (negative numbers), simply return zero.
I just became curious after seeing several tests performed with negated logic. Very confusing.
Why not return 1, or 0xff, or any positive number for that matter?
The rationale is that you want to distinguish the set of all possible (negative) return values corresponding to different errors from the single situation in which all went OK. The simplest, most concise, and most C-ish way to make that distinction is a logical test, and since in C all integers are "true" except zero, you want zero to mean the one good situation, i.e. you want zero as the "success" value.
The same line of reasoning applies to the return values of Unix programs, but indeed in the tests within Unix shell scripts the logic is inverted: a return value of 0 means "true" (for example, look at the return value of /bin/true).
Originally, C did not have "void". If a function didn't return anything, you just left the return type blank in the declaration, but that meant it returned an int.
So everything returned something, even if it didn't mean anything. And if you didn't specifically provide a return value, whatever value happened to be in the register the compiler used for return values became the function's return value.
/* Perfectly good K&R C code. */
NoReturn()
{
    /* do stuff */
    return;
}

int unknownValue = NoReturn();
People took to clearing that register to zero to avoid problems.
In shell scripting, 0 represents true, whereas another number typically represents an error code. Returning 0 from a main application means everything went successfully. The same logic may be being applied to the library code.
It could also just be that they return nothing, which is interpreted as 0. (Essentially the same concept.)
Another (minor) reason has to do with machine-level speed and code size.
In most processors, any operation that results in a zero automatically sets the zero flag, and there is a very cheap operation to jump against the zero flag.
In other words, if the last machine operation (e.g., a SUB or DEC) got us to zero, all we need is a jump-on-zero or a jump-not-zero.
On the other hand, if we test things against some other value, then we have to move that value into the register, run a compare operation that essentially subtracts the two numbers, and equality results in our zero.
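As a hypothetical illustration (the function names are made up), compare how a typical optimizing x86 compiler handles a zero test versus a test against another constant:
// Typically compiles to: test edi, edi / sete al
bool is_zero(int x)  { return x == 0; }

// Typically compiles to: cmp edi, 7 / sete al -- the constant 7 must be
// encoded in the instruction and compared explicitly.
bool is_seven(int x) { return x == 7; }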
Because Bash and most other UNIX shell environments regard 0 as success, and -x as a system error, and x as a user-defined error.
There's probably a bunch of forgotten history dating back to the days when everything was written in asm. In general it is much easier to test for zero than for other specific values.
I may be wrong about this, but I think that it's mainly for historical reasons (hysterical raisins?). I believe that K&R C (pre-ANSI) didn't have a void type, so the logical way to write a function that didn't return anything interesting was to have it return 0.
Surely somebody here will correct me if I'm wrong... :)
My understanding is that it was related to the behaviour of system calls.
Consider the open() system call; if it is successful, it returns a non-negative integer, which is the file descriptor that was created. However, down at the assembler level (where there's a special, non-C instruction that traps into the kernel), when an error is returned, it is returned as a negative value. When it detects an error return, the C code wrapper around the system call stores the negated value into errno (so errno has a positive value), and the function returns -1.
For some other system calls, the negative return code at the assembler level is still negated and placed into errno and -1 is returned. However, these system calls have no special value to return, so zero was chosen to indicate success. Clearly, there is a large variety of system calls, but most manage to fit these conventions. For example, stat() and its relatives return a structure, but a pointer to that structure is passed as an input parameter, and the return value is a 0 or -1 status. Even signal() manages it: SIG_ERR was -1, while SIG_DFL was 0 and SIG_IGN was 1, and other values were function pointers. There are a few system calls with no error return - getpid(), getuid() and so on.
This zero-indicates-success mechanism was then emulated by other functions which were not actually system calls.
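Here is a sketch of that classic wrapper pattern; raw_syscall() is a made-up stand-in for the architecture-specific trap instruction, not a real libc function:
#include <cerrno>

// Stand-in for the trap into the kernel; pretend it failed with EBADF.
long raw_syscall(long) { return -EBADF; }

long syscall_wrapper(long number) {
    long raw = raw_syscall(number);
    if (raw < 0) {
        errno = -raw;  // store the positive error code
        return -1;     // the conventional C-level failure value
    }
    return raw;        // success: a file descriptor, byte count, or 0
}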
Conventionally, a return code of 0 specifies that your program has ended normally and all is well. (You can remember this as "zero errors", although for technical reasons, you cannot use the number of errors your program found as the return code. See Style.) A return code other than 0 indicates that some sort of error has occurred. If your code terminates when it encounters an error, use exit, and specify a non-zero return code. Source
Because 0 is false and null in C/C++, and you can make handy shortcuts when that happens.
It is because when used from a UNIX shell a command that returns 0 indicates success.
Any other value indicates a failure.
As Paul Betts indicates, positive and negative values suggest where the error probably originated, but this is only a convention and not an absolute. A user application may return a negative value without any bad consequence (other than indicating to the shell that the application failed).
Besides all the fine points made by previous posters, it also cleans up the code considerably when a function returns 0 on success.
Consider:
if ( somefunc() ) {
    // handle error
}
is much cleaner than:
if ( !somefunc() ) {
    // handle error
}
or:
if ( somefunc() == somevalue ) {
    // handle error
}
}