I've seen many examples of the ReportEvent function on the web. In those examples, the handle passed to ReportEvent (the first argument) is created right before the call and destroyed right after it (with RegisterEventSource and DeregisterEventSource respectively).
Is there a reason why the HANDLE is kept alive for only a short time? Why would that be better than creating the HANDLE at the beginning of the program and destroying it at the end? (After all, it's only one HANDLE, and the per-process maximum is around 16 million.) Is there no overhead in creating and destroying the HANDLE each time we call ReportEvent?
Thanks in advance,
Dror
Yes you can. At the beginning of wmain, you may do:
HANDLE hEventLog = RegisterEventSource(NULL, PROVIDER_NAME);
And, at the end of wmain:
if (hEventLog)
DeregisterEventSource(hEventLog);
This should do, and you may reuse the same handle in between.
The more interesting aspect of this question is the following: can we call RegisterEventSource only, and then assume that exiting the process is enough to auto-close all related handles, including the one obtained via RegisterEventSource?
In other words: if we know for sure that the process will exit at some point, can we call RegisterEventSource alone, and nothing more?
To put the same question even more concretely: if we never call DeregisterEventSource(), what problems might that cause? Would there be some kind of internal leak in the OS as a whole, or any other consequences?
For example:
I need to call ReportEvent() in a lot of different DLLs; as a result, in every DLL I always have to call three functions: RegisterEventSource, ReportEvent, and DeregisterEventSource. I would like to use an internal static variable holding the handle, call RegisterEventSource only once, and then never call DeregisterEventSource, in the hope that when the process exits all related handles will be auto-closed.
Would that work?
I am using mprotect to set some memory pages as write-protected. When any write is attempted in that memory region, the program gets a SIGSEGV signal. From the signal handler I know at which memory address the write was attempted, but I don't know how to find out which instruction caused the write-protection violation. So inside the signal handler I am thinking of reading the program counter (PC) register to get the faulting instruction. Is there an easy way to do this?
If you install your signal handler using sigaction with the SA_SIGINFO flag, the third argument to the signal handler has type void * but points to a structure of type ucontext_t, which in turn contains a structure of type mcontext_t. The contents of mcontext_t are implementation-defined and generally cpu-architecture-specific, but this is where you will find the saved program counter.
It's also possible that the compiler's builtins (__builtin_return_address with a nonzero argument, I think) along with unwinding tables may be able to trace across the signal handler. While this is in some ways more general (it's not visibly cpu-arch-specific), I think it's also more fragile, and whether it actually works may be cpu-arch- and ABI-specific.
Windows APIs use the GetLastError() mechanism to retrieve information about an error or failure. I am considering the same mechanism for error handling as I write APIs for a proprietary module. My question is: is it better for an API to return the error code directly instead? Does GetLastError() have any particular advantage? Consider the simple Win32 API example below:
HANDLE hFile = CreateFile(sFile,
    GENERIC_WRITE, FILE_SHARE_READ,
    NULL, CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL);
if (hFile == INVALID_HANDLE_VALUE)
{
    DWORD lrc = GetLastError();
    if (lrc == ERROR_FILE_EXISTS)
    {
        // msg box and so on
    }
}
As I was writing my own APIs I realized the GetLastError() mechanism means that CreateFile() must set the last error code at every exit point. This can be a little error-prone if there are many exit points and one of them is missed. Dumb question, but is this how it is done, or is there some kind of design pattern for it?
The alternative would be to provide an extra parameter through which the function fills in the error code directly, so a separate call to GetLastError() would not be needed. Yet another approach is shown below. I will stick with the above Win32 API, which is a good example to analyze. Here I am changing the signature to this (hypothetically):
result = CreateFile(&hFile, sFile,
    GENERIC_WRITE, FILE_SHARE_READ,
    NULL, CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL);
if (result == SUCCESS)
{
    // hFile has a valid value, process it
}
else if (result == FILE_ALREADY_EXISTS)
{
    // display message accordingly
    return;
}
else if (result == INVALID_PATH)
{
    // display message accordingly
    return;
}
My ultimate question is: what is the preferred way to return an error code from an API, or even just a function, since they are both the same thing?
Overall, it's a bad design. This is not specific to Windows' GetLastError function; Unix systems have the same concept in the global errno variable. It's bad because the error code is an implicit output of the function. This has a few nasty consequences:
Two functions being executed at the same time (in different threads) may overwrite the global error code. So you may need to have a per-thread error code. As pointed out by various comments to this answer, this is exactly what GetLastError and errno do - and if you consider using a global error code for your API then you'll need to do the same in case your API should be usable from multiple threads.
Two nested function calls may throw away error codes if the outer function overwrites an error code set by the inner.
It's very easy to ignore the error code. In fact, it's harder to actually remember that it's there because not every function uses it.
It's easy to forget setting it when you implement a function yourself. There may be many different code paths, and if you don't pay attention one of them may allow the control flow to escape without setting the global error code correctly.
Usually, error conditions are exceptional. They don't happen very often, but they can. A configuration file you need may not be readable, but most of the time it is. For such exceptional errors, you should consider using C++ exceptions. Any C++ book worth its salt will give a list of reasons why exceptions in any language (not just C++) are good, but there's one important thing to consider before getting all excited:
Exceptions unroll the stack.
This means that when a function throws an exception, it gets propagated to all the callers (until it's caught by someone, possibly the runtime system). This in turn has a few consequences:
All caller code needs to be aware of the presence of exceptions, so all code which acquires resources must be able to release them even in the face of exceptions (in C++, the 'RAII' technique is usually used to tackle them).
Event loop systems usually don't allow exceptions to escape event handlers. There's no good concept of dealing with them in this case.
Programs dealing with callbacks (plain function pointers for instance, or even the 'signal & slot' system used by the Qt library) usually don't expect that a called function (a slot) can yield an exception, so they don't bother trying to catch it.
The bottom line is: use exceptions if you know what they are doing. Since you seem to be rather new to the topic, stick to function return codes for now, but keep in mind that this is not a good technique in general. Don't go for a global error variable/function in either case.
The GetLastError pattern is by far the most prone to error and the least preferred.
Returning a status code enum is a better choice by far.
Another option which you did not mention, but is quite popular, would be to throw exceptions for the failure cases. This requires very careful coding if you want to do it right (and not leak resources or leave objects in half-set-up states) but leads to very elegant-looking code, where all the core logic is in one place and the error handling is neatly separated out.
I think GetLastError is a relic from the days before multi-threading. I don't think that pattern should be used any more except in cases where errors are extraordinarily rare. The problem is that the error code has to be per-thread.
The other irritation with GetLastError is that it requires two levels of testing. You first have to check the return code to see if it indicates an error and then you have to call GetLastError to get the error. This means you have to do one of two things, neither particularly elegant:
1) You can return a boolean indicating success or failure. But then, why not just return the error code with zero for success?
2) You can have a different return value test for each function based on a value that is illegal as its primary return value. But then what of functions where any return value is legal? And this is a very error-prone design pattern. (Zero is the only illegal value for some functions, so you return zero for error in that case. But where zero is legal, you may need to use -1 or some such. It's easy to get this test wrong.)
I have to say, I think the global error handler style (with proper thread-local storage) is the most realistically applicable when exception-handling cannot be used. This is not an optimal solution for sure, but I think if you are living in my world (a world of lazy developers who don't check for error status as often as they should), it's the most practical.
Rationale: developers just tend to not check error return values as often as they should. How many examples can we point to in real world projects where a function returned some error status only for the caller to ignore them? Or how many times have we seen a function that wasn't even correctly returning error status even though it was, say, allocating memory (something which can fail)? I've seen too many examples like these, and going back and fixing them can sometimes even require massive design or refactoring changes through the codebase.
The global error handler is a lot more forgiving in this respect:
If a function failed to return a boolean or some ErrorStatus type to indicate failure, we don't have to modify its signature or return type to indicate failure and change the client code all over the application. We can just modify its implementation to set a global error status. Granted, we still have to add the checks on the client side, but if we miss an error immediately at a call site, there's still opportunity to catch it later.
If a client fails to check the error status, we can still catch the error later. Granted, the error may be overwritten by subsequent errors, but we still have an opportunity to see that an error occurred at some point whereas calling code that simply ignored error return values at the call site would never allow the error to be noticed later.
While being a sub-optimal solution, if exception-handling can't be used and we're working with a team of code monkeys who have a terrible habit of ignoring error return values, this is the most practical solution as far as I see.
Of course, exception-handling with proper exception-safety (RAII) is by far the superior method here, but sometimes exception-handling cannot be used (ex: we should not be throwing out of module boundaries). While a global error handler like the Win API's GetLastError or OpenGL's glGetError sounds like an inferior solution from a strict engineering standpoint, it's a lot more forgiving to retrofit into a system than to start making everything return some error code and start forcing everything calling those functions to check for them.
If this pattern is applied, however, one must take careful note to ensure it can work properly with multiple threads, and without significant performance penalties. I actually had to design my own thread-local storage system to do this, but our system predominantly uses exception-handling and only this global error handler to translate errors across module boundaries into exceptions.
All in all, exception-handling is the way to go, but if this is not possible for some reason, I have to disagree with the majority of the answers here and suggest something like GetLastError for larger, less disciplined teams (I'd say return errors through the call stack for smaller, more disciplined ones) on the basis that if a returned error status is ignored, this allows us to at least notice an error later, and it allows us to retrofit error-handling into a function that wasn't properly designed to return errors by simply modifying its implementation without modifying the interface.
If your API is in a DLL and you wish to support clients that use a different compiler than you do, then you cannot use exceptions. There is no binary interface standard for exceptions.
So you pretty much have to use error codes. But don't model the system using GetLastError as your exemplar. If you want a good example of how to return error codes look at COM. Every function returns an HRESULT. This allows callers to write concise code that can convert COM error codes into native exceptions. Like this:
Check(pIntf->DoSomething());
where Check() is a function, written by you, that receives an HRESULT as its single parameter and raises an exception if the HRESULT indicates failure. It is the fact that the return value of the function indicates status that allows this more concise coding. Imagine the alternative of returning the status via a parameter:
pIntf->DoSomething(&status);
Check(status);
Or, even worse, the way it is done in Win32:
if (!pIntf->DoSomething())
Check(GetLastError());
On the other hand, if you are prepared to dictate that all clients use the same compiler as you, or you deliver the library as source, then use exceptions.
Exception handling in unmanaged code is not recommended. Dealing with memory leaks without exceptions is already a big issue; with exceptions it becomes a nightmare.
A thread-local variable for the error code is not such a bad idea, but as some of the other people said, it is a bit error-prone.
I personally prefer every method to return an error code. This creates inconvenience for functional methods, because instead of:
int a = foo();
you will need to write:
int a;
HANDLE_ERROR(foo(a));
Here HANDLE_ERROR could be a macro that checks the code returned from foo and, if it is an error, propagates it up (by returning it).
If you prepare a good set of macros to handle different situations, writing code with good error handling but without exception handling becomes possible.
Now, as your project grows, you will notice that call-stack information for an error is very important. You can extend your macros to store call-stack info in a thread-local storage variable. That is very useful.
Then you will notice that even the call stack is not enough. In many cases an error code of "File not found" at a line that says fopen(path, ...) does not give you enough information to find out what the problem is: which file was not found? At this point you can extend your macros to store messages as well, and then you can provide the actual path of the file that was not found.
The question is why bother with all of this when you could use exceptions. Several reasons:
Again, exception handling in unmanaged code is hard to do right.
The macro-based code (if done right) tends to be smaller and faster than the code needed for exception handling.
It is way more flexible: you can enable and disable features.
In the project I am working on at the moment I implemented such error handling. It took me two days to get it to a level where it was ready to use, and in about a year since then I have probably spent about two weeks of total time maintaining it and adding features.
You should also consider an object/structure-based error-code variable, like the stdio C library does for FILE streams.
On some of my I/O objects, for example, I just skip all further operations when the error state is set, so the user is fine just checking the error once after a sequence of operations.
This pattern allows you to fine-tune the error-handling scheme much better.
One of the design weaknesses of C/C++ comes to full light here when comparing it, for example, with Google's Go language: the return of just one value from a function. Go does not use exceptions; instead it always returns two values, the result and the error code.
There is a minority of people who think that exceptions are bad and misused most of the time, because errors are not exceptional but something you have to expect. And it hasn't been proven that exceptions make software more reliable and easier to write, especially in C++, where the only sane way to program nowadays is with RAII techniques.
I want to use boost::asio but I don't want boost to throw exceptions, because in my environment exceptions must not be raised.
I've encountered BOOST_NO_EXCEPTIONS but the documentation says that callers of throw_exception can assume that this function never returns.
But how can a user supplied function not return? What replacement function would I need to insert here? Do I have to terminate the process in case boost code wants to throw an exception?
Well, what do you want to do on an error condition? BOOST_NO_EXCEPTIONS does not magically make Boost source code use an alternative mechanism for propagating errors back to callers. So you either print an error to stderr and die, or you longjmp all the way to the top, leaking whatever resources the functions presently on the call stack might have allocated.
Either you terminate the process, or you jump to something like a global error handler using longjmp, which you've previously set up with setjmp.
You seem to have misunderstood the meaning of BOOST_NO_EXCEPTIONS; it only gives you a chance to bail out in the way you desire, in a consistent manner.
Execution has entered a state where it can no longer proceed, which is when an exception is thrown, so if the user-defined throw_exception returns, it is logical that the behavior is undefined.
If a call to fread() returns 0 and ferror() indicates an error (vs. EOF), is it OK to retry the read or is it better to close and reopen the file?
I can't start over entirely -- the input file has been partially processed in a way that can't be undone (say I'm writing out a chunk at a time to a socket and, due to existing protocol, have no way of telling the remote end, "never mind, I need to start over").
I could fclose() and fopen() the file, fseek() past the data already processed, and continue the fread()-ing from there, but is all that necessary?
There's no "one size fits all" solution, since different errors can require different handling. Errors from fread() are unusual; if you're calling it correctly, an error may indicate a situation that has left the FILE* in a weird error state. In that case you're best off calling fclose(), fopen(), and fseek() to get things back into a good state.
If you're coding against errors that are actually happening, please mention the actual errors you're getting from ferror()...
You can give the clearerr function a look.
You can show the error to the user with perror() or strerror() and ask whether she wants to retry.
It's not mandatory for the implementation to provide such an error message, though. You should set errno to 0 before calling fread(); if it fails and errno is still 0 then no error information will be available.