C++ auto memory malloc and free Coverity issue - c++

I have some code that implements automatic memory allocation and freeing, as follows:
struct AutoAllocator
{
    AutoAllocator(BYTE*& ptr, size_t size) : objptr(ptr)
    {
        // some malloc here...
        // some init memory here...
    }
    bool isValid()
    { return objptr != 0; }
    ~AutoAllocator()
    {
        if (objptr == 0) return;
        // some free code here
    }
private:
    BYTE*& objptr;
};
#define AUTO_ALLOCATOR(ptr,size) \
for (AutoAllocator autoObj(ptr,size); autoObj.isValid(); autoObj.~AutoAllocator())
When I use it:
Ptr* obj;
AUTO_ALLOCATOR(obj, size)
{
    // some code here
    return;
}
…
Coverity warns me that the obj pointer going out of scope leaks the storage it points to.
How can I resolve this Coverity issue?
Any help?

The best solution here is to ditch this approach. The best possible outcome would be a fragile construct that breaks when not used "correctly". And that will happen both with inexperienced C++ programmers (who won't find your construct in their books) and with experienced programmers who write modern, RAII-style C++ code.
Using a macro-based solution is the first problem. It causes the compiler to see a different code structure than the programmer using that macro.
The second problem is that the macro is hiding a non-trivial construct - a for loop. That might be forgiven if the macro is named FOR_something, but here there is no hint at all. In fact, the name of the macro hints at some kind of auto functionality, a C++ keyword for type deduction. And it doesn't do that at all.
Next we have the problems that Coverity detects. It seems it doesn't get the diagnostic exactly right, but that's not unreasonable. Coverity gives good messages for common, small problems such as memory leaks. This code is so bad that Coverity can't infer what the intent was and has to guess. The formal problem is that the destructor of autoObj is called more than once: once explicitly as the for loop's iteration expression, and again implicitly when autoObj goes out of scope.
There is probably also a bug when any of the initialization code throws an exception, but since you left out that part we can't tell for sure.
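For contrast, here is a minimal sketch of the plain RAII shape such programmers would expect (the allocation and initialization details are placeholders, since the question elides them; AutoBuffer and user are illustrative names):

#include <cstdlib>
#include <cstring>

typedef unsigned char BYTE;   // stand-in for the Windows typedef used in the question

class AutoBuffer
{
public:
    explicit AutoBuffer(std::size_t size)
        : objptr(static_cast<BYTE*>(std::malloc(size)))
    {
        if (objptr != 0)
            std::memset(objptr, 0, size);     // stand-in for "some init memory here"
    }
    ~AutoBuffer() { std::free(objptr); }      // free runs exactly once, at scope exit

    AutoBuffer(const AutoBuffer&) = delete;            // no copies,
    AutoBuffer& operator=(const AutoBuffer&) = delete; // so no double free

    bool isValid() const { return objptr != 0; }
    BYTE* get() const { return objptr; }

private:
    BYTE* objptr;
};

void user()
{
    AutoBuffer buf(128);
    if (!buf.isValid())
        return;
    // ... work with buf.get() ...
}   // buf frees its memory here, even on early return

The destructor runs exactly once per object and only at scope exit, so there is nothing for Coverity, or a maintainer, to guess about.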

Related

How to force a compile error in C++(17) if a function return value isn't checked? Ideally through the type system

We are writing safety-critical code, and I'd like a stronger way than [[nodiscard]] to ensure that a missed check of a function return value is caught by the compiler.
[Update]
Thanks for all the discussion in the comments. Let me clarify that this question may seem contrived, or not a "typical use case", or not how someone else would do it. Please take this as an academic exercise if that makes it easier to ignore "well, why don't you just do it this way?". The question is exactly whether it's possible to create a type (or types) that fails to compile if it is not assigned to an l-value as the return result of a function call.
I know about [[nodiscard]], warnings-as-errors, and exceptions, and this question asks if it's possible to achieve something similar: a compile-time error, not something caught at run time. I'm beginning to suspect it's not possible, so any explanation of why would be much appreciated.
Constraints:
MSVC++ 2019
Something that doesn't rely on warnings
Warnings-as-Errors also doesn't work
It's not feasible to constantly run static analysis
Macros are OK
Not a runtime check, but caught by the compiler
Not exception-based
I've been trying to think of how to create a type that makes the compiler flag an error whenever a function's return value is not assigned to a variable.
Example:
struct MustCheck
{
    bool success;
    ...???...
};

MustCheck DoSomething( args )
{
    ...
    return MustCheck{true};
}

int main(void) {
    MustCheck res = DoSomething(blah);
    if( !res.success ) { exit(-1); }
    DoSomething( bloop ); // <------- compiler error
}
If such a thing is provably impossible through the type system, I'll also accept that answer ;)
(EDIT) Note 1: I have been thinking about your problem and reached the conclusion that the question is ill-posed. It is not clear what you are looking for because of a small detail: what counts as checking? How do checks compose, and how far from the call site may the check happen?
For example, does this count as checking? Note that composition with boolean results and/or other runtime variables matters.
bool b = true; // for example
auto res1 = DoSomething1(blah);
auto res2 = DoSomething2(blah);
if((res1 and res2) or b){...handle error...};
Composition with other runtime variables makes any compile-time guarantee impossible, and for composition with other "results" you would have to exclude certain logical operators, such as OR and XOR.
(EDIT) Note 2: I should have asked before, but 1) if the handling is supposed to always abort: why not abort from the DoSomething function directly? 2) if handling does a specific action on failure, then pass it as a lambda to DoSomething (after all, you are controlling what it returns, and what it takes). 3) composition of failures or propagation is the only non-trivial case, and it is not well defined in your question.
Below is the original answer.
This doesn't fulfill all the (edited) requirements you have (I think they are excessive) but I think this is the only path forward really.
Below my comments.
As you hinted, for doing this at runtime there are recipes online about "exploding" types (they assert/abort on destruction if they were not checked, tracked by an internal flag).
Note that this doesn't use exceptions (but it is runtime and it is not that bad if you test the code often, it is after all a logical error).
For compile-time, it is more tricky: returning (for example) a bool marked [[nodiscard]] is not enough, because there are ways of not discarding without actually checking, for example assigning it to a (bool) variable.
I think the next layer is to activate -Wunused-variable -Wunused-expression -Wunused-parameter (and treat them as errors: -Werror=...).
Then it is much harder not to check the bool, because comparison is pretty much the only operation you can really do with a bool.
(You can assign to another bool but then you will have to use that variable).
I guess that's quite enough.
There are still Machiavellian ways to mark a variable as used.
For that you can invent a bool-like type (class) that 1) is [[nodiscard]] itself (classes can be marked nodiscard), and 2) supports only ==(bool) and !=(bool) (and is maybe not even copyable), and return that from your function. (As a bonus, you don't need to mark your function as [[nodiscard]], because that's automatic.)
I guess it is impossible to avoid something like (void)b; but that in itself becomes a flag.
Even if you cannot completely rule out unchecked results, you can at least force patterns that will immediately raise eyebrows.
You can even combine the runtime and compile time strategy.
(Make CheckedBool exploding.)
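As a sketch of what that combination could look like (untested, requires C++17 for guaranteed copy elision, and CheckedBool/DoSomething are illustrative names):

#include <cassert>

class [[nodiscard]] CheckedBool
{
public:
    explicit CheckedBool(bool value) : value_(value) {}
    CheckedBool(const CheckedBool&) = delete;            // not copyable (and therefore
    CheckedBool& operator=(const CheckedBool&) = delete; // not movable either)
    ~CheckedBool() { assert(checked_ && "result was never checked"); }

    // Comparison is the only supported operation; it marks the value as checked.
    bool operator==(bool rhs) const { checked_ = true; return value_ == rhs; }
    bool operator!=(bool rhs) const { return !(*this == rhs); }

private:
    bool value_;
    mutable bool checked_ = false;
};

// Returning a prvalue needs no copy or move under C++17 guaranteed elision.
CheckedBool DoSomething() { return CheckedBool(true); }

int main()
{
    if (DoSomething() == true) { /* checked: no warning, no assert */ }

    DoSomething();   // [[nodiscard]] warning at compile time, and if you run
                     // it anyway, the assert fires when the temporary dies
}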
This will cover so many cases that you have to be happy at this point.
If compiler flags don’t protect you, you will still have a backup that can be detected in unit tests (regardless of whether the error path is taken!).
(And don’t tell me now that you don’t unit test critical code.)
What you want is a special case of substructural types. Rust is famous for implementing a special case called "affine" types, where you can "use" something "at most once". Here, you instead want "relevant" types, where you have to use something at least once.
C++ has no official built-in support for such things. Maybe we can fake it? I thought not. In the "appendix" to this answer I include my original logic for why I thought so. Meanwhile, here's how to do it.
(Note: I have not tested any of this; I have not written any C++ in years; use at your own risk.)
First, we create a protected destructor in MustCheck. Thus, if we simply ignore the return value, we get an error. But how do we avoid getting an error when we don't ignore the return value?
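Presumably MustCheck itself gets reshaped roughly like this (my reading; the answer leaves the type implicit):

struct MustCheck
{
    bool success;

protected:
    // Only derived classes may destroy a MustCheck, so any plain use that
    // ends up destroying one (including just ignoring a returned temporary)
    // fails to compile.
    ~MustCheck() = default;
};

The call site then has to launder the result through a derived type.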
(This looks scary: don't worry, we wrap most of it in a macro.)
int main(){
    struct Temp123 : MustCheck {
        void f() {
            MustCheck* mc = this;
            *mc = DoSomething();
        }
    } res;
    res.f();
    if(!res.success) std::puts("oops");
}
Okay, that looks horrible, but after defining a suitable macro, we get:
int main(){
    CAPTURE_RESULT(res, DoSomething());
    if(!res.success) std::puts("oops");
}
I leave the macro as an exercise to the reader, but it should be doable. You should probably use __LINE__ or something to generate the name Temp123, but it shouldn't be too hard.
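One possible shape for it, in the same untested spirit as the rest of this answer (CONCAT_IMPL/CONCAT are the usual token-pasting helpers, so each expansion gets a unique struct name):

#define CONCAT_IMPL(a, b) a##b
#define CONCAT(a, b) CONCAT_IMPL(a, b)

// name: variable to declare; expr: the call whose result must be captured.
#define CAPTURE_RESULT(name, expr)                              \
    struct CONCAT(CaptureTmp_, __LINE__) : MustCheck            \
    {                                                           \
        void f() { *static_cast<MustCheck*>(this) = (expr); }   \
    } name;                                                     \
    name.f()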
Disclaimer
Note that this is all sorts of hacky and terrible, and you likely don't want to actually use this. Using [[nodiscard]] has the advantage of allowing you to use natural return types, instead of this MustCheck thing. That means that you can create a function, and then one year later add nodiscard, and you only have to fix the callers that did the wrong thing. If you migrate to MustCheck, you have to migrate all the callers, even those that did the right thing.
Another problem with this approach is that it is unreadable without macros, but IDEs can't follow macros very well. If you really care about avoiding bugs then it really helps if your IDE and other static analyzers understand your code as well as possible.
As mentioned in the comments, you can use [[nodiscard]] as per:
https://learn.microsoft.com/en-us/cpp/cpp/attributes?view=msvc-160
And promote that warning to a compile error:
https://learn.microsoft.com/en-us/cpp/preprocessor/warning?view=msvc-160
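Putting the two links together, a minimal sketch; C4834 is MSVC's "discarding return value of function with 'nodiscard' attribute" warning:

#pragma warning(error: 4834)   // promote the nodiscard warning to a hard error

[[nodiscard]] bool DoSomething()
{
    return true;
}

int main()
{
    bool ok = DoSomething();   // fine: the result is used
    DoSomething();             // error C4834 under the pragma above
    return ok ? 0 : 1;
}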
That should cover your use case.

Quality of Visual Studio Community code analysis with SAL annotations

I hope this question is not out of scope for SO; if it is (sorry in that case), please tell me where it belongs and I'll try to move it there.
The concept of SAL annotations for static code analysis in C/C++ seems really useful to me. Take for example the wrongly implemented wmemcpy example on MSDN: Understanding SAL:
wchar_t * wmemcpy(
    _Out_writes_all_(count) wchar_t *dest,
    _In_reads_(count) const wchar_t *src,
    size_t count)
{
    size_t i;
    for (i = 0; i <= count; i++) { // BUG: off-by-one error
        dest[i] = src[i];
    }
    return dest;
}
MSDN says that "a code analysis tool could catch the bug by analyzing this function alone", which seems great, but the problem is that when I paste this code in VS 2017 Community no warning about this pops up on code analysis, not even with all analysis warnings enabled. (Other warnings like C26481 Don't use pointer arithmetic. Use span instead (bounds.1). do.)
Another example which should produce warnings (at least according to an answer to What is the purpose of SAL (Source Annotation Language) and what is the difference between SAL 1 and 2?), but does not:
_Success_(return) bool GetASmallInt(_Out_range_(0, 10) int& an_int);
//main:
int result;
const auto ret = GetASmallInt(result);
std::cout << result;
And a case of an incorrect warning:
struct MyStruct { int *a; };
void RetrieveMyStruct(_Out_ MyStruct *result) {
    result->a = new int(42);
}
//main:
MyStruct s;
RetrieveMyStruct(&s);
// C26486 Don't pass a pointer that may be invalid to a function. Parameter 1 's.a' in call to 'RetrieveMyStruct' may be invalid (lifetime.1).
// Don't pass a pointer that may be invalid to a function. The parameter in a call may be invalid (lifetime.1).
result is obviously marked with _Out_ and not _In_ or _Inout_ so this warning does not make sense in this case.
My question is: Why does Visual Studio's SAL-based code analysis seem to be quite bad; am I missing something? Is Visual Studio Professional or Enterprise maybe better in this aspect? Or is there a tool which can do this better?
And if it's really quite bad: is this a known problem and are there maybe plans to improve this type of analysis?
Related: visual studio 2013 static code analysis - how reliable is it?
Function contracts, of which SAL annotations are a lightweight realization, make it possible to reason locally about whether a function does the right thing and whether its callers use it correctly. Without them, you could only discuss the notion of a bug in the context of a whole program. With them, as the documentation says, it becomes possible to say locally that a function's behavior is a bug, and you can hope that a static analysis tool will find it.
Verifying mechanically that a piece of code does not have bugs remains a difficult problem even with this help. Different techniques exist because there are various partial approaches to the problem. They all have strengths and weaknesses, and they all contain plenty of heuristics. Loops are part of what makes predicting all the behaviors of a program difficult, and implementers of these tools may choose not to hard-code patterns for the extremely simple loops, since these patterns would seldom serve in practice.
And if it's really quite bad: is this a known problem and are there maybe plans to improve this type of analysis?
Yes, researchers have worked on this topic for decades and continue both to improve the theory and to transfer theoretical ideas into practical tools. As a user, you have a choice:
if you need your code to be free of bugs, for instance because it is intended for a safety-critical context, then you already have very heavy methodology in place based on intensive testing at each level of the V-cycle, and this sort of static analysis can already help you reach the same level of confidence with less (but some) effort. You will need more expressive contract specifications than SAL annotations for this goal. An example is ACSL for C.
if you are not willing to make the considerable effort necessary to ensure that code is bug-free with high confidence, you can still take advantage of this sort of static analysis, but in this case consider any bug found as a bonus. The annotations, because they have a formally defined meaning, can also be useful for assigning blame even in the context of a manual code review in which no static analyzer is involved. SAL annotations were designed explicitly for this use case.

Mark variable as not NULL after BOOST_REQUIRE in PVS-Studio

I'm using PVS-Studio to analyze my Testcode. There are often constructs of the form
const noAnimal* animal = dynamic_cast<noAnimal*>(...);
BOOST_REQUIRE(animal);
BOOST_REQUIRE_EQUAL(animal->GetSpecies(), ...);
However, I still get the warning V522 ("There might be dereferencing of a potential null pointer 'animal'") for the last line.
I know it is possible to mark functions as "not returning NULL", but is it also possible to mark a function as a valid NULL check, or otherwise make PVS-Studio aware that animal can't be NULL after BOOST_REQUIRE(animal)?
This also happens if the pointer is checked via any assert flavour first.
Thank you for the interesting example. We'll think about what we can do with the BOOST_REQUIRE macro.
At the moment, I can suggest the following solution:
Somewhere after
#include <boost/test/included/unit_test.hpp>
you can write:
#ifdef PVS_STUDIO
#undef BOOST_REQUIRE
#define BOOST_REQUIRE(expr) do { if (!(expr)) throw "PVS-Studio"; } while (0)
#endif
This way you give the analyzer a hint that a false condition aborts the control flow.
It is not the most beautiful solution, but I think it was worth telling you about.
Responding to a comment with a large comment is a bad idea, so here is my detailed response to the following remark:
Although this is possible it would be a pain to include that define in
all testcase files. Also this is not limited to BOOST_REQUIRE only but
also applies to assert, SDL_Assert or any other custom macro the user
might use.
One should understand that there are three types of test macros and each should be discussed separately.
Macros of the first type simply warn you that something went wrong in the Debug version. A typical example is the assert macro. The following code will cause the PVS-Studio analyzer to generate a warning:
T* p = dynamic_cast<T *>(x);
assert(p);
p->foo();
The analyzer will point out a possible null-pointer dereferencing here and will be right. A check that uses assert is not sufficient because it will be removed from the Release version. That is, it turns out there’s no check. A better way to implement it is to rewrite the code into something like this:
T* p = dynamic_cast<T *>(x);
if (p == nullptr)
{
assert(false);
throw Error;
}
p->foo();
This code won’t trigger the warning.
You may argue that you are 100% sure that dynamic_cast will never return nullptr. I don’t accept this argument. If you are totally sure that the cast is ALWAYS correct, you should use the faster static_cast. If you are not that sure, you must test the pointer before dereferencing it.
Well, OK, I see your point. You are sure that the code is alright, but you need to have that check with dynamic_cast just in case. OK, use the following code then:
assert(dynamic_cast<T *>(x) != nullptr);
T* p = static_cast<T *>(x);
p->foo();
I don’t like it, but at least it’s faster, since the slower dynamic_cast operator will be left out in the Release version, while the analyzer will keep silent.
Moving on to the next type of macros.
Macros of the second type also report that something went wrong, but they are used in tests. What makes them different from the previous type is that they stop the algorithm under test when the condition is false and generate an error message.
The basic problem with these macros is that the functions they call are not marked as non-returning. Here’s an example.
Suppose we have a function that generates an error message by throwing an exception. This is what its declaration looks like:
void Error(const char *message);
And this is how the test macro is declared:
#define ENSURE(x) do { if (!(x)) Error("zzzz"); } while (0)
Using the pointer:
T* p = dynamic_cast<T *>(x);
ENSURE(p);
p->foo();
The analyzer will issue a warning about a possible null-pointer dereferencing, but the code is actually safe. If the pointer is null, the Error function will throw an exception and thus prevent the pointer dereferencing.
We simply need to tell the analyzer about that by using one of the function annotation means, for example:
[[noreturn]] void Error(const char *message);
or:
__declspec(noreturn) void Error(const char *message);
This will help eliminate the false warning. So, as you can see, it’s quite easy to fix things in most cases when using your own macros.
It might be trickier, however, if you deal with carelessly implemented macros from third-party libraries.
This leads us to the third type of macros. You can’t change them, and the analyzer can’t figure out how exactly they work. This is a common situation, as macros may be implemented in quite exotic ways.
There are three options left for you in this case:
suppress the warning using one of the false-positive suppression means described in the documentation;
use the technique I described in the previous answer;
email us.
We are gradually adding support for various tricky macros from popular libraries. In fact, the analyzer is already familiar with most of the specific macros you might encounter, but programmers’ imagination is inexhaustible and we just can’t foresee every possible implementation.
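For the first option, the suppression means boil down to special comments; if I remember the marker syntax correctly, a line-level suppression looks like this:

T* p = dynamic_cast<T *>(x);
ENSURE(p);
p->foo(); //-V522   suppresses the V522 warning on this line only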

Does throw() (i.e. __declspec(nothrow)) give real benefits in Visual C++?

Focusing on Visual C++, have you ever experienced significant performance gains in C++ code using the throw() (i.e. __declspec(nothrow)) non-throwing specification?
Does it really help the optimizer?
Are there any benchmarks showing performance gains?
I found different (opposite) advice on the Internet:
The Boost exception-specification rationale is against throw(); Larry Osterman, on the other hand, seems to be in favor of it in his blog post Why add a throw() to your methods?
(I'd like to clarify that I'm interested in VC++ specific code; I know that in GCC the throw() specification can actually be a "pessimization" due to run-time checks.)
P.S. Reading ATL headers, I found that throw() is used pervasively; moreover, I found a convenient C++ RAII unique_handle class in this MSDN article that uses throw() specification as well.
The MSVC compiler treats it as an optimization hint, yes.
Boost has to be cross-platform, so they have to go for something that's safe and efficient on a variety of compilers. And as the Boost docs say, some compilers might generate slower code when throw() is specified, and in many cases compilers can deduce that no exception is thrown regardless of whether there is a throw() specification. So for Boost, the safest approach is to never use throw-specifications.
But if you're targeting MSVC specifically, then throw() effectively tells the compiler not to generate exception-handling code for the function, which may give a speedup in cases where the function was too complex for the compiler to determine that no exception could be thrown.
The main problem with throw() is that code inside a function marked as throw() can still throw.
For example, this will work perfectly:
#include <iostream>

void foo() throw()
{
    throw "haha\n";
}

int main()
{
    try {
        foo();
    }
    catch (const char* s) {
        std::cout << s;
    }
    return 0;
}
Note that foo will not propagate the exception to main, of course, and you will not catch it (unlike what happens if you comment out the throw() specifier). Instead, the compiler wraps the function's code in a try{}catch() block. When the exception is generated, it is handled by the global handler, which by default means your program crashes.
Note that the compiler has to wrap the function code in a try{}catch() block unless it can prove that the inner code cannot generate an exception.
As a result, some optimization is possible at the call site of foo, but things get more complex inside foo.
EDIT:
Things are different with __declspec(nothrow). As Microsoft says,
This attribute tells the compiler that the declared function and the
functions it calls never throw an exception.
This means the compiler can omit the try{}catch() wrapper code.
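For reference, the two spellings side by side (declarations only, with the MSVC-specific meanings described above):

// Standard C++03 empty exception specification: "f throws nothing".
// A conforming compiler still has to enforce this at run time.
void f() throw();

// MSVC extension: "g and the functions it calls never throw", so the
// compiler may skip the wrapper code entirely.
__declspec(nothrow) void g();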
EDIT2
Actually, Microsoft violates the standard behavior and does not generate the wrapper for throw() either. Well, in that case you can use throw() to improve performance.
I add it, but it's not to help the optimizer; it's to help me write more correct code.
class X
{
public:
    void swap(X& rhs) throw(); // Swap better not ever throw.
                               // If it does, there is something else
                               // much more seriously wrong.
};

How can I trust the behavior of C++ functions that declare const?

This is a C++ disaster; check out this code sample:
#include <iostream>

void func(const int* shouldnotChange)
{
    int* canChange = (int*) shouldnotChange;
    *canChange += 2;
    return;
}

int main() {
    int i = 5;
    func(&i);
    std::cout << i;
    return 0;
}
The output was 7!
So how can we be sure of the behavior of C++ functions, if one can change a supposed-to-be-constant parameter!?
EDIT: I am not asking how I can make sure that my own code works as expected; rather, I am wondering how to trust that someone else's function (for instance some function in some DLL library) isn't going to change a parameter or exhibit some other behavior...
Based on your edit, your question is "how can I trust 3rd party code not to be stupid?"
The short answer is "you can't." If you don't have access to the source, or don't have time to inspect it, you can only trust the author to have written sane code. In your example, the author of the function declaration specifically claims that the code will not change the contents of the pointer by using the const keyword. You can either trust that claim, or not. There are ways of testing this, as suggested by others, but if you need to test large amounts of code, it will be very labour intensive. Perhaps more so than reading the code.
If you are working on a team and you have a team member writing stuff like this, then you can talk to them about it and explain why it is bad.
By writing sane code.
If you write code you can't trust, then obviously your code won't be trustworthy.
Similar stupid tricks are possible in pretty much any language. In C#, you can modify the code at runtime through reflection. You can inspect and change private class members. How do you protect against that? You don't, you just have to write code that behaves as you expect.
Apart from that, write a unit test verifying that the function does not change its parameter.
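A minimal sketch of such a test, using plain assert for brevity (any test framework would do):

#include <cassert>

void func(const int* shouldnotChange);  // the function from the question

void test_func_does_not_modify_its_argument()
{
    int i = 5;
    func(&i);
    assert(i == 5);   // fails with the current implementation,
                      // documenting the const violation
}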
The general rule in C++ is that the language is designed to protect you from Murphy, not Machiavelli. In other words, it's meant to keep a maintenance programmer from accidentally changing a variable marked as const, not to keep someone from deliberately changing it, which can be done in many ways.
A C-style cast means all bets are off. It's sort of like telling the compiler "Trust me, I know this looks bad, but I need to do this, so don't tell me I'm wrong." Also, casting away const-ness and then modifying the value is undefined behavior whenever the object itself was declared const; in that case the compiler/runtime can do anything, including e.g. crashing your program. (In the sample above it happens to be defined only because i itself is not const.)
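To make the undefined-behavior case concrete, here is the same trick applied to an object that really is const:

#include <iostream>

void func(const int* shouldnotChange)
{
    int* canChange = (int*) shouldnotChange;  // same trick as the question
    *canChange += 2;
}

int main()
{
    const int i = 5;   // this time the object itself is const
    func(&i);          // undefined behavior: modifying a const object;
                       // the compiler may even have placed i in read-only storage
    std::cout << i;    // may print 5, may print 7, may crash
    return 0;
}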
The only thing I can suggest is to allocate the variable shouldNotChange from a memory page that is marked as read-only. This will force the OS/CPU to raise an error if the application attempts to write to that memory. I don't really recommend this as a general method of validating functions; it's just an idea you may find useful.
The simplest way to enforce this would be to just not pass a pointer:
void func(int shouldnotChange);
Now a copy will be made of the argument. The function can change the value all it likes, but the original value will not be modified.
If you can't change the function's interface then you could make a copy of the value before calling the function:
int i = 5;
int copy = i;
func(&copy);
Don't use C style casts in C++.
We have 4 named cast operators in C++ (listed here in order of danger; static_cast appears twice because its safety depends on how it is used):
static_cast<> Safe (when used to convert numeric data types).
dynamic_cast<> Safe (but throws exceptions / returns NULL).
const_cast<> Dangerous (when removing const).
static_cast<> Very dangerous (when used to cast pointer types. Not a very good idea!!!!!)
reinterpret_cast<> Very dangerous. Use this only if you understand the consequences.
You can always tell the compiler that you know better than it does, and the compiler will take you at your word (the reason being that you don't want the compiler getting in the way when you actually do know better).
Power over the compiler is a two-edged sword. If you know what you are doing, it is a powerful tool that will help you, but if you get things wrong it will blow up in your face.
Unfortunately, the compiler has reasons for most things, so if you override its default behavior you had better know what you are doing. Casting is one of those things. A lot of the time it is fine. But if you start casting away const(ness), then you had better know what you are doing.
(int*) is the casting syntax from C. C++ supports it fully, but it is not recommended.
In C++ the equivalent cast should've been written like this:
int* canChange = static_cast<int*>(shouldnotChange);
And indeed, if you wrote that, the compiler would NOT have allowed such a cast.
What you're doing is writing C code and expecting the C++ compiler to catch your mistake, which is sort of unfair if you think about it.
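For completeness, the explicit C++ spelling of what that C-style cast quietly did is const_cast, which at least makes the const-removal visible and greppable:

void func(const int* shouldnotChange)
{
    // The honest C++ spelling of the question's C-style cast:
    int* canChange = const_cast<int*>(shouldnotChange);
    *canChange += 2;   // still a bad idea, but now it stands out in review
}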