In the following code snippet there is an error that is not trivial, but I would have expected tools like AddressSanitizer to catch it.
#include <vector>
#include <iostream>

int main()
{
    std::vector<int> toto;
    toto.push_back(2);
    int const& titi = toto[0]; // reference into the vector's buffer
    toto.pop_back();           // titi now dangles
    std::cout << titi << std::endl; // read through the dangling reference
    return 1;
}
When I put the vector inside a scope and print the reference outside of that scope, AddressSanitizer does catch it and reports a heap-use-after-free error.
But when there is no scope, the std::vector implementation will probably not release the memory after the pop_back, so the reference still points to memory that is valid as far as the sanitizer can tell.
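Indeed, pop_back() reduces size() but never capacity(), so the popped element's storage stays owned by the vector. A quick check:

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v;
    v.push_back(2);
    v.pop_back();
    // size() is now 0, but capacity() is still >= 1:
    // the buffer was not returned to the allocator.
    std::cout << v.size() << " " << v.capacity() << std::endl;
}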
I have searched around and found that you can manually poison memory, and I was wondering if this has been implemented in the STL (https://github.com/google/sanitizers/wiki/AddressSanitizerManualPoisoning).
This has been implemented in Clang (libc++) and relatively recent GNU (libstdc++) STLs (see the ASan wiki for details).
One problem with this feature is that it breaks separate sanitization, i.e. the ability to sanitize only parts of your app (e.g. only the executable and not the libs). The issue is that if a vector is pushed in unsanitized and popped in sanitized code, the pusher will not be aware that it needs to unpoison the buffer. For this reason it's disabled by default in GCC (define _GLIBCXX_SANITIZE_VECTOR to enable it); Clang still has it on by default for unclear reasons.
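For illustration, here is roughly what the manual poisoning from the linked wiki page looks like applied to the snippet above (a sketch; ASAN_POISON_MEMORY_REGION comes from <sanitizer/asan_interface.h>, shipped with Clang and GCC, and expands to a no-op when ASan is not enabled):

#include <sanitizer/asan_interface.h> // ASAN_POISON_MEMORY_REGION
#include <vector>
#include <iostream>

int main()
{
    std::vector<int> toto;
    toto.push_back(2);
    int const& titi = toto[0];
    toto.pop_back();
    // Mark the popped element's storage as unaddressable so ASan flags
    // any later read through the dangling reference.
    ASAN_POISON_MEMORY_REGION(toto.data(), sizeof(int));
    std::cout << titi << std::endl; // ASan now reports use-after-poison
    return 0;
}

This is essentially what the container annotations automate inside push_back/pop_back when _GLIBCXX_SANITIZE_VECTOR (libstdc++) or the libc++ equivalent is active.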
Using Netbeans 8.2 on Linux and GCC 8.1, unique_ptr::operator->() gives an erroneous warning
The warning is Unable to resolve template based identifier {var}, and more importantly Netbeans will not autocomplete member names.
The code compiles and runs just fine, and amazingly, autocomplete still works with no warnings if a shared_ptr is used instead. I have no idea how that is possible. Here is a problematic example, for reference:
#include <iostream>
#include <memory>

struct C { int a; };

int main() {
    std::unique_ptr<C> foo(new C);
    std::shared_ptr<C> bar(new C);
    foo->a = 10; // Displays warning, does not auto-complete the members of C
    bar->a = 20; // No warning, auto-completes members of C
    return 0;
}
I've tried the following to resolve this issue with no luck:
Code Assistance > Reparse Project
Code Assistance > Clean C/C++ cache and restart IDE
Manually deleting cache in ~/.cache/netbeans/8.2
Looking through View > IDE Log for anything that may help
Setting both the C and C++ compilers to C11 and C++11 in project properties
Changing the preprocessor macro __cplusplus to both 201103L and 201402L
Creating a new Netbeans project and trying the above
A large variety of permutations of the above options in different orders
Again, everything compiles and runs just fine; it's just Code Assistance that is giving me an issue. I've run out of things to try from other Stack Overflow answers. My intuition tells me that shared_ptr working while unique_ptr doesn't is a useful clue, but I don't know enough about C++ to make use of that information. Please help me, I need to get back to work...
Edit 1
This Stack Overflow question, although referencing Clang and the libc++ implementation, suggests that it may be an implementation issue within GCC 8.1's libstdc++ unique_ptr.
TLDR Method 1
Adding a using pointer = {structName}* alias to your structure fixes code assistance, and the code compiles and runs as intended, like so:
struct C { using pointer = C*; int a;};
TLDR Method 2
The answer referenced in my edit does in fact work for libstdc++ as well. Simply changing the return type of unique_ptr::operator->() from pointer to element_type* fixes code assistance, and everything compiles and runs as expected (the same change can be made to unique_ptr::get()). However, changing the implementation of the standard library worries me greatly.
More Information
As someone who is relatively new to C++ and barely understands the power of template specializations, reading through unique_ptr.h was scary, but here is what I think is tripping up Netbeans:
Calling unique_ptr::operator->() calls unique_ptr::get()
unique_ptr::get() calls the private implementation's (__unique_ptr_impl) pointer function __unique_ptr_impl::_M_ptr(). All these calls return the __unique_ptr_impl::pointer type.
Within the private implementation, the type pointer is defined within an even more private implementation, _Ptr. The struct _Ptr has two template definitions: one that returns a raw pointer to the initial template parameter of unique_ptr, and a second that seems to strip any reference off this template parameter and then look up its member type named pointer. I think this is where Netbeans messes up.
So my understanding is that when you call unique_ptr<elementType, deleterType>::operator->(), it goes to __unique_ptr_impl<elementType, deleterType>, where the internal pointer type is found by stripping elementType of any references and then getting the type named dereferenced(elementType)::pointer. So by including the using pointer = alias, Netbeans gets what it wants when looking for the dereferenced(elementType)::pointer type.
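For what it's worth, the standard rule is that unique_ptr<T, D>::pointer is std::remove_reference<D>::type::pointer if that member type exists (note: the deleter's member, not the element type's), and T* otherwise. A simplified sketch of that detection, using C++17's std::void_t (illustrative only, not libstdc++'s exact internals):

#include <memory>
#include <type_traits>

// Prefer remove_reference_t<D>::pointer when it exists, else fall back to T*.
template <typename T, typename D, typename = void>
struct detected_pointer {
    using type = T*; // fallback: raw pointer to the element type
};

template <typename T, typename D>
struct detected_pointer<T, D,
        std::void_t<typename std::remove_reference_t<D>::pointer>> {
    using type = typename std::remove_reference_t<D>::pointer;
};

// default_delete<int> has no ::pointer member, so the fallback applies:
static_assert(std::is_same_v<
    detected_pointer<int, std::default_delete<int>>::type, int*>);

If Netbeans mistakenly applies this lookup to the element type rather than the deleter, that would explain why adding using pointer = C* inside C satisfies it.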
Including the using alias is entirely superficial, as evidenced by the fact that things compile without it, and by the following example:
#include <memory>
#include <iostream>

struct A {
    using pointer = A*;
    double memA;
    A() : memA(1) {}
};

struct B {
    using pointer = A*; // deliberately names A*, not B*
    double memB;
    B() : memB(2) {}
};

int main() {
    std::unique_ptr<A> pa(new A);
    std::unique_ptr<B> pb(new B);
    std::cout << pa->memA << std::endl;
    std::cout << pb->memB << std::endl;
}
outputs
1
2
As it should, even though struct B contains using pointer = A*. Netbeans actually tries to autocomplete pb-> to pb->memA, further evidence that Netbeans is using the logic proposed above.
This solution is contrived as all hell, but at least it works (in the wildly specific context that I've used it) without making changes to implementations of the STL. Who knows; I'm still confused by the convoluted type machinery within unique_ptr.
When compiling the following program with Xcode 10 GM:
#include <iostream>
#include <string>
#include <variant>

void hello(int) {
    std::cout << "hello, int" << std::endl;
}

void hello(std::string const & msg) {
    std::cout << "hello, " << msg << std::endl;
}

int main(int argc, const char * argv[]) {
    // insert code here...
    std::variant< int, std::string > var;
    std::visit
    (
        []( auto parameter )
        {
            hello( parameter );
        },
        var
    );
    return 0;
}
I get the following error:
main.cpp:27:5: Call to unavailable function 'visit': introduced in macOS 10.14
However, if I change the minimum deployment target to macOS 10.14, the code compiles fine and it works, even though I am running macOS 10.13.
Since std::visit is a function template and should not depend on the OS version (which I proved by running the code on a lower macOS version than supposedly supported), should this be considered a bug and reported to Apple, or is this expected behaviour?
The same happens when compiling for iOS (iOS 12 is the minimum expected).
All std::variant functionality that might throw std::bad_variant_access is marked as available starting with macOS 10.14 (and corresponding iOS, tvOS and watchOS) in the standard header files. This is because the virtual std::bad_variant_access::what() method is not inline and thus defined in the libc++.dylib (provided by the OS).
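Roughly, the guard in the headers looks like this (a paraphrase, not the exact header text; libc++ spells the attribute via the _LIBCPP_AVAILABILITY_BAD_VARIANT_ACCESS macro):

#include <exception>

// Sketch of libc++'s availability guard:
class __attribute__((availability(macosx, strict, introduced = 10.14)))
bad_variant_access_sketch : public std::exception
{
public:
    // Not inline: the definition lives in libc++.dylib shipped with the OS,
    // which is why the runtime's version matters for a "header-only" feature.
    const char* what() const noexcept override;
};

// std::visit, std::get, etc. carry the same attribute because they can
// throw this exception, so calling them with an older deployment target
// is rejected at compile time.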
There are several workarounds (all technically undefined behaviour), ordered by my personal preference:
1) Reach Into the Implementation
std::visit only throws if one of the variant arguments is valueless_by_exception. Looking into the implementation gives you the clue to use the following workaround (assuming vs is a parameter pack of variants):
if ((... && !vs.valueless_by_exception())) {
    std::__variant_detail::__visitation::__variant::__visit_value(visitor, vs...);
} else {
    // error handling
}
Con: Might break with future libc++ versions. Ugly interface.
Pro: The compiler will probably yell at you when it breaks and the workaround can be easily adapted. You can write a wrapper against the ugly interface.
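For example, a sketch of such a wrapper (the name visit_nothrow is hypothetical; it leans on the same libc++ internal as above, so the same breakage caveat applies):

#include <utility>
#include <variant>

// Precondition: no argument is valueless_by_exception (checked by the caller).
template <typename Visitor, typename... Variants>
decltype(auto) visit_nothrow(Visitor&& visitor, Variants&&... vs)
{
    return std::__variant_detail::__visitation::__variant::__visit_value(
        std::forward<Visitor>(visitor), std::forward<Variants>(vs)...);
}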
2) Suppress the Availability Compiler Error ...
Add _LIBCPP_DISABLE_AVAILABILITY to the project setting Preprocessor Macros (GCC_PREPROCESSOR_DEFINITIONS).
Con: This will also suppress other availability guards (shared_mutex, bad_optional_access etc.).
2a) ... and just use it
It turns out that it already works in High Sierra, not only Mojave (I've tested down to 10.13.0).
In 10.12.6 and below you get the runtime error:
dyld: Symbol not found: __ZTISt18bad_variant_access
Referenced from: [...]/VariantAccess
Expected in: /usr/lib/libc++.1.dylib
in [...]/VariantAccess
Abort trap: 6
where the first line demangles to typeinfo for std::bad_variant_access. This means the dynamic linker (dyld) can't find the vtable pointing to the what() method mentioned in the introduction.
Con: Only works on certain OS versions, you only get to know at startup time if it does not work.
Pro: Maintains original interface.
2b) ... and provide your own exception implementation
Add the following lines to one of your project source files:
// Strongly undefined behaviour (violates the one definition rule)
const char* std::bad_variant_access::what() const noexcept {
    return "bad_variant_access";
}
I've tested this for a standalone binary on 10.10.0, 10.12.6, 10.13.0, 10.14.1 and my example code works even when causing a std::bad_variant_access to be thrown, catching it by std::exception const& ex, and calling the virtual ex.what().
Con: My assumption is that this trick will break when using RTTI or exception handling across binary boundaries (e.g. different shared object libraries). But this is only an assumption and that's why I put this workaround last: I have no idea when it will break and what the symptoms will be.
Pro: Maintains original interface. Will probably work on all OS versions.
This happens because std::visit throws a bad_variant_access exception in the cases described here, and since the implementation of that exception depends on a newer version of libc++, you are required to target versions of iOS and macOS that ship this newer version (macOS 10.14 and iOS 12).
Thankfully, there is an implementation path available for when C++ exceptions are turned off, which doesn't depend on the newer libc++, so if possible you can use that option.
P.S.
Regarding the case where you increased the minimum deployment target to 10.14 and were still able to run the program normally on 10.13: I'm guessing you would run into problems at the point where this new exception is triggered (since the exception method, which relies on a newer version of libc++, would not be resolved).
Here's another alternative (that won't be palatable for some). If you're already using Boost, then you can use Boost.Variant2 when targeting iOS.
#if MACRO_TO_TEST_FOR_IOS_LT_11
#include <boost/variant2/variant.hpp>
namespace variant = boost::variant2;
#else
#include <variant>
namespace variant = std;
#endif
Then you can use variant::visit in your code.
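For instance (a sketch, assuming the namespace alias from the snippet above):

#include <iostream>
#include <string>

void example() {
    // Call sites are identical under either backend:
    variant::variant<int, std::string> v = 42;
    variant::visit([](auto const& x) { std::cout << x << '\n'; }, v);
}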
I'm still working out the kinks to test for the iOS target version (and if we're targeting iOS at all). That's why I used MACRO_TO_TEST_FOR_IOS_LT_11 above, as a placeholder.
Similarly, you can use the abseil-cpp libraries to seamlessly get std::variant where it is available and absl::variant where it isn't.
Abseil is an open-source collection of C++ code designed to augment the C++ standard library.
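For illustration, the equivalent with Abseil (a sketch; absl::variant and absl::visit live in absl/types/variant.h and alias the std versions on toolchains where those are usable):

#include "absl/types/variant.h"
#include <iostream>
#include <string>

int main() {
    // absl::variant is std::variant where the standard one is available,
    // and Abseil's own implementation otherwise.
    absl::variant<int, std::string> v = std::string("hi");
    absl::visit([](auto const& x) { std::cout << x << '\n'; }, v);
}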
Add
CXXFLAGS += -D_LIBCPP_DISABLE_AVAILABILITY
to your Makefile.
See some of the other posts for details on the pros and cons, but this will get the code to compile and run.
Even though templates generally come from headers, that doesn't mean the runtime target doesn't matter. Those templates are part of a wider library, and they compile to code that must still be compatible with the rest of that library. It makes sense for the whole standard library to be one single version, and for that version to be the one that will work on the target machine. Can you imagine the chaos that would ensue otherwise?
Some of the others here have given low-level, practical reasons why version unity matters in this particular case. Personally, I think it's best to forget about implementation details like "templates go in headers" in situations like this; you shouldn't need to care about them, plus you risk making abstraction-breaking assumptions for little benefit. Just code to the contract and you'll be fine.
I have a problem with my current program. For some reason it always crashes after the final line of code on Windows; I get an "application is no longer responding" error or something like that.
So I tried the Intel Inspector. Luckily, it pointed out some serious errors in my project where I accessed uninitialized memory.
Besides these obvious problems that I understand, I also get:
Incorrect memcpy calls in: boost::algorithm::trim()
Uninitialized partial memory access in: myptree.get<boost::posix_time::ptime>("path.to.node") where myptree is of type boost::property_tree::ptree
Uninitialized memory access in: cout << myptime where myptime is of type boost::posix_time::ptime
...
Does this mean that I am using the Boost library functions improperly, or are these false positives?
I'm just confused, because the functions work, they do what I want them to do, and I get no error message.
I also get a "Memory not deallocated" warning at the end (from an [Unknown] source).
Example for trim:
#include <iostream>
#include <boost/algorithm/string.hpp>

int main() {
    std::string test = " test ";
    boost::algorithm::trim(test);
    std::cout << test << std::endl;
    return 0;
}
gives me an incorrect memcpy call...
Boost will happily forward bad arguments; it often has no way to check them. If boost::algorithm::trim passes a bad argument to memcpy, it will be because you passed a bad argument to trim.
So, yes, you should worry. There are almost certainly multiple bugs in your program. Check your calls to the functions reported.
I have a C++ client to a C++/CLI DLL, which initializes a series of C# DLLs.
This used to work. The code that is failing has not changed. The code that has changed is not called before the exception is thrown. My compile environment has changed, but recompiling on a machine with an environment similar to my old one still failed. (EDIT: as we see in the answer this is not entirely true, I was only recompiling the library in the old environment, not the library and client together. The client projects had been upgraded and couldn't easily go back.)
Someone besides me recompiled the library, and we started getting memory management issues: "The pointer passed in as a String must not be in the bottom 64K of the process's address space." I recompiled it, and all worked well with no code changes. (Alarm #1) Recently it was recompiled, and the memory management issues with strings reappeared, and this time they're not going away. The new error is "Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt."
I'm pretty sure the problem is not located where I see the exception, the code didn't change between the successful and failing builds, but we should review that to be complete. Ignore the names of things, I don't have much control over the design of what it's doing with these strings. And sorry for the confusion, but note that _bridge and bridge are different things. Lots of lines of code missing because this question is already too long.
Defined in library:
struct Config
{
    std::string aye;
    std::string bee;
    std::string sea;
};

extern "C" __declspec(dllexport) BridgeBase_I* __stdcall Bridge_GetConfiguredDefaultsImplementationPointer(
    const std::vector<Config> & newConfigs, /**< new configurations to apply **/
    std::string configFolderPath,           /**< folder to write config files in **/
    std::string defaultConfigFolderPath,    /**< folder to find default config files in **/
    std::string & status                    /**< output status of config parse **/
);
In client function:
GatewayWrapper::Config bridge;
std::string configPath("./config");
std::string defaultPath("./config/default");
GatewayWrapper::Config gwtransport;
bridge.aye = "bridged.dll";
bridge.bee = "1.0";
bridge.sea = "";
configs.push_back(bridge);
_bridge = GatewayWrapper::Bridge_GetConfiguredDefaultsImplementationPointer(configs, configPath, defaultPath, status);
Note that the call to the library that is crashing is in the same scope as the vector declaration, struct declaration, string assignments, and vector push_back.
There are no threading calls in this section of code, but there are other threads running doing other things. There is no pointer math here, there are no heap allocations in the area except perhaps inside the standard library.
I can run the code up to the Bridge_GetConfiguredDefaultsImplementationPointer call in the debugger, and the contents of the configs vector look correct in the debugger.
Back in the library, in the first sub-function, where the debugger doesn't shine, I've broken the failing statement down into several console prints.
System::String^ temp;
List<CConfig^>^ configs = gcnew List<CConfig^>((INT32)newConfigs.size());
for (int i = 0; i < newConfigs.size(); i++)
{
    std::cout << newConfigs[i].aye << std::flush;          // prints
    std::cout << newConfigs[i].aye.c_str() << std::flush;  // prints
    temp = gcnew System::String(newConfigs[i].aye.c_str());
    System::Console::WriteLine(temp);                      // prints
    std::cout << "Testing string creation" << std::endl;   // prints
    std::cout << newConfigs[i].bee << std::flush;          // crashes here
}
I get the same exception on access of bee if I move the newConfigs[i].bee out above the assignment of temp or comment out the list declaration/assignment.
Just for reference that the std::string in a struct in a vector should have arrived at its destination OK:
Is std::vector copying the objects with a push_back?
std::string in struct - Copy/assignment issues?
http://www.cplusplus.com/reference/vector/vector/operator=/
Assign one struct to another in C
Why this exception is not caught by my try/catch
https://stackoverflow.com/a/918891/2091951
Generic AccessViolationException related questions
How to handle AccessViolationException
Programs randomly getting System.AccessViolationException
https://connect.microsoft.com/VisualStudio/feedback/details/819552/visual-studio-debugger-throws-accessviolationexception
finding the cause of System.AccessViolationException
https://msdn.microsoft.com/en-us/library/ms164911.aspx
Catching access violation exceptions?
AccessViolationException when using C++ DLL from C#
Suggestions in the above questions
Change to .NET 3.5, change target platform - these solutions could have serious issues with a large multi-project solution.
HandleProcessCorruptedStateExceptions - does not work in C++, this decoration is for C#, catching this error could be a very bad idea anyway
Change legacyCorruptedStateExceptionsPolicy - this is about catching the error, not preventing it
Install .NET 4.5.2 - can't, already have 4.6.1. Installing 4.6.2 did not help. Recompiling on a different machine that didn't have 4.5 or 4.6 installed did not help. (Although this used to compile and run on my machine before installing Visual Studio 2013, which strongly suggests the .NET library is an issue?)
VSDebug_DisableManagedReturnValue - I only see this mentioned in relation to a specific crash in the debugger, and the help from Microsoft says that other AccessViolationException issues are likely unrelated. (http://connect.microsoft.com/VisualStudio/feedbackdetail/view/819552/visual-studio-debugger-throws-accessviolationexception)
Change Comodo Firewall settings - I don't use this software
Change all the code to managed memory - Not an option. The overall design of calling C# from C++ through C++/CLI is resistant to change. I was specifically asked to design it this way to leverage existing C# code from existing C++ code.
Make sure memory is allocated - memory should be allocated on the stack in the C++ client. I've attempted to make the vector a non-reference parameter, to force a vector copy into explicitly library-controlled memory space; it did not help.
"Access violations in unmanaged code that bubble up to managed code are always wrapped in an AccessViolationException." - Fact, not a solution.
"but it was the mismatch, not the specific version, that was the problem"
Yes, that's black letter law in VS. You unfortunately just missed the counter-measures that were built into VS2012 to turn this mistake into a diagnosable linker error. Previously (up to and including VS2010), the CRT would allocate its own heap with HeapAlloc(). Now (in VS2013), it uses the default process heap, the one returned by GetProcessHeap().
That is in itself enough to trigger an AVE when you run your app on Vista or higher: allocating memory from one heap and releasing it in another triggers an AVE at runtime, and a debugger break when you debug with the Debug Heap enabled.
This is not where it ends, another significant issue is that the std::string object layout is not the same between the versions. Something you can discover with a little test program:
#include <string>
#include <iostream>

int main()
{
    std::cout << sizeof(std::string) << std::endl;
    return 0;
}
VS2010 Debug : 32
VS2010 Release : 28
VS2013 Debug : 28
VS2013 Release : 24
I have a vague memory of Stephan T. Lavavej mentioning the std::string object size reduction, very much presented as a feature, but I can't find it back. The extra 4 bytes in the Debug build are caused by the iterator debugging feature; it can be disabled with _HAS_ITERATOR_DEBUGGING=0 in the Preprocessor Definitions. Not a feature you'd quickly want to throw away, but it makes mixing Debug and Release builds of the EXE and its DLLs quite lethal.
Needless to say, the difference in object sizes seriously bytes when the Config object is created in a DLL built with one version of the standard C++ library and used in another. Many mishaps; the most basic one is that the code will simply read the Config::bee member from the wrong offset. An AVE is (almost) guaranteed. There's lots more misery when code allocates the small flavor of the Config object but writes the large flavor of std::string; that randomly corrupts the heap or the stack frame.
Don't mix.
I believe VS2013 introduced a lot of changes in the internal data formats of STL containers, as part of a push to reduce memory usage and improve performance. I know vector became smaller, and string is basically a glorified vector<char>.
Microsoft acknowledges the incompatibility:
"To enable new optimizations and debugging checks, the Visual Studio
implementation of the C++ Standard Library intentionally breaks binary
compatibility from one version to the next. Therefore, when the C++
Standard Library is used, object files and static libraries that are
compiled by using different versions can't be mixed in one binary (EXE
or DLL), and C++ Standard Library objects can't be passed between
binaries that are compiled by using different versions."
If you're going to pass std::* objects between executables and/or DLLs, you absolutely must ensure that they are using the same version of the compiler. It would be well-advised to have your client and its DLLs negotiate in some way at startup, comparing any available versions (e.g. compiler version + flags, boost version, directx version, etc.) so that you catch errors like this quickly. Think of it as a cross-module assert.
If you want to confirm that this is the issue, you could pick a few of the data structures you're passing back and forth and check their sizes in the client vs. the DLLs. I suspect your Config class above would register differently in one of the fail cases.
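A minimal sketch of such a cross-module check (the probe struct and function names are hypothetical; the point is that each module reports sizeof as computed by its own compiler):

#include <cstddef>
#include <stdexcept>
#include <string>

struct Config { std::string aye, bee, sea; }; // as in the shared header

// Plain integers only, so the probe itself is layout-stable across runtimes.
struct AbiProbe
{
    std::size_t stringSize;
    std::size_t configSize;
};

// Compiled into the DLL with the DLL's toolchain:
extern "C" __declspec(dllexport) AbiProbe __stdcall Bridge_GetAbiProbe()
{
    return { sizeof(std::string), sizeof(Config) };
}

// Called at startup by the client, compiled with the client's toolchain:
void CheckAbiCompatibility()
{
    const AbiProbe dllSide = Bridge_GetAbiProbe();
    if (dllSide.stringSize != sizeof(std::string) ||
        dllSide.configSize != sizeof(Config))
        throw std::runtime_error("client and DLL built with incompatible C++ runtimes");
}

A mismatch here would have flagged the VS2010/VS2013 mix at startup instead of surfacing later as a seemingly random AccessViolationException.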
I'd also like to mention that it is probably a bad idea in the first place to use smart containers in DLL calls. Unless you can guarantee that the app and DLL won't try to free or reallocate the internal buffers of the other's containers, you could easily run into heap corruption issues, since the app and DLL each have their own internal C++ heap. I think that behavior is considered undefined at best. Even passing const& arguments could still result in reallocation in rare cases, since const doesn't prevent a compiler from diddling with mutable internals.
You seem to have memory corruption. Microsoft Application Verifier is invaluable in finding corruption. Use it to find your bug:
Install it to your dev machine.
Add your exe to it.
Only select Basics\Heaps.
Press Save. It doesn't matter if you keep application verifier open.
Run your program a few times.
If it crashes, debug it, and this time the crash will point to your problem, not just some random location in your program.
PS: It's a great idea to have Application Verifier enabled at all times for your development project.
Microsoft has a decades-old bug when using its leak check gear in debug builds. The leak is reported for the allocation made by the runtime library when using C++ type information, like typeinfo.name(). Also see Memory leaks reported by debug CRT inside typeinfo.name() on Microsoft Connect.
We've been getting error reports and user-list discussions because of the leaks for about the same amount of time. The Microsoft bug could also mask real leaks from user programs. The latter point is especially worrisome to me because we may not be tending to real problems because of the masking.
I'd like to try to squash the leaks due to use of typeid(T) and typeinfo.name(). My question is, how can we work around Microsoft's bug? Is there a work around available?
Along the lines of my suggestion in the question's comments:
For if (valueType == typeid(int)) you can use std::type_index (available since C++11).
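For example, a minimal sketch (is_int is a hypothetical helper):

#include <typeindex>
#include <typeinfo>

// type_index wraps a type_info and supports ==, <, and hashing without
// ever going through the leak-prone name() call.
bool is_int(const std::type_info& valueType)
{
    return std::type_index(valueType) == std::type_index(typeid(int));
}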
For type_info.name() leaking memory:
Since totally eliminating the leak doesn't seem possible, the next best thing is to reduce the number of leaks (to only one leaked allocation per type interrogated) and, secondarily, to tag them for reporting purposes. Being inside a templated class, one can hope that the leak report will use the class name (or at least the source file where the allocation happened); you can subsequently use this information to filter them out of the "all leaked memory" reports.
So instead of using typeid(<typename>).name() directly, you use something like:
"file typeid_name_workaround.hpp"
#include <typeinfo>

template <typename T>
struct get_type_name {
    static const char* name() {
        // At most one leaked allocation per T, cached for reuse.
        static const char* ret = typeid(T).name();
        return ret;
    }
};
In another .cpp/.hpp file:
#include "typeid_name_workaround.hpp"
struct dummy {
};
int main() {
// instead of typeid(dummy).name() you use
get_type_name<dummy>::name();
}