I'm working on a multi-platform codebase, and on one of the platforms, sprintf_s isn't available, but snprintf does exist, so in this case the solution is to have the line
#define sprintf_s snprintf
However, I'd like to either automatically revert this or get a compile-time error (so I can do it manually) should the platform ever implement sprintf_s.
I've found multiple questions here to detect if a class has a member function defined (or an overload exists of a stream operator), but none for a function like sprintf_s.
(I'd rather not use anything experimental, but if std::experimental::is_detected is the only solution, so be it.)
The ideal solution looks something like
if !sprintf_s exists
#define sprintf_s snprintf
but something like the following would also be acceptable
static_assert(!sprintf_s_exists, "sprintf_s is now defined");
An implementation that provides sprintf_s() should define the macro __STDC_LIB_EXT1__ in <stdio.h>. You should also define __STDC_WANT_LIB_EXT1__ to 1 before including the header yourself, since implementations are allowed to hide the Annex K declarations otherwise.
You can also check for implementations you are sure support it, such as MSVC with a minimum version number, and enable it conditionally only for those.
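Combining the two checks, a minimal sketch (with the caveats that MSVC provides sprintf_s without defining __STDC_LIB_EXT1__, and that snprintf truncates where sprintf_s would report an error, so the fallback is not a perfect drop-in):

#define __STDC_WANT_LIB_EXT1__ 1  /* ask for the Annex K declarations */
#include <stdio.h>

#if !defined(__STDC_LIB_EXT1__) && !defined(_MSC_VER)
/* Neither Annex K nor MSVC's sprintf_s is available: fall back. */
#define sprintf_s snprintf
#endif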
The more general approach is what autoconf traditionally did: attempt to compile and run a small program that calls the function you're testing for, and check the result. If the program compiles and runs as expected, the configure script adds a macro such as HAS_SPRINTF_S to the configuration header, and the program can then test for that.
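For illustration, such a test program can be as small as this (HAS_SPRINTF_S above is just an example name your configure step would emit on success):

/* feature test: compiles, links and runs only if sprintf_s is usable */
#define __STDC_WANT_LIB_EXT1__ 1
#include <stdio.h>

int main(void)
{
    char buf[16];
    sprintf_s(buf, sizeof buf, "%d", 42);
    return 0;
}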
This kind of problem is, generally speaking, not uncommon. Many systems solve it during the configure phase of the build (when generating platform-specific make or ninja files). There you would normally provide the build-system generator with a small "feature test" program which either compiles fine or fails to compile, and you base the logic of the build system on the outcome (usually by having it define the required compiler macros).
In CMake, something similar to the above is called cmake-compile-features.
Related
I was trying to add a function declaration for something provided by the system.
However, the function prototype returns size_t, which is a 32-bit integer on 32-bit platforms and a 64-bit integer on 64-bit platforms.
I'd like to know if there is a way to detect the target platform and add the declaration accordingly.
After a bit of research: LLVM IR, as a target-neutral language, cannot possibly know target-specific type sizes. Have a look at this relevant discussion where Chris Lattner comments on the subject, and also at this relevant SO question.
So this is the job of the front end, and it causes extra bookkeeping information that front ends need to "know" about a target and its ABI. This is why, for example, you might have a need for projects like this in the case of the Loci programming language.
Now, specifically for size_t, according to this:
[...] std::size_t can safely store the value of any non-member pointer, in which case it is synonymous with std::uintptr_t.
So, you could use the getIntPtrType method of the DataLayout class.
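A minimal sketch of that, assuming a reasonably recent LLVM where the module carries its DataLayout:

#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Module.h"

// Integer type wide enough to hold a pointer - a reasonable stand-in
// for size_t on common ABIs.
llvm::IntegerType *getSizeTType(llvm::Module &M) {
  const llvm::DataLayout &DL = M.getDataLayout();
  return DL.getIntPtrType(M.getContext());
}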
For any other data types, I'm not sure how far "guessing" can get you (probably not very far judging from the previous references).
Lastly, another alternative could be extending LLVM with a custom intrinsic (see memcpy for example), which inevitably requires a specific definition per target.
For actually adapting your integer type creation, you could use the sizeof operator along with CHAR_BIT in order to provide the correct number of bits in the getIntNTy call.
This will get you as far as using the right size for the integer type on the platform where your module pass is built.
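In code, that boils down to something like this (again, this reflects the host that builds the pass, not necessarily the target):

#include <climits>  // CHAR_BIT
#include <cstddef>  // std::size_t
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Type.h"

// Integer type with as many bits as the host's size_t.
llvm::IntegerType *hostSizeTType(llvm::LLVMContext &Ctx) {
  return llvm::Type::getIntNTy(Ctx, sizeof(std::size_t) * CHAR_BIT);
}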
For detecting a type's size "dynamically" on the platform where your pass is being run, I know of no other way than providing that information in some sort of configuration file.
However, this can be automated: following the example of various build systems (e.g. CMake, which is also used by LLVM), you can craft a simple program that is compiled during configuration and use it to generate that file.
To that end, and to make this as portable as possible without reinventing the wheel, you can use CMake's CheckTypeSize module.
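For example, if the configure step writes the measured size into a generated header (the header and macro names here are hypothetical), the pass can consume it like this:

#include <climits>
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Type.h"
#include "config.h"  // hypothetical: generated at configure time, defines SIZEOF_SIZE_T

llvm::IntegerType *targetSizeTType(llvm::LLVMContext &Ctx) {
  return llvm::Type::getIntNTy(Ctx, SIZEOF_SIZE_T * CHAR_BIT);
}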
When I first heard about it, it sounded like a great feature - a C++ REPL. However, it cannot call STL functions or methods, and has a whole lot of other issues. This question also applies to conditional breakpoints.
Is it still an experimental feature, or have the developers just dropped it?
Example:
(lldb) p iterator->aField
error: call to a function 'std::__1::__wrap_iter<aClass const*>::operator->() const' ('_ZNKSt3__111__wrap_iterIPK8aClassEptEv') that is not present in the target
error: 0 errors parsing expression
error: The expression could not be prepared to run in the target
At present, there's no good way for the debugger to generate methods of template specializations for which the compiler only emitted inlined versions. And the debugger can't call inlined methods.
Here's one limited trick (though it requires C++11) that you can use to force the compiler to generate full copies of the relevant template class so that there are functions the debugger can call. For instance, if I put:
template class std::vector<int>;
in my source code somewhere, the compiler will generate real copies of all the functions in the int specialization of std::vector. This obviously isn't a full solution, and you should only do this in debug builds or it will bloat your code. But when there are a couple of types whose methods you really need to call, it's a useful trick to know.
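A hedged way to apply the trick, using the standard NDEBUG convention so that release builds are unaffected:

#include <vector>

#ifndef NDEBUG
// Debug builds only: force out-of-line definitions of every
// std::vector<int> member so the debugger has something to call.
template class std::vector<int>;
#endif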
You mention a "whole lot of other issues". Please file bugs on any expression parser issues you find in lldb, either with the lldb bugzilla: https://llvm.org/bugs, or Apple's bug reporter: http://bugreporter.apple.com. The expression parser is under active development
We have several C++ functions that will be implemented in phase 2 of our project and that are part of the public interface of their respective classes and modules. Because they are part of the public interface, we think they should be present, at least in the headers, during phase 1 so that we are still thinking about them as we implement the rest of the classes. However, since they are unimplemented, we want no one to call them. We would like this check to occur at compile time, to ensure correctness.
My desires are:
Compile time (could be an error or warning; warnings are better because they are more flexible - we can selectively turn them off)
Works on G++ 4.8.1 and doesn't kill the build under Visual Studio 2013 (we use Visual Studio/VisualAssistX only as an editor, but the refactoring tools don't work without building)
Not too hard to understand what was done and why
Functions are present in class documentation (we can include some \warning not implemented in phase 1 notation for doxygen to pick up)
I am considering three options:
A belt-and-suspenders approach of marking them as deprecated (which will generate a warning) and throwing a custom exception (sketched after this list) - this is almost what I want, except the compiler warning that a method is "deprecated" is the opposite of the real situation: a deprecated method works now but won't work later; these methods will work later but do not work now
Another answer shows how to forbid using a function while still having it exist - this is good, but unreadable and hard to search for. Plus, it is a compile-time error - we can't just let some functions call it if we change our minds - it is all or nothing. And making every unimplemented function a template makes me wonder if the trick will always work. For example, virtual functions can't be templates.
Just putting them in as a comment - Keeps people from calling them, but they also don't show up in auto-generated documentation (and we can't decide later to have selective calling)
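Here is a minimal sketch of the first option (names are illustrative; G++ 4.8 and VS2013 predate C++14's [[deprecated]], so the compiler-specific spellings are wrapped in a macro):

#include <stdexcept>

#if defined(_MSC_VER)
#define PHASE2_ONLY __declspec(deprecated("not implemented until phase 2"))
#else
#define PHASE2_ONLY __attribute__((deprecated("not implemented until phase 2")))
#endif

class Widget {
public:
    /// \warning not implemented in phase 1
    PHASE2_ONLY int futureFeature();
};

int Widget::futureFeature() {
    // Belt and suspenders: callers who ignore the warning get this at run time.
    throw std::logic_error("Widget::futureFeature: not implemented in phase 1");
}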
Is there a better way? And if not is there a reason to prefer the template or comment options over the deprecated option?
As alternatives:
You can just declare them without a definition, so you get a link error.
You may then provide a not_yet_implemented library with empty definitions to allow premature usage of these functions.
or
Mark your methods as deleted (= delete), possibly wrapping that in a macro:
#define NOT_YET_IMPLEMENTED = delete
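Usage would look like this sketch; with = delete, any call site fails to compile, while the declaration still appears in the header and in the Doxygen output:

#define NOT_YET_IMPLEMENTED = delete

class Widget {
public:
    /// \warning not implemented in phase 1
    void futureFeature() NOT_YET_IMPLEMENTED;
};

// Widget w;
// w.futureFeature();  // error: use of deleted function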
I am using Boost 1.46 with the Turtle lib 1.2.4 and the compiler from Visual Studio Express 2013. I have the following class to mock:
struct IPredicate
{
virtual ~IPredicate() {}
virtual bool operator()(float value) = 0;
};
When I mock operator() with MOCK_NON_CONST_METHOD:
MOCK_BASE_CLASS(MockPredicate, IPredicate)
{
MOCK_NON_CONST_METHOD(operator(), 1, bool(float), id)
};
I got a bunch of compiler errors, e.g. syntax error: 'operator', and so on. But when I mock it with MOCK_NON_CONST_METHOD_EXT:
MOCK_BASE_CLASS(MockPredicate, IPredicate)
{
MOCK_NON_CONST_METHOD_EXT(operator(), 1, bool(float), id)
};
everything is OK and works perfectly! According to http://turtle.sourceforge.net/turtle/reference.html, mocks with the EXT suffix are for "compilers without support for variadic macros", but the compiler I am using does support them (I checked with these examples: http://msdn.microsoft.com/en-us/library/ms177415.aspx). The rest of the documentation isn't really clear about this case.
Can anyone explain what is going on here? Why do I get the errors when I don't use the EXT-suffixed mock version?
The stickler's answer would be that, generally, there are no guarantees with respect to variadic macros, as variadic macros are non-standard in C++03 (but are standard in C++11). So if you have a method which avoids variadic macros, you should definitely use it instead of the one with the variadic macros.
Practically, though, it is very likely that the Turtle library hasn't been extensively tested with MSVC and is simply relying on one of the non-standard gcc extensions for the macros. The extensions are discussed on the Variadic Macros page: http://gcc.gnu.org/onlinedocs/cpp/Variadic-Macros.html. Specifically, for the Turtle library to be portable across all C99-conformant compilers, only __VA_ARGS__ could be used.
With macros, when you are after the root cause, use MSVC's /P switch (Preprocess to a File) to generate an .i file with the preprocessor directives expanded, where you can check what it's unhappy about.
Update. As I finished writing this long story, I decided to quickly download Turtle and check how the macro is defined. And as I did, I discovered that this is simply a sad case of unmaintained documentation. Running grep on the library includes, I couldn't find MOCK_NON_CONST_METHOD defined at all. That's why you are getting syntax errors. Another reason to avoid macros - clarity and sanity of C++ error messages.
(I'm the author of turtle)
What happened with 1.2.4 is that, for a reason I didn't really investigate, the code provided is actually 1.2.1 packaged with the 1.2.4 documentation.
As nobody complained by opening a ticket directly on SourceForge, I didn't notice until quite some time had passed (all my personal and company projects using Turtle are continuously integrated with the latest source code).
Anyway, I just tested your code and it compiles with MSVC 2013, turtle 1.2.6 and boost 1.55. If you haven't already done so by the time, you should consider upgrading.
The format of the output of type_info::name() is implementation specific.
namespace N { struct A; }
const N::A *a;
typeid(a).name(); // returns e.g. "const struct N::A" but compiler-specific
Has anyone written a wrapper that returns dependable, predictable type information that is the same across compilers? Multiple templated functions would allow the user to get specific information about a type. So I might be able to use:
MyTypeInfo::name(a); // returns "const struct N::A *"
MyTypeInfo::base(a); // returns "A"
MyTypeInfo::pointer(a); // returns "*"
MyTypeInfo::nameSpace(a); // returns "N"
MyTypeInfo::cv(a); // returns "const"
These functions are just examples; someone with better knowledge of the C++ type system could probably design a better API. The one I'm most interested in is base(). All functions would raise an exception if RTTI was disabled or an unsupported compiler was detected.
This seems like the sort of thing that Boost might implement, but I can't find it in there anywhere. Is there a portable library that does this?
There are some limitations to doing such things in C++, so you probably won't find exactly what you want in the near future. The meta-information about the types that the compiler inserts into the compiled code is also specific to the runtime library used by the compiler, so it'd be difficult for a third-party library to do a good job without relying on undocumented features of each specific compiler, which might break in later versions.
The Qt framework has, to my knowledge, the nearest thing to what you intend. But it does that completely independently of RTTI. Instead, it has its own "compiler" (moc) that parses the source code and generates additional source modules with the meta-information. You then compile and link these modules along with your program and use their API to get the information. Take a look at http://doc.qt.nokia.com/latest/metaobjects.html
Jeremy Pack (from Boost Extension plugin framework) appears to have written such a thing:
http://blog.redshoelace.com/2009/06/resource-management-across-dll.html
3. RTTI does not always function as expected across DLL boundaries. Check out the type_info classes to see how I deal with that.
So you could have a look there.
PS. I remembered because I once fixed a bug in that area; this might still add information so here's the link: https://stackoverflow.com/a/5838527/85371
GCC has __cxa_demangle: https://gcc.gnu.org/onlinedocs/libstdc++/manual/ext_demangling.html
If there are such extensions for all compilers you target, you could use them to write a portable function with macros to detect the compiler.
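As a sketch, a wrapper over __cxa_demangle that degrades gracefully on compilers without it (MSVC's type_info::name() already returns a human-readable string):

#include <cstdlib>
#include <string>
#include <typeinfo>
#if defined(__GNUG__)
#include <cxxabi.h>
#endif

std::string demangled(const std::type_info &ti) {
#if defined(__GNUG__)
    int status = 0;
    char *raw = abi::__cxa_demangle(ti.name(), nullptr, nullptr, &status);
    std::string result = (status == 0 && raw) ? raw : ti.name();
    std::free(raw);  // __cxa_demangle allocates with malloc
    return result;
#else
    return ti.name();
#endif
}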