We have a module that pulls in third-party code, specifically sqlite_modern_cpp, although I don't think that is particularly important. What is important is that the code uses C++ feature-test macros, and specifically tests __cpp_lib_uncaught_exceptions to know whether std::uncaught_exceptions is available.
So far so good, except that we are now looking to move our C++ standard from C++14 to C++17. On iOS builds (I don't know about other targets) the macro is suddenly defined. However, we target a minimum iOS version of 9.3, and with that, the compiler reports an error saying the minimum iOS version required is 10.0.
I would prefer, if possible, not to touch the third-party code, so my ideal solution would be to tell the compiler not to define __cpp_lib_uncaught_exceptions, so that the code falls back to its previous behaviour. Is there a clean way to do that?
Feature-test macros are just normal macros, which you can #undef. Include <exception> first so the macro gets defined, then #undef it, then include the library. sqlite_modern_cpp is a header-only library, so this should cause no problems.
#include <exception>                  // defines __cpp_lib_uncaught_exceptions
#undef __cpp_lib_uncaught_exceptions  // hide the feature again
#include "sqlite_modern_cpp.h"        // now takes its pre-C++17 fallback path
This is intended to be handled by the availability mechanism in libc++. For example, __cpp_lib_shared_mutex is controlled by the preprocessor symbol _LIBCPP_AVAILABILITY_DISABLE_FTM___cpp_lib_shared_mutex, which is defined according to the minimum declared target versions of the various Apple OSes.
It should be fairly easy to copy this mechanism to define a corresponding _LIBCPP_AVAILABILITY_DISABLE_FTM___cpp_lib_uncaught_exceptions that disables __cpp_lib_uncaught_exceptions; you would need to patch the headers <version> and <__availability>. You might want to update Clang issue 39631, which was closed back in 2018 (incorrectly, I believe, but maybe the availability mechanism didn't exist back then).
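A minimal sketch of such a patch, mirroring the shared_mutex pattern (the Apple deployment-target macro and the 10.0 threshold encoding are illustrative assumptions, not checked against a particular libc++ release):

// In <__availability>: define the kill switch when the iOS deployment
// target predates the OS release that ships std::uncaught_exceptions.
#if defined(__ENVIRONMENT_IPHONE_OS_VERSION_MIN_REQUIRED__) && \
    __ENVIRONMENT_IPHONE_OS_VERSION_MIN_REQUIRED__ < 100000 // iOS 10.0
#  define _LIBCPP_AVAILABILITY_DISABLE_FTM___cpp_lib_uncaught_exceptions
#endif

// In <version>: only advertise the feature when it has not been disabled.
#if !defined(_LIBCPP_AVAILABILITY_DISABLE_FTM___cpp_lib_uncaught_exceptions)
#  define __cpp_lib_uncaught_exceptions 201411L
#endif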
Say I have the below (very simple) code.
#include <iostream>

int main() {
    std::cout << std::stoi("12");
}
This compiles fine on both g++ and clang; however, it fails to compile on MSVC with the following error:
error C2039: 'stoi': is not a member of 'std'
error C3861: 'stoi': identifier not found
I know that std::stoi is part of the <string> header, which presumably the two former compilers include as part of <iostream> and the latter does not. According to the C++ standard [res.on.headers]
A C++ header may include other C++ headers.
Which, to me, basically says that all three compilers are correct.
This issue arose when one of my students submitted work, which the TA marked as not compiling; I of course went and fixed it. However, I would like to prevent future incidents like this. So, is there a way to determine which header files should be included, short of compiling on three different compilers to check every time?
The only way I can think of is to ensure that for every std function call, an appropriate include exists; but if you have existing code which is thousands of lines long, this may be tedious to search through. Is there an easier/better way to ensure cross-compiler compatibility?
Example with the three compilers: https://godbolt.org/z/kJhS6U
Is there an easier/better way to ensure cross-compiler compatibility?
This is always going to be a bit of a chore if you have a huge codebase and haven't been doing this so far, but once you've gone through fixing your includes, you can stick to a simple procedure:
When you write new code that uses a standard feature, like std::stoi, plug that name into Google, go to the cppreference.com article for it, then look at the top to see which header it's defined in.
Then include that, if it's not already included. Job done!
(You could use the standard for this, but that's not as accessible.)
Do not be tempted to sack it all off in favour of cheap, unportable hacks like <bits/stdc++.h>!
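For the example above, cppreference lists std::stoi under <string>, so the fix is one extra include:

#include <iostream>
#include <string> // std::stoi is declared here

int main() {
    std::cout << std::stoi("12");
}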
tl;dr: documentation
Besides reviewing documentation and doing this manually (painful and time-consuming), you can use tools that do it for you.
You can use ReSharper in Visual Studio, which is capable of organizing includes (in fact, VS without ReSharper is not very usable). If an include is missing, it recommends adding it, and if one is obsolete, the include line is shown in paler colors.
Or you can use CLion (available for all platforms), which also has this capability (it is in fact from the same manufacturer, JetBrains).
There is also a tool called include-what-you-use, but its aim is to take advantage of forward declarations; I never used it personally (a teammate ran it on our project).
There are already two compilers that support C++ modules:
Clang: http://clang.llvm.org/docs/Modules.html
MS VS 2015: http://blogs.msdn.com/b/vcblog/archive/2015/12/03/c-modules-in-vs-2015-update-1.aspx
When starting a new project now, what should I pay attention to in order to be able to adopt the modules feature when it is eventually released in my compiler?
Is it possible to use modules and still maintain compatibility with older compilers that do not support it?
There are already two compilers that support C++ modules
clang: http://clang.llvm.org/docs/Modules.html
MS VS 2015: http://blogs.msdn.com/b/vcblog/archive/2015/12/03/c-modules-in-vs-2015-update-1.aspx
The Microsoft approach appears to be the one gaining the most traction, mainly because Microsoft is throwing a lot more resources at its implementation than any of the clang folk currently. See https://llvm.org/bugs/buglist.cgi?list_id=100798&query_format=advanced&component=Modules&product=clang for what I mean: there are some big showstopper bugs in Modules for C++, whereas Modules for C, or especially Objective-C, look much more usable in real-world code. Visual Studio's biggest and most important customer, Microsoft itself, is pushing hard for Modules because it solves a whole ton of internal build-scalability problems, and Microsoft's internal code is some of the hardest C++ to compile anywhere in existence, so you can't throw any compiler other than MSVC at it (e.g. good luck getting clang or GCC to compile 40k-line functions). The clang build tricks used by Google etc. therefore aren't available to Microsoft, and it has a huge pressing need to get this fixed sooner rather than later.
This isn't to say there aren't some serious design flaws with the Microsoft proposal when applied in practice to large real world code bases. However Gaby is of the view you should refactor your code for Modules, and whilst I disagree, I can see where he is coming from.
When starting a new project now, what should I pay attention to in order to be able to adopt the modules feature when it is eventually released in my compiler?
In so far as Microsoft's compiler is currently expected to implement Modules, you ought to make sure your library is usable in all of these forms:
Dynamic library
Static library
Header only library
Something very surprising to many people is that C++ Modules, as currently expected to be implemented, keep those distinctions, so you now get a C++ Module variant of all three of the above, with the first looking most like what people expect a C++ Module to be and the last looking most like a more useful precompiled header. The reason you ought to support those variants is that you can reuse most of the same preprocessor machinery to also support C++ Modules with very little extra work.
A later Visual Studio will allow linking the module definition file (the .ifc file) as a resource into DLLs. This will finally eliminate the need for the .lib and .dll distinction on MSVC: you supply a single DLL to the compiler and it all "just works" on module import, with no headers or anything else needed. This of course smells a bit like COM, but without most of the benefits of COM.
Is it possible to use modules in a single codebase and still maintain compatibility with older compilers that do not support it?
I'm going to assume you meant the question as amended above, with "in a single codebase" inserted.
The answer is generally yes with even more preprocessor macro fun. #include <someheader> can turn into an import someheader within the header because the preprocessor still works as usual. You can therefore mark up individual library headers with C++ Modules support along something like these lines:
// someheader.hpp
#if MODULES_ENABLED
# ifndef EXPORTING_MODULE
import someheader; // Bring in the precompiled module from the database
// Do NOT set NEED_DEFINE, so this include exits without doing anything more
# else
// We are at the module-generation stage, so mark up the namespace for export
#  define SOMEHEADER_DECL export
#  define NEED_DEFINE
# endif
#else
// Modules are not turned on, so declare everything inline as per the old way
# define SOMEHEADER_DECL
# define NEED_DEFINE
#endif

#ifdef NEED_DEFINE
SOMEHEADER_DECL namespace someheader
{
    // usual classes and decls here
}
#endif
Now in your main.cpp or whatever, you simply do:
#include "someheader.hpp"
... and if the compiler was invoked with /experimental:modules /DMODULES_ENABLED, then your application automagically uses the C++ Modules edition of your library. If it wasn't, you get inline inclusion as we've always done.
I reckon these are the minimum possible set of changes to your source code to make your code Modules-ready now. You will note I have said nothing about build systems, this is because I am still debugging the cmake tooling I've written to get all this stuff to "just work" seamlessly and I expect to be debugging it for some months yet. Expect to see it maybe at a C++ conference next year or the year after :)
Is it possible to use modules and still maintain compatibility with older compilers that do not support it?
No, not really. You can approximate it with some #ifdef magic like this:
#ifdef CXX17_MODULES
// ... modules-based interface ...
#else
// classic header machinery: #pragma once, #include "...", etc.
#endif
but this means you still need to provide .h support and thus lose all the benefits, plus your codebase looks quite ugly now.
If you do want to follow this approach, the easiest way to detect "CXX17_MODULES" (a name I just made up) is to have the build system of your choice compile a small test program that uses modules, and then define the macro globally for everyone to see, depending on whether the compilation succeeded.
When starting a new project now, what should I pay attention to in order to be able to adopt the modules feature when it is eventually released in my compiler?
It depends. If your project is enterprise and puts food on your plate, I'd wait a few years after the feature lands in stable releases so that it becomes widely adopted. On the other hand, if your project can afford to be bleeding-edge, by all means use modules.
Basically, it's the same story as with Python 3 and Python 2, or, less relevantly, PHP 7 and PHP 5. You need to find a balance between being a good, up-to-date programmer and not annoying people on Debian ;-)
I am wondering if it is possible to enforce direct #include requirements with GCC. Let's say I have these files:
abc.h:
typedef struct {
    int useful;
} str;
file1.h:
#include <abc.h>
#ifndef GUARD
#define GUARD
#include <deh.h>
typedef struct {
    int useful;
} str2;
#endif
file2.h:
#ifndef GUARD2
#define GUARD2
#include <file1.h>
void a_function (str* my_str);
void a_function2(str2* my_str);
#endif
The problem is that "file2.h" is using "str", defined in "abc.h". Let's say "file1.h" is provided by the system on some Linux systems. I have no control over the content of "file1.h". It may or may not include <abc.h>, it may or may not be inside include guards, and it may or may not change over time.
The issue comes when supporting multiple distributions and systems. If file2.h accidentally uses "str" without including <abc.h>, it may compile anyway on most systems, but may fail on others, or in the future when "file1.h" changes.
Is there a way to force GCC (or LLVM) to accept only types defined in file2.h itself or in headers it includes directly? I understand that #include directives are just that, includes, so the compiler internals may not be aware of these issues after the preprocessor phase; still, I am wondering whether this is currently possible and, if so, how?
I have had this problem a few times with "normal" Linux distributions, but it was even worse with early Android NDK versions.
No, #include instructs the compiler to treat the other file's content as if it were placed at the #include directive -- you're asking for the other file's content to be treated somehow differently.
Your best hope in this scenario is to use a static analysis tool that performs dependency analysis, and check that there are no direct dependencies on types (or functions or objects) obtained through indirect (nested) inclusion.
The free doxygen documentation tool extracts information about inclusion and dependencies, which it makes available in XML format. Of course, it isn't as accurate as a true compiler, in terms of overload resolution and template processing. I'm sure there are paid tools that will be more accurate (user Ira Baxter pops up from time to time mentioning a commercial product his company sells, DMS Toolkit or something like that, which sounded like it would get at this information). But I'm guessing that doxygen will give you the right results for most "normal" code.
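Whatever tool reports the violation, the repair itself is mechanical: make each dependency direct. For the files above, the fixed header would be:

file2.h:
#ifndef GUARD2
#define GUARD2
#include <abc.h>   /* direct include: provides 'str' */
#include <file1.h> /* provides 'str2' */

void a_function (str* my_str);
void a_function2(str2* my_str);
#endif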
There isn't anything in the C++ language itself which would verify that all headers are included correctly. However, there is include-what-you-use, which is based on clang. I haven't tried using it, but it seems to be in the direction of what you are looking for. For C, implementing an analyzer that detects dependencies and reports missing direct includes seems fairly straightforward. When trying the same with C++, things get somewhat harder due to the need to detect dependencies for template instantiations.
Based on last week's discussion at the C++ committee meeting, refactoring sources and headers to properly include what is actually used may be helpful for future module support in C++.
Possible Duplicate:
How do I check for C++11 support?
I am writing a small library and I would like to use class enums whenever the compiler supports them. I also want to use other C++11 features, such as final and override keywords.
So far, I have used tricks to make sure it compiled on all versions of GCC, but when I booted my Windows partition, Visual Studio 2010 started complaining too. Here is an example of the tricks I used:
#if __GNUC__ == 4 && (__GNUC_MINOR__ > 7 || \
                      (__GNUC_MINOR__ == 7 && __GNUC_PATCHLEVEL__ > 1))
// i.e. GCC newer than 4.7.1 (note: this only matches the 4.x series)
# define TATO_OVERRIDE override
# define TATO_NO_THROW noexcept
#else
# define TATO_OVERRIDE
# define TATO_NO_THROW throw()
#endif
I know that the newest version of Visual Studio already supports a batch of new features too. What I would like to have is something like a set of macros that tell me which features are available on the compiler I am using.
#ifdef THIS_COMPILER_SUPPORTS_CLASS_ENUMS
...
#endif
Does this exist? Is there a library that does that?
The compiler’s documentation?
Let me clarify. I know how to find this information; my problem is elsewhere. I don't want to go through every possible compiler's documentation to gather it, especially since the same compiler may support different features depending on its version. This is what I have been doing so far, and what I am looking for is precisely a way not to do that.
Boost actually has a wide range of such macros available. You could use that. Otherwise, the only way is essentially to check the compiler's version and use your knowledge of the features supported in that version to decide whether a feature is available or not.
Essentially, what Boost does, except manually.
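A sketch of the Boost route (BOOST_NO_CXX11_SCOPED_ENUMS and BOOST_NOEXCEPT are real Boost.Config names; the fallback shape is just one option):

#include <boost/config.hpp>

// Boost.Config defines BOOST_NO_CXX11_* macros when a feature is absent.
#ifndef BOOST_NO_CXX11_SCOPED_ENUMS
enum class Colour { Red, Green };
#else
enum Colour { Red, Green }; // unscoped fallback: enumerators are unqualified
#endif

void frob() BOOST_NOEXCEPT; // expands to noexcept, or to nothing pre-C++11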
There were discussions about having some standardized feature-test mechanism, but it turns out that this doesn't make much sense: if a compiler implements the standard, all feature tests would yield true, and if it doesn't, there is no reason to assume it follows the standard in terms of the feature tests!
Thus, using some sort of configuration file seems to be the most reliable approach. Personally, I would do it differently than explicitly checking compiler versions: instead, I would test whether the compiler supports each specific feature to an acceptable degree. The configuration could be run in terms of autoconf or something similar.
With respect to the resulting configuration I would try to map things to suitable constructs and not use conditional compilation outside the configuration headers. For example, I would use something like this:
#if defined(KUHL_HAS_CLASS_FINAL)
# define kuhl_class_final final
#else
# define kuhl_class_final
#endif
Specifically for class enums you might need something a bit trickier, because scoped enumerators are only available within the enum's scope, while unscoped enumerators are available in the enclosing scope. Thus, it may be necessary to come up with some form of extra nesting in one case but not the other.
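A sketch of that extra nesting, assuming a hypothetical KUHL_HAS_CLASS_ENUMS macro produced by the same configuration step:

#if defined(KUHL_HAS_CLASS_ENUMS)
enum class Colour { Red, Green };
typedef Colour colour_type;
#else
// extra nesting: a plain enum wrapped in a struct, so Colour::Red still
// works; the variable type becomes the nested enum rather than Colour itself
struct Colour { enum type { Red, Green }; };
typedef Colour::type colour_type;
#endif

colour_type c = Colour::Red; // valid under either configuration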
clang has some built-in macros for various feature checks: clang feature-check macros
It would be nice if all compiler vendors picked these up (and more).
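A sketch combining them with the question's TATO_ naming (the __has_feature mechanism and the cxx_strong_enums / cxx_override_control check names are documented clang extensions):

#if defined(__has_feature)
# if __has_feature(cxx_strong_enums)
#  define TATO_HAS_CLASS_ENUMS 1
# endif
# if __has_feature(cxx_override_control)
#  define TATO_OVERRIDE override
# endif
#endif
#ifndef TATO_OVERRIDE
# define TATO_OVERRIDE // no-op on compilers without override support
#endif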
“What I would like to have is something like a set of macros that tell me which features are available on the compiler I am using.”
There's no such thing in the standard.
A practical approach to compiler differences is to have a header for each compiler and compiler version you support. These headers should have the same name; which one gets included depends on the include path, which is easy to customize per compiler in your build setup.
I call that concept virtual headers. I've found that it works nicely for three levels: system dependency, compiler dependency and version dependency. I think the scheme doesn't scale beyond that, but on the other hand, that seems to be all one needs.
I'm trying to figure out which of the additions to the algorithm headers are supported by a given implementation (gcc and MSVC would be enough).
The simple way would be to do it the same way as one would for core language features: check the compiler version and define a macro if the feature is supported. Unfortunately, I cannot find a list showing the version numbers for either compiler.
Is simply checking for a generic C++0x macro (__GXX_EXPERIMENTAL_CXX0X__ or __cplusplus) enough, or should I go through the compilers' change lists and build my macros based on those?
http://gcc.gnu.org/onlinedocs/libstdc++/manual/status.html#status.iso.200x
Since all compiler vendors provide a nice list of what's available in which version, and you would be testing the functionality anyway, I would use compiler versions to check for specific features. Or demand that users have at least a known-good version, and not worry about it.
__cplusplus is not necessarily a C++0x macro; it tells you nothing. __GXX_EXPERIMENTAL_CXX0X__ has existed since GCC 4.3, so that's pretty useless too.
This one is for GCC.
This one is for MSVC. (mind you: partially implemented means broken)
This one is for Intel.
Here you can find what macros to check against for a specific version of a compiler.
As far as I could figure out, the only proper solution is to have a build script that tries to compile and run a file that uses the feature and contains a runtime assertion. Depending on the outcome, have a #define CONFIG_NO_FEATURENAME or similar in a config file, and guard your uses and workarounds with #ifndef.
This way it is possible to check whether
the feature is available
the feature functions properly (depending on the correctness of the assertion)
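A sketch of such a probe for class enums (the file and macro names are illustrative; the build script compiles and runs this, and on failure writes #define CONFIG_NO_CLASS_ENUMS into the config header):

// probe_class_enums.cpp
#include <cassert>

enum class Probe : int { A = 1, B = 2 };

int main() {
    Probe p = Probe::B;
    assert(static_cast<int>(p) == 2); // runtime assertion: the feature behaves
    return 0;
}

Application code then guards its uses with #ifndef CONFIG_NO_CLASS_ENUMS as described above.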