Modules are an alternative to #include directives. Clang has a fairly complete implementation for C++. How would I go about using modules with Clang right now?
Using
import std.io;
in a C++ source file does not compile yet, as the specification for modules (including their syntax) isn't final.
The Clang documentation states that, when passing the -fmodules flag, #includes will be rewritten to their appropriate imports. However, checking the preprocessor suggests otherwise (test.cpp only contains #include <stdio.h> and an empty main):
$ clang++-3.5 -fmodules -E test.cpp -o test
$ grep " printf " test
extern int printf (const char *__restrict __format, ...);
Furthermore, compiling this test file with -fmodules vs no flags at all produces the same object file.
What am I doing wrong?
As of this commit, Clang has experimental support for the Modules TS.
Let's take the same example files (with a small change) as in the VS blog post about experimental module support.
First, define the module interface file. By default, Clang recognizes files with cppm extension (and some others) as C++ module interface files.
// file: foo.cppm
export module M;
export int f(int x)
{
    return 2 + x;
}

export double g(double y, int z)
{
    return y * z;
}
Note that the module interface declaration needs to be export module M; and not just module M; like in the VS blog post.
Then consume the module as follows:
// file: bar.cpp
import M;
int main()
{
    f(5);
    g(0.0, 1);
    return 0;
}
Now, precompile the module foo.cppm with
clang++ -fmodules-ts --precompile foo.cppm -o M.pcm
or, if the module interface extension is other than cppm (let's say ixx, as it is with VS), you can use:
clang++ -fmodules-ts --precompile -x c++-module foo.ixx -o M.pcm
Then build the program with
clang++ -fmodules-ts -c M.pcm -o M.o
clang++ -fmodules-ts -fprebuilt-module-path=. M.o bar.cpp
or, if the pcm file name is not the same as the module name, you'd have to use:
clang++ -fmodules-ts -fmodule-file=M.pcm bar.cpp
I've tested these commands on Windows using the r303050 build (15th May 2017).
Note: When using the -fprebuilt-module-path=. option, I get a warning:
clang++.exe: warning: argument unused during compilation: '-fprebuilt-module-path=.' [-Wunused-command-line-argument]
which appears to be incorrect because without that option, the module M is not found.
As you mentioned, clang does not yet have a C++ syntax for imports, so I doubt that #include directives are literally rewritten as imports during preprocessing; inspecting the preprocessor output is probably not the best way to test whether modules are working as intended.
However, if you set -fmodules-cache-path=<path> explicitly, you can observe clang populating it with precompiled module files (*.pcm) during a build - if there are any modules involved.
You'll need to use libc++ (which seems to ship a module.modulemap as of version 3.7.0) if you want a modules-enabled standard library right now - though in my experience this isn't working entirely just yet.
(Visual Studio 2015's C++ compiler is also supposed to get some form of module support with Update 1 in November)
Independently of the stdlib, you could still use modules in your own code. The clang docs contain a detailed description of the Module Map Language.
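For a rough idea of what that looks like (the file and module names here are made up for illustration), a minimal module.modulemap next to your own headers might contain:
module mylib {
    header "mylib.h"
    export *
}
and you would build with something like
clang++ -fmodules -fmodule-map-file=module.modulemap -fmodules-cache-path=./module-cache -c main.cpp
after which the cache directory should contain a precompiled .pcm for mylib, which is a quick way to confirm the module machinery is actually being used.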
Related
Let's say that I have the following two modules:
module_a.cppm
export module module_a;
export namespace A_Namespace {
    const int A_Export = 2;
}
module_b.cppm
export module module_b;
import module_a;
export namespace B_Namespace
{
    const int B_Export = A_Namespace::A_Export + 1;
}
Now, I want to compile these modules into .pcm files for use in an executable. To do so, I use the following commands:
clang++ -fmodules-ts --precompile module_a.cppm -o module_a.pcm
clang++ -fmodules-ts --precompile module_b.cppm -o module_b.pcm
And then module_a.pcm and module_b.pcm would get compiled into .o files, which are then linked into my final executable.
module_a.pcm compiles successfully; module_b, however, fails with the following error:
fatal error: module 'module_a' not found
I suspect that this is because the definition for module_a is not provided when compiling module_b. Were I using the traditional .hpp/.cpp combination, I would compile the sources for module_a.cpp and module_b.cpp into a single binary, and the header files module_a.hpp and module_b.hpp would provide a declaration that the compiler can satisfy.
How do you satisfy module-interdependencies during compilation?
The above was produced using clang++ 14.0.0 on an x64 Ubuntu 22.04 instance.
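One way to satisfy the dependency is to make module_a's precompiled interface visible when module_b is precompiled. A sketch using the same file names as above (assuming module_a.pcm already sits in the current directory):
clang++ -fmodules-ts --precompile -fprebuilt-module-path=. module_b.cppm -o module_b.pcm
or, naming the file explicitly, pass -fmodule-file=module_a.pcm. The import only needs module_a's interface (.pcm), not its object code; both modules' .o files are still linked into the final executable as before.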
So far it seems to me that including almost any libstdc++ header in a C++ module interface causes compile errors on clang 14.0.0 and the libstdc++ that comes bundled with GCC 11.2.0. I wonder if I am doing something wrong or if this is just not something that is supported yet. (I see that the Clang modules support is "partial", but haven't been able to find what is implemented and what is not.)
Here's a trivial module example that I got to work with clang-14 in Linux, linked with libstdc++. It demonstrates that libstdc++ headers can be used in a module implementation, but this example does not #include anything in the module interface:
// mod_if.cc
export module mod;
export int foo();
// mod.cc
module;
#include <iostream>
module mod;
int foo() {
    std::cout << "Hello world from foo()" << std::endl;
    return 42;
}
// use.cc
import mod;
#include <iostream>
int main() {
    std::cout << foo() << std::endl;
}
This works:
$ CXXFLAGS="-std=c++20 -fmodules -fprebuilt-module-path=prebuilt"
$ clang++ -c $CXXFLAGS -Xclang -emit-module-interface -o prebuilt/mod.pcm mod_if.cc
$ clang++ -c $CXXFLAGS -fmodule-file=prebuilt/mod.pcm mod.cc -o mod.o
$ clang++ $CXXFLAGS use.cc mod.o prebuilt/mod.pcm -o use
$ ./use
Hello world from foo()
42
However, suppose I wanted foo to return a std::string:
// mod_if.cc
module;
#include <string>
export module mod;
export std::string foo();
// mod.cc
module;
#include <string>
module mod;
std::string foo() {
    return "42";
}
// no use.cc needed since the error happens when building mod.cc
This does not compile (first of many similar errors shown):
$ clang++ -c $CXXFLAGS -Xclang -emit-module-interface -o prebuilt/mod.pcm mod_if.cc
$ clang++ -c $CXXFLAGS -fmodule-file=prebuilt/mod.pcm mod.cc -o mod.o
In file included from mod.cc:2:
In file included from /usr/lib64/gcc/x86_64-pc-linux-gnu/11.2.0/../../../../include/c++/11.2.0/string:40:
In file included from /usr/lib64/gcc/x86_64-pc-linux-gnu/11.2.0/../../../../include/c++/11.2.0/bits/char_traits.h:39:
In file included from /usr/lib64/gcc/x86_64-pc-linux-gnu/11.2.0/../../../../include/c++/11.2.0/bits/stl_algobase.h:64:
In file included from /usr/lib64/gcc/x86_64-pc-linux-gnu/11.2.0/../../../../include/c++/11.2.0/bits/stl_pair.h:65:
/usr/lib64/gcc/x86_64-pc-linux-gnu/11.2.0/../../../../include/c++/11.2.0/compare:348:33: error: redefinition of '__cmp_cat_id<std::partial_ordering>'
inline constexpr unsigned __cmp_cat_id<partial_ordering> = 2;
^
/usr/lib64/gcc/x86_64-pc-linux-gnu/11.2.0/../../../../include/c++/11.2.0/bits/stl_pair.h:65:11: note: '/usr/lib64/gcc/x86_64-pc-linux-gnu/11.2.0/../../../../include/c++/11.2.0/compare' included multiple times, additional include site in header from module 'mod.<global>'
# include <compare>
^
/usr/lib64/gcc/x86_64-pc-linux-gnu/11.2.0/../../../../include/c++/11.2.0/bits/stl_pair.h:65:11: note: '/usr/lib64/gcc/x86_64-pc-linux-gnu/11.2.0/../../../../include/c++/11.2.0/compare' included multiple times, additional include site in header from module '<global>'
# include <compare>
^
mod.cc:1:1: note: <global> defined here
module;
^
Is there currently a way to make this code work (without resorting to writing module maps for the libstdc++ headers)? Why does this error happen? It sounds strange that the inline constexpr declaration included in the global module fragment gets exported, but then I don't claim to understand modules well.
Ok, this is something that sort of worked for a large project. Note that this was half a year ago, so the world may have moved on.
I ended up creating a single header, "sys.hh", that #includes pretty much all the system headers used in the project. What seems to be important is that nothing directly or indirectly #included by this file gets #included directly or indirectly (outside the module system) in anything that gets linked into the final binary.
My "sys.hh" looks something like this:
#include <algorithm>
#include <array>
#include <assert.h>
#include <atomic>
#include <bits/std_abs.h>
// 100+ lines omitted, including things like glib, gtk, libjpeg
#include <vector>
#include <x86intrin.h>
#include <zlib.h>
// Macros won't get exported, so whatever the code needs, redefine as
// constexpr (or consteval functions) here. Unfortunately, I don't think
// there's a way to retain the name of the macro; so add an underscore.
// Also put them in a namespace.
#define MEXP(X) constexpr auto X ## _ = X;
namespace sys {
    MEXP(INTENT_PERCEPTUAL);
    MEXP(INTENT_RELATIVE_COLORIMETRIC);
    MEXP(INTENT_SATURATION);
    MEXP(INTENT_ABSOLUTE_COLORIMETRIC);
    MEXP(G_PRIORITY_DEFAULT_IDLE);
}
And my module map file contains an entry like this:
module sys {
    header "prebuilt/sys.hh"
    use _Builtin_intrinsics
    export *
}
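In this setup, consumer code (including the project's own module interfaces) can keep #including "prebuilt/sys.hh"; with the module map visible, Clang maps that include onto the sys module instead of textually re-including the headers. The compile line looked roughly like this (flags and paths are from memory, so treat it as a sketch):
clang++ -std=c++20 -fmodules -fmodule-map-file=module.modulemap -fmodules-cache-path=./module-cache -c foo.cc -o foo.o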
Compiling this header/module is a bit of an incremental process; you will run into modules that fail to compile because they indirectly include the same headers, so you add them into this file and rebuild until it works.
Note that managing build dependencies becomes much more of a thing with modules. At least half a year ago no good (automatic) ways seemed to exist to discover what needs to be rebuilt. This is made trickier by the fact that the name of the module does not tell where it lives in the source code.
I'm aware that C++20 is not fully supported (yet) by the compilers, but I really want to learn modules and other C++20 stuff.
Modules are supported in GCC11 and Clang-8+.
Compiler Support of C++20
I've installed Clang-10 on my Ubuntu, but it still gives me errors:
import <iostream>;
using namespace std;
int main() {
    cout << "Hello world";
}
What am I doing wrong?
COMMANDS:
clang++ -Wall -std=c++2a -stdlib=libc++ -fimplicit-modules -fimplicit-module-maps main.cpp -o main
clang++ -Wall -std=c++20 -stdlib=libc++ -fimplicit-modules -fimplicit-module-maps main.cpp -o main
ERROR: fatal error: 'iostream' file not found
Although C++20 adds modules, the C++20 standard library doesn't expose any modules.
Microsoft has implemented some standard library modules, which may or may not match a future C++ standard: https://learn.microsoft.com/en-us/cpp/cpp/modules-cpp?view=msvc-160#consume-the-c-standard-library-as-modules. With these, your example would be:
import std.core;
using namespace std;
int main() {
    cout << "Hello world";
}
As far as I can see, neither libc++ nor libstdc++ has implemented any modules yet.
By default, gcc trunk uses C++17 and clang trunk uses C++14, so you have to tell the compiler that you want to use C++20.
If you are compiling your code in a terminal yourself, then add the following flag:
--std=c++2a
If you compile your code using CMake, then add the following to your CMakeLists.txt (a minimal example is sketched below):
set(CMAKE_CXX_STANDARD 20)
And if you compile in an IDE (Code::Blocks or Visual Studio), then enable C++20 support somewhere in the compiler settings.
"trunk" means "the main line of development", so this compiler version should be the latest one available.
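For the CMake route mentioned above, a minimal CMakeLists.txt sketch (project and target names are placeholders) would be:
cmake_minimum_required(VERSION 3.16)
project(hello CXX)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
add_executable(main main.cpp)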
GoogleTest 1.6 with Cygwin 1.7: 'fileno' was not declared in this scope
Error message when building a simple test on Factorial() function in Eclipse CDT:
Invoking: Cygwin C++ Compiler
g++ -std=c++0x -DGTEST_OS_CYGWIN=1 -I"E:\source\gtest-1.6.0\include" -O0 -g3 -Wall -c -fmessage-length=0 -MMD -MP -MF"src/challenge.d" -MT"src/challenge.d" -o "src/challenge.o" "../src/challenge.cpp"
In file included from E:\source\gtest-1.6.0\include/gtest/internal/gtest-internal.h:40:0,
from E:\source\gtest-1.6.0\include/gtest/gtest.h:57,
from ../src/challenge.cpp:11:
E:\source\gtest-1.6.0\include/gtest/internal/gtest-port.h: In function 'int testing::internal::posix::FileNo(FILE*)':
E:\source\gtest-1.6.0\include/gtest/internal/gtest-port.h:1589:51: error: 'fileno' was not declared in this scope
E:\source\gtest-1.6.0\include/gtest/internal/gtest-port.h:1595:57: error: 'strdup' was not declared in this scope
E:\source\gtest-1.6.0\include/gtest/internal/gtest-port.h:1627:71: error: 'fdopen' was not declared in this scope
Eclipse CDT 8.1 running gcc 4.7.3 on Cygwin 1.7.22
gTest 1.6 successfully built, including demo tests, with cmake 2.8.9 on Cygwin 1.7.22
I've linked the built lib with full path, E:\lib\gtest-1.6.0\Cygwin\libgtest.a
The following command option was added manually; I got the same error without it.
-DGTEST_OS_CYGWIN=1
Seems the errors have nothing to do with my code. Anyone using gTest with Eclipse and Cygwin?
Thank you,
#include "gtest/gtest.h"

unsigned long Factorial(unsigned n) {
    return n == 0 ? 1 : n * Factorial(n - 1);
}

// Tests factorial of 0.
TEST(FactorialTest, HandlesZeroInput) {
    EXPECT_EQ(1, Factorial(0));
}

// Tests factorial of positive numbers.
TEST(FactorialTest, HandlesPositiveInput) {
    EXPECT_EQ(1, Factorial(1));
    EXPECT_EQ(2, Factorial(2));
    EXPECT_EQ(6, Factorial(3));
    EXPECT_EQ(40320, Factorial(8));
}
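The snippet above assumes the test binary is linked against gtest_main; if it is not, a standard Google Test main is also needed, something like:
int main(int argc, char **argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}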
Setting the C++ standard to -std=gnu++0x rather than -std=c++0x worked for me. You can try the command:
g++ -std=gnu++0x -DGTEST_OS_CYGWIN=1 -I"E:\source\gtest-1.6.0\include" -O0 -g3 -Wall -c -fmessage-length=0 -MMD -MP -MF"src/challenge.d" -MT"src/challenge.d" -o "src/challenge.o" "../src/challenge.cpp"
Setting the symbol (-DGTEST_OS_CYGWIN=1) has nothing to do with this error.
Some functions go beyond the ANSI standard.
These are disabled when you use -std=c++11 (or -std=c++0x).
Among them are fdopen, fileno and strdup.
There are two possibilities to use them:
Use the GNU dialect (-std=gnu++11).
If you want to compile without the dialect and make a local exception, you can include stdio.h with __STRICT_ANSI__ undefined (see: Error "'fdopen' was not declared" found with g++ 4 that compiled with g++3); a sketch of this is shown below.
I have tested both on Suse Linux Enterprise 11, MinGW and Cygwin.
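For the second option, the local exception looks roughly like this (a sketch; it relies on the Cygwin/newlib headers guarding those declarations with __STRICT_ANSI__):
// before any other include of stdio.h in this translation unit
#ifdef __STRICT_ANSI__
#undef __STRICT_ANSI__
#include <stdio.h>
#define __STRICT_ANSI__ 1
#else
#include <stdio.h>
#endif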
Addition: Another (possibly better) way to access non-ANSI symbols would be to add
#define _POSIX_C_SOURCE 200809L
before the first #include in your file. This will give you access to most of the non-standard routines.
Some functions (e.g. realpath(...)) require
#define _BSD_SOURCE
to be inserted on top of your file.
When writing C++ code, I often start by writing full 'implementation' code in my header files, then later need to refactor the implementation into a .cpp file.
This works well, but I find the process laborious (though otherwise pretty easy), so I wondered whether there is any automated way to do this.
Specifically, I want to convert all class and function definitions in the .h to declarations, and have them declared in a new .cpp file.
I'm using Xcode, but I am open to any solutions.
There is Lazy C++ where you only write one .lzz file and it generates .h and .cpp for you.
I am really looking forward for C++ modules where you only write .cpp and the import file is generated automatically. But we will have to wait for a few years even though Clang has started to implement modules. Here are some examples.
You can use tools such as Makeheaders (http://www.hwaci.com/sw/mkhdr/), but in general these tools are not complete, especially when it comes to newer C++11 features.
You may also be interested in Eclipse's "Refactor/Toggle function" feature. It does not always work properly, however.
C++20 modules essentially do that for us
As mentioned at https://quuxplusone.github.io/blog/2019/11/07/modular-hello-world/, clang as of 2019-11 implements it along the lines of:
clang++ -std=c++2a -c helloworld.cpp -Xclang -emit-module-interface -o helloworld.pcm
clang++ -std=c++2a -c -fprebuilt-module-path=. -o helloworld.o helloworld.cpp
clang++ -std=c++2a -fprebuilt-module-path=. -o main.out main.cpp helloworld.o
where:
helloworld.cpp contains the implementation
helloworld.pcm is a precompiled module, basically an auto-extracted header from the .cpp (but in a clang internal language format) which gets used by main.cpp without an .hpp
So basically clang is the tool, and thus parsing is perfect.
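The file contents are not shown in the commands above; following the blog post's pattern, they might look roughly like this (assumed here for illustration):
// helloworld.cpp
module;
#include <iostream>
export module helloworld;
export void hello() {
    std::cout << "Hello world!\n";
}
// main.cpp
import helloworld;
int main() {
    hello();
}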