Mixed programming - Including a C++ header in Fortran

I am trying to use a function from a library written in C++ in my program written in Fortran. The C++ library is summarized in one header file, so that to use it in another C++ program you only need #include "functions.h". I would like to find out how to do something similar in Fortran.
From my research I've created this minimal working example:
clib/functions.h:
#ifndef ADD_H
#define ADD_H
extern "C"
{
int __stdcall add(int x, int y);
}
#endif
clib/functions.cpp:
extern "C"
{
int __stdcall add(int x, int y)
{
return x + y;
}
}
cinclude.c
#include "clib/functions.h"
cinterface.f95:
module cinterface
use,intrinsic::ISO_C_BINDING
integer(C_INT)::a,b
interface
integer(C_INT) function add(a,b) bind(C,name="add")
use,intrinsic::ISO_C_BINDING
implicit none
!GCC$ ATTRIBUTES STDCALL :: add
!DEC$ ATTRIBUTES STDCALL :: add
integer(C_INT), value ::a,b
end function add
end interface
end module cinterface
main.f90
program main
use cinterface
implicit none
integer :: c
c = add(1,2)
write(*,*) c
end program
makefile:
FC = gfortran
CC = g++
LD = gfortran
FFLAGS = -c -O2
CFLAGS = -c -O2
OBJ=main.o
DEP = \
cinterface.o cinclude.o
.SUFFIXES: .f90 .f95 .c .o
# default rule to make .o files from .f files
.f90.o : ; $(FC) $(FFLAGS) $*.f90 -o $*.o
.f95.o : ; $(FC) $(FFLAGS) $*.f95 -o $*.o
.c.o : ; $(CC) $(CFLAGS) $*.c -o $*.o
%.o: %.mod
#
main.ex: ${DEP} ${OBJ}
$(LD) ${DEP} ${OBJ} -o prog.exe
#
When I try to make this project using Cygwin I get the following error:
main.o:main.f90:(.text+0x13): undefined reference to `add'
main.o:main.f90:(.text+0x13): relocation truncated to fit: R_X86_64_PC32 against undefined symbol `add'
collect2: error: ld returned 1 exit status
make: *** [makefile:19: main.ex] Error 1
How can I make the add function work from Fortran?

You are most of the way there. There are two things you need to address to make this work: linkage and argument passing conventions.
Linkage
As francescalus noted, the Fortran compiler doesn't understand how to parse a C/C++ header file. So your functions.h and cinclude.c files aren't going to be of any use in this example.
Don't throw away your functions.h yet, though. In it you declare the add function as:
extern "C"
{
int __stdcall add(int x, int y);
}
The extern "C" is the important part. This tells g++ that the symbols in the following block of code aren't subject to all of the C++ name mangling. You'll need the same surrounding the add definition in functions.cpp.
extern "C"
{
int add(int x, int y)
{
return x + y;
}
}
Once you've done that, all you will need to link are functions.o, cinterface.o/mod, and main.o.
Argument passing conventions
The way add is declared, the arguments x and y are passed to the function by value. That is the default behavior for C/C++ function arguments. Fortran, on the other hand, defaults to passing arguments to functions/subroutines by reference. In C++ this would look like int add(int* x, int* y). There are two ways to address this mismatch.
The first option is to redefine your add function with integer pointers for the arguments and dereference them inside the function.
extern "C"
{
int add(int* x, int* y)
{
return *x + *y;
}
}
The second option (IMHO the preferred option) is to declare the Fortran interface to pass the arguments by value. They aren't being modified in the add function...why pass them by reference? If you choose this option, then your cinterface.f95 will need to contain the following declaration of add:
integer(C_INT) function add(a,b) bind(C,name="add")
use,intrinsic::ISO_C_BINDING
implicit none
integer(C_INT),value::a,b
end function add
Note the additional value attribute on the variables a and b. Without addressing the argument passing convention, on my machine I get 8393540 printed out as the result of the add function call; after addressing it I get 3, as expected.

A build system can simplify this significantly (albeit at the cost of introducing a complex build system). Assuming the directory layout in your question (although without the cinclude.c, since I don't see what purpose it serves):
$ tree
.
├── cinterface.f90
├── clib
│   ├── CMakeLists.txt
│   ├── functions.cpp
│   └── functions.h
├── CMakeLists.txt
└── main.f90
The contents of the CMake files are
$ cat CMakeLists.txt
cmake_minimum_required(VERSION 3.9)
project(cpp-add LANGUAGES C CXX Fortran)
add_subdirectory(clib)
add_executable(glue cinterface.f90 main.f90)
target_link_libraries(glue PUBLIC cpp-clib)
and
$ cat clib/CMakeLists.txt
add_library(cpp-clib functions.cpp)
The project can then be configured and built in the usual way:
$ cmake -H. -Bbuild && cmake --build build
Execution:
$ build/glue
3

I was having the same issue. Can you show us the g++ compile line?
My issue was that my makefile didn't properly include the appropriate .o file in the compilation of the .exe.
i.e. I had something like
Test: Test.cpp dependancy.o
g++ Test.cpp -o test.exe
and I was getting the same error returned that you did.
I solved it by ensuring the .o was actually used on the compile line.
Test: Test.cpp dependancy.o
g++ dependancy.o Test.cpp -o test.exe
I suggest this because an "undefined reference" error typically means the linker doesn't actually know where the code for your call to the add function is.

Related

Failing to use CXX to link Rust-written library in C++ project

I am testing CXX with a very simple project to link a Rust library into a C++ executable.
I write a foo() -> () Rust function and try to access it from C++ but the linker does not find it.
Here's what I have:
// lib.rs
#[cxx::bridge]
mod ffi {
extern "Rust" {
pub fn foo() -> ();
}
}
pub fn foo() -> () {
println!("foo")
}
# Cargo.toml
[package]
name = "cpprust"
version = "0.1.0"
edition = "2021"
[lib]
name = "cpprust"
path = "src/lib.rs"
crate-type = ["staticlib", "rlib", "dylib"] # EDIT: this is incorrect, see note at the end of question
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
cxx = "1.0"
// main.cpp
void foo(); // I tried including lib.rs.h but it was not generated!
int main() {
foo();
}
Running cargo build generates target/debug/libcpprust.so.
I then try to make the project with (EDIT: g++ command is incorrect, see note at the end of question):
g++ -L../target/debug/ -lcpprust -o cpprust main.cpp
/tmp/ccOA8kJy.o: In function `main':
main.cpp:(.text+0x5): undefined reference to `foo()'
collect2: error: ld returned 1 exit status
make: *** [Makefile:2: cpprust] Error 1
What is wrong here?
EDIT: prog-fh's great answer correctly points out that I need to include build.rs with the C++ compilation, even without having C++ code to compile and access within the crate. However, even after implementing their answer, I was still getting the same error message. It turns out that I had two other problems: 1) the order of my arguments to g++ was incorrect, and I needed -pthread -l dl as well. It should have been:
g++ -o cpprust main.cpp -I ../target/cxxbridge -L../target/debug -lcpprust -pthread -l dl
2) My Cargo.toml file was also generating "rlib", "dylib" library types, but that somehow also causes the error above; it works when only staticlib is generated.
Considering this documentation, the build.rs script should generate the lib.rs.h which was missing in your attempt.
Note that the example in the documentation considers that the main program comes from Rust, and that the C++ code is an extension.
In your question, it is the opposite situation: your main program comes from C++ but is extended by some Rust code.
This answer is made of two parts:
a minimal example very similar to yours (no C++ code to be called from Rust),
a more complete example with a bidirectional interaction between C++ and Rust (but the main program still is on the C++ side).
edit to answer subsequent questions in the comments
As said in the comment of the second build.rs below, the name chosen in .compile("cpp_from_rust") will be used to name a library containing the compiled C++ code (libcpp_from_rust.a for example).
This library will then be used by Rust to extend the Rust code: the libcpprust.a main target produced by Rust contains libcpp_from_rust.a.
If no C++ file is provided before .compile() (as in the first, minimal example below), this C++ library only contains the symbols enabling extern "Rust" access from C++.
$ nm ./target/debug/build/cpprust-28371278e6cda5e2/out/libcpp_from_rust.a
lib.rs.o:
U _GLOBAL_OFFSET_TABLE_
0000000000000000 T _Z13rust_from_cppv
U cxxbridge1$rust_from_cpp
On the other hand, you already found in the documentation that multiple invocations of .file() are allowed in order to provide the C++ library with much more code from various source files.
Another question was about the kind of library we want Rust to produce.
This documentation enumerates the various binary targets Rust can produce, especially various kinds of libraries.
Since in your original question you wanted the main executable to be on the C++ side, this means that Rust should produce a library which can be considered as a system library, not a Rust specific one, because Rust won't be involved anymore when generating the executable.
In the aforementioned documentation, we can see that only staticlib and cdylib are suitable for this usage.
In my examples, I chose staticlib for the sake of simplicity, but cdylib can be used too.
However, it is a bit more complicated because, as the main library (libcpprust.so) is generated as a dynamic one, Rust does not insert the C++ library (libcpp_from_rust.a) into it; thus, we have to link against this C++ library, which is not very convenient.
g++ -std=c++17 -o cpp_program src/main.cpp \
-I .. -I target/cxxbridge \
-L target/debug -l cpprust \
-L target/debug/build/cpprust-28371278e6cda5e2/out -l cpp_from_rust \
-pthread -l dl
And of course, because we are now dealing with a shared library, we have to find it at runtime.
$ LD_LIBRARY_PATH=target/debug ./cpp_program
I don't know whether some other kinds of libraries (crate-type) could work (by chance) with this C++ main program or not, but the documentation states that only staticlib and cdylib are intended for this usage.
Finally, note that if you use crate-type = ["staticlib", "rlib", "dylib"] in Cargo.toml (as stated in your comment), you will produce three libraries:
target/debug/libcpprust.a from staticlib,
target/debug/libcpprust.rlib from rlib,
target/debug/libcpprust.so from dylib.
Unfortunately, when linking with the command g++ ... -l cpprust ..., the linker will prefer the .so to the .a; you will be in the same situation as cdylib above.
The layout of the directory for the minimal example
cpprust
├── Cargo.toml
├── build.rs
└── src
├── lib.rs
└── main.cpp
Cargo.toml
[package]
name = "cpprust"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["staticlib"]
[dependencies]
cxx = "1.0"
[build-dependencies]
cxx-build = "1.0"
build.rs
fn main() {
// This will consider the ffi part in lib.rs in order to
// generate lib.rs.h and lib.rs.cc
// minimal example: no C++ code to be called from Rust
cxx_build::bridge("src/lib.rs")
.compile("cpp_from_rust");
}
src/lib.rs
#[cxx::bridge]
mod ffi {
extern "Rust" {
fn rust_from_cpp() -> ();
}
}
pub fn rust_from_cpp() -> () {
println!("called rust_from_cpp()");
}
src/main.cpp
/*
Building this program happens outside of the cargo process.
We simply need to link against the Rust library and the
system libraries it depends upon
g++ -std=c++17 -o cpp_program src/main.cpp \
-I .. -I target/cxxbridge \
-L target/debug -l cpprust \
-pthread -l dl
*/
// consider the ffi part of Rust code
#include "cpprust/src/lib.rs.h"
#include <iostream>
int
main()
{
std::cout << "starting from C++\n";
rust_from_cpp();
std::cout << "finishing with C++\n";
return 0;
}
The cargo build command will generate the libcpprust.a static library in target/debug.
Building the main program simply relies on usual commands, provided that we find the relevant headers and libraries (see the comments in the code).
Note that the C++ source code for the main program is in the src directory here, but it could have been put anywhere else.
The layout of the directory for the bidirectional example
cpprust
├── Cargo.toml
├── build.rs
└── src
├── cpp_from_rust.cpp
├── cpp_from_rust.hpp
├── lib.rs
└── main.cpp
We just added a pair of .hpp/.cpp files.
build.rs
fn main() {
// This will consider the ffi part in lib.rs in order to
// generate lib.rs.h and lib.rs.cc
// The generated library (libcpp_from_rust.a) contains the code
// from cpp_from_rust.cpp and will be inserted into the generated
// Rust library (libcpprust.a).
cxx_build::bridge("src/lib.rs")
.file("src/cpp_from_rust.cpp")
.flag_if_supported("-std=c++17")
.compile("cpp_from_rust");
}
Note that this time the build process actually handles some C++ code (see below) to be called from Rust.
src/lib.rs
#[cxx::bridge]
mod ffi {
extern "Rust" {
fn rust_from_cpp() -> ();
}
unsafe extern "C++" {
include!("cpprust/src/cpp_from_rust.hpp");
fn cpp_from_rust() -> ();
}
}
pub fn rust_from_cpp() -> () {
println!("entering rust_from_cpp()");
ffi::cpp_from_rust();
println!("leaving rust_from_cpp()");
}
src/cpp_from_rust.hpp
#ifndef CPP_FROM_RUST_HPP
#define CPP_FROM_RUST_HPP
// declare a usual C++ function (no Rust involved here)
void
cpp_from_rust();
#endif // CPP_FROM_RUST_HPP
src/cpp_from_rust.cpp
#include "cpp_from_rust.hpp"
#include <iostream>
// define a usual C++ function (no Rust involved here)
void
cpp_from_rust()
{
std::cout << "called " << __func__ << "()\n";
}
Cargo.toml, src/main.cpp and the build process (cargo build, g++ ...) still are the same as in the previous example.

Shared libraries and c++20 modules

There is very little documentation online on the proper use of C++20 modules in shared libraries. Many folks are clearly interested, but I haven't been able to find a clear solution.
In MSVC, you need to use dllexport when compiling the library, and dllimport when consuming the symbols. This can be done using macros in "legacy C++", but this does not work with C++20 modules, since the code is only compiled once, regardless of preprocessor directives.
This post suggests that you only need to use dllexport now, and that dllimport will be taken care of automatically by the compiler. However, this comes from a comment which has now been deleted, and I couldn't find any reliable source on the topic.
How is one expected to create a shared library using C++20 modules?
Background
A translation unit which declares a module interface or a module partition will be treated as a module unit and will, when compiled, generate both an object file and a binary module interface (BMI).
The BMI is a binary representation of an abstract syntax tree, that is a data structure representing the syntax and data types of the program. We have the traditional C++ compilation pipeline:
program -> preprocessor -> lexer -> parser -> assembler -> linker
With GCC, we should add the compiler flag -c which tells the compiler to compile and assemble but not link.
But shared libraries are built by the linker, which reads several compiled object files together and creates a shared object. So that happens after the BMIs have been built, and the BMIs may be built without linking, as those are two different stages.
Module Visibility
In C#, when building a DLL, we have visibility attributes at class level, i.e. public, private, internal. In C++ we can obtain similar functionality with module partitions.
A module partition, declared with module <module> : <partition>;, will be entirely visible inside the compilation unit that declares export module <module>;, but not outside that module. This reminds me of the internal modifier in C#. If we however export the partition with export module <module> : <partition>;, then its declarations will be publicly visible. Read more on cppreference.
Example
I have solved that problem with GCC (g++-11), see here.
In essence, you don't need DLL import/export since there are (likely) no headers involved. I have tried inserting these visibility attributes, but my compiler complained, so I guess we might not need them after all. Other than that, it's standard procedure. I copy/paste my example here as well:
Main
import <iostream>;
import mathlib;
int main()
{
int a = 5;
int b = 6;
std::cout << "a = " << a << ", b = " << b << '\n';
std::cout << "a+b = " << mathlib::add(a, b) << '\n';
std::cout << "a-b = " << mathlib::sub(a, b) << '\n';
std::cout << "a*b = " << mathlib::mul(a, b) << '\n';
std::cout << "a/b = " << mathlib::div(a, b) << '\n';
return 0;
}
Library
export module mathlib;
export namespace mathlib
{
int add(int a, int b)
{
return a + b;
}
int sub(int a, int b)
{
return a - b;
}
int mul(int a, int b)
{
return a * b;
}
int div(int a, int b)
{
return a / b;
}
}
Makefile
GCC=g++-11 -std=c++20 -fmodules-ts
APP=app
build: std_headers mathlib main
std_headers:
$(GCC) -xc++-system-header iostream
mathlib: mathlib.cpp
$(GCC) -c $< -o $@.o
$(GCC) -shared $@.o -o libmathlib.so
main: main.cpp
$(GCC) $< -o $(APP) -Xlinker ./libmathlib.so
clean:
@rm -rf gcm.cache/
@rm -f *.o
@rm -f $(APP)
@rm -f *.so
Running
g++-11 -std=c++20 -fmodules-ts -xc++-system-header iostream
g++-11 -std=c++20 -fmodules-ts -c mathlib.cpp -o mathlib.o
g++-11 -std=c++20 -fmodules-ts -shared mathlib.o -o libmathlib.so
g++-11 -std=c++20 -fmodules-ts main.cpp -o app -Xlinker ./libmathlib.so
./app
a = 5, b = 6
a+b = 11
a-b = -1
a*b = 30
a/b = 0
Now this is clearly platform-specific, but the approach should work on other platforms. I have tested a similar thing with Clang as well (same repo as linked).
C++20 modules have no special relationship with shared libraries. They are primarily a replacement of header files.
This means that you would develop a shared library with C++20 modules in a similar fashion as you would with header files before C++20, at least with my current understanding. You design some API that is exported (unfortunately still using vendor-specific attributes like __declspec(dllexport) or __attribute__((visibility("default")))) and implement it. You build your shared library file (.dll/.so) and an import library for distribution, same way as before. However instead of distributing header files, you would distribute module interface units instead. Module interface units are files containing an export module ABC; declaration at the top.
And executables consuming that shared library would then import that module using import ABC;, instead of #include-ing a header file.
Edit: As was pointed out in the comments, it is seemingly still necessary on Windows to provide a macro switch inside the module interfaces that toggles between dllexport and dllimport attributes, similar to how it is done with headers. However, I have currently not experimented with this and can only defer to what @jeremyong has experimented with in What is the expected relation of C++ modules and dynamic linkage?.
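As a sketch of that macro switch (file name, macro names, and the function are all hypothetical, and I have not tested this on Windows), a distributed module interface unit could carry the same toggle that headers traditionally do:

```cpp
// mathlib.ixx -- module interface unit, distributed in place of a header.
// MATHLIB_BUILD would be defined only while building the DLL itself,
// so the library build sees dllexport and consumers see dllimport.
export module mathlib;

#if defined(_WIN32) && defined(MATHLIB_BUILD)
  #define MATHLIB_API __declspec(dllexport)
#elif defined(_WIN32)
  #define MATHLIB_API __declspec(dllimport)
#else
  #define MATHLIB_API __attribute__((visibility("default")))
#endif

export MATHLIB_API int add(int a, int b);
```

Whether the dllimport branch is actually exercised by importers is the open question here, since consumers compile against the BMI rather than re-preprocessing the interface; see the linked experiments.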

Make - Multiple definitions errors when compiling C++ project

I have a problem and yes I acknowledge this kind of question has been asked before, but none of them were really helping me, as the context and kind of errors given were not exactly the same.
I have a project I'm trying to compile with a makefile. The compilation process is the following :
game_f.exe : objects/game_f.o objects/game_f_main.o archives/io.a
g++ $^ $(ALL_OTHER_NEEDED_LIBS) -o $@ -std=c++11 -pedantic -Wall -g
Now those lines are creating 'multiple definitions' errors. Such as those :
io.cpp:(.text+0x1FDA): multiple definition of <some function or variable name>
archives/io.a(io.o):io.cpp:(.text+0x1FDA): first defined here
Now this is coming from a header file where I create a namespace. The namespace is created as such :
io.h :
namespace io {
extern int TermWidth;
[some other variables, declared with "extern <some type> name;"]
extern void getTermWidth();
[some other functions, declared with "extern <some type> name(args);"]
}
io.cpp :
namespace io {
int TermWidth = 0;
[other variables definitions]
void getTermWidth() { ... }
}
The compiler seems to ignore the extern keyword when compiling from the makefile. But what is weird is that if I type the entire g++ line myself, without going through the makefile, the compilation works.
This compiles fine on my macOS machine when going through the makefile, by the way, which is even weirder.
Does g++ have a special option for extern definitions? Is this a known problem of g++ and make? How could I avoid this error?
EDIT : Yes, the .h files have include guards.

Compiling several source (main and headers) files and linking them in ROOT CINT?

Let me first set the context: it is CERN's ROOT with CINT, ACLiC, etc.
Suppose I have a main macro named macro.cpp and two files: h1.cpp, which contains the definition of a function, and h1.h, which contains the declaration of the function defined in h1.cpp; similarly I have h2.cpp and h2.h. The main program macro.cpp calls the functions inside h1 and h2. I was successful compiling the source files using:
root [0] .L h1.cpp+
root [1] .L h2.cpp+
root [2] .L macro.cpp+
which generated three .so files: macro_cpp.so, h1_cpp.so and h2_cpp.so. I want to know what to do with them. How do I link them so that I have something like a "macro.out" (a single executable file of some kind) which I can execute (although I don't know how!) and achieve whatever I wished to achieve with the macro?
Note: If I just load all the files using .L file_name.cpp etc. and execute the main macro using .x macro.cpp, then everything works fine and I get results, but this is not what I want! I want to compile like we do with plain g++. By the way, in every forum everyone keeps advising to compile using .L file_name.cpp+ or ++, but I would really like to know the whole story, because nobody seems to explain what comes after .L file_name.cpp+. What next? What do I do with the .so files?
I am a beginner, I will really appreciate a simple and step by step answer and explanation.
Thanks.
Edit-1: I am working with:
g++ (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609
Edit-2: ROOT related information:
ROOT 5.34/36 (v5-34-36@v5-34-36, Dec 07 2016, 23:31:51 on linuxx8664gcc)
CINT/ROOT C/C++ Interpreter version 5.18.00, July 2, 2010
If you want to compile and link, you can use a standard compiler instead of CINT/ACLiC.
For example, assuming you are working on a *nix platform, you can use the example files below:
h1.h
int add_one(int a);
h1.cpp
#include "h1.h"
int add_one(int a)
{
return a+1;
}
h2.h
#include <TLorentzVector.h>
TLorentzVector multiply_by_two(const TLorentzVector v);
h2.cpp
#include "h2.h"
TLorentzVector multiply_by_two(const TLorentzVector v)
{
return 2.0*v;
}
macro.cpp
#include "h1.h"
#include "h2.h"
#include <TLorentzVector.h>
#include <iostream>
using std::cout;
using std::endl;
int main()
{
int a = 0;
TLorentzVector v;
v.SetPtEtaPhiM(1.0, 0.0, 0.0, 0.0);
cout<<"calling add_one on "<<a<<": "<<add_one(a)<<endl;
cout<<"calling multiply_by_two on "<<v.Pt()<<": "<<multiply_by_two(v).Pt()<<endl;
return 0;
}
Then you can compile with
g++ -c -g -Wall `root-config --cflags` h1.cpp
g++ -c -g -Wall `root-config --cflags` h2.cpp
g++ -c -g -Wall `root-config --cflags` macro.cpp
and link with
g++ `root-config --glibs` h1.o h2.o macro.o
The executable will be a.out:
$ ./a.out
calling add_one on 0: 1
calling multiply_by_two on 1: 2
You can put these g++ commands in a script or, when you start having several files and directories, you can write your make file (or cmake). For this last step, see for example the tutorial here
http://www-pnp.physics.ox.ac.uk/~brisbane/Teaching/Makefiles/Tutorial_1_Makefiles_and_ROOT.pdf
Note 1: one advantage of using g++ is that you will get clear error messages when something doesn't compile. The error messages from CINT can be difficult to understand, although this is very much improved in ROOT 6 with Cling.
Note 2: another advantage of using a standard compiler is that you will be able to easily link your main executable against libraries other than root.
This answer is based mostly on the answer by user2148414, but if one follows it, one will notice that there were some issues with the method of linking the source (*.cpp) files. My answer also addresses another important object, called a TApplication, that plays a crucial role in applications involving ROOT libraries. The following linking step:
g++ `root-config --glibs` h1.o h2.o macro.o
will likely show a lot of errors complaining about ROOT objects like TWhatever (in user2148414's answer, TLorentzVector will show problems). In the comments to that answer one can find a discussion on including various physics libraries that can solve the problem, but without going into that (I am not comfortable with it either :) ) let me write down the command that solves everything.
This procedure is a one-liner, so there is no need to compile individual files. Create the *.cpp and *.h files as discussed in that answer, then compile, link and create a single executable named "someExecutable" using:
g++ macro.cpp h1.cpp h2.cpp `root-config --libs --cflags` -o someExecutable
or better (and one should do it)
g++ -Wall -Wextra -Werror -pedantic -std=c++14 macro.cpp h1.cpp h2.cpp `root-config --libs --cflags` -o someExecutable
This solves my original question, but for completeness I would like to add a few more things.
TApplication
My original motivation was to create an application that talks to ROOT without working with the ROOT shell, CINT, ACLiC etc., working entirely with g++. user2148414's answer and mine solve the part about creating an application, but the application will not serve much purpose: it will run, create histograms, draw them and do all the stuff, but all the canvases will close in the end when the code reaches "return 0;". To keep the canvases open we need "TApplication". So consider the main from user2148414's answer; I am going to include just two more lines and add two arguments to main:
macro.cpp
#include "h1.h"
#include "h2.h"
#include <TApplication.h>
#include <TLorentzVector.h>
#include <iostream>
using std::cout;
using std::endl;
int main(int argc, char* argv[]) //introduced arguments to main
{
// here I introduce TApplication
TApplication* SomeApp = new TApplication("SomeApp",&argc, argv);
int a = 0;
TLorentzVector v;
v.SetPtEtaPhiM(1.0, 0.0, 0.0, 0.0);
cout<<"calling add_one on "<<a<<": "<<add_one(a)<<endl;
cout<<"calling multiply_by_two on "<<v.Pt()<<": "<<multiply_by_two(v).Pt()<<endl;
//and just before returning 0
SomeApp->Run();
return 0;
}

`undefined reference` when trying to interface C to call C++

I was trying to call C++ from C. I am not sure about the linking order; it could be that which causes the error. For some reason, the linker complains about an undefined reference to helloWorld.
Could anyone advise?
main.c:
#include "example.h"
#include <stdio.h>
int main(){
helloWorld();
return 0;
}
example.h:
#ifndef HEADER_FILE
#define HEADER_FILE
#ifdef __cplusplus
extern "C" {
#endif
void helloWorld();
#ifdef __cplusplus
}
#endif
#endif
example.cpp:
#include "example.h"
#include <iostream>
void helloWorld(){
std::cout << "Hello World from CPP";
}
There are two ways to do this. While both work, one is "cleaner" than the other. Side note: As trojanfoe pointed out, you may have left off the .o on the compile/link command.
Here's a two step process:
cc -c main.c
c++ -o mypgm example.cpp main.o
This is a bit ugly because the usual convention is that the source that gets compiled on the link command is the one containing main.
Here's the more usual way:
c++ -c example.cpp
cc -c main.c
c++ -o mypgm main.o example.o
NOTE: In both cases, the "linker" must be c++ to resolve the std::* that example.cpp uses
UPDATE:
What is mypgm?
mypgm [just an example name] is the name of the [fully linked and ready to run] output executable or program. It's the argument for the -o option. The linker takes your relocatable input .o files, links them together to produce the output file [that can now be run as a command].
It's pretty standard nomenclature for something that is arbitrary in example instructions or code sequences [like here on SO]. You could replace "mypgm" with "ursa_majors_test_program" or "example", or whatever you'd like. To run the program, then type ./mypgm [or ./ursa_majors_test_program or ./example].
There's no magic to the name, just like there was no magic to you naming your source files main.c and example.cpp
It should be descriptive of function. If you had said you were working on a text editing program, in my example, I might have used -o editor