LLVM 8 and later ORC JIT problems with math library - llvm

Since LLVM 8 (including current LLVM trunk, aka LLVM 9) I have run into a problem when using the ORC JIT ExecutionEngine with functions that contain a call to the standard math library.
The JIT compiler is able to find the symbol for the function, but fails to get its address if the function makes a call to the math library.
I attach a simple reproducer that shows the problem. The program test.cc reads in an IR file which contains a single function in LLVM's Intermediate Representation:
The function takes a single floating-point argument and returns, in the case of
"func_works.ll", the argument itself, and in the case of
"func_cos_fails.ll", the cosine of the argument.
I didn't implement the choice between the two files at runtime, so the program needs to be rebuilt when switching to the other case.
The program uses the standard KaleidoscopeJIT.h that comes with LLVM (except that I had to expose the DataLayout).
If you build the program with the "func_works.ll" and run it, the program succeeds with:
symbol found!
address found!
If you build the program with the "func_cos_fails.ll" and run it, the program fails with:
symbol found!
Failure value returned from cantFail wrapped call
UNREACHABLE executed at install/llvm-8.0-x86-debug/include/llvm/Support/Error.h:732!
This happens with LLVM 8 release and the current LLVM trunk.
Does anyone see what's going on?
This test was run on an x86 Linux Ubuntu system with LLVM configured with
cmake -G "Unix Makefiles" \
-DBUILD_SHARED_LIBS="ON" \
-DLLVM_ENABLE_RTTI="ON" \
-DLLVM_ENABLE_ZLIB="OFF" \
-DLLVM_ENABLE_TERMINFO="OFF" \
-DCMAKE_BUILD_TYPE="Debug" \
-DCMAKE_INSTALL_PREFIX=$CMAKE_INSTALL_PREFIX \
-DLLVM_TARGETS_TO_BUILD="X86" \
$SRC
test.cc:
#include "KaleidoscopeJIT.h"
#include "llvm/Analysis/Passes.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/ExecutionEngine/SectionMemoryManager.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Verifier.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/Path.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/Support/TargetSelect.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Transforms/Scalar.h"
#include "llvm/Support/InitLLVM.h"
#include <iostream>
using namespace llvm;
using namespace llvm::orc;
int main(int argc, char **argv) {
  InitLLVM X(argc, argv);
  EnableDebugBuffering = true;
  LLVMContext Context;

  InitializeNativeTarget();
  InitializeNativeTargetAsmPrinter();
  InitializeNativeTargetAsmParser();

  cl::ParseCommandLineOptions(argc, argv, "Kaleidoscope example program\n");

  SMDiagnostic Err;
  std::unique_ptr<llvm::Module> M = parseIRFile("func_cos_fails.ll", Err, Context, false);
  //std::unique_ptr<llvm::Module> M = parseIRFile("func_works.ll", Err, Context, false);
  if (!M) {
    Err.print("IR parsing failed: ", errs());
    return 0;
  }

  std::unique_ptr<KaleidoscopeJIT> TheJIT;
  TheJIT = llvm::make_unique<KaleidoscopeJIT>();
  auto H = TheJIT->addModule(std::move(M));

  std::string MangledName;
  llvm::raw_string_ostream MangledNameStream(MangledName);
  llvm::Mangler::getNameWithPrefix(MangledNameStream, "func_ir", TheJIT->getDL());

  if (auto Sym = TheJIT->findSymbol(MangledNameStream.str())) {
    std::cout << "symbol found!\n";
    void *fptr = (void *)cantFail(Sym.getAddress());
    std::cout << "address found!\n";
  } else {
    std::cout << "symbol not found!\n";
  }
  return 0;
}
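A side note on the error output above: cantFail aborts with the UNREACHABLE shown earlier whenever the wrapped call fails, hiding the underlying error message. To see the real reason, the getAddress call can be unwrapped by hand. A minimal sketch using LLVM's Error utilities, replacing the if block at the end of test.cc:

if (auto Sym = TheJIT->findSymbol(MangledNameStream.str())) {
  std::cout << "symbol found!\n";
  if (auto AddrOrErr = Sym.getAddress()) {
    void *fptr = (void *)*AddrOrErr;
    std::cout << "address found!\n";
  } else {
    // Report why getAddress() failed instead of aborting via cantFail.
    logAllUnhandledErrors(AddrOrErr.takeError(), errs(), "getAddress failed: ");
  }
}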
func_cos_fails.ll:
source_filename = "module"
target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
declare float @cosf(float)

define float @func_ir(float %arg0) {
entrypoint:
  %0 = call float @cosf(float %arg0)
  ret float %0
}
func_works.ll:
source_filename = "module"
target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"
define float @func_ir(float %arg0) {
entrypoint:
  ret float %arg0
}
KaleidoscopeJIT.h:
#ifndef LLVM_EXECUTIONENGINE_ORC_KALEIDOSCOPEJIT_H
#define LLVM_EXECUTIONENGINE_ORC_KALEIDOSCOPEJIT_H
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/iterator_range.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/JITSymbol.h"
#include "llvm/ExecutionEngine/Orc/CompileUtils.h"
#include "llvm/ExecutionEngine/Orc/IRCompileLayer.h"
#include "llvm/ExecutionEngine/Orc/LambdaResolver.h"
#include "llvm/ExecutionEngine/Orc/RTDyldObjectLinkingLayer.h"
#include "llvm/ExecutionEngine/RTDyldMemoryManager.h"
#include "llvm/ExecutionEngine/SectionMemoryManager.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Mangler.h"
#include "llvm/Support/DynamicLibrary.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Target/TargetMachine.h"
#include <algorithm>
#include <map>
#include <memory>
#include <string>
#include <vector>
namespace llvm {
namespace orc {

class KaleidoscopeJIT {
public:
  using ObjLayerT = LegacyRTDyldObjectLinkingLayer;
  using CompileLayerT = LegacyIRCompileLayer<ObjLayerT, SimpleCompiler>;

  KaleidoscopeJIT()
      : Resolver(createLegacyLookupResolver(
            ES,
            [this](const std::string &Name) {
              return ObjectLayer.findSymbol(Name, true);
            },
            [](Error Err) { cantFail(std::move(Err), "lookupFlags failed"); })),
        TM(EngineBuilder().selectTarget()), DL(TM->createDataLayout()),
        ObjectLayer(ES,
                    [this](VModuleKey) {
                      return ObjLayerT::Resources{
                          std::make_shared<SectionMemoryManager>(), Resolver};
                    }),
        CompileLayer(ObjectLayer, SimpleCompiler(*TM)) {
    llvm::sys::DynamicLibrary::LoadLibraryPermanently(nullptr);
  }

  TargetMachine &getTargetMachine() { return *TM; }

  VModuleKey addModule(std::unique_ptr<Module> M) {
    auto K = ES.allocateVModule();
    cantFail(CompileLayer.addModule(K, std::move(M)));
    ModuleKeys.push_back(K);
    return K;
  }

  void removeModule(VModuleKey K) {
    ModuleKeys.erase(find(ModuleKeys, K));
    cantFail(CompileLayer.removeModule(K));
  }

  JITSymbol findSymbol(const std::string Name) {
    return findMangledSymbol(mangle(Name));
  }

  const DataLayout &getDL() const { return DL; }

private:
  std::string mangle(const std::string &Name) {
    std::string MangledName;
    {
      raw_string_ostream MangledNameStream(MangledName);
      Mangler::getNameWithPrefix(MangledNameStream, Name, DL);
    }
    return MangledName;
  }

  JITSymbol findMangledSymbol(const std::string &Name) {
#ifdef _WIN32
    // The symbol lookup of ObjectLinkingLayer uses the SymbolRef::SF_Exported
    // flag to decide whether a symbol will be visible or not, when we call
    // IRCompileLayer::findSymbolIn with ExportedSymbolsOnly set to true.
    //
    // But for Windows COFF objects, this flag is currently never set.
    // For a potential solution see: https://reviews.llvm.org/rL258665
    // For now, we allow non-exported symbols on Windows as a workaround.
    const bool ExportedSymbolsOnly = false;
#else
    const bool ExportedSymbolsOnly = true;
#endif

    // Search modules in reverse order: from last added to first added.
    // This is the opposite of the usual search order for dlsym, but makes more
    // sense in a REPL where we want to bind to the newest available definition.
    for (auto H : make_range(ModuleKeys.rbegin(), ModuleKeys.rend()))
      if (auto Sym = CompileLayer.findSymbolIn(H, Name, ExportedSymbolsOnly))
        return Sym;

    // If we can't find the symbol in the JIT, try looking in the host process.
    if (auto SymAddr = RTDyldMemoryManager::getSymbolAddressInProcess(Name))
      return JITSymbol(SymAddr, JITSymbolFlags::Exported);

#ifdef _WIN32
    // For Windows retry without "_" at beginning, as RTDyldMemoryManager uses
    // GetProcAddress and standard libraries like msvcrt.dll use names
    // with and without "_" (for example "_itoa" but "sin").
    if (Name.length() > 2 && Name[0] == '_')
      if (auto SymAddr =
              RTDyldMemoryManager::getSymbolAddressInProcess(Name.substr(1)))
        return JITSymbol(SymAddr, JITSymbolFlags::Exported);
#endif
    return nullptr;
  }

  ExecutionSession ES;
  std::shared_ptr<SymbolResolver> Resolver;
  std::unique_ptr<TargetMachine> TM;
  const DataLayout DL;
  ObjLayerT ObjectLayer;
  CompileLayerT CompileLayer;
  std::vector<VModuleKey> ModuleKeys;
};

} // end namespace orc
} // end namespace llvm
#endif // LLVM_EXECUTIONENGINE_ORC_KALEIDOSCOPEJIT_H
For convenience, I provide a Makefile:
LLVM_CONFIG = ${LLVM_INSTALL_PATH}
LLVM_CXXFLAGS = $(shell $(LLVM_CONFIG) --cxxflags)
LLVM_LDFLAGS = $(shell $(LLVM_CONFIG) --ldflags)
LLVM_LIBS = $(shell $(LLVM_CONFIG) --libs)
all: test
test.o: test.cc KaleidoscopeJIT.h
	g++ -c -o $@ $< $(LLVM_CXXFLAGS)

test: test.o
	g++ -o $@ $< $(LLVM_LDFLAGS) $(LLVM_LIBS)

clean:
	rm -f *.o
	rm -f test

I believe the solution to this (for LLVM 7 and 8, anyway) was found here:
https://stackoverflow.com/a/56862433/2310373
Namely, replace:
[this](const std::string &Name) {
  return ObjectLayer.findSymbol(Name, true);
},
With something like:
[this](const std::string &Name) {
  auto FoundSymbol = ObjectLayer.findSymbol(Name, true);
  if (!FoundSymbol) {
    if (auto SymAddr = RTDyldMemoryManager::getSymbolAddressInProcess(Name))
      return JITSymbol(SymAddr, JITSymbolFlags::Exported);
  }
  return FoundSymbol;
},
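As far as I can tell, the reason this matters is that the legacy ORC layers resolve every external symbol of a module through the SymbolResolver handed to the object layer; the original lambda only searched the JIT'd objects, so an external symbol like cosf was never found at link time (the in-process fallback in findMangledSymbol only runs for top-level findSymbol calls). With the patched resolver, getAddress() succeeds and the JIT'd function can be called. A minimal sketch of verifying that at the end of test.cc (the cast mirrors the existing code; the printed value is my assumption):

if (auto Sym = TheJIT->findSymbol(MangledNameStream.str())) {
  auto func = reinterpret_cast<float (*)(float)>(cantFail(Sym.getAddress()));
  // cosf(0.0f) == 1.0f, so func_cos_fails.ll should print 1 here.
  std::cout << "func_ir(0.0f) = " << func(0.0f) << "\n";
}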

Related

Clang CFE Compilation unable to read file

I'm trying to compile test files using the Clang Compiler FrontEnd with the following code:
#include <string>
#include <vector>
#include <memory>
#include <iostream>
#include <llvm/Support/Host.h>
#include <llvm/Support/Program.h>
#include <llvm/Support/FileSystem.h>
#include <llvm/Support/VirtualFileSystem.h>
#include "clang/Basic/LLVM.h"
// #include <clang/Driver/ToolChain.h>
#include <clang/Driver/Driver.h>
#include <clang/Basic/Diagnostic.h>
#include <clang/Basic/DiagnosticIDs.h>
#include <clang/Basic/DiagnosticOptions.h>
#include <clang/Frontend/CompilerInstance.h>
#include <clang/Driver/Compilation.h>
using namespace clang::driver;
using namespace clang;
std::string pwd = "/home/arch/Documents/CFE/Examples/Compile/";
std::string target_executable = pwd + "target";
int main()
{
    llvm::IntrusiveRefCntPtr<DiagnosticsEngine> DE(
        clang::CompilerInstance::createDiagnostics(new DiagnosticOptions));
    llvm::ErrorOr<std::string> clangPath = llvm::sys::findProgramByName("clang++");
    Driver D(StringRef(clangPath.get().c_str()), llvm::sys::getDefaultTargetTriple(),
             *DE, "clang LLVM compiler", InMemoryFileSystem);
    // D.setCheckInputsExist(false);
    std::vector<const char *> args;
    args.push_back(clangPath.get().c_str());
    args.push_back((target_executable + ".cpp").c_str());
    args.push_back("-cl-std=clc++2021");
    args.push_back(("-o " + target_executable + ".spv").c_str());
    ArrayRef<const char *> compileArgs(args);
    std::unique_ptr<Compilation> C(D.BuildCompilation(compileArgs));
    assert(C > 0);
    bool CallbackHasCalled = false;
    C->setPostCallback(
        [&](const Command &Cmd, int Ret)
        { std::cout << "postCallback return value: " << Ret << std::endl;
          CallbackHasCalled = true; });
    const JobList &Jobs = C->getJobs();
    auto &CmdCompile = Jobs.getJobs().front();
    const Command *FailingCmd = nullptr;
    assert(C->ExecuteCommand(*CmdCompile, FailingCmd));
    assert(FailingCmd);
}
With the following CMakeLists.txt:
cmake_minimum_required(VERSION 3.23)
project("CLANG_CFE")
find_package(Clang REQUIRED)
target_precompile_headers(clangBasic PUBLIC)
execute_process(COMMAND clang-config --libs OUTPUT_VARIABLE LLVM_LIBRARIES)
execute_process(COMMAND clang-config --cxx-flags OUTPUT_VARIABLE LLVM_CXX_FLAGS)
add_executable(compile_example main.cpp)
target_compile_options(compile_example PRIVATE ${LLVM_CXX_FLAGS})
set(CLANG_LIBS clangBasic clangDriver clangFrontend)
target_link_libraries(compile_example PRIVATE ${CLANG_LIBS} LLVMSupport)
The code compiles fine but fails to read the input file (target_executable), resulting in the following error:
error: no such file or directory: 'P�ZUU'
Is this a read/write permission error?
It turned out to be a problem with the use of absolute paths: the method Driver::DiagnoseInputExistence automatically rejects a file path that starts with "/".
Using a relative file path solved the issue.
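A minimal sketch of that workaround (assuming llvm::sys::fs::set_current_path is available in your LLVM version; the relative file name here is hypothetical):

// Run the driver from the sources' directory and hand it a relative path,
// which DiagnoseInputExistence accepts.
llvm::sys::fs::set_current_path(pwd);    // pwd as defined above
args.push_back(clangPath.get().c_str());
args.push_back("target.cpp");            // relative instead of absolute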

Using JIT in llvm (ExecutionEngine create error) C++

I'm trying to use a public example of compiling and running a function from LLVM bitcode, but it doesn't work (I'm using the latest version of LLVM).
Here is my code
#pragma warning(disable : 4146)
#include <iostream>
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Verifier.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/SectionMemoryManager.h"
#include "llvm/ExecutionEngine/Orc/CompileUtils.h"
#include "llvm/Support/TargetSelect.h"
#include "llvm/ExecutionEngine/MCJIT.h"
#include "llvm/Transforms/Scalar.h"
#include "llvm/Analysis/BasicAliasAnalysis.h"

using namespace llvm;

int main(int argc, char* argv[]) {
    InitializeNativeTarget();
    InitializeNativeTargetAsmPrinter();
    InitializeNativeTargetAsmParser();

    llvm::LLVMContext context;
    std::unique_ptr<llvm::Module> m{ new llvm::Module {"test", context} };
    Constant* init_value = ConstantInt::get(Type::getInt32Ty(context), APInt(32, 12));
    GlobalVariable* bsp = new GlobalVariable(*m, Type::getInt32Ty(context), false,
        GlobalValue::ExternalLinkage, init_value, "bsp", 0/*, GlobalValue::GeneralDynamicTLSModel*/);
    llvm::FunctionType* funcType = llvm::FunctionType::get(Type::getInt32Ty(context), {}, false);
    llvm::Function* mainFunc = llvm::Function::Create(funcType, llvm::Function::ExternalLinkage, "main", m.get());
    llvm::BasicBlock* bb = llvm::BasicBlock::Create(context, "entrypoint", mainFunc);
    IRBuilder<> builder(bb);
    builder.CreateRet(builder.CreateLoad(bsp));

    std::string err;
    llvm::EngineBuilder EB(std::move(m));
    EB.setEngineKind(llvm::EngineKind::JIT).setErrorStr(&err);
    llvm::ExecutionEngine* EE = EB.create();
    if (!EE) {
        std::cerr << "unable to create a jitter: " << err << "\n";
        return 0;
    }
    EE->finalizeObject();
    const auto fa = (int(*)())EE->getFunctionAddress("main");
    printf("%d\n", fa());
    system("pause");
}
I can compile and build, but this error appears at startup:
unable to create a jitter: JIT has not been linked in.
I looked online for a solution; it was said that the file MCJIT.h needs to be included, but as the code shows, it already is. It also crashes at the end of the function: I can put this code inside another function, but a Visual Studio breakpoint then triggers when that function exits.
EDIT:
I tried calling the LLVMLinkInMCJIT() function; this had no effect, and I saw the following in my binary.
00007FF7C0D1AA96 | FF15 FC961A00 | call qword ptr ds:[<&LLVMLinkInInterpreter>]
The LLVMLinkInInterpreter function's code:
00007FF8BCB6EC30 | C2 0000 | ret 0x0
Looks like I need to rebuild my llvm-c.dll; I'll try it and report back.

Why is my LLVM JIT implementation segfaulting?

I'm trying to implement a simple JIT compiler using LLVM, following along with the tutorial (http://releases.llvm.org/4.0.1/docs/tutorial/BuildingAJIT1.html), and I'm running into a segfault. I've rewritten my code in the form a minimal (albeit still kinda long) example. The example loops through the integers 0 through 9 and for each one attempts to compile a function that prints that integer, add it to a module, execute the function, and then remove the module from the JIT. This is to simulate an interactive session in which a user inputs commands such as print 0, print 1, etc.
#include <array>
#include <cstdint>
#include <iostream>
#include <llvm/ExecutionEngine/ExecutionEngine.h>
#include <llvm/ExecutionEngine/JITSymbol.h>
#include <llvm/ExecutionEngine/Orc/CompileUtils.h>
#include <llvm/ExecutionEngine/Orc/IRCompileLayer.h>
#include <llvm/ExecutionEngine/Orc/LambdaResolver.h>
#include <llvm/ExecutionEngine/Orc/ObjectLinkingLayer.h>
#include <llvm/ExecutionEngine/SectionMemoryManager.h>
#include <llvm/ExecutionEngine/RuntimeDyld.h>
#include <llvm/IR/BasicBlock.h>
#include <llvm/IR/Constants.h>
#include <llvm/IR/DerivedTypes.h>
#include <llvm/IR/Function.h>
#include <llvm/IR/GlobalValue.h>
#include <llvm/IR/GlobalVariable.h>
#include <llvm/IR/IRBuilder.h>
#include <llvm/IR/LLVMContext.h>
#include <llvm/IR/LegacyPassManager.h>
#include <llvm/IR/Mangler.h>
#include <llvm/IR/Module.h>
#include <llvm/IR/Type.h>
#include <llvm/IR/Value.h>
#include <llvm/IR/Verifier.h>
#include <llvm/Support/DynamicLibrary.h>
#include <llvm/Support/TargetSelect.h>
#include <llvm/Support/raw_ostream.h>
#include <llvm/Target/TargetMachine.h>
#include <llvm/Transforms/Scalar.h>
#include <llvm/Transforms/Scalar/GVN.h>
#include <memory>
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>
int main() {
    llvm::InitializeNativeTarget();
    llvm::InitializeNativeTargetAsmPrinter();
    llvm::InitializeNativeTargetAsmParser();

    auto machine = llvm::EngineBuilder().selectTarget();
    llvm::orc::ObjectLinkingLayer<> linking_layer;
    llvm::orc::IRCompileLayer<llvm::orc::ObjectLinkingLayer<>> compile_layer(
        linking_layer, llvm::orc::SimpleCompiler(*machine)
    );
    llvm::LLVMContext context;
    llvm::IRBuilder<> builder(context);
    auto layout = machine->createDataLayout();
    auto module = std::make_unique<llvm::Module>("module", context);
    auto manager = std::make_unique<llvm::legacy::FunctionPassManager>(
        module.get()
    );
    for (
        auto p : {
            llvm::createInstructionCombiningPass(),
            llvm::createReassociatePass(), llvm::createGVNPass(),
            llvm::createCFGSimplificationPass()
        }
    ) manager->add(p);
    module->setDataLayout(layout);
    llvm::sys::DynamicLibrary::LoadLibraryPermanently(nullptr);

    auto index = llvm::ConstantInt::get(context, llvm::APInt(8, 0));
    std::vector<llvm::Constant*> indices = {index, index};
    std::string func_name = "func";

    for (auto i = 0; i < 10; ++i) {
        auto format_str = new llvm::GlobalVariable(
            *module, llvm::ArrayType::get(llvm::Type::getInt8Ty(context), 4),
            true, llvm::GlobalValue::PrivateLinkage,
            llvm::ConstantDataArray::getString(context, "%i\n"), "format_str"
        );
        format_str->setAlignment(1);
        auto function = llvm::Function::Create(
            llvm::FunctionType::get(
                llvm::Type::getVoidTy(context), std::vector<llvm::Type*>{},
                false
            ), llvm::Function::ExternalLinkage, func_name, module.get()
        );
        builder.SetInsertPoint(
            llvm::BasicBlock::Create(context, "entry", function)
        );
        builder.CreateCall(
            module->getOrInsertFunction(
                "printf", llvm::FunctionType::get(
                    llvm::IntegerType::getInt32Ty(context),
                    llvm::PointerType::get(llvm::Type::getInt8Ty(context), 0),
                    true
                )
            ), std::vector<llvm::Value*>{
                llvm::ConstantExpr::getGetElementPtr(
                    nullptr, format_str, indices
                ), llvm::ConstantInt::get(context, llvm::APInt(32, i))
            }, "call"
        );
        builder.CreateRetVoid();

        std::string message;
        llvm::raw_string_ostream message_stream(message);
        if (llvm::verifyFunction(*function, &message_stream))
            throw std::runtime_error(message_stream.str());

        auto handle = compile_layer.addModuleSet(
            std::array<std::unique_ptr<llvm::Module>, 1>{std::move(module)},
            std::make_unique<llvm::SectionMemoryManager>(),
            llvm::orc::createLambdaResolver(
                [&](const std::string& name) {
                    auto symbol = compile_layer.findSymbol(name, false);
                    return symbol ? symbol : llvm::JITSymbol(nullptr);
                }, [](const std::string& name) {
                    auto address = llvm::RTDyldMemoryManager::
                        getSymbolAddressInProcess(name);
                    return address ? llvm::JITSymbol(
                        address, llvm::JITSymbolFlags::Exported
                    ) : llvm::JITSymbol(nullptr);
                }
            )
        );

        std::string mangled_name;
        llvm::raw_string_ostream mangled_name_stream(mangled_name);
        llvm::Mangler::getNameWithPrefix(
            mangled_name_stream, func_name, layout
        );
        (
            reinterpret_cast<void(*)()>(
                static_cast<intptr_t>(
                    compile_layer.findSymbol(
                        mangled_name_stream.str(), true
                    ).getAddress()
                )
            )
        )();
        compile_layer.removeModuleSet(handle);
    }
}
The expected output is as follows.
0
1
2
3
4
5
6
7
8
9
Instead I get this.
0
Segmentation fault (core dumped)
According to GDB, the segfault is occurring during the call to llvm::GlobalVariable::GlobalVariable. Here's the backtrace.
#0 0x00007ffcdb8b6541 in llvm::GlobalVariable::GlobalVariable(llvm::Module&, llvm::Type*, bool, llvm::GlobalValue::LinkageTypes, llvm::Constant*, llvm::Twine const&, llvm::GlobalVariable*, llvm::GlobalValue::ThreadLocalMode, unsigned int, bool) () from /usr/lib/libLLVM-4.0.so
#1 0x000000010000698a in main () at main.cc:83
I'm using LLVM version 4.0.1 and GCC version 7.1.1 and compiling with the following command.
g++ -std=c++17 main.cc -o main -O0 -Wall -Wextra -Wno-unused-function \
-Wno-unused-value -Wno-unused-parameter -Werror -ggdb \
`llvm-config --system-libs --libs core`
I'm hoping some LLVM veteran can find my mistake. Thanks, guys!
module is initialized before the for loop:
auto module = std::make_unique<llvm::Module>("module", context);
then in the for loop:
for(...)
{
    auto format_str = new llvm::GlobalVariable(*module, ...);
                                               ^~~~~~~
    ...
    std::array<std::unique_ptr<llvm::Module>, 1>{std::move(module)},
                                                 ^~~~~~~~~~~~~~~~~
}
At the first iteration you access the object owned by module (fine) and then move from it. This transfers ownership of the managed object away from module.
At the second iteration you access the object supposedly managed by module -> crash, because module no longer has a managed object.
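A minimal fix along those lines is to construct a fresh module at the top of every iteration, so there is always a live module to build into and to move into the JIT (a sketch; the pass manager would need re-creating too, since it holds module.get()):

for (auto i = 0; i < 10; ++i) {
    // The previous module was consumed by addModuleSet via std::move,
    // so build a new one for this iteration.
    auto module = std::make_unique<llvm::Module>("module", context);
    module->setDataLayout(layout);
    // ... create format_str, the function, and its body as before ...
    // std::move(module) into addModuleSet is now safe every time.
}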

g++ Qt link issues

I'm having trouble (dynamically) linking with the Qt libraries whilst compiling a C++ project with g++ 4.5.3:
g++ -Wall XMLInOut.cpp tinyxml2.cpp -I"C:\Qt\4.7.1\include" -I"$ADTF_ADDONS\adtf-device-toolbox\include" -L"C:\Qt\4.7.1\lib" -lQtCore4 -DtUInt32=uint -o xmltest.exe
The output is the following:
/tmp/cclJqe9S.o:XMLInOut.cpp:(.text+0x1db): undefined reference to `QString::toUInt(bool*, int) const'
/tmp/cclJqe9S.o:XMLInOut.cpp:(.text$_ZN7QStringC1EPKc[QString::QString(char const*)]+0x15): undefined reference to QString::fromAscii_helper(char const*, int)
/usr/lib/gcc/i686-pc-cygwin/4.5.3/../../../../i686-pc-cygwin/bin/ld: /tmp/cclJqe9S.o: bad reloc address 0x15 in section `.text$_ZN7QStringC1EPKc[QString::QString(char const*)]'
If I understand correctly, the -l option will link against a dynamic library; in Qt/lib I have QtCore4.dll.
What am I doing wrong?
Cheers
EDIT:
Although I believe the problem lies fully in the linkage, I will post my code:
#include "XMLInOut.h"
#include "tinyxml2.h"
#include <iostream>
#include "ModelToInterfaceWrapper.h"
//#define __DEBUG_OUT_
#ifdef __DEBUG_OUT_
#define out(A) cout<<A
#else
#define out(A)
#endif
using namespace std;
using namespace tinyxml2;
void XMLInOut::readAndParseFile(const char* pFilename){
XMLDocument doc;
doc.LoadFile( "config.xml" );
XMLElement *service = doc.FirstChildElement( "root" )->FirstChildElement( "Service" );
for( service; service; service=service->NextSiblingElement() )
{// Iterating Services
out(endl<<service->FirstAttribute()->Value());
QString serviceName;
tUInt32 signalValue ;
const XMLAttribute* description = ((const XMLElement *)service)->FindAttribute("description");
if((const XMLAttribute*)0 != description){
out(" ("<<description->Value()<<")");
serviceName = description->Value();
}else {
serviceName = service->FirstAttribute()->Value();
}
for(XMLElement *signal = service->FirstChildElement( "Signal" ); signal; signal=signal->NextSiblingElement() ){
out(endl<<" "<<signal->GetText());
signalValue = (new QString (signal->GetText()))->toUInt();
}
}
}
int main(int argc, char **argv) {
XMLInOut xmlinOut;
xmlinOut.readAndParseFile("config.xml");
}

How to run commandline/terminal utils with Boost.Process 0.5?

I found out there is a new Boost.Process 0.5, but I can't see how to execute ping or echo across Windows, Linux, and Mac.
I got it working, at least on Windows, with this simple program:
#include <string>
#include <iostream>
#include <boost/asio.hpp>
#include <boost/iostreams/device/file_descriptor.hpp>
#include <boost/iostreams/stream.hpp>
#include <boost/process.hpp>
#include <boost/filesystem/path.hpp>
#include <boost/system/error_code.hpp>
namespace bp = boost::process;
namespace bpi = boost::process::initializers;
namespace bio = boost::iostreams;
int main()
{
    bp::pipe p = bp::create_pipe();
    {
        bio::file_descriptor_sink sink(p.sink, bio::close_handle);
        boost::filesystem::path p("C:/Windows/System32/cmd.exe");
        boost::system::error_code ec;
        bp::execute(
            bpi::run_exe(p),
            bpi::set_cmd_line(L"cmd /c echo --echo-stderr hello"),
            bpi::bind_stdout(sink),
            bpi::set_on_error(ec)
        );
    }
    bio::file_descriptor_source source(p.source, bio::close_handle);
    bio::stream<bio::file_descriptor_source> is(source);
    std::string s;
    is >> s;
    std::cout << s << std::endl;
    std::cin.get();
    return 0;
}
On Windows this works correctly, but how do I make it cross-platform so that it also works on Mac and Linux? I don't know how to write one path that would work for any Unix shell (or at least for Linux bash and the Mac default shell). So: how do I run command-line/terminal utilities with Boost.Process 0.5 on Windows and Unix-like OSs, preferably without writing the path to the shell each time, just the utility (such as echo or ping) and its arguments?
I found related code inside the previous version:
std::string exe;
std::vector<std::string> args;
#if defined(BOOST_POSIX_API)
exe = "/bin/sh";
args.push_back("sh");
args.push_back("-c");
args.push_back(command);
#elif defined(BOOST_WINDOWS_API)
char sysdir[MAX_PATH];
UINT size = ::GetSystemDirectoryA(sysdir, sizeof(sysdir));
if (!size)
    boost::throw_exception(boost::system::system_error(boost::system::error_code(::GetLastError(), boost::system::get_system_category()), "boost::process::launch_shell: GetWindowsDirectory failed"));
BOOST_ASSERT(size < MAX_PATH);
exe = std::string(sysdir) + (sysdir[size - 1] != '\\' ? "\\cmd.exe" : "cmd.exe");
args.push_back("cmd");
args.push_back("/c");
args.push_back(command);
#endif
In Boost.Process 0.5 the shell_path() API was introduced, so the following might hook you up:
#if defined(BOOST_POSIX_API)
#define SHELL_COMMAND_PREFIX "-c"
#elif defined(BOOST_WINDOWS_API)
#define SHELL_COMMAND_PREFIX "/c"
#endif
filesystem::path shellPath = process::shell_path();
std::string cl = shellPath.string() + " " SHELL_COMMAND_PREFIX " ";
cl += "ping 127.0.0.1";
execute(
    set_cmd_line(cl),
    throw_on_error()
);
If you really want to hide the #ifdef, I'd go ahead and edit the Boost sources to also return the relevant command prefix (adding a new API); it's open source, after all, isn't it? :)
You can find the relevant sources to edit at boost/process/windows/shell_path.hpp and boost/process/posix/shell_path.hpp
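Putting the pieces together, a self-contained sketch based on the 0.5 API used above (error handling omitted, and I have not tested this on all three platforms):

#include <string>
#include <boost/filesystem/path.hpp>
#include <boost/process.hpp>

namespace bp = boost::process;
namespace bpi = boost::process::initializers;

#if defined(BOOST_POSIX_API)
#define SHELL_COMMAND_PREFIX "-c"
#elif defined(BOOST_WINDOWS_API)
#define SHELL_COMMAND_PREFIX "/c"
#endif

int main()
{
    // shell_path() picks /bin/sh on POSIX and cmd.exe on Windows, so the
    // same command line runs the utility through the native shell everywhere.
    std::string cl = bp::shell_path().string()
                   + " " SHELL_COMMAND_PREFIX " "
                     "ping 127.0.0.1";
    bp::execute(
        bpi::set_cmd_line(cl),
        bpi::throw_on_error()
    );
    return 0;
}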