How do I use pthreads in a ROS C++ Node?

I am trying to use the pthread library inside a ROS node, and I am including it as #include <pthread>. When I run catkin_make I get the error below. I created a simple thread as std::thread m1(move, 1);
thread: No such file or directory
#include <pthread>
^~~~~~~~~
The signature of the move function is void move(short axis_no, short direction = 0) and I instantiated the thread as
std::thread m1(move, 1);
m1.join();
I tried to add the pthread library in my CMakeLists.txt as follows:
add_compile_options(-std=c++11 -pthread)
target_link_libraries(pthread)
Is there a way I can use the pthread library inside the ROS node?
Thank you.

pthread is a C library available on some platforms and if you want to use the pthread_* functions you need to #include <pthread.h>.
Since C++11 there is an abstraction layer on top of the native threading libraries which is available if you instead #include <thread>.
You can then use std::thread - but you will still need to link with -pthread.
Instead of hardcoding -pthread, you can let CMake find and add the appropriate library:
find_package(Threads REQUIRED)
target_link_libraries(your_app PRIVATE Threads::Threads)
In your program, the move function needs two arguments when started through std::thread (its type is void(*)(short, short); the default value for direction is not part of the function type, so it does not apply here).
#include <thread>
#include <iostream>

void move(short axis_no, short direction = 0) {
    std::cout << axis_no << ' ' << direction << '\n';
}

int main() {
    auto th = std::thread(move, 1, 2); // needs 2 arguments
    th.join();
}
If you want, you can create an overload to only have to supply one argument, but in that case you also need to help the compiler to select the correct overload.
#include <thread>
#include <iostream>

void move(short axis_no, short direction) { // no default value here
    std::cout << axis_no << ' ' << direction << '\n';
}

void move(short axis_no) { // proxy function
    move(axis_no, 0); // where 0 is the default
}

int main() {
    using one = void(*)(short);
    using two = void(*)(short, short);
    // use one of the above type aliases to select the proper overload:
    auto th = std::thread(static_cast<one>(move), 1);
    th.join();
}
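If you would rather not spell out the function-pointer type, a lambda also works with either version of move, since the call inside the lambda is an ordinary call where overload resolution and default arguments apply as usual (a minimal sketch):

    auto th = std::thread([] { move(1); }); // ordinary call: picks the overload / uses the default
    th.join();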


How to resolve symbols that reside in the current session when constructing an (OrcV2) JIT compiler with llvm-13?

Edit
I'm basically trying to do this, but with LLVM's ORC JIT API (llvm-13).
I have a library that JITs some code using LLVM (13). I have some functions in that library that I want to make available to the JIT without writing them in LLVM IR.
Here's some code:
#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/ExecutionEngine/JITSymbol.h"
#include "llvm/ExecutionEngine/Orc/CompileUtils.h"
#include "llvm/ExecutionEngine/Orc/Core.h"
#include "llvm/ExecutionEngine/Orc/ExecutionUtils.h"
#include "llvm/ExecutionEngine/Orc/IRCompileLayer.h"
#include "llvm/ExecutionEngine/Orc/IRTransformLayer.h"
#include "llvm/ExecutionEngine/Orc/JITTargetMachineBuilder.h"
#include "llvm/ExecutionEngine/Orc/Mangling.h"
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/ExecutionEngine/Orc/RTDyldObjectLinkingLayer.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/ExecutionEngine/Orc/ExecutorProcessControl.h"
#include "llvm/ExecutionEngine/SectionMemoryManager.h"
#include "llvm/Passes/PassBuilder.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/InitLLVM.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Support/TargetSelect.h"
using namespace llvm;
using namespace llvm::orc;
// this is just a demo module that creates a function that adds 1 to an int
ThreadSafeModule makeSimpleModule() {
    auto Context = std::make_unique<LLVMContext>();
    auto M = std::make_unique<Module>("test", *Context);

    // Create the add1 function entry and insert this entry into module M. The
    // function will have a return type of "int" and take an argument of "int".
    Function *Add1F =
        Function::Create(FunctionType::get(Type::getInt32Ty(*Context),
                                           {Type::getInt32Ty(*Context)}, false),
                         Function::ExternalLinkage, "add1", M.get());

    // Add a basic block to the function. As before, it automatically inserts
    // because of the last argument.
    BasicBlock *BB = BasicBlock::Create(*Context, "EntryBlock", Add1F);

    // Create a basic block builder with default parameters. The builder will
    // automatically append instructions to the basic block `BB'.
    IRBuilder<> builder(BB);

    // Get pointers to the constant `1'.
    Value *One = builder.getInt32(1);

    // Get pointers to the integer argument of the add1 function...
    assert(Add1F->arg_begin() != Add1F->arg_end()); // Make sure there's an arg
    Argument *ArgX = &*Add1F->arg_begin();          // Get the arg
    ArgX->setName("AnArg");                         // Give it a nice symbolic name for fun.

    // Create the add instruction, inserting it into the end of BB.
    Value *Add = builder.CreateAdd(One, ArgX);

    // Create the return instruction and add it to the basic block
    builder.CreateRet(Add);

    return {std::move(M), std::move(Context)};
}

// this represents a function in my library that I want to make available to the JIT.
namespace mylibsubnamespace {

extern "C" {

int add2(int a) {
    return a + 2;
}

}

}

int main(int argc, const char *argv[]) {
    // do some JIT initialization
    llvm::InitLLVM X(argc, argv);
    llvm::InitializeNativeTarget();
    llvm::InitializeNativeTargetAsmPrinter();
    llvm::InitializeNativeTargetAsmParser();

    // Create an LLJIT instance.
    auto J = LLJITBuilder().create();

    // this code seems to enable symbol resolution for when the missing symbol is
    // in the standard C library (and presumably included).
    // This is what allows the "cos" function below to work (comment it out and we get a seg fault)
    auto DLSG = llvm::orc::DynamicLibrarySearchGenerator::GetForCurrentProcess(
        (*J)->getDataLayout().getGlobalPrefix());
    if (!DLSG) {
        llvm::logAllUnhandledErrors(
            std::move(DLSG.takeError()),
            llvm::errs(),
            "DynamicLibrarySearchGenerator not built successfully"
        );
    }
    (*J)->getMainJITDylib().addGenerator(std::move(*DLSG));

    auto M = makeSimpleModule();
    (*J)->addIRModule(std::move(M));

    // Look up the JIT'd function, cast it to a function pointer, then call it.
    // This function is written in LLVM IR directly.
    auto Add1Sym = (*J)->lookup("add1");
    int (*Add1)(int) = (int (*)(int)) Add1Sym->getAddress();
    int Result = Add1(42);
    outs() << "add1(42) = " << Result << "\n";

    // Look up the JIT'd function, cast it to a function pointer, then call it.
    // This function is defined in the standard C library. Its symbol is resolved
    // by DynamicLibrarySearchGenerator above
    auto CosSym = (*J)->lookup("cos");
    double (*Cos)(double) = (double (*)(double)) CosSym->getAddress();
    outs() << "Cos(50) = " << Cos(50) << "\n";
So far so good. What I haven't been able to work out is how to make the add2 function available in a cacheable way. I have successfully followed the instructions here to enable hardcoding of the address in the current session, like so:
auto symbolStringPool = (*J)->getExecutionSession().getExecutorProcessControl().getSymbolStringPool();
orc::SymbolStringPtr symbPtr = symbolStringPool->intern("add2");
// JITTargetAddress is uint64 typedefd
llvm::JITSymbolFlags flg;
llvm::JITEvaluatedSymbol symb((std::int64_t) &mylibsubnamespace::add2, flg);
if (llvm::Error err = (*J)->getMainJITDylib().define(
        llvm::orc::absoluteSymbols({{symbPtr, symb}}))) {
    llvm::logAllUnhandledErrors(std::move(err), llvm::errs(), "Could not add symbol add2");
}
But the instructions explicitly advise against this strategy, since symbols resolved this way are not cacheable. However, resolving the symbol the way the instructions suggest:
JD.addGenerator(DynamicLibrarySearchGenerator::Load("/path/to/lib",
                                                    DL.getGlobalPrefix()));
isn't possible because there is no /path/to/lib. What is the normal way to handle such situations?
What you need is to add the -rdynamic or -Wl,--export-dynamic flag to the linker.
-E --export-dynamic
When creating a dynamically linked executable, add all symbols to the dynamic symbol table. The dynamic symbol table is the set of symbols which are visible from dynamic objects at run time. If you do not use this option, the dynamic symbol table will normally contain only those symbols which are referenced by some dynamic object mentioned in the link. If you use dlopen to load a dynamic object which needs to refer back to the symbols defined by the program, rather than some other dynamic object, then you will probably need to use this option when linking the program itself.
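With the executable relinked that way, add2 ends up in its dynamic symbol table, so the DynamicLibrarySearchGenerator::GetForCurrentProcess generator that is already installed in the question's code can resolve it like any other process symbol. A minimal sketch of the lookup, mirroring the add1/cos lookups above (not compiled here):

    // add2 lives in the host executable; with -rdynamic it is exported,
    // so the GetForCurrentProcess generator can now find it.
    auto Add2Sym = (*J)->lookup("add2");
    int (*Add2)(int) = (int (*)(int)) Add2Sym->getAddress();
    outs() << "add2(40) = " << Add2(40) << "\n";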

How do you load a custom module into Lua?

This has been driving me nuts for a long time now. I have followed every tutorial I could find on the internet (here are a couple of examples [1], [2] of the maybe half dozen good ones found via a Google search), and still no clear explanation. Although it seems it must be something fairly simple, as the lack of a documented explanation implies that it's something most people would take for granted.
How do I load a custom module into Lua?
On the advice of questions like this one, I have written a module that builds a shared library with the expectation that I would be able to load it through a require call. However, when I do that I get undefined symbol errors, despite those exact symbols appearing in the list from the command nm -g mylib.so.
Those two tutorials I linked before aim to create executables that look like wrappers of the *.lua file. That is, the built *.exe file should be called to run the Lua program with the custom module.
I understand that these types of questions are asked here fairly frequently (as noted in this answer), but I am still at a loss. I tried some of the binding packages (Luabind and OOLua), but those didn't work out great (e.g. my earlier question--which I did ultimately figure out, sort of).
I have implemented a class in C++
I have wrapped the constructors, destructors, and functions with thunks
I have built it without errors as a shared library
Yet no matter what I get undefined symbol: ... errors when I try to load it as mod = require('mylib.so'). How do I do this?
Working Example of a Library of Functions
For the record, just registering a basic function works fine. The below code, when built as libluatest.so, can be run in Lua using the commands:
> require('libluatest')
> greet()
hello world!
libluatest.cpp
extern "C"
{
#include <lualib.h>
#include <lauxlib.h>
#include <lua.h>
}
#include <iostream>
static int greet(lua_State *L)
{
std::cout << "hello world!" << std::endl;
return 0;
}
static const luaL_reg funcs[] =
{
{ "greet", greet},
{ NULL, NULL }
};
extern "C" int luaopen_libluatest(lua_State* L)
{
luaL_register(L, "libluatest", funcs);
return 0;
}
Failing Example of a Class
This is what I am stuck on currently. It doesn't seem to want to work.
myObj.h
#include <string>

class MyObj
{
private:
    std::string name_;

public:
    MyObj();
    ~MyObj();
    void rename(std::string name);
};
myObj.cpp
extern "C"
{
#include <lualib.h>
#include <lauxlib.h>
#include <lua.h>
}
#include <iostream>
#include "myObj.h"
void MyObj::rename(std::string name)
{
name_ = name;
std::cout << "New name: " << name_ << std::endl;
}
extern "C"
{
// Lua "constructor"
static int lmyobj_new(lua_State* L)
{
MyObj ** udata = (MyObj **)lua_newuserdata(L, sizeof(MyObj));
*udata = new MyObj();
luaL_getmetatable(L, "MyObj");
lua_setmetatable(L, -1);
return 1;
}
// Function to check the type of an argument
MyObj * lcheck_myobj(lua_State* L, int n)
{
return *(MyObj**)luaL_checkudata(L, n, "MyObj");
}
// Lua "destructor": Free instance for garbage collection
static int lmyobj_delete(lua_State* L)
{
MyObj * obj = lcheck_myobj(L, 1);
delete obj;
return 0;
}
static int lrename(lua_State* L)
{
MyObj * obj = lcheck_myobj(L, 1);
std::string new_name = luaL_checkstring(L, 2);
obj->rename(new_name);
return 0;
}
int luaopen_libmyObj(lua_State* L)
{
luaL_Reg funcs[] =
{
{ "new", lmyobj_new }, // Constructor
{ "__gc", lmyobj_delete }, // Destructor
{ "rename", lrename }, // Setter function
{ NULL, NULL } // Terminating flag
};
luaL_register(L, "MyObj", funcs);
return 0;
}
}
Compiled into libmyObj.so using a simple CMake build with C++11 standard flags on.
Error
> require('libmyObj')
error loading module 'libmyObj' from file './libmyObj.so':
./libmyObj.so: undefined symbol: _ZN5MyObjC1Ev
stack traceback:
[C]: ?
[C]: in function 'require'
stdin:1: in main chunk
[C]: ?
I am dealing with Lua 5.1 on Ubuntu 14.04.
I am wondering if it has something to do with the mix of C and C++...
It seems that you do not implement the constructor and destructor:
MyObj(); ~MyObj();
That is exactly what the undefined symbol _ZN5MyObjC1Ev (the mangled name of MyObj::MyObj()) refers to. Also be careful with the luaopen_* function: since you load the module with require('libmyObj'), the entry point must be named luaopen_libmyObj.
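A minimal sketch of the missing definitions (the initializer is illustrative; what matters to the linker is that the definitions exist):

    // myObj.cpp
    MyObj::MyObj() : name_("unnamed") {}
    MyObj::~MyObj() {}

With those added and libmyObj.so rebuilt, the undefined symbol error at require time should go away.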

Qt 5.4: How can I create an app based on plugins?

I have an app in Qt 5.4, but when I need to include new functionality I have to recompile the whole app, which takes time. I need to know how to create or modify my app so that it uses plugins created by me.
A plugin-based architecture requires binary compatible and stable interfaces. Once you have these, a full-project recompilation should take about as much time as recompiling a single plugin.
Most likely, you have interdependencies in your code that preclude maintaining binary compatibility anyway - if you didn't, your changes would be localized enough so that a recompilation would only touch a couple of files.
What you're trying to do is come up with a solution to the wrong problem. Fix the structure of your code, and your recompilation times will drop. No need for plugins.
There are many alternatives; a popular one consists of using shared libraries implementing a well-defined API.
For example. Imagine this is the API you want to open for customization:
// pluggin_api.hpp
// This file defines the pluggin interface.
#pragma once

extern "C" { // avoid name mangling
    const char* pluggin_name();
    void foo(int x);
    void bar(int y);
}
Then your users (or yourself) will implement different variations of this API, for example:
// pluggin_1.cpp
#include "pluggin_api.hpp"
#include <iostream>

const char* pluggin_name() {
    return "Pluggin 1";
}

void foo(int x) {
    std::cout << "2 * x = " << 2 * x << std::endl;
}

void bar(int y) {
    std::cout << " 3 * y = " << 3 * y << std::endl;
}
and
// pluggin_2.cpp
#include "pluggin_api.hpp"
#include <iostream>

const char* pluggin_name() {
    return "Pluggin 2";
}

void foo(int x) {
    std::cout << "20 * x = " << 20 * x << std::endl;
}

void bar(int y) {
    std::cout << " 30 * y = " << 30 * y << std::endl;
}
These .cpp files are compiled as shared libraries; under Linux it looks like this:
$ g++ -shared -fPIC -o pluggin_1.so pluggin_1.cpp
$ g++ -shared -fPIC -o pluggin_2.so pluggin_2.cpp
Finally, the main application can call the different pluggins by name:
// main.cpp
#include <iostream>
#include <dlfcn.h> // POSIX --- will work on Linux and OS X, but
                   // you'll need an equivalent library for Windows

void execute_pluggin(const char* name) {
    // declare the signature of each function in the pluggin -- you
    // could do this in the header file instead (or another auxiliary
    // file)
    using pluggin_name_signature = const char*(*)();
    using foo_signature = void(*)(int);
    using bar_signature = void(*)(int);

    // open the shared library
    void* handle = dlopen(name, RTLD_LOCAL | RTLD_LAZY);

    // extract the functions
    auto fun_pluggin_name = reinterpret_cast<pluggin_name_signature>(dlsym(handle, "pluggin_name"));
    auto fun_foo = reinterpret_cast<foo_signature>(dlsym(handle, "foo"));
    auto fun_bar = reinterpret_cast<bar_signature>(dlsym(handle, "bar"));

    // call them
    std::cout << "Calling Pluggin: " << fun_pluggin_name() << std::endl;
    fun_foo(2);
    fun_bar(3);

    // close the shared library
    dlclose(handle);
}

int main(int argc, char *argv[]) {
    for(int k = 1; k < argc; ++k) {
        execute_pluggin(argv[k]);
    }
}
Compile and link with the dl library:
$ g++ -o main main.cpp -std=c++14 -ldl
and run (notice you need the ./ before the name; this has to do with library naming conventions and search paths):
$ ./main ./pluggin_1.so ./pluggin_2.so
Calling Pluggin: Pluggin 1
2 * x = 4
3 * y = 9
Calling Pluggin: Pluggin 2
20 * x = 40
30 * y = 90
There are so many details that I left out (most importantly error management). I recommend you read the book API Design for C++ to find alternative ideas (such as using a scripting language, using inheritance, using templates, and a mix of them).
I like the shared library method because then I can use the pluggins in other applications (for example: I can use Python's ctypes library, or Matlab's loadlibrary). I can also write the pluggin in, say, Fortran, and then wrap it in an interface compatible with the API.
Finally: notice that this has absolutely nothing to do with Qt (though Qt may provide a platform-independent shared library loader; I don't know). This is just a common manner in which people provide hooks for customization.
The Qt Documentation provides a 'How To' on creating plugins to extend a Qt based application using Qt's own mechanisms. See http://doc.qt.io/qt-5/plugins-howto.html.
It talks about a high-level API and a low-level API. You are interested in the low-level API.
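For the low-level API, the usual pattern is to declare a plugin interface with Q_DECLARE_INTERFACE, implement it in a QObject subclass marked with Q_PLUGIN_METADATA, and load it at run time with QPluginLoader. A minimal sketch (class names, the IID string, and the plugin path are illustrative, not from the question):

    // echointerface.h -- shared between the application and each plugin
    #include <QtPlugin>
    #include <QString>

    class EchoInterface
    {
    public:
        virtual ~EchoInterface() = default;
        virtual QString echo(const QString &message) = 0;
    };

    #define EchoInterface_iid "org.example.EchoInterface"
    Q_DECLARE_INTERFACE(EchoInterface, EchoInterface_iid)

    // echoplugin.h -- built as its own shared library (a Qt plugin project)
    #include <QObject>
    #include "echointerface.h"

    class EchoPlugin : public QObject, public EchoInterface
    {
        Q_OBJECT
        Q_PLUGIN_METADATA(IID EchoInterface_iid)
        Q_INTERFACES(EchoInterface)
    public:
        QString echo(const QString &message) override { return message; }
    };

    // in the application: load the plugin at run time and call through the interface
    #include <QPluginLoader>
    #include <QDebug>

    void loadAndUse()
    {
        QPluginLoader loader("plugins/libechoplugin.so");
        if (EchoInterface *iface = qobject_cast<EchoInterface *>(loader.instance()))
            qDebug() << iface->echo("hello");
        else
            qDebug() << "failed to load plugin:" << loader.errorString();
    }

Because the plugin is found and instantiated by name at run time, adding new functionality only means building that one shared library; the application itself does not need to be recompiled.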

Calling External Codes

Is it possible to call routines from an external file, like a plain text file made in Notepad (or also a cpp file if needed)?
e.g.
I have 3 files.
MainCode.cpp
SubCode_A.cpp <- not included in the headers of the MainCode.cpp
SubCode_B.cpp <- not included in the headers of the MainCode.cpp
MainCode.cpp
#include <iostream>
using namespace std;

int main ()
{
    int choice = 0;
    cin >> choice;
    if (choice == 1)
    {
        "call routines from SubCode_A.cpp;" <- is there a possible code for this?
    }
    else if (choice == 2)
    {
        "call routines from SubCode_B.cpp;" <- is there a possible code for this?
    }
    return 0;
}
=================================
SubCode_A.cpp CODES
{
    if (1) // i need to include an if statement :)
        cout << "Hello World!!";
}
=================================
SubCode_B.cpp CODES
{
    if (1) // i need to include an if statement :)
        cout << "World Hello!!";
}
Make the code in e.g. SubCode_A.cpp a function, then declare this function in your main source file and call it. You of course have to build with all source files to create the final executable.
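A minimal sketch of that idea (the function names are illustrative; the file names follow the question):

    // SubCode_A.cpp
    #include <iostream>

    void subCodeA() // the routine the main program will call
    {
        if (1) // the if statement you wanted
            std::cout << "Hello World!!";
    }

    // MainCode.cpp
    void subCodeA(); // declaration of the function defined in SubCode_A.cpp
    // ... then inside main():
    //     if (choice == 1)
    //         subCodeA();

Build all the source files together, e.g. g++ MainCode.cpp SubCode_A.cpp SubCode_B.cpp -o main, so that the linker can find the definitions.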
You can just use an #include statement.
Include instructs the compiler to insert the specified file at the #include point.
So your code would be
if (choice == 1)
{
#include "SubCode_A.cpp"
}
...
And you wouldn't need the extra braces in the SubCode_?.cpp files because they exist in MainCode.cpp
Of course, the compiler will only compile what is in the SubCode files at the time of compilation. Any changes to source that aren't compiled won't end up in your executable.
But mid-source #includes don't lend themselves to very readable code.
No.
You have to compile both sources:
Declare an external function (e.g. extern void function(int);) in a header.
Compile the two files that include this header.
Then, in a 3rd file where you use it, just include the header.
BUT as long as you include all 3 files in the compilation, it will work.
This other post may be useful: Effects of the extern keyword on C functions
It is not possible to call the code in another executable. It is possible for one application to expose an "api" (application programming interface) through a library or DLL which allows you to call some of the code that the application uses.
While compiling YOUR code, though, the compiler needs to know the "fingerprint" of the functions you are going to call: that is, what they return and what arguments they take.
This is done through a declaration or "prototype stub":
// subcode.h
void subCodeFunction1(); // prototype stub
void subCodeFunction3(int i, int j);

// subcode.cpp
#include <iostream>

void subCodeFunction1()
{
    std::cout << "subCodeFunction1" << std::endl;
}

void subCodeFunction2()
{
    std::cout << "subCodeFunction2" << std::endl;
}

void subCodeFunction3(int i, int j)
{
    std::cout << "subCodeFunction3(" << i << "," << j << ")" << std::endl;
}

// main.cpp
#include "subcode.h"

int main() {
    subCodeFunction1(); // ok
    subCodeFunction2(); // error: not in subcode.h, comment out or add to subcode.h
    subCodeFunction3(2, 5); // ok
    return 0;
}

How to implement a Singleton in an application with DLL

I have an application (in MS Visual Studio) that contains 3 projects:
main (the one that contains the main function)
device (models some hardware device)
config (contains some configuration for both other projects)
So the dependency graph is:
main depends on device, which depends on config
main depends on config
The config project contains a Singleton, which holds some configuration parameters.
I decided to turn the device project into a DLL. When I did this, it seems that I got two instances of the Singleton in the config project! I guess this is a classic problem, which might have a good solution. So how can I fix this?
I reproduced the problem with the following (relatively small) code. Of course, in my case there are some 30 projects, not just 3. And I would like to make just 1 DLL (if possible).
// config.h
#pragma once
#include <string>
#include <map>

class Config
{
public:
    static void Initialize();
    static int GetConfig(const std::string& name);
private:
    std::map<std::string, int> data;
};

// config.cpp
#include "config.h"

static Config g_instance;

void Config::Initialize()
{
    g_instance.data["one"] = 1;
    g_instance.data["two"] = 2;
}

int Config::GetConfig(const std::string& name)
{
    return g_instance.data[name];
}

// device.h
#pragma once
#ifdef _DLL
#define dll_cruft __declspec( dllexport )
#else
#define dll_cruft __declspec( dllimport )
#endif

class dll_cruft Device
{
public:
    void Work();
};

// device.cpp
#include "device.h"
#include <iostream>
#include "config.h"

void Device::Work()
{
    std::cout << "Device is working: two = " << Config::GetConfig("two") << '\n';
}

// main.cpp
#include <iostream>
#include "config.h"
#include "device.h"

int main()
{
    std::cout << "Before initialization in application: one = " << Config::GetConfig("one") << '\n';
    Config::Initialize();
    std::cout << "After initialization in application: one = " << Config::GetConfig("one") << '\n';
    Device().Work();
    std::cout << "After working in application: two = " << Config::GetConfig("two") << '\n';
}
Output:
Before initialization in application: one = 0
After initialization in application: one = 1
Device is working: two = 0
After working in application: two = 2
Some explanations on what the code does and why:
Main application starts
The first print is just to show that the singleton is not initialized yet
Main application initializes the singleton
The second print shows that the initialization worked
Main application starts the "hardware device"
Inside the DLL, the singleton is not initialized! I expect it to output two = 2
The last print shows that the singleton is still initialized in the main application
When I ran into this same problem I solved it by creating another DLL whose sole purpose is to manage the singleton instance. All attempts to get a pointer to the singleton call the function inside this new DLL.
You can decide where the singleton should reside and then expose it to the other consumers.
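A minimal sketch of that idea, assuming a dedicated config DLL that owns the single instance (the export macro and function name are illustrative):

    // config_api.h -- shipped with the config DLL, included by the EXE and by device
    #ifdef CONFIG_EXPORTS
    #define CONFIG_API __declspec(dllexport)
    #else
    #define CONFIG_API __declspec(dllimport)
    #endif

    class Config;
    CONFIG_API Config& GetConfigInstance();

    // config_api.cpp -- compiled only into the config DLL
    #include "config_api.h"
    #include "config.h"

    Config& GetConfigInstance()
    {
        static Config instance; // the one instance, owned by this DLL
        return instance;
    }

Because both the EXE and the device DLL link against the same import library and call GetConfigInstance(), they see the same object instead of two per-module copies.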
Edited by OP:
For example, I want the config instance to appear only in the EXE (not the DLL).
Turn the instance into a pointer:
static Config* g_instance;
Add a separate initializing function to the device DLL's exported functions:
void InitializeWithExisting(Config* instance) { g_instance = instance; }
After initializing the singleton normally, use the second initialization, passing the EXE's instance pointer so that the DLL side points at the same object:
Config::Initialize();
InitializeWithExisting(/* pointer to the EXE's Config instance */);
I believe that defining and accessing singleton instance this way might solve your problem:
Config& getInstance()
{
    static Config config;
    return config;
}
This way you also don't need to have (and call) the Initialize method, you can use constructor for initializing, that will be called automatically when you call getInstance for the first time.
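Note, however, that in the EXE-plus-DLL setup above this only yields a single instance if getInstance itself is compiled into exactly one module and exported from it (for example from a dedicated config DLL, as sketched earlier); if the file containing this function is built into both the executable and the device DLL, each module still ends up with its own function-local static. Call sites then simply go through the accessor, for example (assuming GetConfig is turned into a non-static member, which is an assumption, not part of the original code):

    int two = getInstance().GetConfig("two"); // constructed and filled on first use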