Setting Up Intel TBB with Nuwen - C++

I've recently become interested in learning more about parallel computing, concurrency, etc. My main language is C++, so obviously I decided to use that in my personal studies.
After some research (read: looking things up on Google), I decided that Intel's TBB library would be the best fit.
The one thing that's got me stuck so far, though, is setting it up to use on my computer. I have looked on the Internet for some sort of tutorial on setting TBB up with MinGW (in my case, specifically Nuwen) and haven't really found anything.
So, here's my question: how would I set up TBB to use with a Nuwen distro?

TBB doesn't provide binaries for MinGW in its Windows packages, so you should build it from source. You need a compiler and GNU Make.
Download the source code (.zip) from https://github.com/01org/tbb/releases
Unpack it somewhere (not certain, but as a general rule: beware of spaces in the directory path).
Open a console with the compiler's environment set up, go to $archive_root/src and call gmake tbb tbbmalloc compiler=gcc. You could also try adding stdver=c++11 to the build command if your compiler supports this mode.
You will find the built libraries in the $archive_root/build/windows_... directory.
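For reference, the whole session might look like this (C:\dev\tbb is just a placeholder for wherever you unpacked the archive; the exact windows_... directory name varies with your architecture and build mode):
cd C:\dev\tbb\src
gmake tbb tbbmalloc compiler=gcc stdver=c++11
dir ..\build
When compiling your own code against it, point -I at the archive's include directory, point -L at that windows_... build directory, and link with -ltbb.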

Related

c++17 parallel algorithms and CMake

It looks like more and more standard library implementations are relying on TBB for their parallel algorithms. This is a bit surprising to me, as I didn't think standard libraries would have external dependencies (outside of things like pthread), but that's a different question, I imagine.
My issue is that I need to bake this into my CMakeLists.txt files now.
First, the bad news: there is no official CMake support for TBB, and TBB itself does not provide any FindTBB.cmake file. You can find one here and there on the web, but if standard libraries start relying on TBB, it would be nice to have it officially supported by CMake. Is this coming further down the line?
Then, I need some slightly convoluted code in my CMakeLists.txt file to find_package(TBB REQUIRED) and link the corresponding targets when required (depending on the standard library, version, etc.). It looks like Conan already offers a package that hides all that stuff from the user: you just get parallelstl and that's it. Will we have something similar in CMake in the future?
We can already use these parallel algorithms in CMake today, but it would be great to make it easier to create such projects.
I have Ubuntu 21.10 with GCC and TBB (libtbb-dev 2020.3-1) installed.
Since TBB ships a pkg-config file, what worked for me is this (note that IMPORTED_TARGET is required for the PkgConfig::TBB target to be created):
# file CMakeLists.txt
find_package(PkgConfig REQUIRED)
pkg_search_module(TBB REQUIRED IMPORTED_TARGET tbb)
link_libraries(PkgConfig::TBB)
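As a quick sanity check of the whole setup, here is a tiny program exercising a parallel algorithm (with libstdc++, std::execution::par dispatches to TBB behind the scenes; a missing TBB typically shows up as undefined references at link time):
// file main.cpp
#include <algorithm>
#include <execution>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> v(1'000'000);
    std::iota(v.rbegin(), v.rend(), 0);  // fill in descending order
    std::sort(std::execution::par, v.begin(), v.end());
}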
Unfortunately, when I later installed intel-oneapi 2021.4.0, which includes its own TBB (in /opt/intel/oneapi), this stopped working.
This newer version of TBB apparently cannot be used as a backend for GCC's (parallel) STL (it generates compilation errors on my system), so I did the following in order to pick up the system's TBB rather than the /opt/intel version:
link_libraries(-ltbb) #(PkgConfig::TBB)
This defeats the purpose of CMake, so I am also looking for a more robust solution.
My recommendation at the moment is: do not install oneAPI's TBB if what you want is to make it work with the system's GCC.

Better way to provide paths of libraries while compiling in C++

I'm pretty new to C++. I was wondering what is generally considered a neat way to provide paths for various files/libraries while compiling or executing C++ code.
For ex:
I have the Boost libraries installed in some location on my system. Let's call it X.
In order to execute anything I have to type in
c++ -I LongpathWhichisX/to/boost_1_60_0 example.cpp -o example
Similarly, I need a long path for the input file while executing the code.
Is there a better way to address this? Is it possible to create an environment variable, let's say Y, which refers to path X, so that the following command compiles the code:
c++ -I Y/to/boost_1_60_0 example.cpp -o example
The best way is to use a build tool. For example, you can use Make: you define all your include paths (and other options) in the Makefile, and in the console you just call make to build your project, or something like make run to run it.
The usual way is to write a Makefile where you specify all the needed paths and compile options in appropriate variables, as sketched below.
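A minimal sketch of such a Makefile, using the Boost path from the question (note that the recipe lines must be indented with tabs):
# file Makefile
BOOST_ROOT := LongpathWhichisX/to/boost_1_60_0
CXX := c++
CXXFLAGS := -I$(BOOST_ROOT)

example: example.cpp
	$(CXX) $(CXXFLAGS) example.cpp -o example

run: example
	./example
Now the long path lives in exactly one place, make builds the project, and make run executes it.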
If you don't want/need a Makefile and would rather run the compiler from the command line, then you may use the CPATH environment variable to specify a colon-separated list of paths to include files.
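For example, using the path from the question (CPATH is honored by GCC and Clang; GCC likewise honors LIBRARY_PATH for the link-time library search):
export CPATH=LongpathWhichisX/to/boost_1_60_0
c++ example.cpp -o example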
This is a broad question, but the other answers highlight the most important step. It is essential to learn build tools like make, because they will make it easier to build your projects during development and for others to build them later. In the modern programming age, though, this is not enough. If you are using something like Boost (which targets many platforms), you will probably want to make your build cross-platform as well. For this you would use either cmake or autotools, both of which have scripts that make it much easier to locate the Boost libraries (and others).
Any other build systems are, in my opinion, a pain and the bane of maintainers of Linux distributions. CMake used to be in that category, but it has gained wide acceptance now. CMake targets building cross-platform projects across operating systems (Windows and Unixes) better (again, in my opinion), because it attempts to provide the native build system on each platform (for example: Visual Studio on Windows, Make on all Unices, Xcode on Mac). The autotools instead target the Unix environment with much greater depth (you have a bit of a harder time on Windows, but you can target anything from embedded Unix systems to high-end Unix servers with much more flexibility).
Note: autotools support for cross-compiling is superior in almost every way to other solutions. I always cringe when I download something that needs to be cross-compiled for ARM Linux and it uses some weird build system. Funnily enough, Boost is one of these.
This is a bit of a long-winded answer. In summary, it is essential that you learn a build system for native development. It is part of your skill set, and until you have that skill you can't really contribute to open-source projects or even to your employer developing closed-source projects.

MinGW vs MinGW-W64 vs MSVC (VC++) in cross compiling

Let's put it like this: we are going to create a library that needs to be cross-platform, and we choose GCC as the compiler. It works awesomely on Linux, and now we need to compile it on Windows, where we have MinGW to do the work.
MinGW tries to implement a native way to compile C++ on Windows, but it doesn't support some standard library features such as std::mutex and std::thread.
We have MinGW-w64, a fork of MinGW that supports those features, and I was wondering which one to use, knowing that GCC is one of the most used C++ compilers. Or is it better to use MSVC (VC++) on Windows and GCC on Linux, and use CMake to handle the compiler-independent build?
Thanks in advance.
Personally, I prefer a MinGW-based solution that cross-compiles on Linux, because there are lots of platform-independent libraries that are nearly impossible (or a huge PITA) to build on Windows. (For example, those that use ./configure scripts to set up their build environment.) But cross-compiling all those libraries and their dependencies is also annoying, even on Linux, if you have to ./configure and make each of them yourself. That's where MXE comes in.
From the comments, you seem to worry about dependencies. They are costly in terms of build-environment setup when cross-compiling, if you have to cross-compile each library individually. But there is MXE. It builds a cross compiler and a large selection of platform-independent libraries (like Boost, Qt, and lots of less notable libraries). With MXE, Boost becomes a lot more attractive as a solution. I've used MXE to build a project that depends on Qt, Boost, and libexiv2 with nearly no trouble.
Boost threads with MXE
To do this, first install mxe:
git clone -b master https://github.com/mxe/mxe.git
Then build the packages you want (gcc and boost):
make gcc boost
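After the (admittedly long) build finishes, the cross tools land in mxe/usr/bin with a target-prefixed name; the triplet below is MXE's current default static 32-bit target and may differ on older checkouts:
export PATH="$PWD/mxe/usr/bin:$PATH"
i686-w64-mingw32.static-g++ -std=c++11 main.cpp -o main.exe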
C++11 threads with MXE
If you would still prefer C++11 threads, then that too is possible with MXE, but it requires a two-stage compilation of gcc.
First, check out the master (development) branch of mxe (this is the normal way to install it):
git clone -b master https://github.com/mxe/mxe.git
Then build gcc and winpthreads without modification:
make gcc winpthreads
Now, edit mxe/src/gcc.mk. Find the line that starts with $(PKG)_DEPS := and add winpthreads to the end of the line. Then find --enable-threads=win32 and replace it with --enable-threads=posix.
Now, recompile gcc and enjoy your C++11 threads.
make gcc
Note: You have to do this because the default configuration supports Win32 threads using the WINAPI instead of POSIX pthreads. But GCC's libstdc++, the library that implements std::thread and std::mutex, doesn't have code to use WINAPI threads, so a preprocessor block strips std::thread and std::mutex from the library when Win32 threads are enabled. By using --enable-threads=posix and the winpthreads library, instead of having GCC try to interface with Win32 in its libraries (which it doesn't fully support), we let winpthreads act as glue code: it presents a normal pthreads interface for GCC to use and implements it with WINAPI functions underneath.
Final note
You can speed these compilations up by adding -jm and JOBS=n to the make command: -jm, where m is a number, means build m packages concurrently; JOBS=n, where n is a number, means use n processes when building each package. In effect they multiply, so pick m and n such that m*n is not much more than the number of processor cores you have. E.g. if you have 8 cores, then m=3, n=4 would be about right, as in the command below.
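For the 8-core example:
make -j3 JOBS=4 gcc boost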
Citations
http://blog.worldofcoding.com/2014_05_01_archive.html#windows
If you want portability, use the standard way: the <thread> library of C++11.
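For example, the portable C++11 way to spawn and join a thread:
#include <iostream>
#include <thread>

int main() {
    std::thread t([] { std::cout << "hello from a thread\n"; });
    t.join();
}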
If you can't use C++11, pthreads can be a solution, although VC++ does not compile pthreads code natively.
Don't want to use either of these? Then just write your own abstraction layer for threading. For example, you can write a class Thread, like this:
class Thread
{
public:
    explicit Thread(int (*pf)(void *arg));
    void run(void *arg);
    int join();
    void detach();
    ...
};
Then, write an implementation for each platform you want to support. For example:
+src
|---thread.h
|--+win
|--|---thread.cpp
|--+linux
|--|---thread.cpp
After that, configure your build script to compile win/thread.cpp on Windows and linux/thread.cpp on Linux. A sketch of what one of those implementations might look like follows.
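As an illustration, the Linux side might wrap pthreads roughly like this. This is a hypothetical sketch: the pf_ and handle_ members are assumptions (the class body above is elided), and error handling is omitted:
// linux/thread.cpp (hypothetical sketch)
#include <cstdint>
#include <pthread.h>
#include "thread.h"

namespace {
struct Start { int (*pf)(void *); void *arg; };

// pthreads expects void *(*)(void *), so adapt the int-returning callback
void *trampoline(void *p) {
    Start *s = static_cast<Start *>(p);
    int rc = s->pf(s->arg);
    delete s;
    return reinterpret_cast<void *>(static_cast<std::intptr_t>(rc));
}
} // namespace

Thread::Thread(int (*pf)(void *arg)) : pf_(pf) {}

void Thread::run(void *arg) {
    pthread_create(&handle_, nullptr, trampoline, new Start{pf_, arg});
}

int Thread::join() {
    void *ret = nullptr;
    pthread_join(handle_, &ret);
    return static_cast<int>(reinterpret_cast<std::intptr_t>(ret));
}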
You should definitely use Boost. It's really great and covers all of this.
Seriously, unless you need primitives that Boost.Thread doesn't support (such as std::async), take a look at the Boost library. Of course it's an extra dependency, but if that doesn't scare you, you will enjoy all the advantages of Boost, such as easy cross-compiling.
Learn about the differences between Boost.Thread and the C++11 thread library here.
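For comparison, the Boost.Thread version of a minimal spawn-and-join (on most setups you link it with -lboost_thread and -lboost_system):
#include <boost/thread.hpp>
#include <iostream>

void work() { std::cout << "hello from a boost thread\n"; }

int main() {
    boost::thread t(work);
    t.join();
}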
I think this is a fairly generic list of considerations for when you need to choose multi-platform tools or sets of tools; for a lot of these you probably already have an answer:
Tool support: who are you going to get support from, and how, if something doesn't work? How strong are the community and the vendor?
Native target support: how well does the tool understand the target platform?
Optimization potential?
Library support (now and in the intermediate future)?
Platform SDK support, if needed?
Build tools (although not directly asked here: do they work on both platforms? Most of the popular ones do).
One thing I see that seems not to really have been dealt with is:
What is the target application expecting?
You mention you are building a library, so what application is going to use it and what does that application expect.
The constraint here is that the target application dictates the most fundamental aspect of the system: the very tool used to build it. How is the application going to use the library?
What API and what kind of API is needed by or for that application?
What kind of API do you want to offer (C-style, vs. C++ classes, or a combination)?
What runtime is it using, will it be the same, or will there be conflicts?
Given these, and the possibility that the target application may still be unknown, maintain as much flexibility as possible. In this case, endeavour to maintain compatibility with GCC, MinGW-w64 and MSVC. They all offer a broad spectrum of C++11 language support (true, some more than others) and are generally supported by other popular libraries (even if those other libraries are not needed right now).
I thought the comment by Hans Passant...
Do what works first
... really does apply here.
Since you mentioned it: the MinGW-Builds distribution of MinGW-w64 supports std::thread etc. with the posix build on Windows, both 64-bit and 32-bit.

find boost library path

I'm writing a program using Boost program_options. I followed this instruction: http://www.boost.org/doc/libs/1_47_0/more/getting_started/unix-variants.html#build-a-simple-program-using-boost and everything is fine. The point now is that I want to distribute the source, so my problem is how to find where the Boost libraries are installed on other Linux machines (supposing they are installed). For example, on my PC they are in /usr/lib64, but on other machines they may be installed in non-standard places.
I don't want to use a tool like autotools; I'm using a simple plain Makefile.
Is there some tool provided with the Boost installation to find where the libraries are? Is there some environment variable?
You either need to use a tool like autotools (I thoroughly recommend CMake, it's awesome), or have Boost available in a place where the compiler can find it on its own. You can't configure everyone's system for them, though, so the latter alone is usually insufficient.
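If you stay with a plain Makefile, the usual compromise is to let whoever builds the program override the Boost location on the command line. BOOST_ROOT below is just a conventional variable name, not something the Boost installation provides:
# file Makefile
BOOST_ROOT ?= /usr/local
CXXFLAGS += -I$(BOOST_ROOT)/include
LDFLAGS += -L$(BOOST_ROOT)/lib

program: main.cpp
	$(CXX) $(CXXFLAGS) main.cpp -o program $(LDFLAGS) -lboost_program_options
Users with Boost in a non-standard place then run make BOOST_ROOT=/their/boost/prefix.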

Help on Porting a SIP library to PSP

I'm currently trying to port a SIP stack library (pjSIP) to the PSP console (using the PSPSDK toolchain), but I'm having a lot of trouble with the makefiles (making the proper changes and solving linking issues).
Does anyone know a good text, book or something to get some insight on porting libraries?
The only documentation this project offers on porting seems too dedicated to major OS's.
Look at other libraries that have been ported over to the PSP; doing diffs between the Linux version of a library and the PSP version should show you what kinds of changes are needed.
Also, try to get to know how POSIX-compatible the PSP is; that will tell you how big the job of porting the library over is.
The PSP is not UNIX and is not POSIX-compliant; however, the open-source toolchain is composed of GCC 4.3, binutils 1.16.1 and newlib 1.16.
Most of the C library is already present and can compile most of your code. Lots of libraries have been ported just by invoking the configure script with the following arguments:
LDFLAGS="-L$(psp-config --pspsdk-path)/lib -lc -lpspuser" ./configure --host psp --prefix=$(pwd)/../target/psp
However, you might need to patch your configure and configure.ac scripts to know about the host mipsallegrex (the PSP CPU). To do that, search for a mips*-*-*) line and clone it for the allegrex, like:
mips*-*-*)
    noconfigdirs="$noconfigdirs target-libgloss"
    ;;
mipsallegrex*-*-*)
    noconfigdirs="$noconfigdirs target-libgloss"
    ;;
Then you run the make command and hope that newlib has all you need. If it doesn't, you just need to create alternatives to the functions you are missing.
Porting is very platform specific, so I don't think you will find much general literature on the subject.
Off the top of my head, some things you may encounter:
endianness
word size
available libraries
compiler differences
linker differences (you've already seen that one)
peripheral hardware differences
...
I did some more research and found this post at the ps2dev forum:
The PSP is not a Unix system, and the pspsdk is not POSIX compliant. It's close in some places, but you can't expect to just take any code that compiles fine on a POSIX system and have it work. For example:
pspsdk uses newlib, which lacks some of glibc's features and headers.
libc is not linked by default, so typical autoconf tests will fail to build
autoconf knows nothing about the PSP
defining PSP_MODULE_INFO and running psp-fixup-imports on the executable are required, otherwise it won't run
You should look at all of the other libraries and programs that have been ported (in the psp and pspware repositories). All SDL libs use autoconf, for example.
I think this provides more detail on what I was looking for, and it also supports Jonathan Arkell's point about looking into libraries that have already been ported.
Thanks for your replies.