How to organize configure for plugins - C++

There is an application called "runner" located in the 'app/bin' directory.
And there are a lot of plugin modules that live in the 'app/bin/modules/' directory.
Mostly we develop plugin modules and run them with "runner"; there is not much work on "runner" itself, only bug fixes.
Currently, to compile a plugin, we compile the whole project together with "runner" and deploy it to the run environment.
Now I want to reorganize this so that only the plugin's source code is compiled.
The problem is: to compile a module we need to modify "configure.ac" to add the module's Makefile path, and then run "./configure".
(Second approach)
To avoid modifying the main "configure.ac", I could create secondary "configure" files for the plugin modules. In that case we will have a lot of "configure.ac"s, and one module takes at most 1-2 days to implement.
I would like to hear experts' opinions on this situation.
Which approach is preferable?

I'd go with the second approach. Adding to a system is better than modifying a system.
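For illustration, a minimal per-module configure.ac along those lines could look like the sketch below; the module name, version, and file names are hypothetical:

    # configure.ac for one plugin module (hypothetical names)
    AC_INIT([myplugin], [0.1])
    AM_INIT_AUTOMAKE([foreign])
    AC_PROG_CXX
    LT_INIT([disable-static])     # build the plugin as a loadable shared object
    AC_CONFIG_FILES([Makefile])
    AC_OUTPUT

    # Makefile.am for the same module
    pkglib_LTLIBRARIES = myplugin.la
    myplugin_la_SOURCES = myplugin.cpp
    myplugin_la_LDFLAGS = -module -avoid-version   # plugin-style shared object

Each module then configures and builds on its own, and the main "configure.ac" next to "runner" stays untouched.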

Related

Qt Application: How to create a standalone executable file for Windows (& Mac) from Mac?

I developed a Qt application on a MacBook (El Capitan 10.11.2) and it is now ready to be released.
What I want now is to create a standalone executable file for both Mac and Windows.
But I don't know how!
I found this link, but I am unable to follow its guidance; it looks different from what my system shows me.
If you have any idea, please help me.
Thank you
Well, to compile an application for Windows, you will need a Windows machine (or at least a virtual machine); you can't compile for Windows on a Mac.
Regarding the "standalone" part: the easy way is to deploy your application together with all the required DLLs/frameworks and ship them as one "package". To do this, there are the tools windeployqt and macdeployqt. However, those will not give you "single file" applications, but rather a collection of files.
If you want to have one single file, you will have to build Qt statically! You can do this, but you will have to do it on your own. And if you do, note that the LGPL license (the one for the free version of Qt) then requires you to give your users a way to relink the program against a modified Qt, which in practice means publishing your source code or object files! That's not the case if you just link against the dynamic libraries.
EDIT:
Deployment
Deployment can be really hard, because you have to do it differently for each platform. In most cases there are three steps:
Dependency resolving: in this step, you collect all the executables/libraries/translations/... your application requires and put them somewhere they can find each other. For Windows and Mac, this can be done using the tools mentioned above.
Installation: here you will have to create some kind of "installer". The easiest way is to create a zip file that contains everything you need. But if you want a "nice" installation, you will have to create a proper installer for each platform. (One of many possibilities is the Qt Installer Framework. Best thing about it: it's cross-platform.)
Distribution: distribution is how you get your program to the user. On Mac, you have the App Store; for Windows, you don't. The best way is to provide the download on a website created for this (like SourceForge, GitHub, ...).
I can help you with the first step, but for the other two you will have to research the possibilities and decide on a way to do it.
Dependencies
Resolving the dependencies can be done either by building Qt statically (this way you will have only one single file, but gain additional work because you will have to compile Qt yourself) or by using the dynamic build. For the dynamic build, Qt will help you resolve the dependencies:
macdeployqt is rather easy to use. Compile your app in release mode and call <qt_install_dir>/bin/macdeployqt <path_to_your_bundle>/<bundle>.app. After that's done, all Qt libraries are stored inside the <bundle>.app folder.
windeployqt is basically the same: <qt_install_dir>\bin\windeployqt --release <path_to_your_build>\<application>.exe. All dependencies will be placed inside the build folder. (Hint: copy the <application>.exe into an empty directory and run windeployqt on that path instead; this way you get rid of all the build files.)
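To make that hint concrete, a minimal sketch in Windows cmd syntax (all paths are hypothetical):

    REM Copy the release executable into a clean folder and deploy there
    mkdir C:\deploy
    copy C:\build\release\myapp.exe C:\deploy\
    cd C:\deploy
    C:\Qt\5.6\msvc2015\bin\windeployqt --release myapp.exe

Afterwards, C:\deploy contains the executable plus the Qt DLLs, plugins, and translations it needs, and nothing from the build tree.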
Regarding the static build: just google it; you will find hundreds of explanations for every platform. But unless you have no choice other than to ship one single file (for whatever reason), I would recommend the dynamic build. And regarding the user experience: on Mac, users won't notice a difference, since in both cases everything is hidden inside the app bundle. On Windows, it's normal to have multiple files, so no one will be bothered. (And if you create an installer for Windows, just make sure to add a desktop shortcut; that way the user still has "a single file" to click.)

Beginning Code::Blocks and UnitTest++

I'm about to start a C++ project but I'm stuck at the basics.
I want to use the (Linux) Code::Blocks IDE, and it's easy to create a normal project. However, I want to do TDD using the UnitTest++ framework, and I don't know how to set everything up cleanly.
I've already asked a question about where to put the UnitTest::RunAllTests() command, and they told me the best place is the main() of a separate program.
How do I go about doing this in Code::Blocks? I think I need to create two projects:
The "real" project with its own main();
The unit testing project containing the tests and the main() with UnitTest::RunAllTests() inside.
Then somehow have the first project build and run the second during its build process. I don't know how to do that yet but I can find out on my own.
My questions are:
Is this the right method?
Do I also have to create a project for the UnitTest++ framework, so that other people can build it on other platforms? Or is dropping the compiled library into the project's path enough?
How can I organize the directories of these projects together? It'd be nice to put the tests related to each package in the same directory as that package, but is it OK to have multiple projects in the same directory tree?
I'll partly answer my own questions, as I've managed to get everything working.
Following the instructions on the official documentation page, I put the UnitTest++ folder, with the compiled library and all the source files, in my project's path.
Then I created a test project for all the unit testing, with a main function containing the famous UnitTest::RunAllTests(). I set $exe_output as a post-build step here, so that the tests are executed automatically every time I build this project.
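For reference, the entire main of such a test project can be as small as the sketch below (the include path may differ depending on how UnitTest++ is set up on your machine):

    // main.cpp of the test project
    #include <UnitTest++/UnitTest++.h>

    // A trivial example test; real tests go in their own source files.
    TEST(TwoPlusTwoIsFour)
    {
        CHECK_EQUAL(4, 2 + 2);
    }

    int main()
    {
        // Runs every TEST registered in this binary and returns the number of
        // failures, so any failing test makes the post-build step (and thus
        // the whole build) fail.
        return UnitTest::RunAllTests();
    }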
I created the "real" project where my code to be tested will go. In the build settings I specified the test project as a dependency of the real project, so that every time I build the real one, it also builds the test project first.
With these settings I can work on my tests and on the real code, and I only have to build the real one to have the updated tests executed. Any failing test will also make the build fail.
Now two questions remain: "is this the best approach?" and "right now each project lives in a different directory; is it wiser to leave it this way, or should I put each test in the same folder as the real code it tests?"

Is there any virtualenv-like tool for C++ out there?

I have run into a problem with the test environment in a C++ project.
We have a machine that downloads the code from the version control system, builds it, and executes the unit tests; nothing new.
The problem arises when we add a new dependency to our project. We are developing a lot of features at the same time, so this is relatively common. When it happens, we have to notify the testers and give them an easy way to reproduce the compilation environment ...
And I was wondering whether there is any other easy way to handle this ... I don't know, some tool like virtualenv or buildout for Python ...
I have been searching on Google, but with no luck.
Any help will be appreciated.
You can always add all of the dependencies to the revision control system and provide automated scripts that install the required subsystems. Where I work, if you just download the current version from the repository, you can build, in one step, an ISO image that testers can install on any computer they want. The image contains everything from the OS up to the application.
Depending on your particular situation, you might want to start with smaller steps, like adding the dependencies to the repository and having the testers check there whether any new file appears or changes version.
No ready-made tool, AFAIK, except maybe CMake, which can control things like that for you.
For C++, it's fairly easy to manage "by hand", since you can set the LIB, LIBPATH and PATH environment variables to carefully selected directories. There are no site.py, eggs, .pth files and the like, as with Python.
We do this at our shop, setting up our build/development environment carefully and keeping everything in revision control (mostly scripts that download huge zips of prebuilt libs and unpack them to the right places).
Small libs are copied to common dirs; larger ones get their own entry in the environment variables.
This works equally well for Python and Java. Haven't tried other languages... yet. :)
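As a sketch of what such a script might look like, in Windows cmd syntax to match the LIB/LIBPATH/PATH variables mentioned above (all paths and library names hypothetical):

    REM setenv.bat -- checked into revision control next to the code
    set DEPS=C:\deps
    set INCLUDE=%DEPS%\boost_1_47\include;%INCLUDE%
    set LIB=%DEPS%\boost_1_47\lib;%LIB%
    set LIBPATH=%DEPS%\boost_1_47\lib;%LIBPATH%
    set PATH=%DEPS%\boost_1_47\bin;%PATH%

Every developer and test machine runs the script once per shell, so everyone builds against the same pinned dependency versions.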

CMake: how best to build multiple (optional) subprojects?

Imagine an overall project with several components:
basic
io
web
app-a
app-b
app-c
Now, let's say web depends on io which depends on basic, and all those things are in one repo and have a CMakeLists.txt to build them as shared libraries.
How should I set things up so that I can build the three apps, if each of them is optional and may not be present at build time?
One idea is to have an empty "apps" directory in the main repo and we can clone whichever app repos we want into that. Our main CMakeLists.txt file can use GLOB to find all the app directories and build them (not knowing in advance how many there will be). Issues with this approach include:
Apparently CMake doesn't re-glob when you just say make, so if you add a new app you must run cmake again.
It imposes a specific structure on the person doing the build.
It's not obvious how one could make two clones of a single app and build them both separately against the same library build.
The general concept is like a traditional recursive CMake project, but where the lower-level modules don't necessarily know in advance which higher-level ones will be using them. Yet, I don't want to require the user to install the lower-level libraries in a fixed location (e.g. /usr/local/lib). I do however want a single invocation of make to notice changed dependencies across the entire project, so that if I'm building an app but have changed one of the low-level libraries, everything will recompile appropriately.
My first thought was to use the CMake import/export target feature.
Have a CMakeLists.txt for basic, io and web, plus one CMakeLists.txt that references those. You could then use the CMake export feature to export those targets, and the application projects could import them.
When you build the library project first, the application projects should be able to find the compiled libraries automatically (without the libraries having to be installed to /usr/local/lib); otherwise you can always set the proper CMake variable to point at the correct directory.
When doing it this way, a make in an application project won't trigger a make in the library project; you have to take care of that yourself.
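Roughly, it could look like the sketch below. The target names follow the question; everything else (file names, the CORE_LIBS_BUILD_DIR variable) is hypothetical:

    # libraries/CMakeLists.txt
    add_library(basic SHARED basic.cpp)
    add_library(io SHARED io.cpp)
    target_link_libraries(io basic)
    add_library(web SHARED web.cpp)
    target_link_libraries(web io)
    # Write an import script into the build tree; nothing gets installed.
    export(TARGETS basic io web FILE ${CMAKE_BINARY_DIR}/CoreLibsTargets.cmake)

    # app-a/CMakeLists.txt
    # CORE_LIBS_BUILD_DIR is a cache variable the user points at the library build tree.
    include(${CORE_LIBS_BUILD_DIR}/CoreLibsTargets.cmake)
    add_executable(app-a main.cpp)
    target_link_libraries(app-a web)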
Have multiple CMakeLists.txt files.
Many open-source projects take this approach (LibOpenJPEG, LibPNG, Poppler, etc.). Take a look at their CMakeLists.txt files to find out how they've done it.
Basically, this allows you to toggle features on and off as required.
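In practice that usually means guarding each optional component with an option(), along these lines (the option names are hypothetical):

    # Top-level CMakeLists.txt
    option(BUILD_APP_A "Build app-a" ON)
    option(BUILD_APP_B "Build app-b" OFF)

    add_subdirectory(basic)
    add_subdirectory(io)
    add_subdirectory(web)

    if(BUILD_APP_A)
        add_subdirectory(app-a)
    endif()
    if(BUILD_APP_B)
        add_subdirectory(app-b)
    endif()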
I see two additional approaches. One is to simply make basic, io, and web submodules of each app. Yes, there is duplication of code and wasted disk space, but it is very simple to implement and guarantees that different compiler settings for each app will not interfere with each other across the shared libraries. I suppose this means the libraries aren't shared anymore, but maybe that doesn't need to be a big deal in 2011: RAM and disk have gotten cheaper, but engineering time has not, and sharing source is arguably more portable than sharing binaries.
Another approach is to use the layout specified in the question, with CMakeLists.txt files in each subdirectory. The CMakeLists.txt files in basic, io, and web generate standalone shared libraries. The CMakeLists.txt file in each app directory pulls in each shared library with the add_subdirectory() command. You could then pull down all the library directories plus whichever app(s) you wanted and initiate the build from within each app directory.
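A sketch of one app's CMakeLists.txt under that layout; the side-by-side checkout locations are an assumption:

    # app-a/CMakeLists.txt -- assumes basic/, io/ and web/ are checked out next to app-a/
    cmake_minimum_required(VERSION 2.8)
    project(app-a)

    # Source dirs outside the current tree need an explicit binary dir argument.
    add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/../basic ${CMAKE_BINARY_DIR}/basic)
    add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/../io    ${CMAKE_BINARY_DIR}/io)
    add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/../web   ${CMAKE_BINARY_DIR}/web)

    add_executable(app-a main.cpp)
    # A change in any library source triggers a rebuild on the next make.
    target_link_libraries(app-a web)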
You can use ADD_SUBDIRECTORY for this!
https://cmake.org/cmake/help/v3.11/command/add_subdirectory.html
I ended up doing what I outlined in my question: checking in an empty directory (containing a .gitignore file that ignores everything) and telling CMake to GLOB any directories the user puts in there. Then I can just say cmake myrootdir and it finds all the various components. This works more or less OK. It does have some drawbacks, though; for example, some third-party tools like BuildBot expect a more traditional project structure, which makes integrating them with this sort of arrangement a little more work.
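The glob itself is only a few lines; a sketch, assuming the apps are cloned into an apps/ subdirectory as described:

    # Top-level CMakeLists.txt: build whatever was cloned into apps/.
    # Note: re-run cmake after adding a new app; make alone won't re-glob.
    file(GLOB app_dirs ${CMAKE_CURRENT_SOURCE_DIR}/apps/*)
    foreach(dir ${app_dirs})
        if(IS_DIRECTORY ${dir})
            add_subdirectory(${dir})
        endif()
    endforeach()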
The CMake BASIS tool provides utilities for creating independent modules within a project and selectively enabling and disabling them via the ccmake command.
Full disclosure: I'm a developer for the project.

Standalone tool for generating makefile(s) from Eclipse's .cproject file?

Is there a standalone tool that can be run from a shell script to generate a makefile from the .cproject file? Essentially the same functionality as the CDT itself, but non-interactive.
As is probably obvious, I want to be able to run a script that checks out and builds the software, which comprises several C++ projects. I am trying to avoid moving to a build system like Maven, as that seems like overhead at this early stage of our project. Thanks!
I know there were discussions on the cdt-dev mailing list a few months back about having a command-line tool for building CDT projects. Writing such a tool is really not very difficult (an example was mentioned); it is simply a matter of defining your own Eclipse application, loading the project, and building it. Searching the cdt-dev list for "standalone build" should give you some relevant hits.
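For what it's worth, newer CDT releases ship a headless-build Eclipse application that does roughly this; an invocation might look like the following (the paths are placeholders, and the flags should be verified against your CDT version):

    # Build an existing managed-build project without opening the IDE
    eclipse -nosplash \
            -application org.eclipse.cdt.managedbuilder.core.headlessbuild \
            -data /path/to/workspace \
            -import /path/to/project \
            -build myproject/Release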