How to create a self-starting C++ systemd service with CPackDeb?

I'm new to Debian packaging and I believe this is a fairly basic question, but I am embarrassed to say I've struck out on Google.
I have a C++ project which builds with CMake and packages a .deb with CPack. This project has a service component (systemd style). My goal is to have the service enabled and auto-started upon installation of the package.
My research has yielded two approaches:
1) Run various systemctl commands inside the {pre,post}{inst,rm} maintainer scripts of the Debian package. Care is required to handle the install, remove, and upgrade scenarios properly.
2) Simply put the project.service file inside the debian directory and let debhelper (with dh_systemd_enable) handle service installation and start 'automagically'.
Option #2 is obviously preferred, because the {pre,post}{inst,rm} approach is very manual and thus error-prone, but I cannot figure out whether there is a well-supported way to leverage debhelper from within CPack.
The Question: I'd like to avoid rewriting the debian packaging stuff in my project's CMake as it's been around for some time and works well. The relationship (if any) between CPackDeb and debhelper is not clear to me -- can CPack take advantage of the dh_systemd_enable features or must I manage the service manually in {pre,post}{inst,rm} scripts?
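For what it's worth, approach #1 usually boils down to a small postinst maintainer script, which CPackDeb can attach through its CPACK_DEBIAN_PACKAGE_CONTROL_EXTRA variable. The sketch below is illustrative only and assumes the package already installs a unit file named project.service into /lib/systemd/system; a matching prerm/postrm would be needed to stop and disable the service on removal.

#!/bin/sh
# postinst (illustrative sketch): enable and start project.service on install.
# Assumes the package ships /lib/systemd/system/project.service.
set -e
case "$1" in
    configure)
        # Make systemd pick up the newly installed unit file.
        systemctl daemon-reload || true
        # Enable the service at boot, then start (or restart) it now.
        systemctl enable project.service || true
        systemctl restart project.service || true
        ;;
esac
exit 0

On the CMake side such a script would be listed in CPACK_DEBIAN_PACKAGE_CONTROL_EXTRA (together with prerm/postrm) before include(CPack), which is CPack's mechanism for shipping maintainer scripts; whether CPack can instead reuse dh_systemd_enable is exactly the open question above.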

Related

cmake of minimal_build under Centos7

I'm trying to compile the examples under cpp, starting with minimal_build. I don't have much CMake experience. Must this be run under Docker, or can it just be compiled in a Linux shell? I'm running CentOS 7 on an AWS EC2 instance, and I've installed CMake 3.20.2. Executing sudo ./run.sh errors immediately with "cd: /io: No such file or directory". When I try to make what I think are the necessary changes to the scripts, I keep hitting errors. So I just want to know whether this is even possible before proceeding further.
Thanks.
Yes, it is possible. I recently built Arrow on CentOS 7. With any C++ project there are going to be challenges when switching between Linux distributions. The Docker image is a way to provide a single example that the Arrow project can verify; you will need to adapt your Linux environment based on the issues you encounter. @Tsyvarev is also correct that you will want to use run_static.sh instead of run.sh. To do this you will need to dive a bit further into the details.
The build script has two steps. First, it builds the Arrow project itself; this is probably the more challenging step. This guide is helpful here and provides a lot more detail about how Arrow builds and what options there are. The second step compiles and builds the example.
Specifically for CentOS 7 one of the challenges you will face is that you will need a newer version of CMake. I ended up building CMake from source. If you go this route you also need to make sure that CMake is built with curl/https support. I used the --system-curl option for this.
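As an illustration only (not part of the original answer), building CMake from source with system curl support typically looks something like this on CentOS 7; the version below is just an example:

# Illustrative sketch: build CMake from source on CentOS 7 with curl/https support.
sudo yum install -y gcc gcc-c++ make libcurl-devel openssl-devel
curl -LO https://github.com/Kitware/CMake/releases/download/v3.20.2/cmake-3.20.2.tar.gz
tar xzf cmake-3.20.2.tar.gz && cd cmake-3.20.2
./bootstrap --system-curl --parallel=$(nproc)   # --system-curl links against the system curl for https
make -j$(nproc)
sudo make install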
That is all I remember having to do specially for CentOS 7. As you go about this task, if you run into further, more specific issues, feel free to ask them here or on the Arrow dev/user mailing list.

MSI build uninstall - installed directory not removed

I created an MSI package for our application. After installation, we launch a dependent driver installer in a separate process from the Committed event of the Installer class, like below:
Process.Start("Path of driver software")
The issue we are facing is that the installation directory (which is empty by then) is not removed when the application is uninstalled. As with installation, we trigger the uninstallation of the dependent driver software in a separate process by overriding the Uninstall method of the Installer class.
Can anyone help me overcome this issue? How can I remove the installation directory?
I can't change the installation procedure, since we are aware that we can't run another installation/uninstallation while one is already in progress.
You are running a non-MSI driver install EXE from within your MSI? Correct? Or maybe it is an MSI wrapped in an EXE?
Do you have InstallShield Premier? Could you use a suite project and install the EXE via the bootstrapper before (or after) the MSI install? I have honestly never used this feature, but running setups in sequence is what it is for. Embedded custom actions in MSI files that kick off EXE files are notoriously unreliable. This is - in my opinion - especially true if you are running with managed code as well (which I think you are).
In the long run managed code may yield safer custom action code (security-wise based on CAS), but for now it seems to cause unwanted runtime dependencies - especially for very large-scale distribution (global distribution) targeting diverse Windows versions (Vista, 7, 8, 10).
I am told it takes a while to get used to InstallShield's suite feature, but maybe it is better for you? You can run EXE files, MSI files, patches and zips in sequence. It takes some fiddling to define uninstall and upgrade behavior, I guess, and lots of testing. I am pretty sure corporate application packagers would be happier to see a suite than an MSI with lots of strange stuff embedded in it.
UPDATE: Once you have compiled a suite setup.exe file it can be extracted as described here: Regarding silent installation using Setup.exe generated using Installshield 2013 (.issuite) project file
Alternatively, you could try to extract the files from the driver's setup.exe, install the drivers as regular MSI components, and run DPInst.exe (a tool from DIFx) to install/uninstall the drivers. Also quite clunky - especially when you need to include uninstall.
Your driver setup likely uses DPInst.exe already. I would check whether you can extract an MSI from the EXE and include it in the suite project instead of the EXE. Some hints on how to deal with setup.exe files (extraction, runtime parameters, etc.): Extract MSI from EXE.
WiX has the Driver element in one of its extensions to deal with driver installs. I have never had the chance to test it.

SVN and SFTP synchronisation with Eclipse

I have to set up and configure Eclipse (Mars 2) for a C project. The project is in an SVN repository, and it can only be compiled on a specific Linux Red Hat server that has the appropriate toolchain.
What I need is an IDE that would allow me to commit my changes to the repository and that would automagically synchronize them to the Linux server. I tried a few things, but none of them worked. I must (to my great regret) avoid the need for a terminal while using the IDE, though not, of course, while configuring it.
Firstly, I used the Remote System Explorer feature in Eclipse. I connected successfully to the server and created a "Remote Project" that I could open in the C/C++ perspective. However, the whole thing is impossible to use: it has no indexing, I had to create "User Actions" in order to compile (which is, from my point of view, pretty anti-ergonomic), and the SVN plugin does not detect the project as an SVN working copy. Furthermore, in the C/C++ perspective there is a two-second delay between the moment I type something and the moment it appears on my screen.
I also tried mounting a network filesystem on my local machine with sshfs, and while that works far better, I still experience lag. Also, I had to write a Makefile that calls my compiler via "ssh $(USER)@$(HOST) build.ksh" (one of the points of the project is to write a real Makefile...). But SVN works.
I also tried running Eclipse on the host machine with X forwarding, and while that works, there is still lag...
Finally, I tried SFTP synchronisation, but it seems I can't use my SVN plugin features and SFTP together.
I am out of solutions and pretty frustrated, as I feel that this kind of thing should be easy. I mean, all I want is for Eclipse to automatically copy my files to my remote home directory... Thanks for your help.
To me this sounds like a perfect use case for a continuous integration (CI) system. Generally speaking, a CI system pulls the code from your repository (for example at regular intervals), then executes the build chain, collects artifacts, informs you about the state of your build, etc.
Although it originated in the Java world, I have successfully used Jenkins for continuous integration of C projects on a Linux server, but there are others, like TeamCity or GitLab CI (the latter would require you to switch to Git, but it's a really neat system with a YAML configuration for CI).
Of course CI systems have a learning curve - there is no such thing as a free meal - but it may really be worth the effort.
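As a purely hypothetical illustration (not from the original answer), the build step such a CI job executes can be very small, assuming the CI agent runs on the Red Hat server that has the toolchain and the job checks out the SVN working copy first:

# Hypothetical Jenkins-style build step; $WORKSPACE is the job's checkout directory.
set -e
cd "$WORKSPACE"
./build.ksh        # the existing build script; swap in 'make' once a real Makefile exists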

What is the recommended way for packaging a C++ daemon on Mac OSX?

I'm working on a multi-platform project that is composed of a service/daemon which runs on Windows, Linux, and Mac OSX.
The code I have is portable, and the application runs fine (from the command line) on all the systems. As this application is designed to run in the background, I made it a Windows service on Windows and a Linux daemon (with the appropriate scripts in init.d) for Linux.
Now my problem is Mac OSX: I have little experience with this operating system, and I am having a hard time figuring out the best practices for my situation:
I'd like to have an installer for my project (I believe a .dmg file, that would likely install an .app; please correct me if there is a better alternative).
Here some information about this project of mine:
It is built entirely in C++ (it uses Boost, curl, iconv)
The current build system is not Xcode (however, if there is a way of keeping my current code layout while integrating and building everything in Xcode, I don't mind; I've done something similar for Windows anyway).
There is no graphical user interface
The daemon should start on startup automatically (or even better: make that a user's choice).
The daemon requires root access during its execution.
That's probably a lot of context to consider for a single question, so I will try to make it easier to read:
How would you package/create an installer for a pure-C++ daemon on Mac OSX ?
Since this doesn't have a UI, I wouldn't package it as a .app -- that's the preferred format for double-clickable GUI apps, not for daemons. If it's just a single binary (no support files except maybe things like config files, etc), I'd follow unix conventions and put the binary someplace like /usr/local/libexec (or wherever you put it on Linux). Note that /usr/local doesn't exist by default on OS X, so your installer will need to create it if it doesn't exist.
For getting it to execute: I'll agree with James Bedford's suggestion of using launchd. The launchd .plist file should be installed in /Library/LaunchDaemons (LaunchDaemons run as root at startup, while LaunchAgents run as normal users when that user logs in). Make sure the daemon does not drop itself into the background -- launchd keeps watch over the programs it launches, and if they background themselves it thinks they've crashed, and generally tries to relaunch them, which doesn't work very well. You can adjust the settings to work with background programs, but it's best to have it run in the foreground.
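To make that concrete, here is a minimal sketch of such a .plist (label, binary path and file name are invented for illustration); RunAtLoad starts the daemon at boot and KeepAlive has launchd relaunch it if it exits, which only behaves well if the daemon stays in the foreground. An installer would normally ship this file rather than generate it, but a heredoc keeps the example self-contained:

# Illustrative only: write a minimal launchd plist and load it.
sudo tee /Library/LaunchDaemons/com.example.mydaemon.plist >/dev/null <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.mydaemon</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/libexec/mydaemon</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
EOF
sudo launchctl load /Library/LaunchDaemons/com.example.mydaemon.plist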
For packaging: Here, I agree with mah -- use an installer package. I actually still like the old GUI PackageMaker tool (deprecated, but it still works), but the new CLI tools are probably better to learn at this point. If you follow my recommendation about /usr/local/libexec, your package should actually contain the "local" directory (with libexec subdir and your binary in that), and install that into /usr -- if /usr/local already exists, it'll just merge with what's already there, but if not it'll create the entire thing. On the other hand, /Library/LaunchDaemons is guaranteed to exist, so your package only needs to contain the actual .plist file to put in it.
Packaging as a .app makes some sense if what you're distributing is more than just a command-line binary (for example, if it has resources such as static configuration data, images, or frameworks/dylibs that need to come along with it).
Regardless of what exactly is getting distributed, you can create an installer using tools that you already have -- pkgbuild and productbuild, both in /usr/bin. Making OS X Installer Packages like a Pro - Xcode Developer ID ready pkg can get you started using these tools.
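As a rough, hypothetical example of those tools (directory names and identifiers invented for illustration), you stage the payload in a directory that mirrors the target filesystem and then let pkgbuild wrap it up; a postinstall script in the --scripts directory is the usual place to launchctl load the daemon so it starts right after installation:

# Illustrative sketch only: stage the payload, then build a component package.
# pkg-scripts/ would hold optional preinstall/postinstall scripts.
mkdir -p root/usr/local/libexec root/Library/LaunchDaemons
cp build/mydaemon root/usr/local/libexec/
cp com.example.mydaemon.plist root/Library/LaunchDaemons/
pkgbuild --root root \
         --identifier com.example.mydaemon \
         --version 1.0 \
         --install-location / \
         --scripts pkg-scripts \
         MyDaemon.pkg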
Have you checked out the Daemons and Services Programming Guide provided by Apple? I think that would be very helpful as an introduction to the platform and should point you in the right direction (if not show you how to do exactly what you want).
You should also check out launchd (which is discussed in that programming guide). launchd is the official daemon launcher/manager for OS X, and it is heavily integrated with the operating system. It should be easy enough to wrap your existing cross-platform daemon into a launchd daemon, and you can integrate with OS X so that the daemon starts up automatically.

How to build WSO2 4.X from source?

We have been trying to build WSO2 (various products) from source, to no avail.
I have looked for information all over (with assistance from Google) and followed the few instructions we found, but without luck.
I have, on the other hand, found various posts discussing this process and how error-prone it is for one reason or another.
Don't get me wrong, WSO2 looks like an amazing framework to work with, but confidence in the project is not boosted by the complicated, error-prone, enormous build process.
Does anyone here have a good description of or recipe for building the 4.x.x version of Carbon?
I really don't think it is intentionally hard to build. The product is huge, with tons of developers working on it. Most of the issues seem to be around erroneous commits by developers. My understanding is that WSO2 will be changing the development process to make it more robust (source: Manoj's Comment).
The WSO2 set of products is awesome and well engineered. They can be built, but you will need to persist and resolve issues along the way.
It took me quite a few days to get a working build in my spare time. Here is a rough sequence of tasks to perform:
1) Check out the 4.0.0 branches:
svn co https://svn.wso2.org/repos/wso2/carbon/orbit/branches/4.0.0
svn co https://svn.wso2.org/repos/wso2/carbon/kernel/branches/4.0.0
svn co https://svn.wso2.org/repos/wso2/carbon/platform/branches/4.0.0
For more information on the high-level structure of the code base, see: what is wso2 'orbit', 'kernel' and 'platform'?
2) Decide which version of a product you need to build - Which version of patch-release to build?
3) Build the three separate code bases (build the main branch plus patch-release versions below your required version).
build orbit 4.0.0/ Then build orbit/patch-release/4.0.x
build kernel 4.0.0/ Then build kernel/patch-release/4.0.x
build platform 4.0.0/ Then build platform/patch-release/4.0.x
Notes for the build (a combined sketch follows these notes):
use Java 6 (Use Sun/Oracle JDK - not OpenJDK)
use Maven 3
set MAVEN_OPTS to -Xms512m -Xmx1024m -XX:MaxPermSize=1024m
you will probably need to use the following mvn command line: mvn clean install -Dmaven.test.skip=true
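Pulling those notes together, here is a hedged sketch of the environment setup and build order; the directory names assume you checked each branch out into orbit/, kernel/ and platform/, and the exact patch-release trees depend on the product version chosen in step 2:

# Sketch only: environment and top-level build order per the notes above.
export JAVA_HOME=/path/to/oracle-jdk-1.6        # placeholder path; use the Sun/Oracle JDK 6, not OpenJDK
export MAVEN_OPTS="-Xms512m -Xmx1024m -XX:MaxPermSize=1024m"

(cd orbit    && mvn clean install -Dmaven.test.skip=true)
(cd kernel   && mvn clean install -Dmaven.test.skip=true)
(cd platform && mvn clean install -Dmaven.test.skip=true)
# ...then repeat for the patch-release/4.0.x trees you need, in the same order.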
You will find the built distribution zip file here: ROOT/distribution/product/modules/distribution/target/ (source: WSO2 Carbon 4.1.x - how to make the distribution)
Be prepared to put in the time to hunt down and fix issues as you encounter them. Most problems seem to be due to Maven dependency issues; using Google, you can usually find the answer. You can also post any issues you need help with on Stack Overflow.