QProcess multiplatform command - C++

I need to launch a script using QProcess.
For this, under Windows, I use QProcess::execute("cmd [...]");.
However, this won't work if I go under some other OS such as Linux.
So, I was wondering whether the best way to make that code portable would be to rely on a multi-platform scripting solution, such as Tcl for example.
So I use QProcess::execute("tclsh test.tcl"); and it works.
But I have three questions about this approach, because I'm not sure of what I've done.
Will execute() run tclsh with the file test.tcl both under Windows and Linux wherever I execute it? It seems to do so, but I want to be sure! Is there any bad scenario that can happen?
Is this a good solution? I know lots of people have way more experience than I do, and I'd be grateful for anything I could learn!
Why not use std::system()? Is it less portable?

While this isn't a total answer, I can point out a few things.
In particular, tclsh is quite happy under Windows; it's a major supported platform. The main problem that could happen in practice is if you pass a filename with a space in it (this is distinctly more likely under Windows than on a Unix due to differences in community practice). However, the execute() as you have written it has no problems. Well, as long as tclsh is located on the PATH.
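For instance, passing the script name as its own argument lets Qt handle any quoting, so even a path with a space in it is fine. A small sketch, assuming tclsh is on the PATH:
#include <QProcess>
#include <QStringList>

// Each argument is its own list element, so no manual quoting is needed.
int exitCode = QProcess::execute("tclsh", QStringList() << "test.tcl");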
The other main option for integrating Tcl script execution with Qt is to link your program against the Tcl binary library and use that. Tcl's API is aimed at C, so it should be pretty simple to use from C++ (if a little clunky from a C++ perspective):
// This holds the description of the API
#include "tcl.h"
#include <iostream>   // for the error reporting below

// Initialize the Tcl library; *call only once*
Tcl_FindExecutable(NULL);

// Make an evaluation context
Tcl_Interp *interp = Tcl_CreateInterp();

// Execute a script loaded from a file (or whatever)
int resultCode = Tcl_Eval(interp, "source test.tcl");

// Check if an error happened and print the error if it did
if (resultCode == TCL_ERROR) {
    std::cerr << "ERROR: " << Tcl_GetString(Tcl_GetObjResult(interp)) << std::endl;
}

// Squelch the evaluation context
Tcl_DeleteInterp(interp);
I'm not a particularly great C++ coder, but this should give the idea. I have no idea about QProcess::execute() vs std::system().

A weak point of your solution is that on Windows you'll have to install tclsh. There is no tclsh by default on Solaris either, and probably not on some other systems.
Compared to std::system(), QProcess gives you more control over, and information about, the process executing your command. If all you need is to run the script (without capturing its output, for example), std::system() is a good choice.
What I've used in a similar situation:
#ifdef Q_OS_WIN
mCommand = QString("cmd /C %1 %2").arg(command).arg(args);
#else
mCommand = QString("bash %1 %2").arg(command).arg(args);
#endif
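To actually take advantage of that extra control (exit code, captured output), a rough sketch might split the program and its arguments instead of building a single string; command and args are assumed to be QStrings as above:
#include <QProcess>
#include <QStringList>

QProcess proc;
#ifdef Q_OS_WIN
proc.start("cmd", QStringList() << "/C" << command << args);
#else
proc.start("bash", QStringList() << command << args);
#endif
if (proc.waitForFinished()) {
    int exitCode = proc.exitCode();                    // the script's return value
    QByteArray output = proc.readAllStandardOutput();  // whatever it printed
}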

Related

Fastest way to make console output "verbose" or not

I am making a small system and I want to be able to toggle "verbose" text output in the whole system.
I have made a file called globals.h:
namespace REBr{
extern bool console_verbose = false;
}
If this is true I want all my classes to print a message to the console when they are constructing, destructing, copying or doing pretty much anything.
For example:
window(string title="",int width=1280,int height=720):
    Width(width),Height(height),title(title)
{
    if(console_verbose){
        std::cout<<"Generating window #"<<this->instanceCounter;
        std::cout<<"-";
    }
    this->window=SDL_CreateWindow(title.c_str(),0,0,width,height,SDL_WINDOW_OPENGL);
    if(console_verbose)
        std::cout<<"-";
    if(this->window)
    {
        this->glcontext = SDL_GL_CreateContext(window);
        if(console_verbose)
            std::cout<<".";
        if(this->glcontext==NULL)
        {
            std::cout<<"FATAL ERROR IN REBr::WINDOW::CONSTR_OPENGLCONTEXT: "<<SDL_GetError()<<std::endl;
        }
    }
    else std::cout<<"FATAL ERROR IN REBr::WINDOW::CONSTR_WINDOW: "<<SDL_GetError()<<std::endl;
    if(console_verbose)
        std::cout<<">done!"<<endl;
}
Now as you can see I have a lot of ifs in that constructor. And I REALLY don't want that, since it will slow down my application. I need this to be as fast as possible without removing the "loading bar" (it helps me determine at which function the program stopped working).
What is the best/fastest way to accomplish this?
Everything in my system is under the namespace REBr.
Some options to achieve that:
Use a logging library. It is the best option, as it gives you maximum flexibility and some useful experience ;) And you don't have to devise anything yourself. For example, look at Google glog.
Define a macro that lets you turn all these logs on/off by changing only the macro. But it isn't so easy to write such a macro correctly (see the sketch after this list).
Mark your conditional flag as constexpr. That way you can flip the flag and, depending on its value, the compiler will optimise the ifs out of the compiled program. But the ifs will still be in the code, so it looks kind of bulky.
Anyway, all these options require recompiling the program. Without recompilation it is impossible to achieve maximum speed.
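As a rough illustration of options 2 and 3 combined (every name here is made up, not taken from your code base):
#include <iostream>

namespace REBr {
    // A compile-time constant: the compiler can drop the disabled branch entirely.
    constexpr bool console_verbose = false;
}

#define REBR_VLOG(msg)                 \
    do {                               \
        if (REBr::console_verbose)     \
            std::cout << msg;          \
    } while (0)

// Usage inside the window constructor:
//   REBR_VLOG("Generating window #" << instanceCounter);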
I often use a Logger class that supports debug levels. A call might look like:
logger->Log(debugLevel, "%s %s %d %d", timestamp, msg, value1, value2);
The Logger class supports multiple debug levels so that I can fine tune the debug output. This can be set at any time through the command line or with a debugger. The Log statement uses a variable length argument list much like printf.
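A stripped-down sketch of that kind of Logger (the class name, levels, and formatting are assumptions, not a particular library):
#include <cstdarg>
#include <cstdio>

class Logger {
public:
    explicit Logger(int level) : level_(level) {}

    // printf-style logging; messages above the current level are skipped cheaply.
    void Log(int msgLevel, const char *fmt, ...) {
        if (msgLevel > level_)
            return;
        va_list args;
        va_start(args, fmt);
        std::vfprintf(stderr, fmt, args);
        std::fputc('\n', stderr);
        va_end(args);
    }

    // Can be changed at run time, e.g. from a command-line flag or a debugger.
    void SetLevel(int level) { level_ = level; }

private:
    int level_;
};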
Google's logging module is widely used in the industry and supports logging levels that you can set from the command line. For example (taken from their documentation)
VLOG(1) << "I'm printed when you run the program with --v=1 or higher";
VLOG(2) << "I'm printed when you run the program with --v=2 or higher";
You can find the code here https://github.com/google/glog and the documentation in the doc/ folder.

Is it possible to call Matlab from QtCreator?

I am doing image analysis using C++ in the QtCreator environment. In order to build a learning model, I want to use the TreeBagger class from MATLAB, which is really powerful. Can I call MATLAB from QtCreator, give it some parameters, and get back the classification error? Can I do this without using mex files?
From QProcess's Synchronous Process API example:
QProcess gzip;
gzip.start("gzip", QStringList() << "-c");
if (!gzip.waitForStarted())
    return false;

gzip.write("Qt rocks!");
gzip.closeWriteChannel();

if (!gzip.waitForFinished())
    return false;

QByteArray result = gzip.readAll();
The idea from this example is that you can execute matlab with whatever settings you prefer and begin writing a script to it immediately. After the write, you can close the channel, wait for the response, then read the results back from matlab. Unfortunately, I'm not experienced enough with it to provide a more direct example, but this is the general concept. Please check the documentation for the rest.
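A rough sketch of that concept might look like the following; the MATLAB flags and the runMyClassifier script are assumptions, so check the command-line options for your MATLAB version:
QProcess matlab;
matlab.start("matlab", QStringList() << "-nodesktop" << "-nosplash");
if (!matlab.waitForStarted())
    return false;

// Feed MATLAB commands on its standard input.
matlab.write("err = runMyClassifier(); disp(err); exit;\n");
matlab.closeWriteChannel();

if (!matlab.waitForFinished(-1))   // -1: no timeout, MATLAB can be slow to start
    return false;

QByteArray result = matlab.readAllStandardOutput();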
Matlab has an "engine" interface, described here, to let standalone programs call matlab functions. It has the advantage that you can call engPutVariable and engGetVariable to transfer your data in binary format (I think it works by using shared memory between your process and matlab, but I'm not sure about this), so you don't have to convert your data to ASCII and parse the result from ASCII.
For C++ you might want to write a wrapper class for RAII, or have a look at http://www.codeproject.com/Articles/4216/MATLAB-Engine-API, where this has already been done.
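For reference, a bare-bones sketch of that engine interface (error checks omitted; the exact calls and the libraries to link against are documented with MATLAB's engine.h):
#include "engine.h"

Engine *ep = engOpen("");                        // start or connect to a MATLAB session
engEvalString(ep, "result = 2 + 2;");            // run a command
mxArray *result = engGetVariable(ep, "result");  // pull the value back in binary form
double value = mxGetScalar(result);
mxDestroyArray(result);
engClose(ep);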

Problems with system() calls in Linux

I'm working on an init for an initramfs in C++ for Linux. This program is used to unlock the DM-Crypt with LUKS encrypted drive, and to make the LVM drives available.
Since I don't want to have to reimplement the functionality of cryptsetup and gpg I am using system calls to call the executables. Using a system call to call gpg works fine if I have the system fully brought up already (I already have a bash script based initramfs that works fine in bringing it up, and I use grub to edit the command line to bring it up using the old initramfs). However, in the initramfs it never even acts like it gets called. Even commands like system("echo BLAH"); fail.
So, does anyone have any input?
Edit: So I figured out what was causing my errors. I have no clue as to why it would cause errors, but I found it.
In order to allow hotplugging, I needed to write /sbin/mdev to /proc/sys/kernel/hotplug...however I ended up switching around the parameters (on a function I wrote myself no less) so I was writing /proc/sys/kernel/hotplug to /sbin/mdev.
I have no clue as to why that would cause the problem, however it did.
Amardeep is right, system() on POSIX type systems runs the command through /bin/sh.
I doubt you actually have a legitimate need to invoke these programs you speak of through a Bourne shell. A good reason would be if you needed them to have the default set of environment variables, but since /etc/profile is probably also unavailable so early in the boot process, I don't see how that can be the case here.
Instead, use the standard fork()/exec() pattern:
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int system_alternative(const char* pgm, char *const argv[])
{
    pid_t pid = fork();
    if (pid > 0) {
        // We're the parent, so wait for the child to finish
        int status;
        waitpid(pid, &status, 0);
        return status;
    }
    else if (pid == 0) {
        // We're the child, so run the specified program. Our exit status will
        // be that of the child program unless the execv() call itself fails.
        execv(pgm, argv);
        _exit(127);  // only reached if execv() failed
    }
    else {
        // Something horrible happened, like the system being out of memory
        return -1;
    }
}
If you need to read stdout from the called process or send data to its stdin, you'll need to do some standard handle redirection via pipe() or dup2() in there.
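For instance, a bare-bones way to capture the child's stdout might look like this sketch (error handling mostly omitted):
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int run_and_capture(const char *pgm, char *const argv[])
{
    int fds[2];
    if (pipe(fds) != 0)
        return -1;

    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {                    // child
        dup2(fds[1], STDOUT_FILENO);   // child's stdout now goes into the pipe
        close(fds[0]);
        close(fds[1]);
        execv(pgm, argv);
        _exit(127);                    // only reached if execv() failed
    }

    close(fds[1]);                     // parent reads from the other end
    char buf[256];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, n, stderr);     // or store it wherever you need it
    close(fds[0]);

    int status;
    waitpid(pid, &status, 0);
    return status;
}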
You can learn all about this sort of thing in any good Unix programming book. I recommend Advanced Programming in the UNIX Environment by W. Richard Stevens. The second edition coauthored by Rago adds material to cover platforms that appeared since Stevens wrote the first edition, like Linux and OS X, but basics like this haven't changed since the original edition.
I believe the system() function executes your command in a shell. Is the shell executable mounted and available that early in your startup process? You might want to look into using fork() and execve().
EDIT: Be sure your cryptography tools are also on a mounted volume.
What do you have in the initramfs? You could do the following:
#include <cstdlib>

int main() {
    return system("echo hello world");
}
And then strace it in an init script like this:
strace -o myprog.log myprog
Look at the log once your system is booted.

Implementing "app.exe -instruction file" notation in C++

I have a project for my Data Structures class, which is a file compressor that works using Binary Trees and other stuff. We are required to "zip" and "unzip" any given file by using the following instructions in the command line:
For compressing: compressor.exe -zip file.whatever
For uncompressing: compressor.exe -unzip file.zip
We are programming in C++. I use the IDE Code::Blocks and compile using GCC in Windows.
My question is: How do you even implement that??!! How can you make your .exe receive those parameters from the command line, and then act on them the way you want?
Also, is there anything special to keep in mind if I want that implementation to compile on Linux?
Thanks for your help
You may want to look in your programming text for the signature of the main function, your program's entry point. That's where you'll be able to pull in those command line parameters.
I don't want to be more detailed than that because this is apparently a key point of the assignment, and if I ever find myself working with you, I'll expect you to be able to figure this sort of stuff out on your own once you've received an appropriate nudge. :)
Good luck!
As I recall, the Single UNIX Specification / POSIX defines getopt in unistd.h to handle the parsing of arguments for you. While this is a C function, it should also work in C++.
GNU glibc has this, in addition to getopt_long (in getopt.h) to support GNU's extended --long-option style.
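Note that plain getopt() only understands single-character options, so flags spelled like -zip and -unzip would need getopt_long() or manual parsing. A rough single-letter sketch, purely for illustration:
#include <unistd.h>
#include <iostream>

int main(int argc, char *argv[])
{
    int opt;
    while ((opt = getopt(argc, argv, "z:u:")) != -1) {
        switch (opt) {
        case 'z': std::cout << "zip "   << optarg << '\n'; break;
        case 'u': std::cout << "unzip " << optarg << '\n'; break;
        default:  std::cerr << "usage: compressor -z file | -u file\n"; return 1;
        }
    }
    return 0;
}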
I did it, I got it!!
I now have a basic understanding of how to use the argc and argv[] parameters of the main() function (I always wondered what they were good for...). For example, if I put this in the command line:
compressor.exe -unzip file.zip
Then:
argc is initialized to 3 (the number of arguments on the line)
argv[0] == "compressor.exe" (name of the app)
argv[1] == "-unzip"
argv[2] == "file.zip"
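Putting that together, a minimal dispatcher might look like the sketch below; zipFile/unzipFile stand in for whatever the real project functions are:
#include <cstring>
#include <iostream>

int main(int argc, char *argv[])
{
    if (argc != 3) {
        std::cerr << "usage: compressor (-zip|-unzip) <file>\n";
        return 1;
    }
    if (std::strcmp(argv[1], "-zip") == 0)
        std::cout << "would compress " << argv[2] << '\n';    // e.g. zipFile(argv[2]);
    else if (std::strcmp(argv[1], "-unzip") == 0)
        std::cout << "would decompress " << argv[2] << '\n';  // e.g. unzipFile(argv[2]);
    else {
        std::cerr << "unknown option: " << argv[1] << '\n';
        return 1;
    }
    return 0;
}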
Greg (not 'Creg', sorry =P) and Bemrose, thank you guys for your help!! ^^

What is the point of clog?

I've been wondering, what is the point of clog? As near as I can tell, clog is the same as cerr but with buffering so it is more efficient. Usually stderr is the same as stdout, so clog is the same as cout. This seems pretty lame to me, so I figure I must be misunderstanding it. If I have log messages going out to the same place I have error messages going out to (perhaps something in /var/log/messages), then I probably am not writing too much out (so there isn't much lost by using non-buffered cerr). In my experience, I want my log messages up to date (not buffered) so I can help find a crash (so I don't want to be using the buffered clog). Apparently I should always be using cerr.
I'd like to be able to redirect clog inside my program. It would be useful to redirect cerr so that when I call a library routine I can control where cerr and clog go to. Can some compilers support this? I just checked DJGPP and stdout is defined as the address of a FILE struct, so it is illegal to do something like "stdout = freopen(...)".
Is it possible to redirect clog, cerr, cout, stdin, stdout, and/or stderr?
Is the only difference between clog and cerr the buffering?
How should I implement (or find) a more robust logging facility (links please)?
Is it possible to redirect clog, cerr, cout, stdin, stdout, and/or stderr?
Yes. You want the rdbuf function.
ofstream ofs("logfile");
cout.rdbuf(ofs.rdbuf());
cout << "Goes to file." << endl;
Is the only difference between clog and cerr the buffering?
As far as I know, yes.
If you're in a posix shell environment (I'm really thinking of bash), you can redirect any
file descriptor to any other file descriptor, so to redirect, you can just:
$ myprogram 2>&5
to redirect stderr to the file represented by fd=5.
Edit: on second thought, I like @Konrad Rudolph's answer about redirection better. rdbuf() is a more coherent and portable way to do it.
As for logging, well...I start with the Boost library for all things C++ that isn't in the std library. Behold: Boost Logging v2
Edit: Boost Logging is not part of the Boost Libraries; it has been reviewed, but not accepted.
Edit: 2 years later, back in May 2010, Boost did accept a logging library, now called Boost.Log.
Of course, there are alternatives:
Log4Cpp (a log4j-style API for C++)
Log4Cxx (Apache-sponsored log4j-style API)
Pantheios (defunct? last time I tried I couldn't get it to build on a recent compiler)
Google's GLog (hat-tip @SuperElectric)
There's also the Windows Event logger.
And a couple of articles that may be of use:
Logging in C++ (Dr. Dobbs)
Logging and Tracing Simplified (Sun)
Since there are several answers here about redirection, I will add this nice gem I stumbled across recently about redirection:
#include <fstream>
#include <iostream>

class redirecter
{
public:
    redirecter(std::ostream & dst, std::ostream & src)
        : src(src), sbuf(src.rdbuf(dst.rdbuf())) {}
    ~redirecter() { src.rdbuf(sbuf); }
private:
    std::ostream & src;
    std::streambuf * const sbuf;
};

void hello_world()
{
    std::cout << "Hello, world!\n";
}

int main()
{
    std::ofstream log("hello-world.log");
    redirecter redirect(log, std::cout);
    hello_world();
    return 0;
}
It's basically a redirection class that allows you to redirect any two streams, and restores the original stream buffer when you're finished.
Redirections
Konrad Rudolph's answer is good in regard to how to redirect std::clog (std::wclog).
Other answers tell you about various possibilities, such as a command line redirect like 2>output.log. Under Unix you can also create a file and add another output to your commands with something like 3>output.log; in your program you then have to use fd number 3 to print the logs. You can continue to print to stdout and stderr normally. The Visual Studio IDE has a similar feature with its CDebug command, which sends its output to the IDE output window.
stderr is the same as stdout?
This is generally true, but under Unix you can set up stderr to go to /dev/console, which means that it goes to another tty (a.k.a. terminal). It's rarely used these days. I had it that way on IRIX: I would open a separate X window and see errors in it.
Also many people send error messages to /dev/null. On the command line you write:
command ...args... 2>/dev/null
syslog
One thing not mentioned: under Unix you also have syslog().
The newest versions under Linux (and probably Mac OS X) do a lot more than they used to. In particular, syslog can use the identity and some other parameters to redirect the logs to a specific file (e.g. mail.log). The syslog mechanism can be used between computers, so logs from computer A can be sent to computer B. And of course you can filter logs in various ways, especially by severity.
syslog() is also very simple to use:
syslog(LOG_ERR, "message #%d", count++);
It offers 8 levels (severities), a printf()-style format, and a list of arguments for the format.
Programmatically, you may tweak a few things if you first call the openlog() function. You must call it before your first call to syslog().
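For example (the identity and facility here are just placeholders):
#include <syslog.h>

openlog("myservice", LOG_PID | LOG_NDELAY, LOG_MAIL);   // call once, before the first syslog()
syslog(LOG_ERR, "could not open %s", "/var/mail/spool");
closelog();                                             // when you are done logging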
As mentioned by unixman83, you may want to use a macro instead. That way you can include some parameters in your messages without having to repeat them over and over again. Maybe something like this (see Variadic Macro):
// (not tested... requires msg to be a string literal)
#define LOG(lvl, msg, ...) \
syslog(lvl, msg " (in " __FILE__ ":%d)", __VA_ARGS__, __LINE__)
You may also find __func__ useful.
The redirection, filtering, etc. is done by creating configuration files. Here is an example from my snapwebsites project:
mail.err /var/log/mail/mail.err
mail.* /var/log/mail/mail.log
& stop
I install the file under /etc/rsyslog.d/ and run:
invoke-rc.d rsyslog restart
so the syslog server handles that change and saves any mail related logs to those folders.
Note: I also have to create the /var/log/mail folder and the files inside the folder to make sure it all works right (because otherwise the mail daemon may not have enough permissions.)
snaplogger (a little plug)
I've used log4cplus, which, since version 1.2.x, is quite good. I have three complaints about it, though:
it requires me to completely clear everything if I want to call fork(); somehow it does not survive a fork() call properly... (at least the version I had, which used a thread)
the configuration files (.properties) are not easy to manage in my environment, where I like the administrators to make changes without modifying the original
it uses C++03 and we are now in 2019... I'd like to have at least C++11
Because of that, and especially because of point (1), I wrote my own version called snaplogger. This is not exactly a standalone project, though. I use many other projects from the snapcpp environment (it's much easier to just get snapcpp and run the bin/build-snap script or just get the binaries from launchpad.)
The advantage of using a logger such as snaplogger or log4cplus is that you can generally define any number of destinations and many other parameters (such as the severity level, as offered by syslog()). log4cplus is capable of sending its output to many different places: files, syslog, the MS-Windows log system, console, a server, etc. Check out the appenders in those two projects to get an idea of the list of possibilities. The interesting factor here is that any log can be sent to all the destinations. This is useful for having a file named all.log where all your services send their logs, which makes it easier to understand certain bugs than separate log files would when running many services in parallel.
Here is a simple example in a snaplogger configuration file:
[all]
type=file
lock=true
filename=/var/log/snapwebsites/all.log
[file]
lock=false
filename=/var/log/snapwebsites/firewall.log
Notice that for the all.log file I require a lock so multiple writers do not mangle the logs between each other. It's not necessary for the [file] section because I only have one process (no threads) for that one.
Both offer you a way to add your own appenders. So for example if you have a Qt application with an output window, you could write an appender to send the output of the SNAP_LOG_ERROR() calls to that window.
snaplogger also offers you a way to extend the variable support in messages (also called the format.) For example, I can insert the date using the ${date} variable. Then I can tweak it with a parameter. To only output the year, I use ${date:year}. This variable parameter support is also extensible.
snaplogger can filter the output by severity (like syslog), by a regex, and by component. We have a normal and a secure component; the default is normal. I want logs sent to the secure component to be written to secure files. This means a sub-directory which is far more protected than the normal logs that most admins can review. When I run my HTTP services, sometimes I send information such as the last 3 digits of a credit card. I prefer to have those in a secure log. It could also be password-related errors. Anything I deem to be a security risk in a log, really. Again, components are extensible so you can have your own.
One little point about the redirecter class. It needs to be destroyed properly, and only once. The destructor will ensure this will happen if the function it is declared in actually returns, and the object itself is never copied.
To ensure it can't be copied, provide private copy and assignment operators:
class redirecter
{
public:
    redirecter(std::ostream & src, std::ostream & dst)
        : src_(src), sbuf_(src.rdbuf(dst.rdbuf())) {}
    ~redirecter() { src_.rdbuf(sbuf_); }
private:
    std::ostream & src_;
    std::streambuf * const sbuf_;
    // Prevent copying.
    redirecter( const redirecter& );
    redirecter& operator=( const redirecter& );
};
I'm using this technique by redirecting std::clog to a log file in my main(). To ensure that main() actually returns, I place the guts of main() in a try/catch block. Then elsewhere in my program, where I might call exit(), I throw an exception instead. This returns control to main() which can then execute a return statement.
Basic Logger
#define myerr(e) {CriticalSectionLocker crit; std::cerr << e << std::endl;}
Used as myerr("ERR: " << message); or myerr("WARN: " << message << code << etc);
It is very effective.
Then do:
./programname.exe 2> ./stderr.log
perl parsestderr.pl stderr.log
or just parse stderr.log by hand
I admit this is not for extremely performance critical code. But who writes that anyway.