Embedding console tools functionality in an application - C++

I'm currently developing an application that requires some file preprocessing before actually reading the data.
Doing it externally was not an option, so I ended up with a fork & execve of "cut options filename | sort | uniq -c" and so on, and I execute it like that.
However, I was wondering whether there is already a way to reuse all those ancient and well-working tools directly in my code, without having to invoke them through a shell.
I am currently looking at busybox to see if there is an easy way to statically link and programmatically call those utils, but no luck yet.

Arkaitz, the answer is no, because of how you've phrased the question.
You ask for "another option to reuse all those ancient and good working tools directly in my code and not having to invoke them through a shell".
The problem is that the proper and accepted way of reusing all those ancient and well-working tools is exactly what you're saying you want to avoid: invoking them via a shell (or at least firing them up as child processes, via popen for example). It is definitely not recommended to try to subsume, copy, or duplicate these tools into your code.
The UNIX (and Linux) model for data manipulation is robust and proven - why would you want to avoid it?

The 'ancient' tools were built to be used from the shell, not to be built into or linked against an executable. There are, however, more recent facilities that do a lot of what your command-line preprocessor shows: iostreams with extractors (to replace cut), and std::sort and std::unique to replace the respective programs. For example:
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>
using namespace std;

struct S {
    string col1, col3;
    bool operator<( const S& s ) const { return col1 < s.col1; }
    bool operator==( const S& s ) const { return col1 == s.col1; }
};

int main() {
    vector<S> v;
    S s;
    string dummy;
    // Extract the 1st and 3rd whitespace-separated columns (the "cut" step).
    while( cin >> s.col1 >> dummy >> s.col3 >> dummy )
        v.push_back( s );
    // The "sort" and "uniq" steps; erase() drops the duplicates unique() left behind.
    sort( v.begin(), v.end() );
    v.erase( unique( v.begin(), v.end() ), v.end() );
}
Not too complicated, I think.

Try popen().
char buffer[ BUFFER_SIZE ];
FILE * f = popen( "cut options filename | sort | uniq -c", "r" );
if( f ) {
    while( fgets( buffer, BUFFER_SIZE, f ) != NULL ) {
        /* process one line of pipeline output held in buffer */
    }
    pclose( f );
}
Reference: How to execute a command and get output of command within C++ using POSIX?

You have to do it through the shell, but it's easier to use the system() call.
while (something) {
    int ret = system("foo");
    if (WIFSIGNALED(ret) &&
        (WTERMSIG(ret) == SIGINT || WTERMSIG(ret) == SIGQUIT))
        break;
}

Just write another useful 'ancient and good' tool ;) that reads all data from stdin and writes the result to stdout.
cat *.txt | grep 'blabla' | sort | my_new_tool | tee res_file
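A minimal sketch of such a filter, where the "useful work" is just a placeholder transform (tagging each line) so you can see where it plugs into the pipeline; my_new_tool is whatever you build out of this:
#include <iostream>
#include <string>

// Reads stdin line by line, transforms each line, and writes the result to
// stdout, so the program can sit anywhere in a shell pipeline.
int main() {
    std::string line;
    while (std::getline(std::cin, line)) {
        // Placeholder for the real preprocessing step.
        std::cout << "processed: " << line << '\n';
    }
    return 0;
}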

The nice way to do it is:
Create two pipes
Fork a new process
Replace stdin and stdout for the child process with the pipe ends using the dup2 function
exec the command you'd like
Write to and read from the child in the parent process using the pipes
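A minimal sketch of those steps, assuming the child runs sort as a stand-in for whatever command you need; the parent feeds it a few lines and reads the sorted result back (error handling kept short):
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int toChild[2], fromChild[2];            /* [0] = read end, [1] = write end */
    pipe(toChild);
    pipe(fromChild);

    pid_t pid = fork();
    if (pid == 0) {                          /* child */
        dup2(toChild[0], STDIN_FILENO);      /* read end of toChild becomes stdin */
        dup2(fromChild[1], STDOUT_FILENO);   /* write end of fromChild becomes stdout */
        close(toChild[0]);  close(toChild[1]);
        close(fromChild[0]); close(fromChild[1]);
        execlp("sort", "sort", (char *)0);
        _exit(127);                          /* only reached if exec failed */
    }

    /* parent */
    close(toChild[0]);
    close(fromChild[1]);
    const char *data = "banana\napple\ncherry\n";
    write(toChild[1], data, strlen(data));
    close(toChild[1]);                       /* EOF on the child's stdin */

    char buf[4096];
    ssize_t n;
    while ((n = read(fromChild[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(fromChild[0]);
    waitpid(pid, NULL, 0);
    return 0;
}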

busybox was my first thought as well, although you might also want to consider embedding a scripting engine like Python and doing this kind of manipulation in Python scripts.
I would definitely not try to strip this kind of functionality out of the GNU command line tools, since they have grown significantly since the early UNIX days and have sprouted an awful lot of options.
If the busybox code seems too hard to adapt, the next place I would look is the Minix source code. Look under Previous Versions and pick one of the version 1 or 2 Minixes, because those were written as teaching code, so they tend to be clearer and simpler.
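If you go the embedded-interpreter route, here is a minimal sketch of driving CPython from C++ (assuming the Python development headers are installed and you link against libpython; the file names and the sort/dedup script are placeholders for your real preprocessing):
#include <Python.h>   // CPython's C API

int main() {
    Py_Initialize();
    // Placeholder preprocessing: roughly what "sort | uniq" would do.
    PyRun_SimpleString(
        "lines = sorted(set(open('filename').read().splitlines()))\n"
        "open('preprocessed.txt', 'w').write('\\n'.join(lines))\n");
    Py_Finalize();
    return 0;
}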

If you do not want to call external commands (whether by exec, popen, system, etc.) and do not want to modify the source of these utilities and compile it into your code (relatively easy: just rename 'main' to 'main_cut' and so on), then the only remaining option I see is to embed the utilities inside your executable and either extract them at runtime or dynamically create a file system that points at the data inside your binary (e.g. using a floppy or CD image and writing a FUSE module that picks the disk-image data up from a RAM address). All of which seems like a lot of work just to make this look like a single, neatly packaged utility.
Personally, if I really had to do this, I'd get the source of all those utils and compile them in, calling their renamed entry points directly. Of course you'd no longer have pipes easily available; you'd either have to use temp files between the stages, or something more complicated involving co-routines, or maybe sockets. Lots of work, and messy whatever you do!

Related

Execute a C++ program and copy the cmd output using Perl

I am trying to write a Perl script which compiles a C++ program and then executes it, passing in the required values. After the execution, I need whatever is printed to the console for comparison with a sample output.
Right now my code compiles the cpp code perfectly and executes it correctly, but it completes without passing in the values.
All I am doing right now is using system commands:
system("cl E:\\checker\\Perl\\temp.cpp");
system("temp.exe");
system("10");
system("20");
system("30");
system("40");
system("50");
The C++ Code is something like this
cin >> a;
cin >> b;
cin >> c;
cin >> d;
cin >> e;
The code compiles and executes correctly, but the following values (which are inputs to the C++ code, which reads them with cin) don't seem to get passed in.
Please note I am using the Visual Studio compiler. Also, can someone tell me how I can extract the information output by the C++ code into, say, a Perl array for comparison?
You can use IPC::Open2 to establish bidirectional communication between your Perl script and the other program, e.g.
use IPC::Open2;
use autodie;
my ($chld_out, $chld_in);
my $pid = open2(
$chld_out,
$chld_in,
q(bc -lq)
);
$chld_in->print("1+1\n");
my $answer = <$chld_out>;
print $answer;
kill 'SIGINT', $pid; # I don't believe waitpid() works on Win32. Possibly with Cygwin.
Unfortunately, buffering can make this approach a lot harder than one would hope. You'll also have to manually wait and reap the child process. An alternative would be to use a module like IO::Pty or Expect to create a pseudo-tty environment to simulate user interaction (but I believe these two only work in a Cygwin environment on Windows). There's also IPC::Run, a more fully-featured alternative to IPC::Open2/3.
See also: perlipc and perlfaq8.
The correct syntax for system is either
system('command', 'arg1', 'arg2', ... , 'argn');
Or all as a single string, which allows shell interpretation (which you may not want):
system('command arg1 arg2');
system does not capture output. Instead, use the backticks operator:
my $command_output = `command args`;
or its generic form qx. (If you assign to an array, the output will be split on $/ and pushed onto the array one line at a time).
There is also the pipe form of open (open(my $pipeh, '-|', 'command', 'arg1', ..., 'argn') or die $!;) and the readpipe function.

pidof from a background script for another background process

I wrote a C++ program to check whether a process is running or not. That process is launched independently in the background. My program works fine when I run it in the foreground, but when I schedule it with cron, it does not work.
int PID= ReadCommanOutput("pidof /root/test/testProg1"); /// also tested with pidof -m
I made a script in /etc/cron.d/myscript to schedule it as follows:
45 15 * * * root /root/ProgramMonitor/./testBkg > /root/ProgramMonitor/OutPut.txt
What could be the reason for this?
string ReadCommanOutput(string command)
{
    string output = "";
    int its = system((command + " > /root/ProgramMonitor/macinfo.txt").c_str());
    if (its == 0)
    {
        ifstream reader1("/root/ProgramMonitor/macinfo.txt", fstream::in);
        if (!reader1.fail())
        {
            while (!reader1.eof())
            {
                string line;
                getline(reader1, line);
                if (reader1.fail()) // for last read
                    break;
                if (!line.empty())
                {
                    stringstream ss(line.c_str());
                    ss >> output;
                    cout << command << " output = [" << output << "]" << endl;
                    break;
                }
            }
            reader1.close();
            remove("/root/ProgramMonitor/macinfo.txt");
        }
        else
            cout << "/root/ProgramMonitor/macinfo.txt not found !" << endl;
    }
    else
        cout << "ERROR: code = " << its << endl;
    return output;
}
Its output comes out as "ERROR: code = 256".
Thanks in advance.
If you really wanted to pipe(2), fork(2), execve(2) and then read the output of a pidof command, you should at least use popen(3), since ReadCommandOutput is not in the POSIX API; at the very least:
pid_t thepid = 0;
FILE* fpidof = popen("pidof /root/test/testProg1", "r");
if (fpidof) {
    int p = 0;
    if (fscanf(fpidof, "%d", &p) > 0 && p > 0)
        thepid = (pid_t)p;
    pclose(fpidof);
}
BTW, you did not specify what should happen if several processes (or none) are running testProg1; you also need to check the result of pclose.
But you don't need to hardcode the command; if you build the pidof command yourself, perhaps using snprintf, you should be scared of code injection into that command, so quote arguments appropriately. Or you could simply find your program by accessing the proc(5) file system: opendir(3) on "/proc", then loop with readdir(3) and, for every entry whose name is numeric like 1234 (i.e. starts with a digit), readlink(2) its exe entry, e.g. /proc/1234/exe. Don't forget the closedir, and test every syscall.
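A minimal sketch of that /proc scan; you would call it with the full executable path, e.g. find_pid_by_exe("/root/test/testProg1") (error handling kept short):
#include <dirent.h>
#include <unistd.h>
#include <sys/types.h>
#include <cctype>
#include <cstdio>
#include <cstdlib>
#include <string>

// Returns the pid of the first process whose /proc/<pid>/exe resolves to
// the given executable path, or 0 if none is found.
pid_t find_pid_by_exe(const std::string &wanted)
{
    DIR *dir = opendir("/proc");
    if (!dir)
        return 0;
    pid_t found = 0;
    struct dirent *ent;
    while (!found && (ent = readdir(dir)) != NULL) {
        if (!isdigit((unsigned char)ent->d_name[0]))
            continue;                       // not a pid directory
        char link[64], target[4096];
        snprintf(link, sizeof link, "/proc/%s/exe", ent->d_name);
        ssize_t n = readlink(link, target, sizeof target - 1);
        if (n <= 0)
            continue;                       // e.g. permission denied
        target[n] = '\0';                   // readlink does not terminate
        if (wanted == target)
            found = (pid_t)atoi(ent->d_name);
    }
    closedir(dir);
    return found;
}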
Please read Advanced Linux Programming
Notice that libraries like Poco or toolkits like Qt (which has a layer QCore without any GUI, and providing QProcess ....) could be useful to you.
As to why your pidof is failing, we can't guess (perhaps a permission issue, or perhaps there is no longer any such process running). Try to run it as root in another terminal at least. Test its exit code, and display both its stdout and stderr, at least for debugging purposes.
Also, a better way (assuming that testProg1 is some kind of server application, meant to run in at most one process) might be to define a different convention: testProg1 starts by writing its own pid into /var/run/testProg1.pid, and your monitoring application then reads the pid from that file and checks, with kill(2) and a 0 signal number, that the process still exists.
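A small sketch of that pid-file check (assuming the /var/run/testProg1.pid convention above; whether EPERM counts as "running" is a policy choice):
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>

// Returns true if the process recorded in the pid file appears to be alive.
bool testProg1IsRunning(void)
{
    FILE *f = fopen("/var/run/testProg1.pid", "r");
    if (!f)
        return false;                 // no pid file: assume not running
    long pid = 0;
    int fields = fscanf(f, "%ld", &pid);
    fclose(f);
    if (fields != 1 || pid <= 0)
        return false;
    // Signal 0 checks existence/permission without delivering anything;
    // EPERM means the process exists but belongs to another user.
    return kill((pid_t)pid, 0) == 0 || errno == EPERM;
}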
BTW, you could also improve your crontab(5) entry. You could make it run some shell script which uses logger(1) and (for debugging) runs pidof with its output redirected elsewhere. You might also read the mail perhaps sent to root by cron.
Finally, I solved this problem by using the su command.
I used
ReadCommanOutput("su -c 'pidof /root/test/testProg1' - root");
instead of
ReadCommanOutput("pidof /root/test/testProg1");

QProcess Multiplatform command

I need to launch some script using QProcess.
For this, under Windows, I use QProcess::execute("cmd [...]");.
However, this won't work if I move to some other OS such as Linux.
So I was wondering whether the best solution for making that code portable would be to rely on a multiplatform scripting solution, such as Tcl for example.
So I use QProcess::execute("tclsh test.tcl"); and it works.
But I have three questions concerning this problem, because I'm not sure of what I've done:
Will execute() run tclsh with the file test.tcl both under Windows and Linux, wherever I execute it? It seems to, but I want to be sure! Is there any bad scenario that can happen?
Is this a good solution? I know lots of people have way more experience than I do, and I'd be grateful for anything I could learn!
Why not use std::system()? Is it less portable?
While this isn't a complete answer, I can point out a few things.
In particular, tclsh is quite happy under Windows; it's a major supported platform. The main problem that could happen in practice is if you pass a filename with a space in it (this is distinctly more likely under Windows than on a Unix due to differences in community practice). However, the execute() call as you have written it has no problems, as long as tclsh is located on the PATH.
The other main option for integrating Tcl script execution with Qt is to link your program against the Tcl binary library and use that. Tcl's API is aimed at C, so it should be pretty simple to use from C++ (if a little clunky from a C++ perspective):
// This holds the description of the Tcl C API
#include "tcl.h"
#include <iostream>

// Initialize the Tcl library; *call only once*
Tcl_FindExecutable(NULL);
// Make an evaluation context
Tcl_Interp *interp = Tcl_CreateInterp();
// Execute a script loaded from a file (or whatever)
int resultCode = Tcl_Eval(interp, "source test.tcl");
// Check if an error happened and print the error if it did
if (resultCode == TCL_ERROR) {
    std::cerr << "ERROR: " << Tcl_GetString(Tcl_GetObjResult(interp)) << std::endl;
}
// Squelch the evaluation context
Tcl_DeleteInterp(interp);
I'm not a particularly great C++ coder, but this should give the idea. I have no idea about QProcess::execute() vs std::system().
A weak point of your solution is that on Windows you'll have to install tclsh. There is no tclsh on Solaris by default, either; maybe the same elsewhere.
Compared to std::system(), QProcess gives you more control over, and more information about, the process executing your command. If all you need is to execute the script (without capturing its output, for example), std::system() is a good choice.
What I've used in a similar situation:
#ifdef Q_OS_WIN
mCommand = QString("cmd /C %1 %2").arg(command).arg(args);
#else
mCommand = QString("bash %1 %2").arg(command).arg(args);
#endif
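If you do want the output and exit code, here is a sketch of running the mCommand built above with QProcess (assuming Qt 5's single-string start() overload; in Qt 6 you would pass the program and arguments separately):
#include <QProcess>
#include <QDebug>

// Runs the platform-specific command built above and returns its stdout.
QByteArray runCommand(const QString &mCommand)
{
    QProcess proc;
    proc.start(mCommand);                   // Qt 5 single-string overload
    if (!proc.waitForStarted() || !proc.waitForFinished()) {
        qWarning() << "failed to run" << mCommand;
        return QByteArray();
    }
    if (proc.exitStatus() != QProcess::NormalExit || proc.exitCode() != 0)
        qWarning() << "command failed with code" << proc.exitCode();
    return proc.readAllStandardOutput();
}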

How to run a C++ program in another C++ program?

I have a simple C++ program that takes in inputs and outputs some string. Like this:
$ ./game
$ what kind of game? type r for regular, s for special.
$ r
$ choose a number from 1 - 10
$ 1
$ no try again
$ 2
$ no try again
$ 5
$ yes you WIN!
Now I want to write a C++ program that runs this C++ program and plays the game automatically, without user input, and then writes the output to a file or to standard output.
Running it would look like this:
./program game r > outputfile
game is the game program, r for playing regular style.
How should I do this? The main reason I need this program is that I want to do automatic testing for a much bigger program.
You could use std::system from <cstdlib>:
std::system("game r > outputfile");
The return value is the command's exit status, and the sole argument must be of type char const *.
There is no standard way to run a program and feed it standard input, though. Judging by your command line, you're on some Unix variant where popen from <stdio.h> should work:
FILE *sub = popen("game r > outputfile", "w");
then write to sub with the stdio functions and read outputfile afterwards.
(But for simple testing, I'd recommend implementing the core logic of your program as a set of functions/classes that can be run by a custom main function in a loop; or pick your favorite scripting language to handle this kind of thing.)
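A minimal sketch of such a driver, assuming the guesses can simply be written in order to the game's stdin (the guess list is illustrative; if the game reads the mode from stdin rather than argv, write "r" first):
#include <stdio.h>

int main(void) {
    /* "w" means we write to the game's stdin; its stdout goes to outputfile. */
    FILE *sub = popen("./game r > outputfile", "w");
    if (sub == NULL)
        return 1;

    const char *guesses[] = { "1", "2", "5" };
    for (int i = 0; i < 3; ++i) {
        fprintf(sub, "%s\n", guesses[i]);
        fflush(sub);                /* make sure the game sees each guess */
    }

    int status = pclose(sub);       /* waits for the game to finish */

    /* Now open outputfile and compare it against the expected transcript. */
    return status == 0 ? 0 : 1;
}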
It'd be more efficient to add a caller function to your main source and have it control looping, logging, and feeding input. It also would not require system calls or other magic to pull off. Being a game programmer, I can say we have our games play themselves as much as possible to help with debugging, and almost always this is done via internal code, not through external scripting or system calls. It also makes it easier to feed viable input.
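A sketch of that in-process approach, assuming the game logic were refactored into a hypothetical playRegular(std::istream&, std::ostream&) function (the name, signature, and toy game body are made up for illustration):
#include <iostream>
#include <sstream>
#include <string>

// Hypothetical refactored game logic: reads guesses from 'in' and writes
// prompts/results to 'out' instead of talking to std::cin/std::cout directly.
void playRegular(std::istream &in, std::ostream &out) {
    const int secret = 5;                 // stand-in for the game's real state
    out << "choose a number from 1 - 10\n";
    int guess = 0;
    while (in >> guess && guess != secret)
        out << "no try again\n";
    if (guess == secret)
        out << "yes you WIN!\n";
}

int main() {
    std::istringstream fakeInput("1\n2\n5\n");   // scripted "user" input
    std::ostringstream captured;                 // captured transcript
    playRegular(fakeInput, captured);
    std::cout << captured.str();                 // compare or log as needed
    return 0;
}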
This scenario cries out for a "script", IMHO.
Bash, Perl, Python - you name it.
SIMPLEST CASE:
Just write a bash script to call ./program game r > outputfile.
Or ./program game r < test_input.txt > test_output.txt
For more advanced scenarios, you might want to look at "expect".
You might also want to look at "STAF", which might be a great way to "automate your automated tests":
http://staf.sourceforge.net/current/STAFFAQ.htm

C++: How to escape user input for safe system calls?

On a Linux platform, I have C++ code that goes like this:
// ...
std::string myDir;
myDir = argv[1]; // myDir is initialized using user input from the command line.
std::string command;
command = "mkdir " + myDir;
if (system(command.c_str()) != 0) {
return 1;
}
// continue....
Is passing user input to a system() call safe at all?
Should the user input be escaped / sanitized?
How?
How could the above code be exploited for malicious purposes?
Thanks.
Just don't use system. Prefer execl.
execl ("/bin/mkdir", "mkdir", myDir, (char *)0);
That way, myDir is always passed as a single argument to mkdir, and the shell isn't involved. Note that you need to fork if you use this method.
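A minimal sketch of that fork/exec/wait sequence (error handling kept short; myDir is the std::string from the question):
#include <string>
#include <sys/wait.h>
#include <unistd.h>

// Runs /bin/mkdir with myDir as its single argument; returns true on success.
bool run_mkdir(const std::string &myDir) {
    pid_t pid = fork();
    if (pid < 0)
        return false;                          // fork failed
    if (pid == 0) {                            // child: becomes mkdir
        execl("/bin/mkdir", "mkdir", myDir.c_str(), (char *)0);
        _exit(127);                            // only reached if execl failed
    }
    int status = 0;                            // parent: wait for the child
    if (waitpid(pid, &status, 0) < 0)
        return false;
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}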
But if this is not just an example, you should use the mkdir C function (declared in <sys/stat.h>):
mkdir(myDir.c_str(), someMode);
Using the system() call with command-line parameters without sanitizing the input can be highly insecure.
The potential security threat is a user passing the following as a directory name:
somedir ; rm -rf /
To prevent this, use a mixture of the following:
use getopt to ensure your input is sanitized
sanitize the input (one possible check is sketched after this list)
use execl instead of system to execute the command
The best option would be to use all three.
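As a sketch of the sanitizing step, here is one possible allow-list check; the exact policy (which characters to accept) is an assumption and should match whatever your application actually permits:
#include <string>

// Returns true if 'name' looks like a plain directory name: non-empty,
// not option-like, no path separators, only allow-listed characters.
bool isSafeDirName(const std::string &name) {
    if (name.empty() || name[0] == '-')
        return false;
    const std::string allowed =
        "abcdefghijklmnopqrstuvwxyz"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "0123456789._-";
    return name.find_first_not_of(allowed) == std::string::npos;
}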
Further to Matthew's answer, don't spawn a shell process unless you absolutely need it. If you use a fork/execl combination, individual parameters will never be parsed so don't need to be escaped. Beware of null characters however which will still prematurely terminate the parameter (this is not a security problem in some cases).
I assume mkdir is just an example, as mkdir can trivially be called from C++ much more easily than these subprocess suggestions.
Reviving this ancient question as I ran into the same problem and the top answers, based on fork() + execl(), weren't working for me. (They create a separate process, whereas I wanted to use async to launch the command in a thread and have the system call stay in-process to share state more easily.) So I'll give an alternative solution.
It's not usually safe to pass user input as-is, especially if the utility is designed to be sudo'd; in order to sanitize it, instead of composing the string to be executed yourself, use environment variables, which the shell has built-in escape mechanisms for.
For your example:
// ...
std::string myDir;
myDir = argv[1]; // myDir is initialized using user input from the command line.
setenv("MY_DIR", myDir, 1);
if (system("mkdir \"${MY_DIR}\"") != 0) {
return 1;
}
// continue....