Program to call other programs - c++

I am writing a program that will solve a type of minimum spanning tree problem. I have two different algorithms that I've gotten working in two separate .cpp files, named kruskels.cpp and prims.cpp.
My question is this:
Each file is run with the following command line: time ./FILENAME INPUTFILE FACTOR
I would like to make a program that, depending on what input file is entered, will run either kruskels.cpp or prims.cpp. How can I do this?
This program must pass those command line arguments on to kruskels or prims. Each file (kruskels.cpp and prims.cpp) is designed to be run with those command line arguments (it takes in INPUTFILE and FACTOR as variables to do file I/O).
This should be for C++.

You can call external programs using the system function.
However, it would be much better to build your Kruskal and Prim solvers in a modular way as classes, and instantiate the appropriate class from your main, according to the input. For this you'll link kruskels.cpp, prims.cpp and your main.cpp into a single executable.

The standard way is to use system(). You might also want to look up popen() (or, on Windows, _popen()).
Edit: My assumption was that you have two executables and (critical point) want to keep them as separate executables. In that case, using system is pretty straightforward. For example, you could do something like:
std::stringstream buffer;
if (use_Kruskals)
    buffer << "Kruskals " << argv[1] << ' ' << argv[2];
else
    buffer << "Prims " << argv[1] << ' ' << argv[2];
system(buffer.str().c_str());
Note the space inserted between argv[1] and argv[2]; without it the two arguments would run together into one.
Depending on what you're doing (and as Eli pointed out) you might want to create a single executable, with your implementations of Prim's and Kruskal's methods in that one executable instead. Without seeing your code for them, it's impossible to guess how much work this would be though.

If you need your top program to regain control after executing one of your two child programs, use system() or popen(); if you don't need it, you can use execve().

Related

C++ cin cout to another exe

I have 2 different executables which use traditional cin/cout to interact with the user.
Let's imagine it is a program that inputs some numbers and outputs the result.
Now I want to write a driving program in C++ that calls these executables, provides input to them ("cin >>"s a value), and also gets the result of their cout.
I tried using system(), which makes a command line call that starts the executable. But I am not able to provide any further input (in some cases several inputs) to it. Any idea how to do that?
Unfortunately I can't switch the programs to take command line arguments.
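Since system() gives you no handle on the child's stdin, one common POSIX approach is two pipes plus fork/exec, wiring them to the child's stdin and stdout. A sketch with minimal error handling; the child path is whatever executable you want to drive:

```cpp
#include <sys/wait.h>
#include <unistd.h>
#include <string>

// Feed `input` to `path` on its stdin and capture its stdout.
// Returns the captured output (empty string on failure).
std::string run_with_input(const char* path, const std::string& input) {
    int in_pipe[2], out_pipe[2];
    if (pipe(in_pipe) < 0 || pipe(out_pipe) < 0) return "";
    pid_t pid = fork();
    if (pid < 0) return "";
    if (pid == 0) {                         // child
        dup2(in_pipe[0], STDIN_FILENO);     // pipe read end -> child's cin
        dup2(out_pipe[1], STDOUT_FILENO);   // pipe write end -> child's cout
        close(in_pipe[0]);  close(in_pipe[1]);
        close(out_pipe[0]); close(out_pipe[1]);
        execl(path, path, (char*)nullptr);
        _exit(127);                         // exec failed
    }
    close(in_pipe[0]);                      // parent keeps the other ends
    close(out_pipe[1]);
    ssize_t w = write(in_pipe[1], input.data(), input.size());
    (void)w;
    close(in_pipe[1]);                      // EOF tells the child input is done
    std::string output;
    char buf[4096];
    ssize_t n;
    while ((n = read(out_pipe[0], buf, sizeof buf)) > 0)
        output.append(buf, n);
    close(out_pipe[0]);
    waitpid(pid, nullptr, 0);
    return output;
}
```

This writes all input up front; for genuinely interactive back-and-forth you would interleave write and read calls on the two pipe ends instead.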

Saving an Array of Numbers into a Specific File and Calling It Later

Is there any command in C++ that I can use as a reference to save an array of numbers into a specific file, so that I can access it and call it as an array of numbers again the next day? I am working with a ton of numbers. I know how to input those numbers into an array, but I don't want to input all of them again and again every time I run the program, so is there any way to input them only once? What I found out there is to save them into a .txt file, but if I load that later it will be "text" data, and I can't use those numbers in my project.
Your terminology is non-standard, so I suspect you're having trouble understanding how to find answers.
C++ doesn't have "commands". The closest two things you might mean are keywords and functions. Keywords are integral to the language, but do very low level things, like break, return, for and while. Functions are collections of C++ statements that you can call as one unit. There are various libraries that contain related sets of functions. The most common library in C++ is the C++ standard library, which contains functions like std::fopen and std::fputs which are useful when writing to files.
In addition to the library functions, there are functions you can write yourself. The advantage of those is that you can easily study and modify the contents of the function to achieve just what you want.
Since you use the term "reference", I gather you're looking for a function that you can read to modify for your own purposes.
In that case, check out this collection of code. You could make some modifications and wrap it in a function to do what you want. In particular, note the for loop with the out_file << num_array[count] << " "; statement for how to write to a file, and the second for loop with the in_file >> a; statement for how to read the file back.
Save the numbers in a CSV file:
http://www.cplusplus.com/forum/general/170845/
Then reload the CSV file:
http://forums.codeguru.com/showthread.php?396459-Reading-CSV-file-into-an-array
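The pattern from those links can be sketched as a save/load pair: write the numbers space-separated, then parse them back with operator>>. The file name below is just an example.

```cpp
#include <fstream>
#include <vector>

// Write the numbers to a text file, space-separated.
bool save_numbers(const char* filename, const std::vector<int>& nums) {
    std::ofstream out_file(filename);
    if (!out_file) return false;
    for (int n : nums)
        out_file << n << ' ';
    return true;
}

// Read the numbers back; operator>> parses the text into ints again,
// so the "text" file round-trips to a usable array of numbers.
std::vector<int> load_numbers(const char* filename) {
    std::vector<int> nums;
    std::ifstream in_file(filename);
    int a;
    while (in_file >> a)
        nums.push_back(a);
    return nums;
}
```

This addresses the "it will be text data" worry: the text on disk is re-parsed into ints on load, so the program sees numbers, not strings.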

GetCommandLine linux *true* equivalent

Similar question to Linux equivalent of GetCommandLine and CommandLineToArgv
Is it possible to get the raw command line in Linux? The file /proc/self/cmdline is destroyed.
./a.out files="file 1","file 2" param="2"
prints
./a.outfiles=file 1,file 2param=2
which is junk
Escaping command line does work for all arguments but the first.
./a.out files=\"fil 1\",\"fil 2\"\ param=\"2\"
prints
./a.outfiles="fil1","fil2" param="2"
You can't do that. The command line arguments are actually passed to the new process as individual strings. See the linux kernel source:
kernel_execve
Note that kernel_execve(...) takes a const char *argv[] - so there is no such thing as a single long command line string in Linux - it's the layer above that needs to split the arguments into separate components.
Edit: actually, the system call is here:
execve system call
But the statement above still applies. The parameter for argv is already split by the time the kernel gets it from the C-library call to exec.
It is the responsibility of the "starter of the program" (typically a shell, but doesn't have to be) to produce the argv[] array. It will do the "globbing" (expansion of wildcard filenames to the actual files that it matches) and stripping of quotations, variable replacement and so on.
I would also point out that although there are several variants of "exec" in the C library, there is only one way into the kernel. All variants end up in the execve system call that I linked to above. The other variants exist simply because the caller may not fancy splitting arguments into individual elements, so the C library "helps out" by doing that for the programmer. Similarly for passing an environment array to the new program - if the programmer doesn't need a specific environment, he/she can just call the variant that automatically takes the parent process environment.
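Given that only argv[] survives, the closest you can get is to re-quote the arguments yourself. This is a lossy reconstruction, since the shell's original quoting is gone; the rule below (quote anything containing whitespace) is one simple heuristic, not a faithful inverse of shell parsing.

```cpp
#include <string>

// Join argv back into a single shell-like string, quoting any
// element that contains whitespace. This rebuilds *a* command
// line, not necessarily the exact one the user typed.
std::string rejoin_argv(int argc, char* argv[]) {
    std::string line;
    for (int i = 0; i < argc; ++i) {
        std::string arg = argv[i];
        bool needs_quotes =
            arg.find_first_of(" \t") != std::string::npos;
        if (i) line += ' ';              // space between arguments
        if (needs_quotes) line += '"';
        line += arg;
        if (needs_quotes) line += '"';
    }
    return line;
}
```

For the example above, {"./a.out", "files=file 1,file 2", "param=2"} rejoins with the whitespace-containing element quoted, which is readable even though the original quoting cannot be recovered.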

Test environment for an Online Judge

I am planning to build an Online Judge on the lines of CodeChef, TechGig, etc. Initially, I will be accepting solutions only in C/C++.
I have thought through a security model for the same, but my concern as of now is how to model the execution and testing part.
Method 1
The method that seems to be more popular is to redirect standard input to the executable and redirect standard output to a file, for example:
./submission.exe < input.txt > output.txt
Then compare the output.txt file with some solution.txt file character by character and report the results.
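The character-by-character comparison itself is only a few lines. A sketch, using the file names from the example above:

```cpp
#include <fstream>

// Compare two files byte by byte; true iff identical in content
// and length. Opened in binary mode so no newline translation
// interferes with the comparison.
bool files_match(const char* a, const char* b) {
    std::ifstream fa(a, std::ios::binary), fb(b, std::ios::binary);
    if (!fa || !fb) return false;       // missing file -> mismatch
    char ca, cb;
    for (;;) {
        bool got_a = static_cast<bool>(fa.get(ca));
        bool got_b = static_cast<bool>(fb.get(cb));
        if (!got_a || !got_b)
            return got_a == got_b;      // both must end together
        if (ca != cb) return false;
    }
}
```

In practice a judge usually also normalizes trailing whitespace and final newlines before comparing, since many correct submissions differ only in those.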
Method 2
A second approach that I have seen is not to allow the users to write main(). Instead, write a function that accepts some arguments in the form of strings and set a global variable as the output. For example:
//This variable should be set before returning from submissionAlgorithm()
char * output;
void submissionAlgorithm(char * input1, char * input2)
{
//Write your code here.
}
At each step, and for a test case to be executed, the function submissionAlgorithm() is repeatedly called and the output variable is checked for results.
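A minimal judge-side harness for Method 2 might look like this. Only the output/submissionAlgorithm interface comes from the snippet above; the judge function and the sample submission are invented for illustration.

```cpp
#include <cstring>

// Interface from the question: the submission sets this global.
char* output;
void submissionAlgorithm(char* input1, char* input2);

// Judge-side loop (sketch): run each test case in memory and
// compare the global `output` against the expected answer.
int judge(char* inputs1[], char* inputs2[],
          const char* expected[], int n) {
    int passed = 0;
    for (int i = 0; i < n; ++i) {
        output = nullptr;
        submissionAlgorithm(inputs1[i], inputs2[i]);
        if (output && std::strcmp(output, expected[i]) == 0)
            ++passed;
    }
    return passed;
}

// Example submission (would normally be the user's code):
// trivially echoes its first input.
void submissionAlgorithm(char* input1, char* /*input2*/) {
    output = input1;
}
```

Note the harness and submission share one address space here, which is exactly the weakness the answer below points out: a crashing submission takes the judge process down with it.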
From an initial analysis I found that Method 2 would not only be more secure (I would prevent all read and write access to the filesystem from the submitted code), but would also make the execution of test cases faster (maybe?), since the computation of test results would occur in memory.
I would like to know if there is any reason as to why Method 1 would be preferred over Method 2.
P.S: Of course, I would be hosting the online judge engine on a Linux Server.
Don't take this wrong, but you will need to look at security from a much higher perspective. The problem will not be the input and output being written to a file, and that should not affect performance too much either. But you will need to manage submissions that can actually take down your process (in the second case) or the whole system (with calls to the OS to write to disk, acquire too much memory, ...).
Disclaimer I am by no means a security expert.

C++ ofstream vs. C++ cout piped to file

I'm writing a set of unit tests that write calculated values out to files. Each test produces a square matrix that holds anywhere from 50,000 to 500,000 doubles, and I have a total of 128 combinations of test cases.
Is there any significant overhead involved in writing cout statements and then piping that output to files, or would I be better off writing directly to the file using an ofstream?
This is going to be dependent on your system and environment. There is likely to be very little difference, but there is only one way to be sure: try both approaches and measure them.
Since the dimensions involved are so large I'm assuming that these files are not meant to be read by a human being? Just make sure you write them out as binary and not human-readable text because that will make so much more difference than the difference between using ofstream or piping cout.
Whether this means you have to use ofstream or not I don't know. I've never written binary to cout so I can't say whether that's possible...
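A sketch of the binary write/read suggested above, using ofstream::write and ifstream::read on the raw double storage. The file name and sizes are examples, and note the format is not portable across machines with different double representations or endianness:

```cpp
#include <cstddef>
#include <fstream>
#include <vector>

// Dump a matrix of doubles as raw binary: far smaller and faster
// to write than formatted text for 50,000-500,000 values.
bool write_matrix(const char* filename, const std::vector<double>& m) {
    std::ofstream out(filename, std::ios::binary);
    if (!out) return false;
    out.write(reinterpret_cast<const char*>(m.data()),
              static_cast<std::streamsize>(m.size() * sizeof(double)));
    return static_cast<bool>(out);
}

// Read `count` doubles back; the caller must know the size
// (or store it in a small header, omitted here for brevity).
std::vector<double> read_matrix(const char* filename, std::size_t count) {
    std::vector<double> m(count);
    std::ifstream in(filename, std::ios::binary);
    in.read(reinterpret_cast<char*>(m.data()),
            static_cast<std::streamsize>(count * sizeof(double)));
    return m;
}
```

Binary also round-trips the doubles exactly, whereas formatted text output can lose precision unless you print enough significant digits.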
As Charles Bailey said, it's implementation dependent; what follows is mostly for the Linux implementation with the GNU toolchain, but I can hardly imagine it being very different on other OSes.
In libstdc++ 4.4.2:
An fstream contains an underlying stdio_filebuf, which is a basic_filebuf. This basic_filebuf contains its own buffer by inheriting from basic_streambuf, and actually contains a __basic_file, itself containing an underlying plain C stdio abstraction (FILE* or std::__c_file*), into which it flushes the buffer.
cout, which is an ostream, is initialized with a stdio_sync_filebuf, itself initialized with the C file abstraction stdout. stdio_sync_filebuf calls plain C stdio functions.
Considering only C++, it appears that an fstream may be more efficient thanks to the two layers of buffering.
Considering C only, if the process is forked with the stdout file descriptor redirected to a file, there should be no difference between writing to a newly opened file (what fstream does in the end) or to stdout, since the fd points to a file anyway (what cout does in the end).
If I were you, I would use an fstream since it's your intent.