(Avoid) splitting code into .cpp and .h in C++, and efficient compilation

The common practice in C++ is to separate declarations in .h (or .hpp) and implementation into .cpp.
I know of two main reasons (maybe there are others):
Compilation speed (you do not have to recompile everything when you change just one file; make can relink the program from the pre-compiled .o files).
Forward declarations are sometimes necessary (when the implementation of class A depends on class B and the implementation of class B on class A) ... but I don't have this problem so often and usually I can solve it.
In object-oriented programming it looks like this:
QuadraticFunc.h:
class QuadraticFunc{
public:
    double a,b,c;
    double eval ( double x );
    void   solve( double y, double &x1, double &x2 );
};
QuadraticFunc.cpp:
#include <math.h>
#include "QuadraticFunc.h"
double QuadraticFunc::eval ( double x ){ return c + x * (b + x * a ); }
void QuadraticFunc::solve( double y, double &x1, double &x2 ){
    double c_ = c - y;
    double D2 = b * b - 4 * a * c_;
    if( D2 > 0 ){
        double D = sqrt( D2 );
        double frac = 0.5/a;
        x1 = (-b-D)*frac;
        x2 = (-b+D)*frac;
    }else{ x1 = NAN; x2 = NAN; }
}
main.cpp :
#include <math.h>
#include <stdio.h>
#include "QuadraticFunc.h"
QuadraticFunc * myFunc;
int main( int argc, char* args[] ){
    myFunc = new QuadraticFunc();
    myFunc->a = 1.0; myFunc->b = -1.0; myFunc->c = -1.0;
    double x1,x2;
    myFunc->solve( 10.0, x1, x2 );
    printf( "solution %20.10f %20.10f \n", x1, x2 );
    double y1,y2;
    y1 = myFunc->eval( x1 );
    y2 = myFunc->eval( x2 );
    printf( "check    %20.10f %20.10f \n", y1, y2 );
    delete myFunc;
    return 0;
}
then compile it with a makefile like this:
FLAGS = -std=c++11 -Og -g -w
SRCS = QuadraticFunc.cpp main.cpp
OBJS = $(subst .cpp,.o,$(SRCS))
all: $(OBJS)
	g++ $(OBJS) $(FLAGS) -o program.x
main.o: main.cpp QuadraticFunc.h
	g++ $(FLAGS) -c main.cpp
QuadraticFunc.o: QuadraticFunc.cpp QuadraticFunc.h
	g++ $(FLAGS) -c QuadraticFunc.cpp
clean:
	rm -f *.o *.x
However, I often find it very inconvenient,
especially when you change code a lot (e.g. in the initial phase of development, when you are not yet sure about the overall structure of the whole project):
You have to go back and forth all the time between the .cpp and .h parts of the code when making significant changes to the class structure.
You have twice as many files in the editor and in the project folder, which is confusing.
You have to write some information (like function headers or the QuadraticFunc:: prefix) twice, which invites typos and inconsistencies, so the compiler complains all the time (I make such mistakes very often).
Every time you add / remove / rename a class you have to edit the Makefile, where you make a lot of other mistakes which are hard to track from the compiler output (e.g. I often forget to write the Makefile so that the code recompiles every dependency I edit).
From this point of view I much prefer how Java works. For this reason I have been writing my C++ programs by simply putting all the code (including the implementation) inside the .h file, like this:
#include <math.h>
class QuadraticFunc{
public:
    double a,b,c;
    double eval ( double x ){ return c + x * (b + x * a ); }
    void solve( double y, double &x1, double &x2 ){
        double c_ = c - y;
        double D2 = b * b - 4 * a * c_;
        if( D2 > 0 ){
            double D = sqrt( D2 );
            double frac = 0.5/a;
            x1 = (-b-D)*frac;
            x2 = (-b+D)*frac;
        }else{ x1 = NAN; x2 = NAN; }
    }
};
with a universal default makefile like this:
FLAGS = -std=c++11 -Og -g -w
all:
	g++ main.cpp $(FLAGS) -o program.x
( main.cpp remains the same )
However, now that I'm starting to write more complex programs, the compile time is getting quite long because I have to recompile everything all the time.
Is there any way to get the advantages of make (faster compile times) while still organizing the program structure in the Java-like way (everything in the class body instead of separate .h and .cpp files), which I find much more convenient?

However, now that I'm starting to write more complex programs, the compile time is getting quite long because I have to recompile everything all the time.
One of the best points of separating the header and the class file is that you don't have to compile everything.
When you have class1.h, class1.cpp, class2.h, class2.cpp, ..., classN.h and classN.cpp, those headers are only included in the compiled objects of each class. So if your function's logic changes in class2 but its header doesn't, you only have to recompile class2 into an object file. Then you link all the object files, which produces your actual executable. Linking is FAST.
If you are building large and complex programs and find that editing the headers is the problem, consider DESIGNING your application before writing it.

Short answer: no.
Long answer: Still no.
Either you put all your code in a header file, or you use two files, where the header is included and the source file compiled on its own.
I personally have no problem with using two files. Most editors support a "two file view" - and most of them also support "jump to definition".
Putting all functions inside the class declaration also has another side effect: all those functions are implicitly inline, which can lead to the function being emitted multiple times in the output binaries if the same header is included in several source files.
In my experience, though, the compile time is dominated not by parsing but by the code-generation part of the compilation - which typically happens for the .cpp file - so including several large headers most likely won't matter that much.
Of course, use make [or something similar] with properly defined dependencies to build your project.
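A sketch of such a makefile, reusing the file names from the question above and letting the compiler generate the header dependencies itself (the -MMD/-MP flags), so the dependency lists never have to be edited by hand:

```make
FLAGS = -std=c++11 -Og -g -MMD -MP
SRCS  = main.cpp QuadraticFunc.cpp
OBJS  = $(SRCS:.cpp=.o)

program.x: $(OBJS)
	g++ $(OBJS) -o program.x

%.o: %.cpp
	g++ $(FLAGS) -c $< -o $@

# -MMD writes a .d file next to each object listing the headers it actually
# included; -include pulls those lists in on the next run, so touching a
# header rebuilds exactly the objects that depend on it.
-include $(OBJS:.o=.d)

clean:
	rm -f *.o *.d *.x
```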

C++ is C++ and Java is Java. Splitting your source code into .h and .cpp files is part of C++'s language concept. If you don't like it you shouldn't use it.
Putting everything in one header file is practically the same as including the .cpp file (which works, but is highly inappropriate).
You should not do this when:
writing standard classes, functions,...
using the code section multiple times in bigger programs (otherwise this leads to redefinition errors when you include everything in main.cpp)
you want to move part of your program into a static/dynamic library. Pretty much every available library is created that way.
Examples: WindowsAPI (COM!!), SFML, Boost (partly) (and a lot more)
You could do this when:
the code does very simple things, e.g. bit-shift operations (making color codes), string analysis, ...
Examples: Boost (partly)
You must do this when:
Creating template classes or functions, as they are instantiated at compile time. This is one of the major and most discussed downsides of the .h/.cpp concept, so you are not the first to wonder about it.
Examples: STL (C++ standard template library)

First of all, you should write the declaration and the implementation in separate files for readability. It is possible to put them into the same file, but it is not readable. When writing code, write for the next contributor who will need to understand it; that is important because the next one may be you :)
The second issue is the makefile. Make exists to make the compilation process easier, not faster. So if you make a change in any of your files, you don't need to change the makefiles; thanks to make, you don't need to compile your files in order again and again. Write the rules once, use them every time. But if you add a new code file that affects the compilation process, then yes, you do have to change the makefile. You can find more information about readability, writing declaration and implementation in separate files, and all other style issues in the Google C++ Style Guide, and about makefiles in the GNU Make manual.

Related

Generating multiple *.oct* files from a single *.cc* source file to interface a C library to Octave

I have a C library that I want to use from within Octave. Following the tutorial, it seems straightforward: wrap the functions in C++, then mkoctfile them. The problem is: what if I want to have multiple function definitions (wrappers) in a single source file?
In the mentioned tutorial it is stated
It should be noted that it is perfectly acceptable to have more than one DEFUN_DLD function in a source file. However, there must either be a symbolic link to the oct-file for each of the functions defined in the source code with the DEFUN_DLD macro or the autoload (Function Files) function should be used.
Then in the provided link:
Once Octave finds a file with a name that matches (the called function), the contents of the file are read. If it defines a single function, it is compiled and executed. See Script Files, for more information about how you can define more than one function in a single file.
In this second link, there is no info as to how to load a .oct file with multiple functions in it or how to generate multiple .oct files from a single source file. From what I've understood, the latter is the correct approach. How can I do that?
The point of the second link is that you don't load a .oct file with multiple functions in it - at least not from octave's perspective. That's what the symlinks are for - you have symbols A, B, and C in there? Make A.oct, B.oct, and C.oct symbolic links that point at that file and you can use them as if each contained only the symbol you care about.
If you have multiple function definitions in a single oct file, you use autoload(). So if you have foo.oct which has functions foo and bar, then you do:
autoload ("bar", "path-to-foo.oct");
I'll start by clarifying the second quote-window in your question. This is not referring specifically to .oct defined functions. What this is implying is the difference between a canonical m-file defined function, and 'on-the-spot' functions defined directly in the console or as part of a script.
As for the first quote-window, when it comes to functions that are defined in .oct files, the situation is different. What it's saying is that you can create an .oct file that defines many functions, but in order to call these functions, there needs to be a file by the same name in your path. So if an .oct file defines functions "foo" and "bar", you need to have one copy of the .oct file called "foo.oct", and another (or, more realistically, as symbolic link to the original) renamed as "bar.oct".
Similarly, you can also define a "foo.m" and "bar.m" file in your workspace, which only contains the documentation for those functions, such that if you then do "help foo" or "help bar" you get the intended documentation out.
Alternatively, you can use autoload, as carandraug suggested.
Another possibility to generate a C to Octave interface is using SWIG which can generate a single .oct file with all your functions. Refer to here when using pointers and arrays.
Here is an example:
header
/* File: example.h */
int fact(int n);
int fact2(int n1, int n2);
void add(int *x, int *y, int *r);
source
/* File: example.c */
#include "example.h"
int fact(int n) {
if (n < 0){ /* This should probably return an error, but this is simpler */
return 0;
}
if (n == 0) {
return 1;
}
else {
/* testing for overflow would be a good idea here */
return n * fact(n-1);
}
}
int fact2(int n1, int n2) {
return fact(n1)*fact(n2);
}
void add(int *x, int *y, int *r) {
*r = *x + *y;
}
interface
/* File example.i */
%module swigexample
%include "cpointer.i"
%{
#include "example.h"
%}
%pointer_functions(int, intp)
%include "example.h"
compile
swig -octave -o swigexample.cpp example.i
mkoctfile -I. -I/usr/include/octave-4.2.2/ -Iswiglib swigexample.cpp example.c
test
% File test.m
swigexample;
fact(5)
fact2(4,4)
% ==============
a = new_intp();
intp_assign(a, 37);
b = new_intp();
intp_assign(b, 22);
c = new_intp();
add(a,b,c);
r = intp_value(c);
delete_intp(a);
delete_intp(b);
delete_intp(c);
r

Load a dylib in Excel through VBA on a Mac

I am trying to expose a C++ DLL to Excel on a Mac. The DLL was written and compiled with Xcode 4 and I am using Excel 2011.
For simple functions, extern "C" does the work and I am able to use the dylib in Excel. Specifically, if the C++ code is something like
extern "C"
{
double sumC( double a, double b)
{
return a + b;
}
}
and the VBA code is:
Private Declare Function addFunction _
Lib "Macintosh HD:Users:SVM:Documents:Excel:lib:libTestDLLtoVBA.dylib" _
Alias "sumC" (ByVal a As Double, ByVal b As Double) As Double
Function Addition(a As Double, b As Double)
Addition = addFunction(a, b)
End Function
everything works fine. But, I am interested in exposing to Excel more complex code with classes defined in header files - as in the example below - and in that case Excel returns #VALUE!. My C++ code is something like this
header file:
#ifndef TestDLLtoVBA_TestFunction_h
#define TestDLLtoVBA_TestFunction_h
class AdditionVBATest{
public:
AdditionVBATest(){};
AdditionVBATest( double ){ m_AdditionResult = 0.0; }
~AdditionVBATest(){};
void setResult( double nAddition ){ m_AdditionResult = nAddition; }
double getResult(){ return m_AdditionResult; }
void addFunct( double x, double y, double &nResult );
double addFunct( double, double );
private:
double m_AdditionResult;
};
double addFunctionC( double a, double b);
#endif
cpp file:
#include <iostream>
#include "TestFunction.h"
void AdditionVBATest::addFunct(double x, double y, double &nResult)
{
nResult = 0.0;
nResult = x + y;
AdditionVBATest::setResult(nResult);
}
double AdditionVBATest::addFunct( double a, double b )
{
double nResult(0.0);
AdditionVBATest addCompute;
addCompute.AdditionVBATest::addFunct(a, b, nResult);
AdditionVBATest addResult;
return addResult.getResult();
}
And finally this is the file that contains the function I would like to expose to Excel:
#include <iostream>
#include "TestFunction.h"
extern "C"
{
double addFunctionC( double a, double b)
{
AdditionVBATest *resAddition;
double result(0.0);
result = resAddition->AdditionVBATest::addFunct(a, b);
return result;
}
}
I tried to use the same dylib in a C++ app and works fine, so I believe it is something related to exposing the library through VBA.
The VBA code I used is
Private Declare Function addFunction _
Lib "Macintosh HD:Users:SVM:Documents:Excel:lib:libTestDLLtoVBA.dylib" Alias "addFunctionC" _
(ByVal a As Double, ByVal b As Double) As Double
Function Addition(a As Double, b As Double)
Addition = addFunction(a, b)
End Function
Any help would be greatly appreciated.
This is of course not necessary but to simplify I will assume from now on that your header file and both source files are in the same directory, and that all future commands are executed in this directory. Then replace in both source files
#include "TestFunction.h"
with
#include "./TestFunction.h"
Then, compile as follows :
g++ -m32 -Wall -g -c ./TestFunction.cpp -o ./TestFunction.o
g++ -m32 -dynamiclib ./file.cpp ./TestFunction.o -o ./libTestDLLtoVBA.dylib
where
g++ is your C++ compiler. On a Mac it is probably clang's; on my computer it is gcc 5.2.0, but both should work fine
file.cpp is the source file containing the declaration of addFunctionC
the -m32 option asks for a 32-bit dylib, as Excel/VBA on Mac are 32-bit
Now run nm -gU libTestDLLtoVBA.dylib and you will see a line with _addFunctionC, showing that the function addFunctionC is indeed exported.
In the VBA, declare as follows :
Declare Function addFunctionC Lib "/Path/to/your/libTestDLLtoVBA.dylib" (ByVal x As Double, ByVal y As Double) As Double
Function Addition(a As Double, b As Double)
Addition = addFunctionC(a, b)
End Function
and it should work. This was a great source of inspiration.
Now, if you don't want to use the command line and want to use Xcode for instance, I would say that the only thing you have to take care of is to ensure that Xcode produces a 32-bit dylib, as Excel/VBA is 32-bit only on Mac OS X. I really think that this 32-bit issue is your problem.
You likely want to read up on COM automation objects. Sometimes known as "OLE" or "ActiveX". It's essentially a way to expose classes and objects written in C++ to other programming languages.
For scripting environments (VBA and Javascript), the traditional way to do this is by registering a COM object that exposes an IDispatch interface.
Some links to get you started:
https://msdn.microsoft.com/en-us/library/windows/desktop/ms221375(v=vs.85).aspx
https://msdn.microsoft.com/en-us/library/windows/desktop/ms221326(v=vs.85).aspx
Don't get hung up on the details of "controls" and embedded UI windows. Focus on getting a simple COM object class and interface declared in an IDL, generating a typelib, and implementing a C++ class that implements the interface and IDispatch. ATL (Active Template Library) makes this stuff easy.

kcachegrind: there is no source available for the following function

I am trying to view the annotated source using $ valgrind --tool=callgrind ./myProgram followed by $ kcachegrind using Ubuntu 12.04 (and I'm having the same problem with $ qcachegrind using Mac OSX).
The C++ script myProgram.cpp makes calls to functions that live in a .hpp file (via #include "../include/myHeader.hpp", etc.). I compile myProgram.cpp like this:
g++ -g -o myProgram myProgram.o -l<some third party lib>
where I don't care about viewing annotated source for that third party lib.
What I would like to see is annotated source for the functions in myHeader.hpp and for myProgram.cpp.
Instead, I see kcachegrind's Flat Profile window with a list of all the functions that get called, including the functions in myHeader.hpp - this is great. Now, kcachegrind reports the location of functions from myHeader.hpp as being from myProgram - this is odd. And finally, when I select any function from the Flat Profile window and request to see Source Code, I am met with:
There is no source available for the following function
<name of the selected function>
This is because no debug information is present.
Recompile the source and redo the profile run.
The function is located in the ELF object:
<some location...>
What I've tried:
added the directory holding myHeader.hpp to the Annotations list using kcachegrind's GUI.
compiled using -O0 to remove compiler optimizations
I'm answering my own question thanks to user n.m. - I discovered this while running a simplified example. The problem was with my compilation instruction: I was compiling to an object file with -g rather than compiling to an executable with -g.
Here's a working example for how to get kcachegrind to show annotated source:
main.cpp lives in directory someDirectory/example
// main.cpp
#include <iostream>
#include <math.h>
#include "../include/header.hpp"
using namespace std;
int main() {
double a=1.0; double b=4.0;
double tol = 1E-10;
double zero = -99;
if (sin(a)*sin(b) < 0 && (b-a) >= tol)
zero = bisect_sine(a,b,tol);
cout << zero << endl;
return 0;
}
Header file header.hpp lives in someDirectory/include
// header.hpp
#include <math.h>
#include <iostream>
using namespace std;
double bisect_sine(double a, double b, double tol) {
double c;
int step = 0; int maxsteps = 100;
while (step < maxsteps) {
c = (a+b)/2.0;
if (sin(c) == 0 || (b-a)/2 < tol)
return c;
if (sin(a)*sin(c) >= 0)
a = c;
else
b = c;
step+=1;
}
return c; // reached only if maxsteps is exceeded; avoids falling off the end of a non-void function
}
Makefile
# Makefile
CXX = g++
main:
$(CXX) -g -o main main.cpp
chmod 700 main
clean:
rm main
After all of this, simply run make (yielding the executable main that was compiled with debugging -g), followed by valgrind --tool=callgrind ./main. This will produce the expected callgrind.out.<PID> file, which can be read by kcachegrind. Source annotation will then be available for the main() function of main.cpp as well as for bisect_sine() from the header file.
So, this turned out to be a compilation issue. If I understood more about compilation into executables, object files, shared objects, yada yada yada, I would not have gotten into this mess.

How modularization works in C++

Currently, I'm learning C++ and have a question about the modularization process. Suppose I want to write a function to add two or three numbers. For that purpose I've written the following header file:
// Sum2.hpp
//
// Preprocessor directives; ensures that we do not include a file twice
// (gives compiler error if you do so)
#ifndef Sum2_HPP
#define Sum2_HPP
/////////// Useful functions //////////////////
// Sum of two numbers
double Sum2(double x, double y);
// Sum of three numbers
double Sum2(double x, double y, double z);
////////////////////////////////////////////////
#endif
This is just declaration. In a separate file, I specify the functions:
// Sum2.cpp
// Code file containing bodies of functions
//
#include "Sum2.hpp"
/////////// Useful functions //////////////////
// Sum of two numbers
double Sum2(double x, double y)
{
return x+y;
}
// Sum of three numbers
double Sum2(double x, double y, double z)
{
return Sum2(Sum2(x,y),z);
}
And then, in the main programm I want to use these functions:
// main.cpp
#include <iostream>
#include "Sum2.hpp"
int main()
{
double d1;
double d2;
double d3;
std::cout<<"Give the first number ";
std::cin>> d1;
std::cout<<"Give the second number ";
std::cin>> d2;
std::cout<<"Give the third number ";
std::cin>> d3;
std::cout<<"The sum is: "<<Sum2(d1,d2);
std::cout<<"The sum is: "<<Sum2(d1,d2,d3);
return 0;
}
I used g++ -c Sum2.cpp to generate the object code Sum2.o. Why is there a reference error when I compile and create an executable from the main code, i.e. g++ -o main main.cpp?
It works when I compile both at the same time, i.e. g++ -o main main.cpp Sum2.cpp. I thought that by creating the object code Sum2.o and including the header file in main.cpp, the compiler would automatically recognize the object code. Why is this not working?
// Preprocessor directives; ensures that we do not include a file twice
// (gives compiler error if you do so)
No actually, it won't give a compiler error. It just won't do anything.
As for your actual question: C++, unlike some other languages, won't try to find your object files for you. You have to tell the compiler where they are. For this application you should really compile it like so:
g++ -c main.cpp
g++ -c Sum2.cpp
g++ -o main main.o Sum2.o
The first two commands actually compile the code. The third links the object files together to produce the executable. If you execute
g++ -o main main.cpp Sum2.cpp
the compiler will automatically run both steps for you. That works for a small project, but for larger projects you don't want to run all the steps unless something has changed.
Now, you may think that's a pain. You'd be right. That's why there are various tools like CMake, SCons, Jam, and Boost.Build, which are designed to make it easier to build C++ projects.
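For illustration, a minimal CMakeLists.txt for the Sum2 example from this question (a sketch; CMake then tracks the header dependencies and recompiles only what changed):

```cmake
cmake_minimum_required(VERSION 3.10)
project(sum2 CXX)

# One target built from both translation units; editing Sum2.cpp alone
# triggers recompilation of only Sum2.cpp followed by a relink.
add_executable(main main.cpp Sum2.cpp)
```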

How to auto-include all headers in directory

I'm going through exercises of a C++ book. For each exercise I want to minimize the boilerplate code I have to write. I've set up my project a certain way but it doesn't seem right, and requires too many changes.
Right now I have a single main.cpp file with the following:
#include "e0614.h"
int main()
{
E0614 ex;
ex.solve();
}
Each time I create a new class for an exercise, I have to modify this file to change the name of the included header as well as the class I'm instantiating.
So my questions are:
Can I include all headers in the directory so at least I don't have to change the #include line?
Better yet, can I rewrite my solution so that I don't even have to touch main.cpp, without having one file with all the code for every exercise in it?
Update:
I ended up following Poita_'s advice to generate main.cpp via a script.
Since I'm using an IDE (Visual Studio), I wanted this integrated with it, so I did a bit of research on how. For those interested, read on (it was fairly, but not entirely, straightforward).
Visual Studio lets you use an external tool via the Tools -> External Tools menu, and contains a bunch of pre-defined variables, such as $(ItemFileName), which can be passed on to the tool. So in this instance I used a simple batch file, and it gets passed the name of the currently selected file in Visual Studio.
To add that tool to the toolbar, right click on the toolbar, select Customize -> Commands -> Tools, select the "External Command X" entry, and drag it to the toolbar. Substitute X with the number corresponding to the tool you created. My installation contained 5 pre-existing tools listed in Tools -> External Tools, so the one I created was tool number 6. You have to figure out this number, as it is not shown. You can then assign an icon to the shortcut (the BuildMain command).
No. You have to include them all if that's what you want to do.
No. At least, not in a way that's actually going to save typing.
Of course, you could write a script to create main.cpp for you...
If you build your code using make, you should be able to do this.
Can I include all headers in the directory so at least I don't have to change the #include line?
Change your include line to something like #include "all_headers.h". Now you can let your Makefile auto-generate all_headers.h with a target like:
all_headers.h:
	rm -f all_headers.h
	for i in `ls *.h | grep -v all_headers.h`; do echo "#include \"$$i\"" >> all_headers.h; done
Make sure that all_headers.h gets deleted when you 'make clean'.
Better yet, can I rewrite my solution so that I don't even have to touch main.cpp,
without having one file with all the code for every exercise in it?
You can do this if you abstract away your class with a typedef. In your example, change your class name from E0614 to myClass (or something). Now add a line to your Makefile underneath the for loop above that says echo "typedef $$MY_TYPE myClass;" >> all_headers.h. When you build your program, invoke make with something like make MY_TYPE=E0614, and the typedef will be automatically filled in with the class you want to test.
If you're on Unix system, you can have a softlink that points to the latest excercise.
ln -s e0615.h latest.h
and name your class E instead of E0614, of course
P.S. To the best of my knowledge, you can't do #include xxx*
Don't use one main.cpp which you modify for each exercise. This solution makes use of make's builtin rules, so you only have to type make e0614 and it will generate e0614.cpp, compile, and link it. You can customize each .cpp file (they won't be regenerated as written below) and maintain all of that history to refer to as you complete exercises, rather than erasing it as you move from one to the next. (You should also use source control, such as Mercurial.)
Makefile
e%.cpp:
	./gen_ex_cpp $@ > $@
You can generate boilerplate code with scripts, because you don't want it to be tedious either. There are several options for these scripts—and I use a variety of languages including C++, Python, and shell—but the Python below is short and should be simple and clear enough here.
Sample generate script
#!/usr/bin/python
import sys
args = sys.argv[1:]
if not args:
sys.exit("expected filename")
name = args.pop(0).partition(".")[0]
if args:
sys.exit("unexpected args")
upper_name = name.upper()
print """
#include "%(name)s.hpp"
int main() {
%(upper_name)s ex;
ex.solve();
return 0;
}
""" % locals()
Make a master include file containing the names of all the headers you want.
It's a really bad idea to include *, even if you could.
You could use conditional compilation for the class name by using a macro.
// Somewhere in your other files
#define CLASS_NUMBER E0614
// in main.cpp
#define ENTERCLASSNUMBER(num) \
    num ex;
// in main()
ENTERCLASSNUMBER(CLASS_NUMBER)
Don't know about the includes though. As suggested above, a script might be the best option.
writing a makefile rule to pass the name of the executable as a -DHEADERFILE=something parameter to the compiler shouldn't be difficult at all. Something like:
%.exe : %.h %.cpp main.cpp
	gcc -o $@ -DHEADER_FILE=$(<F) $(filter %.cpp,$^)
OTOH, I don't know if #include does macro expansion on the filename.
sed -i 's/\<\([eE]\)[0-9]\+\>/\19999/' main.cpp
Replace 9999 with the required number. There might be better ways.
Why not use object mechanisms?
You can use an Exemplar strategy for this.
class BaseExercise
{
public:
    static bool Add(BaseExercise* b) { Collection().push_back(b); return true; }
    static size_t Solve() {
        size_t nbErrors = 0;
        for(collection_type::const_iterator it = Collection().begin(), end = Collection().end(); it != end; ++it)
            nbErrors += (*it)->solve();
        return nbErrors;
    }
    size_t solve() const
    {
        try {
            this->solveImpl();
            return 0;
        } catch(std::exception& e) {
            std::cout << mName << " - end - " << e.what() << std::endl;
            return 1;
        }
    }
protected:
    explicit BaseExercise(const char* name): mName(name)
    {
    }
private:
    typedef std::vector<BaseExercise*> collection_type;
    static collection_type& Collection() { static collection_type MCollection; return MCollection; }
    virtual void solveImpl() const = 0;
    const char* mName;
}; // class BaseExercise
template <class T>
class BaseExerciseT: public BaseExercise
{
protected:
explicit BaseExerciseT(const char* b): BaseExercise(b) {
static bool MRegistered = BaseExercise::Add(this);
}
};
Okay, that's the base.
// Exercise007.h
#include "baseExercise.h"
class Exercise007: public BaseExerciseT<Exercise007>
{
public:
Exercise007(): BaseExerciseT<Exercise007>("Exercise007") {}
private:
virtual void solveImpl() const { ... }
};
// Exercise007.cpp
Exercise007 gExemplar007;
And for main
// main.cpp
#include "baseExercise.h"
int main(int argc, char* argv[])
{
size_t nbErrors = BaseExercise::Solve();
if (nbErrors) std::cout << nbErrors << " errors" << std::endl;
return nbErrors;
}
And here, you don't need any script ;)
try this:
#ifndef a_h
#define a_h
#include <iostream>
#include <conio.h>
// ...include as many as you like
class a{
    void f1(); // leave the body out; define it in the .cpp file
    int d;
};
#endif // save this as a.h
Later, include it in your main program (the .cpp file):
#include "a.h"
// ...your program