When I generate a preprocessed .i output file from a C or C++ source file using the /P command-line option in any version of the Microsoft compiler, the resulting code sometimes does not compile at all, even though the original C/C++ source compiled fine. For example, when I compile the following code directly there are no problems:
#define VALUE -1
int main(void)
{
    int x = -VALUE;
    return 0;
}
However, the preprocessor output file "equivalent" is the following, which is obviously not equivalent and will not compile:
int main(void)
{
    int x = --1;
    return 0;
}
So my questions are:
Is there some other Microsoft option I should be using to get a .i file that is always a correct representation of the original?
Are there any compilers whose preprocessor output files are always a correct representation of the original?
If I can't use Microsoft for this, my second choice would be MinGW, and then Clang for Windows. I just want something that produces a correct .i file.
Thanks
I want to observe the difference in binary opcode output between two compiled versions of a very basic C++ program, for example 2 + 2 = ?, with no libraries called. Being new to compiled programs, I expected the compiled output to be a tiny file of binary opcodes with a few small headers, but instead there are large headers.
simple.cpp
int main()
{
    unsigned int a = 2;
    unsigned int b = 2;
    unsigned int c = a + b;
}
compiler:
g++ -std=c++0x simple.cpp -o simple
Is there a format that I can export to that doesn't contain headers, just the binary opcodes that we instruct the machine to execute? If not, what bytes or locations in the resulting file can I look at to isolate the relevant logic of the program?
I need the machine code, not assembly, since my project is the analysis of differently obfuscated versions of a source file, attempting to recognize one from the other. A complex subject of questionable feasibility, but that's why I'm asking how to isolate the machine code rather than just the assembly: to test the analysis against the true machine-code outputs.
I tried googling the header structure but can't seem to find much info.
Looking at ld(1): GNU linker - Linux man page, you will find that you can use the --oformat=output-format option to specify the output format; binary is a format that doesn't have headers.
Then, looking at gcc(1): GNU project C/C++ compiler - Linux man page, you will find that you can use the -Wl option to pass options through to the linker.
The -nostdlib option is also useful to avoid extra things being added.
Combining these, you can try this command:
g++ -std=c++0x simple.cpp -nostdlib -Wl,--oformat=binary -o simple
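An alternative that avoids relinking is to keep the normal build and extract only the instruction bytes afterwards with objcopy (a sketch assuming GNU binutils is installed; the file names are illustrative):

```shell
# Recreate the example source, compile it to an object file, then dump
# only the .text section (the raw machine code) with no ELF headers.
cat > simple.cpp <<'EOF'
int main()
{
    unsigned int a = 2;
    unsigned int b = 2;
    unsigned int c = a + b;
}
EOF
g++ -std=c++0x -c simple.cpp -o simple.o
objcopy -O binary --only-section=.text simple.o simple.bin
# simple.bin now contains only the opcode bytes of main;
# objdump -d simple.o shows the same bytes next to their mnemonics.
```

For comparing two obfuscated builds, extracting .text this way gives you directly diffable byte streams without any container-format noise.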
I am not able to get mkoctfile to successfully create an .oct file that is a wrapper of some C++ function of mine (e.g. void my_fun(double*,double)). My problem arises from the fact that the wrapper code my_fun_wrap.cpp requires the inclusion of <octave/oct.h>, which only provides C++ headers (see here), but the original code of my_fun also uses source code written in C. E.g.:
// my_fun_wrapper.cpp
#include <octave/oct.h>
#include "custom_functions_libc.h"
DEFUN_DLD(my_fun_wrapper, args, , "EI MF network model A with delays (Brunel, JCN 2000)"){
    // Input arguments
    NDArray xvar = args(0).array_value();
    double x = xvar(0);
    // Output arguments
    double dy[4];
    dim_vector dv(4, 1);
    NDArray dxvars(dv);
    // Invoke my C function, which also includes code in the lib file custom_functions_libc.c
    my_fun(dy, x);
    // Then assign output values to the NDArray
    for (int i = 0; i < 4; i++) dxvars(i) = dy[i];
    // Cast output as octave_value as required by the Octave guidelines
    return octave_value(dxvars);
}
Then suppose that my custom_functions_libc.h and custom_functions_libc.c files are somewhere in a folder <path_to_folder>/my_libs. Ideally, from the Octave command line I would compile the above with:
mkoctfile -g -v -O -I<path_to_folder>/my_libs <path_to_folder>/my_libs/custom_functions_libc.c my_fun_wrapper.cpp -output my_fun_wrapper -lm -lgsl -lgslcblas
This actually generates my_fun_wrapper.oct as required. I can then call the latter from within some Octave code, e.g.:
...
...
xx = [0., 2.5, 1.];
yy = [1e-5, 0.1, 2.];
dxv = test_my_function(xx,yy);
function dy = test_my_function(xx,yy)
    xx += yy**2;
    dy = my_fun_wrapper(xx);
endfunction
It turns out that the above code exits with an error in test_my_function saying that within my_fun_wrapper the symbol Zmy_fundd is not recognized. On receiving this kind of error I suspected that something went wrong in the linking process. But strangely enough, the compiler did not produce any error, as I said. Yet a closer inspection of the compiler's verbose output revealed that mkoctfile automatically switches compilers between files depending on their extension: my_fun_wrapper.cpp is compiled by g++ -std=gnu++11, but custom_function_libc.c is compiled by gcc -std=gnu11, and the custom_function_libc.o resulting from this compilation, when linked with my_fun_wrapper.o, leaves those symbols unresolved.
The example above is very simplistic. In practice, in my case custom_function_libc includes many more custom C libraries. A workaround so far has been to clone the .c source files of those libraries into .cpp files, but I do not like that solution very much.
How can I safely mix C++ and C code and compile it successfully with mkoctfile? The Octave manual suggests wrapping the include in an extern "C" specification (see here), which I am afraid I am not very familiar with. Is this the best way, or could you suggest an alternative solution?
So apparently the easiest solution, following the suggestion in my post above, is to correct the wrapper with the following preprocessor directives:
// my_fun_wrapper.cpp
#include <octave/oct.h>
// ADDED code to include the C source code
#ifdef __cplusplus
extern "C"
{
#endif
// END ADDITION
#include "custom_functions_libc.h"
// ADDED code to include the C source code
#ifdef __cplusplus
} /* end extern "C" */
#endif
// END ADDITION
...
...
This will compile and link fine.
I am trying to build node.js under Eclipse (I want to use an IDE to step through the internals of node, so I can answer some questions). I am getting a compilation error I don't understand. Below are the two relevant lines from the source:
static uint64_t counter_gc_start_time;
counter_gc_start_time = NODE_COUNT_GET_GC_RAWTIME();
I replaced it with the (manually expanded) macro, thus:
counter_gc_start_time = (do { } while (false));
But I still get a compilation error:
/Users/concunningham/Documents/Node/node/src/node_counters.cc:81:30: error: expected expression
counter_gc_start_time = (do { } while (false));
I am compiling under OS X 10.13.4, using the compiler flag -std=c++11.
Can anyone tell me what this line of code is supposed to do ?
If you look at node_counters.h
#ifdef HAVE_PERFCTR
#include "node_win32_perfctr_provider.h"
#else
...
#define NODE_COUNT_GET_GC_RAWTIME() do { } while (false)
#endif
When HAVE_PERFCTR is defined, node_win32_perfctr_provider.h is included instead of that #define, which is what fails to compile here. The definition of NODE_COUNT_GET_GC_RAWTIME() is in node_win32_perfctr_provider.cc.
I don't know this library; this is just what I see by looking at the files. Where and when HAVE_PERFCTR gets defined is beyond what I searched, but if you have the lib on your machine, the answer is there. I'd have to download it to know more. As jbp points out, this looks like some kind of Windows thing.
Can we write a program in C++ that compiles a C++ source file using a compiler?
For example, we have a program that takes a file name and then compiles it:
Enter your C++ source code file name : cppSource.cpp
your program compiled.
output: cppSource.exe
Or:
Enter your C++ source code file name : cppSource.cpp
Sorry. There is no C++ Compiler in your computer.
I do not mean that we should write a compiler. I mean writing a program that compiles the .cpp file using an existing compiler.
How can we access a compiler, and how can we detect whether a compiler is installed on a given computer?
Use the system function to call any executable, for example the g++ compiler:
#include <stdlib.h>
int retval = system("g++ file.cpp");
The return value of system can be checked; it will be the return value of the called executable if the shell was able to execute it. In this case it will usually be 0 if a compiler exists and the code compiled successfully.
Alternatively (and also to prevent the called program's output from being displayed), for additional details you could redirect the program's output to a file and then open that file and parse its content.
int retval = system("g++ file.cpp > output.txt");
I use legacy C code in my current C++ project by including the external headers:
extern "C" {
# include "ANN/ANN_4t70P1.h"
# include "ANN/ANN_4t70P2.h"
# include "ANN/ANN_4t70P3.h"
# include "ANN/ANN_4t70P4.h"
}
The header files look like this:
extern int ANN_4t70P1(float *in, float *out, int init);
static struct {
    int NoOfInput;                          /* Number of Input Units  */
    int NoOfOutput;                         /* Number of Output Units */
    int (*propFunc)(float *, float *, int);
} ANN_4t70P1REC = {8, 3, ANN_4t70P1};
The C code is created by an ancient batch file and cannot be compiled using C++ compilers. Nevertheless, this setup works fine on Windows and Mac OS. However, when I compile the code using gcc and g++ on Linux and run the application, ANN_4t70P1REC returns incorrect values.
Are there any special linker flags that I missed out when linking the project?
Thanks!
What do you mean by:
The C-Code is created by an ancient batch-file and cannot be compiled
using C++ compilers
Are you linking using object files generated by different compilers?
If so, try to inspect your object files with:
readelf -h <objectname>
Check whether there is a different ABI. If the code was generated by a very old GCC (< 3.3/3.4), you can have problems linking with newer versions.
Are you sure you don't have any warnings during the link?