Get the value of a C constant - C++

I have a .h file in which hundreds of constants are defined as macros:
#define C_CONST_NAME Value
What I need is a function that can dynamically get the value of one of these constants.
The function header I need:
int getConstValue(char * constName);
Is that even possible in the C language?
---- EDIT
Thanks for the help, that was quick :)
As I suspected, there is no miracle solution for my needs.
In fact, the header file I use is generated by SCADE (http://www.esterel-technologies.com/products/scade-suite/).
One of the solutions I got, from Chris, is to use some Python to generate C code that does the work.
Now it is up to me to optimize how the constant name is looked up: with more than 5000 constants, a naive string comparison against every name gets expensive.
I'm also looking at the "X-Macros"; it's the first time I've heard of them. I hope they work in C, because I'm not allowed to use C++.
Thanks

C can't do this for you. You will need to store them in a different structure, or use a preprocessor to build the hundreds of if statements you would need. Something like Cogflect could help.

Here you go. You will need to add a line for each new constant, but it should give you an idea about how macros work:
#include <stdio.h>
#include <string.h>   /* for strcmp() */

#define C_TEN    10
#define C_TWENTY 20
#define C_THIRTY 30

/* Compare the string against the stringized macro name;
   if it matches, return the macro's (expanded) value. */
#define IFCONST(charstar, define) if(strcmp((charstar), #define) == 0) { \
    return (define); \
}

int getConstValue(const char* constName)
{
    IFCONST(constName, C_TEN);
    IFCONST(constName, C_TWENTY);
    IFCONST(constName, C_THIRTY);

    // No match
    return -1;
}

int main(int argc, char **argv)
{
    printf("C_TEN is %d\n", getConstValue("C_TEN"));
    return 0;
}
I suggest you run gcc -E filename.c to see what gcc does with this code.

A C preprocessor macro (that is, something named by a #define directive) ceases to exist after preprocessing completes. A program has no knowledge of the names of those macros, nor any way to refer back to them.
If you tell us what task you're trying to perform, we may be able to suggest an alternate approach.

This is what X-Macros are used for:
https://secure.wikimedia.org/wikipedia/en/wiki/C_preprocessor#X-Macros
But if you need to map a string to a constant, you will have to search for the string in the array of string representations: a linear O(n) scan per lookup unless you sort or hash the table.
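For illustration, here is a minimal sketch of the X-macro technique applied to this problem (the constant names and values are made up); it compiles as plain C:
#include <stdio.h>
#include <string.h>

/* The single list of name/value pairs; only this table needs editing. */
#define CONSTANT_LIST \
    X(C_TEN,    10)   \
    X(C_TWENTY, 20)   \
    X(C_THIRTY, 30)

/* Expand the list into a name -> value lookup table. */
static const struct { const char *name; int value; } constants[] = {
#define X(name, value) { #name, value },
    CONSTANT_LIST
#undef X
};

/* Linear scan; sort the table and bsearch() it if it gets large. */
int getConstValue(const char *constName)
{
    size_t i;
    for (i = 0; i < sizeof constants / sizeof constants[0]; ++i)
        if (strcmp(constants[i].name, constName) == 0)
            return constants[i].value;
    return -1; /* no match */
}

int main(void)
{
    printf("C_TWENTY is %d\n", getConstValue("C_TWENTY"));
    return 0;
}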

You can probably do this with gperf, which generates a lookup function that uses a perfect hash function.
Create a file similar to the following and run gperf with the -t option:
struct constant { char *name; int value; };
%%
C_CONST_NAME1, 1
C_CONST_NAME2, 2
gperf will output C (or C++) code that does the lookup in constant time, returning a pointer to the key/value pair, or NULL.
If you find that your keyword set is too large for gperf, consider using cmph instead.
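To sketch how the generated code might be used (assuming gperf's default lookup function name, in_word_set, which can be renamed with the -N option; copy the exact prototype from the file gperf emits):
#include <string.h>

/* Matches the struct given in the gperf input above. */
struct constant { char *name; int value; };

/* Declared here for illustration; the gperf-generated .c file defines it. */
struct constant *in_word_set(const char *str, size_t len);

int getConstValue(const char *constName)
{
    struct constant *c = in_word_set(constName, strlen(constName));
    return c ? c->value : -1;   /* NULL means "not a known constant" */
}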

There's no such capability built into C. However, you can use a tool such as doxygen to extract all #defines from your source code into a data structure that can be read at runtime (doxygen can store all macro definitions to XML).


C++ __TIME__ is different when called from different files

I encountered this strange thing while playing around with predefined macros.
So basically, when I use __TIME__ from different files, I get different time strings.
Is there any way I can fix this? Or why does this happen?
All I am doing is printf("%s\n", __TIME__); from different functions in different source files.
Or why does this happen?
From the docs:
This macro expands to a string constant that describes the time at which the preprocessor is being run.
If source files are compiled at different times, then the time will be different.
Is there any way I can fix this?
You could use a command line tool to generate the time string, and pass the string as a macro definition to the compiler. That way the time will be the same for all files compiled by that command.
To answer your original question: __TIME__ is going to be different for different files because it specifies the time when that specific file was compiled.
However, this is an X-Y problem. To address what you're actually trying to do:
If you need a compilation-time value, you're better off letting your build system specify it. That is, with make or whatever you're using, generate a random seed somehow, then pass that to the compiler as a command-line option to define your own preprocessor macro (e.g. gcc -DMY_SEED=$(random_value) ...). Then you could apply that to all C files that you compile and have each of them use MY_SEED however you want.
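As a minimal sketch of that approach (MY_SEED and the way its value is produced are illustrative, not anything your build system mandates):
/* seed_demo.c
 * Build with something like:
 *   cc -DMY_SEED=12345 seed_demo.c other_file.c -o seed_demo
 * Every translation unit compiled in that invocation sees the same MY_SEED.
 */
#include <stdio.h>
#include <stdlib.h>

#ifndef MY_SEED
#define MY_SEED 0   /* fallback so the file still builds without -D */
#endif

int main(void)
{
    srand(MY_SEED);                 /* identical seed in every file that uses it */
    printf("seed = %d, first value = %d\n", MY_SEED, rand());
    return 0;
}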
Well, I think your use case is kind of weird, but a simple way to get the same time in all files is to use __TIME__ in exactly one source file, and use it to initialize a global variable:
compilation_time.h:
extern const char *compilation_time;
compilation_time.c:
#include "compilation_time.h"
const char *compilation_time = __TIME__;
more_code.c:
#include "compilation_time.h"
...
printf("%s\n", compilation_time);
If you really want to construct an integer as in your comment (which may be non-portable as it assumes ASCII), you could do
seed.h:
extern const int seed;
seed.c:
#include "seed.h"
const int seed = (__TIME__[0] - '0') + ...;
more_code.c:
#include "compilation_time.h"
...
srand(seed);
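For completeness, here is one way to fold the "hh:mm:ss" string that __TIME__ expands to into a single integer, e.g. in seed.c (a sketch; any mixing of the digits works). This is fine as a file-scope initializer in C++, which is the language of the question, but a strict C compiler may reject subscripting a string literal in a static initializer:
#include "seed.h"

/* __TIME__ expands to "hh:mm:ss"; convert it to seconds since midnight. */
const int seed = ((__TIME__[0] - '0') * 10 + (__TIME__[1] - '0')) * 3600 +
                 ((__TIME__[3] - '0') * 10 + (__TIME__[4] - '0')) * 60 +
                 ((__TIME__[6] - '0') * 10 + (__TIME__[7] - '0'));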

C/C++ preprocessor directive for handling compilation errors

The title might be somewhat confusing, so I'll try to explain.
Is there a preprocessor directive with which I can encapsulate a piece of code, so that if that piece of code contains a compilation error, some other piece of code is compiled instead?
Here is an example to illustrate my motivation:
#compile_if_ok
int a = 5;
a += 6;
int b = 7;
b += 8;
#else
int a = 5;
int b = 7;
a += 6;
b += 8;
#endif
The above example is not the problem I am dealing with, so please do not suggest specific solutions.
UPDATE:
Thank you for all the negative comments down there.
Here is the exact problem, perhaps someone with a little less negative approach will have an answer:
I'm trying to decide during compile-time whether some variable a is an array or a pointer.
I've figured I can use the fact that, unlike a pointer, an array cannot be assigned to (it is not a modifiable lvalue).
So in essence, the following code would yield a compilation error for an array but not for a pointer:
int a[10];
a = (int*)5;
Can I somehow "leverage" this compilation error in order to determine that a is an array and not a pointer, without stopping the compilation process?
Thanks
No.
It's not uncommon for large C++ (and other-language) projects to have a "configuration" stage designed into their build system: it attempts compilation of different snippets of code and generates a set of preprocessor definitions indicating which ones worked, so that the compilation of the project proper can use those definitions in #ifdef/#else/#endif directives to select between alternatives. For many UNIX/Linux software packages, running the "./configure" script coordinates this. You can read about the autoconf tool that helps create such scripts at http://www.gnu.org/software/autoconf/
This is not supported in standard C. However, many command shells make this fairly simple. For example, in bash, you can write a script such as:
#!/bin/bash
# Try to compile the program with Code0 defined.
if cc -o program -DCode0= "$@"; then
    # That worked, do nothing extra. (Need some command here due to bash syntax.)
    /bin/true
else
    # The first compilation failed, try again without Code0 defined.
    cc -o program "$@"
fi
./program
Then your source code can test whether Code0 is defined:
#if defined Code0
foo bar;
#else
#include <stdio.h>
int main(void)
{
    printf("Hello, world.\n");
    return 0;
}
#endif
However, there are usually better ways to, in effect, make source code responsive to the environment or the target platform.
On the updated question:
If you're writing C++, use templates...
Specifically, to test the type of a variable you have helpers: std::enable_if, std::is_same, std::is_pointer, etc.
See the type support module : http://en.cppreference.com/w/cpp/types
C11 _Generic macros might be able to handle this. If not, though, you're screwed in C.
Not in the C++ preprocessor. In C++ you can easily use overload resolution or a template or even expression SFINAE or anything like that to execute a different function depending on whether a is an array or not. That is still occurring after preprocessing, though.
If you need one that is both valid C and valid C++, the best you can do is #ifdef __cplusplus and handle it that way. Their common subset (which is mostly C89) definitely does not have something that can handle this at any stage of compilation.
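As a minimal C++ sketch of the type-trait approach mentioned above (requires C++17 for if constexpr; the names are illustrative):
#include <iostream>
#include <type_traits>

// Reports, at compile time, whether the argument's declared type is an array or a pointer.
template <typename T>
void describe(const T&)
{
    if constexpr (std::is_array<T>::value)
        std::cout << "array of " << std::extent<T>::value << " elements\n";
    else if constexpr (std::is_pointer<T>::value)
        std::cout << "pointer\n";
    else
        std::cout << "something else\n";
}

int main()
{
    int a[10];
    int *p = a;
    describe(a);   // prints "array of 10 elements"
    describe(p);   // prints "pointer"
    return 0;
}
The branch not taken is discarded at compile time, so the check never stops the build, which is what the updated question asks for.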

Is it possible to make the execution of a program skip fprintf-statements/How to create my own fprintf-function?

In my C++-code there are several fprintf-statements, which I have used for debugging. Since I might need them again, I would prefer not to comment them out for the moment.
However, I need the execution of the program to be fast, so I would like to avoid them being printed out, as they are for the moment (I redirected stderr to a file).
Preferably this would be determined by the user passing an argument to the program, which I would extract like this:
int main(int argc, char *argv[])
{
    int isPrint = 0;
    if (argc > 1) {
        isPrint = atoi(argv[1]);
    }
}
I thought of renaming fprintf to another name, and then from that function doing an fprintf call with the same parameters, based on the value of isPrint; however, I then realized that fprintf can take many different kinds of arguments, and a variable number of them, and that I don't know a generic way of declaring my own function with those requirements.
So I wonder how to create a function which works exactly like fprintf but takes the extra parameter isPrint, or how to solve the above problem in another way.
Complementary information after first post:
One solution would be to add this before each fprintf-statement:
if (isPrint == true )
The typical approach is to use the preprocessor to compile away the calls to fprintf().
You would do something like this:
#if defined DEBUG
#define LOG(a) fprintf a
#else
#define LOG(a)
#endif
And in the code you would do:
LOG(("The value is %f", some_variable));
Note the double parenthesis, that's just to make the syntax work. You can do it nicer, but this is simpler to explain.
Now, you would either just edit the code to #define or #undef the DEBUG preprocessor symbol at the top of the file, or pass suitable options to the compiler (-D for GCC).
First note that if this is just for debugging, I'd agree that the typical way is to use macros or preprocessor defines to tell the compiler to include logging or not.
However, if you don't want it removed entirely by the compiler (so that you can turn the printing on or off with an argument), you could write your own log function that takes isPrint and some string, and then use snprintf() to format the string before you call it.
Something along these lines:
void myLog(int isPrint, const char *message)
{
    if (isPrint == 1)
    {
        fprintf(logFile, "%s", message);   // logFile: the FILE* you log to, opened elsewhere
    }
}

char msg[64];
snprintf(msg, 64, "Test Message %d", 10);
myLog(isPrint, msg);
It may also be possible to wrap fprintf() in your own varargs function, but that would be more complicated.
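For completeness, here is a sketch of such a wrapper (the function name and parameters are illustrative); it forwards everything to vfprintf() and returns immediately when printing is disabled:
#include <cstdarg>
#include <cstdio>

// A variadic wrapper around fprintf(): prints only when isPrint is non-zero.
void myLogf(int isPrint, std::FILE *stream, const char *format, ...)
{
    if (!isPrint)
        return;
    std::va_list args;
    va_start(args, format);
    std::vfprintf(stream, format, args);
    va_end(args);
}

// Usage:
//   myLogf(isPrint, stderr, "The value is %f\n", some_variable);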
For debugging purpose you can use the variable argument macro:
#ifdef DEBUG
#define FPRINTF(...) fprintf(__VA_ARGS__)
#else
#define FPRINTF(...)
#endif
Be aware that if you name your macro fprintf itself, rather than FPRINTF, you are redefining the name of a library function, so the #define must appear after the #include of the header that declares it.
It depends how much flexibilty you've got in changing the code and whether you want to be able to switch this off at runtime or just compile time.
I'd suggest you wrap it in your own variadic function (for tips look here) and then you've encapsulated the functionality.
Your function will essentially be just a thin wrapper round fprintf() but at this point you can then either use the preprocessor to ensure that your logging function does nothing if you compile it out, or you can do an integer comparison with, say, a logging level at runtime so that the underlying fprintf() only gets called if your debugging level is high enough.

Why use #define instead of a variable

What is the point of #define in C++? I've only seen examples where it's used in place of a "magic number" but I don't see the point in just giving that value to a variable instead.
The #define is part of the preprocessor language for C and C++. When a defined name is used in code, the preprocessor just replaces it with whatever you specified. For example, if you're sick of writing for (int i=0; i<=10; i++) all the time, you can do the following:
#define fori10 for (int i=0; i<=10; i++)
// some code...
fori10 {
// do stuff to i
}
If you want something more generic, you can create preprocessor macros:
#define fori(x) for (int i=0; i<=x; i++)
// the x will be replaced by what ever is put into the parenthesis, such as
// 20 here
fori(20) {
// do more stuff to i
}
It's also very useful for conditional compilation (the other major use for #define) if you only want certain code used in some particular build:
// compile the following if debugging is turned on and defined
#ifdef DEBUG
// some code
#endif
Most compilers will allow you to define a macro from the command line (e.g. g++ -DDEBUG something.cpp), but you can also just put a define in your code like so:
#define DEBUG
Some resources:
Wikipedia article
C++ specific site
Documentation on GCC's preprocessor
Microsoft reference
C specific site (I don't think it's different from the C++ version though)
Mostly stylistic these days. When C was young, there was no such thing as a const variable. So if you used a variable instead of a #define, you had no guarantee that somebody somewhere wouldn't change the value of it, causing havoc throughout your program.
In the old days, FORTRAN passed even constants to subroutines by reference, and it was possible (and headache inducing) to change the value of a constant like '2' to be something different. One time, this happened in a program I was working on, and the only hint we had that something was wrong was we'd get an ABEND (abnormal end) when the program hit the STOP 999 that was supposed to end it normally.
I got in trouble at work one time. I was accused of using "magic numbers" in array declarations.
Like this:
int Marylyn[256], Ann[1024];
The company policy was to avoid these magic numbers because, it was explained to me, that these numbers were not portable; that they impeded easy maintenance. I argued that when I am reading the code, I want to know exactly how big the array is. I lost the argument and so, on a Friday afternoon I replaced the offending "magic numbers" with #defines, like this:
#define TWO_FIFTY_SIX 256
#define TEN_TWENTY_FOUR 1024
int Marylyn[TWO_FIFTY_SIX], Ann[TEN_TWENTY_FOUR];
On the following Monday afternoon I was called in and accused of having passive defiant tendencies.
#define can accomplish some jobs that normal C++ cannot, like guarding headers and other tasks. However, it definitely should not be used for magic numbers; a static const should be used instead.
C originally didn't have const, so #defines were the only way of providing constant values. Both C and C++ have const now, so there is little point in using #define for constants, except when they are going to be tested with #ifdef/#ifndef.
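A small illustration of the two styles (the names are made up); the const has a type and obeys scope, while the #define is plain text substitution:
#include <cstdio>

#define BUFFER_SIZE_MACRO 256            // untyped text substitution
static const int BufferSize = 256;       // typed, scoped, visible to the debugger
// constexpr int BufferSize = 256;       // the modern C++ spelling

int main()
{
    char a[BUFFER_SIZE_MACRO];
    char b[BufferSize];                  // a const int works as an array bound in C++
    std::printf("%zu %zu\n", sizeof a, sizeof b);
    return 0;
}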
Most common use (other than to declare constants) is an include guard.
A #define is evaluated before compilation by the pre-processor, while variables are referenced at run time. This means you control how your application is built (not how it runs).
Here are a couple examples that use define which cannot be replaced by a variable:
#define min(i, j) (((i) < (j)) ? (i) : (j))
note that the expansion is done by the pre-processor, before compilation (the comparison itself still happens at runtime)
http://msdn.microsoft.com/en-us/library/8fskxacy.aspx
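One classic caveat with function-like macros such as this min(): because the arguments are pasted in textually, each argument can be evaluated more than once:
#include <cstdio>

#define min(i, j) (((i) < (j)) ? (i) : (j))

int main()
{
    int a = 3, b = 5;
    int m = min(a++, b);   // expands to (((a++) < (b)) ? (a++) : (b)); a++ runs twice
    std::printf("m=%d a=%d\n", m, a);   // prints m=4 a=5, probably not what was intended
    return 0;
}
An inline function (or std::min) does not have this problem, which is one more reason to reach for #define only when the preprocessor is genuinely needed.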
The #define allows you to establish a value in a header that would otherwise compile to size-greater-than-zero. Your headers should not compile to size-greater-than-zero.
// File: MyFile.h
// This header will compile to size-zero.
#define TAX_RATE 0.625
// NO: static const double TAX_RATE = 0.625;
// NO: extern const double TAX_RATE; // WHAT IS THE VALUE?
EDIT: As Neil points out in the comment to this post, the explicit definition-with-value in the header would work for C++, but not C.
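For reference, the usual alternative when you do want a typed constant shared across translation units is an extern declaration in the header and a single definition in one source file (file names here are illustrative):
// tax.h
extern const double TAX_RATE;     // declaration only: the header defines no storage

// tax.cpp
#include "tax.h"                  // the prior extern declaration gives the definition external linkage in C++
const double TAX_RATE = 0.625;    // the one definition the linker sees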

Is there a standardised way to get type sizes in bytes in C++ Compilers?

I was wondering if there is some standardized way of getting type sizes in memory at the pre-processor stage - so in macro form, sizeof() does not cut it.
If there isn't a standardized method, are there conventional methods that most IDEs use anyway?
Are there any other methods that anyone can think of to get such data?
I suppose I could do a two stage build kind of thing, get the output of a test program and feed it back into the IDE, but that's not really any easier than #defining them in myself.
Thoughts?
EDIT:
I just want to be able to swap code around with
#ifdef / #endif
Was it naive of me to think that an IDE or underlying compiler might define that information under some macro? Sure, the pre-processor doesn't get information from the actual machine-code-generation stages, but the IDE and the compiler do, and they invoke the pre-processor and could declare such things to it in advance.
EDIT FURTHER
What I imagined as a conceivable concept was this:
The C++ committee has a standard that says that, for every type (perhaps only those native to C++), the compiler has to give the IDE a header file, included by default, that declares the size in memory that every native type uses, like so:
#define CHAR_SIZE 8
#define INT_SIZE 32
#define SHORT_INT_SIZE 16
#define FLOAT_SIZE 32
// etc
Is there a flaw in this process somewhere?
EDIT EVEN FURTHER
In order to get around the multi-platform build-stage problem, perhaps this standard could mandate that a simple program like the one shown by lacqui be compiled and run by default; this way, whatever machine gets the type sizes is the same machine that compiles the code in the second, or 'normal', build stage.
Apologies:
I've been using 'Variable' instead of 'Type'
Depending on your build environment, you may be able to write a utility program that generates a header that is included by other files:
#include <stdio.h>

FILE *make_header_file(void);   // defined by you: opens the generated header for writing

int main(void) {
    FILE *out = make_header_file();
    fprintf(out, "#ifndef VARTYPES_H\n#define VARTYPES_H\n");
    size_t intsize = sizeof(int);
    if (intsize == 4)
        fprintf(out, "#define INTSIZE_32\n");
    else if (intsize == 8)
        fprintf(out, "#define INTSIZE_64\n");
    // .....
    else
        fprintf(out, "#define INTSIZE_UNKNOWN\n");
    fprintf(out, "#endif\n");
    return 0;
}
Of course, edit it as appropriate. Then include "vartypes.h" everywhere you need these definitions.
EDIT: Alternatively:
fprintf(out, "#define INTSIZE_%d\n", (sizeof(int) / 8));
fprintf(out, "#define INTSIZE %d\n", (sizeof(int) / 8));
Note the lack of underscore in the second one - the first creates INTSIZE_32 which can be used in #ifdef. The second creates INTSIZE, which can be used, for example char bits[INTSIZE];
WARNING: This will only work with an 8-bit char. Most modern home and server computers follow this pattern; however, some computers may use a different size of char.
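If you do not want to hard-code the 8, <limits.h> provides CHAR_BIT. Here is a tiny standalone version of the same idea, writing the line to stdout so it can be redirected into the generated header:
#include <limits.h>   /* CHAR_BIT: number of bits in a char */
#include <stdio.h>

int main(void)
{
    /* Same as above, but without assuming an 8-bit char. */
    printf("#define INTSIZE %d\n", (int)(sizeof(int) * CHAR_BIT));
    return 0;
}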
Sorry, this information isn't available at the preprocessor stage. To compute the size of a variable you have to do just about all the work of parsing and abstract evaluation - not quite code generation, but you have to be able to evaluate constant-expressions and substitute template parameters, for instance. And you have to know considerably more about the code generation target than the preprocessor usually does.
The two-stage build thing is what most people do in practice, I think. Some IDEs have an entire compiler built into them as a library, which lets them do things more efficiently.
Why do you need this anyway?
The <cstdint> header provides typedefs and #defines that describe all of the standard integer types, including typedefs for exact-width integer types and #defines for their full value ranges.
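Those macros are ordinary integer constants, so the preprocessor can test them, which is about as close as the standard gets to what the question asks for. A small sketch using <climits> and <cstdint>:
#include <climits>   // CHAR_BIT, INT_MAX, LONG_MAX, ...
#include <cstdint>   // INT32_MAX, INT64_MAX, fixed-width typedefs
#include <cstdio>

// #if can test the limits macros, unlike sizeof, which the preprocessor cannot evaluate.
#if INT_MAX >= INT32_MAX
#define HAVE_32BIT_INT 1
#else
#define HAVE_32BIT_INT 0
#endif

int main()
{
    std::printf("CHAR_BIT=%d, int holds 32-bit values: %d\n", CHAR_BIT, HAVE_32BIT_INT);
    return 0;
}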
No, it's not possible. Just for example, it's entirely possible to run the preprocessor on one machine, and do the compilation entirely separately on a completely different machine with (potentially) different sizes for (at least some) types.
For a concrete example, consider that the normal distribution of SQLite is what they call an "amalgamation" -- a single already-preprocessed source code file that you actually compile on your computer.
You want to generate different code based on the size of some type? Maybe you can do this with template specialization:
#include <iostream>

template <int Tsize>
struct dosomething {
    void doit() { std::cout << "generic version" << std::endl; }
};

template <>
void dosomething<sizeof(int)>::doit()
{ std::cout << "int version" << std::endl; }

template <>
void dosomething<sizeof(char)>::doit()
{ std::cout << "char version" << std::endl; }

int main(int argc, char** argv)
{
    typedef int foo;
    dosomething<sizeof(foo)> myfoo;
    myfoo.doit();
}
How would that work? The size isn't known at the preprocessing stage. At that point, you only have the source code. The only way to find the size of a type is to compile its definition.
You might as well ask for a way to get the result of running a program at the compilation stage. The answer is "you can't, you have to run the program to get its output". Just like you need to compile the program in order to get the output from the compiler.
What are you trying to do?
Regarding your edit, it still seems confused.
Such a header could conceivably exist for built-in types, but never for variables. A macro could perhaps be written to replace known type names with a hardcoded number, but it wouldn't know what to do if you gave it a variable name.
Once again, what are you trying to do? What is the problem you're trying to solve? There may be a sane solution to it if you give us a bit more context.
For common build environments, many frameworks have this set up manually. For instance,
http://www.aoc.nrao.edu/php/tjuerges/ALMA/ACE-5.5.2/html/ace/Basic__Types_8h-source.html
defines things like ACE_SIZEOF_CHAR. Another library described in a book I bought called POSH does this too, in a very includable way: http://www.hookatooka.com/wpc/
The term "standardized" is the problem. There's not standard way of doing it, but it's not very difficult to set some pre-processor symbols using a configuration utility of some sort. A real simple one would be compile and run a small program that checks sizes with sizeof and then outputs an include file with some symbols set.