I have a problem with the initialization of a vector of integers named id_vector that doesn't need to change.
The initialization is done like this:
static int id[4]{200,300,400,500};
id_vector = std::vector<int> (id,id + (sizeof(id)/sizeof(int)));
So far so good, but now we need to have different versions of this software, and I would like to choose a different id array depending on the version. The only way I can think of is to do it inside an #ifdef or some sort of macro, but I was wondering if someone knows a better way to do it.
I was thinking something like a namespace but I don't know if this is a good use for it.
UPDATE
Had a typo in the code, sorry about that. Sorry if I didn't make myself clear, but basically this is a list of valid IDs, and there are two versions that need different valid IDs. I don't think it's necessary to add more code.
If you need to put different values for different platforms, and you need the change to happen in code (as opposed to, say, a configuration file) you will end up with an #ifdef in your code. Here is one example of how you can do it:
static int id[4]{
#ifdef PLATFORM_1
200,300,400,500
#endif
#ifdef PLATFORM_2
100,200,410,522
#endif
};
id_vector = std::vector<int> (id,id + (sizeof(id)/sizeof(*id)));
Another solution may be to put all platform-dependent stuff in separate files, and conditionally include that file. This may reduce the number of #ifdefs to 1, but the overall code may end up being harder to read.
Of course you can change your strategy, and place platform-specific IDs in a configuration file. This would help you avoid conditional compilation, at the expense of introducing a run-time dependency on the configuration file.
Namespaces?
namespace PLATFORM_A
{
static int id[4]{200,300,400,500};
}
namespace PLATFORM_B
{
static int id[4]{500,400,300,200};
}
using namespace USE_PLATFORM;
...
id_vector = std::vector<int> (id,id + (sizeof(id)/sizeof(int)));
With g++ -DUSE_PLATFORM=PLATFORM_A ..., g++ -DUSE_PLATFORM=PLATFORM_B ..., or however you set up USE_PLATFORM.
I'm working on a project that has a directory of files: myFiles.
I need something like:
const vector<string> configs = { contents_of_file0, contents_of_file1, ... };
It is desired that the contents of these files be part of the binary, as opposed to being read at runtime.
Is there a clean way to do this?
Today there is a massive hack: a build script concatenates the contents of all the files into a single #define:
#define contents "a super massive string that is too large for some compilers to ingest"
This #define is later parsed at runtime.
I'm looking for a cleaner way...
Is there a clean way to do this?
There is a proposed feature for this purpose that may end up in a future standard.
Until then, you can use code generation: generate source code from the input file, where the generated source contains an initialiser based on the file's contents. An open-source program exists that does exactly this: xxd
concatenated into a single #define
I don't see an advantage to this. If you want separate strings, then generate separate initialisers. I also don't see a need for a macro.
I also recommend carefully re-considering whether it even makes sense to want this. Loading a massive executable isn't any faster than reading large files, and reading files is more flexible.
"This #define is later parsed at runtime." That statement is wrong: #define is handled by the preprocessor, which runs at a very early stage of compilation, so the macro no longer exists at runtime. For static strings, the parsing can instead happen at build time since C++17, and combined with constexpr std::vector from C++20 this can even produce a constexpr vector.
You might consider the include hack for simplicity; if you use it, please don't forget to set up the dependency on the .tbl files in your makefile.
For the source file:
#include <iostream>
#include <string>
#include <vector>

int main(int argc, char* argv[]) {
    std::vector<std::string> vec = {
#include "1.tbl"
#include "2.tbl"
    };
    for (auto& v : vec) {
        std::cout << v << std::endl;
    }
    return 0;
}
For the table file 1.tbl:
"1","2","3",
For the table file 2.tbl:
"a", "b","c",
We get the output:
1
2
3
a
b
c
I am new to coding and C++, and I am asking myself how to store or structure all these little (sub)functions and pieces of code in a proper way.
For example, a function to sum up all values of an array, or the Fibonacci numbers, or all the other little functions and programs which are basic stuff, especially pointers etc.
My idea is to create an ordinary .txt file and copy and paste them all into just one .txt.
For me it's important to have them all in one place. How do you pros handle this, or do you guys really keep most of this stuff in your local memory (brain)? For me it seems impossible to remember all the functions and algorithms, or even the syntax (when the code starts to get nasty).
If I understood your question correctly, you are asking where and how we store reusable snippets of code in an easy-to-access way. There are a number of methods to accomplish this; one which you have mentioned is to simply use a text file and copy-paste as needed, but in my opinion this is a bit archaic.
I have two main methods I like to use. First, for code I want to access online, or rather large functions I plan to reuse, I simply make a gist of it and leave it there, ready to be accessed as needed. Usually I name it something descriptive so that when I look through all my gists, I can find the ones I need quickly.
The second method, which I use for code that gets reused most, is to make snippets using my IDE's configuration files. Such snippets are usually written in JSON format and include a trigger word, for example: for. When you hit a special key, typically Tab, it expands the snippet to something like:
for(int i = 0; i < n; i++) {
// Code goes here...
}
And we can simply just hit tab to edit the starting condition, the ending condition, the increment and the variable names. Snippets are very versatile and you can write as many as you want. If you use Visual Studio Code you can take a look at the C++ tools extension which has some default snippets.
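For reference, a VS Code-style snippet definition for a loop like the one above might look like this in the user's JSON snippets file (the trigger word and placeholder names are illustrative):

```json
{
  "For loop": {
    "prefix": "for",
    "body": [
      "for (int ${1:i} = 0; ${1:i} < ${2:n}; ${1:i}++) {",
      "\t${0:// Code goes here...}",
      "}"
    ],
    "description": "Basic indexed for loop"
  }
}
```

The `${1:...}` placeholders are the tab stops mentioned above: each Tab press jumps to the next one.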
Lastly I keep a handy bookmark to a C++ reference site and look up stuff in the STL as needed so I'm not reinventing the wheel or making extra work for myself.
Welcome to StackOverflow!!!
In C++ you generally put your function declarations in a header file and their definitions in a .cpp file. You then include the header wherever the functions are needed, such as in main.cpp.
// A2DD.h
#ifndef A2DD_H
#define A2DD_H
namespace A2DD{
int GetSum(int x, int y);
}
#endif
and the implementation goes in the CPP file:
// A2DD.cpp
#include "A2DD.h"
int A2DD::GetSum(int x, int y){
return x + y;
}
Then, in main.cpp:
#include <iostream>
#include "A2DD.h"

int main(){
    std::cout << A2DD::GetSum(2, 2) << std::endl;
}
As far as remembering the functions, you can simply take a quick look at the header file which declares the functions (no implementation)
I’ve used global variables without having any noticeable problems but would like to know if there are potential problems or drawbacks with my use of globals.
In the first scenario, I include const globals into a globals.h file, I then include the header into various implementation files where I need access to any one of the globals:
globals.h
const int MAX_URL_LEN = 100;
const int MAX_EMAIL_LEN = 50;
…
In the second scenario, I declare and initialize the globals in an implementation file when the application executes. These globals are never modified again. When I need access to these globals from a different implementation file, I use the extern keyword:
main.cpp
char application_path[128];
char data_path[128];
// assign data to globals
strcpy(application_path, get_dll_path().c_str());
…
do_something.cpp
extern char application_path[]; // global is now accessible in do_something.cpp
Regarding the first scenario above, I've considered removing all of the different "#include globals.h" directives and using extern where access to those globals is needed, but have not done so, since just including globals.h is so convenient.
I am concerned that I will have different versions of the variables for each implementation file that includes globals.h.
Should I use extern instead of including the globals.h everywhere access is needed?
Please advise, and thank you.
Global mutable variables
provide invisible lines of influence across all of the code, and
you cannot rely on their values, or whether they've been initialized.
That is, global mutable variables do for data flow what the global goto once did for execution flow, creating a spaghetti mess, wasting everyone's time.
Constant global variables are more OK, but even for those you run into
the initialization order fiasco.
I remember how angry I got when I realized that all my troubles in wrapping a well known GUI framework, was due to it needlessly using global variables and provoking the initialization order fiasco. First the anger was directed at the author, then at myself for being so stupid, not realizing what was going on (or rather, was not going on). Anyway.
A sensible solution to all this is Meyers' singletons, like

#include <string>
using std::string;

string compute_pi_digits();   // defined elsewhere

inline
auto pi_decimal_digits()
    -> const string&
{
    static const string the_value = compute_pi_digits();
    return the_value;
}
For the case of a global that's dynamically initialized from some place that knows the value ("one programmer's constant is another programmer's variable"), there is no good solution, but one practical approach is to accept the possibility of a run-time error and at least detect it:
namespace detail {
inline
auto mutable_pi_digits()
-> string&
{
static string the_value;
return the_value;
}
} // namespace detail
inline
void set_pi_digits( const string& value )
{
string& digits = detail::mutable_pi_digits();
assert( digits.length() == 0 );
digits = value;
}
inline
auto pi_digits()
-> const string&
{ return detail::mutable_pi_digits(); }
Your implementation is fine for now. Globals become a problem when
Your program grows and so does your number of globals.
New people join the team that don't know what you were thinking.
Number 1 becomes particularly troublesome when your program becomes multi-threaded. Then you have a number of threads using the same data and you may require protection, which is difficult with just a list of globals.
By grouping data in separate files according to some criteria such as purpose or subject matter your code becomes more maintainable as it grows and you leave breadcrumbs for new programmers on the project to figure out how the software works.
One issue with globals is that when you go to include 3rd party libraries in your code, sometimes they've used globals with the same names as yours. There are definitely times when a global makes sense, but if possible you should also take care to do something like put it into a namespace.
I have a .h file in which hundreds of constants are defined as macros:
#define C_CONST_NAME Value
What I need is a function that can dynamically get the value of one of these constants.
needed function header :
int getConstValue(char * constName);
Is that even possible in the C language?
---- EDIT
Thanks for the help, that was quick :)
As I was thinking, there is no miracle solution for my needs.
In fact, the header file I use is generated by "SCADE: http://www.esterel-technologies.com/products/scade-suite/"
One of the solutions I got from #Chris is to use some Python to generate C code that does the work.
Now it's up to me to make some optimizations to find the constant name; I have more than 5000 constants, so a naive search gets expensive.
I'm also looking at the "X-Macros"; this is the first time I hear of that. I hope it works in C, because I'm not allowed to use C++.
Thanks
C can't do this for you. You will need to store them in a different structure, or use a preprocessor to build the hundreds of if statements you would need. Something like Cogflect could help.
Here you go. You will need to add a line for each new constant, but it should give you an idea about how macros work:
#include <stdio.h>
#include <string.h>
#define C_TEN 10
#define C_TWENTY 20
#define C_THIRTY 30
#define IFCONST(charstar, define) if(strcmp((charstar), #define) == 0) { \
return (define); \
}
int getConstValue(const char* constName)
{
IFCONST(constName, C_TEN);
IFCONST(constName, C_TWENTY);
IFCONST(constName, C_THIRTY);
// No match
return -1;
}
int main(int argc, char **argv)
{
printf("C_TEN is %d\n", getConstValue("C_TEN"));
return 0;
}
I suggest you run gcc -E filename.c to see what gcc does with this code.
A C preprocessor macro (that is, something named by a #define statement) ceases to exist after preprocessing completes. A program has no knowledge of the names of those macros, nor any way to refer back to them.
If you tell us what task you're trying to perform, we may be able to suggest an alternate approach.
This is what X-Macros are used for:
https://secure.wikimedia.org/wikipedia/en/wiki/C_preprocessor#X-Macros
But if you need to map a string to a constant, you will have to search for the string in the array of string representations, which is a linear scan: O(n) per lookup.
You can probably do this with gperf, which generates a lookup function that uses a perfect hash function.
Create a file similar to the following and run gperf with the -t option:
struct constant { char *name; int value; };
%%
C_CONST_NAME1, 1
C_CONST_NAME2, 2
gperf will output C (or C++) code that does the lookup in constant time, returning a pointer to the key/value pair, or NULL.
If you find that your keyword set is too large for gperf, consider using cmph instead.
There's no such capability built into C. However, you can use a tool such as doxygen to extract all #defines from your source code into a data structure that can be read at runtime (doxygen can store all macro definitions to XML).
I was wondering if there is some standardized way of getting type sizes in memory at the pre-processor stage - so in macro form, sizeof() does not cut it.
If there isn't a standardized method, are there conventional methods that most IDEs use anyway?
Are there any other methods that anyone can think of to get such data?
I suppose I could do a two stage build kind of thing, get the output of a test program and feed it back into the IDE, but that's not really any easier than #defining them in myself.
Thoughts?
EDIT:
I just want to be able to swap code around with
#ifdef / #endif
Was it naive of me to think that an IDE or underlying compiler might define that information under some macro? Sure the pre-processor doesn't get information on any actual machine code generation functions, but the IDE and the Compiler do, and they call the pre-processor and declare stuff to it in advance.
EDIT FURTHER
What I imagined as a conceivable concept was this:
The C++ committee has a standard that says that, for every type (perhaps only those native to C++), the compiler has to give the IDE a header file, included by default, that declares the size in memory that every native type uses, like so:
#define CHAR_SIZE 8
#define INT_SIZE 32
#define SHORT_INT_SIZE 16
#define FLOAT_SIZE 32
// etc
Is there a flaw in this process somewhere?
EDIT EVEN FURTHER
In order to get around the multi-platform build-stage problem, perhaps this standard could mandate that a simple program like the one shown by lacqui be compiled and run by default; this way, whatever gets the type sizes will run on the same machine that compiles the code in the second or 'normal' build stage.
Apologies:
I've been using 'Variable' instead of 'Type'
Depending on your build environment, you may be able to write a utility program that generates a header that is included by other files:
#include <stdio.h>

FILE* make_header_file(void); // defined by you

int main(void) {
    FILE* out = make_header_file();
    fprintf(out, "#ifndef VARTYPES_H\n#define VARTYPES_H\n");
    size_t intsize = sizeof(int);
    if (intsize == 4)
        fprintf(out, "#define INTSIZE_32\n");
    else if (intsize == 8)
        fprintf(out, "#define INTSIZE_64\n");
    // .....
    else
        fprintf(out, "#define INTSIZE_UNKNOWN\n");
    fprintf(out, "#endif\n");
    fclose(out);
    return 0;
}
Of course, edit it as appropriate. Then include "vartypes.h" everywhere you need these definitions.
EDIT: Alternatively:
fprintf(out, "#define INTSIZE_%d\n", (int)(sizeof(int) * 8));
fprintf(out, "#define INTSIZE %d\n", (int)(sizeof(int) * 8));
Note the lack of an underscore in the second one: the first creates INTSIZE_32, which can be used in #ifdef; the second creates INTSIZE, which can be used as a value, for example char bits[INTSIZE];
WARNING: This will only work with an 8-bit char. Most modern home and server computers will follow this pattern; however, some computers may use different sizes of char
Sorry, this information isn't available at the preprocessor stage. To compute the size of a variable you have to do just about all the work of parsing and abstract evaluation - not quite code generation, but you have to be able to evaluate constant-expressions and substitute template parameters, for instance. And you have to know considerably more about the code generation target than the preprocessor usually does.
The two-stage build thing is what most people do in practice, I think. Some IDEs have an entire compiler built into them as a library, which lets them do things more efficiently.
Why do you need this anyway?
The <cstdint> header provides typedefs and #defines that describe all of the standard integer types, including typedefs for exact-width integer types and #defines for their full value ranges.
No, it's not possible. Just for example, it's entirely possible to run the preprocessor on one machine, and do the compilation entirely separately on a completely different machine with (potentially) different sizes for (at least some) types.
For a concrete example, consider that the normal distribution of SQLite is what they call an "amalgamation" -- a single already-preprocessed source code file that you actually compile on your computer.
You want to generate different code based on the size of some type? Maybe you can do this with template specializations:
#include <iostream>
template <int Tsize>
struct dosomething{
void doit() { std::cout << "generic version" << std::endl; }
};
template <>
void dosomething<sizeof(int)>::doit()
{ std::cout << "int version" << std::endl; }
template <>
void dosomething<sizeof(char)>::doit()
{ std::cout << "char version" << std::endl; }
int main(int argc, char** argv)
{
typedef int foo;
dosomething<sizeof(foo)> myfoo;
myfoo.doit();
}
How would that work? The size isn't known at the preprocessing stage. At that point, you only have the source code. The only way to find the size of a type is to compile its definition.
You might as well ask for a way to get the result of running a program at the compilation stage. The answer is "you can't, you have to run the program to get its output". Just like you need to compile the program in order to get the output from the compiler.
What are you trying to do?
Regarding your edit, it still seems confused.
Such a header could conceivably exist for built-in types, but never for variables. A macro could perhaps be written to replace known type names with a hardcoded number, but it wouldn't know what to do if you gave it a variable name.
Once again, what are you trying to do? What is the problem you're trying to solve? There may be a sane solution to it if you give us a bit more context.
For common build environments, many frameworks have this set up manually. For instance,
http://www.aoc.nrao.edu/php/tjuerges/ALMA/ACE-5.5.2/html/ace/Basic__Types_8h-source.html
defines things like ACE_SIZEOF_CHAR. Another library, called POSH, described in a book I bought, does this too in a very includable way: http://www.hookatooka.com/wpc/
The term "standardized" is the problem. There's no standard way of doing it, but it's not very difficult to set some pre-processor symbols using a configuration utility of some sort. A really simple one would be to compile and run a small program that checks sizes with sizeof and then outputs an include file with some symbols set.