Basically, I have a .proto definition which declares the package as main.
package main;
This file is being used by two programs, and I am rewriting one of them. When I generate the C++ files for this definition, the generated namespace is main, which causes a clash with the main function. Right now I just wrap the header and the source of the generated files with this:
#define main protocol
//Generated code
#undef main
I want to know if it is safe for me to rename the package in the .proto file and, if I did, whether the resulting protocol buffer messages would still be compatible.
Something like
package xxx;
//Same definitions
Yes, this would be 100% compatible in terms of data when using the binary protocol - the binary format excludes all names; only tag numbers are included.
If you're using the JSON protocol, some names matter - in particular the member names.
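For illustration, here is a minimal C++ sketch of why the rename is safe on the wire; the message name Config and the generated header name are hypothetical assumptions, but ParseFromString/SerializeAsString are the standard calls on generated messages:

#include <string>
#include "config.pb.h"   // generated from the renamed .proto (package xxx)

// Bytes that were serialized by code generated from "package main"
// still parse here, because the wire format carries only tag numbers.
std::string roundtrip(const std::string& wire_bytes) {
    xxx::Config cfg;                  // was main::Config before the rename
    cfg.ParseFromString(wire_bytes);
    return cfg.SerializeAsString();
}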
The intention here is that when the program starts, a particular function will read a configuration file and set some #defines. In other parts of this project, these preprocessor directives will decide what code to execute and what not.
Example:
A file X contains:
#define WHAT 0
A file Y contains:
#if (WHAT)
// Do this
#endif
How and where should these types of #defines be organized so that they are accessible where they should be without creating a mess?
Preprocessor directives are resolved when the program is compiled, not when it starts, so what you're asking for can't be done.
You'll need a runtime mechanism to make this work, but that doesn't guarantee code exclusion from the compiled binary.
The intention here is that when the program starts, a particular function will read a configuration file and set some #defines. In other parts of this project, these preprocessor directives will decide what code to execute and what not.
As the other answer has said, this is not possible, because preprocessor directives like #define are consumed by the compiler's preprocessor. What your executable binary actually contains is the compiled, already-preprocessed source, which stays the same no matter which configuration file you open at run time. Moreover, there is no concept of loading a configuration file and changing the code at run time, because C++ is a compiled language, not an interpreted one.
What actually is possible is to:
Load the configuration file (preferably in a standard format)
Parse it with a publicly available library for that format, or write your own parser
Use an STL container like std::map to create a mapping between configuration keys and values
Place the map in some namespace so as not to pollute the global namespace, and make it extern. Ensure that the extern declaration is present in a header file and the variable is defined in a .cpp file, so that the variable can be accessed from translation units other than the one where it is defined.
Consume the mapped configuration anywhere within your program (see the sketch below)
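For illustration, a minimal sketch of those steps; the file names, the simple key=value format and the config namespace are just assumptions, not a prescribed layout:

// config.h
#include <map>
#include <string>
namespace config {
    extern std::map<std::string, std::string> values;  // declared here, defined in config.cpp
    void load(const std::string& path);
}

// config.cpp
#include "config.h"
#include <fstream>
namespace config {
    std::map<std::string, std::string> values;          // the single definition
    void load(const std::string& path) {
        std::ifstream in(path);
        std::string line;
        while (std::getline(in, line)) {                 // assume simple "key=value" lines
            auto pos = line.find('=');
            if (pos != std::string::npos)
                values[line.substr(0, pos)] = line.substr(pos + 1);
        }
    }
}

// anywhere else in the program, a run-time check replaces the #if:
// if (config::values["WHAT"] == "1") { /* do this */ }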
A lot of the time when I look at other people's code, I see that some include a .h file and some include a .c/.cpp file. What is the difference?
It depends on what is in the file(s).
The #include preprocessor directive simply inserts the referenced file at that point in the original file.
So what the actual compiler stage (which runs after the preprocessor) sees is the result of all that inserting.
Header files are generally designed and intended to be used via #include. Source files are not, but it sometimes makes sense. For instance when you have a C file containing just a definition and an initializer:
const uint8_t image[] = { 128, 128, 0, 0, 0, 0, ... lots more ... };
Then it makes sense to make this available to some piece of code by using #include. It's a C file since it actually defines (not just declares) a variable. Perhaps it's kept in its own file since the image is converted into C source from some other (image) format used for editing.
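As a hedged illustration of that pattern (the file names and the image array are invented), the data-only C file gets pulled verbatim into whichever source file needs it:

/* image_data.c -- generated from the original image; definition only */
const uint8_t image[] = { 128, 128, 0, 0, 0, 0 };

/* display.c -- pulls the definition into this translation unit */
#include <stdint.h>
#include "image_data.c"

uint8_t first_pixel(void) { return image[0]; }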
.h files are called header files, they should not contain any code (unless it happens to contain information about a C++ templated object). They typically contain function prototypes, typedefs, #define statements that are used by the source files that include them. .c files are the source files. They typically contain the source code implementation of the functions that were prototyped in the appropriate header file.
Source- http://cboard.cprogramming.com/c-programming/60805-difference-between-h-c-files.html
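A small, hypothetical example of that split between a header and its source file:

/* greet.h -- header: declarations only */
#ifndef GREET_H
#define GREET_H
void greet(const char *name);   /* prototype used by any file that includes this header */
#endif

/* greet.c -- source: the implementation of what the header declares */
#include <stdio.h>
#include "greet.h"
void greet(const char *name) { printf("Hello, %s\n", name); }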
You can look at the GCC website (https://gcc.gnu.org/onlinedocs/gcc/Invoking-G_002b_002b.html), which gives a good summary of all the extensions you can use in C and C++:
C++ source files conventionally use one of the suffixes ‘.C’, ‘.cc’, ‘.cpp’, ‘.CPP’, ‘.c++’, ‘.cp’, or ‘.cxx’; C++ header files often use ‘.hh’, ‘.hpp’, ‘.H’, or (for shared template code) ‘.tcc’; and preprocessed C++ files use the suffix ‘.ii’. GCC recognizes files with these names and compiles them as C++ programs even if you call the compiler the same way as for compiling C programs (usually with the name gcc).
Including a header file with declarations is the main, recommended method, used almost everywhere, for keeping declarations consistent across a project. Including another source file is a different (very rare) kind of beast, and it's useful and possible only under specific conditions:
There is a reason to split the code into separate source files even though it is compiled as a single module. For example, there may be different versions of some functions which should not be visible from other modules, so they are declared static, and which version gets included is controlled by compile options. Another reason is size and/or maintenance-responsibility issues.
The included file isn't compiled by itself as a project module, so its exported definitions don't conflict with the module it is included into.
Here, I use the terms definition and declaration in the sense that the following are declarations:
extern int qq;
void f(int);
#define MYDATATYPE double
and the following are definitions:
int qq; // here, the variable is allocated and exported
void f(int x) { printf("%d\n", x); } // the same for function
(Note that C++ member functions whose bodies appear inside their class definition are definitions too, but they are implicitly inline, so they may safely live in a header.)
Anyway, cases where another .c/.cxx/etc. file is included into a source file are very confusing and should be avoided unless there is a real need. Sometimes a specific suffix (e.g. .tpl) is used for such files, to avoid confusing the reader.
I know that include is for classes and using is for some built-in stuff, like namespace std... When you include something, you can create objects and play around with them, but when you are "using" something, then you can use some sort of built-in functions. But then how am I supposed to create my own "library" which I could "use"?
Simply put, #include tells the preprocessor to copy and paste the contents of the included header file into the current translation unit. It is evaluated by the preprocessor.
The using directive, on the other hand, tells the compiler to bring symbol names from another scope into the current scope. This is put into effect by the compiler itself.
But then how am I supposed to create my own "library" which I could "use"?
Namespaces are used to prevent symbol name clashes, and usually a library implementer will wrap their functionality in one or more namespaces.
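So, to make your own "library" that you can both #include and "use", wrap it in a namespace. A minimal sketch with invented names:

// mymath.h -- your "library": a header wrapping functions in a namespace
#ifndef MYMATH_H
#define MYMATH_H
namespace mymath {
    inline int twice(int x) { return 2 * x; }
}
#endif

// main.cpp
#include "mymath.h"          // include makes the declarations visible
using mymath::twice;         // using brings the name into the current scope

int main() {
    return twice(21) == 42 ? 0 : 1;
}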
#include basically copy-pastes the contents of a file at the location of the include line. This is used to make your source code (usually a .c file) aware of declarations that live in other source code (usually a .h file).
using basically tells the compiler that the following code uses something (usually a namespace), so you won't have to qualify it explicitly each time:
Instead of:
std::string a;
std::string b;
std::string c;
You could write:
using namespace std;
string a;
string b;
string c;
You could say both provide a similar convenience, but #include is not handled by the compiler, whereas using is. With #include, all the included code is placed into the file where the #include appears, whereas a namespace makes the functions and variables defined in one scope visible in another. If two included header files define a function with the same name, you get a redefinition error, but functions with the same name can coexist if they belong to different namespaces.
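For example (a small sketch with invented names), two functions with the same name can coexist when each lives in its own namespace:

#include <iostream>

namespace alpha { int value() { return 1; } }
namespace beta  { int value() { return 2; } }

int main() {
    std::cout << alpha::value() << " " << beta::value() << "\n";  // no clash: prints 1 2
}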
When you run bison, it creates a stack class for you in "stack.hh". The file name is fixed, but the contents are wrapped in a namespace of your choosing.
If you use bison to generate two separate grammars (i.e. two *.y files) and you use the C++ mode, the "stack.hh" files conflict and get overwritten.
A similar thing happens for the autogenerated "location.hh" and "position.hh" classes, but there is a workaround in bison 2.7:
%define api.location.type "foo::location"
that lets your bar grammar reuse the location type from the foo grammar's namespace.
But I can't find any way of doing the same thing for the "stack.hh" file.
The easiest way to deal with this is just to put the Bison files in two separate directories. Then when you generate the code the files will not conflict, assuming each set of files gets generated in the same location as the corresponding Bison file.
I've got a C/C++ question: can I reuse functions across different object files or projects without writing the function headers twice (once to define the function and once to declare it)?
I don't know much about C/C++, Delphi and D. I assume that in Delphi or D you would just write once what arguments a function takes, and then you can use the function across different projects.
And in C you need the function declaration in header files again, right? Is there a good tool that will create header files from C sources? I've got one, but it's not preprocessor-aware and not very strict. And I've tried a macro technique that worked rather badly.
I'm looking for ways to program in C/C++ like described here http://www.digitalmars.com/d/1.0/pretod.html
Imho, generating the headers from the source is a bad idea and is impractical.
Headers can contain more information than just function names and parameters.
Here are some examples:
a C++ header can define an abstract class for which a source file may be unneeded
A template can only be defined in a header file
Default parameters are only specified in the class definition (thus in the header file)
You usually write your header, then write the implementation in a corresponding source file.
I think doing the other way around is counter-intuitive and doesn't fit with the spirit of C or C++.
The only exception I can see to that is static functions. A static function only appears in its source file (.c or .cpp) and (obviously) can't be used elsewhere.
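For instance (a quick sketch with made-up names), a static helper never needs a header declaration, because nothing outside its file can call it:

/* widget.c */
static int clamp(int v) { return v < 0 ? 0 : v; }   /* internal linkage: invisible to other files */

int widget_size(int requested) { return clamp(requested); }  /* only this one belongs in widget.h */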
While I agree it is often annoying to copy the header definition of a method/function to the source file, you can probably configure your code editor to ease this. I use Vim and a quick script helped me with this a lot. I guess a similar solution exists for most other editors.
Anyway, while this can seem annoying, keep in mind it also gives a greater flexibility. You can distribute your header files (.h, .hpp or whatever) and then transparently change the implementation in source files afterward.
Also, just to mention it, there is no such thing as C/C++: there is C and there is C++; those are different languages (which indeed share much, but still).
It seems to me that you don't really need/want to auto-generate headers from source; you want to be able to write a single file and have a tool that can intelligently split that into a header file and a source file.
Unfortunately, I'm not aware of any such tool. It's certainly possible to write one - but you'd need a full C++ front end. You could try writing something using clang - but it would be a significant amount of work.
Considering you have declared some functions and written their implementations, you will have a .c/.cpp file and a .h header file.
What you must do in order to use those functions:
Create a library (a DLL/.so or a static library .a/.lib - for now I recommend a static library for ease of use) from the files where the implementation resides
Use the header file (#include it - you don't need to rewrite it) in your programs to obtain the function declarations, and link with your library from step 1
Though >this< is an example for Visual Studio, it makes perfect sense for other development environments too.
This seems like a rudimentary question, so assuming I have not misread, here is a basic example of re-use, to answer your first question:
#include "stdio.h"
int main( int c, char ** argv ){
puts( "Hello world" );
}
Explanation:
1. stdio.h is a C header file containing (among others) the declaration of a function called puts().
2. In main, puts() is called, using the included declaration.
Some compilers (including gcc, I think) have an option to generate headers.
There is always a lot of confusion about headers and source files in C++. The links I provided should help to clear that up a little.
If you are in the situation that you want to extract headers from a source file, then you probably went about it the wrong way. Usually you first declare your function in a header file, and then provide an implementation (definition) for it in a source file. If your function is actually a method of a class, you can also provide the definition in the header file.
Technically, a header file is just a bunch of text that is actually inserted into the source file by the preprocessor:
#include <vector>
tells the preprocessor to insert the contents of the file vector at the exact place where the #include appears. This is really just text replacement. So header files are not some kind of special language construct; they contain normal code. But by putting that code into a separate file, you can easily include it in other files using the preprocessor.
I think it's a good question which is what led me to ask this: Visual studio: automatically update C++ cpp/header file when the other is changed?
There are some refactoring tools mentioned but unfortunately I don't think there's a perfect solution; you simply have to write your function signatures twice. The exception is when you are writing your implementations inline, but there are reasons why you can't or shouldn't always do this.
You might be interested in Lazy C++. However, you should do a few projects the old-fashioned way (with separate header and source files) before attempting to use this tool. I considered using it myself, but then figured I would always be accidentally editing the generated files instead of the lzz file.
You could just put all the definitions in the header file...
This goes against common practice, but is not unheard of.