I'm currently getting compiler warnings that resemble the one I gave in the question title. Warnings such as:
warning: 'boost::system::generic_category' defined but not used
warning: 'boost::system::posix_category' defined but not used
warning: 'boost::system::errno_ecat' defined but not used
warning: 'boost::system::native_ecat' defined but not used
As far as I know, the program isn't being affected in any way. Still, I don't like warnings hanging around, and I have no idea what these warnings are trying to tell me beyond the fact that something defined in Boost is sitting somewhere unused. Everything that I've defined, I've used. The Boost libraries I'm using are the Random library and the Filesystem library.
When I check the source of the warning, it brings up Boost's error_category.hpp file and highlights some static consts that are commented as either "predefined error categories" or "deprecated synonyms". Maybe the problem has something to do with my error handling (or lack thereof) when using the library?
Can anyone give some insight regarding why these warnings are popping up? Am I completely missing something?
P.S. Warnings are at max level.
I agree with @Charles Salvia, but wanted to add that, at least as of Boost 1.44.0, these definitions are now wrapped so that they can be excluded as deprecated. So if you aren't using them, just include the following lines before you include the header file:
#ifndef BOOST_SYSTEM_NO_DEPRECATED
#define BOOST_SYSTEM_NO_DEPRECATED 1
#endif
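In context, a minimal sketch of the include order (the specific Boost headers shown are only examples of what you might be including):
#ifndef BOOST_SYSTEM_NO_DEPRECATED
#define BOOST_SYSTEM_NO_DEPRECATED 1
#endif
#include <boost/system/error_code.hpp> // built without the deprecated synonyms
#include <boost/filesystem.hpp>
#include <boost/random.hpp>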
This relates to the error_code facility in the Boost.System library. Boost error_codes contain two attributes: a value and a category. In order to make error_codes extensible, so that library users can design their own error categories, the Boost designers needed some way to represent a unique error code category. A simple ID number wouldn't suffice, because that could result in two projects using conflicting ID numbers for their custom error categories.
So basically, what they did was use memory addresses, in the form of static objects that inherit from the base class error_category. These variables don't actually do anything except serve as unique identifiers for a particular error category. Because they are essentially static dummy objects with unique addresses in memory, you can easily create your own custom error categories which won't interfere with other error category "IDs". See here for more information.
I suppose that what you're seeing is a side-effect of this design decision. Since these variables are never actually used in your program, the compiler is generating warnings. Suffice it to say, I don't think you're doing anything wrong.
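To make the design concrete, here is a minimal sketch of a custom category, written against the standard <system_error> equivalents (Boost's error_category interface is essentially the same); the names my_category_impl and my_category are placeholders, and the code assumes C++11:
#include <string>
#include <system_error>

class my_category_impl : public std::error_category
{
public:
    const char* name() const noexcept override { return "my_category"; }
    std::string message(int ev) const override
    {
        return ev == 1 ? "something went wrong" : "unknown error";
    }
};

// The address of this single static instance is what identifies the category.
const std::error_category& my_category()
{
    static my_category_impl instance;
    return instance;
}

int main()
{
    std::error_code ec(1, my_category());
    // Categories compare by object identity (address), so this one
    // cannot clash with a category defined in another library.
    return ec.category() == my_category() ? 0 : 1;
}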
I tried the BOOST_SYSTEM_NO_DEPRECATED define suggested by @M.Tibbits, and it seemed to remove some instances of the warnings (in a big system built under Linux), but not all of them.
However, using -isystem instead of -I to include the Boost headers (so that warnings coming from them are suppressed) did work for me.
Suggested by https://exceptionshub.com/how-do-you-disable-the-unused-variable-warnings-coming-out-of-gcc.html
Explained (obliquely) by GNU GCC: http://gcc.gnu.org/onlinedocs/gcc/Directory-Options.html
I have the following errors reported when trying to build my application:
error C2143: syntax error : missing '}' before 'constant'
error C2143: syntax error : missing ';' before 'constant'
error C2059: syntax error : 'constant'
For the following code:
namespace oP
{
enum adjustment
{
AUTO_OFF,
AUTO_ONCE,
AUTO_CONTINUOUS,
AUTO_SEMI,
ABSOLUTE, // The line that the errors point to.
NUDGE
};
}
Lower case "absolute" builds ok, and if I misspell ABSOLUTE then it builds without errors.
I've searched my entire codebase and there's nowhere else using the term "ABSOLUTE".
I've investigated the built artifact without this change and I can't find any reference to ABSOLUTE in it.
Does anyone have pointers as to what's wrong or how to debug this?
Thanks
ABSOLUTE is #defined (to the number 1) in one of the Windows API headers, <wingdi.h>. That is what is confusing the compiler.
You could #undef it, remove <windows.h> if you don't need it, or rename your enumeration.
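If you do need <windows.h>, one possible workaround is a targeted #undef; a sketch, assuming nothing else in the translation unit relies on the ABSOLUTE macro:
#include <windows.h>

#ifdef ABSOLUTE
#undef ABSOLUTE // drop the wingdi.h macro so the enumerator below compiles
#endif

namespace oP
{
    enum adjustment
    {
        AUTO_OFF,
        AUTO_ONCE,
        AUTO_CONTINUOUS,
        AUTO_SEMI,
        ABSOLUTE, // no longer rewritten to 1 by the preprocessor
        NUDGE
    };
}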
You have a macro defined with that name somewhere in one of your included files; check them. The easiest way is to inspect the preprocessor's output.
If you use GCC, use the -E flag to stop after the preprocessing stage. With the VC++ compiler, use /E and/or /P. See How do I see a C/C++ source file after preprocessing in Visual Studio? for details.
Usually the convention is to name macros in all uppercase; this is applicable to enumerations too, if you use C++03's (ordinary) enums. A better alternative is to use C++11's strongly-typed, scoped enumerations.
The name of each entry can then be in Pascal case, and with the enumeration's name as a qualifier they become very readable: Adjustment::Absolute, as opposed to the older, unscoped enumeration's ABSOLUTE. The latter is less readable, since a reader might confuse it with the macro that wingdi.h declares (as Bathsheba points out). Apart from readability, a scoped enumeration also avoids polluting the enclosing namespace.
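A sketch of what that could look like for this enumeration (C++11 scoped enum; the name Adjustment is just a suggested rename):
namespace oP
{
    enum class Adjustment
    {
        AutoOff,
        AutoOnce,
        AutoContinuous,
        AutoSemi,
        Absolute, // Pascal case, so it cannot collide with the ABSOLUTE macro
        Nudge
    };
}

oP::Adjustment a = oP::Adjustment::Absolute; // scoped and readable, no macro clash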
If you are using the Visual C++ compiler and #include <windows.h>, then you will get this error. In windows.h the file <wingdi.h> is included, and in wingdi.h you will find:
/* Coordinate Modes */
#define ABSOLUTE 1
#define RELATIVE 2
Hence the error occurs.
How do I find something that the C++ compiler thinks is defined as a constant?
If your compiler is unwilling to produce helpful messages (usually it prints where the term was previously defined), or if you suspect that you have fallen victim to macro voodoo in the WinAPI headers...
Selectively comment out lines of code and recompile to pinpoint the problem.
If you comment out one line and your program compiles after that, that line is the source of your problem. If the code block is big, do a "binary search": comment out a whole block, then half of it, and so on, to narrow down the problem quickly.
In IDEs you often can mouse over the item to see where it is defined or press a key or use context menus to "jump to definition".
In addition to that you can investigate preprocessor output.
Regarding "...and can't selectively comment out headers to test when it changes, since the new list of compiler warnings would be too onerous to work through":
Make a blank *.cpp file and copy the problematic definitions into it till you break it. That would allow you to pinpoint the problem.
It is good practice to always include only the minimal set of necessary headers in your own *.h files, preferably avoiding OS-specific headers completely, although that is not really possible in this case.
In your particular scenario another good option is to change your naming style for enum values. Normally ALL_UPPERCASE is reserved for macros only (macro definitions and macro constants). A notable exception to this rule are the min and max macros defined within the Windows headers (they can be disabled). Because you used an all-uppercase name in an enum, you clashed with an OS-specific definition. I would use the same naming convention for enums as for constants and local variables.
In a C++ library used in many places in our collaboration, we have mistakenly defined multiple enums directly in the same lib namespace to hold constant values. An enum is a distinct type but not a distinct namespace, so all of the enum values end up in the same namespace. This is an open door to enum identifier collisions and is also inconvenient when using automatic completion. To solve this, we are considering moving the different enums into distinct namespaces.
To ease the evolution of code using this library, we would like to emit a "deprecated" warning at compile time, suggesting the code change, whenever the old enum identifiers are encountered.
The question and answers at Does there exist a static_warning? provide a way to emit a deprecation warning when a condition is met. How could I achieve the same effect when an enum identifier shows up in user code?
If you use Visual C++ you might be able to use #pragma deprecated.
For GCC there is the __attribute__((deprecated)) compiler extension, which can be used to mark variables or functions as deprecated. I don't know about enumerations, though.
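If a reasonably recent compiler is acceptable, the standard [[deprecated]] attribute can be attached to individual enumerators (attributes on enumerators are a C++17 feature); a sketch with made-up names:
namespace lib
{
    // New, per-enum namespace as described in the question.
    namespace color
    {
        enum Color { Red, Green, Blue };
    }

    // Old identifiers kept for compatibility, but flagged.
    enum OldColor
    {
        RED   [[deprecated("use lib::color::Red instead")]]   = color::Red,
        GREEN [[deprecated("use lib::color::Green instead")]] = color::Green,
        BLUE  [[deprecated("use lib::color::Blue instead")]]  = color::Blue
    };
}

int main()
{
    int c = lib::RED; // the compiler emits a deprecation warning here
    (void)c;
    return 0;
}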
There are two problems I run into occasionally. One is compile-time assertions, and the other is a header file being included in multiple places in weird ways (this is not my code, so I cannot fix it by including it differently; even if I tried, it would take too many hours or days, as it is deeply embedded), e.g.:
class Foo
{
public:
#include "VariableDeclarations.h" // Some file that has all the variables that need to be declared
// which is also included by several other classes in the same way
};
The above code is a simplification of what I am currently dealing with. Of course, the class Foo is doing other things as well.
Now if I add another variable declaration to that header, and the file that class Foo lives in does not know about the type, I get a compile error. To fix it, I include the necessary headers. The problem is, all the compiler tells me is "undeclared identifier", and the file name comes up as VariableDeclarations.h. I would like to know which file included the declarations and consequently did not know about the type that I just added.
A similar thing happens with compile-time assertions: I have no indication as to which line/file caused the error; it just gives me the error (e.g. in the Eigen math library, I was experiencing this a lot).
In g++, you can use the verbose option, -v. For the Intel compiler, the same -v flag should work. For MSVC there is a project option you can tweak somewhere in the build settings: How can I make Visual Studio's build be very verbose?
The preprocessor pound sign (#) has to be the first symbol on the line for it to be processed, and the trailing ; shouldn't be there either:
class Foo
{
public:
# include "VariableDeclarations.h"; // Some file that has all the variables that need to be declared
// which is also included by several other classes in the same way
};
Also, both GCC and MSVC have a switch to only run the preprocessor and show you the generated file. That's an excellent tool to debug this kind of stuff.
What is the proper layout of a C++ .h file?
What I mean is: header guard, includes, typedefs, enums, structs, function declarations, class definitions, templates, etc.
I am porting a code base that is over 10 years old, and moving to a modern compiler from CodeWarrior 8 is proving interesting, as things seem to be all over the place. I get a lot of "does not name a type" errors, "forbids declaration with no type" errors, and so on.
There is no silver bullet regarding how to organize your headers.
However, one important rule is to keep it consistent across the project, so that everyone involved knows what to expect.
In my headers, typedefs and defines usually come at the top of the file (though that cannot be regarded as a rule), followed by class/template definitions.
A rule that I follow for C++ is one header per class, which usually keeps the headers small enough to allow grasping the content and finding things without scrolling too much.
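For what it's worth, a sketch of the layout I tend to use (all names made up):
#ifndef PROJECT_WIDGET_H
#define PROJECT_WIDGET_H

// 1. Includes that this header itself needs.
#include <string>

namespace project
{
    // 2. typedefs/constants used by the interface.
    typedef unsigned int WidgetId;

    // 3. The class itself (one class per header).
    class Widget
    {
    public:
        explicit Widget(WidgetId id);
        std::string name() const;

    private:
        WidgetId id_;
    };
}

#endif // PROJECT_WIDGET_H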
It depends on what you mean by proper. If you mean language-enforced, there really isn't one. In fact, you don't even have to name it ".h". I've seen ".c" files #include'd in working commercial code (name withheld to protect the guilty). #include is just a preprocessor hack to get some kind of rough modularity in the language by allowing files to textually include other files. Anything else you tend to see as standard practice is just useful idioms people have developed over time.
That doesn't help your current issue though.
I'd guess that what you are actually seeing is a lot of missing symbols due to platform differences. Nothing due to weirdly-formed .h files at all.
It is possible that the old code was written to work with an old K&R-style C compiler. They had oddities like implicit function declarations (any reference to an undeclared routine assumed it returned int and all its parameters were int). You could try seeing if your compiler has a K&R flag, but a lot of the flagged stuff may actually be latent errors in the old code.
It sounds like you're running into assumptions made based on the previous implementation (Codewarrior). For example:
#include <iostream>
int main() {
std::cout << "string literal\n";
return 0;
}
This relies on <iostream> declaring something it is not required to declare: the operator<<(ostream&, char const*) overload (it's a free function, not a member of ostream like the others). To be completely unambiguous, #include <ostream> is also required above. In C++, library headers are generally allowed to include any other library header, so this problem crops up whenever someone inadvertently depends on that.
(That the extra header is required in this particular circumstance is considered a flaw by many, including me, and almost all implementations do provide the declaration of this function in iostream. It is still the shortest, common example I know of to illustrate this.)
It's often more subtle and complicated than this simple example, but the core issue is the same. The solution is to check every header to make sure it includes whatever it requires, starting with the ones giving you the errors. E.g. #include <vector> and make sure you use std::vector (to avoid relying on it being in the global namespace, which some implementations, mostly old and now obsolete ones, put it in) when you get "vector does not name a type".
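For example, a minimal self-contained version of the snippet above that should build the same way on any conforming implementation:
#include <iostream>
#include <ostream> // guarantees the operator<<(ostream&, char const*) overload is declared
#include <vector>  // included directly rather than relying on another header to drag it in

int main()
{
    std::vector<int> values(3, 0); // qualified with std::, no reliance on the global namespace
    std::cout << "size: " << values.size() << "\n";
    return 0;
}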
You might also be running into dependent types, in which case you'd add typename.
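A short sketch of that dependent-type case (the function name is arbitrary):
template <typename Container>
void print_first(const Container& c)
{
    // Container::value_type is a dependent name, so "typename" is required here;
    // some older compilers accepted the code without it.
    typename Container::value_type first = *c.begin();
    (void)first;
}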
I think the best thing you can do is check out the layout of some existing library header files.
I'm trying to move a project from an old linux platform to a kubunutu 9.04. Now I get this error when compiling with gcc 4.3.3:
/usr/src/linux-headers-2.6.28-11-generic/include/linux/cpumask.h:600:37: error: "and" may not appear in macro parameter list
If I understand the message right, it is not allowed to use "and" as a macro parameter, since it is a reserved word. Two questions about that:
How is this possible? I cannot imagine that there is such a mistake in the Linux header files... Did I do something wrong before? I tried #undef and, but that doesn't help.
How do I fix this error? It cannot be true that I have to change the linux header files, can it?
Thanks for help.
I believe the problem is that and is a keyword in C++ but not C (they use &&).
The kernel guys sometimes use macros as an alternative to inline functions. Sometimes, however, they need macros because what they want to do has to be done in the scope of the calling function, and defining a function to do that won't work (for instance, a macro to find out the name of the current function).
Assuming the macros in question are really fake inlined functions, it would be possible to write your own .c file full of nothing but functions calling these macros, compile it, and refer to those functions via an extern "C" header. You would get the same behavior, but slightly worse performance (which is unlikely to be a problem).
If the macros actually have to be macros, then your best bet is to hand edit them to be C++ compliant.
The linux headers are C headers, not C++.
Change #define for_each_cpu_and(cpu, mask, and) to #define for_each_cpu_and(cpu, mask, and_deb)
Found this solution at http://www.linux.org.ru/forum/development/4797542
It would help if you also showed the line in question. Perhaps it's all down to context: if you do something crazy before including the header, the compiler might get confused and generate a non-obvious error message.
There are cases when "and" is indeed a reserved word, and if it's C++-only the kernel developers won't care too much since the kernel is focused on C.