I've been refactoring my horrible mess of C++ type-safe pseudo-enums to the new C++0x type-safe enums because they're far more readable. Anyway, I use them in exported classes, so I explicitly mark them to be exported:
enum class __attribute__((visibility("default"))) MyEnum : unsigned int
{
One = 1,
Two = 2
};
Compiling this with g++ yields the following warning:
type attributes ignored after type is already defined
This seems very strange, since, as far as I know, that warning is meant to prevent actual mistakes like:
class __attribute__((visibility("default"))) MyClass { };
class __attribute__((visibility("hidden"))) MyClass;
Of course, I'm clearly not doing that, since I've only marked the visibility attribute at the definition of the enum class and I'm not re-defining or re-declaring it anywhere else (I can reproduce this warning with a single file).
Ultimately, I can't make this bit of code actually cause a problem, save for the fact that if I change a value and re-compile the consumer without re-compiling the shared library, the consumer passes the new values and the shared library has no idea what to do with them (though I wouldn't expect that to work in the first place).
Am I being way too pedantic? Can this be safely ignored? I suspect so, but at the same time, having this warning prevents me from compiling with -Werror, which makes me uncomfortable. I would really like to see this problem go away.
You can pass the -Wno-attributes flag to turn the warning off.
(It's probably a bug in gcc?)
It works for me with g++ 4.8.2 the following way:
enum class MyEnum : unsigned int
__attribute__((visibility("default")))
{
One = 1,
Two = 2
};
(change the position of the attribute declaration)
Related
This seems like a trivial thing, but I'm not an expert in C++, and I haven't found a good solution to this online yet. I suspect I'm missing some basic coding construct that might solve this issue. I have the following definition in one of my main header files:
static const Foo INVALID_FOO = {};
where Foo is a POD class (it doesn't have constructors, as it's used in a union in a C++03 project). This seems fine, except that for sources which include the header but don't use INVALID_FOO, I get the warning:
error: 'Foo::INVALID_FOO' defined but not used [-Werror=unused-variable]
I've tried removing the static but then I get duplicate definitions. I could make this a forward declaration and define it in a .cpp file, but then the compiler would need to access it through a reference and would not be able to make any optimizations. I'd also like to not disable the -Wall compiler flag. I'm wondering if there's a good way to do this?
You can suppress the GCC warning like this:
static const Foo INVALID_FOO __attribute__ ((unused)) = {};
Note that unused is correct here: all it does is suppress the warning (and it's still fine to reference the identifier). There is also a used attribute which suppresses the warning and tells GCC to emit the definition in the object file even if the compiler does not see a reference to it in the source code—in most cases, this results in unnecessary code bloat.
You can suppress the warning portably with a static_cast<void>(INVALID_FOO); statement.
Also note that static const at global and namespace scope is a bit of a tautology: const already gives the variable internal linkage, so the static is superfluous.
I have a problem with compiling boost.bimap library. My test program is a blank main function and only one include directive(like #include <boost/bimap.hpp>).
After some investigations I found out that preprocessor had made some interesting constructions from header file like:
struct A { struct B{}; struct B; };
I don't know if this is correct or not, but gcc accepts it while clang and icc don't. Who is right and what can I do to compile programs with bimap library? Unfortunately, I can't use gcc in this case.
struct B{}; defines a nested class, then struct B; is a re-declaration of the same nested class.
GCC is wrong to accept the code (bug report), because the standard says in [class.mem]:
A member shall not be declared twice in the member-specification, except that a nested class or member class template can be declared and then later defined,
In your case the nested class is defined then declared, which is not allowed, so Clang and ICC are correct to give a diagnostic. However, when I test it they only give a warning, not an error, so maybe you are using -Werror, in which case stop doing that and the code should compile.
The problem in the Boost.Bimap code is a known bug.
I searched about forward declaration and didn't see any way to make my situation work. So here it is:
1) There is a C header file, an export interface so to speak for a large multi-component piece of software, that contains an enum typedef
"export.h":
// This is in "C"!
typedef enum _VM_TYPE {...., ...., ...,} VM_TYPE;
2) A part of the code, in C++, uses that export.
"cpp_code.cpp":
// This is in C++
#include "export.h"
#include "cpp_header.hpp"
{ .... using VM_TYPE values to do stuffs....}
"cpp_header.hpp":
// Need to somehow forward declare VM_TYPE here but how?
struct VM_INFO {
....
VM_TYPE VType; //I need to add this enum to the struct
....
};
So quite obviously, the problem is in cpp_header.hpp, as it doesn't know about the enum.
I tried adding to cpp_header.hpp
typedef enum _VM_TYPE VM_TYPE;
and it'll actually work. So why does THIS work? Because it has C-style syntax?!
Anyway, I was told to not do that ("it's C++, not C here") by upper "management".
Is there any other way to make this work, based on how things are currently linked? They don't want to change/add include files; "enum class" is C++-only, correct? Adding just "enum VM_TYPE" to cpp_header.hpp gets an error about redefinition.
Any idea? Thanks.
In the particular situation described in your question, you don't need to forward declare at all. All the files you #include are going to essentially get copy-pasted into a single translation unit before compilation proper begins, and since you #include "export.h" before you #include "cpp_header.hpp", then it'll just work, because by the time the compiler sees the definition of struct VM_INFO, it'll already have seen the definition of enum _VM_TYPE, so you've got no problem. There's basically no difference here between including "export.h" in "cpp_header.hpp", and including them both in "cpp_code.cpp" in that order, since you end up with essentially the same code after preprocessing. So all you have to do here is make sure you get your includes in the correct order.
If you ever wanted to #include "cpp_header.hpp" without including "export.h" in a translation unit where you need to access the members of struct VM_INFO (so that leaving it as an incomplete type isn't an option) then "export.h" is just badly designed, and you should break out the definition of anything you might need separately into a new header. If, as the comments suggest, you absolutely cannot do this and are required to have a suboptimal design, then your next best alternative would be to have two versions of "cpp_header.hpp", one which just repeats the definition of enum _VM_TYPE, and one which does not. You'd #include the first version in any translation unit where you do not also #include "export.h", and #include the second version in any translation unit where you do. Obviously any code duplication of this type is inviting problems in the future.
Also, names beginning with an underscore and a capital letter are always reserved in C, so you really shouldn't use them. If a future version of C ever decides to make use of _VM_TYPE, then you'll be stuck with either using an outdated version of C, or having all this code break.
An enum cannot be forward-declared, because the compiler needs to know the size of the enum. The underlying type of an enum is compiler-specific, but usually int. Can you just cast the enum to an int?
"I could be and often am wrong"
I have run into a problem while writing C++ code that needs to compile in Visual Studio 2008 and in GCC 4.6 (and needs to also compile back to GCC 3.4): static const int class members.
Other questions have covered the rules required of static const int class members. In particular, the standard and GCC require that the variable have a definition in one and only one object file.
However, Visual Studio creates a LNK2005 error when compiling code (in Debug mode) that does include a definition in a .cpp file.
Some methods I am trying to decide between are:
Initialize it with a value in the .cpp file, not the header.
Use the preprocessor to remove the definition for MSVC.
Replace it with an enum.
Replace it with a macro.
The last two options are not appealing and I probably won't use either one. The first option is easy -- but I like having the value in the header.
What I am looking for in the answers is a good looking, best practice method to structure the code to make both GCC and MSVC happy at the same time. I am hoping for something wonderfully beautiful that I haven't thought of yet.
I generally prefer the enum way, because that guarantees that it will always be used as an immediate value and never gets any storage. It is recognized as a constant expression by the compiler.
class Whatever {
    enum { // ANONYMOUS!!!
        value = 42
    };
    ...
};
If you can't go that way, #ifdef away the definition in the .cpp for MSVC, because if you #ifdef away the value in the declaration, the member will always get storage; the compiler does not know the value, so it can't inline it (well, "link time code generation" should be able to fix that up if enabled) and can't use it where a constant expression is needed, like non-type template arguments or array sizes.
If you don't dislike the idea of using non-standard hacks, for VC++ there's always __declspec(selectany). My understanding is that it will ensure that at link time, any conflicts are resolved by dropping all but one definition. You can potentially put this in an #ifdef _MSC_VER block.
Visual C++ 2010 accepts this:
// test.hpp:
struct test {
static const int value;
};
// test.cpp:
#include "test.hpp"
const int test::value = 10;
This is still a problem with VS2013. I've worked around it by putting my standard-compliant definition, in the .cpp file, inside an #if that excludes VS.
a.h:
class A
{
public:
static unsigned const a = 10;
};
a.cpp:
#ifndef _MSC_VER
unsigned const A::a;
#endif
I also commented it well, so the next guy in the file knows which compiler to blame.
I have a lot of code in C++, originally built on a PC. I'm trying to make it work with Objective-C on a Mac. To that end, I created an Objective-C framework to house the C++ code and added a thin wrapper. But I'm running into a typedef problem in my C++ code.
When I was working with C++ on a PC I used the BOOL type defined in WinDef.h. So when I moved everything over to the Mac I added typedef int BOOL; to make sure the BOOL type would still compile as expected.
But when I try to compile I get an error: "Conflicting declaration 'typedef int BOOL'". I assume this is because BOOL is already defined in Objective-C. I also can't just use the Objective-C BOOL since it is a signed char and not an int.
While I was looking for possible solutions I found one that mentions undefining BOOL, but I couldn't find how to do that (nor do I know if it actually works). Another suggests renaming BOOL in the C++ files to something that isn't a keyword. This suggestion isn't an option for me because other projects rely on the C++ code. Ideally, any changes I make should stay in one file or at least should not negatively impact the code on a Windows machine.
How can I undefine the Objective-C definition of BOOL for my C++ files and use the C++ definition I added? Is there a better way to deal with this problem?
In case it helps I'm using: Xcode 3.2.5, 64-bit and Mac OS X 10.6.6
Thanks for any guidance!
I'm a little confused by some of the discussion, but instead of typedef int BOOL; how about just:
#ifndef BOOL
#define BOOL int
#endif
If you're using typedef then #undef won't work since they are two different things. #define/#undef operate on preprocessor symbols that perform replacements, whereas a typedef is part of the language that creates an alias of another type.
A preprocessor symbol can be undefined at any point because it is simply an instruction to the preprocessor that tells it to no longer use that definition when performing replacements. However, a typedef can't be undefined, as it is something that gets created within a particular scope, rather than the linear processing that occurs using the preprocessor. (In the same way, you wouldn't expect to be able to declare a global variable int x; and then at some point in your code be able to say 'stop recognizing x as a variable.')
The reason I suggest the #define in my answer is that it's possible that the ObjectiveC #define is being hit only while compiling some of your code. That could be an explanation as to why you may get errors in your C++ when you removed the typedef, but still get a conflict if it is in. But, so, if the assumptions are correct, once you check for the definition prior to trying to define it, you should be able to avoid the conflicting definitions when they occur.
As a final note: in this particular situation, you could also just put your typedef inside the check instead of a #define. However, I gravitated toward doing it the way I did both because it's a very common idiom, and because that block will also prevent you from defining it twice in your C++ code if it ends up included twice. Probably neither very compelling reasons if you very much prefer the typedef and know it's not an issue in the code. :)
AFAIK, BOOL is a #define in Objective-C too. You will have problems with your conflicting BOOL define, even if you manage to
#undef BOOL
because your type and its type do not necessarily match in size and "signedness". Must your BOOL really be an int, instead of whatever it is that Obj-C defines it as? In other words, can't you omit your #define and simply use the Obj-C one?
If you could go back in time, I would say "Don't use typedef int BOOL in your own code".
Now that you've actually done it, though, you are in a bit of a pickle. Generally, you should avoid using external data types for your own code except to interface with external code. Standard types are fine, assuming that you can guarantee compiling with a standard compliant compiler on every platform you target.
The most forward looking solution is to stop using BOOL as a type in your platform agnostic code. In the meantime, you can use all sorts of preprocessor hackery to make your use of BOOL compile, but you might hit some strange link errors if you don't use BOOL in the same way (via #includes) everywhere.
I use CMake to load the FreeImage library, and I'm using Kubuntu 14.x.
I had this problem with
"error: conflicting declaration ‘typedef CARD8 BOOL’"
and I thought it would be good to share my solution with people who have this problem!
Install FreeImage on Linux:
sudo apt-get install libfreeimage-dev
In my CMakeLists.txt file I have:
set(FREEIMAGE_LIBRARY_AND_HEADER_DIRECTORY /usr/libs)
find_path(FREEIMAGE_LIBRARY_AND_HEADER_DIRECTORY FreeImage.h)
find_library(FREEIMAGE_LIBRARY freeimage)
include_directories(${FREEIMAGE_LIBRARY_AND_HEADER_DIRECTORY})
# link FreeImage into the executable target
target_link_libraries(${PROJECT_NAME} ${FREEIMAGE_LIBRARY})
And in my main.cpp I have:
#include <FreeImage.h>
#ifndef CARD8
#define BYTE CARD8
#define BOOL CARD8
#endif
And some extra code to capture OpenGl frame on disk:
void generateImage(int w, int h){ // w, h: width and height of the OpenGL window
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
GLubyte * pixels = new GLubyte[3*w*h];
glReadPixels(0,0,w,h,GL_RGB,GL_UNSIGNED_BYTE, pixels);
FIBITMAP * image = FreeImage_ConvertFromRawBits(pixels,w,h,3 * w, 24, 0x0000FF, 0xFF0000, 0x00FF00, false);
FreeImage_Save(FIF_BMP,image, "../img/text.bmp",0);
//Free resource
FreeImage_Unload(image);
delete[] pixels;
}
I hope this helps anyone who has a problem with this!
Regards
Kahin