Unaligned warning when initializing SQL DBPROP structure - C++

I have a Visual Studio 2008 C++ project for Windows Mobile 6 ARMV4I using Microsoft SQLCE 3.5. When I initialize the VARIANT component of a DBPROP structure (as below), I get a compiler warning message: C4366: The result of the unary '&' operator may be unaligned.
#include <sqlce_oledb.h>
DBPROP prop = { 0 };
::VariantInit( &prop.vValue ); // warning here
I can add an __unaligned cast to the line, but because VariantInit doesn't take an __unaligned pointer, I just get a different warning (C4090).
I notice that the DBPROP definition in *sqlce_oledb.h* includes packing directives for MIPS architecture:
#if defined(MIPSII_FP) || defined(MIPSIV) || defined(MIPSIV_FP)
#pragma pack(push,8)
#endif
typedef struct tagDBPROP
{
DBPROPID dwPropertyID;
DBPROPOPTIONS dwOptions;
DBPROPSTATUS dwStatus;
DBID colid;
VARIANT vValue;
} DBPROP;
#if defined(MIPSII_FP) || defined(MIPSIV) || defined(MIPSIV_FP)
#pragma pack(pop)
#endif
So, I can make the warning go away by doing something like this:
#define MIPSIV
#include <sqlce_oledb.h>
#undef MIPSIV
But, that feels dirty. My question is: Did the designers just overlook ARM in their packing directives (meaning I should do the dirty and claim to be a MIPS processor)? Or, should I just silence the warning and ignore it? Or, is there something else I should do?
Thanks,
PaulH

If you plan to pass the DBPROP structure to other APIs, do NOT change its alignment, since that can change the packing and it will stop working. I notice the following comment in the header:
#if 0
//DBPROPINFO is an unaligned structure. MIDL workaround. 42212352
typedef DBPROPINFO *PDBPROPINFO;
#else
typedef DBPROPINFO UNALIGNED * PDBPROPINFO; //????????????
#endif
So it seems someone was aware of a similar problem but did not change the packing, probably to avoid breaking existing code. I don't see the rest of your code from here, but you could try one of the following:
VARIANT tmp; ::VariantInit(&tmp); prop.vValue = tmp;
prop.vValue.vt = VT_EMPTY;


What is a good technique for compile-time detection of mismatched preprocessor-definitions between library-code and user-code?

Motivating background info: I maintain a C++ library, and I spent way too much time this weekend tracking down a mysterious memory-corruption problem in an application that links to this library. The problem eventually turned out to be caused by the fact that the C++ library was built with a particular -DBLAH_BLAH compiler-flag, while the application's code was being compiled without that -DBLAH_BLAH flag, and that led to the library-code and the application-code interpreting the classes declared in the library's header-files differently in terms of data-layout. That is: sizeof(ThisOneParticularClass) would return a different value when invoked from a .cpp file in the application than it would when invoked from a .cpp file in the library.
So far, so unfortunate -- I have addressed the immediate problem by making sure that the library and application are both built using the same preprocessor-flags, and I also modified the library so that the presence or absence of the -DBLAH_BLAH flag won't affect the sizeof() of its exported classes... but I feel like that wasn't really enough to address the more general problem of a library being compiled with different preprocessor-flags than the application that uses it.
Ideally I'd like to find a mechanism that would catch that sort of problem at compile-time, rather than allowing it to silently invoke undefined behavior at runtime. Is there a good technique for doing that? (All I can think of is to auto-generate a header file with #ifdef/#ifndef tests for the application code to #include, which would deliberately #error out if the necessary #defines aren't set, or perhaps would automatically set the appropriate #defines right there... but that feels a lot like reinventing automake and similar, which seems like potentially opening a big can of worms.)
One way of implementing such a check is to provide definition/declaration pairs for global variables whose names change according to whether or not particular macros/tokens are defined. Doing so will cause a linker error if a declaration in a header, when included by a client source file, does not match the one used when building the library.
As a brief illustration, consider the following section, to be added to the "MyLibrary.h" header file (included both when building the library and when using it):
#ifdef FOOFLAG
extern int fooflag;
static inline int foocheck = fooflag; // Forces a reference to the above external
#else
extern int nofooflag;
static inline int foocheck = nofooflag; // <ditto>
#endif
Then, in your library, add the following code, either in a separate ".cpp" module, or in an existing one:
#include "MyLibrary.h"
#ifdef FOOFLAG
int fooflag = 42;
#else
int nofooflag = 42;
#endif
This will (or should) ensure that all component source files of the executable are compiled using the same "state" for the FOOFLAG token. I haven't actually tested this when linking to an object library, but it works when building an EXE file from two separate sources: it will only build if both or neither have the -DFOOFLAG option; if one has it but the other doesn't, then the linker fails with (in Visual Studio/MSVC):
error LNK2001: unresolved external symbol "int fooflag"
(?fooflag@@3HA)
The main problem with this is that the error message isn't especially helpful (to a third-party user of your library); that can be ameliorated (perhaps) by appropriate use of names for those check variables.[1]
An advantage is that the system is easily extensible: as many such check variables as required can be added (one for each critical macro token), and the same idea can also be used to check for actual values of said macros, with code like the following:
#if FOOFLAG == 1
int fooflag1 = 42;
#elif FOOFLAG == 2
int fooflag2 = 42;
#elif FOOFLAG == 3
int fooflag3 = 42;
#else
int fooflagX = 42;
#endif
[1] For example, something along these lines (with suitable modifications in the header file):
#ifdef FOOFLAG
int CANT_DEFINE_FOOFLAG = 42;
#else
int MUST_DEFINE_FOOFLAG = 42;
#endif
Important Note: I have just tried this technique using the clang-cl compiler (in Visual Studio 2019) and the linker failed to catch a mismatch, because it is completely optimizing away all references to the foocheck variable (and, thus, to the dependent fooflag). However, there is a fairly trivial workaround, using clang's __attribute__((used)) directive (which also works for the GCC C++ compiler). Here is the header section for the last code snippet shown, with that workaround added:
#if defined(__clang__) || defined(__GNUC__)
#define KEEPIT __attribute__((used))
// Equivalent directives may be available for other compilers ...
#else
#define KEEPIT
#endif
#ifdef FOOFLAG
extern int CANT_DEFINE_FOOFLAG;
KEEPIT static inline int foocheck = CANT_DEFINE_FOOFLAG; // Forces reference to above
#else
extern int MUST_DEFINE_FOOFLAG;
KEEPIT static inline int foocheck = MUST_DEFINE_FOOFLAG; // <ditto>
#endif
In the Microsoft C++ compiler and linker, the #pragma detect_mismatch directive can be used in a very similar spirit to the solution presented in Adrian Mole's answer. As with that answer, mismatches are detected at link time, not at compile time. The pragma "places a record in an object. The linker checks these records for potential mismatches."
Say something like this is in a header file that is included in different compilation units:
#ifdef BLAH_BLAH
#pragma detect_mismatch("blah_blah_enabled", "true")
#else
#pragma detect_mismatch("blah_blah_enabled", "false")
#endif
Attempting to link object files with differing values of "blah_blah_enabled" will fail with LNK2038:
mismatch detected for 'name': value 'value_1' doesn't match value 'value_2' in filename.obj
Based on the mention of automake in the question, I assume that the asker isn't using the Microsoft C++ toolchain. I'm posting this here in case it helps someone in a similar situation who is using that toolchain.
I believe the closest MSVC analogue to the __attribute__((used)) in Adrian Mole's answer is the /INCLUDE:symbol-name linker option, which can be injected from a compilation unit via #pragma comment(linker, "/include:symbol-name").
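For example, combined with the check-variable scheme above, something along these lines might work. This is only a sketch: the extern "C" linkage is my own addition (to keep the symbol name predictable), and the exact decorated name the linker expects differs between x86 and x64.
// MyLibrary.h (MSVC-only sketch)
#ifdef FOOFLAG
extern "C" int fooflag_check;
// Force the linker to resolve this symbol even if nothing references it.
// On x64 the decorated name of an extern "C" data object is just its name;
// on x86 it gains a leading underscore ("_fooflag_check").
#pragma comment(linker, "/include:fooflag_check")
#else
extern "C" int nofooflag_check;
#pragma comment(linker, "/include:nofooflag_check")
#endif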
As an alternative to Adrian's (excellent) answer, here's a suggestion for a runtime check which might be of interest.
For the sake of example, let's assume there are two flags, FOO1 and FOO2. First of all, for my scheme to work, and since the OP seems to be using #ifdef rather than #if, the library needs to provide a header file that looks like this (header guards omitted for clarity):
// MyLibrary_config_check.h
#ifdef FOO1
#define FOO1_VAL 1
#else
#define FOO1_VAL 0
#endif
#ifdef FOO2
#define FOO2_VAL 1
#else
#define FOO2_VAL 0
#endif
... etc ...
Then, the same header file declares the following function:
bool CheckMyLibraryConfig (int expected_flag1, int expected_flag2 /* , ... */);
The library then implements this like so:
bool CheckMyLibraryConfig (int expected_flag1, int expected_flag2 /* , ... */)
{
static const int configured_flag1 = FOO1_VAL;
static const int configured_flag2 = FOO2_VAL;
// ...
if (expected_flag1 != configured_flag1)
return false;
if (expected_flag2 != configured_flag2)
return false;
// ...
return true;
}
And the consumer of the library can then do:
if (!CheckMyLibraryConfig (FOO1_VAL, FOO2_VAL /* , ... */))
halt_and_catch_fire ();
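A small convenience macro in the same header keeps the call site short; crucially, because the macro expands in the consumer's translation unit, FOO1_VAL and FOO2_VAL reflect the consumer's own flags (the macro name here is hypothetical):
// In MyLibrary_config_check.h -- CHECK_MYLIBRARY_CONFIG is a hypothetical helper name
#define CHECK_MYLIBRARY_CONFIG() CheckMyLibraryConfig (FOO1_VAL, FOO2_VAL /* , ... */)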
On the downside, it's a runtime check, and that's not what was asked for. On the upside, CheckMyLibraryConfig could instead be implemented something like this:
std::string CheckMyLibraryConfig (int expected_flag1, int expected_flag2 /* , ... */)
{
static const int configured_flag1 = FOO1_VAL;
static const int configured_flag2 = FOO2_VAL;
if (expected_flag1 != configured_flag1)
return std::string ("Expected value of FOO1 does not match configured value, expected: ") + std::to_string (expected_flag1) + ", configured: " + std::to_string (configured_flag1);
// ... likewise for FOO2, etc. ...
return "";
}
And the consumer can then check for and display any non-empty string returned. Get as fancy as you like (that code could certainly be factored better); check all the flags before returning a string that reports all the mismatches, go crazy.

How do I make compilation stop nicely if a constant is used in my source file?

I want to test for the use of a constant in a source file and if it is used, stop compilation.
The constant in question is defined in a generic driver file which a number of driver implementations inherit from. However, its use has been deprecated, so subsequent updates to each driver should switch to using a new method call instead of this const value.
This obviously doesn't work:
#ifdef CONST_VAR
#error "custom message"
#endif
How can I do this elegantly? As it's an int, I could redefine CONST_VAR as a string and let compilation fail, but that might make it difficult for developers to understand what actually went wrong. I was hoping for a nice #error-style message.
Any suggestions?
The poison answer here is excellent. However, for older versions of VC++ that don't support [[deprecated]], I found the following works.
Use [[deprecated]] (C++14 compilers) or __declspec(deprecated)
To treat this warning as an error in a compilation unit, put the following pragma near the top of the source file.
#pragma warning(error: 4996)
e.g.
const int __declspec(deprecated) CLEAR_SOURCE = 0;
const int __declspec(deprecated("Use of this constant is deprecated. Use ClearFunc() instead. See: foobar.h"));
AFAIK, there's no standard way to do this, but gcc and clang's preprocessors have #pragma poison which allows you to do just that -- you declare certain preprocessor tokens (identifiers, macros) as poisoned and if they're encountered while preprocessing, compilation aborts.
#define foo
#pragma GCC poison printf sprintf fprintf foo
int main()
{
sprintf(some_string, "hello"); //aborts compilation
foo; //ditto
}
For warnings/errors after preprocessing, you can use C++14's [[deprecated]] attribute, whose warnings you can turn into errors with clang/gcc's -Werror=deprecated-declarations.
int foo [[deprecated]];
[[deprecated]] int bar ();
int main()
{
return bar()+foo;
}
This second approach obviously won't work for preprocessor macros.

#pragma pack(show) with GCC

Is there a way to show the memory "pack" size with GCC ?
In Microsoft Visual C++, I am using:
#pragma pack(show)
which displays the value in a warning message; see Microsoft's documentation.
What is the equivalent with GCC?
Since I can't see such functionality listed in the pertinent documentation, I'm going to conclude that GCC cannot do this.
I use a static assertion whenever I pack a structure and want to see its size.
/*
The static_assert macro will generate an error at compile-time, if the predicate is false
but will only work for predicates that are resolvable at compile-time!
E.g.: to assert the size of a data structure, static_assert(sizeof(struct_t) == 10)
*/
#define STATIC_ASSERT(COND,MSG) typedef char static_assertion_##MSG[(!!(COND))*2-1]
/* token pasting madness: */
#define COMPILE_TIME_ASSERT3(X,L) STATIC_ASSERT(X,at_line_##L) /* add line-number to error message for better warnings, especially GCC will tell the name of the variable as well */
#define COMPILE_TIME_ASSERT2(X,L) COMPILE_TIME_ASSERT3(X, L) /* expand line-number */
#define static_assert(X) COMPILE_TIME_ASSERT2(X, __LINE__) /* call with line-number macro */
#include <stdint.h> /* for uint8_t, uint32_t */
#define PACKED __attribute__ ((gcc_struct, __packed__))
typedef struct {
uint8_t bytes[3];
uint32_t looong;
} PACKED struct_t;
static_assert(sizeof(struct_t) == 7);
This will give you a compile-time error whenever the static assertion fails.
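As an aside, if C++11 (or C11's _Static_assert) is available, the built-in static_assert does the same job without the macro machinery; a sketch (the struct name here is just illustrative):
#include <cstdint>

// Same layout check as above, but using the language-level static_assert.
struct __attribute__((__packed__)) packed_example {
    std::uint8_t  bytes[3];
    std::uint32_t looong;
};

static_assert(sizeof(packed_example) == 7, "packed_example is not packed to 7 bytes");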

suppress warnings on multiple compiler builds

Is there a generic way to suppress warnings that I can use?
The problem is that there are times I may build using one compiler (gcc), and I have a partner who uses some of the common code but builds with a different compiler, so the warning numbers are different.
The only way I could think of doing this was making a macro, defined in a file, to which I would pass some generic value:
SUPPRESS_WARNING_BEGIN(NEVER_USED)
//code
SUPPRESS_WARNING_END
then the file would have something like:
#if COMPILER_A
NEVER_USED = 245
#endif
#if COMPILER_B
NEVER_USED = 332
#endif
#define SUPPRESS_WARNING_BEGIN(x) \
#if COMPILER_A
//Compiler A suppress warning x
#endif
#if COMPILER_B
//Compiler B suppress warning x
#endif
#define SUPPRESS_WARNING_END \
#if COMPILER_A
// END Compiler A suppress warning
#endif
#if COMPILER_B
// END Compiler B suppress warning
#endif
I don't know if there is an easier way. Also, I know ideally we would all just use the same compiler, but that is unfortunately not an option. I'm just trying to find the least complicated way to support something like this, and am hoping there is a simpler way than the one mentioned above.
thanks
There's no portable way to do that. Different compilers do it in different ways (e.g. #pragma warning, #pragma GCC diagnostic, etc.).
The easiest and best thing to do is to write code that does not generate any warnings with any compiler at any warning level.
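If you do need to silence a specific warning around a block of code, the usual approach is to hide the compiler-specific pragmas behind your own macros. A minimal sketch (the macro names and the choice of "unused variable" warning are just illustrative):
#if defined(_MSC_VER)
  #define SUPPRESS_UNUSED_BEGIN __pragma(warning(push)) __pragma(warning(disable: 4101))
  #define SUPPRESS_UNUSED_END   __pragma(warning(pop))
#elif defined(__GNUC__) || defined(__clang__)
  #define SUPPRESS_UNUSED_BEGIN _Pragma("GCC diagnostic push") \
                                _Pragma("GCC diagnostic ignored \"-Wunused-variable\"")
  #define SUPPRESS_UNUSED_END   _Pragma("GCC diagnostic pop")
#else
  #define SUPPRESS_UNUSED_BEGIN
  #define SUPPRESS_UNUSED_END
#endif

void some_function()
{
    SUPPRESS_UNUSED_BEGIN
    int unused_local = 0;   // no "unused variable" warning between the markers
    SUPPRESS_UNUSED_END
}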
If your goal is to suppress warnings about unused variables, I recommend using a macro:
#define UNUSED(x) ((void)sizeof(x))
...
void some_function(int x, int y)
{
// No warnings will be generated if x is otherwise unused
UNUSED(x);
....
}
The sizeof operator is evaluated at compile-time, and the cast to void produces no result, so any compiler will optimize the UNUSED statement away into nothing but consider the operand to be used.
GCC also has the unused attribute:
// No warnings will be generated if x is otherwise unused
int x __attribute__((unused));
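For what it's worth, if C++17 is available, the standard [[maybe_unused]] attribute covers the same case portably:
// No warnings on any conforming compiler if x is otherwise unused
[[maybe_unused]] int x;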

vs2010 C4353 why isn't this an error

I ran into this today in an if statement, and after looking into it, I found that these are all valid statements that generate C4353. My only guess is that this is the old way of doing a no-op in C. Why is this not an error? When would you use this to do anything useful?
int main()
{
nullptr();
0();
(1 == 2)();
return 0;
}
Using constant 0 as a function expression is an extension that is specific to Microsoft. They implemented this specifically because they saw a reason for it, which explains why it wouldn't make sense to treat it as an error. But since it's non-standard, the compiler emits a warning.
You are correct that it is an alternative to using __noop().
All of these:
nullptr();
0();
(1 == 2)();
are no-op statements (meaning they don't do anything).
By the way, I hope you are not ignoring warnings. Most of the time it is good practice to fix all warnings.
As explained in the C4353 warning page and in the __noop intrinsic documentation, the use of 0 as a function expression instructs the Microsoft C++ compiler to ignore calls to the function but still generate code that evaluates its arguments (for side effects).
The example given is a trace macro that gets #defined either to __noop or to a print function, depending on the value of the DEBUG preprocessor symbol:
#include <stdio.h>

#if DEBUG
#define PRINT printf_s
#else
#define PRINT __noop
#endif
int main() {
PRINT("\nhello\n");
}
The MSDN page for that warning has ample explanation and a motivating example:
// C4353.cpp
// compile with: /W1
void MyPrintf(void){};
#define X 0
#if X
#define DBPRINT MyPrintf
#else
#define DBPRINT 0 // C4353 expected
#endif
int main(){
DBPRINT();
}
As you can see, it exists to support this (somewhat archaic) macro usage.