I've got an interesting problem that I'm sure someone must have come across. I am writing the front-end UI in Objective-C/Cocoa and the backend in C++. In C++ I have
#define NULL 0
Unfortunately, this has dire consequences for nil, especially with nil-terminated function calls, as I now get the warning "Missing sentinel in method dispatch", which I assume means it couldn't find the nil terminator. This is the only definition I could find for nil:
#ifndef NULL
#define NULL __DARWIN_NULL
#endif /* ! NULL */
#ifndef nil
#define nil NULL
#endif /* ! nil */
which suggests to me that nil is NULL, and that my earlier define for NULL is messing everything up, although I don't know how. NULL is defined in the C++ code so that it can be platform independent. I have tried redefining NULL and nil, but nothing seems to take. Any suggestions on the correct way to go about this would be appreciated.
In both C and C++, defining NULL yourself leads to undefined behavior. Sorry, but you're just not allowed to do that. Instead of trying to define it yourself, include one of the headers that already defines it for you (<cstddef> in C++, <stddef.h> in C).
Reminded me of this question I saw recently: How to wrap a C++ lib in objective-C?
So, what about:
#ifdef __cplusplus
#define NULL 0
#endif
You could #define it while it is being used within your .h/.cpp files, and then, at the end of those files, #undef NULL so that Objective-C's definition takes over. That way the definition is cleaned up.
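A minimal sketch of that pattern (the guard and placement are mine; it assumes you really want a file-local definition rather than simply including <cstddef>, which already provides NULL):
/* at the top of the C++-only file */
#ifndef NULL
#define NULL 0   /* file-local fallback for the C++ code below */
#endif
/* ... C++ code that uses NULL ... */
/* at the end of the file */
#undef NULL      /* let the platform headers' definition of NULL (and hence nil) take over again */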
You can also simply use 0 in C++ instead of NULL.
As part of my homework, I've been given this code to help with the task they've given us... to create a basic shell that supports piping, background processes, and a number of builtin commands, etc. I've read through the code they've given us for parsing...
I'm familiar with #define in C; however, I've not seen it used like in the code below. Namely, what is c for? I'm guessing it has been assigned to mean a character, but I'm not sure:
#define PIPE ('|')
#define BG ('&')
#define RIN ('<')
#define RUT ('>')
#define ispipe(c) ((c) == PIPE)
#define isbg(c) ((c) == BG)
#define isrin(c) ((c) == RIN)
#define isrut(c) ((c) == RUT)
#define isspec(c) (ispipe(c) || isbg(c) || isrin(c) || isrut(c))
Any help or advice much appreciated.
The last five #define statements you give define macros, each taking an argument, which is here always called c. Your first four #define statements are also, technically, macros, but they don't need an argument - they are simply substituted for their 'values' when encountered; frequently, programmers refer to macros with no argument as tokens, with the PIPE token here having a token value of ('|').
Later on in the file (possibly) there will be cases where one or more of these macros is called, and that call will have a value for the actual argument, like this, for example:
if (ispipe(test_arg)) ++npipes;
This macro "call" will be replaced (by the pre-processor) with the following expansion:
if (((test_arg) == ('|'))) ++npipes;
And, similarly, for the other #define XXX(c) macros.
Note: It is quite common to add (lots of) parentheses in macro definitions, just to be sure that the code does what you 'think' it will after the pre-processor has done its stuff.
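As a hedged illustration of why those parentheses matter (BAD_ISPIPE/GOOD_ISPIPE are made-up names, not part of the homework code):
#define PIPE ('|')
#define BAD_ISPIPE(c)  (c == PIPE)     /* argument not parenthesised */
#define GOOD_ISPIPE(c) ((c) == PIPE)   /* argument parenthesised, as in the original */
/* BAD_ISPIPE(x & 0xFF) expands to (x & 0xFF == PIPE); since == binds tighter
   than &, that is x & (0xFF == PIPE), which is not what the caller meant.
   GOOD_ISPIPE(x & 0xFF) expands to ((x & 0xFF) == PIPE), as intended. */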
Feel free to ask for further explanation and/or clarification.
#define is not a function, it is a preprocessor directive.
c could be anything. If you write ispipe(42), then the preprocessor will change it into ((42) == PIPE). If you write ispipe(while(1);), then the preprocessor will change it into ((while(1);) == PIPE), which will dumbfound the compiler when it reads it.
The preprocessor is blind, and does not know much about C syntax, and nothing of its semantics; the only way to understand what c is supposed to be is either to reverse-engineer the intended usage, or to ask whoever wrote the code without comments to tell you what they meant.
After the edit, it is rather reasonable to expect that c should be a char, in order to be meaningfully compared to '|' and similar. But even passing 0xDEADBEEF should compile correctly (the comparison would simply evaluate to false).
I am trying to use a C++ library named MP4v2 in Swift. It is mostly working, in that I can call some functions, use some classes, etc.
I am having trouble with a particular function that returns a void pointer. It is NULL on failure, or some other value on success. There is a constant defined to check with, but neither that nor checking for nil works.
if file != MP4_INVALID_FILE_HANDLE {
throws /<path_to_project>/main.swift:19:12: Use of unresolved identifier 'MP4_INVALID_FILE_HANDLE', but it DOES exist (other constants work).
if file != NULL just causes the same problem, and if file != nil is never true, even if the function failed. What am I doing wrong?
Looking at MP4v2 documentation, here is the definition of the macro to check for invalid handle:
#define MP4_INVALID_FILE_HANDLE ((MP4FileHandle)NULL)
The reason it cannot be used in Swift is because it involves a NULL. In fact, if you define something like
#define MY_NULL NULL
in your Objective-C(++) code and try to use it in Swift, Swift will suggest that you use nil instead.
The handle type MP4FileHandle is
typedef void* MP4FileHandle;
So, if you are calling a function like
MP4FileHandle aCPPFunction()
You should be able to check the return value as follows in Swift:
let h : MP4FileHandle = aCPPFunction()
if h != nil
{
// The handle is valid and can be given as an argument to
// other library functions.
}
else
{
// The handle is NULL
}
I understand you tried this; it should work, so please double-check. If for whatever strange reason it doesn't work for you, there are some other options:
Write a simple helper function in C, C++, Objective-C or Objective-C++ to check whether the handle is valid and return an integer flag, which should be easily understood by Swift (see the sketch after this list).
Check h.hashValue. If it is 0, then the handle is invalid; otherwise it is valid. This is a bad, undocumented hack, but it has worked for me. I would stay away from this one.
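For the first option, here is a minimal sketch of such a helper. The file and function names are my own, and it assumes the usual <mp4v2/mp4v2.h> umbrella header and that the helper is exposed to Swift through the bridging header:
// MP4Helpers.mm - hypothetical file, compiled as Objective-C++
#include <mp4v2/mp4v2.h>

extern "C" int MP4HandleIsValid(MP4FileHandle h)
{
    // MP4_INVALID_FILE_HANDLE is ((MP4FileHandle)NULL), so this is just a
    // null check, returned as an int that Swift imports as Int32.
    return (h != MP4_INVALID_FILE_HANDLE) ? 1 : 0;
}
From Swift the check then becomes something like if MP4HandleIsValid(file) != 0 { ... }.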
I'm trying to build a program with the Eclipse IDE, but I get the error mentioned below.
#if defined(__cplusplus) || defined(c_plusplus)
extern "C" {
#endif
tCRU_BUF_CHAIN_HEADER *CRU_BUF_Allocate_MsgBufChain ARG_LIST((UINT4 u4Size,UINT4 u4ValidOffsetValue));
[some more macros where this error comes]
#if defined(__cplusplus) || defined(c_plusplus)
}
#endif
is one of the places where it happens; the error is:
"expected initialiser before 'ARG_LIST'"
To be precise, there are 18 declarations of this type that give the error. The moment I delete the "ARG_LIST" part, the error goes away, but because this isn't code that I created, I don't want to delete it.
I tried to find a solution on the net but couldn't find anything, so now I'm hoping someone here can help me. If you need any more information, I'll try to provide it as fast as possible.
I think you can safely delete the ARG_LIST part. Macros like ARG_LIST were used to keep code compatible with old, pre-ANSI (1970s K&R) C compilers, where function declarations didn't specify the parameters they took. For example, you declared a function like this:
tCRU_BUF_CHAIN_HEADER *CRU_BUF_Allocate_MsgBufChain();
And you could call it with any number of arguments.
Then, when full function signatures were added to the language, programmers defined macros to take advantage of type checking in compilers that supported it, but still make the code compatible with compilers that didn't support it:
#ifdef FULL_SIGNATURES_SUPPORTED
#define ARG_LIST(list) list
#else
#define ARG_LIST(list) ()
#endif
Nowadays all compilers support full signatures, so there's no point in using such macros.
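If you would rather not edit the vendor header at all, a possible workaround (a sketch, assuming ARG_LIST is only ever used in this prototype-wrapping role; the header name here is made up) is to supply the missing macro yourself before the header is included:
// In your own source, before pulling in the library header:
#ifndef ARG_LIST
#define ARG_LIST(list) list   // modern compilers: keep the full parameter list
#endif
#include "cru_buf.h"          // hypothetical name of the header declaring CRU_BUF_Allocate_MsgBufChain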
OK, I have some C++ code in a header that is declared like this:
void StreamOut(FxStream *stream,const FxChar *name = nil);
and I get the error:
'nil' was not declared in this scope
nil is a pascal thing, correct?
Should I be using NULL?
I thought they were both the same or at least Zero, no?
In C++ you need to use NULL, 0, or, in some brand-new compilers, nullptr. The use of NULL vs. 0 can be a bit of a debate in some circles, but IMHO NULL is the more popular choice over 0.
nil does not exist in standard C++. Use NULL instead.
Yes. It's NULL in C and C++, while it's nil in Objective-C.
Each language has its own identifier for "no object". In the C standard library, NULL is a macro defined as ((void *)0). In the C++ standard library, NULL is a macro defined as 0 or 0L.
However, IMHO you should never use 0 in place of NULL, because NULL helps the readability of the code, just like named constants do: without NULL, the value 0 is used for null pointers, as the base index value in loops, and as counts/sizes for empty lists, which makes it harder to know which is which. It's also easier to grep for.
0 is the recommended and common style for C++
If you run a search through glibc you'll find this line of code:
#define NULL 0
It's just a standard way (not sure if it was ever published anywhere) of marking empty pointers. A variable whose value is 0 still holds a value; a pointer pointing to 0 (0x0000..., i.e. decimal zero) is actually pointing nowhere. It's just for readability.
int *var1, var2;
var1 = 0;
var2 = 0;
The above two assignments are not the same, though they look the same: var1 is a pointer being set to null, while var2 is an int being set to zero.
just add at the beginning
#define null '\0'
or whatever you want instead of null, and stick with what you prefer. The null concept in C++ is just related to a pointer pointing to nothing (0x0).
Mind that every compiler may have its own definition of null, nil, NULL, whatever - but in the end it is still 0.
Probably in the source you are looking at there is a
#define nil '\0'
somewhere in a header file.
I saw some comments on why not to use 0. Generally people don't like magic numbers, or numbers with meaning behind them. Give them a name. I would rather see ANSWER_TO_THE_ULTIMATE_QUESTION over 42 in code.
As for nil, I know Obj-C uses nil as well. I would hate to think that someone went against the very popular convention (or at least what I remember) of NULL, which I thought was defined in a standard library header somewhere. I haven't done C++ in a while though.
Recently I discovered in a relatively large project that ugly runtime crashes occurred because various headers were included in a different order in different .cpp files.
These headers contained #pragma pack directives, and these pragmas were sometimes not 'closed' (I mean, not set back to the compiler default with #pragma pack()), resulting in different object layouts in different object files. No wonder the application crashed when it accessed struct members of objects created in one module and passed to another module, or when derived classes accessed members from base classes.
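To illustrate the mechanism with a made-up example (not code from the project):
// evil.h - changes the packing and never restores it
#pragma pack(1)
struct Wire { char tag; int value; };   // 5 bytes under pack(1)
// <-- the closing #pragma pack() is missing here

// other.h - an unrelated struct
struct Item { char c; int n; };         // 8 bytes with default packing,
                                        // 5 bytes if evil.h was included first
// A .cpp that includes evil.h before other.h sees a different layout of Item
// than a .cpp that includes other.h alone - and passing an Item* between
// those two translation units then reads the members at the wrong offsets.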
Since I like the idea of deriving a more general debugging and assertion strategy from every bug I find, I would really like to assert that object layouts are always and everywhere the same.
So it would be easy to assert
ASSERT( offsetof(membervar) == 4 )
But this would not catch a different layout in another module, and it would require manual updates whenever the struct layout changes... so my favourite idea would be something like
ASSERT( offsetof(membervar) == offsetof(othermodule_membervar) )
Would this be possible with an assertion? Or is this a case for a unit test?
Thanks,
H
ASSERT( offsetof(membervar) == offsetof(othermodule_membervar) )
I can't see a way to make this technically possible. Further, even if it were physically possible, it isn't practical. You'd need an assert for every pair of source files:
ASSERT( offsetof(A.c::MyClass.membervar) == offsetof(B.c::MyClass.membervar) )
ASSERT( offsetof(A.c::MyClass.membervar) == offsetof(C.c::MyClass.membervar) )
ASSERT( offsetof(A.c::MyClass.membervar) == offsetof(D.c::MyClass.membervar) )
ASSERT( offsetof(B.c::MyClass.membervar) == offsetof(C.c::MyClass.membervar) )
ASSERT( offsetof(B.c::MyClass.membervar) == offsetof(D.c::MyClass.membervar) )
etc
You might be able to get away with this by asserting sizeof(class) in different files. If the packing is causing the size of the object to be smaller, then I would expect sizeof() to show that up.
You could also do this as a static assert, using C++0x's static_assert, or Boost's (or a hand-rolled one, of course).
On the part of not wanting to do this in every file, I would recommend putting together a header file that includes all the headers you're worried about, and the static_asserts.
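A minimal sketch of such a header (the struct name and the expected size of 24 are placeholders; use whatever your platform and intended packing actually produce):
// layout_checks.h - include this from every translation unit you care about
#include "mystruct.h"   // hypothetical project header defining MyStruct

// C++0x/C++11 static_assert; use BOOST_STATIC_ASSERT or a hand-rolled
// negative-array-size trick on older compilers.
static_assert(sizeof(MyStruct) == 24,
              "MyStruct has an unexpected size - check for an unbalanced #pragma pack");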
Personally, though, I'd just recommend searching through the code base for the list of pragmas and fixing them.
Wendy,
In Win32, there are single functions that can populate different versions of a given struct. Over the years, the FOOBAR struct might have new features added to it, so they create a FOOBAR2 or FOOBAREX. In some cases there are more than two versions.
Anyway, the way they handle this is to have the caller pass in sizeof(theStruct) in addition to the pointer to the struct:
FOOBAREX foobarex = {0};
long lResult = SomeWin32Api(sizeof(foobarex), &foobarex);
Within the implementation of SomeWin32Api(), they check the first parameter and determine which version of the struct they're dealing with.
You could do something similar in a debug build to ensure that the caller and callee agree on the size of the struct being referred to, and assert if the value doesn't match the expected size. With macros, you might even be able to automate/hide this so that it only happens in a debug build.
Unfortunately, this is a run-time check and not a compile-time check...
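For illustration, a hedged sketch of that debug-only handshake (all names are made up; with NDEBUG defined in release builds the assert compiles away):
#include <cassert>
#include <cstddef>

struct Packet { char tag; int value; double payload; };

// The callee asserts that the caller was compiled with the same layout.
void ProcessPacket(std::size_t callerSize, const Packet* p)
{
    assert(callerSize == sizeof(Packet)
           && "Packet layout differs between caller and callee modules");
    (void)p; // ... use p ...
}

// Caller side, possibly in another module:
//   Packet pkt = {};
//   ProcessPacket(sizeof(pkt), &pkt);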
What you want isn't directly possible as such. If you're using VC++, the following may be of interest:
http://blogs.msdn.com/vcblog/archive/2007/05/17/diagnosing-hidden-odr-violations-in-visual-c-and-fixing-lnk2022.aspx
There's probably scope to create some way of semi-automating the process it describes, collating the output and cross-referencing.
To detect this sort of problem somewhat more automatically, the following occurs to me. Create a file that defines a struct that will have a particular size with the designated default packing amount, but a different size with different pack values. Also include some kind of static assert that its size is correct. For example, if the default is 4-byte packing:
struct X {
    char c;
    int i;
    double d;
};
extern const char g_check[sizeof(X)==16?1:-1];
Then #include this file at the start of every header (just write a program to put the extra includes in if there's too many to do by hand), and compile and see what happens. This won't directly detect changes in struct layout, just non-standard packing settings, which is what you're interested in anyway.
(When adding new headers one would put this #include at the top, along with the usual ifdef boilerplate and so on. This is unfortunate but I'm not sure there's any way around it. The best solution is probably to ask people to do it, but assume they'll forget, and run the extra-include-inserting program every now and again...)
Apologies for posting an answer - which this is not - but I don't know how to post code in comments. Sorry.
To wrap Brone's idea in a macro, here is what we currently use (feel free to edit it):
/** Our own assert macro, which will trace a FATAL error message if the assert
* fails. A FATAL trace will cause a system restart.
* Note: I would love to use CPPUNIT_ASSERT_MESSAGE here, for a nice clean
* test failure if testing with CppUnit, but since this header file is used
* by C code and the relevant CppUnit include file uses C++ specific
* features, I cannot.
*/
#ifdef TESTING
/* ToDo: might want to trace a FATAL if integration testing */
#define ASSERT_MSG(subsystem, message, condition) if (!(condition)) {printf("Assert failed: \"%s\" at line %d in file \"%s\"\n", message, __LINE__, __FILE__); fflush(stdout); abort();}
/* we can also use this, which prints the failed condition as its message */
#define ASSERT_CONDITION(subsystem, condition) if (!(condition)) {printf("Assert failed: \"%s\" at line %d in file \"%s\"\n", #condition, __LINE__, __FILE__); fflush(stdout); abort();}
#else
#define ASSERT_MSG(subsystem, message, condition) if (!(condition)) DebugTrace(FATAL, subsystem, __FILE__, __LINE__, "%s", message);
#define ASSERT_CONDITION(subsystem, condition) if (!(condition)) DebugTrace(FATAL, subsystem, __FILE__, __LINE__, "%s", #condition);
#endif
What you would be looking for is an assertion like ASSERT_CONSISTENT(A_x, offsetof(A,x)), placed in a header file. Let me explain why, and what the problem is.
Because the problem exists across translation units, you can only detect the error at link time. That means you need to force the linker to spit out an error. Unfortunately, most cross-translation-unit problems are formally of the "no diagnostic required" kind. The most familiar one is the ODR (One Definition Rule). We can trivially cause ODR violations with such assertions, but you just can't rely on the linker to warn you about them. If you could, the implementation could be as simple as
#define ASSERT_CONSISTENT(label, x) class ASSERT_ ## label { char test[x]; };
But if the linker doesn't notice these ODR violations, this will pass by silently. And here lies the problem: the linker really only needs to complain if it can't find something.
With two macros the problem is solved:
template <int i> class dummy; // needed to differentiate functions
#define ASSERT_DEFINE(label, x) void ASSERT_ ## label(dummy<x>&) { }
#define ASSERT_CHECK(label, x) void ASSERT_ ## label(dummy<x>&); static void (*ASSERT_check_ ## label)(dummy<x>&) = &ASSERT_ ## label;
You'd need to put the ASSERT_DEFINE macro in a .cpp, and ASSERT_CHECK in its header. If the x value checked isn't the x value defined for that label, you're taking the address of an undefined function. Now, a linker doesn't need to warn about multiple definitions, but it must warn about missing definitions.
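For illustration, hypothetical usage with the offsetof idea from the question (this assumes the two macros above are visible from a shared header, that your compiler accepts offsetof as a template argument - GCC/Clang's __builtin_offsetof-based offsetof does, strict C++03 need not - and that the unused check pointer is not discarded, which holds in typical unoptimised debug builds):
// a.h - included by every translation unit that uses A
#include <cstddef>
struct A { char pad; int x; };
ASSERT_CHECK(A_x, offsetof(A, x))

// a.cpp - the single authoritative definition
#include "a.h"
ASSERT_DEFINE(A_x, offsetof(A, x))

// If some other .cpp sees A with a different layout (say, because of an
// unbalanced #pragma pack earlier in its include chain), its ASSERT_CHECK
// references ASSERT_A_x(dummy<other_offset>&), which is never defined, and
// the link fails with an unresolved external.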
BTW, for this particular problem, see Diagnosing Hidden ODR Violations in Visual C++ (and fixing LNK2022)