typedef of nested structs - c++

I'm trying to typedef a group of nested structs using this:
struct _A
{
    struct _Sim
    {
        struct _In
        {
            STDSTRING UserName;
            VARIANT Expression;
            int Period;
            bool AutoRun;
            //bool bAutoSave;
        } In;
        struct _Out
        {
            int Return;
        } Out;
    } Sim;
} A;
typedef _A._Sim._In SIM_IN;
The thing is, the VS2010 editor likes it: it recognizes the elements in the typedef, and I can use it as a parameter type in functions. But when I build, I first get warning C4091 ("ignored on left when no variable is declared"), which then leads to error C2143 ("missing ';' before '.'").
The idea of the typedef is to make managing type definitions (in pointers, prototypes, etc.) for _A._Sim._In easy with one name... a seemingly perfect use for typedef, if only the compiler allowed it.
How can I refer to the nested structure with one name, to make pointer management and type specification easier than writing out the entire nested name (_A._Sim._In)?

The dot operator is a postfix operator applied to an object (in C terms), i.e. you cannot apply it to a type.
To achieve what you want you can use a function or a macro, e.g.:
#define SIM_IN(x) x.Sim.In
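In C++ specifically (as opposed to C), the nested type itself can be named with the scope resolution operator, so a plain typedef does work; a minimal sketch with the members trimmed:
struct _A
{
    struct _Sim
    {
        struct _In
        {
            int Period;
            bool AutoRun;
        } In;
    } Sim;
} A;

typedef _A::_Sim::_In SIM_IN;    // '::' names the nested type; '.' only works on objects

SIM_IN* CurrentInput() { return &A.Sim.In; }   // pointer management with the short name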

It might not be preferable to do so but, if it cannot be achieved using a typedef, I guess you could always do
#define SIM_IN _A._Sim._In
But as I said, you might not prefer that for various reasons. :)

Related

Convert complex struct / opaquepointer / function from C++ header to Delphi

I'm converting a C/C++ header to Delphi.
I've carefully read Rudy's great Delphi Corner article about this kind of conversion. Anyway, I'm facing something I find hard to understand.
There's an opaque pointer, then a function prototype that takes that pointer as a parameter, followed by the declaration of the struct used by that function type.
Maybe the code will make things clearer.
source .h code:
struct my_ManagedPtr_t_;
typedef struct my_ManagedPtr_t_ my_ManagedPtr_t;

typedef int (*my_ManagedPtr_ManagerFunction_t)(
    my_ManagedPtr_t *managedPtr,
    const my_ManagedPtr_t *srcPtr,
    int operation);

typedef union {
    int intValue;
    void *ptr;
} my_ManagedPtr_t_data_;

struct my_ManagedPtr_t_ {
    void *pointer;
    my_ManagedPtr_t_data_ userData[4];
    my_ManagedPtr_ManagerFunction_t manager;
};
typedef struct my_CorrelationId_t_ {
    unsigned int size:8;       // fill in the size of this struct
    unsigned int valueType:4;  // type of value held by this correlation id
    unsigned int classId:16;   // user defined classification id
    unsigned int reserved:4;   // for internal use, must be 0
    union {
        my_UInt64_t intValue;
        my_ManagedPtr_t ptrValue;
    } value;
} my_CorrelationId_t;
... I'm lost. :-( I can't figure out where to start.
The structure? The function?
Thank you.
As you clarified in the comments, the immediate area of confusion for you is the circular reference. The function pointer parameters refer to the struct, but the struct contains the function pointer. In the C code this is dealt with by the opaque struct type declaration which is simply a forward declaration. A forward declaration simply promises that the type will be fully declared at some later point.
In Delphi you can deal with this in a directly analogous manner. You need to use a forward type declaration. I don't want to translate all the types in your question because that would require dealing with unions and bitfields which I deem to be separate topics. Instead I will present a simple Delphi example that shows how to deal with such circular type declarations. You can take the concept and apply it to your specific types.
type
  PMyRecord = ^TMyRecord; // forward declaration

  TMyFunc = function(rec: PMyRecord): Integer; cdecl;

  TMyRecord = record
    Func: TMyFunc;
  end;
It is a little hard to find out where to start, but @DavidHeffernan's explanation of forward declaring a pointer type should give you a start.
I would translate this to following (untested) code:
type
  my_ManagedPtr_p = ^my_ManagedPtr_t; // forward declaration of the pointer type

  my_ManagedPtr_ManagerFunction_t = function(
    managedPtr: my_ManagedPtr_p;
    srcPtr: my_ManagedPtr_p;
    operation: Integer): Integer; cdecl;

  my_ManagedPtr_t_data = record
    case Boolean of
      False: (intValue: Integer);
      True: (ptr: Pointer);
  end;

  my_ManagedPtr_t = record
    ptr: Pointer;
    userData: array[0..3] of my_ManagedPtr_t_data;
    manager: my_ManagedPtr_ManagerFunction_t;
  end;

  my_CorrelationId_t = record
    typeData: UInt32; // size, valueType, classId and reserved combined in one integer
    case Byte of
      0: (intValue: my_UInt64_t);
      1: (ptrValue: my_ManagedPtr_t);
  end;
I am not going to do the bitfields, but please re-read the Bitfields section of my Pitfalls of converting article (I see you mentioned it already) to find a few solutions. If you want to make it really nice, use methods and indexed access; otherwise just use shifts and masks to access the bitfields contained in the member I called typeData. How this can be done is explained in the article and is far too much to repeat here.
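For orientation only, here is what those shifts and masks look like in C/C++ terms, assuming (as is common on little-endian ABIs) that the first-declared bitfield occupies the least significant bits; the helper names are mine, not from the header:
#include <cstdint>

// Sketch: helpers mirroring the C bitfields of my_CorrelationId_t
// (size:8, valueType:4, classId:16, reserved:4), packed low bits first.
inline unsigned crid_size     (std::uint32_t typeData) { return  typeData        & 0xFFu;   }
inline unsigned crid_valueType(std::uint32_t typeData) { return (typeData >> 8)  & 0x0Fu;   }
inline unsigned crid_classId  (std::uint32_t typeData) { return (typeData >> 12) & 0xFFFFu; }
inline unsigned crid_reserved (std::uint32_t typeData) { return (typeData >> 28) & 0x0Fu;   }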
If you have problems with them anyway, ask a new question.

C++ Type-safe detection of the offset of a structure

I'm playing a bit with the C++ syntax to figure out a generalized way to keep track of an offset within a class, sort of like offsetof, but in a type-safe way and without #defines.
I know that a template class can be parameterized with fields (pointers to members), besides types and constants. So I came up with this prototype:
#include <cstdio>

template <typename class_type, typename field_type>
struct offsetter
{
    offsetter(const char* name, field_type class_type::*field)
        : name(name)
    {
        fprintf(stderr, "%zu\n", field); // passes a pointer-to-member where %zu expects a size_t
    }

    const char* const name;
};

struct some_struct
{
    float avg;
    int min;
    int max;
    struct internal
    {
        unsigned flag;
        int x;
    } test;
    char* name;
};

int main()
{
    offsetter<some_struct, float>("%h", &some_struct::avg);
    offsetter<some_struct, int>("%h", &some_struct::min);
    offsetter<some_struct, char*>("%h", &some_struct::name);
    offsetter<some_struct, some_struct::internal>("x", &some_struct::test);
    return 0;
}
This code is actually able to print the field offset, but I'm not really sure what I'm doing here. Indeed it feels utterly wrong to reference field without referring to an instance (foo.*field).
But it does the job: it prints the offset. My guess is that I'm hitting on some loophole though, since for instance I can't assign size_t offset = field.
I figured out I probably want something like this:
size_t offset = (&(std::declval<class_type>().*field) - &(std::declval<class_type>()))
This however won't work, as I can't take the address of an xvalue:
taking address of xvalue (rvalue reference)
Is there any way to do this?
AFAIK there isn't a standard way of doing this. Even the standard offsetof is defined only for standard-layout types.
What you are doing is UB: you are passing a pointer-to-member where the %zu specifier expects a size_t. There isn't much you can do with a member pointer; you can't even do pointer arithmetic on it, and you can't convert it to char* or to an integer type.
Also, if your assumption is that a member pointer is just an integer representing the offset from the beginning of the structure, that is false, not only in theory but also in practice. Multiple inheritance and virtual inheritance made sure of that.
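If a runtime value is enough, one sketch that applies the member pointer to an actual instance and subtracts addresses (the helper name member_offset and the default-construction requirement are my additions, not part of the question; the result is only guaranteed to agree with offsetof for standard-layout types, and some_struct is trimmed here):
#include <cstddef>
#include <cstdio>

// Illustrative helper: the member pointer is only ever applied to a real object.
template <typename class_type, typename field_type>
std::size_t member_offset(field_type class_type::*field)
{
    class_type obj{};   // requires a default-constructible class_type
    return reinterpret_cast<const char*>(&(obj.*field))
         - reinterpret_cast<const char*>(&obj);
}

struct some_struct { float avg; int min; int max; };

int main()
{
    std::printf("%zu\n", member_offset(&some_struct::min)); // e.g. 4 on typical platforms
    return 0;
}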

How do I capture this struct vector? c++

So I've made a class; one of its functions returns a vector of structs, like so:
vector<highscore::players> highscore::returnL(){
    load();
    return list;
}
So list is basically,
struct players {
    string name;
    int score;
};
vector<players> list;
In my source cpp, I tried to capture this vector, so I made another struct and struct vector.
Source.cpp:
struct players1 {
    string name;
    int score;
};
vector<players1> highscorelist;
Then I tried to
highscore high; //class' name is highscore
highscorelist = high.returnL();
But I get the error message:
No operator "=" matches these operands
" operand types are std::vector<players1, std::allocator<players1>> = std::vector<highscore::players, std::allocator<highscore::players>> "
Is it not possible to do it this way?
I honestly don't know what to search for, so this might have been answered before; apologies if that's the case.
You could use reinterpret_cast, but that's not a good solution. Why don't you use highscore::players?
std::vector<highscore::players> highscoreList;
highscoreList = high.returnL(); // ok
highscore::players and players1 are different types, even though they have the same members and probably even the same memory layout. You cannot just interchange types like that. Also, if you change one of those types, you would have to change the other, which would be a maintenance nightmare even if it were possible.
If you can, you could also use auto:
auto highscoreList = high.returnL();
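Putting it together, a minimal sketch of the corrected Source.cpp, assuming a header highscore.h that declares the highscore class with the nested players struct as in the question:
#include <vector>
#include "highscore.h" // assumed header declaring the highscore class from the question

int main()
{
    highscore high;

    // Use the class's own nested type instead of redeclaring a look-alike struct;
    // two structs with identical members are still unrelated types in C++.
    std::vector<highscore::players> highscorelist = high.returnL();

    // Or let the compiler deduce the type from the return value:
    auto highscorelist2 = high.returnL();

    return 0;
}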

Trouble with template parameters used in macros

I'm trying to compile the following piece of code, but I get an error on the line that specializes AClass for std::vector: the single argument being passed in is somehow treated as two arguments. Is it perhaps something to do with the angle brackets?
Is there a special way or mechanism by which such parameters can be correctly passed to the macro?
#include <vector>

template<typename A>
struct AClass {};

#define specialize_AClass(X)\
    template<> struct AClass<X> { X a; };

specialize_AClass(int) //ok
specialize_AClass(std::vector<int,std::allocator<int> >) //error

int main()
{
    return 0;
}
The error that I get is as follows:
Line 55: error: macro "specialize_AClass" passed 2 arguments, but takes just 1
Line 15: error: expected constructor, destructor, or type conversion before 'int'
compilation terminated due to -Wfatal-errors.
Link: http://codepad.org/qIiKsw4l
#include <iostream>

template<typename TypeX, typename TypeY>
class Test
{
public:
    void fun(TypeX x, TypeY y)
    {
        std::wcout << L"Hello" << std::endl;
        std::wcout << x << std::endl;
        std::wcout << y << std::endl;
    }
};

#define COMMA ,
#define KK(x) x val;

int main()
{
    KK(Test<int COMMA int>);
    val.fun(12, 13);
}
I have another way to solve this problem; hope it can help you :)
You have two options. One was mentioned already: using __VA_ARGS__. This however has the disadvantage that it doesn't work in strict C++03; it requires a sufficiently C99/C++0x-compatible preprocessor.
The other option is to parenthesize the type name. But unlike what another answer claims, it's not as easy as just parenthesizing the type name. Writing a specialization as follows is ill-formed:
// error, NOT valid!
template<> struct AClass<(int)> { X a; };
I have worked around this (and boost probably uses the same under the hood) by passing the type name in parentheses, and then building up a function type out of it
template<typename T> struct get_first_param;

template<typename R, typename P1> struct get_first_param<R(P1)> {
    typedef P1 type;
};
With that, get_first_param<void(X)>::type denotes the type X. Now you can rewrite your macro to
#define specialize_AClass(X) \
template<> struct AClass<get_first_param<void X>::type> { \
    get_first_param<void X>::type a; \
};
And you just need to pass the type wrapped in parentheses.
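Usage then looks like this, with each argument wrapped in one extra pair of parentheses:
specialize_AClass((int))
specialize_AClass((std::vector<int, std::allocator<int> >))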
There are a couple of issues here.
First of all, macros are extremely dumb; they look complicated, but they essentially amount to pure text replacement.
There are therefore two (technical) issues with the code you posted:
You cannot use a comma in the middle of a macro invocation; it just fails. BOOST_FOREACH is a well-known library, and yet the only thing its authors could do was tell users that its arguments should not contain commas unless they can be wrapped in parentheses, which is not always the case.
Even if the replacement occurred, your code would fail in C++03, because it would create a >> symbol at the end of the template specialization, which would not be parsed correctly.
There are preprocessing / template metaprogramming tricks; however, the simpler solution is to use a type name without commas:
typedef std::vector<int, std::allocator<int> > FooVector;
specialize_AClass(FooVector)
Finally, there is an aesthetic issue: because of their pervasiveness, macros are best given names that cannot possibly clash with "regular" (type, function, variable) names. The consensus is usually to use all-uppercase identifiers, like:
SPECIALIZE_ACLASS
Note that this cannot begin with an underscore, because the standard reserves identifiers matching _[A-Z].* or .*__.* for the implementation (compiler and standard library writers), for whatever they feel like (those are not smileys :p)
Since the preprocessor runs before semantic analysis, the comma in your template argument is interpreted as the argument separator for the macro. Instead, you should be able to use variadic macros to do something like this:
#define specialize_AClass(...)\
    template<> struct AClass< __VA_ARGS__ > { __VA_ARGS__ a; };
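With the variadic form the original call compiles unchanged, since the comma is simply swallowed by __VA_ARGS__:
specialize_AClass(std::vector<int,std::allocator<int> >) //ok now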
If you are willing to add a little more code before calling your macro, you could always do this as a workaround:
typedef std::vector<int,std::allocator<int> > myTypeDef;
specialize_AClass(myTypeDef) //works
#define EMPTY()
#define DEFER( ... ) __VA_ARGS__ EMPTY()
specialize_AClass( DEFER (std::vector<int,std::allocator<int> >) )
For simple things you can use typedef
#include <vector>

template<typename A>
struct AClass {};

#define specialize_AClass(X)\
    template<> struct AClass<X> { X a; };

specialize_AClass(int) //ok

typedef std::vector<int, std::allocator<int> > AllocsVector;
specialize_AClass(AllocsVector) //ok

int main()
{
    return 0;
}
There are lots of other problems with your code, but to address the specific question, the preprocessor just treats < and > as less-than and greater-than operators.
That's the extent of its knowledge about C++.
There are some tricks that can be used to allow template expressions to be passed as macro arguments, but the simple and by an extremely large margin best answer for a beginner is:
DON'T DO THAT.
Cheers & hth.,

Use Enum or #define?

I'm building a toy interpreter and I have implemented a token class which holds the token type and value.
The token type is usually an integer, but how should I abstract the ints?
What would be the better idea:
// #defines
#define T_NEWLINE 1
#define T_STRING 2
#define T_BLAH 3
/**
* Or...
*/
// enum
enum TokenTypes
{
    t_newline = 1,
    t_string = 2,
    t_blah = 3
};
Enums can be cast to ints; furthermore, they're the preferred way of enumerating lists of predefined values in C++. Unlike #defines, they can be put in namespaces, classes, etc.
Additionally, if you need the first value to start at 1, you can use:
enum TokenTypes
{
    t_newline = 1,
    t_string,
    t_blah
};
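And because enums can be put inside a class or namespace, the values stay qualified; a small sketch with a hypothetical Lexer class:
class Lexer // hypothetical enclosing class, just to show the scoping
{
public:
    enum TokenTypes
    {
        t_newline = 1,
        t_string,
        t_blah
    };
};

int type = Lexer::t_string; // qualified, so it can't clash with names elsewhere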
Enums work in debuggers (e.g. saying "print x" will print the "English" value). #defines don't (i.e. you're left with the numeric value and have to refer to the source to do the mapping yourself).
Therefore, use enums.
There are various solutions here.
The first, using #define, dates back to the old days of C. It's usually considered bad practice in C++ because symbols defined this way don't obey scope rules and are replaced by the preprocessor, which does not perform any kind of syntax check... leading to hard-to-understand errors.
The other solutions are about creating global constants. The net benefit is that instead of being interpreted by the preprocessor they are interpreted by the compiler, and thus obey syntax checks and scope rules.
There are many ways to create global constants:
// ints
const int T_NEWLINE = 1;
struct Tokens { static const int T_FOO = 2; };

// enums
enum { T_BAR = 3 };         // anonymous enum
enum Token { T_BLAH = 4 };  // named enum

// Strong Typing
BOOST_STRONG_TYPEDEF(int, Token);
const Token NewLine(1);
const Token Foo(2);

// Other Strong Typing
class Token
{
public:
    static const Token NewLine; // defined as Token("NewLine")
    static const Token Foo;     // defined as Token("Foo")

    bool operator<(Token rhs) const { return mValue < rhs.mValue; }
    bool operator==(Token rhs) const { return mValue == rhs.mValue; }
    bool operator!=(Token rhs) const { return mValue != rhs.mValue; }

    friend std::string toString(Token t) { return t.mValue; } // for printing

private:
    explicit Token(const char* value);
    const char* mValue;
};
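For the last variant, the static members would then be defined once in a .cpp file, along these lines (a sketch based on the constructor declared above):
Token::Token(const char* value) : mValue(value) {}

const Token Token::NewLine("NewLine");
const Token Token::Foo("Foo");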
All have their strengths and weaknesses.
int lacks type safety: you can easily use one category of constants where another is expected.
enum supports auto-incrementing, but you don't get pretty printing and it's still not very type safe (though a bit better).
I prefer the strong typedef to an enum: you can still get back to the int when needed.
Creating your own class is the best option; there you get pretty printing for your messages, for example, but it's also a bit more work (not much, but still).
Also, the int and enum approaches are likely to generate code as efficient as the #define approach: compilers substitute the const values for their actual values whenever possible.
In cases like the one you've described, I prefer using enums, since they are much easier to maintain. Especially if the numerical representation doesn't have any specific meaning.
Enums are type safe, easier to read, easier to debug and well supported by IntelliSense. I'd say use an enum whenever possible, and resort to #define only when you have to.
See this related discussion on const versus #define in C/C++; my answer to that post also lists when you have to use the #define preprocessor:
Shall I prefer constants over defines?
I vote for enum
#defines aren't type safe and can be redefined if you aren't careful.
Another reason for enums: They are scoped, so if the label t_blah is present in another namespace (e.g. another class), it doesn't interfere with t_blah in your current namespace (or class), even if they have different int representations.
enums provide type safety, readability and debugger support. These are very important, as already mentioned.
Another thing that an enum provides is a closed set of possibilities. E.g.
enum color
{
    red,
    green,
    blue,
    unknown
};
I think this is not possible with #define (or consts, for that matter).
OK, many, many answers have been posted already, so I'll come up with something a little bit different: C++0x strongly typed enums :)
enum class Color /* Note the "class" */
{
    Red,
    Blue,
    Yellow
};
Characteristics, advantages and differences from the old enums
Type-safe: int color = Color::Red; will be a compile-time error. You would have to use Color color or cast Red to int.
Change the underlying type: You can change its underlying type (many compilers offer extensions to do this in C++98 too): enum class Color : unsigned short. unsigned short will be the type.
Explicit scoping (my favorite): in the example above, Red will be undefined; you must use Color::Red. Imagine the new enums as being sort of namespaces too, so they don't pollute your current namespace with what is probably going to be a common name ("red", "valid", "invalid", etc.).
Forward declaration: enum class Color; tells the compiler that Color is an enum and you can start using it (but not values, of course); sort of like class Test; and then use Test *.
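A quick sketch of these points in use (C++11 syntax):
#include <iostream>

enum class Color : unsigned short // explicit underlying type
{
    Red,
    Blue,
    Yellow
};

int main()
{
    Color c = Color::Red;            // must be qualified; a plain 'Red' is undefined here
    // int bad = c;                  // would not compile: no implicit conversion to int
    int n = static_cast<int>(c);     // explicit cast is required
    std::cout << n << '\n';          // prints 0
    return 0;
}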