How to make a getter function for a double pointer? - C++

I need to modify an open source project to avoid duplicating code (it is more efficient to create a GetGameRulesPtr() function than to keep going into the engine to retrieve it). The problem is, it is stored as void **g_pGameRules. I've never really grasped the concept of a pointer to a pointer, and I am a bit confused.
I am creating a GetGameRules() function to retrieve this pointer, but I'm not sure whether my getter should have a void* return type and return *g_pGameRules, or how exactly I should go about this. I am brushing up on my pointer usage now, but wanted to find out the proper method to learn from.
Here is the code; lines 58-89 are the SDK function that retrieves the g_pGameRules pointer from the game engine. The other functions are what I am adding the getter function to.
// extension.cpp
class SDKTools_API : public ISDKTools
{
public:
    virtual const char *GetInterfaceName()
    {
        return SMINTERFACE_SDKTOOLS_NAME;
    }
    virtual unsigned int GetInterfaceVersion()
    {
        return SMINTERFACE_SDKTOOLS_VERSION;
    }
    virtual IServer *GetIServer()
    {
        return iserver;
    }
    virtual void *GetGameRules()
    {
        return *g_pGameRules;
    }
} g_SDKTools_API;
// extension.h
namespace SourceMod
{
    /**
     * @brief SDKTools API.
     */
    class ISDKTools : public SMInterface
    {
    public:
        virtual const char *GetInterfaceName() = 0;
        virtual unsigned int GetInterfaceVersion() = 0;
    public:
        /**
         * @brief Returns a pointer to IServer if one was found.
         *
         * @return IServer pointer, or NULL if SDKTools was unable to find one.
         */
        virtual IServer* GetIServer() = 0;
        /**
         * @brief Returns a pointer to GameRules if one was found.
         *
         * @return GameRules pointer, or NULL if SDKTools was unable to find one.
         */
        virtual void* GetGameRules() = 0;
    };
}
// vglobals.cpp
void **g_pGameRules = NULL;
void *g_EntList = NULL;

void InitializeValveGlobals()
{
    g_EntList = gamehelpers->GetGlobalEntityList();
    char *addr;
#ifdef PLATFORM_WINDOWS
    /* g_pGameRules */
    if (!g_pGameConf->GetMemSig("CreateGameRulesObject", (void **)&addr) || !addr)
    {
        return;
    }
    int offset;
    if (!g_pGameConf->GetOffset("g_pGameRules", &offset) || !offset)
    {
        return;
    }
    g_pGameRules = *reinterpret_cast<void ***>(addr + offset);
#elif defined PLATFORM_LINUX || defined PLATFORM_APPLE
    /* g_pGameRules */
    if (!g_pGameConf->GetMemSig("g_pGameRules", (void **)&addr) || !addr)
    {
        return;
    }
    g_pGameRules = reinterpret_cast<void **>(addr);
#endif
}

You want to return a void*, and do the casting back to the appropriate SomeType** within the implementation code. This is because void** does not have the special "generic pointer" conversions that void* has (a SomeType** does not implicitly convert to void**), so exposing it buys you nothing. It also tells the user more than they really need to know. The whole point of using void* to begin with was to avoid giving information to the user that they don't need.
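A minimal sketch of that getter (same shape as the GetGameRules() member shown above, with only a null check added as an assumption):
virtual void *GetGameRules()
{
    // g_pGameRules is the engine's void** global; dereference it once to get
    // the actual gamerules object, and guard against it never having been found.
    return g_pGameRules ? *g_pGameRules : NULL;
}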
If it is an option, I'd personally recommend avoiding void* altogether, and simply providing an opaque reference type for callers to use with your APIs. One way to do this is to define an empty structure, like struct GameObjectRef {};, and hand the user back a GameObjectRef*, cast from whatever pointer your system actually uses. This lets the user write strongly typed code, so they can't accidentally pass the wrong pointer type to your functions, as they can with void*.
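A minimal sketch of that opaque-handle idea, using the GameObjectRef name from above (the wrapper functions are illustrative, not part of the SDK):
// Public header: an empty tag type callers can hold but never dereference.
struct GameObjectRef {};

// Implementation side: convert between the real void* and the opaque handle.
inline GameObjectRef *WrapGameObject(void *p)   { return reinterpret_cast<GameObjectRef *>(p); }
inline void *UnwrapGameObject(GameObjectRef *r) { return reinterpret_cast<void *>(r); }

// Callers now pass GameObjectRef* around, so the compiler rejects unrelated pointer types.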
How pointers (and pointers-to-pointers) work:
Imagine you are asking me where your aunt lives. Then, I hand you a piece of paper with an address to go to. That piece of paper is a pointer to a house.
Now, take that piece of paper with the address, take a photo of it with your digital camera, and place the image onto your personal wiki site.
Now, if your sister calls, asking for your aunt's address, just tell her to look it up on your wiki. If she asks for the URL, write it on a piece of paper for her. This second piece of paper is a pointer to a pointer to a house.
You can see how an address isn't the same as the real thing. Just because someone has your website address doesn't mean they know your aunt's address. And just because they have your aunt's address doesn't mean they're knocking on her door. The same is true for pointers to objects.
You can also see how you can make copies of addresses (pointers), but that doesn't make a copy of the underlying object. When you take a photo of your aunt's address, your aunt doesn't get a shiny new house.
And you can see how dereferencing a pointer will lead you back to the original object. If you go to the wiki site, you get your aunt's address. If you drive to that address, you can leave a package on her doorstep.
Note that these aren't perfect metaphors, but they are close enough to be somewhat descriptive. Real pointers-to-pointers are a lot cleaner than those examples. They describe only two things - the type of the final object (say, GameObject), and the number of levels of indirection (say, GameObject** - two levels).
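Translating the metaphor into code (GameObject here is just a stand-in type for illustration):
struct GameObject {};              // "the house"

void pointer_levels_example()
{
    GameObject  house;             // the object itself
    GameObject* address = &house;  // a pointer: the address written on a piece of paper
    GameObject** wiki = &address;  // a pointer to a pointer: the wiki page holding that address

    // Dereference once to recover the address, twice to reach the object itself.
    GameObject* copyOfAddress = *wiki;   // copying the address does not copy the house
    GameObject& theHouse      = **wiki;  // two levels of indirection lead back to the object
    (void)copyOfAddress; (void)theHouse; // silence unused-variable warnings
}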

I think it's not a "double pointer" but a pointer to a pointer, and yes, if you want to get a void* you must return *g_pGameRules.
The thing is that pointers work like levels of indirection: you must say which level you want to get.

A pointer to a pointer is useful if you want to let the pointer change (to point at a larger block of memory if you run out, for example) while keeping a common reference to the item, wherever it moves.
If you are not going to be reallocating or moving the block pointed to, then dereferencing like you have in your getter is fine. I haven't looked at the library you are using, but one thing to consider is that it may do reference counting when you get an instance, in order to ensure the object isn't changed after you get the pointer.
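For example, here is a minimal sketch of the "one shared handle, many holders" idea (names are illustrative, error checking omitted for brevity):
#include <cstdlib>

int main()
{
    void*  block  = std::malloc(64);   // the block that may later move
    void** handle = &block;            // the common reference everyone keeps

    // ...later the block is grown; its address may change...
    block = std::realloc(block, 256);

    // Anyone holding the handle still reaches the current block by dereferencing it.
    void* current = *handle;
    (void)current;

    std::free(block);
    return 0;
}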
So, what I would recommend is that you look to see if the library has any "Factory" functions or "instance" creating functions and use those.
-Dan8080

How can I calculate a hash/checksum/fingerprint of an object in c++?
Requirements:
The function must be 'injective'(*). In other words, there should be no two different input objects that return the same hash/checksum/fingerprint.
Background:
I am trying to come up with a simple pattern for checking whether or not an entity object has been changed since it was constructed. (In order to know which objects need to be updated in the database).
Note that I specifically do not want to mark the object as changed in my setters or anywhere else.
I am considering the following pattern: in short, every entity object that should be persisted has a member function "bool is_changed()". Changed, in this context, means changed since the object's constructor was called.
Note: My motivation for all this is to avoid the boilerplate code that comes with marking objects as clean/dirty or doing a member by member comparison. In other words, reduce risk of human error.
(Warning: pseudo C++ code ahead. I have not tried compiling it.)
class Foo {
private:
    std::string my_string;
    // Assume the "fingerprint" is of type long.
    long original_fingerprint;

    long current_fingerprint() const
    {
        // *** Suggestions on which algorithm to use here? ***
    }

public:
    Foo(const std::string& my_string) :
        my_string(my_string)
    {
        original_fingerprint = current_fingerprint();
    }

    bool is_changed() const
    {
        // If a new calculation of the fingerprint is different from the one
        // calculated in the constructor, then the object has
        // been changed in some way.
        return current_fingerprint() != original_fingerprint;
    }

    void set_my_string(const std::string& new_string)
    {
        my_string = new_string;
    }
};

void client_code()
{
    auto foo = Foo("Initial string");
    // should now return **false** because
    // the object has not yet been changed:
    foo.is_changed();

    foo.set_my_string("Changed string");
    // should now return **true** because
    // the object has been changed:
    foo.is_changed();
}
(*) In practice, not necessarily in theory (much like UUIDs, which are unique in practice but not guaranteed to be in theory).
You can use the CRC32 algorithm from Boost. Feed it the memory locations of the data you want to checksum. You could use a cryptographic hash for this, but those are designed to guard against intentional tampering and are slower; a CRC performs better here.
For this example, I've added another data member to Foo:
int my_integer;
And this is how you would checksum both my_string and my_integer:
#include <boost/crc.hpp>
// ...
long current_fingerprint() const
{
    boost::crc_32_type crc32;
    crc32.process_bytes(my_string.data(), my_string.length());
    crc32.process_bytes(&my_integer, sizeof(my_integer));
    return crc32.checksum();
}
However, now we're left with the issue of two objects having the same fingerprint if my_string and my_integer are equal. To fix this, we should include the address of the object in the CRC, since C++ guarantees that different objects will have different addresses.
One would think we can use:
process_bytes(&this, sizeof(this));
to do it, but we can't since this is an rvalue and thus we can't take its address. So we need to store the address in a variable instead:
long current_fingerprint() const
{
    boost::crc_32_type crc32;
    const void* this_ptr = this;
    crc32.process_bytes(&this_ptr, sizeof(this_ptr));
    crc32.process_bytes(my_string.data(), my_string.length());
    crc32.process_bytes(&my_integer, sizeof(my_integer));
    return crc32.checksum();
}
Such a function does not exist, at least not in the context that you are requesting.
The STL provides hash functions for basic types (std::hash), and you could use these to implement a hash function for your objects using any reasonable hashing algorithm.
However, you seem to be looking for an injective function, which causes a problem. Essentially, to have an injective function, the output would need to be at least as large as the object you are considering, since otherwise (by the pigeonhole principle) there would be two inputs that give the same output. Given that, the most sensible option would be to just do a straight-up comparison of the object against some sort of reference object.
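For completeness, here is a minimal sketch of the std::hash-based approach mentioned above, combining member hashes in the style of boost::hash_combine; as argued, it cannot be injective, only a practical change detector (the members mirror the question's Foo):
#include <cstddef>
#include <functional>
#include <string>

// Mix one hash value into a running seed (the constants follow boost::hash_combine).
inline void hash_combine(std::size_t& seed, std::size_t value)
{
    seed ^= value + 0x9e3779b9 + (seed << 6) + (seed >> 2);
}

std::size_t fingerprint(const std::string& my_string, int my_integer)
{
    std::size_t seed = 0;
    hash_combine(seed, std::hash<std::string>{}(my_string));
    hash_combine(seed, std::hash<int>{}(my_integer));
    return seed;
}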

Why does PyCXX handle new-style classes in the way it does?

I'm picking apart some C++ Python wrapper code that allows the consumer to construct custom old style and new style Python classes from C++.
The original code comes from PyCXX, with old and new style classes here and here. I have however rewritten the code substantially, and in this question I will reference my own code, as it allows me to present the situation in the greatest clarity that I am able. I think there would be very few individuals capable of understanding the original code without several days of scrutiny... For me it has taken weeks and I'm still not clear on it.
The old style simply derives from PyObject,
template<typename FinalClass>
class ExtObj_old : public ExtObjBase<FinalClass>
// ^ which : ExtObjBase_noTemplate : PyObject
{
public:
    // forwarding function to mitigate awkwardness retrieving static method
    // from base type that is incomplete due to templating
    static TypeObject& typeobject() { return ExtObjBase<FinalClass>::typeobject(); }

    static void one_time_setup()
    {
        typeobject().set_tp_dealloc( [](PyObject* t) { delete (FinalClass*)(t); } );
        typeobject().supportGetattr(); // every object must support getattr
        FinalClass::setup();
        typeobject().readyType();
    }

    // every object needs getattr implemented to support methods
    Object getattr( const char* name ) override { return getattr_methods(name); }
    // ^ MARKER1

protected:
    explicit ExtObj_old()
    {
        PyObject_Init( this, typeobject().type_object() ); // MARKER2
    }
When one_time_setup() is called, it forces (by accessing base class typeobject()) creation of the associated PyTypeObject for this new type.
Later when an instance is constructed, it uses PyObject_Init
So far so good.
But the new style class uses much more complicated machinery. I suspect this is related to the fact that new style classes allow derivation.
And this is my question, why is the new style class handling implemented in the way that it is? Why is it having to create this extra PythonClassInstance structure? Why can't it do things the same way the old-style class handling does? i.e. Just type convert from the PyObject base type? And seeing as it doesn't do that, does this mean it is making no use of its PyObject base type?
This is a huge question, and I will keep amending the post until I'm satisfied it represents the issue well. It isn't a good fit for SO's format, I'm sorry about that. However, some world-class engineers frequent this site (one of my previous questions was answered by the lead developer of GCC for example), and I value the opportunity to appeal to their expertise. So please don't be too hasty to vote to close.
The new style class's one-time setup looks like this:
template<typename FinalClass>
class ExtObj_new : public ExtObjBase<FinalClass>
{
private:
    PythonClassInstance* m_class_instance;

public:
    static void one_time_setup()
    {
        TypeObject& typeobject{ ExtObjBase<FinalClass>::typeobject() };

        // these three functions are listed below
        typeobject.set_tp_new( extension_object_new );
        typeobject.set_tp_init( extension_object_init );
        typeobject.set_tp_dealloc( extension_object_deallocator );

        // this should be named supportInheritance, or supportUseAsBaseType
        // old style class does not allow this
        typeobject.supportClass(); // does: table->tp_flags |= Py_TPFLAGS_BASETYPE

        typeobject.supportGetattro(); // always support get and set attr
        typeobject.supportSetattro();

        FinalClass::setup();

        // add our methods to the extension type's method table
        { ... typeobject.set_methods( /* ... */); }

        typeobject.readyType();
    }

protected:
    explicit ExtObj_new( PythonClassInstance* self, Object& args, Object& kwds )
      : m_class_instance{self}
    { }
So the new style uses a custom PythonClassInstance structure:
struct PythonClassInstance
{
    PyObject_HEAD
    ExtObjBase_noTemplate* m_pycxx_object;
};
PyObject_HEAD, if I dig into Python's object.h, is just a macro for PyObject ob_base; -- no further complications, like #if #else. So I don't see why it can't simply be:
struct PythonClassInstance
{
    PyObject ob_base;
    ExtObjBase_noTemplate* m_pycxx_object;
};
or even:
struct PythonClassInstance : PyObject
{
    ExtObjBase_noTemplate* m_pycxx_object;
};
Anyway, it seems that its purpose is to tag a pointer onto the end of a PyObject. This will be because the Python runtime will often trigger functions we have placed in its function table, and the first parameter will be the PyObject responsible for the call. So this allows us to retrieve the associated C++ object.
But we also need to do that for the old-style class.
Here is the function responsible for doing that:
ExtObjBase_noTemplate* getExtObjBase( PyObject* pyob )
{
    if( pyob->ob_type->tp_flags & Py_TPFLAGS_BASETYPE )
    {
        /*
        New style class uses a PythonClassInstance to tag on an additional
        pointer onto the end of the PyObject
        The old style class just seems to typecast the pointer back up
        to ExtObjBase_noTemplate
        ExtObjBase_noTemplate does indeed derive from PyObject
        So it should be possible to perform this typecast
        Which begs the question, why on earth does the new style class feel
        the need to do something different?
        This looks like a really nice way to solve the problem
        */
        PythonClassInstance* instance = reinterpret_cast<PythonClassInstance*>(pyob);
        return instance->m_pycxx_object;
    }
    else
        return static_cast<ExtObjBase_noTemplate*>( pyob );
}
My comment articulates my confusion.
And here, for completeness is us inserting a lambda-trampoline into the PyTypeObject's function pointer table, so that Python runtime can trigger it:
table->tp_setattro = [] (PyObject* self, PyObject* name, PyObject* val) -> int
{
    try {
        ExtObjBase_noTemplate* p = getExtObjBase( self );
        return ( p -> setattro(Object{name}, Object{val}) );
    }
    catch( Py::Exception& ) { /* indicate error */
        return -1;
    }
};
(In this demonstration I'm using tp_setattro, note that there are about 30 other slots, which you can see if you look at the doc for PyTypeObject)
(in fact the major reason for working this way is that we can try{}catch{} around every trampoline. This saves the consumer from having to code repetitive error trapping.)
So, we pull out the "base type for the associated C++ object" and call its virtual setattro (just using setattro as an example here). A derived class will have overridden setattro, and this override will get called.
The old-style class provides such an override, which I've labelled MARKER1 -- it is in the top listing for this question.
The only thing I can think of is that maybe different maintainers have used different techniques. But is there some more compelling reason why old and new style classes require different architecture?
PS for reference, I should include the following methods from new style class:
static PyObject* extension_object_new( PyTypeObject* subtype, PyObject* args, PyObject* kwds )
{
    PyObject* pyob = subtype->tp_alloc(subtype,0);
    PythonClassInstance* o = reinterpret_cast<PythonClassInstance *>( pyob );
    o->m_pycxx_object = nullptr;
    return pyob;
}
^ to me, this looks absolutely wrong.
It appears to be allocating memory, re-casting to some structure that might exceed the amount allocated, and then nulling right at the end of this.
I'm surprised it hasn't caused any crashes.
I can't see any indication anywhere in the source code that these 4 bytes are owned.
static int extension_object_init( PyObject* _self, PyObject* _args, PyObject* _kwds )
{
    try
    {
        Object args{_args};
        Object kwds{_kwds};

        PythonClassInstance* self{ reinterpret_cast<PythonClassInstance*>(_self) };

        if( self->m_pycxx_object )
            self->m_pycxx_object->reinit( args, kwds );
        else
            // NOTE: observe this is where we invoke the constructor, but indirectly (i.e. through final)
            self->m_pycxx_object = new FinalClass{ self, args, kwds };
    }
    catch( Exception & )
    {
        return -1;
    }
    return 0;
}
^ note that there is no implementation for reinit, other than the default
virtual void reinit ( Object& args , Object& kwds ) {
    throw RuntimeError( "Must not call __init__ twice on this class" );
}
static void extension_object_deallocator( PyObject* _self )
{
    PythonClassInstance* self{ reinterpret_cast< PythonClassInstance* >(_self) };
    delete self->m_pycxx_object;
    _self->ob_type->tp_free( _self );
}
EDIT: I will hazard a guess, thanks to insight from Yhg1s on the IRC channel.
Maybe it is because when you create a new old-style class, it is guaranteed to overlap perfectly with a PyObject structure.
Hence it is safe to derive from PyObject, and pass a pointer to the underlying PyObject into Python, which is what the old-style class does (MARKER2)
On the other hand, new style class creates a {PyObject + maybe something else} object.
i.e. It wouldn't be safe to do the same trick, as Python runtime would end up writing past the end of the base class allocation (which is only a PyObject).
Because of this, we need to get Python to allocate for the class, and return us a pointer which we store.
Because we are now no longer making use of the PyObject base-class for this storage, we cannot use the convenient trick of typecasting back to retrieve the associated C++ object.
Which means that we need to tag on an extra sizeof(void*) bytes to the end of the PyObject that actually does get allocated, and use this to point to our associated C++ object instance.
However, there is some contradiction here.
struct PythonClassInstance
{
    PyObject_HEAD
    ExtObjBase_noTemplate* m_pycxx_object;
};
^ if this is indeed the structure that accomplishes the above, then it is saying that the new style class instance does indeed fit exactly over a PyObject, i.e. it is not overlapping into m_pycxx_object.
And if this is the case, then surely this whole process is unnecessary.
EDIT: here are some links that are helping me learn the necessary ground work:
http://eli.thegreenplace.net/2012/04/16/python-object-creation-sequence
http://realmike.org/blog/2010/07/18/introduction-to-new-style-classes-in-python
Create an object using Python's C API
to me, this looks absolutely wrong. It appears to be allocating memory, re-casting to some structure that might exceed the amount allocated, and then nulling right at the end of this. I'm surprised it hasn't caused any crashes. I can't see any indication anywhere in the source code that these 4 bytes are owned
PyCXX does allocate enough memory, but it does so by accident. This appears to be a bug in PyCXX.
The amount of memory Python allocates for the object is determined by the first call to the following static member function of PythonClass<T>:
static PythonType &behaviors()
{
    ...
    p = new PythonType( sizeof( T ), 0, default_name );
    ...
}
The constructor of PythonType sets the tp_basicsize of the Python type object to sizeof(T). This way, when Python allocates an object, it knows to allocate at least sizeof(T) bytes. It works because sizeof(T) turns out to be larger than sizeof(PythonClassInstance) (T is derived from PythonClass<T>, which derives from PythonExtensionBase, which is large enough).
However, it misses the point. It should actually allocate only sizeof(PythonClassInstance). This appears to be a bug in PyCXX - that it allocates too much, rather than too little, space for storing a PythonClassInstance object.
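In other words (a hypothetical one-line change for illustration, not actual PyCXX code), the allocation size passed to PythonType would be based on the instance struct rather than on T:
// Hypothetical fix: size the Python-side allocation by the instance struct,
// not by the (much larger) C++ wrapper class T.
p = new PythonType( sizeof( PythonClassInstance ), 0, default_name );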
And this is my question, why is the new style class handling implemented in the way that it is? Why is it having to create this extra PythonClassInstance structure? Why can't it do things the same way the old-style class handling does?
Here's my theory why new style classes are different from the old style classes in PyCXX.
Before Python 2.2, where new style classes were introduced, there was no tp_init member in the type object. Instead, you needed to write a factory function that would construct the object. This is how PythonExtension<T> is supposed to work - the factory function converts the Python arguments to C++ arguments, asks Python to allocate the memory, and then calls the constructor using placement new.
Python 2.2 added the new style classes and the tp_init member. Python first creates the object and then calls the tp_init method. Keeping the old way would have required the objects to first have a dummy constructor that creates an "empty" object (e.g. initializes all members to null) and then, when tp_init is called, an additional initialization stage. This makes the code uglier.
It seems that the author of PyCXX wanted to avoid that. PyCXX works by first creating a dummy PythonClassInstance object and then when tp_init is called, creates the actual PythonClass<T> object using its constructor.
... does this mean it is making no use of its PyObject base type?
This appears to be correct, the PyObject base class does not seem to be used anywhere. All the interesting methods of PythonExtensionBase use the virtual self() method, which returns m_class_instance and completely ignore the PyObject base class.
My guess (only a guess, though) is that PythonClass<T> was added to an existing system and it seemed easier to just derive from PythonExtensionBase instead of cleaning up the code.

Confusing notation in C++ (OMNeT++)

While going through the OMNeT++ tutorials at http://www.omnetpp.org/doc/omnetpp/tictoc-tutorial/part2.html, at tutorial 9 I came across some confusing notation:
void Tic9::sendCopyOf(cMessage *msg)
{
    cMessage *copy = (cMessage *) msg->dup();
    send(copy, "out");
}
The code is pretty short and neat; however, since I have little experience with C++ / OMNeT++, I could not understand what this line does: cMessage *copy = (cMessage *) msg->dup();, more specifically the (cMessage *) part. I know msg->dup() actually means (*msg).dup().
Could anyone please elaborate, what actually happens in the memory?
post Edit Addendum:
code for dup():
virtual cMessage *dup() const
{
    return new cMessage(*this);
}
description for dup(): Creates and returns an exact copy of this object.
Does this mean that (cMessage *) msg->dup() internally assigns the address of the object returned by msg->dup() to copy?
The other confusing notation:
cMessage *Tic9::generateNewMessage()
{
    // Generate a message with a different name every time.
    char msgname[20];
    sprintf(msgname, "tic-%d", ++seq);
    cMessage *msg = new cMessage(msgname);
    return msg;
}
What does the * in front of the class name mean here: *Tic9::generateNewMessage()?
Let us assume that msg->dup() returned void * -- that is, a pointer to void, which means a pointer whose pointee type the compiler doesn't track. But you may know, e.g. because of documentation on that function, or because certain preconditions have been met, that msg->dup() will return a pointer to cMessage. Before you can use the return value as such, you need to tell the compiler what the type actually is. You do that by casting the void * to cMessage *, which uses the syntax you see.
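As a minimal illustration of that explanation (using a stand-in Message type and factory; not OMNeT++ code), the cast only tells the compiler how to treat the pointer, and the more explicit C++ cast does the same thing:
struct Message { /* ... */ };

// Imagine a factory that hands back an untyped pointer (stand-in for dup() in the explanation above).
void *make_message()
{
    static Message m;
    return &m;
}

void example()
{
    // C-style cast, as in the tutorial code:
    Message *a = (Message *) make_message();

    // Equivalent, more explicit C++ cast:
    Message *b = static_cast<Message *>(make_message());
    (void)a; (void)b;
}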
Nothing happens in memory. It is just a C-style type cast.
http://en.cppreference.com/w/cpp/language/explicit_cast
You might want to learn more about the basics of the language. C++ is a tricky one to use.

Correct __bridge usage with ARC and C++ interop? (how to avoid memory leaks?)

I have a bit of pure C++ code which reads from Objective-C data structures with the help of a function pointer to a method in an Objective-C class. I'm treating the Objective-C class instance to read from as an opaque pointer. For example, the C++ method that does the reading has a signature like this:
typedef void(*DataGetterFunc)(void * dataSource, int key, int * outValue);
...
void readData(void * dataSource, DataGetterFunc dataReadingFunc);
When I call the C++ method from Objective-C, I do the following:
MYDataStructure * objectiveCData;
cppObject->readData((__bridge void*)objectiveCData, DataGetterFuncImpl);
Finally, DataGetterFuncImpl dereferences the Objective-C class like so:
void DataGetterFuncImpl(void * dataSource, int key, int * outValue)
{
    MYDataStructure * objCData = (__bridge MYDataStructure*)dataSource;
    ...
}
Originally in DataGetterFuncImpl I was using __bridge_transfer, but then I was getting EXC_BAD_ACCESS the next time ARC called retain on MYDataStructure, so I assumed it was being over-released by the use of __bridge_transfer and changed it to just __bridge.
Are there any memory leaks I should look for by just using __bridge, or do I need to use some combination of __bridge_retain and __bridge_transfer in this case?
When you're using __bridge to convert to or from Objective-C, ownership is just not affected. That means that while you're using the object in C++ you must make sure that there's still a strong reference around.
If you, on the other hand, use __bridge_retain to convert to void* and __bridge_transfer to convert back to id (or any other retainable object type), you must make sure that each __bridge_retain is matched by exactly one __bridge_transfer later.

Pointer object in C++

I have a very simple class that looks as follows:
class CHeader
{
public:
    CHeader();
    ~CHeader();

    void SetCommand( const unsigned char cmd );
    void SetFlag( const unsigned char flag );

public:
    unsigned char iHeader[32];
};

void CHeader::SetCommand( const unsigned char cmd )
{
    iHeader[0] = cmd;
}

void CHeader::SetFlag( const unsigned char flag )
{
    iHeader[1] = flag;
}
Then, I have a method which takes a pointer to CHeader as input and looks
as follows:
void updateHeader(CHeader *Hdr)
{
    unsigned char cmd = 'A';
    unsigned char flag = 'B';

    Hdr->SetCommand(cmd);
    Hdr->SetFlag(flag);
    ...
}
Basically, this method simply sets some array values to a certain value.
Afterwards, I create then a pointer to an object of class CHeader and pass it to
the updateHeader function:
CHeader* hdr = new CHeader();
updateHeader(hdr);
In doing this, the program crashes as soon as it executes the Hdr->SetCommand(cmd) line. Does anyone see the problem? Any input would be really appreciated.
When you run into a crash, act like a crime investigator: investigate the crime scene.
What information do you get from your environment (an access violation? any debug messages? what does the memory at *Hdr look like? ...)?
Is the passed-in Hdr pointer valid?
Then use logical deduction, e.g.:
the dereferencing of Hdr causes an access violation
=> passed in Hdr points to invalid memory
=> either memory wasn't valid to start with (wrong pointer passed in), or memory was invalidated (object was deleted before passing in the pointer, or someone painted over the memory)
...
It's probably SEGFAULTing. Check the pointers.
After your addition of some source code, your comment that the thing runs on another machine, and the fact that you use the terms 'flag' and 'cmd' and some very small data types (making me assume the target machine is quite limited in capacity), I suggest testing the result of new CHeader for validity: if the system runs out of resources, the resulting pointer will not refer to valid memory.
There is nothing wrong with the code you've provided.
Are you sure the pointer you've created holds the same address once you enter the updateHeader function? Just to be sure, after new, note the address, fill the memory (sizeof(CHeader) bytes) with something you know is unique, like 0xDEAD, then trace into the updateHeader function, making sure everything is equal.
Other than that, I wonder if it is an alignment issue. I know you're using 8-bit values, but try changing your array to unsigned ints or longs and see if you get the same issue. What architecture are you running this on?
Your code looks fine. The only potential issue I can see is that you have declared a CHeader constructor and destructor in your class, but do not show the implementation of either. I guess you have just omitted to show these, else the linker should have complained (if I duplicate this project in VC++6 it comes up with an 'unresolved external' error for the constructor. It should also have shown the same error for the destructor if you had a... delete hdr; ...statement in your code).
But it is actually not necessary to have an implementation for every method declared in a class unless the methods are actually going to get called (any unimplemented methods are simply ignored by the compiler/linker if never called). Of course, in the case of an object one of the constructor(s) has to be called when the object is instantiated - which is the reason the compiler will create a default constructor for you if you omit to add any constructors to your class. But it will be a serious error for your compiler to compile/link the above code without the implementation of your declared constructor, so I will really be surprised if this is the reason for your problem.
But the symptoms you describe definitely sound like the hdr pointer you are passing to the updateHeader function is invalid, because the first time you dereference this pointer is in the Hdr->SetCommand(cmd); call inside updateHeader (which you say crashes).
I can only think of 2 possible scenarios for this invalid pointer:
a.) You have some problem with your heap, and the allocation of memory with the new operator failed on creation of the hdr object. Maybe you have insufficient heap space. On some embedded environments you may also need to provide 'custom' versions of the new and delete operators. The easiest way to check this (and you should always do this) is to check the validity of the pointer after the allocation:
CHeader* hdr = new CHeader();
if (hdr) {
    updateHeader(hdr);
}
else {
    // handle or throw exception...
}
The normal behaviour when 'new' fails should actually be to throw an exception - so the following code will cater for that as well:
CHeader* hdr = NULL;
try {
    hdr = new CHeader();
} catch (...) {
    // handle or throw a specific exception, i.e. AfxThrowMemoryException() for MFC
}

if (hdr) {
    updateHeader(hdr);
}
else {
    // handle or throw exception...
}
b.) You are using some older (possibly 16 bit and/or embedded) environment, where you may need to use a FAR pointer (which includes the SEGMENT address) for objects created on the heap.
I suspect that you will need to provide more details of your environment plus compiler to get any useful feedback on this problem.