Specific ActionScript functions for a C++ programmer - c++

I am coming from the C++ world and I want to do some simple stuff with ActionScript 3.0.
I have searched around this site and Google and haven't found a universally accepted way to do it. I will give you the C++ code analogous to what I am trying to do in ActionScript 3.0.
Pass by reference:
void somefunction (string &passvariable);
Create instance of, deep copy:
string something;
string somethingelse;
something = "randomtext";
somethingelse = something;

Pass by reference
Every object is passed by reference. As far as I know, there are no explicit address-of (&) or dereference (*) operators; ActionScript is a higher-level language than that.
Primitive types (and Strings are primitive - see link) are immutable in ActionScript, so pass by value and pass by reference are effectively the same.
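Coming from C++, a rough analogy (a sketch of my own, not from the original answer) is that an ActionScript object parameter behaves like a pointer passed by value: the callee can mutate the object it refers to, but it cannot reseat the caller's variable.
#include <iostream>
#include <string>

struct Thing { std::string label; };

void mutate(Thing* t) {    // ~ AS3: function mutate(t:Thing):void
    t->label = "changed";  // visible to the caller
    t = nullptr;           // NOT visible to the caller
}

int main() {
    Thing thing{"original"};
    mutate(&thing);
    std::cout << thing.label << "\n";  // prints "changed"
}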
Deep Copy / Instance of
ObjectUtil.clone / ObjectUtil.copy will create sometimes-deep copies of Objects if you're working in Flex. I usually don't rely on them for anything deep, however. In most cases you will want to create your own clone-style method to make a deep copy.
A generic, flexible clone method can be found here

The rules for passing by reference are different for simple data types like String and Number than they are for objects and complex data types.
If you are passing a string to a function, it creates a copy, leaving the original untouched.
So to pass by reference, try creating an object:
var str:Object = {string:"foo"};
passByref(str);
trace(str.string);
private function passByref(str:Object):void
{
    str.string = str.string + "bar";
    trace("inside", str);
}
As for deep object cloning, this works great:
package
{
    import flash.utils.ByteArray;

    public class DeepCopyUtil
    {
        public static function clone (source : Object) : *
        {
            var array : ByteArray = new ByteArray ();
            array.writeObject (source);
            array.position = 0;
            return array.readObject ();
        }
    }
}
Credit where credit is due:
http://cookbooks.adobe.com/post_How_to_create_deep_copies_of_objects_and_arrays-19261.html

In ActionScript you have to declare everything with function, var or const.
You declare the type after the name, separated by a colon (like someVar:String), and a function's return type after its parameter list.
Creating a function
function someFunction(someString:String):void // note: "var" is a reserved word, so the parameter needs a real name
{
}
Copy a string
var something:String;
var somethingElse:String;
something = "randomtext";
somethingElse = something;

Related

c++ passing json object by reference

In the code below, I take requests from a client, put them together into a JSON object in my server class, and send it to a pusher (which is directly connected to a website and puts my data there so I can search the data easily).
The code is working perfectly fine, but my manager said that I need to pass the JSON by reference in this code, and I have no idea what to do.
On Server Class:
grpc::Status RouteGuideImpl::PubEvent(grpc::ServerContext *context,
                                      const events::PubEventRequest *request,
                                      events::PubEventResponse *response){
    for(int i = 0; i < request->events_size(); i++){
        nlohmann::json object;
        auto message = request->events(i);
        object["uuid"] = message.uuid();
        object["topic"] = message.type();
        pusher.jsonCollector(object);
    }
    ...
}
On Pusher Class:
private:
    nlohmann::json queue = nlohmann::json::array();
public:
    void Pusher::jsonCollector(nlohmann::json dump){
        queue.push_back(dump);
    }
    void Pusher::curlPusher(){
        std::string str = queue.dump();
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, str.data());
        ...
    }
As much as I understand, I need to send the json object by reference. How can I do that?
The simple answer is to change
void Pusher::jsonCollector(nlohmann::json dump)
to
void Pusher::jsonCollector(const nlohmann::json& dump)
(Note that if this is inside the class definition, the Pusher:: qualifier is a non-standard Visual Studio extension.)
This will reduce the number of times the object is copied from 2 to 1; however, you can avoid the copy completely by using std::move:
void Pusher::jsonCollector(nlohmann::json dump){
    queue.push_back(std::move(dump));
}
And call it with:
pusher.jsonCollector(std::move(obj));
If you want to enforce this behaviour to ensure that callers of jsonCollector always use std::move you can change jsonCollector to:
void Pusher::jsonCollector(nlohmann::json&& dump){
    queue.push_back(std::move(dump));
}
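To make the copy counts concrete, here is a minimal sketch (with a hypothetical Probe type standing in for nlohmann::json; not from the original post) that prints one line per copy or move:
#include <iostream>
#include <utility>
#include <vector>

struct Probe {
    Probe() = default;
    Probe(const Probe&) { std::cout << "copy\n"; }
    Probe(Probe&&) noexcept { std::cout << "move\n"; }
};

std::vector<Probe> queue;

void collectByValue(Probe p)           { queue.push_back(p); }            // copy at the call + copy into the vector (2 copies)
void collectByConstRef(const Probe& p) { queue.push_back(p); }            // exactly one copy, into the vector
void collectByRvalueRef(Probe&& p)     { queue.push_back(std::move(p)); } // no copies; the caller must std::move

int main() {
    queue.reserve(8);                  // avoid reallocation noise in the output
    Probe a, b, c;
    collectByValue(a);                 // prints: copy, copy
    collectByConstRef(b);              // prints: copy
    collectByRvalueRef(std::move(c));  // prints: move
}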
Well, references are one of the many, many features that distinguish C++ from C.
In other languages, like Python or Java, when you pass an object (not a basic type) to a function and change it there, it is changed in the caller as well. In those languages you don't have pointers, but you still need to pass the object itself, not a copy.
That's what references give you in C++. They are used like value types, but they are not copies.
Pointers can be nullptr (or NULL in C), references cannot. The address a pointer points to can be changed (reassigned); you cannot change which object a reference refers to.
Have a look at this https://en.cppreference.com/w/cpp/language/reference for more information.
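A minimal sketch of the distinction described above (my addition):
#include <cassert>

void addOneRef(int& x) { x += 1; }   // the callee modifies the caller's object directly
void addOnePtr(int* x) { *x += 1; }  // same effect, but the pointer itself could be null

int main() {
    int value = 41;
    addOneRef(value);    // no & needed at the call site
    addOnePtr(&value);   // the caller must take the address explicitly
    assert(value == 43);

    int other = 0;
    int* p = &value;
    p = &other;          // a pointer can be re-pointed at another object
    int& r = value;
    r = other;           // this does NOT rebind r; it assigns other's value (0) to 'value'
    assert(value == 0 && &r == &value);
}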

Convert the data type of a class object (C++)

I am writing a game in which an object has the ability to turn into an object of another class (e.g. Clark Kent -> Superman). I would like to know the most efficient way to implement this.
The logic of my current code:
I have created a turnInto() function inside the ClarkKent class. The turnInto() function calls the constructor of the Superman class, passing all the needed info to it. The next step is to assign the address of the Superman object to the current ClarkKent object.
void ClarkKent::turnInto() {
    Superman sMan(getName(), getMaxHP(), getDamage());
    &(*this) = &sMan; // <- error here
    this->ClarkKent::~ClarkKent();
}
As you might have guessed, the compiler gives an error that the expression is not assignable. Not sure how to find a correct solution to this.
Keep it simple and don't play tricks you don't understand with your objects.
Superman ClarkKent::turnInto() {
    return {getName(), getMaxHP(), getDamage()};
}
At the call site:
ClarkKent some_guy{...};
auto some_other_guy = some_guy.turnInto();
Or if you need something fancy, keep the current alter ego in a std::variant and replace it when he changes:
using NotBatman = std::variant<ClarkKent, Superman>;
NotBatman some_guy = ClarkKent{...};
// replace the ClarkKent alternative with the Superman he turns into
some_guy = std::get<ClarkKent>(some_guy).turnInto();
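As a hedged follow-up sketch (my addition, not part of the original answer), other code can dispatch on whichever alternative the variant currently holds with std::visit:
#include <iostream>
#include <type_traits>
#include <variant>

struct ClarkKent { /* ... */ };
struct Superman  { /* ... */ };
using NotBatman = std::variant<ClarkKent, Superman>;

void describe(const NotBatman& who) {
    // the lambda is called with whichever alternative is currently active
    std::visit([](const auto& v) {
        if constexpr (std::is_same_v<std::decay_t<decltype(v)>, Superman>)
            std::cout << "It's Superman\n";
        else
            std::cout << "Just a reporter\n";
    }, who);
}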

Why does PyCXX handle new-style classes in the way it does?

I'm picking apart some C++ Python wrapper code that allows the consumer to construct custom old style and new style Python classes from C++.
The original code comes from PyCXX, with old and new style classes here and here. I have however rewritten the code substantially, and in this question I will reference my own code, as it allows me to present the situation in the greatest clarity that I am able. I think there would be very few individuals capable of understanding the original code without several days of scrutiny... For me it has taken weeks and I'm still not clear on it.
The old style simply derives from PyObject,
template<typename FinalClass>
class ExtObj_old : public ExtObjBase<FinalClass>
// ^ which : ExtObjBase_noTemplate : PyObject
{
public:
    // forwarding function to mitigate awkwardness retrieving static method
    // from base type that is incomplete due to templating
    static TypeObject& typeobject() { return ExtObjBase<FinalClass>::typeobject(); }

    static void one_time_setup()
    {
        typeobject().set_tp_dealloc( [](PyObject* t) { delete (FinalClass*)(t); } );
        typeobject().supportGetattr(); // every object must support getattr
        FinalClass::setup();
        typeobject().readyType();
    }

    // every object needs getattr implemented to support methods
    Object getattr( const char* name ) override { return getattr_methods(name); }
    // ^ MARKER1

protected:
    explicit ExtObj_old()
    {
        PyObject_Init( this, typeobject().type_object() ); // MARKER2
    }
When one_time_setup() is called, it forces (by accessing base class typeobject()) creation of the associated PyTypeObject for this new type.
Later when an instance is constructed, it uses PyObject_Init
So far so good.
But the new style class uses much more complicated machinery. I suspect this is related to the fact that new style classes allow derivation.
And this is my question, why is the new style class handling implemented in the way that it is? Why is it having to create this extra PythonClassInstance structure? Why can't it do things the same way the old-style class handling does? i.e. Just type convert from the PyObject base type? And seeing as it doesn't do that, does this mean it is making no use of its PyObject base type?
This is a huge question, and I will keep amending the post until I'm satisfied it represents the issue well. It isn't a good fit for SO's format, I'm sorry about that. However, some world-class engineers frequent this site (one of my previous questions was answered by the lead developer of GCC for example), and I value the opportunity to appeal to their expertise. So please don't be too hasty to vote to close.
The new style class's one-time setup looks like this:
template<typename FinalClass>
class ExtObj_new : public ExtObjBase<FinalClass>
{
private:
    PythonClassInstance* m_class_instance;

public:
    static void one_time_setup()
    {
        TypeObject& typeobject{ ExtObjBase<FinalClass>::typeobject() };

        // these three functions are listed below
        typeobject.set_tp_new( extension_object_new );
        typeobject.set_tp_init( extension_object_init );
        typeobject.set_tp_dealloc( extension_object_deallocator );

        // this should be named supportInheritance, or supportUseAsBaseType
        // old style class does not allow this
        typeobject.supportClass(); // does: table->tp_flags |= Py_TPFLAGS_BASETYPE

        typeobject.supportGetattro(); // always support get and set attr
        typeobject.supportSetattro();

        FinalClass::setup();

        // add our methods to the extension type's method table
        { ... typeobject.set_methods( /* ... */ ); }

        typeobject.readyType();
    }

protected:
    explicit ExtObj_new( PythonClassInstance* self, Object& args, Object& kwds )
        : m_class_instance{self}
    { }
So the new style uses a custom PythonClassInstance structure:
struct PythonClassInstance
{
    PyObject_HEAD
    ExtObjBase_noTemplate* m_pycxx_object;
};
PyObject_HEAD, if I dig into Python's object.h, is just a macro for PyObject ob_base; -- no further complications, like #if #else. So I don't see why it can't simply be:
struct PythonClassInstance
{
    PyObject ob_base;
    ExtObjBase_noTemplate* m_pycxx_object;
};
or even:
struct PythonClassInstance : PyObject
{
    ExtObjBase_noTemplate* m_pycxx_object;
};
Anyway, it seems that its purpose is to tag a pointer onto the end of a PyObject. This will be because Python runtime will often trigger functions we have placed in its function table, and the first parameter will be the PyObject responsible for the call. So this allows us to retrieve the associated C++ object.
But we also need to do that for the old-style class.
Here is the function responsible for doing that:
ExtObjBase_noTemplate* getExtObjBase( PyObject* pyob )
{
    if( pyob->ob_type->tp_flags & Py_TPFLAGS_BASETYPE )
    {
        /*
        New style class uses a PythonClassInstance to tag on an additional
        pointer onto the end of the PyObject
        The old style class just seems to typecast the pointer back up
        to ExtObjBase_noTemplate
        ExtObjBase_noTemplate does indeed derive from PyObject
        So it should be possible to perform this typecast
        Which begs the question, why on earth does the new style class feel
        the need to do something different?
        This looks like a really nice way to solve the problem
        */
        PythonClassInstance* instance = reinterpret_cast<PythonClassInstance*>(pyob);
        return instance->m_pycxx_object;
    }
    else
        return static_cast<ExtObjBase_noTemplate*>( pyob );
}
My comment articulates my confusion.
And here, for completeness, is us inserting a lambda trampoline into the PyTypeObject's function-pointer table, so that the Python runtime can trigger it:
table->tp_setattro = [] (PyObject* self, PyObject* name, PyObject* val) -> int
{
    try {
        ExtObjBase_noTemplate* p = getExtObjBase( self );
        return ( p -> setattro(Object{name}, Object{val}) );
    }
    catch( Py::Exception& ) { /* indicate error */
        return -1;
    }
};
(In this demonstration I'm using tp_setattro, note that there are about 30 other slots, which you can see if you look at the doc for PyTypeObject)
(in fact the major reason for working this way is that we can try{}catch{} around every trampoline. This saves the consumer from having to code repetitive error trapping.)
So, we pull out the "base type for the associated C++ object" and call its virtual setattro (just using setattro as an example here). A derived class will have overridden setattro, and this override will get called.
The old-style class provides such an override, which I've labelled MARKER1 -- it is in the top listing for this question.
The only thing I can think of is that maybe different maintainers have used different techniques. But is there some more compelling reason why old and new style classes require different architectures?
PS for reference, I should include the following methods from new style class:
static PyObject* extension_object_new( PyTypeObject* subtype, PyObject* args, PyObject* kwds )
{
    PyObject* pyob = subtype->tp_alloc(subtype,0);
    PythonClassInstance* o = reinterpret_cast<PythonClassInstance *>( pyob );
    o->m_pycxx_object = nullptr;
    return pyob;
}
^ to me, this looks absolutely wrong.
It appears to be allocating memory, re-casting to some structure that might exceed the amount allocated, and then nulling right at the end of this.
I'm surprised it hasn't caused any crashes.
I can't see any indication anywhere in the source code that these 4 bytes are owned.
static int extension_object_init( PyObject* _self, PyObject* _args, PyObject* _kwds )
{
    try
    {
        Object args{_args};
        Object kwds{_kwds};
        PythonClassInstance* self{ reinterpret_cast<PythonClassInstance*>(_self) };

        if( self->m_pycxx_object )
            self->m_pycxx_object->reinit( args, kwds );
        else
            // NOTE: observe this is where we invoke the constructor, but indirectly (i.e. through final)
            self->m_pycxx_object = new FinalClass{ self, args, kwds };
    }
    catch( Exception & )
    {
        return -1;
    }
    return 0;
}
^ note that there is no implementation for reinit, other than the default
virtual void reinit ( Object& args , Object& kwds ) {
    throw RuntimeError( "Must not call __init__ twice on this class" );
}

static void extension_object_deallocator( PyObject* _self )
{
    PythonClassInstance* self{ reinterpret_cast< PythonClassInstance* >(_self) };
    delete self->m_pycxx_object;
    _self->ob_type->tp_free( _self );
}
EDIT: I will hazard a guess, thanks to insight from Yhg1s on the IRC channel.
Maybe it is because when you create a new old-style class instance, it is guaranteed to overlap a PyObject structure perfectly.
Hence it is safe to derive from PyObject and pass a pointer to the underlying PyObject into Python, which is what the old-style class does (MARKER2).
On the other hand, new style class creates a {PyObject + maybe something else} object.
i.e. It wouldn't be safe to do the same trick, as Python runtime would end up writing past the end of the base class allocation (which is only a PyObject).
Because of this, we need to get Python to allocate for the class, and return us a pointer which we store.
Because we are now no longer making use of the PyObject base-class for this storage, we cannot use the convenient trick of typecasting back to retrieve the associated C++ object.
Which means that we need to tag on an extra sizeof(void*) bytes to the end of the PyObject that actually does get allocated, and use this to point to our associated C++ object instance.
However, there is some contradiction here.
struct PythonClassInstance
{
    PyObject_HEAD
    ExtObjBase_noTemplate* m_pycxx_object;
};
^ if this is indeed the structure that accomplishes the above, then it is saying that the new style class instance is indeed fitting exactly over a PyObject, i.e. It is not overlapping into the m_pycxx_object.
And if this is the case, then surely this whole process is unnecessary.
EDIT: here are some links that are helping me learn the necessary ground work:
http://eli.thegreenplace.net/2012/04/16/python-object-creation-sequence
http://realmike.org/blog/2010/07/18/introduction-to-new-style-classes-in-python
Create an object using Python's C API
to me, this looks absolutely wrong. It appears to be allocating memory, re-casting to some structure that might exceed the amount allocated, and then nulling right at the end of this. I'm surprised it hasn't caused any crashes. I can't see any indication anywhere in the source code that these 4 bytes are owned
PyCXX does allocate enough memory, but it does so by accident. This appears to be a bug in PyCXX.
The amount of memory Python allocates for the object is determined by the first call to the following static member function of PythonClass<T>:
static PythonType &behaviors()
{
    ...
    p = new PythonType( sizeof( T ), 0, default_name );
    ...
}
The constructor of PythonType sets the tp_basicsize of the Python type object to sizeof(T). This way, when Python allocates an object, it knows to allocate at least sizeof(T) bytes. It works because sizeof(T) turns out to be larger than sizeof(PythonClassInstance) (T is derived from PythonClass<T>, which derives from PythonExtensionBase, which is large enough).
However, it misses the point. It should actually allocate only sizeof(PythonClassInstance). This appears to be a bug in PyCXX - it allocates too much, rather than too little, space for storing a PythonClassInstance object.
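To illustrate the size relationship (an illustrative sketch with made-up names, not PyCXX source): tp_basicsize is what tells Python how many bytes tp_alloc should hand out per instance, and here it only needs to cover the PyObject header plus the single back-pointer.
#include <Python.h>

struct ExtObjBase_noTemplate;  // stand-in forward declaration

struct PythonClassInstance {
    PyObject_HEAD
    ExtObjBase_noTemplate* m_pycxx_object;
};

static PyTypeObject ShellType = {
    PyVarObject_HEAD_INIT(nullptr, 0)
    "example.Shell",                 /* tp_name */
    sizeof(PythonClassInstance),     /* tp_basicsize: all tp_alloc really needs */
    0,                               /* tp_itemsize */
    /* remaining slots left at their defaults */
};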
And this is my question, why is the new style class handling implemented in the way that it is? Why is it having to create this extra PythonClassInstance structure? Why can't it do things the same way the old-style class handling does?
Here's my theory why new style classes are different from the old style classes in PyCXX.
Before Python 2.2, where new style classes were introduced, there was no tp_init member in the type object. Instead, you needed to write a factory function that would construct the object. This is how PythonExtension<T> is supposed to work - the factory function converts the Python arguments to C++ arguments, asks Python to allocate the memory and then calls the constructor using placement new.
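A hedged sketch of that factory style (illustrative names, not actual PyCXX source): Python allocates the raw instance, then the C++ constructor runs in that memory via placement new. Here FinalClass is assumed to derive from PyObject and to re-establish the header itself (e.g. by calling PyObject_Init, as the old-style constructor does at MARKER2).
#include <Python.h>
#include <new>

template <typename FinalClass>
PyObject* factory_new(PyTypeObject* type, PyObject* args, PyObject* kwds)
{
    PyObject* raw = type->tp_alloc(type, 0);             // Python-owned block of tp_basicsize bytes
    if (!raw)
        return nullptr;
    FinalClass* obj = new (raw) FinalClass(args, kwds);  // construct the C++ object in place
    return obj;                                          // implicit upcast back to PyObject*
}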
Python 2.2 added the new style classes and the tp_init member. Python first creates the object and then calls the tp_init method. Keeping the old way would have required the objects to first have a dummy constructor that creates an "empty" object (e.g. initializes all members to null) and then, when tp_init is called, to run an additional initialization stage. This makes the code uglier.
It seems that the author of PyCXX wanted to avoid that. PyCXX works by first creating a dummy PythonClassInstance object and then when tp_init is called, creates the actual PythonClass<T> object using its constructor.
... does this mean it is making no use of its PyObject base type?
This appears to be correct; the PyObject base class does not seem to be used anywhere. All the interesting methods of PythonExtensionBase use the virtual self() method, which returns m_class_instance, and they completely ignore the PyObject base class.
My guess (only a guess, though) is that PythonClass<T> was added to an existing system and it seemed easier to just derive from PythonExtensionBase instead of cleaning up the code.

luabind: cannot retrieve values from table indexed by non-built-in classes‏

I'm using luabind 0.9.1 from Ryan Pavlik's master distribution with Lua 5.1, cygwin on Win XP SP3 + latest patches x86, boost 1.48, gcc 4.3.4. Lua and boost are cygwin pre-compiled versions.
I've successfully built luabind in both static and shared versions.
Both versions pass all the tests EXCEPT for the test_object_identity.cpp test which fails in both versions.
I've tracked down the problem to the following issue:
If an entry in a table is created with a key that is a NON-built-in class (i.e., not int, string, etc.), the value CANNOT be retrieved.
Here's a code piece that demonstrates this:
#include "test.hpp"
#include <luabind/luabind.hpp>
#include <luabind/detail/debug.hpp>
using namespace luabind;
struct test_param
{
int obj;
};
void test_main(lua_State* L)
{
using namespace luabind;
module(L)
[
class_<test_param>("test_param")
.def_readwrite("obj", &test_param::obj)
];
test_param temp_object;
object tabc = newtable(L);
tabc[1] = 10;
tabc[temp_object] = 30;
TEST_CHECK( tabc[1] == 10 ); // passes
TEST_CHECK( tabc[temp_object] == 30 ); // FAILS!!!
}
tabc[1] is indeed 10 while tabc[temp_object] is NOT 30! (actually, it seems to be nil)
However, if I use iterate to go over the tabc entries, both entries are there with the CORRECT key/value pairs.
Any ideas?
BTW, overloading the == operator like this:
#include <luabind/operator.hpp>
struct test_param
{
    int obj;

    bool operator==(test_param const& rhs) const
    {
        return obj == rhs.obj;
    }
};
and
module(L)
[
    class_<test_param>("test_param")
        .def_readwrite("obj", &test_param::obj)
        .def(const_self == const_self)
];
Doesn't change the result.
I also tried switching to settable() and gettable() from the [] operator. The result is the same. I can see with the debugger that default conversion of the key is invoked, so I guess the error arises from somewhere therein, but it's beyond me to figure out what exactly the problem is.
As the following simple test case shows, there's definitely a bug in Luabind's conversion of complex types:
struct test_param : wrap_base
{
    int obj;

    bool operator==(test_param const& rhs) const
    { return obj == rhs.obj; }
};

void test_main(lua_State* L)
{
    using namespace luabind;

    module(L)
    [
        class_<test_param>("test_param")
            .def(constructor<>())
            .def_readwrite("obj", &test_param::obj)
            .def(const_self == const_self)
    ];

    object tabc, zzk, zzv;
    test_param tp, tp1;
    tp.obj = 123456;

    // create new table
    tabc = newtable(L);

    // set tabc[tp] = 5;
    //        o     k   v
    settable( tabc, tp, 5);

    // get access to entry through iterator() API
    iterator zzi(tabc);

    // get the key object
    zzk = zzi.key();

    // read back the value through gettable() API
    //             o     k
    zzv = gettable(tabc, zzk);

    // check the entry has the same value
    // irrespective of access method
    TEST_CHECK ( *zzi == 5 &&
                 object_cast<int>(zzv) == 5 );

    // convert key to its REAL type (test_param)
    tp1 = object_cast<test_param>(zzk);

    // check two keys are the same
    TEST_CHECK( tp == tp1 );

    // read the value back from table using REAL key type
    zzv = gettable(tabc, tp1);

    // check the value
    TEST_CHECK( object_cast<int>(zzv) == 5 );
    // the previous call FAILS with
    // Terminated with exception: "unable to make cast"
    // this is because gettable() doesn't return
    // a TRUE value, but nil instead
}
Hopefully, someone smarter than me can figure this out,
Thx
I've traced the problem to the fact that Luabind creates a NEW, DISTINCT object EVERY time you use a complex value as a key (but it does NOT if you use a primitive one or an object).
Here's a small test case that demonstrates this:
struct test_param : wrap_base
{
    int obj;

    bool operator==(test_param const& rhs) const
    { return obj == rhs.obj; }
};

void test_main(lua_State* L)
{
    using namespace luabind;

    module(L)
    [
        class_<test_param>("test_param")
            .def(constructor<>())
            .def_readwrite("obj", &test_param::obj)
            .def(const_self == const_self)
    ];

    object tabc, zzk, zzv;
    test_param tp;
    tp.obj = 123456;

    tabc = newtable(L);

    //        o     k   v
    settable( tabc, tp, 5);

    iterator zzi(tabc), end;
    std::cerr << "value = " << *zzi << "\n";
    zzk = zzi.key();

    //        o     k   v
    settable( tabc, tp, 6);
    settable( tabc, zzk, 7);

    for (zzi = iterator(tabc); zzi != end; ++zzi)
    {
        std::cerr << "value = " << *zzi << "\n";
    }
}
Notice how tabc[tp] first has the value 5 and then is overwritten with 7 when accessed through the key object. However, when accessed AGAIN through tp, a new entry gets created. This is why gettable() fails subsequently.
Thx,
David
Disclaimer: I'm not an expert on luabind. It's entirely possible I've missed something about luabind's capabilities.
First of all, what is luabind doing when converting test_param to a Lua key? The default policy is copy. To quote the luabind documentation:
This will make a copy of the parameter. This is the default behavior when passing parameters by-value. Note that this can only be used when passing from C++ to Lua. This policy requires that the parameter type has an accessible copy constructor.
In practice, what this means is that luabind will create a new object (called "full userdata") which is owned by the Lua garbage collector, and will copy your struct into it. This is a very safe thing to do because it no longer matters what you do with the C++ object; the Lua object will stick around without really any overhead. This is a good way to do bindings for by-value sorts of objects.
Why does luabind create a new object each time you pass it to Lua? Well, what else could it do? It doesn't matter if the address of the passed object is the same, because the original C++ object could have changed or been destroyed since it was first passed to Lua. (Remember, it was copied to Lua by value, not by reference.) So, with only ==, luabind would have to maintain a list of every object of that type which had ever been passed to Lua (possibly weakly) and compare your object against each one to see if it matches. luabind doesn't do this (nor do I think it should).
Now, let's look at the Lua side. Even though luabind creates two different objects, they're still equal, right? Well, the first problem is that, besides certain built-in types, Lua can only hold objects by reference. Each of those "full userdata" that I mentioned before is actually a pointer. That means that they are not identical.
But they are equal, if we define an __eq meta operation. Unfortunately, Lua itself simply does not support this case. Userdata when used as table keys are always compared by identity, no matter what. This actually isn't special for userdata; it is also true for tables. (Note that to properly support this case, Lua would need to override the hashcode operation on the object in addition to __eq. Lua also does not support overriding the hashcode operation.) I can't speak for the authors of Lua why they did not allow this (and it has been suggested before), but there it is.
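The identity-only keying is easy to see without luabind at all; here is a hedged sketch (my addition) using the plain Lua C API to run a few lines of Lua:
#include <lua.hpp>

int main() {
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);
    luaL_dostring(L,
        "local a, b = {}, {}\n"
        "setmetatable(a, { __eq = function() return true end })\n"
        "setmetatable(b, getmetatable(a))  -- same metamethod, so __eq applies to ==\n"
        "local t = {}\n"
        "t[a] = 5\n"
        "print(a == b)  --> true  (__eq is consulted for ==)\n"
        "print(t[a])    --> 5\n"
        "print(t[b])    --> nil   (indexing compares keys by identity and ignores __eq)\n");
    lua_close(L);
    return 0;
}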
So, what are the options?
The simplest thing would be to convert test_param to an object once (explicitly), and then use that object to index the table both times. However, I suspect that while this fixes your toy example, it isn't very helpful in practice.
Another option is simply not to use such types as keys. Actually, I think this is a very good suggestion, since this kind of light-weight binding is quite useful, and the only other option is to discard it.
It looks like you can define a custom conversion on your type. In your example, it might be reasonable to convert your type to a Lua number which will behave well as a table index.
Use a different kind of binding. There will be some overhead, but if you want identity, you'll have to live with it. It sounds like luabind has some support for wrappers, which you may need to use to preserve identity:
When a pointer or reference to a registered class with a wrapper is passed to Lua, luabind will query for it's dynamic type. If the dynamic type inherits from wrap_base, object identity is preserved.

How can I bind a C/C++ structure to Ruby?

I need some advice on how to bind a C/C++ structure to Ruby. I've read some manuals and I found out how to bind class methods to a class, but I still don't understand how to bind structure fields and make them accessible in Ruby.
Here is the code I'm using:
myclass = rb_define_class("Myclass", 0);
...

typedef struct nya
{
    char const* name;
    int age;
} Nya;

Nya* p;
VALUE vnya;

p = (Nya*)(ALLOC(Nya));
p->name = "Masha";
p->age = 24;

vnya = Data_Wrap_Struct(myclass, 0, free, p);

rb_eval_string("def foo( a ) p a end");   // This function should print structure object
rb_funcall(0, rb_intern("foo"), 1, vnya); // Here I call the function and pass the object into it
The Ruby function seems to assume that a is a pointer. It prints the numeric value of the pointer instead of its real content (i.e., ["Masha", 24]). Obviously the Ruby function can't recognize this object, since I didn't set the object's property names and types.
How can I do this? Unfortunately I can't figure it out.
You have already wrapped your pointer in a Ruby object. Now all you have to do is define how it can be accessed from the Ruby world:
/* Feel free to convert this function to a macro */
static Nya * get_nya_from(VALUE value) {
    Nya * pointer = 0;
    Data_Get_Struct(value, Nya, pointer);
    return pointer;
}

VALUE nya_get_name(VALUE self) {
    return rb_str_new_cstr(get_nya_from(self)->name);
}

VALUE nya_set_name(VALUE self, VALUE name) {
    /* StringValueCStr returns a null-terminated string. I'm not sure if
       it will be freed when the name gets swept by the GC, so maybe you
       should create a copy of the string and store that instead. */
    get_nya_from(self)->name = StringValueCStr(name);
    return name;
}

VALUE nya_get_age(VALUE self) {
    return INT2FIX(get_nya_from(self)->age);
}

VALUE nya_set_age(VALUE self, VALUE age) {
    get_nya_from(self)->age = FIX2INT(age);
    return age;
}

void init_Myclass() {
    /* Associate these functions with Ruby methods. */
    rb_define_method(myclass, "name", nya_get_name, 0);
    rb_define_method(myclass, "name=", nya_set_name, 1);
    rb_define_method(myclass, "age", nya_get_age, 0);
    rb_define_method(myclass, "age=", nya_set_age, 1);
}
Now that you can access the data your structure holds, you can simply define the high level methods in Ruby:
class Myclass
  def to_a
    [name, age]
  end
  alias to_ary to_a

  def to_s
    to_a.join ', '
  end

  def inspect
    to_a.inspect
  end
end
For reference: README.EXT
This is not a direct answer to your question about structures, but it is a general solution to the problem of porting C++ classes to Ruby.
You could use SWIG to wrap C/C++ classes, structs and functions. In the case of a single structure, it's like burning down a house to fry an egg. However, if you need a tool to rapidly convert C++ classes to Ruby (and 20 other languages), SWIG might be useful to you.
In your case involving a structure, you just need to create a .i file which includes (in the simplest case) the line #include <your C++ library.h>.
P.S. Once more, it's not a direct answer to your question involving this one struct, but maybe you could make use of a more general solution, in which case this may help you.
Another option is to use RubyInline - it has limited support for converting between C and Ruby types (such as int, char * and float), and it also has support for accessing C structures - see the accessor method in the API.