SCIP: About the "SCIP_ReaderData" in the bin packing example - c++

A question about the reader plugin defined in the binpacking example. I found the following declaration in the interface method (file reader_bpa.c):
SCIP_READERDATA* readerdata;
readerdata = NULL;
I know SCIP_READERDATA is defined in file type_reader.h:
typedef struct SCIP_ReaderData SCIP_READERDATA;
However, the struct SCIP_ReaderData is not defined in the binpacking reader, so which actual struct does "SCIP_READERDATA* readerdata;" refer to? What kind of pointer is readerdata?
PS: I noticed that the default readers in SCIP have similar usage.

That is more a C question than a SCIP question, if I am not mistaken. The interface functions SCIPincludeReader() and SCIPincludeReaderBasic() require a pointer to reader data as their last argument. Reader data is meant to allow the plugin author to connect arbitrary data with their reader plugin by declaring the corresponding struct SCIP_ReaderData, as many other plugins do.
If you try to do anything with the pointer, e.g., allocate memory for it using SCIPallocMemory(scip, &readerdata), you will get compiler errors because the pointer refers to an incomplete type, namely struct SCIP_ReaderData.
More useful information on incomplete types can be found, e.g., here.
The point is, the example uses this only to make it clearer which arguments are passed to the SCIPincludeReaderBasic() function, where you would otherwise just see NULL.
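To make that concrete, here is a minimal C-style sketch (field and function names are made up) of how a plugin that actually needs reader data would complete the type in its own source file; once the struct is defined there, the SCIPallocMemory() call mentioned above compiles, and the pointer is handed to SCIPincludeReaderBasic() as its last argument:

/* reader_xyz.c -- hypothetical plugin source; the field names are illustrative */
#include "scip/scip.h"

/* Completing the incomplete type: only this translation unit knows the layout. */
struct SCIP_ReaderData
{
   int   nfilesread;      /* e.g., how many files this reader has parsed      */
   char* lastfilename;    /* e.g., name of the most recently parsed file      */
};

static SCIP_RETCODE includeMyReader(SCIP* scip)
{
   SCIP_READERDATA* readerdata;

   /* now legal, because struct SCIP_ReaderData is a complete type here */
   SCIP_CALL( SCIPallocMemory(scip, &readerdata) );
   readerdata->nfilesread = 0;
   readerdata->lastfilename = NULL;

   /* readerdata would then be passed as the last argument of
    * SCIPincludeReaderBasic(), as described above; the binpacking example
    * has no data to attach and therefore simply passes NULL instead.       */
   return SCIP_OKAY;
}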

Related

What's the purpose of VkStructureType? [duplicate]

In all of the create info structs (vk*CreateInfo) in the new Vulkan API, there is ALWAYS a .sType member. Why is this there if the value can only be one thing? Also, the Vulkan specification is very explicit that you can only use vk*CreateInfo structs as parameters for their corresponding vkCreate* function. It seems a little redundant. I can see that if the driver was passing this struct straight to the GPU you might need to have it (I did notice it is always the first member), but making the app set it seems like a really bad idea: if the driver did it, apps would be much less error prone, and prepending an int to a struct doesn't seem like an extremely computationally expensive operation. I just don't see why it exists.
TL;DR
Why do the vk*CreateInfo structs have the .sType member?
They have one so that the pNext field actually works.
Yes, the API takes a struct with a proper C type, so both the caller and the receiver agree on what type that struct is. But especially nowadays, many such structs have linked lists of structures that provide additional information to the implementation. These extension structures (though many are core in Vulkan 1.1/1.2) are just like all other structures, with their own sType field.
These fields are crucial because the linked lists are built with pNext pointers... which are void*s. They have no set type. The way the implementation determines what a non-NULL pNext pointer points to is by examining the first 4 bytes stored there. This is the sType field; it allows the implementation to know what type to cast the pointer to.
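As a small sketch of that mechanism using only standard Vulkan 1.1/1.2 structures (nothing here is invented except the surrounding function, and physicalDevice is assumed to be a valid handle obtained elsewhere): every structure in the chain carries its own sType, and the implementation walks the pNext list reading that field to identify each node.

#include <vulkan/vulkan.h>

// Query the core feature struct plus a chained struct in one call.
void queryFeatures(VkPhysicalDevice physicalDevice)
{
    VkPhysicalDeviceVulkan11Features features11 = {};
    features11.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_1_FEATURES;

    VkPhysicalDeviceFeatures2 features2 = {};
    features2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2;
    features2.pNext = &features11;       // the implementation only sees a void* here...

    vkGetPhysicalDeviceFeatures2(physicalDevice, &features2);
    // ...and identifies the chained node by reading features11.sType through that pointer.
}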
Of course, the primary struct that an API takes doesn't strictly need an sType field, since its type is part of the API itself. However, there is a hypothetical reason to do so (it hasn't panned out in Vulkan releases).
A later version of Vulkan could expand on the creation of, for example, command buffer pools. But how would it do that? Well, they could add a whole new entrypoint: vkCreateCommandPool2. But this function would have almost the exact same signature as vkCreateCommandPool; the only difference is that they take different pCreateInfo structures.
So instead, all you have to do is declare a VkCommandPoolCreateInfo2 structure. And then declare that vkCreateCommandPool can take either one. How would the implementation tell which one you passed in?
Because the first 4 bytes of any such structure is sType. They can test that value. If the value is VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO, then it's the old structure. If it's VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO_2, then it's the new one.
Of course, as previously stated, this hasn't panned out; post-1.0 Vulkan versions opted to incorporate extension structs rather than replacing existing ones. But the option is there.
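For what it's worth, here is a rough sketch of what such a dispatch could look like on the implementation side; VkCommandPoolCreateInfo2 and VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO_2 are hypothetical, exactly as in the text above, and the surrounding function name is made up:

#include <vulkan/vulkan.h>

// Hypothetical implementation-side dispatch: all the callee knows up front is
// that any such struct begins with a VkStructureType, so peek at it and branch.
static void createCommandPoolDispatch(const void* pCreateInfo)
{
    const VkStructureType sType = *static_cast<const VkStructureType*>(pCreateInfo);

    if (sType == VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO)
    {
        const VkCommandPoolCreateInfo* info =
            static_cast<const VkCommandPoolCreateInfo*>(pCreateInfo);
        // ... original creation path using info->flags, info->queueFamilyIndex ...
        (void)info;
    }
    // else if (sType == VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO_2)  // hypothetical value
    // {
    //     // ... creation path for the hypothetical VkCommandPoolCreateInfo2 ...
    // }
}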

Serializing a struct whose definition is not known

I am using the geos library in my software as the geometry engine. I am currently using its C API (as that is the recommended API).
Now the problem is I would like to serialize and deserialize the struct GEOSGeometry. The library itself is in C++ and the C API is a wrapper around it, so the struct definition is not available, per se. What are my options?
This is what the C API mentions:
/* When we're included by geos_c.cpp, those are #defined to the original
* JTS definitions via preprocessor. We don't touch them to allow the
* compiler to cross-check the declarations. However, for all "normal"
* C-API users, we need to define them as "opaque" struct pointers, as
* those clients don't have access to the original C++ headers, by design.
*/
#ifndef GEOSGeometry
typedef struct GEOSGeom_t GEOSGeometry;
And this is how it is wrapped
// Some extra magic to make type declarations in geos_c.h work -
// for cross-checking of types in header.
#define GEOSGeometry geos::geom::Geometry
Any help is appreciated.
First of all, if you really can't access the struct's definition in a source file, I'd try to inspect it with C++11 type_traits classes, e.g. is_pod, is_trivial, is_standard_layout, ...
This way, you can get an idea of what you are dealing with. If you see that the struct is quite simple, you can "hope" that it stores all data inside itself, i.e. does not point to other memory areas. Sadly, as far as I know, there is no way to find out whether a class has a pointer member.
In the end, all you can do is try to serialize it brutally by writing sizeof(GEOSGeometry) bytes (chars) to your output. Then read it back and... good luck!
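To make the two steps above concrete, here is an illustrative C++ sketch. It assumes you have at least one translation unit that can see the real geos::geom::Geometry definition (for pure C-API users the type stays incomplete and sizeof() won't even compile), and the raw dump only has a chance of round-tripping if the traits say the type is trivially copyable and it holds no pointers to external memory, which a real Geometry almost certainly does:

#include <type_traits>
#include <fstream>

// Inspect the type first (requires the full C++ definition to be visible):
// static_assert(std::is_standard_layout<geos::geom::Geometry>::value, "not standard layout");
// static_assert(std::is_trivially_copyable<geos::geom::Geometry>::value, "not trivially copyable");

// Brute-force byte dump, as suggested above; only meaningful if the checks pass.
template <typename T>
void dumpRaw(const T& obj, std::ofstream& out)
{
    out.write(reinterpret_cast<const char*>(&obj), sizeof(T));
}

template <typename T>
void loadRaw(T& obj, std::ifstream& in)
{
    in.read(reinterpret_cast<char*>(&obj), sizeof(T));
}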

UML representation for C/C++ function pointers

What would be the best representation of a C/C++ function pointer (fp) in a UML structural diagram?
I'm thinking about using an interface element, maybe even a 'degenerate' one with the constraint of having at most a single operation declared.
I found some proposal in this document: C and UML Synchronization User Guide, Section 5.7.4. But this sounds quite cumbersome and not very useful in practice, even if it is correct from a very low-level semantic point of view. Here's a diagram showing their concept briefly:
IMHO, in C and C++ function pointers are used as just such a narrowed view of an interface, one that only provides a single function and its signature. In C, fps are also used to implement more complex interfaces by declaring a struct containing a set of function pointers.
I think I can even manage to get my particular UML tool (Enterprise Architect) to forward generate the correct code, and synchronizing with code changes without harm.
My questions are:
Would declaring fps as part of interface elements in UML provide a correct semantic view?
What kind of stereotype should be used for a single fp declaration? At least I need to provide a typedef in code, so this would be my gut choice (I found this stereotype is proprietary to Enterprise Architect), and I need to define an appropriate stereotype to get the code generation adapted. Actually, I have chosen the stereotype name 'delegate'; does this have any implications or semantic collisions?
As for C++, would nesting a 'delegate'-stereotyped interface within a class element be enough to express a class member function pointer correctly?
Here's a sample diagram of my thoughts for C language representation:
This is the C code that should be generated from the above model:
struct Interface1;
typedef int (*CallbackFunc)(struct Interface1*);

typedef void (*func1Ptr)(struct Interface1*, int, char*);
typedef int (*func2Ptr)(struct Interface1*, char*);
typedef int (*func3Ptr)(struct Interface1*, CallbackFunc);

struct Interface1
{
    func1Ptr func1;
    func2Ptr func2;
    func3Ptr func3;
    void* instance;
};
/* The following extern declarations are only dummies to satisfy code
* reverse engineering, and never should be called.
*/
extern void func1(struct Interface1* self, int p1, char* p2);
extern int func2(struct Interface1* self, char* p1);
extern int func3(struct Interface1* self, CallbackFunc p1);
EDIT:
The whole problem boils down what would be the best way with the UML tool at hand and its specific code engineering capabilities. Thus I have added the enterprise-architect tag.
EA's help file has the following to say on the subject of function pointers:
When importing C++ source code, Enterprise Architect ignores function pointer declarations. To import them into your model you could create a typedef to define a function pointer type, then declare function pointers using that type. Function pointers declared in this way are imported as attributes of the function pointer type.
Note "could." This is from the C++ section, the C section doesn't mention function pointers at all. So they're not well supported, which in turn is of course due to the gap between the modelling and programming communities: non-trivial language concepts are simply not supported in UML, so any solution will by necessity be tool-specific.
My suggestion is a bit involved and it's a little bit hacky, but I think it should work pretty well.
Because in UML operations are not first-class and cannot be used as data types, my response is to create first-class entities for them - in other words, define function pointer types as classes.
These classes will serve two purposes: the class name will reflect the function's type signature so as to make it look familiar to the programmer in the diagrams, while a set of tagged values will represent the actual parameter and return types for use in code generation.
0) You may want to set up an MDG Technology for steps 1-4.
1) Define a tagged value type "retval" with the Detail "Type=RefGUID;Values=Class;"
2) Define a further set of tagged value types with the same Detail named "par1", "par2" and so on.
3) Define a profile with a Class stereotype "funptr" containing a "retval" tagged value (but no "par" tags).
4) Modify the code generation scripts Attribute Declaration and Parameter to retrieve the "retval" (always) and "par1" - "parN" (where defined) and generate correct syntax for them. This will be the tricky bit and I haven't actually done this. I think it can be done without too much effort, but you'll have to try it. You should also make sure that no code is generated for "funptr" class definitions as they represent anonymous types, not typedefs.
5) In your target project, define a set of classes to represent the primitive C types.
With this, you can define a function pointer type as a «funptr» class with a name like "long(*)(char)" for a function that takes a char and returns a long.
In the "retval" tag, select the "long" class you defined in step 4.
Add the "par1" tag manually, and select the "char" class as above.
You can now use this class as the type of an attribute or parameter, or anywhere else where EA allows a class reference (such as in the "par1" tag of a different «funptr» class; this allows you to easily create pointer types for functions where one of the parameters is itself of a function pointer type).
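In code terms, such a «funptr» class is just standing in for an ordinary C typedef; a hypothetical generated equivalent of the "long(*)(char)" class used as an attribute type would be:

/* hypothetical code-level equivalent of the «funptr» class "long(*)(char)" */
typedef long (*LongOfChar)(char);

/* an attribute or parameter typed with that class then simply becomes: */
struct Widget                 /* Widget/onConvert are made-up names */
{
    LongOfChar onConvert;
};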
The hackiest bit here is the numbered "par1" - "parN" tags. While it is possible in EA to define several tags with the same name (you may have to change the tagged value window options to see them), I don't think you could retrieve the different values in the code generation script (and even if you could I don't think the order would necessarily be preserved, and parameter order is important in C). So you'd need to decide the maximum number of parameters beforehand. Not a huge problem in practice; setting up say 20 parameters should be plenty.
This method is of no help for reverse engineering, as EA 9 does not allow you to customize the reverse-engineering process. However, the upcoming EA 10 (currently in RC 1) will allow this, although I haven't looked at it myself so I don't know what form this will take.
Defining function pointers is out of the scope of the UML specification. What is more, it is a language-specific feature that is not supported by many UML modeling tools. So I think the general answer to your first question is to avoid this feature. The tricks you provided are relevant to Enterprise Architect only and are not compatible with other UML modeling tools. Here is how function pointers are supported in some other UML software:
MagicDraw UML uses <<C++FunctionPtr>> stereotypes for FP class members and <<C++FunctionSignature>> for function prototype.
Sample code (taken from the official site; see the "Modeling typedef and function pointer for C++ code generation" viewlet):
class Pointer
{
    void (*f)(int i);
};
Corresponding UML model:
Objecteering defines FP attributes with a corresponding C++ TypeExpr note.
Rational Software Architect from IBM doesn't support function pointers. Users might add them to the generated code in user-defined sections that are left untouched during code->UML and UML->code transformations.
Seems correct to me. I'm not sure you should dive into the low-level details of describing the type and relations of your single function pointer. I usually find that describing an interface is detailed enough, without the need to decompose its internal elements.
I think you could conceptually wrap the function pointer in a class. UML does not have to be a blueprint of the code; documenting the concept is more important.
My feeling is that you want to map UML interfaces to the struct-with-function-pointers C idiom.
Interface1 is the important element in your model. Declaring function pointer object types all over the place will make your diagrams illegible.
Enterprise Architect allows you to specify your own code generators. Look for the Code Template Framework. You should be able to modify the preexisting code generator for C with the aid of a new stereotype or two.
I have been able to get something sort of working with Enterprise Architect. It's a bit of a hacky solution, but it meets my needs. What I did:
Create a new class stereotype named FuncPtr. I followed the guide here: http://www.sparxsystems.com/enterprise_architect_user_guide/10/extending_uml_models/addingelementsandmetaclass.html
When I did this I made a new view for the profile, so I can keep it contained outside of my main project.
Modified the Class code templates: basically, select the C language, start with the Class template, hit 'Add New Stereotype Override', and add FuncPtr as a new override.
Add in the following code to that new template:
%PI="\n"%
%ClassNotes%
typedef %classTag:"returnType"% (*%className%)(
%list="Attribute" #separator=",\n" #indent=" "%
);
Modified the Attribute Declaration code template, the same way as before, adding a new stereotype override.
Add in the following code to the new template:
%PI=""% %attConst=="T" ? "const" : ""%
%attType%
%attContainment=="By Reference" ? "*" : ""%
%attName%
That's all that I had to do to get function pointers in place in Enterprise Architect. When I want to define a function pointer I just:
Create a regular class
Add in the tag 'returnType' with the type of return I want
Add in attributes for the parameters.
This way it'll create a new type that can be used for attributes or parameters in other classes (structures) and operations. I didn't make it an operation itself because then it wouldn't have been referenced inside the tool as a type you can select.
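For illustration, given a hypothetical class stereotyped FuncPtr named CallbackFunc, with a 'returnType' tag of int and a single attribute 'code' of type int, the templates above should emit roughly the following (exact whitespace depends on the template engine):

/* Illustrative output for a hypothetical FuncPtr class named CallbackFunc
 * with returnType = int and a single attribute "code" of type int. */
typedef int (*CallbackFunc)(
  int code
);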
So it's a bit hacky, using specially stereotyped classes as typedefs for function pointers.
Like your first example, I would use a Classifier but hide it away in a profile. I think they've included it for clarity in explaining the concept; but in practice the whole idea of stereotypes is to abstract details away into profiles to avoid the 'noise' problem. EA is pretty good at handling profiles.
Where I differ from your first example is that I would classify with the Primitive Type stereotype, not the Data Type stereotype. Data Type is a domain-scope object, while Primitive Type is an atomic element with semantics defined outside the scope of UML. That is not to say you cannot add notes, especially in the profile, or give it a very clear stereotype name like functionPointer.

How to implement virtual table in c++

A virtual table is an array of function pointers.
How can I implement one when every function has a different signature?
You don't implement it.
The compiler generates it (or something with equivalent functionality), and it's not constrained by the type system so it can simply store the function addresses and generate whatever code is needed to call them correctly.
You can implement something vaguely similar using a struct containing different types of function pointer, rather than an array. That's quite a common way of implementing dynamic polymorphism in C; for example, the Linux kernel provides polymorphic behaviour for file-like objects by defining an interface along the lines of:
struct fileops {
    int (*fo_read) (struct file *fp, ...);
    int (*fo_write) (struct file *fp, ...);
    // and so on
};
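To complete the picture, here is a small self-contained sketch of that C idiom (compiled as C++ here; the names are invented for illustration and are not the actual kernel interface): each "implementation" fills in the table once, and callers dispatch through it.

#include <stdio.h>

struct file;   /* illustrative forward declaration, not the kernel's definition */

struct fileops_demo {
    int (*fo_read)(struct file* fp, char* buf, int n);
    int (*fo_write)(struct file* fp, const char* buf, int n);
};

/* one concrete "implementation" of the interface */
static int null_read(struct file*, char*, int)          { return 0; }  /* reads nothing      */
static int null_write(struct file*, const char*, int n) { return n; }  /* swallows everything */

static const struct fileops_demo null_ops = { null_read, null_write };

int main()
{
    /* dynamic dispatch through the table, just like a hand-written vtable */
    printf("wrote %d bytes\n", null_ops.fo_write(nullptr, "hi", 2));
    return 0;
}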
If functions in a virtual table have different signatures, you'll have to implement it as a structure type containing members with heterogeneous types.
Alternatively, if you have other information telling you what the signatures are, you can cast a function pointer to another function pointer type, as long as you cast it back to the correct type before calling it.
If you know every function at compile time, then you could use a struct of differently typed function pointers (however, if you know every function at compile time, why wouldn't you just use a class with virtual methods?).
If you want to do this at runtime, then an array of void* would probably suffice. You'd need to cast the pointers in when you store them and out (to the correct type) again before you call them. Of course, you'll need to keep track of the function types (including calling convention) somewhere else.
Without knowing what you're planning to do with this it's very difficult to give a more useful answer.
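As a sketch of that runtime variant (using a generic function-pointer type rather than void*, since converting a function pointer to void* isn't guaranteed by the standard; all names are made up):

#include <cstdio>

using GenericFn = void (*)();   // any function pointer can round-trip through this type

static int  add(int a, int b) { return a + b; }
static void greet()           { std::puts("hello"); }

int main()
{
    // "vtable" slots with heterogeneous signatures, stored under a common type
    GenericFn table[2] = {
        reinterpret_cast<GenericFn>(add),
        reinterpret_cast<GenericFn>(greet)
    };

    // cast back to the exact original type before calling; anything else is UB
    int (*addFn)(int, int) = reinterpret_cast<int (*)(int, int)>(table[0]);
    std::printf("2 + 3 = %d\n", addFn(2, 3));

    reinterpret_cast<void (*)()>(table[1])();
    return 0;
}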
There are valid reasons for implementing vtables in code. They're an implementation detail though, so you'll need to be targeting a known ABI rather than just 'C++'. The only time I've done this was an experiment to dynamically create new COM classes at runtime (the ABI expected of a COM object is a pointer to a vtable that contains functions following the __stdcall calling convention where the first 3 functions implement the IUnknown interface).

writing structs and classes to disk

The following function writes a struct to a file.
#define PAGESIZE sizeof(BTPAGE)
#define HEADERSIZE 2L
int btwrite(short rrn, BTPAGE *page_ptr)
{
    long addr;
    addr = (long) rrn * (long) PAGESIZE + HEADERSIZE;
    lseek(btfd, addr, SEEK_SET);   /* seek from the start of the file */
    return write(btfd, page_ptr, PAGESIZE);
}
The following is the struct.
typedef struct {
    short keycount;          /* number of keys in page      */
    int   key[MAXKEYS];      /* the actual keys             */
    int   value[MAXKEYS];    /* the actual values           */
    short child[MAXKEYS+1];  /* ptrs to rrns of descendants */
} BTPAGE;
What would happen if I changed the struct to a class, would it still work the same?
If I added class functions, would the size it takes up on disk increase?
There's a lot you need to learn here.
First of all, you're treating a structure as an array of bytes. This is strictly undefined behavior due to the strict aliasing rule. Anything can happen. So don't do it. Use proper serialization (for example via boost) instead. Yes, it's tedious. Yes, it's necessary.
Even if you ignore the undefinedness, and choose to become dependent on some particular compiler implementation (which may change even in the next compiler version), there are still reasons not to do it.
If you save a file on one machine, then load it on another, you may get garbage, because the second machine uses a different float representation, or a different endianness, or has different alignment rules, etc.
If your struct contains any pointers, it's very likely that saving them verbatim and then loading them back will result in an address that doesn't point to anything meaningful.
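For comparison, here is a minimal hand-rolled sketch of field-by-field serialization with a pinned byte order. This is not the Boost approach recommended above, just the same idea stripped down; the struct mirrors the question's BTPAGE with an arbitrary MAXKEYS, and every field is written explicitly instead of dumping the in-memory representation:

#include <cstdint>
#include <cstdio>

// Mirrors the question's BTPAGE (MAXKEYS value chosen arbitrarily for this sketch).
#define MAXKEYS 4
typedef struct {
    short keycount;
    int   key[MAXKEYS];
    int   value[MAXKEYS];
    short child[MAXKEYS + 1];
} BTPAGE;

// Write a 32-bit value in a fixed (little-endian) byte order, so the file means
// the same thing regardless of the writer's endianness, padding, or alignment.
static void put_i32(std::FILE* f, std::int32_t v)
{
    const std::uint32_t u = static_cast<std::uint32_t>(v);
    const unsigned char b[4] = {
        static_cast<unsigned char>(u & 0xFFu),
        static_cast<unsigned char>((u >> 8) & 0xFFu),
        static_cast<unsigned char>((u >> 16) & 0xFFu),
        static_cast<unsigned char>((u >> 24) & 0xFFu)
    };
    std::fwrite(b, 1, 4, f);
}

static void btpage_serialize(std::FILE* f, const BTPAGE* page)
{
    put_i32(f, page->keycount);   /* shorts are widened to 32 bits on disk */
    for (int i = 0; i < MAXKEYS; ++i)     put_i32(f, page->key[i]);
    for (int i = 0; i < MAXKEYS; ++i)     put_i32(f, page->value[i]);
    for (int i = 0; i < MAXKEYS + 1; ++i) put_i32(f, page->child[i]);
}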
Typically when you add a member function, this happens:
the function's machine code is stored in a place shared by all the class instances (it wouldn't make sense to duplicate it, since it's logically immutable)
a hidden "this" pointer is passed to the function when it's called, so it knows which object it's been called on.
none of this requires any storage space in the instances.
However, when you add at least one virtual function, the compiler typically needs to also add a data chunk called a vtable (read up on it). This makes it possible to call different code depending on the current runtime type of the object (aka polymorphism). So the first virtual function you add to the class likely does increase the object size.
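A quick way to see both effects; the exact numbers are compiler- and platform-dependent, so treat them as illustrative only:

#include <cstdio>

struct Plain {                       // member function only: no per-object storage cost
    int x;
    int y;
    void doubleIt() { x *= 2; y *= 2; }
};

struct WithVirtual {                 // the virtual function adds a hidden vtable pointer
    int x;
    int y;
    virtual void doubleIt() { x *= 2; y *= 2; }
};

int main()
{
    // Typically prints something like 8 and 16 on a 64-bit platform.
    std::printf("sizeof(Plain)       = %zu\n", sizeof(Plain));
    std::printf("sizeof(WithVirtual) = %zu\n", sizeof(WithVirtual));
    return 0;
}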
In C++, the difference between a struct and a class is simply that the members and base classes of a struct are public by default, whereas for a class they are private by default.
The technique of simply writing the bytes of the struct to a file and then reading them back in again only works if the struct is a plain old data, or POD, type. If you modify your struct such that it is no longer POD, this technique is not guaranteed to work (the rules describing what makes a POD struct are listed in answers to the linked question).
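If you want the compiler to enforce that before relying on btwrite-style raw I/O, a one-line guard (C++11, using the trait closest to "safe to copy byte-wise") is enough; it assumes the BTPAGE definition from the question is in scope:

#include <type_traits>

// Fails to compile the moment BTPAGE stops being safe to write/read as raw bytes.
static_assert(std::is_trivially_copyable<BTPAGE>::value,
              "BTPAGE is no longer trivially copyable; raw byte I/O would break");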
If the class has any virtual function, then you're in trouble; if no virtual functions, you should still be OK (the same applies to a struct, of course, since it, too, could have virtual functions: the difference between struct and class is just that the default visibility in struct is public, in class it's private).
If you are doing more serialisation of classes, consider using Google protocol buffers or something similar; see this question.