I want to have items created using C++ macros like this:
#define PROPERTIES \
    UPROPERTY(blabla) \
    variableType varName; \
    UPROPERTY(blabla) \
    variableType2 varName2;
That way, other files could include this header and use the PROPERTIES macro to add those properties to themselves:
class XYZ...
{
...
public:
    PROPERTIES
    ...
    UPROPERTY()
    variableType3 name3;
    ...
};
This design follows a typical organization where the parameters live alongside the class they relate to. That way I could include different files and have the properties from all of them show up in the editor.
What's happening is that the Unreal Header Tool completely ignores UPROPERTY, probably because it runs before the preprocessor expands the macro. Surprisingly, the macro itself works, but only for the variable, not for UPROPERTY, so the editor doesn't show it.
Any ideas on how to do this?
Are you sure that your properties are defined correctly? You should use something like UPROPERTY(EditAnywhere, Category="My category"). Category is required when exposing functions or properties to the editor. Properties with this macro are exposed both in the instance properties editor and in the class defaults editor.
Also, I'd like to point out that this design is really poor and I think you may run into problems with it. As you correctly pointed out, Epic has a custom reflection system (UHT) which allows you to use automatic memory management with GC and so on. More on this in the Build System pages of the UE docs.
I think that you should decide how those properties will be used. Since you want to distribute them over multiple actors, you can choose from:
Inheritance
UObject reference
Component system
The first is really trivial and I believe you are familiar with the concept; I assume you had your reasons for choosing the macro solution instead. Note that you aren't allowed to use multiple inheritance (which is possible in classic C++), and this is due to UBT.
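A minimal sketch of the inheritance approach; every class and property name below is made up, and the usual .generated.h includes and module API macros are omitted:
// Shared properties live once, in a common base class.
UCLASS()
class ABaseItem : public AActor
{
    GENERATED_BODY()
public:
    UPROPERTY(EditAnywhere, Category = "Shared")
    float Weight = 0.f;
};

UCLASS()
class ASword : public ABaseItem
{
    GENERATED_BODY()
    // Weight is inherited and shows up in the editor for ASword, too.
};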
The second one goes like this:
UCLASS(BlueprintType)
class UMyProps : public UObject
{
    GENERATED_BODY()

    // TODO define properties in a standard way
};
// some actor's properties
UPROPERTY(BlueprintReadWrite, Category = "My Game | My Subcategory")
UMyProps* myProperties;
The main advantage of the second system is that you can add sets of properties to objects that require them while keeping a type-safe environment with full IntelliSense support (and UE Editor support as well, of course).
Or you can use components, which is more or less like the second approach, but it can be fully dynamic: you don't need an actual reference to your component, you can get it by calling GetComponent(...). A sketch follows below.
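A rough sketch of the component approach; the component class and its property are hypothetical, and the .generated.h include is omitted:
// A reusable property bag packaged as an ActorComponent.
UCLASS(ClassGroup = (Custom), meta = (BlueprintSpawnableComponent))
class UMyPropsComponent : public UActorComponent
{
    GENERATED_BODY()
public:
    UPROPERTY(EditAnywhere, Category = "My Game")
    float Durability = 1.f;
};

// Any actor can fetch it dynamically, without a stored reference:
// UMyPropsComponent* Props = SomeActor->FindComponentByClass<UMyPropsComponent>();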
All three are good, solid and transparent approaches to extending your object's properties. With your macro approach, you could easily end up with a mountain of a mess (if it even works with the UE build system).
Related
How can I detect incompatible API changes in C++? (not ABI but API changes)
Where compatible changes are things that can't break compilation of code using the API, like:
parameter(s) added to method with default argument
methods added to a class
members added to a class
classes added
order of members or methods changed
comments/documentation changes
And incompatible changes are things that potentially break compilation of code using the API (a tiny illustration follows the list), like:
removed arguments, (public/protected) methods, members, classes
type changes of arguments or members
name changes of public/protected members or methods
classes moved from one header to another
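A tiny illustration of both kinds of change (frobnicate is a made-up function):
// v1 of the API
int frobnicate(int x);

// Compatible: a parameter added with a default argument,
// so existing calls like frobnicate(42) still compile.
int frobnicate(int x, int flags = 0);

// Incompatible: the parameter type changed,
// so frobnicate(42) no longer compiles.
int frobnicate(const std::string& x);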
OP is right to think that C++ parsing is probably necessary. Likely deep reasoning, too.
I think the way to pose the question is: for a particular set of uses of an API in an existing application, does changing the API change or break the behavior of the application?
If you don't limit yourself to a specific set of uses, almost any change to an API will change its semantics; otherwise, why would you make the change (modulo refactoring)? And if the application uses the full set of API features, then its semantics must change somehow, too.
With a specific set of uses, one can arguably determine which properties of the API might affect the specific uses, and determine whether in fact they do. Ultimately you have to parse the original code accurately to determine the specific set of uses and the context in which they are used. You also have to determine the semantic properties on which the existing application depends, including the properties provided by the legacy API. Finally, you need to determine the properties defined by the new API, and verify that they still support the needs of the application.
In general, you need a theorem prover over the program properties to check this. And while theorem-proving technology has advanced significantly over the last 50 years, AFAIK said technology isn't strong enough to take arbitrary program properties and prove them, let alone overcome the problem of reasoning about arbitrarily complex programs.
Consider:
// my application
int x = 0;
int y = foo(x);    // API ensures that fail...
if (y > 3) fail(); // shouldn't happen
exit();

// my legacy API
int foo(int x) { return x + 1; }
Now imagine the API is changed to:
// my new API
int foo(int x) { return x + 2; }
The application still functions correctly.
How about:
// my new API
int foo(int x) { return TuringMachine(x); }
How are we going to prove that TuringMachine(x) produces a value < 3? If we can't do this for such tiny programs, how are we going to do it for the ones we write in practice?
Now, you might be able to limit the set of changes you will consider to purely "syntactic" operations, such as "move method" or "add parameter with initial value".
You'll still need to parse the original program and the modified APIs, and check that the syntactic properties imply semantic properties that don't damage the original program. You'll likely need control- and data-flow analysis, alias analysis to deal with pointers, and so on, and the tool will at best be able to tell, for a limited number of cases, when no change has occurred.
I'm sure there are research papers on this topic. A quick check at scholar.google.com didn't find anything obvious.
See "Source Compatibility" tab of the report of the ABICC tool. Most of the mentioned API changes and API breaks are detected by this tool.
There are two approaches to using the tool: the original one, via analysis of header files, and a newer one, via analysis of the library's debug info ([1]). Use the second one; it's more reliable and simpler.
You can find some report examples here: http://abi-laboratory.pro/tracker/
Tutorial: https://sourceware.org/glibc/wiki/Testing/ABI_checker#Usage
I'm writing a program that requires the user to be very flexible in manipulating data on a given object. I figured I would use a property browser of some kind; Qtilities' ObjectDynamicPropertyBrowser caught my eye.
However, I need to be able to add my own data types. The documentation is not clear on how to do so.
How can I allow my own data types to be represented in Qtilities' property browser widgets?
Also, more about my needs:
The data types are not part of Qt, nor are they even Q_OBJECTs.
Qt-specific modifications to the relevant classes are not an option.
Declaring the relevant classes via Q_DECLARE_METATYPE is okay.
In particular, I need to represent vector and matrix types (and possibly more later).
The browser you refer to depends on the QObject property system, so unless your classes are QObjects, it won't work. But don't despair: Qt 5.5 to the rescue (read on). The browser seems to use a QTreeView and provides an adapter model that exposes the QObject property system, so it leverages Qt's type and delegate system.
In Qt 5.5, there is a general purpose property system, known as gadgets, that can be used on any class, as long as there is a QMetaObject describing that class. By adding the Q_GADGET macro to a class deriving from the subject class, and describing the properties using Q_PROPERTY macro, you can leverage moc and the gadget system to access your unmodified types' properties.
The only reason you'd do that is to require minimal changes to the ObjectPropertyBrowser system. You don't want the ObjectDynamicPropertyBrowser, since it works on dynamic properties, and your objects don't have any. They have static properties, given through Q_PROPERTY macros and code generated by moc.
So, you'll proceed as you would for implementing your own type support for QVariant and views in general. You also need Qt 5.5, since you need gadget support for it to work. A solution for Qt 5.4 and below requires a different approach and might be less cumbersome to implement in another way.
See this answer for a reference on using gadget property system for object serialization, it's fundamentally what the property browser would do, sans the serialization proper of course.
There are three steps. First, you need to address simple custom types that don't have internal structure but represent a single value (such as a date, a time, or a geographic position), or a collection of simple values (such as a matrix):
Ensure that QVariant can carry the simple types: add the Q_DECLARE_METATYPE macro right after the type's definition in an interface (header file), as sketched after this list.
Implement delegates for the types. For types with table structure, such as a matrix, you can leverage QTableView and provide an adapter model that exposes the type's contents as a table model.
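For the first step, a minimal sketch, assuming a custom value type named Vec3 (the type is made up):
#include <QMetaType>
#include <QVariant>

// A plain value type; not a QObject.
struct Vec3 { float x = 0, y = 0, z = 0; };

// Goes right after the definition, in the header.
Q_DECLARE_METATYPE(Vec3)

// QVariant can now carry it:
// QVariant v = QVariant::fromValue(Vec3{1, 2, 3});
// Vec3 w = v.value<Vec3>();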
Secondly, you get to your complex types that have internal structure:
Create a wrapper class that derives from the complex type, declares all properties using Q_PROPERTY, and has the Q_GADGET macro (not Q_OBJECT, since they are not QObjects). Such a class should not have any members of its own; its only methods should be optional property accessors. The Q_GADGET macro adds the static (class) member staticMetaObject.
The underlying type can be static_cast to the wrapper class if needed, but that's normally not necessary.
At this point, any class that you wrote a wrapper for is accessible to the QMetaProperty system directly, without casting! You'd use the wrapper's staticMetaObject as its static metaobject, but QMetaProperty::readOnGadget and writeOnGadget will take pointers to the base class directly. A sketch follows below.
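A hedged sketch of such a wrapper, assuming an existing Matrix class with ordinary accessors (every name here is hypothetical):
#include <QMetaProperty>
#include <QObject>

// The unmodified type; not a QObject.
class Matrix {
public:
    int rows() const { return m_rows; }
    void setRows(int r) { m_rows = r; }
private:
    int m_rows = 0;
};

// The wrapper: no members of its own, only metadata.
class MatrixGadget : public Matrix {
    Q_GADGET
    Q_PROPERTY(int rows READ rows WRITE setRows)
};

// Reading a property off a plain Matrix through the wrapper's metaobject:
// const QMetaObject& mo = MatrixGadget::staticMetaObject;
// QMetaProperty p = mo.property(mo.propertyOffset());
// QVariant rows = p.readOnGadget(&someMatrix);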
Thirdly, since ObjectPropertyBrowser most likely doesn't implement support for gadgets in Qt 5.5, as that's quite new, you'll have to modify it to provide such support. The changes should be minimal: they amount to using QMetaProperty::readOnGadget and QMetaProperty::writeOnGadget instead of QMetaProperty::read and QMetaProperty::write. See the serialization answer for a comparison between the two.
I've been handed a legacy C++ application to patch and add some new features to, and I'm having a terrible time following some of the code, as it makes fairly extensive use of globals, huge #define macros, and many extremely tersely named variables/functions (3-letter functions from 2 inheritance levels up, etc.). As such, determining the source of many of the functions or variables is rather challenging.
It also uses Hungarian notation.... sometimes (m_Thingie is a member variable, but sometimes so is thingie).
Is there any way to make class member access fail unless it goes through an explicit this->? That would let me use the compiler to determine where each variable comes from.
I don't mind if it's a horrible hack; if I can turn it on for a little while when refactoring, and then off for any release compilation, that would be fine.
Pick an IDE with advanced colorization. Visual Studio can do it, so if you're already using it you don't need to learn anything else.
Click the Tools menu, then click Options.
Expand the Environment settings group in the list on the left and select Fonts and Colors.
Scroll down the Display items list in the right panel until you find the C++ ... items. There you can change the settings for the things you need (and more):
Change the settings to highlight variables and functions according to your needs. Be aware that you can only change the colors (background and foreground); the font size is shared. Too many colors will confuse you, so you may need to experiment before you find the right combination.
In the final result you can have different colors for:
Local variables.
Global functions (anything declared outside a class).
Function parameters.
Member functions (you may also set a different color for static member functions).
Fields (you may also set a different color for static class fields).
Global variables.
Macros.
Of course literals (strings, characters and numbers), user types and enumerations can have their own color combinations (also specialized for templates). When you're done with the refactoring, you can restore the default settings by clicking Use Defaults.
I was looking into using the Factory Pattern to implement the style and rendering in a little GUI toolkit of mine, and then this crazy idea struck me: why not use the Factory Pattern for all widgets?
Right now I'm thinking something in style of:
Widget label = Widget::create("label", "Hello, World");
Widget byebtn = Widget::create("button", "Good Bye!");
byebtn.onEvent("click", &goodByeHandler);
New widget types would be registered with a function like Widget::register(std::string ref, factfunc(*fact))
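Roughly, the machinery behind create/register could look like this (a loose sketch; all names and signatures here are guesses):
#include <functional>
#include <map>
#include <memory>
#include <string>

class Widget { /* ... */ };

// A factory function builds a widget from its constructor argument.
using FactoryFunc = std::function<std::unique_ptr<Widget>(const std::string&)>;

std::map<std::string, FactoryFunc>& registry() {
    static std::map<std::string, FactoryFunc> r; // name -> factory
    return r;
}

void registerWidget(const std::string& ref, FactoryFunc fact) {
    registry()[ref] = std::move(fact);
}

std::unique_ptr<Widget> createWidget(const std::string& ref, const std::string& arg) {
    return registry().at(ref)(arg); // throws std::out_of_range for unknown names
}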
What would you say are the pros and cons of this method?
I would bet that there are almost no performance issues in using a std::map keyed by std::string, as it is only used when setting things up.
And yes, forgive me for being damaged by JavaScript, and for any possible syntax errors in my C++; it has been quite a while since I used the language.
Summary of pros and cons
Pros
It would be easy to load new widgets dynamically from a file
Widgets could easily be replaced on run-time (it could also assist with unit testing)
Appeal to Dynamic People
Cons (and possible solutions)
Not compile-time checked (could write my own build-time check; JS/Python/you-name-it aren't compile-time checked either, and I believe those developers do quite fine without it)
Perhaps less performant (widgets are not created all the time; the string maps would only be used when creating a widget and registering event handlers)
Up/downcasting hell (provide usual class instantiation where needed)
Possibly makes it harder to configure widgets (make widgets have the same options)
My GUI toolkit is in its early stages and still lacks quite a lot on the planning side. As I'm not used to making GUI frameworks, there are probably many questions/design decisions that I have not even thought of yet.
Right now I can't see that widgets would differ much in their set of methods, but that might just be because I've not gone through the more advanced widgets in my mind yet.
Thank you all for your input!
Pros
It would be easy to read widgets from a file or some other external source and create them by their names.
Cons
It would be easy to pass an incorrect type name into Widget::create, and you'll only get the error at run time.
You will have no access to widget-specific APIs without downcasting - i.e. one more source of run-time errors.
It might not be that easy to find all the places where you create a widget of a particular type.
Why not get the best of both worlds by allowing widget creation both with regular constructors and with a factory?
I think this is not very intuitive. I would rather allow direct access to the classes. The question here is why you want to have it this way. Is there any technical reason? Do you have a higher abstraction layer above the "real classes" that contain the widgets?
For example, do you have pure virtual classes for the outside API and then an implementation class?
But it is difficult to say whether this is any good without knowing how your library is set up; I am not a big fan of factories myself, unless you really need them.
If you have this higher abstraction layer and want that factory, take a look at Irrlicht. It uses pure virtual classes for the API and then "implementation classes". The factory chooses the correct implementation depending on the chosen renderer (DirectX, OpenGL, software).
But they don't identify the wanted class by a string. That's something I really wouldn't do in C++.
Just my 2 cents.
I would advise against using a string as a key, and would promote a real method for each subtype instead.
class Factory
{
public:
    std::unique_ptr<Label> createLabel(std::string const& v);
    std::unique_ptr<Button> createButton(std::string const& v, Action const* a);
};
This has several advantages:
Compiler-checked: you could misspell the string, but you cannot get a running program with a misspelled method
More precise return types, and thus no need for down-casting (and all its woes)
More precise parameters, for not all elements require the same set of parameters
Generally, this is done to allow customization. If Factory is an abstract interface, then you can have several implementations (Windows, GTK, Mac) that create them differently under the hood.
Then, instead of setting up one creator function per type, you would switch the whole factory at once.
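A sketch of that idea, reusing the Factory above (GtkFactory and the forward-declared types are placeholders):
#include <memory>
#include <string>

class Label;   // widget types as in the snippet above
class Button;
class Action;

class Factory
{
public:
    virtual ~Factory() = default;
    virtual std::unique_ptr<Label> createLabel(std::string const& v) = 0;
    virtual std::unique_ptr<Button> createButton(std::string const& v, Action const* a) = 0;
};

// One concrete factory per backend; swapping backends swaps a single object.
class GtkFactory : public Factory { /* overrides elided: creates GTK-backed widgets */ };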
Note: performance-wise, virtual function calls are much faster than a map lookup by string key.
I've run into a bit of an issue, and I'm looking for the best solution concept/theory.
I have a system that needs to use objects. Each object that the system uses has a known interface, likely implemented as an abstract class. The interfaces are known at build time, and will not change. The exact implementation to be used will vary and I have no idea ahead of time what module will be providing it. The only guarantee is that they will provide the interface. The class name and module (DLL) come from a config file or may be changed programmatically.
Now, I have all that set up at the moment using a relatively simple system, something like this (rewritten pseudo-code, just to show the basics):
struct ClassID
{
    Module* module;
    int number;
};

class Module
{
    HMODULE module;
    function<IObject* (int)> createfunc; // bound to the DLL's factory export

    static Module* Load(String filename);

    IObject* CreateClass(int number)
    {
        return createfunc(number);
    }
};

class ModuleManager
{
    bool LoadModule(String filename);

    IObject* CreateClass(String classname)
    {
        ClassID id = AvailableClasses.find(classname);
        return id.module->CreateClass(id.number);
    }

    vector<Module*> LoadedModules;
    map<String, ClassID> AvailableClasses;
};
Modules have a few exported functions that report how many classes they provide and the names/IDs of those classes, which are then stored. All classes derive from IObject, which has a virtual destructor, stores the source module, and has methods to get the class's ID, which interface it implements, and so on.
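For illustration, those per-module exports might look like this (the export names and signatures are my guesses at the setup described):
// Inside each DLL:
extern "C" {
    __declspec(dllexport) int GetClassCount();               // how many classes this module provides
    __declspec(dllexport) const char* GetClassNameAt(int n); // name of class n
    __declspec(dllexport) IObject* CreateInstance(int n);    // factory for class n
}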
The only issue with this is that each module has to be loaded manually somewhere (listed in the config file, at the moment). I would like to avoid doing this explicitly (outside of the ModuleManager; inside it, I'm not really concerned how it's implemented).
I would like a similar system where I don't have to handle loading the modules: I just create an object and (once everything is set up) it magically appears.
I believe this is similar to what COM is intended to do, in some ways. I looked into the COM system briefly, but it appears to be overkill beyond belief. I only need the classes known within my system and don't need all the other features it handles; I just need implementations of interfaces coming from somewhere.
My other idea is to use the registry and keep a key listing all the known/registered classes with their source modules and numbers, so I can just look them up and it will appear that ModuleManager::CreateClass finds and makes the object magically (see the sketch below). This seems like a viable solution, but I'm not sure whether it's optimal or whether I'm reinventing something.
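A hedged sketch of that registry lookup (the key path and value layout are invented):
#include <windows.h>
#include <string>

// Returns the DLL path registered for a class name, or "" if not found.
std::string LookupModuleForClass(const char* className)
{
    std::string result;
    HKEY key;
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, "SOFTWARE\\MyApp\\Classes",
                      0, KEY_READ, &key) == ERROR_SUCCESS)
    {
        char modulePath[MAX_PATH];
        DWORD size = sizeof(modulePath);
        if (RegQueryValueExA(key, className, nullptr, nullptr,
                             reinterpret_cast<LPBYTE>(modulePath), &size) == ERROR_SUCCESS)
        {
            result.assign(modulePath); // hand this to ModuleManager::LoadModule
        }
        RegCloseKey(key);
    }
    return result;
}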
So, after all that, my question is: how should I handle this? Is there an existing technology? If not, how do I best set it up myself? Are there any gotchas I should be looking out for?
COM very likely is what you want. It is very broad, but you don't need to use all of its functionality. For example, you don't need to require participants to register GUIDs; you can define your own mechanism for creating instances of interfaces. There are a number of templates and other mechanisms that make it easy to create COM interfaces. What's more, since it is a standard, it is easy to document the requirements.
One very important thing to bear in mind is that importing/exporting C++ objects requires all participants to use the same compiler. If you think that could ever be a problem for you, then you should use COM. If you are happy to accept that restriction, you can carry on as you are.
I don't know if any technology exists to do this.
I do know that I worked with a system very similar to this. We used XML files to describe the various classes that different modules made available. Our equivalent of ModuleManager would parse the XML files to determine what to create for the user at run time, based on the class name they provided and the configuration of the system. (Requesting an object that implemented interface 'I' could give back any of objects 'A', 'B' or 'C', depending on how the system was configured.)
The big gotcha we found was that the system was very brittle and at times hard to debug and understand. Just reading through the code, it was often nearly impossible to see which concrete class was being instantiated. We also found that maintaining the XML created more bugs and overhead than expected.
If I were to do this again, I would keep the design pattern of exposing classes from DLLs through interfaces, but I would not try to build a central registry of classes, nor would I derive everything from a base class such as IObject.
I would instead make each module responsible for exposing its own factory function(s) to instantiate objects, for example as sketched below.
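A hedged sketch of that (export and module names are made up; error handling omitted):
// Inside each DLL: a single factory export, no central registry needed.
extern "C" __declspec(dllexport) IObject* CreateObjectByName(const char* name);

// In the host application:
IObject* MakeObject(const char* modulePath, const char* name)
{
    HMODULE h = LoadLibraryA(modulePath); // nullptr checks omitted for brevity
    auto create = reinterpret_cast<IObject* (*)(const char*)>(
        GetProcAddress(h, "CreateObjectByName"));
    return create(name);
}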