How to properly use multiple destructors in UE4 - C++

In my game I create a big class which stores references to smaller classes, which in turn store references of their own. During gameplay I need to recreate this big class and all its dependencies by destroying it and making a new one.
Its creation looks like this:
ABigClass::ABigClass()
{
    UE_LOG(LogTemp, Warning, TEXT("BigClass created"));
    SmallClass1 = NewObject<ASmallClass>();
    SmallClass2 = NewObject<ASmallClass>();
    ......
}
And it works. Now I want to destroy and re-create it by calling, from some function:
BigClass->~ABigClass();
BigClass = NewObject<ABigClass>();
which destroys BigClass and creates a new one with new small classes. The problem is that the old small classes are still in memory; I can see this by logging their destructors.
So I tried giving BigClass a destructor like this:
ABigClass::~ABigClass()
{
    SmallClass1->~ASmallClass();
    SmallClass2->~ASmallClass();
    ......
    UE_LOG(LogTemp, Warning, TEXT("BigClass destroyed"));
}
ASmallClass inherits from another class, which has its own constructor and destructor, but I do not call them anywhere.
Sometimes it works, but mostly it causes the UE editor to crash when the code is compiled, or when the game is started/stopped.
Is there a more conventional way to do what I want? Or some validation that will prevent the crash?
Please help.

Don't manually call a destructor. Replace
SmallClass1->~ASmallClass();
SmallClass2->~ASmallClass();
with either
delete SmallClass1;
SmallClass1 = nullptr;
or with nothing at all if those types are ref-counted by Unreal in some fashion (likely).
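Outside Unreal's garbage collector (UObjects must stay under UE's GC and should not be owned by unique_ptr), the plain-C++ version of "don't call destructors, express ownership instead" looks like this sketch. The class names and the liveSmall counter are mine, not from the question:

```cpp
#include <memory>

static int liveSmall = 0; // counts live SmallClass instances, for illustration only

struct SmallClass {
    SmallClass()  { ++liveSmall; }
    ~SmallClass() { --liveSmall; }
};

struct BigClass {
    std::unique_ptr<SmallClass> small1 = std::make_unique<SmallClass>();
    std::unique_ptr<SmallClass> small2 = std::make_unique<SmallClass>();
    // No hand-written destructor: each unique_ptr deletes its object
    // automatically when the BigClass is destroyed.
};
```

Re-creating the big object is then just `big = std::make_unique<BigClass>();` and the old small objects are freed with the old big one.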

I finally found a way to do this.
First, I needed to get rid of all the UPROPERTYs related to the classes I am going to destroy, except for the UPROPERTYs in the class that controls their creation and destruction. If I need to expose these classes to Blueprints somewhere else, that can be done with BlueprintCallable getter and setter functions.
Then I needed to calm down UE's garbage collector, which destroys objects on hot reload and on game shutdown in random order, ignoring my destructor hierarchy; this results in attempts to destroy already-destroyed objects, and a crash. So before doing anything with other objects in a destructor, I need to check whether there is still something to destroy, by adding an IsValidLowLevel() check.
Also, instead of the delete keyword it is better to use the DestructItem() function, which seems to be more garbage-collector-friendly in many ways.
Finally, I did not find a way to safely destroy objects which are spawned in the level, probably because they are referenced somewhere else (I don't know where). But since they are the lowest level of my hierarchy, I can just Destroy() them in the world and not worry about when exactly the garbage collector will remove them from memory.
Anyway, I ended up with the following code:
void AGameModeBase::ResetGame()
{
    if (BigClass->IsValidLowLevel())
    {
        DestructItem(BigClass);
        BigClass = nullptr;
        BigClass = NewObject<ABigClass>();
        UE_LOG(LogTemp, Warning, TEXT("BigClass recreated"));
    }
}
ABigClass::~ABigClass()
{
    if (SmallClass1)
    {
        if (SmallClass1->IsValidLowLevel())
        {
            DestructItem(SmallClass1);
        }
        SmallClass1 = nullptr;
    }
    if (SmallClass2)
    {
        if (SmallClass2->IsValidLowLevel())
        {
            DestructItem(SmallClass2);
        }
        SmallClass2 = nullptr;
    }
    ...
    UE_LOG(LogTemp, Warning, TEXT("BigClass destroyed"));
}
ASmallClass::~ASmallClass()
{
    for (ATinyClass* q : TinyClasses)
    {
        if (q->IsValidLowLevel())
        {
            q->Destroy();
        }
    }
    TinyClasses = {};
    UE_LOG(LogTemp, Warning, TEXT("SmallClass destroyed"));
}
And no crashes. Perhaps someone will find this useful when a game level needs to be cleared of a hierarchy of objects without fully reloading it.

Related

What happens to a local pointer if the thread is terminated?

What happens to data created in the local scope of a thread if the thread is terminated? Is that a memory leak?
void MyThread()
{
    auto* ptr = new int[10];
    while (true)
    {
        // stuff
    }
    // thread is interrupted before this delete
    delete[] ptr;
}
Okay, my perspective.
If the program exits, the threads exit wherever they are. They don't clean up. But in this case you don't care. You might care if it's an open file and you want it flushed.
However, I prefer a way to tell my threads to exit cleanly. This isn't perfect, but instead of while (true) you can do while (iShouldRun) and set the flag to false when it's time for the thread to exit.
You can also set a flag that says, iAmExiting at the end, then myThread.join() once the flag is set. That gives your exit code a chance to clean up nicely.
Coding this from the beginning helps when you write your unit tests.
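A minimal sketch of that flag pattern (the std::atomic is my addition; a plain bool is not safe to share across threads):

```cpp
#include <atomic>
#include <thread>

std::atomic<bool> iShouldRun{true}; // the shutdown flag the answer describes

void myThread() {
    auto* ptr = new int[10]; // the resource we want reclaimed
    while (iShouldRun.load()) {
        // stuff
    }
    delete[] ptr; // reached, because the loop exits instead of being killed
}
```

The owner sets `iShouldRun = false` and then `join()`s the thread, so the cleanup at the bottom always runs.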
The other thing -- as someone mentioned in comments -- use RAII. Pretty much if you're using raw pointers, you're doing something you shouldn't do in modern C++.
That's not an absolute. You can write your own RAII classes. For instance:
class MyIntArray {
public:
    MyIntArray(int sizeIn) { ... }
    ~MyIntArray() { delete[] array; }
private:
    int * array = nullptr;
    int size = 0;
};
You'll need a few more methods to actually get to the data, like an operator[]. Now, this isn't any different from using std::vector, so it's only an example of how to implement RAII for your own custom data.
But your functions should NEVER call new like this. It's old-school. If your method pukes somehow, you have a memory leak. If it pukes on exit(), no one cares. But if it pukes for another reason, it's a problem. RAII is a much, much better solution than the other patterns.
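To make the point concrete, here is a sketch of what the RAII argument buys you (the function is mine, purely illustrative): with std::vector there is no delete to miss, so the throwing path cannot leak.

```cpp
#include <stdexcept>
#include <vector>

// Sums the first n of ten ones. The vector is freed on every exit path,
// including the throw, because cleanup lives in the vector's destructor.
int sumFirstN(int n) {
    std::vector<int> data(10, 1);
    if (n > 10)
        throw std::out_of_range("n too large"); // no leak here
    int total = 0;
    for (int i = 0; i < n; ++i)
        total += data[i];
    return total;
}
```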

How to free up a resource used by a function in a googletest ASSERT_THROW statement?

In googletest you can use ASSERT_THROW to test that a certain function throws an error. For instance
ASSERT_THROW(PhysicalPropertyResource p("other/identifier72652"), InappropriateResourceException);
How would you explicitly call another method to release the memory used by PhysicalPropertyResource? It is normally used like so:
PhysicalPropertyResource p("other/identifier72652");
p.free();
Or should I just not worry about the memory leak, since it's only in a test and therefore benign? (It's just my OCD wanting to keep valgrind completely happy.)
I've tried this:
ASSERT_THROW(
    PhysicalPropertyResource p("other/identifier72652");
    p.free();
, InappropriateResourceException);
which doesn't free the memory.
The proper syntax for executing multiple statements is to wrap them in a block:
ASSERT_THROW({
    PhysicalPropertyResource p("other/identifier72652");
    p.free();
}, InappropriateResourceException);
However, this won't help if it's not the last line that throws, because then p.free() will not be called. You could create your own "assertion":
void shouldThrow() {
    std::unique_ptr<PhysicalPropertyResource> p;
    try {
        p = std::make_unique<PhysicalPropertyResource>("other/identifier72652");
    } catch (const InappropriateResourceException&) {
        if (p) // note: p stays null if it really is the constructor that throws
            p->free();
        return; // test succeeded
    } catch (...) {
        if (p)
            p->free();
        FAIL();
    }
    if (p)
        p->free();
    FAIL();
}
Note: If you are writing C++ code that uses a C library, your best solution is to create a RAII wrapper that takes care of the resources. If you create a wrapper class, it can free the resources in its destructor and you will never leak them. You can even write such a wrapper for unit tests only, just to avoid this ugly assertion-like code.
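A sketch of that wrapper, with a hypothetical acquireResource/releaseResource pair standing in for the real C API (none of these names come from the question): once cleanup lives in a destructor, the test body needs no explicit free().

```cpp
#include <stdexcept>

// Hypothetical stand-ins for a C-style resource API.
static int liveHandles = 0;

int* acquireResource(bool bad) {
    if (bad) throw std::runtime_error("inappropriate resource"); // acquisition fails
    ++liveHandles;
    return new int(0);
}
void releaseResource(int* h) { --liveHandles; delete h; }

// RAII wrapper: the destructor releases the handle, so it is freed on
// normal exit and when code after construction throws.
class ResourceGuard {
public:
    explicit ResourceGuard(bool bad) : handle_(acquireResource(bad)) {}
    ~ResourceGuard() { releaseResource(handle_); }
    ResourceGuard(const ResourceGuard&) = delete;
    ResourceGuard& operator=(const ResourceGuard&) = delete;
private:
    int* handle_;
};
```

In a test you would then write `ASSERT_THROW({ ResourceGuard g(/*bad id*/true); }, ...)` with nothing to remember to release.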

shared_ptr that removes itself from a container that owns it, is there a better way?

What I want to do is basically queue a bunch of task objects in a container, where a task can remove itself from the queue. But I also don't want the object to be destroyed when it removes itself, so it can continue whatever work it is doing.
So, a safe way to do this is to either call RemoveSelf() when the work is done, or take a keepAlive reference and then continue the work. I've verified that this does indeed work, while DoWorkUnsafe will always crash after a few iterations.
I'm not particularly happy with the solution, because I have to remember either to call RemoveSelf() at the end of the work or to take a keepAlive; otherwise it causes undefined behavior.
Another problem is that if someone decides to iterate through the ownerList and do work, the iterator they are using would be invalidated as they go, which is also unsafe.
Alternatively, I know I could put tasks onto a separate "cleanup" queue and destroy finished tasks there. But the self-removal approach seemed neater to me, despite all these caveats.
Is there a better pattern to handle something like this?
#include <memory>
#include <unordered_set>
#include <vector>

class SelfDestruct : public std::enable_shared_from_this<SelfDestruct> {
public:
    SelfDestruct(std::unordered_set<std::shared_ptr<SelfDestruct>> &ownerSet)
        : _ownerSet(ownerSet) {}

    void DoWorkUnsafe() {
        RemoveSelf();
        DoWork();
    }

    void DoWorkSafe() {
        DoWork();
        RemoveSelf();
    }

    void DoWorkAlsoSafe() {
        auto keepAlive = RemoveSelf();
        DoWork();
    }

    std::shared_ptr<SelfDestruct> RemoveSelf() {
        auto keepAlive = shared_from_this();
        _ownerSet.erase(keepAlive);
        return keepAlive;
    }

private:
    void DoWork() {
        for (auto i = 0; i < 100; ++i)
            _dummy.push_back(i);
    }

    std::unordered_set<std::shared_ptr<SelfDestruct>> &_ownerSet;
    std::vector<int> _dummy;
};

TEST_CASE("Self destruct should not cause undefined behavior") {
    std::unordered_set<std::shared_ptr<SelfDestruct>> ownerSet;
    for (auto i = 0; i < 100; ++i)
        ownerSet.emplace(std::make_shared<SelfDestruct>(ownerSet));
    while (!ownerSet.empty()) {
        (*ownerSet.begin())->DoWorkSafe();
    }
}
There is a good design principle that says each class should have exactly one purpose. A "task object" should exist to perform that task. When you start adding additional responsibilities, you tend to end up with a mess. Messes can include having to remember to call a certain method after completing the primary purpose, or having to remember to use a hacky workaround to keep the object alive. Messes are often a sign of inadequate thought put into the design. Being unhappy with a mess speaks well of your potential for good design.
Let us backtrack and look at the real problem. There are task objects stored in a container. The container decides when to invoke each task. The task must be removed from the container before the next task is invoked (so that it is not invoked again). It looks to me like the responsibility for removing elements from the container should fall to the container.
So we'll re-envision your class without that "SelfDestruct" mess. Your task objects exist to perform a task. They are probably polymorphic, hence the need for a container of pointers to task objects rather than a container of task objects. The task objects don't care how they are managed; that is work for someone else.
class Task {
public:
    Task() {}
    // Other constructors, the destructor, assignment operators, etc. go here

    void DoWork() {
        // Stuff is done here.
        // The work might involve adding tasks to the queue.
    }
};
Now focus on the container. The container (more precisely, the container's owner) is responsible for adding and removing elements. So do that. You seem to prefer removing the element before invoking it. That seems like a good idea to me, but don't try to pawn off the removal on the task. Instead use a helper function, keeping this logic at the abstraction level of the container's owner.
// Extract the first element of `ownerSet`. That is, remove it and return it.
// ASSUMES: `ownerSet` is not empty
std::shared_ptr<Task> extract(std::unordered_set<std::shared_ptr<Task>>& ownerSet)
{
    auto begin = ownerSet.begin();
    std::shared_ptr<Task> first{*begin};
    ownerSet.erase(begin);
    return first;
}

TEST_CASE("Removal from the container should not cause undefined behavior") {
    std::unordered_set<std::shared_ptr<Task>> ownerSet;
    for (int i = 0; i < 100; ++i)
        ownerSet.emplace(std::make_shared<Task>());
    while (!ownerSet.empty()) {
        // The temporary returned by extract() will live until the semicolon,
        // so it will (barely) outlive the call to DoWork().
        extract(ownerSet)->DoWork();
        // This is equivalent to:
        //auto todo{extract(ownerSet)};
        //todo->DoWork();
    }
}
From one perspective, this is an almost trivial change from your approach, as all I did was shift a responsibility from the task object to the owner of the container. Yet with this shift, the mess disappears. The same steps are performed, but they make sense and are almost forced when moved to a more appropriate context. Clean design tends to lead to clean implementation.

Segfault After Registering Callback Functions

I am registering four callback functions:
glfwSetMouseButtonCallback(procMouseButton);
glfwSetMousePosCallback(procMousePosition);
glfwSetCharCallback(procCharInput);
glfwSetKeyCallback(procKeyInput);
Each callback function looks similar to this:
void GLFWCALL procMouseButton(int button, int action) {
    Input::instance().processMouseButton(button, action); // doesn't do anything yet
}
Input is a singleton:
Input& Input::instance()
{
    static Input instance;
    return instance;
}
After the callback functions are registered, a segfault occurs. I have narrowed down the problem to two things.
First: Excluding any of the process functions causes the segfault to disappear. For example,
// this works
glfwSetMouseButtonCallback(procMouseButton);
//glfwSetMousePosCallback(procMousePosition);
glfwSetCharCallback(procCharInput);
glfwSetKeyCallback(procKeyInput);
// this works also
glfwSetMouseButtonCallback(procMouseButton);
glfwSetMousePosCallback(procMouseButton); // exclude procMousePosition
glfwSetCharCallback(procCharInput);
glfwSetKeyCallback(procKeyInput);
Second: the segfault occurs when pushing to or popping from the std::list states declared here in the singleton Engine:
class Engine
{
public:
    static Engine& instance();
    std::list<GameState*> states;
private:
    Engine() {}
    Engine(Engine const& copy);
    Engine& operator=(Engine const& copy);
};

// either causes a segfault after registering functions
Engine::instance().states.push_back(NULL);
Engine::instance().states.pop_front();
I am completely baffled. I assume the problem is related to the static initialization order fiasco, but I have no idea how to fix it. Can anyone explain why this error occurs?
Important notes:
If I reverse the linking order, it no longer segfaults.
I am using MinGW/GCC for compiling.
I am running single threaded.
The singletons do not have default constructors, everything is initialized by Singleton::instance().initialize();
The exact segfault call stack:
0047B487 std::__detail::_List_node_base::_M_hook(std::__detail::_List_node_base*) ()
00000000 0x00401deb in std::list >::_M_insert()
00000000 0x00401dbb in std::list >::push_back()
00401D92 Engine::pushState(GameState*) ()
00404710 StartupState::initialize() ()
00402A11 Engine::initialize() ()
00000000 0x00403f29 in main()
Without seeing the rest of your program, it's hard to say why it's segfaulting. It sounds timing-related. Here are a few things you can try:
Put breakpoints in the constructors of your Engine class, Input class (and any other involved classes), and in the callback-setting code. That will tell you whether the callbacks are being registered before the singletons they use are constructed. Note that breakpoints might throw off your program's timing, so if one class hits first, you can disable that breakpoint and rerun. Try this multiple times to check that the results are consistent.
Is there a reason you can't try changing to pointers instead of references (like the "fiasco" article mentions)?
(Your update while I was writing this makes this part not so useful, since the call stack shows it's not in a constructor.) This sounds like the callbacks are being registered during construction of some class. If that's the case:
Can you move the registration calls so they happen under main()? That ought to get you past initializations.
Split up the class construction into two phases: the normal constructor, and an init() function. Put the critical code inside init(), and call that after everybody has finished constructing.
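For example, a sketch of that two-phase shape (the class and member names are hypothetical, not from the question):

```cpp
// Two-phase construction: the constructor stays trivial so it is safe
// during static initialization; the risky work moves into init().
struct InputSystem {
    bool callbacksRegistered = false;

    InputSystem() {
        // Phase 1: nothing that touches other singletons or libraries.
    }

    void init() {
        // Phase 2: register callbacks, touch other singletons, etc.
        callbacksRegistered = true;
    }
};
```

main() then constructs (or first touches) every singleton, and only afterwards calls init() on each, so no callback can fire into a half-built object.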
You could also prevent the callbacks from doing anything until later. If you can't move the callback registration to a later time in your game's startup, you could add a flag so they don't do anything until a "safe" time. Adjusting when this flag is enabled lets you see "how late" is "late enough". The extra if() overhead is better than a crash. :)
volatile bool s_bCallbackSafe = false; // set this at some point in your game/app

void GLFWCALL procMouseButton(int button, int action) {
    if (s_bCallbackSafe)
        Input::instance().processMouseButton(button, action); // doesn't do anything yet
}

If a constructor throws an exception, does it make sense to have a global object of that class?

I am asking this question for general coding guidelines:
class A {
    A() { ... throw 0; }
};

A obj; // <--- global

int main()
{
}
If obj's constructor throws in the above code, the program will terminate before main() is even called. So my question is: what guideline should I follow for such scenarios? Is it OK to declare global objects of such classes or not? Should I always refrain from doing so, or is it a good thing to catch the error right at the start?
If you NEED a global instance of an object whose constructor can throw, you could make the variable a function-local static instead:
A * f() {
    try {
        //lock(mutex); -> as Praetorian points out
        static A a;
        //unlock(mutex);
        return &a;
    }
    catch (...) {
        return NULL;
    }
}

int main() {
    A * a = f(); // f() can be called whenever you need to access the global
}
This would alleviate the problem caused by a premature exception.
EDIT: Of course, in this case the solution is 90% of the way to being a Singleton. Why not just fully turn it into one, by moving f() into A?
No, you should not declare such objects global: the exception will be unhandled and very hard to diagnose. The program will just crash, which means a very poor (below zero) user experience, and it will be rather hard to maintain.
As @Kerrek SB mentioned in the comments, the answer depends on the reasons your class can throw. If you're trying to acquire a system resource that might be unavailable, I feel you shouldn't declare a global object: your program will crash as soon as the user tries to run it, and needless to say, that doesn't look very good. If it can only throw a std::bad_alloc or some such exception that is unlikely under normal circumstances (assuming you're not trying to allocate a few GB of memory), you could make a global instance; however, I would still not do that.
Instead, you could declare a global pointer to the object, instantiate the object right at the beginning of main (before any threads have been spawned, etc.), and point the pointer at this instance, then access the object through the pointer. This gives your program a chance to handle exceptions, and maybe prompt the user to take some sort of remedial measure (like showing a Retry button to try to reacquire the resource, for instance).
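A sketch of that suggestion (class and function names are hypothetical): the global is just a pointer, and the instance is created inside a try block early in main, where failure can still be reported and retried.

```cpp
#include <cstdio>
#include <stdexcept>

// Hypothetical class whose constructor can fail.
struct A {
    explicit A(bool fail) {
        if (fail) throw std::runtime_error("resource unavailable");
    }
};

A* g_obj = nullptr; // global pointer; the object itself is created in main()

// Returns false on failure so the caller can retry or exit gracefully.
bool initGlobal(bool fail) {
    try {
        g_obj = new A(fail);
        return true;
    } catch (const std::exception& e) {
        std::fprintf(stderr, "initialization failed: %s\n", e.what());
        return false;
    }
}
```

main() would start with something like `if (!initGlobal(false)) return EXIT_FAILURE;` or loop to offer a retry.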
Declaring a global object is fine in principle, but your class as shown is too sparse to judge; it lacks the details that determine whether it is compatible with practical needs and use.
One solution no one seems to have mentioned is to use a function try block. Basically, if the rest of your program won't work or won't be able to do anything useful without the constructed object, then the only real problem is that your user will get some sort of incomprehensible error message if the constructor terminates with an exception. So you wrap the constructor in a function try block, and generate a comprehensible message, followed by an error return:
A::A() try
    : var1( initVar1 )
    // ...
{
    // Additional initialization code...
} catch ( std::exception const& ) {
    std::cerr << "..." << std::endl;
    exit(EXIT_FAILURE);
} catch (...) {
    std::cerr << "Unknown error initializing A" << std::endl;
    exit(EXIT_FAILURE);
}
This solution is really only appropriate, however, if all instances of the object are declared statically, or if you can isolate a single constructor for the static instances; for the non-static instances, it is probably better to propagate the exception.
As @J T said, you can write it like this:
struct S {
    S() noexcept(false);
};

S &globalS() {
    try {
        static S s;
        return s;
    } catch (...) {
        // Handle error, perhaps by logging it and gracefully terminating the application.
    }
    // Unreachable.
}
Such a scenario is quite a problem; see ERR58-CPP, "Handle all exceptions thrown before main() begins executing", for more detail.