Someone had some clean up code that looked like:
for (int i = 0; i < m_pDragImage->GetImageCount(); i++) {
    m_pDragImage->Remove(i);
}
m_pDragImage->DeleteImageList();
delete m_pDragImage;
Now I know the remove loop is wrong and should have been:
for (int i = 0; i < m_pDragImage->GetImageCount();) {
    if (!m_pDragImage->Remove(i)) {
        i++;
    }
}
But now the question is: what is the difference between calling Remove() on everything in a loop and calling DeleteImageList()? The way I understand it, when you Add() something, the list just saves a bitmap copy of it, so you can destroy your handles after Add(). So it seems to me that Remove() would clean up everything itself, in which case DeleteImageList() would not be needed. Or, better, should I not bother with the loop and just use DeleteImageList()? Or, better yet, does the object do it on destruction?
TIA!!
Calling DeleteImageList() and calling Remove() on each element end up doing the same thing.
CImageList::Remove() and CImageList::DeleteImageList() call the ImageList_Remove() and ImageList_Destroy() Win32 API functions, respectively.
As per Creating and Destroying Image Lists:
When you no longer need an image list, you can destroy it by
specifying its handle in a call to the ImageList_Destroy function.
and Adding and Removing Images:
The ImageList_Remove function removes an image from an image list.
This might be the reason (an assumption on my part) why MFC doesn't have a wrapper around ImageList_RemoveAll(): MFC already has DeleteImageList() to delete the complete list.
What I want to do is basically queue a bunch of task objects in a container, where a task can remove itself from the queue. But I also don't want the object to be destroyed when it removes itself, so it can continue to finish whatever work it is doing.
So, a safe way to do this is to either call RemoveSelf() when the work is done, or take a keepAlive reference and then continue the work. I've verified that this does indeed work, while DoWorkUnsafe will always crash after a few iterations.
I'm not particularly happy with the solution, because I have to either remember to call RemoveSelf() at the end of work being done, or remember to use a keepAlive, otherwise it will cause undefined behavior.
Another problem is that if someone decides to iterate through the ownerList and do work, it would invalidate the iterator as they iterate, which is also unsafe.
Alternatively, I know I can instead put the task onto a separate "cleanup" queue and destroy finished tasks separately. But this method seemed neater to me, but with too many caveats.
Is there a better pattern to handle something like this?
#include <memory>
#include <unordered_set>
#include <vector>

class SelfDestruct : public std::enable_shared_from_this<SelfDestruct> {
public:
    SelfDestruct(std::unordered_set<std::shared_ptr<SelfDestruct>> &ownerSet)
        : _ownerSet(ownerSet) {}

    void DoWorkUnsafe() {
        RemoveSelf();
        DoWork();
    }

    void DoWorkSafe() {
        DoWork();
        RemoveSelf();
    }

    void DoWorkAlsoSafe() {
        auto keepAlive = RemoveSelf();
        DoWork();
    }

    std::shared_ptr<SelfDestruct> RemoveSelf() {
        auto keepAlive = shared_from_this();
        _ownerSet.erase(keepAlive);
        return keepAlive;
    }

private:
    void DoWork() {
        for (auto i = 0; i < 100; ++i)
            _dummy.push_back(i);
    }

    std::unordered_set<std::shared_ptr<SelfDestruct>> &_ownerSet;
    std::vector<int> _dummy;
};
TEST_CASE("Self destruct should not cause undefined behavior") {
    std::unordered_set<std::shared_ptr<SelfDestruct>> ownerSet;
    for (auto i = 0; i < 100; ++i)
        ownerSet.emplace(std::make_shared<SelfDestruct>(ownerSet));

    while (!ownerSet.empty()) {
        (*ownerSet.begin())->DoWorkSafe();
    }
}
There is a good design principle that says each class should have exactly one purpose. A "task object" should exist to perform that task. When you start adding additional responsibilities, you tend to end up with a mess. Messes can include having to remember to call a certain method after completing the primary purpose, or having to remember to use a hacky workaround to keep the object alive. Messes are often a sign of inadequate thought put into the design. Being unhappy with a mess speaks well of your potential for good design.
Let us backtrack and look at the real problem. There are task objects stored in a container. The container decides when to invoke each task. The task must be removed from the container before the next task is invoked (so that it is not invoked again). It looks to me like the responsibility for removing elements from the container should fall to the container.
So we'll re-envision your class without that "SelfDestruct" mess. Your task objects exist to perform a task. They are probably polymorphic, hence the need for a container of pointers to task objects rather than a container of task objects. The task objects don't care how they are managed; that is work for someone else.
class Task {
public:
    Task() {}
    // Other constructors, the destructor, assignment operators, etc. go here

    void DoWork() {
        // Stuff is done here.
        // The work might involve adding tasks to the queue.
    }
};
Now focus on the container. The container (more precisely, the container's owner) is responsible for adding and removing elements. So do that. You seem to prefer removing the element before invoking it. That seems like a good idea to me, but don't try to pawn off the removal on the task. Instead use a helper function, keeping this logic at the abstraction level of the container's owner.
// Extract the first element of `ownerSet`. That is, remove it and return it.
// ASSUMES: `ownerSet` is not empty
std::shared_ptr<Task> extract(std::unordered_set<std::shared_ptr<Task>>& ownerSet)
{
    auto begin = ownerSet.begin();
    std::shared_ptr<Task> first{*begin};
    ownerSet.erase(begin);
    return first;
}
TEST_CASE("Removal from the container should not cause undefined behavior") {
    std::unordered_set<std::shared_ptr<Task>> ownerSet;
    for (int i = 0; i < 100; ++i)
        ownerSet.emplace(std::make_shared<Task>());

    while (!ownerSet.empty()) {
        // The temporary returned by extract() will live until the semicolon,
        // so it will (barely) outlive the call to DoWork().
        extract(ownerSet)->DoWork();
        // This is equivalent to:
        //auto todo{extract(ownerSet)};
        //todo->DoWork();
    }
}
From one perspective, this is an almost trivial change from your approach, as all I did was shift a responsibility from the task object to the owner of the container. Yet with this shift, the mess disappears. The same steps are performed, but they make sense and are almost forced when moved to a more appropriate context. Clean design tends to lead to clean implementation.
In my game I am creating a big class which stores references to smaller classes, which in turn store some references too. During gameplay I need to recreate this big class, with all its dependencies, by destroying it and making a new one.
Its creation looks like this:
ABigClass::ABigClass()
{
    UE_LOG(LogTemp, Warning, TEXT("BigClass created"));
    SmallClass1 = NewObject<ASmallClass>();
    SmallClass2 = NewObject<ASmallClass>();
    ......
}
And it works. Now I want to destroy and re-create it by calling, from some function:
BigClass->~ABigClass();
BigClass= NewObject<ABigClass>();
which destroys BigClass and creates new one with new small classes, the problem is, that old small classes are still in memory, I can see it by logging their destructors.
So I tried writing a destructor for the big class like this:
ABigClass::~ABigClass()
{
    SmallClass1->~ASmallClass();
    SmallClass2->~ASmallClass();
    ......
    UE_LOG(LogTemp, Warning, TEXT("BigClass destroyed"));
}
ASmallClass inherits from another class, which has its own constructor and destructor, but I do not call them anywhere.
Sometimes it works, but mostly it causes the UE editor to crash when the code is compiled, or when the game is started/stopped.
Probably there is some more common way to do what I want to do? Or some validation which will prevent it from crash?
Please help.
Don't manually call a destructor. Replace
SmallClass1->~ASmallClass();
SmallClass2->~ASmallClass();
with either
delete SmallClass1;
SmallClass1 = nullptr;
or by nothing, if those types are ref-counted by Unreal in some fashion (likely).
Finally I found a way to do this.
First, I needed to get rid of all UPROPERTYs related to the classes I am going to destroy, except the UPROPERTYs in the class which controls their creation and destruction. If I need to expose these classes to blueprints somewhere else, it can be done with BlueprintCallable getter and setter functions.
Then I needed to calm down UE's garbage collector, which destroys objects on hot reload and on game shutdown in random order, ignoring my destructor hierarchy; that results in an attempt to destroy an already-destroyed object, and a crash. So before doing anything with other objects in a destructor, I need to check whether there is still something to destroy, by adding an IsValidLowLevel() check.
And instead of the delete keyword it is better to use the DestructItem() function, which seems to be more garbage-collector-friendly in many ways.
Also, I did not find a way to safely destroy objects which are spawned in the level. Probably they are referenced somewhere else, though I don't know where; but since they are the lowest level of my hierarchy, I can just Destroy() them in the world and not worry about when exactly the garbage collector will remove them from memory.
Anyway, I ended up with following code:
void AGameModeBase::ResetGame()
{
    if (BigClass->IsValidLowLevel()) {
        DestructItem(BigClass);
        BigClass = nullptr;
        BigClass = NewObject<ABigClass>();
        UE_LOG(LogTemp, Warning, TEXT("Bigclass recreated"));
    }
}
ABigClass::~ABigClass()
{
    if (SmallClass1) {
        if (SmallClass1->IsValidLowLevel())
        {
            DestructItem(SmallClass1);
        }
        SmallClass1 = nullptr;
    }
    if (SmallClass2) {
        if (SmallClass2->IsValidLowLevel())
        {
            DestructItem(SmallClass2);
        }
        SmallClass2 = nullptr;
    }
    ...
    UE_LOG(LogTemp, Warning, TEXT("BigClass destroyed"));
}
ASmallClass::~ASmallClass()
{
    for (ATinyClass* q : TinyClasses)
    {
        if (q->IsValidLowLevel())
        {
            q->Destroy();
        }
    }
    TinyClasses = {};
    UE_LOG(LogTemp, Warning, TEXT("SmallClass destroyed"));
}
And no crashes. Probably someone will find this useful when you need to clear a game level of a hierarchy of objects without fully reloading it.
I have a pointer to an object that I pass to a lambda function. Because the lambda is called 1 second after the initial method call, the object is sometimes no longer valid, leading to a segmentation fault.
How can I verify that the item is still valid within the lambda function before using it?
This is what my method using the lambda looks like:
void myTab::myMethod(QStandardItem *item)
{
    QColor blue(0, 0, 128, 20);
    QBrush brush(blue);
    item->setBackground(brush);

    //Restore background after 1000ms
    QTimer::singleShot(1000, [item, this]() mutable {
        item->setBackground(Qt::transparent); //<-need some advice here
    });
}
How can I verify that the item is still valid within the lambda function before using it?
The easiest approach would be to have item be a shared_ptr<QStandardItem> that your lambda just gets a copy of. This guarantees that the item will live long enough:
void myTab::myMethod(std::shared_ptr<QStandardItem> item)
{
    QColor blue(0, 0, 128, 20);
    QBrush brush(blue);
    item->setBackground(brush);

    //Restore background after 1000ms
    QTimer::singleShot(1000, [item]{
        item->setBackground(Qt::transparent);
    });
}
Otherwise, you can't really tell from a raw pointer whether it still points to a valid object. Worse, the object could be deleted and a new one happen to be allocated at the same address, and now you have a bug where some random item occasionally becomes transparent. Better to sidestep all of those problems.
Potentially better as Loki suggests would be to store a weak_ptr to the item. If the item is dead before we can set it to transparent, that's fine - we just don't set it to transparent. If we don't actually need to extend its lifetime, just don't:
QTimer::singleShot(1000, [weak_item = std::weak_ptr<QStandardItem>(item)]{
    if (auto item = weak_item.lock()) {
        item->setBackground(Qt::transparent);
    }
});
No suggestion seemed to work for me. I tried std::shared_ptr, std::weak_ptr and QSharedPointer. It seems the model the item is added to somehow removes the item anyway. However, I ended up using a QStack where I push all the item pointers onto the stack, and once restored or removed the items are popped off. Since the time is always the same, this approach works perfectly for me, although it is not very well implemented.
I have a simulation program. In the main class of the simulation I am "creating + adding" and "removing + destroying" Agents.
The problem is that once in a while (once every 3-4 times I run the program) the program crashes because I am apparently calling a function of an invalid agent in the main loop. The program works just fine most of the time. There are normally thousands of agents in the list.
I don't know how is it possible that I have invalid Agents in my Loop.
It is very difficult to debug because the memory exception occurs inside the Agent::Step function (which is too late, since by then I cannot tell how the invalid Agent got into the list and got called).
When I look at the Agent inside the Agent::Step function (the exception point), none of its data makes sense, not even the initialized members. So it is definitely invalid.
void World::step()
{
    AddDemand();

    // Run over all the agents and check whether they have remaining actions.
    // Call their step function if they have, otherwise remove them from space and memory.
    list<Agent*>::iterator it = agents_.begin();
    while (it != agents_.end())
    {
        if (!(*it)->AllIntentionsFinished())
        {
            (*it)->step();
            it++;
        }
        else
        {
            (*it)->removeYourselfFromSpace(); // removes its reference from the space
            delete (*it);
            agents_.erase(it++);
        }
    }
}

void World::AddDemand()
{
    int demand = demandIdentifier_.getDemandLevel(stepCounter_);
    for (int i = 0; i < demand; i++)
    {
        Agent* tmp = new Agent(*this);
        agents_.push_back(tmp);
    }
}
Agent:
bool Agent::AllIntentionsFinished()
{
    return this->allIntentionsFinished_; // bool flag will be true if all work is done
}
1- Is it possible that Visual Studio 2012's loop optimization (i.e. running loops multi-threaded where possible) creates the problem?
2- Any suggestions on debugging the code?
If you're running the code multi-threaded, then you'll need to add code to protect things like adding items to and removing items from the list. You can create a wrapper that adds thread safety for a container fairly easily -- have a mutex that you lock any time you do a potentially modifying operation on the underlying container.
#include <mutex>

template <class Container>
class thread_safe {
    Container c;
    std::mutex m;
public:
    void push_back(typename Container::value_type const &t) {
        std::lock_guard<std::mutex> l(m);
        c.push_back(t);
    }
    // ...
};
A few other points:
You can almost certainly clean your code up quite a bit by having the list hold Agents directly, instead of a pointer to an Agent that you have to allocate dynamically.
Your Agent::RemoveYourselfFromSpace looks/sounds a lot like something that should be handled by Agent's destructor.
You can almost certainly do quite a bit more to clean up the code by using some standard algorithms.
For example, it looks to me like your step could be written something like this:
agents.remove_if([](Agent const &a) { return a.AllIntentionsFinished(); });
std::for_each(agents.begin(), agents.end(),
[](Agent &a) { a.step(); });
...or, you might prefer to continue using an explicit loop, but use something like:
for (Agent & a : agents)
a.step();
The problem is this:
agents_.erase(it++);
See Add and remove from a list in runtime
I don't see any thread-safe components in the code you showed, so if you are running multiple threads and sharing data between them, then absolutely you could have a threading issue. For instance, you do this:
(*it)->removeYourselfFromSpace(); //removes its reference from the space
delete (*it);
agents_.erase(it++);
This is the worst possible order for an unlocked list. You should remove the element from the list first, and only then destroy and delete the object.
But if you are not specifically creating threads which share lists/agents, then threading is probably not your problem.
I have this very annoying issue whenever I call a function:
void renderGame::renderMovingBlock(movingBlock* blockToRender){
    sf::Shape blockPolygon;
    sf::Shape blockLine = sf::Shape::Line(blockToRender->getLineBegin().x, blockToRender->getLineBegin().y,
                                          blockToRender->getLineEnd().x, blockToRender->getLineEnd().y,
                                          3.f, movingBlockLineColor);
    for(auto i = blockToRender->getVertexArray()->begin(); i != blockToRender->getVertexArray()->end(); ++i){
        blockPolygon.AddPoint(i->x, i->y, movingBlockBlockColor);
    }
    renderToWindow->Draw(blockLine);
    renderToWindow->Draw(blockPolygon);
}
It's a simple function: it takes a pointer to an object and uses SFML to render it on the screen. It's a simple polygon that moves on a rail.
getVertexArray() returns a pointer to the object's vector of vertices; renderToWindow is a pointer to sf::RenderWindow.
The very weird issue I have is that I can call this function but it won't return from it; VC++ breaks and points me to:
int __cdecl atexit (
    _PVFV func
)
{
    return (_onexit((_onexit_t)func) == NULL) ? -1 : 0;
}
I'm getting weird behavior here: I can stop right before the function exits by calling the Display() function and system("pause"), and it displays everything perfectly fine, but one step further and it breaks.
I'll add that I'm passing a dynamically allocated object; when I use a regular one, everything is fine. It's weird: when I debug the program, the polygon and line have the right coordinates and everything displays properly, but it just can't return from the function.
If a function will not return, it sounds like you messed up the stack somewhere previously; this is most likely an out-of-bounds write.
Or, since you are ending up in atexit, there could have been an uncaught exception thrown.
Either way, welcome to the joys of programming: now you have to find an error which probably happens long before your function gets stuck.
You could try a tool like Valgrind (if it's available for Windows) or some other bounds checker.