I have a list with about 2500 custom items. I set them with:
const std::vector<const Items::AbstractItem *> results = _engine.request(text);
if (!results.empty())
{
for (auto i : results){
QListWidgetItem *lwi = new QListWidgetItem;
_results->addItem(lwi);
ListItemWidget *w = new ListItemWidget;
w->setName(i->name());
w->setTooltip(i->path());
_results->setItemWidget(lwi, w);
}
_results->setFixedHeight(std::min(5,_results->count()) * 48); // TODO
_results->show();
}
This takes about 5 seconds on an i5-4590. Hiding the widget first makes it about twice as fast. Is this normal, or do I have to look for errors?
A few ideas:
Try assigning proper parents to your QWidgets; that way the layout doesn't have to do this mapping for you. This should help performance.
Call setUpdatesEnabled(false) before starting the insert, and set it back to true after it's done (see the sketch after these suggestions).
As for hiding the widget while adding large numbers of items: that helps because it avoids extraneous update calls, and the second suggestion above mitigates that in the same way.
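For example, a minimal sketch of the second suggestion applied to the loop from the question (it reuses the names from the question; ListItemWidget is the asker's custom widget):

_results->setUpdatesEnabled(false);   // suspend repaints during the bulk insert
for (auto i : results) {
    QListWidgetItem *lwi = new QListWidgetItem;
    _results->addItem(lwi);
    ListItemWidget *w = new ListItemWidget;
    w->setName(i->name());
    w->setTooltip(i->path());
    _results->setItemWidget(lwi, w);  // the list widget takes ownership of w
}
_results->setUpdatesEnabled(true);    // repaint once at the end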
I think this is fully expected behavior for controls like Lists or Trees that are not based on any data model. And I believe that the data model was invented mainly to fix this issue.
In your situation you have a ListWidget control that stores its data on its own. You need to pass all 2500 items before your app can go on, and you need to do this even if your list shows only 10 items at a time. Even if you just run and close your app, the user won't see all the items, but you still need to pass them to your ListWidget. Some GUI frameworks use internal allocation of items and in that case they can optimize things a bit; you could do the same if you allocated your items in chunks, but it's still not a good solution.
Now let's say you introduce some object that can be asked about item properties. The control will ask about some item and your object will respond with its contents. Your object doesn't even need to know about all your items; it will just learn about them when needed.
Your control can ask about the first few items and stop once it realizes it can fill up its entire height. This way you avoid work that is not needed for now. The control can also ask for the item count, so it can set up its vertical scroll bar.
It needs to be said that the model will not solve your problem automatically, it's just a programming paradigm that allows you to do it better.
So the solution for you would be to replace your QListWidget with a QListView and implement your own data model inheriting QAbstractListModel. You could pass the results to the model and it will provide the item data when needed.
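A minimal sketch of such a model, assuming the Items::AbstractItem interface from the question (the class name ResultsModel is made up, and name()/path() are assumed to return QString-compatible values):

#include <QAbstractListModel>
#include <utility>
#include <vector>

class ResultsModel : public QAbstractListModel
{
public:
    // Hand the query results to the model; the view pulls rows lazily.
    void setResults(std::vector<const Items::AbstractItem *> results) {
        beginResetModel();
        _results = std::move(results);
        endResetModel();
    }

    int rowCount(const QModelIndex &parent = QModelIndex()) const override {
        return parent.isValid() ? 0 : static_cast<int>(_results.size());
    }

    QVariant data(const QModelIndex &index, int role) const override {
        if (!index.isValid())
            return QVariant();
        const Items::AbstractItem *item = _results[index.row()];
        if (role == Qt::DisplayRole)
            return item->name();
        if (role == Qt::ToolTipRole)
            return item->path();
        return QVariant();
    }

private:
    std::vector<const Items::AbstractItem *> _results;
};

The view side is then just a QListView with setModel() pointed at this model; only the rows that are actually visible are ever asked for.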
If your QListWidgetItems always have a fixed size, call setUniformItemSizes(true) on your QListWidget.
Related
I have a QTableWidget that can display a huge number of elements (like 20000 elements). Displaying per se and scrolling up and down work fine; however, populating the widget is extremely slow. I found out that constructing the QVector of elements (strings) that are displayed is very fast, but inserting the elements into the QTableWidget is very slow.
I need to implement filtering over the elements, so if the user filters half of the elements out with a wildcard, it's still necessary to clear the widget and insert 10000 elements back (or hide 10000 elements, which is equally slow). Reasonably fast performance is critical here because the user can't wait for several minutes every time they press a button.
Valgrind doesn't help much, as apparently most of the time is spent in implicitly called functions, particularly QHeaderView::sectionSize() and QHeaderView::isSectionHidden().
Migrate your code to the model-view pattern.
Create a model (subclass QStandardItemModel) and place all your data there.
Display all the data in a QTableView and ensure everything is OK.
Now you can use a QSortFilterProxyModel for fast data filtering, or you can subclass it for more complex filters.
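A rough sketch of that wiring (the variable names and the 'strings' source are illustrative, not from the question):

QStandardItemModel *model = new QStandardItemModel(parent);
for (const QString &text : strings)            // 'strings' stands in for your data
    model->appendRow(new QStandardItem(text));

QSortFilterProxyModel *proxy = new QSortFilterProxyModel(parent);
proxy->setSourceModel(model);

QTableView *view = new QTableView;
view->setModel(proxy);

// Filtering no longer means clearing and re-inserting rows; just change the filter:
proxy->setFilterWildcard(userPattern);

Since rows are never physically removed from the source model, filtering is just a matter of the proxy deciding which rows to expose.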
I basically want to have the same virtual performance I can get with a List-View control. With a List-View control you can set an ItemCount and in the LVN_GETDISPINFO notification you then fill in the information for the items once they are scrolled visible.
Now, the virtual functionality the Tree-View provides is good for very deep trees, so you would only add items once a node expands (via TVN_ITEMEXPANDING), and TVN_GETDISPINFO can be used for filling in item information once an item is scrolled into view. But what do you do if you have an "always expanded" two-level tree (just for design reasons), where TVN_ITEMEXPANDING wouldn't be of any use, and you only want to add the items once they would become visible? The problem is, there's no such thing as SetItemCount() or similar to pre-size the tree.
In my case, the filling of item information (text, image, selected image) isn't the expensive part, but the inserting of items (all at one level) is.
One option would be to only insert the items which would be visible plus one invisible one, once the invisible one gets visible (detected in TVN_GETDISPINFO), I'd insert a few more and so on. But then the scrollbar would always get smaller the more I scroll down, I think that's weird.
Are there any other ideas to achieve what I want except from drawing my own control?
The whole tree would just look like this (pretty much a list; it's just that I like the tree look):
RootNode
|
|--Item 1
|--Item 2
|--Item 3
|--Item 4
|--Item 5
|--Item 6
|--Item 7
...
|__Item 1000
As stated in many other posts, the really expensive part about the Tree-View control is using InsertItem and DeleteItem. A quick way to improve performance for those operations is making use of SetRedraw. Not only does it hide the flickering, it also really speeds things up, since the drawing seems to be expensive even though TVN_GETDISPINFO is used.
Also, it's faster to rename existing items and change their data instead of deleting them and adding new ones. So when I have a big list and know that the next update will contain about the same number of items, plus or minus a couple, I iterate through the items, rename them, change their lParams, and sync (i.e. remove/add) the remaining ones in accordance with the new data. Depending on the size of the list, making those extra calculations can be a huge performance improvement.
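For illustration, a sketch of the SetRedraw idea in MFC (hRootNode, entries and the member names are placeholders; with plain Win32 you would send WM_SETREDRAW instead):

m_tree.SetRedraw(FALSE);                      // stop painting during the bulk insert
for (const auto &entry : entries)             // 'entries' stands in for your data
{
    TVINSERTSTRUCT tvis = {};
    tvis.hParent = hRootNode;                 // handle of the "RootNode" item
    tvis.hInsertAfter = TVI_LAST;
    tvis.item.mask = TVIF_TEXT | TVIF_PARAM;
    tvis.item.pszText = LPSTR_TEXTCALLBACK;   // text supplied later via TVN_GETDISPINFO
    tvis.item.lParam = (LPARAM)&entry;
    m_tree.InsertItem(&tvis);
}
m_tree.SetRedraw(TRUE);                       // paint once at the end
m_tree.Invalidate();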
The Win32 TreeView control does not support the kind of virtual mode you are looking for. So you will need a custom control.
I have a QTableView that's using a model I made which extends QAbstractTableModel. The model uses a
QList< QVector<QString> * >
as the collection. The table view is used to display logging messages from the application, so the collection will eventually get extremely large... After it inserts a few thousand rows, I notice the table view starts to slow down a lot and eventually the view freezes for a few seconds before refreshing. Is it the type of collection I'm using that's making it slow down so much? Is there a better way to store the data that's being inserted? Does QTableView support a large amount of data?
Edit
Posted code on the Qt forum:
http://www.qtforum.org/article/37326/qttableview-slows-down-when-a-lot-of-data-is-inserted.html
I have successfully used QTableView to display ~10000 rows, so QTableView is capable of supporting it, but your collection leaves something to be desired.
QList is very expensive to insert into in the middle, since everything sitting below the index you are trying to insert at has to be shifted (granted, you are only shifting pointers around, but still).
Normally for data storage I would use a std::vector<data_struct *> rather than vectors of strings. QVariant is quite capable of presenting integers and other types, so you don't have to do the conversion up front.
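For example, a sketch of what data() could look like over such a collection (LogModel, data_struct and its fields are made-up placeholders, not from the posted code):

// data_struct and its fields are invented for the example.
struct data_struct {
    qint64  timestamp;
    int     severity;
    QString message;
};

QVariant LogModel::data(const QModelIndex &index, int role) const
{
    if (!index.isValid() || role != Qt::DisplayRole)
        return QVariant();
    const data_struct *row = m_rows[index.row()];  // m_rows is a std::vector<data_struct *>
    switch (index.column()) {
    case 0: return row->timestamp;   // QVariant converts integer types on demand
    case 1: return row->severity;
    case 2: return row->message;
    }
    return QVariant();
}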
The best suggestion that I can come up with would be to run gprof or a similar tool to time where exactly you are spending time and then address that particular piece.
Greetings,
I've been writing some nasty code to support the undo/redo of deletion of an arbitrary set of objects from my model. I feel like I'm going about this correctly, as all the other mutators (adding/copy-pasting) are subsets of this functionality.
The code is nastier than it needs to be, mostly because the only way to mutate the model involves calling beginInsertRows/beginRemoveRows and removing the rows in a range (just doing one row at a time, no need to optimize "neighbors" into a single call yet).
The problem with beginInsertRows/beginRemoveRows is that removal of a row could affect another QModelIndex (say, one cached in a list). For instance:
ParentObj
->ChildObj1
->ChildObj2
->ChildObj3
Say I select ChildObj1 and ChildObj3 and delete them: if I remove ChildObj1 first, I've changed ChildObj3's QModelIndex (its row is now different). Similar issues occur if I delete a parent object (but I've fixed this by "pruning" children from the list of objects).
Here are the ways I've thought of working around this interface limitation, but I thought I'd ask for a better one before forging ahead:
Move "backwards", assuming a provided list of QModelIndices is orderered from top to bottom just go from bottom up. This really requires sorting to be reliable, and the sort would probably be something naive and slow (maybe there's a smart way of sorting a collection of QModelIndexes? Or does QItemSelectionModel provide good (ordered) lists?)
Update other QModelIndeces each time an object is removed/added (can't think of a non-naive solution, search the list, get new QModelIndeces where needed)
Since updating the actual data is easy, just update the data and rebuild the model. This seems grotesque, and I can imagine it getting quite slow with large sets of data.
Those are the ideas I've got currently. I'm working on option 1 right now.
Regards,
Dan O
Think of beginRemoveRows/endRemoveRows, etc. as methods to ask the QAbstractItemModel base class to fix up your persistent model indexes for you instead of just a way of updating views, and try not to confuse the QAbstractItemModel base class in its work on those indexes. Check out http://labs.trolltech.com/page/Projects/Itemview/Modeltest to exercise your model and see if you are keeping the QAbstractItemModel base class happy.
Where QPersistentModelIndex does not help is if you want to keep your undo/redo data outside of the model. I built a model that is heavily edited, and I did not want to try keeping everything in the model. I store the undo/redo data on the undo stack. The problem is that if you edit a column, storing the persistent index of that column on the undo stack, and then delete the row holding that column, the column's persistent index becomes invalid.
What I do is keep both a persistent model index, and a "historical" regular QModelIndex. When it's time to undo/redo, I check if the persistent index has become invalid. If it has, I pass the historical QModelIndex to a special method of my model to ask it to recreate the index, based on the row, column, and internalPointer. Since all of my edits are on the undo stack, by the time I've backed up to that column edit on the undo stack, the row is sure to be there in the model. I keep enough state in the internalPointer to recreate the original index.
I would consider using an "all-data" model and a filter proxy model on top of the data model. Data would only be added to the all-data model, and never removed. That way, you could store your undo/redo information with references to that model. (May I suggest QPersistentModelIndex?) Your data model could also track, somehow, what should be shown. The filter model would then only return information for the items that should be shown at a given time.
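As a concrete illustration of option 1 combined with QPersistentModelIndex, something along these lines could work (view and model stand for wherever you hold those objects):

// Converting the selection to QPersistentModelIndex first means the remaining
// indexes are adjusted automatically as rows are removed, as long as the model
// calls beginRemoveRows/endRemoveRows correctly.
QModelIndexList selected = view->selectionModel()->selectedRows();

QList<QPersistentModelIndex> persistent;
for (const QModelIndex &idx : selected)
    persistent.append(QPersistentModelIndex(idx));

for (const QPersistentModelIndex &idx : persistent)
    if (idx.isValid())                               // its parent may already be gone
        model->removeRow(idx.row(), idx.parent());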
I am using a List Control to display a representation of elements within a vector. When the list is clicked on another control shows information about that element. The index of the element is currently determined by its index in the control, however if I wish to sort or filter the results this will no longer work.
I have been told that I could use a virtual list control, but the MSDN is not very friendly, can someone run me through how I could use a virtual list control for this?
Frankly - tying data (the position in your data vector) to the presentation of the data in the list control (the position in the list ctrl) is something I would stay away from.
In MFC, each item in the control can carry a "data" DWORD_PTR value. When coding in MFC I always called SetItemData for each item added and passed in a pointer to the data that the relevant row referred to, e.g.
YourListCtrl.SetItemData(nItem, (DWORD_PTR)&YourData);  // nItem is the index of the row just added
Then when the ListCtrl item is selected, you just call
DataTypeYouWant* pData = (DataTypeYouWant*)(YourListCtrl.GetItemData(indexofselecteditem));
Or somesuch thing.
Alternatively - if you don't want to use pointers - hold the index of the item in your original vector in the itemdata for your row (just pass it into the above fns).
To use a virtual list control, set the LVS_OWNERDATA style. You then need to handle the LVN_GETDISPINFO notification message (which is sent via WM_NOTIFY).
If you do this, you are entirely responsible for the data, including the order in which it is shown. Therefore it is up to you to handle sorting and so forth.
By far the easiest way is just to use the item data to set/get an ID that can be used to retrieve the original data, whether that's a vector index or a pointer to the data, or even a key into an associative container.
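A rough MFC-style sketch of that approach (CMyDlg, IDC_MYLIST, m_items and MyRow are made-up names):

// m_list is a CListCtrl created with the LVS_OWNERDATA style.
// Tell the control how many rows exist; no items are actually inserted:
m_list.SetItemCountEx(static_cast<int>(m_items.size()));

// Message-map entry: ON_NOTIFY(LVN_GETDISPINFO, IDC_MYLIST, OnGetDispInfo)
void CMyDlg::OnGetDispInfo(NMHDR *pNMHDR, LRESULT *pResult)
{
    NMLVDISPINFO *pDispInfo = reinterpret_cast<NMLVDISPINFO *>(pNMHDR);
    LVITEM &item = pDispInfo->item;
    const MyRow &row = m_items[item.iItem];          // your own data structure

    if (item.mask & LVIF_TEXT)
        _tcsncpy_s(item.pszText, item.cchTextMax, row.text, _TRUNCATE);

    *pResult = 0;
}

Sorting and filtering then just reorder or shrink m_items and call SetItemCountEx again, followed by an Invalidate().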
It really depends on the performance you require.
I have personally seen MAJOR increases in performance for lists holding massive amounts of data. However, it is much more work to implement, so for simple uses without much data I recommend staying away from it.
Basically, what happens with virtual list controls is that you have your data somewhere in a data structure of your own. Since the list view shows only a small subset of the whole data, it queries you for the content to display whenever something happens (a redraw is necessary, the user scrolls up or down, the sorting changes, etc.).
I don't have handy examples for you. But you can look on codeguru, I am quite sure there are very good examples to start from.
The purpose of virtual list controls is totally different: you should use them for performance reasons when you have A LOT of items in your list (I'd say 2500+).
In your case, all you need is to store the vector index in the list item data as NotJarvis explains.