In my project, I have a function that recursively iterates over the model of a QTreeView. At certain points, I append values to a QStringList that is stored in each item's Qt::UserRole.
Here's the issue: the recursive scanning does a ton of checking, reading from JSON files, importing icons from disk, etc. However, all of that is miles faster than simply appending 1 or 2 strings to the QStringList for about 5% of the items in the model.
I did some basic profiling and found that if I comment out all calls to QStringList::append() but leave in all the crazy JSON reading, icon setting, color changing, etc., the whole thing runs 3 times faster than with them in. And with them in it is noticeably slower... frustratingly slower.
So I decided to narrow it down to a single call to QStringList::append() on about 5% of the items. Here is an example of the code:
QStringList rightClickList = mainItem->data(Qt::UserRole+8).toStringList();
rightClickList.append("customName"); // comment this out and it runs 3x faster
                                     // than allllll the recursive scanning combined!
mainItem->setData(rightClickList, Qt::UserRole+8);
I would estimate about 5% of all the items in a given model have any QStringList changes at all. The rest are left alone. Are QStringList types really that slow? If so, what alternative would you recommend?
Thanks for your time!
It could be memory pressure: as the list's array-based storage grows, the runtime has to keep stopping to reallocate (and copy) that storage to keep up.
It could also be a side effect of the recursion; if the problem persists, try an explicit stack-based traversal instead of recursive calls.
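For illustration, a stack-based traversal could look roughly like this (a sketch only; the per-item work is elided and the function name is made up):
#include <QStandardItem>
#include <QStandardItemModel>
#include <vector>

void traverseModel(QStandardItemModel *model)
{
    std::vector<QStandardItem *> stack;
    if (QStandardItem *root = model->invisibleRootItem())
        stack.push_back(root);

    while (!stack.empty()) {
        QStandardItem *item = stack.back();
        stack.pop_back();

        // ... per-item work (JSON lookups, icons, colors, the Qt::UserRole+8
        // string list update) would go here ...

        for (int row = 0; row < item->rowCount(); ++row)
            if (QStandardItem *child = item->child(row))
                stack.push_back(child);
    }
}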
Related
I am working on a map/reduce view, and I always get reduce_overflow_error each time I run the view. If I set reduce_limit = false in the CouchDB configuration, it works. I want to know whether there is any negative effect if I change this config setting. Thank you.
The setting reduce_limit=true makes CouchDB control the size of the reduced output at each step of the reduction. If the stringified JSON output of a reduction step is longer than 200 chars and is at least twice as long as its input, CouchDB's query server throws an error. Both numbers, 2x and 200 chars, are hard-coded.
Since a reduce function runs inside SpiderMonkey instance(s) with only 64 MB of RAM available, the default limitation looks somewhat reasonable. Theoretically, reduce must fold, not blow up, the data it is given.
However, in real life it's quite hard to stay under the limitation in all cases. You cannot control the number of chunks for a (re)reduction step. That means you can run into a situation where the output for one particular chunk is more than twice as long in chars, even though the other chunks reduce to something much shorter. In that case even one uncomfortable chunk breaks the entire reduction if reduce_limit is set.
So unsetting reduce_limit might be helpful if your reducer can sometimes output more data than it received.
A common case is unrolling arrays into objects. Imagine you receive a list of arrays like [[1,2,3...70], [5,6,7...], ...] as input rows, and you want to aggregate the list into {key0: (sum of 0th elts), key1: (sum of 1st elts), ...}.
If CouchDB decides to send you a chunk with only 1 or 2 rows, you get an error. The reason is simple: the object keys are also counted when calculating the result length.
A possible (but very hard to hit) negative effect is the SpiderMonkey instance constantly restarting or falling over on RAM overquota when trying to process a reduction step or an entire reduction. Restarting SM is CPU and RAM intensive and generally costs hundreds of milliseconds.
I have to give some background first. I want to implement an optimized storage engine for OSM planet data (50 GB+). The purpose of this engine is to make map area extractions as fast as possible, while also retaining the ability to apply minutely updates. The design I've chosen for several reasons (not mentioning all of them here) is to use one data cell per grid cell. E.g. think of all the cells on a map being distinct files or databases: http://3.bp.blogspot.com/_CntRFtGsdQo/TTU5UMlLkTI/AAAAAAAAARk/_hW8n33t4Ok/s1600/utmworld.gif
(Just to convey the idea, though; this is not the actual cell grid I'll be using.)
I have never used leveldb before, but settled on it for its bulk insert and update performance. However, I'd like to know about the "performance characteristics" of opening many very small and very large leveldb databases, where "very small" means just a few kB and "very large" means a few hundred MB.
I expect that I'll have to open/close somewhere between 10 and 100 DBs per minute. I'd rule out leveldb if it needs significant initialization time.
An answer to this question could be either concrete figures or insight into what leveldb does during initialization and how that relates to data/index size.
PS: I'll also do my own measurements, of course. But as with all tests, I may draw wrong conclusions from my sample data.
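For what it's worth, the open-time measurement I have in mind is as simple as timing leveldb::DB::Open on a single cell database (the path below is a placeholder):
#include <chrono>
#include <iostream>
#include "leveldb/db.h"

int main()
{
    leveldb::Options options;
    options.create_if_missing = true;

    const auto start = std::chrono::steady_clock::now();

    leveldb::DB *db = nullptr;
    leveldb::Status status = leveldb::DB::Open(options, "/path/to/one-cell-db", &db);

    const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start).count();

    if (!status.ok()) {
        std::cerr << "open failed: " << status.ToString() << std::endl;
        return 1;
    }
    std::cout << "DB::Open took " << ms << " ms" << std::endl;

    delete db;  // closes the database
    return 0;
}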
I'm working on an iOS music app (written in C++) and my model looks more or less like this:
--Song
----Track
----Track
------Pattern
------Pattern
--------Note
--------Note
--------Note
So basically a Song has multiple Tracks, a Track can have multiple Patterns, and a Pattern has multiple Notes. Each one of those things is represented by a class and, except for the Song object, they're all stored inside vectors.
Each Note has a "frame" parameter so that I can calculate when a note should be played. For example, if I have 44100 samples / second and the frame for a particular note is 132300 I know that I need that Note at the start of the third second.
My question is how I should represent those notes for best performance. Right now I'm thinking of storing the notes in a vector data member of each Pattern, and then looping over all the Tracks of the Song, then over their Patterns, and then over the Notes to see which ones have a frame data member that is greater than 132300 and smaller than 176400 (the start of the 4th second).
As you can tell, that's a lot of loops, and a song could be as long as 10 minutes. So I'm wondering if this will be fast enough to calculate all the frames and send them to the buffer on time.
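Roughly, the scan I'm describing would look like this (a sketch with simplified, hypothetical types):
#include <vector>

struct Note    { int frame = 0; };
struct Pattern { std::vector<Note> notes; };
struct Track   { std::vector<Pattern> patterns; };
struct Song    { std::vector<Track> tracks; };

// Collect every note whose frame falls inside [begin, end), e.g. [132300, 176400).
std::vector<Note> notesInWindow(const Song &song, int begin, int end)
{
    std::vector<Note> result;
    for (const Track &track : song.tracks)
        for (const Pattern &pattern : track.patterns)
            for (const Note &note : pattern.notes)
                if (note.frame >= begin && note.frame < end)
                    result.push_back(note);
    return result;
}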
One thing you should remember is that to improve performance, memory consumption normally has to increase. That is also relevant (and justified) in this case, because I believe you want to store the same data twice, in different ways.
First of all, you should have this basic structure for a song:
map<Track, vector<Pattern>> tracks;
It maps each Track to a vector of Patterns. Map is fine, because you don't care about the order of tracks.
Traversing the Tracks and Patterns should be fast, as their numbers will not be high (I assume). The main performance concern is looping through thousands of notes. Here's how I suggest solving it:
First of all, for each Pattern object you should have a vector<Note> as your main data storage. You will write all changes to the Pattern's contents to this vector<Note> first.
vector<Note> notes;
And for performance considerations, you can have a second way of storing notes:
map<int, vector<Note>> measures;
This one maps each measure (by its number) in a Pattern to the vector of Notes contained in that measure. Every time the data changes in the main notes storage, you apply the same changes to the data in measures. You could also do it only once before each playback, or even during playback in a separate thread.
Of course, you could store notes only in measures, without having to sync two sources of data. But that may not be so convenient to work with when you have to apply mass operations to bunches of notes.
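A rough sketch of this dual storage, assuming a hypothetical Note type and a placeholder measure length:
#include <map>
#include <vector>

struct Note { int frame = 0; /* pitch, velocity, ... */ };  // hypothetical

struct Pattern {
    std::vector<Note> notes;                    // main storage, edited directly
    std::map<int, std::vector<Note>> measures;  // measure number -> its notes

    // One possible sync strategy: rebuild the per-measure index from the main
    // storage, e.g. once before playback starts.
    void rebuildMeasures(int framesPerMeasure)
    {
        measures.clear();
        for (const Note &note : notes)
            measures[note.frame / framesPerMeasure].push_back(note);
    }
};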
During the playback, before the next measure starts, the following algorithm would happen (roughly):
In every track, find all patterns, for which pattern->startTime <= [current playback second] <= pattern->endTime.
For each pattern, calculate the current measure number and get the vector<Note> for the corresponding measure from the measures map (a sketch of this lookup follows these steps).
Now, until the next measure (second?) starts, you only have to loop through current measure's notes.
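A minimal illustration of that lookup, reusing the hypothetical Pattern sketched above:
// Fetch the notes of one measure; an empty vector means nothing to play.
std::vector<Note> notesForMeasure(const Pattern &pattern, int measureNumber)
{
    auto it = pattern.measures.find(measureNumber);
    return it != pattern.measures.end() ? it->second : std::vector<Note>();
}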
Just keep those vectors sorted.
During playback, you can just keep a pointer (index) into each vector for the last note played. To search for new notes, you only have to check the following note in each vector; no looping through the notes is required.
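A rough sketch of that cursor idea, with a hypothetical Note type and vectors assumed to be sorted by frame:
#include <cstddef>
#include <vector>

struct Note { int frame = 0; };  // hypothetical, as in the question

// Keeps an index to the next unplayed note; advancing never rescans the vector.
struct NoteCursor {
    const std::vector<Note> *notes = nullptr;  // sorted by Note::frame
    std::size_t next = 0;

    template <typename PlayFn>
    void advanceTo(int currentFrame, PlayFn play)
    {
        while (next < notes->size() && (*notes)[next].frame <= currentFrame)
            play((*notes)[next++]);
    }
};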
Keep your vectors sorted, and try things out - that is more important than any answer you can receive here.
For all of your questions you should seek to answer them with tests and prototypes; then you will know whether you even have a problem. And while trying things out you will see things that you wouldn't normally see from the theory alone.
and my model looks more or less like this:
Several critically important concepts are missing from your model:
Tempo.
Dynamics.
Pedal.
Instrument.
Time signature.
(Optional) Tonality.
Effect (Reverberation/chorus, pitch wheel).
Stereo positioning.
Lyrics.
Chord maps.
Composer information/Title.
Each Note has a "frame" parameter so that I can calculate when a note should be played.
Several critically important concepts are missing from your model:
Articulation.
Aftertouch.
Note duration.
I'd advise taking a look at lilypond. It is typesetting software, but it is also one of the most precise ways to represent music in a human-readable text format.
My question is how I should represent those notes for best performance?
Put them all into a std::map<Timestamp, Note> and find the segment you want to play using lower_bound/upper_bound. Alternatively, you could binary search them in a flat std::vector, as long as the data is sorted.
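A small sketch of the map-based variant; Timestamp and Note are placeholders here, and std::multimap works the same way if several notes can share a frame:
#include <cstdint>
#include <map>

struct Note { /* pitch, velocity, ... */ };  // hypothetical
using Timestamp = std::int64_t;              // frame index

// Pull out one second's worth of notes, i.e. frames in [secondStart, secondStart + sampleRate).
void playSecond(const std::map<Timestamp, Note> &notes, Timestamp secondStart, Timestamp sampleRate)
{
    auto first = notes.lower_bound(secondStart);
    auto last  = notes.upper_bound(secondStart + sampleRate - 1);
    for (auto it = first; it != last; ++it) {
        // schedule it->second for playback at frame it->first
    }
}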
Unless you want to make a "beeper", writing a music application is much more difficult than you might think. I'd strongly recommend trying another project.
What would be a better choice than QListWidget for displaying a lot of log lines in a GUI, coming from the backend at an average rate of 40 lines per second?
QListWidget flickers, and even shows a white box instead of the widget for a long time, once a lot of strings have been placed into it.
Is there a better solution for dynamically displaying log lines to the user?
update:
Changed architecture: new QStrings are added to a std::deque<QString*>. Using a QTimer, I append those strings to a QPlainTextEdit every 1/10 of a second, removing them from the deque. A boost::mutex is used to protect the std::deque (log lines come from different threads).
It would be nice to have the time to implement my own QListView and keep the strings in big chunks of pre-allocated memory.
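Roughly, the updated setup condenses to something like this (simplified; std::mutex stands in for boost::mutex, QString is stored by value instead of QString*, and the names are placeholders):
#include <deque>
#include <mutex>
#include <QPlainTextEdit>
#include <QString>

struct LogQueue {
    std::mutex mutex;
    std::deque<QString> lines;

    void push(const QString &line)       // called from backend threads
    {
        std::lock_guard<std::mutex> lock(mutex);
        lines.push_back(line);
    }

    void flushTo(QPlainTextEdit *view)   // called from the GUI thread by a QTimer
    {
        std::deque<QString> batch;
        {
            std::lock_guard<std::mutex> lock(mutex);
            batch.swap(lines);
        }
        for (const QString &line : batch)
            view->appendPlainText(line);
    }
};

// Wiring, e.g. in the window constructor:
//   auto *timer = new QTimer(this);
//   connect(timer, &QTimer::timeout, this, [this] { queue.flushTo(logView); });
//   timer->start(100);  // 1/10 of a second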
Are you sure you need the functionality of a QListWidget? If you just want to display log lines, I think a simple read-only QPlainTextEdit would be more appropriate.
You might try using a QListView with your own implementation of QAbstractItemModel. Then you can store your lines however you wish and append new lines in big batches (about once a second should be OK). The view is then not refreshed for every added line but only per batch, which should greatly improve performance.
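A rough sketch of such a model, using QAbstractListModel (the list-shaped convenience subclass of QAbstractItemModel); the class and member names are made up:
#include <QAbstractListModel>
#include <QStringList>
#include <QVariant>

class LogModel : public QAbstractListModel
{
public:
    int rowCount(const QModelIndex &parent = QModelIndex()) const override
    {
        return parent.isValid() ? 0 : int(m_lines.size());
    }

    QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const override
    {
        if (!index.isValid() || role != Qt::DisplayRole)
            return QVariant();
        return m_lines.at(index.row());
    }

    // Append a whole batch of lines: the attached QListView repaints once per
    // batch instead of once per line.
    void appendBatch(const QStringList &batch)
    {
        if (batch.isEmpty())
            return;
        const int first = int(m_lines.size());
        beginInsertRows(QModelIndex(), first, first + int(batch.size()) - 1);
        m_lines += batch;
        endInsertRows();
    }

private:
    QStringList m_lines;
};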
I would suggest setting a refresh rate and appending all gathered items at once. You will avoid a repaint of the widget for every line you append.
Long story short:
A QTimer with a refresh rate (~1-3 seconds would be enough), and QListWidget::addItems instead of QListWidget::addItem.
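Roughly, with placeholder names:
#include <QListWidget>
#include <QObject>
#include <QStringList>
#include <QTimer>

// Queue incoming lines in 'pending' (guard it with a mutex if they arrive from
// other threads, as in the question's update), and flush them in one call.
void setupBatchedLogging(QListWidget *list, QStringList *pending, QObject *parent)
{
    auto *timer = new QTimer(parent);
    QObject::connect(timer, &QTimer::timeout, list, [list, pending] {
        if (pending->isEmpty())
            return;
        list->addItems(*pending);  // one repaint for the whole batch
        pending->clear();
    });
    timer->start(2000);  // ~2 seconds, within the suggested 1-3 s range
}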
I am working on an application built upon Wt.
We have a performance problem, as it must display a lot of data in a WTableView associated with a WStandardItemModel.
For each new item to be added to the table it does:
model->setData( row, column, data )
(which happens a few thousand times).
Is there some way to make it faster? Some other way to add data to the table?
It can take 2 seconds to generate the data and several minutes to display it...
WStandardItemModel is a general-purpose model that is easy to use, but it's not optimal for very large models. Try specializing a WAbstractTableModel; you only need to reimplement three methods, and you can read your data from wherever it resides, or compute it on the fly.
It's not normal for a view to take minutes to display. I've used views on tables with many thousands of entries without performance problems. Was your system swapping because of the memory wasted in an (extremely large) WStandardItemModel?
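A rough skeleton of such a specialization, using Wt 3-style signatures (Wt 4 uses Wt::cpp17::any and Wt::ItemDataRole instead); the row storage here is just a placeholder for wherever your data actually lives:
#include <string>
#include <vector>
#include <boost/any.hpp>
#include <Wt/WAbstractTableModel>
#include <Wt/WString>

class BigTableModel : public Wt::WAbstractTableModel
{
public:
    explicit BigTableModel(const std::vector<std::vector<std::string>> &rows)
        : rows_(rows) { }

    virtual int rowCount(const Wt::WModelIndex &parent = Wt::WModelIndex()) const
    {
        return parent.isValid() ? 0 : static_cast<int>(rows_.size());
    }

    virtual int columnCount(const Wt::WModelIndex &parent = Wt::WModelIndex()) const
    {
        if (parent.isValid() || rows_.empty())
            return 0;
        return static_cast<int>(rows_.front().size());
    }

    virtual boost::any data(const Wt::WModelIndex &index, int role = Wt::DisplayRole) const
    {
        if (role != Wt::DisplayRole)
            return boost::any();
        return Wt::WString::fromUTF8(rows_[index.row()][index.column()]);
    }

private:
    const std::vector<std::vector<std::string>> &rows_;  // data lives elsewhere
};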