I am looking for a way to create an infinite view on a model that is not completely initialized. I would like to create something similar to an Excel spreadsheet, and all I came up with was to start with an initialized model (e.g. 100x100 empty cells, perhaps backed by a database that holds empty values), and then dynamically add more rows/columns (and update the view) once we get close to the end of a scrollbar.
But I am wondering if that is the best solution - I think I would definitely benefit from a model that is only partially filled in. By that, I mean storing information in the model only about filled cells, and letting the view handle showing 'empty cells' (which would only be created once we - for example - click them).
I know it would be necessary to store XY positions along with the cell data (instead of only a 2D container of data), and I can see two possible solutions:
a) have a pointer-like container which would contain a list of filled cells with their positions on a 2D plane
b) have a 2D container with size (x,y), where x and y would mean the 'last filled cell' in a given dimension
And for both solutions, I would like to dynamically allocate more space once data is written.
So here is my question - how can this be achieved with Qt model/view programming, if it is even possible to show 'ghost cells' without a model filled with empty data? It would also be nice to get a brief explanation of how this is done in apps like Excel.
Well, your table will never be truly infinite unless you implement indexing with arbitrary-precision numbers, and in that case you will probably not be able to use the Qt classes.
But I think you should choose some big enough number to define the maximum. It can be a really large number... if you are on a 64-bit machine, then your 'infinite' table can have 9,223,372,036,854,775,807 rows and the same number of columns. This big number happens to be the maximum of a signed 64-bit int, and int is used for indexing with QModelIndex in QAbstractItemModel. So you can have a total of about 8.5070592e+37 cells in your two-dimensional 'Excel' table. If this table size is not big enough for you, then I do not know what is. Just for comparison, there are approximately 7e+27 atoms in the average human body, maybe a bit more after the covid lockdowns because people were eating instead of doing sports. :) So you can count all the atoms of all the people on this planet (say there are a bit fewer than 1e+10 people altogether). But you will need to buy a slightly bigger computer for this task.
So if you decide to go this way, you can easily subclass QAbstractTableModel and display it in a QTableView. Of course, you cannot store the underlying data in a two-dimensional array, because you do not have enough memory; you have to choose some other method. For example a QHash<QPoint, QString>, where the QPoint represents the coordinates and the QString the value (you can of course choose any other type instead of a string). Then when you want to get the value for given coordinates, you just look it up in the hash table. The number of data points you will be able to hold depends only on your memory size. This solution is very simple - I guess it will be some 30 lines of code, not more.
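A minimal sketch of such a model (class and member names are my own; I am assuming Qt 6, where a qHash(QPoint) overload is provided - on Qt 5 you would have to supply one yourself):

#include <QAbstractTableModel>
#include <QHash>
#include <QPoint>

class SparseTableModel : public QAbstractTableModel
{
public:
    // Pick whatever "practically infinite" bounds you like; the view only
    // ever asks for the cells it actually shows.
    int rowCount(const QModelIndex& = {}) const override    { return 1000000; }
    int columnCount(const QModelIndex& = {}) const override { return 1000000; }

    QVariant data(const QModelIndex& index, int role = Qt::DisplayRole) const override
    {
        if (role != Qt::DisplayRole && role != Qt::EditRole)
            return {};
        // Cells that were never written are simply absent from the hash,
        // so the view shows them as empty "ghost cells".
        return m_cells.value(QPoint(index.column(), index.row()));
    }

    bool setData(const QModelIndex& index, const QVariant& value, int role = Qt::EditRole) override
    {
        if (role != Qt::EditRole)
            return false;
        m_cells.insert(QPoint(index.column(), index.row()), value.toString());
        emit dataChanged(index, index, {Qt::DisplayRole, Qt::EditRole});
        return true;
    }

    Qt::ItemFlags flags(const QModelIndex& index) const override
    {
        return QAbstractTableModel::flags(index) | Qt::ItemIsEditable;
    }

private:
    QHash<QPoint, QString> m_cells;   // only the filled cells are stored
};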
We have very sparse data that we are attempting to plot with Google Charts. There are 16 different vectors and each has about 12,000 points. The points are times, and all of the times are different. My reading of the API is that I need to create a row where each element corresponds to a different vector. So that's a set of 192,000 rows, where the first element in each row is the time and all of the other elements are null except for the one that has data there, for a total of 3,072,000 elements. When we give this to Google Charts, the browser dies.
The problem is that our array is sparse, so arrayToDataTable doesn't work either.
My question: is there a more efficient way to do this? Can I plot each data value independently, rather than all at the same time?
It turns out that the answer to this question is to do server-side data reduction in the form of binning. The individual rows each have their own timestamp, but because we are displaying this in a graph with a width of at most 2000 pixels, it makes sense to bin the data on the server into 2000 individual rows, each with 16 columns. Then the total array is 32,000 elements, which is well within the limits of the browser.
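The server-side language isn't specified here, but just to illustrate the binning idea, here is a rough C++ sketch (the Sample struct, the series count and the averaging strategy are all assumptions):

#include <algorithm>
#include <array>
#include <cstddef>
#include <limits>
#include <vector>

struct Sample { double time; double value; };

constexpr std::size_t kSeries = 16;
constexpr std::size_t kBins = 2000;   // roughly one output row per pixel of chart width

// Collapse each series into kBins time buckets, averaging the samples that
// fall into the same bucket and leaving NaN where a series has no data.
std::vector<std::array<double, kSeries>>
binSeries(const std::array<std::vector<Sample>, kSeries>& series, double tMin, double tMax)
{
    std::vector<std::array<double, kSeries>> sums(kBins);
    std::vector<std::array<int, kSeries>> counts(kBins);
    for (auto& row : sums) row.fill(0.0);
    for (auto& row : counts) row.fill(0);

    const double width = (tMax - tMin) / kBins;
    for (std::size_t s = 0; s < kSeries; ++s) {
        for (const Sample& p : series[s]) {
            const double pos = (p.time - tMin) / width;
            const auto b = static_cast<std::size_t>(std::clamp(pos, 0.0, double(kBins - 1)));
            sums[b][s] += p.value;
            ++counts[b][s];
        }
    }
    for (std::size_t b = 0; b < kBins; ++b)
        for (std::size_t s = 0; s < kSeries; ++s)
            sums[b][s] = counts[b][s] ? sums[b][s] / counts[b][s]
                                      : std::numeric_limits<double>::quiet_NaN();
    return sums;   // 2000 rows x 16 columns, small enough for the browser
}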
I've hit a snag in continuing my work on a C++ program, and I'm not sure what the best way to approach my problem is. Here is the situation in non-programming terms: I have a list of children, and each child has a specific weight, age, and happiness. I have a way for people to visually view the bones of the child, which are specific to these characteristics. (Think of an MMO character customization where there are sliders for each characteristic, and when you slide the weight slider to heavy, the walk cycle looks like the character is heavier.)
Before, my system had a set walk cycle for each end of the spectrum of each characteristic. For example, there is one specific walk cycle for the heaviest walk, one for the lightest walk, one for the youngest walk, etc. There was no middle input: the input was the position of the slider on the scale, and the heaviest and lightest walk cycles were averaged by a specific percentage - the position of the slider.
Now to the problem: I have a large library of preset walk cycles, and each walk cycle has a specific weight, age, and happiness. So Joe has a weight of 4, an age of 7, and a happiness level of 8, and Sally 2, 3, 5. Say the sliders are moved to a specific value (weight 5, age 8, happiness 7); I want to find in my library the child that is closest to all three of these values, and here Joe would be the closest. However, only one slider can be moved at a time, and the slider that was moved last is the most important characteristic to match.
I was told to check out using a three-dimensional array, but I would rather use an array of child objects and do multiple searches on that array. I am a rookie, and I know the search will take a bit of computing time, but I keep leaning towards using the single array. I could also use a two-dimensional array, but I'm not sure. What data structure would be best to search for three values in?
Thank you for any help!
How many different values can each slider take? If there are, say, ten values for each slider, this would mean there are 10*10*10 = 1000 different possible character classes. If your library has fewer than 1000 walk cycles, just reading through them all looking for the nearest match is probably going to be fast enough.
Of course, if there are 100 values for each slider then you may want something more clever. My point is that there are some things that don't have to be optimized.
Also, is your library of walk cycles fixed once and for all? If so, perhaps you could precompute the walk cycle for each setting of the sliders and write that to a static array.
I agree with Wilf that the number of walk cycles is critical, as even if there are say 100,000 cycles you could easily use a brute-force find-the-minimum over...
weight_factor * diff(candidate.weight, target.weight) +
age_factor * diff(candidate.age, target.age) +
happiness_factor * diff(candidate.happiness, target.happiness)
...where the factor for the last-moved slider was higher than the others.
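A minimal sketch of that brute-force search (the Cycle type, the factor parameters and the use of absolute differences are assumptions; the caller would pass a larger factor for the last-moved slider):

#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct Cycle { double weight, age, happiness; /* animation data ... */ };

std::size_t closestCycle(const std::vector<Cycle>& library, const Cycle& target,
                         double weightFactor, double ageFactor, double happinessFactor)
{
    std::size_t best = 0;
    double bestScore = std::numeric_limits<double>::max();
    for (std::size_t i = 0; i < library.size(); ++i) {
        const Cycle& c = library[i];
        const double score = weightFactor    * std::abs(c.weight    - target.weight)
                           + ageFactor       * std::abs(c.age       - target.age)
                           + happinessFactor * std::abs(c.happiness - target.happiness);
        if (score < bestScore) { bestScore = score; best = i; }   // smaller = closer
    }
    return best;   // index of the closest preset walk cycle
}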
For more cycles than that you'd want to limit the search space somewhat, and some indices would be useful, e.g.:
map<int, map<int, map<int, vector<Cycle*>>>> cycles_by_weight_age_happiness;
You'd populate that by adding a pointer to each walk cycle - characterised by { weight, age, happiness } - to cycles_by_weight_age_happiness[rw(weight)][ra(age)][rh(happiness)], where each of rw, ra and rh rounds its parameter to whatever granularity you like (e.g. round weight down to the nearest 5kg, group ages by the integer part of log base 1.5 of the age, leave happiness alone). Then to search, you evaluate the entries "around" your target { rw(weight), ra(age), rh(happiness) } indices... the further you deviate from there (especially on the last-slider-moved parameter), the less likely you are to find a better fit than you've already seen, so tune to taste.
The above indexing is a refinement of what I think Wilf intended - just using functions to decouple the mapping from value space into vectors in the index.
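A rough sketch of that index (the bucketing functions and the Cycle type are assumptions; std:: qualification added so it compiles as-is):

#include <algorithm>
#include <cmath>
#include <map>
#include <vector>

struct Cycle { double weight, age, happiness; };

int rw(double w) { return static_cast<int>(w / 5.0) * 5; }                                 // nearest 5kg bucket
int ra(double a) { return static_cast<int>(std::log(std::max(a, 1.0)) / std::log(1.5)); }  // coarse age bucket
int rh(double h) { return static_cast<int>(h); }                                           // happiness left alone

std::map<int, std::map<int, std::map<int, std::vector<Cycle*>>>> cycles_by_weight_age_happiness;

void addToIndex(Cycle& c)
{
    cycles_by_weight_age_happiness[rw(c.weight)][ra(c.age)][rh(c.happiness)].push_back(&c);
}

// To search, start at the bucket for the target's { rw, ra, rh } and widen
// outward until no better candidate is plausible.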
I have a collection of objects, each with a position (x, y)
These objects randomly move
Could have thousands of it
At any moment I want to get the list of objects within a (constant) radius RAD of a position POS.
Edit - Context: It's for a game server, which would (utopically) have thousands of players. When a player moves/[makes an action], I want to send the update to the other players within the radius.
The easy way, every time I need the list:
std::vector<Object*> near_objects;
for (Object& o : objects) {
    if (o.distance(POS) < RAD)
        near_objects.push_back(&o);
}
I guess there are better/faster methods, but I don't know what to search for.
Here are two suggestions.
Usually you compute the distance as sqrt((a.x-b.x)^2 + (a.y-b.y)^2), and the expensive part is computing sqrt(). If you compute RAD^2 once outside the loop and compare it against the expression inside the sqrt(), you can avoid computing sqrt() in the loop.
If most of the objects are far away, you can eliminate them by using
if( abs(a.x-b.x) > RAD ) continue;
if( abs(a.y-b.y) > RAD ) continue;
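A small sketch combining both suggestions (the Object fields and the function name are assumptions):

#include <cmath>
#include <vector>

struct Object { double x, y; };

std::vector<Object*> objectsWithin(std::vector<Object>& objects, double px, double py, double rad)
{
    std::vector<Object*> near;
    const double rad2 = rad * rad;               // RAD^2 computed once, outside the loop
    for (Object& o : objects) {
        if (std::abs(o.x - px) > rad) continue;  // cheap rejection on x
        if (std::abs(o.y - py) > rad) continue;  // cheap rejection on y
        const double dx = o.x - px, dy = o.y - py;
        if (dx * dx + dy * dy < rad2)            // compare squared values, no sqrt()
            near.push_back(&o);
    }
    return near;
}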
I assume this is for some kind of MMO - I can't imagine 'thousands' of players in any other scenario. So your problem is actually more complex: you need to determine which players should receive the update about each player, so it turns into an O(n^2) problem and we're dealing with millions of checks. The first thing to consider is whether you really want to send updates based only on distance. You could divide your world into zones and keep a separate list of players for each zone, then run the check only within those lists; for m zones that gives O(m * (n/m)^2) = O(n^2/m). Obviously you also want to send updates to players in the same party, and allow players near zone boundaries to know about each other (but make sure to keep that area small and unattractive so players don't just stand there). Also, considering a huge world and relatively slow player speed, you don't have to update that info all that often.
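A rough sketch of the zone idea (the zone size, the key packing and the Player type are assumptions; moving players between zones and checking neighbouring zones are left out):

#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Player { double x, y; /* ... */ };

constexpr double kZoneSize = 256.0;   // assumed zone edge length in world units

// Pack the 2D zone coordinate into a single hashable key.
std::int64_t zoneKey(double x, double y)
{
    const auto zx = static_cast<std::int64_t>(std::floor(x / kZoneSize));
    const auto zy = static_cast<std::int64_t>(std::floor(y / kZoneSize));
    return (zx << 32) ^ (zy & 0xffffffff);
}

// One list of players per zone; updates are broadcast only within a zone
// (plus its neighbours near the borders).
std::unordered_map<std::int64_t, std::vector<Player*>> zones;

void addPlayer(Player& p) { zones[zoneKey(p.x, p.y)].push_back(&p); }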
Also keep in mind that memory/cache usage is extremely important for performance and I was referring to list as an abstract term - you should keep data accessed in tight loops in arrays, but make sure elements aren't too big. In this case consider making a simple class containing basic player data for those intensive loops and keep a pointer to a bigger class containing other data.
And on a total side note - your question seems to be quite basic, yet you are trying to build an MMO, which is not only technically complicated but also requires a ton of work. I believe that pursuing a smaller, less ambitious project that you will actually be able to complete would be more beneficial.
You could put your objects into an ordered data structure, indexed by their distance from POS. This is similar to a priority queue, but you don't want to push/pop items all the time.
You'd have to update an object's key whenever it moves to a new position. To iterate over the items within a given radius RAD, you'd simply iterate over the items of this ordered data structure as long as the distance (the key) is less than RAD.
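A minimal sketch of that idea, using std::multimap as the ordered structure (names are assumptions; remember the keys have to be recomputed whenever an object or POS moves):

#include <cmath>
#include <map>

struct Object { double x, y; };

double distanceFrom(const Object& o, double px, double py)
{
    return std::hypot(o.x - px, o.y - py);
}

// Ordered container keyed by distance from POS.
std::multimap<double, Object*> byDistance;

void track(Object& o, double px, double py)
{
    byDistance.emplace(distanceFrom(o, px, py), &o);
}

void visitWithin(double rad)
{
    // The multimap is sorted by key, so we can stop at the first entry
    // whose distance reaches RAD.
    for (auto it = byDistance.begin(); it != byDistance.end() && it->first < rad; ++it) {
        Object& o = *it->second;
        // ... send the update to o ...
        (void)o;
    }
}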
I have a 2D top-down 45-degree game like Pokemon or Zelda. Since the y value of an object determines its depth, objects need to be drawn in order of their y value. So when you're standing behind a tree, for example, the tree is drawn on top of your player so it looks like you are standing behind the tree.
My current design would be to draw a row of tiles, then draw any players standing on that row, then draw the next row, and then draw any players standing on that one. This way any tile that has a higher y value than the player is drawn in front of them to simulate depth.
However, my players are currently a std::vector of objects that is simply iterated and drawn after all the tiles are drawn. For my method to work, I would have to either iterate the vector for every row of tiles and only render players on the current row, OR somehow sort every player by y value each frame. Both of these methods seem quite CPU intensive, and maybe I am overthinking it and there is a simpler way of simulating depth in this type of game.
Edit:
This game is an MMORPG-type game, so there could potentially be many players/NPCs walking around, which is why I need a very efficient method.
Any ideas or comments would be appreciated!
Thanks
You could use std::set or std::map instead of vector to keep your objects in sorted order. Unfortunately you wouldn't be able to simply modify their position, you would have to remove/insert each time you needed to change the y coordinate.
(Disregard my original suggestion, if you read it; it was daft.)
As far as I can tell, you basically have to iterate over a sorted container each time you render a frame; there will be a computational penalty for this, but having to do a copy and sort each time will not be that bad (O(N log N) sorting time, I'd guess).
A clever data structure might help you here; an array containing a vector of game objects as each element, for example. Each element of the array represents a row of your tiled grid, and you iterate over your vector of objects and bin them into your depth buffer array (which will take approximately O(N) time). Then just iterate over the vector representing each row and draw each object in turn, again O(N).
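A sketch of that "depth buffer array" (the Entity type, the row field and the draw callbacks are assumptions):

#include <functional>
#include <vector>

struct Entity { int row; /* sprite, world position, ... */ };

void drawByRow(const std::vector<Entity*>& entities, int rowCount,
               const std::function<void(int)>& drawTileRow,
               const std::function<void(const Entity&)>& drawEntity)
{
    std::vector<std::vector<Entity*>> rows(rowCount);   // one bucket per tile row
    for (Entity* e : entities)                          // O(N) binning
        rows[e->row].push_back(e);

    for (int r = 0; r < rowCount; ++r) {                // draw back to front
        drawTileRow(r);                                 // the tiles of this row first
        for (const Entity* e : rows[r])
            drawEntity(*e);                             // then anyone standing on it
    }
}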
There are probably cleverer z-buffering techniques, but I'll leave those as an exercise for the reader.
As long as you aren't also attempting to sort your players according to other criteria, you can get away with just sorting your players by y-coordinate every time you want to iterate through them. As long as you use a sort that runs in linear time when the input is mostly sorted, you will probably not even incur O(n log n) time for this step. I'm making the assumption here that your set of players changes slowly and that their y-coordinates also change slowly. If this is the case, then every time you need to sort the vector it will already be mostly sorted, and something like Smoothsort or Timsort will run in linear time.
For C++, it looks like you can find an implementation of Timsort here.
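To illustrate why mostly-sorted input is cheap, here is a plain insertion sort by y (the Player type is an assumption); like the adaptive sorts mentioned above, it runs in roughly O(n) when only a few players changed order since the last frame:

#include <cstddef>
#include <vector>

struct Player { float y; /* ... */ };

void resortByY(std::vector<Player>& players)
{
    for (std::size_t i = 1; i < players.size(); ++i) {
        Player p = players[i];
        std::size_t j = i;
        while (j > 0 && players[j - 1].y > p.y) {   // shift only the few out-of-place entries
            players[j] = players[j - 1];
            --j;
        }
        players[j] = p;
    }
}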
If you have space, another option would be to create an array of lists of players, with the index of the array being the row number, and the array containing a collection of the players in that row.
As I mentioned, this will require a bit of extra memory, and some bookkeeping every time a player moves to a different row, but then drawing is simply a matter of iterating through the rows and drawing the players that are present in each row.
I'm looking at creating a basic side-scroller using OpenGL and C++, however I'm having trouble tackling a few basic conceptual issues, namely:
Dividing the window into easy "blocks" (some sort of grid system). On what level would I want to do this? (The OpenGL viewport size, or through some abstraction that ensures working with multiples of x?)
Storing the data for all these "blocks" to allow for collision detection and some special effects. What's a sensible way of going about this - I was thinking along the lines of a multi-dimensional array of objects (which contain information such as the tile type), but this doesn't seem like a very elegant or efficient solution.
Usually, it isn't the window (i.e. the viewport) which is divided into a grid but rather the "gameplay area". Pick a good size according to the style of your art (something like 64px - you might want to select a size that is a power of two for technical reasons) and create tiles of that size for your game (stored and loaded as a 1D array).
Then, these tiles are referenced (by an offset) in a tilemap, which actually describes what your level looks like. Each tile can also have extra metadata for handling collisions and game events. You might want to use an existing map format and toolset to save time, like Mappy.
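A bare-bones sketch of such a tilemap (the field names and metadata are assumptions):

#include <cstdint>
#include <vector>

struct Tile {
    std::uint16_t textureIndex = 0;   // which tile image (offset into the tile set) to draw
    bool          solid = false;      // collision metadata
    std::uint8_t  effect = 0;         // game-event / special-effect id
};

struct TileMap {
    int width = 0, height = 0;
    std::vector<Tile> tiles;          // the whole level, stored row by row in a 1D array

    Tile&       at(int x, int y)       { return tiles[y * width + x]; }
    const Tile& at(int x, int y) const { return tiles[y * width + x]; }
};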
Tips:
Think of the display and the data separately (look up Model-View-Controller).
How you store the data depends on how it's going to be accessed, not on how it's going to be displayed. Think of it from the computer's point of view!
Hint: it's often easiest to store the data in a one-dimensional array and work out where along it the next point up/down lies, rather than working in 2D.
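Following that last hint, the neighbouring tiles are just fixed offsets from the current index (a tiny sketch; the width parameter is assumed to be the map width in tiles):

int tileIndex(int x, int y, int width) { return y * width + x; }   // position of (x, y) in the 1D array
int tileAbove(int i, int width)        { return i - width; }        // one row up
int tileBelow(int i, int width)        { return i + width; }        // one row down
int tileLeft (int i)                   { return i - 1; }
int tileRight(int i)                   { return i + 1; }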