I am designing a class that manages a collection of rectangles.
Each rectangle represents a button, so it contains the following properties:
x position
y position
width
height
a callback function for what happens when it is pressed.
The concept itself is fairly straightforward and is managed through a command-line interface. In particular:
If I type "100, 125" it looks up whether there is a rectangle that contains this point (or multiple) and performs their callback functions.
My proposal is to iterate over all rectangles in the collection and either perform the callback of every rectangle that contains the point, or stop at the first matching rectangle (simulating z-order).
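Roughly, the linear scan I have in mind looks like this (the names here are just placeholders):

```cpp
#include <functional>
#include <vector>

struct Button {
    int x, y, w, h;
    std::function<void()> onPress;  // callback for when the button is pressed
    bool contains(int px, int py) const {
        return px >= x && px < x + w && py >= y && py < y + h;
    }
};

// Fire the callback of the first (topmost) button containing the point,
// assuming the collection is sorted front-to-back; returns whether any hit.
bool press(const std::vector<Button>& buttons, int px, int py) {
    for (const Button& b : buttons) {
        if (b.contains(px, py)) {
            b.onPress();
            return true;  // stop at the first match (z-order)
        }
    }
    return false;
}
```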
I fear, however, that this solution is sloppy, as the iteration grows longer the more rectangles I have.
This is fine for the console application: it can easily go over 10,000 rectangles and find which ones match, and while that is an expensive computation, time-wise it is not really an issue there.
The issue is that if I were to implement this algorithm in a GUI application, which needs to perform this check every time the mouse moves (to simulate a mouse-over effect), then moving the mouse just 10 pixels over a panel with 10,000 objects would require checking 100,000 objects, and that is nowhere near the 1,000+ pixels people tend to move the mouse across.
Is there a more elegant solution to this issue, or will such programs always need to be so expensive?
Note: I understand that most GUIs do not have to deal with 10,000 active objects at once, but that is not the point here.
The reason I chose to explain this issue in terms of buttons is that they are simpler. Ideally, I would like a solution that works in GUIs as well as in particle systems that interact with the mouse, and other demanding systems.
In a GUI, I can easily use indirection to reduce the number of checks drastically, but this does not alleviate the need to perform checks every single time the mouse moves, which can be quite demanding even with only 25 buttons: moving across 400 pixels with 25 objects (in ideal conditions) is as bad as moving 1 pixel with 10,000 objects.
In a nutshell, this issue is twofold:
What can I do to reduce the number of checks below 10,000 (given there are 10,000 objects)?
Is it possible to reduce the number of checks required in such a GUI application from a full pass on every mouse move to something more reasonable?
Any help is appreciated!
There are any number of 2D-intersection acceleration structures you could apply. For example, you could use a Quadtree (http://en.wikipedia.org/wiki/Quadtree) to recursively divide the viewport into quadrants. Subdivide each quadrant that doesn't fall entirely within or entirely outside every rectangle, and at each leaf store a pointer to either the topmost rectangle or the list of rectangles it overlaps (or NULL if no rectangles land there). The structure isn't trivial to build, but it's fairly straightforward conceptually.
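A minimal point-query quadtree sketch, to make the idea concrete (the types and thresholds are illustrative, not a drop-in implementation):

```cpp
#include <memory>
#include <vector>

struct Rect {
    int x, y, w, h;
    bool contains(int px, int py) const {
        return px >= x && px < x + w && py >= y && py < y + h;
    }
    bool intersects(const Rect& o) const {
        return x < o.x + o.w && o.x < x + w && y < o.y + o.h && o.y < y + h;
    }
};

// Leaves hold rectangle pointers; a node splits once it holds more than
// kMaxItems. Rectangles spanning several quadrants are stored in each one
// they touch, but a point query only descends into the single quadrant
// containing the point, so no duplicates are reported.
class Quadtree {
public:
    explicit Quadtree(Rect bounds, int depth = 0) : bounds_(bounds), depth_(depth) {}

    void insert(const Rect* r) {
        if (kids_[0]) {
            for (auto& k : kids_)
                if (k->bounds_.intersects(*r)) k->insert(r);
            return;
        }
        items_.push_back(r);
        if ((int)items_.size() > kMaxItems && depth_ < kMaxDepth) split();
    }

    void query(int px, int py, std::vector<const Rect*>& out) const {
        if (!bounds_.contains(px, py)) return;
        for (const Rect* r : items_)
            if (r->contains(px, py)) out.push_back(r);
        if (kids_[0])
            for (const auto& k : kids_) k->query(px, py, out);
    }

private:
    static constexpr int kMaxItems = 8;  // split threshold (tunable)
    static constexpr int kMaxDepth = 8;

    void split() {
        int hw = bounds_.w / 2, hh = bounds_.h / 2;
        kids_[0] = std::make_unique<Quadtree>(Rect{bounds_.x, bounds_.y, hw, hh}, depth_ + 1);
        kids_[1] = std::make_unique<Quadtree>(Rect{bounds_.x + hw, bounds_.y, bounds_.w - hw, hh}, depth_ + 1);
        kids_[2] = std::make_unique<Quadtree>(Rect{bounds_.x, bounds_.y + hh, hw, bounds_.h - hh}, depth_ + 1);
        kids_[3] = std::make_unique<Quadtree>(Rect{bounds_.x + hw, bounds_.y + hh, bounds_.w - hw, bounds_.h - hh}, depth_ + 1);
        std::vector<const Rect*> old;
        old.swap(items_);
        for (const Rect* r : old) insert(r);  // redistribute into children
    }

    Rect bounds_;
    int depth_;
    std::vector<const Rect*> items_;
    std::unique_ptr<Quadtree> kids_[4];
};
```

Each query then only touches the O(log n) nodes along the path to the point, instead of all n rectangles.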
Is there a more elegant solution to this issue, or will such programs
always need to be so expensive?
Instead of doing a linear search through all the objects you could use a data structure like a quad tree that lets you efficiently find the nearest object.
Or, you could come up with a more realistic set of requirements based on the intended use for your algorithm. A GUI with 10,000 buttons visible at once is a poor design for many reasons, the main one being that the poor user will have a very hard time finding the right button. A linear search through a number of rectangles more typical of a UI, say somewhere between 2 and 100, will be no problem from a performance point of view.
Related
I'm working on a top-down RPG game in SDL2. I've gotten to the point where I can move a character around, use sprite-sheets and all that jazz. I started work on the camera using SDL_RenderSetViewport. However, I ran into a bunch of issues that I don't really want to deal with. So, instead of moving the viewport when I move the player, I just moved everything else. I have a function that moves all the sprites (excluding the player), which are stored in a vector.
I know this results in a lot of math calculations, but would it be inefficient from a rendering perspective? My logic is that the objects are copied to the SDL_Renderer every time it refreshes; since they are recopied regardless of changes, wouldn't it be the same amount of processing every time?
Yes, this is completely fine.
SDL_RenderSetViewport isn't relevant to this issue either way. This function affects the bounds of the final frame relative to the window. As such, you don't really have any other option.
Since they are being recopied, regardless of changes, wouldn't it be the same amount of processing every time?
You're right, this doesn't matter. Not to mention, you're (I assume) clearing the screen at the beginning of the frame.
If you're feeling adventurous, you might want to look at using glTranslatef, though I don't recommend it, since SDL uses Direct3D on Windows, meaning you'd have to force OpenGL somehow.
I'd say you're safe with your current approach; from looking around the internet, this appears to be a common way to do it.
I have a function that moves all the sprites (excluding the player), which are stored in a vector.
You're overthinking this a little bit. You can just transform each sprite's position when you're rendering it, which is a lot more efficient than dealing with an entire vector.
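A sketch of that idea: keep sprites in world coordinates and apply the camera offset only when computing the destination position at draw time (Vec2 here is just a stand-in, with the actual SDL calls omitted):

```cpp
struct Vec2 { int x, y; };

// World-to-screen: subtract the camera's world position at draw time
// instead of mutating every sprite whenever the player moves.
Vec2 toScreen(Vec2 world, Vec2 camera) {
    return Vec2{world.x - camera.x, world.y - camera.y};
}
```

At render time you would fill in the destination rect for SDL_RenderCopy from toScreen(sprite.world, camera), so the vector of sprite positions is never touched by camera movement.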
Is there a "standard" method for 3d picking? What do most game companies do? (for accurate picking)
I thought the fastest way was to use the GPU and render every object with a "color index", then use glReadPixels(), but I heard that it's considered slow because of the glFlush()/glFinish() calls.
There's also this ray casting approach, which is nice but isn't accurate because of the spheres/AABBs approximations.
Any question about what is "standard" is probably going to invoke some opinionated responses, but I would suggest that the closest to "standard" here is raycasting.
Take your watertight ray/triangle intersection function and test a ray that is unprojected from your mouse cursor position against the triangles in your scene.
Normally this would be quite slow, requiring linear complexity. So the next step is to accelerate it to something better, like logarithmic time. This is typically achieved with a data structure such as an octree, BVH, K-D tree, or BSP. Sometimes people skip this step and just try to make the ray/tri intersection really fast and really parallel, possibly even using GPGPU.
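The core test itself can be sketched like so: a standard Möller–Trumbore ray/triangle intersection (not the watertight variant mentioned above, and the vector types here are ad hoc):

```cpp
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };
Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Möller–Trumbore ray/triangle intersection: returns the distance t along
// the ray, or nullopt on a miss. orig/dir would come from unprojecting the
// mouse cursor through the inverse view-projection matrix.
std::optional<double> rayTri(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
    const double eps = 1e-9;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < eps) return std::nullopt;  // ray parallel to triangle
    double inv = 1.0 / det;
    Vec3 t = sub(orig, v0);
    double u = dot(t, p) * inv;
    if (u < 0.0 || u > 1.0) return std::nullopt;
    Vec3 q = cross(t, e1);
    double v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return std::nullopt;
    double dist = dot(e2, q) * inv;
    if (dist <= eps) return std::nullopt;  // hit behind the ray origin
    return dist;
}
```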
It takes a lot more work upfront than framebuffer-based solutions, but complex applications tend to go this route probably because:
Portability: it's decoupled from the rendering engine. It doesn't have to be tied to OpenGL or DirectX, e.g., and that improves portability.
Generality: typically the accelerator and associated queries are needed for other things. For example, an FPS game might have players and enemies constantly shooting at each other. Figuring out what projectiles hit what tends to require these kinds of intersection queries occurring constantly, and not just from a uniform viewing angle.
Simplicity: the developers can afford the extra work upfront to simplify things later on.
There's also this ray casting approach, which is nice but isn't
accurate because of the spheres/AABBs approximations.
There should be nothing inaccurate about using AABBs or bounding spheres for acceleration purposes. They exist purely to speed up the tests: cheaper checks that eliminate large batches of triangles in bulk, quickly reducing the number of costlier ray/triangle intersections that need to occur. Normally they should be constructed to encompass the elements in the scene. If you do a ray/AABB intersection first, e.g., and it hits, then test the elements encompassed within the AABB. Any acceleration structure that doesn't give the same results as running without the accelerator would typically be a glitchy one.
For example, a very basic form of acceleration is to just put a bounding box around one mesh element in a scene, like a character; sometimes this basic form, without involving a full-blown accelerator, is useful for very dynamic elements in the scene (to avoid the cost of constantly updating the accelerator). If the ray intersects the character's bounding box, then check all the triangles making up the character. As long as you check the triangles within the AABB afterwards, it is acceleration rather than approximation. Of course, if you only checked the AABB and nothing else, it would be a crude approximation.
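The cheap first-pass check itself is usually the classic slab test; a sketch (zero direction components rely on IEEE infinity semantics, and an origin exactly on a slab boundary can still produce NaN, so a production version needs more care):

```cpp
#include <algorithm>
#include <utility>

struct AABB { double min[3], max[3]; };

// Slab test: intersect the ray against the three pairs of axis-aligned
// planes; the box is hit iff the latest entry distance never exceeds the
// earliest exit distance.
bool rayAABB(const double orig[3], const double dir[3], const AABB& box) {
    double tmin = 0.0, tmax = 1e30;
    for (int i = 0; i < 3; ++i) {
        double inv = 1.0 / dir[i];
        double t0 = (box.min[i] - orig[i]) * inv;
        double t1 = (box.max[i] - orig[i]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false;  // slabs no longer overlap: miss
    }
    return true;
}
```

Only when this returns true do you pay for the per-triangle tests inside the box.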
I'm using SDL2 for my game.
I have an std::vector of SDL_Rects (that is, rectangle objects) that holds the solid platforms (i.e. platforms that the player can't go through) of a level in my game.
When checking for collision, my current code does the following:
for (const SDL_Rect& rect : rects) {
    if (player.collides(rect)) {
        // handle collision
    }
}
Supposing I have a level with numerous (e.g. 500) solid platform rectangles, is it inefficient to go through all of them and check for collision? Is there a better way to do this?
The collides() function only checks for AABB collision (4 simple conditions).
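For reference, the 4-condition check presumably looks like this (a stand-in struct, not SDL's actual header):

```cpp
struct SDL_Rect { int x, y, w, h; };  // stand-in for SDL's struct

// The usual 4-condition AABB overlap test: two rectangles overlap iff
// each one's left edge is left of the other's right edge, and each one's
// top edge is above the other's bottom edge.
bool collides(const SDL_Rect& a, const SDL_Rect& b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}
```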
I think this is reasonable. You have simple shapes and are doing simple collision checking. Consider a more graphically intense game: even there, the character may have a complex skeletal mesh, but collision checking is done against an easy-to-calculate bounding shape, and such games probably have a lot more than 500 things going on at once.
In a more complex game engine, different types may be blocking against some types and non-blocking against others, so not only will it be checking for simple overlap events, it also has to know whether the overlapping objects should interact at all. Or there might be different interactions for different objects.
With games, by and large your bottleneck is rendering and the associated calculations, so unless you know you're in danger of doing something incredibly slow in the game logic (complex pathfinding or AI or something like that), I would concentrate my optimizing efforts on the rendering.
I am currently brainstorming strategies for computing, in a 2D array, the distance of all points from sets of points with a specific attribute. A good example (and one of my likely uses) would be a landscape with pools of water on it. The idea would be to calculate the distance of every point in the landscape from water.
These are the criteria that I would like to abide to and their reasoning:
1) Execution speed is my greatest concern. The terrain is dynamic and the code will need to run semi-continuously: there will be periods of terrain change that require constant updates.
2) Memory overhead is not a major concern of mine. This will be run as the primary application.
3) It must be able to update dynamically. See #1 for the reasons behind this. These updates can be localized.
4) Multi-threading is a possibility. I already make extensive use of multi-threading, as my simulation is very CPU intensive. I'd prefer to avoid it here, since that would speed up development, but I can do it if necessary.
I have come up with the following possible approach and am looking for feedback and/or alternative suggestions.
1) Iterate through the entire array and gather, in a container class, the positions of points that are adjacent to those with the particular property. Assign a value of 1 to these points and 0 to the points with the property.
2) Use those positions to look up the adjacent points that are the next distance away, and place them in a second container class.
3) Repeat this process until no points are left unassigned.
4) Save the list of points exactly one unit away for future updates.
The idea is to basically flow outward from distance 0, and save computation by continually narrowing the list of points in the loop.
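The flow-outward idea described above is a multi-source breadth-first search; a sketch over a boolean grid (4-connected neighbourhood assumed, names made up):

```cpp
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// Multi-source BFS over a grid: cells with the attribute (water) start at
// distance 0 and the frontier flows outward, so each cell is visited once
// and receives its distance to the nearest water cell.
std::vector<std::vector<int>> distanceMap(const std::vector<std::vector<bool>>& water) {
    const int h = (int)water.size(), w = (int)water[0].size();
    const int unset = std::numeric_limits<int>::max();
    std::vector<std::vector<int>> dist(h, std::vector<int>(w, unset));
    std::queue<std::pair<int, int>> frontier;
    // Seed the frontier with every source cell at distance 0.
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (water[y][x]) { dist[y][x] = 0; frontier.push({y, x}); }
    const int dy[] = {-1, 1, 0, 0}, dx[] = {0, 0, -1, 1};
    while (!frontier.empty()) {
        auto [y, x] = frontier.front();
        frontier.pop();
        for (int i = 0; i < 4; ++i) {
            int ny = y + dy[i], nx = x + dx[i];
            if (ny >= 0 && ny < h && nx >= 0 && nx < w && dist[ny][nx] == unset) {
                dist[ny][nx] = dist[y][x] + 1;
                frontier.push({ny, nx});
            }
        }
    }
    return dist;
}
```

For localized terrain updates, you would re-seed the frontier with just the changed cells instead of the whole map.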
1) The only other way of doing this well that I can think of would be the Cartesian distance formula, but your way seems like it would use less CPU time (since the Cartesian way must compute the distance from every point to every special point).
2) Or, if I understand your goal correctly, you could iterate through once, saving pointers to all of the points with your special attribute in a container, and then iterate through one more time, using the distance formula only from each visited point to each of the saved points. Please comment and ask about it if this explanation is unclear. It's late, and I am quite tired.
If you want to run this comparison in the background, you have no choice but to multi-thread the whole program. Whether or not to multi-thread this particular process, however, is up to you. If you go with the second option I provided, I think you will have cut CPU usage enough to forgo multi-threading it. The more I think about it, the more I like the second option.
I am preparing for an interview and came across these questions. Can someone please help with how to solve them?
Imagine you have a 2D system that just tests whether two rectangles are in a collision state or not, and you are supposed to write a program that takes this system's code from its developers and tests it automatically to see whether it works correctly, outputting the percentage of error in the code.
Write out the enqueue and dequeue methods for a fixed-length queue that is shared between two objects. What does "objects" refer to here? Thread synchronization?
Thanks & Regards,
Mousey
For 1) You should test for overlap in the rectangles. The first test I would develop would simply start with the rectangles on top of each other and move them apart slowly until no collision was detected. Error would most likely have to be measured as either the percentage of overlap or the number of pixels overlapping. I'd do both... who knows, they may have developed the algorithm to be accurate to a pixel error or to a percentage of object size, i.e. more accurate for smaller objects. After this initial "quick test" I'd develop a more general case with more variation in the overlap, e.g. one pixel in the top-left corner of one rectangle overlapping one pixel in the bottom-left corner of the other, with varying rectangle sizes. Testing some smart corner cases and some pseudo-random overlapping rectangles seems like a good test design to me.
I always develop simple tests first to get immediate feedback, then move to more general and thorough tests. Obviously, if you put down two rectangles that completely overlap and no collision is reported, something is wrong.
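The pseudo-random part of such a harness can be sketched like this, comparing the system under test against a reference overlap check (the CollideFn signature is an assumption; the real system's interface will differ):

```cpp
#include <cstdlib>

// Signature assumed for the developers' routine under test.
using CollideFn = bool (*)(int ax, int ay, int aw, int ah,
                           int bx, int by, int bw, int bh);

// Reference oracle: the standard AABB overlap test.
bool oracle(int ax, int ay, int aw, int ah, int bx, int by, int bw, int bh) {
    return ax < bx + bw && bx < ax + aw && ay < by + bh && by < ay + ah;
}

// Drive pseudo-random rectangles through both the system under test and
// the oracle, and report the disagreement as an error percentage.
double errorRate(CollideFn sut, int trials, unsigned seed) {
    std::srand(seed);
    int wrong = 0;
    for (int i = 0; i < trials; ++i) {
        int ax = std::rand() % 100, ay = std::rand() % 100;
        int aw = 1 + std::rand() % 50, ah = 1 + std::rand() % 50;
        int bx = std::rand() % 100, by = std::rand() % 100;
        int bw = 1 + std::rand() % 50, bh = 1 + std::rand() % 50;
        if (sut(ax, ay, aw, ah, bx, by, bw, bh) !=
            oracle(ax, ay, aw, ah, bx, by, bw, bh))
            ++wrong;
    }
    return 100.0 * wrong / trials;
}
```

The hand-picked corner cases would then be added as fixed (non-random) entries alongside this loop.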
For 2) Counting semaphores come to mind as a way to solve this problem. You want enqueue to block when the queue is full and dequeue to block when the queue is empty. I'm not sure whether both objects can enqueue and dequeue, but it really doesn't matter if you're using semaphores to track the state of the queue. You also want to obtain an exclusive lock whenever you modify the queue.
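A sketch of such a queue; here a mutex and two condition variables stand in for the pair of counting semaphores (free slots / filled slots), and the element type is fixed to int for brevity:

```cpp
#include <condition_variable>
#include <mutex>
#include <vector>

// Fixed-length blocking queue over a circular buffer. The two wait
// conditions play the role of the counting semaphores: enqueue blocks
// while the queue is full, dequeue blocks while it is empty, and the
// mutex is the exclusive lock guarding the buffer itself.
class BoundedQueue {
public:
    explicit BoundedQueue(size_t capacity) : buf_(capacity) {}

    void enqueue(int v) {
        std::unique_lock<std::mutex> lk(m_);
        notFull_.wait(lk, [&] { return count_ < buf_.size(); });
        buf_[(head_ + count_) % buf_.size()] = v;
        ++count_;
        notEmpty_.notify_one();
    }

    int dequeue() {
        std::unique_lock<std::mutex> lk(m_);
        notEmpty_.wait(lk, [&] { return count_ > 0; });
        int v = buf_[head_];
        head_ = (head_ + 1) % buf_.size();
        --count_;
        notFull_.notify_one();
        return v;
    }

private:
    std::vector<int> buf_;
    size_t head_ = 0, count_ = 0;
    std::mutex m_;
    std::condition_variable notFull_, notEmpty_;
};
```

With C++20 you could replace the condition variables with two std::counting_semaphore objects directly; the blocking behavior is the same either way.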
For the first one, just plug in a known dataset and write the results. Sounds more like a coding assignment than a conceptual test.
For the second, write a circular queue. Generally, something is wrong with your job if you are writing a general data structure rather than using a library.
Unless they mention threads, I wouldn't make a big deal of it. But throwing critical sections around everything couldn't hurt.