Creating an optimized MBR for implementing an R-tree - C++

The simplest way to create a Minimum Bounding Rectangle (MBR) for an R-tree is to keep the horizontal length fixed initially and grow the vertical extent until the required threshold number of points falls inside it; if the region is still under-full at the maximum height, increase the horizontal length to accommodate the points, and repeat the process.
Is there any other approach for creating an optimized MBR tree?
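For reference, here is a minimal C++ sketch of the sweep described above (all names, the step size, and the termination caps are my own assumptions). A well-known alternative for building optimized R-tree leaves in bulk is Sort-Tile-Recursive (STR) packing, which sorts the points by one coordinate, slices them into strips, and tiles each strip, instead of sweeping a growing window.

#include <cstddef>
#include <vector>

struct Point { double x, y; };
struct Rect {
    double x, y, w, h;
    bool contains(const Point& p) const {
        return p.x >= x && p.x < x + w && p.y >= y && p.y < y + h;
    }
};

// Grow a fixed-width rectangle upward from (x0, y0) until it holds at
// least `threshold` points; if it is still under-full at maxHeight,
// start widening instead. The caps ensure termination even when too
// few points are reachable.
Rect growMBR(const std::vector<Point>& pts, double x0, double y0,
             double width, double maxHeight, double maxWidth,
             std::size_t threshold, double step) {
    Rect r{x0, y0, width, 0.0};
    for (;;) {
        std::size_t count = 0;
        for (const Point& p : pts)
            if (r.contains(p)) ++count;
        if (count >= threshold || count == pts.size() || r.w >= maxWidth)
            return r;
        if (r.h < maxHeight) r.h += step;  // grow vertically first
        else                 r.w += step;  // then widen horizontally
    }
}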

Related

Direct2D Drawing with DirectX11: Aligning rectangles on a display graph

I'm working on a graphical application in C++ using Direct2d (DirectX11). The application takes in sensor data and displays the input using rectangles that are placed side-by-side across the x-axis (which represents time). Each rectangle is filled with a linear gradient brush that represents multiple sensor readings at the discrete time interval displayed along the y-axis.
When a reading is acquired, the placement for the starting 'x' position of the next rectangle should be exactly where the last one finished i.e. rect1.right should be rect2.left. The start point for each rect is calculated using the pseudocode below:
//Find the number of rectangles needed to represent the time scale
//(this must be an integer, as we cannot display partial rectangles)
int nNumXRects = fAxisLength/fTimeDivision;
//Calculate the X-axis increment for each rectangle
float fXIncrement = fXAxisLineLength/(float)nNumXRects;
//Get the next x position
rect2.left = rect1.right;
rect2.right = rect2.left + fXIncrement;
My problem is that the graph only appears correctly when the value of fXIncrement is exactly a whole number e.g. 3.0f. This obviously restricts the length of the X-Axis to figures that are multiples of the number of rectangles, times the length of each rectangle. This affects the area available to all the other elements of the application.
If the value of the increment is anything other than a whole number, small black lines appear between the rectangles, which destroys the appearance and makes the data much harder to interpret. I realise why this is happening in principle - we cannot display a fraction of a pixel, for instance - but how should this be done properly so that the rectangles always match up exactly, regardless of the length of the axis? It would seem that Direct2D is perfect for this and should intrinsically cope with mapping fractional values to physical pixels exactly, but I don't know what the correct approach is beyond my current simplistic solution, which is to keep the length of the x-axis fixed (meaning I cannot scale properly and other elements do not have enough space in the horizontal).
Any pointers in the right direction would be much appreciated!
Can't this be fixed by setting the appropriate antialias mode when drawing the rectangles?
pRenderTarget->SetAntialiasMode(D2D1_ANTIALIAS_MODE_ALIASED);
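Another option, independent of the antialias mode (a sketch, not a Direct2D-specific API; all names are hypothetical): never accumulate a rounded increment. Compute every edge from the cumulative fractional position and round it once, so rect i's right edge and rect i+1's left edge are the identical value and no hairline gaps can appear.

#include <cmath>
#include <vector>

struct RectF { float left, right; };

// Compute pixel-snapped rectangle edges so adjacent rectangles always
// share exactly the same boundary, even for fractional widths.
std::vector<RectF> layoutRects(float axisStart, float axisLength, int count) {
    std::vector<RectF> rects(count);
    for (int i = 0; i < count; ++i) {
        // Each edge is rounded from the *cumulative* position, never
        // accumulated from a previously rounded edge, so rounding
        // error cannot build up between neighbors.
        float left  = std::round(axisStart + axisLength * i       / count);
        float right = std::round(axisStart + axisLength * (i + 1) / count);
        rects[i] = {left, right};
    }
    return rects;
}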

Explore a matrix with undefined size

I am trying to explore an environment by modelling it with a 2-dimensional matrix. However, I don't know the size of the matrix beforehand.
Currently, I am using a std::vector<std::vector<T>> structure to abstract the matrix and resize it to a certain size. If my application reaches the limit of the original resize, I do that operation again.
I am exploring this matrix with a combination of DFS and A* algorithms. My explorer agent can move forward, backward, left and right. Every time the explorer reaches a position, he adds the neighbors to the stack of DFS. For example, if he is at position (25, 25), it will add the neighbors (25,24), (25, 26), (24, 25) and (26, 25).
So far, it has worked properly. However, there is a scenario I had not thought of. I was always testing my algorithm with the explorer beginning at a corner of the matrix, which behaves great. But if the explorer starts in the middle of the room, or at any other position that is not a corner, my algorithm does not work properly.
That happens because I start my explorer at position (0,0) in the matrix. Therefore, if the explorer begins in the middle of the room, some positions would never be explored, because they would generate negative indices for my explorer. Does anyone have any idea what I can do to solve this?
One way is to simplify it like you said and force it to start from a corner.
The more complicated way would be to, whenever you encounter an index that WOULD be negative, resize the array and shift all previously generated indexes to force them positive. For performance, do this in large chunks, like simply adding 10 or 100 to everything.
So you add a check for negative numbers when you go to add neighbors, and if any of them are negative you apply the same addition to all indexes you've generated so far, forcing every index positive.
It's just an imaginary coordinate system; the important part is the relative positions. At the end, decide which cell should be (0,0) and subtract its x,y from ALL indexes to normalize the vector back.
Also, as a performance concern: if you start from a large enough positive number, you may be able to reduce or eliminate the need for this coordinate shifting until the very end. For example, if you start from (100,100), you would need to travel 100 nodes before you went negative. If there were fewer than 100 nodes in any direction, you wouldn't have to translate until you've completed mapping.
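A minimal C++ sketch of that idea (names and growth policy are hypothetical): a variation that stores the shift as an offset inside a wrapper class, so a negative coordinate only bumps the offset and prepends storage instead of rewriting every saved index.

#include <vector>

// A grid accepting negative coordinates: incoming (x, y) are shifted
// by a stored offset, so the underlying vectors only ever see
// non-negative indices (growth could be done in larger chunks, as
// suggested above, to reduce reallocations).
template <typename T>
class OffsetGrid {
    std::vector<std::vector<T>> data_;
    int offX_ = 0, offY_ = 0;   // added to every incoming coordinate
public:
    T& at(int x, int y) {
        int ix = x + offX_, iy = y + offY_;
        if (ix < 0) { grow(-ix, 0); ix = 0; }  // shift origin in x
        if (iy < 0) { grow(0, -iy); iy = 0; }  // shift origin in y
        if (ix >= (int)data_.size()) data_.resize(ix + 1);
        if (iy >= (int)data_[ix].size()) data_[ix].resize(iy + 1);
        return data_[ix][iy];
    }
private:
    void grow(int dx, int dy) {  // prepend rows/columns, bump offsets
        offX_ += dx; offY_ += dy;
        if (dx > 0) data_.insert(data_.begin(), dx, std::vector<T>{});
        if (dy > 0)
            for (auto& row : data_)
                row.insert(row.begin(), dy, T{});
    }
};

With this, OffsetGrid<int> g; g.at(-3, 5) = 1; works without the caller ever seeing a negative index.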

C++ Molecular Dynamics Simulation Code

I am working on a project to simulate a hard sphere model of a gas. (Similar to the ideal gas model.)
I have written my entire project, and it is working. To give you an idea of what I have done, there is a loop which does the following: (Pseudo code)
Get_Next_Collision(); // Figure out when the next collision will occur
Step_Time_Forwards(); // Step to time of collision
Process_Collision(); // Process collision between 2 particles
(Repeat)
For a large number of particles (say N particles), O(N*N) checks must be made to figure out when the next collision occurs. It is clearly inefficient to follow the above procedure, because in the vast majority of cases, collisions between pairs of particles are unaffected by the processing of a collision elsewhere. Therefore it is desirable to have some form of priority queue which stores the next event for each particle. (Actually, since a collision involves 2 particles, only half that number of events will be stored, because if A collides with B then B also collides with A, and at exactly the same time.)
I am finding it difficult to write such an event/collision priority queue.
I would like to know if there are any Molecular Dynamics simulators which have been written and which I can go and look at the source code in order to understand how I might implement such a priority queue.
Having done a google search, it is clear to me that there are many MD programs which have been written, however many of them are either vastly too complex or not suitable.
This may be because they have huge functionality, including the ability to produce visualizations or ability to compute the simulation for particles which have interacting forces acting between them, etc.
Some simulators are not suitable because they do calculations for a different model, i.e. something other than the energy-conserving hard-sphere model with elastic collisions - for example, particles interacting via potentials, or non-spherical particles.
I have tried looking at the source code for LAMMPS, but it's vast and I struggle to make any sense of it.
I hope that is enough information about what I am trying to do. If not I can probably add some more info.
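As an aside, a minimal C++ sketch of the kind of event queue being described (hypothetical names): the standard trick in event-driven hard-sphere codes is to leave stale events in the queue and skip them when popped, detecting them via per-particle collision counters, rather than deleting them eagerly.

#include <queue>
#include <vector>

// One predicted collision between particles a and b.
struct Event {
    double time;
    int a, b;            // particle indices
    int countA, countB;  // collision counts of a and b at prediction time
    bool operator>(const Event& e) const { return time > e.time; }
};

struct Particle { /* position, velocity, radius, ... */ int collisions = 0; };

void run(std::vector<Particle>& ps,
         std::priority_queue<Event, std::vector<Event>, std::greater<Event>>& pq) {
    while (!pq.empty()) {
        Event e = pq.top(); pq.pop();
        // Stale: one of the particles collided after this prediction
        // was made, so the prediction is invalid -- just skip it.
        if (e.countA != ps[e.a].collisions || e.countB != ps[e.b].collisions)
            continue;
        // Otherwise: advance all particles to e.time, process the
        // collision, increment both collision counters, and predict
        // new events for a and b only (no full O(N*N) rescan needed).
    }
}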
A basic version of a locality-aware system could look like this:
1. Divide the universe into a cubic grid (where each cube has side A and volume A^3), where each cube is sufficiently large, but sufficiently smaller than the total volume of the system. Each grid cube is further divided into 4 sub-cubes whose particles it can theoretically give to its neighboring cubes (and lend for calculations).
2. Each grid cube registers the particles contained within it and is aware of its neighboring grid cubes' contained particles.
3. Define a particle's observable universe to have a radius of (grid dimension/2). Define timestep = (griddim/2) / max_speed. This postulates that particles from at most four adjacent grid cubes can theoretically interact in that time period.
4. For every particle in every grid cube, run your traditional collision detection algorithm (with mini_timestep < timestep), where each particle is checked for possible collisions with the other particles in its observable universe. Store the collisions in any structure sorted by the time of collision, even just an array.
5. The first collision that happens within a mini_timestep resets your universe (and universe clock) to (last_time + time_to_collide), where time_to_collide < mini_timestep. I suppose that does not differ from your current algorithm. Important note: particles' absolute coordinates are updated, but which grid cube and sub-cube they belong to are not.
6. Repeat step 5 until the large timestep has passed, then update the ownership of particles by each grid cube.
The advantage of this system is that for each time window, we have (assuming uniform distribution of particles) O(universe_particles * grid_size) instead of O(universe_particles * universe_size) checks for collision. In good conditions (depending on universe size, speed and density of particles), you could improve the computation efficiency by orders of magnitude.
I didn't understand how the 'priority queue' approach would work, but I have an alternative approach that may help you. It is what I think @Boyko Perfanov meant by 'make use of locality'.
You can sort the particles into 'buckets', such that you don't have to check each particle against each other ( O(n²) ). This uses the fact that particles can only collide if they are already quite close to each other. Create buckets that represent a small area/volume, and fill in all particles that are currently in the area/volume of the bucket ( O(n) worst case ). Then check all particles inside a bucket against the other particles in the bucket ( O(m*(n/m)²) average case, m = number of buckets ). The buckets need to be overlapping for this to work, or else you could also check the particles from neighboring buckets.
Update: If the particles can travel for a longer distance than the bucket size, an obvious 'solution' is to decrease the time-step. However this will increase the running time of the algorithm again, and it works only if there is a maximum speed.
Another solution applicable even when there is no maximum speed, would be to create an additional 'high velocity' bucket. Since the velocity distribution is usually a gaussian curve, not many particles would have to be placed into that bucket, so the 'bucket approach' would still be more efficient than O(n²).
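For completeness, a minimal C++ sketch of the bucket idea (the container and all names are my own choices): bucket particles into cubic cells and emit candidate pairs only from a cell and its 26 neighbors, instead of all O(n²) pairs.

#include <cmath>
#include <map>
#include <tuple>
#include <utility>
#include <vector>

struct Particle { double x, y, z; /* velocity, radius, ... */ };
using Cell = std::tuple<int, int, int>;

static Cell cellOf(const Particle& p, double size) {
    return { (int)std::floor(p.x / size),
             (int)std::floor(p.y / size),
             (int)std::floor(p.z / size) };
}

// Broad phase: bucket particles into cells whose side is at least the
// particle diameter plus the maximum travel per step, then only test
// pairs within a cell and its neighbors -- roughly O(n) candidate
// pairs when the density is uniform.
std::vector<std::pair<int, int>>
candidatePairs(const std::vector<Particle>& ps, double size) {
    std::map<Cell, std::vector<int>> grid;
    for (int i = 0; i < (int)ps.size(); ++i)
        grid[cellOf(ps[i], size)].push_back(i);

    std::vector<std::pair<int, int>> out;
    for (int i = 0; i < (int)ps.size(); ++i) {
        auto [cx, cy, cz] = cellOf(ps[i], size);
        for (int dx = -1; dx <= 1; ++dx)
            for (int dy = -1; dy <= 1; ++dy)
                for (int dz = -1; dz <= 1; ++dz) {
                    auto it = grid.find(Cell{cx + dx, cy + dy, cz + dz});
                    if (it == grid.end()) continue;
                    for (int j : it->second)
                        if (j > i) out.emplace_back(i, j);  // each pair once
                }
    }
    return out;
}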

'Stable' multi-dimensional scaling algorithm

I have a wireless mesh network of nodes, each of which is capable of reporting its 'distance' to its neighbors, measured in (simplified) signal strength to them. The nodes are geographically in 3D space but, because of radio interference, the distances between nodes need not be geometrically consistent, i.e. they may violate the triangle inequality: given nodes A, B and C, the distance between A and B might be 10, between A and C also 10, yet between B and C 100.
What I want to do is visualize the logical network layout in terms of connectedness of nodes, i.e. include the logical distance between nodes in the visual.
So far my research has shown that multidimensional scaling (MDS) is designed for exactly this sort of thing. Given that my data can be directly expressed as a 2D distance matrix, it's even a simpler form of the more general MDS.
Now, there seem to be many MDS algorithms, see e.g. http://homepage.tudelft.nl/19j49/Matlab_Toolbox_for_Dimensionality_Reduction.html and http://tapkee.lisitsyn.me/ . I need to do this in C++ and I'm hoping I can use a ready-made component, i.e. not have to re-implement an algo from a paper. So, I thought this: https://sites.google.com/site/simpmatrix/ would be the ticket. And it works, but:
The layout is not stable, i.e. every time the algorithm is re-run, the position of the nodes changes (see differences between image 1 and 2 below - this is from having been run twice, without any further changes). This is due to the initialization matrix (which contains the initial location of each node, which the algorithm then iteratively corrects) that is passed to this algorithm - I pass an empty one and then the implementation derives a random one. In general, the layout does approach the layout I expected from the given input data. Furthermore, between different runs, the direction of nodes (clockwise or counterclockwise) can change. See image 3 below.
The 'solution' I thought was obvious was to pass a stable default initialization matrix. But when I put all nodes initially in the same place, they are not moved at all; when I put them on one axis (node 0 at 0,0; node 1 at 1,0; node 2 at 2,0, etc.), they are moved along that axis only (see image 4 below). The relative distances between them are OK, though.
So it seems like this algorithm only changes distance between nodes, but doesn't change their location.
Thanks for reading this far - my questions are (I'd be happy to get just one or a few of them answered as each of them might give me a clue as to what direction to continue in):
Where can I find more information on the properties of each of the many MDS algorithms?
Is there an algorithm that derives the complete location of each node in a network, without having to pass an initial position for each node?
Is there a solid way to estimate the location of each point so that the algorithm can then correctly scale the distance between them? I have no geographic location of each of these nodes, that is the whole point of this exercise.
Are there any algorithms to keep the 'angle' at which the network is derived constant between runs?
If all else fails, my next option is going to be to use the algorithm I mentioned above, increase the number of iterations to keep the variability between runs at around a few pixels (I'd have to experiment with how many iterations that would take), then 'rotate' each node around node 0 to, for example, align nodes 0 and 1 on a horizontal line from left to right; that way, I would 'correct' the location of the points after their relative distances have been determined by the MDS algorithm. I would have to correct for the order of connected nodes (clockwise or counterclockwise) around each node as well. This might become hairy quite quickly.
Obviously I'd prefer a stable algorithmic solution - increasing iterations to smooth out the randomness is not very reliable.
Thanks.
EDIT: I was referred to cs.stackexchange.com and some comments have been made there; for algorithmic suggestions, please see https://cs.stackexchange.com/questions/18439/stable-multi-dimensional-scaling-algorithm .
Image 1 - with random initialization matrix:
Image 2 - after running with same input data, rotated when compared to 1:
Image 3 - same as previous 2, but nodes 1-3 are in another direction:
Image 4 - with the initial layout of the nodes on one line, their position on the y axis isn't changed:
Most scaling algorithms effectively set "springs" between nodes, where the resting length of the spring is the desired length of the edge. They then attempt to minimize the energy of the system of springs. When you initialize all the nodes on top of each other though, the amount of energy released when any one node is moved is the same in every direction. So the gradient of energy with respect to each node's position is zero, so the algorithm leaves the node where it is. Similarly if you start them all in a straight line, the gradient is always along that line, so the nodes are only ever moved along it.
(That's a flawed explanation in many respects, but it works for an intuition)
Try initializing the nodes to lie on the unit circle, on a grid or in any other fashion such that they aren't all co-linear. Assuming the library algorithm's update scheme is deterministic, that should give you reproducible visualizations and avoid degeneracy conditions.
If the library is non-deterministic, either find another library which is deterministic, or open up the source code and replace the randomness generator with a PRNG initialized with a fixed seed. I'd recommend the former option though, as other, more advanced libraries should allow you to set edges you want to "ignore" too.
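A minimal C++ sketch of such an initialization (independent of any particular MDS library; how the positions are handed to the solver depends on its API):

#include <cmath>
#include <utility>
#include <vector>

// Deterministic, non-degenerate starting layout for an MDS solver:
// place the n nodes evenly on the unit circle. No two starting points
// coincide, and for n >= 3 they are not all collinear, so the solver
// has a nonzero gradient everywhere and every run starts identically.
std::vector<std::pair<double, double>> circleInit(int n) {
    const double pi = 3.14159265358979323846;
    std::vector<std::pair<double, double>> pos(n);
    for (int i = 0; i < n; ++i) {
        double angle = 2.0 * pi * i / n;
        pos[i] = { std::cos(angle), std::sin(angle) };
    }
    return pos;
}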
I have read the code of the "SimpleMatrix" MDS library and found that it uses a random permutation matrix to decide the order of points. After fixing the permutation order (just use srand(12345) instead of srand(time(0))), the result for the same data is unchanged.
Obviously there's no exact solution in general to this problem; with just 4 nodes ABCD and distances AB=BC=AC=AD=BD=1 CD=10 you cannot clearly draw a suitable 2D diagram (and not even a 3D one).
What those algorithms do is just placing springs between the nodes and then simulate a repulsion/attraction (depending on if the spring is shorter or longer than prescribed distance) probably also adding spatial friction to avoid resonance and explosion.
To keep a "stable" diagram just build a solution and then only update the distances, re-using the current position from previous solution as starting point. Picking two fixed nodes and aligning them seems a good idea to prevent a slow drift but I'd say that spring forces never end up creating a rotational momentum and thus I'd expect that just scaling and centering the solution should be enough anyway.

Finding largest rectangle in 2D array

I need an algorithm which can parse a 2D array and return the largest continuous rectangle. For reference, look at the image I made demonstrating my question.
Generally you solve these sorts of problems using what are called scan line algorithms. They examine the data one row (or scan line) at a time building up the answer you are looking for, in your case candidate rectangles.
Here's a rough outline of how it would work.
Number all the rows in your image from 0..6, I'll work from the bottom up.
Examining row 0 you have the beginnings of two rectangles (I am assuming you are only interested in the black square). I'll refer to rectangles using (x, y, width, height). The two active rectangles are (1,0,2,1) and (4,0,6,1). You add these to a list of active rectangles. This list is sorted by increasing x coordinate.
You are now done with scan line 0, so you increment your scan line.
Examining row 1 you work along the row seeing if you have any of the following:
new active rectangles
space for existing rectangles to grow
obstacles which split existing rectangles
obstacles which require you to remove a rectangle from the active list
As you work along the row you will see that you have a new active rect (0,1,8,1), that we can grow one of the existing active ones to (1,0,2,2), and that we need to remove the active (4,0,6,1), replacing it with two narrower ones. We need to remember this one: it is the largest we have seen so far. It is replaced with two new active ones: (4,0,4,2) and (9,0,1,2).
So at the end of scan line 1 we have:
Active List: (0,1,8,1), (1,0,2,2), (4,0,4,2), (9, 0, 1, 2)
Biggest so far: (4,0,6,1)
You continue in this manner until you run out of scan lines.
The tricky part is coding up the routine that runs along the scan line updating the active list. If you do it correctly you will consider each pixel only once.
Hope this helps. It is a little tricky to describe.
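For what it's worth, a common concrete realization of the scan-line idea is the "largest rectangle in a histogram" formulation: per row, track how many consecutive open cells sit above each column, then solve that row's histogram with a monotonic stack. A C++ sketch, assuming grid[r][c] is true for open cells:

#include <algorithm>
#include <stack>
#include <vector>

// Largest all-open rectangle in a boolean grid, O(rows * cols).
// heights[c] = number of consecutive open cells ending at the current
// row in column c; each row then becomes a "largest rectangle in a
// histogram" instance.
int largestRectangleArea(const std::vector<std::vector<bool>>& grid) {
    if (grid.empty()) return 0;
    int cols = grid[0].size(), best = 0;
    std::vector<int> heights(cols + 1, 0);  // extra 0-height sentinel column
    for (const auto& row : grid) {
        for (int c = 0; c < cols; ++c)
            heights[c] = row[c] ? heights[c] + 1 : 0;
        std::stack<int> s;  // column indices with non-decreasing heights
        for (int c = 0; c <= cols; ++c) {
            while (!s.empty() && heights[s.top()] > heights[c]) {
                int h = heights[s.top()]; s.pop();
                int w = s.empty() ? c : c - s.top() - 1;
                best = std::max(best, h * w);
            }
            s.push(c);
        }
    }
    return best;
}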
I like a region growing approach for this.
For each open point in ARRAY:
    grow EAST as far as possible
    grow WEST as far as possible
    grow NORTH as far as possible by adding rows
    grow SOUTH as far as possible by adding rows
    save the resulting area for the seed pixel used
After looping through each point in ARRAY, pick the seed pixel with the largest area result
...would be a thorough, but maybe not-the-most-efficient way to go about it.
I suppose you need to answer the philosophical question "Is a line of points a skinny rectangle?" If a line == a thin rectangle, you could optimize further by:
Create a second array of integers called LINES that has the same dimensions as ARRAY
Loop through each point in ARRAY
Determine the longest valid line to the EAST that begins at each point and save its length in the corresponding cell of LINES.
After doing this for each point in ARRAY, loop through LINES
For each point in LINES, determine how many neighbors SOUTH have the same length value or less.
Accept a SOUTHERN neighbor with a smaller length if doing so will increase the area of the rectangle.
The largest rectangle using that seed point is (Number_of_acceptable_southern_neighbors*the_length_of_longest_accepted_line)
As the largest rectangular area for each seed is calculated, check to see if you have a new max value and save the result if you do.
And... you could do this without allocating an array LINES, but I thought using it in my explanation made the description simpler.
And... I think you need to do this same sort of thing with VERTICAL_LINES and EASTERN_NEIGHBORS, or some cases might miss big rectangles that are tall and skinny. So maybe this second algorithm isn't so optimized after all.
Use the first method to check your work. I think Knuth said "...premature optimization is the root of all evil."
HTH,
Perry
ADDENDUM: Several edits later, I think this answer deserves a group upvote.
A straightforward approach would be to loop through all the potential rectangles in the grid, figure out their area, and if it is greater than the current highest area, select it as the highest:
var biggestFound
for each potential rectangle:
    if area(this potential rectangle) > area(biggestFound)
        biggestFound = this potential rectangle
Then you simply need to find the potential rectangles.
for each square in grid:
    recursive loop 1:
        if not occupied:
            grow right until occupied, and return a rectangle
            grow down one and recurse (call loop 1)
This will duplicate a lot of work (for example you will re-evaluate a lot of sub-rectangles), but it should give you an answer.
Edit
An alternate approach might be to start with a single square the size of the grid, and "subtract" occupied squares to end up with a final set of potential rectangles. There might be optimization opportunities here using quadtrees, and in ensuring that you keep split rectangles "in order", top to bottom, left to right, in case you need to re-combine rectangles farther down in the algorithm.
If you are actually starting out with rectangular data (for your "populated grid" set), instead of a loose pixel grid, then you could easily get better perf out of a rectangle/region subtracting algorithm.
I'm not going to post pseudo-code for this because the idea is completely experimental, and I have no idea if the perf will be any better for a loose pixel grid ;)
Windows system "regions" and "dirty rectangles", as well as general "temporal caching" might be good inspiration here for more efficiency. There are also a lot of z-buffer tricks if this is for a graphics algorithm...
Use a dynamic programming approach. Consider a function S(x,y) such that S(x,y) holds the area of the largest rectangle whose lower-right corner cell is (x,y); x is the row coordinate and y is the column coordinate.
For example, in your figure, S(1,1) = 1, S(1,2) = 2, S(2,1) = 2, and S(2,2) = 4. But S(3,1) = 0, because that cell is filled. S(8,5) = 40, which says that the largest rectangle whose lower-right cell is (8,5) has area 40, which happens to be the optimal solution in this example.
You can write a dynamic programming equation for S(x,y) in terms of S(x-1,y), S(x,y-1) and S(x-1,y-1). Using that, you can obtain the values of all S(x,y) in O(mn) time, where m and n are the row and column dimensions of the given table. Once S(x,y) is known for all 1 <= x <= m and 1 <= y <= n, we simply need to find the x and y for which S(x,y) is largest; this step also takes O(mn) time. By keeping additional data, you can also find the side lengths of the largest rectangle.
The overall complexity is O(mn). To understand more about this, read Chapter 15 of Cormen's algorithm book, specifically Section 15.4.
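As a concrete illustration of this DP style, here is the well-known recurrence for the related largest-square problem (the rectangle version needs the additional per-cell data mentioned above). A C++ sketch, assuming open[x][y] is true for empty cells:

#include <algorithm>
#include <vector>

// side[x][y] = side length of the biggest open square whose
// lower-right cell is (x,y):
//   side(x,y) = 1 + min(side(x-1,y), side(x,y-1), side(x-1,y-1))
// when cell (x,y) is open, else 0. Total time is O(m*n).
int largestSquareArea(const std::vector<std::vector<bool>>& open) {
    int m = open.size(), n = m ? (int)open[0].size() : 0, best = 0;
    std::vector<std::vector<int>> side(m, std::vector<int>(n, 0));
    for (int x = 0; x < m; ++x)
        for (int y = 0; y < n; ++y) {
            if (!open[x][y]) continue;
            side[x][y] = (x == 0 || y == 0) ? 1
                : 1 + std::min({side[x-1][y], side[x][y-1], side[x-1][y-1]});
            best = std::max(best, side[x][y] * side[x][y]);
        }
    return best;
}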