Let's say I have a point with a position on a 2D plane.
This point is going to change its position randomly, but that's beside the point; let's assume it has its own velocity and moves on a plane of restricted width and height.
So after a while of movement this point is going to reach the plane's boundary.
But it's not allowed to leave the plane.
So now I can check the point's position each frame to see whether it has reached the bound or not:
if (point.x > bound.xMax) point.x = bound.xMax;
If I want the point to teleport itself to the other side of the plane, I can simply do:
point.x = point.x % bound.xMax;
But then I need to store the point's position as integers.
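For floats there is std::fmod, though it is not free either; a minimal sketch of a float-friendly wrap (note that std::fmod keeps the sign of the dividend, so negative positions need an extra fold):

#include <cmath>

// Hypothetical float version of the wrap-around, using std::fmod instead of
// the integer % operator. std::fmod keeps the sign of x, so a negative
// position has to be folded back into [0, xMax).
float wrap(float x, float xMax) {
    x = std::fmod(x, xMax);
    if (x < 0.0f) x += xMax;
    return x;
}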
For 10 million values on my Core i7 1.6 both solutions have similar timings, 41 ms for the comparison vs. 47 ms for the modulo, so there is no sense in using the modulo function in that case; it's faster to just check the value.
But is there any kind of trick to make it faster?
Iterating the array with multiple threads is not a solution.
Maybe I can scale my bound to some weird value and, for example, discard part of the binary representation of the position value.
And if there is some trick to do it, I think somebody has done it before me :)
Do you know any kind of solution that could help me?
If there is some way you can add information around the plane coordinates, you could very well make a "border" around the plane which contains a value that is identified as "out of boundaries". For example, if you have a 10x10 board, make it 12x12 and use the two extra rows and columns to insert that information.
Now you can do (pseudo-code):
IF point IN board IS "out of boundaries value" THEN
    do your thing
END IF
Note that this method is only an optimization if your point has both x and y values (my assumption in your case).
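A minimal sketch of the border idea, assuming a 10x10 playing area stored in a 12x12 grid (the names board, OUT, and outOfBounds are illustrative):

const int N = 12;   // 10x10 board plus a 1-cell border on each side
const int OUT = -1; // sentinel value meaning "out of boundaries"

int board[N][N];

void initBoard() {
    for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x)
            board[y][x] = (x == 0 || y == 0 || x == N - 1 || y == N - 1) ? OUT : 0;
}

bool outOfBounds(int x, int y) {
    // One lookup replaces four comparisons against the bounds.
    return board[y][x] == OUT;
}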
The problem didn't let me sleep at night.
Given floating-point x and y coordinates in infinite 2D space, and a range variable, I need to get all possible integer coordinates that are in range.
The green blocks are in range, and the red ones are not.
Now, I have an answer, but I'm not sure if it's the best one.
Make a 2D array with all the values in a square around the point (from (-distance, distance) to (distance, -distance)), then iterate through the entire array, each time checking whether the distance is within the range, and if so, insert the coordinate into another array.
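A direct sketch of that square scan, assuming float coordinates (Cell and cellsInRange are illustrative names):

#include <cmath>
#include <vector>

struct Cell { int x, y; };

// Test every integer cell in the bounding square and keep those in range.
std::vector<Cell> cellsInRange(float cx, float cy, float range) {
    std::vector<Cell> result;
    int x0 = (int)std::floor(cx - range), x1 = (int)std::ceil(cx + range);
    int y0 = (int)std::floor(cy - range), y1 = (int)std::ceil(cy + range);
    for (int x = x0; x <= x1; ++x)
        for (int y = y0; y <= y1; ++y) {
            float dx = x - cx, dy = y - cy;
            if (dx * dx + dy * dy <= range * range) // inside the circle?
                result.push_back({x, y});
        }
    return result;
}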
Starting from the center point, go both ways horizontally to the furthest points that are in the range.
For each of these points encountered, calculate the maximum vertical coordinate either way which will still be in the range, and add all the squares along this line.
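A sketch of that scan, reusing the illustrative Cell struct from above: for each column in horizontal range, the vertical extent comes straight from the circle equation, so no per-cell distance test is needed.

#include <cmath>
#include <vector>

std::vector<Cell> cellsInRangeScan(float cx, float cy, float range) {
    std::vector<Cell> result;
    for (int x = (int)std::ceil(cx - range); x <= (int)std::floor(cx + range); ++x) {
        float dx = x - cx;
        float h = std::sqrt(range * range - dx * dx); // max vertical offset
        for (int y = (int)std::ceil(cy - h); y <= (int)std::floor(cy + h); ++y)
            result.push_back({x, y});
    }
    return result;
}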
I was interested in how to draw a line with a specific width (or multiple lines) using a fragment shader. I stumbled on this post which seems to explain it.
The challenge I have is understanding the logic behind it.
A couple of questions:
Our coordinate space in this example is (0.0-1.0,0.0-1.0), correct?
If so, what is the purpose of the "uv" variable? Since thickness is 500, the "uv" variable will be very small, and therefore so will the distances from it to points 1 and 2 (stored in the a and b variables)?
Finally, what is the logic behind the h variable?
I will try to answer all of your questions one by one:
1) Yes, this is in fact correct.
2) It is common in 3D computer graphics to express coordinates (within certain boundaries) as floating-point values between 0 and 1 (or between -1 and 1). First of all, this makes it quite easy to decide whether a given value crosses said boundary or not, and it abstracts away from the concept of a "pixel" as a discrete image unit; furthermore, this common practice can be found pretty much everywhere else (think of device coordinates or texture coordinates).
Don't be afraid that the values you are working with are less than one; in computer graphics you usually deal with floating-point arithmetic, and float types are quite good at expressing real values lying around 1.
3) The formula given for h consists of two parts: the square-root part and the 2/c coefficient. The square-root part should be well known from school math classes: it is Heron's formula for the area of a triangle with sides a, b, c. Multiplying by 2/c extracts the height of that triangle, which is stored in h and is also the distance between the point uv and the "ground line" of the triangle. This distance is then used to decide where uv lies in relation to the line p1-p2.
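Not the original GLSL, but a C++ transcription of that math might look like this (Vec2 and the helper functions are illustrative):

#include <cmath>

struct Vec2 { float x, y; };

float length(Vec2 v) { return std::sqrt(v.x * v.x + v.y * v.y); }
Vec2 sub(Vec2 u, Vec2 v) { return {u.x - v.x, u.y - v.y}; }

// Distance from point p to the line through p1-p2, via Heron's formula.
float lineDistance(Vec2 p, Vec2 p1, Vec2 p2) {
    float a = length(sub(p, p1));   // side from p to p1
    float b = length(sub(p, p2));   // side from p to p2
    float c = length(sub(p2, p1));  // the "ground line" p1-p2
    float s = (a + b + c) * 0.5f;   // semi-perimeter
    float area = std::sqrt(s * (s - a) * (s - b) * (s - c)); // Heron's formula
    return 2.0f * area / c;         // height of the triangle over side c
}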
I'm trying to determine from a large set of positions how to narrow my list down significantly.
Right now I have around 3000 positions (x, y, z) and I want to basically keep the positions that are furthest apart from each other (I don't need to keep 100 positions that are all within a 2 yard radius from each other).
Besides doing a brute force method and literally doing 3000^2 comparisons, does anyone have any ideas how I can narrow this list down further?
I'm a bit confused on how I should approach this from a math perspective.
Well, I can't remember the name for this algorithm, but I'll tell you a fun technique for handling this. I'll assume that there is a semi-random scattering of points in a 3D environment.
Simple Version: Divide and Conquer
Divide your space into a 3D grid of cubes. Each cube will be X yards on each side.
Declare a multi-dimensional array [x,y,z] such that you have an element for each cube in your grid.
Every element of the array should either be a vertex or a reference to a vertex (x,y,z) structure, and each should default to NULL.
Iterate through each vertex in your dataset, determine which cube the vertex falls in.
How? Well, you might assume that the (5.5, 8.2, 9.1) vertex belongs in MyCubes[5,8,9], assuming X (cube-side-length) is of size 1. Note: I just truncated the decimals/floats to determine which cube.
Check to see if that relevant cube is already taken by a vertex. Check: If MyCubes[5,8,9] == NULL then (inject my vertex) else (do nothing, toss it out! spot taken, buddy)
Let's save some memory
This will give you a nicely simplified dataset in one pass, but at the cost of a potentially large amount of memory.
So, how do you do it without using too much memory?
I'd use a hashtable such that my key is the Grid-Cube coordinate (5,8,9) in my sample above.
If MyHashTable.contains({5,8,9}) then DoNothing else InsertCurrentVertex(...)
Now, you will have a one-pass solution with minimal memory usage (no gigantic array with a potentially large number of empty cubes). What is the cost? Well, the programming time to set up your structure/class so that you can perform the .contains action in a HashTable (or your language equivalent).
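A minimal sketch of this hashtable variant, assuming float positions and cube side length X (Vertex, cellKey, and thinOut are illustrative names, not from the original post):

#include <cstdint>
#include <unordered_set>
#include <vector>

struct Vertex { float x, y, z; };

// Pack the truncated cube coordinates into one 64-bit key. Assumes each
// coordinate fits into 21 bits after offsetting; adjust for your data range.
uint64_t cellKey(const Vertex& v, float X) {
    auto cell = [X](float f) {
        return static_cast<uint64_t>(static_cast<int64_t>(f / X) + (1 << 20));
    };
    return (cell(v.x) << 42) | (cell(v.y) << 21) | cell(v.z);
}

std::vector<Vertex> thinOut(const std::vector<Vertex>& points, float X) {
    std::unordered_set<uint64_t> taken;
    std::vector<Vertex> kept;
    for (const Vertex& v : points) {
        // First vertex to claim a cube wins; later ones are tossed out.
        if (taken.insert(cellKey(v, X)).second)
            kept.push_back(v);
    }
    return kept;
}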
Hey, my results are chunky!
That's right, because we took the first result that fit in any cube. On average, we will have achieved X-separation between vertices, but as you can figure out by now, some vertices will still be close to one another (at the edges of the cubes).
So, how do we handle it? Well, let's go back to the array method at the top (memory-intensive!).
Instead of ONLY checking to see if a vertex is already in the cube-in-question, also perform this other check:
If Not ThisCubeIsTaken()
    For Each SurroundingCube
        If Not Is_Your_Vertex_Sufficiently_Far_Away_From_Me()
            exit_loop_and_outer_if_statement()
        End If
    Next
    // OK, we got here: we can add the vertex to the current cube, because the
    // cube is not only available, but the neighbors are far enough away from me
End If
I think you can probably see the beauty of this, as it is really easy to get neighboring cubes if you have a 3D array.
If you do some smoothing like this, you can probably enforce a 'don't add if it's within 0.25X' policy or something; a sketch of the neighbor check follows below. You won't have to be too strict to achieve a noticeable smoothing effect.
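Here is how that check might look in C++, assuming the dense 3D array variant with hypothetical grid dimensions GX/GY/GZ and the Vertex struct from the hashtable sketch (minSeparation would be something like 0.25f * X):

#include <cmath>

constexpr int GX = 64, GY = 64, GZ = 64; // assumed grid dimensions

bool farEnoughFromNeighbors(Vertex* grid[GX][GY][GZ], int cx, int cy, int cz,
                            const Vertex& v, float minSeparation) {
    for (int dx = -1; dx <= 1; ++dx)
    for (int dy = -1; dy <= 1; ++dy)
    for (int dz = -1; dz <= 1; ++dz) {
        int nx = cx + dx, ny = cy + dy, nz = cz + dz;
        if (nx < 0 || ny < 0 || nz < 0 || nx >= GX || ny >= GY || nz >= GZ)
            continue;                 // neighbor cube lies outside the grid
        const Vertex* n = grid[nx][ny][nz];
        if (n == nullptr) continue;   // neighbor cube is empty
        float ddx = n->x - v.x, ddy = n->y - v.y, ddz = n->z - v.z;
        if (std::sqrt(ddx * ddx + ddy * ddy + ddz * ddz) < minSeparation)
            return false;             // too close: toss this vertex out
    }
    return true;
}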
Still too chunky, I want it smooth
In this variation, we will change the qualifying action for whether a vertex is permitted to take residence in a cube.
If TheCube is empty OR ThisVertex is closer to the center of TheCube than the cube's current vertex
    InsertVertex (overwrite any existing vertex in the cube)
End If
Note, we don't have to perform neighbor detection for this one. We just optimize towards the center of each cube.
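A sketch of this variation, reusing the illustrative Vertex struct and cellKey helper from the hashtable sketch above, and assuming non-negative coordinates as in the truncation example:

#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Distance from v to the center of the cube (of side X) it falls in.
float distToCellCenter(const Vertex& v, float X) {
    auto offset = [X](float f) {
        return f - (std::floor(f / X) + 0.5f) * X;
    };
    float dx = offset(v.x), dy = offset(v.y), dz = offset(v.z);
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

std::vector<Vertex> thinOutCentered(const std::vector<Vertex>& points, float X) {
    std::unordered_map<uint64_t, Vertex> best; // one winner per cube
    for (const Vertex& v : points) {
        uint64_t key = cellKey(v, X);
        auto it = best.find(key);
        // Empty cube, or v is closer to the cube's center: (over)write it.
        if (it == best.end() || distToCellCenter(v, X) < distToCellCenter(it->second, X))
            best[key] = v;
    }
    std::vector<Vertex> kept;
    for (const auto& kv : best) kept.push_back(kv.second);
    return kept;
}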
If you like, you can merge this variation with the previous variation.
Cheat Mode
For some people in this situation, you can simply take a 10% random selection of your dataset and that will be a good-enough simplification. However, it will be very chunky with some points very close together. On the bright side, it takes a few minutes max. I don't recommend it unless you are prototyping.
Say we have an object at point A. It wants to find out if it can move to point B. It has limited velocity so it can only move step by step. It casts a ray in the direction it is moving. The ray collides with an obstacle and we detect it. How do we find a way to get past the obstacle safely (avoiding the collision)?
By the way, is there a way to make such a thing work with an object cast, and will it be as fast (or nearly as fast) as a simple ray cast?
Is there a way to find a path that is optimal in some way?
What you're asking about is actually a pathfinding question; more specifically, it's the "any-angle pathfinding problem."
If you can limit the edges of obstacles to a grid, then a popular solution is to just use A* on that grid, then apply path-smoothing. However, there is a (rather recent) algorithm that is both simpler to implement/understand and gives better results than path-smoothing. It's called Theta*.
There is a nice article explaining Theta* (from which I stole the above image) here
If you can't restrict your obstacles to a grid, you'll have to generate a navigation mesh for your map:
There are many ways of doing this, of varying complexity; see for example here, here, or here. A quick google search also turns up plenty of libraries available to do this for you, such as this one or this one.
One approach could be to use a rope, or several ropes, where a rope is made of a few points connected linearly. You can initialize the points in random places in space, but the first point is the initial position of A, and the last point is the final position of A.
Initially, the rope will be a very bad route. In order to optimize, move the points along an energy gradient. In your case the energy function is very simple, i.e. the total length of the rope.
This is not a new idea but is used in computer vision to detect boundaries of objects, although the energy functions there are much more complicated. Yet, have a look at "snakes" to get an idea how to move each point given its two neighbors: http://en.wikipedia.org/wiki/Snake_(computer_vision)
In your case, however, simply deriving a direction for each point from the force exerted by its neighbors will be just fine.
Your problem is a constrained problem where you have to consider collisions. I would really go with #paddy's idea here to use a convex hull, or even just a sphere for each object. In the latter case, don't move a point into a place where its distance to B is less than the radius of A plus the radius of B plus a fudge factor, considering that you don't have an infinite number of points.
A valid solution requires that the longest distance between any two neighbors is smaller than a threshold; otherwise, the connecting line between two points can intersect the obstacle.
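A minimal sketch of that relaxation, assuming a single circular obstacle and fixed endpoints (Pt, relaxRope, and the step factor are illustrative):

#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Each iteration pulls every interior point toward the midpoint of its
// neighbors (the gradient of total rope length), then projects it back onto
// the obstacle boundary if the move placed it inside.
void relaxRope(std::vector<Pt>& rope, Pt center, float radius,
               int iterations, float step = 0.5f) {
    for (int it = 0; it < iterations; ++it) {
        for (size_t i = 1; i + 1 < rope.size(); ++i) { // endpoints stay fixed
            Pt& p = rope[i];
            p.x += step * (0.5f * (rope[i - 1].x + rope[i + 1].x) - p.x);
            p.y += step * (0.5f * (rope[i - 1].y + rope[i + 1].y) - p.y);
            float dx = p.x - center.x, dy = p.y - center.y;
            float d = std::sqrt(dx * dx + dy * dy);
            if (d < radius && d > 0.0f) { // inside the obstacle: push out
                p.x = center.x + dx / d * radius;
                p.y = center.y + dy / d * radius;
            }
        }
    }
}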
How about a simple approach to begin with....
If this is just one object, you could compute the convex hull of all the vertices of the obstacle, plus the start and end points. You would then examine the two directions to get from A to B by traversing the hull clockwise and anti-clockwise. Choose the shortest path.
It's a little more complex because the shape you are moving is not just a point. You can't just blindly move its centre or it will collide. It gets more complicated still as it moves past a vertex, because you have to graze an edge of your object against the vertex of the obstacle.
But hopefully that gives you an idea to ponder over, that's not conceptually difficult to understand.
I have made this image to illustrate my idea for getting the object to point B.
Objects in the image:
The dark blue dot represents the object. The red lines are obstacles. The grey dot and line are the area which can be reached. The purple arrow is the direction of point B. The grey line of the object is the field of visibility.
Understanding the image:
The object will have a certain field of visibility. This is a 2D situation, so I have assumed the field of visibility to be 180 degrees (for the human field of visibility, refer to http://en.wikipedia.org/wiki/Human_eye#Field_of_view). The object will measure distance using the idea of SONAR. With the help of SONAR, the object can find out the area it can reach. Using backtracking, the object can find a way to point B. If there is no way to go, the object must change its field of visibility.
One way to look at this is as a shadow casting problem. Make A the "light source" and then decide whether each point in the scene is in or out of shadow. Those not in shadow are accessible by rays from A. The other areas are not. If you find B is in shadow, then you need only locate the nearest point in the scene that is in light.
If you discretize this problem into "pixels," then the above approach has very well-known solutions in the huge computer graphics literature on shadow rendering. For example, you can use a Shadow Map to paint each pixel with a boolean flag that indicates whether it's in shadow or not. Finding the nearest lit pixel is just a simple search of growing concentric circles around B. Both of these operations can be made extremely fast by exploiting GPU hardware.
One other note: You can treat a general object path finding problem as a point path problem. The secret is to "grow" the obstacles by an appropriate amount using Minkowski Differences. See for example this work on robot path planning.
I am trying to understand the glLookAt function.
It takes 3 triplets. The first is the eye position, the second is the point at which the eye stares. That point will appear in the center of my viewport, right? The third is the 'up' vector. I understand the meaning of the 'up' vector if it is perpendicular to the vector from eye to starepoint. The question is, is it allowed to specify other vectors for up, and, if yes, what's the meaning then?
A link to a graphical, detailed explanation of gluPerspective, glLookAt and glFrustum would also be much appreciated. The official OpenGL documentation appears not to be intended for newbies.
Please note that I understand the meaning of up vector when it is perpendicular to eye->object vector. The question is what is the meaning (if any), if it is not. I can't figure that out with playing with parameters.
It works as long as the up vector is "sufficiently non-parallel" to the look-at vector. What matters is the plane spanned by the up vector and the look-at vector.
If these two become aligned, the up direction will be more or less random (based on the very small bits in your values), as a small adjustment of it will leave it pointing above/left/right of the look-at vector.
If they have a sufficiently large separating angle (in 32-bit floating-point math) it will work well. This angle usually needs to be no more than a degree or so, so they can be very close. But if the difference is down to a few bits, each changed bit will yield a huge directional change.
It comes down to numerical precision.
(I'm sure there are more mathematical terms & definitions for this, but it's been a few years since college.. :)
Final word: if the vectors are parallel, then the up direction is completely undefined and you'll get a degenerate view matrix.
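As a rough illustration of that degeneracy check (the names and the one-degree threshold are assumptions, not from the original answer):

#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
float len(Vec3 v) { return std::sqrt(dot(v, v)); }

// Reject an up vector that is too closely aligned with the look direction:
// |cos| near 1 means (anti-)parallel, which degenerates the view matrix.
bool upVectorIsUsable(Vec3 look, Vec3 up, float minAngleDeg = 1.0f) {
    float c = dot(look, up) / (len(look) * len(up));
    return std::fabs(c) < std::cos(minAngleDeg * 3.14159265f / 180.0f);
}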
The up vector lets OpenGL know which way up your camera is.
Think of the real world: if you have two points in space, you can draw a line from one to the other. You can then align an object, such as a camera, so that it points from one to the other. But you have no way of knowing how your object should be rotated around the axis that this line makes. The up vector dictates which way up the camera should be standing.
Most of the time, your up vector will be (0,1,0), which means that the camera will be oriented just like you would normally hold a camera, or as if you held your head up straight. If you set your up vector to (1,0,0), it would be like holding your head on its side, so the line from the base of your head to the top of your head points to the right. You are still looking from the same point (more or less) to the same point, but your 'up' has changed. An up vector of (0,-1,0) would make the camera upside down, as if you were doing a handstand.
One way you could think about this: your arm is a vector from the camera position (your shoulder) to the camera's look-at point (your index finger); if you stick your thumb out, that is your up vector.
This picture may help you http://images.gamedev.net/features/programming/oglch3excerpt/03fig11.jpg
EDIT
Perpendicular or not.
I see what you are asking now. For example, say you are at (10,10,10) looking at (0,0,0); the resulting vector for your looking direction is (-10,-10,-10). A vector exactly perpendicular to this does not matter for the purposes of glLookAt's up vector. If you want the view oriented so that you are like a normal person just looking down a bit, just set your up vector to (0,1,0). In fact, unless you want to be able to roll the camera, you don't need it to be anything else.
On this website you have a great tutorial:
http://www.xmission.com/~nate/tutors.html
http://users.polytech.unice.fr/~buffa/cours/synthese_image/DOCS/www.xmission.com/Nate/tutors.html
Download the executables and you can change the values of the parameters to the glLookAt function and see what happens "in real-time".
The up vector does not need to be perpendicular to the looking direction. As long as it is not parallel (or very close to being parallel) to the looking direction, you should be fine.
Given a view-plane normal N (the looking direction) and an up vector UV (which mustn't be parallel to N), you calculate the actual up vector used in the camera transform by first computing the vector V = UV - (N · UV)N, which strips from UV its component along N (assuming N is normalized). V, once normalized, is the actual up vector, and the side vector U = N x V, perpendicular to both N and V, completes the camera's basis.
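A minimal sketch of that orthogonalization, assuming N is already normalized (Vec3 and the helper functions are illustrative):

#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 v) { return scale(v, 1.0f / std::sqrt(dot(v, v))); }

// Build the camera's up and side vectors from the look direction N and a
// rough up vector UV (not parallel to N).
void cameraBasis(Vec3 N, Vec3 UV, Vec3& up, Vec3& side) {
    Vec3 V = sub(UV, scale(N, dot(N, UV))); // UV minus its component along N
    up   = normalize(V);                    // the actual up vector
    side = cross(N, up);                    // perpendicular to both N and up
}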
Yes. It is arbitrary, which lets you make the camera "roll", i.e. appear as if the scene is rotating around the eye axis.