Collision Detection Techniques to Generate Sloping Planes between Two Meshes - C++

I've asked this question at both the game SE and the math SE, but the responses were not encouraging. So I'm asking again, with a bit more of a twist.
I have a terrain, which is defined by a mesh. There are also a lot of other polygonal faces scattered throughout the terrain; they can be located above, below, or cutting through the terrain. You can think of those faces as platforms.
A screenshot below should clarify what I mean. Despite looking smooth, the meshes actually consist of many small elements (more than 10k of them) combined together, giving a false appearance of smoothness. The obviously disconnected areas are platforms.
My question is: how can I generate the planes that connect the platforms to other platforms or to the terrain? Here are the rules for generating the series of sloped planes:
They can go up or down, depending on which direction will make them hit the terrain or a neighbouring platform first.
The generation rule is that a plane starts at the edge of a platform, runs at 45 degrees upward/downward with respect to the z axis for a certain length, then runs at 0 degrees with respect to the z axis for another certain length, and repeats. The result is a series of piecewise planes that continues until some part of it hits an obstacle.
The algorithm should be focused on plane generation and plane generation alone; I don't want it tied to any renderer (e.g., OpenGL and whatnot). I can render it myself.
In short, I want to generate a set of "planes" that together form something like a flight of stairs or a piece of corrugated paper, leading up or down from one point in space until it makes contact with a given mesh.
Sounds like a straightforward collision detection problem of the kind that is common in physics simulations, right? Are there any game/physics libraries that I can use to attack this problem?
Note that since I am not doing any animation, frame-by-frame updates and all that are not relevant to me; this is why I am hesitant to use an existing game physics library like Bullet. What is relevant to me is how to use existing libraries to generate those connecting planes according to the above rules.
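To make the rules concrete, here is a minimal, renderer-independent C++ sketch of the generation loop I have in mind. The raycast function is only a stub standing in for whatever mesh intersection query a library would provide (e.g. a BVH-backed ray/triangle test); here it just hits a flat terrain plane at z = 0 so the sketch is self-contained, and the horizontal direction is arbitrarily taken along x.

    #include <cmath>
    #include <optional>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Stub standing in for a real mesh intersection query (e.g. a BVH-backed
    // ray/triangle test from a collision library); here it just intersects a
    // flat terrain plane at z = 0.
    std::optional<float> raycast(Vec3 origin, Vec3 dir, float maxDist) {
        if (std::abs(dir.z) < 1e-6f) return std::nullopt;  // parallel to plane
        float t = -origin.z / dir.z;
        if (t > 0.0f && t <= maxDist) return t;
        return std::nullopt;
    }

    // Generate the profile polyline (45 degrees, flat, 45 degrees, ...) from a
    // platform-edge point until the mesh is hit; sweeping consecutive point
    // pairs along the platform edge then yields the quads of the "staircase".
    std::vector<Vec3> generateProfile(Vec3 start, float runLength, bool goingUp) {
        std::vector<Vec3> pts{start};
        const float dz = goingUp ? 1.0f : -1.0f;
        bool sloped = true;  // alternate 45-degree and flat runs
        for (int i = 0; i < 1000; ++i) {  // hard cap in case nothing is ever hit
            Vec3 p = pts.back();
            // 45 degrees w.r.t. the z axis: equal horizontal and vertical steps.
            Vec3 dir = sloped ? Vec3{0.7071f, 0.0f, 0.7071f * dz}
                              : Vec3{1.0f, 0.0f, 0.0f};
            std::optional<float> hit = raycast(p, dir, runLength);
            float len = hit ? *hit : runLength;
            pts.push_back({p.x + dir.x * len, p.y, p.z + dir.z * len});
            if (hit) break;  // obstacle reached: stop
            sloped = !sloped;
        }
        return pts;
    }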

Related

How to detect the coordinates of certain points on an image

I'm using the ORB algorithm to detect and get the coordinates of the crossings of a rope shown in the image; a crossing is represented by the red dot. I want to detect the coordinates of the four points surrounding the crossing, represented by the blue dots. All four points are the same distance from the red spot.
Any idea how to get their coordinates by making use of the red spot's coordinates?
Thank you in advance.
Although you're using ORB, you're still going to need an algorithm to segment the rope from the background, or at least some technique to identify image chunks that belong to the rope and that are equidistant from the red dot. There are a number of options to explore.
It's important to consider your lighting & imaging as separate problems to be solved if this is meant to be a real-world application. This looks a bit like a problem for a class rather than for an application you'll sell and support, but you should still consider lighting:
Will your algorithm(s) still work when light level is reduced?
How will detection be affected by changes in camera pose relative to the surface where the rope will be located?
If you'll be detecting "black" rope, will the algorithm also be required to detect rope of different colors? dirty rope? rope on different backgrounds?
Since your object of interest is rope, you have to consider a class of algorithms suitable for detecting non-rigid objects. Always consider the simplest solution first!
Connected Components
Connected components labeling is a traditional image processing algorithm and still suitable as the starting point for many applications. The last I knew, this was implemented in OpenCV as findContours(). This can also be called "blob finding" or some variant thereof.
https://en.wikipedia.org/wiki/Connected-component_labeling
https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=findcontours
Depending on lighting, you may have to take different steps to binarize the image before running connected components. As a start, convert the color image to grayscale, which will simplify the task significantly.
Try a manual threshold since you can quickly test a number of values to see the effect. Don't be too discouraged if the binarization isn't quite right--this can often be fixed with preprocessing.
If a range of manual thresholds works (e.g. 52 - 76 in an 8-bit grayscale range), then use an algorithm that will automatically calculate the threshold for you: Otsu, entropy-based methods, etc., will all offer comparable performance. Whichever technique works best, the code/algorithm can be tweaked further to optimize for your rope application.
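For instance, here is a minimal OpenCV (C++) sketch of the grayscale / Otsu / connected-components pipeline described above; the file name is hypothetical, and the Otsu step may need the preprocessing mentioned earlier:

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
        cv::Mat img = cv::imread("rope.png");   // hypothetical input image
        cv::Mat gray, binary;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
        // Otsu picks the threshold automatically; INV makes the dark rope white.
        cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);

        // Connected components via contours; the largest one is likely the rope.
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        return 0;
    }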
If thresholding and binarization don't work--which for your rope application seems unlikely, at least as you've presented it--then switch to thinking in terms of gradient-based (edge-based, energy-based) techniques.
But assuming you can separate the rope from the background, you're still going to need a method to start at the red dot [within the rope] and move equal distances out to the blue points. More about that later after a discussion of other rope segmentation methods.
Note: connected components labeling can work in scenarios beyond just binarizing black & white images. If you can create a texture field or some other 2D representation of the image that makes it possible to distinguish the black rope from the relatively light background, you may be able to use a connected components algorithm. (Finding a "more complicated" or "more modern" algorithm isn't necessarily going to be the right approach.)
In a binarized image, blobs can be nested: on a white background you can have several black blobs, inside of one or more of which are white blobs, inside of which are black blobs, etc. An earlier version of OpenCV handled this reasonably well. (OpenCV is a nice starting point, and a touchpoint for many, but for a number of reasons it doesn't always compare favorably to other open source and commercial packages; popularity notwithstanding, OpenCV has some issues.)
Once you have a "blob" (a 4-connected region of pixels) in a 2D digital image, you can treat the blob as an object, at which point you have a number of options:
Edge tracing: trace around the inside and outside edges of the blob. From what I recall, OpenCV does (or at least should) have some relatively straightforward method to get the edges.
Split the blob into component blobs, each of which can be treated separately
Convert the blob to a polygon
...
A connected components algorithm should be high on the list of techniques to try if you have a non-rigid object.
Boolean Operations
Once you have the rope as a connected component (and possibly even without this), you can use boolean image operations to find the spots at the blue dots in your image:
Create a circular region in data, or even in the image
Find the intersection of the circle (an annulus) and the black region representing the rope. Using your original image, you should have four regions.
Find the center point of the intersection regions.
You could even try this without using connected components at all, but using connected components as part of the solution could make it more robust.
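A hedged sketch of those steps in OpenCV, reusing the binary rope mask from the earlier sketch; redDot and the ring radius (your distance D) are assumed values:

    cv::Point redDot(320, 240);                  // assumed crossing location
    int radius = 60;                             // assumed distance D in pixels
    cv::Mat ring = cv::Mat::zeros(binary.size(), CV_8UC1);
    cv::circle(ring, redDot, radius, cv::Scalar(255), 3);  // 3 px thick annulus

    cv::Mat intersection;
    cv::bitwise_and(binary, ring, intersection); // rope pixels lying on the ring

    std::vector<std::vector<cv::Point>> pieces;
    cv::findContours(intersection, pieces, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    for (const auto& piece : pieces) {
        cv::Moments m = cv::moments(piece);
        if (m.m00 > 0) {
            // Centroid of one intersection region ~ one of the four blue dots.
            cv::Point2f center(m.m10 / m.m00, m.m01 / m.m00);
        }
    }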
Polygon Simplification
If you have a blob, which in your application would be a connected set of black pixels representing the rope on the floor, then you can consider converting this blob to one or more polygons for further processing. There are advantages to working with polygons.
If you consider only the outside boundary of the rope, then you can see that the set of pixels defining the boundary represents a polygon. It's a polygon with a lot of points, and not a convex polygon, but a polygon nonetheless.
To simplify the polygon, you can use an algorithm such as Ramer-Douglas-Peucker:
https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm
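In OpenCV this is available as approxPolyDP; a short sketch, assuming contours[0] from the earlier sketch is the rope's outer boundary:

    std::vector<cv::Point> simplified;
    double epsilon = 2.0;  // max deviation in pixels; tune per image
    cv::approxPolyDP(contours[0], simplified, epsilon, /*closed=*/true);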
Once you have a simplified polygon, you can try a few techniques to derive useful data from it:
Angle Bisector Network
Triangulation (e.g. using ear clipping)
Triangulation is typically dependent on initial conditions, so slightly different polygons (that is, slightly different outputs of the rope -> blob -> polygon -> simplified polygon chain) can yield quite different triangulations. Still, in your application it might be useful to triangulate the dark rope region, and then connect the center of one triangle to the center of the next nearest triangle. You'll also have to deal with crossings, such as the rope overlap. Ultimately this can yield a "skeletonization" of the rope. Speaking of which...
Skeletonization
If the rope problem was posed to you as a class exercise, then it may have been a prompt to try skeletonization. You can read about it here:
https://en.wikipedia.org/wiki/Topological_skeleton
Skeletonization and thinning have their own problems to solve, but you should dig into them a bit and see those problems for yourself.
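As one hedged illustration, the classic morphological skeleton can be built from plain OpenCV operations on the binary rope mask from earlier (cv::ximgproc::thinning in the contrib modules is an alternative):

    cv::Mat skel = cv::Mat::zeros(binary.size(), CV_8UC1);
    cv::Mat elem = cv::getStructuringElement(cv::MORPH_CROSS, cv::Size(3, 3));
    cv::Mat work = binary.clone(), eroded, opened, boundary;
    while (cv::countNonZero(work) > 0) {
        cv::erode(work, eroded, elem);
        cv::dilate(eroded, opened, elem);      // morphological opening
        cv::subtract(work, opened, boundary);  // pixels the opening removed
        cv::bitwise_or(skel, boundary, skel);  // accumulate into the skeleton
        work = eroded.clone();
    }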
The Medial Axis Transform (MAT) is a related concept. Long story there.
Edge-based techniques
There are a number of techniques to generate "edge images" based on edge strength, energy, entropy, etc. Making them robust takes a little effort. If you've had academic training in image processing you've likely heard of Harris, Sobel, Canny, and similar processing methods--none are magic bullets, but they're simple and dependable and will yield the data you need.
An "edge image" consists of pixels representing the image gradient strength [and sometimes the gradient direction]. People may call this edge image something else, but it's the concept that matters.
What you then do with the edge data is another subject altogether. But one reason to think of edge images (or at least object borders) is that it reduces the amount of information your algorithm(s) will need to process.
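For example, a short OpenCV sketch producing both an edge map and the gradient strength/direction mentioned above (the Canny thresholds are assumptions to tune per image):

    cv::Mat edges, gradX, gradY, magnitude, angle;
    cv::Canny(gray, edges, 50, 150);             // binary edge map
    cv::Sobel(gray, gradX, CV_32F, 1, 0);        // horizontal gradient
    cv::Sobel(gray, gradY, CV_32F, 0, 1);        // vertical gradient
    cv::cartToPolar(gradX, gradY, magnitude, angle);  // strength + direction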
Mean Shift (and related)
To get back to segmentation mentioned in the section on connected components, there are other methods for segmenting figures from a background: K-means, mean shift, and so on. You probably won't need any of those, but they're neat and worth studying.
Stroke Width Transform
This is an intriguing technique used to extract text from noisy backgrounds. Although it's intended for OCR, it could work for rope since the rope width is relatively constant, the rope shape varies, there are crossings, etc.
In short, and simplifying quite a bit, you can think of SWT as a means to find "strokes" (thick lines) by finding gradients antiparallel to each other. On either side of a stroke (or line), the edge gradient points normal to the object edge. The normal on one side of the stroke points opposite the direction of the normal on the other side of the stroke. By filtering for pixel-gradient pairs within a certain distance of each other, you can isolate certain strokes--even automatically. For your example the collection of points representing edge pairs for the rope would be much more common than other point pairs.
Non-Rigid Matching
There are techniques for matching non-rigid shapes, but they would not be worth exploring yet. If any of the techniques I mentioned above is unfamiliar to you, explore some of those first before you try any fancier algorithms.
CNNs, machine learning, etc.
Just don't even think of these methods as a starting point.
Other Considerations
If this were an application for industry, security, or whatnot, you'd have to determine how well your image processing worked under all environmental considerations. That's not an easy task, and can make all the difference between a setup that "works" in the lab and a setup that actually works in practice.
I hope that's of some help. Feel free to post a reply if I've confused more than helped, or if you want to explore some idea in more detail. Though I tried to touch on some common(ish) techniques, I didn't mention all the different ways of addressing this problem.
And briefly: once you have a skeleton, point network, or whatever representing a reduced data set for the rope and the red dot (the identified feature), a few techniques to find the items at the blue dots:
For a skeleton, trace along each "branch" of the rope outward from the knot until the geodesic distance or straight-line 2D distance is the distance D that you want.
To use geometry, create a circle of width 1 - 2 pixels. Find the intersection of that circle and the rope. Find the center point of the intersections of circle and rope. (Also described above.)
Good luck!

Mesh simplification of a grid-like structure

I'm working on a 3D building app. The building is done on a 3D grid (like a Rubik's Cube), and each cell of the grid is either a solid cube or a 45 degree slope. To illustrate, here's a picture of a chamfered cube I pulled off of Google Images:
Ignore the image to the right; the focus is the one on the left. Currently, in the building phase, I have each face of each cell drawn separately. When it comes to exporting, though, I'd like to simplify. So in the above cube, I'd like the up/down/left/right/back/front faces to be composed of a single quad each (two triangles), and each edge to be reduced from two quads to a single quad.
What I've been trying to do most recently is the following:
Iterate through the shape layer by layer, from all directions, and for each layer figure out a good simplification (remove overlapping edges to create a single polygon, then split the polygon to avoid holes, then use ear clipping to triangulate).
I'm clearly overcomplicating things (at least I hope I am). If I've got a list of vertices, normals, and indices (currently with lots of duplicate vertices), is there some tidy way to simplify? The limitation is that indices can't be shared between faces (because I need the normals pointing in different directions), but otherwise I don't mind if it's not the fastest or most optimal solution; I'd rather it be easy to implement and maintain.
EDIT: Just to further clarify, I've already performed hidden face removal, that's not an issue. And secondly, it's of utmost importance that there is no degradation in quality, only simplification of the faces themselves (I need to retain the sharp edges).
Thanks goes to Roger Rowland for the great tips! If anyone else stumbles upon this question, here's a short summary of what I did:
First thing to tackle: ensure that the mesh you are attempting to simplify is a manifold mesh! This is a requirement for traversing halfedge data structures. One instance where I had issues with this was overlapping quads and triangles; I had initially resolved to leave the quads whole, rather than splitting them into triangles, because it was easier, but that resulted in edges that broke the halfedge mesh.
Once the mesh is manifold, create a halfedge mesh out of the vertices and faces.
With that done, decimate the mesh. I did it via edge collapsing, determining which edges to collapse through normal deviation (in my case, if the resulting faces from the collapse had normals not equal to their original values, then the collapse was not performed).
I did this via my own implementation at first, but I started running into frustrating bugs, and thus opted to use OpenMesh instead (it's very easy to get started with).
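For anyone following the same route, here is a hedged sketch of normal-deviation decimation with OpenMesh (check the OpenMesh documentation for the exact module names and signatures; this is roughly how it looks):

    #include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>
    #include <OpenMesh/Tools/Decimater/DecimaterT.hh>
    #include <OpenMesh/Tools/Decimater/ModNormalDeviationT.hh>

    typedef OpenMesh::TriMesh_ArrayKernelT<> Mesh;

    void simplify(Mesh& mesh) {
        mesh.request_face_normals();   // the normal-deviation module needs these
        mesh.update_face_normals();

        OpenMesh::Decimater::DecimaterT<Mesh> decimater(mesh);
        OpenMesh::Decimater::ModNormalDeviationT<Mesh>::Handle hNormal;
        decimater.add(hNormal);
        // Max allowed deviation in degrees; near zero keeps sharp edges intact.
        decimater.module(hNormal).set_normal_deviation(1.0f);
        decimater.initialize();
        decimater.decimate();          // collapse edges until the module blocks
        mesh.garbage_collection();     // actually remove the deleted elements
    }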
There's still one issue I have yet to resolve: if there are two cubes diagonally to one another, touching, the result is an edge with four faces connected to it: a complex edge! I suspect it'd be trivial to iterate through the edges checking for the number of faces connected, and then resolving by duplicating the appropriate vertices. But with that said, it's not something I'm going to invest the time in fixing, unless it becomes a critical issue later on.
I am giving a theoretical answer.
For the figure on the left: find all edge-sharing triangles with the same normal (same x, y, z components; use unit normals, since a normal's direction is unaffected by positive scaling of the vector). Merge them, then retriangulate the merged region (favoring a good aspect ratio) and you will get the solution you want.
Now I am proposing another easy, feasible way to do mesh simplification.
Take the NORMALS and divide each by its magnitude (the square root of the sum of the squares of its components), which gives a unit normal vector. Then take adjacent triangles and compute the DOT PRODUCT between their unit normals (multiply the x, y, z components pairwise and add). This gives the COSINE of the angle between the normals, i.e., between the triangles. Pick a range (like 0.99-1), take all the triangles adjacent to the reference triangle whose cosine falls in this range, merge them, and retriangulate. We can safely ignore small-area triangles pointing in weird directions. A small helper expressing this test is sketched below.
There is also another proposal for an even simpler mesh reduction, for shapes like your left figure or building shapes in general: define a fixed number of face normals (here 6 + 8 = 14), classify all faces according to which of these directions they are closest to (by dot product), then merge and retriangulate.
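A minimal helper for the cosine test above (names are illustrative):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 normalize(Vec3 v) {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return {v.x / len, v.y / len, v.z / len};
    }

    // Cosine of the angle between two face normals.
    float normalCosine(Vec3 n1, Vec3 n2) {
        Vec3 a = normalize(n1), b = normalize(n2);
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Merge candidates: adjacent faces with normalCosine(...) in [0.99, 1].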
Google "mesh simplification". You'll find that this problem is a huge one and is heavily researched. Take a look at these introductory resources: link (p.11 starts the good stuff) and link. CGAL has a good discussion, as well: link.
Once familiar with the issues, you'll have some decisions to make when applying simplification to your problem. How fast should the simplification be? How important is accuracy? (Iterative vertex clustering is a quick and dirty approach, but its results can be arbitrarily ugly.) Can you rely on a 3rd party library? (e.g. CGAL? GTS doesn't appear active any longer, but there are others.)

Collision Detection between quads OpenGL

I am making a simple 3D OpenGL game. At the moment I have four bounding walls, a random distribution of internal walls and a simple quad cube for my player.
I want to set up collision detection between the player and all of the walls. This is easy with the bounding walls, as I can just check whether the x or z coordinate is less than or greater than a value. The problem is with the interior walls. I have a glGenList which holds small rectangular wall segments; at initial setup I randomly generate an array of x,z coordinates and translate these wall segments to those positions in the draw scene. I have also added a degree of rotation, either 45 or 90, which complicates the collision detection.
Could anyone assist me with how I might go about detecting collisions here? I have the coordinates of each wall section, the size of each wall section, and also the coordinates of the player.
Would I be looking at a bounding box around the player and walls, or is there a better alternative?
I think your question is largely about detecting collision with a wall at an angle, which is essentially the same as "detecting if a point matches a line", which there is an answer for how you do here:
How can I tell if a point belongs to a certain line?
(The code may be C#, but the math behind it applies in any language.) You just have to replace the Y in those formulas with Z, since Y appears not to be a factor in your current design.
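As a concrete sketch of that test in the X/Z plane, using the segment-distance variant (playerRadius is an assumed collision radius; names are illustrative):

    #include <algorithm>
    #include <cmath>

    struct P2 { float x, z; };

    // Distance from the player position to the wall segment (a -> b) in the
    // X/Z plane; collision if it falls below the player's radius.
    float distToSegment(P2 p, P2 a, P2 b) {
        float dx = b.x - a.x, dz = b.z - a.z;
        float lenSq = dx * dx + dz * dz;  // assumes a != b (non-degenerate wall)
        // Parameter of p's projection onto the segment, clamped to [0, 1].
        float t = std::clamp(((p.x - a.x) * dx + (p.z - a.z) * dz) / lenSq,
                             0.0f, 1.0f);
        float cx = a.x + t * dx, cz = a.z + t * dz;
        return std::sqrt((p.x - cx) * (p.x - cx) + (p.z - cz) * (p.z - cz));
    }

    // Usage: if (distToSegment(player, wallStart, wallEnd) < playerRadius) { ... }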
There have been MANY articles and even books written on how to do "good" collision detection. Part of this, of course, comes down to whether you want very accurate or very fast code; for "perfect" simulations, you may sacrifice speed for accuracy. In most games, if the player's body "dents" the wall just a little because the player has gone slightly past the wall intersection, that's perhaps acceptable.
It is also useful to "partition the space". The common way for this is "Binary space partitioning", which is nicely described and illustrated here:
http://en.wikipedia.org/wiki/Binary_space_partition
Books on game programming should cover the basic principles of collision detection. There are also PLENTY of articles on the web about it, including an entry in Wikipedia: http://en.wikipedia.org/wiki/Collision_detection
Short of making a rigid body physics engine, you could use point-to-plane distance to see if any of the cube's corner points are within some small radius of the plane (exactly 0.0f is too strict; give the points a small epsilon radius instead). You will need to store a normalized 3D vector (a vector of length 1.0f) to represent the normal of the plane. If the dot product between the plane normal and the vector from a point on the plane to the corner is less than the radius, you have a collision. After that, you can take the cube's speed (the length of its velocity vector), multiply it by 0.7f for some energy absorption, and store this as the cube's new speed. Then reflect the cube's normalized velocity vector over the normal of the plane and multiply it by the previously calculated new speed.
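A minimal sketch of that check and response (the plane is given by a point on it and a unit normal; names are illustrative):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 operator*(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
    float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Collision if the corner's signed distance to the plane is below `radius`.
    bool pointHitsPlane(Vec3 corner, Vec3 planePoint, Vec3 unitNormal, float radius) {
        return dot(corner - planePoint, unitNormal) < radius;
    }

    // Reflect the velocity over the plane normal and absorb some energy.
    Vec3 bounce(Vec3 velocity, Vec3 unitNormal, float absorption = 0.7f) {
        Vec3 reflected = velocity - unitNormal * (2.0f * dot(velocity, unitNormal));
        return reflected * absorption;
    }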
If you really want to get into game physics, grab a pull from this guy's GitHub. I've used his book for a physics-for-games class. There are some mistakes in the book, so be sure to get all the code samples from GitHub. He goes through making a mass-aggregate physics engine and then a rigid-body one. I would also brush up on matrices and tensors.

Cocos2d - Moving objects in a curved path with different velocities

I'm trying to develop a game where cars move along roads and stop according to the signals of the traffic lights. They've got different velocities. Sometimes cars need to decelerate in order not to hit the leading car. They need to stop at red lights. They have to make turns, etc. This is all relatively easy when working with straight intersecting roads. But how can I move a car, or several cars, along a curved path? So far it was easy because I was just using either the x or the y of a car's position. But this time that's not the case; both coordinates seem necessary for moving it ahead. With straight roads I can just give a car an arbitrary speed and it will move along the x or y axis at that speed. But how can I determine the velocity if both coordinates have to be taken into account? Accelerations and decelerations are also a mystery to me in this case. Thanks in advance.
Although this is about moving a train over a freeform track, the same issues and principles apply to cars moving across freeform roads. Actually, cars may be easier because they don't need to stick to their track 100% accurately.
In short: it's not easy, but doable. How hard it is going to be depends on how realistic you want your cars to look and finding corners to cut.
In your case the cars should simply follow a path (a series of points). Since CCActions are bad for frequent direction/velocity changes, you should use your own system for detecting path points and heading to the next one. Movement along a Bezier curve is not going to move your cars at constant speed, which rules out the CCBezier* actions.
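As a hedged sketch (plain C++, independent of cocos2d's action system), per-frame waypoint following at constant speed could look like this; all names are illustrative:

    #include <cmath>
    #include <vector>

    struct Point { float x, y; };

    struct Car {
        std::vector<Point> path;  // sampled points along the road
        std::size_t target = 0;   // index of the next waypoint
        Point pos{0.0f, 0.0f};
        float speed = 80.0f;      // units per second; vary per car
    };

    // Advance the car by speed * dt along the polyline, crossing as many
    // waypoints as the distance budget allows.
    void updateCar(Car& car, float dt) {
        float remaining = car.speed * dt;
        while (remaining > 0.0f && car.target < car.path.size()) {
            Point t = car.path[car.target];
            float dx = t.x - car.pos.x, dy = t.y - car.pos.y;
            float dist = std::sqrt(dx * dx + dy * dy);
            if (dist <= remaining) {   // reached this waypoint; move to the next
                car.pos = t;
                ++car.target;
                remaining -= dist;
            } else {                   // move partway toward the waypoint
                car.pos.x += dx / dist * remaining;
                car.pos.y += dy / dist * remaining;
                remaining = 0.0f;
            }
        }
    }

Deceleration for red lights or a slow leading car then just means lowering speed over time; the path-following part stays the same.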

Techniques for generating a 2D game world

I want to make a 2D game in C++ using the Irrlicht engine. In this game, you will control a tiny ship in a cave of some sort. This cave will be created automatically (the game will have random levels) and will look like this:
Suppose I already have the points of the polygon of the inside of the cave (the white part). How should I render this shape on the screen and use it for collision detection? From what I've read around different sites, I should use a triangulation algorithm to make meshes of the walls of the cave (the black part) using the polygon of the inside of the cave (the white part). Then I can also use these meshes for collision detection. Is this really the best way to do it? Do you know if Irrlicht has some built-in functions that can help me achieve this?
Any advice will be appreciated.
Describing how to get an arbitrary polygonal shape to render using a given 3D engine is quite a lengthy process. Suffice to say that pretty much all 3D rendering is done in terms of triangles, and if you didn't use a tool to generate a model that is already composed of triangles, you'll need to generate triangles from whatever data you have there. Triangulating either the black space or the white space is probably the best way to do it, yes. Then you can build up a mesh or vertex list from that, and render those triangles that way. The triangles in the list then also double up for collision detection purposes.
I doubt Irrlicht has anything for triangulation as it's quite specific to your game design and not a general approach most people would take. (Typically they would have a tool which permits generation of the game geometry and the navigation geometry side by side.) It looks like it might be quite tricky given the shapes you have there.
One option is to use the map (image mask) directly to test for collision.
For example,
    if (map_points[sprite.x][sprite.y] == BLACK) {
        // collision detected
    }
assuming that your objects are images and aren't real polygons.
In case you use real polygons, you can have a "points sample" for every object shape, and check the sample for collisions.
To check whether a point is inside or outside your polygon, you can simply count crossings. You know (0,0) is outside your polygon. Now draw a line from there to your test point (X,Y). If this line crosses an odd number of polygon edges (e.g. 1), the point is inside the polygon. If the line crosses an even number of edges (e.g. 0 or 2), the point (X,Y) is outside the polygon. It's useful to run this algorithm on paper once to convince yourself.
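A compact sketch of that even-odd crossing test (using a horizontal ray instead of a ray from (0,0), which is the common formulation of the same idea):

    #include <vector>

    struct Point { float x, y; };

    // Standard even-odd (ray-casting) point-in-polygon test.
    bool insidePolygon(const std::vector<Point>& poly, Point p) {
        bool inside = false;
        for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
            // Does edge (poly[j] -> poly[i]) cross the horizontal ray from p?
            if ((poly[i].y > p.y) != (poly[j].y > p.y)) {
                float xAtY = poly[j].x + (p.y - poly[j].y) *
                             (poly[i].x - poly[j].x) / (poly[i].y - poly[j].y);
                if (p.x < xAtY) inside = !inside;  // odd crossings => inside
            }
        }
        return inside;
    }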