I am trying to figure out how to get the vertices of my cube after it has been rotated with glRotatef().
I'm using a bounding sphere to narrow down the possible collisions, but I need to know the exact vertices to get an accurate collision test. Can anyone help me figure this out? Thanks.
Perhaps not the answer you're looking for, but you're approaching your problem from the wrong angle. OpenGL should be used to draw your scene and only that. Scene management and anything that goes along with it (transformations, deformations...to name just a few) should be handled by you (or an appropriate framework).
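For example, you can keep the cube's untransformed vertices around and apply the same rotation on the CPU, then hand those transformed points to your collision test. A minimal sketch (the vertex container and the angle/axis arguments are placeholders; the matrix is the one documented for glRotatef):

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Rotate a point around an arbitrary axis by angleDeg degrees,
    // using the same convention as glRotatef (degrees, axis x/y/z).
    Vec3 rotate(const Vec3& p, float angleDeg, Vec3 axis)
    {
        const float a = angleDeg * 3.14159265f / 180.0f;
        const float c = std::cos(a), s = std::sin(a), ic = 1.0f - c;

        // Normalize the axis, as glRotatef does.
        const float len = std::sqrt(axis.x * axis.x + axis.y * axis.y + axis.z * axis.z);
        const float x = axis.x / len, y = axis.y / len, z = axis.z / len;

        // Rows of the rotation matrix from the glRotatef man page.
        return {
            (x * x * ic + c)     * p.x + (x * y * ic - z * s) * p.y + (x * z * ic + y * s) * p.z,
            (y * x * ic + z * s) * p.x + (y * y * ic + c)     * p.y + (y * z * ic - x * s) * p.z,
            (z * x * ic - y * s) * p.x + (z * y * ic + x * s) * p.y + (z * z * ic + c)     * p.z
        };
    }

    // Transform all eight cube corners once per frame; use the result for the
    // collision test while still calling glRotatef with the same angle/axis for drawing.
    std::vector<Vec3> rotatedCubeVertices(const std::vector<Vec3>& localVerts,
                                          float angleDeg, Vec3 axis)
    {
        std::vector<Vec3> out;
        out.reserve(localVerts.size());
        for (const Vec3& v : localVerts)
            out.push_back(rotate(v, angleDeg, axis));
        return out;
    }

If the cube is also translated, apply that translation to the rotated vertices as well, mirroring whatever you pass to glTranslatef.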
I'm still trying to control mesh density (grading) in CGAL, specifically when tet-meshing a polygonal surface (or multiple surface manifolds) that I simply load as OFF files. I can also load lists of selected faces or face nodes.
But I can't seem to get to first base on this with the polygon tet-mesher. All I want to do is assign and enforce a mesh density/size at selected faces in the OFF file.
I CAN get some kinds of mesh density control working by inserting 1D features with volumetric data meshing, but for CAD and 3D-printing purposes the mesh has to be computed from an STL-like triangular surface manifold, so volume-based meshing is not doable.
Is what I'm trying to do even possible in CGAL? It feels to me like it must be, and I'm just missing something obvious.
I really hope someone can help here. FYI, I'm mostly working from the Mesh_3 examples, using CGAL 4.14.
Thanks very much.
Look at Mesh_facet_criteria, and in particular the constructor that takes a SizingField; that is where you can control the size. For locating a query point with respect to a face, you can use the AABB tree function closest_point_and_primitive().
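To make that concrete, here is a rough sketch of a distance-based sizing field built on such an AABB tree (the typedefs follow the usual Mesh_3/Polyhedron examples; Graded_sizing, the size values and the influence radius are placeholders, and the tree is assumed to be built over the selected faces only):

    #include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
    #include <CGAL/Polyhedron_3.h>
    #include <CGAL/AABB_tree.h>
    #include <CGAL/AABB_traits.h>
    #include <CGAL/AABB_face_graph_triangle_primitive.h>
    #include <cmath>

    typedef CGAL::Exact_predicates_inexact_constructions_kernel  K;
    typedef CGAL::Polyhedron_3<K>                                 Polyhedron;
    typedef CGAL::AABB_face_graph_triangle_primitive<Polyhedron>  Primitive;
    typedef CGAL::AABB_traits<K, Primitive>                       Traits;
    typedef CGAL::AABB_tree<Traits>                               Tree;

    // Sizing field: fine near the selected faces, coarse elsewhere.
    // The call signature is the one the Mesh_3 sizing-field concept expects.
    struct Graded_sizing {
        const Tree* selected_faces;   // AABB tree built over the selected faces only
        double fine_size;             // target facet size near the selected faces
        double coarse_size;           // target facet size everywhere else
        double influence_radius;      // how far the refinement reaches

        template <typename Index>
        double operator()(const K::Point_3& p, int /*dimension*/, const Index&) const {
            // closest_point_and_primitive() would additionally tell you *which*
            // selected face p is closest to, if you want per-face sizes.
            const K::Point_3 cp = selected_faces->closest_point(p);
            const double d = std::sqrt(CGAL::squared_distance(p, cp));
            return d < influence_radius ? fine_size : coarse_size;
        }
    };

    // Build the tree once up front, call accelerate_distance_queries(), then pass
    // an instance of Graded_sizing as the facet size when constructing the criteria,
    // e.g. Mesh_criteria criteria(CGAL::parameters::facet_size = sizing, ...);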
I'm trying to write a 2D game with SFML. For that game I need a light engine and some code that can give me the area of the world that is visible to the player. As both problems fit very well together (they are practically the same), I would like to solve both at once.
My world will be loaded from files in which the hitboxes of objects are represented as polygons.
I have now written some code that takes a list of polygons and the direction of a ray that follows the mouse, and finds the closest intersection with any of these polygons.
The next step would be to cast rays from the player's or light's position towards the edges of the polygons, as well as rays offset by ±0.000001 radians, to determine the visible area and return it as a polygon.
The problem, though, is that my algorithm (it calculates the intersection between two lines with vector math) is too slow.
On my very good PC I get 100 fps with 300 edges and one ray.
I have now read many articles online but couldn't find one best solution. As far as I have read, though, it should be much faster to calculate intersections with triangles.
My question: would it be meaningfully faster to triangulate the polygons once while loading the map and then use ray-triangle intersection, or is there a better way that you know of to solve my problem?
I have also heard of bounding volume hierarchies, but I don't know how much impact that would have.
I'm a bit surprised at how much time my algorithm consumes, as it only has to calculate some two-dimensional intersections...
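For context, a single test boils down to the standard 2D ray-segment intersection in cross-product form; a minimal sketch (names are placeholders):

    #include <cmath>
    #include <optional>

    struct Vec2 { float x, y; };

    static float cross(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

    // Ray: origin o, direction d (need not be normalized).
    // Segment: endpoints p and q.
    // Returns the ray parameter t of the hit (hit point = o + t*d), or nothing.
    std::optional<float> raySegment(Vec2 o, Vec2 d, Vec2 p, Vec2 q)
    {
        const Vec2 s{ q.x - p.x, q.y - p.y };
        const float denom = cross(d, s);
        if (std::fabs(denom) < 1e-9f)          // parallel or degenerate
            return std::nullopt;

        const Vec2 op{ p.x - o.x, p.y - o.y };
        const float t = cross(op, s) / denom;  // distance along the ray
        const float u = cross(op, d) / denom;  // position along the segment
        if (t >= 0.0f && u >= 0.0f && u <= 1.0f)
            return t;
        return std::nullopt;
    }

The per-ray cost is then a minimum of t over all edges, so the real win comes from not testing every edge at all, which is what a broad phase like the Box2D world query below provides.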
For everyone looking for the solution I finally went with:
I discovered the Box2D Physics Engine and I am now using the b2World::RayCast(...) function to determine whether and where a ray hits an object in my scene.
For now everything works fine and smooth (did no exact benchmark yet) :)
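For reference, a rough sketch of the callback pattern b2World::RayCast expects (ClosestHit and castRay are placeholder names; older Box2D versions spell the float parameters as float32 and use the <Box2D/Box2D.h> header):

    #include <box2d/box2d.h>

    // Collects the closest fixture hit along the ray.
    class ClosestHit : public b2RayCastCallback {
    public:
        b2Fixture* fixture = nullptr;
        b2Vec2 point{0.0f, 0.0f};
        b2Vec2 normal{0.0f, 0.0f};

        float ReportFixture(b2Fixture* f, const b2Vec2& p,
                            const b2Vec2& n, float fraction) override {
            fixture = f;
            point   = p;
            normal  = n;
            return fraction;   // clip the ray here; Box2D keeps reporting closer hits
        }
    };

    // Casts a ray from `from` to `to`; returns true and fills `hit` if something was hit.
    bool castRay(b2World& world, b2Vec2 from, b2Vec2 to, ClosestHit& hit)
    {
        world.RayCast(&hit, from, to);
        return hit.fixture != nullptr;
    }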
I got it to work with the help of this site: http://www.iforce2d.net/b2dtut/world-querying
Have a nice Day! :)
I have made several attempts to fix this and read all I could find here, on the forums and on Google. I used a CCD threshold much lower than my object's movement speed and a CCD swept-sphere radius much smaller than the object's half radius. The only thing this does is make the multi-sphere get stuck on seams. I also tried setting ERP/ERP2 to 0.9/1.0.
[EDIT] OK, so after some more reading: CCD will not work if the sphere is already touching the ground, and ERP only affects objects with joints, if I understand correctly.
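For reference, the CCD tuning mentioned above comes down to two calls per dynamic body; a minimal sketch against the C++ Bullet API (the 0.5/0.4 factors are placeholders, not recommended values):

    #include <btBulletDynamicsCommon.h>

    // Enable continuous collision detection on a roughly sphere-sized body.
    void enableCcd(btRigidBody* body, btScalar radius)
    {
        // CCD kicks in once the body travels further than this in one simulation step.
        body->setCcdMotionThreshold(radius * btScalar(0.5));
        // Radius of the embedded sphere used for the swept test;
        // keep it smaller than the real collision shape.
        body->setCcdSweptSphereRadius(radius * btScalar(0.4));
    }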
The ground is a trimesh made in Blender, and I use obtainStaticNodeShape to get the shape. I have tried scaling the mesh to get smaller polygons, but even the smallest size acceptable for the game does not work: about 32k indices with 11k polys over 500x500 units; the multi-sphere has a radius of 0.45 units.
[EDIT] The multi-sphere is two spheres on top of each other, and they are restricted to angular movement around the Y axis only, so no rolling.
The sphere gets "sucked" through the ground fast; it does not sink slowly. I tried making the fixed timestep smaller: 1/420 with 64 substeps did not give any better results. This happens most often while ascending or descending a slope. My ground is gently sloped, but an incline of 20% seems to be enough for it to fall through a lot; it can happen on level ground too, just not as often.
When I did my first test I used a big stretched out cube as ground and it worked well.
So my problem now is that I don't even know why this is happening, so I have no idea what to try next. Can anyone please give me a solution or some pointers?
Is there any use in increasing the multi-sphere size (for the game I cannot increase it by more than 25-30%)? I have not explicitly set any collision margins, but I think that would just make my sphere float above the ground? Is there any benefit in changing the ground from a static object to a kinematic one?
Would it work to use a ray test from the sphere straight down and push it up if it is lower than the ground? I think not; why would it fall through if it could detect the ground in the first place?
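For what it's worth, the ray-test safety net described above would look roughly like this against the C++ Bullet API (the ray length and snap-back policy are placeholders; a motion state, if used, would also need updating):

    #include <btBulletDynamicsCommon.h>

    // Cast a ray straight down from the body's centre and, if the body has sunk
    // below the ground hit point, push it back up. A crude safety net only.
    void keepAboveGround(btDiscreteDynamicsWorld* world, btRigidBody* body,
                         btScalar halfHeight)
    {
        const btVector3 from = body->getWorldTransform().getOrigin();
        const btVector3 to   = from - btVector3(0, 10, 0);   // 10 units straight down

        btCollisionWorld::ClosestRayResultCallback cb(from, to);
        // If the ray can hit the body's own shape, restrict the callback's
        // m_collisionFilterMask to the ground's collision group.
        world->rayTest(from, to, cb);

        if (cb.hasHit()) {
            const btScalar groundY = cb.m_hitPointWorld.getY();
            if (from.getY() < groundY + halfHeight) {
                btTransform t = body->getWorldTransform();
                t.getOrigin().setY(groundY + halfHeight);
                body->setWorldTransform(t);
                body->activate(true);
            }
        }
    }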
[EDIT: additional info]
There are quite a few occurrences of similar problems floating around on forums and also here on Stack Overflow. Most seem to be about very small objects. Small objects (under 0.2 m) are clearly not a good option for Bullet unless you want to increase the number of simulation steps quite a lot. My problem does not seem to fall into this category, since my smallest object is 0.9 m in diameter?
I have now also done a debug draw to see the normals of the trimesh that I use as ground. I cannot find any errors with the normals.
I also tried increasing the collision margins of the spheres, but to no avail.
I further tried the suggested settings:
    ((btDefaultCollisionConfiguration)world.collisionConfiguration).setPlaneConvexMultipointIterations(3, 3);
    ((btDefaultCollisionConfiguration)world.collisionConfiguration).setConvexConvexMultipointIterations(3, 3);
No difference.
I did, however, read about big trimeshes not working very well for raycasting. My mesh is big, 512x512 units, but I am not sure whether this could cause my object to fall through the mesh?
I also read that sphere shapes have problems with trimeshes, but again I am not sure whether this is my case? The sphere I am using is locked against rotation on all axes.
I have also tried using a btCapsule, but it gave the same results. Would a cylinder work better?
[EDIT]
I have tried using a cylinder instead, since the sphere and capsule did not work. The cylinder works a lot better, though I have still had it fall through once. The cylinder was jerking around a lot before it went through, where the sphere/capsule would just go through really fast and easily. Maybe this could be a clue to the underlying problem? A cylinder is not the best shape for a character, though.
Another possible reason could be a triangle in the mesh having overly long sides or a large ratio between side lengths. I found a few of those on a slope where my sphere always falls through. If this is indeed the problem, can I do anything about it other than manually editing the mesh in Blender?
As you can see, there are a lot of these questions and a lot of possible answers, and I have no idea which one corresponds to my case. Someone with better insight giving some pointers would mean a lot. Thanks!
I have a collision map, and some places that I want to be light sources. A light source provides light that is actually a shape in which I can see the ground. It currently looks like this:
So the light goes through the walls. I want to make it look like this:
(I marked the wall collisions in dark yellow.)
So the light rays stop when they meet a wall. I want to get the shape of the correct light; ideally a bitmap containing it.
My first idea was to cast rays from the source and check where they collide with the walls (I know how to do this), but then I would need to cast a ray every 0.001 degrees, for example, which takes too much time to generate the lights. The next thing is that the light shape isn't always a circle; sometimes it can be an ellipse or half-ellipse, even a triangle or part of a circle. Generally, I have the bitmap with the light that doesn't collide with anything, and I want to subtract from it a bit to make it look like the second image.
And the last thing: I'm using Allegro 4.2.1, but all the previously mentioned bitmaps are two-dimensional arrays of 0s and 1s.
Thanks for any help; sorry for the long question and my bad English.
The basic idea is that you calculate the shadow region of your walls and just don't color it.
This article should give you a good start.
In your particular example you can easily brute-force it by checking the line-of-sight from each (empty) pixel to the center of your light source. If you have line-of-sight and the distance is within the falloff, then you have light there. If not, then it's dark.
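A minimal sketch of that brute force on a 0/1 collision grid (the grid layout, function names and falloff value are assumptions; the line walk is a simple interpolation rather than exact Bresenham):

    #include <algorithm>
    #include <cmath>
    #include <cstdlib>
    #include <vector>

    // map[y][x] == 1 means wall, 0 means empty; the light mask uses the same layout.
    using Grid = std::vector<std::vector<int>>;

    // Walk the straight line from (x0,y0) to (x1,y1); false if a wall is crossed.
    bool lineOfSight(const Grid& map, int x0, int y0, int x1, int y1)
    {
        const int steps = std::max(std::abs(x1 - x0), std::abs(y1 - y0));
        for (int i = 1; i < steps; ++i) {
            const int x = x0 + (x1 - x0) * i / steps;
            const int y = y0 + (y1 - y0) * i / steps;
            if (map[y][x] == 1)
                return false;
        }
        return true;
    }

    // Builds the light mask: 1 where the pixel sees the source and lies within the falloff.
    Grid computeLight(const Grid& map, int lightX, int lightY, double falloff)
    {
        Grid light(map.size(), std::vector<int>(map[0].size(), 0));
        for (int y = 0; y < (int)map.size(); ++y)
            for (int x = 0; x < (int)map[0].size(); ++x) {
                if (map[y][x] == 1) continue;                    // walls stay dark
                const double d = std::hypot(double(x - lightX), double(y - lightY));
                if (d <= falloff && lineOfSight(map, lightX, lightY, x, y))
                    light[y][x] = 1;
            }
        return light;
    }

Non-circular light shapes drop out of the same loop: replace the distance check with a lookup into the unoccluded light bitmap mentioned in the question.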
MadKeithV's solution needs O(number of pixels^2) time.
My solution is an expansion of MadKeithV's idea, but runs in O(number of pixels) time. With some improvements, it will work in O(number of pixels in light).
First, start with the pixel containing the light source. Then, using a BFS procedure, 'infect' the nearest pixels with light and store, for each point, the angle range of directions in which the light can still progress.
In the following BFS steps, repeat this procedure, considering only pixels within the 'infection range'.
I'm working on a relatively small 2D (top-view) game demo, using OpenGL for my graphics. It's going for a basic stealth-based angle, and as such with all my enemies I'm drawing a sight arc so the player knows where they are looking.
One of my problems so far is that when I draw this sight arc (as a filled polygon) it naturally shows through any walls on the screen since there's nothing stopping it:
http://tinyurl.com/43y4o5z
I'm curious how I might best be able to prevent something like this. I already have code in place that lets me detect line intersections with walls and so on (for the enemy sight detection), and I could theoretically use this to detect such a case and draw the polygon accordingly, but this would likely be quite fiddly and/or inefficient, so I figure if there are any built-in OpenGL systems that can do this for me, they would probably do it much better.
I've tried looking for questions on topics like clipping/occlusion, but I'm not even sure these are exactly what I should be looking for; my OpenGL skills are limited. It seems that anything using, say, glClipPlane or glScissor wouldn't be suited to this due to the large number of individual walls and so on.
Lastly, this is just a demo I'm making in my spare time, so graphics aren't exactly my main worry. If there's a (reasonably) painless way to do this then I'd hope someone can point me in the right direction; if there's no simple way then I can just leave the problem for now or find other workarounds.
This is essentially a shadowing problem. Here's how I'd go about it:
For each point around the edge of your arc, trace a (2D) ray from the enemy towards the point, looking for intersections with the green boxes. If the green boxes are always going to be axis-aligned, the math will be a lot easier (look for Ray-AABB intersection). Rendering the intersection points as a triangle fan will give you your arc.
Since you mention that you already have the line-wall intersection code going, as long as it tells you the distance from the enemy to the wall, you'll be able to use it for the sight arc. Don't automatically assume it'll be too slow; we're not running on 486s any more. You can always reduce the number of points around the edge of your arc to speed things up.
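If the walls really are axis-aligned boxes, the per-ray test is the classic 2D slab method; a minimal sketch (struct and function names are mine):

    #include <algorithm>
    #include <cmath>

    struct Vec2 { float x, y; };
    struct AABB { Vec2 min, max; };   // axis-aligned wall rectangle

    // Returns the smallest t >= 0 with hit point = origin + t*dir inside the box,
    // or a negative value if the ray misses. dir need not be normalized.
    // Note: a zero dir component divides to +/-inf, which min/max handle; a ray
    // starting exactly on a slab boundary can produce NaN and may need special-casing.
    float rayAabb(Vec2 origin, Vec2 dir, const AABB& box)
    {
        const float tx1 = (box.min.x - origin.x) / dir.x;
        const float tx2 = (box.max.x - origin.x) / dir.x;
        const float ty1 = (box.min.y - origin.y) / dir.y;
        const float ty2 = (box.max.y - origin.y) / dir.y;

        const float tmin = std::max(std::min(tx1, tx2), std::min(ty1, ty2));
        const float tmax = std::min(std::max(tx1, tx2), std::max(ty1, ty2));

        if (tmax < 0.0f || tmin > tmax)   // box behind the ray, or slabs don't overlap
            return -1.0f;
        return std::max(tmin, 0.0f);      // 0 if the ray starts inside the box
    }

For each arc edge point, keep the nearest positive t over all boxes (capped at the arc radius), move the point to origin + t*dir, and feed the resulting points to the triangle fan described above.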
OpenGL's built-in occlusion handling is designed for 3D tasks, and I can't think of a simple way to rig it to achieve the effect you are after. If it were me, the way I would solve this is with a fragment shader program, but be forewarned that this definitely does not fall under "a (reasonably) painless way to do this". Briefly, you first render a binary "occlusion map" which is black where there are walls and white otherwise. Then you render the "viewing arc" as you are currently doing, but with a fragment program that marches from the viewer towards the target location looking for an occluder (a black pixel). If it finds one, it renders that pixel of the "viewing arc" as 100% transparent. Overall, while this is a "correct" solution, I would say that this is a complex feature and you seem okay without implementing it.
I figure if there are any built-in OpenGL systems that can do this for me, they would probably do it much better.
OpenGL is a drawing API, not a geometry processing library.
Actually, your intersection test method is the right way to do it. However, to speed it up you should use a spatial subdivision structure. In your case you have something that practically cries out for a binary space partitioning (BSP) tree. BSP trees have the nice property that the complexity of finding the intersections of a line with the walls is on average about O(log n), with a worst case of O(n log n); in other words, BSP trees are very efficient. See the BSP FAQ for details: http://www.opengl.org//resources/code/samples/bspfaq/index.html