I have a collision map, and some places that I want to be light sources. The light source provides light as a shape in which I can see the ground. Right now it looks like this:
So the light goes through the walls. I want to make it look like this:
(I marked the collisions with walls with dark yellow)
So the light rays stop when they meet a wall. I want to get the shape of the correct light; ideally as a bitmap containing it.
My first idea was to cast rays from the source and check where they collide with a wall (I know how to do this), but then I would need to cast a ray every 0.001 degrees, for example, which takes too much time to generate the lights. The next thing is that the light shape isn't always a circle; sometimes it can be an ellipse or half-ellipse, even a triangle or a part of a circle. Generally, I have the bitmap of the light as if it collided with nothing, and I want to subtract from it to make it look like the second image.
And the last thing: I'm using Allegro 4.2.1, but all of the previously mentioned bitmaps are two-dimensional arrays of 0s and 1s.
Thanks for any help; sorry for the long question and my bad English.
The basic idea is that you calculate the shadow region cast by your walls and simply don't color that region.
This article should give you a good start.
In your particular example you can easily brute-force it by checking the line of sight from each (empty) pixel to the center of your light source. If you have line of sight and the distance is within the falloff, that pixel is lit. If not, it's dark.
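For instance, a minimal sketch of that check over a 0/1 grid (1 = wall), walking the line with a Bresenham-style loop so no floating-point ray stepping is needed; the names here are just illustrative:

    #include <cmath>
    #include <cstdlib>
    #include <vector>

    // grid[y][x] == 1 means wall, 0 means empty.
    // Returns true if no wall lies on the straight line from (x0,y0) to (x1,y1).
    bool lineOfSight(const std::vector<std::vector<int>>& grid,
                     int x0, int y0, int x1, int y1)
    {
        int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;                        // Bresenham error term
        while (true) {
            if (grid[y0][x0] == 1) return false;  // hit a wall
            if (x0 == x1 && y0 == y1) return true;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }
            if (e2 <= dx) { err += dx; y0 += sy; }
        }
    }

    // A pixel is lit if it has line of sight to the source and is within range.
    bool isLit(const std::vector<std::vector<int>>& grid,
               int x, int y, int srcX, int srcY, double falloff)
    {
        double dist = std::hypot(x - srcX, y - srcY);
        return dist <= falloff && lineOfSight(grid, srcX, srcY, x, y);
    }

Running isLit over every pixel of the light's bounding box gives you exactly the 0/1 light bitmap you asked for.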
MadKeithV's solution needs O(number of pixels²) time.

My solution is an expansion of MadKeithV's idea, but it runs in O(number of pixels) time. With some improvements, it will work in O(number of pixels in light).
First, start with the pixel containing the source of light. Then, using a BFS, 'infect' the nearest pixels with light, storing for each point the angle range of directions in which the light can still progress from it.
In subsequent BFS steps, repeat this procedure, considering only pixels within the 'infection range'.
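A minimal sketch of the 'infection' pass, leaving out the angle-range bookkeeping (instead, each pixel simply requires that the neighbour one step back toward the source is already lit, which approximates the same shadowing and still visits each pixel a constant number of times); all names are illustrative:

    #include <cmath>
    #include <queue>
    #include <utility>
    #include <vector>

    // grid[y][x] == 1 is a wall. Returns a same-sized 0/1 light map.
    std::vector<std::vector<int>> lightMap(const std::vector<std::vector<int>>& grid,
                                           int srcX, int srcY, double radius)
    {
        const int h = grid.size(), w = grid[0].size();
        const int dx4[] = {1, -1, 0, 0}, dy4[] = {0, 0, 1, -1};
        std::vector<std::vector<int>> lit(h, std::vector<int>(w, 0));
        std::queue<std::pair<int, int>> q;
        lit[srcY][srcX] = 1;
        q.push({srcX, srcY});
        while (!q.empty()) {
            auto [x, y] = q.front();
            q.pop();
            for (int i = 0; i < 4; ++i) {
                int nx = x + dx4[i], ny = y + dy4[i];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                if (lit[ny][nx] || grid[ny][nx] == 1) continue;
                if (std::hypot(nx - srcX, ny - srcY) > radius) continue;
                // The pixel one step back toward the source must be lit,
                // otherwise light could not have reached us.
                int px = nx + (srcX > nx) - (srcX < nx);
                int py = ny + (srcY > ny) - (srcY < ny);
                if (!lit[py][px]) continue;
                lit[ny][nx] = 1;
                q.push({nx, ny});
            }
        }
        return lit;
    }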
I'm trying to write a 2D game with SFML. For that game I need a light engine, plus some code that can give me the area of the world that is visible to the player. As both problems fit very well together (they are practically the same), I would like to solve both at once.
My world will be loaded from files in which the hitboxes of objects are represented as polygons.
I have now written some code that takes a list of polygons and the direction of a ray that follows the mouse, and finds the closest intersection of that ray with any of these polygons.
The next step would be to cast rays from the player's or light's position towards the corner points of the polygons, as well as rays offset by ±0.000001 radians, to determine the visible area and give it back as a polygon.
The problem, though, is that my algorithm (it calculates the intersection between two lines using vector math) is too slow.
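For reference, the per-edge test such an algorithm boils down to is usually written in this parametric form (a sketch, not my exact code; Vec2 and the function name are just illustrative):

    #include <optional>

    struct Vec2 { float x, y; };

    // Intersect ray (origin o, direction d) with segment a-b.
    // Returns the ray parameter t >= 0 of the hit, if any.
    std::optional<float> raySegment(Vec2 o, Vec2 d, Vec2 a, Vec2 b)
    {
        Vec2 s{b.x - a.x, b.y - a.y};
        float denom = d.x * s.y - d.y * s.x;      // 2D cross product
        if (denom == 0.0f) return std::nullopt;   // parallel, no hit
        Vec2 ao{a.x - o.x, a.y - o.y};
        float t = (ao.x * s.y - ao.y * s.x) / denom;  // position along the ray
        float u = (ao.x * d.y - ao.y * d.x) / denom;  // position along the segment
        if (t < 0.0f || u < 0.0f || u > 1.0f) return std::nullopt;
        return t;
    }

The closest hit is simply the smallest t over all edges; triangulation or a BVH only changes how many edges you have to feed through a test like this.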
On my quite powerful PC I get 100 fps with 300 edges and one ray.
I have now read many articles online, but couldn't find a single best solution. As far as I read, though, it should be much faster to calculate intersections against triangles.
My question now: would it be meaningfully faster to triangulate the polygons once while loading the map and then use ray-triangle intersection, or is there any better way that you know of to solve my problem?
I have also heard of bounding volume hierarchies, but I don't know how much impact that would have.
I'm a bit surprised at how much time my algorithm consumes, as it only has to calculate some two-dimensional intersections...
For everyone looking for the solution I finally went with:
I discovered the Box2D Physics Engine and I am now using the b2World::RayCast(...) function to determine whether and where a ray hits an object in my scene.
For now everything works fine and smoothly (I haven't done an exact benchmark yet) :)
I got it to work with the help of this site: http://www.iforce2d.net/b2dtut/world-querying
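For anyone else landing here, the callback pattern from that tutorial looks roughly like this (Box2D 2.x style; a sketch, not my exact code):

    #include <Box2D/Box2D.h>

    // Keeps only the closest fixture hit by the ray.
    class ClosestHit : public b2RayCastCallback {
    public:
        b2Fixture* fixture = nullptr;
        b2Vec2 point, normal;

        float32 ReportFixture(b2Fixture* f, const b2Vec2& p,
                              const b2Vec2& n, float32 fraction) override
        {
            fixture = f;
            point = p;
            normal = n;
            return fraction;   // clip the ray here; farther hits are skipped
        }
    };

    // Usage:
    //   ClosestHit cb;
    //   world.RayCast(&cb, rayStart, rayEnd);
    //   if (cb.fixture) { /* cb.point is the closest intersection */ }

Returning `fraction` from ReportFixture shortens the ray at each reported hit, so when the cast finishes the stored fixture is the closest one.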
Have a nice day! :)
I have made several attempts to fix this and have read everything I could find here, on the forum, and on Google. I used a CCD threshold much lower than my objects' movement speed and a CCD radius much smaller than the objects' half radius. The only thing this does is make the multisphere get stuck on seams. I also tried setting ERP/ERP2 to 0.9/1.0.
[EDIT] OK, so after some more reading: CCD will not work if the sphere is already touching the ground, and ERP only affects objects with joints, if I understand correctly.
The ground is a trimesh made in Blender, using obtainStaticNodeShape to get the shape. I have tried scaling the mesh to get smaller polygons, but even the smallest size acceptable for the game does not work: about 32k indices with 11k polys over 500x500 units; the multisphere has a radius of 0.45 units.
[EDIT] The multi-sphere is two spheres stacked on top of each other, restricted to angular movement around the Y-axis only, so no rolling.
The sphere gets "sucked" through the ground quickly; it does not sink slowly. I tried making the fixed timestep smaller, 1/420 with 64 substeps, but that did not give any better results. This happens most often while ascending or descending a slope. My ground is gently sloped, but an incline of 20% seems to be enough for it to fall through a lot, though it can happen on level ground too, just not as often.
When I did my first test I used a big stretched out cube as ground and it worked well.
So my problem now is that I don't even know why this is happening, so I have no idea what to try next. Can anyone please give me a solution or some pointers?
Is there any use in increasing the multi-sphere size (for the game I cannot increase it by more than 25-30%)? I have not explicitly set any collision margins, but I think that would just make my sphere float above the ground? Is there any profit in changing the ground from a static object to a kinematic one?
Would it work to do a ray test from the sphere straight down and push it up if it is lower than the ground? I think not: why would it fall through if it could detect the ground in the first place?
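For reference, the ray-test idea would look roughly like this (a sketch; it would only mask the symptom rather than fix the tunnelling):

    #include <btBulletDynamicsCommon.h>

    // Cast a short ray from the body's centre straight down and, if the body
    // has sunk below the hit point, push it back up. A crude safety net.
    // NB: you may need collision filtering so the ray ignores the body itself.
    void clampToGround(btDynamicsWorld* world, btRigidBody* body, btScalar radius)
    {
        btVector3 from = body->getWorldTransform().getOrigin();
        btVector3 to = from - btVector3(0, radius * 2, 0);
        btCollisionWorld::ClosestRayResultCallback cb(from, to);
        world->rayTest(from, to, cb);
        if (cb.hasHit()) {
            btScalar minY = cb.m_hitPointWorld.getY() + radius;
            if (from.getY() < minY) {
                btTransform t = body->getWorldTransform();
                t.getOrigin().setY(minY);   // teleport back above the ground
                body->setWorldTransform(t);
            }
        }
    }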
[EDIT: additional info]
There are quite a few occurrences of similar problems floating around on forums and also here on Stack Overflow. Most seem to be about very small objects. Small objects (< 0.2 m) are clearly not a good option for Bullet unless you want to increase the number of simulation steps quite a lot. My problem does not seem to fall under this category, since my smallest object is 0.9 m in diameter.
I have now also done a debug draw to see the normals of the trimesh that I use as ground. I cannot find any errors in the normals.
I also tried to increase the collision margins of the spheres, but to no avail.
I further tried the suggested settings:

    ((btDefaultCollisionConfiguration)world.collisionConfiguration).setPlaneConvexMultipointIterations(3, 3);
    ((btDefaultCollisionConfiguration)world.collisionConfiguration).setConvexConvexMultipointIterations(3, 3);
No difference.
I did, however, read about big trimeshes not working very well for raycasting. My mesh is big, 512x512 units, but I am not sure whether this could cause my object to fall through the mesh?
I also read that sphere shapes have problems with trimeshes, but again I am not sure whether this is my case. The sphere I am using is locked for rotation on all axes.
I have also tried using a btCapsule, but it gave the same results. Would a cylinder work better?
[EDIT]
I have tried using a cylinder instead, since the sphere and capsule did not work. The cylinder works a lot better, though I have still gotten it to fall through once. The cylinder was jerking around a lot before it went through, where the sphere/capsule would just go through quickly and easily. Maybe this is a clue to the underlying problem? A cylinder is not the best character shape, though...
Another possible reason could be a triangle in the mesh with very long sides or a large ratio between side lengths. I found a few of those on a slope where my sphere always falls through. If this is indeed the problem, can I do anything about it other than manually editing the mesh in Blender?
As you can see, there are a lot of these questions and a lot of possible answers, and I have no idea which one corresponds to my case. Someone with better insight giving some pointers would mean a lot. Thanks!
As seen in the image
I draw a set of contours (polygons) as GL_LINE_STRIP.

Now I want to select a curve (polygon) under the mouse, to delete, move, etc., in 3D.
I am wondering which method to use:
1. Use OpenGL picking and selection (glRenderMode(GL_SELECT)).
2. Use manual collision detection, casting a pick ray and checking whether the ray is inside each polygon.
I strongly recommend against GL_SELECT. This method is very old and absent from new GL versions, and you're likely to get problems with modern graphics cards. Don't expect it to be supported in hardware - you'd probably hit a software (driver) fallback for this mode on many GPUs, if it works at all. Use at your own risk :)
Let me provide you with an alternative.
For solid, big objects, there's an old, good approach to selection (sketched in code after this list):
enabling and setting the scissor test to a 1x1 window at the cursor position
drawing the screen with no lighting, texturing and multisampling, assigning a unique solid colour to every "important" entity - this colour will become the object ID for picking
calling glReadPixels and retrieving the colour, which would then serve to identify the picked object
clearing the buffers, resetting the scissor to the normal size and drawing the scene normally.
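A minimal sketch of those steps in legacy GL; drawObject() and objectCount() are placeholders for your own scene code:

    // Returns the picked object ID, or 0 for background.
    unsigned pickAt(int x, int y, int viewportH)
    {
        glEnable(GL_SCISSOR_TEST);
        glScissor(x, viewportH - y - 1, 1, 1);   // window y is flipped
        glDisable(GL_LIGHTING);
        glDisable(GL_TEXTURE_2D);
        glDisable(GL_MULTISAMPLE);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);    // 0 = "nothing picked"
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        for (unsigned id = 1; id <= objectCount(); ++id) {
            // Encode the ID in the low 24 bits of the colour.
            glColor3ub(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF);
            drawObject(id);                       // geometry only, no shading
        }

        unsigned char rgb[3];
        glReadPixels(x, viewportH - y - 1, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, rgb);
        glDisable(GL_SCISSOR_TEST);               // then clear and draw normally
        return rgb[0] | (rgb[1] << 8) | (rgb[2] << 16);
    }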
This gives you a very reliable "per-object" picking method. Also, drawing and clearing only 1 pixel with minimal per-pixel operation won't really hurt your performance, unless you are short on vertex processing power (unlikely, I think) or have really a lot of objects and are likely to get CPU-bound on the number of draw calls (but then again, I believe it's possible to optimize this away to a single draw call if you could pass the colour as per-pixel data).
The colour in RGB is 3 unsigned bytes, but it should be possible to additionally use the alpha channel of the framebuffer for the last byte, so you'd get 4 bytes in total - enough to store any 32-bit pointer to the object as the colour.
Alternatively, you can create a dedicated framebuffer object with a specific pixel format (like GL_R32UI, or even GL_RG32UI if you need 64 bits) for that.
The above is a nice and quick alternative (both in terms of reliability and in implementation time) for the strict geometric approach.
I found that on new GPUs, the GL_SELECT mode is extremely slow. I played with a few different ways of fixing the problem.
The first was to do a CPU collision test, which worked, but wasn't as fast as I would have liked. It definitely slows down when you are casting rays into the screen (using gluUnProject) and then trying to find which object the mouse collides with. The only way I got satisfactory speeds was to use an octree to reduce the number of collision tests and then do a bounding box collision test - however, this resulted in a method that was not pixel perfect.
The method I settled on was to first find all the objects under the mouse (using gluUnProject and bounding box collision tests), which is usually very fast. I then rendered each of the objects that had potentially collided with the mouse into the backbuffer, each as a different color. I then used glReadPixels to get the color under the mouse and mapped that back to the object. glReadPixels is a slow call, since it has to read from the framebuffer. However, it is done once per frame, which ends up taking a negligible amount of time. You can speed it up by rendering to a PBO if you'd like.
Giawa
umanga, can't see how to reply inline... maybe I should sign up :)
First of all I must apologize for giving you the wrong algo - I did the back-face culling one. But the one you need is very similar, which is why I got confused... d'oh.
Get the camera position to mouse vector as said before.
For each contour, loop through all the coords in pairs (0-1, 1-2, 2-3, ..., n-0) and make a vector out of each pair as before. I.e., walk the contour.
Now take the cross product of those two (contour edge and mouse vec), instead of between pairs as I said before; do that for all the pairs and vector-add them all up.
At the end, find the magnitude of the resulting vector. If the result is zero (allowing for rounding errors), then you're outside the shape - regardless of facing. If you're interested in facing, then instead of the magnitude you can take the dot product with the mouse vector to find the facing and test the sign +/-.
It works because the algorithm finds the amount of distance from the vector line to each point in turn. As you sum them up, if you are outside then they all cancel out because the contour is closed; if you're inside then they all add up. It's actually Gauss's law from electromagnetism in physics...
See http://en.wikipedia.org/wiki/Gauss%27s_law and note that "the right-hand side of the equation is the total charge enclosed by S divided by the electric constant", noting the word "enclosed" - i.e., zero means not enclosed.
You can still do that optimization with the bounding boxes for speed.
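If you project the contour into 2D first (e.g., onto a plane perpendicular to the pick ray), the usual concrete form of this "enclosed or not" test is a winding-number check - a minimal sketch, with illustrative names:

    #include <vector>

    struct P2 { double x, y; };

    // Winding number of point p with respect to a closed polygon.
    // Non-zero means p is enclosed; zero means outside.
    int windingNumber(const std::vector<P2>& poly, P2 p)
    {
        int wn = 0;
        for (size_t i = 0; i < poly.size(); ++i) {
            P2 a = poly[i], b = poly[(i + 1) % poly.size()];
            double cross = (b.x - a.x) * (p.y - a.y) - (p.x - a.x) * (b.y - a.y);
            if (a.y <= p.y) {
                if (b.y > p.y && cross > 0) ++wn;   // upward edge crossing
            } else {
                if (b.y <= p.y && cross < 0) --wn;  // downward edge crossing
            }
        }
        return wn;
    }

A non-zero winding number means the point is enclosed, which matches the "zero means not enclosed" reading of Gauss's law above.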
In the past I've used GL_SELECT to determine which object(s) contributed the pixel(s) of interest and then used computational geometry to get an accurate intersection with the object(s) if required.
Do you expect to select by clicking the contour (on the edge) or the interior of the polygon? Your second approach sounds like you want clicks in the interior to select the tightest containing polygon. I don't think that GL_SELECT after rendering GL_LINE_STRIP is going to make the interior responsive to clicks.
If this was a true contour plot (from the image I don't think it is, edges appear to intersect) then a much simpler algorithm would be available.
You can't use select if you stick with the lines, because you would have to click on the rendered line pixels themselves, not the space the lines bound - which is what I read you as wanting to do.
You can use Kos's answer, but in order to render the space you need to solid-fill it, which would involve converting all of your contours to convex shapes, which is painful. So I think it would work sometimes and give the wrong answer in other cases unless you did that conversion.
What you need to do is use the CPU. You have the view extents from the viewport and the perspective matrix. With the mouse coords, generate the view-to-mouse-pointer vector. You also have all the coords of the contours.
Take the first coord of the first contour and make a vector to the second coord. Then take the 3rd coord and make a vector from 2 to 3, and repeat all the way around the contour, finally making the last one from coord n back to 0 again. For each pair in sequence, find the cross product and sum up all the results. When you have that final summation vector, keep hold of it and take a dot product with the mouse-pointer direction vector. If it's positive, the mouse is inside the contour; if it's negative, it's not; and if it's 0, I guess the plane of the contour and the mouse direction are parallel.
Do that for each contour and then you will know which of them are spiked by your mouse. It's up to you which one you want to pick from that set. Highest Z?
It sounds like a lot of work, but it's not too bad and will give the right answer. You might additionally keep bounding boxes of all your contours; then you can early-out the ones away from the mouse vector by doing the same math as for the full contour, but only on the 4 sides - if the mouse isn't inside the box, it can't be inside the contour either.
The first is easy to implement and widely used.
I'm trying to write an algorithm to generate the "ceiling panel" from a horizontally wrappable panoramic image like the one above. Images 1 to 4 are straight cut-outs for the walls of the cube, but the ceiling is more complicated, as I assume it needs to be composited from parts 5a to 5d. Does anyone know the solution, in pseudocode?
My guess is that we need to iterate over the coordinates of the ceiling tile, i.e. (a more concrete sketch follows):

    for y = 0 to height
        for x = 0 to width
            colorOfSomeCoordinateOnOriginalImage = someFunction(x, y)  // polar coords?
            setPixel(x, y, colorOfSomeCoordinateOnOriginalImage)
        next
    next
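Assuming the panorama is equirectangular (longitude/latitude) - if it was shot on a cylinder the mapping differs - that "some function" for the ceiling (+Y) face could be sketched like this; getPanoPixel/setFacePixel are placeholder helpers, and there's no interpolation:

    #include <cmath>

    // Fill the top (ceiling) face of a cube map, size x size pixels,
    // from an equirectangular panorama of panoW x panoH pixels.
    void buildCeiling(int size, int panoW, int panoH)
    {
        const double PI = 3.14159265358979323846;
        for (int y = 0; y < size; ++y) {
            for (int x = 0; x < size; ++x) {
                // Face coords in [-1, 1]; the direction points up (+Y).
                double u = 2.0 * (x + 0.5) / size - 1.0;
                double v = 2.0 * (y + 0.5) / size - 1.0;
                double len = std::sqrt(u * u + 1.0 + v * v);
                double dx = u / len, dy = 1.0 / len, dz = v / len;
                // Direction -> longitude/latitude -> panorama pixel.
                double lon = std::atan2(dz, dx);    // [-pi, pi]
                double lat = std::asin(dy);         // [-pi/2, pi/2]
                int px = (int)((lon / (2.0 * PI) + 0.5) * (panoW - 1));
                int py = (int)((0.5 - lat / PI) * (panoH - 1));
                setFacePixel(x, y, getPanoPixel(px, py));
            }
        }
    }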
Hmm... I remember doing something like that for a computer vision class back in grad school. It's not impossible, but a LOT of work needs to be done. One way would be to degrade the entire product's quality. That's the easiest starting point. Once you've degraded it enough (depending on how much you need to stretch the edges), you can start applying nonlinear transformations to the image. This is probably best done approximately, by cutting out sections of the cylinder by degrees and then applying one of the age-old projections used in making flat maps (like Mercator or CADRG or something)... but you have to remember to interpolate the pixels - make sure you at least average the pixels to approximate. That's the best I can think of.
You can't generate a panorama just by taking photos from a single location and stitching them. Well, you can for a single horizontal set, but it would look ugly (usually you stitch many more than 4 photos to avoid distortions at the edges).
Here, you have even more data in the y-direction, which means even more pictures and some sort of fancy projection to generate the final image.
If you look closely at the panorama you have, you'll notice that the boundary of the region in sunlight is not straight. That is because your panorama was projected onto a cylinder, not a cube. So I don't think 1/2/3/4 would look right mapped directly onto a cube.
Bottom line: you really can't consider those 8 chunks as 8 pictures taken from a fixed point. (If you need convincing, try to take 8 pictures like that yourself and stitch them together. You'll see how much fun the upper row is - and even though the bottom row is easy, how ugly the stitched regions look.)
Now, why you need cube maps changes your options drastically. If you're only looking for a cube map to do cheap environment-mapping effects, then the simplest approach is to find an arbitrary function that maps the edges where you want them to be and simply interpolate linearly in between. It's completely the wrong projection, but it ought to give a picture that looks good enough for the intended goal.
If you're looking for something more accurate, then you need to know how the projection was generated, so that you can unproject it before re-projecting it on the cube.
All that said, it's also a lot easier to just photograph cube maps rather than process a panorama to generate them, but that might not be possible for you.
I'm making a game where the game's size varies, so I want to make my own shadows. The API I'm using can fill rectangles, draw ellipses, horizontal lines, etc., and supports RGBA. Given this, how could I make a drop shadow? I tried making a black-to-white gradient and setting the alpha to 20%, but it didn't look very good... I'm not sure how drop shadows are usually done. Thanks!
I would suggest (a rough sketch in code follows the list):
copy the object,
move it in the opposite direction of the light source and use its distance as a weight,
turn it totally black,
blur it using the light source's distance as a weight, too,
put it behind the object,
lower the alpha if you want.
?????
profit.
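Since the API can only fill rectangles and ellipses with RGBA, the blur step can be faked by stacking a few concentric translucent ellipses; fillEllipse here is a stand-in for whatever your API calls it:

    // Draw a soft elliptical drop shadow by layering translucent ellipses.
    // fillEllipse(cx, cy, rx, ry, r, g, b, a) is a hypothetical API call.
    void drawDropShadow(int cx, int cy, int rx, int ry,
                        int offsetX, int offsetY, int layers)
    {
        for (int i = layers; i >= 1; --i) {
            // Outer layers are bigger and all layers are faint; where they
            // overlap (near the centre) the shadow accumulates and darkens,
            // which fakes a Gaussian falloff.
            int growth = i * 2;
            int alpha = 255 / (layers * 2);
            fillEllipse(cx + offsetX, cy + offsetY,
                        rx + growth, ry + growth,
                        0, 0, 0, alpha);   // black, translucent
        }
    }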