So for the game I am working on, I am trying to make a unique type of shader. The majority of it is very basic; there is just one difficult part. I only want to render what the player would see in their line of sight. This is a top-down 2D game, so the player would normally be able to see into rooms further over. But take this image for example.
Obviously the player here is the orange circle. The area that he can see is the grey filled-in area. The black lines represent a room and the purple lines represent his field of vision through the door of the room. I want to render only the shaded area.
I am aware GLSL has a discard statement where you can remove specific pixels. This means that, by making a boolean function, I could just do the following code in my fragment shader.
if (!playerCanSeePoint(params)) {
    discard;
}
What I don't know how to do, though, is write the playerCanSeePoint function. One idea I had was to cast invisible rays from the player in all directions and find each ray's first intersection point. That first intersection would be at the nearest wall and would trace out the proper shape. This seems resource-intensive, though. So is there a good way to do this?
You start with a square the size of the view area and then for each wall you cut out a portion of it based on where the player is.
Then you can triangulate the polygon and use a stencil to prevent drawing outside of it.
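For illustration, here is a rough sketch of how the cut-out for one wall segment could be computed; the Vec2 type and the reach factor are assumptions for the sketch, not part of any particular engine:

struct Vec2 { float x, y; };

Vec2 sub(Vec2 a, Vec2 b)    { return { a.x - b.x, a.y - b.y }; }
Vec2 add(Vec2 a, Vec2 b)    { return { a.x + b.x, a.y + b.y }; }
Vec2 scale(Vec2 a, float s) { return { a.x * s, a.y * s }; }

// For a wall from a to b, the area hidden from the player is (roughly) the quad
// a, b, b pushed away from the player, a pushed away from the player.
// "reach" just has to be large enough for the quad to extend past the view area.
void shadowQuad(Vec2 player, Vec2 a, Vec2 b, float reach, Vec2 out[4])
{
    out[0] = a;
    out[1] = b;
    out[2] = add(b, scale(sub(b, player), reach));
    out[3] = add(a, scale(sub(a, player), reach));
}

Subtracting each such quad from the initial square leaves the visible polygon, which you then triangulate and draw into the stencil buffer.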
I'm creating a comic book editor. I want to be able to use some fairly complex customisable shapes for the speech balloons.
I can draw the tail and then draw a balloon but that means I have the outline inside the shape and I want it only around the edge.
I assumed QPainterPath::simplified() would solve the problem but it doesn't seem to do anything.
At the moment my best idea is to draw a shape with a thick outline and then draw it again with no outline but I don't think that will work for "zero width" outlines.
I can think of two possible solutions here:
Draw both the "tail" and the main "balloon" as a single shape. In this case, you'd simply draw a single shape with a single outline and a single fill.
Draw them separately, but twice. Draw an "expanded" version of the shapes in black first, and then draw the "normal" version of the shapes in white over the top of it. You wouldn't draw any "lines" at all - the "expanded" version of the fill would serve the same purpose.
The first method would allow alternative line styles to be used (dotted or wiggly lines), but the latter would allow the "outline" to be slightly offset, so that it appeared thicker around some edges and thinner around others.
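As a rough sketch of the second approach (the path variables, the helper name and the outline width here are placeholders, not your actual code):

#include <QPainter>
#include <QPainterPath>

// Sketch: fill an "expanded" version in black, then the normal version in white.
void drawBalloon(QPainter *painter, const QPainterPath &balloon,
                 const QPainterPath &tail, qreal outlineWidth)
{
    QPainterPath combined = balloon;
    combined.addPath(tail);
    combined.setFillRule(Qt::WindingFill);

    // First pass: the shape united with its own stroke acts as the outline.
    QPainterPathStroker stroker;
    stroker.setWidth(outlineWidth * 2);   // half of the stroke falls outside the shape
    QPainterPath expanded = combined.united(stroker.createStroke(combined));
    painter->fillPath(expanded, QBrush(Qt::black));

    // Second pass: the normal shape in white on top; no pen is used at all.
    painter->fillPath(combined, QBrush(Qt::white));
}

Translating the second pass by a pixel or two before filling would give the offset, uneven-outline effect mentioned above.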
It turns out QPainterPath::simplified() does work. It depends on whether I draw clockwise or anti-clockwise (I believe it works when drawn clockwise), which I presume is down to how Qt's Winding Fill works.
// create a path representing the bubble and its "tail"
QPainterPath path = tail.shape();
path.addPath(bubble.shape());
path.setFillRule(Qt::WindingFill);
painter->drawPath(path.simplified());
I am kind of new to all this, but I am trying to make myself a simple 2D game in C++.
I have decided to do a kind of maze-type game, and what I am doing for this is drawing out the maze as a texture and then having another texture as my character move around inside this maze.
However I am hugely struggling with the collision detection so that my character doesn't just walk through the walls. I have been told that I can use glReadPixels to find the colour of the background but whenever I try this it just ignores the colour and still continues on through the walls.
Can anybody please help me with this and tell me how I can do it as I cannot find anything anywhere which can help.
Thanks in advance.
Depending on the maze type (if you only have vertical and horizontal walls of unit length), you could represent the maze and the current position in a 2D array/matrix and decide whether the new position is OK to move into based on the content of the target cell in the maze matrix.
You will have to do some translation to/from matrix coordinates and screen coordinates (a minimal sketch follows the lists below).
Advantages:
you don't need to read from the screen
maze can be larger than what fits on the screen -- draw the relevant portion only
Disadvantages:
you can only have "block" type terrain (e.g. vertical/horizontal walls)
if you want to add moving enemies, the collision detection can be too coarse (you cannot avoid the monster "just by a hair/pixel")
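For illustration, a minimal sketch of the matrix-based check, assuming one character per cell ('#' for a wall) and the player position already translated into cell coordinates (all names here are made up):

#include <string>
#include <vector>

// '#' marks a wall cell; anything else is walkable.
std::vector<std::string> maze = {
    "##########",
    "#........#",
    "#.######.#",
    "#........#",
    "##########",
};

bool canMoveTo(int cellX, int cellY)
{
    if (cellY < 0 || cellY >= (int)maze.size())        return false;
    if (cellX < 0 || cellX >= (int)maze[cellY].size()) return false;
    return maze[cellY][cellX] != '#';
}

// Called when the player presses a direction key; the move is only committed
// if the target cell is free. Screen position = cell position * tile size.
void tryMove(int &playerCellX, int &playerCellY, int dx, int dy)
{
    if (canMoveTo(playerCellX + dx, playerCellY + dy)) {
        playerCellX += dx;
        playerCellY += dy;
    }
}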
Imagine a plain rectangular bitmap of, say, 1024x768 pixels filled with white. There are a few (non-overlapping) sprites drawn onto the bitmap: circles, squares and triangles.
Is there an algorithm (possibly even a C++ implementation) which, given the bitmap and the color which is the background color (white, in the above example), yields a list containing the smallest bounding rectangles for each of the sprites?
Here's a sample: on the left side you can see the bitmap my code is given (together with the information that the 'background' is white). On the right side you can see the same image together with the bounding rectangles of the four shapes (in red); the algorithm I'm looking for computes the geometry of these rectangles.
Some painting programs have a similar feature for selecting shapes: they can even compute seemingly arbitrary bounding polygons. Instead of dragging a selection rectangle manually, you can click the 'background' (what's background and what's not is determined by some threshold) and then the tool automatically computes the shape of the object drawn onto the background. I need something like this, except that I'm perfectly fine if I just have the rectangular bounding areas for objects.
I became aware of OpenCV; it appears to be relevant (it seems to be a library which includes every graphics algorithm I can think of - and then some) but in the vast amount of information I couldn't find my way to the algorithm I'm thinking of. I would be surprised if OpenCV couldn't do this, but I fear you've got to have a PhD to use it. :-)
Here is a great article on the subject:
http://softsurfer.com/Archive/algorithm_0107/algorithm_0107.htm
I think that a PhD is not required here :)
These are my first thoughts; none of them are complicated, except for the edge detection:
For each square:
    if it's not white:
        mark it as "found"
        if you haven't found one next to it already:
            add it to the points list
For each point in the points list:
    use basic edge detection to find the outline
    keep track of the bounds while doing so
    add the bounds to the shapes list
Remove duplicates from the shapes list (this can happen for concave shapes).
I just realized this will consider white "holes" (like the one in your leftmost circle in your sample) to be their own shapes. If the first "loop" is a flood fill, it doesn't have this problem, but it will be much slower/take much more memory.
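For what it's worth, here is a rough sketch of the flood-fill variant that collects one bounding rectangle per blob; the isBackground test is a placeholder you would adapt to your pixel format:

#include <vector>
#include <utility>
#include <algorithm>

struct Rect { int minX, minY, maxX, maxY; };

// isBackground(x, y) is assumed to return true for "white" pixels.
template <typename IsBackground>
std::vector<Rect> findBoundingRects(int width, int height, IsBackground isBackground)
{
    std::vector<char> visited(width * height, 0);
    std::vector<Rect> rects;

    for (int y = 0; y < height; ++y)
    for (int x = 0; x < width;  ++x) {
        if (visited[y * width + x] || isBackground(x, y))
            continue;

        // Flood fill this blob, growing its bounding rectangle as we go.
        Rect r = { x, y, x, y };
        std::vector<std::pair<int, int>> stack = { { x, y } };
        visited[y * width + x] = 1;
        while (!stack.empty()) {
            auto [px, py] = stack.back();
            stack.pop_back();
            r.minX = std::min(r.minX, px); r.maxX = std::max(r.maxX, px);
            r.minY = std::min(r.minY, py); r.maxY = std::max(r.maxY, py);
            const int dx[] = { 1, -1, 0, 0 }, dy[] = { 0, 0, 1, -1 };
            for (int i = 0; i < 4; ++i) {
                int nx = px + dx[i], ny = py + dy[i];
                if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                if (visited[ny * width + nx] || isBackground(nx, ny)) continue;
                visited[ny * width + nx] = 1;
                stack.push_back({ nx, ny });
            }
        }
        rects.push_back(r);
    }
    return rects;
}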
The basic edge detection I was thinking of was simple:
given eight directions (left, down-left, down, etc.)
given two relative directions cw(d) = d - 1 and ccw(d) = d + 1
starting with a point "begin"
set bounds to that point
find a direction d where begin+d is not white and begin+cw(d) is white
set cur to begin+d
do
    if cur is outside of bounds, grow bounds
    set d = cw(d)
    while (cur+d is white or cur+ccw(d) is not white)
        d = ccw(d)
    cur = cur + d
while (cur != begin)
http://ideone.com/
There are quite a few edge cases not considered here: what if begin is a single point, what if it runs to the edge of the picture, what if the start point is only 1 px wide but has blobs on two sides, and probably others... But the basic algorithm isn't that complicated.
As seen in the image
I draw set of contours (polygons) as GL_LINE_STRIP.
Now I want to select a curve (polygon) under the mouse to delete, move, etc. in 3D.
I am wondering which method to use:
1. Use OpenGL picking and selection (glRenderMode(GL_SELECT)).
2. Use manual collision detection, by casting a pick ray and checking whether the ray is inside each polygon.
I strongly recommend against GL_SELECT. This method is very old and absent in new GL versions, and you're likely to get problems with modern graphics cards. Don't expect it to be supported by hardware - probably you'd encounter a software (driver) fallback for this mode on many GPUs, provided it would work at all. Use at your own risk :)
Let me provide you with an alternative.
For solid, big objects, there's an old, good approach of selection by:
enabling and setting the scissor test to a 1x1 window at the cursor position
drawing the screen with no lighting, texturing or multisampling, assigning a unique solid colour to every "important" entity; this colour will become the object ID for picking
calling glReadPixels and retrieving the colour, which would then serve to identify the picked object
clearing the buffers, resetting the scissor to the normal size and drawing the scene normally.
This gives you a very reliable "per-object" picking method. Also, drawing and clearing only 1 pixel with minimal per-pixel operation won't really hurt your performance, unless you are short on vertex processing power (unlikely, I think) or have really a lot of objects and are likely to get CPU-bound on the number of draw calls (but then again, I believe it's possible to optimize this away to a single draw call if you could pass the colour as per-pixel data).
The colour in RGB is 3 unsigned bytes, but it should be possible to additionally use the alpha channel of the framebuffer for the last byte, so you'd get 4 bytes in total - enough to store any 32-bit pointer to the object as the colour.
Alternatively, you can create a dedicated framebuffer object with a specific pixel format (like GL_R32UI, or even GL_RG32UI if you need 64 bits) for that.
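A rough sketch of the ID-to-colour round trip in fixed-function style (the function names here are made up; only glColor4ub and glReadPixels are real GL calls):

#include <GL/gl.h>
#include <cstdint>

// Encode a 32-bit object ID into RGBA before drawing that object.
void setPickingColour(std::uint32_t id)
{
    GLubyte r =  id        & 0xFF;
    GLubyte g = (id >> 8)  & 0xFF;
    GLubyte b = (id >> 16) & 0xFF;
    GLubyte a = (id >> 24) & 0xFF;
    glColor4ub(r, g, b, a);
}

// After drawing, read the single pixel under the cursor and decode the ID.
// y must be flipped because OpenGL's origin is the bottom-left corner.
std::uint32_t readPickedId(int mouseX, int mouseY, int windowHeight)
{
    GLubyte pixel[4] = { 0, 0, 0, 0 };
    glReadPixels(mouseX, windowHeight - mouseY - 1, 1, 1,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixel);
    return  std::uint32_t(pixel[0])
         | (std::uint32_t(pixel[1]) << 8)
         | (std::uint32_t(pixel[2]) << 16)
         | (std::uint32_t(pixel[3]) << 24);
}

As the list above says, lighting, texturing, blending and multisampling must be off while this is drawn, or the colour read back won't match the ID that was written.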
The above is a nice and quick alternative (both in terms of reliability and in implementation time) for the strict geometric approach.
I found that on new GPUs, the GL_SELECT mode is extremely slow. I played with a few different ways of fixing the problem.
The first was to do a CPU collision test, which worked, but wasn't as fast as I would have liked. It definitely slows down when you are casting rays into the screen (using gluUnProject) and then trying to find which object the mouse is colliding with. The only way I got satisfactory speeds was to use an octree to reduce the number of collision tests and then do a bounding box collision test - however, this resulted in a method that was not pixel perfect.
The method I settled on was to first find all the objects under the mouse (using gluUnProject and bounding box collision tests), which is usually very fast. I then rendered each of the objects that had potentially collided with the mouse into the back buffer as a different color. I then used glReadPixels to get the color under the mouse and mapped that back to the object. glReadPixels is a slow call, since it has to read from the frame buffer. However, it is done once per frame, which ends up taking a negligible amount of time. You can speed it up by rendering to a PBO if you'd like.
Giawa
umanga, can't see how to reply inline... maybe I should sign up :)
First of all I must apologize for giving you the wrong algo - I did the back-face culling one. But the one you need is very similar, which is why I got confused... d'oh.
Get the camera position to mouse vector as said before.
For each contour, loop through all the coords in pairs (0-1, 1-2, 2-3, ... n-0) in it and make a vec out of them as before. I.e. walk the contour.
Now do the cross product of those two (contour edge with mouse vector) instead of between consecutive pairs like I said before; do that for all the pairs and vector-add them all up.
At the end find the magnitude of the resulting vector. If the result is zero (taking into account rounding errors) then you're outside the shape, regardless of facing. If you're interested in facing then, instead of the magnitude, you can do the dot product with the mouse vector to find the facing and test the sign +/-.
It works because the algo finds the amount of distance from the vector line to each point in turn. As you sum them up, if you are outside then they all cancel out because the contour is closed. If you're inside then they all sum up. It's actually Gauss's Law of electromagnetic fields in physics...
See: http://en.wikipedia.org/wiki/Gauss%27s_law and note "the right-hand side of the equation is the total charge enclosed by S divided by the electric constant", noting the word "enclosed" - i.e. zero means not enclosed.
You can still do that optimization with the bounding boxes for speed.
In the past I've used GL_SELECT to determine which object(s) contributed the pixel(s) of interest and then used computational geometry to get an accurate intersection with the object(s) if required.
Do you expect to select by clicking the contour (on the edge) or the interior of the polygon? Your second approach sounds like you want clicks in the interior to select the tightest containing polygon. I don't think that GL_SELECT after rendering GL_LINE_STRIP is going to make the interior responsive to clicks.
If this was a true contour plot (from the image I don't think it is, edges appear to intersect) then a much simpler algorithm would be available.
You can't use select if you stay with the lines, because you would have to click on the rendered line pixels themselves, not the space inside the lines bounding them, which I read as what you wish to do.
You can use Kos's answer, but in order to render the space you need to solid-fill it, which would involve converting all of your contours to convex types, which is painful. So I think that would work sometimes and give the wrong answer in some cases unless you did that.
What you need to do is use the CPU. You have the view extents from the viewport and the perspective matrix. With the mouse coord, generate the view to mouse pointer vector. You also have all the coords of the contours.
Take the first coord of the first contour and make a vector from it to the second coord. Take the 3rd coord and make a vector from 2 to 3, and repeat all the way around your contour, finally making the last one from coord n back to 0 again. For each pair in sequence find the cross product and sum up all the results. When you have that final summation vector, keep hold of it and do a dot product with the mouse pointer direction vector. If it's +ve then the mouse is inside the contour, if it's -ve then it's not, and if it's 0 then I guess the plane of the contour and the mouse direction are parallel.
Do that for each contour and then you will know which of them are spiked by your mouse. It's up to you which one you want to pick from that set. Highest Z?
It sounds like a lot of work but it's not too bad and will give the right answer. You might additionally like to keep bounding boxes of all your contours; then you can early-out the ones the mouse vector misses by doing the same math as for the full contour but only on the 4 sides, and if it's not inside the box then the contour cannot be either.
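If it helps, this is one common way to build that view-to-mouse vector with gluUnProject; it's only a sketch, and it assumes the modelview/projection matrices are still the ones used for rendering:

#include <GL/glu.h>

// Build a world-space ray from the eye through the mouse cursor.
// mouseX/mouseY are window coordinates with the origin at the top-left.
void mouseRay(int mouseX, int mouseY, double rayStart[3], double rayDir[3])
{
    GLint viewport[4];
    GLdouble modelview[16], projection[16];
    glGetIntegerv(GL_VIEWPORT, viewport);
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);

    double winX = mouseX;
    double winY = viewport[3] - mouseY - 1;   // flip y: GL origin is bottom-left

    double nearPt[3], farPt[3];
    gluUnProject(winX, winY, 0.0, modelview, projection, viewport,
                 &nearPt[0], &nearPt[1], &nearPt[2]);
    gluUnProject(winX, winY, 1.0, modelview, projection, viewport,
                 &farPt[0], &farPt[1], &farPt[2]);

    for (int i = 0; i < 3; ++i) {
        rayStart[i] = nearPt[i];
        rayDir[i]   = farPt[i] - nearPt[i];   // not normalised; do so if needed
    }
}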
The first is easy to implement and widely used.
I want to make a 2D game in C++ using the Irrlicht engine. In this game, you will control a tiny ship in a cave of some sort. This cave will be created automatically (the game will have random levels) and will look like this:
Suppose I already have the points of the polygon of the inside of the cave (the white part). How should I render this shape on the screen and use it for collision detection? From what I've read around different sites, I should use a triangulation algorithm to make meshes of the walls of the cave (the black part) using the polygon of the inside of the cave (the white part). Then, I can also use these meshes for collision detection. Is this really the best way to do it? Do you know if Irrlicht has some built-in functions that can help me achieve this?
Any advice will be appreciated.
Describing how to get an arbitrary polygonal shape to render using a given 3D engine is quite a lengthy process. Suffice to say that pretty much all 3D rendering is done in terms of triangles, and if you didn't use a tool to generate a model that is already composed of triangles, you'll need to generate triangles from whatever data you have there. Triangulating either the black space or the white space is probably the best way to do it, yes. Then you can build up a mesh or vertex list from that, and render those triangles that way. The triangles in the list then also double up for collision detection purposes.
I doubt Irrlicht has anything for triangulation as it's quite specific to your game design and not a general approach most people would take. (Typically they would have a tool which permits generation of the game geometry and the navigation geometry side by side.) It looks like it might be quite tricky given the shapes you have there.
One option is to use the map (image mask) directly to test for collision.
For example,
if map_points[sprite.x][sprite.y] is black then
    collision detected
assuming that your objects are images and they aren't real polygons.
In case you use real polygons you can have a "points sample" for every object shape,
and check the sample for collisions.
To check whether a point is inside or outside your polygon, you can simply count crossings. You know (0,0) is outside your polygon. Now draw a line from there to your test point (X,Y). If this line crosses an odd number of polygon edges (e.g. 1), it's inside the polygon. If the line crosses an even number of edges (e.g. 0 or 2), the point (X,Y) is outside the polygon. It's useful to run this algorithm on paper once to convince yourself.
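Here is a small sketch of that crossing test; it uses the usual horizontal ray instead of a ray from (0,0), but the odd/even counting idea is exactly the same (Point and the vertex container are assumptions):

#include <vector>

struct Point { float x, y; };

// Even-odd rule: cast a horizontal ray from p to the right and count how many
// polygon edges it crosses. An odd count means p is inside.
bool pointInPolygon(const Point &p, const std::vector<Point> &poly)
{
    bool inside = false;
    for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        const Point &a = poly[i];
        const Point &b = poly[j];
        bool crossesScanline = (a.y > p.y) != (b.y > p.y);
        if (crossesScanline) {
            // x coordinate where the edge crosses the horizontal line through p
            float xCross = a.x + (p.y - a.y) * (b.x - a.x) / (b.y - a.y);
            if (p.x < xCross)
                inside = !inside;
        }
    }
    return inside;
}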