OpenGL: Easy way of stopping light at vertexes? - c++

I have a scene built in OpenGL. When my light is in the center of the room, the outside of the room is lit as well. Is there any easy way to make OpenGL stop the lighting at the vertices, or will it require complex calculations? Here are pictures of my crappy, quick scene showing the lighting as it is when asking this question:

Essentially, you want the walls of the room to cast shadows; that's what it means for the exterior of the object not to be lit.
Shadowing in graphics is generally a pretty hard problem. There are a lot of good solutions and a lot of fast solutions, but not both: any one solution is going to be a tradeoff between the two. SIGGRAPH is full of all sorts of papers from Really Smart People trying to solve this problem.
If you want something quick and dirty, shadow mapping is not terribly difficult (at least the simple kind), but it is imprecise. You'll see artifacts along the intersections of your object and the walls, for one. For precision, stencil shadows will work, but you'll have hard-edged shadows.

One solution here would probably be to author the geometry so that objects don't protrude outside the walls (or to separate them into inside/outside parts), then render the interior and exterior with different lights.
You wouldn't have this problem if everything cast shadows on everything else.
If the lights are static, you might want to consider pre-calculating (baking) the lighting (together with shadows). You can do this in many 3D packages, and from a programming perspective this might be the simplest solution.
If you had a general solution for rendering real-time shadows from every light, that would also solve your problem, but that is a challenging task, and it might not be the optimal thing to do if you want to maintain good frame rates.
If you want to learn about real-time shadow rendering, I recommend looking at shadow maps - they are generally the solution used in most games today. Note that for a point light you would need to render the shadow map for 6 sides of a cube-map. For practical purposes you should really consider which lights really need to cast shadows, and make some kind of trade-off.
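For what it's worth, below is a rough sketch of the setup side of that cube-map approach in plain OpenGL: a depth-only cube map plus an FBO that you render into once per face from the light's position. The resolution, texture formats, and the renderSceneDepthFromLight() helper are illustrative assumptions rather than code from the question, and it presumes a GL 3.x context with an extension loader already initialized.

    // Sketch: depth cube map + FBO for point-light shadow mapping.
    GLuint depthCubeMap = 0, shadowFBO = 0;
    const GLsizei SHADOW_SIZE = 1024;               // illustrative resolution

    glGenTextures(1, &depthCubeMap);
    glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubeMap);
    for (int face = 0; face < 6; ++face) {
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT24,
                     SHADOW_SIZE, SHADOW_SIZE, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
    }
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

    glGenFramebuffers(1, &shadowFBO);
    glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
    glDrawBuffer(GL_NONE);                          // depth only, no color buffer
    glReadBuffer(GL_NONE);

    // Each frame: render the scene's depth once per cube face, looking down
    // +X, -X, +Y, -Y, +Z, -Z from the light's position with a 90-degree FOV.
    for (int face = 0; face < 6; ++face) {
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, depthCubeMap, 0);
        glViewport(0, 0, SHADOW_SIZE, SHADOW_SIZE);
        glClear(GL_DEPTH_BUFFER_BIT);
        // renderSceneDepthFromLight(face);         // hypothetical helper in your code
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);           // back to the default framebuffer

In the lighting pass you then compare each fragment's distance from the light against the depth stored in the cube map to decide whether it is in shadow.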


test if square overlaps poly in c++ w/ directx (optional)

How would I go about checking whether a triangular poly is present within a square area? (i.e., picture a grid of squares overlaying a group of 2D polys.)
Or even better, how can I determine the percentage of one of these squares that is occupied by a given poly (if at all)?
I've used DirectX before, but I can't seem to find the right combination of functions in its documentation, though it feels like something from ray tracing might be relevant.
I use C++ and can use DirectX if helpful.
Thanks for any suggestions or ideas. :)
You might consider the clipper library for doing generic 2D polygon clipping, area computation, intersection testing, etc. It is fairly compact and easy to deal with, and has decent examples of how to use it.
It is an implementation of the Vatti clipping algorithm and will handle many odd edge cases (which may be overkill for you).
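If it helps, here is roughly what the square-vs-triangle overlap test could look like with Clipper. This assumes the Clipper 6.x C++ API (integer coordinates, Path/Paths, ctIntersection, and the free Area() function), so double-check it against the library's own documentation.

    #include <cmath>
    #include "clipper.hpp"   // Clipper 6.x assumed to be on the include path
    using namespace ClipperLib;

    // Fraction of the square's area covered by the triangle. Clipper works on
    // integer coordinates, so scale float coordinates up (e.g. by 1000) when
    // building the Paths.
    double coveredFraction(const Path &square, const Path &triangle)
    {
        Clipper c;
        c.AddPath(triangle, ptSubject, true);   // closed subject polygon
        c.AddPath(square,   ptClip,    true);   // closed clip polygon

        Paths solution;
        c.Execute(ctIntersection, solution);

        double overlap = 0.0;
        for (const Path &p : solution)
            overlap += std::fabs(Area(p));      // Area() is signed by orientation
        return overlap / std::fabs(Area(square));
    }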
There are a few ways to do this and it's essentially a clipping problem.
One way is to use the Cohen–Sutherland algorithm: http://en.wikipedia.org/wiki/Cohen%E2%80%93Sutherland
You would run the algorithm 3 times (once for each triangle edge).
You can then find the percentage of area occupied by calculating area(clipped_triangle) / area(square_region).
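One note: Cohen–Sutherland clips line segments, so if you want the clipped polygon whose area you are going to measure, Sutherland–Hodgman (clipping the triangle against each of the square's four edges) is a closer fit. Here is a small self-contained sketch along those lines, assuming an axis-aligned square; it is an illustration, not code from either answer.

    #include <vector>
    #include <cmath>

    struct Pt { double x, y; };

    // Clip a polygon against one half-plane where keep(p) == true.
    // Called four times, once per edge of the axis-aligned square.
    template <class Keep, class Intersect>
    static std::vector<Pt> clipAgainstEdge(const std::vector<Pt> &poly, Keep keep, Intersect cross)
    {
        std::vector<Pt> out;
        for (size_t i = 0; i < poly.size(); ++i) {
            Pt a = poly[i], b = poly[(i + 1) % poly.size()];
            bool ina = keep(a), inb = keep(b);
            if (ina) out.push_back(a);
            if (ina != inb) out.push_back(cross(a, b));   // edge crosses the boundary
        }
        return out;
    }

    // Shoelace formula for polygon area.
    static double area(const std::vector<Pt> &poly)
    {
        double s = 0.0;
        for (size_t i = 0; i < poly.size(); ++i) {
            const Pt &a = poly[i], &b = poly[(i + 1) % poly.size()];
            s += a.x * b.y - b.x * a.y;
        }
        return std::fabs(s) * 0.5;
    }

    // Fraction of the square [x0,x1] x [y0,y1] covered by the triangle.
    double triangleCoverage(std::vector<Pt> tri, double x0, double y0, double x1, double y1)
    {
        auto lerpX = [](Pt a, Pt b, double x) { double t = (x - a.x) / (b.x - a.x); return Pt{x, a.y + t * (b.y - a.y)}; };
        auto lerpY = [](Pt a, Pt b, double y) { double t = (y - a.y) / (b.y - a.y); return Pt{a.x + t * (b.x - a.x), y}; };

        tri = clipAgainstEdge(tri, [&](Pt p){ return p.x >= x0; }, [&](Pt a, Pt b){ return lerpX(a, b, x0); });
        tri = clipAgainstEdge(tri, [&](Pt p){ return p.x <= x1; }, [&](Pt a, Pt b){ return lerpX(a, b, x1); });
        tri = clipAgainstEdge(tri, [&](Pt p){ return p.y >= y0; }, [&](Pt a, Pt b){ return lerpY(a, b, y0); });
        tri = clipAgainstEdge(tri, [&](Pt p){ return p.y <= y1; }, [&](Pt a, Pt b){ return lerpY(a, b, y1); });

        if (tri.size() < 3) return 0.0;                   // no overlap
        return area(tri) / ((x1 - x0) * (y1 - y0));
    }

With the clipped polygon in hand, the coverage percentage is exactly the area ratio described above.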

What is so bad about GL_QUADS?

I hear that GL_QUADS is going to be removed in OpenGL versions > 3.0. Why is that? Will my old programs not work in the future then? I have benchmarked, and GL_TRIANGLES and GL_QUADS show no difference in render speed (GL_QUADS might even be faster). So what's the point?
The point is that your GPU renders triangles, not quads, and it is pretty much trivial to construct a rectangle from two triangles, so the API doesn't really need to be burdened with the ability to render quads natively. OpenGL is going through a major trimming process, cutting a lot of functionality that made sense 15 years ago but no longer matches how the GPU works, or how the GPU is ever going to work. The fixed-function pipeline is gone from the latest versions too, I believe, because, once again, it's no longer necessary and it no longer matches how the GPU works (programmable shaders).
The point is that the smaller and tighter the OpenGL API can be made, the easier it is for vendors to write robust, high-performance drivers, and the easier it is to learn to use the API correctly and efficiently.
A few years ago, practically anything in OpenGL could be done in 3-5 different ways, which put a lot of burden on the developer to figure out which implementation is the right one if you want optimal performance.
So they're trying to streamline the API.
People have already answered your question quite well. On top of their answers, one of the reasons GL_QUADS was deprecated is the undefined nature of quads.
For example, try to model a 2D square with the points (0,0,0), (1,0,0), (1,1,1), (0,1,0). This is a quad with one corner dragged up; it is impossible to draw a normal flat square that way. Depending on the driver, it will be split into two triangles one way or the other, and we can't control which. Such a shape must be modeled with two triangles, since the three points of a triangle always lie on the same plane.
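To make that concrete, here is roughly how the same four corners end up as two explicit triangles in an index buffer. Which diagonal you split along is now your decision rather than the driver's; for the non-planar quad above, the two choices give visibly different surfaces.

    // The four corners of the example "quad" (one corner lifted in Z).
    float vertices[4][3] = {
        {0.0f, 0.0f, 0.0f},   // 0
        {1.0f, 0.0f, 0.0f},   // 1
        {1.0f, 1.0f, 1.0f},   // 2  <- the corner dragged up
        {0.0f, 1.0f, 0.0f},   // 3
    };

    // Split along the 0-2 diagonal...
    unsigned int indicesA[6] = { 0, 1, 2,   0, 2, 3 };

    // ...or along the 1-3 diagonal. For a non-planar quad these two index
    // buffers describe different surfaces, which is exactly the ambiguity
    // you cannot control when you hand the driver a GL_QUAD.
    unsigned int indicesB[6] = { 0, 1, 3,   1, 2, 3 };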
It isn't "going" to be anything. As with a lot of other functionality, GL_QUADS was deprecated in version 3.0 and removed in version 3.1. Obviously this is all irrelevant if you create a compatibility context.
Any answer that anyone might give for the reason for deprecating them would be sheer speculation.

Best OpenGL culling method for old game rendering? [closed]

I am rendering some old geometry from an old game. Their client had some algorithm which allowed them to see which areas were nearby, but I don't have that ability, so I am looking into culling the unnecessary polygons. Currently, I am rendering every single polygon in the whole zone, regardless of whether I can see it or not and regardless of whether it is even in visual range. Obviously this is completely inefficient.
What type of culling should I look into using?
I know I can cull polygons not in the frustum and that will help alleviate some of the load, but would I be able to, say, choose not to render polygons that are a certain distance from the camera? What is this called? I am also using fog in some areas; the same question applies there: can I cull everything that is behind the fog, i.e. the area I cannot see?
There are two different things to consider: Do you just want it to look right, i.e. hidden surface removal? Then simple depth testing will do the job; the overhead is that you process geometry that doesn't make it onto the screen at all. However, if the data comes from a (very) old game, it's very likely that a full map with all its assets has fewer polygons than what's visible on a single screen in a modern game. In that case you won't run into any performance problems.
If you really do run into performance problems, you'll need to find a balance between how much time you want to spend determining what is (not) visible and actually rendering it. Ten years ago it was still crucial to be almost pixel perfect to save as much rasterizing time as possible. Modern GPUs have so much spare power that it suffices to do just a coarse selection of what to include in rendering.
These calculations are, however, completely outside the scope of OpenGL or any other 3D rasterizing API (e.g. Direct3D): their task is just drawing triangles to the screen using sophisticated rasterization methods; there is no object management and there are no higher-level functions. So it's up to you to implement this.
The typical approach is to use a spatial subdivision structure. The most popular are kd-trees, octrees and BSP trees. BSP trees are spatially very efficient but heavier in computation. Personally, I prefer a hybrid/combination of a kd-tree and an octree, since those are easy to modify to follow dynamic changes in the scene. BSP trees are a lot heavier to update (usually requiring a full recomputation).
Given such a spatial structure, it's very easy to determine whether a point lies in a specific region of interest. It is also very simple to select nodes in the tree by geometric constraints, such as planes. This makes implementing coarse frustum culling very easy: you use the frustum clipping planes to select all the nodes from the tree that lie within the planes. To make the GPU's life easier you might then want to sort the nodes near to far; again, the tree structure helps you there, as you can sort recursively down the tree, resulting in a nearly optimal O(n log n) complexity.
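As a sketch of that "select nodes by the frustum planes" step: each node's axis-aligned bounding box can be tested against the six clip planes, and a node that is entirely outside any one plane is culled together with its whole subtree. The plane and box representations below are just one common convention, not code from this answer.

    #include <cmath>

    struct Plane { float nx, ny, nz, d; };          // nx*x + ny*y + nz*z + d >= 0 means "inside"
    struct AABB  { float cx, cy, cz, ex, ey, ez; }; // center and half-extents

    // Conservative test: returns false only if the box is fully outside one plane.
    bool aabbInFrustum(const AABB &box, const Plane planes[6])
    {
        for (int i = 0; i < 6; ++i) {
            const Plane &p = planes[i];
            // Signed distance of the box center to the plane...
            float dist = p.nx * box.cx + p.ny * box.cy + p.nz * box.cz + p.d;
            // ...and the box's projected radius onto the plane normal.
            float radius = std::fabs(p.nx) * box.ex
                         + std::fabs(p.ny) * box.ey
                         + std::fabs(p.nz) * box.ez;
            if (dist < -radius)
                return false;   // entirely behind this plane: cull node and subtree
        }
        return true;            // intersecting or inside: recurse into the children
    }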
If you still need to improve rendering performance, you could use the spatial divisions defined by the tree to (invisibly) render test geometry in an occlusion query before recursing into the subtree limited by the tested bounds.
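A bare-bones version of such an occlusion query with plain OpenGL might look like the following; drawNodeBoundingBox() is a hypothetical helper of your own, and in practice you would batch queries and reuse query objects rather than stalling on each result immediately.

    GLuint query;
    glGenQueries(1, &query);

    // Render the node's bounding box "invisibly": no color or depth writes,
    // but depth testing still on, so an occluded box produces zero samples.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);

    glBeginQuery(GL_SAMPLES_PASSED, query);
    // drawNodeBoundingBox(node);               // hypothetical: draw the node's AABB
    glEndQuery(GL_SAMPLES_PASSED);

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    GLuint samples = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);  // note: this read can stall
    if (samples > 0) {
        // Some part of the box is visible: recurse into / render the subtree.
    }
    glDeleteQueries(1, &query);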
I know I can cull polygons not in the frustum and that will help alleviate some of the load, but would I be able to, say, choose not to render polygons that are a certain distance from the camera? What is this called?
This is already done by the frustum itself. The far plane sets a limit on how far from the camera objects will still be rendered.
Have a look at glFrustum.
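With the fixed-function matrix stack, the last argument of glFrustum (or the far argument of gluPerspective) is exactly that distance cutoff; the numbers below are only placeholders.

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // left, right, bottom, top, zNear, zFar: nothing farther than zFar (500 here)
    // from the camera survives clipping.
    glFrustum(-1.0, 1.0, -0.75, 0.75, 1.0, 500.0);
    glMatrixMode(GL_MODELVIEW);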

Directx 9 Terrain collision

I searched and found some tutorials on how to do terrain collision, but they were using .raw files and I'm using .x. Still, I think I can do the same thing they did: they took the x, y, z values of an object and checked them against every single triangle in the terrain. It makes sense, but it looks like it will be slow. It's just like picking: checking against every single triangle is slow.
Is there a faster way to do it that is still good?
UPDATE
My terrain is not flat; if it were, I would use bounding boxes.
Last time I did this, I used the Bullet library, and it worked great. It has various collision shapes to choose from, optimised for different scenarios, including general triangle meshes and heightfields. You can use the library's collision routines without the physics.
One common way to significantly reduce the time it takes to detect collisions is to organize the space into an octree, which will allow you to very quickly determine whether or not a collision could occur in a particular node. Generally speaking, it's easier to accomplish these sorts of tasks with a game engine.
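The same idea in its simplest form, as a sketch rather than actual octree or Bullet code: a uniform grid over the terrain's X/Z extent that remembers which triangles touch each cell, so a collision test only examines the handful of triangles in the cell under the object instead of the whole mesh. All names here are made up for illustration.

    #include <vector>
    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Triangle { Vec3 a, b, c; };

    class TerrainGrid {
    public:
        TerrainGrid(float minX, float minZ, float maxX, float maxZ, int cells)
            : minX_(minX), minZ_(minZ), cells_(cells),
              cellW_((maxX - minX) / cells), cellD_((maxZ - minZ) / cells),
              buckets_(cells * cells) {}

        // Register a triangle in every cell its X/Z bounding rectangle touches.
        void add(int triIndex, const Triangle &t) {
            int x0 = cellOfX(std::min({t.a.x, t.b.x, t.c.x}));
            int x1 = cellOfX(std::max({t.a.x, t.b.x, t.c.x}));
            int z0 = cellOfZ(std::min({t.a.z, t.b.z, t.c.z}));
            int z1 = cellOfZ(std::max({t.a.z, t.b.z, t.c.z}));
            for (int z = z0; z <= z1; ++z)
                for (int x = x0; x <= x1; ++x)
                    buckets_[z * cells_ + x].push_back(triIndex);
        }

        // Candidate triangles under a world-space position: only these need the
        // exact point-vs-triangle (or ray-vs-triangle) test.
        const std::vector<int> &candidates(float worldX, float worldZ) const {
            return buckets_[cellOfZ(worldZ) * cells_ + cellOfX(worldX)];
        }

    private:
        int cellOfX(float x) const { return clampCell((x - minX_) / cellW_); }
        int cellOfZ(float z) const { return clampCell((z - minZ_) / cellD_); }
        int clampCell(float c) const { return std::max(0, std::min(cells_ - 1, (int)c)); }

        float minX_, minZ_;
        int cells_;
        float cellW_, cellD_;
        std::vector<std::vector<int>> buckets_;
    };

Usage would be: fill the grid once from the .x mesh's triangles, then each frame call candidates() with the object's position and run the exact triangle test only on the returned indices.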

Playing with geometry?

Does anyone have some useful beginner tutorials and code snippets for playing with basic geometric shapes and geometric proofs in code?
In particular, something with the ability to easily create functions and recursively draw them on the screen. Additional, but not absolute, requirements: support for Objective-C and basic window drawing routines for OS X and Cocoa.
A specific question: how would one write a test to validate that a shape is in fact a square, triangle, etc.? The idea is that you could draw a bunch of shapes, fit them together, and then test and analyze the emergent shape that arises from the set of sub-shapes.
This is not a homework question. I am not in school. Just wanted to experiment with drawing code and geometry. And looking for an accessible way to play and experiment with shapes and geometry programming.
I am open to Java and Processing, or Actionscript/Haxe and Flash but would also like to use Objective C and Xcode to build projects as well.
What I am looking for are some clear tutorials to get me started down the path.
Some specific applications include clear examples of how to display for example parts of a Cantor Set, Mandelbrot Set, Julia set, etc...
As an aside, I was reading on Wikipedia about "Russell's Paradox", and the article stated:
Let us call a set "abnormal" if it is a member of itself, and "normal" otherwise. For example, take the set of all squares. That set is not itself a square, and therefore is not a member of the set of all squares. So it is "normal". On the other hand, if we take the complementary set that contains all non-squares, that set is itself not a square and so should be one of its own members. It is "abnormal".
The point about squares seems intuitively wrong to me. All the squares added together seem to imply a larger square. Obviously I get the larger paradox about sets. But what I am curious about is playing around with shapes in code and analyzing them empirically in code. So for example a potential routine might be draw four squares, put them together with no space between them, and analyze the dimensions and properties of the new shape that they make.
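As one concrete version of that "analyze the emergent shape" idea: four corner points form a square exactly when, of their six pairwise squared distances, the four smallest are equal (the sides) and the two largest are equal to each other and to twice the side. A small sketch (the names are mine, not from any particular library):

    #include <vector>
    #include <algorithm>
    #include <cmath>

    struct Point { double x, y; };

    static double dist2(const Point &a, const Point &b) {
        double dx = a.x - b.x, dy = a.y - b.y;
        return dx * dx + dy * dy;
    }

    // True if the four points (in any order) are the corners of a square.
    bool isSquare(const Point p[4], double eps = 1e-9)
    {
        std::vector<double> d;
        for (int i = 0; i < 4; ++i)
            for (int j = i + 1; j < 4; ++j)
                d.push_back(dist2(p[i], p[j]));
        std::sort(d.begin(), d.end());
        // d[0..3] should be the four equal sides, d[4..5] the two equal
        // diagonals, with diagonal^2 == 2 * side^2.
        return d[0] > eps &&
               std::fabs(d[0] - d[3]) < eps &&
               std::fabs(d[4] - d[5]) < eps &&
               std::fabs(d[4] - 2.0 * d[0]) < eps;
    }

The four outer corners of four unit squares placed edge to edge then pass the same test as a single square of side 2, which is one way to probe that "larger square" intuition empirically.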
Perhaps even allowing free hand drawing with a mouse. But for now just drawing in code is fine.
If you're willing to use C++ I would recommend two libraries:
boost::GGL generic geometry library handles lots of geometric primitives such as polygons, lines, points, and so forth. It's still pretty new, but I have a feeling that it's going to be huge when it's officially added into boost.
CGAL, the Computational Geometry Algorithms Library: this thing is huge, and will do almost anything you'll ever need for geometry programming. It has very nice bindings for Qt as well if you're interested in doing some graphical stuff.
I guess OpenGL might not be the best starting point for this. It's quite low-level, and you will have to fight with unexpected behavior and actual driver issues. If you emphasize the "playing" part, go for Processing. It's a programming environment specifically designed to play with computer graphics.
However, if you really want to take the shape-testing path, an in-depth study of computer vision algorithms is inevitable. On the other hand, if you just want to compare your shapes to a reference image, without rotation, scaling, or other distortions, the Visual Difference Predictor library might help you.
I highly recommend NeHe for any beginner OpenGL programmer, once you complete the first few tutorials you should be able to have fun with geometry any way you want.
Hope that helps