Cylinder Normals - OpenGL

I'm developing an OpenGL application which right now only draws a big tube consisting of several small cylinders (kind of like a slinky). I'm getting an annoying artifact when I turn lighting and normals on: from certain angles there are black dots along the cylinders' borders:
I believe this is a byproduct of the fact that the cylinders are very thin. Basically I set the normal to (0, 0, +/-1) for the top/base caps, and the side normals to (cos(toRadian(beta)), sin(toRadian(beta)), 0).
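Roughly, the relevant part of my code looks like this (sketched from memory; names are simplified):

#include <cmath>
#include <GL/gl.h>

// toRadian converts degrees to radians, as used in the question above.
inline float toRadian(float deg) { return deg * 3.14159265f / 180.0f; }

void emitNormals(int slices) {
    glNormal3f(0.0f, 0.0f, 1.0f);     // top cap: normal along +z
    // ... emit top-cap vertices ...
    glNormal3f(0.0f, 0.0f, -1.0f);    // base cap: normal along -z
    // ... emit base-cap vertices ...
    for (int i = 0; i <= slices; ++i) {
        float beta = i * 360.0f / slices;
        // side normal: radial, zero z component
        glNormal3f(std::cos(toRadian(beta)), std::sin(toRadian(beta)), 0.0f);
        // ... emit the two side vertices for this slice ...
    }
}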
Is it possible to remove this effect without making the cylinders 'fatter'? Or is there something wrong with the way I define the normals?
Thanks

This appears to be the rendering of the sides of the cylinders. The yellow in the image corresponds to the tops of the cylinders. The sides are at 90 degrees to the tops, so they are not lit (I'm guessing the light points in the same direction as the camera) and appear black. With the cylinders being so thin, the sides will not fill many pixels, so they don't show up much.
How to fix it? I've got a few ideas:
1) Just draw the tops and bottoms, not the sides - this will definitely fix the problem when viewed from this angle, but will lead to further problems if your camera moves.
2) Disable lighting, then all faces will be drawn the same colour (assuming the sides are the same colour as the top).
3) Use multi-sampling - this means most of the edges won't appear (as each pixel is more likely to be top than side due to the angle).
4) Add more lights around the scene, perpendicular to the current light.
1 & 2 are your best bet depending on what you're trying to achieve.

Related

Rendering Point Sprites across cameras in cube maps

I'm rendering a particle system of vertices, which are then tessellated into quads in a geom shader, and textured/rendered as point sprites. Then they are scaled in size depending on how far away they are from the camera. I'm trying to render out every frame of my scene into cube maps. So essentially I place six cameras into my scene and point them in each direction for the face of the cube and save an image.
My point sprites are of varying sizes. When they near the border of one camera, (if they are large enough) they appear in two cameras simultaneously. Since point sprites are always facing the camera, this means that they are not continuous along the seam when I wrap my cube map back into 3d space. This is especially noticeable when the points are quite close to the camera, as the points are larger, and stretch further across into both camera views. I'm also doing some alpha blending, so this may be contributing to the problem as well.
I don't think I can just cull points that are near the edge of the camera, because when I put everything back into 3d I'd think there would be strange areas where the cloud is more sparsely populated. Another thought I had would be to blur the edges of each camera, but I think this too would give me a weird blurry zone when I go back to 3d space. I feel like I could manually edit the frames in photoshop so they look ok, but this would be kind of a pain since it's an animation at 30fps.
The image attached is a detail from the cube map. You can see the horizontal seam where the sprites are not lining up correctly, and a slightly less noticeable vertical one on the right side of the image. I'm sure that my camera settings are correct, because I've used this same camera setup in other scenes and my cubemaps look fine.
Anyone have ideas?
I'm doing this in openFrameworks / OpenGL fwiw.
Instead of facing the camera plane, make them face the shared origin of the cameras? Not sure if this fixes everything, but intuitively I'd say it should look close to OK. Maybe this is already what you do, I have no idea.
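Roughly what I mean, sketched with GLM (an assumption on my part, since you're in openFrameworks):

#include <glm/glm.hpp>

// Spherical billboard: build the quad's basis from the vector to the shared
// camera origin, so adjacent cube-map faces see the same orientation.
void billboardBasis(const glm::vec3& particlePos, const glm::vec3& camPos,
                    glm::vec3& right, glm::vec3& up) {
    glm::vec3 toCam = glm::normalize(camPos - particlePos);
    // pick any world up that isn't parallel to toCam
    glm::vec3 worldUp(0.0f, 1.0f, 0.0f);
    right = glm::normalize(glm::cross(worldUp, toCam));
    up    = glm::cross(toCam, right);
    // expand the quad as particlePos +/- right*halfSize +/- up*halfSize
}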
(I'd like for this to be a comment, but no reputation)

OpenGL/OpenTK Fill Interior Space

I am looking for a way to "fill" three-dimensional geometry with color, and quite possibly a texture at some time later on.
Suppose for a moment that you could physically phase your head into a concrete wall; logically, you would see only darkness. In OpenGL, however, when you do this the world is naturally hollow and transparent because of culling and how the geometry is drawn. I want to simulate the darkness/color/texture within it instead.
I know some games do this by overlaying a texture/color directly over the HUD, therefore blinding the player.
Is there another way to do this, though? Suppose the player is standing half in water; they can partially see below the waves. How would you fill it to prevent them from being able to see clearly below what is now half of their screen?
What is this concept even called?
A problem with the texture-in-front-of-the-camera method is a texture is 2D but you want to visualize a slice of a 3D volume. For the first thing you talk about, the head-inside-a-wall idea, I'll point you to "3D/volume texturing". For standing-half-in-water, you're after "volume rendering" with "absorption" (discussed by #user3670102).
3D texturing
The general idea here is you have some function that defines a colour everywhere in a 3D space, not just on a surface (as with regular texture mapping). This is nice because you can put geometry anywhere and colour it in the fragment shader based on the 3D position. Think of taking a slice through the volume and looking at the intersection colour.
For the head-in-a-wall effect you could draw a full screen polygon in front of the player (right on the near clipping plane, although you might want to push this forwards a bit so it's not too small) and colour it based on a 3D function. Now it'll look properly solid, move as the player does, and not look like you've cheaply stuck a texture over the screen.
The actual function could be defined with a 3D texture, but that's very memory intensive. Instead you could look into procedural 3D colour (a procedural wood or brick shader is a common example). Even a 2D texture "extruded" through the volume will work, or better yet, blend three textures (one for each axis) weighted by the angle of the intersection/surface you're drawing on.
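A sketch of that three-texture weighting (fragment-shader logic written here with GLM types; the sample functions are hypothetical stand-ins for your texture lookups):

#include <glm/glm.hpp>

// Hypothetical per-axis texture lookups; stand-ins for texture2D calls.
glm::vec3 sampleX(float y, float z);
glm::vec3 sampleY(float x, float z);
glm::vec3 sampleZ(float x, float y);

// Weight three planar projections by how much the normal faces each axis.
glm::vec3 triplanar(const glm::vec3& pos, const glm::vec3& normal) {
    glm::vec3 w = glm::abs(normal);
    w /= (w.x + w.y + w.z);        // normalise so the weights sum to 1
    return sampleX(pos.y, pos.z) * w.x
         + sampleY(pos.x, pos.z) * w.y
         + sampleZ(pos.x, pos.y) * w.z;
}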
Detecting an intersection with the geometry and the near clipping plane is probably the hardest bit here. If I were you I'd look at tricks with the z-buffer and make sure to draw everything as solid non-self-intersecting geometry. A simple idea might be to draw back faces only after drawing everything with front faces. If you can see back faces that part of the near plane must be inside something. For these pixels you could calculate the near clipping plane position in world space and apply a 3D texture. Though I suspect there are faster ways than drawing everything twice.
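A minimal sketch of that two-pass idea (assuming closed geometry and a normal depth buffer; the draw calls are placeholders):

// Pass 1: the scene as usual, back faces culled.
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
drawScene();                      // placeholder

// Pass 2: back faces only. Any back face that passes the depth test marks a
// pixel where the near plane is inside an object; shade those with the 3D function.
glCullFace(GL_FRONT);
drawSceneWithVolumeColour();      // placeholder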
In reality there would probably be no light getting to what you see and it should be black, but I guess just ignore this and render the colour directly, unlit.
Absorption
This sounds way harder than it actually is. If you have some transparent solid that's all one colour ("homogeneous") then it removes more light the further the light has to travel through it. Think of many alpha-transparent surfaces; take the limit and you have an exponential. The light remaining is exp(-dist), i.e. 1/exp(dist). Google "Beer's Law". From here,
vec3 Absorbance = WaterColor * WaterDensity * -WaterDepth;
vec3 Transmittance = exp(Absorbance);
A great way to find distances through something is to render the back faces (or seabed/water floor) with additive blending using a shader that draws distance to a floating point texture. Then switch to subtractive blending and render all the front faces (or water surface). You're left with a texture containing distances/depth for the above equation.
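The blending setup for those two passes might look like this (a sketch; the distance-writing shader and the draw calls are assumed):

// Accumulate back-face distances, then subtract front-face distances,
// leaving per-pixel thickness in a float render target.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glBlendEquation(GL_FUNC_ADD);
drawBackFacesWithDistanceShader();          // placeholder pass

glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);  // computes dest - src
drawFrontFacesWithDistanceShader();         // placeholder pass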
Volume Rendering
Combining the two ideas, the material is both a transparent solid and one whose colour (and maybe density) varies throughout the volume. This starts to get pretty complicated if you have large amounts of data and want it to be fast. A straightforward way to render this is to numerically integrate a ray through the 3D texture (or procedural function, whatever you're using), applying the absorption function as you go. A basic brute-force Euler integration might start a ray for each pixel on the near plane, then march forwards in even steps. At each step you assume the colour remains constant and apply absorption, keeping track of how much light you have left. A quick google brings up this.
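A sketch of that march (density() and emission() are hypothetical stand-ins for whatever volume lookup you use):

#include <cmath>
#include <glm/glm.hpp>

float density(const glm::vec3& p);       // hypothetical volume lookups
glm::vec3 emission(const glm::vec3& p);

glm::vec3 marchRay(glm::vec3 pos, const glm::vec3& dir, int steps, float dt) {
    glm::vec3 result(0.0f);
    float remaining = 1.0f;                            // light not yet absorbed
    for (int i = 0; i < steps; ++i) {
        float d = density(pos);
        result += emission(pos) * d * remaining * dt;  // colour picked up here
        remaining *= std::exp(-d * dt);                // Beer's law per step
        pos += dir * dt;
    }
    return result;
}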
This seems related to looking through what's called "participating media". On the less extreme end, you'd have light fog, or smoky haze. In the middle could be, say, dirty water. And the extreme case would be your head-in-the-wall example.
Doing this in a physically accurate way isn't trivial, because the darkening effect is more pronounced the thicker the medium is.
But you can fake this by making some assumptions and making the interior geometry (under the water or inside the wall) darker, via reduced lighting or darker colors. If you care about the depth effect, look at OpenGL fog.
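Fixed-function fog is only a few calls (the values here are guesses for murky water):

glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_EXP2);                     // exponential falloff with depth
GLfloat fogColor[4] = { 0.1f, 0.3f, 0.4f, 1.0f }; // murky blue-green tint
glFogfv(GL_FOG_COLOR, fogColor);
glFogf(GL_FOG_DENSITY, 0.15f);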
For underwater, you can make the back side of the water a semi-transparent color that causes stuff above it to have a suitable change in color.
If you really want to go nuts with accuracy, look at Kajiya's rendering equation. That covers everything (including stuff that glows), but generally needs simplification and approximations to be more useful.

How to draw smooth lines without using GLSL, FSAA nor GL_LINE_SMOOTH?

So I need a method to draw smooth lines without using:
Full Screen Antialiasing (slow)
Shaders (not supported on all cards)
GL_LINE_SMOOTH (causes a crash on some cards)
The only way I could think of doing this was to use a textured rectangle that always faces the camera, but the problems are:
1. How do I always face the rectangle at the camera (efficiently)?
2. How do I keep its size the same no matter how far away the camera is?
Any other ideas?
Billboarding is a simple concept, but it can be difficult to implement. A billboard is a flat object, usually a quad (square), which faces the camera. This direction usually changes constantly at runtime as the object and camera move, so the object needs to be rotated each frame to point in that direction. There are two types of billboarding: point and axis. Point sprites, or point billboards, are quads centered at a point; the billboard rotates about that central point to face the user. Axis billboards come in two types: axis-aligned and arbitrary. Axis-aligned (AA) billboards always have one local axis aligned with a global axis and are rotated about that axis to face the user; arbitrary-axis billboards can be rotated about any axis to face the user.
http://nehe.gamedev.net/data/articles/article.asp?article=19
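One classic trick from write-ups like that one: pull the camera's right and up axes straight out of the modelview matrix, so every quad faces the view plane with no per-quad trigonometry. A sketch:

// The modelview matrix is column-major; the rows of its upper 3x3 are the
// camera's world-space axes.
float m[16];
glGetFloatv(GL_MODELVIEW_MATRIX, m);
float right[3] = { m[0], m[4], m[8] };
float up[3]    = { m[1], m[5], m[9] };
// emit quad corners at center +/- right*halfSize +/- up*halfSize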
You can use point sprites; they are always the same size and always face the camera.
http://www.opengl.org/registry/specs/ARB/point_sprite.txt
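Minimal setup per that spec (a sketch; check that the extension is available first):

glEnable(GL_POINT_SPRITE_ARB);
glTexEnvi(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE); // generate texcoords
glPointSize(4.0f);      // constant on-screen size, independent of distance
glBegin(GL_POINTS);
// ... one glVertex3f per line vertex ...
glEnd();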

What is back face culling?

What exactly is back face culling in OpenGL? Can you give me a specific example with e.g. one triangle?
If you look carefully you can see examples of this in a lot of video games. Any time the camera accidentally moves through an object - typically a moving object like a character - notice how the world continues to render correctly. That's because the back sides of the triangles that form the skin of the character are not rendered; they are effectively transparent. If this were not the case then every time the camera accidentally moved inside an object either the screen would go black (because the interior of the object is not lit) or you'd get a bizarre perspective on what the skin of the object looks like from the inside.
Back face culling is where the triangles pointing away from the camera/viewpoint are not considered for drawing.
Wikipedia defines this as:
It is a step in the graphics pipeline that tests whether the points of the polygon appear in clockwise or counter-clockwise order when projected onto the screen. If the user has specified that front-facing polygons have a clockwise winding, but the polygon projected on the screen has a counter-clockwise winding, then it has been rotated to face away from the camera and will not be drawn.
Other systems use the face normal and do the dot product with the view direction.
It is a relatively quick way of deciding whether to draw a triangle or not. Consider a cube. At any one time 3 of the sides of the cube are going to be facing away from the user and hence not visible. Even if these were drawn they would be obscured by the three "forward" facing sides. By performing back face culling you are reducing the number of triangles drawn from 12 to 6 (2 per side).
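Enabling it in OpenGL takes three calls (counter-clockwise front faces shown, which is the default):

glEnable(GL_CULL_FACE);   // turn culling on
glCullFace(GL_BACK);      // discard faces pointing away from the camera
glFrontFace(GL_CCW);      // counter-clockwise winding marks a front face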
Back face culling works best with closed "solid" objects such as cubes, spheres, walls.
Some systems don't have this as they consider faces to be double sided and hence visible from either direction.
It's only an optimization technique.
When you look at a closed object, say a cube, you only see about half the faces: the faces that are towards you (or, at least, the faces that are not towards you are always occluded by another face that points towards you).
If you skip drawing all these backwards-facing faces, it will have two consequences :
- the rendering time will be roughly halved (on average)
- the final render won't change (since another, front-facing face will always be drawn on top of a "culled" one)
So you basically get 2x perf for free.
In order to know whether a triangle is front- or back-facing, you take v0-v1 and v0-v2 and compute their cross product. This gives you the face normal. If this vector points towards you (dot(normal, viewVector) < 0), draw it.
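That test in code might look like this (GLM used here, purely my choice for the vector maths):

#include <glm/glm.hpp>

// Cross product of two edges gives the face normal; its sign against the
// view direction tells you which side you're seeing.
bool isFrontFacing(const glm::vec3& v0, const glm::vec3& v1,
                   const glm::vec3& v2, const glm::vec3& viewVector) {
    glm::vec3 normal = glm::cross(v0 - v1, v0 - v2);
    return glm::dot(normal, viewVector) < 0.0f;   // towards you -> draw it
}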
Triangles have their vertices specified in a specific winding order; in OpenGL the default for front faces is counter-clockwise.
When the graphics engine looks at a triangle from a specific direction and the vertices appear in the opposite (clockwise) order, it knows it's looking at the back side of the triangle through an object. As the front side of the object covers that triangle, it doesn't have to be drawn.

OpenGL Spotlight shining through from rear-face

I have a Spotlight source in OpenGL, pointing towards a texture mapped sphere.
I rotate the lightsource with the sphere, such that if I rotate the sphere to the 'non-light' side, that side should be dark.
The odd part is, the spotlight seems to be shining through my sphere (it's a solid, with no gaps between triangles). The light seems to be 'leaking' through to the other side.
Any thoughts on why this is happening?
Screenshots:
Front view, low light to emphasize the problem
Back view, notice the round area that is 'shining through'
It's really hard to tell from the images, but:
Check if GL_LIGHT_MODEL_TWO_SIDE is being set (two sided lighting), but more importantly have a look at the normals of the sphere you are rendering.
Edit: Also - change the background colour to something lighter. Oh, and make sure you aren't rendering with alpha blending turned on (maybe it's a polygon sorting issue).
OK, I'm a noob - I was specifying my normals, but not calling glEnableClientState(GL_NORMAL_ARRAY). Hence all normals were facing one direction (I think that's the default, no?)
Anyway - a lesson learned - always go back over the basics.
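For reference, the working client-state setup boils down to this (array names are just placeholders for mine):

// Enable BOTH arrays; with GL_NORMAL_ARRAY off, every vertex silently gets
// the current default normal (0, 0, 1) instead of the per-vertex ones.
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glNormalPointer(GL_FLOAT, 0, normals);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);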