My goal is to draw white lines over an asphalt road. Since the properties of the road change, a single texture cannot represent both the asphalt and the white lines.
The current approach is to apply the asphalt texture and encode some information in the other two texture coordinates. The pixel shader reads those coordinates and decides whether the fragment should be white or not.
This results in heavy aliasing, and that is the problem I want to solve.
I have been changing the "whiteness" of the line by applying smoothstep or linear interpolation. I have also changed the width and color according to the distance from the camera. This helps a little, but at far distances there are still ugly aliased lines.
How would you go about doing this? Would it be better to have a texture representing a smoothed white line and access its texels? Should I implement a bilinear filter that accesses neighboring texels?
You should simply use 2 textures with 2 sets of texture coordinates:
A small seamless asphalt texture tiled over the road polygon.
A marking texture with alpha that you place along the middle of this polygon (with a texture coordinate offset); a sketch of the blend is below.
Or you can create extra polygons in the middle of the road for the markings, to avoid any aliasing.
To make it all look real you can apply Texture Bombing with dirt and cracks.
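A minimal sketch of that two-texture blend in a GLSL fragment shader; the sampler, uniform, and varying names here are illustrative assumptions, not from the original post:

uniform sampler2D asphaltTex;  // small seamless asphalt, tiled
uniform sampler2D markTex;     // white marking with an alpha channel
varying vec2 asphaltUV;        // tiled coordinates for the asphalt
varying vec2 markUV;           // offset coordinates for the marking

void main() {
    vec4 asphalt = texture2D(asphaltTex, asphaltUV);
    vec4 mark = texture2D(markTex, markUV);
    // Since markTex is an ordinary mipmapped texture, the hardware
    // filters it with distance, which is what removes the aliasing.
    gl_FragColor = vec4(mix(asphalt.rgb, mark.rgb, mark.a), 1.0);
}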
I am looking for a way to "fill" three-dimensional geometry with color, and quite possibly a texture at some time later on.
Suppose for a moment that you could physically phase your head into a concrete wall, logically you would see only darkness. In OpenGL, however, when you do this the world is naturally hollow and transparent due to culling and because of how the geometry is drawn. I want to simulate the darkness/color/texture within it instead.
I know some games do this by overlaying a texture/color directly over the HUD, thereby blinding the player.
Is there another way to do this, though? Suppose the player is standing half in water; they can partially see below the waves. How would you fill it to prevent them from being able to see clearly below what is now half of their screen?
What is this concept even called?
A problem with the texture-in-front-of-the-camera method is that a texture is 2D but you want to visualize a slice of a 3D volume. For the first thing you talk about, the head-inside-a-wall idea, I'll point you to "3D/volume texturing". For standing half in water, you're after "volume rendering" with "absorption" (discussed by @user3670102).
3D texturing
The general idea here is you have some function that defines a colour everywhere in a 3D space, not just on a surface (as with regular texture mapping). This is nice because you can put geometry anywhere and colour it in the fragment shader based on the 3D position. Think of taking a slice through the volume and looking at the intersection colour.
For the head-in-a-wall effect you could draw a full screen polygon in front of the player (right on the near clipping plane, although you might want to push this forwards a bit so it's not too small) and colour it based on a 3D function. Now it'll look properly solid, move as the player does, and not like you've cheaply stuck a texture over the screen.
The actual function could be defined with a 3D texture, but that's very memory intensive. Instead, you could look into procedural 3D colour (a procedural wood or brick shader is a pretty common example). Even treating a 2D texture as "extruded" through the volume will work, or better yet, weight 3 textures (one for each axis) based on the angle of the intersection/surface you're drawing on; a sketch of that weighting follows.
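A hedged sketch of that three-axis ("triplanar") weighting in GLSL; the texture and varying names are assumptions for illustration:

uniform sampler2D wallTex;   // the 2D texture to fake as a volume
varying vec3 worldPos;       // 3D position of the fragment on the slice
varying vec3 worldNormal;    // normal of the surface being drawn

void main() {
    vec3 w = abs(normalize(worldNormal));
    w /= (w.x + w.y + w.z);  // per-axis blend weights summing to 1
    vec4 cx = texture2D(wallTex, worldPos.yz);  // projected along X
    vec4 cy = texture2D(wallTex, worldPos.xz);  // projected along Y
    vec4 cz = texture2D(wallTex, worldPos.xy);  // projected along Z
    gl_FragColor = cx * w.x + cy * w.y + cz * w.z;
}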
Detecting an intersection with the geometry and the near clipping plane is probably the hardest bit here. If I were you I'd look at tricks with the z-buffer and make sure to draw everything as solid non-self-intersecting geometry. A simple idea might be to draw back faces only after drawing everything with front faces. If you can see back faces that part of the near plane must be inside something. For these pixels you could calculate the near clipping plane position in world space and apply a 3D texture. Though I suspect there are faster ways than drawing everything twice.
In reality there would probably be no light getting to what you see and it should be black, but I guess just ignore this and render the colour directly, unlit.
Absorption
This sounds way harder than it actually is. If you have some transparent solid that's all the one colour ("homogeneous") then it removes light the further light has to travel through it. Think of many alpha-transparent surfaces, take the limit and you have an exponential. The light remaining is close to 1/exp(dist) or exp(-dist). Google "Beer's Law". From here,
vec3 Absorbance = WaterColor * WaterDensity * -WaterDepth; // Beer's Law exponent, per colour channel
vec3 Transmittance = exp(Absorbance); // fraction of light remaining after WaterDepth
A great way to find distances through something is to render the back faces (or seabed/water floor) with additive blending using a shader that draws distance to a floating point texture. Then switch to subtractive blending and render all the front faces (or water surface). You're left with a texture containing distances/depth for the above equation.
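As a hedged sketch, a full-screen pass that reads that distance texture and applies the transmittance might look like this (the sampler and uniform names are assumptions):

uniform sampler2D waterDepthTex; // distances left by the two blending passes
uniform vec3 WaterColor;
uniform float WaterDensity;
varying vec2 screenUV;

void main() {
    float WaterDepth = texture2D(waterDepthTex, screenUV).r;
    vec3 Transmittance = exp(WaterColor * WaterDensity * -WaterDepth);
    // Draw with multiplicative blending, e.g. glBlendFunc(GL_ZERO, GL_SRC_COLOR),
    // so the scene behind the water is darkened by the remaining light.
    gl_FragColor = vec4(Transmittance, 1.0);
}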
Volume Rendering
Combining the two ideas, the material is a transparent solid, but the colour (and maybe density) varies throughout the volume. This starts to get pretty complicated if you have large amounts of data and want it to be fast. A straightforward way to render this is to numerically integrate a ray through the 3D texture (or procedural function, whatever you're using), at the same time applying the absorption function. A basic brute-force Euler integration might start a ray for each pixel on the near plane, then march forwards at even distances; a sketch follows. Over each step while you march you assume the colour remains constant and apply absorption, keeping track of how much light you have left. A quick google brings up this.
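A brute-force sketch of that marching loop in GLSL; the 3D texture, fixed step count, and other names are assumptions for illustration:

uniform sampler3D volumeTex; // colour (rgb) and density (a) through the volume
uniform float stepSize;      // marching distance per step, in texture space
uniform float density;       // global density scale
varying vec3 rayStart;       // ray entry point on the near plane, texture space
varying vec3 rayDir;         // normalized march direction, texture space

void main() {
    vec3 color = vec3(0.0);
    float light = 1.0;       // light remaining along the ray
    for (int i = 0; i < 64; ++i) {
        vec3 p = rayStart + rayDir * (stepSize * float(i));
        vec4 voxel = texture3D(volumeTex, p);
        // Assume constant colour over the step: absorb, then accumulate.
        float absorbed = exp(-density * voxel.a * stepSize);
        color += light * (1.0 - absorbed) * voxel.rgb;
        light *= absorbed;
    }
    gl_FragColor = vec4(color, 1.0 - light);
}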
This seems related to looking through what's called "participating media". On the less extreme end, you'd have light fog, or smoky haze. In the middle could be, say, dirty water. And the extreme case would be your head-in-the-wall example.
Doing this in a physically accurate way isn't trivial, because the darkening effect is more pronounced when the thickness of the media is greater.
But you can fake this by making some assumptions and rendering the interior geometry (under the water or inside the wall) darker, either by reducing the lighting or by using darker colors. If you care about the depth effect, look at fog in OpenGL.
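For instance, a minimal depth-fog fragment shader in the spirit of classic GL_EXP fog might look like this (a sketch; the varying names are assumptions):

uniform vec3 fogColor;
uniform float fogDensity;
varying float eyeDist;  // eye-space distance, computed in the vertex shader
varying vec3 litColor;  // the fragment's colour before fog

void main() {
    float f = clamp(exp(-fogDensity * eyeDist), 0.0, 1.0); // GL_EXP falloff
    gl_FragColor = vec4(mix(fogColor, litColor, f), 1.0);
}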
For underwater, you can make the back side of the water a semi-transparent color that causes stuff above it to have a suitable change in color.
If you really want to go nuts with accuracy, look at Kajiya's rendering equation. That covers everything (including stuff that glows), but generally needs simplification and approximations to be more useful.
I am rendering complex 3d objects. Here is a simple example with a sphere-like object:
Next I am applying a clipping plane to these objects and rendering a texture on this plane, giving the impression you are looking at the inside of the object, as if it was sliced. For example:
The problem is the jagged edge of the texture. It will stick out past the boundary of the surface. Here's another angle where you can see it sticking out. The surface and the texture both derive from the same source data, but the surface is smoothed and has a higher resolution than the texture.
What I want is to be able to somehow clip the texture, so that it never sticks out past the boundary of the surface. Also, I don't want to simply scale down the texture, since although this might prevent it from sticking outside, it would create interior gaps between the texture edge and the surface edge. I would rather the texture be a little too big and have it clipped so that it sits flush against the edge of the surface.
Here's where I am:
I figured the first step would be to define the intersection of the plane and the surface. So now I have that, as an ordered list of line segments. However, I'm not sure how to proceed with this info (or if this is even the best approach).
I've been reading up on stencil buffers. One approach might be to turn the intersection line into a 2d shape and draw this into a stencil buffer. Then apply this when drawing the texture. (Although I think it's a lot of work since the shapes can be complicated.)
I am wondering if I can somehow use the already-drawn surface (in conjunction with a stencil buffer or some other technique) to clip the texture, without having to go through the extra trouble of deriving the intersection line, etc.
What's the best approach here? (Any online examples you can point me to would also be really helpful.)
If you're clipping convex objects and know the coordinates of the clipped points, you can create the polygonal "cap" yourself: just draw the clipped points in the proper order using GL_TRIANGLE_FAN, and that's it. This won't work with a non-convex object; that would require a triangulation algorithm. You could use GLU tessellators to triangulate polygons, but that can be tricky/difficult.
If the clipped area can be defined by a formula, you can write a shader that will precisely clip pixels beyond a certain distance (i.e. if x^2+y^2+z^2 > r^2, do not draw the pixel); a sketch is below.
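A hedged sketch of such a shader for a spherical clip region; the uniform and varying names are assumptions:

uniform vec3 clipCenter;   // center of the spherical clip region
uniform float clipRadius;
varying vec3 worldPos;     // fragment position in world space
varying vec4 surfaceColor;

void main() {
    vec3 d = worldPos - clipCenter;
    if (dot(d, d) > clipRadius * clipRadius)
        discard;  // precisely clip pixels outside the sphere
    gl_FragColor = surfaceColor;
}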
You could also draw the back-facing faces with a shader that would draw every back-facing pixel as if it were on the clip plane, using simple raytracing. That's complicated, and might be overkill in your case. Dead Rising used a similar technique in its game engine.
Also you can use stencil buffer.
Draw the back-facing faces first with GL_INCR (glStencilOp(GL_KEEP, GL_INCR, GL_INCR)), then draw the front-facing surfaces with GL_DECR (glStencilOp(GL_KEEP, GL_DECR, GL_DECR)). Then draw the texture only where the stencil is non-zero (glStencilFunc(GL_NOTEQUAL, 0, 0xff); glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);). If you have many overlapping shapes, however, you'll need to take special care of them.
--edit--
However, I'm not sure how to proceed with this info (or if this is even the best approach).
Draw it as a triangle fan. For convex objects, that's all you need. For non-convex objects that won't work.
I've been reading up on stencil buffers. One approach might be to turn the intersection line into a 2d shape
No, it won't work like that. The region you want to fill with the texture should hold a certain stencil value; that's how stencil clipping works.
to somehow clip the texture
In OpenGL you have at least 6 user clip planes (GL_MAX_CLIP_PLANES). If you need more than that, you'll need more advanced techniques: the stencil buffer, deriving the intersection line, shaders, or triangulation.
Any online examples you can point me to would also be really helpful
Drawing Filled, Concave Polygons Using the Stencil Buffer
I have researched, and the methods used to make a bloom effect are usually based on combining a sharp and a blurred image to give the glow effect. But I want to know how I can make GL_LINES (or any line) have brightness. Since in my game I am randomly generating a simple 2D terrain, I wish to make the terrain line segments glow.
Use a fragment shader to calculate the distance from a fragment to the edge and color the fragment with the appropriate color value. You can use a simple control curve to shape the radius and intensity falloff of the glow (like in Photoshop). It can also be tuned to act like wireframe visualization. The idea is that you don't really rasterize points into lines with a draw call; you just shade each pixel based on its distance from the corresponding edge.
The difference from using a blur pass is that, first, you get better performance, and second, you get per-pixel control over the glow: you can have a non-uniform glow, which you cannot get with a blur, because a blur is not aware of the actual line geometry and just blindly works on pixels, whereas with edge-distance detection you use the actual geometry data as input without flattening it down to pixels. You can also have things like gradient glows, e.g. the glow color is different and changes with the radius.
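A hedged sketch of the idea, assuming the distance from the fragment to the nearest line is interpolated from the vertex stage (all names here are illustrative):

uniform vec3 glowColor;
uniform float glowRadius;
varying float edgeDist;  // distance to the line, set up per vertex

void main() {
    // Control curve: 1.0 on the line, fading smoothly to 0.0 at the radius.
    float glow = 1.0 - smoothstep(0.0, glowRadius, edgeDist);
    gl_FragColor = vec4(glowColor, glow * glow);  // squared for softer falloff
}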
Since GL_LINE_SMOOTH is not hardware accelerated, nor supported on all graphics cards, how do you draw smooth lines in 2D mode that look as good as GL_LINE_SMOOTH?
Edit 2: My current solution is to draw the line from 2 quads, which fade to zero transparency at their outer edges, with the color between those 2 quads being the line color. It works well enough for basic smooth line rendering, doesn't use texturing, and is thus very fast to render.
So, you want smooth lines without:
line smoothing.
full-screen antialiasing.
shaders.
Alright.
Your best bet is to use Valve's Alpha-Tested Magnification technique. The basic idea, for your needs, is to create a texture that represents the distance from the line, with the center of the texture being a distance of 1.0. This could probably be a 1D texture.
Then using the techniques described in the paper (many of which work with fixed-function, including the antialiased version), draw a quad that represents your lines. Obviously you'll need alpha blending (and thus it isn't order-independent). You use your line width to control the distance at which it becomes the appropriate color, thus allowing you to make narrow or wide lines.
Doing this with shaders is virtually identical to the above, except without the texture. Instead of accessing a distance texture, the distance is passed and interpolated from the vertex shader. For the left-edge of the quad, the vertex shader passes 0. For the right edge, it passes 1. You multiply this by 2, subtract 1, and take the absolute value.
That's your distance from the line (the line being the center of the quad). Then just use that distance exactly as Valve's algorithm does.
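A minimal shader sketch of that distance trick (the varying and uniform names are assumptions):

varying float acrossQuad;  // 0.0 at the quad's left edge, 1.0 at the right
uniform vec4 lineColor;
uniform float halfWidth;   // where the line fades out, in the quad's 0..1 space

void main() {
    float dist = abs(acrossQuad * 2.0 - 1.0); // 0.0 at center, 1.0 at edges
    float alpha = 1.0 - smoothstep(halfWidth - fwidth(dist), halfWidth, dist);
    gl_FragColor = vec4(lineColor.rgb, lineColor.a * alpha);
}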
Turning on full-screen anti-aliasing and using a quad would be my first choice.
Currently I am using 2 or 3 quads to do this; it is the simplest way to do it.
If the line thickness is <= 1px, then you need only 2 quads.
If the line thickness is > 1px, then you need to add a third quad in the middle.
The thickness of the fading edge quads must not change if the line thickness is >= 1px.
In the image below you can see the quads with blue borders. White color means full opacity and black color means zero opacity (=fully transparent).
Greetings all,
As seen in the image, I draw lots of contours using GL_LINE_STRIP.
But the contours look like a mess, and I am wondering how I can make this look good (to convey the depth, etc.).
I must render contours, so I have to stick with GL_LINE_STRIP. I am wondering how I can enable lighting for this?
Thanks in advance
Original image
http://oi53.tinypic.com/287je40.jpg
Lighting contours isn't going to do much good, but you could use fog or manually set the line colors based on distance (or even altitude) to give a depth effect.
Updated:
umanga, at first I thought lighting wouldn't work because lighting is based on surface normal vectors, and you have no surfaces. However @roe pointed out that normal vectors are actually per-vertex in OpenGL, and as such, any polyline can have normals. So that would be an option.
It's not entirely clear what the normal should be for a 3D line, as @Julien said. The question is how to define normals for the contour lines such that the resulting lighting makes visual sense and helps clarify the depth.
If all the vertices in each contour are coplanar (e.g. in the XY plane), you could set the 3D normal to be the 2D normal, with 0 as the Z coordinate. The resulting lighting would give a visual sense of shape, though maybe not of depth.
If you know the slope of the surface (assuming there is a surface) at each point along the line, you could use the surface normal and do a better job of showing depth; this is essentially hill-shading applied only to the contour lines. The question then is why not display the whole surface?
End of update
+1 to Ben's suggestion of setting the line colors based on altitude (is it topographic contours?) or based on distance from viewer. You could also fill the polygon surrounded by each contour with a similar color, as in http://en.wikipedia.org/wiki/File:IsraelCVFRtopography.jpg
Another way to make the lines clearer would be to have fewer of them. Can you adjust the density of the contours? E.g. one contour line per 5ft height difference instead of per 1ft, or whatever the units are, depending on what it is you're drawing contours of.
Other techniques for elucidating depth include stereoscopy, and rotating the image in 3D while the viewer is watching.
If you're looking for shading, then you would normally convert the contours to a solid. The usual way to do that is to build a mesh by setting up 4 corner points at zero height at the bounds (or beyond), then dropping the contours into the mesh and having the mesh triangulate the coordinates in. Once done, you have a triangulated solid hull for which you can find the normals and smooth them over adjacent faces to create smooth terrain.
To triangulate the mesh one normally uses the Delaunay algorithm, which is a bit of a beast, but there are libraries for doing it. The best ones I know of are those based on the Guibas and Stolfi paper, since it's pretty optimal.
To generate the normals you do a simple cross product, ensure the facing is correct, and manually renormalize them before feeding them into glNormal.
In the old days you used to make a display list out of the result, but the newer way is to make a vertex array. If you want to be extra flash, you can look for coincident planar faces and optimize the mesh down for faster redraw, but that's a bit of a black art: good for games, not so good for CAD.