A line rendered with GL_LINES is visible no matter how far it is from the camera, yet it can still be occluded by world geometry.
I recently converted my ballistic projectile rendering code from GL_LINES to lines implemented with polygons, and discovered that projectiles now go completely invisible once they're sufficiently far away from the camera.
Is there a way to render lines using polygons that will:
still retain the infinite render distance behaviour that GL_LINES has
still allow the line's visibility to be obstructed by world geometry
edit: hmmm, couldn't I just "undo" (or dampen) the perspective projection by applying its inverse to the model's geometry in the vertex shader? It would stretch the geometry to make it look orthographic, yet it could still be obstructed by other geometry on the map.
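For what it's worth, here is a minimal GLSL sketch of one way to approximate that: scale the line geometry up with its distance from the camera so the later perspective divide roughly cancels out, while still writing real depth so world geometry can occlude it. The uniform names (uModel, uView, uProjection, uCameraPos, uLineCenter, uReferenceDist) are assumptions, not from the original code.

    #version 330 core
    // Hypothetical vertex shader: keep a polygon-based line visible at any
    // distance while still letting world geometry occlude it.
    layout(location = 0) in vec3 aPosition;

    uniform mat4  uModel;
    uniform mat4  uView;
    uniform mat4  uProjection;
    uniform vec3  uCameraPos;      // camera position in world space
    uniform vec3  uLineCenter;     // midpoint of the projectile line, world space
    uniform float uReferenceDist;  // distance at which the line keeps its true size

    void main()
    {
        vec3 worldPos = (uModel * vec4(aPosition, 1.0)).xyz;

        // Grow the geometry about its centre in proportion to its distance
        // from the camera, so the perspective divide roughly cancels out and
        // the line never shrinks below its reference size on screen.
        float dist  = length(uLineCenter - uCameraPos);
        float scale = max(dist / uReferenceDist, 1.0);
        vec3 scaledPos = uLineCenter + (worldPos - uLineCenter) * scale;

        // Real depth is still written, so world geometry can occlude the line.
        gl_Position = uProjection * uView * vec4(scaledPos, 1.0);
    }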
I'm trying to add a skybox to the world/camera/game and I don't know how to go about it. If someone could give me some guidance on how to apply it, it would be much appreciated.
I have already loaded the skybox, I just don't know how to draw it properly so it will fit around the camera as it moves.
I have managed to texture a sort of cube, which might be close to a skybox, but it's only visible from the outside; once you enter the cube, you can't see it from the inside. Perhaps if I could invert the cube's faces, it would show when I'm inside the cube, and I could make it larger?
(Screenshots: from outside the cube looking at it, and from inside looking out.)
I had a similar problem a few weeks back; if you are looking for some pseudo-code, I think I may be able to help. First of all, a cube isn't the best idea for rendering, as your box won't look natural; map the texture to a sphere instead for a smooth effect.
Create a bounding sphere around your viewer that moves relative to your camera
Apply the texture to that sphere; this will give the impression that the sky is moving relative to you
When you are drawing, disable your z-buffer and frustum culling (assuming you're using a culling algorithm); this allows the skybox to be drawn while still ensuring terrain is drawn over the top of it when OpenGL's depth testing comes back into play
Note: Don't forget to re-enable the z-buffer after the skybox has been drawn, otherwise your terrain elements will appear outside of the sphere and you will only see the skybox. A minimal draw-order sketch is below.
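Here is a minimal sketch of that draw order in plain OpenGL. Camera, drawSkySphere() and drawTerrain() are hypothetical stand-ins for whatever the engine actually provides.

    // Sketch of the suggested draw order: sky first with the z-buffer off,
    // then the rest of the scene with the z-buffer back on.
    void renderFrame(const Camera& camera)
    {
        // Sky first, with depth testing and depth writes disabled, so it can
        // never occlude anything drawn afterwards.
        glDisable(GL_DEPTH_TEST);
        glDepthMask(GL_FALSE);
        drawSkySphere(camera.position);   // sphere kept centred on the viewer

        // Re-enable the z-buffer before the rest of the scene, otherwise only
        // the sky would remain visible.
        glDepthMask(GL_TRUE);
        glEnable(GL_DEPTH_TEST);
        drawTerrain();
    }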
I recently wrote a basic terrain engine in DirectX, but the principles are fairly similar; if you'd like to view the repo, you can find it here
Check out line 286 in this file to see how the Skybox is rendered; then also visit the SkyBox implementation file to see how it is constructed, and the SkyShader implementation file to see how the texture is mapped to the sphere. The main method to be concerned with in the shader file is SetShaderParameters()
In terms of moving the skybox relative to your camera, simply set the WVP matrix of your skybox to that of your camera, and then tweak the x, y, z planes of the skybox to your liking.
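Since the question is about OpenGL rather than DirectX, here is a hedged GLSL equivalent of locking the sky to the camera: stripping the translation from the view matrix has the same effect as setting the sky's world matrix to the camera position. uView and uProjection are assumed uniform names, not from the linked repo.

    #version 330 core
    // Hypothetical vertex shader for the sky geometry.
    layout(location = 0) in vec3 aPosition;
    out vec3 vDirection;

    uniform mat4 uView;
    uniform mat4 uProjection;

    void main()
    {
        // Pass the untransformed position so the fragment shader can derive
        // sky texture coordinates from it.
        vDirection = aPosition;

        // Drop the camera translation so the sky stays centred on the viewer.
        mat4 rotationOnlyView = mat4(mat3(uView));
        gl_Position = uProjection * rotationOnlyView * vec4(aPosition, 1.0);
    }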
Extra: If you are going to implement multi-player aspects, just disable back-face rendering for the sphere; then each player can see their own skybox but opponents cannot. Alternatively, you can create one large sphere around the world.
Hope that helps - if you need any more help just ask, I know this stuff can be fairly dense at first :)
After reading up on OpenGL and GLSL, I was wondering if there are examples out there of how to make something like this: http://i.stack.imgur.com/FtoBj.png
I am particularly interested in the beam and the intensity of the light (god rays?).
Does anybody have a good starting point?
OpenGL just draws points, lines and triangles to the screen. It doesn't maintain a scene and the "lights" of OpenGL are actually just a position, direction and color used in the drawing calculations of points, lines or triangles.
That being said, it's actually possible to implement an effect like yours with a fragment shader that implements a variant of the shadow-mapping method. The difference is that instead of determining whether a surface element of a primitive (point, line or triangle) lies in shadow, you cast rays into a volume and, for every sampling position along the ray, test whether that volume element (voxel) lies in shadow; if it is illuminated, you add its contribution to the ray's accumulator.
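A hedged fragment-shader sketch of that ray-marching idea, assuming a conventional shadow map is already available; all the uniform names (uShadowMap, uLightViewProj, uCameraPos, uLightColor, uDensity) are assumptions, not from the answer.

    #version 330 core
    // March from the camera towards the visible surface, testing each sample
    // against the shadow map; lit samples accumulate into a scattering term.
    in vec3 vWorldPos;            // world-space position of the surface behind the fog
    out vec4 fragColor;

    uniform sampler2DShadow uShadowMap;
    uniform mat4  uLightViewProj; // world -> light clip space
    uniform vec3  uCameraPos;
    uniform vec3  uLightColor;
    uniform float uDensity;       // scattering strength per unit length

    const int STEPS = 64;

    void main()
    {
        vec3  rayVec  = vWorldPos - uCameraPos;
        float rayLen  = length(rayVec);
        vec3  stepVec = rayVec / float(STEPS);
        float stepLen = rayLen / float(STEPS);

        float scattering = 0.0;
        vec3 samplePos = uCameraPos;
        for (int i = 0; i < STEPS; ++i)
        {
            samplePos += stepVec;

            // Project the sample into the shadow map; textureProj returns
            // 1.0 when the sample is lit and 0.0 when it is in shadow.
            vec4 lightClip = uLightViewProj * vec4(samplePos, 1.0);
            lightClip.xyz = lightClip.xyz * 0.5 + 0.5 * lightClip.w; // bias to [0, 1]
            float lit = textureProj(uShadowMap, lightClip);

            scattering += lit * uDensity * stepLen;
        }

        fragColor = vec4(uLightColor * scattering, 1.0);
    }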
I want to create a skybox, which is just a textured cube around the camera. But I don't actually understand how this can work, because the camera's viewing volume is a frustum and the skybox is a cube. According to this source:
http://www.songho.ca/opengl/gl_projectionmatrix.html
Note that the frustum culling (clipping) is performed in the clip coordinates, just before dividing by wc. The clip coordinates, xc, yc and zc are tested by comparing with wc. If any clip coordinate is less than -wc, or greater than wc, then the vertex will be discarded.
So the vertices of the skybox faces should be clipped if they are outside the frustum.
It therefore looks to me as though the cube would actually have to be a frustum matching the GL frustum faces exactly, so that my whole scene is wrapped inside that skybox, but I am sure this is a bad approach. Is there any way to fill the whole screen with something that wraps the whole GL frustum?
The formulation from your link is rather bad. It is not actually vertices that get clipped, but rather fragments. Drawing a primitive with vertices completely off-screen does not prevent the fragments that would intersect with the screen from getting drawn. (The picture in the link also actually shows this being the case.)
That having been said, however, it may (or may not, depending on the design of your rendering code) be easier to simply draw a full-screen quad, and use the inverse of the projection matrix to calculate the texture coordinates instead.
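For illustration, a minimal GLSL sketch of that full-screen-quad idea, assuming the sky lives in a cubemap; uInvViewProjection (the inverse of the projection times a rotation-only view matrix) and uSkybox are hypothetical uniform names.

    #version 330 core
    // Un-project each screen position back into a world-space view direction
    // and use it to sample the sky cubemap.
    in vec2 vNdc;                 // quad corner in [-1, 1], passed through by the vertex shader
    out vec4 fragColor;

    uniform mat4        uInvViewProjection;
    uniform samplerCube uSkybox;

    void main()
    {
        // Un-project a point on the far plane and turn it into a view direction.
        vec4 farPoint  = uInvViewProjection * vec4(vNdc, 1.0, 1.0);
        vec3 direction = normalize(farPoint.xyz / farPoint.w);
        fragColor = texture(uSkybox, direction);
    }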
I'm trying to implement a multi-pass rendering method using OpenSceneGraph. However, I'm not entirely certain whether my problem is theoretical or due to a lack of applied knowledge of OSG. Thus far, I've successfully implemented multi-pass shading by rendering to a texture using an orthographic projection, but I cannot seem to make a perspective projection work.
It may be that I don't quite understand how to implement multi-pass shading. Of course, I have to pre-render the entire scene with the multi-pass shaders to a texture, then use the texture in the final render. However, I'm not talking about creating a separate texture for each object in the scene, but effectively capturing a screenshot of the entire prerendered scene. Then, from that texture alone, applying the rendered effects to the individual geometries.
I assume this means I would have to do an extra conversion of the vertex coordinates for each geometry in the vertex shader. That is, after computing:
gl_Position = ModelViewProjectionMatrix * Vertex;
I would need to go a step further and calculate the vertex's screen coordinates in order to map the vertices correctly (again, given that the texture consists of an entire screenshot of the scene).
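A hedged GLSL sketch of that extra step, written against the old fixed-function built-ins to match the snippet above; swap in whatever uniform/attribute names your OSG setup actually provides (screenTexCoord is my own varying name).

    // After computing the clip-space position, derive [0, 1] screen
    // coordinates for looking up the full-screen pre-render texture.
    varying vec2 screenTexCoord;

    void main()
    {
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

        // Perspective divide gives normalized device coordinates in [-1, 1];
        // remapping to [0, 1] matches a texture that covers the whole screen.
        screenTexCoord = gl_Position.xy / gl_Position.w * 0.5 + 0.5;
    }

Note that dividing per vertex is only an approximation for large triangles; passing the clip-space position to the fragment shader and doing the divide per fragment is more accurate.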
If I am correct, then I must be able to pre-render the scene with a perspective view identical to the view used in the final render, rather than an orthographic view. This is where I have trouble. I can make an orthographic view do what I want, but not the perspective view.
Am I correct in my approach? The only other approach I can imagine is to render everything to a screen-filling quad (in effect, the same thing as converting to screen coordinates), but that doesn't alleviate the need to use a perspective projection in the pre-render stage.
Thoughts? Links??
edit: I should also point out that in my successful attempts, I used a fragment shader only. The perspective projection worked, but, of course, the screen-aligned quad I was using was offset rather than centered. I added a pass-through vertex shader and everything went blank.
As it turns out, my approach was correct. It's especially nice as it avoids having to add another camera to my scene graph to render the final output - I can simply use the main camera. Unfortunately, it means that all of my output textures are rendered at the screen resolution, rather than a resolution appropriate to the size of the object. That is, if my screen is 1024 x 1024, then so is the output texture, one for each pre-render camera in the graph. Not exactly efficient, but it'll do for now.
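For reference, a hedged OpenSceneGraph sketch of a PRE_RENDER camera that reuses the main camera's perspective projection and view, along the lines described above; makeScreenTexture() and makePreRenderCamera() are hypothetical helper names.

    #include <osg/Camera>
    #include <osg/Texture2D>

    // Screen-sized colour texture that the pre-render pass writes into.
    osg::ref_ptr<osg::Texture2D> makeScreenTexture(int width, int height)
    {
        osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D;
        tex->setTextureSize(width, height);
        tex->setInternalFormat(GL_RGBA);
        tex->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
        tex->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);
        return tex;
    }

    // Pre-render camera that renders the scene into the texture using the
    // same perspective projection and view as the main camera, so the
    // texture lines up with the final render pixel for pixel.
    osg::ref_ptr<osg::Camera> makePreRenderCamera(osg::Camera* mainCamera,
                                                  osg::Texture2D* target,
                                                  osg::Node* scene)
    {
        osg::ref_ptr<osg::Camera> cam = new osg::Camera;
        cam->setRenderOrder(osg::Camera::PRE_RENDER);
        cam->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
        cam->attach(osg::Camera::COLOR_BUFFER, target);
        cam->setViewport(0, 0, target->getTextureWidth(), target->getTextureHeight());

        cam->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
        cam->setProjectionMatrix(mainCamera->getProjectionMatrix());
        cam->setViewMatrix(mainCamera->getViewMatrix());

        cam->addChild(scene);
        return cam;
    }

If the main camera moves, the pre-render camera's view and projection matrices would need to be re-synced each frame, for example from an update callback.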
I'm going to have meshes with several coplanar polygons, all lying in a certain plane, that I'm not going to be able to eliminate.
These polygons have a specific draw order. Some polygons are behind other polygons. If I turn off depth testing I'll have the effect I want, but I want to be able to position this mesh in a 3D scene.
I do not trust glPolygonOffset because I'll potentially have several of these overlapping polygons and am worried about the cumulative effects of the offset.
If I turn off depth testing I'll have the effect I want, but I want to be able to position this mesh in a 3D scene.
Simply disable writing to the z-buffer, without disabling the depth test.
glDepthMask(GL_FALSE);
Make sure to render all polygons that don't require glDepthMask(GL_FALSE) before rendering any polygons with glDepthMask(GL_FALSE); otherwise the object will be incorrectly positioned. A short draw-order sketch is below.
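A minimal sketch of that ordering; drawOpaqueScene() and drawCoplanarMesh() are hypothetical helpers standing in for your own drawing code.

    // Coplanar polygons are depth-tested against the scene (so they sit
    // correctly in 3D) but do not write depth, so their own draw order
    // decides which one is visible.
    void renderScene()
    {
        glEnable(GL_DEPTH_TEST);
        glDepthMask(GL_TRUE);
        drawOpaqueScene();        // everything that needs normal depth writes

        glDepthMask(GL_FALSE);    // keep depth testing, stop depth writing
        drawCoplanarMesh();       // issue the coplanar polygons in their intended order

        glDepthMask(GL_TRUE);     // restore state for the next frame
    }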
If you can't do that, then you should change your geometry or use a texture instead.
glDepthMask documentation