I have been following some OpenGL tutorials for an open-world project I am currently working on. The goal is to have an open-world scene with several objects (mountains, etc.) placed inside a skybox.
I would like to ask if there is any way for the camera to move freely inside the skybox, "interacting" with potential objects in it, but without ever crossing the boundaries of the box. In the tutorials the camera's translation is removed, so it can only look around without moving.
Is it common practice to actually move the camera inside the skybox, or should I somehow move the skybox along with the camera, so that the boundaries of the box are never reached?
A skybox is usually rendered with no offset relative to the camera, because its content represents things that are very far away (many times farther than any actual camera movement), like stars or mountains that are kilometres distant. So even if you move, say, 100 m in any direction, the rendered result does not change at all (or changes so little that it cannot be noticed).
If your skybox contains things you want to move towards, that is doable, but you need to limit the movement so you do not get too close, as that would result in pixelation of the skybox and eventually even crossing it. That can be handled by the game terrain (you cannot jump over the boundary mountains or swim too far from an island, etc.).
Another option is to limit the camera's distance from the skybox centre to some safe value. If the camera gets farther than the limit, move the skybox so the distance matches again. That way you can move towards or away from the skybox up to a point (it appears bigger/smaller on the near/far side) and never cross it, without any actual restriction on the camera position.
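A tiny sketch of that recentring trick, with Vec3, cameraPos, skyboxCenter and maxDist all standing in for whatever math types and variables you actually use:

// Keep the camera within maxDist of the skybox centre; once it strays
// farther, drag the centre along so the distance never exceeds the limit.
const float maxDist = 1000.0f;                 // "safe" radius, an assumption

Vec3  toCamera = cameraPos - skyboxCenter;
float d        = length(toCamera);
if (d > maxDist)
    skyboxCenter += normalize(toCamera) * (d - maxDist);
// Render the skybox translated to skyboxCenter as usual.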
First things first, when you are rendering a sky box, generally, you don't render an actual box.
The skybox contains content that generally never changes, or changes only very slowly, and is so far away that the player will never reach it. The skybox is stored in a cube map texture and rendered via a full-screen rectangle. In the shader you use OpenGL's cube-map sampling, sampling the map with the per-pixel eye vector.
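As a minimal GLSL sketch of that fragment shader, assuming the vertex stage outputs a world-space eye direction vEyeDir for each corner of the full-screen rectangle (the names here are made up):

#version 330 core
// Sample the skybox cube map with the interpolated per-pixel eye direction.
in  vec3 vEyeDir;                 // unprojected from the full-screen rectangle
out vec4 fragColor;

uniform samplerCube uSkybox;

void main()
{
    fragColor = texture(uSkybox, normalize(vEyeDir));
}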
If the skybox is dynamic, for example with a dynamic time of day, it is only re-rendered every couple of frames or only when needed.
A while back I wrote an article on how to do it: GLSL Skybox (you will need to update the code to a modern OpenGL version, though...)
I'm rendering a particle system of vertices, which are tessellated into quads in a geometry shader and textured/rendered as point sprites, then scaled in size depending on how far away they are from the camera. I'm trying to render every frame of my scene into cube maps, so essentially I place six cameras in my scene, point one along each cube-face direction, and save an image for each.
My point sprites are of varying sizes. When they near the border of one camera, (if they are large enough) they appear in two cameras simultaneously. Since point sprites are always facing the camera, this means that they are not continuous along the seam when I wrap my cube map back into 3d space. This is especially noticeable when the points are quite close to the camera, as the points are larger, and stretch further across into both camera views. I'm also doing some alpha blending, so this may be contributing to the problem as well.
I don't think I can just cull points that near the edge of the camera, because when I put everything back into 3d I'd think there would be strange areas where the cloud is more sparsely populated. Another thought I had would be to blur the edges of each camera, but I think this too would give me a weird blurry zone when I go back to 3d space. I feel like I could manually edit the frames in photoshop so they look ok, but this would be kind of a pain since it's an animation at 30fps.
The image attached is a detail from the cube map. You can see the horizontal seam where the sprites are not lining up correctly, and a slightly less noticeable vertical one on the right side of the image. I'm sure that my camera settings are correct, because I've used this same camera setup in other scenes and my cubemaps look fine.
Anyone have ideas?
I'm doing this in openFrameworks / openGL fwiw.
Instead of facing the camera, make them face the shared origin of the six cameras? Not sure if this fixes everything, but intuitively I'd say it should look close to OK. Maybe this is already what you do; I have no idea.
(I'd like for this to be a comment, but no reputation)
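If it helps, here is a rough GLSL geometry-shader sketch of that idea: each point is expanded into a quad that faces the shared cubemap origin (uCubeCenter) instead of the current face's view plane, so its orientation is identical in all six renders. All uniform and variable names are assumptions, and the vertex shader is assumed to pass world-space positions through gl_Position untransformed.

#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4  uViewProj;     // view-projection of the current cube face
uniform vec3  uCubeCenter;   // world-space position shared by all six cameras
uniform float uSize;         // half-size of the sprite in world units

out vec2 vTexCoord;

void main()
{
    vec3 p = gl_in[0].gl_Position.xyz;       // particle position (world space)
    vec3 n = normalize(p - uCubeCenter);     // face the cube origin, not the camera
    // Note: pick a different reference axis if n is nearly parallel to +Y.
    vec3 right = normalize(cross(vec3(0.0, 1.0, 0.0), n));
    vec3 up    = cross(n, right);

    vec2 uv[4]  = vec2[](vec2(0,0), vec2(1,0), vec2(0,1), vec2(1,1));
    vec2 sgn[4] = vec2[](vec2(-1,-1), vec2(1,-1), vec2(-1,1), vec2(1,1));
    for (int i = 0; i < 4; ++i) {
        vec3 corner = p + (sgn[i].x * right + sgn[i].y * up) * uSize;
        gl_Position = uViewProj * vec4(corner, 1.0);
        vTexCoord   = uv[i];
        EmitVertex();
    }
    EndPrimitive();
}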
I'm trying to add a skybox to the world/camera/game and I don't know how to go about it. If someone could give me some guidance on how to apply it, it would be much appreciated.
I have already loaded the skybox, I just don't know how to draw it properly so it will fit around the camera as it moves.
I have managed to texture a sort of cube, which might be close to a skybox, but it is only visible from the outside. Once you enter the cube, you can't see it from the inside. Perhaps if I could invert the cube's faces, it would show while I'm inside the cube, and I could make it larger?
From outside the cube looking at it
From inside looking out
I had a similar problem a few weeks back, and if you are looking for some pseudo code I think I may be able to help. First of all, using a cube isn't the best idea, as the box won't look natural when rendered; map the texture to a sphere instead for a smooth effect.
Create a bounding sphere around your viewer that moves relative to your camera
Apply the texture on that sphere, this will give the impression that the sky is moving relative to you
When you are drawing the sky, disable your z-buffer and skip frustum culling (assuming you're using any culling algorithm). This allows the sky sphere to be drawn, while ensuring the terrain is drawn over the top of it when OpenGL performs its depth testing.
Note: Don't forget to re-enable the z-buffer after the sky box has been drawn, otherwise your terrain elements will appear outside of the sphere, meaning you will only see the Sky box.
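A minimal fixed-function sketch of that draw order, where drawSkySphere() and cameraPos are placeholders for your own code:

// Sketch only: draw the sky sphere first, centred on the camera, with depth
// writes and depth testing disabled so the terrain drawn afterwards covers it.
glDepthMask(GL_FALSE);            // don't write sky depth
glDisable(GL_DEPTH_TEST);         // and don't test against the Z-buffer either

glPushMatrix();
glTranslatef(cameraPos.x, cameraPos.y, cameraPos.z);  // keep the sphere on the viewer
drawSkySphere();                  // textured, viewed from the inside
glPopMatrix();

glEnable(GL_DEPTH_TEST);          // restore state before drawing the terrain
glDepthMask(GL_TRUE);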
I recently wrote a basic terrain engine in DirectX, but the principles are fairly similar; if you'd like to view the repo you can find it here.
Check out line 286 in this file to see how the Skybox is rendered, then also visit the SkyBox implementation file to see how it is constructed, and the SkyShader implementation file to see how the texture is mapped to the sphere, the main method to be concerned with in the shader file is SetShaderParameters()
In terms of moving the skybox relative to your camera, simply set the WVP matrix of your skybox to that of your camera, and then tweak the x, y, z planes of the skybox to your liking.
Extra: if you are going to implement multi-player aspects, just disable back-face rendering for the sphere; then each player can see their own skybox but opponents cannot. Alternatively, you can create one large sphere around the whole world.
Hope that helps - if you need any more help just ask, I know this stuff can be fairly dense at first :)
I am looking for a way to "fill" three-dimensional geometry with color, and quite possibly a texture at some time later on.
Suppose for a moment that you could physically phase your head into a concrete wall; logically you would see only darkness. In OpenGL, however, when you do this the world is naturally hollow and transparent because of culling and how the geometry is drawn. I want to simulate the darkness/color/texture within it instead.
I know some games do this by overlaying a texture/color directly over the HUD, thereby blinding the player.
Is there another way to do this, though? Suppose the player is standing half in water; they can partially see below the waves. How would you fill it to prevent them from being able to see clearly below what is now half of their screen?
What is this concept even called?
A problem with the texture-in-front-of-the-camera method is a texture is 2D but you want to visualize a slice of a 3D volume. For the first thing you talk about, the head-inside-a-wall idea, I'll point you to "3D/volume texturing". For standing-half-in-water, you're after "volume rendering" with "absorption" (discussed by #user3670102).
3D texturing
The general idea here is you have some function that defines a colour everywhere in a 3D space, not just on a surface (as with regular texture mapping). This is nice because you can put geometry anywhere and colour it in the fragment shader based on the 3D position. Think of taking a slice through the volume and looking at the intersection colour.
For the head-in-a-wall effect you could draw a full-screen polygon in front of the player (right on the near clipping plane, although you might want to push it forwards a bit so it's not too small) and colour it based on a 3D function. Now it'll look properly solid, move as the player does, and not look like you've cheaply stuck a texture over the screen.
The actual function could be defined with a 3D texture, but that's very memory intensive. Instead, you could look into procedural 3D colour (a procedural wood or brick shader is a pretty common example). Even assuming a 2D texture is "extruded" through the volume will work, or better yet, weight three textures (one for each axis) based on the angle of the intersection/surface you're drawing on.
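A rough GLSL sketch of that three-texture ("triplanar") weighting, with all names being assumptions rather than anything from the original article:

#version 330 core
// Blend three 2D textures by the surface normal so a 2D texture reads as if
// it filled the volume. uTexX/uTexY/uTexZ could all be the same texture.
in  vec3 vWorldPos;
in  vec3 vWorldNormal;
out vec4 fragColor;

uniform sampler2D uTexX, uTexY, uTexZ;
uniform float uScale;

void main()
{
    vec3 n = abs(normalize(vWorldNormal));
    n /= (n.x + n.y + n.z);                               // weights sum to 1

    vec3 cx = texture(uTexX, vWorldPos.yz * uScale).rgb;  // projected along X
    vec3 cy = texture(uTexY, vWorldPos.xz * uScale).rgb;  // projected along Y
    vec3 cz = texture(uTexZ, vWorldPos.xy * uScale).rgb;  // projected along Z

    fragColor = vec4(cx * n.x + cy * n.y + cz * n.z, 1.0);
}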
Detecting an intersection with the geometry and the near clipping plane is probably the hardest bit here. If I were you I'd look at tricks with the z-buffer and make sure to draw everything as solid non-self-intersecting geometry. A simple idea might be to draw back faces only after drawing everything with front faces. If you can see back faces that part of the near plane must be inside something. For these pixels you could calculate the near clipping plane position in world space and apply a 3D texture. Though I suspect there are faster ways than drawing everything twice.
In reality there would probably be no light getting to what you see and it should be black, but I guess just ignore this and render the colour directly, unlit.
Absorption
This sounds way harder than it actually is. If you have some transparent solid that's all the one colour ("homogeneous") then it removes light the further light has to travel through it. Think of many alpha-transparent surfaces, take the limit and you have an exponential. The light remaining is close to 1/exp(dist) or exp(-dist). Google "Beer's Law". From here,
vec3 Absorbance = WaterColor * WaterDensity * -WaterDepth;
vec3 Transmittance = exp(Absorbance);
A great way to find distances through something is to render the back faces (or seabed/water floor) with additive blending using a shader that draws distance to a floating point texture. Then switch to subtractive blending and render all the front faces (or water surface). You're left with a texture containing distances/depth for the above equation.
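Host-side, the blending setup could look roughly like this. The floating-point FBO is assumed to be bound already, drawWaterVolume() and writeDistanceShader are made-up names, and depth-test details are glossed over in this sketch:

// Accumulate distance to the back faces, then subtract distance to the
// front faces; the render target ends up holding the thickness per pixel.
glEnable(GL_CULL_FACE);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);

glBlendEquation(GL_FUNC_ADD);              // additive pass
glCullFace(GL_FRONT);                      // draw back faces only
drawWaterVolume(writeDistanceShader);      // shader writes eye-space distance

glBlendEquation(GL_FUNC_REVERSE_SUBTRACT); // dst - src
glCullFace(GL_BACK);                       // draw front faces only
drawWaterVolume(writeDistanceShader);

glBlendEquation(GL_FUNC_ADD);              // restore defaults
glDisable(GL_BLEND);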
Volume Rendering
Combining the two ideas, the material is both a transparent solid but the colour (and maybe density) varies throughout the volume. This starts to get pretty complicated if you have large amounts of data and want it to be fast. A straight forward way to render this is to numerically integrate a ray through the 3D texture (or procedural function, whatever you're using), at the same time applying the absorption function. A basic brute force Euler integration might start a ray for each pixel on the near plane, then march forwards at even distances. Over each step while you march you assume the colour remains constant and apply absorption, keeping track of how much light you have left. A quick google brings up this.
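As a brute-force GLSL sketch of such a march, with uVolume, uDensityScale and the ray inputs all being assumed names:

#version 330 core
// Euler-style ray march through a 3D texture with Beer-Lambert absorption.
in  vec3 vRayOrigin;      // ray start in volume texture space [0,1]^3
in  vec3 vRayDir;         // normalized ray direction in the same space
out vec4 fragColor;

uniform sampler3D uVolume;                          // rgb = colour, a = density
uniform float uDensityScale;
const int   STEPS    = 64;
const float STEP_LEN = 1.7320508 / float(STEPS);    // cube diagonal / steps

void main()
{
    vec3  colour        = vec3(0.0);
    float transmittance = 1.0;                      // light remaining

    for (int i = 0; i < STEPS; ++i) {
        vec3 p = vRayOrigin + vRayDir * (float(i) * STEP_LEN);
        vec4 s = texture(uVolume, p);
        float absorb = s.a * uDensityScale * STEP_LEN;
        // colour contributed by this slab, attenuated by everything in front
        colour += s.rgb * absorb * transmittance;
        transmittance *= exp(-absorb);              // Beer's law per step
    }
    fragColor = vec4(colour, 1.0 - transmittance);
}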
This seems related to looking through what's called "participating media". On the less extreme end, you'd have light fog, or smoky haze. In the middle could be, say, dirty water. And the extreme case would be your head-in-the-wall example.
Doing this in a physically accurate way isn't trivial, because the darkening effect is more pronounced when the thickness of the media is greater.
But you can fake this by making some assumptions and giving the interior geometry (under the water or inside the wall) darker by reduced lighting or using darker colors. If you care about the depth effect, look at OpenGL and fog.
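For reference, a legacy fixed-function fog setup is only a few lines; this is just a sketch with made-up colour and density values, and modern GL would do the same exp() falloff in the fragment shader instead:

GLfloat fogColor[4] = { 0.1f, 0.3f, 0.4f, 1.0f };  // murky water tint (made up)
glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_EXP2);        // exponential falloff, like absorption
glFogf(GL_FOG_DENSITY, 0.15f);       // tweak to taste
glFogfv(GL_FOG_COLOR, fogColor);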
For underwater, you can make the back side of the water a semi-transparent color that causes stuff above it to have a suitable change in color.
If you really want to go nuts with accuracy, look at Kajiya's rendering equation. That covers everything (including stuff that glows), but generally needs simplification and approximations to be more useful.
I have attached a camera object to a mobile object (a car) in a scene. The camera shows the area the object is in and looking towards (the view through the front window of the car).
My problem is that while my object (with the camera) moves or rotates, the objects on screen (street light poles, other cars moving in parallel) seem to shake. That is, I see the solid object plus some faint transparent copies of the same object near the original. It is more noticeable when the camera changes its orientation than when it moves straight. When moving straight, the lights shake while they are far from the observer, stop shaking at some range, and start shaking again as they get nearer.
I do not think OpenGL applies motion blur in the background by itself. But I also could not find a name for this problem, so I have not been able to find a starting point.
When the program is in full-screen mode and the time required to draw the full scene is below 16 ms (for 60 fps), I get smooth movement with no objects oscillating.
When the draw time fluctuates too much, or when I set Vega to limit the fps algorithmically, the scene oscillates.
When I set the swap interval to 1, i.e. letting the underlying draw thread synchronize with the screen refresh, it is fine again.
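For reference, the swap interval is a one-liner in most windowing layers; for example with GLFW (assuming GLFW is the layer in use; on raw WGL the equivalent is wglSwapIntervalEXT, and on X11 it is glXSwapIntervalEXT):

#include <GLFW/glfw3.h>

void enableVSync()
{
    glfwSwapInterval(1);   // 1 = wait for the vertical refresh, 0 = swap immediately
}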
I'm writing a Minecraft-like static 3D block world in C++ / OpenGL. I'm working on improving frame rates, and so far I've implemented frustum culling using an octree. This helps, but I'm still seeing moderate to bad frame rates. The next step would be to cull cubes that are hidden from the viewpoint by closer cubes, but I haven't been able to find many resources on how to accomplish this.
Create a render target with a Z-buffer (or "depth buffer") enabled. Then make sure to sort all your opaque objects so they are rendered front to back, i.e. the ones closest to the camera first. Anything using alpha blending still needs to be rendered back to front, AFTER you rendered all your opaque objects.
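A short C++ sketch of that ordering; Object, cameraPos, lengthSq() and drawObject() are placeholders rather than real API:

#include <algorithm>
#include <vector>

// Draw opaque objects front-to-back so the Z-buffer rejects hidden pixels
// early; alpha-blended objects are sorted the other way and drawn last.
std::sort(opaqueObjects.begin(), opaqueObjects.end(),
          [&](const Object* a, const Object* b) {
              return lengthSq(a->position - cameraPos) <
                     lengthSq(b->position - cameraPos);
          });
for (const Object* obj : opaqueObjects)
    drawObject(obj);
// Then sort transparent objects by descending distance and draw them.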
Another technique is occlusion culling: you can cheaply "dry-render" your geometry and then find out how many samples passed the depth test. There is occlusion query support in DirectX and OpenGL, although not every GPU can do it.
The downside is that you need a delay between the rendering and fetching the result - depending on the setup (like when using predicated tiling), it may be a full frame. That means that you need to be creative there, like rendering a bounding box that is bigger than the object itself, and dismissing the results after a camera cut.
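A minimal occlusion-query sketch in OpenGL (GL 1.5+); drawBoundingBox() and drawObject() are placeholders for cheap proxy geometry and the real draw:

GLuint query;
glGenQueries(1, &query);

glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // "dry render": no colour writes
glDepthMask(GL_FALSE);                                //  and no depth writes

glBeginQuery(GL_SAMPLES_PASSED, query);
drawBoundingBox(object);                              // cheap stand-in geometry
glEndQuery(GL_SAMPLES_PASSED);

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);

// Fetch the result later (ideally a frame or two later) to avoid stalling.
GLuint samples = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
if (samples > 0)
    drawObject(object);                               // at least one sample was visible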
And one more thing: a more traditional solution (that you can use concurrently with occlusion culling) is a room/portal system, where you define regions as "rooms" connected via "portals". If a portal is not visible from your current room, you can't see the room connected to it. And even if it is, you can clip your viewport to what's visible through the portal.
The approach I took in this minecraft level renderer is essentially a frustum-limited flood fill. The 16x16x128 chunks are split into 16x16x16 chunklets, each with a VBO with the relevant geometry. I start a floodfill in the chunklet grid at the player's location to find chunklets to render. The fill is limited by:
The view frustum
Solid chunklets - if the entire side of a chunklet is opaque blocks, then the floodfill will not enter the chunklet in that direction
Direction - the flood will not reverse direction, e.g.: if the current chunklet is to the north of the starting chunklet, do not flood into the chunklet to the south
It seems to work OK. I'm on android, so while a more complex analysis (antiportals as noted by Mike Daniels) would cull more geometry, I'm already CPU-limited so there's not much point.
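A rough C++ sketch of that frustum-limited flood fill; Chunklet, Grid, Frustum, Direction and the helper calls are all hypothetical names standing in for the real engine code:

#include <queue>
#include <vector>

void collectVisibleChunklets(Grid& grid, const GridPos& start,
                             const Frustum& frustum, int frameIndex,
                             std::vector<Chunklet*>& outVisible)
{
    std::queue<GridPos> open;
    open.push(start);
    grid.at(start)->lastVisitFrame = frameIndex;   // mark as visited this frame

    while (!open.empty()) {
        GridPos pos = open.front(); open.pop();
        Chunklet* c = grid.at(pos);
        outVisible.push_back(c);

        for (Direction dir : allDirections()) {
            GridPos next = step(pos, dir);
            Chunklet* n = grid.at(next);
            if (!n || n->lastVisitFrame == frameIndex) continue;     // already seen
            if (!frustum.intersects(grid.boundsOf(next))) continue;  // rule 1: frustum
            if (c->sideIsOpaque(dir))                     continue;  // rule 2: solid side
            if (reversesDirection(start, pos, dir))       continue;  // rule 3: no backtracking
            n->lastVisitFrame = frameIndex;
            open.push(next);
        }
    }
}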
I've just seen your answer to Alan: culling is not your problem - it's what and how you're sending to OpenGL that is slow.
What to draw: don't render a cube for each block, render the faces of transparent blocks that border an opaque block. Consider a 3x3x3 cube of, say, stone blocks: There is no point drawing the center block because there is no way that the player can see it. Likewise, the player will never see the faces between two adjacent stone blocks, so don't draw them.
How to draw: As noted by Alan, use VBOs to batch geometry. You will not believe how much faster they make things.
An easier approach, with minimal changes to your existing code, would be to use display lists. This is what minecraft uses.
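A minimal sketch of the "what to draw" rule above; world.isOpaque() and emitFace() are made-up placeholders, and isOpaque() is assumed to return false outside the world bounds:

// Only emit the faces of opaque blocks that border a non-opaque block.
for (int x = 0; x < SX; ++x)
for (int y = 0; y < SY; ++y)
for (int z = 0; z < SZ; ++z) {
    if (!world.isOpaque(x, y, z)) continue;          // nothing to draw here
    if (!world.isOpaque(x + 1, y, z)) emitFace(x, y, z, FACE_POS_X);
    if (!world.isOpaque(x - 1, y, z)) emitFace(x, y, z, FACE_NEG_X);
    if (!world.isOpaque(x, y + 1, z)) emitFace(x, y, z, FACE_POS_Y);
    if (!world.isOpaque(x, y - 1, z)) emitFace(x, y, z, FACE_NEG_Y);
    if (!world.isOpaque(x, y, z + 1)) emitFace(x, y, z, FACE_POS_Z);
    if (!world.isOpaque(x, y, z - 1)) emitFace(x, y, z, FACE_NEG_Z);
}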
How many blocks are you rendering and on what hardware? Modern hardware is very fast and is very difficult to overwhelm with geometry (unless we're talking about a handheld platform). On any moderately recent desktop hardware you should be able to render hundreds of thousands of cubes per frame at 60 frames per second without any fancy culling tricks.
If you're drawing each block with a separate draw call (glDrawElements/Arrays, glBegin/glEnd, etc) (bonus points: don't use glBegin/glEnd) then that will be your bottleneck. This is a common pitfall for beginners. If you're doing this, then you need to batch together all triangles that share texture and shading parameters into a single call for each setup. If the geometry is static and doesn't change frame to frame, you want to use one Vertex Buffer Object for each batch of triangles.
This can still be combined with frustum culling with an octree if you typically only have a small portion of your total game world in the view frustum at one time. The vertex buffers are still loaded statically and not changed. Frustum cull the octree to generate only the index buffers for the triangles in the frustum and upload those dynamically each frame.
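As a rough sketch of that batched path, assuming one shared static VBO for the world geometry and a streaming index buffer; octree.chunksInFrustum(), chunkVBO and dynamicIBO are made-up names, and vertex attribute setup is omitted:

// Refill a dynamic index buffer each frame with only the triangles whose
// chunks survived the octree frustum cull, then issue a single draw call.
std::vector<GLuint> indices;
for (Chunk* chunk : octree.chunksInFrustum(cameraFrustum))
    chunk->appendIndices(indices);

glBindBuffer(GL_ARRAY_BUFFER, chunkVBO);             // static vertex data
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, dynamicIBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             indices.size() * sizeof(GLuint), indices.data(),
             GL_STREAM_DRAW);                         // re-uploaded each frame

glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, nullptr);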
If you have surfaces close to the camera, you can create a frustum which represents an area that is not visible, and cull objects that are entirely contained in that frustum. In the diagram below, C is the camera, | is a flat surface near the camera, and the frustum-shaped region composed of . represents the occluded area. The surface is called an antiportal.
        .
       ..
      ...
     ....
    |....
    |....
    |....
    |....
C   |....
    |....
    |....
    |....
     ....
      ...
       ..
        .
(You should of course also turn on depth testing and depth writing as mentioned in other answers and comments -- it's very simple to do in OpenGL.)
The use of a Z-Buffer ensures that polygons overlap correctly.
Enabling the depth test makes every drawing operation check the Z-buffer before placing pixels onto the screen.
If you have convex (closed) objects, you should also enable back-face culling for performance!
Example code:
glEnable(GL_CULL_FACE);   // skip faces pointing away from the camera
glEnable(GL_DEPTH_TEST);  // test each fragment against the Z-buffer
glDepthMask(GL_TRUE);     // and write its depth when it passes
You can change which faces are culled by passing GL_FRONT or GL_BACK to glCullFace()...
glCullFace(...);
// Draw the "game world"...