The Processing project website has an example of implementing a 3D textured sphere with rotational capabilities. I'm trying to understand the code, but I'm having trouble comprehending many of the code blocks since I don't have a background in graphics.
Any higher-level explanation of what each block is trying to accomplish, perhaps referencing the relevant algorithm, would allow me to read up on the concepts and better understand the implementation.
After just a few minutes looking at the code, I'd say the draw() function is called by the Processing runtime system each time the image should be redrawn. This just paints a black background, then renders the globe with the renderGlobe() function.
The renderGlobe() function sets up the environment for drawing the globe: calculating the position, turning on the lights, setting the texture mode to IMAGE, and so on. Then it calls texturedSphere() to draw the globe. After that, it cleans up and adjusts the position variables for the next time through.
The initializeSphere() function calculates the vertex locations for the sphere. This is simple trigonometry.
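For reference, that trigonometry boils down to converting latitude/longitude angles into Cartesian points on the sphere. A minimal C++ sketch (not the actual Processing code; the function and parameter names here are made up):

#include <cmath>
#include <vector>

struct Vertex { float x, y, z; };

std::vector<Vertex> sphereVertices(int rings, int segments, float radius) {
    const float PI = 3.14159265f;
    std::vector<Vertex> verts;
    for (int i = 0; i <= rings; ++i) {
        float phi = PI * i / rings - PI / 2.0f;          // latitude, -90 to +90 degrees
        for (int j = 0; j < segments; ++j) {
            float theta = 2.0f * PI * j / segments;      // longitude, 0 to 360 degrees
            verts.push_back({radius * std::cos(phi) * std::cos(theta),
                             radius * std::sin(phi),
                             radius * std::cos(phi) * std::sin(theta)});
        }
    }
    return verts;
}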
The texturedSphere() function draws the sphere. First it draws the southern cap, which is really a cone, a very flat cone. Next it draws rings for each section of the sphere, and then tops it off with another cone for the northern cap.
Although I haven't gone through the Processing learning materials, the headings suggest that if you start from the beginning and try everything in order, you'll understand this code without much trouble.
Related
I am looking for a way to "fill" three-dimensional geometry with color, and quite possibly a texture at some time later on.
Suppose for a moment that you could physically phase your head into a concrete wall; logically, you would see only darkness. In OpenGL, however, when you do this the world appears hollow and transparent, because of culling and the way the geometry is drawn. I want to simulate the darkness/color/texture within it instead.
I know some games do this by overlaying a texture/color directly over the HUD, effectively blinding the player.
Is there another way to do this, though? Suppose the player is standing half in water; they can partially see below the waves. How would you fill it to prevent them from being able to see clearly below what is now half of their screen?
What is this concept even called?
A problem with the texture-in-front-of-the-camera method is that a texture is 2D, but you want to visualize a slice of a 3D volume. For the first thing you talk about, the head-inside-a-wall idea, I'll point you to "3D/volume texturing". For standing half in water, you're after "volume rendering" with "absorption" (discussed by #user3670102).
3D texturing
The general idea here is you have some function that defines a colour everywhere in a 3D space, not just on a surface (as with regular texture mapping). This is nice because you can put geometry anywhere and colour it in the fragment shader based on the 3D position. Think of taking a slice through the volume and looking at the intersection colour.
For the head-in-a-wall effect you could draw a full screen polygon in front of the player (right on the near clipping plane, although you might want to push this forwards a bit so it's not too small) and colour it based on a 3D function. Now it'll look properly solid and move as the player does, and not like you've cheaply stuck a texture over the screen.
The actual function could be defined with a 3D texture but that's very memory intensive. Instead, you could look into procedural 3D colour (a procedural wood or brick shader is a common example). Even a 2D texture "extruded" through the volume will work, or better yet, weight three textures (one for each axis) based on the orientation of the intersection/surface you're drawing on.
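To make "procedural 3D colour" concrete, here's a toy brick-like function in C++ that maps a world-space position to a colour. In practice this logic would sit in the fragment shader; all the names and constants here are illustrative only:

#include <cmath>

struct Colour { float r, g, b; };

// Toy procedural 3D colour: brick-like cells computed from a world-space position.
Colour proceduralBrick(float x, float y, float z) {
    const float brickHeight = 0.25f;   // one row of bricks per 0.25 units of height
    const float brickDepth  = 0.5f;
    const float mortar      = 0.08f;   // fraction of each cell taken up by mortar
    int row  = static_cast<int>(std::floor(y / brickHeight));
    float u  = x + ((row % 2) ? 0.5f : 0.0f);            // stagger alternate rows
    float fx = u - std::floor(u);                        // fraction across a brick (x)
    float fy = y / brickHeight - std::floor(y / brickHeight);
    float fz = z / brickDepth  - std::floor(z / brickDepth);
    bool inMortar = (fx < mortar) || (fy < mortar) || (fz < mortar);
    return inMortar ? Colour{0.8f, 0.8f, 0.8f}           // mortar
                    : Colour{0.6f, 0.2f, 0.15f};         // brick
}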
Detecting an intersection with the geometry and the near clipping plane is probably the hardest bit here. If I were you I'd look at tricks with the z-buffer and make sure to draw everything as solid non-self-intersecting geometry. A simple idea might be to draw back faces only after drawing everything with front faces. If you can see back faces that part of the near plane must be inside something. For these pixels you could calculate the near clipping plane position in world space and apply a 3D texture. Though I suspect there are faster ways than drawing everything twice.
In reality there would probably be no light getting to what you see and it should be black, but I guess just ignore this and render the colour directly, unlit.
Absorption
This sounds way harder than it actually is. If you have some transparent solid that's all one colour ("homogeneous") then it removes more light the further the light has to travel through it. Think of many alpha-transparent surfaces, take the limit, and you have an exponential. The light remaining is close to 1/exp(dist), or equivalently exp(-dist). Google "Beer's Law". From here,
vec3 Absorbance = WaterColor * WaterDensity * -WaterDepth;
vec3 Transmittance = exp(Absorbance);
A great way to find distances through something is to render the back faces (or seabed/water floor) with additive blending using a shader that draws distance to a floating point texture. Then switch to subtractive blending and render all the front faces (or water surface). You're left with a texture containing distances/depth for the above equation.
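A sketch of that blending setup in C-style OpenGL. drawWaterVolume() is a hypothetical draw call, and the render target is assumed to be a floating-point texture whose shader writes each fragment's camera distance:

glEnable(GL_CULL_FACE);
glEnable(GL_BLEND);

// Pass 1: back faces (or the sea floor) ADD their camera distance into the target.
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
glCullFace(GL_FRONT);                 // keep only back faces
drawWaterVolume();                    // hypothetical draw call

// Pass 2: front faces (or the water surface) SUBTRACT their distance,
// leaving (back distance - front distance) = thickness along each view ray.
glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);
glCullFace(GL_BACK);
drawWaterVolume();

glBlendEquation(GL_FUNC_ADD);         // restore default state
glDisable(GL_BLEND);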
Volume Rendering
Combining the two ideas, the material is a transparent solid, but the colour (and maybe density) varies throughout the volume. This starts to get pretty complicated if you have large amounts of data and want it to be fast. A straightforward way to render this is to numerically integrate a ray through the 3D texture (or procedural function, whatever you're using), at the same time applying the absorption function. A basic brute-force Euler integration might start a ray for each pixel on the near plane, then march forwards at even distances. Over each step while you march, you assume the colour remains constant and apply absorption, keeping track of how much light you have left. A quick google brings up this.
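A brute-force version of that march, written as plain C++ rather than a shader. sampleColour() and sampleDensity() are placeholder volume lookups standing in for a 3D texture or procedural function:

#include <cmath>

struct Vec3 { float x, y, z; };

// Placeholder lookups; a real version would sample a 3D texture or evaluate a
// procedural function at the point p.
Vec3  sampleColour(const Vec3&)  { return {0.2f, 0.4f, 0.6f}; }
float sampleDensity(const Vec3&) { return 0.5f; }

Vec3 marchRay(Vec3 origin, Vec3 dir, float maxDist, float step) {
    Vec3 result{0.0f, 0.0f, 0.0f};
    float transmittance = 1.0f;                       // fraction of light still remaining
    for (float t = 0.0f; t < maxDist; t += step) {
        Vec3 p{origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t};
        Vec3 c = sampleColour(p);
        float stepTrans = std::exp(-sampleDensity(p) * step);   // Beer's law over one step
        // Light contributed by this slab, attenuated by everything in front of it.
        result.x += c.x * transmittance * (1.0f - stepTrans);
        result.y += c.y * transmittance * (1.0f - stepTrans);
        result.z += c.z * transmittance * (1.0f - stepTrans);
        transmittance *= stepTrans;
        if (transmittance < 0.01f) break;             // early out: effectively opaque
    }
    return result;
}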
This seems related to looking through what's called "participating media". On the less extreme end, you'd have light fog, or smoky haze. In the middle could be, say, dirty water. And the extreme case would be your head-in-the-wall example.
Doing this in a physically accurate way isn't trivial, because the darkening effect is more pronounced where the medium is thicker.
But you can fake this by making some assumptions and making the interior geometry (under the water or inside the wall) darker, either by reducing the lighting or by using darker colors. If you care about the depth effect, look at OpenGL's fog support.
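If you go the fog route, the classic fixed-function setup looks roughly like this (the values are only illustrative):

GLfloat fogColor[4] = {0.1f, 0.2f, 0.3f, 1.0f};
glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_EXP2);          // exponential falloff, similar to absorption
glFogfv(GL_FOG_COLOR, fogColor);
glFogf(GL_FOG_DENSITY, 0.15f);
glHint(GL_FOG_HINT, GL_NICEST);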
For underwater, you can make the back side of the water a semi-transparent color that causes stuff above it to have a suitable change in color.
If you really want to go nuts with accuracy, look at Kajiya's Rendering Equation. That covers everything (including stuff that glows), but it generally needs simplification and approximations to be useful.
I'm developing a 3D maze project with OpenGL. Despite my poor knowledge of programming, I managed to build it and add textures and mouse/keyboard steering (mostly stolen from the net, though). Now I'm stuck with lighting.
My idea (similar to the famous game SCP-087) was to make a headlight with a short range of light. I've read a lot about lighting and normals in OpenGL, but the light still behaves oddly and somewhat randomly.
I have functions that move (translate) my world around the camera. I understand that if I place my light0 at the beginning of the code, before the translations in the Render() function, the light should stay fixed relative to the camera while the world moves around them; that sounds simple. But it doesn't seem to work. Most of the walls are lit when I'm standing sideways to them (and completely dark when I face them from the other side).
My other problem is normals. I'm pretty sure my normals aren't specified well, but I have no idea what they should look like. I'm using triangle strips for the walls (theoretically for improved efficiency, but practically it's just my caprice), and I couldn't find a solution anywhere for how normals should be specified for straight walls made of triangle strips (how to attach normals to faces when only two vertices actually define a single face). Making one wall out of simple quads and attaching four perpendicular unit-length vectors per face also didn't make any improvement.
I also can't properly handle the parameters of the OpenGL state machine, especially attenuation; no change in the code seems to affect the final lighting.
I currently have no idea where to search or what to read about lighting in OpenGL. I would like to keep it super simple, with no additional libraries (except glu.h). Can you give me a hand with this problem or point me to a useful source of knowledge (simple examples needed...)?
I'm trying to create an OpenGL application with water waves and refraction. I need to either cast rays from the sun and then from the camera and figure out where they intersect, or start from the ocean floor and figure out in which direction(s), if any, I have to go in order to hit the sun or the camera. I'm kind of stuck; can anyone give me an entry point into OpenGL ray casting or a crash course in advanced geometry? I don't want the ocean floor to be at a constant depth, and I don't want the water waves to be simple sinusoidal waves.
First things first: the effect you're trying to achieve can be implemented using OpenGL, but it is not a feature of OpenGL. OpenGL by itself is just a sophisticated triangle-to-screen drawing API. You take some input data and write a program that performs relatively simple rasterized drawing operations on it through the OpenGL API. Shaders give you some room to go beyond that; you can even implement a raytracer in the fragment shader.
In your case that means you must implement some algorithm that generates the picture you intend. For the water it must be some kind of raytracer or a fake refraction method to get the effect of looking into the water. The caustics require either a full-featured photon mapper, or you can settle for a fake effect based on the 2nd derivative of the water surface.
There is a WebGL demo that renders stunningly good-looking, interactive water: http://madebyevan.com/webgl-water/ And here's a video of it on YouTube: http://www.youtube.com/watch?v=R0O_9bp3EKQ
This demo uses true raytracing (the water surface, the sphere and the pool are raytraced), the caustics are a "fake caustics" effect, based on projecting the 2nd derivative of the water surface heightmap.
There's nothing very OpenGL-specific about this.
Are you talking about caustics? Here's another good Gamasutra article.
Reflections are normally achieved by reflecting the camera in the plane of the mirror and rendering to a texture; you can then apply distortion and use it to texture the water surface. This only works well for small waves.
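For a flat water plane at height waterY, the mirroring can be done on the fixed-function matrix stack roughly like this. drawScene() is a hypothetical call that renders the above-water geometry; the result would then be grabbed into the reflection texture:

glPushMatrix();
glTranslatef(0.0f, waterY, 0.0f);
glScalef(1.0f, -1.0f, 1.0f);           // mirror about the plane y = waterY
glTranslatef(0.0f, -waterY, 0.0f);
glFrontFace(GL_CW);                    // the mirroring flips triangle winding
drawScene();                           // hypothetical: draw everything above the water
glFrontFace(GL_CCW);
glPopMatrix();
// e.g. glCopyTexSubImage2D(...) here to copy the result into the reflection texture.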
What you're after here is lots of little ways to cheat :-)
Technically, all you perceive is the result of light waves/photons bouncing off surfaces and propagating through media. For the "real deal" you'd have to trace the light directly from the Sun, with each ray following the path:
hit the water surface
refract+reflect, reflected goes into the camera(*), refracted part goes further
hits the ocean bottom
reflects
hits the water from beneath
reflect+refracts, refracted part gets out of the water and hits the camera(*), reflected again goes to the ocean bottom, reflects etc.
(*) Actually, most of the rays will miss the camera, but waiting for rays to hit it by chance would be overly expensive, so this is a cheat.
Do this for at least three wavelengths - "red", "green" and "blue". Each of them will refract and reflect differently. You'll get the whole picture by combining the three.
Then you just create a texture with the rays that got into the camera and overlay it in OpenGL.
That's a straightforward, simple, and very computationally expensive way to approximate the physics behind the caustics.
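The refraction step at each surface hit is just Snell's law, the same maths as GLSL's refract(). A small C++ sketch, where eta is the ratio of refractive indices that you'd vary slightly per wavelength to get the dispersion mentioned above:

#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// I: incident direction (unit), N: surface normal (unit, facing against I).
// Returns false on total internal reflection (no refracted ray).
bool refractRay(Vec3 I, Vec3 N, float eta, Vec3& out) {
    float cosi = -dot(N, I);
    float k = 1.0f - eta * eta * (1.0f - cosi * cosi);
    if (k < 0.0f) return false;                      // total internal reflection
    float s = eta * cosi - std::sqrt(k);
    out = {eta * I.x + s * N.x, eta * I.y + s * N.y, eta * I.z + s * N.z};
    return true;
}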
I'm writing a Minecraft-like static 3D block world in C++ / OpenGL. I'm working on improving frame rates, and so far I've implemented frustum culling using an octree. This helps, but I'm still seeing moderate to bad frame rates. The next step would be to cull cubes that are hidden from the viewpoint by closer cubes. However, I haven't been able to find many resources on how to accomplish this.
Create a render target with a Z-buffer (or "depth buffer") enabled. Then make sure to sort all your opaque objects so they are rendered front to back, i.e. the ones closest to the camera first. Anything using alpha blending still needs to be rendered back to front, AFTER you have rendered all your opaque objects.
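The front-to-back sort itself is simple; a sketch with a made-up Object type:

#include <algorithm>
#include <vector>

struct Object { float x, y, z; /* ... */ };

void sortFrontToBack(std::vector<Object>& objects, float camX, float camY, float camZ) {
    auto dist2 = [&](const Object& o) {
        float dx = o.x - camX, dy = o.y - camY, dz = o.z - camZ;
        return dx*dx + dy*dy + dz*dz;      // squared distance is enough for ordering
    };
    std::sort(objects.begin(), objects.end(),
              [&](const Object& a, const Object& b) { return dist2(a) < dist2(b); });
}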
Another technique is occlusion culling: you can cheaply "dry-render" your geometry and then find out how many pixels passed the depth test. There is occlusion query support in DirectX and OpenGL, although not every GPU can do it.
The downside is that you need a delay between the rendering and fetching the result - depending on the setup (like when using predicated tiling), it may be a full frame. That means that you need to be creative there, like rendering a bounding box that is bigger than the object itself, and dismissing the results after a camera cut.
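For reference, a bare-bones OpenGL occlusion query looks roughly like this. drawBoundingBox() is a hypothetical proxy draw call, and in a real engine you'd fetch the result a frame or two later instead of immediately, for the reasons above:

GLuint query;
glGenQueries(1, &query);

// "Dry render": no colour or depth writes, just count samples that pass the depth test.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);

glBeginQuery(GL_SAMPLES_PASSED, query);
drawBoundingBox();                       // cheap proxy geometry for the object
glEndQuery(GL_SAMPLES_PASSED);

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);

// Fetch the result later to avoid stalling the GPU.
GLuint samples = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
if (samples == 0) {
    // The proxy was completely hidden; skip rendering the real object.
}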
And one more thing: a more traditional solution (that you can use concurrently with occlusion culling) is a room/portal system, where you define regions as "rooms" connected via "portals". If a portal is not visible from your current room, you can't see the room connected to it. And even if it is, you can clip your viewport to what's visible through the portal.
The approach I took in this minecraft level renderer is essentially a frustum-limited flood fill (a sketch of it follows below). The 16x16x128 chunks are split into 16x16x16 chunklets, each with a VBO containing the relevant geometry. I start a flood fill in the chunklet grid at the player's location to find chunklets to render. The fill is limited by:
The view frustum
Solid chunklets - if the entire side of a chunklet is opaque blocks, then the floodfill will not enter the chunklet in that direction
Direction - the flood will not reverse direction, e.g.: if the current chunklet is to the north of the starting chunklet, do not flood into the chunklet to the south
It seems to work OK. I'm on Android, so while a more complex analysis (antiportals, as noted by Mike Daniels) would cull more geometry, I'm already CPU-limited so there's not much point.
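A rough C++ sketch of that flood fill. isInFrustum() and sideIsOpaque() are hypothetical stand-ins for the real frustum test and chunklet data:

#include <queue>
#include <set>
#include <tuple>
#include <utility>

struct ChunkletPos {
    int x, y, z;
    bool operator<(const ChunkletPos& o) const {
        return std::tie(x, y, z) < std::tie(o.x, o.y, o.z);
    }
};

// Hypothetical stand-ins for the real frustum test and chunklet data.
bool isInFrustum(const ChunkletPos& c);
bool sideIsOpaque(const ChunkletPos& c, int dir);   // is this whole face solid blocks?

// Directions: +x, -x, +y, -y, +z, -z (the opposite of direction i is i ^ 1).
static const int DIRS[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};

std::set<ChunkletPos> collectVisibleChunklets(ChunkletPos start) {
    std::set<ChunkletPos> visible;
    // Each entry remembers which directions the flood has already stepped in,
    // so it never reverses (bit i set = we have moved along DIRS[i]).
    std::queue<std::pair<ChunkletPos, unsigned>> frontier;
    visible.insert(start);
    frontier.push({start, 0u});

    while (!frontier.empty()) {
        auto [pos, moved] = frontier.front();
        frontier.pop();
        for (int i = 0; i < 6; ++i) {
            if (moved & (1u << (i ^ 1))) continue;      // limit 3: don't reverse direction
            ChunkletPos next{pos.x + DIRS[i][0], pos.y + DIRS[i][1], pos.z + DIRS[i][2]};
            if (visible.count(next))  continue;
            if (!isInFrustum(next))   continue;         // limit 1: view frustum
            if (sideIsOpaque(pos, i)) continue;         // limit 2: solid chunklet face
            visible.insert(next);
            frontier.push({next, moved | (1u << i)});
        }
    }
    return visible;
}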
I've just seen your answer to Alan: culling is not your problem - it's what and how you're sending to OpenGL that is slow.
What to draw: don't render a cube for each block; render only the faces where an opaque block borders a transparent one. Consider a 3x3x3 cube of, say, stone blocks: there is no point drawing the center block because there is no way the player can see it. Likewise, the player will never see the faces between two adjacent stone blocks, so don't draw them.
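A sketch of that "only visible faces" mesh build in C++. isOpaque() and addFaceToMesh() are hypothetical stand-ins for the real world query and mesh builder:

// Hypothetical stand-ins for the real world query and mesh builder.
bool isOpaque(int x, int y, int z);
void addFaceToMesh(int x, int y, int z, int face);

static const int NEIGHBOUR[6][3] = {
    {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1}
};

void buildChunkMesh(int sizeX, int sizeY, int sizeZ) {
    for (int x = 0; x < sizeX; ++x)
        for (int y = 0; y < sizeY; ++y)
            for (int z = 0; z < sizeZ; ++z) {
                if (!isOpaque(x, y, z)) continue;       // air: nothing to draw
                for (int f = 0; f < 6; ++f) {
                    int nx = x + NEIGHBOUR[f][0];
                    int ny = y + NEIGHBOUR[f][1];
                    int nz = z + NEIGHBOUR[f][2];
                    // Only emit the face if the neighbouring block is not opaque;
                    // faces between two solid blocks can never be seen.
                    if (!isOpaque(nx, ny, nz))
                        addFaceToMesh(x, y, z, f);
                }
            }
}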
How to draw: As noted by Alan, use VBOs to batch geometry. You will not believe how much faster they make things.
An easier approach, with minimal changes to your existing code, would be to use display lists. This is what minecraft uses.
How many blocks are you rendering and on what hardware? Modern hardware is very fast and is very difficult to overwhelm with geometry (unless we're talking about a handheld platform). On any moderately recent desktop hardware you should be able to render hundreds of thousands of cubes per frame at 60 frames per second without any fancy culling tricks.
If you're drawing each block with a separate draw call (glDrawElements/Arrays, glBegin/glEnd, etc) (bonus points: don't use glBegin/glEnd) then that will be your bottleneck. This is a common pitfall for beginners. If you're doing this, then you need to batch together all triangles that share texture and shading parameters into a single call for each setup. If the geometry is static and doesn't change frame to frame, you want to use one Vertex Buffer Object for each batch of triangles.
This can still be combined with frustum culling with an octree if you typically only have a small portion of your total game world in the view frustum at one time. The vertex buffers are still loaded statically and not changed. Frustum cull the octree to generate only the index buffers for the triangles in the frustum and upload those dynamically each frame.
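A rough outline of that setup in C-style OpenGL, where collectVisibleIndices() is a hypothetical function that walks the octree against the frustum, and vertexData/vertexBytes/octreeRoot/frustum are assumed to exist:

GLuint vbo = 0, ibo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertexBytes, vertexData, GL_STATIC_DRAW);   // uploaded once

glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);

// Every frame: gather indices only from octree nodes that pass the frustum test,
// then upload that (much smaller) index list and draw it in one call.
std::vector<GLuint> visibleIndices;
collectVisibleIndices(octreeRoot, frustum, visibleIndices);               // hypothetical
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
             visibleIndices.size() * sizeof(GLuint),
             visibleIndices.data(), GL_DYNAMIC_DRAW);
glDrawElements(GL_TRIANGLES, (GLsizei)visibleIndices.size(), GL_UNSIGNED_INT, 0);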
If you have surfaces close to the camera, you can create a frustum which represents an area that is not visible, and cull objects that are entirely contained in that frustum. In the diagram below, C is the camera, | is a flat surface near the camera, and the frustum-shaped region composed of . represents the occluded area. The surface is called an antiportal.
.
..
...
....
|....
|....
|....
|....
C |....
|....
|....
|....
....
...
..
.
(You should of course also turn on depth testing and depth writing as mentioned in the other answers and comments; it's very simple to do in OpenGL.)
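One way to sketch the antiportal test in C++: build planes through the camera and each edge of the occluder (normals pointing into the occluded region, plus the occluder's own plane), then check whether a bounding sphere lies entirely on the occluded side of all of them. All names here are made up:

#include <array>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };          // n.p + d = 0, n unit length

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v) {
    float l = std::sqrt(dot(v, v));
    return {v.x/l, v.y/l, v.z/l};
}

// Build one side plane through the camera C and an occluder edge (e0 -> e1).
// 'inside' is any point known to lie in the occluded region (e.g. the centre of
// the occluder pushed away from the camera); it is used to orient the normal.
Plane sidePlane(Vec3 C, Vec3 e0, Vec3 e1, Vec3 inside) {
    Vec3 n = normalize(cross(sub(e0, C), sub(e1, C)));
    float d = -dot(n, C);
    if (dot(n, inside) + d < 0.0f) { n = {-n.x, -n.y, -n.z}; d = -d; }
    return {n, d};
}

// 4 side planes + the occluder's plane. A sphere is occluded only if it lies
// completely inside every plane (fully behind the occluder as seen from C).
bool sphereOccluded(const std::array<Plane, 5>& planes, Vec3 center, float radius) {
    for (const Plane& p : planes)
        if (dot(p.n, center) + p.d < radius)   // not fully on the occluded side
            return false;
    return true;
}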
The use of a Z-Buffer ensures that polygons overlap correctly.
Enabling the depth test makes every drawing operation check the Z-buffer before placing pixels onto the screen.
If you have convex objects you must (for performance) enable backface culling!
Example code:
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
You can change the behaviour of glCullFace() by passing GL_FRONT or GL_BACK...
glCullFace(...);
// Draw the "game world"...
I am currently working on designing my first FPS game using JOGL (Java bindings for OpenGL).
So far I have been able to generate the 'world' (a series of cubes), and a player model. I have the collision detection between the player and the cubes working great.
Now I am trying to add in the guns. I have the gun models drawn correctly and loading onto the player model. The first gun I'm trying to implement is a laser gun, which shoots an instantaneous line-of-sight laser at whatever you're aiming at. Before I work on implementing the enemy models, I would like to get the collision detection between the laser and the walls working.
My laser is currently drawn as a series of small cubes, one after the other. The first cube is drawn at the end of the player's gun, and it keeps drawing from there. The idea was to continue drawing the cubes of the laser until a collision was detected with something, namely the cubes in the world.
I know the locations of the cubes in the world. The problem is that I have to call glPushMatrix to draw my character model, and the laser is then drawn within this modelview, meaning that I have lost my old coordinate system; I'm drawing the world in one system and the laser in another. Within this player matrix, I have to call glRotate and glTranslate several times in order to sync everything up with the way the camera is rotating. The laser is then built by translating along the z-axis of this new system.
My problem is that through all of these transformations, I no longer have any idea where my laser exists in the map coordinate system, primarily due to the rotations involving the camera.
Does anyone know of a method - or have any ideas, for how to solve this problem? I believe I need a way to convert the new coordinates of the laser into the old coordinates of the map, but I'm not sure how to go about undoing all of the transformations that have been done to it. There may also be some functionality provided by OpenGL to handle this sort of problem that I'm just unaware of.
You shouldn't be considering the laser as a spatial child of the character that fires it. Once it's been fired, the laser is an entity of its own, so you should render as follows:
glPushMatrix();
glMultMatrixf(viewMatrix);      // apply the camera/view transform

  glPushMatrix();
  glMultMatrixf(playerMatrix);  // the player's own transform
  DrawPlayer();
  glPopMatrix();

  glPushMatrix();
  glMultMatrixf(laserMatrix);   // the laser carries its own world-space transform
  DrawLaser();
  glPopMatrix();

glPopMatrix();
Also, be sure that you don't mix your rendering transformation logic with the game logic. You should always store the world-space position of your objects to be able to test for intersections regardless of your current OpenGL matrix stack.
Remember to be careful with spatial parent/child relationships; in practice, they aren't that frequent. For more information, google the problems of scene graphs.
The point that was being made in the first answer is that you should never depend on the matrix to position the object in the first place. You should be keeping track of the position and rotation of the laser before you even think about drawing it. Then you use the translate and rotate commands to put it where you know it should be.
You're trying to do things backwards, and yes, that does mean you'll have to do the matrix math. OpenGL doesn't keep track of that because the ModelView matrix is the ONLY thing OpenGL does keep track of with regard to object positions. OpenGL has no concept of "world space" or "camera space"; there is only the matrix that all input is multiplied by. It's elegantly simple... but in some cases I do prefer the way DirectX has a separate view matrix and model matrix.
So, if you don't know where an object is located without matrix math, then I would consider that a fundamental design problem. If you don't need to know the object's position, then matrix-transform to your heart's content, but if you do need its position, start with the position.
(pretty much what the first answer says, just in a different way...)
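To make that concrete, here's a C-style sketch (JOGL exposes the same GL calls on its GL object): the laser's origin and direction live in world space in the game logic, the collision test never touches OpenGL, and the matrix stack is only used at draw time. worldBlocksAt() is a hypothetical map query:

struct Vec3 { float x, y, z; };

bool worldBlocksAt(const Vec3& p);      // hypothetical collision query against the map

struct Laser {
    Vec3 origin;     // world-space muzzle position, kept by the game logic
    Vec3 direction;  // world-space unit direction, derived from the player's yaw/pitch
};

// Game logic: march along the ray in world coordinates until a world cube is hit.
// No OpenGL involved, so the camera's transformations can't confuse the result.
Vec3 traceLaser(const Laser& laser, float step, float maxDist) {
    Vec3 p = laser.origin;
    for (float t = 0.0f; t < maxDist; t += step) {
        p = {laser.origin.x + laser.direction.x * t,
             laser.origin.y + laser.direction.y * t,
             laser.origin.z + laser.direction.z * t};
        if (worldBlocksAt(p)) break;    // stop the beam at the first solid block
    }
    return p;                            // world-space end point of the beam
}

// Rendering: the laser gets its own transform, independent of the player's matrix.
void drawLaser(const Laser& laser) {
    glPushMatrix();
    glTranslatef(laser.origin.x, laser.origin.y, laser.origin.z);
    // ...rotate so the beam's local z-axis lines up with laser.direction,
    // then draw a thin box/quad stretching to the traced end point...
    glPopMatrix();
}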