OpenGL texturing a height map [closed] - c++

I can't seem to find any tutorials or information on how to properly texture a terrain generated from a height map.
Texture mapping is not the problem; what I want to know is how to texture the entire terrain so that different elevations get different textures.
What I came up with was having a texture that looks like this:
Depending on the elevation of the triangle a different texture is assigned to it.
The result:
What I'm trying to do is create an environment like Skyrim, where textures don't need to repeat constantly: a convincing landscape!
The question is how do I create something like this:
The textures blend together seamlessly at different elevations! How is this done? What is the technique used?
Example Video: http://www.youtube.com/watch?v=qzkBnCBpQAM

One way would be to use a 3D texture for your terrain. Each layer of the texture is a different material (sand, rock, grass, for example). Every vertex has a third UV component that specifies the blend between two adjacent layers; you could also use the height of the vertex here. Note that a blend between grass and sand in our example is not possible with this approach because rock 'lies in between', but it is certainly the easiest and fastest method.

Another method would be to use individual 2D textures instead of a single 3D one. You would then bind the sand and grass textures, for example, and draw all vertices that need a blend between those two. Bind two other textures and repeat. That is certainly more complicated and slower, but it allows blending between any two textures.

There might be more methods, but these two are the ones I can think of right now.

Professional game engines usually use more advanced methods; I've seen designers painting multiple materials onto a terrain as in Photoshop, but that's a different story.
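As a rough illustration of the first approach, here is a minimal fragment-shader sketch (embedded as a C++ string) that does the height-based blend with a GL_TEXTURE_2D_ARRAY, sampling the two nearest material layers and mixing them manually; with a true 3D texture you would instead pass a single normalized third coordinate and let the texture filtering do the blend. All names here are placeholders, not code from the question.

```cpp
// Hypothetical sketch: blending terrain materials by elevation.
// Assumes a GL 3.3+ context and that "materials" is a GL_TEXTURE_2D_ARRAY whose
// layers are ordered by elevation (e.g. 0 = sand, 1 = grass, 2 = rock).
const char* terrainFragmentShader = R"(
#version 330 core
in vec2 vUV;        // regular tiling texture coordinates
in float vHeight;   // vertex height, normalized to 0..1 by the vertex shader
out vec4 fragColor;

uniform sampler2DArray materials; // one layer per material, ordered by elevation
uniform float layerCount;         // number of layers in the array

void main()
{
    // Map the height onto the layer range and blend the two nearest layers.
    float layer = clamp(vHeight * (layerCount - 1.0), 0.0, layerCount - 1.0);
    float lower = floor(layer);
    float upper = min(lower + 1.0, layerCount - 1.0);
    float t     = layer - lower; // blend factor between the two materials

    vec4 a = texture(materials, vec3(vUV, lower));
    vec4 b = texture(materials, vec3(vUV, upper));
    fragColor = mix(a, b, t);
}
)";
```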

Related

How to make low-res graphics with OpenGL? [closed]

I am writing in C using OpenGL, and I want to make old-school-style graphics like Star Fox for the SNES. So I plan to have a 2D array (I'll figure out how; just talking pseudocode for now) of fragments that will represent the lower resolution (you can imagine it just containing RGB color info). I'm going to be writing my own code that makes the 3D world and rasterizes it into this 2D array (I might try to get the GPU to help there). Does this even make sense? Are there better ways to make low-res 2D graphics using OpenGL?
Render scene to low-resolution FBO.
Stretch-blit FBO contents to screen using a textured quad or glBlitFramebuffer().
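A minimal sketch of those two steps, assuming a GL 3.0+ context with a loader such as GLEW or glad already set up; the internal resolution and helper names are placeholders:

```cpp
// Render into a low-resolution FBO, then stretch-blit it to the window.
#include <GL/glew.h>  // or glad; whichever loader the project already uses

const int LOW_W = 320, LOW_H = 240;       // internal "retro" resolution
GLuint fbo = 0, colorTex = 0, depthRbo = 0;

void createLowResTarget()
{
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, LOW_W, LOW_H, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // keep pixels chunky
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenRenderbuffers(1, &depthRbo);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, LOW_W, LOW_H);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRbo);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void renderFrame(int windowWidth, int windowHeight)
{
    // 1. Draw the 3D scene at low resolution.
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, LOW_W, LOW_H);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... draw the scene here ...

    // 2. Stretch-blit the result to the default framebuffer.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, LOW_W, LOW_H,
                      0, 0, windowWidth, windowHeight,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST); // GL_NEAREST keeps hard pixel edges
}
```

Using GL_NEAREST for the blit preserves the blocky look; GL_LINEAR would smooth the upscaled pixels instead.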

OpenGL Render Queue - Where do my buffers go? [closed]

I'm trying to redesign my current rendering system, which pretty much works as you would expect for a first-time renderer: every frame, the game calls Render(), which calls Render() on the world, which Z-sorts the entities (it's a 2D game), calls Render(SpriteBatch& batch) on every Entity, and then each Entity uses the given SpriteBatch to render its sprite. The SpriteBatch has a VAO, a VBO, and an EBO. It is efficient for sprite rendering, but still fairly naive. I also have no way to render other things, like polygons and lines. For example, I wanted to be able to just tell my renderer to render a line from one point to another. In order to do that now, I have to create an entire Mesh, set its primitive type to GL_LINES, upload vertex data, upload element data, and then call Render() on it somewhere. And those lines won't even be batched, because they each have their own OpenGL buffers.
I want to move all the rendering logic into my OpenGLRenderer class. I have been doing lots of reading, and it seems like the way to go is to use some sort of RenderQueue and RenderCommand setup. I think I understand that pretty well: each entity will create a RenderCommand with its Mesh/Material or Sprite (which would just be data) and then submit it to the Renderer. The Renderer will then sort the commands based on Material and such to avoid state changes wherever possible.
This is where my question comes in: where will my buffers go? I have an entire queue of RenderCommands, which point to all the vertex and material data I need, but how do I get this over to OpenGL? I was thinking about just having one big VAO, VBO, and EBO and treating the entire thing like a big batch. I would flush the buffers if state needed to change, and batch together data that could be batched. However, something feels strange about having one huge set of OpenGL buffers for an entire game.
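To make the setup being described concrete, here is a hypothetical sketch of the RenderCommand/queue idea. Every name is illustrative, GL headers (glad/GLEW) are assumed to be included, and the actual buffer handling (appending into one shared VAO/VBO/EBO and flushing it) is only stubbed out:

```cpp
#include <algorithm>
#include <tuple>
#include <vector>

struct Vertex { float x, y, u, v; };           // minimal 2D vertex for the sketch

struct RenderCommand {
    GLuint shaderId;                            // part of the sort key
    GLuint textureId;                           // part of the sort key
    GLenum primitive;                           // GL_TRIANGLES, GL_LINES, ...
    std::vector<Vertex> vertices;               // CPU-side data, uploaded on flush
    std::vector<GLuint> indices;
};

class OpenGLRenderer {
public:
    void submit(RenderCommand cmd) { queue.push_back(std::move(cmd)); }

    void render() {
        // Sort so commands sharing shader/texture/primitive end up adjacent.
        std::sort(queue.begin(), queue.end(), [](const RenderCommand& a, const RenderCommand& b) {
            return std::tie(a.shaderId, a.textureId, a.primitive)
                 < std::tie(b.shaderId, b.textureId, b.primitive);
        });
        for (const RenderCommand& cmd : queue) {
            if (stateDiffers(cmd)) flush();     // draw what has accumulated, then switch state
            append(cmd);                        // copy into the one shared CPU-side batch
        }
        flush();
        queue.clear();
    }

private:
    bool stateDiffers(const RenderCommand&) const { return false; /* compare with bound state */ }
    void append(const RenderCommand&) { /* append vertices/indices, offsetting indices */ }
    void flush() { /* glBufferSubData into the shared VBO/EBO, then glDrawElements */ }

    std::vector<RenderCommand> queue;           // the render queue for this frame
};
```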

OpenGL - How to show the occluded region of a sprite as a silhouette [closed]

I'm making a 2D game with OpenGL, using textured quads to display 2D sprites. I'd like to create an effect whereby any character sprite that's partially hidden by a terrain sprite will have the occluded region visible as a solid-colored silhouette, as demonstrated by the pastoral scene in this image.
I'm really not sure how to achieve this. I'm guessing the solution will involve some trick with the fragment shader, but as for specifics I'm stumped. Can anyone point me in the right direction?
Here's what I've done in the past:
Draw the world/terrain (everything you want the silhouette to show through)
Disable the depth test
Disable writes to the depth buffer
Draw the sprites in silhouette mode (a different shader or texture)
Enable the depth test
Enable writes to the depth buffer
Draw the sprites in normal mode
Draw anything else that should go on top (like the HUD)
Explanation:
When you draw the sprites the first time (in silhouette mode), they draw over everything but don't affect the depth buffer, so you won't get z-fighting on the second pass. When you draw them the second time, the parts behind the terrain fail the depth test, but the silhouette has already been drawn there.
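A condensed sketch of those steps; the draw*() helpers and the two shader programs are placeholders for your own code:

```cpp
void drawWorld();    // terrain and everything the silhouette should show through
void drawSprites();  // the character sprites
void drawHud();      // anything that goes on top
extern GLuint silhouetteShader, spriteShader;

void renderFrame()
{
    drawWorld();

    // Silhouette pass: no depth test, no depth writes, flat-color shader.
    glDisable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);
    glUseProgram(silhouetteShader);
    drawSprites();

    // Normal pass: depth test and writes back on, regular textured shader.
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glUseProgram(spriteShader);
    drawSprites();

    drawHud();
}
```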
You can do things like this using stenciling or depth buffering.
When rendering the wall, make sure it writes a different value to the stencil buffer than the background does. Then render the cow twice: once passing the stencil test where the wall is not present, and once where it is. Use a different shader each time.
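A possible sketch of that stencil variant, assuming an 8-bit stencil buffer; drawTerrain()/drawSprite() and the two shaders are placeholders:

```cpp
void drawTerrain();
void drawSprite();
extern GLuint spriteShader, silhouetteShader;

void renderWithStencil()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);

    // Terrain/wall pixels write stencil value 1; the background stays 0.
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    drawTerrain();

    // Normal sprite where the wall is not covering it (stencil == 0)...
    glStencilFunc(GL_EQUAL, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glUseProgram(spriteShader);
    drawSprite();

    // ...and the silhouette where the wall was drawn (stencil == 1).
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glUseProgram(silhouetteShader);
    drawSprite();

    glDisable(GL_STENCIL_TEST);
}
```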

How to draw a 3D rendered image (perspective proj.) back to another viewport with orthogonal proj., simultaneously using multiple viewports and OpenGL [closed]

My problem is that I want to take a kind of snapshot of a 3D scene, manipulate that snapshot, and draw it back to another viewport of the scene.
I just read the image using glReadPixels.
Now I want to draw that image back to a specified viewport, but using modern OpenGL.
I read about framebuffer objects (FBOs) and pixel buffer objects (PBOs), and about the approach of writing the framebuffer contents into a 2D texture and passing it to the fragment shader as a simple texture.
Is this the correct way, or can anyone provide a simple example of how to render the image back to the scene using modern OpenGL, without the deprecated glDrawPixels?
The overall process will look something like this:
Create an FBO with a color and depth attachment. Bind it.
Render your scene
Copy the contents out of its color attachment to client memory to do the operations you want on it.*
Copy the image back into an OpenGL texture (may as well keep the same one).
Bind the default framebuffer (0)
Render a full screen quad using your image as a texture map. (Possibly using a different shader or switching shader functionality).
Possible questions you may have:
Do I have to render a full screen quad? Yup. You can't bypass the vertex shader. So somewhere just go make four vertices with texture coordinates in a VBO, yada yada.
My vertex shader deals with projecting things, how do I deal with that quad? You can create a subroutine that toggles how you deal with vertices in your vertex shader. One can be for regular 3D rendering (i.e. transforming from model space into world/view/screen space) and one can just be a pass-through that sends along your vertices unmodified. You'll just want your vertices at the four corners of the square from (-1,-1) to (1,1). Send those along to your fragment shader and it'll do what you want. You can optionally just set all your matrices to identity if you don't feel like using subroutines.
*If you can find a way do your texture operations in a shader, I'd highly recommend it. GPUs are quite literally built for this.
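An abridged sketch of the listed steps (GL 3.0+), with error checking omitted; renderScene(), manipulate() and drawTexturedQuad() are placeholders for your own perspective pass, CPU-side processing, and full-screen-quad drawing:

```cpp
#include <GL/glew.h>
#include <vector>

void renderScene();                                  // your perspective pass
void manipulate(std::vector<unsigned char>& pixels); // your CPU-side processing
void drawTexturedQuad(GLuint texture);               // quad with a pass-through vertex shader

void snapshotAndRedraw(int width, int height,              // size of the snapshot
                       int vpX, int vpY, int vpW, int vpH) // the second viewport
{
    // 1. FBO with a color texture and a depth renderbuffer.
    GLuint fbo, colorTex, depthRbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

    glGenRenderbuffers(1, &depthRbo);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRbo);

    // 2. Render the perspective scene into the FBO.
    glViewport(0, 0, width, height);
    renderScene();

    // 3. Read the snapshot back and manipulate it on the CPU.
    std::vector<unsigned char> pixels(size_t(width) * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    manipulate(pixels);

    // 4. Upload the result back into the same texture.
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    // 5-6. Back to the default framebuffer: set the second viewport and draw a
    //      quad textured with the snapshot.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(vpX, vpY, vpW, vpH);
    drawTexturedQuad(colorTex);
}
```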

How to add glowing effect to a line for OpenGL? [closed]

How can I add a glowing effect to a line that I draw? I'm using OpenGL for Linux.
You can implement the radial blur effect described in NeHe Lesson 36. The main idea is to render the drawing to a texture, then render that texture N times with a small offset after each pass, until the result is ready to be copied to the framebuffer.
I've written a small demo that uses Qt and OpenGL. You can see the original drawing (without the blur) below:
The next image shows the drawing with the blur effect turned on:
I know it's not much, but it's a start.
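For reference, the accumulation loop at the heart of that radial-blur idea looks roughly like this; sceneTex is assumed to hold the line drawing already rendered to a texture, and drawZoomedQuad() is a placeholder that draws a screen-aligned quad scaled by the given zoom factor:

```cpp
void drawZoomedQuad(GLuint texture, float zoom, float alpha);

void radialBlurPass(GLuint sceneTex)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);   // additive accumulation of the copies

    const int passes = 12;
    float zoom  = 1.0f;
    float alpha = 0.2f;
    for (int i = 0; i < passes; ++i) {
        drawZoomedQuad(sceneTex, zoom, alpha);
        zoom  += 0.01f;                  // small zoom step per copy -> radial streaks
        alpha *= 0.85f;                  // each further copy contributes less
    }

    glDisable(GL_BLEND);
}
```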
I too once hoped there was a very simple solution to this, but unfortunately it is a little complicated, at least for a beginner.
The way glowing effects are implemented today, regardless of API (D3D, OpenGL), is with pixel/fragment shaders. It usually involves multiple render passes: you render your scene, then render a pass where only the "glowing objects" are visible, then apply a bloom pixel shader and compose the results together.
See the link provided by @Valmond for details.
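The final composite step of that multi-pass setup can be as small as the fragment shader below (shown as a C++ string). It assumes the normal scene and the blurred glow-only pass were each rendered to their own texture beforehand and are drawn onto a full-screen quad; all names are placeholders:

```cpp
const char* composeFragmentShader = R"(
#version 330 core
in vec2 vUV;
out vec4 fragColor;

uniform sampler2D sceneTex;   // the normally rendered scene
uniform sampler2D glowTex;    // the glow-only pass after blurring
uniform float glowStrength;   // e.g. 1.0

void main()
{
    vec3 scene = texture(sceneTex, vUV).rgb;
    vec3 glow  = texture(glowTex,  vUV).rgb;
    fragColor = vec4(scene + glow * glowStrength, 1.0); // additive compose
}
)";
```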
Edit:
It should be added that this can be achieved with deferred rendering, where normals, positions and other information like a "glow flag" are rendered to a texture, i.e. stored in different components of the texture. A shader then reads from those textures and does the lighting computations and post-processing effects in a single pass, since all the data it needs is available from the rendered textures.
Check this out : http://developer.download.nvidia.com/books/HTML/gpugems/gpugems_ch21.html
It explains easily how to make glow effects.
Without using shaders, you might also try rendering to texture and doing a radial blur.
As a starting point check out NeHe-Tutorials.