Terrain rendering from bitmap in OpenGL

I have an assignment to render a terrain from a greyscale 8-bit BMP and get colors for the terrain from a 24-bit texture BMP. I managed to get a proper landscape with heights and so on, and I also get the colors from the texture bitmap. The problem is that the fully colored terrain looks very "blocky": the colors and heights are right, but it almost looks like I can see the individual pixels of the bitmap. I use glShadeModel(GL_SMOOTH) but it still looks blocky. Any hints are appreciated.

Do you use the bitmap as a texture, or do you set vertex colours from the bitmap? I suggest you use a texture, with the planar vertex position as the texture coordinate.
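For illustration, a minimal sketch of that idea, assuming the 8-bit BMP has already been loaded into a `heightmap` array of size W×H (all names here are illustrative, not from the question):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

// Sketch: build one vertex per heightmap sample and give it a texture
// coordinate taken from its planar (x, z) position. With GL_LINEAR filtering,
// the 24-bit color texture is then interpolated between texels instead of
// producing one flat color per grid cell.
void buildTerrain(const unsigned char* heightmap, int W, int H, float heightScale,
                  std::vector<Vec3>& vertices, std::vector<Vec2>& texcoords)
{
    for (int z = 0; z < H; ++z) {
        for (int x = 0; x < W; ++x) {
            float y = heightmap[z * W + x] / 255.0f * heightScale; // 8-bit sample
            vertices.push_back({ (float)x, y, (float)z });
            texcoords.push_back({ x / (float)(W - 1), z / (float)(H - 1) });
        }
    }
}
```

Also make sure the texture's filtering is set to GL_LINEAR rather than GL_NEAREST, otherwise you will see the texels no matter how the coordinates are set up.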

One thing you have to take into consideration is whether you are rendering with GL_TRIANGLES or GL_TRIANGLE_STRIP; this makes a difference in performance. Second, if you are using lighting, you have to define your normals, either per triangle or per vertex of each triangle. This becomes tricky because almost every triangle lies in a different plane, and not having proper normals will make the terrain look blocky (a per-vertex sketch follows). The third thing that makes a difference is how big or small the triangles are: the smaller the triangles, i.e. the more divisions in your [x,z] plane, the higher the resolution and the better the visual quality, but the lower your frame rate. You have to find a good balance between the two.
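As a rough sketch of the per-vertex approach, assuming a `height(x, z)` helper that reads the heightmap clamped at its edges (the helper and the grid spacing are assumptions for illustration):

```cpp
#include <cmath>

float height(int x, int z);  // assumed helper: clamped 8-bit heightmap lookup

// Smoothed per-vertex normal from central differences over the heightmap.
// With glShadeModel(GL_SMOOTH), these normals remove the faceted look that
// per-triangle normals give, since neighbouring triangles share shading.
void vertexNormal(int x, int z, float gridStep,
                  float& nx, float& ny, float& nz)
{
    float dx = height(x - 1, z) - height(x + 1, z);
    float dz = height(x, z - 1) - height(x, z + 1);
    nx = dx / (2.0f * gridStep);
    nz = dz / (2.0f * gridStep);
    ny = 1.0f;
    float len = std::sqrt(nx * nx + ny * ny + nz * nz);
    nx /= len; ny /= len; nz /= len;
}
```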

Related

Variable Width Outline Effect Around a Texture in 2D

I am looking to create a GLSL shader program that will give me a variable width outline around an arbitrary 2D texture as shown in the picture. Is this a reasonable job for the GPU? I've looked at edge-detection approaches but those would only reasonably provide a border a few pixels wide. I want arbitrary width. Is this doable?
One approach would be to render the object to a texture at a slightly larger scale, colored entirely in your outline color.
Then you render it at its normal scale to another texture, as it should be displayed.
In a third render pass you can then combine the two textures: choose the outline texture where the color texture is empty, and the color texture everywhere else (a sketch of this combine pass is below).
This could be an expensive process, but whether that impacts performance too much obviously depends on the scope of your project.
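A minimal sketch of the third pass only, under the assumption that the first two passes were rendered into textures bound as `uOutlineTex` and `uColorTex` (illustrative names):

```cpp
// Combine pass: sample both render targets and prefer the normal-scale color
// where it exists; elsewhere show the enlarged outline render behind it.
const char* combineFragmentShader = R"(
    uniform sampler2D uColorTex;    // pass 2: object at its normal scale
    uniform sampler2D uOutlineTex;  // pass 1: object scaled up, outline color
    varying vec2 vTexCoord;
    void main() {
        vec4 color   = texture2D(uColorTex, vTexCoord);
        vec4 outline = texture2D(uOutlineTex, vTexCoord);
        // Where the color pass wrote nothing (alpha == 0), fall back to outline.
        gl_FragColor = mix(outline, color, step(0.5, color.a));
    }
)";
```

A smoothstep on the alpha instead of the hard step would give a softer transition between outline and object.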

Render two images to the screen separately

I want to render two textures on the screen at the same time, at different positions, but I'm confused about the vertex coordinates.
How could I write a vertex shader to meet my goal?
Just to address the "two images to the screen separately" bit...
A texture maps image colours onto geometry. To be pedantic, you can't draw a texture, but you can blit one, and you can draw geometry with a mapped texture (using per-vertex texture coordinates).
You can bind two textures at once while drawing, but you'd need both a second set of texture coordinates and a way to handle how they blend (or don't, in your case). Even then the shader will be quite specific, and because the images are separate, unnecessary code will run for each pixel to handle the other image. What happens when you want to draw 3 images, or 100?
Instead, just draw a quad with one image twice, binding each texture in turn before drawing (sketched below). The overhead will be tiny unless you're drawing lots of them, at which point you might look at texture atlases and drawing all the geometry with one draw call (really getting towards the "at the same time" part of the question).
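A minimal sketch of that draw loop, assuming a 4-vertex quad VBO is already bound, `program` is linked, and its vertex shader simply adds a `uOffset` uniform to the quad's clip-space position (all of these names are assumptions for illustration):

```cpp
GLint offsetLoc = glGetUniformLocation(program, "uOffset");
glActiveTexture(GL_TEXTURE0);   // the sampler uniform is assumed set to unit 0

// First image, shifted to the left half of the screen.
glBindTexture(GL_TEXTURE_2D, textureA);
glUniform2f(offsetLoc, -0.5f, 0.0f);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

// Second image, same quad, shifted to the right half.
glBindTexture(GL_TEXTURE_2D, textureB);
glUniform2f(offsetLoc, 0.5f, 0.0f);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
```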

What is most efficient in OpenGL: a full-screen texture or a call to `glDrawElements` with a few hundred vertices?

This is a beginner's question, but I am a little confused about how this works in OpenGL. Also, I am aware that in practice, given the figures I'm using, this won't make a real difference in terms of framerate, but I would like to understand how this works.
My problem
Let's say I want to display a starry night sky, i.e. a set of white points on a black background. Assuming the stars do not move, I can think of two options to do that in OpenGL:
Define each star as an OpenGL vertex and call glDrawElements to render them.
Use a full-screen texture, rendered on an OpenGL quad.
Let's say my screen is 1920x1080 and I want to draw 1000 stars. If we quickly compare the workload associated with each option: the first one has to draw 1000 vertices, whereas the second one uses only 4 vertices but must needlessly render 1920x1080 ≈ 2×10⁶ pixels.
My questions
Should I conclude that the first option is more efficient? If not, why?
I'm particularly interested in OpenGL ES 2.0, so is the answer the same for desktop OpenGL and OpenGL ES 2.0?
It totally depends on the resolution. You're right that you'd limit the vertex count, but you have to understand the graphics pipeline.
Even though the texture is only black and white, OpenGL has to work with each texel of the texture, and it gets even more expensive if you don't use mipmapping (auto-generated lower-resolution variants of the texture used at a distance). Let's say you're using a 640x480 texture for the stars, mapped onto the quad in the sky. Then OpenGL has to process 4 vertices but 640 × 480 = 307200 texels for your sky, each having four components (r, g, b, a).
Indeed, you'd only have to compute 4 vertices, but in exchange a huge amount of texels. So if you really have this black sky with ~1000 stars, it should be more efficient to draw a vertex array with glDrawElements (a sketch follows). And yes, it should be the same for OpenGL and GLES.
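A minimal GLES 2.0 sketch of the vertex-array option; `starPositions` (1000 clip-space x,y pairs), `program`, and the attribute name are assumptions for illustration. Since the points share no vertices, plain glDrawArrays is enough here:

```cpp
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, 1000 * 2 * sizeof(GLfloat),
             starPositions, GL_STATIC_DRAW);

GLint posAttrib = glGetAttribLocation(program, "aPosition");
glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(posAttrib);

// The black sky is a cheap clear, not two million textured fragments.
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

// In GLES 2.0 the vertex shader must also set gl_PointSize (e.g. 1.0).
glDrawArrays(GL_POINTS, 0, 1000);
```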

Drawing a Circle on a plane, Boolean Subtraction - OpenGL

I'm hoping to draw a plane in OpenGL, using C++, with a hole in the center, much like the green of a golf course for example.
I was wondering what the easiest way to achieve this is?
It's fairly simple to draw a circle and a plane (tutorials all over Google show this, for those curious), but I was wondering if there is a boolean subtraction technique like the one you get when modelling in 3ds Max or similar software, where you create both objects and then take the intersection/union etc. to leave a new object/shape? In this case, subtracting the circle from the plane would create the hole.
Another way I thought of is giving the circle alpha values and making it transparent, but then of course the plane's surface is still visible behind it anyway.
Any help or pointers in the right direction?
I would avoid messing around with transparency, blending modes, and the like. Just create a mesh with the shape you need and draw it (a sketch is below). Remember OpenGL is for graphics, not modelling.
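For instance, a sketch of the ring of geometry immediately around the hole, built as one triangle strip (the radii and segment count are illustrative; the rest of the plane can be ordinary quads meeting the outer edge):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// A flat ring between the hole's edge (innerR) and an outer edge (outerR),
// laid out for GL_TRIANGLE_STRIP: inner/outer vertex pairs around the circle.
std::vector<Vec3> buildRing(float innerR, float outerR, int segments)
{
    std::vector<Vec3> verts;
    for (int i = 0; i <= segments; ++i) {          // <= closes the loop
        float a = 2.0f * 3.14159265f * i / segments;
        float c = std::cos(a), s = std::sin(a);
        verts.push_back({ innerR * c, 0.0f, innerR * s }); // hole edge
        verts.push_back({ outerR * c, 0.0f, outerR * s }); // outer edge
    }
    return verts; // draw with glDrawArrays(GL_TRIANGLE_STRIP, 0, verts.size())
}
```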
There are a couple of ways you could do this. The first is the one you already stated: draw the circle as transparent. The caveat is that you must draw the circle before you draw the plane, so that alpha blending blends the circle with the background; when you then render the plane, the parts covered by the circle are discarded by the depth test.
The second method you could try is texture mapping. You could create a texture that is basically a mask: opaque white everywhere except the circle portion, which has an alpha value of 0. In your shader you would then multiply your fragment color by this mask texture's color so that the portion where the circle is located becomes transparent (sketched below).
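A minimal fragment-shader sketch of the mask idea; `uSurface` and `uMask` are illustrative sampler names. Using discard (rather than relying on blending alone) also keeps the hole out of the depth buffer:

```cpp
const char* maskedFragmentShader = R"(
    uniform sampler2D uSurface;  // the plane's normal color texture
    uniform sampler2D uMask;     // opaque white, alpha 0 inside the circle
    varying vec2 vTexCoord;
    void main() {
        vec4 mask = texture2D(uMask, vTexCoord);
        if (mask.a < 0.5)
            discard;  // punch the hole: no color or depth is written
        gl_FragColor = texture2D(uSurface, vTexCoord) * mask;
    }
)";
```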
Both of these methods would work with shapes other than a circle as well.
I suggest the stencil buffer. Use it to mark the area where you want the hole: mask the color and depth buffers and draw the hole shape only into the stencil buffer. Then unmask color and depth, stop writing to the stencil buffer, and draw your plane with a stencil function that tells OpenGL to discard all pixels where the stencil "markings" are (see the sketch below).
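A sketch of that sequence, assuming the context was created with a stencil buffer; `drawCircle()` and `drawPlane()` stand in for whatever issues the geometry:

```cpp
glEnable(GL_STENCIL_TEST);
glClear(GL_STENCIL_BUFFER_BIT);

// Pass 1: write the hole shape into the stencil buffer only.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);  // put 1 wherever the circle covers
drawCircle();

// Pass 2: draw the plane only where the stencil is still 0.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
glStencilFunc(GL_EQUAL, 0, 0xFF);           // fail inside the marked hole
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawPlane();

glDisable(GL_STENCIL_TEST);
```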

Drawing procedural lines on top of a texture avoiding aliasing in glsl

My goal is to draw white lines over an asphalt road. Since the properties of the road change, a single texture cannot represent both the asphalt and the white lines.
The current approach is to apply the asphalt texture and encode some information in the other two texture coordinates. In the pixel shader, reading those coordinates, I decide whether the fragment should be white or not.
This results in high levels of aliasing. And that’s the problem I want to try to solve.
I have been changing the “whiteness” of the line by applying smoothstep or linear interpolation, and I have also changed the width and color according to the distance from the camera. This helps a little, but at far distances there are still ugly aliased lines.
How would you go about doing this? Would it be better to have a texture representing a smoothed white line and access its texels? Should I implement a bilinear filter accessing neighboring texels?
You should simply use two textures with two sets of texture coordinates:
A small seamless asphalt texture, tiled over the road polygon.
A mark texture with alpha, placed in the middle of this polygon (with a texture coordinate offset); a blend sketch is below.
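Roughly, the blend in the fragment shader could look like this (sampler and varying names are illustrative); because the mark is now an ordinary mipmapped texture, the hardware's filtering fades the line smoothly with distance, which is what removes the aliasing of the procedural edge:

```cpp
const char* roadFragmentShader = R"(
    uniform sampler2D uAsphalt;  // small tiled asphalt texture
    uniform sampler2D uMarks;    // line/mark texture with an alpha channel
    varying vec2 vAsphaltUV;     // tiled coordinates
    varying vec2 vMarkUV;        // offset coordinates for the marks
    void main() {
        vec4 asphalt = texture2D(uAsphalt, vAsphaltUV);
        vec4 mark    = texture2D(uMarks, vMarkUV);
        gl_FragColor = mix(asphalt, mark, mark.a);  // lay the mark on top
    }
)";
```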
Or you can create extra polygons in the middle of the road for the marks, to avoid any aliasing.
To make it all look real, you can apply Texture Bombing with dirt and cracks.