Implementing the Inverse Mapping Texture Approach in OpenGL - C++

I have to implement simple texturing in OpenGL using the inverse (or backwards) mapping approach. In theory, I know what it is and how it works, but I can't figure out how to implement it in OpenGL. Is there a way to specify how OpenGL handles textures in this regard?
I know, more or less, how to texture polygons, but this little side-note on the homework assignment is what bugs me:
Note: The texture mapping is implemented using the inverse mapping
approach (do not use OpenGL texture mapping function)
Does anyone have any idea what is meant by this? I'm drawing the polygon to be textured with glBegin and glEnd and using glEnable(GL_TEXTURE_2D) and glTexCoord2f to texture it. Is there another way of doing it, or am I reading too much into the assignment?
I'm using Visual Studio 2012, which came with OpenGL installed, and have an AMD Radeon HD 6850 graphics card. This is for a simple homework assignment, so the simplest solution will suffice.
To save time: yes, this is for homework; no, I don't want anyone to do it for me, just an indication of how I would go about doing it myself; and no, an extensive Google search gave me no insight whatsoever into how this would be implemented in code.
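For context, the usual reading of such a note is: rasterize the polygon yourself, and for every screen pixel it covers, map backwards into texture space and fetch the texel manually rather than letting glTexCoord2f do it. A minimal sketch of that loop for an axis-aligned quad might look like the following (plotPixel and the texture array are hypothetical placeholders, and an orthographic projection in pixel units is assumed):

```cpp
// A minimal sketch of the inverse-mapping idea, assuming an axis-aligned
// screen-space quad and a texture held in a plain CPU-side array.
#include <GL/gl.h>

const int TEX_W = 64, TEX_H = 64;
unsigned char texture[TEX_H][TEX_W][3];   // RGB texels, filled elsewhere

// Hypothetical helper: plot one pixel as a GL point in the given color.
void plotPixel(int x, int y, const unsigned char* rgb)
{
    glColor3ub(rgb[0], rgb[1], rgb[2]);
    glBegin(GL_POINTS);
    glVertex2i(x, y);
    glEnd();
}

// Rasterize the quad [x0,x1] x [y0,y1] ourselves: for every covered pixel,
// map *backwards* from screen space to (u,v) in texture space and fetch the
// texel, instead of letting OpenGL interpolate texture coordinates for us.
void drawTexturedQuadInverse(int x0, int y0, int x1, int y1)
{
    for (int y = y0; y <= y1; ++y) {
        for (int x = x0; x <= x1; ++x) {
            float u = (x - x0) / float(x1 - x0);   // 0..1 across the quad
            float v = (y - y0) / float(y1 - y0);
            int tx = int(u * (TEX_W - 1));         // nearest-neighbour lookup
            int ty = int(v * (TEX_H - 1));
            plotPixel(x, y, texture[ty][tx]);
        }
    }
}
```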

Related

What should I consider when choosing an OpenGL version for simple rendering tasks?

I have an application in which I want to draw some simple 3D geometry to the screen - a relatively small number of textured triangles with no shading. I have decided to use OpenGL.
The application doesn't need any features which are not available in OpenGL 1.1, so any OpenGL version is sufficient.
What should I consider when deciding which OpenGL version to use?
There is little to no reason to use OGL 1.1, other than for prototyping or for something extremely simple. VBOs were introduced in OGL 1.5, and that's probably the lowest I would be willing to go. You get shaders in version 2, which is usually not a bad place to aim for. OGL 3 gives you VAOs, which can really simplify draw calls, plus instanced rendering and uniform and texture buffer objects, which are really useful for more complex tasks.
If you want something real quick and dirty, OGL 1.1 is fine. Usually, though, I would not suggest going below 1.5: VBOs are not too much of a jump, are much more efficient, and can make your code a lot easier to debug.
All in all, I would suggest using OGL 2 so you can use the programmable pipeline and handle your own matrices, rather than wrestling with the push/pop matrix calls that can make your head spin in a complicated rendering situation. If using OGL 3 isn't a problem, you could also use VAOs to make things even cleaner. This also gives you a bit more freedom if you decide you need more power or features later on.
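To make the VBO point concrete, here is a minimal sketch of the OpenGL 1.5-style buffer setup referred to above (GLEW is assumed here purely for loading the entry points; the triangle data is a placeholder):

```cpp
// Minimal VBO setup/draw sketch (OpenGL 1.5 entry points, loaded via GLEW).
#include <GL/glew.h>

GLuint vbo = 0;

void initVbo()
{
    const GLfloat verts[] = {   // three 2D vertices of a triangle
        -0.5f, -0.5f,
         0.5f, -0.5f,
         0.0f,  0.5f,
    };
    glGenBuffers(1, &vbo);                       // create the buffer object
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
}

void drawVbo()
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, 0);          // source data from the VBO
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```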

How to get the depth of field effect using OpenGL?

I am wondering how to implement the "depth of field/circle of confusion" effect using OpenGL?
Is there any built-in method or library to support it?
You will not find anything "built in" to OpenGL that will give you what you are looking for. You will have to implement this effect through a shader, which is fairly straightforward.
An article on how to achieve this effect is freely available here:
Nvidia article on depth of field techniques
You can compute different DOF approximations. For a simple one, you could render near objects into one texture and far objects into another. In another pass you could blur the texture holding the far objects, then combine both textures into a single image on a screen-space rectangle. This has little to do with physically accurate DOF, but real-time graphics often relies on little tricks like this; the visual outcome just has to be convincing.
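As one concrete illustration of the blur-and-combine idea, a final-pass fragment shader along these lines could do the mix, with the blurred texture produced in an earlier pass (legacy GLSL stored as a C++ string; all uniform names here are invented for the sketch):

```cpp
// Final-pass fragment shader that mixes a sharp and a pre-blurred rendering
// based on each pixel's distance from a chosen focal depth.
const char* dofFrag =
    "uniform sampler2D sharpTex;  // scene rendered normally             \n"
    "uniform sampler2D blurTex;   // same scene, blurred in a prior pass \n"
    "uniform sampler2D depthTex;  // scene depth                         \n"
    "uniform float focalDepth;    // depth value that stays in focus     \n"
    "uniform float focusRange;    // how quickly blur ramps up around it \n"
    "void main() {                                                       \n"
    "    vec2  uv   = gl_TexCoord[0].st;                                 \n"
    "    float d    = texture2D(depthTex, uv).r;                         \n"
    "    float blur = clamp(abs(d - focalDepth) / focusRange, 0.0, 1.0); \n"
    "    gl_FragColor = mix(texture2D(sharpTex, uv),                     \n"
    "                       texture2D(blurTex, uv), blur);               \n"
    "}                                                                   \n";
```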

Query about TTF and OpenGL

I searched around a bit and couldn't find anything clear-cut as to whether/how you can render TrueType fonts using OpenGL (with SDL as the API, if it makes any difference).
So I was wondering if anybody with experience knows the answer to if it is possible and how, or could point me to some other good source or explanation.
If it's not possible, which I suspect is the case, any other suggestions would be greatly appreciated for working with fonts using OpenGL.
OpenGL itself deals only with points, lines and triangles. Anything going beyond that functionality must be implemented by the user. So no, there's no direct support for font rendering in OpenGL.
One can of course use OpenGL to rasterize glyphs, by various methods:
A very widespread method is texture-mapped fonts, i.e. each (used) glyph of a font rendered into a texture atlas.
One can also use OpenGL primitives to rasterize the glyph outlines directly, though this is a tricky subject.
Use shaders to implement vector textures
Use shaders to implement distance maps (distance maps are not unlike texture-mapped fonts, but with a greatly reduced memory footprint).
Look at this. Here is a more recent example of using FreeType with OpenGL.
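To give an idea of the texture-mapped approach, here is a rough sketch of rasterizing one glyph with FreeType and uploading it as a GL_ALPHA texture (error handling and atlas packing are omitted; a real renderer would pack many glyphs into one atlas texture):

```cpp
// Sketch: render one glyph with FreeType and upload it as an alpha texture.
#include <ft2build.h>
#include FT_FREETYPE_H
#include <GL/gl.h>

GLuint loadGlyphTexture(const char* fontPath, char ch, int pixelSize)
{
    FT_Library ft;
    FT_Face    face;
    FT_Init_FreeType(&ft);                    // error checks omitted
    FT_New_Face(ft, fontPath, 0, &face);
    FT_Set_Pixel_Sizes(face, 0, pixelSize);
    FT_Load_Char(face, ch, FT_LOAD_RENDER);   // rasterizes the glyph

    FT_Bitmap& bmp = face->glyph->bitmap;     // 8-bit coverage values

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);    // rows are tightly packed
    glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA,
                 (GLsizei)bmp.width, (GLsizei)bmp.rows, 0,
                 GL_ALPHA, GL_UNSIGNED_BYTE, bmp.buffer);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    FT_Done_Face(face);
    FT_Done_FreeType(ft);
    return tex;
}
```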

OpenGL: Which is faster - GL_POLYGON or GL_TRIANGLE_FAN?

I am going to draw a regular hexagon with one fill color. I can do it with a sequence of glVertex2*() calls. However, the glBegin() call is what I am asking about. Is there any benefit to using GL_POLYGON or GL_TRIANGLE_FAN? If it matters, drawing hexes will be the main work of the program. If you have another idea, I am all ears.
GL_POLYGON is deprecated in OpenGL 3.x. I think most drivers convert GL_POLYGON into a bunch of triangles anyway, so you can save that conversion by providing triangles in the first place.
Usually the two are very, very close, but at that point it's going to come down to the hardware. Don't worry about such minor details, and only revisit this once you are sure you indeed have a bottleneck in that area.
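For reference, a filled regular hexagon as a triangle fan is just a center vertex plus the six corners, with the first corner repeated to close the fan (immediate mode is used here only to keep the sketch short):

```cpp
// Filled regular hexagon as a triangle fan: center + 6 corners + repeat.
#include <GL/gl.h>
#include <cmath>

void drawHexagon(float cx, float cy, float r)
{
    const float PI = 3.14159265358979f;
    glBegin(GL_TRIANGLE_FAN);
    glVertex2f(cx, cy);                    // fan center
    for (int i = 0; i <= 6; ++i) {         // 7 verts: last repeats the first
        float a = i * (2.0f * PI / 6.0f);
        glVertex2f(cx + r * std::cos(a), cy + r * std::sin(a));
    }
    glEnd();
}
```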

Crossfading scenes in OpenGL

I would like to render two scenes in OpenGL, and then do a visual crossfade from one scene to the second. Can anyone suggest a starting point for learning how to do this?
The most important thing you need to learn is how to do render-to-texture.
When you have both scenes in two textures, it really is simple to crossfade between them. In fact, it's pretty simple to do all manner of interesting fade effects :)
Here's sample code for a crossfade. It seems a little different from what Goz describes, since the two scenes are dynamic. The example uses the stencil buffer for the crossfade.
I can think of another way to crossfade scenes, but it depends on how complex your scene renderer is. If it is simple, you could start a shader program before rendering the second scene that does the desired blending effect. I would try glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) and manipulate the fragments' alpha values in the shader.
FBOs have, by the way, been available for years, whether as an extension or in core. If your renderer is complex and uses shader programs, you could just as well render both scenes to FBOs and blend those. Using FBOs is a very common technique for easily applying all kinds of effect rendering.
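Putting the render-to-texture idea together, the final composite can be as simple as drawing scene A's texture at full opacity and scene B's on top with an alpha that ramps from 0 to 1. A sketch (the two texture IDs are assumed to hold the rendered scenes, and drawFullscreenQuad is a hypothetical helper):

```cpp
// Composite two scene textures with a crossfade factor t in [0, 1].
#include <GL/gl.h>

void drawFullscreenQuad();   // assumed: textured quad covering the viewport

void crossfade(GLuint texSceneA, GLuint texSceneB, float t)
{
    glEnable(GL_TEXTURE_2D);

    glBindTexture(GL_TEXTURE_2D, texSceneA);   // scene A, fully opaque
    glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
    drawFullscreenQuad();

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBindTexture(GL_TEXTURE_2D, texSceneB);   // scene B fades in on top
    glColor4f(1.0f, 1.0f, 1.0f, t);            // alpha drives the fade
    drawFullscreenQuad();
    glDisable(GL_BLEND);
}
```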