Crossfading scenes in OpenGL - C++

I would like to render two scenes in OpenGL, and then do a visual crossfade from one scene to the second. Can anyone suggest a starting point for learning how to do this?

The most important thing you need to learn is how to do render-to-texture.
When you have both scenes in two textures, it really is simple to crossfade between them. In fact, it's pretty simple to do all manner of interesting fade effects :)
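As a rough illustration, the blend pass itself can be a single full-screen quad that samples both scene textures. A minimal fragment-shader sketch, assuming the two scenes have already been rendered to textures bound to the sceneA/sceneB samplers (the sampler names, the fade uniform and the texCoord varying are illustrative, not from the original answer):

```glsl
// Fragment shader for a full-screen quad that crossfades two scene textures.
// "texCoord" is assumed to be passed from a trivial pass-through vertex shader.
#version 120
uniform sampler2D sceneA;   // first scene, rendered to a texture
uniform sampler2D sceneB;   // second scene, rendered to a texture
uniform float fade;         // 0.0 = only scene A, 1.0 = only scene B
varying vec2 texCoord;

void main()
{
    vec4 a = texture2D(sceneA, texCoord);
    vec4 b = texture2D(sceneB, texCoord);
    gl_FragColor = mix(a, b, fade);   // linear crossfade
}
```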

Here's sample code for a crossfade. This seems a little different from what Goz described, since the two scenes are dynamic. The example uses the stencil buffer for the crossfade.
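The original sample isn't reproduced here, but the general shape of a stencil-driven dissolve between two live scenes looks something like the sketch below; drawSceneA, drawSceneB and drawDissolvePattern are placeholders for your own rendering code, and the pattern simply covers a growing fraction of the screen as the fade progresses.

```cpp
// Rough sketch of a stencil-based dissolve between two dynamic scenes:
// 1) draw scene A normally, 2) write a coverage pattern into the stencil
// buffer (the pattern grows with "progress" over time), 3) draw scene B
// only where the stencil was set.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
drawSceneA();

// Write 1s into the stencil buffer for an increasing fraction of the screen.
glEnable(GL_STENCIL_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // touch stencil only
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawDissolvePattern(progress);   // e.g. a growing set of small screen-space quads

// Now draw scene B only where the pattern was written.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glClear(GL_DEPTH_BUFFER_BIT);
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawSceneB();
glDisable(GL_STENCIL_TEST);
```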

I can think of another way to crossfade scenes, but it depends on how complex your scene renderer is. If it is simple, you could start a shader program before rendering the second scene that does the desired blending effect. I would try glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) and manipulate the fragments' alpha values in the shader.
FBOs have, by the way, been available for years already, whether as an extension or in core. If your renderer is complex and uses shader programs, you could just as well render both scenes to FBOs and blend those. Using FBOs is a very common technique that makes it easy to apply all kinds of rendering effects.
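As a rough sketch (not tied to any particular renderer), setting up one such FBO with a color texture attachment might look like the following; the width/height values and the omission of a depth attachment are simplifications:

```cpp
// Create a texture and attach it to a framebuffer object so a scene can be
// rendered into it. Depth renderbuffer and full error handling omitted.
int width = 800, height = 600;        // match your window size
GLuint colorTex = 0, fbo = 0;

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    /* handle incomplete framebuffer */;

glBindFramebuffer(GL_FRAMEBUFFER, 0);  // back to the default framebuffer
```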

Related

opengl - possibility of a mirroring shader?

Until today, when I wanted to create reflections (a mirror) in OpenGL, I rendered a view into a texture and displayed that texture on the mirroring surface.
What I want to know is: are there any other methods to create a mirror in OpenGL?
And second: can this be done solely in shaders (e.g. a geometry shader)?
Ray-tracing. You can write a ray-tracer in the fragment shader (every fragment follows a ray). Ray-tracers can perfectly deal with reflection (mirroring) on all kinds of surfaces.
You can find an OpenGL example here and a WebGL example including mirroring here.
There is no universal way to do that in any 3D API I know of.
Depending on your case there are several possible techniques with different downsides.
Planar reflections: That's what you are doing already.
Note that your mirror needs to be flat, and you have to clip so that anything closer than the mirror isn't rendered into the texture.
Good old cubemaps: attach a cubemap to each mirror, then sample it in the reflection direction. This works for any surface, but you will need to render the cubemaps (which can be done only once if you don't care about moving objects being reflected). I don't think you can do this without shaders, but only the mirror will need one. It's a very common technique, as it's easy to implement, can be dynamic, and is fairly cheap while being easy to integrate into an existing engine.
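A minimal fragment-shader sketch of that cubemap lookup, where envMap, worldNormal, worldPos and cameraPos are illustrative names that your own vertex shader and application would have to supply:

```glsl
// Sample an environment cubemap in the reflection direction.
#version 120
uniform samplerCube envMap;
uniform vec3 cameraPos;
varying vec3 worldNormal;   // interpolated world-space normal
varying vec3 worldPos;      // interpolated world-space position

void main()
{
    vec3 viewDir = normalize(worldPos - cameraPos);      // camera -> surface
    vec3 r = reflect(viewDir, normalize(worldNormal));    // mirror direction
    gl_FragColor = textureCube(envMap, r);
}
```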
Screen-space ray-marching: it's what danny-ruijters suggested. Kind of like SSAO: for each pixel, sample the depth buffer along the reflection vector until you hit something. This has the advantage of being applicable anywhere (on arbitrarily complex surfaces); however, it can only reflect things that appear on screen, which can introduce lots of small artifacts, but it's completely dynamic and very simple to implement. Note that you will need an additional pass (or to render normals into a buffer) to access your scene's final color while computing the reflections. You absolutely need shaders for this, but it's a post-process, so it won't interfere with the scene rendering if that's what you fear.
Some modern game engines use this to add small details to reflective surfaces without the burden of having to compute/store cubemaps.
There are probably many other ways to render mirrors, but these are the three main ways (at least that I know of) of doing reflections.

Is there a way to use GLSL programs as filters?

Assume that we have different shader programs for different objects in a game. For example, the player model has a shader that handles the skeletal system (bone matrix multiplications, etc.), a particle has a shader for sparkling effects, a wall has parallax mapping, etc.
But what if I want to add fog to the game that must affect every one of these objects? For example, I have a room that will have red fog. Should I change EVERY GLSL program to include fog code, or is there a way to make global filters? Should I change every GLSL program whenever I want to add a feature?
The typical approach for this kind of thing is to use a full-screen shader in post-processing, using the depth buffer from your fully rendered scene, or using a z-pass, which renders only to the depth buffer. You can chain such passes together and create any number of effects. It typically involves some render-to-texture work and is not a trivial task (too much to post code here), but it's not THAT difficult either.
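To make that concrete, a bare-bones sketch of such a full-screen fog pass is shown below; the scene color and depth are assumed to be available as textures, and all uniform names and the fog formula are just illustrative starting points.

```glsl
// Full-screen post-process pass: blend a fog color over the rendered scene
// based on depth read back from the depth buffer.
#version 120
uniform sampler2D sceneColor;  // fully rendered scene
uniform sampler2D sceneDepth;  // depth buffer of that scene
uniform vec3  fogColor;
uniform float fogDensity;
uniform float near;            // camera near plane
uniform float far;             // camera far plane
varying vec2 texCoord;

void main()
{
    float z = texture2D(sceneDepth, texCoord).r;
    // Convert the non-linear depth-buffer value to a linear eye-space distance.
    float linearZ = (2.0 * near * far) / (far + near - (2.0 * z - 1.0) * (far - near));
    float fog = 1.0 - exp(-fogDensity * linearZ);   // simple exponential fog
    vec3 color = texture2D(sceneColor, texCoord).rgb;
    gl_FragColor = vec4(mix(color, fogColor, fog), 1.0);
}
```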
If you want to take a look at a decent post-processing system, take a look at the PostFx system in Torque3D:
https://github.com/GarageGames/Torque3D
And here is an example of creating fog with GLSL in post:
http://isnippets.blogspot.com/2010/10/real-time-fog-using-post-processing-in.html

Techniques for drawing tiles with OpenGL

I've been using XNA for essentially all of my programming so far and would like to move on to OpenGL (along with SFML for IO, creating the window, etc.) with C++. For starters I'd like to create a tile-based game, and I've mostly looked at LazyFoo's tutorials.
I just have two questions:
How should I draw the tiles? Should I use immediate mode, vertex arrays, VBOs, or something else? VBOs feel like overkill for this, but I'm not sure. It's very tempting to use immediate mode, but apparently it's deprecated. Maybe it's fine for this purpose, since it's 2D and only a bunch of quads.
I'd like a lot of different tiles, and thus all of my tiles will not fit into a single texture without making it massive. I've read that calling glBindTexture isn't very cheap, so I should avoid as many calls as I can. I thought that maybe I could create a manager for my textures and stitch them all together into one big texture and bind that, but then the dimensions of that texture become an issue.
Don't use immediate mode! It's cumbersome to work with and has been removed from recent OpenGL versions. Use Vertex Arrays, ideally through VBOs. In the end they're much easier to use, believe me.
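For a tile map, a minimal sketch of the VBO route could look like the following; the TileVertex struct and the way the vertex array is filled are illustrative, and fixed-function pointers are used only to keep the example short:

```cpp
#include <vector>
// Assumes an OpenGL context and function loader (e.g. GLEW) are already set up.

// One interleaved vertex: 2D position plus texture coordinate.
struct TileVertex { float x, y, u, v; };

std::vector<TileVertex> vertices;   // 6 vertices (2 triangles) per visible tile,
                                    // filled from your tile map

// Upload the tile geometry once (or whenever the visible set changes).
GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(TileVertex),
             vertices.data(), GL_STATIC_DRAW);

// Draw: fixed-function pointers used for brevity; with shaders you would
// use glVertexAttribPointer instead.
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, sizeof(TileVertex), (void*)0);
glTexCoordPointer(2, GL_FLOAT, sizeof(TileVertex), (void*)(2 * sizeof(float)));
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)vertices.size());
```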
Regarding the switching of textures: we're talking about optimizing texture-switch patterns in very complex scenes. In your case it will hardly matter at all.
Update
Right now you are worrying about things without even having used them. That's worse than premature optimization. I suggest you first get a good grip on OpenGL, then start worrying about state-switch management.
With regards to the texture atlas; this is usually done by stitching textures into groups of power-of-two sized textures. For example in a tile-based game you might have a particular tile set (say, tiles for an ice world) grouped together on 2 or 3 textures. When you want to render them you would determine what tiles are visible, then you bind each texture once and render the tiles from that texture for any tiles that are visible on screen.
This requires quite a lot of set-up time to get right; you need to keep information on each sub-texture of the atlas so you can find the right texture and render the appropriate region of that texture whenever a tile is referenced. You also need a good way of grouping rendering operations so that they occur while the appropriate texture is bound.
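For the common case where every tile occupies an equally-sized cell in the atlas, that bookkeeping can be as small as a helper like this (all names are illustrative):

```cpp
// Compute the UV rectangle of one tile inside a texture atlas that is laid
// out as a regular grid of equally-sized cells.
struct UvRect { float u0, v0, u1, v1; };

UvRect atlasUv(int tileIndex, int tilesPerRow, int tilesPerColumn)
{
    float w = 1.0f / tilesPerRow;       // width of one cell in UV space
    float h = 1.0f / tilesPerColumn;    // height of one cell in UV space
    int col = tileIndex % tilesPerRow;
    int row = tileIndex / tilesPerRow;
    UvRect r;
    r.u0 = col * w;
    r.v0 = row * h;
    r.u1 = r.u0 + w;
    r.v1 = r.v0 + h;
    return r;
}
```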
Like datenwolf said, I wouldn't focus too much on complicated texture systems early on; eager binding of textures will be plenty fast enough until you get further down the road.

Opengl/glsl shader animation and lighting issue

So lately I've taken my first serious steps (or at least I think so) into OpenGL/GLSL and shaders in general.
I've managed to construct and render VBOs, create and compile shaders, and also mess with them in some sort of way.
I'm using a vertex shader to fix my OpenGL view (correct the aspect ratio) and also perform animation. This is achieved with various matrix manipulations.
One might ask why I'm using vertex shaders for animation, but reading articles around the globe I got the impression it's best to keep VBOs static rather than updating them constantly. Some sort of GPU vs. CPU trade-off.
Now, I may be wrong about that, which is why I'm reaching out here for aid on the matter. My view on it is that in the future I might make a game which (for instance) will have a lot of coins for a player to grab, and I would like them to be stored statically on the GPU side and then use the shader for rotating them.
Moving on... "Let there be light".
I've also managed to use my normals in the vertex shader to reproduce lighting. It all worked fine, with the exception that the light rotates with my cube (currently I'm using a cube as a test dummy). Now, I know what's wrong here: it's my vertex shader transforming absolutely everything (even my light source, I guess). I can think of a way or two to solve this problem. One would be to apply the reverse/inverse transformation to my light source so I can keep it static while everything else rotates.
And here's where everything blurs. I'm reaching out to Stack Overflow for guidance on how to move forward. I am trying to think bigger: what if, in the future, I have plenty of objects I'd like to perform basic animations on (such as rotation, scaling, translation)? Would that require me to have different shaders, or even one packed shader with every function in it? And how would I even use it? Would I pass different values before every object rendering inside the same shader?
Right now, to be honest, I want to handle the lighting issue. But I have a feeling that the way I'm about to approach this will set my general approach to shader animation in general. Someone suggested (here on Stack Overflow, in another question) that one should really use different shaders and swap them before every VBO rendering. I have my concerns about whether that's efficient enough, but I definitely like the idea.
Someone suggested (here on Stack Overflow, in another question) that one should really use different shaders and swap them before every VBO rendering.
Which question/answer was this? Because you should normally avoid switching shaders where possible. Maybe the person meant uniforms, which are parameters to shaders but can be changed cheaply.
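As a rough sketch of that idea, a single shared program can drive many objects by updating a few uniforms before each draw call; the program, uniform names and the Object structure below are illustrative, not taken from the question:

```cpp
// Illustrative per-object data; in practice this comes from your own scene code.
struct Object { GLuint vao; GLsizei indexCount; GLfloat modelMatrix[16]; };
std::vector<Object> objects;

// Light position kept in world space so it does not rotate with the objects.
const GLfloat lightPosWorld[3] = { 10.0f, 20.0f, 10.0f };

glUseProgram(program);
GLint modelLoc = glGetUniformLocation(program, "modelMatrix");
GLint lightLoc = glGetUniformLocation(program, "lightPosWorld");
glUniform3fv(lightLoc, 1, lightPosWorld);   // set once, shared by all objects

for (const Object& obj : objects)
{
    // The per-object model matrix drives rotation/scaling/translation in the
    // vertex shader; the light uniform above stays untouched.
    glUniformMatrix4fv(modelLoc, 1, GL_FALSE, obj.modelMatrix);
    glBindVertexArray(obj.vao);
    glDrawElements(GL_TRIANGLES, obj.indexCount, GL_UNSIGNED_INT, 0);
}
```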
Also, your question is far too broad and not very concise (all that backstory hides the actual issue). I strongly suggest you split your doubts up into a number of small questions that can be answered separately.

Creating Windows Vista Aurora screensaver like curtain effect in OpenGL

I am trying to create an interactive background animation using WebGL / Three.js
The animation would be generated from a two-color gradient
The animation would be controlled by external factors (intensity, speed, etc.)
The result should look something like this: https://www.youtube.com/watch?v=PdrkrCFRHWA
I am not sure how Vista managed to pull off the effect, and I am interested in possible techniques that would yield similar-looking results. I am looking for pointers on how to get started.
Should I use alpha blended generated textures and dancing quads?
Should I use pixel shaders?
etc.
Any tips welcome.
I would use three.js, render a bunch of triangle strips, and do the gradient effect itself in the fragment shader.
The effect looks simple enough to be calculated entirely in the fragment shader, so a full-screen quad would also work nicely. It really depends on the level of detail you are aiming for; I'd experiment with both.
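If you go the full-screen-quad route, the whole effect can live in a small fragment shader along these lines; the colors, wave shapes and uniforms are just illustrative starting points, not a reconstruction of the Vista effect:

```glsl
// Full-screen fragment shader: two-color vertical gradient whose boundary is
// displaced by a few moving sine waves, giving a soft "curtain" look.
#version 120
uniform float time;        // seconds since start
uniform float intensity;   // external control: how strong the waves are
uniform float speed;       // external control: how fast the waves move
uniform vec3 colorA;       // bottom gradient color
uniform vec3 colorB;       // top gradient color
varying vec2 texCoord;     // 0..1 across the quad

void main()
{
    float wave = sin(texCoord.x * 6.0 + time * speed) * 0.15
               + sin(texCoord.x * 13.0 - time * speed * 0.7) * 0.07;
    float t = clamp(texCoord.y + wave * intensity, 0.0, 1.0);
    gl_FragColor = vec4(mix(colorA, colorB, t), 1.0);
}
```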