I am trying to create an interactive background animation using WebGL / Three.js
The animation would be generated from a two-color gradient
The animation would be controlled by external factors (intensity, speed, etc.)
The result should look something like this: https://www.youtube.com/watch?v=PdrkrCFRHWA
I am not sure how Vista managed to pull off the effect, and I am interested in possible techniques that would yield similar-looking results. I am looking for pointers on how to get started.
Should I use alpha blended generated textures and dancing quads?
Should I use pixel shaders?
etc.
Any tips welcome.
I would use three.js, render a bunch of triangle strips, and do the gradient effect itself in the fragment shader.
The effect looks simple enough to be calculated entirely in the fragment shader, so a fullscreen quad would also work nicely. It really depends on the level of detail you are aiming for; I'd experiment with both.
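For example, a two-color gradient animation driven by external parameters can be done entirely in a fullscreen-quad fragment shader along these lines. This is only a rough sketch; all the uniform names (uResolution, uTime, uSpeed, uIntensity, uColorA, uColorB) are made up for illustration.

    precision mediump float;
    uniform vec2  uResolution;
    uniform float uTime;
    uniform float uSpeed;      // external control: how fast the bands move
    uniform float uIntensity;  // external control: how strongly they wobble
    uniform vec3  uColorA;
    uniform vec3  uColorB;

    void main() {
        vec2 uv = gl_FragCoord.xy / uResolution;
        // Displace the gradient coordinate with a couple of sine waves so the
        // bands drift and ripple like an aurora-style background.
        float wave = sin(uv.x * 4.0 + uTime * uSpeed)
                   + 0.5 * sin(uv.x * 9.0 - uTime * uSpeed * 1.7);
        float t = clamp(uv.y + wave * 0.1 * uIntensity, 0.0, 1.0);
        gl_FragColor = vec4(mix(uColorA, uColorB, t), 1.0);
    }

In three.js this would typically go into the fragmentShader of a ShaderMaterial applied to a fullscreen quad (or to the triangle strips), with uSpeed and uIntensity updated from your external controls each frame.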
Until today, when I wanted to create reflections (a mirror) in OpenGL, I rendered a view into a texture and displayed that texture on the mirroring surface.
What I want to know is: are there any other methods to create a mirror in OpenGL?
And second, can this be done solely in shaders (e.g. the geometry shader)?
Ray-tracing. You can write a ray-tracer in the fragment shader (every fragment follows a ray). Ray-tracers can perfectly deal with reflection (mirroring) on all kinds of surfaces.
You can find an OpenGL example here and a WebGL example including mirroring here.
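To give an idea of the principle, here is a minimal, self-contained sketch of a single-bounce ray tracer in a fragment shader: a mirrored sphere reflecting a procedural sky. The uniform name uResolution is an assumption; the rest is standard GLSL.

    precision mediump float;
    uniform vec2 uResolution;    // viewport size in pixels (assumed uniform)

    // Procedural "environment" that gets mirrored by the sphere.
    vec3 sky(vec3 dir) {
        return mix(vec3(0.9, 0.9, 1.0), vec3(0.1, 0.3, 0.7), dir.y * 0.5 + 0.5);
    }

    // Distance along the ray to a sphere, or -1.0 on a miss (rd must be normalized).
    float intersectSphere(vec3 ro, vec3 rd, vec3 center, float radius) {
        vec3 oc = ro - center;
        float b = dot(oc, rd);
        float c = dot(oc, oc) - radius * radius;
        float h = b * b - c;
        if (h < 0.0) return -1.0;
        return -b - sqrt(h);
    }

    void main() {
        vec2 uv = (gl_FragCoord.xy * 2.0 - uResolution) / uResolution.y;
        vec3 ro = vec3(0.0, 0.0, 3.0);           // camera origin
        vec3 rd = normalize(vec3(uv, -1.5));     // ray through this fragment

        float t = intersectSphere(ro, rd, vec3(0.0), 1.0);
        if (t > 0.0) {
            vec3 hit = ro + rd * t;
            vec3 n = normalize(hit);             // sphere is centered at the origin
            vec3 refl = reflect(rd, n);          // mirror direction
            gl_FragColor = vec4(sky(refl), 1.0); // perfect mirror: sample the environment
        } else {
            gl_FragColor = vec4(sky(rd), 1.0);
        }
    }

Replacing the procedural sky with actual scene-intersection tests is what turns this into a full ray tracer that can mirror arbitrary geometry.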
There is no universal way to do that in any 3D API I know of.
Depending on your case, there are several possible techniques, each with different downsides.
Planar reflections: That's what you are doing already.
Note that your mirror needs to be flat, and you have to clip so that anything closer than the mirror isn't rendered into the texture.
Good old cubemaps: attach a cubemap to each mirror, then sample it in the reflection direction (see the shader sketch after this list). This works for any surface, but you will need to render the cubemaps (which can be done only once if you don't care about moving objects being reflected). I don't think you can do this without shaders, but only the mirror will need one. It's a very common technique as it's easy to implement, can be dynamic, and is fairly cheap, while being easy to integrate into an existing engine.
Screen-space ray-marching: It's what danny-ruijters suggested. Kind of like SSAO: for each pixel, sample the depth buffer along the reflection vector until you hit something. This has the advantage of being applicable anywhere (on arbitrarily complex surfaces); however, it can only reflect things that appear on screen, which can introduce lots of small artifacts, but it's completely dynamic and very simple to implement. Note that you will need an additional pass (or to render normals into a buffer) to access your scene's final color while computing the reflections. You absolutely need shaders for that, but it's a post-process, so it won't interfere with the scene rendering if that's what you fear.
Some modern game engines use this to add small details to reflective surfaces without the burden of having to compute/store cubemaps.
There are probably many other ways to render mirrors, but these are the three main ways of doing reflections (at least as far as I know).
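To illustrate the cubemap approach, the mirror's fragment shader boils down to a single reflected lookup. This is a minimal sketch; the uniform and varying names are assumptions, and the cube map is assumed to have already been rendered from the mirror's position.

    precision mediump float;
    uniform samplerCube uEnvMap;   // cubemap rendered around the mirror
    uniform vec3 uCameraPos;       // camera position in world space
    varying vec3 vWorldPos;        // interpolated world-space position of the fragment
    varying vec3 vWorldNormal;     // interpolated world-space normal

    void main() {
        vec3 viewDir = normalize(vWorldPos - uCameraPos);
        vec3 reflDir = reflect(viewDir, normalize(vWorldNormal));
        gl_FragColor = textureCube(uEnvMap, reflDir);
    }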
Question: How do I render points in OpenGL using GLSL?
Info: a while back I made a gravity simulation in Python and used Blender to do the rendering. It looked something like this. As an exercise I'm porting it over to OpenGL and OpenCL. I actually already have it working in OpenCL, I think. It wasn't until I spent a fair bit of time working in OpenCL that I realized it is hard to know whether it's right without being able to see the result. So I started playing around with OpenGL. I followed the OpenGL GLSL tutorial on Wikibooks, which was very informative, but it didn't cover points or particles.
I'm at a loss for where to start. Most tutorials I find are for the default OpenGL pipeline; I want to do it using GLSL. I'm still very new to all this, so forgive my potential idiocy if the answer is right beneath my nose. What I'm looking for is how to make halos around the points that blend into each other. I have a rough idea of how to do this in the fragment shader, but as far as I'm aware I can only grab the pixels that are enclosed by polygons created by my points. I'm sure there is a way around this (it would be crazy for there not to be), but in my newbishness I'm clueless. Can someone give me some direction here? Thanks.
I think what you want is to render the particles as GL_POINTS with GL_POINT_SPRITE enabled, then use your fragment shader to either map a texture in the usual way, or generate the halo gradient procedurally.
When you are rendering in GL_POINTS mode, set gl_PointSize in your vertex shader to set the size of the particle. The vec2 variable gl_PointCoord will give you the coordinates of your fragment in the fragment shader.
EDIT: Setting gl_PointSize will only take effect if GL_PROGRAM_POINT_SIZE has been enabled. Alternatively, just use glPointSize to set the same size for all points. Also, as of OpenGL 3.2 (core), the GL_POINT_SPRITE flag has been removed and is effectively always on.
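As a rough sketch, the shader pair could look like this; the uniform and attribute names are made up, and the fragment shader builds a soft halo out of gl_PointCoord.

    // --- vertex shader ---
    uniform mat4 uModelViewProjection;
    uniform float uPointSize;
    attribute vec3 aPosition;

    void main() {
        gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
        gl_PointSize = uPointSize;   // requires GL_PROGRAM_POINT_SIZE in core profiles
    }

    // --- fragment shader ---
    precision mediump float;
    uniform vec3 uColor;

    void main() {
        // gl_PointCoord runs from (0,0) to (1,1) across the sprite.
        float d = length(gl_PointCoord - vec2(0.5));
        float halo = 1.0 - smoothstep(0.0, 0.5, d);   // bright center, soft edge
        gl_FragColor = vec4(uColor, halo);            // alpha feeds the blending stage
    }

Combined with additive blending (as the answer below suggests), overlapping halos sum up and blend into each other.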
Simply draw point sprites (using GL_POINT_SPRITE) and use the blending functions GL_SRC_ALPHA and GL_ONE; the "halos" should then be visible. Blending is what produces the "halos", so look for some more info on that topic.
Also, you have to disable depth writes.
Here is a link about that: http://content.gpwiki.org/index.php/OpenGL:Tutorials:Tutorial_Framework:Particles
Well, I currently have a 3D scene with just a textured quad (a painting). Between the painting and the "camera" I have placed another quad that I would like to behave like an optical lens: distorting the picture "below" it.
How would one achieve this, preferably with a shader and some pixel buffers?
Here is an example I found a while ago which does something very similar to what you want. http://www.paulsprojects.net/opengl/refract/refract.html
You will probably have to modify the code a bit to achieve the inversion effect you want, but this will get you started on the right track.
Edit:
By the way, you will not need the second image (the inverted small rectangle). Just use a single background image and the shader.
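As a hedged sketch of that single-background-image idea: assume the painting has been rendered (or loaded) into a texture that covers the viewport, and draw the lens quad on top with a fragment shader that samples the texture through distorted coordinates. All uniform names here are made up for illustration.

    precision mediump float;
    uniform sampler2D uBackground;   // the painting, assumed to fill the viewport
    uniform vec2  uResolution;       // viewport size in pixels
    uniform vec2  uLensCenter;       // lens center in 0..1 screen coordinates
    uniform float uLensRadius;       // lens radius in the same units
    uniform float uStrength;         // how strong the distortion is

    void main() {
        vec2 uv = gl_FragCoord.xy / uResolution;
        vec2 offset = uv - uLensCenter;
        float r = clamp(length(offset) / uLensRadius, 0.0, 1.0); // 0 at center, 1 at edge
        // Scale sample positions toward the center inside the lens for a magnifying look.
        vec2 distorted = uLensCenter + offset * mix(1.0 - uStrength, 1.0, r * r);
        gl_FragColor = texture2D(uBackground, distorted);
    }

Different distortion functions give different lens characters; flipping the sign of the offset, for example, inverts the image like a real lens.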
"Between the painting and the "camera" I have placed another quad that I would like to behave like an optical lens:"
This is a tricky one. First, one must understand that OpenGL is a so-called localized rendering model rasterizer, which means, in layman's terms, that it works like pencils and brushes on a canvas.
It thus stands in stark contrast to global scene representation renderers like ray tracers. A ray tracer operates on a fully defined scene, and because of that it can do things like refraction trivially.
Indeed, one must treat OpenGL like an artist treats their tools. So any optical "effect" you want to create must be implemented by mastering the various drawing techniques possible with the tools OpenGL offers. To create the effect you desire, you must implement a multi-stage process.
For refraction you first render the scene as "seen" by the refracting object in all directions (you create a dynamic cube map), then you use this cube map as input data for rasterizing the "refracting" object, where a shader is used to determine the refracted direction of a ray of light hitting the rasterized fragments.
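As a rough illustration of that last step, the "refracting" object's fragment shader can sample the dynamic cube map in the refracted direction, much like the cubemap reflection sketch earlier but using refract() instead of reflect(). Uniform and varying names are assumptions.

    precision mediump float;
    uniform samplerCube uEnvMap;   // dynamic cubemap rendered from the refracting object
    uniform vec3 uCameraPos;
    uniform float uEta;            // ratio of refraction indices, e.g. 1.0 / 1.5 for glass
    varying vec3 vWorldPos;
    varying vec3 vWorldNormal;

    void main() {
        vec3 viewDir = normalize(vWorldPos - uCameraPos);
        vec3 refrDir = refract(viewDir, normalize(vWorldNormal), uEta);
        gl_FragColor = textureCube(uEnvMap, refrDir);
    }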
BTW: what holds for refraction holds for any other light-interaction effect. Shadows are just as non-trivial as refractions in OpenGL.
Just what it says: I have some code that's drawing GLUT cubes, but they're all grey. How do I make them different colors?
Everything I've tried so far has failed. I think my problem is that I'm trying to use OpenGL functions to change their colors, but GLUT is maintaining its own internal color or material data and I don't know how to make it change that data.
This is just filler graphics for a test-client for an online game, so they don't have to look good, I just need to be able to tell things apart. I know GLUT isn't considered great, so if anyone wants to post an example of drawing a cube with plain OpenGL instead of glutCube I'm all ears. I don't really care how I get the cubes on the screen, and it's not a part of the code I want to spend a lot of time on. I have a partner who's doing the real graphics; I just need to get something showing so that I can visualize what my code is doing.
The language I'm using OpenGL/GLUT from is called Io, but the API it exposes should be the same as if I were calling it from C.
It turns out that if I just do:
glEnable(GL_COLOR_MATERIAL)
then it makes the material track whatever color I set with glColor, even when lighting is enabled.
Just set the color beforehand with glColor(). If you're using lighting (i.e. GL_LIGHTING is enabled), though, then you'll instead have to use glMaterial() to set the cube's color.
I would like to render two scenes in OpenGL, and then do a visual crossfade from one scene to the second. Can anyone suggest a starting point for learning how to do this?
The most important thing you need to learn is how to do render-to-texture.
When you have both scenes in two textures, it really is simple to crossfade between them. In fact, it's pretty simple to do all manner of interesting fade effects :)
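For example, once both scenes live in textures, the final fullscreen pass can be as small as this sketch; the uniform names are assumptions, and uFade is animated from 0 to 1 over the duration of the crossfade.

    precision mediump float;
    uniform sampler2D uSceneA;   // first scene, rendered to a texture
    uniform sampler2D uSceneB;   // second scene, rendered to a texture
    uniform float uFade;         // 0.0 = scene A only, 1.0 = scene B only
    varying vec2 vUv;            // 0..1 across the fullscreen quad

    void main() {
        vec4 a = texture2D(uSceneA, vUv);
        vec4 b = texture2D(uSceneB, vUv);
        gl_FragColor = mix(a, b, uFade);
    }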
Here's sample code of a cross fade. This seems a little different than what Goz has since the two scenes are dynamic. The example uses the stencil buffer for the cross fade.
I could think of another way to crossfade scenes, but it depends on how complex your scene renderer is. If it is simple, you could start a shader program before rendering the second scene that does the desired blending effect. I would try glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) and manipulate the fragments' alpha values in the shader.
By the way, FBOs have been available for years already, extension or not. If your renderer is complex and uses shader programs, you could just as well render both scenes to FBOs and blend those. Using FBOs is a very common technique that makes it easy to apply all kinds of effect rendering.