Using a shader to animate a mesh - C++

I have been working on my low-level OpenGL understanding, and I've finally come to how to animate 3D models. Nowhere I look tells me how to do skeletal animation. Most resources use some kind of 3D engine and just say "load the skeleton" or "apply the animation", but not how to load a skeleton or how to actually move the vertices.
I'm assuming each bone has a 4x4 matrix of the translation/rotation/scale for the vertices it's attached to, so that when the bone is moved, the attached vertices move by the same amount.
For skeletal animation I was guessing that you would pass the bone(s) to the shader, so that in the vertex shader I can move the current vertex before it goes on to the fragment shader. If I have a keyframed animation, I send the current bone and the new bone to the shader along with the current time between frames, and interpolate the vertices between the two bone poses based on how much time there is between keyframes.
Is this the correct way to animate a mesh, or is there a better way?

Well - the method of animation depends on the format and the data that's written in it. Some formats supply the data as vectors, some use matrices. I have to admit I came to this site to ask a similar question, but I had specified the format (I was using *.x files, you can check that topic), and I got an answer.
Your idea of the subject is correct. If you want a sample implementation, you can find one on the OpenGL wiki.
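To make the idea concrete, here is a minimal linear-blend skinning sketch, written as a GLSL vertex shader embedded in a C++ string (the way you would hand it to glShaderSource). The names (boneMatrices, boneIndices, boneWeights, MAX_BONES) are just assumptions for illustration and not part of any particular file format:

    // Minimal linear-blend skinning sketch, handed to glShaderSource as a string.
    // Each vertex stores which bones affect it (boneIndices) and how strongly
    // (boneWeights, summing to 1). The application uploads the already
    // interpolated bone transforms to boneMatrices every frame.
    const char* skinningVertexShader = R"(
        #version 330 core
        const int MAX_BONES = 64;

        layout(location = 0) in vec3  position;
        layout(location = 1) in ivec4 boneIndices;   // set with glVertexAttribIPointer
        layout(location = 2) in vec4  boneWeights;

        uniform mat4 boneMatrices[MAX_BONES];        // per-bone transforms for this frame
        uniform mat4 modelViewProjection;

        void main()
        {
            // Blend the bone transforms by their weights.
            mat4 skin = boneMatrices[boneIndices.x] * boneWeights.x
                      + boneMatrices[boneIndices.y] * boneWeights.y
                      + boneMatrices[boneIndices.z] * boneWeights.z
                      + boneMatrices[boneIndices.w] * boneWeights.w;

            gl_Position = modelViewProjection * skin * vec4(position, 1.0);
        }
    )";

The keyframe interpolation itself usually happens on the CPU: rather than lerping whole 4x4 matrices, you normally interpolate each bone's translation linearly and its rotation with a quaternion slerp, rebuild the matrix, and upload the result every frame.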

Related

Select object in OpenGL when doing transformations in the vertex shader

I'm pretty new to OpenGL and am trying to implement a simple program where I can draw cubes, move them around with the mouse, and delete them.
Previously I had done my drag operations by translating on the CPU. In this way I was able to use ray-tracing to pick out the element I wanted because the vertices themselves were being updated.
However, I'm trying to move all of the transformations to the GPU, and in doing so I realized that I would be giving up up-to-date access to the vertices on the CPU (the CPU still thinks the vertices are the un-transformed ones). How does one handle this so that I don't have to do the transformations manually on the CPU as well as in the vertex shader?
No matter where you do your transformations, you will typically have a model matrix that describes where each object is in the scene. Instead of transforming each object into world space just so you can check for intersection with a world-space ray, you can go the other way and transform the ray into each object's local space using the inverse model matrix.
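A minimal sketch of that idea, assuming GLM for the math (Ray and modelMatrix are hypothetical names):

    #include <glm/glm.hpp>

    // Hypothetical ray type: origin plus direction, both in world space.
    struct Ray {
        glm::vec3 origin;
        glm::vec3 direction;
    };

    // Transform a world-space ray into an object's local space, so it can be
    // intersected against the original (untransformed) vertices or bounding box.
    Ray worldRayToObjectSpace(const Ray& worldRay, const glm::mat4& modelMatrix)
    {
        const glm::mat4 invModel = glm::inverse(modelMatrix);
        Ray local;
        // Points take the full transform (w = 1); directions ignore translation (w = 0).
        local.origin    = glm::vec3(invModel * glm::vec4(worldRay.origin, 1.0f));
        local.direction = glm::vec3(invModel * glm::vec4(worldRay.direction, 0.0f));
        return local;
    }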
One general issue with ray-tracing is that, as your scene gets larger, brute-force testing of each object gets increasingly slow. You can use acceleration structures like an octree or a bounding volume hierarchy to speed things up. A completely different approach to picking would be to just render an ID buffer, i.e. a buffer that has the same resolution as your currently rendered frame and for each pixel stores the ID of the object that is visible at that pixel. Then you can simply read back the value of the pixel underneath the cursor to find out which object you hit, without the need to do any raytracing. Rendering the ID buffer could be done as a separate pass, or could likely just be added as an additional render target to a pass you are already doing, e.g. prefilling the depth buffer, or the main scene pass if you only render once.
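For the ID-buffer approach, the read-back itself is just one pixel. A sketch, assuming the ID pass wrote object IDs into a GL_R32UI color attachment of a hypothetical pickingFbo, and that the mouse Y coordinate has already been flipped into OpenGL's bottom-left convention:

    #include <GL/glew.h>
    #include <cstdint>

    // Read the object ID under the cursor from a previously rendered ID buffer.
    // Assumes the ID pass wrote IDs into a GL_R32UI color attachment of pickingFbo.
    std::uint32_t pickObjectId(GLuint pickingFbo, int mouseX, int mouseYFlipped)
    {
        glBindFramebuffer(GL_READ_FRAMEBUFFER, pickingFbo);
        glReadBuffer(GL_COLOR_ATTACHMENT0);

        std::uint32_t id = 0;  // convention: 0 means "no object"
        glReadPixels(mouseX, mouseYFlipped, 1, 1,
                     GL_RED_INTEGER, GL_UNSIGNED_INT, &id);

        glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
        return id;
    }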

openGL simple 2d light

I am making a simple pixel-art top-down game, and I want to add some simple lights to it, but I don't know the best way to do that. This image is an example of the kind of light I want to achieve.
http://imgur.com/a/PpYiR
When I googled that task, I saw only solutions for that kind of light.
https://www.youtube.com/watch?v=mVlYsGOkkyM
But I need to increase the brightness of part of a texture when the light source is near it. How can I do this if I am using textures with GL_QUADS without UV coordinates?
Ok, my response may not totally answer your question, but it will lead you down the right path.
It appears you are using immediate mode; this is now deprecated, and changing to VBOs (vertex buffer objects) will make your life easier.
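A minimal sketch of what that change looks like (modern core-profile style; assumes a GL 3.3 context with function loading already set up):

    // One-time setup: a textured quad as two triangles, interleaved (x, y, u, v).
    float quad[] = {
        -0.5f, -0.5f, 0.0f, 0.0f,
         0.5f, -0.5f, 1.0f, 0.0f,
         0.5f,  0.5f, 1.0f, 1.0f,
        -0.5f, -0.5f, 0.0f, 0.0f,
         0.5f,  0.5f, 1.0f, 1.0f,
        -0.5f,  0.5f, 0.0f, 1.0f,
    };

    GLuint vao = 0, vbo = 0;
    glGenVertexArrays(1, &vao);
    glGenBuffers(1, &vbo);

    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);

    // Attribute 0 = position, attribute 1 = texture coordinate (UV).
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float),
                          (void*)(2 * sizeof(float)));
    glEnableVertexAttribArray(1);

    // Every frame:
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 6);

As a side effect, the interleaved UVs here are exactly the texture coordinates you said you were missing, and the shader further down needs them.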
The lighting in the picture appears to be hand drawn. You cannot create that style of lighting exactly with even the best algorithm.
You really have two options to solve your problem, and both of them will require texture coordinates and shaders.
You could go with lightmaps, which use a pre-generated texture multiplied over the texture of a quad. This is extremely fast, but requires some sort of tool to generate the lightmaps, which might be a bit over your head at the moment.
Instead, learn shader based lighting. Many tutorials exist for 3d lighting but the principles remain the same for 2D.
Some Googling will get you the resources you need to implement shaders.
A basic distance-based lighting algorithm looks like this:
gl_FragColor = texturecolor * (1.0 / distance(light_position, world_position));
It scales the color of the texel by the inverse of its distance from the light position, so texels close to the light come out brighter. There are tutorials that go into more depth on this.
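Expanded into a complete fragment shader, it might look like the sketch below. The uniform and varying names (lightPosition, lightRadius, worldPosition, and so on) are assumptions for illustration, and the bare 1/d term is swapped for a clamped linear falloff so the color does not blow up right next to the light:

    // Fuller sketch of the distance-based lighting above, as a fragment shader
    // string. The varying/uniform names are assumptions; the bare 1/d term is
    // replaced by a clamped linear falloff so the result stays in a sane range.
    const char* lightingFragmentShader = R"(
        #version 330 core

        in vec2 texCoord;         // UV from the vertex shader
        in vec2 worldPosition;    // fragment position in scene units

        uniform sampler2D diffuseTexture;
        uniform vec2  lightPosition;
        uniform vec3  lightColor;
        uniform float lightRadius;   // distance at which the light has fully faded

        out vec4 fragColor;

        void main()
        {
            vec4  texel       = texture(diffuseTexture, texCoord);
            float d           = distance(lightPosition, worldPosition);
            float attenuation = clamp(1.0 - d / lightRadius, 0.0, 1.0);

            fragColor = vec4(texel.rgb * lightColor * attenuation, texel.a);
        }
    )";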
If you want to make the lighting look "retro" like in the first image, you can downsample the colors in a post-processing step.

Generic picking solution for 3D scenes with vertex-shader-based geometry deformation applied

I'm trying to implement a navigation technique for 3D scenes (in OpenSceneGraph with OpenGL). Among other things, the user should be able to click on a scene object on the screen to move towards it.
The navigation technique should be integrated into another project which uses a vertex shader to apply a global deformation to the scene geometry. And here is the problem: since the geometry is deformed in the vertex shader, it is not straightforward to un-project the mouse cursor position to the world coordinates of the spot the user actually selected. But I need those coordinates to perform the proper camera movement in my navigation technique.
One way of performing this un-projection would be to modify the vertex shader (used for the deformation) so that it also stores each vertex's original position and normal in separate textures. Afterwards one could read those textures at the mouse position to get the desired values.
Now, as I said, the vertex shader belongs to another project which I actually don't want to touch. One goal of my navigation technique is to be as generic as possible to be easily integrated into other projects as well.
So here is the question: is there any feature in OpenSceneGraph or OpenGL that I did not consider so far? Anything that allows me to get the world coordinates of a fragment, independently of the vertex shader code?
Well, you could always do an OpenGL selection operation:
http://www.glprogramming.com/red/chapter13.html
Alternatively, you could rasterize to a very small (1px*1px) framebuffer where the user clicked, read back the z-buffer, and unproject the Z value you got into world space.
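A sketch of that second approach, assuming GLM for the unprojection (view and projection are the camera matrices, and the mouse Y coordinate is already flipped into OpenGL's bottom-left convention). Because it only needs the depth buffer and the camera matrices, it works regardless of what the vertex shader did to the geometry, and it returns the world-space position of the deformed surface the user actually sees:

    #include <GL/glew.h>
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>   // glm::unProject

    // Read the depth under the cursor and unproject it back into world space.
    // 'view' and 'projection' are the camera matrices used for rendering.
    glm::vec3 unprojectCursor(int mouseX, int mouseYFlipped,
                              const glm::mat4& view, const glm::mat4& projection)
    {
        GLint viewport[4];
        glGetIntegerv(GL_VIEWPORT, viewport);

        float depth = 1.0f;   // 1.0 = far plane, i.e. nothing was hit
        glReadPixels(mouseX, mouseYFlipped, 1, 1,
                     GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

        // Window coordinates back to world coordinates.
        glm::vec3 win(float(mouseX), float(mouseYFlipped), depth);
        return glm::unProject(win, view, projection,
                              glm::vec4(viewport[0], viewport[1],
                                        viewport[2], viewport[3]));
    }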

OpenGL: create complex and smoothed polygons

In my OpenGL project, I want to dynamically create smoothed polygons, similar to this one:
The problem lies mainly in the smoothing process. My procedure up to this point is first to create a VBO with randomly placed vertices.
Then, in my fragment shader (I'm using the programmable pipeline), the smoothing should happen, or in other words, the curves should be created out of the previously defined "lines" between the vertices.
And exactly here is the problem: I am not very familiar with the complex mathematical algorithms that would determine whether a point is inside the "smoothed polygon" or not.
First up, you can't really do it in the fragment shader. The fragment shader is limited to setting the final(ish) color of a "pixel" (which is basically, but not exactly, an actual pixel) before it gets written to the screen. It can't create new points on a curve.
This page gives a nice overview of the different algorithms for creating smooth curves.
The general approach is to break a couple of control points into many points using a geometry shader, and then render them just like a normal polygon. But I don't know the details. Try a Google search for "bezier geometry shader", for example.
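To give a flavour of that approach, here is a minimal sketch of such a geometry shader (each primitive is drawn with GL_LINES_ADJACENCY so it delivers the four control points of one cubic Bezier segment; this is only an illustration, not a complete smoothing solution):

    // Geometry-shader sketch: each GL_LINES_ADJACENCY primitive carries the four
    // control points of one cubic Bezier segment and is expanded into a line strip.
    const char* bezierGeometryShader = R"(
        #version 330 core
        layout(lines_adjacency) in;                 // 4 control points per segment
        layout(line_strip, max_vertices = 17) out;

        void main()
        {
            vec4 p0 = gl_in[0].gl_Position;
            vec4 p1 = gl_in[1].gl_Position;
            vec4 p2 = gl_in[2].gl_Position;
            vec4 p3 = gl_in[3].gl_Position;

            const int SEGMENTS = 16;
            for (int i = 0; i <= SEGMENTS; ++i)
            {
                float t = float(i) / float(SEGMENTS);
                float u = 1.0 - t;
                // Cubic Bezier: B(t) = u^3*p0 + 3u^2*t*p1 + 3u*t^2*p2 + t^3*p3
                gl_Position = u*u*u*p0 + 3.0*u*u*t*p1 + 3.0*u*t*t*p2 + t*t*t*p3;
                EmitVertex();
            }
            EndPrimitive();
        }
    )";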
Wait, I lie. I found a program here that does it in the fragment shader.

Lens shader / image distortion

Well, I have a 3D scene, currently with just a quad (a painting) with a texture on it. Between the painting and the "camera" I have placed another quad that I would like to behave like an optical lens, distorting the picture "below" it.
How would one achieve this, preferably with a shader and some pixel buffers?
Here is an example I found a while ago which does something very similar to what you want. http://www.paulsprojects.net/opengl/refract/refract.html
You will probably have to modify the code a bit to achieve the inversion effect you want, but this will get you started on the right track.
Edit:
By the way, you will not need the second image (the inverted small rectangle). Just use a single background image and the shader.
Between the painting and the "camera" I have placed another quad that I would like to behave like an optical lens:
This is a tricky one. First one must understand that OpenGL is a so-called localized rendering model rasterizer, which means, in layman's terms, that it works like pencils and brushes on a canvas.
It thus works in stark contrast to global scene representation renderers like raytracers. A raytracer operates on a fully defined scene; because of that it can do things like refraction trivially.
Indeed, one must treat OpenGL like an artist treats their tools. Any optical "effect" you want to create must be implemented by mastering the various drawing techniques possible with the tools OpenGL offers. To create the effect you desire you must implement a multi-stage process.
For refraction you first render the scene as "seen" by the refracting object in all directions (you create a dynamic cube map), then you use this cube map as input data for rasterizing the "refracting" object, where a shader is used to determine the refracted direction of a ray of light hitting the rasterized fragments.
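For the second stage, a minimal fragment-shader sketch of that cube-map lookup might look like this (names such as environmentMap, eta, worldNormal and cameraPosition are assumptions for illustration):

    // Fragment-shader sketch for the second stage: look up the dynamic cube map
    // along the refracted view direction. All names here are assumptions.
    const char* refractionFragmentShader = R"(
        #version 330 core
        in vec3 worldNormal;
        in vec3 worldPosition;

        uniform samplerCube environmentMap;  // cube map rendered from the object's position
        uniform vec3  cameraPosition;
        uniform float eta;                   // ratio of refractive indices, e.g. 1.0 / 1.5

        out vec4 fragColor;

        void main()
        {
            vec3 incident  = normalize(worldPosition - cameraPosition);
            vec3 normal    = normalize(worldNormal);
            vec3 refracted = refract(incident, normal, eta);

            fragColor = texture(environmentMap, refracted);
        }
    )";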
BTW: what holds for refraction holds for any other similar interaction effect. Shadows are just as non-trivial as refractions in OpenGL.