I'm writing a C++ program to draw objects in a world, and I'm having some difficulty with lighting when rotating/translating an object that consists of multiple sub-objects (for example, a tree object consists of a cylinder object for the trunk and pyramid objects for the leaves).
I have a working light source right now but I run into some issues when rotating a subcomponent of an object (like the pyramid object inside its parent, the tree).
All lighting/shading works when I apply a rotation to the tree object, but lighting gets wonky and random if I attempt to rotate the pyramid object inside the tree. I'm really hoping someone may have some tips or hints as to what I could be running into?
Edit: There are a lot of files in this project and it would be difficult to post enough here to give the entire picture. Essentially my steps are:
Set up a world with a functioning OpenGL light source.
Create a shape object which has a transformation matrix associated with it.
Using the shape and its matrix, calculate its normals to use for shading (see the normal-computation sketch after these steps).
Now create a new shape object which consists of the other shape objects, and throw it in the world for the lighting to take effect.
The problem happens here: if I rotate the parent object, lighting is fine, but if I rotate the child object, lighting becomes random.
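For context, a face normal is typically just the normalized cross product of two triangle edges, roughly like the sketch below (illustrative names with a GLM-style math library, not my actual code):

```cpp
#include <glm/glm.hpp>

// Illustrative sketch: face normal of a triangle (v0, v1, v2) in object space.
// When the shape's transformation matrix changes, the normal must be
// transformed consistently (e.g. by the inverse transpose of the model matrix)
// or recomputed, but only once per change, not repeatedly.
glm::vec3 faceNormal(const glm::vec3& v0, const glm::vec3& v1, const glm::vec3& v2)
{
    glm::vec3 edge1 = v1 - v0;
    glm::vec3 edge2 = v2 - v0;
    return glm::normalize(glm::cross(edge1, edge2));
}
```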
I found the problem in my code. I was updating the normals too frequently, so OpenGL lighting had bad normals to work with.
Thanks for the help everyone!
I'm pretty new to OpenGL and am trying to implement a simple program where I can draw cubes, move them around with the mouse, and delete them.
Previously I had done my drag operations by translating on the CPU. In this way I was able to use ray-tracing to pick out the element I wanted because the vertices themselves were being updated.
However, I'm trying to move all of the transformations to the GPU, and in doing so realized that I would then be giving up updated access to the vertices on the CPU (the CPU still thinks the vertices are the un-transformed ones). How does one handle this communication so that I don't have to do the transformations manually on the CPU as well as in the vertex shader?
No matter where you're doing your transformations, you will typically have a model matrix that describes where each object is in the scene. Instead of transforming each object into world space just so you can check for intersection with a world-space ray, you can transform the ray into each object's local space using the inverse of that object's model matrix.
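As a rough sketch of that ray transformation, assuming a GLM-style math library (the `Ray` struct is made up for illustration):

```cpp
#include <glm/glm.hpp>

// Hypothetical ray type: origin plus direction.
struct Ray {
    glm::vec3 origin;
    glm::vec3 direction;
};

// Transform a world-space ray into an object's local space using the inverse
// of that object's model matrix. Intersection tests can then run against the
// original, untransformed vertices that the CPU already has.
Ray toObjectSpace(const Ray& worldRay, const glm::mat4& model)
{
    glm::mat4 invModel = glm::inverse(model);
    Ray local;
    local.origin    = glm::vec3(invModel * glm::vec4(worldRay.origin, 1.0f));     // point: w = 1
    local.direction = glm::normalize(
                          glm::vec3(invModel * glm::vec4(worldRay.direction, 0.0f))); // direction: w = 0
    return local;
}
```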
One general issue with ray-tracing is that, as your scene gets larger, brute-force testing of each object will get increasingly slow. You can use acceleration structures like an octree or a bounding volume hierarchy to speed things up.

A completely different approach to picking would be to just render an ID buffer, i.e., a buffer that has the same resolution as your currently rendered frame and, for each pixel, stores the ID of the object that is visible at that pixel. Then you can simply read back the value of the pixel underneath the cursor to find out which object you hit, without the need to do any ray-tracing. Rendering the ID buffer could be done as a separate pass, or it can likely just be added as an additional render target to a pass you're already doing, e.g., prefilling the depth buffer, or just when rendering the scene in case you only do one pass.
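A minimal sketch of the readback step for the ID-buffer idea, assuming a GL 3+ context with the IDs rendered into an integer color attachment (the framebuffer and function names here are placeholders):

```cpp
#include <GL/glew.h>   // or whichever OpenGL loader you use
#include <cstdint>

// Read the object ID under the cursor from an ID buffer that was previously
// rendered into `idFramebuffer` (e.g. a GL_R32UI color attachment).
// OpenGL window coordinates start at the bottom-left, hence the Y flip.
std::uint32_t pickObjectId(GLuint idFramebuffer, int cursorX, int cursorY, int windowHeight)
{
    std::uint32_t id = 0;
    glBindFramebuffer(GL_READ_FRAMEBUFFER, idFramebuffer);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(cursorX, windowHeight - cursorY - 1, 1, 1,
                 GL_RED_INTEGER, GL_UNSIGNED_INT, &id);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
    return id;
}
```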
In OpenGL I created a simple cube that I rotate. I am rotating the cube, not the camera!
Then I added a light source. The light also rotates with the cube. How can I avoid this?
Can somebody tell me when to activate which matrix mode and when to push and pop the current matrix? I have tried many, many combinations and sequences, but the damn light always (!) follows the rotation.
I am not expecting a general answer that just says it has something to do with the matrix stack. A concrete short example of a rotating object with a non-rotating light would be great!
A probable reason why the light appears to follow a rotating object is that no (suitable) normal vectors were defined for the object. Even if the light and the movements of the objects are programmed correctly, wrong or missing normal vectors can make the whole scene look as if the light follows the rotating object.
Defining normal vectors for the objects, which determine how the light is reflected, can solve the problem.
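For example, with the fixed-function pipeline (which the matrix-mode/push/pop usage in the question suggests), each face needs a normal supplied before its vertices. A minimal sketch of one cube face, assuming immediate mode:

```cpp
#include <GL/gl.h>

// Minimal sketch: draw the +Z face of a unit cube with an explicit face normal.
// Without the glNormal3f call, lighting uses whatever normal is currently set,
// which can make the light appear to move with the cube.
void drawFrontFace()
{
    glBegin(GL_QUADS);
        glNormal3f(0.0f, 0.0f, 1.0f);    // normal pointing towards +Z
        glVertex3f(-0.5f, -0.5f, 0.5f);
        glVertex3f( 0.5f, -0.5f, 0.5f);
        glVertex3f( 0.5f,  0.5f, 0.5f);
        glVertex3f(-0.5f,  0.5f, 0.5f);
    glEnd();
}
```

If the cube is scaled, enabling GL_NORMALIZE (glEnable(GL_NORMALIZE)) keeps the normals unit-length after transformation.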
Currently I'm using WebGL for a school project, and am stuck on trying to keep an object perpendicular to the camera while allowing rotation of the rest of the scene.
I currently have a cube and another object on the canvas, and they're being successfully rotated using a quaternion and converting that to a rotation matrix. I would now like to add a simple square to the scene, but have it stay consistently perpendicular to the camera.
In thinking about this, I've considered an approach using multiple vertex shader programs for the different objects in the scene: 1 for handling the positioning of all objects in the scene that can be rotated, and another vertex shader corresponding to the square that I don't want rotated.
Though, I really don't know if this approach will work as expected, as I am still a novice at WebGL.
Would there be a better approach to this? I greatly apologize if there's a lack of info given, and can give any more info if needed. I've searched quite a bit for something involving this, but have hit a dead end.
Thank you!
If you want a step-by-step tutorial on billboards: http://www.opengl-tutorial.org/intermediate-tutorials/billboards-particles/billboards/.
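As a rough sketch of the billboard idea from that tutorial, assuming a GLM-style math library on the CPU side (the helper name is made up): take the rotation part of the view matrix, transpose it, and use that as the square's rotation so it always faces the camera.

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Hypothetical helper: build a model matrix for a quad at `position` whose
// rotation cancels the camera rotation, so the quad stays screen-aligned
// while the rest of the scene rotates normally.
glm::mat4 makeBillboardModel(const glm::mat4& view, const glm::vec3& position)
{
    glm::mat3 camRotation = glm::mat3(view);               // upper 3x3 = camera rotation
    glm::mat3 inverseRot  = glm::transpose(camRotation);   // for rotation matrices, inverse == transpose
    return glm::translate(glm::mat4(1.0f), position) * glm::mat4(inverseRot);
}
```

With this approach a separate vertex shader for the square isn't strictly necessary; only its model matrix differs from the other objects.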
I have been working on my low-level OpenGL understanding, and I've finally come to how to animate 3D models. Nowhere I look tells me how to do skeletal animation. Most things use some kind of 3D engine and just say "Load the skeleton" or "Apply the animation", but not how to load a skeleton or how to actually move the vertices.
I'm assuming each bone has a 4x4 matrix of the translation/rotation/scale for the vertices it's attached to, so that when the bone is moved, the attached vertices move by the same amount.
For skeletal animation I was guessing that you would pass the bone(s) to the shader, so that in the vertex shader I can move the current vertex before it goes on to the fragment shader. If I have a keyframed animation, I send the current bone and the next bone to the shader along with the current time between frames, and interpolate the vertices between the two bone poses based on how much time there is between keyframes.
Is this the correct way to animate a mesh, or is there a better way?
Well, the method of animation depends on the format and the data that's written in it. Some formats supply the data as vectors, some use matrices. I gotta admit I came to this site to ask a similar question, but I specified the format (I was using *.x files, you can check the topic) and I got an answer.
Your idea of the subject is correct. If you want a sample implementation, you can find one on the OpenGL wiki.
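As a rough sketch of the keyframe interpolation step described in the question, assuming a GLM-style math library (the `Keyframe` struct is made up for illustration; rotation is usually blended with a quaternion slerp rather than by lerping matrices):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

// Hypothetical keyframe: one bone's local pose at a given time.
struct Keyframe {
    float     time;
    glm::vec3 translation;
    glm::quat rotation;
    glm::vec3 scale;
};

// Blend two keyframes of one bone at time t (prev.time <= t <= next.time)
// and return the bone's interpolated local transform.
glm::mat4 interpolateBone(const Keyframe& prev, const Keyframe& next, float t)
{
    float f = (t - prev.time) / (next.time - prev.time);          // 0..1 between the keyframes
    glm::vec3 T = glm::mix(prev.translation, next.translation, f);
    glm::quat R = glm::slerp(prev.rotation, next.rotation, f);    // shortest-arc rotation blend
    glm::vec3 S = glm::mix(prev.scale, next.scale, f);
    return glm::translate(glm::mat4(1.0f), T)
         * glm::mat4_cast(R)
         * glm::scale(glm::mat4(1.0f), S);
}
```

The resulting per-bone matrices are then typically uploaded as a uniform array (e.g. with glUniformMatrix4fv) and each vertex is transformed in the vertex shader by the bones it is weighted to.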
I'm trying to build a (simple) game engine using c++, SDL and OpenGL but I can't seem to figure out the next step. This is what I have so far...
An engine object which controls the main game loop
A scene renderer which will render the scene
A stack of game states that can be pushed and popped
Each state has a collection of actors and each actor has a collection of triangles.
The scene renderer successfully sets up the view projection matrix
I'm not sure if the problem I am having relates to how to store an actors position or how to create a rendering queue.
I have read that it is efficient to create a rendering queue that draws opaque polygons front to back and then draws transparent polygons back to front. Because of this, my actors make calls to the "queueTriangle" method of the scene renderer object. The scene renderer then stores a pointer to each of the actor's triangles, sorts them based on their position, and renders them.
The problem I am facing is that for this to happen the triangle needs to know its position in world coordinates, but if I'm using glTranslatef and glRotatef I don't know these coordinates!
Could someone please, please, please offer me a solution or perhaps link me to a (simple) guide on how to solve this.
Thank you!
If you write a camera class and use its functions to move/rotate it in the world, you can use the matrix you get from the internal quaternion to transform the vertices, giving you the position in camera space so you can sort triangles from back to front.
A 'queueTriangle' call sounds very inefficient to me. Modern engines often work with many thousands of triangles at a time, so you'd normally hardly ever be working at the level of a single triangle. And if you were changing textures a lot to accomplish this ordering, then that is even worse.
I'd recommend a simpler approach: draw your opaque polygons in a much less rigorous order by sorting the actor positions in world space rather than the positions of individual triangles, and render the actors from front to back, one actor at a time. Your transparent/translucent polygons still require the back-to-front approach (provided you're not using premultiplied alpha), but everything else should be simpler and faster.
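A minimal sketch of that per-actor sort, with a placeholder `Actor` type (squared distance avoids a square root and sorts the same way):

```cpp
#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

// Placeholder actor: only the world-space position matters for sorting.
struct Actor {
    glm::vec3 position;
};

// Sort opaque actors front-to-back and transparent actors back-to-front,
// measured by squared distance to the camera.
void sortActors(std::vector<Actor*>& opaque,
                std::vector<Actor*>& transparent,
                const glm::vec3& cameraPosition)
{
    auto dist2 = [&](const Actor* a) {
        glm::vec3 d = a->position - cameraPosition;
        return glm::dot(d, d);
    };
    std::sort(opaque.begin(), opaque.end(),
              [&](const Actor* a, const Actor* b) { return dist2(a) < dist2(b); });
    std::sort(transparent.begin(), transparent.end(),
              [&](const Actor* a, const Actor* b) { return dist2(a) > dist2(b); });
}
```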