Matrix non-uniform scale and rotation issue - OpenGL

I just ran into this while doing some 2D graphics work with OpenGL.
If I do a non-uniform scale and then a rotation, things get really wonky.
This is because the upper 3x3 of the matrix holds both rotation and scale.
What are some ways to deal with combining non-uniform scaling and rotation? I could provide a video, but I am sure most people here have seen this problem.

It depends on what kind of output you want: when you are rotating the object, you have to consider around which axis you want to rotate it.
If you want to rotate the object around its own axis, translate it back to the origin first, then do your transformations, and then translate it back to its original position. If you can post a video, it will be easier to visualize what exactly you want. Also, the order of transformations can make things really wonky :) - see the sketch below.
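For illustration, here is a minimal C++/GLM sketch of the usual fix: compose the model matrix as translate * rotate * scale, so the non-uniform scale is applied in the object's local axes before the rotation. The variable names and values are just examples, not from the question.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Example object state: a position, a rotation angle, and a non-uniform scale.
glm::vec3 position(4.0f, 2.0f, 0.0f);
float angle = glm::radians(30.0f);
glm::vec3 scaling(2.0f, 0.5f, 1.0f);

// Compose translate * rotate * scale. Because the scale sits rightmost it is
// applied first, in the object's local axes, so the rotation afterwards does
// not shear the non-uniformly scaled object.
glm::mat4 model = glm::translate(glm::mat4(1.0f), position)
                * glm::rotate(glm::mat4(1.0f), angle, glm::vec3(0.0f, 0.0f, 1.0f))
                * glm::scale(glm::mat4(1.0f), scaling);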

Related

Transformation of vertex data. Mouse picking algorithm

In my program I have a gizmo which moves any object in the scene. As far as I know, the usual way of storing transformations is to keep them in the model matrices of the objects and apply them directly in the shader. BUT my program also implements a classic ray-picking algorithm which works only with really transformed data: the ray detects intersections against the real (transformed) vertex positions. What is the common way to resolve this conflict?
1. Multiply every transformation immediately on the CPU and store the transformed data. I think it's a clear way, but it's expensive: for example, if I drag my object on screen over 100 frames, each frame I convert the movement delta into matrices and multiply the whole data set by them.
2. Store the transformations in matrices until mouse picking starts, and then quickly multiply the vertices by the matrix to prepare the data for picking. This is very fancy, but there are ways to optimize it.
Which is the more performant way? Maybe there is some other method?
Update for Robinson:
I think you misunderstood me, or I did not fully understand you. I have a box and a sphere, and I move them with the gizmo (I edit their model matrices) by (1,0,0) and (0,1,0) respectively. Their model matrices are now different. HERE I get the data that I need for ray-picking: every object has its own individual place.
Then I transform the entire scene to eye space (view matrix) and then to clip space (projection matrix) and render it. My ray makes the return journey from the viewport back to world space (unprojecting with the view and projection matrices) and should interact with the actual scene. My ray is transformed rather than the scene!
My question was: how do I interact with objects whose real positions are unknown until they are rendered (or transformed)? Or maybe I'm not on the right track and should have done it differently - multiply the entire data set each step (which is expensive; see my first question).
You use ray-picking, which technically is: take the x,y screen coordinates, transform them to NDC, set z to any value in the range [-1,1], and finally transform it all back to world coordinates.
This is useful when you want to intersect a ray from the point of view (the camera) through the mouse coordinates AND you want to do all of the intersection calculations on the CPU side.
Notice you can do this even when nothing is drawn on the screen; just the mouse coordinates are needed - well, plus the viewport and the current transformations, but you know those before any glDrawxxx command.
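As an illustration, a minimal C++/GLM sketch of that unprojection, building a world-space picking ray from the mouse position. The function and parameter names are made up for the example; note that GLM's unProject expects window z in [0,1] rather than NDC [-1,1], and handles that remapping for you.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp> // glm::unProject

// Build a world-space picking ray from mouse coordinates.
// 'view', 'proj' and 'viewport' (x, y, width, height) are whatever your app uses.
glm::vec3 pickRayDirection(float mouseX, float mouseY,
                           const glm::mat4& view, const glm::mat4& proj,
                           const glm::vec4& viewport)
{
    // OpenGL window coordinates have y pointing up, so flip the mouse y.
    glm::vec3 nearWin(mouseX, viewport.w - mouseY, 0.0f); // on the near plane
    glm::vec3 farWin (mouseX, viewport.w - mouseY, 1.0f); // on the far plane

    glm::vec3 nearPt = glm::unProject(nearWin, view, proj, viewport);
    glm::vec3 farPt  = glm::unProject(farWin,  view, proj, viewport);

    return glm::normalize(farPt - nearPt); // ray direction; the origin is nearPt
}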
Next, the question is: what are you going to do with that ray or intersections?
You may wish to modify some property (like color) or position. Right?
How many objects are to be modified? If it's just a handful, then it's OK to do it on the CPU, modifying the data before sending it to the GPU. But if you have thousands of objects, think of the hardware-accelerated way: keep their coordinates as they are, but send the new transformation matrices and properties to the GPU and let it do the hard work.
If you are worried about some objects staying as before while others get modified, remember that you can draw groups of objects that share matrices and other uniforms with a single glDrawxxx call. If you have different groups, use several glDrawxxx calls with different uniforms, even different shaders; a sketch is below.
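A rough sketch of that grouping idea in C++/GLM, assuming a hypothetical Group struct, a GL context and headers, and an already-compiled shader with a u_mvp uniform:

#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// One draw call per group of objects sharing a transform.
struct Group { GLuint vao; GLsizei vertexCount; glm::mat4 modelMatrix; };

void drawGroups(GLuint shader, const std::vector<Group>& groups,
                const glm::mat4& proj, const glm::mat4& view)
{
    glUseProgram(shader);
    GLint mvpLoc = glGetUniformLocation(shader, "u_mvp");
    for (const Group& g : groups)
    {
        glm::mat4 mvp = proj * view * g.modelMatrix;  // per-group transform
        glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, glm::value_ptr(mvp));
        glBindVertexArray(g.vao);
        glDrawArrays(GL_TRIANGLES, 0, g.vertexCount); // vertex data stays untouched
    }
}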

How would I go about moving in a 3D environment, OpenGL C++

I'm not quite sure how I should be making things move with OpenGL.
Am I supposed to be moving the camera's position around the 3D world, or moving/translating the objects around the camera?
I read online that the camera should stay at the origin and everything else should move around the camera, but wouldn't that be an intensive operation? If I have 1000 objects and I'm moving, we'd have to move all of those objects. Wouldn't it be easier to move the camera and keep the world objects where they are?
The way OpenGL works is, conceptually, the camera is always in the center, Y axis up and Z axis forward. If you want to move or rotate the camera, you actually move everything else the opposite way.
This is opposed to Direct3D for example, where you have a separate camera matrix.
It's a minor detail though because mathematically speaking they're exactly the same. Whether you move everything forward or the camera back, it's exactly the same end result. You could even argue that having only one matrix as opposed to lugging around two and multiplying them is a performance gain, but it's extremely minor and usually you'll separate your camera matrix from your world building matrix anyway.
In OpenGL, the camera is always located at the eye-space coordinate (0.0, 0.0, 0.0). To give the appearance of moving the camera, your OpenGL application must move the scene with the inverse of the camera transformation.
You don't need to worry about moving/translating the objects in your scene yourself: the gluLookAt() function does it for you. It computes the inverse camera transform from its parameters and multiplies it onto the current matrix stack.
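For example, a fixed-function sketch (the eye and target values are made up for the example):

// gluLookAt computes the inverse camera transform from its parameters and
// multiplies it onto the current matrix stack.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 2.0, 5.0,   // eye: where the "camera" sits in world space
          0.0, 0.0, 0.0,   // center: the point it looks at
          0.0, 1.0, 0.0);  // up vector
// Everything drawn after this appears as seen from (0, 2, 5),
// even though it is the scene that gets transformed, not a camera.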

How do I access a transformed OpenGL modelview matrix [tried glGetFloatv()]?

I am trying to rotate about the x axis and save the transformed matrix so that I can use it to rotate further later, or rotate about another axis from the already-rotated perspective.
//rotate
glRotatef(yROT, model[0], model[4], model[8]); //front over right axis
//save model
glGetFloatv(GL_MODELVIEW_MATRIX, model);
Unfortunately I noticed that OpenGL must buffer the transformations, because it's the identity matrix that gets loaded into model. Is there a work-around?
Why, oh God, would you do this?
I have been toying around with trying to understand quaternion, Euler, and axis-angle rotation. The concepts are not difficult, but I have been having trouble with the math even after looking at examples. *edit: [and most of the open classes I have found are either not well documented for simpleton users or have restrictions on movement]
I decided to find a way to cheat.
*edit:
By 'further later' I mean in the next loop of the code. In other words, yROT is the number of degrees I want my view to rotate from the saved perspective.
My suggestion: don't bother with glRotate at all; it was never very pleasant to work with in the first place, and no serious program ever used it.
If you want to use the fixed-function pipeline (= no shaders), use glLoadMatrix to load whatever transformation you currently need. With shaders you have to do conceptually the same thing with glUniform anyway.
Use an existing matrix math library, like GLM, Eigen or linmath.h, to construct the transformation matrices. The nice benefit is that you can make copies of a matrix at any point, so instead of fiddling with glLoadIdentity, glPushMatrix and glPopMatrix you just make copies where you need them and work from those; a rough sketch is below.
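A minimal sketch of that approach with GLM, keeping the question's yROT variable. The matrix persists across loops in your own code rather than in GL state, so each call rotates further from the saved perspective:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

glm::mat4 model(1.0f); // persists across loops, unlike GL's matrix stack state

void rotateFurther(float yROT)
{
    // Post-multiplying rotates about the model's *local* x axis, so each call
    // rotates further from the already-rotated perspective.
    model = glm::rotate(model, glm::radians(yROT), glm::vec3(1.0f, 0.0f, 0.0f));

    // Fixed-function path; with shaders you would pass the matrix via glUniform*.
    glLoadMatrixf(glm::value_ptr(model));
}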
BTW: there is no such thing as a "model" in OpenGL. That's not how OpenGL works. OpenGL draws points, lines or triangles, one at a time, where each such so-called primitive is transformed individually to a position on the (screen) framebuffer and turned into pixels. Once a primitive has been processed, OpenGL has already forgotten about it.

Keeping polygon perpendicular to camera

Currently I'm using WebGL for a school project, and I'm stuck trying to keep an object perpendicular to the camera while allowing rotation of the rest of the scene.
I currently have a cube and another object on the canvas, and they're being successfully rotated using a quaternion and converting that to a rotation matrix. I would now like to add a simple square to the scene, but have it consistently perpendicular to the camera.
Thinking about this, I've considered an approach using multiple vertex shader programs for the different objects in the scene: one for handling the positioning of all the objects that can be rotated, and another vertex shader for the square that I don't want rotated.
Though I really don't know if this approach will work as expected, as I am still a novice at WebGL.
Would there be a better approach to this? I apologize if there's a lack of info given; I can give more info if needed. I've searched quite a bit for something involving this but have hit a dead end.
Thank you!
This is usually called billboarding. If you want a step-by-step tutorial: http://www.opengl-tutorial.org/intermediate-tutorials/billboards-particles/billboards/.
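For a flavor of what the tutorial covers, here is one common variant sketched in C++/GLM (the names are illustrative, not from the tutorial): overwrite the rotation part of the quad's model-view matrix with the identity, so the quad's local axes line up with the screen axes and it always faces the camera.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Screen-aligned billboard: keep only the translation of the model-view
// matrix and replace its rotation/scale part with the identity.
glm::mat4 billboardModelView(const glm::mat4& view, const glm::vec3& quadPos)
{
    glm::mat4 mv = view * glm::translate(glm::mat4(1.0f), quadPos);
    mv[0] = glm::vec4(1.0f, 0.0f, 0.0f, 0.0f); // column 0: eye-space x axis
    mv[1] = glm::vec4(0.0f, 1.0f, 0.0f, 0.0f); // column 1: eye-space y axis
    mv[2] = glm::vec4(0.0f, 0.0f, 1.0f, 0.0f); // column 2: eye-space z axis
    return mv; // use this for the square; other objects keep their own matrices
}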

Why does the camera have to stay at the origin in OpenGL?

So I'm learning OpenGL, and one thing I find very strange is that the camera has to stay at the origin and look in the same direction. To achieve camera movement and rotation you have to move and rotate the entire world instead of the camera.
My question is: why can't you move the camera? Does DirectX allow you to move the camera?
This is an interesting question. I think the answer depends on what you actually mean when you talk about a fixed camera.
As a matter of fact, instead of saying OpenGL has a fixed camera, I'd rather say there isn't any camera at all in OpenGL.
On the other hand, I wouldn't agree with your interpretation that the OpenGL API moves or rotates the world. Instead, I'd say the OpenGL API doesn't move or rotate the world at all.
I think the reason there isn't any concept of a camera in the OpenGL API is that it isn't meant as a high-level abstraction layer, but is rather tied to the computational necessities of displaying computer graphics.
Since I suppose you're mainly talking about displaying 3-dimensional scenes, this means transforming 3D vertex data into a 2D raster image.
For every frame rendered, this involves transforming the 3D coordinates of every vertex in your scene to their corresponding 2D locations on the screen.
As every vertex has to be placed at the right position on screen, it makes no computational difference at all whether you conceptually move something like a camera around or just move the whole world; you have to do the same transformation nonetheless.
The mathematics involved in computing the "right" position for a vertex on screen can be described by a mathematical object called a matrix that, when applied to the 3D data (the mathematical term for this application is matrix multiplication), results in the desired 2D screen coordinates.
So essentially what happens in rendering a 3D scene - regardless of whether there is any camera at all - is that every vertex is processed by some transformation matrix, leaving the original 3D data of the vertex intact.
As the 3D vertex data doesn't get changed at all, I'd say OpenGL doesn't move or rotate the world at all - but this "observation" may depend on the observer's perspective.
As a matter of fact, leaving the 3D vertex data intact is essential to prevent your 3D scene from deforming due to accumulated rounding inaccuracy.
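A tiny C++/GLM sketch of that point (names are illustrative): the stored vertex is never modified, and each frame a freshly built matrix maps it to clip space, so no rounding error accumulates in the data.

#include <glm/glm.hpp>

glm::vec4 toClipSpace(const glm::vec3& vertex,        // original data, intact
                      const glm::mat4& modelViewProj) // rebuilt every frame
{
    return modelViewProj * glm::vec4(vertex, 1.0f);
}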
I hope I could help by expressing my opinion on who or what moves whom, when, and why in the OpenGL API.
Even if I couldn't convince you that there is no world-moving involved in using the OpenGL API, don't forget that the world doesn't weigh anything at all, so moving it around shouldn't be too painful.
BTW, don't bother investigating the proprietary library mentioned in your question; keep relying on open standards.
What's the difference between moving the world and moving the camera? Mathematically... there isn't any; it's the same numbers either way. It's all a matter of perspective. As long as you code your camera abstraction correctly, you don't have to think of it as moving the world if you don't want to.
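A minimal sketch of such a camera abstraction in C++/GLM (the struct is illustrative): store the camera's own transform and hand the renderer its inverse as the view matrix.

#include <glm/glm.hpp>

struct Camera
{
    glm::mat4 transform = glm::mat4(1.0f); // where the "camera" is in the world

    glm::mat4 viewMatrix() const
    {
        // Moving the camera forward and moving the world backward produce the
        // same numbers: the view matrix is just the inverse of the camera's.
        return glm::inverse(transform);
    }
};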