Maintaining Aspect Ratio and Scale Independent of Window Size with freeglut - c++

I've been wanting to experiment with platforming physics using freeglut, but before I would allow myself to start, I had an old problem to take care of.
You see, I want to write a reshape handler that not only maintains the scale and eliminates any distortion of the view, but also allows all of the onscreen shapes to maintain their size even while the window is too small to contain them (i.e. let them be clipped).
I've almost got all three parts solved, but when I resize my window, the circle I have drawn onto it scales just slightly. Otherwise, I have the clipping working and I have eliminated the distortion. Update: What I want to achieve is a program that maintains scale and aspect ratio independent of window size.
Here's my code:
void reshape(int nwidth,int nheight)
{
    glViewport(0,0,nwidth,nheight);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    //here begins the code
    double bound = 1.5;
    double aspect = double(nwidth)/nheight;

    //so far, I get the best results by normalizing the dimensions
    double norm = sqrt(bound*bound+aspect*aspect);
    double invnorm = sqrt(bound*bound+(1/aspect)*(1/aspect));

    if(nwidth <= nheight)
        glOrtho(-bound/invnorm,bound/invnorm,-bound/aspect/invnorm,bound/aspect/invnorm,-1,1);
    else
        glOrtho(-bound*aspect/norm,bound*aspect/norm,-bound/norm,bound/norm,-1,1);

    //without setting the modelview matrix to the identity form,
    //the circle becomes an oval, and does not clip when nheight > nwidth
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
Update: As per Mr. Coleman's suggestion, I've tried switching out single precision for double. The scaling issue has improved along the vertical axis, but whenever I drag the horizontal axis in either direction, the shape still scales by a noticeable amount. It's still the same shape throughout, but a visual inspection tells me that the shape is not the same size when the window is 150x300 as it is when the window is 600x800, regardless of which glOrtho is being executed.

I've got it. Here's how I changed my code:
//at the top of the source file, in global scope:
int init_width;//the initial width
int init_height;//the initial height
void reshape(int new_width, int new_height)
{
    //moved the glViewport call further down (it was part of an earlier idea that didn't work out)
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();//these two lines are unchanged

    double bound = 1.0; //I reduced the edge distance to make the shape larger in the window
    double scaleX = double(new_width)/init_width;
    double scaleY = double(new_height)/init_height;

    glOrtho( -bound*scaleX/2, bound*scaleX/2, //these are halved in order to un-squash the shape
             -bound*scaleY, bound*scaleY, -1,1 );

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glViewport(0,0,new_width,new_height);
}
That is what my code looks like now. It maintains the scale and shape of what I have on screen, and allows it to go offscreen when the window is too small to contain the entire shape.
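For comparison, here is a minimal alternative sketch (not from the original post): derive the ortho bounds from the current window size using a fixed world-units-per-pixel factor, so a shape of fixed world size always covers the same number of pixels. unitsPerPixel is an assumed tuning constant.
void reshape(int new_width, int new_height)
{
    const double unitsPerPixel = 0.005; // assumed: world units per screen pixel

    glViewport(0, 0, new_width, new_height);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    // The visible world region grows and shrinks with the window, so drawn
    // shapes keep their on-screen size and simply get clipped when the
    // window is too small to contain them.
    double halfW = 0.5 * new_width  * unitsPerPixel;
    double halfH = 0.5 * new_height * unitsPerPixel;
    glOrtho(-halfW, halfW, -halfH, halfH, -1, 1);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}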

Related

Rotating object around itself

I have an object that I want to move around using the following mechanic: the left and right arrows change its rotation, and the up arrow increments its position.
My problem is that I either can't rotate the object around itself, or I can't move it in the direction being looked at.
The draw function is as follows:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glScalef(SCALE, SCALE, SCALE);
glTranslatef(x, 0, 0);
glRotatef(rotationZ, 0, 0, 1);
glTranslatef(-x, 0, 0);
// Draw the object...
glPopMatrix();
Key press detection code:
case GLUT_KEY_UP:
teclas.up = GL_TRUE;
glutPostRedisplay();
break;
case GLUT_KEY_LEFT:
teclas.left = GL_TRUE;
glutPostRedisplay();
break;
case GLUT_KEY_RIGHT:
teclas.right = GL_TRUE;
glutPostRedisplay();
break;
Timer function:
if (teclas.up) {
x++;
}
if (teclas.left) {
rotationZ++;
}
if (teclas.right) {
rotationZ--;
}
glutPostRedisplay();
I've seen multiple threads about this, and I've tried changing the sign of the x variable, but nothing seems to work.
Edit(solved):
I just changed the part of the Timer function that is responsible for the forward movement to this:
if (estado.teclas.up) {
homer.x+= (float)cos(homer.rotationZ * M_PI / 180);
homer.y+= (float)sin(homer.rotationZ * M_PI / 180);
}
And also, my Draw function:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glScalef(SCALE, SCALE, SCALE);
glTranslatef(x, 0, 0);
glRotatef(rotationZ, 0, 0, 1);
// Draw the object...
glPopMatrix();
This way, the object always moves in the direction it's facing.
This is a problem of a moving reference frame (those are the keywords to search for). Unless you simulate the physics of the process as well, for OpenGL rendering all we have to worry about are the coordinates. Here we have a stationary reference frame, sometimes called the world frame (especially if the observer is moving relative to it as well), and a moving reference frame (MRF) attached to the object. The MRF can have arbitrary rotation and translation relative to the world frame, and there are traditional ways of defining it.
For example, for the Earth globe the MRF is defined with its origin at the center of the Earth, the positive X axis intersecting the equator at the 0 meridian, the positive Z axis pointing to the north pole, and Y complementary to them. For a static point on the surface of the Earth (local geographic coordinates), Y is usually directed to the zenith, positive Z toward north in the plane of the horizon, and positive X toward east. In the case of a moving vehicle, the positive Y (pitch) axis points to its left, the positive Z (yaw) axis points up, and X (the roll axis) points straight forward. This last convention seems to match your case.
Regardless of the axis convention, rotating the vehicle is equivalent to changing the matrix that corresponds to it; let's call it the transformation matrix. In local coordinates the vehicle's velocity v = {vx,0,0} is a vector collinear with the positive X axis. But in world coordinates it is equal to
v' = M*v
where M is the transformation matrix of the MRF. Since v is the change of coordinates per unit of time, any translation should follow this formula too. If you're using legacy OpenGL, you have two options:
First: start with the identity matrix and recreate all transforms in the proper order.
Set the identity matrix.
Translate by the value required (in local coordinates).
Apply the rotations of the vehicle.
Translate by the last known position of the vehicle.
Either calculate the new position of the vehicle from the known transforms, or read it back by getting the matrix from OpenGL (via glGetFloatv(GL_MODELVIEW_MATRIX, ptr)) and extracting the offset from it, as sketched below.
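A small sketch of that readback, assuming a float modelview matrix; OpenGL stores matrices in column-major order, so the translation sits in elements 12, 13, and 14:
GLfloat m[16];
glGetFloatv(GL_MODELVIEW_MATRIX, m);   // read back the current modelview matrix
float posX = m[12];                    // translation X
float posY = m[13];                    // translation Y
float posZ = m[14];                    // translation Z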
The downside of this method is that you have to use OpenGL functions, where each call to glTranslate or glRotate creates another matrix that gets multiplied with the others (in the opposite order). That means extra math operations, and their precision isn't brilliant either. It can also get quite involved, like nested boxes, if you have several frames of reference, especially nested ones.
The second method is to do all the matrix math yourself, for example using a math library like GLM, and store a matrix for each frame of reference, modifying or regenerating them as needed. You can supply a matrix directly to OpenGL even in legacy mode via glLoadMatrix. If you are worried about performance, know that all modern implementations do that math on the CPU anyway; GPUs have not worked with the matrix stack for a long time, which you can verify quickly by inspecting open-source implementations.
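For illustration, here is a minimal sketch of that second approach, assuming GLM is available; the names vehicleMatrix, advance, and drawVehicle are invented for this example and are not from the question:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <GL/gl.h>

glm::mat4 vehicleMatrix(1.0f); // the vehicle's MRF, relative to the world frame

void advance(float distance, float turnDegrees)
{
    // Post-multiplying keeps both operations in the vehicle's own (local) frame:
    // first yaw about the local Z axis, then move along the local +X (roll) axis.
    vehicleMatrix = glm::rotate(vehicleMatrix, glm::radians(turnDegrees),
                                glm::vec3(0.0f, 0.0f, 1.0f));
    vehicleMatrix = glm::translate(vehicleMatrix,
                                   glm::vec3(distance, 0.0f, 0.0f));
}

void drawVehicle()
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glMultMatrixf(glm::value_ptr(vehicleMatrix)); // supply the matrix directly
    // ... issue the vehicle's draw calls here ...
    glPopMatrix();
}
Because the rotation and translation are applied in the vehicle's own frame, the translation automatically follows the current heading, which is exactly the behavior the question is after.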
In the modern, flexible pipeline you don't have glScale, glTranslate, or glRotate available at all; the entire matrix stack is deprecated since OpenGL 3. There you can only use the second approach, but in that case you supply the matrices to the shader program through uniforms.

OpenGL draw circle with Mouse Move

I am trying to use the function mouseMove(int x, int y) to draw a circle centered at my mouse as I click and drag it across the screen. Circles will be drawn along the moving mouse like spray paint. So far, this is what I have:
void mouseMove(int x, int y) {
    glBegin(GL_POLYGON);
    for (int i = 0; i <= 360; i++)
    {
        float theta = (2 * 3.14 * i) / 360;
        glVertex2f((size/2 + x) * cos(theta), (size/2 + y) * sin(theta));
    }
    glEnd();
    glutPostRedisplay();
}
But when using this, it draws very large circles that aren't centered around my mouse. How would I alter this to make the program draw circles centered at my mouse?
To describe the project: I am creating a painting program that changes the shapes, colors, sizes, and rotations of the drawing done in mouseMove. For now, size is an int set to 32. By pressing the 'b' key in a keyboard function, the user can switch which shape is drawn around the mouse as they click and drag it around, like spray paint. All the other shapes are drawn correctly around the mouse except for the circle spray.
This answer assumes that things like your viewport and projection matrices are set up correctly, and that the input to this function is taking into account the fact that "screen coordinates" (what the mouse uses) are not the same thing as "OpenGL Coordinate Space" (this usually implies reversing the direction of the y-axis for one or the other).
The math you're using for setting your vertex coordinates is wrong. The mouse's x and y coordinates should not be multiplied by the sine/cosine functions.
The correct way to write it is
glVertex2f((size/2) * cos(theta) + x, (size/2) * sin(theta) + y);
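Dropped back into the original function, the loop would look roughly like this (a sketch, assuming size is the brush diameter in the same coordinate space as x and y):
void mouseMove(int x, int y) {
    glBegin(GL_POLYGON);
    for (int i = 0; i <= 360; i++)
    {
        float theta = (2 * 3.14f * i) / 360;
        // radius times the unit-circle direction, then offset by the mouse position
        glVertex2f((size / 2) * cos(theta) + x,
                   (size / 2) * sin(theta) + y);
    }
    glEnd();
    glutPostRedisplay();
}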
I would also add that you appear to still be using OpenGL's immediate mode rendering, which is deprecated and will offer extremely poor training for a professional setting. I highly advise you to learn modern OpenGL (3.x+) and reapply those concepts to whatever projects you're already working on; there are very good tutorials for it online.

OpenGL. Window-To-Viewport Transformation

I'm new to OpenGL, so I require some assistance with the matter described below. I'm not sure how to produce viewport coordinates with respect to screen coordinates, nor how to write it in C++, since I'm used to Java.
In this question I need to implement the function worldToViewportTransform.
The function implements a 2D orthographic projection matrix, which is used for the (world)window-to-viewport transformation. In OpenGL this matrix is defined by gluOrtho2D.
Input are the coordinates of the world-window (winLeft, winRight, winBottom, winTop), the top-left corner of the viewport (window) on the screen (windowX, windowY), and the size of the viewport (window) on the screen (windowWidth, windowHeight).
Output are the values A, B, C and D which constitute the world-to-viewport transformation.
The answer needs to use the function format below - copy it and fill out the missing code. The function uses pointer variables for the values A, B, C and D since Coderunner does not seem to accept C++ notation - the code segment below converts the pointer variables to double values and back, so you don't need to understand how pointers work.
void worldToViewportTransform (double winLeft, double winRight, double winBottom, double winTop,
int windowX, int windowY, int windowWidth, int windowHeight,
double* APtr, double* BPtr, double* CPtr, double* DPtr)
{
double A=*APtr, B=*BPtr, C=*CPtr, D=*DPtr;
<INSERT YOUR CODE HERE>
*APtr=A; *BPtr=B; *CPtr=C; *DPtr=D;
}
A particular test case should produce the output: (u,v)=(-200,367)
//Code for Testing
double A, B, C, D;
double winLeft=1.5, winRight=4.5, winBottom=0.0, winTop=3.0;
int windowX=100, windowY=100, windowWidth=600, windowHeight=400;
worldToViewportTransform(winLeft, winRight, winBottom, winTop,
windowX, windowY, windowWidth, windowHeight,
&A, &B, &C, &D);
// Test cases
double x, y; // world coordinates
int u, v; // window coordinates
x=0.0f; y=1.0f;
u=(int) floor(A*x+C+0.5f);
v=(int) floor(B*y+D+0.5f);
printf("(u,v)=(%d,%d)",u,v);
The function implements a 2D orthographic projection matrix, which is used for the (world)window-to-viewport transformation. In OpenGL this matrix is defined by gluOrtho2D.
No! gluOrtho2D/glOrtho does not do that. These functions set up an orthographic projection matrix, whose purpose is to transform from view-space into clip-space.
Then an implicit clip-space to NDC-space transform happens behind the scenes.
Finally the NDC-space coordinates are transformed to window coordinates in the viewport range.
Input are the coordinates of the world-window (winLeft, winRight, winBottom, winTop), the top-left corner of the viewport (window) on the screen (windowX, windowY), and the size of the viewport (window) on the screen (windowWidth, windowHeight).
Your nomenclature seems a little bit off.
The usual convention is that the viewport defines the target rectangle within the window, specified in window-relative coordinates. In OpenGL the window coordinate (0,0) is the lower-left corner of the window.
Window coordinates are usually relative to its parent window; hence for top-level window relative to the screen coordinates. In usual screen coordinate systems (0,0) is the upper-left.
Output are the values A, B, C and D which constitute the world-to-viewport transformation.
It's unclear what you actually want, but my best educated guess is that you want to recreate the OpenGL transformation chain. Of course, if you're using shaders, everything could be done in there. But in practice you'll probably just want to follow the chain
r_clip = P · M · r_in
r_NDC = r_clip/r_clip.w
r_viewport = (r_NDC.xy + 1)*viewport.width_height/2 + viewport.xy
where P is your projection matrix, for example the matrix produced by glOrtho.
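As an illustration of that chain, here is a hedged sketch with an invented helper name worldToWindow, assuming an orthographic projection (as produced by gluOrtho2D) and the OpenGL convention that the viewport origin is the window's lower-left corner. To reproduce the test output above, where (0,0) is the upper-left of the screen, you would additionally flip the Y direction, i.e. v = windowY + windowHeight - (ndcY + 1)*windowHeight/2.
void worldToWindow(double x, double y,                 // world coordinates
                   double left, double right,
                   double bottom, double top,          // gluOrtho2D arguments
                   double vpX, double vpY,
                   double vpWidth, double vpHeight,    // glViewport arguments
                   double* u, double* v)               // window coordinates
{
    // Projection: world -> NDC in [-1, 1]. For an orthographic matrix the
    // w component stays 1, so the perspective divide is a no-op here.
    double ndcX = 2.0 * (x - left)   / (right - left) - 1.0;
    double ndcY = 2.0 * (y - bottom) / (top - bottom) - 1.0;

    // Viewport transform: NDC -> window coordinates.
    *u = (ndcX + 1.0) * vpWidth  / 2.0 + vpX;
    *v = (ndcY + 1.0) * vpHeight / 2.0 + vpY;
}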

opengl - Rotating around a sphere using vectors and NOT glulookat

I'm having an issue with drawing a model and rotating it using the mouse. I'm pretty sure there's a problem with the mathematics, but I'm not sure what it is. The object just rotates in a weird way.
I want each click to start rotating the object from its current orientation and not reset it, because the vectors have changed and the calculation starts all over again.
void DrawHandler::drawModel(Model * model){
    unsigned int l_index;
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW); // Modeling transformation
    glLoadIdentity();

    Point tempCross;
    crossProduct(tempCross,model->getBeginRotate(),model->getCurrRotate());
    float tempInner= innerProduct(model->getBeginRotate(),model->getCurrRotate());
    float tempNormA =normProduct(model->getBeginRotate());
    float tempNormB=normProduct(model->getCurrRotate());

    glTranslatef(0.0,0.0,-250.0);
    glRotatef(acos (tempInner/(tempNormA*tempNormB)) * 180.0 / M_PI,tempCross.getX(),tempCross.getY(),tempCross.getZ());

    glColor3d(1,1,1);
    glBegin(GL_TRIANGLES);
    for (l_index=0;l_index < model->getTrianglesDequeSize() ;l_index++)
    {
        Triangle t = model->getTriangleByPosition(l_index);
        Vertex a1 = model->getVertexByPosition(t.getA());
        Vertex a2 = model->getVertexByPosition(t.getB());
        Vertex a3 = model->getVertexByPosition(t.getC());
        glVertex3f( a1.getX(),a1.getY(),a1.getZ());
        glVertex3f( a2.getX(),a2.getY(),a2.getZ());
        glVertex3f( a3.getX(),a3.getY(),a3.getZ());
    }
    glEnd();
}
This is the mouse function, which saves the beginning vector for the rotation formula:
void Controller::mouse(int btn, int state, int x, int y)
{
    x=x-WINSIZEX/2;
    y=y-WINSIZEY/2;
    if (btn==GLUT_LEFT_BUTTON){
        switch(state){
            case(GLUT_DOWN):
                if(!_rotating){
                    _model->setBeginRotate(Point(float(x),float(y),
                        (-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS < 0)? 0:float(sqrt(-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS))));
                    _rotating=true;
                }
                break;
            case(GLUT_UP):
                _rotating=false;
                break;
        }
    }
}
And finally, the following function holds the current vector (the beginning vector is where the mouse was clicked, and the current vector is where the mouse is at the moment):
void Controller::getMousePosition(int x,int y){
    x=x-WINSIZEX/2;
    y=y-WINSIZEY/2;
    if(_rotating){
        _model->setCurrRotate(Point(float(x),float(y),
            (-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS < 0)? 0:float(sqrt(-float(x)*x - y*y + SPHERERADIUS*SPHERERADIUS))));
    }
}
where SPHERERADIUS is the sphere radius of 70.
Is any calculation wrong? I can't seem to find the problem...
Thanks
Why so complicated? Either you change the view matrix or you change the model matrix of your focused object. If you choose to change the model matrix and your object is centered at (0,0,0) of the world coordinate system, computing the rotating-around-a-sphere illusion is trivial: you just rotate in the opposite direction. If you want to change the view matrix (which is what actually happens when you change the position of the camera), you have to approximate the surface points on the chosen sphere. For that, you could introduce two parameters specifying two angles. Every time you click and move your mouse, you update the parameters and compute the new location on the sphere. There are some useful equations at http://en.wikipedia.org/wiki/Sphere.
Without knowing what library (or libraries) you're using your code is rather difficult to read. It seems you're setting up your camera at (0, 0, -250), looking towards the origin, then rotating around the origin by the angle between two vectors, model->getCurrRotate() and model->getBeginRotate().
The problem seems to be that in "mouse down" events you explicitly set BeginRotate to the point on the sphere under the mouse, then in "mouse move" events you set CurrRotate to the point under the mouse, so every time you click somewhere else, you lose the previous state of rotation because BeginRotate and CurrRotate are simply overwritten.
Combining multiple rotations around arbitrary, different axes is not a trivially simple task. The proper way to do it is to use quaternions. A primer on quaternions and other 3D math concepts will be useful here.
You might also want a more robust algorithm for converting screen coordinates to model coordinates on the sphere. The one you are using is assuming the sphere appears 70 pixels in radius on the screen and that the projection matrix is orthographic.
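As a minimal sketch of the quaternion approach (assuming GLM is available; the names accumulated, applyDrag, and loadModelMatrix are invented for this example): accumulate each drag's rotation into a persistent orientation, so a new click continues from where the last drag ended instead of resetting.
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <GL/gl.h>

glm::quat accumulated(1.0f, 0.0f, 0.0f, 0.0f); // identity orientation

// beginDir and currDir are unit vectors from the sphere centre to the points
// under the mouse at the start of the drag and now (the same vectors the
// question computes, but normalized).
void applyDrag(const glm::vec3& beginDir, const glm::vec3& currDir)
{
    glm::vec3 axis = glm::cross(beginDir, currDir);
    float angle = std::acos(glm::clamp(glm::dot(beginDir, currDir), -1.0f, 1.0f));
    if (glm::length(axis) > 1e-6f) {
        // rotation taking beginDir onto currDir, composed with what we had
        accumulated = glm::angleAxis(angle, glm::normalize(axis)) * accumulated;
    }
}

void loadModelMatrix()
{
    glm::mat4 m = glm::mat4_cast(accumulated);
    glMultMatrixf(glm::value_ptr(m)); // e.g. after glTranslatef(0, 0, -250)
}
During a drag you would either apply the delta between consecutive mouse positions, or compose the drag rotation with the orientation saved at mouse-down, so the accumulated rotation is not applied more than once per movement.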

Mirroring the Y axis in SFML

Hey, so I'm integrating Box2D and SFML, and Box2D doesn't use the same odd, mirrored Y-axis coordinate system as SFML, meaning everything is rendered upside down. Is there some kind of function or short amount of code I can put in that simply mirrors the window's render contents?
I'm thinking I can put something in sf::View to help with this...
How can I easily flip the Y-axis for rendering purposes, without affecting the bodies' dimensions/locations?
I don't know what Box2D is, but when I wanted to flip the Y axis using OpenGL, I just applied a negative scaling factor to the projection matrix, like:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glScalef(1.0f, -1.0f, 1.0f);
If you want to do it independently of OpenGL, simply apply an sf::View whose size has a negative height.
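A small sketch of that idea, assuming SFML 2; the negative height in the view rectangle is what flips the axis (this is an assumption about the intended trick, not code from the original answer):
sf::Vector2u winSize = window.getSize();
// A view whose rectangle has a negative height maps world y = 0 to the
// bottom of the window, flipping the Y axis for everything drawn with it.
sf::View flipped(sf::FloatRect(0.f, static_cast<float>(winSize.y),
                               static_cast<float>(winSize.x),
                               -static_cast<float>(winSize.y)));
window.setView(flipped);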
It sounds like your model uses a conventional coordinate system (positive y points up), and you need to translate that to the screen coordinate system (positive y points down).
When copying model/Box2D position data to any sf::Drawable, manually transform between the model and screen coordinate systems:
b2Vec2 position = body->GetPosition();
sprite.SetPosition( position.x, window.GetHeight() - position.y );
You can hide this in a wrapper class or function, but it needs to sit between the model and renderer as a pre-render transform. I don't see a place to set that in SFML.
I think Box2D has the coordinate system you want; just set the gravity vector based on your model (0, -10) instead of the screen.
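For instance (a tiny illustration, assuming Box2D 2.2+, where the world takes its gravity vector at construction):
b2Vec2 gravity(0.0f, -10.0f); // model space is y-up, so gravity points down
b2World world(gravity);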
How can I easily flip the Y-axis for rendering purposes, without affecting the bodies' dimensions/locations?
By properly applying transforms. First, you can apply a transform that sets the window's bottom-left corner as the origin. Then, scale the Y axis by a factor of -1 to flip it as the second transform.
For this, you can use sf::Transformable to specify each transformation individually (i.e., the setting of the origin and the scaling) and then – by calling sf::Transformable::getTransform() – obtain an sf::Transform object that corresponds to the composed transform.
Finally, when rendering the corresponding object, pass this transform object to the sf::RenderTarget::draw() member function as its second argument. An sf::Transform object implicitly converts to a sf::RenderStates which is the second parameter type of the corresponding sf::RenderTarget::draw() overload.
As an example:
#include <SFML/Graphics.hpp>

auto main() -> int {
    auto const width = 300, height = 300;
    sf::RenderWindow win(sf::VideoMode(width, height), "Transformation");
    win.setFramerateLimit(60);

    // create the composed transform object
    const sf::Transform transform = [height]{
        sf::Transformable transformation;
        transformation.setOrigin(0, height); // 1st transform
        transformation.setScale(1.f, -1.f);  // 2nd transform
        return transformation.getTransform();
    }();

    sf::RectangleShape rect({30, 30});

    while (win.isOpen()) {
        sf::Event event;
        while (win.pollEvent(event))
            if (event.type == sf::Event::Closed)
                win.close();

        // update rectangle's position
        rect.move(0, 1);

        win.clear();

        rect.setFillColor(sf::Color::Blue);
        win.draw(rect); // no transformation applied

        rect.setFillColor(sf::Color::Red);
        win.draw(rect, transform); // transformation applied

        win.display();
    }
}
There is a single sf::RectangleShape object that is rendered twice with different colors:
Blue: no transform was applied.
Red: the composed transform was applied.
They move in opposite directions as a result of flipping the Y axis.
Note that the object space position coordinates remain the same. Both rendered rectangles correspond to the same object, i.e., there is just a single sf::RectangleShape object, rect – only the color is changed. The object space position is rect.getPosition().
What is different for these two rendered rectangles is the coordinate reference system. Therefore, the absolute space position coordinates of these two rendered rectangles also differ.
You can use this approach in a scene tree. In such a tree, the transforms are applied in a top-down manner from the parents to their children, starting from the root. The net effect is that children's coordinates are relative to their parent's absolute position.