Making object stay on screen OpenGL SFML

I'm drawing a triangle in OpenGL and you can move it up, down, left, and right. I'm using SFML as my windowing framework. I want to know how I can keep my triangle inside the window and not have it move outside of it, i.e. if it goes all the way to the top I want it to stop before it goes past the height.

That largely depends on your projection matrix. You need to obtain its high/low bounds (if you use a perspective projection they will depend on the Z-distance; with an orthogonal matrix it's easier, as Z is squashed) and then check the object's position against them - if the object would go out of bounds, forbid the movement.
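For example, here's a minimal sketch of that check, assuming an orthographic projection that maps 1:1 to the window (e.g. glOrtho(0, width, 0, height, -1, 1)) and a triangle drawn around a centre point; triX, triY and halfSize are placeholder names:
#include <algorithm>

// Clamp the triangle's centre so the whole triangle stays inside the window.
void moveTriangle(float& triX, float& triY,
                  float dx, float dy,
                  float winWidth, float winHeight,
                  float halfSize)
{
    triX += dx;
    triY += dy;
    triX = std::max(halfSize, std::min(triX, winWidth  - halfSize));
    triY = std::max(halfSize, std::min(triY, winHeight - halfSize));
}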

Related

How would I go about moving in a 3D environment, openGL c++

I'm not quite sure on how I should be making things move using openGL.
Am I supposed to be moving the camera's position around the 3D world, or moving/translating the objects around the camera?
I read online that the camera should stay at the origin and everything else should move around the camera, but wouldn't that be an intensive operation? Like if I have 1000 objects and I'm moving, we'd have to move all of these objects. Would it not be easier to move the camera and keep the world objects where they are?
The way OpenGL works is, conceptually, the camera is always at the origin, with the Y axis up and looking down the negative Z axis. If you want to move or rotate the camera, you actually move everything else the opposite way.
This is opposed to Direct3D for example, where you have a separate camera matrix.
It's a minor detail though because mathematically speaking they're exactly the same. Whether you move everything forward or the camera back, it's exactly the same end result. You could even argue that having only one matrix as opposed to lugging around two and multiplying them is a performance gain, but it's extremely minor and usually you'll separate your camera matrix from your world building matrix anyway.
In OpenGL, the camera is always located at the eye-space coordinate (0, 0, 0). To give the appearance of moving the camera, your OpenGL application must move the scene with the inverse of the camera transformation.
You don't need to worry about moving/translating the objects in your scene yourself. The gluLookAt() function does it for you: it computes the inverse camera transform according to its parameters and multiplies it onto the current matrix stack.
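For illustration, a minimal fixed-function sketch; the eye and target values here are just made up:
#include <GL/glu.h>

void applyCamera()
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // gluLookAt() multiplies the inverse camera transform onto the current
    // matrix, so the whole scene ends up transformed the opposite way.
    gluLookAt(0.0, 2.0, 10.0,   // eye position
              0.0, 0.0,  0.0,   // point the camera looks at
              0.0, 1.0,  0.0);  // up vector
    // ...draw the scene in world coordinates here...
}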

How do I access a transformed OpenGL modelview matrix [tried glGetFloatv()]?

I am trying to rotate over the 'x' axis and save the transformed matrix so that I can use it to rotate further later; or over another axis from the already rotated perspective.
// rotate
glRotatef(yROT, model[0], model[4], model[8]); // front over right axis
// save model
glGetFloatv(GL_MODELVIEW_MATRIX, model);
Unfortunately I noticed that OpenGL must be buffering the transformations, because the identity matrix is what gets loaded into model. Is there a work-around?
Why, oh God, would you do this?
I have been toying around with attempting to understand quaternion, Euler, and axis rotations. The concepts are not difficult, but I have been having trouble with the math even after looking at examples. *edit* [and most of the open classes I have found are either not well documented for simpleton users or have restrictions on movement]
I decided to find a way to cheat.
edit*
By 'further later' I mean in the next loop of code. In other words, yRot is the number of degrees I want my view to rotate from the saved perspective.
My suggestion: don't bother with glRotate at all; it was never very pleasant to work with in the first place, and no serious program ever used it.
If you want to use the fixed-function pipeline (= no shaders), use glLoadMatrix to load whatever transformation you currently need. With shaders you have to do conceptually the same thing with glUniform anyway.
Use an existing matrix math library, like GLM, Eigen or linmath.h, to construct the transformation matrices. The nice benefit is that you can make copies of a matrix at any point, so instead of fiddling with glLoadIdentity, glPushMatrix and glPopMatrix you just make copies where you need them and work from those.
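A rough sketch of that approach, assuming GLM and the fixed-function pipeline (the rotation axis and function names here are illustrative, not the asker's exact setup):
#include <GL/gl.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

glm::mat4 model(1.0f);   // kept across frames, so rotations accumulate

void rotateFurther(float degrees)
{
    // glm::rotate post-multiplies, so the axis (1, 0, 0) is the model's own
    // local X axis, i.e. the rotation is applied "from the saved perspective".
    model = glm::rotate(model, glm::radians(degrees), glm::vec3(1.0f, 0.0f, 0.0f));

    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(glm::value_ptr(model));   // replaces glRotatef + glGetFloatv
}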
BTW: There is no such thing as "models" in OpenGL. That's not how OpenGL works. OpenGL draws points, lines or triangles, one at a time, where each such called primitive is transformed individually to a position on the (screen) framebuffer and turned into pixels. Once a primitive has been processed OpenGL already forgot about it.

why does the camera have to stay at the origin in opengl

So I'm learning OpenGL, and one thing I find very strange is that the camera has to stay at the origin and look in the same direction. To achieve camera movement and rotation you have to move and rotate the entire world instead of the camera.
My question is, why can't you move the camera? Does directx allow you to move the camera?
This is an interesting question. I think the answer depends on what you actually mean when you talk about a fixed camera.
As a matter of fact, instead of saying OpenGL has a fixed camera, I'd rather say there isn't any camera at all in OpenGL.
On the other hand, I wouldn't agree with your interpretation that the OpenGL API moves or rotates the world.
Instead, I'd say the OpenGL API doesn't move or rotate the world at all.
I think the reason there isn't any concept of a camera in the OpenGL API is that it isn't meant as a high-level abstraction layer, but is rather tied to the computational necessities of displaying computer graphics.
Since you're mainly talking about displaying 3-dimensional scenes, this means transforming 3D vertex data into a 2D raster image.
For every frame rendered, this involves transforming the 3D coordinates of every vertex in your scene to their corresponding 2D location on the screen.
As every vertex has to be placed at the right position on screen, it doesn't make any computational difference whether you conceptually move something like a camera around or just move the whole world; you'll have to do the same transformation nonetheless.
The mathematics involved in computing the "right" position for a vertex on screen can be described by a mathematical object called a matrix that, when applied to the 3D data (the mathematical term for this application is matrix multiplication), results in the desired 2D screen coordinates.
So essentially what happens in rendering a 3D scene - regardless of whether there is any camera at all - is that every vertex is processed by some transformation matrix, leaving the original 3D data of your vertex intact.
As the 3D vertex data doesn't get changed at all, I'd say OpenGL doesn't move or rotate the world at all - but this "observation" may depend on the observer's perspective.
As a matter of fact, leaving the 3D vertex data intact, without changing it at all, is essential to prevent your 3D scene from deforming due to accumulated rounding inaccuracy.
I hope I could help by expressing my opinion on who or what moves whom, when, or why in the OpenGL API.
Even if I couldn't convince you that there is no world-moving involved in using the OpenGL API, don't forget that the world doesn't weigh anything at all, so moving it around shouldn't be too painful.
BTW: don't bother investigating the proprietary library mentioned in your question; keep relying on open standards.
What's the difference between moving the world and moving the camera? Mathematically... there isn't any; it's the same number either way. It's all a matter of perspective. As long as you code your camera abstraction correctly, you don't have to think of it as moving the world if you don't want to.
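A tiny sketch of that equivalence, assuming GLM (just one option; the values are illustrative):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// The "camera" is just a world transform; the view matrix is its inverse.
glm::mat4 cameraWorld = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, 5.0f));
glm::mat4 view = glm::inverse(cameraWorld);   // same as translating the world by (0, 0, -5)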

2D platform game camera with OpenGl and C++

I am trying to make a 2D platform game using OpenGL and glut with C++. You can move your player around with the left and right arrow keys and jump with space. I have all the platforms loaded into the game through a text file and drawn to the screen. This all works really well. The problem I am having is with the camera. When the right arrow key is pressed, I have the player's x position increase. The problem is that when this happens I cannot get the actual camera to move as well. This makes me think that instead of moving the player, I should use glTranslatef to translate all the platforms to the left. This seems a bit odd to me and I am just wondering if this is how it should be done. So I guess the final question is: should I translate the entire scene or move the player?
Move the camera to follow the player by calling glTranslate with the opposite of the player's movement.
You should think of the camera like an in game object similar to the player and other movable items and the level as a static object with a fixed position. This makes placing in-game items and other things much easier.
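A minimal fixed-function sketch of that idea; cameraX/cameraY, drawPlatforms() and drawPlayer() are placeholders for your own game state and drawing code:
#include <GL/gl.h>

void drawPlatforms();   // hypothetical: draws the level in world coordinates
void drawPlayer();      // hypothetical: draws the player in world coordinates

void drawScene(float cameraX, float cameraY)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(-cameraX, -cameraY, 0.0f);   // camera moves right => world moves left

    drawPlatforms();
    drawPlayer();
}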
Actually, when you "move the camera" in OpenGL, since there is actually no camera, what is done internally is exactly that: moving everything in the scene in the opposite direction.
As for the solution, if you're using GLU (which usually comes along with glut), you can use
gluLookAt(x, y, z,
          ex, ey, ez,
          0, 1, 0);
where (x, y, z) is the position where you want the camera to be, (ex, ey, ez) is the point you want the camera to look at, and (0, 1, 0) is the up vector. This function does all the matrix transformations necessary. More info here
If you're not using GLU but only raw OpenGL, the same link also explains the equivalent OpenGL calls you have to use to achieve exactly the same effect as gluLookAt.
So when you want to move the camera (probably only right and left, since it's a platform game), you only have to change the x value.
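For example, a usage sketch for a side-scrolling camera that locks onto the player (playerX and cameraHeight are placeholders for your own game state):
#include <GL/glu.h>

void followPlayer(double playerX, double cameraHeight)
{
    gluLookAt(playerX, cameraHeight, 10.0,   // camera position
              playerX, cameraHeight,  0.0,   // point straight ahead of the camera
              0.0, 1.0, 0.0);                // up vector
}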
I got this working using the following:
gl.PushMatrix()
gl.Translatef((1280/2)-float32(player.Body.Position().X), 0, 0.0)
// Rest of drawing code
gl.PopMatrix()
This is using OpenGL + Chipmunk Physics (in Go, but it should apply generally since these are just C bindings). 1280 in this case is the screen width.

Ways to implement manipulation handles in 3d view

I'm building a simple solid modeling application. Users need to be able to manipulate object in both orthogonal and perspective views. For example, when there's a box in the screen and the user clicks on it to select it, it needs to get 'handles' at the corners and in the center so that the user can move the mouse over such a handle and drag it to enlarge or move the box.
What strategies are there to do this, and which one is the best one? I can think of two obvious ones:
1) Treat the handles as 3d objects. I.e. for a box, add small boxes to the scene at the corners of the 'main' box. Problems: this won't work in perspective view; I'd need to determine the size of the boxes relative to the current zoom level (the handles need to have the same size no matter how far the user is zoomed in/out).
2) Add the handles after the scene has been rendered. Render to an offscreen buffer, determine the 2d locations of the corners somehow and use regular 2d drawing techniques to draw the handles. Problems: how will I do hittesting? I'd need to do a two-stage hittesting approach, as well; how do I draw in 2d on a 3d rendered image? Fall back to GDI?
There are probably more problems with both approaches. Is there an industry-standard way of tackling this problem?
I'm using OpenGL, if that makes a difference.
I would treat the handles as 3D objects. This provides many advantages - it's more consistent, they behave well, hit testing is easy, etc.
If you want the handles to be a constant size, you can still treat them as 3D objects, but you will have to scale their size as appropriate based on the distance to the camera. This is a bit of a hassle, but since there are typically only a few handles, and these are usually small objects, it should be fine performance-wise.
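A rough sketch of that scaling, assuming a perspective projection; pixelsPerUnitAtDistanceOne is a tuning constant you'd derive from your own projection, not a real API value:
#include <glm/glm.hpp>

// World-space scale that keeps a handle roughly desiredScreenSize pixels big.
float handleScale(const glm::vec3& handlePos,
                  const glm::vec3& cameraPos,
                  float desiredScreenSize,
                  float pixelsPerUnitAtDistanceOne)
{
    float distance = glm::length(handlePos - cameraPos);
    return desiredScreenSize * distance / pixelsPerUnitAtDistanceOne;
}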
However, I'd actually say let the handles scale with the scene. As long as you pick a rendering style for the handles that makes them stand out (i.e. bright orange boxes, etc.), the perspective effects (smaller handles in the background) actually make working with them easier for the end-user in many ways. It is difficult to get a sense of depth from a 3D scene - the perspective effects on the handles help provide more visual clues as to how "deep" the handle is into the screen.
First off, project the handle/corner coordinates onto the camera's plane (effectively converting them to 2D coordinates on the screen; normalize these against the screen dimensions).
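For instance, a small sketch using gluProject with the current matrices (this is just one way to do the projection):
#include <GL/glu.h>

// Project a world-space point to window (pixel) coordinates.
bool projectToScreen(double x, double y, double z, double& winX, double& winY)
{
    GLdouble modelview[16], projection[16];
    GLint    viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);
    glGetIntegerv(GL_VIEWPORT, viewport);

    GLdouble winZ;   // window-space depth, handy for resolving overlapping handles
    return gluProject(x, y, z, modelview, projection, viewport,
                      &winX, &winY, &winZ) == GL_TRUE;
}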
Here's some simple code to enable orthogonal/2D-overlay drawing:
void enable2D()
{
    // Save the current projection matrix and replace it with a
    // screen-space orthographic projection.
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();

    int wind[4];
    glGetIntegerv(GL_VIEWPORT, wind);        // wind[2], wind[3] = viewport width, height
    glOrtho(0, wind[2], 0, wind[3], -1, 1);

    // Save the current modelview matrix and reset it to identity.
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
}

void disable2D()
{
    // Restore the projection and modelview matrices saved by enable2D().
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}
enable2D() caches the current modelview/projection matrices and replaces the projection matrix with one normalized to the screen (i.e. the width/height of the screen) and restores the identity matrix for modelview.
After making this call, you can make glVertex2f() calls using screen/pixel coordinates, allowing you to draw in 2D! (This will also allow you to hit-test since you can easily get the mouse's current pixel coords.)
When you're done, call disable2D to restore your old modelview/projection matrices :)
The hardest part is computing where the hitboxes fall on the 2D plane and dealing with overlaying (if two project to the same place, which to select on click?)
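One way to handle both, sketched below with placeholder types: test the mouse against each handle's projected position and, when several overlap, pick the one with the smallest window-space depth (remember to flip the mouse's y coordinate first if your windowing system has y pointing down):
#include <cmath>
#include <vector>

struct ProjectedHandle {
    int    id;
    double winX, winY, winZ;   // from gluProject; winZ in [0, 1], smaller = closer
};

int pickHandle(const std::vector<ProjectedHandle>& handles,
               double mouseX, double mouseY, double radiusPixels)
{
    int    best      = -1;
    double bestDepth = 2.0;    // larger than any valid winZ
    for (const ProjectedHandle& h : handles) {
        bool hit = std::abs(h.winX - mouseX) <= radiusPixels &&
                   std::abs(h.winY - mouseY) <= radiusPixels;
        if (hit && h.winZ < bestDepth) {
            bestDepth = h.winZ;
            best      = h.id;
        }
    }
    return best;   // -1 means no handle was hit
}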
Hope this helped :)
I've coded up a manipulator with handles for a 3d editing package, and ran into a lot of these same issues.
First, there's an open source manipulator. I couldn't find it in my most recent search, probably because there's a plethora of names for these things - 3d widgets, gizmos, manipulators, gimbals, etc.
Anyhow, the way I did it was to add a manipulator object to the scene that, when drawn, draws all of the handles. It does the same thing for bounding box computation, and selection.
Reed's idea about keeping them the same size is interesting for handles that exist on objects, and might work there. For a manipulator, I found that it was more of a 3D UI element, and it was much more usable if it did not change size. I had a bug where the size was only determined based on the active viewport, which resulted in horribly huge or tiny manipulators in the other viewports, making them useless. If you're going to add them to the scene, you might want to add them per-viewport, or make them actually have a fixed size.
I know the question is really old. But just in case someone needs it:
Interactive Techniques in Three-dimensional Scenes (Part 1): Moving 3D Objects with the Mouse using OpenGL 2.1
Article is good and has an interesting link section at the bottom.