How gluOrtho2D Works - opengl

glViewport(0, 0, w, h);
gluOrtho2D(10, 100, 10, 100);
I am not getting any output, can you help me?
If I set it to
gluOrtho2D(-50, 50,-50, 50);
then the object is created at the center of the window.

gluOrtho2D(left, right, bottom, top) defines a 2D orthographic view box. That means there is no perspective.
With
gluOrtho2D(-50,50,-50,50);
you say (assuming you have no additional modelview matrix):
The viewport's left edge is located at world position x = -50
The viewport's right edge is located at world position x = +50
The viewport's bottom edge is located at world position y = -50
The viewport's top edge is located at world position y = +50
If your object then appears at the screen's center, it is probably located at the world's origin.
If you specify
gluOrtho2D(10,100,10,100);
then you can see an x- and y-range of 10 to 100. The origin is not within this range and is therefore not visible (it lies beyond the left and bottom edges of the viewport).
Note the argument order: the GLU signature is gluOrtho2D(left, right, bottom, top), i.e. the y arguments are given bottom first, then top. Passing them as (top, bottom) instead flips the y axis.
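To see the effect concretely, here is a minimal sketch, assuming a double-buffered GLUT window (window creation and callback registration omitted). A point drawn at the world origin is only visible with the -50..50 box:

#include <GL/glu.h>
#include <GL/glut.h>

void reshape(int w, int h)
{
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(-50, 50, -50, 50);    // origin (0, 0) maps to the window center
    // gluOrtho2D(10, 100, 10, 100); // origin would lie off-screen (left of and below the view)
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glPointSize(10);
    glBegin(GL_POINTS);
    glVertex2f(0, 0);                // appears at the window center with the -50..50 box
    glEnd();
    glutSwapBuffers();
}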

Why is the screen space coordinate system for my sfml rendered app inverted?

I am learning C++ and I thought I'd make the original Asteroids game with a fresh coat of paint using the SFML graphics library. However, for my player sprite, while the origin is at the top left corner of the screen, to the right of it is the negative x axis and downwards is the negative y axis (the opposite of what it's supposed to be in both cases). Also, no matter the object or rotation, invoking the setRotation function always rotates the object about the top left corner of the screen, even if I have set that object's origin to its center.
#include <SFML/Graphics.hpp>
using namespace sf;

const int W{ 1200 }, H{ 800 };
const float degToRad = 0.017453f;

int main() {
    float x{ -600 }, y{ -400 };
    float dx{}, dy{}, angle{};
    bool thrust;

    RenderWindow app(VideoMode(W, H), "Asteroids!");
    app.setFramerateLimit(60);

    Texture t1, t2;
    t1.loadFromFile("images/spaceship.png");
    t2.loadFromFile("images/background.jpg");

    Sprite sPlayer(t1), sBackground(t2);
    sPlayer.setTextureRect(IntRect(40, 0, 40, 40));
    sPlayer.setOrigin(-600, -400);

    while (app.isOpen())
    {
        app.clear();
        app.draw(sPlayer);
        app.display();
    }
    return 0;
}
The above code draws the player (spaceship.png) at the center of my rendered window (app), but notice how I have had to put in negative coordinates. Also, if I further put in the code for taking keyboard input and call the setRotation function, instead of rotating my sPlayer sprite about its center (i.e. (-600, -400)), it rotates the sprite about the top left corner of the screen, which is (0, 0). I can't find any explanation for this in the SFML online documentation. What should I do?
As I mentioned I have tried reading the documentation. I've watched online tutorials but to no avail.
Origin is the point on the sprite where you "hold" it.
Position is the point on the screen where you put the sprite's Origin.
In short, you take your sprite by its Origin and place it so the Origin lies on the Position.
By default, both Origin and Position are (0, 0), so the top left of your sprite is put at the top left of the screen. What you did was say "take this point on the sprite, which is way to the upper-left of the sprite's actual visible part, and put it at the top left of the screen". This had the effect of moving your sprite to the bottom right.
You probably want something like:
// This will make sure that the Origin, i.e. the point that serves as the
// center of rotation and other transformations, is at the center of the ship
sPlayer.setOrigin(sprite_width / 2, sprite_height / 2);
// This will put the Origin (the center of the ship) at the center of the screen
sPlayer.setPosition(screen_width / 2, screen_height / 2);
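Applied to the code from the question (a 40x40 texture rect, a 1200x800 window), that would look roughly like this:

sPlayer.setTextureRect(IntRect(40, 0, 40, 40));
sPlayer.setOrigin(20.f, 20.f);          // the sprite's local center (half of 40x40)
sPlayer.setPosition(W / 2.f, H / 2.f);  // place that center at the window center
sPlayer.setRotation(angle);             // now rotates in place, not about (0, 0)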

OpenGL - get mouse position co-ordinates

I am making a 2D board game. The game board grid is 8x8 and each cell of the grid is an object, so a board consists of 64 cell objects. My aim is to work out which cell the mouse is in. I am attempting this by tracking the mouse coordinates and comparing them to the grid coordinates.
My coordinate system is as follows:
gluOrtho2D(-4,4,-4,4);
I am trying to get the current mouse position by using the following in my update function:
POINT p;
if (GetCursorPos(&p))
{
    ScreenToClient(hWnd, &p);
}
However, although this tracks the coordinates of the mouse, it does not track the world coordinates that I set up with gluOrtho2D. How can I achieve this?
It depends on your glViewport.
Let's say you have:
glViewport(0,0, 640, 640);
The mouse position is (mousePos.x, mousePos.y) and the world position you want to know is (world.x, world.y).
Given that the top/left corner of your screen is the (0, 0) coordinate,
we can then compute the following:
world.x = -4.0 + (mousePos.x / 640.0) * (4*2)
world.y = 4.0 - (mousePos.y / 640.0) * (4*2)
What we are doing here is a linear interpolation: we take the normalized position of the mouse within the screen (mousePos.x / 640.0) and multiply it by the width of the world (4*2).
Given that the top/left corner of the grid starts at (-4, 4), we add the world's offset to get the world position.
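The same interpolation can be wrapped in a small helper. A sketch, assuming the gluOrtho2D(-4, 4, -4, 4) box from the question and a viewport anchored at (0, 0); it queries the current viewport size instead of hard-coding 640:

void MouseToWorld(int mouseX, int mouseY, double& worldX, double& worldY)
{
    GLint view[4]; // x, y, width, height
    glGetIntegerv(GL_VIEWPORT, view);

    const double left = -4.0, right = 4.0, bottom = -4.0, top = 4.0;

    // Normalize the mouse position to [0, 1], then interpolate across the box.
    worldX = left + (mouseX / (double)view[2]) * (right - left);
    // Flip y: window y grows downward, world y grows upward.
    worldY = top - (mouseY / (double)view[3]) * (top - bottom);
}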

Moving origin co-ordinates from bottom left to center of screen

The origin, i.e. the X and Y (0, 0) co-ordinates, starts at the bottom left of the screen (portrait mode).
Is there a way I can move the origin (0, 0) to the center of the screen,
so that I can tell whether my sprite is on the positive or negative side of both the X and Y axes?
Or is there any other logic that could be used to know whether the sprite is on the left or the right side of the screen?
Cocos2d works with a tree of nodes; the position of each subnode is relative to its parent.
This means that if you add a middle node between your layer and everything else, you can easily obtain the desired behavior. For example:
Node* mainNode = Node::create();
mainNode->setPosition(Point(WIDTH/2, HEIGHT/2));
layer->addChild(mainNode);
// this will now place the sprite in the middle of the viewport
Node* sprite = ...
sprite->setPosition(Point::ZERO);
mainNode->addChild(sprite);
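With the centered mainNode in place, a child's position is already expressed relative to the screen center, so telling left from right reduces to a sign check. A sketch, assuming the setup above:

Point pos = sprite->getPosition();
bool onLeftHalf   = pos.x < 0; // negative x: left of the screen center
bool onBottomHalf = pos.y < 0; // negative y: below the screen center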

Blit a rectangle at a coordinate

In Pygame and Python 2.7, how can I blit a rectangle at a certain point represented by a set of coordinates?
I know I can use this:
screen.blit(img.image, img.rect.topleft)
But I want the rectangle to be at a precise point on the screen.
If you need the topleft corner of the rectangle at point (34, 57), you can do
screen.blit(img.image, (34, 57))
or this
img.rect.topleft = (34,57)
screen.blit(img.image, img.rect)
or this
img.rect.x = 34
img.rect.y = 57
screen.blit(img.image, img.rect)
If you need the center of the rectangle at point (34, 57):
img.rect.center = (34,57)
screen.blit(img.image, img.rect)
If you need the rectangle at the center of the screen
(especially useful if you need to show text (e.g. "PAUSE") in the center of the screen, or text in the center of a rectangle to create a button):
img.rect.center = screen.get_rect().center
screen.blit(img.image, img.rect)
If you need the rectangle touching the right border of the screen:
img.rect.right = screen.get_rect().right
screen.blit(img.image, img.rect)
If you need the rectangle in the bottom left corner of the screen:
img.rect.bottomleft = screen.get_rect().bottomleft
screen.blit(img.image, img.rect)
And there are more - see pygame.Rect:
x,y
top, left, bottom, right
topleft, bottomleft, topright, bottomright
midtop, midleft, midbottom, midright
center, centerx, centery
Setting any of the above attributes doesn't change the width and height.
If you change x (or any other value) then left, right and the others are automatically recalculated.
BTW: As you can see, you can use img.rect as the second argument to blit().
BTW: You can also do this (for example in __init__):
img.rect = img.image.get_rect(center=screen.get_rect().center)
to center the object on the screen.
BTW: You can use the same approach to blit an image/Surface onto another Surface at a precise point. For example, you can put text in the center of some surface (e.g. a button) and then put that surface in the bottomright corner of the screen.
From your code:
screen.blit(img.image, img.rect.topleft)
will put the image at (0, 0), because a rect obtained via get_rect() is positioned at (0, 0) until you move it. If you want to draw at a specific coordinate, simply do this:
screen.blit(image, (x, y)) # x and y are the respective position coordinates

World-Coordinate Issues with gluUnProject()

I'm currently calling Trace (method below) from a game loop. Right now all I'm trying to do is get the world coordinates from the screen mouse position, so I can move objects around in world space. The values I'm getting from gluUnProject are, however, puzzling me.
I was using glReadPixels(...) to get the Z value, but that produced little to no movement in the object I was drawing, and the resulting vector ended up being the same as my camera's location (except for the tiny decimal changes due to mouse movement), so I decided to get rid of the call and replace the Z value with 1.
My question is: does the following code look right to you? Every example I've seen thus far is either identical or very similar, but I can't seem to produce correct results, even if I lock down the Y axis. If the code is correct, then I'm guessing that I'm just not using the resulting vector properly. Should I not be able to draw an object or point directly with the resulting vector, or do I have to do something else with it, like normalize it?
The current render mode is GL_RENDER and I am using glFrustum with a NearZ value of 1 and a FarZ value of 2048 to create a perspective projection. There is also a series of viewports created along with scissors, each 512x384 and positioned in a corner of a 1024x768 window. Trace(...) is called in between rendering of the upper left viewport and is the only perspective projection, while the other viewports are orthographic. FOV is set to 45.
void VideoWindow::Trace(int cursorX, int cursorY)
{
    double objX, objY, objZ; // holder for world coordinates
    GLint view[4];           // viewport dimensions + pos
    GLdouble p[16];          // projection matrix
    GLdouble m[16];          // modelview matrix
    GLdouble z;              // Z-buffer value?

    glGetDoublev(GL_MODELVIEW_MATRIX, m);
    glGetDoublev(GL_PROJECTION_MATRIX, p);
    glGetIntegerv(GL_VIEWPORT, view);

    // view[3]-cursorY = conversion from upper left (0,0) to lower left (0,0)
    // Unproject 2D screen coordinates into wonderful world coordinates
    gluUnProject(cursorX, view[3]-cursorY, 1, m, p, view, &objX, &objY, &objZ);

    // Do something useful here???
}
Any ideas?
Edit: I've changed the winZ value to 0.5 instead of 1, which gives a vector that's more reasonable, but a drawn point still wasn't matching the mouse. I found out that the value of view[3] was 384, which is correct for the viewport I'm using, but I replaced it with 768 (the actual window height) and the point followed the mouse 100%. Further experimentation reveals that I can't use these coordinates to move a 3D object around in the perspective world space; moving a 3D object around in orthographic space works fine, however.
The winz argument to gluUnProject specifies the depth from the camera at which you're "picking" your points; this coordinate should be in the [0, 1] range.
Some tutorials, like NeHe's, read the z coordinate out of the depth buffer so that you "pick" at the right depth; of course, for this to work you'll have to call gluUnProject after you've rendered everything else.
Regardless, if you set winz to 0.5 or something (not 0 or 1, or the point will end up on the near or far clip plane and may be culled) and do the following:
gluUnProject(cursorX, view[3]-cursorY, 0.5, m, p, view, &objX, &objY, &objZ);
//Do something useful here???
glPointSize(10);
glBegin(GL_POINTS);
glColor3f(1, 0, 0);
glVertex3f(objX, objY, objZ);
glEnd();
You should end up with a red blob at the mouse pointer (provided nothing else overdraws it afterwards and you don't have any funny render states that make the point invisible).
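Putting the depth-buffer read together with the unprojection, here is a sketch of what the NeHe-style version might look like. It assumes it runs after the scene has been rendered (so the depth buffer holds real values) and that cursorX/cursorY are measured in the same space as the viewport reported by GL_VIEWPORT:

void TraceAtDepth(int cursorX, int cursorY)
{
    GLdouble m[16], p[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, m);
    glGetDoublev(GL_PROJECTION_MATRIX, p);
    glGetIntegerv(GL_VIEWPORT, view);

    GLint winX = cursorX;
    GLint winY = view[3] - cursorY; // flip to a lower-left (0,0) origin

    // Read the depth actually rendered under the cursor.
    GLfloat winZ;
    glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

    GLdouble objX, objY, objZ;
    gluUnProject(winX, winY, winZ, m, p, view, &objX, &objY, &objZ);
    // (objX, objY, objZ) is now the world-space point under the cursor.
}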
Just a thought: if the third argument to gluUnProject is the z distance to the camera, wouldn't any point you draw at that location end up on the near clipping plane of your frustum?
Better make that z value a bit higher.