OpenGL: Convert ClipCoord to ScreenCoord - C++

I want to convert clip coordinates to screen coordinates, but I don't know the right way to get the canvas width and height.
(Canvas = the drawable area in the image plane.)
glm::vec4 clipCoords = glm::vec4(0.1f, 0.3f, 1.0f, 1.0f); // random point (w = 1 here, so no perspective divide is needed)
float canvasWidth = 2.0f;
float canvasHeight = 2.0f;
GLfloat ndcX = (clipCoords.x + canvasWidth / 2.0f) / canvasWidth;
GLfloat ndcY = (clipCoords.y + canvasHeight / 2.0f) / canvasHeight;
GLint pixelX = ndcX * SCREEN_WIDTH;
GLint pixelY = (1 - ndcY) * SCREEN_HEIGHT;
In OpenGL, the canvas is the near plane of the perspective projection.
I found an old thread with the same question, so I have the answer now.

So you need a function that takes a point, converts it to a glm::vec2 by applying the camera matrix multiplication (and the perspective divide), and then maps it to screen space by value mapping; see the sketch below.
A helpful link might be this
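As a minimal sketch of that mapping (assuming the point is already in clip space and that SCREEN_WIDTH / SCREEN_HEIGHT are the framebuffer dimensions; the function name is illustrative):
#include <glm/glm.hpp>
// clip space -> pixel coordinates (origin at the top-left, as in the question)
glm::ivec2 clipToScreen(const glm::vec4& clip, int screenWidth, int screenHeight)
{
    // perspective divide: clip space -> normalized device coordinates in [-1, 1]
    glm::vec3 ndc = glm::vec3(clip) / clip.w;
    // viewport transform: NDC -> pixels, flipping y so +y points down
    int pixelX = static_cast<int>((ndc.x + 1.0f) * 0.5f * screenWidth);
    int pixelY = static_cast<int>((1.0f - (ndc.y + 1.0f) * 0.5f) * screenHeight);
    return glm::ivec2(pixelX, pixelY);
}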

It depends on the library you are using for creating the window with the OpenGL context. E.g. if you use GLFW (a great library for window creation, btw), you get the framebuffer size with glfwGetFramebufferSize(GLFWwindow* window, int* width, int* height), like so:
int width;
int height;
glfwGetFramebufferSize(window, &width, &height);
But this is only the framebuffer size; if you want the viewport, which determines how many pixels OpenGL will write into the framebuffer, glGetIntegerv(GLenum pname, GLint* data) is what you are looking for:
GLint viewport[4];
glGetIntegerv(GL_VIEWPORT, viewport);
int x = viewport[0];
int y = viewport[1];
int width = viewport[2];
int height = viewport[3];
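With those values you can then redo the question's NDC-to-pixel mapping against the actual viewport instead of hard-coded screen constants (a sketch reusing ndcX / ndcY from the question):
GLint pixelX = x + static_cast<GLint>(ndcX * width);
GLint pixelY = y + static_cast<GLint>((1.0f - ndcY) * height);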

Related

OpenGL: Move 2D Orthographic Camera with Mouse

I'm making a level editor for my game with OpenGL in C++. I'm trying to make an editor camera just like the Unity Engine 2D scene camera, but I have an issue when I try to implement mouse movement for the camera (camera panning). I'm converting the mouse position from screen to world space.
ScreenToWorldSpace Method:
Vector3 Application::ScreenToWorldSpace(int mousex, int mousey)
{
    double x = 2.0 * mousex / viewportWidth - 1;
    double y = 2.0 * mousey / viewportHeight - 1;

    Vector4 screenPos = Vector4(x, -y, -1.0f, 1.0f);
    Matrix4 ProjectionViewMatrix = camera1->GetProjectionMatrix() * camera1->GetViewMatrix();
    Matrix4 InverseProjectionViewMatrix = glm::inverse(ProjectionViewMatrix);
    Vector4 worldPos = InverseProjectionViewMatrix * screenPos;

    return Vector3(worldPos);
}
The above method works correctly.
But I'm using ScreenToWorldSpace coordinates to update camera position.
Render Method:
void Application::Render(float deltaTime)
{
    Vector3 pos = ScreenToWorldSpace(mousePosition.x, mousePosition.y);

    // This is the position of a tile, not the camera
    position = Vector3(0, 0, 0);
    Vector3 rotation = Vector3(0, 0, 0);
    Vector3 scale = Vector3(1);

    Matrix4 translationMatrix = glm::translate(Matrix4(1.0f), position);
    Matrix4 rotationMatrix = glm::eulerAngleYXZ(rotation.y, rotation.x, rotation.z);
    Matrix4 scaleMatrix = glm::scale(Matrix4(1.0f), scale);
    modelMatrix = translationMatrix * rotationMatrix * scaleMatrix;

    if (mouseButtonDown)
    {
        Console << pos.x << ", " << pos.y << Endl;
        camera1->position = Vector3(pos.x, pos.y, -10);
    }

    {
        glScissor(0, 0, 900, 600);
        glEnable(GL_SCISSOR_TEST);
        glClearColor(236 / 255.0f, 64 / 255.0f, 122 / 255.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glViewport(0, 0, 900, 600);

        basicShader->Use();
        dirt_grass_tex->Use();
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);

        camera1->SetZoom(zoomFactor);
        camera1->Update();

        Matrix4 mvp = camera1->GetProjectionMatrix() * camera1->GetViewMatrix() * modelMatrix;
        basicShader->SetUniformMat4("MVP", mvp);
        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);

        glDisable(GL_SCISSOR_TEST);
    }
}
Camera Class:
#include "camera.h"
Camera::Camera(int width, int height)
{
swidth = width;
sheight = height;
position = Vector3(0, 0, -10);
rotation = Vector3(0, 0, 0);
m_direction = Vector3(0, 0, -5);
m_up = Vector3(0, 1, 0);
m_right = Vector3(1, 0, 0);
m_offset = Vector3(-swidth / 2 * m_zoom, -sheight / 2 * m_zoom, 0);
m_projection = glm::ortho(0.0f * m_zoom, (float)swidth * m_zoom, 0.0f * m_zoom, (float)sheight * m_zoom, -1000.0f, 0.0f);
}
Camera::~Camera()
{
}
void Camera::Update()
{
Vector3 finalPos = position + m_offset;
m_up = glm::cross(m_right, m_direction);
m_viewMatrix = glm::lookAt(finalPos, finalPos + m_direction, m_up);
m_viewMatrix = glm::scale(m_viewMatrix, Vector3(100));
}
void Camera::SetZoom(float zoom)
{
m_zoom = zoom;
m_offset = Vector3(-swidth / 2 * m_zoom, -sheight / 2 * m_zoom, 0);
m_projection = glm::ortho(0.0f * m_zoom, (float)swidth * m_zoom, 0.0f * m_zoom, (float)sheight * m_zoom, -1000.0f, 0.0f);
}
The following is the output I get when I try to move the camera with the mouse position converted from screen to world space:
if (mouseButtonDown)
{
    Console << pos.x << ", " << pos.y << Endl;
    position = Vector3(pos.x, pos.y, 0);
}
But if I use the mouse position converted from screen to world space using the ScreenToWorldSpace method to move the object, the object moves perfectly. Have a look at the following gif:
Following is what I'm trying to achieve:
So I'm trying to make a game engine editor, and in it I want to implement an editor scene camera like the Unity / Unreal Engine scene camera. Following is the editor I'm currently working on:
I tried looking into different resources, but I'm clueless. Help me understand how to move the camera with the mouse.
What I think is happening:
Since I'm converting the mouse position from screen to world space using the camera's projection-view matrix, and then using those world coordinates to move the camera position, the problem feeds back on itself: whenever the camera moves, the projection-view matrix is updated, which in turn changes the mouse position relative to the view matrix, recursively.
I would appreciate some help.
Ordinarily, you wouldn't want to write the mouse position directly into the camera location (because that will be of limited use in practice - whenever you click on the screen, the camera would jump).
What you probably want to do is something along these lines:
Vector3 g_lastPosition;

void onMousePressed(int x, int y) {
    // record starting position!
    g_lastPosition = ScreenToWorldSpace(x, y);
}

void onMouseMove(int x, int y) {
    // find the difference between new position and last, in world space
    Vector3 new_pos = ScreenToWorldSpace(x, y);
    Vector3 offset = new_pos - g_lastPosition;
    g_lastPosition = new_pos;

    // now move camera by offset
    camera->position += offset;
}
If you are in an orthographic view, then really you don't need to worry about the projection matrix at all.
int g_lastX;
int g_lastY;

void onMousePressed(int x, int y) {
    // store mouse pos
    g_lastX = x;
    g_lastY = y;
}

void onMouseMove(int x, int y) {
    // find the difference between new position and last, in pixels
    int offsetX = x - g_lastX;
    int offsetY = y - g_lastY;

    // update mouse pos
    g_lastX = x;
    g_lastY = y;

    // get as ratio +/- 1
    float dx = ((float) offsetX) / swidth;
    float dy = ((float) offsetY) / sheight;

    // now move camera by offset (might need to multiply by 2 here?)
    camera->position.x += camera->m_offset.x * dx;
    camera->position.y += camera->m_offset.y * dy;
}
But in general, for any mouse based movement, you always want to be thinking in terms of adding an offset, rather than setting an exact position.

Drag object based on mouse selection - OpenGL

What would I need to do in order to select an object with the mouse in OpenGL? I found something called the selection buffer, but I also read somewhere that it is deprecated. So I'm stuck and do not know what to look for. Also, I'm using C++ to do this.
For 2D, here's the code I have working -- you'll have to modify it a bit, but hopefully it will give you some ideas. This code gives you the world coordinates at "0 height" -- if something doesn't have 0 height, this may not select it properly depending on perspective.
// for the current mouse position on the screen, where does that correspond to in the world?
glm::vec2 World::world_position_for_mouse(const glm::vec2 mouse_position,
                                          const glm::mat4 projection_matrix,
                                          const glm::mat4 view_matrix)
{
    int window_width;
    int window_height;
    this->graphics.get_window_dimensions(window_width, window_height);

    const int mouse_x = mouse_position[0];
    const int mouse_y = mouse_position[1];

    // normalize mouse position from window pixel space to between -1, 1
    GLfloat normalized_mouse_x = (2.0f * mouse_x) / window_width - 1.0f;
    GLfloat normalized_mouse_y = 1.0f - (2.0f * mouse_y) / window_height;

    glm::vec3 normalized_mouse_vector = glm::vec3(normalized_mouse_x, normalized_mouse_y, 1.0f);

    glm::vec4 ray_clip = glm::vec4(normalized_mouse_vector.xy(), -1.0, 1.0);
    glm::vec4 ray_eye = glm::inverse(projection_matrix) * ray_clip;
    ray_eye = glm::vec4(ray_eye.xy(), -1.0, 0.0);

    glm::vec3 ray_world = (glm::inverse(view_matrix) * ray_eye).xyz();
    ray_world = glm::normalize(ray_world);

    // intersect the ray with the z = 0 plane to get the world x/y under the mouse
    float l = -(camera.z / ray_world.z);
    return {camera.x + l * ray_world.x, camera.y + l * ray_world.y};
}
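Note: the .xy() / .xyz() swizzle calls in this snippet only compile when GLM's swizzle support is enabled (GLM_FORCE_SWIZZLE in current GLM, GLM_SWIZZLE in older versions) before the headers are included; otherwise, substitute the glm::vec2(...) / glm::vec3(...) constructors.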
To pan the world by the same "screen units" regardless of zoom, I use this code based on the results of the code above:
float camera_motion = time.get_wall_clock_delta() * camera_motion_per_second;
auto x1 = this->world_position_for_mouse(glm::vec2(1,0), this->cached_projection_matrix, this->cached_view_matrix).x;
auto x2 = this->world_position_for_mouse(glm::vec2(0,0), this->cached_projection_matrix, this->cached_view_matrix).x;
auto camera_change = (x1 - x2) * camera_motion;
where camera_motion is just a multiplier for how fast you want it to move, combined with the time delta from the previous frame. Basically, the further zoomed out you are, the faster this scrolls you per second. Whatever pixel is on the right edge of your window will take a constant amount of time to get to the left edge regardless of zoom.

Finding center of image for rotation in OpenGL

So I have this piece of code, which pretty much draws various 2D textures on the screen, though there are multiple sprites that have to be 'dissected' from the texture (a spritesheet). The problem is that the rotation is not working properly; while it rotates, it does not rotate about the center of the texture, which is what I am trying to do. I have narrowed it down to the translation being incorrect:
glTranslatef(x + sr->x/2 - sr->w/2,
             y + sr->y/2 - sr->h/2, 0);
glRotatef(ang, 0, 0, 1.f);
glTranslatef(-x + -sr->x/2 - -sr->w/2,
             -y + -sr->y/2 - -sr->h/2, 0);
X and Y are the position it's being drawn to; the sheet rect struct contains the X and Y position of the sprite being drawn from the texture, along with w and h, which are the width and height of the 'sprite' from the texture. I've tried various other formulas, such as:
glTranslatef(x, y, 0);
The three below, also with the negative signs switched to positive (x - y to x + y):
glTranslatef(sr->x/2 - sr->w/2, sr->y/2 - sr->h/2, 0);
glTranslatef(sr->x - sr->w/2, sr->y - sr->h/2, 0);
glTranslatef(sr->x - sr->w, sr->y - sr->w, 0);
glTranslatef(.5, .5, 0);
It might also be helpful to say that:
glOrtho(0,screen_width,screen_height,0,-2,10);
is in use.
I've tried reading various tutorials, going through various forums, and asking various people, but there doesn't seem to be a solution that works, nor can I find any useful resources that explain how to find the center of the image in order to translate it to (0,0). I'm pretty new to OpenGL, so a lot of this stuff takes a while for me to digest.
Here's the entire function:
void Apply_Surface( float x, float y, Sheet_Container* source, Sheet_Rect* sr, float ang = 0, bool flipx = 0, bool flipy = 0, int e_x = -1, int e_y = -1 ) {
    float imgwi, imghi;

    glLoadIdentity();
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, source->rt());

    // rotation
    imghi = source->rh();
    imgwi = source->rw();

    Sheet_Rect t_shtrct(0, 0, imgwi, imghi);
    if ( sr == NULL ) // in case a sheet rect is not provided, assume it's the width
                      // and height of the texture with 0/0 x/y
        sr = &t_shtrct;

    glPushMatrix();

    int wid, hei;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &wid);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &hei);

    glTranslatef(-sr->x + -sr->w,
                 -sr->y + -sr->h, 0);
    glRotatef(ang, 0, 0, 1.f);
    glTranslatef(sr->x + sr->w,
                 sr->y + sr->h, 0);

    // Yeah, out-dated way of drawing to the screen, but it works for now.
    GLfloat tex[] = {
        (sr->x + sr->w *  flipx) / imgwi, 1 - (sr->y + sr->h * !flipy) / imghi,
        (sr->x + sr->w *  flipx) / imgwi, 1 - (sr->y + sr->h *  flipy) / imghi,
        (sr->x + sr->w * !flipx) / imgwi, 1 - (sr->y + sr->h *  flipy) / imghi,
        (sr->x + sr->w * !flipx) / imgwi, 1 - (sr->y + sr->h * !flipy) / imghi
    };
    GLfloat vertices[] = { // vertices to put on screen
        x,           (y + sr->h),
        x,            y,
        (x + sr->w),  y,
        (x + sr->w), (y + sr->h)
    };

    // index array
    GLubyte index[6] = { 0, 1, 2, 2, 3, 0 };

    float fx = (x / (float)screen_width)  - (float)sr->w / 2 / (float)imgwi;
    float fy = (y / (float)screen_height) - (float)sr->h / 2 / (float)imghi;

    // activate arrays
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    // pass vertices and texture information
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glTexCoordPointer(2, GL_FLOAT, 0, tex);

    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, index);

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);

    glPopMatrix();
    glDisable(GL_TEXTURE_2D);
}
Sheet container class:
class Sheet_Container {
    GLuint texture;
    int width, height;
public:
    Sheet_Container();
    Sheet_Container(GLuint, int = -1, int = -1);
    void Load(GLuint, int = -1, int = -1);
    float rw();
    float rh();
    GLuint rt();
};
Sheet rect class:
struct Sheet_Rect {
    float x, y, w, h;
    Sheet_Rect();
    Sheet_Rect(int xx, int yy, int ww, int hh);
};
Image loading function:
Sheet_Container Game_Info::Load_Image(const char* fil) {
    ILuint t_id;
    ilGenImages(1, &t_id);
    ilBindImage(t_id);
    ilLoadImage(const_cast<char*>(fil));

    int width = ilGetInteger(IL_IMAGE_WIDTH), height = ilGetInteger(IL_IMAGE_HEIGHT);

    return Sheet_Container(ilutGLLoadImage(const_cast<char*>(fil)), width, height);
}
Your quad (two triangles) is centered at:
( x + sr->w / 2, y + sr->h / 2 )
You need to move that point to the origin, rotate, and then move it back:
glTranslatef ( (x + sr->w / 2.0f), (y + sr->h / 2.0f), 0.0f); // 3rd
glRotatef (0,0,0,1.f); // 2nd
glTranslatef (-(x + sr->w / 2.0f), -(y + sr->h / 2.0f), 0.0f); // 1st
Here is where I think you are getting tripped up: people naturally assume that OpenGL applies transformations in the order they appear (top-to-bottom); that is not the case. OpenGL effectively swaps the operands every time it multiplies two matrices:
M1 x M2 x M3
~~~~~~~          <-- (1)
~~~~~~~~~~~~     <-- (2)

(1) M2 * M1
(2) M3 * (M2 * M1) --> M3 * M2 * M1   (row-major / textbook math notation)
The technical term for this is post-multiplication; it all has to do with the way matrices are implemented in OpenGL (column-major). Suffice it to say, you should generally read glTranslatef, glRotatef, glScalef, etc. calls from bottom to top.
With that out of the way, your current rotation does not make any sense.
You are telling GL to rotate 0 degrees around an axis: <0,0,1> (the z-axis in other words). The axis is correct, but a 0 degree rotation is not going to do anything ;)
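Once you pass a real angle, that same sequence rotates the quad about its center; e.g. (a hypothetical 45-degree rotation, reusing the variables from the question):
glTranslatef( (x + sr->w / 2.0f),  (y + sr->h / 2.0f), 0.0f); // 3rd: move center back
glRotatef   ( 45.0f, 0.0f, 0.0f, 1.0f );                      // 2nd: now actually rotates
glTranslatef(-(x + sr->w / 2.0f), -(y + sr->h / 2.0f), 0.0f); // 1st: move center to origin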

How can I copy parts of an image from the buffer into a texture to render?

I have been searching around for a simple solution, but I have not found anything. Currently I am loading a texture from a file and rendering it into the buffer using C++ 2012 Express and DirectX 9. But what I want to do is copy parts of the buffer and use the copied part as the texture, instead of the loaded texture.
I want to be able to copy/select like a map-editor would do.
EDIT: Problem solved :) It was just dumb mistakes.
You can use the StretchRect function (see documentation).
You should copy a subset of the source buffer into the whole destination buffer (which is the new texture's buffer in your case). Something like this:
LPDIRECT3DTEXTURE9 pTexSrc, // source texture
                   pTexDst; // new texture (a subset of the source texture)
// create the textures
// ...
LPDIRECT3DSURFACE9 pSrc, pDst;
pTexSrc->GetSurfaceLevel(0, &pSrc);
pTexDst->GetSurfaceLevel(0, &pDst);

RECT rect; // (x0, y0, x1, y1) - coordinates of the subset to copy
rect.left = x0;
rect.right = x1;
rect.top = y0;
rect.bottom = y1;

pd3dDevice->StretchRect(pSrc, &rect, pDst, NULL, D3DTEXF_NONE);
// the last parameter could also be D3DTEXF_POINT or D3DTEXF_LINEAR

pSrc->Release();
pDst->Release(); // remember to release the surfaces when done !!!
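One caveat from the StretchRect documentation: the call only works for certain combinations of surface pools and usages - as far as I remember, both surfaces generally have to live in D3DPOOL_DEFAULT, and the destination is typically a render target - so check the returned HRESULT if the copy silently does nothing.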
EDIT:
OK, I've just got through the tons of your code, and I think the best solution would be to use UV coordinates instead of copying subsets of the palette texture. You should calculate the appropriate UV coordinates for a given tile in game_class::game_gui_add_current_graphic and use them in the CUSTOMVERTEX structure:
float width; // the width of the palette texture
float height; // the height of the palette texture
float tex_x, tex_y; // the coordinates of the upper left corner
// of the palette texture's subset to use for
// the current tile texturing
float tex_w, tex_h; // the width and height of the above mentioned subset
float u0, u1, v0, v1;
u0 = tex_x / width;
v0 = tex_y / height;
u1 = u0 + tex_w / width;
v1 = v0 + tex_h / height;
// create the vertices using the CUSTOMVERTEX struct
CUSTOMVERTEX vertices[] = {
{ 0.0f, 32.0f, 1.0f, u0, v1, D3DCOLOR_XRGB(255, 0, 0), },
{ 0.0f, 0.0f, 1.0f, u0, v0, D3DCOLOR_XRGB(255, 0, 0), },
{ 32.0f, 32.0f, 1.0f, u1, v1, D3DCOLOR_XRGB(0, 0, 255), },
{ 32.0f, 0.0f, 1.0f, u1, v0, D3DCOLOR_XRGB(0, 255, 0), } };
Example: Your palette consists of 3 rows and 4 columns with the 12 possible cell textures. Each texture is 32 x 32. So tex_w = tex_h = 32;, width = 4 * tex_w; and height = 3 * tex_h;. Suppose you want to calculate uv coordinates for a tile which should be textured with the image in the second row and the third column of the palette. Then tex_x = (3-1)*tex_w; and tex_y = (2-1)*tex_h;. Finally, you calculate the UVs as in the code above (in this example you'll get {u0,v0,u1,v1} = {(3-1)/4, (2-1)/3, 3/4, 2/3} = {0.5, 0.33, 0.75, 0.66}).
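If it helps, that calculation wraps neatly into a small helper (a sketch following the scheme above; the function name and the 1-based row/col convention are my own):
// compute the UV rectangle for the tile at (row, col), both 1-based
void tile_uv(int row, int col, float tex_w, float tex_h,
             float width, float height,
             float& u0, float& v0, float& u1, float& v1)
{
    float tex_x = (col - 1) * tex_w; // left edge of the tile in texels
    float tex_y = (row - 1) * tex_h; // top edge of the tile in texels
    u0 = tex_x / width;
    v0 = tex_y / height;
    u1 = u0 + tex_w / width;
    v1 = v0 + tex_h / height;
}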

Transforming vertices with center point and scale factor?

My application is a vector drawing application. It works with OpenGL. I will be modifying it to use the Cairo 2D graphics library instead. The issue is with zooming. With OpenGL, the camera and scale factor sort of work like this:
float scalediv = Current_Scene().camera.ScaleFactor / 2.0f;
float cameraX = GetCameraX();
float cameraY = GetCameraY();
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
float left = cameraX - ((float)controls.MainGlFrame.Dimensions.x) * scalediv;
float right = cameraX + ((float)controls.MainGlFrame.Dimensions.x) * scalediv;
float bottom = cameraY - ((float)controls.MainGlFrame.Dimensions.y) * scalediv;
float top = cameraY + ((float)controls.MainGlFrame.Dimensions.y) * scalediv;
glOrtho(left,
right,
bottom,
top,
-0.01f,0.01f);
// Set the model matrix as the current matrix
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
hdc = BeginPaint(controls.MainGlContext.mhWnd,&ps);
Mouse position is obtained like this:
POINT _mouse = controls.MainGlFrame.GetMousePos();
vector2f mouse = functions.ScreenToWorld(_mouse.x,_mouse.y,GetCameraX(),GetCameraY(),
Current_Scene().camera.ScaleFactor,
controls.MainGlFrame.Dimensions.x,
controls.MainGlFrame.Dimensions.y );
vector2f CGlEngineFunctions::ScreenToWorld(int x, int y, float camx, float camy, float scale, int width, int height)
{
    // Move the given point to the origin, multiply by the zoom factor and
    // add the model coordinates of the center point (camera position)
    vector2f p;
    p.x =  (float)(x - width / 2.0f) * scale + camx;
    p.y = -(float)(y - height / 2.0f) * scale + camy;
    return p;
}
From there I draw the VBOs of triangles. This allows me to pan and zoom in. Given that Cairo can only draw based on coordinates, how can I make it so that a vertex is properly scaled and panned without using transformations? Basically, glOrtho sets up the view volume, but I don't think I could do this with Cairo.
Well, glOrtho changes the projection matrix instead of modifying the vertices, but how could I instead modify the vertices to get the same result?
Thanks
Given vertex P, which was obtained from ScreenToWorld, how could I modify it so that it is scaled and panned according to the camera and scale factor? Because usually OpenGL would essentially do this.
I think Cairo can do what you want... see http://cairographics.org/matrix_transform/. Does that solve your problem, and if not, why?
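For example, something along these lines should reproduce the glOrtho-plus-camera setup before drawing (a sketch, assuming a valid cairo_t* cr, the cameraX / cameraY values from the question, scaleFactor = Current_Scene().camera.ScaleFactor, the frame dimensions width / height, and two world-space points p0 / p1; exact signs and factors may need adjusting to your conventions):
// set up a world-to-screen transform equivalent to the glOrtho call above
cairo_identity_matrix(cr);
cairo_translate(cr, width / 2.0, height / 2.0);           // origin at the canvas center
cairo_scale(cr, 1.0 / scaleFactor, -1.0 / scaleFactor);   // zoom; flip y to match GL's up axis
cairo_translate(cr, -cameraX, -cameraY);                  // pan so the camera position is centered
// from here on, draw using world-space coordinates directly
cairo_move_to(cr, p0.x, p0.y);
cairo_line_to(cr, p1.x, p1.y);
cairo_stroke(cr);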