GPU mouse picking OpenGL/WebGL

I understand that I need to render just a 1x1 or 3x3 pixel region of the screen around the mouse, with object IDs encoded as colors, and then read the ID back from the color.
I have implemented ray-cast picking with spheres, and I am guessing this has something to do with making the camera look in the direction of the mouse ray?
How do I render only the correct few pixels?
Edit:
Pointing the camera in the direction of the mouse ray works, but if I make the viewport smaller the picture scales down, whereas what I (think I) need is for it to be cropped rather than scaled. How would I achieve this?

The easiest solution is to use the scissor test. It allows you to render only pixels within a specified rectangular sub-region of your window.
For example, to limit your rendering to 3x3 pixels centered at pixel (x, y):
glScissor(x - 1, y - 1, 3, 3);
glEnable(GL_SCISSOR_TEST);
glDraw...(...);
glDisable(GL_SCISSOR_TEST);
Note that the origin of the coordinate system is at the bottom left of the window, while most window systems will give you mouse coordinates in a coordinate system that has its origin at the top left. If that's the case on your system, you will have to invert the y-coordinate by subtracting it from windowHeight - 1.
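Once the picking pass has been rendered this way, you read the color back with glReadPixels and decode the ID. A minimal sketch, assuming mouseX, mouseY, and windowHeight variables and that IDs were packed into the RGB channels:
GLint yGL = windowHeight - 1 - mouseY; // flip y to GL's bottom-left origin
unsigned char pixel[4];
glReadPixels(mouseX, yGL, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
unsigned int objectId = pixel[0] | (pixel[1] << 8) | (pixel[2] << 16); // must match your encoding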

Related

How to understand Sprite coordinates in AddSprite function?

I have a bitmap with a sprite on it. Say the sprite has coordinates (0,0)-(5,4) in the bitmap coordinate space (the source rectangle). I put it onto a destination rectangle with the same coordinates (0,0)-(5,4) in the render target coordinate space (the ID2D1SpriteBatch::AddSprites function). But it draws the sprite with the right and bottom edges cut off by one pixel.
If I use a source rectangle with (0,0)-(6,5) coordinates (that is, one pixel more), it solves the problem and Direct2D draws the sprite as needed. OK, but I do not understand why I have to use this "plus one pixel" technique to draw the uncut sprite. What is wrong with the sprite coordinates?
The D2D_RECT_F structure that you are passing to ID2D1SpriteBatch::AddSprites as the destinationRectangles is documented as:
Represents a rectangle defined by the coordinates of the upper-left corner (left, top) and the coordinates of the lower-right corner (right, bottom).
Note that you are specifying the corners on the destination device context, not the inclusive starting/ending pixel rows/columns. Therefore, if you draw from (0,0) to (0,0), you would be asking to draw a 0-sized rectangle, rather than a 1x1 rectangle.
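In other words, right and bottom are exclusive corner coordinates, so a sprite covering pixel columns 0..5 and rows 0..4 needs a rectangle extending to (6,5). A sketch (spriteBatch is an assumed ID2D1SpriteBatch*):
D2D1_RECT_F dest = D2D1::RectF(0.0f, 0.0f, 6.0f, 5.0f);
D2D1_RECT_U src = D2D1::RectU(0, 0, 6, 5); // source rects are integer pixel bounds
spriteBatch->AddSprites(1, &dest, &src);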

OpenGL render portion of screen to texture

I am trying to render a small region of the screen to an off-screen texture. This is part of a screenshot function in my app where the user selects a region on the screen and saves this to an image. While the region on the screen might be 250x250px, the saved image can be a lot larger like 1000x1000px.
I understand the process of rendering to a texture using an FBO. I'm mostly stuck when it comes to defining the projection matrix that clips the scene so that only the screenshot region is rendered.
I believe you can do this without changing the projection matrix. After all, if you think about it, you don't really want to change the projection. You want to change which part of the projected geometry gets mapped to your rendering surface. The coordinate system after projection is NDC (normalized device coordinates). The transform that controls how NDC is mapped to the rendering surface is the viewport transformation, which you control through the parameters of glViewport().
If you set the viewport dimensions to the size of your rendering surface, you map the NDC range of [-1.0, 1.0] to your rendering surface. To render a sub-range of that NDC range to your surface, you need to scale up the specified viewport size accordingly. Say to map 1/4 of your original image to the width of your surface, you set the viewport width to 4 times your surface width.
To map a sub-range of the standard NDC range to your surface, you will also need to adjust the origin of the viewport. The viewport origin values become negative in this case. Continuing the previous example, to map the quarter of the original image that starts at the middle of the image, the x-value of your viewport origin will be -2 times the surface width.
Here is what I came up with on how the viewport needs to be adjusted. Using the following definitions:
winWidth: width of original window
winHeight: height of original window
xMin: minimum x-value of zoomed region in original window coordinates
xMax: maximum x-value of zoomed region in original window coordinates
yMin: minimum y-value of zoomed region in original window coordinates
yMax: maximum y-value of zoomed region in original window coordinates
fboWidth: width of FBO you use for rendering zoomed region
fboHeight: height of FBO you use for rendering zoomed region
To avoid distortion, you will probably want to maintain the aspect ratio:
fboWidth / fboHeight = (xMax - xMin) / (yMax - yMin)
In all of the following, most of the operations (particularly the divisions) will have to be executed in floating point. Remember to use type casts if the original variables are integers, and round the results back to integer for the final results.
float xZoom = (float)winWidth / (xMax - xMin);
float yZoom = (float)winHeight / (yMax - yMin);
// The viewport is larger than the FBO; only the zoomed region lands on it.
GLsizei vpWidth = (GLsizei)lroundf(xZoom * fboWidth);
GLsizei vpHeight = (GLsizei)lroundf(yZoom * fboHeight);
// A negative origin shifts the region of interest onto the FBO.
GLint xVp = (GLint)lroundf(-((float)xMin / (xMax - xMin)) * fboWidth);
GLint yVp = (GLint)lroundf(-((float)yMin / (yMax - yMin)) * fboHeight);
glViewport(xVp, yVp, vpWidth, vpHeight);
You might want to look into how gluPickMatrix works and replicate its functionality using modern OpenGL methods. You can find the gluPickMatrix source code in the implementation of Mesa3D.
While the original intent of gluPickMatrix was for selection mode rendering, it can be used for what you want to do as well.
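As a rough sketch of that approach, modeled on the Mesa source: build a matrix that scales and translates NDC so the picked region fills the viewport, then premultiply it onto your projection matrix. The function name and parameters here are illustrative:
#include <array>

// Returns a column-major 4x4 matrix; apply it as pick * projection.
// (cx, cy) is the region center in window coordinates, (w, h) its size.
std::array<float, 16> pickMatrix(float cx, float cy, float w, float h,
                                 const int viewport[4])
{
    std::array<float, 16> m = {}; // zero-initialized
    m[0] = viewport[2] / w; // scale the w x h region up to the viewport size
    m[5] = viewport[3] / h;
    m[10] = 1.0f;
    m[15] = 1.0f;
    m[12] = (viewport[2] - 2.0f * (cx - viewport[0])) / w; // recenter on the region
    m[13] = (viewport[3] - 2.0f * (cy - viewport[1])) / h;
    return m;
}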

OpenGL coordinate system is not -1 to 1

I am creating a basic game in OpenGL and C++ and want to make it so that when the player gets to the edge of the screen they can't move any further. I am having trouble working out where the edge of the screen is. I know that windows normally use a coordinate system from -1 to 1, but mine seems to be more like -0.63 to 0.63. The player is shown as a box on the screen which has x, y, and z locations, but it only moves in 2D space.
I want to change the bounds so that they are between -1 and 1, not an odd value.
How can I do this?
Code has been uploaded to http://pastebin.com/jxd5YhHa.
If you aren't going to be dynamically changing your projection matrix, the easiest thing to do would be to call
glScalef(0.63f, 0.63f, 1.0f);
on your projection matrix.
You can then restrict movement based on these values.
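A minimal sketch of that idea (the player fields are assumptions; the 0.63 factor comes from the question):
glMatrixMode(GL_PROJECTION);
glScalef(0.63f, 0.63f, 1.0f); // world x,y in [-1, 1] now fills the visible area
glMatrixMode(GL_MODELVIEW);

// Later, clamp the player's position to the new bounds:
if (player.x > 1.0f) player.x = 1.0f;
if (player.x < -1.0f) player.x = -1.0f;
if (player.y > 1.0f) player.y = 1.0f;
if (player.y < -1.0f) player.y = -1.0f;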
To compute the world-space coordinates at any time, you should make use of gluUnProject.
Assuming 'x' and 'y' are the width and height of your window respectively (the values you pass to glViewport), you can find the world-space coordinates like so:
// world coordinates of the lower-left corner of the window
double world_llx, world_lly, world_llz;
gluUnProject(0, 0, 0, view_mat, proj_mat, viewport, &world_llx, &world_lly, &world_llz);

// world coordinates of the upper-right corner of the window
double world_urx, world_ury, world_urz;
gluUnProject(x, y, 0, view_mat, proj_mat, viewport, &world_urx, &world_ury, &world_urz);
view_mat is your view matrix. proj_mat is your projection matrix. You can get both of these using glGetDouble* with GL_MODELVIEW_MATRIX and GL_PROJECTION_MATRIX.
The viewport parameter will probably have the same dimensions as your window. In any event, this is what you set with glViewport.
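For example, a sketch of fetching all three (fixed-function pipeline assumed):
GLdouble view_mat[16], proj_mat[16];
GLint viewport[4];
glGetDoublev(GL_MODELVIEW_MATRIX, view_mat);
glGetDoublev(GL_PROJECTION_MATRIX, proj_mat);
glGetIntegerv(GL_VIEWPORT, viewport);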
This assumes your XY plane is at z == 0.

OpenGL Viewport Transformation

I was wondering how OpenGL handles the viewport transformation to the window.
As I understand it, the viewport transformation stretches the scene onto the OpenGL window.
Please correct me if I'm wrong.
After clipping and perspective divide, all remaining (visible) vertex coordinates x,y,z are between -1 and +1 -- these are called normalized device coordinates. These are mapped to device coordinates by the appropriate scale and shift -- i.e, the viewport transformation.
For example, if the viewport has size 1024x768 with a 16-bit depth buffer and the origin is (0,0), then the points will be scaled by (512, 384, 2^15) and shifted by (512, 384, 2^15), yielding the appropriate pixel and depth values for the device.
http://www.songho.ca/opengl/gl_transform.html:
Window Coordinates (Screen Coordinates)
They are yielded by applying the viewport transformation to normalized device coordinates (NDC). The NDC are scaled and translated in order to fit into the rendering screen. The window coordinates are finally passed to the rasterization process of the OpenGL pipeline to become a fragment. The glViewport() command is used to define the rectangle of the rendering area where the final image is mapped, and glDepthRange() is used to determine the z-value range of the window coordinates. The window coordinates are computed from the parameters of these two functions.
Follow the link to see the math details.
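For reference, with a viewport set by glViewport(x0, y0, w, h) and a depth range set by glDepthRange(n, f), NDC coordinates (x_ndc, y_ndc, z_ndc) map to window coordinates as:
x_w = (w / 2) * x_ndc + (x0 + w / 2)
y_w = (h / 2) * y_ndc + (y0 + h / 2)
z_w = ((f - n) / 2) * z_ndc + (f + n) / 2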

The order of translate and scale for zoom and pan

The first thing I want to do is translate to the center of the screen and draw all of the objects from there.
Then I would like to apply a translate for panning and a scale for zoom. I want to zoom relative to a center point! In what order should they go so that this works?
glTranslatef(width / 2, height / 2, 0);
glTranslatef(centerX, centerY, 0); // go to center point
glScalef(zoom, zoom, 1);
glTranslatef(offset.x / zoom, offset.y / zoom, offset.z / zoom); // pan
I tried the above order, but it doesn't go to the center point and it always zooms relative to (0,0).
I suppose you are drawing a square with both x and y between 0 and 1.
First you have to translate to the point where the scaled object should be:
glTranslatef(centerX, centerY, 0);
glScalef(zoom, zoom, 1);
glTranslatef(-0.5f, -0.5f, 0); // move the square's middle to the origin
// draw stuff
OpenGL applies the transformations to your vertices in the reverse of the order they appear in the code, since each call multiplies the current matrix on the right.
Reading the above sequence bottom-up is the key.
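Putting both parts together for pan plus zoom about an arbitrary point (cx, cy), a sketch with assumed names panX, panY, zoom, and drawScene (read bottom-up for the order applied to vertices):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(panX, panY, 0.0f); // 4. pan, applied last
glTranslatef(cx, cy, 0.0f); // 3. move the pivot point back
glScalef(zoom, zoom, 1.0f); // 2. scale about the origin
glTranslatef(-cx, -cy, 0.0f); // 1. move the pivot (cx, cy) to the origin
drawScene();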