I am trying to draw a simple cube in OpenGL using the mouse. Here are the basic steps I followed:
1. Get the mouse click coordinates: one when the mouse button is first pressed (say x1,y1) and the other when it is released, i.e. after the drag (say x2,y2).
2. Convert the 2D coordinates to 3D using gluUnProject (a minimal sketch follows these steps).
3. Now that I have two points in 3D, I can easily render a cube.
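For reference, this is roughly what I do for step 2; it is only a sketch, and the depth value and variable names are placeholders:

    // Sketch of step 2: unproject one window-space click into world space.
    // winX/winY are the mouse coordinates; depth (0..1) is a placeholder value.
    #include <GL/glu.h>

    void unprojectClick(int winX, int winY, double depth, GLdouble out[3])
    {
        GLdouble model[16], proj[16];
        GLint view[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, view);

        // Window y grows downwards, OpenGL's grows upwards.
        GLdouble wy = view[3] - winY;
        gluUnProject(winX, wy, depth, model, proj, view,
                     &out[0], &out[1], &out[2]);
    }

I call this once for (x1,y1) and once for (x2,y2) and use the two returned points as opposite corners of the cube.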
Everything went as planned, except this is what I got when I drew the cube:
Link to the image: Here
The cube was only half drawn; I don't know what the problem is here.
That looks like the whole scene is being clipped at the far plane (the "back plane" of the view frustum). Try moving the far plane further away from your camera. If you don't know what the far plane is, take a look at this awesome article: http://www.lighthouse3d.com/tutorials/view-frustum-culling/
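If you are using the fixed-function pipeline, the far plane is the last argument of gluPerspective (zFar in glm::perspective); a rough sketch, with placeholder values:

    // Push the far clipping plane out so the cube is no longer cut off.
    // The fov, aspect, near and far values below are placeholders for your own setup.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0,                    // vertical field of view in degrees
                   (double)width / height,  // aspect ratio of your window
                   0.1,                     // near plane
                   1000.0);                 // far plane: increase this if clipping occurs
    glMatrixMode(GL_MODELVIEW);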
I have a problem: I need to draw a 3D object in such a way that I can move it along the screen plane and rotate it, while the viewing angle stays as if I were always looking at it head-on from one fixed point.
I use the GLM library for working with matrices. I tried glm::ortho, but then I cannot work with the z coordinate, so the model does not rotate. If I use glm::perspective, the model looks the way I need it to only at the center of the screen. What I mean is, for example, a character model shown in a window of a game: no matter where you move that window on the screen, you look at the model straight on, not as if you were looking at it from a corner.
I apologize, I don't know how to explain it better; I hope it is understandable.
I'm trying to add a skybox to the world/camera/game and I don't know how to go about it. If someone could give me some guidance on how to apply it, it would be much appreciated.
I have already loaded the skybox, I just don't know how to draw it properly so it will fit around the camera as it moves.
I have managed to texture a sort of cube, which might be close to a skybox, but it is only visible from the outside. Once you enter the cube, you can't see it from the inside. Perhaps if I could invert the cube's faces, it would show when I'm inside the cube, and I could make it larger?
From outside the cube looking at it
From inside looking out
I had a similar problem a few weeks back; if you are looking for some pseudocode, I think I may be able to help. First of all, using a cube isn't the best idea, as your box won't look natural; map the texture to a sphere instead for a smooth effect.
Create a bounding sphere around your viewer that moves relative to your camera
Apply the texture to that sphere; this will give the impression that the sky is moving relative to you
When you are drawing, disable your z-buffer and frustum culling (assuming you're using any culling algorithm); this allows the skybox to be drawn while ensuring the terrain is drawn over the top of the skybox when OpenGL performs its depth tests.
Note: Don't forget to re-enable the z-buffer after the skybox has been drawn, otherwise your terrain elements will appear outside of the sphere, meaning you will only see the skybox.
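In OpenGL terms, the drawing part looks roughly like this (only a sketch; drawSkySphere() and camPos are placeholders for whatever your engine provides):

    glDepthMask(GL_FALSE);     // don't write depth, so terrain always draws over the sky
    glPushMatrix();
    glTranslatef(camPos.x, camPos.y, camPos.z);   // keep the sphere centred on the viewer
    drawSkySphere();           // your textured sphere (or cube) goes here
    glPopMatrix();
    glDepthMask(GL_TRUE);      // re-enable depth writes for the rest of the scene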
I recently wrote a basic terrain engine in DirectX, but the principles are fairly similar; if you'd like to view the repo you can find it here
Check out line 286 in this file to see how the Skybox is rendered, then also visit the SkyBox implementation file to see how it is constructed, and the SkyShader implementation file to see how the texture is mapped to the sphere, the main method to be concerned with in the shader file is SetShaderParameters()
In terms of moving the skybox relative to your camera, simply set the WVP matrix of your skybox to that of your camera, and then tweak the x, y, z planes of the skybox to your liking.
Extra: If you are going to implement multiplayer aspects, just disable back-face rendering for the sphere; then each player can see their own skybox but opponents cannot. Alternatively, you can create one large sphere around the whole world.
Hope that helps. If you need any more help just ask; I know this stuff can be fairly dense at first. :)
I have a rectangle on my window, and I am trying to make this rectangle clickable by defining the area of the rectangle.
If the mouse click is inside this area, then it's a click else not.
For eg: On the window, let's assume the vertex of the rectangle is:
x = 40, y = 50; width = 200, height = 100;
So, a click will be counted when
(mouseXPos > getX()) && (mouseXPos < (getX()+width)) && (mouseYPos > getY()) && (mouseYPos < (getY()+height))
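Or, written out as a small helper function (a sketch, assuming getX(), getY(), width and height are accessible as in the condition above):

    // Point-in-rectangle test for the click area described above.
    bool isInsideRect(int mouseXPos, int mouseYPos)
    {
        return mouseXPos > getX() && mouseXPos < getX() + width &&
               mouseYPos > getY() && mouseYPos < getY() + height;
    }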
Now, I am applying a lookAt transformation to the object by inheriting from a class which has lookAt functions. I am also using a camera to look at the different faces of the object (camera rotation), so the object rotates around various axes and shows different faces as the camera is moved.
However, when the object moves, I would have thought the vertices of the rectangle would change. The vertices should also have changed after calling gluLookAt, but it looks like they do not, and my click area always remains stationary at those points even though the object is no longer there. How do I tackle this problem? How do I make my object clickable and add mouse events to it?
If you're trying to click on 3D shapes, and you are moving the camera, I wouldn't check this in screen coordinates.
Instead, you can project the point where the user has clicked into a line in 3D space, and intersect that against the objects which can be clicked on.
GLU has a function for this: gluUnProject()
If you give that function the details of your view, along with the screen point being clicked on, it can tell you where in 3D space the user has clicked. If you check a point at both the near and far planes, the line between these points can be checked against your object.
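A sketch of that idea (mouseX/mouseY are the click coordinates; the matrices are fetched from the fixed-function state here, but you can pass your own):

    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    // Unproject the click at the near (winZ = 0) and far (winZ = 1) planes.
    GLdouble nearPt[3], farPt[3];
    GLdouble wy = view[3] - mouseY;   // flip window y into OpenGL's convention
    gluUnProject(mouseX, wy, 0.0, model, proj, view, &nearPt[0], &nearPt[1], &nearPt[2]);
    gluUnProject(mouseX, wy, 1.0, model, proj, view, &farPt[0],  &farPt[1],  &farPt[2]);
    // Ray origin = nearPt, ray direction = farPt - nearPt (normalized);
    // intersect this ray with your clickable objects.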
The other alternative is some kind of ID buffer, where you render the scene off-screen, but instead of outputting shaded pixels, you output an ID value for each object. The hit-test is then just a matter of reading a pixel from that buffer.
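Reading the hit back is then a single pixel read, roughly like this (a sketch; mouseY is flipped because window coordinates start at the top, and the ID framebuffer is assumed to be bound):

    unsigned char id[4];
    glReadPixels(mouseX, viewportHeight - mouseY, 1, 1,
                 GL_RGBA, GL_UNSIGNED_BYTE, id);
    // Decode id[0..3] back into the object identifier you encoded when rendering.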
Seeing that nobody else is answering, I will give it a try. :)
Though I am not very experienced with OpenGL, this seems like the expected behavior to me; it looks like you are mixing up world coordinates with on-screen coordinates.
Think of it this way: if you take a picture of a table in your living room from two different angles, it will look different in each image, but in both it occupies the same space in the room. The same can be said about entities in OpenGL: even though you move the camera, the coordinates of the entity do not change, only your perception of it does.
Now, if I'm not mistaken, you can translate your world coordinates into on-screen coordinates by applying the same transformations that OpenGL applies. I would suggest taking a look at this: OpenGL: projecting mouse click onto geometry
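The forward direction of that translation is gluProject, roughly like this (worldX/worldY/worldZ are the vertex you want to test; everything else is read from the current GL state):

    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    GLdouble sx, sy, sz;
    gluProject(worldX, worldY, worldZ, model, proj, view, &sx, &sy, &sz);
    // (sx, sy) is the vertex in window coordinates (y measured from the bottom),
    // which you can compare against the mouse position.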
Thumbs up for the ID buffer alternative. I have used it several times and it really was the right thing to do: no expensive geometry picking, and an almost complete decoupling from the underlying geometry.
Still, you can only pick the closest geometry (that is, the one "visible" on the screen) and not the models that could be hidden behind it. The same problem appears when dealing with translucent materials.
One solution could be to do some ID peeling by rendering four object IDs into an RGBA ID texture (I have a feeling it could be suboptimal, but it's worth a shot).
I'm trying to show only a part of a background image (a game scenario in the future). The basic idea is: first I draw a background image, then I need to "hide"/cover the image with darkness (no light; I don't know which option should be chosen), and then use a mouse click to reveal, with a circle or a triangle (my options), only the part of the background image under the circle/triangle centered on the mouse position. I call this a "lantern effect".
First option: play with the alpha channel, creating a square covering the whole window and then trying to subtract the circle area from the alpha of the square over the image.
Second option: again use a black square covering the whole background image and try to subtract a circle/triangle. I tried glLogicOp, but that method only mixes colors. I don't know how to do operations on 2D polygons with OpenGL.
...
Any other idea, or an easy example, for learning how to do something similar would be welcome.
Image example:
That's quite easy to achieve actually:
Create a black texture with your lantern's light shape in the alpha channel. (The texture could be animated.)
Render the background.
Render the lantern texture centered at your in-game mouse cursor.
Render black padding around the lantern texture to hide everything else up to the screen edges.
There's no need to use any special blending modes, just an overlay.
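A rough sketch of that draw order (drawBackground(), drawLanternQuad() and drawBlackPadding() are placeholders for your own drawing code):

    drawBackground();                        // 1. the background image

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    drawLanternQuad(mouseX, mouseY);         // 2. black quad whose texture alpha holds the light shape
    drawBlackPadding(mouseX, mouseY);        // 3. solid black quads covering the rest of the screen
    glDisable(GL_BLEND);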
I have a simple solid modeling application in which I want to implement several "navigation modes", ways for the user to navigate the camera through 3d space. One of them is the ubiquitous 'drag and pan/rotate' that is used in SketchUp, Blender etc.; I also want to implement something that is more relevant to my specific application. Specifically, I want to implement a mode where the camera floats on a 'ring' above the object being modeled (a building), and always looks at the center of the model; this way, a user can easily 'circle' around the object, a common operation in my application.
So, what I want to do is render the building in my view, and display a torus in the top right of the view, with a small sphere on the torus to represent the camera location. There would be a north arrow in the torus, and the user would drag the camera around the model object by dragging the sphere; moving the sphere would reposition the camera and redraw the scene.
It looks like what I should do is the following: render the 'main view', i.e. the building; then render the torus and sphere (with different perspective settings and lighting) to an offscreen buffer, and blit it from there to my main view.
Then however I get to the hit testing. I want to detect if the user clicks on the sphere, or the torus; from what I understand from OpenGL picking (it seems to be a hard subject :/ ), all picking methods apply only for selecting in one 'scene'. Apart from that, I still want to detect 'normal' picking operations in the building model, obviously.
So, my questions:
How do I render to an offscreen buffer and blit it into another OpenGL context (with alpha blending and transparency, e.g. for the center of the torus)?
How do I do hit testing in the described scenario?
I don't think you need to do off-screen rendering for this. You should be able to just re-set the camera and viewport and render the overlay after the main scene. You might have issues with Z-ordering and/or buffering, but perhaps the "sub-scene" is simple enough for that not to matter, or you could of course just clear the Z buffer before rendering it.
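Something along these lines (a sketch; the viewport size, renderBuildingScene() and the other helpers are placeholders):

    renderBuildingScene();                     // normal scene, full window

    glClear(GL_DEPTH_BUFFER_BIT);              // overlay always passes the depth test
    glViewport(winWidth - 200, winHeight - 200, 200, 200);   // top-right corner
    setOverlayProjectionAndCamera();           // separate perspective/lighting for the widget
    renderTorusAndSphere();

    glViewport(0, 0, winWidth, winHeight);     // restore for the next frame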
As far as drawing the torus/sphere goes, create a separate class for that and implement a "draw" method. Have the class contain the location of both the sphere and torus and have draw() render those things on the screen.
Then just call myRing.draw() in your main drawing method and you'll have a sphere and torus!
If you mean you want to have a circle/ring rendered in 2D (which might be easier) in the top right corner of the window, then the same sort of idea would apply as in your hitbox post (except without that annoying projection calculation!)
Lastly, I'd consider using a function key in combination with mouse drags to implement the functionality you want... E.g. the user holds "shift" and then click-drags the mouse across the screen. These mouse events are caught and the x-delta is used to compute the angle of rotation. The camera's location is updated as this happens and you get a smooth sliding motion :)
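For example, something like this in the mouse-motion handler (a sketch; ringAngle, ringRadius, ringHeight and the camera fields are placeholders):

    if (shiftHeld && mouseDragging) {
        float deltaX = (float)(currentMouseX - lastMouseX);
        ringAngle += deltaX * 0.01f;                      // sensitivity factor
        camera.x = center.x + ringRadius * cosf(ringAngle);
        camera.z = center.z + ringRadius * sinf(ringAngle);
        camera.y = ringHeight;                            // stay on the ring above the model
        lastMouseX = currentMouseX;
    }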
I agree with @unwind; you don't need an offscreen buffer. If you want to use one anyway, search for "render-to-texture".
As for hit testing, the OpenGL FAQ has an entry on it. It describes several solutions: using the GL_SELECT render mode, using gluUnProject() to get a 3D collision ray, and a simple 2D solution using unique colors.