Drawing multiple objects - C++

I have been learning DirectX for a while now and I got to the point of drawing multiple objects to the screen from .obj files. My problem is that if I draw two objects, the second one is always drawn on top of the first.
Example:
Obj1 = Cardboard box
Obj2 = Cube
loadStuff_&_draw(Obj1);
loadStuff_&_draw(Obj2);
That draws the cube in front of the box even when it should be inside or behind it.
How would I make multiple objects draw together so that they overlap correctly?
The only drawing operations I know of are:
Load vertex, index, and constant buffers
UpdateSubresource()
DrawIndexed()
Edit:
Here is a picture of a cube in a box. It shows that the cube is behind the box rather than inside it. It also shows the rim of the box clipping behind the box; I'm not sure what happened there.
I drew the cube first, then drew the hollow box.

This is what the depth buffer (sometimes called the z-buffer) is for. When you write a pixel during the drawing of one object, it records the distance from the viewpoint for that pixel in the depth buffer. Then when you draw a second object that would also write to that pixel, it can compare the new object's depth value at that pixel with the buffered value (from previous objects) and only overwrites the pixel if the new value would be nearer than the old one. If the pixel gets drawn, then the depth value is updated to reflect the closer value.
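The comparison described above can be sketched in plain C++ as a tiny software z-buffer. This is only an illustration of the logic the GPU performs per pixel, not DirectX API code; the names `DepthBuffer` and `tryWritePixel` are hypothetical.

```cpp
#include <cassert>
#include <cstddef>
#include <limits>
#include <vector>

// Minimal sketch of the per-pixel depth test a GPU performs.
// Names here are hypothetical, not part of the DirectX API.
struct DepthBuffer {
    std::vector<float> depth;

    // Clear to "infinitely far" so the first fragment always wins.
    explicit DepthBuffer(std::size_t pixels)
        : depth(pixels, std::numeric_limits<float>::max()) {}

    // Returns true if the new fragment is nearer and the pixel should be
    // written, recording the new depth -- the behaviour of D3D11's default
    // D3D11_COMPARISON_LESS depth function.
    bool tryWritePixel(std::size_t index, float fragmentDepth) {
        if (fragmentDepth < depth[index]) {
            depth[index] = fragmentDepth; // keep the nearer value
            return true;                  // colour buffer gets this fragment
        }
        return false;                     // fragment is occluded
    }
};
```

With this in place the draw order stops mattering: a far fragment drawn after a near one is simply rejected.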
Some links that might give you some ideas on how to implement this:
http://msdn.microsoft.com/en-gb/library/windows/desktop/bb205074%28v=vs.85%29.aspx
http://www.rastertek.com/dx11tut35.html
https://stackoverflow.com/a/8641210/28875

This depends on how you set the world matrix of each object. If you want the second object behind the first, set the correct world matrix before drawing it. Suppose object 1 is placed at the origin (0, 0, 0); translating object 2 to (0, 0, -10) puts it behind object 1, and translating it to (0, 0, 10) puts it in front.
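As a sketch of what such a world matrix does, here is a minimal row-major translation matrix in the D3D row-vector convention (translation in the fourth row). The `Matrix4`/`Float3` types are hypothetical stand-ins, not the D3DX/DirectXMath API:

```cpp
#include <cassert>

// Hypothetical minimal types, not the DirectX math library.
struct Float3 { float x, y, z; };

struct Matrix4 {
    float m[4][4];

    // Identity with the translation in the fourth row (D3D row-vector style).
    static Matrix4 translation(float tx, float ty, float tz) {
        Matrix4 r = {{{1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {tx,ty,tz,1}}};
        return r;
    }

    // Transform a point (implicit w = 1) by this matrix, row-vector convention.
    Float3 transformPoint(const Float3& p) const {
        return {
            p.x*m[0][0] + p.y*m[1][0] + p.z*m[2][0] + m[3][0],
            p.x*m[0][1] + p.y*m[1][1] + p.z*m[2][1] + m[3][1],
            p.x*m[0][2] + p.y*m[1][2] + p.z*m[2][2] + m[3][2],
        };
    }
};
```

Setting this matrix as the per-object world matrix in the constant buffer before each draw call is what moves each object to its own place in the scene.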

Related

Method to get all points' coordinates OpenGL

I have a rectangle and a circle. I have all the circle's point coordinates because I calculate them myself, using maths, in order to draw it. The rectangle is drawn using two triangles, so four vertices. Both are free to translate and rotate in the plane, and I want to determine when one of them touches the other. I thought this happens when one coordinate of one object matches a coordinate of the other. The problem is that I don't have an array of all the rectangle's coordinates. Is there a method in OpenGL that returns all the coordinates of the drawn triangles, and not only the vertices?
There is a way to record the coordinates and commands supplied to OpenGL, using the feedback buffer (glFeedbackBuffer), but that's a rather inefficient approach, because you would need to decode the commands written into the buffer.
If you didn't have an array of coordinates, you were already using the most inefficient way to supply geometry to OpenGL:
glBegin(...);
glVertex3f(...);
glVertex3f(...);
...
glVertex3f(...);
glEnd();
The more efficient way is to use a vertex buffer object (VBO), which requires you to keep an array of coordinates anyway. With a large number of vertices, the VBO approach is many times faster than copying vertex by vertex.
OpenGL doesn't store the coordinates you supply any longer than it needs them, i.e. until rasterization. The whole goal of OpenGL is to create an image on screen, not to solve abstract geometry tasks.
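Since OpenGL won't hand the coordinates back, the practical route is to keep your own shapes and test them directly. For a circle against an axis-aligned rectangle, it's enough to clamp the circle's centre to the rectangle and compare the distance to the radius; no per-pixel coordinates are needed. A minimal sketch, with hypothetical `Circle`/`Rect` types:

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical shape types kept on the application side, not OpenGL API.
struct Circle { float cx, cy, r; };
struct Rect   { float x, y, w, h; }; // lower-left corner + size

// A circle touches an axis-aligned rectangle when the rectangle's closest
// point to the circle's centre lies within the circle's radius.
bool circleTouchesRect(const Circle& c, const Rect& q) {
    // Clamp the centre into the rectangle to find the closest point on it.
    float px = std::max(q.x, std::min(c.cx, q.x + q.w));
    float py = std::max(q.y, std::min(c.cy, q.y + q.h));
    float dx = c.cx - px;
    float dy = c.cy - py;
    return dx * dx + dy * dy <= c.r * c.r;
}
```

For a rotated rectangle you would first transform the circle's centre into the rectangle's local frame and then run the same test.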

Get screen coordinates and size of OpenGL 3D Object after transformation

I have a couple of 3D objects in OpenGL in a Processing sketch and I need to find out if the mouse is hovering over those objects. Since there is constant transformation, I can't compare the original coordinates and size to the mouse position. I already found the screenX() and screenY() methods, which return the translated screen coordinates after transformation and translation, but I would still need the displayed size after rotation.
Determining which object the mouse is over is called picking, and there are two main approaches:
Color picking. Draw each object using a different color into the back buffer (this is only done when picking, the colored objects are never displayed on screen). Then use glReadPixels to read the pixel under the cursor and check its color to determine which object it is. If the mouse isn't over an object you'll get the background color. More details here: Lighthouse 3D Picking Tutorial, Color Coding
Ray casting. You cast a ray through the cursor location into the scene and check if it intersects any objects. More details here: Mouse picking with ray casting
From reading your description option 1 would probably be simpler and do what you need.
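The bookkeeping side of colour picking can be sketched in a few lines: pack each object's ID into the RGB colour you draw it with, then recover the ID from the pixel `glReadPixels` returns. The helper names below are hypothetical, not OpenGL API:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical helpers for colour picking: 24 bits of object ID spread
// across the three 8-bit colour channels.
struct RGB { std::uint8_t r, g, b; };

RGB idToColor(std::uint32_t id) {
    return { static_cast<std::uint8_t>(id & 0xFF),
             static_cast<std::uint8_t>((id >> 8) & 0xFF),
             static_cast<std::uint8_t>((id >> 16) & 0xFF) };
}

std::uint32_t colorToId(RGB c) {
    return static_cast<std::uint32_t>(c.r)
         | (static_cast<std::uint32_t>(c.g) << 8)
         | (static_cast<std::uint32_t>(c.b) << 16);
}
```

During the picking pass you draw each object flat-shaded with `idToColor(objectId)`, read the pixel under the cursor, and `colorToId` tells you which object was hit (the background clear colour maps to "no object").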

Placing multiple images on a 3D surface

If I was to place a texture on the surface of a 3D object, for example a cube, I could use the vertices of that cube to describe the placement of this texture.
But what if I want to place multiple separate images on the same flat surface? Or suppose it is just one image, but I don't want it to appear at the edges of the surface, where the vertices are, but rather somewhere small and in the middle of the surface. I want the actual images to be chosen and placed dynamically at runtime, otherwise I could condense them offline as a single texture.
I have an approach but I want to seek advice as to whether there is a better method, or if this is perfectly acceptable:
My guess is to create multiple separate 2D quads (with a depth of 0), each with its own texture placed on it (they could of course share a texture atlas, using different texture coordinates).
Then, I transform these quads such that they appear to be on the surface of a 3D object, such as a cube. Of course I'd have to maintain a matrix hierarchy so these quads are transformed appropriately whenever the cube is transformed, such that they appear to be attached to the cube.
While this isn't necessarily hard, I am new to texturing and would like to know if this is a normal practice for something like this.
You could try rendering a scene and saving that as a texture then use that texture on the surface.
Check out glCopyTexImage2D() or glCopyTexSubImage2D().
Or perhaps try using frame buffer objects.
But what if I want to place multiple separate images on the same flat surface?
Use multiple textures, maybe each with its own set of texture coordinates. Your OpenGL implementation offers a number of texture units, each of which can supply a different texture.
glActiveTexture(GL_TEXTURE0 + i);
glBindTexture(…);
glUniform1i(texturesampler[i], i); // texturesampler[i] contains the sampler uniform location of the bound program.
Or suppose it is just one image, but I don't want it to appear at the edges of the surface, where the vertices are, but rather somewhere small and in the middle of the surface.
That's where the GL_CLAMP… texture wrap modes get their use.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_{S,T,R}, GL_CLAMP[_TO_{EDGE,BORDER}]);
With those you specify texture coordinates at the vertices that lie outside the [0, 1] interval; instead of repeating, the image shows only once, with only its edge pixels extended beyond it. If you make those edge pixels transparent, it's as if there were no image there.

Make an object clickable (click if the mouse is in the area of object)

I have a rectangle in my window and I am trying to make it clickable by defining the area of the rectangle.
If the mouse click is inside this area, then it's a click else not.
For example, on the window, let's assume the rectangle's origin and size are:
x = 40, y = 50; width = 200, height = 100;
So, a click will be counted when
(mouseXPos > getX()) && (mouseXPos < (getX()+width)) && (mouseYPos > getY()) && (mouseYPos < (getY()+height))
Now, I am applying a lookAt transformation to the object by inheriting from a class that has lookAt functions. I am also using a camera to view the different faces of the object (camera rotation), so the object rotates about various axes and shows different faces as the camera moves.
However, I would have thought that when the object moves, the vertices of the rectangle would change. They should also change when the gluLookAt function is applied, but apparently they do not: my click area stays stationary at the original points even though the object is no longer there. How do I tackle this problem? How do I make my object clickable and attach mouse events to it?
If you're trying to click on 3D shapes, and you are moving the camera, I wouldn't check this in screen coordinates.
Instead, you can project the point where the user has clicked into a line in 3D space, and intersect that against the objects which can be clicked on.
GLU has a function for this: gluUnProject()
If you give that function the details of your view, along with the screen point being clicked on, it can tell you where in 3D space the user has clicked. If you check a point at both the near and far planes, the line between these points can be checked against your object.
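Once gluUnProject has given you the unprojected points on the near and far planes, you have a ray, and the intersection check against a box reduces to the standard slab test. A minimal sketch, with hypothetical `Vec3`/`Box` types (not GLU API):

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical types; in practice these come from your math library.
struct Vec3 { float x, y, z; };
struct Box  { Vec3 min, max; }; // axis-aligned bounding box

// origin + t*dir spans near point (t = 0) to far point (t = 1).
// Classic slab test: intersect the ray's t-interval with each axis slab.
bool rayHitsBox(Vec3 origin, Vec3 dir, const Box& b) {
    float tmin = 0.0f, tmax = 1.0f;
    const float o[3]  = { origin.x, origin.y, origin.z };
    const float d[3]  = { dir.x, dir.y, dir.z };
    const float lo[3] = { b.min.x, b.min.y, b.min.z };
    const float hi[3] = { b.max.x, b.max.y, b.max.z };
    for (int i = 0; i < 3; ++i) {
        if (d[i] == 0.0f) { // ray parallel to this slab: must start inside it
            if (o[i] < lo[i] || o[i] > hi[i]) return false;
            continue;
        }
        float t1 = (lo[i] - o[i]) / d[i];
        float t2 = (hi[i] - o[i]) / d[i];
        tmin = std::max(tmin, std::min(t1, t2));
        tmax = std::min(tmax, std::max(t1, t2));
    }
    return tmin <= tmax; // intervals overlap: the ray passes through the box
}
```

You would run this against the bounding box of each clickable object (transformed into the same space as the ray) and pick the hit with the smallest `tmin`.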
The other alternative is some kind of ID buffer, where you render the scene off-screen, but instead of outputting shaded pixels you output an ID value for each object. The hit-test is then just a matter of reading a pixel from that buffer.
Seeing that nobody else is answering, I will give it a try. :)
Though I'm not very experienced with OpenGL, this seems like the expected behavior to me; it looks like you are mixing up world coordinates with on-screen coordinates.
Think of it this way: if you take a picture of a table in your living room from two different angles, it will look different in each image, but it occupies the same space in the room in both. The same can be said about entities when working with OpenGL: even though you move the camera, the coordinates of the entity do not change, only the perception of it.
Now, if I'm not mistaken, you can translate your world coordinates into on-screen coordinates by applying the same transformations that OpenGL applies. I would suggest taking a look at this: OpenGL: projecting mouse click onto geometry
Thumbs up for the ID buffer alternative. I have used it several times and it was really the right thing to do: no expensive geometry picking, and an almost total disconnection from the underlying geometry.
Still, you can only pick the closest geometry (the one "visible" on screen), not models that may be hidden behind it. The same problem appears when dealing with translucent materials.
One solution could be to do some ID peeling by rendering four objects into an RGBA ID texture (I have a feeling it could be non-optimal, but it's worth a shot).

Rasterizer not picking up GL_LINES as I would want it to

So I'm rendering this diagram each frame:
https://dl.dropbox.com/u/44766482/diagramm.png
Basically, each second it moves everything one pixel to the left and every frame it updates the rightmost pixel column with current data. So a lot of changes are made.
It is completely constructed from GL_LINES, always from bottom to top.
However those black missing columns are not intentional at all, it's just the rasterizer not picking them up.
I'm using integers for positions and bytes for colors, the projection matrix is exactly 1:1; translating by 1 means moving 1 pixel. Orthogonal.
So my problem is, how to get rid of the black lines? I suppose I could write the data to texture, but that seems expensive. Currently I use a VBO.
Render your columns as quads with a width of 1 pixel instead; OpenGL's rasterization rules will make sure you have no holes that way.
I realize the question is already closed, but you can also get the effect you want by drawing your lines centered at 0.5. A pixel's center is at 0.5, and a line drawn there will always be picked up by the rasterizer in the right place.
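With a 1:1 orthographic projection the fix amounts to shifting the integer endpoints to the pixel centre before uploading them; a trivial sketch (the helper name is hypothetical):

```cpp
#include <cassert>

// With a pixel-exact orthographic projection, a vertical GL_LINES column at
// integer x can straddle two pixel columns; placing it at x + 0.5 makes it
// pass exactly through each pixel's sample point.
float toPixelCenter(int coord) {
    return static_cast<float>(coord) + 0.5f;
}
```

You would apply this to the x coordinate of both endpoints of each column when filling the VBO.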