Portal Effect in OpenGL [closed]

I'm wondering how the game Portal works.
You can stand between one portal and the other; it's so fascinating.
Every time you shoot a portal, is the level maybe copied through it? Or is it only a camera/frustum/viewport effect?
I want to develop it in OpenGL. Any suggestions?

This has been a nightmare for them to implement. Play the game through with "director's commentary" and you'll get some interesting interviews mentioning it.
Here's the basic idea. When you look into the blue portal, you're not looking at a copy of the level, but simply at the same thing rendered from a different point of view. The engine renders the part seen through the portal from the point of view "behind" the orange portal, corresponding to your location in front of the blue one. It needs to take special care not to show anything that's in between this virtual viewpoint and the back of the orange portal. The view frustum is adjusted to include only the bits you can see through the blue portal.
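As a minimal sketch of that virtual viewpoint, assuming GLM for the matrix math (the function name and parameters here are illustrative, not Valve's code): transform the view through the portal you look into, flip 180 degrees so you look out of rather than into the far portal, then undo the destination portal's placement.

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Hypothetical helper: view matrix for the scene seen through a portal.
    // 'view' is the player's view matrix; srcPortal/dstPortal are the
    // model (local-to-world) matrices of the two portal quads.
    glm::mat4 portalViewMatrix(const glm::mat4& view,
                               const glm::mat4& srcPortal,  // portal you look into
                               const glm::mat4& dstPortal)  // portal you look out of
    {
        // 180-degree turn so the camera looks out of the far portal.
        glm::mat4 flip = glm::rotate(glm::mat4(1.0f), glm::radians(180.0f),
                                     glm::vec3(0.0f, 1.0f, 0.0f));
        return view * srcPortal * flip * glm::inverse(dstPortal);
    }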
But that's not the whole story, because what if you can see one portal through the other? You'll get an "infinite" feedback effect. In practice, the effect is not actually infinite; it just does enough iterations (say 40) until the images get small enough that you can't tell the difference. Each next iteration can be rendered at a smaller size, so we don't have to render the whole level 40 times at full resolution. But there's still work involved with clipping, culling, and so on.
In OpenGL, this could either be accomplished by rendering to a texture using framebuffer objects (FBOs), or rendering directly to the end result but clipped using the stencil buffer (thanks datenwolf!). But, as the paragraphs above show, that's only the beginning of the story. If you're just getting started with OpenGL, I'm afraid you're completely at the wrong end of the difficulty scale.
(Aside: There are also interesting things going on with the physics engine, where an object that's halfway through a portal needs to be in two places at once. Another big headache.)

The keywords are: stencil buffer, clip planes, and recursive rendering.
The stencil buffer is used to cut out the part of the viewport that is "the portal". This is achieved by rendering some helper geometry when the main view of the scene is rendered.
In the next step the scene is rendered a further time, but this time the scene is moved by an additional transformation, namely the one describing the relative alignment of the portals to each other. Then a clip-plane is placed on the portal plane and the scene is rendered. During this rendering another portal stencil may be rendered, which triggers a further recursion. To prevent infinite loops in a hall of mirrors situation there's a recursion limit.
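A rough sketch of that recursion in legacy OpenGL might look like the following; drawScene(), drawPortalQuad(), portalTransform() and MAX_PORTAL_DEPTH are placeholders for your own engine code, and depth-buffer handling inside the portal region is omitted for brevity.

    #include <GL/gl.h>

    void renderPortals(int depth)
    {
        if (depth > MAX_PORTAL_DEPTH)            // recursion limit for the
            return;                              // hall-of-mirrors case

        // 1. Mark the portal's screen area in the stencil buffer.
        glEnable(GL_STENCIL_TEST);
        glStencilFunc(GL_EQUAL, depth, 0xFF);    // only inside the current view
        glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);  // increment where the quad lands
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        drawPortalQuad();
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

        // 2. Re-render the scene through the portal, clipped to the portal
        //    plane so nothing between the two portals leaks into the image.
        glPushMatrix();
        glMultMatrixf(portalTransform());        // relative portal alignment
        GLdouble plane[4] = { 0.0, 0.0, -1.0, 0.0 };  // portal plane, local space
        glClipPlane(GL_CLIP_PLANE0, plane);
        glEnable(GL_CLIP_PLANE0);
        glStencilFunc(GL_EQUAL, depth + 1, 0xFF);     // draw only in the cutout
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
        drawScene();
        renderPortals(depth + 1);                // portals seen through this one
        glDisable(GL_CLIP_PLANE0);
        glPopMatrix();
    }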

I don't know the specifics, but this could be achieved in several ways. I assume that they use a camera at the location of the portals and a render-to-texture function to capture the scene from that view and render it at the location of the portal. They also allow for a large number of iterations; I'm still not certain how that works (likely by lowering the resolution of the render-to-texture at each level until it hits one pixel, or by stopping at a predefined recursion depth).
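For reference, the render-to-texture part of that approach could be set up roughly like this (standard FBO calls; width/height are assumed, error checking and an extension loader such as GLEW are left out):

    #include <GL/glew.h>

    GLuint fbo, colorTex, depthRb;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depthRb);

    // Render the scene from the portal's camera into this FBO, then bind
    // colorTex when drawing the portal quad itself.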

Related

Find out which triangles were drawn OpenGL

I have an idea and I want to know if this would be possible in any way. I want to render a scene and use the resulting image to find out which triangles were or are visible from my current point of view.
Let me give you an example: I would render the scene into a custom framebuffer and store an ID for every pixel; the ID would identify the original primitive. Now my problem is that I don't know how to find out which pixel belonged to which triangle. My first idea was to just pass an ID along the shader stages, but I don't know if that would be possible. If I can find out which primitives were drawn, I could cull the others. Is there any way to find out which pixel belonged to which (original) triangle?
There is a similar question here on Stack Overflow, but it does not really answer my question (see question).
Why do I want to do this?
I have a server-client scenario where my server is very powerful whereas my client is not. The server sends the model data to the client and the client renders it locally. To reduce the rendering time and the amount of memory needed, I want to do precalculations on the server and only send certain parts of the model to the client.
Edit: Changed my question because I misunderstood some concepts.
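One way the ID-per-pixel idea above can be realized is the built-in gl_PrimitiveID (available in fragment shaders since GLSL 1.50), written into an integer color attachment; a hypothetical minimal shader, assuming one draw call:

    // Hypothetical fragment shader writing the triangle index of the
    // current draw call into a GL_R32I color attachment.
    const char* idFragmentShader =
        "#version 150\n"
        "out int primitiveId;\n"              // bound to the integer attachment
        "void main() {\n"
        "    primitiveId = gl_PrimitiveID;\n" // index of the source triangle
        "}\n";

    // Read the IDs back afterwards (slow, but fine for a precalculation):
    // glReadBuffer(GL_COLOR_ATTACHMENT0);
    // glReadPixels(0, 0, w, h, GL_RED_INTEGER, GL_INT, idBuffer);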

How can I change an object's rate of rotation in OpenGL?

I'm working with a simple rotating polygon that should speed up when the user clicks and drags upward and slow down when the user clicks and drags downward. Unfortunately, I have searched EVERYWHERE and cannot seem to find any specific GL function or variable that will easily let me manipulate the speed (I've searched for "frame rate," too...).
Is there an easy call/series of calls to do something like this, or will I actually need to do things with timers at different segments of the code?
OpenGL draws stuff exactly where you tell it to. It has no notion of what was rendered the previous frame or what will be rendered the next frame. OpenGL has no concept of time, and without time, you cannot have speed.
Speed is something you have to manage. Generally, this is done by taking your intended speed, multiplying it by the time interval since you last rendered, and adding that into the current rotation (or position). Then render at that new orientation/position. This is all up to you however.
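A minimal sketch of that update loop, using GLUT for the clock (drawPolygon() and the speed value are placeholders for your own code):

    #include <GL/glut.h>

    float angle = 0.0f;             // current orientation in degrees
    float degreesPerSecond = 90.0f; // the "speed" your drag gesture adjusts

    void update()
    {
        static int lastMs = glutGet(GLUT_ELAPSED_TIME);
        int nowMs = glutGet(GLUT_ELAPSED_TIME);
        float dt = (nowMs - lastMs) / 1000.0f;   // seconds since last frame
        lastMs = nowMs;
        angle += degreesPerSecond * dt;          // speed * time = rotation step
    }

    void display()
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glLoadIdentity();
        glRotatef(angle, 0.0f, 0.0f, 1.0f);      // render at the new orientation
        drawPolygon();                           // your polygon
        glutSwapBuffers();
    }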

OpenGL Picking from a large set

I'm trying to, in JOGL, pick from a large set of rendered quads (several thousand). Does anyone have any recommendations?
To give you more detail, I'm plotting a large set of data as billboards with procedurally created textures.
I've seen this post OpenGL GL_SELECT or manual collision detection? and have found it helpful. However it can take my program up to several minutes to complete a rendering of the full set, so I don't think drawing 2x (for color picking) is an option.
I'm currently drawing with calls to glBegin/glVertex.../glEnd. If I made the switch to batched rendering on the GPU with VAOs and VBOs, do you think I would get a speedup large enough to make color picking feasible?
If not, given all of the recommendations against using GL_SELECT, do you think it would be worth me using it?
I've investigated multithreaded CPU approaches to picking these quads that sidestep OpenGL altogether. Do you think an OpenGL-less CPU solution is the way to go?
Sorry for all the questions. My main question remains: what's a good way to pick from a large set of quads using OpenGL (JOGL)?
The best way to pick from a large number of quads cannot be easily defined. I don't like color picking or similar techniques very much because they seem too impractical for most situations. I never understood why so many tutorials aimed at people new to OpenGL, or even to programming, focus on picking techniques that are useless for nearly everything. For example: try to get the pixel you clicked on in a heightmap: not possible. Try to locate the exact mesh you clicked on in a model: impractical.
If you have a large number of quads you will probably need good spatial partitioning or at least (better: also) a scene graph. Ok, you don't strictly need this, but it helps A LOT. Look at some scene graph tutorials for further information; they're a good thing to know when you start with 3D programming, because you get to know a lot of concepts and not only OpenGL code.
So what should you do to start with some picking? Unproject the position of your mouse cursor using the inverse of your modelview matrix (IIRC, gluUnProject(...) does this for you). With the orientation of your camera you can now cast a ray into your spatial structure (or the scene graph that holds one). Then check for collisions with your quads. I currently have no link, but if you search for "inverse modelview matrix" you should find pages that explain this better and in more detail than would be practical here.
With this raycasting based technique you will be able to find your quad in O(log n), where n is the number of quads you have. With some heuristics based on the exact layout of your application (your question is too generic to be more specific) you can improve this a lot for most cases.
An easy spatial structure for this is, for example, a quadtree. However, you should start with the raycasting first to fully understand the technique.
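A minimal sketch of the unprojection step (legacy GL/GLU; mouseX/mouseY are window coordinates here, with y flipped to OpenGL's convention):

    #include <GL/glu.h>

    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    GLdouble x = mouseX;
    GLdouble y = viewport[3] - mouseY;           // flip y to GL convention

    GLdouble nearPt[3], farPt[3];
    gluUnProject(x, y, 0.0, model, proj, viewport,   // point on the near plane
                 &nearPt[0], &nearPt[1], &nearPt[2]);
    gluUnProject(x, y, 1.0, model, proj, viewport,   // point on the far plane
                 &farPt[0], &farPt[1], &farPt[2]);

    // Ray origin = nearPt, direction = normalize(farPt - nearPt);
    // walk this ray through your quadtree and test the quads it reaches.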
I've never faced such a problem, but in my opinion CPU-based picking is the best way to try.
If you have a large set of quads, you can group them by space to avoid testing all of them. For example, group the quads into two boxes and first test which box the ray hits; then only the quads inside that box need individual tests.
I just implemented color picking, but glReadPixels is slow here (I've read somewhere that it can be bad for the asynchronous behaviour between the GL and the CPU).
Another possibility seems to be transform feedback with a geometry shader that does the scissor test. The GS can discard all faces that do not contain the mouse position; the transform feedback buffer then contains exactly the information about the hovered meshes.
You probably want to write the depth to the transform feedback buffer too, so that you can find the topmost hovered mesh.
This approach also works nicely with instancing (additionally write the instance ID to the buffer).
I haven't tried it yet, but I guess it will be a lot faster than using glReadPixels.
I only found this reference for this approach.
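A hypothetical geometry shader along those lines might look like this (names such as meshId and mouseNdc are illustrative; pickedId/pickedDepth would be the transform feedback varyings, with GL_RASTERIZER_DISCARD enabled during the pick pass):

    // Emit one record per triangle whose screen footprint contains the
    // mouse position (passed in normalized device coordinates).
    const char* pickGeometryShader =
        "#version 150\n"
        "layout(triangles) in;\n"
        "layout(points, max_vertices = 1) out;\n"
        "uniform vec2 mouseNdc;\n"
        "flat in int meshId[];\n"            // forwarded by the vertex shader
        "out int pickedId;\n"
        "out float pickedDepth;\n"
        "bool inside(vec2 p, vec2 a, vec2 b, vec2 c) {\n"
        "    // same-side (barycentric sign) point-in-triangle test\n"
        "    float d1 = (p.x-b.x)*(a.y-b.y) - (a.x-b.x)*(p.y-b.y);\n"
        "    float d2 = (p.x-c.x)*(b.y-c.y) - (b.x-c.x)*(p.y-c.y);\n"
        "    float d3 = (p.x-a.x)*(c.y-a.y) - (c.x-a.x)*(p.y-a.y);\n"
        "    return (d1 >= 0.0 && d2 >= 0.0 && d3 >= 0.0) ||\n"
        "           (d1 <= 0.0 && d2 <= 0.0 && d3 <= 0.0);\n"
        "}\n"
        "void main() {\n"
        "    vec2 a = gl_in[0].gl_Position.xy / gl_in[0].gl_Position.w;\n"
        "    vec2 b = gl_in[1].gl_Position.xy / gl_in[1].gl_Position.w;\n"
        "    vec2 c = gl_in[2].gl_Position.xy / gl_in[2].gl_Position.w;\n"
        "    if (inside(mouseNdc, a, b, c)) {\n"
        "        pickedId = meshId[0];\n"
        "        // one vertex's depth, approximate but enough for sorting\n"
        "        pickedDepth = gl_in[0].gl_Position.z / gl_in[0].gl_Position.w;\n"
        "        EmitVertex();\n"
        "        EndPrimitive();\n"
        "    }\n"
        "}\n";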
I'm using the solution that I've borrowed from the DirectX SDK; there's a nice example of how to detect the selected polygon in a vertex buffer object.
The same algorithm works nicely with OpenGL.

Selection / glRenderMode(GL_SELECT)

In order to do object picking in OpenGL, do I really have to render the scene twice?
I realize rendering the scene is supposed to be cheap, going at 30fps.
But if every selection requires an additional call to RenderScene(),
then if I click 30 times a second, the GPU has to render twice as many frames?
One common trick is to have two separate functions to render your scene. When you're in picking mode, you can render a simplified version of the world, without the things you don't want to pick. So terrain, inert objects, etc, don't need to be rendered at all.
The time to render a stripped-down scene should be much less than the time to render a full scene. Even if you click 30 times a second (!), your frame rate should not be impacted much.
First of all, the only way you're going to get 30 mouse clicks per second is if you have some other code simulating mouse clicks. For a person, 10 clicks a second would be pretty fast -- and at that, they wouldn't have any chance to look at what they'd selected -- that's just clicking the button as fast as possible.
Second, when you're using GL_SELECT, you normally want to use gluPickMatrix to give it a small area to render, typically a (say) 10x10 pixel square, centered on the click point. At least in a typical case, the vast majority of objects will fall entirely outside that area, and be culled immediately (won't be rendered at all). This speeds up that rendering pass tremendously in most cases.
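In code, that typically looks something like the following (legacy GL; clickX/clickY, aspect, and renderSceneForPicking() stand in for your own values and pick-pass renderer):

    #include <GL/glu.h>

    GLuint buffer[512];
    GLint viewport[4];
    glGetIntegerv(GL_VIEWPORT, viewport);
    glSelectBuffer(512, buffer);
    glRenderMode(GL_SELECT);

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    gluPickMatrix((GLdouble)clickX, (GLdouble)(viewport[3] - clickY),
                  10.0, 10.0, viewport);        // 10x10 px square at the click
    gluPerspective(45.0, aspect, 0.1, 100.0);   // your normal projection
    glInitNames();
    glPushName(0);
    renderSceneForPicking();                    // the stripped-down pass

    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    GLint hits = glRenderMode(GL_RENDER);       // objects inside the square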
There have been some good suggestions already about how to optimize picking in GL. That will probably work for you.
But if you need more performance than you can squeeze out of gl-picking, then you may want to consider doing what most game-engines do. Since most engines already have some form of a 3D collision detection system, it can be much faster to use that. Unproject the screen coordinates of the click and run a ray-vs-world collision test to see what was clicked on. You don't get to leverage the GPU, but the volume of work is much smaller. Even smaller than the CPU-side setup work that gl-picking requires.
Select based on simpler collision hulls, or even just bounding boxes. Performance then scales with the number of objects/hulls in the scene rather than with the amount of geometry plus the number of objects.
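For the bounding-box variant, the core test is the classic slab method; a minimal sketch with illustrative types:

    #include <algorithm>

    struct Ray  { float ox, oy, oz, dx, dy, dz; };  // origin + direction
    struct AABB { float min[3], max[3]; };

    bool rayHitsBox(const Ray& r, const AABB& b)
    {
        float o[3] = { r.ox, r.oy, r.oz };
        float d[3] = { r.dx, r.dy, r.dz };
        float tmin = 0.0f, tmax = 1e30f;
        for (int i = 0; i < 3; ++i) {
            float inv = 1.0f / d[i];             // IEEE inf covers d[i] == 0
            float t0 = (b.min[i] - o[i]) * inv;
            float t1 = (b.max[i] - o[i]) * inv;
            if (inv < 0.0f) std::swap(t0, t1);
            tmin = std::max(tmin, t0);
            tmax = std::min(tmax, t1);
            if (tmax < tmin) return false;       // slabs don't overlap: miss
        }
        return true;
    }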

OpenGL game development - scenes that span far into view

I am working on a 2d game. Imagine a XY plane and you are a character. As your character walks, the rest of the scene comes into view.
Imagine that the XY plane is quite large and there are other characters outside of your current view.
Here is my question: with OpenGL, if those objects outside of the current view aren't rendered, do they still eat up processing time?
Also, what are some approaches to avoid rendering parts of the scene that aren't in view? If I have a cube that is 1000 units away from my current position, I don't want that object rendered. How can I make OpenGL not render it?
I guess the easiest approach is to calculate the position and then not draw the cube/object if it is too far away.
The OpenGL FAQ on "Clipping, Culling and Visibility Testing" says this:
OpenGL provides no direct support for determining whether a given primitive will be visible in a scene for a given viewpoint. At worst, an application will need to perform these tests manually. The previous question contains information on how to do this.
Go ahead and read the rest of that link, it's all relevant.
If you've set up your scene graph correctly objects outside your field of view should be culled early on in the display pipeline. It will require a box check in your code to verify that the object is invisible, so there will be some processing overhead (but not much).
If you organise your objects into a sensible hierarchy then you could cull large sections of the scene with only one box check.
Typically your application must perform these optimisations - OpenGL is literally just the rendering part, and doesn't perform object management or anything like that. If you pass in data for something invisible it still has to transform the relevant coordinates into view space before it can determine that it's entirely off-screen or beyond one of your clip planes.
There are several ways of culling invisible objects from the pipeline. Checking if an object is behind the camera is probably the easiest and cheapest check to perform, since you can reject half your data set on average with a simple calculation per object. It's not much harder to perform the same sort of test against the actual view frustum to reject everything that isn't visible at all.
Obviously in a complex game you won't want to have to do this for every tiny object, so it's typical to group them, either hierarchically (eg. you wouldn't render a gun if you've already determined that you're not rendering the character that holds it), spatially (eg. dividing the world up into a grid/quadtree/octree and rejecting any object that you know is within a zone that you have already determined is currently invisible), or more commonly a combination of both.
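A minimal sphere-vs-frustum check of the kind used for such grouping might look like this (the six planes, with inward-facing normals, are assumed to be extracted from your projection*view matrix):

    struct Plane { float a, b, c, d; };

    bool sphereVisible(const Plane planes[6],
                       float cx, float cy, float cz, float radius)
    {
        for (int i = 0; i < 6; ++i) {
            float dist = planes[i].a * cx + planes[i].b * cy +
                         planes[i].c * cz + planes[i].d;
            if (dist < -radius)      // wholly behind one plane: cull it
                return false;
        }
        return true;                 // intersects or is inside the frustum
    }

    // Test a whole quadtree/octree node's bounding sphere first; if the
    // node fails, skip every object inside it without further checks.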
"the only winning move is not to play"
Every glVertex etc. is going to be a performance hit regardless of whether it ultimately gets rendered on your screen. The only way to get around that is to not draw (i.e. cull) objects which won't ever be rendered anyway.
The most common method is to have a viewing frustum tied to your camera. Couple that with an octree or quadtree, depending on whether your game is 3D or 2D, so you don't need to check every single game object against the frustum.
The underlying driver may do some culling behind the scenes, but you can't depend on that since it's not part of the OpenGL standard. Maybe your computer's driver does it, but maybe someone else's (who might run your game) doesn't. It's best for you do to your own culling.