Rendering an image in a new perspective from 2 or more viewpoints - computer-vision

I have a screen that I want to act like a mirror, but I cannot mount a camera behind the screen to get the right perspective. I can mount cameras around the screen, and I know their positions in X/Y/Z space and their focal lengths. Is there any way to take the output of those cameras and render a new image from the perspective I need, one that would let the screen emulate a mirror?

Related

How to find overlapping fields of view?

If I have two cameras and I'm given their positions and orientations in the same coordinate system, is there any way I could detect overlapping fields of view? In other words, how can I tell whether something that's visible in the frame of one camera is also visible in another? I'm also given the view and projection matrices of the two cameras.
To detect two overlapping fields of view you'll want to do a collision check between the two viewing frustums (viewing volumes).
A frustum is a convex polyhedron, so you can use the separating axis theorem to do the check.
However, if you just want to know whether an object that is displayed in the frame of one camera is also displayed in the frame of another, the best way is to transform the object's world-space coordinates into the viewport space of both cameras. If the resulting coordinates land within [0, width] × [0, height] for both cameras and the depth indicates the point lies in front of the camera, then the object is in view of both.
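A minimal sketch of that check, using GLM for the math; inView, the matrices, and the point are placeholder names, not anything from the question:

```cpp
// Minimal visibility test: transform a world-space point by a camera's
// view and projection matrices and check that it lands inside the view volume.
// The matrices and the point are hypothetical inputs.
#include <glm/glm.hpp>

bool inView(const glm::mat4& view, const glm::mat4& proj, const glm::vec3& worldPoint)
{
    glm::vec4 clip = proj * view * glm::vec4(worldPoint, 1.0f);
    if (clip.w <= 0.0f)                         // behind the camera
        return false;
    glm::vec3 ndc = glm::vec3(clip) / clip.w;   // perspective divide to NDC
    return ndc.x >= -1.0f && ndc.x <= 1.0f &&
           ndc.y >= -1.0f && ndc.y <= 1.0f &&
           ndc.z >= -1.0f && ndc.z <= 1.0f;     // inside the near/far range too
}

// The object is visible in both frames if the test passes for both cameras:
// bool inBoth = inView(viewA, projA, p) && inView(viewB, projB, p);
```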
This page has a great diagram of the 3D transformation viewing pipeline if you want to read more on what viewspace and worldspace are.

light field rendering from camera array

I'm trying to implement something similar to "Dynamically Reparameterized Light Fields" (Isaksen, McMillan, & Gortler), where the light field is a series of cameras placed on a plane:
The paper states that the corresponding camera and pixels can be found with the mapping M_{s,t}^{F→D} = P_{s,t} ∘ T_F, i.e. the composition of the focal-surface transform T_F with the projection P_{s,t} of data camera (s,t).
The dataset that I'm using doesn't contain any camera parameters or any information about the distance between the cameras; I just know that they are placed uniformly on a plane. I have a free-moving camera and render a view quad, so I can get the 3D position of a point on the focal surface, but I don't know how to get the (s, t, u, v) parameters from that. Once I have those parameters I can render correctly.
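For reference, a minimal sketch of the standard two-plane lookup, assuming the data cameras lie on the plane z = 0 and the focal surface is a plane at z = focalDepth; all names (rayToSTUV, focalDepth) are illustrative, not from the paper or the dataset:

```cpp
// Two-plane lookup: intersect the viewing ray with the camera plane (z = 0)
// to get (s,t) and with the focal plane (z = focalDepth) to get (u,v).
#include <glm/glm.hpp>

struct STUV { float s, t, u, v; };

STUV rayToSTUV(const glm::vec3& rayOrigin, const glm::vec3& rayDir, float focalDepth)
{
    // Assumes the ray is not parallel to the planes (rayDir.z != 0).
    float tCam = -rayOrigin.z / rayDir.z;                // hit on the camera plane
    float tFoc = (focalDepth - rayOrigin.z) / rayDir.z;  // hit on the focal plane

    glm::vec3 onCameraPlane = rayOrigin + tCam * rayDir;
    glm::vec3 onFocalPlane  = rayOrigin + tFoc * rayDir;

    // (s,t) selects the nearest data cameras (given the grid spacing),
    // (u,v) indexes the pixel inside each of those cameras' images.
    return { onCameraPlane.x, onCameraPlane.y, onFocalPlane.x, onFocalPlane.y };
}
```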

Rendering Point Sprites across cameras in cube maps

I'm rendering a particle system of vertices, which are tessellated into quads in a geometry shader and textured/rendered as point sprites, then scaled in size depending on how far away they are from the camera. I'm trying to render every frame of my scene into cube maps, so essentially I place six cameras in the scene, point them in each direction for the faces of the cube, and save an image per face.
My point sprites are of varying sizes. When they are near the border of one camera's view (and are large enough), they appear in two cameras simultaneously. Since point sprites always face the camera, they are not continuous along the seam when I wrap my cube map back into 3D space. This is especially noticeable when the points are close to the camera, since they are larger and stretch further into both camera views. I'm also doing some alpha blending, so that may be contributing to the problem as well.
I don't think I can just cull points that are near the edge of a camera's view, because when I put everything back into 3D there would be strange areas where the cloud is more sparsely populated. Another thought was to blur the edges of each camera's image, but I think that would also give me a weird blurry zone back in 3D space. I could manually edit the frames in Photoshop so they look OK, but that would be a pain since it's an animation at 30 fps.
The image attached is a detail from the cube map. You can see the horizontal seam where the sprites are not lining up correctly, and a slightly less noticeable vertical one on the right side of the image. I'm sure that my camera settings are correct, because I've used this same camera setup in other scenes and my cubemaps look fine.
Anyone have ideas?
I'm doing this in openFrameworks / openGL fwiw.
Instead of making the sprites face each camera's image plane, make them face the shared origin of the six cameras (see the sketch below)? Not sure if this fixes everything, but intuitively I'd say it should look close to OK. Maybe this is already what you do, I have no idea.
(I'd like for this to be a comment, but no reputation)
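A minimal CPU-side sketch of this suggestion, assuming all six cube-map cameras share a single origin; sphericalBillboardAxes and its parameters are placeholder names, and the same math could live in the geometry shader instead:

```cpp
// Build billboard axes from the particle's direction toward the shared
// cube-map center instead of toward each individual camera, so the quad is
// identical in all six faces and the seams line up.
#include <glm/glm.hpp>
#include <cmath>

void sphericalBillboardAxes(const glm::vec3& particlePos,
                            const glm::vec3& cubeCenter,
                            glm::vec3& right, glm::vec3& up)
{
    glm::vec3 toCenter = glm::normalize(cubeCenter - particlePos);
    glm::vec3 worldUp(0.0f, 1.0f, 0.0f);

    // Avoid a degenerate cross product when the particle is straight above/below.
    if (std::fabs(glm::dot(toCenter, worldUp)) > 0.99f)
        worldUp = glm::vec3(1.0f, 0.0f, 0.0f);

    right = glm::normalize(glm::cross(worldUp, toCenter));
    up    = glm::cross(toCenter, right);
    // Quad corners: particlePos +/- 0.5 * size * right +/- 0.5 * size * up,
    // the same for every one of the six cameras.
}
```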

opengl selecting area on model

I need some help with selecting a surface area on a 3D model rendered in OpenGL by picking points with the mouse. I know how to get a single point in world coordinates, but I can't find a way to select an area. Later I need to remesh that selected area and map an image onto it, which I already know how to do.
Well, OpenGL by itself can't help you there. OpenGL is a drawing API: you draw things, but once the drawing commands have been executed, all that's left are pixels in a framebuffer, and OpenGL has no recollection of the geometry whatsoever.
You can use OpenGL to implement image-based area selection algorithms, for example by drawing each face with a unique index color into an off-screen framebuffer. Then, by looking at which values show up inside the selected area, you know which faces are present in it.
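A minimal sketch of that index-color pass, assuming the off-screen ID framebuffer is bound when reading back; encodeId and facesInRect are illustrative helpers, not part of the OpenGL API:

```cpp
// ID pass: each face is drawn with its index packed into an RGB color
// (background cleared to 0xFFFFFF, reserved as "no face"), then the pixels
// under the selected rectangle are read back and decoded.
#include <GL/gl.h>
#include <cstdint>
#include <set>
#include <vector>

inline void encodeId(uint32_t id, uint8_t rgb[3])   // color to draw face `id` with
{
    rgb[0] =  id        & 0xFF;
    rgb[1] = (id >>  8) & 0xFF;
    rgb[2] = (id >> 16) & 0xFF;
}

std::set<uint32_t> facesInRect(int x, int y, int w, int h)
{
    // Assumes the off-screen ID framebuffer is currently bound for reading.
    std::vector<uint8_t> pixels(static_cast<size_t>(w) * h * 3);
    glReadPixels(x, y, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

    std::set<uint32_t> ids;
    for (size_t i = 0; i < pixels.size(); i += 3)
    {
        uint32_t id = pixels[i] | (pixels[i + 1] << 8) | (pixels[i + 2] << 16);
        if (id != 0xFFFFFF)                      // skip the background
            ids.insert(id);
    }
    return ids;                                  // every face index inside the rectangle
}
```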
Later I need to remesh
This is called topology modification and is completely outside the scope of OpenGL.
that selected area and map an image over it which I know
You can use an image-based approach for this as well, but you must first decide how you want to map images onto faces. If you want to unwrap the mesh, OpenGL is of no help. However, if you want the user to be able to "directly draw" onto the mesh, this can be done by drawing the texture coordinates into another off-screen framebuffer and thereby reverse-mapping screen coordinates to texture coordinates.
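And a minimal sketch of that reverse-mapping readback, assuming a pass whose fragment shader writes the interpolated texture coordinate into a float color attachment cleared to a sentinel like -1; texcoordUnderCursor is an illustrative helper:

```cpp
// Read the texture coordinate stored under the cursor from the texcoord
// framebuffer; a negative value means no surface was hit there.
#include <GL/gl.h>

struct UV { float u, v; bool hit; };

UV texcoordUnderCursor(int mouseX, int mouseY, int windowHeight)
{
    float rgb[3] = { -1.0f, -1.0f, -1.0f };
    // The texcoord framebuffer must be bound for reading; note GL's bottom-left origin.
    glReadPixels(mouseX, windowHeight - 1 - mouseY, 1, 1, GL_RGB, GL_FLOAT, rgb);
    return { rgb[0], rgb[1], rgb[0] >= 0.0f };   // use (u,v) to paint into the mesh's texture
}
```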

Optimizing a render-to-texture process

I'm developing a render-to-texture process that involves using several cameras which render an entire scene of geometry. The output of these cameras is then combined and mapped directly to the screen by converting each geometry's vertex coordinates to screen coordinates in a vertex shader (I'm using GLSL here).
The process works fine, but I've realized a small problem: every RTT camera I create produces a texture with the same dimensions as the screen output. That is, if my viewport is sized to 1024x1024, each RTT camera will render into a 1024x1024 texture even if its geometry only occupies a 256x256 section of the screen.
The solution seems reasonably simple - adjust the RTT camera texture sizes to match the actual screen area the geometry occupies, but I'm not sure how to do that. That is, how can I (for example) determine that a geometry occupies a 256x256 area of a screen so that I can correspondingly set the RTT camera's output texture to 256x256 pixels?
The API I use (OpenSceneGraph) uses axis-aligned bounding boxes, so I'm out of luck there...
Thoughts?
Why out of luck? Can't you use the axis-aligned bounding box to compute the area?
My idea: take the 8 corner points of the bounding box and project them onto the image plane of the camera. For the resulting 2D points on the image plane you can determine an axis-aligned 2D bounding box again. That box is a correct upper bound for the screen area the geometry can occupy.
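A minimal sketch of this projection step, written with GLM-style matrices rather than OpenSceneGraph's own types; projectedBounds and its parameters are placeholder names:

```cpp
// Project the 8 corners of the world-space bounding box with the RTT camera's
// view-projection matrix and take the 2D bounds in pixel coordinates.
// Assumes all corners are in front of the camera; otherwise fall back to the
// full viewport.
#include <glm/glm.hpp>
#include <algorithm>

struct PixelRect { float minX, minY, maxX, maxY; };

PixelRect projectedBounds(const glm::vec3& bbMin, const glm::vec3& bbMax,
                          const glm::mat4& viewProj,
                          float viewportW, float viewportH)
{
    PixelRect r{ viewportW, viewportH, 0.0f, 0.0f };
    for (int i = 0; i < 8; ++i)
    {
        glm::vec3 corner((i & 1) ? bbMax.x : bbMin.x,
                         (i & 2) ? bbMax.y : bbMin.y,
                         (i & 4) ? bbMax.z : bbMin.z);
        glm::vec4 clip = viewProj * glm::vec4(corner, 1.0f);
        glm::vec3 ndc  = glm::vec3(clip) / clip.w;
        float px = (ndc.x * 0.5f + 0.5f) * viewportW;   // NDC -> pixel coordinates
        float py = (ndc.y * 0.5f + 0.5f) * viewportH;
        r.minX = std::min(r.minX, px);  r.maxX = std::max(r.maxX, px);
        r.minY = std::min(r.minY, py);  r.maxY = std::max(r.maxY, py);
    }
    // Round (maxX - minX) x (maxY - minY) up to the RTT texture size you allocate.
    return r;
}
```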