I am using PyOpenGL to draw a sphere and texture it with a 360-degree image. I want to be able to specify a crop of the sphere's surface and project it onto the corresponding tangent plane.
See the linked paper for an example (image taken from http://arxiv.org/pdf/1708.00919.pdf).
I have two questions:
Knowing the UV coordinates of the desired crop, how do I specify the crop on the sphere's surface?
What is the best way to project this crop into the tangent plane and save the resulting projection?
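For reference, here is a minimal numpy sketch of one way to handle both questions at once: pick a crop center (longitude, latitude) and a field of view, then resample the equirectangular 360-degree texture onto the tangent plane with an inverse gnomonic projection. This assumes the image uses the usual equirectangular layout; the function name and arguments below are illustrative, not from the thread.

```python
import numpy as np

def gnomonic_crop(equirect, lon0, lat0, fov, out_size):
    """equirect: HxWx3 equirectangular image, lon0/lat0/fov in radians,
    out_size: output resolution in pixels. Returns the tangent-plane crop."""
    H, W = equirect.shape[:2]
    # Tangent-plane coordinates spanning the requested field of view.
    half = np.tan(fov / 2.0)
    xs = np.linspace(-half, half, out_size)
    x, y = np.meshgrid(xs, -xs)            # y grows upward on the plane

    rho = np.hypot(x, y)
    c = np.arctan(rho)
    cos_c, sin_c = np.cos(c), np.sin(c)
    rho = np.where(rho == 0, 1e-12, rho)   # avoid division by zero at the center

    # Inverse gnomonic projection: which (lon, lat) does each plane pixel see?
    lat = np.arcsin(cos_c * np.sin(lat0) + y * sin_c * np.cos(lat0) / rho)
    lon = lon0 + np.arctan2(x * sin_c,
                            rho * np.cos(lat0) * cos_c - y * np.sin(lat0) * sin_c)

    # Longitude/latitude -> pixel coordinates of the equirectangular image.
    u = ((lon + np.pi) % (2 * np.pi)) / (2 * np.pi) * (W - 1)
    v = (np.pi / 2 - lat) / np.pi * (H - 1)
    return equirect[v.astype(int), u.astype(int)]
```

The result can be saved with any image library (e.g. passing the returned array to an image writer), without going through the GL framebuffer at all.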
I'm trying to realize a virtual Pan-Tilt-Zoom (PTZ) camera based on data from a physical fisheye camera (180-degree FOV).
I think I need to implement the following sequence of steps:
1. Get the coordinates of the center of the fisheye circle in the coordinates of the fisheye sensor matrix.
2. Get the radius of the fisheye circle in the same coordinate system.
3. Generate a sphere equation with the same center and radius as the flat fisheye image on the camera sensor.
4. Project all colored points from the flat image onto the upper hemisphere.
5. Choose angles in the XY plane and the XZ plane to describe the direction of view of the virtual PTZ.
6. Choose a view angle and mark it with a circle around the virtual PTZ view vector, painted on the surface of the hemisphere.
7. Generate the equation of the plane whose intersection with the hemisphere is that circle around the direction of view.
8. Move all colored points from the circle to the plane of that circle, using the direction from the hemisphere edge towards the hemisphere center for the projection.
9. Paint all unpainted points inside the circle of projection using interpolation (as implemented in cv::remap).
In my opinion the most important step is lifting the colored points from the flat image onto the 3D hemisphere.
My question is:
Would it be correct to simply set the Z coordinate of every colored point of the flat image according to the hemisphere equation, in order to lift the points from the image plane to the hemisphere surface?
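For what it's worth, that lift would look like the minimal sketch below (the names are illustrative). Note that setting z = sqrt(R^2 - x^2 - y^2) directly corresponds to an orthographic fisheye model; common lenses are closer to equidistant or equisolid projections and would need a different radial mapping first, so treat this only as an illustration of the step.

```python
import numpy as np

def lift_to_hemisphere(points_xy, cx, cy, R):
    """points_xy: Nx2 pixel coordinates, (cx, cy): fisheye circle center,
    R: fisheye circle radius. Returns Nx3 points on the upper hemisphere."""
    x = points_xy[:, 0] - cx
    y = points_xy[:, 1] - cy
    # Hemisphere equation x^2 + y^2 + z^2 = R^2, keeping the positive root.
    z = np.sqrt(np.maximum(R * R - x * x - y * y, 0.0))
    return np.stack([x, y, z], axis=1)
```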
I've got a 2D texture on a 3D sphere and I want to know how to transfer a 2D coordinate on the texture into a 3D coordinate. I know it has to do with the clipping of the texture: I'm using the automatic clipping function of OpenGL to put the texture on the sphere.
Edit:
To clarify the problem:
I have a 2D plane, which is an image containing borders drawn in red. I put objects on this plane that have a collision radius and move around wildly. Whenever the objects collide with the red border, they bounce back.
Now I take this 2D plane and wrap it around a 3D sphere. At the positions of the circles I want to put 3D models that move on the sphere. The problem is getting from the "simple" 2D coordinates on the plane to the more complicated 3D coordinates on the sphere, so that the 3D models are positioned correctly.
My first approach would be to map the 2D coordinates to spherical coordinates, which can easily be converted into 3D coordinates, but how would I do this?
You don't "convert" the 2D coordinate to a 3D coordinate. The 2D coordinates you have are UV coordinates (ranging from 0 to 1), and they represent a position in texture space. What you do is map these UV coordinates to the vertices.
You can read more about UV mapping here.
In OpenGL, it depends on which version you are using: either you issue glTexCoord calls before the glVertex calls (older versions of OpenGL), or you store the texture coordinates in a VBO as a vertex attribute, to be interpolated and used in the fragment shader (newer versions of OpenGL).
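For the old fixed-function path, the pairing looks like this in PyOpenGL (a minimal sketch; the quad vertices and UV values are just placeholders):

```python
from OpenGL.GL import glBegin, glEnd, glTexCoord2f, glVertex3f, GL_QUADS

def draw_textured_quad():
    # Each glTexCoord2f call sets the UV that the *next* glVertex3f will use.
    glBegin(GL_QUADS)
    glTexCoord2f(0.0, 0.0); glVertex3f(-1.0, -1.0, 0.0)
    glTexCoord2f(1.0, 0.0); glVertex3f( 1.0, -1.0, 0.0)
    glTexCoord2f(1.0, 1.0); glVertex3f( 1.0,  1.0, 0.0)
    glTexCoord2f(0.0, 1.0); glVertex3f(-1.0,  1.0, 0.0)
    glEnd()
```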
If you are planning to use the gluSphere() function, you don't need to worry about calculating UV texture coordinates, since OpenGL generates them for you (enable this with gluQuadricTexture()).
Here you can check the gluSphere() documentation
Here is some example code
If you are planning to render your own sphere, check this question
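As for the asker's "first approach" (map UV to spherical coordinates, then to 3D): assuming the standard equirectangular wrap, where u runs around the equator and v runs from pole to pole, the conversion is a sketch like the following (the exact axis convention may differ from the one gluSphere uses):

```python
import math

def uv_to_sphere(u, v, radius=1.0):
    """Map texture coordinates (u, v) in [0, 1] to a point on the sphere,
    assuming u wraps around the equator and v runs from pole to pole."""
    lon = 2.0 * math.pi * u          # longitude, 0..2*pi
    lat = math.pi * (v - 0.5)        # latitude, -pi/2..pi/2
    x = radius * math.cos(lat) * math.cos(lon)
    y = radius * math.sin(lat)
    z = radius * math.cos(lat) * math.sin(lon)
    return x, y, z
```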
I am working on voxelisation using the rendering pipeline and now I successfully voxelise the scene using vertex+geometry+fragment shaders. Now my voxels are stored in a 3D texture which has size, for example, 128x128x128.
My original model of the scene is centered at (0,0,0) and extends along both the positive and negative axes. The texture, however, is centered at (63,63,63) in texture coordinates.
I implemented a simple ray marching for visualization, but it doesn't take camera movement into account (I can render only from very fixed positions, because my rays have to be generated taking into account the different coordinates of the 3D texture).
My question is: how can I map my rays so that they are generated at a point Po with direction D in the coordinates of my 3D model, yet intersect the voxels at the corresponding positions in texture coordinates, with every movement of the camera in the 3D world remapped into voxel coordinates?
Right now I generate the rays in this way:
create a quad in front of the camera at position (63,63,-20)
cast rays in direction towards (63,63,3)
I think you should store your entire view transform matrix in your shader uniform params. Then, for each shader execution, you can use the pixel's screen coordinates and the view transform to compute the view-ray direction for that particular pixel.
Having the ray direction and the camera position, you use them just the same way as you do now.
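A CPU-side numpy sketch of that idea, just to show the math (the matrix, camera position and texture-size parameters below are assumptions for illustration):

```python
import numpy as np

def pixel_ray(px, py, width, height, inv_view_proj, cam_pos):
    """Return a normalized world-space ray direction through pixel (px, py)."""
    # Pixel -> normalized device coordinates (-1..1).
    ndc = np.array([2.0 * (px + 0.5) / width - 1.0,
                    1.0 - 2.0 * (py + 0.5) / height,
                    1.0, 1.0])
    # Unproject a far-plane point and build the ray from the camera position.
    world = inv_view_proj @ ndc
    world = world[:3] / world[3]
    d = world - cam_pos
    return d / np.linalg.norm(d)

def model_to_texture(p, model_min, model_max, tex_size=128):
    """Map a model-space point into 3D texture (voxel) coordinates."""
    return (p - model_min) / (model_max - model_min) * (tex_size - 1)
```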
There's also another way to do this that you can try:
Let's say you have a cube (0,0,0)->(1,1,1) and for each corner you assign a color based on its position, like (1,0,0) is red, etc.
Now, every frame, you draw your cube's front faces into one texture and its back faces into a second texture.
In the final rendering you can use both textures to get the entry and exit points of the view ray, already in texture space, which makes your final shader much simpler.
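Once the two color textures are rendered, the marching itself reduces to something like the sketch below (written here as CPU-side numpy rather than GLSL, with maximum-intensity accumulation chosen purely as an example):

```python
import numpy as np

def march(volume, entry, exit_point, steps=128):
    """entry/exit_point: positions in [0,1]^3 read from the front/back-face
    color textures for this pixel; volume: the 3D voxel array."""
    direction = exit_point - entry
    scale = np.array(volume.shape) - 1
    acc = 0.0
    for t in np.linspace(0.0, 1.0, steps):
        p = ((entry + t * direction) * scale).astype(int)   # texture space
        acc = max(acc, float(volume[tuple(p)]))             # e.g. max intensity
    return acc
```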
You can read better descriptions here:
http://graphicsrunner.blogspot.com/2009/01/volume-rendering-101.html
http://web.cse.ohio-state.edu/~tong/vr/
I'm trying to implement the tiled deferred rendering method and now I'm stuck. I'm computing min/max depth for each tile (32x32) and storing it in a texture. Then I want to compute a screen-space bounding box (bounding rectangle), represented by the lower-left and upper-right corners, for every point light (sphere) in my scene (see the picture from my app). This, together with the min/max depth, will be used to check whether a light affects the current tile.
Problem is I have no idea how to do this. Any idea, source code or exact math?
Update
Screen-space is basically a 2D entity, so instead of a bounding box think of a bounding rectangle.
Here is a simple way to compute it (a code sketch follows the two steps):
Project the 8 corner points of your world-space bounding box onto the screen using your ModelViewProjection matrix.
Find the bounding rectangle of these projected points (which is just the min/max of their X and Y coordinates).
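A numpy sketch of those two steps (illustrative only; corners that end up behind the near plane would need clipping, which is ignored here):

```python
import numpy as np

def screen_rect(corners_ws, mvp, width, height):
    """corners_ws: 8x3 world-space corners of the light's bounding box,
    mvp: 4x4 ModelViewProjection matrix. Returns (x_min, y_min, x_max, y_max)
    in pixels."""
    pts = np.hstack([corners_ws, np.ones((8, 1))]) @ mvp.T   # to clip space
    ndc = pts[:, :3] / pts[:, 3:4]                           # perspective divide
    x = (ndc[:, 0] * 0.5 + 0.5) * width
    y = (ndc[:, 1] * 0.5 + 0.5) * height
    return x.min(), y.min(), x.max(), y.max()
```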
A more sophisticated way can be used to compute a screen-space bounding rect for a point light source: calculate four planes that pass through the camera position and are tangent to the light's sphere of illumination (the light radius). The intersection of each tangent plane with the image plane gives four lines on the image plane, and these lines define the resulting bounding rectangle.
Refer to this article for math details: http://www.altdevblogaday.com/2012/03/01/getting-the-projected-extent-of-a-sphere-to-the-near-plane/
The code loads a bin file which contains (x, y, z) coordinates for a set of points.
Let's say the points form a cube, with some points inside the cube as well. How do I make the cube look like a solid surface instead of a set of points?
I read about marching cubes and barycentric coordinates, but I don't understand how to implement them in C++ and OpenGL. Thanks.
If they form an axis-aligned cube, all you need to do is draw a box from (min(x), min(y), min(z)) to (max(x), max(y), max(z)), where min(x) is the minimum of all the x coordinates.
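A sketch of that in PyOpenGL (the thread is about C++, but the GL calls are the same; `points` is assumed to be an Nx3 numpy array of the loaded coordinates):

```python
import numpy as np
from OpenGL.GL import glBegin, glEnd, glVertex3f, GL_QUADS

def draw_bounding_cube(points):
    x0, y0, z0 = map(float, points.min(axis=0))   # (min x, min y, min z)
    x1, y1, z1 = map(float, points.max(axis=0))   # (max x, max y, max z)
    # The 8 corners of the axis-aligned box.
    c = [(x0, y0, z0), (x1, y0, z0), (x1, y1, z0), (x0, y1, z0),
         (x0, y0, z1), (x1, y0, z1), (x1, y1, z1), (x0, y1, z1)]
    # Corner indices of the 6 faces.
    faces = [(0, 1, 2, 3), (4, 5, 6, 7), (0, 1, 5, 4),
             (2, 3, 7, 6), (1, 2, 6, 5), (0, 3, 7, 4)]
    glBegin(GL_QUADS)
    for f in faces:
        for i in f:
            glVertex3f(*c[i])
    glEnd()
```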