I am working on voxelisation using the rendering pipeline, and I can now successfully voxelise the scene using vertex + geometry + fragment shaders. The voxels are stored in a 3D texture with a size of, for example, 128x128x128.
My original model of the scene is centered at (0,0,0) and extends along both the positive and negative axes. The texture, however, is centered at (63,63,63) in texture coordinates.
I implemented a simple ray marching for visualisation, but it doesn't take the camera movement into account (I can only render from a few fixed positions, because my rays have to be generated with the different coordinate system of the 3D texture in mind).
My question is: how can I map my rays so that they are generated at a point Po with direction D in the coordinates of my 3D model, but intersect the voxels at the corresponding positions in texture coordinates, so that every movement of the camera in the 3D world is remapped into voxel coordinates?
Right now I generate the rays in this way:
create a quad in front of the camera at position (63,63,-20)
cast rays in the direction of (63,63,3)
I think you should store your entire view transform matrix in your shader uniform parameters. Then, for each shader invocation, you can use its screen coordinates and the view transform to compute the view ray direction for that particular pixel.
Once you have the ray direction and the camera position, you use them just as you do now.
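For example, something along these lines, an untested sketch for a fragment shader run on a full-screen quad; the uniform names and the world-space bounds of the voxelised region (volumeMin/volumeMax) are placeholders you would fill in from your own setup:

#version 330 core
// Untested sketch: ray generation and marching for a full-screen quad.
// The vertex shader is assumed to pass the quad's NDC position through as "ndc".
uniform mat4 invViewProj;    // inverse(projection * view)
uniform vec3 cameraPos;      // camera position in world space
uniform vec3 volumeMin;      // world-space bounds of the voxelised region (assumed known)
uniform vec3 volumeMax;
uniform sampler3D voxels;

in  vec2 ndc;                // quad position in [-1, 1], interpolated
out vec4 fragColor;

// Map a world-space point into [0,1]^3 texture space.
vec3 worldToTex(vec3 p)
{
    return (p - volumeMin) / (volumeMax - volumeMin);
}

void main()
{
    // Unproject the pixel onto the far plane to get the world-space ray direction.
    vec4 farPoint = invViewProj * vec4(ndc, 1.0, 1.0);
    vec3 dir = normalize(farPoint.xyz / farPoint.w - cameraPos);

    // March in world space, but sample in texture space.
    vec3  p        = cameraPos;
    vec4  color    = vec4(0.0);
    float stepSize = 0.01;   // fixed step for illustration; a real marcher would
                             // first clip the ray against the volume's bounding box
    for (int i = 0; i < 512 && color.a < 0.99; ++i)
    {
        p += dir * stepSize;
        vec3 tc = worldToTex(p);
        if (any(lessThan(tc, vec3(0.0))) || any(greaterThan(tc, vec3(1.0))))
            continue;        // this sample is outside the voxelised region
        vec4 s = texture(voxels, tc);
        color.rgb += (1.0 - color.a) * s.a * s.rgb;
        color.a   += (1.0 - color.a) * s.a;
    }
    fragColor = color;
}

This way the camera can move freely in world space; only the worldToTex() mapping ties the rays to the texture's coordinate system.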
There's also another way to do this that you can try:
Let's say you have a cube (0,0,0)->(1,1,1) and for each corner you assign a color based on its position, like (1,0,0) is red, etc.
Now, every frame, you draw the cube's front faces to one texture and its back faces to a second texture.
In the final rendering pass you can use both textures to get the entry and exit points of the ray, already in texture space, which makes your final shader much simpler.
You can read better descriptions here:
http://graphicsrunner.blogspot.com/2009/01/volume-rendering-101.html
http://web.cse.ohio-state.edu/~tong/vr/
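A rough sketch of that final pass, untested; the texture and variable names are placeholders, and the two 2D textures are assumed to hold the cube-space positions written in the two earlier passes:

#version 330 core
// Untested sketch of the final ray-marching pass.
uniform sampler2D frontFaces;   // cube-space positions of the front faces
uniform sampler2D backFaces;    // cube-space positions of the back faces
uniform sampler3D voxels;

in  vec2 uv;                    // screen-space UV of the full-screen quad
out vec4 fragColor;

void main()
{
    vec3 enter = texture(frontFaces, uv).xyz;   // ray entry point, already in [0,1]^3
    vec3 exit  = texture(backFaces,  uv).xyz;   // ray exit point
    vec3 span  = exit - enter;
    if (length(span) < 1e-5)
        discard;                                // this pixel does not hit the volume

    const int STEPS = 128;
    vec3 delta = span / float(STEPS);
    vec3 p     = enter;
    vec4 color = vec4(0.0);
    for (int i = 0; i < STEPS && color.a < 0.99; ++i)
    {
        vec4 s = texture(voxels, p);
        color.rgb += (1.0 - color.a) * s.a * s.rgb;
        color.a   += (1.0 - color.a) * s.a;
        p += delta;
    }
    fragColor = color;
}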
I want to use Java and OpenGL (through LWJGL) to render a 3D object. I also use GLFW to set up windows, contexts, etc.
I have a custom class to represent a unit sphere, which stores coordinates of every vertex as well as an array of integers to represent the triangle mesh. It also stores the translation, scale factor and rotation (so that any sphere can be built from the unit sphere model).
The vertices are world coordinates, e.g. (1, 0, 0), with a scale factor of 5, translation (0, 10, 0) and rotation (0, 0, 0).
I also have a custom camera object which stores the position of the camera and the orientation (in radians) to each axis (yaw, roll, pitch). It also stores the distance to the projection plane to allow a changeable FOV.
I know that in order to display the sphere, I need to apply a series of transformations to all vertices. My question is, where should I apply each transformation?
OpenGL screen coordinates range from (-1,-1) to (1, 1).
My current solution (which I would like to verify is optimal) is as follows:
In CPU:
Apply the model transform to the sphere (so the example vertex is now [5, 10, 0]). My model transform is applied in the order: scale, rotation [z, y, x], then translation. Buffer the results into a VBO/VAO and load them into the shader along with the camera position, rotation and distance to the projection plane.
In the vertex shader:
apply the camera transform to the world coordinates (build the transformation matrix here, or load it in from the CPU too?)
calculate 2D screen coordinates by similar triangles
calculate OpenGL coordinates from the ratios of the screen resolution
pass the resulting vertex coordinates on to the fragment shader (a rough sketch of this stage is below)
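Roughly, I imagine the vertex stage looking something like this, a sketch only, assuming I build the view and projection matrices on the CPU and upload them as uniforms (the projection matrix would take the place of my similar-triangles and screen-ratio steps):

#version 330 core
// Sketch of the vertex stage (assumes matrices are uploaded as uniforms).
layout(location = 0) in vec3 worldPos;   // vertices already model-transformed on the CPU

uniform mat4 view;         // built from the camera position and yaw/pitch/roll
uniform mat4 projection;   // built from FOV, aspect ratio, near and far

void main()
{
    // OpenGL performs the perspective divide and viewport mapping itself,
    // so the shader only needs to output clip-space coordinates.
    gl_Position = projection * view * vec4(worldPos, 1.0);
}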
Have I understood the process correctly? Should I rearrange any of the steps? I am reading guides online, but they aren't always specific enough: most already store the vertices (in Java) in the OpenGL coordinate system rather than in world coordinates like me. Some say that the camera is fixed at the origin in OpenGL, though I assume I need to apply the camera transform for that to be the case and for shapes to display properly.
Thanks
In my case I want to render 50,000 or more cubes that are distributed randomly inside a large bounding box. I don't want to use instancing right now, so I have to render each cube individually, and I want to improve performance by culling the cubes that are outside the camera view.
I have a camera class with two matrices, view and projection, and each cube has its own bounding box. I am planning to check each frame whether the camera's view volume contains the center of each cube: if it does, call the cube's draw function; if not, skip it.
For the view matrix I have three vectors (eye, target and up), and for the projection I have width, height, near, far and FOV.
So I have two questions:
Is this the right approach? I would calculate the camera's view bounding box each frame and then test each cube.
How can I calculate the camera bounding box each frame?
I got an idea from here how_to_check_if_vertex_is_visible_for_user that worked fine for me.
Multiply the camera's projection * view matrix by any point in 3D space; the visible points end up between [-1, 1] on each axis (after the perspective divide by w).
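In GLSL-style syntax the test looks roughly like this, an untested sketch; on the CPU the same matrix math applies, for example with a library such as GLM, and the function name is just a placeholder:

// Visibility test for a single point against the view frustum.
// Testing in clip space avoids dividing by w explicitly.
bool centerIsVisible(mat4 viewProj, vec3 center)
{
    vec4 clip = viewProj * vec4(center, 1.0);
    // After the perspective divide the visible range is [-1, 1] on each axis,
    // which in clip space means -w <= x, y, z <= w (with w > 0 in front of the camera).
    return clip.w > 0.0
        && all(lessThanEqual(abs(clip.xyz), vec3(clip.w)));
}

Note that testing only the center can cull a cube whose center is just outside the frustum while part of it is still visible; enlarging the test by the cube's half-size (or its bounding-sphere radius) avoids that.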
Hi, I'm trying to create a shader for my 3D models with materials and fog. Everything works fine except the light direction. I'm not sure what to set it to, so I used a fixed value, but when I rotate my 3D model (which is a simple textured sphere) the light rotates with it. I want to change my code so that my light stays in one place relative to the camera and not to the object itself. I have tried multiplying the view matrix by the input normals, but the same result occurs.
Also, should I be setting the light direction according to the camera instead?
EDIT: removed pastebin link since that is against the rules...
Use camera-dependent values only for transforming the vertex position into view space and the projected (clip-space) position, which is what the clipping and rasterizer stages need; the video card has to know where to draw your pixel.
For lighting, in addition to the camera-transformed position, you normally pass the world-space position of the vertex and the world-space normal to the shader stages that need them (i.e. the pixel/fragment shader stage for Phong lighting).
So you can set your light position, or better the light direction, in the world-space coordinate system as a global variable (uniform) for your shaders. With that, the lighting is independent of the camera's view position.
If you want an effect like a flashlight, you can set the light position to the camera position and the light direction to your look direction, so the bright parts are always in the center of your view frustum.
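A minimal sketch of the world-space directional light, untested; the uniform and varying names are placeholders:

#version 330 core
// Untested sketch: directional lighting done entirely in world space.
uniform vec3 lightDirWorld;   // direction the light shines in, world space, set once per frame
uniform vec3 lightColor;
uniform vec3 ambient;

in  vec3 normalWorld;         // normal transformed by the model matrix only
                              // (strictly: by the inverse transpose of the model matrix)
in  vec4 baseColor;           // or sample the material texture here instead
out vec4 fragColor;

void main()
{
    // Both vectors are in world space, so the light stays fixed in the scene
    // regardless of where the camera is or how the model is rotated.
    float diffuse = max(dot(normalize(normalWorld), -normalize(lightDirWorld)), 0.0);
    fragColor = vec4(baseColor.rgb * (ambient + diffuse * lightColor), baseColor.a);
}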
good luck
I've got a 2D texture on a 3D sphere, and I want to know how to transfer a 2D coordinate on the texture into a 3D coordinate. I know it has to do with the clipping of the texture: I'm using the automatic clipping function of OpenGL to put the texture on the sphere.
Edit:
To clarify the problem:
I have a 2D plane, which is an image containing borders drawn in red. I put objects on this plane that have a collision radius and move around wildly. Whenever the objects collide with the red border they bounce back.
Now I take this 2D plane and wrap it around a 3D sphere. At the positions of the circles I want to put 3D models that move on the sphere. The problem is to get from the "simple" 2D coordinates on the plane to the more complicated 3D coordinates on the sphere, in order to position the 3D models correctly.
My first approach would be to map the 2D coordinates to spherical coordinates, which can then easily be converted into 3D coordinates, but how would I do this?
You don't "convert" the 2D coordinate to a 3D coordinate. The 2D coordinates you have are UV coordinates (from 0 to 1), and they represent a position in texture space. What you do is map these UV coordinates to the vertices.
You can read more about UV mapping here.
In OpenGL, it depends on which version you are using. Either you use glTexCoord calls before the glVertex calls (for old versions of OpenGL), or you put the coordinates in a VBO to be processed in the shaders in newer versions of OpenGL.
If you are planning to use the gluSphere() function, you don't need to worry about calculating UV texture coordinates, since OpenGL does it for you with the right functions.
Here you can check the gluSphere() documentation
Here is some example code
If you are planning to render your own sphere, check this question
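If you do want to go the other way, from a plane/UV coordinate to a point on the sphere (which is what you need to place the 3D models), a standard latitude/longitude mapping is a reasonable sketch, assuming the texture is wrapped that way. It is written here in GLSL syntax, but the same formula works wherever you position the models:

const float PI = 3.14159265358979;

// Map a UV coordinate (both components in [0, 1]) onto a sphere of the given
// radius, assuming an equirectangular (latitude/longitude) wrapping.
vec3 uvToSphere(vec2 uv, float radius)
{
    float theta = uv.x * 2.0 * PI;   // longitude, 0..2*pi around the equator
    float phi   = uv.y * PI;         // latitude, 0 at one pole, pi at the other
    return radius * vec3(sin(phi) * cos(theta),
                         cos(phi),
                         sin(phi) * sin(theta));
}

If gluSphere() or your own mesh uses a different UV layout, the two angle formulas have to be adjusted accordingly.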
Is it possible to project the camera depth onto a plane?
Let me explain: if I simply transfer the depth buffer onto a plane, it will always display the depth from the camera's point of view. How can I display the depth from the plane's point of view instead?
I want to apply the effect in a shader. I suspect it is a matrix problem, but I don't get it.
The depth buffer is relative to the view that produced it; it is possible to project it onto any plane using (for example) shaders. But...
The depth buffer is not a full geometric representation of the object, but only of the surface that is visible from the camera's POV.
If you project the depth buffer, part of the object will probably not be projected (see image).
In the picture, the camera (the red eye) is looking at an object (black). The depth buffer represents the distance between the camera and the red surface.
For the plane (blue line) you probably want the projection of the entire object (the blue surface), but by projecting the red surface onto the plane you will only get a small portion of the entire blue surface.
If you want the entire blue surface:
Change the POV of the camera to just behind the plane.
Render the scene.
Get your depth buffer and save it into a texture/image/buffer (P).
Reset your camera POV.
Render the scene again, using the image (P) in your shader (a sketch of this step is below).
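For that last step, the shader of the final render could sample the saved image (P) roughly like this; an untested sketch where the uniform names are placeholders, and planeViewProj is assumed to be the view-projection matrix used when rendering from behind the plane:

#version 330 core
// Untested sketch: looking up the plane-POV depth from the main render pass.
uniform sampler2D planeDepth;   // the saved depth image (P)
uniform mat4 planeViewProj;     // view-projection used for the plane-POV pass

in  vec3 worldPos;              // world-space position of the current fragment
out vec4 fragColor;

void main()
{
    // Project the fragment into the plane camera's clip space ...
    vec4 clip = planeViewProj * vec4(worldPos, 1.0);
    vec3 ndc  = clip.xyz / clip.w;
    // ... and remap from [-1, 1] to [0, 1] to look up the stored depth.
    vec2 uv   = ndc.xy * 0.5 + 0.5;
    float storedDepth = texture(planeDepth, uv).r;

    // Visualise the plane-POV depth on the surface (replace with your own effect).
    fragColor = vec4(vec3(storedDepth), 1.0);
}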