Get the value of a voxel from a volume - xtk

I need a function to get the value of a voxel (3D pixel) from an X.volume object, given the x, y, z coordinates as input. This is so I can use a labelmap as a reference for an atlas function. Is there a way to do this?
Many thanks,
Edward

Currently no, though the XTK developers are working on something similar, as explained in "Finding world coordinates from screen coordinates". What I believe is being done there is an unproject function plus a ray/triangle intersection test. A ray/triangle intersection test just casts a ray from your screen into the 3D world and returns the coordinates of the first intersected triangle, but of course you need to find voxels instead. You could try writing an unproject function and something similar to the ray/triangle intersection, but finding the closest intersected voxel instead of a triangle. Help with an unproject function is here: http://myweb.lmu.edu/dondi/share/cg/unproject-explained.pdf. Remember that the link explains gluUnProject from OpenGL, but it still explains what we need to do; we are just making an alternative version of gluUnProject for WebGL. Any solution you find would be greatly appreciated as a contribution to XTK. Or you may wait for the unproject function that may come out of the related work on finding the 3D coordinates.
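This is not XTK API, but here is a minimal NumPy sketch of what a gluUnProject replacement has to do, assuming 4x4 column-vector matrices, a depth value in [0, 1], and a viewport given as (x, y, width, height):

```python
import numpy as np

def unproject(win_x, win_y, win_z, modelview, projection, viewport):
    """Map window coordinates back to world coordinates (gluUnProject style).
    win_z is the depth-buffer value in [0, 1]; matrices are 4x4 NumPy arrays."""
    # Window coordinates -> normalized device coordinates, all in [-1, 1]
    ndc = np.array([
        (win_x - viewport[0]) / viewport[2] * 2.0 - 1.0,
        (win_y - viewport[1]) / viewport[3] * 2.0 - 1.0,
        win_z * 2.0 - 1.0,
        1.0,
    ])
    # NDC -> world: invert the combined projection * modelview matrix
    world = np.linalg.inv(projection @ modelview) @ ndc
    return world[:3] / world[3]   # undo the homogeneous divide
```

From the resulting world-space point (or a ray built from two such points at win_z = 0 and win_z = 1) you would then search for the voxel it hits instead of a triangle.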

What are your (x, y, z)? Are they coordinates in the world's basis or in the volume's basis? In both cases a solution could be computed (it will be a bit more complicated in the first case), but I think it would require a few changes in the sources.
Do you want to code a bit in the sources and participate? It would be great for XTK users like us! You'd just have to go through the volume's hierarchy and do a few operations (see the sketch after this list):
If (x, y, z) are coordinates in the world's basis: complicated. Must do some tests :-)
Choose a direction to work in (one volume can have slices in 3 directions), e.g. slicesX for the first direction
With the volume's center X coordinate, the volume's X spacing, and the X coordinate of the picked point, find the right slice in slicesX
With the volume's center (Y, Z), the volume's Y and Z spacing, and the Y and Z coordinates of the picked point, get the (x', y') coordinates on the chosen slice's texture map
With the chosen slice's texture size, get (x'', y''), the coordinates on the slice's texture image
Read the (x'', y'') point of the texture
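This is not XTK's actual API, but the index arithmetic behind those steps looks roughly like this for an axis-aligned volume (center, spacing, and dims are hypothetical names for the volume's center, per-axis voxel spacing, and per-axis voxel counts):

```python
def world_to_voxel(point, center, spacing, dims):
    """Map a world-space point to integer voxel indices in an axis-aligned
    volume described by its center, per-axis spacing, and per-axis voxel counts."""
    voxel = []
    for p, c, s, n in zip(point, center, spacing, dims):
        # Offset from the volume's first voxel, expressed in voxel units
        i = int(round((p - c) / s + (n - 1) / 2.0))
        if i < 0 or i >= n:
            return None            # the point lies outside the volume
        voxel.append(i)
    return tuple(voxel)            # (i, j, k) index into the labelmap data
```

A real solution would also have to apply the volume's orientation (its world transform) before this step if the volume is not axis-aligned.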
I'm sorry, I don't have time to look into this further this week, but maybe later.

Related

Rendering an atmosphere around a planet with shading

I have made a planet and wanted to make an atmosphere around it. So I was referring to a site about atmospheric scattering.
I don't understand this:
As with the lookup table proposed in Nishita et al. 1993, we can get the optical depth for the ray to the sun from any sample point in the atmosphere. All we need is the height of the sample point (x) and the angle from vertical to the sun (y), and we look up (x, y) in the table. This eliminates the need to calculate one of the out-scattering integrals. In addition, the optical depth for the ray to the camera can be figured out in the same way, right? Well, almost. It works the same way when the camera is in space, but not when the camera is in the atmosphere. That's because the sample rays used in the lookup table go from some point at height x all the way to the top of the atmosphere. They don't stop at some point in the middle of the atmosphere, as they would need to when the camera is inside the atmosphere.
Fortunately, the solution to this is very simple. First we do a lookup from sample point P to the camera to get the optical depth of the ray passing through the camera to the top of the atmosphere. Then we do a second lookup for the same ray, but starting at the camera instead of starting at P. This will give us the optical depth for the part of the ray that we don't want, and we can subtract it from the result of the first lookup. Examine the rays starting from the ground vertex (B1) in Figure 16-3 for a graphical representation of this.
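Written out, the trick described in that passage is a subtraction of two lookups along the same ray, where lookup(h, a) is the tabulated optical depth from height h and vertical angle a out to the top of the atmosphere:

opticalDepth(P to camera) = lookup(h_P, a_P) - lookup(h_camera, a_camera)

Both lookups use the same ray direction; the second one removes the part of the ray beyond the camera.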
First Question - isn't optical depth dependent on how you look at it, that is, on the viewing angle? If yes, the table just gives me the optical depth of the rays going from the ground to the top of the atmosphere in a straight line. So what about the case where the rays pierce the atmosphere to reach the camera? How do I get the optical depth in that case?
Second Question - what is the vertical angle it is talking about? Is it the same as the angle with the z-axis that we use in polar coordinates?
Third Question - the article talks about scattering of the rays going to the sun. Shouldn't it be the other way around, i.e. rays coming from the sun to a point?
Any explanation on the article or on my questions will help a lot.
Thanks in advance!
I am no expert in the matter, but I have played with atmospheric scattering and various physical and optical simulations. I strongly recommend looking at this:
my VEEERRRYYY Simplified version of atmospheric scattering in GLSL
It does not do the full volume integration, just linear path integration along the ray, and it only does Rayleigh scattering with isotropic coefficients. As you can see, it is still good enough.
In real scattering, the viewing angle does affect the scattering equation, because the scattering coefficients are different at different angles (relative to the main light source and the viewer). So the answer to your first question is: yes, it does.
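For example, the classic Rayleigh phase function, which weights how much light is scattered toward the viewer as a function of the angle theta between the incoming light direction and the view direction, is:

F(theta) = (3 / (16 * pi)) * (1 + cos^2(theta))

so forward and backward scattering are favored over scattering at right angles.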
I am not sure what you are referring to in your second question. The scattering itself depends on the angle between light source, particle, and camera, which lies in an arbitrary plane. However, if the Earth's surface is brought into the equation too, then it also depends on the horizontal and vertical angles against the terrain (azimuth and elevation), because usually more light is reflected when the camera is facing the sun (azimuth) and when the reflected rays are close to your elevation. So my guess is that's what that angle is about: accounting for light reflected from the surface.
To answer your third question: this is called backward ray tracing. You can cast rays both ways (from the camera or from the sun), but if you start from the light source you do not know which way to go to hit a pixel on the camera screen, so you need to cast a lot of rays to raise the probability of hits enough to fill the screen, which is too slow and inaccurate (it produces holes). If you start from a screen pixel instead, you cast just a single ray (or one per wavelength), which is much, much faster. The resulting color is the same.
[Edit1] vertical angle
OK, I read the linked topic a bit and this is how I understand it:
It is just the angle between the surface normal and the cast ray, scaled so that vert.angle = 0 means the ray and the normal point the same way and vert.angle = 1 means they point in opposite directions.
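In other words (my reading of it, with both vectors normalized):

vert_angle = acos(dot(normal, ray)) / pi

which gives 0 when the ray is aligned with the normal and 1 when it points exactly against it.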

How to calculate the 3D coordinate from a picture with given Z-value

I've got a picture of a plane with 4 known points on it. I've got the intrinsic and extrinsic camera parameters and also (using the Rodrigues function) the position of the camera. The plane is defined as my ground level (Z = 0). If I select a point in my image, is there an easy way to calculate the coordinates where this point would land on my plane?
Not much can be labeled as 'easy' when dealing with 3D rendering.
For your question, I would look into ray tracing. I am not going to try to explain it, as most sites will do a better job of explaining it than I can.
When you look at OpenCV's calib3d module, you will see this equation:
https://docs.opencv.org/master/d9/d0c/group__calib3d.html
Please scroll down that page to the perspective transformation equations.
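The relation shown there is essentially (with K the intrinsic matrix, [R|t] the extrinsic rotation and translation, and s an arbitrary scale factor):

s * [u, v, 1]^T = K * [R | t] * [X, Y, Z, 1]^T

so a pixel (u, v) only pins down a ray, and the extra constraint Z = 0 is what makes the 3D point recoverable.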
From what you say, you declare the plane as ground level (Z = 0), you know the intrinsic camera parameters (focal length in pixels, image center) and the extrinsic parameters (rotation and translation), and you want to pick some pixel in your image and estimate where it lies on the plane, is that right?
You could use the triangulatePoints() function in OpenCV's calib3d module, but you need at least 2 images for that.
Your case seems different to me, though: if you detect 4 known points, you first have to define their world coordinates on the plane. Usually you define the top-left corner of the plane as the origin (0, 0, 0), and then you know the positions of those 4 known points in world coordinates from manual measurement. When you detect them in your OpenCV program, it gives you the pixel coordinates of those 4 points, and what people usually compute from that is the pose (rotation and translation).
Alternatively, if your case is what you said, you can write a small piece of matrix code based on the perspective transformation equation, as sketched below.
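A minimal sketch of that matrix code, assuming OpenCV's convention x_cam = R * X_world + t, an undistorted image, and rvec/tvec from your calibration (the function and variable names here are mine, not an OpenCV API):

```python
import numpy as np
import cv2

def pixel_to_plane(u, v, K, rvec, tvec):
    """Intersect the viewing ray of pixel (u, v) with the world plane Z = 0."""
    R, _ = cv2.Rodrigues(rvec)                        # 3x3 rotation, world -> camera
    t = np.asarray(tvec, dtype=float).reshape(3, 1)

    # Camera centre in world coordinates: C = -R^T * t
    C = (-R.T @ t).ravel()

    # Ray direction in world coordinates for pixel (u, v)
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    d_world = R.T @ d_cam

    # Solve C_z + s * d_z = 0 for s, then evaluate the ray at s
    s = -C[2] / d_world[2]
    return C + s * d_world                            # (X, Y, 0) on the ground plane
```

The same idea works for any other plane; only the equation solved for s changes.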

How should Z value be compared with depth value?

I'd like to know whether another model is drawn in front of a given coordinate (x, y, z).
Simply comparing the depth-buffer value with the Z value of my world-transformed coordinate does not work.
My Z value seems to be normalized so that near = 0 and far = 1, but the depth value seems to reach 1 only at the farthest point drawn inside the view frustum.
When I moved the far plane farther away, my Z value decreased, but the depth value didn't change.
thank you.
I am not sure I understand your question correctly, but I will make a guess and provide an answer; apologies in advance if this is not what you were asking. In OpenGL you need to understand what the view frustum is. In it, you have an x and y coordinate and a depth value. The depth value represents how far from your eye the drawn object (pixel) is. This is so that you can avoid having objects in the background obscure objects that are closer in, giving a more realistic representation of the scene. You also have clipping planes, a near and a far clipping plane. Anything closer than the near clipping plane will not be drawn, and anything farther than the far clipping plane won't be drawn either. If, for example, I am drawing an image of the Earth from space, I know I won't have to bother with anything that is on the other side of the Earth and can just clip it away, speeding things up. Usually, the near clipping plane maps to depth = 0 and the far clipping plane to depth = 1. This interval is then subdivided (depending on the precision of your depth buffer) and OpenGL, as said, stores each pixel's depth and decides what is closer to your eye and what is not (along the same line drawn from the eye through the pixel x, y). If you are in 3D and have a point (x, y, z), its z value from the scene won't match the value in the depth buffer; you need to push it through the view and projection transforms (and the perspective divide) to map it to the same range.
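A minimal sketch of that mapping, assuming the usual OpenGL conventions (column-vector 4x4 matrices and the default depth range [0, 1]):

```python
import numpy as np

def world_z_to_depth(p_world, view, proj):
    """Compute the depth-buffer value OpenGL would store for a world-space point."""
    p = np.append(np.asarray(p_world, dtype=float), 1.0)  # homogeneous coordinates
    clip = proj @ view @ p                 # world -> view -> clip space
    ndc_z = clip[2] / clip[3]              # perspective divide, in [-1, 1]
    return ndc_z * 0.5 + 0.5               # window-space depth, in [0, 1]
```

You can then compare that value against what the depth buffer holds for the pixel (for example, read back with glReadPixels using GL_DEPTH_COMPONENT); this also shows why the stored depth is non-linear in the world-space Z.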
Hopefully this helps some.

How to create an even sphere with triangles in OpenGL?

Is there a formula that generates a set of coordinates of triangles whose vertices are located on a sphere?
I am probably looking for something that does something similar to gluSphere. Yet I need to color the different triangles in specific colors, so it seems I can't use gluSphere.
Also, I do understand that gluSphere draws edges along lines of equal longitude and latitude, which entails the triangles being small at the poles compared to their size at the equator. Now, if such a formula would generate the triangles such that their difference in size is minimized, that would be great.
Regarding how to calculate the normals and the UV map:
Fortunately there is an amazing trick for calculating the normals, on a sphere. If you think about it, the normals on a sphere are indeed nothing more than the direction from the centre of the sphere, to that point!! Furthermore, if you think it through, that means the normals literally equal the point! i.e., it's the same vector! - just don't forget to normalise the length, for the normal.
You can win bar bets on that one: "is there a shape where all the normals happen to be exactly ... equal to the vertices?" At first glance you'd think, that's impossible, no such coincidental shape could exist. But of course the answer is simply "a sphere with radius one!" Heh!
Regarding the UVs: it is relatively easy on a sphere, assuming you're projecting to 2D in the "obvious" manner, a "rectangle-style" map projection. In that case the u and v are basically just the longitude and latitude of the point, normalised to 0..1.
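A small sketch of both facts for a vertex on (or near) a unit sphere; which axis is "up" is an assumption (y-up here):

```python
import math

def sphere_normal_and_uv(p):
    """Return the normal and an equirectangular (u, v) for a point on a unit sphere."""
    x, y, z = p
    length = math.sqrt(x * x + y * y + z * z)
    nx, ny, nz = x / length, y / length, z / length   # normal == normalized position
    u = 0.5 + math.atan2(nz, nx) / (2.0 * math.pi)    # longitude mapped to [0, 1]
    v = 0.5 - math.asin(ny) / math.pi                 # latitude  mapped to [0, 1]
    return (nx, ny, nz), (u, v)
```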
Hope it helps!
Here's the all-time-classic web page that beautifully explains how to build an icosphere: http://blog.andreaskahler.com/2009/06/creating-icosphere-mesh-in-code.html
Start with a unit icosahedron. Then apply multiple homogeneous subdivisions of the triangles, normalizing the resulting vertices' distance to the origin.
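A compact Python sketch of that construction (the vertex and face tables follow the layout from the linked post; treat it as illustrative rather than drop-in code):

```python
import math

def normalize(v):
    """Push a vertex back onto the unit sphere."""
    l = math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2])
    return (v[0] / l, v[1] / l, v[2] / l)

def icosphere(subdivisions=2):
    """Unit icosahedron with each triangle split into four per pass."""
    t = (1.0 + math.sqrt(5.0)) / 2.0     # golden ratio
    verts = [normalize(v) for v in [
        (-1, t, 0), (1, t, 0), (-1, -t, 0), (1, -t, 0),
        (0, -1, t), (0, 1, t), (0, -1, -t), (0, 1, -t),
        (t, 0, -1), (t, 0, 1), (-t, 0, -1), (-t, 0, 1)]]
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]

    for _ in range(subdivisions):
        cache = {}

        def midpoint(i, j):
            # Reuse shared edge midpoints so neighbouring triangles stay welded
            key = (min(i, j), max(i, j))
            if key not in cache:
                a, b = verts[i], verts[j]
                verts.append(normalize(((a[0] + b[0]) / 2,
                                        (a[1] + b[1]) / 2,
                                        (a[2] + b[2]) / 2)))
                cache[key] = len(verts) - 1
            return cache[key]

        new_faces = []
        for a, b, c in faces:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
        faces = new_faces

    return verts, faces
```

Because every new vertex is re-normalized onto the sphere, the triangles stay much closer to equal size than a longitude/latitude tessellation.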

Fast plane rotation algorithm?

I am working on an application that detects the most prominent rectangle in an image, then seeks to rotate it so that the bottom left of the rectangle rests at the origin, similar to how IUPR's OSCAR system works. However, once the most prominent rectangle is detected, I am unsure how to take into account the depth component or z-axis, as the rectangle won't always be "head-on". Any examples to further my understanding would be greatly appreciated. Seen below is an example from IUPR's OSCAR system.
(Example image: http://quito.informatik.uni-kl.de/oscar/oscar.php?serverimage=img_0324.jpg&montage=use)
You don't actually need to deal with the 3D information in this case; it's just a mapping function from one set of coordinates to another.
Look at affine transformations, they're capable of correcting simple skew and perspective effects. You should be able to find code somewhere that will calculate a transform from the 4 points at the corners of your rectangle.
Almost forgot - if "fast" is really important, you could simplify the system to only use simple shear transformations in combination, though that'll have a bad impact on image quality for highly-tilted subjects.
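One place such code already exists is OpenCV: getPerspectiveTransform computes the 3x3 homography from the 4 corners, and warpPerspective applies it. A minimal sketch (the corner coordinates, file name, and output size below are made-up examples):

```python
import cv2
import numpy as np

image = cv2.imread("page.jpg")                      # placeholder input image

# Corners of the detected rectangle in the source image (clockwise from
# top-left), and the axis-aligned rectangle we want them mapped to.
src = np.float32([[120, 80], [540, 95], [560, 410], [100, 395]])
dst = np.float32([[0, 0], [450, 0], [450, 320], [0, 320]])

M = cv2.getPerspectiveTransform(src, dst)           # 3x3 perspective transform
rectified = cv2.warpPerspective(image, M, (450, 320))
```

As noted above, a pure affine transform (cv2.getAffineTransform, 3 points) is cheaper but can only correct skew, not perspective foreshortening.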
Actually, I think you can get away with something much simpler than Mark's approach.
Once you have the 2D coordinates on the skewed image, re-purpose those coordinates as texture coordinates.
In a renderer, draw a simple rectangle where each corner's vertices are texture mapped to the vertices found on the skewed 2D image (normalized and otherwise transformed to your rendering system's texture coordinate plane).
Now you can rely on hardware (using OpenGL or similar) to do the correction for you, or you can write your own texture mapper:
The aspect ratio will need to be guessed at since we are disposing of the actual 3D info. However, you can get away with just taking the max width and max height of your skewed rectangle.
Perspective Texture Mapping by Chris Hecker