I've been following the GPU Gems 3 tutorial on how to blur based on camera movement. However, I want to implement blur based on object movement too. The solution is outlined in the article (see the quote below), but I'm curious how exactly to implement it.
At the moment I'm multiplying the object's matrix by the view-projection, then separately again by the previous frame's view-projection, and passing both results into the pixel shader to calculate the velocity, instead of passing just the view-projection matrices.
If that is in fact the correct method, why can't I simply pass in the model-view-projection? I would have assumed they would be the same value.
GPU Gems 3 Motion Blur
To generate a velocity texture for rigid dynamic objects, transform the object by using the current frame's view-projection matrix and the last frame's view-projection matrix, and then compute the difference in viewport positions the same way as for the post-processing pass. This velocity should be computed per-pixel by passing both transformed positions into the pixel shader and computing the velocity there.
Check out the research I did a few months ago on this topic: https://slu-files.s3.us-east-1.amazonaws.com/Fragment_shader_dynamic_blur.pdf
(source: stevenlu.net)
Sadly I did not implement textured objects when producing this material, but do use your imagination. I am working on a game engine so when that finally sees the light of day in the form of a game, you can be sure that I'll come and post breadcrumbs here.
It primarily addresses how to implement this effect in 2D, and in cases where objects do not overlap. There is not really a good way to use a fragment shader to "sweep" samples in order to generate "accurate" blur. While the effect approaches pixel-perfection as the sample count is cranked up, the geometry that must be generated to cover the swept area has to be assembled manually using some "ugly" techniques.
In full 3D it's a rather difficult problem to know which pixels a dynamic object will sweep over during the course of a frame. Even with static geometry and a moving camera, the solution proposed by the GPU Gems article is incorrect when moving past things quickly, because it cannot blend across the area swept out by moving geometry...
That said, if this approximation which neglects the sweep is sufficient (and it may be), then what you can do to extend it to dynamic objects is to take their motion into account. You'll need to work out the details of course, but look at lines 2 and 5 in the second code block in the article you linked: they are the current and previous screen-space "positions" of the pixel. You simply have to pass in the matrices that let you compute the previous position of each pixel, taking into account the dynamic motion of your object.
It shouldn't be too bad. In the pass where you render your dynamic object you send in an extra matrix that represents its motion over the last frame.
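To make that concrete, here is a minimal CPU-side sketch of the per-object setup, assuming GLM for the matrix maths and hypothetical uniform names uCurrentMVP / uPreviousMVP (none of this comes from the article itself): keep last frame's model matrix around, build both full transforms, and let the pixel shader take the screen-space difference. This is also why a single model-view-projection isn't enough - the previous position needs the previous model matrix combined with the previous view-projection, and those are generally different from the current ones.

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>
// assumes an OpenGL loader header (e.g. glad) is already included for GLint etc.

struct DynamicObject {
    glm::mat4 model;      // this frame's model (world) matrix
    glm::mat4 prevModel;  // model matrix from the previous frame
};

// Call per object, per frame, before drawing it into the velocity pass.
// viewProj / prevViewProj are the camera's current and previous view-projection
// matrices; locCurrent / locPrevious are the locations of the (hypothetical)
// uCurrentMVP and uPreviousMVP uniforms.
void setMotionBlurUniforms(const DynamicObject& obj,
                           const glm::mat4& viewProj,
                           const glm::mat4& prevViewProj,
                           GLint locCurrent, GLint locPrevious)
{
    glm::mat4 currentMVP  = viewProj     * obj.model;      // where the vertex is now
    glm::mat4 previousMVP = prevViewProj * obj.prevModel;  // where it was last frame

    glUniformMatrix4fv(locCurrent,  1, GL_FALSE, glm::value_ptr(currentMVP));
    glUniformMatrix4fv(locPrevious, 1, GL_FALSE, glm::value_ptr(previousMVP));
}

// After drawing, remember this frame's model matrix for the next frame:
// obj.prevModel = obj.model;

In the vertex shader you transform the vertex by both matrices and hand both clip-space positions to the pixel shader, exactly as in the GPU Gems listing, so the velocity still ends up being computed per-pixel.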
Update:
I found that this paper describes an elegant and performant approach that provides fairly high-quality, physically correct blurring for a 3D pipeline. It will be hard to do much better than this within the constraint of rendering the full scene no more than once, for performance reasons.
I noticed that with some of the examples the quality of the velocity buffer could be better. For example, a rotating wheel should have some curves in velocity space. I believe that if those values can be set properly (it may require custom fragment shaders to render the velocity out...), the result will look intuitively correct, like the spinning cube seen above from my 2D exploration into dynamic motion blurring.
I am quite new to CUDA programming and I have a question about the texXD functions. My goal is to implement a simple GPU-based ray tracer using the optimized CUDA texture functionality.
See the CUDA texture API provided by NVIDIA.
In my research I have to deal with images that have a different resolution for every dimension (like CT images, where (x,y) have a different resolution than (z)). Resampling to an isotropic pixel/voxel size might bring up some problems (especially for medical diagnosis).
For example, I have an image of size (100px x 50px) and a resolution of (2px/mm x 1px/mm). The ray enters the image at an arbitrary point and leaves it somewhere else. The ray is sampled in the direction from the entrance to the exit point. At each sample point (pos.x, pos.y) the tex2D function carries out a (bilinear) interpolation, taking the neighbouring pixel values into account, weighted by their distance from the sample point.
example image:
In both shapes the corner points are named the same way, (x1,y1), .... The only difference is the physical spacing between the corner points. The interpolation point is (x,y). I computed an example using the formula for rectangular grids and got different results for the two grids. But if I use the ratio of the areas of the numbered rectangles, I got a different result.
My question: will CUDA take care of the different resolutions of the dimensions, or does CUDA see all pixels at the same distance (and therefore as a square grid)?
The formula used by CUDA seems to be the one for a square grid (google "CUDA Texture Fetching").
Or can I resample the image to a square grid before using tex2D without substantial information loss?
Any suggestions are welcome. If you need more clarification, feel free to ask and I will specify my question further.
I don't believe that what (I think) you are trying to do can be achieved using textures. The only filtering mode supported by textures is described here.
Some salient points:
Textures don't have resolution. They just have dimensions.
Texture data is implicitly uniformly spaced in all dimensions.
Texture interpolation is done in a reduced-accuracy fixed-point arithmetic format which gives 8 bits of representational accuracy.
None of this seems like anything that would be useful for the interpolation on a non-uniform grid which you are describing. At a minimum you would need to perform a coordinate transformation before you could use the uniform filtering mode. The amount of effort and expense would be about the same as just writing an interpolation routine yourself in user code.
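To make that concrete, here is a minimal sketch of doing the interpolation yourself in plain C++ rather than through a texture: the physical (mm) coordinates are first converted to continuous pixel coordinates using the per-axis spacing, and then an ordinary bilinear weighting is applied. The image layout and the 2px/mm x 1px/mm spacing are taken from the question; the function and variable names are mine.

#include <algorithm>
#include <cmath>
#include <vector>

// Bilinear interpolation on a regular grid whose spacing differs per axis.
// img is row-major, width*height values; spacingX/spacingY are in mm per pixel.
float sampleBilinearMM(const std::vector<float>& img, int width, int height,
                       float spacingX, float spacingY,   // mm per pixel
                       float xmm, float ymm)             // sample point in mm
{
    // Convert physical (mm) coordinates to continuous pixel coordinates.
    float px = xmm / spacingX;
    float py = ymm / spacingY;

    int x0 = std::max(static_cast<int>(std::floor(px)), 0);
    int y0 = std::max(static_cast<int>(std::floor(py)), 0);
    int x1 = std::min(x0 + 1, width  - 1);
    int y1 = std::min(y0 + 1, height - 1);

    float fx = px - std::floor(px);   // fractional offsets, already in pixel units
    float fy = py - std::floor(py);

    float v00 = img[y0 * width + x0], v10 = img[y0 * width + x1];
    float v01 = img[y1 * width + x0], v11 = img[y1 * width + x1];

    float top    = v00 * (1.0f - fx) + v10 * fx;
    float bottom = v01 * (1.0f - fx) + v11 * fx;
    return top * (1.0f - fy) + bottom * fy;
}

Note that once the coordinates are in pixel units, the anisotropy no longer affects the weights: bilinear interpolation is unchanged by rescaling each axis independently. So if you do the mm-to-pixel transform before the lookup, the uniform tex2D filtering should give essentially the same value, up to its reduced fixed-point weight precision.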
I am looking to implement model reconstruction from RGB-D images, preferably on mobile phones. From what I have read, this is all done with a TSDF representation. I have read a lot of papers on hierarchical structures and other ideas to speed this up, but my problem is that I still have no clue how to actually implement this representation.
If I have a volume grid of size n, i.e. n x n x n, I want to store in each voxel the signed distance, weight and color information. My only guess is that I have to build a discrete set of points, one for each voxel position, and with GLSL "paint" all these points and calculate the nearest distance. But that doesn't seem good or efficient, calculating this n^3 times.
How should I go about implementing such a TSDF representation?
The problem is, my only idea is to render the voxel grid in order to store the signed distance data. But for each depth map I would have to render all the voxels again and calculate all the distances. Is there any way to render it the other way around?
So can't I render the points of the depth map and store the information in the voxel grid?
What is the current state of the art for rendering such a signed distance representation in an efficient way?
You are on the right track, it's an ambitious project but very cool if you can do it.
First, it's worth getting a good idea how these things work. The original paper identifying a TSDF was by Curless and Levoy and is reasonably approachable - a copy is here. There are many later variations but this is the starting point.
Second, you will need to create nxnxn storage as you have said. This very quickly gets big. For example, if you want 400x400x400 voxels with RGB data and floating point values for distance and weight then that will be 768MB of GPU memory - you may want to check how much GPU memory you have available on a mobile device. Yup, I said GPU because...
While you can implement toy solutions on CPU, you really need to get serious about GPU programming if you want to have any kind of performance. I built an early version on an Intel i7 CPU laptop. Admittedly I spent no time optimising it but it was taking tens of seconds to integrate a single depth image. If you want to get real time (30Hz) then you'll need some GPU programming.
Now that you have your TSDF data representation, each frame you need to do the following:
1. Work out where the camera is with respect to the TSDF in world coordinates.
Usually you assume that you are at the origin at time t=0 and then measure your relative translation and rotation with regard to the previous frame. The most common way to do this is with an algorithm called iterative closest point (ICP). You could implement this yourself or use a library like PCL, though I'm not sure if they have a mobile version. I'd suggest you get started without this by just keeping your camera and scene stationary and build up to movement later.
2. Integrate the depth image you have into the TSDF. This means updating the TSDF with the next depth image. You don't throw away the information you have to date; you merge the new information with the old.
You do this by iterating over every voxel in your TSDF and:
a) Work out the distance from the voxel centre to your camera
b) Project the point into your depth camera's image plane to get a pixel coordinate (using the extrinsic camera position obtained above and the camera intrinsic parameters which are easily searchable for Kinect)
c) Look up the depth in your depth map at that pixel coordinate
d) Project this point back into space using the pixel x and y coords plus depth and your camera properties to get a 3D point corresponding to that depth
e) Update the value for the current voxel's distance with the value distance_from_step_d - distance_from_step_a (update is usually a weighted average of the existing value plus the new value).
You can use a similar approach for the voxel colour.
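A minimal sketch of that per-voxel update loop in plain C++ (single-threaded; a real implementation would be a GPU kernel). The grid layout, the truncation distance and all names are my own assumptions, and the sketch uses the depth along the camera z axis rather than the full Euclidean distances of steps (a) and (d), which is a common simplification:

#include <cmath>
#include <vector>

struct Voxel { float dist = 1.0f; float weight = 0.0f; };

// Integrate one depth map into the TSDF.
// depth: row-major metres, invalid pixels <= 0. camFromWorld: 4x4 row-major rigid
// transform taking world points into camera coordinates (e.g. from ICP).
// fx, fy, cx, cy: depth camera intrinsics. origin/voxelSize define the grid in world space.
void integrateDepth(std::vector<Voxel>& tsdf, int n,
                    const float camFromWorld[16],
                    const std::vector<float>& depth, int w, int h,
                    float fx, float fy, float cx, float cy,
                    float voxelSize, float originX, float originY, float originZ,
                    float truncation /* e.g. 3 * voxelSize */)
{
    for (int z = 0; z < n; ++z)
    for (int y = 0; y < n; ++y)
    for (int x = 0; x < n; ++x) {
        // (a) voxel centre in world space, then in camera space
        float wx = originX + (x + 0.5f) * voxelSize;
        float wy = originY + (y + 0.5f) * voxelSize;
        float wz = originZ + (z + 0.5f) * voxelSize;
        float camX = camFromWorld[0]*wx + camFromWorld[1]*wy + camFromWorld[2]*wz  + camFromWorld[3];
        float camY = camFromWorld[4]*wx + camFromWorld[5]*wy + camFromWorld[6]*wz  + camFromWorld[7];
        float camZ = camFromWorld[8]*wx + camFromWorld[9]*wy + camFromWorld[10]*wz + camFromWorld[11];
        if (camZ <= 0.0f) continue;                   // voxel is behind the camera

        // (b) project the voxel centre into the depth image
        int u = static_cast<int>(fx * camX / camZ + cx + 0.5f);
        int v = static_cast<int>(fy * camY / camZ + cy + 0.5f);
        if (u < 0 || u >= w || v < 0 || v >= h) continue;

        // (c) measured depth along that pixel's ray
        float d = depth[v * w + u];
        if (d <= 0.0f) continue;

        // (d)/(e) signed distance: measured surface depth minus the voxel's depth
        float sdf = d - camZ;
        if (sdf < -truncation) continue;              // far behind the surface, leave untouched
        float tsd = std::fmin(sdf, truncation) / truncation;   // truncate and normalise

        Voxel& vox = tsdf[(z * n + y) * n + x];
        vox.dist   = (vox.dist * vox.weight + tsd) / (vox.weight + 1.0f);  // running weighted average
        vox.weight = std::fmin(vox.weight + 1.0f, 64.0f);                  // cap the weight
    }
}

The colour update follows the same pattern: look up the RGB value at pixel (u, v) and blend it into the voxel with the same weights.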
Once you have integrated all of your depth maps into the TSDF, you can visualise the result by either raytracing it or extracting the iso surface (3D mesh) and viewing it in another package.
A really helpful paper that will get you there is here. This is a lab report by some students who actually implemented Kinect Fusion for themselves on a PC. It's pretty much a step-by-step guide, though you'll still have to learn CUDA or similar to implement it.
You can also check out my source code on GitHub for ideas though all normal disclaimers about fitness for purpose apply.
Good luck!
After I posted my other answer I thought of another approach which seems to match the second part of your question, but it definitely is not reconstruction and doesn't involve using a TSDF. It's actually just a visualisation, but it is a lot simpler :)
Each frame you get an RGB and a depth image. Assuming that these images are registered, that is, the pixel at (x,y) in the RGB image represents the same point as the pixel at (x,y) in the depth image, you can create a dense point cloud coloured using the RGB data. To do this you would:
For every pixel in the depth map
a) Use the camera's intrinsic matrix (K), the pixel coordinates and the depth value in the map at that point to project the point into a 3D point in camera coordinates
b) Associate the RGB value at the same pixel with that point in space
So now you have an array of (probably 640x480) structures like {x,y,z,r,g,b}
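A minimal sketch of that back-projection in plain C++, assuming pinhole intrinsics fx, fy, cx, cy (e.g. the Kinect's); the struct and function names are mine:

#include <cstdint>
#include <vector>

struct ColoredPoint { float x, y, z; uint8_t r, g, b; };

// Build a coloured point cloud from registered depth (metres) and RGB images.
std::vector<ColoredPoint> makePointCloud(const std::vector<float>& depth,
                                         const std::vector<uint8_t>& rgb, // 3 bytes per pixel
                                         int w, int h,
                                         float fx, float fy, float cx, float cy)
{
    std::vector<ColoredPoint> cloud;
    cloud.reserve(static_cast<size_t>(w) * h);
    for (int v = 0; v < h; ++v) {
        for (int u = 0; u < w; ++u) {
            float z = depth[v * w + u];
            if (z <= 0.0f) continue;                 // no depth measurement at this pixel
            ColoredPoint p;
            p.x = (u - cx) * z / fx;                 // back-project with the pinhole model
            p.y = (v - cy) * z / fy;
            p.z = z;
            const uint8_t* c = &rgb[(v * w + u) * 3];
            p.r = c[0]; p.g = c[1]; p.b = c[2];
            cloud.push_back(p);
        }
    }
    return cloud;
}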
You can render these on GLES just by creating a set of vertices and rendering them as points. There's a discussion on how to do this here.
With this approach you throw away the data every frame and redo from scratch. Importantly, you don't get a reconstructed surface, and you don't use a TSDF. You can get pretty results but it's not reconstruction.
I have absolutely no idea how to render water sources (ocean, lake, etc.). It's like every tutorial I come across assumes I have basic knowledge of the subject, and therefore speaks abstractly about the issue, but I don't have that knowledge.
My goal is to have a height based water level in my terrain.
I can't find any good article that will help me get started.
The question is quite broad. I'd split it up into separate components and get each working in turn. Hopefully this will help narrow down what those might be, unfortunately I can only offer the higher level discussion you aren't directly after.
The wave simulation (geometry and animation):
A procedural method will give a fixed height for a position and time based on some noise function.
A very basic idea is y = sin(x) + cos(z). Some more elaborate examples are in GPU Gems.
Just like in the image, you can render the geometry by creating a grid, sampling heights (y) at the grid (x, z) positions and connecting those points with triangles.
If you explicitly store all the heights in a 2D array, you can create some pretty decent looking waves and ripples. The idea here is to update each height based on the neighbouring heights, using a few simple rules. For example, each height moves towards the average of its neighbours, but also tends towards the equilibrium height of zero. For this to work well, each height will need a velocity value to give the water momentum.
I found some examples of this kind of dynamic water here:
// per cell (i, j), each simulation step:
height_v[i][j] += (height_west + height_east + height_south + height_north) / 4 - height[i][j];
height_v[i][j] *= damping;   // damping slightly below 1, e.g. 0.99
height[i][j] += height_v[i][j];
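Expanded into a self-contained update step in C++ (the names follow the fragment above; the border handling and the second pass for the height update are my own choices so that every cell reads its neighbours from the same time step):

#include <vector>

// One simulation step of the grid-based ripple model quoted above.
// height and height_v are n*n row-major grids; damping is slightly below 1.
void stepWater(std::vector<float>& height, std::vector<float>& height_v,
               int n, float damping = 0.99f)
{
    for (int j = 1; j < n - 1; ++j) {          // skip the border cells for simplicity
        for (int i = 1; i < n - 1; ++i) {
            float west  = height[j * n + (i - 1)];
            float east  = height[j * n + (i + 1)];
            float south = height[(j - 1) * n + i];
            float north = height[(j + 1) * n + i];

            // accelerate towards the average of the four neighbours...
            height_v[j * n + i] += (west + east + south + north) / 4.0f - height[j * n + i];
            // ...and lose a little energy each step
            height_v[j * n + i] *= damping;
        }
    }
    for (int k = 0; k < n * n; ++k)            // then move each height by its velocity
        height[k] += height_v[k];
}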
Rendering:
Using alpha transparency is a great first step for water. I'd start there until your simulation is running OK. The primary effect you'll want is reflection, so I'll just cover that. Further on you'll want to scale the reflection value using the Fresnel ratio. You may want an absorption effect (like fog) underwater based on distance (see Beer's law, essentially exp(-distance * density)). Getting really fancy, you might want to render the underwater parts with refraction. A small sketch of the Fresnel and absorption terms follows.
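Here is a tiny sketch of those two terms in C++ (the same maths drops straight into a shader). The Schlick approximation and the 1.33 index of refraction for water are standard choices, not something specific to this answer:

#include <cmath>

// Schlick's approximation of the Fresnel reflectance for an air-to-water interface.
// cosTheta is the dot product between the surface normal and the view direction.
float fresnelSchlick(float cosTheta, float n1 = 1.0f, float n2 = 1.33f)
{
    float r0 = (n1 - n2) / (n1 + n2);
    r0 *= r0;                                    // about 0.02 for air to water
    float k = 1.0f - cosTheta;
    return r0 + (1.0f - r0) * k * k * k * k * k; // r0 + (1 - r0) * (1 - cos)^5
}

// Beer-Lambert style absorption: fraction of light surviving a given path length.
float absorption(float distance, float density)
{
    return std::exp(-distance * density);
}

// final colour ~ mix(water/refracted colour, reflected colour, fresnelSchlick(dot(N, V)))

But back to reflections...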
Probably the simplest way to render a planar reflection is stencil reflections, where you'd draw the scene from underneath the water and use the stencil buffer to only affect pixels where you've previously drawn water.
An example is here.
However, this method doesn't work when you have a bumpy surface and the reflection rays are perturbed.
Rather than render the underwater reflection view directly to the screen, you can render it to a texture. Then you have the colour information for the reflection when you render the water. The tricky part is working out where in the texture to sample after calculating the reflection vector.
An example is here.
This uses textures but just for a perfectly planar reflection.
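For a flat water plane at height h, the "underwater" camera used for that reflection texture can be built by mirroring the world across the plane before applying the normal view matrix. A minimal sketch assuming GLM (function and parameter names are mine):

#include <glm/glm.hpp>

// View matrix for rendering the reflection texture: mirror the world across
// the plane y = waterHeight, then apply the ordinary view matrix.
glm::mat4 reflectionView(const glm::mat4& view, float waterHeight)
{
    glm::mat4 mirror(1.0f);                // identity
    mirror[1][1] = -1.0f;                  // flip y... (GLM indexing is [column][row])
    mirror[3][1] = 2.0f * waterHeight;     // ...about the plane y = waterHeight
    return view * mirror;
}

Mirroring flips the triangle winding, so swap the front-face convention (e.g. glFrontFace(GL_CW)) during the reflection pass, and clip away geometry below the water plane so it doesn't leak into the reflection. When you then draw the water surface itself, the texture coordinate for each fragment is its position in the reflection camera's clip space remapped from [-1, 1] to [0, 1] (projective texturing), optionally nudged by the surface perturbation mentioned above.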
See also: How do I draw a mirror mirroring something in OpenGL?
I've been looking online and I'm impressed by the capabilities of voxel data, especially for terrain building and manipulation. The problem is that voxels are never clearly explained on any site I've visited, nor is how to use or implement them. All I find is that voxels are volumetric data. Please provide a more complete answer: what is volumetric data? It may seem like a simple question but I'm still unsure.
Also, how would you implement voxel data? (I aim to implement this in a C++ program.) What sort of data type would you use to store the voxel data so that the contents can be modified at run time as fast as possible? I have looked online and I couldn't find anything that explained how to store the data: lists of objects, arrays, etc.
How do you use voxels?
EDIT:
Since I'm just beginning with voxels, I'll probably start by using them to model only simple objects, but I will eventually use them for rendering terrain and world objects.
In essence, voxels are a three-dimensional extension of pixels ("volumetric pixels"), and they can indeed be used to represent volumetric data.
What is volumetric data
Mathematically, volumetric data can be seen as a three-dimensional function F(x,y,z). In many applications this function is a scalar function, i.e., it has one scalar value at each point (x,y,z) in space. For instance, in medical applications this could be the density of certain tissues. To represent this digitally, one common approach is to simply make slices of the data: imagine images in the (X,Y)-plane, and shifting the z-value to get a number of images. If the slices are close to each other, the images can be displayed in a video sequence as, for instance, seen on the wiki page for MRI scans (https://upload.wikimedia.org/wikipedia/commons/transcoded/4/44/Structural_MRI_animation.ogv/Structural_MRI_animation.ogv.360p.webm). As you can see, each point in space has one scalar value which is represented as a grayscale.
Instead of slices or a video, one can also represent this data using voxels. Instead of dividing a 2D plane in a regular grid of pixels, we now divide a 3D area in a regular grid of voxels. Again, a scalar value can be given to each voxel. However, visualizing this is not as trivial: whereas we could just give a gray value to pixels, this does not work for voxels (we would only see the colors of the box itself, not of its interior). In fact, this problem is caused by the fact that we live in a 3D world: we can look at a 2D image from a third dimension and completely observe it; but we cannot look at a 3D voxel space and observe it completely as we have no 4th dimension to look from (unless you count time as a 4th dimension, i.e., creating a video).
So we can only look at parts of the data. One way, as indicated above, is to make slices. Another way is to look at so-called "iso-surfaces": we create surfaces in the 3D space for which each point has the same scalar value. For a medical scan, this allows us to extract, for instance, the brain from the volumetric data (not just as a slice, but as a 3D model).
Finally, note that surfaces (meshes, terrains, ...) are not volumetric, they are 2D-shapes bent, twisted, stretched and deformed to be embedded in the 3D space. Ideally they represent the border of a volumetric object, but not necessarily (e.g., terrain data will probably not be a closed mesh). A way to represent surfaces using volumetric data, is by making sure the surface is again an iso-surface of some function. As an example: F(x,y,z) = x^2 + y^2 + z^2 - R^2 can represent a sphere with radius R, centered around the origin. For all points (x',y',z') of the sphere, F(x',y',z') = 0. Even more, for points inside the sphere, F < 0, and for points outside of the sphere, F > 0.
A way to "construct" such a function is by creating a distance map, i.e., creating volumetric data such that every point F(x,y,z) indicates the distance to the surface. Of course, the surface is the collection of all the points for which the distance is 0 (so, again, the iso-surface with value 0 just as with the sphere above).
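As a tiny concrete example of such a distance map, here is the sphere from above sampled onto an n x n x n grid in C++, using the true distance sqrt(x^2 + y^2 + z^2) - R rather than the algebraic form, so that the stored values really are distances (names are mine):

#include <cmath>
#include <vector>

// Fill a voxel grid with the signed distance to a sphere of radius R centred in
// the grid: negative inside, zero on the surface, positive outside.
std::vector<float> sphereDistanceField(int n, float R, float voxelSize)
{
    std::vector<float> field(static_cast<size_t>(n) * n * n);
    float c = 0.5f * (n - 1) * voxelSize;               // grid centre in world units
    for (int z = 0; z < n; ++z)
        for (int y = 0; y < n; ++y)
            for (int x = 0; x < n; ++x) {
                float dx = x * voxelSize - c;
                float dy = y * voxelSize - c;
                float dz = z * voxelSize - c;
                field[(z * n + y) * n + x] = std::sqrt(dx*dx + dy*dy + dz*dz) - R;
            }
    return field;                                        // the iso-surface at value 0 is the sphere
}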
How to implement
As mentioned by others, this indeed depends on the usage. In essence, the data can be given in a 3D matrix. However, this is huge! If you want the resolution doubled, you need 8x as much storage, so in general this is not an efficient solution. This will work for smaller examples, but does not scale very well.
An octree structure is, afaik, the most common structure to store this. Many implementations and optimizations for octrees exist, so have a look at what can be (re)used. As pointed out by Andreas Kahler, sparse voxel octrees are a recent approach.
Octrees allow easier navigation to neighbouring cells, parent cells, child cells, ... (I am assuming here that the concept of octrees (or quadtrees in 2D) is known?) However, if many leaf cells are located at the finest resolution, this data structure comes with a huge overhead! So, is this better than a 3D array? It somewhat depends on what volumetric data you want to work with, and what operations you want to perform.
If the data is used to represent surfaces, octrees will in general be much better: as stated before, surfaces are not really volumetric, so not many voxels will contain relevant data (hence "sparse" octrees). Referring back to the distance maps, the only relevant data are the points having value 0. The other points can have any value, but they do not matter (in some cases the sign is still kept, to distinguish "interior" from "exterior", but the value itself is not required if only the surface is needed).
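A bare-bones pointer-based octree in C++, just to make the storage idea concrete; real sparse voxel octrees pack this far more tightly (child masks, contiguous child blocks), and the node layout here is purely illustrative:

#include <array>
#include <memory>

// Minimal octree over a cubic region of side 'size' (a power of two); leaves store one value.
struct OctreeNode {
    std::array<std::unique_ptr<OctreeNode>, 8> children; // null child = empty subtree
    float value = 0.0f;                                  // payload, only meaningful at leaf level
};

// Insert or overwrite the value of the voxel at integer coordinates (x, y, z).
// Only the nodes along this path get allocated, which is where the sparsity comes from.
void insert(OctreeNode& node, int size, int x, int y, int z, float value)
{
    if (size == 1) { node.value = value; return; }       // reached the leaf level
    int half = size / 2;
    int idx = (x >= half ? 1 : 0) | (y >= half ? 2 : 0) | (z >= half ? 4 : 0);
    if (!node.children[idx])
        node.children[idx] = std::make_unique<OctreeNode>();
    insert(*node.children[idx], half, x % half, y % half, z % half, value);
}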
How to use
If by "use" you are wondering how to render them, you can have a look at "marching cubes" and its optimizations. MC will create a triangle mesh from volumetric data, which can then be rendered in any classical way. Instead of converting to triangles, you can also look at volume rendering, which renders a "3D sampled data set" (i.e., voxels) directly (https://en.wikipedia.org/wiki/Volume_rendering). I have to admit that I am not that familiar with volume rendering, so I'll leave it at just the wiki link for now.
Voxels are just 3D pixels, i.e. 3D space regularly subdivided into blocks.
How do you use them? It really depends on what you are trying to do. A ray casting terrain game engine? A medical volume renderer? Something completely different?
Plain 3D arrays might be the best fit for you, but they are memory intensive. As BWG pointed out, an octree is another popular alternative. Search for "sparse voxel octrees" for a more recent approach.
In popular usage during the 90's and 00's, 'voxel' could mean somewhat different things, which is probably one reason you have been finding it hard to find consistent information. In technical imaging literature, it means 3D volume element. Oftentimes, though, it is used to describe what is somewhat-more-clearly termed a high-detail raycasting engine (as opposed to the low-detail raycasting engine in Doom or Wolfenstein). A popular multi-part tutorial lives in the Flipcode archives. Also check out this brief one by Jacco.
There are many old demos you can find out there that should run under emulation. They are good for inspiration and dissection, but tend to use a lot of assembly code.
You should think carefully about what you want to support with your engine: car-racing, flying, 3D objects, planets, etc., as these constraints can change the implementation of your engine. Oftentimes, there is not a data structure, per se, but the terrain heightfield is represented procedurally by functions. Otherwise, you can use an image as a heightfield. For performance, when rendering to the screen, think about level-of-detail, in other words, how many actual pixels will be taken up by the rendered element. This will determine how much sampling you do of the heightfield. Once you get something working, you can think about ways you can blend pixels over time and screen space to make them look better, while doing as little rendering as possible.
As seen in the image, I draw a set of contours (polygons) as GL_LINE_STRIP.
Now I want to select the curve (polygon) under the mouse to delete it, move it, etc. in 3D.
I am wondering which method to use:
1. use OpenGL picking and selection (glRenderMode(GL_SELECT)), or
2. use manual collision detection, by using a pick ray and checking whether the ray is inside each polygon.
I strongly recommend against GL_SELECT. This method is very old and absent from new GL versions, and you're likely to get problems with modern graphics cards. Don't expect it to be supported in hardware - you'd probably hit a software (driver) fallback for this mode on many GPUs, provided it works at all. Use at your own risk :)
Let me provide you with an alternative.
For solid, big objects, there's an old, good approach of selection by:
enabling and setting the scissor test to a 1x1 window at the cursor position
drawing the scene with no lighting, texturing or multisampling, assigning a unique solid colour to every "important" entity - this colour will become the object ID for picking
calling glReadPixels and retrieving the colour, which would then serve to identify the picked object
clearing the buffers, resetting the scissor to the normal size and drawing the scene normally.
This gives you a very reliable "per-object" picking method. Also, drawing and clearing only 1 pixel with minimal per-pixel operation won't really hurt your performance, unless you are short on vertex processing power (unlikely, I think) or have really a lot of objects and are likely to get CPU-bound on the number of draw calls (but then again, I believe it's possible to optimize this away to a single draw call if you could pass the colour as per-pixel data).
The colour in RGB is 3 unsigned bytes, but it should be possible to additionally use the alpha channel of the framebuffer for the last byte, so you'd get 4 bytes in total - enough to store any 32-bit pointer to the object as the colour.
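A minimal sketch of that ID packing and read-back in C++ with raw GL calls (it assumes an RGBA8 framebuffer, that an OpenGL loader header is already included, and that each pickable object was drawn with its own packed colour during the pick pass):

#include <cstdint>
// assumes an OpenGL loader header (e.g. glad or GLEW) is already included

// Pack a 32-bit object ID into the RGBA colour used to draw it in the pick pass.
void idToColor(uint32_t id, float rgba[4])
{
    rgba[0] = ((id >>  0) & 0xFF) / 255.0f;
    rgba[1] = ((id >>  8) & 0xFF) / 255.0f;
    rgba[2] = ((id >> 16) & 0xFF) / 255.0f;
    rgba[3] = ((id >> 24) & 0xFF) / 255.0f;
}

// After the pick pass: read the single pixel under the cursor and rebuild the ID.
// (x, y) are window coordinates with the origin at the bottom-left.
uint32_t readPickedId(int x, int y)
{
    unsigned char px[4] = {0, 0, 0, 0};
    glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, px);
    return  static_cast<uint32_t>(px[0])
         | (static_cast<uint32_t>(px[1]) << 8)
         | (static_cast<uint32_t>(px[2]) << 16)
         | (static_cast<uint32_t>(px[3]) << 24);
}

If the default framebuffer has no alpha bits, stick to 24 bits of ID, or use the dedicated integer framebuffer mentioned below.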
Alternatively, you can create a dedicated framebuffer object with a specific pixel format (like GL_R32UI, or even GL_RG32UI if you need 64 bits) for that.
The above is a nice and quick alternative (both in terms of reliability and in implementation time) to the strict geometric approach.
I found that on new GPUs, the GL_SELECT mode is extremely slow. I played with a few different ways of fixing the problem.
The first was to do a CPU collision test, which worked, but wasn't as fast as I would have liked. It definitely slows down when you are casting rays into the screen (using gluUnProject) and then trying to find which object the mouse is colliding with. The only way I got satisfactory speeds was to use an octree to reduce the number of collision tests and then do a bounding-box collision test - however, this resulted in a method that was not pixel perfect.
The method I settled on was to first find all the objects under the mouse (using gluUnProject and bounding-box collision tests), which is usually very fast. I then rendered each of the objects that had potentially collided with the mouse into the back buffer as a different color. I then used glReadPixels to get the color under the mouse, and mapped that back to the object. glReadPixels is a slow call, since it has to read from the frame buffer. However, it is only done once per frame, which ends up taking a negligible amount of time. You can speed it up by rendering to a PBO if you'd like.
Giawa
umanga, can't see how to reply inline... maybe I should sign up :)
First of all I must apologize for giving you the wrong algorithm - I did the back-face culling one. But the one you need is very similar, which is why I got confused... d'oh.
Get the camera position to mouse vector as said before.
For each contour, loop through all of its coords in pairs (0-1, 1-2, 2-3, ... n-0) and make a vector out of each pair as before, i.e. walk the contour.
Now take the cross product of those two (each contour edge with the mouse vector) instead of between pairs like I said before; do that for all the pairs and vector-add the results up.
At the end, find the magnitude of the resulting vector. If the result is zero (taking rounding errors into account) then you're outside the shape - regardless of facing. If you're interested in facing, then instead of the magnitude you can take the dot product with the mouse vector to find the facing and test the sign +/-.
It works because the algorithm finds the amount of distance from the vector line to each point in turn. As you sum them up, if you are outside then they all cancel out because the contour is closed; if you're inside then they all add up. It's actually Gauss's law of electromagnetic fields in physics...
See: http://en.wikipedia.org/wiki/Gauss%27s_law and note that "the right-hand side of the equation is the total charge enclosed by S divided by the electric constant", noting the word "enclosed" - i.e. zero means not enclosed.
You can still do that optimization with the bounding boxes for speed.
In the past I've used GL_SELECT to determine which object(s) contributed the pixel(s) of interest and then used computational geometry to get an accurate intersection with the object(s) if required.
Do you expect to select by clicking the contour (on the edge) or the interior of the polygon? Your second approach sounds like you want clicks in the interior to select the tightest containing polygon. I don't think that GL_SELECT after rendering GL_LINE_STRIP is going to make the interior responsive to clicks.
If this was a true contour plot (from the image I don't think it is, edges appear to intersect) then a much simpler algorithm would be available.
You can't use select if you stay with the lines, because you would have to click on the rendered line pixels, not the space inside the lines bounding them - which is what I read as what you wish to do.
You can use Kos's answer, but in order to render the space you would need to solid-fill it, which would involve converting all of your contours to convex types - which is painful. So I think that approach would work sometimes and give the wrong answer in other cases unless you did that.
What you need to do is use the CPU. You have the view extents from the viewport and the perspective matrix. With the mouse coord, generate the view to mouse pointer vector. You also have all the coords of the contours.
Take the first coord of the first contour and make a vector to the second coord. Take the 3rd coord and make a vector from 2 to 3, and repeat all the way around your contour, finally making the last one from coord n back to 0 again. For each pair in sequence find the cross product and sum up all the results. When you have that final summation vector, keep hold of it and take the dot product with the mouse pointer direction vector. If it's positive then the mouse is inside the contour, if it's negative then it's not, and if it's 0 then I guess the plane of the contour and the mouse direction are parallel.
Do that for each contour and then you will know which of them are spiked by your mouse. It's up to you which one you want to pick from that set. Highest Z?
It sounds like a lot of work but it's not too bad and will give the right answer. You might additionally like to keep bounding boxes for all your contours; then you can early-out the ones nowhere near the mouse vector by doing the same maths as for the full test, but on only the 4 sides - if the box isn't hit then the contour can't be either.
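If you prefer a more textbook formulation of the same CPU-side test, here is a sketch that intersects the pick ray with the contour's plane and then runs a standard crossing-number point-in-polygon test in 2D. It is not the cross-product summation described above, just an alternative that answers the same question; it assumes GLM, planar closed contours, and that you have already built the camera-to-mouse ray (e.g. with gluUnProject):

#include <cmath>
#include <vector>
#include <glm/glm.hpp>

// Does the pick ray (origin, dir) hit the interior of a planar, closed contour?
bool rayHitsContour(const glm::vec3& origin, const glm::vec3& dir,
                    const std::vector<glm::vec3>& contour)
{
    if (contour.size() < 3) return false;

    // Plane of the contour from its first three vertices (assumed non-collinear).
    glm::vec3 normal = glm::normalize(
        glm::cross(contour[1] - contour[0], contour[2] - contour[0]));
    float denom = glm::dot(normal, dir);
    if (std::fabs(denom) < 1e-6f) return false;          // ray parallel to the plane

    float t = glm::dot(normal, contour[0] - origin) / denom;
    if (t < 0.0f) return false;                          // contour is behind the camera
    glm::vec3 p = origin + t * dir;                      // hit point on the plane

    // Build a 2D basis in the plane and run a crossing-number (even-odd) test.
    glm::vec3 u = glm::normalize(contour[1] - contour[0]);
    glm::vec3 v = glm::cross(normal, u);
    auto to2D = [&](const glm::vec3& q) {
        glm::vec3 d = q - contour[0];
        return glm::vec2(glm::dot(d, u), glm::dot(d, v));
    };

    glm::vec2 pt = to2D(p);
    bool inside = false;
    for (size_t i = 0, j = contour.size() - 1; i < contour.size(); j = i++) {
        glm::vec2 a = to2D(contour[i]), b = to2D(contour[j]);
        if ((a.y > pt.y) != (b.y > pt.y) &&
            pt.x < (b.x - a.x) * (pt.y - a.y) / (b.y - a.y) + a.x)
            inside = !inside;
    }
    return inside;
}

Run this per contour, keep the hits, and pick the one with the smallest t (closest to the camera); the bounding-box early-out described above still applies.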
The first option (GL_SELECT picking) is easy to implement and widely used.