I'm trying to implement mouse picking in a small application written in Haskell. I want to retrieve the projection matrix that was set with this code, found in the resize function that gets called whenever the window is resized:
resize w h = do
    GL.viewport $= (GL.Position 0 0, GL.Size (fromIntegral w) (fromIntegral h))
    GL.matrixMode $= GL.Projection
    GL.loadIdentity
    GL.perspective 45 (fromIntegral w / fromIntegral h) 1 100
The best I've achieved so far is to set the current matrix mode to GL.Projection and then try to read the GL.currentMatrix StateVar, like this:
GL.matrixMode $= GL.Projection
pm <- GL.get GL.currentMatrix
-- invert the matrix, somehow, and multiply it with the clip-space
-- position of the mouse
This doesn't work and produces this error:
Ambiguous type variable `m0' in the constraint:
(GL.Matrix m0) arising from a use of `GL.currentMatrix'
Probable fix: add a type signature that fixes these type variable(s)
In the first argument of `GL.get', namely `GL.currentMatrix'
In a stmt of a 'do' expression: pm <- GL.get GL.currentMatrix
I think I should be using some sort of type constraint when trying to get the matrix out of the StateVar, but changing the GL.get call to pm <- GL.get (GL.currentMatrix :: GL.GLfloat) just produces a different and equally puzzling message.
I know this is using the old, deprecated OpenGL matrix stack, and that modern code should be using shaders and such to do its own matrix handling, but I'm not yet comfortable enough in Haskell to attempt anything beyond the most basic of projects. If it's easy enough I would certainly try to convert what little rendering code I have to a more modern style, but I find it difficult to find suitable tutorials to help me along.
First things first: currentMatrix is deprecated and has been removed in the most recent OpenGL package (2.9.2.0). To use the most recent version, upgrade the dependency in your .cabal file. If you look at the source, GL.currentMatrix is identical to calling GL.matrix Nothing.
Second: The error you're receiving is because Haskell doesn't know the type of matrix component (float or double) that you're trying to read from the GL state. You're on the right track about adding a type signature to the function call, but GL.currentMatrix has type
(GL.Matrix m, GL.MatrixComponent c) => GL.StateVar (m c)
Hence, you need to fully specify the type in order to disambiguate it for Haskell. If you're set on using the old fixed-function pipeline, then the type signature should look something like this:
pm <- GL.get (GL.currentMatrix :: GL.StateVar (GL.GLmatrix GL.GLfloat))
That being said, your mouse picking code may still have problems, because there are a couple of other factors you need to account for:
You need both the modelview and projection matrices to get the proper world-space position of the ray into your scene; the call to GL.currentMatrix just gets the current matrix for whatever the current matrix mode is (see the sketch below for reading both).
Inverting a 4x4 matrix isn't part of the OpenGL package, IIRC, so you'll need your own inversion code.
Once you have the proper matrices, the OpenGL GLU package has an unproject function that might do what you need.
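For the first point, here is a minimal sketch of reading both matrices with explicit type annotations, assuming the package is imported qualified as GL and that the Modelview 0 stack is the one you use for the fixed-function pipeline:

import qualified Graphics.Rendering.OpenGL as GL

-- Read the current modelview and projection matrices as GLdouble matrices.
-- The annotations pin the ambiguous (m c) to GLmatrix GLdouble.
getPickMatrices :: IO (GL.GLmatrix GL.GLdouble, GL.GLmatrix GL.GLdouble)
getPickMatrices = do
  mv <- GL.get (GL.matrix (Just (GL.Modelview 0)) :: GL.StateVar (GL.GLmatrix GL.GLdouble))
  pr <- GL.get (GL.matrix (Just GL.Projection)    :: GL.StateVar (GL.GLmatrix GL.GLdouble))
  return (mv, pr)

GLdouble is used here because GLU's unprojection works in doubles; GLfloat works the same way if you prefer.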
Related
I'm trying to implement AABBs/OOBBs with MathGeoLib, because of how easy it makes operating with bounding boxes (and because I wanted to test some things with that library).
The problem is that the engine's object transformations are based on glm, since that's what we started with (and they work properly), but when it comes to transforming the OOBBs according to an object, it doesn't work very well.
What I basically do is pass the game object's translation, orientation and scale to a function (I tried passing a global matrix, but it doesn't work; it seems to 'add' the transformation instead of setting it, and I can't access the OOBB's matrix). That function does the following:
glm::vec3 pos = passedPosition - OOBBPreviousPos;

// translation * rotation * scale
glm::mat4 Transformation = glm::translate(glm::mat4(1.0f), pos) *
                           glm::mat4_cast(passedRot) *
                           glm::scale(glm::mat4(1.0f), passedScale);

// glm is column-major, MathGeoLib expects row-major, so transpose
glm::mat4 resMat = glm::transpose(Transformation);
math::float4x4 mat = math::float4x4::identity;
mat.Set(glm::value_ptr(resMat));
This basically transposes the glm matrix (I have seen that that's the way of 'translating' between the two libraries), passes it as a float* and then constructs the MathGeoLib matrix from that. I have debugged it and the values seem right for the object, so the next thing I do is actually transform the OOBB and then enclose the AABB around it, like this:
m_OBB.Transform(mat);
m_AABB.SetNegativeInfinity(); //Sets AABB to "null"
m_AABB.Enclose(m_OBB);
The final behaviour is pretty strange; believe me when I say this is the closest I've been to having it right. I've spent days testing different things and nothing works better (passing global/local matrices directly, trying different ways of passing/constructing the transformation data, checking whether the glm-to-MathGeoLib conversion is correct...). It rotates, but not around its own axis, and the scaling goes crazy (although translation works). Its current behaviour can be seen here: https://gfycat.com/quarrelsomefineduck (blue cubes are AABBs, green ones are OOBBs).
Am I doing something wrong with the math or with the data transfer?
I kept looking into this, but then a friend pointed me in another direction, so I finally solved it (or rather, worked around it) by storing the object's initial AABB and passing the game object's global matrix to the function mentioned above. Then, inside the function, I used another MathGeoLib function to transform the OOBB.
That function finally looks like:
glm::mat4 resMat = glm::transpose(GlobalMatrixPassed);
math::float4x4 mat = math::float4x4::identity;
mat.Set(glm::value_ptr(resMat)); //"Translate" glm matrix passed into a MathGeoLib one
m_OOBB.SetFrom(m_InitialAABB); //Set OOBB from the initial aabb
m_OOBB.Transform(mat); //Transform it
m_AABB.SetFrom(m_OOBB); //Set the AABB in function of the transformed OOBB
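For this to work, the untransformed AABB has to be captured once, when the mesh is loaded, and kept around as m_InitialAABB; every frame the OOBB is rebuilt from it and the full global matrix is applied in one go, so transforms never accumulate. A rough sketch of that one-time setup (the meshVertices container is an assumption about the surrounding engine code):

// One-time setup at mesh load (sketch): grow the AABB around every
// model-space vertex. meshVertices is assumed to be a std::vector<math::float3>.
m_InitialAABB.SetNegativeInfinity();
for (const math::float3 &v : meshVertices)
    m_InitialAABB.Enclose(v);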
I've got the BumbleBee 2 stereo camera and the two SDKs mentioned.
I've managed to capture video from it in my program, rectify the stereo images and get a disparity map. The next thing I'd like to have is a depth map similar to the one the Kinect gives.
The Triclops documentation is rather short; it is only a function reference, without a description of the typical workflow. The workflow is only shown in the examples.
So far I've found 2 relevant things: the family of triclopsRCDxxToXYZ() functions and the triclopsExtractImage3d() function.
Functions from the first family calculate the x, y and z coordinates for a single pixel. The z coordinate corresponds perfectly to the depth in meters. However, to use them I have to write two nested loops, as shown in the stereo3dpoints example. That adds too much overhead, because each call also computes two coordinates I don't need.
The second function, triclopsExtractImage3d(), always returns the error TriclopsErrorInvalidParameter. The documentation says only that "there is a geometry mismatch between the context and the TriclopsImage3d", which is not clear to me.
The examples in the Triclops 3.3.1 SDK do not show how to use it. Google turns up an example from Triclops SDK 3.2, which is absent in 3.3.1.
I've tried adding lines 253-273 from the link above to the current stereo3dpoints example and got that error.
Does anyone have an experience with it?
Is it valid to use triclopsExtractImage3d() or is it obsolete?
I also tried plotting values of disparity vs. z, obtained from triclopsRCDxxToXYZ().
The plot shows an almost exact inverse proportionality.
That is, z = k / disparity. But k is not constant across the image; it varies from approximately 2.5e-5 to 1.4e-3, i.e. two orders of magnitude. Therefore, it would be incorrect to calculate this value once and use it forever.
Maybe it is a bit too late and you have already figured it out yourself, but:
To use triclopsExtractImage3d() you have to create a TriclopsImage3d first:
TriclopsImage3d *depthImage;
triclopsCreateImage3d(triclopsContext, &depthImage);
triclopsExtractImage3d(triclopsContext, depthImage);
triclopsDestroyImage3d(&depthImage);
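In case it helps, here is a hedged sketch of the full sequence with error checking and a read-back loop; the TriclopsImage3d field names (nrows, ncols, points) are written from memory of the SDK headers, so verify them against the triclops3d.h that ships with your version:

/* Sketch only: field and enum names should be checked against the installed headers. */
TriclopsImage3d *depthImage = NULL;
if (triclopsCreateImage3d(triclopsContext, &depthImage) == TriclopsErrorOk &&
    triclopsExtractImage3d(triclopsContext, depthImage) == TriclopsErrorOk)
{
    for (int row = 0; row < depthImage->nrows; ++row)
    {
        for (int col = 0; col < depthImage->ncols; ++col)
        {
            const TriclopsPoint3d *p = &depthImage->points[row * depthImage->ncols + col];
            float z = p->point[2];   /* depth in meters at this pixel */
            /* ... write z into your depth map ... */
        }
    }
}
triclopsDestroyImage3d(&depthImage);

Note that triclopsCreateImage3d() sizes the image from the context's current resolution, which is presumably why a geometry mismatch between the two produces TriclopsErrorInvalidParameter.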
I am looking for a C++ equivalent to Matlab's griddata function, or any 2D global interpolation method.
I have C++ code that uses Eigen 3. I will have Eigen vectors containing the x, y, and z values, and two Eigen matrices equivalent to those produced by meshgrid in Matlab. I would like to interpolate the z values from the vectors onto the grid points defined by the meshgrid equivalents (which will extend a bit past the outside of the original points, so minor extrapolation is required).
I'm not too bothered by accuracy--it doesn't need to be perfect. However, I cannot accept NaN as a solution--the interpolation must be computed everywhere on the mesh regardless of data gaps. In other words, staying inside the convex hull is not an option.
I would prefer not to write an interpolation from scratch, but if someone wants to point me to a pretty good (and explicit) recipe I'll give it a shot. It's not the most hateful thing to write (at least in an algorithmic sense), but I don't want to reinvent the wheel.
Effectively what I have is scattered terrain locations, and I wish to define a rectilinear mesh that nominally follows some distance beneath the topography for use later. Once I have the node points, I will be good.
My research so far:
The question asked here: MATLAB functions in C++ produced a close answer, but unfortunately the suggestion was not free (SciMath).
I have tried understanding the interpolation function used in Generic Mapping Tools, and was rewarded with a headache.
I briefly looked into the Grid Algorithms library (GrAL). If anyone has commentary I would appreciate it.
Eigen has an unsupported interpolation package, but it seems to just be for curves (not surfaces).
Edit: VTK has matplotlib-like functionality. Presumably there must be some interpolation used in there for display purposes. Does anyone know whether that's accessible and usable?
Thank you.
This is probably a little late, but hopefully it helps someone.
Method 1.) Octave: If you're coming from Matlab, one way is to embed the GNU Matlab clone Octave directly into the C++ program. I don't have much experience with it, but you can call the Octave library functions directly from a .cpp file.
See here, for instance: http://www.gnu.org/software/octave/doc/interpreter/Standalone-Programs.html#Standalone-Programs
griddata is included in Octave's geometry package.
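As a rough idea of what that embedding looks like, here is a sketch adapted from the standalone-program example in the linked Octave manual; the headers and entry points below are the 3.x-era embedding API and have changed in later Octave versions, so treat it as a starting point rather than a recipe:

#include <octave/oct.h>
#include <octave/octave.h>
#include <octave/parse.h>
#include <octave/toplev.h>

int main ()
{
  // Start an embedded, quiet Octave interpreter (3.x-era API).
  string_vector argv (2);
  argv(0) = "embedded";
  argv(1) = "-q";
  octave_main (2, argv.c_str_vec (), 1);

  // Toy data: three scattered points and two query points; real code would
  // copy these out of the Eigen vectors/matrices instead.
  Matrix x (1, 3), y (1, 3), z (1, 3), xi (1, 2), yi (1, 2);
  x(0,0) = 0; x(0,1) = 1; x(0,2) = 0;
  y(0,0) = 0; y(0,1) = 0; y(0,2) = 1;
  z(0,0) = 0; z(0,1) = 1; z(0,2) = 2;
  xi(0,0) = 0.25; xi(0,1) = 0.5;
  yi(0,0) = 0.25; yi(0,1) = 0.5;

  // If griddata lives in a package on your install, it has to be loaded
  // (e.g. by feval'ing "pkg load") before this call.
  octave_value_list in;
  in(0) = octave_value (x);
  in(1) = octave_value (y);
  in(2) = octave_value (z);
  in(3) = octave_value (xi);
  in(4) = octave_value (yi);

  octave_value_list out = feval ("griddata", in, 1);
  Matrix zi = out(0).matrix_value ();   // interpolated values at (xi, yi)

  return 0;
}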
Method 2.) PCL: The way I do it is to use the Point Cloud Library (http://www.pointclouds.org) and its VoxelGrid filter. You can set the x and y bin sizes as you please, then set a really large z bin size, which gets you one z value for each x,y bin. The catch is that the x, y, and z values are the centroid of the points averaged into the bin, not the bin centers (which is also why it works for this). So you need to massage the x,y values when you're done:
Ex:
#include <cstdio>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>

//read in a list of comma separated values (x,y,z)
FILE * fp;
fp = fopen("points.xyz", "r");

//store them in PCL's point cloud format
pcl::PointCloud<pcl::PointXYZ>::Ptr basic_cloud_ptr (new pcl::PointCloud<pcl::PointXYZ>);
int numpts = 0;
double x, y, z;
while (fscanf(fp, "%lg, %lg, %lg", &x, &y, &z) == 3)   // stop at EOF or a malformed line
{
    pcl::PointXYZ basic_point;
    basic_point.x = x; basic_point.y = y; basic_point.z = z;
    basic_cloud_ptr->points.push_back(basic_point);
}
fclose(fp);
basic_cloud_ptr->width = (int) basic_cloud_ptr->points.size ();
basic_cloud_ptr->height = 1;

// create object for result
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud_filtered (new pcl::PointCloud<pcl::PointXYZ> ());

// create filtering object and process
pcl::VoxelGrid<pcl::PointXYZ> sor;
sor.setInputCloud (basic_cloud_ptr);
//set the bin sizes here (dx,dy,dz). for 2d results, make one of the bins larger
//than the data set span in that axis
sor.setLeafSize (0.1, 0.1, 1000);
sor.filter (*cloud_filtered);
So cloud_filtered is now a point cloud that contains one point for each bin. Then I just make a 2-D matrix and go through the point cloud, assigning points to their x,y bins if I want an image, etc., as would be produced by griddata. It works pretty well, and it's much faster than Matlab's griddata for large datasets.
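That last step might look roughly like the helper below; it is a sketch only: binToGrid, the grid origin x0/y0 and the nx/ny counts are made up here, dx/dy must match the leaf size used above, and bins that received no points still need to be filled or extrapolated afterwards.

#include <Eigen/Dense>
#include <limits>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Scatter the one-point-per-bin cloud onto a regular Eigen grid.
Eigen::MatrixXd binToGrid(const pcl::PointCloud<pcl::PointXYZ>& cloud,
                          double x0, double y0, double dx, double dy,
                          int nx, int ny)
{
    Eigen::MatrixXd zgrid =
        Eigen::MatrixXd::Constant(ny, nx, std::numeric_limits<double>::quiet_NaN());
    for (const auto& p : cloud.points)
    {
        const int j = static_cast<int>((p.x - x0) / dx);   // column index
        const int i = static_cast<int>((p.y - y0) / dy);   // row index
        if (i >= 0 && i < ny && j >= 0 && j < nx)
            zgrid(i, j) = p.z;   // one representative z per occupied bin
    }
    // Empty bins stay NaN and still need filling before the final mesh is usable.
    return zgrid;
}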
I have completely re-written this first post to better show what the problem is.
I am using ps v1.4 (highest version I have support for) and keep getting an error.
It happens any time I use a function such as cos, dot, distance, sqrt, normalize, etc. on something that was passed into the pixel shader.
For example, I need to do "normalize(LightPosition - PixelPosition)" to use a point light in my pixelshader, but normalize gives me an error.
Some things to note:
I can use things like pow, abs, and radians with no error.
There is only an error if it is done on something passed from the vertex shader. (For example I could take the sqrt of a local pixelshader variable with no error)
I get the error from doing a function on ANY variable passed in, even tex coords, color, etc.
Inside the vertex shader I can do all of these functions on any variables passed in with no errors, it's only in the pixelshader that I get an error
All the values passing from the vertex to pixel shader are correct, because if I use software processing rather than hardware I get no error and a perfectly lit scene.
Since normalizing the vector is essentially where my error comes from, I tried creating my own normalizing function.
I call Norm(LightPosition - PixelPosition) and "Norm" looks like this -
float3 Norm(float3 v)
{
return v / sqrt(dot(v, v));
}
I still get the error because I guess technically I'm still trying to take a sqrt inside the pixelshader.
The error isn't anything specific, it just says "error in application" on the line where I load my .fx file in C#
I'm thinking it could actually be a compiling error because I have to use such old versions (vs 1.1 and ps 1.4)
When debugged using fxc.exe it tells me "can not map instruction to pixel shader instruction set"
Old GPUs didn't support every instruction, especially in the pixel shader.
You might get away with a sqrt in the vertex shader, but for such an old version (1.1!!) the fragment shader might be extremely limited.
I.e., this might not be a bug.
The workaround could be to skip HLSL and write your own assembly (but you might stumble onto the same problem there) and simulate the sqrt (say, with a texture lookup and/or interpolations, if you can have 2 textures in 1.0 :-p).
You can of course try to write a sqrt lookup/interpolation in HLSL, but it might be too big as well (I don't remember, but IIRC 1.1 doesn't let you write very long shaders).
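To illustrate the first suggestion, one way around it is to do the expensive math per-vertex and pass the result down, since the vertex shader handles normalize/sqrt fine. A sketch only; LightPosition, the matrices and the output semantics here are placeholders, not the asker's actual effect:

// vs_1_1 sketch: normalize per-vertex and let the rasterizer interpolate the result.
float4x4 WorldViewProj;
float4x4 World;
float3   LightPosition;

struct VS_OUT
{
    float4 Pos      : POSITION;
    float3 LightDir : TEXCOORD1;   // normalized here, read directly in the pixel shader
};

VS_OUT VS(float4 pos : POSITION)
{
    VS_OUT o;
    o.Pos = mul(pos, WorldViewProj);
    float3 worldPos = mul(pos, World).xyz;
    o.LightDir = normalize(LightPosition - worldPos);   // the sqrt happens here, not in ps 1.4
    return o;
}

The interpolated vector is no longer exactly unit length across a triangle, but for ps 1.4-era lighting that is usually an acceptable trade-off.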
I was studying Perlin noise through some examples at http://dindinx.net/OpenGL/index.php?menu=exemples&submenu=shaders and couldn't help noticing that his make3DNoiseTexture() in perlin.c uses noise3(ni) instead of PerlinNoise3D(...)
Now why is that? Isn't Perlin's Noise supposed to be a summation of different noise frequencies and amplitudes?
Question 2 is: what do ni, inci, incj, inck stand for? Why use ni instead of x,y coordinates? And why is ni incremented with
ni[0]+=inci;
inci = 1.0 / (Noise3DTexSize / frequency);
I see Hugo Elias created his Perlin2D with x,y coordinates, and so does PerlinNoise3D(...).
Thanks in advance :)
I now understand why and am going to answer my own question in hopes that it helps other people.
Perlin noise is actually a synthesis of gradient noises. To produce it, we compute, for each lattice corner surrounding the input point, the dot product between the randomly generated gradient vector at that corner and the vector pointing from the corner to the input point.
Now if the input point were a whole number, such as the integer xyz coordinates of a texture you want to create, that dot product would always return 0, which would give you flat noise. So instead, we use ni, stepped by inci, incj, inck, as an alternative index. Yep, just an index, nothing else.
Now returning to question 1, there are two methods to implement Perlin's Noise:
1. Calculate each octave's noise separately and store the octaves in the RGBA slots of the texture.
2. Sum the octaves up beforehand and store the result in one of the RGBA slots of the texture.
noise3(ni) is the actual implementation of method 1, while PerlinNoise3D(...) suggests the latter.
In my personal opinion, method 1 is much better because you have much more flexibility over how you use each octave in your shaders.
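For instance, with one octave per RGBA channel the shader can recombine them however it likes. A small GLSL sketch (uNoise is a hypothetical sampler3D holding the texture built by make3DNoiseTexture(); older GLSL versions would use texture3D instead of texture):

uniform sampler3D uNoise;

float fbm(vec3 p)
{
    vec4 octaves = texture(uNoise, p);
    // Classic 1/f weighting; turbulence, ridged noise, etc. just use
    // different weights or abs() on the same four channels.
    return dot(octaves, vec4(0.5, 0.25, 0.125, 0.0625));
}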
My guess on the reason for using noise3(ni) in make3DNoiseTexture() instead of PerlinNoise3D(...) is that when you use that noise texture in your shader you want to be able to replicate and modify the functionality of PerlinNoise3D(...) directly in the shader.
My guess for the reasoning behind ni, inci, incj, inck is that using the x,y,z of the volume directly doesn't give a good result, so by scaling the noise with the frequency instead it is possible to adjust the resolution of the noise independently of the volume size.