Vertex2 confusion with Haskell OpenGL bindings

I'm using the Haskell OpenGL bindings to try to make a particle generator.
I want to store information about a particle in a record where I can then pass in fields and update them when appropriate.
So a position in the record is stored as:
data Particle = Particle { pos :: Vertex2 GLfloat }
and set like this:
Particle { pos = Vertex2 1 (0::GLfloat) }
Then I pass in the particle and try to retrieve the values like so:
drawParticle :: Particle -> IO ()
drawParticle part =
  (pos part) >>= \(Vertex2 x y) ->
    print x
The error I get:
Couldn't match type `Vertex2' with `IO'
Expected type: IO GLfloat
Actual type: Vertex2 GLfloat
In the return type of a call of `pos'
In the first argument of `(>>=)', namely `(pos part)'
In the expression: (pos part) >>= \ (Vertex2 x y) -> print x
I'm mildly confused about the data type Vertex2 and why it's declared with a single GLfloat instead of two. How would I extract the numbers from a value of type Vertex2 GLfloat?
(That is, how would I extract Vertex2 1 (0::GLfloat) into x = 1.0, y = 0.0?)

To answer your questions:
It would be possible to define Vertex2 to take two type parameters, allowing X to have one type and Y another, e.g. data Vertex2 xtype ytype = Vertex2 xtype ytype. However, it's generally a bad idea for X and Y to have different types, so instead it's defined as data Vertex2 sametype = Vertex2 sametype sametype to avoid problems.
Since you have explicitly given the type of the Vertex2 in your declaration of Particle, you don't need the type annotation in the expression you list. Just Particle { pos = Vertex2 1 0 } is enough (or: Particle (Vertex2 1 0)).
You get the compile error because you don't need monadic bind. part has type Particle, not IO Particle, so you don't need bind to get the value out. You can write either:
drawParticle part = let Vertex2 x y = pos part in print x
or:
drawParticle part = do
  let Vertex2 x y = pos part
  print x
(Note the two different forms of let, depending on whether it's in a do block; this confused me when I started out.)
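Putting it all together, here is a minimal self-contained sketch (the particleX/particleY helper names are just illustrative, not part of the OpenGL package):
import Graphics.Rendering.OpenGL (GLfloat, Vertex2(..))

data Particle = Particle { pos :: Vertex2 GLfloat }

-- plain accessors pulling the components back out of the Vertex2
particleX, particleY :: Particle -> GLfloat
particleX p = let Vertex2 x _ = pos p in x
particleY p = let Vertex2 _ y = pos p in y

main :: IO ()
main = do
  let part = Particle { pos = Vertex2 1 0 }
  print (particleX part, particleY part)   -- prints (1.0,0.0)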

Related

OpenGL: Rotate object on Y axis to look at another object

As the title says, I have two objects: one is moving around (on the Z and X axes), the other is static but should rotate around the Y axis so that it always looks at the first one. I've been fighting with this for a week already.
What I have now is the vector from object 1 to object 2, and the current look-at vector of object 2.
I calculate the angle between these two vectors and add it to rotation.y of object 2, but it's not working properly.
Any idea how to make it work? By the way, I'm using Euler angle transforms.
Pseudo-code:
vectorFrom1to2 = vector1 - vector2;
lookatVectorof2ndObject;
I normalize both of them, and then:
float angle = acos(dot(vectorFrom1to2, lookatVectorof2ndObject));
object2.rotateY = angle;
I don't know where my mistake is.
As a general rule of thumb, one that has proved true in many situations I've observed: as soon as you find yourself calculating angles from vectors, you are most likely doing something in a more complicated way than necessary.
All you need is a basis transformation which transforms the first object's local coordinate system to make its local Z axis point towards the second object. You can do this with a simple rotation matrix (provided you have a matrix/vector library ready to facilitate this more easily).
So, provided you have object 1 at position p1 and object 2 at position p2, and you want object 1 to rotate towards object 2, the rotation matrix can be obtained as follows:
(I am just using GLSL pseudo syntax here)
vec3 p1 = ...                 // <- position of first object
vec3 p2 = ...                 // <- position of second object
vec3 d  = normalize(p2 - p1)
vec3 r  = cross(vec3(0.0, 1.0, 0.0), d)   // = vec3(d.z, 0, -d.x)
mat3 m  = mat3(d.z, 0, -d.x,   // <- first column ('right' vector r)
               0,   1,  0,     // <- second column (keep Y)
               d.x, 0,  d.z)   // <- third column (map Z to point towards p2)
When you transform the vertices v of the first object with m, i.e. v' = m * v, the Z axis of object 1 points towards the position of p2, all formulated in the same "world" coordinate system.
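If your engine wants a single Euler angle rather than a matrix, as in the question's rotateY, the same basis boils down to a yaw around Y; a rough sketch in the same pseudo syntax (object1.rotateY is assumed to be a rotation around the world Y axis):
vec3 d = normalize(p2 - p1)
float yaw = atan(d.x, d.z)    // two-argument atan, i.e. atan2(d.x, d.z) in C
object1.rotateY = yaw         // the object at p1 now points its local +Z at p2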

How can I interpolate a set of precomputed functions in OpenCL?

I'm working with an OpenCL kernel where I need to use associated Legendre polynomials.
These are a set of fairly difficult-to-compute polynomials, indexed by integer orders n and m and accepting a real argument. The specifics of the actual polynomials are irrelevant, since I have a (slow) host-side function that can generate them, but the kernel-side function would need to look something like:
float legendre(int n, int m, float z)
{
    float3 lookupCoords;
    lookupCoords.x = n;
    lookupCoords.y = m;
    lookupCoords.z = z;
    // Do something here to interpolate z for a given n and m...
}
I want to interpolate along the Z axis, but just have nearest neighbor for the n and m axes since they're only defined for integer values. A benefit of Z is that it's only defined between -1 and 1, so it already looks a lot like a texture coordinate.
How can I accomplish this with a sampler and lookup tables in OpenCL?
My first thought was to attempt to use a 3D texture filled with precomputed orders, but I only want to interpolate along one dimension (the real or Z argument), and I'm not sure what this would look like in OpenCL C.
In OpenCL 1.1, use read_imagef with an image3d_t for the first parameter, a sampler_t created with CLK_FILTER_LINEAR for the second parameter, and finally a float4 coord for the third parameter with the coordinates to read from.
To interpolate only along one axis, let that coordinate be any float value, but set the other two coordinates to floor(value) + 0.5f. This keeps them from being interpolated. Like this (only interpolating z):
float4 coordinate = (float4)(floor(x) + 0.5f, floor(y) + 0.5f, z, 0.0f);
In OpenCL 1.2 you could use image arrays but I'm not sure it would be any faster and NVIDIA does not support OpenCL 1.2 on Windows.
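Filling in the skeleton from the question, a rough kernel-side sketch could look like the following. The table layout (n along the image width, m along the height, zSamples slices covering z in [-1, 1] along the depth) and the zSamples parameter are assumptions for illustration, not anything fixed by the API:
const sampler_t legendreSampler =
    CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_LINEAR;

float legendre(read_only image3d_t table, int n, int m, float z, int zSamples)
{
    // n and m land exactly on texel centres, so they behave as nearest neighbour;
    // z is mapped from [-1, 1] onto the range of texel centres so it is filtered linearly
    float zCoord = (z + 1.0f) * 0.5f * (zSamples - 1) + 0.5f;
    float4 coord = (float4)(n + 0.5f, m + 0.5f, zCoord, 0.0f);
    return read_imagef(table, legendreSampler, coord).x;
}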

Coerce built-ins to their GL equivalents

I just wrote a quick Conway's Game of Life in Haskell for practice and I thought that I should now animate it.
I wanted to use GLUT + OpenGL, and a few seconds of Googling later I had an example up and ready to fire. The difference is that the example defines a function that returns some points as myPoints :: [(GLfloat,GLfloat,GLfloat)], whereas I have Coord defined as data Coord = Coord {x :: Int, y :: Int} deriving (Show, Eq). That's all nice and neat, and the game seems to work in its plain form, but there are issues when I try to draw it, namely the part where I'm supposed to pass the points to the renderer:
renderPrimitive Points $ mapM_ (\(x, y, z) -> vertex $ Vertex3 x y z) myPoints
That's all fine if myPoints contains values of type GLfloat, but renderPrimitive Points $ mapM_ (\(Coord x y) -> vertex $ Vertex2 x y) myCoords complains:
No instance for (VertexComponent Int)
arising from a use of `vertex'
Possible fix: add an instance declaration for (VertexComponent Int)
I tried things like fromIntegral x and adding type annotations such as :: GLfloat or :: GLint, but it always complains that it can't match the type or that it can't go from Int to GLfloat/GLint.
My question is: how do I get my Coord type to play nicely with the OpenGL types? I can't find any hint on the web, and I'd rather not change it to Coord = Coord {x :: GLfloat, y :: GLfloat}, for the simple reason that a whole lot of other code will not play nicely with GLfloats, as it's all expecting Num or Int and whatnot.
Below is a minimal scenario that illustrates the issue and that I'd love to be able to compile:
module Main
where
import Graphics.Rendering.OpenGL
import Graphics.UI.GLUT
data Coord = Coord {x :: Int, y :: Int} deriving (Show, Eq)
someCoords = [Coord 1 1, Coord 1 2, Coord 42 7]
main = do
  (progName, _) <- getArgsAndInitialize
  createWindow "Please compile."
  displayCallback $= display
  mainLoop

display = do
  clear [ColorBuffer]
  renderPrimitive Points $ mapM_ (\(Coord x y) -> vertex $ Vertex2 x y) someCoords
After hunting for a while with another programmer in my household, we found a snippet that lets us make a GLfloat from an Int: fromIntegral x :: GLfloat produces the desired result. Similarly, realToFrac can be used to go from Float to GLfloat.
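So, in the listing above, one way to write the render lambda so that it compiles is the following sketch (cx/cy are just names chosen to avoid shadowing the record fields):
display = do
  clear [ColorBuffer]
  renderPrimitive Points $
    mapM_ (\(Coord cx cy) -> vertex $ Vertex2 (fromIntegral cx :: GLfloat) (fromIntegral cy)) someCoords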

2D rigid body physics using Runge-Kutta

Does anyone know of any C++/OpenGL source code demos for 2D rigid body physics using Runge-Kutta?
I want to build a physics engine but I need some reference code to understand better how others have implemented this.
There are a lot of things you have to take care of to do this nicely. I will focus on the integrator implementation and what I have found works well for me.
For all the degrees of freedom in your system, implement a function that returns the accelerations a as a function of time t, positions x and velocities v. It should operate on arrays or vectors of quantities, not just scalars.
a = accel(t,x,v);
After each RK step evaluate the acceleration to be ready for the next step. In the loop then do this:
{
    // assume t, x[], v[], a[] are known
    // step time t -> t+h and calculate new values
    float h2 = h/2;
    vec q1 = v + h2*a;
    vec k1 = accel(t+h2, x+h2*v, q1);
    vec q2 = v + h2*k1;
    vec k2 = accel(t+h2, x+h2*q1, q2);
    vec q3 = v + h*k2;
    vec k3 = accel(t+h, x+h*q2, q3);
    float h6 = h/6;
    t = t + h;
    x = x + h*(v + h6*(a+k1+k2));
    v = v + h6*(a + 2*k1 + 2*k2 + k3);
    a = accel(t, x, v);
}
Why? Well, the standard RK method requires you to build a 2×N state vector, but the derivatives of the first N elements are equal to the last N elements. If you split the problem into two N-element state vectors and simplify a little, you arrive at the above scheme (RK4 applied to a second-order system).
I have done this and the results are identical to commercial software for a planar system with N=6 degrees of freedom.
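For concreteness, here is a hedged sketch of what the accel(t, x, v) function used by the loop above could look like for a toy system of independent unit-mass spring-dampers; the vec alias just stands in for whatever vector type your integrator uses:
#include <cstddef>
#include <vector>

using vec = std::vector<float>;   // stand-in for the integrator's vector type

// accelerations for independent spring-dampers (unit mass): a_i = -k*x_i - c*v_i
vec accel(float t, const vec& x, const vec& v)
{
    (void)t;                      // a real force model may also depend on time
    const float k = 10.0f;        // spring stiffness
    const float c = 0.5f;         // damping coefficient
    vec a(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        a[i] = -k * x[i] - c * v[i];
    return a;
}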

3D Vector (X, Y, Z) look at Vector

I'm working with the Source SDK (which uses C++) and I want to rotate an entity's angles so that it looks at another entity.
An entity can be thought of as a game object or similar; it has a position (Vector) in the world as well as an angle (Vector).
I can rotate the entity by using SetAbsAngles, which takes a QAngle (basically a Vector) as a parameter.
Here is some pseudo-code:
vec3 p = entity2->getPosition();
vec3 r = entity1->getPosition();
float xdistance = p[0] - r[0];
float ydistance = p[1] - r[1];
float zdistance = p[2] - r[2];
float xzdistance = sqrt(xdistance * xdistance + zdistance * zdistance);
entity1->setHeading(atan2(xdistance, zdistance)); // rotation around y
entity1->setPitch(-atan2(ydistance, xzdistance)); // rotation around x
entity1->setBank(0);                              // rotation around z
The z-rotation is set to 0 because it cannot be determined. You can set it freely if you like.
This works in a coordinate system with Z facing forward, Y up and X to the right. If you are using a different system you may have to adjust some signs.
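Spelled out as compilable C++, a sketch might look like this (Vec3 and lookAtAngles are illustrative names, not Source SDK types, and the results are in radians, so convert if your engine expects degrees):
#include <cmath>

struct Vec3 { float x, y, z; };

// Heading (rotation around Y) and pitch (rotation around X) that make 'from'
// look at 'to' in a coordinate system with Z forward, Y up and X to the right.
void lookAtAngles(const Vec3& from, const Vec3& to, float& heading, float& pitch)
{
    const float dx = to.x - from.x;
    const float dy = to.y - from.y;
    const float dz = to.z - from.z;
    const float xz = std::sqrt(dx * dx + dz * dz);
    heading = std::atan2(dx, dz);
    pitch   = -std::atan2(dy, xz);
}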