I am trying to compute the visibility between two planes or patches.
I have a wireframe of quads. Each quad has a normal vector with X, Y and Z coordinates. Each quad has 4 vertices. Each vertex has X, Y and Z coordinates.
Given two quads, how can I know whether there is an occluder or some other object between these two patches (quads)?
Therefore, I need to create a method that returns 1 if the patches have no occluder between them and 0 if they do.
The method I picture would be something like this:
GLint visibility(Patch i, Patch j) {
    GLboolean isVisible;
    vector<Patch> allPatches; // can be used to get all patches in the scene

    // Check if there is any occluder between patch i and patch j
    // ... some computations here ...

    if (isVisible) {
        return 1;
    } else {
        return 0;
    }
}
I've heard that z-buffer algorithms and the hemicube method can get this done. I already have the form factors computed; I just need to finish this step to get shadows.
Please include some form of diagram or concrete method in your answer, because I am not that much of a genius.
I found the solution. Basically, I needed to use ray-tracing techniques: cast a ray from one patch to the other and, for every other patch, check whether the ray intersects that patch's plane using a barycentric-coordinate computation. Once you have the intersection point, you need to check whether it lies inside the quad.
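For reference, a minimal sketch of that ray-cast test. The Patch members center() and v[4] and the small vector helpers are illustrative assumptions, not part of the question's code; the triangle test is the standard Moller-Trumbore barycentric method, with each quad split into two triangles:

#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x }; }

// Moller-Trumbore: does the segment orig + t*dir, 0 < t < 1, hit triangle (v0,v1,v2)?
// dir is the full (unnormalized) segment from patch i to patch j, so t in (0,1)
// means the hit lies strictly between the two patches.
static bool hitsTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2)
{
    const float eps = 1e-6f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (det > -eps && det < eps) return false; // segment parallel to the triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;                 // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;               // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    float t = dot(e2, q) * inv;
    return t > eps && t < 1.0f - eps;          // strictly between the patches
}

GLint visibility(const Patch& i, const Patch& j, const std::vector<Patch>& allPatches)
{
    Vec3 orig = i.center();
    Vec3 dir  = sub(j.center(), orig);         // segment from patch i to patch j
    for (const Patch& p : allPatches)
    {
        if (&p == &i || &p == &j) continue;    // the endpoints cannot occlude themselves
        // Test the quad as two triangles: (0,1,2) and (0,2,3).
        if (hitsTriangle(orig, dir, p.v[0], p.v[1], p.v[2]) ||
            hitsTriangle(orig, dir, p.v[0], p.v[2], p.v[3]))
            return 0;                          // occluder found
    }
    return 1;                                  // nothing in between
}

For robustness you would normally cast several rays between random points on the two patches rather than a single center-to-center ray, since one ray cannot detect partial occlusion.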
I'm looking for a fast, efficient (even if sometimes false positive) way to determine if a cube intersects a frustum.
I've been using a brute force test of just seeing if all cube points are behind one of the planes, and rejecting on that basis, using this function:
inline char ClassifyPoint(Vector thePoint, char thePlane)
{
    Vector aDir = mPlane[thePlane].mPos - thePoint;
    float aD = aDir.Dot(mPlane[thePlane].mNormal);
    if (aD < -0.0005f) return 1;  // In front of plane
    if (aD >  0.0005f) return -1; // Behind plane
    return 0;                     // "On" the plane
}
It also seems to work if I just use the center of each cube face, which saves me two tests.
But I wanted to know if there's a faster way, something more volume oriented.
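A common volume-oriented refinement (a standard technique, not something from the code above) is the "positive vertex" test: for each frustum plane, classify only the box corner that lies farthest along the plane's normal. If even that corner is behind some plane, the whole box is outside, which costs at most one dot product per plane:

// Plane stored as normal + offset, normal pointing into the frustum;
// this layout is an assumption for the sketch.
struct Plane { float nx, ny, nz, d; }; // inside when nx*x + ny*y + nz*z + d >= 0

// Conservative test: returns false only when the box is certainly outside,
// and may return true for a few boxes that are actually outside
// (false positives), matching the tolerance stated in the question.
bool BoxIntersectsFrustum(const Plane frustum[6],
                          float minx, float miny, float minz,
                          float maxx, float maxy, float maxz)
{
    for (int i = 0; i < 6; ++i)
    {
        const Plane& p = frustum[i];
        // The corner farthest along the plane normal (the "positive vertex").
        float px = (p.nx >= 0.0f) ? maxx : minx;
        float py = (p.ny >= 0.0f) ? maxy : miny;
        float pz = (p.nz >= 0.0f) ? maxz : minz;
        // If even that corner is behind this plane, the whole box is outside.
        if (p.nx * px + p.ny * py + p.nz * pz + p.d < 0.0f)
            return false;
    }
    return true; // intersecting or fully inside (possibly a false positive)
}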
I am trying to implement logic where, on mouse click, a shot is fired at an object. To do so, I did the following:
I first considered the .obj file of my model and found the region (list of coordinates) that the shot works on (a particular weak point of the body).
I then considered the smallest and largest x, y and z values present in the file for that particular region (xmin, ymin, zmin and xmax, ymax, zmax).
To figure out whether the shot has landed on the weak point, I assumed that a shot lands on the weak point if the coordinates of the shot lie between (xmin, ymin, zmin) and (xmax, ymax, zmax).
I assumed the coordinates from the .obj file to be the actual coordinates of the model, since the assimp code I have directly loads in the coordinates of the model. Considering (xmin,ymin,zmin) and (xmax,ymax,zmax), I converted the coordinates to window coordinates via gluProject().
I then considered the current cursor position and checked if the cursor position lies between (xmin,ymin,zmin) and (xmax,ymax,zmax).
The problems I now face are:
The object coordinates provided in the .obj file range between -4 to 4, which then lie around 1.0 after gluProject(), whereas the cursor position lies between (0,0) and (1280,720).
After gluProject(), (xmin,ymin) and (xmax,ymax) are either (0,1) or (1,0) whereas the zmin and zmax values seem fine.
How can I get my logic working ?
Here is the code:
// Call shader to draw and acquire necessary information for gluProject()
modelShader.use();
modelShader.setMat4("projection", projection);
modelShader.setMat4("view", view);
glm::mat4 model_dragon(1.0f); // start from the identity matrix
double time = glfwGetTime();
model_dragon = glm::translate(model_dragon, glm::vec3(cos((360.0 - time) / 2.0) * 60.0,
                                                      cos((360.0 - time) / 2.0) * (-2.5),
                                                      sin((360.0 - time) / 1.0) * 60.0));
model_dragon = glm::rotate(model_dragon, (float)glm::radians(30.0), glm::vec3(0.0, 0.0, 1.0));
model_dragon = glm::scale(model_dragon, glm::vec3(1.4, 1.4, 1.4));
modelShader.setMat4("model", model_dragon);
// Keep copies so the mouse callback can provide the view, model and projection required for gluProject()
collision_model = model_dragon;
collision_view = view;
collision_proj = projection;
ourModel.Draw(modelShader);
Mouse button callback
// Note: dragon_min and dragon_max variables hold the constant position of the min and max coordinates.
void mouse_button_callback(GLFWwindow* window, int button, int action, int mods) {
    if (button == GLFW_MOUSE_BUTTON_LEFT && action == GLFW_PRESS) {
        Mix_PlayChannel(-1, shot, 0); // Play sound
        GLdouble x, y, xmin, ymin, zmin, xmax, ymax, zmax, dmodel[16], dproj[16];
        GLint dview[16];
        float *model = (float*)glm::value_ptr(collision_model);
        float *proj  = (float*)glm::value_ptr(collision_proj);
        float *view  = (float*)glm::value_ptr(collision_view);
        // Convert mat4 to double array
        for (int i = 0; i < 16; ++i) { dmodel[i] = model[i]; dproj[i] = proj[i]; dview[i] = (int)view[i]; }
        glfwGetCursorPos(window, &x, &y);
        gluProject(dragon_min_x, dragon_min_y, dragon_min_z, dmodel, dproj, dview, &xmin, &ymin, &zmin);
        gluProject(dragon_max_x, dragon_max_y, dragon_max_z, dmodel, dproj, dview, &xmax, &ymax, &zmax);
        if ((x >= xmin && x <= xmax) && (y >= ymin && y <= ymax)) { printf("Hit\n"); defense--; }
    }
}
The .obj coordinates have eg. values as shown:
0.032046 1.533727 4.398055
You are confusing the parameters of gluProject, especially the view parameter. This parameter should contain 4 integers describing the viewport (x, y, width, height), not the view matrix.
gluProject (and a lot of other glu functions) is tailored towards the fixed-function pipeline and its matrix stacks. Because of this, you have to pass the following information:
model: The modelview matrix, as returned by glGetDoublev(GL_MODELVIEW_MATRIX, ...).
proj: The projection matrix, as returned by glGetDoublev(GL_PROJECTION_MATRIX, ...).
view: The current viewport, as returned by glGetIntegerv(GL_VIEWPORT, ...).
As you can see, the view matrix is packed together with the model matrix, and view contains the viewport.
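Concretely, the projection in the question's callback could be set up like this (a sketch reusing the question's variables; note that gluProject returns window coordinates with a bottom-left origin, while glfwGetCursorPos reports the cursor with a top-left origin, so one of the y values must be flipped before comparing):

GLdouble dmodelview[16], dproj[16];
GLint viewport[4];
// Pack view and model into one modelview matrix, as the fixed-function
// pipeline would have it.
glm::mat4 modelview = collision_view * collision_model;
const float *mv = glm::value_ptr(modelview);
const float *pr = glm::value_ptr(collision_proj);
for (int i = 0; i < 16; ++i) { dmodelview[i] = mv[i]; dproj[i] = pr[i]; }
glGetIntegerv(GL_VIEWPORT, viewport); // x, y, width, height: 4 integers, not a matrix

GLdouble sx, sy, sz;
gluProject(dragon_min_x, dragon_min_y, dragon_min_z,
           dmodelview, dproj, viewport, &sx, &sy, &sz);
sy = viewport[3] - sy; // flip to the cursor's top-left origin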
I'd strongly advise not to use glu functions at all when working with modern OpenGL. Especially when the matrices are already stored in glm, it is better to use glm::project.
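With glm the same projection is a couple of lines, with no double conversion at all (a sketch; glm::project is declared in <glm/gtc/matrix_transform.hpp>, and the 1280x720 window size is taken from the question):

#include <glm/gtc/matrix_transform.hpp> // glm::project

// glm::project takes the combined modelview matrix and the viewport directly.
glm::vec4 viewport(0.0f, 0.0f, 1280.0f, 720.0f);
glm::vec3 winMin = glm::project(glm::vec3(dragon_min_x, dragon_min_y, dragon_min_z),
                                collision_view * collision_model,
                                collision_proj, viewport);
glm::vec3 winMax = glm::project(glm::vec3(dragon_max_x, dragon_max_y, dragon_max_z),
                                collision_view * collision_model,
                                collision_proj, viewport);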
Note 1: Converting a floating-point matrix to an integer matrix by casting each element almost never produces anything meaningful.
Note 2: When projecting a bounding rectangle to screen space, the result is in general no longer a rectangle. Projection does not preserve angles, so the result is a general four-cornered polygon. The same goes for bounding boxes: you can't even guarantee that the projected box is contained in the screen-space rectangle defined by projecting [x_min, y_min, z_min] and [x_max, y_max, z_max].
I'm just wondering whether there is any way to perform mouse-picking detection on any object, whether it is a generated or an imported one.
[Idea] -
The idea I have in mind is to iterate over every object in the scene, checking whether the mouse ray has intersected it. To check the intersection, the method would test the mouse-picking ray against the triangles that make up the object.
[Pros] -
I believe the benefit of this approach is that every object can be detected with mouse picking, since they all inherit the detection method.
[Cons] -
I believe the drawbacks are mainly speed: the method is very expensive, so it would need careful optimization.
[Situation] -
In the past I have read about mouse picking and I have implemented some basic form of it. But all of that was crappy work which I am not proud of. So today I have re-read some of the material online. Nowadays I see a lot of mouse picking done with color IDs and shaders. I'm not too keen on that method; I'm more into the mathematical side.
So here is my mouse picking ray thingamajig.
maths::Vector3 Camera::Raycast(s32 mouse_x, s32 mouse_y)
{
    // Normalized Device Coordinates
    maths::Vector2 window_size = Application::GetApplication().GetWindowSize();
    float x = (2.0f * mouse_x) / window_size.x - 1.0f;
    float y = 1.0f - (2.0f * mouse_y) / window_size.y; // window y grows downwards, NDC y grows upwards
    float z = 1.0f;
    maths::Vector3 normalized_device_coordinates_ray = maths::Vector3(x, y, z);

    // Homogeneous Clip Coordinates
    maths::Vector4 homogeneous_clip_coordinates_ray = maths::Vector4(normalized_device_coordinates_ray.x, normalized_device_coordinates_ray.y, -1.0f, 1.0f);

    // 4D Eye (Camera) Coordinates
    maths::Vector4 camera_ray = maths::Matrix4x4::Invert(projection_matrix_) * homogeneous_clip_coordinates_ray;
    camera_ray = maths::Vector4(camera_ray.x, camera_ray.y, -1.0f, 0.0f);

    // 4D World Coordinates
    maths::Vector3 world_coordinates_ray = maths::Matrix4x4::Invert(view_matrix_) * camera_ray;
    world_coordinates_ray = world_coordinates_ray.Normalize();

    return world_coordinates_ray;
}
I have this ray-plane intersection function which calculates whether a certain ray has intersected a certain plane. DUH!
Here is the code for that.
bool Camera::RayPlaneIntersection(const maths::Vector3& ray_origin, const maths::Vector3& ray_direction, const maths::Vector3& plane_origin, const maths::Vector3& plane_normal, float& distance)
{
    float denominator = plane_normal.Dot(ray_direction);
    if (denominator >= 1e-6) // 1e-6 = 0.000001; rejects rays (nearly) parallel to the plane
    {
        maths::Vector3 vector_subtraction = plane_origin - ray_origin;
        // Divide by the denominator to get the actual ray parameter:
        // t = ((plane_origin - ray_origin) . n) / (ray_direction . n)
        distance = vector_subtraction.Dot(plane_normal) / denominator;
        return (distance >= 0);
    }
    return false;
}
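For what it's worth, here is how the two functions fit together for a single pick; camera_position_ and the arithmetic operators on maths::Vector3 are assumptions about the question's math library:

void Camera::Pick(s32 mouse_x, s32 mouse_y)
{
    maths::Vector3 ray_direction = Raycast(mouse_x, mouse_y);
    // An illustrative ground plane through the origin, facing up.
    maths::Vector3 plane_origin(0.0f, 0.0f, 0.0f);
    maths::Vector3 plane_normal(0.0f, 1.0f, 0.0f);
    float distance;
    if (RayPlaneIntersection(camera_position_, ray_direction,
                             plane_origin, plane_normal, distance))
    {
        // The point where the pick ray meets the plane.
        maths::Vector3 hit = camera_position_ + ray_direction * distance;
        // ... compare hit against the object's bounds here ...
    }
}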
There are many more out there, e.g. plane-sphere intersection and plane-disk intersection. These tests are very specific, so it feels very hard to do mouse-picking intersections on a global scale. I feel this way because, for this very RayPlaneIntersection function, what I would have to do is retrieve the objects in the scene and retrieve all the normals for each object (which is a pain in the ass). So now, to re-emphasize my question:
Is there already a method out there which I don't know, that does mouse picking in one way for all objects? Or am I just being stupid and not knowing what to do when I have everything?
Thank you. Thank you.
Yes, it is possible to do mouse-picking with OpenGL: you render all the geometry into a special buffer that stores a unique id of the object instead of its shaded color, then you just look at what value you got at the pixel below the mouse and know the object by its id that is written there. However, although it might be simpler, it is not a particularly efficient solution if your camera or geometry constantly moves.
Instead, doing an analytical ray-object intersection is the way to go. However, you don't need to check the ray against every triangle of every object; that would indeed be inefficient. You should cull entire objects by their bounding boxes, or even whole portions of the scene. Game engines have their own spatial index data structures to speed up ray-object intersections. They need them not only for mouse picking, but also for collision detection, physics simulations, AI, and what-not.
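For the bounding-box culling step, the classic "slab" ray/AABB test is a cheap first filter before any triangle work (a generic sketch, not tied to any engine; dir_inv holds the componentwise reciprocal of the ray direction, precomputed once per ray):

#include <algorithm> // std::min, std::max
#include <limits>    // std::numeric_limits

bool RayIntersectsAABB(const float origin[3], const float dir_inv[3],
                       const float box_min[3], const float box_max[3])
{
    float tmin = 0.0f; // only count hits in front of the ray origin
    float tmax = std::numeric_limits<float>::max();
    for (int axis = 0; axis < 3; ++axis)
    {
        float t1 = (box_min[axis] - origin[axis]) * dir_inv[axis];
        float t2 = (box_max[axis] - origin[axis]) * dir_inv[axis];
        tmin = std::max(tmin, std::min(t1, t2)); // entering the slab
        tmax = std::min(tmax, std::max(t1, t2)); // leaving the slab
    }
    return tmin <= tmax; // the intervals overlap, so the ray hits the box
}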
Also note that the geometry used for the picking might be different from the one used for rendering. One example that comes to mind is that of semi-transparent objects.
I am looking for a way to do a wormhole effect like this:
- https://www.youtube.com/watch?v=WeOBXmLeJWo&feature=youtu.be&t=43s
I have already found nice tunnels in the examples, but here it is a little bit more involved. Actually the space appears to be warped somehow and the movement is high velocity, so it is not just entering into a simple tunnel. Any idea how to do the space warping part of it?
I decided to add more info because this was too broad:
I have a galaxy, and each star in it has a 3D coordinate, size, etc. I can visit these stars with a spaceship. There are very distant stars and it would take a lot of time to get to them; that's why I need warp (faster-than-light) speed. According to physics this does not necessarily require a wormhole, but this app does not have to be overly realistic. I don't want to solve this with pure OpenGL, so of course we can use shaders. I'd like to warp the space in the middle of the screen when accelerating to warp speeds. After that, a tunnel effect can come, because I think it would consume a lot of resources to update every star at very high speeds, so I'd like to update only the close stars. This can't be a prerendered animation, because the destination is not always certain; sometimes the purpose is exploration and sometimes traveling. I don't think warping only the skybox is enough, but I am not sure about this.
There are 2 things going on there:
space curvature around the hole
You need to construct an equation that describes the curvature of space around the hole, parametrized by the hole's parameters (mass, position, orientation) and time, so you can animate it. Then from this curvature you can compute the relative displacement of each pixel/voxel around it. I would start with a cylindrical cone whose radius is modulated by the sine of the distance from the hole, plus/minus some animation parameters (this needs experimentation).
For example, start with (in wormhole local coordinates, LCS):
r = R * sin(z*0.5*M_PI/wormhole_max_depth)
Then modulate it by additional terms. wormhole_max_depth and R should be functions of time, even linear ones or with some periodic term, so the hole pulsates a bit.
The displacement can be done by simply computing the distance of the point of interest to the cone surface and pushing the point towards the surface, more strongly the closer it is (voxels inside the cone are considered below the surface, so apply maximum displacement strength there).
particle/light/matter bursting out of the hole
I would go for this only once #1 is already done. It should be a simple particle effect with some nice circularly blended alpha texture, animated on the surface of the cone from #1. I see it as a few for loops with pseudo-random displacement in position and speed ...
Techniques
This topic depends on how you want to do this. I see these possibilities:
Distort geometry during rendering (3D vector)
So you apply the cone displacement directly on the rendered geometry. This is best done in GLSL, but the rendered geometry must consist of small enough primitives for this to work at the vertex level ...
Distort skybox/stars only (3D vector or 2D raster; objects stay unaffected)
So you apply the displacement to the texture coordinates of the skybox, or directly to the star positions.
Distort the whole rendered scene in a second pass (2D raster)
This needs two-pass rendering; in the second pass you just warp the texture coordinates near the hole.
As you have different local stars in each sector, I would use a star background generated from a star catalogue (a list of all your stars) and apply the distortion on the stars directly in 3D vector space (so no skybox: option #2). Also because my engines already use such a representation and rendering for the same reasons.
[Edit1] cone geometry
I haven't had much time for this recently until today, so I did not make much progress. I decided to start with the cone geometry, so here it is:
class wormhole
{
public:
    reper rep;           // coordinate system transform matrix
    double R0, R1, H, t; // radii, depth, animation time

    wormhole() { R0 = 10.0; R1 = 100.0; H = 50.0; t = 0.0; }
    wormhole(wormhole& a) { *this = a; }
    ~wormhole() {}
    wormhole* operator = (const wormhole *a) { *this = *a; return this; }
    /*wormhole* operator = (const wormhole &a) { ...copy... return this; };*/

    // compute cone position from parameters a=<0,2pi>, h=<0,1>
    void ah2xyz(double *xyz, double a, double h)
    {
        double r, tt;
        tt = t; if (t > 0.5) tt = 0.5; r = 2.0 * R0 * tt;          // inner radius R0
        tt = t; if (t > 1.0) tt = 1.0; r += (R1 - r) * h * h * tt; // outer radius R1
        xyz[0] = r * cos(a);
        xyz[1] = r * sin(a);
        xyz[2] = H * h * tt;
        rep.l2g(xyz, xyz); // transform from local to global coordinates
    }

    void draw_cone()
    {
        int e;
        double a, h, da = pi2 * 0.04, p[3]; // pi2 = 2*pi
        glColor3f(0.2, 0.2, 0.2);
        // circles of constant height
        for (h = 0.0; h <= 1.0; h += 0.1)
        {
            glBegin(GL_LINE_STRIP);
            for (e = 1, a = 0.0; e; a += da)
            {
                if (a >= pi2) { e = 0; a = 0.0; }
                ah2xyz(p, a, h);
                glVertex3dv(p);
            }
            glEnd();
        }
        // lines of constant angle
        for (e = 1, a = 0.0; e; a += da)
        {
            glBegin(GL_LINE_STRIP);
            for (h = 0.0; h <= 1.0; h += 0.1)
            {
                if (a >= pi2) { e = 0; a = 0.0; }
                ah2xyz(p, a, h);
                glVertex3dv(p);
            }
            glEnd();
        }
    }
} hole;
Where rep is my class for a homogeneous 4x4 transform matrix (remembering both the direct and inverse matrices at the same time) and the function l2g just transforms from local coordinates to global ones. The cone parameters are:
R0 - inner cone radius when fully grown
R1 - outer cone radius when fully grown
H - the height/depth of the cone when fully grown
t - animation parameter; values in <0.0,1.0> are the growth, and values above 1.0 are reserved for the fully grown wormhole animation (see the usage sketch below)
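Usage is then just a matter of advancing t each frame before drawing; a small sketch (growth_speed is an illustrative constant, dt the frame time in seconds):

// Grow the wormhole a bit each frame, then render the wireframe cone.
void update_and_draw(wormhole &hole, double dt)
{
    const double growth_speed = 0.25; // reaches full growth (t = 1) in ~4 s
    hole.t += growth_speed * dt;      // ah2xyz clamps t internally
    hole.draw_cone();
}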
Here is how it looks:
What I would do is simply calculate a vector from the texture coordinate of the screen center to the texture coordinate of the pixel you're shading.
Then modify that vector in any way you want (time based for example) and apply it to the texture coordinate of the pixel you're shading and then use the resulting coordinate to sample your texture.
In pseudocode this would be something like this:
vec2 vector_to_screen_center = vec2(0.5) - texture_coordinate;
texture_coordinate += vector_to_screen_center * sin(time) * 0.1; // Time based modulation of the vector.
gl_FragColor = texture2D(screen_texture, texture_coordinate);
Your question does not have a GLSL tag. If you plan to do this without shaders, this is going to be hard and/or inefficient.
I need some direction on 3D point cloud display using OpenGL in C++ (VS2008). I am trying to do a 3D point cloud display with a texture. I have three 2D arrays (each the same size, 1024x512) representing the x, y and z of each point. I think I am on the right track with
glBegin(GL_POINTS);
for (int i = 0; i < 1024; i++)
{
    for (int j = 0; j < 512; j++)
    {
        glVertex3f(x[i][j], y[i][j], z[i][j]);
    }
}
glEnd();
Now this loads all the vertices in the buffer (I think), but from here I am not sure how to proceed. Or I am completely wrong here.
Then I have another 2D array (same size) that contains color data (values from 0-255) that I want to use as texture on the 3D point cloud and display.
The point drawing code is fine as is.
(Long term, you may run into performance problems if you have to draw these points repeatedly, say in response to the user rotating the view. Rearranging the data from 3 arrays into 1 with x, y, z values next to each other would allow you to use faster vertex arrays/VBOs. But for now, if it ain't broke, don't fix it.)
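If that day comes, the rearrangement might look like this (a sketch keeping legacy client-side vertex arrays to stay close to the question's code; x, y and z are the arrays from the question):

// Flatten the three 1024x512 arrays into one interleaved x,y,z array
// so the whole cloud can be drawn with a single call.
static GLfloat cloud[1024 * 512 * 3];
int n = 0;
for (int i = 0; i < 1024; i++)
    for (int j = 0; j < 512; j++)
    {
        cloud[n++] = x[i][j];
        cloud[n++] = y[i][j];
        cloud[n++] = z[i][j];
    }

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, cloud);  // tightly packed xyz triples
glDrawArrays(GL_POINTS, 0, 1024 * 512);  // one call for all 524,288 points
glDisableClientState(GL_VERTEX_ARRAY);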
To color the points, you need glColor before each glVertex. It has to be before, not after, because in OpenGL glVertex loosely means that's a complete vertex, draw it. You've described the data as a point cloud, so don't change glBegin to GL_POLYGON, leave it as GL_POINTS.
OK, you have another array with one byte color index values. You could start by just using that as a greyscale level with
glColor3ub(color[i][j], color[i][j], color[i][j]);
which should show the points varying from black to white.
To get the true color for each point, you need a color lookup table - I assume there's either one that comes with the data, or one you're creating. It should be declared something like
static GLfloat ctab[256][3] = {
    1.0, 0.75, 0.33, /* Color for index #0 */
    ...
};
and used before the glVertex with
glColor3fv(ctab[color[i][j]]);
I've used floating-point colors because that's what OpenGL uses internally these days. If you prefer 0..255 values for the colors, change the array to GLubyte and the glColor3fv to glColor3ubv.
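For completeness, combining the snippets above, the colored draw loop becomes:

// Look up each point's color from the table, then emit the vertex.
glBegin(GL_POINTS);
for (int i = 0; i < 1024; i++)
    for (int j = 0; j < 512; j++)
    {
        glColor3fv(ctab[color[i][j]]);         // color first...
        glVertex3f(x[i][j], y[i][j], z[i][j]); // ...then the vertex it applies to
    }
glEnd();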
Hope this helps.