Intersection of mouse cursor and projected 2D rectangle - opengl

There are a bunch of 2x2 "floor" tiles in a 3D world rendered with JOGL. They are projected through the camera, and I want to know whether the cursor is hovering over a tile or not.
I have camera settings like this:
glu.gluPerspective(90, 4.0/3.0, 1, 100);
glu.gluLookAt(-2f, 8f, -2f, 0f, 0f, 0f, 0f, 1f, 0f);
and there are some tiles or blocks on the y = 0 plane like this:
gl.glPushMatrix();
{
    // Camera settings (same as above)
    glu.gluPerspective(90, 4.0/3.0, 1, 100);
    glu.gluLookAt(-2f, 8f, -2f, 0f, 0f, 0f, 0f, 1f, 0f);
    // Draw the tiles
    gl.glPushMatrix();
    {
        gl.glBegin(GL2.GL_POLYGON);
        {
            // a bunch of translated and textured
            // (1,0,1) (1,0,-1) (-1,0,-1) (-1,0,1)
            // rectangles here
        }
        gl.glEnd();
    }
    gl.glPopMatrix();
}
gl.glPopMatrix();
I am new to 3D and only familiar with Java Graphics2D. Intersecting a 2D rectangle with the cursor is just a few easy comparisons, but it seems to be a lot more complicated in 3D. I am looking for some maths or a library to do this.
Alternatively, if there is a method to get the 4 corner points of the projected rectangle in screen pixels, I could build a java.awt.Shape and use contains() to check whether the cursor is inside it.
The result will be like this:

Maybe the simplest solution is to use gluProject() to get the screen coordinates of the four corners, and then use java.awt.geom.Path2D to check whether the mouse coordinate lies inside the resulting area.
Here is a simple code sample:
// Query the matrices inside the same matrix stack used to render the rectangles,
// so we capture exactly the transform the tiles were drawn with.
float[][] FinalCoordinate = new float[4][3];
float[] ModelView = new float[16];
float[] Projection = new float[16];
gl.glGetFloatv(GL2.GL_MODELVIEW_MATRIX, ModelView, 0);
gl.glGetFloatv(GL2.GL_PROJECTION_MATRIX, Projection, 0);
for(int x = 0; x < 4; x++)
{
    glu.gluProject(Vertex[x][0], 0f, Vertex[x][2], ModelView, 0, Projection, 0,
                   new int[]{0, 0, 800, 600}, 0, FinalCoordinate[x], 0);
}
After getting the four projected corner coordinates, use Path2D to check the intersection:
Path2D p = new Path2D.Float();
p.moveTo(FinalCoordinate[0][0], FinalCoordinate[0][1]);
for(int x = 1; x < 4; x++)
{
    p.lineTo(FinalCoordinate[x][0], FinalCoordinate[x][1]);
}
p.closePath();
boolean Result = p.contains(MouseX, MouseY);
That's it! Thanks for the suggestions and links :)
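For reference, the pieces above can be wrapped into one helper. This is only a minimal sketch under the same assumptions as the snippets above (an 800x600 viewport, the matrices queried while the tile's modelview is current, and JOGL's GL2/GLU plus java.awt.geom.Path2D imported); the corners array and mouse coordinates are placeholders for whatever your code actually uses.
// Hedged sketch: returns true if the mouse is over the projected quad.
boolean isMouseOverQuad(GL2 gl, GLU glu, float[][] corners, int mouseX, int mouseY) {
    float[] modelView = new float[16];
    float[] projection = new float[16];
    int[] viewport = new int[]{0, 0, 800, 600};
    gl.glGetFloatv(GL2.GL_MODELVIEW_MATRIX, modelView, 0);
    gl.glGetFloatv(GL2.GL_PROJECTION_MATRIX, projection, 0);

    Path2D p = new Path2D.Float();
    float[] win = new float[3];
    for (int i = 0; i < 4; i++) {
        glu.gluProject(corners[i][0], corners[i][1], corners[i][2],
                       modelView, 0, projection, 0, viewport, 0, win, 0);
        if (i == 0) {
            p.moveTo(win[0], win[1]);
        } else {
            p.lineTo(win[0], win[1]);
        }
    }
    p.closePath();
    // gluProject returns window coordinates with a bottom-left origin, while
    // AWT mouse events use a top-left origin, so the mouse Y is flipped here.
    return p.contains(mouseX, viewport[3] - mouseY);
}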

Related

LIBGDX 3D : Turning textures off on shader : OpenGL

Sorry if the question is a bit niche. I wrote some code a few years back that renders many meshes as one big mesh for performance.
What I am trying to do now is render meshes without textures, so that each gets a single colour:
boxModel = modelBuilder.createBox(10f, 10f, 10f,
    Material(ColorAttribute.createDiffuse(Color.WHITE),
             ColorAttribute.createSpecular(Color.RED),
             FloatAttribute.createShininess(15f)),
    (VertexAttributes.Usage.Position or VertexAttributes.Usage.Normal or VertexAttributes.Usage.TextureCoordinates).toLong()) // or VertexAttributes.Usage.TextureCoordinates
for (x in 1..10) {
    for (y in 1..10) {
        modelInstance = ModelInstance(boxModel, x * 15f, 0.0f, y * 15f)
        chunks2[0].addMesh(modelInstance.model.meshes, modelInstance.transform, btBoxShape(Vector3(10f, 10f, 10f)))
    }
}
chunks2[0].mergeBaby()
So I build up the giant mesh and then render it
shaderProgram.begin()
texture.bind()
shaderProgram.setUniformMatrix("u_projTrans", camera.combined)
shaderProgram.setAttributef("a_color", 1f, 1f, 1f, 1f)
shaderProgram.setUniformi("u_texture", 0)
renderChunks()
shaderProgram.end()
This works great for textured meshes and the right texture is shown, etc., but the base colour (I guess that is "a_color", which is set to white) is what gets used, whereas I actually want it to use the colour I supplied in the Material.
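Not a full answer, but a hedged sketch of the usual direction: because the merged mesh is drawn with a custom ShaderProgram rather than libGDX's default shaders, the Material on the Model is not consulted automatically; its diffuse colour has to be read back and handed to the shader explicitly. Assuming the boxes should share the white/red Material built above (the material variable below is a placeholder for however you keep a reference to it), something like this could replace the hard-coded white:
// Read the diffuse colour out of the Material and pass it to the shader
// in place of the constant white currently set for "a_color".
ColorAttribute diffuse = (ColorAttribute) material.get(ColorAttribute.Diffuse);
shaderProgram.begin();
texture.bind();
shaderProgram.setUniformMatrix("u_projTrans", camera.combined);
shaderProgram.setAttributef("a_color",
        diffuse.color.r, diffuse.color.g, diffuse.color.b, diffuse.color.a);
shaderProgram.setUniformi("u_texture", 0);
renderChunks();
shaderProgram.end();
If the fragment shader multiplies the colour by u_texture, a common trick for a truly untextured look is to bind a 1x1 white texture (or branch in the shader), so the result is just the supplied colour.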

LibGDX - Gradient rectangle using 2x2 texture (or similar way?)

CMIIW: I heard that libGDX's ShapeRenderer is slow and that it is better to use a Batch.
I tried using a Pixmap to produce a 2x2 texture and rely on linear filtering for the blending:
public void rect(Batch batch, float x, float y, float w, float h, float rot, Color c00, Color c01, Color c10, Color c11){
    Pixmap pm = new Pixmap(2, 2, Format.RGBA4444);
    pm.drawPixel(0, 0, Color.rgba8888(c00));
    pm.drawPixel(0, 1, Color.rgba8888(c01));
    pm.drawPixel(1, 0, Color.rgba8888(c10));
    pm.drawPixel(1, 1, Color.rgba8888(c11));
    Texture tx = new Texture(pm);
    tx.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);
    batch.end();
    batch.begin();
    batch.draw(new TextureRegion(tx), x, y, w/2f, h/2f, w, h, 1f, 1f, rot, true);
    batch.end();
    batch.begin();
    tx.dispose();
    pm.dispose();
}
And it produces this:
It is not the effect I want.
If I could throw away half a pixel from each side of the texture, then I think it would be good.
I thought that in order to do that I had to change the TextureRegion to this:
new TextureRegion(tx, 0.5f, 0.5f, 1f, 1f)
but this produces:
What is happening there?
Or is there a better way to efficiently draw a gradient rectangle?
EDIT:
Ouch! Thanks TenFour04 - I tried with
new TextureRegion(tx, 0.25f, 0.25f, 0.75f, 0.75f) but got this instead:
Weird - I got exactly what I want with
new TextureRegion(tx, 0.13f, 0.13f, 0.87f, 0.87f):
Looks like some rounding problem? 0.126f still gives me that (seemingly), but 0.125f gives me something much closer to the very first image in the post.
@Pinkie Swirl: hmm, right - I wanted a method to draw gradient rectangles because I didn't want to make textures, but in the end I do... Actually, I can avoid creating those 2x2 textures on the fly.
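(A hedged sketch of that last point: the 2x2 Pixmap/Texture does not need to be rebuilt on every call. One texture can be created once and its pixels rewritten with Texture.draw() whenever the corner colours change; the field and method names below are just placeholders.)
// Create these once after the GL context exists (e.g. in create()); dispose() both later.
private Pixmap gradientPm;
private Texture gradientTx;

private void initGradientTexture() {
    gradientPm = new Pixmap(2, 2, Pixmap.Format.RGBA8888);
    gradientTx = new Texture(gradientPm);
    gradientTx.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);
}

private void updateGradient(Color c00, Color c01, Color c10, Color c11) {
    gradientPm.drawPixel(0, 0, Color.rgba8888(c00));
    gradientPm.drawPixel(0, 1, Color.rgba8888(c01));
    gradientPm.drawPixel(1, 0, Color.rgba8888(c10));
    gradientPm.drawPixel(1, 1, Color.rgba8888(c11));
    gradientTx.draw(gradientPm, 0, 0); // re-upload the changed pixels to the GPU
}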

OpenGL Fog effect changes when I move or rotate the camera

I want to add fog to a small 3D world. I tried fiddling with the arguments; however, the fog is not homogeneous.
I have two problems that are maybe linked:
Fog homogeneity:
When I move or rotate my viewpoint with gluLookAt, the fog is too heavy and the whole world turns grey. However, there are two angles where the rendering of the fog is fine.
The fog seems normal when the camera orientation around the Y axis is 45° or -135° (the opposite direction).
Fog centered on the origin of the scene:
When my fog is displayed correctly, it is centered on the (0;0;0) of the scene.
Here is the code I use to initialise the fog, and the call to gluLookAt:
private static final float density = 1f;

private void initFog() {
    float[] fogColorValues = {0.8f, 0.8f, 0.8f, 1f};
    ByteBuffer temp = ByteBuffer.allocateDirect(16);
    temp.order(ByteOrder.nativeOrder());
    FloatBuffer fogColor = temp.asFloatBuffer();
    fogColor.put(fogColorValues);
    GL11.glClearColor(0.8f, 0.8f, 0.8f, 1.0f);
    GL11.glFogi(GL11.GL_FOG_MODE, GL11.GL_LINEAR);
    GL11.glFog(GL11.GL_FOG_COLOR, temp.asFloatBuffer());
    GL11.glFogf(GL11.GL_FOG_DENSITY, density);
    GL11.glHint(GL11.GL_FOG_HINT, GL11.GL_FASTEST);
    GL11.glFogf(GL11.GL_FOG_START, 1f);
    GL11.glFogf(GL11.GL_FOG_END, 10000f);
}
private void initWindow() {
    try {
        Display.setDisplayMode(new DisplayMode(1600, 900));
        Display.create();
        GL11.glMatrixMode(GL11.GL_PROJECTION);
        GL11.glLoadIdentity();
        GLU.gluPerspective(60f, 1600f / 900f, 3, 100000);
        GL11.glMatrixMode(GL11.GL_MODELVIEW);
        GL11.glLoadIdentity();
        GL11.glEnable(GL11.GL_FOG);
        GL11.glEnable(GL11.GL_DEPTH_TEST);
        initFog();
        initParticles();
    } catch (LWJGLException e) {
        Display.destroy();
        System.exit(1);
    }
}
The following is called from the updatePosition function inside the main loop.
The angle parameter is the direction of the view around the y axis, and yCpos is a value between -1 and 1 that I use to look up or down.
GL11.glLoadIdentity();
GLU.gluLookAt(xpos, ypos, zpos, xpos + (float) Math.cos(angle), ypos + yCpos, zpos + (float) Math.sin(angle), 0, 1, 0);
I was drawing the ground as one giant quad; now I draw the ground with tiles, and the problem isn't happening any more. The cause remains mysterious to me, but the problem is solved.
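(For illustration, a minimal sketch of the tiled ground, assuming a flat ground at y = 0 drawn with the same LWJGL immediate-mode calls as above; the method name, size and tile count are placeholders. Fixed-function GL_LINEAR fog is evaluated per vertex and interpolated across each polygon, which is probably why a single giant quad behaved so oddly while a grid of smaller quads does not.)
// Hypothetical helper: draws a size x size ground plane at y = 0 as an
// n x n grid of quads instead of a single giant quad, so per-vertex fog
// has enough vertices to interpolate over.
private void drawTiledGround(float size, int n) {
    float step = size / n;
    float half = size / 2f;
    GL11.glBegin(GL11.GL_QUADS);
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            float x = -half + i * step;
            float z = -half + j * step;
            GL11.glVertex3f(x, 0f, z);
            GL11.glVertex3f(x + step, 0f, z);
            GL11.glVertex3f(x + step, 0f, z + step);
            GL11.glVertex3f(x, 0f, z + step);
        }
    }
    GL11.glEnd();
}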

3D Camera Rotation in OpenGL: How to prevent camera jitter?

I'm fairly new to OpenGL and 3D programming, but I've begun to implement camera rotation using quaternions, based on the tutorial at http://www.cprogramming.com/tutorial/3d/quaternions.html . This is all written in Java using JOGL.
I realise these kinds of questions get asked quite a lot, but I've been searching around and can't find a solution that works, so I figured it might be a problem with my code specifically.
So the problem is that there is jittering and odd rotation if I do two different successive rotations on one or more axes. The first rotation along an axis, either negatively or positively, works fine. However, if I rotate positively along an axis and then rotate negatively on that axis, the rotation will jitter back and forth as if it were alternating between a positive and a negative rotation.
If I automate the rotation (e.g. rotate left 500 times then rotate right 500 times), it appears to work properly, which led me to think this might be related to the key presses. However, the rotation is also incorrect (for lack of a better word) if I rotate around the x axis and then rotate around the y axis afterwards.
Anyway, I have a renderer class with the following display loop for drawing scene nodes:
private void render(GLAutoDrawable drawable) {
    GL2 gl = drawable.getGL().getGL2();
    gl.glClear(GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT);
    gl.glMatrixMode(GL2.GL_PROJECTION);
    gl.glLoadIdentity();
    glu.gluPerspective(70, Constants.viewWidth / Constants.viewHeight, 0.1, 30000);
    gl.glScalef(1.0f, -1.0f, 1.0f); // flip the y axis
    gl.glMatrixMode(GL2.GL_MODELVIEW);
    gl.glLoadIdentity();
    camera.rotateCamera();
    glu.gluLookAt(camera.getCamX(), camera.getCamY(), camera.getCamZ(), camera.getViewX(), camera.getViewY(), camera.getViewZ(), 0, 1, 0);
    drawSceneNodes(gl);
}

private void drawSceneNodes(GL2 gl) {
    if (currentEvent != null) {
        ArrayList<SceneNode> sceneNodes = currentEvent.getSceneNodes();
        for (SceneNode sceneNode : sceneNodes) {
            sceneNode.update(gl);
        }
    }
    if (renderQueue.size() > 0) {
        currentEvent = renderQueue.remove(0);
    }
}
Rotation is performed in the camera class as follows:
public class Camera {
    private double width;
    private double height;
    private double rotation = 0;
    private Vector3D cam = new Vector3D(0, 0, 0);
    private Vector3D view = new Vector3D(0, 0, 0);
    private Vector3D axis = new Vector3D(0, 0, 0);
    private Rotation total = new Rotation(0, 0, 0, 1, true);

    public Camera(GL2 gl, Vector3D cam, Vector3D view, int width, int height) {
        this.cam = cam;
        this.view = view;
        this.width = width;
        this.height = height;
    }

    public void rotateCamera() {
        if (rotation != 0) {
            // generate local quaternion from new axis and new rotation
            Rotation local = new Rotation(Math.cos(rotation/2),
                                          Math.sin(rotation/2 * axis.getX()),
                                          Math.sin(rotation/2 * axis.getY()),
                                          Math.sin(rotation/2 * axis.getZ()), true);
            // multiply local quaternion and total quaternion
            total = total.applyTo(local);
            // rotate the position of the camera with the new total quaternion
            cam = rotatePoint(cam);
            // set next rotation to 0
            rotation = 0;
        }
    }

    public Vector3D rotatePoint(Vector3D point) {
        // set world centre to origin, i.e. (width/2, height/2, 0) to (0, 0, 0)
        point = new Vector3D(point.getX() - width/2, point.getY() - height/2, point.getZ());
        // rotate point
        point = total.applyTo(point);
        // set point back in world coordinates, i.e. (0, 0, 0) to (width/2, height/2, 0)
        return new Vector3D(point.getX() + width/2, point.getY() + height/2, point.getZ());
    }

    public void setAxis(Vector3D axis) {
        this.axis = axis;
    }

    public void setRotation(double rotation) {
        this.rotation = rotation;
    }
}
The rotateCamera method combines the new rotation with the accumulated permanent quaternion, while rotatePoint merely applies the rotation generated from that permanent quaternion to a point.
The axis of rotation and the angle of rotation are set by simple key presses as follows:
@Override
public void keyPressed(KeyEvent e) {
    if (e.getKeyCode() == KeyEvent.VK_W) {
        camera.setAxis(new Vector3D(1, 0, 0));
        camera.setRotation(0.1f);
    }
    if (e.getKeyCode() == KeyEvent.VK_A) {
        camera.setAxis(new Vector3D(0, 1, 0));
        camera.setRotation(0.1f);
    }
    if (e.getKeyCode() == KeyEvent.VK_S) {
        camera.setAxis(new Vector3D(1, 0, 0));
        camera.setRotation(-0.1f);
    }
    if (e.getKeyCode() == KeyEvent.VK_D) {
        camera.setAxis(new Vector3D(0, 1, 0));
        camera.setRotation(-0.1f);
    }
}
I hope I've provided enough detail. Any help would be very much appreciated.
About the jittering: I don't see any render loop in your code. How is the render method triggered? By a timer or by an event?
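(For reference, a JOGL display callback is normally driven either by repaint events or by an animator; a minimal continuous-redraw setup, assuming a GLCanvas named canvas whose GLEventListener calls the render method above, would look roughly like this:)
// Drive display()/render() at a fixed rate instead of only on window events.
FPSAnimator animator = new FPSAnimator(canvas, 60); // target 60 FPS
animator.start();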
Your messed-up rotations when rotating about two axes are probably related to the fact that you need to rotate the axis of the second rotation along with the total rotation from the first axis. You cannot just apply the rotation about the X or Y axis of the global coordinate system; you must apply the rotation about the camera's own up and right axes.
I suggest that you create a camera class that stores the up, right and view-direction vectors of the camera and apply your rotations directly to those axes. If this is an FPS-like camera, then you'll want to rotate the camera horizontally (looking left/right) about the absolute Y axis and not the up vector; this also results in a new right axis for the camera. Then you rotate the camera vertically (looking up/down) about the new right axis. However, you must be careful when the camera looks directly up or down, as in that case you can't use the cross product of the view direction and up vectors to obtain the right vector. A sketch of this idea follows below.
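A minimal sketch of that idea, assuming the Rotation and Vector3D classes in the question are the Apache Commons Math ones (the class and method names here are otherwise made up for illustration):
// Hypothetical FPS-style camera axes: yaw about the absolute Y axis,
// pitch about the camera's current right axis.
public class FpsCameraSketch {
    private Vector3D viewDir = new Vector3D(0, 0, -1); // direction the camera looks along
    private Vector3D up = new Vector3D(0, 1, 0);

    public void yaw(double angle) {
        // Rotate about the absolute Y axis (not the camera's up vector).
        // Depending on the Rotation constructor's convention, the angle sign may need flipping.
        Rotation r = new Rotation(Vector3D.PLUS_J, angle);
        viewDir = r.applyTo(viewDir);
        up = r.applyTo(up);
    }

    public void pitch(double angle) {
        // The right axis must be recomputed from the current view direction and up vector.
        Vector3D right = Vector3D.crossProduct(viewDir, up);
        if (right.getNorm() < 1e-6) {
            return; // looking straight up/down: the cross product degenerates, so skip
        }
        Rotation r = new Rotation(right.normalize(), angle);
        viewDir = r.applyTo(viewDir);
        up = r.applyTo(up);
    }
}
Each frame, the gluLookAt target would then be the camera position plus viewDir, with up passed as the up vector, instead of accumulating everything into one total quaternion.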

how to draw a spiral using opengl

I want to know how to draw a spiral.
I wrote this code:
void RenderScene(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    GLfloat x, y, z = -50, angle;
    glBegin(GL_POINTS);
    for(angle = 0; angle < 360; angle += 1)
    {
        x = 50 * cos(angle);
        y = 50 * sin(angle);
        glVertex3f(x, y, z);
        z += 1;
    }
    glEnd();
    glutSwapBuffers();
}
If I don't include the z term I get a perfect circle, but when I include z I only get 3 dots. What might have happened?
I set the viewport using glViewport(0, 0, w, h).
To include z, should I do anything to set up the viewport in the z direction?
You see points because you are drawing points with glBegin(GL_POINTS).
Try replacing it with glBegin(GL_LINE_STRIP).
NOTE: when you saw the circle you were also drawing only points, but they were close enough together to appear as a connected circle.
Also, you may not have set up the clipping volume to accept z values in the [-50, 310] range that you use. These limits should be provided as the zNear and zFar clipping planes in your gluPerspective(), glOrtho() or glFrustum() call.
NOTE: this would explain why, with the z values included, you only see a few points: the other points are clipped because they fall outside the z range.
UPDATE AFTER YOU HAVE SHOWN YOUR CODE:
glOrtho(-100*aspectratio, 100*aspectratio, -100, 100, 1, -1); only allows z values in the [-1, 1] range, which is why only the three points with z = -1, z = 0 and z = 1 are drawn (hence 3 points).
Finally, you're probably viewing the spiral from the top, looking straight down the rotation axis. With an orthographic projection (rather than a perspective one), the spiral will still show up as a circle. You might want to change your view with gluLookAt().
EXAMPLE OF SETTING UP PERSPECTIVE
The following code is taken from the excellent OpenGL tutorials by NeHe:
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION); // Select The Projection Matrix
glLoadIdentity(); // Reset The Projection Matrix
// Calculate The Aspect Ratio Of The Window
gluPerspective(45.0f,(GLfloat)width/(GLfloat)height,0.1f,100.0f);
glMatrixMode(GL_MODELVIEW); // Select The Modelview Matrix
glLoadIdentity(); // Reset The Modelview Matrix
Then your draw loop would look something like this:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear The Screen And The Depth Buffer
glLoadIdentity();
glTranslatef(-1.5f,0.0f,-6.0f); // Move Left 1.5 Units And Into The Screen 6.0
glBegin(GL_TRIANGLES); // Drawing Using Triangles
glVertex3f( 0.0f, 1.0f, 0.0f); // Top
glVertex3f(-1.0f,-1.0f, 0.0f); // Bottom Left
glVertex3f( 1.0f,-1.0f, 0.0f); // Bottom Right
glEnd();
Of course, you should adapt this example code to your needs.
catchmeifyoutry provides a perfectly capable method, but it will not draw a spatially accurate 3D spiral, as any render call using a GL_LINE primitive type rasterizes to a fixed pixel width. This means that as you change your perspective / view, the lines will not change width. To accomplish that, use a geometry shader in combination with GL_LINE_STRIP_ADJACENCY to create 3D geometry that can be rasterized like any other 3D geometry. (This does require the programmable, post-fixed-function pipeline, however.)
I recommend you try catchmeifyoutry's method first, as it is much simpler. If you are not satisfied with it, try the method I described. You can use the following post as guidance:
http://prideout.net/blog/?tag=opengl-tron
Here is my spiral function in C. The points are saved into a list which can easily be drawn with OpenGL (e.g. connect adjacent points in the list with GL_LINES).
cx,cy ... spiral centre x and y coordinates
r ... max spiral radius
num_segments ... number of segments the spiral will have
SOME_LIST* UniformSpiralPoints(float cx, float cy, float r, int num_segments)
{
    SOME_LIST *sl = newSomeList();
    int i;
    for(i = 0; i < num_segments; i++)
    {
        float theta = 2.0f * 3.1415926f * i / num_segments; // the current angle
        float x = (r/num_segments)*i * cosf(theta);          // the x component
        float y = (r/num_segments)*i * sinf(theta);          // the y component

        // add (x + cx, y + cy) to the list sl
    }
    return sl;
}
An example image with r = 1, num_segments = 1024:
P.S. There is a difference between using cos(double) and cosf(float): in your code you pass float variables to the double-precision cos function.