How to get more precision in mouse movement - C++

OS: Windows XP SP3
Qt: 4.6
I am playing with some 3D stuff and need to implement mouse movement. I tried Qt's mouseMoveEvent, but found that it is not good enough, because mouseMoveEvent is not delivered for every pixel the mouse moves over. I need something that registers EVERY pixel of movement.
Searching for a solution, I checked the Qt online documentation and found the QCursor class and its member pos().
Questions: Does QCursor::pos() register every pixel of movement? Does anybody have a better idea for precise handling of the camera view in 3D? (I am not using OpenGL; I am building my engine on the painter, just for fun and as a hobby.)

No, the mouse may move several pixels at once.
If you need the midway points for something, then calculate them yourself: compute all the points on the line between two consecutive mouse positions. It is still unclear to me why you need those points, but that should help.
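For what it's worth, here is a minimal sketch of that idea (my own illustration, not part of the original answer, using Qt types since the question uses Qt):

#include <QPoint>
#include <QVector>
#include <QtGlobal>

// Illustration only: fill in the integer points lying between two sampled
// mouse positions with a simple linear interpolation along the longer axis.
QVector<QPoint> midwayPoints(const QPoint &from, const QPoint &to)
{
    QVector<QPoint> points;
    const int steps = qMax(qAbs(to.x() - from.x()), qAbs(to.y() - from.y()));
    for (int i = 1; i <= steps; ++i) {
        const qreal t = qreal(i) / steps;
        points.append(QPoint(qRound(from.x() + t * (to.x() - from.x())),
                             qRound(from.y() + t * (to.y() - from.y()))));
    }
    return points;
}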

This most likely does not have much to do with Qt, but with your mouse polling rate. You might want to refer to this quite informative blog post on Coding Horror.

Some time ago I had a similar issue (I wasn't using Qt). Your system simply does not have that precise information.
What I did was compute the mouse position change (dx, dy) and use that to move the camera. In many frameworks you don't even have to compute (dx, dy) yourself, as you get it with the event (for example in SDL).
Alternatively, you could compute the position change and then interpolate positions between the current and previous mouse position - then you could use those interpolated positions to move your camera.
You would have the same problem if you wanted to draw the mouse movement on the screen. In that case you can use Bresenham's algorithm (http://en.wikipedia.org/wiki/Bresenham's_line_algorithm) to generate the pixels between two given points.
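As an illustration, here is a sketch of the integer all-octant version of Bresenham's algorithm (my own code, following the linked Wikipedia article); it yields every grid point between two positions:

#include <cstdlib>
#include <utility>
#include <vector>

// Sketch of the classic Bresenham line algorithm: returns every integer
// point on the line from (x0, y0) to (x1, y1), endpoints included.
std::vector<std::pair<int, int> > bresenham(int x0, int y0, int x1, int y1)
{
    std::vector<std::pair<int, int> > points;
    const int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    const int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    for (;;) {
        points.push_back(std::make_pair(x0, y0));
        if (x0 == x1 && y0 == y1)
            break;
        const int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }   // step in x
        if (e2 <= dx) { err += dx; y0 += sy; }   // step in y
    }
    return points;
}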

No, QCursor does not provide that information, as it has no signal giving you this. You have to explicitly query its position, and doing that inside mouseMoveEvent limits the precision again. The underlying window system just does not deliver that precision. Like the others said, either work with arbitrarily wide movements or compute the intermediate points yourself.
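For completeness, explicit polling would look roughly like the sketch below (my own code, not from the answer). Note that it still only samples the cursor, so fast movements will skip pixels, exactly as described above:

#include <QCursor>
#include <QObject>
#include <QPoint>
#include <QTimer>

// Illustration only: sample QCursor::pos() on a short timer. This does NOT
// give you every pixel; it just raises the sampling rate a little.
class CursorPoller : public QObject
{
    Q_OBJECT
public:
    explicit CursorPoller(QObject *parent = 0) : QObject(parent)
    {
        connect(&m_timer, SIGNAL(timeout()), this, SLOT(poll()));
        m_timer.start(5);                    // poll roughly every 5 ms
    }
private slots:
    void poll()
    {
        const QPoint p = QCursor::pos();     // global cursor position
        if (p != m_last) {
            // handle the movement from m_last to p here
            // (interpolate the midway points if you need them)
            m_last = p;
        }
    }
private:
    QTimer m_timer;
    QPoint m_last;
};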

Related

Windows mouse coordinates VS OpenGL mouse coordinates

How can I ensure (I know "determine" isn't the right term) that every position of the mouse in window space gets converted to OGL space (-1, 1)? In this case the user moves the mouse very fast, and I assume all of its previous positions still have to be converted into OGL coordinates. What I am trying to say is: is a common CPU fast enough to do that (to track all previous events) even if my C++ OGL coordinate converter is computationally expensive - say I put very time-consuming loops in there - or only with a very fast method()? How can I assure that no OGL coordinates are skipped if I move the mouse fast enough?
I'm not jumping to any conclusions here or assuming anything else, whatever you might think.
Edit:
My program's main loop is like this (pseudocode):

void Pollevents()
{
    for every_obj in this
    {
        if Not Collide()
        {
            Move(x, y) //
        }
    }
}

void MousePos()
{
    mouse.pos = To_OGL_Coord2f()
}
These are executed as separate routines (but not actually real threads).
Suppose mouse.pos = (0, 0), and I then move the mouse fast enough that the new mouse.pos becomes (10, 10). Within a single iteration of the loop the mouse position has changed very far from where it was before. Now, how can I tell my program, by implementing Bresenham's line algorithm as mentioned by Christian Rau, that the points generated by that algorithm (which are not being tracked) have been crossed by the mouse? Will I have to add another loop to step through all those positions?
How can I assure that no OGL coordinates are skipped out if I move the mouse fast enough?
That's not possible, since there is no way to let the OS generate mouse events for each and every point a mouse move would have crossed when tracked with theoretically infinite precision.
The only way to ensure this is to fill in the missing points between the two (possibly far apart) mouse positions yourself. If you just want to draw a point for each position the mouse moved over (maybe using OpenGL), draw a line instead.
If you on the other hand need those intermediary mouse positions yourself for further computations, you won't get around computing them yourself using some common line rasterization algorithm (like the Bresenham Algorithm, the school book algorithm for line rasterization). What this basically does is compute each point on a discrete grid that a line from one point to another would have crossed (similar to what your graphics card does when converting a line into discrete pixels), so this will generate each discrete mouse position your virtual mouse path has crossed (ignoring any non-linear mouse movement between measurement points).
EDIT: If you don't need a discrete line with proper equal-width characteristics a much easier way than messing with line rasterization would also be to just work with floating point positions and do a simple linear interpolation of the end points, like datenwolf writes in his comment. This will also give you a better timing precision than discrete mouse positions. But it all depends on what you actually want to do with those mouse positions (and now would be a good way to tell us).
EDIT: From your updated question it looks like you need the mouse positions at a high granularity in order to compute the collision of the mouse with some objects. In this case you don't actually need the intermediary points at all. Just take the line from the current mouse position to the previous one (represented as just a pair of points, or whatever theoretical line representation) and compute the collision of the objects with that line instead of the individual points.
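To make that concrete, here is a small sketch (my own, with made-up names) of testing an object, approximated by a circle, against the segment swept by the mouse between two samples:

#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

// Distance from point p to the segment a-b (clamped to the segment's ends).
float distancePointToSegment(Vec2 p, Vec2 a, Vec2 b)
{
    const float abx = b.x - a.x, aby = b.y - a.y;
    const float lenSq = abx * abx + aby * aby;
    float t = lenSq > 0.0f ? ((p.x - a.x) * abx + (p.y - a.y) * aby) / lenSq : 0.0f;
    t = std::max(0.0f, std::min(1.0f, t));
    const float cx = a.x + t * abx, cy = a.y + t * aby;
    return std::sqrt((p.x - cx) * (p.x - cx) + (p.y - cy) * (p.y - cy));
}

// An object (approximated by a circle) was crossed by the mouse if it is
// close enough to the segment from the previous to the current mouse position.
bool hitByMousePath(Vec2 center, float radius, Vec2 prevMouse, Vec2 curMouse)
{
    return distancePointToSegment(center, prevMouse, curMouse) <= radius;
}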

3d line mouse picking

I have a 3D scene with thousands of lines. I want to be able to pick ALL 3D lines within a 10-pixel neighborhood of the mouse cursor (with a perspective projection). I've tried the unique-color-based method, but it is not suitable for me because I cannot pick ALL the lines with it - only the closest one.
Is there an acceptable solution to my problem? OpenGL or DirectX - it does not matter.
Why not just compute the distance between those lines and the point in question? It's a 2D line-to-point distance computation. You could probably implement it with a Perl script that calls a Python executable that calls a Lua interpreter and still do 100,000 of them in a second.
This is one of those tunnel-vision "when all I have is a hammer, every problem looks like a nail" issues. You don't have to use rendering to do picking.
In old OpenGL (<= 2.1), you can use selection mode to do exactly this. Use gluPickMatrix() to select a small region around the cursor position, initialize a selection buffer, slip into selection mode (glRenderMode(GL_SELECT)), and redraw the scene. Then come back out of selection mode and your selection buffer will be full of the names (really id numbers) of all the drawn objects that appear in your region of interest. You'll have to modify your drawing code a little to push/pop names (glPushName(objIndex)) around each object that you render as well.
It's not the most efficient use of modern graphics hardware, but it always works.
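A rough outline of that selection-mode setup (my own sketch: the gluPerspective call stands in for whatever projection you normally use, and drawSceneWithNames() for your own drawing code with glPushName()/glPopName() around each object):

#include <GL/gl.h>
#include <GL/glu.h>

void drawSceneWithNames();   // your draw code, wrapped in glPushName/glPopName

// Old-style (OpenGL <= 2.1) selection around the cursor.
// Returns the number of hit records written into 'buffer'.
GLint pickAt(int mouseX, int mouseY, GLuint *buffer, GLsizei bufferSize)
{
    GLint viewport[4];
    glGetIntegerv(GL_VIEWPORT, viewport);

    glSelectBuffer(bufferSize, buffer);
    glRenderMode(GL_SELECT);
    glInitNames();

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    // 10x10 pixel pick region around the cursor (GL origin is bottom-left).
    gluPickMatrix(mouseX, viewport[3] - mouseY, 10.0, 10.0, viewport);
    gluPerspective(60.0, double(viewport[2]) / viewport[3], 0.1, 100.0); // your projection here

    glMatrixMode(GL_MODELVIEW);
    drawSceneWithNames();

    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);

    return glRenderMode(GL_RENDER);   // number of hit records
}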
Neither OpenGL nor DirectX will do the job for you, because they only draw things. What you must do is project all the lines in your scene to the screen and test whether the closest point of each projected line to the selected position is nearer than your desired maximum distance. You can accelerate this by keeping the lines in some spatial subdivision structure (like a kd-tree or similar) to quickly discard all those lines which definitely don't match your criteria.

Trying to implement a mouse look "camera" in OpenGL/SFML

I've been using OpenGL with SFML 1.6 for some time now, and it has been a blast! With one exception: I can't seem to implement a camera class correctly. You see, I am trying to create a C++ class called "Camera". Here are my functions:
Camera::Strafe(float fSpeed)
checks whether the WASD keys are pressed, and if so, move the camera at "fSpeed" in their respective directions.
Camera::MouseMove(int currentX, int currentY)
should provide a first-person mouse look, taking in the current mouse coordinates and rotating the camera accordingly. My Strafe() implementation works fine, but I can't seem to get MouseMove() right.
I already know from reading other resources on OpenGL mouse look implementations that I must center the mouse after every frame, and I have that part down. But that's about it. I can't seem to get how to actually rotate the camera on the spot from the mouse coordinates. Probably need to use some trig, I bet.
I've done something similar to this (it was a 3rd person camera). If I remember what I did correctly, I took the change in mouse position and used that to calculate two angles (I did that with some trig, I believe). One angle gave me horizontal rotation, the other gave me vertical rotation. Pitch, Yaw and Roll specifically, although I can't remember which refers to which direction. There is also one you have to do before the other, or else things will rotate funny. I'm pretty sure it was pitch first, then yaw or roll.
Hopefully it should be obvious what the change in mouse position did: it allowed for mouse sensitivity. If I moved the mouse fast, I would have a larger change, and so I would rotate "faster."
EDIT: Ok, I looked at my code and it's a very simple calculation.
This was done with C#, so bear with me for syntax:
_angles.X += MathHelper.ToDegrees(changeInX / 100);
_angles.Y += MathHelper.ToDegrees(changeInY / 100);
My angles were stored in a two-dimensional vector (since I only rotated on two axes). You'll see I took my changeInX and changeInY values and simply divided them by 100 to get some arbitrary radian value, then converted that number to degrees. Adjust the 100 for sensitivity. Keep in mind, no solid math was done here to figure this out. I just did some trial and error until I got something that worked well.
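A C++ equivalent of the same calculation (my own sketch, not the poster's code), with a pitch clamp added so the camera cannot flip over the top:

// Sketch: accumulate yaw/pitch from the change in mouse position, the same
// idea as the C# snippet above (delta / 100 radians, converted to degrees).
struct MouseLook
{
    float yawDegrees;
    float pitchDegrees;

    MouseLook() : yawDegrees(0.0f), pitchDegrees(0.0f) {}

    void update(int changeInX, int changeInY)
    {
        const float radToDeg = 180.0f / 3.14159265f;
        yawDegrees   += (changeInX / 100.0f) * radToDeg;   // adjust the 100 for sensitivity
        pitchDegrees += (changeInY / 100.0f) * radToDeg;

        // Clamp pitch so the camera cannot flip over the top.
        if (pitchDegrees >  89.0f) pitchDegrees =  89.0f;
        if (pitchDegrees < -89.0f) pitchDegrees = -89.0f;
    }
};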

OpenGL GL_SELECT or manual collision detection?

As seen in the image
I draw a set of contours (polygons) as GL_LINE_STRIP.
Now I want to select the curve (polygon) under the mouse, to delete, move, etc., in 3D.
I am wondering which method to use:
1. Use OpenGL picking and selection (glRenderMode(GL_SELECT)).
2. Use manual collision detection, by using a pick ray and checking whether the ray is inside each polygon.
I strongly recommend against GL_SELECT. This method is very old and absent in new GL versions, and you're likely to get problems with modern graphics cards. Don't expect it to be supported by hardware - probably you'd encounter a software (driver) fallback for this mode on many GPUs, provided it would work at all. Use at your own risk :)
Let me provide you with an alternative.
For solid, big objects, there's an old, good approach of selection by:
- enabling and setting the scissor test to a 1x1 window at the cursor position,
- drawing the scene with no lighting, texturing or multisampling, assigning a unique solid colour to every "important" entity - this colour will become the object ID for picking,
- calling glReadPixels and retrieving the colour, which will then serve to identify the picked object,
- clearing the buffers, resetting the scissor to the normal size and drawing the scene normally.
This gives you a very reliable "per-object" picking method. Also, drawing and clearing only 1 pixel with minimal per-pixel operation won't really hurt your performance, unless you are short on vertex processing power (unlikely, I think) or have really a lot of objects and are likely to get CPU-bound on the number of draw calls (but then again, I believe it's possible to optimize this away to a single draw call if you could pass the colour as per-pixel data).
The colour in RGB is 3 unsigned bytes, but it should be possible to additionally use the alpha channel of the framebuffer for the last byte, so you'd get 4 bytes in total - enough to store any 32-bit pointer to the object as the colour.
Alternatively, you can create a dedicated framebuffer object with a specific pixel format (like GL_R32UI, or even GL_RG32UI if you need 64 bits) for that.
The above is a nice and quick alternative (both in terms of reliability and in implementation time) for the strict geometric approach.
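A condensed sketch of those steps (my own code; drawSceneWithPickingColours() stands in for your own draw pass that assigns each object a unique glColor3ub and disables lighting, texturing and multisampling):

#include <GL/gl.h>

void drawSceneWithPickingColours();   // your draw pass, one colour per object

// Returns the 24-bit id that was encoded in the colour under the mouse.
unsigned int pickObjectAt(int mouseX, int mouseY, int windowHeight)
{
    glEnable(GL_SCISSOR_TEST);
    glScissor(mouseX, windowHeight - mouseY, 1, 1);     // GL origin is bottom-left
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    drawSceneWithPickingColours();

    unsigned char pixel[3];
    glReadPixels(mouseX, windowHeight - mouseY, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, pixel);

    glDisable(GL_SCISSOR_TEST);
    // The caller should now clear and redraw the scene normally.
    return (unsigned(pixel[0]) << 16) | (unsigned(pixel[1]) << 8) | pixel[2];
}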
I found that on new GPUs, the GL_SELECT mode is extremely slow. I played with a few different ways of fixing the problem.
The first was to do a CPU collision test, which worked, but wasn't as fast as I would have liked. It definitely slows down when you are casting rays into the screen (using gluUnProject) and then trying to find which object the mouse is colliding with. The only way I got satisfactory speeds was to use an octree to reduce the number of collision tests and then do a bounding box collision test - however, this resulted in a method that was not pixel perfect.
The method I settled on was to first find all the objects under the mouse (using gluUnProject and bounding box collision tests), which is usually very fast. I then rendered each of the objects that had potentially collided with the mouse into the back buffer as a different color. I then used glReadPixels to get the color under the mouse, and mapped that back to the object. glReadPixels is a slow call, since it has to read from the frame buffer. However, it is done once per frame, which ends up taking a negligible amount of time. You can speed it up by rendering to a PBO if you'd like.
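For reference, constructing the pick ray with gluUnProject usually looks something like this (a sketch of my own; it reads the modelview/projection/viewport from the current GL state):

#include <GL/gl.h>
#include <GL/glu.h>

// Unproject the mouse position at the near and far planes and take the
// difference to get a pick ray in object space.
void mousePickRay(int mouseX, int mouseY, double rayOrigin[3], double rayDir[3])
{
    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    // GL's window origin is bottom-left; mouse coordinates are usually top-left.
    const GLdouble winX = mouseX;
    const GLdouble winY = view[3] - mouseY;

    GLdouble nearPt[3], farPt[3];
    gluUnProject(winX, winY, 0.0, model, proj, view, &nearPt[0], &nearPt[1], &nearPt[2]);
    gluUnProject(winX, winY, 1.0, model, proj, view, &farPt[0], &farPt[1], &farPt[2]);

    for (int i = 0; i < 3; ++i) {
        rayOrigin[i] = nearPt[i];
        rayDir[i] = farPt[i] - nearPt[i];   // not normalised
    }
}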
umanga, I can't see how to reply inline, so I'm answering here.
First of all, I must apologize for giving you the wrong algorithm earlier - I described the back-face culling one. The one you need is very similar, which is why I got confused... d'oh.
Get the camera-position-to-mouse vector as said before.
For each contour, loop through all its coordinates in pairs (0-1, 1-2, 2-3, ..., n-0) and make a vector out of each pair, as before. I.e. walk the contour.
Now take the cross product of those two vectors (contour edge and mouse vector) instead of between pairs like I said before; do that for all the pairs and add all the resulting vectors up.
At the end, find the magnitude of the resulting vector. If the result is zero (taking rounding errors into account), then you're outside the shape - regardless of facing. If you're interested in facing, then instead of the magnitude you can take the dot product with the mouse vector to find the facing and test the sign +/-.
It works because the algorithm finds the amount of distance from the vector line to each point in turn. As you sum them up, if you are outside then they all cancel out because the contour is closed. If you're inside then they all add up. It's actually Gauss's law of electromagnetic fields in physics...
See: http://en.wikipedia.org/wiki/Gauss%27s_law and note "the right-hand side of the equation is the total charge enclosed by S divided by the electric constant", noting the word "enclosed" - i.e. zero means not enclosed.
You can still do that optimization with the bounding boxes for speed.
In the past I've used GL_SELECT to determine which object(s) contributed the pixel(s) of interest and then used computational geometry to get an accurate intersection with the object(s) if required.
Do you expect to select by clicking the contour (on the edge) or the interior of the polygon? Your second approach sounds like you want clicks in the interior to select the tightest containing polygon. I don't think that GL_SELECT after rendering GL_LINE_STRIP is going to make the interior responsive to clicks.
If this was a true contour plot (from the image I don't think it is, edges appear to intersect) then a much simpler algorithm would be available.
You can't use select if you stick with the lines, because you would have to click on the rendered line pixels, not the space inside the lines bounding them - which is what I read as what you wish to do.
You can use Kos's answer, but in order to render the space you would need to solid-fill it, which would involve converting all of your contours to convex shapes, which is painful. So I think that would work sometimes and give the wrong answer in other cases unless you did that.
What you need to do is use the CPU. You have the view extents from the viewport and the perspective matrix. With the mouse coordinates, generate the view-to-mouse-pointer vector. You also have all the coordinates of the contours.
Take the first coordinate of the first contour and make a vector to the second coordinate. Take the 3rd coordinate and make a vector from 2 to 3, and repeat all the way around your contour, finally making the last one from coordinate n back to 0 again. For each pair in sequence find the cross product and sum up all the results. When you have that final summation vector, keep hold of it and take the dot product with the mouse pointer direction vector. If it's positive then the mouse is inside the contour, if it's negative then it's not, and if it's 0 then I guess the plane of the contour and the mouse direction are parallel.
Do that for each contour and then you will know which of them are hit by your mouse. It's up to you which one you want to pick from that set. Highest Z?
It sounds like a lot of work but it's not too bad and will give the right answer. You might additionally like to keep bounding boxes of all your contours; then you can early-out the ones away from the mouse vector by doing the same math as for the full test but only on the 4 sides - if the box is not hit then the contour cannot be either.
The first is easy to implement and widely used.

A method of creating simple game GUI

I have been able to find a lot of information on actual logic development for games. I would really like to make a card game, but I just don't understand how, based on the mouse position, an object can be selected (or at least the proper way to do it). First I thought of bounding-box checking, but not all my bitmaps are rectangles. Then I thought of making a hidden buffer with each object having a different color, but it seems ridiculous to have to do it this way. I'm wondering how it is really done. For example, how does Adobe Flash know which object is under the mouse?
Thanks
Your question is how to tell if the mouse is above a non-rectangular bitmap. I am assuming all your bitmaps are really rectangular, but they have transparent regions. You must already somehow be able to tell which part of your (rectangular) bitmap is transparent, depending on the scheme you use (e.g. if you designate a color as transparent or if you use a bit mask). You will also know the z-order (layering) of bitmaps on your canvas. Then when you detect a click at position (x,y), you need to find the list of rectangular bitmaps that span over that pixel. Sort them by z-order and for each one check whether the pixel is transparent or not. If yes, move on to the next bitmap. If no, then this is the selected bitmap.
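A small sketch of that per-pixel test (my own illustration with made-up names, using Qt's QImage for concreteness; any bitmap type with an alpha channel works the same way):

#include <vector>
#include <QColor>
#include <QImage>
#include <QPoint>

struct Sprite { QImage image; QPoint topLeft; };

// 'sprites' is assumed to be sorted from topmost to bottommost z-order.
const Sprite *spriteUnderMouse(const std::vector<Sprite> &sprites, const QPoint &mouse)
{
    for (size_t i = 0; i < sprites.size(); ++i) {
        const Sprite &s = sprites[i];
        const QPoint local = mouse - s.topLeft;           // into bitmap coordinates
        if (local.x() < 0 || local.y() < 0 ||
            local.x() >= s.image.width() || local.y() >= s.image.height())
            continue;                                      // outside this bitmap
        if (qAlpha(s.image.pixel(local.x(), local.y())) != 0)
            return &s;                                     // hit a non-transparent pixel
    }
    return 0;                                              // only transparent regions were hit
}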
Or you may use a geometric solution: store/manage the geometry of the card/item, for example as a list of shapes like circles and rectangles.
Maybe triangles or ellipses if you have lots of time. Telling whether a point is inside a triangle is a mathematical question and can be numerically unstable if the triangle is very thin (the algorithm involves a division). See: How to determine if a point is in a 2D triangle?
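For illustration, the usual division-free way to do that test (my own sketch, following the sign-of-cross-product approach discussed in the linked question):

struct P2 { float x, y; };

// Signed area test: which side of edge a-b the point p lies on.
static float cross(const P2 &a, const P2 &b, const P2 &p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// The point is inside the triangle if it is on the same side of all three edges.
bool pointInTriangle(const P2 &p, const P2 &a, const P2 &b, const P2 &c)
{
    const float d1 = cross(a, b, p);
    const float d2 = cross(b, c, p);
    const float d3 = cross(c, a, p);
    const bool hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
    const bool hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);
    return !(hasNeg && hasPos);   // no division, so thin triangles are not a problem
}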