Is it possible to obtain float touch coordinates in Windows? - c++

I am working on a project in which I need high-accuracy input coordinates for further calculations.
I call the GetTouchInputInfo function in my WM_TOUCH message handler to acquire TOUCHINPUT structures, but the coordinates provided in this struct are given in hundredths of a pixel of physical screen coordinates. So basically the last two digits are always 0, and after dividing by 100 these coordinates are just integer screen coordinates. I don't know why they made it hundredths.
Besides, does the touchscreen sensor give a continuous signal instead of a discrete one? As I understand it, Windows has a feature called touch prediction: it introduces a small input lag because it holds a few of the latest input points and runs a prediction algorithm to make these fresh inputs smooth and pretty. Even if the touchscreen can only sense discrete positions, shouldn't the input be float anyway after the prediction algorithm?
I really couldn't find out how to obtain float touch input. Is there any way, or any function, to achieve this?
Thank you!
And sorry about my English.

To convert the scaled LONG coordinates to float, divide by 100 (here pt is assumed to point at a TOUCHINPUT filled in by GetTouchInputInfo):
float xf = pt->x / 100.0f;
float yf = pt->y / 100.0f;
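For the integer version of this division the SDK also provides the TOUCH_COORD_TO_PIXEL macro. For context, here is a minimal sketch (untested, error handling omitted, needs <vector>) of a WM_TOUCH handler that retrieves the whole TOUCHINPUT array; the window is assumed to have been registered with RegisterTouchWindow beforehand:
case WM_TOUCH:
{
    UINT cInputs = LOWORD(wParam);
    std::vector<TOUCHINPUT> inputs(cInputs);
    if (GetTouchInputInfo((HTOUCHINPUT)lParam, cInputs,
                          inputs.data(), sizeof(TOUCHINPUT)))
    {
        for (const TOUCHINPUT& ti : inputs)
        {
            // x and y are in hundredths of a physical screen pixel
            float xf = ti.x / 100.0f;
            float yf = ti.y / 100.0f;
            // ... use xf, yf ...
        }
        CloseTouchInputHandle((HTOUCHINPUT)lParam);
    }
    return 0;
}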

Related

Scalable Ambient Obscurance rendering issue

I am trying to implement this SAO algorithm.
I am getting the following result:
I can't figure out why I have the nose on top of the walls; it seems to be a z-buffer issue.
Here are my input values:
const float projScale = 100.0;
const float radius = 0.9;
const float bias = 0.0005;
const float intensityDivR6 = pow(radius, 6);
I am using the original shader without modifications, except that I disable the usage of mipmaps of the depth buffer.
My depth buffer (on a different scene, sorry):
It should be an issue with the z-buffer linearization, or the depth not being between -1 and 1.
Thank you Bruno, I finally figured out what the issues were.
The first was that I didn't transform my Z correctly: they use a specific pre-pass to make Z linear and map it into [-1, 1], and I was using an incompatible method to do it.
I also had to negate my near and far plane values directly in the projection matrix so that some uniforms were computed correctly.
Result :
I had a similar problem, getting visually wrong occlusion linked to the near/far planes, so I decided to share what I did to fix it.
The problem I had is described in a previous comment: I was getting self-occlusion when the camera was close to an object or when the radius was much too big.
If you take a closer look at the conversion from a depth-buffer value to a camera-space value (the reconstructCSZ function from the g3d engine), you will see that plugging in a depth of 0 gives you the near plane if you work with positive near/far. What that means is that every tap falling outside the model gets a z component equal to near, which gives you wrong occlusion for fragments whose z is close to 0.
You basically have to discard every tap that lands on the near plane, to avoid them being taken into account when computing the full contribution.
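For reference, a minimal sketch of the usual perspective depth inversion for positive near/far (the function name is illustrative, not the g3d one); it makes the d == 0 behaviour described above explicit:
// Invert a [0,1] depth-buffer value d back to camera-space Z, assuming a
// standard perspective projection with positive zNear/zFar. Note that
// d == 0 yields exactly zNear, which is why taps that miss the model
// land on the near plane.
float reconstructCameraSpaceZ(float d, float zNear, float zFar)
{
    return (zNear * zFar) / (zFar - d * (zFar - zNear));
}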

Kinect Joint Absolute coordinates

I'm using the Kinect for Windows SDK (v1.8)
I'm successfully reading motion from the Kinect, but I'm now wondering how to get the absolute position of the joint I'm tracking (the right hand).
I'm using the function NuiTransformSkeletonToDepthImage, but the values range from 100 to 200 in both the x and y coordinates.
Any suggestions for how to transform the returned coordinates to screen coordinates?
Got a feeling I'm missing something really obvious though...
Thank you in advance
Ok, after quite a bit of looking around and trial and error, I found the solution.
When you call NuiTransformSkeletonToDepthImage() you can specify a Kinect resolution using the NUI_IMAGE_RESOLUTION enum. I believe the values are 80x60, 320x240, 640x480 and 1280x960.
Depending on which resolution you decide to use, you'll need to divide the returned x and y coordinates by that resolution, then multiply them by your window size, and you'll have the coordinates in your window.
I believe you can also specify the resolution when creating an instance of the Kinect Sensor interface.
An example, as my explanation isn't too concise:
float x, y;
const Vector4 fromPos = skeleton.SkeletonPositions[NUI_SKELETON_POSITION_HAND_RIGHT];
NuiTransformSkeletonToDepthImage(fromPos, &x, &y, NUI_IMAGE_RESOLUTION_320x240);
x = x * ((float)WINDOW_WIDTH / 320.0f);
y = y * ((float)WINDOW_HEIGHT / 240.0f);
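If you want to pick the resolution when setting up the sensor, it is chosen when opening the image stream; a rough sketch (error handling omitted, and treat the flag values as assumptions):
// pSensor is an INuiSensor* obtained from NuiCreateSensorByIndex
HANDLE hDepthStream = nullptr;
pSensor->NuiImageStreamOpen(NUI_IMAGE_TYPE_DEPTH,
                            NUI_IMAGE_RESOLUTION_320x240,
                            0,          // frame flags
                            2,          // buffered-frame limit
                            nullptr,    // next-frame event
                            &hDepthStream);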
Any questions; feel free to ask.

Getting the velocity vector from position vectors

I looked at a bunch of similar questions, and I cannot seem to find one that particularly answers my question. I am coding a simple 3d game, and I am trying to allow the player to pick up and move entities around my map. I essentially want to get a velocity vector that will "push" the physics object a distance from the player's eyes, wherever they are looking. Here's an example of this being done in another game (the player is holding a chair entity in front of his eyes).
To do this, I find out the player's eye angles, then get the forward vector from the angles, then calculate the velocity of the object. Here is my working code:
void Player::PickupOtherEntity( Entity& HoldingEntity )
{
QAngle eyeAngles = this->GetPlayerEyeAngles();
Vector3 vecPos = this->GetEyePosition();
Vector3 vecDir = eyeAngles.Forward();
Vector3 holdingEntPos = HoldingEntity.GetLocation();
// update object by holding it a distance away
vecPos.x += vecDir.x * DISTANCE_TO_HOLD;
vecPos.y += vecDir.y * DISTANCE_TO_HOLD;
vecPos.z += vecDir.z * DISTANCE_TO_HOLD;
Vector3 vecVel = vecPos - holdingEntPos;
vecVel = vecVel.Scale(OBJECT_SPEED_TO_MOVE);
// set the entity's velocity as to "push" it to be in front of the player's eyes
// at a distance of DISTANCE_TO_HOLD away
HoldingEntity.SetVelocity(vecVel);
}
All of that is great, but I want to convert my math so that I can apply an impulse. Instead of setting a completely new velocity on the object, I want to "add" some velocity to its existing velocity. So, supposing I have its current velocity, what kind of math do I need to "add" velocity? This is essentially a game-physics question. Thank you!
A very simple implementation could be like this:
velocity(t+delta) = velocity(t) + delta * acceleration(t)
acceleration(t) = force(t) / mass of the object
velocity, acceleration and force are vectors. t, delta and mass scalars.
This only works reasonably well for small and equally spaced deltas. What you are essentially trying to achieve with this is a simulation of bodies using classical mechanics.
An impulse is technically F·Δt for a constant F. Here we might want to work with a·Δt instead, because mass is irrelevant. If you want to animate an impulse you have to decide what the change in velocity should be and how long it should take. It gets complicated really fast.
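In code, the bookkeeping is just an instantaneous velocity change; a sketch reusing the question's types (GetVelocity and the helper name are assumptions):
// Applying an impulse J = F * dt changes velocity by dv = J / m.
void ApplyImpulse(Entity& e, const Vector3& impulse, float mass)
{
    Vector3 v = e.GetVelocity();   // assumed accessor
    v.x += impulse.x / mass;
    v.y += impulse.y / mass;
    v.z += impulse.z / mass;
    e.SetVelocity(v);
}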
To be honest, an impulse isn't the correct thing to do here anyway. Instead it would be preferable to set a constant pick_up_velocity (people don't tend to pick things up using an impulse), and update the position by velocity.y each step until the object reaches the correct level:
while(entPos.y < holdingEntPos.y)
{
entPos.y += pickupVel.y;
//some sort of short delay
}
As for floating in front of the player's eyes: set up an EyeMovementEvent of some sort that also sends the correct change in position to any entity the player is holding.
And if I missed something and that's what you are already doing, remember that when humans apply an impulse it is generally a really high acceleration over a really short time, much less than a frame. You wouldn't see it in-game anyway.
Basic Newtonian/d'Alembert physics dictates:
derivative(position) = velocity
derivative(velocity) = acceleration
and also backwards:
integrate(acceleration) = velocity
integrate(velocity) = position
so for your engine you can use:
rectangle summation instead of integration (a numerical solution of the integral). Define a time constant dt [seconds], which is the interval between updates (a timer, or 1/fps). The update code (which must be called periodically every dt) is:
vx+=ax*dt;
vy+=ay*dt;
vz+=az*dt;
x+=vx*dt;
y+=vy*dt;
z+=vz*dt;
where:
a{x,y,z} [m/s^2] is the current acceleration (in your case the direction vector scaled so that a = Force/mass)
v{x,y,z} [m/s] is the current velocity
x,y,z [m] is the current position
These values have to be initialized: a and v to zero, and x,y,z to the initial position
all objects/players... have their own copies of these variables
a full stop is done by v = 0; a = 0;
objects are driven only by changing a
in case of collision, mirror the v vector about the collision normal,
and maybe multiply by some k < 1.0 (0.95 for example) to account for the energy loss on impact
You can add gravity or any other force field by adding a g vector:
vx+=ax*dt+gx*dt;
vy+=ay*dt+gy*dt;
vz+=az*dt+gz*dt;
You can also add friction and anything else you need.
PS. The same goes for angles: just use angle/omega/epsilon/I instead of x/v/a/m.
To be clear, by angles I mean rotation (pitch, yaw, roll) around the center of mass.
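As a sketch of that angular analogue (scalar 2D case for brevity; epsilon is the angular acceleration, I the moment of inertia):
epsilon = torque / I;    // angular acceleration [rad/s^2]
omega += epsilon * dt;   // angular velocity [rad/s]
angle += omega * dt;     // orientation [rad]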

2d rotation on set of points causes skewing distortion

I'm writing an application in OpenGL (though I don't think this problem is related to that). I have some 2d point set data that I need to rotate. It later gets projected into 3d.
I apply my rotation using this formula:
x' = x cos f - y sin f
y' = y cos f + x sin f
Where 'f' is the angle. When I rotate the point set, the result is skewed. The severity of the effect varies with the angle.
It's hard to describe, so I have pictures.
The red things are some simple geometry. The 2D point sets are the vertices of the white polylines you see around them. The first picture shows the undistorted point sets, and the second picture shows them after rotation. It's not just skew that's occurring with the rotation; sometimes displacement seems to occur as well.
The code itself is trivial:
double cosTheta = cos(2.4);
double sinTheta = sin(2.4);
CalcSimplePolyCentroid(listHullVx,xlate);
for(size_t j=0; j < listHullVx.size(); j++) {
// translate
listHullVx[j] = listHullVx[j] - xlate;
// rotate
double xPrev = listHullVx[j].x;
double yPrev = listHullVx[j].y;
listHullVx[j].x = ((xPrev*cosTheta) - (yPrev*sinTheta));
listHullVx[j].y = ((yPrev*cosTheta) + (xPrev*sinTheta));
// translate
listHullVx[j] = listHullVx[j] + xlate;
}
If I comment out the code under // rotate above, the output of the application is the first image; adding it back in gives the second image. There's literally nothing else going on (afaik).
The data types being used are all doubles, so I don't think it's a precision issue. Does anyone have any idea why rotation would cause skewing like the above pictures show?
EDIT
filipe's comment below was correct. This probably has nothing to do with the rotation, and I hadn't provided enough information about the problem.
The geometry I've shown in the pictures represents buildings. They're generated from lon/lat map coordinates. In the point data I used for the transform, I forgot to apply an actual projection to cartesian coordinate space and just mapped x -> lon, y -> lat; I think this is the reason I'm seeing the distortion. I'm going to request that this question be deleted, since I don't think it'll be useful to anyone else.
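For anyone with the same symptom, a minimal sketch of the missing step: project lon/lat into a local planar frame before rotating. This uses the simple equirectangular approximation, which is only valid over small areas; lon0/lat0 are an assumed local origin, in degrees.
#include <cmath>

const double kDegToRad = M_PI / 180.0;
const double kEarthRadius = 6371000.0;   // mean Earth radius [m]

// project (lon, lat) to local planar coordinates around (lon0, lat0)
double x = kEarthRadius * (lon - lon0) * kDegToRad * std::cos(lat0 * kDegToRad);
double y = kEarthRadius * (lat - lat0) * kDegToRad;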
Update:
As a result of your comments it turned out that the bug is unlikely to be in the presented code.
One final hint: the standard transform formulas are only valid if the coordinate system is cartesian; on iOS you sometimes have an inverted y axis.

Drawing big circles from scratch [closed]

I am new to C++, but the language seems alright to me. As a learning project I have decided to make a small 2D graphics engine. It might seem like a hard project, but I have a good idea of how to move on.
I haven't really started yet, but I am forming things in my head at the moment, and I came across this problem:
At some point I will have to make a function to draw circles on the screen. My approach to that right now would be something like this:
in a square with sides from (x-r) to (x+r), loop through x and y;
if at each point the distance from the center, sqrt(x^2 + y^2), is less than or equal to r,
then draw a pixel at that point.
This would work (if not, don't bother telling me, I'll figure it out); a rough sketch follows below. I would of course only draw this circle if x+r and y+r are on the screen.
The problem lies in that I will sometimes need to draw really big circles. If for example I need to draw a circle with radius 5000, the per-pixel loop would need to run a total of 10000^2 times. So with a processor at 2 GHz, this single circle could only be rendered 2*10^9 / 10000^2 ≈ 20 times per second while taking up the whole core (and that's believing it only takes one calculation per pixel, which is nowhere near the truth).
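For concreteness, the brute-force fill described above might look like this (putPixel is a placeholder for whatever actually writes to the framebuffer):
// Brute-force filled circle: test every pixel of the bounding square.
// Comparing squared distances avoids the sqrt.
void fillCircle(int cx, int cy, int r)
{
    for (int px = cx - r; px <= cx + r; ++px)
        for (int py = cy - r; py <= cy + r; ++py)
            if ((px - cx) * (px - cx) + (py - cy) * (py - cy) <= r * r)
                putPixel(px, py);   // hypothetical pixel-write routine
}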
Which approach is the correct one, then? I guess it has something to do with using the graphics hardware for these simple calculations. If so, can I use OpenGL with C++ for this? I'd like to learn that as well :)
My first C/C++ projects were in fact graphics libraries as well. I did not have OpenGL or DirectX and was using DOS at the time. I learned quite a lot from it, as I constantly found new and better (and faster) ways to draw to the screen.
The problem with modern operating systems is that they don't really allow you to do what I did back then. You cannot just start using the hardware directly. And frankly, these days you don't want to anymore.
You can still draw everything yourself. Have a look at SDL if you want to plot your own pixels. It is a C library that you will be able to wrap in your own C++ objects. It works on different platforms (including Linux, Windows and Mac), and internally it will make use of things like DirectX or OpenGL.
For real-world graphics, one doesn't just go about drawing one's own pixels. That is not efficient. Or at least not on devices where you cannot use the hardware directly...
But for your purposes, I think SDL is definitely the way to go! Good luck with that.
You don't do graphics by manually drawing pixels to the screen; that way madness lies.
What you want to use is either DirectX or OpenGL. I suggest you crack open Google and go read; there's a lot to read out there.
Once you've downloaded the libs, there are lots of sample projects to take a look at; they'll get you started.
There are two approaches at this point: the mathematical way of calculating the vectors that describe a shape with a very large number of sides (i.e. it'll look like a circle), or the 'cheating' method of just drawing a texture (i.e. a picture) of a circle to the screen, with an alpha channel making the rest of the texture transparent. (The cheating method is easier to code, faster to execute, and produces a better result, although it is less flexible.)
If you want to do it mathematically, then both of these libraries will allow you to draw lines to the screen, so you need to begin your approach from the view of the start point and end point of each line, not the individual pixels. I.e. you want vector graphics.
I can't do the heavy maths right now, but the vector approach might look a little like this (pseudo-code):
in-params: num_of_sides, length_of_side;
float angle = 360 / num_of_sides;
float start_x = 0;
float start_y = 0;
x = start_x;
y = start_y;
for(int i(0); i < num_of_sides; ++i)
{
float endX, endY;
rotateOffsetByAngle(x, y, x + length_of_side, y, angle * i, endX, endY);
drawline(x, y, endX, endY);
x = endX;
y = endY;
}
drawline(float startX, startY, endX, endY)
{
//does code that draws line between the start and end coordinates;
}
rotateOffsetByAngle(float startX, startY, endX, endY, angle, &outX, &outY)
{
//the in-parameters startX, startY and endX, endY describe a line
//we treat this line as the offset from the starting point
//do code that rotates this line around the point startX, startY, by the angle.
//after this rotation is done endX and endY are now at the same
//distance from startX and startY that they were, but rotated.
outX = endX;
outY = endY; //pass these new coordinates back out by reference;
}
In the above code we move around the outside of the circle, drawing each individual line one by one. For each line we have a start point and an offset; we then rotate the offset by an angle (this angle increases as we move around the circle). Then we draw the line from the start point to the offset point. Before we begin the next iteration we move the start point to the offset point, so the next line starts from the end of the last.
I hope that's understandable.
That is one way to draw a filled circle. It will perform appallingly slowly, as you can see.
Modern graphics is based on abstracting away the lower-level stuff so that it can be optimised; the developer writes drawCircle(x,y,r), and the graphics library plus drivers can pass that all the way down to the chip, which can fill in the appropriate pixels.
Although you are writing in C++, you are not manipulating data closest to the core unless you use the graphics drivers. There are layers of subroutine calls between even your setPixelColour level methods and an actual binary value being passed over the wire; at almost every layer there are checks and additional calculations and routines run. The secret to faster graphics, then, is to reduce the number of these calls you make. If you can get the command drawCircle all the way to the graphics chip, do that. Don't waste a call on a single pixel, when it's as mundane as drawing a regular shape.
In a modern OS, there are layers of graphics processing taking the requests of individual applications like yours and combining them with the windowing, compositing and any other effects. So your command to 'draw to screen' is intermediated by several layers already. What you want to provide to the CPU is the minimum information necessary to offload the calculations to the graphics subsystem.
I would say: if you want to learn to draw stuff on the screen, play with canvas and JS, as the development cycle is easy and comparatively painless. If you want to learn C++, try Project Euler, or draw stuff using existing graphics libraries. If you want to write a 2D graphics library, learn the underlying graphics technologies like DirectX and OpenGL, because they are the way graphics is done in reality. But they seem so complex, you say? Then you need to learn more C++ first. They are the way they are for some very good reasons, however complex the result is.
As the first answer says, you shouldn't do this yourself for serious work. But if you just want to do this as an example, then you could do something like this: first define a function for drawing line segments on the screen:
void draw_line(int x1, int y1, int x2, int y2);
This should be relatively straightforward to implement: just find the direction that is changing fastest, and iterate over that direction while using integer logic to find out how much the other dimension should change. I.e., if x is changing faster, then y = y1 + (x-x1)*(y2-y1)/(x2-x1).
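A sketch of that idea (putPixel again stands in for the actual framebuffer write; rounding and clipping are ignored):
#include <cstdlib>   // abs

void draw_line(int x1, int y1, int x2, int y2)
{
    if (x1 == x2 && y1 == y2) { putPixel(x1, y1); return; }
    if (abs(x2 - x1) >= abs(y2 - y1)) {
        // x changes fastest: step along x, derive y by similar triangles
        int step = (x2 >= x1) ? 1 : -1;
        for (int x = x1; x != x2 + step; x += step)
            putPixel(x, y1 + (x - x1) * (y2 - y1) / (x2 - x1));
    } else {
        // y changes fastest: step along y, derive x the same way
        int step = (y2 >= y1) ? 1 : -1;
        for (int y = y1; y != y2 + step; y += step)
            putPixel(x1 + (y - y1) * (x2 - x1) / (y2 - y1), y);
    }
}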
Then use this function to implement a circle as piecewise line elements:
#include <cmath>

void draw_circle(int x, int y, int r)
{
    double dtheta = 2*M_PI/8/r;    // arc step of about pi/4 of a pixel
    int x1 = x + r, x2, y1 = y, y2;
    int n = int(2*M_PI/dtheta);    // == 8*r segments
    for(int i = 1; i <= n; i++)    // i <= n so the circle is closed
    {
        double theta = i*dtheta;
        x2 = int(x + r*cos(theta)); y2 = int(y + r*sin(theta));
        draw_line(x1, y1, x2, y2);
        x1 = x2; y1 = y2;
    }
}
This uses floating point logic and trigonometric functions to figure out which line elements best approximate a circle. It is a somewhat crude implementation, but I think any implementation that wants to be efficient for very large circles has to do something like this.
If you are only allowed to use integer logic, one approach could be to first draw a low-resolution integer circle, and then subdivide each selected pixel into smaller pixels, and choose the sub-pixels you want there, and so on. This would scale as N log N, so still slower than the approach above. But you would be able to avoid sin and cos.