LibOVR position tracking functions (Oculus, C++)

There are a ton of different pose tracking functions in LibOVR, so I am a little confused. ovr_GetTrackingState returns the position and orientation relative to the tracking origin, if I am right. So why would one ever need to call ovr_GetEyePoses instead of just calling ovr_GetTrackingState to get the head orientation and then calling ovr_CalcEyePoses? Or is that what ovr_GetEyePoses actually does under the hood? And for ovr_GetEyePoses2 you pass in the HmdToEyePose from ovr_GetRenderDesc, right? So why would one need the other version, which does not take an ovrPosef? Is that just convenience, or is there actually a difference? Lastly, why would one need ovr_GetDevicePoses, since the only device types are ovrTrackedDevice_HMD and ovrTrackedDevice_None? Does it return something that ovr_GetTrackingState does not? Do some of these functions have performance penalties over the others?
So to sum it all up: what is the difference between using ovr_GetEyePoses(2) and using ovr_GetTrackingState + ovr_CalcEyePoses? And what is even the point of ovr_GetDevicePoses, when we can just call ovr_GetTrackingState?

Related

Is there any way to use the SetCursorPos(int, int) function but have it take two doubles instead of two ints, for slower speed?

I'm trying to make an Xbox controller move the cursor. Is there any way I can use the SetCursorPos() function and increment by two doubles instead of two ints? The problem is that 1 is still too fast a change.
SetCursorPos() doesn't increment the cursor, it moves it to a new absolute x/y location. See the documentation for a description. And no, you can't call it with float params, it takes int params.
You did not provide any code, so making comments on other ways to do it is impossible. If you are incrementing the location such as e.g.
x = x + 1;
y = y + 1;
SetCursorPos(x, y);
then to make it move slower you can simply add a delay between successive calls to SetCursorPos().
For reliable input injection you should be using SendInput instead of SetCursorPos. This ensures that the system performs the entire pipeline of input processing, keeping applications happy. Setting the MOUSEEVENTF_ABSOLUTE flag allows you to pass normalized coordinates in the range [0..65535]. In the vast majority of cases this provides a higher resolution than the display device; essentially, it allows you to use sub-pixel precision.

Which way of rendering objects in OpenGL is more efficient?

Which way of rendering two graphic elements with the same shape but different coordinates is more efficient in OpenGL (e.g. one object is upside down)?
1) Generate two different sets of points on the CPU and then use only one shader in the while loop.
2) Generate only one set of points on the CPU, create two different shaders, and then switch between them in the while loop.
Additional question: is it possible to create a set of points inside a shader? (These points would be calculated using the sin() and cos() functions.)
If in the first case you're switching buffers between OpenGL function calls then probably both solutions are equally bad. If you're holding two shapes in a single buffer and drawing everything in a single call then it's going to be faster than solution 2, but it requires twice the memory, which is also bad.
(You should have a single buffer for the shape and another one for the transformations.)
In general, you should use as few OpenGL calls as possible.
Regarding the second question: yes. For example, you could use a hard-coded array or derive the point coordinates from gl_VertexID.

Storing collections of items that an algorithm might need

I have a class MyClass that stores a collection of PixelDescriptor* objects. MyClass uses a function specified by a Strategy pattern style object (call it DescriptorFunction) to do something for each descriptor:
void MyFunction()
{
    for (PixelDescriptor* descriptor : DescriptorCollection)
    {
        DescriptorFunction->DoSomething(descriptor);
    }
}
However, this only makes sense if the descriptors are of a type that the DescriptorFunction knows how to handle. That is, not all DescriptorFunctions know how to handle all descriptor types, but as long as the stored descriptors are of a type that the specified visitor knows about, all is well.
How would you ensure the right type of descriptors are computed? Even worse, what if the strategy object needs more than one type of descriptor?
I was thinking about making a composite descriptor type, something like:
class CompositeDescriptor
{
    std::vector<PixelDescriptor*> Descriptors;
};
Then a CompositeDescriptor could be passed to the DescriptorFunction. But again, how would I ensure that the correct descriptors are present in the CompositeDescriptor?
As a more concrete example, say one descriptor is Color and another is Intensity. One Strategy may be to average Colors. Another strategy may be to average Intensities. A third strategy may be to pick the larger of the average color or the average intensity.
I've thought about having another Strategy style class called DescriptorCreator that the client would be responsible for setting up. If a ColorDescriptorCreator was provided, then the ColorDescriptorFunction would have everything it needs. But making the client responsible for getting this pairing correct seems like a bad idea.
Any thoughts/suggestions?
EDIT: In response to Tom's comments, a bit more information:
Essentially DescriptorFunction is comparing two pixels in an image. These comparisons can be done in many ways (besides just finding the absolute difference between the pixels themselves). For example: 1) compute the average of corresponding pixels in regions around the pixels (centered at the pixels); 2) compute a fancier "descriptor", which typically produces a vector at each pixel, and average the difference of the two vectors element-wise; 3) compare 3D points corresponding to the pixel locations in external data; etc.
I've run into two problems.
1) I don't want to compute everything inside the strategy (if the strategy just took the 2 pixels to compare as arguments) because then the strategy has to store lots of other data (the image, there is a mask involved describing some invalid regions, etc etc) and I don't think it should be responsible for that.
2) Some of these things are expensive to compute. I have to do this millions of times (the pixels being compared are always different, but the features at each pixel do not change), so I don't want to compute any feature more than once. For example, consider a strategy function that compares the fancy descriptors. In each iteration, one pixel is compared to all other pixels. This means that in the second iteration, all of the features would have to be computed again, which is extremely redundant. This data needs to be stored somewhere that all of the strategies can access; this is why I was trying to keep a vector in the main client.
Does this clarify the problem? Thanks for the help so far!
The first part sounds like a visitor pattern could be appropriate. A visitor can ignore any types it doesn't want to handle.
If they require more than one descriptor type, then it is a different abstraction entirely. Your description is somewhat vague, so it's hard to say exactly what to do. I suspect that you are overthinking it. I say that because, generally, choosing arguments for an algorithm is a high-level concern.
I think the best advice is to try it out.
I would write each algorithm with concrete arguments (or stubs if it's well understood). Write code to translate the collection into the concrete arguments. If there is an abstraction to be made, it should become apparent while writing your translations. Writing a few choice helper functions for these translations is usually the bulk of the work.
Giving a concrete example of the descriptors and a few example declarations might give enough information to offer more guidance.

Tweener framework for c++?

For ActionScript there are quite a few "tweening" frameworks that facilitate animating objects, for example TweenLite: http://www.greensock.com/tweenlite/
It allows you to animate an arbitrary object with a single line of code:
Pseudocode:
tween(myObject, 3.0f, {xpos:300});
What this line of code does is instantiate a new tweening object, which will, step by step over 3 seconds, animate the "xpos" property of myObject from whatever value it currently has to 300. Additionally, it allows you to use a variety of different interpolation functions.
So in order to animate an object to a new point, I can write a single line of code and forget about it (the tweening object will destroy itself once it has finished animating the value).
My question is whether there is anything comparable for C++.
I know that those languages are completely different. Anyway, I think it should be possible and would be highly convenient, so if anyone knows a framework that does the trick, it would be welcome :)
Thanks!
I have stumbled upon libClaw's tweeners, and it looks promising: well documented, pretty mature, and more or less alive.
I'm not sure I like the fact that it operates on doubles only, whereas I would need it primarily for floats and sometimes ints, but I don't think the performance penalty of computing in double and casting should be too big...
How about cpptweener? It is ported from the awesome AS3 Tweener library.

Should screen dimension constants that hold magic numbers be refactored?

I have a few specific places in my code where I use specific pixel dimensions to blit certain things to the screen. Obviously these are placed in well named constants, but I'm worried that it's still kind of vague.
Example: This is in a small function's local scope, so I would hope it's obvious that the constant's name applies to what the method name refers to.
const int X_COORD = 430;
const int Y_COORD = 458;
ApplySurface( X_COORD, Y_COORD, .... );
...
The location on the screen was calculated specifically for that spot. I almost feel as if I should be making constants like SCREEN_BOTTOM_RIGHT so I could do something like const int X_COORD = SCREEN_BOTTOM_RIGHT - SOME_OTHER_NAME.
Is the code above too ambiguous? Or, as a developer, would you see that and say: alright, that's (430, 458) on the screen, got it?
Depends. Is there a particular reason those constants are what they are? (For instance, is that "430" actually 200 pixels to the left of some other element?)
If so, then it'd probably make more sense to express it in terms of the constant used for the other element (or whatever reason results in that number).
If they're all just arbitrary positions, then expressing them as coordinates makes sense. But chances are, they're not actually arbitrary.
What size screen are you assuming I have? People have very different screen resolutions on their machines, and any fixed pixel size or position is going to be wrong for some people some of the time. My normal display is 1900x1220; my other display is 1440x1050; other people use screens with different sizes. If you are displaying to a fixed size window that the user cannot resize, then it may be safer to use fixed sizes.
Without knowing what ApplySurface() does, it is hard to say whether it is clear as written. However, the relative names could well be sensible. As a maintenance programmer, I'd have no idea where the values 430 and 458 were derived from without a supporting comment unless you used an expression to make it clear.