I'm trying to write an app in C++ using IAMCameraControl. I have found Set, Get, etc., but I cannot find a way to actually tell whether the camera has completed moving (e.g. CameraControl_Pan).
My goal is to turn on autofocus, move the camera, then turn off autofocus after the camera has stopped moving and, if possible, stopped focusing.
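For what it's worth, IAMCameraControl exposes no completion event or callback for pan/tilt moves, so the usual workaround (an assumption on my part, not something the API promises) is to poll IAMCameraControl::Get for CameraControl_Pan until the reported value stops changing. A minimal, camera-agnostic sketch of that polling loop; `getValue` is a hypothetical wrapper around something like `pCameraControl->Get(CameraControl_Pan, &value, &flags)`:

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Poll a property getter until its value has been stable for
// `stablePolls` consecutive reads, or until `timeout` expires.
// Returns true if the value settled, false on timeout.
bool waitForSettle(const std::function<long()>& getValue,
                   int stablePolls,
                   std::chrono::milliseconds pollInterval,
                   std::chrono::milliseconds timeout)
{
    auto deadline = std::chrono::steady_clock::now() + timeout;
    long last = getValue();
    int stable = 0;
    while (std::chrono::steady_clock::now() < deadline) {
        std::this_thread::sleep_for(pollInterval);
        long current = getValue();
        if (current == last) {
            if (++stable >= stablePolls)
                return true;     // value unchanged long enough: settled
        } else {
            stable = 0;          // still moving: reset the counter
            last = current;
        }
    }
    return false;
}
```

One caveat: some drivers report the commanded target value immediately rather than the live position, so this needs verifying against the actual hardware.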
I am writing an OpenSceneGraph based program that uses two cameras: one to render the preview for the user and one that uses a callback to make screenshots with MRT. The exporter camera uses an orthographic projection, always from the same position with the same view and up vectors, whereas the preview camera can be moved by the user.
Upon starting the program, I create the preview camera, add the shaders to it, then set up the viewer and add the preview camera to it.
Afterwards I create the export camera, add its shaders, textures and callback and then I add the export camera as a child to the preview camera. Up to this point, nothing has been rendered (i.e. no frame call has been made).
The addChild call is issued (i.e. the breakpoint is reached in VS, and stepping further I can see that the exporter camera is now a child of the preview camera). However, once I issue a command to actually make a screenshot, the exporter camera is no longer a child of the preview camera (by now, a few render calls have been made).
Why is this, and how can I fix it apart from adding the exporter camera again?
On a side note: I started using computergraphics.stackexchange.com — would this question be suited for that site, or is it meant for algorithms rather than implementations?
If you're using osg::View, it has its own implementation for master and slave cameras - see the osg::View headers for information or this old tutorial for an example: http://thermalpixel.github.io/osg/2014/02/15/rtt-with-slave-cameras.html
I'm using pygame with a joystick controller. The joystick controller is not calibrated correctly, though. The right horizontal axis continually outputs bad values and does not zero correctly upon return to the center position. Is this fully a hardware issue, or is there a method of calibration/continual calibration using pygame or another library?
I had to calibrate my joystick using the software provided when I bought it. I couldn't find a way to do so with pygame.
Are the outputs actually incorrect, though? My outputs are never zeroed, but my program functions properly when using them.
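If it turns out to be drift rather than broken hardware, the standard software fix is a dead zone around a measured rest value. A sketch of the arithmetic in C++ (the function name and parameters are mine; the same formula applies directly to the -1.0..1.0 values pygame's `Joystick.get_axis` returns):

```cpp
#include <algorithm>
#include <cmath>

// Re-center a raw axis reading around a measured rest value, clip a
// dead zone around zero, and rescale so the output still spans [-1, 1].
// `raw` and `restValue` are in the joystick's native [-1, 1] range;
// sample `restValue` once at startup while the stick is untouched.
double calibrateAxis(double raw, double restValue, double deadZone)
{
    double centered = raw - restValue;          // remove the drift offset
    double magnitude = std::fabs(centered);
    if (magnitude < deadZone)                   // inside dead zone -> zero
        return 0.0;
    // Rescale so values just outside the dead zone start near 0
    // instead of jumping straight to deadZone.
    double scaled = (magnitude - deadZone) / (1.0 - deadZone);
    scaled = std::min(scaled, 1.0);
    return centered < 0 ? -scaled : scaled;
}
```

With a dead zone of around 0.1 to 0.15 a stick that never quite returns to zero will still read as zero at rest.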
As a semester project, my group and I had to create a game. We decided to make a 2D racing game; what makes it different from a normal 2D racing game is that we use image processing to control the cars. The camera detects the user's hands and moves the car based on their location. So far we have been able to do most of the code, except one part.
My question is: how do we assign the image (the car) functionality based on the location of the hand?
We are using OpenCV, and it has done everything so far, but we couldn't figure out how to give the image any functionality. I heard that SDL is very good for making a 2D game like this, but I couldn't make it work with OpenCV.
I need suggestions on how to assign functionality to an image, OR maybe change the image into something else (a sprite/object) to make it easier to assign functionality. I am open to any suggestions. I have tried doing this using only OpenCV/C++ methods, such as waitKey and mouse events, but they do not work when we need the data from the video input.
Thank you
I'm making an iPad game using Cocos2D and Box2D.
Among other elements, there's a fast-moving player object and a bunch of static line objects. I want the lines to detect when the player crosses them but not to act like a wall to the player object or any other moving objects in the game. So I've got the lines set to be sensors.
However, the nifty anti-tunneling code that Box2D has for fast-moving object collision detection doesn't seem to apply to bodies that are set as sensors. So now my player object passes right through the lines and only gets detected maybe one time in five.
How can I get Box2D to detect the sprite crossing the line every time, no matter how fast it's going?
Edit: I found this post on the box2D forums where someone had a similar issue and found a possible solution. However I don't follow how to implement the solution. Maybe it'll help someone else, or maybe someone can explain what this person did more clearly. Here's what they said:
OK I got it working. Someone responded in the Box2D forums with a solution, which is to use a ray cast instead of relying on the built-in collision detection. I was able to find instructions on how to do this in this excellent tutorial on RayWenderlich.com
For my purposes, I simply calculated the sprite's velocity from the last frame, then performed a ray cast to see if it crossed any lines. The callback gives the x,y coordinate of where it crossed.
I want to make a b2MouseJoint work similarly to b2SetPosition(). I know the mouse joint applies a force, so it's not possible to reach the desired point without any delay the way SetPosition() does, but I want it to come as close as possible. Which mouse joint/body properties should I work on so that it behaves as close to b2SetPosition() as possible?
Thanks for your answer.
According to the Box2D API Reference on b2MouseJoint:
NOTE: this joint is not documented in the manual because it was
developed to be used in the testbed. If you want to learn how to use
the mouse joint, look at the testbed.
There's no "b2SetPosition". There is b2Position, which is an internal class; perhaps you meant b2Body->SetTransform(), which sets the position of a body.
If you could explain better what you're trying to do and why it has to be a b2MouseJoint, I might be able to help more.