Python pygame: calibrate joystick controller (python-2.7)

I'm using pygame with a joystick controller. The joystick is not calibrated correctly, though: the right stick's horizontal axis continually outputs bad values and does not zero correctly when returned to the center position. Is this purely a hardware issue, or is there a method of calibration/continual calibration using pygame or another library?

I had to calibrate my joystick by using software provided when I bought the joystick. I couldn't find a way to do so with pygame.
Are the outputs actually incorrect, though? My outputs are never zeroed, but my program functions properly when using them.
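pygame itself exposes no calibration API, so the usual workaround is to calibrate in your own code: sample the stick's resting value once at startup, subtract it from every reading, and treat a small band around center as zero. A minimal sketch of that arithmetic, written in C++ for illustration (the same math applies directly to the floats get_axis returns; the 0.15 deadzone and the resting-center sampling are assumptions to tune for your hardware):
#include <algorithm>
#include <cmath>

// Correct a raw axis reading in [-1, 1]: subtract the center offset
// sampled while the stick was at rest, zero out a small deadzone, and
// rescale the remainder so full deflection still reaches +/-1.
double calibrateAxis(double raw, double restingCenter, double deadzone = 0.15)
{
    double v = raw - restingCenter;   // re-center the reading
    if (std::fabs(v) < deadzone)
        return 0.0;                   // near-center counts as zero
    double sign = v > 0.0 ? 1.0 : -1.0;
    double scaled = (std::fabs(v) - deadzone) / (1.0 - deadzone);
    return sign * std::min(scaled, 1.0);
}
Average a few hundred readings while the stick is untouched to obtain restingCenter; if the corrected value still drifts afterwards, the potentiometer itself is probably worn and it really is a hardware issue.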

Related

IAMCameraControl Interface. How to tell when camera is done moving?

I'm trying to write an app in C++ using IAMCameraControl. I have found Set, Get, etc., but I cannot find a way to actually tell if the camera has completed moving (e.g., CameraControl_Pan).
My goal is to turn on autofocus, move the camera, then turn off autofocus after the camera has stopped moving and, if possible, stopped focusing.
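As far as I know, DirectShow exposes no completion notification for these moves, so one pragmatic workaround is to poll IAMCameraControl::Get until the reported pan value stops changing. A sketch under those assumptions (the 50 ms interval and the three-sample stability window are arbitrary choices; beware that some drivers report the target value immediately rather than the live position, in which case this returns early):
#include <windows.h>
#include <dshow.h>   // IAMCameraControl, CameraControl_Pan

// Poll the pan position until it holds steady or we time out.
bool WaitForPanToSettle(IAMCameraControl *cam, DWORD timeoutMs = 5000)
{
    long last = 0, flags = 0, stable = 0;
    if (FAILED(cam->Get(CameraControl_Pan, &last, &flags)))
        return false;
    for (DWORD waited = 0; waited < timeoutMs; waited += 50)
    {
        Sleep(50);
        long now = 0;
        if (FAILED(cam->Get(CameraControl_Pan, &now, &flags)))
            return false;
        stable = (now == last) ? stable + 1 : 0;
        if (stable >= 3)   // unchanged for ~150 ms: assume the move is done
            return true;
        last = now;
    }
    return false;          // timed out while still moving
}
Once this returns true, you could switch focus back to manual via Set with CameraControl_Focus and CameraControl_Flags_Manual.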

Why does the camera not remain a child of the other camera after adding it in osg?

I am writing an OpenSceneGraph-based program that uses two cameras: one to render the preview for the user, and one that uses a callback to make screenshots with MRT (multiple render targets). The exporter camera uses an orthographic projection and always renders from the same position with the same view and up vectors, whereas the preview camera can be moved by the user.
Upon starting the program, I create the preview camera, add the shaders to it, then set up the viewer and add the preview camera to it.
Afterwards I create the export camera, add its shaders, textures and callback, and then add the export camera as a child of the preview camera. Up to this point, nothing has been rendered (i.e. no frame call has been made).
The addChild call is issued (i.e. the breakpoint is reached in VS, and stepping further I can see that the exporter camera is now a child of the preview camera). However, once I issue a command to actually make a screenshot, the exporter camera is no longer a child of the preview camera (by now, a few render calls have been made).
Why is this, and how can I fix it apart from adding the exporter camera again?
On a sidenote: I have started using computergraphics.stackexchange.com; would this question be suited for that site, or is it meant for algorithms rather than implementations?
If you're using osg::View, it has its own implementation for master and slave cameras. See the osg::View headers for information, or this old tutorial for an example: http://thermalpixel.github.io/osg/2014/02/15/rtt-with-slave-cameras.html
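A minimal sketch of that slave-camera route (the settings below are assumptions standing in for your actual export setup, not a drop-in fix): register the export camera with the viewer via addSlave instead of addChild, so the viewer itself keeps it in the frame traversal:
#include <osg/Camera>
#include <osgViewer/Viewer>

int main()
{
    osgViewer::Viewer viewer;

    // Export camera: orthographic, fixed pose, rendered before the preview.
    osg::ref_ptr<osg::Camera> exportCam = new osg::Camera;
    exportCam->setRenderOrder(osg::Camera::PRE_RENDER);
    exportCam->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
    exportCam->setProjectionMatrixAsOrtho(-1.0, 1.0, -1.0, 1.0, 0.1, 100.0);
    // ... attach the MRT textures, shaders and screenshot callback here ...

    // Slave cameras share the master's scene data and are re-rendered
    // every frame, independent of the preview camera's child list.
    viewer.addSlave(exportCam.get(), /*useMastersSceneData=*/true);

    // viewer.setSceneData(sceneRoot);   // your scene root goes here
    return viewer.run();
}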

Accessing a Projector using MATLAB

I wish to display an image through my projector via MATLAB. The projected image should be full-screen, without any figure window decorations (menu bar, the grey border which encompasses a figure, etc.).
Similar to a normal presentation, where the projector projects the complete slide or image, I want to do the same using MATLAB as my platform. Any thoughts or ideas? Can we access the projector using MATLAB? My first thought was to send data to the corresponding printer IP, but that doesn't seem to work.
If you know the relevant C++ command or method to do this, please suggest a link or a library, so that I may try to import it into my MATLAB platform.
Reason for doing this: projector-camera calibration for photometric correction of my projector display output.
Assuming your projector is set up as a second display, you can do something very simple: get the monitor position information and set the figure frame to be the monitor size.
% plot the figure however you want, then:
monitorFrames = get(0,'MonitorPositions');   % one row per attached display
secondMonitor = monitorFrames(2,:);          % geometry of the second display
% in some MATLAB versions the third entry spans from the primary display's
% origin, so subtract the primary display's width to get the actual width
secondMonitor(3) = secondMonitor(3)-monitorFrames(1,3);
set(gcf,'Position',secondMonitor);           % fill the second display
This will put the figure window onto the second monitor and have it take up the whole screen.
You can then use this to do whatever calibration you need, and shift this window around as necessary.
NOTE:
In no way am I saying this is the ideal solution. It is quick and dirty, and does not use any outside libraries.
UPDATE
If the above solution does not suit your specific needs, you could always save the plot as an image, then have your MATLAB script call a C++ program that opens the image and displays it full screen.
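A sketch of what such a helper could look like, assuming you are willing to pull in OpenCV for the window handling (the program name and the key-to-exit behaviour are my choices, not requirements):
#include <opencv2/highgui.hpp>
#include <opencv2/imgcodecs.hpp>

// Hypothetical helper: show an image borderless and fullscreen.
// MATLAB could invoke it with system('showfull image.png').
int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    cv::Mat img = cv::imread(argv[1]);
    if (img.empty())
        return 1;   // file missing or unreadable

    cv::namedWindow("projector", cv::WINDOW_NORMAL);
    cv::setWindowProperty("projector", cv::WND_PROP_FULLSCREEN,
                          cv::WINDOW_FULLSCREEN);
    cv::imshow("projector", img);
    cv::waitKey(0);   // any key closes the window
    return 0;
}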
This is non-trivial. For Windows you can use the WindowAPI submission on the MATLAB File Exchange. With the WindowAPI function installed you can do:
WindowAPI(FigH, 'Position', 'full');
For Mac and Linux you can use wrappers around OpenGL to do low level plotting, but you cannot use standard MATLAB figure windows. One nice implementation is PsychToolbox.

Invoking the mouse function of OpenGL using Kinect

I am creating an app in C++ (OpenGL) using the Kinect. Whenever we click in OpenGL, the function invoked is:
void myMouseFunction( int button, int state, int mouseX, int mouseY )
{
    /* button: GLUT_LEFT_BUTTON etc.; state: GLUT_DOWN or GLUT_UP */
}
But can we invoke them using Kinect? Maybe we have to use the depth buffer for it, but how?
First: you don't "click in OpenGL", because OpenGL doesn't deal with user input; it is purely a rendering API. What you're referring to is probably a callback to be used with GLUT. GLUT is not part of OpenGL but a free-standing framework that also does some user-input event processing.
The Kinect does not generate input events. What it does is return a depth image of what it "sees". You need to process this depth image somehow. There are frameworks like OpenNI which process the depth image and translate it into gesture data or similar. You can then process such gesture data further and interpret it as user input.
In your tags you referred to "openkinect", the open-source drivers for the Kinect. However, OpenKinect does no gesture extraction or interpretation; it only provides the depth image. You can of course perform simple tests on the depth data as well, for example testing whether there's some object within the bounds of some defined volume and interpreting this as a sort of event.
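To make that volume test concrete, here is a sketch that scans a rectangular region of a single depth frame for anything closer than a threshold. It assumes a frame of uint16_t millimetre values (one of the formats libfreenect can deliver); the minimum-pixel count is an arbitrary noise filter:
#include <cstdint>

// Report whether enough pixels inside the region [x0,x1) x [y0,y1)
// are closer than nearMm. A value of 0 means "no depth reading".
bool objectInVolume(const uint16_t *depth, int width,
                    int x0, int y0, int x1, int y1,
                    uint16_t nearMm, int minPixels = 50)
{
    int hits = 0;
    for (int y = y0; y < y1; ++y)
        for (int x = x0; x < x1; ++x)
        {
            uint16_t d = depth[y * width + x];
            if (d != 0 && d < nearMm)
                ++hits;
        }
    return hits >= minPixels;
}
A rising edge of this predicate (false on the previous frame, true on the current one) is what you would then translate into the same action your mouse callback performs.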
I think you are confusing what the Kinect really does. The Kinect feeds depth and video data to your computer, which then has to process it. OpenKinect only does very minimal processing for you -- no skeleton tracking. Skeleton tracking allows you to get a 3D representation of where each of your user's joints is.
If you're just doing some random hacking, you could perhaps switch to the KinectSDK -- with the caveat that you will only be able to develop and deploy on Windows.
KinectSDK works with OpenGL and C++ too, and you can get the user's "skeleton".
OpenNI -- which is multiplatform and free as in freedom -- also supports skeleton tracking, but I haven't used it so I can't recommend it.
After you have some sort of skeleton tracking up, you can focus on the user's hands and process their movements to get your "mouse clicks" working. This will not use GLUT's mouse handlers, though.

2D platform game camera in C++

I was wondering how exactly cameras are programmed in a 2D platform game. How is it programmed to render only what's in the view of the camera, without rendering the whole map? Also, what's the proper way to do this?
Lazy Foo has some good tutorials on this subject and more: http://lazyfoo.net/SDL_tutorials/index.php
Navigate to the scrolling tutorial; it's in C++ with SDL, but the logic should be universal.
There is no secret about it: you can simply check which tiles and which sprites are inside the rectangle that defines the screen, and only draw those.
Another trick is to make the camera always follow the player, but stop moving it when it reaches the edge of the scenario, so you do not show the scenario borders.
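Both ideas fit in a few lines. A sketch with hypothetical names (drawTile, the tile size and the map dimensions are placeholders for whatever your engine uses):
#include <algorithm>

struct Camera { int x, y, w, h; };   // world-space view rectangle

// Draw only the tiles the camera rectangle overlaps; offscreen tiles
// are never touched, so cost depends on screen size, not map size.
void drawVisibleTiles(const Camera &cam, int tileSize,
                      int mapCols, int mapRows)
{
    int firstCol = std::max(cam.x / tileSize, 0);
    int firstRow = std::max(cam.y / tileSize, 0);
    int lastCol  = std::min((cam.x + cam.w) / tileSize + 1, mapCols);
    int lastRow  = std::min((cam.y + cam.h) / tileSize + 1, mapRows);

    for (int row = firstRow; row < lastRow; ++row)
        for (int col = firstCol; col < lastCol; ++col)
        {
            // drawTile(col, row, col * tileSize - cam.x,
            //          row * tileSize - cam.y);   // tile at screen position
        }
}

// Center the camera on the player, then clamp it to the world bounds
// so the edges of the map never scroll into view.
void followPlayer(Camera &cam, int playerX, int playerY,
                  int worldW, int worldH)
{
    cam.x = std::clamp(playerX - cam.w / 2, 0, std::max(0, worldW - cam.w));
    cam.y = std::clamp(playerY - cam.h / 2, 0, std::max(0, worldH - cam.h));
}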