PVLIB: tracking system in southern hemisphere - pvlib

How should a tracking system that rotates from East to West, installed in the Southern Hemisphere, be configured? (tilt = 0)
The default function call is:
pvlib.tracking.SingleAxisTracker(axis_tilt=0, axis_azimuth=0,
max_angle=90, backtrack=True, gcr=2.0/7.0)
Thanks

Related

Body tracking has low precision on Azure Kinect

I am currently working on a project where the Azure Kinect is meant to be used as an interaction method for a 360° Screen room.
When using the Body Tracking Viewer from Microsoft the camera recognizes me well and precisely but in my application the calculated distance (using distanceTo from each vector) between my handtip joint and thumb joint is all over the place. Without moving my hand it will go from 10mm to 120mm.
I absolutely need this to be a bit more precise to be able to select things with the hand.
I suppose I have a problem in my camera startup configuration which causes it to be less precise than the Body Tracking Viewer, but I don't know where to look.
As I wasn't sure whether the default tracker config uses the lite dnn_model_2_0 or the "big" one, I tried setting it manually with tracker_config.model_path = "dnn_model_2_0_op11.onnx"; but with the same result. Going to the near field of view instead of wide does help a little bit, but it stays very janky. Especially the left hand is all over the place; the right one is almost precise.
Here is the startup/config from my cpp project:
[...]
MQTTClient client;
MQTTClient_connectOptions conn_opts = MQTTClient_connectOptions_initializer;
MQTTClient_message pubmsg = MQTTClient_message_initializer;
int main()
{
k4a_device_t device = NULL;
VERIFY(k4a_device_open(0, &device), "\nOpen K4A Device failed! Is the camera connected?");
// Start camera. Make sure depth camera is enabled.
k4a_device_configuration_t deviceConfig = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
deviceConfig.depth_mode = K4A_DEPTH_MODE_WFOV_UNBINNED;
deviceConfig.color_resolution = K4A_COLOR_RESOLUTION_OFF;
deviceConfig.camera_fps = K4A_FRAMES_PER_SECOND_15;
VERIFY(k4a_device_start_cameras(device, &deviceConfig), "\nStart K4A cameras failed!");
k4a_calibration_t sensor_calibration;
VERIFY(k4a_device_get_calibration(device, deviceConfig.depth_mode, deviceConfig.color_resolution, &sensor_calibration),
"Get depth camera calibration failed!");
k4abt_tracker_t tracker = NULL;
k4abt_tracker_configuration_t tracker_config = K4ABT_TRACKER_CONFIG_DEFAULT;
VERIFY(k4abt_tracker_create(&sensor_calibration, tracker_config, &tracker), "\nBody tracker initialization failed!");
[...]
}
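For reference, here is a rough sketch of how the handtip-to-thumb distance described above could be read from a body-tracking result. The enqueue/pop boilerplate is omitted and the snippet is untested; the k4abt calls and joint names are taken from the Azure Kinect Body Tracking SDK headers.
// Sketch (untested): distance in millimetres between the right handtip and thumb joints.
// Assumes a k4abt_frame_t 'body_frame' with at least one body has already been popped.
k4abt_skeleton_t skeleton;
VERIFY(k4abt_frame_get_body_skeleton(body_frame, 0, &skeleton), "Get body skeleton failed!");
k4a_float3_t tip = skeleton.joints[K4ABT_JOINT_HANDTIP_RIGHT].position; // positions are in mm
k4a_float3_t thumb = skeleton.joints[K4ABT_JOINT_THUMB_RIGHT].position;
float dx = tip.xyz.x - thumb.xyz.x;
float dy = tip.xyz.y - thumb.xyz.y;
float dz = tip.xyz.z - thumb.xyz.z;
float dist_mm = sqrtf(dx * dx + dy * dy + dz * dz); // requires <math.h>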

How to manually set the white balance of a uEye camera?

How can I programmatically set the white balance of a uEye USB camera (from the manufacturer IDS) to work with no automatic white balance and pre-defined multipliers, when the is_SetWhiteBalanceMultipliers() function is obsolete?
Some background: I work with a uEye USB2 camera (from IDS) connected to a Linux machine. I need to get an RGB image with pre-defined colors (of course on a pre-defined scene) from the camera. For example, I want to configure the WB with a red multiplier of 1.25, green of 1.0, and blue of 2.0.
For this task, I am using the uEye SDK on Linux (header file ueye.h).
The manual (A: Camera basics > Camera parameters) states that the is_SetWhiteBalanceMultipliers() function is obsolete and suggests using the is_SetAutoParameter() function instead. It was easy to disable the auto white balance (is_SetAutoParameter(hCam, IS_SET_ENABLE_AUTO_WHITEBALANCE, 0, 0)), but I struggle to find a way to configure the red/green/blue multipliers. The parameters IS_SET_AUTO_WB_OFFSET and IS_SET_AUTO_WB_GAIN_RANGE work only when the automatic white balance is engaged and do nothing when it is disabled. I will be grateful for any suggestions!
I had the same issue. I think you can achieve the old result using the function "is_SetHardwareGain", to which you directly pass the master, red, green and blue gains. In my case I disabled the white balance before doing it, just to make sure it works. In this example, I wanted to set the RGB gains to [8%, 0%, 32%] and the master gain to 0% (not to be confused with gain factors: 0% normally corresponds to a 1x gain factor):
double param1, param2; param1=0;
is_SetColorCorrection (hCam, IS_CCOR_DISABLE, &param1); // Disables the color filter correction matrix
flagIDS = is_SetAutoParameter (hCam, IS_SET_ENABLE_AUTO_WHITEBALANCE, &param1, &param2);
param1=WB_MODE_DISABLE;
flagIDS = is_SetAutoParameter (hCam, IS_SET_ENABLE_AUTO_SENSOR_WHITEBALANCE, &param1, &param2);
flagIDS = is_SetHardwareGain (hCam, 0, 8, 0, 32);
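If your camera and SDK version support gain factors, the question's 1.25 / 1.0 / 2.0 multipliers might also be set more directly with is_SetHWGainFactor(), where a value of 100 corresponds to a factor of 1.0. This is only a sketch based on the SDK documentation, not tested on hardware:
// Sketch (untested): express the white-balance multipliers as hardware gain factors,
// where 100 corresponds to a 1.0x gain factor.
INT ret;
ret = is_SetHWGainFactor(hCam, IS_SET_RED_GAIN_FACTOR, 125);   // red   * 1.25
ret = is_SetHWGainFactor(hCam, IS_SET_GREEN_GAIN_FACTOR, 100); // green * 1.00
ret = is_SetHWGainFactor(hCam, IS_SET_BLUE_GAIN_FACTOR, 200);  // blue  * 2.00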

Using RPA shapefile to select Sentinel-2 data

I have a shapefile, which came from the Rural Payments Agency, containing the complete set of fields for our farm. I would like to use it to find the bounding box for the download. The shapefile is encoded (according to the .prj file) using the British National Grid, GCS_OSGB_1936.
Having downloaded the relevant bit of the sensor data (I am interested in the Sentinel-2 visible and near infra-red so that I can do NDVI and EVI displays) I then want to clip the images to fit the fields.
I tried loading the sensor data and the shapefile, but I must have got the coordinate systems wrong because although I can zoom in on the farm on the sensor data, if I zoom in on the shapefile the sensor data is nowhere to be found.
Any pointers?
Try changing the CRS of the vector file. Import it in QGIS > Right Click > Set Layer CRS > Select the CRS. Hope this helps!
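If you would rather compute the bounding box programmatically, a rough GDAL/OGR C++ sketch along these lines could reproject the shapefile extent from British National Grid (EPSG:27700) to WGS84 for the Sentinel-2 search. The file name is a placeholder, and on GDAL 3 the axis order returned for EPSG:4326 depends on the axis-mapping strategy, so treat this as illustrative only:
// Sketch: shapefile extent reprojected from EPSG:27700 to EPSG:4326 (body goes inside main()).
#include "gdal_priv.h"
#include "ogrsf_frmts.h"
GDALAllRegister();
GDALDataset* ds = static_cast<GDALDataset*>(
    GDALOpenEx("fields.shp", GDAL_OF_VECTOR, nullptr, nullptr, nullptr)); // placeholder path
OGRLayer* layer = ds->GetLayer(0);
OGREnvelope env;
layer->GetExtent(&env); // extent in British National Grid coordinates
OGRSpatialReference bng, wgs84;
bng.importFromEPSG(27700);
wgs84.importFromEPSG(4326);
OGRCoordinateTransformation* ct = OGRCreateCoordinateTransformation(&bng, &wgs84);
double x[2] = { env.MinX, env.MaxX };
double y[2] = { env.MinY, env.MaxY };
ct->Transform(2, x, y); // the two corner points, reprojected in place
GDALClose(ds);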

Point/Area of gaze with C++ and Opencv

I made a C++ program using OpenCV that uses my webcam to recognize my face and my eyes. I would then like to determine the center of my pupils and then the point or area of gaze on my screen. Does anybody know how to do that? Please note that my program uses a simple computer webcam.
Thank you in advance for your advice.
I think my Optimeyes project here:
https://github.com/LukeAllen/optimeyes
does what you're looking for: pupil detection and gaze tracking. Its included "Theory Paper" pdf discusses the principles of operation, and has references to other papers. The project was written using the Python version of OpenCV but you're welcome to port it to C++!
In case you are looking to identify the Point of Gaze on your laptop screen, below is a method you can use:
Using shape_predictor_68_face_landmarks.dat, get the eye landmarks (six points per eye)
Calculate the center of the eye (Ex, Ey) from the eye landmarks
Get the iris center (Ix, Iy) from the answer above or from the Hough Circle Transform (HCT)
Calculate the eye-region size and the scaling factors:
W(eye) = Toprightcorner(x) - Topleftcorner(x)
H(eye) = Bottomleftcorner(y) - Topleftcorner(y)
R(x) = W(Screen)/W(eye)
R(y) = H(Screen)/H(eye)
POG(x) = (W(Screen)/2) + (R(x) * r(x))
POG(y) = (H(Screen)/2) + (R(y) * r(y))
r(x) and r(y) indicate the distance of the iris center from the eye center, calculated as follows:
r(x) = COI(x) - COE(x)
r(y) = COI(y) - COE(y)
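For concreteness, here is a rough C++/OpenCV sketch of the mapping above. It is illustrative only and assumes the eye-corner landmarks, eye center and iris center have already been detected:
// Sketch: map the iris offset inside the eye region to a point of gaze on the screen.
#include <cmath>
#include <opencv2/core.hpp>
cv::Point2f pointOfGaze(cv::Point2f eyeTopLeft, cv::Point2f eyeTopRight, cv::Point2f eyeBottomLeft,
                        cv::Point2f eyeCenter,  // (Ex, Ey) from the landmarks
                        cv::Point2f irisCenter, // (Ix, Iy) from HCT or another detector
                        float screenW, float screenH) // screen size in pixels
{
    float eyeW = std::fabs(eyeTopRight.x - eyeTopLeft.x);   // W(eye)
    float eyeH = std::fabs(eyeBottomLeft.y - eyeTopLeft.y); // H(eye)
    float Rx = screenW / eyeW;                              // R(x)
    float Ry = screenH / eyeH;                              // R(y)
    float rx = irisCenter.x - eyeCenter.x;                  // r(x)
    float ry = irisCenter.y - eyeCenter.y;                  // r(y)
    return cv::Point2f(screenW / 2.0f + Rx * rx,            // POG(x)
                       screenH / 2.0f + Ry * ry);           // POG(y)
}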
Hope this helps!!

Bullet Physics: Body moves after fall (shakes and moves to the side)

I have a problem that I've been struggling to solve for a few days.
I'm trying to make a bowling game using Bullet Physics, but the pin shakes, jiggles and moves to the side after I position it and it falls to the floor.
Here is a GIF of what happens:
http://imgur.com/7Mg41sf
Here is how I create a Pin:
btCollisionShape* shape = createShape(pinVertices);
btScalar bodyMass = 1.6f;
btVector3 bodyInertia(0,0,0);
shape->calculateLocalInertia(bodyMass, bodyInertia);
btRigidBody::btRigidBodyConstructionInfo bodyCI = btRigidBody::btRigidBodyConstructionInfo(bodyMass, nullptr, shape, bodyInertia);
bodyCI.m_restitution = 0.7;
bodyCI.m_friction = 0.9f;
_physicsBody = std::unique_ptr<btRigidBody>(new btRigidBody(bodyCI));
_physicsBody->setUserPointer(this);
And here is how I create a floor:
btCollisionShape* shape = createShape(laneVertices);
btScalar bodyMass = 0.0f;
btVector3 bodyInertia(0,0,0);
shape->calculateLocalInertia(bodyMass, bodyInertia);
btRigidBody::btRigidBodyConstructionInfo bodyCI = btRigidBody::btRigidBodyConstructionInfo(bodyMass, nullptr, shape, bodyInertia);
bodyCI.m_restitution = 0.6;
bodyCI.m_friction = 0.5;
_physicsBody = std::unique_ptr<btRigidBody>(new btRigidBody(bodyCI));
_physicsBody->setUserPointer(this);
Right now the floor is a btBoxShape and a pin is a btConvexHullShape, but I've tried using cylinder or cone and they also slide.
I've been struggling for a few days, especially since the Bullet Physics website and forum are down.
Looks entirely reasonable to me. A rigid body isn't exactly going to bounce back up, nor is it going to shatter.
You have further issues with the imperfect approximation of reality. The bottom of your pin is probably flat, which means it theoretically hits the floor instantly over multiple points. Furthermore, due to the limited FP accuracy, the pin won't be exactly round, but then that part is realistic.
So the horizontal movements are probably because the small bit of freefall introduced a minor deviation from pure vertical fall. When hitting the ground this component wasn't cancelled, but the friction on moving did eventually bring the pin to a halt. Since the pin had only a small horizontal speed, the friction was not enough to topple the pin.
Perhaps you should set the restitution (bounce) of the pin and floor to something lower (try first with 0.0). This should solve it if the pin is bouncing.
Another thing you could try is to deactivate the pin after creating it. I don't know how it's done in Bullet, but in JBullet it's done like this:
body.setActivationState( CollisionObject.WANTS_DEACTIVATION );
This will stop your pin until some other object, like the ball or another pin, hits it.
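In the C++ Bullet API the equivalent call would presumably be the following (based on the activation-state constants in btCollisionObject.h; untested in this project):
// Presumed Bullet (C++) equivalent of the JBullet call above:
_physicsBody->setActivationState(WANTS_DEACTIVATION);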