I'm writing C code that controls a Logitech gaming wheel using SDL. So far I have successfully implemented the code that puts the steering wheel into autocenter mode with:
SDL_HapticSetAutocenter(haptic, STRENGTH); //set autocenter
I would like to be able to use the motor of the steering wheel to rotate it to particular angle positions on demand. After checking the API documentation, I did not find a simple way to do it.
I wonder if anyone has some advice on this.
I'm trying to write an app in C++ using IAMCameraControl. I have found Set, Get, etc., but I cannot find a way to actually tell whether the camera has completed a movement (e.g. CameraControl_Pan).
My goal is to turn on autofocus, move the camera, then turn off autofocus after the camera has stopped moving and, if possible, stopped focusing.
How can I recognize rain in a camera image using OpenCV in C++?
Or if somebody sticks a sticker on the camera, how can I recognize it with OpenCV in C++?
Or if somebody throws paint at the camera, how can I detect it with OpenCV in C++?
Detect these in the camera image:
Rain
Sticker
Color
Here is an example video of the sticker case:
Camera Vision-Sticker
In the case of a sticker, you're just looking for a large dark area that doesn't change over time.
In the case of color, analyze the image's color statistics: if somebody sprays paint onto the camera (is that what you mean by "throwing color"?), some color is going to be dominant over all the others.
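A rough Python sketch of that color check (the same OpenCV calls exist in C++). The saturation cutoff, the number of hue bins, and the 50% dominance threshold are illustrative assumptions to tune on real footage, and the input file name is a placeholder:

import cv2
import numpy as np

def dominant_hue_fraction(frame_bgr):
    """Return (dominant_hue_bin, fraction_of_saturated_pixels_in_that_bin)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    sat = hsv[:, :, 1]
    mask = (sat > 60).astype(np.uint8) * 255          # ignore grey, washed-out pixels
    if cv2.countNonZero(mask) == 0:
        return None, 0.0
    hist = cv2.calcHist([hsv], [0], mask, [30], [0, 180])
    dominant = int(np.argmax(hist))
    fraction = float(hist[dominant]) / cv2.countNonZero(mask)
    return dominant, fraction

frame = cv2.imread("frame.png")                       # hypothetical input frame
hue_bin, frac = dominant_hue_fraction(frame)
if frac > 0.5:
    print(f"Possible paint on the lens: hue bin {hue_bin} covers {frac:.0%} of saturated pixels")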
You can also try to handle both cases by subtracting consecutive frames and detecting image areas that don't change over time that way.
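For example, a frame-differencing sketch in Python that flags a large dark region showing no change for a while (covering the sticker case above). The change, darkness, and area thresholds, and the camera index, are assumptions to adjust for your setup:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)                     # hypothetical camera index
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
# Start "hot" so nothing is flagged until it has been static for a while.
motion_accum = np.ones_like(prev_gray, dtype=np.float32)
ALPHA = 0.05                                  # how quickly the record forgets old changes

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    changed = (diff > 15).astype(np.float32)  # illustrative per-pixel change threshold
    motion_accum = (1 - ALPHA) * motion_accum + ALPHA * changed

    # Pixels that are dark and have shown no change for a while are
    # candidates for a sticker covering part of the lens.
    static = motion_accum < 0.01
    dark = gray < 40                          # illustrative darkness threshold
    suspect_fraction = np.count_nonzero(static & dark) / gray.size
    if suspect_fraction > 0.2:
        print("Large dark static region - possible sticker on the lens")

    prev_gray = gray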
You may want to use machine learning to find the threshold values (e.g. area size and shape properties such as width/length ratio, contiguity, etc.) used to decide when to consider something a sticker/paint rather than something else.
As for the rain, I guess there's no simple answer that can be given in a few sentences. There are some articles available on the web, though. That said, I would guess it would be simpler and cheaper to detect rain by installing an external rain sensor (like the ones that activate the wipers in a car) rather than developing your own computer vision algorithm for that purpose.
This sounds like an interesting project, where a camera can automatically detect obstruction (paint, sticker, rain). It will most likely be necessary for the camera to be mounted without obstructions at first, so that the expected image can be learned. If the usage scenario allows that, it won't be very hard. Both a sticker and paint result in strong, permanent deviations from the expected image, while rain will result in noisy images.
OpenCV with C++ or Python can help solve this kind of problem, because the complicated computer vision algorithms are already implemented there. It takes some time to get started with, but after that OpenCV is not hard.
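A minimal Python sketch of the "learn the expected image" idea (the same calls exist in C++). The reference image file, blur size, and thresholds are placeholder assumptions; a persistent, large deviation suggests a sticker or paint, while a fluctuating one suggests rain or noise:

import cv2
import numpy as np

# "Expected" view captured once, with the camera mounted and unobstructed.
reference = cv2.imread("clean_view.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file
reference = cv2.GaussianBlur(reference, (5, 5), 0)

def deviation_from_expected(frame_bgr, reference, diff_thresh=30):
    """Fraction of pixels that deviate strongly from the learned reference view."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)          # tolerate a little sensor noise
    diff = cv2.absdiff(gray, reference)
    return np.count_nonzero(diff > diff_thresh) / diff.size

frame = cv2.imread("current_frame.png")               # hypothetical current frame
print(f"{deviation_from_expected(frame, reference):.0%} of pixels deviate from the clean view")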
I am working on a navigation project for blind people. For this I need to detect right and left arrows using OpenCV and Python. Can anyone help with the procedure or with sample code? I am pretty much new to OpenCV.
I have to detect the following shape:
This will be in a live environment, i.e. the arrow will be printed on a piece of A4 paper and hung up in a corridor. The camera that needs to detect the image will most likely be a bit shaky, so I presume there will be some deformation of the image, and lighting might also be an issue. Furthermore, the arrow only needs to be detected from the front, not from the side where it would appear distorted.
I am now wondering what my best approach might be to correctly detect the arrow and, with it, its direction, left or right.
You can use template matching to detect the arrow; however, if you are using a handheld camera, then to get the direction right you need to make sure that the camera pose is correct, i.e. that the camera is not rotated.
You can use a feature-based classifier with HOG features, corners, lines, etc. to build a detector and later predict the direction.
This paper presents a road-sign detection approach which is applicable in your case.
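A minimal Python sketch of the template-matching idea for the left/right decision, assuming the camera is roughly level. The image file names and the 0.7 acceptance threshold are placeholders, and a multi-scale search would make it more robust to changes in distance:

import cv2

# Hypothetical file names: a cropped picture of the printed arrow pointing right,
# and a frame from the corridor camera.
template_right = cv2.imread("arrow_right.png", cv2.IMREAD_GRAYSCALE)
template_left = cv2.flip(template_right, 1)           # mirror it to get the left arrow
frame = cv2.imread("corridor_frame.png", cv2.IMREAD_GRAYSCALE)

def best_match(image, template):
    """Return the highest normalized correlation score and its location."""
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_val, max_loc

score_r, loc_r = best_match(frame, template_right)
score_l, loc_l = best_match(frame, template_left)

# Illustrative acceptance threshold; tune it for your lighting and camera.
if max(score_r, score_l) > 0.7:
    direction = "right" if score_r > score_l else "left"
    print(f"Arrow detected pointing {direction} (score {max(score_r, score_l):.2f})")
else:
    print("No confident arrow detection in this frame")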
I'm using pygame with a joystick controller. The joystick controller is not calibrated correctly, though. The right stick's horizontal axis continually outputs bad values and does not zero correctly upon return to the center position. Is this purely a hardware issue, or is there a method of calibration/continual calibration using pygame or another library?
I had to calibrate my joystick by using software provided when I bought the joystick. I couldn't find a way to do so with pygame.
Are the outputs actually incorrect, though? My outputs are never zeroed either, but my program functions properly when using them.
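If the drift is a small, fairly constant offset, one software-side workaround is to measure the rest value yourself and apply a dead zone in your own code. A minimal pygame sketch; the axis index 2 for the right stick's horizontal axis and the dead-zone width are assumptions to adjust for your controller:

import pygame

RIGHT_X_AXIS = 2        # assumed index of the right stick's horizontal axis
DEAD_ZONE = 0.15        # readings this close to rest are treated as zero

pygame.init()
pygame.joystick.init()
joystick = pygame.joystick.Joystick(0)
joystick.init()

# Sample the axis once while the stick is untouched to learn its rest offset.
pygame.event.pump()
rest = joystick.get_axis(RIGHT_X_AXIS)

def calibrated(value, rest=rest, dead_zone=DEAD_ZONE):
    """Subtract the rest offset and clamp small readings to zero."""
    value -= rest
    if abs(value) < dead_zone:
        return 0.0
    sign = 1.0 if value > 0 else -1.0
    # Rescale so the output still reaches +/-1 at full deflection.
    return sign * min((abs(value) - dead_zone) / (1.0 - dead_zone), 1.0)

while True:
    pygame.event.pump()
    print(calibrated(joystick.get_axis(RIGHT_X_AXIS)))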
I'm trying to build a simple traffic motion monitor to estimate the average speed of moving vehicles, and I'm looking for guidance on how to do so using an open-source package like OpenCV or another that you might recommend for this purpose. Are there any resources that are particularly good for this problem?
The setup I'm hoping for is to install a webcam on a high-rise building next to the road in question, and point the camera down onto moving traffic. Camera altitude would be anywhere between 20 ft and 100 ft, and the building would be anywhere between 20 ft and 500 ft away from the road.
Thanks for your input!
Generally speaking, you need a way to detect cars so you can get their 2D coordinates in the video frame. You might want to use a tracker to speed up the process and take advantage of the predictable motion of the vehicles. You also need a way to calibrate the camera so you can translate 2D image coordinates into real-world distances and thereby approximate speed.
So, as a first step, look at detectors such as the deformable parts model (DPM) and tracking-by-detection methods. You'll probably need to port some code from Matlab (and if you do, please make it available :-) ). If that's too slow, maybe do some segmentation of foreground blobs and track the colour histogram or HOG descriptors, using a particle filter or a Kalman filter to predict motion.
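A rough Python/OpenCV sketch of that fallback pipeline (foreground blobs plus simple centroid matching). The METERS_PER_PIXEL constant stands in for proper camera calibration, the video file name is a placeholder, and the nearest-centroid matching is a stand-in for a real Kalman or particle filter tracker:

import cv2
import numpy as np

METERS_PER_PIXEL = 0.05                          # placeholder: derive from real calibration
cap = cv2.VideoCapture("traffic.mp4")            # hypothetical video file
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
prev_centroids = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)
    mask = cv2.medianBlur(mask, 5)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels

    # OpenCV 4 return signature (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) < 500:                             # ignore small blobs
            continue
        m = cv2.moments(c)
        centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

    # Naive nearest-centroid matching between frames; a Kalman or particle
    # filter would be more robust to crossings and missed detections.
    for cx, cy in centroids:
        if not prev_centroids:
            break
        px, py = min(prev_centroids, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
        pixels_moved = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
        speed_kmh = pixels_moved * METERS_PER_PIXEL * fps * 3.6
        print(f"blob at ({cx:.0f}, {cy:.0f}): ~{speed_kmh:.1f} km/h")

    prev_centroids = centroids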