I want to develop a drone using Webots with Python or C++. I want to program the drone to take off automatically, follow a predetermined route (like moving in a sinusoidal pattern), and then return and land. Does anyone have experience doing this, or know of any helpful documents that could be shared with me, please?
By the way, I have started coding. For takeoff I want two propellers to rotate clockwise and the other two in the opposite direction, so I tried setting the velocity of two propellers to a negative value, as shown in the code in the attached picture. However, those two propellers start later than the other two, which affects the drone's performance. Does anyone know what the problem is?
Thank you so much!
About your propeller issue: it is probably simpler to just invert the rotation axes of two of the propellers in the model.
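As a rough starting point, here is a minimal Python controller sketch that puts all four motors into velocity-control mode and spins them from the very first simulation step, so none of them starts late. A common cause of a motor starting late or not moving is forgetting to call setPosition(float('inf')) before setVelocity. The device names below are taken from the Mavic 2 Pro model linked further down and are an assumption; adjust them to match your own robot.

```python
# Minimal Webots Python controller sketch (assumes Mavic 2 Pro-style
# propeller device names; rename them to match your own robot).
from controller import Robot

robot = Robot()
timestep = int(robot.getBasicTimeStep())

names = ["front left propeller", "front right propeller",
         "rear left propeller", "rear right propeller"]
motors = [robot.getDevice(name) for name in names]

for motor in motors:
    motor.setPosition(float('inf'))  # switch to velocity-control mode
    motor.setVelocity(0.0)           # all four motors start stopped together

takeoff_velocity = 70.0             # rad/s, purely illustrative; tune for your model
signs = [1.0, -1.0, -1.0, 1.0]      # counter-rotating pairs; drop the negation
                                    # if the rotation axes are inverted in the model
while robot.step(timestep) != -1:
    for motor, sign in zip(motors, signs):
        motor.setVelocity(sign * takeoff_velocity)
```

If the two "negative" propellers still lag behind, check that you are not mixing position control and velocity control on those motors, and that all four setVelocity calls happen in the same control step.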
You should probably have a look at the example of drone simulation already available in Webots:
https://www.youtube.com/watch?v=-hJssj_Vcw8
You can find the documentation of this drone model at:
https://www.cyberbotics.com/doc/guide/mavic-2-pro
Here is the source of the model: https://github.com/cyberbotics/webots/blob/master/projects/robots/dji/mavic/protos/Mavic2Pro.proto
And the source of the controller:
https://github.com/cyberbotics/webots/blob/master/projects/robots/dji/mavic/controllers/mavic2pro/mavic2pro.c
I did not take computer vision in college, but I'd like to give it a try.
I want to make 3D model from a set of pictures.
I know you can do it with 123D Catch or Agisoft PhotoScan, but simply doing it is not the point; the point is writing the software myself.
At first I want to do stereo image matching and then reconstruction from those two images, then multi-view image matching and reconstruction.
According to this benchmark:
http://vision.middlebury.edu/stereo/eval/
The best algorithm is TSGO.
However, I can't find any information about this TSGO algorithm. Would any of you know what TSGO stands for?
Or, if you know a better one, please share.
Thanks.
The link on the page refers to a yet-to-be-published paper:
[143] Anonymous. Accurate stereo matching by two step global optimization. ECCV 2014 submission 74.
You'll have to wait until ECCV 2014 (September 6-12, 2014) to read it.
Meanwhile, you can take a look at OpenCV. It implements several stereo algorithms and can help you get up and running with the setup. Once you write your own implementation, you can contribute it back to the community via OpenCV.
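For example, here is a minimal disparity-map sketch using OpenCV's built-in semi-global matcher (Python bindings shown; the parameter values are only illustrative and need tuning for your image pair):

```python
# Minimal stereo-matching sketch: disparity map from a rectified
# left/right pair using OpenCV's semi-global block matcher.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left view
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right view

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,  # search range; must be divisible by 16
    blockSize=9,        # matching window size
)
# compute() returns fixed-point disparities scaled by 16
disparity = matcher.compute(left, right).astype("float32") / 16.0

# Normalize for visualization and save
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```

From the disparity map and the calibrated camera geometry you can then triangulate depth, which is the reconstruction step you are after.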
The paper and code are now available at
http://www.cvc.uab.es/~mozerov/Stereo/index.htm
But it is no longer the top-performing method on the benchmark...
I'm new to Kalman tracking, so I've got no idea how to start. I have a program to detect faces; after a face has been detected, I want to send the center (x, y) of the face to the Kalman filter and draw a line showing the direction of movement. How do I start? Thanks in advance.
You will need to understand the math for formulating the problem; the link offered by William is a good place to experiment with the code. If you want to follow the math, there are a few good places to check:
http://home.hit.no/~hansha/documents/control/theory/kalmanfilter.pdf
http://www.cl.cam.ac.uk/~rmf25/papers/Understanding%20the%20Basis%20of%20the%20Kalman%20Filter.pdf
http://old.shahed.ac.ir/references/kalman_filter_notes.pdf
and of course
http://en.wikipedia.org/wiki/Kalman_filter
has some excellent references to go through. Also...
Check out the Udacity course:
https://www.udacity.com/course/cs373
It has a section on programming Kalman filters in Python.
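To make it concrete, here is a minimal sketch using OpenCV's built-in cv2.KalmanFilter with a constant-velocity model: the state is (x, y, vx, vy) and the measurement is the detected face center (x, y). The name detected_face_centers is a placeholder standing in for your own detector's per-frame output:

```python
# Minimal sketch: smooth a detected face center (x, y) with a Kalman
# filter using a constant-velocity model. State = (x, y, vx, vy).
import numpy as np
import cv2

kalman = cv2.KalmanFilter(4, 2)  # 4 state variables, 2 measured
kalman.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
kalman.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
kalman.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kalman.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

trajectory = []
for (x, y) in detected_face_centers:  # placeholder: one center per frame
    prediction = kalman.predict()      # predicted state before the measurement
    kalman.correct(np.array([[x], [y]], np.float32))
    trajectory.append((int(prediction[0, 0]), int(prediction[1, 0])))

# Drawing the trajectory as a polyline shows the direction of movement:
# cv2.polylines(frame, [np.int32(trajectory)], False, (0, 255, 0), 2)
```

The filter's velocity components (vx, vy) also give you the movement direction directly, without drawing anything.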
I am working with OpenCV these days and I am capable of doing 99% of the stuff explained in the official OpenCV tutorials. I also managed to do motion tracking manually with background subtraction, which some users claimed was impossible.
However, right now I am working on object detection, where I need to track a hand and find out whether it has moved to the left or to the right. Can this be done with the following steps (the ones I used for motion detection)? A rough sketch follows the list.
1. Grab two frames of camera video (in real time)
2. Blur them to reduce noise
3. Threshold them to find the hand (or skip this if blurring is enough)
4. Find the absolute difference between the two frames
5. Get the PSR
6. Find the pixel position of the motion
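To show what I mean, here is a rough sketch of those steps (Python for brevity; I would write the same calls in C++). I threshold after differencing here, which is the more usual order, and I left out the PSR step:

```python
# Rough sketch: frame differencing between two consecutive frames, then
# the centroid of the moving pixels tells whether the motion is on the
# left or the right half of the image.
import cv2

cap = cv2.VideoCapture(0)          # step 1: real-time camera frames
_, frame1 = cap.read()
_, frame2 = cap.read()

gray1 = cv2.GaussianBlur(cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY), (21, 21), 0)
gray2 = cv2.GaussianBlur(cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY), (21, 21), 0)  # step 2

diff = cv2.absdiff(gray1, gray2)   # step 4: absolute difference
_, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # step 3: threshold

moments = cv2.moments(mask)        # step 6: pixel position of the motion
if moments["m00"] > 0:
    cx = moments["m10"] / moments["m00"]  # x centroid of the moving pixels
    side = "left" if cx < mask.shape[1] / 2 else "right"
    print("motion on the", side)
```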
However, it seems this is not 100% the same as motion detection, because I have read about techniques such as the Kalman filter and block matching, which I did not use for motion detection. I did find this tutorial:
http://homepages.cae.wisc.edu/~ece734/project/s06/lintangwuReport.pdf
But I really need your advice. Is there any tutorial that teaches how to do this? I am interested in learning the core theory along with an OpenCV (C++) explanation.
Since I am not good at maths (I am working on it; I didn't go through university the usual way, they found me and invited me to join the final year for free because of my programming skills, so I missed the math courses), material that is full of heavy math will not work for me.
Please help. Thank you.
I was trying to get RGBDemo (mostly the reconstructor) working with two Logitech cameras as a stereo pair, but I could not figure out how to do it.
I noticed that there is an OpenCV grabber in the nestk library and that its header file is included in reconstructor.cpp. Yet when I try "rgbd-viewer --camera-id 0", it keeps looking for a Kinect.
My questions:
1. Does RGBDemo only work with the Kinect so far?
2. If RGBDemo can work with non-Kinect stereo cameras, how do I set that up?
3. If I need to write my own implementation for non-Kinect stereo cameras, any suggestions on how to start?
Thanks in advance.
If you want to do it with non-Kinect cameras, you don't even need stereo. There are algorithms now that can determine whether two images' viewpoints are sufficiently different that they can be used as if they were taken by a stereo camera. In fact, they can take images of famous places found on the internet, shot with different cameras, and reconstruct 3D models from them. I can write you a tutorial on how to get it working; I've been meaning to do so. The software is called Bundler. Along with Bundler, people often also use CMVS and PMVS: CMVS preprocesses the images for PMVS, and PMVS generates dense point clouds.
BUT! I highly recommend that you don't go this route. Because there is so much less information in 2D images, reconstructing a 3D model from them is very hard, so the pipeline ends up making a lot of mistakes or simply not working. Although Bundler and PMVS are awesome compared to previous software, the stuff you can do with the Kinect is on a whole other level.
Using a Kinect will only cost you about $80 for the Kinect off eBay (or $99 off Amazon) plus another $5 for the power adapter. So I'd highly recommend this route. The Kinect provides much more information for the algorithm to work with than 2D images do, making it much more effective, reliable, and fast. In fact, it can take hours to process images with Bundler and PMVS, whereas with the Kinect I made a model of my desk in just a few seconds! It truly rocks!
I am working on a project to stitch together around 400 high-resolution aerial images (around 36000x2600 pixels each) to create a map. I am currently using OpenCV, and so far I have obtained the match points between the images. Now I am at a loss figuring out how to get the matrix transformation between the images so I can begin the stitching process. I have absolutely no background in working with images or graphics, so this is a first for me. Can I get some advice on how to approach this?
The images I received also came with a data sheet giving the longitude, latitude, airplane wing angle, altitude, etc. of each image. I am unsure how accurate these data are, but I am wondering if I can use this information to perform the proper matrix transformation.
Thanks
Do you want to understand the math behind the process, or just have a superficial idea of what's going on and simply use it?
The usual term for image stitching is image alignment. Feed Google with it and you'll find tons of sources.
For example, here.
Best regards,
zhengtonic
In the recent OpenCV 2.3 release, they implemented a whole image-stitching pipeline. Maybe it is worth looking at.
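And if you want to compute the transformations yourself from the match points you already have: for an overlapping image pair, OpenCV can estimate a 3x3 planar homography with RANSAC, which rejects bad matches while fitting the matrix. Here is a minimal sketch, where pts_in_image_a and pts_in_image_b are hypothetical names standing in for your already-computed match coordinates:

```python
# Minimal sketch: estimate the transformation between two overlapping
# images from existing match points, then warp one into the other's frame.
import numpy as np
import cv2

pts_src = np.float32(pts_in_image_a)  # Nx2 match coordinates in image A (yours)
pts_dst = np.float32(pts_in_image_b)  # Nx2 corresponding points in image B

# RANSAC rejects outlier matches while fitting the 3x3 homography matrix.
H, inlier_mask = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, 5.0)

image_a = cv2.imread("a.png")
h, w = cv2.imread("b.png").shape[:2]
warped = cv2.warpPerspective(image_a, H, (w, h))  # image A in image B's frame
```

For near-nadir aerial images of roughly flat terrain, a homography per overlapping pair is a reasonable first model, and the longitude/latitude/altitude metadata can at least give you an initial ordering and rough placement of the 400 images before refinement.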