DroneKit simple takeoff not working

I'm trying to issue a simple takeoff command.
Here is the code:
ControlApi.getApi(this.drone).takeoff(10, new AbstractCommandListener() {
@Override
public void onSuccess() {
}
@Override
public void onError(int executionError) {
alertUser("Error: " + executionError);
}
@Override
public void onTimeout() {
alertUser("timeout");
}
});
Although I am managing to ARM the copter, the takeoff command always returns an error, with executionError 3 or 4, and I don't know what that means.
Does anyone know what the executionError codes mean?
Or maybe know what the issue is?

[Solved].
Here are the steps that need to be taken for the code to work:
1.
VehicleApi.getApi(this.drone).arm(true, new AbstractCommandListener() {
@Override
public void onSuccess() {
}
@Override
public void onError(int executionError) {
}
@Override
public void onTimeout() {
}
});
Make sure you get a proper response in the onSuccess method.
2.
VehicleApi.getApi(drone).setVehicleMode(VehicleMode.COPTER_GUIDED);
This is where my problem was. I had somehow managed to put the copter into Guided_NoGps mode.
According to the ArduCopter documentation, that mode can be entered without a sufficient GPS satellite count.
To switch to Guided mode, however, you need a GPS 3D fix.
Make sure you have more than 9 stable satellite locks or this code just won't work (see the GPS-check sketch after these steps).
3.
Run the code below
ControlApi.getApi(this.drone).takeoff(10, new AbstractCommandListener() {
@Override
public void onSuccess() {
}
@Override
public void onError(int executionError) {
alertUser("Error: " + executionError);
}
@Override
public void onTimeout() {
alertUser("timeout");
}
});
I have tested this on a real quad-copter based on a PX4 controller.
You also need ArduCopter version 3.4 (or above).
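For step 2, it can help to verify the GPS state programmatically before switching modes. Below is a minimal sketch of that check using the DroneKit-Android client library; it assumes the Gps attribute exposes getFixType() (with 3 meaning a 3D fix, per the MAVLink convention) and getSatellitesCount(), so verify those names against the library version you are using.
import com.o3dr.android.client.apis.VehicleApi;
import com.o3dr.services.android.lib.drone.attribute.AttributeType;
import com.o3dr.services.android.lib.drone.property.Gps;
import com.o3dr.services.android.lib.drone.property.VehicleMode;

// Sketch: only switch to GUIDED once the GPS reports a 3D fix and enough satellites.
private void switchToGuidedWhenGpsReady() {
    Gps gps = this.drone.getAttribute(AttributeType.GPS);
    boolean has3dFix = gps != null && gps.getFixType() >= 3;        // 3 = 3D fix
    boolean enoughSats = gps != null && gps.getSatellitesCount() > 9;
    if (has3dFix && enoughSats) {
        VehicleApi.getApi(this.drone).setVehicleMode(VehicleMode.COPTER_GUIDED);
    } else {
        alertUser("Waiting for GPS fix/satellites before switching to Guided mode");
    }
}
Call this after the arm() listener reports onSuccess, and only run the takeoff from step 3 once the mode change has gone through.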

Related

osmdroid: Tile server which allows bulk download

I am working on an application for which I need to be able to use map data while offline. I have looked up how to use the CacheManager and how to download tiles, referencing SampleCacheDownloader.java.
I am using the following code to download the tiles:
CacheManager cMgr = new CacheManager(map);
BoundingBox bBox = cMgr.extendedBoundsFromGeoPoints(currentGeoPoints, 16);
cMgr.downloadAreaAsync(this, bBox, 16, 21, new CacheManager.CacheManagerCallback() {
@Override
public void onTaskComplete() {
}
@Override
public void updateProgress(int progress, int currentZoomLevel, int zoomMin, int zoomMax) {
}
@Override
public void downloadStarted() {
}
@Override
public void setPossibleTilesInArea(int total) {
}
@Override
public void onTaskFailed(int errors) {
}
});
The code works, in the sense that it starts the download. But all of the tile servers included in the TileSourceFactory class either throw the TileSourcePolicyException "This online tile source doesn't support bulk download" (e.g. MAPNIK and WIKIMEDIA) or fail to get the tiles because they don't exist for the set zoom levels (e.g. USGS_SAT and USGS_TOPO).
So my question is: are there other publicly available tile servers, or do I have to set up my own to be able to download tiles in bulk? Or do I have the completely wrong approach to caching the tiles included in a bounding box with set zoom levels?
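If you do end up pointing osmdroid at your own tile server (or any server whose terms of use permit bulk download), you can declare a custom tile source roughly as in the sketch below. This assumes osmdroid 6.1+, where XYTileSource takes a TileSourcePolicy; the URL tiles.example.com is a placeholder, and the constructor and flag names should be checked against the osmdroid version in use.
import org.osmdroid.tileprovider.tilesource.TileSourcePolicy;
import org.osmdroid.tileprovider.tilesource.XYTileSource;

// A tile source that does NOT set TileSourcePolicy.FLAG_NO_BULK, so the
// CacheManager is allowed to bulk-download from it.
final XYTileSource bulkFriendlySource = new XYTileSource(
        "MyTiles",                                      // cache/source name
        0, 21,                                          // min/max zoom
        256, ".png",                                    // tile size, file extension
        new String[] { "https://tiles.example.com/" },  // placeholder base URL
        "© your attribution here",
        new TileSourcePolicy(2, TileSourcePolicy.FLAG_USER_AGENT_MEANINGFUL));

map.setTileSource(bulkFriendlySource);
// CacheManager cMgr = new CacheManager(map);  // then download as in the code above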

Google Maps two finger pinch/stretch callbacks

In my map, I am trying to capture zoom in/out using ScaleGestureDetector, but I never receive any callbacks to onScale, onScaleBegin, or onScaleEnd.
In my Fragment's onCreateView, I initialize:
scaleGestureDetector = new ScaleGestureDetector(getContext(), new simpleOnScaleGestureListener());
And I implement the callbacks like so:
public class simpleOnScaleGestureListener extends
SimpleOnScaleGestureListener {
@Override
public boolean onScale(ScaleGestureDetector detector) {
// TODO Auto-generated method stub
startScale = detector.getScaleFactor();
Log.d(TAG, "::onScale:: " + detector.getScaleFactor());
return true;
}
@Override
public boolean onScaleBegin(ScaleGestureDetector detector) {
// TODO Auto-generated method stub
Log.d(TAG, "::onScaleBegin:: " + detector.getScaleFactor());
return true;
}
@Override
public void onScaleEnd(ScaleGestureDetector detector) {
// TODO Auto-generated method stub
Log.d(TAG, "::onScaleEnd:: " + detector.getScaleFactor());
endScale = detector.getScaleFactor();
}
}
Also, is it fair to assume that the callbacks will be called continuously whenever the user zooms in/out?
I was able to get past the issue of not getting callbacks. Essentially, two things:
In your activity/fragment, implement GoogleMap.OnCameraIdleListener.
In onMapReady(), call mMap.setOnCameraIdleListener(this);
Then override onCameraIdle():
@Override
public void onCameraIdle() {
Log.i(TAG, "::onCameraIdle::" + mMap.getCameraPosition().toString());
}
This gives you lat/long, zoom, tilt and bearing, essentially the CameraPosition.
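Regarding the question above about continuous callbacks: onCameraIdle() only fires once the camera stops moving. If you also want updates while the user is still pinching, the map exposes a camera-move listener as well; a minimal sketch, assuming the standard Google Maps Android API (Play services 9.4 or later):
// In onMapReady(), alongside the idle listener:
mMap.setOnCameraMoveListener(new GoogleMap.OnCameraMoveListener() {
    @Override
    public void onCameraMove() {
        // Fires repeatedly while the camera is moving (including pinch zoom),
        // so keep this lightweight.
        Log.d(TAG, "::onCameraMove:: zoom=" + mMap.getCameraPosition().zoom);
    }
});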
I found a way to get the visible radius in meters by referring to this response:
VisibleRegion vr = map.getProjection().getVisibleRegion();
Location center = new Location("center");
center.setLatitude(vr.latLngBounds.getCenter().latitude);
center.setLongitude(vr.latLngBounds.getCenter().longitude);
//Location("radiusLatLng") as mentioned in google maps sample
Location farVisiblePoint = new Location("radiusLatLng");
farVisiblePoint.setLatitude(vr.farLeft.latitude);
farVisiblePoint.setLongitude(vr.farLeft.longitude);
radius = center.distanceTo(farVisiblePoint);

JavaFX - using toggle isSelected() in conditional statements

I'm building a simple sketch program using JavaFX. I want the user to be able to switch between drawing a rectangle, circle or line and I've put toggle radio buttons in the menu for these options.
Is it possible to write an if/else statement so that I can run different drawing code depending on which toggle is selected? At the moment it will only draw lines. This is part of my code so far (sorry it's messy):
package Sketchbook;
public class Sketchbook extends Application {
final static int CANVAS_WIDTH = 800;
final static int CANVAS_HEIGHT = 600;
ColorPicker colorPicker1;
ColorPicker colorPicker2;
@Override
public void start(final Stage primaryStage) {
final Canvas canvas = new Canvas(CANVAS_WIDTH, CANVAS_HEIGHT);
final GraphicsContext graphicsContext = canvas.getGraphicsContext2D();
initDraw(graphicsContext);
canvas.addEventHandler(MouseEvent.MOUSE_PRESSED,
new EventHandler<MouseEvent>(){
@Override
public void handle(MouseEvent event) {
graphicsContext.beginPath();
graphicsContext.moveTo(event.getX(), event.getY());
graphicsContext.setStroke(colorPicker1.getValue());
graphicsContext.stroke();
}
});
canvas.addEventHandler(MouseEvent.MOUSE_DRAGGED,
new EventHandler<MouseEvent>(){
@Override
public void handle(MouseEvent event) {
graphicsContext.lineTo(event.getX(), event.getY());
graphicsContext.setStroke(colorPicker1.getValue());
graphicsContext.stroke();
}
});
canvas.addEventHandler(MouseEvent.MOUSE_RELEASED,
new EventHandler<MouseEvent>(){
@Override
public void handle(MouseEvent event) {
}
});
Group root = new Group();
ToggleGroup toggleGroup = new ToggleGroup();
RadioButton rectangle = new RadioButton("Rectangle");
RadioButton circle = new RadioButton("Circle");
RadioButton line = new RadioButton("Line");
rectangle.setSelected(true);
rectangle.setToggleGroup(toggleGroup);
circle.setToggleGroup(toggleGroup);
line.setToggleGroup(toggleGroup);
You've pretty much already described what you need to do:
canvas.addEventHandler(MouseEvent.MOUSE_DRAGGED,
new EventHandler<MouseEvent>(){
@Override
public void handle(MouseEvent event) {
if (toggleGroup.getSelectedToggle() == line) {
graphicsContext.lineTo(event.getX(), event.getY());
graphicsContext.setStroke(colorPicker1.getValue());
graphicsContext.stroke();
} else if (toggleGroup.getSelectedToggle() == rectangle) {
// etc...
} // etc...
}
}
});
Obviously you may need to reorder the code a little to make sure variables are declared and initialized before you use them.
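To make the other branches concrete, here is a minimal sketch of one way to do it: remember the press point in MOUSE_PRESSED, then draw the selected shape on MOUSE_RELEASED with strokeRect/strokeOval/strokeLine. The startX/startY fields are assumptions introduced for this sketch (declare them on the class so the handlers can assign them), not part of the original code.
// private double startX, startY;   // fields on the Sketchbook class

canvas.addEventHandler(MouseEvent.MOUSE_PRESSED, event -> {
    startX = event.getX();
    startY = event.getY();
});

canvas.addEventHandler(MouseEvent.MOUSE_RELEASED, event -> {
    graphicsContext.setStroke(colorPicker1.getValue());
    double x = Math.min(startX, event.getX());
    double y = Math.min(startY, event.getY());
    double w = Math.abs(event.getX() - startX);
    double h = Math.abs(event.getY() - startY);
    if (toggleGroup.getSelectedToggle() == rectangle) {
        graphicsContext.strokeRect(x, y, w, h);        // outlined rectangle
    } else if (toggleGroup.getSelectedToggle() == circle) {
        graphicsContext.strokeOval(x, y, w, h);        // ellipse bounded by the drag
    } else {
        graphicsContext.strokeLine(startX, startY, event.getX(), event.getY());
    }
});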

Glass - Slow camera / FileObserver notification - XE12 - using ACTION_IMAGE_CAPTURE

I have basically implemented the new XE12/GDK2 CameraManager sample code to capture an image on application start. However, it takes anywhere from 3 to 30 seconds for the FileObserver callback to be notified of the image file creation. Taking a picture using the default 'Take a Picture' app works just fine, so I don't think it is an OS/update issue.
My app's behavior is like this:
- Take the picture
- Tap to accept
- Wait 3 to 30 seconds
- Get the callback and the imageview is updated with the captured image.
I don't think I have modified a single line of the sample code provided in the GDK 2.0 camera tutorial, so I am wondering what I am missing.
I have attached the relevant section of the code below. Any tips/pointers are highly appreciated.
@Override
protected void onStart() {
super.onStart();
Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
// String path = Environment.getExternalStorageDirectory().getPath();
if (takePictureIntent.resolveActivity(getPackageManager()) != null) {
startActivityForResult(takePictureIntent, REQUEST_IMAGE_CAPTURE);
}
}
private void processPictureWhenReady(final String picturePath) {
final File pictureFile = new File(picturePath);
if (pictureFile.exists()) {
// The picture is ready; process it. Takes 3-30 seconds to get here!
try {
Bitmap imageBitmap = BitmapFactory.decodeFile(picturePath);
int w = imageBitmap.getWidth();
int h = imageBitmap.getHeight();
Bitmap bm2 = Bitmap.createScaledBitmap(imageBitmap, w/2, h/2, true);
imageBitmap = bm2.copy(bm2.getConfig(), true);
//m_ImageView.setImageBitmap(bm2);
} catch (Exception e) {
Log.e("Exc", e.getMessage());
}
} else {
tm = System.currentTimeMillis();
// The file does not exist yet. Before starting the file observer, you
// can update your UI to let the user know that the application is
// waiting for the picture (for example, by displaying the thumbnail
// image and a progress indicator).
final File parentDirectory = pictureFile.getParentFile();
FileObserver observer = new FileObserver(parentDirectory.getPath()) {
// Protect against additional pending events after CLOSE_WRITE is
// handled.
private boolean isFileWritten;
@Override
public void onEvent(int event, String path) {
if (!isFileWritten) {
// For safety, make sure that the file that was created in
// the directory is actually the one that we're expecting.
File affectedFile = new File(parentDirectory, path);
isFileWritten = (event == FileObserver.CLOSE_WRITE
&& affectedFile.equals(pictureFile));
if (isFileWritten) {
stopWatching();
// Now that the file is ready, recursively call
// processPictureWhenReady again (on the UI thread).
runOnUiThread(new Runnable() {
@Override
public void run() {
processPictureWhenReady(picturePath);
}
});
}
}
}
};
observer.startWatching();
}
}
Answering my own question - though I got the clarifications from Jenny Murphy and John Feig :-). Hopefully it helps others.
To the first point, why image capture using the sample code from the GDK guide is so slow:
This is the expected behavior. The Glass camera intent (ACTION_IMAGE_CAPTURE) performs a ton of proprietary post-processing on the captured image (auto-HDR etc.), which takes time. This is cleverly disguised in the 'Take a picture' command by only displaying the preview image, which is available immediately. As proof, try to find the image you just took in your timeline; you will not see it for several seconds (around 8 seconds on average in my experience).
Frankly, unless you are OK with just grabbing the preview image, the camera intent may not be very useful in most apps.
The solution is to use the Camera directly via the standard Android APIs. For convenience, I have pasted a snippet of this code below; please excuse it if it is kind of basic for many of you. A lot of the code is copied from John Feig's GIFCamera glassware on GitHub.
The activity_main layout contains a SurfaceView called preview:
<SurfaceView
android:id="@+id/preview"
android:layout_width="500dp"
android:layout_height="500dp"
android:layout_alignParentTop="true"
android:layout_marginTop="20dp"
/>
MainActivity.java
public class MainActivity extends Activity implements PhotoCallback {
public byte[] m_jpg = null;
Camera cam = null;
SurfaceHolder m_sh;
private final SurfaceHolder.Callback mSurfaceHolderCallback = new SurfaceHolder.Callback() {
@Override
public void surfaceCreated(SurfaceHolder hldr) {
m_sh = hldr;
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
myCapHandler2(); //Start Camera Preview etc.
}
};
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
SurfaceView preview = (SurfaceView) findViewById(R.id.preview);
preview.getHolder().addCallback(mSurfaceHolderCallback);
}
public void myCapHandler2() {
//open camera
try {
cam = Camera.open(0);
Camera.Parameters params = cam.getParameters();
List<Size> sizes = params.getSupportedPreviewSizes();
params.setJpegQuality(90);
params.setPreviewFpsRange(30000, 30000);
params.setPictureSize(sizes.get(1).width, sizes.get(1).height);
params.setPreviewSize(sizes.get(1).width, sizes.get(1).height);
cam.setParameters(params);
try {
cam.setPreviewDisplay(m_sh);
}
catch (IOException e) {
e.printStackTrace();
}
// Important: Call startPreview() to start updating the preview
// surface. Preview must be started before you can take a picture.
cam.startPreview();
cam.takePicture(null, null,
new PhotoHandler(this));
} catch (Exception e) {
if (null != cam) {
cam.stopPreview();
cam.release();
}
}
}
@Override
public void pictureTaken(byte[] jpg) {
m_jpg = jpg;
//Picture captured - release the camera for other apps
cam.stopPreview();
cam.release();
cam = null; // avoid releasing the camera a second time in onPause/onDestroy
}
@Override
public void onPause() {
super.onPause(); // lifecycle overrides must call through to super
if (null != cam) {
cam.stopPreview();
cam.release();
cam = null;
}
}
@Override
public void onDestroy() {
super.onDestroy();
if (null != cam) {
cam.stopPreview();
cam.release();
cam = null;
}
}
}
PhotoHandler.java
import android.hardware.Camera;
import android.os.AsyncTask;
public class PhotoHandler implements Camera.PictureCallback {
private PhotoCallback photoCallback;
public PhotoHandler(PhotoCallback photoCallback) {
super();
this.photoCallback = photoCallback;
}
@Override
public void onPictureTaken(byte[] data, Camera camera) {
new ProcessCapturedImage().execute(data);
}
private class ProcessCapturedImage extends AsyncTask<byte[], Void, byte[]> {
@Override
protected byte[] doInBackground(byte[]... params) {
if (null == params || null == params[0])
return null;
return params[0];
}
@Override
protected void onPostExecute(byte[] params) {
photoCallback.pictureTaken(params);
}
}
}
PhotoCallback.java
public interface PhotoCallback {
public void pictureTaken(byte[] jpg);
}
All the best with your camera glassware.

Google Glass GDK CameraManager Intent

Does anyone know whether, when you use the GDK CameraManager intent to take a picture, there is a way to not show the preview, or to close it automatically? I am capturing an image for use in the app and don't want to have to tap to accept.
I probably have missed something.
Thanks,
You can try this:
Intent localIntent = new Intent("com.google.glass.action.TAKE_PICTURE_FROM_SCREEN_OFF");
localIntent.putExtra("should_finish_turn_screen_off", true);
localIntent.putExtra("should_take_picture", true);
localIntent.putExtra("screenshot_file_path", pathToFile);
startActivity(localIntent);
It will close your preview automatically after a few seconds.
Try this:
private void takePicture() {
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
startActivityForResult(intent, 0);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
if (requestCode == 0 && resultCode == RESULT_OK) {
String picturePath=data.getStringExtra(CameraManager.EXTRA_PICTURE_FILE_PATH);
processPictureWhenReady(picturePath);
}
super.onActivityResult(requestCode, resultCode, data);
}
private void processPictureWhenReady(final String picturePath) {
final File pictureFile = new File(picturePath);
if (pictureFile.exists()) {
// The picture is ready; process it.
// Write your code here
} else {
final File parentDirectory = pictureFile.getParentFile();
FileObserver observer = new FileObserver(parentDirectory.getPath()) {
// Protect against additional pending events after CLOSE_WRITE is
// handled.
private boolean isFileWritten;
@Override
public void onEvent(int event, String path) {
if (!isFileWritten) {
// For safety, make sure that the file that was created in
// the directory is actually the one that we're expecting.
File affectedFile = new File(parentDirectory, path);
isFileWritten = (event == FileObserver.CLOSE_WRITE
&& affectedFile.equals(pictureFile));
if (isFileWritten) {
stopWatching();
// Now that the file is ready, recursively call
// processPictureWhenReady again (on the UI thread).
runOnUiThread(new Runnable() {
@Override
public void run() {
processPictureWhenReady(picturePath);
}
});
}
}
}};
observer.startWatching();
}
}
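One note that ties in with the answer above: the full-resolution file only appears after Glass finishes its post-processing, but the result intent also carries a thumbnail path that is available right away. If I recall the GDK correctly, CameraManager exposes an EXTRA_THUMBNAIL_FILE_PATH extra for this; treat the following as a sketch to verify against the GDK docs.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == 0 && resultCode == RESULT_OK) {
        // Assumed GDK extra: a low-resolution thumbnail written before the full
        // picture, useful for showing the user something immediately.
        String thumbnailPath = data.getStringExtra(CameraManager.EXTRA_THUMBNAIL_FILE_PATH);
        if (thumbnailPath != null) {
            // e.g. decode it and show it in an ImageView while waiting
        }
        String picturePath = data.getStringExtra(CameraManager.EXTRA_PICTURE_FILE_PATH);
        processPictureWhenReady(picturePath);
    }
    super.onActivityResult(requestCode, resultCode, data);
}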