DJI SDK: Can't start coordinated waypoint mission - C++

I am creating WaypointMission using DJI ROS SDK.
I want my vehicle to pass smoothly through the provided waypoints, so I enable coordinated mode like so:
waypoint_task.trace_mode = dji_sdk::MissionWaypointTask::TRACE_COORDINATED;
The problem is that no matter how many control points I provide, or how densely they are spaced, the SDK always responds with the error message WAYPOINT_MISSION_POINTS_NOT_ENOUGH.
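For context, this is roughly how I build the task (a simplified sketch; the field names follow my reading of dji_sdk/MissionWaypoint.msg, so double-check them against the message definitions, and control_points is just my own list of latitude/longitude/altitude values):

dji_sdk::MissionWaypointTask waypoint_task;
waypoint_task.velocity_range = 10.0;
waypoint_task.idle_velocity  = 5.0;
waypoint_task.trace_mode     = dji_sdk::MissionWaypointTask::TRACE_COORDINATED;

for (const auto& p : control_points)              // my own list of lat/lon/alt points
{
    dji_sdk::MissionWaypoint wp;
    wp.latitude         = p.lat;
    wp.longitude        = p.lon;
    wp.altitude         = p.alt;
    wp.damping_distance = 2.0;                    // metres; my guess at what coordinated turns need
    waypoint_task.mission_waypoint.push_back(wp);
}
// waypoint_task is then sent to the waypoint mission upload service.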
With TRACE_POINT the mission uploads successfully, but the stops at each waypoint are no good for filmmaking.
Also, where can I find information about the actions I can execute at waypoints using dji_sdk/MissionWaypointAction.msg?

It's been about 6 months since I used the onboard SDK, so this may have been fixed by now, but when I was using the ROS version, one of the bugs I found was that after you upload "too many" waypoints, the waypoint mission won't work. If I recall correctly, that number was around 28. So a waypoint mission with 27 waypoints would work, but one with 28 wouldn't. Also, it was cumulative, so if you ran a waypoint mission with 10 waypoints and later ran one with 18, the second would fail, and you wouldn't be able to run a waypoint mission again until you restarted.
I also tried the non-ROS version of the SDK. It worked better, but it was also buggy and hard to use, and it wouldn't allow more than 99 waypoints in a mission.

Took a Consumer Google Glass out of storage and now it won't hold a charge(?)

This Google Glass has been in its bag for a couple of years; it's the consumer version with the little nose rests and no apparent model number or identification. Here's a commercial picture of the model I have: https://images.techhive.com/images/article/2014/11/google-glass-100528641-medium.idge.jpg
I took it out and plugged it into a known-good USB cable and a known-good 5 V USB charger (that is, not the cable and charger from Google).
The light next to the switch comes on when I plug it in but turns off within 2 seconds. I left it charging overnight, and I can't get it to turn on or reset or factory reset.
I don't know for sure whether it is taking a charge and something else is wrong, or if it isn't charging due to a cable or charger incompatibility.
I've Googled this and can't find any advice other than how to turn it on (nothing happens), reset it (nothing happens), or factory reset it (nothing happens). Are there any other suggestions from people who work with these devices?
Alternatively, does anybody have the spec for a replacement battery? The tear-downs spec it at 570 mAh but don't mention the voltage.

Gauss blur of 3D image in CUDA, sometimes it works, sometimes it does not [duplicate]

I've noticed that CUDA applications tend to have a rough maximum run-time of 5-15 seconds before they fail and exit. I realize it's ideal not to have a CUDA application run that long, but assuming that CUDA is the correct choice and that, due to the amount of sequential work per thread, it must run that long, is there any way to extend this amount of time or to get around it?
I'm not a CUDA expert; I've been developing with the AMD Stream SDK, which AFAIK is roughly comparable.
You can disable the Windows watchdog timer, but that is strongly discouraged, for reasons that should be obvious.
To disable it, use regedit to create a REG_DWORD value named DisableBugCheck under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Watchdog\Display and set it to 1.
You may also need to do something in the NVidia control panel. Look for some reference to "VPU Recovery" in the CUDA docs.
Ideally, you should be able to break your kernel operations up into multiple passes over your data, so that each pass runs within the time limit.
Alternatively, you can divide the problem domain up so that it's computing fewer output pixels per command. I.e., instead of computing 1,000,000 output pixels in one fell swoop, issue 10 commands to the GPU to compute 100,000 each.
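As an illustration of that chunking idea, here is a rough sketch (the kernel body, chunk size and launch configuration are placeholders, not the actual blur):

#include <cuda_runtime.h>

// Placeholder kernel: each launch only touches [offset, offset + count) of the output.
__global__ void processChunk(const float* in, float* out, int offset, int count)
{
    int i = offset + blockIdx.x * blockDim.x + threadIdx.x;
    if (i < offset + count)
        out[i] = in[i] * 0.5f;      // stand-in for the real per-pixel work
}

// Submit the work as many small commands instead of one huge one.
void processInChunks(const float* d_in, float* d_out, int totalPixels)
{
    const int chunk = 100000;       // e.g. 10 launches of 100,000 instead of 1 of 1,000,000
    const int threads = 256;
    for (int offset = 0; offset < totalPixels; offset += chunk) {
        int count = (totalPixels - offset < chunk) ? (totalPixels - offset) : chunk;
        int blocks = (count + threads - 1) / threads;
        processChunk<<<blocks, threads>>>(d_in, d_out, offset, count);
        cudaDeviceSynchronize();    // each submitted command completes well inside the time slice
    }
}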
The basic unit that has to fit within the time slice is not your entire application, but the execution of a single command buffer. In the AMD Stream SDK, a long sequence of operations can be broken up into multiple time slices by explicitly flushing the command queue with a CtxFlush() call. Perhaps CUDA has something similar?
You should not have to read all of your data back and forth across the PCIX bus on every time slice; you can leave your textures, etc. in gpu local memory; you just have some command buffers complete occasionally, to prove to the OS that you're not stuck in an infinite loop.
Finally, GPUs are fast, so if your application is not able to do useful work in that 5 or 10 seconds, I'd take that as a sign that something is wrong.
[EDIT Mar 2010 to update:] (outdated again, see the updates below for the most recent information) The registry key above is out-of-date. I think that was the key for Windows XP 64-bit. There are new registry keys for Vista and Windows 7. You can find them here: http://www.microsoft.com/whdc/device/display/wddm_timeout.mspx
or here: http://msdn.microsoft.com/en-us/library/ee817001.aspx
[EDIT Apr 2015 to update:] This is getting really out of date. The easiest way to disable TDR for Cuda programming, assuming you have the NVIDIA Nsight tools installed, is to open the Nsight Monitor, click on "Nsight Monitor options", and under "General" set "WDDM TDR enabled" to false. This will change the registry setting for you. Close and reboot. Any change to the TDR registry setting won't take effect until you reboot.
[EDIT August 2018 to update:]
Although the NVIDIA tools allow disabling the TDR now, the same question is relevant for AMD/OpenCL developers. For those: The current link that documents the TDR settings is at https://learn.microsoft.com/en-us/windows-hardware/drivers/display/tdr-registry-keys
On Windows, the graphics driver has a watchdog timer that kills any shader programs that run for more than 5 seconds. Note that the Xorg/XFree86 drivers don't do this, so one possible workaround is to run the CUDA apps on Linux.
AFAIK it is not possible to disable the watchdog timer on Windows. The only way to get around this on Windows is to use a second card that has no displayed screens on it. It doesn't have to be a Tesla but it must have no active screens.
Resolve Timeout Detection and Recovery - WINDOWS 7 (32/64 bit)
Create a registry key in Windows to change the TDR settings to a higher amount, so that Windows will allow a longer delay before the TDR process starts.
Open Regedit from Run or DOS.
In Windows 7, navigate to the correct registry key area to create the new key: HKEY_LOCAL_MACHINE > SYSTEM > CurrentControlSet > Control > GraphicsDrivers.
There will probably be one key already in there called DxgKrnlVersion, as a DWORD.
Right-click and select to create a new REG_DWORD value, and name it TdrDelay. The value assigned to it is the number of seconds before TDR kicks in; it is currently 2 by default in Windows (even though the registry value doesn't exist until you create it). Assign it a new value (I tried 4 seconds), which doubles the time before TDR. Then restart the PC; the value won't take effect until you do.
Source: Win7 TDR (Driver Timeout Detection & Recovery)
I have also verified this and it works fine.
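If you prefer to set the same TdrDelay value from code rather than through regedit, here is a rough sketch using the Win32 registry API (it must run elevated, and the reboot is still required before the change takes effect):

#include <windows.h>
#include <cstdio>

int main()
{
    HKEY hKey = nullptr;
    LONG rc = RegCreateKeyExA(HKEY_LOCAL_MACHINE,
                              "SYSTEM\\CurrentControlSet\\Control\\GraphicsDrivers",
                              0, nullptr, 0, KEY_SET_VALUE, nullptr, &hKey, nullptr);
    if (rc != ERROR_SUCCESS) {
        printf("Could not open the GraphicsDrivers key (error %ld)\n", rc);
        return 1;
    }

    DWORD tdrDelaySeconds = 4;   // seconds before TDR kicks in (the default is 2)
    rc = RegSetValueExA(hKey, "TdrDelay", 0, REG_DWORD,
                        reinterpret_cast<const BYTE*>(&tdrDelaySeconds),
                        sizeof(tdrDelaySeconds));
    RegCloseKey(hKey);
    printf(rc == ERROR_SUCCESS ? "TdrDelay set to 4; reboot to apply.\n"
                               : "Failed to set TdrDelay.\n");
    return rc == ERROR_SUCCESS ? 0 : 1;
}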
The most basic solution is to pick a point in the calculation, some percentage of the way through, that I am sure the GPU I am working with can complete in time, save all the state information, stop, and then start again.
Update:
For Linux: exiting X will allow you to run CUDA applications for as long as you want. No Tesla is required (a 9600 was used in testing this).
One thing to note, however, is that if X is never entered, the drivers probably won't be loaded, and it won't work.
It also seems that for Linux, simply not having any X displays up at the time will also work, so X does not need to be exited as long as you switch to a non-X full-screen terminal.
This isn't possible. The time-out is there to prevent bugs in calculations from taking up the GPU for long periods of time.
If you use a dedicated card for CUDA work, the time limit is lifted. I'm not sure if this requires a Tesla card, or if a GeForce with no monitor connected can be used.
The solution I use is:
1. Pass all information to device.
2. Run iterative versions of algorithms, where each iteration invokes the kernel on the memory already stored within the device.
3. Finally transfer memory to host only after all iterations have ended.
This keeps control over the iterations on the CPU (including the option to abort), without the costly device<-->host memory transfers between iterations.
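A rough sketch of that pattern (the kernel, sizes and iteration count are placeholders): the data stays resident in device memory and the host issues one short kernel launch per iteration, so each launch finishes well within the watchdog limit.

#include <cuda_runtime.h>

// Placeholder kernel standing in for one iteration of the real algorithm.
__global__ void iterationStep(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] += 1.0f;
}

void runIterations(float* h_data, int n, int iterations)
{
    float* d_data = nullptr;
    cudaMalloc((void**)&d_data, n * sizeof(float));
    cudaMemcpy(d_data, h_data, n * sizeof(float), cudaMemcpyHostToDevice);   // 1. pass everything to the device once

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    for (int it = 0; it < iterations; ++it) {
        iterationStep<<<blocks, threads>>>(d_data, n);                       // 2. one short launch per iteration
        cudaDeviceSynchronize();    // the host regains control here, so it can also abort early
    }

    cudaMemcpy(h_data, d_data, n * sizeof(float), cudaMemcpyDeviceToHost);   // 3. copy back only after all iterations
    cudaFree(d_data);
}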
The watchdog timer only applies on GPUs with a display attached.
On Windows the timer is part of the WDDM; it is possible to modify the settings (timeout, behaviour on reaching timeout, etc.) with some registry keys. See this Microsoft article for more information.
It is possible to disable this behavior in Linux. Although the "watchdog" has an obvious purpose, it may cause some very unexpected results when doing extensive computations using shaders / CUDA.
The option can be toggled in your X-configuration (likely /etc/X11/xorg.conf)
Adding Option "Interactive" "0" to the Device section for your GPU does the job.
See CUDA Visual Profiler 'Interactive' X config option? for details on the config, and ftp://download.nvidia.com/XFree86/Linux-x86/270.41.06/README/xconfigoptions.html#Interactive for a description of the parameter.

EDSDK 3.4.0 on OS X 10.12.1 with Rebel t6i: `kEdsObjectEvent_DirItemCreated` event is not received for up to 30 seconds after photo is taken

When using EDSDK version 3.4.0 to take a photo with the Rebel T6i it can take anywhere from 2 to 30 seconds after calling EdsSendCommand(camera, kEdsCameraCommand_TakePicture, 0); for the corresponding kEdsObjectEvent_DirItemCreated to be received, signalling that the image is ready to download from the camera. Note that the camera itself takes the photo and the flash goes off almost instantly after sending the TakePicture command - it is only the kEdsObjectEvent_DirItemCreated event that is delayed for seemingly random, large amounts of time.
The delays become much longer and more frequent when connecting to a second Rebel T6i, even when only taking photos with one of the cameras. This even occurs when both cameras are run from separate applications.
We're hoping to use both of these cameras as part of an installation that requires that we're able to download each photo from the camera within at most 5 seconds of when EdsSendCommand(camera, kEdsCameraCommand_TakePicture, 0) is called.
If anyone has any ideas on why this large delay might be occurring or any other suggestions on how to fix it, we'd greatly appreciate it!
Note: we're building 64-bit at the moment, but are attempting to get a 32-bit build working to see if that improves anything.
EDSDK v3.4.0
OS X 10.12.1
64-bit
Rebel T6i
Not using live view will fix the problem. You also need to download the image directly to the computer instead of saving it to the SD card first. Unfortunately, if ANY other camera that is using live view is plugged in, you will continue to have the above problem.
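For reference, the save-to-host setup looks roughly like this (a sketch using the standard EDSDK property calls on an already-opened camera session; verify the constants against your EDSDK.h):

// Tell the camera to send images to the host instead of writing them to the SD card.
EdsUInt32 saveTo = kEdsSaveTo_Host;
EdsError err = EdsSetPropertyData(camera, kEdsPropID_SaveTo, 0,
                                  sizeof(saveTo), &saveTo);

// When saving to host, the camera waits to be told how much space is available,
// so report a large capacity before shooting.
if (err == EDS_ERR_OK) {
    EdsCapacity capacity = { 0x7FFFFFFF, 0x1000, 1 };   // free clusters, bytes per sector, reset flag
    err = EdsSetCapacity(camera, capacity);
}

// With kEdsSaveTo_Host set, kEdsObjectEvent_DirItemCreated should refer to an item
// that can be downloaded directly with EdsDownload, without touching the SD card.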

zwave/cpp - GetValue from Thermostat (Temperature and Humidity)

I know I'm asking a dumb question, but I'm quite a zwave/openzwave beginner, so I wanted to get some help on this.
My zwave network is already up, and I have two nodes:
the controller key itself, to control the other nodes
a sensor for temperature and humidity (the ST814, from Everspring)
Now, I want to display the temperature and the humidity in my console, but I'm not really understanding how it works. From what I understood, I need to configure the auto-report of my sensor (doc is here, see page 6), and get the notifications every X minutes, but I'm not sure.
Has someone already done that, or does anyone know how to do it?
Thank you a lot,
Maxime
Imagine there's a room full of people from Sweden, and they're all talking to each other in Swedish. Even though you can hear what they're saying, it doesn't mean anything to you because you don't speak Swedish. If you had the ability to speak Swedish, you would understand exactly what was going on.
Now imagine there's a network full of devices and a controller that all speak Z-Wave. Sensors are reporting temperature and humidity at regular intervals to the controller. But, even though you can hear what they're saying, it doesn't mean anything to you because you don't speak Z-Wave.
OpenZWave is a library that enables you to understand and speak Z-Wave. You can use it to create software that listens to the conversations, decides what action to take and even barks out orders in Z-Wave to devices (e.g., motion detection -> call the police). OpenZWave comes with sample applications that show you how to construct your own home automation software using the OpenZWave library. You can also use a software package such as Domoticz, HomeSeer, OpenHAB or SmartThings. These applications provide a broad set of home automation features and functionality so you don't have to program them yourself.
To use the least amount of battery, a device such as the ST814 spends most of its time sleeping. At user-defined regular intervals (for example, every hour), the device wakes up, reports the temperature and humidity to the controller and checks to make sure there are no other commands or requests waiting for it. Then it goes back to sleep. You determine how often the device wakes up and can set it according to the instructions you referenced.
If you want to intercept the temperature and humidity report from the ST814 to the controller and output it to the console with OpenZwave, you need to write some code or use someone else's program. The latter is easier, but might not enable you to do exactly what you want to do. Using OpenZWave is harder, but provides the capability to do just about anything you want to do.
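If you do decide to write the code yourself, a minimal sketch using the OpenZWave C++ library might look like the following (the serial device and configuration paths are assumptions for a typical Linux setup; adjust them to yours). It simply prints every value report the library sees, which is where the ST814's periodic temperature and humidity reports will show up:

#include <cstdio>
#include <string>
#include "openzwave/Options.h"
#include "openzwave/Manager.h"
#include "openzwave/Notification.h"

using namespace OpenZWave;

// Called by OpenZWave for every event on the network.
void OnNotification(Notification const* n, void* /*context*/)
{
    if (n->GetType() == Notification::Type_ValueChanged) {
        ValueID vid = n->GetValueID();
        std::string label = Manager::Get()->GetValueLabel(vid);   // e.g. "Temperature", "Relative Humidity"
        std::string value;
        Manager::Get()->GetValueAsString(vid, &value);
        std::string units = Manager::Get()->GetValueUnits(vid);
        printf("Node %u: %s = %s %s\n", (unsigned)vid.GetNodeId(),
               label.c_str(), value.c_str(), units.c_str());
    }
}

int main()
{
    Options::Create("/usr/local/etc/openzwave/", "./", "");   // path to the OpenZWave device database (assumption)
    Options::Get()->Lock();
    Manager::Create();
    Manager::Get()->AddWatcher(OnNotification, nullptr);
    Manager::Get()->AddDriver("/dev/ttyUSB0");                // the Z-Wave controller key (assumption)
    getchar();                                                // wait; reports are printed as they arrive
    Manager::Get()->RemoveDriver("/dev/ttyUSB0");
    Manager::Destroy();
    Options::Destroy();
    return 0;
}

Because the ST814 sleeps most of the time, expect the console to stay quiet between the auto-report intervals you configured.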

GStreamer issue with time

I use GStreamer to play audio and regularly require a timestamp of where I am in the file.
If I adjust the rate of play, be it by using a seek command specifying a new play rate or by using a plugin like “pitch” to adjust its “tempo” component, all timings go out of the window, as GStreamer adjusts the length of the audio and its current position to factor in the speed it is playing at. So what was at, say, 18 seconds is now at 14 seconds, for example.
I have also tried stopping the audio and starting afresh with the new settings, and also issuing the seek with a rate of 1.00 and then the tempo rate; neither has worked. I have run out of ideas for the moment, hence this plea to SO.
Example code
#Slow down rate of file play
def OnSlow(self, evt):
    media_state = self.media_get_state()
    self.rate = Gpitch.get_property("tempo")
    if self.rate > 0.2:
        self.rate = self.rate - 0.10
        Gpitch.set_property("tempo", self.rate)
    r = "%.2f" % (self.rate)
    self.ma3.SetLabel("Speed: " + r)
    if media_state == Gst.State.PLAYING or media_state == Gst.State.PAUSED:
        self.timer.Stop()  #momentarily stop updating the screen
        seek_event = Gst.Event.new_seek(self.rate, Gst.Format.TIME,
                                        Gst.SeekFlags.FLUSH,  #NONE
                                        Gst.SeekType.NONE, 0,
                                        Gst.SeekType.NONE, -1)
        Gplayer.send_event(seek_event)
        time.sleep(0.1)
        self.timer.Start()  #restart updating the screen
I have tried multiplying the duration and the current position by the adjustment in an attempt to pull or push the timestamps back to their positions, as if it were being played at normal speed but to no avail.
I have been pulling my hair out over this, and the real kick in the teeth is that if I perform the same task using VLC as my audio engine it works, but I have to alter the pitch separately. The whole reason for moving over to GStreamer was that the pitch plugin tracks the “tempo” component, and yet if I cannot get accurate and consistent timestamps, the project is dead in the water.
My question: has anybody (a) come across this issue and (b) mastered it?
The answer, for anyone who finds themselves in a similar predicament, seems to lie with a fundamental tussle between the soundtouch pitch plugin and GStreamer's rate of play.
Audacity even makes a note of it in their User Manual.
The only way I eventually found around the problem was to ditch the pitch plugin entirely from the pipeline, as just having it in there was enough to mess things up.
Instead, I used the ladspa-am-pitchshift-1433-so-ampitchshift plugin to adjust the pitch of the audio and left GStreamer to vary the rate of play using normal seek commands, to give slower and faster rates of play.
In this way the timestamps remain consistent, but the pitch has to be adjusted manually, although that can be semi-automated by picking from a list of predefined pitch values for given rates of play.
I trust that this saves someone else 2 days of head scratching.
Additional Note:
Even though GStreamer works in nanoseconds, and one could be forgiven for thinking that not using the Gst.SeekFlags.ACCURATE flag when performing a seek wouldn't make much difference, one would be very much mistaken.
I have noticed that omitting the ACCURATE flag can make a difference of up to 10 seconds when GStreamer is later asked to report its current position.
So forewarned is forearmed.
(Note that using this flag will make the seek take longer, but at least it gives consistent results.)