Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 5 years ago.
I want to develop an application that works with a drone.
I am looking at DJI's SDK and I don't understand how to develop for their drone. I want to write code that makes the drone fly by itself, based on the information the drone sends to my application, while my application sends back the flight commands. Is that possible?
Can I find a drone for which I can write code that runs on the drone itself, and not only on my ground station?
For example, if I want to write code that lets two drones talk to each other, I need to write a protocol that is embedded on the drone.
Besides DJI, whose SDK I have read about, are there other drone brands whose drones I can write code for?
You would need to create a mobile app (either Android or iOS) and include DJI's mobile SDK to control the drones. The SDK already supports the flying commands.
DJI has a developer platform called the Matrice 100. On this platform, you can add your own computer (such as a Raspberry Pi or another single-board computer) and run DJI's onboard SDK to execute your programs.
There are a few other drone brands whose drones can be programmed through an SDK; a quick Google search can help.
I think I can give some more details than the accepted answer, so I hope this might be helpful.
DJI currently has an Android and iOS SDK. You can control the drone from your application using it (tell the drone to takeoff, go to a waypoint, take a picture, take a video, etc).
(Note that the following solutions are not ranked; the best choice depends on your needs.)
1. If you get a 3DR Solo, you can write code on the drone directly. The preferred way to do that is DroneKit Python. DroneKit also works on Android, but will probably not be released on iOS (see the post from the 3DR staff here). The Solo is very cool because you can simply SSH into its embedded Linux.
2. Still using DroneKit, you can build your own drone around the Pixhawk flight controller.
3. Parrot has an SDK for their drones, but you cannot run code on the drone itself. The interesting point is that their SDK is in C, with wrappers for Android and iOS.
4. If you get a Matrice from DJI, you can put your own controller (e.g. a Raspberry Pi) on it and use the so-called onboard SDK from DJI.
5. Still using the onboard SDK, you can build your drone around the A3.
6. Using the mobile SDK from DJI, you can build your drone around the A2. (That is actually much the same as option 2, right?)
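To give a flavour of what "code that flies the drone by itself" looks like, here is a minimal, hedged sketch of the kind of helper you would write around any of these SDKs. It is plain Python with no SDK dependency; the function names and the 2 m threshold are my own illustrative choices, not part of DroneKit or the DJI SDKs.

```python
import math

# Illustrative helpers for autonomous waypoint logic; names and the
# default threshold are assumptions, not part of any drone SDK.

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes (haversine)."""
    r = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def waypoint_reached(current, target, threshold_m=2.0):
    """Decide, from telemetry the drone sends back, whether to issue the next command."""
    return distance_m(*current, *target) <= threshold_m
```

In a real application you would feed `current` from the SDK's telemetry callback and, once `waypoint_reached` returns True, send the next flight command through the SDK.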
OpenGL and Windows Remote don't play along nicely.
Solutions for this are dependent on the use case and answers are fragmented across the vast depths of the net.
This is a write-up I wish existed when I started researching this, both for coders and non-coders.
Problem:
An RDP session of Windows does not expose the graphics card, at least not directly. For instance, you cannot change the desktop resolution, and graphics card drivers usually just disable their settings menus. Creating an OpenGL context higher than v1.1 fails because of this. The advice often given in support IRCs, "Don't use Windows Remote", is unfortunately not an option for many: in many corporate environments Windows Remote is a constantly used tool, and an app has to work there as well.
Non-Coder workarounds
You can start the OpenGL program, allowing it to see the graphics card and create an OpenGL context, and then connect via Windows Remote. This always works, as Windows Remote just transfers the window content. It can be accomplished by:
A batch script that closes the session and starts the program, allowing you to connect to the already-running program. (Source)
Using VNC or another tool to remote into the machine, starting the program, and then switching to Windows Remote. (Simple VNC program, also with a portable client)
Coder workarounds
(Only for OpenGL ES) Translate OpenGL to DirectX. DirectX works flawlessly under Windows Remote and even has a software rendering fallback built into DX11 if something fails.
Use the ANGLE project to do this at run time. This is what Qt officially suggests and how Chrome and Firefox implement WebGL. (Source)
Switch to software rendering as a fallback. Some CAD software, like 3ds Max, does this for instance:
Under SDL2 you can use SDL_CreateSoftwareRenderer (Source)
Under GLFW, version 3.3 will ship OSMesa support (Mesa's off-screen rendering); in the meantime you can build the GitHub version with -DGLFW_USE_OSMESA=TRUE, but I personally still struggle to get that running. (Source)
Directly use Mesa's LLVMpipe for a fast software OpenGL implementation. (Source)
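Whichever fallback you pick, it can be useful to detect at runtime whether you actually ended up on a software renderer, so the application can warn the user or lower its quality settings. A minimal sketch; the marker strings are assumptions based on commonly reported renderer names ("GDI Generic" is Windows' built-in OpenGL 1.1, the others are Mesa's software rasterizers), and you would obtain the string via glGetString(GL_RENDERER) after creating a context:

```python
# Classify the GL_RENDERER string reported by the driver.
# Marker strings are assumptions based on commonly seen implementations.
SOFTWARE_MARKERS = ("gdi generic", "llvmpipe", "softpipe", "swrast")

def is_software_renderer(renderer: str) -> bool:
    """True if the reported renderer name looks like a software implementation."""
    name = renderer.lower()
    return any(marker in name for marker in SOFTWARE_MARKERS)
```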
Misc:
Use OpenGL 1.1: Windows has a built-in implementation of OpenGL 1.1 and earlier. Some game engines have a built-in fallback to this and thus work under Windows Remote.
Apparently there is middleware that allows even OpenGL 4 over Windows Remote, but it is part of a bigger package and is a commercial solution. (Source)
Any other solutions or corrections are greatly appreciated.
Nvidia: https://www.khronos.org/news/permalink/nvidia-provides-opengl-accelerated-remote-desktop-for-geforce-5e88fc2035e342.98417181
According to this article it seems that now RDP handles newer versions of Direct3D and OpenGL on Windows 10 and Windows Server 2016, but by default it is disabled by Group Policy.
I suppose that for performance reasons, using a hardware graphics card is disabled, and RDP uses a software-emulated graphics card driver that provides only some baseline features.
I stumbled upon this problem when trying to run Ultimaker CURA over standard Remote Desktop from a Windows 10 client to a Windows 10 host. Cura shouted "cannot initialize OpenGL 2.0 context". I also noticed that Repetier Host's "preview" window runs terribly slow, and Repetier detects only an OpenGL 1.1 card. Pretty much fits the "only baseline features" description.
By running gpedit.msc then navigating to
Local Computer Policy\Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Remote Session Environment
and changing the value of
Use hardware graphics adapters for all Remote Desktop Services sessions
I was able to successfully run Ultimaker Cura with no issues, Repetier-Host now displays OpenGL 4.6, and everything finally runs as fast as it should.
Note from genpfault:
As usual, this Policy is kept in the HKLM registry group in
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services
Set the REG_DWORD value bEnumerateHWBeforeSW to 1 to turn on GPU use in RDP.
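The same setting can be captured in a .reg file for deployment; this sketch simply restates the key and value from the note above:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services]
"bEnumerateHWBeforeSW"=dword:00000001
```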
OpenGL works great over RDP with professional Nvidia cards, without anything like virtual machines or RemoteFX. For Quadro cards (Quadro 4000 tested) you need driver 377.xx. For the M60 you can use the same driver. If you want to use the latest driver with the M60, you have to change the driver mode to WDDM (see c:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.1.pdf). There may be licensing problems in this last case.
Some people recommend using "tscon.exe" if you can: https://stackoverflow.com/a/45723167/32453, or using a scheduler to do it on native hardware: https://stackoverflow.com/a/41839102/32453, or creating a group policy: https://community.esri.com/thread/225251-enabling-gpu-rendering-on-windows-server-2016-windows-10-rdp
You could also try copying opengl32.dll (or opengl64.dll) to your executable's directory (https://blender.stackexchange.com/a/73014); a newer version of the DLL is available at https://fdossena.com/?p=mesa/index.frag
Remote Desktop and OpenGL do not play very well together. When you connect to a Windows box, the OpenGL driver is unloaded and you end up with software emulation of OpenGL.
When you disconnect from the Windows box, the OpenGL driver is not reloaded. This causes issues when you are running tests on the machine, as you have to physically log in to the machine to reset the drivers.
The solution I ended up using was to:
Disable Remote Desktop.
Delete all other software for remote desktop access, because if it is used for logging in remotely, the currently loaded set of drivers may get messed up.
Install NoMachine
NoMachine is my personal favourite (when it does not play up) for a number of reasons:
Hardware acceleration of compression (video of desktop).
Works on Windows and Linux.
Works well on low-bandwidth connections especially if the client and server have the necessary hardware for compression of the data stream.
On Linux you get your desktop as you last left it when you were sitting in front of the machine.
On Windows it does not affect OpenGL.
It is currently free for personal and commercial use, but do check the licence in case that has changed.
When NoMachine plays up it hogs the CPU, but this happens rarely, and it is in active development.
Others to consider:
TurboVNC
TightVNC
TeamViewer - only free for personal use.
We are developing software on a board based on the Hi3536 processor. The SDK provided by HiSilicon comes with samples for developing user interfaces using the framebuffer API, which is too low-level: to implement a combo box or a text box, we have to write code from scratch.
We are now trying to use Qt. We are not sure what other vendors use for developing software on the Hi3535 or Hi3536.
Can somebody suggest which SDK is most suitable for developing user interfaces on HiSilicon processor based boards?
We referred to the sample code given in the following link and were able to bring up the GUI successfully on the Hi3536 using Qt 5.6: http://bbs.ebaina.com/thread-8217-1-1.html. Note that you will need Google Translate, as the text is in Chinese.
In a past job, a few years ago, I worked on a GUI for a board based on an older HiSilicon chip.
I greatly enjoyed using Qt for Embedded Linux, version 4.8 on Linux framebuffer.
As far as I remember, be sure to study the HiSilicon documentation on how the framebuffers can and must be initialized and used. The HiSilicon SDK used to contain some sample programs with source code; there should be one that deals with framebuffers too.
My knowledge of Qt for Embedded is stuck at 4.8; I know version 5.x has radically redesigned that part, but I can't help you with details related to Qt 5.
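For orientation, the framebuffer plumbing those HiSilicon samples (and Qt's linuxfb backend) deal with boils down to computing byte offsets into /dev/fb0. A hedged sketch, assuming a packed-pixel layout; line_length and bits_per_pixel are the fields reported by the kernel's FBIOGET_FSCREENINFO and FBIOGET_VSCREENINFO ioctls:

```python
def pixel_offset(x: int, y: int, line_length: int, bits_per_pixel: int) -> int:
    """Byte offset of pixel (x, y) in a packed-pixel Linux framebuffer.

    line_length is bytes per scanline (possibly padded), from the fixed
    screen info; bits_per_pixel comes from the variable screen info.
    """
    return y * line_length + x * (bits_per_pixel // 8)
```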
For my job, we are searching for an application that allows us to do display export. The requirements are:
clients use Windows/Linux systems
the server is a Red Hat 6 Linux cluster
there are OpenGL-based applications on the server side; they must run as fast as possible on the client
the GPUs are on the server side; users open a visualization session on the cluster, which allocates specific nodes with GPUs
At the moment, we use TurboVNC (with a VNC client called "vncviewer", secured by an SSH tunnel) and VirtualGL on the server to launch OpenGL applications (such as ParaView) with the "vglrun name_application" command.
Could someone give me advice on alternative solutions?
I looked at XDMCP, but it is not secure.
We can't use SSH X forwarding because it is too slow.
By the way, for display export, what is the split between the resources allocated by the client and those allocated by the server?
TurboVNC seems to allocate more resources on the server: does that mean the client does no graphics processing and only receives raw data from the server, which is then displayed on the client side?
And would that not be the case with "ssh -X" (where the client handles the OpenGL processing locally)?
How long are you willing to wait to put this into production?
Right now the Linux graphics stack is built around Xorg, and Xorg has the inconvenient drawback that you can't run purely off-screen X servers that make use of the GPU. If you can live with only one user making use of the GPU, and with the GPU holding the VT, then you might want to look into Xpra, started with an X server configuration that uses the GPU instead of the dummy driver.
If you're willing to wait another two years, (hopefully) all drivers will fully support KMS and the DRM kernel interfaces by then. As much as I dislike certain aspects of Wayland, it is also a huge game changer that puts a lot of peer pressure on NVidia to finally use the "standard" APIs. Already now you can use libgbm to create purely off-screen OpenGL render contexts on GPUs that support it, with no display server running; i.e. GPUs with open source drivers in the Mesa3D tree (Intel and AMD; for now just OpenGL 3 and no OpenCL). Give it another two years and the APIs and tools will have stabilized enough that you can use this conveniently in production.
I am attempting to create my own operating system, and I am wondering whether there is a way to tell the BIOS to set a VGA pixel on the screen in C++.
Neither C nor C++ provides any built-in graphics capabilities as a language. If you want graphics, you have to use some OS-specific library.
Aside from that, modern operating systems generally don't allow any old program to poke around in memory however it wants to. Instead, they use intermediaries called drivers and, yes, graphics libraries and APIs such as OpenGL.
If you really want to do it yourself, get a copy of MS-DOS, dig up some old VGA specs, and start from there.
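For concreteness, the classic MS-DOS route this answer alludes to is VGA mode 13h (320x200, 256 colors), where the framebuffer is a flat byte array at physical address 0xA0000. This sketch shows only the address arithmetic; actually poking the byte requires real-mode code, or an OS that maps that region for you:

```python
FB_BASE = 0xA0000  # physical address of the mode 13h framebuffer
WIDTH = 320        # mode 13h is 320x200, one byte per pixel

def mode13h_offset(x: int, y: int) -> int:
    """Offset of pixel (x, y) from FB_BASE; writing a byte there sets its color."""
    return y * WIDTH + x
```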
You can turn on a given pixel, but this requires platform-specific code and may not be portable when the OS or platform changes.
My understanding is that you want direct access to the screen buffer with nothing in between stopping you. Here is the way to go.
Common Research
On your platform, find out the brand and model of the graphics controller, if you are using one. Search the web for the data sheet of that graphics controller chip. Most likely, the screen memory is inside the chip and not directly accessible by the CPU.
Next, find out how to access the board the graphics controller resides on. You may be able to access the graphics controller chip directly through I/O ports or memory addresses, or you may have to use an interrupt system. Research the hardware.
Linux
Download a source distribution of the Linux kernel. Find the graphics driver, and search its code to see how the graphics controller is manipulated.
For Linux, you will have to write your own graphics driver and rebuild the kernel. Next, you will need to write a program that accesses your driver and turns on the pixel. Research the "Linux driver API"; there are books available on writing Linux drivers and the standard API they use.
Windows
Windows uses the same concept of drivers. You will have to write your own Windows driver and let the OS know you want to use it. Your driver will talk to the Graphics Controller. There are books available about writing Windows drivers. After writing the driver, you will need to write a demo program that uses your driver.
Embedded Systems
Embedded systems range from simple to complex as far as displays go. The simplest embedded systems use memory that the display reads directly: any write to this memory is immediately reflected on the display.
The more complex embedded systems use graphics controllers to drive the display. You would need to get the data sheets for the graphics controller, figure out how to set it up, and then how to turn on a pixel.
Driver Writers
Drivers are not an easy thing to write. Most are written by teams of experts and take months to produce. Graphics controller chips are becoming more and more complex as new features are added, and the driver must support the new features as well as the older models. Not an easy task.
Summary
If you really want to access a pixel directly, go ahead. It may require more research and effort than using an off-the-shelf (OTS) library. Most people in the industry use OTS libraries or frameworks (such as Qt, wxWidgets, and X Windows). Drivers are only rewritten or modified for performance reasons or to support new graphics hardware. Driver writing is not a simple task and requires a quality development process as well as a verification strategy.
Good luck on writing your pixel. I hope your library has something better to offer than the many graphic libraries already in existence.
I would like to write a very simple Linux desktop environment, or a program that runs without a DE. Here are my requirements:
The application or DE will be an IPTV player (as in an IPTV set-top box), and I want it to run directly after booting (no login screen or similar).
1. The DE will be full screen.
2. No need to run any other GUI programs, just command-line programs called through my application, so no window manager or display manager is needed (if possible).
3. Minimal services; I just want to connect to the LAN and read RTP (UDP) streams.
4. Use Qt and Qt Quick to write this DE or application, if possible without OpenGL.
5. It MUST use libvlc or some other library to read and play the RTP streams.
6. Use apt-get to install or remove packages.
7. Keyboard and mouse support.
I am a C++ and Qt programmer and I have a good Linux administration background.
If you have any ideas to help me write the DE, or know of an existing one that runs directly on X Window, please help.
The DE will be used as if the PC were a normal DVB receiver, to list channels and select one to view.
How could I boot my Qt application as a DE and put it in /usr/share/xsessions as /usr/share/xsessions/myDE.desktop?
How do I configure Qt to run without a window manager or display manager?
Should I use QApplication or some other class to run my app?
I would like to start by saying that you should think only about Qt 5 for this, and forget about Qt 4. The Qt 4 design with QWS is a bit old and hence flawed. Qt 5 has a nice QPA (Qt Platform Abstraction) interface for easily adding platform plugins, which makes the architecture robust and flexible.
How do I configure Qt to run without a window manager or display manager?
You can use Qt with the appropriate platform plugin, such as eglfs, linuxfb, directfb, minimal or minimalegl, without complicated window and display managers, if you want a lightweight solution.
Here you can find the list of the platform plugins that Qt 5 currently tries to support:
https://qt.gitorious.org/qt/qtbase/source/475cbed2446d0e3595e7b8ab71dcbc1ae5f59bcf:src/plugins/platforms
Should I use QApplication or some other class to run my app?
No, you should use QGuiApplication for this sort of thing. QApplication is for widget-based applications with Qt 5, and Qt 5 is the major version you should use for this.
That being said, Qt Quick 2 rendering depends on the availability of the OpenGL API, so you need to have that in place, for your information. That does not necessarily mean hardware acceleration with a GPU: a software-based implementation of the open standard is also fine.
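As for the /usr/share/xsessions part of the question: a session entry is just a .desktop file pointing at your binary. A minimal sketch; the Exec path and names are assumptions for illustration:

```
[Desktop Entry]
Type=Application
Name=MyDE
Comment=Kiosk-style IPTV player session
Exec=/usr/local/bin/my-iptv-player
```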