I need to implement an application on Linux that drives a USB-connected device (a medical instrument). The application will be written in C++ (2011 standard).
The current application is written for Windows 10 in C# and uses the standard WinUSB driver enumerated for the device. I have a complete protocol specification for the commands and the events/interrupts coming back. Unfortunately, I'm not sure how I can pass these to the USB layer in Linux. If it were a simple serial device there would be no problem, but I'm guessing the command responses and the interrupt events are multiplexed by the driver using functionality in the WinUSB driver.
Where's the best place to start in terms of documentation? Alternatively, is there a Linux library (or driver) that provides more or less the same functionality as WinUSB?
Any help appreciated. Thanks
Alternatively, is there a Linux library (or driver) that provides more or less the same functionality as WinUSB?
One way is to use the generic kernel API for USB directly (see the Asynchronous I/O parts for how to receive the interrupts):
https://www.kernel.org/doc/html/v4.15/driver-api/usb/usb.html
This is the strict Linux equivalent of WinUSB.
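To make that route concrete, here is a minimal sketch of the usbfs interface that documentation describes. The device path, interface number, and endpoint address below are hypothetical placeholders; the real values come from lsusb and your protocol specification.

```cpp
// Minimal sketch of the raw usbfs interface (the kernel API linked above).
// The device path, interface number, and endpoint address are hypothetical
// placeholders -- take the real values from `lsusb` and the protocol spec.
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/usbdevice_fs.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // Device node is /dev/bus/usb/<bus>/<device>; see `lsusb` for the numbers.
    int fd = open("/dev/bus/usb/001/004", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    unsigned int interface_number = 0;           // assumption: interface 0
    if (ioctl(fd, USBDEVFS_CLAIMINTERFACE, &interface_number) < 0) {
        perror("USBDEVFS_CLAIMINTERFACE");
        return 1;
    }

    // Send a command on a (hypothetical) bulk OUT endpoint 0x01.
    unsigned char cmd[8] = {0};
    usbdevfs_bulktransfer out{};
    out.ep = 0x01;
    out.len = sizeof(cmd);
    out.timeout = 1000;                          // milliseconds
    out.data = cmd;
    if (ioctl(fd, USBDEVFS_BULK, &out) < 0) perror("bulk out");

    // The interrupt events would use the asynchronous URB ioctls
    // (USBDEVFS_SUBMITURB / USBDEVFS_REAPURB) described in the docs above.
    close(fd);
    return 0;
}
```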
The potentially less hard way is to use libusb, which also gets you cross-platform support.
(I am assuming your device is recognized correctly by the kernel, and doesn't need a custom driver)
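To make the libusb route concrete, here is a minimal synchronous sketch. The VID/PID (0x1234/0x5678), interface number, and endpoint addresses (0x01 out, 0x81 in) are hypothetical placeholders taken from nowhere but your protocol specification; for the event stream you would probably switch to libusb's asynchronous transfer API.

```cpp
// Minimal synchronous libusb-1.0 sketch. VID/PID, interface, and endpoint
// addresses (0x1234/0x5678, 0x01, 0x81) are hypothetical placeholders.
// Build: g++ example.cpp $(pkg-config --cflags --libs libusb-1.0)
#include <libusb-1.0/libusb.h>
#include <cstdio>

int main() {
    libusb_context *ctx = nullptr;
    if (libusb_init(&ctx) != 0) return 1;

    libusb_device_handle *h =
        libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
    if (!h) { libusb_exit(ctx); return 1; }

    libusb_claim_interface(h, 0);

    // Send a command on the bulk OUT endpoint...
    unsigned char cmd[8] = {0};
    int transferred = 0;
    libusb_bulk_transfer(h, 0x01, cmd, sizeof(cmd), &transferred, 1000);

    // ...and poll the interrupt IN endpoint for an event.
    unsigned char ev[64];
    int r = libusb_interrupt_transfer(h, 0x81, ev, sizeof(ev), &transferred, 1000);
    if (r == 0) printf("got %d event bytes\n", transferred);

    libusb_release_interface(h, 0);
    libusb_close(h);
    libusb_exit(ctx);
    return 0;
}
```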
Is there a Linux library (or driver) that provides more or less the same functionality as WinUSB?
Yes. libusb is a popular USB abstraction library that supports Linux, macOS, and Windows. I also wrote a similar library called libusbp with a different set of features that were more useful for my applications. These are C libraries, so it will take some work to interoperate with them from C#, but once you do that, you can probably use the same code on both Windows and Linux (so you wouldn't have to maintain separate code for calling WinUSB).
From my understanding, Qt and GTK on Windows and OS X are just wrappers around the native GUI libraries: on OS X they wrap Cocoa, and on Windows, Win32. My question is, how do they integrate with Linux? Do the desktop environment developers have to implement special libraries for Qt or GTK, or how does it work? I have looked around but I can't really find the answer.
A few further notes.
Neither GTK+ nor Qt uses the native widgets of Windows and OS X. They approximate the look and feel using native APIs, but internally all the widgets are custom.
GTK+ and Qt are responsible for, and define, the themes available to programs on Linux. Desktop environments typically provide a way to change the theme globally for all applications, but how this is done is defined by GTK+ and Qt. For example, GTK+ 3 typically uses ~/.config/gtk-3.0/settings.ini to store this information (and there is a programmatic API to this file).
Qt has a bridge for GTK+ 2 themes via QGtkStyle, and the KDE developers maintain versions of their Oxygen theme for GTK+ 2 and GTK+ 3. (This may change in the future, especially now that GTK+ 2 is essentially dead.)
Update 1: Unix systems only provide a way to reserve a rectangular region of the screen to do what you want with it, including drawing (as in plotting a bitmap image) to it. Drawing (as in drawing shapes) is done by hand. GTK+ uses a library called cairo to do its drawing; I believe Qt wrote their own (QPainter?). Both Windows and OS X provide drawing APIs (Windows has several; OS X has Core Graphics). (X11 does have drawing primitives, but I assume they are not expressive enough to be used for modern 2D graphics; I wouldn't know...)
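As a small illustration of toolkits drawing everything themselves, here is a minimal cairo sketch that renders a shape into a client-side image buffer with no display server involved; this is essentially the kind of drawing GTK+ does before handing the result to the windowing system.

```cpp
// Minimal cairo sketch: draw a rectangle into a client-side image buffer
// and write it out as a PNG. No display server involved.
// Build: g++ draw.cpp $(pkg-config --cflags --libs cairo)
#include <cairo/cairo.h>

int main() {
    cairo_surface_t *surface =
        cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 200, 100);
    cairo_t *cr = cairo_create(surface);

    cairo_set_source_rgb(cr, 1.0, 1.0, 1.0);   // white background
    cairo_paint(cr);

    cairo_set_source_rgb(cr, 0.2, 0.4, 0.8);   // blue "button" rectangle
    cairo_rectangle(cr, 20, 20, 160, 60);
    cairo_fill(cr);

    cairo_surface_write_to_png(surface, "widget.png");
    cairo_destroy(cr);
    cairo_surface_destroy(surface);
    return 0;
}
```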
The same applies to font rendering, though modern Unix systems tend to base their font rendering on some generally accepted base libraries (freetype, fontconfig, fribidi, harfbuzz). GTK+ uses Pango to do text layout (actually arranging blocks of text into lines and paragraphs) and drawing (Pango integrates with cairo); I believe Qt also uses its own (this time I'm not sure).
I wrote about what X11 does do some time ago.
On Linux (desktops and laptops) the graphical screen is generally displayed (at least that was the case at the beginning of 2015) by the X11 server. Your GUI app communicates with that server through sockets, often locally via a Unix socket like /tmp/.X11-unix/X0. The X11 server is generally Xorg.
For some embedded devices, like Android mobile phones or other gadgets (GPS units in cars, automotive or medical instruments), it is different: DirectFB, framebuffer devices (which the X11 server on your desktop may also use), and so on.
Some distributions are switching to Wayland (or perhaps to Mir). Since I don't know these well, I cannot explain the gory details. AFAIU, there is still some server involved (which, like Xorg, is the only user-land software component talking to your graphics card) and some protocol, and major toolkits like Qt and GTK are being adapted to them (so if you code against Qt or GTK you don't care about those details, but you should upgrade your toolkit).
The graphical toolkits (Qt, GTK) interact with the X11 server (or the Wayland one) through some specific protocol(s), e.g. the X Window System protocols for X11. For historical reasons, these protocols are quite complex, and in practice require following conventions such as EWMH.
See also this answer to a related question. I explain there that X11 is not used today as it was in the previous century; in particular the server-side drawing abilities of X11 (e.g. Xlib's XDrawLine or XDrawText) are rarely used today, because the toolkit is drawing a pixmap image client side and sending it to the server.
Notice that you might consider giving your application a Web interface instead of a GUI (e.g. using libraries like libonion, Wt, ...); your application then becomes a specialized Web server, and the user interacts with it through a browser (on a desktop/laptop/tablet/phone).
Practically speaking, user interfaces are so complex that you really should use a toolkit for them (Qt if coding in C++). Coding one from scratch (even on top of Xlib or XCB for X11) would require years of work.
There exist several other widget toolkits above X11, e.g. the FOX toolkit and FLTK (but most of them have far fewer features than Qt or GTK).
There's no clear answer. There's no native GUI on Linux the way there is on Windows and OS X. X11, the windowing system typically used on Linux (and the same applies to Wayland and Mir), is very basic and low-level; it is mainly responsible for handling input devices and allocating windows to applications. It does not provide any GUI components such as buttons or text fields. In that sense, both Qt and GTK+ can be seen as "native" Linux GUI libraries. To make matters worse, the desktop environment plays a part too: on GNOME, GTK+ can be seen as more "native", whereas on KDE, Qt is.
I am looking for a way on Ubuntu to let a program read data from the keyboard even while the program is in the background. I have searched a lot but had no success. If any Ubuntu/Linux programmer knows how to do this, please share it with me.
I am a beginner at Ubuntu programming.
You can use the Linux input subsystem to read events from mice and keyboards. It will only work if your application has the necessary privileges. Basically, you have to run the application as root for this to work.
If you cannot run as root, you should not be attempting to monitor the keyboard anyway.
You can create an X11 application to monitor keyboard events in the current session. It only works for the current user, and in the current graphical environment, and may not be able to observe privileged dialogs, for example password inputs. For details, look at the application shortcut launcher for your desktop environment; all Linux DEs I've ever heard of have one.
I think the old Linux Journal articles, The Linux USB Input Subsystem and Using the Input Subsystem, are still among the best introductions to the Linux input subsystem. Most Linux distributions nowadays also support uinput, a similar device that allows injecting input events back into the kernel subsystem, designed to allow user-space input device drivers. Their interfaces are described in /usr/include/linux/input.h and /usr/include/linux/uinput.h. I recommend you start with the above articles, and then look at some input and uinput examples.
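Following those articles, here is a minimal sketch of reading the evdev interface directly. The event node /dev/input/event0 is a placeholder (check /proc/bus/input/devices to find your keyboard), and this needs root or membership in the input group.

```cpp
// Minimal evdev sketch: read key events from an input device node.
// /dev/input/event0 is a placeholder -- check /proc/bus/input/devices
// to find your keyboard. Needs root (or suitable group) privileges.
#include <linux/input.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("/dev/input/event0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    input_event ev{};
    while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
        if (ev.type == EV_KEY)               // key press/release/repeat
            printf("key code %u, value %d\n", (unsigned)ev.code, ev.value);
    }
    close(fd);
    return 0;
}
```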
If you are comfortable using an existing program, have a look at the Logkeys project. It takes input directly from /dev/input/event*.
Both Windows (the Win32 API) and OS X (Cocoa) have their own APIs to handle windows, events and other OS stuff. I have never really gotten a clear answer as to what Linux's equivalent is.
I have heard some people say GTK+, but GTK+ is cross-platform. How can it be native?
In Linux the graphical user interface is not a part of the operating system. The graphical user interface found on most Linux desktops is provided by software called the X Window System, which defines a device independent way of dealing with screens, keyboards and pointer devices.
X Window defines a network protocol for communication, and any program that knows how to "speak" this protocol can use it. There is a C library called Xlib that makes it easier to use this protocol, so Xlib is kind of the native GUI API. Xlib is not the only way to access an X Window server; there is also XCB.
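To give a feel for that level, here is a minimal Xlib sketch; at this layer a "window" is just a rectangle plus an event stream, with no buttons or text fields in sight.

```cpp
// Minimal Xlib sketch: open a display connection, create a bare window,
// and run an event loop. Build: g++ win.cpp -lX11
#include <X11/Xlib.h>
#include <cstdio>

int main() {
    Display *dpy = XOpenDisplay(nullptr);       // connect to the X server
    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

    int screen = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                     10, 10, 300, 200, 1,
                                     BlackPixel(dpy, screen),
                                     WhitePixel(dpy, screen));
    XSelectInput(dpy, win, ExposureMask | KeyPressMask);
    XMapWindow(dpy, win);

    XEvent e;
    for (;;) {                                  // the raw X11 event loop
        XNextEvent(dpy, &e);
        if (e.type == KeyPress) break;          // quit on any key press
    }
    XCloseDisplay(dpy);
    return 0;
}
```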
Toolkit libraries such as GTK+ (used by GNOME) and Qt (used by KDE), built on top of Xlib, are used because they are easier to program with. For example they give you a consistent look and feel across applications, make it easier to use drag-and-drop, provide components standard to a modern desktop environment, and so on.
How X draws on the screen internally depends on the implementation. X.org has a device independent part and a device dependent part. The former manages screen resources such as windows, while the latter communicates with the graphics card driver, usually a kernel module. The communication may happen over direct memory access or through system calls to the kernel. The driver translates the commands into a form that the hardware on the card understands.
As of 2013, a new window system called Wayland is starting to become usable, and many distributions have said they will migrate to it at some point, though there is still no clear schedule. This system is based on the OpenGL ES API, which means that in the future OpenGL will be the "native GUI API" on Linux. Work is being done to port GTK+ and Qt to Wayland, so that current popular applications and desktop systems would need minimal changes. The applications that cannot be ported will be supported through an X11 server, much like OS X supports X11 apps through XQuartz. The GTK+ port is expected to be finished within a year, while Qt 5 already has complete Wayland support.
To further complicate matters, Ubuntu has announced they are developing a new system called Mir because of problems they perceive with Wayland. This window system is also based on the OpenGL ES API.
Linux is a kernel, not a full operating system. There are different windowing systems and GUIs that run on top of Linux to provide windowing. Typically, X11 is the windowing system used by Linux distros.
Wayland is also worth mentioning, as it is often referred to as a "future X11 killer".
Also note that Android and some other mobile operating systems don't include X11 although they have a Linux kernel, so in that sense X11 is not native to all Linux systems.
Being cross-platform has nothing to do with being native. Cocoa has also been ported to other platforms via GNUstep, but it is still native to OS X / macOS.
Strictly speaking, the API of Linux consists of its system calls. These are all of the kernel functions that can be called by a user-mode (non-kernel) program. This is a very low-level interface that allows programs to do things like open and read files. See http://en.wikipedia.org/wiki/System_call for a general introduction.
A real Linux system will also have an entire "stack" of other software running on it, in order to provide a graphical user interface and other features. Each element of this stack will offer its own API.
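As a tiny illustration, a program can sit directly on that system-call layer with nothing else from the stack; /etc/hostname below is just an arbitrary readable file.

```cpp
// Minimal sketch of the raw system-call layer: open(2), read(2), write(2).
// This uses no toolkit and no X server -- only the kernel's API.
#include <fcntl.h>
#include <unistd.h>

int main() {
    int fd = open("/etc/hostname", O_RDONLY);  // open(2) system call
    if (fd < 0) return 1;

    char buf[256];
    ssize_t n = read(fd, buf, sizeof(buf));    // read(2) system call
    if (n > 0)
        write(STDOUT_FILENO, buf, n);          // write(2) system call

    close(fd);
    return 0;
}
```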
To add to what has already been mentioned, there is a very good overview of the Linux graphics stack at this blog: http://blog.mecheye.net/2012/06/the-linux-graphics-stack/
This explains X11/Wayland etc. and how it all fits together. I also think it's worth adding a bit about the following APIs you can use for graphics in Linux:
Mesa - "Mesa is many things, but one of the major things it provides that it is most famous for is its OpenGL implementation. It is an open-source implementation of the OpenGL API."
Cairo - "cairo is a drawing library used either by applications like Firefox directly, or through libraries like GTK+, to draw vector shapes."
DRM (Direct Rendering Manager) - I understand this the least, but it's basically the kernel interface that lets you write graphics directly to the framebuffer without going through X.
I suppose the question is more like "What is Linux's native GUI API?".
In most cases X (aka X11) will be used for that: http://en.wikipedia.org/wiki/X_Window_System.
You can find the API documentation here
The X Window System is probably the closest to what could be called 'native' :)
The Linux kernel's graphical operations are in include/linux/fb.h as struct fb_ops. Ultimately this is what add-ons like X11, Wayland, or DRM end up referencing. As these operations are only for video cards (not for vector or raster hardcopy devices, or tty-oriented terminal devices), their usefulness as a GUI is limited; but it's not entirely true that you need those add-ons to get graphical output, if you don't mind dropping to very low-level code (even assembler to bypass syscall wrappers) where necessary.
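For illustration, here is a minimal user-space sketch that produces graphical output with none of those add-ons, via the framebuffer device that sits in front of those fb operations. It assumes a 32 bpp /dev/fb0, requires permission on the device node, and should be run from a text console rather than inside X11 or Wayland.

```cpp
// Minimal framebuffer sketch: mmap /dev/fb0 and fill it with a solid color.
// Assumes a 32 bpp mode; needs permission on /dev/fb0 and no display server
// owning the console. Run from a text VT, not from inside X11/Wayland.
#include <linux/fb.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    fb_var_screeninfo var{};
    fb_fix_screeninfo fix{};
    ioctl(fd, FBIOGET_VSCREENINFO, &var);      // resolution, color depth
    ioctl(fd, FBIOGET_FSCREENINFO, &fix);      // stride, buffer size

    auto *fb = static_cast<uint8_t *>(
        mmap(nullptr, fix.smem_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    if (var.bits_per_pixel == 32) {
        for (uint32_t y = 0; y < var.yres; ++y) {
            auto *row = reinterpret_cast<uint32_t *>(fb + y * fix.line_length);
            for (uint32_t x = 0; x < var.xres; ++x)
                row[x] = 0x000000FF;           // blue, assuming XRGB layout
        }
    }
    munmap(fb, fix.smem_len);
    close(fd);
    return 0;
}
```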
Wayland
As you may have heard, Wayland is the featured choice of many distros these days, because its protocol is simpler than X's.
Toolkits for Wayland
Toolkits or GUI libraries that Wayland suggests are:
Qt 5
GTK+
SDL
Clutter
EFL
The closest thing to Win32 in Linux would be libc, since you mention not only the UI but events and "other OS stuff".
A GUI is a high-level abstraction of capability, so almost everything from the Xorg server to OpenGL has been ported cross-platform, including to Windows. But if by GUI API you mean the *nix graphics API, then you might be looking for the Direct Rendering Infrastructure (DRI).
I am building cross-platform game engine and now I am focused on Input system.
I have written an abstract input system which passes messages up and is fed by platform-dependent modules running in a separate thread.
On Windows I have created a "message-only" window, which feeds the input system with messages (translated to platform-independent ones) from RAWINPUT.
Now I am having trouble figuring out how to do a similar thing on a Unix-based system.
Is there any convenient way to get input (keyup, keydown, mousemove...) from the kernel? Or any other way that doesn't need a visible window?
EDIT
I do not want my input system to depend on my renderer. The renderer should just notify the input system when the app's focus changes... So I want the input system to run on a different thread than the renderer.
Usually cross-platform input is achieved by using a wrapper library -- SDL is one that is pretty good at that, and the current version is even BSD-licensed.
The advantages of using a wrapper are so big, that even Windows games that use their own solution on Windows tend to use SDL as a wrapper when running on Linux (that was the original reason SDL was created).
So in the worst case, you may keep your libraries on Windows, and use SDL for implementation specifically on *nix systems.
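For a flavor of what that looks like, here is a minimal SDL2 event-loop sketch (note that SDL still creates a window to receive keyboard focus; it just hides the platform differences behind one event queue).

```cpp
// Minimal SDL2 sketch: one event loop covering keyboard and mouse on every
// platform SDL supports. Build: g++ input.cpp $(pkg-config --cflags --libs sdl2)
#include <SDL2/SDL.h>
#include <cstdio>

int main() {
    if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;
    SDL_Window *win = SDL_CreateWindow("input", SDL_WINDOWPOS_UNDEFINED,
                                       SDL_WINDOWPOS_UNDEFINED, 320, 240, 0);
    bool running = true;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e)) {
            switch (e.type) {
            case SDL_KEYDOWN:
                printf("keydown: %s\n", SDL_GetKeyName(e.key.keysym.sym));
                break;
            case SDL_MOUSEMOTION:
                printf("mouse: %d,%d\n", e.motion.x, e.motion.y);
                break;
            case SDL_QUIT:
                running = false;
                break;
            }
        }
        SDL_Delay(10);   // stand-in for the engine's frame tick
    }
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```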
Assuming you're using X11:
Peter Hutterer has a series of XInput2 articles. Supports raw events apparently.
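Based on those articles, here is a rough XInput2 sketch that selects raw key events on the root window, so no visible application window is needed (error handling and XI2 version negotiation are kept to a minimum).

```cpp
// Minimal XInput2 sketch: receive raw key events via the root window, so no
// application window is required. Build: g++ rawkeys.cpp -lX11 -lXi
#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>
#include <cstdio>

int main() {
    Display *dpy = XOpenDisplay(nullptr);
    if (!dpy) return 1;

    int xi_opcode, ev_base, err_base;
    if (!XQueryExtension(dpy, "XInputExtension", &xi_opcode, &ev_base, &err_base))
        return 1;                              // XInput extension not available
    int major = 2, minor = 0;
    XIQueryVersion(dpy, &major, &minor);       // negotiate XI2 with the server

    unsigned char maskbits[XIMaskLen(XI_LASTEVENT)] = {0};
    XISetMask(maskbits, XI_RawKeyPress);
    XISetMask(maskbits, XI_RawKeyRelease);

    XIEventMask mask;
    mask.deviceid = XIAllMasterDevices;
    mask.mask_len = sizeof(maskbits);
    mask.mask = maskbits;
    XISelectEvents(dpy, DefaultRootWindow(dpy), &mask, 1);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        XGenericEventCookie *cookie = &ev.xcookie;
        if (cookie->type == GenericEvent && cookie->extension == xi_opcode &&
            XGetEventData(dpy, cookie)) {
            XIRawEvent *raw = static_cast<XIRawEvent *>(cookie->data);
            printf("raw key %s, keycode %d\n",
                   cookie->evtype == XI_RawKeyPress ? "press" : "release",
                   raw->detail);
            XFreeEventData(dpy, cookie);
        }
    }
}
```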
ManyMouse claims to use XInput2 without a window:
On Unix systems, we try to use the XInput2 extension if possible.
ManyMouse will try to fallback to other approaches if there is no X server
available or the X server doesn't support XInput2. If you want to use the
XInput2 target, make sure you link with "-ldl", since we use dlopen() to
find the X11/XInput2 libraries. You do not have to link against Xlib
directly, and ManyMouse will fail gracefully (reporting no mice in the
ManyMouse XInput2 driver) if the libraries don't exist on the end user's
system. Naturally, you'll need the X11 headers on your system (on Ubuntu,
you would want to apt-get install libxi-dev). You can build with
SUPPORT_XINPUT2 defined to zero to disable XInput2 support completely.
Please note that the XInput2 target does not need your app to supply an X11
window. The test_manymouse_stdio app works with this target, so long as the
X server is running. Please note that the X11 DGA extension conflicts with
XInput2 (specifically: SDL might use it). This is a good way to deal with
this in SDL 1.2:
Might be worth looking through the source.
Under the X Window System there is a concept of input-only windows, which is more or less parallel to that of message-only windows under Windows.
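A minimal Xlib sketch of that concept, for illustration (whether an InputOnly window receives the events you need depends on focus and pointer position, so treat this as a starting point rather than a finished design):

```cpp
// Minimal Xlib sketch of an InputOnly window: it is never drawn, but it can
// select and receive events, roughly like a message-only window on Windows.
// Build: g++ inputonly.cpp -lX11
#include <X11/Xlib.h>
#include <cstdio>

int main() {
    Display *dpy = XOpenDisplay(nullptr);
    if (!dpy) return 1;

    XSetWindowAttributes attrs{};
    attrs.event_mask = KeyPressMask | ButtonPressMask;

    // Class InputOnly: depth must be 0, border width 0, visual CopyFromParent.
    Window win = XCreateWindow(dpy, DefaultRootWindow(dpy),
                               0, 0, 100, 100, 0, 0, InputOnly,
                               CopyFromParent, CWEventMask, &attrs);
    XMapWindow(dpy, win);

    XEvent e;
    for (;;) {
        XNextEvent(dpy, &e);
        if (e.type == ButtonPress) {           // pointer event in our area
            printf("button %u pressed\n", e.xbutton.button);
            break;
        }
    }
    XDestroyWindow(dpy, win);
    XCloseDisplay(dpy);
    return 0;
}
```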