Mouse/Keyboard input in OSX without Cocoa - c++

I am writing an application in C++ using CGL/OpenGL and need keyboard/mouse input. I was trying to avoid Objective-C and Cocoa if possible. I am aware that one can capture input using Carbon, but it seems that Carbon is slowly being phased out, and it is not clear whether it plays well with 64-bit applications. Does anybody know of other alternatives on OSX for mouse/keyboard input from C++, without going to something very low level (e.g. I/O Kit)? Any code snippets to get me started?
Thank you-

Quartz event taps might do what you want. Without knowing why you are trying to avoid using the Cocoa event system it's hard to know what technology would be best for what you are trying to do.
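As a rough, untested sketch of what an event tap can look like (plain C API, compiles as C++; listening to keyboard events may require Accessibility/Input Monitoring approval depending on the OS version):

    // Minimal Quartz event tap: observe key-down and mouse-move events.
    // Build (roughly): clang++ tap.cpp -framework ApplicationServices
    #include <ApplicationServices/ApplicationServices.h>
    #include <cstdio>

    static CGEventRef tapCallback(CGEventTapProxy, CGEventType type,
                                  CGEventRef event, void *) {
        if (type == kCGEventKeyDown) {
            int64_t keycode =
                CGEventGetIntegerValueField(event, kCGKeyboardEventKeycode);
            std::printf("key down, keycode %lld\n", (long long)keycode);
        } else if (type == kCGEventMouseMoved) {
            CGPoint p = CGEventGetLocation(event);
            std::printf("mouse at %.0f, %.0f\n", p.x, p.y);
        }
        return event;  // pass the event through unchanged
    }

    int main() {
        CGEventMask mask = CGEventMaskBit(kCGEventKeyDown) |
                           CGEventMaskBit(kCGEventMouseMoved);
        CFMachPortRef tap = CGEventTapCreate(
            kCGSessionEventTap, kCGHeadInsertEventTap,
            kCGEventTapOptionListenOnly,   // observe only, don't modify
            mask, tapCallback, nullptr);
        if (!tap) return 1;                // usually a permissions problem

        CFRunLoopSourceRef src =
            CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0);
        CFRunLoopAddSource(CFRunLoopGetCurrent(), src, kCFRunLoopCommonModes);
        CGEventTapEnable(tap, true);
        CFRunLoopRun();                    // blocks; events arrive in tapCallback
        return 0;
    }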

Gaffer on Games wrote an article that does just what you want: context and surface creation with CGL plus a minimal event loop (although it's a little out of date; for El Capitan you need CGLSetFullScreenOnDisplay()).
http://gafferongames.com/2009/01/19/opengl-on-macosx/
It uses InstallApplicationEventHandler, one of the two keyboard APIs mentioned here:
Keyboard input on OSX
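For completeness, a hedged sketch of the Carbon-era InstallApplicationEventHandler route that the article and the linked answer refer to (deprecated API; I haven't verified how much of it still builds against current SDKs, so treat it purely as illustration):

    // Carbon event handler for raw key-down events (deprecated API).
    #include <Carbon/Carbon.h>
    #include <cstdio>

    static OSStatus keyHandler(EventHandlerCallRef, EventRef event, void *) {
        UInt32 keyCode = 0;
        GetEventParameter(event, kEventParamKeyCode, typeUInt32,
                          nullptr, sizeof(keyCode), nullptr, &keyCode);
        std::printf("key down, code %u\n", (unsigned)keyCode);
        return noErr;
    }

    int main() {
        EventTypeSpec spec = { kEventClassKeyboard, kEventRawKeyDown };
        InstallApplicationEventHandler(NewEventHandlerUPP(keyHandler),
                                       1, &spec, nullptr, nullptr);
        RunApplicationEventLoop();   // dispatches events to the handler
        return 0;
    }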

Related

developing GUI application without using GUI toolkit

Is it possible to write a GUI application without using a GUI toolkit? GUI toolkits like GTK+ are themselves written in C, and at some point no such toolkits existed, so how did programmers develop GUI apps using only C or C++? How can one write a GUI application in C or C++ without any GUI toolkit?
You can program Windows GUI applications using the Win32 API directly, without using any separate toolkit like GTK+. One reference on how to do that is here: http://www.winprog.org/tutorial/start.html
It's not so common these days, and not for the faint of heart.
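For a sense of scale, here is a minimal, hedged sketch of the kind of raw Win32 program that tutorial builds up (error handling omitted):

    // Bare Win32 window with no toolkit: register a class, create a window,
    // and pump the message loop.
    #include <windows.h>

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
        switch (msg) {
        case WM_DESTROY:
            PostQuitMessage(0);                   // end the message loop
            return 0;
        }
        return DefWindowProc(hwnd, msg, wp, lp);  // default handling for everything else
    }

    int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE, LPSTR, int nCmdShow) {
        WNDCLASS wc = {};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInstance;
        wc.lpszClassName = TEXT("PlainWin32Window");
        RegisterClass(&wc);

        HWND hwnd = CreateWindow(wc.lpszClassName, TEXT("No toolkit"),
                                 WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                                 640, 480, nullptr, nullptr, hInstance, nullptr);
        ShowWindow(hwnd, nCmdShow);

        MSG msg;
        while (GetMessage(&msg, nullptr, 0, 0)) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        return 0;
    }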
Doing GUI stuff at the API level in Windows is not difficult, but involves a lot of work.
As a starting point you can check out my old Windows API programming tutorial “Lessons in Windows API Programming (C++)”.
Going that route you would do well to obtain a copy of the 5th edition or earlier (not 6th or later) of Charles Petzold’s “Programming Windows”, which is considered the Bible on the subject.
You start with a frame buffer for the graphics; on top of that you write a set of primitive functions for basic geometry (lines, circles, polygons, bit copies). Then you create an event queue and a way to populate it with input events (keyboard, mouse, etc.).
You'll also need to create font and text routines.
Those are the basics upon which any GUI is built; basic GUIs are little more than boxes that take click events and, eventually, keyboard events.
It's a lot of work.
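To make that concrete, here is a hedged, self-contained sketch of the two building blocks described above, a framebuffer with one drawing primitive and an input event queue (all names are invented for illustration):

    #include <cstdint>
    #include <queue>
    #include <vector>

    // The pixels everything is drawn into, plus one drawing primitive.
    // Real toolkits add Bresenham lines, rectangles, circles, polygons,
    // blits and font rendering on top of this.
    struct FrameBuffer {
        int width, height;
        std::vector<uint32_t> pixels;            // 0xAARRGGBB
        FrameBuffer(int w, int h) : width(w), height(h), pixels(w * h, 0) {}
        void setPixel(int x, int y, uint32_t c) {
            if (x >= 0 && x < width && y >= 0 && y < height)
                pixels[y * width + x] = c;
        }
        void hline(int x0, int x1, int y, uint32_t c) {
            for (int x = x0; x <= x1; ++x) setPixel(x, y, c);
        }
    };

    // An input event, and the queue the host layer fills and the
    // widget tree later drains.
    struct InputEvent {
        enum Type { MouseMove, MouseDown, MouseUp, KeyDown, KeyUp } type;
        int x = 0, y = 0;   // mouse position
        int key = 0;        // key code
    };

    class EventQueue {
        std::queue<InputEvent> q;
    public:
        void push(const InputEvent &e) { q.push(e); }
        bool poll(InputEvent &out) {
            if (q.empty()) return false;
            out = q.front();
            q.pop();
            return true;
        }
    };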
If you want to look at GUI programming at a lower level, consider looking at how it was originally done in the primitive OSes (such as early Windows, early Mac OS, early X Windows).
Mac OS made much of the work explicit. It offered a Window Manager and other high-level controls, but with a bit of study you can see how these were built on top of QuickDraw (the classic Mac OS graphics primitives library).
None of this addresses the modern issue of GPU acceleration and the like; that's a completely different layer of complexity on top of the problem.

Creating many native GUI frontends for a cross-platform application

I've been away from GUI programming for quite some time so please pardon my ignorance.
I would like to attempt the following:
Write a Mac OSX app but still be able to port to Win/Linux (i.e. C++ core with Obj-C GUI)
Avoid Qt/other toolkits on OSX (i.e. talk to Cocoa directly - I feel that many Qt apps I use stick out like sore thumbs compared to the rest of my system)
Not as important, but it would be nice to avoid Visual Studio if it means I can have the freedom to use newer C++ features even on Windows if they help create better code.
I believe this configuration might get me what I'm looking for:
Core C++ Static Library
OSX GUI (Cocoa)
Windows GUI (Qt+MinGW?) OR (no new C++ features, Visual Studio + ManagedC++/C#/????)
Linux GUI (Qt)
Once again, sorry for my ignorance, but is this possible? Is this sane? Are there any real-world open-source examples that accomplish something like this?
There are quite a few OS X applications with completely custom-designed looks that don't use many stock controls. iStat Menus comes to mind, but there are many other examples. They still look good, but that's achieved by manually designing them to look good and to "mesh" with the overall look of OS X applications. Even its preferences pane doesn't use stock buttons.
Thus you can go quite far using Qt; you just have to pay close attention to what you're doing, just as careful developers do even when using Cocoa. You'll find that Qt's controls often offer functionality above and beyond what Cocoa provides.
That said, on OS X sometimes you may need to run some native code that expects a CFRunLoop to be present. It's good to know that Qt's event loop already spins a runloop for you, so as long as you have an event loop spinning in a given thread, you can use runloop-based code - the default runloop is provided by Qt's implementation of QEventDispatcher (somewhere in its guts). For non-gui threads, the unmodified QThread does it for you. This is useful for using asynchronous IOKit functionality, for example. Another answer of mine presents some Cocoa mouse event grabbing code. A previous version that used Carbon can be found in the edit history of that answer.
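A small, hedged sketch of that run-loop point, assuming Qt 5+ on OS X and that Qt's event dispatcher does spin the thread's CFRunLoop as described above (link against QtCore and CoreFoundation; names are illustrative):

    // Install a CoreFoundation timer from a worker QThread and rely on
    // Qt's event loop to service the thread's CFRunLoop.
    #include <QCoreApplication>
    #include <QThread>
    #include <CoreFoundation/CoreFoundation.h>
    #include <cstdio>

    static void timerFired(CFRunLoopTimerRef, void *) {
        std::printf("CFRunLoop timer fired inside a QThread\n");
    }

    class Worker : public QThread {
        void run() override {
            CFRunLoopTimerRef timer = CFRunLoopTimerCreate(
                kCFAllocatorDefault,
                CFAbsoluteTimeGetCurrent() + 1.0,  // first fire in ~1 s
                1.0,                               // then every second
                0, 0, timerFired, nullptr);
            CFRunLoopAddTimer(CFRunLoopGetCurrent(), timer, kCFRunLoopDefaultMode);
            exec();                                // Qt event loop drives the runloop
            CFRelease(timer);
        }
    };

    int main(int argc, char **argv) {
        QCoreApplication app(argc, argv);
        Worker w;
        w.start();
        return app.exec();
    }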
Same goes for Windows: Qt runs a message sink for all top-level windows it owns, and you can integrate native controls/windows using qtwinmigrate. You can also integrate ActiveX controls using the Active Qt framework.
Well, I think you should try Qt even on OSX. Qt allows both native-looking and custom-looking applications (the cases you mention are probably bad examples; you likely haven't noticed that lots of other applications also use Qt).
Tools I usually use for multi-platform development:
C++ (now C++11 since all major compilers more or less support it)
Boost
Qt
CMake as build system generator
If you use this toolset you can choose whichever platform you like for development and still be multi-platform, without extensive work on the other platforms.

Mute all but my application

I made a little sound generator in C# 4.0 using DirectSound.
I would like to mute all other sounds. I want only my application to be able to emit sounds.
How to do it?
I know how to pInvoke so you can give me unmanaged code.
Properly designed programs either stop playing sound when their main window is deactivated, or use IDirectSound::SetCooperativeLevel() so they play nicely with other programs that want to be heard.
You are asking how to make an improperly designed program behave nicely, with a bit of a hint that you don't contemplate being nice yourself. Teaching that uncooperative program a lesson is simple: run its uninstaller. Avoid being the victim of that same advice.
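For reference, a hedged sketch of the cooperative-level call mentioned above, in unmanaged C++ (link with dsound.lib; hwnd is assumed to be your application's window handle):

    #include <windows.h>
    #include <dsound.h>
    #pragma comment(lib, "dsound.lib")

    // Create a DirectSound device that cooperates with other applications.
    bool initSharedSound(HWND hwnd, IDirectSound8 **outDS) {
        if (FAILED(DirectSoundCreate8(nullptr, outDS, nullptr)))
            return false;
        // DSSCL_PRIORITY lets you set the primary buffer format while still
        // mixing with other applications' audio.
        return SUCCEEDED((*outDS)->SetCooperativeLevel(hwnd, DSSCL_PRIORITY));
    }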

C++ UI framework from scratch?

I want to create a C++ UI framework (something like Qt, or like the Ubuntu Unity desktop).
How are these programmed? Are they built on OpenGL? Take KDE's Plasma UI, which is built with Qt: how is that programmed?
Direct answers, reference links, anything will be helpful.
Some interesting OpenGL-based UIs I found on the web:
LiquidEngine
http://www.youtube.com/watch?v=k0saaAIjIEY
Libnui
en.wikipedia.org/wiki/Libnui
Some UI frameworks render everything themselves and work within some kind of clipping window inside the host system's screen. Non-display aspects (such as input event handling) have to be translated to/from the host system's underlying APIs.
Some UI frameworks translate as much as possible to some underlying framework.
wxWidgets can do both. You can choose a native version (e.g. wxMSW if you're on Windows) and most wxWidgets controls will be implemented using native Windows controls. Equally, you can choose the wxUniversal version, where all controls are implemented by the wxWidgets library itself.
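A minimal hedged wxWidgets example of that dual approach (wxWidgets 3.x spelling; on wxMSW/wxOSX/wxGTK the button below maps to a native control, on wxUniversal it is drawn by the library itself):

    #include <wx/wx.h>

    // One top-level frame containing a single (potentially native) button.
    class MyFrame : public wxFrame {
    public:
        MyFrame() : wxFrame(nullptr, wxID_ANY, "No extra toolkit layers") {
            new wxButton(this, wxID_ANY, "A button", wxPoint(20, 20));
        }
    };

    class MyApp : public wxApp {
    public:
        bool OnInit() override {
            (new MyFrame())->Show(true);
            return true;
        }
    };

    wxIMPLEMENT_APP(MyApp);  // generates main()/WinMain() and runs the event loop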
The trouble is that typical GUI frameworks are huge. If you want a more manageable example to imitate, you might look at FLTK. I haven't got around to studying it myself, but it has a reputation for being concise.
There are also some GUI toolkits that are specifically aimed at games programming, such as Crazy Eddie's GUI. My guess is that these are as independent of the underlying API as possible, so that particular applications can implement the mapping to whichever underlying API they happen to target (OpenGL, DirectX, SDL, whatever) and can be the boss of the GUI rather than vice versa.
http://www.wxwidgets.org/
http://www.fltk.org/
http://www.cegui.org.uk/wiki/index.php/Main_Page
"no really, don't write your own wm or toolkit"
The #Xorg-devel guys on irc.freenode.org
Doing one anyway means that you have to test against a wide range of more or less buggy WMs and X implementations, and that you have to update frequently to stay compatible with the latest Xorg server and X protocol features (like XInput 2.1).
Understandably, the Xorg people are tired of supporting old, unmaintained toolkits and applications. They already have enough bugs.
GUI frameworks are very dependent on a windowing system, which dictates what is allowed and how windows are created and rendered; for example, you pass a specific option to create a borderless or full-screen window.
Since you mentioned OpenGL and Ubuntu, I guess you want to start on a Linux platform. You should study Xlib, for which you can find a reference here.
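As a taste of what that reference covers, a small, hedged Xlib sketch (link with -lX11):

    // Open a display, create a window, and pump events until a key is pressed.
    #include <X11/Xlib.h>
    #include <cstdio>

    int main() {
        Display *dpy = XOpenDisplay(nullptr);
        if (!dpy) return 1;

        int screen = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                         0, 0, 640, 480, 1,
                                         BlackPixel(dpy, screen),
                                         WhitePixel(dpy, screen));
        XSelectInput(dpy, win, ExposureMask | KeyPressMask | ButtonPressMask);
        XMapWindow(dpy, win);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);             // blocks until the next event
            if (ev.type == ButtonPress)
                std::printf("click at %d,%d\n", ev.xbutton.x, ev.xbutton.y);
            if (ev.type == KeyPress) break;   // quit on any key
        }
        XCloseDisplay(dpy);
        return 0;
    }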
Since the Qt library is open source, you can download it and peek into its sources.
A UI library isn't developed from scratch. It relies on the OS' windowing system, which relies on the driver from your graphics adapter, which relies on the OS kernel, which relies on... and so on.
To develop any software "from scratch", you can start by writing your own BIOS. Once you're done with that, move on to writing an OS, and then you should be just about ready to write the software you wanted. Good luck.
And this is assuming you're willing to cheat, of course, and use a compiler you didn't write from scratch.
Before you do that, it's worth spending a week thinking about two questions:
1. Do you really know how to do it? I doubt that.
2. Do you really need to do it? I doubt that too.

Controlling mouse in linux

Basically, I'm currently using the wiiuse library to get the wiimote working on Linux. I now want to be able to control the mouse through the IR readings.
Can somebody point me in the right direction as to how to approach this? I know of uinput, but there don't seem to be a lot of tutorials/guides on the web.
I'm working with C/C++, so a library in C/C++ would be helpful.
Cheers.
I think you should look into "becoming" a new mouse device. This would require developing a device driver that knows how to read the Wii device and present that data to the input system as if it came from a mouse. The Linux kernel supports multiple mice connected at the same time, and merges the input from all of them, so this will work fine.
This book might be a handy help along the way. Not sure if it's possible to do this totally in userland, but that is of course worth investigating too.
I'm not sure if I understood your question correctly. If you are looking to control the mouse pointer from userspace, look at the XTest extension. Useful link
Edit:
From the kernel's point of view, uinput looks like a good starting point.
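Along those lines, a hedged userspace sketch using the classic uinput interface (needs write access to /dev/uinput; the device name "wiimote-pointer" and the 10-pixel nudge are just illustrative):

    // Create a virtual relative-pointer device via uinput and move it once.
    #include <linux/uinput.h>
    #include <sys/ioctl.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstring>

    static void emit(int fd, int type, int code, int value) {
        struct input_event ev;
        std::memset(&ev, 0, sizeof(ev));
        ev.type = type; ev.code = code; ev.value = value;
        write(fd, &ev, sizeof(ev));
    }

    int main() {
        int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);
        if (fd < 0) return 1;

        // Declare which events the virtual device can produce.
        ioctl(fd, UI_SET_EVBIT, EV_KEY);
        ioctl(fd, UI_SET_KEYBIT, BTN_LEFT);
        ioctl(fd, UI_SET_EVBIT, EV_REL);
        ioctl(fd, UI_SET_RELBIT, REL_X);
        ioctl(fd, UI_SET_RELBIT, REL_Y);

        struct uinput_user_dev uidev;
        std::memset(&uidev, 0, sizeof(uidev));
        std::strcpy(uidev.name, "wiimote-pointer");
        uidev.id.bustype = BUS_VIRTUAL;
        write(fd, &uidev, sizeof(uidev));
        ioctl(fd, UI_DEV_CREATE);

        // Nudge the pointer 10 px right and down; EV_SYN flushes the report.
        emit(fd, EV_REL, REL_X, 10);
        emit(fd, EV_REL, REL_Y, 10);
        emit(fd, EV_SYN, SYN_REPORT, 0);

        sleep(1);                 // give the input layer time to pick it up
        ioctl(fd, UI_DEV_DESTROY);
        close(fd);
        return 0;
    }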
In the end I decided to just draw "cursor" objects on the screen and set up each input device to control a separate "cursor" object. This seemed the best idea, as we were short on time.