OS X: Associate an extension with the application - C++

In my application, I need to associate an extension with my application programmatically. That is, when my application runs, it should associate the extensions with itself as the preferred application.
On Windows, this is done by using Registry APIs.
I am not able to find out how to achieve this on Mac OS X using Cocoa or Core Foundation in my C++ program.
In other words, the application should associate itself with the extensions.

I assume you want your app to activate when a document with the overtaken extension is double-clicked in Finder, yes? Unfortunately there is no API to "take over" an extension not already declared by the app's Info.plist and, for what should be obvious security reasons, there is no way to modify that mechanism at runtime.
The closest you can come is allowing any extension, which will allow your app to be launched or activated by dragging the document onto the application icon, but I doubt that's all you want to do, given how you phrased your question.
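For context, the declaration the answer refers to lives in the app bundle's Info.plist. A sketch of a CFBundleDocumentTypes entry follows; the extension shown is only an example, and "*" is what the "allowing any extension" case above would use:

<key>CFBundleDocumentTypes</key>
<array>
    <dict>
        <key>CFBundleTypeName</key>
        <string>Example Document</string>
        <key>CFBundleTypeRole</key>
        <string>Editor</string>
        <key>CFBundleTypeExtensions</key>
        <array>
            <!-- a concrete extension, or "*" to accept any document -->
            <string>myext</string>
        </array>
        <key>LSHandlerRank</key>
        <string>Owner</string>
    </dict>
</array>

This only declares what the app can open; as noted above, it does not make the app the user's preferred handler at runtime.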

Is there any way to make a keylogger in linux without root?

What I'm trying to do
I made a keylogger by reading the event device file, but it needs root permission to work. I want to make a keylogger that works without root permission.
My device
Ubuntu 16.04 using X11
Ubuntu 21.04 using Wayland
My thoughts
I understand that this is feasible on Windows, and that it can also be implemented through Xlib on Linux systems using X11.
But my project needs to run on both X11 and Wayland, so relying on Xlib alone is obviously not enough.
Question
Is there any other way that I can get key logged without root permission?
It may be possible, but any non-root solution will depend on the keyboard virtualization layer. Let us look at how a (modern) OS works:
The hardware is under the exclusive control of the kernel and its drivers. It is possible to implement a keylogger at that level that would only be kernel-dependent, but it requires admin privileges.
If you have a windowing-capable system (X11, X Window), the OS passes low-level events to the window manager, which in turn passes them to the client program. In Windows that part is included in the kernel for historical reasons. Here again it is possible to implement a (still low-level) keylogger, but if the window manager has been started as root, interacting with it as a whole still requires admin privileges. At least the X11 server can be started as a non-admin user process, and in that case the keylogger can also run under the same user.
At the end, the window manager passes events to the client application. On some (windowing) systems it is possible to implement hooks, but they will be restricted to the same process, the same process group, or at least the same user. Whether this is possible, and how to implement it if so, will in any case be window-manager dependent.
That means it may be possible to implement a user-level keylogger, but it will depend on the windowing system and not only on the kernel. Said differently, you will have to look for a Wayland-specific way and an X11-specific way if you want to support both of them.
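For the X11 case, a minimal sketch of what a same-user, non-root approach can look like is polling the keyboard state through Xlib (the XRecord extension is the more complete route, but polling is shorter to show). This is only an illustration and will not work under Wayland:

// Minimal X11 key-state poller - runs as the logged-in user, no root needed.
// Illustrative build line: g++ keypoll.cpp -lX11 -o keypoll
#include <X11/Xlib.h>
#include <X11/XKBlib.h>
#include <cstdio>
#include <unistd.h>

int main() {
    Display *dpy = XOpenDisplay(nullptr);      // connect to the user's X session
    if (!dpy) {
        std::fprintf(stderr, "cannot open display\n");
        return 1;
    }
    char prev[32] = {0};
    for (;;) {
        char keys[32];
        XQueryKeymap(dpy, keys);               // bitmask of all 256 keycodes
        for (int code = 0; code < 256; ++code) {
            bool now = keys[code / 8] & (1 << (code % 8));
            bool was = prev[code / 8] & (1 << (code % 8));
            if (now && !was) {                 // transition: released -> pressed
                KeySym sym = XkbKeycodeToKeysym(dpy, code, 0, 0);
                const char *name = XKeysymToString(sym);
                std::printf("pressed: %s\n", name ? name : "(unknown)");
            }
        }
        for (int i = 0; i < 32; ++i) prev[i] = keys[i];
        usleep(10000);                         // ~10 ms polling interval
    }
    XCloseDisplay(dpy);
}

Under Wayland there is no equivalent client-side API by design, which is exactly the point made above: you need a compositor-specific mechanism instead.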

Control c++ application remotely via web interface

I created a C++ Qt application on Linux with a simple visual interface with three buttons. This application runs on a machine with a certain IP address in my Wi-Fi network. I want to be able to access that visual interface from the browser of a smartphone and click the buttons.
To accomplish this I used a remote desktop connection, but that is just a temporary solution: I want to be able to access my GUI from any smartphone without the need to install anything, and without offering any other functionality... the client should be able to press the three buttons and nothing more.
In other words, I would like to be able to do the following:
After I type the IP address of the Linux machine in the browser of my smartphone, an HTML page opens up with my visual interface with three buttons;
When I press one button, my C++ code starts running in the background;
When I press the buttons again, the C++ app receives the commands and acts accordingly.
And so on, until I click the button that closes my C++ app.
Now, is there a way to accomplish this? In a few words, is it possible to have a web interface act as a GUI for my C++ app? I must admit I am quite ignorant about web applications :) But maybe you know about a Qt widget that solves my problem.
Thanks!
There are a lot of ways to accomplish this.
A simple way is to use QtHttpServer to talk to the objects inside your Qt application and have them do the work you ask for.
You can quickly get started by adapting the example here to your use case.
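A rough sketch of that idea, assuming the Qt 6 QtHttpServer module (the routes and port below are invented for illustration, and the listen/bind API differs between Qt 6 releases):

// Sketch: expose the three button actions of the existing app as HTTP endpoints.
// Assumes Qt 6 with the QtHttpServer module (QT += httpserver).
#include <QCoreApplication>
#include <QHostAddress>
#include <QHttpServer>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);

    QHttpServer server;

    // Serve a trivial page that presents the three "buttons" as links.
    server.route("/", []() {
        return QString("<html><body>"
                       "<a href=\"/button/1\">Button 1</a><br>"
                       "<a href=\"/button/2\">Button 2</a><br>"
                       "<a href=\"/button/3\">Button 3</a>"
                       "</body></html>");
    });

    // Each endpoint would call into the same objects/slots the GUI buttons use.
    server.route("/button/<arg>", [](int id) {
        // ... trigger the corresponding action in the existing application ...
        return QString("Button %1 pressed").arg(id);
    });

    // Port 8080 is arbitrary.
    if (!server.listen(QHostAddress::Any, 8080)) {
        qWarning("Failed to start HTTP server");
        return 1;
    }
    return app.exec();
}

From the phone you would then simply browse to http://<machine-ip>:8080/ and tap the links.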
You may want to check out https://sourceforge.net/projects/conair/ - this library allows you to communicate with your C++ Qt code via JavaScript. You would, however, have to replicate your existing Qt GUI with a traditional web front-end technology - some straightforward HTML and JavaScript would probably suffice.

Managing Windows Through a Qt Application in Linux (C++)

I've been making a simple application which is able to launch a variety of other applications through Qt 5 (using the QProcess class), but I've been running into a few key issues with the design. Specifically, it seems that QProcess cannot set focus to windows that have been created via QProcess's start() function. This means that once a user opens more than one window, it can never return to the previous window that was opened. After looking further into this dilemma, it has become clear that my program will need to handle basic window management in order to fulfill my specifications.
I've decided that the best example to study for my program is Docky, which is capable of opening, closing and switching windows. Looking at the source code for that project was helpful, but it uses many C# system calls for fetching the list of client windows which aren't available to my C++ program.
How can I get a list of all the X11 windows that the client is running and provide basic window management (switch to / open / close a window) using C++? Is there a cross-platform way of doing this through Qt? Can I get this information directly from the X server?
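For what it's worth, there is no fully cross-platform Qt API for enumerating other applications' windows; on X11 the usual route is the EWMH root-window properties that tools like Docky rely on. A rough, X11-only sketch (error handling omitted) that lists the managed client windows and asks the window manager to activate the first one:

// X11-only sketch: list managed client windows via EWMH and activate one.
// Illustrative build line: g++ winlist.cpp -lX11 -o winlist
#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <cstdio>

int main() {
    Display *dpy = XOpenDisplay(nullptr);
    if (!dpy) return 1;
    Window root = DefaultRootWindow(dpy);

    // _NET_CLIENT_LIST is maintained by EWMH-compliant window managers.
    Atom clientList = XInternAtom(dpy, "_NET_CLIENT_LIST", False);
    Atom actualType;
    int actualFormat;
    unsigned long count = 0, bytesAfter;
    unsigned char *data = nullptr;
    XGetWindowProperty(dpy, root, clientList, 0, 1024, False, XA_WINDOW,
                       &actualType, &actualFormat, &count, &bytesAfter, &data);
    Window *windows = reinterpret_cast<Window *>(data);

    for (unsigned long i = 0; i < count; ++i) {
        char *name = nullptr;
        XFetchName(dpy, windows[i], &name);     // WM_NAME, may be empty
        std::printf("0x%lx  %s\n", windows[i], name ? name : "(unnamed)");
        if (name) XFree(name);
    }

    if (count > 0) {
        // Ask the window manager to raise/focus the first window by sending
        // a _NET_ACTIVE_WINDOW client message to the root window.
        XEvent ev = {};
        ev.xclient.type = ClientMessage;
        ev.xclient.window = windows[0];
        ev.xclient.message_type = XInternAtom(dpy, "_NET_ACTIVE_WINDOW", False);
        ev.xclient.format = 32;
        ev.xclient.data.l[0] = 1;               // source indication: application
        XSendEvent(dpy, root, False,
                   SubstructureRedirectMask | SubstructureNotifyMask, &ev);
    }

    if (data) XFree(data);
    XFlush(dpy);
    XCloseDisplay(dpy);
}

From a Qt application you can obtain the Display pointer through Qt's native X11 interface instead of calling XOpenDisplay yourself, but the EWMH part stays the same.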

Global hotkeys in a cross-platform Qt application

I'm creating a cross-platform utility, in C++ using Qt, for which I need to have shortcut keys (or hotkeys, I'm not really sure about the difference). Essentially the application will run and only be visible as an icon in the system tray, and do stuff when you press certain shortcut keys (e.g., Ctrl+Shift+F4 or something).
I am under the impression that Qt doesn't provide a way to handle shortcut keys unless the application is in focus, which, in my case, it won't be. So that's out (if, however, that is a viable option, please clue me in).
I've found plenty of examples/documentation explaining how to do this using Xlib/XCB for Linux, the Win32 API for Windows, and Carbon for OS X, but I'm having a hard time finding a way to do this that would be applicable within the scope of a Qt application.
What would be a way to accomplish what I need?
I'm digging up this old unanswered question because, using QML, I encountered the same issue. The Shortcut QML type allows you to specify a context property, but you still need a focused application or window.
However, I found a library that resolves this issue: QHotkey. It describes itself on GitHub as:
A global shortcut/hotkey for Desktop Qt-Applications.
The QHotkey is a class that can be used to create hotkeys/global shortcuts, aka shortcuts that work everywhere, independent of the application state. This means your application can be active, inactive, minimized or not visible at all and still receive the shortcuts.
QHotkey is available as a package through qpm and can be used directly from C++.
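A minimal sketch of how QHotkey is typically wired up from C++, based on the project's README (the key sequence is only an example):

// Sketch of a tray-style Qt app reacting to a global hotkey via QHotkey.
// Assumes the QHotkey library is available (e.g. added via qpm or as a submodule).
#include <QApplication>
#include <QDebug>
#include <QKeySequence>
#include <QHotkey>

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);

    // Register Ctrl+Shift+F4 system-wide; 'true' registers it immediately.
    QHotkey hotkey(QKeySequence("Ctrl+Shift+F4"), true, &app);
    qDebug() << "Hotkey registered:" << hotkey.isRegistered();

    QObject::connect(&hotkey, &QHotkey::activated, &app, []() {
        // This fires even when the application has no focused window.
        qDebug() << "Global hotkey pressed";
    });

    return app.exec();
}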

How to customize right click with already existing instance of a program?

I need to customize the right-click menu so that I can scan a directory with my anti-virus. I know how to do that using Registry keys, but the problem is that I don't want to start a new instance of my program every time I want to scan a directory. My anti-virus needs to load some signature databases, so it takes around 15 seconds for the program to load them. I need to use the instance of the program that is already open and running to scan the directory. How can I do that?
I am using C++Builder.
Thanks.
Considering you already know how to add the item to the right-click context menu, I suggest implementing a client/server pair of applications:
A server that loads up when you turn your computer on and does the scanning, and
The client that tells it what to do using IPC - inter-process communication.
You then add the client application to various contextual menus, passing it arguments that indicate what it should get the server to do depending on what you right-clicked on.
IPC is a bit of a pain; the easiest way is to use TCP/IP and do local networking using a networking library. There are many out there; however, given you'll likely want other features such as UI elements and a tray icon, I suggest you look at Qt, namely the following components:
QtNetwork: For performing communication between the client and the server executable.
QSystemTrayIcon: For displaying a small icon on the tray.
There are quite a few other little bits of Qt you'll no doubt encounter (like all the fabulous UI stuff), and fortunately Qt is well documented and help is always available here, and from the Qt Developer Network. You can get started with Qt by downloading and installing the SDK:
http://qt.nokia.com/downloads/
Best of luck :).
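To make the client/server shape concrete, here is a rough sketch of the thin client launched from the context menu (the port number and one-line protocol are invented for illustration); it forwards the right-clicked path to the already-running scanner over a local QTcpSocket and exits:

// Sketch of the thin "client" started by the context-menu entry.
// It forwards the right-clicked path to the long-running scanner process
// over a local TCP connection (port 45454 is an arbitrary choice).
#include <QCoreApplication>
#include <QTcpSocket>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);
    if (argc < 2)
        return 1;                       // expected: the path to scan as argv[1]

    QTcpSocket socket;
    socket.connectToHost("127.0.0.1", 45454);
    if (!socket.waitForConnected(3000))
        return 2;                       // scanner/server not running or unreachable

    // Trivial one-line protocol: "SCAN <path>\n"
    socket.write("SCAN ");
    socket.write(argv[1]);              // the path exactly as passed by the shell
    socket.write("\n");
    socket.waitForBytesWritten(3000);
    socket.disconnectFromHost();
    return 0;
}

The always-running scanner holds the matching QTcpServer, reads these lines, and hands the paths to the signature engine that is already loaded, so the 15-second startup cost is not paid per scan.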
Implement a DDE server in your anti-virus, and then add a ddeexec subkey to your Registry key. Alternatively, add an OLE Automation object to your app that implements the IDropTarget interface, and then add a DropTarget subkey to your Registry key that specifies the object's CLSID.
Either way, whenever your menu item is invoked, Windows will call into your existing app instance if it is already running; otherwise it will launch a new instance and then call into it. In both cases Windows handles all of that for you - all you are doing is providing an entry point for Windows to call into.
I would suggest the IDropTarget method, because DDE is deprecated and because IDropTarget is more flexible. While your app is running, you could re-use the same IDropTarget object to handle OLE drag-and-drop operations on your app's UI window and Taskbar button, and support automated invocations of your scanner by other apps.
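To give an idea of the IDropTarget route, here is a heavily trimmed sketch of the COM object the running app would expose. The class factory, the CoRegisterClassObject call and the Registry DropTarget/CLSID entries are omitted, and the names are illustrative:

// Trimmed sketch of the drop-target COM object for the already-running scanner.
#include <windows.h>
#include <ole2.h>
#include <shellapi.h>

class ScanDropTarget : public IDropTarget {
    LONG m_refCount = 1;
public:
    // --- IUnknown ---
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv) override {
        if (riid == IID_IUnknown || riid == IID_IDropTarget) {
            *ppv = static_cast<IDropTarget *>(this);
            AddRef();
            return S_OK;
        }
        *ppv = nullptr;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() override { return InterlockedIncrement(&m_refCount); }
    STDMETHODIMP_(ULONG) Release() override {
        ULONG count = InterlockedDecrement(&m_refCount);
        if (count == 0) delete this;
        return count;
    }

    // --- IDropTarget ---
    STDMETHODIMP DragEnter(IDataObject *, DWORD, POINTL, DWORD *pdwEffect) override {
        *pdwEffect = DROPEFFECT_COPY; return S_OK;
    }
    STDMETHODIMP DragOver(DWORD, POINTL, DWORD *pdwEffect) override {
        *pdwEffect = DROPEFFECT_COPY; return S_OK;
    }
    STDMETHODIMP DragLeave() override { return S_OK; }

    // Windows calls Drop() with the right-clicked item(s) packed as CF_HDROP.
    STDMETHODIMP Drop(IDataObject *pData, DWORD, POINTL, DWORD *pdwEffect) override {
        FORMATETC fmt = { CF_HDROP, nullptr, DVASPECT_CONTENT, -1, TYMED_HGLOBAL };
        STGMEDIUM med = {};
        if (SUCCEEDED(pData->GetData(&fmt, &med))) {
            HDROP hDrop = static_cast<HDROP>(GlobalLock(med.hGlobal));
            UINT fileCount = DragQueryFileW(hDrop, 0xFFFFFFFF, nullptr, 0);
            for (UINT i = 0; i < fileCount; ++i) {
                wchar_t path[MAX_PATH];
                DragQueryFileW(hDrop, i, path, MAX_PATH);
                // Hand 'path' to the signature engine that is already loaded.
            }
            GlobalUnlock(med.hGlobal);
            ReleaseStgMedium(&med);
        }
        *pdwEffect = DROPEFFECT_COPY;
        return S_OK;
    }
};

The registered CLSID is what the DropTarget subkey under your context-menu verb points at; when the menu item is invoked, the shell hands the selected item to that object's Drop() method, which in an out-of-process (local server) registration ends up inside your already-running instance.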