My software is protected by a USB hardware token and I want to achieve the following:
A user should be able to start as many instances of the software as he likes
A second user on the same machine should not be able to start the application if the first user is already running the software.
So basically, if the first user starts the software, I would like to lock the USB port so only this user can access it. The software is written in C++ and runs on Windows 7 or newer and Windows Server 2008 or newer.
Any ideas?
As Jonathon suggested, you can use a global named object. Check the example for global shared memory: https://msdn.microsoft.com/en-us/library/windows/desktop/aa366551(v=vs.85).aspx
When the process starts, it should:
get the user name via GetUserName() (and maybe also the process ID via GetCurrentProcessId()),
open the named shared memory, e.g. "Global\\MyAppRunningInstances",
parse every entry for values like: username, process ID, heartbeat timestamp...
If it finds another process with a different username, check the heartbeat timestamp; maybe that process crashed long ago :) (if that process wasn't the only one, the shared memory mapping will not be destroyed).
If the timestamp indicates that the "alien" process is running, you can quit the current process with a message that another user's session is running.
If no "alien" processes are detected, add/edit the entry for the current user.
Do a periodic heartbeat update with the timestamp.
At the moment I am playing around with bluetooth LE and iBeacon devices.
I wrote a server that keeps looking for nearby beacons.
My server follows this example (Link)
Unfortunately calling the function:
hci_le_set_scan_parameters()
requires root privileges.
As I don't want to run the whole server with root privileges, I wanted to ask if there is any possibility of calling only this function with root privileges.
I am aware that asking for sudo when executing a program is always at least questionable, but I could not find any other way to scan for iBeacons.
If there is another possibility I am happy to hear about it as well.
Thanks for your help and kind regards
nPLus
According to POSIX, UID/GID are process attributes. All code inside your process is executed with UID/GID currently set for the whole process.
You could start the server as root and immediately drop root privileges. Then, you can temporarily regain root privileges using seteuid(2) when executing your function.
See also this answer.
You can also gain only selected capabilities(7) instead (temporarily or permanently).
Thread-safety note
AFAIK, on Linux UID/GID are per-thread attributes, and it's possible to set them for a single thread; see the NOTES section of the seteuid() man page and this post.
If you can move the privileged part into a separate process, I warmly recommend doing so. The parent process will construct at least one Unix domain socket pair, keeping one end for itself, and use the other end as the child process's standard input or output.
The reason for using a Unix domain socket pair is that such a pair is not only bidirectional, but also supports identifying the process at the other end, and passing open file descriptors from one process to another.
For example, if your main process needs superuser access to read a file, perhaps in a specific directory, or otherwise identifiable, you can move the opening of such files into a separate helper program. By using a Unix domain socket pair for communication between the two, the helper program can use getsockopt(ufd, SOL_SOCKET, SO_PEERCRED, &ucred, &ucred_size) to obtain the peer credentials: process ID, effective user ID, and effective group ID. Using readlink() on the pseudofile /proc/PID/exe (where PID is the process ID as a positive decimal number) you can obtain the executable the other end is currently running.
If the target file/device can be opened, then the helper can pass the open file descriptor back to the parent process. (Access checks in Linux are only done when the file descriptor is opened. Read accesses will only be blocked later if the descriptor was opened write-only or the socket read end has been shut down, and write accesses only blocked if the descriptor was opened read-only or the socket write end has been shut down.)
I recommend passing an int as the data, which is 0 if successful with the descriptor as an ancillary message, and an errno error code otherwise (without ancillary data).
However, it is important to consider the possible ways such helpers might be exploited. Limiting them to a specific directory, or perhaps having a system-wide configuration file that specifies allowed path glob patterns (and is not writable by everyone), and using e.g. fnmatch() to check if the passed path is listed, are good approaches.
The helper process can gain privileges either by being setuid, or via Linux filesystem capabilities. For example, giving the helper only the CAP_DAC_OVERRIDE capability would let it bypass file read, write, and execute checks. In Debian derivatives, the command-line tool to manipulate filesystem capabilities, setcap, is in the libcap2-bin package.
If you cannot move the privileged part into a separate process, you can use the interface supported in Linux, BSDs, and HP-UX systems: setresuid(), which sets the real, effective, and saved user IDs in a single call. (There is a corresponding setresgid() call for the real, effective, and saved group IDs, but when using that one, remember that the supplementary group list is not modified; you need to explicitly call setgroups() or initgroups() to modify the supplementary group list.)
There are also filesystem user ID and filesystem group ID, but the C library will set these to match the effective ones whenever effective user and/or group ID is set.
If the process is started with superuser privileges, then the effective user ID will be zero. If you first use getresuid(&ruid, &euid, &suid) and getresgid(&rgid, &egid, &sgid), you can use setresgid(rgid, rgid, rgid) to ensure only the real group identity remains, and temporarily drop superuser privileges by calling setresuid(ruid, ruid, 0). To re-gain superuser privileges, use setresuid(0, ruid, 0), and to permanently drop superuser privileges, use setresuid(ruid, ruid, ruid).
This works, because a process is allowed to switch between real, effective, and saved identities. Effective is the one that governs access to resources.
There is a way to restrict the privilege to a dedicated thread within the process, but it is hacky and fragile, and I don't recommend it.
To keep the privilege restricted to within a single thread, you create custom wrappers around the SYS_setresuid/SYS_setresuid32, SYS_setresgid/SYS_setresgid32, SYS_getresuid/SYS_getresuid32, SYS_getresgid/SYS_getresgid32, SYS_setfsuid/SYS_setfsuid32, and SYS_setfsgid/SYS_setfsgid32 syscalls. (Have the wrapper call the 32-bit version, and if it returns -ENOSYS, fall back to the 16-bit version.)
In Linux, the user and group identities are actually per-thread, not per-process. The standard C library used will use e.g. realtime POSIX signals and an internal handler to signal other threads to switch identity, as part of the library functions that manipulate these identities.
Early in your process, create a privileged thread, which will keep root (0) as the saved user identity, but otherwise copy real identity to effective and saved identities. For the main process, copy real identity to effective and saved identities. When the privileged thread needs to do something, it first sets the effective user identity to root, does the thing, then resets effective user identity to the real user identity. This way the privileged part is limited to this one thread, and is only applied for the sections when it is necessary, so that most common signal etc. exploits will not have a chance of working unless they occur just during such a privileged section.
The downside to this is that it is imperative that none of the identity-changing C library functions (setuid(), seteuid(), setgid(), setegid(), setfsuid(), setfsgid(), setreuid(), setregid(), setresuid(), setresgid()) is used by any code within the process. Because in Linux these C library functions are weak symbols, you can ensure that by replacing them with your own versions: define those functions yourself, with the correct names (both as shown and with two underscores prepended) and parameters.
Of all the various methods, I do believe the separate process with identity verification through a Unix domain socket pair is the most sensible.
It is the easiest to make robust, and can be ported between POSIX and BSD systems at least.
I need to get the process name from a process ID in Windows, to find process names associated with a logged event. Only the executing process ID is available from the logged event. A process handle is the required input for the GetProcessImageFileName() method, and it's not possible to get a process handle from the logged event.
The duplicate question talks about a currently running process. But I'm not asking about a currently running process, since this is about a logged event. I also have a doubt about whether the process ID vs. process name combination is unique in Windows, so that needs to be considered as well.
I expect that there must be some structure to map a process ID to a process name. Is there any such structure? Or any other method to get the process image name from a process ID?
I need to get process name from process id in windows to find process names associated with a logged event.
If you are getting the Process ID from a log, it will only be valid if the original process is still running. Otherwise, the ID is no longer valid for that process name. If the process has already exited before you read the log, all bets are off.
I need not currently running process since it talks about logged event.
Then you are out of luck, if the original process name was not logged.
I have a doubt of whether processID vs processName combination is unique or not in Windows.
A Process ID is unique only while being used for a running process. Once a process ends, its Process ID is no longer valid, and can be re-used for a subsequent new process.
I expect that there must be some structure to map process id to process name.
Yes, but only for a running process. You can pass the Process ID to OpenProcess(). If successful, it will return a HANDLE to the running process. You can then pass that HANDLE to GetModuleFileName(), GetProcessImageFileName(), or QueryFullProcessImageName(), depending on OS version and permissions you are able to gain from OpenProcess().
I'm using WinDivert to pipe connections (TCP and UDP) through a transparent proxy on Windows. This works by doing a port-to-PID lookup using functions like GetTcpTable2, then checking whether the PID matches the PID of the proxy or any of its child processes. If it doesn't match, the packets get forwarded through the proxy; if it does, the packets are untouched.
My question is: is there a safe way, or a safe duration, for which I can "cache" the results of that port-to-PID lookup? Whenever a lot of packets flow through, say when watching a video on YouTube, the code using WinDivert suddenly chomps up all of my CPU, and I'm assuming this is from making a TcpTable2 lookup on every packet received. I can see that with UDP there isn't really a safe duration for which I can assume it's the same process bound to a port, but is this possible with TCP?
As a complement to Luis's comment, I think that the application that caches the port-to-PID lookup could also keep a handle to the processes (just get it through OpenProcess). The problem is that resources associated with a process are not freed until all handles to it are closed. That is by design, because as long as you have a valid handle to a process, you can query the system for various information about it, such as used memory or times. So you should periodically check whether the cached processes have terminated, to purge their entries from the cache and close the handles.
As an alternative, you could just keep another piece of information, such as the starting time of the process, which is accessible through GetProcessTimes. When a cache lookup finds a process ID, you open the process and compare its start time. If it matches, it is the right process; if not, the process ID has been reused and you should purge the entry from the cache.
The first way should be more efficient because you do not have to re-open the process for each packet, but you have to be more strict for identifying terminated processes to release resources, maybe with a thread that would use WaitForMultipleObjectsEx on all process handles to be alerted as soon as one is terminated.
The second way should be simpler to implement.
So, all I ended up doing here was using two std::unordered_maps. One map stores the port number (as the key) and the last system time, in milliseconds, at which the TCP table was queried to find the process ID bound to that port. If the key doesn't exist, or more than 2 seconds have passed since the stored time, then a fresh query to the TCP table is made to re-check the PID bound to the port. After that check, the second map is updated; it also uses the port number as the key and returns the PID found for that port on the last query. This gives a 2-second cache on lookups, which dropped peak CPU usage from well over 50% down to a max of 3%.
Here's the problem:
I don't want multiple instances of my program, so I've disabled them. My program opens a specific MIME type. On my system (Ubuntu 12.04), when I double-click one of these files, this is executed:
/usr/bin/myprogram /path/to/double/clicked/file.myextension
As I said, I don't want multiple instances, so if the program is already running and the user chooses to open one of these files, a DBus message is sent to the existing instance so it can take care of the opened file. So, if there's an already-running instance and the user chooses 3 files to open with my program and hits [Enter], the system executes:
/usr/bin/myprogram /path/to/double/clicked/file1.myextension
/usr/bin/myprogram /path/to/double/clicked/file2.myextension
/usr/bin/myprogram /path/to/double/clicked/file3.myextension
All of these instances detect the already-running instance and send the opened file to it. No problems at all, till now.
But what if there isn't an already-running instance and the user chooses to open 3 files together with my program? The system will again call, concurrently:
/usr/bin/myprogram /path/to/double/clicked/file1.myextension
/usr/bin/myprogram /path/to/double/clicked/file2.myextension
/usr/bin/myprogram /path/to/double/clicked/file3.myextension
and each of these instances will realize that there's an already-running instance, try to send a DBus message to it, and exit. So all 3 processes will do the same thing and nothing will run.
How can I avoid this problem?
PS: In order to detect if there are already running instances I implement the following code:
bool already_runs() {
    // True when pidof prints more than one PID (i.e. another instance
    // besides this process). Note: this check is racy -- two processes
    // starting at the same time can both see no other instance.
    return !system("pidof myprogram | grep \" \" > /dev/null");
}
I would use some shared memory to store the pid of the first process. The QSharedMemory class will help you here.
The first thing your program should do is try to create a shared memory segment (using your own made up key) and store your pid inside it. If the create call fails, then you can try to attach to the segment instead. If that succeeds then you can read the pid of the original process from it.
EDIT: also, remember to use lock() before writing to or reading from the shared memory, and call unlock() when you are done.
The standard way to do this in DBus is to acquire your application's name on the bus; one instance will win the race and can become the running instance.
However, you should be able to do this using Qt functionality which will integrate better with the rest of your application; see Qt: Best practice for a single instance app protection.
I'm searching for different options for implementing communication between a service and other services/applications.
What I would like to do:
I have a service that is constantly running, polling a device connected to a serial port. At certain points, this service should send a message to interested clients containing data retrieved from the device. Data is uncomplicated, most likely just a single string.
Ideally, the clients would not have to subscribe to receive these messages, which leads me to some sort of event 'broadcast' setup (similar to Windows events). The message sending process should not block, and does not need a response from any clients (or that there even are any clients for that matter).
I've been reading about IPC (COM in particular) and Windows events, but have yet to come across something that really fits what I want to do.
So is this possible? If so, what technologies should I be using? If not, what are some viable communication alternatives?
Here's the particulars of the setup:
Windows 2000/XP environments
'Server' service is a windows service, using VC++2005
Clients would vary, but always be in the windows environment (usual clients would be VC++6 windows services, VB6 applications)
Any help would be appreciated!
Windows supports broadcasting messages; check here. You can SendMessage to HWND_BROADCAST from the service and receive the message in each client.
There are a number of ways to build a broadcast system, but you'll have to either give up reliability (i.e., some messages may be lost) or use a proper subscription system.
If you're willing to give up reliability, you can create a shared memory segment and a named manual-reset event object. When a new message arrives, write it to the shared memory segment, signal the event object, then close the event object and create a new one with a different name (the current name should be stored somewhere in the shmem segment). Clients open the shmem segment, find the current event object, wait for it to be signaled, then read off the message and the name of the next event object.
In this option, you must be careful to deal properly with the case of a client reading at the same time as the shmem segment is updated. One way to do this is to have two sequence-number fields in the shmem segment: one updated before the new message is written, one after. Clients read the second sequence number prior to reading the message, then re-read both sequence numbers afterwards, and check that all are equal (discarding the message and retrying after a delay if they are not). Be sure to place memory barriers around accesses to these sequence numbers to ensure the compiler does not reorder them!
Of course, this is all a bit hairy. Named pipes are a lot simpler, but a subscription (of a sort) is required. The server calls CreateNamedPipe, then accepts connections with ConnectNamedPipe. Clients use CreateFile to connect to the server's pipe. The server then just loops to send data (using WriteFile) to all of its clients. Note that you will need to create an additional instance of the pipe using CreateNamedPipe each time you accept a connection. An example of a named pipe server can be found here: http://msdn.microsoft.com/en-us/library/aa365588(v=vs.85).aspx