I am trying to use shared memory between a user process and the kernel.
Option one: let the kernel create the section and let the user-mode app open the memory by name ("Global\my_mem"). This works only in read-only mode. When I try to open the section with FILE_MAP_WRITE, it fails with access denied (5). I am not sure how to grant access or modify the DACL.
Option two: pass a handle back via IOCTL. This one is questionable, since a handle to a section opened in the kernel looks like 0xFFFFFFFF80001234. My understanding is that handles with any of the upper bits set cannot be used in user mode, especially if the app is 32-bit :) Initially I expected the section handle to be somewhat similar to a kernel file handle and that I would be able to use it.
What would be the correct approach to establishing a shared memory channel between kernel and user mode?
For option 1, you can specify the security descriptor assigned to the newly created object via the SecurityDescriptor member of the OBJECT_ATTRIBUTES structure.
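As a rough illustration (not your code): the sketch below creates a 64 KB section named \BaseNamedObjects\my_mem, which a user-mode app can open as "Global\my_mem". The NULL DACL used here grants everyone full access, which avoids the access-denied error but is far too permissive for production, where you would build a proper DACL instead; error handling is omitted.
#include <ntifs.h>

NTSTATUS CreateSharedSection(PHANDLE SectionHandle)
{
    SECURITY_DESCRIPTOR sd;
    RtlCreateSecurityDescriptor(&sd, SECURITY_DESCRIPTOR_REVISION);
    RtlSetDaclSecurityDescriptor(&sd, TRUE, NULL, FALSE);   // NULL DACL: everyone gets full access

    UNICODE_STRING name = RTL_CONSTANT_STRING(L"\\BaseNamedObjects\\my_mem");
    OBJECT_ATTRIBUTES oa;
    InitializeObjectAttributes(&oa, &name, OBJ_CASE_INSENSITIVE | OBJ_KERNEL_HANDLE, NULL, &sd);

    LARGE_INTEGER maxSize;
    maxSize.QuadPart = 64 * 1024;
    return ZwCreateSection(SectionHandle, SECTION_ALL_ACCESS, &oa,
                           &maxSize, PAGE_READWRITE, SEC_COMMIT, NULL);
}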
For option 2, you would need to create an additional handle as a user handle, which you do by not specifying the OBJ_KERNEL_HANDLE flag in the OBJECT_ATTRIBUTES structure. This will only work if you open the new handle while running in the context of a thread belonging to the user application's process, e.g., while processing an IOCTL received from the user application.
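A hedged sketch of that, assumed to run inside a METHOD_BUFFERED IOCTL handler (so we are in the requesting process's context); the section name and the use of the IOCTL's system buffer to return the handle are my assumptions, and error/size checks are trimmed.
#include <ntifs.h>

// Called from the IOCTL dispatch routine, i.e. in the caller's process context.
NTSTATUS GiveSectionHandleToCaller(PIRP Irp)
{
    UNICODE_STRING name = RTL_CONSTANT_STRING(L"\\BaseNamedObjects\\my_mem");
    OBJECT_ATTRIBUTES oa;
    HANDLE userHandle = NULL;

    // No OBJ_KERNEL_HANDLE: the handle lands in the current process's handle
    // table, so the user-mode application can use it directly.
    InitializeObjectAttributes(&oa, &name,
                               OBJ_CASE_INSENSITIVE | OBJ_FORCE_ACCESS_CHECK, NULL, NULL);
    NTSTATUS status = ZwOpenSection(&userHandle, SECTION_MAP_READ | SECTION_MAP_WRITE, &oa);
    if (NT_SUCCESS(status)) {
        *(HANDLE *)Irp->AssociatedIrp.SystemBuffer = userHandle;   // hand it back to the app
        Irp->IoStatus.Information = sizeof(HANDLE);
    }
    return status;
}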
Another option is for the kernel driver to map the section into the user-mode application's address space itself, using ZwMapViewOfSection.
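A sketch of that option; it assumes we are running in the context of the target process (e.g. while handling its IOCTL) and that sectionHandle is the driver's handle to the section. Error handling is omitted.
#include <ntifs.h>

// Must run in the context of the process that should receive the mapping.
PVOID MapSectionIntoCurrentProcess(HANDLE sectionHandle)
{
    PVOID base = NULL;
    SIZE_T viewSize = 0;              // 0 = map the whole section
    LARGE_INTEGER offset;
    offset.QuadPart = 0;

    ZwMapViewOfSection(sectionHandle, ZwCurrentProcess(), &base, 0, 0,
                       &offset, &viewSize, ViewUnmap, 0, PAGE_READWRITE);
    // 'base' is a user-mode address valid only in that process; return it to
    // the application, e.g. in the IOCTL output buffer.
    return base;
}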
One issue with using a section is that the driver itself can only safely access it from a system thread. If that is a problem, you can share memory directly rather than via a section. If you allocate the memory in kernel mode, you can map it into the user-mode application's address space using MmMapLockedPagesSpecifyCache.
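A sketch of the MmMapLockedPagesSpecifyCache route; the buffer size and pool tag are placeholders, the code is assumed to run in the context of the target process, allocation failures are not checked, and cleanup (MmUnmapLockedPages, IoFreeMdl, freeing the pool) is only hinted at. Note that the mapping call can raise an exception, hence the __try/__except.
#include <ntifs.h>

// Must run in the context of the process that should see the mapping.
PVOID ShareKernelBuffer(PMDL *MdlOut)
{
    SIZE_T length = 64 * 1024;                                   // placeholder size
    PVOID buffer = ExAllocatePoolWithTag(NonPagedPoolNx, length, 'fuBS');
    PMDL mdl = IoAllocateMdl(buffer, (ULONG)length, FALSE, FALSE, NULL);
    PVOID userAddress = NULL;

    MmBuildMdlForNonPagedPool(mdl);
    __try {
        userAddress = MmMapLockedPagesSpecifyCache(mdl, UserMode, MmCached,
                                                   NULL, FALSE, NormalPagePriority);
    } __except (EXCEPTION_EXECUTE_HANDLER) {
        userAddress = NULL;                                      // mapping fails by raising
    }
    *MdlOut = mdl;   // keep the MDL; undo later with MmUnmapLockedPages/IoFreeMdl/ExFreePoolWithTag
    return userAddress;
}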
Yet another option is for the driver to access a memory buffer allocated by the user-mode process.
The downside to either of these approaches is that the buffer (or the part of it being shared) must be locked in memory, whereas using a section allows the buffer to be pageable.
Since you referred to a 32-bit app, I assume this is between a user process and a device driver. In that case I would go with IOCTLs: METHOD_IN_DIRECT (the driver reads data from the user's buffer) and METHOD_OUT_DIRECT (the driver writes data into the user's buffer).
If the shared memory is between multiple user processes and one or more device drivers, using a shared memory object (section) is the recommended method.
I have a shared library that hands out an integer handle to a client after a successful connection request. Something like:
int ConnectionRequest(const std::string& authorization_token);
Subsequent actions then need to use that handle to access further operations:
result DoOperation(int handle, const std::string& payload);
It occurred to me that a second client could hijack the connection simply by presenting a plausible handle value to the interface. How do I uniquely link the handle to the client that made the original request? Is there a way to get the process ID from the client and check against it?
Internally I use a std::map to link the handle to a shared_ptr object. All this is in user space.
Coding on Linux in C++.
Both Linux and Windows have already solved this problem - you can look to their implementations for a working method.
In short, use multiple tables.
When referencing handles given out by the kernel, the system needs to ensure that a rogue process can't just steal your file handle and access your data. To do so, the system creates a per-process handle table that contains only the handles relevant to your process. If a rogue process steals one of your handles and tries to use it, they won't be able to access your data - the OS will index their handle table, and find either nothing, or one of their resources.
You can duplicate this behavior by looking up a handle table via the process ID first, then looking up the structure with the handle provided. If the process ID doesn't exist in the handle-table map, return an error. Otherwise, run the function on the structure referenced by the handle, if it exists.
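A minimal C++ sketch of that two-level lookup; the type names are illustrative rather than part of your library, and the caller's pid is assumed to come from the IPC layer (for example SO_PEERCRED on a Unix domain socket).
#include <map>
#include <memory>
#include <sys/types.h>

struct Connection { /* per-connection state */ };

class HandleTable {
public:
    // Register a new connection for the calling client, identified by its pid.
    int Add(pid_t pid, std::shared_ptr<Connection> conn) {
        int handle = next_handle_++;
        tables_[pid][handle] = std::move(conn);
        return handle;
    }

    // Resolve a handle, but only within the caller's own table.
    std::shared_ptr<Connection> Find(pid_t pid, int handle) const {
        auto per_process = tables_.find(pid);
        if (per_process == tables_.end()) return nullptr;        // unknown client
        auto entry = per_process->second.find(handle);
        if (entry == per_process->second.end()) return nullptr;  // not their handle
        return entry->second;
    }

private:
    int next_handle_ = 1;
    std::map<pid_t, std::map<int, std::shared_ptr<Connection>>> tables_;
};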
I'm writing a kernel driver which should read (and in some cases also write) some memory addresses in kernel session space (win32k.sys). I've read in another topic that, for example, in WinDbg I should switch the context to an arbitrary user process to read session-space memory (with .process /p). How can I do that in a kernel driver? Should I create a user process that communicates with the driver (that's my idea now, but I hope there is a better solution), or is there a simpler way?
Session space is not mapped into the system address space (which drivers share when not attached to any process). That is why you get a BSOD when accessing win32k.
You need to be attached to an EPROCESS via KeStackAttachProcess to perform this operation. You can get the session ID with ZwQueryInformationProcess(ProcessSessionInformation).
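A sketch of the attach/detach sequence; how you obtain the csrss process ID (csrssPid below) is up to you and is assumed here, and error handling is trimmed.
#include <ntifs.h>

VOID ReadSessionSpace(HANDLE csrssPid)
{
    PEPROCESS csrss = NULL;
    KAPC_STATE apc;

    if (NT_SUCCESS(PsLookupProcessByProcessId(csrssPid, &csrss))) {
        KeStackAttachProcess(csrss, &apc);      // csrss's session space is now mapped
        // ... read/write the win32k.sys addresses here ...
        KeUnstackDetachProcess(&apc);
        ObDereferenceObject(csrss);
    }
}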
Kernel memory space is shared among all kernel components (much like real/unprotected mode in DOS and early Windows versions). A kernel driver can access any address within kernel space, whether it belongs to it or not.
You must find and attach to the csrss process!
win32k.sys is not mapped into the address space of every process, only into csrss.
You should stack-attach to the csrss process.
At the moment I am playing around with bluetooth LE and iBeacon devices.
I wrote a server that keeps looking for nearby beacons.
My server follows this example (Link)
Unfortunately calling the function:
hci_le_set_scan_parameters()
requires root privileges.
As I don't want to run the whole server with root privileges, I wanted to ask whether there is any way to call only this function with root privileges.
I am aware that asking for sudo when executing a program is always at least questionable, but I could not find any other way to scan for iBeacons.
If there is another possibility I am happy to hear about it as well.
Thanks for your help and kind regards
nPLus
According to POSIX, UID/GID are process attributes. All code inside your process is executed with UID/GID currently set for the whole process.
You could start the server as root and immediately drop the effective root privileges. Then you can temporarily regain root privileges using seteuid(2) when executing your function.
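A minimal sketch of that pattern, assuming the server is started as root and UNPRIV_UID is the account you normally want to run as (a placeholder); real code must check every return value.
#include <unistd.h>

#define UNPRIV_UID 1000          /* placeholder: the uid to run as normally */

int main(void)
{
    seteuid(UNPRIV_UID);         /* drop: effective uid is now unprivileged,   */
                                 /* the saved uid stays 0 so we can come back  */

    /* ... normal, unprivileged server work ... */

    seteuid(0);                  /* temporarily regain root                    */
    /* hci_le_set_scan_parameters(...);  -- the privileged call                */
    seteuid(UNPRIV_UID);         /* drop again immediately                     */

    return 0;
}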
See also this answer.
You can also gain only selected capabilities(7) instead, either temporarily or permanently.
Thread-safety note
AFAIK, on Linux UID/GID are per-thread attributes and it's possible to set them for a single thread; see the NOTES section of the seteuid() man page and this post.
If you can move the privileged part into a separate process, I warmly recommend doing so. The parent process constructs at least one Unix domain socket pair, keeping one end for itself and giving the other end to the child process as its standard input or output.
The reason for using an Unix domain socket pair is that such a pair is not only bidirectional, but also supports identifying the process at the other end, and passing open file descriptors from one process to another.
For example, if your main process needs superuser access to read a file, perhaps in a specific directory, or otherwise identifiable, you can move the opening of such files into a separate helper program. By using an Unix domain socket pair for communication between the two, the helper program can use getsockopt(ufd, SOL_SOCKET, SO_PEERCRED, &ucred, &ucred_size) to obtain the peer credentials: process ID, effective user ID, and effective group ID. Using readlink() on the pseudofile /proc/PID/exe (where PID is the process ID as a positive decimal number) you can obtain the executable the other end is currently running.
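A sketch of that credential check from the helper's side; ufd is assumed to be the helper's end of the socket pair, and error handling is trimmed.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE              /* for struct ucred with glibc */
#endif
#include <limits.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static void check_peer(int ufd)
{
    struct ucred cred;
    socklen_t len = sizeof cred;

    if (getsockopt(ufd, SOL_SOCKET, SO_PEERCRED, &cred, &len) == 0) {
        char link[64], exe[PATH_MAX + 1];
        snprintf(link, sizeof link, "/proc/%ld/exe", (long)cred.pid);
        ssize_t n = readlink(link, exe, PATH_MAX);   /* which binary is at the other end? */
        if (n >= 0) {
            exe[n] = '\0';
            fprintf(stderr, "peer pid=%ld uid=%ld gid=%ld exe=%s\n",
                    (long)cred.pid, (long)cred.uid, (long)cred.gid, exe);
        }
    }
}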
If the target file/device can be opened, then the helper can pass the open file descriptor back to the parent process. (Access checks in Linux are only done when the file descriptor is opened. Read accesses will only be blocked later if the descriptor was opened write-only or the socket read end has been shut down, and write accesses only blocked if the descriptor was opened read-only or the socket write end has been shut down.)
I recommend passing an int as the data, which is 0 if successful with the descriptor as an ancillary message, and an errno error code otherwise (without ancillary data).
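A sketch of that reply convention on the helper's side: a single int (0 for success, an errno code otherwise), with the open descriptor attached via SCM_RIGHTS only on success. The function name is illustrative.
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int send_result(int sock, int err, int fd)
{
    int data = err;                       /* 0 = success, otherwise an errno code */
    struct iovec iov;
    struct msghdr msg;
    union {
        struct cmsghdr hdr;
        char buf[CMSG_SPACE(sizeof(int))];
    } u;

    iov.iov_base = &data;
    iov.iov_len = sizeof data;
    memset(&msg, 0, sizeof msg);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;

    if (err == 0 && fd >= 0) {            /* attach the descriptor only on success */
        struct cmsghdr *cmsg;
        msg.msg_control = u.buf;
        msg.msg_controllen = sizeof u.buf;
        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
    }
    return sendmsg(sock, &msg, 0) == (ssize_t)sizeof data ? 0 : -1;
}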
However, it is important to consider the possible ways such helpers might be exploited. Limiting them to a specific directory, or perhaps having a system-wide configuration file that specifies allowed path glob patterns (and is not writable by everyone), and using e.g. fnmatch() to check whether the passed path is listed, are good approaches.
The helper process can gain privileges either by being setuid, or via Linux filesystem capabilities. For example, giving the helper only the CAP_DAC_OVERRIDE capability would let it bypass file read, write, and execute checks. In Debian derivatives, the command-line tool to manipulate filesystem capabilities, setcap, is in the libcap2-bin package.
If you cannot move the privileged part into a separate process, you can use the interface supported in Linux, BSDs, and HP-UX systems: setresuid(), which sets the real, effective, and saved user IDs in a single call. (There is a corresponding setresgid() call for the real, effective, and saved group IDs, but when using that one, remember that the supplementary group list is not modified; you need to explicitly call setgroups() or initgroups() to modify the supplementary group list.)
There are also filesystem user ID and filesystem group ID, but the C library will set these to match the effective ones whenever effective user and/or group ID is set.
If the process is started with superuser privileges, then the effective user ID will be zero. If you first use getresuid(&ruid, &euid, &suid) and getresgid(&rgid, &egid, &sgid), you can use setresgid(rgid, rgid, rgid) to ensure only the real group identity remains, and temporarily drop superuser privileges by calling setresuid(ruid, ruid, 0). To regain superuser privileges, use setresuid(ruid, 0, 0), and to permanently drop superuser privileges, use setresuid(ruid, ruid, ruid).
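The same sequence as code; it assumes the process starts with effective user ID 0, and every call must be checked for failure in real code (abort rather than continue if one fails).
#ifndef _GNU_SOURCE
#define _GNU_SOURCE              /* for getresuid()/setresuid() declarations on glibc */
#endif
#include <unistd.h>

static void identity_dance(void)
{
    uid_t ruid, euid, suid;
    gid_t rgid, egid, sgid;

    getresuid(&ruid, &euid, &suid);
    getresgid(&rgid, &egid, &sgid);

    setresgid(rgid, rgid, rgid);   /* keep only the real group identity            */
    setresuid(ruid, ruid, 0);      /* temporarily drop root; the saved uid stays 0 */

    /* ... unprivileged work ... */

    setresuid(ruid, 0, 0);         /* regain superuser for a privileged section    */
    /* ... privileged work ... */
    setresuid(ruid, ruid, 0);      /* drop again                                   */

    setresuid(ruid, ruid, ruid);   /* or: give up superuser privileges for good    */
}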
This works, because a process is allowed to switch between real, effective, and saved identities. Effective is the one that governs access to resources.
There is a way to restrict the privilege to a dedicated thread within the process, but it is hacky and fragile, and I don't recommend it.
To keep the privilege restricted to within a single thread, you create custom wrappers around the SYS_setresuid/SYS_setresuid32, SYS_setresgid/SYS_setresgid32, SYS_getresuid/SYS_getresuid32, SYS_getresgid/SYS_getresgid32, SYS_setfsuid/SYS_setfsuid32, and SYS_setfsgid/SYS_setfsgid32 syscalls. (Have the wrapper call the 32-bit version, and if it returns -ENOSYS, fall back to the 16-bit version.)
In Linux, the user and group identities are actually per-thread, not per-process. The standard C library used will use e.g. realtime POSIX signals and an internal handler to signal other threads to switch identity, as part of the library functions that manipulate these identities.
Early in your process, create a privileged thread, which will keep root (0) as the saved user identity, but otherwise copy real identity to effective and saved identities. For the main process, copy real identity to effective and saved identities. When the privileged thread needs to do something, it first sets the effective user identity to root, does the thing, then resets effective user identity to the real user identity. This way the privileged part is limited to this one thread, and is only applied for the sections when it is necessary, so that most common signal etc. exploits will not have a chance of working unless they occur just during such a privileged section.
The downside to this is that none of the identity-changing C library functions (setuid(), seteuid(), setgid(), setegid(), setfsuid(), setfsgid(), setreuid(), setregid(), setresuid(), setresgid()) may be used by any code within the process. Because in Linux these C library functions are weak symbols, you can ensure that by replacing them with your own versions: define those functions yourself, with the correct names (both as shown and with two leading underscores) and parameters.
Of all the various methods, I do believe the separate process with identity verification through an Unix domain socket pair is the most sensible.
It is the easiest to make robust, and can be ported between POSIX and BSD systems at least.
I'm looking for a possibility to create a shared memory block on Windows platforms that is write-protected for all processes except for the process that created the shared memory block.
In detail I need the following:
Process (1) has to create a shared memory block and should be able to modify the buffer.
Process (2) should be able to open and read the created shared memory block but must not have permission to modify the content. This is important for security/safety reasons.
Currently I have a solution that creates a shared memory block using CreateFileMapping() together with MapViewOfFile(), which then has read and write permission in both process (1) and process (2):
HANDLE handle = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE, highSize, lowSize, L"uniquename");
void* sharedMemory = MapViewOfFile(handle, FILE_MAP_ALL_ACCESS, 0, 0, 0);
// now we can modify sharedMemory...
These two lines of code can be applied in both processes as the first process creates the shared memory block and the second process simply opens the shared memory.
However, obviously the second process will have write permission due to the provided access values (PAGE_READWRITE and FILE_MAP_ALL_ACCESS) during creation of the memory block.
I would need to create the shared memory block in process (1) using the access values PAGE_READONLY and FILE_MAP_READ, but then obviously I'm not allowed to initialize/set/modify the memory block in process (1), which makes it a useless memory buffer.
To the best of my knowledge, defining security attributes cannot solve the problem, as my problem does not depend on users or groups.
I even would be happy with a solution which creates a shared memory block in process (1) relying on memory content that is known before the creation of the shared memory block (and which will not be modified in process (1) afterwards).
Do you trust process #2 to use FILE_MAP_READ? That will prevent accidental overwrites from e.g. wild pointers corrupting the shared memory.
If you're trying to protect against malicious overwrites, then you need to use the OS-provided security principals and run process #2 in a different session with lesser credentials. If process #2 runs under the same security credentials as process #1, then it can perform any operation process #1 can perform (for example, by injecting code into process #1).
(On Windows, users are security principals and processes are not. Users are not the only level of restriction; for example, User Account Control in Vista and later creates tokens corresponding to an administrative user both with and without the Administrators group membership.)
Since you say process #1 doesn't need continuing write access, only one time, you could create the mapping, map it for write, then adjust the ACL using SetSecurityInfo so that future accesses cannot write.
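A hedged sketch of tightening the DACL after the initial writable mapping, using SetEntriesInAcl to build a read-only ACE for Everyone; handle is assumed to be the mapping handle from CreateFileMapping, and error handling is omitted.
#include <windows.h>
#include <aclapi.h>

// 'handle' is the file-mapping handle that process #1 has already mapped for write.
void MakeMappingReadOnlyForOthers(HANDLE handle)
{
    EXPLICIT_ACCESS_W ea = {};
    ea.grfAccessPermissions = FILE_MAP_READ;            // future opens: read-only
    ea.grfAccessMode = SET_ACCESS;
    ea.grfInheritance = NO_INHERITANCE;
    ea.Trustee.TrusteeForm = TRUSTEE_IS_NAME;
    ea.Trustee.TrusteeType = TRUSTEE_IS_WELL_KNOWN_GROUP;
    ea.Trustee.ptstrName = const_cast<LPWSTR>(L"EVERYONE");

    PACL dacl = nullptr;
    SetEntriesInAclW(1, &ea, nullptr, &dacl);
    SetSecurityInfo(handle, SE_KERNEL_OBJECT, DACL_SECURITY_INFORMATION,
                    nullptr, nullptr, dacl, nullptr);
    LocalFree(dacl);
}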
Another possibility is to instead map a disk file, and open it with FILE_SHARE_READ (but not FILE_SHARE_WRITE) access from the first process.
But neither of these prevent process #2 from coercing process #1 to make changes on its behalf. Only using separate tokens can prevent coercion.
You don't explain why you can't provide different arguments in each case, so I'm going to assume you don't know which process is the creator until you have the file open. In which case, you might want to try:
HANDLE h = OpenFileMapping(FILE_MAP_READ, /*args*/);
if (h) {
    v = MapViewOfFile(h, FILE_MAP_READ, 0, 0, 0);
} else {
    h = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE, highSize, lowSize, L"uniquename");
    if (!h)
        FirePhotonTorpedoes();
    v = MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, 0);
}
The CreateFileMapping function allows you to set an ACL for the file mapping object. If you create an ACL that only allows read-only access, other processes should be unable to open the file mapping object with read-write access.
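A hedged sketch of that idea, building the DACL from an SDDL string so other openers only get read access; the size and name are placeholders, and error handling is omitted.
#include <windows.h>
#include <sddl.h>

HANDLE CreateReadOnlySharedSection(DWORD size)
{
    PSECURITY_DESCRIPTOR psd = nullptr;
    // D:(A;;GR;;;WD) = a DACL with a single ACE: allow GENERIC_READ to Everyone.
    ConvertStringSecurityDescriptorToSecurityDescriptorW(
        L"D:(A;;GR;;;WD)", SDDL_REVISION_1, &psd, nullptr);

    SECURITY_ATTRIBUTES sa = { sizeof sa, psd, FALSE };
    HANDLE h = CreateFileMappingW(INVALID_HANDLE_VALUE, &sa, PAGE_READWRITE,
                                  0, size, L"uniquename");
    LocalFree(psd);
    // As noted below, the creating handle itself is typically not restricted by this DACL.
    return h;
}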
Typically, when creating an object, the permissions you assign don't apply to the handle you obtain during creation. However, I haven't tested this with CreateFileMapping in particular.
This only provides weak protection. A malicious process could change the permissions on the file mapping object, or inject code into the process that created the object.
I am currently developing a modular framework using shared memory in C & C++.
The goal is to have independent programs in both C and C++, talk to each other through shared memory.
E.g. one program is responsible for reading a GPS and another responsible for processing the data from several sensors.
A master program will start all the slave programs (currently I am using fp = popen("./slave1/slave1", "r"); to do this) and then create shared memory segments that each slave can connect to.
The thought behind this is that if a slave dies, it can be revived by the master and reconnect to the same shared memory segment.
Slaves can also be exchanged during runtime (e.g. switch one GPS with another).
The problem is that I spawn the slave via popen, and pass the shared memory ID to the slave. Via the pipe the slave transmits back the size needed.
After this is done, I want to reroute the slave's pipe to the terminal to display debug messages rather than passing them through the master.
Suggestions are greatly appreciated, as well as other solutions to the issue.
The key is to have some form of communication prior to setting up the shared memory.
I suggest using another means of communication; named pipes are the way to go, I think. Rerouting standard out/err will be tricky at best.
I suggest using Boost.Interprocess to handle the IPC (see the sketch below). And be attentive to synchronization :)
my2c
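A minimal Boost.Interprocess sketch of the suggestion above; the segment and object names are placeholders, and open_or_create means a revived slave can simply reattach to the same segment.
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>

namespace bip = boost::interprocess;

struct GpsSample {
    bip::interprocess_mutex mutex;   // synchronizes readers and writers across processes
    double latitude = 0.0;
    double longitude = 0.0;
};

int main()
{
    bip::managed_shared_memory segment(bip::open_or_create, "framework_shm", 65536);
    GpsSample *sample = segment.find_or_construct<GpsSample>("gps_sample")();

    {
        bip::scoped_lock<bip::interprocess_mutex> lock(sample->mutex);
        sample->latitude = 52.52;    // a GPS slave would write its readings here
        sample->longitude = 13.405;
    }
    return 0;
}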
You may want to look into the SCM_RIGHTS transfer mode of unix domain sockets - this lets you pass a file descriptor across a local socket. This can be used to pass stderr and the like to your slave processes.
You can also pass shared memory segments as a file descriptor as well (at least on Linux). Create a file with a random name in /dev/shm and unlink it immediately. Now ftruncate to the desired size and mmap with MAP_SHARED. Now you have a shared memory segment tied to a file descriptor. One of the nice things about this approach is the shared memory segment is automatically destroyed if all processes holding it terminate.
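A sketch of that descriptor-backed shared memory setup on Linux; the name template and size are placeholders, and error checks are trimmed.
#include <fcntl.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

static int make_shm_fd(size_t size)
{
    char name[] = "/dev/shm/slave-shm-XXXXXX";
    int fd = mkstemp(name);          /* create a file with a random name...            */
    unlink(name);                    /* ...and unlink it at once: only the fd remains  */
    ftruncate(fd, (off_t)size);      /* set the segment size                           */
    return fd;                       /* pass this fd to the slave over the socket pair */
}

/* Either process can then map it:
     void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);            */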
Putting this together, you can pass your slaves one end of a unix domain socketpair(). Just leave the fd open over exec, and pass the fd number on the command line. Pass whatever configuration information you want as normal data over the socket pair, then hand over a file descriptor pointing to your shared memory segment.
Pipes are not reroutable -- they go wherever they pointed when they were created. What you need to do is have the slave close the pipe when it's done with it and then reopen its stdout elsewhere. If you always want output to go to the terminal, you can use freopen("/dev/tty", "w", stdout), but then it will always go to the terminal -- you can't redirect it anywhere else.
To address the specific issue, send debug messages to stderr, rather than stdout.