Linux, execve: how to run a child binary and pass a payload efficiently? - c++

Until today, when starting up a child program in my application (distributed computing), I used execv, passing as an argument a filename where a payload is stored.
So i had two files:
1) child-program.binary (+x)
2) child-program.payload (+r)
When child-program.binary executed, it knew it had to load child-program.payload on startup; then the computing took place and the new payload was stored back into the child-program.payload file.
I would like to change that: instead of storing the payload on the hard drive, I would love to run the binary and pass it the payload a different way, maybe via pipes?
Also, do i have to store the binary on the hard disk to be able to run it?
Isn't there some memory-like option to execute something?
What are the possible options?
Thanks all !

The advantage of your file approach is that it is non-volatile and the data can easily be distributed around the globe as a file.
Based on your thought about pipes, I assume your "distributed computing" runs on a single node. You could also use shared memory (see shm_open) and pass the name of your shared-memory "file" to the child.
BTW, pipes or FIFOs let you easily synchronize using poll/select. AFAIK you need a bit more infrastructure to synchronize access to shared memory.

Related

Input the output of one program into another program

I am trying to make billing software, for which I need to take the output of one C++ program as input to a second program.
eg:
In program 1, the user chose multiple things, which brought the value 'bill' to 100.
I need program 2 to read the value of 'bill', which was 100 at the end of program 1, then add/subtract whatever the user does to this (integer?) and print the final value at the end of program #2.
I don't know if I explained it correctly, but you can get a rough idea of what I meant.
As mentioned in the comments, there are many alternatives. However, I do not suggest solutions like writing to a file and reading it from the other program, because you can run into synchronization issues while accessing the file, or bottlenecks from file I/O performance (perhaps not an issue in this case, but it can matter during big data transfers). Using mechanisms such as pipes or sockets would be a better solution.
If your software is using Qt Framework, I recommend using Qt Remote Objects. Both PyQt and Qt with C++ support QtRO communication.
In QtRO, the objects can be shared between applications through a defined interface. The source node (program 1) shares the object that contains bill. The clients can access the replicas of the shared object and get properties. When the replica is received, it can be used like any other QObject.
For more information about Qt Remote Objects check out: https://doc.qt.io/qt-6/qtremoteobjects-index.html

How XML DOM Object is being loaded in Memory from Disk

Hello and happy new year.
I need a little guide through process of loading a XML DOM from disk to memory with C++, on windows.
Microsoft provides this example, but it doesn't cover which NT kernel functions are actually used to do this, and it doesn't explain what process is behind the actual load.
Does the main process make a call to a kernel function to load the XML from disk to memory?
VariantFromString(L"stocks.xml", varFileName);
pXMLDom->load(varFileName, &varStatus);
Or is there a global process that handles load requests, loads the XML via kernel functions, creates a link to the DOM object, and returns it to the process that asked?
I want to know which kernel function does the job of loading the .xml file from disk.
Thanks !
There is no kernel function for 'loading XML' (at least not one used by the DOMDocument60 coclass).
Instead it simply uses generic file-reading calls (in the kernel this is ZwReadFile); the DOMDocument60 code then parses the file content into whatever internal representation it uses.
The only context switch involved is between user and kernel mode, not between one process, kernel mode, and another process (unless perhaps some kind of user-mode file system is involved, but if it were, you likely wouldn't need to be asking this question).

Passing arguments to daemon from other processes

I recently wrote a daemon in C++ that backs up certain folders by periodically copying a directory (and its contents) on my computer to an external flash drive. So far I can only back up one directory with a specific fixed path that I set in my source code. I would like to be able to pass an argument from another process to the daemon, while it is running, to change the path of the directory I want to back up. I have done research on signals like kill(), but I do not think they are the right kind of inter-process communication for my specific application.
Any help or direction as to how I should accomplish this task is greatly appreciated.
You need to use pipes or shared memory; see piping between 2 processes.

C++, linux: how to limit function access to file system?

Our app is run as SU or as a normal user. We have a library connected to our project, and in that library there is a function we want to call. We have a folder called notRestricted in the directory we run the application from. We have created a new thread, and we want to limit that thread's access to the file system. What we want to do is simple: call that function but limit its writes to that folder only (we would prefer to let it read from anywhere the app can read from).
Update:
So I see that there is no way to cut just one thread off from the whole FS except one folder...
I read your propositions, dear SO users, and posted a kind of follow-up to this question here; there they gave us a link to a sandbox with a decent API, but I do not really know if it would work on anything but Gentoo (anyway, such a script looks quite interesting, e.g. using a Boost.Process command line to run it and then running the desired ex-thread, which migrated into a separate application =)).
There isn't really any way you can restrict a single thread, because it is in the same process space as you, short of hacks like function hooking to detect any kind of file-system access.
Perhaps you might like to rethink how you're implementing your application - having untrusted native code run as SU isn't exactly a good idea. Perhaps use another process and communicate via RPC, or use an interpreted language that you can check against at run time.
In my opinion, the best strategy would be:
Don't run this code in a different thread, but run it in a different process.
When you create this process (after the fork but before any call to execve), use chroot to change the root of the filesystem.
This will give you good isolation... However, chroot requires root, so you will need to start as root; don't keep running the child process as root, though, since root can trivially work around a chroot.
Inject a replacement for open(2) that checks the arguments and returns -EACCES as appropriate.
This doesn't sound like the right thing to do. If you think about it, what you are trying to prevent is a problem well known to the computer games industry. The most common approach to deal with this problem is simply encoding or encrypting the data you don't want others to have access to, in such a way that only you know how to read/understand it.

Reading Data From Another Application

How do I read data from another window's application?
The other application has a TG70.ApexGridOleDB32 according to Spy++. It has 3 columns and a few rows. I need to read this data from another application I am writing. Can someone help me?
I am writing the code in MFC/C++
Operating systems do not allow one process to directly read another process's data. If your "application" is a sub-process of the main application, you can use shared objects to pass data back and forth.
However, in your case it seems the most appropriate approach would be to dump the data to disk. Suppose you have applications A and B: B can generate the data and push it into a regular file or a database, and then A can access the file/database to proceed. Note that this can be a very costly implementation because of the sheer number of I/Os performed.
So if your application generates a lot of data, making both applications threads of a single process would be the way to go.