I want to make a script that can run in the background but work like a command-line tool at the same time. I have no experience with writing daemons, so I have no idea whether a daemon could do what I'm describing better.
I would like a loop that uses some values, and I want to be able to change these values through the Linux terminal.
E.g. I want it to run continuously, and I want to be able to adjust some variables from the terminal if necessary, without restarting it.
Sorry for the pretty bad question
You'll need two programs -- one that runs in the background, and a second one that communicates with it to tell it to update its values.
There are various options on how to do this depending on your requirements. One possibility is to have the background program accept TCP connections and take commands over them. Another would be to have a configuration file that it re-reads each time it does something. More exotic options useful in some circumstances are shared memory blocks and named pipes.
The general keyword here is "inter-process communication (IPC)."
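For instance, here is a minimal sketch of the named-pipe option, assuming Linux; the FIFO path and the interval= command format are made up for illustration:

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/stat.h>
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    int main() {
        const char *fifo = "/tmp/mydaemon.cmd";   // hypothetical control pipe
        mkfifo(fifo, 0666);
        // O_NONBLOCK so checking for commands never stalls the work loop
        int fd = open(fifo, O_RDONLY | O_NONBLOCK);

        int interval = 5;                         // a value adjustable at runtime
        char buf[128];
        for (;;) {
            ssize_t n = read(fd, buf, sizeof buf - 1);
            if (n > 0) {                          // someone wrote a command
                buf[n] = '\0';
                if (strncmp(buf, "interval=", 9) == 0)
                    interval = atoi(buf + 9);
            }
            printf("working... interval=%d\n", interval);
            sleep(interval);
        }
    }

Then echo interval=1 > /tmp/mydaemon.cmd from any terminal adjusts the running process without restarting it.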
My question is similar to this question, but I didn't get my answer there.
I am trying to design an online judge. The users of the judge system submit their source code, then the server program compiles and runs it. So the server program must keep the server safe, and there are a lot of things a user could use to make changes to the server.
How can I change the permissions of a program, so that the compiled code won't be able to do anything except print something?
P.S.: searching for suspicious words is not a good idea. For instance, the user can write the following in C++ instead of the word system:

    #define glue(a,b) a ## b
    glue(sys,tem) ("rm *"); //DO NOT RUN THIS CODE

So the user has effectively used the following code without ever writing the word system:

    system ("rm *"); //DO NOT RUN THIS CODE
There are two options for you. The one you are currently looking into: trying to make your compiler, i.e. the server process that runs the user-provided source code, detect "exploits". That might be hard. If you allow users to send you C++ source code, a lot of things become possible. I guess you would need some real C++ gurus in order to get that solution even "halfway secure".
So, option two: you have to run that user-provided input within some sort of sandbox. Examples could be:
A docker container (but for sure: a non-privileged container; run by a user, not root)
A virtual machine
If you are serious about what you are doing, you would probably focus on option 2 first (because that gives you a lot of benefit at medium cost); but you definitely want to look into option 1, too (because one could learn a lot from that).
You can run them in a chroot jail, with the user id set to nobody, or to some throwaway account if the nobody account can actually do something significant on your system. (You can use su or sudo for this.) Or even in their own VM. Pipe the output into a file, and read it from your judge program.
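To illustrate the privilege-dropping part, here is a rough sketch, assuming a POSIX system and a judge that starts as root (chroot and setuid require it); the jail path, uid, and limits are placeholders, and the real thing needs error checking on every call, since an ignored setuid failure means the submission keeps root:

    #include <sys/types.h>
    #include <sys/resource.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        pid_t pid = fork();
        if (pid == 0) {                        // child becomes the submission
            chroot("/srv/judge/jail");         // hypothetical jail directory
            chdir("/");
            struct rlimit cpu = {2, 2};        // at most 2 seconds of CPU
            setrlimit(RLIMIT_CPU, &cpu);
            struct rlimit nofile = {3, 3};     // stdin/stdout/stderr only
            setrlimit(RLIMIT_NOFILE, &nofile);
            setgid(65534);                     // nobody; must come before setuid
            setuid(65534);                     // after this there is no way back
            execl("/submission", "submission", (char *)NULL);
            _exit(127);                        // exec failed
        }
        int status;
        waitpid(pid, &status, 0);
        printf("submission exited with status %d\n", status);
    }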
Currently I am calling an R script from C++ in the following way:
system("PATH C:\\Program Files\\R\\R-3.0.1\\bin\\x64");
system("RScript CommandTest.R");
Where CommandTest.R is my script.
This works, but is slow, since I need a particular package and this method makes the package load on every call.
Is there a way to load the package once and then keep that instance of Rscript open so that I can continue to make calls to it without having to reload the package every time?
PS: I know that the 'better' method is probably to go with Rcpp/RInside, and I will go down that route if necessary, but I thought it'd be worth asking if there's an easy way to do what I need without it.
It seems like the Rserve package is what you seek. Basically it keeps open a "server" which can be asked to evaluate expressions.
It has options for Java, C++ and communication between one R session and another.
In the documentation, you might want to look at run.Rserve and self.ctrlEval.
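A rough sketch of the C++ side, assuming Rserve is already running (started once, e.g. from R with library(Rserve); Rserve()). The class and method names follow the Rconnection client shipped in Rserve's source tree, so check its headers for the exact signatures in your version; somePackage stands in for whatever package is slow to load:

    #include "Rconnection.h"   // C++ client from the Rserve distribution
    #include <cstdio>

    int main() {
        Rconnection rc;                          // defaults to localhost:6311
        if (rc.connect()) {                      // non-zero return means failure
            fprintf(stderr, "cannot connect to Rserve\n");
            return 1;
        }
        rc.voidEval("library(somePackage)");     // loaded once, stays loaded
        rc.voidEval("source('CommandTest.R')");  // later calls reuse the warm session
        return 0;
    }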
I'm not aware of a solution to keeping R permanently open, but you can speed up startup by calling R with the --vanilla option. (See Appendix B.1 of Intro to R for more options.)
You could also try accessing the functions with :: to avoid loading the package completely. (Try profiling to see whether that really saves you much time. Is the package load actually the slow part of your analysis?)
I have an executable that needs to process records in the database when a command arrives telling it to do so. Right now I am issuing commands via TCP exchange, but I don't really like that because:
a) the queue is not persistent between sessions
b) the TCP port might get locked
The idea I have is to create a folder and place files in it whose names match the commands I want to issue
Like:
    1.23045.-1.1
    2.999.-1.1
Then, after a command has been processed, its file will be deleted or moved to an Errors folder.
Is this viable or are there some unavoidable problems with this approach?
P.S. The process will be used on a Linux system, so antivirus problems are out of the question.
Yes, a few.
First, there are all the problems associated with using a filesystem. Antivirus programs are one (though I cannot see why that wouldn't apply to Linux; no delete locks?). Disk space and maximum file and directory counts are others. Then there are open-file limits and permissions...
Second, race conditions. If there are multiple consumers, more than one of them might see and start processing the command before the first one has [re]moved it.
There are also the issues of converting commands to filenames and vice versa, and coming up with different names for a single command that needs to be issued multiple times. (Though these are programming issues, not design ones; they'll merely annoy.)
None of these may apply or be of great concern to you, in which case I say: Go ahead and do it. See what we've missed that Real Life will come up with.
I probably would use an MQ server for anything approaching "serious", though.
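If you do go the filesystem route, one common way to sidestep the multiple-consumer race is to have each worker claim a command file with rename(), which is atomic within a filesystem, so only one worker can win a given file. A sketch for POSIX; the directory names are made up:

    #include <cstdio>
    #include <dirent.h>
    #include <string>

    // Try to claim a command file; rename() succeeds for exactly one worker.
    bool claim(const std::string &name) {
        std::string src = "/var/spool/cmds/" + name;
        std::string dst = "/var/spool/cmds.work/" + name;
        return rename(src.c_str(), dst.c_str()) == 0;
    }

    int main() {
        DIR *d = opendir("/var/spool/cmds");
        if (!d) return 1;
        while (dirent *e = readdir(d)) {
            std::string name = e->d_name;
            if (name == "." || name == "..") continue;
            if (claim(name)) {
                printf("processing command %s\n", name.c_str());
                // ... process, then delete it or move it to the Errors folder
            }
        }
        closedir(d);
        return 0;
    }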
I'm experimenting my distributed clustering algorithm (implemented with MPI) on 24 computers that I set up as a cluster using BCCD (Bootable Cluster CD) that can be downloaded at http://bccd.net/.
I've written a batch program to run my experiment, which consists of running my algorithm several times while varying the number of nodes and the size of the input data.
I want to know the amount of data used in the MPI communications for each run of my algorithm, so I can see how the amount of data changes when varying the previously mentioned parameters. And I want to do all this automatically using a batch program.
Someone told me to use tcpdump, but I found some difficulties in this approach.
First, I don't know how to call tcpdump from my batch program (which is written in C++ and uses system for making calls) before each run of my algorithm, since tcpdump requires another terminal to run in parallel with my application. And I can't run tcpdump on another computer, since the network uses a switch, so I need to run it on the master node.
Second, I watched the traffic with tcpdump while my experiment was going on and I couldn't figure out which port MPI was using. It seems to use many ports. I wanted to know that in order to filter the packets.
Third, I tried capturing whole packets and saving them to a file using tcpdump, and within a few seconds the file was 3.5 MB. But my whole experiment takes 2 days, so the final log file would be huge if I followed this approach.
The ideal approach would be to capture just the size field in the header of each packet and sum these up to obtain the total amount of data transmitted. That way the log file would be much smaller than if I were capturing whole packets. But I don't know how to do it.
Another restriction is that I don't have access to the computers' discs. So I only have the RAM and my 4 GB USB flash drive, which means I can't keep huge log files.
I have already thought about using an MPI tracing or profiling tool such as those mentioned at http://www.open-mpi.org/faq/?category=perftools. I have only tested Sun Performance Analyzer so far. The problem is that I guess it will be difficult to install those tools on BCCD, and maybe even impossible. In addition, such a tool will make my experiment take longer to finish, since it adds overhead. But if someone is familiar with BCCD and thinks it is a good choice to use one of those tools, please let me know.
I hope someone has a solution.
Approaches like tcpdump won't work anyway if there are multi-core nodes which use shared memory to communicate.
Using something like MPE is almost certainly the way to go. Those tools add very little overhead, and some overhead is always going to be necessary if you want to count messages. You can use mpitrace to write out every MPI call and parse the resulting text file yourself. By the way, note that MPE is explicitly discussed on the BCCD website. MPICH2 comes with MPE built in, but it can be compiled for any implementation. I've only found a very modest overhead for MPE.
IPM is another nice tool that counts messages and sizes; you should be able either to parse the XML output or to use the postprocessing tools and just manually integrate the graphs (say, either bytes_rx/bytes_tx by rank, or the message buffer size/count graph). The overhead for IPM is even less than for MPE, and mostly comes after the program has finished running, to do the file I/O.
If you were really worried about the overhead of either of these approaches, you could always write your own MPI wrappers using the profiling interface: wrap MPI_Send, MPI_Recv, etc., just count the number of bytes sent and received by each process, and output only that total at the end.
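A sketch of such a wrapper via the standard PMPI profiling interface, counting sent bytes only; MPI_Recv and the collectives can be wrapped the same way. (MPI-3 declares the send buffer const; drop the const for older implementations.)

    #include <mpi.h>
    #include <cstdio>

    static long long bytes_sent = 0;

    // Linking this into the program intercepts MPI_Send; the real work
    // is forwarded to PMPI_Send, which every implementation provides.
    extern "C" int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
                            int dest, int tag, MPI_Comm comm) {
        int size;
        PMPI_Type_size(datatype, &size);          // bytes per element
        bytes_sent += (long long)count * size;
        return PMPI_Send(buf, count, datatype, dest, tag, comm);
    }

    extern "C" int MPI_Finalize(void) {
        int rank;
        PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
        fprintf(stderr, "rank %d sent %lld bytes\n", rank, bytes_sent);
        return PMPI_Finalize();                   // one short line per rank
    }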
I was wondering if you can add a link to a website in a C++ program running in a CMD-prompt-type window (no GUI).
If it's possible, can someone please give me a few examples?
You mean output text in the command prompt that the user can then click on? No, not unless the terminal supports it. Linux terminals usually auto-link text that matches a URL pattern, so you could just printf("http://stackoverflow.com/\n"); and it would be clickable, but that's up to the terminal, not your program.
When you write 'direct link', it is not clear whether you mean clickable text or a means to open a URL. At any rate, command-line programs usually respond to command-line parameters. Your program could open a URL in the default browser in response to a command-line flag. On Windows, you could call ShellExecute; on other systems, system might be appropriate.
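A small sketch of that flag-driven approach; the Windows branch uses ShellExecute (link with shell32), and the non-Windows branch assumes xdg-open is available:

    #ifdef _WIN32
    #include <windows.h>       // ShellExecuteA
    #else
    #include <string>
    #include <cstdlib>
    #endif

    // Open a URL in the user's default browser.
    void open_url(const char *url) {
    #ifdef _WIN32
        ShellExecuteA(NULL, "open", url, NULL, NULL, SW_SHOWNORMAL);
    #else
        std::string cmd = std::string("xdg-open '") + url + "'";
        std::system(cmd.c_str());    // assumes xdg-open exists on the system
    #endif
    }

    int main(int argc, char **argv) {
        // e.g. "myprog --open" launches the site instead of just printing it
        if (argc > 1) open_url("http://stackoverflow.com/");
        return 0;
    }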
It depends. In Windows, for example, yes, it's entirely possible, though somewhat non-trivial. You can receive mouse events via ReadConsoleInput, so in theory it's a fairly straightforward matter of reading the input event, and if it's a mouse click over the area you've defined as a link, you direct the user to the link as you see fit -- if you want to display the web site in text mode, that's possible (though, again, distinctly non-trivial). If you want to start up the user's normal web browser, that's a lot simpler (normally just ShellExecute the URL).
In reality, the details get a bit ugly. You have to enable mouse input for it to work at all. ReadConsoleInput gives you an INPUT_RECORD, which is a union of a number of different input record types, one of which is a mouse input record. By the time you get to react to a mouse click, your code is nested fairly deeply. None of it is unmanageable by any means, but unless you already have a fair amount of experience with Windows console programming, it might easily take most of a day (maybe even a bit more) before you have it working, rather than the hour or two you'd initially guess.
That, of course, is strictly for Windows -- if you ever want to port the code to another system, I'd guess there's a pretty good chance you'd be looking at a complete rewrite. For GUIs there are a fair number of cross-platform libraries, but text mode mouse operations aren't nearly so well supported.
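To give a feel for the plumbing described above, here is a compressed sketch with error handling omitted; it assumes the printed URL lands on buffer row 0 and treats any click on that row as the link (link with shell32 for ShellExecuteA):

    #include <windows.h>
    #include <cstdio>

    int main() {
        HANDLE in = GetStdHandle(STD_INPUT_HANDLE);
        DWORD oldMode;
        GetConsoleMode(in, &oldMode);
        // ENABLE_EXTENDED_FLAGS turns off quick-edit so clicks reach us
        SetConsoleMode(in, ENABLE_EXTENDED_FLAGS | ENABLE_MOUSE_INPUT);

        printf("http://stackoverflow.com/  <- click this line\n");

        INPUT_RECORD rec;
        DWORD n;
        while (ReadConsoleInput(in, &rec, 1, &n)) {
            if (rec.EventType == MOUSE_EVENT &&
                (rec.Event.MouseEvent.dwButtonState & FROM_LEFT_1ST_BUTTON_PRESSED) &&
                rec.Event.MouseEvent.dwMousePosition.Y == 0) {   // clicked row 0
                ShellExecuteA(NULL, "open", "http://stackoverflow.com/",
                              NULL, NULL, SW_SHOWNORMAL);
                break;
            }
        }
        SetConsoleMode(in, oldMode);
        return 0;
    }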