How do you send a ctrl-c to a remote machine using Fabric?

I am writing a script that executes a sequence of shell operations on a remote machine using Fabric.
At one point, I need to send ^C (Control-C), but I don't know how.
I tried searching on Google, and I can see that the hex code should be 0x03, but I can't figure out how to tell Fabric to interpret the input as hex/a control code/ASCII instead of as a literal string.
I tried various solutions found on the net, but none of them is specific to Fabric. Whatever I pass to Fabric in a command is always taken literally, and that doesn't work.
Does anyone know where I can find instructions on passing keystrokes to simulate Ctrl-C?

According to that discussion:
exec_command("\x03")
should do the job

Related

system() executes shell command differently C++

I need to run this shell command from a C++ program:
"/usr/local/bin/mjpg_streamer -i "/usr/local/lib/input_uvc.so" -o "/usr/local/lib/output_http.so –w /usr/local/www" -b"
This command launches an application which broadcasts a video feed. When I execute this command via system() in C++ the application doesn't start properly.
I use:
system("/usr/local/bin/mjpg_streamer -i \"/usr/local/lib/input_uvc.so\" -o \"/usr/local/lib/output_http.so –w /usr/local/www\" -b");
When I try to access the video stream after starting it with the C++ application, the webpage returns:
501: Not Implemented!
no www-folder configured
I can't expect you guys to give me an application-specific solution, but I'm wondering whether there's a difference between how commands are executed via system() from a C++ application and how they are executed when entered directly in a terminal.
EDIT: The application broadcasts the video stream on IP:8080. I access it by going to that IP in my browser. Usually it opens a webpage with the stream in it but when I execute the command with the C++ application I get that error.
Edit: The old idea of mis-placed quotes was wrong; I realize that -w is actually an option to output_http.so, so the whole shebang must be passed as a single parameter to the -o option, as shown here or here etc.
In that case, check file permissions etc. Does /usr/local/www exist? Is it possible that you are running the shell command from a root shell?
Hey, I have a book recommendation, too, "one of the best tech books ever published": Stevens' Advanced Programming in the Unix Environment. The guy knows -- sorry: knew -- what he was talking about.
I would avoid using the system(3) library function, or at the very least check its return code. I don't understand why you are using " inside your command (I believe that in your particular case you don't need them; but in general, beware of code injection!). Read about globbing.
You could use popen(3) to at least get the output of the command.
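As an illustration of the popen(3) suggestion, here is a small sketch that captures whatever the command prints (the helper name runAndCapture is made up; note that popen still goes through the shell, so the quoting caveats above still apply):

#include <cstdio>
#include <string>

// Run a command through the shell and collect its stdout, so messages such
// as "no www-folder configured" become visible to the calling program.
// (Append "2>&1" to the command if the program logs to stderr instead.)
std::string runAndCapture(const char *cmd)
{
    std::string output;
    FILE *pipe = popen(cmd, "r");
    if (!pipe)
        return output;                 // popen failed; output stays empty

    char buf[256];
    while (fgets(buf, sizeof buf, pipe) != nullptr)
        output += buf;

    pclose(pipe);
    return output;
}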
Even better, run the mjpg_streamer program yourself using fork(2), execve(2), waitpid(2) and other syscalls(2) (perhaps pipe(2), poll(2), dup2(2), etc.). Read Advanced Linux Programming for more.
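A minimal sketch of that fork/exec approach (using execv, a thin wrapper over execve), building the argument vector by hand so the -o plugin string stays a single argument and no shell quoting is involved; paths and options are copied from the question as-is, not verified here:

#include <cstdio>
#include <cstdlib>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    pid_t pid = fork();
    if (pid < 0) {
        std::perror("fork");
        return EXIT_FAILURE;
    }
    if (pid == 0) {
        // Child: build argv ourselves; the -o argument is one string that
        // contains the output plugin together with its own options.
        char *const argv[] = {
            const_cast<char *>("/usr/local/bin/mjpg_streamer"),
            const_cast<char *>("-i"),
            const_cast<char *>("/usr/local/lib/input_uvc.so"),
            const_cast<char *>("-o"),
            const_cast<char *>("/usr/local/lib/output_http.so -w /usr/local/www"),
            const_cast<char *>("-b"),
            nullptr
        };
        execv(argv[0], argv);
        std::perror("execv");          // only reached if execv failed
        _exit(127);
    }

    // Parent: wait for the child and report how it ended.
    // Note: if -b makes the streamer daemonize, waitpid() may return almost immediately.
    int status = 0;
    if (waitpid(pid, &status, 0) == -1) {
        std::perror("waitpid");
        return EXIT_FAILURE;
    }
    if (WIFEXITED(status))
        std::printf("mjpg_streamer exited with status %d\n", WEXITSTATUS(status));
    return EXIT_SUCCESS;
}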

How to detect when ssh connection (over a QProcess) has finished?

I am running an ssh tunnel from an application using a QProcess:
QProcess* process = new QProcess();
process->start("ssh", QStringList()<<"-L"<<"27017:localhost:27017"<<"example.com");
So far it works great, the only problem being that there is no way for me to see when the port has actually been created.
When I run the command on a shell, it takes about 10 seconds to connect to the remote host after which the forwarded port is ready for usage. How do I detect it from my application?
EDIT:
As suggested by vahancho, I used the fact that after the connection is established there is some output on the terminal that can be used to detect that it has succeeded. However, there is a line printed immediately after launch, "Pseudo-terminal will not be allocated because stdin is not a terminal", which would give a false alarm. The correct output arrives in a second signal, emitted a bit later (which is the true indicator that the port has been opened). To get rid of the first message, I now run ssh with -t -t to force pseudo-terminal allocation.
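For reference, a rough sketch of that output-based detection; which channel ssh writes to and the exact text to match are assumptions that depend on the remote side, so adjust accordingly:

#include <QObject>
#include <QProcess>
#include <QStringList>

// Sketch only: watch ssh's output and treat anything after the known banner
// line as a sign that the connection (and the forward) is up.
void startTunnel(QObject *parent)
{
    QProcess *process = new QProcess(parent);

    QObject::connect(process, &QProcess::readyReadStandardError, [process]() {
        const QByteArray text = process->readAllStandardError();
        if (text.contains("Pseudo-terminal will not be allocated"))
            return;                    // the early false alarm, ignore it
        // Any later output suggests the tunnel is ready; notify the rest
        // of the application here.
    });

    process->start("ssh", QStringList() << "-t" << "-t"
                                        << "-L" << "27017:localhost:27017"
                                        << "example.com");
}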
So, the only question left is: does anyone see any concerns with this approach?
This is not a stable and robust solution, unfortunately. It is a broken concept in the same way as parsing git's output rather than using an actual library. The main problem is that such software makes no guarantee of output compatibility, and rightfully so.
Just imagine what happens if some unclear text, a typo, etc. slips through unnoticed. They will inevitably have to fix the output, and all the applications relying on it would abruptly break.
This is also the reason for building dedicated libraries that expose the functionality for reuse, rather than having people work with the user-facing output directly. In the case of git, this means the libgit2 library, for instance.
Qt does not have an ssh mechanism in place by default, the way Python does with libraries such as paramiko.
I would suggest handling this in your code by using libssh or libssh2, as you also noted yourself in the comments (a rough libssh sketch follows this answer). I understand the inconvenience that this is not a truly Qt'ish way as of now, but at this point Qt cannot provide anything more robust without a third-party library.
That being said, it would be nice to see a similar add-on library in the Qt Project in the future, but this may not happen any time soon. If you write your software with proper design in mind, you will be able to switch to such a library without major issues once someone steps up to maintain such an additional library, in Qt or elsewhere.
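A rough libssh sketch of what that could look like: connect, authenticate via keys/agent, and open a direct-tcpip channel to the remote 27017 (the host name and port are taken from the question, the key-based auth is an assumption, and error handling is trimmed). Unlike ssh -L, this gives you a channel to read and write rather than a local listening port, so you would either pump the data yourself or keep a local socket in front of it:

#include <libssh/libssh.h>
#include <cstdio>

int openTunnelChannel()
{
    ssh_session session = ssh_new();
    if (!session)
        return -1;

    ssh_options_set(session, SSH_OPTIONS_HOST, "example.com");

    if (ssh_connect(session) != SSH_OK ||
        ssh_userauth_publickey_auto(session, nullptr, nullptr) != SSH_AUTH_SUCCESS) {
        std::fprintf(stderr, "ssh error: %s\n", ssh_get_error(session));
        ssh_free(session);
        return -1;
    }

    ssh_channel channel = ssh_channel_new(session);
    if (!channel ||
        ssh_channel_open_forward(channel, "localhost", 27017,
                                 "127.0.0.1", 0) != SSH_OK) {
        std::fprintf(stderr, "forward failed: %s\n", ssh_get_error(session));
        if (channel)
            ssh_channel_free(channel);
        ssh_disconnect(session);
        ssh_free(session);
        return -1;
    }

    // At this point ssh_channel_read()/ssh_channel_write() talk to the remote
    // port 27017; there is no local listening socket unless you create one
    // yourself and shuttle data between it and the channel.
    ssh_channel_free(channel);
    ssh_disconnect(session);
    ssh_free(session);
    return 0;
}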
I had the same problem, but in my case ssh does not output anything, so I couldn't just wait for output. I'm also using ssh to set up a tunnel, so I used QTcpSocket:
program = "ssh";
arguments << m_host << "-N" << "-L" << QString("3306:%1:3306").arg(m_host);
connect(tunnelProcess, &QProcess::started, this, &Database::waitForTunnel);
tunnelProcess->start(program, arguments);
waitForTunnel() slot:
QTcpSocket sock;
sock.connectToHost("127.0.0.1", 3306);
if (sock.waitForConnected(100000))   // blocks for up to 100 s until the forwarded port accepts a connection
{
    sock.disconnectFromHost();
    openDatabaseConnection();
}
else
    qDebug() << "timeout";
I hope this will help future people finding this question ;)

C++, linux: how to limit function access to file system?

Our app is run either as root (su) or as a normal user. We have a library linked into our project, and in that library there is a function we want to call. There is a folder called notRestricted in the directory we run the application from. We have created a new thread, and we want to limit that thread's access to the file system. What we want to do is simple: call that function, but limit its write access to that folder only (we would prefer to let it read from anywhere the app can read from).
Update:
So I see that there is no way to restrict only one thread to a single folder while cutting it off from the rest of the filesystem...
I read your propositions, dear SO users, and posted a kind of analog to this question here; there they gave us a link to a sandbox with a decent API, but I do not really know whether it would work on anything but GentOS. In any case, such a script looks quite interesting if we use Boost.Process to run it from the command line and then launch the desired ex-thread (which has migrated into a separate application =)).
There isn't really any way you can restrict a single thread, because it is in the same address space as you, except for hacking methods like function hooking to detect any kind of file system access.
Perhaps you might like to rethink how you're implementing your application - having native untrusted code run as su isn't exactly a good idea. Perhaps use another process and communicate via RPC, or use an interpreted language that you can check against at run time.
In my opinion, the best strategy would be:
Don't run this code in a different thread, but run it in a different process.
When you create this process (after the fork but before any call to execve), use chroot to change the root of the filesystem.
This will give you some good isolation... However doing so will make your code require root... Don't run the child process as root since root can trivially work around this.
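A minimal sketch of that strategy, assuming the restricted work can be moved into a hypothetical function run_untrusted() in a child process, that the parent starts with root privileges so chroot(2) is permitted, and that uid/gid 1000 is an acceptable unprivileged identity (all placeholders). Note that chroot also restricts reads, which is stricter than the question asks for:

#include <cstdlib>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Hypothetical entry point for the code whose writes must stay in notRestricted.
extern void run_untrusted();

int run_in_jail()
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;

    if (pid == 0) {
        // Child: jail it into the allowed folder, then drop root so the
        // chroot cannot simply be escaped again.
        if (chroot("./notRestricted") != 0 || chdir("/") != 0)
            _exit(1);
        if (setgid(1000) != 0 || setuid(1000) != 0)   // placeholder unprivileged ids
            _exit(1);
        run_untrusted();
        _exit(0);
    }

    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}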
Inject a replacement for open(2) that checks the arguments and returns -EACCES as appropriate.
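A rough sketch of that kind of interposer, built as a shared object and loaded via LD_PRELOAD; the allowed prefix is a placeholder, and a real version would also have to cover open64, openat, creat, fopen and friends:

// Build roughly as: g++ -shared -fPIC -o libjail.so jail.cpp -ldl
// Run as: LD_PRELOAD=./libjail.so ./yourapp
#include <cerrno>
#include <cstdarg>
#include <cstring>
#include <dlfcn.h>
#include <fcntl.h>

static const char *kAllowedPrefix = "/path/to/notRestricted/";  // placeholder

extern "C" int open(const char *pathname, int flags, ...)
{
    mode_t mode = 0;
    if (flags & O_CREAT) {
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }

    // Block writes outside the allowed folder; reads pass through untouched.
    if ((flags & (O_WRONLY | O_RDWR)) &&
        std::strncmp(pathname, kAllowedPrefix, std::strlen(kAllowedPrefix)) != 0) {
        errno = EACCES;
        return -1;
    }

    using open_fn = int (*)(const char *, int, ...);
    open_fn real_open = reinterpret_cast<open_fn>(dlsym(RTLD_NEXT, "open"));
    return real_open(pathname, flags, mode);
}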
This doesn't sound like the right thing to do. If you think about it, what you are trying to prevent is a problem well known to the computer games industry. The most common approach to deal with this problem is simply encoding or encrypting the data you don't want others to have access to, in such a way that only you know how to read/understand it.

Is there a way to send some processes with a known pid to the background?

I am new to Linux and system programming.
I want to write a C program which finds processes whose CPU usage is more than a specific given value and sends them to the background.
Can anybody help me?
I really appreciate it.
I'm fairly sure that what you're asking is that you want to detect whether a process is using X amount of CPU and, if so, take it off the CPU for a while. There's a piece of software that already does this: it's called the kernel. I'm not aware of any way to programmatically take another process off the CPU unless that other program supports an external interface to reduce its load.
Most likely what you really want to do is configure the nice value and other scheduler parameters of the running process so the kernel is more likely to take it off the CPU when another program needs to do work (a minimal setpriority(2) sketch follows this answer).
But what underlying problem are you really trying to solve here? Maybe if you tell us that we can offer an alternate solution.
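If adjusting the priority as suggested above is acceptable, a minimal sketch using setpriority(2); the nice value is a placeholder, and sampling the CPU usage itself (e.g. from /proc/<pid>/stat) is left out:

#include <cstdio>
#include <sys/resource.h>
#include <sys/types.h>

// Raise the nice value (lower the scheduling priority) of a process we did
// not start ourselves. Needs appropriate privileges for other users' processes.
int deprioritize(pid_t pid, int niceness /* e.g. 10..19 */)
{
    if (setpriority(PRIO_PROCESS, pid, niceness) == -1) {
        std::perror("setpriority");
        return -1;
    }
    return 0;
}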
Please look at the source code of process management utilities like:
htop
top (standard unix command)
ps (standard unix command)
IMHO, you can't.
Background job management is handled by the shell. The & is interpreted by, for example, /bin/bash. When you press Ctrl-Z, the kernel stops your current foreground job, and then your shell can send it into the background.
You're looking for a way to remote-control the shell that is running some program in the foreground. I don't know of any such "remote-controlling" mechanism.
Of course, there are alternative solutions, for example:
use the screen command, so you can recall the specific screen into your terminal and manually send the process into the background.
or you can use some screen-sharing utility to take over a specific terminal and press Ctrl-Z, bg
or, you can patch bash and add remote-control functionality. ;)
or, here is something I don't know about. ;) - hm, maybe trap some user-signal handling code in /etc/profile?
You can read a bit about it here: http://en.wikipedia.org/wiki/Process_group
Honestly, after half an hour of thinking, I have no idea why you would want to remotely (from another terminal, by its PID) send some processes from the foreground into the background. It makes no sense to me.
Can you please tell us what you want to achieve?
You probably want to reduce the process priority, but I'm not sure that's a good idea.
We generally send a process to the background to free the shell's prompt.
The "+" means that the program "is in the foreground process group". I don't believe, however, that this state at all affects the process's scheduling.
However, you can change it with tcsetpgrp.
From the man page: "The function tcsetpgrp() makes the process group with process group ID pgrp the foreground process group on the terminal associated to fd, which must be the controlling terminal of the calling process, and still be associated with its session. Moreover, pgrp must be a (non-empty) process group belonging to the same session as the calling process."
By my reading, you just call this function and make the shell (or some other program) be the foreground process.
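A minimal sketch of that reading, assuming the calling process was started by the shell (so its parent's process group is the shell's) and that handing the terminal back to that group is what is wanted; both are assumptions:

#include <cstdio>
#include <csignal>
#include <unistd.h>

int main()
{
    // Assumption: our parent is the shell, so its process group owns the prompt.
    pid_t shell_pgid = getpgid(getppid());

    // A background process touching the terminal gets SIGTTOU; ignore it so
    // the tcsetpgrp() call itself does not stop us.
    std::signal(SIGTTOU, SIG_IGN);

    if (tcsetpgrp(STDIN_FILENO, shell_pgid) == -1) {
        std::perror("tcsetpgrp");
        return 1;
    }

    // From the terminal's point of view, this process is now a background job.
    return 0;
}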

C++: stop RPC service

From my C++ source, I am starting an RPC service by calling svc_run(). Everything looks just fine, and I can see my service running if I type rpcinfo -p in my terminal.
Now I am working on a "cleanup" function which should stop this service and remove it from the rpcinfo -p list.
How can I do that? At the moment I am only able to stop it using sudo rpcinfo -d program version in my terminal. How can I do this from my source file?
Thanks.
After some time, I found out how to do this. Actually I faced some unexpected difficulties. The standard way to do this would be to use this:
svc_unregister(PROGID, VERSION)
but somehow it did not work for me. After lots of trial and error and some online help (http://www.spinics.net/lists/linux-nfs/msg05619.html) I was able to remove the RPC service by calling:
pmap_unset(PROGID, VERSION);
Hope this will help :)
Try the void svc_exit(void) function. For a more detailed description, please refer to the rpc_svc_calls chapter.
I tried to force-stop svc_run() this way but did not find a solution. However, I made svc_run() stop from within a registered function, and then it stopped; perhaps this could help you. Please look at this: svc_exit Subroutine
The 'nicest' solution is to use both DevCpp's and Danilo's solution combined:
Among the RPC functions of your server, define one function which, when called by the client, executes svc_exit(). This will let your RPC server return from the svc_run() loop. Now you can either extend your RPC client application or create a separate client application to terminate your server.
In your RPC server's main program, right after the call to svc_run(), execute 'pmap_unset(PROGID, VERSION);'. This will let rpcbind unregister your RPC address.
Then do the usual cleanup of your application.
This combination allows your RPC server to run as a daemon, i.e. without user interaction, while still offering a clean exit without having to kill the process.
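A minimal sketch of that combination, assuming the classic Sun RPC (oncrpc) API over UDP; the program/version numbers and the "shutdown" procedure number are placeholders for illustration:

#include <netinet/in.h>
#include <rpc/pmap_clnt.h>
#include <rpc/rpc.h>

// Placeholder program/version numbers for illustration only.
#define PROGID  0x20000001
#define VERSION 1

// Dispatch routine: procedure 1 is a made-up "shutdown" request.
static void dispatch(struct svc_req *rqstp, SVCXPRT *transp)
{
    switch (rqstp->rq_proc) {
    case 0:                                   // NULLPROC: standard ping
        svc_sendreply(transp, (xdrproc_t)xdr_void, nullptr);
        break;
    case 1:                                   // client asks us to terminate
        svc_sendreply(transp, (xdrproc_t)xdr_void, nullptr);
        svc_exit();                           // makes svc_run() return
        break;
    default:
        svcerr_noproc(transp);
    }
}

int main()
{
    SVCXPRT *transp = svcudp_create(RPC_ANYSOCK);
    svc_register(transp, PROGID, VERSION, dispatch, IPPROTO_UDP);

    svc_run();                                // returns once svc_exit() was called

    pmap_unset(PROGID, VERSION);              // drop the entry shown by rpcinfo -p
    // ... usual cleanup of the application ...
    return 0;
}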