Execute the command and return immediately, without blocking until the command finishes.
Concepts: Background execution, signals, signal handlers, processes, asynchronous execution
System calls: sigset()
How?
You can redirect the output to a file if you don't want it to spam your current terminal.
yourapp >> ~/tempOutput.txt &
If you want to send the output "nowhere", you can redirect it to /dev/null:
yourapp >> /dev/null &
To run the command:
sudo nohup {your command} &
To check the process ID of the command started with nohup:
ps -ef | grep nohup
And to kill the command if needed:
kill {process id}
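For example, the whole cycle might look like this (long-task.sh is just a placeholder script name, and 12345 stands for whatever process ID ps reports):
sudo nohup ./long-task.sh > long-task.log 2>&1 &
ps -ef | grep long-task
kill 12345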
You can execute a command (or shell script) as a background job by appending an ampersand to the command as shown below.
$ ./my-script.sh &
You can execute a command (or shell script) in the background using &, but the problem with this is that if you log out of the session, the command will get killed. To avoid that, you should use nohup as shown below.
Example:
$ nohup ./my-shell-script.sh &
or
$ nohup ps -aux &
After you execute a command in the background using nohup, the command will keep running even after you log out. But you cannot reconnect to the same session to see exactly what is happening on screen. For that, you should use the screen command.
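A minimal screen workflow, for illustration (the session name is just an example):
screen -S mysession          # start a named screen session and run your command inside it
# detach with CTRL+A then D; the command keeps running
screen -r mysession          # reattach to the session later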
Apart from this, I recommend using tmux; you can create a session and reattach to it at any time.
$ tmux new -s mysessionname
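For example, detach from the session with CTRL+B then D, and reattach to it later with:
$ tmux attach -t mysessionname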
More info about tmux
Related
So, I'm using Google Cloud Platform and set the startup script below:
#! /bin/bash
cd /home/user
sudo ./process1
sudo ./process2
I was worried about this script because process1 blocks the shell and prevents sudo ./process2 from running. And it really did: process1 started successfully, but process2 did not.
I checked that the script has no problem starting process1 and process2. Executing ./process2 via SSH worked, but after I closed the SSH shell, process2 stopped too.
How can I start both processes at boot time (or even after)?
I tried testing your startup script in my environment; it seems the script works well.
1. Please try checking the process1 and process2 scripts.
2. If you want your process to run in the background even after the SSH session is closed, you can add "&" (your_command &) at the end of your command.
To run a command in the background, add the ampersand symbol (&) at the end of the command:
your_command &
Then the script execution continues and isn't blocked. Or use Linux's built-in means to run processes automatically at boot.
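For example, a revised startup script along those lines might look like this (assuming both processes are meant to keep running; the paths are taken from the question):
#! /bin/bash
cd /home/user
sudo ./process1 &
sudo ./process2 &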
I start a gdb session in the background with a command like this:
gdb --batch --command=/tmp/my_automated_breakpoints.gdb -p pid_of_process &> /tmp/gdb-results.log &
The & at the end lets it run in the background (and the shell is immediately closed afterwards as this command is issued by a single ssh command).
I can find out the pid of the gdb session with ps -aux | grep gdb.
However: How can I gracefully detach this gdb session from the running process just like I would if I had the terminal session in front of me with the (gdb) detach command?
When I kill the gdb session (and not the running process itself) with kill -9 gdb_pid, I get unwanted SIGABRTs afterwards in the running program.
A restart of the service is too time consuming for my purpose.
In case of a successful debugging session with this automated script I could use a detach command inside the batch script. This is however not my case: I want to detach/quit the running gdb session when there are some errors during the session, so I would like to gracefully detach gdb by hand from within another terminal session.
If you run the gdb command from terminal #1 in the background, you can always bring gdb back into the foreground by running the command fg. Then, you can simply press CTRL+C and detach as always to stop the debugging session gracefully.
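A rough sketch of that workflow in terminal #1 (the job number assumes gdb is the only background job):
fg %1        # bring the backgrounded gdb job to the foreground
# press CTRL+C to interrupt it, then at the (gdb) prompt type:
detach
quit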
Assuming that terminal #1 is now occupied by something else and you cannot use it, you can send a SIGHUP signal to the gdb process to detach it:
sudo kill -s SIGHUP $(pidof gdb)
(Replace the $(pidof gdb) with the actual PID if you have more than one gdb instance)
My development environment has already started after all the needed prerequisites:
vagrant up
vagrant ssh
make membersrvc
make peer
But when trying to start membersrvc by running membersrvc after cd-ing into the folder $GOPATH/src/github.com/hyperledger/fabric, it is not responding!
No response even after one hour!
Any suggestions?
This is exactly how membersrvc is supposed to behave. When you execute the membersrvc command you don't see any output whatsoever; however, you can verify that it is running by opening a separate terminal window and running the
ps -a | grep membersrvc
command.
Besides, as Sergey Balashevich commented, you also need to make sure that membersrvc is started and running before the peer process will be able to get a valid certificate, which means you need to start both the membersrvc and peer processes in separate terminal windows simultaneously.
If you want to run all the processes in a single terminal window, you can execute them in the background, e.g. membersrvc > result 2>&1 &. This will start the process and redirect both stdout and stderr to a result file that you can specify. If you don't care about the output at all, you can use /dev/null instead of specifying a file.
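For instance, something along these lines could start both from one terminal (the log file names are arbitrary, and the exact peer invocation may differ between Fabric versions):
membersrvc > membersrvc.log 2>&1 &
sleep 5                            # give membersrvc a moment to come up before the peer requests its certificate
peer node start > peer.log 2>&1 &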
I am trying to run a command (a JAR file execution) on a remote machine using the 'Execute Command' keyword of the SSH library. But control returns even before the command execution is complete. Is there a way to wait until the command has finished?
Below is the keyword I wrote:
Run The Job
    [Arguments]    ${machine_ip}    ${username}    ${password}    ${file_location}    ${KE_ID}
    Open Connection    ${machine_ip}    timeout=60
    Login    ${username}    ${password}
    ${run_jar_file}=    Set Variable    java -jar -Dspring.profiles.active=dev ${file_location} Ids=${KE_ID}
    ${output}=    Execute Command    ${run_jar_file}
    Log    ${output}
    Sleep    30
    Close Connection
Use Read and Write instead of "Execute Command", so that you can specify a timeout for the command execution.
refer: http://robotframework.org/SSHLibrary/latest/SSHLibrary.html#Write
You are explicitly asking for the command to be run in the background (by virtue of adding & as the last character in the command to be run), so the ssh library has no way of knowing when the program you're running exits. If you want to wait for it to finish, don't run it in the background.
In other words, remove the trailing & from the command you are running.
If anyone is still struggling with this one, I have discovered a solution:
Open Connection    ${SSH_HOST}    timeout=10s
Login    login    pass
Write    your_command
Set Client Configuration    prompt=$
${output}=    Read Until Prompt
Should End With    ${output}    ~ $
I have a script that takes a lot of time to complete.
Instead of waiting for it to finish, I'd rather just log out and retrieve its output later on.
I've tried:
at -m -t 03030205 -f /path/to/./thescript.pl
nohup /path/to/./thescript.pl &
And I have also verified that the processes actually exist with ps and at -l, depending on which scheduling syntax I used.
Both these processes die when I exit out of the shell. Is there a way to keep a script from terminating when I close the connection?
We have crons here and they are set up and are working properly, but I would like to use at or nohup for single-use scripts.
Is there something wrong with my syntax? Are there any other methods to producing the desired outcome?
EDIT:
I cannot use screen or disown - they aren't installed in my HP-UX setup and I am not in a position to install them either.
Use screen. It creates a terminal which keeps going when you log out. When you log back in you can switch back to it.
If you want to keep a process running after you log out:
disown -h <pid>
is a useful bash built-in. Unlike nohup, you can run disown on an already-running process.
First, stop your job with control-Z, get the pid from ps (or use echo $!), use bg to send it to the background, then use disown with the -h flag.
Don't forget to background your job or it will be killed when you log out.
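A rough sketch of that sequence (the job number assumes this is your only background job):
./thescript.pl               # start the script in the foreground
# press CTRL+Z to stop it
bg %1                        # resume job 1 in the background
disown -h %1                 # mark the job so it is not sent SIGHUP when you log out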
This is just a guess, but something I've seen with some versions of ssh and nohup: if you've logged in with ssh then you may need to redirect stdout, stderr and stdin to avoid having the session hang when you exit. (One of those may still be attached to the terminal.) I would try:
nohup /path/to/./thescript.pl > whatever.stdout 2> whatever.stderr < /dev/null &
(This is no longer the case with my current versions of ssh and nohup - the latter redirects them if it detects that any is attached to a terminal - but you may be using different versions.)
Syntax for nohup looks ok, but your account may not allow for processes to run after logout. Also, try redirecting the stdout/stderr to a log file or /dev/null.
Run your command in the background:
/path/to/./thescript.pl &
To get a list of your background jobs:
jobs
Now you can selectively disown any of the above jobs using its job ID:
disown <jobid>
All the disowned processes should keep running even after you have logged out.