So, I'm using Google Cloud Platform and set the startup script below:
#! /bin/bash
cd /home/user
sudo ./process1
sudo ./process2
I was worried about this script because process1 blocks the shell and prevents sudo ./process2 from running. And that is exactly what happened: process1 started successfully, but process2 did not.
I checked that the script itself has no problem starting process1 and process2. Executing ./process2 via SSH worked, but after I closed the SSH session, process2 stopped too.
How can I start both processes at boot time (or even afterwards)?
I tried testing your startup script in my environment, and it seems the script works well.
1. You can try checking the process1 and process2 scripts themselves.
2. If you want your process to run in the background even after the SSH session is closed, you can put "&" (your_command &) at the end of your command.
To run a command in the background, add the ampersand symbol (&) at the end of the command:
your_command &
Then the script execution continues and isn't blocked. Alternatively, use the Linux init system (e.g. systemd) to auto-run processes on boot.
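Applied to the startup script above, a minimal sketch might look like this (the log paths are arbitrary choices; GCE startup scripts already run as root, so sudo shouldn't be needed):
#! /bin/bash
cd /home/user
# Background each process so neither blocks the other;
# nohup plus output redirection detaches them from the script's lifetime.
nohup ./process1 > /var/log/process1.log 2>&1 &
nohup ./process2 > /var/log/process2.log 2>&1 &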
Here's a problem that is driving me nuts. First off, I am not a Linux expert, so I might just be missing some detail.
I am trying to restart an application (namely rpi-webrtc-streamer, but that shouldn't matter) using a shell script. The reason is that when a configuration change happens I need to update the config files and restart.
The idea is to call a bash script using the system() function and pass in the PID of the current process. The script should then kill the process using the supplied PID and execute it again. In theory this shouldn't be a problem...
What may be complicating things is that the process needs to run with sudo. Not sure if that's relevant, but I thought I should mention it.
Now this is the script:
#!/bin/bash
echo "restarting streamer..."
echo "killing process with PID $1"
kill "$1"
# I have tried different intervals, even 10 seconds, doesn't help
sleep 2
echo "running new streamer instance"
echo "path:"
pwd
#printenv
echo "id -u"
# just to verify the script runs with sudo
id -u
./webrtc-streamer --verbose
echo "done"
The problem is that the application fails with the following error:
(direct_socket.cc:77): Failed to listen 0.0.0.0:8888.
... and then it shuts down. Well, obviously it's not able to open the port. It almost looks as if the previous instance of the app is still holding the port open. I have tried tweaking the number of seconds the script sleeps, even 10, but that shouldn't be the problem: first, the script only continues after the process is actually killed, and second, the process shuts down immediately anyway, as I can see from the logs.
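For reference, a standard way to check whether anything is still listening on port 8888 (generic Linux tooling, nothing app-specific):
sudo lsof -i :8888          # show any process still listening on 8888
# or, with iproute2:
sudo ss -ltnp | grep 8888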
However, if I run the app immediately after the script fails, from the shell that originally launched the initial app, it runs without any issues and is able to open the port, no matter how many seconds the script slept beforehand.
The only other thing I thought of would be that the bash script might be running with different environment variables. I tried to print those, but I don't see anything significant.
Also I verified that the app does not change the working directory, but that again should not be a problem as it actually launches. It then just exits after not being able to open the port.
I also tried adding sudo before the app execution in the script (which shouldn't be necessary AFAIK). Doesn't make a difference.
Any ideas?
As suggested by jordanm in the comments, I solved the problem by using systemd.
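For completeness, a minimal sketch of what such a unit can look like (the unit name, binary path, and options are illustrative assumptions, not the OP's actual configuration):
# /etc/systemd/system/webrtc-streamer.service
[Unit]
Description=rpi-webrtc-streamer
After=network.target

[Service]
ExecStart=/home/pi/rws/webrtc-streamer --verbose
Restart=on-failure

[Install]
WantedBy=multi-user.target
A config change then becomes sudo systemctl restart webrtc-streamer instead of a hand-rolled kill-and-relaunch script, and systemd waits for the old process to exit before starting the new one.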
I have a bash script which will take 5-6 hours to complete. Yesterday I signed up for the AWS 12-month free tier and am running an EC2 instance (Ubuntu) on it. I want that bash script to keep running even after I close my main machine. How can I do this?
Assuming this is on a Linux system, you can run your script in the background using the & option. Something like this:
yourBashScript.sh &
Here & tells the shell to run it in the background. So even if you close the shell or end your SSH session, it will usually keep running in the background until it finishes the job or crashes due to an error. (If your shell sends SIGHUP to background jobs on logout, combine this with nohup, as shown below.)
You can always check whether your script is running using the ps command. Something like this:
ps -eaf | grep yourBashScript
This will show the process information for your script if it is still running.
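If your shell does send SIGHUP to background jobs on logout, the safer variant uses nohup (the log file name here is an arbitrary choice):
nohup ./yourBashScript.sh > script.log 2>&1 &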
Execute the command and return immediately, not blocking until the command finishes.
Concepts: Background execution, signals, signal handlers, processes, asynchronous execution
System calls: sigset()
How?
You can direct the output to a separate buffer, such as a file, if you don't want to spam your current terminal:
yourapp >> ~/tempOutput.txt &
If you want to send the output to "nowhere", you can redirect it to /dev/null:
yourapp >> /dev/null &
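Note that >> here only redirects stdout; to silence error output as well, redirect stderr too (standard shell syntax):
yourapp > /dev/null 2>&1 &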
To run the command:
sudo nohup {your command} &
To find the process ID of the command you started with nohup (the process shows up under the command's own name, not under nohup, since nohup simply hands control to it):
ps -ef | grep {your command}
and to kill the command if needed
kill {process id}
You can execute a command (or shell script) as a background job by appending an ampersand to the command as shown below.
$ ./my-script.sh &
You can execute a command (or shell script) in the background using &, but the problem with this is that if you log out of the session, the command may get killed. To avoid that, use nohup as shown below.
Example:
$ nohup ./my-shell-script.sh &
or
$ nohup ps -aux &
After you execute a command in the background using nohup, it will keep executing even after you log out. But you cannot reconnect to the same session to see exactly what is happening on the screen. To do that, you should use the screen command.
Apart from this, I recommend using tmux: you can create a session and reattach to it at any time.
$ tmux new -s mysessionname
More info about tmux
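A minimal tmux round-trip (reusing the session name from above):
$ tmux new -s mysessionname      # create the session and run your command inside it
# detach with Ctrl-b d; everything keeps running
$ tmux attach -t mysessionname   # reattach later, even from a new SSH login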
My development environment has already started, with all the needed prerequisites done:
vagrant up
vagrant ssh
make membersrvc
make peer
But when trying to start membersrvc by running membersrvc after changing into the folder ($ cd $GOPATH/src/github.com/hyperledger/fabric), it is not responding!
No response even after one hour!
Any suggestions?
This is exactly how membersrvc is supposed to behave. When you execute the membersrvc command you don't see any output whatsoever; however, you can verify that it is running by opening a separate terminal window and running
ps -a | grep membersrvc
command.
Besides, as Sergey Balashevich commented, you also need to make sure that membersrvc is started and running before the peer process, so that the peer can get a valid certificate. This means you need to start both the membersrvc and peer processes in separate terminal windows simultaneously.
If you want to run all the processes in a single terminal window, you can execute them in the background, as in membersrvc > result 2>&1 &. This will start the process and redirect both stdout and stderr to a result file you can specify. If you don't care about the output at all, you can use /dev/null instead of specifying a file.
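Putting it together for both processes in one terminal (the log file names are arbitrary, and I'm assuming the peer is launched with peer node start, as in the Fabric docs of that era):
membersrvc > membersrvc.log 2>&1 &
sleep 5    # give membersrvc a moment to come up so the peer can obtain a valid certificate
peer node start > peer.log 2>&1 &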
I have a script that takes a lot of time to complete.
Instead of waiting for it to finish, I'd rather just log out and retrieve its output later on.
I've tried:
at -m -t 03030205 -f /path/to/./thescript.pl
nohup /path/to/./thescript.pl &
And I have also verified that the processes actually exist with ps and at -l, depending on which scheduling syntax I used.
Both these processes die when I exit out of the shell. Is there a way to keep a script from terminating when I close the connection?
We have crons here and they are set up and are working properly, but I would like to use at or nohup for single-use scripts.
Is there something wrong with my syntax? Are there any other methods to producing the desired outcome?
EDIT:
I cannot use screen or disown; they aren't installed on my HP-UX setup, and I am not in a position to install them either.
Use screen. It creates a terminal which keeps going when you log out. When you log back in you can switch back to it.
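The basic round-trip looks like this:
$ screen                   # start a new screen session
$ /path/to/thescript.pl    # run your script inside it
# detach with Ctrl-a d; the session keeps going
$ screen -r                # switch back to it after you log in again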
If you want to keep a process running after you log out:
disown -h <pid>
is a useful bash built-in. Unlike nohup, you can run disown on an already-running process.
First, stop your job with control-Z, get the pid from ps (or use echo $!), use bg to send it to the background, then use disown with the -h flag.
Don't forget to background your job or it will be killed when you logout.
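The whole sequence, sketched with a placeholder script name:
$ ./long_job.sh       # started in the foreground
^Z                    # suspend it with Ctrl-Z
$ bg                  # resume it in the background
$ disown -h %1        # mark job 1 so logout won't send it SIGHUP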
This is just a guess, but something I've seen with some versions of ssh and nohup: if you've logged in with ssh, then you may need to redirect stdout, stderr and stdin to avoid having the session hang when you exit. (One of those may still be attached to the terminal.) I would try:
nohup /path/to/./thescript.pl > whatever.stdout 2> whatever.stderr < /dev/null &
(This is no longer the case with my current versions of ssh and nohup - the latter redirects them if it detects that any is attached to a terminal - but you may be using different versions.)
Syntax for nohup looks ok, but your account may not allow for processes to run after logout. Also, try redirecting the stdout/stderr to a log file or /dev/null.
Run your command in the background:
/path/to/./thescript.pl &
To get a list of your background jobs:
jobs
Now you can selectively disown any of the above jobs using its job ID:
disown <jobid>
All the disowned processes should keep running even after you log out.