Create a shell script that can start 3 separate web services at once

I'm trying to create a .sh that will start 3 separate web services in order.
Example:
cd to User/me/Scripts/web
./webservice-start
cd to User/me/Scripts/db-services
./db1-start
./db2-start
This is the idea; the problem is that bash waits for each process to finish before moving on. These services never finish, so the script never gets the cue to run the next command.
I know the syntax is static, it's just for me. I'm more looking for the methodology.
Thanks

You probably just want to run each of the three scripts in the background:
cd User/me/Scripts/web
./webservice-start &
cd User/me/Scripts/db-services
./db1-start &
./db2-start &
The & command terminator runs the command in the background, allowing the shell to move on to the next command.

Related

Start/Stop daemon on Linux via C++ code

I am trying to find out a way to launch a custom daemon from my program. The daemon itself is implemented using double-forking mechanism and works fine if launched directly.
So far I have come across various ways to start a daemon:
Create an init script and install it to init.d directory.
Launch the program using start-stop-daemon command.
Create .desktop file and place in one of the autostart paths.
While the first two methods are typically used to start the service from the command line, the third method is for autostarting the service (or any other application) at user login.
So far my guess is that the program can be executed directly using exec() family of functions, or the 'start-stop-daemon' command can be executed via system() function.
Is there a better way to start/stop service?
Generally startups are done from shell scripts that would call your C++ program which would then do its double fork. Note that it should also close unneeded file descriptors, use setsid() and possibly setpgid/setpgrp (I can't remember if these apply to Linux too), possibly chdir("/"), etc. There are a number of fairly normal things to do which are described in the Stevens book - for more info see http://software.clapper.org/daemonize/daemonize.html
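For illustration, a minimal sketch of that double-fork sequence in C++ might look like this (error handling is trimmed, and the daemonize helper name is just a placeholder):
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

// Classic double-fork daemonization: detach from the controlling terminal,
// become a session leader, and point stdio at /dev/null.
bool daemonize() {
    pid_t pid = fork();
    if (pid < 0) return false;
    if (pid > 0) _exit(0);              // first parent exits

    if (setsid() < 0) return false;     // new session, no controlling tty

    pid = fork();                       // second fork: the daemon can never
    if (pid < 0) return false;          // reacquire a controlling terminal
    if (pid > 0) _exit(0);

    umask(0);
    if (chdir("/") != 0) return false;  // don't keep a mounted filesystem busy

    int fd = open("/dev/null", O_RDWR); // replace inherited stdio
    if (fd >= 0) {
        dup2(fd, STDIN_FILENO);
        dup2(fd, STDOUT_FILENO);
        dup2(fd, STDERR_FILENO);
        if (fd > STDERR_FILENO) close(fd);
    }
    return true;
}
Real code should also close any other inherited file descriptors, as noted above.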
If the daemon is supposed to run with root or other system user account, then the system /etc/init/ or /etc/init.d/ mechanisms are appropriate places to have scripts to stop|start|status|etc your daemon.
If the daemon is supposed to be for the user, and run under his/her account, you have a couple of options.
1) The .desktop file - I'm not personally a fan, but if it also does something for you on logging out (like letting you trigger shutting down your daemon), it might be viable.
2) For console logins, the ~/.bash_login and ~/.bash_logout - you can have these run commands supported by your daemon's wrapper to start it and (later) shut it down. The latter can be done by saving the PID in a file or having the .bash_login keep it in a variable the .bash_logout will use later. This may involve some tweaking to make sure the two scripts get run, once each, by the outermost login shell only (normal .bashrc stuff stays in the .bashrc, and .bash_login would need to read it in for the login shell before starting the daemon, so the PATH and so on would be set up by then).
3) For graphic environments, you'd need to find the wrapper script from which things like your X window manager are run. I'm using lightdm, and at some point /etc/X11/Xsession.d/40x11-common_xsessionrc ends up running my ~/.xsessionrc, which gives me a hook to start up anything I want (I have it run my ~/.xinitrc, which runs my window manager and everything), as well as the place to shut everything down later. The lack of standardization for giving control to the user makes finding the hook pretty annoying, since just using a different login manager (e.g. lightdm versus gdm) can change where the hook is.
4) A completely different approach is to just have the user's crontab start up the daemon. Run "man 5 crontab" and look for the special @reboot option to have tasks run at boot. I haven't used it myself - there's a chance it's root restricted, but it's easy to test and you need only contemplate having your daemon exit gracefully (and quickly) at system shutdown when the system sends it a SIGTERM signal (see /etc/init.d/sendsigs for details).
Hope something from that helps.

Creating a C++ daemon and keeping the environment

I am trying to create a c++ daemon that runs on a Red Hat 6.3 platform and am having trouble understanding the differences between the libc daemon() call, the daemon shell command, startproc, start-stop-daemon and about half a dozen other methods that google suggests for creating daemons.
I have seen suggestions that two forks are needed, but calling daemon only does one. Why is the second fork needed?
If I write the init.d script to call the bash daemon command, does the C code still need to call daemon()?
I implemented my application to call the C daemon() function since it seems the simplest solution, but I am running into the problem that my environment variables seem to get discarded. How do I prevent this?
I also need to run the daemon as a particular user, not as root.
What is the simplest way to create a C++ daemon that keeps its environment variables, runs as a specific user, and is started on system boot?
Why is the second fork needed?
Answered in What is the reason for performing a double fork when creating a daemon?
bash daemon shell command
My bash 4.2 does not have a builtin command named daemon. Are you sure yours is from bash? What version, what distribution?
environment variables seem to get discarded.
I can see no indication to that effect in the documentation. Are you sure it is due to daemon? Have you checked whether they are present before, and missing after that call?
run the daemon as a particular user
Read about setresuid and related functions.
What is the simplest way to create a C++ daemon that keeps its environment variables, runs as a specific user, and is started on system boot?
Depends. If you want to keep your code simple, forget about all of this and let the init script do this via e.g. start-stop-daemon. If you want to handle this in your app, daemon combined with setresuid should be a good approach, although not the only one.
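As a rough sketch of that in-app approach, assuming daemon(3) together with setresuid(2), and with "appuser" standing in for whatever account the daemon should run as:
#ifndef _GNU_SOURCE
#define _GNU_SOURCE 1                   // setresuid() is a glibc extension
#endif
#include <pwd.h>
#include <unistd.h>
#include <cstdlib>

int main() {
    // daemon(0, 0): fork, setsid, chdir("/"), redirect stdio to /dev/null.
    // It does not clear the environment, so variables exported before startup
    // are still visible via getenv() afterwards.
    if (daemon(0, 0) != 0)
        return EXIT_FAILURE;

    // Drop privileges; "appuser" is a placeholder account name.
    struct passwd *pw = getpwnam("appuser");
    if (pw == nullptr)
        return EXIT_FAILURE;
    if (setgid(pw->pw_gid) != 0 ||
        setresuid(pw->pw_uid, pw->pw_uid, pw->pw_uid) != 0)
        return EXIT_FAILURE;

    // ... the daemon's real work would go here ...
    return EXIT_SUCCESS;
}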

How to call a python script to process hundreds of text files parallely using make?

I have hundreds of text files in a folder named "in/". I need to run a python script which takes one file at a time, process it and drop it in a folder named "out/". I have the python script in place to do this.
As the number of text files to be processed is very large (10000) and all the file processing is independent, I wanted to use "make -j" to get the best out of my CPU, which has 8 cores. I created a makefile which looks like this:
SCRIPT_DIR:=/home/xyz/abc/scriptFolder
IN_DIR:=/home/xyz/abc/data/in/in10000
OUT_DIR:=/home/xyz/abc/data/out/out10000

chk:
	cd $(OUT_DIR); \
	python $(SCRIPT_DIR)/process_parallel.py --inFile $(IN_DIR)/*
As mentioned, process_parallel.py takes in one file at a time, processes it and drops it as a text file in the current folder, which is OUT_DIR. I ran htop after this and checked: I could see only one process running, whereas I should have seen 8 since I ran it with -j 8. Can you please guide me on where I am wrong?
My first thought is writing a shell script to do this. Something like:
for f in in/*.txt; do
    ./process_parallel.py "$f" &
done
wait
Your OS scheduler should take care of parallelizing the processing across the CPU cores. You can then call the script inside your Makefile.
There's also GNU Parallel https://www.gnu.org/software/parallel/

Waiting for unzip to finish before carrying on C++ code on a RedHat machine

I want to unzip a zipped folder on my Redhat machine.
To do this I send a bash script the string:
"unzip /usr/bin/Folder.gz"
This unzips the folder no problem, as in I get the general
inflating folderA/folderB/fileX
etc.
However, I want to hold the code at the unzip command, waiting until the unzipping is complete.
I have tried using
sleep(5)
but I don't want to use this and just hope that it will always take less than five seconds, especially since this would be inefficient for very small zipped files.
I have searched online but to no avail...
So my question is; what is a reliable way to stall a program until the unzipping is complete?
O/S: Redhat
Programming Language: C++
IDE: Eclipse
How do you run the bash script?
If you use the system() API it will start the program and then wait until the spawned process ends.
Under the hood, system() is made up of three other system calls: fork(), execl() and wait().
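For example, a minimal sketch using the command from the question:
#include <cstdlib>
#include <iostream>

int main() {
    // std::system() does not return until the shell running unzip has exited,
    // so anything after this line runs only once extraction is complete.
    int rc = std::system("unzip /usr/bin/Folder.gz");
    std::cout << "unzip finished, raw wait status " << rc << std::endl;
    return rc == 0 ? 0 : 1;
}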
Try:
unzip /usr/bin/Folder.gz &
wait $!
That will cause the shell to wait for the completion of the last background process. The PID of the last background command is stored in $!.
Not sure how this relates to C++, but if you want to do the same from code you can use the waitpid function.
Of course, if you want your program to block while unzip executes I'm a little confused as to what the exact problem is. Assuming you're using system or some equivalent to run unzip, it should block until the command completes.
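A rough sketch of that fork/exec/waitpid approach, reusing the path from the question:
#include <sys/wait.h>
#include <unistd.h>
#include <iostream>

int main() {
    pid_t pid = fork();
    if (pid == 0) {
        // Child: replace this process with unzip.
        execlp("unzip", "unzip", "/usr/bin/Folder.gz", static_cast<char *>(nullptr));
        _exit(127);                     // reached only if exec failed
    }
    if (pid > 0) {
        int status = 0;
        waitpid(pid, &status, 0);       // block here until unzip terminates
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            std::cout << "unzip completed successfully" << std::endl;
        else
            std::cout << "unzip failed or was interrupted" << std::endl;
    }
    return 0;
}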
I'm not really sure this is the best way, but it is reliable.
Now, instead of just sending the command to bash, why not redirect the output to a file:
unzip /usr/bin/Folder.gz > output.txt
Read the file at regular intervals from your C++ code (let's say every second); once you find "100%" or whatever the last line of the output should contain, carry on with your code.
I don't know if this would be overkill or inefficient, but you could move the unzipping into another thread, and halt your main thread until that thread has finished processing.

How to ensure that a program is running and restart it if needed?

I developed a piece of software (in C++) which needs to be running continuously. That basically means that it has to be restarted each time it stops.
I was thinking about using cron jobs to check every minute whether it is still alive, but there might be a cleaner or more standard way of doing this.
Thanks in advance
Fedora and Ubuntu use upstart, which has the ability to automatically restart your daemon if it exits.
I believe the easiest way to do this is to have a script that starts your program and, whenever control returns to the script (i.e. your program exits), simply restarts it:
#!/bin/sh
while true; do
./Your_program
done
Monit can do what you want and much more.
cron is an option if your app is smart enough to check whether it is already running (this is to avoid many copies of it running). This is usually done in a standard way via PID files.
There are two proper ways to do it on *nix:
Use the OS infrastructure (like smf/svc on solaris, upstart on Ubuntu, etc...). This is the proper way as you can stop/restart/enable/disable/reconfigure at any time.
Use "respawn" in /etc/inittab (enabled at boot time).
launchtool is a program I used for this purpose, it will monitor your process and restart it as needed, it can also wait a few seconds before reinvocation. This can be useful in case there are sockets that need to be released before the app can start again. It was very useful for my purposes.
Create the program you wish to have running continually as a child of a "watcher" process that restarts it when it terminates. You can use wait/waitpid (or SIGCHLD) to tell when the child terminates. I would expect someone has written code to do this (it's pretty much what init(8) does).
However, the program presumably does something. You might want not only to check the application is running, but that it is not hung or something and is providing the service that it is intended to. This might mean running some sort of probe or synthetic transaction to check it's operating correctly.
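A rough sketch of such a watcher (the child program path is a placeholder):
#include <sys/wait.h>
#include <unistd.h>
#include <iostream>

int main() {
    for (;;) {
        pid_t pid = fork();
        if (pid < 0)
            return 1;                   // fork failed; bail out rather than spin
        if (pid == 0) {
            // Child: run the program that must stay up (placeholder path).
            execl("/usr/local/bin/your_program", "your_program",
                  static_cast<char *>(nullptr));
            _exit(127);                 // reached only if exec failed
        }
        int status = 0;
        waitpid(pid, &status, 0);       // block until the child terminates
        std::cerr << "child exited, restarting" << std::endl;
        sleep(1);                       // avoid a tight restart loop
    }
}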
EDIT: You may be able to get init to do this for you - give it a type of 'respawn' in inittab. From the man page:
respawn
The process will be restarted whenever it terminates (e.g. getty).
How about a script that checks about every 10 minutes to see if the application is running, and if it isn't, restarts the computer? If the application is running, it just continues to check.
Here is my script using PrcView, a freeware process viewer utility. I used notepad.exe as my example application that needs to be running; I am not sure of the command for checking every 10 minutes or where it would go in my script.
@echo off
PATH=%PATH%;%PROGRAMFILES%\PV;%PROGRAMFILES%\Notepad
PV.EXE notepad.exe >nul
if ERRORLEVEL 1 goto Process_NotFound
:Process_Found
echo Notepad is running
goto END
:Process_NotFound
echo Notepad is not running
shutdown /r /t 50
goto END
:END
This is not so easy. If you're thinking "I know, I'll write a program to watch for my program, or see if such a program already exists as a standard service!" then what if someone kills that program? Who watches the watcher?