Fetching an HTTP process user using bash - regex

What's the best way to fetch the web process user (apache|nginx|www-data) for use in a bash script?
In my case it's for setting up folder permissions and changing files to the proper owner.
Currently I'm using:
ps aux | grep -E "(www-data|apache|nginx).*(httpd|apache2|nginx)" \
| grep -o "^[a-z\-]*" | head -n1
inside a bash script to fetch the owner of the HTTP process.
Any hints on a smarter solution or a better regex would be great.

Your solution will really depend on your operating system. One option might be to check whether likely candidates exist in your password file:
user=$(awk -F: '/www|http/{print $1;exit}' /etc/passwd)
If you really want to look for the owner of running processes, remember that Apache often launches a root-owned "master" process, then launches children as the web user. So perhaps something like this:
user=$(ps aux|awk '$1=="root"{next} /www|http|apache/{print $1;exit}')
But you should also be able to determine things based on OS detection, since things tend to follow standards:
case "`uname -s`" in
Darwin) user=_www; uid=70 ;;
FreeBSD) user=www; uid=80 ;;
Linux)
if grep Ubuntu /etc/lsb-release; then
user=www-data; uid=$(id -u www-data)
elif [ -f /etc/debian_version ]; then
user=www-data; uid=$(id -u www-data)
elif etc
etc
fi
;;
esac
I'm not up on the best ways to detect different Linux distros, so that may require a bit of additional research for you.
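For the Linux branch, one minimal sketch (my own suggestion, not part of the answer above) is to read /etc/os-release, which most current distributions ship, instead of probing per-distro files; the user names assume the distros' stock Apache/nginx packages:
# Hedged sketch: /etc/os-release defines ID (e.g. "ubuntu", "debian", "centos").
if [ -f /etc/os-release ]; then
    . /etc/os-release
    case "$ID" in
        ubuntu|debian)      user=www-data ;;
        centos|rhel|fedora) user=apache ;;
    esac
    [ -n "$user" ] && uid=$(id -u "$user")
fi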

Related

How to stop grep after matching

In Windows, I would have searched for a folder's name using findstr. Similarly, I want to get a specific folder using grep.
In Windows, I'm using: svnlook tree -t [repos_path] | findstr (13\.9\.[0-9]+\/)
On an EC2 machine (Linux): svnlook tree /var/www/svn/ILS | grep -Eo '(13\.9\.[0-9]+\/)'
and I got the repos that I need
13.9.4/
13.9.5/
13.9.6/
13.9.7/
My problem is that the grep line on Linux doesn't stop (exit); it keeps running.
How can I stop it after matching?
You can specify the -m option: the maximum number of matches. After the specified number of matching lines, grep stops reading.
After ^Z the svnlook is paused. You can kill the program (^C), send it to the background (bg) or continue it (fg).
So when you want to interrupt, use ^C, or start grep with the -m option in the first place.
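As a minimal illustration using the command from the question (same repository path as above), stopping after the first match:
# grep -m 1 exits after one matching line; the pipe then closes and svnlook
# receives SIGPIPE instead of running on. Use a larger count (e.g. -m 4)
# to collect more versions before stopping.
svnlook tree /var/www/svn/ILS | grep -m 1 -Eo '13\.9\.[0-9]+/'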

So, what exactly is the deal with QSharedMemory on application crash?

When a Qt application that uses QSharedMemory crashes, some memory handles are left stuck in the system.
The "recommended" way to get rid of them is to
if (memory.attach(QSharedMemory::ReadWrite))
    memory.detach();
bool created = memory.create(dataSize, QSharedMemory::ReadWrite);
In theory the above code should work like this:
We attach to a left over piece of sh...ared memory, detach from it, it detects that we are the last living user and gracefully goes down.
Except... that is not what happens in a lot of cases. What I actually see happening, a lot, is this:
// fails with memory.error() = SharedMemoryError::NotFound
memory.attach(QSharedMemory::ReadWrite);
// fails with "segment already exists" .. wait, what?! (see above)
bool created = memory.create(dataSize, QSharedMemory::ReadWrite);
The only somewhat working way I've found to work around this is to write a pid file on application startup containing the PID of the currently running app.
The next time the same app is run, it picks up this file and does:
//QProcess::make sure that PID is not reused by another app at the moment
//the output of the command below should be empty
ps -p $previouspid -o comm=
//QProcess::(runs this script, reads output)
ipcs -m -p | grep $user | grep $previouspid | sed "s/  */ /g" | cut -f1 -d " "
//QProcess::(passes the result of the previous script to clean up stuff)
ipcrm -m $1
Now, I can see the problems with such an approach myself, but it is the only thing that works.
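For reference, the three QProcess calls could be folded into a single shell sketch (untested; the pid-file path is made up for illustration, and awk stands in for the sed/cut pair above):
#!/bin/bash
# PID written by the previous run of the application (hypothetical path).
previouspid=$(cat /tmp/myapp.pid)
# Only clean up if no process with that PID is currently alive.
if [ -z "$(ps -p "$previouspid" -o comm=)" ]; then
    # Remove shared-memory segments that were created by that PID for this user.
    for shmid in $(ipcs -m -p | grep "$USER" | grep "$previouspid" | awk '{print $1}'); do
        ipcrm -m "$shmid"
    done
fi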
The question is: can someone explain what exactly is going on with this supposedly-nonexistent memory in the first piece of code above, and how to deal with it properly?

Permissions issue calling bash script from c++ code that apache is running

The goal of this code is to create a stack trace whenever a sigterm/sigint/sigsegv/etc is caught. In order to not rely on memory management inside of the C++ code in the case of a sigsegv, I have decided to write a bash script that will receive the PID and memory addresses in the trace array.
The signal events are being caught.
Below is where I generate the call to the bash script
trace_size = backtrace(trace, 16);
trace[1] = (void *)ctx->rsi;
messages = backtrace_symbols(trace, trace_size);
char syscom[356] = {0};
sprintf(syscom, "bash_crash_supp.sh %d", getpid());
for (i = 1; i < (trace_size - 1) && i < 10; ++i)
{
    sprintf(syscom, "%s %p", syscom, trace[i]);
}
Below is where my issue arises. The command in syscom is generated correctly: I can stop the code before the following popen, run the command in a terminal, and it works.
However, running the script directly from the C++ code does not seem to work.
setuid(0);
FILE *bashCommand = popen(syscom, "r");
char buf[256] = {0};
while (fgets(buf, sizeof(buf), bashCommand) != 0) {
    LogMessage(LOG_WARNING, "%s\n", buf);
}
pclose(bashCommand);
exit(sig);
The purpose of the bash script is to get the offset from /proc/pid/maps, and then use that to run addr2line to get the file name/line number.
strResult=$(sudo cat /proc/"$1"/maps | grep "target_file" | grep -m 1 '[0-9a-fA-F]')
offset=$( cut -d '-' -f 1 <<< "$strResult");
However, offset ends up as 0 when I run from the C++ code, whereas running the exact same command (the one stored in syscom) in a terminal gives the expected output.
I have been trying to fix this for a while. Permissions are most likely the issue, but I've tried to work around them every way I know of or have seen via Google. The user trying to run the script (i.e. the one currently running the C++ code) is apache.
The fix does not need to worry about the security of the box. If something as simple as "chmod 777 /proc -r" worked, that would have been the solution (sadly the OS doesn't let me do such things with /proc).
Things I've already tried:
Adding backticks around the command that's stored in syscom in the C++ code
chown the script to apache
chmod 4755 on the bash_crash_supp.sh script, allowing it to always fire as root.
I have added apache to sudoers (visudo), allowing it to run sudo without a password
I have added a sub file to sudoers.d (just in case) that does the same as above
I have looked into objdump, however it does not give me either the offset or the file/line num for an addr (from what I can see)
I have called setuid(0) in the C++ code to set the current user to root
Command generated in C++
bash_crash_supp.sh 25817 0x7f4bfe600ec8 0x7f4bf28f7400 0x7f4bf28f83c6 0x7f4bf2904f02 0x7f4bfdf0fbb0 0x7f4bfdf1346e 0x7f4bfdf1eb30 0x7f4bfdf1b9a8 0x7f4bfdf176b8
Params in bash:
25817 0x7f4bfe600ec8 0x7f4bf28f7400 0x7f4bf28f83c6 0x7f4bf2904f02 0x7f4bfdf0fbb0 0x7f4bfdf1346e 0x7f4bfdf1eb30 0x7f4bfdf1b9a8 0x7f4bfdf176b8
Can anyone think of any other ways to solve this?
Long story short, almost all Unix-based systems ignore setuid on any interpreted script (anything with a shebang #!) as a security precaution.
You may use setuid on actual executables, but not the shell scripts themselves. If you're willing to take a massive security risk, you can make a wrapper executable to run the shell script and give the executable setuid.
For more information, see this question on the Unix StackExchange: https://unix.stackexchange.com/a/2910

Fabric raise error if grep results returned

I am using Fabric to deploy Django (of course). I want to be able to run a local command that greps for a string and, if it returns any results, raises an exception and halts the deploy.
Something like:
local('grep -r -n "\s console.log" .')
So if I get > 0 results, I want to halt progress.
What is the best way to handle this?
Run it like this:
with settings(warn_only=True):
local('grep -r -n "\s console.log" .')
This prevents Fabric from aborting script execution when the call returns a non-zero exit code; you can then inspect the result yourself and abort the deploy only when grep actually finds something.
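The detail the task hinges on is grep's exit status, which is the inverse of what a deploy check wants. A plain shell illustration of the same check from the question:
# grep exits 0 when it finds at least one match and 1 when it finds none,
# so "success" here is exactly the case where the deploy should be halted.
if grep -r -n "\s console.log" .; then
    echo "console.log found - halting deploy" >&2
    exit 1
fi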

BASH scripts for generating inputs to parallel C++ jobs

I'm an amateur C++ programmer trying to learn about basic shell scripting. I have a complex C++ program that currently reads in different parameter values from Parameters.h and then executes one or more simulations with each parameter value sequentially. These simulations take a long time to run. Since I have a cluster available, I'd like to effectively parallelize this job, running the simulations for each parameter value on a separate processor. I'm assuming it's easier to learn shell scripting techniques for this purpose than OpenMPI. My cluster runs on the LSF platform.
How can I write my input parameters in Bash so that they are distributed among multiple processors, each executing the program with that value? I'd like to avoid interactive submission. Ideally, I'd have the inputs in a text file that Bash reads, and I'd be passing two parameters to each job: an actual parameter value and a parameter ID.
Thanks in advance for any leads and suggestions.
my solution
GNU Parallel does look slick, but I ended up (with the help of an IT admin) writing a simple bash script that echoes three inputs (a treatment identifier, a treatment/parameter value, and a simulation identifier):
#!/bin/bash
j=1
for treatment in $(cat treatments.txt); do
    for experiment in $(cat simulations.txt); do
        bsub -oo tr_${j}_sim_${experiment}_screen -eo tr_${j}_sim_${experiment}_err -q short_serial "echo \"$j $treatment $experiment\" | ./a.out"
    done
    let j=$j+1
done
The file treatments.txt contains a list of the values I'd like to vary, simulations.txt contains a list of all the simulation identifiers I'd like to run (currently just 1,...,s, where s is the total number of simulations I want for each treatment), and the treatments are indexed 1...j.
Maybe check out: http://www.gnu.org/software/parallel/
edit:
Or, check out the -P argument to xargs, example:
time echo {1..5} | xargs -n 1 -P 5 sleep
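If GNU Parallel is installed on the submission host, a rough equivalent of the accepted bsub loop might look like this (untested; it passes the treatment value itself rather than the j index, and assumes a.out reads its inputs on stdin as in the script above):
# Run every (treatment, simulation) combination, at most 8 jobs at a time.
# {1} and {2} are replacement strings for the two input files.
parallel -j 8 'echo {1} {2} | ./a.out > tr_{1}_sim_{2}_screen' \
    :::: treatments.txt :::: simulations.txt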
Say you want to run the program simulate with inputs foo, bar, baz, and quux in parallel; the simplest way is:
inputs="foo bar baz quux"

# Launch processes in the background with &
children=""
for x in $inputs; do
    simulate "$x" > "$x.output" &
    children="$children $!"
done

# Wait for each to finish
for pid in $children; do
    wait $pid
done

for x in $inputs; do
    echo "simulate '$x' gave:"
    cat "$x.output"
    rm -f "$x.output"
done
The problem is that all simulations are launched at the same time, so if your number of inputs is much larger than your number of CPUs/cores, they may swamp the system.
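One way to avoid swamping the machine is to throttle the same loop so that at most a fixed number of simulations run at once. A rough sketch (max_jobs is an assumed value you would match to your core count; newer bash also offers wait -n to resume as soon as any single job finishes):
max_jobs=4
for x in $inputs; do
    simulate "$x" > "$x.output" &
    # Once max_jobs children are running, wait for the whole batch before launching more.
    if [ "$(jobs -r | wc -l)" -ge "$max_jobs" ]; then
        wait
    fi
done
wait   # wait for the last batch to finish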
My best stab at this is to background multiple instances of your program and let the OS scheduler put them on different processors. AFAIK there is no way in any shell to specify which processor a given process should run on.
Something to the effect of:
#!/bin/sh
for arg in foo bar baz; do
    ./your_program "$arg" &
done