Why does the [ ... ] syntax not work in a Makefile while `test` does?

I am on Linux Mint 19. I am entirely new to Makefiles.
Here is the problematic part:
[ $(shell id --user) -eq 0 ] && ( echo && echo "distrib target has to be run as normal user" && echo && exit 1 )
which throws this error:
[ 1000 -eq 0 ] && ( echo && echo "distrib target has to be run as normal user" && echo && exit 1 )
Makefile:25: recipe for target 'distrib' failed
make: *** [distrib] Error 1
By contrast, using the test command directly works as expected:
if test $(shell id --user) -eq 0; then ( echo && echo "distrib target has to be run as normal user" && echo && exit 1 ); fi
Why is that? Did I break some Makefile rule?

This doesn't have anything to do with makefiles; it has to do with shell scripting and the difference between using && vs. if in terms of the exit code. You are comparing apples and oranges here.
It's not related to test vs [. If you write the version using [ inside an if statement you'll get the same behavior as you do with test, and if you write the test version with the && model you'll get the same behavior as you do with [.
Run this in your shell:
[ 1000 -eq 0 ] && echo hi
echo $?
Now run this in your shell:
if [ 1000 -eq 0 ]; then echo hi; fi
echo $?
You'll see the former gives a non-0 exit code, while the latter gives a 0 (success) exit code. That's how if works; it "swallows" the exit code of the condition.
Make looks at the exit code of each shell command it runs for a recipe to decide whether the recipe failed.
Generally in make scripting you want to re-arrange your expressions to use || rather than &&. That ensures that if the script exits early it exits with a success code not a failure code. You can write your script like this:
[ $$(id -u) -ne 0 ] || ( echo && echo "distrib target has to be run as normal user" && echo && exit 1 )
Note I use $$(id -u) not $(shell id --user); the recipe is run in the shell already and it's an anti-pattern to use the make shell function in a recipe. Also, the -u option is a POSIX standard option while --user is only available in the GNU utilities version of id.
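For context, here is a minimal sketch of how that guard might sit inside the recipe. The target name distrib is taken from the error output above, but the second recipe line is purely a placeholder, and recipe lines must of course be indented with a real tab:

# Hypothetical recipe using the || pattern; $$ passes a literal $ through to the shell.
distrib:
        @[ $$(id -u) -ne 0 ] || ( echo && echo "distrib target has to be run as normal user" && echo && exit 1 )
        @echo "building distrib as user $$(id -un)"  # placeholder for the real packaging steps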

Related

If statement in a build using YAML (AWS)

I'm trying to do an if statement in YAML, something like this:
if $NUMBER_OF_SOURCES == 3 then echo 1
(echo 1 will change in the future to a script that does something in AWS.)
What's the correct syntax? Is it even possible?
I tried something like this, but I get:
- if [ $NUMBER_OF_SOURCES -eq 3 ]
- then
- echo "true"
- fi
[Container] 2022/04/24 10:35:36 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: if [ $NUMBER_OF_SOURCES -eq 3 ]. Reason: exit status 2
You can do this using the pipe (|) block notation in YAML:
- |
  if [ $NUMBER_OF_SOURCES -eq 3 ]
  then
    echo "true"
  fi
or just put everything in one line:
- if [ $NUMBER_OF_SOURCES -eq 3 ]; then echo "true"; fi
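If this is a CodeBuild buildspec (an assumption based on the [Container] log line in the question), the surrounding structure might look something like the sketch below; only the multi-line command itself comes from the answer above:

version: 0.2
phases:
  build:
    commands:
      - |
        if [ $NUMBER_OF_SOURCES -eq 3 ]
        then
          echo "true"
        fi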

if xx or xx, else... condition in a Dockerfile

I need to provide an if ... or ..., else condition inside a Dockerfile.
I found a solution for a plain if/else, but I am not able to include an or condition inside the if.
This solution works:
RUN if [ condition 1 ] ; then \
command 1; \
else; then \
condition 2; fi
I have tried something like this, unsuccessfully:
RUN if [ condition 1 | condition 2 ] ; then \
command 1; \
else; then \
condition 2; fi
How can I add the or inside the conditional?
Within a single Docker RUN instruction, you're running a shell command, and any Bourne shell syntax is allowed. If you're using [ ... ] that's actually running test(1) which has a -o operator for "or".
RUN if [ condition1 -o condition2 ]; then \
      command1; \
    else \
      command2; \
    fi
The POSIX spec marks the -o operator as obsolescent and recommends using the shell || operator instead.
RUN if [ condition1 ] || [ condition2 ]; then ...; fi
RUN if test condition1 || test condition2 ; then ...; fi
More generally, Dockerfiles don't support conditionals; you can't conditionally COPY a file in, for example. If you are writing complex installation logic, you can instead write and test an ordinary shell script on the host, COPY it into the image, and RUN it, which can be easier than trying to remember to backslash-escape every line ending.
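As a rough sketch of that script-based approach (the file name install.sh, its variant argument, and the surrounding Dockerfile lines are all assumptions, not something from the question):

#!/bin/sh
# install.sh -- ordinary shell logic, testable on the host before building the image
set -e
if [ "$1" = "full" ] || [ "$1" = "minimal" ]; then
    echo "installing the $1 variant"
else
    echo "unknown variant: $1" >&2
    exit 1
fi

# Corresponding Dockerfile lines
COPY install.sh /tmp/install.sh
RUN sh /tmp/install.sh full && rm /tmp/install.sh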

Creating an alert function in Bash

I wanted to create a function in bash similar to a default alias I got in Ubuntu, which looks like this:
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
This creates a simple notification after a command has been issued with it.
For example, using
history | grep vim; sleep 5; alert
gives a notification after the sleep is done, simply saying
history | grep vim; sleep 5;
I would like to write the alert as a bash function instead, which has given me some trouble with the regex.
I have tried:
function alert2 () {
    ICON=$([ $? = 0 ] && echo terminal || echo error)
    MSG=$(history | tail -n1 | sed -e s/^\s*[0-9]\+\s*//\;s/[\;\&\|]\s*alert$//)
    notify-send --urgency=low -i $ICON $MSG
}
This outputs the line number from history when called on its own, and gives an "Invalid number of options" error when called as in the first example.
Is this possible, and if so, how? Is it simply my regex that is faulty?
I'm running on WSL, so I don't have notify-send installed:
function alert2 () {
    ICON=$([ $? = 0 ] && echo terminal || echo error);
    MSG=$(history | tail -n1 | sed -e 's/^\s*[0-9]\+\s*//;s/[;&|]\s*alert2$//');
    echo $ICON $MSG;
}
jadams@Temp046317:~/code/data-extract$ cat /etc/apt/sources.list > /dev/null ; alert2
terminal cat /etc/apt/sources.list > /dev/null
I'm hoping that this will work for you (instead of the echo), with the icon and message kept as separate arguments:
notify-send --urgency=low -i "$ICON" "$MSG"
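Putting the pieces together, a sketch of the complete function with the quoting fixed (untested with notify-send on my side; the terminal/error icon names come from the original alias):

alert2 () {
    # Capture $? first -- any other command inside the function would overwrite it.
    local status=$?
    local icon msg
    icon=$([ "$status" -eq 0 ] && echo terminal || echo error)
    msg=$(history | tail -n1 | sed -e 's/^\s*[0-9]\+\s*//;s/[;&|]\s*alert2$//')
    # Quote icon and message separately so notify-send sees two arguments.
    notify-send --urgency=low -i "$icon" "$msg"
}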

How can I get the return value of `wget` from a sh script into an `int` variable

OS: Linux raspberrypi 4.19.58-v7l+ #1245 SMP Fri Jul 12 17:31:45 BST 2019 armv7l GNU/Linux
Board: Raspberry Pi 4
I have a script:
#!/bin/bash
line=$(head -n 1 /var/www/html/configuration.txt)
file=/var/www/html/4panel/url_response.txt
if [ -f "$file" ]; then
wget_output=$(wget -q -i "$line" -O $file --timeout=2)
echo "$?"
else
echo > $file
chown pi:pi $file
fi
which I call from a C++ program using:
int val_system = 0;
val_system = system("/var/www/html/4panel/get_page.sh");
std::cout<<"System return value: "<<val_system<<std::endl;
If there is something wrong with the script, echo "$?" will output the return value of wget, but val_system will always be 0.
Does system() return the value of echo "$?"? In that case 0 would be correct. And if that is the case, how can I put the return value of wget in val_system?
I set up a situation in which echo "$?" always returns 8 (basically, I entered an incorrect URL), and:
I tried deleting echo "$?", but val_system still returned 0;
With echo "$?" deleted, I changed the wget line to a plain wget -q -i "$line" -O $file --timeout=2 (dropping the command substitution), and val_system now returned 2048.
None of my attempts bore any fruit, so I have come here to seek guidance. How can I make val_system / system() return what echo "$?" returns?
How can I get the return value of wget from the script into an int variable inside the C++ program that calls the script?
The integer value returned by system() contains extra information about the executed command's status along with its exit code (which is why you saw 2048: the exit code 8 shifted left by 8 bits); see system() and Status Information. You need to extract the exit code using the WEXITSTATUS macro, like:
std::cout << "System return value: " << WEXITSTATUS(val_system) << std::endl;
If you want to echo the status and return it, you will need to save the value of $? to a variable, and exit with it explicitly.
wget_output=$(wget -q -i "$line" -O $file --timeout=2)
status=$?
...
echo $status
...
exit $status
If you don't need to execute echo or any other command between the call to wget and the end of the script, you can rely on the script exiting with the last command's status (i.e. the one corresponding to the call to wget) implicitly.
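Putting it together, a sketch of the revised get_page.sh; the paths and the else branch are copied from the question, and only the status handling is new:

#!/bin/bash
line=$(head -n 1 /var/www/html/configuration.txt)
file=/var/www/html/4panel/url_response.txt
if [ -f "$file" ]; then
    wget_output=$(wget -q -i "$line" -O "$file" --timeout=2)
    status=$?
    echo "$status"
    # Exit with wget's status so the C++ side sees it via WEXITSTATUS.
    exit "$status"
else
    echo > "$file"
    chown pi:pi "$file"
fi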

Unicorn doesn't pick up changes with new deploy of Rails app (Capistrano, Nginx)

I'm new to deploying, so this is probably a rookie mistake, but here it goes.
I have a Rails 4 app that I'm deploying to a Linux server using a combination of Capistrano, Unicorn, and Nginx. The deploy script runs fine and the app is now reachable at the desired IP, so that's great. The thing is, a) Unicorn doesn't restart upon deployment (at least, the PIDs don't change) and b) not surprisingly, the new changes aren't reflected in the available app. I don't seem to be able to do anything other than completely stopping and restarting unicorn in order to refresh it. If I do this, then the changes are picked up, but this process is obviously not ideal.
Manually, if I run kill -s HUP $UNICORN_PID then the pids of the workers change but not the master, and changes aren't picked up (which, apparently they are supposed to be); using USR2 appears to have no effect on the current processes.
Here's the unicorn init script I'm using, based on suggestions from other stack overflow questions with similar problems:
#!/bin/sh
set -e
USAGE="Usage: $0 <start|stop|restart|upgrade|rotate|force-stop>"
# app settings
USER="deploy"
APP_NAME="app_name"
APP_ROOT="/path/to/$APP_NAME"
ENV="production"
# environment settings
PATH="/home/$USER/.rbenv/shims:/home/$USER/.rbenv/bin:$PATH"
CMD="cd $APP_ROOT/current && bundle exec unicorn -c config/unicorn.rb -E $ENV -D"
PID="$APP_ROOT/shared/pids/unicorn.pid"
OLD_PID="$PID.oldbin"
TIMEOUT=${TIMEOUT-60}
# make sure the app exists
cd $APP_ROOT || exit 1
sig () {
test -s "$PID" && kill -$1 `cat $PID`
}
oldsig () {
test -s $OLD_PID && kill -$1 `cat $OLD_PID`
}
case $1 in
start)
sig 0 && echo >&2 "Already running" && exit 0
echo "Starting $APP_NAME"
su - $USER -c "$CMD"
;;
stop)
echo "Stopping $APP_NAME"
sig QUIT && exit 0
echo >&2 "Not running"
;;
force-stop)
echo "Force stopping $APP_NAME"
sig TERM && exit 0
echo >&2 "Not running"
;;
restart|reload)
sig HUP && echo "reloaded $APP_NAME" && exit 0
echo >&2 "Couldn't reload, starting '$CMD' instead"
run "$CMD"
;;
upgrade)
if sig USR2 && sleep 2 && sig 0 && oldsig QUIT
then
n=$TIMEOUT
while test -s $OLD_PID && test $n -ge 0
do
printf '.' && sleep 1 && n=$(( $n - 1 ))
done
echo
if test $n -lt 0 && test -s $OLD_PID
then
echo >&2 "$OLD_PID still exists after $TIMEOUT seconds"
exit 1
fi
exit 0
fi
echo >&2 "Couldn't upgrade, starting '$CMD' instead"
su - $USER -c "$CMD"
;;
rotate)
sig USR1 && echo rotated logs OK && exit 0
echo >&2 "Couldn't rotate logs" && exit 1
;;
*)
echo >&2 $USAGE
exit 1
;;
esac
Using this script, start and stop work as expected, but reload/restart do nothing (they print the expected output but don't change the running pids) and upgrade fails. According to the error log, it's because the first master is still running (ArgumentError: Already running on PID: $PID).
And here's my unicorn.rb:
app_path = File.expand_path("../..", __FILE__)
working_directory "#{app_path}"
pid "#{app_path}/../../shared/pids/unicorn.pid"
# listen
listen "#{app_path}/../../shared/sockets/unicorn.sock", :backlog => 64
# logging
stderr_path "#{app_path}/../../shared/log/unicorn.stderr.log"
stdout_path "#{app_path}/../../shared/log/unicorn.stdout.log"
# workers
worker_processes 3
# use correct Gemfile on restarts
before_exec do |server|
ENV['BUNDLE_GEMFILE'] = "#{working_directory}/Gemfile"
end
# preload
preload_app false
before_fork do |server, worker|
old_pid = "#{app_path}/shared/pids/unicorn.pid.oldbin"
if File.exists?(old_pid) && server.pid != old_pid
begin
Process.kill("QUIT", File.read(old_pid).to_i)
rescue Errno::ENOENT, Errno::ESRCH
# someone else did our job for us
end
end
end
after_fork do |server, worker|
if defined?(ActiveRecord::Base)
ActiveRecord::Base.establish_connection
end
end
Any help is very much appreciated, thanks!
It is hard to say for certain, since I haven't encountered this particular issue before, but my hunch is that this is your problem:
app_path = File.expand_path("../..", __FILE__)
working_directory "#{app_path}"
Every time you deploy, Capistrano creates a new directory for your app at the location releases/<timestamp>. It then updates a current symlink to point at this latest release directory.
In your case, you may mistakenly be telling Unicorn to use releases/<timestamp> as its working_directory. (SSH to the server and check the contents of unicorn.rb to be certain.) Instead, what you should do is point to current. That way you don't have to stop and cold start unicorn to get it to see the new working directory.
# Since "current" is a symlink to the current release,
# Unicorn will always see the latest code.
working_directory "/var/www/my-app/current"
I suggest rewriting your unicorn.rb so that you aren't using relative paths. Instead hard-code the absolute paths to current and shared. It is OK to do this because those paths will remain the same for every release.
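For instance, a sketch of a unicorn.rb along those lines; /var/www/my-app is just a placeholder for your actual deploy path:

# All paths hard-coded to the Capistrano "current" and "shared" directories.
app_root = "/var/www/my-app"

working_directory "#{app_root}/current"
pid               "#{app_root}/shared/pids/unicorn.pid"
listen            "#{app_root}/shared/sockets/unicorn.sock", :backlog => 64

stderr_path "#{app_root}/shared/log/unicorn.stderr.log"
stdout_path "#{app_root}/shared/log/unicorn.stdout.log"

worker_processes 3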
The line
ENV="production"
looks extremely suspicious to me. I suspect that it wants to be
RAILS_ENV="production".
Without this, won't Rails wake up not knowing which environment it is in?