I want to grant user01 on my machine the ability to shut it down, but only with at least 1 hour of notice.
Essentially, this boils down to the following command:
shutdown -h +<time>
In this case, <time> must be greater than 60 (minutes).
Using visudo, I added the following sudoers line:
user01 ALL=/sbin/shutdown -h +<time>
I need some way to ensure that user01 may only issue the halt shutdown if the time argument is greater than 60. I've tried regex, but to no avail. I may be wrong in saying this, but it appears that the sudoers file may not support regex?
Any help with regards to evaluating an expression to achieve this task would be appreciated.
I don't even want to know if sudoers has regex support (a quick grep suggests it doesn't):
Regexes are tricky; a simple mistake could allow a user to run Elvis-knows-what...
[6-9][0-9]|[1-9][0-9]*[0-9][0-9] - you could shorten it if the notation allows shortcuts for counting occurrences (or just use a '+'), but would you really want it?
That being said, imho the easiest (and most sensible) solution is to write a script along the lines of
#! /bin/sh
# wrapper around shutdown: refuse anything that is not a single number above 60
test $# -eq 1 || exit 1              # exactly one argument
test 60 -lt "$1" || exit 1           # maybe print a message
exec /sbin/shutdown -h "+$1"
and allow user01 to sudo that script.
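For completeness, the sudoers entry then points at the wrapper rather than at /sbin/shutdown itself. Assuming the script is installed as /usr/local/sbin/delayed-halt (a path chosen here purely for illustration), the visudo line could look like:
user01 ALL = (root) /usr/local/sbin/delayed-halt
Make sure the wrapper is owned by root and not writable by user01, otherwise the check can simply be edited away.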
Related
I am trying to check that the value of PASS_MAX_DAYS in the /etc/login.defs file is equal to 90 or less. I am testing on a SUSE 12 server, but the command does not work:
grep "^PASS_MAX_DAYS\s*([0-9]|[1-8][0-9]|90)" /etc/login.defs
Thanks for your support and time
It is better to test the number against your limit than to match a string against a pattern (as already suggested in the comments). For example, like this:
awk '/^PASS_MAX_DAYS/ && $2<=90' /etc/login.defs
This way you can easily modify your command if your limit changes to 30 or to 365 days. Also, I guess values like 090 are still valid for that configuration.
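If you need a pass/fail exit status rather than printed output (say, for a compliance script), awk's exit code can carry the result. A minimal sketch along the same lines:
awk '/^PASS_MAX_DAYS/ && $2<=90 {ok=1} END {exit !ok}' /etc/login.defs && echo "PASS_MAX_DAYS is 90 or less"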
grep, by default, doesn't understand extended regular expressions, so the unescaped ( and | in your pattern are treated as literal characters.
grep -E "^PASS_MAX_DAYS\s*([0-9]|[1-8][0-9]|90)\s*$" /etc/login.defs
will give you SOME result.
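Similar to the awk variant above, if all you want is a yes/no exit status for a script, add -q and tighten the first \s* to \s+ so the keyword and its value must be separated (this sketch still shares the pattern's blind spot for values written with a leading zero):
grep -Eq "^PASS_MAX_DAYS\s+([0-9]|[1-8][0-9]|90)\s*$" /etc/login.defs && echo "PASS_MAX_DAYS is 90 or less"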
That said, how many different entries for PASS_MAX_DAYS do you expect in that file?
When I run the command
nmcli --get-values TYPE connection show --active
I sometimes receive a list of values as follows
vpn
802-3-ethernet
tun
tun
But other times the vpn line is not present (the order of the lines cannot be assumed).
I'm looking for a one-liner that will accept the output of that nmcli command (presumably via pipe/stdin?) and return an exit code of 1 when vpn is in that list and an exit code of 0 when vpn is not in that list.
What I've Tried
Every combination of grep that I can think of. grep -v will absolutely not work because it will always find a line that is not vpn. Other options to grep return data but do not change the error code (as far as I can tell).
Every negation regex I can find or think of. Regexes in the form ^(?!vpn).*$ do not work because there will always be a line that does not say vpn.
Use Case
I am writing a systemd service to update my dynamic DNS, but I don't want to update it while I'm on a VPN. I want to use systemd's built-in abilities as much as possible, so I want to use the built-in ExecStartPre= (which fails the unit on exit code 1+) to control whether the service starts.
If you've got a way to run a service (or not) using systemd depending on whether a VPN is connected, I'll accept that in lieu of the above. But naive assumptions like "tun0 active=VPN" are false for me. I have various tun connections active at any one time, for various reasons. So triggering on sys-subsystem-net-devices-tun0.device does not work.
What Doesn't Work
Most of the Google and SO results I find are for line-specific negation and do not address my use case where there will be multiple lines. Or they return the values and do not set the error code. I need error codes set.
Line-based: Regular expression to match a line that doesn't contain a word
Line-based: How to negate specific word in regex?
Instead of just checking for the existence of vpn in the output, count the number of occurrences, then evaluate a conditional expression on that count:
nmcli --get-values TYPE connection show --active | [ $(grep -c ^vpn) -eq 0 ]
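The same test can then be wired straight into the unit via ExecStartPre=. A sketch of that line, using grep -q and -x instead of counting (the ! inverts grep's exit status, so the pre-check fails, and the service is skipped, exactly when a vpn connection is active):
ExecStartPre=/bin/sh -c '! nmcli --get-values TYPE connection show --active | grep -qx vpn'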
I've done some Google searching, and there is very little information out there about this. What would be an effective and foolproof way of detecting whether X11 or Wayland is in use, preferably at compile time and with CMake? I need to apply this to a C++ project of mine.
The accepted answer is inaccurate and dangerous. It just runs loginctl to dump a large list of user sessions and greps for every line containing the current user's name (or any other string that happens to match it), which can lead to false positives and multiple matching lines. Calling whoami is also wasteful.
Here's a much better way to get the user's session details: query your exact username's details and grab the ID of its first session scope.
This is a Bash/ZSH-compatible one-liner solution:
if [ "$(loginctl show-session $(loginctl user-status $USER | grep -E -m 1 'session-[0-9]+\.scope' | sed -E 's/^.*?session-([0-9]+)\.scope.*$/\1/') -p Type | grep -ic "wayland")" -ge 1 ]; then
echo "Wayland!"
else
echo "X11"
fi
I really wish that loginctl had a "list all sessions just for a specific user", but it doesn't, so we have to resort to these tricks. At least my trick is a LOT more robust and should always work!
I assume you want to evaluate the display server at configure time, i.e. when CMake is called, rather than during every compilation. That's how CMake works and how it should be used. One downside is that you have to re-run CMake whenever the display server changes.
There is currently no standard way to detect the running display server, and likewise no stock CMake snippet for it. Just pick whichever detection method works for you or your environment.
Call that check from CMake, store the result in a variable, and use it in your C++ code.
For example loginctl show-session $(loginctl | grep $(whoami) | awk '{print $1}') -p Type works for me. The resulting CMake check is
execute_process(
    # the shell pipeline needs sh -c; execute_process() itself won't expand $(...)
    COMMAND sh -c "loginctl show-session $(loginctl | grep $(whoami) | awk '{print $1}') -p Type"
    OUTPUT_VARIABLE result_display_server
    OUTPUT_STRIP_TRAILING_WHITESPACE)
if("${result_display_server}" STREQUAL "Type=x11")
    set(display_server_x11 TRUE)
else()
    set(display_server_x11 FALSE)
endif()
You will probably have to fiddle with the condition and check for Type=wayland or similar to get it working properly in your environment.
You can use display_server_x11 and write it into a config.h file to use it within C++ code.
I'm trying to remove sensitive data like passwords from my Git history. Instead of deleting whole files I just want to substitute the passwords with removedSensitiveInfo. This is what I came up with after browsing through numerous StackOverflow topics and other sites.
git filter-branch --tree-filter "find . -type f -exec sed -Ei '' -e 's/(aSecretPassword1|aSecretPassword2|aSecretPassword3)/removedSensitiveInfo/g' {} \;"
When I run this command it seems to be rewriting the history (it shows the commits it's rewriting and takes a few minutes). However, when I check to see if all sensitive data has indeed been removed it turns out it's still there.
For reference, this is how I do the check:
git grep aSecretPassword1 $(git rev-list --all)
This shows me all of the hundreds of commits that match the search query. Nothing has been substituted.
Any idea what's going on here?
I double-checked the regular expression I'm using, which seems to be correct. I'm not sure what else to check for or how to properly debug this, as my Git knowledge is quite rudimentary. For example, I don't know how to test whether 1) my regular expression isn't matching anything, 2) sed isn't being run on all files, 3) the file changes are not being saved, or 4) something else is going on.
Any help is very much appreciated.
P.S.
I'm aware of several StackOverflow threads about this topic. However, I couldn't find one about substituting words (rather than deleting files) in all (ASCII) files (rather than in a specific file or file type). I'm not sure whether that should make a difference, but none of the suggested solutions have worked for me.
git-filter-branch is a powerful but difficult-to-use tool - there are several obscure things you need to know to use it correctly for your task, and each one is a possible cause of the problems you're seeing. So rather than immediately trying to debug them, let's take a step back and look at the original problem:
Substitute given strings (ie passwords) within all text files (without specifying a specific file/file-type)
Ensure that the updated Git history does not contain the old password text
Do the above as simply as possible
There is a tailor-made solution to this problem:
Use The BFG... not git-filter-branch
The BFG Repo-Cleaner is a simpler alternative to git-filter-branch specifically designed for removing passwords and other unwanted data from Git repository history.
Ways in which the BFG helps you in this situation:
The BFG is 10-720x faster
It automatically runs on all tags and references, unlike git-filter-branch, which only does that if you add the extraordinary --tag-name-filter cat -- --all command-line options (note that the example command you gave in the question does NOT have these, which is a possible cause of your problems)
The BFG doesn't generate any refs/original/ refs - so no need for you to perform an extra step to remove them
You can express your passwords as simple literal strings, without having to worry about getting regex-escaping right. The BFG can handle regex too, if you really need it.
Using the BFG
Carefully follow the usage steps - the core bit is just this command:
$ java -jar bfg.jar --replace-text replacements.txt my-repo.git
The replacements.txt file should contain all the substitutions you want to do, in a format like this (one entry per line - note the comments shouldn't be included):
PASSWORD1 # Replace literal string 'PASSWORD1' with '***REMOVED***' (default)
PASSWORD2==>examplePass # replace with 'examplePass' instead
PASSWORD3==> # replace with the empty string
regex:password=\w+==>password= # Replace, using a regex
Your entire repository history will be scanned, and all text files (under 1MB in size) will have the substitutions performed: any matching string (that isn't in your latest commit) will be replaced.
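Once the BFG has finished, the unwanted data is merely unreferenced, not yet deleted; the usage instructions then have you expire the reflog and garbage-collect, along these lines:
$ cd my-repo.git
$ git reflog expire --expire=now --all && git gc --prune=now --aggressive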
Full disclosure: I'm the author of the BFG Repo-Cleaner.
Looks OK. Remember that filter-branch retains the original commits under refs/original/, e.g.:
$ git commit -m 'add secret password, oops!'
[master edaf467] add secret password, oops!
1 file changed, 4 insertions(+)
create mode 100644 secret
$ git filter-branch --tree-filter "find . -type f -exec sed -Ei '' -e 's/(aSecretPassword1|aSecretPassword2|aSecretPassword3)/removedSensitiveInfo/g' {} \;"
Rewrite edaf467960ade97ea03162ec89f11cae7c256e3d (2/2)
Ref 'refs/heads/master' was rewritten
Then:
$ git grep aSecretPassword `git rev-list --all`
edaf467960ade97ea03162ec89f11cae7c256e3d:secret:aSecretPassword2
but:
$ git lola
* e530e69 (HEAD, master) add secret password, oops!
| * edaf467 (refs/original/refs/heads/master) add secret password, oops!
|/
* 7624023 Initial
(git lola is my alias for git log --graph --oneline --decorate --all). Yes, it's in there, but under the refs/original namespace. Clear that out:
$ rm -rf .git/refs/original
$ git reflog expire --expire=now --all
$ git gc
Counting objects: 6, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (6/6), done.
Total 6 (delta 0), reused 0 (delta 0)
and then:
$ git grep aSecretPassword `git rev-list --all`
$
(as always, run filter-branch on a copy of the repo Just In Case; and then removing original refs, expiring the reflog "now", and gc'ing, means stuff is Really Gone).
Background: We're using a tape library and the backup software NetWorker to back up data here. The installed client is fairly basic, and when we need to restore more than one target directory we create a script that simply launches X client instances in the background, with X lines of the following form:
recover -c client-srv -t "Mon Dec 10 08:00:00" -s barckup-srv -d /dest/dir/ -f -a /src/dir &
The trouble is that different partitions/directories backed up from the same machine at the same time might be spread across several different tapes, and some of those tapes may have been removed from the library between the backup and restore.
Until recently, the only ways people here found out which tapes are needed were to either wait for the library to complain that it doesn't have a particular tape, or to set up a fake restore in a crappy old desktop GUI client and hit a particular menu option. The first option is super bad when the tape turns out to be off-site and takes a day to get back, and the second is tedious and time-consuming.
Actual Question: I've written a "meta"-script that reads the script we've already created with the commands above, feeds it into the interactive CLI client, and gets it to spit out which tapes are required and whether they're actually in the library. To do this, the script uses the following regular expressions to pull out the necessary info:
# pull out a list of the -a targets
restore_targets="`sed 's/^.* -a \([^ ]*\) .*$/\1/' $rec_script`"
# pull out a list of -c clients
restore_clients="`sed 's/^.* -c \([^ ]*\) .*$/\1/' $rec_script`"
numclients=`echo "$restore_clients" | sort -u | wc -l`
# pull out a list of -t dates
restore_dates="`sed 's/^.* -t \"\([^\"]*\)\" .*$/\1/' $rec_script`"
numdates=`echo "$restore_dates" | sort -u | wc -l`
I am not terribly familiar with using s/\(x\)/\1/ types of regexes, to the point that I don't remember the name, but is this the best way of accomplishing what I am doing? The commands work, but I'm wondering if I'm using the .* needlessly.
\1 refers to the first capturing group (the \(x\) ... \1 construct is called a backreference). If you replace foo(.*?) with \1 and feed in foobar, the resulting text becomes bar, as \1 points to the text captured by that group.
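For instance, running the -c extraction from your script on one of the sample lines shows the backreference at work:
$ echo 'recover -c client-srv -t "Mon Dec 10 08:00:00" -s barckup-srv -d /dest/dir/ -f -a /src/dir &' | sed 's/^.* -c \([^ ]*\) .*$/\1/'
client-srv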
As for your question, it might be safer and easier to parse the arguments using Python (or another high-level scripting language):
>>> import shlex
>>> shlex.split('recover -c client-srv -t "Mon Dec 10 08:00:00" -s barckup-srv -d /dest/dir/ -f -a /src/dir &')
['recover', '-c', 'client-srv', '-t', 'Mon Dec 10 08:00:00', '-s', 'barckup-srv', '-d', '/dest/dir/', '-f', '-a', '/src/dir', '&']
Now, this is much easier to work with. The quotes are gone and all of the components of the command are nicely split up into a list.
If you want this to be completely foolproof, you could use argparse and implement your own parser for this command line pretty easily. This will enable you to easily get the info, but it might be overkill for your situation.
As for your actual question, you can dissect the regex:
^.* -t "([^\"]*)" .*$
This regex captures -t "foo \" bar", while a non-greedy version would stop at -t "foo \".