Script to copy from URL and paste in file, replacing certain content

I'm trying to make a script which would automatically copy the entire content from WordPress's secret-key API service at https://api.wordpress.org/secret-key/1.1/salt/ and paste it into wp-config.php, replacing the existing lines:
45: define('AUTH_KEY', 'put your unique phrase here');
46: define('SECURE_AUTH_KEY', 'put your unique phrase here');
47: define('LOGGED_IN_KEY', 'put your unique phrase here');
48: define('NONCE_KEY', 'put your unique phrase here');
49: define('AUTH_SALT', 'put your unique phrase here');
50: define('SECURE_AUTH_SALT', 'put your unique phrase here');
51: define('LOGGED_IN_SALT', 'put your unique phrase here');
52: define('NONCE_SALT', 'put your unique phrase here');
What's the best way to go?

arutaku helped me with his hint about wget. I still don't think this solution is perfect (I'll explain why below) and would like to see if anyone has another suggestion, but it is still a valid option. So, my script:
# Delete lines from 45 to 52 from wp-config.php
sed -i 45,52d wp-config.php
# Get new lines and save them in a temporary file
curl https://api.wordpress.org/secret-key/1.1/salt/ > temp-salt
# Insert temporary file content into wp-config.php after line 44
sed -i '44r temp-salt' wp-config.php
# Delete the temporary file
rm temp-salt
The drawback is that this script works with line numbers rather than line content, and that's what makes it less than perfect. But the wp-config-sample.php file is not likely to change very often, so I guess it's not all bad.
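A content-based alternative would be to replace the placeholder block wherever it happens to sit. A sketch (it assumes the stock text 'put your unique phrase here' is still present on all eight lines):
# Get new lines and save them in a temporary file
curl -s https://api.wordpress.org/secret-key/1.1/salt/ > temp-salt
# Print the new salts in place of the first placeholder line,
# drop the remaining placeholder lines, and keep everything else
awk 'BEGIN { while ((getline l < "temp-salt") > 0) salts = salts l "\n" }
/put your unique phrase here/ { if (!done) { printf "%s", salts; done = 1 }; next }
{ print }' wp-config.php > wp-config.tmp && mv wp-config.tmp wp-config.php
# Delete the temporary file
rm temp-salt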

Related

Grepping two patterns from event logs

I am seeking to extract timestamps and IP addresses out of log entries containing a varying amount of information. The basic structure of a log entry is:
<timestamp>, <token_1>, <token_2>, ... ,<token_n>, <ip_address> <token_n+2>, <token_n+3>, ... ,<token_n+m>,-
The number of tokens n between the timestamp and the IP address varies considerably.
I have been studying regular expressions and am able to grep timestamps as follows:
grep -o "[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}T[0-9]\{2\}:[0-9]\{2\}:[0-9]\{2\}"
And IP addresses:
grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
But I have not been able to grep both patterns out of log entries which contain both. Every log entry contains a timestamp, but not every entry contains an IP address.
Input:
2021-04-02T09:06:44.248878+00:00,Creation Time,EVT,WinEVTX,[4624 / 0x1210] Source Name: Microsoft-Windows-Security-Auditing Message string: An account was successfully logged on.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-18\n\tAccount Name:\t\tREDACTED$\n\tAccount Domain:\t\tREDACTED\n\tLogon ID:\t\tREDACTED\n\nLogon Type:\t\t\t10\n\nNew Logon:\n\tSecurity ID:\t\tREDACTED\n\tAccount Name:\t\tREDACTED\n\tAccount Domain:\t\tREDACTED\n\tLogon ID:\t\REDACTED\n\tLogon GUID:\t\tREDACTED\n\nProcess Information:\n\tProcess ID:\t\tREDACTED\n\tProcess Name:\t\tC:\Windows\System32\winlogon.exe\n\nNetwork Information:\n\tWorkstation:\tREDACTED\n\tSource Network Address:\t255.255.255.255\n\tSource Port:\t\t0\n\nDetailed Authentication Information:\n\tLogon Process:\t\tUser32 \n\tAuthentication Package:\tNegotiate\n\tTransited Services:\t-\n\tPackage Name (NTLM only):\t-\n\tKey Length:\t\t0\n\nThis event is generated when a logon session is created. It is generated on the computer that was accessed.\n\nThe subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service or a local process such as Winlogon.exe or Services.exe.\n\nThe logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network).\n\nThe New Logon fields indicate the account for whom the new logon was created i.e. the account that was logged on.\n\nThe network fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.\n\nThe authentication information fields provide detailed information about this specific logon request.\n\t- Logon GUID is a unique identifier that can be used to correlate this event with a KDC event.\n\t- Transited services indicate which intermediate services have participated in this logon request.\n\t- Package name indicates which sub-protocol was used among the NTLM protocols.\n\t- Key length indicates the length of the generated session key. This will be 0 if no session key was requested. Strings: ['S-1-5-18' 'DEVICE_NAME$' 'NETWORK' 'REDACTED' 'REDACTED' 'USERNAME' 'WORKSTATION' 'REDACTED' '10' 'User32 ' 'Negotiate' 'REDACTED' '{REDACTED}' '-' '-' '0' 'REDACTED' 'C:\\Windows\\System32\\winlogon.exe' '255.255.255.255' '0' '%%1833'] Computer Name: REDACTED Record Number: 1068355 Event Level: 0,winevtx,OS:REDACTED,-
Desired Output:
2021-04-02T09:06:44, 255.255.255.255
$ sed -En 's/.*([0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}).*[^0-9]([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}).*/\1, \2/p' file
2021-04-02T09:06:44, 255.255.255.255
Your regexps can be reduced by removing some of the explicit repetition though:
$ sed -En 's/.*([0-9]{4}(-[0-9]{2}){2}T([0-9]{2}:){2}[0-9]{2}).*[^0-9](([0-9]{1,3}\.){3}[0-9]{1,3}).*/\1, \4/p' file
2021-04-02T09:06:44, 255.255.255.255
It could be simpler still if all of the lines in your log file start with a timestamp:
$ sed -En 's/([^,.]+).*[^0-9](([0-9]{1,3}\.){3}[0-9]{1,3}).*/\1, \2/p' file
2021-04-02T09:06:44, 255.255.255.255
If you are looking for lines that contain both patterns, it may be easiest to do two separate searches.
If you're searching your log file for lines that contain both "dog" and "cat", it's usually easiest to do this:
grep dog filename.txt | grep cat
The grep dog will find all lines in the file that match "dog", and then the grep cat will search all those lines for "cat".
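Applied to the patterns from this question, that looks like this (it keeps whole matching lines; the -o extraction still has to run on the result):
grep -E '[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}' file | grep -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'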
You seem not to know the meaning of the "-o" switch.
Regular "grep" (without "-o") means: give the entire line where the pattern can be found. Adding "-o" means: only show the pattern.
Combining two "grep" in a logical AND-clause can be done using a pipe "|", so you can do this:
grep <pattern1> <filename> | grep <pattern2>

Grok Filter for Confluence Logs

I am trying to write a Grok expression to parse Confluence logs and I am partially successful.
My Current Grok pattern is :
%{TIMESTAMP_ISO8601:conflog_timestamp} %{LOGLEVEL:conflog_severity} \[%{APPNAME:conflog_ModuleName}\] \[%{DATA:conflog_classname}\] (?<conflog_message>(.|\r|\n)*)
APPNAME [a-zA-Z0-9\.\#\-\+_%\:]+
And I am able to parse the below log line:
Log line 1:
2020-06-14 10:44:01,575 INFO [Caesium-1-1] [directory.ldap.cache.AbstractCacheRefresher] synchroniseAllGroupAttributes finished group attribute sync with 0 failures in [ 2030ms ]
However, I do have other log lines such as:
Log line 2:
2020-06-15 09:24:32,068 WARN [https-jsse-nio2-8443-exec-13] [atlassian.confluence.pages.DefaultAttachmentManager] getAttachmentData Could not find data for attachment:
-- referer: https://confluence.jira.com/index.action | url: /download/attachments/393217/global.logo | traceId: 2a0bfc77cad7c107 | userName: abcd
and Log line 3:
2020-06-12 01:19:03,034 WARN [https-jsse-nio2-8443-exec-6] [atlassian.seraph.auth.DefaultAuthenticator] login login : 'ABC' tried to login but they do not have USE permission or weren't found. Deleting remember me cookie.
-- referer: https://confluence.jira.com/login.action?os_destination=%2Findex.action&permissionViolation=true | url: /dologin.action | traceId: 8744d267e1e6fcc9
Here the params "userName", "referer", "url" and "traceId" may or may not be present in the log line.
I can write concrete grok expressions for each of these. Instead, can we handle all of them in the same grok expression?
In short: match all log lines.
If the log line has the "referer" param, store it in a variable; if not, proceed to match the rest of the params.
If the log line has the "url" param, store it; if not, try to match the rest of the params.
Repeat for "traceId" and "userName".
Thank you..
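Since grok patterns compile down to plain regular expressions, one way to sketch this (untested, and the conflog_* names for the new fields are just suggestions) is to make the whole trailer optional, with each param in its own optional non-capturing group:
%{TIMESTAMP_ISO8601:conflog_timestamp} %{LOGLEVEL:conflog_severity} \[%{APPNAME:conflog_ModuleName}\] \[%{DATA:conflog_classname}\] (?<conflog_message>(.|\r|\n)*?)(?:\n-- (?:referer: %{DATA:conflog_referer})?(?: \| url: %{DATA:conflog_url})?(?: \| traceId: %{DATA:conflog_traceid})?(?: \| userName: %{GREEDYDATA:conflog_username})?)?$
If a param is absent its group simply matches nothing, and the lazy (.|\r|\n)*? keeps the "-- referer: ..." trailer out of conflog_message whenever one is present.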

bash script - fetch only unique domains from email list to variable

I am new to bash and am having problems understanding how to get this done.
I want to check all "To:" field email address domains and list all unique domains in a variable, to compare against the From: domain.
I get the "from address" domain by using
grep -m 1 "From: " filename | cut -f 2 -d '#' | cut -d ">" -f 1
when reading a mail stored in file filename.
For "to address" domain there can be multiple To: addresses and having multiple domains. I am not sure how to get unique domains from "to address field".
Example to address line will be like this:
To: user#domain.com, user2#domain.com,
User Name <sample#domaintest.com>, test#domainname.com
grep -m 1 "^To: " filename | cut -f 2 -d '#' | cut -d ">" -f 1
but there are different format of email. So I am not sure if grep is right or if I should search for awk or something.
I need to get the unique domain list from the "To:" field email address/addresses to a variable in bash script.
Desired output for above example:
domain.com,domaintest.com,domainname.com
If you are hellbent on doing this with line-oriented utilities, there is a utility formail in the Procmail distribution which can normalize things for you somewhat.
bash$ formail -czxTo: <<\==test==
> From: me <sender@example.com>
> To: you <first@example.org>,
> them <other@example.net>
> Subject: quick demo
>
> Very quick, innit.
> ==test==
first@example.org, other@example.net
So with that you have input which you can actually pass to grep or Awk ... or sed.
todoms=$(formail -czxTo: <message | tr ',' '\n' | sed 's/.*@//')
The From: address will not be normalized by formail -czxFrom: but you can use a neat trick: make formail generate a reply back to the From: address, and then extract the To: header from that.
fromdom=$(formail -rtzcxTo: <message | sed 's/.*@//')
In some more detail, -r says to create a new reply to whoever sent you message, and then we do -zcxTo: on that.
(The -t option may or may not do what you want. In this case, I would perhaps omit it. http://www.iki.fi/era/procmail/formail.html has (vague) documentation for what it does; see also the section just before http://www.iki.fi/era/procmail/mini-faq.html#group-writable and sorry for the clumsy link -- there doesn't seem to be a good page-internal anchor to link to.)
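To get from there to the unique, comma-separated list in your desired output, you can deduplicate the variable's contents (a sketch, building on the todoms assignment above):
printf '%s\n' $todoms | sort -u | paste -sd, -
The unquoted $todoms is deliberate: it lets the shell split the value back into one domain per argument.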
Email address normalization is tricky because there are so many variants to choose from.
From: Elvis Parsley <king@graceland.example.com>
From: king@graceland.example.com
From: "Parsley, Elvis" <king@graceland.example.com> (kill me, I have to use Outlook)
From: "quoted@string" <king@graceland.example.com> (wait, he is already dead)
To: This could fold <recipient@example.net>,
over multiple lines <another@example.org>
I would turn to a more capable language with proper support for parsing all of these formats. My choice would be Python, though you could probably also pull this off in a few lines of Ruby or Perl.
The email library was revamped in Python 3.6, so this assumes you have at least that version. The email.headerregistry machinery, which is new in 3.6, is particularly convenient here.
#!/usr/bin/env python3
from email.policy import default
from email import message_from_binary_file
import sys

# Default to reading the message from stdin.
if len(sys.argv) == 1:
    sys.argv.append('-')

for arg in sys.argv[1:]:
    if arg == '-':
        handle = sys.stdin.buffer  # binary stream; the parser wants bytes
    else:
        handle = open(arg, 'rb')
    message = message_from_binary_file(handle, policy=default)
    from_dom = message.get('From').address.domain
    # Collect unique To: domains, skipping the sender's own domain.
    to_doms = set()
    for addr in message.get('To').addresses:
        dom = addr.domain
        if dom == from_dom:
            continue
        to_doms.add(dom)
    print(','.join([from_dom] + list(to_doms)))
    if arg != '-':
        handle.close()
This simply produces a comma-separated list of domain names; you might want to do the rest of the processing in Python too instead, or change this so that it prints something in a slightly different format.
You'd save this in a convenient place (say, /usr/local/bin/fromto) and mark it as executable (chmod 755 /usr/local/bin/fromto). Now you can call this from the shell like any other utility like grep.
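Fed the demo message from earlier (saved as message), it would print something like this; the order of the To: domains can vary, since they are collected in a set:
bash$ fromto message
example.com,example.org,example.net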

Regex Expression with the JQ Tool

I have a JSON file which I am using the jq tool on to get some lines out of it. However, I now need to get some information out of one of those lines using a regex. I am stuck on two parts: first, I can't figure out the regular expression to get what I want, and second, I do not know the correct syntax for applying the regex along with the jq tool. I have tried the following syntax and get the error "unterminated regexp":
jq '.msg.stdout_lines[2]' /tmp/vaultKeys.json | awk '{gsub(/\:(.*[\a-zA-Z0-9]))}1'
My json file is as follows:
{
    "msg": {
        "changed": true,
        "cmd": [
            "vault",
            "operator",
            "init"
        ],
        "delta": "0:00:00.568974",
        "end": "2018-11-29 15:42:00.243019",
        "failed": false,
        "rc": 0,
        "start": "2018-11-29 15:41:59.674045",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "Unseal Key 1: ZA0Gas2GrHtdMlet1g63N6gvEPYf5mzZEfjPhMDRyAeS\nUnseal Key 2: NY+CLIbgMJIv+e81FuB1OpV0m7rPuqZbIuYT142MrQLl\nUnseal Key 3: HNWmsrXBsSV9JFuGfqpd+GvPYQzHEsLFlxKBfEyBhCZ6\nUnseal Key 4: xDwfI+kFHFRSzq2JyxSGArQsGjCrFiNbkGCP897Zfbuz\nUnseal Key 5: +O8/tTmDNSzaUBMT8QP+2xbvu5uulypf3+xmWzY8fSD3\n\nInitial Root Token: 6kO8ijZzyhcG5Nup5QUca0u3\n\nVault initialized with 5 key shares and a key threshold of 3. Please securely\ndistribute the key shares printed above. When the Vault is re-sealed,\nrestarted, or stopped, you must supply at least 3 of these keys to unseal it\nbefore it can start servicing requests.\n\nVault does not store the generated master key. Without at least 3 key to\nreconstruct the master key, Vault will remain permanently sealed!\n\nIt is possible to generate new unseal keys, provided you have a quorum of\nexisting unseal keys shares. See \"vault operator rekey\" for more information.",
        "stdout_lines": [
            "Unseal Key 1: ZA0Gas2GrHtdMlet1g63N6gvEPYf5mzZEfjPhMDRyAeS",
            "Unseal Key 2: NY+CLIbgMJIv+e81FuB1OpV0m7rPuqZbIuYT142MrQLl",
            "Unseal Key 3: HNWmsrXBsSV9JFuGfqpd+GvPYQzHEsLFlxKBfEyBhCZ6",
            "Unseal Key 4: xDwfI+kFHFRSzq2JyxSGArQsGjCrFiNbkGCP897Zfbuz",
            "Unseal Key 5: +O8/tTmDNSzaUBMT8QP+2xbvu5uulypf3+xmWzY8fSD3",
            "",
            "Initial Root Token: 6kO8ijZzyhcG5Nup5QUca0u3",
            "",
            "Vault initialized with 5 key shares and a key threshold of 3. Please securely",
            "distribute the key shares printed above. When the Vault is re-sealed,",
            "restarted, or stopped, you must supply at least 3 of these keys to unseal it",
            "before it can start servicing requests.",
            "",
            "Vault does not store the generated master key. Without at least 3 key to",
            "reconstruct the master key, Vault will remain permanently sealed!",
            "",
            "It is possible to generate new unseal keys, provided you have a quorum of",
            "existing unseal keys shares. See \"vault operator rekey\" for more information."
        ]
    }
}
Out of the line
"Unseal Key 3: HNWmsrXBsSV9JFuGfqpd+GvPYQzHEsLFlxKBfEyBhCZ6"
I would like just
HNWmsrXBsSV9JFuGfqpd+GvPYQzHEsLFlxKBfEyBhCZ6
Currently, using my regex, I only get the following, and only when I use it without the jq syntax:
: ZA0Gas2GrHtdMlet1g63N6gvEPYf5mzZEfjPhMDRyAeS
So to summarise, I need help with
a) getting a correct regular expression, and
b) the correct syntax to use the expression with the jq tool.
Thanks
For this particular case you can use split instead of regex.
jq -r '.msg.stdout_lines[2]|split(" ")[-1]' file
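Run against the posted file, this prints just the key:
HNWmsrXBsSV9JFuGfqpd+GvPYQzHEsLFlxKBfEyBhCZ6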
Do you have GNU grep?
jq -r '.msg.stdout_lines[2]' /tmp/vaultKeys.json | grep -Po '(?<=: ).+'
In the interests of one-stop shopping, you could for example use this invocation:
jq -r '.msg.stdout_lines[2]
| capture(": (?<s>.*)").s'
Of course there are many other possibilities, depending on your precise requirements.
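For instance, jq's sub filter can strip the prefix without naming a capture group:
jq -r '.msg.stdout_lines[2] | sub("^[^:]*: "; "")' /tmp/vaultKeys.json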
There are many ways. Besides the obvious | grep -Po '(?<=: ).+', you could even use substr with awk if the prefix length is fixed ("Unseal Key 3: " is 14 characters):
jq .. | awk '{print substr($0, 15)}'

Awk - replace column 2 in table 1 with column 2 from table 2, based on matching data in column 1 (common between tables)

After my company purchased new servers, I'm doing a top-down upgrade of the server room. Since all the hardware is changing, I'm not able to use a bare-metal cloning tool to migrate. Using Debian's newusers command I am able to create in bulk all the users from the old server. For the /etc/shadow file, you can copy the second column from your shadow.sync file (from the old server) into the second column of the associated account on the new system. This will transfer the passwords for your accounts to the new system. However, I'm not sure how to do this programmatically using awk (or something else I can integrate into the shell script I already have set up).
shadow.sync contains the following (users & passwords changed for security reasons). This is the file to be copied INTO the current shadow file, which looks almost identical except that the data in the second column has the INCORRECT values.
An in-depth explanation of the fields for the /etc/shadow file can be found here
user1:$6$HiwQEKYDgT$xYU9F3Wv0jFWHmZxN60nFMkTqWn87RRIOvx7Epp57rOmdHN9plJgjhC.jRVVNc1.HUaqSpX/ZcCEFSn6RmQQA0:17531::0:99999:7:::
user2:$6$oOuwJtrIKk$THLsfDppLI8QVw9xEOAaIoZ90Mcz3xGukVdyWGJJqygsavtXvtJ8X9ECc0CfuGzHp0pHNSAqdZY9TAzF5YKLc.:17531::0:99999:7:::
user3:$6$IEHAyRsokQ$e5K3RicE.PUAej8IxG9GnF/SUl1NQ57pqzUVuAzsP8.89SNhuaKE1W7kG5P4hbzV23Bb2zWHx353t.e9ERSVy.:17531::0:99999:7:::
user4:$6$lFOIUQvxdb$W5ITiH/Y021xw1vo8uw6ZtIOmfKjnNnC/SttQjN85MHtLbFeQ2Th5kfAIijXC81CRG4T0kJQ3rzRNRSyQHjyb1:17531::0:99999:7:::
user5:$6$RZbtYxWiwE$lnP8.tTbs0JbLZg5FsmPR8QvrJARbcRuJi2nYm1okwjfkWPkj212mBPjVF1BTo2hVCxLGSw64Cp6DgXheacSx.:17531::0:99999:7:::
Essentially I need to match column 1 (username) between the sync file and the shadow file, and copy column 2 from the sync file over the top of the same column in the actual shadow file. Doing this by hand would be terrible, as I have 90 servers that I'm migrating with over 900 users total.
Random shadow.sync file for demonstration was generated using:
#!/usr/bin/env python
import random, string, crypt, datetime

userList = ['user1','user2','user3','user4','user5']
dateNow = (datetime.datetime.utcnow() - datetime.datetime(1970,1,1)).days
for user in userList:
    randomsalt = ''.join(random.sample(string.ascii_letters,10))
    randompass = ''.join(random.sample(string.ascii_letters,10))
    print("%s:%s:%s::0:99999:7:::" % (user, crypt.crypt(randompass, "$6$"+randomsalt), dateNow))
Please note this Python script was ONLY for demonstration, not for actual production data. As users are added to the server, the /etc/shadow file is generated with the password presented on the command line. The original data (from shadow.sync) needs to be "merged" with the data in /etc/shadow after the newusers command is run (which essentially sets every password to the letter x).
#!/usr/bin/env python
with open('/etc/shadow','rb') as file:
    for line in file:
        TargetLine = line.rstrip().split(":")
        with open('shadow.sync','rb') as shadow:
            for row in shadow:
                SyncLine = row.rstrip().split(":")
                if TargetLine[0] == SyncLine[0]:
                    TargetLine[1] = SyncLine[1]
                    break
        print "NEW MODIFIED LINE: %s" % ":".join(TargetLine)
This opens the /etc/shadow file and loops through its lines. For each line of /etc/shadow we loop through the shadow.sync file; once the usernames match (TargetLine[0] == SyncLine[0]), the password field is modified and the inner loop is broken.
If a match is NOT found (the username is in /etc/shadow but NOT in the shadow.sync file), the if block in the inner loop never fires and the line is left untouched; the result is handled by the final print statement. As this answers the question, I will leave the data output and file manipulation up to the reader.
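Since the question title asks for awk, the same merge can also be done in one pass over each file. A sketch (it assumes usernames never contain a colon, and it writes the merged result to stdout rather than touching /etc/shadow in place):
# First file (shadow.sync): remember each user's password hash.
# Second file (/etc/shadow): swap in the remembered hash when the user matches.
awk 'BEGIN { FS = OFS = ":" }
NR == FNR { pass[$1] = $2; next }
$1 in pass { $2 = pass[$1] }
{ print }' shadow.sync /etc/shadow > shadow.UPDATED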
use Data::Dumper;

# we only need to process the sync file once -
# and store what we find in a hash (dictionary)
open $fh1, '<', 'shadow.sync.txt';
while (<$fh1>)
{
    m/^([^:]+):(.*)$/;
    $hash->{$1} = $2;
}
close $fh1;

# this shows us what we found & stored
print Dumper $hash;

# now we'll process the shadow file which needs updating -
# here we output a side-by-side comparison of what the passwords
# currently are & what they will be updated to (from the hash)
open $fh2, '<', 'shadow.txt';
open $fh3, '>', 'shadow.UPDATED.txt';
while (<$fh2>)
{
    m/^([^:]+):(.*)$/;
    printf ( "%s => %s\n", $1, $2 );
    printf ( "%s => %s\n\n", $1, $hash->{$1} );
    printf $fh3 ( "%s:%s\n", $1, $hash->{$1} );
}
close $fh3;
close $fh2;
Sample output:
$VAR1 = {
          'user5' => '$6$RZbtYxWiwE$lnP8w64Cp6DgXheacSx.:17531::0:99999:7:::',
          'user1' => '$6$HiwVVNc1.HUaqSpX/ZcCEFSn6RmQQA0:17531::0:99999:7:::',
          'user4' => '$6$lFOIUQv1CRG4T0kJQ3rzRNRSyQHjyb1:17531::0:99999:7:::',
          'user3' => '$6$P8.89SNhu23Bb2zWHx353t.e9ERSVy.:17531::0:99999:7:::',
          'user2' => '$6$Cc0CfuGzHp0pHNSAqdZY9TAzF5YKLc.:17531::0:99999:7:::'
        };
user1 => $6$RANDOM1RANDOM1RANDOM1RANDOM1:17531::0:99999:7:::
user1 => $6$HiwVVNc1.HUaqSpX/ZcCEFSn6RmQQA0:17531::0:99999:7:::
user2 => $6$RANDOM2RANDOM2RANDOM2RANDOM2:17531::0:99999:7:::
user2 => $6$Cc0CfuGzHp0pHNSAqdZY9TAzF5YKLc.:17531::0:99999:7:::
user3 => $6$RANDOM3RANDOM3RANDOM3RANDOM3:17531::0:99999:7:::
user3 => $6$P8.89SNhu23Bb2zWHx353t.e9ERSVy.:17531::0:99999:7:::
user4 => $6$RANDOM4RANDOM4RANDOM4RANDOM4:17531::0:99999:7:::
user4 => $6$lFOIUQv1CRG4T0kJQ3rzRNRSyQHjyb1:17531::0:99999:7:::
user5 => $6$RANDOM5RANDOM5RANDOM5RANDOM5:17531::0:99999:7:::
user5 => $6$RZbtYxWiwE$lnP8w64Cp6DgXheacSx.:17531::0:99999:7:::