Regex to extract http links from an XML file

I have an xml file with many lines like:
<xhtml:link vip="true" href="http://store.vcenter.com/stores/en/product/tigers-midi/100" />
How do I extract just the link - http://store.vcenter.com/stores/en/product/tigers-midi/100?
I tried http://www\.\.com[^<]+ but that captures everything until the end of the line - including quotes and closing XML tags.
I'm using this expression with egrep.

Don't parse HTML with regex, use a proper XML/HTML parser.
Check: Using regular expressions with HTML tags
You can use one of the following:
xmllint
xmlstarlet
saxon-lint
File:
<root>
<xhtml:link vip="true" href="http://store.vcenter.com/stores/en/product/tigers-midi/100" />
</root>
Example with xmllint:
xmllint --xpath '//*[@vip="true"]/@href' file.xml 2>/dev/null
Output:
href="http://store.vcenter.com/stores/en/product/tigers-midi/100"
If you need a quick & dirty one time command, you can do:
egrep -o 'https?://[^"]+' file
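On the sample line above this prints just the URL, because -o outputs only the matched text and [^"]+ stops at the closing quote:
http://store.vcenter.com/stores/en/product/tigers-midi/100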

Related

A Regular Expression that I know is correct, doesn't work with awk. Please advise

Following up on an answer by @dawg to my question about how to delete multiple sections in a file based on known patterns, I want to use a regular expression in awk to identify the start of the section(s) I want to delete.
The file I am working with is an xml file. It is in fact the file containing the recently used filenames list (RUFL) in Linux Mint (~/.local/share/recently-used.xbel).
This is how the RUFL is structured:
<?xml version="1.0" encoding="UTF-8"?>
<xbel version="1.0"
xmlns:bookmark="http://www.freedesktop.org/standards/desktop-bookmarks"
xmlns:mime="http://www.freedesktop.org/standards/shared-mime-info"
>
<bookmark href="file:///home/ocor61/Documents/Linux/Linux%20Mint%20Cinnamon%20Keyboard%20Shortcuts.pdf" added="2021-07-18T01:57:02Z" modified="2021-07-18T01:57:02Z" visited="1969-12-31T23:59:59Z">
<info>
<metadata owner="http://freedesktop.org">
<mime:mime-type type="application/pdf"/>
<bookmark:applications>
<bookmark:application name="Document Viewer" exec="&apos;xreader %u&apos;" modified="2021-07-18T01:57:02Z" count="1"/>
</bookmark:applications>
</metadata>
</info>
</bookmark>
<bookmark href="file:///home/ocor61/Documents/Linux/Linux%20Command%20Line%20Cheat%20Sheet.pdf" added="2021-07-18T01:57:09Z" modified="2021-07-18T01:57:09Z" visited="1969-12-31T23:59:59Z">
<info>
<metadata owner="http://freedesktop.org">
<mime:mime-type type="application/pdf"/>
<bookmark:applications>
<bookmark:application name="Document Viewer" exec="&apos;xreader %u&apos;" modified="2021-07-18T01:57:09Z" count="1"/>
</bookmark:applications>
</metadata>
</info>
</bookmark>
<bookmark href="file:///home/ocor61/Documents/work.bfproject" added="2021-07-20T10:52:59Z" modified="2021-07-22T08:41:57Z" visited="1969-12-31T23:59:59Z">
<info>
<metadata owner="http://freedesktop.org">
<mime:mime-type type="application/x-bluefish-project"/>
<bookmark:applications>
<bookmark:application name="bluefish" exec="&apos;bluefish %u&apos;" modified="2021-07-22T08:41:57Z" count="2"/>
</bookmark:applications>
</metadata>
</info>
</bookmark>
</xbel>
I am working on a script to remove filenames from the list. It works fine, but I am also working with an array that contains patterns that should not be used. For example: if the pattern [bookmark] were used to identify a section that must be removed, the entire file would become unusable. The same goes for parts of [bookmark], but also for href, added, info... You get my drift.
So, I want to work with a regexp to counter the problems of entering patterns that cannot be used.
Currently, this is the awk code I am using (thanks to @dawg):
ENDLINE='</bookmark>'
awk -v f=1 -v st="$1" -v end="$ENDLINE" '
match($0, st) {f=0}
f
match($0, end){f=1}' ~/.local/share/recently-used.xbel
$1 would be the pattern a user enters at the command line, which is part of the file name that must be removed from the RUFL.
The following is the code I would like to use, including the regexp, which doesn't work:
STARTLINE='/(<bookmark href)(.*)($1)(.*)(>)/'
ENDLINE='</bookmark>'
awk -v f=1 -v st="$STARTLINE" -v end="$ENDLINE" '
match($0, st) {f=0}
f
match($0, end){f=1}' ~/.local/share/recently-used.xbel
I have tested the regular expression at https://regexr.com/, so I know it is correct. However, when I use it in my script, this is the error message I am getting:
./ruffle.sh: line 99: syntax error near unexpected token `$0,'
./ruffle.sh: line 99: ` match($0, st) {f=0}'
I have also tried to enter the regexp itself in the awk command line instead of the variable, but that has the same result.
I don't know how to proceed, so any help is appreciated.
The answer to my question lies in how regular expressions can differ between environments. The website I used to check my regexp validates it for languages like JS, but not for Bash or, likely, other shell tools.
With shellcheck.net as well as by putting the command 'set -vx' in my script right before the awk command, I managed to work things out.
Another mistake I made was trying to match the complete line in the regexp, while I only need the part of the line that can hold the entered pattern (which is the part between 'file:' and 'added' in the file ~/.local/share/recently-used.xbel).
The regexp that ultimately works for me now with the variable STARTLINE is:
STARTLINE='file:.*'$1'.*added='
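For reference, a minimal sketch of how this STARTLINE slots into the earlier awk command ($1 is still the pattern the user enters on the command line):
STARTLINE='file:.*'$1'.*added='
ENDLINE='</bookmark>'
awk -v f=1 -v st="$STARTLINE" -v end="$ENDLINE" '
match($0, st) {f=0}
f
match($0, end){f=1}' ~/.local/share/recently-used.xbel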
I will have to look into using an XML parser, thanks for the suggestion! For now, however, my script works. Thanks @Sundeep and @EdMorton!
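For the record, an XML-aware version of the same deletion could look like this minimal xmlstarlet sketch (not the script above; PATTERN is a placeholder for the user-supplied string):
xmlstarlet ed --inplace -d "//bookmark[contains(@href,'PATTERN')]" ~/.local/share/recently-used.xbel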

regex to exclude string and delete line

I have the following lines in an XML file
<User id="10338" directoryId="1" sometext txt text test/>
<User id="10359" directoryId="100" some more text text text/>
<User id="103599" directoryId="100" some more text text text/>
<User id="10438" directoryId="1" sometext txt text test/>
I am trying to remove any lines that start with <User id=" but I want to keep the ones that have directoryId="1".
my current sed command is
sed -i '' '/<User id="/d' file.xml
I have looked at A regular expression to exclude a word/string and a few other Stack Overflow posts but was not able to get this to work. Can someone please help me write the regex? I essentially need to delete any lines that start with <User id= but exclude the ones where directoryId="1".
You can use
sed -i '' -e '/directoryId="1"/b' -e '/<User id="/d' file.xml
With this sed command,
/directoryId="1"/b skips the lines containing directoryId="1" and
/<User id="/d deletes the other lines that contain <User id=".

Using sed to replace multiline xml

I'm trying to use sed to edit/change an XML file, but I'm having problems with multiline matching.
The file I want to change has (extract)
<keyStore>
<location>repository/resources/security/apimanager.jks</location>
<password>wso2carbon</password>
</keyStore>
I want to change the password (and only the keyStore password, the file has another password tag)
I'm trying
sed -i 's/\(<keyStore.*>[\s\S]*<password.*>\)[^<>]*\(<\/password.*>\)/\1$WSO2_STORE_PASS\2/g' $WSO2_PATH/$1/repository/conf/broker.xml
but it's not working (it changes nothing; the pattern is not found).
If I test the pattern in an online tester (https://regex101.com/), it seems to work fine.
Also, I have tried to replace [\s\S]* with [^]*, but in that case sed generates a syntax error.
I'm using Ubuntu 16.04.1.
Any suggestion is welcome.
Parsing XML with regular expressions is always going to be problematic, as XML is not a regular language. Instead, you can use a proper XML parser, for example with XMLStarlet:
xmlstarlet ed --inplace -u "//keyStore/password" -v "$WSO2_STORE_PASS" "$WSO2_PATH/$1/repository/conf/broker.xml"
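To preview the change before editing in place, a minimal sketch (drop --inplace; NEWPASS and broker.xml are placeholders here):
xmlstarlet ed -u "//keyStore/password" -v "NEWPASS" broker.xml
This writes the modified document to stdout so you can inspect the <keyStore> block before committing with --inplace.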
Sed is not the tool for the job. Use an XML-aware tool, for example xsh:
open { shift } ;
insert text { shift } replace //keyStore/password/text() ;
save :b ;
Run as
xsh script.xsh "$WSO2_PATH/$1/repository/conf/broker.xml" "$WSO2_STORE_PASS"

Parsing HTML page using bash

I have a web HTML page and I'm trying to parse it.
Source:
<tr class="active0"><td class=ac><a name="redis/172.29.219.17"></a><a class=lfsb href="#redis/172.29.219.17">172.29.219.17</a></td><td>0</td><td>0</td><td>-</td><td>0</td><td>0</td><td></td><td>0</td><td>0</td><td>-</td><td><u>0<div class=tips><table class=det><tr><th>Cum. sessions:</th><td>0</td></tr><tr><th colspan=3>Avg over last 1024 success. conn.</th></tr><tr><th>- Queue time:</th><td>0</td><td>ms</td></tr><tr><th>- Connect time:</th><td>0</td><td>ms</td></tr><tr><th>- Total time:</th><td>0</td><td>ms</td></tr></table></div></u></td><td>0</td><td>?</td><td>0</td><td>0</td><td></td><td>0</td><td></td><td>0</td><td><u>0<div class=tips>Connection resets during transfers: 0 client, 0 server</div></u></td><td>0</td><td>0</td><td class=ac>17h12m DOWN</td><td class=ac><u> L7TOUT in 1001ms<div class=tips>Layer7 timeout: at step 6 of tcp-check (expect string 'role:master')</div></u></td><td class=ac>1</td><td class=ac>Y</td><td class=ac>-</td><td><u>1<div class=tips>Failed Health Checks</div></u></td><td>1</td><td>17h12m</td><td class=ac>-</td></tr>
<tr class="backend"><td class=ac><a name="redis/Backend"></a><a class=lfsb href="#redis/Backend">Backend</a></td><td>0</td><td>0</td><td></td><td>1</td><td>24</td><td></td><td>29</td><td>41</td><td>200</td><td><u>5<span class="rls">4</span>033<div class=tips><table class=det><tr><th>Cum. sessions:</th><td>5<span class="rls">4</span>033</td></tr><tr><th>- Queue time:</th><td>0</td><td>ms</td></tr><tr><th>- Connect time:</th><td>0</td><td>ms</td></tr><tr><th>- Total time:</th><td><span class="rls">6</span>094</td><td>ms</td></tr></table></div></u></td><td>5<span class="rls">4</span>033</td><td>1s</td><td><span class="rls">4</span>89<span class="rls">1</span>000</td><td>1<span class="rls">8</span>11<span class="rls">6</span>385<div class=tips>compression: in=0 out=0 bypassed=0 savings=0%</div></td><td>0</td><td>0</td><td></td><td>0</td><td><u>0<div class=tips>Connection resets during transfers: 54004 client, 0 server</div></u></td><td>0</td><td>0</td><td class=ac>17h12m UP</td><td class=ac> </td><td class=ac>1</td><td class=ac>1</td><td class=ac>0</td><td class=ac> </td><td>0</td><td>0s</td><td></td></tr></table><p>
What I want is:
172.29.219.17 L7TOUT in 1001ms
So what I'm trying right now is:
grep redis index.html | grep 'a name=\"redis\/[0-9]*.*\"'
to extract the IP address.
But the regex doesn't seem to pick out only the first row; it returns both rows, whereas the IP is only in row 1.
I've double-checked the regex I'm using, but it doesn't seem to work.
Any ideas?
Using XPath expressions in xmllint with its built-in HTML parser produces output like this:
ipAddr=$(xmllint --html --xpath "string(//tr[1]/td[1])" html)
172.29.219.17
For the timeout value, I counted the td cells manually to find the one containing it, which turned out to be the 24th.
xmllint --html --xpath "string(//tr[1]/td[24]/u[1])" html
produces the output:
L7TOUT in 1001ms
Layer7 timeout: at step 6 of tcp-check (expect string 'role:master')
Removing the whitespace and extracting only the needed part with Awk:
xmllint --html --xpath "string(//tr[1]/td[24]/u[1])" html | awk 'NF && /L7TOUT/{gsub(/^[[:space:]]*/,"",$0); print}'
L7TOUT in 1001ms
Put it in a variable:
timeOut=$(xmllint --html --xpath "string(//tr[1]/td[24]/u[1])" html | awk 'NF && /L7TOUT/{gsub(/^[[:space:]]*/,"",$0); print}')
Now you can print both values together:
echo "${ipAddr} ${timeOut}"
172.29.219.17 L7TOUT in 1001ms
Version details:
xmllint --version
xmllint: using libxml version 20902
Also, there is a stray </table> closing tag at the end of your HTML input, just before <p>, which xmllint reports as
htmlfile:147: HTML parser error : Unexpected end tag : table
Remove it before further testing.
Here is a list of command-line tools that will help you parse different formats via bash; bash is extremely powerful and useful.
JSON: use jq
XML/HTML: use xq (see the sketch after this list)
YAML: use yq
CSS: use bashcss
I have tested all the other tools; I can't comment on this last one.
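As a quick illustration of the XML/HTML case, a sketch with xq (assuming the xq that ships with the Python yq package, which converts XML to JSON and then applies a jq filter; links.xml and its contents are made up for the example):
# links.xml is assumed to contain e.g. <root><link href="http://example.com"/></root>
xq -r '.root.link."@href"' links.xml
http://example.com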
If the code starts getting truly complex, you might consider the naive answer below, as programming languages with class support will assist.
Naive approach - old answer
Parsing complex formats like JSON, XML, HTML, CSS, YAML, etc. is extremely difficult in bash and likely error-prone. Because of this I recommend one of the following:
PHP
RUBY
PYTHON
GOLANG
because these languages are cross-platform and have parsers for all the formats listed above.
If you want to parse HTML with regexes, then you have to make assumptions about the HTML formatting. E.g. you assume here that the a tag and its name attribute are on the same line. However, this is perfectly valid HTML too:
<a
name="redis/172.29.219.17">
Some text
</a>
Anyway, let's solve the problem assuming that the a tags are on one line and the name is the first attribute. This is what I could come up with:
sed 's/\(<a name="redis\)/\n\1/g' index.html | grep '^<a name="redis\/[0-9.]\+"' | sed -e 's/^<a name="redis\///g' -e 's/".*//g'
Explanation:
The first sed command makes sure that all <a name="redis text goes to a separate line.
Then the grep keeps only those lines that start with <a name="redis/ followed by an IP address.
The last sed contains two expressions:
The first expression removes the leading <a name="redis/ text.
The second expression removes everything that comes after the closing quote.
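On the sample HTML above, this pipeline prints just:
172.29.219.17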

How to solve this sed syntax issue

I wrote a regex to extract anchor tags from an HTML file and got this output.
mdlinks.txt
<a href='/aspnet/aspnet_refhtmlcontrols.asp'>ASP.NET Reference</a>
<a href='/aspnet/webpages_ref_classes.asp'>Razor Reference</a>
<a href='/html/html_examples.asp'>HTML Examples</a>
<a href='/css/css_examples.asp'>CSS Examples</a>
<a href='/w3css/w3css_examples.asp'>W3.CSS Examples</a>
JavaScript Examples
HTML DOM Examples
I have to represent the output as [text to display](link), i.e. Markdown-style links, using the sed tool.
<a[\s]href=('|")([^>]+)">((?:.(?!\<\/a\>))*.)<\/a>
This is my regex which captures the text and href link.
Here is the sed command I wrote:
sed -E "s/\"<a[\s]href=('|\")([^>]+)\">((?:.(?!\<\/a\>))*.)<\/a>\"/\[\2\] \(\1\)/" mdlinks.txt
But this gives me an error.
Can someone please help me?
This is not a job for regex (or any other string manipulation tool). You need tools able to parse HTML. An example using xsltproc:
1) Install the xsltproc package (if needed).
2) Write this XSL file that describes how to transform the HTML input: stylesheet.xsl
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version= "1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="text" encoding="UTF-8"/>
<xsl:template match="//a">[<xsl:value-of select="text()"/>] (<xsl:value-of select="@href"/>)</xsl:template>
</xsl:stylesheet>
3) Take your original file or your original HTML content in a variable (let's say CONTENT), but not mdlinks.txt (that intermediate step is unnecessary, and grepping links out of HTML is error-prone and a waste of time), and write:
xsltproc --html --novalid stylesheet.xsl <(echo "$CONTENT")
You obtain:
[Google.com] (http://google.com)
[An Example] (http://example.com/files.html)
[File #23] (file23.html)
[See my picture!] (images/mypic.png)
[Email Joel] (mailto:joelross@uw.edu)
Link: http://scott.dd.com.au/wiki/XSLT_Tutorial
Parsing HTML with line-oriented tools will normally fail. Given your simple layout, you could try
tr -s "<" ">" < mdlinks.txt | cut -d">" -f3
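For example, on the first anchor line of mdlinks.txt this prints just the link text:
printf '%s\n' "<a href='/aspnet/aspnet_refhtmlcontrols.asp'>ASP.NET Reference</a>" | tr -s "<" ">" | cut -d">" -f3
ASP.NET Reference
Note that lines with no tags (like JavaScript Examples) pass through unchanged, since cut prints delimiter-less lines as-is; adding -s suppresses them.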