Regex gets more results than are in the text - regex

I have a really weird problem: I am searching for URLs on an HTML page and want only a specific part of each URL. In my test HTML page the link occurs only once, but instead of one result I get about 20...
This is the regex I'm using:
perl -ne 'm/http\:\/\/myurl\.com\/somefile\.php.+\/afolder\/(.*)\.(rar|zip|tar|gz)/; print "$1.$2\n";'
Sample input would be something like this:
<html><body><a href="http://myurl.com/somefile.php?x=foo&y=bla&z=sdf&path=/foo/bar/afolder/testfile.zip&more=arguments&and=evenmore">Somelinkname</a></body></html>
This is a very simple example; in reality the link appears on a normal website with content around it...
My result should be something like this:
testfile.zip
but instead I see this line many times... Is this a problem with the regex or with something else?

Yes, the regex is greedy.
Use an appropriate tool for HTML instead: HTML::LinkExtor or one of the link methods in WWW::Mechanize, then URI to extract a specific part.
use 5.010;
use WWW::Mechanize qw();
use URI qw();
use URI::QueryParam qw();

my $w = WWW::Mechanize->new;
$w->get('file:///tmp/so10549258.html');

for my $link ($w->links) {
    my $u = URI->new($link->url);
    # 'http://myurl.com/somefile.php?x=foo&y=bla&z=sdf&path=/foo/bar/afolder/testfile.zip&more=arguments&and=evenmore'
    say $u->query_param('path');
    # '/foo/bar/afolder/testfile.zip'
    $u = URI->new($u->query_param('path'));
    say(($u->path_segments)[-1]);
    # 'testfile.zip'
}

Are there about 20 more lines in the file after your link?
Your problem is that the match variables are not reset. The first time you match your link, $1 and $2 get their values. On the following lines the regex does not match, but $1 and $2 still hold the old values, so you should print only when the regex matches, not on every line.
From perlre, see the section on Capture groups:
NOTE: Failed matches in Perl do not reset the match variables, which makes it easier to write code that tests for a series of more specific cases and remembers the best match.
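In other words, guard the print with the match. A minimal adjustment of the original one-liner (same pattern, just printing only when the match succeeds, and using m{...} so the slashes need no escaping) would be:
perl -ne 'print "$1.$2\n" if m{http://myurl\.com/somefile\.php.+/afolder/(.*)\.(rar|zip|tar|gz)};'
A line without the link now prints nothing instead of repeating the previously captured values.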

This should do the trick for your sample input & output.
$Str = '<html><body><a href="http://myurl.com/somefile.php?x=foo&y=bla&z=sdf&path=/foo/bar/afolder/testfile.zip&more=arguments&and=evenmore">Somelinkname</a></body></html>';
@Matches = ($Str =~ m#path=.+/(\w+\.\w+)#g);
print @Matches;

Related

Process a Perl qr RegEx string in reverse direction

I have a series of qr// RegEx string patterns to match URLs fed to my site. For example, qr#^/safari/article/([0-9]+)\.html(\?(.*))?$#. This string would match a path from a URL such as /safari/article/299.html?parameter=1. I have a separate subroutine where I can create URLs to link to different parts of the program. It occurred to me that it would be nice if that latter part could somehow use those aforementioned patterns I have already written -- it would reduce the likelihood of error if both the way URLs were generated and the way they are later processed came from the same set of patterns.
When a user comes to the site, my program takes the URL given to the server, runs it against strings like the one above, and outputs $1 and $2 with the captures it finds (e.g. "299" and "parameter=1", the two parameters for loading a page). In essence, now I'd like to do that in reverse: somehow provide $1 and $2, feed them against that qr// string, and create a new path output (say, I'd set $1 to "300" and $2 to "parameter=2", somehow merge that against the qr// string, and get the output /safari/article/300.html?parameter=2).
Is there a simple way to do that sort of "reverse regex"? It seems like one way to do it would simply be to do a regex pattern match against those two parenthetical patterns, but that somehow feels sloppy to me. Is there a cleaner way?
EDIT: Part of the reason for storing the patterns in RegEx is that they all get thrown into a multidimensional array for later processing that can help figure out what module should be called. Here's a couple of sample items:
[
    { function => 'article',           pattern => qr#^/safari/article/([0-9]+)\.html(\?(.*))?$#, weight => 106 },
    { function => 'topCommentedPosts', pattern => qr#^/safari/top\.html$#,                       weight => 100 },
]
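For illustration, dispatching against such a table might look roughly like this (just a sketch: the handler subs, the descending-weight ordering, and the sample call are assumptions, not part of the actual application):
# Sketch only: handlers and dispatch order are assumed for illustration.
my %handlers = (
    article           => sub { "article page for id $_[0]" },
    topCommentedPosts => sub { "top commented posts" },
);

my $routes = [
    { function => 'article',           pattern => qr#^/safari/article/([0-9]+)\.html(\?(.*))?$#, weight => 106 },
    { function => 'topCommentedPosts', pattern => qr#^/safari/top\.html$#,                       weight => 100 },
];

sub dispatch {
    my ($path) = @_;
    # Try the heaviest route first; the first matching pattern wins.
    for my $route ( sort { $b->{weight} <=> $a->{weight} } @$routes ) {
        if ( my @captures = $path =~ $route->{pattern} ) {
            return $handlers{ $route->{function} }->(@captures);
        }
    }
    return;
}

print dispatch('/safari/article/299.html?parameter=1'), "\n";   # article page for id 299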
I'm not sure I understand exactly what you want to achieve. The following works, but going this way seems rather fragile and dangerous. Why do you need to generate the paths, anyway?
#!/usr/bin/perl
use warnings;
use strict;
use feature qw{ say };

my $TEMPLATE = '/safari/article/$1.html?$2';

sub generate {
    my (@replacements) = @_;
    return $TEMPLATE =~ s/\$([12])/$replacements[$1-1]/gr
}

sub match {
    my ($string) = @_;
    my $regex = "$TEMPLATE";
    $regex =~ s/([?.])/\\$1/g;
    $regex =~ s/\$[0-9]+/(.*)/g;
    return $string =~ /$regex/
}

use Test::More;
is generate(300, 'parameter=2'), '/safari/article/300.html?parameter=2';
is_deeply [match('/safari/article/299.html?parameter=1')], [299, 'parameter=1'];
done_testing();

Using Perl to strip everything from a string except HTML Anchor Links

Using Perl, how can I use a regex to take a string that has random HTML in it, with one HTML anchor link like this:
<a href="...">Whatever Example</a>
and have it leave ONLY that and get rid of everything else? No matter what else is inside the <a tag along with the href attribute, like title=, or style=, or whatever,
it should leave the anchor, "Whatever Example", and the closing </a>.
You can take advantage of a stream parser such as HTML::TokeParser::Simple:
#!/usr/bin/env perl
use strict;
use warnings;
use HTML::TokeParser::Simple;
my $html = <<EO_HTML;
Using Perl, how can I use a regex to take a string that has random HTML in it
with one HTML link with anchor, like this:
<a href="http://www.example.com/whatever">Whatever <i>Interesting</i> Example</a>
and it leave ONLY that and get rid of everything else? No matter what
was inside the href attribute with the <a, like title=, or style=, or
whatever. and it leave the anchor: "Whatever Example" and the </a>?
EO_HTML
my $parser = HTML::TokeParser::Simple->new(string => $html);
while (my $tag = $parser->get_tag('a')) {
    print $tag->as_is, $parser->get_text('/a'), "</a>\n";
}
Output:
$ ./whatever.pl
<a href="http://www.example.com/whatever">Whatever Interesting Example</a>
If you need a simple regex solution, a naive approach might be:
my @anchors = $text =~ m#(<a[^>]*?>.*?</a>)#gsi;
However, as @dan1111 has mentioned, regular expressions are not the right tool for parsing HTML for various reasons.
If you need a reliable solution, look for an HTML parser module.

How to remove a part of an URL with regexes?

How can I turn this:
http://site.com/index.php?id=15
Into this?:
http://site.com/index.php?id=
Which RegEx(s) do I use?
I've been trying to do this for a good 2 hours now and I've had no luck. I can't seem to take out the number(s) at the end, and sometimes there are letters at the end as well, which give me problems.
I am using Bing! instead of Google.
My RegEx so far is this when I search something:
$start = '<h3><a href="';
$end = '" onmousedown=';
while ($result =~ m/$start(.*?)$end/g)
What can I add in there to take out the letters and digits at the end and just leave the equals sign?
Thank you.
Since you cannot parse [X]HTML properly with regular expressions, you should look for the minimum possible context that will get you the href you want.
To the best of my knowledge, the one character that cannot appear in an href value is the double quote ("). Therefore
/href="([^"]+)"/
should yield a URL in $1. I would sanity-check it for URL-ishness before extracting the id string you want, and then:
s/\?id=\w+/?id=/
But this has hack written all over it, because you can't parse HTML with regular expressions. So it will probably break the first time you demonstrate it to a customer.
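Put together, the hack might look roughly like this (a sketch only; $html standing in for the fetched page source is an assumption):
# Rough sketch of the approach above; $html is assumed to hold the page source.
while ( $html =~ /href="([^"]+)"/g ) {
    my $url = $1;
    next unless $url =~ m{^https?://};          # crude sanity check for URL-ishness
    (my $stripped = $url) =~ s/\?id=\w+/?id=/;  # drop the value, keep "?id="
    print "$stripped\n";
}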
You should really check out proper Perl parsing: http://www.google.com/webhp?q=perl+html+parser
You asked for a regular expression solution, but your problem is a bit ill-defined, and regexes for HTML are only for stop-gap/one-off stuff; otherwise you're probably just hurting yourself.
Since I am really not positive what your actual need and HTML source look like, this is a generic solution that takes a URL and spits out all the links found on the page without their query strings. Having id= with nothing after it is, for all reasonable purposes, equivalent to no id at all.
There are many ways, at least three or four of them good solutions, to do this in Perl. This is one that is often overlooked: libxml. Docs: XML::LibXML, URI, and URI::QueryParam (if you want better query manipulation).
use warnings;
use strict;
use URI;
use XML::LibXML;

my $source = shift || die "Give a URL!\n";

my $parser = XML::LibXML->new;
$parser->recover(1);
my $doc = $parser->load_html( location => $source );

for my $anchor ( $doc->findnodes('//a[@href]') )
{
    my $uri = URI->new_abs( $anchor->getAttribute("href"), $source );
    # commented out ideas.
    # next unless $uri->host eq "TARGET HOST NAME";
    # next unless $uri->path eq "TARGET PATH";

    # Clear the query completely; id= might as well be nothing.
    $uri->query(undef);
    print $uri, $/;
}
It sounds like maybe you’re using Bing! for scraping. This kind of thing is against pretty much every search engine’s ToS. Don’t do it. They have APIs (well, Google does at least) if you register and get a dev token.
I'm not 100% sure what you are doing, but this is the problem:
while ($result =~ m/$start(.*?)$end/g)
What's the purpose of this loop? You're taking a scalar called $result and checking for a pattern match. How is $result changing?
Your original question was how to make this:
http://site.com/index.php?id=15
into this:
http://site.com/index.php?id=
That is, how do you remove the 15 (or another number) from the expression. The answer is pretty simple:
$url =~ s/=\d+$/=/;
That'll anchor your regular expression at the end of the URL replacing the ending digits with nothing.
If you're removing any string, it's a bit more complex:
$url =~ s/=[^=]+/=/;
You can't simply use \S+ because regular expressions are normally greedy. Therefore, you want to specify any series of non-equal sign characters preceded by an equal sign.
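To see both substitutions against the sample URL (a small illustration only):
my $url = 'http://site.com/index.php?id=15';

(my $digits_only = $url) =~ s/=\d+$/=/;    # strip trailing digits after '='
(my $any_value   = $url) =~ s/=[^=]+/=/;   # strip any run of non-'=' characters after '='

print "$digits_only\n";   # http://site.com/index.php?id=
print "$any_value\n";     # http://site.com/index.php?id=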
Now, as for the while loop, maybe you want an if statement instead...
if ($result =~ /$start(.*?)$end/g) {
    print "Doing something if this matched\n";
}
else {
    print "Doing something if there's no match\n";
}
And, I'm not sure what this means:
I am using Bing! instead of Google.
Are you trying to parse the input from Bing!? If so, please explain exactly what you're really trying to do. Maybe we know a better way of doing this. For example, if you're parsing the output of a search result, there might be an API that you can use.
How can I turn this:
http://site.com/index.php?id=15
Into this?:
http://site.com/index.php?id=
I think this is the solution you are looking for:
#!/usr/bin/perl
use strict;
use warnings;
my $url="http://site.com/index.php?id=15";
$url =~ s/(?<=id=).*//g;
print $url;
Output :
http://site.com/index.php?id=
As per your need, anything after the = sign will be omitted from the URL.

Except URL regex

Sigh, regex trouble again.
I have the following in $text:
[img]http://www.site.com/logo.jpg[/img]
and
[url]http://www.site.com[/url]
I have regex expression:
$text = preg_replace("/(?<!(\[img\]|\[url\]))([http|ftp]+:\/\/)?\S+[^\s.,>)\];'\"!?]\.+[com|ru|net|ua|biz|org]+\/?[^<>\n\r ]+[A-Za-z0-9](?!(\[\/img\]|\[\/url\]))/","there was link",$text);
The point is to replace a URL only if it's not preceded by [img] or [url] and not followed by [/img] or [/url]. On the output of the previous example I get:
there was link
and
there was link
Both the URL regex and the lookbehind/lookahead assertions work fine separately.
$text = "[img]bash.org/logo.jpg[/img]";
$text = preg_replace("/(?<!(\[img\]|\[url\]))bash.org(?!(\[\/img\]|\[\/url\]))/","there was link",$text);
echo $text leaves everything as is and gives me [img]bash.org/logo.jpg[/img]
I suppose the problem is in the combination of the lookarounds and the URL regex. Where's my mistake?
I WANT TO
replace http://www.google.com with "there was link", but leave as is "[url]http://www.google.com[/url]"
I'M GETTING
http://www.google.com replaced with "there was link" and [url]http://www.google.com[/url] replaced with "there was link"
HERE'S PHP CODE TO TEST
<?php
$text = "[url]http://www.google.com[/url] <br><br> http://www.google.com";
// the [url]...[/url] one should NOT be changed; the bare one should be changed
$text = preg_replace("/(?<!\[url\])([http|ftp]+:\/\/)?\S+[^\s.,>)\];'\"!?]\.+[com|ru|net|ua|biz|org]+\/?[^<>\n\r ]+[A-Za-z0-9](?!\[\/url\])/","there was link",$text);
echo $text;
echo '<hr width="100%">';
$text = ":) :-) 0:) 0:-) :)) :-))";
$text = preg_replace("/(?<!0):-?\)(?!\))/","smiley",$text);
echo $text; // lookarounds work
echo '<hr width="100%">';
$text = "http://stackoverflow.com/questions/2482921/regexp-exclusion";
$text = preg_replace("/([http|ftp]+:\/\/)?\S+[^\s.,>)\];'\"!?]\.+[com|ru|net|ua|biz|org]+\/?[^<>\n\r ]+[A-Za-z0-9]/","it's a link to stackoverflow",$text);
echo $text; // URL pattern works fine
?>
Assuming I'm understanding you, you wish to replace all URLs in your $input with the words 'link was here', unless the URL is within either the url or img bbcode tags. The reason the lookaround assertions aren't working is that those parts are actually matching against your very greedy URL pattern (which I'm fairly sure does lots of things you don't mean it to). Writing a pattern that will match any valid URL (including query string) within other text, and that will also not match the tags attached to it, is not necessarily the simplest of matters. Especially since your current pattern has the http:// or ftp:// as optional.
The only way you are likely to gain any success is to decide on a strict set of rules that constitute a url.
It is tough to fully understand your question, but it looks like you're doing reverse BBcode. So, leave it alone if it's surrounded by tags? If that is the case, then I think you will have an interesting problem on your hands because URL regexes are notoriously complex.
I think you may be making this more complex than it needs to be. Instead, I would change anything that is between the BBcode. Here's what I think needs to happen:
find the string segment "[url]"
capture anything that follows it
end the capture when the string segment "[/url]" is seen
That is an easy regex:
$regex = "#\[url\](.*?)\[/url\]#";
$string = "[url]http://www.google.com[/url] <br><br> http://www.google.com";
$replace = "there was link";
$text = preg_replace($regex, $replace, $string);
echo $text;
I know this isn't exactly what you asked for (in fact, probably the exact opposite), but it would achieve the same result and be much easier.
You can probably try using negative lookaheads with this regex, but I am not sure it would give you proper results:
$regex = "#(?!\[url\])(.*)(?!\[/url\])#";
One important note: This does not sanitize user input. Make sure you do this, but I would separate the logic so it is very easy to see what you are doing and where you are doing it. I would also use a library to do this because it's easier and probably safer.
The final working regexp looks like this:
(?<!\[img\]|\[url\])((^|\s)([\w-]+://|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))(?!\[\/img\]|\[/url\])
Example:
<?php
$text = "
[img]http://google.com/logo.jpg[/img]
[img]www.google.com/logo.jpg[/img]
[img]http://www.google.com/logo.jpg[/img]
[url]http://google.com/logo.jpg[/url]
[url]www.google.com/logo.jpg[/url]
[url]http://www.google.com/logo.jpg[/url]
www.google.com/logo.jpg
http://google.com/logo.jpg
http://www.google.com/logo.jpg
";
$text = nl2br($text);
$text = preg_replace("'(?<!\[img\]|\[url\])((^|\s)([\w-]+://|www[.])[^\s()<>]+(?:\([\w\d]+\)|([^[:punct:]\s]|/)))(?!\[\/img\]|\[/url\])'i","<font color=\"#ff0000\">link</font>",$text);
echo $text;
?>
outputs:
[img]http://google.com/logo.jpg[/img]
[img]www.google.com/logo.jpg[/img]
[img]http://www.google.com/logo.jpg[/img]
[url]http://google.com/logo.jpg[/url]
[url]www.google.com/logo.jpg[/url]
[url]http://www.google.com/logo.jpg[/url]
link
link
link
The trick is to replace only links preceded by ^ or \s. No other way to solve this issue was found.
Where's my mistake?
Well, the worst mistake is the lookbehind. It isn't needed, and it's making the job much harder than it needs to be. Assuming the existing tags are well formed, you needn't bother looking for the opening tag; its presence is implied by the presence of the closing tag.
EDIT: Your regex has several other problems besides the lookbehind, but it didn't seem worthwhile to try and fix it. Instead, I grabbed a regex from RegexBuddy's built-in library of useful regexes, and added the lookahead to it.
Try this regex (or see it in action on ideone):
'_\b(?>
    (?>www\.|ftp\.|(?:https?|ftp|file)://)  # scheme or subdomain
    [-+&@#/%=~|$?!:,.\w]*[+&@#/%=~|$\w]     # everything else
)(?!\[/(?:img|url)\])
_x'
Just because a problem can be described in terms of looking forward or backward, preceding or following, etc., doesn't mean you should design the regex that way. Lookbehind in particular should never be the first tool you reach for.

How can I get the file extensions from relative links in HTML text using Perl?

For example, scanning the contents of an HTML page with a Perl regular expression, I want to match all file extensions but not TLDs in domain names. To do this I am making the assumption that all file extensions must be within double quotes.
I came up with the following, and it is working; however, I am failing to figure out a way to exclude the TLDs in the domains. This will return "com", "net", etc.
m/"[^<>]+\.([0-9A-Za-z]*)"/g
Is it possible to negate the match if there is more than one period between the quotes that are separated by text? (ie: match foo.bar.com but not ./ or ../)
Edit: I am using $1 to return the value captured within the parentheses.
#!/usr/bin/perl
use strict; use warnings;
use File::Basename;
use HTML::TokeParser::Simple;
use URI;
my $parser = HTML::TokeParser::Simple->new( \*DATA );

while ( my $tag = $parser->get_tag('a') ) {
    my $uri = URI->new( $tag->get_attr('href') );
    my $ext = ( fileparse $uri->path, qr/\.\w+\z/ )[2];
    print "$ext\n";
}
__DATA__
<p><a href="/some/dir/file.zip">link</a> <a href="http://example.com/other.tar.gz">link</a> on example.com
</p>
First of all, extract the names with an HTML parser of your choice. You should then have something like an array containing the names, as if produced like this:
my @names = ("http://foo.bar.net/quux",
             "boink.bak",
             "mms://three.two.one",
             "hello.jpeg");
The only way to distinguish domain names from file extensions seems to be that in "file names", there is at least one more slash between the :// part and the extension. Also, a file extension can only be the last thing in the string.
So, your regular expression would be something like this (untested):
^(?:(?:\w+://)?(?:\w+\.)+\w+/)?.*\.(\w+)$
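Wired up against the @names list above, the (still untested) pattern could be tried like this:
# Sketch: try the heuristic pattern above against each extracted name.
for my $name (@names) {
    if ( $name =~ m{^(?:(?:\w+://)?(?:\w+\.)+\w+/)?.*\.(\w+)$} ) {
        print "$name -> $1\n";
    }
    else {
        print "$name -> no extension found\n";
    }
}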
#!/usr/bin/perl -w
use strict;

while (<>) {
    if (m/(?<=(?:ref=|src=|rel=))"([^<>"]+?\.([0-9A-Za-z]+?))"/g) {
        if ($1 !~ /:\/\//) {
            print $2 . "\n";
        }
    }
}
I used a positive lookbehind to get only the stuff between double quotes that follows one of the 'link' attributes (src=, rel=, href=).
Fixed to look for "://" when recognizing URLs, and to allow files with absolute paths.
@Structure: There is no proper way to protect against someone leaving off the protocol part, as it would just turn into a legitimate pathname: http://www.noo.com/afile.cfg -> www.noo.com/afile.cfg. You would need to wget (or something) all of the links to make sure they are actually there. And that's an entirely different question...
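If you did want to check them, one rough sketch would be a HEAD request per link (here the candidate URLs are assumed to be absolute and passed on the command line; collecting them is a separate step):
use strict;
use warnings;
use LWP::UserAgent;

# Sketch only: candidate URLs are assumed to be absolute and given as arguments.
my $ua = LWP::UserAgent->new( timeout => 10 );
for my $link (@ARGV) {
    my $res = $ua->head($link);
    print $res->is_success ? "ok   $link\n" : "dead $link\n";
}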
Yes, I know I should use a proper parser, but am just not feeling like it right now :P