Querying a website with Perl LWP::Simple to Process Online Prices - regex

In my free time, I've been trying to improve my Perl abilities by working on a script that uses LWP::Simple to poll one specific website's product pages and check the prices of products (I'm somewhat of a Perl noob). The script also keeps a very simple backlog of the last price seen for each item (since the prices change frequently).
I was wondering if there is any way I could further automate the script so that I don't have to explicitly add each page's URL to the initial hash (i.e. keep an array of key terms and run a search query on Amazon to find the page or price?). Is there any way I could do this that doesn't involve just copying Amazon's search URL and inserting my keywords? (I'm aware that processing HTML with regex is generally bad form; I only used it since I need one small piece of data.)
#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;

my %oldPrice;
my %nameURL = (
    "Archer Season 1"              => "http://www.amazon.com/Archer-Season-H-Jon-Benjamin/dp/B00475B0G2/ref=sr_1_1?ie=UTF8&qid=1297282236&sr=8-1",
    "Code Complete"                => "http://www.amazon.com/Code-Complete-Practical-Handbook-Construction/dp/0735619670/ref=sr_1_1?ie=UTF8&qid=1296841986&sr=8-1",
    "Intermediate Perl"            => "http://www.amazon.com/Intermediate-Perl-Randal-L-Schwartz/dp/0596102062/ref=sr_1_1?s=books&ie=UTF8&qid=1297283720&sr=1-1",
    "Inglorious Basterds (2-Disc)" => "http://www.amazon.com/Inglourious-Basterds-Two-Disc-Special-Brad/dp/B002T9H2LK/ref=sr_1_3?ie=UTF8&qid=1297283816&sr=8-3"
);

if (-e "backlog.txt") {
    open(LOG, "backlog.txt");
    while (<LOG>) {
        chomp;
        my @temp = split(/:\s/);
        $oldPrice{$temp[0]} = $temp[1];
    }
    close(LOG);
}

print "\nChecking Daily Amazon Prices:\n";
open(LOG, ">backlog.txt");
foreach my $key (sort keys %nameURL) {
    my $content = get $nameURL{$key} or die;
    $content =~ m{\s*\$(\d+\.\d+)} || die;
    if (exists $oldPrice{$key} && $oldPrice{$key} != $1) {
        print "$key: \$$1 (Was $oldPrice{$key})\n";
    }
    else {
        print "\n$key: $1\n";
    }
    print LOG "$key: $1\n";
}
close(LOG);

Yes, the design can be improved. It's probably best to delete everything and start over with an existing full-featured web scraping application or framework, but since you want to learn:
The name-to-URL map is configuration data. Retrieve it from outside of the program.
Store the historic data in a database.
Learn XPath and use it to extract data from HTML; it's easy if you already grok CSS selectors.
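To illustrate the XPath suggestion, here is a minimal sketch using HTML::TreeBuilder::XPath (my choice of module, assuming it is installed); the class name in the XPath expression is only illustrative, and the real page has to be inspected to find the right node.
use strict;
use warnings;
use LWP::Simple;
use HTML::TreeBuilder::XPath;

# Pass the product page URL on the command line.
my ($url) = @ARGV;
my $tree  = HTML::TreeBuilder::XPath->new_from_content( get($url) );

# "priceblock" is a placeholder class name; adjust it to match the page.
my $price = $tree->findvalue('//span[@class="priceblock"]');
print "$price\n";

$tree->delete;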
Other stackers, if you want to amend my post with the rationale for each piece of advice, go ahead and edit it.

I made a simple script to demonstrate Amazon search automation. The search URL for all departments gets the escaped search term appended. The rest of the code is simple parsing with HTML::TreeBuilder. The structure of the HTML in question can easily be examined with the dump method (see the commented-out line).
use strict;
use warnings;
use LWP::Simple;
use URI::Escape;
use HTML::TreeBuilder;
use Try::Tiny;

my $look_for = "Archer Season 1";
my $contents
    = get "http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords="
    . uri_escape($look_for);
my $html = HTML::TreeBuilder->new_from_content($contents);

for my $item ($html->look_down(id => qr/result_\d+/)) {
    # $item->dump; # find out structure of HTML
    my $title = try { $item->look_down(class => 'productTitle')->as_trimmed_text };
    my $price = try { $item->look_down(class => 'newPrice')->find('span')->as_text };
    print "$title\n$price\n\n";
}
$html->delete;


Twitch TMI, extract all users

I'm trying to extract all the user names from the source (https://tmi.twitch.tv/group/user/twitchpresents/chatters), but so far I'm only able to get the first "name".
The goal is to get all the names into an array, and then just remove the "viewers", "admins", "staff", etc. names.
use strict;
use warnings;

my @listusers = userlist();

sub userlist {
    my $url = "https://tmi.twitch.tv/group/user/twitchpresents/chatters";
    my $array = get($url);
    my @array2;
    my $time = 0;
    while ($time != 2) {
        my $mylist = (join "", grep(/"\s*(.*?)\s*"/, $array[$time])) =~ /"\s*(.*?)\s*"/;
        print $1;
        $time++;
    }
    return @array2;
}
print @listusers;
I assume that you get the page with get from LWP::Simple. Please always show relevant includes.
Since this is valid JSON, use a module for that
use warnings;
use strict;
use feature 'say';
use Data::Dump qw(dd);
use LWP::Simple;
use JSON;

my $url = 'https://tmi.twitch.tv/group/user/twitchpresents/chatters';

my $data_json = get($url);
my $data = JSON->new->decode($data_json);
#dd $data;

my @names = @{ $data->{chatters}{moderators} };   # get "moderators"
say "@names";

# my @all_names = map { @$_ } values %{$data->{chatters}};   # or get all names
This prints the line: cliccer cuda hnlbot nixi93 scorpy0 somppe
I print "names" of moderators as an example of getting one category out; the commented out line gets all names in one array. Once you have a hashref there are various ways to extract what you need.
Your code seems to be trying to parse that JSON string. It is very easy with a module.
One can view complex data structures with Data::Dumper or such. I use Data::Dump.
Here JSON is used, which delegates to JSON::XS if installed or to the "pure Perl" (and slower) JSON::PP otherwise. Another option is Cpanel::JSON::XS.
For convenience, the structure printed by dd $data is
{
_links => {},
chatter_count => 15,
chatters => {
admins => [],
global_mods => [],
moderators => [
"cliccer",
"cuda",
"hnlbot",
"joffy95",
"nixi93",
"scorpy0",
"somppe",
],
staff => [],
viewers => [
"coldblood94",
"coldbot",
"gabenator",
"gharokk",
"reconcrusadershadow",
"scrubnubslulz",
"shai_the_panda",
"sonadourge",
],
},
}
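One refinement worth considering (my addition, not part of the original answer): JSON->new->decode dies on malformed input, so if the fetched page might not be valid JSON the decode can be guarded.
use warnings;
use strict;
use LWP::Simple;
use JSON;

my $url = 'https://tmi.twitch.tv/group/user/twitchpresents/chatters';

my $data_json = get($url) or die "Could not fetch $url";

# decode dies on malformed input, so trap it
my $data = eval { JSON->new->decode($data_json) };
die "Could not decode JSON: $@" if !defined $data;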
I recommend that you use Mojo::UserAgent for this, as it contains a JSON decoder as well as an HTTP user agent and makes the code much more concise.
I assume you want the names of all the different categories of chatters, so that's what the map statement does: it simply flattens all of the categories (admins, global_mods, moderators, staff, and viewers) into a single list.
If anything goes wrong with the HTTP transfer then the subroutine will simply return an empty list, so you may want to add some proper error handling (a sketch follows after the output below).
use strict;
use warnings;
use feature 'say';
use Mojo::UserAgent;

my @list_users = chatters();
say for @list_users;

sub chatters {
    my $url = 'https://tmi.twitch.tv/group/user/twitchpresents/chatters';
    return unless my $data = Mojo::UserAgent->new->get($url)->res->json;
    map { @$_ } values %{ $data->{chatters} };
}
output
girlonduty
moobot
ravager
100tesports
123mickeypr
13eebo
13urnxcalibur
2dmoody
2l33t
2tony79
3nticed
...
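As mentioned above, the subroutine silently returns an empty list when the HTTP transfer fails. Here is a minimal sketch of the same subroutine with explicit error reporting (my own addition, using the error method on the Mojo transaction):
use strict;
use warnings;
use feature 'say';
use Mojo::UserAgent;

sub chatters {
    my $url = 'https://tmi.twitch.tv/group/user/twitchpresents/chatters';
    my $tx  = Mojo::UserAgent->new->get($url);

    # report transport or HTTP errors instead of silently returning nothing
    if ( my $err = $tx->error ) {
        warn "Request failed: $err->{message}\n";
        return;
    }

    my $data = $tx->res->json or return;
    return map { @$_ } values %{ $data->{chatters} };
}

say for chatters();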

How to parse a JSON file without using a JSON library or JSON parser

Below are the input and output details.
Input: This is just a sample input; the real input is huge and is all on a single line.
[{"mnemonic":"PT.IA1","ID":"000628"}, {"mnemonic":"EOR.1","ID":"000703"}]
Code: I'm trying to read the file by setting the delimiter to }, so that I get each value, but as it's a single-line file, it prints everything at once. How do I parse this line by setting a delimiter? Is the split function enough to do this job?
our $conf = {
    chunk_separator => '\{,',
};

open( FH, "Etot_Data.txt" ) or die $!;
while ( my $chunk = <FH> ) {
    my $sections = [ split $conf->{chunk_separator}, $chunk ];
    print "$chunk\n";
}
Output
I would want to pick "ID" from each value and prepend "abc." to it.
The final string would look like abc.000628 or abc.000703, and I want to save it in a hash.
There's no need for any values other than the ID in the JSON string.
Is it possible to read the JSON file as a normal file and operate on it?
I don't have a JSON parser and I don't have the option to use one.
Thanks for the help.
If you can't install any external modules, you can certainly bundle one...
Create a JSON directory in the same directory your script is in, then copy the contents of the JSON::PP module into a PP.pm file inside the JSON directory you just created. Then, in your script, add the current working directory as a library directory with use lib '.';, and use JSON::PP;.
use warnings;
use strict;
use lib '.';
use JSON::PP qw(decode_json);

my $json;
{
    local $/;
    open my $fh, '<', 'file.json' or die $!;
    $json = <$fh>;
}

my $perl = decode_json $json;

for (@$perl) {
    my $id = 'abc.' . $_->{ID};
    print "$id\n";
}
Output:
abc.000628
abc.000703
If you need to hide the fact you've created an additional module, with only slight finagling, you can make some changes to the module, and include it directly within the script itself.
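A small refinement (my suggestion, not part of the original answer): use lib '.' depends on the current working directory, so the bundled module will not be found if the script is started from elsewhere. The core FindBin module locates the script's own directory instead.
use FindBin qw($Bin);
use lib $Bin;                      # look next to the script itself, not the cwd
use JSON::PP qw(decode_json);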
Note that JSON::PP is in Perl core in v5.14+. OP stated in comments that they are on 5.10.
Everything that people have said in the comments is true. JSON is a complex data format and expecting to parse it without using the tools that already exist is foolish in the extreme. I urge you to fix whatever roadblock is preventing you from installing and using a JSON parser.
However...
If you only want the ID, and the format is always identical to your sample data, then there is a way to do it. This code is pretty fragile and I can't emphasise enough how much of a bad idea this is. But this code seems to work on your sample data.
#!/usr/bin/perl
use strict;
use warnings;
use 5.010;

while (<>) {
    foreach my $id (/"ID":"(\d+)"/g) {
        say "abc.$id";
    }
}
It reads from STDIN and writes to STDOUT, so call it like this:
$ ./parse_json.pl < Etot_data.txt
But please get a JSON parser installed.

Perl taint mode with domain name input for CGI resulting in “Insecure dependency in eval”

With the following CGI script in Perl with taint mode, I have not been able to get past the error below.
tail /etc/httpd/logs/error_log
/usr/local/share/perl5/Net/DNS/Dig.pm line 906 (#1)
(F) You tried to do something that the tainting mechanism didn't like.
The tainting mechanism is turned on when you're running setuid or
setgid, or when you specify -T to turn it on explicitly. The
tainting mechanism labels all data that's derived directly or indirectly
from the user, who is considered to be unworthy of your trust. If any
such data is used in a "dangerous" operation, you get this error. See
perlsec for more information.
[Mon Jan 6 16:24:21 2014] dig.cgi: Insecure dependency in eval while running with -T switch at /usr/local/share/perl5/Net/DNS/Dig.pm line 906.
Code:
#!/usr/bin/perl -wT

use warnings;
use strict;
use IO::Socket::INET;
use Net::DNS::Dig;
use CGI;

$ENV{"PATH"} = ""; # Latest attempted fix

my $q      = CGI->new;
my $domain = $q->param('domain');

if ( $domain =~ /(^\w+)\.(\w+\.?\w+\.?\w+)$/ ) {
    $domain = "$1\.$2";
}
else {
    warn("TAINTED DATA SENT BY $ENV{'REMOTE_ADDR'}: $domain: $!");
    $domain = ""; # successful match did not occur
}

my $dig = new Net::DNS::Dig(
    Timeout   => 15,    # default
    Class     => 'IN',  # default
    PeerAddr  => $domain,
    PeerPort  => 53,    # default
    Proto     => 'UDP', # default
    Recursion => 1,     # default
);

my @result = $dig->for( $domain, 'NS' )->to_text->rdata();
@result = sort @result;
print @result;
I normally use Data::Validate::Domain to check for a "valid" domain name, but could not deploy it in a way that avoided the tainted-variable error.
I read that in order to untaint a variable you have to pass it through a regex with capture groups and then join the capture groups to sanitize it, so I deployed $domain =~ /(^\w+)\.(\w+\.?\w+\.?\w+)$/. As shown here, it is not the best regex for untainting a domain name and covering all possible domains, but it meets my needs. Unfortunately my script is still producing taint failures and I cannot figure out why.
Regexp::Common does not provide a domain regex, and modules don't seem to help with untainting the variable, so I am at a loss now.
How do I get this to pass taint checking?
$domain is not tainted
I verified that your $domain is not tainted. This is the only variable you use that could be tainted, in my opinion.
perl -T <(cat <<'EOF'
use Scalar::Util qw(tainted);

sub p_t($) {
    if (tainted $_[0]) {
        print "Tainted\n";
    } else {
        print "Not tainted\n";
    }
}

my $domain = shift;
p_t($domain);

if ($domain =~ /(^\w+)\.(\w+\.?\w+\.?\w+)$/) {
    $domain = "$1\.$2";
} else {
    warn("$domain\n");
    $domain = "";
}
p_t($domain);
EOF
) abc.def
It prints
Tainted
Not tainted
What Net::DNS::Dig does
See Net::DNS::Dig line 906. It is the beginning of to_text method.
sub to_text {
    my $self = shift;
    my $d = Data::Dumper->new([$self],['tobj']);
    $d->Purity(1)->Deepcopy(1)->Indent(1);
    my $tobj;
    eval $d->Dump; # line 906
    …
From the new definition I know that $self is just a hashref containing values from the new parameters plus several others filled in by the constructor. The evaled code produced by $d->Dump sets $tobj to a deep copy of $self (Deepcopy(1)), with correctly set self-references (Purity(1)) and basic pretty-printing (Indent(1)).
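To see concretely what that eval does, here is a standalone sketch (the hashref is only a stand-in for the real object):
use strict;
use warnings;
use Data::Dumper;

# A toy hashref standing in for $self; the real object holds the
# constructor arguments and DNS state.
my $self = { Timeout => 15, Class => 'IN' };

my $d = Data::Dumper->new( [$self], ['tobj'] );
$d->Purity(1)->Deepcopy(1)->Indent(1);

my $tobj;
eval $d->Dump;                  # the Dump output assigns a deep copy to $tobj
print $tobj->{Timeout}, "\n";   # 15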
Where is the problem, how to debug
From what I found out about &Net::DNS::Dig::to_text, it is clear that the problem is at least one tainted item inside $self. So you have a straightforward way to debug your problem further: after constructing the $dig object in your script, check which of its items is tainted. You can dump the whole structure to stdout using print Data::Dumper::Dumper($dig);, which is roughly the same as the evaled code, and check suspicious items with &Scalar::Util::tainted.
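For example, a rough sketch of that check (assuming $dig is the object constructed in the CGI script):
use Scalar::Util qw(tainted);

# Walk the top-level fields of the object and report which ones are tainted.
# ($dig is the Net::DNS::Dig object constructed earlier in the script.)
for my $field ( sort keys %$dig ) {
    my $value = $dig->{$field};
    next if ref $value;    # only plain scalars are checked here
    printf "%-12s %s\n", $field, tainted($value) ? "TAINTED" : "clean";
}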
I have no idea how far this is from making Net::DNS::Dig work in taint mode. I do not use it, I was just curious and wanted to find out, where the problem is. As you managed to solve your problem otherwise, I leave it at this stage, allowing others to continue debugging the issue.
As a resolution to this question, if anyone comes across it in the future: it was indeed the module I was using that caused the taint checks to fail, teaching me an important lesson about trusting modules in a CGI environment. I switched to Net::DNS, as I figured it would not encounter this issue, and sure enough it does not. My code is provided below for reference in case anyone wants to accomplish the same thing I set out to do, which is to locate the nameservers defined for a domain within its own zone file.
#!/usr/bin/perl -wT

use warnings;
use strict;
use IO::Socket::INET;
use Net::DNS;
use CGI;

$ENV{"PATH"} = ""; # Latest attempted fix

my $q      = CGI->new;
my $domain = $q->param('domain');
my @result;

if ( $domain =~ /(^\w+)\.(\w+\.?\w+\.?\w+)$/ ) {
    $domain = "$1\.$2";
}
else {
    warn("TAINTED DATA SENT BY $ENV{'REMOTE_ADDR'}: $domain: $!");
    $domain = ""; # successful match did not occur
}

my $ip  = inet_ntoa(inet_aton($domain));
my $res = Net::DNS::Resolver->new(
    nameservers => [($ip)],
);
my $query = $res->query($domain, "NS");

if ($query) {
    foreach my $rr (grep { $_->type eq 'NS' } $query->answer) {
        push(@result, $rr->nsdname);
    }
}
else {
    warn "query failed: ", $res->errorstring, "\n";
}

@result = sort @result;
print @result;
Thanks for the comments assisting me in this matter, and to SO for teaching me more than any other resource I have come across.

Building a web service with Perl

I need to build a server-side application (a tiny web service) for testing purposes. What are some CPAN modules and Perl libraries for implementing such a task?
Testing a tiny Web service with Plack::Test:
use Plack::Test;
use Test::More;
use HTTP::Request;

test_psgi(
    app => sub {
        my ($env) = @_;
        return [200, ['Content-Type' => 'text/plain'], ["Hello World"]];
    },
    client => sub {
        my ($cb) = @_;
        my $req = HTTP::Request->new(GET => "http://localhost/hello");
        my $res = $cb->($req);
        like $res->content, qr/Hello World/;
    },
);

done_testing;
There are a lot of possibilities
CGI - if you like to do everything like in the olden days
CGI::Application - a little more advanced
or you could use frameworks like
Catalyst
Dancer
Mojolicious
Which solution you should choose depends on your skills and aims.
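As a taste of the framework route, here is a minimal sketch with Dancer (one of the frameworks listed above, assuming it is installed); the route and greeting are only illustrative.
use Dancer;

# ':name' is a route placeholder; its value is available through params
get '/hello/:name' => sub {
    return "Hello from " . params->{name};
};

dance;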
A web service simply returns an HTTP status code and some data, perhaps serialized as JSON or XML. You can use the CGI module to do this, e.g.:
#!/usr/bin/perl -w
use strict;
use warnings;
use CGI;
use CGI::Pretty qw/:standard/;
use URI::Escape;
my $query = CGI->new;
my $jsonQueryValue = uri_unescape $query->param('helloWorld');
# let's say that 'helloWorld' is a uri_escape()-ed POST variable
# that contains the JSON object { 'hello' : 'world' }
print header(-type => "application/json", -status => "200 OK");
print "$jsonQueryValue";
You can, of course, print an HTTP response with other status codes and data. A web service might need to return a 404 error, for example, depending on what's being asked for. That sort of thing.
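For example, a sketch of returning a 404 with a small JSON error body, using the same CGI helpers as above:
#!/usr/bin/perl
use strict;
use warnings;
use CGI::Pretty qw/:standard/;

# Return a 404 status with a minimal JSON error payload.
print header( -type => 'application/json', -status => '404 Not Found' );
print '{"error":"resource not found"}';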
I like to use mojolicious. It's lightweight at first and can do the heavy lifting later too. Mojolicious::Lite in particular is good for quick and dirty.
use Mojolicious::Lite;

# Route with placeholder
get '/:foo' => sub {
    my $self = shift;
    my $foo  = $self->param('foo');
    $self->render(text => "Hello from $foo.");
};

# Start the Mojolicious command system
app->start;

Perl web API using Data::Dumper

We've developed an open web API using Apache and mod_perl, where you can pass text created by Data::Dumper to make requests.
Our data generally looks like this:
$VAR1 = {
    'OurField' => 'OurValue'
};
Currently, I noticed we're using an eval to get the data back into a Perl hash server side:
my $VAR1;
eval $our_dumper_string;
#$VAR1 is now filled with hash value
The problem with this is that it is a major security issue: you can pass malicious Perl code in there and it will run server side...
Is there a better way to safely take a Data::Dumper string and turn it into a hash?
Yes. Use JSON::XS, and use JSON rather than the Data::Dumper format. That is much more compatible with other web APIs.
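A minimal sketch of what that looks like (assuming JSON::XS is installed); the payload mirrors the sample data from the question.
use strict;
use warnings;
use JSON::XS qw(encode_json decode_json);

# What the client would send instead of Data::Dumper text:
my $payload = encode_json( { OurField => 'OurValue' } );   # {"OurField":"OurValue"}

# Server side: no eval of client-supplied code, just data decoding.
my $data = decode_json($payload);
print $data->{OurField}, "\n";                             # OurValue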
If your data is simple and predictable, you can even try to write a simple "parser" to read back the values into a data structure:
#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

my $data   = { 'key1' => 'value' };
my $dumper = Dumper($data);
print $dumper;

my $data_2;
while ( $dumper =~ /(.+)$/mg ) {
    if ( $1 =~ m/'(.*)' => '(.*)'/ ) {
        $data_2->{$1} = $2;
    }
}
print Dumper( $data_2 );
(This is just an example and won't work with integers or nested data structures.)