How do I call a Perl CGI script from another Perl CGI script? - web-services

I have a CGI script which takes a few parameters, like below:
testScript.cgi?arg=1&arg2=SomeThingElse&arg3=otherThing.....
The above script works perfectly.
Now I have another Perl CGI script called mySecondScript.cgi. It does its own thing, but I want to call testScript.cgi with arguments which are calculated in this script. How do I do that? Any elegant solution would be appreciated.

You probably want to use LWP::Simple to call the second program. Inside mySecondScript.cgi you would need something like this:
use LWP::Simple;
my $output = get("http://someserver.somedomain/testScript.cgi?arg=1&arg2=SomeThingElse&arg3=otherThing");
This will return the output from the CGI program (i.e. the HTML page that it generates). If you want more control over what you get back, then you need to use LWP::UserAgent.
use LWP::UserAgent;
my $ua = LWP::UserAgent->new;
my $resp = $ua->get("http://someserver.somedomain/testScript.cgi?arg=1&arg2=SomeThingElse&arg3=otherThing");
$resp will be an HTTP::Response object.
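For example, you might check the status before using the body (a small sketch):
if ($resp->is_success) {
    my $html = $resp->decoded_content;  # the page testScript.cgi generated
    # ... do something with $html ...
} else {
    die "Request failed: " . $resp->status_line . "\n";
}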
Alternatively, if both of your programs are running locally, it might be more convenient to refactor the important bits of testScript.cgi into a module that you can just load and use within mySecondScript.cgi.
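For instance, a minimal sketch of that refactoring (the module name TestScript::Core and the function do_the_thing are hypothetical placeholders for whatever testScript.cgi actually does):
# TestScript/Core.pm (hypothetical name)
package TestScript::Core;
use strict;
use warnings;
use Exporter 'import';
our @EXPORT_OK = ('do_the_thing');

sub do_the_thing {
    my (%args) = @_;
    # ... the logic that currently lives in testScript.cgi ...
    return "result for arg=$args{arg}";
}

1;
Then both CGI programs can call the shared code directly:
use TestScript::Core 'do_the_thing';
my $output = do_the_thing(arg => 1, arg2 => 'SomeThingElse', arg3 => 'otherThing');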

There are several ways to run an external command from inside a Perl script. The command can be any shell operation or even another Perl script. My favorite is qx/[command]/.
Look at this mini example to see how it works:
my $result = qx/pwd/;
print "result: $result\n";
pwd is a shell command; in your case it would be e.g.
my $result = qx/mySecondScript $param1 $param2/;
This way you can capture the output of the inner script in the outer script. Backticks are in fact the same operator as qx// (just with different delimiters), and there is also the system function, which returns the exit status rather than the output; but qx// is my personal favorite.
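If you capture output this way, it's also worth checking $? afterwards so failures in the inner script don't pass silently; a sketch, reusing the placeholder command from above:
my $result = qx/mySecondScript $param1 $param2/;
if ($? != 0) {
    die "mySecondScript exited with status " . ($? >> 8) . "\n";
}
print "result: $result\n";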

Related

How to update the path in Perl scripts in multiple files

I am working on creating some training material where I am using Perl. One of the things I want to do is have the scripts be set up correctly for the student, regardless of where they extract the compressed files. I am working on a Windows batch file that will copy the Perl templates to the working location and then update the path in the copies of the template files to the correct location. The Perl templates have this as the first line:
#!_BASE_software/perl/bin/perl.exe
The batch file looks like this:
SET TRAINING=%~dp0
copy %TRAINING%\template\*.pl %TRAINING%work
%TRAINING%software\perl\bin\perl -pi.bak -e 's/_BASE_/%TRAINING%/g' %TRAINING%work\*.pl
I have a few problems with this:
Perl doesn't seem to like the wildcard in the filename
It turns out that %TRAINING% is going to expand into a string with backslashes, which need to be converted into forward slashes and need to be escaped within the regex.
How do I fix this?
First of all, Windows doesn't use the shebang line, so I'm not sure why you're doing any of this work in the first place.
Perl will read the shebang line and look for options if perl is found in the path, even on Windows, but that means that #!perl is sufficient if you want to pass options via the shebang line (e.g. #!perl -n).
Now, it's possible that you use Cygwin, MSYS or some other unix emulation instead of Windows to run the program, but you are placing a Windows path in the shebang line (C:...) rather than a unix path, so that doesn't make sense either.
There are three additional problems with the attempt:
cmd uses double-quotes for quoting.
cmd doesn't perform wildcard expansion like sh, so it's up to your program do it.
You are trying to generate Perl code from cmd. Ouch.
If we go ahead, we get:
"%TRAINING%software\perl\bin\perl" -MFile::DosGlob=glob -pe"BEGIN { #ARGV = map glob, #ARGV; $base = $ENV{TRAINING} =~ s{\\}{/}rg } s/_BASE_/$base/g" -i.bak -- %TRAINING%work\*.pl
If we add line breaks for readability, we get the following (that cmd won't accept):
"%TRAINING%software\perl\bin\perl"
-MFile::DosGlob=glob
-pe"
BEGIN {
@ARGV = map glob, @ARGV;
$base = $ENV{TRAINING} =~ s{\\}{/}rg
}
s/_BASE_/$base/g
"
-i.bak -- %TRAINING%work\*.pl
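If the quoting becomes unmanageable, an alternative is to move the logic into a small standalone Perl script that the batch file calls; a sketch (the file name fix_shebang.pl is hypothetical, and it assumes TRAINING is set in the environment as above):
# fix_shebang.pl (hypothetical) - invoked as: perl fix_shebang.pl %TRAINING%work\*.pl
use strict;
use warnings;
use File::DosGlob 'glob';

@ARGV = map { glob($_) } @ARGV;            # cmd does not expand wildcards for us
(my $base = $ENV{TRAINING}) =~ s{\\}{/}g;  # backslashes -> forward slashes

$^I = '.bak';   # in-place editing with a .bak backup, like -i.bak
while (<>) {
    s/_BASE_/$base/g;
    print;
}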

How to run a list of Perl regexes from a file in the terminal

I'm fairly new to the whole coding game, and am very grateful for every answer!
I am working on a directory with many .txt files in it and have a file with a long list of regexes like "perl -p -i -e 's/\n\n/\n/g' *.xml". They all work if I copy them to the terminal. But is there a possibility to run them straight from the file?
I tried ./unicode.sh but that resulted in:
No such file or directory.
Any ideas?
Thank you so much!
Here's a (mostly) equivalent Perl script to the one-liner perl -p -i -e 's/\n\n/\n/g' *.xml (one main difference being that this has strict and warnings enabled, which is strongly recommended), which you could expand upon by putting more code to modify the current line in the body of the while loop.
#!/usr/bin/env perl
use warnings;
use strict;
if (!@ARGV) {               # if no files on command line
    @ARGV = glob('*.xml');  # get a default list of files
}
local $^I = '';  # enable inplace editing (like perl -i)
while (<>) {     # read each line of each file into $_
    s/\n\n/\n/g; # modify $_ with a regex
    # more regexes here...
    print;       # write the line $_ back out
}
You can save this script in a file such as process.pl, and then run it with perl process.pl, or do chmod u+x process.pl and then run it via ./process.pl.
On the other hand, you really shouldn't modify XML files with regular expressions; there are lots of Perl modules to do XML processing - I wrote about that some more here. Also, in the example you showed, s/\n\n/\n/g actually won't have any effect, since when reading files line-by-line, no string will contain two \n's (you can change how Perl reads files, but I don't see any mention of that in the question).
Edit: You've named the script in your example unicode.sh - if you're processing Unicode files, then Perl has very powerful features to help with that, although the code won't necessarily end up as nice and short as I've shown above. You'll have to tell us some more about what you're doing, and show some example input and output, to get suggestions about that. See also e.g. perlunitut.
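As a rough starting point, here is a sketch of the same loop reading and writing files as UTF-8 (whether UTF-8 is the right encoding for your data is an assumption):
#!/usr/bin/env perl
use warnings;
use strict;
use open ':std', ':encoding(UTF-8)';  # read/write all handles as UTF-8

local $^I = '';  # in-place editing, as before
while (<>) {
    s/\x{00A0}/ /g;  # example: replace no-break spaces with plain spaces
    print;
}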
It's likely that if you got "No such file or directory", your problem was that you forgot to make unicode.sh executable, as in chmod +x unicode.sh, assuming that's a script that you wrote.
Of course, the normal way to run multiple Perl commands is to put them in a Perl script that you write, e.g. runme.pl.
That said, yes, everything will work from the terminal; you just need to be careful about the escaping that bash performs.
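For example, a minimal unicode.sh wrapping the one-liners might look like this, with single quotes protecting the Perl code from bash (the second command is a hypothetical stand-in for the rest of your list):
#!/bin/bash
perl -p -i -e 's/\n\n/\n/g' *.xml
perl -p -i -e 's/\x{00A0}/ /g' *.xml   # hypothetical second regex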

PowerShell: Read a section of a file into a variable

I'm trying to create a kind of a polyglot script. It's not a true polyglot because it actually requires multiple languages to perform, although it can be "bootstrapped" by either Shell or Batch. I've got this part down no problem.
The part I'm having trouble with is a bit of embedded Powershell code, which needs to be able to load the current file into memory and extract a certain section that is written in yet another language, store it in a variable, and finally pass it into an interpreter. I have an XML-like tagging system that I'm using to mark sections of the file in a way that will hopefully not conflict with any of the other languages. The markers look like this:
lang_a_code
# <{LANGB}>
... code in language B ...
... code in language B ...
... code in language B ...
# <{/LANGB}>
lang_c_code
The #'s are comment markers, but the comment markers can be different things depending on the language of the section.
The problem I have is that I can't seem to find a way to isolate just that section of the file. I can load the entire file into memory, but I can't get the stuff between the tags out. Here is my current code:
#ECHO OFF
SETLOCAL EnableDelayedExpansion
powershell -ExecutionPolicy unrestricted -Command ^
$re = '(?m)^<{LANGB}^>(.*)^<{/LANGB}^>';^
$lang_b_code = ([IO.File]::ReadAllText(^'%0^') -replace $re,'$1');^
echo "${re}";^
echo "Contents: ${lang_b_code}";
Everything I've tried so far results in the entire file being output in the Contents rather than just the code between the markers. I've tried different methods of escaping the symbols used in the markers, but it always results in the same thing.
NOTE: The use of the ^ is required because the top-level interpreter is Batch, which hangs up on the angle brackets and other random things.
Since there is just one block, you can use the regex
$re = '(?s)^<{LANGB}^>(.*)^^.*^<{/LANGB}^>';^
but with the -match operator, and then access the text using the $matches[1] variable that is set as a result of -match.
So, after the regex declaration, use
[IO.File]::ReadAllText(^'%0^') -match $re;^
echo $matches[1];
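For comparison, the same single-block capture expressed in Perl rather than PowerShell (a sketch; it reads the current file via $0 and assumes the markers appear exactly once):
# read this script's own source, then capture everything between the markers
my $text = do { local $/; open my $fh, '<', $0 or die $!; <$fh> };
my ($lang_b_code) = $text =~ m{<\{LANGB\}>(.*)<\{/LANGB\}>}s;
print $lang_b_code;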

Bash, Netcat, Pipes, Perl

Background: I have a fairly simple bash script that I'm using to generate a CSV log file. As part of that bash script I poll other devices on my network using netcat. The netcat command returns a stream of information that I can pipe that into a grep command to get to certain values I need in the CSV file. I save that return value from grep into a bash variable and then at the end of the script, I write out all saved bash variables to a CSV file. (Simple enough.)
The change I'd like to make is to reduce the number of netcat commands I have to issue for each piece of information I want to save off. Each issued netcat command returns ALL possible values (so each call returns the same data and is burdensome on the network). So, I'd like to use netcat only once and parse the return value as many times as I need to create the bash variables that can later be concatenated together into a single record in the CSV file I'm creating.
Specific Question: Using bash syntax, if I redirect the output of the netcat command to a file using > (versus the current grep method), I get a file with each entry on its own line (presumably separated with \n as the EOL record separator -- easy for Perl regex). However, if I save the output of netcat directly to a bash variable and echo that variable, all of the data is jumbled together, so it is cumbersome to parse (not so easy).
I have played with two options. First, I think a Perl one-liner may be a good solution here, but I'm not sure how best to execute it. The pseudocode would be to save the netcat output to a bash variable and then somehow figure out how to parse it with Perl (not straightforward, though).
The second option would be to use bash's > and send netcat's output to a file. This would be easy to process with perl and Regex given the \n EOL, but that would require opening an external file and passing it to a perl script for processing AND then somehow passing its return value back into the bash script as a bash variable for entry into the CSV file.
I know I'm missing something simple here. Is there a way I can force a newline entry into the bash variable from netcat and then repeatedly run a perl one-liner against that variable to create each of the CSV variables I need -- all within the same bash script? Sorry for the long question.
"The second option would be to use bash's > and send netcat's output to a file. This would be easy to process with perl and Regex given the \n EOL, but that would require opening an external file and passing it to a perl script for processing AND then somehow passing its return value back into the bash script as a bash variable for entry into the CSV file."
This is actually a fairly common idiom: save the output from netcat in a temporary file, then use grep or awk or perl or what-have-you as many times as necessary to extract data from that file:
# create a temporary file and arrange to have it
# deleted when the script exits.
tmpfile=$(mktemp tmpXXXXXX)
trap "rm -f $tmpfile" EXIT
# dump data from netcat into the
# temporary file.
nc somehost someport > $tmpfile
# extract some information into variable `myvar`
myvar=$(awk '/something/ {print $4}' $tmpfile)
That last line demonstrates how to get the output of something (in this case, an awk script) into a variable. If you were using perl to extract some information, you could do the same thing.
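The same idea with a perl one-liner in place of awk (the pattern is a placeholder for whatever you're matching):
myvar=$(perl -ne 'print $1 if /something\s+(\S+)/' $tmpfile)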
You could also just write the whole script in perl, which might make your life easier.
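A sketch of that all-Perl version (the host, port, patterns, and output file name are placeholders):
#!/usr/bin/env perl
use strict;
use warnings;

# run netcat once and keep all of its output
my @lines = qx/nc somehost someport/;
die "nc failed\n" if $? != 0;

# pull each field out of the saved output with a regex
my ($field1) = map { /something\s+(\S+)/ ? $1 : () } @lines;
my ($field2) = map { /otherthing\s+(\S+)/ ? $1 : () } @lines;

# append one record to the CSV file
open my $csv, '>>', 'log.csv' or die "log.csv: $!";
print $csv join(',', $field1 // '', $field2 // ''), "\n";
close $csv;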

Using a variable in search and replace in Perl

I'm facing issues while trying to use search and replace in Perl.
This is not an issue on Unix but appears only on Windows. I'm using a variable to search a file and replace matches with the desired string.
Also, I'm using it in a one-liner within a Perl script, so it just adds to the problem!
$oldstring = 1234;
$newstring = 6789;
system("perl -pi.back e s/$oldstring/$newstring/g $filename");
I'm retrieving the file names in a directory from an array and passing them as input to the one-liner. There seems to be no change in the output files, but it does not report any warnings or failures either.
I tried the following too:
system("perl -pi.back e 's/$oldstring/$newstring/g' $filename");
Why is the search and replace not working as expected?
You need appropriate quoting for system() and for the command line:
system(qq(perl -pi.back -e "s/$oldstring/$newstring/g" $filename));
or use the simpler and more efficient list form of system, which avoids calling the shell entirely:
system("perl", "-pi.back", "-e", "s/$oldstring/$newstring/g", $filename);