Perl multiline regex replace from CLI - regex

I want to permanently enable a Linux repo from the command line in a file that contains the definitions of multiple repos. So the file looks something like:
[repo-first]
some config line
another config line
enabled=0
more config lines
[repo-second]
some config line
another config line
enabled=0
more config lines
I want to be able to selectively set 'enabled=0' to 'enabled=1' based on the repo name.
There seem to be multiple ways of sucking in the file and/or ignoring the line separator, including -p0e, -0777, "BEGIN { undef $/ }...". My best guess so far is something like:
perl -0777 -e -pi "BEGIN { undef $/ } s/(\[repo\-first\]\n(.*\n)*?enabled\=)0/$1\1/mg" /etc/yum.repos.d/repo-def-file
But naturally, this doesn't work. Ideas?

It would be much more convenient to use a module, such as Config::Tiny:
use strict;
use warnings;
use Config::Tiny;
my $conf = Config::Tiny->read( 'repo-def-file' );
$conf->{'repo-first'}{enabled} = 1;
$conf->write( 'repo-def-file' );
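If you want to stay on the command line, the same module approach also fits in a one-liner (a sketch; it assumes Config::Tiny is installed, and you should adjust the section name and the path to your repo file):
perl -MConfig::Tiny -e '$c = Config::Tiny->read($ARGV[0]); $c->{"repo-first"}{enabled} = 1; $c->write($ARGV[0])' /etc/yum.repos.d/repo-def-file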

As explained, a regex is generally the wrong approach, but it does have the advantage of retaining the order and structure of the file.
So if you must, then this will do the job:
perl -0777 -i -pe's/\[repo-first\][^[]+enabled=\K\d+/1/' repo-def-file
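To check the result without editing anything, the same pattern can be reused read-only (a quick sketch):
perl -0777 -ne 'print "$1\n" if /\[repo-first\][^[]+enabled=(\d+)/' repo-def-file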

Related

How to update path in Perl script in multiple files

I am working on creating some training material where I am using perl. One of the things I want to do is have the scripts be set up for the student correctly, regardless of where they extract the compressed files. I am working on a Windows batch file that will copy the perl templates to the working location and then update the path in the copied perl template files to the correct location. The perl templates have this as the first line:
#!_BASE_software/perl/bin/perl.exe
The batch file looks like this:
SET TRAINING=%~dp0
copy %TRAINING%\template\*.pl %TRAINING%work
%TRAINING%software\perl\bin\perl -pi.bak -e 's/_BASE_/%TRAINING%/g' %TRAINING%work\*.pl
I have a few problems with this:
Perl doesn't seem to like the wildcard in the filename
It turns out that %TRAINING% is going to expand into a string with backslashes, which need to be converted into forward slashes and escaped within the regex.
How do I fix this?
First of all, Windows doesn't use the shebang line, so I'm not sure why you're doing any of this work in the first place.
Perl will read the shebang line and look for options if perl is found in the path, even on Windows, but that means that #!perl is sufficient if you want to pass options via the shebang line (e.g. #!perl -n).
Now, it's possible that you use Cygwin, MSYS or some other unix emulation instead of Windows to run the program, but you are placing a Windows path in the shebang line (C:...) rather than a unix path, so that doesn't make sense either.
There are three additional problems with the attempt:
cmd uses double-quotes for quoting.
cmd doesn't perform wildcard expansion like sh, so it's up to your program do it.
You are trying to generate Perl code from cmd. Ouch.
If we go ahead, we get:
"%TRAINING%software\perl\bin\perl" -MFile::DosGlob=glob -pe"BEGIN { #ARGV = map glob, #ARGV; $base = $ENV{TRAINING} =~ s{\\}{/}rg } s/_BASE_/$base/g" -i.bak -- %TRAINING%work\*.pl
If we add line breaks for readability, we get the following (that cmd won't accept):
"%TRAINING%software\perl\bin\perl"
-MFile::DosGlob=glob
-pe"
BEGIN {
    @ARGV = map glob, @ARGV;
    $base = $ENV{TRAINING} =~ s{\\}{/}rg
}
s/_BASE_/$base/g
"
-i.bak -- %TRAINING%work\*.pl
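If you want to sanity-check the backslash-to-slash conversion on its own, a tiny standalone script shows what s{\\}{/}rg produces (a sketch; the fallback path is invented for illustration):
#!perl
use strict;
use warnings;

# Convert backslashes to forward slashes without modifying the original value.
# The /r flag (Perl 5.14+) returns the changed copy instead of editing in place.
my $base = ($ENV{TRAINING} // 'C:\\training\\') =~ s{\\}{/}rg;
print "$base\n";    # e.g. C:/training/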

How to change the current line in BASH using regex

My task is to change all function names in C source code (definitions, declarations and comments) which have only one underscore and only small letters. For example, my_func(void) ----> myFunc(void). How should I do it? In my code I check if the line of the file contains the name of a function which I have to change, but I have no idea how to do it. Or maybe this task has a more efficient and better solution?
while read line; do
    if [[ "$line" =~ ^(int|char|float|long|short|void|double)?[[:space:]]?[^_]([a-z0-9]+[_]?)+[(]([a-z[:space:]])*[)][:space:]*[{]? ]]; then
        # here should be the code
    fi
done < ${FILENAMES[i]}
I understand that 'sed' should be used here, something like
sed -i 's/_//' ${FILENAMES[i]}
but this command changes all lines of my file, not only the ones I want to change. Thanks.
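One possible starting point is a Perl one-liner instead of the bash loop (a sketch, not a complete solution: it only renames lowercase names containing a single underscore that are directly followed by an opening parenthesis, so bare mentions in comments would need separate handling, and file.c is a placeholder):
perl -i.bak -pe 's/\b([a-z]+)_([a-z]+)(?=\s*\()/$1 . ucfirst $2/ge' file.c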

How to pass a variable line number in sed substitute command

I am trying to do a sed operation like this
sed -i '100s/abc/xyz/' filename.txt
I wanted 100 in a variable say $var from a perl script. So, I am trying like this
system("sed -i "${vars}s/abc/xyz/" filename.txt").
This is throwing some error.
Again, when I do it like this, putting the system command in single quotes:
system('sed -i "${vars}s/abc/xyz/" filename.txt')
this is substituting wrongly. What can be done?
It is better and safer to use the LIST variant of system, because it avoids unsafe shell command line parsing. The command, sed in your case, will receive the command line arguments unaltered and without the need to quote them.
NOTE: I added -MO=Deparse just to illustrate what the one-liner compiles to.
NOTE: I added -e to be on the safe side as you have -i on the command line which expects a parameter.
$ perl -MO=Deparse -e 'system(qw{sed -i -e}, "${vars}s/abc/xyz/", qw{filename.txt})'
system(('sed', '-i', '-e'), "${vars}s/abc/xyz/", 'filename.txt');
-e syntax OK
Of course in reality it would be easier just to do the processing in Perl itself instead of calling sed...
Shelling out to sed from within perl is a road to unnecessary pain. You're introducing additional quoting and variable expansion layers, and that's at best making your code less clear, and at worst introducing bugs accidentally.
Why not just do it in native perl, which is considerably more effective? Perl even allows you to do in-place editing if you want.
But it's as simple as:
open ( my $input, '<', 'filename.txt');
open ( my $output, '>', 'filename.txt.new');
select $output;                  # make $output the default filehandle for print
while ( <$input> ) {
    if ( $. == $vars ) {         # $. is the current input line number
        s/abc/xyz/;
    }
    print;                       # goes to the new file thanks to select
}
Or if you're really keen on the in-place edit, you can look into setting $^I:
Perl in place editing within a script (rather than one liner)
But I'd suggest that 'just' renaming the file after you're done is as easy.
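For reference, a rough sketch of the $^I approach within a script might look like this (the file name and line number are assumed; adjust to taste):
#!/usr/bin/env perl
use strict;
use warnings;

my $vars = 100;             # the line number to edit (assumed)
$^I   = '.bak';             # enable in-place editing, keeping a backup
@ARGV = ('filename.txt');   # file(s) to edit in place

while (<>) {
    s/abc/xyz/ if $. == $vars;   # substitute only on the chosen line
    print;                       # everything printed goes back into the file
}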

How to run list of perl regex from file in terminal

I'm fairly new to the whole coding game, and am very grateful for every answer!
I am working on a directory with many .txt files in it and have a file with a long list of regexes like "perl -p -i -e 's/\n\n/\n/g' *.xml". They all work if I copy them to the terminal. But is there a possibility to run them straight from the file?
I tried ./unicode.sh but that resulted in:
No such file or directory.
Any ideas?
Thank you so much!
Here's a (mostly) equivalent Perl script to the one-liner perl -p -i -e 's/\n\n/\n/g' *.xml (one main difference being that this has strict and warnings enabled, which is strongly recommended), which you could expand upon by putting more code to modify the current line in the body of the while loop.
#!/usr/bin/env perl
use warnings;
use strict;
if (!@ARGV) {                # if no files on command line
    @ARGV = glob('*.xml');   # get a default list of files
}
local $^I = '';              # enable inplace editing (like perl -i)
while (<>) {                 # read each line of each file into $_
    s/\n\n/\n/g;             # modify $_ with a regex
    # more regexes here...
    print;                   # write the line $_ back out
}
You can save this script in a file such as process.pl, and then run it with perl process.pl, or do chmod u+x process.pl and then run it via ./process.pl.
On the other hand, you really shouldn't modify XML files with regular expressions, there are lots of Perl modules to do XML processing - I wrote about that some more here. Also, in the example you showed, s/\n\n/\n/g actually won't have any effect, since when reading files line-by-line, no string will contain two \n's (you can change how Perl reads files, but I don't see any mention of that in the question).
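For completeness, if the goal really is to collapse blank lines, one way (a sketch, assuming the files are small enough to read whole) is to slurp each file instead of reading line by line:
#!/usr/bin/env perl
use warnings;
use strict;

@ARGV = glob('*.xml') if !@ARGV;   # default file list, as above
local $^I = '';                    # enable inplace editing (like perl -i)
local $/;                          # slurp mode: read each file as one string
while (<>) {
    s/\n\n/\n/g;                   # now the regex can actually see two \n's
    print;
}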
Edit: You've named the script in your example unicode.sh - if you're processing Unicode files, then Perl has very powerful features to help with that, although the code won't necessarily end up as nice and short as I've showed above. You'll have to tell us some more about what you're doing, and show some example input and output, to get suggestions about that. See also e.g. perlunitut.
It's likely that if you got "No such file or directory", your problem was that you forgot to make unicode.sh executable, as in chmod +x unicode.sh, assuming that's a script that you wrote.
Of course, the normal way to run multiple perl commands is a perl script that you write yourself, i.e., something like runme.pl.
That said, yes, everything will work from the terminal; you just need to be careful about the escaping that bash performs.
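In other words, if unicode.sh is meant to be that list of one-liners, give it a shebang, keep each command single-quoted so bash doesn't touch the regexes, and make it executable (a sketch; the second rule is a made-up placeholder):
#!/bin/bash
# unicode.sh - one perl one-liner per line
perl -p -i -e 's/\n\n/\n/g' *.xml
perl -p -i -e 's/foo/bar/g' *.txt   # made-up example of another rule
Then run it from the directory that holds the files: chmod +x unicode.sh && ./unicode.sh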

Read from text file, capture text, write to another text file in specific syntax

I have a text file A with the following syntax:
Attribute_Name, 'Path', 'Tutorial';
Attribute_Name2, 'Path2', 'Tutorial';
....
What I need to do is to read from that file, capture those 3 values: Attribute Name, Path and Project Name (tutorial in that case) and write it to output text file, B with the following syntax:
DELETE ATTRIBUTE "Attribute_Name" IN FOLDER "Path" FROM PROJECT "Tutorial";
and repeat for as many iterations as there are lines in the input file.
What is the best (easiest) language to implement that in? Can anyone provide example code?
I'd personally do something like that with Perl, because I'm familiar with Perl and it works great for these kinds of tasks. You can also write a sed one-liner to get that done.
If you're not a fan of Perl, any modern dynamic language should let you get the job done with minimal effort.
EDIT: An example Perl script (full file for readability) would look like this:
use warnings;
use strict;
while (my $line = <>) {
    $line =~ /^\s*(.+?), '(.+?)', '(.+?)';$/; # Doesn't handle internal escaping
    print "DELETE ATTRIBUTE \"$1\" IN FOLDER \"$2\" FROM PROJECT \"$3\";\n";
}
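Save it as, say, convert.pl (the name is arbitrary) and run it as perl convert.pl A.txt > B.txt; the script reads whatever files you name on the command line and prints the rewritten lines to standard output, so the redirect produces file B.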