The following are my memory directories. I need to enter every directory and modify the following three files:
hcell.list
SmicDR1T_cal40_log_ll_sali_p1mx_1tm_121825.drc
SmicSP1R_cal40_LL_sali_p1mtx_11182533.lvs
All three files contain the string "TPSRAM_256X120", and I want to replace it with the name of the directory the file lives in. How should I do it? Here is the directory listing:
SPRF_256X34 SPSRAM_128X30 SPSRAM_192X16 SPSRAM_240X48 SPSRAM_2944X72 SPSRAM_480X48 SPSRAM_512X8 SPSRAM_72X8 SPSRAM_960X60_WEM SROM_8192X8
command.log SPSRAM_1024X14 SPSRAM_128X64 SPSRAM_2048X17 SPSRAM_240X56 SPSRAM_304X128 SPSRAM_480X64 SPSRAM_5376X17 SPSRAM_8192X12 SPSRAM_960X8 SROM_960X26
filenames.log SPSRAM_1024X16 SPSRAM_152X8 SPSRAM_2048X8 SPSRAM_240X72 SPSRAM_32X64 SPSRAM_480X66 SPSRAM_5376X80 SPSRAM_8192X20 SPSRAM_960X80 TPSRAM_1920X9
mem_new.list SPSRAM_11520X28 SPSRAM_16384X34 SPSRAM_240X10 SPSRAM_240X8 SPSRAM_384X19 SPSRAM_480X96 SPSRAM_544X20 SPSRAM_8192X34 SPSRAM_960X96 TPSRAM_256X120
SPRF_240X20 SPSRAM_120X72 SPSRAM_16384X38 SPSRAM_240X152 SPSRAM_240X88 SPSRAM_4352X8 SPSRAM_496X44 SPSRAM_544X21 SPSRAM_8192X52 SROM_1024X16
SPRF_240X32 SPSRAM_120X80 SPSRAM_16384X40 SPSRAM_240X17 SPSRAM_240X9 SPSRAM_4480X8 SPSRAM_496X82 SPSRAM_5760X32 SPSRAM_8192X72 SROM_1440X14
SPRF_240X82 SPSRAM_120X88 SPSRAM_1920X56 SPSRAM_240X18 SPSRAM_240X96 SPSRAM_480X128 SPSRAM_496X86 SPSRAM_64X22 SPSRAM_8192X8 SROM_1888X26
SPRF_240X86 SPSRAM_1216X40 SPSRAM_1920X60 SPSRAM_240X22 SPSRAM_256X8 SPSRAM_480X144 SPSRAM_512X10 SPSRAM_64X24 SPSRAM_8192X9 SROM_4096X8
SPRF_240X86_WEM SPSRAM_1280X32 SPSRAM_1920X8 SPSRAM_240X34 SPSRAM_2688X8 SPSRAM_480X16 SPSRAM_512X17 SPSRAM_64X48 SPSRAM_960X24 SROM_512X16
SPRF_240X90 SPSRAM_128X16 SPSRAM_1920X9 SPSRAM_240X40 SPSRAM_2880X8 SPSRAM_480X32 SPSRAM_512X27 SPSRAM_720X12 SPSRAM_960X60 SROM_736X14
Get all the directories:
set all_dir [glob -type d -nocomplain -dir $dirname *]
In a foreach loop, open each of your files: hcell.list, SmicDR1T_cal40_log_ll_sali_p1mx_1tm_121825.drc, SmicSP1R_cal40_LL_sali_p1mtx_11182533.lvs
set r [open [file join $dir hcell.list] r]
Now replace the content using regsub. Note that regsub takes a pattern, the input string, a replacement, and an output variable name; [file tail $dir] gives the current directory's own name:
regsub "TPSRAM_256X120" $line [file tail $dir] line
To replace a value in a file with another, while writing that file back to the same file, you want code like this:
proc replaceValue {filename changeThis toThis {backupExtension ""}} {
set mapping [list $changeThis $toThis]
# Read and transform the file
set f [open $filename]
set content [string map $mapping [read $f]]
close $f
# Make backup if requested
if {$backupExtension ne ""} {
file rename $filename $filename$backupExtension
}
# Write the new contents back
set f [open $filename "w"]
puts -nonewline $f $content
close $f
}
This is only suitable for files up to a couple of hundred megabytes (assuming you've got plenty of memory) but it is easy.
Then, to apply the alteration to everything in a directory, use glob to list the directory contents, foreach to go over the list, and that procedure to apply the transformation.
# Glob patterns in quotes just because of Markdown formatting bug
foreach filename [glob -directory /the/base/directory "*/*.list" "*/*.drc" "*/*.lvs"] {
# Make backups into .bak; the replacement is the name of the directory containing each file
replaceValue $filename TPSRAM_256X120 [file tail [file dirname $filename]] ".bak"
}
I have the draw.io app installed on my PC. I want to export all tabs with drawings to separate files. The only option I have found is:
"c:\Program Files\draw.io\draw.io.exe" --crop -x -f jpg c:\Users\user-name\Documents\_xxx_\my-file.drawio
Help for draw.io
Usage: draw.io [options] [input file/folder]
Options:
(...)
-x, --export export the input file/folder based on the
given options
-r, --recursive for a folder input, recursively convert
all files in sub-folders also
-o, --output <output file/folder> specify the output file/folder. If
omitted, the input file name is used for
output with the specified format as
extension
-f, --format <format> if output file name extension is
specified, this option is ignored (file
type is determined from output extension,
possible export formats are pdf, png, jpg,
svg, vsdx, and xml) (default: "pdf")
-a, --all-pages export all pages (for PDF format only)
-p, --page-index <pageIndex> selects a specific page, if not specified
and the format is an image, the first page
is selected
-g, --page-range <from>..<to> selects a page range (for PDF format only)
(...)
does not support my case (I am exporting to jpg, not PDF). I can use one of these:
-p, --page-index <pageIndex> selects a specific page, if not specified
and the format is an image, the first page
is selected
-g, --page-range <from>..<to> selects a page range (for PDF format only)
but how do I get the page range, or the number of pages, so I can select the indexes?
There is no easy way to find the number of pages out of the box with Draw.io's CLI options.
One solution would be to export the diagram as XML first.
draw.io --export --format xml --uncompressed test-me.drawio
Then count how many diagram elements there are; that should equal the number of pages (I briefly tested this, but I'm not 100% sure the diagram element appears exactly once per page).
grep -o "<diagram" "test-me.xml" | wc -l
Here is an example putting it all together in a bash script (I tried this on macOS 10.15):
#!/bin/bash
file=test-me # File name excluding extension
# Export diagram to plain XML
draw.io --export --format xml --uncompressed "$file.drawio"
# Count how many pages based on <diagram element
count=$(grep -o "<diagram" "$file.xml" | wc -l)
# Export each page as a PNG
# Page index is zero based
for ((i = 0 ; i <= $count-1; i++)); do
draw.io --export --page-index $i --output "$file-$i.png" "$file.drawio"
done
The OP asked the question with reference to the Windows version, so here's a PowerShell solution inspired by eddiegroves:
$DIR_DRAWIO = "."
$DrawIoFiles = Get-ChildItem $DIR_DRAWIO *.drawio -File
foreach ($file in $DrawIoFiles) {
"File: '$($file.FullName)'"
$xml_file = "$($file.DirectoryName)/$($file.BaseName).xml"
if ((Test-Path $xml_file)) {
Remove-Item -Path $xml_file -Force
}
# export to XML
& "C:/Program Files/draw.io/draw.io.exe" '--export' '--format' 'xml' $file.FullName
# wait for XML file creation
while ($true) {
if (-not (Test-Path $xml_file)) {
Start-Sleep -Milliseconds 200
}
else {
break
}
}
# load to XML Document (cast text array to object)
$drawio_xml = [xml](Get-Content $xml_file)
# for each page export png
for ($i = 0; $i -lt $drawio_xml.mxfile.pages; $i++) {
$file_out = "$($file.DirectoryName)/$($file.BaseName)$($i + 1).png"
& "C:/Program Files/draw.io/draw.io.exe" '--export' '--border' '10' '--page-index' $i '--output' $file_out $file.FullName
}
# wait for last file PNG image file
while ($true) {
if (-not (Test-Path "$($file.DirectoryName)/$($file.BaseName)$($drawio_xml.mxfile.pages).png")) {
Start-Sleep -Milliseconds 200
}
else {
break
}
}
# remove/delete XML file
if ((Test-Path $xml_file)) {
Remove-Item -Path $xml_file -Force
}
# export 'vsdx' & 'pdf'
& "C:/Program Files/draw.io/draw.io.exe" '--export' '--format' 'vsdx' $file.FullName
Start-Sleep -Milliseconds 1000
& "C:/Program Files/draw.io/draw.io.exe" '--export' '--format' 'pdf' $file.FullName
}
I am trying to compress a folder containing files and subfolders (with files) into a single zip. I'm limited to the core Perl modules, so I'm trying to work with IO::Compress::Zip. I want to remove the working-directory part of each path, but I end up with a blank first folder above my zipped folder, as if there were a trailing "/" I haven't been able to get rid of.
use Cwd;
use warnings;
use strict;
use File::Find;
use IO::Compress::Zip qw(:all);
my $cwd = getcwd();
$cwd =~ s/[\\]/\//g;
print $cwd, "\n";
my $zipdir = $cwd . "\\source_folder";
my $zip = "source_folder.zip";
my @files = ();
sub process_file {
next if (($_ eq '.') || ($_ eq '..'));
if (-d && $_ eq 'fp'){
$File::Find::prune = 1;
return;
}
push @files, $File::Find::name if -f;
}
find(\&process_file, $cwd . "\\source_folder");
zip \@files => "$zip", FilterName => sub{ s|\Q$cwd|| } or die "zip failed: $ZipError\n";
I have also attempted the option "CanonicalName => 1", which appears to keep the whole file path except the drive letter (C:).
Substitution with
s[^$dir/][]
did nothing and
s<.*[/\\]><>
left me with no folder structure at all.
What am I missing?
UPDATE
The red level (see screenshot) is unexpected and not wanted; Windows Explorer is not able to see beyond this level.
There are two issues with your script.
First, you are mixing Windows and Linux/Unix paths in the script. Let me illustrate.
I've created a subdirectory called source_folder to match your script:
$ dir source_folder
Volume in drive C has no label.
Volume Serial Number is 7CF0-B66E
Directory of C:\Scratch\source_folder
26/11/2018 19:48 <DIR> .
26/11/2018 19:48 <DIR> ..
26/11/2018 17:27 840 try.pl
01/06/2018 13:02 6,653 url
2 File(s) 7,493 bytes
When I run your script unmodified, I get an apparently empty zip file when I view it in Windows Explorer. But if I use a command-line unzip, I see that source_folder.zip isn't empty; it has non-standard file names that are part Windows and part Linux/Unix.
$ unzip -l source_folder.zip
Archive: source_folder.zip
Length Date Time Name
--------- ---------- ----- ----
840 2018-11-26 17:27 \source_folder/try.pl
6651 2018-06-01 13:02 \source_folder/url
--------- -------
7491 2 files
The mix of Windows and Unix paths is created in this line of your script:
find(\&process_file, $cwd . "\\source_folder");
You are concatenating a Unix-style path in $cwd with a Windows-style part, "\source_folder".
Change the line to use a forward slash, rather than a backslash to get a consistent Unix-style path.
find(\&process_file, $cwd . "/source_folder");
The second problem is this line
zip \@files => "$zip",
FilterName => sub{ s|\Q$cwd|| },
BinmodeIn =>1
or die "zip failed: $ZipError\n";
The substitute, s|\Q$cwd||, needs an extra "/", like this s|\Q$cwd/|| to make sure that the path added to the zip archive is a relative path. So the line becomes
zip \@files => "$zip", FilterName => sub{ s|\Q$cwd/|| } or die "zip failed: $ZipError\n";
Once those two changes are made, I can view the zip file in Explorer, and I get Unix-style relative paths when I use the command-line unzip:
$ unzip -l source_folder.zip
Archive: source_folder.zip
Length Date Time Name
--------- ---------- ----- ----
840 2018-11-26 17:27 source_folder/try.pl
6651 2018-06-01 13:02 source_folder/url
--------- -------
7491 2 files
This works for me:
use Cwd;
use warnings;
use strict;
use File::Find;
use IO::Compress::Zip qw(:all);
use Data::Dumper;
my $cwd = getcwd();
$cwd =~ s/[\\]/\//g;
print $cwd, "\n";
my $zipdir = $cwd . "/source_folder";
my $zip = "source_folder.zip";
my @files = ();
sub process_file {
next if (($_ eq '.') || ($_ eq '..'));
if (-d && $_ eq 'fp') {
$File::Find::prune = 1;
return;
}
push @files, $File::Find::name if -f;
}
find(\&process_file, $cwd . "/source_folder");
print Dumper \@files;
zip \@files => "$zip", FilterName => sub{ s|\Q$cwd/|| } or die "zip failed: $ZipError\n";
I changed the path separator to '/' in your call to find() and also stripped it in the FilterName sub.
console:
C:\Users\chris\Desktop\devel\experimente>mkdir source_folder
C:\Users\chris\Desktop\devel\experimente>echo 1 > source_folder/test1.txt
C:\Users\chris\Desktop\devel\experimente>echo 1 > source_folder/test2.txt
C:\Users\chris\Desktop\devel\experimente>perl perlzip.pl
C:/Users/chris/Desktop/devel/experimente
Exiting subroutine via next at perlzip.pl line 19.
$VAR1 = [
'C:/Users/chris/Desktop/devel/experimente/source_folder/test1.txt',
'C:/Users/chris/Desktop/devel/experimente/source_folder/test2.txt'
];
C:\Users\chris\Desktop\devel\experimente>tar -tf source_folder.zip
source_folder/test1.txt
source_folder/test2.txt
I am trying to remove the old files in a directory over SSH if the file count is more than 3.
Kindly suggest how to resolve the issue.
Please refer to the code snippet:
#!/usr/bin/perl
use strict;
use warnings;
my $HOME="/opt/app/latest";
my $LIBS="${HOME}/libs";
my $LIBS_BACKUP_DIR="${HOME}/libs_backups";
my $a;
my $b;
my $c;
my $d;
my $command =qq(sudo /bin/su - jenkins -c "ssh username\@server 'my $a=ls ${LIBS_BACKUP_DIR} | wc -l;my $b=`$a`;if ($b > 3); { print " Found More than 3 back up files , removing older files..";my $c=ls -tr ${LIBS_BACKUP_DIR} | head -1;my $d=`$c`;print "Old file name $d";}else { print "No of back up files are less then 3 .";} '");
print "$command\n";
system($command);
output:
sudo /bin/su - jenkins -c "ssh username@server 'my ; =ls /opt/app/latest/libs_backups | wc -l;my ; =``;if ( > 3); { print " Found More than 3 back up files , removing older files..";my ; =ls -tr /opt/app/latest/libs_backups | head -1;my ; =``;print "Old file name ";}else { print "No of back up files are less then 3 .";} '"
Found: -c: line 0: unexpected EOF while looking for matching `''
Found: -c: line 1: syntax error: unexpected end of file
If you have three levels of escaping, you're bound to get it wrong if you do it manually. Use String::ShellQuote's shell_quote instead.
Furthermore, avoid generating code. You're bound to get it wrong! Pass the necessary information using arguments, the environment or some other channel of communication instead.
There were numerous errors in the interior Perl script on top of the fact that you tried to execute a Perl script without actually invoking perl!
#!/usr/bin/perl
use strict;
use warnings;
use String::ShellQuote qw( shell_quote );
my $HOME = "/opt/app/latest";
my $LIBS = "$HOME/libs";
my $LIBS_BACKUP_DIR = "$HOME/libs_backups";
my $perl_script = <<'__EOI__';
use strict;
use warnings;
use String::ShellQuote qw( shell_quote );
my ($LIBS_BACKUP_DIR) = @ARGV;
my $cmd = shell_quote("ls", "-tr", "--", $LIBS_BACKUP_DIR);
chomp( my @files = `$cmd` );
if (@files > 3) {
print "Found more than 3 back up files. Removing older files...\n";
print "$_\n" for @files;
} else {
print "Found three or fewer backup files.\n";
}
__EOI__
my $remote_cmd = shell_quote("perl", "-e", $perl_script, "--", $LIBS_BACKUP_DIR);
my $ssh_cmd = shell_quote("ssh", 'username@server', "--", $remote_cmd);
my $local_cmd = shell_quote("sudo", "su", "-c", $ssh_cmd);
system($local_cmd);
I created a new file that handles the directory check and deletion logic; I scp the file to the remote server, execute it there, and remove the file after completion.
#!/usr/bin/perl
use strict;
use warnings;
use File::Basename;
use File::Path;
use FindBin;
use File::Copy;
my $HOME="/opt/app/test/latest";
my $LIBS_BACKUP_DIR="${HOME}/libs_backups";
my $a="ls ${LIBS_BACKUP_DIR} | wc -l";
my $b=`$a`;
my $c="ls -tr ${LIBS_BACKUP_DIR} | head -1";
my $d=`$c`;
chomp($d);
print " count : $b\n";
if ($b > 3)
{
print " Found More than 3 back up files , removing older files..\n";
print "Old file name $d\n";
my $filepath="${LIBS_BACKUP_DIR}/$d";
rmtree $filepath;
}
else
{
print "No of back up files are less then 3 .\n";
}
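For what it's worth, here is a minimal self-contained sketch of the same cleanup done purely in Perl, without shelling out to ls. The directory path is taken from the snippet above; keeping the three newest files and deleting the rest is my reading of the requirement:
#!/usr/bin/perl
use strict;
use warnings;
my $dir = "/opt/app/test/latest/libs_backups";
# Collect the plain files and sort them newest first by mtime
opendir(my $dh, $dir) or die "Cannot open $dir: $!";
my @files = grep { -f "$dir/$_" } readdir $dh;
closedir $dh;
my @newest_first = sort { (stat "$dir/$b")[9] <=> (stat "$dir/$a")[9] } @files;
# Everything after the first three entries is an old backup
if (@newest_first > 3) {
    print "Found more than 3 backup files, removing older files...\n";
    for my $old (@newest_first[3 .. $#newest_first]) {
        print "Removing old file $old\n";
        unlink "$dir/$old" or warn "Could not remove $old: $!";
    }
} else {
    print "Number of backup files is 3 or fewer.\n";
}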
I have a script which populates the file names and last-modified times of the files under a particular directory. I have used the File::DosGlob module to specify the glob pattern.
Sample directory structure is:
//share16/ABC/X/Output/1/
//share16/ABC/X/Output/2/
//share16/ABC/Y/Output/1/
//share16/ABC/Y/Output/2/
Below is the code which does the above; there is further code after this which is outside the present context.
use File::DosGlob 'glob';
use File::Find;       # find()
use File::stat;       # stat() returning an object with ->mtime
use Time::localtime;  # ctime()
my @dir_regex = glob "//share16/ABC/*/Output/";
for my $dir (@dir_regex) {
find( { wanted => \&process_file, no_chdir => 1 }, $dir ) or die $!;
}
sub process_file {
my $dummy = $_;
if ( -f $dummy ) {
my $filename = "//share16/TOOLS/report.txt";
open( my $fh, '>>', $filename )
or die "Could not open file '$filename' $!";
my $last_mod_time = ctime( stat($dummy)->mtime );
print $fh "$last_mod_time $dummy\n";
}
close $fh;
}
The script successfully lists the files under all folders (folder 1, folder 2) under the first directory X, but fails immediately when it starts reading folder Y.
Error: No such file or directory at \share16\traverse4.pl line 5.
I am clueless as to why it is failing: I have tried hardcoding the folder name in dir_regex, but it still fails after listing the files under the first directory.
Because you set no_chdir, find doesn't chdir, so your various calls that use a relative path fail. The simple solution would be to use $File::Find::name instead of $_.
I'd also note that you can just pass a list of directories to File::Find - you don't need to call it for each one separately.
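For instance, a minimal sketch combining both suggestions (the share paths are the ones from the question):
use File::DosGlob 'glob';
use File::Find;
my @dirs = glob "//share16/ABC/*/Output/";
# One find() call over the whole directory list; use the full path, not $_
find( { wanted => \&process_file, no_chdir => 1 }, @dirs );
sub process_file {
    my $path = $File::Find::name;
    print "$path\n" if -f $path;
}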
Friends, I'm trying to automate a routine using expect. Basically, there is a debug plugin on a special piece of equipment that I need to log some data from. To access this debug plugin, my company has to give me a response key based on a challenge key. There are a lot of hosts, and I need to deliver this by Friday. Here is what I've done so far:
#!/usr/bin/expect -f
match_max 10000
set f [open "cimc.txt"]
set hosts [split [read $f] "\n"]
close $f
foreach host $hosts {
spawn ssh ucs-local\\marcos@10.2.8.2
expect "Password: "
send "Temp1234\r"
expect "# "
send "connect cimc $host\r"
expect "# "
send "load debug plugin\r"
expect "ResponseKey#>"
sleep 2
set buffer $expect_out(buffer)
set fid [open output.txt w]
puts $fid $buffer
close $fid
sleep 10
spawn ./find-chag.sh
sleep 2
set b [open "key.txt"]
set challenge [read $b]
close $b
spawn ./find-rep.sh $challenge
sleep 3
set c [open "rep.txt"]
set response [read $c]
close $c
puts Response-IS
send "\r"
expect "ResponseKey#> "
send "$response"
}
$ cat find-chag.sh
cat output.txt | awk 'match($0,"ChallengeKey"){print substr($0,RSTART+15,38)}' > key.txt
$ cat find-rep.sh
curl bla-blabla.com/CIMC-key/generate?key=$1 | grep ResponseAuth | awk 'match($0,"</td><td>"){print substr($0,RSTART+9,35)}' > rep.txt
I don't know how to work with regexps in expect, so I wrote the buffer output to a file and used bash scripts. The problem is that after I run the scripts with spawn, it looks like my ssh session is lost. Does anyone have any tips? Should I use something other than spawn to invoke my scripts?
expect -re "my tcl compatible regular expression goes here"
Should allow you to use regular expressions.
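For example, a minimal sketch of capturing the challenge key directly in expect instead of writing the buffer to a file; the "ChallengeKey" label and key format here are assumptions based on your find-chag.sh, so adjust the pattern to the real output:
send "load debug plugin\r"
# -re matches a Tcl regular expression; the parenthesised group lands in expect_out(1,string)
expect -re {ChallengeKey\s*:?\s*(\S+)} {
    set challenge $expect_out(1,string)
}
puts "Challenge is $challenge"
As for the lost session: spawn starts a new process and makes it the current spawn_id, so after you spawn your helper scripts, send and expect no longer talk to the ssh session. Running the helpers with exec instead of spawn leaves the ssh session as the active spawned process.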