Friends:
I have to process a CSV file using Perl and produce an Excel file as output, using the Excel::Writer::XLSX module. This is not homework but a real-life problem, where I cannot download whichever Perl version I want (I actually need to use Perl 5.6) or whichever Perl modules (I have a limited set of them). My OS is UNIX. I can also use (embedded in Perl) ksh and csh (with some limitations, as I have found so far). Please limit your answers to the tools I have available. Thanks in advance!
Even though I am not a Perl developer but come from other languages, I have already done my work. However, the customer is asking for extra processing, and that is where I am getting stuck.
1) The stones in the road I found come from two sides: Perl's and Excel's particular styles of processing data. I have already found a workaround to handle the Excel side, but, as mentioned in the subject, I have difficulties processing the zeroes found in the CSV input file. To handle the Excel side, I am using the '0 form, which seems to be the final representation Excel uses for such data under the # formatting style.
2) Scenario:
I need to catch standalone zeroes, which might be present in any line / column / cell of the CSV input file, and write them as such (as zeroes) in the Excel output file.
I will go directly to the point of my question to avoid losing your valuable time; I provide more details after the question.
Research and question:
I tried to use a Perl regex to find standalone "0"s and replace them with an arbitrary string, planning to replace them back with "0" at the end of processing.
perl -p -i -e 's/\b0\b/string/g' myfile.csv
and
perl -i -ple 's/\b0\b/string/g' myfile.csv
Both work, but only from the command line. They do not work when I call them from the Perl script as follows:
system("perl -i -ple 's/\b0\b/string/g' myfile.csv")
I do not know why... I have already tried using exec and eval instead of system, with the same results.
Note that I have a ton of regex that work perfectly with the same structure, such as the following:
system("perl -i -ple 's/input/output/g' myfile.csv")
I have also tried using backticks and qx//, without success. Note that qx// and backticks do not behave the same here: qx// complains about the \b boundaries because of the forward-slash delimiter.
I have tried using sed -i, but my system rejects -i as an invalid flag (I do not know whether this happens on every UNIX, but it happens at least on the one at work; it does, however, accept perl -i).
I have tried embedding awk (which works from the command line) in this way:
system `awk -F ',' -v OFS=',' '$1 == \"0\" { $1 = "string" }1' myfile.csv > myfile_copy.csv
But this works only for the first column (on the command line) and, besides the disadvantage of producing an extra copy of the file, Perl complains about the > redirection, taking it as "greater than"...
system(q#awk 'BEGIN{FS=OFS=",";split("1 2 3 4 5",A," ") } { for(i in A)sub(0,"string",$A[i] ) }1' myfile.csv#);
This awk works from the command line, but only for 5 columns; and it does not work in Perl using the # delimiter.
All combinations of exec and eval have also been tested, without success.
I have also tried passing each of the awk components to system as separate comma-separated arguments, but did not find any valid way to pass the redirector (>), since Perl rejects it for the reason mentioned above.
Using another approach, I noticed that the "standalone zeroes" seem to be "swallowed" by the Text::CSV module; thus, I got rid of it and went back to a traditional line-by-line loop over the CSV with a split on commas, preserving the zeroes that way. However, I then ran into the "mystery" of isdual in Perl, and because of my limited set of modules I cannot use Dumper. I also explored the guts of Perl's internals and tried the $x ^ $x trick, which was deprecated as of version 5.22 but is valid up to that version (as I said, mine is 5.6). It is useful for telling numbers from strings. However, while if( $x ^ $x ) returns TRUE for strings, if( !( $x ^ $x ) ) did not seem to return TRUE when $x = 0. [UPDATE: I tried this in a Perl script devoted just to this purpose, and it works. I believe my probably wrong conclusion ("not returning TRUE") was reached before I realized that Text::CSV was swallowing my zeroes. Doing new tests...]
I will appreciate very much your help!
MORE DETAILS ON MY REQUIREMENTS:
1) This is a dynamic report coming from a database; it is handed over to me and I pick it up programmatically from a folder. Dynamic means that it might have any number of tables, any number of columns in each table, any names as column headers, and any number of rows in each table.
2) I do not know, and cannot know, the column names, because they vary from report to report. So I cannot be guided by column names.
A sample input:
Alfa,Alfa1,Beta,Gamma,Delta,Delta1,Epsilon,Dseta,Heta,Zeta,Iota,Kappa
0,J5,alfa,0,111.33,124.45,0,0,456.85,234.56,798.43,330000.00
M1,0,X888,ZZ,222.44,111.33,12.24,45.67,0,234.56,0,975.33
3) Input Explanation
a) This is an example of a random report with 12 columns and 3 rows. The first row is the header.
b) I call "standalone zeroes" those "clean" zeroes that appear in the CSV file, from the second row onwards, between commas: like 0, (when it is the first position in the row) or like ,0, in subsequent positions.
c) At the beginning of the second row of the example you can read 0,J5,alfa,0, which in this particular case are "words" or "strings": 4 names (note that two of them are zeroes, which are required to be treated as strings). Thus we have a 4-name-column example (Alfa,Alfa1,Beta,Gamma are the headers for those columns, but only in this scenario). From that point onwards in the second row you see floating-point (*.00) numbers and, among them, 2 zeroes, which are numbers. Finally, at the beginning of the third row, you can read M1,0,X888,ZZ, which are the names for the first 4 columns. Note, please, that the 4th column of the second row has 0 as its name, while the 4th column of the third row has ZZ as its name.
Summary: as a general picture, I have a table-report divided into 2 parts, from left to right: 4 columns for names and 8 columns for numbers.
Always the first M columns are names and the last N columns are numbers.
- It is unknown which number M is: how many columns devoted to words / strings I will receive.
- It is unknown which number N is: how many columns devoted to numbers I will receive.
- It is KNOWN that N always starts right after the M columns end, and that this is constant for all rows.
I have done some quick research on Perl regex boundaries ( \b ), and I have not found any relevant information on whether or not they apply in Perl 5.6.
However, since you are using an old Perl version, try the traditional UNIX / Linux style (I mean, what Perl inherits from the shell), like this:
system("perl -i -ple 's/^0/string/g' myfile.csv");
The previous regex should do the work, making the change at the start of each line of your CSV file, if it matches.
Or, maybe better (if you have those "standalone" zeroes and want to avoid any unwanted change to some "leading zeroes" string):
system("perl -i -ple 's/^0,/string,/g' myfile.csv");
[Note that I have added the comma after the zero and, of course, after the string.]
Note that the first regex should work; the second one is just a precaution, to be cautious.
If you need to clean content in the "cells" of a TSV (tab-separated values) file, where the cleaning needs to include removing certain characters at the beginning and end of each column in every line, as well as, in some cases, characters in the middle of the content, then this approach works (Ubuntu/Linux):
(The example below uses awk, but any tool/utility that can be run from a bash script, such as sed, tr, or core bash functionality, is the goal.)
awk 'BEGIN { FS = OFS = "\t" }
{
  for (i = 1; i <= NF; i++) {
    gsub(/^[[:space:]]+|[[:space:]]+$|[[:cntrl:]]+|(\\|\/)+$/, "", $i)
    gsub(/(\+$|^\+)/, "", $i)
    # and so on (more replacement conditions could be added here)
  }
  print $0
}' ${file} > ${file}.cleaned
However, when these files are very large, it can take a long time to process, because it breaks each line up into cells and processes each cell. I want to see whether the processing time for the same cleanup operations can be improved.
Can this be rewritten so that instead of breaking up the columns per line and doing the replacement per cell, a regular expression could be constructed to do the same cleanup operations (more could be added later to the above example) on all columns in between the column separator (tabs) for the entire line all at once?
The approach should consider that the files to be cleaned have a dynamic/varying number of columns (i.e. it's not just 2 or 3 columns; it could be 10 or more), so building a fixed regular expression based on a known number of columns isn't an option. However, the script could build the regular expression dynamically, by first determining how many columns there are, and then execute the result. I am curious, though, whether a statically constructed regular expression could be built to do this for the entire line, knowing how to remove certain characters between the content and the tabs of each cell without removing the tabs themselves.
Update, to clarify: in the above example there are some characters that need to be removed anywhere in the cell content (defined as whatever sits between the tabs) and others that need to be removed only at the beginning or end of the content (i.e. [[:space:]]), and some of the character classes to replace would include a tab (i.e. [[:cntrl:]] and [[:space:]]). This update is to clear up any confusion I may have caused by suggesting that all removals happen only at the beginning or end of the cell content.
Sed has a rather more cryptic syntax than does Awk, but for simple substitutions where field-splitting is not required, it's not that cryptic. Certainly the regex syntax itself is the hardest part about it, and that does not change appreciably among the various regex-based tools. For example,
sed \
-e 's,^[ \r\v\f]\+\|\([ \r\v\f]\+\|[\\/]\+\)$,,g' \
-e 's,\([ \r\v\f]*\|[\\/]*\)\t[ \r\v\f]*,\t,g' \
-e 's,[\x00-\x08\x0a-\x1f\x7f]\+,,g' \
-e 's,^+\|+$,,g' \
-e 's,+\?\t+\?,\t,g' \
${file} > ${file}.cleaned
That performs the same transformation that your Awk script does (for reals now, I think). There are some catches there, however, large among them:
- because you get no field splitting, you have to match not only at the beginning and end of each input, but also around the field delimiters;
- yet you need to watch out for matching the field delimiter elsewhere (the first version of this answer did not do an adequate job of that, because the tab is included in both [[:space:]] and [[:cntrl:]]);
- you will want to perform at least some of your substitutions with the g (global) flag, to replace all matches instead of just the first. In Awk, that's the difference between gsub and sub; in Sed and several other languages, it's the difference between supplying the 'g' flag or not.
You could of course package up exactly the same thing with Awk, Perl, etc. without much substantive change. Whether any variation is a bona fide improvement over your original per-field Awk approach is a matter of style preference and performance testing.
I have a lot of relatively small files with about 350.000 lines of text.
For example:
File 1:
asdf
wetwert
ddghr
vbnd
...
sdfre
File 2:
erye
yren
asdf
jkdt
...
uory
As you can see line 3 of file 2 is a duplicate of line 1 in file 1.
I want a program / Notepad++ Plugin that can check and remove these duplicates in multiple files.
The next problem I have is that I want all the lists to be combined into large files of 1.000.000 lines each.
So, for example, I have these files:
648563 lines
375924 lines
487036 lines
I want them to result in these files:
1.000.000 lines
511.523 lines
And the last 2 files must consist of only unique lines.
How can I possibly do this? Can I use some programs for this? Or a combination of multiple Notepad++ Plugins?
I know GSplit can split a file of 1.536.243 lines into files of 1.000.000 and 536.243 lines, but that is not enough, and it doesn't remove duplicates.
I do want to create my own Notepad++ plugin or program if needed, but I have no idea how and where to start.
Thanks in advance.
You have asked about Notepad++ and are thus using Windows. On the other hand, you said you want to create a program if needed, so I guess the main goal is to get the job done.
This answer uses Unix tools - on Windows, you can get those with Cygwin.
To run the commands, you have to type (or paste) them in the terminal / console.
cat file1 file2 file3 | sort -u | split -l1000000 - outfile_
cat reads the files and echoes them, normally to the screen; but the pipe | takes the output of the command to its left and pipes it to the command on the right.
sort obviously sorts them, and the switch -u tells it to remove duplicate lines.
The output is then piped to split, which is told to split after 1000000 lines by the switch -l1000000. The - (with spaces around it) tells it to read its input not from a file but from "standard input"; in this case, the output of sort -u. The last word, outfile_, can be changed by you if you want.
As written, this will result in files like outfile_aa, outfile_ab and so on; you can modify this with the last word in the command.
If you have all the files in one directory, and nothing else is in there, you can use * instead of listing all the files:
cat * | sort -u | split -l1000000 - outfile_
If the files might contain empty lines, you might want to remove them. Otherwise, they'll be sorted to the top and your first file will not have the full 1.000.000 values:
cat file1 file2 file3 | grep -v '^\s*$' | sort -u | split -l1000000 - outfile_
This will also remove lines that consist only of whitespace.
grep filters input using regular expressions. -v inverts the filter: normally, grep keeps only lines that match; now it keeps only lines that don't match. ^\s*$ matches all lines that consist of nothing but 0 or more whitespace characters (like spaces or tabs).
If you need to do this regularly, you can write a script so you don't have to remember the details:
#!/bin/sh
cat * | sort -u | split -l1000000 - outfile_
Save this as a file (for example combine.sh), make it executable with chmod +x combine.sh, and run it with
./combine.sh
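Scaled down, the whole pipeline behaves like this (toy files stand in for the real 350.000-line ones, and everything lands in a single outfile_aa because it all fits under the -l limit):

```shell
# Three toy input files with one duplicate ("asdf") and one empty line:
printf 'asdf\nwetwert\n' > file1
printf 'erye\nasdf\n\n' > file2
printf 'yren\n' > file3

# Drop blank lines, deduplicate, and split into 1.000.000-line chunks:
cat file1 file2 file3 | grep -v '^\s*$' | sort -u | split -l1000000 - outfile_

cat outfile_aa
```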
I've spent way too much time on this and am looking for suggestions. I have two very large files (FASTQ files from an Illumina sequencing run, for those interested). What I need to do is match a pattern common to both files and print that line plus the 3 lines below it into two separate files, without duplications (which exist in the original files). Grep does this just fine, but the files are ~18GB and matching between them is ridiculously slow. An example of what I need to do is below.
FileA:
#DLZ38V1_0262:8:1101:1430:2087#ATAGCG/1
NTTTCAGTTAGGGCGTTTGAAAACAGGCACTCCGGCTAGGCTGGTCAAGG
+DLZ38V1_0262:8:1101:1430:2087#ATAGCG/1
BP\cccc^ea^eghffggfhh`bdebgfbffbfae[_ffd_ea[H\_f_c
#DLZ38V1_0262:8:1101:1369:2106#ATAGCG/1
NAGGATTTAAAGCGGCATCTTCGAGATGAAATCAATTTGATGTGATGAGC
+DLZ38V1_0262:8:1101:1369:2106#ATAGCG/1
BP\ccceeggggfiihihhiiiihiiiiiiiiihighiighhiifhhhic
#DLZ38V1_0262:8:2316:21261:100790#ATAGCG/1
TGTTCAAAGCAGGCGTATTGCTCGAATATATTAGCATGGAATAATAGAAT
+DLZ38V1_0262:8:2316:21261:100790#ATAGCG/1
__\^c^ac]ZeaWdPb_e`KbagdefbZb[cebSZIY^cRaacea^[a`c
You can see 3 unique headers starting with #, each followed by 3 additional lines
FileB:
#DLZ38V1_0262:8:1101:1430:2087#ATAGCG/2
GAAATCAATGGATTCCTTGGCCAGCCTAGCCGGAGTGCCTGTTTTCAAAC
+DLZ38V1_0262:8:1101:1430:2087#ATAGCG/2
_[_ceeeefffgfdYdffed]e`gdghfhiiihdgcghigffgfdceffh
#DLZ38V1_0262:8:1101:1369:2106#ATAGCG/2
GCCATTCAGTCCGAATTGAGTACAGTGGGACGATGTTTCAAAGGTCTGGC
+DLZ38V1_0262:8:1101:1369:2106#ATAGCG/2
_aaeeeeegggggiiiiihihiiiihgiigfggiighihhihiighhiii
#DLZ38V1_0262:8:1101:1369:2106#ATAGCG/2
GCCATTCAGTCCGAATTGAGTACAGTGGGACGATGTTTCAAAGGTCTGGC
+DLZ38V1_0262:8:1101:1369:2106#ATAGCG/2
_aaeeeeegggggiiiiihihiiiihgiigfggiighihhihiighhiii
#DLZ38V1_0262:8:1101:1369:2106#ATAGCG/2
GCCATTCAGTCCGAATTGAGTACAGTGGGACGATGTTTCAAAGGTCTGGC
+DLZ38V1_0262:8:1101:1369:2106#ATAGCG/2
_aaeeeeegggggiiiiihihiiiihgiigfggiighihhihiighhiii
There are 4 headers here but only 2 are unique as one of them is repeated 3 times
I need the common headers between the two files without duplicates plus the 3 lines below them. In the same order in each file.
Here's what I have so far:
grep -E '#DLZ38V1.*/' --only-matching FileA | sort -u -o FileA.sorted
grep -E '#DLZ38V1.*/' --only-matching FileB | sort -u -o FileB.sorted
comm -12 FileA.sorted FileB.sorted > combined
combined
#DLZ38V1_0262:8:1101:1369:2106#ATAGCG/
#DLZ38V1_0262:8:1101:1430:2087#ATAGCG/
This is only the common headers between the two files without duplicates. This is what I want.
Now I need to match these headers to the original files and grab the 3 lines below them but only once.
If I use grep I can get what I want for each file
while read -r line; do
grep -A3 -m1 -F "$line" FileA
done < combined > FileA.Final
FileA.Final
#DLZ38V1_0262:8:1101:1369:2106#ATAGCG/1
NAGGATTTAAAGCGGCATCTTCGAGATGAAATCAATTTGATGTGATGAGC
+DLZ38V1_0262:8:1101:1369:2106#ATAGCG/1
BP\ccceeggggfiihihhiiiihiiiiiiiiihighiighhiifhhhic
#DLZ38V1_0262:8:1101:1430:2087#ATAGCG/1
NTTTCAGTTAGGGCGTTTGAAAACAGGCACTCCGGCTAGGCTGGTCAAGG
+DLZ38V1_0262:8:1101:1430:2087#ATAGCG/1
BP\cccc^ea^eghffggfhh`bdebgfbffbfae[_ffd_ea[H\_f_c
The while loop is repeated to generate FileB.Final
#DLZ38V1_0262:8:1101:1369:2106#ATAGCG/2
GCCATTCAGTCCGAATTGAGTACAGTGGGACGATGTTTCAAAGGTCTGGC
+DLZ38V1_0262:8:1101:1369:2106#ATAGCG/2
_aaeeeeegggggiiiiihihiiiihgiigfggiighihhihiighhiii
#DLZ38V1_0262:8:1101:1430:2087#ATAGCG/2
GAAATCAATGGATTCCTTGGCCAGCCTAGCCGGAGTGCCTGTTTTCAAAC
+DLZ38V1_0262:8:1101:1430:2087#ATAGCG/2
This works but FileA and FileB are ~18GB and my combined file is around ~2GB. Does anyone have any suggestions on how I can dramatically speed up the last step?
Depending on how often you need to run this:
you could dump (you'll probably want bulk inserts with the index built afterwards) your data into a Postgres (sqlite?) database, build an index on it, and enjoy the fruits of 40 years of research into efficient implementations of relational databases with practically no investment from you.
you could mimic having a relational database by using the Unix utility join. There wouldn't be much joy, since that doesn't give you an index, yet it is likely to be faster than grep. You might hit physical limitations, though; I have never tried to join two 18G files.
you could write a bit of C code (put your favourite compiled-to-machine-code language here) which converts your strings (four letters only, right?) into binary and builds an index (or more than one) based on it. This could be made lightning fast and with a small memory footprint, as your fifty-character string would take up only two 64-bit words.
Thought I should post the fix I came up with for this. Once I obtained the combined file (above), I used a Perl hash to read it into memory and scan file A. Matches in file A were hashed and used to scan file B. This still takes a lot of memory, but it works very fast: from 20+ days with grep to ~20 minutes.
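For the record, here is roughly the same idea sketched with awk instead of a Perl hash (an illustration, not the poster's actual script; it assumes the 4-line FASTQ record layout and the file names used above):

```shell
# Tiny stand-ins for the real 18GB files, just to exercise the logic:
cat > combined <<'EOF'
#DLZ38V1_0262:8:1101:1369:2106#ATAGCG/
#DLZ38V1_0262:8:1101:1430:2087#ATAGCG/
EOF
cat > FileA <<'EOF'
#DLZ38V1_0262:8:1101:1430:2087#ATAGCG/1
NTTTCAGTTAGGGCGTTTGAAAACAGGCACTCCGGCTAGGCTGGTCAAGG
+DLZ38V1_0262:8:1101:1430:2087#ATAGCG/1
BP\cccc^ea^eghffggfhh`bdebgfbffbfae[_ffd_ea[H\_f_c
#DLZ38V1_0262:8:2316:21261:100790#ATAGCG/1
TGTTCAAAGCAGGCGTATTGCTCGAATATATTAGCATGGAATAATAGAAT
+DLZ38V1_0262:8:2316:21261:100790#ATAGCG/1
__\^c^ac]ZeaWdPb_e`KbagdefbZb[cebSZIY^cRaacea^[a`c
EOF

# Pass 1 loads the common headers into memory; pass 2 keeps the first
# occurrence of each wanted 4-line record (mate suffix /1 stripped).
awk 'NR == FNR    { want[$0]; next }
     FNR % 4 == 1 { h = $0; sub(/[12]$/, "", h)
                    keep = (h in want) && !seen[h]++ }
     keep' combined FileA > FileA.Final

cat FileA.Final
```

Only the deduplicated header list lives in memory, which is what makes this kind of single-pass lookup so much faster than rescanning an 18GB file once per header.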
I'm not sure how to even ask this question so bear with me. I have a list of (mostly) alpha-numerics that are drawing numbers in a giant XML that I'm tweaking a schema for. There appears to be no standard as to how they've been created, so I'm trying to create an XSD regex pattern for them to validate against. Normally, I'd just grind through them but in this case there are hundreds of them. What I want to do is isolate them down to a single instance of each type of drawing number, and then from that, I can create a regex with appropriate OR statements in the XSD.
My environment is Win7, but I've got an Ubuntu VM as well as Cygwin (where I'm currently doing all of this). I don't know if there's a Linux utility that can do this, or if my grep/sed-fu is just weak. I have no idea how to reduce this problem down except by brute force (which I've done for other pieces of this puzzle that weren't as large as this one).
I used this command-line statement to grab the drawing "numbers". It looks for the drawing numbers, sorts them, gives me only the uniques, and then strips away the enclosing tags:
grep "DrawingNumber" uber.xml | sort | uniq | sed -e :a -e 's/<[^>]*>//g;/</N;//ba'
Here is a sample of some of the actual drawing "numbers" (there are hundreds more):
10023C/10024C *<= this is how it's represented in the XML & I can't (easily) change it.
10023C
10043E
10051B
10051D
10058B
10059C
10447B 10447B *<= this is how it's represented in the XML & I can't (easily) change it.
10064A
10079B
10079D
10082B
10095A
10098B
10100B
10102
10109B
10109C
10115
101178
10118F
What I want is a list that would reduce the list of drawing numbers to a single instance of each type. For instance, this group of drawing "numbers":
10023C
10043E
10051B
10051D
10058B
10059C
Would reduce to:
nnnnnx
to represent all instances of 5 digits followed by a single letter for which I can create a pattern like so:
[0-9]{5}[a-zA-Z]{1}
Similarly,
10102
10115
would reduce to:
nnnnn
which would represent all instances of 5 digits with nothing following and be captured with:
[0-9]{5}
and so on. I hope this is enough information to present the problem in a workable form. Like I said, I didn't even know how to frame the question; frequently, by the time I finish writing a question on SO, I realize a solution and don't even submit it, but this one has me stumped.
Update:
Using @nullrevolution's answer, here's what I came up with (this clarifies my comment below, which is largely unreadable).
The command line I eventually used was:
grep "DrawingNumber" uber.xml | sort -d | uniq | sed -e :a -e 's/<[^>]*>//g;/</N;//ba' | sed 's/[A-Za-z]/x/g;s/[0-9]/n/g' | sort -u
On data that looked like this:
<DrawingNumber>10430A</DrawingNumber>
<DrawingNumber>10431</DrawingNumber>
<DrawingNumber>10433</DrawingNumber>
<DrawingNumber>10434</DrawingNumber>
<DrawingNumber>10443A</DrawingNumber>
<DrawingNumber>10444</DrawingNumber>
<DrawingNumber>10446</DrawingNumber>
<DrawingNumber>10446A</DrawingNumber>
<DrawingNumber>10447</DrawingNumber>
<DrawingNumber>10447B 10447B</DrawingNumber>
<DrawingNumber>10447B</DrawingNumber>
<DrawingNumber>10454A</DrawingNumber>
<DrawingNumber>10454B</DrawingNumber>
<DrawingNumber>10455</DrawingNumber>
<DrawingNumber>10457</DrawingNumber>
Which gave me a generified output of (for all my data, not the snippet above):
nnnnn
nnnnnn
nnnnnx
nnnnnx nnnnnx
nnnnnx/nnnnnx
nnxxx
Which is exactly what I needed. Turns out the next two instances of things I need to figure out will benefit from this new method, so who knows how many hours this just saved me?
try stripping away the enclosing tags first, then:
sed 's/[A-Za-z]/x/g;s/[0-9]/n/g' file | sort -u
which will replace all letters with "x" and all digits with "n", then remove all duplicates.
run against your sample input file, the output is:
nnnnnx
if that's not feasible, then could you share a portion of the input file in its original form?
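As a quick sanity check, here is the substitution applied to a few of the drawing numbers posted above (LC_ALL=C keeps the sort byte-wise, so the variants group predictably):

```shell
# Generify a handful of the posted values and deduplicate the shapes:
printf '10023C\n10102\n10447B 10447B\n10023C/10024C\n' |
sed 's/[A-Za-z]/x/g;s/[0-9]/n/g' | LC_ALL=C sort -u
```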