Parse text with MATLAB - regex

I have a text file (output from an old program) that I'd like to clean. Here's an example of the file contents.
*|V|0|0|0|t|0|1|1|4|11|T4|H01||||||||||||||||||||||
P|40|0.01|10|1|1|0|40|1|1|1||1|*||0|0|0||||||||||||||||
*|A1|A1|A7|A16|F|F|F|F|F|F|F|||||||||||||||||||||||
*|||||kV|kV|kV|MW|MVAR|S|S||||||||||||||||||||||||
N|I|01|H01N01|H01N01|132|125.4|138.6|0|0|||||||||||||||||||||
N|I|01|H01N02|H01N02|20|19|21|0|0|||||||||||||||||||||||
N|I|01|H01N03|H01N03|20|19|21|0.42318823|0.204959433|||||||||||||||||||||
|||||||||||||||||
|||||||||||||||||
L|I|H010203|H01N02|H01N03|1.884|360|0.41071|0.207886957||3.19E-08|3.19E-08|||||||||||
L|I|H010304|H01N03|H01N04|1.62|360|0.35316|0.1787563||3.19E-08||3.19E-08||||||||||||
L|I|H010405|H01N04|H01N05|0.532|360|0.11598|0.058702686||3.19E-08||3.19E-08|||||||||||
L|I|H010506|H01N05|H01N06|1.284|360|0.27991|0.14168092||3.19E-08||3.19E-08||||||||||||
S|SH01|SEZIONE01|1|-3|+3|-100|+100|||||||||||||||||||
S|SH02|SEZIONE02|1|-3|+3|-100|+100|||||||||||||||||||
S|SH03|SEZIONE03|1|-3|+3|-100|+100|||||||||||||||||||
||||||||||||asasasas
S|SH04|SEZIONE04|1|-3|+3|-100|+100|||||||||||||||||||
*|comment
S|SH05|SEZIONE05|1|-3|+3|-100|+100|||||||||||||||||||
I'd like it to look like:
*|V|0|0|0|t|0|1|1|4|11|T4|H01||||||||||||||||||||||
*|comment
*|comment
P|40|0.01|10|1|1|0|40|1|1|1||1|*||0|0|0||||||||||||||||
*|A1|A1|A7|A16|F|F|F|F|F|F|F|||||||||||||||||||||||
*|||||kV|kV|kV|MW|MVAR|S|S||||||||||||||||||||||||
N|I|01|H01N01|H01N01|132|125.4|138.6|0|0|||||||||||||||||||||
N|I|01|H01N02|H01N02|20|19|21|0|0|||||||||||||||||||||||
N|I|01|H01N03|H01N03|20|19|21|0.42318823|0.204959433|||||||||||||||||||||
*|comment||||||||||||||||
*|comment|||||||||||||||||
L|I|H010203|H01N02|H01N03|1.884|360|0.41071|0.207886957||3.19E-08||3.19E-08|||||||||||
L|I|H010304|H01N03|H01N04|1.62|360|0.35316|0.1787563||3.19E-08||3.19E-08||||||||||||||
L|I|H010405|H01N04|H01N05|0.532|360|0.11598|0.058702686||3.19E-08||3.19E-08|||||||||||
L|I|H010506|H01N05|H01N06|1.284|360|0.27991|0.14168092||3.19E-08||3.19E-08||||||||||||
*|comment
*|comment
S|SH01|SEZIONE01|1|-3|+3|-100|+100|||||||||||||||||||
S|SH02|SEZIONE02|1|-3|+3|-100|+100|||||||||||||||||||
S|SH03|SEZIONE03|1|-3|+3|-100|+100|||||||||||||||||||
S|SH04|SEZIONE04|1|-3|+3|-100|+100|||||||||||||||||||
S|SH05|SEZIONE05|1|-3|+3|-100|+100|||||||||||||||||||
The data are divided into 'packages' identified by the first letter of each line (P, N, L, S). Each package must be preceded by at least two dedicated lines starting with *|, which are then read as comments. Blank lines between packages with different letters are filled with *|comment; where such separator lines are missing between different letters, they must be added. Blank lines and 'random' junk between lines with the same letter are removed.
Perhaps it is clearer from the example above.
How do I manipulate the text? Thank you in advance for the help.

Use fileread to get your file into MATLAB.
text = fileread('my file to clean.txt');
Split the resulting character string on the newlines. (The newline characters depend on your operating system.)
lines = regexp(text, '\r\n', 'split');
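If you aren't sure which line endings the file uses, a slightly more tolerant pattern (my variation, not part of the original answer) handles both DOS and Unix files:
lines = regexp(text, '\r?\n', 'split');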
It isn't entirely clear exactly how you want the file cleaned, but these things might get you started.
% Replace blank lines with comment string
blanks = cellfun(@isempty, lines);
comment = '*|comment';
lines(blanks) = cellstr(repmat(comment, sum(blanks), 1))
% Prepend comment string to lines that start with a pipe
lines = regexprep(lines, '^\|', '*|comment|')
You'll need to know your way around regular expressions. There's a good guide to them at regular-expressions.info.
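To round things off, here is a rough sketch of the remaining steps, building on the snippets above. It does not implement every rule from your spec (e.g. inserting the two comment lines between packages), and the output file name plus the keep-only-*PNLS filter are my assumptions from your example:
% Keep only lines that start with '*' or one of the package letters P, N, L, S
keep = ~cellfun(@isempty, regexp(lines, '^[*PNLS]', 'once'));
lines = lines(keep);
% Write the cleaned lines back out ('cleaned.txt' is a placeholder name)
fid = fopen('cleaned.txt', 'w');
fprintf(fid, '%s\r\n', lines{:});
fclose(fid);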

Related

Making a text file which will contain my list items and applying regular expression to it

I am supposed to write code that reads a text file containing words with some common linguistic features, applies some regular expressions to all of the words, and writes one file containing the changed words.
For now let's say my text file named abcd.txt has these words
king
sing
ping
cling
booked
looked
cooked
packed
My first question starts here: how should I write these words in my text file to get the results mentioned above? Shall I write them line-separated or comma-separated?
This is the code provided by user palvarez.
import re
with open("new_abcd", "w+") as new, open("abcd") as original:
    for word in original:
        new_word = re.sub("ing$", "xyz", word)
        new.write(new_word)
Can I add something like -
with open("new_abcd", "w+") as new, open("abcd") as original:
    for word in original:
        new_aword = re.sub("ed$", "abcd", word)
        new.write(new_aword)
in the same code file? I want something like -
kabc
sabc
pabc
clabc
bookxyz
lookxyz
cookxyz
packxyz
PS - I don't know whether mentioning this is necessary, but I am supposed to do this for Devanagari, a Unicode-supported script. I didn't use it in my examples because many readers here can't read the script. Additionally, that script uses diacritics: e.g. 'का' has one consonant character 'क' and one vowel sign 'ा', which together make 'का'. My regular expressions need to handle the diacritics.
I think the approach you have, with one word per line, is better since you don't have to trouble yourself with delimiters and stripping.
With a file like this:
king
sing
ping
cling
booked
looked
cooked
packed
And code like this, using re.sub to replace a pattern:
import re
with open("new_abcd.txt", "w") as new, open("abcd.txt") as original:
    for word in original:
        new_word = re.sub("ing$", "xyz", word)
        new_word = re.sub("ed$", "abcd", new_word)
        new.write(new_word)
It creates a resulting file:
kxyz
sxyz
pxyz
clxyz
bookabcd
lookabcd
cookabcd
packabcd
I tried it out with the diacritic you gave us and it seems to work fine:
print(re.sub("ा$", "ing", "का"))
>>> कing
EDIT: added multiple replacements. You can put your replacements in a list and iterate over it, calling re.sub as follows.
import re
# List where the first element is the pattern and the second is the replacement string
replacements = [("ing$", "xyz"), ("ed$", "abcd")]

with open("new_abcd.txt", "w") as new, open("abcd.txt") as original:
    for word in original:
        new_word = word
        for pattern, replacement in replacements:
            new_word = re.sub(pattern, replacement, word)
            if new_word != word:
                break
        new.write(new_word)
This limits it to one modification per word: only the first pattern that modifies the word is applied.
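If you instead want every replacement applied to the same word (chaining them, as the first example does), substitute on the running value (new_word is still initialised to word first) and drop the break:
for pattern, replacement in replacements:
    new_word = re.sub(pattern, replacement, new_word)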
For starters, it is recommended to use the with context manager to open your files; this way you do not need to explicitly close the file once you are done with it.
Another advantage is that you are able to process the file line by line, which is very useful when working with larger sets of data. Whether you write the output one word per line or in CSV format depends on the requirements of your output and how you want to process it further.
As an example, to read from a file and say substitute a substring, you can use re.sub.
import re
with open('abcd.txt', 'r') as f:
    for line in f:
        # do something here
        print(re.sub("ing$", 'ring', line.strip()))
>>
kring
sring
pring
clring
Another nifty trick is to manage both the input and the output using the same context manager:
import re
with open('abcd.txt', 'r') as f, open('out_abcd.txt', 'w') as o:
    for line in f:
        # notice that we add '\n' to write each output to a new line
        o.write(re.sub("ing$", 'ring', line.strip()) + '\n')
This creates an output file with your new contents in a very memory-efficient way.
If you'd like to write to a CSV file or any other specific format, I highly suggest you spend some time understanding Python's input and output functions. If linguistic text is what you are working with, understand the encoding of different languages and study Python's regex operations further.
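Since you mention Devanagari: it also helps to be explicit about the file encoding when opening files, so the diacritics survive the round trip. A minimal sketch, assuming the files are UTF-8 (adjust if yours differ) and reusing the vowel-sign substitution shown above:
import re

with open('abcd.txt', encoding='utf-8') as original, open('new_abcd.txt', 'w', encoding='utf-8') as new:
    for line in original:
        # strip the trailing newline, replace a final vowel sign, re-add the newline
        new.write(re.sub('ा$', 'ing', line.strip()) + '\n')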

Remove lines that are shorter than or equal to 5 characters after the : using Notepad++

The question is similar to: Remove lines that are shorter than 5 characters before the # using Notepad++
But it differs a bit...
I have like that:
abc:123
abc:1234
abc:12345
PLEASE NOTE: abc is not on all the lines, it is just an example.
I want to remove the first line in the previous example because 123, which is after the :, is shorter than 5 characters.
Any help would be appreciated.
Thanks!
Open Notepad++'s Find and Replace dialog, choose regular expression mode, put ^((?!.+:\d{5,}).)*$ in the Search field, leave the Replace field blank, and press Replace All. (This empties the matching lines; the leftover blank lines can then be removed, e.g. via Edit > Line Operations > Remove Empty Lines.)
^((?!.+:\d{5,}).)*$
Without knowing the language there is only so much help I can offer. I'll give you an example of how I would solve this problem in C#.
Start by creating a string for your updated file (without the short lines)
string content = "";
Read a line in from your file.
Then take the substring of the line after the : and check its length.
string after = line.Substring(line.IndexOf(":") + 1);
if (after.Length >= 5)   // keep lines whose part after ':' has at least 5 characters
{
    content += line + Environment.NewLine;
}
At the end, truncate your file and write content to it.
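Putting that together, here is a rough, untested C# sketch of the same idea (file names are placeholders):
using System.IO;
using System.Linq;

class Cleaner
{
    static void Main()
    {
        // Keep only lines whose text after ':' is at least 5 characters long
        var kept = File.ReadLines("input.txt")
                       .Where(l => l.Contains(":") &&
                                   l.Substring(l.IndexOf(":") + 1).Length >= 5);
        File.WriteAllLines("output.txt", kept.ToList());
    }
}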

Find and remove specific string from a line

I am hoping to receive some feedback on some code I have written in Python 3 - I am attempting to write a program that reads an input file which has page numbers in it. The page numbers are formatted as: "[13]" (this means you are on page 13). My code right now is:
pattern = '\[\d\]'
for line in f:
    if pattern in line:
        re.sub('\[\d\]', ' ')
        re.compile(line)
        output.write(line.replace('\[\d\]', ''))
I have also tried:
for line in f:
    if pattern in line:
        re.replace('\[\d\]', '')
        re.compile(line)
        output_file.write(line)
When I run these programs, a blank file is created, rather than a file containing the original text minus the page numbers. Thank you in advance for any advice!
Your if statement won't work because it's not doing a regex match; it's looking for the literal string \[\d\] in line.
for line in f:
    # determine if the pattern is found in the line (re.search scans the
    # whole line; re.match would only test its start)
    if re.search(r'\[\d\]', line):
        subbed_line = re.sub(r'\[\d\]', ' ', line)
        output_file.write(subbed_line)
Additionally, you're using re.compile() incorrectly. Its purpose is to pre-compile your pattern into a pattern object. This improves performance if you use the pattern a lot, because you only evaluate the expression once rather than re-evaluating it on each loop iteration.
pattern = re.compile(r'\[\d\]')
if pattern.search(line):
    # ...
Lastly, a note on writing: file objects have no writeline() method; output_file.write() is correct and writes exactly the string you give it (use writelines() for a list of strings). Your blank file comes from the if test never matching, so nothing is ever written.
You don't write unmodified lines to your output.
Try something like this
if pattern in line:
    # remove page number stuff
output_file.write(line)  # note that it's not part of the if block above
That's why your output file is empty.
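Putting both answers together, a minimal working version might look like this (file names are assumptions; note \d+ rather than \d so that multi-digit pages like [13] are matched):
import re

pattern = re.compile(r'\[\d+\]')
with open('input.txt') as f, open('output.txt', 'w') as output_file:
    for line in f:
        # substitute on every line and write it out, modified or not
        output_file.write(pattern.sub('', line))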

Load file in Pig based on whitespace

I am trying to load a file in Pig in which words may be separated by spaces or tabs (maybe more than one). Is there a way to delimit the file load using a regex for whitespace? Or is there any other way to achieve the below?
Input:
COUNTESS This young gentlewoman had a father,--O, that`
Output:
COUNTESS
This
young
gentlewoman
had
a
father,--O,
that
It would be great to have a comma delimiter also, but that would make it more complex. For now, only the whitespace delimiter should work for me.
Load the file as a single line and then use TOKENIZE. If you have a mixture of tabs and spaces, then after loading the data, add a step to replace the tabs with spaces in the line, and then use TOKENIZE.
A = LOAD 'test2.txt' AS (line:chararray);
B = FOREACH A GENERATE FLATTEN(TOKENIZE(line));
C = FOREACH B GENERATE TOBAG(*);
DUMP C;
I don't really know Pig, but here's some info:
https://pig.apache.org/docs/r0.9.1/func.html#strsplit
STRSPLIT(string, regex, limit)
The regex could be something like [\s,]+. That will split on any block of whitespace and commas. So, for instance, a b,c ,d, e would split into each letter; the order of space and comma does not matter.
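Untested, but wiring that into the load might look like the sketch below. Note that FLATTEN of the tuple returned by STRSPLIT spreads the words across the columns of one row, rather than producing one row per word as the TOKENIZE approach does:
A = LOAD 'test2.txt' AS (line:chararray);
B = FOREACH A GENERATE FLATTEN(STRSPLIT(line, '[\\s,]+'));
DUMP B;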

Reading files which contain unquoted newline characters in text fields using R

I am trying to read a large table into R, but one of the text fields occasionally contains one or more unquoted, un-escaped newline characters (\n), so the read.table() function is not able to easily import this file. The file is pipe-delimited and the text fields are not quoted.
I can read it in if I pass the argument fill=T to read.table() but, of course, rows with newline characters in a text field are corrupted by this.
I have successfully been able to use f <- readChar(fname, nchars=file.info(fname)["size"], TRUE) to read sub-segments of the file, then use gsub() to search and destroy the offending newline characters. (see code below) However, the full file is > 100mb, so gsub() does little more than turn my laptop into a hand-warmer (it's still trying to gsub all the newline characters as I write this).
Anyone have any suggestions for how to efficiently read in a file like this?
It seems like there should be some way of telling R to expect a certain number of delimiters before expecting a newline, but I haven't been able to find any way to do this in the documentation.
Sorry, this seems like it should be easy, but it's really been stumping me, and I have not been able to find anything on Stack Overflow or Google offering a solution.
Here is the code I've tried so far:
attempt 1:
fdat = read.table(file=fname,
                  allowEscapes=F,
                  stringsAsFactors=F,
                  quote="",
                  fill=T,
                  strip.white=T,
                  comment.char="",
                  header=T,
                  sep="|")
attempt 2:
f <- readChar(fname, nchars=file.info(fname)["size"], TRUE)
f2 = gsub(pattern="\n(?!NCT)",replacement=" ",x=f, perl=T)
fdat = read.table(text=f2,
                  allowEscapes=F,
                  stringsAsFactors=F,
                  quote="",
                  fill=F,
                  strip.white=T,
                  comment.char="",
                  header=T,
                  sep="|")
Here are a few lines from the file:
NCT_ID|DOWNLOAD_DATE|DOWNLOAD_DATE_DT|ORG_STUDY_ID|BRIEF_TITLE|OFFICIAL_TITLE|ACRONYM|SOURCE|HAS_DMC|OVERALL_STATUS|START_DATE|COMPLETION_DATE|COMPLETION_DATE_TYPE|PRIMARY_COMPLETION_DATE|PRIMARY_COMPLETION_DATE_TYPE|PHASE|STUDY_TYPE|STUDY_DESIGN|NUMBER_OF_ARMS|NUMBER_OF_GROUPS|ENROLLMENT_TYPE|ENROLLMENT|BIOSPEC_RETENTION|BIOSPEC_DESCR|GENDER|MINIMUM_AGE|MAXIMUM_AGE|HEALTHY_VOLUNTEERS|SAMPLING_METHOD|STUDY_POP|VERIFICATION_DATE|LASTCHANGED_DATE|FIRSTRECEIVED_DATE|IS_SECTION_801|IS_FDA_REGULATED|WHY_STOPPED|HAS_EXPANDED_ACCESS|FIRSTRECEIVED_RESULTS_DATE|URL|TARGET_DURATION|STUDY_RANK
NCT00000105|Information obtained from ClinicalTrials.gov on September 25, 2012|9/25/2012|2002LS032|Vaccination With Tetanus and KLH to Assess Immune Responses.|Vaccination With Tetanus Toxoid and Keyhole Limpet Hemocyanin (KLH) to Assess Antigen-Specific Immune Responses||Masonic Cancer Center, University of Minnesota|Yes|Terminated|July 2002|March 2012|Actual|March 2012|Actual|N/A|Observational|Observational Model: Case Control, Time Perspective: Prospective||3|Actual|112|Samples With DNA|analysis of blood samples before and 4 weeks postvaccination|Both|18 Years|N/A|Accepts Healthy Volunteers|Probability Sample|- Normal volunteers
- Patients with Cancer (breast, melanoma, hematologic)
- Transplant patients (umbilical cord blood transplant, autologous transplant)
- Patients receiving other cancer vaccines|March 2012|March 26, 2012|November 3, 1999|Yes|Yes|Replaced by another study.|No||http://clinicaltrials.gov/show/NCT00000105||6670
NCT00000106|Information obtained from ClinicalTrials.gov on September 25, 2012|9/25/2012|NCRR-M01RR03186-9943|41.8 Degree Centigrade Whole Body Hyperthermia for the Treatment of Rheumatoid Diseases|||National Center for Research Resources (NCRR)||Active, not recruiting||||||N/A|Interventional|Allocation: Randomized, Intervention Model: Parallel Assignment, Primary Purpose: Treatment|||||||Both|18 Years|65 Years|No|||November 2000|June 23, 2005|January 18, 2000||||No||http://clinicaltrials.gov/show/NCT00000106||7998
As can be seen, these sample lines from my problem file include the header (line #1), a problematic line (line #2), and a non-problematic line (line #3). Each non-header line starts with NCT and ends with \n (this was leveraged in gsub's regular expression).
Any suggestions are much appreciated.
It seems there is no way to solve it using read.table. Sadly, it doesn't allow you to change the "record separator" as awk can, for example.
Your attempt 2 failed because the DOS-format newline is \r\n (0x0d 0x0a) and only \n is matched by gsub. Say you have the following file:
NCTa|b|c
NCT1|how
are
you?|well
NCT2|are
you
sure?|yes
Then look at the output of your second command:
f2 <- gsub(pattern="\n(?!NCT)",replacement=" ",x=f, perl=TRUE)
f2
# [1] "NCTa|b|c\r\nNCT1|how\r are\r you?|well\r\nNCT2|are\r you\r sure?|yes\r "
So you have to remove \r too. Just fix it to:
f2 <- gsub(pattern="\r?\n(?!NCT)",replacement=" ",x=f, perl=TRUE)
And it will work.
Regarding performance, you can try to readChar it in smaller chunks in a loop, gsub them and write them back to a file, then read.table it. Just an idea — a rough sketch follows.
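A rough, untested sketch of that idea (chunk size and file names are placeholders). Because a record boundary can straddle two chunks, everything from the last newline onward is held back and prepended to the next chunk before substituting:
con_in  <- file("my_file.txt",       open = "rb")
con_out <- file("my_file_fixed.txt", open = "wb")
leftover <- ""
repeat {
  chunk <- readChar(con_in, nchars = 1e6, useBytes = TRUE)
  if (length(chunk) == 0) break
  chunk <- paste0(leftover, chunk)
  # split at the last newline; carry "\r?\n<partial line>" into the next round
  m <- regexpr("\r?\n[^\n]*$", chunk)
  if (m > 0) {
    leftover <- substring(chunk, m)
    chunk    <- substring(chunk, 1, m - 1)
  } else {
    leftover <- chunk
    chunk    <- ""
  }
  # any newline NOT followed by "NCT" is embedded in a field: flatten it
  writeChar(gsub("\r?\n(?!NCT)", " ", chunk, perl = TRUE), con_out, eos = NULL)
}
writeChar(gsub("\r?\n(?!NCT)", " ", leftover, perl = TRUE), con_out, eos = NULL)
close(con_in)
close(con_out)
fdat <- read.table("my_file_fixed.txt", sep = "|", header = TRUE,
                   quote = "", stringsAsFactors = FALSE, comment.char = "")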