Remove unwanted pyweb indentation

@d path @{/path/to/the/directory/@}
@o code @{
filename = '@<path@>the_file'
@}
Running python3 -m pyweb test.w -wHTML results in the following code file:
filename = '            /path/to/the/directory/the_file'
The 12 space characters at the beginning of the string appear because pyweb treats filename = ' as a 12-character indentation.
What should I change in the above pyweb code to get rid of the spaces inside the string?

This looks like it may be a bug. Consider creating an issue here: https://github.com/slott56/py-web-tool
In order to get indentation right in Python, the weaver preserves the previous indentation level.
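For illustration, the indentation preserved here is exactly the text to the left of the reference; a quick check in plain Python (my check, not part of pyweb) confirms the count:

len("filename = '")  # 12 characters before the @<path@> reference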
When creating a bug report, please include the smallest possible document that reproduces the problem, so it can become a unit test case.


Find and remove specific string from a line

I am hoping to receive some feedback on some code I have written in Python 3 - I am attempting to write a program that reads an input file which has page numbers in it. The page numbers are formatted as: "[13]" (this means you are on page 13). My code right now is:
pattern = '\[\d\]'
for line in f:
    if pattern in line:
        re.sub('\[\d\]', ' ')
        re.compile(line)
        output.write(line.replace('\[\d\]', ''))
I have also tried:
for line in f:
    if pattern in line:
        re.replace('\[\d\]', '')
        re.compile(line)
        output_file.write(line)
When I run these programs, a blank file is created, rather than a file containing the original text minus the page numbers. Thank you in advance for any advice!
Your if statement won't work because it's not doing a regex match; it's looking for the literal string \[\d\] in line.
for line in f:
    # determine if the pattern is found anywhere in the line
    if re.search(r'\[\d\]', line):
        subbed_line = re.sub(r'\[\d\]', ' ', line)
        output_file.write(subbed_line)
Additionally, you're using re.compile() incorrectly. Its purpose is to pre-compile your pattern into a regex object. This improves performance if you use the pattern a lot, because you only compile the expression once rather than re-evaluating it on every loop iteration.
pattern = re.compile(r'\[\d\]')
if pattern.search(line):
    # ...
Lastly, you're getting a blank file because re.sub() returns the substituted string rather than modifying line in place, and that return value is never written out. Capture it and pass it to output_file.write(), as above.
You don't write unmodified lines to your output.
Try something like this:
for line in f:
    if pattern in line:
        ...  # remove page number stuff
    output_file.write(line)  # note that it's not part of the if block above
That's why your output file is empty.
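Putting both answers together, a minimal corrected version might look like this (input.txt and output.txt are hypothetical file names, and \d+ is used so that multi-digit page numbers like [13] also match):

import re

pattern = re.compile(r'\[\d+\]')  # matches [13], [7], etc.

with open('input.txt') as f, open('output.txt', 'w') as output_file:
    for line in f:
        # strip any page-number markers, then write every line
        output_file.write(pattern.sub('', line))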

How to select a section with a regular expression in Linux commands

I have a file where each section begins with a word, then an equals sign, then several lines of text, and I would like to select each section. For example:
delete = \account
user\
admin
admin right is good.
add = \
nothing
no out
input output is not good
edit = permission
bob
admin
alice killed bob!!!
I want to select one section, for example:
add = \
nothing
no out
input output is not good
I would like to do it with a regular expression.
Your question is a bit vague but you could try the following ...
/\s*(\w+) = ([^=]*\n)*/m
... subject to the requirement that the last section is terminated with \n.
This works by:
'\s*' matches some optional leading whitespace
'(\w+)' captures the name of the section
' = ' matches the space-equals-space separator
'([^=]*\n)' then captures a line that does not include an equals sign and ends with a newline
'*' repeats that last part as many times as possible
The m flag is then required to set multi-line mode.
See the following to quickly see the groups that are output for each match ...
https://regex101.com/r/oDKSy9/1
(NOTE: The g flag will probably not be required depending on how you use the regex.)
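If it helps to see the matches programmatically, here is a small Python sketch applying the same pattern (sections.txt is a hypothetical file holding the text above):

import re

text = open('sections.txt').read()
# the answer's pattern: a name, ' = ', then lines that contain no '='
for m in re.finditer(r'\s*(\w+) = ([^=]*\n)*', text):
    print('section name:', m.group(1))
    print(m.group(0))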
Solution by OP.
I found this solution:
csplit -k fileName '/.*=/' '{*}'
Thanks @haggisandchips

(workaround solution in last comment) Output to LaTeX of a string from R chunks in knitr with backslash characters

I want to use the LaTeX \underbrace command within an R chunk of a knitr document.
The goal is to replace all occurrences of a given word in a text by the same word with \underbrace and some other string underneath, say its translation in some other language (the one that's named "after" in the code below), and have all of this appear in a LaTeX->PDF document.
This is my example code (I also tried with paste and cat instead of paste0):
<<echo=FALSE,results='asis'>>=
require(tm)
require(stringr)
require(textcat)
require(xtable)
require(koRpus)
require(dplyr)
my.text.before <- "the sea is blue."
string.before <- "blue"
string.underneath <- "colour"
string.after <- paste("$\\underbrace{",string.before,"}_\\textrm{",string.underneath,"}$",collapse="",sep="")
my.text.after <- str_replace_all(my.text.before,string.before,string.after)
print(my.text.after)
@
but I can't get it to eventually generate \underbrace in my LaTeX file. Depending on the tweaks I tried (such as \\\\underbrace), I obtain either underbrace or \\underbrace in my .tex file, but never what's needed.
I managed to do this sort of operation with \textbf instead of \underbrace, but \textbf doesn't need to be in LaTeX math mode; maybe that's the point (maybe not). I tried \$ instead of $ but it doesn't help.

How to parse CSV output requiring multiple matches using a one-liner?

I have a scenario where I have to post-process / filter values taken out of a DB. I'm using perl -ple for the task. All works well until I come across extracted output (CSV) which contains multiple text tags. See sample here. The code works correctly (extracting via regex) if there is just one text tag, but in my DB there are instances with more than one text tag (i.e. rule conditions).
The code is:
echo "COPY (SELECT rule_data FROM custom_rule) TO STDOUT with CSV HEADER" | psql -U qradar -o /tmp/Rules.csv qradar;
perl -ple '
($enabled) = /(?<=enabled="").*?(?="")/g;
($group) = /(?<=group="").*?(?="")/g;
($name) = /(?<=<name>).*?(?=<\/name>)/g;
($text) = /(?<=<text>).*?(?=<\/text>)/g;
$_= "$enabled;$group;$name;$text";
s/<.*?>//g;
' Rules.csv > rules_revised.csv
Just running the code on the sample output, I get the following content in the rules_revised file:
true;Flow Property Tests;DoS: Local Flood (Other);when the flow bias
is any of the following outbound
Actually the line is truncated after "outbound", which in fact should carry information similar to this:
when at least 3 flows are seen with the same Source IP,
Destination IP in 5 minutes and when the IP protocol is one of the
following IPSec, Uncommon and when the source packets is greater than
60000
I have tried to correct this by making the regex greedy (removing the ? in the $text pattern), but then it swallows all the in-between text up to the last text tag, and the final s/<.*?>//g messes up the rest, since the match now includes all the tag (i.e. HTML) elements which I originally intended to exclude before making the regex greedy.
The reason you are getting a truncated result with multiple matches is that you only store the first one.
($text) = /(?<=<text>).*?(?=<\/text>)/g;
This only stores the first match. If you change that scalar to an array, you will capture all matches:
(@text) = /(?<=<text>).*?(?=<\/text>)/g;
When you interpolate the array, it will insert spaces (the value of $") between the elements. If you do not want that, you can change the value of $" to an acceptable delimiter. To be clear, you would change two characters to get the following lines:
(@text) = /(?<=<text>).*?(?=<\/text>)/g;
...
$_ = "$enabled;$group;$name;@text";
If I run your code on your sample with these changes the output looks like this:
false;Flow Property Tests;DoS: Local Flood (Other);when the flow bias is any of the following outbound when at least 3 flows are seen with the same Source IP, Destination IP in 5 minutes when the IP protocol is one of the following IPSec, Uncommon when the source packets is greater than 60000
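For readers more comfortable in Python, the same first-match versus all-matches distinction looks like this (a sketch with a toy row, not the author's Perl):

import re

row = 'x<text>first</text>y<text>second</text>'  # toy stand-in for a CSV row
first = re.search(r'(?<=<text>).*?(?=</text>)', row).group()  # only the first match
every = re.findall(r'(?<=<text>).*?(?=</text>)', row)         # every match, as a list
print(first)             # first
print(';'.join(every))   # first;second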
Have you tried using the s modifier? It makes the dot match newlines:
perl -ple '
($enabled) = /(?<=enabled="").*?(?="")/g;
($group) = /(?<=group="").*?(?="")/g;
($name) = /(?<=<name>).*?(?=<\/name>)/g;
($text) = /(?<=<text>).*?(?=<\/text>)/gs;
# here ___^
$_= "$enabled;$group;$name;$text";
s/<.*?>//g;
' Rules.csv > rules_revised.csv

Remove C and C++ comments using Python?

I'm looking for Python code that removes C and C++ comments from a string. (Assume the string contains an entire C source file.)
I realize that I could .match() substrings with a Regex, but that doesn't solve nesting /*, or having a // inside a /* */.
Ideally, I would prefer a non-naive implementation that properly handles awkward cases.
This handles C++-style comments, C-style comments, strings and simple nesting thereof.
import re

def comment_remover(text):
    def replacer(match):
        s = match.group(0)
        if s.startswith('/'):
            return " "  # note: a space and not an empty string
        else:
            return s
    pattern = re.compile(
        r'//.*?$|/\*.*?\*/|\'(?:\\.|[^\\\'])*\'|"(?:\\.|[^\\"])*"',
        re.DOTALL | re.MULTILINE
    )
    return re.sub(pattern, replacer, text)
Strings need to be included, because comment markers inside them do not start a comment.
Edit: re.sub didn't take a flags argument, so I had to compile the pattern first.
Edit2: Added character literals, since they could contain quotes that would otherwise be recognized as string delimiters.
Edit3: Fixed the case where a legal expression like int/**/x=5; would become intx=5; (which would not compile) by replacing the comment with a space rather than an empty string.
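For illustration, here's the function on a small input (my example, not the answerer's):

src = 'int/**/x = 5; // set x\nchar *s = "a //string";'
print(comment_remover(src))
# each comment is replaced by a single space, and the quoted // survives:
# int x = 5;
# char *s = "a //string";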
C (and C++) comments cannot be nested. Regular expressions work well:
//.*?\n|/\*.*?\*/
This requires the "single line" flag (re.S), because a C comment can span multiple lines.
def stripcomments(text):
    return re.sub(r'//.*?\n|/\*.*?\*/', '', text, flags=re.S)
This code should work.
/EDIT: Notice that my above code actually makes an assumption about line endings! This code won't work on a Mac text file. However, this can be amended relatively easily:
//.*?(\r\n?|\n)|/\*.*?\*/
This regular expression should work on all text files, regardless of their line endings (covers Windows, Unix and Mac line endings).
/EDIT: MizardX and Brian (in the comments) made a valid remark about the handling of strings. I completely forgot about that because the above regex is plucked from a parsing module that has additional handling for strings. MizardX's solution should work very well but it only handles double-quoted strings.
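To see the string-handling caveat concretely (my example, using stripcomments from above):

# the quoted text is not a comment, but the naive regex strips it anyway
print(stripcomments('char *u = "/* not a comment */ and more";'))
# char *u = " and more";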
Don't forget that in C, backslash-newline is eliminated before comments are processed, and trigraphs are processed before that (because ??/ is the trigraph for backslash). I have a C program called SCC (strip C/C++ comments), and here is part of the test code...
" */ /* SCC has been trained to know about strings /* */ */"!
"\"Double quotes embedded in strings, \\\" too\'!"
"And \
newlines in them"
"And escaped double quotes at the end of a string\""
aa '\\
n' OK
aa "\""
aa "\
\n"
This is followed by C++/C99 comment number 1.
// C++/C99 comment with \
continuation character \
on three source lines (this should not be seen with the -C flag)
The C++/C99 comment number 1 has finished.
This is followed by C++/C99 comment number 2.
/\
/\
C++/C99 comment (this should not be seen with the -C flag)
The C++/C99 comment number 2 has finished.
This is followed by regular C comment number 1.
/\
*\
Regular
comment
*\
/
The regular C comment number 1 has finished.
/\
\/ This is not a C++/C99 comment!
This is followed by C++/C99 comment number 3.
/\
\
\
/ But this is a C++/C99 comment!
The C++/C99 comment number 3 has finished.
/\
\* This is not a C or C++ comment!
This is followed by regular C comment number 2.
/\
*/ This is a regular C comment *\
but this is just a routine continuation *\
and that was not the end either - but this is *\
\
/
The regular C comment number 2 has finished.
This is followed by regular C comment number 3.
/\
\
\
\
* C comment */
This does not illustrate trigraphs. Note that you can have multiple backslashes at the end of a line; the line splicing doesn't care how many there are, but the subsequent processing might. Etc. Writing a single regex to handle all these cases will be non-trivial (but that is different from impossible).
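As a minimal sketch of that first phase (backslash-newline splicing only; trigraphs omitted), assuming Python:

import re

def splice_lines(text):
    # C line splicing: delete each backslash immediately followed by a newline
    return re.sub(r'\\\n', '', text)

print(splice_lines('/\\\n/ spliced into a C++ comment'))
# prints: // spliced into a C++ comment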
This posting provides a coded-out version of the improvement to Markus Jarderot's code that was described by atikat, in a comment to Markus Jarderot's posting. (Thanks to both for providing the original code, which saved me a lot of work.)
To describe the improvement somewhat more fully: The improvement keeps the line numbering intact. (This is done by keeping the newline characters intact in the strings by which the C/C++ comments are replaced.)
This version of the C/C++ comment removal function is suitable when you want to generate error messages to your users (e.g. parsing errors) that contain line numbers (i.e. line numbers valid for the original text).
import re

def removeCCppComment(text):
    def blotOutNonNewlines(strIn):
        # Return a string containing only the newline chars contained in strIn
        return "\n" * strIn.count('\n')
    def replacer(match):
        s = match.group(0)
        if s.startswith('/'):
            # Matched string is //...EOL or /*...*/  ==> Blot out all non-newline chars
            return blotOutNonNewlines(s)
        else:
            # Matched string is '...' or "..."  ==> Keep unchanged
            return s
    pattern = re.compile(
        r'//.*?$|/\*.*?\*/|\'(?:\\.|[^\\\'])*\'|"(?:\\.|[^\\"])*"',
        re.DOTALL | re.MULTILINE
    )
    return re.sub(pattern, replacer, text)
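A quick illustration of the line-number preservation (my example):

src = 'int x; /* a\ncomment */ int y;'
print(removeCCppComment(src))
# still two lines, so line numbers stay valid:
# int x;
#  int y;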
I don't know if you're familiar with sed, the UNIX-based (but Windows-available) text processing program, but I've found a sed script here which will remove C/C++ comments from a file. It's very smart; for example, it will ignore // and /* found inside string literals, etc. From within Python, it can be used with the following code:
import subprocess

# source_code is a string with the source code.
process = subprocess.Popen(
    ['sed', '-f', '/path/to/remccoms3.sed'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE
)
output, _ = process.communicate(source_code.encode())
stripped_code = output.decode()
In this program, source_code is the variable holding the C/C++ source code, and eventually stripped_code will hold the code with comments removed. Of course, if you have the files on disk, you could pass real file handles as stdin and stdout instead (input in read mode, output in write mode). remccoms3.sed is the file from the above link, and it should be saved in a readable location on disk. sed is also available on Windows, and comes installed by default on most GNU/Linux distros and Mac OS X.
This will probably be better than a pure Python solution; no need to reinvent the wheel.
The regular expression cases will fall down in some situations, like where a string literal contains a subsequence which matches the comment syntax. You really need a parse tree to deal with this.
You may be able to leverage Py++ to parse the C++ source with GCC.
Py++ does not reinvent the wheel. It uses the GCC C++ compiler to parse C++ source files. To be more precise, the tool chain looks like this: source code is passed to GCC-XML; GCC-XML passes it to the GCC C++ compiler; GCC-XML generates an XML description of a C++ program from GCC's internal representation. Py++ uses the pygccxml package to read the GCC-XML generated file. The bottom line: you can be sure that all your declarations are read correctly.
Or, maybe not. Regardless, this is not a trivial parse.
Re: RE-based solutions - you are unlikely to find an RE that handles all possible 'awkward' cases correctly unless you constrain the input (e.g. no macros). For a bulletproof solution, you really have no choice but to leverage the real grammar.
I'm sorry this is not a Python solution, but you could also use a tool that understands how to remove comments, like your C/C++ preprocessor. Here's how GNU cpp does it:
cpp -fpreprocessed foo.c
There is also a non-Python answer: use the program stripcmt:
StripCmt is a simple utility written in C to remove comments from C, C++, and Java source files. In the grand tradition of Unix text processing programs, it can function either as a FIFO (First In - First Out) filter or accept arguments on the command line.
The following worked for me:
from subprocess import check_output

class Util:
    def strip_comments(self, source_code):
        # cpp -fpreprocessed strips comments without doing full macro expansion
        process = check_output(['cpp', '-fpreprocessed', source_code], shell=False)
        return process

if __name__ == "__main__":
    util = Util()
    print(util.strip_comments("somefile.ext"))
This is a combination of subprocess and the cpp preprocessor. For my project I have a utility class called "Util" where I keep various tools I use/need. Note that, depending on your cpp version, the output may include linemarker lines such as # 1 "somefile.ext"; adding -P to the argument list should suppress them.
I have been using pygments to parse the string and then ignore all tokens that are comments. This works like a charm with any lexer on the pygments list, including JavaScript, SQL, and C-like languages.
from pygments import lex
from pygments.token import Token as ParseToken

def strip_comments(replace_query, lexer):
    generator = lex(replace_query, lexer)
    line = []
    lines = []
    for token in generator:
        token_type = token[0]
        token_text = token[1]
        if token_type in ParseToken.Comment:
            continue
        line.append(token_text)
        if token_text == '\n':
            lines.append(''.join(line))
            line = []
    if line:
        line.append('\n')
        lines.append(''.join(line))
    strip_query = "\n".join(lines)
    return strip_query
Working with C-like languages:
from pygments.lexers.c_like import CLexer
strip_comments("class Bla /*; complicated // stuff */ example; // out", CLexer())
# 'class Bla example; \n'
Working with SQL languages:
from pygments.lexers.sql import SqlLexer
strip_comments("select * /* this is cool */ from table -- more comments", SqlLexer())
# 'select * from table \n'
Working with JavaScript-like languages:
from pygments.lexers.javascript import JavascriptLexer
strip_comments("function cool /* not cool*/(x){ return x++ } /** something **/ // end", JavascriptLexer())
# 'function cool (x){ return x++ } \n'
Since this code only removes the comments, any strange values will remain. So, it is a very robust solution that is able to deal even with invalid inputs.
You don't really need a parse tree to do this perfectly, but you do in effect need the token stream equivalent to what is produced by the compiler's front end. Such a token stream must necessarily take care of all the weirdness, such as line-continued comment starts, comment starts in strings, trigraph normalization, etc. If you have the token stream, deleting the comments is easy. (I have a tool that produces exactly such token streams as, guess what, the front end of a real parser that produces a real parse tree :).
The fact that the tokens are individually recognized by regular expressions suggests that you can, in principle, write a regular expression that will pick out the comment lexemes. The real complexity of the set of regular expressions for the tokenizer (at least the one we wrote) suggests you can't do this in practice; writing them individually was hard enough. If you don't want to do it perfectly, well, then, most of the RE solutions above are just fine.
Now, why you would want to strip comments is beyond me, unless you are building a code obfuscator. In that case, you have to get it perfectly right.
I ran across this problem recently when I took a class where the professor required us to strip javadoc from our source code before submitting it for code review. We had to do this several times, but we couldn't just remove the javadoc permanently because we were also required to generate javadoc HTML files. Here is a little Python script I made to do the trick. Since javadoc starts with /** and ends with */, the script looks for these tokens, but it can be modified to suit your needs. It also handles single-line block comments and cases where a block comment ends but there is still non-commented code on the same line as the block comment ending. I hope this helps!
WARNING: This script modifies the contents of the files passed in and saves them to the original files. It would be wise to have a backup somewhere else.
#!/usr/bin/python
"""
A simple script to remove block comments of the form /** */ from files
Use example: ./strip_comments.py *.java
Author: holdtotherod
Created: 3/6/11
"""
import sys
import fileinput

for file in sys.argv[1:]:
    inBlockComment = False
    for line in fileinput.input(file, inplace=1):
        if "/**" in line:
            inBlockComment = True
        if inBlockComment and "*/" in line:
            inBlockComment = False
            # If the */ isn't last, remove through the */
            if line.find("*/") != len(line) - 3:
                line = line[line.find("*/") + 2:]
            else:
                continue
        if inBlockComment:
            continue
        sys.stdout.write(line)