Indentation control while developing a small Python-like language - C++

I'm developing a small Python-like language using flex and byacc (for lexing and parsing) plus C++, but I have a few questions regarding scope control.
Just like Python, it uses whitespace (or tabs) for indentation. On top of that I want to implement multi-level breaking: for instance, if you type "break 2" inside a while loop that is itself inside another while loop, it breaks not only out of the inner loop but out of the outer loop as well (hence the number 2 after break), and so on.
example:
while 1
    while 1
        break 2
        'hello world'!!         # will never reach this. "!!" outputs with a newline
    end
    'hello world again'!!       # also will never reach this. again "!!" used for cout
end
# after break 2 it would jump right here
But since I don't have an "anti-tab" character to check when a scope ends (in C, for example, I would just use the '}' character), I was wondering if this method would be the best:
I would define a global variable, like "int tabIndex", in my yacc file and access it in my lex file using extern. Then every time I find a tab character in my lex file I would increment that variable by 1. While parsing in my yacc file, if I find a "break" keyword I would decrement tabIndex by the amount typed after it, and if I reach EOF after compiling with tabIndex != 0 I would output a compilation error.
Now the problem is: what's the best way to see that the indentation got reduced? Should I read \b (backspace) characters in lex and then decrement the tabIndex variable (when the user doesn't use break)?
Is there another method to achieve this?
Also, just another small question: I want every executable to have its starting point in a function called start(). Should I hardcode this into my yacc file?
Sorry for the long question; any help is greatly appreciated. Also, if someone could provide a yacc file for Python it would be nice as a guideline (I tried looking on Google and had no luck).
Thanks in advance.

I am currently implementing a programming language rather similar to this (including the multilevel break oddly enough). My solution was to have the tokenizer emit indent and dedent tokens based on indentation. Eg:
while 1: # colons help :)
    print('foo')
    break 1
becomes:
["while", "1", ":",
indent,
"print", "(", "'foo'", ")",
"break", "1",
dedent]
It makes the tokenizer's handling of '\n' somewhat complicated though. Also, I wrote the tokenizer and parser from scratch, so I'm not sure whether this is feasible in lex and yacc.
Edit:
Semi-working pseudocode example:
level = 0
levels = []
for c = getc():
    if c == '\n':
        emit('\n')
        n = 0
        while (c = getc()) == ' ':
            n += 1
        if n > level:
            emit(indent)
            push(levels, level)   # remember the enclosing level
            level = n
        while n < level:
            emit(dedent)
            level = pop(levels)
        if level < n:
            error tokenize        # dedented to a level that was never opened
        # fall through: c now holds the first non-space character of the line
    emit(c)   # lazy example
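To make that concrete, here is a rough C++ sketch of the same stack-based scheme (my own illustration, not the answerer's tokenizer; the Tok names and the emit() placeholder are invented). It only shows how the leading spaces of each line are turned into INDENT/DEDENT tokens:

#include <iostream>
#include <stdexcept>
#include <string>
#include <vector>

enum class Tok { Indent, Dedent, Newline, Other };

void emit(Tok t, const std::string& text = "") {
    // Placeholder: a real lexer would hand the token to the parser instead.
    std::cout << static_cast<int>(t) << " " << text << "\n";
}

void lex_line_starts(std::istream& in) {
    std::vector<int> levels{0};            // stack of open indentation widths
    std::string line;
    while (std::getline(in, line)) {
        if (line.find_first_not_of(" \t") == std::string::npos)
            continue;                      // blank lines don't open or close blocks
        int n = 0;
        while (n < static_cast<int>(line.size()) && line[n] == ' ')
            ++n;                           // count leading spaces (tabs not handled here)
        if (n > levels.back()) {           // deeper than before: open a block
            levels.push_back(n);
            emit(Tok::Indent);
        }
        while (n < levels.back()) {        // shallower: close blocks until we match
            levels.pop_back();
            emit(Tok::Dedent);
        }
        if (n != levels.back())
            throw std::runtime_error("inconsistent indentation");
        emit(Tok::Other, line.substr(n));  // rest of the line, lazily, as one token
        emit(Tok::Newline);
    }
    while (levels.size() > 1) {            // close blocks still open at end of input
        levels.pop_back();
        emit(Tok::Dedent);
    }
}

As in the pseudocode, a dedent has to land exactly on a level that is already on the stack; anything in between is an indentation error.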

Very interesting exercise. Can't you use the end keyword to check when the scope ends?
On a different note, I have never seen a language that allows you to break out of several nested loops at once. There may be a good reason for that...

Related

How can you require an undetermined character to be repeated consecutively a certain number of times in Ruby Treetop?

I want to create a rule that requires a non-number, non-letter character to be consecutively repeated three times. The rule would look something like this:
# Note, this code does not do what I want!
grammar ThreeCharacters
  rule threeConsecutiveCharacters
    (![a-zA-Z0-9] .) 3..3
  end
end
Is there any way to require the first character that it detects to be repeated three times?
There was previously a similar question about detecting the number of indentations: PEG for Python style indentation
The solution there was to first initialize the indentation stack:
&{|s| @indents = [-1] }
Then save the indentation for the current line:
&{|s|
  level = s[0].indentation.text_value.length
  @indents << level
  true
}
Whenever a new line begins it peeks at the indentation like this:
!{|s|
  # Peek at the following indentation:
  save = index; i = _nt_indentation; index = save
  # We're closing if the indentation is less or the same as our enclosing block's:
  closing = i.text_value.length <= @indents.last
}
If the indentation is larger it adds the new indentation level to the stack.
I could create something similar for my problem, but this seems like a very tedious way to solve it.
Are there any other ways to create my rule?
Yes, you can do it this way in Treetop. This kind of thing is not generally possible with a PEG because of the way packrat parsing works: it's greedy, but you need to limit its greed using semantic information from earlier in the parse. It's only the addition of semantic predicates (&{...}) in Treetop that makes it possible. So yes, it's tedious. You might consider using Rattler instead, as it has a significant number of features beyond those available in Treetop. I can't advise (as maintainer of Treetop, but not being a user of Rattler), but I am very impressed by its feature set and I think it will handle this case better.
If you proceed with Treetop, bear in mind that every semantic predicate should return a boolean value indicating success or failure. This is not explicit in the initialisation of @indents above.

notepad++ regex custom length word wrap macros

Good day to you. Recently I began to study regular expressions, but the task before me is too complicated.
I think many people need the ability to quickly format texts of almost any type. The problem is not easy and I hope to get a solution to it from professionals.
If you limit wrapping by line length, then only long lines get broken, which is much better: that is, break only those lines that are longer than a certain size, and break them more or less evenly. Correct formatting of the text is quite difficult; there is a lot to take into account and it would take me a long time. As you can see, there are also short lines, and those have to be joined back onto the previous line, but that is the problem: then the formatting has to be applied again, and it is unclear whether this new formatting will create new problems.
All of this is written into a Notepad++ macro, one step after another, and then used.
However, it is necessary to solve the most important problems:
It is important to
I want to note right away: please don't suggest TextFX. My attempts to write a macro with TextFX failed (including attempts with 75 or a number from the clipboard; the text came out as unreadable garbage). I sent a corresponding message to the Notepad++ site a day ago.
Please don't suggest the regular menu command either. The fact is that a Notepad++ macro does not record the command for replacing indentation with spaces (4 by default). (Yes, the macro must also indent the text, but that part is all right.)
Soft word wrap (which only wraps the display, without formatting) is also not worth discussing.
increase line indent 2-4
split lines
decrease line indent 2-4
^(.?)$\s+?^(?=.^\1$) null
2017 (\d+:\d+) 17
subscribe .+$ null
(^.{1,30}$)\R \1
\r\r\n \n
^ spaces\1
I created the macro and solved most of the problems.

Creating a simple parser in (V)C++ (2010) similar to PEG

For a school project, I need to parse a text/source file containing a simplified "fake" programming language in order to build an AST. I've looked at boost::spirit; however, since this is a group project, most members seem reluctant to learn extra libraries, and the lecturer/TA recommended learning to create a simple parser in C++, I thought of going that route. Are there some examples out there or ideas on how to start? I have made a few attempts but nothing really successful yet...
Parsing line by line:
Test each line with a bunch of regexes (one for procedure/function declarations, one for assignments, one for while, etc.).
But I will need to assume there are no multiple statements on one line, e.g. a=b;x=1;
When I reach a container statement (procedures, whiles, etc.), I will increase the indent, so all nested statements go under it.
When I reach a } I will decrease the indent.
Any better ideas or suggestions? Example code I need to parse (very simplified here ...)
procedure Hello {
    a = 1;
    while a {
        b = a + 1 + z;
    }
}
Another idea was to read the whole file into a string and go top-down: match all procedures, then capture everything in { ... }, then start matching statements (ending with ;) or containers like while { ... }. This is similar to how PEG does things? But I would need to read the entire file.
Multipass makes things easier. On a first pass, split things into tokens, like "=", or "abababa", or a quote-delimited string, or a block of whitespace. Don't be destructive (keep the original data), but break things down to simple chunks, and maybe have a little struct or enum that describes what the token is (ie, whitespace, a string literal, an identifier type thing, etc).
So your sample code gets turned into:
identifier(procedure) whitespace( ) identifier(Hello) whitespace( ) operation({) whitespace(\n\t) identifier(a) whitespace( ) operation(=) whitespace( ) number(1) operation(;) whitespace(\n\t) etc.
In those tokens, you might also want to store line number and offset on the line (this will help with error message generation later).
A quick test would be to turn this back into the original text. Another quick test might be to dump out a pretty-printed version in HTML or something (where you color whitespace to have a pink background, identifiers light blue, operations light green, numbers light orange), and see if your tokenizer is making sense.
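As a rough illustration of the token representation described above (the TokenKind names and the tokenize() function are my own invention, and string literals and comments are left out for brevity), a first pass could look like this:

#include <cctype>
#include <string>
#include <vector>

enum class TokenKind { Whitespace, Identifier, Number, Op };

struct Token {
    TokenKind   kind;
    std::string text;    // the original characters, kept verbatim
    int         line;    // for error messages later
    int         column;
};

std::vector<Token> tokenize(const std::string& src) {
    std::vector<Token> out;
    int line = 1, col = 1;
    std::size_t i = 0;
    while (i < src.size()) {
        const int tline = line, tcol = col;
        const std::size_t start = i;
        TokenKind kind;
        const unsigned char c = static_cast<unsigned char>(src[i]);
        if (std::isspace(c)) {
            kind = TokenKind::Whitespace;
            while (i < src.size() && std::isspace(static_cast<unsigned char>(src[i]))) {
                if (src[i] == '\n') { ++line; col = 1; } else ++col;
                ++i;
            }
        } else if (std::isalpha(c) || src[i] == '_') {
            kind = TokenKind::Identifier;   // also catches keywords like "procedure"
            while (i < src.size() &&
                   (std::isalnum(static_cast<unsigned char>(src[i])) || src[i] == '_')) { ++i; ++col; }
        } else if (std::isdigit(c)) {
            kind = TokenKind::Number;
            while (i < src.size() && std::isdigit(static_cast<unsigned char>(src[i]))) { ++i; ++col; }
        } else {
            kind = TokenKind::Op;           // single-character operator/punctuation
            ++i; ++col;
        }
        out.push_back({kind, src.substr(start, i - start), tline, tcol});
    }
    return out;
}

Concatenating the text fields of the tokens should reproduce the input exactly, which is the round-trip test mentioned above.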
Now, your language may be whitespace insensitive. So discard the whitespace if that is the case! (C++ isn't, because you need newlines to learn when // comments end)
(Note: a professional language parser will be as close to one-pass as possible, because it is faster. But you are a student, and your goal should be to get it to work.)
So now you have a stream of such tokens. There are a bunch of approaches at this point. You could pull out some serious parsing chops and build a CFG to parse them. (Do you know what a CFG is? LR(1)? LL(1)?)
An easier method might be to do it a bit more ad-hoc. Look for operator({) and find the matching operator(}) by counting up and down. Look for language keywords (like procedure), which then expects a name (the next token), then a block (a {). An ad-hoc parser for a really simple language may work fine.
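For the "count up and down" brace matching, a sketch over the hypothetical Token vector from the previous snippet might look like this (again just an illustration, not a full parser):

#include <cstddef>
#include <vector>

// Uses the Token/TokenKind definitions from the tokenizer sketch above.
// Given the index of a '{' token, return the index of its matching '}'.
std::size_t find_matching_brace(const std::vector<Token>& toks, std::size_t open) {
    int depth = 0;
    for (std::size_t i = open; i < toks.size(); ++i) {
        if (toks[i].kind == TokenKind::Op && toks[i].text == "{") {
            ++depth;                       // entering a nested block
        } else if (toks[i].kind == TokenKind::Op && toks[i].text == "}") {
            if (--depth == 0)
                return i;                  // found the brace that closes 'open'
        }
    }
    return toks.size();                    // unbalanced braces: report an error
}

From there, "procedure" followed by an identifier and a '{' gives you a procedure node whose body is everything up to the matching '}', and the same trick recurses for while blocks.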
I've done exactly this for a ridiculously simple language, where the parser consisted of a really simple PDA. It might work for you guys. Or it might not.
Since you mentioned PEG, I'd like to throw in my open source project: https://github.com/leblancmeneses/NPEG/tree/master/Languages/npeg_c++
Here is a visual tool that can export C++ version: http://www.robusthaven.com/blog/parsing-expression-grammar/npeg-language-workbench
Documentation for rule grammar: http://www.robusthaven.com/blog/parsing-expression-grammar/npeg-dsl-documentation
If I were writing my own language I would probably look at the terminals/non-terminals found in System.Linq.Expressions, as these would be a great start for your grammar rules.
http://msdn.microsoft.com/en-us/library/system.linq.expressions.aspx
System.Linq.Expressions.Expression
System.Linq.Expressions.BinaryExpression
System.Linq.Expressions.BlockExpression
System.Linq.Expressions.ConditionalExpression
System.Linq.Expressions.ConstantExpression
System.Linq.Expressions.DebugInfoExpression
System.Linq.Expressions.DefaultExpression
System.Linq.Expressions.DynamicExpression
System.Linq.Expressions.GotoExpression
System.Linq.Expressions.IndexExpression
System.Linq.Expressions.InvocationExpression
System.Linq.Expressions.LabelExpression
System.Linq.Expressions.LambdaExpression
System.Linq.Expressions.ListInitExpression
System.Linq.Expressions.LoopExpression
System.Linq.Expressions.MemberExpression
System.Linq.Expressions.MemberInitExpression
System.Linq.Expressions.MethodCallExpression
System.Linq.Expressions.NewArrayExpression
System.Linq.Expressions.NewExpression
System.Linq.Expressions.ParameterExpression
System.Linq.Expressions.RuntimeVariablesExpression
System.Linq.Expressions.SwitchExpression
System.Linq.Expressions.TryExpression
System.Linq.Expressions.TypeBinaryExpression
System.Linq.Expressions.UnaryExpression

Vim, C++, look up member function

I am using vim 7.x
I am using alternate file.
I have a mapping of *.hpp <--> *.cpp
Suppose I'm in
class Foo {
    void some_me#mber_func(); // # = my cursor
}
in Foo.hpp
Is there a way to tell vim to do the following:
1. Grab the word under # (easy: expand("<cword>"))
2. Look up the class I'm inside of ("Foo") <-- I have no idea how to do this
3. Append 1 & 2 (easy: using ".") --> "Foo::some_member_func"
4. Switch files (easy: :A)
5. Do a / on 4
So basically, I can script all of this together except the "find the name of the enclosing class I'm in" part (especially if classes are nested).
I know about ctags. I know about cscope. I'm choosing to not use them -- I prefer solutions where I understand where they break.
This is relatively easy to do crudely and very difficult to do well. C and C++ are rather complex languages to parse reliably. At the risk of being downvoted, I'd personally recommend parsing the tags file generated by ctags, but if you really want to do it in Vim, there are a few options for the "crude" method.
1. Make some assumptions. The assumptions you make depend on how complicated you want it to be. At the simplest level: assume you're in a class definition and there are no other nearby braces. Based on your coding style, assume that the opening brace of the class definition is on the same line as "class".
let classlineRE = '^class\s\+\(\k\+\)\s\+{.*'
let match = search(classlineRE, 'bnW')
if match != 0
    let classline = getline(match)
    let classname = substitute(classline, classlineRE, '\1', '')
    " Now do something with classname
endif
The assumptions model can obviously be extended/generalised as much as you see fit. You can just search back for the brace and then search back for class and take what's in between (to handle braces on a separate line to "class"). You can filter out comments. If you want to be really clever, you can start looking at what level of braces you're in and make sure it's a top level one (go to the start of the file, add 1 every time you see '{' and subtract one every time you see '}' etc). Your vim script will get very very very complicated.
2. Another one risking the downvote: you could use one of the various C parsers written in Python and use the vim-python interface to make it act like a vim script. To be honest, if you're thinking of doing this, I'd stick with ctags/cscope.
3. Use rainbow.vim. This does highlighting based on depth of indentation, so you could be a little clever and search back (using search('{', 'bW') or similar) for opening braces, then interrogate the syntax highlighting of those braces (using synIDattr(synID(line("."), col("."),1), "name")) and if it's hlLevel0, you know it's a top-level brace. You can then search back for class and parse as per item 1.
I hope that all of the above gives you some food for thought...

Replace C style comments by C++ style comments

How can I automatically replace all C style comments (/* comment */) by C++ style comments (// comment)?
This has to be done automatically in several files. Any solution is okay, as long as it works.
This tool does the job:
https://github.com/cenit/jburkardt/tree/master/recomment
RECOMMENT is a C++ program which converts C style comments to C++ style comments.
It also handles all the non-trivial cases mentioned by other people:
This code incorporates suggestions and coding provided on 28 April 2005 by Steven Martin of JDS Uniphase, Melbourne Florida. These suggestions allow the program to ignore the internal contents of strings (which might otherwise seem to begin or end comments), to handle lines of code with trailing comments, and to handle comments with trailing bits of code.
This is not a trivial problem.
int * /* foo
/* this is not the beginning of a comment.
int * */ var = NULL;
What do you want to replace that with? Any real substitution sometimes requires splitting lines.
int * // foo
// this is not the beginning of a comment.
// int *
var = NULL;
How do you intend to handle situations like this:
void CreateExportableDataTable(/*[out, retval]*/ IDispatch **ppVal)
{
//blah
}
Note the comment inside the parens... this is a common way of documenting things in generated code, or mentioning default parameter values in the implementation of a class, etc. I'm usually not a fan of such uses of comments, but they are common and need to be considered. I don't think you can convert them to C++ style comments without doing some heavy thinking.
I'm with the people who commented in your question. Why do it? Just leave it.
It wastes time, adds useless commits to version control, and risks screwing something up.
EDIT:
Adding details from the comments from the OP
The fundamental reason of preferring C++-style comment is that you can comment out a block of code which may have comments in it. If that comment is in C-style, this block-comment-out of code is not straight forward. – unknown (yahoo)
that might be a fair/ok thing to want to do, but I have two comments about that:
I know of no one who would advocate changing all existing code - that is a preference for new code. (IMO)
If you feel the need to "comment out code" (another iffy practice) then you can do it as needed - not before
It also appears that you want to use the c-style comments to block out a section of code? Or are you going to use the // to block out many lines?
One alternative is a preprocessor #ifdef for that situation. I cringe at that but it is just as bad as commenting out lines/blocks. Neither should be left in the production code.
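For illustration only (compute_something() here is a made-up stand-in for real code), this is what the preprocessor alternative looks like; unlike wrapping the region in /* ... */, an #if 0 block can safely contain C-style comments:

// compute_something() is a hypothetical placeholder for real code.
int compute_something() { return 42; }

int main() {
#if 0   /* disabled block: the nested C-style comment here causes no trouble, */
        /* whereas a surrounding slash-star comment would end at this point.  */
    int unused = compute_something();
#endif
    return 0;
}

That said, as noted above, neither commented-out code nor #if 0 blocks should be left in production code.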
I recently converted all C-style comments to C++-style for all files in our repository. Since I could not find a tool that would do it automatically, I wrote my own: c-comments-to-cpp
It is not fool-proof, but way better than anything else I've tried (including RECOMMENT). Among other things, it supports converting Doxygen style comments, for instance:
/**
 * @brief My foo struct.
 */
struct foo {
    int bar; /*!< This is a member.
                  It also has a meaning. */
};
Gets converted to:
/// @brief My foo struct.
struct foo {
    int bar; ///< This is a member.
             ///< It also has a meaning.
};
Here's a Python script that will (mostly) do the job. It handles most edge cases, but it does not handle comment characters inside of strings, although that should be easy to fix.
#!/usr/bin/python
import sys

out = ''
in_comment = False
f = open(sys.argv[1], 'r+')
for line in f:
    if in_comment:
        end = line.find('*/')
        if end != -1:
            # Closing line of a multi-line comment: keep the comment text,
            # then put whatever followed '*/' on its own (aligned) line.
            out += '//' + line[:end] + '\n'
            out += ' ' * (end + 2) + line[end+2:]
            in_comment = False
        else:
            out += '//' + line
    else:
        start = line.find('/*')
        cpp_start = line.find('//')
        if start != -1 and (cpp_start == -1 or cpp_start > start):
            end = line.find('*/', start + 2)
            if end != -1:
                # The comment opens and closes on the same line.
                out += line[:start] + '//' + line[start+2:end].rstrip() + '\n'
                rest = line[end+2:]
                if rest.strip():
                    out += rest  # any trailing code moves to its own line
            else:
                out += line[:start] + '//' + line[start+2:]
                in_comment = True
        else:
            out += line
f.seek(0)
f.write(out)
f.truncate()
Why don't you write a C app to parse its own source files? You could find the /* comment */ sections with a relatively easy regex query. You could then replace the newline characters with a newline character + "//".
Anyway, just a thought. Good luck with that.
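As a sketch of that idea (my own, and deliberately naive: it ignores comment markers inside string literals and the tricky nesting shown earlier), a small C++ program using std::regex could do the newline rewriting like this:

#include <fstream>
#include <iostream>
#include <regex>
#include <sstream>
#include <string>

int main(int argc, char* argv[]) {
    if (argc < 2) { std::cerr << "usage: recomment <file>\n"; return 1; }
    std::ifstream in(argv[1]);
    std::stringstream buffer;
    buffer << in.rdbuf();
    const std::string src = buffer.str();

    // Non-greedy match of /* ... */ blocks, including ones that span newlines.
    const std::regex block(R"(/\*([\s\S]*?)\*/)");
    std::string out;
    auto last = src.cbegin();
    for (std::sregex_iterator it(src.cbegin(), src.cend(), block), end; it != end; ++it) {
        out.append(last, src.cbegin() + it->position());      // code before the comment
        std::string body = (*it)[1].str();
        // Every newline inside the comment gets a '//' after it.
        body = std::regex_replace(body, std::regex("\n"), "\n//");
        out += "//" + body;
        last = src.cbegin() + it->position() + it->length();
    }
    out.append(last, src.cend());                              // code after the last comment
    std::cout << out;                                          // redirect the output to save it
    return 0;
}

Note that, as pointed out just below, this falls over on comment characters inside strings, so it is only a starting point.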
If you write an application/script to process the C source files, here are some things to be careful of:
comment characters within strings
comment characters in the middle of a line (you might not want to split the code line)
You might be better off trying to find an application that understands how to actually parse the code as code.
There are a few suggestions that you might like to try out:
a) Write your own code (C, Python, or any language you like) to replace the comments. Something along the lines of what regex said, or this naive solution, 'might' work:
[Barring cases like the one rmeador, Darron posted]
for line in file:
    if line starts with '/*':
        buf = '//' + all characters in the line except '/*'
        flag = True
    if flag == True:
        if line ends with '*/':
            strip off '*/'
            flag = False
        add '//' + line to buf
b) Find a tool to do it. (I'll look up some and post them if I find any.)
c) Almost all modern IDEs (if you are using one) and text editors have an auto-comment feature. You can then manually open up each file, select the comment lines, decide how to handle the situation, and comment C++-style using an accelerator (say Ctrl + M). Then you can simply 'Find and Replace' all "/*" and "*/", again using your judgment. I have Gedit configured to do this using the 'Code Comment' plugin. I don't remember the way I did it in Vim offhand. I am sure this one can be found easily.
If there are just "several files" is it really necessary to write a program? Opening it up in a text editor might do the trick quicker in practice, unless there's a whole load of comments. emacs has a comment-region command that (unsurprisingly) comments a region, so it'd just be a case of ditching the offending '/*' and '*/'.
Very old question, I know, but I just achieved this using "pure emacs". In short, the solution looks as follows:
Run M-x query-replace-regexp. When prompted, enter
/\*\(\(.\|^J\)*?\)*\*/
as the regex to search for. The ^J is a newline, which you can enter by pressing ^Q (Ctrl+Q in most keyboards), and then pressing the enter key. Then enter
//\,(replace-regexp-in-string "[\n]\\([ ]*?\\) \\([^ ]\\)" "\n\\1// \\2" \1))
as the replacement expression.
Essentially, the idea is that you use two nested regex searches. The main one simply finds C-style comments (the *? eager repetition comes very handy for this). Then, an elisp expression is used to perform a second replacement inside the comment text only. In this case, I'm looking for newlines followed by space, and replacing the last three space characters by //, which is nice for preserving the comment formatting (works only as long as all comments are indented, though).
Changes to the secondary regex will make this approach work in other cases, for example
//\,(replace-regexp-in-string "[\n]" " " \1))
will just put the whole contents of the original comment into a single C++-style comment.
From the PHP team's coding conventions... some reasoning has to exist if the question was asked. Just answer if you know.
Never use C++ style comments (i.e. // comment). Always use C-style comments instead. PHP is written in C, and is aimed at compiling under any ANSI-C compliant compiler. Even though many compilers accept C++-style comments in C code, you have to ensure that your code would compile with other compilers as well.
The only exception to this rule is code that is Win32-specific, because the Win32 port is MS-Visual C++ specific, and this compiler is known to accept C++-style comments in C code.