Compatibility issue with old lex-yacc code in new flex-bison - c++

I am on a migration project to move a C++ application from HP-UX to a Red Hat Enterprise Linux 6.4 server. The application includes a parser written with lex/yacc, which works fine on HP-UX. Once we moved the lex specification (.l file) and yacc specification (.y file) to the RHEL 6.4 server, we compiled the code on the new system without much change. But the generated parser is not working: it reports a syntax error every time, with the same input file that is parsed correctly on HP-UX. Going by some reference material on lex/flex incompatibilities, I see the following points in the .l file -
It has redefined input, unput and output methods.
The yylineno variable is initialized, and incremented in the redefined input method whenever a '\n' character is found.
The lexer reads its data from standard input (cin), which looks to be in scanner mode.
How can I find the possible incompatibilities and remedies for this issue? And is there any way to debug the parser other than using gdb?

Both flex and yacc/bison have useful trace features which can aid in debugging grammars.
For flex, simply regenerate the scanner with the -d option, which will cause a trace line to be written to stderr every time a pattern is matched (whether or not it generates a token). I'm not sure how line number tracking by the debugging option will work with your program's explicit yylineno manipulation, but I guess that is just cosmetic. (Personally, unless you have a good reason not to, I'd just let flex track line numbers.)
For bison, you need to both include tracing code in the parser, and enable tracing in the executable. You do the former with the -t command-line option, and the latter by assigning a non-zero value to the global variable yydebug. See the bison manual for more details and options.
These options may or may not work with the HP-UX tools, but it would be worth trying, because that will give you two sets of traces you can compare.

You don't want to debug the generated code, you want to debug the parser (lex/yacc code).
I would first verify the lexer is returning the same stream of tokens on both platforms.
Then reduce the problem. You know the line of input that the syntax error occurs on. Create a stripped down parser that supports parsing the contents of that line, and if you can't figure out what is going on from that, post the reduced code.

Related

What's the effect of ocamlc's -dsource option?

I don't know the effect of ocamlc's -dsource option; the -h option tells me it's undocumented.
I know the use of -dparsetree and -dtypedtree: they can show me the AST.
When I try the -dsource option on a file test.ml, it seems to print back the source code, without the blank lines and the comments, and at the bottom it reports the warnings for the source code.
Is that the effect of -dsource? Thanks!
-dsource pretty-prints the AST using OCaml syntax after desugaring syntax extensions such as camlp4 and ppx.
It's mostly used to debug ppxs. The content is exactly the same as -dparsetree (except in source form, instead of AST).
I just spent a few minutes grepping the OCaml compiler sources, and here is what I found.
The -dsource command-line flag sets the dump_source field to true in the Clflags module.
This setting in turn causes the compiler to do something like this in driver/compile.ml when compiling an implementation (.ml) file.
if !Clflags.dump_source then
  fprintf ppf "%a@." Pprintast.structure ast
In other words, it pretty-prints the code part of the AST in a form that looks like source code.
Things look similar for an interface (.mli) file, except that it prints out the signature rather than the code.
Since OCaml has a rather flexible front-end, I would guess this is helpful to see the final result of any syntactic transformations that have been applied to the code. (But I might be wrong, I'm not an OCaml compiler hacker.)
I suggest you start looking at the code in driver/compile.ml if you want to figure out more.

Tcl scripts non-instrumenting debugger using Tcl Library and/or Tcl internals?

I would like to know whether it is possible to build a Tcl script debugger using the Tcl library API and/or Tcl's internal interfaces (that is, whether they expose sufficient data to do so). I've noticed that existing Tcl debuggers instrument the Tcl scripts and work with this additional layer. My idea was to use Tcl_CreateObjTrace to trace every evaluated command and use it as a point to retrieve the call stack, locals, etc. The problem is that not all information seems to be accessible from the API at the time of evaluation. For example, I would like to know which line is currently being evaluated, but the Interp has such information only for top-level evaluations (iPtr->cmdFramePtr->line is empty for procedure bodies). Has anyone tried such an approach? Does it make sense? Should I perhaps look into the hashed entries in the Interp? Any clues and opinions would be appreciated (ideally for Tcl 8.5).
Your best bet for a non-intrusive debugging system might be to use an execution step trace (called for each command executed during the execution of the command to which the trace is attached) together with info frame to actually get the information. Here's a simple version, attached to source so that you can watch an entire script:
proc traceinfo args {
    puts [dict get [info frame -2] cmd]
}
trace add execution source enterstep traceinfo
source yourscript.tcl
Be prepared for plenty of output. The dictionary from info frame can have all sorts of relevant entries, such as the command's line number and source file; the cmd entry is the unsubstituted source of the command called (if you want the substituted version, see the relevant arguments to the trace callback, traceinfo above).

GDB with multiple files of MySQL source code

I am trying to use gdb with the MySQL source code, which is written in C/C++. In mysql-test/t, I create a custom test case file, say example.test, and then debug it using the following command:
./mysql-test-run --gdb example
Now I want to see the flow of execution as it moves from a function in one file to a function in a different file. I am not sure how the execution changes, so I can't predefine the breakpoints. Is there a way to see the flow through multiple files of source code?
You can use the next command to step through the source line by line. When appropriate, you can use the step command to step "into" the function(s) being called on the current line.
A reasonable method is to do next until you think you have just passed the externally visible behavior you're looking for. Then start over, stopping at the line right before where you saw the behavior last time, and step this time. Repeat as necessary until you find the code you're looking for. If you believe it's encountering some sort of deadlock, it's significantly easier: just interrupt (Ctrl-C) the program when you think it's stuck, and it should stop right in the interesting spot.
In general, walking through the source you'll build up some places you think are interesting. You can note the source file and line number and/or function names as appropriate and set those breakpoints directly in the future to avoid tedious next/next/next business.
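A session following this workflow might look like the sketch below (the breakpoint location on the last line is hypothetical; substitute whatever file and line number you noted while stepping):

```
(gdb) break main                  # stop at a known starting point
(gdb) run
(gdb) next                        # step over the current line
(gdb) step                        # step into the call on the current line
(gdb) backtrace                   # show the call stack, across source files
(gdb) info source                 # show which file the current line belongs to
(gdb) break sql_parse.cc:1234     # later: break directly at an interesting spot
```

backtrace and info source are particularly useful here, since they tell you which file you have crossed into without you having to know the call graph in advance.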

generate C/C++ command line argument parsing code from XML (or similar)

Is there a tool that generates C/C++ source code from XML (or something similar) to create command line argument parsing functionality?
Now a longer explanation of the question:
Until now I have used gengetopt for command line argument parsing. It is a nice tool that generates C source code from its own configuration format (a text file). For instance, the gengetopt configuration line
option "max-threads" m "max number of threads" int default="1" optional
among other things generates a variable
int max_threads_arg;
that I later can use.
But gengetopt doesn't provide me with this functionality:
A way to generate Unix man pages from the gengetopt configuration format
A way to generate DocBook or HTML documentation from the gengetopt configuration format
A way to reuse C/C++ source code and to reuse gengetopt configuration lines when I have multiple programs that share some common command line options
Of course gengetopt can provide me with a documentation text by running
command --help
but I am searching for marked up documentation (e.g. HTML, DocBook, Unix man pages).
Do you know of any C/C++ command line argument tool/library with a liberal open source license that would suit my needs?
I guess that such a tool would use XML to specify the command line arguments. That would make it easy to generate documentation in different formats (e.g. man pages). The XML file should only be needed at build time to generate the C/C++ source code.
I know it is possible to use some other command line argument parsing library to read an XML configuration file at runtime, but I am looking for a tool that generates C/C++ source code from XML (or something similar) at build time.
Update 1
I would like to do as much of the computation as possible at compile time and as little as possible at run time. So I would like to avoid libraries that give you a map of the command line options, such as boost::program_options::variables_map (tutorial).
In other words, I prefer args_info.iterations_arg to vm["iterations"].as<int>()
User tsug303 suggested the library TCLAP. It looks quite nice. Its way of dividing options into groups would fit my needs, letting me reuse code when multiple programs share some common options. Although it doesn't generate the source code from an XML-like configuration file format, I almost marked that answer as the accepted answer.
But none of the suggested libraries fulfilled all of my requirements, so I started thinking about writing my own library. A sketch: a new tool that would take a custom XML format as input and generate both C++ code and an XML schema. Some further C++ code would be generated from that XML schema with the tool CodeSynthesis XSD, and the two chunks of C++ code combined into a library. One extra benefit is that we would get an XML schema for the command line options, and a way to serialize all of them into a binary format (the CDR format generated by CodeSynthesis XSD). I will see if I get the time to write such a library. Better, of course, would be to find a library that has already been implemented.
Today I read about user Nore's suggested alternative. It looks promising and I will be eager to try it out when the planned C++ code generation has been implemented. The suggestion from Nore looks to be the closest thing to what I have been looking for.
Maybe this TCLAP library would fit your needs?
May I suggest you look at this project. It is something I am currently working on: an XSD schema to describe command line arguments in XML. I made XSLT transformations to create bash and Python code, a XUL frontend interface, and HTML documentation.
Unfortunately, I do not generate C/C++ code yet (it is planned).
Edit: a first working version of the C parser is now available. Hope it helps
I will add yet another project, called protoargs. It generates C++ argument parsing code from a protobuf .proto file, using cxxopts.
Unfortunately it does not satisfy all of the author's needs: no documentation is generated, and there is no compile-time computation. However, someone may find it useful.
UPD: As mentioned in the comments, I should point out that this is my own project.

Using MSBuild in VC++ 2010 to do custom preprocessing on files

I am trying to insert a custom preprocessor into the VC++ 2010 build pipeline, to run after the regular preprocessor has finished. So far I have figured out that the way to do this is via MSBuild.
To this point I wasn't able to find out much more, so my questions are:
Is this possible at all?
If so, what do I need to look at, to get going.
If you are talking about the C/C++ preprocessor, then you are probably out of luck. AFAIK, the preprocessor is built into the compiler itself. You can get the compiler to output a preprocessed file, and then you MIGHT be able to send that through the compiler a second time to get the final output.
This may not work anyway, because the output produced, at least by previous versions of cl.exe, doesn't seem to be 100% correct (whitespace gets mangled slightly, which can cause errors).
If you want to take this path, what you'd need to do is have an MSBuild 'Target' that runs before the 'ClCompile' target.
This new target would have to run the 'cl.exe' program with all the settings that 'ClCompile' usually sends it with as well as the '/P' option which will "preprocess to a file".
Then you need to run your tool over the processed file and then finally feed these new files into 'ClCompile'.
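A very rough sketch of such a target follows. The target name and the custom tool (mytool.exe) are hypothetical; /P is cl.exe's real "preprocess to a file" switch, but in practice you would also have to forward all the settings ClCompile normally passes, which this sketch omits.

```xml
<!-- Hypothetical target that runs before ClCompile: preprocess each
     source with /P, then run a custom tool over the resulting .i file. -->
<Target Name="CustomPreprocess" BeforeTargets="ClCompile">
  <Exec Command="cl.exe /P /Fi&quot;%(ClCompile.Filename).i&quot; &quot;%(ClCompile.Identity)&quot;" />
  <Exec Command="mytool.exe &quot;%(ClCompile.Filename).i&quot;" />
</Target>
```

The %(ClCompile...) item metadata makes MSBuild batch the Exec tasks once per source file; the remaining (unsolved) step is pointing ClCompile at the rewritten outputs instead of the originals.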
If you need any more info, just reply in the comments and I'll try to add some when I get the time (this question is rather old, so I'm not sure whether it's worthwhile investing more time in this answer).