I am developing a project in which I have to build a compiler (actually a parser) alongside the main project. I am using Qt 5.3 on Windows (but the result is almost the same on Ubuntu 14.04).
I am experienced with flex and bison; this time, however, I first developed a scanner and then a parser. I tested the scanner carefully and made sure it was error-free, but as soon as I added the parser to the project, many errors occurred (for example, undefined reference to yylex(), even though I declared extern int yylex(), and I cannot use the library approach I will mention). Worse, after I removed the parser again, some features that worked before stopped working: for example, I can no longer use #include <QDebug>!
When I include that header, the compiler reports many errors (77 errors) in the library itself! I am confused; I have been working on this for two days with no progress.
I used "Custom Build Steps" and set the path to ${sourceDir}. In addition, I added lex.yy.c and y.tab.c to SOURCES and y.tab.h to HEADERS in the .pro file.
I first learned how to use flex and bison here, and for those errors I read the following:
Undefined Reference to yylex()
How to Integrate Flex and Bison
and some other links...
Qt programs are in C++, not C.
If you have not-too-ancient versions of flex and bison, you will have no problem compiling their generated code using C++, but you need to tell the compiler that they are to be compiled in C++. (Using a combination of C and C++ is certainly possible but it requires some care.)
Using gcc, it should be sufficient to make sure that the names of the files generated by bison and flex have a .cc extension instead of .c, which you can do with the -o command-line option. (That's highly recommended in any case because it lets you choose more meaningful filenames than y.tab.c and lex.yy.c.) That requires some adjustments to your Makefile. (You can also use the %output "filename" directive in bison and %option outfile="filename" in flex, but you'll still need to adjust the dependencies in your Makefile.)
Another, less recommended, option is to use the -x c++ command-line option to gcc.
Without more details about the toolset you are using on Windows, it is not easy to give more advice, particularly for me (since I have 0 experience in Windows toolsets). But the key is to ensure that both files are compiled as C++.
From the fact that you are trying to #include a Qt header, it seems like compiling the scanner and parser as C programs is not practical. But if I am misunderstanding your project and neither the scanner nor the parser require C++, then you can do so; you'll need to declare extern "C" int yylex(); in the C++ translation unit which calls yylex. (Don't put this declaration in the scanner.l file; it won't work if you're compiling with C.)
C++ setup of flex/bison is tricky. Check out this example project:
https://github.com/ezaquarii/bison-flex-cpp-example
It is a complete, working C++ flex and bison "interpreter", encapsulated in a class. In fact, I have used it a few times with Qt 4.8.
Related
Sorry for asking such a basic question. Being a beginner in C++, I'm puzzled about a lot of things:
If we have Visual Studio and other IDEs, as well as Cygwin, as compilers, what kind of role does CMake play in helping a C++ programmer develop programs? Can we just write C++ programs without using CMake?
I understand header files are like prototypes, or declarations, of functions that will be used in C++ files, so I assume all header files should be accompanied by actual library files in the form of C++ files, as that's where the functions know what to do when called. However, this is not usually the case, as there are header-only libraries such as PyBind11.
What is bash? Is it like another Windows command prompt? What's its relationship with C++?
Why do we have function macros, and what are their advantages over normal functions?
Thank you very much!
CMake is a build system.
If we have Visual Studio and other IDEs, as well as Cygwin, as compilers, what kind of role does CMake play in helping a C++ programmer develop programs? Can we just write C++ programs without using CMake?
CMake allows for cross-platform development. It generates project files/Makefiles/... from a CMakeLists.txt file, so the file suitable for each platform is created.
If you use CMake, you can simply program on Windows (CMake will generate Visual Studio solution files), and if somebody tries to compile it on Linux, CMake will generate Makefiles.
Using CMake, you have only one build system and don't have to write build files for every platform you support.
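As a minimal sketch of such a CMakeLists.txt (the project and file names here are made up):

```cmake
cmake_minimum_required(VERSION 3.10)
project(MyApp CXX)

# One platform-neutral description of the build: from this, CMake generates
# Visual Studio solution files on Windows or Makefiles on Linux.
add_executable(myapp src/main.cpp src/helper.cpp)
```

Running `cmake` on this file on each platform produces the native build files; the CMakeLists.txt itself is the only build description you maintain.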
I understand header files are like prototypes of functions that will be used in C++ files, or declarations of them, so I assume all header files should be accompanied by actual library files in the form of C++ files, as that's where the functions know what to do when being called. However,this is not usally the case, as there are header only libraries such as PyBind11.
I don't understand the question, but a header-only library doesn't have to be linked to your program, as all the source code is in the headers.
What is bash? Is it like another Windows command prompt? What's its relationship with C++?
Bash is a shell that executes commands. It's like the command prompt, only sometimes a bit better. It has no relationship to C++.
Why do we have function macros, and what are their advantages over normal functions?
Macros rarely have advantages over functions, because they clutter the code and increase compilation times (the preprocessor inserts a copy of the macro every time it is used, which means more code and slower compilation).
The only advantage I can think of is conditional compilation, in a situation like this:
#ifdef DEBUG
#define LOG(x) printf(x)
#else
#define LOG(x)
#endif
what kind of role does CMake play in helping a C++ programmer develop programs?
Can we just write C++ programs without using CMake?
Yes. It is possible to write C++ programs without any tools; it is even possible to compile C++ without any build system.
However, building complex C++ programs without a build system is difficult and tedious. Build systems help to keep the complexity under control.
But using a platform-specific build system hinders compilation on systems that don't have that particular build system.
CMake solves this dilemma by being able to generate builds for several build systems on various platforms. So CMake is a generator for build systems.
so I assume all header files should be accompanied by actual library files in the form of C++ files
Your assumption is not quite correct. It is a decent rule of thumb that applies often, but not always.
What is bash?
Bash is a shell. It is the most commonly used shell on POSIX systems. On Windows, bash can be used through WSL or Cygwin.
Is it like another Windows command prompt?
The Windows command prompt is another shell. It is used only on Windows.
What's its relationship with C++?
There is no close relationship between bash and C++.
Bash is often used to write shell scripts. Such shell scripts are sometimes used to help with building complex C++ programs.
Why do we have function macros
For text replacement within the source code.
what are its advantages over normal functions?
Functions cannot replace text in source code.
Use cases for macros (functional or otherwise) are rare. They are sometimes overused, which is not a good thing. Generally, functions have many advantages over macros. Among the advantages of functions are type safety and scoped names.
CMake is a kind of build system. Build systems help to organize all the files in a project and handle the proper order of compilation, linking, and the other steps necessary to obtain a binary (executable or library). They let you describe the build process in an abstract way, reducing dependencies between the project and other build tools; e.g. you can easily change the compiler.
CMake is a special flavor of build system because it doesn't handle the build by itself: it generates scripts for other build systems, e.g. Makefiles on Linux. That allows for a multi-platform approach to building applications.
Considering your questions about macros and headers, I would suggest a good C++ book.
Bash is a system shell with its own scripting language, allowing you to run system scripts. It is mostly used on Linux-based systems, but it can also be used on Windows.
I've been struggling back and forth with this for a while now, looking stuff up and asking questions, and I'm still at a crossroads. What I've done so far, and where I'm currently at based on what I've been told, is this: I've added two directories to my repo: src for my .cpp files and include for my .hpp files. In my include directory I have all the .hpp files directly in the folder, whereas in my src directory I have several sub-directories grouping my .cpp files according to the purpose they serve, e.g. \src\ValuationFunctions\MonteCarloFunctions\FunctionHelpers.
I've changed all the #include "header.h" directives to #include "..\include\header.h". This works for my main file, which is directly in the src folder, but I found that it doesn't work for the .cpp files in sub-directories like the example above; it seems I would have to navigate back to the root folder with something like #include "../../..\include\header.h", which obviously can't be the way to go.
How do I make this work, am I even on the right track here? I have uploaded my repo to github (https://github.com/OscarUngsgard/Cpp-Monte-Carlo-Value-at-Risk-Engine) and the goal is for someone to be able to go there, see how the program is structured, clone the repo and just run it (I imagine this is what the goal always is? Or does some responsibility usually fall on the cloner of the repo to make it work?).
I'm using Windows and Visual Studio. Help greatly appreciated.
How to properly specify #include paths in C++ to make your program portable
Please read the C++11 standard n3337 and see this C++ reference website. An included header might not even be any file on your computer (in principle it could be some database).
If you use some recent GCC as your C++ compiler, it does have precompiled headers and link-time optimization facilities. Read also the documentation of its preprocessor. I recommend to enable all warnings and debug info, so use g++ -Wall -Wextra -g.
If you use Microsoft Visual Studio as your compiler, it has documentation and provides a cl command with various optimization facilities. Be sure to enable warnings.
You could consider using some C++ static analyzer, such as Clang's or Frama-C++. This draft report could be relevant and should interest you (at least for references).
The source code editor (either VisualStudioCode or GNU emacs or vim or many others) and the debugger (e.g. GDB) and the version control system (e.g. git) that you are using also have documentation. Please take time to read them, and read How to debug small programs.
Remember that C++ code can be generated, by tools such as ANTLR or SWIG.
A suggestion is to approach your issue in the dual way: ensure that proper include paths are passed to compilation commands (from your build automation tool such as GNU make or ninja or meson). This is what GNU autoconf does.
You could consider using autoconf in your software project.
I've changed the name of all the #include "header.h" to #include "..\include\header.h".
I believe it was a mistake, and you certainly want to use slashes, e.g. #include "../include/header.h" if you care about porting your code later to other operating systems (e.g. Linux, Android, MacOSX, or some other Unixes). On most operating systems, the separator for directories is a / and most C++ compilers accept it.
Studying the source code of either Qt or POCO could be inspirational, and one or both of these open source libraries could be useful to you. They are cross-platform. The source code of GCC and Clang could also be interesting to look into. Both are open source C++ compilers, written in C++ mostly (with some metaprogramming approaches, that is some generated C++ code).
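As a sketch of that dual approach, a hypothetical CMakeLists.txt for the repository layout described in the question could pass the include directory to every compilation command (the target name "engine" is made up; the source paths follow the question's description):

```cmake
cmake_minimum_required(VERSION 3.10)
project(MonteCarloEngine CXX)

add_executable(engine
    src/main.cpp
    src/ValuationFunctions/MonteCarloFunctions/FunctionHelpers.cpp)

# With the include directory on the compiler's search path, every source
# file can write  #include "header.hpp"  regardless of how deep it sits
# in src/, and anyone cloning the repo gets the same paths.
target_include_directories(engine PRIVATE ${CMAKE_SOURCE_DIR}/include)
```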
In program development, it is often necessary to use toolkits developed by others. Generally speaking, in Visual Studio, source files are rarely shipped; most toolkits provide header files that declare the classes, plus link libraries. If you want to use these classes, you need to include the name of the header file in your file, such as #include "cv.h". But this is not enough, because that file is generally not in the current directory. The solution is as follows:
Open "Project-Properties-Configuration Properties-C/C++-General-Additional Include Directory" in turn and add all the paths.
Other IDEs offer similar ways to add include directories, so it is quite normal for whoever clones the project to adjust the directories the project references.
I'm trying to use the HTML parser - Gumbo (written in C) in my C++ Builder XE6 project.
When I compile, I get many errors (E2140: Declaration is not allowed here, etc.), which appear to come from the file char_ref.rl.
I've tried a lot of things to avoid these errors, but I didn't succeed.
Has anyone ever used Gumbo in a C++ Builder project, or at least in a C++ project?
Thank you
Note: extern "C" doesn't mean "compile this code as C". It means that the C++ code inside the block should be compiled so that any external names etc. are published in a way compatible with the C ABI. And such a block shouldn't include any function definitions. You may be using extern "C" incorrectly in your code but it's hard to say without seeing your code.
Anyway, the C compiler part of bcc32.exe doesn't seem to allow mixed statements and declarations, even if you give the flag -An which is supposed to mean "Use C99 keywords and extensions".
You will have to either do a 64-bit build or make a whole bunch of changes to this C source for compatibility with the dinosaur that is bcc32. Or you could build Gumbo as a DLL with a modern compiler (if it supports that option, IDK).
I'm hoping this won't turn to a dead end, but here's what I would like to do. Please let me know if this is even remotely possible, or if you have another good (or better) approach.
I'm working on a legacy code base that's C89 standard (what year is it again?) and I would like to build unit tests for some parts of the software. The C unit test frameworks I've found do not seem nearly as useful and easy as the Catch framework in C++, or really most other C++ unit test frameworks, for obvious reasons. Thus, I would like to build these unit tests in C++.
There exists some homebrew build process that will build whatever application you're working on, all using the C89 standard. The system was built because there are a lot of shared dependencies among projects, and this was made before better alternatives existed. What I would like to do is use the artifacts built from that process to be linked in to a C++ unit test application. I could, alternatively, try to find all the dependencies and build them in to the unit test, but that's 1. redundant to rebuild, 2. cumbersome, and 3. removes the C89 compiledness of them, which I'd like to maintain to ensure the code I'm testing will be exactly as it runs for the end user (compiled in C89, not C++).
Using the same compiler (gcc), is this possible to accomplish, or am I stuck with C unit test frameworks? Some technical explanation either way would be very helpful, since I'm not too familiar with the differences among the different language and standard library artifacts (other than the library itself is different). Also, note that at this point changing the build process is (unfortunately) not feasible (read: in the budget).
Converting comments into an answer
In my view, it should be doable provided that you have suitable headers for all the parts of the system that will be exploited by the C++ unit test code. Those headers should be usable by the C++ compiler, and will need to be wrapped inside extern "C" { and a matching }. This might be done inside the unit test code or in the main (C) headers (based on #if defined(__cplusplus)). If you don't have headers that can be compiled by C++, then it is not going to be possible until you do have appropriate headers.
Can I link my C++ unit test executable directly with the C89 objects? No worries about potential conflicts using different versions of the standard library in the same application?
You will be linking with the C++ compiler. You might need to add -lc to the link line, but probably won't. The C library will be the same as you used anyway — there isn't a separate C89 library and C99 library and C11 library. So, although your code under test is written to C89 and your code testing it is in C++11, the C code will be using the current C library and test code will be using the C++ library. The C++ code will be using facilities in the C++ std namespace, so you shouldn't run into any troubles there. That said, I've not validated this, but it is the sort of thing that compilers get right.
There's a book Test Driven Development for Embedded C which recommends C++ unit test frameworks for testing (embedded) C code.
Code is easily included in a C++ project simply by linking in your C object files (.o, .a). C header files are included in the C++ project wrapped with extern "C", e.g.
extern "C" {
#include "my_c_header.h"
}
You might get weird compile- and run-time issues. Look out for some C legacy code gotchas like #defining bool and replacing allocators with custom memory management schemes. -_-'
I'm working on a utility that needs to be able to compile on both standard C++ compilers and pre-standard compilers. The code can and will be thrown at just about any C++ compiler in existence.
I am looking for a means to robustly and portably determine whether the target compiler supports header files with or without an .h extension. I will also need to detect whether namespaces are supported. Both of these needs may or may not be possible.
A little background: The utility is the Inline::CPP language extension for Perl. The extension automatically includes <iostream>, and tries to make a good guess whether a '.h' is required or not (and of course whether or not the compiler supports namespaces). But it's far from perfect in that regard, and this issue is diminishing the breadth of the utility's usefulness.
So to reiterate the question: How do I portably detect whether a compiler supports standard headers such as <iostream>, or pre-standard headers such as <iostream.h>?
Not in the code, but in the build/configure system. For example in CMake, you could use try_compile and provide it a sample file.
...
try_compile(PRE_STANDARD_HEADERS tmp_builds pre_standard_headers_test.cpp)
if ( ${PRE_STANDARD_HEADERS} )
add_definitions( -D PRE_STANDARD_HEADERS )
endif()
You'd need to create that pre_standard_headers_test.cpp: just a simple compilable program that does #include <iostream.h>, for example.
Then in your normal code just an
#ifdef PRE_STANDARD_HEADERS
would do the trick.
The standard approach for Linux and other Unix-friendly platforms is to use a configure script. The script will generate as its output a Makefile and a config.h header file that turns on or off any compiler features that your code could rely on when available.
For Windows it is kind of expected that you will provide solution and project files for Visual Studio, with a pre-generated config.h header file.