Modern screens have large resolutions, fitting two or three columns of 80-column code easily. C++ basically requires that you separate your code into .hpp and .cpp files.
So, to utilize this space, why not automatically open the .cpp file in a second column when you open a .hpp file (and vice versa)? This obviously wouldn't work for extreme cases, although for a lot of projects there is a direct correspondence between the filenames that would be easy to determine. To me, this seems like a very reasonable use of this space, and it's hard to imagine it hasn't been done.
Is there an IDE that does this? A plugin? Or, why do you think it can't be done?
If you're in Visual Studio, there's a plugin (Visual Assist X) - already very nice to have for C++ projects - that has a similar feature. It's not completely automatic, but all you have to do is press Alt+O and it will open the other file in the set. That is, if you're in a .hpp file, pressing the keys will open the .cpp, and vice versa.
Their website demonstrates how this works in a video. It also works for things like XAML/Code Behind files, Windows Forms/Code files, etc. (Basically anywhere files operate in pairs, that key combo switches to the other file in the pair)
Related
I'm developing an application with a lot of small, custom dialogs.
These dialogs are e.g. giving choices, displaying graphs or offer additional interfaces. Mostly they require very little markup-code and have few child elements.
Currently I'm using Embarcadero's RAD Studio XE2 C++Builder, which works with the VCL and generates a .dfm file, a .h file and a .cpp file for every form.
Now I would like to keep an overview of the files produced and merge, e.g., the .dfm files of multiple small dialogs (maybe even the .cpp and .h files, too).
However, I also want to use C++Builder's VCL designer.
Is there a way to merge .dfm files and still have the IDE's designer working as usual?
Or should I just dynamically generate those dialogs during runtime?
Now I would like to keep an overview of the files produced and merge, e.g., the .dfm files of multiple small dialogs (maybe even the .cpp and .h files, too). However, I also want to use C++Builder's VCL designer.
Is there a way to merge .dfm files and still have the IDE's designer working as usual?
It is possible (but not recommended) to move design-time generated event handler implementations from one .cpp file to another .cpp file (don't move their declarations in .h files, though). So it is conceivable to have 1 .cpp file with all your event handler implementations, and the app will work fine. I do the opposite in one of my projects - I have a TForm with a LOT of event handlers on it, so I move them into separate .cpp files grouped by functionality (yes, I should use TFrame to manage that, but I am not at liberty to change that at this stage of development).
There is a side effect, though - if you try to double-click on an assigned event in the Object Inspector, it won't be able to find the handler's implementation code if you move it.
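A minimal sketch of that split, with hypothetical names - the declaration stays in the form's header, only the implementation moves:

// MainForm.h - the declaration stays where the IDE generated it:
//   __published:
//       void __fastcall SaveButtonClick(TObject *Sender);

// SaveHandlers.cpp - implementation moved out of MainForm.cpp
#include <vcl.h>
#pragma hdrstop
#include "MainForm.h"

void __fastcall TMainForm::SaveButtonClick(TObject *Sender)
{
    // handler body, grouped with related functionality
}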
However, regarding DFMs, every TForm, TFrame, and TDataModule class that is created at design-time must have its own individual DFM. The IDE and the DFM streaming system both expect that. DFM resources in the final executable are identified by class name, and the DFM streaming system reads an entire DFM resource from start to end when loading a DFM into a single root object instance. Besides, the DFM data format does not support multiple DFMs in a single resource stream.
So no, you cannot merge multiple DFMs together.
Or should I just dynamically generate those dialogs during runtime?
Yes. Or just live with letting the dialogs use individual DFM resources. If your dialogs really are as small in content as you say, the overhead to your executable should be minimal.
You can use "legacy" TNotebook component ("Win3.1" page in RAD2007) to simulate many small dialogs in one file; it works like page control without tab buttons. Create required number of pages in the component and activate needed page in constructor of the form.
I have been writing code for a little while now, and I recently found out how to create classes in different files and include them in main, along with more .cpp files containing the definitions of those classes. I was wondering when this is really needed; my code isn't normally that long. Should I use this from the beginning, when my code is only a few hundred lines or less, or are multiple files only used with a lot more code? In cases like this, with such short code, I could probably find it easier to just stick to the main .cpp. Any thoughts on this?
Bad habits are easy to learn and hard to unlearn. Why not do it the right way from the beginning?
For the sake of organization, I advise creating a new set of files - a .h and a .cpp - for each new piece of functionality you want to add. That doesn't mean creating a new file for every class you want to have. As an example, you can define a Sound class and a SoundManager class in the same header if you want. It's easier to find and edit a faulty piece of code when it isn't all jumbled up with other code in a catch-all file.
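A minimal sketch of that grouping (Sound and SoundManager are just the hypothetical names from above):

// Sound.h - one header for one piece of functionality, two related classes
#pragma once

class Sound {
    // ... sample data and a play/stop interface ...
};

class SoundManager {
    // ... creates, owns and hands out Sound instances ...
};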
When you get to multi-thousand or even multi-million line projects (which I'm sure you will, sooner or later), you will obviously be in for a world of pain if you shove it all into one file. Like David Schwartz said, it's better to learn it correctly the first time than to have to relearn it correctly the second time.
If you have different parts of your code in different files, then you can open each file in a new editor window and view them side by side, instead of having to jump up and down in the same file.
Yeah, I know that some editors have a "split window" mode, but those don't play nicely with multiple monitors, etc, etc
I am writing a program that produces a formatted file for the user, but producing the formatted file is not all it does.
I want to distribute a single binary to the end user, and when the user runs the program, it will generate the XML file with the appropriate data.
To achieve this, I want to put the file contents into a char array variable that is compiled into the code. When the user runs the program, I will write out that char array to generate the XML file for the user:
char* buffers = "a xml format file contents, \
this represent many block text \
from a file,...";
I have two questions.
Q1. Do you have any other ideas for how to compile my file contents into the binary, i.e., distribute it as one binary file?
Q2. Is this even a good idea, as described above?
What you describe is by far the norm for C/C++. For large amounts of text data, or for arbitrary binary data (or indeed any data you can store in a file - e.g. a zip file), you can write the data to a file and link it into your program directly.
An example may be found on sites like this one
I'd recommend using a separate file to contain the data rather than putting the data into the binary, unless you have your own reasons. I don't know of other portable ways to put strings into a binary file, but your solution seems OK.
However, note that when using \ at the end of a line to build a string over multiple lines, the indentation should be taken care of, because the string continues from the beginning of the next line!
char* buffers = "a xml format file contents, \
this represent many block text \
from a file,...";
Or you can use another form:
const char *buffers =
    "a xml format file contents,"
    " this represent many block text"
    " from a file,...";
// adjacent string literals are joined with no separator,
// so include the spaces (or \n) yourself
Probably my answer provides much redundant information for the topic starter, but here is what I'm aware of:
Embedding in source code: the plain C/C++ solution. It is a bad idea, because each time you want to change your content you will need to:
recompile
relink
This is acceptable only if your content changes very rarely or never, or if build time is not an issue (e.g. if your app is small).
Embedding in the binary: a few slightly more flexible solutions for embedding content in executables exist, but none of them are cross-platform (you've not stated your target platform):
Windows: resource files. With most IDEs it is very simple
Linux: objcopy (see the sketch after this list).
MacOS: Application Bundles. Even simpler than on Windows.
With these you will not need to recompile the C++ file(s), only re-link.
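For the objcopy route, a minimal sketch (the filenames are illustrative; objcopy derives the symbol names from the input filename):

// Build step (shell):
//   objcopy -I binary -O elf64-x86-64 -B i386:x86-64 data.xml data.o
// then link data.o into the executable as usual.
#include <string>

// Symbols exported by the generated object file ("data.xml" -> _binary_data_xml_*)
extern "C" const char _binary_data_xml_start[];
extern "C" const char _binary_data_xml_end[];

std::string loadEmbeddedXml()
{
    return std::string(_binary_data_xml_start, _binary_data_xml_end);
}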
Application virtualization: there are special utilities that wrap all your application's resources into a single executable and run it much as on a virtual machine.
I'm only aware of such utilities for Windows (ThinApp, BoxedApp), but there are probably such things for other OSes too, or even cross-platform ones.
Consider distributing your application in some form of installer: when started, the installer creates all the resources and unpacks the executable. This is similar to having the main executable generate everything itself. It can be a large and complex package or even a simple self-extracting archive.
Of course, the choice depends on what kind of application you are creating, who your target audience is, how you will ship the package to end users, etc. If it is a game targeting children, it's not the same as a Unix console utility for C++ coders =)
It depends. If you are writing a small Unix-style utility with no prospect of internationalization, then it's probably fine. You don't want to bloat the distribution with a file no one would ever touch anyway.
But in general it is bad practice, because eventually someone might want to modify this data, and he or she would have to rebuild the whole thing just to fix a typo.
The decision is really up to you.
If you just want to keep your distribution in one piece, you might also find this thread interesting: Store data in executable
Why not distribute your application with an additional configuration file, i.e. package your application executable and the config file together?
If you do want to make it a single file, try embedding your config file into the executable as a resource.
I see this as more of an OS issue than a C/C++ issue. You can add the text to the resource part of your binary/program. In Windows programs, HTML, graphics and even movie files are often compiled into resources that form part of the final binary.
That is handy for possible future translation into another language, plus you can modify the resource part of the binary without recompiling the code.
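To make the Windows resource approach concrete - a minimal sketch, assuming a resource script line such as IDR_XMLDATA RCDATA "template.xml" (the ID and filename are hypothetical) and omitting error checks:

#include <windows.h>
#include <string>

#define IDR_XMLDATA 101  // would normally live in resource.h

std::string loadXmlResource()
{
    HRSRC res = FindResource(nullptr, MAKEINTRESOURCE(IDR_XMLDATA), RT_RCDATA);
    HGLOBAL handle = LoadResource(nullptr, res);
    const char* data = static_cast<const char*>(LockResource(handle));
    DWORD size = SizeofResource(nullptr, res);
    return std::string(data, size);
}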
I'm working on a static library project for a C++ course I'm taking. The teacher insists that we define only one function per source file, grouping the files/functions belonging to the same class in a subdirectory per class. This results in a structure like:
MyClass
\MyClass.cc (constructor)
\functionForMyClass.cc
\anotherFunctionForMyClass.cc
OtherClass
\OtherClass.cc (constructor)
Whether this is good practice or not is something I'd rather not discuss, since I'm simply obliged to organize my project in this manner.
I'm working in Visual Studio 2008, and somehow got strange link errors when using an identically named function (and thus filename) in two classes. This appears to be caused by the fact that Visual Studio puts all .obj files (one for each source file) in one intermediate directory, overwriting earlier generated object files when compiling identically named source files.
This could be solved by putting the object files in subdirectories based on the relative path of the input file. Visual studio allows one to configure the names of object files it generates and has macros to use in there, but there appears to be no macro for 'relative path of input file'.
So, is there some way to get this to work? If not, is using one project for each class the best work-around?
You are right: by default all object files are put into the same directory, and their filenames are based on the source file name. The only solution I can think of is to change the conflicting file's output file path here:
Project Properties -> C/C++ -> Output Files -> Object File Name
PS. It sounds like the lecturer has a crappy (probably self-written) automatic code verifier that imposes this restriction. To get extra marks, offer to rewrite the parser so it works with a normal/sane/non-weird project layout.
Real answer:
Change
C/C++ => Output Files => Object File Name
to
$(IntDir)/%(RelativeDir)/
Every .obj file is then created in a subfolder, so it's not going to overwrite the previous one at link time.
I can't think of any way to fudge the project settings to get VStudio to automatically split out the intermediate files into separate folders.
You have a few choices:
Build the class name into each file name. Most IDEs display just the file name in the tab view, so if you have several methods in different classes with the same name, it's going to be difficult to tell them apart if the file name does not include the class name along with the method name. Which is really why I think your teacher's advice is madness: I have not seen any programming style guide advocating that approach. Additionally, it goes directly against the way various tools work - if you use Visual Studio to create a class, it creates one .cpp file and one header, and automatically appends each new function to the single .cpp file.
You could create a static library per class. When linking in static libs, the .obj files are all packaged up inside the .lib, so conflicts are no longer a problem.
Switch comp-sci courses to one that's not being taught by a nut job. Seriously, this guy is completely out of touch with industry best practice and is trying to impose his own weird ideas on his students - ideas that will have to be unlearnt the moment they leave the teaching environment.
You can also change output file name per file in its properties. Just make sure you use different names.
Can you use the class name in the filename to disambiguate? I'm thinking that you might have
MyClass
\MyClass.cc (constructor)
\function1_MyClass.cc
\function2_MyClass.cc
That would mean that every file would have a unique-enough name to defeat the problem. Is that an acceptable strategy?
You could probably arrange the properties of the project to put the object files into a folder below the folder of each source file. Once the project has this property, every source file should inherit it. (But if you've done experiments like Igor suggested, you may need to go through the properties and reset them back to the parent.)
Having looked at the help files, I think you should go to Project Properties -> C/C++ -> Output Files -> Object File Name and enter $(InputDir) (no trailing backslash). Every source file should then inherit this property, and your .obj files should be separated.
You may need to do a Clean Solution before you make any changes.
Renaming the object files will work, but it's going to be a pain, and it will slow your compile/link cycle down. I've never figured out why, but it seems to confuse Visual Studio if the object files don't have the default names.
You could prefix the function's file name with the class name, e.g. myclass-ctor.cc, myclass-function1.cc, etc.
You could have one .cc file per class which #includes the individual function files. In this case you'll need to prevent the #included files from being compiled separately (either rename their extension or set Properties -> Exclude From Build to 'Yes').
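A sketch of that layout, using the file names from the question (.inc is an arbitrary extension chosen so the IDE won't compile those files on their own):

// AllMyClass.cc - the single translation unit for this class (hypothetical umbrella file)
#include "MyClass.h"
#include "ctor.inc"                       // renamed from MyClass.cc
#include "functionForMyClass.inc"         // renamed from functionForMyClass.cc
#include "anotherFunctionForMyClass.inc"  // renamed from anotherFunctionForMyClass.cc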
Out of curiosity, where does your teacher want you to put free functions e.g. local helper functions that might normally belong in an anonymous namespace?
If not, is using one project for each class the best work-around?
Not a good idea - apart from the fact that you won't end up with a single static library (without even more jiggery pokery), your link times are likely to increase and it will hide a lot of pertinent info from the optimizer.
On another note: if the course is actually about C++ rather than OO programming, do what you need to pass, but take your teacher's advice with a pinch of salt.
You don't have to put them in different translation units... why not put each function in a .h and include them all in one .cc per class? That will very likely give better output from the compiler.
I'd be asking why the teacher is insisting on this odd structure, too, the reasoning behind it should be explained. I know you didn't ask that of us, so that's all I'll say.
In Visual Studio 2010, I set
Properties -> C/C++ -> Output Files -> Object File Name
to
V:\%(Directory)$(PlatformName)_$(ConfigurationName)_%(Filename).obj
so that the OBJ files end up next to the sources, assuming the project lies on drive V (no idea whether there is a macro for that yet).
By the way: $(InputDir) refers to the solution/project directory and will cause the same problem in another directory.
I have a substantial body of source code (OOFILE) which I'm finally putting up on Sourceforge. I need to decide if I should go with a monolithic include directory or keep the header files with the source tree.
I want to make this decision before pushing to the svn repo on SourceForge. I expect a lot of people who use it after that move will keep a working copy checked out directly from SF so won't want to change their structure.
The full source tree has about 262 files in 25 folders. There are a lot more classes than that suggests because, to conform to 8.3 filenames (yes, it dates back to Win3.1), many classes share one file. As I used to develop with ObjectMaster, that never bothered me, but I will be splitting it up to conform to the more recent trend of minimising the number of classes per file. From a quick skim of the class list, there are about 600 classes.
OOFILE is a cross-platform product expected to be built on Mac, Windows and assorted Unix platforms. As it started life on Mac, with compilers that point to include trees rather than flat include dirs, headers were kept with the source.
Later, mainly to keep some Visual Studio users happy, a build was reorganised with a single include directory. I'm trying to choose between those models.
The entire OOFILE product covers quite a few domains:
database front-end
range of database backends
simple 2D graphing engine for Mac and Windows
simple character-mode report-writer for trivial HTML and text listings
very rich banding report-writer with Mac and Windows Preview and Printing and cross-platform generation of text, RTF, HTML and XML reports
forms integration engine for easy CRUD forms binding to the database, with implementations on PowerPlant and MFC
cross-platform utility classes
file and directory manipulation
strings
arrays
XML and tag generation
Many people only want to use it on a single platform and some of those code areas are pure legacy (eg: PowerPlant UI framework on classic Mac). It therefore seems people would appreciate not having headers from those unwanted areas dumped in their monolithic include directory.
I started thinking about having an include directory split up into a few of the domains above and then realised that was sounding more like the original structure.
In summary, the choices seem to be:
1. Keep the original model, all headers adjacent to source - maximum flexibility at the cost of some complex includes in projects.
2. One include directory with everything inside.
3. Split includes by domain, so there may be about 6 directories for someone using the lot, but a pure database user would probably have a single directory.
From a Unix build aspect, the recommended structure has been 2. My situation is complicated by needing to keep Visual Studio and XCode users happy (sniff, CodeWarrior, how I doth miss thee!).
Edit - the chosen solution:
I went with four subdirectories in include. I started trying to divide them up further by platform but it just got very noisy very quickly.
Personally I would go with 2, or 3 if really pushed.
But whichever you choose, please make it crystal clear in the build instructions how to set up the include paths. Nothing dooms an open source project more than it being really difficult to build - developers want a quick out-of-the-box experience and if it involves faffing around with many undocumented environment variables (or whatever) most will simply go away.