File path requirements in VS debug output window for navigating to the code - c++

There is a neat feature in VS that lets you navigate to the line that produced a piece of debug output by double-clicking on it. It has been covered in multiple questions here, and I was happily using the feature until I had to refactor the project heavily to make it cross-platform. The refactoring included stripping full file paths from the log messages, just to make the lines shorter. However, after this change the double-click feature in the log no longer seems to work. Another thing is that the sources are now not stored in the same directory as the projects (maybe this also has some impact when using only filenames in the log instead of full paths).
So the question is: does VS require a full path in the debug output window messages for double-click navigation to work, or does a bare file name work as well? If the latter, are there any requirements on the layout of the files relative to the solution for this to work?
To illustrate the issue, here are two screenshots of working and non-working output.
This works:
This does not work:
Judging by the length of the relative paths in the working scenario, it seems logical to want to shorten them; however, if there is no way to do that while preserving the debug output navigation feature, I'll have to live with the long, ugly paths.
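For what it's worth, the Output window appears to apply the same "file(line): message" pattern matching it uses for compiler errors when deciding whether a line is double-clickable. A minimal logging-macro sketch along those lines (whether a short relative __FILE__ still resolves depends on how the compiler was invoked relative to the solution, which is an assumption to verify in your setup):

    // Emits the "file(line): message" pattern the Output window recognizes
    // for double-click navigation. __FILE__ expands to whatever path the
    // compiler was given, so relative paths here depend on your build setup.
    #include <windows.h>
    #include <sstream>

    #define DBG_LOG(msg)                                                \
        do {                                                            \
            std::ostringstream oss;                                     \
            oss << __FILE__ << "(" << __LINE__ << "): " << msg << "\n"; \
            OutputDebugStringA(oss.str().c_str());                      \
        } while (0)

    // Usage: DBG_LOG("value = " << 42);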

Related

How to organize an R project structure in interaction with R Markdown

I've been reading everywhere about the need to organize your R projects and also to use R Markdown.
I see an inconsistency I can't resolve.
Suppose I settle on the following standard organization, with Project as the root and also the home of setwd():
Project
  data
  raw-data
  code
  docs
  out
  reports
Now I want to use R Markdown, with my main project file called My_project.Rmd.
I create it at the root project level, but then every knit rendition creates two directories, My_project_cache and My_project_files, on top of every .html file rendered, which conflicts with the above structure.
This is very impractical.
I tried setting the output options to avoid this, per this tip, but it fails for the cache directory, and I did not succeed in setting knit options to bypass it. And no one seems to be bothered by this question, which makes the solution look like a dead end.
The other solution is to put My_project.Rmd directly in reports/, but that feels a little awkward and, on top of that, it breaks the above project structure by imposing ../ paths everywhere.
The third solution is to work with the Rmd format only at the end of a project, but that seems to somewhat defeat the purpose of documenting everything neatly in the first place.
There may be a fourth solution using the R Notebook feature, but it only works until you decide to try to finalize your "final" document, which of course is never really final.
What am I missing here?
For reference, I'm using RStudio on a Mac.

Methods for opening a specific file inside the project WITHOUT knowing what the working directory will be

I've had trouble with this issue across many languages, most recently with C++.
The Issue Exemplified
Let's say we're working with C++ and have the following file structure for a project:
("Project" main folder with three [modules, data, etc] subfolders)
Now say:
Our maincode.cpp is in the Project folder
moduleA.cpp is in the modules folder
data.txt is in the data folder
moduleA.cpp wants to read data.txt
So the way I'd currently do it would be to assume maincode.cpp gets compiled and executed inside the Project folder, and so hardcode the path data/data.txt in moduleA.cpp to do the reading (say I used fstream fs("data/data.txt") to do so).
But what if the code was, for some reason, executed inside etc folder?
Is there a way around this?
The Questions
Is this a valid question? Or am I missing something with the wd (working directory) concept fundamentals?
Are there any methods for working around absolute paths so as to solve this issue in C++?
Are there any universal methods for doing the same with any language?
If there are no reasonable methods, how would you approach this issue?
Please leave a comment if I missed any important details with the problem's illustration!
At some point the program has to make an assumption about where the file(s) are, either by getting the location from user input or by using a relative path with the presumed filename. As already said in the comments, C++ recently got std::filesystem in C++17, which can help you write cross-platform code that interacts with the host's filesystem.
That being said, every program, big or small, has to make certain assumptions at some point. Deleting or moving certain files is problematic for any program that requires them to be at a certain location under a certain name; that is not solvable other than by presenting the user with an error message, etc.
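As a minimal sketch of that advice (C++17, reusing the asker's data/data.txt path), the program can at least detect that the working directory is not what it assumed and say so, instead of failing silently:

    // Probe for the expected file and report a useful error if the current
    // working directory is not where we assumed it would be.
    #include <filesystem>
    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
        namespace fs = std::filesystem;
        const fs::path file = "data/data.txt";  // relative to the working dir
        if (!fs::exists(file)) {
            std::cerr << "cannot find " << fs::absolute(file)
                      << " (working directory: " << fs::current_path() << ")\n";
            return 1;
        }
        std::ifstream in(file);
        std::string line;
        while (std::getline(in, line))
            std::cout << line << '\n';
    }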
As @Hatted Rooster said, it's not generally solvable for some arbitrary file without making some assumptions; however, there are frameworks that allow you to "store" some files in resources embedded in the executable (or otherwise). Those frameworks usually allow you to handle such files in an opaque way, without the need to rely on the current working directory or relative paths.
For example, see the Qt Resource System.
Your program can deduce the path from argv[0] in the call to main, if you know the file is always in a location relative to your executable; otherwise, you can use an absolute path like "C:\myProgram\data\data.txt".
The second approach works in every language.
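A rough sketch of the argv[0] approach (C++17; note that argv[0] is not guaranteed to hold a full path on every platform, so treat this as a best-effort heuristic rather than a guarantee):

    // Resolve data.txt relative to the executable's own directory instead of
    // relying on whatever the current working directory happens to be.
    #include <filesystem>
    #include <fstream>
    #include <iostream>

    int main(int, char* argv[]) {
        namespace fs = std::filesystem;
        const fs::path exe_dir = fs::absolute(fs::path(argv[0])).parent_path();
        const fs::path data = exe_dir / "data" / "data.txt";
        std::ifstream in(data);
        if (!in) {
            std::cerr << "failed to open " << data << '\n';
            return 1;
        }
        std::cout << in.rdbuf();  // dump the file's contents
    }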

C++ writing HTML onto each of two already opened Firefox tabs from within extension

I'm seeking C++ help in writing HTML code to a new tab in Firefox within an extension.
Our C++ code has been partially wrapped in an XPCOM wrapper and embedded within a Firefox extension, thanks to the work of a consultant we have lost contact with; it is still partially implemented by calling out to a standalone executable.
To get our output displayed from the standalone executable, the C++ code writes the output to a file and simply calls system("firefox file.html"), which then comes up with a file:-based URI.
This no longer works in all situations, based on a report from a user running Vista. So it seems to be time to do it right and navigate the DOM, likely integrating the rest of the C++ code into the XPCOM-wrapped part. Perhaps there's a right way to do it from the standalone executable using the DOM model?
The "current working directory" no longer seems to match the directory in which the extension installed the standalone executable, with a "VirtualStore" element appearing in the path.
We also generate parallel output in a different MIME type, VRML to be specific.
Any suggestions or examples for how to properly generate output into a Firefox browser pane under C++ programmatic control would be very much appreciated.
You could call Firefox with a fully specified file:/// URL, not a relative URL (file.html).
Or, if you want to drop the separate executable, you could implement a protocol handler or a simpler about: module (where ios.newChannel would be replaced by your own channel implementation that generates the data).
I'd say keeping the file-generation solution is OK and doesn't seem very bad, so I'd go with (1), perhaps changing the generated file's location to a temporary folder and specifying it fully, both for the executable that generates it and for Firefox.
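A minimal sketch of option (1) in that spirit (C++17; the page contents and the assumption that firefox is on the PATH are illustrative):

    // Write the generated page to a fully specified temporary location, then
    // hand Firefox an absolute file:/// URL so that neither process's working
    // directory (VirtualStore redirection included) matters.
    #include <cstdlib>
    #include <filesystem>
    #include <fstream>
    #include <string>

    int main() {
        namespace fs = std::filesystem;
        const fs::path page = fs::temp_directory_path() / "output.html";
        std::ofstream(page) << "<html><body>generated report</body></html>";
        const std::string cmd =
            "firefox \"file:///" + page.generic_string() + "\"";
        return std::system(cmd.c_str());
    }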

C++ vim IDE. Things you'd need from it

I am going to create an extensible C++ IDE plugin for Vim. Making one that satisfies my own needs is not a problem.
This plugin is going to work with workspaces, projects and their dependencies.
This is for Unix-like systems with GCC as the C++ compiler.
So my question is: what are the most important things you'd need from an IDE? Please take into account that this is Vim, where almost everything (almost!) is possible.
Several questions:
How often do you manage different workspaces with projects inside them, and the relationships between them? What are the most annoying things in this process?
Is it necessary to recreate the "project" from the Makefile?
Thanks.
Reason to create this plugin:
With a bunch of plugins, including self-written ones, we can simulate most of these things. That is OK when we work on one big "infinite" project.
It is good when we already have a makefile or jam file; bad when we have to create our own, mostly by copying and pasting existing ones.
All ctags- and cscope-related things have to know the list of a project's real files, and we create such lists. A function like project#get_list_of_files(), and many similar ones, could form a good project API for cooperating with existing and future plugins.
Cooperating with existing makefiles can help find the list of real project files and the executable name.
With a plugin system inside the plugin, there can be different project templates.
Those are some of the reasons why I will start the job. I'd like to hear yours.
There are multiple problems. Most of them are already solved by independent and generic plugins.
Regarding the definition of what a project is:
Given a set of files in the same directory, each file can be the unique file of a project -- I always have a tests/ directory where I host pet projects, or where I test the behaviour of the compiler. Conversely, the files from a set of directories can be part of one and the same very big project.
In the end, what really defines a project is a (leaf) "makefile" -- and why restrict ourselves to makefiles; what about scons, autotools, ant, (b)jam, aap? And BTW, Sun Makefiles or GNU Makefiles?
Moreover, I don't see any point in having Vim know the exact files in the current project. And even so, the well-known project.vim plugin already does the job. Personally I use a local_vimrc plugin (I'm maintaining one, and I've seen two others on SourceForge). With this plugin, I just have to drop a _vimrc_local.vim file in a directory, and whatever is defined in it (:mappings, :functions, variables, :commands, :settings, ...) will apply to each file under that directory. I work on a big project with a dozen subcomponents; each component lives in its own directory and has its own makefile (not even named Makefile, nor named after the directory).
Regarding C++ code understanding
Every time we want to do something complex (refactorings like rename-function, rename-variable, generate-switch-from-current-variable-which-is-an-enum, ...), we need Vim to have an understanding of C++. Most of the existing plugins rely on ctags. Unfortunately, ctags' comprehension of C++ is quite limited -- I have already written a few advanced things, but I'm often stopped by the poor information provided by ctags. cscope is no better. Eventually, I think we will have to integrate an advanced tool like elsa/pork/ionk/deshydrata/....
NB: That's where, now, I concentrate most of my efforts.
Regarding Doxygen
I don't know how difficult it is to jump to the Doxygen definition associated with the current token. The first difficulty is understanding what the cursor is on (I guess omnicppcomplete has already done a lot of work in this direction). The second difficulty will be understanding how Doxygen generates the page name for each symbol in the code.
Opening Vim at the right line of code from a Doxygen page should be simple with a Greasemonkey plugin.
Regarding the debugger
There is the pyclewn project for those who run Vim under Linux with gdb as the debugger. Unfortunately, it does not support other debuggers like dbx.
Responses to other requirements:
When I run or debug my compiled program, I'd like the option of having a dialog pop up which asks me for the command line parameters. It should remember the last 20 or so parameters I used for the project. I do not want to have to edit the project properties for this.
My BuildToolsWrapper plugin has a g:BTW_run_parameters option (easily overridden with project/local_vimrc solutions). Adding a mapping that asks for the arguments to use is really simple (see :h inputdialog()).
work with source control system
There already exist several plugins addressing this issue. This has nothing to do with C++, and it must not be addressed by a C++ suite.
debugger
source code navigation tools (right now I am using the http://www.vim.org/scripts/script.php?script_id=1638 plugin and ctags)
compile lib/project/one source file from the IDE
navigation by files in the project
work with source control system
easy access to file change history
rename file/variable/method functions
easy access to C++ help
easy changing of project settings (Makefiles, jam, etc.)
fast autocompletion for paths/variables/methods/parameters
smart indentation for new scopes (it would also be good if the developer could set up the indentation rules)
highlighting indentation that breaks the code convention (tabs instead of spaces, spaces after ";", spaces near "(" or ")", etc.)
reformatting a selected block according to the convention
Things I'd like in an IDE that the ones I use don't provide:
When I run or debug my compiled program, I'd like the option of having a dialog pop up which asks me for the command line parameters. It should remember the last 20 or so parameters I used for the project. I do not want to have to edit the project properties for this.
A "Tools" menu that is configurable on a per-project basis
Ability to rejig the keyboard mappings for every possible command.
Ability to produce lists of project configurations in text form
Intelligent floating (not docked) windows for debugger etc. that pop up only when I need them, stay on top and then disappear when no longer needed.
Built-in code metrics analysis so I get a list of the most complex functions in the project and can click on them to jump to the code
Built-in support for Doxygen or similar, so I can click in a Doxygen document and go directly to code. It should also navigate in reverse, from code to Doxygen.
No doubt someone will now say Eclipse can do this or that, but it's too slow and bloated for me.
Adding to Neil's answer:
integration with gdb as in Emacs. I know of clewn, but I don't like that I have to restart Vim to restart the debugger. With clewn, Vim is integrated into the debugger, but not the other way around.
Not sure if you are developing on Windows, but if you are, I suggest you check out Viemu. It is a pretty good Vim extension for Visual Studio. I really like Visual Studio as an IDE (although I still think VC6 is hard to beat), so a Vim extension for VS was perfect for me. Features that I would prefer worked better in a Vim IDE are:
Macro recording is a bit error-prone, especially with indentation. I find I can easily and often record macros in Vim while I am editing code (e.g. taking an enum definition from a header and cranking out a corresponding switch statement -- see the sketch after this list), but I found Viemu a bit flaky in that department.
Vim's code completion picks up words in the current buffer, whereas Viemu hooks into the VS code completion machinery. This means that if I have just created a method name and I want to ctrl ] to auto-complete, Vim will pick it up, but Viemu won't.
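To illustrate the enum-to-switch edit mentioned above, this is the kind of purely mechanical transformation a recorded macro handles well (the names are made up):

    // A made-up enum, as it might appear in a header:
    enum Colour { Red, Green, Blue };

    // ...and the switch skeleton a macro can crank out from those lines:
    const char* to_string(Colour c) {
        switch (c) {
            case Red:   return "Red";
            case Green: return "Green";
            case Blue:  return "Blue";
        }
        return "unknown";
    }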
For me, it just comes down to the necessities:
nice integration with ctags, so you can jump to a definition
intelligent completion, that also give you the function prototype
easy way to switch between code and headers
interactive debugging with breakpoints
maybe folding
extra bonus points for refactoring tools like rename or extract method
I'd say stay away from defining projects - just treat the entire file branch as part of the "project" and let users have a settings file to override that default
99% of the difference in speed I see between IDE and Vim users is in code lookup and navigation. You need to be able to grep your source tree for a phrase (or intelligently look for the right symbol using ctags), show all the hits, and switch to the right file in two or three keystrokes.
All the other crap like repository navigation or interactive debugging is nice, but there are other ways to solve those problems. I'd even say drop the interactive debugging. Just focus on what makes IDEs good editors -- having a "big picture" view of your project instead of a single file.
In fact, are there any plugins for vim that already achieve this?

C++ Header files - put them in one directory or merged in a tree structure?

I have a substantial body of source code (OOFILE) which I'm finally putting up on Sourceforge. I need to decide if I should go with a monolithic include directory or keep the header files with the source tree.
I want to make this decision before pushing to the svn repo on SourceForge. I expect a lot of people who use it after that move will keep a working copy checked out directly from SF, so they won't want to change their structure.
The full source tree has about 262 files in 25 folders. There are a lot more classes than that suggests because, to conform to 8.3 file names (yes, it dates back to Win 3.1), many classes share one file. As I used to develop with ObjectMaster, that never bothered me, but I will be splitting it up to conform to the more recent trend of minimising the number of classes per file. From a quick skim of the class list, there are about 600 classes.
OOFILE is a cross-platform product expected to be built on Mac, Windows and assorted Unix platforms. As it started life on Mac, with compilers that point to include trees rather than flat include dirs, headers were kept with the source.
Later, mainly to keep some Visual Studio users happy, a build was reorganised with a single include directory. I'm trying to choose between those models.
The entire OOFILE product covers quite a few domains:
database front-end
range of database backends
simple 2D graphing engine for Mac and Windows
simple character-mode report-writer for trivial HTML and text listings
very rich banding report-writer with Mac and Windows Preview and Printing and cross-platform generation of text, RTF, HTML and XML reports
forms integration engine for easy CRUD forms binding to the database, with implementations on PowerPlant and MFC
cross-platform utility classes
file and directory manipulation
strings
arrays
XML and tag generation
Many people only want to use it on a single platform, and some of those code areas are pure legacy (e.g. the PowerPlant UI framework on classic Mac). It therefore seems people would appreciate not having headers from those unwanted areas dumped into their monolithic include directory.
I started thinking about having an include directory split up into a few of the domains above and then realised that was sounding more like the original structure.
In summary, the choices seem to be:
1. Keep the original model, all headers adjacent to source -- maximum flexibility at the cost of some complex include paths in projects.
2. One include directory with everything inside.
3. Split includes by domain, so there may be about six directories for someone using the lot, but a pure database user would probably have a single directory (see the sketch just below).
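To make option 3 concrete from a consumer's point of view (every name below is hypothetical, not OOFILE's actual layout), the include paths scale with the domains actually used:

    // A pure database user compiles with one extra include directory, e.g.
    //   c++ -I oofile/include/database app.cpp
    #include "dbConnection.h"   // database domain only
    // Someone also using the banding report-writer adds a second directory:
    //   c++ -I oofile/include/database -I oofile/include/report app.cpp
    #include "bandedReport.h"   // report-writer domain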
From a Unix build aspect, the recommended structure has been 2. My situation is complicated by needing to keep Visual Studio and XCode users happy (sniff, CodeWarrior, how I doth miss thee!).
Edit - the chosen solution:
I went with four subdirectories in include. I started trying to divide them up further by platform but it just got very noisy very quickly.
Personally I would go with 2, or 3 if really pushed.
But whichever you choose, please make it crystal clear in the build instructions how to set up the include paths. Nothing dooms an open source project more than it being really difficult to build -- developers want a quick out-of-the-box experience, and if it involves faffing around with many undocumented environment variables (or whatever), most will simply go away.