Avoiding dependencies is exploding the number of projects in my VS solution - c++

I'm working in C++; I'm new to Visual Studio and still trying to understand how to use it effectively.
My problem is that I have what seems to me a fairly small, non-complex project, but I find myself adding more and more Projects to the Solution, and managing them is becoming unwieldy and frustrating.
The project depends on a device, so I've defined DeviceInterface, and I've got FakeDevice and RealDevice implementing the interface.
My core project, Foo, is a static library written against DeviceInterface. The Foo library knows nothing about either of the concrete implementations.
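For concreteness, here's a minimal sketch of that layering (the member functions and FakeDevice's behaviour are made up purely to illustrate the dependency direction):

```cpp
// DeviceInterface.h -- the only device header Foo depends on
class DeviceInterface {
public:
    virtual ~DeviceInterface() = default;
    virtual int read() = 0;            // hypothetical operations,
    virtual void write(int value) = 0; // just to show the shape
};

// FakeDevice.h -- in-memory stand-in used by the tests
class FakeDevice : public DeviceInterface {
public:
    int read() override { return last_; }
    void write(int value) override { last_ = value; }
private:
    int last_ = 0;
};

// Foo.h -- the static library is written purely against the interface;
// it never names FakeDevice or RealDevice
class Foo {
public:
    explicit Foo(DeviceInterface& device) : device_(device) {}
    void bumpValue() { device_.write(device_.read() + 1); }  // hypothetical logic
private:
    DeviceInterface& device_;
};
```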
I have multiple test executables, let's call them TestExe1, TestExe2, and so forth. These tests share some common code, FooTestUtils.
Using RealDevice requires some init and teardown work before and after use. This doesn't belong within the interface implementation; the client code is naturally responsible for this.
This means a test executable can only run against RealDevice if it takes a hard dependency on RealDevice and on the init/teardown resources, which I don't need or want for the tests that use the fake.
My present solution is to split the test executables: one for FakeDevice, and another for RealDevice that performs the initialization and then calls the same test code.
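Concretely, the split looks roughly like this (a sketch; RunAllFooTests, the headers, and the SetUp/TearDown helpers are hypothetical names standing in for the shared FooTestUtils code and the init/teardown resources):

```cpp
// FakeTests.cpp -- test executable that only needs FooTestUtils + FakeDevice
#include "FooTestUtils.h"   // hypothetical header: shared test code and FakeDevice

int main() {
    FakeDevice device;
    return RunAllFooTests(device);          // hypothetical shared entry point
}

// RealTests.cpp -- same tests, but with the RealDevice dependencies and
// the client-side init/teardown wrapped around them
#include "FooTestUtils.h"
#include "RealDevice.h"
#include "RealDeviceEnvironment.h"          // hypothetical init/teardown helpers

int main() {
    SetUpRealDeviceEnvironment();
    RealDevice device;
    const int result = RunAllFooTests(device);
    TearDownRealDeviceEnvironment();
    return result;
}
```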
TL;DR: Core library Foo, depending on DeviceInterface, which has multiple implementations. Multiple test executables, most of which can work with either implementation of DeviceInterface, but one of those implementations requires extra set-up in the client code.
This seems to me like a reasonable level of complexity. But it results in SO MANY Projects:
Static Libraries:
Foo
RealDevice implementation
FooTestUtils (note: includes FakeDevice implementation)
gtest (used for some of the testing)
Library from another solution, needed for RealDevice use
Executables:
2 TestExe$i projects for every test executable I want
In the *nix environments I'm more used to, I'd divide the code into a reasonable directory tree, and a lot of these "Projects" would just be a single object file, or a single .cpp with some client code for the core logic.
Is this a reasonable number of projects for a solution of this scope? It feels like an awful lot to me. Frequently I find some setting I need to change across half a dozen different projects, and I'm finding it increasingly difficult to navigate. At present, this is still manageable, but I'm not seeing how this will remain workable as I proceed into larger, more complex projects. Could I be organizing this better?
(Again, I'm new to Visual Studio, so the problem might be that I don't know how to manage multiple related projects, rather than just the number of the projects themselves.)

What you're doing is pretty standard, and for a small project like the one you describe, your solution seems perfectly reasonable.
However, Visual Studio does provide some ways to minimize the impact of these issues for experienced developers:
Build configurations and property sheets
In short, why have separate projects for FakeDevice and RealDevice?
Create a single "Device" project that, depending on which configuration is chosen, builds the sources of FakeDevice or those of RealDevice. This also lets you start your project in a "Testing" configuration and automatically get FakeDevice, while selecting "Debug" or "Release" would provide RealDevice.
Note that individual projects, as well as the entire solution, can have configurations independently, allowing rapid batch-building of specific configurations.
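For example, the "Device" project might expose a single factory function, with a different source file supplying it in each configuration (a sketch; the factory name and headers are hypothetical, and the per-configuration exclusion mechanism is described in the EDIT below):

```cpp
// makeDevice() is declared once in the "Device" project's public header;
// which of these two .cpp files actually gets compiled is chosen per
// configuration (the other is marked "Excluded From Build").

// --- FakeDeviceFactory.cpp (built only in the Testing configuration) ---
#include <memory>
#include "DeviceInterface.h"
#include "FakeDevice.h"

std::unique_ptr<DeviceInterface> makeDevice() {
    return std::make_unique<FakeDevice>();
}

// --- RealDeviceFactory.cpp (built in the Debug/Release configurations) ---
#include <memory>
#include "DeviceInterface.h"
#include "RealDevice.h"

std::unique_ptr<DeviceInterface> makeDevice() {
    return std::make_unique<RealDevice>();
}
```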
Real-world example
My company produces a plugin for Adobe Illustrator. There are seven supported versions of Illustrator (each with its own SDK), as well as 32- and 64-bit variants and debug and release builds (and double that again, to 28+ variants, as there are two near-identical branded versions of the plugin).
My Solution is as follows:
Plugin-Solution
    [Debug][Release] / (win32/x64)
    Plugin
        [Debug AI4][Debug AI5][Debug AI6][Debug AI7]
        [Release AI4][Release AI5][Release AI6][Release AI7] / (win32/x86)
    {libraries with similar setups...}
In my day-to-day work, I simply "press play" in the debug configuration; when release time comes (or a specific version needs testing), I "Batch Build" the correct combination of projects for debugging or packaging.
This effectively means that although I have (including shared libraries) close to 30 binaries being produced for a release, my solution only has three projects in it.
Testing executables
As for the unit-testing executables, I'd recommend creating a separate solution for those. Visual Studio has no problem having several solutions open concurrently. I do, however, have one tip:
Create a separate solution and put all your unit tests in it; then, in your main solution, add a simple "tests" project whose post-build event runs a PowerShell/batch script.
That script can then invoke the MSVC toolchain on the unit-tests solution and run the tests, collating the results (if you're in the correct configuration).
This lets you build and run your tests from a single project, even if you do need to Alt+Tab to create a new unit test.
Personal (opinionated) Advice
Having developed on 'nix, Windows, and Apple systems, here's a good metaphor for the layout philosophy.
'Nix expects you to create your own makefiles and folder layout; it assumes you know exactly what you're doing (in the terminal!), and the layout becomes your plaything (with enough shell scripts).
Windows/Visual Studio is designed to be open to every level of user, from an eight-year-old learning to program in Visual Basic to an experienced C++ developer writing hardware drivers. As such, the interface is designed to be very expandable: "projects" in "solutions" is the basic idea (many beginners don't realise you can have multiple projects). However, if you want more options, there is one way to do it as far as MS is concerned (in this case, configurations and property sheets); if you're writing a makefile or creating your own layout, you are "doing it wrong" (in Microsoft's eyes, anyway).
If you take a look at the hassle Boost has had fitting into the Windows ecosystem over the last few years, you'll start to understand the problem. On 'nix, having several dozen shared libraries apt/yum-installed as dependencies of a package is fine! On Windows, however, it feels like having more than one DLL is a bad idea. There's no package manager, so you either rely on .NET or package a single Boost DLL with your product (this is why I prefer static linking on Windows).
EDIT:
When you have multiple configurations, selecting which sources do and don't build for each can be done in two ways.
One: manually
Right-click any source file in the Solution Explorer and select Properties; under the "General" section, you can set "Excluded From Build" (this also works if you group-select and right-click).
Two: XML magic
If you open the .vcxproj file, you'll find a well-formed XML file!
While handling the exact conditions for managing inclusions, exclusions, and other options is beyond the scope of this post, basic details can be found in this well-worded Stack Overflow question as well as the MSDN toolchain documentation.

Related

Create a project from a template

I have several project setups in very different languages, for example an Android project.
Whenever I want to create a new Android project, I copy that project, rename everything I need to rename, and I have a ready-to-go project I can start working with.
Since this is very time-consuming and I'm sure it can be automated, I thought about creating a tool that does this for me, but then I figured there are probably a thousand solutions out there that already solve this exact problem which I'm just not aware of.
So my question is: do you know of any tools like this? The requirements I see are that it has to be OS-, language-, and IDE-independent, and it must provide a command-line interface, ideally with minimal setup effort.
You should try Telosys (https://www.telosys.org), a lightweight code generator that can generate any kind of language with any kind of framework.
This tool is quite simple, free, and open source.
It provides a Command Line Interface (so it can be used with any environment/IDE).
It is usually used to bootstrap a project and to generate all the repetitive code (CRUD, controllers, unit tests, HTML pages, etc.).
See also:
https://modeling-languages.com/telosys-tools-the-concept-of-lightweight-model-for-code-generation/
https://www.slideshare.net/lguerin/telosys-project-booster-paris-open-source-summit-2019

What are the principles of organizing C++ code in Visual Studio?

I'm a seasoned C++ developer in a new position. My experience is in *nix-based systems, and I'm working with Visual Studio for my first time.
I find that I'm constantly struggling with Visual Studio for things I consider trivial. I feel like I haven't grokked how I'm supposed to be using VS; so I try doing things "the way I'm used to," which takes me down a rabbit-hole of awkward workarounds, wasted time, and constant frustration. I don't need a VS 101 tutorial; what I need is some kind of conversion guide - "Here's the VS way of doing things."
That's my general question - "What's the VS way of doing things?" That might be a bit vague, so I'll describe what's giving me grief. Ideally, I'm not looking for "Here's the specific set of steps to do that specific thing," but rather "You're looking at it wrong; here are the terms and concepts you need to understand to use VS effectively."
In C++, I'm used to having a great measure of control over code organization and the build process. I feel like VS is working strongly against me here:
I strongly tend to write small, isolated building blocks, and then bigger chunks that put those blocks together in different combinations.
As a trivial example, for a given unit or project, I make a point of having strong separation between the unit's headers meant for client inclusion; the unit's actual implementation; and any testing code.
I'm likely to have multiple different test projects, some of which will probably rely on common testing code (beyond the code-under-test itself).
VS makes it onerous to actually control code location. If I want a project's code to be divided into an include/ folder and a src/ folder, that's now a serious hassle.
VS's concept of "projects" seems, as far as I can tell, somewhere between what I'd think of as "final build target" and "intermediate build target." As far as I can tell, basically anything I want to share between multiple projects, must also be a project.
But if many intermediate objects now become projects, then I'm suddenly finding myself with a TON of small projects.
And managing a ton of small projects is incredibly frustrating. They each have a million settings and definitions (under multiple configurations and platforms...) that are a real pain to transfer from one project to the other.
This encourages me to lump lots of unrelated code together in a single project, just to reduce the number of projects I need to manage.
I'm struggling with this constantly. I can find solutions to any one given thing, but it's clear to me that I'm missing a wider understanding of how Visual Studio, as a tool, is meant to be used. Call it correct workflow, or correct project organization - any solutions or advice would be a real help to me.
(Note: much as I'd like to, "Stop working with the Visual Studio buildchain" is not an option at the moment.)
The basic rule is: A project results in a single output file [1].
If you want to package building blocks into static libraries, create a project for each one.
Unit tests are separate from the code, so it's common to see a "foo" and a "foo test" project side by side.
With respect to your small building blocks, I use this guideline: If it is closely enough related to be put in the same folder, it is closely enough related to be put in the same project.
And managing a ton of small projects is incredibly frustrating. They each have a million settings and definitions (under multiple configurations and platforms...) that are a real pain to transfer from one project to the other.
Property sheets are intended to solve this problem. Just define a property sheet containing related settings and definitions, and it becomes as easy as adding that property sheet to a new project.
As each project can pull its settings from multiple property sheets, you can group them into logical units. For example: a "unit test" property sheet with all settings related to your unit test framework.
To create a property sheet in Visual Studio 2015: in the View menu, there is an option "Property Manager". You get a different tree view of your solution, with the projects, then the configurations, and then all the property sheets for that project+configuration combination. The context menu for the configuration has an option to create a new property sheet or to add an existing one.
[1] Although it is common to have the Release configuration result in foo.dll and Debug configuration in food.dll, so they can exist next to each other without resorting to the Debug/ and Release/ folders. In the General properties, set the TargetName to "$(ProjectName)d" (for Debug configuration) and remove the "$(Configuration)" from the OutputDirectory (for all configurations) to achieve this.

Setting up a vim project space with 4 different but related code bases

I'm a novice vim user who really likes vim and wants to take it to the next step in my development workflow.
I have 4 different C/C++ code bases which are compiled using 3 different compilers. Each of the code bases has its own project and makefiles for the compilers. I keep the compilers open to compile the different projects. Two code bases are for firmware of a device, one code base is for a library and the last code base is for a cross platform desktop app that uses the library to talk to the device.
I mainly use vim for my code editing, and right now I have several different vim windows that I keep open, i.e. one per code base. I exit the vim editors a lot to open different code files, which is very unproductive. I often have to look up functions in different files within the same code base. I often have to switch between code bases because the software compiled from one code base processes data generated by a program from another code base and I have to double check defines and such.
I'm wondering if there is a better way to organize this using vim? How does an expert vim user set up his development workflow to work with multiple related code bases within the same vim environment and how does he/she navigate the code bases efficiently?
If your projects are related and files from one project are referred to in another, I would recommend opening them all in one GVIM instance. I personally often use tab pages to segregate different projects within one Vim instance, but Vim (together with your favorite plugins) is so flexible in this regard that virtually any workflow can be accommodated.
For a more precise comment and recommendation, your question is missing details like:
How (through which plugin) do you open project files / recently opened files?
Is your current working directory set to the project root, and does this matter to your workflow?
How is your window layout, do you have any sidebars, and how do you organize your files (buffer list, minimized splits, arg list, etc.)?
There are some blog posts about how individuals have set up their Vim environments, but these naturally are bound to personal preferences and the particular programming environment. So, use them for inspiration, but be aware that there's no perfect recipe, and you'll have to find your own, personal way.

Testing Process of a C++ & Qt Application

I'm working on part of a big (sort of...) C++ application written mainly in Qt.
I was wondering if this is the right / common approach:
Whenever I make a change to one or several source files, I compile it (in Qt Creator) in debug mode, and then launch and test it.
The problem is that each compilation takes a couple of minutes (usually 1-3 minutes). I hate that, and I guess I'm doing something wrong here; maybe compiling the whole project for each minor change is not the right way to go?
Try to use unit tests with QTest as much as possible; then you can verify the parts first and top it off with some tests on the complete application. This saves a lot of time and can also help produce more robust code if done right.
This does require some kind of modular approach, so the code needs to be grouped in some way.
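For example, a minimal QTest case looks something like this (the tested function and all names are made up for illustration; the project needs to link against Qt's test library, e.g. QT += testlib in a qmake .pro file):

```cpp
// mathtest.cpp -- a tiny stand-alone QTest executable
#include <QtTest/QtTest>

// Hypothetical function under test
static int addOne(int x) { return x + 1; }

class MathTest : public QObject
{
    Q_OBJECT
private slots:
    void addOne_incrementsValue()
    {
        QCOMPARE(addOne(1), 2);      // fails with a clear diff if not equal
        QVERIFY(addOne(-1) == 0);    // plain boolean check
    }
};

QTEST_APPLESS_MAIN(MathTest)
#include "mathtest.moc"
```

Each such test target builds in seconds and can be run on its own, so you only rebuild and launch the full application when you actually need to.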
Most likely you need to tweak your build settings to ensure you are doing minimal rebuild or incremental build where it only compiles the files that have changed and doesn't update or rebuild anything not directly affected by the changes. This still doesn't help when you are changing a header file that's included heavily throughout the project but with a well laid out project that shouldn't happen.
In general there are several approaches to testing as you go but here are the main two things I'd recommend:
Don't rebuild the entire project (no clean, no rebuild all); just do an incremental build and test as you go. Great for testing GUI changes and little things in projects that don't need to link against a million things or have a long start-up.
Develop it as a separate project with a console app or simple test application that you don't include in the final integrated version but can keep for independent testing later. This is better for libraries, say, if you are making a new encryption algorithm or a file manager to replace some old, archaic portion of a bigger project.
Of course there is always the approach of coding with overwhelming confidence like a crazy person and crossing your fingers when you compile and run it which is very popular but not quite as effective.
Do you have multiple SUBDIRS targets within your project? If the answer is yes, you could try tweaking the project file by first removing all "ordered" keywords from the project files and then, if one subdir depends on another, declaring those as dependencies. Finally, make sure you pass a -jX value to make (that is, if your build rules use make) so that all CPU cores are used while compiling.
Also answered in: Qt automated testing
I, along with my team, recently developed TUG, an open-source framework for unit testing Qt GUIs. It uses Qt Test. Maybe it can help you.
A video is better than a thousand words:
https://www.youtube.com/watch?v=tUis6JrycrA
Hope we can make it better together. GitHub repo: http://pedromateo.github.io/tug_qt_unit_testing_fw/

Why should one use a build system over that which is included as part of an IDE?

I've heard more than one person say that if your build process amounts to clicking the build button, then your build process is broken. Frequently this is accompanied by advice to use things like make, CMake, nmake, MSBuild, etc. What exactly do these tools offer that justifies manually maintaining a separate configuration file?
EDIT: I'm most interested in answers that would apply to a single developer working on a ~20k line C++ project, but I'm interested in the general case as well.
EDIT2: It doesn't look like there's one good answer to this question, so I've gone ahead and made it CW. In response to those talking about continuous integration: yes, I understand completely that when you have many developers on a project, CI is nice. However, that's an advantage of CI, not of maintaining separate build scripts. They are orthogonal: for example, Team Foundation Build is a CI solution that uses Visual Studio's project files as its configuration.
Aside from continuous integration needs which everyone else has already addressed, you may also simply want to automate some other aspects of your build process. Maybe it's something as simple as incrementing a version number on a production build, or running your unit tests, or resetting and verifying your test environment, or running FxCop or a custom script that automates a code review for corporate standards compliance. A build script is just a way to automate something in addition to your simple code compile. However, most of these sorts of things can also be accomplished via pre-compile/post-compile actions that nearly every modern IDE allows you to set up.
Truthfully, unless you have lots of developers committing to your source control system, or have lots of systems or applications relying on shared libraries and need to do CI, using a build script is probably overkill compared to simpler alternatives. But if you are in one of those aforementioned situations, a dedicated build server that pulls from source control and does automated builds should be an essential part of your team's arsenal, and the easiest way to set one up is to use make, MSBuild, Ant, etc.
One reason for using a build system that I'm surprised nobody else has mentioned is flexibility. In the past, I also used my IDE's built-in build system to compile my code. I ran into a big problem, however, when the IDE I was using was discontinued. My ability to compile my code was tied to my IDE, so I was forced to re-do my entire build system. The second time around, though, I didn't make the same mistake. I implemented my build system via makefiles so that I could switch compilers and IDEs at will without needing to re-implement the build system yet again.
I encountered a similar problem at work. We had an in-house utility that was built as a Visual Studio project. It's a fairly simple utility and hasn't needed updating for years, but we recently found a rare bug that needed fixing. To our dismay, we found out that the utility was built using a version of Visual Studio that was 5-6 versions older than what we currently have. The new VS wouldn't read the old-version project file correctly, and we had to re-create the project from scratch. Even though we were still using the same IDE, version differences broke our build system.
When you use a separate build system, you are completely in control of it. Changing IDEs or versions of IDEs won't break anything. If your build system is based on an open-source tool like make, you also don't have to worry about your build tools being discontinued or abandoned because you can always re-build them from source (plus fix bugs) if needed. Relying on your IDE's build system introduces a single point of failure (especially on platforms like Visual Studio that also integrate the compiler), and in my mind that's been enough of a reason for me to separate my build system and IDE.
On a more philosophical level, I'm a firm believer that it's not a good thing to automate away something that you don't understand. It's good to use automation to make yourself more productive, but only if you have a firm understanding of what's going on under the hood (so that you're not stuck when the automation breaks, if for no other reason). I used my IDE's built-in build system when I first started programming because it was easy and automatic. I later started to become more aware that I didn't really understand what was happening when I clicked the "compile" button. I did a little reading and started to put together a simple build script from scratch, comparing my output to that of the IDE's build system. After a while I realized that I now had the power to do all sorts of things that were difficult or impossible through the IDE. Customizing the compiler's command-line options beyond what the IDE provided, I was able to produce a smaller, slightly faster output. More importantly, I became a better programmer by having real knowledge of the entire development process from writing code all the way down through the generation of machine language. Understanding and controlling the entire end-to-end process allows me to optimize and customize all of it to the needs of whatever project I'm currently working on.
If you have a hands-off, continuous-integration build process, it's going to be driven by an Ant- or make-style script. When changes are detected, your CI process will check the code out of version control onto a separate build machine, compile, test, package, deploy, and create a summary report.
Let's say you have 5 people working on the same set of code. Each of those 5 people is making updates to the same set of files. Now you may click the build button and know that your code works, but what about when you integrate it with everyone else's? The only way you'll know is if you get everyone else's changes and try. This is easy every once in a while, but it quickly becomes tiresome to do over and over again.
With a build server that does it automatically, it checks whether the code compiles for everyone, all the time. Everyone always knows if something is wrong with the build, and what the problem is, and no one has to do any work to figure it out. Small things add up: it may take a couple of minutes to pull down the latest code and try to compile it, but doing that 10-20 times a day quickly becomes a waste of time, especially if you have multiple people doing it. Sure, you can get by without it, but it is so much easier to let an automated process do the same thing over and over again than to have a real person do it.
Here's another cool thing, too. Our process is set up to test all the SQL scripts as well. You can't do that by pressing the build button. It reloads snapshots of all the databases it needs to apply patches to and runs them to make sure that they all work and run in the order they are supposed to. The build server is also smart enough to run all the unit tests/automation tests and return the results. Making sure the code compiles is fine, but with an automation server, it can handle many, many steps automatically that would take a person maybe an hour to do.
Taking this a step further, if you have an automated deployment process along with the build server, deployment is automatic. Anyone who can press a button to run the process can deploy code to QA or production. This means that a programmer doesn't have to spend time doing it manually, which is error-prone. When we didn't have this process, it was always a crap shoot as to whether or not everything would be installed correctly, and generally it was a network admin or a programmer who had to do it, because they had to know how to configure IIS and move the files. Now even our most junior QA person can refresh the server, because all they need to know is which button to push.
The IDE build systems I've used are all usable from things like automated build/CI tools, so there is no need for a separate build script as such.
However, on top of that build system you need to automate testing, versioning, source-control tagging, and deployment (and anything else you need to release your product).
So you create scripts that extend your IDE build and do the extras.
One practical reason why IDE-managed build descriptions are not always ideal has to do with version control and the need to integrate with changes made by other developers (ie. merge).
If your IDE uses a single flat file, it can be very hard (if not impossible) to merge two project files into one. It may be using a text-based format like XML, but XML is notoriously hard to handle with standard diff/merge tools. Just the fact that people are using a GUI to make edits makes it more likely that you end up with unnecessary changes in the project files.
With distributed, smaller build scripts (CMake files, Makefiles, etc.), it can be easier to reconcile changes to project structure just like you would merge two source files. Some people prefer IDE project generation (using CMake, for example) for this reason, even if everyone is working with the same tools on the same platform.