WebStorm with multiple projects attached - open a tool window for a certain project

I've got a project with four different repositories.
Recently, instead of opening four different windows, I've decided to open up the most important repo and attach to it the other projects.
However, it seems all of WebStorm's tool windows (VCS et cetera) are still bound to the main project, and there doesn't seem to be a convenient way to manage all the projects from one WebStorm instance.
Is it possible to, for example, open the VCS tool window and easily switch its focus between projects?

No, it's not. Attaching projects to the current one is more or less equivalent to adding content roots; it's still a single project as far as the IDE is concerned. See https://www.jetbrains.com/help/webstorm/2019.2/opening-reopening-and-closing-projects.html#428b6b3d
Related feature requests:
https://youtrack.jetbrains.com/issue/WEB-39009
https://youtrack.jetbrains.com/issue/WEB-39015
https://youtrack.jetbrains.com/issue/IDEA-218888
https://youtrack.jetbrains.com/issue/IDEA-217413


Avoiding dependencies is exploding the number of projects in my VS solution

I'm working in C++; I'm new to Visual Studio and still trying to understand how to use it effectively.
My problem is that I have what seems to me a fairly small, non-complex project, but I find myself adding more and more Projects to the Solution, and managing them is becoming unwieldy and frustrating.
The project depends on a device, so I've defined DeviceInterface, and I've got FakeDevice and RealDevice implementing the interface.
My core project, Foo, is written as a static library defined in terms of DeviceInterface. The Foo library knows nothing about either of the concrete implementations.
I have multiple test executables, let's call them TestExe1, TestExe2, and so forth. These tests share some common code, FooTestUtils.
Using RealDevice requires some init and teardown work before and after use. This doesn't belong within the interface implementation; the client code is naturally responsible for this.
This means that a test executable can only run against RealDevice if it takes a hard dependency on RealDevice and the init/teardown resources, which I don't need or want for tests that use the fake.
My present solution is to split test executables up - one for FakeDevice, another for RealDevice that performs the initialization and then goes and calls the same test code.
TL;DR: Core library Foo, depending on DeviceInterface, which has multiple implementations. Multiple test executables, most of which can work with either implementation of DeviceInterface, but one of those implementations requires extra set-up in the client code.
This seems to me like a reasonable level of complexity. But it results in SO MANY Projects:
Static Libraries:
Foo
RealDevice implementation
FooTestUtils (note: includes FakeDevice implementation)
gtest (used for some of the testing)
Library from another solution, needed for RealDevice use
Executables:
Two projects for every TestExe$i test executable I want (one for FakeDevice, one for RealDevice)
In the *nix environments I'm more used to, I'd divide the code into a reasonable directory tree, and a lot of these "Projects" would just be a single object file, or a single .cpp with some client code for the core logic.
Is this a reasonable number of projects for a solution of this scope? It feels like an awful lot to me. Frequently I find some setting I need to change across half a dozen different projects, and I'm finding it increasingly difficult to navigate. At present, this is still manageable, but I'm not seeing how this will remain workable as I proceed into larger, more complex projects. Could I be organizing this better?
(Again, I'm new to Visual Studio, so the problem might be that I don't know how to manage multiple related projects, rather than just the number of the projects themselves.)
What you're doing is pretty standard, and for a small project like the one you describe your solution seems perfectly reasonable.
However, Visual Studio does provide some ways to minimize the impact of these issues for experienced developers:
build-configurations and property-sheets:
In short, why have separate projects for FakeDevice and RealDevice?
Create a single "Device" project that, depending on which configuration is chosen, builds either the FakeDevice sources or the RealDevice sources. This also lets you start your project in a "Testing" configuration that automatically loads FakeDevice, while selecting "Debug" or "Release" provides RealDevice.
Note that individual projects, as well as the entire solution, can have configurations independently, allowing rapid batch-building of specific configurations.
real world example
My company produces a plugin for Adobe Illustrator. There are seven supported versions of Illustrator (each with its own SDK), as well as 32- and 64-bit variants, and further debug and release builds (and double all of that again, to 28+ variants, as there are two near-identical branded versions of the plugin).
My Solution is as follows:
Plugin-Solution
[Debug][Release] / (win32/x64)
Plugin
[Debug AI4][Debug AI5][Debug AI6][Debug AI7][Release AI4]
[Release AI5][Release AI6][Release AI7] / (win32/x64)
{libraries with similar setups...}
In my day-to-day work I simply "press play" in the debug config; however, when release time comes (or a specific version needs testing) I "Batch Build" the correct combination of projects for debugging or packaging.
This effectively means that although I have (including shared libraries) nearly 30 binaries produced for a release, my solution only has three projects in it.
testing executables
As for the unit-testing executables, I'd recommend creating a separate solution for those - Visual Studio has no problem having several solutions open concurrently. I do, however, have one tip:
Create a separate solution and create all your unit tests within it; then, in your main solution, add a simple "tests" project and have its post-build event run a PowerShell/batch script.
That script can then invoke the MSVC toolchain on the unit-test solution and run the tests, collating the results (if you're in the correct configuration).
This will allow you to build/run your tests from a single project, even if you do need to alt+tab to create a new unit test.
Personal (opinionated) Advice
Having developed on 'nix, Windows, and Apple systems, here's a good metaphor for the layouts.
'Nix expects you to create your own makefiles and folder layout; it assumes you know exactly what you're doing (in the terminal!), and the layout becomes your plaything (with enough shell scripts).
Windows/Visual Studio is designed to be open to every level of user, from an eight-year-old learning to program in Visual Basic to an experienced C++ developer creating hardware drivers. As such, the interface is designed to be very expandable - "projects" in "solutions" is a basic idea (many beginners don't realise you can have multiple projects). However, if you want more options, there is one way to do it as far as MS is concerned (in this case, configurations and property sheets) - if you're writing a makefile or creating your own layout, you are "doing it wrong" (in Microsoft's eyes, anyway).
If you take a look at the hassle Boost has had fitting into the Windows ecosystem over the last few years, you'll start to understand the problem. On 'nix, having several dozen shared libraries installed via apt/yum as dependencies of a package is fine! On Windows, however, it feels like having more than one DLL is a bad idea. There's no package manager, so you either rely on .NET or package a single Boost DLL with your product (this is why I prefer static linking on Windows).
EDIT:
When you have multiple configurations, selecting which sources do and don't build in each one can be done in two ways.
one: manually
Right-click any source file in the Solution Explorer and select Properties; under the "General" section, you can set "Excluded From Build" (this also works if you group-select and right-click).
two: XML magic
If you open the .vcxproj file, you'll find a well-formed XML layout!
While handling the exact conditions for managing inclusions, exclusions, and other options is beyond the scope of this post, basic details can be found in this well-worded Stack Overflow question as well as the MSDN toolchain documentation.
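As a rough illustration (the file names and the "Testing" configuration name are assumptions of mine, not from the post), conditionally compiling sources in a .vcxproj can look like:

```xml
<!-- Inside a hypothetical Device.vcxproj: build FakeDevice.cpp only in the
     Testing configuration, and RealDevice.cpp everywhere else. -->
<ItemGroup Condition="'$(Configuration)'=='Testing'">
  <ClCompile Include="FakeDevice.cpp" />
</ItemGroup>
<ItemGroup Condition="'$(Configuration)'!='Testing'">
  <ClCompile Include="RealDevice.cpp" />
</ItemGroup>
```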

What are the principles of organizing C++ code in Visual Studio?

I'm a seasoned C++ developer in a new position. My experience is in *nix-based systems, and I'm working with Visual Studio for the first time.
I find that I'm constantly struggling with Visual Studio for things I consider trivial. I feel like I haven't grokked how I'm supposed to be using VS; so I try doing things "the way I'm used to," which takes me down a rabbit-hole of awkward workarounds, wasted time, and constant frustration. I don't need a VS 101 tutorial; what I need is some kind of conversion guide - "Here's the VS way of doing things."
That's my general question - "What's the VS way of doing things?". That might be a bit vague, so I'll describe what's giving me grief. Ideally, I'm not looking for "Here's the specific set of steps to do that specific thing," but rather "You're looking at it wrong; here are the terms and concepts you need to understand to use VS effectively."
In C++, I'm used to having a great measure of control over code organization and the build process. I feel like VS is working strongly against me here:
I strongly tend to write small, isolated building blocks, and then bigger chunks that put those blocks together in different combination.
As a trivial example, for a given unit or project, I make a point of having strong separation between the unit's headers meant for client inclusion; the unit's actual implementation; and any testing code.
I'm likely to have multiple different test projects, some of which will probably rely on common testing code (beyond the code-under-test itself).
VS makes it onerous to actually control code location. If I want a project's code to be divided into an include/ folder and a src/ folder, that's now a serious hassle.
VS's concept of "projects" seems, as far as I can tell, somewhere between what I'd think of as "final build target" and "intermediate build target." As far as I can tell, basically anything I want to share between multiple projects, must also be a project.
But if many intermediate objects now become projects, then I'm suddenly finding myself with a TON of small projects.
And managing a ton of small projects is incredibly frustrating. They each have a million settings and definitions (under multiple configurations and platforms...) that are a real pain to transfer from one project to the other.
This encourages me to lump lots of unrelated code together in a single project, just to reduce the number of projects I need to manage.
I'm struggling with this constantly. I can find solutions to any one given thing, but it's clear to me that I'm missing a wider understanding of how Visual Studio, as a tool, is meant to be used. Call it correct workflow, or correct project organization - any solutions or advice would be a real help to me.
(Note: much as I'd like to, "Stop working with the Visual Studio buildchain" is not an option at the moment.)
The basic rule is: A project results in a single output file [1].
If you want to package building blocks into static libraries, create a project for each one.
Unit tests are separate from the code under test, so it's common to see a "foo" and a "foo test" project side by side.
With respect to your small building blocks, I use this guideline: If it is closely enough related to be put in the same folder, it is closely enough related to be put in the same project.
And managing a ton of small projects is incredibly frustrating. They each have a million settings and definitions (under multiple configurations and platforms...) that are a real pain to transfer from one project to the other.
Property pages are intended to solve this problem. Just define a property page containing related settings and definitions, and it becomes as easy as adding the property page to a new project.
As each project can pull its settings from multiple property pages, you can group them into logical groups. As an example: a "unit test" property page with all settings related to your unit test framework.
To create property page in Visual Studio 2015: in the View menu, there is an option "Property Manager". You get a different tree view of your solution, with the projects, then the configurations, and then all the property pages for that project+configuration combination. The context menu for the configuration has an option to create a new property page or to add an existing one.
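For example, here is a sketch of what a "unit test" property sheet (.props) might contain; the gtest include path and the UNIT_TEST macro are placeholders of mine, not from the answer:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- Hypothetical UnitTest.props: attach it to each test project via the
       Property Manager instead of repeating these settings per project. -->
  <ItemDefinitionGroup>
    <ClCompile>
      <AdditionalIncludeDirectories>$(SolutionDir)gtest\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
      <PreprocessorDefinitions>UNIT_TEST;%(PreprocessorDefinitions)</PreprocessorDefinitions>
    </ClCompile>
  </ItemDefinitionGroup>
</Project>
```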
[1] Although it is common to have the Release configuration produce foo.dll and the Debug configuration produce food.dll (note the extra "d"), so they can exist next to each other without resorting to Debug/ and Release/ folders. In the General properties, set TargetName to "$(ProjectName)d" (for the Debug configuration) and remove "$(Configuration)" from the OutputDirectory (for all configurations) to achieve this.
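The footnote's naming trick, expressed as a project-file fragment (a sketch; in practice you would set this through the UI in the Debug configuration's General properties):

```xml
<!-- In foo.vcxproj: Debug builds produce food.dll alongside Release's foo.dll -->
<PropertyGroup Condition="'$(Configuration)'=='Debug'">
  <TargetName>$(ProjectName)d</TargetName>
</PropertyGroup>
```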

setting up vim project space with 4 different code but related code bases

I'm a novice vim user who really likes vim and wants to take it to the next step in my development workflow.
I have 4 different C/C++ code bases which are compiled using 3 different compilers. Each of the code bases has its own project and makefiles for the compilers. I keep the compilers open to compile the different projects. Two code bases are for firmware of a device, one code base is for a library and the last code base is for a cross platform desktop app that uses the library to talk to the device.
I mainly use vim for my code editing, and right now I have several different vim windows that I keep open, i.e. one per code base. I exit the vim editors a lot to open different code files, which is very unproductive. I often have to look up functions in different files within the same code base. I often have to switch between code bases because the software compiled from one code base processes data generated by a program from another code base and I have to double check defines and such.
I'm wondering if there is a better way to organize this using Vim? How does an expert Vim user set up their development workflow to work with multiple related code bases within the same Vim environment, and how do they navigate the code bases efficiently?
If your projects are related and files from one project are referenced in another, I would recommend opening them all in one GVim instance. I personally often use tab pages to segregate different projects within one Vim instance, but Vim (together with your favorite plugins) is so flexible in this regard that virtually any workflow can be accommodated.
For a more precise comment and recommendation, your question is missing details like:
How (through which plugin) do you open project files / recently opened files?
Is your current working directory set to the project root, and does this matter to your workflow?
What is your window layout like; do you have any sidebars, and how do you organize your files (buffer list, minimized splits, arg list, etc.)?
There are some blog posts about how individuals have set up their Vim environments, but these naturally are bound to personal preferences and the particular programming environment. So, use them for inspiration, but be aware that there's no perfect recipe, and you'll have to find your own, personal way.
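For instance, the tab-page-per-code-base layout mentioned above can be set up like this (paths are placeholders; `:tcd` needs a reasonably recent Vim, and on older versions the window-local `:lcd` serves a similar purpose):

```vim
" Open one tab page per code base, each with its own working directory,
" so :make, :grep, and file completion stay local to that project.
:cd ~/src/firmware-a              " first tab keeps the initial directory
:tabedit ~/src/firmware-b/main.c
:tcd ~/src/firmware-b
:tabedit ~/src/library/lib.cpp
:tcd ~/src/library
:tabedit ~/src/desktop-app/app.cpp
:tcd ~/src/desktop-app

" Navigate: gt / gT cycle tabs, {count}gt jumps to tab {count};
" within a tab, :vsplit and Ctrl-W commands arrange related files.
```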

Is a Console Application & Windows Form Application combination possible?

I use Visual Studio 2010 and am coding in C#. A couple of people I know have a difference of opinion on what form these programs should take: console or Windows Forms. These programs usually just install/delete/modify/create other files, given some settings and files already provided to the program.
The argument for the console application is the ability to use it in batch scripts for unattended execution and repetition with different settings. The argument for the Windows Forms application is ease of use, instead of typing command-line options and arguments. Is it possible to combine both of these, and is it even good practice?
If what you want to do, given the right values, can be pulled out into a shared project, then yes. You then have two more projects: the console app and the Windows app. Both of these take a reference to the shared project containing the meat of the logic. The app projects are there only to provide an interface into the shared project.
Personally, if the app is just going to use values given to it, I'd give it a config file and make it a console app that pulls from that config. You could then have it set up differently on different servers. You could even put different named configurations in that file and have the console app do nothing but ask which named config to use. If the app can be a console app and is for use by programmers/admins/other power-user types, it likely doesn't really need a Windows UI over it.

how-to: programmatic install on windows?

Can anyone list the steps needed to programmatically install an application on Windows? Aside from copying the files where they need to be, what are the additional steps needed so that your app will be a first-class citizen in Windows (i.e. show up in the programs list, uninstall list, etc.)?
I tried to google this, but had no luck.
BTW: This is for an unmanaged c++ application (developed in Qt), so I'd rather not involve the .net framework if I don't have to.
I highly recommend NSIS. Open Source, very active development, and it's hard to match/beat its extensibility.
To add your program to the Add/Remove Programs (or Programs and Features) list, add the following reg keys:
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\PROGRAM_NAME]
"DisplayName"="PROGRAM_NAME"
"Publisher"="COMPANY_NAME"
"UninstallString"="PATH_TO_UNINSTALL_PROGRAM"
"DisplayIcon"="PATH_TO_ICON_FILE"
"DisplayVersion"="VERSION"
"InstallLocation"="PATH_TO_INSTALLATION_LOCATION"
I think the theme of the answers you'll see here is that you should use an installation program, and that you should not write the installer yourself. Use one of the many installer makers, such as Inno Setup, InstallShield, or anything else someone recommends.
If you try to write the installer yourself, you'll probably do it wrong. This isn't a slight against you personally. It's just that there are a lot of little details that an installer should consider, and a lot of things that can go wrong, and if you want to write the installer yourself, you're just going to have to get all those things right. That means lots of research and lots of testing on your part. Save yourself the trouble.
Besides copying files, installation tasks vary quite a bit depending on what your program needs. Maybe you need to put an icon on the Start menu; an installer tool should have a way to make that happen very easily, automatically filling in the install location that the customer chose earlier in the installation, and maybe even choosing the right local language for the shortcut's label.
You might need to create registry entries, such as for file associations or licensing. Your installer tool should already have an easy way to specify what keys and values to create or modify.
You might need to register a COM server. That's a common enough action that your installer tool probably has a way of specifying that as part of the post-file-copy operation.
If there are some actions that your chosen installer tool doesn't already provide for, the tool will probably offer a way to add custom actions, perhaps through a scripting language, or perhaps through linking external code from a DLL you would write that gets included with your installer. Custom actions might include downloading an update from a specific Web site, sending e-mail, or taking an inventory of what other products from your company are already installed.
A couple of final things that an installer tool should provide are ways to apply upgrades to an existing installation, and a way to uninstall the program, undoing all those installation tasks (deleting files, restoring backups, unregistering COM servers, etc.).
I've used Inno Setup to package my C++ software. It's very simple compared to heavy-duty solutions such as InstallShield. Everything can be contained in a single setup.exe without creating all these crazy batch scripts and so on.
Check it out here: http://www.jrsoftware.org/isinfo.php
It sounds like you need to check out the Windows Installer system. If you need the nitty-gritty, see the official documentation. For news, read the installer team's blog. Finally, since you're a programmer, you probably want to build the installer as a programmer would. WiX 3.0 is my tool of choice - open source code, from Microsoft to boot. Start with this tutorial on WiX. It's good.
The GUI for Inno Setup (highly recommended) is ISTool.
You can also use the MSI installer support built into Visual Studio; it's a steeper learning curve (i.e., a pain) but is useful if you are installing software in a corporate environment.
To have your program show up in the Start menu, you would need to create a folder under C:\Documents and Settings\All Users\Start Menu\Programs and add a shortcut to the program you want to launch. (If you want your application to be listed directly in the Start menu, or in the Programs submenu, put your shortcut in the respective directory.)
To programmatically create a shortcut you can use IShellLink (see the MSDN article).
Since you want to uninstall, it gets a lot more involved, because you don't want to simply go deleting DLLs or other common files without checking dependencies.
I would recommend using a setup/installation generator, especially nowadays with Vista being so persnickety; it is getting rather complicated to roll your own installation if you need anything more than a single executable and a Start menu shortcut.
I have been using the Paquet Builder setup generator for several years now (the registered version includes uninstall).
You've already got the main steps. One step you left out is installing to the Start menu and providing an option to create a desktop and/or Quick Launch icon.
I would encourage you to look into using a setup program, as suggested by Jeremy.