I'm collaborating on a project and use prettier as my code formatter. The number of collaborators is growing and I just suggested that we install prettier as a dev dependency and set up a config file so that our formatting is consistent. I have two questions about this:
I want to pin a specific version in package.json so that version differences between collaborators don't create formatting diffs. I have Prettier installed globally on my machine and want to make sure the dev dependency takes precedence over the globally installed version.
I also wanted to know if anyone has experience putting multiple config files in a project to format different directories slightly differently, and whether this caused any issues. Specifically, I'm thinking of using separate config files for the server and client code.
Any help is appreciated. Thanks so much.
What you're describing is the recommended way to use Prettier. See https://prettier.io/docs/en/install.html
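For the pinning, a minimal package.json sketch; the version number here is just an example of an exact pin, written with no ^ or ~ range:

    {
      "devDependencies": {
        "prettier": "2.8.8"
      },
      "scripts": {
        "format": "prettier --write ."
      }
    }

Running npm run format (or npx prettier --write .) resolves the binary from node_modules/.bin first, so the pinned local version takes precedence over a global install.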
To configure different settings for different parts of the project, use the overrides key in the config file. See https://prettier.io/docs/en/configuration.html#configuration-overrides
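For example, a single root config using overrides might look like this; the glob patterns and option values are made-up illustrations, not recommendations:

    {
      "singleQuote": true,
      "overrides": [
        {
          "files": "server/**/*.js",
          "options": { "tabWidth": 4 }
        },
        {
          "files": "client/**/*.{js,jsx}",
          "options": { "tabWidth": 2 }
        }
      ]
    }

This keeps one versioned file instead of several scattered configs, which sidesteps the multiple-config-file issues you're worried about.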
If something isn't clear and you still have questions, please write them here.
I have a package I want to install, and I would like the files to end up in a different directory than the one the installation wizard chooses for them.
For example, my Sitecore copy is running at C:\SiteCore\website
The module added files to C:\SiteCore\website\Console
I would like the files to ultimately live at C:\SiteCore\website\sitecore_modules\Console
I am using Sitecore 6.5 rev 111230, but we are planning to upgrade very soon, and I would like my installed packages to migrate seamlessly once we have upgraded. For reference, the package I want to install at the moment is the Sitecore PowerShell Extensions, though I would prefer to apply a similar method to any future packages that I install.
Is there a secret switch in the package installation process to allow me to do this? Can I do it from the package installation wizard? Is there another way to install packages?
I'm assuming I can't just change the package path and expect everything to keep working. Do I have to update a configuration somewhere (a file or inside the Sitecore CMS GUI) to make the package recognize the new file locations?
The module creator defines where the files live. If you move them, you run the risk of something not working. The best idea is to ask the creator on the module's Marketplace page.
There is no turn-key way to change this.
I guess you could take the code from the Marketplace and modify it.
I don't know exactly how licensing works for Marketplace modules, but I believe people are allowed to modify others' code.
Check the code and also the items; some fields may contain values for the folder path.
I discovered a way to accomplish this, but it can be quite involved or even impossible, depending on the complexity and size of the package.
First of all, I did take the question to the module creator and had a very helpful and informative conversation with the creator. So thanks for that suggestion - they may even move the install location in a future release, based on my request.
The workaround is to first install the package on a system as normal. Then you figure out everything that comes with the package. For files, this is easy if your Sitecore root is under source control. For items, this is really complicated. You can search for the installed items by owner, if you had the foresight to create and use a unique user for the package installation. Or you can check the untyped files in the package, which are essentially XML-based item manifests.
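For the file side, a rough sketch of that detection step, assuming the web root is a git working copy (the path is hypothetical):

    cd /c/SiteCore/website                        # hypothetical web root
    git status --porcelain                        # confirm a clean tree first
    # ...run the package installation wizard...
    git status --porcelain | sed -n 's/^?? //p'   # lists files the package added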
Once you have a detailed list, you make the desired modifications to the locations. Then you recreate the package yourself using the Sitecore package designer.
This works for simple packages - I did it to one small package that I hope to get up on the Sitecore marketplace as shared source soon. And by small, I mean it was 2 files and 3 items. The package that prompted me to ask this question would not cooperate with this workaround. The included .dll had some assumptions about the file structure hard-coded into it.
The workaround I took for the more complex package was really quite basic: I just created my new source-code project external to the required path. That let me wrap everything up neatly without getting medieval on the package files.
Thanks for both your answers, a very fine +1 to you.
I have run into a problem with the test environment in a C++ project.
We have a machine that downloads the code from the version control system, builds it, and executes the unit tests; nothing new.
The problem arises when we add a new dependency to the project. We are developing a lot of features at the same time, so this happens relatively often. When it does, we have to notify the testers and give them an easy way to reproduce the compilation environment...
And I was wondering if there is an easier way to handle this... I don't know, some tool like virtualenv or buildout for Python...
I have been searching on Google, but with no luck.
Any help will be appreciated.
You can always add all of the dependencies to the revision control system and provide automated scripts that will install the required subsystems. Where I work, if you just download the current version from the repository, you can build in one step an ISO image that can be installed by testers in any computer they want. The image contains everything from the OS up to the application.
Depending on your particular situation, you might want to start with smaller steps, like adding the dependencies to the repository and having the testers check there whether any new file appears or changes version.
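A rough sketch of that smaller step, with every name hypothetical: the repository carries the third-party archives plus a bootstrap script, so a tester can recreate the environment in one command.

    #!/bin/sh
    # install_deps.sh - hypothetical bootstrap kept at the repository root;
    # unpacks the third-party archives stored alongside the code.
    set -e
    mkdir -p /opt/ourproject/deps
    for pkg in third_party/*.tar.gz; do
        tar -xzf "$pkg" -C /opt/ourproject/deps
    done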
No ready-made tool, AFAIK, except maybe CMake, which can control things like that for you.
For C++, it's fairly easy to manage "by hand" since you can set LIB, LIBPATH and PATH environment variables to carefully selected directories. No site.py, eggs, .pth files and the like as with Python.
We do this at our shop, controlling our build/development environment closely and keeping everything in revision control (mostly scripts that download huge zips of prebuilt libs and unpack them to the right places).
Small libs are copied to common dirs, larger get their own entry in the env-vars.
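As a sketch of what such an environment script might look like (directory names and library versions are made up; on Windows you would use set in a batch file rather than export, with ; as the path separator):

    # env.sh - checked into revision control next to the build scripts.
    DEPS=/opt/ourproject/deps                     # hypothetical unpack location

    export INCLUDE="$DEPS/boost-1.55/include:$INCLUDE"
    export LIB="$DEPS/boost-1.55/lib:$LIB"
    export LIBPATH="$LIB"
    export PATH="$DEPS/tools/bin:$PATH"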
This works equally well for Python and Java. Haven't tried other languages...yet. :)
Anyone who has compiled from source knows how much of a pain it is to run "./configure" only to find that library X is missing. Worse yet, it spits out a silly line saying some cryptic lib file is missing, and then you have to go to a web browser, type in the missing file, and cross your fingers that Google can find the answer for you...
I find that very repetitive, so my question is:
Is there a way to work out all the required dependencies without running "./configure"?
Read the README* or INSTALL* files in the source distribution, if there are any, or look for any documentation on the website where you downloaded it from. If the package is well documented, dependencies will usually be listed somewhere.
Given that no specific package has been mentioned, I assume this is a generic "how to avoid running configure" question. From a source tarball, no, there is no automated way to work out the dependencies. That's what configure is for (you can always read the Makefiles and autoconf files and work out the dependencies manually, but then you'll miss configure very quickly). To avoid it, you need to use something other than the straight tarball, something that has already worked out the dependencies for you.
For example, you can switch to building source RPMs (or debs, depending on your system). Or you can use a system such as Gentoo, which is really good at working out the dependencies for you. But all of these require the package you're interested in to be available in their format, so they won't work for tarballs that you download from the source provider.
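For instance, if the package already exists in your distribution, the packaging metadata encodes the build dependencies for you (package names hypothetical):

    # Debian/Ubuntu: install everything needed to build the package
    apt-get build-dep somepackage

    # Fedora/RHEL: the same idea for a source RPM or spec file
    yum-builddep somepackage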
Read configure.ac/configure.in. Look for calls to AC_CHECK_LIB, AC_CHECK_LIBS, AC_SEARCH_LIBS, AM_PATH_* (some old packages that don't use pkg-config put their checks into the AM_* namespace for some reason), PKG_CHECK_MODULES (for pkg-config), AX_* (many autoconf-archive macros are written to check for uncommon dependencies), and any macro call that starts with an odd name (i.e., not AC_*, AM_*, or AX_*; try grep '^[^A]').
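A rough scan along those lines, runnable from the unpacked tarball (the macro list is only a heuristic, not exhaustive):

    # List the common dependency checks, with line numbers.
    grep -nE 'AC_CHECK_LIBS?|AC_SEARCH_LIBS|AM_PATH_|PKG_CHECK_MODULES|AX_' configure.ac

    # Then look for macro calls with unusual prefixes (not AC_/AM_/AX_).
    grep -nE '^[A-Z][A-Z0-9_]*\(' configure.ac | grep -vE ':(AC|AM|AX)_'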
One thing you can do that would be good for the community is to submit a bug report/feature request to the package maintainers. There are quite a few packages whose configure script does not abort on the first missing dependency, but runs to completion and then prints a summary of all the dependencies that are missing. That greatly reduces the tedium you describe. Unfortunately, "quite a few" translates to less than .00001 percent (this is a made up statistic). If you can convince the package maintainers to re-write their configure script to support this behavior, you will contribute to making the world a better place.
Good luck with that!
Is it possible to combine the following properties, and if so, how?
Store in our version control system some Visual Studio 2008 native C++ (VCPROJ) project files for the developers in our team that use this IDE.
Allow some of those developers to tweak their projects (e.g. using debug version of third-party libraries instead of the usual ones).
Make sure these modifications are done in files that are not versioned.
In other words, I would like to allow developers to tweak some settings in their projects without risking that these changes are committed.
An 'optional .vsprops file' approach seems doomed to fail, as VS2008 refuses to load projects that refer to non-existent .vsprops files...
Any other suggestion? Is this possible with VS2010?
You may not be able to do this directly, but a tool that generates the vcproj files, such as CMake, would let you. Script your whole project with CMake and conditionally include a config file (if present) that developers can change for their own setup.
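A minimal sketch of that conditional include (file and variable names are hypothetical; the override file is listed in the VCS ignore list so it never gets committed):

    # CMakeLists.txt: pull in an unversioned, per-developer override file.
    if(EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/local_settings.cmake")
        include("${CMAKE_CURRENT_SOURCE_DIR}/local_settings.cmake")
    endif()

    # local_settings.cmake might contain, e.g.:
    #   set(THIRDPARTY_LIB_DIR "C:/libs/debug")

CMake then regenerates the vcproj files with those settings applied, and the project files themselves never need to be committed with local tweaks.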
Branches could solve this problem: you create a branch, play with different versions of the third-party libraries, and merge the changes to trunk if the results are good.
Well, as a preliminary solution you could put the project file into something like .hgignore or .gitignore after its initial commit.
This way, changes to it can't be committed accidentally.
At least that's how I handle .hgignore itself.
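One caveat: with git, a .gitignore entry does not hide changes to a file that is already tracked; for that, something like the following is needed (file name hypothetical):

    # Tell git to stop noticing local modifications to a tracked file.
    git update-index --skip-worktree myproject.vcproj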
We use a versioned "common_configuration" folder and a script that copies project files from this "common_configuration" folder to the "project" folder.
We have another script to copy the configuration backwards, so the developers need to take a conscious action to commit their local changes to the global version control system.
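A sketch of the two scripts, assuming the layout described (all names hypothetical):

    # deploy_config.sh: overwrite local project files with the versioned ones.
    cp common_configuration/*.vcproj project/

    # commit_config.sh: a conscious step to push local edits back before committing.
    cp project/*.vcproj common_configuration/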
It partly answers your needs:
The upside: we have a way to keep a common configuration for everyone, and no accidental committing of local configuration.
The downside: blindly copying the files clobbers local changes. We live with it. We could write a more clever merge tool (using diff, or XML-specific manipulations), but we don't want to spend too much time supporting the deployment tools.
We work under Linux/Eclipse/C++, using Eclipse's "native" C++ projects (.cproject). The system comprises several C++ projects, all kept under svn version control using the integrated subclipse plugin.
We want a script that checks out, compiles, and packages the system, without us needing to drive this process manually from Eclipse, as we do now.
I see that there are generated makefiles and support files (sources.mk, subdir.mk, etc.) scattered around, which are not under version control (probably the subclipse plugin is "clever" enough to exclude them). I guess I can put them under svn and use them in the script we need.
However, this feels shaky. Has anybody tried it? Are there any issues to expect? Are there recommended ways to achieve what we need?
N.B. I don't believe that the idea of adopting another build system will be received nicely, unless it's SUPER-smooth. We are a small company of 4 developers running full-steam ahead, and any additional overhead or learning curve will not be appreciated :)
thanks a lot in advance!
I would not recommend putting things that are generated in an external tool into version control. My favorite phrase for this tactic is "version the recipe, not the cake". Instead, you should use a third party tool like your script to manipulate Eclipse appropriately to generate these files from your sources, and then compile them. This avoids the risk of having one of these automatically generated files be out of sync with your root sources.
I'm not sure what your threshold for "super-smooth" is, but you might want to take a look at Maven2, which has a plugin for Eclipse projects to do just this.
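Another route, hedged because it depends on your CDT version: recent CDT releases ship a headless-build application that a plain script can drive, covering the checkout-compile-package cycle without opening the IDE. Everything below (repository URL, workspace path, packaging step) is hypothetical:

    #!/bin/sh
    # Check out, build, and package without starting the GUI.
    svn checkout http://svn.example.com/trunk src

    eclipse -nosplash \
        -application org.eclipse.cdt.managedbuilder.core.headlessbuild \
        -data /tmp/headless-workspace \
        -importAll src \
        -build all

    tar -czf system.tar.gz -C src .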
I know that this is a big problem (I had exactly the same one; in addition, maintaining a build workspace in svn is a real pain!).
Problems I see:
You will get into problems as soon as somebody adds or changes project settings files but doesn't trigger a new build for all possible platforms (the generated makefiles aren't updated).
There is no overall makefile, so you cannot easily reuse the build order of your projects that Eclipse calculated.
BTW: I wrote an Eclipse plugin that builds up a workspace from a given (textual) list of projects and then triggers the build. That's possible but also not an easy task.
Unfortunately, I can't post the plugin anywhere because I wrote it for my former employer...