Central repository for Crystal Shards?

Is there anything like NPM or pip for Crystal?
Are there plans or a roadmap to achieve this?
I have gotten tired of copying and pasting GitHub repositories into my shards file.

The dependency manager for Crystal is shards. Unlike npm or pip, however, there is no centralized repository for registering shards. This has some benefits, among them avoiding a single point of failure.
For configuring your shard dependencies, this makes no significant difference: instead of adding a registered name, you put in the address of a repository.
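For illustration, a dependency entry in shard.yml might look like this (the shard name and repository are placeholders):

    dependencies:
      some_shard:
        github: some-user/some_shard
        version: ~> 1.0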
Currently, there is no option to add a shard to your dependencies directly from the command line (though there is an open issue for that), so you have to edit shard.yml and add it manually.
Honestly, I don't think this is much of a disturbance. I can't remember ever using a command line tool to add a dependency, even though many dependency managers support that. If you're adding a dependency, you'll also have to add code to use it, so you'll need to work in an editor anyway and can easily edit the dependency file there.


Can I sync two bazel-remote caches using rsync?

I have a build pipeline that builds and tests changes before they are merged to the main line. Once that happens, it would be great if the Bazel actions from that build are available to developers. Unfortunately, the build pipeline runs in the cloud and uses an in-cloud cache, but the developers use an on-premises cache.
I am using https://github.com/buchgr/bazel-remote
Does anyone know if I can just rsync the artifacts from the data directory of the cloud cache to the developers' cache in order to give them access to the pre-built artifacts? Normally, I would just try it out, but I'm concerned about subtle issues that might poison the cache or negatively affect the hit rate, so I'm hoping to hear from someone who understands the code before I go digging.
You can rsync the cache directory contents and use them from another location, but this won't work with a running bazel-remote: the items will be ignored until bazel-remote is restarted.
Another option would be to use the http_proxy configuration file setting to automatically put/get cache items to/from another bazel-remote instance. An example configuration file was recently added to README.md in the bazel-remote git repository.
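As a rough sketch of what that could look like in a bazel-remote YAML config file (directory, size and URL are placeholders; the README example in the repository is authoritative):

    # where this instance stores its own cache, and how large it may grow (GiB)
    dir: /var/cache/bazel-remote
    max_size: 100
    # proxy backend: fetch misses from / push puts to another bazel-remote
    http_proxy:
      url: https://cloud-cache.example.com:8080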

Sitecore Config Files + Project Setup

We are updating our Sitecore instance to 8.2, and in the process I am trying to refine our source control and development workflow.
Goals
1. Have a single source of truth for support DLLs, configs, license files, etc.
2. Have everything in source control that is needed to recreate the entire site from dev to prod (excluding packages).
In order to have all of the different configs needed for the various machines I have created gulp tasks that transform the configs on build (dev, staging, prod). Those transformed configs are placed in a folder in the project that is then used to replace the originals on the target machines. This folder publishes all of its contents and seems to be working well so far.
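For context, a transform task along these lines can be a simple stream pipeline; here is a minimal sketch using gulp-replace (paths and values are placeholders, not the actual setup):

    // gulpfile.js sketch: rewrite a dev URL for production builds
    var gulp = require('gulp');
    var replace = require('gulp-replace');

    gulp.task('configs:prod', function () {
      return gulp.src('App_Config/**/*.config')
        .pipe(replace('http://dev.example.com', 'https://www.example.com'))
        .pipe(gulp.dest('build/TransformedConfigs'));
    });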
What I don't know is how to deal with all of the config files that do not change.
Is it best to include all of those .config files in the project so that they publish? If not, then the target machine folders will have to be either manually managed (seems like a bad idea) or kept up to date by a script (more customization, which by default is not a great idea).
The only downside (that I see) to including all of the configs in the project is the weight that it would add to file searches (and that doesn't seem like a very strong argument).
Am I not seeing something?
How are you other Sitecore humans handling this?
Gregory
As a general rule of thumb, do not check any default files into Source Control.
The main reasons are bloat, which makes syncing/downloading from your source control take much longer, and upgrades, the latter being the more important reason.
If/when you upgrade in the future and you do not have any Sitecore files checked into source control, you can simply deploy a new/clean instance of Sitecore, fix any conflicts in your own code and then deploy on top. You don't have to try to figure out what has changed in the default install files between releases.
Any changes you need to make to Sitecore configs or settings should be made using patch files and only those custom files added to your solution.
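A minimal sketch of such a patch file (the file path, setting name and value are hypothetical):

    <!-- App_Config/Include/zMyProject/Settings.config (hypothetical) -->
    <configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
      <sitecore>
        <settings>
          <setting name="MyProject.SomeSetting" value="SomeValue" />
        </settings>
      </sitecore>
    </configuration>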
How to handle this for deployments?
There are a few options. You could go down the scripted route, which would take a clean Sitecore install, unzip it and make whatever modifications you need, then install/unzip the modules that you use in your solution one by one.
Another option may be to create a default install with all the modules and then zip this up; an install would then be a similar process to the above, but simpler: just unzipping a single file. You could use Sitecore SIM to install the instance and modules and then back it up, or do this manually.
Yet another alternative may be to check everything into Source Control, either in a separate repository or a different project, to ensure that all default files and configs are kept separate. If you need to upgrade in the future, simply delete the repo/project and add the files back in again.
I would also do the same (a separate project) to keep all Support patches/DLLs separate, again to help easily identify which fixes have been applied and to easily remove them if a future version resolves the issue.
These may add an additional step to your deploy, but keeping this separation will make your life much, much easier when it comes time to upgrade.

Load .env file in Clojure

Is there any recommended way to load configuration inside a .env file in Clojure?
I've found https://github.com/rentpath/clj-dotenv and https://github.com/jackmorrill/dotenv, which seemed to do what I want, but neither of them is available on clojars.org anymore, and GitHub activity on both is very low.
There is also https://github.com/weavejester/environ/, but I have not quite gotten my head around how to use it, since project.clj is tracked inside my git repository and my configuration (in dev too) contains potentially sensitive information such as API tokens.
Any help would be greatly appreciated.
The most basic approach is to edn/read an .edn file that contains a map of configuration. You don't need a library to do this. You just need to manage the file (don't check it in if it contains passwords, but do deploy it to where it needs to go).
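A minimal sketch (the file name and keys are assumptions):

    ;; config.edn (not checked in) contains a map like {:api-token "..."}
    (require '[clojure.edn :as edn])

    (def config
      (edn/read-string (slurp "config.edn")))

    (:api-token config)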
Environ is great for getting values from the environment, but how you get them into your environment is up to you. One way would be to source an env file before launching your application.
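For instance (a sketch; the variable name is an assumption): after exporting API_TOKEN in the shell, or sourcing an env file that does so, the value is available through environ's env map:

    ;; a minimal environ sketch; API_TOKEN / :api-token is an assumption
    (require '[environ.core :refer [env]])

    ;; environ lowercases names and turns underscores into dashes,
    ;; so the API_TOKEN environment variable becomes :api-token
    (def api-token (env :api-token))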
This library https://github.com/outpace/config can help with more complicated needs. It allows you to pull configuration from many different sources (files, the environment, or something else you specify) in different formats (edn/string).
Ultimately you have to decide where you want configuration to be and how it will get there, both of which are not directly something you do from your Clojure project, but are instead deployment concerns. Feel free to add more specifics if this is missing your needs.

Is there an ideal way to move from Staging to Production for ColdFusion code?

I am trying to work out a good way to run a staging server and a production server for hosting multiple ColdFusion sites. Each site is essentially a fork of a repo, with site-specific changes made to each. I am looking for a good way to have this staging server move code (upon QA approval) to the production server.
One fanciful idea involved compiling each site into an EAR file to be run on the production server, but I cannot seem to wrap my head around ColdFusion archives, plus I cannot see any good way of automating this, especially the deployment part.
What I have done successfully before is use Subversion as a go-between for a site: once a site is QA'd, the code is committed, and then the production server's working directory has an SVN update run, which triggers a code copy from the working directory to the actual live code. This worked fine, but has many moving parts, and still required some form of server access to each server to run the commits and updates. Plus, this worked for an individual site; I think it may be a nightmare to set up and maintain this architecture for multiple sites.
Ideally I would want a group of developers to have FTP access with the ability to log into some control panel to mark a site for QA, then have a QA person check the site and mark it as stable/production-worthy, and then have someone see that a site is pending and click a button to deploy the updated site. (Any of those roles could be filled by the same person, mind you.)
Sorry if that last part wasn't so much the question, just a framework to understand my current thought process.
I agree with @Nathan Strutz that Ant is a good tool for this purpose. Some more thoughts.
You want a repeatable build process that minimizes opportunities for deltas. With that in mind:
SVN export a build (see the command sketch after this list).
Tag the build in SVN.
Turn that export into a .zip, something with an installer, etc.; the idea being one unit to validate with a set of repeatable deployment steps.
Send the build to QA.
If QA approves, deploy that build into production.
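A command-line sketch of the export-and-tag steps (repository URL, paths and build number are placeholders):

    # export a clean, pristine copy of the code (no .svn metadata)
    svn export https://svn.example.com/repo/trunk build/site
    # tag what was exported so the build is reproducible
    svn copy https://svn.example.com/repo/trunk \
        https://svn.example.com/repo/tags/build-42 -m "Tag build 42 for QA"
    # one unit to hand to QA
    zip -r site-build-42.zip build/site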
Move whole code bases over as a build, rather than just changed files. This way you know what's put into place in production is the same thing that was validated. Refactor code so that configuration data is not overwritten by a new build.
As for actual production deployment, I have not come across a tool to solve the multiple servers, different code bases challenge. So I think you're best served rolling your own.
As an aside, in your situation I would think through an approach that allows for a standardized codebase, with a mechanism (i.e. an API) that allows for the customization you're describing. Otherwise managing each site as a "custom" project is very painful.
Update
Learning Ant: Ant in Action [book].
On Source Control: for the situation you describe, I would maintain a core code base plus per-site overlays. Export core, then export the site-specific files over it. This ensures that any core updates not deliberately overridden by site-specific changes make it into the build.
Call this combination a "build". Do builds with Ant. Maintain an Ant script, or perhaps more flexibly an Ant configuration file, per core & site combination. Track the version numbers of core and site as part of a given build.
If your software is stuffed inside an installer (the Nullsoft installer NSIS, for instance), that should be part of the build. Otherwise you should generate a .zip file (.ear is a possibility as well, but I haven't seen anyone actually do this with CF). The point being one file that encompasses the whole build.
This build file is what QA should validate, so validation includes deployment, configuration and functionality testing. See the Deployment section below for how this can flow.
Deployment:
If you want to automate deployment, QA should be involved as well to validate it. Meaning QA would deploy / install builds using the same process on their servers before doing a staging to production deployment.
To do this I would create something that tracks which server receives which build file, along with whatever credentials and connection information is necessary to make that happen, most likely via FTP. Once transferred, the tool would then extract the build file / run the installer. This last piece is an area I would have to research: how to let one server run commands such as extraction or installation on another remotely.
You should look into Ant as a migration tool. It allows you to package your build process with a simple XML file that you can run from the command line or from within Eclipse. Creating an automated build process is great because it documents the process as well as executes it the same way, every time.
Ant can handle zipping and unzipping, copying around, making backups if needed, working with your Subversion repository, transferring via FTP, compressing JavaScript and even calling a web address if you need to do something like flush the application memory or server cache once it's installed. You may be surprised by the things you can do with Ant.
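A minimal build.xml sketch along those lines (target names, URLs and paths are placeholders):

    <project name="site-build" default="package">
      <property name="build.dir" value="build"/>
      <!-- export a clean copy of the code (assumes svn is on the PATH) -->
      <target name="export">
        <exec executable="svn">
          <arg line="export https://svn.example.com/repo/trunk ${build.dir}/site"/>
        </exec>
      </target>
      <!-- one artifact that encompasses the whole build -->
      <target name="package" depends="export">
        <zip destfile="${build.dir}/site-build.zip" basedir="${build.dir}/site"/>
      </target>
    </project>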
To get started, I would recommend the Ant manual as your main resource, but look into existing Ant builds as a good starting point. I have one on RIAForge, for example, that does some interesting stuff and calls a Groovy script to do some more processing on my files during the build. If you search RIAForge for build.xml files, you will come up with a great variety of them, many of which are directly for ColdFusion projects.

Two VisualSVN Server instances pointing to the same SVN repo?

Would it be possible/safe to run two instances of VisualSVN Server pointing to the same repo?
I've searched around and not had any luck finding anything related specifically to this question. The only reason I ask is because we need to enable Windows Authentication/Integration over HTTP, and SVN authentication over HTTPS. It does not seem to be an option to run both within a single instance of VisualSVN Server.
If not, do you know of an alternative solution that would allow for this?
Edit: I received the following answer from VisualSVN Support:
Thanks to Subversion's design, repositories are ready to be accessed by several server instances simultaneously. We haven't experimented a lot with such a configuration, but I think it's possible.
Do I understand correctly that you are going to store your repositories on network storage and run two VisualSVN Server instances on different machines?
Please take care with the server.pid file. In the current release, this file is stored in the repositories folder, so there will be a collision between two instances of VisualSVN Server. We are going to fix this problem in an upcoming release.
You can easily relocate the server.pid file to another destination by adding the following directive to the "C:\Program Files\VisualSVN Server\conf\httpd-custom.conf" file:
    PidFile "C:/Tmp/server.pid"
You can point two VisualSVN Server instances to the same repository without any problems if it is stored on an SMB share. It's a typical configuration for active/active or active/passive cluster setups.
I wouldn't do this because, as far as I know, VisualSVN Server brings its own web server (Apache) and SVN binaries. I would expect locking issues when running two of each on the same repo, if it's possible at all. VisualSVN Server probably won't even install twice on the same machine.
This sounds like a case for a separate installation of SVN and Apache with custom configuration. I can't say whether what you want is possible, but I would expect it is. It's probably going to be fiddly, though; VisualSVN takes away a lot of the configuration hassle you'd have when doing the setup manually. Questions about that would be appropriate for Serverfault.com.
Apart from VisualSVN, there are other commercial wrappers as well. Maybe one of them is more flexible in this respect.
Update: Also, check this out: Supporting Multiple Repository Access Methods from the SVN book