Is there any recommended way to load configuration from a .env file in Clojure?
I've found https://github.com/rentpath/clj-dotenv and https://github.com/jackmorrill/dotenv, which seemed to do what I want, but neither is available on clojars.org anymore, and GitHub activity on both is very low.
There is also https://github.com/weavejester/environ/, but I have not quite gotten my head around how to use it, since project.clj is tracked in my git repository and my configuration (in dev as well) contains potentially sensitive information such as API tokens.
Any help would be greatly appreciated.
The most basic approach is to edn/read an .edn file that contains a map of configuration. You don't need a library to do this. You just need to manage the file yourself (don't check it in if it contains passwords, but do deploy it to wherever it needs to go).
Environ is great for getting values from the environment, but how you get them into your environment is up to you. One way would be to source an env file before launching your application.
The library https://github.com/outpace/config can help with more complicated needs. It lets you pull configuration from many different sources (files, the environment, or something else you specify) in different formats (edn/string).
Ultimately you have to decide where you want configuration to live and how it will get there, neither of which is something you do directly from your Clojure project; they are deployment concerns. Feel free to add more specifics if this misses your needs.
I'm very new to databases and I'm trying to work out the best practice for what I'm trying to achieve.
I have one repository, which is a Django backend with a PostgreSQL database attached. I work with this on my main PC, but recently I've had to work on my laptop. My laptop already has another PostgreSQL database running on 5432, so I've had to change the project's settings to use port 54324. I don't want these changes pushed to the repository, but I would still like to track the settings.py file in it. So far I've just created a branch for each PC to maintain the separate settings, but I'm sure this is not a great way to do it. I've heard about setting up environment files, but I'm unsure whether that is the 'right way' to do it either.
I'm a little confused about the best way to do this; hopefully I'm making sense. Any help would be appreciated greatly.
Thanks,
Darren
This is normally solved with a properties file that is ignored by git. What you commit instead is a sample file (with a different name) that you do track and update accordingly. Your Python scripts read the real properties file, and everybody should be happy.
Besides eftshift0's answer, consider having a committed config.defaults.py file that sets default configuration values, which may be overridden by a per-site config.local.py file. If the default configuration works for you, you don't need to create the per-site config; if not, create it. Never commit the per-site config (and add it to .gitignore).
The configuration files might even live outside the repository proper, but the overall idea still applies: the distributed (and committed) configuration file is a sample and/or default, and the actual site settings are kept in some other file that is never committed.
If you already have a single config.py or settings.py, you can establish this configuration pattern by adding site.py (use whatever name you want for this per-site settings file) as an ignored file. Read the new file, if it exists, so that the site settings override the default settings from the existing tracked file, and you're good to go.
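As a minimal sketch of that pattern (the file names, database name, and port values here are assumptions for illustration, not anything from the question):

    # settings.py -- tracked in git; defaults that suit most machines
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "myapp",
            "HOST": "localhost",
            "PORT": "5432",  # the usual PostgreSQL port
        }
    }

    # settings_local.py lives next to settings.py, is listed in .gitignore,
    # and redefines whatever differs per machine; on the laptop it would
    # contain a full DATABASES dict with PORT set to "54324".
    try:
        from .settings_local import *  # noqa: F401,F403
    except ImportError:
        pass  # no per-site overrides on this machine

The relative import assumes the standard Django layout, where settings.py sits inside the project package.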
I have a Python serverless project on AWS in which several services are contained in a single repository (a monorepo) that looks like this:
/
  serverless.yml
  /service1
    lambda_handler.py
  /service2
    lambda_handler.py
  /general
    __init__.py
    utils.py
'general' is a package that is shared between the different services, and therefore we must use a single 'serverless.yml' file at the root directory (otherwise the shared code won't be deployed).
We have two difficulties:
A single 'serverless.yml' may be too messy and hard to maintain, and it prevents us from using global configuration (which may be quite useful).
Deploying a single service is complicated. I guess the 'package' feature may help, but I'm not quite sure how to use it correctly.
Any advice or best practices for this case?
It's better to use an individual serverless.yml file for each service. To share the common code, you have two options:
Convert the code into a library, use it as a dependency, and install it via a package manager for each individual service, just like any other library. (This is useful because updating the version of the common code won't affect the other services; see the packaging sketch below.)
Keep the shared code in a different repository and pull it into each individual service as a git submodule.
For more information, refer to the article Can we share code between microservices, which I originally wrote with serverless in mind.
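If you go the library route, a minimal packaging sketch for the shared code might look like the following; the package name, version, and repository URL are assumptions for illustration:

    # setup.py for the shared 'general' package, kept in its own repository
    from setuptools import setup, find_packages

    setup(
        name="general-utils",      # hypothetical package name
        version="0.1.0",
        packages=find_packages(),  # picks up the general/ package
        install_requires=[],       # runtime dependencies of the shared code
    )

Each service can then pin a specific version in its requirements.txt, for example with a line like git+https://github.com/your-org/general-utils.git@v0.1.0 (hypothetical URL), so upgrading the shared code for one service never breaks the others.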
We are updating our Sitecore instance to 8.2, and in the process I am trying to refine our source control and development workflow.
Goals
1. Have a single source of truth for support DLLs, configs, license files, etc.
2. Have everything in source control that is needed to recreate the entire site, from dev to prod (excluding packages).
In order to have all of the different configs needed for the various machines I have created gulp tasks that transform the configs on build (dev, staging, prod). Those transformed configs are placed in a folder in the project that is then used to replace the originals on the target machines. This folder publishes all of its contents and seems to be working well so far.
What I don't know is how to deal with all of the config files that do not change.
Is it best to include all of those .config files in the project so that they publish? If not, the folders on the target machines will have to be either managed manually (which seems like a bad idea) or kept up to date by a script (more customization; by default, not a great idea).
The only downside (that I see) to including all of the configs in the project is the weight it would add to file searches (and that doesn't seem like a very strong argument).
Am I not seeing something?
How are you other Sitecore humans handling this?
Gregory
As a general rule of thumb, do not check any default files into source control.
The main reasons are bloat, which makes syncing/downloading from your source control take much longer, and upgrades, the latter being the much more important reason.
If/when you upgrade in the future, and you do not have any Sitecore files checked into source control, you can simply deploy a new/clean instance of Sitecore, fix any conflicts in your own code, and then deploy on top. You don't have to figure out what has changed in the default install files between releases.
Any changes you need to make to Sitecore configs or settings should be made using patch files and only those custom files added to your solution.
How to handle this for deployments?
There are a few options. You could go down the scripted route, which takes a clean Sitecore install, unzips it, makes whatever modifications you need, and then installs/unzips the modules you use in your solution one by one.
Another option may be to create a default install with all the modules and then zip it up; an install would then follow a similar process to the above, but is simpler: just unzip a single file. You could use Sitecore SIM to install the instance and modules and then take a backup, or do this manually.
Yet another alternative may be to check everything into source control, either in a separate repository or a different project, to ensure that all default files and configs are kept apart from your own. If you need to upgrade in the future, simply delete the repo/project and add the new files back in.
I would also do the same (a separate project) to keep all support patches/DLLs separate, again to make it easy to identify which fixes have been applied and to remove them if a future version resolves the issue.
These may add an additional step to your deploy, but keeping this separation will make your life much, much easier when it comes time to upgrade.
In a scenario where the project's settings.py is split into base, development, and production files, and only the base file is tracked in VCS: is it a problem if the SECRET_KEY is hard-coded in the production settings file, or would having it in an environment variable be a better choice? If so, why?
Is having it pulled from the system somehow more secure than writing it in plain text inside the file?
I would say the security of both methods is the same. Whether the key is written down in a file (which is not committed to the source code repository) or set as an environment variable, the effect is the same.
If your system is compromised in a way that gives someone access to the server, both methods would expose your secret key, so it wouldn't make much difference.
Now, I would say using an environment variable is the better strategy, though not for security reasons. It is usually not a good idea to rely on uncommitted files to run a project; that is one of the causes of the famous 'it works on my machine' problem, and it also makes the initial setup of a project difficult for newcomers.
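As a minimal sketch of the environment-variable approach (the variable name DJANGO_SECRET_KEY is just a convention assumed here, not something from the question):

    # settings.py -- read the secret from the environment instead of a file
    import os

    # Raises KeyError at startup if the variable is missing, which is
    # usually what you want for a production secret.
    SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]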
For this kind of settings and configuration management, there is a great Python library called Python Decouple. It's worth checking out; I use it in every Django project I work on.
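For reference, a typical python-decouple setup looks roughly like this; the values come from an uncommitted .env file or, failing that, from real environment variables:

    # settings.py using python-decouple
    from decouple import config

    SECRET_KEY = config("SECRET_KEY")                  # required, no default
    DEBUG = config("DEBUG", default=False, cast=bool)  # optional, with a cast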
I'm learning about Strongloop, it's pretty good so far.
Question: What is the appropriate place to put AWS keys? config.json? ..and how would I access them from my application?
Thanks
Ideally you would not put those credentials in any file that is committed. I usually find environment variables to be the best balance of convenience and security.
If you are using strong-pm, then you would do this with slc ctl env-set. If you are using some other supervisor, then you'll need to consult its docs.
A lot of times it is enough to use Upstart or systemd directly, which both make it fairly easy to set environment variables in the service process.
Besides the above answer, you can also handle these values as part of your release procedure.
What we have done in our product is keep all of these entries in a config file that is deployed from a shared folder.
Let me elaborate.
We have local config files in git, and separate config files on the production servers in a folder named 'shared'. Whenever a tagged release is deployed from git, the files in the shared folder overwrite these config files.