EmberCLI - Determining environment from request early in the page lifecycle

I have an EmberCLI app where Staging and Prod both live in the same S3 bucket, and the environment value in config/environment.js is the same for both. Still, I need to be able to specify different settings for the two "environments".
The surefire way to tell which app is running is by inspecting the domain of the request, but I'm having trouble intercepting that early enough to update my settings.
I've tried creating an initializer to inspect the domain and update the ENV object accordingly, but it seems it's too late in the page lifecycle to have any effect; the page has already been rendered, and my addons do not see their proper settings.
For one addon, I basically had to copy all of its code into my project and then edit the index.js file to use the proper keys based on my domain, but it's unwieldy. Is there a better way to do this? Am I trying to make an architecture work which is just ill-advised?
Any advice from those more versed in Ember would be much appreciated.

There's not a great way to handle staging environments right now (see Stefan Penner's comment here).
That said, I think you can achieve a staging environment on S3 by using ember-cli-deploy: add a staging environment to your ember-cli-deploy config, and handle the differences between staging and production in ember-cli-build.js.
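For illustration, a minimal sketch of what that might look like in config/deploy.js, assuming the ember-cli-deploy-s3 plugin; the bucket names are placeholders:
// config/deploy.js
module.exports = function(deployTarget) {
  var ENV = {
    build: {}
  };

  if (deployTarget === 'staging') {
    ENV.build.environment = 'production'; // reuse the production build
    ENV.s3 = { bucket: 'my-app-staging' }; // placeholder bucket
  }

  if (deployTarget === 'production') {
    ENV.build.environment = 'production';
    ENV.s3 = { bucket: 'my-app-production' }; // placeholder bucket
  }

  return ENV;
};
You would then run ember deploy staging or ember deploy production as appropriate.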

One thing I do is reuse the production environment for both staging and production, but use shell environment variables for some of the configuration:
// config/environment.js
if (environment === 'production') {
  ENV.something = process.env.SOMETHING;
}
And then to build:
$ SOMETHING="staging-value" ember build --environment=production

Related

Gatsby local build different from AWS Amplify build?

I have a Gatsby site that is working perfectly fine when I build and serve on localhost, but when I push the code and check my AWS Amplify website (that is tracking the github repo), it's behaving very differently. For example, on my local production build, all the links are working properly and view page source shows HTML.
However, on my Amplify link, only some of the link paths are working, and view page source is not showing any HTML for any of the pages. I assume there must be some kind of difference between the way it's being built on my local machine and on Amplify, but I'm not sure where exactly the problem lies.
In theory, since they're both production builds and not development, they should be behaving the same way?
"I assume there must be some kind of difference between the way it's being built on my local machine and on Amplify, but I'm not sure where exactly the problem lies."
I also think so. These kinds of issues (different behavior between environments) are usually related to Node versions: when the Node version differs, the installed dependency versions can differ between environments as well.
In your case, check your current local version by running node -v and set the same version in AWS Amplify (it uses nvm under the hood).
You can follow one of the multiple approaches suggested in:
How to change Node Version in Provision Step in Amplify Console
frontend:
  phases:
    preBuild:
      commands:
        - nvm install 10
Replace 10 with your local version.
Gatsby is a static site generator: if you add new pages separately as plain HTML links, they will be dropped at build time. You need to add them under the src/pages folder as React components and use Link from gatsby to navigate between pages, as in the sketch below.
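A minimal sketch of such a page; the file name and route are hypothetical placeholders:
// src/pages/about.js (hypothetical page)
import React from "react"
import { Link } from "gatsby"

export default function AboutPage() {
  return (
    <main>
      <h1>About</h1>
      {/* Link renders a client-side anchor that Gatsby builds and prefetches */}
      <Link to="/">Back home</Link>
    </main>
  )
}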

Set different environments in AWS Amplify

I am just getting started with AWS Amplify and after some research, I am still unable to set up the environments structure I want. I have a Reactjs app which I want to host there, my plan is to have 3 environments:
Dev: this environment is to test new features. Every new branch I create is automatically deployed to this environment (no problem here, already implemented).
Staging: Once new features are merged into master branch I would like to have them deployed here. This should work as a pre-production environment.
Production: Once features in staging are tested, they should be released into Production with just 1 click (or an easy action). Also production should be always running with the latest released build of the project.
So, what's the problem exactly? So far I don't know how to have master point to two environments, meaning it is deployed to either the staging or the production environment, and promoting from staging to production is rather tedious at the moment.
Is there any way to implement this workflow in Amplify? Thank you in advance for your help.

How to handle private configuration file when deploying?

I am deploying a Django application using the following steps:
Push updates to Git
Log into AWS
Pull updates from Git
The issue I am having is with my production.py settings file. I have it in my .gitignore so it does not get uploaded to GitHub, for security reasons. This, of course, means it is not available when I pull updates onto my server.
What is a good approach for making this file available to my app when it is on the server, without having to upload it to GitHub where it is exposed?
It is definitely a good idea not to check secrets into your repository. However, there's nothing wrong with checking in configuration that is not secret if it's an intrinsic part of your application.
In large scale deployments, typically one sets configuration using a tool for that purpose like Puppet, so that all the pieces that need to be aware of a particular application's configuration can be generated from one source. Similarly, secrets are usually handled using a secret store like Vault and injected into the environment when the process starts.
If you're just running a single server, it's probably fine to adjust your configuration or application to read secrets from the environment (or possibly a separate file) and set those values on the server. You can then include other configuration settings (secrets excluded) as a file in the repository. If you need more flexibility later, you can pick up other tools then.
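A minimal sketch of the environment-variable approach in a Django settings module; the variable names and defaults are placeholders:
# settings/production.py -- read secrets from the environment
# instead of committing them to the repository
import os

# fail fast if a required secret is missing
SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]  # placeholder name

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "app"),       # non-secret, has a default
        "USER": os.environ.get("DB_USER", "app"),
        "PASSWORD": os.environ["DB_PASSWORD"],          # secret, no default
        "HOST": os.environ.get("DB_HOST", "localhost"),
    }
}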

How can you add environment variables in a Jenkins Multibranch Pipeline?

I am working on a Django project that I have integrated with a Jenkins Multibranch Pipeline. I cannot inject environment variables via the Multibranch Pipeline options, even after installing the Environment Injector plugin.
I have environment variables like DB_PASSWORD that must be included in the env vars.
Any insight will be highly appreciated.
Since you require secrets, the best-practice way is to use the withCredentials step, which loads credentials stored in the Jenkins credential store and exposes them as environment variables to code executed within its block/closure. withCredentials also masks the values in Jenkins logs:
withCredentials([[$class: 'UsernamePasswordMultiBinding',
                  credentialsId: 'DB_Creds',
                  usernameVariable: 'DB_USERNAME',
                  passwordVariable: 'DB_PASSWORD']]) {
    // do stuff
}
For non-sensitive env vars, use withEnv:
withEnv(["AWS_REGION=${params.AWS_REGION}"]) {
    // do stuff
}
If for whatever reason you want env vars set across your entire pipeline (not entirely recommended, but sometimes necessary):
env.MY_VAR='var'
echo("My Env var = ${env.MY_VAR}")
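If the job is a declarative multibranch pipeline, the same credential can also be exposed pipeline-wide via an environment block; this sketch assumes the DB_Creds credential id from the snippet above:
pipeline {
    agent any
    environment {
        // credentials() loads a username/password credential and masks it in logs;
        // it also exposes DB_CREDS_USR and DB_CREDS_PSW automatically
        DB_CREDS = credentials('DB_Creds')
    }
    stages {
        stage('Test') {
            steps {
                sh 'echo "connecting as $DB_CREDS_USR"'
            }
        }
    }
}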

Automate and streamline Django deployment from local to server

Recently, I have started to deploy my work-in-progress django site from my local to server. But I have been doing it manually, which is ugly, unorganized, and error-prone.
I am looking for a way to automate and streamline the following deployment tasks:
Make sure all changes are committed and pushed to the remote source repository (Mercurial) and tag the release.
Deploy the release to the server (including any required 3rd-party apps missing from the server)
Apply the model changes to the database on the server
For 2), I have two further questions. Should the source of the deployment be my local env or the source repository? Do I need a differential or full deployment?
For 3), I use South in my local for applying model changes to database. Do I do the same on the server? If so, how do I apply multiple migrations at once?
I think Fabric is the de facto lightweight Python deployment tool: http://docs.fabfile.org/en/1.3.4/index.html. It is very simple and will help you keep your deployment organized and streamlined. It allows for easy scp or rsync, and it is easy to integrate with Django tests.
For my smaller projects I just make the source of my deployments my local env. I check out a clean copy and deploy from there. It would probably be better to integrate this with my version control for a quick rollback if there are any errors once I deploy.
I have never used South, but I'd imagine you could just write a fab task to sync your production server; note that running manage.py migrate with no arguments applies all pending migrations at once. If you're using South in dev, I can't imagine why you wouldn't want to use it in production too. A rough sketch of such a fabfile follows.
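A rough sketch covering steps 1-3, using the Fabric 1.x API from the linked docs; the host, paths, and tag handling are placeholders:
# fabfile.py -- hypothetical deployment tasks (Fabric 1.x API)
from fabric.api import cd, env, local, run

env.hosts = ['user@example.com']  # placeholder host

def deploy(tag):
    """Usage: fab deploy:1.0.3"""
    # 1. tag the release locally and push it to the remote repository
    local('hg tag %s' % tag)
    local('hg push')
    with cd('/srv/myapp'):  # placeholder path on the server
        # 2. pull the tagged release and update the working copy
        run('hg pull')
        run('hg update %s' % tag)
        # install any third-party apps missing from the server
        run('pip install -r requirements.txt')
        # 3. apply all pending South migrations in one go
        run('python manage.py migrate')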