What is the correct way of sharing your Elastic Beanstalk configuration with your team? - amazon-web-services

I am looking for a way to share the EB configuration so anyone in my team with valid AWS credentials can deploy the code. By default, EB adds the following to your .gitignore file:
# Elastic Beanstalk Files
.elasticbeanstalk/*
!.elasticbeanstalk/*.cfg.yml
!.elasticbeanstalk/*.global.yml
Do I need to check in these files to share them with the team?

In my opinion, AWS royally messed up with their .gitignore defaults. This was confusing at first because it seemed like the entries were there for a good reason, but we couldn't find one. Maybe it was just a precaution so you didn't commit something you shouldn't. However, firstly, modifying a project's .gitignore is not something a tool should be doing by default, in my opinion. And secondly, no one should be committing code they haven't reviewed.
As Kush notes in his reply, you can add the files into a nested directory that is tracked by your VCS. I'm assuming the reason for this is so that different developers can maintain different configurations. We have zero use for anything remotely resembling this, but it's worth noting, since I'm sure someone will find it useful.
We've completely removed these entries from our project's .gitignore and we commit the entire .elasticbeanstalk and .ebextensions directories.

Assuming you have EB CLI access, you can create a configuration template and share it with a command like:
eb config save dev-env --cfg prod
Now, open this file in a text editor to modify/remove sections as necessary for your production environment.
Note: AWSConfigurationTemplateVersion is a required field. Do not remove it from the configuration file.
Checking Configurations into Version Control
If you want to check in your saved configurations so that anyone with access to your code can use the same settings in their own environments, or if you want to track different versions of the saved configurations, move the file into the .elasticbeanstalk/ directory. Saved configurations are written to the .elasticbeanstalk/saved_configs/ folder; by moving the configuration file up one level into the .elasticbeanstalk/ folder, it can still be used by the EB CLI and can be checked in. After you move the file, you must add and commit it.
Refer to this AWS blog post.

Related

Can I create (or simulate) AWS credential files in local directories?

From the same (Mac) machine, I regularly bounce between absolutely unrelated projects. Often, these projects have their own separately owned AWS instance (primarily Elastic Beanstalk environments).
I would like to be able to just cd into a project directory and have it know the credentials for that specific project, which I presume would require a .aws directory and the usual credentials and config files.
I cannot find anything in the documentation that explicitly allows this and named profiles aren't quite it, are they?
As a fallback, I could just write little scripts that change the directory and export the proper AWS_PROFILE value, but I wanted something that just works in context.
Am I asking for too much again? :-)

How to handle private configuration file when deploying?

I am deploying a Django application using the following steps:
Push updates to Git
Log into AWS
Pull updates from Git
The issue I am having is with my production.py settings file. I have it in my .gitignore so it does not get uploaded to GitHub for security reasons. This, of course, means it is not available when I pull updates onto my server.
What is a good approach for making this file available to my app when it is on the server, without having to upload it to GitHub where it would be exposed?
It is definitely a good idea not to check secrets into your repository. However, there's nothing wrong with checking in configuration that is not secret if it's an intrinsic part of your application.
In large scale deployments, typically one sets configuration using a tool for that purpose like Puppet, so that all the pieces that need to be aware of a particular application's configuration can be generated from one source. Similarly, secrets are usually handled using a secret store like Vault and injected into the environment when the process starts.
If you're just running a single server, it's probably fine to adjust your configuration or application to read secrets from the environment (or possibly a separate file) and set those values on the server. You can then include the other configuration settings (secrets excluded) as a file in the repository. If you need more flexibility later, you can pick up those other tools then.
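For example, here is a minimal sketch of reading secrets from the environment in a Django-style settings file. The environment variable names and values below are just examples, not anything your project necessarily uses:

import os

# Non-secret settings can live in this committed file.
DEBUG = False
ALLOWED_HOSTS = ["example.com"]

# Secrets are read from the environment on the server; fail loudly if missing.
SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "app"),
        "USER": os.environ.get("DB_USER", "app"),
        "PASSWORD": os.environ["DB_PASSWORD"],
        "HOST": os.environ.get("DB_HOST", "localhost"),
        "PORT": os.environ.get("DB_PORT", "5432"),
    }
}

The committed file then contains nothing sensitive, and each server only needs a handful of environment variables set.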

Working on the same Git repository with two different PCs, and two different PostgreSQL settings in the settings.py file

I'm very new to databases and I'm trying to find out the best practice for what I'm trying to achieve.
I have one repository, which is a Django backend with a PostgreSQL database attached. I'm working with this on my main PC, but recently I've had to work on my laptop. My laptop has another PostgreSQL database running on 5432, so I've had to change some of that info to use port 54324. I don't want these changes pushed to the repository, but I would still like to track the settings.py file in the repository. So far I've just created a branch for each PC to maintain the separate settings, but I'm sure this is not a great way to do it. I've heard about setting up environment files, but I'm unsure whether that is the 'right' way to do it either.
I'm a little confused with the best way I can do this, hopefully I'm making sense. Any help would be appreciated greatly.
Thanks,
Darren
This is normally solved with a properties file that is ignored. What you keep is a sample file (with a different name) that you do track in Git and change as needed. Your Python scripts read the properties file, and everybody is happy.
Besides eftshift0's answer, consider having a committed config.defaults.py file that sets default configuration values, which may be overridden by a per-site config.local.py file. If the default configuration works for you, you don't need to create the per-site config. If not, create the per-site config. Never commit (and do .gitignore) the per-site config.
The configuration files might even live outside the repository proper, but the overall idea still applies. The distributed (and committed) configuration file is a sample and/or default, and the actual site settings are kept in some other file that is never committed.
If you already have a single config.py or settings.py, you can establish this configuration pattern by adding site.py (use whatever name you want for this per-site settings file) as an ignored file. Read the new file, if it exists, so that the site settings override the default settings from the existing tracked file, and you're good to go.
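A minimal sketch of that layering in a settings.py (the module names are illustrative; in practice the defaults and local files need importable names like these):

# settings.py (committed): start from the shared defaults...
from config_defaults import *   # committed file, e.g. DB_PORT = "5432"

# ...then overlay per-machine settings if a local override file exists.
try:
    from config_local import *  # listed in .gitignore; e.g. DB_PORT = "54324" on the laptop
except ImportError:
    pass                        # no local overrides on this machine; defaults apply

On the laptop, config_local.py would contain only the PostgreSQL port that differs; every other machine simply uses the defaults.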

Where Is It Appropriate to Put AWS Keys?

I'm learning about Strongloop, it's pretty good so far.
Question: What is the appropriate place to put AWS keys? config.json? And how would I access them from my application?
Thanks
Ideally you would not put those credentials in any file that is committed. I usually find environment variables to be the best balance of convenience and security.
If you are using strong-pm, then you would do this with slc ctl env-set. If you are using some other supervisor, then you'll need to consult its docs.
A lot of times it is enough to use Upstart or systemd directly, which both make it fairly easy to set environment variables in the service process.
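The application then just reads those values from its environment at startup. StrongLoop itself is Node, but the idea is the same in any runtime; as a rough illustration in Python, using the standard AWS variable names:

import os

# Read the keys from the process environment instead of a committed config file.
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are the variable names the AWS SDKs
# and CLI already look for, so most SDKs pick them up without any code at all.
aws_access_key = os.environ["AWS_ACCESS_KEY_ID"]
aws_secret_key = os.environ["AWS_SECRET_ACCESS_KEY"]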
Other than the above answer, what you can do is handle these in your release procedure.
What we have done in our product is keep all these entries in a config file that is deployed from a shared folder.
Let me elaborate.
We have local config files in Git, and separate config files on the production servers in a folder named shared. Whenever a tagged release is deployed from Git, the files in the shared folder overwrite these config files.
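In other words, the overwrite step boils down to something like this sketch (the paths and script are made up for illustration, not our actual tooling):

import shutil
from pathlib import Path

SHARED = Path("/srv/app/shared/config")    # per-server config, kept outside Git
RELEASE = Path("/srv/app/current/config")  # config directory of the deployed tag

# After the tag is checked out, the server-specific files win over the repo copies.
for src in SHARED.iterdir():
    if src.is_file():
        shutil.copy2(src, RELEASE / src.name)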

Is there an easier way to remove old files on servers through GitHub after deleting files and syncing locally?

I currently work with a web dev team and we have 100+ GitHub repos, each for a different e-commerce website that has an instance on AWS. The developers use the GitHub app to upload their changes to the servers, and do this multiple times a day.
I'm trying to find the easiest way for us to remove old, deleted files from our servers after we delete and sync GitHub locally.
To make it clear, say we have an index.html, page1.html and page2.html. We want to remove page1.html, so they delete page1.html and sync through the GitHub app. The file is no longer visible in the repo, but to completely remove it I must also SSH into our AWS server, go to the www directory, find page1.html and remove it there as well. Is there an easier way for the developers, who do not use SSH and the command line, to get rid of those files when syncing with GitHub? It becomes a pain to have to SSH into many different servers and then determine which files were removed from the repo so that I can remove them there.
Thanks in advance
Something we do with our repo is use tags (releases) and then, through automation (Chef in our case), tell it to pull the new tag. It sounds like this wouldn't necessarily work for you, but what Chef actually does with the tag might be of interest.
It pulls the tag and then updates a symlink (and gracefully restarts Apache). This means there's zero downtime (the symlink updates instantly) and, because it's pulling a fresh copy, any deleted files are gone.
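Roughly, the deploy step amounts to the sketch below. The paths, repository, tag, and reload command are assumptions for illustration, not what Chef literally runs:

import os
import subprocess
import time

REPO = "git@github.com:example/site.git"   # hypothetical repository
TAG = "v1.2.3"                              # release tag to deploy
RELEASES = "/srv/site/releases"
CURRENT = "/srv/site/current"               # Apache's DocumentRoot points at this symlink

# 1. Fresh, shallow checkout of the tag into a new release directory,
#    so files deleted from the repo simply aren't there.
os.makedirs(RELEASES, exist_ok=True)
release_dir = os.path.join(RELEASES, f"{TAG}-{int(time.time())}")
subprocess.run(["git", "clone", "--branch", TAG, "--depth", "1", REPO, release_dir], check=True)

# 2. Atomically repoint the "current" symlink at the new release (zero downtime).
tmp_link = CURRENT + ".tmp"
if os.path.lexists(tmp_link):
    os.remove(tmp_link)
os.symlink(release_dir, tmp_link)
os.replace(tmp_link, CURRENT)

# 3. Graceful restart so Apache picks up the new docroot without dropping requests.
subprocess.run(["apachectl", "-k", "graceful"], check=True)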