Share source code between Elastic Beanstalk environments

I'm trying to share source code between two EB environments, and modules/components seem like a perfect way to do this. Modules allow for multiple environment configurations in one repo - I just put each one in a dedicated folder. Unfortunately, it also seems to force all the code into that dedicated folder, so I'd end up with duplicated code...
The documentation (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebcli-compose.html) says:
Each subfolder contains the source code for an independent component of an application that will run in its own environment
Any ideas how to get past this?
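For reference, the modules layout the documentation describes looks roughly like this (folder names are hypothetical; each module carries its own env.yaml environment manifest):
/project
- /appA        # contains env.yaml plus all of appA's source
- /appB        # contains env.yaml plus all of appB's source
Running eb compose appA appB from the project root deploys each subfolder to its own environment; any shared code would have to be copied into both folders, which is the duplication problem described above.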

Related

What is the correct way of sharing your elasticbeanstalk configuration with your team?

I am looking for a way to share the EB configuration so anyone in my team with valid AWS creds can deploy the code. By default, EB adds the following to your .gitignore file.
# Elastic Beanstalk Files
.elasticbeanstalk/*
!.elasticbeanstalk/*.cfg.yml
!.elasticbeanstalk/*.global.yml
Do I need to check in these files to share them with the team?
In my opinion, AWS royally messed up with their .gitignore defaults. This was confusing at first, because it seemed like the entries were there for a good reason. We couldn't find one. Maybe they were just a precaution so you didn't commit something you shouldn't. However, firstly, modifying a project's .gitignore is not something a tool should be doing by default; and secondly, no one should be committing code they haven't reviewed.
As Kush notes in his reply, you can add the files into a nested directory which would be tracked by your VCS. I'm assuming the reason for this is so that different developers can maintain different configurations. We have zero use for anything remotely resembling this, but it's worth noting, as I'm sure someone will find it useful.
We've completely removed these entries from our project and commit the entire .elasticbeanstalk and .ebextensions directories.
Assuming you have CLI access, you can create a template and share it with a command like:
eb config save dev-env --cfg prod
Now, open this file in a text editor to modify/remove sections as necessary for your production environment.
Note: AWSConfigurationTemplateVersion is a required field. Do not remove it from the configuration file.
Checking Configurations into Version Control
If you want to check in your saved configurations so that anyone with access to your code can use the same settings in their own environments, or if you want to track different versions of the saved configurations, move the file to the .elasticbeanstalk/ folder. Saved configurations are located in the .elasticbeanstalk/saved_configs/ folder; by moving the configuration file up one level into the .elasticbeanstalk/ folder, it can be checked in and will still work with the EB CLI. After you move the file, you must add and commit it.
Refer to this AWS blog post.
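Putting the steps above together, a minimal sketch of the workflow (environment and template names are hypothetical):
# Save the current config of dev-env as a template named "prod"
eb config save dev-env --cfg prod

# Move it up one level so it can be checked in and still work with the EB CLI
mv .elasticbeanstalk/saved_configs/prod.cfg.yml .elasticbeanstalk/

# Share it with the team
git add .elasticbeanstalk/prod.cfg.yml
git commit -m "Add shared EB configuration template"
Anyone who pulls the repo can then create an environment from it with eb create --cfg prod.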

How Docker and Ansible fit together to implement Continuous Delivery/Continuous Deployment

I'm new to the configuration management and deployment tools. I have to implement a Continuous Delivery/Continuous Deployment tool for one of the most interesting projects I've ever put my hands on.
First of all, individually, I'm comfortable with AWS. I know what Ansible is, the logic behind it, and its purpose. I don't have the same level of understanding of Docker, but I get the idea. I've been through a lot of Internet resources, but I can't get the big picture.
What I've been struggling with is how they fit together. Using Ansible, I can manage my infrastructure as code: building EC2 instances, installing packages... I can even deploy a full application by pulling its code, modifying config files, and starting the web server. Docker is, itself, a tool that packages an application and ensures that it can be run wherever you deploy it.
My problems are:
How does Docker (or Ansible and Docker) extend the Continuous Integration process!?
Suppose we have a source code repository; the team members finish working on a feature and push their work. Jenkins detects this, runs all the acceptance/unit/integration test suites, and if they all pass, declares it a stable build. How does Docker fit in here? I mean, when the team pushes their work, does Jenkins have to pull the Dockerfile kept in the app's source, build the image of the application, start a container and run all the tests against it? Or does it run the tests the classic way and, if all is good, build the Docker image from the Dockerfile and save it in a private place?
Should Jenkins tag the final image using x.y.z, for example!?
Docker container configuration:
Suppose we have an image built by Jenkins and stored somewhere. How do we handle deploying the same image into different environments, with different configuration parameters (vhost config, DB hosts, queue URLs, S3 endpoints, etc.)? What is the most flexible way to deal with this without breaking Docker principles? Are these configurations baked into the image when it gets built, or when a container based on it is started? If the latter, how are they injected?
Ansible and Docker:
Ansible provides a Docker module to manage Docker containers. Assuming I've solved the problems mentioned above, when I want to deploy a new version x.y.z of my app, I tell Ansible to pull that image from wherever it is stored and start the app container. So how do I inject the configuration settings!? Does Ansible have to log in to the Docker image before it's running (this sounds insane to me) and use its Jinja2 templates the same way as with a classic host!? If not, how is this handled?!
Excuse me if this was a long question or if I misspelled something, but this is me thinking out loud. I've been blocked for the past two weeks and I can't figure out the correct workflow. I want this to be a reference for future readers.
Please share your experiences and solutions; it would be very helpful, because this looks like a common workflow.
I would like to answer in parts.
How does Docker (or Ansible and Docker) extend the Continuous Integration process!?
Since Docker images are the same everywhere, you can treat your Docker images as production images. So when somebody commits code, you build your Docker image and run the tests against it. When all tests pass, you tag that image accordingly. Since Docker is fast, this is a feasible workflow.
Docker image changes are also incremental, so your images will have minimal impact on storage. And when your tests fail, you may choose to save that image too; a developer can then pull it and easily investigate why the tests failed. Developers may also choose to run the tests on their own machines, since the Docker image in Jenkins and on their machines is the same.
What this brings is that all developers have the same environment and the same versions of all software, since you decide what goes into the Docker images. I have come across bugs that were due to differences between developer machines. For example, on the same operating system, unicode settings may affect your code. But with Docker images, all developers test against the same settings and the same software versions.
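A minimal sketch of that build-test-tag pipeline as shell steps a Jenkins job might run (the image name, test command, and registry are hypothetical; GIT_COMMIT is a standard Jenkins variable):
# Build an image for the commit under test
docker build -t myapp:${GIT_COMMIT} .

# Run the test suite inside a container of that exact image
docker run --rm myapp:${GIT_COMMIT} ./run_tests.sh

# Only if the tests pass: tag it as a release and push it to a private registry
docker tag myapp:${GIT_COMMIT} registry.example.com/myapp:1.2.3
docker push registry.example.com/myapp:1.2.3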
Docker container configuration:
If you are using a private repository (and you should use one), then configuration changes will not affect disk space much. Therefore, except for security-sensitive configuration such as DB passwords, you can apply configuration changes to the Docker images themselves (baking the configuration into the container). You can then use Ansible to apply the non-baked configuration to deployed images before/after startup, using environment variables or Docker volumes.
https://dantehranian.wordpress.com/2015/03/25/how-should-i-get-application-configuration-into-my-docker-containers/
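For the second half of that approach, a minimal sketch of injecting the non-baked settings at container startup (the image name, variable, and paths are hypothetical):
# Secrets via environment variables, host-side config via a read-only volume
docker run -d \
  -e DB_PASSWORD=s3cret \
  -v /etc/myapp/prod:/app/config:ro \
  registry.example.com/myapp:1.2.3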
Does Ansible have to log in to the Docker image before it's running (this sounds insane to me) and use its Jinja2 templates the same way as with a classic host!? If not, how is this handled?!
No, Ansible will not log in to the Docker image, but Ansible with Jinja2 templates can be used to generate the Dockerfile. You can template the Dockerfile, injecting your configuration into different files, then tag the resulting images accordingly, and you have configured images ready to spin up.
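A minimal sketch of that idea as an Ansible play (the template file, image name, paths, and variables are all hypothetical; the docker_image parameters shown are from the community.docker collection):
- hosts: build_host
  vars:
    app_env: staging        # set per environment
    app_version: 1.2.3
  tasks:
    - name: Render the Dockerfile from a Jinja2 template
      template:
        src: Dockerfile.j2                  # hypothetical template in the repo
        dest: /tmp/myapp-build/Dockerfile

    - name: Build and tag an image for this environment
      community.docker.docker_image:
        name: myapp
        tag: "{{ app_version }}-{{ app_env }}"
        source: build
        build:
          path: /tmp/myapp-build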
Regarding your question about handling multiple environment configurations with the same Docker image, I have been planning on using a service discovery tool like Consul as a centralized config/property management tool. When you start your container, you set an ENV var that tells it what application it is (appID) and what environment config it should use (ex: MyApplication:Dev), and it pulls its config from Consul at startup. I still have to investigate the security around Consul (if we are storing DB connection credentials in there, for example, how do we restrict who can query/update those values?). I don't want to use this just for containers, but for all apps in general. Another cool capability is to change a config value in Consul and have a hook back into your app to apply the change immediately (maybe a REST endpoint on your app to push changes down to and apply them dynamically). Of course, your app has to be written to support this!
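A minimal sketch of that startup lookup as a container entrypoint; it uses Consul's real KV HTTP API, but the key names and the APP_ID/APP_ENV/CONSUL_ADDR variables are hypothetical:
#!/bin/sh
# Pull one config value from Consul's key/value store at startup.
# ?raw returns the bare value instead of the JSON envelope.
CONSUL_ADDR=${CONSUL_ADDR:-http://consul:8500}
DB_HOST=$(curl -s "${CONSUL_ADDR}/v1/kv/${APP_ID}/${APP_ENV}/db_host?raw")
export DB_HOST

# Hand off to the real application process
exec "$@"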
You might be interested in checking out Martin Fowler's blog articles on immutable infrastructure and on Phoenix servers.
Although this is not a complete solution, I have suggestions for two of your issues. They might not be perfect, but these are the practices we use in our workflow, and they have proven themselves so far.
Defining different environments - supposing you've written a different Ansible role for each environment you launch, we define an environment variable specifying the environment we wish the container to belong to. Using that variable, we then download the suitable configuration file from an S3 bucket into the container (which should be possible if you supply AWS creds or give your server an IAM role) and inject these parameters into the code when building it.
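A minimal sketch of that download step (the bucket, paths, and APP_ENV variable are hypothetical):
# APP_ENV is set when the container is launched, e.g. APP_ENV=staging
aws s3 cp "s3://my-config-bucket/${APP_ENV}/app.conf" /app/config/app.conf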
Ansible doesn't need to log in to the Docker app, but the solution is a bit tricky. I've tried two ways of tackling this problem, and neither is ideal. The first is to download the configuration file as part of the Docker image's command line and build the app on container startup. While this solution works, it breaches the Docker philosophy and makes the image highly prone to build errors.
Another solution is pushing several images to your docker hub repo, and then pulling the appropriate image according to the environment at hand.
In broader strokes, I've tried launching our app completely with Ansible and it was hell; many configuration steps are tricky, and they get trickier when you try to implement them as a playbook. When I switched to maintaining the servers alone with Ansible and deploying the app itself with Docker, things got a lot easier.

Source code structure: deploying Go with Subpackages

When there is just one Go repository and it imports only public dependencies, deploying (for example, to a Docker container on AWS) is extremely straightforward.
However, I have a question about how to use subpackages with Go.
Suppose we have a monorepo with 3 packages.
/src
- /appA
- /appB
- /someSharedDep
How are deployments typically built so that you deploy appA and someSharedDep to one server and appB and someSharedDep to another server?
I imagine there needs to be some creative employment of our friend the GOPATH, but some help on the topic would be appreciated.
Bonus points if we're talking about an elastic beanstalk deployment.
Suspicions
I have some thoughts on how to approach the problem now (and I'll add more or submit an answer if this becomes more complete).
Use vendoring; this means you have the source code of all your dependencies checked into your own repo. It doesn't sound good (especially if you're used to NodeJS), but it works.
Now that you use vendoring, vendor your own submodules into the ./vendor folder in each application. Yes, you will have copies of the same code in two places, but whatever.
Create automated scripts to help manage some of these things (a sketch follows this list). I'm still looking for tools that make vendoring more convenient.
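A minimal sketch of what such a script might do, assuming a hypothetical example.com import path for the shared package:
# Copy the shared package into each app's vendor tree so each app
# builds as a self-contained unit
cp -r src/someSharedDep src/appA/vendor/example.com/someSharedDep
cp -r src/someSharedDep src/appB/vendor/example.com/someSharedDep

# Build each app independently; each folder can now be zipped and
# deployed on its own (e.g. to its own Elastic Beanstalk environment)
(cd src/appA && go build -o bin/application .)
(cd src/appB && go build -o bin/application .)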
Some problems I still face:
When vendoring, sometimes the dependency has files that contain a main() function or declare package main, usually in an ./example subfolder. These have to be removed manually. I don't like editing the working source of someone else's project, though!

AWS EB: Multiple env.config files for various environments?

I've got an env.config in source control, but pretty much the only things I can put in it are things that relate to all my various environments (production, staging). I've got environment-specific settings that I want to add to the env.config file (for instance, the DB host) that will change from environment to environment. How can I handle these differences? Right now I'm doing it from the AWS console, where I can manage it in the GUI on a per-environment basis, but I'd love to be able to change a lot of this stuff from git so I don't have to log into the console whenever I want to change something.
Is there any way to have multiple, environment specific config files?
This has been posted before in the AWS forums (https://forums.aws.amazon.com/thread.jspa?messageID=529373). So far there are only workarounds! The problem is that the .config files would require some logic to figure out which environment you're attempting to target. Personally, I don't think any logic is required, as you could simply namespace the config settings based on the AWS environment name you're targeting.
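As an illustration of such a workaround (not a built-in EB feature), a sketch of a single .ebextensions config that branches on an environment property; MY_ENV, the file paths, and the hostnames are all hypothetical:
option_settings:
  aws:elasticbeanstalk:application:environment:
    MY_ENV: staging    # override per environment in the console or via eb setenv

container_commands:
  01_write_db_host:
    command: |
      if [ "$MY_ENV" = "production" ]; then
        echo "DB_HOST=prod-db.example.com" >> /opt/app/.env
      else
        echo "DB_HOST=staging-db.example.com" >> /opt/app/.env
      fi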
I think your use case is similar to what is discussed in How to configure Elastic Beanstalk for RDS.
You may want to use 'eb branch'. You can then have multiple environments with different configurations.
More documentation on eb branch here

How do I know what .ebextensions config file to create?

I think I'm on the right path. I can use .ebextensions to change some of the conf files for the instance I'm running. Since I'm using Elastic Beanstalk, and a lot of the software is shrink-wrapped (which I'm fine with), I should be using .ebextensions as the means of modifying the environment.
I want to employ some form of mod_rewrite config, but I know nothing of this Amazon Linux. I don't even know what the web server is. I've been through the console for the past few hours and see no trace of the things I want to override.
Apparently I can set up a shell to take a look around, but modifying things that way will cause them to be overridden, since Beanstalk is handling the config. I'm not entirely sure on that last point.
Should I just ssh and play in userland like a typical unix host?
You can definitely ssh to the instance and look around. But remember that your changes are not persistent. You should look at .ebextensions config files as the way to re-run your commands on the host, plus more.
It might take some time to discover where Elastic Beanstalk stores configuration files and all the other interesting things.
To get you started: your app files are located at /opt/python/current/app, and if you are using Python, the interpreter lives in a virtual environment at /opt/python/run/venv/bin/python27.
The Customizing the Software on EC2 Instances Running Linux guide contains detailed information on what you can do:
Packages - install packages
Sources - retrieve archives
Files - operations with files
Users - anything with users
Groups - anything with groups
Commands - execute instance commands
Container_commands - execute commands after the container is extracted
Services - launch services
Option_settings - configure container settings
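For example, a sketch of how the Files and Container_commands keys could address the mod_rewrite question above (the paths assume the platform's web server is Apache, and the rule itself is hypothetical):
files:
  "/etc/httpd/conf.d/rewrite.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      RewriteEngine On
      RewriteRule ^/old-path$ /new-path [R=301,L]

container_commands:
  01_reload_httpd:
    command: "service httpd reload"    # assumes Apache; re-runs on every deploy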
See if that satisfies your requirements; if not, come back to Stack Overflow and ask more questions.