Continuous deployment without cloning the whole repository - amazon-web-services

I am searching for a solution for continuous deployment in a cloud environment, more specifically in an Amazon AWS environment.
The code to be deployed is mainly Microsoft ASP and PHP, so the framework should work on both platforms. Since I have an auto-scaling environment, the framework needs to work in pull mode, the way Puppet does, with each server pulling the new code itself.
My first thought was to deploy directly from the VCS, but I ran into the problem that the whole repository gets mirrored to the servers, which is how Git works, for instance. This is a problem because the repository keeps growing and the servers will demand more and more space.
I found Ansible, which works the way I need but does not work in a Windows environment. It sends only the production code to the servers, not the VCS repository, and keeps track of which servers are up to date.
Without an easy-to-set-up framework like that, I will need to build a Puppet + Jenkins + VCS pipeline, where Jenkins creates the package from the VCS source code and Puppet delivers it.
Does anybody know a small framework that fits these needs, or is Puppet + Jenkins + VCS the way to go?

Consider CloudMunch (www.cloudmunch.com) for this. The platform is built exactly to solve this kind of polyglot requirement.
Disclaimer: I work for CloudMunch

Related

Best way of deployment automation

I've made a pretty simple web application with a REST API backend service written in Python/Django and a front-end service written in JS/React. Both parts are containerized and can be launched locally via docker-compose. They live in separate GitHub repositories, and every time a new tag is pushed to a repo, an image is built and pushed to the corresponding ECR repository via GitHub Actions. Up to that point everything works smoothly, but the problem is that I don't know how to properly automate the deployment process to the test and production environments. The goal is to keep those environments as similar as possible.
My current solution for the test environment is to simply upload the docker-compose file to the EC2 instance via GitHub Actions and then manually run the docker-compose command, which pulls the images from ECR.
Even though it's a simple solution, it's not very scalable or automated, and it requires manual work every time the application is updated. The desired flow is a manually triggered GitHub Action in each repository that deploys either the BE or the FE to the test or prod environment, without any need to SSH into the server by hand or do any other manipulations with Docker.
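For reference, the manual part currently boils down to something like the following when scripted; this is only a minimal sketch of what an automated workflow step could run instead of a person (it assumes paramiko is available on the runner, and the host name, key path and app directory are placeholders):

    import os
    import paramiko  # assumed to be installed on the Actions runner

    def deploy(host, user="ec2-user", key_path="~/.ssh/deploy_key.pem", app_dir="/home/ec2-user/app"):
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(hostname=host, username=user, key_filename=os.path.expanduser(key_path))
        # Pull the images referenced in docker-compose.yml and restart the stack.
        for cmd in (f"cd {app_dir} && docker-compose pull",
                    f"cd {app_dir} && docker-compose up -d"):
            _, stdout, stderr = client.exec_command(cmd)
            print(stdout.read().decode(), stderr.read().decode())
        client.close()

    if __name__ == "__main__":
        deploy("test-env.example.com")  # placeholder host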
I was looking at ECS, but it seems to be a solution for bigger apps that need several or more instances to run. I want my app to be used by many users, but I'm not sure when that will happen, so maybe I should stick to something less complicated than ECS. Are there any simpler solutions that I am missing, like Elastic Beanstalk or something from another provider?
I'd be happy to receive feedback on anything I wrote in this post. Thanks!

Deploy only changed files on AWS Elastic Beanstalk website

I have successfully deployed my website on AWS Elastic Beanstalk. Now I want to change the code in one of my files.
If I do eb deploy, it will deploy a completely new version of my code, which I don't want. I already have an updated DB on Elastic Beanstalk, and if I deploy the whole code again, it will overwrite my DB file.
How can I successfully deploy only the changed file?
This may not be the answer you're looking for, but I would highly recommend deleting this file from your code repository. Hopefully you're using a version control system like Git; if you want to keep the original file for historical purposes, I would create an entirely separate repository and put it there.
Why? Even if you did come up with a solution to deploy only changed files, would you really want to trust it? If there were any problem with that solution, you would entirely erase or overwrite your production database. Not good.
In addition, if you want to build a really robust system that can create your app from scratch in AWS, take a look at CloudFormation. It takes some learning and work, but you can build a script, maintained in version control, that will scaffold your entire cloud infrastructure.
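To give an idea of the shape of such a script, here is a minimal sketch using boto3; the stack name and template file are made up, error handling is reduced to the bare minimum, and update_stack will complain if there is nothing to change:

    import boto3

    def create_or_update_stack(stack_name, template_path):
        cfn = boto3.client("cloudformation")
        with open(template_path) as f:
            body = f.read()
        try:
            cfn.create_stack(StackName=stack_name, TemplateBody=body,
                             Capabilities=["CAPABILITY_IAM"])
        except cfn.exceptions.AlreadyExistsException:
            cfn.update_stack(StackName=stack_name, TemplateBody=body,
                             Capabilities=["CAPABILITY_IAM"])

    create_or_update_stack("my-app-stack", "infrastructure.yaml")  # placeholder names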

How Docker and Ansible fit together to implement Continuous Delivery/Continuous Deployment

I'm new to configuration management and deployment tools. I have to implement a Continuous Delivery/Continuous Deployment pipeline for one of the most interesting projects I've ever put my hands on.
First of all, individually, I'm comfortable with AWS, and I know what Ansible is, the logic behind it, and its purpose. I don't have the same level of understanding of Docker, but I get the idea. I've gone through a lot of Internet resources, but I can't get the big picture.
What I've been struggling with is how they fit together. Using Ansible, I can manage my infrastructure as code: building EC2 instances, installing packages... I can even deploy a full application by pulling its code, modifying config files, and starting the web server. Docker, for its part, is a tool that packages an application and ensures that it can be run wherever you deploy it.
My problems are:
How does Docker (or Ansible and Docker) extend the Continuous Integration process?
Suppose we have a source code repository; the team members finish working on a feature and push their work. Jenkins detects this, runs all the acceptance/unit/integration test suites, and if they all pass, it declares the build stable. How does Docker fit in here? When the team pushes their work, does Jenkins have to pull the Dockerfile kept with the app's source code, build the application image, start a container, and run all the tests against it? Or does it run the tests the classic way and, if all is good, then build the Docker image from the Dockerfile and save it somewhere private?
Should Jenkins tag the final image with a version such as x.y.z?
Docker container configuration:
Suppose we have an image built by Jenkins and stored somewhere. How do we handle deploying that same image into different environments, with different configuration parameters (vhost config, DB hosts, queue URLs, S3 endpoints, etc.)? What is the most flexible way to deal with this without breaking Docker principles? Are these configurations baked into the image when it gets built, or injected when the container based on it is started, and if so, how are they injected?
Ansible and Docker:
Ansible provides a Docker module to manage Docker containers. Assuming I've solved the problems mentioned above, when I want to deploy a new version x.y.z of my app, I tell Ansible to pull that image from wherever it is stored and start the app container. So how do I inject the configuration settings? Does Ansible have to log into the Docker image before it's running (this sounds insane to me) and use its Jinja2 templates the same way as with a classic host? If not, how is this handled?
Excuse me if this is a long question or if I've misspelled something, but this is me thinking out loud. I've been blocked for the past two weeks and I can't figure out the correct workflow. I want this to be a reference for future readers.
Please share your experiences and solutions; it would be very helpful, because this looks like a common workflow.
I would like to answer in parts.
How does Docker (or Ansible and Docker) extend the Continuous Integration process?
Since Docker images are the same everywhere, you use your Docker images as if they were production images. So when somebody commits code, you build a Docker image, run your tests against it, and when all the tests pass, you tag the image accordingly. Since Docker is fast, this is a feasible workflow.
Docker image changes are also incremental, so your images will have minimal impact on storage. When your tests fail, you can choose to save that image too; a developer can then pull it and easily investigate why the tests failed. Developers can also run the tests on their own machines, since the Docker images on Jenkins and on their machines are no different.
What this brings is that all developers share the same environment and the same versions of all software, since you decide exactly what goes into the Docker images. I have come across bugs that were due to differences between developer machines. For example, on the same operating system, Unicode settings may affect your code, but with Docker images all developers test against the same settings and the same software versions.
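As a rough sketch of that flow using the Docker SDK for Python (the image names, tags, registry URL and test command are placeholders, not part of any particular setup):

    import docker

    client = docker.from_env()

    # Build the image from the Dockerfile committed alongside the application code
    # (recent SDK versions return the image plus a build log stream).
    image, _ = client.images.build(path=".", tag="myapp:candidate")

    # Run the test suite inside a container based on the freshly built image;
    # containers.run() raises ContainerError on a non-zero exit code.
    output = client.containers.run("myapp:candidate", "pytest tests/", remove=True)
    print(output.decode())

    # Tests passed: tag the image and push it to the private registry.
    image.tag("registry.example.com/myapp", tag="1.2.3")
    client.images.push("registry.example.com/myapp", tag="1.2.3")

Whether Jenkins drives this through the SDK, the docker CLI or a plugin is a detail; the important part is that the image that passed the tests is exactly the one that gets tagged and pushed.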
Docker container configuration:
If you are using a private registry, and you should use one, then configuration changes will not cost much disk space. So, except for security-sensitive settings such as DB passwords, you can apply configuration changes directly to the Docker images (baking the configuration into the container). You can then use Ansible to apply the non-baked configuration to the deployed images before/after startup, using environment variables or Docker volumes.
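A minimal sketch of what the non-baked part might look like inside the application, assuming the values are passed as environment variables when Ansible starts the container (the variable names are illustrative):

    import os

    # Non-sensitive defaults can be baked into the image; anything environment-
    # specific or secret is read from variables set when the container starts.
    DB_HOST = os.environ.get("DB_HOST", "localhost")
    DB_PASSWORD = os.environ["DB_PASSWORD"]      # injected at startup, never baked in
    QUEUE_URL = os.environ.get("QUEUE_URL", "")
    S3_ENDPOINT = os.environ.get("S3_ENDPOINT", "https://s3.amazonaws.com")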
https://dantehranian.wordpress.com/2015/03/25/how-should-i-get-application-configuration-into-my-docker-containers/
Does Ansible have to log into the Docker image before it's running (this sounds insane to me) and use its Jinja2 templates the same way as with a classic host? If not, how is this handled?
No, Ansible will not log into the Docker image, but Ansible with Jinja2 templates can be used to change the Dockerfile. You can template the Dockerfile and inject your configuration into different files. Tag your images accordingly and you have configured images ready to spin up.
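For example, a small render step like the following (a sketch; the template contents and variable names are made up) can produce an environment-specific config file that the Dockerfile then copies into the image:

    from jinja2 import Template

    # Placeholder template; in practice this would live in the repo next to the Dockerfile.
    template = Template("DB_HOST={{ db_host }}\nQUEUE_URL={{ queue_url }}\n")

    with open("config.env", "w") as f:
        f.write(template.render(db_host="db.staging.internal",
                                queue_url="https://queue.example.com/staging"))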
Regarding your question about handling multiple environment configurations with the same Docker image: I have been planning to use a service discovery tool like Consul as a centralized config/property management store. When you start your container, you set an env var that tells it what application it is (appID) and which environment config it should use (e.g. MyApplication:Dev), and it pulls its config from Consul at startup. I still have to investigate the security around Consul (if we store DB connection credentials in there, for example, how do we restrict who can query or update those values). I don't want to use this just for containers, but for all apps in general. Another nice capability is changing a config value in Consul and having a hook back into your app to apply the change immediately (maybe a REST endpoint on your app that changes are pushed to and applied dynamically). Of course your app has to be written to support this!
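A minimal sketch of that startup lookup against Consul's KV HTTP API; the key layout and environment variable names here are assumptions for illustration:

    import base64
    import os
    import requests

    CONSUL = os.environ.get("CONSUL_ADDR", "http://localhost:8500")
    APP_ID = os.environ["APP_ID"]    # e.g. "MyApplication"
    APP_ENV = os.environ["APP_ENV"]  # e.g. "Dev"

    def get_config(key):
        # Consul's KV API returns a JSON list with base64-encoded values.
        resp = requests.get(f"{CONSUL}/v1/kv/{APP_ID}/{APP_ENV}/{key}")
        resp.raise_for_status()
        return base64.b64decode(resp.json()[0]["Value"]).decode()

    db_host = get_config("db_host")  # placeholder key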
You might be interested in checking out Martin Fowler's blog articles on immutable infrastructure and on Phoenix servers.
Although not a complete solution, I have suggestions for two of your issues. They might not be perfect, but these are the practices we use in our workflow, and they have proven themselves so far.
Defining different environments: assuming you've written a different Ansible role for each environment you launch, we define an environment variable that sets the environment the container belongs to. We then download the suitable configuration file from an S3 bucket into the container, using that env variable (which works if you supply AWS credentials or give your server an IAM role), and inject those parameters into the code when building it.
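A minimal sketch of that download step, assuming boto3 and instance-role credentials; the bucket name, key layout and target path are placeholders:

    import os
    import boto3

    env = os.environ["APP_ENV"]  # e.g. "staging" or "production", set by the Ansible role
    s3 = boto3.client("s3")      # credentials come from the instance IAM role
    s3.download_file("my-config-bucket", f"configs/{env}/app.conf", "/etc/myapp/app.conf")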
Ansible doesn't need to log into the Docker app, but the solution is a bit tricky. I've tried two ways of tackling this problem, and neither is ideal. The first is to download the configuration file as part of the Docker image's command line and build the app on container startup. While this solution works, it breaches the Docker philosophy and makes the image highly prone to build errors.
The other solution is to push several images to your Docker Hub repo and then pull the appropriate image for the environment at hand.
On a broader note, I've tried launching our app completely with Ansible and it was hell; many configuration steps are tricky and get trickier when you try to implement them as a playbook. When I switched to maintaining the servers alone with Ansible and deploying the app itself with Docker, things got a lot easier.

Continuous deployment of C/C++ executable to Linux production servers

I wonder if there is a best practice, or at least a more practical way, to deploy C/C++ executables to Linux-based production servers.
I have Jenkins up and running as a CI server, and I have created a main SVN module which contains multiple svn:externals. This module mainly serves as a pipeline of related C++ applications. (Perhaps I should post another question on whether svn:externals is the correct way to do this.)
So the main question is about the deployment steps. I am planning to make all production servers Jenkins slaves with a parameterized config, for the purpose of building from SVN tags, and to use some scripts to copy all executables to, e.g., /opt/mytools/bin on multiple production servers.
Any recommendations?
The best deployment route is the one specified by your distribution, IMHO. That is, for Debian-based systems, bundle your applications into .deb files, put them in a repository, and let apt-get take care of the rest. This way you have minimal impact on the production environment, and most admins are already familiar with the deployment scheme.
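This is not the full Debian packaging toolchain, but as a rough sketch of how a Jenkins job could stage a built executable and produce a .deb with dpkg-deb (the package name, version and paths are placeholders):

    import os
    import shutil
    import subprocess

    PKG, VERSION = "mytools", "1.0.0"
    root = f"build/{PKG}_{VERSION}"

    # Stage the package layout: control metadata plus the install path used in production.
    os.makedirs(f"{root}/DEBIAN", exist_ok=True)
    os.makedirs(f"{root}/opt/mytools/bin", exist_ok=True)
    with open(f"{root}/DEBIAN/control", "w") as f:
        f.write(f"Package: {PKG}\n"
                f"Version: {VERSION}\n"
                "Architecture: amd64\n"
                "Maintainer: Build Team <build@example.com>\n"
                "Description: Internal C++ tools\n")
    shutil.copy("out/mytool", f"{root}/opt/mytools/bin/")  # placeholder build output

    # Produces build/mytools_1.0.0.deb, ready to upload to the apt repository.
    subprocess.run(["dpkg-deb", "--build", root], check=True)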
I'm working through some of the same questions, and I'm finding that Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Humble and Farley has been a good (technology-agnostic) starting point; not perfect, but it pointed me in the right direction when I had no idea what to do next.
The Continuous Delivery book recommends setting up 'build pipelines' in which you run progressively more automated tests, with only the final manual tests and the deploy/rollback decisions being triggered by a real person.

Utilizing EC2 or Azure to scale out a distributed C# build system?

Quite a few build and CI systems support steps for pushing build output to Azure, but I haven't seen any that can actually run on Azure (or EC2). Ideally, I would like to be able to spin up an arbitrary number of instances (depending on the number of pending submits) to handle the actual build + quality gates (UTs, FxCop, other static analysis tools) + source repository check-in process.
Are there existing tools which can do this, or has anyone built something which they can discuss?
Thanks!
[Edit: I found this question which is quite similar but didn't have any informative answers, so I'll keep my question alive]
If you're using Git or Mercurial for source control, AppHarbor might be what you're looking for. It's a CI build/deploy environment that runs entirely in the cloud (on EC2) and can deploy build output to Azure.
Here are some links for reference:
http://sourcecodebean.com/archives/appharbor-heroku-for-net/987
http://lostechies.com/chrismissal/2011/03/12/using-appharbor-for-continuous-integration
http://haacked.com/archive/2011/05/12/making-let-me-bing-that-for-you-open-source.aspx
http://appharbor.com/page/pricing
The open source Jenkins CI server has an EC2 plugin that will spin up EC2 instances automatically depending on your build load. I couldn't find anything similar for Azure, but I highly recommend Jenkins: it's easy to configure, well maintained, and has stacks of features.
Continuous Integration on Windows Azure http://code.google.com/p/cassis/ (over Mercurial)
Disclaimer: work produced by my 1st year CS students
TeamCity also has support for this: http://www.jetbrains.com/teamcity/features/amazon_ec2.html