How to set up a cloud development environment for parallel development? - amazon-web-services

I want to set up a cloud development environment for my personal use.
Requirements:
1. Have a cloud web server (basically any Linux system) serving my Elixir (backend language) app.
2. Connect Sublime Text / Atom to this server (via SFTP maybe), and make code changes and save. Automatic compilation and other tasks will be taken care of by mix or a task runner.
3. Multiple device connectivity to this setup.
Reasons for this setup:
I want to be able to develop from anywhere (office, home, etc), just configure the IDE and continue to work from where I left off last from any device.
Better productivity and less setup required.
Secure as well
Current solution I have:
Set up a Linux instance with an SFTP server enabled.
Created projects under the root of the SFTP directory.
Run task runners in those projects to auto-compile and serve, along with other tasks.
Connected Sublime Text to this SFTP server and started working. On save it uploads the file to the server.
I connect another laptop to this server and can continue working from the last saved state.
This setup has worked fine so far, but if there is a better way to do this, I would love to know.

Since you are using git, you don't need a separate cloud server for syncing your dev environments. The easiest way to meet your needs would be to create a branch in git called workinprogress (for example), and then push and pull to it from your various locations. When you have something you want to publish to the main branch you can do an interactive rebase before merging, which enables you to rewrite the history of your workinprogress branch, squashing and rewording the commit messages as much as you like. Then once you have everything you want on the main branch, you either delete workinprogress and start a new one, or just git checkout workinprogress && git reset --hard master.
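As a rough sketch, the day-to-day commands for that workflow might look like this (remote and branch names are just placeholders):
git checkout -b workinprogress
git push -u origin workinprogress      # from location A: commit and push as you go
git pull                               # from location B: pick up where you left off
git rebase -i master                   # when a feature is ready: squash/reword the WIP commits
git checkout master && git merge workinprogress && git push origin master
git checkout workinprogress && git reset --hard master   # start the next round of WIP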
If you still want to have your Elixir app on a live server somewhere, then you can just pull from Github on that server and get latest updates for the app.
I work from various places too, and use this workflow. No problems so far.

Related

Run automation on a headless browser in an EC2 instance with Amazon Linux

I have an automation framework that uses static HTML pages within its project directory to perform certain AWS operations such as DynamoDB scans and AWS Lambda executions. Due to a performance bottleneck in a dependent API component for the test, we are trying to move the framework to an EC2 instance with Amazon Linux and run the tests from there.
Since we have methods in the TestNG class that use Selenium WebDriver to spin up a browser and open the static page in order to perform the required AWS operations, I am pretty sure these tests are going to run into issues.
There are two potential approaches I see for solving this issue:
Implement AWSUtil classes and use the necessary AWS clients to replace the web-dependent logic (will require some effort and re-engineering)
Use a headless Chrome browser (or any compatible one) to run the web-dependent steps.
I am pretty sure that number 1 can be easily achieved; it's just a matter of time and effort. However, I would love to know if there is an easy way of accomplishing #2, since this would not require any code rewrite.
We had the same issue and have been successful with Puppeteer:
https://github.com/GoogleChrome/puppeteer
If you don't want to install the latest version of Node, you can dockerize your tests.
Puppeteer can run headless or with a full browser.
Hope it helps.
There is no need to change anything in your tests, just the setup and the execution. The tests can run headlessly on a Continuous Integration (CI) server. There is no out-of-the-box setup since there is no display output for the browser to launch in. However with Xvfb you can launch the browser virtually. Straight from the docs:
Xvfb (short for X virtual framebuffer) is an in-memory display server for UNIX-like operating system (e.g., Linux). It enables you to run graphical applications without a display
Depending on whether you want to keep Xvfb running in the background until it is killed, there are two options:
Xvfb :99 &
export DISPLAY=:99
run-your-tests-here
or
xvfb-run run-your-tests-here
Here is a Linux tutorial. I am using this for my Docker-based Jenkins setup and it works like a charm, every time.
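For the Amazon Linux case specifically, the whole thing might look roughly like this (the Xvfb package name and the mvn test command are assumptions; installing Chrome/chromedriver is a separate step not shown):
sudo yum install -y xorg-x11-server-Xvfb   # Xvfb package on RHEL-family distros
Xvfb :99 -screen 0 1280x1024x24 &          # virtual display on :99
export DISPLAY=:99
mvn test                                   # or however the TestNG suite is launched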

How to work with a local development server and deploy to a production server in Django?

I want to work locally on my Django (1.7) project and regularly deploy updates to a production server. How would you do this? I have not found anything about this in the docs. I am confused because it seems like many people would want to do this, so there should be some kind of standard solution. Or am I getting the whole workflow wrong?
I should note that I'm not expecting a step-by-step guide. I am just trying to understand the concept.
Assuming you already have your deployment server set up, and all you need to do is push code to your server, then you can just use git as a form of deployment.
Digital Ocean has a good tutorial at this link https://www.digitalocean.com/community/tutorials/how-to-set-up-automatic-deployment-with-git-with-a-vps
Push sources to a git repository from a dev machine.
Pull sources on a production server. Restart uwsgi/whatever.
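A minimal sketch of that flow (paths, remote and service names here are assumptions, not part of the original answer):
# on the dev machine
git push origin master
# on the production server
cd /srv/myproject && git pull origin master
pip install -r requirements.txt              # if dependencies changed
python manage.py migrate                     # Django 1.7+ built-in migrations
python manage.py collectstatic --noinput
sudo service uwsgi restart                   # or touch the wsgi file / reload nginx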
There is no standard way of doing this, so no, it cannot be included with Django or be thoroughly described in the docs.
If you're using a PaaS, how you deploy depends on the PaaS. Ditto for a container like Docker: you must follow the rules of that particular container.
If you're old-school and can ssh into a server you can rsync a snapshot of the code to the correct place after everything else is taken care of: database, ports, webserver setup etc. That's what I do, and I control stuff with bash scripts utilizing a makefile.
REMOTEHOST=user@yourbox
REMOTEPATH=yourpath
REMOTE=$REMOTEHOST:$REMOTEPATH
make rsync REMOTE_URI=$REMOTE
ssh $REMOTEHOST make -C $REMOTEPATH deploy
My "deploy" action is a monster, but it might be as simple as touching the wsgi file in order to reload the site. My medium-complexity ones clean out stale files, run collectstatic and then reload the site. The really complex ones create a timestamped virtualenv, a cloned database and a remote code tree, plus a new server setup that points to these; they run connection tests on the remote and, if those succeed, switch the main site to point to the new versioned site, then email me the version that is now in production, with the git hash and timestamp.
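A sketch of what the medium-complexity case might run on the remote box (paths and file names are placeholders, not the author's actual makefile):
find . -name '*.pyc' -delete                  # clean out stale bytecode
python manage.py collectstatic --noinput      # refresh static files
touch /srv/mysite/mysite/wsgi.py              # touching the wsgi file makes the app server reload the site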
Lots of good solutions. Heroku has a good tutorial: https://devcenter.heroku.com/articles/getting-started-with-django
Check out a general guide for deploying to multiple PaaS providers here: http://www.paascheatsheet.com

Django Server Structure and Conventions

I'm interested in figuring out the best practice way of organising Django apps on a server.
Where do you place Django code? The (now old) Almanac says /home/django/domains/somesitename.com/ but I've also seen things placed in /opt/apps/somesitename/ . I'm thinking that the /opt/ idea sounds better as it's not global, but I've not seen opt used before, and presumably it might be better for apps to go in a site-specific deployer user's home dir.
Would you recommend having one global deployer user, one user per site, or one per site-env (e.g., sitenamelive, sitenamestaging)? I'm thinking one per site.
How do you version your config files? I currently put them in an /etc/ folder at the top level of source control, e.g. /etc/nginx/somesite-live.conf.
How do you provision your servers and do the deployment? I've resisted Chef and Puppet for years in the hope of something Python based. Silver Lining doesn't seem ready yet, and I have big hopes for Patchwork (https://github.com/fabric/patchwork/). Currently we're just using some custom Fabric scripts to deploy, but the "server provisioning" is handled by a bash script and some manual steps for adding keys and creating users. I'm about to investigate Silk Deployment (https://bitbucket.org/btubbs/silk-deployment) as it seems closest to our setup.
Thanks!
I think there would have to be more information on what kinds of sites you are deploying: there would be differences based on the relations between the sites, both programmatically and 'legally' (as in a business relation):
Having a system account per 'site' can be handy if the sites are 'owned' by different people - if you are a web designer or programmer with a few clients, then you might benefit from separation.
If your sites are related, i.e. a forum site, a blog site etc, you might benefit from a single deployment system (like ours).
For libraries, if they're hosted on reputable sources (PyPI, GitHub etc), it's probably OK to leave them there and deploy from them - if they're on dodgy hosts which flip between up and down, we take a copy and put them in a /thirdparty folder in our git repo.
FABRIC
Fabric is amazing - if it's set up and configured right for you:
We have a policy here which means nobody ever needs to log onto a server (which is mostly true - there are occasions where we want to look at the raw nginx log file, but it's a rarity).
We've got Fabric configured so that there are individual functional blocks (restart_nginx, restart_uwsgi etc), but also
higher level 'business' functions which run all the little blocks in the right order - for us to update all our servers we merely type 'fab -i secretkey live deploy' - the live sets the settings for the live servers, and deploy deploys (the -i is optional if you have your .ssh keys set up right).
We even have a control flag that if the live setting is used, it will ask 'are you sure' before performing the deploy.
Our code layout
So our code base layout looks a bit like this:
/ <-- folder containing readme file etc
/bin/ <-- folder containing nginx & uwsgi binaries (!)
/config/ <-- folder containing nginx config and pip list but also things like pep8 and pylint configs
/fabric/ <-- folder containing fabric deployment
/logs/ <-- holding folder that nginx logs get written into (but not committed)
/src/ <-- actual source is in here!
/thirdparty/ <-- third party libs that we didn't trust the hosting of for pip
Possibly controversial because we load our binaries into our repo, but it means that if I upgrade nginx on the boxes and want to roll back, I just do it by manipulating git. I know what works against what build.
How our deploy works:
All our source code is hosted on a private Bitbucket repo (we have a lot of repos and a few users; that's why Bitbucket works better for us than GitHub). We have a user account for the 'servers' with its own SSH key for Bitbucket.
Deploy in fabric performs the following on each server:
irc bot announce beginning into the irc channel
git pull
pip deploy (from a pip list in our repo)
syncdb
south migrate
uwsgi restart
celery restart
irc bot announce completion into the irc channel
start availability testing
announce results of availability testing (and post report into private pastebin)
The 'availability test' (think unit test, but against the live server) hits all the webpages and APIs on the 'test' account to make sure it gets back sane data without affecting live stats.
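Roughly what those per-server steps could look like as a plain script (the real thing is a Fabric task; paths, file names and service names here are assumptions):
cd /srv/app
git pull
pip install -r config/pip-requirements.txt
python manage.py syncdb --noinput
python manage.py migrate                 # South migrations
service uwsgi restart
service celeryd restart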
We also have a backup git service, so if Bitbucket is down it fails over to that gracefully, and we even have Jenkins integration so that a commit to the 'deploy' branch triggers the deployment.
The scary bit
Because we use cloud computing and expect a high throughput, our boxes auto-spawn. There's a default image which contains a copy of the git repo etc, but invariably it will be out of date, so there's a startup script which runs a deployment against itself, meaning new boxes added to the cluster are automatically up to date.
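That startup hook might be nothing more than a wrapper that re-runs the same deploy steps locally (the script path and log file here are hypothetical):
#!/bin/bash
# first-boot hook, e.g. invoked from rc.local or cloud-init user data
/srv/app/scripts/deploy-self.sh >> /var/log/first-boot-deploy.log 2>&1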

Django and multi-stage servers

I am working with a client that demands a multi-stage server setup: a development server, a staging server and a production/live server.
Staging should be as stable as possible so we can test all the new features we develop on the development server before taking them to the live server.
We use git and github for version controlling. I use Ubuntu server edition as the OS.
The problem is, I have never worked with such a multi-stage server plan. What software/projects would you recommend for handling such a setup properly, especially deployment and moving a newly developed feature to staging and then to the live server?
We use two different methods of moving code from environment to environment. The first is to use branches and triggers with our source control system (Mercurial in our case, though you can do the same thing with git). The other is to use Fabric, a Python library for executing shell code across a number of servers.
Using source control, you can have several main branches, like production, development and staging. Say you want to move a new feature into staging. I'll explain in terms of Mercurial, but you can port the commands over to git and it should be fine.
hg update staging
hg merge my-new-feature
hg commit -m 'my-new-feature > staging'
hg push
You then have your remote source control server push to all of your web servers using a trigger. A trigger on each web server will then do an update and reload the web server.
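As a sketch, the trigger on each web server could be a small hook script along these lines, wired up as a changegroup hook in that server's .hg/hgrc (paths, branch name and reload mechanism are assumptions):
#!/bin/sh
cd /srv/mysite || exit 1
hg update staging                  # "production" on the live boxes
touch /srv/mysite/site.wsgi        # or: service uwsgi reload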
To move from staging to production, it's just as easy.
hg update production
hg merge staging
hg commit -m 'staging > production'
hg push
It's not the nicest method of deployment, and it makes rolling back quite hard. But it's quick and easy to set up, and still a lot better than manually deploying each change to each server.
I won't go through Fabric, as it can get quite involved. You should read its documentation so you understand what it is capable of. There are plenty of tutorials around for Fabric and Django. I highly recommend the Fabric route as it gives you lots more control, and only involves writing some Python.
There is a nice branching model for git (as it is also used by github itself for example). You can easily apply this branching model using git-flow, which is a git extension that enables you to apply some high level repository operations that fit into this model. There's also a nice blogpost about this.
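As a short sketch, the git-flow extension boils that model down to a handful of commands (branch and version names here are just examples):
git flow init                        # set up master/develop and the branch prefixes
git flow feature start new-feature   # branch off develop
git flow feature finish new-feature  # merge back into develop and delete the branch
git flow release start 1.0.0         # branch off develop for stabilisation
git flow release finish 1.0.0        # merge into master, tag it, merge back into develop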
I do not know what exactly you want to automate in your deployment workflow, but if you apply the model mentioned above, most of the correct version handling is done by git.
To add some further automatic processing to this, fabric is a simple but great tool, and you will find many tutorials about its usage (also in combination with git).
For handling Python dependencies, using virtualenv and pip is for sure a very good way to go.
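The usual routine there looks something like this (requirements.txt is just the conventional file name):
virtualenv env
source env/bin/activate
pip install -r requirements.txt
pip freeze > requirements.txt      # pin what the environment actually contains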
If you need something more complex, e.g. to handle more than one Django instance on one machine or to handle system-wide dependencies, check out Puppet or Chef.
Try Gondor.io or Ep.io; they both make it pretty easy (Gondor especially excels in this area) to have two or more instances with very similar code, driven from your VCS, and to move data back and forth. (If you need an invite, ask either of them in IRC, but if I recall, they're both open now.)

Is there an ideal way to move from Staging to Production for ColdFusion code?

I am trying to work out a good way to run a staging server and a production server for hosting multiple ColdFusion sites. Each site is essentially a fork of a repo, with site-specific changes made to each. I am looking for a good way to have this staging server move code (upon QA approval) to the production server.
One fanciful idea involved compiling each of the sites into EAR files to be run on the production server, but I cannot seem to wrap my head around ColdFusion archives, plus I cannot see any good way of automating this, especially the deployment part.
What I have done successfully before is use Subversion as a go-between for a site: once a site is QA'd, the code is committed and then the production server's working directory has an SVN update run, which triggers a code copy from the working directory to the actual live code. This worked fine, but has many moving parts, and still required some form of server access to each server to run the commits and updates. Plus, this worked for an individual site; I think it may be a nightmare to set up and maintain this architecture for multiple sites.
Ideally I would want a group of developers to have FTP access with the ability to log into some control panel to mark a site for QA, and then have a QA person check the site and mark it as stable/production worthy, and then have someone see that a site is pending and click a button to deploy the updated site. (Any of those roles could be filled by the same person mind you)
Sorry if that last part wasn't so much the question, just a framework to understand my current thought process.
Agree with @Nathan Strutz that Ant is a good tool for this purpose. Some more thoughts.
You want a repeatable build process that minimizes opportunities for deltas. With that in mind:
SVN export a build.
Tag the build in SVN.
Turn that export into a .zip, something with an installer, etc... idea being one unit to validate with a set of repeatable deployment steps.
Send the build to QA.
If QA approves, deploy that build into production (a rough sketch of these steps follows).
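A minimal shell sketch of the export / tag / package steps (repository URL, site name and version number are placeholders):
REPO=https://svn.example.com/mysite
VERSION=1.4.2
svn copy "$REPO/trunk" "$REPO/tags/$VERSION" -m "Tagging build $VERSION"
svn export "$REPO/tags/$VERSION" "build-$VERSION"
cd "build-$VERSION" && zip -r "../mysite-$VERSION.zip" . && cd ..
# mysite-$VERSION.zip is the single unit handed to QA; the same file goes to production if approved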
Move whole code bases over as a build, rather than just changed files. This way you know what's put into place in production is the same thing that was validated. Refactor code so that configuration data is not overwritten by a new build.
As for actual production deployment, I have not come across a tool to solve the multiple servers, different code bases challenge. So I think you're best served rolling your own.
As an aside, in your situation I would think through an approach that allows for a standardized codebase, with a mechanism (i.e. an API) that allows for the customization you're describing. Otherwise managing each site as a "custom" project is very painful.
Update
Learning Ant: Ant in Action [book].
On Source Control: for the situation you describe, I would maintain a core code base and an overlay per site. Export core, then the site-specific code over it. This ensures that any core updates which the site-specific changes don't override make it in.
Call this combination a "build". Do builds with Ant. Maintain an Ant script - or perhaps more flexibly an ant configuration file - per core & site combination. Track version number of core and site as part of a given build.
If your software is stuffed inside an installer (Nullsoft Install Shield for instance) that should be part of the build. Otherwise you should generate a .zip file (.ear is a possibility as well, but haven't seen anyone actually do this with CF). Point being one file that encompasses the whole build.
This build file is what QA should validate. So validation includes deployment, configuration and functionality testing. See my answer for deployment on how this can flow.
Deployment:
If you want to automate deployment, QA should be involved as well to validate it. Meaning QA would deploy/install builds using the same process on their servers before doing a staging-to-production deployment.
To do this I would create something that tracks what server receives what build file and whatever credentials and connection information is necessary to make that happen. Most likely via FTP. Once transferred, the tool would then extract the build file / run the installer. This last piece is an area I would have to research as to how it's possible to let one server run commands such as extraction or installation remotely.
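One way to handle that last piece, sketched with scp/ssh rather than FTP (host, paths and build name are placeholders):
BUILD=mysite-1.4.2.zip
scp "builds/$BUILD" deploy@prod.example.com:/tmp/
ssh deploy@prod.example.com "unzip -o /tmp/$BUILD -d /var/www/mysite && rm /tmp/$BUILD"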
You should look into Ant as a migration tool. It allows you to package your build process with a simple XML file that you can run from the command line or from within Eclipse. Creating an automated build process is great because it documents the process as well as executes it the same way, every time.
Ant can handle zipping and unzipping, copying around, making backups if needed, working with your Subversion repository, transferring via FTP, compressing JavaScript and even calling a web address if you need to do something like flush the application memory or server cache once it's installed. You may be surprised by the things you can do with Ant.
To get started, I would recommend the Ant manual as your main resource, but look into existing Ant builds as a good starting point to get you going. I have one on RIAForge, for example, that does some interesting stuff and calls a Groovy script to do some more processing on my files during the build. If you search RIAForge for build.xml files, you will come up with a great variety of them, many of which are directly for ColdFusion projects.