Source code structure: deploying Go with subpackages

When there is just one Go repository and it imports only public dependencies, deploying it (for example, to a Docker container on AWS) is extremely straightforward.
However, I have a question about how to use subpackages with Go.
Suppose we have a monorepo with 3 packages.
/src
- /appA
- /appB
- /someSharedDep
How are deployments typically built so that you deploy appA and someSharedDep to one server and appB and someSharedDep to another server?
I imagine there needs to be some creative employment of our friend the GOPATH, but some help on the topic would be appreciated.
Bonus points if we're talking about an elastic beanstalk deployment.
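For concreteness, the per-app build I'm imagining looks something like this (the import path is made up):

# layout on the GOPATH; github.com/yourorg/monorepo is a placeholder import path
# $GOPATH/src/github.com/yourorg/monorepo/appA
# $GOPATH/src/github.com/yourorg/monorepo/appB
# $GOPATH/src/github.com/yourorg/monorepo/someSharedDep
cd $GOPATH/src/github.com/yourorg/monorepo
go build -o bin/appA ./appA   # binary for the appA servers
go build -o bin/appB ./appB   # binary for the appB servers

If that's right, someSharedDep gets compiled into each binary and presumably only the binary itself has to ship to each server, though I'm not sure how that maps onto an Elastic Beanstalk-style source deployment.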
Suspicions
I have some thoughts on how to approach the problem now (and I'll add more or submit an answer if this becomes more complete).
- Use vendoring. This means you have the source code of all your dependencies checked into your own repo. It doesn't sound good (especially if you're used to NodeJS), but it works.
- Now that you use vendoring, vendor your own submodules into the ./vendor folder of each application. Yes, you will have copies of the same code in two places, but whatever.
- Create automated scripts to help manage some of these things. I'm still looking for tools that make vendoring more convenient.
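To illustrate the second point, the vendored layout I have in mind per app is roughly this (import paths made up):

/src/appA
- main.go
- /vendor
  - /github.com/yourorg/monorepo/someSharedDep (a copy of the shared package)
  - /github.com/some/publicdep (a vendored public dependency)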
Some problems I still face:
When vendoring, sometimes a dependency has files that contain a main() function or declare package main, usually in an ./example subfolder. These have to be removed manually, and I don't like editing the working source of someone else's project.

Related

How do you configure proprietary dependencies for Leiningen?

We're working on a project that has some Clojure-Java interop. At this point we have a single class that has a variety of dependencies which we put into a user library in Eclipse for development, but of course that doesn't help when using Leiningen (2.x). Most of our dependencies are proprietary, so they aren't on a repository somewhere.
What is the easiest/right way to do this?
I've seen "leiningen - how to add dependencies for local jars?", but it appears to be out of date.
Update: So I made a local maven repository for my jar following these instructions and the lein deployment docs on github, and edited my project.clj file like this:
:dependencies [[...]
               [usc "0.1.0"]]
:repositories {"usc" "file://maven_repository"}
Where maven_repository is under the project directory (hence not using file:///). When I ran "lein deps", I got this message:
Retrieving usc/usc/0.1.0/usc-0.1.0.pom from usc
Could not transfer artifact usc:usc:pom:0.1.0 from/to usc (file://maven_repository): no supported algorithms found
This could be due to a typo in :dependencies or network issues.
Could not resolve dependencies
What is meant by "no supported algorithms found" and how do I fix it?
Update2: Found the last bit of the answer here.
Add them as a dependency to your Leiningen project; you can make up the names and versions.
Then run lein deps, and the error message when it fails to find the artifact will give you the exact command to run to install the jar into your local repo. Should you decide to use a shared repo later, you can use this same process to put your dependencies there.
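For reference, the suggested command is usually along these lines (the coordinates and file name below are the made-up ones from the question):

mvn install:install-file -Dfile=usc.jar -DgroupId=usc -DartifactId=usc -Dversion=0.1.0 -Dpackaging=jar

or, to publish into the project-local file:// repository instead of ~/.m2:

mvn deploy:deploy-file -Dfile=usc.jar -DgroupId=usc -DartifactId=usc -Dversion=0.1.0 -Dpackaging=jar -Durl=file://maven_repository

The deploy variant also writes the .pom and checksum files next to the jar, which is the metadata a repository is expected to contain.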
@Arthur's answer is good, but I figured I'd flesh it out a bit more since it leaves some details lacking.
Always keep repeatability in mind. If you don't make it so that anyone who needs access to the artifacts can get them in a standard way, you're asking for support hell.
The documentation on deployment is a good place to go to find out everything you need to know about deploying your artifacts. Since you're in a polyglot environment you probably can't have lein take care of deploying all your artifacts, but at least you can get your Clojure-specific jars up into S3 or even a file share if you like. The rest of your artifacts will have to use Maven or Ant directly to upload the artifacts to the Maven repo on the file server or S3. At my current company we are using technomancy's excellent s3-wagon-private to great effect for hosting our closed-source artifacts, and Clojars for hosting anything that we can open-source.
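As a sketch, wiring that up in project.clj looks roughly like the following; the plugin version, bucket name, and environment-variable names are placeholders, and the exact credential options depend on the plugin version:

:plugins [[s3-wagon-private "1.3.1"]]
:repositories [["private" {:url "s3p://your-bucket/releases/"
                           :username :env/aws_access_key
                           :passphrase :env/aws_secret_key}]]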
What @Arthur is referring to is doing a lein install. All that does is install a copy of the current project into your local .m2 directory so that other projects on your box can reference them. Unless you have configured your install of Maven to use a shared directory for your .m2 folder (maybe not a bad idea in your environment?), this will mean that anyone else who checks out your project will not be able to build it. If you wanted to go this route, you need to set the localRepository node in your $M2_HOME/conf/settings.xml to be the shared location that the rest of your team has access to. See the docs for more information.
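That setting is a single node in settings.xml, for example (the path is a placeholder for wherever your shared location lives):

<settings>
  <localRepository>/mnt/shared/m2-repository</localRepository>
</settings>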
YMMV, but I've found it best to use Maven rather than Leiningen when you are working with polyglot Clojure/Java projects.
It's mainly because the Java-based tools (Eclipse etc.) understand Maven projects but don't really understand Leiningen projects. It's slowly getting better with the excellent Counterclockwise Clojure plugin, but the integration still isn't quite good enough yet for an efficient IDE-based workflow.
On the repository side of things, I'd suggest setting up a private shared Maven repository. You're going to need it sooner or later if you plan to manage a complex set of dependencies within your team: might as well bite the bullet and get it done now.

Django Compressor on a multi-server deployment

I've been fortunate enough to discover django_compressor and implemented it within our stack, which deploys to many servers (Currently 6, but growing as we deploy smaller virtual machines.)
Now this is all fine and dandy if you're using django_compressor at its finest: compressing raw CSS/JS code.
However, say now I want to introduce some type of pre-compiler. Let's say for this example it is LESS (CSS). The thought process for this is fairly simple:
Install node, npm, and the less package onto the server.
Add less to your precompilers!
COMPRESS_PRECOMPILERS = ( ('text/less', 'lessc {infile} {outfile}'), )
Now you deploy, and your server compiles the less file. Everything is fantastic!
Now let's add 8 more servers to that and you have to install node, npm, and less on each server?
This is where something doesn't seem right, and I feel like I'm missing something. I believe the Django community has run into this problem before.
My thoughts thus far have been:
Use a post-commit hook to compile the CSS on the developer's machine. This means that via django_compressor we link to the compiled static file in the HTML, and our repository contains both the compiled and non-compiled versions. My only downside to this is that it gives up half of the benefits of django_compressor and may be tedious for developers.
Suck it up and make node, npm, and less part of the server stack.
Update
I did some additional looking around and it seems that using the COMPRESS_OFFLINE flag (or just --force) with the management command will produce an offline manifest file that does what I need (only tested locally). So setting this up with a pre-deploy hook looks to be the answer.
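Roughly, the settings side of what I tested looks like this (a sketch; the precompiler line is the same as above):

COMPRESS_ENABLED = True
COMPRESS_OFFLINE = True
COMPRESS_PRECOMPILERS = (
    ('text/less', 'lessc {infile} {outfile}'),
)

with python manage.py compress --force run from the pre-deploy hook to generate the offline manifest.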
Of course, still open to other ideas :-)
Coupled with the tips in the comments about COMPRESS_OFFLINE, you could look at django-staticfiles' storage backends. You can host the static files on Amazon S3, for instance, so hosting it all on one static-hosting server and using that from all your servers could also be a nice solution. You wouldn't need to do anything with the static (and compressed) files on the individual servers.
Alternative solution regarding the multiple servers: I've made a custom Fabric (docs.fabfile.org) script that installs/configures stuff on our servers. I've only recently started using CoffeeScript and LESS, but those two are definitely ending up in my fabfile. That solves the installation problem for me.
(Alternatives to a fabfile are things like a custom debian package with standard dependencies. Or chef or puppet or something similar.)
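As a rough sketch of what that part of a fabfile can look like (this assumes Fabric 1.x and Debian-style hosts; the hostnames are made up):

from fabric.api import env, sudo

env.hosts = ['web1.example.com', 'web2.example.com']

def install_less():
    # install node/npm and the LESS compiler on every web server
    sudo('apt-get install -y nodejs npm')
    sudo('npm install -g less')

Running fab install_less then takes care of any newly added server in one go.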
You can use Puppet for the task.

Continuous deployment of C/C++ executable to Linux production servers

I wonder if there is any best practice or at least a more practical way to deploy C/C++ executable to Linux based production servers.
I have Jenkins up and running as a CI server, and created a main SVN module which contains multiple svn:externals. This module mainly serves as a pipeline of related C++ applications. (Perhaps I should post this as another question on whether svn:externals is the correct way to do it.)
So the main question is the deployment steps. I am planning to make all production servers Jenkins slaves with parameterized configs, for the purpose of building from SVN tags, and to use some scripts to copy all executables to, e.g., /opt/mytools/bin on multiple production servers.
Any recommendations?
The best deployment route is the one specified by your distribution, IMHO. That is, for debian packages, bundle your applications into .deb-files, put them into a repository and let apt-get take care of the rest. This way, you have a minimal impact on the production environment and most admins are already familiar with the deployment scheme.
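A minimal sketch of that flow, assuming a single binary and made-up package names and paths:

mkdir -p mytool_1.0-1/DEBIAN mytool_1.0-1/opt/mytools/bin
cp build/mytool mytool_1.0-1/opt/mytools/bin/
cat > mytool_1.0-1/DEBIAN/control <<EOF
Package: mytool
Version: 1.0-1
Architecture: amd64
Maintainer: Ops Team <ops@example.com>
Description: Example C++ tool deployed to /opt/mytools/bin
EOF
dpkg-deb --build mytool_1.0-1

Jenkins can run this as a post-build step; publishing mytool_1.0-1.deb to an internal apt repository then lets each production server pick it up with a plain apt-get install mytool.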
I'm working through some of the same questions, and I'm finding that Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Humble and Farley has been a good (technology agnostic) starting point - not perfect but it's pointed me in the right direction when I had no idea what to do next.
The Continuous Delivery book recommends setting up 'build pipelines' in which you run progressively more automated tests, with only the final manual tests and the deploy/rollback decisions being triggered by a real person.

Django: pre-deployment

Question 1:
I am about to deploy my first Django website and I was wondering what tools are recommended for gathering all your Django files.
For example, I don't need my Sass and CoffeeScript files; I just want the compiled CSS and JS files. I also want to use the correct production settings file.
Question 2:
Do I put these files ready for deployment into their own version control repository? I guess the advantage is that you can easily roll back changes?
Question 3:
Do I run my tests before gathering the files or before deploying?
Shell scripts could be a solution, but maybe there is a better way? I looked at Jenkins/Hudson, but that seems more like a tool that sits on top of the tools that I am looking for.
For questions one and two, I'd recommend using a version control system for this. I'm sure you're already using some sort of version control, so you can just say which branch of your repository you would like to deploy. And yes, this makes rollbacks incredibly easy. Probably the most popular method for Django deployment is to package your files using git, and then deploy these files and run any deployment scripts using fabric.
Using git, packaging your files using your local repository would look something like:
git archive --format=tar HEAD | gzip > my_repo.tar.gz
Alternatively, you can first push your changes to a GitHub repository, and then have your deployment script clone the repository onto your production server.
For your third question, if you use this version control method for packaging your files, then just make sure when you are testing you have the deployment branch checked out.
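Putting those pieces together, a stripped-down fabfile for that flow might look like this (Fabric 1.x; the host and paths are placeholders):

from fabric.api import env, local, put, run

env.hosts = ['www.example.com']

def deploy():
    # package the current HEAD exactly as in the git archive command above
    local('git archive --format=tar HEAD | gzip > my_repo.tar.gz')
    put('my_repo.tar.gz', '/tmp/my_repo.tar.gz')
    run('mkdir -p /srv/my_site')
    run('tar xzf /tmp/my_repo.tar.gz -C /srv/my_site')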
I'll typically use Fabric for deploying most Django projects:
http://docs.fabfile.org/en/1.0.0/?redir
It has a decent api for communicating with remote servers and it's all in Python – bonus!
You don't need to store your concatenated media files in a separate repo. They're only needed for production. In that case I've found libraries like django-mediasync and django-compress to be useful. They both provide template tags/settings that can concatenate and cache your static files for you depending on the DEBUG setting/environments (production vs development).
You can run your tests whenever. Some people will run them as a version control hook to prevent broken code from being checked in or during deployment, stopping the deployment in case of test failure.

Is there an ideal way to move from Staging to Production for Coldfusion code?

I am trying to work out a good way to run a staging server and a production server for hosting multiple Coldfusion sites. Each site is essentially a fork of a repo, with site specific changes made to each. I am looking for a good way to have this staging server move code (upon QA approval) to the production server.
One fanciful idea involved compiling the sites each into EAR files to be run on the production server, but I cannot seem to wrap my head around Coldfusion archives, plus I cannot see any good way of automating this, especially the deployment part.
What I have done successfully before is use subversion as a go between for a site, where once a site is QA'd the code is committed and then the production server's working directory would have an SVN update run, which would then trigger a code copy from the working directory to the actual live code. This worked fine, but has many moving parts, and still required some form of server access to each server to run the commits and updates. Plus this worked for an individual site, I think it may be a nightmare to setup and maintain this architecture for multiple sites.
Ideally I would want a group of developers to have FTP access with the ability to log into some control panel to mark a site for QA, and then have a QA person check the site and mark it as stable/production worthy, and then have someone see that a site is pending and click a button to deploy the updated site. (Any of those roles could be filled by the same person mind you)
Sorry if that last part wasn't so much the question, just a framework to understand my current thought process.
Agree with @Nathan Strutz that Ant is a good tool for this purpose. Some more thoughts.
You want a repeatable build process that minimizes opportunities for deltas. With that in mind:
- SVN export a build.
- Tag the build in SVN.
- Turn that export into a .zip, something with an installer, etc. The idea being one unit to validate, with a set of repeatable deployment steps.
- Send the build to QA.
- If QA approves, deploy that build into production.
Move whole code bases over as a build, rather than just changed files. This way you know what's put into place in production is the same thing that was validated. Refactor code so that configuration data is not overwritten by a new build.
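For illustration, the export/tag/package steps above reduce to something like this (the repository URL and build number are placeholders):

svn copy https://svn.example.com/repo/trunk https://svn.example.com/repo/tags/build-42 -m "Tag build 42"
svn export https://svn.example.com/repo/tags/build-42 build-42
zip -r build-42.zip build-42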
As for actual production deployment, I have not come across a tool to solve the multiple servers, different code bases challenge. So I think you're best served rolling your own.
As an aside, in your situation I would think through an approach that allows for a standardized codebase, with a mechanism (i.e. an API) that allows for the customization you're describing. Otherwise managing each site as a "custom" project is very painful.
Update
Learning Ant: Ant in Action [book].
On source control: for the situation you describe, I would maintain a core code base and overlays per site. Export core, then the site-specific overlay on top of it. This ensures that core updates make it in, except where site-specific changes override them.
Call this combination a "build". Do builds with Ant. Maintain an Ant script - or perhaps more flexibly an ant configuration file - per core & site combination. Track version number of core and site as part of a given build.
If your software is stuffed inside an installer (Nullsoft Install Shield for instance) that should be part of the build. Otherwise you should generate a .zip file (.ear is a possibility as well, but haven't seen anyone actually do this with CF). Point being one file that encompasses the whole build.
This build file is what QA should validate. So validation includes deployment, configuration and functionality testing. See my answer for deployment on how this can flow.
Deployment:
If you want to automate deployment, QA should be involved as well to validate it. Meaning QA would deploy / install builds using the same process on their servers before doing a staging-to-production deployment.
To do this I would create something that tracks what server receives what build file and whatever credentials and connection information is necessary to make that happen. Most likely via FTP. Once transferred, the tool would then extract the build file / run the installer. This last piece is an area I would have to research as to how it's possible to let one server run commands such as extraction or installation remotely.
You should look into Ant as a migration tool. It allows you to package your build process with a simple XML file that you can run from the command line or from within Eclipse. Creating an automated build process is great because it documents the process as well as executes it the same way, every time.
Ant can handle zipping and unzipping, copying around, making backups if needed, working with your subversion repository, transferring via FTP, compressing javascript and even calling a web address if you need to do something like flush the application memory or server cache once it's installed. You may be surprised with the things you can do with Ant.
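A bare-bones build.xml along those lines might look like the following; the target names, paths, and server details are placeholders, and the ftp task needs Ant's optional commons-net support:

<project name="site-build" default="package">
    <target name="export">
        <exec executable="svn">
            <arg line="export https://svn.example.com/repo/tags/build-42 dist/site"/>
        </exec>
    </target>
    <target name="package" depends="export">
        <zip destfile="dist/build-42.zip" basedir="dist/site"/>
    </target>
    <target name="deploy" depends="package">
        <ftp server="staging.example.com" userid="${ftp.user}" password="${ftp.pass}" remotedir="/wwwroot/site">
            <fileset dir="dist/site"/>
        </ftp>
    </target>
</project>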
To get started, I would recommend the Ant manual as your main resource, but look into existing Ant builds as a good starting point to get you going. I have one on RIAForge, for example, that does some interesting stuff and calls a Groovy script to do some more processing on my files during the build. If you search RIAForge for build.xml files, you will come up with a great variety of them, many of which are directly for ColdFusion projects.