Can ember-cli watch and build automatically without running the server? - ember.js

The title is pretty much my question. I'm serving the dist directory differently and would still like the benefit of auto-builds, but I don't need to run the server. I looked in the docs and the CLI help but didn't see anything specific. I know the CLI help doesn't contain everything, because it doesn't list ember build, which is available.

If I understand correctly, you want the ember build command to watch for changes in the file tree and rebuild on a change?
They implemented ember build --watch a while back, which triggers a rebuild whenever a file changes. I tested it just now and it worked on 0.2.7; I'm not sure which version introduced it, though. Let me know if this is not the answer you are looking for.
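For reference, a minimal invocation looks like this (--output-path should be available on recent versions; check ember build --help to confirm):

# Rebuild on every file change without serving (outputs to dist/ by default):
ember build --watch

# If you serve from a different directory, point the output there:
ember build --watch --output-path /path/to/your/webroot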

Related

How to clone a Django project without breaking it?

I built a (relatively simple) Django project following a tutorial (the recently released Hello Web App book). I committed my changes along each step of the way, and I have my working solution in a GitHub repo. However, when I clone the code into a new workspace, such as onto a new machine or into a new slot on a cloud IDE, the app doesn't work. I get a few errors, and as I resolve each one, another pops up. Basically, my environment is totally messed up and incompatible with the app beyond having Python and Django both installed.
I realize that I can read through the error messages I get when I invoke runserver and solve them one by one, but it seems there should be a cleaner, simpler way to pull down my repo to a fresh workspace and have it up and running in a minute or two. I've read recommendations about using virtualenv, but people also discourage committing the venv inside your repo because of the extraneous commits and added bulk, so that alone doesn't solve my problem of reducing new-workspace configuration effort.
Perhaps I am overly optimistic, but I'm hoping someone can give me a recommendation to avoid having to work out these kinks each time I start fresh.
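For context, the virtualenv advice usually pairs with a committed requirements.txt rather than a committed venv: the environment itself stays out of the repo, and a small text file captures the dependencies so a fresh workspace can be rebuilt in a minute or two. A sketch, with placeholder repo and directory names:

# Once, in the working project (with the venv active):
pip freeze > requirements.txt    # commit this file

# On any fresh machine or cloud IDE:
git clone https://github.com/you/hello-web-app.git
cd hello-web-app
virtualenv venv                  # keep venv/ in .gitignore
source venv/bin/activate
pip install -r requirements.txt
python manage.py migrate         # or syncdb on older Django versions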

Running leiningen for storm projects offline

I am trying to compile some projects on the Twitter Storm platform using Leiningen. The servers I am working on do not have access to the internet. I was wondering if it is possible to work offline by making a local repository and downloading all dependencies into it.
The short answer is yes: put the dependencies where lein expects to find them and it will use them instead of going out to download them.
It looks like the answer to this question pretty much points to how to do it:
Run lein deps on a machine that is connected to the Internet.
Copy $HOME/.m2/repository to your server.
That should take care of it. I have not tried this myself, though, so there may be some problem with this method that I have not foreseen.
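Concretely, the two steps amount to something like this (the hostname is a placeholder):

# On the machine with internet access, from the project directory:
lein deps                  # populates $HOME/.m2/repository

# Ship the populated cache to the offline server:
scp -r ~/.m2/repository user@offline-server:~/.m2/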

How do you configure proprietary dependencies for Leiningen?

We're working on a project that has some Clojure-Java interop. At this point we have a single class that has a variety of dependencies which we put into a user library in Eclipse for development, but of course that doesn't help when using Leiningen (2.x). Most of our dependencies are proprietary, so they aren't on a repository somewhere.
What is the easiest/right way to do this?
I've seen leiningen - how to add dependencies for local jars?, but it appears to be out of date?
Update: So I made a local Maven repository for my jar following these instructions and the lein deployment docs on GitHub, and edited my project.clj file like this:
:dependencies [[...]
               [usc "0.1.0"]]
:repositories {"usc" "file://maven_repository"}
Where maven_repository is under the project directory (hence not using file:///). When I ran lein deps, I got this message:
Retrieving usc/usc/0.1.0/usc-0.1.0.pom from usc
Could not transfer artifact usc:usc:pom:0.1.0 from/to usc (file://maven_repository): no supported algorithms found
This could be due to a typo in :dependencies or network issues.
Could not resolve dependencies
What is meant by "no supported algorithms found" and how do I fix it?
Update2: Found the last bit of the answer here.
Add them as a dependency to your Leiningen project; you can make up the names and versions.
Then run lein deps. The error message when it fails to find the jar will give you the exact command to run to install the jar into your local repo. Should you later decide to use a shared repo, you can use this same process to put your dependencies there.
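If you'd rather install the jar by hand, Maven's install-file goal does the same thing; a sketch using the question's usc coordinates (the jar path is a placeholder):

mvn install:install-file -Dfile=path/to/usc.jar \
    -DgroupId=usc -DartifactId=usc \
    -Dversion=0.1.0 -Dpackaging=jar
# Add -DlocalRepositoryPath=maven_repository to target a project-local
# repository instead of ~/.m2/repository.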
Arthur's answer is good, but I figured I'd flesh it out a bit more since it leaves some details lacking.
Always keep repeatability in mind. If you don't make it so that anyone who needs the artifacts can get them in a standard way, you're asking for support hell.
The documentation on deployment is a good place to find out everything you need to know about deploying your artifacts. Since you're in a polyglot environment you probably can't have lein take care of deploying all your artifacts, but at least you can get your Clojure-specific jars up into S3 or even onto a file share if you like. The rest of your artifacts will have to use Maven or Ant directly to upload to the Maven repo on the file server or S3. At my current company we are using technomancy's excellent s3-wagon-private to great effect for hosting our closed-source artifacts, and Clojars for hosting anything that we can open-source.
What Arthur is referring to is doing a lein install. All that does is install a copy of the current project into your local .m2 directory so that other projects on your box can reference it. Unless you have configured Maven to use a shared location for your .m2 folder (maybe not a bad idea in your environment?), this means that anyone else who checks out your project will not be able to build it. If you want to go this route, set the localRepository node in $M2_HOME/conf/settings.xml to the shared location that the rest of your team has access to. See the docs for more information.
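For reference, the command itself is just:

# Installs the current project's jar and pom into the local
# ~/.m2/repository cache; only builds on this machine can resolve it.
lein install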
YMMV but I've found it best to use Maven rather than Leiningen when you are working with Polyglot Clojure / Java projects.
It's mainly because the Java-based tools (Eclipse etc.) understand Maven projects but don't really understand Leiningen projects. It's slowly getting better with the excellent Counterclockwise Clojure plugin, but the integration still isn't quite good enough for an efficient IDE-based workflow.
On the repository side of things, I'd suggest setting up a private shared Maven repository. You're going to need it sooner or later if you plan to manage a complex set of dependencies within your team: might as well bite the bullet and get it done now.

Django Compressor on a multi-server deployment

I've been fortunate enough to discover django_compressor and implemented it within our stack, which deploys to many servers (currently 6, but growing as we deploy smaller virtual machines).
Now this is all fine and dandy if you're using django_compressor at its finest: compressing raw CSS/JS code.
However, say now I want to introduce some type of pre-compiler. Let's say for this example it is LESS (CSS). The thought process for this is fairly simple:
Install node, npm, and the less package onto the server (a sketch follows the config below).
Add less to your precompilers!
COMPRESS_PRECOMPILERS = ( ('text/less', 'lessc {infile} {outfile}'), )
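For completeness, the install step might look something like this on a Debian-flavored server (package names are assumptions; adjust for your distro):

# Step 1 on each server:
sudo apt-get install -y nodejs npm
sudo npm install -g less    # provides the lessc binary used above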
Now you deploy, and your server compiles the less file. Everything is fantastic!
Now add 8 more servers to that: do you have to install node, npm, and less on each one?
This is where something doesn't seem right, and I feel like I'm missing something. I believe the Django community has run into this problem before.
My thoughts thus far have been:
Use a post-commit hook to compile the CSS on the developer's machine. This means that, via django_compressor, we link to the compiled static file in the HTML, and our repository contains both the compiled and non-compiled versions. The downside is that this forgoes half the benefits of django_compressor and may be tedious for developers.
Suck it up and make node, npm, and less part of the server stack.
Update
I did some additional looking around, and it seems that using the COMPRESS_OFFLINE flag (or just --force) with the management command will produce an offline manifest file that does what I need (only tested locally). So setting this up with a pre-deploy hook looks to be the answer.
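For anyone following along, the pre-deploy step is roughly this (assuming COMPRESS_OFFLINE = True in the production settings; check the compressor docs for your version):

# Generate the compressed files and the offline manifest before deploying:
python manage.py compress --force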
Of course, still open to other ideas :-)
Coupled with the tips in the comments about COMPRESS_OFFLINE, you could look at django-staticfiles' storage backends. You can host the static files on Amazon S3, for instance, so hosting everything on one static-hosting server and using that from all your servers could also be a nice solution. You wouldn't need to do anything with the static (and compressed) files on the individual servers.
An alternative solution regarding the multiple servers: I've made a custom Fabric (docs.fabfile.org) script that installs/configures stuff on our servers. I've only recently started using CoffeeScript and LESS, but those two are definitely ending up in my fabfile. That solves the installation problem for me.
(Alternatives to a fabfile are things like a custom debian package with standard dependencies. Or chef or puppet or something similar.)
You can use Puppet for the task.

Django: pre-deployment

Question 1:
I am about to deploy my first Django website, and I was wondering what tools are recommended for gathering all your Django files.
For example, I don't need my Sass and CoffeeScript files; I just want the compiled CSS and JS files. I also want to use the correct production settings file.
Question 2:
Do I put these files ready for deployment into their own version control repository? I guess the advantage is that you can easily roll back changes?
Question 3:
Do I run my tests before gathering the files or before deploying?
Shell scripts could be a solution, but maybe there is a better way? I looked at Jenkins/Hudson, but that seems more like a tool that sits on top of the tools I am looking for.
For questions one and two, I'd recommend using a version control system for this. I'm sure you're already using some sort of version control, so you can just say which branch of your repository you would like to deploy. And yes, this makes rollbacks incredibly easy. Probably the most popular method for Django deployment is to package your files using git, and then deploy these files and run any deployment scripts using fabric.
Using git, packaging your files using your local repository would look something like:
git archive --format=tar HEAD | gzip > my_repo.tar.gz
Alternatively, you can first push your changes to a GitHub repository, and then in your deployment script just clone the repository onto your production server.
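That alternative amounts to running something like this on the production server (the repository URL is a placeholder):

git clone git@github.com:you/your-project.git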
For your third question, if you use this version control method for packaging your files, then just make sure when you are testing you have the deployment branch checked out.
I'll typically use Fabric for deploying most Django projects:
http://docs.fabfile.org/en/1.0.0/?redir
It has a decent api for communicating with remote servers and it's all in Python – bonus!
You don't need to store your concatenated media files in a separate repo. They're only needed for production. In that case I've found libraries like django-mediasync and django-compress to be useful. They both provide template tags/settings that can concatenate and cache your static files for you depending on the DEBUG setting/environments (production vs development).
You can run your tests whenever you like. Some people run them as a version control hook to prevent broken code from being checked in, or during deployment, stopping the deployment if a test fails.
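As a sketch, such a hook can be as small as this (a pre-commit hook that blocks the check-in when the Django test suite fails):

#!/bin/sh
# .git/hooks/pre-commit (make it executable): a non-zero exit aborts the commit.
python manage.py test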