What are Capistrano binstubs?

I'm new to Rails and wish to deploy my app to Ubuntu 14 using Capistrano. Can someone explain what binstubs are and whether they are required for deploying my Rails app?

A binstub is an executable script that wraps a Ruby command to ensure that a specific version of that command is used.
Binstubs are sometimes necessary because a given Ruby command name can refer to many different things, so you can't be 100% sure what the name resolves to. In deployment, predictability is paramount: you want to be certain of exactly what code you are running, especially in production.
For example, consider the command named rails. You might have multiple versions of Rails installed. Indeed, every time you upgrade to the latest patch release for security fixes, that is another new version you're installing. On top of that, you might have multiple versions of Ruby installed, too.
So when you run the command rails, which version of Ruby is used? Which version of Rails?
A binstub makes this decision explicit. The idea is that you create a special script and place it in the bin directory of your project, say bin/rails. This script uses Bundler to guarantee the right version of Rails is used. When you run bin/rails, you get that guarantee. (When you generate a new Rails project, Rails in fact creates this and other binstubs for you.)
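For illustration, this is roughly what the generated bin/rails binstub looks like in a Rails 4 app; the comments are added here to explain each step:

```ruby
#!/usr/bin/env ruby
# bin/rails -- roughly as generated by `rails new` (Rails 4)
APP_PATH = File.expand_path('../../config/application', __FILE__)
require_relative '../config/boot'  # sets BUNDLE_GEMFILE and loads bundler/setup,
                                   # pinning gems to this app's Gemfile.lock
require 'rails/commands'           # dispatches to the locked Rails version
```

You can also have Bundler generate binstubs for any gem's executables with `bundle binstubs <gemname>`.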
Anyway, technically you do not need these binstubs so long as you use bundle exec rails. The bundle exec wrapper essentially does the same thing that a binstub would do.
If you use the capistrano/rails gem in combination with the capistrano/bundler gem (make sure both are in your Capfile), then Capistrano will always use bundle exec and you won't have to worry about creating your own binstubs.
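A Capfile wired up that way might look roughly like this (a sketch, assuming the standard Capistrano 3 layout):

```ruby
# Capfile (sketch)
require 'capistrano/setup'
require 'capistrano/deploy'
require 'capistrano/bundler'   # runs bundle install and prefixes commands with `bundle exec`
require 'capistrano/rails'     # Rails-specific tasks (asset precompilation, migrations)
```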

Related

Using a newer version of Node.js in a Ruby project with Cloud Foundry

My project is using the latest ruby-buildpack which currently loads nodejs 6.14.4. I'd like to use a more current version of nodejs. What's the best way to get it exposed to the application? Does multi-buildpacks solve this problem, and if so, do I list the nodejs buildpack before or after the ruby buildpack in the manifest file? Or, would it be better to package a custom buildpack?
What's the best way to get it exposed to the application? Does multi-buildpacks solve this problem,
I think multi-buildpacks should work for you. You can list the Node.js buildpack as a supply buildpack, which tells the platform to install whatever Node.js version you want. The Ruby buildpack then runs with Node.js available on the PATH, so you can use it however you need.
and if so, do I list the nodejs buildpack before or after the ruby buildpack in the manifest file
The last buildpack listed should be the one that supplies the command to start your app; only the final buildpack is allowed to pick that command. The other buildpacks, called supply buildpacks, only contribute and install dependencies.
It sounds like that should be the Ruby buildpack in your case.
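In manifest.yml terms, that ordering might look roughly like this (buildpack names and the app name here are illustrative, not taken from the question):

```yaml
applications:
- name: my-ruby-app
  buildpacks:
    - nodejs_buildpack   # supply buildpack: installs Node.js first
    - ruby_buildpack     # final buildpack: provides the start command
```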
Or, would it be better to package a custom buildpack?
I'd strongly advise against this. Forking and maintaining a buildpack is a lot of work. Let other people do this work for you and you'll be a lot happier :)

Travis-CI "The command "bundle exec rake" exited with 1." + mystery 404 error

bundle exec rake runs all tests perfectly fine locally. However, Travis CI keeps blowing up with Problem accessing /authentication without giving much more info to go on. Here's one of the failed builds: https://travis-ci.org/Nase00/Horizon/builds/48094102 For the life of me, I cannot figure out what is causing an authentication error when Travis tries to run bundle exec rake.
Here's the project repo: https://github.com/Nase00/Horizon
I'm not sure what version of Neo4j Travis uses (UPDATE: they use 1.9.4, not supported) but I'm going to guess that it's a bit older than what Neo4j.rb supports. I'm one of the core maintainers and built the Neo4j 2.2 auth support that's fouling you up, but I tested it with different versions, going back to the early 2.1 subversions and had no trouble.
The best practice is to not use Travis's Neo4j at all. Instead, configure Travis to install the same version of the database you're using for dev and production. As a bonus, the rake task that installs Neo4j also disables auth in 2.2, so you don't have to deal with that at all. It's not that we're against auth, it's that we think of the rake install and config tasks as convenience features for dev/test environments, not production, so no auth seems like a reasonable default.
Take a peek at our .travis.yml file to see how we do the installation: https://github.com/neo4jrb/neo4j/blob/master/.travis.yml. An excerpt that'll solve your issue:
language: ruby
rvm:
  - 2.0.0
script:
  - "bundle exec rake neo4j:install['community-2.2.0-M02'] neo4j:start default --trace"
Swap the community-2.2.0-M02 for whatever version you want to use. I'd have to check again but from what I remember, we are compatible with versions as far back as 2.1.2. I apologize for this not being posted in our docs -- it should be.
I very strongly recommend using Ruby 2.2.0 with Neo4j.rb. We generate a lot of symbols during Cypher queries that won't be garbage collected otherwise.
EDIT for a little more info
The very first thing the auth module does is check for the presence of the authentication REST endpoint. In all of the versions of Neo4j I tested, it didn't give an error like that, it just returned an empty body, which we interpret as a sign that auth is either unsupported or disabled.
Aftermath Edit
Travis support confirmed their provided Neo4j version is 1.9.4.

Rails 'generate' command different app frameworks

Where can I find the list of different application frameworks created from the Rails 'generate' command?
I'm not sure entirely what you mean by 'application framework' but you can see the entire list of generators available in your current environment by running rails g -h. (Or rails generate --help if you're not into the whole brevity thing.)

Is there any advantage to the Vagrant installer?

Is there any substantive advantage (for the user) to using the downloadable installer for Vagrant over simply doing gem install vagrant, other than the fact that non-Rubyists can more easily get started using it?
I'm introducing Vagrant at a company I'm doing work for, and someone asked why I wasn't having everyone use the installer. I prefer using gem install vagrant because (besides being more familiar and installing into "normal" places) they're going to need to do gem install whatever at some point anyway and might as well have everything set up.
I'd like to know, however, whether there are advantages (once everything is set up) of doing it one way or the other.
My suspicion is that the installer is the preferred method simply because it cuts down on support questions that distract the developers from contributing more to the project, and because it reduces the barrier to entry. Those are both good reasons, but don't necessarily carry enough weight for me to have everyone switch now that they're all set up with Vagrant, Chef, VirtualBox, Ruby, Git, etc.
The main advantage I see is when working with multiple versions of Ruby. Say you have RVM installed and run gem install vagrant under a certain version of Ruby, in a certain gemset. The gem won't be available unless you are using that Ruby version with that gemset.
However I suspect that using the installer will place the vagrant "binary" (ruby script) in /usr/local/bin or some such so that it is always available regardless of the currently active ruby.
For example, I installed the gem with RVM:
$ which vagrant
/Users/chrislundquist/.rvm/gems/ruby-1.9.3-p194/bin/vagrant
If I am not mistaken then the installer will be the only way to install Vagrant from version 1.1 and up.

Deploying Django with virtualenv inside a distribution package?

I have to deploy a Django application onto a SuSE Linux Enterprise 11 system. Corporate rules say I need to deploy using RPMs only. While I can use ./setup.py bdist_rpm for each dependency, it's not really sane, since RPM doesn't record all of the dependencies yet. Therefore I'd have no real advantage in using RPMs and managing dependencies manually is somewhat cumbersome and I would like to avoid it.
Now I had the following idea: While building a package, I could create a virtualenv, install all my dependencies via pip there and then package it up with the rest of the code into one solid RPM.
How sensible is this approach?
I've been using this approach for about a year now and it has worked out pretty well.
One gotcha is that you'll want to check the shebang (#!) lines in any Python scripts written to the virtualenv's bin directory. These will contain the full paths used in your build environment, which probably won't match the directory where you end up installing the virtualenv. So you may need to add some sed calls in your RPM's postinstall to adjust the paths.
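The sed fix might look like this; /build/venv and /opt/myapp/venv are hypothetical build and install prefixes, and the temp file below just simulates a script the virtualenv wrote at build time:

```shell
# Simulate a virtualenv script whose shebang still points at the build tree
# (in a real %post scriptlet you would loop over the installed venv's bin/):
tmp=$(mktemp -d)
printf '#!/build/venv/bin/python\nprint("ok")\n' > "$tmp/gunicorn"
# Rewrite the build-time prefix to the installed location:
sed -i '1s|^#!/build/venv|#!/opt/myapp/venv|' "$tmp/gunicorn"
head -n 1 "$tmp/gunicorn"   # shebang now points at the install prefix
```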