Upgrading from RSpec 2.x to 3.x in a Rails project (Rails 4.1.x).
I'm seeing some really odd behaviour when I run RSpec. Here is the order of events.
Both the test and normal environments are fully migrated. I verify this.
I run RSpec with the command $ rspec
Then I check my migrations, and this is the result (for test only; production does not think it is missing migrations):
I can't understand why this drops all my migrations. It may also help to mention that if I try to migrate test again, I get this error:
So first, why would it drop all the migrations? It's not actually dropping them; they must still be there, since the tables all still exist.
RSpec 3 has a new feature that leverages updates in Rails 4.1+ to automatically keep the development and test schemas in sync. This means that if you have already run your migrations in development, you don't need to run them again with RAILS_ENV=test. You can double-check that this (RSpec 3 default) feature is activated by looking in rails_helper.rb for a call to ActiveRecord::Migration.maintain_test_schema!.
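For reference, the relevant line in a freshly generated rails_helper.rb looks like this (a standard RSpec 3 default; surrounding lines vary by RSpec version):

```ruby
# spec/rails_helper.rb (excerpt)
# Checks for pending migrations and keeps the test database schema in sync
# with db/schema.rb, so you never need to run migrations with RAILS_ENV=test.
ActiveRecord::Migration.maintain_test_schema!
```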
I have a running Django application on Heroku with migrations auto-run on release. While this works fine most of the time, sometimes there is a problem when:
There is more than one migration in a given release (they can be in different apps)
Some migration fails, but not the first one
In this case manage.py migrate will fail, so Heroku will not finish the release and will not deploy the new code. This means the code stays at the old version while the database is in a state "somewhere between old and new".
Is there a simple way to automatically run reversed Django migrations if the release command fails on Heroku?
Transactions won't help here, as there might be more than one migration (across multiple apps) and Django runs each migration in a separate transaction.
As I couldn't find any existing solution, I am posting a gist I wrote to solve this:
https://gist.github.com/pax0r/0591855e73b9892c28d3e3cdd15f4985
The code stores the state of applied migrations before running migrate and, in case of any exception, reverts back to that state. It also checks during the migrate step that all migrations are reversible.
It's not yet well tested, but I plan to turn it into a library so it's easier for others to use.
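The gist itself is linked above, but the core idea — snapshot what was applied, then undo this run's migrations on failure — can be sketched generically. The function and callback names here are illustrative, not the gist's actual API:

```python
def run_with_rollback(migrations, apply_one, revert_one):
    """Apply each migration in order; if any raises, revert the ones
    already applied in this run (newest first) and re-raise."""
    applied = []
    try:
        for migration in migrations:
            apply_one(migration)
            applied.append(migration)
    except Exception:
        for migration in reversed(applied):
            revert_one(migration)
        raise

# Usage: the third migration fails, so the first two are reverted.
state = []

def apply_one(m):
    if m == "0003_breaks":
        raise RuntimeError("migration failed")
    state.append(m)

def revert_one(m):
    state.remove(m)

try:
    run_with_rollback(["0001_a", "0002_b", "0003_breaks"], apply_one, revert_one)
except RuntimeError:
    pass

print(state)  # → []
```

In the real Django setting, `apply_one`/`revert_one` would call `manage.py migrate <app> <target>`, and the snapshot would come from the recorded migration history in the database.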
I'm having trouble with my deployed Django app.
It was working fine, but I had to make a minor modification (increasing the max_length) to a CharField of one model. I ran the migrations and everything worked fine in the local version.
Then I committed the changes without a problem, and the mentioned field in the web version now accepts more characters, as expected, but whenever I click the save button a Server Error is raised.
I assume I have to do some kind of migration/DB update for the web version but I don't seem to find how.
(I'm working with Django 1.11, postgresql 9.6, and DigitalOcean).
EDIT
I've just realized that the "minor modification" also included deleting a field from the model.
Short answer
You have to run
python manage.py migrate
on the server, too. Before you do that, make sure all migration scripts you have locally are also present on the server.
Explanation
After changing the model, you probably locally ran
python manage.py makemigrations
This creates migration scripts that will transform the database schema accordingly. Hopefully you've committed these newly created scripts to Git, together with the changed model. (If not, you can still do so now.)
After running makemigrations (either before or after committing; that shouldn't matter), you probably locally ran
python manage.py migrate
This applies to the database any migration scripts that haven't been applied to it yet. (The information about which ones have already been applied is stored in the database itself.)
You probably (and hopefully) haven't checked your local database into Git, so when you pushed your tracked changes to a remote repo and pulled them down on your server (or however else the new Git revisions got there), the changes to the server database haven't happened yet. That's why you have to repeat the last local step (migrate) on the server.
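In practice, the server-side step looks something like this. The project path and service name are assumptions about your setup; adjust both:

```shell
# On the server, after the new revisions have arrived:
cd /path/to/your/project    # wherever the app is deployed
git pull                    # or however you deliver code to the server
python manage.py migrate    # apply any not-yet-applied migration scripts
# then restart your application server, e.g. (service name varies):
# sudo systemctl restart gunicorn
```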
Further reading
For more information, refer to the Django 1.11 documentation on migrations. (You can, for example, limit migration creation or migration application to a single Django app instead of the whole Django project.) To get a grip on these things, I can recommend the free Django Girls tutorial.
I'm new to Rails and wish to deploy my app to Ubuntu 14 using Capistrano. Can someone explain to me what binstubs are and whether they are required for deploying my Rails app?
A binstub is an executable script that wraps a Ruby command to ensure that a specific version of that command is used.
The reason binstubs are sometimes necessary is because a given named Ruby command can refer to many different things, and so you can't be 100% sure of what the name refers to. In deployment, predictability is very important: you want to be 100% sure of what code you are running, especially in production.
For example, consider the command named rails. You might have multiple versions of Rails installed. Indeed, every time you upgrade to the latest patch release for security fixes, that is another new version you're installing. On top of that, you might have multiple versions of Ruby installed, too.
So when you run the command rails, which version of Ruby is used? Which version of Rails?
A binstub makes this decision explicit. The idea is that you create a special script and place it in the bin directory of your project, say bin/rails. This script uses Bundler to guarantee the right version of Rails is used. When you run bin/rails, you get that guarantee. (When you generate a new Rails project, Rails in fact creates this and other binstubs for you.)
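For illustration, the bin/rails binstub that Rails 4.x generates looks roughly like this (exact contents vary by Rails version; it only runs inside a Rails project):

```ruby
#!/usr/bin/env ruby
# bin/rails -- boots the app's bundled environment (config/boot.rb sets up
# Bundler) before dispatching to the Rails CLI, so the Gemfile.lock versions
# of Rails and its dependencies are guaranteed to be used.
APP_PATH = File.expand_path('../../config/application', __FILE__)
require_relative '../config/boot'
require 'rails/commands'
```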
Anyway, technically you do not need these binstubs as long as you use bundle exec rails. The bundle exec wrapper essentially does the same thing a binstub would do.
If you use the capistrano/rails gem in combination with the capistrano/bundler gem (make sure both are in your Capfile), then Capistrano will always use bundle exec and you won't have to worry about creating your own binstubs.
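A minimal Capfile excerpt for that setup (assuming both gems are already in your Gemfile):

```ruby
# Capfile (excerpt)
require 'capistrano/bundler'  # runs remote commands through `bundle exec`
require 'capistrano/rails'    # Rails-specific tasks (assets, migrations)
```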
bundle exec rake runs all tests perfectly fine locally. However, Travis CI keeps blowing up with Problem accessing /authentication without giving much more info to go on. Here's one of the failed builds: https://travis-ci.org/Nase00/Horizon/builds/48094102 For the life of me, I cannot figure out what is causing an authentication error when Travis runs bundle exec rake.
Here's the project repo: https://github.com/Nase00/Horizon
I'm not sure what version of Neo4j Travis uses (UPDATE: they use 1.9.4, which is not supported), but I'm going to guess it's a bit older than what Neo4j.rb supports. I'm one of the core maintainers and built the Neo4j 2.2 auth support that's fouling you up, but I tested it with different versions, going back to the early 2.1 subversions, and had no trouble.
The best practice is not to use Travis's Neo4j at all. Instead, configure Travis to install the same version of the database you're using for dev and production. As a bonus, the rake task that installs Neo4j also disables auth in 2.2, so you don't have to deal with that at all. It's not that we're against auth; it's that we think of the rake install and config tasks as convenience features for dev/test environments, not production, so no auth seems like a reasonable default.
Take a peek at our .travis.yml file to see how we do the installation: https://github.com/neo4jrb/neo4j/blob/master/.travis.yml. An excerpt that'll solve your issue:
language: ruby
rvm:
  - 2.0.0
script:
  - "bundle exec rake neo4j:install['community-2.2.0-M02'] neo4j:start default --trace"
Swap community-2.2.0-M02 for whatever version you want to use. I'd have to check again, but from what I remember we are compatible with versions as far back as 2.1.2. I apologize for this not being posted in our docs -- it should be.
I very strongly recommend using Ruby 2.2.0 with Neo4j.rb. We generate a lot of symbols during Cypher queries that won't be garbage collected otherwise.
EDIT for a little more info
The very first thing the auth module does is check for the presence of the authentication REST endpoint. In none of the versions of Neo4j I tested did it give an error like that; it just returned an empty body, which we interpret as a sign that auth is either unsupported or disabled.
Aftermath Edit
Travis support confirmed their provided Neo4j version is 1.9.4.
Why does redmine not use the development and test environments?
In the official installation guide they only show one environment when setting up the databases, advise to run bundler skipping dev and test, and run the rails server in production mode.
I think this guide describes the installation process only for a server (which runs in production mode). I think it is done this way so as not to confuse new users (who may not have much Rails knowledge).
You can easily use this guide to set up Redmine locally (I've done it successfully several times ;). To install Redmine locally you only need to change a few points in the guide.