Rails integration tests don't start fresh (Rails 4)

I'm having difficulty running multiple integration tests on Rails 4 (now called system tests in Rails 5).
Environment: Rails 4 / Minitest / Capybara / Poltergeist running on the Puma server.
When I run a single test that creates a new record, it works every time.
RAILS_ENV="test" ruby -I test test/integration/requests_test.rb -n /create_new/
When I run the entire set of tests, the above test fails every time because the record already exists.
RAILS_ENV="test" ruby -I test test/integration/requests_test.rb
I confirmed this by adding puts Request.all.collect(&:name) at the start: when running the group, the record to be created is already in the DB.
Here's the core issue - the DB is not reliably fresh for every test. (It is fresh for my unit tests and my functional tests, in groups and as individuals.) How can I make sure that my integration tests also start fresh each time?
In case it's helpful, the command above seems to be running Puma in development mode, even though I've specified ENV['RAILS_ENV'] = 'test' in test_helper.rb.

Have you checked out the database_cleaner gem (https://github.com/DatabaseCleaner/database_cleaner)? You can use it to clear the database every time you run a new integration test, via your rails_helper config. I'd check out their documentation, but mine looks like this:
DatabaseCleaner.strategy = :truncation

RSpec.configure do |config|
  config.before(:each) do
    DatabaseCleaner.clean
  end

  config.after(:each) do
    FactoryGirl.reload # if you're using FactoryGirl
  end
end
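Since the question is using Minitest rather than RSpec, a rough Minitest equivalent for test_helper.rb might look like the sketch below (assuming the classic DatabaseCleaner API). Truncation matters here because Capybara drives the app in a separate server thread that cannot see records hidden inside an uncommitted test transaction:

```ruby
# test/test_helper.rb -- sketch, assuming the classic DatabaseCleaner API
require 'database_cleaner'

# Truncation rather than transactions: the Capybara-driven server thread
# cannot see records inside an uncommitted transaction.
DatabaseCleaner.strategy = :truncation

class ActionDispatch::IntegrationTest
  setup    { DatabaseCleaner.start }
  teardown { DatabaseCleaner.clean }
end
```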

Related

Testing - How to split up pre-commit unit tests and CI end-to-end tests

Scenario
I'm working on an app that has fast unit/functional jest tests along with slower end-to-end jest-puppeteer tests. I would like to split those up so that I can run the faster tests as part of a git pre-commit hook and leave the end-to-end tests to be run on CI after the code is eventually pushed to origin.
Question
How can I define specific tests to run at pre-commit? Specifically, via a regex similar to jest's moduleNameMapper, e.g. <rootDir>/__tests__/[a-z]+\.unit\.test\.js
Best idea so far:
in package.json, add a test:pre script that uses bash find . -regex (with a bash for loop) to run the desired "pre-commit" tests
I've added
"test:pre": "PRE=1 npm test -- test/pre-*test.js"
# everything after -- is a kind of regex filter for matching file names
to my package.json scripts and in my jest-puppeteer global-setup.js I'm using
if(+process.env.PRE) return;
before all the puppeteer extras are started. So now I can
$ npm run test:pre
and voilà.
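The find -regex idea from "Best idea so far" can be demonstrated with a throwaway layout (the directory and file names here are made up):

```shell
# create a throwaway layout to demonstrate the selection (names are made up)
mkdir -p jestdemo/test
touch jestdemo/test/pre-login.test.js jestdemo/test/checkout.e2e.test.js
# select only the "pre" tests; the resulting list could feed a pre-commit jest run
find jestdemo -regex '.*/pre-.*\.test\.js'
```

This prints only jestdemo/test/pre-login.test.js, i.e. exactly the set of files you would hand to jest in the hook.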

How to create unit test cases for Ansible functionalities?

I want to add unit testing for my ansible playbook. I am new to this and have tried a few things, but didn't understand much. How can I start on this and write a test case properly?
Here is a simple example:
yum:
  name: httpd
  state: present
Ansible is not a programming language but a tool that checks that the state you describe is aligned with the actual state of the node you run it against. So you cannot unit test your tasks; in a way, they are already tests by themselves. The underlying ansible binary that runs those tasks has unit tests itself, used during its development.
Your example above asks ansible to check whether httpd is present on the target machine; it will return ok if that is the case, changed if it had to install the package to fulfill the requirement, or an error if something went wrong.
That said, just because you cannot unit test your ansible code does not mean that no tests are possible at all. You can perform basic static checks with yamllint and ansible-lint. To go further, you will have to run your playbook/role/collection against a test target node.
This has become quite easy with CI services that let you spawn virtual machines or Docker containers from scratch and run your script to check that no error is fired, that the --check option passes successfully, that idempotency is obeyed (i.e. nothing should change on a second run with the same parameters), and that everything works as expected (e.g. in your case above, port 80 is open and you get the default Apache web page).
You can write those kinds of tests yourself (running against localhost in a test vm, for example). This Mac Appstore CLI role by Geerlingguy uses such tests through travis-ci, as an example.
You can also use existing tools to help you write those tests in a more structured way like molecule. Here are some example roles using it if you are interested:
Redis role by Geerlingguy
nexus3-oss role by ThoTeam [1]
[1] Note for transparency: I am the maintainer of this example repository
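As an illustration of the molecule route, a minimal scenario configuration might look like the following sketch (the Docker driver and image name are assumptions):

```yaml
# molecule/default/molecule.yml -- minimal sketch; driver and image are assumptions
driver:
  name: docker
platforms:
  - name: instance
    image: quay.io/centos/centos:stream9
provisioner:
  name: ansible
verifier:
  name: ansible
```

Running molecule test against such a scenario spins up the container, applies the role, runs the idempotency check, and tears everything down again.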

Rails: How to set up db:schema:load for initial deploy with Capistrano

I'm trying to deploy my Rails 4 app using Capistrano 3. I'm getting error messages when running the db migrations (I've been sloppy, sorry). Is there a way to have Capistrano deploy the app (at least the first time) using db:schema:load?
An excerpt of my deploy.rb:
namespace :deploy do
  %w[start stop restart].each do |command|
    desc 'Manage Unicorn'
    task command do
      on roles(:app), in: :sequence, wait: 1 do
        execute "/etc/init.d/unicorn_#{fetch(:application)} #{command}"
      end
    end
  end
end
I'm not sure how to override Capistrano 3's default behaviour. Can someone tell me how to add this to my script?
For first-time deploys, I generally hack around it by logging into the server, cd-ing into the release directory (which will have the deployed code at this point), and then manually running RAILS_ENV=yourenv bundle exec rake db:setup.
In Capistrano 3.10.1 with a Rails 5.1.6 application,
~/Documents/p.rails/perla-uy[staging]$ bundle exec cap staging deploy:updating
gives me enough to shell-in and run the db:structure:load or db:schema:load task manually. In the secure shell session to the host, switch to the newly created release directory and:
dclo#localhost:~/perla-uy/releases/20180412133715$ bundle install --without development test --deployment
dclo#localhost:~/perla-uy/releases/20180412133715$ bundle exec rails db:schema:load
Shelling into a (successful or failed) deploy that has tried deploy:migrate isn't quite the same.
Note: I have RAILS_ENV=production and RAILS_MASTER_KEY=... set-up by the shell login.
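If you would rather keep this inside Capistrano than shell in, one option is a custom task along these lines (a sketch using the standard Capistrano 3 DSL; it assumes :rails_env is set in your stage file), which you could invoke once with cap staging deploy:load_schema:

```ruby
# lib/capistrano/tasks/load_schema.rake -- sketch, invoked manually on first deploy
namespace :deploy do
  desc 'Load db/schema.rb instead of running every migration'
  task :load_schema do
    on roles(:db) do
      within release_path do
        with rails_env: fetch(:rails_env) do
          execute :rake, 'db:schema:load'
        end
      end
    end
  end
end
```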

Rails 4 integration tests for multiple Engines with Rspec + Factory Girl (shared factories)

I'm using Codeship for continuous deployment. My app is set up so that the main app loads a few engines, and the engines isolate specific functionality. Each engine might have various rspec tests related to its isolated functionality. Codeship spins up a copy of my app, runs a few commands, and deploys the code if everything works. Those few commands need to run all my tests. bundle exec rake or bundle exec rspec no longer work, as they only run the tests in my main container app (none of the engines).
I ended up creating the following shell script, which loops through the directories inside my gems directory and calls bundle install and bundle exec rspec:
retval=0
for dir in gems/*/
do
  dir=${dir%*/}
  echo "## ${dir##*/} Tests"
  cd gems/${dir##*/}
  bundle install
  if ! bundle exec rspec spec/ --format documentation --fail-fast;
  then
    retval=1
    break
  fi
  cd ../../
done
exit $retval
This works well. All my tests are executed, and it exits with an error if any tests fail. In an effort to see if there was a better way to accomplish this, I tried moving this functionality into a rake task. I attempted a couple of methods: I used the FileUtils methods to loop through the directories and the system method to call the same bundle install and bundle exec rspec as above. The commands are called, but when it is run from the rake task, there is a problem with the factories. If Engine A calls factories from Engine B, the returned values are nil. I'm calling shared factories this way (example from Engine A):
FactoryGirl.define do
  factory :Model, class: EngineB::Model do
  end
end
Sorry this is long winded. To sum up my questions:
1) Is the shell script a good way to manage tests? I haven't run into any issues with it yet.
2) Do you know why calling that shell script from a rake task causes the factories to return nil values?
3) Do you know why mimicking the shell functionality in a rake task also results in the factories returning nil values? (probably the same problem as above)
4) Is there a better way to share factories? I've placed all models that are referenced in multiple engines in one engine.
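Regarding question 4: one way to make the sharing explicit is to point FactoryGirl at every engine's factory directory before reloading, so each suite sees the same definitions regardless of which directory it was launched from. A sketch (the engine constant and paths are assumptions):

```ruby
# spec/spec_helper.rb -- sketch; the engine constant and paths are assumptions
require 'factory_girl'

FactoryGirl.definition_file_paths = [
  File.expand_path('../factories', __FILE__),     # this engine's own factories
  EngineB::Engine.root.join('spec', 'factories')  # shared factories from Engine B
]
FactoryGirl.reload
```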

Django Deploy From Github to Server Like Capistrano with Simple Fabric Recipe

I wanted to use Capistrano to deploy my django app to my webfaction server, but due to my purist tendencies, I wanted to do it in Fabric, in the way that Capistrano does it. The thing I liked most about Capistrano is that it automatically retrieves a repo's content and pushes it to a server.
The fabric recipes I have seen so far required me to do things "the git way", manually entering git commands to work with the repo, etc.
Is there a way to deploy a Django app in Fabric (or any other python package) "the Capistrano" way?
Side note: in case I really have to work with Capistrano, is there a way to bypass the assets precompile task and the rake db:migrate task?
I've successfully used the scripts from here to deploy to webfaction.
If you want to bypass the assets compilation, just don't include this line in your recipe:
load 'deploy/assets'
If you don't want to run migrations, just never invoke the migration command:
cap deploy:migrate
If you want to remove some other behaviors (symlink, restart, uploading code to the server), write the chosen parts from this:
namespace :deploy do
  task :start do ; end
  task :stop do ; end
  task :restart do ; end
  task :update_code do ; end # override this task to prevent capistrano from uploading to servers
  task :symlink do ; end # don't create the current symlink to the last release
end
For anyone who stumbles across this, here is a very basic capistrano recipe:
http://ygamretuta.me/2012/07/18/deploy-django-1-4-webfaction-capistrano/
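To sketch what a Capistrano-style Fabric recipe could look like, here is an illustration rather than a drop-in script: the host, repo URL, and webfaction paths are all assumptions, and it uses the Fabric 1 API (fabric.api):

```python
# fabfile.py -- Capistrano-style deploy sketch; host, repo, and paths are assumptions
from fabric.api import cd, env, run

env.hosts = ["user@user.webfactional.com"]
REPO = "git@github.com:user/myapp.git"
APP_ROOT = "~/webapps/myapp"

def deploy():
    # timestamped release directory, mirroring Capistrano's releases/ layout
    release = run("date +%Y%m%d%H%M%S")
    release_path = "{0}/releases/{1}".format(APP_ROOT, release)
    run("git clone --depth 1 {0} {1}".format(REPO, release_path))
    with cd(release_path):
        run("pip install -r requirements.txt")
        run("python manage.py migrate --noinput")
    # atomically switch the "current" symlink, as Capistrano does
    run("ln -sfn {0} {1}/current".format(release_path, APP_ROOT))
```

Invoking fab deploy then gives you the clone-release-symlink cycle that Capistrano performs, without ever typing git commands by hand.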