Read Jubula H2 DB file and populate the test suite along with test cases - jubula

I am new to Eclipse Jubula. Can the following be done against the local H2 DB that Jubula creates?:
1) Get all the test suites along with their test cases from the DB.
2) Set a test case active/inactive and write the change back to the DB.

There's no way of doing this that is documented or recommended.

Related

PostgreSQL: How can I create a temporary user and database for my unit tests?

I'm developing a C++ web application using PostgreSQL with libpqxx, on Ubuntu 16.04. The issue I'm having is that I need a way to run my unit tests that is portable across Linux systems.
What I have now is a class that handles the database calls, and I'd like to test how this class behaves under my unit tests. For that, every time I run a unit test, I'd like to:
Create a temp user
Create a dummy DB
Run my tests on it
Delete the DB
Delete the user
Now, doing these steps is fine with Google Test; I can create a fixture that will do them all reproducibly. BUT...
How can I create a user with a password in one call (without being prompted for the password), so that I can create that user on the go and run my tests?
The createuser manual doesn't seem to offer a way to supply the password as an argument. I was hoping to do something like this:
system("createuser " + username + " -password abcdefg");
where system() runs a terminal command. Then I can connect to the DB server with that username and password to do my unit tests.
Another failed attempt was to pass an SQL query that would create the user through the terminal:
system("psql -c \"CREATE ROLE joe PASSWORD 'aabbccdd';\"")
When I do this, I get the error:
psql: FATAL: database "user" does not exist
where user is my unix username.
Remember that I cannot connect to the server with libpqxx because I don't have credentials yet (as the temporary user). Right? Please correct me if I'm wrong.
Is there any way to do what I'm planning here and make unit-tests runnable without user intervention?
The problem with the system("psql -c \"CREATE ROLE joe PASSWORD 'aabbccdd';\"") call is that it doesn't specify a database to connect to, so psql defaults to a database with the same name as the user logging in. Try adding -d postgres and see if it works.
Please note that this only creates the user; it doesn't create any database or grant privileges on anything.
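With the -d flag added, the call from the question would look like this (a sketch only; LOGIN is also included here because a role created with plain CREATE ROLE defaults to NOLOGIN and could not connect for the tests):
system("psql -d postgres -c \"CREATE ROLE joe LOGIN PASSWORD 'aabbccdd';\"")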

Rails 4: run migrations as separate DB user

The situation I have: our normal Rails DB user has full ownership in order to run migrations.
However, we use a shared DB for development, so we can't run "destructive" DB tasks against the development DB, such as rake db:drop/reset/etc.
My thought is to create 2 DB users:
rails-service
rails-migrator
The service user is the "normal" web app user that connects to the DB when the app is live. This DB user would only have standard CRUD privileges but no dropping rights.
The migrator user is the "admin" user that is only used for running migrations. This DB user would have normal "full" access to the DB such that it "could" drop the DB if that command were executed.
Question: Is there a clean way to tell Rails migrations to run as the rails-migrator user? I'm not sure how I would accomplish this aside from somehow altering the connection strings for every rails migration file, which seems like a bad idea.
In tandem with the above, I'm going to "delete" the destructive rake tasks so that a developer can't even run them.
# lib/tasks/db.rake
# See: https://coderwall.com/p/jt4e1q/disable-destructive-rake-tasks-by-environment
tasks = Rake.application.instance_variable_get '@tasks'
tasks.delete 'db:reset'
tasks.delete 'db:drop'

namespace :db do
  desc 'db:reset not available in this environment'
  task :reset do
    puts 'db:reset has been disabled'
  end

  desc 'db:drop not available in this environment'
  task :drop do
    puts 'db:drop has been disabled'
  end
end
I refer you to the answer of Matthew Rudy Jacobs from 2007 (!): https://www.ruby-forum.com/topic/123618
Luckily it still works now :)
I just changed DEFINED? and the rest to ENV['AS_DB_ADMIN'] and used it to separate migration access off to another user.
For the migration I used:
set :default_env, { as_db_admin: true }

Hibernate running on separate JVM fail to read

I am implementing a WebService that uses Hibernate to write/read data to a database (MySQL). One big issue I had: when I inserted data (e.g., into the USER table) from one JVM (for example a JUnit test, or directly from the DBUI suite), the insert succeeded, but my WebService's Hibernate, running on a separate JVM, could not find the new data, even though both point to the same DB server. Only after I destroyed the WebService's Hibernate SessionFactory and recreated it could the WebService's Hibernate layer read the newly inserted data. In contrast, the same JUnit test or a direct query from the DBUI suite could find the inserted data.
Any assistance is appreciated.
This issue was resolved today with the following:
I changed our Hibernate config file (hibernate.cfg.xml) to set the transaction isolation level to at least 2 (READ COMMITTED). This immediately resolved the issue above. To understand this isolation level setting further, please refer to these:
Hibernate reading function shows old data
Transaction isolation levels relation with locks on table
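For reference, the change amounts to one property in hibernate.cfg.xml (a sketch, assuming the standard hibernate.connection.isolation property; the value 2 corresponds to java.sql.Connection.TRANSACTION_READ_COMMITTED):
<property name="hibernate.connection.isolation">2</property>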
I ensured I did not use 2nd-level caching by setting CacheMode to IGNORE on each of my Session objects:
Session session = getSessionFactory().openSession();
session.setCacheMode(CacheMode.IGNORE);
For reference only: some folks did the following in hibernate.cfg.xml to disable 2nd-level caching in their apps (but I didn't need to):
<property name="cache.provider_class">org.hibernate.cache.internal.NoCacheProvider</property>
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.cache.use_query_cache">false</property>

Codeception - HTML report generation seems slow?

I am using Codeception to run three acceptance tests which basically are as follows:-
Check the email address 'admin@admin.com' exists
Create a new user account
Login to the website
Obviously this requires the database, so I have added 'Db' to the list of modules in acceptance.suite.yml. However, the generation of the report takes some time. Is this normal, or do I have something wrong with my setup?
Below is the report (and time taken for each according to the html file it is generating)
check admin@admin.com account exists (AdminCept.php) (0.01s)
create new user account (CreateUserCept.php) (19.1s)
log in to the website (LoginCept.php) (21.72s)
Approx. 40 seconds in total (although the command line states 1:02, I guess because it also loads the mock database dump.sql back into the database).
Can anybody shed any light on the matter?
Not really an answer, but closing this off: simply put, the report generation takes time.

How do I run a unit test against the production database?

How do I run a unit test against the production database instead of the test database?
I have a bug that seems to occur on my production server but not on my development computer.
I don't care if the database gets trashed.
Is it feasible to make a copy of the database, or of the part of the database that causes the problem? If you keep a backup server, you might be able to copy the data from there instead (make sure you have another backup, in case you mess up the backup database).
Basically, you don't want to mess with live data, and you don't want to be left with no backup in case you mess something up (and you will!).
Use manage.py dumpdata > mydata.json to get a copy of the data from your database.
Go to your local machine and copy mydata.json to a subdirectory of your app called fixtures, e.g. myapp/fixtures/mydata.json, then do:
manage.py syncdb # Set up an empty database
manage.py loaddata mydata.json
Your local database will be populated with data and you can test away.
Make a copy of the database... it's really good practice!
Then just execute the test, but instead of calling commit at the end, call rollback.
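A minimal sketch of that idea using the Django ORM (my own illustration, assuming a reasonably recent Django; the deliberate exception is just a way to force the atomic block to roll back):
import unittest
from django.db import transaction

class Rollback(Exception):
    pass

class ProductionRollbackTest(unittest.TestCase):
    def test_against_live_data(self):
        try:
            with transaction.atomic():
                # exercise the code under test against the real database here
                # ...
                raise Rollback  # raising forces a rollback instead of a commit
        except Rollback:
            pass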
The first thing to try should be manually executing the test code on the shell, on the production server.
python manage.py shell
If that doesn't work, you may need to dump the production data, copy it locally and use it as a fixture for the testcase you are using.
If there is a way to ask Django to use the standard database without creating a new one, then rather than creating a fixture you can take an SQL dump, which will generally be a much smaller file.
Short answer: you don't.
Long answer: you don't; you make a copy of the production database and run the tests against that.
If you really don't care about trashing the DB, then Marco's answer of rolling back the transaction is my preferred choice as well. You could also try NdbUnit, but I personally don't think the extra baggage it brings is worth the gains.
How do you test against the test DB now? By "test db" do you mean SQLite?
HTH, Berryl
I have both a full-on-slow-django-test-db suite and a crazy-fast-runs-against-production test suite built from a common test module. I use the production suite for sanity checking my changes during development and as a commit validation step on my development machine. The django suite module looks like this:
import django.test
import my_test_module
...
class MyTests(django.test.TestCase):
    def test_XXX(self):
        my_test_module.XXX(self)
The production test suite module uses bare unittest and looks like this:
import unittest
import my_test_module
class MyTests(unittest.TestCase):
    def test_XXX(self):
        my_test_module.XXX(self)

suite = unittest.TestLoader().loadTestsFromTestCase(MyTests)
unittest.TextTestRunner(verbosity=2).run(suite)
The test module looks like this:
def XXX(testcase):
    testcase.assertEquals('foo', 'bar')
I run the bare unittest version like this, so my tests in either case have the django ORM available to them:
% python manage.py shell < run_unit_tests
where run_unit_tests consists of:
import path.to.production_module
The production module needs a slightly different setUp() and tearDown() from the django version, and you can put any required table cleaning in there. I also use the django test client in the common test module by mimicking the test client class:
class FakeDict(dict):
    """
    Wraps dict and provides a getlist member used by the django view
    request unpacking code; used when passing in a FakeRequest (see
    below). Only needed for those api entrypoints that have list
    parameters.
    """
    def getlist(self, name):
        return [x for x in self.get(name)]

class FakeRequest(object):
    """
    An object mimicking the django request object passed in to views,
    so we can test the api entrypoints from the developer unit test
    framework.
    """
    user = get_test_user()
    GET = {}
    POST = {}
Here's an example of a test module function that tests via the client:
def XXX(testcase):
    if getattr(testcase, 'client', None) is None:
        req_dict = FakeDict()
    else:
        req_dict = {}
    req_dict['param'] = 'value'
    if getattr(testcase, 'client', None) is None:
        fake_req = FakeRequest()
        fake_req.POST = req_dict
        resp = view_function_to_test(fake_req)
    else:
        resp = testcase.client.post('/path/to/function_to_test/', req_dict)
    ...
I've found this structure works really well, and the super-speedy production version of the suite is a major time-saver.
If your database supports template databases, use the production database as a template database. Ensure that your Django database user has sufficient permissions.
If you are using PostgreSQL, you can easily do this by specifying the name of your production database as POSTGIS_TEMPLATE (and using the PostGIS backend).
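In an older GeoDjango setup that is a single line in settings.py (a sketch; 'my_production_db' is a placeholder for your production database's name):
POSTGIS_TEMPLATE = 'my_production_db'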