I'm writing a complicated web application in Django. There are many components. Two in particular are the Django server (let's call this Server) and a C++ application server (let's call this Calculator) which serves calculations to Server. When Server wants a calculation done, it sends a command to a socket on which Calculator is listening. Like this:
{
"command": "doCalculations"
}
Now, Calculator might need different pieces of information at different times to do its work. So instead of passing the data directly to Calculator in the command, it is up to Calculator to ask for what it needs. It does this by calling a RESTful API on Server:
https://Server/getStuff?with=arguments
Calculator then uses the data from this call to do its calculations and responds to Server with an answer.
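For reference, the Server side of that exchange is roughly this (host, port and response handling are simplified placeholders, not the real code):

import json
import socket

# Placeholder connection details for Calculator's listening socket.
CALCULATOR_HOST = "127.0.0.1"
CALCULATOR_PORT = 9000

def request_calculation():
    """Send the 'doCalculations' command and wait for Calculator's answer."""
    command = json.dumps({"command": "doCalculations"}).encode("utf-8")
    with socket.create_connection((CALCULATOR_HOST, CALCULATOR_PORT)) as sock:
        sock.sendall(command)
        # While we wait here, Calculator calls back into Server's REST API.
        return json.loads(sock.recv(65536).decode("utf-8"))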
The problems begin when I try to do unit testing using Django's unittest framework. I set up a bunch of data structures in my test, but when Server calls Calculator, that data needs to be available through the REST API so Calculator can get what it needs. The trouble is that the Django test framework doesn't spin up a web server, and if I do this manually it reads the data from the real database, not from the test case's database.
Does anybody know how to run a unit test with the data made available to external people/processes?
I hope that makes sense...
You need to specify the fixtures to load in your test class.
https://docs.djangoproject.com/en/1.7/topics/testing/tools/#fixture-loading
from django.test import TestCase

class MyTest(TestCase):
    fixtures = ['data.json']

    def setUp(self):
        # do stuff
        pass

    def tearDown(self):
        # do stuff
        pass
Where data.json can be generated using python manage.py dumpdata.
It will be filled with data from your main db in JSON format.
data.json should live in the fixtures folder of the app you are testing (create one if necessary).
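For example, to generate the fixture from your main database (myapp is a placeholder for your own app label):

python manage.py dumpdata myapp --indent 2 > myapp/fixtures/data.json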
I have been rushing to get my app, a mixture of a nodejs server driving an sqlite database and a lit-element based client side providing the UI, into a usable state as a beta release. I achieved that a couple of days ago, and now I am (belatedly, I know) thinking about how to put together a test framework. However, I am really struggling to understand how best to test the client side. I think it's because I am having difficulty understanding conceptually what the two main choices of framework are. Before I go into more detail, let me explain the structure of the app in top-level terms.
At the project root level there are three main directories: node_modules, which comprises all the modules I've pulled in (including lit-element and web-components-loader, which are client-side elements - but see below); server, which contains all the code for the server side of my application; and client, which consists of all the code for the client side of my app. I run rollup ONLY at module install time to "package" lit-element, the directives I use and the web-component-loader, effectively treeshaking them and copying them to client/libs. As a result my client is coded to assume the modules are in the libs directory AND I DO NOT NEED OR HAVE any build stage.
I guess the root of the client is index.html, which pulls in a service-worker.js and main-app.js. main-app is the root of a tree of lit-element based components that make up the entire client app. Nginx is the web server for all the static files in the client, but it also acts as a proxy, passing any urls that start with /api to a standard node http web server (not even express, although I do use the router, body-parser and final-handler modules), and these get passed to various api handlers, each of which is a separate javascript file - although these can "require" a few common modules that I have written and those in the node_modules directory.
I plan on using jest as the test environment. For my server I think it is easy. For each api handler I want to test, I can build a test script that "requires" the javascript file I want to test. I am in two minds about whether to use an sqlite database for testing or mock something - I am leaning towards the former, as I am using better-sqlite3 and it is totally synchronous and very fast. I already have scripts to create empty databases, so I have no worries about test isolation.
Client testing is where I get confused. I "think" that in essence jest can run tests the same way as for the server, one element at a time. BUT these elements and my test scripts are going to need a "web-platform" set of APIs - not least of which is the entire shadow DOM and custom elements stuff that lit-element uses. This is where, I think, puppeteer or electron come in, with their associated jest plugins, which can put these platform APIs into the test environment. But, and this is the essence of my confusion, puppeteer instructions all start with something like
const browser = await puppeteer.launch({headless: true});
const page = await browser.newPage()
await page.goto(SOME URL);
What is this URL? Do I also have to run a server? I cannot relate this snippet to running a test controlled by jest. All the examples seem to use webpack and typescript, neither of which I know anything about.
The other module I have seen mentioned is electron, in particular this article, which everything else seems to point to (or the same text).
https://www.ninkovic.dev/blog/2020/testing-web-components-with-jest-and-lit-element
From the code snippets in this article it "seems" like it might be what I want, BUT ...
I cannot find very many references to electron other than on its own web site. There it tells you to use electron as a tool to build a cross-platform application, but nowhere can I find what it actually is - it assumes you already know. I don't want a UI for my unit testing; I want it to be headless like in puppeteer.
Hence my confusion and why I am unsure how to achieve what I want. Can someone give me some pointers as to:
How can I set up puppeteer to run headless tests without needing a server? OR
What exactly is electron, and can I use it to
a) run my tests, and
b) provide me with tools to examine the DOM elements I have created, to see that I have created the right ones?
And how is it different from puppeteer - can I use it to conduct headless tests of my client?
UPDATE
I've done some more digging and am beginning to understand the differences. Let me summarise what I think I have found.
Puppeteer is great for end-to-end testing of your site. You run the tests by launching puppeteer's bundled headless browser against the home page of your site (or, more likely, a development test site) and programmatically pretending to be a user who can click on buttons etc. You can use various methods, including functions such as document.querySelector(), to check your UI has behaved how you think, or you can take screenshots and compare them with standardised versions. I could possibly use it for unit tests, but I would have to run a server, create a test fixture html page for every test and navigate to it. jest-puppeteer is a package with some of that built in.
Electron is a platform for building apps. The article I was referencing was using jest-electron, a test runner built with electron. So worrying about electron is a red herring; I should be worrying about jest-electron.
My main concern right now, I think, is that I need different jest configurations for my three scenarios
unit tests on the server
unit tests on the client
end to end testing of the complete app.
Given I have only one package.json file and one set of node_modules, I need to figure out a way to have three different jest.config.js files.
I would like to unit test CRUD operations against a pre-populated Neo4j database.
I am thinking that a way to do this might be to:
Create an empty database (let's call it testDB)
Create a database backup (let's call it testingBackup)
On running tests:
Delete any data from testDB (see the sketch after this list)
Populate testDB from testingBackup
Run unit test queries on the now populated testDB
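As an illustration, the "delete any data" step could look like this from Python, using the official neo4j driver (the bolt URL and credentials are placeholders; repopulating from testingBackup would still happen outside this snippet):

from neo4j import GraphDatabase

# Placeholder connection details for the local test instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "test"))

def reset_test_db():
    """Remove all nodes and relationships before repopulating testDB."""
    # With Neo4j 4.x you can target a specific database: driver.session(database="testDB")
    with driver.session() as session:
        session.run("MATCH (n) DETACH DELETE n")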
I am aware of the backup/restore functions, the load/dump functions, the export to CSV / load from CSV functions, etc. However, I'm not sure which of these will be most appropriate to use and which can be automated most easily. I'm on Ubuntu and using Python.
I would need to be able to quickly and easily alter the backup data as the application evolves.
What is the best approach for this please?
I have done something similar, with some caveats. I have done tests like these using Java and testcontainers. Also, I didn't use neo4j; I have used Postgres, SQL Server and MongoDB for my tests. Using the same technique for neo4j should be similar to one of those. I will post the link to my github examples for mongodb/springboot/java. Take a look.
The idea is to spin up a testcontainer from the test (i.e., a docker container for tests), populate it with data, make the application use it as its database, then assert at the end.
In this approach there is no testingBackup - only a csv file with data.
-Your test spins up a testcontainer with neo4j (this is your testDB).
-Load the csv into this container.
-Get the IP, port, user and password of the testcontainer (this part depends on the type of database image available for testcontainers; some images allow you to set your own port, user id and password, and some won't).
-Pass these details to your application and start it (I am not sure how this part will work for a Python app - here you are on your own; see the link below to a blog I found with a python/testcontainers example. I have used a spring-boot app, and you can see my code on github).
-Once done, execute queries against your containerized neo4j and assert (see the sketch after this list).
-When the test ends, the container is disposed of along with the data.
-Any change to the csv file can create new scenarios for your test.
-Create another csv file/test as needed.
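A rough sketch of that flow in Python, assuming the testcontainers package's generic DockerContainer API and the official neo4j driver (image tag, credentials and the population step are illustrative placeholders, not taken from the linked examples):

from neo4j import GraphDatabase
from testcontainers.core.container import DockerContainer

# Image tag and credentials are illustrative placeholders.
neo4j_image = (
    DockerContainer("neo4j:4.4")
    .with_env("NEO4J_AUTH", "neo4j/testpass")
    .with_exposed_ports(7687)
)

with neo4j_image as container:
    host = container.get_container_host_ip()
    port = container.get_exposed_port(7687)
    # NB: a real test should wait/retry until the bolt port accepts connections.
    driver = GraphDatabase.driver(f"bolt://{host}:{port}", auth=("neo4j", "testpass"))
    with driver.session() as session:
        # Population step - replace with LOAD CSV or your own loader.
        session.run("CREATE (:Person {name: 'test'})")
        # Assert against the containerized database.
        count = session.run("MATCH (p:Person) RETURN count(p) AS c").single()["c"]
        assert count == 1
    driver.close()
# The container and its data are disposed of when the with-block exits.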
Here are the links,
https://www.testcontainers.org/
testcontainers neo4j module https://www.testcontainers.org/modules/databases/neo4j/
A blog detailing testcontainers and python.
https://medium.com/swlh/testcontainers-in-python-testing-docker-dependent-python-apps-bd34935f55b5
My github link to mongodb/springboot and sqlserver/springboot examples.
One of these days I will add a neo4j sample as well.
https://github.com/snarasim123/testcontainers
So I have an assignment to build a web interface for a smart sensor.
I've already written the python code to read the data from the sensor and write it into sqlite3, control the sensor etc.
I've built the HTML, CSS template and implemented it into Django.
My goal is to run the sensor reading script in parallel with the Django interface on the same server, so the server will do all the communication with the sensor and the user will be able to read and configure the sensor from the web interface. (Same logic as modern routers - control and configure from a web interface.)
Q: Where do I put my sensor_ctl.py script in my Django project, and how do I make it run independently on the server (to read sensor data 24/7)?
Q: Where in my Django project do I use the classes and methods from sensor_ctl.py to write/read data to/from my Django database instead of the local sqlite3 database that I've used to test sensor_ctl.py?
Place your code in your app's management/commands folder (appname/management/commands). Use the official guide for custom management commands. Then you will be able to use your custom command like this:
./manage.py getsensorinfo
So when you have this command registered, you can just put it in cron and it will be executed every minute.
Secondly, you need to rewrite your code to use Django ORM models, like this:
Stat.objects.create(temp1=60, temp2=70) instead of INSERT INTO ....
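Putting those two pieces together, the management command could look roughly like this (sensor_ctl.read_temperatures() and the Stat fields are assumptions based on the snippets above, not your actual code):

# appname/management/commands/getsensorinfo.py
from django.core.management.base import BaseCommand

from appname.models import Stat  # the model with temp1/temp2 used above
import sensor_ctl                 # your existing sensor script

class Command(BaseCommand):
    help = "Read the smart sensor and store a sample via the Django ORM"

    def handle(self, *args, **options):
        # Hypothetical helper - call whatever sensor_ctl actually exposes.
        temp1, temp2 = sensor_ctl.read_temperatures()
        Stat.objects.create(temp1=temp1, temp2=temp2)
        self.stdout.write("Saved sensor reading")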
I'm having a hard time customizing the test database setup behavior. I would like to achieve the following:
The test suites need to use an existing database
The test suite shouldn't erase or recreate the database; instead, it should load the data from a MySQL dump
Since the db is populated from a dump, no fixtures should be loaded
Upon finishing tests the database shouldn't be destroyed
I'm having a hard time getting the test suite runner to bypass creation.
Fast forward to 2016 and the ability to retain the database between tests has been built into Django. It's available in the form of the --keepdb flag to manage.py test:
New in Django 1.8. Preserves the test database between test runs. This
has the advantage of skipping both the create and destroy actions
which can greatly decrease the time to run tests, especially those in
a large test suite. If the test database does not exist, it will be
created on the first run and then preserved for each subsequent run.
Any unapplied migrations will also be applied to the test database
before running the test suite.
This pretty much fulfills all the criteria you mentioned in your question. In fact, it even goes one step further: there is no need to import the dump before each and every run.
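For example, you can point Django's test database at an existing, pre-populated database in your settings (names and credentials here are placeholders) and keep it between runs:

# settings.py - reuse a pre-populated database for tests
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "myapp",
        "USER": "myapp",
        "PASSWORD": "secret",
        "TEST": {
            # Load your MySQL dump into this database once; --keepdb preserves it.
            "NAME": "myapp_test",
        },
    },
}

Then run python manage.py test --keepdb so the test database is neither recreated before nor destroyed after the run.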
This TEST_RUNNER works in Django 1.3
from django.test.simple import DjangoTestSuiteRunner as TestRunner

class DjangoTestSuiteRunner(TestRunner):
    def setup_databases(self, **kwargs):
        pass

    def teardown_databases(self, old_config, **kwargs):
        pass
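To activate it, point the TEST_RUNNER setting at wherever you define this class (the module path below is just an example):

TEST_RUNNER = 'myproject.test_runner.DjangoTestSuiteRunner'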
You'll need to provide a custom test runner.
The bits you're interested in overriding on the default django.test.runner.DiscoverRunner are the DiscoverRunner.setup_databases and DiscoverRunner.teardown_databases methods. These two methods are involved in creating and destroying test databases and are executed only once. You'll want to provide test-specific project settings that use your existing test database by default, and override these methods so that the dump data is loaded and the test database isn't destroyed.
Depending on the size and contents of the dump, a safe bet might be to just create a subprocess that will pipe the dump to your database's SQL command-line interface; otherwise you might be able to obtain a cursor and execute queries directly.
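A minimal sketch of such a runner, assuming a MySQL dump and the mysql command-line client (the dump path and class name are made up for illustration):

import subprocess

from django.conf import settings
from django.test.runner import DiscoverRunner

class ExistingDatabaseRunner(DiscoverRunner):
    """Load a SQL dump into an existing database instead of creating/destroying one."""

    def setup_databases(self, **kwargs):
        db = settings.DATABASES["default"]
        with open("/path/to/dump.sql", "rb") as dump:  # placeholder path
            subprocess.run(
                ["mysql", "-u", db["USER"], f"-p{db['PASSWORD']}", db["NAME"]],
                stdin=dump,
                check=True,
            )
        return []  # nothing for teardown_databases to clean up

    def teardown_databases(self, old_config, **kwargs):
        pass  # leave the existing database untouched

You would then point TEST_RUNNER at this class in your test-specific settings.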
If you're looking to get rid of fixture loading completely, you can provide a custom base test case that extends Django's default django.test.testcases.TestCase with the TestCase._fixture_setup and TestCase._fixture_teardown methods overridden to be no-ops.
Caveat emptor: this runner will make it impossible to facilitate tests for anything but your application's sources. It's possible to customize the runner to create a specific alias for a connection to your existing database and load the dump, then provide a custom test case that overrides TestCase._databases_names to point to its alias.
I haven't found a way to test REST methods for my application automatically, while using setUp and tearDown methods to preserve the uniqueness of each test.
gaetestbed gives me a clean datastore in between tests, and httplib2 allows me to easily call REST methods and parse their responses; but in order to do so, a local instance of my application must be running on port 8080 and be called on each test. This defeats the purpose of a gaetestbed-like refresh of the datastore, since data is preserved in between calls.
I could start and stop the GAE server within the setUp and tearDown methods, but this seems wasteful and time-consuming. Is there a better way?
Using gaetestbed, or your own unittest code like this, simply instantiate your handler classes directly, using a mocked/fake environment dictionary, and call the methods (initialize() and get()/post()/etc in the case of webapp) directly.
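As a sketch of that pattern for an old-style webapp handler (MyHandler and the URL path are hypothetical, and the details may differ for your handler framework and SDK version):

import unittest

from google.appengine.ext import webapp

from handlers import MyHandler  # hypothetical handler under test

class MyHandlerTest(unittest.TestCase):
    def test_get(self):
        # Build a fake request/response pair instead of hitting a server on port 8080.
        request = webapp.Request.blank("/api/items?limit=10")
        response = webapp.Response()

        handler = MyHandler()
        handler.initialize(request, response)
        handler.get()

        # Inspect the body the handler wrote.
        self.assertIn("items", response.out.getvalue())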
When doing functional tests using REST methods we ended up writing helper calls to clean out the internal caches and force our engine to sync with the database.
I haven't used gaetestbed, but I would have thought that you could flush the datastore between tests?
"Use case: develop and test locally with real data
Restore to: your local development server.
Once you have restored the data to your local development server, I would highly suggest that you take a backup of your datastore.
You can find your datastore files in a temporary folder on your local machine (e.g., on my Mac, it's /var/folders/bz/bzDU030xHXK-jKYLMXnTzk+++TI/-Tmp-/). To find the folder on your own machine, flush the datastore (./manage.py flush) and you will see the path to your datastore folder printed in the resulting output."