I like to test my code. I like to compartmentalize my code into packages. And I like Meteor. Now I'm trying Meteor's Tinytest (meteor test-packages), but I'm getting some weirdness. For example:
TypeError: Cannot read property 'Email' of undefined
The error comes from SimpleSchema.RegEx.Email, yet this code works fine when it's not under test. SimpleSchema is an object at that point (checked via console.log), and SimpleSchema.RegEx is indeed undefined, which is not at all what I expected.
Adding api.use('aldeed:simple-schema', ['server']); to the onTest section of package.js doesn't do anything, which is more or less expected. But I'm not sure what to do to fix this issue.
Apparently there's a bug: api.use() in package.js resolves to very old versions of the packages it pulls in. That (mostly) doesn't matter for your application, but it matters a lot when you're testing packages individually.
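A common workaround is to pin an explicit version in the test's api.use() call so the resolver can't fall back to an ancient release. Here's a minimal sketch of an onTest block; the '@1.5.3' constraint, the package name 'my-package', and the test file name are illustrative, not taken from the question:

// package.js (onTest section) -- a sketch, not the asker's actual file.
Package.onTest(function (api) {
  api.use('tinytest');
  // Pin the version explicitly so meteor test-packages doesn't resolve an old release.
  api.use('aldeed:simple-schema@1.5.3', ['client', 'server']);
  api.use('my-package'); // the package under test (name is illustrative)
  api.addFiles('my-package-tests.js', ['client', 'server']);
});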
to be updated
Unfortunately for me, the "pinax" application for Django does not seem to have kept up with the times in one specific way: the shorttimesince template tag still relies on the django.utils.tzinfo module, which has been deprecated and removed.
The message is this:
django.template.library.InvalidTemplateLibrary: Invalid template library specified. ImportError raised when trying to load 'pinax.templatetags.templatetags.shorttimesince_tag': No module named 'django.utils.tzinfo'
In my project, I have an overrides/pinax/templatetags directory containing both __init__.py and a shorttimesince_tag.py with the updated code. But it does not seem to be referenced. (And I think that, after studying this problem, I see why not.)
I need to be able to override a templatetag that is defined in a third-party application. Does Django actually know where a templatetag is defined? Please guide me to a swift and appropriate solution.
Pardon me, community ... I am very befuddled by all of this, and writing this while "still befuddled."
Sorry to have left this question sitting around unanswered for so long ... here is what I finally (and, successfully) decided to do.
I realized that the "pinax" library application appears to no longer be maintained, even though there is really (so far ...) only this one slight problem with it. So, I moved the package into my own project and removed it from the library (and from requirements.txt ...). After first branching and git-committing the change which moved it, I then made the very-trivial change to fix the problem with shorttimesince_tag.py and committed that, then merged my branch into the mainline. From now on, "pinax" has officially become "part of my project."
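For reference, the removed django.utils.tzinfo helpers have counterparts in django.utils.timezone, so the very trivial change is usually just a swapped import. A sketch of the kind of edit involved (the tag's actual body isn't shown in the question, so the helper below is illustrative):

# shorttimesince_tag.py (excerpt) -- illustrative only.
# Removed in modern Django:
#   from django.utils.tzinfo import LocalTimezone
# The replacement lives in django.utils.timezone:
from django.utils import timezone

def _localize(value):
    """Attach the project's default time zone to a naive datetime."""
    if timezone.is_naive(value):
        return timezone.make_aware(value, timezone.get_default_timezone())
    return value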
"It's really too bad" when really-great packages like "pinax" come to be abandoned, but every now and again it happens, I suppose . . . we are all volunteers.
I'm using chart.js in my Aurelia application and it works fine.
I now want to add the chartjs-plugin-deferred plugin as well. After installing it with npm and adding it to aurelia.json's dependencies array, I get the following error:
Uncaught TypeError: Cannot read property 'helpers' of undefined
Pointing to the first couple of lines in the plugin code:
var Chart = window.Chart;
var helpers = Chart.helpers;
(Note that I don't even need to use the plugin (import 'chartjs-plugin-deferred';) for the error to appear; as soon as it's added to aurelia.json I get errors.)
If I add a console.dir(window.Chart) before the lines that throw, it is in fact not undefined, and if I use the plugin in my charts it actually works fine.
Can someone explain why this error occurs and if there's some way I can get rid of it? I feel uncomfortable shipping code that, while it works as it should, throws errors in the console.
I'm a huge fan of npm, imports, etc., but more often than not you run into issues like these, which (IMO) is such a hassle that it actually makes me miss the good old days of just piling script elements on top of each other.
Edit: I tried with a couple more plugins just to see if perhaps the deferred plugin was the issue here, but every other plugin I tried completely kills the build.
Does anyone have experience importing ChartJS and a ChartJS plugin into Aurelia successfully?
The issue at hand is that the plugin gives a module loader no meaningful way to step in and fully load its dependency, Chart.js, before the plugin's code executes.
It would be best if the library wrapped its code in a UMD-compatible format, which satisfies the most common module formats at once, among them RequireJS, which the Aurelia CLI uses.
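To illustrate the idea, here is a minimal UMD-style wrapper sketch; the factory body is only indicative and is not the plugin's actual source:

// A sketch of a UMD wrapper around the plugin, not its real code.
(function (root, factory) {
  if (typeof define === 'function' && define.amd) {
    // AMD (RequireJS, as used by the Aurelia CLI): declare chart.js as a dependency.
    define(['chart.js'], factory);
  } else if (typeof module === 'object' && module.exports) {
    // CommonJS / Node-style loaders.
    module.exports = factory(require('chart.js'));
  } else {
    // Plain browser globals.
    factory(root.Chart);
  }
}(typeof self !== 'undefined' ? self : this, function (Chart) {
  var helpers = Chart.helpers; // Chart is guaranteed to be loaded here.
  // ...plugin registration would follow...
}));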
I see you've created a GitHub issue and included the library's author as well. Good work. I've created a small PR to add the missing feature, which also makes the example work without throwing the missing-helpers error.
I see a lot of examples (including ember-cli generated tests) that call assertions through the assert object (assert.something()), but I can call the same functions as-is, globally. Am I doing something wrong, or do the examples just show a not-really-necessary qualifier?
For example, either of these works in a newly generated unit test:
assert.expect(1);
expect(1);
Why ever do the first one if the second one works?
This is actually a QUnit change, not an Ember one. QUnit is changing its API as it moves towards 2.0. You can use the global versions now, but they'll be removed in 2.0, so it's probably a good idea to use the assert.* versions now so you don't have to change the code later.
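For comparison, a minimal sketch of the two styles (module and test names are made up):

// Old global style -- still works today, but is removed in QUnit 2.0:
test('adds numbers', function () {
  expect(1);
  ok(1 + 1 === 2);
});

// Forward-compatible style -- use the assert object passed to the callback:
QUnit.test('adds numbers', function (assert) {
  assert.expect(1);
  assert.ok(1 + 1 === 2, 'basic arithmetic still holds');
});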
There's a problem with my Neo4j test setup environment and org.neo4j.test.ImpermanentGraphDatabase...
I have a class, TestGraphPopulator, for setting up some dummy data for my unit tests.
Because my tests add, delete, and update data, I repopulate the graph for every test case in an init method annotated with @Before.
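Roughly, the setup looks like this (a sketch only; the node created here is just a stand-in for whatever TestGraphPopulator actually writes, since that code isn't shown):

// A sketch of the per-test setup, not the actual TestGraphPopulator code.
import org.junit.After;
import org.junit.Before;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;
import org.neo4j.test.ImpermanentGraphDatabase;

public class GraphRepositoryTest {

    private GraphDatabaseService graphDb;

    @Before
    public void init() {
        // ImpermanentGraphDatabase is intended as an in-memory test database.
        graphDb = new ImpermanentGraphDatabase();
        Transaction tx = graphDb.beginTx();
        try {
            graphDb.createNode(); // dummy data for the test
            tx.success();
        } finally {
            tx.finish(); // Neo4j 1.8 API; newer versions use close()
        }
    }

    @After
    public void tearDown() {
        graphDb.shutdown();
    }
}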
But there is some really strange behavior sometimes.
Some of my tests fail because there are more or fewer entities than expected. On a second or third run everything is fine ... or not.
Also, in my project's /target directory there is a folder \target\test-data\impermanent-db with all the Neo4j database data...
I'm not sure what my problem results from, but shouldn't ImpermanentGraphDatabase reside only in memory?
To me it looks like a bug. Could anyone share some experience? This seems very important to me, and maybe to others...
Thanks a lot in advance!
Markus
P.S.:
Neo4J 1.8, Spring Data Neo4J 2.1 ..
This is indeed what has happened to me, too. It is a confirmed problem and was recently fixed. I have verified that no files are created when I use 1.9-SNAPSHOT.
I maintain an installable Django app that includes a regular test suite.
Naturally enough, when project authors run manage.py test for their site, the tests for both their own apps and any third-party installed apps such as mine will all run.
The problem that I'm seeing is that in several different cases, the user's particular settings.py will contain configurations that cause my app's tests to fail.
A couple of examples:
Some of the tests need to check for returned error messages. These error messages use the internationalization framework, so if the site language is not English then these tests fail.
Some of the tests need to check for particular template output. If the site is using customized templates (which the app supports) then the tests will end up using their customized templates in preference to the defaults, and again the tests will fail.
I want to figure out a sensible approach to isolating the environment my tests run in, in order to avoid this.
My plan at the moment is to have all my TestCase classes extend a base TestCase that overrides the settings and takes care of any other environment setup I may need.
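A rough sketch of that plan, assuming override_settings (available from Django 1.4 onwards) and a couple of illustrative settings picks rather than a definitive list:

# An illustrative base class; the settings pinned here are examples only.
from django.test import TestCase
from django.test.utils import override_settings  # Django 1.4+


class IsolatedTestCase(TestCase):
    """Base class for the app's own tests, pinning the settings they rely on."""

    def setUp(self):
        super(IsolatedTestCase, self).setUp()
        self._settings_override = override_settings(
            LANGUAGE_CODE='en',   # error-message assertions expect English strings
            USE_I18N=False,
            TEMPLATE_DIRS=(),     # ignore the project's customized templates
        )
        self._settings_override.enable()

    def tearDown(self):
        self._settings_override.disable()
        super(IsolatedTestCase, self).tearDown()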
My questions are:
Is this the best approach to app-level test-environment isolation? Is there an alternative I've missed?
It looks like I can only override one setting at a time, when ideally I'd probably like a completely clean configuration. Is there a way to do this, and if not, which are the main settings I need to make sure are set in order to have a basic clean setup?
I believe I'm correct in saying that overriding some settings such as INSTALLED_APPS may not actually affect the environment in the expected way due to implementation details, and global state issues. Is this correct? Which settings do I need to be aware of, and what globally cached environment information may not be affected as expected?
What other environment state other than settings might I need to ensure is clean?
More generally, I'd also be interested in any context regarding how much of an issue this is for other third-party installable apps, or whether there are any plans to further address any of this in core. I've seen conversation on IRC regarding similar issues with, e.g., some of Django's contrib apps running under unexpected settings configurations. I also seem to remember running into similar cases with both third-party apps and Django contrib apps a few times, so it feels like I'm not alone in facing this kind of problem, but it's not clear whether there's a consensus that this needs more work or whether the status quo is good enough.
Note that:
These are integration-level tests, so I want to address these environment issues at the global level.
I need to support Django 1.3, but can put in some compatibility wrappers so long as I'm not re-implementing massive amounts of Django code.
Obviously enough, since this is an installable app, I can't just specify my own DJANGO_SETTINGS_MODULE to be used for the tests.
A nice approach to isolation I've seen used by Jezdez is to have a submodule called my_app.tests which contains all the test code (example). This means that those tests are NOT run by default when someone installs your app, so they don't get random phantom test failures, but if they want to check that they haven't inadvertently broken something then it's as simple as adding myapp.tests to INSTALLED_APPS to get it to run.
Within the tests, you can do your best to ensure that the correct environment exists using override_settings (if it isn't in 1.4, there's not that much code to it). Personally, my feeling is that with integration-type tests it perhaps doesn't matter if they fail. If you like, you can include a clean settings file (compressor.test_settings), which for a major project may be more appropriate.
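For the clean-settings-file route, a sketch of the sort of minimal module an app could ship; the names and values here are illustrative, not taken from compressor.test_settings:

# myapp/test_settings.py -- an illustrative minimal configuration.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
    }
}

INSTALLED_APPS = (
    'django.contrib.contenttypes',
    'django.contrib.auth',
    'myapp',
    'myapp.tests',   # opt-in test app, per the approach described above
)

LANGUAGE_CODE = 'en'
USE_I18N = False
SECRET_KEY = 'not-a-real-secret'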
An alternative is to separate your tests out a bit: there are two separate bodies of tests for contrib.admin, those at django.contrib.admin.tests and those at tests.regression_tests.contrib.admin (or some path like that). The ones that check public APIs and core functionality (should) reside in the first, and anything likely to be broken by someone else's (reasonable) configuration resides in the second.
IMHO, the whole business of running external apps' tests is totally broken. It certainly shouldn't happen by default (and there are discussions to that effect), and it shouldn't even be a thing: if someone's external app test suite is broken by my monkey patching (or whatever), I don't actually care, and I definitely don't want it to break the build of my site. That said, the above approaches allow those who disagree to run them fairly easily. Jezdez probably has as many major pluggable apps as anyone else, and even if there are some subtle issues with his approach, at least there is consistency of behaviour.
Since you're releasing a reusable third-party application, I don't see any reason the developer using the application should be changing the code. If the code isn't changing, the developers shouldn't need to run your tests.
The best solution, IMO, is to have the tests sit outside of the installable package. When you install Django and run manage.py test, you don't run Django's own test suite, because you trust that the version of Django you've installed is stable. This should be the same for developers using your third-party application.
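One way to wire that up is a small standalone runner kept next to the out-of-package tests; a sketch with illustrative names (runtests.py, the app list, and the 'tests' label are all assumptions, not taken from the projects linked below):

# runtests.py -- an illustrative standalone runner.
import sys

import django
from django.conf import settings

settings.configure(
    DATABASES={'default': {'ENGINE': 'django.db.backends.sqlite3',
                           'NAME': ':memory:'}},
    INSTALLED_APPS=[
        'django.contrib.contenttypes',
        'django.contrib.auth',
        'myapp',
    ],
)

if hasattr(django, 'setup'):
    django.setup()  # required on Django 1.7+

from django.test.utils import get_runner

TestRunner = get_runner(settings)
failures = TestRunner(verbosity=1).run_tests(['tests'])
sys.exit(bool(failures))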
If there are specific settings you want to ensure work with your library, just write test cases that use those settings values.
Here's an example reusable Django application that has the tests sit outside of the installed package:
https://github.com/Yipit/django-roughage/tree/master
It's a popular way to develop Python modules, as seen in:
https://github.com/kennethreitz/requests
https://github.com/getsentry/sentry