Have different code execute depending on lein profiles? - clojure

I want my code to behave a tiny bit differently in development than in production; for example, don't actually post things to Facebook when the dev profile is activated. Right now I'm thinking I can use robert-hooke to add hooks to functions I don't want run in development; however, how can I check which profiles are activated?
I've also checked out environ, which looks great for development vs. production configuration but doesn't seem to address my problem.
I don't think this is a rare problem, so if there's already an accepted way to handle it, great.

If you take a look at the luminus guestbook example, it's actually using profiles to set an environment variable :dev, and then environ to read it back from within the application. Environ suggests using the 12-factor app as a model, which makes an argument against grouping configuration inside the application. Leiningen lets us have the best of both by naming the configuration group external to the actual application. Unfortunately, the variable passed to the application is named the same as the profile, which groups configuration back inside the app. Naming it something like cache.disable, while leaving it in the dev profile, would fix that.
You could also take a look at isolating dependencies for development. The article has an example near the end using System/getenv that could also use environ as a replacement.

Related

Test environment and test database

Consider having a test database to run your tests on. One way of doing this is to set the database config through environment variables, and I see many people do so (example: Test environment in Node.js / Express application).
However, to me this seems a bit dangerous. All it takes is for the env variable to be set to development or production by mistake at some point in the future, and suddenly we'll be messing with (or even wiping) the wrong database.
Is there a better way to do this?
(I'm using node/mocha/should.js to run my tests)
Run production completely separate from development: different servers, databases on different machines, etc. This eliminates a lot of opportunities for messing up. This being said...
Do not write one configuration file which will set different configuration values depending on an environment variable. Instead, have the environment variable select an entirely different configuration file. Your application could read config_<env>.js where <env> is the value of the environment variable MYAPP_ENV. So if you set MYAPP_ENV=production, then the file config_production.js would be read to set the configuration. You store the various config_....js files separate from the code of your application. (A different repository, if you want.) Then, when you deploy, you copy only the production configuration to your deployment server. This way if you goof by setting MYAPP_ENV=dev at some point, your application won't find the configuration file and will crash rather than do something harmful.
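To make the fail-hard behaviour concrete, here is the same idea sketched in Python (purely illustrative; the question is about Node, but the pattern is language-agnostic, and MYAPP_ENV plus the config_<env> naming come from the paragraph above):

import json
import os

def load_config():
    # The environment variable selects an entire config file; there is no
    # shared file with per-environment branches inside the application.
    env = os.environ.get("MYAPP_ENV")
    if not env:
        raise RuntimeError("MYAPP_ENV is not set; refusing to guess a configuration")
    # config_production.json is stored apart from the application code and is
    # only ever copied to the production machines.
    path = "config_%s.json" % env
    # If the file was never deployed to this machine (e.g. MYAPP_ENV=dev on a
    # production box), this raises instead of silently doing something harmful.
    with open(path) as f:
        return json.load(f)

config = load_config()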
For maximum safety, force yourself to write each configuration file by hand instead of cutting and pasting configuration lines from development to production. The one day you cut and paste the parameters for a resource that both sites can access and forget to change them for production, you'll regret having cut and pasted.

Managing Django test isolation for installable apps

I maintain an installable Django app that includes a regular test suite.
Naturally enough when project authors run manage.py test for their site, the tests for both their own apps and also any third party installed apps such as mine will all run.
The problem that I'm seeing is that in several different cases, the user's particular settings.py will contain configurations that cause my app's tests to fail.
A couple of examples:
Some of the tests need to check for returned error messages. These error messages use the internationalization framework, so if the site language is not English then these tests fail.
Some of the tests need to check for particular template output. If the site is using customized templates (which the app supports) then the tests will end up using those customized templates in preference to the defaults, and again the tests will fail.
I want to try to figure out a sensible approach to isolating the environment that my tests get run with in order to avoid this.
My plan at the moment is to have all my TestCase classes extend a base TestCase, which overrides the settings and takes care of any other environment setup I may need.
My questions are:
Is this the best approach to app-level test-environment isolation? Is there an alternative I've missed?
It looks like I can only override one setting at a time, when ideally I'd like a completely clean configuration. Is there a way to do this, and if not, which are the main settings I need to make sure are set in order to have a basic clean setup?
I believe I'm correct in saying that overriding some settings such as INSTALLED_APPS may not actually affect the environment in the expected way due to implementation details and global state issues. Is this correct? Which settings do I need to be aware of, and what globally cached environment information may not be affected as expected?
What environment state other than settings might I need to ensure is clean?
More generally, I'd also be interested in any context regarding how much of an issue this is for other third-party installable apps, or whether there are any plans to further address any of this in core. I've seen conversation on IRC regarding similar issues with, e.g., some of Django's contrib apps running under unexpected settings configurations. I also seem to remember running into similar cases with both third-party apps and Django contrib apps a few times, so it feels like I'm not alone in facing this kind of problem, but it's not clear whether there's a consensus on whether this needs more work or whether the status quo is good enough.
Note that:
These are integration-level tests, so I want to address these environment issues at the global level.
I need to support Django 1.3, but can put in some compatibility wrappers so long as I'm not re-implementing massive amounts of Django code.
Obviously enough, since this is an installable app, I can't just specify my own DJANGO_SETTINGS_MODULE to be used for the tests.
A nice approach to isolation I've seen used by Jezdez is to have a submodule called myapp.tests which contains all the test code (example). This means that those tests are NOT run by default when someone installs your app, so they don't get random phantom test failures, but if they want to check that they haven't inadvertently broken something then it's as simple as adding myapp.tests to INSTALLED_APPS to get them to run.
Within the tests, you can do your best to ensure that the correct environment exists using override_settings (it was added in 1.4, but there's not much code to it if you need to copy it back for 1.3). Personally my feeling is that with integration-type tests perhaps it doesn't matter if they fail. If you like, you can include a clean settings file (compressor.test_settings), which for a major project may be more appropriate.
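As a rough sketch of that (the class names and setting values here are hypothetical, and on 1.3 you would use a copied-in override_settings rather than the one shipped with 1.4):

from django.test import TestCase
from django.test.utils import override_settings  # added in Django 1.4

@override_settings(
    LANGUAGE_CODE='en',    # the error-message assertions assume English
    TEMPLATE_DIRS=(),      # don't let the project's customised templates win
)
class MyAppBaseTest(TestCase):
    """Hypothetical base class that the app's own tests inherit from."""


class ErrorMessageTests(MyAppBaseTest):
    def test_required_field_message(self):
        pass  # assertions that rely on the pinned settings go here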
An alternative is to separate your tests out a bit - there are two separate bodies of tests for contrib.admin: those at django.contrib.admin.tests, and those at tests.regression_tests.contrib.admin (or some path like that). The ones that check public APIs and core functionality (should) reside in the first, and anything likely to get broken by someone else's (reasonable) configuration resides in the second.
IMHO, the whole business of running external apps' tests is totally broken. It certainly shouldn't happen by default (and there are discussions to that effect) and it shouldn't even be a thing - if someone's external app test suite is broken by my monkey patching (or whatever), I don't actually care - and I definitely don't want it to break the build of my site. That said, the above approaches allow those who disagree to run them fairly easily. Jezdez probably has as many major pluggable apps as anyone else, and even if there are some subtle issues with his approach, at least there is consistency of behaviour.
Since you're releasing a reusable third-party application, I don't see any reason the developer using the application should be changing the code. If the code isn't changing, the developers shouldn't need to run your tests.
The best solution, IMO, is to have the tests sit outside of the installable package. When you install Django and run manage.py test, you don't run the Django test suite, because you trust that the version of Django you've installed is stable. The same should go for developers using your third-party application.
If there are specific settings you want to ensure work with your library, just write test cases that use those settings values.
Here's an example reusable django application that has the tests sit outside of the installed package:
https://github.com/Yipit/django-roughage/tree/master
It's a popular way to develop Python modules, as seen in:
https://github.com/kennethreitz/requests
https://github.com/getsentry/sentry
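The usual shape of this is a small test-runner script that lives in the repository but not in the installed package, configuring just enough settings for the app's own tests. A minimal sketch, assuming a reasonably recent Django (older versions, including 1.3, needed a slightly different bootstrap) and placeholder names like myapp and tests:

# runtests.py - kept at the repository root, outside the installable package
import sys

import django
from django.conf import settings
from django.test.utils import get_runner

settings.configure(
    DEBUG=True,
    DATABASES={'default': {'ENGINE': 'django.db.backends.sqlite3', 'NAME': ':memory:'}},
    INSTALLED_APPS=['django.contrib.contenttypes', 'django.contrib.auth', 'myapp'],
)
django.setup()  # Django 1.7+

TestRunner = get_runner(settings)
failures = TestRunner(verbosity=1).run_tests(['tests'])
sys.exit(bool(failures))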

What's the best way to install a standalone djangopypi?

I would like to install a djangopypi server for our local development and deployment. However, I'm slightly confused about its installation docs. Apparently, it's assumed that djangopypi is installed inside a bigger project as an app, which is at least debatable from my point of view. I would like my local PyPI instance to run independently of anything else, as a "normal" web service.
And this is where I'm lost. It seems I need some kind of a minimal Django project to wrap djangopypi, which seems a bit overkill for me. Is there a more elegant way to install it in standalone mode?
That's exactly what you need. djangopypi is just an app. Like any Django app, it needs to know things like how to connect to your database, etc. That information comes from the project. It doesn't provide this for you because there's no way it could possibly know what the best settings are for your particular environment; that's your responsibility.
So, no, it's not "overkill". It's the bare minimum required for functionality, and it's just the way things are. Create a simple project, change all the relevant items in settings.py, include djangopypi's urls.py in yours, and you're done. Is it really that hard?
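For what it's worth, the wrapper really is tiny. A sketch (Django 1.2/1.3-era syntax; I'm assuming djangopypi exposes a urls module as its docs describe, so check the version you install):

# settings.py: point DATABASES at your database and add the app, e.g.
# INSTALLED_APPS = (..., 'django.contrib.admin', 'djangopypi')

# urls.py of the minimal wrapper project
from django.conf.urls.defaults import patterns, include
from django.contrib import admin

admin.autodiscover()

urlpatterns = patterns('',
    (r'^admin/', include(admin.site.urls)),
    (r'', include('djangopypi.urls')),
)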

How to make templates of profiles in WebSphere?

I am aware that we can make templates of domains in WebLogic very easily using the config_builder script. Is there a similar thing in WebSphere?
I know nothing about WebLogic, but fix pack 9 for WebSphere added something you may find useful. The wsadmin command AdminTask.extractConfigProperties, with the GenerateTemplates and PortablePropertiesFile options set to true, will generate a portable, editable file transferable to another cell. AdminTask.applyConfigProperties is used to read your edited output and apply the properties to a new cell, server, etc. I haven't tried this yet outside of a controlled sandbox environment, so I'm not sure what pitfalls may await you. But if you have a ton of servers to build, it may be worth your time to do some experimentation.
Here's the IBM doc on the topic.
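From memory, the wsadmin (Jython) calls look roughly like the following; the exact option syntax varies by fix pack, so treat this as a sketch and check it against the IBM doc above:

# Connected to the source cell via wsadmin -lang jython:
# extract a portable, editable properties file describing the configuration.
AdminTask.extractConfigProperties('[-propertiesFileName /tmp/myCell.props -options [[PortablePropertiesFile true][GenerateTemplates true]]]')

# Edit the properties for the target environment, then on the new cell:
AdminTask.applyConfigProperties('[-propertiesFileName /tmp/myCell.props]')
AdminConfig.save()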
As far as I know, there is no such capability.
You can use the default product-shipped profiles to start with, and create the servers and configure them the way you want.
These servers can then be used as a template to build other servers.
I am not sure if this helps you, but I thought I would point it out.
Manglu

Is anyone using a ColdFusion framework that has specific path requirements without mapping or locating resources in the server root?

Let me first say I am aware of this faq for Mach-II, which discusses using application specific mappings as a third option when:
locating the framework in the server root is not possible and
creating a server wide mapping to the Mach-II framework directory is impossible
Using application specific mappings would also work for other ColdFusion frameworks with similar requirements (ColdSpring). Here is my issue however: my (I should say "their") production servers are all running ColdFusion MX7, and application specific mappings were introduced in ColdFusion 8. I most likely will be unable to do option 1 or 2 because they involve creating server wide changes that could conflict with other applications (I don't have a final word on this but I am preparing for that to be the case).
That said, is there anybody out there who was in a similar bind and has done an option 4, in any ColdFusion version, or with any similar framework? The only option 4 I can think of is modifying the entire framework to change this hardcoded path, and even if that worked it would be time-consuming and risky. I'm fairly sure that if there were a simple modification or other simple solution it would already be included in the framework (maybe it's included in version 1.8 of Mach-II and I don't know about it yet).
Any thoughts on solving this problem, or even unorthodox setups with libraries that have specific path requirements, would be appreciated. Any thoughts from Team Mach-II would be especially appreciated...we're on the same team here Matt! ;-)
EDIT
Apparently, the ColdBox framework includes a refactor.xml ANT task with a target that refactors the ColdBox code to use a different absolute path as a base, along with several other useful refactoring targets. So problem solved for ColdBox users.
Looking at the build.xml for Mach-II (1.6 and 1.8) I don't see any target in there that would allow me to refactor the code. I thought about creating a feature request ticket for such a task for Mach-II but frankly I don't think creating such an ANT task is a big priority for the MachII team since the need really only relates to either
a) users of ColdFusion versions below 8
b) someone who wants to use multiple Mach-II versions in the same application, a use I doubt they want to support
The ColdSpring code I have doesn't come with any ANT tasks at all, although I do have unit tests, and I bet if I poked around the SVN I'd find a few build scripts.
Using Ant tasks to refactor and retest the code, or the simpler (and sort of cop out) solution of creating a separate ColdFusion instance for the application are the best answers I've been able to come up with. I don't need this application to exist in the shared scope of other applications, so my first solution is going to be to try and get a dedicated CF instance for this application.
I'm also going to look at the ColdBox refactor.xml ANT task, however, and see if I can modify it to work generically to recognize and refactor CFC references with modified absolute paths. If I complete this task I'll be sure to post the code somewhere and create an answer linking to it. If anybody else wants to take a crack at that or help me out with it, feel free.
Until then I'll leave this question open and see if someone comes up with a better solution.
Fusebox is not so strict, I think.
In XML mode (I may not be naming this 100% correctly; I just mean using Application.cfm) it's just a proper include in index.cfm, something like:
<cfinclude template="fusebox5/fusebox5.cfm" />
In non-XML mode it needs a proper extends in the root Application.cfc:
<cfcomponent extends="path.to.fusebox5.Application" output="false">
All you need is to know the path.
Perhaps you could create a symbolic link and let the operating system resolve the issue for you?
I've been playing with FW/1 lately, and while it may look like you need to add a mapping and extend org.corfield.framework, you can actually move the framework.cfc file into your web root and just extend="framework". It's dead simple, and gets you straight into a great framework with no mess and very little overhead.
It should be as simple as dropping the 'MachII' folder at the root of your domain (i.e. example.com/MachII). No mappings are required to use Mach-II if you just deploy at the root of the domain of your website.
Also:
Please file a ticket for the ANT task you mentioned in your question. Team Mach-II would love to have this issue logged:
Enter a new ticket on the Mach-II Trac
If you want to tackle an ANT task for us, we can get stuff like this incorporated into the builds faster than waiting for a Team member to work on the ticket. Code submissions from the community are welcome and appreciated.
We don't keep an eye on Stack Overflow very often, so we invite you to join our official community group, "Mach-II for ColdFusion", at Google Groups. The Google Group is the best place to ask questions or leave comments like this if you want feedback from the Team.