Consider having a test database to run your tests on. One way of doing this is to set the database config through environment variables, and I see many people do so (example: Test environment in Node.js / Express application).
However, to me this seems a bit dangerous. All it requires is that sometime in the future, by mistake, the env variable is set to development or production and suddenly we'll be messing with (or even wiping) the wrong database.
Is there a better way to do this?
(I'm using node/mocha/should.js to run my tests)
Run production completely separate from development: different servers, databases on different machines, etc. This eliminates a lot of opportunities for messing up. This being said...
Do not write one configuration file which sets different values depending on an environment variable. Instead, have the environment variable select an entirely different configuration file. Your application could read config_<env>.js, where <env> is the value of the environment variable MYAPP_ENV; so if you set MYAPP_ENV=production, then the file config_production.js would be read to set the configuration.

Store the various config_<env>.js files separately from your application's code (a different repository, if you want). Then, when you deploy, copy only the production configuration to your deployment server. This way, if you goof by setting MYAPP_ENV=dev at some point, your application won't find the configuration file and will crash rather than do something harmful.
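A minimal sketch of this pattern - shown here with JSON files and a hypothetical load_config() helper rather than the config_<env>.js modules a Node app would require(); the point is that a missing file stops the process instead of silently falling back to a default:

# Minimal sketch of the one-config-file-per-environment pattern described above.
import json
import os

def load_config():
    env = os.environ["MYAPP_ENV"]      # e.g. "production"; raises KeyError if unset
    path = f"config_{env}.json"        # only the matching file is deployed to each box
    # If the file for this environment was never copied to the machine (say someone
    # set MYAPP_ENV=dev on the production server), open() raises and the app stops
    # instead of quietly talking to the wrong database.
    with open(path) as f:
        return json.load(f)

config = load_config()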
For maximum safety, force yourself to write each configuration file by hand instead of cutting and pasting configuration lines from development to production. The one day you cut and paste the parameters for a resource that both sites can access and forget to change them for production, you'll regret having cut and pasted.
I have some Go code I want to test locally, then deploy to AWS. Everything is fine as long as I stay in a single directory/package.
However, I want my folder structure to be similar to this:
$GOPATH/src/project/application.go
$GOPATH/src/project/lib1/libcode.go
Locally, to use the code in "lib1" I need to use import "project/lib1"
But when deployed to AWS the "project" folder doesn't exist of course, so instead I have to use import "./lib1"
I can probably solve this problem either by keeping all the code in a single folder/package, or by changing my $GOPATH every time I switch projects, but both of these workarounds feel dirty and awkward.
What's the correct way to tackle this? Thanks.
The normal deployment mechanism for Go is to build the project, which produces a self-contained binary, and to deploy that binary - not the source code. A production machine running a Go application does not need the Go source files and does not need the Go toolchain installed. Go is not an interpreted language, it's not even a bytecode language with a VM (like Java with JVM or C# with CLR); it produces native binaries for the target OS and architecture. That's one of the reasons for its strong performance and efficient memory profile.
I'm creating a C++ Linux application that needs some initial configuration parameters to work correctly. The configuration is kept external to avoid recompiling every time a parameter changes, and it needs to stay unknown to end users. I was thinking of a hidden configuration file that is consumed on first execution and re-read on every run to pick up possible changes. Any suggestions on how to do this?
It is unlikely that you can hide the configuration file so that people are unaware of its existence: most Linux users want to know which files you install and where, and there are many ways for them to discover that even if you don't tell them (the simplest that comes to mind: take a file system snapshot before and after running the installer and compare the two).
If your goal is to prevent people from changing the configuration without your permission (i.e., without paying for a license upgrade), you may do it by requiring the configuration to be signed by your company, storing the verification key inside the executable.
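The question is about a C++ program, but the shape of the check is the same in any language. Here is a minimal sketch in Python using the cryptography package's Ed25519 support; the file paths and the placeholder key are illustrative, not a drop-in implementation:

# Sketch: accept the configuration only if it carries a valid signature from the vendor.
# The verification (public) key is embedded in the program; the signing (private) key
# never leaves the company.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

EMBEDDED_PUBLIC_KEY = bytes(32)  # placeholder; a real build embeds the actual 32-byte key

def load_signed_config(config_path, signature_path):
    with open(config_path, "rb") as f:
        config_bytes = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        Ed25519PublicKey.from_public_bytes(EMBEDDED_PUBLIC_KEY).verify(signature, config_bytes)
    except InvalidSignature:
        raise SystemExit("configuration file has been modified or is not signed by the vendor")
    return config_bytes  # parse it as usual once the signature checks out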
If you want to prevent the configuration from being read, you're mostly out of luck: there is not much that can stop a motivated attacker from reading the contents of that file, since your application must be able to do so as well.
I want my code to behave a tiny bit differently in development than in production; for example, don't actually post things on facebook when the dev profile is activated. Right now I'm thinking I can use robert-hooke to add hooks to functions I don't want run in development; however, how can I check which profiles are activated?
I've also checked out environ which looks great for development vs production configurations but doesn't seem to hit my problem.
I don't think this is a rare problem, so if there are already accepted ways to handle it, great.
If you take a look at the luminus guestbook example, it's actually using profiles to set an environment variable :dev, and then environ to read it back from within the application. Environ suggests using the 12 factor app as a model, which makes an argument against grouping configurations inside of the application. Leiningen lets us have the best of both by naming the configuration group external to the actual application. Unfortunately the variable passed to the application is named the same as the profile, and thus groups configurations in the app. Naming it cache.disable but leaving it in the dev profile could fix that.
You could also take a look at isolating dependencies for development. The article has an example near the end using System/getenv that could also use environ as a replacement.
I maintain an installable Django app that includes a regular test suite.
Naturally enough when project authors run manage.py test for their site, the tests for both their own apps and also any third party installed apps such as mine will all run.
The problem that I'm seeing is that in several different cases, the user's particular settings.py will contain configurations that cause my app's tests to fail.
A couple of examples:
Some of the tests need to check for returned error messages. These error messages use the internationalization framework, so if the site language is not English then these tests fail.
Some of the tests need to check for particular template output. If the site is using customized templates (which the app supports) then the tests will end up using their customized templates in preference to the defaults, and again the tests will fail.
I want to try to figure out a sensible approach to isolating the environment that my tests get run with in order to avoid this.
My plan at the moment is to have all my TestCase classes extend a base TestCase, which overrides the settings, and any other environment setup I may need to take care of.
My questions are:
Is this the best approach to app-level test-environment isolation? Is there an alternative I've missed?
It looks like I can only override one setting at a time, when ideally I'd probably like a completely clean configuration. Is there a way to do this, and if not, which are the main settings I need to make sure are set in order to have a basic clean setup?
I believe I'm correct in saying that overriding some settings such as INSTALLED_APPS may not actually affect the environment in the expected way due to implementation details, and global state issues. Is this correct? Which settings do I need to be aware of, and what globally cached environment information may not be affected as expected?
What other environment state other than settings might I need to ensure is clean?
More generally, I'd also be interested in any context regarding how much of an issue this is for other third party installable apps, or if there are any plans to further address any of this in core. I've seen conversation on IRC regarding similar issues with, e.g., some of Django's contrib apps running under unexpected settings configurations. I also seem to remember running into similar cases with both third party apps and Django contrib apps a few times, so it feels like I'm not alone in facing these kinds of problems, but it's not clear if there's a consensus on whether this needs more work or if the status quo is good enough.
Note that:
These are integration-level tests, so I want to address these environment issues at the global level.
I need to support Django 1.3, but can put in some compatibility wrappers so long as I'm not re-implementing massive amounts of Django code.
Obviously enough, since this is an installable app, I can't just specify my own DJANGO_SETTINGS_MODULE to be used for the tests.
A nice approach to isolation I've seen used by Jezdez is to have a submodule called myapp.tests which contains all the test code (example). This means that those tests are NOT run by default when someone installs your app, so they don't get random phantom test failures, but if they want to check that they haven't inadvertently broken something then it's as simple as adding myapp.tests to INSTALLED_APPS to get them to run.
Within the tests, you can do your best to ensure that the correct environment exists using override_settings (if this isn't in 1.4 then there's not that much code to it). Personally my feeling is that with integration type tests perhaps it doesn't matter if they fail. If you like, you can include a clean settings file (compressor.test_settings), which for a major project may be more appropriate.
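A rough sketch of what such a base class might look like, assuming Django 1.4+ for override_settings; the app name, setting values, and test names are illustrative:

# Base TestCase that pins the settings this app's tests rely on; subclasses inherit
# the overrides, so they stay independent of the host site's configuration.
from django.test import TestCase
from django.test.utils import override_settings

@override_settings(
    LANGUAGE_CODE="en",   # error-message assertions expect English strings
    USE_I18N=True,
    TEMPLATE_DIRS=(),     # ignore the site's customized templates (pre-Django 1.8 setting)
)
class IsolatedAppTestCase(TestCase):
    """Base class for the app's own tests."""

class ErrorMessageTests(IsolatedAppTestCase):
    def test_error_message_is_in_english(self):
        # ...assertions against the default, un-customized messages and templates...
        pass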
An alternative is that you separate your tests out a bit - there are two separate bodies of tests for contrib.admin: those at django.contrib.admin.tests, and those at tests.regression_tests.contrib.admin (or some path like that). The ones that check public APIs and core functionality (should) reside in the first, and anything likely to get broken by someone else's (reasonable) configuration resides in the second.
IMHO, the whole business of running external apps' tests is totally broken. It certainly shouldn't happen by default (and there are discussions to that effect) and it shouldn't even be a thing - if someone's external app test suite is broken by my monkey patching (or whatever), I don't actually care, and I definitely don't want it to break the build of my site. That said, the above approaches allow those who disagree to run them fairly easily. Jezdez probably has as many major pluggable apps as anyone else, and even if there are some subtle issues with his approach, at least there is consistency of behaviour.
Since you're releasing a reusable third-party application, I don't see any reason the developer using the application should be changing the code. If the code isn't changing, the developers shouldn't need to run your tests.
The best solution, IMO, is to have the tests sit outside of the installable package. When you install Django and run manage.py test, you don't run the Django test suite, because you trust that the version of Django you've installed is stable. This should be the same for developers using your third-party application.
If there are specific settings you want to ensure work with your library, just write test cases that use those settings values.
Here's an example reusable django application that has the tests sit outside of the installed package:
https://github.com/Yipit/django-roughage/tree/master
It's a popular way to develop Python modules, as seen in:
https://github.com/kennethreitz/requests
https://github.com/getsentry/sentry
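The usual shape of this is a small, standalone test runner that lives next to the package rather than inside it and builds its own minimal settings. A rough sketch, assuming a reasonably recent Django and a hypothetical app called myapp:

# runtests.py - lives in the repository root, not inside the installed package.
# It configures a minimal, known-good environment and runs only this app's tests.
import sys

import django
from django.conf import settings
from django.test.utils import get_runner

settings.configure(
    DEBUG=True,
    DATABASES={"default": {"ENGINE": "django.db.backends.sqlite3", "NAME": ":memory:"}},
    INSTALLED_APPS=[
        "django.contrib.contenttypes",
        "django.contrib.auth",
        "myapp",              # hypothetical app name
    ],
    LANGUAGE_CODE="en",       # run against a known locale, not the host site's
    USE_I18N=True,
)
django.setup()  # Django 1.7+; older versions skip this call

TestRunner = get_runner(settings)
failures = TestRunner(verbosity=1, interactive=False).run_tests(["myapp"])
sys.exit(bool(failures))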
I would love to hear ideas on how to best move code from development server to production server.
A list of gotchas / "don't do this" items would be helpful.
Are there any tools to help automate steps such as:
Make backups of existing code, given a list of files
Record the deployment of these files from dev to production
Allow easier rollback if the deployment or the app fails in any way...
I have never worked at a company that had a deployment process beyond a very manual one: FTPing files from dev to production.
What have you done in your companies, departments, etc?
Thank you...
Yes, I am a ColdFusion programmer, but files are files, and this should be a language-agnostic question.
OK, I'll bite. There's the technology aspect of this problem, which other answers have already covered. But the real issue is a process problem: the focus should be on ensuring a meaningful software development life cycle (SDLC) - planning, development, validation, and deployment. I'll cover each in turn. What you want is a repeatable activity at each phase.
Planning
Articulate and record what's to be delivered. Often tickets or user stories are enough. Sometimes you do more, like a written requirements document that a customer signs off on and that's translated into various artifacts such as written use cases. Ultimately, what you want is something recorded in an electronic system where you can associate code changes with it. Which leads me to...
Development
Remember that electronic system? Good. Now when you make changes to code (you're committing to source control, right?) you associate those changes with something in this electronic system - typically tickets. I like Trac, but have also heard good things about Atlassian's suite. This gives you traceability, so you can assert what's been done and how. Then you can use this system and source control to create a build - all the bits needed for whatever's changed - and tag that build in source control - that's your list of what's changed. Even better, have a build contain everything, so that it's a standalone entity that can easily be deployed on its own. The build is then delivered for...
Validation
Perhaps the most important step, and one that many shops ignore - at their own peril. Defects found in production are exponentially more expensive to fix than when they're discovered earlier in the process, and in many shops validation is the only step where defects are caught before production - so make sure yours does it.
This should not be done by the programmer! That's like the fox watching the hen house. And whoever is doing it should be following some sort of plan. We use Test Link. This means each build is validated the same way, so you can identify regression bugs. And this build should be deployed in the same way as you would into production.
If all goes well (we usually need a minimum of three builds), the build is validated. And this goes to...
Deployment
This should be a non-event, because you're taking a validated build and following the same steps as you did in testing. Perhaps it first hits a staging server, where there's an automated copying process, but the point is that it shouldn't be an issue at this stage, because you validated with the same process.
Conclusion
In terms of knowing what's where, what you really want is a logical way to group changes together. This is where the idea of a build comes in. It's really the unit that should segue between steps in the SDLC. If you already have that, then the ability to understand the state of a given system becomes trivial.
Check out Ant or Maven - these are build and deployment tools used in the Java world which can help you copy/FTP files, make backups, and even check out code from SVN.
You can automate your deployment steps using these tools; for example, Ant will allow you to declare a set of tasks as part of your deployment. So you could, for example:
Check out a revision using SVNAnt or similar to a directory
Copy (and perhaps zip first) these files to a backup directory
FTP all the files to your web server(s)
Create a report to email to the team illustrating the deployment
Really you can do almost anything you wish to put time into using Ant. Maven is a little more structured (and newer), and you can see a discussion of the differences here.
Hope that helps!
In a nutshell...
You should start with some source control solution - probably Subversion or Git. Once that's in place you can create a script that generates a clean build of your source code and deploys it to your production server(s).
You could do this with a simple batch script or use something like Ant for more control. Here is a simple example of a batch file using Subversion:
REM Tag the current trunk as a release, then check out a clean copy for deployment
svn copy svn://path/to/your/project/trunk -r HEAD svn://path/to/your/project/tags/%version% -m "Tagging release %version%"
svn checkout svn://path/to/your/project/trunk -r HEAD //path/to/target/directory
Ant makes it easy to do things like automatically run unit tests and sync directories. For example:
<sync todir="//path/to/target/directory" includeEmptyDirs="true" overwrite="true">
<fileset dir="${basedir}">
<exclude name="**/*.svn"/>
<exclude name="**/test/"/>
</fileset>
</sync>
This is really just a starting point. A next step might be a continuous integration solution like Hudson. I would also recommend reading "Pragmatic Project Automation: How to Build, Deploy, and Monitor Java Applications".
One ColdFusion specific gotcha is to make sure you clear the Application scope when required (to update any singleton components). A common approach here is to use a URL parameter that causes onRequestStart() to call onApplicationStart(). You may also have to clear the trusted cache.
We use a system called AnthillPro: http://www.anthillpro.com
It's commercial software, but it allows us to completely automate our deployment process across multiple servers and operating systems. (We currently use it for both ColdFusion and Java, but it can be used for most languages.) It has a ton of third-party integrations:
http://www.anthillpro.com/html/products/anthillpro/tool-integrations.html