We experienced strange behaviour: especially under Chrome, some items are missing. Has anyone else seen this?
Which version are you using? You should always pin the version you use: once a version is published, it never changes, so a pinned install stays reproducible.
The code is open source, so you can check for yourself in the GitHub repository, and there are other channels more appropriate than Stack Overflow for tracking releases, IMO:
The Django weblog, where a post is published each time there is a new release.
The Django users mailing list/google group is another place where you can keep up to date.
PyPI gives you the release history.
Finally, with any open source software, don't be afraid of digging into the source code to see what has changed. GitHub has a nice way of comparing two tags, for instance the latest two 1.11.x releases: https://github.com/django/django/compare/1.11.16...1.11.17
There were two new bugfix releases published on the 2.1.x and 1.11.x branches yesterday, so if your dependencies aren't pinned you might have auto-updated; without further details it's hard to tell you more. A pinned requirements file, sketched below, avoids that kind of surprise.
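For example, pinning in a pip requirements file looks like this (the file name and versions are only illustrative):

    # requirements.txt -- pinned: pip installs exactly this release, every time
    Django==1.11.16

    # An unpinned or ranged specifier would have picked up yesterday's 1.11.17 automatically:
    # Django
    # Django>=1.11,<2.0

With the pinned form, moving to a new bugfix release becomes a deliberate edit and commit rather than something that happens silently on deploy.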
There are two main schools of thought for doing A/B (Split) Testing:
Javascript-based solutions such as Optimizely, Google Analytics Content Experiments.
Server-side solutions such as Django-AB, Splango, and django-lean. (Also, writing your own.)
My understanding is that Javascript-based solutions are spectacular for "which color button converts better," but not so great for switching out entire page layouts, and completely unworkable for trying out large functional changes such as the sequence of pages in a funnel.
That leads me towards a server-side solution. I'm not crazy about coding my own, and will do so only if there is no other option. I'm trying to add value by improving the core functionality of my site, not by creating a better split-testing framework.
The Django apps I've found for split testing are various mixtures of unmaintained, undocumented, documented incorrectly, and incompatible with Django 1.5. This surprises me, because the Django and Python communities seem to have a strong focus on good documentation. I'm also very surprised that none of the testing frameworks I've tried has been compatible with Django 1.5 -- is testing not as core a part of the philosophy in the Django/Python world as it is in Rails?
Here's what I've found:
Splango https://github.com/shimon/Splango -- Not compatible with Django 1.5 (although most compatibility bugs I found were trivial to fix). Largely untouched since October 2010, except for a fix in August 2012 which claims to make sure templates get included in the install. Since templates don't get included in the install when Splango is installed via PyPI, either the fix didn't work or it didn't get submitted to PyPI. The documentation is largely accurate, but doesn't completely cover how to set up tests and get reports. It tells you how to configure the template to gather the data, but there appear to be additional steps required in the admin interface which are completely undocumented, and I'm not sure I've done them properly.
Django-lean. Original at https://bitbucket.org/akoha/django-lean has not been updated since July 2010. There is an apparently "blessed" fork at https://github.com/anandhenry2002/django-lean which has not been changed since May 2012, when it was copied over from the original. The original's documentation is incorrect in ways that make following the examples impossible. (Though you can probably muddle your way through, as I did.) The new version's documentation has formatting problems that make it difficult to read on github. (This appears to be because it's the unchanged documentation from the old project, and BitBucket syntax doesn't work on Github.) The django-lean Google Group has not had a message since July 2012.
django-mini-lean https://github.com/DanAncona/django-mini-lean -- Updated as recently as February 2013, but undocumented.
Leaner - https://bitbucket.org/brianjinwright/leaner -- Last updated July 2012, and no docs.
Django-AB -- Last updated May 2009. Is not a package, and can't be installed via PIP or PyPI. After placing the checkout in my django app folder (and renaming the folder to ab) and following the installation instructions, I get an error loading the template loader that I have not tracked down further.
So far Splango appears to be the winner, as I've actually been able to get it more-or-less working (by manually installing the templates, and then editing them to fix Django 1.5 incompatibilities).
Can anyone point me to anything I've missed?
You have missed this app: https://github.com/mixcloud/django-experiments + https://github.com/disqus/gargoyle/
And then there's waffle: http://waffle.readthedocs.org/
It's simple, updated, and maintained, but not very feature-rich: it doesn't have any analytics/reporting integrated. Then again, a Google Analytics or Mixpanel type of service is better for that anyway.
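To give an idea of the level waffle operates at, a flag check in a view looks roughly like this (this assumes a Flag named "new_layout" has been created in the admin; waffle flags can be rolled out to a percentage of users, which is what makes them usable for rough split testing):

    # views.py -- minimal sketch using django-waffle's flag_is_active helper
    import waffle
    from django.shortcuts import render

    def home(request):
        # "new_layout" is a hypothetical Flag configured in the admin,
        # e.g. with Percent = 50 for a rough 50/50 split.
        if waffle.flag_is_active(request, "new_layout"):
            return render(request, "home_new.html")
        return render(request, "home_old.html")

Measuring which arm converts better is then up to you or your analytics tool, which is what I mean by no reporting being integrated.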
I first looked at Django-AB and that is almost what I wanted, but I couldn't get it to work either. After looking at django-experiments and deciding I didn't want to mess around with redis yet, I decided to roll my own. I've tried to package it up nicely and make it easy to use for the beginner. It's super basic.
https://github.com/crobertsbmw/RobertsAB
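For anyone else weighing up the roll-your-own route, the core of such an app is small. This is not RobertsAB's actual code, just a generic sketch of sticky per-session variant assignment (all names are made up; it assumes Django's session middleware is enabled):

    # ab_sketch.py -- hypothetical helper: per-session variant assignment
    import random

    from django.shortcuts import render

    def get_variant(request, experiment, variants=("control", "variant_b")):
        """Pick a variant once per visitor and remember it in the session."""
        key = "ab_%s" % experiment
        if key not in request.session:
            request.session[key] = random.choice(variants)
        return request.session[key]

    def signup(request):
        variant = get_variant(request, "signup_page")
        # Record an impression here (DB row, analytics event, ...) so you can
        # later compare conversion rates per variant.
        return render(request, "signup_%s.html" % variant)

The assignment part is easy; as the question above points out, the reporting side is where the packaged apps earn their keep.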
You can swap out entirely different page layouts with Google Analytics Experiments (their default experiment setup will redirect users to a different URL for each variation you have), although in general it's much easier to interpret why something is more successful if you test smaller things against each other.
You are right that testing different funnels and user flows against each other using Google Analytics would require a lot of manual setup, although theoretically you could do it by swapping out different links and tracking your users with UTM campaigns.
For smaller A/B tests within the same page, I ended up using Google Analytics Experiments and writing a custom Django CMS plugin for adding a few variant options to a template, which queries the Google Analytics API and displays the correct variant using Javascript.
I already did some searching on Stack Overflow, and as far as I can see there are many ways to use databases in C++. Unfortunately my tools at work are pretty limited: I only get to use Visual Studio C++ 6 and don't even have Boost (although I have learned to cope with that). I assume that I can only use what is delivered with the standard distribution of VS C++ 6.
Now my code generates a lot of data and I would like to store some of it in a simple database (like an MS Access db). What tools might I be able to use?
My alternative approach would be to create a database-like object via a struct and vectors/arrays.
I also have Office 2010 installed - perhaps I could somehow use Access?
Computation speed also plays a role - the faster the better.
Another important thing: my PC at work isn't an open client, so I cannot install any new software. Downloading and moving files works, though. Basically, I must be able to install the tool just by moving its files into a folder.
Please let me know if the question is confusing or insufficiently detailed; I will do what I can to remedy that.
Thanks in advance for your help :)
Even though you said 'only standard tools', I'd still say: get SQLite. It is public domain software, i.e. no license whatsoever. You can download the 'amalgamation' (one .h file and one .c file) and include it in your project. It should compile in VC6 no problem. It's very easy to use; you will be up and running in 10 minutes.
It does exactly what you need - a DB in a single file, no servers, zero-setup, etc.
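To make the "up and running in 10 minutes" claim concrete, here is a minimal sketch; add sqlite3.c from the amalgamation to your project, include sqlite3.h, and note that the table and file names below are just examples:

    /* Writes generated data into a single-file database that needs no server. */
    #include <stdio.h>
    #include "sqlite3.h"

    int main()
    {
        sqlite3 *db = 0;
        char *err = 0;

        if (sqlite3_open("results.db", &db) != SQLITE_OK)
        {
            printf("open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }

        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS results (run INTEGER, value REAL)", 0, 0, &err);
        sqlite3_exec(db, "INSERT INTO results VALUES (1, 3.14)", 0, 0, &err);

        if (err)
        {
            printf("SQL error: %s\n", err);
            sqlite3_free(err);
        }

        sqlite3_close(db);
        return 0;
    }

This also fits your "no install, just copy files into a folder" constraint, since the database ends up as a single file next to your executable.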
Well, Visual C++ 6 did include MFC, which had a suite of classes for the creation and manipulation of databases, and I'm fairly certain it would be possible to use these to create a database that is accessible from Access. Unfortunately Microsoft's online help doesn't seem to go back that far, but all the reference material you need should come with VS 6. (In my opinion the VS help system was better back then anyway.)
On a side note, you could download an old version of Boost that works with VC6. I'm not sure what the last version of Boost to support VC6 is; my guess is it's somewhere around 1.3x.
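Back to the MFC database classes: here is a rough sketch from memory of the DAO classes (afxdao.h) that shipped with VC6. I haven't verified it against the VC6 docs, so treat the method names and the Jet SQL as a starting point rather than gospel:

    // Creates a Jet (.mdb) database that Access can open, using MFC's DAO classes.
    #include <afxdao.h>

    void WriteResults()
    {
        CDaoDatabase db;
        try
        {
            db.Create(_T("C:\\data\\results.mdb"));   // new .mdb file
            db.Execute(_T("CREATE TABLE Results (Run LONG, Value DOUBLE)"));
            db.Execute(_T("INSERT INTO Results (Run, Value) VALUES (1, 3.14)"));
            db.Close();
        }
        catch (CDaoException *e)
        {
            e->ReportError();
            e->Delete();
        }
    }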
VC6 should work.
Can you use MFC's db objects? (DAO I think back then?).
If your app really generates a lot of data, you might want to look at MySql. I've run into size limitations in older Access tables. Unless it's an extraordinarily simple db, you probably don't want to brew your own (though it might be fun if you have a lot of time).
The key will be finding a driver/db combo that will work. I would install the GA (free) MySql, create a tiny db with 1 table and find the driver ("connector" in MySql terms) that will work. Maybe older ODBC driver?
Also, check out ConnectionStrings.com for info on getting connected to a particular database / driver.
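As a starting point, the classic MySQL ODBC connection string looks something like the line below; the exact driver name depends on which connector version you end up with, so double-check it against ConnectionStrings.com:

    Driver={MySQL ODBC 3.51 Driver};Server=localhost;Database=testdb;User=myuser;Password=mypassword;Option=3;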
I am looking for a small software versioning (changelog) and bug submission system with a web-frontend.
The only features I need are a changelog where users can see what they can expect and a tiny bug-submission system. I don't need the many features SVN offers for software versioning, as the project is quite small and I do all development locally.
Any ideas?
It sounds like you'll do just fine coding raw HTML, given your requirements. If you can code in any language, the HTML you'll need is minimal, and it will be easy to pick up in case you're not already familiar with it.
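For the changelog side, a single static page really can be this small (the project name and entries are made up):

    <!DOCTYPE html>
    <html>
      <head><title>MyProject - Changelog</title></head>
      <body>
        <h1>Changelog</h1>
        <h2>0.2 (planned)</h2>
        <ul>
          <li>Bug #3: crash when the config file is missing</li>
        </ul>
        <h2>0.1</h2>
        <ul>
          <li>Initial release</li>
        </ul>
        <p>Found a bug? <a href="mailto:bugs@example.com">Mail it in</a>.</p>
      </body>
    </html>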
That said, I still recommend rethinking your decision not to use SVN. If your project is open source, have a look at Google Code, which offers free source code hosting including a bug tracker, SVN repository, release management, and a wiki. It'll also make your project more discoverable. If it's not open source, you can purchase private hosting on GitHub, but that uses Git, which is more complicated.
Mantis is probably what you want: a web-based bug system that's quite nice, fast, and simple. It integrates with SVN using a PHP page you can access with curl from an SVN hook.
We used it as a bug tracker; when code was committed (with a special regex in the log comment), the files that were changed were added to the Mantis bug as a bugnote. I'm not 100% sure of your situation, but at first glance this kind of arrangement looks similar to what you want.
I haven't worked for very large organizations and I've never worked for a company that had a "Build Server".
What is their purpose?
Why aren't the developers building the project on their local machines, or are they?
Are some projects so large that more powerful machines are needed to build it in a reasonable amount of time?
The only place I see a build server being useful is for continuous integration, with the build server constantly building what is committed to the repository. Is it that I have just not worked on projects large enough?
Someone, please enlighten me: What is the purpose of a build server?
The reason given is actually a huge benefit. Builds that go to QA should only ever come from a system that builds only from the repository. This way build packages are reproducible and traceable. Developers manually building code for anything except their own testing is dangerous. Too much risk of stuff not getting checked in, being out of date with other people's changes, etc. etc.
Joel Spolsky on this matter.
Build servers are important for several reasons.
They isolate the environment. The local Code Monkey developer says "It compiles on my machine" when it won't compile on yours. This can mean out-of-sync check-ins or it could mean a dependent library is missing. JAR hell isn't nearly as bad as DLL hell; either way, using a build server is cheap insurance that your builds won't mysteriously fail or package the wrong libraries by mistake.
They focus the tasks associated with builds. This includes updating the build tag, creating any distribution packaging, running automated tests, creating and distributing build reports. Automation is the key.
They coordinate (distributed) development. The standard case is where multiple developers are working on the same code base. The version control system is the heart of this sort of distributed development, but depending on the tool, the developers may not interact with each other's code much. Instead of forcing developers to risk bad builds or worry about merging code overly aggressively, design the build process so that the automated build can see the appropriate code and process the build artifacts in a predictable way. That way, when a developer commits something with a problem, like not checking in a new file dependency, they can be notified quickly. Doing this in a staged area lets you flag the code that has built, so that developers don't pull code that would break their local build. PVCS did this quite well using the idea of promotion groups. ClearCase could do it too using labels, but that would require more process administration than a lot of shops care to provide.
What is their purpose?
Take load off developer machines, and provide a stable, reproducible environment for builds.
Why aren't the developers building the project on their local machines, or are they?
Because with complex software, an amazing number of things can go wrong when just "compiling through". Problems I have actually encountered:
Incomplete dependency checks of different kinds, resulting in binaries not being updated.
Publish commands failing silently, the error message in the log ignored.
Builds including local sources not yet committed to source control
(fortunately, no "damn customers" message boxes yet..).
When trying to avoid the above problem by building from another folder, some files were picked up from the wrong folder.
The target folder where binaries are aggregated contains additional stale developer files that should not be included in the release.
We've seen an amazing increase in stability since all public releases start with a get from source control into an empty folder. Before that, there were lots of "funny problems" that "went away when Joe gave me a new DLL".
Are some projects so large that more powerful machines are needed to build it in a reasonable amount of time?
What's "reasonable"? If I run a batch build on my local machine, there are many things I can't do. Rather than pay developers for builds to complete, pay IT to buy a real build machine already.
Is it I have just not worked on projects large enough?
Size is certainly one factor, but not the only one.
A build server is a distinct concept from a Continuous Integration server. The CI server exists to build your projects when changes are made. By contrast, a build server exists to build the project (typically a release, against a tagged revision) in a clean environment. It ensures that no developer hacks, tweaks, unapproved config/artifact versions, or uncommitted code makes it into the released code.
The build server is used to build everyone's code when it is checked in. Your code may compile locally, but you most likely won't have all the changes made by everyone else all the time.
To add to what has already been said:
An ex-colleague worked on the Microsoft Office team and told me a complete build sometimes took 9 hours. That would suck to do it on YOUR machine, wouldn't it?
It's necessary to have a "clean" environment free of artifacts of previous versions (and configuration changes) in order to ensure that builds and tests work and don't depend on the artifacts. An effective way to isolate is to create a separate build server.
I agree with the answers so far in regards to stability, traceability, and reproducibility. (Lots of 'ity's, right?) Having ONLY ever worked for large companies (Health Care, Finance) with MANY build servers, I would add that it's also about security. Ever seen the movie Office Space? If a disgruntled developer builds a banking application on his local machine and no one else looks at it or tests it... BOOM. Superman III.
These machines are used for several reasons, all trying to help you provide a superior product.
One use is to simulate a typical end user configuration. The product might work on your computer, with all your development tools and libraries set up, but the end user most likely won't have the same configuration as you. For that matter, other developers won't have the exact same setup as you either. If you have a hardcoded path somewhere in your code, it will probably work on your machine, but when Dev El O'per tries to build the same code, it won't work.
They can also be used to monitor who broke the product last, with what update, and where the product regressed. Whenever new code is checked in, the build server builds it, and if the build fails, it's clear that something is wrong and the user who committed last is at fault.
For consistent quality, and to get the build 'off your machine' so that environment errors are spotted and any files you forget to check in to source control also show up as build errors.
I also use it to create installers as these take a lot of time to do on the desktop with code signing etc.
We use one so that we know that the production/test boxes have the same libraries and versions of those libraries installed as what is available on the build server.
It's about management and testing for us. With a build server we always know that we can build our main "trunk" line from version control. We can create a master install with one-click and publish it to the web. We can run all of our unit tests each time code is checked in to make sure it works. By collecting all these tasks into a single machine it makes it easier to get it right repeatedly.
You are right that developers could build on their own machines.
But these are some of the things our build server buys us, and we're hardly sophisticated build makers:
Version control issues (some have been mentioned in earlier responses)
Efficiency. Devs don't have to stop to make builds locally. They can kick it off on the server and get on to the next task. If builds are large, then that is even more time the dev's machine is not occupied. For those doing continuous integration and automated testing, even better.
Centralization. Our build machine has scripts that make the build, distribute it to UAT environments, and even to production staging. Keeping them in one place reduces the hassle of keeping them in sync.
Security. We don't do much special here, but I'm sure a sysadmin can make it such that production migration tools can only be accessed on a build server by certain authorized entities.
Maybe I'm the only one...
I think everyone agrees that one should
use a file repository
do builds from the repository (and in a clean environment)
use a continuous testing server (e.g. CruiseControl) to see if anything is broken after your "fixes"
But no one cares about automatically built versions.
When something was broken in an automatic build, but it's not anymore - who cares? It's a work in progress. Someone fixed it.
When you want to do a release version, you run a build from the repository. And I'm pretty sure you want to tag the version in the repository at that time, not every six hours when the server does its work.
So, maybe a "build server" is just a misnomer and it's actually a "continous test server". Otherwise it sounds pretty much useless.
A build server gets you a sort of second opinion of your code. When you check it in, the code is checked. If it works, the code has a minimum quality.
Additionally, remember that low-level languages take much longer to compile than high-level languages. It's easy to think "Well look, my .Net project compiles in a couple of seconds! What's the big deal?" A while back I had to mess with some C code and I had forgotten how much longer it takes to compile.
A build server is used to schedule compile tasks (e.g. nightly builds) of usually large projects located in a repository that can sometimes take more than a couple of hours.
A build server also gives you a basis for escrow, being able to capture all the parts necessary to reproduce a build in the case that others may have rights to take ownership.