I'm building a Django project consisting of several apps, and I want a version number for the whole project, which would be useful for tracking the state of the project each time it goes to production.
I've read and googled, and I've found how to set a version number for each Django app of mine, but not for the whole project.
I assume that settings.py (in my case base.py, because the settings are inherited per environment: development, pre-production, production) would be the ideal file for storing it, but I would like to hear good practices from other Django programmers, because I haven't found any.
Thank you in advance
I don't think I've ever needed to do this, but the two obvious choices would be either the settings file, as you state, or alternatively the __init__.py in the main project app.
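For example, here is a minimal sketch of the second option (all names here are placeholders; adapt them to your project layout):
# myproject/__init__.py -- single source of truth for the project version
__version__ = "1.4.0"

# settings/base.py -- expose it to the rest of the code via settings
from myproject import __version__
PROJECT_VERSION = __version__  # readable anywhere as django.conf.settings.PROJECT_VERSION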
You don't need it to relate to Django; you can tag a commit in your source control to mark a particular version (as well as keeping a separate branch for releases).
From the Git docs on tagging:
Git has the ability to tag specific points in history as being important. Typically people use this functionality to mark release points (v1.0, and so on).
You could use the same version numbering scheme as Google if you wish, which is:
year.month.day.optional_revision  # e.g. 2016.05.03 for today
Doing this would make it easier to track back to previous versions since it won't be overwritten in source code by newer version numbers.
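If you tag releases this way, the running application can even report its own version. Here is a hedged sketch in Python (it assumes the deployment is a Git checkout and the git binary is available; adapt as needed):
import subprocess

def get_version():
    """Return the most recent Git tag, or 'unknown' outside a tagged checkout."""
    try:
        return subprocess.check_output(
            ["git", "describe", "--tags", "--abbrev=0"],
            text=True,
        ).strip()
    except (subprocess.CalledProcessError, OSError):
        return "unknown"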
I'm a seasoned C++ developer in a new position. My experience is in *nix-based systems, and I'm working with Visual Studio for my first time.
I find that I'm constantly struggling with Visual Studio for things I consider trivial. I feel like I haven't grokked how I'm supposed to be using VS; so I try doing things "the way I'm used to," which takes me down a rabbit-hole of awkward workarounds, wasted time, and constant frustration. I don't need a VS 101 tutorial; what I need is some kind of conversion guide - "Here's the VS way of doing things."
That's my general question - "What's the VS way of doing things?". That might be a bit vague, so I'll describe what's giving me grief. Ideally, I'm not looking for "Here's the specific set of steps to do that specific thing," but rather "You're looking at it wrong; here's the terms and concepts you need to understand to use VS effectively."
In C++, I'm used to having a great measure of control over code organization and the build process. I feel like VS is working strongly against me here:
I strongly tend to write small, isolated building blocks, and then bigger chunks that put those blocks together in different combinations.
As a trivial example, for a given unit or project, I make a point of having strong separation between the unit's headers meant for client inclusion; the unit's actual implementation; and any testing code.
I'm likely to have multiple different test projects, some of which will probably rely on common testing code (beyond the code-under-test itself).
VS makes it onerous to actually control code location. If I want a project's code to be divided into an include/ folder and a src/ folder, that's now a serious hassle.
VS's concept of "projects" seems to sit somewhere between what I'd think of as a "final build target" and an "intermediate build target". As far as I can tell, basically anything I want to share between multiple projects must also be a project.
But if many intermediate objects now become projects, then I'm suddenly finding myself with a TON of small projects.
And managing a ton of small projects is incredibly frustrating. They each have a million settings and definitions (under multiple configurations and platforms...) that are a real pain to transfer from one project to the other.
This encourages me to lump lots of unrelated code together in a single project, just to reduce the number of projects I need to manage.
I'm struggling with this constantly. I can find solutions to any one given thing, but it's clear to me that I'm missing a wider understanding of how Visual Studio, as a tool, is meant to be used. Call it correct workflow, or correct project organization - any solutions or advice would be a real help to me.
(Note: much as I'd like to, "Stop working with the Visual Studio buildchain" is not an option at the moment.)
The basic rule is: A project results in a single output file [1].
If you want to package building blocks into static libraries, create a project for each one.
Unit tests are separate from the code, so it's common to see a "foo" and a "foo test" project side by side.
With respect to your small building blocks, I use this guideline: If it is closely enough related to be put in the same folder, it is closely enough related to be put in the same project.
And managing a ton of small projects is incredibly frustrating. They each have a million settings and definitions (under multiple configurations and platforms...) that are a real pain to transfer from one project to the other.
Property pages are intended to solve this problem. Just define a property page containing related settings and definitions, and it becomes as easy as adding the property page to a new project.
As each project can pull its settings from multiple property pages, you can group them into logical groups. As an example: a "unit test" property page with all settings related to your unit test framework.
To create a property page in Visual Studio 2015: in the View menu, there is an option "Property Manager". You get a different tree view of your solution: the projects, then the configurations, and then all the property pages for that project+configuration combination. The context menu for a configuration has options to create a new property page or to add an existing one.
[1] Although it is common to have the Release configuration produce foo.dll and the Debug configuration food.dll (that is, foo plus a "d" suffix), so they can exist next to each other without resorting to Debug/ and Release/ folders. In the General properties, set the TargetName to "$(ProjectName)d" (for the Debug configuration) and remove "$(Configuration)" from the OutputDirectory (for all configurations) to achieve this.
I've been using Redmine for almost a year to manage my startup. I have all issues stored in one project, with two subprojects for areas that I had to outsource and didn't want to give the contractor access to the main project issues. My problem is that I have ended up with hundreds of issues which vary greatly in the time required to implement them. Some are small, e.g. 'Fix bug in controller' or 'Add telephone number to contact us page'; some require much more effort, e.g. 'Create a new Q&A area' or 'Migrate server to nginx'; and some are more abstract, e.g. 'Investigate new SEO opportunities' or 'Consider implementing a reseller control panel'.
I feel like I must be using Redmine incorrectly as having these all mixed together is a bit confusing. Any ideas on how I could better organize would be greatly appreciated. If supplementing with other tools might be a better idea I'd love to hear suggestions.
I don't think there is a problem having all the issues you mentioned mixed together in a project as long as they're all related to the project.
The most important point when using Redmine with projects that have lots of issues is to make use of custom queries. This is a great feature, but in order to be able to use it, you must also use and fill in the other fields:
Tracker: Make use of different trackers (the default of bugs, features and tasks works for me)
Category: Can be a specific part of your software, or other aspects of your business (administration, IT/server, ...)
Version: Use versions to group issues; usually a version is a release, but it can also be a bucket like "Ideas" or "Unplanned"
Of course Priority and Due Date: I often use them for ordering, but you could also create a custom query for issues due in the next two weeks
Assignee is usually the most important if there is more than one user - first of all you'll want to see the issues assigned to you, as well as the issues created by you (in order to follow-up)
You can always add custom fields in case you have other information which may be used to filter your issues.
Once a set of custom queries is in place, you'll hardly ever consult all your open issues at once anymore.
Two features little used by Redmine newbies are categories and custom fields.
Categories are usually used for modules in your project ("Database", "Front End", "Administration Panel", etc.), and you can use custom fields for anything else you find useful, e.g. create a "Time Consumer (Estimated)" custom field as a list with "Whale (Weeks)", "Elephant (Days)", "Tiger (Hours)", "Monkey (About an hour)", "Mouse (Minutes)".
Let me first say I am aware of this FAQ for Mach-II, which discusses using application-specific mappings as a third option when:
locating the framework in the server root is not possible and
creating a server-wide mapping to the Mach-II framework directory is impossible
Using application-specific mappings would also work for other ColdFusion frameworks with similar requirements (ColdSpring). Here is my issue, however: my (I should say "their") production servers are all running ColdFusion MX7, and application-specific mappings were introduced in ColdFusion 8. I most likely will be unable to do option 1 or 2 because they involve making server-wide changes that could conflict with other applications (I don't have a final word on this, but I am preparing for that to be the case).
That said, is there anybody out there who was in a similar bind and has done an option 4, in any ColdFusion version, or with any similar framework? The only option 4 I can think of is modifying the entire framework to change this hardcoded path, and even if that worked it would be time-consuming and risky. I'm fairly sure that if there were a simple modification or other simple solution it would already be included in the framework (maybe it's included in version 1.8 of Mach-II and I don't know about it yet).
Any thoughts on solving this problem or even unorthodox setups with libraries that have specific path requirements would be appreciated. Any thoughts from Team Mach-II would especially appreciated...we're on the same team here Matt! ;-)
EDIT
Apparently, the ColdBox framework includes a refactor.xml ANT task which includes a target that refactors the ColdBox code to use a different absolute path as a base along with several other useful refactoring targets. So problem solved for ColdBox users.
Looking at the build.xml for Mach-II (1.6 and 1.8), I don't see any target in there that would allow me to refactor the code. I thought about creating a feature request ticket for such a task for Mach-II, but frankly I don't think creating such an ANT task is a big priority for the Mach-II team, since the need really only relates to either
a) users of ColdFusion versions below 8
b) someone who wants to use multiple Mach-II versions in the same application, a use I doubt they want to support
The ColdSpring code I have doesn't come with any ANT tasks at all, although I do have unit tests, and I bet if I poked around the SVN I'd find a few build scripts.
Using ANT tasks to refactor and retest the code, or the simpler (and sort of cop-out) solution of creating a separate ColdFusion instance for the application, are the best answers I've been able to come up with. I don't need this application to exist in the shared scope of other applications, so my first solution is going to be to try to get a dedicated CF instance for this application.
I'm also going to look at the ColdBox refactor.xml ANT task, however, and see if I can modify it to work generically, recognizing and refactoring CFC references with modified absolute paths. If I complete this task I'll be sure to post the code somewhere and create an answer linking to it. If anybody else wants to take a crack at that or help me out with it, feel free.
Until then I'll leave this question open and see if someone comes up with a better solution.
Fusebox is not so strict, I think.
In XML mode (maybe I'm not naming this 100% correctly; I just mean using Application.cfm), it's just a proper include in index.cfm, something like:
<cfinclude template="fusebox5/fusebox5.cfm" />
In non-XML mode it needs a proper extends in the root Application.cfc:
<cfcomponent extends="path.to.fusebox5.Application" output="false">
All you need is to know the path.
Perhaps you could create a symbolic link and let the operating system resolve the issue for you?
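For instance, a one-time setup sketch in Python (the paths are placeholders; on Windows, creating symlinks may require elevated privileges):
import os

# Point a "MachII" entry inside the site's web root at the shared framework
# install, so the framework's expected path resolves without any CF mappings.
os.symlink("/opt/frameworks/MachII", "/var/www/mysite/MachII")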
I've been playing with FW/1 lately, and while it may look like you need to add a mapping and extend org.corfield.framework, you can actually move the framework.cfc file into your web root and just extend="framework". It's dead simple, and gets you straight into a great framework with no mess and very little overhead.
It should be as simple as dropping the 'MachII' folder at the root of your domain (i.e. example.com/MachII). No mappings are required to use Mach-II if you just deploy at the root of the domain of your website.
Also:
Please file a ticket for the ANT task you mentioned in your question. Team Mach-II would love to have this issue logged:
Enter a new ticket on the Mach-II Trac
If you want to tackle an ANT task for us, we can get stuff like this incorporated into the builds faster than waiting for a Team member to work on the ticket. Code submissions from the community are welcome and appreciated.
We don't keep an eye on Stack Overflow very often, so we invite you to join our official community group, "Mach-II for ColdFusion", on Google Groups. The Google Group is the best place to ask questions or post comments like this if you want feedback from the Team.
I haven't worked for very large organizations and I've never worked for a company that had a "Build Server".
What is their purpose?
Why aren't the developers building the project on their local machines, or are they?
Are some projects so large that more powerful machines are needed to build it in a reasonable amount of time?
The only place I can see a build server being useful is for continuous integration, with the build server constantly building what is committed to the repository. Is it that I have just not worked on projects large enough?
Someone, please enlighten me: What is the purpose of a build server?
The reason you give is actually a huge benefit. Builds that go to QA should only ever come from a system that builds only from the repository. This way build packages are reproducible and traceable. Developers manually building code for anything except their own testing is dangerous: too much risk of stuff not getting checked in, being out of date with other people's changes, and so on.
Joel Spolsky on this matter.
Build servers are important for several reasons.
They isolate the environment. The local "code monkey" developer says "It compiles on my machine" when it won't compile on yours. This can mean out-of-sync check-ins or it could mean a dependent library is missing. JAR hell isn't nearly as bad as DLL hell; either way, using a build server is cheap insurance that your builds won't mysteriously fail or package the wrong libraries by mistake.
They focus the tasks associated with builds. This includes updating the build tag, creating any distribution packaging, running automated tests, creating and distributing build reports. Automation is the key.
They coordinate (distributed) development. The standard case is where multiple developers are working on the same code base. The version control system is the heart of this sort of distributed development, but depending on the tool, the developers may not interact with each other's code much. Instead of forcing developers to risk bad builds or worry about merging code overly aggressively, design the build process so that the automated build can see the appropriate code and process the build artifacts in a predictable way. That way, when a developer commits something with a problem, like not checking in a new file dependency, they can be notified quickly. Doing this in a staged area lets you flag the code that has built so that developers don't pull code that would break their local build. PVCS did this quite well using the idea of promotion groups. ClearCase could do it too using labels, but that would require more process administration than a lot of shops care to provide.
What is their purpose?
They take load off developer machines and provide a stable, reproducible environment for builds.
Why aren't the developers building the project on their local machines, or are they?
Because with complex software, amazingly many things can go wrong when just "compiling through". Problems I have actually encountered:
Incomplete dependency checks of different kinds, resulting in binaries not being updated
Publish commands failing silently, with the error message in the log ignored
Builds including local sources not yet committed to source control (fortunately, no "damn customers" message boxes yet...)
When trying to avoid the above problem by building from another folder, some files being picked up from the wrong folder
A target folder where binaries are aggregated containing additional stale developer files that should not be included in the release
We've seen an amazing stability increase since all public releases start with a get from source control into an empty folder. Before, there were lots of "funny problems" that "went away when Joe gave me a new DLL".
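As an illustration, here is a hedged sketch of that "empty folder" discipline in Python (the repository URL and the build/test commands are placeholders; substitute your own toolchain):
import subprocess
import tempfile

REPO_URL = "https://example.com/acme/product.git"  # placeholder

def clean_build(tag):
    """Check out the tagged revision into an empty folder and build there."""
    with tempfile.TemporaryDirectory() as workdir:
        # Fresh checkout: nothing from a developer's machine can leak in.
        subprocess.check_call(
            ["git", "clone", "--branch", tag, "--depth", "1", REPO_URL, workdir]
        )
        # Placeholder build and test steps.
        subprocess.check_call(["make", "release"], cwd=workdir)
        subprocess.check_call(["make", "test"], cwd=workdir)

clean_build("v2016.05.03")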
Are some projects so large that more powerful machines are needed to build it in a reasonable amount of time?
What's "reasonable"? If I run a batch build on my local machine, there are many things I can't do while it runs. Rather than pay developers to wait for builds to complete, pay IT to buy a real build machine already.
Is it that I have just not worked on projects large enough?
Size is certainly one factor, but not the only one.
A build server is a distinct concept from a Continuous Integration server. The CI server exists to build your projects when changes are made. By contrast, a build server exists to build the project (typically a release, against a tagged revision) in a clean environment. It ensures that no developer hacks, tweaks, unapproved config/artifact versions, or uncommitted code make it into the released code.
The build server is used to build everyone's code when it is checked in. Your code may compile locally, but you most likely won't have all the changes made by everyone else all the time.
To add to what has already been said:
An ex-colleague worked on the Microsoft Office team and told me a complete build sometimes took 9 hours. That would suck to do on YOUR machine, wouldn't it?
It's necessary to have a "clean" environment free of artifacts of previous versions (and configuration changes) in order to ensure that builds and tests work and don't depend on the artifacts. An effective way to isolate is to create a separate build server.
I agree with the answers so far in regards to stability, traceability, and reproducibility. (Lots of 'ity's, right?) Having ONLY ever worked for large companies (health care, finance) with MANY build servers, I would add that it's also about security. Ever seen the movie Office Space? If a disgruntled developer builds a banking application on his local machine and nobody else looks at it or tests it... BOOM. Superman III.
These machines are used for several reasons, all trying to help you provide a superior product.
One use is to simulate a typical end user configuration. The product might work on your computer, with all your development tools and libraries set up, but the end user most likely won't have the same configuration as you. For that matter, other developers won't have the exact same setup as you either. If you have a hardcoded path somewhere in your code, it will probably work on your machine, but when Dev El O'per tries to build the same code, it won't work.
They can also be used to monitor who broke the product last, with what update, and where the product regressed. Whenever new code is checked in, the build server builds it, and if the build fails, it's clear that something is wrong and the user who committed last is at fault.
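A hedged sketch of that "find the last committer" step in Python (the notification mechanism is left as a placeholder):
import subprocess

def last_committer_email():
    """Ask Git who authored the most recent commit on the current branch."""
    return subprocess.check_output(
        ["git", "log", "-1", "--format=%ae"], text=True
    ).strip()

# On a failed build, the server might do something like:
# notify(last_committer_email(), "Your last commit broke the build")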
For consistent quality and to get the build "off your machine", to spot environment errors, and so that any files you forget to check in to source control also show up as build errors.
I also use it to create installers as these take a lot of time to do on the desktop with code signing etc.
We use one so that we know that the production/test boxes have the same libraries and versions of those libraries installed as what is available on the build server.
It's about management and testing for us. With a build server we always know that we can build our main "trunk" line from version control. We can create a master install with one-click and publish it to the web. We can run all of our unit tests each time code is checked in to make sure it works. By collecting all these tasks into a single machine it makes it easier to get it right repeatedly.
You are right that developers could build on their own machines.
But these are some of the things our build server buys us, and we're hardly sophisticated build makers:
Version control issues (some have been mentioned in earlier responses)
Efficiency. Devs don't have to stop to make builds locally. They can kick one off on the server and move on to the next task. If builds are large, that is even more time the dev's machine is freed up. For those doing continuous integration and automated testing, even better.
Centralization. Our build machine has scripts that make the build, distribute it to UAT environments, and even to production staging. Keeping them in one place reduces the hassle of keeping them in sync.
Security. We don't do much special here, but I'm sure a sysadmin can make it such that production migration tools can only be accessed on a build server by certain authorized entities.
Maybe I'm the only one...
I think everyone agrees that one should
use a file repository
do builds from the repository (and in a clean environment)
use a continuous testing server (e.g. CruiseControl) to see if anything is broken after your "fixes"
But no one cares about automatically built versions.
When something was broken in an automatic build, but it's not anymore - who cares? It's a work in progress. Someone fixed it.
When you want to do a release version, you run a build from the repository. And I'm pretty sure you want to tag that version in the repository at that time, not every six hours when the server does its work.
So, maybe a "build server" is just a misnomer and it's actually a "continuous test server". Otherwise it sounds pretty much useless.
A build server gets you a sort of second opinion of your code. When you check it in, the code is checked. If it works, the code has a minimum quality.
Additionally, remember that low-level languages take much longer to compile than high-level languages. It's easy to think "Well look, my .NET project compiles in a couple of seconds! What's the big deal?" A while back I had to mess with some C code and I had forgotten how much longer it takes to compile.
A build server is used to schedule compile tasks (e.g. nightly builds) of usually large projects located in a repository, builds that can sometimes take more than a couple of hours.
A build server also gives you a basis for escrow, being able to capture all the parts necessary to reproduce a build in the case that others may have rights to take ownership.