We have moved to a product versioning approach which will mark/increment builds according to the following format: [Major].[Minor].[Build].[Revision/Patch], and a production release will essentially be an increment of Major or Minor (depending on the scope of changes).
This works great for patches and Trunk builds, but not so well for concurrent feature development in branches - especially as it is likely we would build release candidates off the branch instead of merging to the Trunk and releasing (not my preferred option, but likely to be more realistic, unfortunately).
Regardless of whether we merge down to the trunk (or not), does anyone have any useful strategies for dealing with branch versioning? We'd need to be able to uniquely identify builds from the branches and trunk, and may end up releasing from trunk or branches at any given time.
Some considerations:
We may not know in advance what the release order is, so trying to assume what the minor version should be on a branch-by-branch basis isn't likely to solve the problem.
We could add another number to the product number which indicates the branch (if applicable), though where would it logically sit?
A (lightweight) scenario may help:
Product X\Trunk (ver 1.1.208.0)
Product X\Branches\Feature A (ver 1.1.239.0)
Product X\Branches\Feature B (ver 1.1.221.0)
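One purely illustrative option for the "extra branch number" idea: slot a branch digit between Minor and Build, so trunk (0) and each feature branch get distinct, non-colliding version strings. The branch names and ids below are assumptions, not part of the scheme above.

```shell
# Hypothetical mapping from branch name to a branch digit; the placement
# (between Minor and Build) is just one option, not a recommendation.
branch_id() {
  case "$1" in
    trunk)     echo 0 ;;
    feature-a) echo 1 ;;
    feature-b) echo 2 ;;
    *)         echo 9 ;;   # unknown branches get a sentinel id
  esac
}
VER_TRUNK="1.1.$(branch_id trunk).208.0"
VER_FA="1.1.$(branch_id feature-a).239.0"
echo "$VER_TRUNK"
echo "$VER_FA"
```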
Edit: The best documentation I've found thus far is located on MSDN though it is a little vague on unique versioning of concurrent branches.
After almost two weeks of thought, conversations and feedback both from StackOverflow and from people in the industry who I consider to be experts in the field of change management, we came to a consensus approach yesterday.
There's really no right or wrong answer, and no silver bullet, for correctly handling branching/merging; IMHO it varies from business to business and product to product. This is how we decided to go ahead:
Regardless of trunk or branch, we'll continue to number based on the format [Major].[Minor].[Build].[Rebuild], where Rebuild indicates the build revision. Branches and trunk will get out of sync (different build numbers), but that's not a problem as we'll be defining our build configurations and drop locations explicitly anyway. It'll be an environment management responsibility to know which version is deployed to which server.
We probably won't merge features into a release branch, as we typically have more of a release-branch focus; we'll release from a candidate branch and increment the minor version on the trunk (and other branches, if applicable) before merging down to the trunk after a release has been deployed (if applicable).
Since every release elicits a minor version increment (except patches) the build numbering will never go in reverse. Patches will obviously come from a prod branch, so the build number will increase.
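A minimal sketch of stamping a build with the [Major].[Minor].[Build].[Rebuild] format described above; the numbers are made up, and a real build server would supply BUILD from its own counter.

```shell
# Hypothetical build-script fragment: compose the version stamp from values
# the build configuration supplies.
MAJOR=1
MINOR=2
BUILD=208        # increases with every build of this configuration
REBUILD=0        # bumped only when the same changeset is rebuilt
VERSION="${MAJOR}.${MINOR}.${BUILD}.${REBUILD}"
echo "stamping build as ${VERSION}"
```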
I've a mind to keep this thread open and let others write about their preferred techniques for managing branch versioning.
We don't give version numbers to our feature branches. We have the main develop branch, then create feature branches for each feature we create. When that feature is finished, or parts of it are finished that won't break the develop branch, we merge back to develop.
In doing this, the develop branch should be somewhat stable. We release weekly so every Monday we create a release branch from develop which is given a version number. The testers then spend a day or two testing this branch to make sure it's stable, then we deploy on Tuesday/Wednesday.
As we deploy weekly we don't worry too much about fixing minor issues in the release branch. We do it in the feature branch, or if that branch is now done with directly in develop. Any major issues found we would fix in release, deploy and merge back to develop.
I wouldn't tie a version number to a feature branch, because in a concurrent development scenario, you might need to consider:
only part of a feature (not everything might be ready for release within a feature)
several features (if one depends on another), meaning you would need to release several features as part of a new version
minor or major versions: not every stable point will increment just the build, a feature might introduce minor or major changes
For each new x.y release, I would rather have a dedicated branch for consolidation, where I can merge all the feature branches selected for the next release (since some features might not be ready in time), and where the x.y part would make sense.
In other words, I would separate the feature development cycle from the release cycle.
Related
My team has a dozen engineers, some of whom work on modules that will take 2-3 weeks to complete.
Now we integrate each module to the main branch of CVS only after unit testing is completed.
The problem with this is for a good 2-3 weeks the code sits only on an engineer's computer and is not under version control.
The programming language used is C.
Is there any elegant way to manage non-unit-tested code under version control?
Thanks
James
Your process of 2-3 weeks before check-in to the "main" branch is not out-of-the-norm, as would be a similar re-design "root canal" effort (serious restructuring work that is sometimes necessary).
However, I would tend to get pretty nervous about that much time outside of version control.
Don't code angry.
Don't code drunk.
Don't code for a long time without a version control "check-point".
A strong recommendation is that the local developer use Mercurial or Git for local version control for that 2-3 weeks, and then you can check the "finalized" project back into the (main) CVS branch. They are really built for exactly that scenario.
That's what we do -- it works, and it makes diffs-and-patches and collaboration between individual developers quite trivial.
(For us, Mercurial is local, Subversion is the "main" version control system.)
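The "local DVCS, central system for finished work" idea can be sketched like this; everything runs in a throwaway directory, and the file name and commit identity are dummies.

```shell
# Sketch: keep the 2-3 weeks of daily work in a local Git repo, then hand
# the finished tree to the central (CVS/Subversion) checkout.
set -e
work=$(mktemp -d)
cd "$work"
git init -q .
echo 'int main(void) { return 0; }' > module.c
git add module.c
git -c user.name=Dev -c user.email=dev@example.com \
    commit -qm 'checkpoint: module skeleton'      # local history only
# ...two weeks of local checkpoints later, check the result into CVS:
COMMITS=$(git rev-list --count HEAD)
echo "local checkpoints recorded: $COMMITS"
```

The central repository only ever sees the finished module, but the developer has a diffable, revertable history the whole time.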
...for a good 2-3 weeks the code sits only on an engineer's computer and is not under version control...
Hm, well, to me programming outside of version control is like driving in reverse gear: technically doable, but generally, er, quite counterproductive.
In that sense, I would say any approach that somehow allows your developers to continuously keep their work under version control would be more elegant than nothing at all. There are many known ways to do that; googling for version control branching strategy turns up plenty of resources explaining your options and the criteria for choosing between them.
Without experimenting, it's rather hard to tell which of these options is the best fit for your project. When studying the resources referred to above, I'd recommend checking the details of what is usually called a Feature Branch. This strategy rather closely matches the case you describe ("modules that will take 2-3 weeks to complete"), although I wouldn't bet that it's the best fit for your team.
Note also that at least for "internal" developers needs you have an option to use version control system other than old-fashioned and inconvenient CVS.
If your company policy requires all your code to be unit tested before checking in, I think it is pretty good policy, and you should do just that: write your unit tests, perhaps even before writing code.
But if I misunderstood you, and there is merely one large testing session when everything is done, well, that's too bad: you will definitely encounter pesky integration problems. If you cannot change that policy, at least have your local VCS. You could also have a config with "featureX_Enabled" switches, and try not to forget to set them to '0' when checking in.
Anyway, switch to Git or Mercurial, they would be so much less painful to use.
A typical solution is to have a CI (Continuous Integration) build running on a build server: It will analyze the source code, make build (in debug) and run tests, measure test coverage, etc.
Now, another build type usually known is "Nightly build": do slow stuff like create code documents, make a setup package, deploy to test environment, and run automatic (smoke or acceptance) tests against the test environment, etc.
Now, the question:
Is it better to have a third, separate "Release build"?
Or should the "Nightly build" run in release mode and be used as the release?
What are you using in your company?
(The release build should also add some kind of tag to source control of potential product version.)
Usually the way I do it is my nightly builds get promoted to Release builds. You usually don't want to create a separate release build. Most QA teams should test nightly builds (hopefully they aren't testing from your CI builds). Once they have deemed it good enough for release, you promote it to release status. You determine what that means. Move it to another location, rename it, tag it, label it, burn it, etc.
You don't want to have QA testing a nightly, then when they deem it good, build another one, that you say is the same. You never know, they can be different. An OS patch may have been applied to your build machine, a third party tool may have been updated, etc. You don't want to make your QA teams work twice to test the "same exact build". It might be from the same source, but there is no guarantee it is the same exact build.
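The promotion idea above boils down to: the exact artifact QA tested is copied or tagged into the release area and never rebuilt. A tiny sketch, with made-up paths and version number:

```shell
# Sketch of promoting a tested nightly artifact to release status by
# copying it, not rebuilding it.
set -e
drop=$(mktemp -d)
mkdir -p "$drop/nightly/1.1.208.0" "$drop/release"
echo 'binary-bits' > "$drop/nightly/1.1.208.0/app.bin"
cp -R "$drop/nightly/1.1.208.0" "$drop/release/1.1.208.0"   # promote, don't rebuild
if cmp -s "$drop/nightly/1.1.208.0/app.bin" "$drop/release/1.1.208.0/app.bin"; then
  STATUS=promoted
else
  STATUS=mismatch
fi
echo "$STATUS"
```

Because the bytes are identical, there is no "it might be a different build" argument later.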
The answer to your question depends highly on the project you're working on and the goals you want to set.
In general, (trivially true for small projects) a build should be very fast and it should include everything needed for deployment. This is for me always the goal even if I don't reach it - at least not at once. It just keeps me looking at what can be improved all the time.
I know from working on big legacy projects that there are so many accumulated problems slowing things down that it might not be feasible, at least not as an immediate target. In large legacy projects, compiling and linking usually take too long, tests (if they exist) might also run too long, and generating all the required information for deployment might be slow and even manual. The build hardware might be insufficient, too. There are many other things to add to this incomplete list.
When working on a project like this, I try to have separate cycles doing things.
First cycle, a solid CI server which builds, runs automated unit tests, packages and archives builds. This must be fast to give fast feedback to development on changes made. If this is slow, get better hardware for building, sort out dependencies and fix slow unit tests etc. You want this to be as fast as possible. The builds are all deployable builds.
The second cycle would be a slower cycle only picking up builds made by the CI system. It does not work with source code as input, but rather release builds. These are picked up as you want (every build produced) or latest available when ready to do another cycle. This longer cycle would consist of deploying the build onto a test server, run automated functional tests and do other things which are "too slow", "not yet fast" or something else you want which takes a long time. Depending on your organization, you can now add to the deployable package (docs etc.), rename the release according to something visible to clients or things like that. Builds passing here could be good-to-go-live.
If you also have performance tests to run, you might want a third cycle which works with the second cycle's builds as input.
This is briefly described, but the main point here is to separate things, so you can have everything in the chain while getting feedback quicker than having one cycle. I find this a good approach as it's possible to get the benefits of speed (feedback) as well as a natural place to do things.
Finally, I want to mention that the way to go about this varies from project to project as well, especially if you retrofit CI. You may even want to have a separate continuous build with only compilation and unit tests, and a once-a-day (or similar) build which feeds the releases and testing. This would of course mean that only development uses the fast CI builds, because they're incomplete and not suitable for deployment. Still, long-term this is not where you want to be: you want the whole chain automated.
Over the years I've done this a number of ways. The first was that the release build would happen IF and ONLY IF the 'sanity test' passed for the debug build. It would also auto-deploy to our pre-production environment for user-driven validation.
I've also seen this done where release builds are treated almost as sacred, and that they are only made when it is deemed 'time to get ready to really deploy'. Along with that comes some 'paperwork' and approvals, and then the release build is made (manually) and then it is sanity checked and then deployed.
From my experience it doesn't really matter as long as you are consistent and that it works with the way the company/team understands it should work. Going against the grain is easy at first but then results in what happened at one client which is they actually abandoned a structured build/deployment approach all together (a $100M company did that, imagine, but they did).
We develop a product for internal customers. We don't have a QA team, and don't use assertions. Performance is important, application size isn't.
Is it a good idea to have a single configuration (instead of separating Debug and Release), which will have the debug information (pdbs), and will also do the performance optimization?
Are there any cons to this approach?
Keep both. There is a reason for having two configurations! Use the Debug one for debugging and the Release one for every-day use.
The cons of "merging" configurations are obvious: you won't get the best optimizations you could with a clean Release configuration, and debugging will be awkward. The few seconds (or minutes) needed to rebuild the project in a different configuration are worth it, trust me.
I would say that you should always keep debug and release versions separate. Release versions are for your customers, Debug versions are for your developers. You say that you don't use assertions: perhaps you should be? Even if you don't use assertions in your own code, you can still trigger assertions in the underlying library code, eg when using invalid iterators. These will give the developer a warning that something's wrong. What would the user do if they saw this message: panic, call tech support, do nothing?
The debug version is there to provide you with extra tools to fix problems before you ship the release version. You should use every tool available to you to increase the quality of your product.
The debug info will be mostly worthless in an optimized build, because the optimizer transforms the program into something unrecognizable. Also, errors related to undefined behavior are easier to expose if you have a secondary configuration with different optimization flags.
Debugging and optimization tend to work against each other. The compiler's optimizations typically make debugging a pain (functions can be inlined, loops unrolled, etc), and the strictness that makes debug info worthwhile ties the compiler's hands so it can't optimize as well. Basically, if you combine the two, it's the worst of both worlds.
Performance of the finished product thus pretty much demands that it be a "release" version, not a debug version, and certainly not some odd mix of the two.
You should have at least two. One for release (performance) and one for debugging - or do you write perfect code, first time every time?
Is it OK to have a single configuration, rather than separating Debug and Release (in our case)?
It may be OK; it depends heavily on your case (but given the details you provide, I think it is very much not OK).
We don't have a QA team, and don't use assertions.
Assertions are not the issue with a debug build. They are another tool you can use (or not).
Having a QA team or not should not influence (heavily) the decision between debug and release builds (but if you do have a QA team, sooner or later you will probably want to have a debug version of your product).
A QA team will affect the quality of your product heavily. Without dedicated QA (by someone other than the people who develop the application) you have no guarantee of the quality or stability of your product, you can provide no guarantee it does what it's supposed to do (or that it's fit for any purpose), and you cannot make meaningful measurements on your product in lots of areas.
It may be you actually don't need a QA team, but in most cases you're just depriving your development team and customers (internal or not) of a lot of necessary data.
A debug build should make it easier to - well - debug your product and track issue and fix them. If you are doing no organized QA, you may not even know what your main issues to fix are.
Methinks you actually do have a QA team; you just don't see it as such: your internal customers (which may even be you) are your QA team. That's a bad idea, to the degree that your application's function is important.
Working with no QA team is like building a car by yourself and taking it on the road for testing: you have no idea whether the wheels are held on properly, or whether the brakes work, until you are in traffic. It may be that you don't kill anyone, but I wouldn't put the critical data in your company into your untested application, unless it's not really critical.
Performance is important, application size isn't.
If performance is important, who measures it? Does the measurement code belong to your released application? Do you add it and remove it in the released code?
It sounds like you're doing ad-hoc development, and with a performance-critical application, no QA team, and no dedicated debugging, I'd have serious doubts that your team can actually deliver.
I don't know your situation and there may be a lot I don't see in this so maybe it's OK.
Are there any cons to this approach?
Yes: you will either end up with diagnostics code in your release version, or have to remove the diagnostics code after fixing each problem and add it again when working on the next problem.
You should not remove the debug version only for optimization though. That's not a valid argument, since you can optimize your release version and leave the debug version as is.
I've always programmed alone, I'm still a student so I never programmed with anyone else, I haven't even used a version control system before.
I'm working on a project now that requires knowledge of how programmers work together on a piece of software in a company.
How is the software compiled? Is it from the version control system? Is it by individual programmers? Is it periodic? Is it when someone decides to build or something? Are there any tests that are done to make sure it "works"?
Anything will do.
Actually, there are as many variations on these processes as there are companies. Meaning: every company's conventions differ a little from everyone else's, but there are some common best practices that are generally used in most places.
Best practices that are always useful
All the source code of the project and anything that is required to build it is under version control (also called source control). Anyone should be able to build the entire project with one click.
Furthermore, unnecessary files (object files or compiled binaries) should not be added to the repository, as they can be regenerated quite easily and would just waste space in the repo.
Every developer should update and commit to the version control a few times per day. Mostly when they have finished the task they are working on and tested it enough so they know that it doesn't contain trivial bugs.
Again: anyone should be able to build the project with a single click. This is important and makes it easy for everyone to test. It's a big advantage if non-programmers (eg. the boss) are able to do so, too. (It lets them see exactly what the team is working on.)
Every developer should test the new feature or bug fix they are adding before they commit those to the repository.
Set up a server that regularly (at predetermined intervals) updates itself from the repository and tries to build everything in the entire project. If it fails, it sends e-mails to the team along with the latest commits to version control (showing since which commit the build has failed) to help debug the issue.
This practice is called continuous integration; when the build happens once a night, the result is also called a nightly build.
(This doesn't imply that developers should not build and test the code on their own machines. As mentioned above, they should do that.)
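The update-build-notify loop described above can be sketched like this; `build_step` and the mail address are stand-ins for a real `make` invocation and notification hook.

```shell
# Toy version of a CI loop body: build, and report failure with enough
# context (recent commits) to debug. In reality this runs on a timer after
# updating from the repository.
build_step() { true; }           # stand-in for: make all && make test
if build_step; then
  RESULT="build OK"
else
  RESULT="build FAILED - mailing team@example.com with recent commits"
fi
echo "$RESULT"
```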
Obviously, everyone should be familiar with the basic design/architecture of the project, so if something is needed, different members of the team don't have to reinvent the wheel. Writing reusable code is a good thing.
Some sort of communication is needed between the team members. Everyone should be aware of what the others are doing, at least a little. The more, the better. This is why the daily standup is useful in SCRUM teams.
Unit testing is a very good practice that makes testing the basic functionality of your code automatically.
Bug tracking software (sometimes called issue tracking software) is a very good means of keeping track of what bugs there are and what tasks the different team members have. It is also good for testing: the alpha/beta testers of your project can communicate with the development team this way.
These simple things ensure that the project doesn't go out of control and everyone works on the same version of the code. The continuous integration process helps when something goes terribly wrong.
It also prevents people from committing stuff that doesn't build to the main repository.
If you want to include a new feature that would take days to implement and it would block other people from building (and testing) the project, use the branches feature of your version control.
If that is not enough, you can set it up to do automated testing, too, if that is possible with the project in question.
Some more thoughts
The above list can be very heavyweight at first glance. I recommend that you follow it on an as-needed basis: start with a version control and a bug tracker, then later on set up the continuous integration server, if you need it. (If it's a large project, you're gonna need it very soon.) Start writing unit tests for the most important parts. If it's not enough, then write more of them.
Some useful links:
Continuous integration, Daily builds are your friends, Version control, Unit testing
Examples:
For version control, I tend to use Git for my personal projects nowadays. Subversion is also popular; for example, VisualSVN is quite easy to set up if you use a Windows server. As a client, TortoiseSVN works best for many people. Here is a comparison between Git and SVN.
For bug tracking software, Jira and Bugzilla are very popular. We also used Mantis at a previous workplace.
For continuous integration software, there is TeamCity, for one (CruiseControl and its .NET counterpart are also notable).
Answer to your question "who decides the main design of the project?"
Of course, that would be the lead developer.
In companies, the lead developer is the person who talks to the financial/marketing people about the project, and decides the architecture according to the financial capability of the company, the planned features, the requirements from users, and the time available.
It is a complex task, and usually more than one person is involved. Sometimes members of the team are also asked to participate or brainstorm about the design of the entire project or of specific parts.
I'm a student as well, who completed a software engineering course recently where the entire semester consisted of a giant group project. Let me just start by saying we could have done with 3 people what it took 12 of us the whole semester to do. Working with people is a tough thing. Communication is key.
Definitely utilize a repository. Each person can remotely access all the code and add/delete/change anything. And the best part about Subversion is that if someone breaks the code, you can revert to an earlier version and assess what went wrong from there. Communication is still key, though: know what your teammates are doing so that there are no conflicts. Don't sit on your code either; make quick, meaningful commits to the repository to be the most effective.
I'd also recommend a bug tracker, such as Redmine. You can set up accounts for everyone, assign people tasks with different priorities, and also track and see whether people have taken care of certain problems, or whether more have come up.
And, as has been said before, unit testing will help greatly. Best of luck! Hope this helped :-)
how programmers work together on a piece of software in a company
Developers never work as a team. Teams suck. Dilbert is funny not because he's a comical character like Goofy. He's funny because he's real and people recognize the situations he's in.
Generally it is good practice not to check build artifacts into the repository. The repository will contain the source tree, build configuration, etc - anything written by a human. Software engineers will check out a copy of their code onto their local filesystem and build it locally.
It is also good practice to have unit tests which are run as part of the build process. This way, a developer will know instantly if his changes have invalidated any of the unit tests, and will have the opportunity to fix them before checking in his changes.
You might like to look into the documentation for a version control system (one of Subversion, CVS, Git, etc) and for a build system (for example, in Java there are Ant and Maven).
The big things are:
A plan — If people don't know where they're going, they won't go anywhere. The start of any project therefore needs a few people (often the project graybeards) to get into a huddle and come up with a plan; the plan need not be very detailed, but it's still required.
Version control system — Without this, you aren't working together. You also need the firm commitment that if things aren't committed, they don't count. “Oh, it's in one of my sandboxes” is just a lame excuse.
Issue tracker — You can't keep track of these things by email folders. Should definitely be database-backed.
Notification system — People need to know when things are committed to code that they maintain or comments are made to bugs they are responsible for. Email can work for this, as can IRC (provided everyone uses it, of course).
Build system — It doesn't really matter how this happens, so long as with one action you can get a complete build of the current state of things, both of your development sandbox and of the main repository. The best option for this depends on what language(s) you're using.
Test suite — A test suite helps people avoid silly errors. It needs to be as easy to run as the build (being part of the build is good). Note that tests are only a crude substitute for correctness, but they're a heck of a lot better than nothing.
Finally, you need a willingness to work together toward fulfilling the plan. That's all too often the tough part.
There is no standard for the things you're asking about. Rather, there are conventions and these depend heavily on the size and maturity of the organization. If you're in a small organization, say a couple of programmers, then things will probably be somewhat informal with the individual developers doing coding, builds, and test.
In larger organizations, there may be a dedicated build engineer and process. This kind of organization will usually do a formal build on a regular basis, say once a day, using whatever source code is checked in. The process will also usually include BVT (Build Validation Tests) and perhaps some regression tests. Developers will check out the code from the repository, work on their own portion locally, then check it in.
In the largest organizations, like Microsoft or Google, they will have a completely dedicated group and full lab that will build on a more-or-less continual basis, making the results of each run available. These organizations have very formal processes and procedures in place about what gets checked in and when, what the code review processes are, etc.
There is no cookbook for working with software development, but in general the version control system should be the heart of your build system, even if you are working in a project where you are the only developer. Even in this case, being able to revert versions and read the version log is very welcome help in fixing bugs. This is not the only feature of a version control system, but this alone justifies installing, configuring and maintaining a version control system.
The build can be done either by each developer when adding new code, or periodically by a "build server". The latter approach requires more setup, but helps find build errors sooner.
The short answer - "It depends".
Currently, I'm working on a project by myself, so I'm the one who does the builds and uses the VCS. I know of other places where teams work on a project together by (shudder) email, or big (5+) teams using VCS.
On that note, I highly recommend learning at least some VCS, and Joel Spolsky has a great introductory tutorial for Mercurial. Bazaar (my personal choice) is similar, and then Git is the next nearest in terms of similarity, but probably more popular than either (at least ATM). After that you have SVN which is pretty weak in comparison.
Actually, Joel talks about most of your questions - I'd recommend reading the 10 years of archives he has - it's all highly useful information, and most of it pertinent to your current and near-future situation.
Proper programming is a deep thing that benefits greatly from experience. Pair-programming is like running multiple processors of awareness... one can overlook something seen by the other and so long as they are communicating it can result in great progress.
First of all, teams work by using repositories (which can be a professional version control system, or just a bunch of directories considered to be the 'live' ones; a revision control system is the de facto standard, however). How the project is managed depends on how you work (waterfall, agile, etc.). If you work in iterations, you build components/plugins/modules/libraries which are self-contained, and you perform unit testing until each is signed off as finished. Working in a team means you don't work on the entire project everywhere at the same time; instead, you get a task to perform within one realm of the project. On some occasions you have to fix code that isn't yours, but that usually only happens when strange behavior occurs. Basically, you do the testing of the parts you develop.
Let me illustrate this for you. You are part of a team of construction workers. The architect comes with a plan for a building, the foreman looks at what is necessary to construct it and then hires the workers. The mason builds the walls, checks them for strength, and mortars them up nicely. The electrician does all the wiring inside the building so electricity can flow. Each man has his own job. Sometimes the electrician might want to discuss with the mason whether certain walls can be carved into, but always in conjunction with the foreman.
I hope this is some help for you!
Typically, the source control system contains the source code and usually does not have the binaries. If you want to build it and run it, you would check the code out and build it on your local machine.
Some places run nightly builds to make sure everything still works. There may even be automated tests that are run server-side. If the build or anything else fails, someone is notified automatically.
A good introduction to a method of using source control is Eric Sink's Source Control HOWTO http://www.ericsink.com/scm/source_control.html
In his examples he uses SourceGear Vault since he wrote it and all, but the methods can be applied to other version control systems.
This is yet another good reason to look into open-source projects.
The lead developers who work on big open-source projects (like Chromium, Mozilla Firefox, MySQL, and popular GNU software) are professionals. They have a lot of experience, and these projects have evolved over years with ideas from hundreds of such professionals.
Everything others mentioned in their answers (a plan, version control system, issue tracker, notification system, build system, test suite) can be found in these open-source projects.
If you really want hands-on experience, I strongly suggest going through some popular, large open-source projects, then getting the source for one (using its version control) and building it yourself.
PS: I'm also a student, and getting involved in open-source projects is the best thing I ever did. Trust me, you'll feel the same.
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
I will be implementing a version control system in C++ for my final year project.
I would like to know:
What are the features a version control system should support.
What features do you consider are missing in existing implementations (so that my version control system does more than just reinvent the wheel)
References (if any) to start with.
If you want to do something different/innovative, then I'd recommend targeting your revision control at something other than source code. There are other applications where revision control could be useful (e.g. revision control for documents), and source control has been done extensively already; you're unlikely to come up with an innovation that doesn't already exist in some other source control system.
What are the features a version control system should support.
Core features: Create Project, Check in, Check out, Branch, Get Latest/Previous, View History, Compare, Rollback
What features do you consider are missing in existing implementations (so that my version control system does more than just reinvent the wheel)
Auto Build, Code Analysis, Email Notification, In-place editor, Database based storage
Ignore the people who say it cannot be done. In their seminal book "The Unix Programming Environment", Kernighan and Pike develop a (very basic) one using shell scripts, diff and sed in very few lines of code. I'm sure you can do something similar.
#2 Nice user interface!
The number one feature should be:
MERGING
If you write a complete VCS and your merging sucks, you will not use that feature, and your VCS is then simply a glorified backup system.
Merging is also probably the most difficult part to implement.
The ability to search an entire codebase for instances of a chunk of text. This gets really important as the codebase gets older (or as new people join a project and are learning the codebase). SourceSafe has it but some others don't. Yes, Sourcegear Vault, I'm looking at you.
Granted, it's one of the few things that SourceSafe does right, but it needs mentioning.
Stand on the shoulders of giants. What about contributing to an existing version control system, adding new features and/or fixing bugs, rather than reinventing the wheel?
Depending on the language you prefer, you will probably find a good open-source one written in that language.
Python → Mercurial, C → Git, C++ → Monotone
Take a look at the article by Tom Preston-Werner, cofounder of GitHub: The Git Parable, which describes how a Git-like system could be built from first principles. Worth reading.
Try also to avoid traps others fell into; for example, design file formats and network protocols with extensibility in mind, so you don't have to hack them when the format changes: either include a version number or a list of capabilities. But that is a more advanced concern, and it assumes you want this VCS to go beyond a term project.
I admire your ambition. Your course may be different from mine, but you tend to get more marks for the report than for the code, so it's important to know what you can achieve, given that writing the report will take two to three times as long as the coding.
You probably need to try to implement a few other ideas extremely roughly to give yourself an idea of the difficulty of each one before you commit.
Linus Torvalds may well have developed the core of git in four weeks, but it took many others a lot longer to make it really usable, and he is Linus Torvalds!
One thing many miss is the ability to shelve changes, i.e. check files in as a temporary commit, so you can save your working copy regularly (e.g. every night, in case your computer gets stolen or broken) without these in-progress changes showing up in the main list of changes. When you've finished working on them, you check them in as a normal commit and the temporaries are removed (as you don't want to fill your VCS with irrelevant versions).
Another thing many fail to do is automatically notify the user of changes. E.g. with svn you have to run svn update to get any changes made by team members; it would be nice to have that automated, so changes were retrieved on every operation (e.g. keep a list of changes since the client's last update and send a highly optimised data block to the client after every operation).
Try a better way of storing binary files: most SCMs are designed to handle text and handle binaries only incidentally (and usually not very well).
In many ways I'd suggest starting with an existing open-source SCM and extending it with your changes. That way you get the basics for free, leaving you to concentrate on the fancy features, at the cost of learning how that SCM is developed. Subversion, git, mercurial and bazaar are all good reference SCMs.
You can't create a full version control system as your final-year project. In C++, it's close to impossible in a 4-6 month timeframe even for a full team (I think FYPs normally run 4-6 months).
What you should be aiming at is creating a subset of a good version control system. Look at SVN for a start and try developing a subset of it. e.g., SVN MINUS Branching Support should be way more than enough.
And why C++? Try doing it in some other language which has better library support, e.g. C#. Remember, C# applications ship quicker :)