Version Control for code yet to be unit tested

My team has a dozen engineers, some of whom work on modules that will take 2-3 weeks to complete.
Currently we integrate each module into the main branch of CVS only after unit testing is completed.
The problem with this is that for a good 2-3 weeks the code sits only on an engineer's computer and is not under version control.
The programming language used is C.
Is there an elegant way to manage non-unit-tested code under version control?
Thanks
James

Your process of 2-3 weeks before check-in to the "main" branch is not out of the norm, and neither would be a similar "root canal" redesign effort (serious restructuring work that is sometimes necessary).
However, I would tend to get pretty nervous about that much time outside of version control.
Don't code angry.
Don't code drunk.
Don't code for a long time without a version control "checkpoint".
A strong recommendation is that each developer use Mercurial or Git for local version control during those 2-3 weeks, and then check the "finalized" project back into the main CVS branch. Those tools are built for exactly that scenario.
That's what we do -- it works, and it makes diffs-and-patches and collaboration between individual developers quite trivial.
(For us, Mercurial is local, Subversion is the "main" version control system.)

...for a good 2-3 weeks the code sits only on an engineer's computer and is not under version control...
Hm, well, to me programming outside of version control is like driving in reverse gear: technically doable but generally, er, counterproductive.
In that sense, I would say any approach that allows your developers to keep their work continuously under VC would be more elegant than nothing at all. There are many known ways to do that; googling for "version control branching strategy" turns up plenty of resources explaining your options and the criteria for choosing among them.
Without experimenting it's rather hard to tell which of these options is the better fit for your project. When studying the resources I refer to above, I'd recommend checking the details of what is usually called a Feature Branch strategy. It rather closely matches the case you describe ("modules that will take 2-3 weeks to complete"), although I wouldn't bet that it's the best fit for your team.
Note also that, at least for "internal" developer needs, you have the option of using a version control system other than old-fashioned and inconvenient CVS.

If your company policy requires all your code to be unit tested before checking in, I think it is a pretty good policy, and you should do just that: write your unit tests, perhaps even before writing the code.
But if I misunderstood you, and it is just that there should be one large testing session when everything is done, well, that is too bad: you will definitely run into pesky integration problems. If you cannot change that policy, at least have your local VCS. Also, you could have a config with "featureX_Enabled" switches, and try not to forget to set them to '0' when checking in.
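To make the switch idea concrete, here is a minimal C sketch; the flag name and the notion of reading it from a config file are just illustrative, not anyone's actual setup. Unfinished code can live in the main branch but stays dormant until the switch is flipped:

```c
#include <stdio.h>

/* Hypothetical switch; in practice this might be read from a config
   file at startup. Remember to check in with it set to 0! */
static int featureX_Enabled = 0;

static void featureX_run(void)
{
    printf("new, not-yet-unit-tested code path\n");
}

int main(void)
{
    if (featureX_Enabled)
        featureX_run();                       /* dormant until switched on */
    else
        printf("featureX off: existing behavior\n");
    return 0;
}
```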
Anyway, switch to Git or Mercurial, they would be so much less painful to use.

Related

How do programmers work together on a project?

I've always programmed alone; I'm still a student, so I've never programmed with anyone else, and I haven't even used a version control system before.
I'm working on a project now that requires knowledge of how programmers work together on a piece of software in a company.
How is the software compiled? Is it from the version control system? Is it by individual programmers? Is it periodic? Is it when someone decides to build or something? Are there any tests that are done to make sure it "works"?
Anything will do.
Actually, there are as many variations on these processes as there are companies. Meaning: every company's conventions differ a bit from the others', but there are some common best practices that are generally used in most places.
Best practices that are always useful
All the source code of the project and anything that is required to build it is under version control (also called source control). Anyone should be able to build the entire project with one click.
Furthermore, unnecessary files (object files or compiled binaries) should not be added to the repository, as they can be regenerated quite easily and would just waste space in the repo.
Every developer should update and commit to version control a few times per day, mostly when they have finished the task they are working on and tested it enough to know that it doesn't contain trivial bugs.
Again: anyone should be able to build the project with a single click. This is important and makes it easy for everyone to test. It is a big advantage if non-programmers (e.g. the boss) are able to do so, too; it lets them see exactly what the team is working on.
Every developer should test the new feature or bug fix they are adding before they commit those to the repository.
Set up a server that regularly (at predetermined intervals) updates itself from the repository and tries to build the entire project. If the build fails, it sends an e-mail to the team along with the latest commits (showing since which commit the build has been failing) to help debug the issue.
This practice is called continuous integration; when the build runs on a fixed schedule (e.g. once a night), the builds are also called nightly builds.
(This doesn't imply that developers should not build and test the code on their own machines. As mentioned above, they should do that.)
Obviously, everyone should be familiar with the basic design/architecture of the project, so if something is needed, different members of the team don't have to reinvent the wheel. Writing reusable code is a good thing.
Some sort of communication is needed between the team members. Everyone should be aware of what the others are doing, at least a little. The more, the better. This is why the daily standup is useful in SCRUM teams.
Unit testing is a very good practice that lets you test the basic functionality of your code automatically.
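As a rough illustration of what such a test looks like in C (a deliberately framework-free sketch with an invented function; real projects often use a harness like CUnit or Check):

```c
#include <assert.h>
#include <stdio.h>

/* the code under test (an illustrative example function) */
static int clamp(int value, int lo, int hi)
{
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

int main(void)
{
    assert(clamp(5, 0, 10) == 5);    /* in range: unchanged */
    assert(clamp(-3, 0, 10) == 0);   /* below range: clamped to lo */
    assert(clamp(42, 0, 10) == 10);  /* above range: clamped to hi */
    printf("all clamp() tests passed\n");
    return 0;
}
```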
Bug tracking software (often called issue tracking software) is a very good means of keeping track of what bugs exist and what tasks the different team members have. It is also good for testing: the alpha/beta testers of your project can communicate with the development team this way.
These simple things ensure that the project doesn't go out of control and that everyone works on the same version of the code. The continuous integration process helps when something goes terribly wrong, and it also discourages people from committing code that doesn't build to the main repository.
If you want to include a new feature that would take days to implement and it would block other people from building (and testing) the project, use the branches feature of your version control.
If that is not enough, you can set it up to do automated testing, too, if that is possible with the project in question.
Some more thoughts
The above list can look very heavyweight at first glance. I recommend that you follow it on an as-needed basis: start with version control and a bug tracker, then later set up the continuous integration server if you need it. (If it's a large project, you're gonna need it very soon.) Start writing unit tests for the most important parts; if that's not enough, write more of them.
Some useful links:
Continuous integration, Daily builds are your friends, Version control, Unit testing
Examples:
For version control, I tend to use Git for my personal projects nowadays. Subversion is also popular; for example, VisualSVN is quite easy to set up if you use a Windows server. As a client, TortoiseSVN works best for many people. Here is a comparison between Git and SVN.
For bug tracking software, Jira and Bugzilla are very popular. We also used Mantis at a previous workplace.
For continuous integration software, there is TeamCity, for one (CruiseControl and its .NET counterpart are also notable).
Answer to your question "who decides the main design of the project?"
Of course, that would be the lead developer.
In companies, the lead developer is the person who talks to the financial/marketing people about the project and decides the architecture according to the financial capability of the company, the planned features, the requirements from users, and the time that is available.
It is a complex task, and usually more than one person is involved. Sometimes members of the team are also asked to participate or brainstorm about the design of the entire project or of specific parts.
I'm a student as well, who completed a software engineering course recently where the entire semester consisted of a giant group project. Let me just start by saying we could have done with 3 people what it took 12 of us the whole semester to do. Working with people is a tough thing. Communication is key.
Definitely utilize a repository. Each person can remotely access all the code and add/delete/change anything. The best part about Subversion is that if someone breaks the code, you can revert to an earlier version and assess what went wrong from there. Communication is still key, though: know what your teammates are doing so that there are no conflicts. Don't sit on your code either; make quick, meaningful commits to the repository to be the most effective.
I'd also recommend a bug tracker, such as Redmine. You can set up accounts for everyone, assign people tasks with different priorities, and track whether people have taken care of certain problems or whether more have come up.
And, as has been said before, unit testing will help greatly. Best of luck! Hope this helped :-)
how programmers work together on a piece of software in a company
Developers never work as a team. Teams suck. Dilbert is funny not because he's a comical character like Goofy. He's funny because he's real and people recognize the situations he's in.
Generally it is good practice not to check build artifacts into the repository. The repository will contain the source tree, build configuration, etc - anything written by a human. Software engineers will check out a copy of their code onto their local filesystem and build it locally.
It is also good practice to have unit tests which are run as part of the build process. This way, a developer will know instantly if his changes have invalidated any of the unit tests, and will have the opportunity to fix them before checking in his changes.
You might like to look into the documentation for a version control system (one of Subversion, CVS, Git, etc) and for a build system (for example, in Java there are Ant and Maven).
The big things are:
A plan — If people don't know where they're going, they won't go anywhere. The start of any project therefore needs a few people (often the project graybeards) to get into a huddle and come up with a plan; the plan need not be very detailed, but it's still required.
Version control system — Without this, you aren't working together. You also need the firm commitment that if things aren't committed, they don't count. “Oh, it's in one of my sandboxes” is just a lame excuse.
Issue tracker — You can't keep track of these things by email folders. Should definitely be database-backed.
Notification system — People need to know when things are committed to code that they maintain or comments are made to bugs they are responsible for. Email can work for this, as can IRC (provided everyone uses it, of course).
Build system — It doesn't really matter how this happens, so long as with one action you can get a complete build of the current state of things, both of your development sandbox and of the main repository. The best option for this depends on what language(s) you're using.
Test suite — A test suite helps people avoid silly errors. It needs to be as easy to run as the build (being part of the build is good). Note that tests are only a crude substitute for correctness, but they're a heck of a lot better than nothing.
Finally, you need a willingness to work together toward fulfilling the plan. That's all too often the tough part.
There is no standard for the things you're asking about. Rather, there are conventions and these depend heavily on the size and maturity of the organization. If you're in a small organization, say a couple of programmers, then things will probably be somewhat informal with the individual developers doing coding, builds, and test.
In larger organizations, there may be a dedicated build engineer and process. This kind of organization will usually do a formal build on a regular basis, say once a day, using whatever source code is checked in. The process will also usually include BVT (Build Validation Tests) and perhaps some regression tests. Developers will check out the code from the repository, work on their own portion locally, then check it in.
In the largest organizations, like Microsoft or Google, they will have a completely dedicated group and full lab that will build on a more-or-less continual basis, making the results of each run available. These organizations have very formal processes and procedures in place about what gets checked in and when, what the code review processes are, etc.
There is no cookbook for software development, but in general the version control system should be the heart of your build system, even if you are working on a project where you are the only developer. Even in this case, being able to revert versions and read the version log is a very welcome help in fixing bugs. This is not the only feature of a version control system, but it alone justifies installing, configuring and maintaining one.
The build can be done either by each developer when adding new code, or periodically by a "build server". The latter approach requires more setup, but helps find build errors sooner.
The short answer - "It depends".
Currently, I'm working on a project by myself, so I'm the one who builds and uses VCS. I know of other places where teams work on a project together by (shudder) email, or big (5+) teams using VCS.
On that note, I highly recommend learning at least one VCS, and Joel Spolsky has a great introductory tutorial for Mercurial. Bazaar (my personal choice) is similar, and Git is the next nearest in terms of similarity, but probably more popular than either (at least ATM). After that you have SVN, which is pretty weak in comparison.
Actually, Joel talks about most of your questions - I'd recommend reading the 10 years of archives he has - it's all highly useful information, and most of it is pertinent to your current and near-future situation.
Proper programming is a deep thing that benefits greatly from experience. Pair programming is like running multiple processors of awareness: one partner can catch something the other overlooks, and as long as they are communicating, it can result in great progress.
First of all, teams work by using repositories (which can be a professional version control system, or just a bunch of directories considered the 'live' ones; a revision control system is the de facto standard, though). How the project is managed depends on how you work (waterfall, agile, etc.). If you work in iterations, you build components/plugins/modules/libraries which are self-contained, and you perform unit testing until each is signed off as finished. Working as a team means you don't work on the entire project everywhere at the same time; instead, you get a task to perform within one realm of the project. On some occasions you have to fix code that isn't yours, but that usually comes up only when strange behavior occurs. Basically, you do the testing of the parts you develop.
Let me give you an example. You are part of a team of construction workers. The architect comes with a plan for a building, the foreman looks at what is needed for construction and then hires the workers. The mason builds the walls, checks them for strength, and mortars them up nicely. The electrician does all the wiring inside the building so electricity can flow. Each man has his own job. Sometimes the electrician might want to discuss with the mason whether certain walls can be carved into, but always in conjunction with the foreman.
I hope this is some help for you!
Typically, the source control system contains the source code and not the binaries. If you want to build and run it, you check the code out and build it on your local machine.
Some places run nightly builds to make sure everything works. There may even be some automated tests that are run server-side. If the build or anything else fails, someone is notified automatically.
A good introduction to a method of using source control is Eric Sink's Source Control HOWTO http://www.ericsink.com/scm/source_control.html
In his examples he uses SourceGear Vault since he wrote it and all, but the methods can be applied to other version control systems.
This is one more good reason to look into open source projects.
The lead developers who work on big open source projects (like Chromium, Mozilla Firefox, MySQL, or popular GNU software) are professionals. They have a lot of experience, and these projects have evolved over years with ideas from hundreds of such professionals.
Everything others mentioned in their answers (plan, version control system, issue tracker, notification system, build system, test suite) can be found in these open source projects.
If you really want hands-on experience, I strongly suggest going through some popular and big open source projects, then getting the source of any project (using version control) and building it yourself.
PS: I'm also a student, and getting involved in open source projects is the best thing I ever did in my life. Trust me, you'll feel the same.

Are some logical paths inherently untestable?

I have been using TDD to drive the project I am currently working on, and the results have been fairly satisfying. I did run into a problem (described here; still without a solution or any suggestions!) where some aspects of a particular method may not be testable (as in my example: briefly, I want to be able to handle a ManagementException which has a specific ErrorCode, but it doesn't seem possible for me to set up a test which throws a ManagementException like that).
So, how does one deal with that? Do we simply accept the fact that some logical paths are untestable (because of the framework that we are working in or limitations in the testing framework(s) that are currently available)?
Some designs do not lend themselves to testability, especially ones that do not have testability as one of the design goals. Generally, TDDed designs do not fall into this category.
To answer your original question, I've posted a response which involves using reflection to slot in the requested error code. However this may not work in all situations and is not a general solution.
The tradeoff here is the effort of writing the test vs. the benefit of having that particular piece of code under automated tests. If you feel that the cost-to-benefit ratio is huge and the probability of failure is minuscule, you may write it up as an exceptional manual test, leave a comment for future developers, and verify it manually for now. I'd say be pragmatic: if you've spent 30-40 minutes of a couple of developers' brain time trying to get it under test, maybe you need to step back and rethink your strategy. Have a look at Michael Feathers' 'Working Effectively with Legacy Code' for some suggestions on overcoming barriers to testability.
I don't think you could say that anything is logically untestable, but you will certainly find areas of code where the effort required to test them would be better spent elsewhere.
This is a great question, and one which I also found myself contemplating recently.
So first, I wouldn't say some logical paths are "untestable"; at most they are probably very hard to test with automated unit testing. You could probably still test most of these problematic paths with some serious heavy-duty system tests.
Consider this - anything you test can be thought to run inside a virtual machine under your control and you can (theoretically) simulate every aspect of its operation in order to test your software. Whether or not this is practical for most applications is another question.
I've just tried answering your original question (and collided in midflight with somebody else saying the same thing more concisely, or most of it at least ;-). Anyway, there surely exist frameworks that are way too rigid (thanks to private and friends), and if you can't use introspection to get around that (despite having done all the proper incantations), then you're using a language that's too rigid as well as a framework that is.
I'd be astonished if that was the case in an overall system that supports dynamic languages (as .NET now does) such as IronRuby and IronPython -- maybe if C# won't let you go around accessibility limitations via introspection, the dynamic languages could serve.
That said, it is surely possible for the overall environment to be designed so badly and so rigidly as to make it impossible to unit-test certain things -- even though I'm not entirely convinced that this is the case in your current situation.
Some things cannot be tested in an automated unit test because the language/framework/situation is just not open to it. The way to handle that is to reduce that area as much as possible and keep it so simple that it is highly unlikely to be a source of bugs or behavior changes later on.
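One common way to shrink that area, sketched below in C with invented names: route the hard-to-fake call through a function-pointer "seam", so a test can substitute a fake that produces exactly the error condition you could not trigger otherwise. This is only an illustration of the technique, not the asker's actual framework:

```c
#include <stdio.h>

/* The untestable call (say, a framework function we cannot force to
   fail) is reached only through this seam, so a test can plug in a
   fake that returns any error code it likes. */
typedef int (*get_status_fn)(void);

/* production implementation wrapping the real (hypothetical) framework call */
static int real_get_status(void) { return 0; /* framework call goes here */ }

/* the logic we actually want under test */
static const char *describe(get_status_fn get_status)
{
    return get_status() == 0 ? "ok" : "management error";
}

/* test double simulating the error path we could not trigger otherwise */
static int fake_failing_status(void) { return 7; }

int main(void)
{
    printf("%s\n", describe(real_get_status));     /* prints "ok" */
    printf("%s\n", describe(fake_failing_status)); /* prints "management error" */
    return 0;
}
```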
There is also more to testing than just unit testing, and those areas (such as acceptance testing, QA, etc.) are not covered by unit tests.

Simplifying algorithm testing for researchers

I work in a group that does a large mix of research development and full shipping code.
Half the time I develop processes that run on our real-time system (somewhere between soft real-time & hard real-time; medium real-time?).
The other half I write or optimize processes for our researchers who don't necessarily care about the code at all.
Currently I'm working on a process which I have to fork into two different branches.
There is a research version for one group, and a production version that will need to occasionally be merged with the research code to get the latest and greatest into production.
To test these processes you need to set up a semi-complicated testing environment that will send the data we analyze to the process at the correct time (it's a real-time system).
I was thinking about how I could make the
Idea
Implement
Test
GOTO #1
cycle as easy, fast, and pain-free as possible for my colleagues.
One idea I had was to embed a scripting language inside these long-running processes, so that as the process runs they could tweak the actual algorithm and its parameters.
Off the bat I looked at embedding:
Lua (useful guide)
Python (useful guide)
These both seem doable and might actually fully solve the given problem.
Any other bright ideas out there?
Recompiling after a 1-2 line change, redeploying to the test environment and restarting just sucks.
The system is fairly complicated and hopefully I explained it half decently.
If you can change enough of the program through a script to be useful without a full recompile, maybe you should think about breaking the system up into smaller parts. You could have a "server" that handles data loading etc. and then the client code that does the actual processing. Each time the system loads new data, it could check whether the client code has been recompiled and, if so, use the new version.
I think there would be a couple of advantages here, the largest of which is that the whole system would be much less complex. Now you're working in one language instead of two, and there is less of a chance that people will mess things up when switching from Python or Lua mode to C++ mode in their heads. By embedding another language in the system you also run the risk of becoming dependent on it: if you use Python or Lua to tweak the program, those languages either become a dependency when it's time to deploy, or you need to back things out to C++. And if you do port things back to C++, there's another chance for bugs to crop up during the switch.
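For what it's worth, one way to implement the "pick up freshly recompiled client code" idea on a POSIX system is a plugin loaded with dlopen(). Everything here (the algo.so library name, the process() entry point) is invented for illustration:

```c
/* Illustrative POSIX sketch of reloading recompiled processing code
   without restarting the host: the algorithm lives in algo.so and
   exposes process(); both names are invented. Build the host with -ldl
   where required. */
#include <dlfcn.h>
#include <stdio.h>

typedef double (*process_fn)(double sample);

int main(void)
{
    for (int batch = 0; batch < 3; ++batch) {
        void *plugin = dlopen("./algo.so", RTLD_NOW);  /* reload each batch */
        if (!plugin) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }
        process_fn process = (process_fn)dlsym(plugin, "process");
        if (process)
            printf("batch %d -> %f\n", batch, process(42.0));
        dlclose(plugin);  /* next iteration picks up a rebuilt algo.so */
    }
    return 0;
}
```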
Embedding Lua is much easier than embedding Python.
Lua was designed from the start to be embedded; Python's embeddability was grafted on after the fact.
Lua is about 20x smaller and simpler than Python.
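To give a feel for how small the embedding surface is, here is a minimal C sketch against the Lua 5.x C API. The script name and the "gain" parameter are invented, error handling is minimal, and a real real-time process would reload the script far less naively:

```c
/* Minimal Lua embedding sketch: re-reads tune.lua each cycle so a
   researcher can tweak parameters while the process runs. The script
   name and the "gain" global are hypothetical; link against the Lua
   library when building. */
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
#include <stdio.h>

int main(void)
{
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);

    for (int cycle = 0; cycle < 3; ++cycle) {
        if (luaL_dofile(L, "tune.lua") != 0) {   /* reload tweakable script */
            fprintf(stderr, "lua error: %s\n", lua_tostring(L, -1));
            lua_pop(L, 1);
            continue;
        }
        lua_getglobal(L, "gain");                /* read a tuned parameter */
        double gain = lua_tonumber(L, -1);
        lua_pop(L, 1);
        printf("cycle %d: gain = %f\n", cycle, gain);
    }

    lua_close(L);
    return 0;
}
```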
You don't say much about your build process, but building and testing can be simplified significantly by using a really powerful version of make. I use Andrew Hume's mk, but you would probably be even better off investing the time to master Glenn Fowler's nmake, which can add dependencies on the fly and eliminate the need for a separate configuration step. I don't ordinarily recommend nmake because it is somewhat complicated, but it is very clear that Fowler and his group have built into nmake solutions for lots of scaling and portability problems. For your particular situation, it may be worth the effort to master it.
Not sure I understand your system, but if the build and deployment are too complicated, maybe you could automate them? If deployment were completely automatic, would that solve the problem?
I also don't understand how a scripting language would solve the problem: if you change your algorithm, you still need to restart the calculation from the beginning, don't you?
It kind of sounds like what you need is CruiseControl or something similar; every time you touch the baseline code, it rebuilds and reruns the tests.

Doing a run-around of existing application to make database changes, good idea?

We have an existing "legacy" app written in C++/PowerBuilder running on Unix with its own Sybase databases. For organizational reasons (existing apps have to go through a lot of red tape to be modified) and code reasons (no refactoring has been done in years, so the code is spaghetti), it's difficult to get modifications made to this application. Hence I am considering writing a new, modern (maybe Grails-based) web app to do some "admin"-type things directly against the database, for example to add users or to add "constraint rows".
What does the software community think of this approach of doing a run-around of the existing app like this? Good idea? Tips and hints?
There is always a debate between a single monolithic app and several more focused apps. I don't think it's necessarily a bad idea to separate things: you make things more modular, reduce dependencies, etc. The thing to avoid is duplication of functionality. If you split off an administration app, make sure to remove that functionality from the old app, or else you will have an unmaintained set of administration tools that will likely come back to haunt you.
Good idea? No.
Sometimes necessary? Yes.
Living in a world where you sometimes have to do things you know aren't a good idea? Priceless.
In general, you should always follow best practices. For everything else, there's kludges.
See this, from Joel, first!
Update: I had somewhat misconstrued the question and thought that more was being rewritten.
My perspective on your suggested "utility" system is not nearly so reserved as my link to Joel's article would suggest. Indeed, I would heartily recommend that you take this approach, for a number of reasons.
First, this may well be the fastest route to your desired outcome since the old code is so difficult to work with.
Second, this gives you experience with a new development technology and does so in the context of your existing work - this is a real advantage.
Third, I took this approach years ago when transitioning an application from C++ to Delphi. In time, the Delphi app grew to be so capable that a complete leap onto that platform became possible. At no point were users without the functionality that they already knew because the old app wasn't phased out until the replacement functionality had been proven. However, it is at this stage that you'll want to heed Joel's warnings: remember that some of the "messiness" you see is actually knowledge embodied in the old code.
Good idea? That depends on how well the database is documented and/or understood. Make a mistake about some implicit rule, relation, or constraint implemented at the application level, and your legacy app may end up doing cartwheels down the aisle.
Take a hypothetical example. Let's say adding a user with the legacy system adds records to the following tables:
app_users
app_constraints
app_permissions
user_address
Let's assume you catch the first three and miss the fourth. It can't be important, right? But what if, in the 50 places the app uses app_users, one place does an inner join to user_address? (And why not? The app's writer knew that he always wrote a record to user_address!) The newly added user suddenly disappears from the application's view, a condition that "could never happen" according to the original coder, and the application coughs up a hairball. Orders can't be taken. Production stops. A VP puts his new cardiac bypass surgery to the test.
Who gets blamed? Not the long-gone developer who should have coded for more exceptions. Not the guys who set up the red tape to control change. The guy that did an end run around those controls.
Good luck,
Terry.

Improving and publishing an application. Need some advice

Last term (August - December 2008) some classmates and I wrote an application in C++. Nothing spectacular: it is an ORM for SQLite3. We implemented some things like reflection to make it work and spare the end user the ugly stuff. Personally, I think we did a nice job, and our ORM could actually be useful to someone (even though it's written specifically for SQLite3, it's easily adaptable to other databases).
Consequently, I've come to the conclusion that it should be published somewhere (SourceForge, most likely) as an open source project. But, as it was a term project, there are some things that need to be addressed before doing that. Namely, it has some memory leaks that should be fixed, and some parts of the code could be refactored to make everyone's life easier in the future.
I would like to know more experienced C++ programmers' opinions on some issues:
Is it worth rewriting some parts to apply new technologies (for example, Boost)?
Should our ORM be adapted to the latest C++ standard? Is there any benefit in doing this?
How will we know when our code is ready for release?
What are the chances that this ORM will be forgotten in the mists of the internet? (i.e. is it worth publishing beyond personal pride as a programmer?)
Right now I can't think of many more questions, but I would like to read about similar experiences.
EDIT: I should probably translate my code + comments into English, right? (note to self)
Thanks in advance.
I guess I am "more experienced" with regard to your particular question. I co-developed an open source web application language & template system a lot like ColdFusion back in the early days of web design before Java or ASP were around. You can still see it at http://www.steelblue.com/ if you are interested. It's still used at the company I was at when it was developed, but I don't think anywhere else.
What I found is that unless you are already well connected and people are watching what you are doing, getting people to use your open source code is just about as hard as selling someone your closed source program. You really need to advocate for your project, and it should have some kind of unique selling proposition that distinguishes it from the competition.
So, that's the unsolicited advice. Here are some specific answers to the questions you had...all purely my opinion, of course.
I wouldn't rewrite any code unless you have a feature you want to put in. That feature might be compatibility with specific platforms or compilers. It might be support for a new DB datatype, or smarter indices, or whatever. If you are going to put more serious work into the application, think about a roadmap of what you can realistically accomplish in the next iteration and which choices will make the app the "most better" at the end of your cycle.
Release the code as soon as it is usable for a specific purpose, any purpose. Two reasons. First off, there might be someone who wants it for that purpose right now. If it's not available, they will use something else. Also, if it's open source, they might contribute back to the project. Second, the sooner you find out how much people want to use the code, the better. Either it will be more popular than you expect and you can get excited about continuing the development....or....you will find that no one is even visiting your web page to see what you've got. In either case, better to know sooner than later what people really want from your project so you can take that into account when planning new releases.
About the "forgotten into the mists." I think most projects are. I don't want to be a downer, but looking at Wikipedia, there were 5 C++ ORM tools popular enough to get mention and they were all open source. As I said above, unless you can sell your idea to people, they are going to go with another proven open source solution. For someone to choose you over them, three things have to happen: 1. They need a feature you have that the others don't. 2. They find your project web site and it demonstrates the superiority of your code. 3. They trust your code enough to give it a shot.
On the other hand, if you are in this for the long haul and want to continue development, things get easier over time. Eventually the project will get all the basics covered and you can start developing those new features that aren't in the other solutions. Also, the longer you are in active development, the more trustworthy the project will seem. Finally, you will get more experience in the niche. Two years from now you will be better positioned to say where your effort will have the most impact on bettering the project.
A final thought: If you are enjoying it, learning from it, and it's not getting in the way of you keeping food on the table, it's a good use of your time.
Good luck!
-Al
Regarding the open source part:
If you really want to make it an open source project, you really should publish it regardless of its current state - fully working and debugged, or half working and full of memory leaks.
Just, if its state is bad, make sure to document that, and give it a suitable version number (less than one?). Then others may view your code, suggest improvements, join your team, etc.
My--rather random--thoughts on the matter (in the order I think is most important):
How will we know when our code is ready for release?
Like Liran Orevi said: if you're going open source, release early. Document it reasonably well, and take the time to provide a road map of planned or hoped-for future improvements (these are an invitation for people to help you, so note which ones have no one working on them).
Is it worth rewriting some parts to apply new technologies (for example, boost).
Should our ORM be adapted to latest C++ standard? Is there any benefit in doing this?
SQLite relies on a fairly limited base, so maybe you don't want your tool to demand a much heavier environment. If the code is not currently a tangled and unmaintainable mess, you might want to avoid Boost and the newest frills. Once you have a stable release (1.0 at least) you can start thinking about the improvements that can be made for version 2.
What are the chances that this ORM will be forgotten into the mists of the internet? (i.e is it worth publishing it beyond personal pride as a programmer?)
Most things end up in the big /dev/null in the sky, and there is only one way to find out... If it goes anywhere at all, you win. If it doesn't, it was a modest investment, and maybe you learned something while you were at it.