Requirements Sign-Off - ms-solution-framework

Where do you sign off requirements in MSF? Do you finish writing the functional specs in the planning phase and then sign off with the client right before starting development?
NOTE: I'm working on a fixed-scope, fixed-price contract, so the agile model will not work. Does that mean an iterative and incremental approach will not work either? In such a scenario, can't I kick off development before finishing the planning phase?

If it is a fixed scope, fixed price contract, then surely you want agreement on a finished functional specification so that everyone is happy with what the end outcome will be.
The last thing you would want to do is start developing functionality that is neither agreed upon nor necessary as part of the fixed price and scope, as that will simply be an inefficient use of time.
By the way, what is MSF? Microsoft Solutions Framework?

As you refer to Functional Specs, plural, you could always look to sign some of them off so development on those areas of functionality can begin and then work to finish the others. This avoids potential work on areas that will not be part of final delivery while allowing work to commence.
One of the Foundation Principles of MSF is
Stay agile, expect change

I certainly wouldn't start developing before the customer signs the contract. And I can't imagine how they could do that until you give them a price. And price is dependent on time. And time is dependent on the features that the customer wants to pay for (you estimate the work). So, that leads me to conclude that you would have to have the specs done before doing any work at all.
Which is of course why the concept of fixed price and scope has been proven time and time again to be inefficient.

Yes, you certainly want to get sign-off (or project engagement) prior to starting the project in earnest (even before taking a deposit).
Nearly all the projects I do are fixed-price.
On my proposal (which is a high-level listing of deliverables), I have this:
You can get more info about sign-off/project engagement from this article: Writing Short, Easy to Read Fee Proposals (Part 2)
--LM

Related

django-shop vs Satchless?

Can anyone compare these two e-commerce frameworks?
I found this link, but I am not sure how outdated it might be. It mentioned that Satchless was still in an early stage. And, at least according to this post from last year, django-shop was not production-ready. Is it production-ready now?
What I need is actually quite simple. I only need a B2C website (i.e. only me selling products to customers). The desired features include anonymous checkout, calculable shipping cost and tax, a friendly product-returns interface, and PayPal support. The code should be easy to read and customizable (thus I will avoid Satchmo).
For Satchless: is it based on Satchmo, or a rewrite?
For django-shop: I noticed there is a giant ecosystem around django-shop. That implies django-shop is highly customizable, but it might also imply inconsistent code design and implementation quality. And it looks like even PayPal checkout needs a third-party extension?
Thanks again, I appreciate all your input.
Satchless isn't a rewrite of Satchmo; its name is simply a reaction to the perceived poor quality of Satchmo's codebase. It's designed to be very minimal, but extensible. It's changed a fair bit over the last couple of years, so it might not be a particularly stable platform choice (i.e. it's likely the API will continue to change in big ways).
There's also Oscar, which is more opinionated and feature-rich, but still designed to be extensible: https://github.com/tangentlabs/django-oscar
Disclaimer: I've worked with the Satchless guys (not on Satchless itself) and on Oscar directly.
Your requirements sound like Mezzanine + Cartridge would be a better solution for you.

What is a proper way to approach the following scheduling task

I am thinking about the following scheduling problem:
I have X people.
I have Y meeting slots with Z meeting roles available in every meeting.
For some roles, the same person may combine two of them in a single meeting, but most are one person = one role.
For each person x in X, I know a set of facts about them:
a) The last date they attended the meeting and had a specific role (historical);
b) Their availability for any meeting y in Y;
c) Their specific preference for the roles z in Z or a set of roles (no specific dates) for the group of meetings.
I'd like to build a scheduler with the following objectives in mind:
a) All meeting roles are filled.
b) Preferences are accommodated if possible;
c) Distribution of people / roles should be uniform (i.e. if one person is scheduled every meeting and other just for one meeting once in a while -- it's unacceptable; if one person is scheduled for the same role over, and over, and over again -- it's unacceptable).
Now, I have a gut feeling that the task is not easy at all :), so my specific questions are:
What language would be better suited for the task? (Somehow I feel Prolog can deal with it, but I am not entirely sure.)
What is the proper approach to solving this task, and how close can I get to my objectives above?
Any good read on the kind of problem I am looking to solve?
Thank you!
P.S. If you are curious, the use case is scheduling a roster for a set of Toastmasters meetings (example). (I am too lazy to do it by hand and I'd like the computer to help me with this task, at least partially.)
A rule engine, like Drools Expert or Prolog, is good for defining the constraints (= score function). However, it's terrible at finding the best solution.
Since your problem is probably NP-complete (especially if the meetings need to be put into a timeslot and/or one person can't attend two meetings at the same time), you need to use a planning optimization algorithm on top of that, such as construction heuristics and metaheuristics. Take a look at the curriculum course example in Drools Planner (Java, open source, ASL).
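To make the shape of the problem concrete, here is a minimal greedy sketch in Python. The data layout and function names are my own invention, and a single greedy pass will not always satisfy all three objectives; in practice a metaheuristic would take a starting solution like this and iteratively repair it.

```python
from collections import defaultdict

def schedule(meetings, roles, availability, preferences=None):
    """Greedy sketch: fill each (meeting, role) slot with the available
    person who has the fewest assignments so far, preferring people who
    asked for that role."""
    preferences = preferences or {}
    counts = defaultdict(int)            # total assignments per person
    plan = {}                            # (meeting, role) -> person or None
    for m in meetings:
        taken = set()                    # people already used in this meeting
        for r in roles:
            candidates = [p for p in availability
                          if m in availability[p] and p not in taken]
            if not candidates:
                plan[(m, r)] = None      # role unfilled: objective (a) violated
                continue
            # objective (c): fewest assignments first; objective (b): break
            # ties in favour of people who prefer this role
            best = min(candidates,
                       key=lambda p: (counts[p], r not in preferences.get(p, set())))
            plan[(m, r)] = best
            counts[best] += 1
            taken.add(best)
    return plan
```

With three people all available for two meetings and two roles, this fills every slot and keeps the assignment counts within one of each other, which is the kind of balance objective (c) asks for.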
From my point of view, the language you are going to program in doesn't really matter that much: for problems like this, the choice of language is more a matter of personal preference than exact science. If you like or want to learn Python, use that. If you "feel like" Prolog today, use that.
What will be a factor in your choice, though, is how you want to preserve and present your data. From your question, it's clear that you need the following:
A database (or at least, a persistent resource) to store your available participants and roles, past and future meetings storing the roles for every participant, and some way to schedule availability.
Some way to present your data (command line, GUI, or website).
Some business logic that describes the way of assigning roles, criteria for the attendance and such.
You will want to use third-party components for most of these, since your time is best spent on the added value of your product; creating a shiny ORM or GUI toolkit is not your goal here. So the programming language you choose should have proper support for these items (especially the first two). I can't speak for Prolog, but Python has you fully covered in these areas. I think it goes beyond the scope of this question to suggest specific toolkits, so I'll leave it at that for now.
After this step, you analyze your problem, which you seem to have done quite nicely already. So, start implementing it. To be able to verify your specific use cases, it sounds like you could benefit from Test- or Behavior-Driven Development, so you may want to read up on that.
For learning the language, just search StackOverflow for "[language] tutorial": there are already plenty of answers linking to very nice resources for getting started with any language you will choose.
Final advice: perseverance is the hardest part, so try to set yourself some goals or milestones, or try to involve other people in one way or another. That way you'll improve your chances of following through with creating a nice piece of software.
Even though I'm a Python fan, I'd heartily suggest Prolog for this task. I'm familiar with Prolog, and this is definitely solved more nicely with Prolog. But it depends on how you will use the program. It's your choice: decide based on whether the installation of Python or Prolog is easier for you (if you just run it on your local PC, it doesn't matter much, I guess), or on other requirements you have.
It's fairly simple with Prolog, if you know Prolog. Once you've learned it, you can solve this with some thought and without much trouble, I guess (if you really understood Prolog!).
Basically, you should start with Prolog, of course. I'd suggest using SWI-Prolog; it's one of the most commonly used Prolog implementations. There is also a nice tutorial for it: http://www.learnprolognow.org/
It seems to me, though I'm not 100% sure, that you are not familiar with Prolog yet. You'll need time to learn Prolog first, so it also depends on how soon you need your program. It's possible to get through the tutorial in less than a month, as far as I remember. Of course this depends heavily on how much time you invest per day; you can do it in less or even more time.
Prolog is based on rules. Each of your requirements can be expressed as a rule. Once you have your set of rules, you can ask which combination (of persons and meeting roles) conforms to all those rules. For the historical data of the different persons, you could use a small database.
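The generate-and-test style described above can be sketched in a few lines of Python as well (the names and the single availability rule here are illustrative, not from the question): enumerate candidate assignments, then keep only those for which every rule holds.

```python
from itertools import permutations

people = ["ann", "bob", "cat"]
roles = ["chair", "timer"]
available = {"ann": True, "bob": True, "cat": False}

def conforms(assignment):
    # each requirement becomes one boolean rule over a candidate assignment;
    # here the only rule is "every assigned person must be available"
    return all(available[person] for person in assignment.values())

# generate every way to fill the two roles, then test each against the rules
solutions = [dict(zip(roles, perm))
             for perm in permutations(people, 2)
             if conforms(dict(zip(roles, perm)))]
```

In Prolog the enumeration and testing happen implicitly through backtracking; here they are spelled out, which is exactly why the Prolog version tends to read more declaratively.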
This sounds like an optimization problem, and I agree with Geoffrey that it would be NP-complete. I recently developed a scheduling algorithm for a university that does final exam scheduling. I used a genetic algorithm with domain-specific heuristics to solve that problem. My implementation performed nicely with 3000+ students and 500 courses; it took about 2 hours to find a near-optimal solution.
I agree with the people who suggest Prolog for this task; I would suggest taking a look at ECLiPSe (besides being a Prolog implementation, it is a constraint programming language with more powerful problem-solving capabilities than plain Prolog).
ECLiPSe now has a very nice introduction, with many examples and very much to the point, with a free PDF, written by Antoni Niederlinski: http://www.anclp.pl/
Among the examples on the ECLiPSe site, I found the following, which seems relevant: http://eclipseclp.org/examples/roster.ecl.txt
ECLiPSe is thoroughly documented and, according to this documentation, can also be integrated with C++/Java.

Reducing defect injection rates in large software development projects

In most software projects, defects originate from requirements, design, coding and defect corrections. From my experience the majority of defects originate from the coding phase.
I am interested in finding out what practical approaches software developers use to reduce defect injection rates.
I have seen the following approaches used, with varying levels of success and associated cost:
code inspections
unit tests
static code analysis tools
use of programming style
pair programming
In my experience it has been the fault of the process, not the developers, that permits defects. See They Write the Right Stuff on how the process affects bugs.
Competitive Testing
Software developers should aspire to prevent testers from finding issues with the software they have written. Testers should be rewarded (does not have to be financial) for finding issues with software.
Sign Off
Put a person in charge of the software who has a vested interest in making sure the software is devoid of issues. The software is not shipped until that person is satisfied.
Requirements
Avoid changing requirements. Get time estimates from developers for how long it will take to implement the requirements. If the time does not match the required delivery schedule, do not hire more developers. Instead, eliminate some features.
Task Switching
Allow developers to complete the task they are working on before assigning them to another. After coming back to a task, much time is spent getting reacquainted with where the task was abandoned and what remains to complete it. Along the way, certain technical details can be missed.
Metrics
Gather as many metrics as you can: lines of code per method, per class, dependency relationships, and others.
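Most of these can be collected automatically. As a minimal sketch of one such metric, here is lines-of-code per function computed with Python's standard ast module (the sample source is invented):

```python
import ast

source = '''
def short():
    return 1

def longer(x):
    y = x + 1
    y *= 2
    return y
'''

# parse the source and measure each function's span in lines
tree = ast.parse(source)
loc = {node.name: node.end_lineno - node.lineno + 1
       for node in ast.walk(tree)
       if isinstance(node, ast.FunctionDef)}
```

Running the same measurement over a whole codebase gives the per-method size distribution mentioned above; tools like static analyzers do essentially this at scale.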
Standards
Ensure everyone is adhering to corporate standards, including:
Source code formatting. This can be automated, and is not a discussion.
Naming conventions (variables, database entities, URLs, and such). Use tools when possible, and weekly code reviews to enforce.
Code must compile without warnings. Note and review all exceptions.
Consistent (re)use of APIs, both internally and externally developed.
Independent Review
Hire a third-party to perform code reviews.
Competent Programmers
Hire the best programmers you can afford. Let go of the programmers who shirk corporate standards.
Disseminate Information
Hold review sessions where developers can share (with the entire team) their latest changes to the framework(s). Allow them freedom to deprecate old portions of the code in favour of superior methods.
Task Tracking
Have developers log how long each task has taken them (in 15-minute increments). This is not to be used to measure performance, and it must be stressed that it has no relation to reviews or salary. It is simply a measure of how long certain technical tasks take to implement. From there you can see, generally, how much time is being spent on different aspects of the system. This will allow you to change focus, if necessary.
Evaluate the Process
If many issues are still finding their way into the software, consider reevaluating the process with which the software is being developed. Metrics will help pinpoint the areas that need to be addressed.
First, bugs injected at requirements time are far, far more costly than coding bugs. A zero-value requirement, correctly implemented, is a piece of zero-value, unused (or unusable) functionality.
Two things reduce the incidence of bugs:
Agility. You are less likely to inject bugs at every step (requirements, design, etc.) if you aren't doing as much in each step. If you try to write all the requirements, you will make terrible mistakes. If you try to write requirements for the next sprint, odds are better that you will get those few requirements correct.
TDD. You are less likely to struggle with bad requirements or bad design if you have to write a test first. If you can't figure out what you're testing, you have a requirements bug. Stop coding. Step away from the keyboard.
If you can't figure out how to test it, you have a design bug. Again, stop coding. Fix the design so it's testable. Then move forward.
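A minimal illustration of how writing the test first flushes out a requirements bug (the requirement and numbers here are invented): a spec saying "orders over $100 get a 10% discount" leaves the boundary and the rounding unstated, and the very first assertions force those decisions into the open.

```python
# Pin down the ambiguities: strictly over $100.00, amounts in integer
# cents, discount computed with floor division.
def discounted_total(cents):
    return cents - cents // 10 if cents > 10000 else cents

def test_discount():
    assert discounted_total(10000) == 10000   # exactly $100: no discount
    assert discounted_total(10001) == 9001    # over $100: 10% off
```

If you had found yourself unable to write those two assertions, that would have been the requirements bug showing itself before a line of production code existed.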
I think defect injection can come from a lot of sources, and it varies from environment to environment.
You can use a lot of best practices like TDD, DDD, pair programming, continuous integration, etc. But you will never be free of bugs, because bugs are created by people, not by the processes themselves.
But IMO, using a bug-tracking tool could give you hints about which problems are most recurrent. From there, you can start attacking your main problem.
The majority of defects may occur during coding, but the impact of coding defects is generally much lower than the impact of errors made during the process of understanding requirements and while developing a resilient architecture. Thus the use of short executable-producing iterations focused on
identifying and correcting ambiguous, imprecise, or just plain incorrect requirements
exposing a suboptimal and/or brittle architecture
can save enormous amounts of time and collective stomach lining in a project of significant scope.
Unit testing, scenario testing, and static analysis tools can detect defects after they are created, but to reduce the number of defects created in the first place, reduce the number of interruptions that developers must endure:
reduce, eliminate, and/or consolidate meetings
provide an interrupt-free working environment
allow developers to correct their defects when they find them (while the responsible code is still fresh in their mind) rather than defer them to a later time when context must be re-established
Step 1 - Understand where your defects are being injected.
Use a technique such as Orthogonal Defect Classification (ODC) to measure when in the software lifecycle defects are injected and when they are detected. Once you know when the defects are injected and have identified when they were discovered you can start to understand the gaps in your process with respect to defect injection and removal.
Step 2 - Develop defect "filters" and adapt your process
Once you know when defects are being injected, you can devise strategies to prevent them from entering the system. Different strategies are effective at different points in the software lifecycle. For example, static analysis tools don't help with defects that originated in the requirements; instead, you should be looking into some kind of peer review or inspection, maybe even changing the way requirements are specified so you can use automated analysis or achieve a more meaningful sign-off, etc.
Generally I use a combination of inspection, static analysis, and testing (many different kinds) to filter as many bugs as I can, as soon after they are injected as I am able.
In addition:
Project knowledge base. It says how we do activity X (like 'form validation') in this project. This allows unification and reuse of tested solutions, preventing bugs injected when reinventing the wheel.
Production bug monitoring. When a production bug occurs, it is investigated. Why was this bug not caught? How can we ensure this won't happen again? Then we change the process accordingly.

Options to Common Criteria

There is some criticism of the international Common Criteria, like [Under-attack].
What are in your opinion the pros and cons of developing IT products with CC?
I'm a Common Criteria evaluator for the BSI (Germany) and NIAP (USA) schemes. I've had a small amount of experience, but I think I'm qualified enough to answer this question.
Pros:
The first and foremost plus to developing with CC is to be able to do business with the US government. I die inside every time I say this to someone, because I'd really like the top reason to be security. But alas...
Secondly, it enormously increases the quality of your design documentation because much of the CC revolves around analyzing documentation, and good docs are a requirement. Find a good lab, and they may do all of that for you.
It will make you aware of security questions you never thought about, like how does the customer know the product I shipped to them is really from me and not someone impersonating me?
Lastly, sadly, it will improve the technical security of the product. Get a good lab, and you will leave with a very strong secure product and a certification. Get a bad one and you'll just leave with a certification.
Cons:
Extremely expensive. Unless you have deep enough coffers to absorb a hit of hundreds of thousands of dollars, you're not cut out for CC. However, if you intend to work with the federal government, you may get them to pay your way if they really like your product.
Extremely time-consuming. Our evaluations last 9 to 16 months, depending on the complexity of the product and the evaluation assurance level. To give you an idea, a general Linux distribution at EAL 4 could take a full year to complete.
The certificate only applies to an exact version number of your product. Make an update and the cert is invalid. (However, it's up to the requisitioning officer in the DoD whether to accept the patched product, so not all hope is lost.)
Its value is almost nil anywhere outside the federal market.
Depending on the scheme you pick, you'll be facing certain kinds of politics, lack of resources, and extra requirements. Best thing to do is find a good lab who will help you through everything.
Note that I'm giving you pros and cons from the developer's point of view. There is a different set of pros and cons when talking about how the criteria are technically set up and how effective they are.

Doing a run-around of existing application to make database changes, good idea?

We have an existing "legacy" app written in C++/PowerBuilder running on Unix with its own Sybase databases. For complex organizational reasons (existing apps have to go through a lot of red tape to be modified) and code reasons (no refactoring has been done in years, so the code is spaghetti), it's difficult to get modifications made to this application. Hence I am considering writing a new, modern (maybe Grails-based) web app to do some "admin"-type things directly against the database, for example to add users or to add "constraint rows".
What does the software community think of this approach of doing a run-around of the existing app like this? Good idea? Tips and hints?
There is always a debate between a single monolithic app and several more focused apps. I don't think it's necessarily a bad idea to separate things: you make things more modular, reduce dependencies, etc. The thing to avoid is duplication of functionality. If you split off an administration app, make sure to remove that functionality from the old app, or else you will have an unmaintained set of administration tools that will likely come back to haunt you.
Good idea? No.
Sometimes necessary? Yes.
Living in a world where you sometimes have to do things you know aren't a good idea? Priceless.
In general, you should always follow best practices. For everything else, there's kludges.
See this, from Joel, first!
Update: I had somewhat misconstrued the question and thought that more was being rewritten.
My perspective on your suggested "utility" system is not nearly so reserved as would be suggested by my link to Joel's article. Indeed, I would heartily recommend that you take this approach for a number of reasons.
First, this may well be the fastest route to your desired outcome since the old code is so difficult to work with.
Second, this gives you experience with a new development technology and does so in the context of your existing work - this is a real advantage.
Third, I took this approach years ago when transitioning an application from C++ to Delphi. In time, the Delphi app grew to be so capable that a complete leap onto that platform became possible. At no point were users without the functionality that they already knew because the old app wasn't phased out until the replacement functionality had been proven. However, it is at this stage that you'll want to heed Joel's warnings: remember that some of the "messiness" you see is actually knowledge embodied in the old code.
Good idea? That depends on how well the database is documented and/or understood. Make a mistake about some implicit application-level implemented rule, relation, or constraint, and your legacy app may end up doing cartwheels down the aisle.
Take a hypothetical example. Let's say adding a user with the legacy system adds records to the following tables:
app_users
app_constraints
app_permissions
user_address
Let's assume you catch the first three and miss the fourth. It can't be important, right? But what if, in the 50 places that app_users is used in the app, one place does an inner join to user_address? (And why not? The app writer knew that he always wrote a record to user_address!) The newly added user suddenly disappears from the application's view, a condition that "could never happen" according to the original coder, and the application coughs up a hairball. Orders can't be taken. Production stops. A VP puts his new cardiac bypass surgery to the test.
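The failure mode described above can be reproduced in a few lines against an in-memory SQLite database (the table names are the hypothetical ones from the example, pared down to the two that matter):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE app_users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE user_address (user_id INTEGER, city TEXT);
""")

# the legacy app always writes both rows...
con.execute("INSERT INTO app_users VALUES (1, 'alice')")
con.execute("INSERT INTO user_address VALUES (1, 'Springfield')")
# ...while the new admin tool misses user_address
con.execute("INSERT INTO app_users VALUES (2, 'bob')")

# somewhere in the legacy code, one query assumes the address row exists
rows = con.execute("""
    SELECT u.name FROM app_users u
    INNER JOIN user_address a ON a.user_id = u.id
""").fetchall()
# bob silently vanishes from this query's results
```

No error is raised anywhere; the inconsistency only surfaces wherever that one inner join runs, which is exactly why these bugs are so hard to trace back to the run-around tool.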
Who gets blamed? Not the long-gone developer who should have coded for more exceptions. Not the guys who set up the red tape to control change. The guy that did an end run around those controls.
Good luck,
Terry.