I have been tasked with developing a document for internal testing standards and procedures at our company. I've been doing plenty of research and found some good articles, but I always like to reach out to the community for input here.
That being said, my question is this: how do you take a company with a very large legacy code base that is barely testable, if at all, and test what you can efficiently? Do you have any tips on how to create useful automated test cases for tightly coupled code?
All of our new code is being written to be as loosely coupled as possible, and we're all pretty proud of the direction we're going with new development. For the record, we're a Microsoft shop transitioning from VB to C# ASP.NET development.
There are actually two aspects to this question: technical, and political.
The technical approach is quite well defined in Michael Feathers' book Working Effectively With Legacy Code. Since you can't test the whole blob of code at once, you hack it apart along imaginary non-architectural "seams". These would be logical chokepoints in the code, where a block of functionality seems like it is somewhat isolated from the rest of the code base. This isn't necessarily the "best" architectural place to split it, it's all about selecting an isolated block of logic that can be tested on its own. Split it into two modules at this point: the bulk of the code, and your isolated functions. Now, add automated testing at that point to exercise the isolated functions. This will prove that any changes you make to the logic won't have adverse effects on the bulk of the code.
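To make the seam idea concrete, here is a minimal Python sketch with entirely hypothetical names: the extracted block is copied verbatim, and the tests pin down what the code does today (characterization tests), not what it "should" do.

    import unittest

    # Hypothetical seam: pricing logic pulled out of a legacy order module.
    # The body is copied verbatim so behaviour is preserved exactly.
    def calculate_discount(order_total, customer_years):
        if order_total > 1000:
            return 0.10 + min(customer_years, 5) * 0.01
        return 0.0

    class CharacterizationTests(unittest.TestCase):
        """Pin down current behaviour so later refactoring can't drift."""

        def test_large_order_long_standing_customer(self):
            self.assertAlmostEqual(calculate_discount(1500, 7), 0.15)

        def test_small_order_gets_no_discount(self):
            self.assertEqual(calculate_discount(999, 10), 0.0)

    if __name__ == "__main__":
        unittest.main()

Any change to the isolated logic now has to keep these tests green, which is exactly the safety net the refactoring needs.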
Now you can go to town and refactor the isolated logic following the SOLID OO design principles, the DRY principle, etc. Martin Fowler's Refactoring book is an excellent reference here. As you refactor, add unit tests to the newly refactored classes and methods. Try to stay "behind the line" you drew with the split you created; this will help prevent compatibility issues.
What you want to end up with is a well-structured set of fully unit tested logic that follows best OO design; this will attach to a temporary compatibility layer that hooks it up to the seam you cut earlier. Repeat this process for other isolated sections of logic. Then, you should be able to start joining them, and discarding the temporary layers. Finally, you'll end up with a beautiful codebase.
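A sketch of what such a temporary compatibility layer might look like, again with hypothetical names: the legacy side keeps calling the old function shape while the refactored logic lives behind a clean interface, and the shim is deleted once the callers are migrated.

    def legacy_discount(total, years):
        """Old free-function call shape the legacy code still uses (hypothetical)."""
        return 0.10 if total > 1000 else 0.0

    class DiscountPolicy:
        """Refactored logic behind a clean, testable interface."""
        def rate(self, order):
            return 0.10 if order["total"] > 1000 else 0.0

    class DiscountShim:
        """Temporary layer bridging the seam: old call shape in, new object behind.
        Deleted once all legacy callers have been migrated."""
        def __init__(self, policy):
            self._policy = policy

        def __call__(self, total, years):        # mimics legacy_discount's shape
            return self._policy.rate({"total": total, "years": years})

    # The legacy module's reference is swapped to the shim; nothing else changes:
    discount = DiscountShim(DiscountPolicy())
    assert discount(1500, 3) == legacy_discount(1500, 3)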
Note in advance that this will take a long, long time. And thus enters the politics. Even if you convince your manager that improving the code base will enable you to make changes better/cheaper/faster, that viewpoint probably will not be shared by the executives above them. What the executives see is that time spent refactoring code is time not spent on adding requested features. And they're not wrong: what you and I may consider to be necessary maintenance is not where they want to spend their limited budgets. In their minds, today's code works just fine even if it's expensive to maintain. In other words, they're thinking "if it ain't broke, don't fix it."
You'll need to present to them a plan to get to a refactored code base. This will include the approach, the steps involved, the big chunks of work you see, and an estimated timeline. It's also good to present alternatives here: would you be better served by a full rewrite? Should you change languages? Should you move to a service-oriented architecture? Should you move it into the cloud and sell it as a hosted service? All these are questions they should be considering at the top, even if they aren't thinking about them today.
If you do finally get them to agree, waste no time in upgrading your tools and setting up a modern development chain that includes practices such as peer code reviews and automated unit test execution, packaging, and deployment to QA.
Having personally barked up this tree for 11 years, I can assure you it's anything but easy. It requires a change all the way at the top of the tech ladder in your organization: CIO, CTO, SVP of Development, or whoever. You also have to convince your technical peers: you may have people who have a long history with the old product and who don't really want to change it. They may even see your complaints about its current state as a personal attack on their skills as coders, and may look to sabotage or sandbag your efforts.
I sincerely wish you nothing but good luck on your venture!
My robotics lab is looking for programmers to work on some projects we have at the moment.
We have nailed down the requirements (mainly C++ and experience with OpenGL and 3D), but due to obvious money constraints we can't afford to hire Great Developers. Instead, we're going to settle for Talented Students, offering them projects for their dissertation/thesis and hoping for some fresh ideas and creativity from their side. We can also afford to pay students who have just graduated (first job experience).
So my question is:
In your experience, how did you spot a Talented Student (computer scientist or engineer)? What questions did you ask? What else helped you find a candidate who turned out to be a Good Programmer? (Note: they might not know much about a specific language, but might have the ability to learn pretty fast.)
or, if you were the interviewee,
Which questions were asked that made you jump on the bandwagon? Or, if you had an awful experience, what - in retrospect - was an obvious warning signal that you ignored?
Please note that I am not looking for an argumentative answer. We can talk all day long about what's best for us and never agree.
Instead, I'm looking for tales from your experience. Anecdotes, stories, hints, everything will help.
Background:
A bit more background: working for academia here is slightly different than working for the private sector (here = Italy). There are no 'deadlines' to 'sell' a product; instead, it's all proof-of-concept based. Nothing you start working on has the guarantee to be functional.
A comic best describes it: reinventing the wheel
I am considering doing Coding Questions for their interview, but all my colleagues are scoffing at me (too scary, nobody will ever come to work for us again, nobody really knows how to code, etc.).
Coding-wise, programming done by researchers is ... ugly. I am fighting to get a version control system in constant use, people have to be chased down to report bugs and document their code, everything is coded so that it works, and we rarely go back to old code to fix bugs. Basically, once it's somewhat working, the project is closed and people go work on another project.
Lots has been reinvented and rewritten over and over again (just because nobody knew it was already there). People come and go, future is uncertain, but we play with robots so it's very cool :)
Furthermore, since we are really understaffed, nobody can follow you and guide you through your project. At best, you're the one who has to come up with a plan, the background literature, and a working prototype.
Hence, we are looking for people that:
have some background to get started
can be highly independent
do want to learn and build their own expertise in new fields
Actually, here's my best advice:
Recruit among your students.
Since you work for an academic institution I assume that either you or your colleagues teach. This provides you with a wealth of information about what potential recruits are capable of -- how fast they learn, how motivated they are, what they are good and bad at, what the code they turn in for lab assignments and projects looks like, etc.
Firstly, in industry, coding questions are very much the norm - I'd be worried if coding questions weren't asked!
I've been responsible for doing technical interviews for about ten years now. And yes, I ask coding questions. But the questions themselves aren't really the point. What I'm more interested in is getting an idea of whether the candidate can think, and articulate their thoughts.
One question I ask (assuming the simple earlier stuff has gone well) is about inheritance hierarchies. There is no one right answer, although there are a lot of wrong ones. But what's important is how they approach the question, and the points they come up with in favour of one design over another.
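For illustration, here is a hypothetical Python sketch of the kind of trade-off such a question surfaces; neither design is "the" answer, which is precisely the point.

    # Design A: behaviour fixed by the hierarchy
    class Robot:
        def move(self):
            raise NotImplementedError

    class WheeledRobot(Robot):
        def move(self):
            print("rolling")

    class LeggedRobot(Robot):
        def move(self):
            print("walking")

    # Design B: behaviour composed in as a strategy
    class Locomotion:
        def move(self):
            raise NotImplementedError

    class Wheels(Locomotion):
        def move(self):
            print("rolling")

    class ComposedRobot:
        def __init__(self, locomotion):
            self.locomotion = locomotion

        def move(self):
            self.locomotion.move()   # delegate: swappable at runtime

    ComposedRobot(Wheels()).move()   # a WheeledRobot can never become legged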
Background knowledge is useful, in that it shows that they have an interest in the area in which you're working - but really knowledge can be gained. Intelligence is much more important.
However, it's quite possible to have intelligent people who are impossible to work with! I haven't figured out how to determine who they are.
I did such a project when I was a student: i.e., a four-month project, working half-time. It was not about robots per se.
I think that the most obvious requirement is motivation/passion. Since they'll be mostly on their own, you will need them to be somewhat independent and able to think for themselves; this requires motivation first and foremost.
In order to determine whether the candidate is motivated, begin by asking them about the project itself. If they only gave it a cursory glance, they're likely not motivated. Also look at their experience and courses: optional courses in CS, projects they've done, and so on; anything indicating that they really care about CS and development in general and are not there just because they've heard it pays well.
Then comes the question of ability. Like you said, it might not be easy to spot those who will be smart enough to figure stuff out by themselves and DO things. Once again, you can ask them about past projects, having them detail the issues they faced and how they solved them.
Finally, I agree with you that some demonstration of their abilities is in order. They might be a bit tense initially, so I would do this toward the end, once the earlier questions have given them a chance to relax.
You don't necessarily need them to do coding questions; I think it's mostly about reasoning. Try to pick problems related to your area of work, for example one you really had in the past, and get them to analyze the problem. If possible, they should take the lead and ask you questions about the problem itself. For example:
"We had an issue where the robot could not analyze the images its camera took; it could not correctly identify the moving objects. Do you have an idea of how you would tackle that?"
Then you'll need to get them to think about a solution. You need a whiteboard here, and you should ask them to think aloud so that you can follow their reasoning. You'll probably need to nudge them a bit from time to time to keep them on track. Their reaction to your input is also a key point, since you want them to be able to accept criticism and build on it; otherwise you might have trouble directing them afterward.
Frankly, try to avoid asking them for the quicksort algorithm, or the introsort, or the radix sort... If they need sorting, they'll just fire up their computer and browse the internet. On the other hand, getting them to analyze an existing algorithm unknown to them (for example, the median of 5 sort) and checking that they understand why it works, could be worth it. If they need to work on their own, they'll need to be able to learn on their own too :)
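As one hypothetical example of such an exercise, a compact median-of-five routine in six comparisons gives a candidate real structure to reason about: why the minimum of the first four values can safely be discarded, and why the exhaustive check at the end is convincing.

    from itertools import permutations

    def median_of_five(a, b, c, d, e):
        if b < a: a, b = b, a                  # a <= b
        if d < c: c, d = d, c                  # c <= d
        if c < a:                              # let (a, b) be the pair with the smaller minimum
            a, b, c, d = c, d, a, b
        # a is now the minimum of {a, b, c, d}, so it cannot be the median;
        # the median of all five is the 2nd smallest of the remaining {b, c, d, e}.
        if e < b: b, e = e, b                  # b <= e
        return min(e, c) if b < c else min(b, d)

    # Exhaustive check over all orderings - the kind of verification you
    # might ask the candidate to reason about.
    assert all(median_of_five(*p) == sorted(p)[2] for p in permutations(range(5)))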
As others say, try to hire someone that's motivated!
For master's thesis students I put more emphasis on knowing the basic skills (programming, knowing how to use version control), as they don't stay long enough to learn everything along the way.
If they're going to work mostly on their own and you have no special requirements on language, I wouldn't focus much on language questions. But every decent programmer knows at least one language fairly well; get a sample of their prior work, or have them code some simple application to check that they don't suck.
I'd focus more on algorithms and data structures. Ask rudimentary stuff that every programmer should know: when to use a list and when to use a vector, why summing a row-major matrix by iterating over the columns first is bad, basic complexity-analysis questions, etc. That will weed out many of the weaker candidates.
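To illustrate the row-major point, here is a quick NumPy timing sketch (sizes are arbitrary): rows of a C-ordered array are contiguous in memory, so traversing columns first strides through memory and wastes the cache.

    import time
    import numpy as np

    a = np.random.rand(3000, 3000)          # C order: rows are contiguous

    def sum_by_rows(m):
        return sum(m[i, :].sum() for i in range(m.shape[0]))    # cache-friendly

    def sum_by_columns(m):
        return sum(m[:, j].sum() for j in range(m.shape[1]))    # strided access

    for f in (sum_by_rows, sum_by_columns):
        start = time.perf_counter()
        f(a)
        print(f.__name__, time.perf_counter() - start)          # columns are slower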
Ask some design questions too perhaps, e.g. what is "coupling" and why is it bad, ask them if they know what a design pattern is, etc.
Check that the applicants have a solid grasp of linear algebra, and of coordinate system changes in particular, if they're going to work with any 3D stuff like OpenGL. In my experience, learning the API is simple; wrapping your head around how the transformations work is less so.
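A minimal sketch of that kind of reasoning, with made-up values: composing a rotation and a translation into one 4x4 homogeneous transform and moving a point between frames.

    import numpy as np

    def pose(rot_z_deg, translation):
        """4x4 homogeneous transform: rotate about Z, then translate."""
        t = np.radians(rot_z_deg)
        m = np.eye(4)
        m[:2, :2] = [[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]]
        m[:3, 3] = translation
        return m

    cam_to_world = pose(90, [0.0, 2.0, 0.0])    # hypothetical camera pose
    p_cam = np.array([1.0, 0.0, 0.0, 1.0])      # point in the camera frame (homogeneous)
    print(cam_to_world @ p_cam)                 # -> [0. 3. 0. 1.] in the world frame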
Obviously, if you expect them to perform any real robotics-specific work, you should check for that knowledge as well, e.g. estimation (understanding simple EKFs and particle filters is a requirement in my book), control theory, pattern recognition, machine learning, vision, or whatever is useful for the particular task.
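An EKF is too much to sketch here, but even a toy one-dimensional linear Kalman filter (below; the noise values are arbitrary) is enough to probe whether a candidate understands the predict/update cycle.

    def kalman_1d(measurements, q=1e-3, r=0.1):
        """Toy 1D Kalman filter: constant state, process noise q, sensor noise r."""
        x, p = 0.0, 1.0                    # initial estimate and its variance
        estimates = []
        for z in measurements:
            p += q                         # predict: state unchanged, uncertainty grows
            k = p / (p + r)                # Kalman gain
            x += k * (z - x)               # update toward the measurement
            p *= 1 - k                     # uncertainty shrinks after the update
            estimates.append(x)
        return estimates

    print(kalman_1d([1.2, 0.9, 1.1, 1.05, 0.95]))   # estimates settle near 1.0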
If I were hiring someone for theoretical work I would perhaps loosen up on the CS/programming skills and focus more on math knowledge. Someone with solid math skills will pick up the CS easily, and programming is just programming.
Ask for references or to see some of the prior work. Many great students already have some interesting project to show after graduation.
I'm not sure how common this is at universities in general, but I would look for a games-programming (or robotics) course on the transcript where the candidate, as a student, succeeded in completing a project with a team. I would ask the candidate to describe how that project worked (the important technical details) and the role he had in the team. The only way to really tell whether someone is good at something is to see what happens when they try it, and since you're in academia, recruiting students, that should not be a problem.
What do software engineers encounter after another stressful release? Well, the first thing we encounter in our group is the bugs we have released out into the open. The biggest problem that we as software engineers encounter after a stressful release is spaghetti code, also called the big ball of mud.
The time and money to chase perfection are seldom available, nor should they be. To survive, we must do what it takes to get our software working and out the door on time. Indeed, if a team completes a project with time to spare, today’s managers are likely to take that as a sign to provide less time and money or fewer people the next time around.
You need to deliver quality software on time and under budget.
Cost: Architecture is a long-term investment. It is easy for the people who are paying the bills to dismiss it, unless there is some tangible immediate benefit, such as a tax write-off, or unless surplus money and time happen to be available. Such is seldom the case. More often, the customer needs something working by tomorrow. Often, the people who control and manage the development process simply do not regard architecture as a pressing concern. If programmers know that workmanship is invisible, and managers don't want to pay for it anyway, a vicious circle is born.
But if this were really the case, then every long-term software project would eventually lead to a big ball of mud.
We know that doesn't always happen. How come? Because the claim that managers do not regard architecture as a pressing concern is false, at least nowadays. Managers in the IT field know very well that maintainability is key to the business.
business becomes dependent upon the data driving it. Businesses have become critically dependent on their software and computing infrastructures. There are numerous mission critical systems that must be on-the-air twenty-four hours a day/seven days per week. If these systems go down, inventories can not be checked, employees can not be paid, aircraft cannot be routed, and so on. [..]
Therefore it is at the heart of the business to keep systems far away from the big ball of mud: to keep them maintainable, to make sure they actually work, and to let you, as a programmer, prove that they do. Does your manager ask whether you have finished your coding today? Does she ask whether the release that has fixes A, B, and C can be done today? Or does she ask whether the software that will be released actually works, and how you have proved it?
Now for my question:
What ways do we have to prove to our managers and/or stakeholders that our software works? Are the green lights of our unit tests good enough? If yes, won't that only prove that our big ball of mud is still doing what we expect it to do? That the software is maintainable? How can you prove your design is right?
[Added later]
Chris Pebble's answer below is putting my team on the right track. Quality assurance is definitely the thing we are looking for. Thanks, Chris. Having a QA policy agreed upon with the stakeholders is then the logical result of what my team is looking for.
The follow-up question is what should all be in that QA policy?
Having the build server running, visible to my stakeholders
Having the build server not only 'just build' but also run the tests that are part of the QA policy
Having an agreement from my stakeholders on our development process (of which peer code review is a part)
and more...
Some more information: the team I'm leading is building web services that are consumed by other software teams. That is why a broken web service immediately costs money. When the developers of the presentation-layer team, or the actual testers, can't move forward, we are under immediate stress and have to fix bugs ASAP, which in turn leads to quick hacks.
[Added later]
Thanks for all the answers. It is indeed about 'trust'. We cannot do a release if the software is not trusted by the stakeholders, who are actively testing our software themselves using the website that consumes our web service.
When issues arise, the first question our testers ask is: is it a service-layer problem or a presentation-layer problem? This points me toward a QA policy that ensures our software passes the tests they are running.
So, the only way I can (now) envision enabling trust with testers is to:
- Talk with the current test team, go over the tests they can execute manually (from their test scripts and scenarios), and make sure our team already has those tests automated and running against our web service. That would be a good starting point for a 'sign-off' before we do a release that the presentation-layer team has to integrate. It will take some effort to explain that creating automated tests for all those scenarios takes time, but it will definitely be useful to ensure that what we build actually works.
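As a sketch of that starting point (hypothetical endpoint and resource names, standard library only), each manual scenario becomes one automated test, and a green run becomes the sign-off artifact:

    import json
    import unittest
    import urllib.request

    BASE_URL = "http://localhost:8080"          # hypothetical staging service

    class TesterScenarioTests(unittest.TestCase):
        """One test per step of the manual test script."""

        def get_json(self, path):
            with urllib.request.urlopen(BASE_URL + path) as resp:
                self.assertEqual(resp.status, 200)
                return json.loads(resp.read())

        def test_customer_lookup_scenario(self):
            data = self.get_json("/customers/42")     # hypothetical resource
            self.assertIn("name", data)

    if __name__ == "__main__":
        unittest.main()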
I'm part of a team working on a large project for a governmental client.
The first module of phase 1 was a huge disaster: the team was unmanaged, there wasn't a QA team, and the developers were not motivated to work better. Instead, the manager kept shouting and deducting wages from people who didn't work overtime!
The client? Don't ask about that; they were really pissed off, but they stayed with our company because they know that no one understands the business as we do.
So what was the solution:
First, we separated management from the programmers and put in a friendly team leader.
Second, we got a qualified QA team. In the first few weeks, bugs were in the hundreds.
Third, we put two or three developers on a support team; their responsibility is not to take on new tasks, just to fix bugs and work directly with QA.
Fourth, we motivated the guys; sometimes it's not about the money or extra vacation, sometimes a good word is perfect. A small example: after working three days in a row for almost 15 hours a day, the team leader mentioned it to the manager. Two days later I received a letter from the CEO thanking me for my efforts and giving me two vacation days.
We will soon deliver the fourth module of the system, and as one of the support team I can say it's at least 95% bug-free. That is a huge jump from our first modules.
Today we have a powerful development team, qualified QA and expert bug fixers.
Sorry for the long story, but that's how our team, over four months, proved to the manager and the client that we are reliable and just need the right environment.
You can't prove it beyond the scope of the tests, and unless there is a bulletproof specification (which there never is), the tests never prove anything beyond the obvious.
What you can do as a team is approach your software design in a responsible manner and not give in to the temptation to write bad code to please managers, demand the necessary resources and time, and treat the whole process as much as a craft as a job. The finest Renaissance sculptors knew no one would see the backs of statues placed in the corners of cathedrals, but they still took the effort to make sure they weren't selling themselves short.
As a team the only way to prove your software is reliable is to build up a track record: do things correctly from the start, fix bugs before implementing new features, never give in to the quick hack fix, and make sure everyone shares the same enthusiasm and respect for the code.
In all but trivial cases, you cannot 'prove' that your software is correct.
That's the role of User Acceptance Testing: to show that an acceptable level of usefulness has been reached.
I think this is putting the cart before the horse. It's tantamount to a foot soldier trying to explain to the general what battle maneuvers are and why protecting your flanks is important. If management can't tell the difference between quality code and a big ball of mud, you're always going to end up delivering a big ball of mud.
Unfortunately, it is completely impossible to "prove" that your software is working bug-free (the Windows XP commercials always annoyed me by announcing "the most secure version of Windows ever"; that's impossible to prove at release). It's up to management to set up and enforce a QA process and establish metrics as to what a deliverable product actually looks like and what level of bugs or unexpected behaviors is acceptable in the final release.
That said, if you're a small team and set your own QA policies with little input from management I think it would be advantageous to write out a basic QA process and have management sign off on it. For our web apps we currently support 4 browsers -- and management knows this -- so when the application breaks in some obscure handheld browser everyone clearly understands that this is not something we designed the application to support. It also provides good leverage for hiring additional development or test resources when management decides it wants to start testing for x.
As Billy Joel once said, "It's always been a matter of trust."
You need to understand that software development is "black magic" to all except those who write software. It's not obvious (actually, it's quite counterintuitive) to the rest of your company that many of your initiatives lead to increased quality and reduced risk of running over time and/or budget.
The key is to build a trusting, respectful relationship between development and other parts of the business. How do you build this trust? Well that's one of those touchy-feely people problems... you will need to experiment a little. I use the following tools often:
Process visibility - make sure everyone knows what you are doing and how things are progressing. Also, make sure everyone can see the impact of any changes that occur during development.
Point out the little victories - to build up trust, point out when things went exactly as you planned. Try to find situations where you had to make a judgement call and use the term "mitigated the risk" with your management.
Do not say, "I told you so." - let's say that you told management that you needed 2 months to undertake some task and they say, "well you only have three weeks." The outcome is not likely to be good (assuming your estimate is accurate). Make management aware of the problem and document all the things you had to do to try to meet the deadline. When quality is shown to be poor, you can work through the issues you faced (and recorded) in a professional manner rather than pointing the finger and saying, "I told you so."
If you have a good relationship with your manager, you can suggest that they read some books specific to software development so they can understand industry best practices.
Also, I would point out to your boss that not allowing you to act as a professional software developer is hurting your career. You really want to work somewhere that lets you grow professionally, rather than somewhere that turns you into a hacker.
As a hobby project I am trying to create a derivative of ROM (the Diku/Merc-based codebase, now defunct). I would appreciate it if anybody who has done something similar could share some useful resources or offer tips. I'm finding that a lot of the resources, such as mailing lists, are no longer active, and many links are dead.
I picked ROM because that is what I am familiar with as a player, but the source is more complicated than anything I have come across, and I wouldn't mind picking a codebase that is easier to understand. Any recommendations before I dive in in earnest would also be appreciated.
As for mudding communities in general, I don't know of much beyond The Mud Connector because I've always been in more of a user/player role than a developer role. A forgiving and active place where I can get answers to my questions is what I value most.
After extensive research I've decided to go with the tba codebase. I may elaborate later, but very broadly:
Coding experience is more important than experience as a player, and this has convinced me to abandon my roots. I wanted a well-documented, reasonably modern, manageable codebase undergoing active development, and tba seems to fit the bill.
Anyway, muds are truly a labour of love, and you have to have a few screws loose if you plan to run one. Moreover, the glory days have passed (it seems like many muds shut down en masse around 2000) and in my opinion the community is largely inactive and fragmented. An excerpt from the tba docs sums this up nicely:
"So, you're sure you want to run your own MUD? If you're already an old hand at playing MUDs and you've decided you want to start one of your own, here is our advice: sleep on it, try several other MUDs first. Work your way up to an admin position and see what running a MUD is really about. It is not all fun and games. You actually have to deal with people, you have to babysit the players, and be constantly nagged about things you need to do or change. Running a MUD is extremely time consuming if you do it well; if you are not going to do it well then don't bother. Just playing MUDs is masochistic enough, isn't it? Or are you trying to shave that extra point off your GPA, jump down that one last notch on your next job evaluation, or get rid of that pesky Significant Other for good? If you think silly distractions like having friends and seeing daylight are preventing you from realizing your full potential in the MUD world, being a MUD Administrator is the job for you."
Anyway, I don't have any high hopes for success, but this is something I will find interesting, something that will improve my code-fu, and something that will keep me busy for many years to come :D
There is no active ROM developer mailing list, so tba definitely is a better choice. There was some effort to clean up ROM with the RaM project.
Dead Souls sees active development as well (the main dev is a hero in my eyes for the amount of work he produces).
I would not recommend MUCK as the userbase is rather small. However that is not to say there isn't good work being done -- look up the user Valente on the code subforum of the wora.netlosers.com forum, as he's probably one of the foremost MUCK developers at the moment.
However, if you thought ROM was complicated, I should caution you about tackling an established/canon codebase for any purpose other than gaining familiarity with mud servers. For actual development you may be better off with a barebones codebase such as NakedMUD (C/Python), or even something slimmer than that, such as Socketmud (ports in many languages).
There are of course dozens of mud servers you can look at; all will be educational in some manner, but in the beginning stages it won't be obvious what is good practice and what is not. You may want to look up ColdC (similar to LP) and TeensyMUD (Ruby) to study. The author of Teensy, Jon Lambert, has a useful developer site up at http://sourcery.dyndns.org/.
However, you'll find very experienced ROM and tba (i.e., Circle) developers at MudBytes, and I'll second Sam in saying that it is the most active mud developer site currently. It's a little surprising, but in the last year there has been significant growth in activity at MB; I think people are coming into the fold, so to speak, and gathering there. There is also a good-sized code repository at MB.
Your other options are The Mudconnector which you already know, Top Mud Sites which has a somewhat smaller crowd of mostly developers (typically of established and long-running muds), and Mudlab, which is much quieter but usually with a good signal to noise ratio. MudGamers is an interesting new site with a fairly quiet forum, but a new approach to creating a more contemporary-looking portal for playing muds.
Not to be overlooked is the archive for the old mud-dev mailing list. There is a staggering amount of information to be gleaned there. The raw archive can be found at muddev.wishes.net/. Richard Tew also has done some noble work in combing through old usenet archives to find valuable mud development related threads, which you can find through his mud tag at posted-stuff.blogspot.com/search/label/mud.
I should note that many muds use the IMC chat network to link muds (MB has a portal to this as well on the front page of their site). Once your mud is running it can be useful to get on IMC if you're in need of real-time chat to fix a problem (of course, there are many IMC channels and you'll want to choose which one you use prudently).
Despite the fact that muds today are niche at best and unheard of at worst, there is no shortage of new muds in development. They offer a design and programming challenge that is still accessible to the solo developer, unlike any graphical game of equal size or complexity.
Furthermore you shouldn't be discouraged if it feels like you'll never release a playable game. Like many larger projects you may start and abandon it many times over, but you'll be building proficiencies across a wide spectrum of programming skillsets and applications -- not many projects will allow you to take such a whole systems approach. Good luck!
An active community also seems to exist around the Dead Souls MUDlib:
http://en.wikipedia.org/wiki/Dead_Souls_MUDlib
I was an old player of Nightmare LPMud, which sadly disappeared. I'm not much into the coding side of these MUDs, but I have been following this community loosely, just because of so many positive MUDding memories.
Take a look at Nameless MUCK. It's a solid piece of software.
First, concentrate on finding or building a solid telnet socket library; telnet is generally the main protocol for a MUD.
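As a rough starting point, here is a minimal line-based server using Python's asyncio; a sketch only, since real telnet option negotiation (the IAC sequences some clients send) is simply ignored here.

    import asyncio

    async def handle_player(reader, writer):
        writer.write(b"Welcome to the MUD. Type 'quit' to leave.\r\n")
        await writer.drain()
        while True:
            data = await reader.readline()                   # one command per line
            line = data.decode(errors="ignore").strip()
            if not line or line == "quit":
                break
            writer.write(f"You said: {line}\r\n".encode())   # echo: command parsing goes here
            await writer.drain()
        writer.close()
        await writer.wait_closed()

    async def main():
        server = await asyncio.start_server(handle_player, "0.0.0.0", 4000)
        async with server:
            await server.serve_forever()         # telnet localhost 4000 to try it

    if __name__ == "__main__":
        asyncio.run(main())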
Next, create a FULL list of features that you want to implement, and set up some sort of feature or bug tracking system (even if it is a spreadsheet). Then prioritize the features based on their dependencies on other systems.
Check out http://www.gamasutra.com for some architectural discussions on creating games in general, creating basic AI, character systems, and multi-player games.
Once you understand the theory, it is just a butt load of programming to build in everything you want to support.
I'd make the MUD engine abstract enough to run behind a terminal client, a web-based Ajax client, and maybe stand-alone clients; i.e., don't tie the front end in with the actual game logic. I'm also not averse to a MUD using a decent font for the text, or real graphics where necessary (as interstitials, or to make notes on the bulletin board look like notes), as a supplement to the text-based interface rather than a replacement for it.
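One way to sketch that separation (all names hypothetical): the engine emits structured events and never formats output itself, so a terminal client, an Ajax gateway, or a graphical client just implement the same small interface.

    class Client:
        """Front-end contract: the engine never formats output itself."""
        def send(self, event):
            raise NotImplementedError

    class TerminalClient(Client):
        def send(self, event):
            print(event["text"])                  # plain-text rendering

    class WebClient(Client):
        """e.g. queues events for an Ajax poll (hypothetical)."""
        def __init__(self):
            self.outbox = []
        def send(self, event):
            self.outbox.append(event)             # serialized to JSON elsewhere

    class Engine:
        def __init__(self, client):
            self.client = client
        def look(self, room):
            # Game logic emits structured events, not formatted strings
            self.client.send({"type": "room",
                              "text": room["desc"],
                              "exits": room["exits"]})

    room = {"desc": "A dusty library.", "exits": ["north"]}
    Engine(TerminalClient()).look(room)           # prints: A dusty library.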
You might also want to write converters from existing MUD script file formats into your own, so that you don't have to spend ages creating zones.
I find the problem with MUDs is that there is too much emphasis on killing NPCs, and not enough on puzzles or other interesting aspects. So a more interesting, story-oriented engine (possibly to the extent of sharding zones for single-player or single-team use) could be a nice feature to have.
I will take this opportunity to recommend MudBytes, which is probably the most active MUD developer site available right now.
How does one go about choosing a software vendor after having seen many presentations from many software vendors, from a user preference perspective?
I ask this question on behalf of a friend who has been put in charge of making such an evaluation, without any prior experience. I thought the experience of the SO community might generate something considerably more useful than a bit of googling.
Domain specificity is not important, but if it helps at all, the systems currently being evaluated are treasury systems and ERP (Enterprise Resource Planning) systems.
Thanks in advance for any help/ideas offered.
A common tool for software evaluation is a SWOT analysis:
SWOT Analysis is a strategic planning method used to evaluate the Strengths, Weaknesses, Opportunities, and Threats involved in a project or in a business venture.
[...]
Strengths: attributes of the organization that are helpful to achieving the objective.
Weaknesses: attributes of the organization that are harmful to achieving the objective.
Opportunities: external conditions that are helpful to achieving the objective.
Threats: external conditions which could do damage to the business's performance.
See if you can find existing users of the packages either through people you actually know or through forums online. See what the pain points and advantages are. Also, you should press the vendors with questions about your specific usage scenario and demand real answers and not just sales/marketing spin; you're trying to see if the package will actually solve your problem.
You should find out how their customer support works. Something might break in the product and you'll need technical support. If it's not there when you need it you risk just sitting there waiting and losing time and money.
I would want to know if the vendor is willing to give at least some of your money back if the system fails to deliver to pre-agreed criteria.
Are they going to be involved or committed?
(They say that with breakfast, the chicken is involved but the pig is committed.)