Does anyone actually use flowcharts for nuts and bolts code anymore? [closed]

When I was in school, it was often a requirement to flowchart, line for line, the little programs we wrote.
Because of the size of the symbols, those flowcharts tended to be very large and were often tedious to draw.
They were always so detailed that you were essentially writing code anyway.
I use flowchart/UML-style techniques to develop higher-level things, but when it gets down to actual loops and whatnot, it seems like overkill.
I will often pseudo-code more detailed algorithms, but still not to the super-fine-grained point.
Is this just one of those things where, in school, the programs were so tiny there would be nothing else to flowchart, so they had us do the minutiae?

Flowcharts, no -- Sequence diagrams, yes. I try to keep them at a very high-level to communicate the idea to someone quickly. I do not try to get every detail in. I might supplement with another diagram to show an edge case, if it seems important.
It's great for communication as a sketch -- I think it's not right for a specification (but it would make a good intro to a detailed section).

Flowcharts for the ifs and whiles of real code? Never (in 30 years) found them useful.
As a discussion aid for eliciting requirements ... so when we're here, what could happen? ... how would you decide? ... what would you do if it's > 95%? ... they can be helpful. A certain kind of user finds such diagrams on the whiteboard easy to talk about.

To be absolutely honest, I am extremely glad I was required to accompany every assignment with flowcharts. They made me think structurally, something I was lacking (and perhaps still lack to a certain extent).
So don't be quick to jump on an "I'm off to play the grand piano" bandwagon; flowcharts really do work.
Not once have I found myself in a bind over non-trivial logic: after laying the logic out in flowchart form on a sheet of paper (which takes a couple of minutes), it all inevitably becomes clear to me.

Yeah, I agree. The point was to get you to understand flow charts, not to imply that you should use them for line-by-line code coverage.
I don't know why you'd even waste time with pseudo-code except for demonstration, honestly, unless it's some really low-level programming.

I don't use them myself. But a coworker who only programs every once in a while does. It's a very handy way for him to remember the nuts-and-bolts of that program he wrote a year ago. He doesn't do much programming, so it's not worth it for him to learn sequence diagrams and things like that.
It's also the type of diagram that other people who hardly ever program will be able to read easily, which in his position is a plus.

I don't find great detail in a flowchart to be very helpful. I use UML-style techniques on a sheet of paper, or a whiteboard in a group setting. Mid-level class diagrams and sequence diagrams can be extremely helpful for organizing your ideas and communicating your design intentions.

Sometimes on a whiteboard to describe a process, but never in actual design or documentation. I'd describe them more as "flowchart-like" since I'm not always particular about the shapes.

We have an in-house application that has some fairly complex workflow in it. A flowchart forms a big part of the spec for this part of the system. So yes, flowcharts are a useful tool for spec'ing a system. They are also normally understood by non-technical people, which is useful if they are part of the user requirements. No, I would not normally use them at a very low level, nor would I expect part of a system to be spec'ed or documented only by flowcharts.

TDD is a good option for nuts and bolts code, if you are so inclined, and it comes with a lot of other benefits.

We have a large, organically grown application with very little documentation, so using flowcharts to document components within the application has proven very useful to the business side of the operation, as even they don't understand how everything fits together.
They aren't at a line-by-line level, but they do cover all branches of business logic with the subsequent processing and outputs (although not following strict flowchart rules - some blocks describe multiple processes).

Related

How to gracefully integrate unit testing where none is present? [closed]

I have been tasked with developing a document for internal testing standards and procedures in our company. I've been doing plenty of research and found some good articles, but I always like to reach out to the community here for input.
That being said, my question is this: How do you take a company that has a very large legacy code base that is barely testable, if at all testable, and try to test what you can efficiently? Do you have any tips on how to create some useful automated test cases for tightly coupled code?
All of our new code is being written to be as loosely coupled as possible, and we're all pretty proud of the direction we're going with new development. For the record, we're a Microsoft shop transitioning from VB to C# ASP.NET development.
There are actually two aspects to this question: technical, and political.
The technical approach is quite well defined in Michael Feathers' book Working Effectively With Legacy Code. Since you can't test the whole blob of code at once, you hack it apart along imaginary non-architectural "seams". These would be logical chokepoints in the code, where a block of functionality seems like it is somewhat isolated from the rest of the code base. This isn't necessarily the "best" architectural place to split it, it's all about selecting an isolated block of logic that can be tested on its own. Split it into two modules at this point: the bulk of the code, and your isolated functions. Now, add automated testing at that point to exercise the isolated functions. This will prove that any changes you make to the logic won't have adverse effects on the bulk of the code.
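To make the seam idea concrete, here is a minimal sketch in C++ (the question is about a VB/C# shop, but the technique is language-agnostic; computeLateFeeCents and its fee rules are invented for illustration). The extracted function is pinned down with "characterization tests", which assert what the legacy code currently does rather than what a spec says it should do:

    // Hypothetical seam: the fee logic used to be buried inside a huge
    // billing routine; here it is extracted into a free function with no
    // hidden dependencies so it can be exercised in isolation.
    #include <cassert>

    int computeLateFeeCents(int balanceCents, int daysOverdue) {
        if (daysOverdue <= 0) return 0;
        int fee = balanceCents * daysOverdue / 100; // 1% per day, as the old code did
        return fee > 5000 ? 5000 : fee;             // capped at $50, as the old code did
    }

    int main() {
        // Characterization tests: values captured by running the legacy
        // code, not derived from a spec.
        assert(computeLateFeeCents(10000, 0)  == 0);    // not overdue: no fee
        assert(computeLateFeeCents(10000, 5)  == 500);  // $100, 5 days: $5
        assert(computeLateFeeCents(10000, 90) == 5000); // hits the $50 cap
        return 0;
    }

With the old behavior pinned down like this, the refactoring described next can proceed with a safety net.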
Now you can go to town and refactor the isolated logic following the SOLID OO design principles, the DRY principle, etc. Martin Fowler's Refactoring book is an excellent reference here. As you refactor, add unit tests to the newly refactored classes and methods. Try to stay "behind the line" you drew with the split you created; this will help prevent compatibility issues.
What you want to end up with is a well-structured set of fully unit tested logic that follows best OO design; this will attach to a temporary compatibility layer that hooks it up to the seam you cut earlier. Repeat this process for other isolated sections of logic. Then, you should be able to start joining them, and discarding the temporary layers. Finally, you'll end up with a beautiful codebase.
Note in advance that this will take a long, long time. And thus enters the politics. Even if you convince your manager that improving the code base will enable you to make changes better/cheaper/faster, that viewpoint probably will not be shared by the executives above them. What the executives see is that time spent refactoring code is time not spent on adding requested features. And they're not wrong: what you and I may consider to be necessary maintenance is not where they want to spend their limited budgets. In their minds, today's code works just fine even if it's expensive to maintain. In other words, they're thinking "if it ain't broke, don't fix it."
You'll need to present them with a plan to get to a refactored code base. This will include the approach, the steps involved, the big chunks of work you see, and an estimated timeline. It's also good to present alternatives here: would you be better served by a full rewrite? Should you change languages? Should you move to a service-oriented architecture? Should you move it into the cloud and sell it as a hosted service? All of these are questions they should be considering at the top, even if they aren't thinking about them today.
If you do finally get them to agree, waste no time in upgrading your tools and setting up a modern development chain that includes practices such as peer code reviews and automated unit test execution, packaging, and deployment to QA.
Having personally barked up this tree for 11 years, I can assure you it is anything but easy. It requires change all the way at the top of the tech ladder in your organization: the CIO, CTO, SVP of Development, or whoever. You also have to convince your technical peers: you may have people who have a long history with the old product and who don't really want to change it. They may even see your complaints about its current state as a personal attack on their skills as coders, and may look to sabotage or sandbag your efforts.
I sincerely wish you nothing but good luck on your venture!

How to deal with large projects in C++? [closed]

Now that I know some of the basics of C++, I must admit that I still find it very hard to deal with code that others have written in C++. This may be inherently so, as C++ allows for complex object hierarchies that are (at least to me) very hard to grasp if one is just handed a C++ project without any further comments or instructions.
So my question is more a question to the more experienced C++ programmers among you: how can someone understand a large C++ project written by others?
I easily lose my way and can be lost for weeks if I try to understand how a large project of, for example, 10,000 lines of code is written. Functions of classes are pointers to functions of different classes that may or may not be overloaded and may or may not be inherited by other classes, and so on, without end.
Are there any practical tips that may speed up my ability to read and understand large C++ projects? Is there perhaps a tutorial with such tips? Please, elaborate! :)
I've been programming professionally for some time now, and as such I have repeatedly been handed down codebases written by others before me. Understanding is never easy, especially when the code is inconsistent.
The first thing to realize, though, is that learning your way around a new codebase is not so different from re-discovering a codebase you had not touched for a while. Thus, whether it was written by your old self or by others does not matter much; and since you probably manage to cope with re-discovering codebases you had worked on before, you should be able to discover new codebases as well. Don't lose hope.
The second thing to realize is that "understanding" is a vague term, and there are certainly different degrees. Oftentimes, nobody asks you to understand the ins and outs completely; more likely you will be asked to understand a portion of the codebase in which either there is a bug or some new functionality should be developed. Therefore, as time passes, you will gradually gain an understanding of various portions, and you will inevitably have deeper knowledge of the portions you worked on the most, whilst others remain relatively abstract or even completely obscure. That's okay; it's been a long time since human beings stopped trying to learn everything there was to learn.
With that said, there are several axes of understanding you can try:
You should look for architecture: a good first step is to trace the library dependencies (the Makefile/project files should help here). This will give you the coarse technical blocks out of which the application is built. Executables are normally the leaves of the dependency trees.
You should look for data flow: what is the trigger of the application (is it called directly, or as a callback)? What are the steps the data goes through (roughly, just a sketch)? Do not hesitate to focus on a specific narrow use case and use the debugger to trace things, and do not try to dig too deep at first; just get a feel for things.
There are also other axes that may help you gain some understanding of the domain the application was written for. An understanding of the domain is useful because it provides a key insight into what should happen, and it also helps you decipher the comments and function names.
User documentation: what is this used for? If you can arrange for a demo, that is generally very helpful; otherwise, maybe you can try playing with it yourself (in a test environment).
Tests: what is tested? What is exposed to the user?
Persistent data: what is serialized? What is saved in a database? Persistent data is accessed at some point, so it helps if you understand when it is read and written.
If it is a working product (that runs) and you can "debug" it, start by looking at just one particular feature.
Learn how it works from the user's point of view (UI, behaviour, inputs, outputs, ...).
Once you know the feature from the outside, look for the code for that feature (only that feature); the starting point might be a menu handler, a dialog, or a mouse/pointer event.
From there, manually trace the code for one action or sub-feature; skip deep internal libraries (treat them as black boxes for now) and learn how it works.
Once you know that section of code, dig deeper into the library APIs that were called from the upper-level code.
Take your time.
Do not try to understand everything at once.
Draw up a schematic (pen and paper) of the dependencies (stay high-level; no class dependencies at the beginning).
Good luck.
The problem you are mentioning does not have a clear and simple answer. Nevertheless, here are some tips:
At the beginning, try to remember everything you can, even at random: names of directories, classes, template parameters, and so on. As much as you can. This sounds pointless, but it still makes sense.
While working with the code, always ask yourself, "Have I looked at this function/parameter/etc. before?" If the answer is yes, spend more time with that piece of code. If not, just get a basic grasp and move on.
As time goes on, you will find that more and more of the code seems clear and easier to grasp.
It is impossible to give exact figures, because the size and complexity of projects vary greatly. Do not expect simple and immediate results.
Other points:
You definitely need a source code browser. Spend time learning how to use it. A good example is http://sourceinsight.com/ (not my site).
If you see a function that is called 500 times, knowledge about that function is 500 times more likely to be useful than knowledge about a function that is called only once.
The best approach is to grasp the architecture of the project. While trying to do so, remember that the project may have no architecture at all.
While studying the code, keep your task in mind. A typical situation: you need to modify something or fix a bug. If so, look for the right part of the code and focus your effort on it.

How to write good software without getting stuck [closed]

I've been working for years on my personal project, an operating system written from scratch. As you may imagine, it's quite complicated stuff. The problem is that I've started over from scratch many times: at some point (quite an advanced one, too; I had hard disk read/write and some basic networking), things got too confusing and I decided to throw it all out the window and try again.
Over the years I've learned how to make the code look nicer. I read "Clean Code: A Handbook of Agile Software Craftsmanship" by Robert Martin and it helped a lot. I learned to make functions smaller, organize things into classes (I used C, now C++) and namespaces, handle errors appropriately with exceptions, and test.
This new approach, however, has me stuck: I spend most of my time checking that everything is going well, that the code reads well, that it is easy to follow, well commented, and tested. Basically, I haven't made any significant progress in months. When I look at my well-written code, it's difficult to add new functionality; I think "Where should I put this? Have I already used this piece of code? What would be the best way to do this?" and too often I postpone the work.
So, here's the problem. Do you know any code writing strategy that makes you write working, tested, nice code without spending 90% of time at thinking how to make it working, tested and nice?
Thanks in advance.
Do you know any code writing strategy that makes you write working, tested, nice code without spending 90% of time at thinking how to make it working, tested and nice?
Yes, here.
Seriously, no. It is not possible to write good code without thinking.
When I see my well-written code, it's difficult to add a new functionality and think "where should I put this? Have I already used this piece of code? What would be the best way to do this?" and too often I postpone the work.
This is called "analysis paralysis". You might be interested in reading the "Good Enough Software" section of The Pragmatic Programmer. Your code doesn't have to be perfect.
Those things are widely discussed. To me, this legendary blog entry by Joel Spolsky and the follow-up discussion all over the web (Robert Martin responded to it) contain all the pros and cons, and they are still fun to read.
To get an idea here's a quote by Jamie Zawinski which appears in the post linked to above:
“At the end of the day, ship the fu****g thing! It’s great to rewrite your code and make it cleaner and by the third time it’ll actually be pretty. But that’s not the point—you’re not here to write code; you’re here to ship products.”
I suggest you give TDD (test driven development) a run.
In this context, you write automated tests for each piece of functionality before implementing it, then run the tests after completing the feature.
If the tests pass, you are done and can start on another feature. As a bonus, the tests will accumulate over time, and you will soon have a test suite you can use for regression testing (to make sure you haven't broken anything while coding something new); this addresses your fear of breaking things in the "nice code".
Also, TDD will make you focus on developing exactly what you need and no more, so it tends to lead to a nicer and simpler design (especially in interfaces, since you have to think about an interface before you start coding, so "thought" drives the interfaces rather than "whatever happens to be handier when I'm coding it").
However, be aware that applying automated tests to an OS may provide some amount of technical challenge!
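As a minimal sketch of how the cycle might still work for OS code (the RingBuffer below is my invented example, not something from the question; a fixed-size input queue is the kind of freestanding component you could test on the host with plain asserts before dropping it into the kernel):

    // Red/green TDD sketch in C++: the asserts in main() were written
    // first and failed; RingBuffer is the simplest code that passes them.
    #include <cassert>
    #include <cstdint>

    class RingBuffer {
    public:
        bool push(uint8_t byte) {
            if (count_ == Capacity) return false;     // full: caller decides what to drop
            buf_[(head_ + count_) % Capacity] = byte;
            ++count_;
            return true;
        }
        bool pop(uint8_t& out) {
            if (count_ == 0) return false;            // empty
            out = buf_[head_];
            head_ = (head_ + 1) % Capacity;
            --count_;
            return true;
        }
    private:
        static constexpr int Capacity = 4;            // tiny, so the tests hit the edges
        uint8_t buf_[Capacity] = {};
        int head_ = 0;
        int count_ = 0;
    };

    int main() {
        RingBuffer rb;
        uint8_t b = 0;
        assert(!rb.pop(b));                  // popping an empty buffer fails
        assert(rb.push('a'));
        assert(rb.pop(b) && b == 'a');       // FIFO order is preserved
        for (int i = 0; i < 4; ++i) assert(rb.push('x'));
        assert(!rb.push('y'));               // a full buffer rejects the fifth byte
        return 0;
    }

The tests run on the host, sidestepping the problem of executing a test framework inside the kernel; only the already-tested component gets compiled into the OS.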

Evaluating developers [closed]

I am a technical team leader of a small programming team, working on a project for an external client.
I was recently asked to produce written evaluations of my team members. I feel uncomfortable doing this, because I don't see myself as a management person and have never thought about my colleagues in much more depth than "A is reliable and B is a lazy bum".
But I am expected to produce more elaborate stuff to be read by actual managers, and my manager hinted that the purpose of this is rather to test my evaluation skills.
Any hints or resources on how to produce a quality evaluation? Are there standardized forms? How should I address this?
Thank you.
I have found that Joel's Professional Development Ladder and this Construx site provide great advice on how to start. It helps to understand the various knowledge areas and what developers are expected to know and do. You can then evaluate developers on how competent they are in the various knowledge areas and assign them a level accordingly.
You of course also have to evaluate their work ethic, attitude, etc., which have nothing to do with development as such.
First thing, don't be intimidated by the task. Second, you are a team lead, so your opinion of the people counts; it may be a test, but you should be up to it. Third, if you were doing this informally over a coffee and your boss asked you about someone you would probably have no trouble chatting for a few minutes about your observations of them and what you thought were their strengths and weaknesses. That's what you should write down in your review notes.
Ask your boss if there is a standard format; if you are in a large organisation, HR might have forms and/or systems in place for these sorts of reviews. Otherwise, just give him a paragraph or two in plain English (or your language of choice) on what you think.
You can add colour to your reports by citing work they have done and where they have succeeded or failed.
Some golden rules...
don't get personal
try and be objective and fair
don't hide the truth, however uncomfortable
Good luck, it's all part of stepping up to be a manager and is fun in a way - your opinion is counting.
Tough question! I would suggest you first look back at evaluations that your manager has performed on YOU. This is usually a good example of what you are expected to produce for your teammates. If you have not had any formal evaluation yet, I suggest you ask your HR department or management for a copy of a standard template for such purposes. Most large companies have them.
Evaluating team members can be tricky, especially as a team leader rather than a 'front line' manager. Remember the following:
Be honest, with them and yourself
Evaluate based on performance not gut feeling, or emotion
Never ever evaluate someone better simply because you 'like' them or have empathy for their situation. It always comes back to you in the end.
Edit:
Some further things I thought of; it's been a while since I did evals as a team lead...
When evaluating performance, look not only at what the person needs to improve, but also at what they have done well. Try to present both sides of the story (even if you feel the person is a lazy bum).
Look at quantifiable results: what has the person PRODUCED, and how useful was it to the team as a whole? Remember, even if they pump out thousands of lines of code, that doesn't mean it was all useful, maintainable, or even worth the time.
Good luck!
You could conduct a 360 degree feedback with your team (http://en.wikipedia.org/wiki/360-degree_feedback), motivating each team member to give feedback to his colleagues (and you).

YAGNI - The Agile practice that must not be named? [closed]

As I've increasingly absorbed Agile thinking into the way I work, yagni ("you aren't going to need it") seems to become more and more important. It seems to me to be one of the most effective rules for filtering out misguided priorities and deciding what not to work on next.
Yet yagni seems to be a concept that is barely whispered about here at SO. I ran the obligatory search, and it only shows up in one question title - and then in a secondary role.
Why is this? Am I overestimating its importance?
Disclaimer. To preempt the responses I'm sure I'll get in objection, let me emphasize that yagni is the opposite of quick-and-dirty. It encourages you to focus your precious time and effort on getting the parts you DO need right.
Here are some off-the-top-of-my-head questions one might keep asking.
Are my Unit Tests selected based on user requirements, or framework structure?
Am I installing (and testing and maintaining) Unit Tests that are only there because they fall out of the framework?
How much of the code generated by my framework have I never looked at (but still might bite me one day, even though yagni)?
How much time am I spending working on my tools rather than the user's problem?
When pair-programming, much of the observer's value often lies in saying "yagni".
Do you use a CRUD tool? Does it allow (nay, encourage) you to use it as an _RU_ tool, or a C__D tool, or are you creating four pieces of code (plus four unit tests) when you only need one or two?
TDD has subsumed YAGNI in a way. If you do TDD properly (that is, only write tests that encode required functionality, then develop the simplest code to pass those tests), you are following the YAGNI principle by default. In my experience, it is only when I step outside the TDD box and start writing code before tests, tests for things that I don't really need, or code that is more than the simplest possible way to pass the test, that I violate YAGNI.
In my experience the latter is my most common faux pas when doing TDD -- I tend to jump ahead and start writing code to pass the next test. That often results in me compromising the remaining tests by having a preconceived idea based on my code rather than the requirements of what needs to be tested.
YMMV.
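As a hedged illustration of "the simplest code to pass the test" (the Roman-numeral kata below is my example, not the answerer's): the only tests so far demand the numerals 1-3, so YAGNI says not to build the general subtraction-rule converter (IV, IX, XL, ...) until a test actually asks for it.

    #include <cassert>
    #include <string>

    // Simplest code that passes the current tests: repeat 'I' n times.
    // Deliberately not general; generalize only when a new test demands it.
    std::string toRoman(int n) {
        return std::string(n, 'I');
    }

    int main() {
        assert(toRoman(1) == "I");
        assert(toRoman(2) == "II");
        assert(toRoman(3) == "III");
        // When a test for toRoman(4) == "IV" arrives, *then* generalize.
        return 0;
    }
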
Yagni and KISS (keep it simple, stupid) are essentially the same principle. Unfortunately, I see KISS mentioned about as often as I see "yagni".
In my part of the wilderness, the most common cause of project delays and failures is poor execution of unnecessary components, so I agree with your basic sentiment.
The freedom to change drives YAGNI. In a waterfall project, the mantra is "control scope". Scope is controlled by establishing a contract with the customer. Consequently, the customer stuffs everything they can think of into the scope document, knowing that changes to scope will be difficult once the contract has been signed. As a result, you end up with applications that have a laundry list of features, not a set of features that have value.
With an agile project, the product owner builds a prioritized product backlog. The development team builds features based on priority, i.e., value. As a result, the most important stuff gets built first. You end up with an application that has features that are valued by the users. The stuff that is not important falls off the list or doesn't get done. That is YAGNI.
While YAGNI is not a practice, it is a result of the prioritized backlog. The business partner values the flexibility afforded to the business, given that they can change and reprioritize the product backlog from iteration to iteration. It is enough to explain that YAGNI is the benefit gained when we readily accept change, even late in the process.
The problem I find is that people tend to bucket even writing factories or using DI containers (unless you already have one in your codebase) under YAGNI. I agree with JB King there. For many people I've worked with, YAGNI seems to be a license to cut corners and write sloppy code.
For example, I was writing a PinPad API for abstracting PIN pads from multiple models/manufacturers. I found that unless I had the overall structure, I couldn't even write my unit tests. Maybe I'm not a very seasoned practitioner of TDD. I'm sure there will be differing opinions on whether what I did was YAGNI or not.
I have seen a lot of posts on SO referencing premature optimization, which is a form of yagni, or at least ydniy (you don't need it yet).
I don't see YAGNI as the opposite of quick-and-dirty, really. It is about doing just what is needed and no more, not planning as if the software you write has to last 50 years. The topic may come up rarely because there aren't really that many questions to ask about it, at least to my mind, similar to the "don't repeat yourself" and "keep it simple, stupid" rules that have become common but aren't necessarily dissected and analyzed in 101 ways. Some things are simple enough that people usually pick them up after a little practice. And some things develop behind the scenes; if you turn around and look, you may notice them, which may be another way to state the same point.