I'm in first-year computer science at university, and for my current assignment I need to write a program along with an algorithm, a hand trace, and a flow chart. I kind of get the usefulness of the algorithm and the hand trace, but to me the flowchart seems like a monumental waste of time, especially since the program took me about an hour to write and was super simple. I was just wondering whether real programmers actually use flow charts for help.
Flow charts are one of many ways I'll work through the planning of a program or script, but I won't necessarily use them every time. Pick the right tool for the situation - sometimes it's pseudocode, sometimes it's a flowchart, sometimes it's prose or simple wireframes/schematics. Often, it's a combination.
Once you get to any application or system that's of non-trivial size, some sort of visualization or planning process that doesn't involve code on a display becomes immensely helpful. It also aids in communicating your ideas to users, testers and other developers - you won't be building your application in a vacuum.
From about 1980 to 2000 I used to "draw" a lot of "Chapin" charts -- you fed a PL/I sort of syntax into a program and it printed a form of flow chart on 14" wide fanfold paper. These were incredibly useful and we used them extensively in code reviews.
But then they did away with paper, and the charts are not nearly as useful if you can't set them on your desk and mark on them, and even if you print them out on letter-sized paper instead of fanfold you lose a lot of effectiveness because you can't easily stretch them out to see 3-4 pages at a time.
Similarly, code reviews are less effective now that code is reviewed online rather than on fanfold listings. It's harder to follow the flow of a long method, and there's no good way to make notes, highlight/underline lines, etc.
Using this technology I once did the design for an incredibly complex algorithm internal to a database system. The detailed flow charts ran, I'm thinking, about 1000 lines, and got down to the details of variable names, etc. We reviewed that intensively. Then I was transferred to another area, and an (admittedly bright) new hire was given the task of actually coding the algorithm. It came to about 4000 lines, IIRC. There were 3 minor bugs found in testing.
A lot has been lost in the past 15-20 years, due to "technological advancement".
I have been tasked with developing a document for internal testing standards and procedures in our company. I've been doing plenty of research and found some good articles, but I always like to reach out to the community for input on here.
That being said, my question is this: How do you take a company that has a very large legacy code base that is barely testable, if at all testable, and try to test what you can efficiently? Do you have any tips on how to create some useful automated test cases for tightly coupled code?
All of our new code is being written to be as loosely coupled as possible, and we're all pretty proud of the direction we're going with new development. For the record, we're a Microsoft shop transitioning from VB to C# ASP.NET development.
There are actually two aspects to this question: technical, and political.
The technical approach is quite well defined in Michael Feathers' book Working Effectively With Legacy Code. Since you can't test the whole blob of code at once, you hack it apart along imaginary non-architectural "seams". These would be logical chokepoints in the code, where a block of functionality seems like it is somewhat isolated from the rest of the code base. This isn't necessarily the "best" architectural place to split it, it's all about selecting an isolated block of logic that can be tested on its own. Split it into two modules at this point: the bulk of the code, and your isolated functions. Now, add automated testing at that point to exercise the isolated functions. This will prove that any changes you make to the logic won't have adverse effects on the bulk of the code.
Now you can go to town and refactor the isolated logic following the SOLID OO design principles, the DRY principle, etc. Martin Fowler's Refactoring book is an excellent reference here. As you refactor, add unit tests to the newly refactored classes and methods. Try to stay "behind the line" you drew with the split you created; this will help prevent compatibility issues.
What you want to end up with is a well-structured set of fully unit tested logic that follows best OO design; this will attach to a temporary compatibility layer that hooks it up to the seam you cut earlier. Repeat this process for other isolated sections of logic. Then, you should be able to start joining them, and discarding the temporary layers. Finally, you'll end up with a beautiful codebase.
Note in advance that this will take a long, long time. And thus enters the politics. Even if you convince your manager that improving the code base will enable you to make changes better/cheaper/faster, that viewpoint probably will not be shared by the executives above them. What the executives see is that time spent refactoring code is time not spent on adding requested features. And they're not wrong: what you and I may consider to be necessary maintenance is not where they want to spend their limited budgets. In their minds, today's code works just fine even if it's expensive to maintain. In other words, they're thinking "if it ain't broke, don't fix it."
You'll need to present to them a plan to get to a refactored code base. This will include the approach, the steps involved, the big chunks of work you see, and an estimated timeline. It's also good to present alternatives here: would you be better served by a full rewrite? Should you change languages? Should you move to a service-oriented architecture? Should you move it into the cloud and sell it as a hosted service? All these are questions they should be considering at the top, even if they aren't thinking about them today.
If you do finally get them to agree, waste no time in upgrading your tools and setting up a modern development chain that includes practices such as peer code reviews and automated unit test execution, packaging, and deployment to QA.
Having personally barked up this tree for 11 years, I can assure you it is anything but easy. It requires a change all the way at the top of the tech ladder in your organization: the CIO, CTO, SVP of Development, or whoever. You also have to convince your technical peers: you may have people with a long history with the old product who don't really want to change it. They may even see your complaints about its current state as a personal attack on their skills as coders, and may look to sabotage or sandbag your efforts.
I sincerely wish you nothing but good luck on your venture!
This is a bad title, but hopefully my description is clearer. I am managing a modeling and simulation application that is decades old. For the longest time we have been interested in writing some of the code to run on GPUs, because we believe it will speed up the simulations (yes, we are very behind the times). We finally have the opportunity to do this (i.e. money), so now we want to make sure we understand the consequences, specifically for sustaining the code. The problem is that since many of our users do not have high-end GPUs (at the moment), we would still need the code to support normal CPU processing as well as GPU processing (i.e. I believe we will end up with two sets of code performing very similar operations). Has anyone been through this, and do you have any lessons learned and/or advice to share? If it helps, our current application is written in C++ and we are looking at going with NVIDIA and writing the GPU code in CUDA.
This is similar to maintaining a hand-crafted assembly version, with vectorization or other specialized instructions, alongside a C/C++ version. There is a lot of long-term experience out there with doing this, and this advice is based on it. (My own experience with the GPU case is both shorter term (a few years) and smaller (a few cases).)
You will want to write unit tests.
The unit tests use the CPU implementations (because I have yet to find a situation where they are not simpler) to test the GPU implementations.
The test runs a few simulations/models, and asserts that the results are identical if possible. These run nightly, and/or with every change to the code base as part of the acceptance suite.
This ensures that neither code base goes "stale", since both are constantly exercised, and the two independent implementations actually help with maintaining each other.
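For what it's worth, here is a minimal C++ sketch of that kind of comparison test. The names are made up, and the "GPU" function here is only a placeholder for where the real CUDA kernel launch would go; note too that for floating point you will often want a tolerance rather than bitwise equality:

    // Sketch of a CPU-vs-GPU comparison test. saxpyCpu is the reference
    // implementation; saxpyGpu stands in for the code path that would launch
    // a CUDA kernel in the real code base (here it just reuses the CPU math
    // so the sketch compiles and runs anywhere).
    #include <cassert>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    void saxpyCpu(float a, const std::vector<float>& x, std::vector<float>& y) {
        for (std::size_t i = 0; i < x.size(); ++i)
            y[i] = a * x[i] + y[i];
    }

    void saxpyGpu(float a, const std::vector<float>& x, std::vector<float>& y) {
        // Real version: copy x and y to the device, launch the kernel, copy back.
        saxpyCpu(a, x, y);  // placeholder so the sketch is self-contained
    }

    int main() {
        std::vector<float> x(1000), yCpu(1000, 1.0f), yGpu(1000, 1.0f);
        for (std::size_t i = 0; i < x.size(); ++i)
            x[i] = 0.001f * static_cast<float>(i);

        saxpyCpu(2.0f, x, yCpu);
        saxpyGpu(2.0f, x, yGpu);

        // The two implementations should agree within a tolerance; demanding
        // bitwise-identical results is often too strict for GPU arithmetic.
        for (std::size_t i = 0; i < x.size(); ++i)
            assert(std::fabs(yCpu[i] - yGpu[i]) < 1e-5f);
        return 0;
    }

In the real test the same idea scales up: run a small model through both paths and compare the outputs, nightly or on every commit.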
Another approach is to run blended solutions. Sometimes running a mix of CPU and GPU is faster than one or the other, even if they are both solving the same problem.
When you have to switch technology (say, to a new GPU language, or to a distributed network of devices, or whatever new whiz-bang that shows up in the next 20 years), the "simpler" CPU implementation will be a life saver.
I need to interview a candidate with over 8 years of experience in Linux using C/C++.
What would be the best way to judge such a candidate?
Do I need to test his understanding of algorithms?
Do I need to test his programming skills by asking him to write a program?
How should I test his understanding of Linux?
It depends entirely on what you want him to do. You haven't said anything about the position you are hiring for, but if, say, you want him to write C#, then you need him to prove his adaptability.
Do you need him to write (or modify or bugfix) algorithms? If not, then it is pointless determining how good at them he is.
On the other hand, in order to understand his abilities, you may be better off talking to him about a domain that he is familiar with. You should certainly get him to describe a recent project that he has been involved in, what his contribution was, what the challenges were, what went well, what lessons he learnt.
"Over 8 years of experience in Linux using C/C++" is a fairly vague requirement without reasons for the time length. What are the specific reasons for that time length? Would you prefer more C/C++ experience if some of it were BSD or Solaris or other Unix? Would you prefer less time or a wider experience with different distributions; would you prefer 5 years experience with Red Hat or 7 years experience spanning Red Hat, Debian, SUSE, Gentoo, and others. What are you trying to get from the person you hire, that relates to the amount of time?
The best way to judge a candidate, any candidate, is on how well he can do the job, not how good the qualifications are. You mentioned Lead Developer, owning a product feature and eventually new features. What sort of feature? A highly responsive and adaptive UI? A UI-free recursive data mining calculation? Offline document scanning/indexing code? Custom device drivers?
Basic understanding of algorithms is important, but that can be tested easily in a phone interview. The ability to map out an algorithm for problem solving, and clearly state the reasons for preferring one over another is much more useful, and harder to test.
Testing his programming skills by asking him to write a program is a fairly useful BS-indicator test; there are quite a few people who are adept at slinging manure but can't actually write a line of code. Another useful test is to give him some code with a defect and ask him what's wrong with it and how he would fix it.
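For the defect-spotting part, even a tiny made-up snippet works; for example, a classic off-by-one like the one below. A candidate worth hiring should spot the out-of-bounds read quickly and explain the fix (a corrected version is included for comparison):

    // Interview-style exhibit: sumBuggy reads one element past the end of the
    // array (off-by-one in the loop condition). It is shown only as the defect
    // to be spotted and is never called; sumFixed is the corrected version.
    #include <cstdio>

    int sumBuggy(const int* values, int count) {
        int total = 0;
        for (int i = 0; i <= count; ++i)   // defect: should be i < count
            total += values[i];            // reads values[count], out of bounds
        return total;
    }

    int sumFixed(const int* values, int count) {
        int total = 0;
        for (int i = 0; i < count; ++i)
            total += values[i];
        return total;
    }

    int main() {
        int data[] = {1, 2, 3, 4};
        std::printf("%d\n", sumFixed(data, 4));  // prints 10
        return 0;
    }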
To test his understanding of Linux, I would start with a basic BS test: fire up a Linux box and ask him to perform some basic tasks, including maybe writing and compiling "Hello world". This will identify the BS artists. Then I would go with some stock questions showing that he understands the basics of the Linux design: some file system knowledge, some knowledge of tools, how he would add removable-device permissions for a user using SELinux, how he'd configure access to an application that needs elevated privileges so that users without those privileges can still use the application.
But ultimately, these are all pretty generic ideas; IMHO, it's much more useful to think in terms of "what do we want the candidate to accomplish", than "how do we test basic skills".
Maybe you should focus on what you need. Can he help you? Has he solved problems similar to yours? What are his expectations, what are yours?
I interview people like this all the time. The answer is that no matter how much experience he has, you must prove to yourself that he is capable of the job.
Joel Spolsky is right, hiring badly is destructive to a team and organization. It should be avoided at all costs.
The more I think about it, the more I begin to think good professional developers must be good communicators - in their code and with people. Think of the old saying - the more you know, the more you realise you don't know.
That's not to say you want somebody who isn't confident: but neither do you want someone that is cocky and unwilling to interact with others.
Recently someone asked in this posting whether they should become a programmer. No matter how a programmer starts out, they will likely learn from the many mistakes they've made, and as a result have an element of humility about themselves and about development in general.
A good programmer continues to learn and keeps a relatively open mind.
When in school it was often a requirement to flowchart the little programs that we wrote line for line.
Those flow charts tended, due to the size of the pictures, to be very large and were often tedious to draw.
It was always to such detail that you were essentially writing code anyway.
I use flowchart/UML style techniques to develop higher level things but when it gets down to actual loops and what not it seems like overkill.
I will often pseudo-code more detailed algorithms but still not to the super fine grained point.
Is this just one of those things where, in school, the programs were so tiny there would be nothing else to 'flow-chart', so they had us do the minutiae?
Flowcharts, no -- Sequence diagrams, yes. I try to keep them at a very high-level to communicate the idea to someone quickly. I do not try to get every detail in. I might supplement with another diagram to show an edge case, if it seems important.
It's great for communication as a sketch -- I think it's not right for a specification (but would be a good intro to a detailed section)
Flow charts for the ifs and whiles of real code: never (in 30 years) found them useful.
As a discussion aid for eliciting requirements ... so when we're here, what could happen? ... how would you decide ... what would you do if it's > 95% ... they can be helpful. A certain kind of user finds such diagrams on the whiteboard easy to talk about.
To be absolutely honest, I am extremely glad I was required to accompany any assignment with flowcharts. They made me think structurally, something I was lacking (and, perhaps, still lacking to a certain extent).
So don't be quick to jump on an "I'm off to play the grand piano" bandwagon; flowcharts really do work.
More than once I have found myself in a bit of a bind over some non-trivial logic. After laying out the logic in flowchart form on a sheet of paper (it takes a couple of minutes), it all inevitably becomes clear to me.
Yeah, I agree. The point was to get you to understand flow charts, not to imply that you should use them for line-by-line code coverage.
I don't know why you'd even waste time with pseudo-code except for demonstration, honestly, unless it's some really low-level programming.
I don't use them myself. But a coworker who only programs every once in a while does. It's a very handy way for him to remember the nuts-and-bolts of that program he wrote a year ago. He doesn't do much programming, so it's not worth it for him to learn sequence diagrams and things like that.
It's also the type of diagram that other people who hardly ever program will be able to read easily, which in his position is a plus.
I don't find great detail in a flowchart to be very helpful. I use UML-style techniques on a sheet of paper. Use a whiteboard in a group setting. Mid-level class diagrams and sequence diagrams can be extremely helpful to organize your ideas and communicate your design intentions.
Sometimes on a whiteboard to describe a process, but never in actual design or documentation. I'd describe them more as "flowchart-like" since I'm not always particular about the shapes.
We have an in-house application that has some fairly complex workflow in it. A flowchart forms a big part of the spec of this part of the system. So yes, flowcharts are a useful tool for spec'ing a system. They are also normally understood by non-technical people, which is useful if they are part of the user requirements. No, I would not normally use them at a very low level, nor would I expect part of a system to be spec'ed or documented only by flowcharts.
TDD is a good option for nuts and bolts code, if you are so inclined, and it comes with a lot of other benefits.
We have a large, organically grown application that has very little documentation, so using flow charts to document components within the application has proven very useful to the business side of the operation, as even they don't understand how everything fits together.
They aren't at a line-by-line level, but they do cover all branches of business logic with the subsequent processing and outputs (although not following strict flow-chart rules - some blocks describe multiple processes).
I'm trying to pick a performance analyzer to use. I'm a beginner developer and not sure what to look for in a performance analyzer. What are the most important features?
If you use valgrind, I can highly recommend KCacheGrind to visualize performance bottlenecks.
I would like a profiler to show the following features/output information (a rough hand-rolled sketch of items 2 and 3 follows the list).
1.) It should be able to show the total clock cycles consumed, overall and per function.
2.) If not clock cycles, it should at least report the total time consumed and the time spent in each function.
3.) It should also be able to tell how many times each function is called.
4.) It would be nice to know memory reads, memory writes, cache misses, and cache hits.
5.) Code memory used by each function.
6.) Data memory used: global constants, stack, and heap usage.
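Even without a full profiler, a rough approximation of items 2 and 3 (time per function and call counts) can be collected by hand. A minimal C++ sketch of the idea, with made-up names (ScopeTimer, g_stats), just to show the kind of numbers meant:

    // Not a real profiler -- just a sketch of gathering per-function wall time
    // and call counts by hand, to illustrate the kind of output items 2 and 3 ask for.
    #include <chrono>
    #include <cstdio>
    #include <map>
    #include <string>

    struct FunctionStats { long long calls = 0; double seconds = 0.0; };
    static std::map<std::string, FunctionStats> g_stats;

    struct ScopeTimer {
        std::string name;
        std::chrono::steady_clock::time_point start;
        explicit ScopeTimer(std::string n)
            : name(std::move(n)), start(std::chrono::steady_clock::now()) {}
        ~ScopeTimer() {
            auto end = std::chrono::steady_clock::now();
            FunctionStats& s = g_stats[name];
            s.calls += 1;
            s.seconds += std::chrono::duration<double>(end - start).count();
        }
    };

    double work(int n) {
        ScopeTimer t("work");                // times this call and counts it
        double sum = 0.0;
        for (int i = 0; i < n; ++i) sum += i * 0.5;
        return sum;
    }

    int main() {
        double total = 0.0;
        for (int i = 0; i < 100; ++i) total += work(100000);
        std::printf("result: %f\n", total);  // keeps the work from being optimized away
        for (const auto& entry : g_stats)
            std::printf("%s: %lld calls, %.6f s\n",
                        entry.first.c_str(), entry.second.calls, entry.second.seconds);
        return 0;
    }

A real profiler gathers this (and much more, such as the cache and memory numbers) without you having to touch the code.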
The two classical answers (assuming you are in *nix world) are valgrind and gprof. You want something that will let you (at least) check how much time you are spending inside each procedure or function.
Stability - being able to profile your process for long durations without crashing or running out of memory. It's surprising how many commercial profilers fail at that.
goldenmean has it right; I would add that line execution counts are sometimes handy as well.
My preference is for sampling profilers rather than instrumented profilers. The profiler should be able to map sample data back to the source code, ideally in a GUI. The two best examples of this that I am aware of are:
Mac OS X: Shark (developer.apple.com)
Linux: Zoom (www.rotateright.com)
All you need is a debugger or IDE that has a "pause" button. It is not only the simplest and cheapest tool, but in my experience, the best. This is a complete explanation why. Note the 2nd-to-last comment.
EDIT because I thought of a better answer:
As an aside, I studied A.I. in the 70s, when automatic programming was an idea very much in the air, and a number of people tried to accomplish it.
(I took my crack at it.)
The idea was to take a knowledge structure of a domain, plus the desired functional requirements, and automatically generate (and debug) a program that would accomplish those requirements.
It would be a tour-de-force in automated reasoning about the domain of programming.
There were some tantalizing demonstrations, but in a practical sense the field didn't go very far.
Nevertheless, it did contribute a lot of ideas to programming languages, like contracts and logical verification techniques.
An ideal profiler, built for the purpose of optimizing programs, would get a sample of the program's state every nanosecond.
Either on-the-fly or later (ideal, remember?) it would carefully examine each sample, to see if, knowing the reasons for which the program is executing, that particular nanosecond of work was actually necessary or could be somehow eliminated.
That would be billions of samples and a lot of reasoning, but of course there would be tremendous duplication, because any wastage costing, say, 10% of time would be evident on 10% of samples.
That wastage could be recognized on a lot fewer than a billion samples.
In fact, 100 samples or even fewer could spot it, provided they were randomly chosen in time, or at least in the time interval the user cares about.
This is assuming the purpose is to find the wastage so we can get rid of it, as opposed to measuring it with much precision.
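To put a rough number on that: if some avoidable work accounts for a fraction p of total time, the chance that n random-in-time stack samples all miss it is (1 - p)^n, so even a few dozen samples are very unlikely to miss a 10% cost. A small C++ sketch of the arithmetic (the 10% figure is just the example from above):

    // Probability that n random-in-time stack samples all miss a piece of work
    // that accounts for fraction p of the total time: (1 - p)^n.
    #include <cmath>
    #include <cstdio>

    int main() {
        const double p = 0.10;                       // wastage costing 10% of time
        const int sampleCounts[] = {10, 20, 50, 100};
        for (int n : sampleCounts) {
            double missAll = std::pow(1.0 - p, n);
            double expectedHits = p * n;             // samples expected to land in it
            std::printf("n = %3d: P(miss entirely) = %.5f, expected hits = %.1f\n",
                        n, missAll, expectedHits);
        }
        return 0;
    }

With 20 samples the wastage shows up on about two of them, which is usually enough to see where it is.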
Why would it be helpful to apply all that reasoning power to each sample?
Well, if the programs were little, and it were only looking for things like O(n^2) code, it shouldn't be too hard.
But suppose the state of the program consisted of a procedure stack 20-30 levels deep, possibly with some recursive function calls appearing more than once, possibly with some of the functions being calls to external processors to do IO, possibly with the program's action being driven by some data in a table.
Then, to decide if the particular sample is wasteful requires potentially examining all or at least some of that state information, and using reasoning power to see if it is truly necessary in accomplishing the functional requirements.
What the profiler is looking for is nanoseconds being spent for dubious reasons.
To see the reason it is being spent requires examining every function call site on the stack, and the code surrounding it, or at least some of those sites.
The necessity of the nanosecond being spent requires the logical AND of the necessity of every statement being executed on the stack.
It only takes one such function call site to have a dubious justification for the entire sample to have a dubious justification.
So, if the entire purpose is to find nanoseconds being spent for dubious reasons, the more complicated the samples are, the better,
and the more reasoning power brought to bear on each sample, the better.
(That's why bigger programs have more room for speedup - they have deeper stacks, hence more calls, hence more likelihood of poorly justified calls.)
OK, that's in the future.
However, since we don't need a huge number of samples (10 or 20 is very useful), and since we already have highly intelligent automatic programmers (powered by pizza and soda),
we can do this now.
Compare that to the tools we call profilers today.
The very best of them take stack samples, but what's their output?
Measurements. "Hot paths". Rat's nest graphs. Eye-candy.
From those, even an artificially intelligent programmer would easily miss large inefficiencies, except for the ones that are exposed by those outputs.
After you fix the ones you do find, the ones you don't find are the ones that make all the difference.
One of the things one learns studying A.I. is, don't expect to be able to program a computer to do something if a human, in principle, can't also do it.