I had a little argument, and was wondering what people out there think:
In C++ (or in general), do you prefer code broken up into many shorter functions, with main() consisting of just a list of functions, in a logical order, or do you prefer functions only when necessary (i.e., when they will be reused very many times)? Or perhaps something in between?
Small functions, please
It is the conventional wisdom that smaller functions are better, and I think it's true. In fact, there is a company with an analysis tool that rates individual functions by how many decisions they make compared to the number of unit tests that they have.
The theory is that you may or may not be able to reduce complexity in an entire application, but you have complete control over how much complexity is in any given function.
A measurement called cyclomatic complexity is thought to correlate positively with bad code: the more paths there are through a method, the higher its CCN number, the more poorly it is written, the harder it is to understand (and hence to change, or even to get right in the first place), and the more unit tests it will need.
Ok, found the tool. It is called, ahem, the Change Risk Analysis and Predictions index.
Lately, the principle of encoding information only once has grown new acronyms, specifically DRY (Don't Repeat Yourself) and DIE (Duplication is Evil) ...
I believe we can in part thank the RoR community for promoting this philosophy...
Split the functions, but never split functionality.
Functionality may be classified into layers, and each layer may then be split into different functions. For example, when we are computing a sine series, the main loop that does the summing and subtracting should be in the primary function. Consider this layer 1. The functionality for computing a power can be classified into layer 2 and implemented as a sub-function; computing a factorial also belongs to layer 2 and would be another sub-function. Always consider functionality, never count the number of lines. The number of lines may vary from 3 to 300; it doesn't matter. This adds readability and maintainability to our code. That is my idea about splitting.
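As a minimal C++ sketch of that layering, with hypothetical names (the series is the usual Taylor expansion of sine):

// Layer 2: supporting functionality.
double power(double x, int n) {          // x raised to a non-negative integer power
    double result = 1.0;
    for (int i = 0; i < n; ++i) result *= x;
    return result;
}

double factorial(int n) {                // n! (as a double, fine for small n)
    double result = 1.0;
    for (int i = 2; i <= n; ++i) result *= i;
    return result;
}

// Layer 1: the primary function owns the summing and subtracting loop.
double sine(double x, int terms) {
    double sum = 0.0;
    for (int k = 0; k < terms; ++k) {
        double term = power(x, 2 * k + 1) / factorial(2 * k + 1);
        sum += (k % 2 == 0) ? term : -term;  // alternate add and subtract
    }
    return sum;
}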
I think the only answer is something in between. If you break up functions every time possible, it becomes unreadable. Likewise, if you never break up, it also becomes unreadable.
I like to group code into functions along semantic lines: one logical unit for some calculation. Keep the units small, but big enough to actually do something useful.
My favorite granularity rule of thumb for a function is "no more than 24 lines of < 80 characters each" -- and that's not just because 80 x 24 terminals were all the rage "back when I started"; I think it's a reasonable guideline for functions you can "grasp as one eyeful", at least in C or languages not much richer than C. "A function does only one thing", AKA "a function has one function" (playing on the meaning of "function" as "role" or "purpose"!-) is a secondary rule I use in languages where "too much functionality" can easily be packed in 24 lines. But the "lexical eyeful" guideline -- 24 x 80 -- is still my main one.
Small functions are good and smaller ones are better.
About five to eight lines of code is my upper limit on function size. Beyond that, it's too complicated. You should be able to:
Assume that a callee does what its name would indicate,
Read a function's definition in a matter of seconds, and
Convince yourself quickly that the first assumption implies that the present function is correct.
The other thing is that you should use your functions BEFORE you write their code. When you see how you intend to use the function, then you'll see what pre- and post-conditions said functions must respect.
Anything that isn't obviously correct at first glance should be proven correct in the running commentary. If that's difficult, factor sub-functions out.
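A tiny sketch of what that discipline looks like in practice, with entirely hypothetical names:

struct Order { /* ... */ };
bool validate_order(const Order& order);
void reserve_stock(const Order& order);
void charge_customer(const Order& order);
void ship_order(const Order& order);

// Short enough to read in seconds; each callee is assumed to do what its name says.
bool process_order(const Order& order) {
    if (!validate_order(order))
        return false;
    reserve_stock(order);
    charge_customer(order);
    ship_order(order);
    return true;
}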
Whatever helps with code reuse and readability works best, I believe.
Making lots of one-line functions just for the sake of it doesn't help with readability. Group them into classes that make sense, and then split the functions up so that you can quickly understand what is going on in each one.
If you have to jump all over to understand what is going on then your design is flawed.
I prefer functions (or methods) which fit within one screenful of code, so I can see at a glance anything I need to reference to understand how that function works. I generally have about 50 lines of space in my editor windows, which are also generally at 80 columns so I can fit two side by side on a monitor and cross reference between two pieces of code.
So, I generally consider 50 lines to be about the maximum. The only time I would consider allowing more is when you have one big long initialization function or something that is completely linear (no variables, conditionals, or loops), since that's not something where you need all that much context and some APIs require a whole bunch of initialization to get up and running, and splitting it into smaller functions wouldn't really help much.
On the whole, though, nice, small, easy-to-understand functions that do one thing and are well named are vastly preferable to big sprawling monstrosities that are hundreds of lines long, with dozens of variables to keep track of and indentation going 10 levels deep.
Another simple reason: A function should be made when a block of code is being reused more than once or twice. For very small bits of code (say one or two statements), macros often alleviate the problem.
Related
I know it helps a lot if we structure our programs using classes, structs etc. but does it help in terms of running speed that we avoid these structures and write code plain in terms of basic C++ syntax?
For example, I am trying to write a program that works on vectors. Now it sounds tempting to write a class vector and define its methods like set_at_index(int i) that sets the value of specific row i of this vector. Furthermore I can check whether i<=N where N is the length of the vector in question.
My concern is that with this routine, every call to set_at_index, which is used a lot, will require an 'if' statement. So if I want my code to run faster, should I avoid it, declare a plain array instead, and manually take care that there are no memory leaks?
Is there any way I can check for the memory leaks without putting burden on the code speed?
Yes, bounds checking will take slightly more time. But it will take so little extra time that it will only matter if the code is being run 28894389375 times and then it might add up to a millisecond. Note that std::vector only performs bounds checking if you use the at member function, not if you use operator[]. Also, if you are doing anything like writing to a file or printing text to the console, doing that one time will likely take more time than ten million bounds-checked array accesses, because I/O is relatively very very slow.
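For example, with a plain std::vector:

#include <vector>
#include <stdexcept>

void demo() {
    std::vector<int> v(10, 0);
    v[3]    = 42;   // operator[]: no bounds check; out-of-range access is undefined behaviour
    v.at(3) = 42;   // at(): bounds-checked; throws std::out_of_range on a bad index
    try {
        v.at(99) = 1;            // throws instead of silently corrupting memory
    } catch (const std::out_of_range&) {
        // handle the error
    }
}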
Typically, without bounds checking code using classes will run at the same speed as code using plain arrays. The problem with manually managing memory like you suggest is that it's easy to forget to clean it up, or to clean it up only through one path of execution through the program, or to fail to clean it up in the event of an exception. It's really hardly ever worth it. Also, it'll be just as fast to use a vector class without bounds checking as it will be to use a dynamic array without bounds checking. You pay for it either way.
I also suggest using std::vector instead of writing your own vector class, since the standard library implementers do pretty much every optimisation you could do yourself, and they usually have the advantage of writing the code for their specific compiler, taking advantage of things only that compiler does because they know its implementation. The STL classes are also rigorously tested and written by experts (usually).
You should write your code first, then measure with a profiler to see the bottlenecks in your code if it is not fast enough already, then optimise the bottlenecks. I will bet that bounds checking on arrays is probably not going to be one of those bottlenecks.
Checking for memory leaks can be done with a tool like valgrind. You don't do it in the code itself.
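For example, a typical invocation against an unoptimised debug build (so the reports carry useful line information) looks something like this:

valgrind --leak-check=full ./myprogram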
Don't try to over optimize before you even start writing. Go ahead and write code that is easily maintainable, readable, and as bug free as possible. Once you have things working, you can start profiling to see the real bottlenecks.
"Premature optimization is root of all evils" - Donald Knuth. (this is true 97% of the time).
Unless you profile your application and see that your class encapsulation is a bottleneck that slows your application down by a significant amount, don't hesitate to use high-level structures. They bring you plenty of benefits: readability, maintainability, and understanding of what you are doing. That's what OOP is for: large-scale programs.
Some good answers have already been posted, and premature optimization is indeed inadvisable as others have said. However, let me put your question in slightly another light.
I know it helps a lot if we structure our programs using classes, structs etc. but does it help in terms of running speed that we avoid these structures and write code plain in terms of basic C++ syntax?
Theoretically, most properly written C++ code should run just as fast with fully developed classes as without, but
there are exceptions to the rule;
the effort required to write the C++ code theoretically properly may be too great; and
the same features of the C++ compiler that make it hard to write incorrect code can make it all too easy to write grossly inefficient code.
Point-by-point remarks follow.
Consider a complex three-dimensional vector type, of which each instance consists of six doubles (three real parts and three imaginary parts). If there were not so many doubles, your compiler might load them directly into your microprocessor's registers, but with six they are likely to remain on the stack when the complex three-dimensional vector is loaded. Some operations however on a complex three-dimensional vector do not require all six doubles, but only one, two or three of them. If so, then it might be preferable to store the six floating-point components separately. Thus, rather than an array of 1000 vectors, you'd keep six arrays of 1000 doubles each. Of course, one can (and probably should) bind the arrays together in a class of some kind, but -- for efficiency reasons only -- a good design might never explicitly associate individual elements from one array to another.
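A rough sketch of the two layouts, with hypothetical names; whether the second one actually wins depends on your access patterns, so measure:

// Array of structures: each vector's six doubles travel together.
struct ComplexVec3 {
    double re[3];
    double im[3];
};
ComplexVec3 aos[1000];

// Structure of arrays: components stored separately, so an operation that
// only touches, say, the real x components reads one sixth of the data.
struct ComplexVec3Soa {
    double re_x[1000], re_y[1000], re_z[1000];
    double im_x[1000], im_y[1000], im_z[1000];
};
ComplexVec3Soa soa;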
Sometimes, you know where your data is and what you want to do with it, and C++'s elaborate organizational and access-control facilities only get in your way. In this case, you might skip the high-level C++ and just do what you want in primitive, hackworthy, brutish, machete-wielding C-style code. Indeed, C++ explicitly supports this style of coding by making it possible -- nay, easy -- to wrap the primitive C code safely within a module and thus to hide its horror from the rest of your beautiful C++ program. Of course, if you hand your code a machete, so to speak, then you take a risk, don't you, because your code may hack up data you never wanted it to, and your compiler will stand aside and let it do it; but sometimes the risk is worth the gain, and sometimes the risk is even fun (and character-building) for a programmer's change of pace.
This point is the most subtle of the three. Where a user-defined type consists partly of other user-defined types, multiple layers of constructors will be implicitly invoked. This is great, and usually it is what you want, especially if you have a good unit-testing regime at each layer. The rose however has a thorn, as it were. A properly written constructor is careful never to lose anything it needs. So, unless the programmer is most careful, a constructor may quietly make a lot of strictly unnecessary copies of very large objects. Sometimes, the programmer will mentally lose track of all the levels of implicit invocation, which he never would have done if he had had to handle each invocation explicitly. Also, your data in an object of one type may lack access to a member function to which it can easily gain access, as long as it temporarily copies itself to an object of another type (you can avoid the copy with handle types, reference counting and so forth, but this is not free: it's quite a bit of work). Even if the programmer is conscious of the implicit copy, the implicit copy is so much easier to code in the moment that the temptation to do so is sometimes too great -- especially when a deadline looms! Several hidden inefficiencies can arise in these ways. One can, and should, work around such inefficiencies, of course, but it can take a lot of coding effort to do so; and even then, your compiler is so busy helping you to avoid logical errors that it tends to let you create inadvertent inefficiencies you would never purposely have created. The unnecessary, hidden copying of data is a much bigger problem in C++ than it ever was in C.
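A small illustration of the kind of quiet copy being described, using hypothetical types:

#include <vector>

struct Image {
    std::vector<unsigned char> pixels;   // possibly many megabytes
};

// Pass-by-value: every call copies the whole pixel buffer.
double brightness_by_value(Image img);

// Pass-by-const-reference: no copy, same observable result for a read-only function.
double brightness_by_ref(const Image& img);

struct Thumbnail {
    Image source;
    // The parameter is one copy; the member initializer makes a second,
    // unless the programmer remembers to std::move it.
    explicit Thumbnail(Image img) : source(img) {}
};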
All in all, I would say that the C++ trade-off is worth it 80 percent of the time. C++'s organizational and access-control facilities merit the effort it takes to apply them properly. If your question regards the 20 percent, well, there is more than one valid approach to programming, in my view. Sometimes it really does help "that we avoid these structures and write code plain in terms of basic C++ syntax," as you have said.
Usually, no. Sometimes, yes. I think that the earlier answers are right, though, that the particular example you have posed is probably better treated in boring, neat, orderly C++, without tricks.
Two things:
DO NOT do any kind of premature optimization. CPUs are fast nowadays, and compilers are smart, able to figure out optimizations that you wouldn't think of in months of looking at your code.
You can easily check things like memory leaks by profiling your code and/or using conditional compilation. Leaks shouldn't occur in release versions, so you can skip those checks there.
I'm asking this with full knowledge that this idea is probably well covered in a subject unfamiliar to me. Suppose you're writing a small piece of code that takes an input of an arbitrary number of variables. Those variables can have several states, namely:
Correct Data
Incorrect Data (outside range, improper formatting, whatever)
Unknown (Null)
So if we have 3 input variables, and 3 states for each of those variables, we end up with 27 possible scenarios. Suppose I have to do some logic based on the state of certain variables, or the combination of states (AND, NAND, OR, etc). Can I easily structure a program in such a way that I provably cover all scenarios without an absolute mess of if/else style logic? The first thing that came to mind was state machines, but after looking at them for a bit I'm not entirely convinced it's the same thing.
There will be if style logic, but you can use karnaugh maps to make it much cleaner and be sure that you've covered every possibility. What you do, is you make a grid showing every possible combination of states. Then, mark each state in a different way depending on the way you want to react to it. The goal of this is to group states. Then, you can easily see if your groups of states are logically "close together," and if so, you can simplify your control logic. A quick search for karnaugh maps will bring up explanations that will be much easier to follow thanks to pictures, but the idea is to use the grid to see which variables are irrelevant to a group of states, and optimize them out of the logic.
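One way to make the grid explicit in code is a lookup table indexed by the combined state, so all 27 combinations are visibly accounted for; a sketch with hypothetical names:

enum State  { Correct = 0, Incorrect = 1, Unknown = 2 };
enum Action { Process, Reject, AskAgain };

// 3 variables x 3 states each = 27 cells, each mapped to exactly one action.
Action table[3][3][3];

void init_table() {
    // Grouping, Karnaugh-style: any incorrect input rejects, any unknown asks again,
    // and only the all-correct cell gets processed.
    for (int a = 0; a < 3; ++a)
        for (int b = 0; b < 3; ++b)
            for (int c = 0; c < 3; ++c) {
                if (a == Incorrect || b == Incorrect || c == Incorrect)
                    table[a][b][c] = Reject;
                else if (a == Unknown || b == Unknown || c == Unknown)
                    table[a][b][c] = AskAgain;
                else
                    table[a][b][c] = Process;
            }
}

Action decide(State a, State b, State c) { return table[a][b][c]; }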
I read recently, in an article on game programming written in 1996, that using global variables is faster than passing parameters.
Was this ever true, and if so, is this still true today?
Short answer - No, good programmers make code go faster by knowing and using the appropriate tools for the job, and then optimizing in a methodical way where their code does not meet their requirements.
Longer answer - This article, which in my opinion is not especially well-written, is not in any case general advice on program speedup but '15 ways to do faster blits'. Extrapolating this to the general case is missing the writer's point, whatever you think of the merits of the article.
If I was looking for performance advice, I would place zero credence in an article that does not identify or show a single concrete code change to support the assertions in the sample code, and without suggesting that measuring the code might be a good idea. If you are not going to show how to make the code better, why include it?
Some of the advice is years out of date - FAR pointers stopped being an issue on the PC a long time ago.
A serious game developer (or any other professional programmer, for that matter) would have a good laugh about advice like this:
You can either take out the assert's completely, or you can just add a #define NDEBUG when you compile the final version.
My advice to you, if you really wish to evaluate the merit of any of these 15 tips, and since the article is 14 years old, would be to compile the code in a modern compiler (Visual C++ 10 say) and try to identify any area where using a global variable (or any of the other tips) would make it faster.
[Just joking - my real advice would be to ignore this article completely and ask specific performance questions on Stack Overflow as you hit issues in your work that you cannot resolve. That way the answers you get will be peer reviewed, supported by example code or good external evidence, and current.]
When you switch from parameters to global variables, one of three things can happen:
it runs faster
it runs the same
it runs slower
You will have to measure performance to see what's faster in a non-trivial concrete case. This was true in 1996, is true today and is true tomorrow.
Leaving the performance aside for a moment, global variables in a large project introduce dependencies which almost always make maintenance and testing much harder.
When trying to find legitimate uses of globals variables for performance reasons today I very much agree with the examples in Preet's answer: very often needed variables in microcontroller programs or device drivers. The extreme case is a processor register which is exclusively dedicated to the global variable.
When reasoning about the performance of global variables versus parameter passing, the way the compiler implements them is relevant. Global variables typically are stored at fixed locations. Sometimes the compiler generates direct addressing to access the globals. Sometimes however, the compiler uses one more indirection and uses a kind of symbol table for globals. IIRC gcc for AIX did this 15 years ago. In this environment, globals of small types were always slower than locals and parameter passing.
On the other hand, a compiler can pass parameters by pushing them on the stack, by passing them in registers or a mixture of both.
Everyone has already given the appropriate caveat answers about this being platform and program specific, needing to actually measure timings, etc. So, with that all said already, let me answer your question directly for the specific case of game programming on x86 and PowerPC.
In 1996, there were certain cases where pushing parameters onto the stack took extra instructions and could cause a brief stall inside the Intel CPU pipeline. In those cases there could be a very small speedup from avoiding parameter passing altogether and reading data from literal addresses.
This isn't true any more on the x86 or on the PowerPC used in most game consoles. Using globals is usually slower than passing parameters for two reasons:
Parameter passing is implemented better now. Modern CPUs pass their parameters in registers, so reading a value from a function's parameter list is faster than a memory load operation. The x86 uses register shadowing and store forwarding, so what looks like shuffling data onto the stack and back can actually be a simple register move.
Data cache latency far outweighs CPU clock speed in most performance considerations. The stack, being heavily used, is almost always in cache. Loading from an arbitrary global address can cause a cache miss, which is a huge penalty as the memory controller has to go and fetch the data from main RAM. ("Huge" here is 600 cycles or more.)
What do you mean, "faster"?
I know for a fact, that understanding a program with global variables takes me a whole lot more time than one without.
If the extra time it takes the programmer(s) is less than the time gained by the users when they run the program with globals, then I'd say using global is faster.
But consider that the program is going to be run by 10 people, 24 times a day, for 2 years. And that it takes 2.84632 secs without globals and 2.84217 secs with globals (a saving of 0.00415 secs per run). That's about 727 seconds less of TOTAL runtime. Gaining roughly 12 minutes of run time is not worth the introduction of a global as regards programmer time.
To a degree, any code that avoids processor instructions (i.e., shorter code) will be faster. However, how much faster? Not very! Also note that compiler optimisation strategies may produce the smaller code anyway.
These days this is only an optimisation in very specific applications, usually ultra-time-critical drivers or microcontroller code.
Putting aside the issues of maintainability and correctness, there are basically two factors that will govern performance with regard to globals vs. parameters.
When you make a global you avoid a copy. That's slightly faster. When you pass a parameter by value, it has to be copied so that a function can work on a local copy of it and not damage the caller's copy of the data. At least in theory. Some modern optimizers do pretty tricky things if they identify that your code is well behaved. A function may get automatically inlined, and the compiler may notice that the function doesn't do anything to the parameters, and just optimise away any copying.
When you make a global, you are lying to the cache. When you have all of your variables neatly contained in your function, and a few parameters, the data will tend to all be in one place. Some of the variables will be in registers, and some will probably be in cache right away because they are right 'next to' each other. Using a lot of global variables is basically pathological behavior for the cache. There is no guarantee that various globals will be used by the same functions. Location has no obvious correlation with usage. Perhaps you have a small enough working set that it makes no difference where anything is, and it all winds up in cache.
All of this just adds up to the point made by a poster above me:
When you switch from parameters to global variables, one of three things can happen:
* it runs faster
* it runs the same
* it runs slower
You will have to measure performance to see what's faster in a non-trivial concrete case. This was true in 1996, is true today and is true tomorrow.
Depending on the specific behavior of your exact compiler, and the precise details of the hardware that you use to run your code, it's possible that global variables could be a very slight performance win in some cases. That possibility may be worth trying as an experiment on some code that runs too slow. It's probably not worth dedicating yourself to, as the answer of your experiment could change tomorrow. So, the right answer is almost always to go with "correct" design patterns and avoid the uglier design. Look for better algorithms, more efficient data structures, etc., before intentionally trying to spaghettify your project. Much better payoff in the long run.
And, aside from the dev time vs user time argument, I'll add the dev time vs. Moore's time argument. If you assume Moore's law will make computers something like half again as fast every year, then for the sake of a simple round number, we can assume that progress happens in a steady 1% progress per week. IF you are looking at a microoptimisation that may improve things like 1%, and it will add a week to the project from complicating things, then just taking the week off will have the same effect on average run times for your users.
Perhaps it's a micro-optimisation, and one that would probably be wiped out by optimisations your compiler can generate without resorting to such practices. In fact, the use of globals may even inhibit some compiler optimisations. Reliable and maintainable code is generally of greater value, and globals are not conducive to that.
Using globals to replace function parameters renders all such functions non-reentrant, which may be a problem if multi-threading is used - not a common practice in game development in 1996, but more common with the advent of multi-core processors. It also precludes recursion, although that is probably less of an issue since recursion has its own issues.
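A minimal illustration of that reentrancy point, with hypothetical code: the global-based version cannot safely be called from two threads at once, while the parameter-based one can.

int g_value;                   // the "parameter" smuggled through a global

int square_global() {          // two concurrent callers will trample g_value
    return g_value * g_value;
}

int square_param(int value) {  // reentrant: each call gets its own copy
    return value * value;
}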
In any significant body of code, there is likely to be more mileage in higher-level optimisation of algorithms and data structures. Moreover there are options open to you other than global variables that avoid parameter passing, most especially C++ class-member variables.
If the habitual use of global variables in your code makes a measurable or useful difference to its performance, I would question the design first.
For a discussion of the problems inherent in global variables and some ways to avoid them, see A Pox on Globals by Jack Ganssle. The article relates to embedded systems development, but is generally applicable; it's just that some embedded systems developers think they have good reason to use globals, probably for all the same misguided reasons used to justify it in game development.
Well, if you are considering using global variables instead of parameter passing, that could mean you have a long chain of methods/functions down which you have to pass that parameter. If that is the case, you really WILL save CPU cycles by switching from a parameter to a global variable.
So, to the guys who say "it depends": even with REGISTER parameter passing, there will still be MORE CPU cycles and MORE overhead for pushing the parameters down to the callee.
HOWEVER - I never do that. CPUs are superior now; back when there were 12 MHz 8086s, that could have been an issue. Nowadays, if you don't write embedded or super-turbo-charged performance code, stick to what looks good in code, what doesn't break the code's logic, and what strives to be modular.
And lastly, leave machine-code generation to the compiler - the people who designed it know best how their baby performs and will make your code run at its best.
In general (though it may depend greatly on compiler and platform implementation), passing parameters means writing them onto the stack, which you would not need to do with a global variable.
That said, accessing a global variable may involve a page lookup in the MMU or memory controller, whereas the stack may be located in much faster memory close to the processor...
Sorry, there is no good answer for a general question like this; just measure it (and try different scenarios too).
It was faster when we had <100 MHz processors. Now that processors are 100x faster, this 'problem' is 100x less significant. It wasn't a big deal then; it was a big deal when you did it in assembly and had no (good) optimizer.
Says the guy who programmed on a 3 MHz processor. Yes, you read that right, and 64k was NOT enough.
I see a lot of theoretical answers, but no practical advice for your scenario. What I'm guessing is that you have a large number of parameters to pass down through a number of function calls, and you're worried about accumulated overhead from many levels of call frames and many parameters at each level. Otherwise your concern is completely unfounded.
If this is your scenario, you should probably put all of the parameters in a "context" structure and pass a pointer to that structure. This will ensure data locality, and makes it so you don't have to pass more than one argument (the pointer) at each function call.
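A sketch of that idea, with hypothetical fields:

// Everything the call chain needs, gathered once at the top.
struct RenderContext {
    int width;
    int height;
    float gamma;
    const unsigned char* palette;
};

// Each level now passes a single pointer instead of re-passing every parameter.
void draw_scene(const RenderContext* ctx);
void draw_sprite(const RenderContext* ctx, int x, int y);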
Parameters accessed this way are slightly more expensive to access than true function arguments (you need an extra register to hold the pointer to the base of the structure, as opposed to the frame pointer which would serve this purpose with function arguments), and individually (but probably not with cache effects factored in) more expensive to access than global variables in normal, non-PIC code. However, if your code is in a shared library/DLL using position independent code, the cost of accessing parameters passed by pointer to struct is cheaper than accessing a global variable and identical to accessing static variables, due to GOT and GOT-relative addressing. This is another reason never to use global variables for parameter passing: if you may eventually put your code in a shared library/DLL, any possible performance benefits will suddenly backfire!
Like everything else: yes and no. There is no one answer because it depends on context.
Counterpoints:
Imagine programming on Itanium where you have hundreds of registers. You can put quite a few globals into those, which will be faster than the typical way globals are implemented in C (some static address (although they might just hardcode the globals into instructions if they are word length)). Even if the globals are in cache the whole time, registers may still be faster.
In Java, overuse of globals (statics) can decrease performance because of initialization locks that have to be taken. If 10 classes want to access some static class, they all have to wait for that class to finish initializing its static fields, which can take anywhere from no time up to forever.
In any case, global state is just bad practice; it raises code complexity. Well-designed code is naturally fast enough 99.9% of the time. It seems like newer languages are removing global state altogether. E removes global state because it violates their security model. Haskell removes mutable state altogether. The fact that Haskell exists and has implementations that outperform most other languages is proof enough for me that I will never use globals again.
Also, in the near future, when we all have hundreds of cores, global state isn't really going to help much.
It might still be true, under some circumstances.
A global variable might be as fast as a variable accessed through a pointer, where that pointer is stored in or passed through registers only. So it is a question of how many registers you can use.
To speed-optimize a function call, there are several other things you could do that might perform better than global-variable hacks:
Minimize the count of local variables in the function to a few (explicit) register variables.
Minimize the count of parameters of the function, i.e. by using pointers to structures instead of using the same parameter-constellations in functions that call each other.
Make the function "naked", that means that it does not use the stack at all.
Use "proper-tail-calls" (does neither work with java/-bytecode nor java-/ecma-script)
If there is no better way, hack yourself something like TABLES_NEXT_TO_CODE, which locates your global variables next to the function code. In functional languages this is a backend optimization that uses the function pointer as a data pointer too; but as long as you do not program in a functional language, you only need to locate those variables beside the ones used by the function. Then again, you only want this to remove the stack handling from your function. If your compiler generates assembler code that handles the stack anyway, then there is no point in doing this; you could use pointers instead.
I've found this "gcc attribute overview":
http://www.ohse.de/uwe/articles/gcc-attributes.html
and I can give you these tags for googling:
- Proper Tail Call (it is mostly relevant to imperative backends of functional languages)
- TABLES_NEXT_TO_CODE (it is mostly relevant to Haskell and LLVM)
But you end up with 'spaghetti code' when you use global variables often.
I face a situation where we have many very long methods, 1000 lines or more.
To give you some more detail: we have a list of incoming high-level commands, and each one results in a longer (sometimes huge) list of lower-level commands. There's a factory creating an instance of a class for each incoming command. Each class has a process method, where all the lower-level commands are generated and added in sequence. As I said, these sequences of commands and their parameters quite often cause the process methods to reach thousands of lines.
There are a lot of repetitions. Many command patterns are shared between different commands, but the code is repeated over and over. That leads me to think refactoring would be a very good idea.
On the other hand, the specs we have come in exactly the same form as the current code: a very long list of commands for each incoming one. When I've tried some refactoring, I've started to feel uncomfortable with the specs. I miss the obvious correspondence between the specs and the code, and lose time digging into newly created common classes.
Then here is the question: in general, do you think such very long methods always need refactoring, or would it be acceptable in a case like this?
(unfortunately refactoring the specs is not an option)
edit:
I have removed every reference to "generate" because it was actually confusing. It's not auto-generated code.
class InCmd001 {
    OutMsg process( InMsg& inMsg ) {
        OutMsg outMsg = OutMsg::Create();

        OutCmd001 outCmd001 = OutCmd001::Create();
        outCmd001.SetA( param.getA() );
        outCmd001.SetB( inMsg.getB() );
        outMsg.addCmd( outCmd001 );

        OutCmd016 outCmd016 = OutCmd016::Create();
        outCmd016.SetF( param.getF() );
        outMsg.addCmd( outCmd016 );

        OutCmd007 outCmd007 = OutCmd007::Create();
        outCmd007.SetR( inMsg.getR() );
        outMsg.addCmd( outCmd007 );

        // ......
        return outMsg;
    }
};
Here is an example of one incoming command class (manually written in pseudo-C++).
Code never needs refactoring. The code either works, or it doesn't. And if it works, the code doesn't need anything.
The need for refactoring comes from you, the programmer. The person reading, writing, maintaining and extending the code.
If you have trouble understanding the code, it needs to be refactored. If you would be more productive by cleaning up and refactoring the code, it needs to be refactored.
In general, I'd say it's a good idea for your own sake to refactor 1000+ line functions. But you're not doing it because the code needs it. You're doing it because that makes it easier for you to understand the code, test its correctness, and add new functionality.
On the other hand, if the code is automatically generated by another tool, you'll never need to read it or edit it. So what'd be the point in refactoring it?
I understand exactly where you're coming from, and can see exactly why you've structured your code the way it is, but it needs to change.
The uncertainty you feel when you attempt to refactor can be ameliorated by writing unit tests. If you've tests specific to each spec, then the code for each spec can be refactored until you're blue in the face, and you can have confidence in it.
A second option, is it possible to automatically generate your code from a data structure?
If you've a core suite of classes that do the donkey work and edge cases, you can auto-generate the repetitive 1000 line methods as often as you wish.
However, there are exceptions to every rule.
If the methods are a literal interpretation of the spec (very little additional logic), and the specs change infrequently, and the "common" portions (i.e. bits that happen to be the same right now) of the specs change at different times, and you're not going to be asked to get a 10x performance gain out of the code anytime soon, then (and only then) . . . you may be better off with what you have.
. . . but on the whole, refactor.
Yes, always. 1000 lines is at least 10x longer than any function should ever be, and I'm tempted to say 100x, except that when dealing with input parsing and validation it can become natural to write functions with 20 or so lines.
Edit: Just re-read your question and I'm not clear on one point - are you talking about machine generated code that no-one has to touch? In which case I would leave things as they are.
Refactoring is not the same as rewriting from scratch. While you should never write code like this, before you refactor it you need to consider the cost of refactoring in terms of time spent, the associated risk of breaking code that already works, and the net benefit in terms of future time saved. Refactor only if the net benefits outweigh the associated costs and risks.
Sometimes wrapping and rewriting can be a safer and more cost effective solution, even if it appears expensive at first glance.
Long methods need refactoring if they are maintained (and thus need to be understood) by humans.
As a rule of thumb, code for humans first. I don't agree with the common idea that functions need to be short. I think what you need to aim at is when a human reads your code they grok it quickly.
To this effect it's a good idea to simplify things as much as possible--but not more than that. It's a good idea to delegate roughly one task for each function. There is no rule as for what "roughly one task" means: you'll have to use your own judgement for that. But do recognize that a function split into too many other functions itself reduces readability. Think about the human being who reads your function for the first time: they would have to follow one function call after another, constantly context-switching and maintaining a stack in their mind. This is a task for machines, not for humans.
Find the balance.
Here, you see how important naming things is. You will see it is not that easy to choose names for variables and functions, it takes time, but on the other hand it can save a lot of confusion on the human reader's side. Again, find the balance between saving your time and the time of the friendly humans who will follow you.
As for repetition, it's a bad idea. It's something that needs to be fixed, just like a memory leak. It's a ticking bomb.
As others have said before me, changing code can be expensive. You need to do the thinking as for whether it will pay off to spend all this time and effort, facing the risks of change, for a better code. You will possibly lose lots of time and make yourself one headache after another now, in order to possibly save lots of time and headache later.
Take a look at the related question How many lines of code is too many?. There are quite a few tidbits of wisdom throughout the answers there.
To repost a quote (although I'll attempt to comment on it a little more here)... A while back, I read this passage from Ovid's journal:
I recently wrote some code for Class::Sniff which would detect "long methods" and report them as a code smell. I even wrote a blog post about how I did this (quelle surprise, eh?). That's when Ben Tilly asked an embarrassingly obvious question: how do I know that long methods are a code smell?

I threw out the usual justifications, but he wouldn't let up. He wanted information and he cited the excellent book Code Complete as a counter-argument. I got down my copy of this book and started reading "How Long Should A Routine Be" (page 175, second edition). The author, Steve McConnell, argues that routines should not be longer than 200 lines. Holy crud! That's waaaaaay too long. If a routine is longer than about 20 or 30 lines, I reckon it's time to break it up.

Regrettably, McConnell has the cheek to cite six separate studies, all of which found that longer routines were not only not correlated with a greater defect rate, but were also often cheaper to develop and easier to comprehend. As a result, the latest version of Class::Sniff on github now documents that longer routines may not be a code smell after all. Ben was right. I was wrong.
(The rest of the post, on TDD, is worth reading as well.)
Coming from the "shorter methods are better" camp, this gave me a lot to think about.
Previously my large methods were generally limited to "I need inlining here, and the compiler is being uncooperative", or "for one reason or another the giant switch block really does run faster than the dispatch table", or "this stuff is only called exactly in sequence and I really really don't want function call overhead here". All relatively rare cases.
In your situation, though, I'd have a large bias toward not touching things: refactoring carries some inherent risk, and it may currently outweigh the reward. (Disclaimer: I'm slightly paranoid; I'm usually the guy who ends up fixing the crashes.)
Consider spending your efforts on tests, asserts, or documentation that can strengthen the existing code and tilt the risk/reward scale before any attempt to refactor: invariant checks, bound function analysis, and pre/postcondition tests; any other useful concepts from DBC; maybe even a parallel implementation in another language (maybe something message oriented like Erlang would give you a better perspective, given your code sample) or even some sort of formal logical representation of the spec you're trying to follow if you have some time to burn.
Any of these kinds of efforts generally have a few results, even if you don't get to refactor the code: you learn something, you increase your (and your organization's) understanding of and ability to use the code and specifications, you might find a few holes that really do need to be filled now, and you become more confident in your ability to make a change with less chance of disastrous consequences.
As you gain a better understanding of the problem domain, you may find that there are different ways to refactor you hadn't thought of previously.
This isn't to say "thou shalt have a full-coverage test suite, and DBC asserts, and a formal logical spec". It's just that you are in a typically imperfect situation, and diversifying a bit -- looking for novel ways to approach the problems you find (maintainability? fuzzy spec? ease of learning the system?) -- may give you a small bit of forward progress and some increased confidence, after which you can take larger steps.
So think less from the "too many lines is a problem" perspective and more from the "this might be a code smell, what problems is it going to cause for us, and is there anything easy and/or rewarding we can do about it?"
Leaving it cooking on the backburner for a bit -- coming back and revisiting it as time and coincidence allows (e.g. "I'm working near the code today, maybe I'll wander over and see if I can't document the assumptions a bit better...") may produce good results. Then again, getting royally ticked off and deciding something must be done about the situation is also effective.
Have I managed to be wishy-washy enough here? My point, I think, is that the code smells, the patterns/antipatterns, the best practices, etc -- they're there to serve you. Experiment to get used to them, and then take what makes sense for your current situation, and leave the rest.
I think you first need to "refactor" the specs. If there are repetitions in the spec it also will become easier to read, if it makes use of some "basic building blocks".
Edit: As long as you cannot refactor the specs, I wouldn't change the code.
Coding style guides are all made for easier code maintenance, but in your special case the ease of maintenance is achieved by following the spec.
Some people here asked if the code is generated. In my opinion it does not matter: If the code follows the spec "line by line" it makes no difference if the code is generated or hand-written.
A thousand lines of code is nothing. We have functions that are 6 to 12 thousand lines long. Of course those functions are so big that things literally get lost in there, and no tool can help us even look at high-level abstractions of them. The code is now, unfortunately, incomprehensible.
My opinion of functions that big is that they were not written by brilliant programmers but by incompetent hacks who shouldn't be left anywhere near a computer - they should be fired and left flipping burgers at McDonald's. Such code wreaks havoc by leaving behind features that cannot be added to or improved upon (too bad for the customer). The code is so brittle that it cannot be modified by anyone - not even the original authors.
And yes, those methods should be refactored, or thrown away.
Do you ever have to read or maintain the generated code?
If yes, then I'd think some refactoring might be in order.
If no, then the higher-level language is really the language you're working with -- the C++ is just an intermediate representation on the way to the compiler -- and refactoring might not be necessary.
Looks to me that you've implemented a separate language within your application - have you considered going that way?
It has been my understanding that it's recommended that any method over 100 lines of code be refactored.
I think some rules may be a little different in this era, when code is most commonly viewed in an IDE. If the code does not contain exploitable repetition, such that there are 1,000 lines which are each going to be referenced once and which share a significant number of variables in a clear fashion, dividing the code into 100-line routines, each of which is called once, may not be much of an improvement over having a well-formatted 1,000-line module which includes #region tags or the equivalent to allow outline-style viewing.
My philosophy is that certain layouts of code generally imply certain things. To my mind, when a piece of code is placed into its own routine, that suggests that the code will be usable in more than one context (exception: callback handlers and the like in languages which don't support anonymous methods). If code segment #1 leaves an object in an obscure state which is only usable by code segment #2, and code segment #2 is only usable on a data object which is left in the state created by #1, then absent some compelling reason to put the segments in different routines, they should appear in the same routine. If a program puts objects through a chain of obscure states extending for many hundreds of lines of code, it might be good to rework the design of the code to subdivide the operation into smaller pieces which have more "natural" pre- and post- conditions, but absent some compelling reason to do so, I would not favor splitting up the code without changing the design.
For further reading, I highly recommend the long, insightful, entertaining, and sometimes bitter discussion of this topic over on the Portland Pattern Repository.
I've seen cases where it is not the case (for example, creating an Excel spreadsheet in .NET often requires a lot of lines of code for the formatting of the sheet), but most of the time the best thing would indeed be to refactor it.
I personally try to make a function small enough so it all appears on my screen (without affecting the readability of course).
1000 lines? They definitely need to be refactored. Also note that, for example, the default maximum number of executable statements per method is 30 in Checkstyle, the well-known coding standard checker.
If you refactor, when you refactor, add some comments to explain what the heck it's doing.
If it had comments, it would be much less likely a candidate for refactoring, because it would already be easier to read and follow for someone starting from scratch.
Then here the question: in general, do you think such very long methods would always need refactoring?

If you ask in general, we will say yes.

...or in a similar case would it be acceptable? (unfortunately refactoring the specs is not an option)

Sometimes it is acceptable, but that is very unusual. I will give you a couple of examples:
There are some 8-bit microcontrollers, the Microchip PICs, that have only a fixed 8-level call stack, so you can't nest more than 8 calls; care must be taken to avoid stack overflow, so in this special case having many small (nested) functions is not the best way to go.
Another example is when optimizing code at a very low level, where you have to take into account the cost of the jump and of saving context. Use this with care.
EDIT:
Even with generated code, you might need to refactor the way it's generated - for example for memory savings, energy savings, producing human-readable output, beauty, who knows, etc.
There has been very good general advice already; here is a practical recommendation for your sample:
Common patterns can be isolated in plain feeder methods:
void AddSimpleTransform(OutMsg & msg, InMsg const & inMsg,
                        int rotateBy, int foldBy, int gonkBy = 0)
{
    // create & add up to three messages
}
You might even improve that by making this a member of OutMsg, and using a fluent interface, such that you can write
OutMsg msg;
msg.AddSimpleTransform(inMsg, 12, 17)
   .Staple("print")
   .AddArtificialRust(0.02);
which can be an additional improvement in some circumstances.
How many levels of indentation do you consider reasonable?
I feel that having a C++ function with 4/5+ levels of indentation is normally a bad thing. It implies that you have to mentally keep track of 4/5+ things the whole time.
Is my opinion justified?
(yes, I can avoid having multiple levels of indentation by not indenting at all:)
I agree with you. If a function has more than 4 or 5 nested if/switch/loop/try statements, parts of it should be extracted into their own functions.
This will make the code more readable because the extracted functions' names are usually more descriptive than the code itself.
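A small sketch of the idea, with hypothetical names:

#include <vector>

struct Record { bool valid; /* ... */ };

// The extracted details live behind a descriptive name.
void normalise_and_insert(const Record& r) { /* the twenty lines that used to sit inside the loop */ }

// The loop itself now reads as a sentence.
void import_records(const std::vector<Record>& records) {
    for (const Record& r : records) {
        if (r.valid)
            normalise_and_insert(r);
    }
}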
It depends entirely on the problem your code is trying to solve. Sometimes you have no choice but to have quite deep levels of indentation, though it certainly is a code smell.
I think you're right in that 4 or 5 levels or so is reasonable, more and you should probably be looking to refactor the method.
It's also worth noting that people have been trying to quantify code quality and design metrics for many years. One of the more common metrics is cyclomatic complexity.
Actually, it's not the number of indentation levels that most contributes to unreadable code, it's the length of the module/function/method that you're looking at.
Of course, long sections typically have more levels of indentation because blocks of code are written inline rather than broken out, so there is a relation between the two. Personally, I think there's a smell if a method has more than a couple of screenfuls of code and more than 6 levels of indentation.
I think it can depend on the situation, but generally you'll want less nesting.
Nested loops can kill performance, and nested if statements can often be reduced to fewer levels.
The problem is that you often run into issues trying to optimize too early in your development. I think you should always take a little time to consider the possibilities before going 3+ levels of nesting deep.
My only pet peeve regarding indentation is when two keywords are used and one depends on the other. For instance:
switch (c) {
    case 'x':
        foo = bar;
That is bad, you can't have a case outside of a switch, so:
switch (c) {
case 'x':
    foo = bar;
.. would be much better. Indentation tends to get crazy in switches with if / else if / else. I also highly recommend keeping an 80-column (at most 110-column) limit while indenting. Nobody likes scrolling 30 columns to the right to see what you're doing, and 30 columns back to see how you handle the results :) Never mind the poor soul who has to edit your code in dumb 80x25 console mode on a server.
Although what I'm going to say comes from a plain C standpoint, it might apply to C++ as well. I favor the Linux kernel coding style, that is: 8 characters-wide tab indentation (natural tab) and as many levels of indentation as there fit on a 80x25 terminal (without exceeding the 80 characters width, that is).
Too much indentation is probably a sign that you should refactor: create a couple more small methods with good names. That way you break up the logic into smaller pieces that are easier to gobble up.
This relates to the question of how long a function should be. If you keep your function bodies to ten lines or fewer, then it's really hard to get too many levels of indentation.
I think 4 or 5 levels is too many. You can almost certainly factor out some of those inner loops or conditionals into their own functions.
I feel queasy with three levels of indentation myself. Even two levels causes me to reconsider what I'm doing.
Many levels of indentation is fine as long as you can see all the open and close brackets at the same time in your editor.
This tends to limit you to maybe 4 levels max in practice.
One thing to note however is that some nesting levels are necessary and should not be abstracted out.
If you have some sort of brute-force iteration over a 4D array, then you've got four levels of nesting automatically, and if there's an IF statement inside, you have five levels without having a particularly convoluted section of code.
So the level of nesting isn't necessarily the problem, it's the complexity attached to each level of nesting. If you are mixing WHILEs, FOREACHs, IFs and SWITCHes then maybe you should boil them down.
The basic point is that levels of nesting aren't always an indicator of complexity.
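For instance, this sketch is four levels of nesting plus an if, yet nothing about it is convoluted:

const int N = 8;
float grid[N][N][N][N];

void clamp_negative_cells() {
    for (int a = 0; a < N; ++a)
        for (int b = 0; b < N; ++b)
            for (int c = 0; c < N; ++c)
                for (int d = 0; d < N; ++d)
                    if (grid[a][b][c][d] < 0.0f)   // fifth level, still trivially readable
                        grid[a][b][c][d] = 0.0f;
}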
IIRC, Linus Torvalds would agree with you. I believe he insists on an 8-space indent and a 80-character line length for the linux kernel code. When you get to 4 or 5 levels of indent under that scheme your code would look scrunched horribly. You're forced to try to refactor the code just to maintain readability.
I used to think Linus was a bit of a hard-ass on this when he said that more than 3 levels of indentation means your code is broken, but following that principle has helped me a lot in reasoning about my control flows. That said, I had to interpret it differently. The intuitive interpretation is that you should create more and more itsy-bitsy functions. I don't recommend taking that idea to the extreme, as that can make your control flows and side effects just as difficult to reason about, when all the side effects are obscured by function calls leading you all over the place.
What helped me enormously was favoring deferred processing and simpler loops. Take this as an example:
// Remove vertices from mesh.
for each vertex in vertices.to_remove:
{
    for each edge in vertex.edges:
    {
        for each face in edge.faces:
        {
            face.remove_edge(edge);
            if (face.size() < 3)
                face.remove();
        }
        edge.remove();
    }
    vertex.remove();
}
That monstrous thing with 4 levels of indentation above is awkward (and more so when it's not in pseudocode form where it might have to update texture maps and so on requiring even more levels of indentation). We can instead do this:
for each vertex in vertices.to_remove:
{
    for each edge in vertex.edges:
        edges_to_remove.insert(edge);
}

for each edge in edges_to_remove:
{
    for each face in edge.faces:
        faces_to_rebuild.insert(face, edge);
}

for each face,edge in faces_to_rebuild:
{
    face.remove_edge(edge);
    if (face.size() < 3)
        face.remove();
}

for each edge in edges_to_remove:
    edge.remove();

for each vertex in vertices.to_remove:
    vertex.remove();
While that involves more passes over the data and some additional state, it makes every loop much simpler. And when we have these simpler loops, they become easier to parallelize, and the memory access patterns can start taking on a cache-friendly nature with decent data structures in place.
I've found that favoring this type of deferred processing not only reduces those indentations but also simplifies my ability to comprehend what the code is doing since the control flows are dead simple and they're all causing only one type of uniform side effect. It makes it so much easier to see what each loop is doing and comprehend whether it is doing the right thing or not at a glance, since each one is dedicated to causing one type of side effect, not many different types of side effects. And surprisingly, in spite of the extra work, it often improves cache coherence, and also opens up new doors for multithreading and SIMD, and I end up getting an even more efficient result than I started.