I'm currently going through learncpp.com's C++ tutorials and I'm seeing that their variable naming convention has them naming int variables with an "n" prefix (e.g., int nValue) and char variables with a "ch" prefix (e.g., char chOperation). Is this something that is commonplace in the industry that I should form as a habit now?
Is this something that is commonplace in the industry?
This practice was common in some parts of Microsoft twenty or thirty years ago, due to a misunderstanding of a somewhat more useful convention used by other parts of the company (that of tagging variables to indicate their purpose, which, in a weakly typed language, can help avoid various kinds of category error). Neither convention serves any useful purpose in a strongly typed language like C++: the type system can catch such errors automatically and more reliably.
It became widely used by others long after Microsoft (presumably) realised that it was pointless and advised against its use, apparently in the belief that emulating Microsoft's habits might also emulate Microsoft's success. It's still occasionally seen today, from people who develop habits and never question their usefulness, and from companies who prioritise style guides above software.
Personally, I find it useful as a warning that the code is likely to contain worse horrors.
I should form as a habit now?
It only serves to make the code harder to read, and misleading if you forget to update the tags when you change a variable's type. You should develop a habit of writing clear, readable code, not of sprinkling it with mysterious runes.
Disclaimer: the brief comments about Microsoft are intended to give historical context and are not intended to be an authoritative account of Microsoft's policy decisions; specifically the phrase "[Microsoft] realised [it] was pointless" is intended to mean "[some people at Microsoft] realised [the topic under discussion, using redundant type tags in modern C++ in most contexts] was pointless" not (as a commenter appears to have read) "[the entirety of Microsoft] realised [all use of variable tagging] was pointless". All opinions are my own, and may be based on imperfect knowledge.
Yes, they are common (especially in Windows-related projects)
But different projects may use different coding styles. So if you're working with an existing project, the best thing is to stick to the style it already follows.
The naming style you mentioned is known as Hungarian notation, and it is typically used in Windows-related projects. In the Hungarian style, variables are written in camel case (e.g., CamelCase) and prefixed with their scope and type:
[scope prefix]_[variable type][actual variable name in camel-cased style]
For example:
m_nMemberInteger
is an integer (according to its prefix n); in addition, it's a member variable (according to its prefix m_) of some structure/class. You can find the complete list of scope and type prefixes used in the Hungarian style in the above link.
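As a hypothetical sketch (the names here are invented, not from any real codebase), the prefixes compose like this:

// Hypothetical illustration of Hungarian-style scope and type prefixes.
class CAccount                  // 'C' often marks class types
{
public:
    void Deposit(int nAmount)   // 'n' = integer; parameters carry no scope prefix
    {
        m_nBalance += nAmount;
    }

private:
    int  m_nBalance = 0;        // 'm_' = member scope, 'n' = integer
    char m_chCurrency = '$';    // 'ch' = char
    bool m_bFrozen = false;     // 'b' = bool
};

int main()
{
    CAccount account;
    account.Deposit(42);
}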
However, in Linux-based projects, you will usually find people using different coding styles (e.g., the Google C++ style guide), which use only lower-case letters and underscores to name their variables.
This looks similar to Hungarian notation. Such things are sometimes used, especially in certain fields of programming. Personally, I think it makes code look messy. In C++ you should think more about what the object means rather than what its underlying type may happen to be. And modern editors easily let you look up the type of a variable, so it is somewhat obsolete. I can understand why it was used when editors weren't so helpful.
As mentioned in the other comments, this is known as "Hungarian Notation" and is used to make the type of a variable obvious. While it's perhaps arguable whether it's worth the trouble to label the type, another common convention (especially in C++) is to use prefixes to indicate information about a variable's usage. This is especially useful for references and member variables. For instance, one might have a function that looks like
void MyClass::myMethod(const int& iInput, int& oOutput, int& ioInputAndOutput)
{
    // 'i' = input-only, 'o' = output-only, 'io' = both; 'm' marks a member.
    oOutput = ioInputAndOutput + mMemberData + iInput;
    ioInputAndOutput *= 2;
}
As also mentioned above, the important thing is consistency, which will prevent more bugs than any particular convention. On collaborative projects, it's usually worth it to conform to the existing convention.
Is it considered good or bad practice? A friend of mine told me that, generally speaking, it was not considered good practice in most languages nowadays, but that he thought he had heard this was not the case with Fortran. Is this true? And if it is true, why?
In 30+ years of programming in Fortran I've never come across anyone using Hungarian notation. Your friend might be confusing Fortran's long-standing (and now deprecated) ability to implicitly type variables depending on the initial letter of their names. But that long predates widespread awareness of what is now called Hungarian notation.
On the more general issue of whether Hungarian notation is, or would be, a good idea to adopt when writing Fortran, I agree with David and (I think) the wider software development community, which regards it as a not-very-useful practice. Fortran certainly does not require it; variable (and other) names follow rules very similar to those in many programming languages.
Systems Hungarian
Systems Hungarian notation essentially adds type information into variable names so that you know the types of the values you are using, and are less likely to use a value in an incorrect way. This is of dubious benefit in modern strongly typed languages, as type-safety significantly reduces the chance of using/accessing a variable erroneously.
However, for less strongly typed languages, including this sort of information can be beneficial, as it keeps the programmer constantly aware of the data they are dealing with.
The biggest criticism of HN (besides its being of limited benefit in a strongly typed language) is that the type prefixes used can result in extremely obscure and confusing variable names, so while you may gain a measure of pseudo type-safety, you can lose clarity in the code (or at least create code that is readable only to an expert in your conventions), which can harm maintainability.
If you need to produce code to someone else's naming convention then you have little choice, but if you are in control, you can define a sensible, clear, simple naming convention that may suit your needs better, giving a good balance between making variable names information-rich and introducing confusing clutter. For example, one practice is to name boolean variables in the form IsOpen rather than Open, to avoid confusion between words that can be used as both verbs and nouns. It also makes it easy to see when you are mixing booleans into integer or floating-point expressions, as sketched below. This approach also works intuitively, so it requires no special knowledge for any programmer to be able to read and understand the code.
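A minimal sketch of that convention (the variable names here are invented for illustration):

#include <iostream>

int main()
{
    bool isOpen = true;      // reads as a predicate: clearly a yes/no state
    int  openFileCount = 3;  // "open" alone could be a verb, a noun, or a count

    // Mixing the boolean into arithmetic now stands out at a glance:
    // int total = openFileCount + isOpen;  // the is- prefix flags the mix-up
    std::cout << (isOpen ? "open" : "closed") << ", "
              << openFileCount << " files\n";
}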
Apps Hungarian
In response to the first comment, there is another form of Hungarian notation (Apps Hungarian). See Wikipedia for a more in-depth description, but essentially it associates information about the usage or purpose of a variable with its name, rather than its type.
In strongly typed languages this is a much more useful approach, and is well worth considering, or at least (IMHO) in concept. I find the prefixes chosen often tend to be rather complicated and unfriendly (e.g. rw instead of row, to my mind, just obfuscates the prefix without any practical gain). I also think many examples are rather pointless (e.g. str to indicate that a variable is a string is redundant in many languages, because strings are often represented in only one form, and if the variable is named sensibly ("UserName" rather than "Data") it is often pretty obvious that it will be a string).
A Modern Alternative
In my opinion/experience, what usually matters is clarifying a few key differences between variables (e.g. we need to treat members, pointers, volatiles and constants quite differently from each other - mixing up a member and a parameter or indexing an array with the wrong index variable can be catastrophic, and modern compilers do little to protect us from these mistakes). The difference between a list and a string is usually obvious if sensible descriptive variable naming is used, and type-safe languages will tell us if we've mixed these types up, so we don't need prefixes for these cases. This led to my own extremely simple prefixing approach which is explained in my answer to this stack overflow question.
Hopefully this post may give you something to think about when deciding if prefixes will be beneficial for you. Ultimately, any prefixing scheme you apply needs to be something that you believe (or better, can prove) is beneficial to you and your team. Don't just follow someone else's scheme - think about how and why a prefix might be useful, and evaluate it objectively before you adopt it or discard it.
It really depends more on the development environment and the team standards than on the language. If you happen to be using Fortran, but in a good IDE with code analysis and navigation, then you probably don't need Hungarian Notation.
I think the real question to ask yourself is, "What would Hungarian Notation give me?"
That is, what is its value? Even when using an old language, you can still apply good coding practices and techniques. Keep routines small, keep variable scope small, etc. Now, I'm no expert in Fortran, so I don't know what its limitations are. But I imagine the same would still hold true.
Hungarian Notation is particularly useful when you're using a limited editor (no information about the variable on mouse hover, for example) and when your variable has a fairly long lifespan. By which I mean the use of the variable continues well beyond its definition.
Since the inception of Fortran so long ago, we as an industry have learned a lot about organizing code and working effectively on development teams. (Even if the team is only you... Remember that any code you wrote more than a few months ago may as well have been written by somebody else.) You may be able to apply these lessons to your Fortran code.
Use informative variable names. Storage isn't expensive, characters don't take long to send over modems anymore... Reasonably long variable names are both acceptable and encouraged, provided that they convey information about what that variable is and means. Additionally, keep the use of the variable close to its definition. So the additional context of the surrounding code reveals more information about the variable than just its name.
In short, if you have a global variable called price used throughout the application, then calling it dblPrice to indicate that it's a double adds useful information to the variable name. But there are more meaningful ways to add that useful information: price is a poor name for a large-scope variable in the first place, and the scope should be narrowed if possible.
Lately I am losing my trust in OOP. I have already seen many complaints about common OOP misuses or just simple overuse. I do not mean the common confusion between the is-a and has-a relationships. I mean things like the problems of ORM when dealing with relational databases, the excessive use of inheritance in C#, and also several years of looking at code with the same false encapsulation belief that Scott Meyers mentions in Item 23 of Effective C++.
I am interested in learning more about this and about non-OOP software patterns that can solve certain problems better than their OOP counterparts. I am convinced that there are many people out there giving good advice on how to use this to advantage in non-pure-OOP languages such as C++.
Does anyone know of any good reference (author, book, article) to get started?
Please note that I am looking for two related but different things:
Common misuses of OOP concepts (like Item 23)
Patterns where OOP is not the best solution (with alternatives)
Well, I can recommend the book Agile Principles, Patterns, and Practices in C#.
The examples are in C#, of course, but the idea of the book is universal. Not only does it cover Agile, it also focuses on bad practices and shows by example how to convert bad code into good code. It also contains descriptions of many design patterns and shows how to implement them in a semi-real example of a Payroll application.
It has to be said: if you truly want to get away from OOP, or at least take a look at concepts which are not OOP but are used with great effectiveness, Learn You a Haskell. Try a new programming paradigm and then start seeing where you can apply many of its concepts back in OOP languages. This addresses your second bullet, not in a direct way, but trust me, it'll help more than you might think.
It's a bit odd that you mention C#. It has very powerful keywords to keep the usual inheritance misery in check. The first one ought to be the internal keyword: the notion of restricting visibility to a module. That concept is completely absent in C++; the build model just doesn't support it. Otherwise a great concept: "I only trust the members of my team to get it right". Of course you do.
Then there's the slammer: the sealed keyword. Extraordinarily powerful: "the buck stops here, don't mess with me". Used with surgical precision in the .NET framework, and I've never yet found a case where sealed was used inappropriately. It is also missing in C++, though there are obscure ways to get something similar working.
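(For context: C++11 has since added the final specifier, which covers much the same ground as sealed. A minimal sketch, with invented names:)

// C++11's 'final' is roughly the C++ analogue of C#'s 'sealed'.
struct Base
{
    virtual ~Base() = default;
    virtual void Frob() {}
};

struct Sealed final : Base      // no class may derive from Sealed
{
    void Frob() final {}        // and no further override of Frob is allowed
};

// struct Derived : Sealed {};  // error: base 'Sealed' is marked 'final'

int main()
{
    Sealed s;
    s.Frob();
}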
But yes, the WPF object model sucks fairly heavily. Inheriting 6 levels deep and using backdoors like a dependency property is offensive. Inheritance is hard; let's go shopping.
I would say to look at game engines. For the most part, OOP has a tendency to cause slight performance decreases, and the gaming industry is seemingly obsessed with eliminating minor slowdowns (while sometimes ignoring large ones). As such, their code, though usually written in a language that supports OOP, will end up using only those elements of OOP that are necessary for clean code and ease of maintenance, balanced against performance.
EDIT:
Having said that, I don't know if I would really go look at Unreal. They do some strange things for the sake of making their content pipeline easier for developers... it makes their code... well, look if you really want to know.
One common overuse is forcing OOP onto programs/scripts that take some input, turn it into output, then exit (receiving no input from anywhere else during the process). The procedural way is much cleaner in these cases.
A typical example of this is forcing OOP onto PHP scripts.
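A minimal sketch of the point in C++ (the uppercasing transform is just an arbitrary stand-in for whatever the script does): a read-transform-write program is naturally a plain procedure, with no class scaffolding needed:

#include <algorithm>
#include <cctype>
#include <iostream>
#include <string>

// Read lines, transform them, write them out, exit: plain procedural code.
int main()
{
    std::string line;
    while (std::getline(std::cin, line))
    {
        std::transform(line.begin(), line.end(), line.begin(),
                       [](unsigned char c) { return static_cast<char>(std::toupper(c)); });
        std::cout << line << '\n';
    }
}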
Are there any technical reasons for the use of the underscore in names like (for example) scoped_lock in the Boost library? Why not call it ScopedLock?
Please note I am not asking about stylistic reasons.
From the Boost Library Requirements and Guidelines,
Given the intent to propose portions of boost for the next revision of the C++ standard library, boost decided to follow the standard library's conventions.
There is no technical reason. If you ignore the stylistic reason, you could write scopedlock, istreamiterator and the like too.
Readability, if you can call that technical... spaces are usually forbidden, and the underscore is the nearest match. Camel case is horrible to read (and is often reserved for classes as a convention).
Underscores improve the interface with human neural hardware by creating more space between separate words.
I used to prefer camelcase when I was little, and had a small monitor and small hands. I've mostly come around, though.
Subjectively, I find underscores a bit of overkill in code. There is enough abuse of non-alphanumeric symbols in code as it is; I think introducing them into identifiers is a bit over the top. Just off the top of my head, consider this excerpt from a boost template error:
Derived=boost::transform_iterator<std::binder1st<std::multiplies<size_t>>,boost::counting_iterator<size_t>>,
Base=boost::counting_iterator<size_t>,
Value=boost::detail::transform_iterator_base<std::binder1st<std::multiplies<size_t>>,boost::counting_iterator<size_t>,boost::use_default,boost::use_default>::cv_value_type,
Traversal=boost::use_default,
Reference=boost::detail::transform_iterator_base<std::binder1st<std::multiplies<size_t>>,boost::counting_iterator<size_t>,boost::use_default,boost::use_default>::reference,
Difference=boost::use_default
versus the following, which has been converted to Pascal case (I prefer this style):
Derived=boost::TransformIterator<std::Binder1st<std::Multiplies<SizeT>>,boost::CountingIterator<SizeT>>,
Base=boost::CountingIterator<SizeT>,
Value=boost::detail::TransformIteratorBase<std::Binder1st<std::Multiplies<SizeT>>,boost::CountingIterator<SizeT>,boost::UseDefault,boost::UseDefault>::CVValueType,
Traversal=boost::UseDefault,
Reference=boost::detail::TransformIteratorBase<std::Binder1st<std::Multiplies<SizeT>>,boost::CountingIterator<SizeT>,boost::UseDefault,boost::UseDefault>::Reference,
Difference=boost::UseDefault
I can see the advantage of underscores when taken in isolation, but with all our other symbols I think we should focus on making programs that read closer to English, not underscore-ese.
There's no technical reason, but there is a reason. You've got to agree with me that it's much easier to read scoped_lock than scopedlock, though scopedLock would do too. Still, with the underscore it's easier to read, IMHO.
But well-written code is legible code. That's part of knowing how to program well.
There's no technical reason.
Variable names in C++ must only:
Start with a letter or underscore
Contain only numbers, letters (capitalized or not), and underscores
Using this_way or ThisWay is just a matter of style.
The only technical reason is readability, because using CamelCase may cause the wrong interpretation, especially when referring to abbreviations in all caps: a GPS Socket would come out as GPSSocket. There are some better examples, but my mental block precludes me from writing them down. :-(
If you want to get technical, there is no reason since the underscore is a viable character for identifiers.
Although technically speaking there is no difference, there can be issues caused by the environment. For instance, if you include windows.h, you will not want to name any function TextOut, even if that's what the function does. The reason is that the name will get replaced by the preprocessor, because TextOut is a macro in the Win32 API. For this reason, a project manager may wish to impose non-camel-case as a standard.
So there can be technical reasons, but there's no reason imposed by the language itself. It's not like Java, where convention (though not the compiler, contrary to a common belief) pushes you firmly toward camel case.
There is no technical reason per se. But I do have a reason other than my glib "because they look kewl."
My reason is that I find it useful to distinguish member variables from non-member variables in a convenient way, in particular when I am transferring data from a local variable to a member variable, such as within a constructor. A cheap example:
#include <netinet/in.h>  // sockaddr_in (POSIX; on Windows use <winsock2.h>)

class Socket
{
public:
    Socket(const sockaddr_in& group)
        : group_(group)  // the trailing underscore marks the member variable
    {
    }

private:
    sockaddr_in group_;
};
If you ask my opinion, most variable naming schemes are terrible because they have too many rules and too many ways they break down. The classic example of a horrible naming scheme is Hungarian, but even from that I took something useful: the m_ prefix for member variables came in handy at times. Not too often, but often enough for me to borrow the idea, if not the method.
There is no technical reason. It is purely stylistic. To be specific, C++ treats all identifiers that begin with a letter or underscore the same (a leading dollar sign is only a non-standard extension on some compilers). The only difference is how they are declared. If you want, you can name your "thing" class Thing, THING, thing, tHiNg, or even T_h_I_n_G if you want... it won't make a difference to the compiler. However, it does make a difference to the other human beings who will look at and use your code. And if you take this too far (such as the last couple of examples I listed), you might even find your life in danger at some point (an angry programmer can be a terrifying thing).
There is no technical reason for or against, except that which is imposed by the language, which in this case does not exist.
This reason skirts the edges of being stylistic, but since no one else has mentioned it so far, I'll simply add that in a case-sensitive language like C++, underscores are easier to remember than capitalization.
For example, sometimes you might see scopedLock instead of ScopedLock. If you never use caps, that's just one less thing to keep track of.
Well, not the compilers, but PREfast rulesets sometimes try to enforce naming conventions. To be frank, so many conventions are really confusing, especially when one needs to support old code as well as write new code in multiple languages.
One technical reason I can think of (especially for member function names) is to allow duck-typing; see the sketch after this list. For example, the following boost classes can be used (to some extent) where one expects an STL container:
boost::ptr_container and family
boost::multi_index containers
boost::array
boost::dynamic_bitset (in lieu of boost::bitset)
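As a hedged sketch of the payoff (using std::array here as a stand-in for boost::array): a generic function written against the standard begin()/end() vocabulary accepts any container-like type that spells those members the same way:

#include <array>
#include <iostream>
#include <vector>

// Compiles against anything that provides value_type, begin(), and end():
// the shared lower_case naming is what makes this duck-typing work.
template <typename Container>
typename Container::value_type Sum(const Container& c)
{
    typename Container::value_type total{};
    for (auto it = c.begin(); it != c.end(); ++it)
        total += *it;
    return total;
}

int main()
{
    std::vector<int> v{1, 2, 3};
    std::array<int, 3> a{{4, 5, 6}};               // boost::array has the same interface
    std::cout << Sum(v) << ' ' << Sum(a) << '\n';  // prints "6 15"
}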
IMHO, it is pretty reasonable to adopt the style of the standard library for the language you use. If it is Java, it is scopedLock; if it is C++, it is scoped_lock; if it is Lisp, it is scoped-lock.
Not that it really matters, anyway.
When C was invented, it was used on Unix, and Unix was operated from terminals that resembled typewriters. Some terminals had both upper and lower case letters, but some terminals had only upper case. If you wanted to use a Unix system but all of the nice terminals were already occupied by your mean greedy selfish colleagues, you got stuck with an older terminal. This is the reason why, if you type your login name in all upper case characters, Unix assumes you don't have lower case. Each lower case letter gets displayed as the corresponding upper case letter, and each upper case letter gets displayed as an asterisk followed by itself.
Now imagine camel casing instead of underscores.
By the way, C was based more or less loosely on PL/I. PL/I was punched onto cards, which originally didn't support lower case and eventually could be hacked to support it, though not in a puncher-friendly fashion. Furthermore, it was usually printed by printers that didn't support lower case, though a few did. So lower case was out, camel case was out, and programmers were used to underscores. (Except for Cobol programmers, who were used to minus signs in the middle of identifiers, meaning this-is-an-identifier, not this minus is minus an minus identifier.)
Pascal was invented later, in an environment where lower case letters were more common but still not universal. Camel case became possible because Pascal was case insensitive. Camel case became popular because Pascal didn't allow underscores in identifiers.
So if you like camel case combined with case sensitivity, you're half-Pasced.
I am new to professional development; I have only 5 months of professional development experience. Before that I studied on my own or at university. So I was looking over the questions here, found one about code quality, and came up with some related questions of my own. How do I increase my code understanding/reading skills? Will it also improve the quality of the code I write? Is there a better code notation than the Hungarian one? And are there any really good books for C++ design patterns (or does the language not matter)?
Thank you in advance for answering these questions and helping me improve :)
P.S. I forgot to mention that I am developing in C++ and C#.
There is only one way I've found to get better at reading other people's code, and that is to read other people's code. When you find a method or language construct you don't understand, look it up and play with it until you understand what is going on.
Hungarian notation is terrible; very few people use it today. It's more of an in-joke among programmers.
In fact, the name Hungarian notation is itself a joke:
"The term Hungarian notation is
memorable for many people because the
strings of unpronounceable consonants
vaguely resemble the consonant-rich
orthography of some Eastern European
languages."
From How To Write Unmaintainable Code
"Hungarian Notation is the tactical
nuclear weapon of source code
obfuscation techniques; use it! Due to
the sheer volume of source code
contaminated by this idiom nothing can
kill a maintenance engineer faster
than a well planned Hungarian Notation
attack."
And the ever-popular Linus has a few words to say on the matter:
"Encoding the type of a function into
the name (so-called Hungarian
notation) is brain damaged—the
compiler knows the types anyway and
can check those, and it only confuses
the programmer."
- Linus Torvalds
EDIT:
Taken from a comment by Tobias Langner.
"For the differences between Apss Hungarian and Systems Hungarian see Joel on Software".
Joel on Software has tips on how to read other people's code, called Reading Code is Like Reading the Talmud.
How do I increase my code understanding/reading skills?
Read, read, read. Learn from your mistakes. Review answers on SO and elsewhere. When you can think back on a piece of code you wrote and go "aha! I should've done xyz instead!", you're learning. Read a good book for your language of choice; get beyond the basics and understand more advanced concepts.
Then, apart from reading: write, write, write! Coding is like math: you won't fully grok it without actually solving problems. Glancing at a math problem's solution is different from getting out a blank piece of paper and solving it yourself.
If you can, do some pair programming too to see how others code and bounce ideas around.
Will it also improve the quality of the code I write?
See above. As you progress you should get more efficient. It won't happen by reading a book on design patterns. It will happen by solving real world problems and understanding why what you read works.
Is there a better code notation than the Hungarian one?
It depends. Generally I avoid them and use descriptive names. The one exception where I might use Hungarian-type notation is for UI elements such as Windows Forms or ASP.NET controls, for example: using btn as a prefix for a Submit button (btnSubmit), txt for a TextBox (txtFirstName), and so on, but it differs from project to project depending on the approach and patterns utilized.
With regards to UI elements, some people like to keep things alphabetical and may append the control type at the end, so the previous examples become submitButton and firstNameTextBox, respectively. In Windows Forms many people name forms as frmMain, which is Hungarian, while others prefer naming it based on the application name or form purpose, such as MainForm, ReportForm, etc.
EDIT: be sure to check out the difference between Apps Hungarian and Systems Hungarian, as mentioned by @Tobias Langner in a comment to an earlier response.
Pascal Case is generally used for method names, classes, and properties, where the first letter of each word is capitalized. For local variables Camel Case is typically used, where the first letter of the first word is lowercase and subsequent words have their first letters capitalized.
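A tiny illustration of those casing conventions (the names here are made up):

// Illustrative casing only; the names are invented.
class ReportPrinter                  // Pascal case: class name
{
public:
    void PrintSummary()              // Pascal case: method name
    {
        int pageCount = 0;           // camel case: local variable
        ++pageCount;
    }
};

int main()
{
    ReportPrinter printer;           // camel case local again
    printer.PrintSummary();
}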
You can check out the naming conventions and more from the .NET Framework Design Guidelines. There is a book and some of it is on MSDN.
And are there any really good books for C++ design patterns (or does the language not matter)?
Design patterns should be applicable to any language. Once you understand the concept and the reasoning behind a pattern's usefulness, you should be able to apply it in your language of choice. Of course, don't approach everything with a "written in stone" attitude; the pattern is the goal, and the implementation might differ slightly between languages depending on the language features available to you. Take the Decorator pattern, for example, and see how C# extension methods allow it to be implemented differently than without them; a rough C++ sketch of the classic pattern follows.
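As a hedged sketch of the classic Decorator shape in C++ (the Beverage example echoes Head First Design Patterns; the exact names here are invented):

#include <iostream>
#include <memory>
#include <string>

// The component interface that concrete objects and decorators share.
struct Beverage
{
    virtual ~Beverage() = default;
    virtual std::string Description() const = 0;
    virtual double Cost() const = 0;
};

struct Coffee : Beverage
{
    std::string Description() const override { return "coffee"; }
    double Cost() const override { return 2.0; }
};

// A decorator wraps another Beverage and layers behaviour on top of it.
struct Milk : Beverage
{
    explicit Milk(std::unique_ptr<Beverage> inner) : inner_(std::move(inner)) {}
    std::string Description() const override { return inner_->Description() + " + milk"; }
    double Cost() const override { return inner_->Cost() + 0.5; }

private:
    std::unique_ptr<Beverage> inner_;
};

int main()
{
    auto drink = std::make_unique<Milk>(std::make_unique<Coffee>());
    std::cout << drink->Description() << ": " << drink->Cost() << '\n';  // coffee + milk: 2.5
}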
Design Pattern books:
Head First Design Patterns - good beginner intro using Java but code is available for C++ and C# as a download (see "book code and downloads" section on the book's site)
Design Patterns: Elements of Reusable Object-Oriented Software - classic gang of four (GOF)
Patterns of Enterprise Application Architecture - Martin Fowler
If you're looking for best practices for quality coding in C++ and C# then look for the "Effective C++" and "More Effective C++" books (by Scott Meyers) and "Effective C#" and "More Effective C#" books (by Bill Wagner). They won't hold your hand along the way though, so you should have an understanding of the language in general. There are other books in the "Effective" series so make sure you see what's available for your languages.
I'm sure you can do a search here for other recommended reading so I'll stop here.
EDIT: added more details under the Hungarian Notation question.
I can't speak for everyone else, but in my experience I've found that the best thing I learned about writing readable and generally better code was reading (and ultimately cleaning up) a lot of other people's code. Some people may disagree with me, but I think it's invaluable. Here's my reasoning:
When you start programming, it's difficult to determine what is crap vs. not crap vs. good. Being logical, rational, and extremely intelligent helps in writing good code, but even those factors don't always suffice. By reading others' work and doing the dirty work, you'll have gone through the experience of what works and what doesn't. Eventually, you'll be able to mentally navigate the minefields that others had to cross, and you'll be prepared to avoid those identical minefields.
By reading others' work, you gain insight into their minds and how they tackle a problem. From an architecture or technique standpoint alone, this can be very useful to you, whether their tactics were good or bad. By reading other people's successful or unsuccessful implementations, you gain that knowledge without putting in the actual time it took them to learn it.
Design patterns are extremely useful. Only time and experience with them will teach you the appropriate pattern for a given problem. Again, read other people's code for this, where they've successfully built some pattern that may be useful for you.
When dealing with extreme problems where other people's work falls short, you learn to research and dive into the internals of whatever system/language/platform/framework you're working with. This research ability is very useful on its own when all else fails. But you'll never know when to start looking, or for what, until you've gotten through the crud of other people's work. Good code or bad, it's all valuable in some form or fashion.
All these notations, formats, and nomenclatures are helpful, but they can be learned or adopted rather quickly, and their payoff is fairly substantial. By reading code from other people, you'll develop your own style of logic. As you encounter other people's work and the tremendous amount of effort it takes to read through it, you'll learn which logical pitfalls to avoid and what to do the next time yourself, and even how to fix bad code faster.
I've never felt as if I were a great programmer. Not to say I'm a bad one, either, but I feel confident in my abilities: my experience has taught me so much, and my ability to adapt to every situation is what makes me a solid programmer. Learning from other people and their code has helped me. Whether their work was good or bad, there's always something you can take from them and their experience and add it to your own memories and knowledge.
Ask other people to read your code! Try to get a fellow coworker to do a code review with you. Having someone else comb through your code and ask questions about it will provide new insights into and critiques of your style and techniques. Learn from your mistakes and others'.
Just to give you a bit of encouragement: I've been a professional programmer for 30 years now, and I still find reading other people's code very difficult. The major reason for this, unfortunately, is that the quality of the code follows Sturgeon's Law: 90% of it is crap. So don't think it's your fault if you find it hard going!
The biggest improvement in readability of my code came about when I started liberally using white space.
I found this article on Joel on Software to be very relevant to the Hungarian notation debate.
It seems that the original intent of the notation was to encode type information that wasn't immediately obvious: not whether a variable is an int (iFoo), but what kind of int it is, such as a distance in centimeters (cmFoo). That way, if you see "cmFoo = mBar", you can tell that it's wrong: even though both are ints, one is meters and the other is centimeters, so the logic of the statement is incorrect even though the syntax is fine. (Of course, I personally prefer to use classes, so that such a statement wouldn't even compile, or would do the conversion for you.)
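A minimal sketch of that classes-instead-of-prefixes idea (the types here are invented for illustration): distinct unit types make the mistaken assignment a compile error and force the conversion to be explicit.

// Distinct types for distinct units: "cmFoo = mBar" can't compile by accident.
struct Centimeters
{
    explicit Centimeters(double v) : value(v) {}
    double value;
};

struct Meters
{
    explicit Meters(double v) : value(v) {}
    double value;

    // The conversion exists, but you have to ask for it by name.
    Centimeters ToCentimeters() const { return Centimeters(value * 100.0); }
};

int main()
{
    Meters bar(1.5);
    // Centimeters foo = bar;               // error: no implicit conversion
    Centimeters foo = bar.ToCentimeters();  // the intent is now explicit
    (void)foo;                              // silence unused-variable warnings
}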
The problem was at some point people started using it without understanding it, and the world was cursed with lots of dwFoos and bBars.
Everyone needs to read that article: Making Wrong Code Look Wrong.
How do I increase my code understanding/reading skills?
Reading code is like dancing by yourself. You need a partner, and I suggest a debugger.
Stepping through code with a debugger is a true, lively dance. I recommend getting a quality, open-source project in the language of your choice, and then step through with the debugger. Concepts will come alive if you ask "why did that happen?", "what should happen next?".
One should ultimately be able to reason about code without a debugger; don't let it become a crutch. But that said, it is a very valuable tool.
Reading code is a lot like reading literature, in that you sometimes need insight into the author to understand what you're looking at and what to expect. The only way to improve your comprehension skills is by reading as much code as possible and trying to follow along.
I think a lot of what is mentioned here is applicable to coding...
Reading and understanding skills are a question of time. You will improve them as you get more experienced. It also depends on the quality of the code you are reading.
Keep in mind that sometimes it's not the best idea to learn directly from what you see at work. Books will teach you best practices, and you will be able to adapt them to your own situation with experience.
I am reading the Head First Design Patterns at present and it is very useful. It presents the information in a novel way that is easy to understand. Nice to see they have a C# version for download.
There will always be struggles with reading code unless you are Jon Skeet. This isn't to say that it's a big problem; rather, unless you can eat, sleep, and breathe a programming language, it will always take a little time to digest code. Looking at other people's code is certainly a good suggestion for helping in some ways, but remember that there are many different coding conventions out there, and some may be more strictly enforced than others, e.g. interface names starting with an I, to give a simple example. So I guess I'm saying that even with Visual Studio and ReSharper, there is still a little work involved in understanding a few lines of code, since I can't quite write out sentences in C# yet.
1) Educate yourself. Read relevant literature.
2) Write code
3) Read code
4) Read relevant blogs.
Visit http://hanselminutes.com. He is a programmer from Microsoft. Even if you don't program on the Microsoft stack, it's good to listen through; there is a podcast there that answers this question.
Another suggestion is to make sure you have the appropriate tools for the job before you start digging into a piece of code. Trying to understand a code-base without the ability to search across the entire set of files is extremely difficult.
Granted, we very rarely have the entire set of files, especially in large projects, but whatever boundaries you draw, you should have good visibility and searchability across those files. Whatever lies outside those boundaries can be considered 'black box' and perhaps lies outside the scope of your grokking.
There are many good open source editors including Eclipse and the CDT. Spending some time learning how to effectively create projects, search across projects, and enable any IDE-specific tooltips/helpers can make a world of difference.