Immutable Data Structures - Application maintenance - Clojure

I have been reading about immutable data structures and understand that they make change detection easy. Quite often I also hear that they make application maintenance simpler and provide an easy-to-understand programming model.
I need help understanding how they simplify the job.

The Clojure community has embraced immutability and it is an eye-opener. The best I can do is send you to the source: Rich Hickey's essay on State and his talk The Value of Values. Rich explains how separating the traditional concept of a variable into three distinct concepts (identity, state, and value) helps you model your system and reason about it.
It boils down to this: in your programming model, you should only allow things to change if they change in the system you are trying to model. Otherwise you are adding moving parts (mutable variables and objects) to a model that doesn't need them. This makes the model harder to understand (especially as it evolves over time) but has little or no benefit.
Even though reading helps, the only way to grok this is to program in a language that takes immutability as the default, until you realize that most of the systems you model actually have only a handful of things that change, instead of pages and pages of mutable variables.
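To make the identity/state/value split concrete, here is a minimal sketch (in C++, since Clojure gives you this for free with persistent maps and atoms; all names are invented for illustration). Values are immutable; the only moving part is a single reference that is swapped from one immutable value to the next, and change detection becomes a pointer comparison:
#include <map>
#include <memory>
#include <string>
// A value: an immutable snapshot. "Changing" it means building a new value.
using Scores = std::map<std::string, int>;
std::shared_ptr<const Scores> with_score(const std::shared_ptr<const Scores>& v,
                                         const std::string& name, int score) {
    auto next = std::make_shared<Scores>(*v); // copy; the old value stays intact
    (*next)[name] = score;
    return next;
}
int main() {
    // The identity: the one mutable cell, pointing at successive immutable values.
    std::shared_ptr<const Scores> scores = std::make_shared<Scores>();
    auto before = scores;                      // observers can keep the old snapshot safely
    scores = with_score(scores, "alice", 10);  // state transition: swap in a new value
    bool changed = (before != scores);         // change detection is a pointer compare
    (void)changed;
}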

Immutability is certainly more embraced in functional languages than in imperative ones, even if you can have a Java programming style that limits mutability (see this for immutability in Java). That said, I will just comment on [functional/immutability] and [object/mutability].
I'm a Clojure fan and find functional programming really powerful, but...
Maybe I have spent too much time with C++ and Java and not enough with Lisp and Clojure, but I reckon that the simpler-maintenance argument has yet to be proven by facts. I'm not sure there are reliable surveys on the actual cost of maintenance in big production systems, with data on the technology used and the associated costs.
Certainly, in terms of LOC, languages like Clojure are more focused and concise than Java. Hence you can say that less code leads to less maintenance, but I think the functional style gives really compact code that needs very focused attention to fully understand what a function is doing, compared to the imperative style, which is more verbose but kind of straightforward. One big advantage of functional programming combined with immutability is the ability to isolate a function and experiment with it without having to drag along a heavy context of satellite objects or build a bunch of mocks, which is very often the case with OO languages. Experimentation aside, a pure function won't modify its arguments, which eases the fear of unintentionally breaking some piece of code outside the scope of the function.
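As a small illustration of that last point (sketched in C++ rather than Clojure; the function is hypothetical), a pure function reads its arguments without modifying them and returns a fresh result, so it can be exercised in isolation with no context to set up:
#include <numeric>
#include <vector>
// Pure: reads its argument, never mutates it, needs no hidden context to test.
// mean({1, 2, 3}) == 2.0 regardless of any surrounding program state.
double mean(const std::vector<double>& xs) {
    if (xs.empty()) return 0.0;
    return std::accumulate(xs.begin(), xs.end(), 0.0) / xs.size();
}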
But, putting aside the merits of functional/immutability over OOP/mutability, in terms of maintenance my experience leads me to think that the technology is not the main issue; it is the design, the code quality, and the evolution of that code over time, even when the initial code was of good quality. By "good", I mean that the code respects style conventions (like basic naming), manages complexity, and has a sensible test harness, in a continuous (or at least automated) build environment.
Then the question becomes: is there a paradigm (functional/immutability, object-oriented/mutability) that enforces better design and better code? My feeling is that functional languages are the land of computer science enthusiasts, whereas OOP is more mainstream. Is that because OOP is easier to apprehend, or is it just a matter of education? And then, in order to maintain a system in the long run, should one go for a "clever" functional environment with few people able to tackle it, or some mainstream OO technology, with its unsafeness or permissiveness, but lots of people having some knowledge of it?
Certainly the solution is to choose the right technologies (plural) with the right, motivated people...

Related

Symptoms and alternatives to overused OOP

Lately I am losing my trust in OOP. I have already seen many complaints about common OOP misuses, or just simple overuse. I do not mean the common confusion between is-a and has-a relationships. I mean things like the problems of ORM when dealing with relational databases, the excessive use of inheritance in C#, and also several years of looking at code with the same false encapsulation belief that Scott Meyers mentions in Item 23 of Effective C++.
I am interested in learning more about this, and about non-OOP software patterns that can solve certain problems better than their OOP counterparts. I am convinced that out there are many people giving good advice on how to use this to advantage with non-pure-OOP languages such as C++.
Does anyone know of any good references (author, book, article) to get started?
Please notice that I am looking for two related but different things:
Common misuses of OOP concepts (like Item 23)
Patterns where OOP is not the best solution (with alternatives)
Well, I can recommend the book Agile Principles, Patterns, and Practices in C#.
The examples are in C#, of course, but the idea of the book is universal. Not only does it cover Agile, it also focuses on bad practices and shows by example how to convert bad code into good code. It also contains descriptions of many design patterns and shows how to implement them in a semi-realistic Payroll application example.
If you truly want to get away from OOP, or at least take a look at concepts which are not OOP but are used with great effectiveness: Learn You a Haskell. Try a new programming paradigm and then start seeing where you can apply many of its concepts back in OOP languages. This addresses your second bullet, not in a direct way, but trust me, it will help more than you might think.
It's a bit odd that you mention C#. It has very powerful keywords to keep the usual inheritance misery in check. The first is the internal keyword: the notion of restricting visibility to a module. That concept is completely absent in C++; the build model just doesn't support it. Otherwise a great concept: "I only trust the members of my team to get it right." Of course you do.
Then there's the slammer, the sealed keyword. Extraordinarily powerful: "the buck stops here, don't mess with me." Used with surgical precision in the .NET framework; I've not yet found a case where sealed was used inappropriately. It is also missing in C++, though there are obscure ways to get it working.
But yes, the WPF object model sucks fairly heavy. Inheriting 6 levels deep and using backdoors like a dependency property is offensive. Inheritance is hard, let's go shopping.
I would say to look at game engines. For the most part, OOP has a tendency to cause slight performance decreases, and the gaming industry is seemingly obsessed with eliminating minor slowdowns (while sometimes ignoring large ones). As such, their code, though usually written in a language that supports OOP, ends up using only those elements of OOP that deliver clean code and ease of maintenance while balancing performance.
EDIT:
Having said that, I don't know if I would really go look at Unreal. They do some strange things for the sake of making their content pipeline easier for developers... it makes their code... well, look if you really want to know.
One common overuse is forcing OOP onto programs/scripts that take some input, turn it into output, then exit (without receiving input from anywhere else during the process). The procedural way is much cleaner in these cases.
A typical example of this is forcing OOP onto PHP scripts.

Efficiency of program

I want to know whether there is an effect on program efficiency from adopting an object-oriented approach to a problem as compared to the structured programming approach, in any programming language but especially in C++.
Maybe. Maybe not.
You can write efficient object-oriented code. You can write inefficient structured code.
It depends on the application, how well the code is written, and how heavily the code is optimized. In general, you should write code so that it has a good, clean, modular architecture and is well designed, then if you have problems with performance optimize the hot spots that are causing performance issues.
Use object oriented programming where it makes sense to use it and use structured programming where it makes sense to use it. You don't have to choose between one and the other: you can use both.
I remember back in the early 1990s, when C++ was young, there were studies done about this. If I remember correctly, the guys who took (well-written) C++ programs and recoded them in C got around a 15% increase in speed. The guys who took C programs, recoded them in C++, and converted the imperative style of C to an OO style (but kept the same algorithms) got the same or better performance. The apparent contradiction was explained by the observation that the C programs, in being translated to an object-oriented style, became better organized. Things that you did in C because it was too much code and trouble to do better could more easily be done properly in C++.
Thinking back on this, I wonder about the conclusion somewhat. Writing a program a second time will always result in a better program, so it didn't have to be the move from imperative to OO style that made the difference. Today's computer architectures are designed with hardware support for common operations done by OO programs, and compilers have gotten better at using those instructions, so I think it is likely that whatever overhead a virtual function call had in 1992, it is far smaller today.
There doesn't have to be, if you are very careful to avoid it. If you just take the most straightforward approach, using dynamic allocation, virtual functions, and (especially) passing objects by value, then yes there will be inefficiency.
It doesn't have to be. The algorithm is all that matters. I agree encapsulation will slow you down a little bit, but compilers are there to optimize.
You would say no if this were a question on a computer science paper.
However, in a real development environment this tends to be true, if the OOP paradigm is used correctly. The reason is that in a real development process we generally need to maintain our code base, and that is when the OOP paradigm can help us. One strong point of OOP over structured programming like C is that in OOP it is easier to make the code maintainable. When the code is more maintainable, it means fewer bugs, less time spent fixing bugs, and less time needed to implement new features. The bottom line is that we then have more time to focus on the efficiency of the application.
The problem is not technical, it is psychological. It is in what it encourages you to do by making it easy.
To make a mundane analogy, it is like a credit card. It is much more efficient than writing checks or using cash. If that is so, why do people get in so much trouble with credit cards? Because they are so easy to use that they abuse them. It takes great discipline not to over-use a good thing.
The way OO gets abused is by
Creating too many "layers of abstraction"
Creating too much redundant data structure
Encouraging the use of notification-style code, attempting to maintain consistency within redundant data structures.
It is better to minimize data structure, and if it must be redundant, be able to tolerate temporary inconsistency.
ADDED:
As an illustration of the kind of thing that OO encourages, here's what I see sometimes in performance tuning: Somebody sets SomeProperty = true;. That sounds innocent enough, right? Well that can ripple to objects that contain that object, often through polymorphism that's hard to trace. That can mean that some list or dictionary somewhere needs to have things added to it or removed from it. That can mean that some tree or list control needs controls added or removed or shuffled. That can mean windows are being created or destroyed. It can also mean some things need to be changed in a database, which might not be local so there's some I/O or mutex locking to be done.
It can really get crazy. But who cares? It's abstract.
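A caricature of that ripple, next to the recompute-instead-of-store alternative implied above (a C++ sketch with invented names; not anyone's real code):
#include <algorithm>
#include <cstddef>
#include <vector>
// Notification style: a setter fans out to dependents that mirror the state.
struct Item {
    bool selected = false;
    void set_selected(bool v) {
        selected = v;
        // ...update the list control, the tree view, maybe the database...
        // Every dependent keeps a redundant copy that must be kept consistent.
    }
};
// The alternative: one copy of the data, and derived views recomputed on demand.
struct Model {
    std::vector<bool> selected;
    std::ptrdiff_t selected_count() const {    // derived, never stored, never stale
        return std::count(selected.begin(), selected.end(), true);
    }
};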
There could be: the OO approach tends to be closer to a decoupled approach where different modules don't go poking around inside each other. They are restricted to public interfaces, and there is always a potential cost in that. For example, calling a getter instead of just directly examining a variable; or calling a virtual function by default because the type of an object isn't sufficiently obvious for a direct call.
That said, there are several factors that diminish this as a useful observation.
A well-written structured program should have the same modularity (i.e. hiding implementations), and therefore incur the same costs of indirection. The cost of calling a function pointer in C is probably going to be very similar to the cost of calling a virtual function in C++.
Modern JITs, and even the use of inline methods in C++, can remove the indirection cost.
The costs themselves are probably relatively small (typically just a few extra simple operations per function call). This will be insignificant in a program where the real work is done in tight loops.
Finally, a more modular style frees the programmer to tackle more complicated (but hopefully less complex) algorithms without the peril of low-level bugs.
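To make the kinds of indirection being compared concrete, here is a rough C++ sketch (illustrative only; actual costs depend entirely on the compiler and hardware):
struct Point { int x; };
// Direct access / direct call: the compiler can trivially inline this.
inline int direct(const Point& p) { return p.x; }
// C-style indirection: a function pointer; cost comparable to a virtual call.
typedef int (*getter_fn)(const Point&);
// C++-style indirection: a virtual call through an interface...
struct Readable {
    virtual ~Readable() = default;
    virtual int value() const = 0;  // ...which a JIT or LTO pass may devirtualize and inline
};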

What are the best resources for learning how to avoid side effects and state in OOP?

I've been playing with functional programming lately, and there are pretty good treatments of the topic of side effects, why they should be contained, etc. For projects where OOP is used, I'm looking for some resources which lay out strategies for minimizing side effects and/or state.
A good example of this is the book RESTful Web Services, which gives you strategies for minimizing state in a web application. What others exist?
Remember, I'm not looking for another OOP analysis/design patterns book (though good encapsulation and loose coupling help avoid side effects), but rather a resource where the topic itself is state/side effects.
Some compiled answers
OOP programmers who mostly care about state do so because of concurrency, so read Java Concurrency in Practice. [exactly what I was looking for]
Use TDD to make side effects more visible [I like it, example: the bigger your setUps are, the more state you need in place to run your tests = good warning]
Command-query separation [Good stuff, prevents the side effect of changing a function argument which is generally confusing]
Methods should do only one thing; perhaps use descriptive names if they change the state of their object, so it's simple and clear.
Make objects immutable [I really like this]
Pass values as parameters, instead of storing them in member variables. [I don't like this; it clutters up the function prototype and is actively discouraged by Clean Code and other books, though I do admit it helps the state issue]
Recompute values instead of storing and updating them [I also really like this; in the apps I work on performance is a minor concern]
Similarly, don't copy state around if you can avoid it. Make one object responsible for keeping it and let others access it there. [Basic OOP principle, good advice]
I don't think you'll find a lot of current material in the OO world on this topic, simply because OOP (and most imperative programming, for that matter) relies on state and side effects. Consider logging, for instance. It's pure side effect, yet in any self-respecting J2EE app it's everywhere. Hoare's original Quicksort relies on mutable state, since you have to swap values around a pivot, and yet it too is everywhere.
This is why many OO programmers have trouble wrapping their heads around functional programming paradigms. They try to reassign the value of "x," discover that it can't be done (at least not in the way it can in every other language they've worked in), and they throw up their hands and shout "This is impossible!" Eventually, if they're patient, they learn recursion and currying and how the map function replaces the need for loops, and they calm down. But the learning curve can be very steep for some.
The OO programmers these days who care most about avoiding state are those working on concurrency. The reasons for this are obvious -- mutable state and side effects cause huge headaches when you're trying to manage concurrency between threads. As a result, the best discussion I've seen in the OO world about avoiding state is Java Concurrency in Practice.
I think the rules are quite simple: methods should only ever do one thing, and the intent should be communicated clearly in the method name.
Methods should either query or change data, but never both.
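In code, that separation looks roughly like the classic stack example (a C++ sketch, not from any particular book):
#include <stdexcept>
#include <vector>
class Stack {
    std::vector<int> items_;
public:
    // Query: answers a question, changes nothing (hence const).
    int top() const {
        if (items_.empty()) throw std::out_of_range("empty stack");
        return items_.back();
    }
    // Command: changes state, answers nothing.
    void pop() {
        if (items_.empty()) throw std::out_of_range("empty stack");
        items_.pop_back();
    }
    void push(int v) { items_.push_back(v); }
};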
Some small things I do:
Prefer immutable state, it is relatively benign. E.g. in Java I make member variables final and set them in the constructor wherever possible.
Pass around values as parameters, instead of storing them in member variables.
Recompute values instead of storing and updating them, if that can be done cheaply enough. This helps to avoid inconsistent data by forgetting to update it.
Similarly, don't copy state around if you can avoid it. Make one object responsible for keeping it and let others access it there.
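A minimal C++ sketch of the first three points (the const members play the role of Java's final fields; the example is invented):
#include <string>
#include <utility>
class User {
    const std::string name_;   // set once in the constructor, immutable afterwards
    const int birth_year_;
public:
    User(std::string name, int birth_year)
        : name_(std::move(name)), birth_year_(birth_year) {}
    const std::string& name() const { return name_; }
    // Recompute instead of storing: the changing value comes in as a parameter,
    // so there is no stored 'age' field that could ever go stale.
    int age(int current_year) const { return current_year - birth_year_; }
};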
One way to isolate side effects in OO is to let operations return only a description object of the side effects to cause.
Command-query separation is a pattern that is close to this idea.
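A sketch of the "description object" idea in C++ (all types are hypothetical): the core logic only computes what should happen, and a thin outer layer actually performs it.
#include <string>
#include <vector>
// The pure core decides what should happen but touches nothing.
struct Effect {
    enum Kind { WriteFile, SendEmail } kind;
    std::string target;
    std::string payload;
};
std::vector<Effect> process_order(int order_id) {
    // Easy to unit-test: just inspect the returned descriptions.
    return { {Effect::SendEmail, "sales@example.com",
              "confirm order " + std::to_string(order_id)} };
}
// A thin impure shell is the only place the effects are actually performed.
void perform(const std::vector<Effect>& effects);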
By practising TDD (or at least writing unit tests), one will typically become much more aware of side effects and use them more sparingly, and will also separate them from side-effect-free expressions, which are easy to write data-driven (expected, actual) unit tests for.

For C/C++, When is it beneficial not to use Object Oriented Programming?

I find myself always trying to fit everything into the OOP methodology, when I'm coding in C/C++. But I realize that I don't always have to force everything into this mold. What are some pros/cons for using the OOP methodology versus not? I'm more interested in the pros/cons of NOT using OOP (for example, are there optimization benefits to not using OOP?). Thanks, let me know.
Of course it's very easy to explain a million reasons why OOP is a good thing. These include: design patterns, abstraction, encapsulation, modularity, polymorphism, and inheritance.
When not to use OOP:
Putting square pegs in round holes: Don't wrap everything in classes when it doesn't need to be. Sometimes there is no need, and the extra overhead just makes your code slower and more complex.
Object state can get very complex: There is a really good quote from Joe Armstrong, who invented Erlang:
The problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.
Your code is already not OOP: It's not worth porting your code if your old code is not OOP. There is a quote from Richard Stallman from 1995:
Adding OOP to Emacs is not clearly an improvement; I used OOP when working on the Lisp Machine window systems, and I disagree with the usual view that it is a superior way to program.
Portability with C: You may need to export a set of functions to C. Although you can simulate OOP in C by making a struct and a set of functions whose first parameter takes a pointer to that struct, it isn't always natural.
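That simulation typically looks something like this sketch (illustrative names; the listing compiles as either C or C++):
#include <stdio.h>
/* The "class": a struct, plus free functions whose first parameter points to it. */
typedef struct Counter { int value; } Counter;
void counter_init(Counter *c)      { c->value = 0; }
void counter_increment(Counter *c) { c->value++; }
int  counter_get(const Counter *c) { return c->value; }
int main(void) {
    Counter c;
    counter_init(&c);
    counter_increment(&c);
    printf("%d\n", counter_get(&c));   /* prints 1 */
    return 0;
}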
You may find more reasons in the paper entitled Bad Engineering Properties of Object-Oriented Languages.
Wikipedia's Object Oriented Programming page also discusses some pros and cons.
One school of thought with object-oriented programming is that you should have all of the functions that operate on a class as methods on the class.
Scott Meyers, one of the C++ gurus, actually argues against this in this article:
How Non-Member Functions Improve Encapsulation.
He basically says, unless there's a real compelling reason to, you should keep the function SEPARATE from the class. Otherwise the class can turn into this big bloated unmanageable mess.
Based on experiences in a previous large project, I totally agree with him.
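Roughly, the shape of that advice is the following (an invented example, not Meyers' own): keep the class interface minimal and layer conveniences on top as non-member functions.
#include <string>
// Keep the class minimal: only what truly needs access to the internals.
class TextBuffer {
    std::string data_;
public:
    void append(const std::string& s) { data_ += s; }
    const std::string& text() const { return data_; }
};
// Convenience layered on top as a non-member function. It uses only the public
// interface, so it cannot depend on (or break) the class's private state.
inline void append_line(TextBuffer& buf, const std::string& line) {
    buf.append(line);
    buf.append("\n");
}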
A benefit of non-OOP functionality is that it often makes exporting your functionality to different languages easier. For example, a simple DLL containing only functions is much easier to use from C#: you can use P/Invoke to simply call the C++ functions. So in this sense it can be useful for writing extremely time-critical algorithms that fit nicely into single/few function calls.
OOP is used a lot in GUI code, computer games, and simulations. Windows should be polymorphic - you can click on them, resize them, and so on. Computer game objects should be polymorphic - they probably have a location, a path to follow, they might have health, and they might have some AI behavior. Simulation objects also have behavior that is similar, but breaks down into classes.
For most things though, OOP is a bit of a waste of time. State usually just causes trouble, unless you have put it safely in the database where it belongs.
I suggest you read Bjarne's paper Why C++ is not just an Object-Oriented Programming Language.
Consider, for a moment, not object-orientation itself but one of the keystones of object-orientation: encapsulation.
It can be shown that change-propagation probability cannot increase with distance from the change: if A depends on B and B depends on C, and we change C, then the probability that A will change cannot be larger than the probability that B will change. If B is a direct dependency on C and A is an indirect dependency on C, then, more generally, to minimise the potential cost of any change in a system we must minimise the potential number of direct dependencies.
The ISO defines encapsulation as the property that the information contained in an object is accessible only through interactions at the interfaces supported by the object.
We use encapsulation to minimise the number of potential dependencies with the highest change-propagation probability. Basically, encapsulation mitigates the ripple effect.
Thus one reason not to use encapsulation is when the system is so small or so unchanging that the cost of potential ripple effects is negligible. This is also, therefore, a case where OO might be forgone without potentially costly consequences.
Well, there are several alternatives. Non-OOP code in C++ may instead be:
C-style procedural code, or
C++-style generic programming
The only advantages to the first are the simplicity and backwards-compatibility. If you're writing a small trivial app, then messing around with classes is just a waste of time. If you're trying to write a "Hello World", just call printf already. Don't bother wrapping it in a class. And if you're working with an existing C codebase, it's probably not object-oriented, and trying to force it into a different paradigm than it already uses is just a recipe for pain.
For the latter, the situation is different, in that this approach is often superior to "traditional OOP".
Generic programming gives you greater performance (among other things because you often avoid the overhead of vtables, and because with less indirection the compiler is better able to inline), better type safety (because the exact type is known, rather than being hidden behind an interface), and often cleaner and more concise code as well (STL iterators and algorithms enable much of this without using a single instance of runtime polymorphism or virtual functions).
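For instance, a sketch of the compile-time "polymorphism" being described (hypothetical code):
#include <list>
#include <vector>
// Compile-time polymorphism: no base class, no vtable. The exact types are
// known to the compiler, so the predicate call can be inlined away entirely.
template <typename Container, typename Predicate>
int count_matching(const Container& c, Predicate pred) {
    int n = 0;
    for (const auto& x : c)
        if (pred(x)) ++n;
    return n;
}
// Usage: works unchanged on std::vector, std::list, plain arrays...
// count_matching(std::vector<int>{1, 5, 9}, [](int x) { return x > 3; });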
OOP is little more than an aging buzzword. A methodology that everyone misunderstood (the version supported by C++ and Java has little to do with what OOP originally meant, as in Smalltalk), and then pretended was the holy grail. There are aspects of it that are useful, certainly, but it is often not the best approach for designing an application.
Rather, express the overall logic by other means, for example generic programming, and when you need a class to encapsulate some simple concept, by all means design it according to OOP principles.
OOP is just a tool among many. The goal is not to write OOP code, but to write good code. Sometimes, the way to do this is by using OOP principles, but often, you can get better code using generic programmming principles, or functional programming.
It is a very project-dependent decision. My general feel for OOP is that it's useful for organizing large projects that involve multiple components. One area where I find OOP especially pointless is school assignments. Excepting those specifically designed to teach OOP concepts, or large software-design concepts, many of my assignments, specifically those in more algorithm-centric classes, were best suited to non-OOP design.
So specifically: smaller projects that are not likely to grow large, and projects that center around a single algorithm, seem to be non-OOP candidates in my book. Also, if you can write the specification as a linear set of steps, e.g. with no interactive GUI or state to maintain, that would also be an opportunity.
Of course, if you're required to use an OOP design or an OOP toolkit, or if you have well-defined 'objects' in your spec, or if you need the features of polymorphism, etc., there are plenty of reasons to use it. The above just seem to be indicators of when it would be simple not to.
Just my $0.02.
Having an Ada background, I develop in C in terms of packages containing data and their associated functions. This gives very modular code, with pieces that can be taken apart and reused on other projects. I don't feel the need to use OOP.
When I develop in Objective-C, objects are the natural container for data and code. I still develop with more or less the package concept in mind with some new cool features.
I used to be an OOP fanboy... then realized that using functions, generics, and callbacks can often make for a more elegant and change-friendly solution in C++ than classes and virtual functions.
Other big names realized it too: http://harmful.cat-v.org/software/OO_programming/
IMHO, I have a feeling that the OOP concept does not really suit the needs of big data, as OOP assumes all the stuff is kept in memory (the concept of objects and member variables). This always results in memory-demanding, heavy applications when OOP is used, for example, for big image processing. Instead, the simplicity of C may be used with intensive parallel I/O, making apps more efficient and easier to implement. It is the year 2019 as I write this message... everything may change in a year! :)
In my mind it comes down to what kind of model suits the problem at hand. It seems to me that OOP is best suited to coding GUI programs, in that the data and functionality for a graphical object are easily bundled together. Other problems (such as a web server, as an example off the top of my head) might be more easily modeled with a data-centric approach, where there's no strong advantage to having a method and its data near each other.
tl;dr depends on the problem.
I'd say the greatest benefit of C++ OOP is inheritance and polymorphism (virtual functions, etc.).
This allows for code reuse and extensibility.
C++: use OOP. C: no, with certain exceptions.
In C++ you should use OOP. It's a nice abstraction and it's the tool you are given. You either use it or leave it in the box where it can't help. You don't use the power saw for everything but I would read the manual and have it ready for the right job.
In C, it's a more difficult call. While you can certainly write arbitrarily object-oriented code in C, it's enough of a pain that you immediately find yourself fighting the language in order to use it. You may be more productive dropping the doesn't-fit-so-well design pattern and programming as C was intended to be used.
Furthermore, every time you make an array of function pointers or something in an OOP-in-C design pattern, you sever almost completely all visible links in the inheritance chain, making the code hard to maintain. In real OOP languages, there is an obvious chain of derived classes, often analyzed and documented for you. (mmm, javadoc.) Not so in OOP-in-C, and the tools available won't be able to see it.
So, I would argue in general against OOP in C. For a really complex program, you may well need the abstraction, and then you will have to do it despite needing to fight the language in the process and despite making the program quite hard to follow by anyone other than the original author.
But if you knew the program was going to become that complicated, you shouldn't have written it in C in the first place...
In C, there are times when I 'emulate' the object-oriented approach by defining some sort of constructor, with granular control over things like callbacks, when running several instances of something.
For instance, let's say I have some spiffy event handler library and I know that down the road I'm going to need many allocated copies:
So I would have (in C)
MyEvent *ev1 = new_eventhandler();           /* the 'constructor': allocate and initialize */
set_event_callback_func(ev1, callback_one);  /* per-instance behavior via a function pointer */
set_event_fd(ev1, fd1);
MyEvent *ev2 = new_eventhandler();
set_event_callback_func(ev2, callback_two);
set_event_fd(ev2, fd2);
/* ... run the event loop, dispatching to the callbacks ... */
destroy_eventhandler(ev1);                   /* the 'destructor': release the instance */
destroy_eventhandler(ev2);
Obviously, I would later do something useful with that, like handling received events in the two respective callback functions. I'm not going to elaborate on the method of typing the function pointers and the structures that hold them, nor on what goes on in the 'constructor', because it's pretty obvious.
I think this approach works for more advanced interfaces where it's desirable to allow the user to define their own callbacks (and change them on the fly), or when working on complex non-blocking I/O services.
Otherwise, I much prefer a more procedural / functional approach.
Probably an unpopular idea, but I think you should stick with non-OOP code unless OOP adds something useful. In most practical problems OOP is useful, but if I'm just playing with an idea I start writing non-object code and move functions and data into classes only if that becomes useful.
Of course I still use other objects in my code (std::vector et al.), and I use namespaces to help organize my functions, but why put code into objects until it is useful? Equally, don't shy away from free functions in an OO solution.
The question is tricky because OOP encompasses several concepts: object encapsulation, polymorphism, inheritance, etc. It's easy to take those ideas too far. Here's a concrete example:
When C++ first caught on, zillions of string classes sprang into being. Everything you could possibly imagine doing to a string (upcasing, downcasing, trimming, tokenizing, parsing, etc.) was a member function of some string class.
Notice, though, that std::strings from the STL don't have all these methods. STL is object-oriented--the state and implementation details of a string object are well encapsulated, only a small, orthogonal interface is exposed to the world. All the crazy manipulations that people used to include as member functions are now delegated to non-member functions.
This is powerful, because these functions can now work on any string class that exposes the same interface. If you use STL strings for most things and a specialty version tuned to your program's idiosyncrasies, you don't have to duplicate member functions. You just have to implement the basic string interface and then you can re-use all those crazy manipulations.
Some people call this hybrid approach generic programming. It's still object-oriented programming, but it moves away from the "everything is a member-function" mentality that a lot of people associate with OOP.
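A sketch of that reuse (hypothetical code): a manipulation written once as a template works with std::string and with any specialty string exposing the same small interface.
#include <cctype>
#include <string>
// Written once, works for std::string and for any string type that exposes
// the same minimal interface (here, iteration over char-like elements).
template <typename String>
String to_upper(String s) {
    for (auto& ch : s)
        ch = static_cast<char>(std::toupper(static_cast<unsigned char>(ch)));
    return s;
}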

As a programmer with no CS degree, do I have to learn C++ extensively?

I'm a programmer with 2 years' experience; I've worked in 4 places and I really think of myself as a confident and fluent developer.
Most of my colleagues have CS degrees, and I don't really feel any difference! However, to keep my mind on the same stream as these guys, I studied C (I read Beginning C: From Novice to Professional), data structures with C, and also OOP with C++.
I have a reasonable understanding of pointers and memory management, and I also attended a scholarship program of which C, data structures, and C++ were a part.
I want to note that my familiarity with C and C++ does not exceed reading some pages and executing some demos; I haven't worked on any project using C or C++.
Lately a friend of mine advised me to learn C, and C++ extensively, and then move to OpenGL and learn about graphics programming. He said that the insights I may gain by learning these topics will really help me throughout my entire life as a programmer.
PS: I work as a full-time developer mostly working on ASP.NET applications using C#.
Recommendations?
For practical advancement:
From a practical sense, pick a language that suits the domain you want to work in.
There is no need to learn C or C++ for most programming spaces. You can be a perfectly competent programmer without writing a line of code in those languages.
If, however, you are not happy working in the exact field you are in now, you can learn C or C++ so that you may find a lower-level programming job.
Helping you be a better programmer:
You can learn a lot from learning multiple languages though. So it is always good to broaden your horizons that way.
If you want more experience in another language and have not tried it yet, I would recommend learning a functional programming language such as Scheme, Lisp, or Haskell.
First, having a degree has nothing to do with knowing C++. I know several people who graduated from CS without ever writing more than 50 lines of C/C++. CS is not about programming (in the same sense that surgery is not about knives), and it certainly isn't about individual languages. A CS degree requires you to poke your nose into several different languages, on your way to somewhere else. CS teaches the underlying concepts, an understanding of compilers, operating systems, the hardware your code is running on, algorithms and data structures and many other fascinating subjects. But it doesn't teach programming. Whatever programming experience a CS graduate has is almost incidental. It's something he picked up on the fly, or because of a personal interest in programming.
Second, let's be clear that it's very possible to have a successful programming career without knowing C++. In fact, I'd expect that most programmers fall into this category. So you certainly don't need to learn C++.
That leaves two possible reasons to learn C++:
Self-improvement
Changing career track
#2 is simple. If you want to transition to a field where C++ is the dominant language, learning it would obviously be a good idea. You mentioned graphics programming as an example, and if you want to do that for a living, learning C++ will probably be a good idea. (However, I don't think it's a particularly good suggestion for "insights that will help throughout your life as a programmer". There are other fields that are much more generally applicable. Learning graphics programming will teach you graphics programming, and not much else.)
That leaves #1, which is a bit more interesting. Will you become a better programmer simply by knowing C++? Perhaps, but not as much as some may think. There are several useful things that C++ may teach you, but there also seems to be a fair bit of superstition about it: it's low-level and has pointers, so by learning C++, you will achieve enlightenment.
If you want to understand what goes on under the hood, C or C++ will be helpful, sure, but you could cut out the middle man and just go directly into learning about compilers. That'd give you an even better idea. Supplement that with some basic information on how CPU's work, and a bit about operating systems as well, and you've learned all the underlying stuff much better than you would from C++.
However, some things I believe are worth picking up from C++, in no particular order:
(Several of them are likely to make you despair at C#, which, despite adopting a lot of brilliant features, is still missing some that seem blindingly obvious to a C++ programmer.)
Paranoia: Becoming good at C++ implies becoming a bit of a language lawyer. The language leaves a lot of things undefined or unspecified, so a good C++ programmer is paranoid. "The code I just wrote looks ok, and it seems to behave ok when I run it - but is it well-defined by the standard? Will it break tomorrow, on his computer, or when I compile with an updated compiler? I have to check the standard." That's less necessary in other languages, but it may still be a healthy experience to carry with you. Sometimes, the compiler doesn't have the final word.
RAII: C++ has pioneered a pretty clever way to deal with resource management (including the dreaded memory management); see the sketch after this list. Create an object on the stack which, in its constructor, acquires the resource in question (database connection, chunk of memory, a file, a network socket, or whatever else), and in its destructor ensures that this resource is released. This simple mechanism means that you virtually never write new/delete in your top-level code; it is always hidden inside constructors or destructors. And because destructors are guaranteed to execute when the object goes out of scope, even if an exception is thrown, your resource is guaranteed to be released. No memory leaks, no unclosed database connections. C# doesn't directly support this, but being familiar with the technique sometimes lets you see a way to emulate it in C#, in the cases where it's useful. (Obviously memory management isn't a concern, but ensuring that database connections are released quickly might still be.)
Generic programming, templates, the STL and metaprogramming: The C++ standard library (or the part of it commonly known as the STL) is a pretty interesting example of library design. In some ways, it is lightyears ahead of .NET or Java's class libraries, although LINQ has patched up some of the worst shortcomings of .NET. Learning about it might give you some useful insights into clever ways to work with sequences or sets of data. It also has a strong flavor of functional programming, which is always nice to poke around with. It's implemented in terms of templates, which are another remarkable feature of C++, and template metaprogramming may be beneficial to learn about as well. Not because it is directly applicable to many other languages, but because it might give you some ideas for writing more generic code in other languages as well.
Straightforward mapping to hardware: C++ isn't necessarily a low level language. But most of its abstractions have been modelled so that they can be implemented to map directly to common hardware features. That means it might help provide a good understanding of the "magic" that occurs between your managed .net code and the CPU at the other end. How is the CLR implemented, what do the heap and stack actually mean, and so on.
p/invoke: Let's face it, sometimes .NET doesn't offer the functionality you need. You have to call some unmanaged code. And then it's useful to actually know the language you might be using. (If you can get around it with just a single P/Invoke call, you only need to be able to read C function signatures on MSDN so you know which arguments to pass, but sometimes it may be preferable to write your own C++ wrapper and call into that instead.)
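As a concrete illustration of the RAII point above, here is a minimal sketch (an invented file-handle wrapper; real code would also consider move semantics):
#include <cstdio>
#include <stdexcept>
// RAII: acquire in the constructor, release in the destructor.
class File {
    std::FILE* f_;
public:
    explicit File(const char* path) : f_(std::fopen(path, "r")) {
        if (!f_) throw std::runtime_error("cannot open file");
    }
    ~File() { std::fclose(f_); }   // runs even if an exception is thrown
    File(const File&) = delete;    // one owner, no accidental double-close
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }
};
void read_config() {
    File f("config.txt");          // hypothetical file name
    // ...use f.get()...
}                                  // f goes out of scope: handle closed, no finally needed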
I don't know if you should learn C++. There are valid reasons why doing so may make you a better programmer, but then again, there are plenty of other things you could spend your time on that would also make you a better programmer. The choice is yours. :)
Experience is the best teacher.
While you can read about things like memory management, data structures (and their implementations), algorithms, etc., you won't really get it until you've had a chance to put them into practice. While I don't know if it's truly necessary to use C or C++ to learn these things, I would put some effort into actually writing some code that manages its own memory and implements some common data structures. I think you'll learn things that will help you understand your code better; to know what's really going on under the hood, so to speak. I would also recommend reading up on computer organization, operating systems, computer security, and boolean logic. On the other hand, I've never really found a need to do any OpenGL programming, though I did do some X Windows stuff once upon a time.
Having a degree has got nothing to do with C/C++, actually. What matters is stuff like big-O estimation, data structures, or even mathematical background. For example, linear algebra turns out to be very useful, even in contexts that seemingly have nothing to do with it (e.g. search engines).
For example, a typical error that a good coder without any theoretical knowledge might commit is trying to solve an NP-complete problem with an exact algorithm rather than an approximation.
Now, why do universities teach you C/C++? Because it lets you see how it's all working "under the hood". You get the opportunity to see how the call stack works, how memory management works, how pointers work. Of course you don't need that knowledge to use most modern languages. But you need it to understand how their "magic" works. E.g. you can't understand how a GC works if you have no idea about pointers and memory allocation.
I've often asked this question (to myself). I think the more general version is, "how can I call myself a programmer if I don't know how to kick around a language that doesn't have automatic garbage collection, with pointers and all that 'complex' stuff'?" I've never learned C++ except to do a few HelloWorlds, so my answer is limited by that lack:
I think that the feeling that you need to learn C++ (or assembler, really) comes from the feeling that you're always working on someone else's abstractions: the "rocket scientists" who write the JVM, CLR, whatever. So if you can get to a lower level language, you'll really know what you're talking about. I think this is quite wrong. One is always building on a set of abstractions: even Assembler is translated into binary, which can be learned as well. And beyond that, you still couldn't make a computer out of firewood, even if you had a pair of pliers and a bit of titanium.
In my experience as a corporate trainer in software dev (in Java, mostly), the best people were not those who knew C++, but rather those who took the language they are working in as an independent space for "play." Although memory management comes up all the time in C# and Java, you never have to think about anything beyond freeing your object from references (and a few other cliche places, like using streams instead of throwing around huge objects in memory). Pointers and all that stuff do not help you there, except as a rite of passage (and a good one, I'm sure).
So in summary, work in the language you're in and branch out into as many relevant things as possible. These days I find myself dipping into JavaScript though the APIs are supposed to make this unnecessary, and doing some stuff in Fireworks while I mess with CSS by hand. And this is all in addition to the development I'm really doing in RoR, PHP, and ActionScript. So my point is: focus on abstractions that you need, because they're more likely to be relevant than the lower-level stuff that underlies your platform.
Edit: I made some slight changes in response to jalf's comments, thanks.
I have a first-class Software Engineering degree and work for a large console manufacturer, developing a game engine in a team of programmers, all of whom program across a wide range of languages, from asm to C++ to C# to Lua, and know the hardware inside out.
I would say that 5% of my degree was useful, and that by far and away the most important trait for furthering my career has been enthusiasm and self-development.
In fact many of the colleagues I've worked with haven't had a degree and on average have probably been the better ones.
I'd say this is because they've had to replace that piece of paper from a university degree with actual working code that they've developed in their own spare time, learning the skills off their own back rather than being spoon-fed them.
My driving instructor used to tell me that I would only start learning how to drive after I passed my test, i.e. you only really learn from the practical application of the basics. A CS degree gives you the basics, which, if you've had a job programming in any of the major languages for 6 months, you will already have. A degree just opens up doors that you may not have otherwise - it doesn't help that much once inside the door.
Knowing how the software interacts with the hardware is, by the sound of it, the most important area for you at the moment; only then does the 'mystery' or 'magic' really disappear and you can be confident of what you're talking about elsewhere. Learning C and C++ will undoubtedly help in this respect, as will knowing an API like OpenGL.
But I'd say the most important thing is to find something you have an interest in and code that. If you have real enthusiasm for it, you will naturally learn more low-level information and become a better programmer, if indeed that is your definition of being a better programmer!
I've been working as a developer with no degree for almost 15 years now. I started with Ada and moved quickly into C/C++, but it's been my experience that there will always be some language that you "have to learn." If it's not C++, it will be C# or C or Java or Lisp. My advice is: make sure you're solid on the basics that apply to any language (my best friend as a dev with no degree was the CLR book), and you should be able to move relatively easily between languages and frameworks.
You don't absolutely have to learn C/C++, but both languages will teach you to think about how your software interacts with the underlying OS and hardware, which is an essential skill. You say that you already know about pointers, memory management, and so on, which is great. Many programmers without a CS degree lack this important knowledge.
Another good reason to learn C/C++ is that there's a lot of code written in these languages, and a good way to learn more about programming is to read other people's code. If you're interested in writing low-level code like drivers, OSes, file systems, and the like, C/C++ is pretty much the only way to go.
Do you have to learn it extensively? I expect not.
However, it's best to always be learning things that help you look at programming from a different perspective. Learning C or C++ is worth it for the insight into how things work at a lower level. For C and C++ programmers, the same thing might be accomplished by learning assembly. Most people won't use assembly in a project, but knowing how it works can be very helpful from time to time.
My recommendation is always to learn as much as you can. If you're not working on a C++ project in the near future I wouldn't be too worried about learning the ins and outs, but it's always good to be able to look at problems from another angle and learning new languages is one way to do that.
Today for the majority of applications, C and C++ can be viewed as an academic exercise: "How can we write programs without garbage collection?"
The answer is: you can, but it's a mostly painful experience. Most of the details of best practices in C++ are related to the lack of garbage collection.
Given the brilliant performance of modern GCs, and the general increase in computing power, even cell phones have GCs these days. And in a platform with a GC, you can always code in such a way as to limit the pressure you put on the GC.
Listen to, or read the transcript of, SO podcast 44, where Joel plays his favorite song, Write in C:
Spolsky: Yeah, it's not paying the proper royalties to the Beatles anyway. We'll link to that from the shownotes. Awesome song, Write in C.
Atwood: That's right, Joel's favourite song. Write everything in C, because Joel does in fact write everything in C, don't you, Joel?
Spolsky: I started using a little bit of C99, the latest version of C, which lets you declare variables after you've written some statements.
...
Without a professional reason to learn C or C++ (other than the good practice of self-improvement), you should have a passionate side project planned out that you could write in C or C++. Once the going gets tough on the side project, you'll need your enthusiasm and curiosity to take you over the hump (since on a side project you naturally don't have the motivation of pay, or the de-motivation of a superior looming over you).
Also, most CS degrees are using Java as their language of choice now. This just proves the point that experience gained in the language of choice and exposure to some of the theory involved in the other classes in the degree is the main benefit for people with CS degrees, and not so much the specific language (though I think the higher they go up the abstraction scale, the worse it is for the students in the long run).
Without a practical reason for learning a programming language it is pretty hard going.
If you can think of particular problems or a specific task which the language is suited for, then the learning experience is driven by needs rather than simple academics.
I only just recently switched from VB to C# (1 month ago). While not as significant a change as a switch from C# to C, because I switched for a particular reason I found it much easier to learn. I had dabbled previously without a specific problem to solve; needless to say, I switched back.
If you have a different style of learning, as in self-taught, then my recommendation for becoming a better programmer is to research topics regarding your domain. From bottom to top, slowly climb up the ladder. There is a fair variety among programmers, and no one will excel at everything, so don't start off with that expectation in mind.
Best of luck to you.
C++ is just a programming language. What you don't have that other students (if they paid attention in class) have is the deeper understanding that comes through studying concepts.
Being a programmer is not and should not be the end goal of any CS graduate. However, it is as far as most people get without such a degree.
Here is an analogy: An engineer and an architect both at some point learn to draft buildings using CAD. Also, someone completely untrained can come in and start work using CAD and be very effective. This is a good career and it pays well, but for both the engineer and the architect it is not where you want to be when you are 30.
One value of knowing C is that many other languages including C#, Java, C++, JavaScript, Python, and PHP have their roots in C syntax.
Another value, and arguably more important, is that it will build your confidence. Programmers are a confident group and very optimistic (you have to be confident to think that you can write the equivalent of a 1000 page book without a single spelling or grammatical error). And confidence in your ability to learn and effectively use any language will grow considerably with a pure C application or two under your belt.
So write a non-trivial program in C; something that at least reads and writes files, allocates and deallocates memory, and manages a data structure like a queue or binary tree.
Your confidence will thank you.
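For the flavor of such an exercise, here is a minimal sketch of a hand-rolled queue with explicit allocation (invented code; the listing compiles as either C or C++):
#include <stdio.h>
#include <stdlib.h>
/* A minimal linked queue: every node is allocated and freed by hand. */
typedef struct Node { int value; struct Node *next; } Node;
typedef struct Queue { Node *head, *tail; } Queue;
void enqueue(Queue *q, int v) {
    Node *n = (Node *)malloc(sizeof *n);
    n->value = v;
    n->next = NULL;
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
}
int dequeue(Queue *q) {            /* caller must ensure the queue is non-empty */
    Node *n = q->head;
    int v = n->value;
    q->head = n->next;
    if (!q->head) q->tail = NULL;
    free(n);                       /* forget this and you have your first leak */
    return v;
}
int main(void) {
    Queue q = {NULL, NULL};
    enqueue(&q, 1);
    enqueue(&q, 2);
    int a = dequeue(&q);
    int b = dequeue(&q);
    printf("%d %d\n", a, b);       /* prints: 1 2 */
    return 0;
}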