Learning FORTRAN In the Modern Era

I've recently come to maintain a large amount of scientific calculation-intensive FORTRAN code. I'm having difficulties getting a handle on all of the, say, nuances, of a forty-year-old language, despite Google and two introductory-level books. The code is rife with "performance enhancing improvements". Does anyone have any guides or practical advice for de-optimizing FORTRAN into CS 101 levels? Does anyone have knowledge of how FORTRAN code optimization operated? Are there any typical FORTRAN 'gotchas' that might not occur to a Java/C++/.NET raised developer taking over a FORTRAN 77/90 codebase?

You kind of have to get a "feel" for what programmers had to do back in the day. The vast majority of the code I work with is older than I am and ran on machines that were "new" when my parents were in high school.
Common FORTRAN-isms I deal with that hurt readability are (a short sketch of what a few of these look like follows the list):
Common blocks
Implicit variables
Two or three DO loops with shared CONTINUE statements
GOTO's in place of DO loops
Arithmetic IF statements
Computed GOTO's
Equivalence REAL/INTEGER/other in some common block
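For anyone who has never met these constructs, here is a small, purely illustrative fixed-form fragment (all names and values are made up) showing an arithmetic IF, a computed GOTO, and two DO loops terminating on a shared CONTINUE:

C     Hypothetical legacy fragment, FORTRAN 77 fixed form.
      SUBROUTINE LEGACY(X, N, MODE)
      DIMENSION X(N)
C     Arithmetic IF: branch to 10, 20, or 30 when X(1) is <0, =0, >0.
      IF (X(1)) 10, 20, 30
   10 X(1) = -X(1)
   20 X(1) = X(1) + 1.0
   30 CONTINUE
C     Computed GOTO: jump to the MODE-th label in the list.
      GOTO (40, 50, 60), MODE
   40 X(2) = 0.0
   50 X(3) = 0.0
   60 CONTINUE
C     Two nested DO loops sharing one CONTINUE as their terminal statement.
      DO 70 I = 1, N
      DO 70 J = 1, N
      X(I) = X(I) + X(J)
   70 CONTINUE
      RETURN
      END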
Strategies for solving these involve (a before/after sketch of the IF and GOTO conversions follows this list):
Get Spag / plusFORT, worth the money, it solves a lot of them automatically and Bug Free(tm)
Move to Fortran 90 if at all possible, if not move to free-format Fortran 77
Add IMPLICIT NONE to each subroutine and then fix every compile error, time consuming but ultimately necessary, some programs can do this for you automatically (or you can script it)
Moving all COMMON blocks to MODULEs, low hanging fruit, worth it
Convert arithmetic IF statements to IF..ELSEIF..ELSE blocks
Convert computed GOTOs to SELECT CASE blocks
Convert all DO loops to the newer F90 syntax
myloop: do ii = 1, nloops
   ! do something
end do myloop
Convert equivalenced common block members either to ALLOCATABLE memory allocated in a module, or to true CHARACTER variables if it is Hollerith data being stored in a REAL
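As a sketch of the arithmetic IF and computed GOTO conversions (the names x and mode and the print messages are made up for illustration), the free-form Fortran 90 equivalents might look like this:

! Minimal before/after sketch of two of the conversions above.
program modernized
   implicit none
   integer :: mode = 2
   real    :: x = -1.5

   ! Old:  IF (x) 10, 20, 30   (arithmetic IF) becomes IF..ELSE IF..ELSE
   if (x < 0.0) then
      print *, 'x is negative'
   else if (x == 0.0) then
      print *, 'x is zero'
   else
      print *, 'x is positive'
   end if

   ! Old:  GOTO (100, 200, 300), mode   (computed GOTO) becomes SELECT CASE
   select case (mode)
   case (1)
      print *, 'first branch'
   case (2)
      print *, 'second branch'
   case default
      print *, 'third branch'
   end select
end program modernized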
If you had more specific questions as to how to accomplish some readability tasks, I can give advice. I have a code base of a few hundred thousand lines of Fortran which was written over the span of 40 years that I am in some way responsible for, so I've probably run across any "problems" you may have found.

Legacy Fortran Soapbox
I helped maintain/improve a legacy Fortran code base for quite a while and for the most part think sixlettervariables is on the money. That advice though, tends to the technical; a tougher row to hoe is in implementing "good practices".
Establish a required coding style and coding guidelines.
Require a code review (of more than just the coder!) for anything submitted to the code base. (Version control should be tied to this process.)
Start building and running unit tests; ditto benchmark or regression tests.
These might sound like obvious things these days, but at the risk of over-generalizing, I claim that most Fortran code shops have an entrenched culture, some started before the term "software engineering" even existed, and that over time what comes to dominate is "Get it done now". (This is not unique to Fortran shops by any means.)
Embracing Gotchas
But what to do with an already existing, grotty old legacy code base? I agree with Joel Spolsky on rewriting: don't. However, in my opinion sixlettervariables does point to the allowable exception: use software tools to transition to better Fortran constructs. A lot can be caught/corrected by code analyzers (FORCHECK) and code rewriters (plusFORT). If you have to do it by hand, make sure you have a pressing reason. (I wish I had on hand a reference for the number of software bugs that come from fixing software bugs; it is humbling. I think some such statistic is in Expert C Programming.)
Probably the best offense in winning the game of Fortran gotchas is having the best defense: Knowing the language fairly well. To further that end, I recommend ... books!
Fortran Dead Tree Library
I have had only modest success as a "QA nag" over the years, but I have found that education does work, sometimes inadvertently, and that one of the most influential things is a reference book that someone has on hand. I love and highly recommend
Fortran 90/95 for Scientists and Engineers, by Stephen J. Chapman
The book is even good with Fortran 77 in that it specifically identifies the constructs that shouldn't be used and gives the better alternatives. However, it is actually a textbook and can run out of steam when you really want to know the nitty-gritty of Fortran 95, which is why I recommend
Fortran 90/95 Explained, by Michael Metcalf & John K. Reid
as your go-to reference (sic) for Fortran 95. Be warned that it is not the most lucid writing, but the veil will lift when you really want to get the most out of a new Fortran 95 feature.
For focusing on the issues of going from Fortran 77 to Fortran 90, I enjoyed
Migrating to Fortran 90, by Jim Kerrigan
but the book is now out-of-print. (I just don't understand O'Reilly's use of Safari, why isn't every one of their out-of-print books available?)
Lastly, as to the heir to the wonderful, wonderful classic, Software Tools, I nominate
Classical FORTRAN, by Michael Kupferschmid
This book not only shows what one can do with "only" Fortran 77, but it also talks about some of the more subtle issues that arise (e.g., should or should not one use the EXTERNAL declaration). This book doesn't exactly cover the same space as "Software Tools" but they are two of the three Fortran programming books that I would tag as "fun".... (here's the third).
Miscellaneous Advice that applies to almost every Fortran compiler
There is a compiler option to enforce IMPLICIT NONE behavior, which you can use to identify problem routines without modifying them with the IMPLICIT NONE declaration first. This piece of advice won't seem meaningful until after the first time a build bombs because of an IMPLICIT NONE statement inserted into a legacy routine. (What? Your code review didn't catch this? ;-)
There is a compiler option for array bounds checking, which can be useful when debugging Fortran 77 code.
Fortran 90 compilers should be able to compile almost all Fortran 77 code and even older Fortran code. Turn on the reporting options on your Fortran 90 compiler, run your legacy code through it and you will have a decent start on syntax checking. Some commercial Fortran 77 compilers are actually Fortran 90 compilers that are running in Fortran 77 mode, so this might be relatively trivial option twiddling for whatever build scripts you have.
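For example, with gfortran (flag names vary by vendor, so treat this as a sketch and check your own compiler's manual; the source file name is made up), something like the following enables implicit-none enforcement, run-time bounds checking, and verbose warnings without touching the source:

gfortran -std=legacy -fimplicit-none -fcheck=bounds -Wall -Wextra -c legacy_solver.f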

There's something in the original question that I would caution about. You say the code is rife with "performance enhancing improvements". Since Fortran problems are generally of a scientific and mathematical nature, do not assume these performance tricks are there to speed up the code itself. It's probably not about the language. In Fortran, the solution is seldom about efficiency of the code itself but about the underlying mathematics used to solve the end problem. The tricks may make a given computation slower, may even make the logic appear messy, but the intent is to make the overall solution faster. Unless you know exactly what it is doing and why, leave it alone.
Even simple refactoring, like changing dumb-looking variable names, can be a big pitfall. Historically, standard mathematical equations in a given field of science have used a particular shorthand since the days of Maxwell. So to see an array named B(:) in electromagnetics tells all E-mag engineers exactly what is being solved for. Change that at your peril. Moral: get to know the standard nomenclature of the science before renaming anything.

As someone with experience in both FORTRAN (the 77 flavor, although it has been a while since I used it seriously) and C/C++, the item to watch out for that immediately jumps to mind is arrays. FORTRAN arrays start with an index of 1 instead of 0 as they do in C/C++/Java. Also, the memory layout is reversed (column-major), so incrementing the first index gives you sequential memory locations.
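A quick illustrative sketch of what that means for loop ordering (the array name and size are arbitrary): in Fortran the leftmost index varies fastest in memory, so the inner loop should run over the first subscript.

program column_major
   implicit none
   integer, parameter :: n = 500
   real :: a(n, n)
   integer :: i, j

   ! Cache-friendly in Fortran: j (the last index) on the outer loop,
   ! i (the first index) on the inner loop, so memory is walked sequentially.
   do j = 1, n
      do i = 1, n
         a(i, j) = real(i + j)
      end do
   end do
   print *, a(1, 1), a(n, n)
end program column_major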
My wife still uses FORTRAN regularly and has some C++ code she needs to work with now that I'm about to start helping her with. As issues come up during her conversion I'll try to point them out. Maybe they will help.

I have used Fortran starting with the '66 version since 1967 (on an IBM 7090 with 32k words of memory). I then used PL/1 for some time, but later went back to Fortran 95 because it is ideally suited for the matrix/complex-number problems we have. I would like to add to the considerations that much of the convoluted structure of old codes is simply due to the small amount of memory available, forcing such things as reusing a few lines of code via computed or assigned GOTOs. Another problem is optimization by defining auxiliary variables for every repeated subexpression - compilers simply did not optimize for that. In addition, it was not allowed to write DO i=1,n+1; you had to write n1=n+1; DO i=1,n1. In consequence, old codes are overwhelmed with superfluous variables. When I rewrote a code in Fortran 95, only 10% of the variables survived. If you want to make the code more legible, I highly recommend looking for variables that can easily be eliminated.
Another thing I might mention is that for many years complex arithmetic and multidimensional arrays were highly inefficient. That is why you often find code rewritten to do complex calculations using only real variables, and matrices addressed with a single linear index.
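A small made-up example of the pattern: pre-Fortran-77 code had to hoist expressions into named temporaries, whereas a modern compiler handles the same thing written directly (the routine and variable names below are hypothetical).

! Old style (no expression allowed in the DO statement, subexpressions hoisted by hand):
!     N1 = N + 1
!     AB = A * B
!     DO 10 I = 1, N1
!        X(I) = AB * Y(I) + AB
!  10 CONTINUE
!
! Fortran 90/95 equivalent -- the temporaries N1 and AB can simply disappear:
subroutine scale(x, y, n, a, b)
   implicit none
   integer, intent(in) :: n
   real, intent(in)    :: y(n+1), a, b
   real, intent(out)   :: x(n+1)
   integer :: i
   do i = 1, n + 1
      x(i) = a*b*y(i) + a*b
   end do
end subroutine scale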

Well, in one sense, you're lucky, 'cause Fortran doesn't have much in the way of subtle flow-of-control constructs or inheritance or the like. On the other, it's got some truly amazing gotchas, like the arithmetically-calculated branch-to-numeric-label stuff, the implicitly-typed variables which don't require declaration, the lack of true keywords.
I don't know about the "performance enhancing improvements". I'd guess most of them are probably ineffective, as a couple of decades of compiler technology have made most hinting unnecessary. Unfortunately, you'll probably have to leave things the way they are, unless you're planning to do a massive rewrite.
Anyway, the core scientific calculation code should be fairly readable. Any programming language using infix arithmetic would be good preparation for reading Fortran's arithmetic and assignment code.

Could you explain what you have to do in maintaining the code? Do you really have to modify the code? If you can get away by modifying just the interface to that code instead of the code itself, that would be the best.
The inherent problem when dealing with a large scientific code (not just FORTRAN) is that the underlying mathematics and the implementation are both complex. Almost by default, the implementation has to include code optimization, in order to run within reasonable time frame. This is compounded by the fact that a lot of code in this field is created by scientists / engineers that are expert in their field, but not in software development. Let's just say that "easy to understand" is not the first priority to them (I was one of them, still learning to be a better software developer).
Due to the nature of the problem, I don't think a general question and answer is enough to be helpful. I suggest you post a series of specific questions with code snippet attached. Perhaps starting with the one that gives you the most headache?

I loved FORTRAN, I used to teach and code in it. Just wanted to throw that in. Haven't touched it in years.
I started out in COBOL, when I moved to FORTRAN I felt I was freed. Everything is relative, yeah?
I'd second what has been said above - recognise that this is a PROCEDURAL language - no subtleties - so take it as you see it.
It will probably frustrate you to start with.

I started on Fortran IV (WATFIV) on punch cards, and my early working years were VS FORTRAN v1 (IBM, Fortran 77 level). Lots of good advice in this thread.
I would add that you have to distinguish between things done to get the beast to run at all, versus things that "optimize" the code, versus things that are more readable and maintainable. I can remember dealing with VAX overlays in trying to get DOE simulation code to run on IBM with virtual memory (they had to be removed and the whole thing turned into one address space).
I would certainly start by carefully restructuring FORTRAN IV control structures to at least FORTRAN 77 level, with proper indentation and commenting. Try to get rid of primitive control structures like ASSIGN and COMPUTED GOTO and arithmetic IF, and of course, as many GOTOs as you can (using IF-THEN-ELSE-ENDIF). Definitely use IMPLICIT NONE in every routine, to force you to properly declare all variables (you wouldn't believe how many bugs I caught in other people's code -- typos in variable names). Watch out for "premature optimizations" that you're better off letting the compiler handle by itself.
If this code is to continue to live and be maintainable, you owe it to yourself and your successors to make it readable and understandable. Just be certain of what you are doing as you change the code! FORTRAN has lots of peculiar constructs that can easily trip up someone coming from the C side of the programming world. Remember that FORTRAN dates back to the mid-late '50s, when there was no such thing as a science of language and compiler design, just ad hoc hacking together of something (sorry, Dr. B!).

Here's another one that has bit me from time to time. When you are working on FORTRAN code make sure you skip all six initial columns. Every once and a while, I'll only get the code indented five spaces and nothing works. At first glance everything seems okay and then I finally realize that all the lines are starting in column 6 instead of column 7.
For anyone not familiar with FORTRAN, the first 5 columns are for statement labels (line numbers), column 6 is for a continuation character in case a statement is too long to fit in columns 7 through 72 (put any character there and the compiler knows that this line is actually part of the one before it), and code always starts in column 7.
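To make the column rules concrete, here is a tiny made-up fixed-form fragment: the label 10 sits in columns 1-5, the 1 in column 6 marks a continuation line, and all statements begin in column 7.

C2345678    <- ruler comment: the digit 7 marks column 7, where statements begin
      PROGRAM COLS
      INTEGER I, TOTAL
      TOTAL = 0
      DO 10 I = 1, 5
         TOTAL = TOTAL +
     1           I
   10 CONTINUE
      PRINT *, TOTAL
      END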

Related

What next generation low level language is the best bet when migrating a code base? [closed]

Let's say you have a company running a lot of C/C++, and you want to start planning migration to new technologies so you don't end up like COBOL companies 15 years ago.
For now, C/C++ runs more than fine and there are plenty of devs on the market for it.
But you want to start thinking about it now, because given the huge running code base and the data sensitivity, you feel it can take 5-10 years to move to the next step without overloading the budget and the dev teams.
You have heard about D, starting to be quite mature, and Go, promising to be quite popular.
What would be your choice and why?
D and Go will probably just become as popular as Python and Ruby are today. They each fill a niche, and even though D was supposed to be a full-fledged replacement of C++, it probably will never acquire enough mass to push C++ away. Not to mention that they both aren't stable/mature enough, and it's unknown whether you'll have support for these languages in 10-20 years for the then-current hardware and operating systems. Considering that C/C++ is pretty much the compiled language and is used in the great majority of operating systems and native-code applications, it's very unlikely that it'll go away in the foreseeable future.
C and C++ are a pretty much unbeatable combo when it comes to native/unmanaged/"lowlevel" languages.
Not because they're the best languages, far from it, but because they're there, they do the job, and they're good enough. There's little doubt that D, for example, is better than C++ in most respects. But it fails in the most important one: Compatibility with all the existing C++ code. Without that requirement, most of that code would be written in a managed language today anyway. The only reason so many codebases use C++ today is because they used it last year, and it'd be too much of a pain to switch to something else. But if and when they switch, they typically don't switch to D. They switch to C# or Java or Python.
The problem for D and other "upcoming" languages competing for the same niches, is that while they're better, they're not groundbreaking enough to motivate people to actually switch to them.
So C and C++ are here to stay. C is unlikely to evolve much further. It is as it is, and one of the niches it has to fill is "simplicity, even for compiler writers". No other language is likely to beat it in that niche, even if they never revise the standard again.
C++ is evolving much more dramatically, with C++0x getting nearer, and they've already got a huge list of features they want to do afterwards. C++ isn't a dead end in any way.
Both languages are here to stay. Perhaps in 50 years other languages will have replaced them, but it won't happen this decade.
I currently use D regularly. I wouldn't recommend it yet for people writing production code because it's too bleeding edge. I get away with it because most of my code is research prototypes in bioinformatics. However, the language is starting to stabilize. Andrei Alexandrescu is releasing a book titled "The D Programming Language" next March, and right now there is a push to stabilize the spec for version 2 of the language in time for the book.
While D is not a formal superset of C, it is what I'd call an idiomatic superset except for the lack of a preprocessor. In other words, any code written in C proper (ignoring the preprocessor), can be trivially translated to D without a redesign, because C concepts like pointers and inline ASM are there and work the same in D as in C. D also supports direct linking to C code and the D standard library includes the entire C standard library.
Also, despite D's lack of libraries because it is still a bleeding edge language, it's a library writer's dream because of its metaprogramming capabilities. If it takes off, it will probably have some pretty impressive libs. For a preview of this, see std.range or std.algorithm in the D2 standard library (Phobos). As another example, I implemented an OpenMP-like parallelism model (parallel foreach, parallel map, parallel reduce, futures) as a pure library in D, without any special compiler support. (See http://cis.jhu.edu/~dsimcha/parallelFuture.html)
Given that you're mostly interested in the long term, I'd say give D 6 months to stabilize (given Andrei's book and the current push to stabilize the language, version 2 should be stable by then) and then take a hard look at it.
Edit: Now that the core language spec is relatively stable and the focus has turned to toolchain and library development, I would recommend D for small production projects unless you are in a very risk-averse environment. Larger projects that absolutely must have good toolchain and library support should still wait, though.
If you believe in the lean manufacturing principles, you should strive to "decide as late as possible". The moment should be the last responsible moment, meaning the moment at which failing to make a decision eliminates an important alternative.
I think this principle can be applied to your situation. Instead of committing now to a language (that you don't even know will be around in 10 years), you should keep your options open. Maybe refactor some of your code so it is a bit more generic or is built on more abstractions, so that when it is indeed required to migrate, the process will be easier.
Stick with C and C++. I don't see it going the way of COBOL, it runs as well as anything, and you'll have no problem finding people to code in C and C++.
C++ -- it is relatively young and still evolving. It has a large number of compiler vendors and gets improved all the time.
C -- it will live for a long time, filling the gap between assembler and higher-level languages. It is also a very simple and easy-to-implement language, so it will remain the first language available for various "strange" architectures, whether embedded or extremely new ones.
D is promising, but its specifications and libraries are still very new and unstable.
Go was born a few weeks ago... Never use anything at version 0 for big, important projects. Also, it is significantly more limited than C++ or D.
2019 update: C++ will stay around for the next 10 years... (if not, I will correct this answer when it is no longer relevant....)
The reason companies work with COBOL today is because they already have millions of lines of COBOL written; if they could throw it away, they would do it at once. On the other hand, companies work with C/C++ as part of their needs, and they start new projects in these languages because they can't or don't want to use Java, C#, or any other framework-based language - so COBOL is not the analogy here.
Like dsimcha said, the D route is currently risky. Yet the language has huge potential; it is low-level, and I've experienced drastically better productivity with D than with C++ - perhaps what people feel with dynamic languages.
Go is so heavily blog-marketed it seems like a joke to me.
Dispatching an interface method is not trivial, and it is actually slower than dispatching a regular single-inheritance method.
If you have a huge codebase the decision is of course more difficult; I would advise switching only for new projects, not for existing ones.
I wouldn't concentrate on a language but more on the libraries surrounding it. C++ in combination with the Boost libraries is an excellent choice. People who develop in C++ tend to have a better understanding of computing. I myself started off with Java, which made my life easier by hiding a lot of fundamental stuff - which is good - but I only really started to understand programming once I learned C/C++ (pointers etc).
I do recognise that C++ can be hard (e.g. memory management), so I think it's good to have an 'add-on' language where performance is not essential and readability (== maintainability) scores high: I recommend Python for this.
There are countless machines running C++ software; I don't see them shutting down all at once. If C++ goes the way of COBOL there will be a huge market for application migration. There will be specialized tools developed to translate C++ applications to the popular language of the time (Z++ ???).
So I guess the best advice is to cross that bridge when you come to it.
Check out the Intel® Cilk++ Software Development Kit if you want to spark your interest in C++/multi-core development. I don't see C or C++ going away anytime soon either.
Comparing C* to Cobol is questionable
Comparing C* to Cobol may lead to the wrong conclusion. C was perfect for its day, a huge leap forward on its introduction, and it still gets the job done today.
I would sum up Cobol on my most charitable day with "nice try".
C and C++ will survive for a long time because they fit the bill well as implementation languages. This won't ever really change.
Also, consider that the main negative issue with C/C++ is the lack of memory safety. This tends to be less and less of a problem as codes mature. This means there will not be a serious reason to replace the old codes.
I expect that software systems will grow outwards from C. Look at the hierarchy today:
application written in a framework such as Rails
application back-end written in Ruby, PHP, Python, C#, whatever
Ruby, PHP, Python, or C# run-time implementation (written in C*)
OS kernel (written in C89)
I don't think the old layers will vanish, and I think legacy higher layers written in C and C++ will simply be supported that way for an indefinite period of time, eventually being phased out for their replacements written in Ruby, Python, C#, or a future development.
We have no idea if Go will find acceptance. Just being by Google is probably not going to be enough.
D? Well, some nice things are being said about it but it won't be taking off either. No user base to speak of. D is #20 in popularity on the TIOBE Index, and dropping fast.
You may say that a language's popularity has little to do with how well it's suited for your company's work. But it has a lot to do with how easy it will be to find people qualified to program in it.
Java is on top and I would be surprised if it went away in the next 20 years. It's not considered a systems programming language but performs well enough that there are few tasks you'd do in C++ that you couldn't in Java. Certainly these days nobody is willing to task human programmers with the job done (flawlessly and often more effectively) by the garbage collector. I for one considered Java a significant step up from C++ in terms of programming effectiveness.
I'm quite impressed by Ruby. It's an elegant, expressive language: You can accomplish a lot with not too much code, yet that code is still mostly legible. One of Ruby's main principles is to be consistent and not hold surprises for the developer. This is an extremely good idea, IMO, and boosts productivity. At the time of the big Rails hype (which may still be ongoing), I made a wide berth around Ruby because its reference implementation is abysmally slow. However, the JRuby folks at Sun have made it blazingly fast on a JVM, so now it's definitely worth some consideration. Ruby provides closures and a good handful of functional programming capabilities (see below for why this important), though it's not really considered a FP language. TIOBE index: 10 and rising.
Something to consider for the future is the fact that CPU makers have run up against a performance limit imposed by physics. No longer is there a 30% faster CPU available every Christmas, as it was in the past. So now to get more performance you need more cores. Software development will need all the help it can get in supporting multi-core concurrent programming. C++ leaves you mostly alone with this, and Java's solutions are horrible by modern standards.
In view of this, there's a certain trend toward functional programming (which eliminates much of the hassle associated with concurrency) as well as languages with better concurrency support. Erlang was written specifically for this and for the ability to swap code in a running program (Ericsson wanted incredible uptimes). Scala is similar to Java but with much stronger support for functional programming and concurrency. Clojure, ditto, but it's a Lisp and it's not even in the top 50 (yet!!).
Scala was developed by academics, and it shows: it's sophisticated and downright pedantic about data types; it tries to be the Swiss Army Knife of programming languages. I believe a lot of medium-smart programmers will have trouble getting a grip on Scala. Ruby is less FP and doesn't do as much about concurrency, but it's pragmatic, and fun and easy to get stuff done in. Also, running on the JVM, there is an enormous amount of code readily available in Java libraries, which Ruby can interface with. So:
My bet would be on Ruby, with an outside chance on Scala. But there are plenty of alternatives!
Java. For most low level things Java is fine these days. Why go with a partial solution to C/C++ such as D or Go when you can have something as safe and easy to develop with as Java? If you are looking for a real time solution, D and Go are definitely not it, not to mention they are probably even less supported than Java.
Java is now a system programming language. I don't see how you can consider anything with unsafe constructs such as pointers "next gen". The only reason those insecure constructs ever existed is because it was the pragmatic approach to building a turing complete language. There was no concern of representing the memory in discrete objects, because they just wanted to build something that worked. There are already hard and soft realtime applications in Java, a variety of hardware bytecode processors, and over 2 billion mobile devices running Java. At most all you would have to do is add some constructs for interoperability with devices, which wouldn't be that much code; even in C/C++ you'd still have to add these constructs...
What are you programming? 8-bit microcontrollers with 1KB ram? In that case, it would be pointless to use anything other than the assembler for that platform...

Getting started with a new code in an unfamiliar language

I'm starting a new project in a language I'm less familiar with (FORTRAN) and am in the 'discovery' phase. Normally reading through and figuring out code is a fairly simple task, however, this code is rather large and not so structured. Are there any methods/tips/tricks/tools to mapping out 50k lines of rather dense code?
Is it Fortran IV (unlikely), 77, or 90/95? The language changed a lot with these revisions. Some of the gotchas that High-Performance Mark listed were common in Fortran IV but uncommon in 77, while others were still common in 77. The suggestion to check the code with maximum compiler warnings is excellent -- even use two compilers.
I'd start by diagramming the subroutine structure, and getting a high-level view of what they do. Then get a deeper understanding of the areas that need to be changed.
There are tools for analyzing and even improving Fortran code, e.g., http://www.polyhedron.com/pf-plusfort0html or http://www.crescentbaysoftware.com/vast_77to90.html.
When I coded Fortran (F77) thirty (yikes!) years ago, we had limited facilities to automatically flowchart an unknown codebase. It was ugly, and limited to the real-estate that a plotter bed could supply. As #Simon mentions, you can (and could back then, with some versions) also use a debugger.
Now, interactive exploration is easier. Additionally, you can experiment with IDEs. I have not personally tried it, as Fortran is no longer my key development language, but Photran is an Eclipse plug-in for Fortran, and appears to be under active development (last release was this month).
The debugger is your friend - if you have one for Fortran.
Before you go too much further, I would suggest you familiarise yourself with the basic syntax of the language plus any foibles like assumptions about variable types from their names, positions of declarations, etc. If you don't get that stuff then you are likely to get very lost, even with a helpful debugger.
Remember as well that structure is sometimes language dependent. The code you are looking at may be badly structured for the languages you are used to but may be very well structured for Fortran, which has its own set of peculiarities. I think I am just saying have an open mind to start with otherwise you'll be carrying around the unnecessary predisposition that the code you are looking at is bad. It may be, but it may just be something you are not used to.
Best of luck. I rather liked Fortran when I programmed in it for a living about 20 years ago, and it is still the language of choice for some applications because of computation speeds on some platforms. Still quite a lot of it in academia.
I always find the starting point of execution, or where other code (that I'm not working on) calls the code I'm examining. Then I just start reading through it from there, following method calls as necessary to figure out what's going on.
Take heart. One of Fortran's virtues is that it is very simple. Unless you find a code which has been programmed to take advantage of 'clever' tricks. I suggest that the first thing you do is to run your program through a compiler with syntax-checking and standards-compliance turned up to the max. Old (pre-Fortran 90) FORTRAN is notorious for the clever tricks that people used to get round the language's limitations. Some of the gotchas for programmers more familiar with modern languages:
-- common blocks and other mechanisms for global state; especially bad are common blocks which are used to rename and redefine variables (see the sketch after this list);
-- equivalences (horrid, but you might trip over them);
-- fixed-format source form;
-- use of the CONTINUE statement, and the practice of having multiple loops ending at the same CONTINUE statement;
-- implicit declaration of variables (to sort these out, insert the line IMPLICIT NONE immediately after the PROGRAM, MODULE, SUBROUTINE or FUNCTION statement everywhere they occur);
-- multiple entry points into sub-programs;
-- and a few others I'm so familiar with I can't recall them.
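Here is a deliberately bad, made-up fixed-form sketch of the first two items: two routines lay out the same common block with different names, and one of them even reinterprets a REAL word as an INTEGER, so a change to one declaration silently changes what the other routine sees.

      PROGRAM DEMO
      CALL SETUP
      CALL SOLVE
      END

      SUBROUTINE SETUP
C     This routine lays the block out as three REALs...
      COMMON /STATE/ ALPHA, BETA, GAMMA
      ALPHA = 1.0
      BETA  = 2.0
      GAMMA = 3.0
      END

      SUBROUTINE SOLVE
C     ...while this one overlays the same storage with different names
C     and reinterprets the third word as an INTEGER (prints garbage).
      INTEGER KOUNT
      COMMON /STATE/ A, B, KOUNT
      PRINT *, A, B, KOUNT
      END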
If these mean nothing to you, they soon will. And finally, you might want to look at Understand for Fortran. It costs, but it's very useful.
Regards
Mark
Are you running on Linux or OpenSolaris? If so, the Sun Studio Fortran compiler is one of the best. And the Sun Studio IDE understands Fortran and comes with a debugger. http://developers.sun.com/sunstudio
I'm sleepy so I'll be short :)
Start by grepping out (or whatever tool you use) program statements, subroutine statements, function statements and the like. Maybe modules and such if f90 is used. Draw a kind of diagram based on that, on which you'll see what calls what (what uses what subroutines, functions and the like).
After you've got a general view of the situation, get to the data. Fortran requires everything to be declared at the start of the program/subroutine ... so the first lines should give you the declarations. After you've put those in the diagram you just made, you should have a very clear picture by that time.
Now, the next step depends really on what you want to do with it.
#ldigas: "Fortran requires everything to be declared at the start of program/subroutine ... "
No, unless IMPLICIT NONE appears at the start of a routine, Fortran uses implicit typing. So variable names starting with A-H and O-Z are typed REAL, and I-N are INTEGER. (Which makes GOD a REAL variable, and INT, luckily, an INTEGER, implicitly.) Remember, Fortran (actually back then it was FORTRAN) was designed by scientists and mathematicians, for whom i, j, k, l, m, and n were ALWAYS integers.
With IMPLICIT NONE, you are forced to explicitly type all variables, as you would in C, C++, or Java. So you could have INTEGER GOD, and REAL INT.
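A tiny made-up example of why this matters: with implicit typing, a misspelled variable silently becomes a brand-new variable, whereas IMPLICIT NONE turns the typo into a compile-time error.

program typo_demo
   ! No IMPLICIT NONE here, so Fortran's default typing rules apply:
   ! names starting with I-N are INTEGER, everything else is REAL.
   total = 0.0
   totl  = total + 1.0   ! misspelling: silently creates a new REAL variable
   print *, total        ! prints 0.0, not the 1.0 the author intended
   ! Adding "implicit none" as the first statement would turn the
   ! misspelled TOTL into a compile-time error instead of a silent bug.
end program typo_demo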

Languages faster than C++ [closed]

It is said that Blitz++ provides near-Fortran performance.
Does Fortran actually tend to be faster than regular C++ for equivalent tasks?
What about other HL languages of exceptional runtime performance? I've heard of a few languages surpassing C++ for certain tasks... Objective Caml, Java, D...
I guess GC can make much code faster, because it removes the need for excessive copying around the stack? (assuming the code is not written for performance)
I am asking out of curiosity -- I always assumed C++ is pretty much unbeatable barring expert ASM coding.
Fortran is faster and almost always better than C++ for purely numerical code. There are many reasons why Fortran is faster. It is the oldest compiled language (a lot of knowledge in optimizing compilers). It is still THE language for numerical computations, so many compiler vendors make a living selling optimized compilers. There are also other, more technical reasons. Fortran (well, at least Fortran 77) does not have pointers, and thus does not have the aliasing problems which plague the C/C++ languages in that domain. Many high-performance libraries are still coded in Fortran, with a long (> 30 years) history. Neither C nor C++ has any good array constructs (C is too low level; C++ has as many array libraries as compilers on the planet, all incompatible with each other, thus preventing a pool of well tested, fast code).
Whether Fortran is faster than C++ is a matter of discussion. Some say yes, some say no; I won't go into that. It depends on the compiler, the architecture you're running it on, the implementation of the algorithm ... etc.
Where Fortran does have a big advantage over C is the time it takes you to implement those algorithms, and that makes it extremely well suited for any kind of numerical computing. I'll state just a few obvious advantages over C (a small sketch follows the list):
1-based array indexing (tremendously helpful when implementing larger models; you don't have to think about it, you just FORmula TRANslate)
it has a power operator (**) (God, whose idea was it that a power function would do instead of an operator?!)
it has, I'd say, the best support for multidimensional arrays of all the languages on the current market (and it doesn't seem that's going to change soon) - A(1,2) just like in math
not to mention avoiding the loops - A=B*C multiplies the arrays element by element (almost like MATLAB syntax with compiled speed)
it has parallelism features built into the language (check the new standard on this one)
it is very easily connectible with languages like C and Python, so you can do your heavy-duty calculations in Fortran while doing ... whatever ... in the language of your choice, if you feel so inclined
it is completely backward compatible (since the whole of F77 is a subset of F90), so you have half a century of coding at your disposal
it is very, very portable (this might not work for some compiler extensions, but in general it works like a charm)
it has a problem-solving oriented community (Fortran users are usually not CS people but mathematicians, physicists, and engineers ... people with problem-solving rather than programming experience, whose knowledge about your problem can be very helpful)
Can't think of anything else off the top of my head right now, so this will have to do.
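A small sketch of some of the points above (matrix sizes and values are arbitrary, chosen only for illustration):

program fortran_niceties
   implicit none
   real :: a(3,3), b(3,3), c(3,3)
   real :: x
   integer :: i, j

   ! 1-based, multidimensional arrays addressed like the math: a(i,j)
   do j = 1, 3
      do i = 1, 3
         b(i,j) = real(i)
         c(i,j) = real(j)
      end do
   end do

   ! Whole-array expressions: element-wise product without explicit loops
   a = b * c

   ! Built-in power operator
   x = 2.0**10

   ! True matrix multiplication is an intrinsic as well
   a = matmul(b, c)

   print *, a(1,1), x
end program fortran_niceties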
What Blitz++ is competing against is not so much the Fortran language, but the man-centuries of work going into Fortran math libraries. To some extent the language helps: an older language has had a lot more time to get optimizing compilers (and , let's face it, C++ is one of the most complex languages). On the other hand, high level C++ libraries like Blitz++ and uBLAS allows you to state your intentions more clearly than relatively low-level Fortran code, and allows for whole new classes of compile-time optimizations.
However, using any library effectively all the time requires developers to be well acquainted with the language, the library and the mathematics. You can usually get faster code by improving any one of the three...
FORTRAN is typically faster than C++ for array processing because of the different ways the languages treat arrays - FORTRAN doesn't allow aliasing of array arguments, whereas C++ does, which makes the FORTRAN compiler's job easier. Also, FORTRAN has many very mature mathematical libraries which have been worked on for nearly 50 years - C++ has not been around that long!
This will depend a lot on the compiler and the programmers, on whether the language has GC, and it can vary too much to generalize. If it is compiled directly to machine code then expect better performance than an interpreted language most of the time, but there is a finite amount of optimization possible before you reach assembly speed anyway.
If someone said Fortran was slightly faster, would you code a new project in it anyway?
The thing with C++ is that it is very close to the hardware level. In fact, you can program at the hardware level (via assembly blocks). In general, C++ compilers do a pretty good job at optimisations (for a huge speed boost, enable "Link Time Code Generation" to allow the inlining of functions between different cpp files), but if you know the hardware and have the know-how, you can write a few functions in assembly that work even faster (though sometimes, you just can't beat the compiler).
You can also implement your own memory managers (which is something a lot of other high-level languages don't allow), so you can customize them for your specific task (maybe most allocations will be 32 bytes or less; then you can just have a giant list of 32-byte buffers that you can allocate/deallocate in O(1) time). I believe that C++ CAN beat any other language, as long as you fully understand the compiler and the hardware that you are using. The majority of it comes down to what algorithms you use more than anything else.
You must be using some odd managed XML parser as you load this page then. :)
We continuously profile code and the gain is consistent (and this is not naive C++, it is just modern C++ with Boost). It consistently beats any CLR implementation by at least 2x and often by 5x or more. It's a bit better than the Java days, when it was around 20x faster, but you can still find good instances, simply eliminate all the System.Object bloat, and clearly beat it to a pulp.
One thing managed devs don't get is that the hardware architecture works against any scaling of VM and object-root approaches. You have to see it to believe it: hang on, fire up a browser and go to a 'thin' VM like Silverlight. You'll be shocked at how slow and CPU-hungry it is.
Two, kick off a database app for any performance comparison, managed vs. native DB.
It's usually the algorithm not the language that determines the performance ballpark that you will end up in.
Within that ballpark, optimising compilers can usually produce better code than most assembly coders.
Premature optimisation is the root of all evil
This may be the "common knowledge" that everyone can parrot, but I submit that's probably because it's correct. I await concrete evidence to the contrary.
D can sometimes be faster than C++ in practical applications, largely because the presence of garbage collection helps avoid the overhead of RAII and reference counting when using smart pointers. For programs that allocate large amounts of small objects with non-trivial lifecycles, garbage collection can be faster than C++-style memory management. Also, D's builtin arrays allow the compiler to perform better optimizations in some cases than C++'s STL vector, which the compiler doesn't understand. Furthermore, D2 supports immutable data and pure function annotations, which recent versions of DMD2 optimize based on. Walter Bright, D's creator, wrote a JavaScript interpreter in both D and C++, and according to him, the D version is faster.
C# is much faster than C++ - in C# I can write an XML parser and data processor in a tenth the time it takes me to write it C++.
Oh, did you mean execution speed?
Even then, if you take the time from the first line of code written to the end of the first execution of the code, C# is still probably faster than C++.
This is a very interesting article about converting a C++ program to C# and the effort required to make the C++ faster than the C#.
So, if you take development speed into account, almost anything beats C++.
OK, to address the OP's runtime-only performance requirement: it's not the language, it's the implementation of the language that determines the runtime performance. I could write a C++ compiler that produces the slowest code imaginable, but it would still be C++. It is also theoretically possible to write a compiler for Java that targets IA32 instructions rather than the Java VM byte codes, giving a runtime speed boost.
The performance of your code will depend on the fit between the strengths of the language and the requirements of the code. For example, a program that does lots of memory allocation / deallocation will perform badly in a naive C++ program (i.e. use the default memory allocator) since the C++ memory allocation strategy is too generalised, whereas C#'s GC based allocator can perform better (as the above link shows). String manipulation is slow in C++ but quick in languages like php, perl, etc.
It all depends on the compiler. Take, for example, the Stalin Scheme compiler: it beats almost all languages in the Debian micro-benchmark suite. But do they mention anything about compile times?
No. I suspect (I have not used Stalin before) that compiling for benchmarks (in other words, all optimizations at maximum effort levels) takes a jolly long time for anything but the smallest pieces of code.
If the code is not written for performance, then C# is faster than C++.
A necessary disclaimer: All benchmarks are evil.
Here are benchmarks in favour of C++.
The above two links show that we can find cases where C++ is faster than C# and vice versa.
Performance of a compiled language is a useless concept: What's important is the quality of the compiler, ie what optimizations it is able to apply. For example, often - but not always - the Intel C++ compiler produces better performing code than g++. So how do you measure the performance of C++?
Where language semantics come in is how easy it is for the programmer to get the compiler to create optimal output. For example, it's often easier to parallelize Fortran code than C code, which is why Fortran is still heavily used for high-performance computation (eg climate simulations).
As the question and some of the answers mentioned assembler: the same is true here, it's just another compiled language and thus not inherently 'faster'. The difference between assembler and other languages is that the programmer - who ideally has absolute knowledge about the program - is responsible for all of the optimizations instead of delegating some of them to the 'dumb' compiler.
E.g., function calls in assembler may use registers to pass arguments and don't need to create unnecessary stack frames, but a good compiler can do this as well (think inlining or fastcall). The downside of using assembler is that better-performing algorithms are harder to implement (think linear search vs. binary search, hashtable lookup, ...).
Doing much better than C++ is mostly going to be about making the compiler understand what the programmer means. An example of this might be an instance where a compiler of any language infers that a region of code is independent of its inputs and just computes the result value at compile time.
Another example of this is how C# produces some very high performance code simply because the compiler knows what particular incantations 'mean' and can cleverly use the implementation that produces the highest performance, where a transliteration of the same program into C++ results in needless alloc/delete cycles (hidden by templates) because the compiler is handling the general case instead of the particular case this piece of code is giving.
A final example might be the Brook/CUDA adaptations of C designed for exotic hardware that isn't so exotic anymore. The language supports the exact primitives (kernel functions) that map to the non-von Neumann hardware being compiled for.
Is that why you are using a managed browser? Because it is faster? Or a managed OS, because it is faster? Nah, hang on, it is the SQL database... Wait, it must be the game you are playing. Stop, there must be a piece of numerical code that Java and C# frankly are useless with. BTW, you have to check what your VM is written in before you slag the root language and say it is slow.
What a misconception. But hey, show me a fast managed app so we can all have a laugh. VS? OpenOffice?
Ahh... The good old question - which compiler makes faster code?
1. It only matters in code that actually spends much time at the bottom of the call stack, i.e. hot spots that don't contain function calls, such as matrix inversion, etc.
2. (Implied by 1) It only matters in code the compiler actually sees. If your program counter spends all its time in 3rd-party libraries you don't build, it doesn't matter.
3. In code where it does matter, it all comes down to which compiler makes better ASM, and that's largely a function of how smartly or stupidly the source code is written.
With all these variables, it's hard to distinguish between good compilers.
However, as was said, if you've got a lot of Fortran code to compile, don't re-write it.

How can I make my own C++ compiler understand templates, nested classes, etc. strong features of C++?

It is a university task in my group to write a compiler for a C-like language. Of course I am going to implement a small part of our beloved C++.
The exact task is absolutely stupid, and the lecturer told us it needs to be self-compilable (it should be able to compile itself) - so he meant we should not use libraries such as Boost and the STL. He also does not want us to use templates because they are hard to implement.
The question is: is it realistic for me, as I'm going to write this project on my own, with a deadline at the end of May - the middle of June (this year), to implement not only templates, but also nested classes, namespaces, and virtual function tables at the level of syntax analysis?
PS: I am not a noobie in C++.
Stick to doing a C compiler.
Believe me, it's hard enough work building a decent C compiler, especially if it's expected to compile itself. Trying to support all the C++ features like nested classes and templates will drive you insane. Perhaps a group could do it, but on your own, I think a C compiler is more than enough to do.
If you are dead set on this, at least implement a C-like language first (so you have something to hand in). Then focus on showing off.
"The exact task is absolutely stupid" - I don't think you're in a position to make that judgment fairly. Better to drop that view.
"I`m going to write this project on my own" - you said it's a group project. Are you saying that your group doesn't want to go along with your view that it should morph into C++, so you're taking off and working on your own? There's another bit I'd recommend changing.
It doesn't matter how knowledgable you are about C++. Your ability with grammars, parsers, lexers, ASTs, and code generation seems far more germane.
Without knowing more about you or the assignment, I'd say that you'd be doing well to have the original assignment done by the end of May. That's three months away. Stick to the assignment. It might surprise you with its difficulty.
If you finish early, and fulfill your obligation to your team, I'd say you should feel free to modify what's produced to add C++ features.
I'll bet it took Bjarne Stroustrup more than three months to add objects to C. Don't overestimate yourself or underestimate the original assignment.
No problem. And while you're at it, why not implement an operating system for it to run on too.
Follow the assignment. Write a compiler for a C-like language!
What I'd do is select a subset of C. Remove floating-point datatypes and every other feature that isn't necessary in building your compiler.
Writing a C compiler is a lot of work. You won't be able to do that in a couple of months.
Writing a C++ compiler is downright insane. You wouldn't be able to do that in 5 years.
I will like to stress a few points already mentioned and give a few references.
1) STICK TO THE 1989 ANSI C STANDARD WITH NO OPTIMIZATION.
2) Don't worry, with proper guidance, good organization and a fair amount of hard work this is doable.
3) Read The C Programming Language cover to cover.
4) Understand important concepts of compiler development from the Dragon Book.
5) Take a look at lcc both the code as well as the book.
6) Take a look at Lex and Yacc (or Flex and Bison)
7) Writing a C compiler (up to the point it can self compile) is a rite of passage ritual among programmers. Enjoy it.
For a class project, I think that requiring the compiler to be able to compile itself is a bit much to ask. I assume that this is what was meant by stupid in the question. It means that you need to figure out in advance exactly how much of C you are going to implement, and stick to that in building the compiler - so, building a symbol table using primitives rather than just using an STL map. This might be useful for a data structures course, but misses the point for a compiler course. It should be about understanding the issues involved with the compiler, and choosing which data structures to use, not coding the data structures.
Building a compiler is a wonderful way to really understand what happens to your code once the compiler gets hold of it. What is the target language? When I took compilers, it took three of us all semester to build a compiler to go from sorta-Pascal to assembly. It's not a trivial task. It's one of those things that seems simple at first, but the more you get into it, the more complicated things get.
You should be able to complete a C-like language compiler within the time frame; assuming you are taking more than one course, that is exactly what you might be able to do in time. C++ is also doable, but with a lot more extra hours put in. Expecting to do C++ templates/virtual functions is expecting too much of yourself, and you might fail the assignment altogether. So it's better to stick with a C-subset compiler and finish it in time. You should also consider the time it takes for QA; if you want to be thorough, QA itself will also take a good amount of time.
Namespaces, nested classes, and even virtual functions are quite simple at the syntax level; it's just one or two more rules for the parser. It is much more complicated at higher levels: deciding which function/class to choose (name shadowing, ambiguous names between namespaces, etc.), or when compiling to bytecode/running the AST. So you may be able to write these, but if it isn't necessary, skip it and write just a bare functional model.
If you are talking about a complete compiler, with code generation, then forget it. If you just intend to do the lexical & syntactic analysis side of things, then some form of templating may just about be doable in the time frame, depending on what compiler building tools you use.

As a programmer with no CS degree, do I have to learn C++ extensively?

I'm a programmer with 2 years experience, I worked in 4 places and I really think of myself as a confident, and fluent developer.
Most of my colleagues have CS degrees, and I don't really feel any difference! However, to keep my mind on the same stream as these guys, I studied C (read Beginning C: From Novice to Professional), data structures with C, and also OOP with C++.
I have a reasonable understanding of pointers and memory management, and I also attended a scholarship program in which C, data structures, and C++ were covered.
I want to note that my familiarity with C and C++ does not exceed reading some pages and executing some demos; I haven't worked on any project using C or C++.
Lately a friend of mine advised me to learn C, and C++ extensively, and then move to OpenGL and learn about graphics programming. He said that the insights I may gain by learning these topics will really help me throughout my entire life as a programmer.
PS: I work as a full-time developer mostly working on ASP.NET applications using C#.
Recommendations?
For practical advancement:
From a practical sense, pick a language that suits the domain you want to work in.
There is no need to learn C nor C++ for most programming spaces. You can be a perfectly competent programmer without writing a line of code in those languages.
If however you are not happy working in the exact field you are in now, you can learn C or C++ so that you may find a lower level programming job.
Helping you be a better programmer:
You can learn a lot from learning multiple languages though. So it is always good to broaden your horizons that way.
If you want more experience in another language, and have not tried it yet, I would recommend to learn a functional programming language such as Scheme, Lisp, or Haskell.
First, having a degree has nothing to do with knowing C++. I know several people who graduated from CS without ever writing more than 50 lines of C/C++. CS is not about programming (in the same sense that surgery is not about knives), and it certainly isn't about individual languages. A CS degree requires you to poke your nose into several different languages, on your way to somewhere else. CS teaches the underlying concepts, an understanding of compilers, operating systems, the hardware your code is running on, algorithms and data structures and many other fascinating subjects. But it doesn't teach programming. Whatever programming experience a CS graduate has is almost incidental. It's something he picked up on the fly, or because of a personal interest in programming.
Second, let's be clear that it's very possible to have a successful programming career without knowing C++. In fact, I'd expect that most programmers fall into this category. So you certainly don't need to learn C++.
That leaves two possible reasons to learn C++:
Self-improvement
Changing career track
#2 is simple. If you want to transition to a field where C++ is the dominant language, learning it would obviously be a good idea. You mentioned graphics programming as an example, and if you want to do that for a living, learning C++ will probably be a good idea. (However, I don't think it's a particularly good suggestion for "insights that will help throughout your life as a programmer". There are other fields that are much more generally applicable. Learning graphics programming will teach you graphics programming, and not much else.)
That leaves #1, which is a bit more interesting. Will you become a better programmer simply by knowing C++? Perhaps, but not as much as some may think. There are several useful things that C++ may teach you, but there also seems to be a fair bit of superstition about it: it's low-level and has pointers, so by learning C++, you will achieve enlightenment.
If you want to understand what goes on under the hood, C or C++ will be helpful, sure, but you could cut out the middle man and just go directly into learning about compilers. That'd give you an even better idea. Supplement that with some basic information on how CPUs work, and a bit about operating systems as well, and you've learned all the underlying stuff much better than you would from C++.
However, some things I believe are worth picking up from C++, in no particular order:
(several of them are likely to make you despair at C#, which, despite adopting a lot of brilliant features, is still missing some that, to a C++ programmer, seem blindingly obvious)
Paranoia: Becoming good at C++ implies becoming a bit of a language lawyer. The language leaves a lot of things undefined or unspecified, so a good C++ programmer is paranoid. "The code I just wrote looks ok, and it seems to behave ok when I run it - but is it well-defined by the standard? Will it break tomorrow, on someone else's computer, or when I compile with an updated compiler? I have to check the standard". That's less necessary in other languages, but it may still be a healthy experience to carry with you. Sometimes, the compiler doesn't have the final word.
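For instance, here is a minimal sketch (my own example, not drawn from the answer above) of the kind of distinction that paranoia is about: two near-identical increments, only one of which the standard pins down:

#include <iostream>
#include <limits>

int main() {
    unsigned int u = std::numeric_limits<unsigned int>::max();
    ++u;      // well-defined: unsigned arithmetic wraps around to 0

    int i = std::numeric_limits<int>::max();
    // ++i;   // undefined behaviour: signed overflow may "work" today and
    //        // break with a different compiler or optimisation level
    (void)i;  // silence "unused variable" warnings

    std::cout << u << '\n';   // prints 0 on every conforming implementation
    return 0;
}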
RAII: C++ has pioneered a pretty clever way to deal with resource management (including the dreaded memory management). Create an object on the stack, which in its constructor acquires the resource in question (database connection, chunk of memory, a file, a network socket or whatever else), and in its destructor ensures that this resource is released. This simple mechanism means that you virtually never write new/delete in your top-level code; it is always hidden inside constructors or destructors. And because destructors are guaranteed to execute when the object goes out of scope, even if an exception is thrown, your resource is guaranteed to be released. No memory leaks, no unclosed database connections. C# doesn't directly support this, but being familiar with the technique sometimes lets you see a way to emulate it in C#, in the cases where it's useful. (Obviously memory management isn't a concern, but ensuring that database connections are released quickly might still be.)
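As a rough illustration of the idea (a minimal sketch rather than production code; the file name is made up):

#include <cstdio>
#include <stdexcept>

// Owns a FILE*: acquired in the constructor, released in the destructor.
class File {
public:
    explicit File(const char* path) : handle_(std::fopen(path, "r")) {
        if (!handle_) throw std::runtime_error("could not open file");
    }
    ~File() { std::fclose(handle_); }      // runs even if an exception unwinds the stack

    File(const File&) = delete;            // forbid copies so the handle has a single owner
    File& operator=(const File&) = delete;

    std::FILE* get() const { return handle_; }

private:
    std::FILE* handle_;
};

void readConfig() {
    File f("config.txt");                  // hypothetical file, just for illustration
    // ... use f.get() ...
}                                          // f goes out of scope here; fclose runs automatically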
Generic programming, templates, the STL and metaprogramming: The C++ standard library (or the part of it commonly known as the STL) is a pretty interesting example of library design. In some ways, it is lightyears ahead of .NET or Java's class libraries, although LINQ has patched up some of the worst shortcomings of .NET. Learning about it might give you some useful insights into clever ways to work with sequences or sets of data. It also has a strong flavor of functional programming, which is always nice to poke around with. It's implemented in terms of templates, which are another remarkable feature of C++, and template metaprogramming may be beneficial to learn about as well. Not because it is directly applicable to many other languages, but because it might give you some ideas for writing more generic code in other languages as well.
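A small taste of that style, just to show the flavour of iterators, algorithms and lambdas (the specific numbers are arbitrary):

#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> values{3, 1, 4, 1, 5, 9, 2, 6};

    std::sort(values.begin(), values.end());     // the same generic algorithm works on any container

    // Sum the squares of the even elements, composed from library pieces plus a lambda.
    int sum = std::accumulate(values.begin(), values.end(), 0,
                              [](int acc, int v) { return v % 2 == 0 ? acc + v * v : acc; });

    std::cout << "sum of squares of even elements: " << sum << '\n';
    return 0;
}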
Straightforward mapping to hardware: C++ isn't necessarily a low-level language. But most of its abstractions have been modelled so that they can be implemented to map directly to common hardware features. That means it might help provide a good understanding of the "magic" that occurs between your managed .NET code and the CPU at the other end. How is the CLR implemented, what do the heap and stack actually mean, and so on.
p/invoke: Let's face it, sometimes .NET doesn't offer the functionality you need. You have to call some unmanaged code. And then it's useful to actually know the language you might be using. (If you can get around it with just a single p/invoke call, you only need to be able to read C function signatures on MSDN so you know which arguments to pass, but sometimes it may be preferable to write your own C++ wrapper and call into that instead.)
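For what it's worth, a minimal sketch of what the native side of such a wrapper might look like, assuming Windows/MSVC; the function name and signature are made up, and the C# side would then call it via DllImport:

#include <cmath>

// Exported with C linkage so the flat name is callable from C# via p/invoke.
extern "C" __declspec(dllexport)
double ComputeDistance(double x1, double y1, double x2, double y2) {
    // Whatever unmanaged work .NET doesn't offer would go here.
    return std::sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));
}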
I don't know if you should learn C++. There are valid reasons why doing so may make you a better programmer, but then again, there are plenty of other things you could spend your time on that would also make you a better programmer. The choice is yours. :)
Experience is the best teacher.
While you can read about things like memory management, data structures (and their implementations), algorithms, etc., you won't really get it until you've had a chance to put it into practice. While I don't know if it's truly necessary to use C or C++ to learn these things, I would put some effort into actually writing some code that manages its own memory and implements some common data structures. I think you'll learn things that will help you to understand your code better; to know what's really going on under the hood, so to speak. I would also recommend reading up on computer organization and operating systems, computer security, and boolean logic. On the other hand, I've never really found a need to do any OpenGL programming, though I did do some X Windows stuff once upon a time.
Having a degree has got nothing to do with C/C++, actually. What it does give you is stuff like big-O estimation, data structures, and a mathematical background. Linear algebra, for example, turns out to be very useful, even in contexts that seemingly have nothing to do with it (e.g. search engines).
A typical error that a good coder without any theoretical knowledge might make is trying to solve an NP-complete problem with an exact algorithm rather than an approximation.
Now, why do universities teach you C/C++? Because it lets you see how it's all working "under the hood". You get the opportunity to see how the call stack works, how memory management works, how pointers work. Of course you don't need that knowledge to use most modern languages, but you do need it to understand how their "magic" works. For example, you can't understand how a GC works if you have no idea about pointers and memory allocation.
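To make that concrete, a tiny illustrative snippet contrasting stack and heap allocation; the values are arbitrary, and it only shows the bookkeeping a GC would otherwise do for you:

#include <iostream>

int main() {
    int onTheStack = 42;                    // lives in main's stack frame, reclaimed automatically on return

    int* onTheHeap = new int(onTheStack);   // lives on the heap; the pointer holds its address
    std::cout << "address: " << onTheHeap << ", value: " << *onTheHeap << '\n';

    delete onTheHeap;                       // forgetting this line is a leak, exactly what a GC prevents
    return 0;
}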
I've often asked this question (to myself). I think the more general version is, "how can I call myself a programmer if I don't know how to kick around a language that doesn't have automatic garbage collection, with pointers and all that 'complex' stuff?" I've never learned C++ except to do a few HelloWorlds, so my answer is limited by that lack:
I think that the feeling that you need to learn C++ (or assembler, really) comes from the feeling that you're always working on someone else's abstractions: the "rocket scientists" who write the JVM, CLR, whatever. So if you can get to a lower level language, you'll really know what you're talking about. I think this is quite wrong. One is always building on a set of abstractions: even Assembler is translated into binary, which can be learned as well. And beyond that, you still couldn't make a computer out of firewood, even if you had a pair of pliers and a bit of titanium.
In my experience as a corporate trainer in software dev (in Java, mostly), the best people were not those who knew C++, but rather those who treated the language they were working in as an independent space for "play". Although memory management comes up all the time in C# and Java, you never have to think about anything beyond freeing your objects from references (and a few other cliche places, like using streams instead of throwing around huge objects in memory). Pointers and all that stuff do not help you there, except as a rite of passage (and a good one, I'm sure).
So in summary, work in the language you're in and branch out into as many relevant things as possible. These days I find myself dipping into JavaScript even though the APIs are supposed to make this unnecessary, and doing some stuff in Fireworks while I mess with CSS by hand. And this is all in addition to the development I'm really doing in RoR, PHP and ActionScript. So my point is: focus on the abstractions that you need, because they're more likely to be relevant than the lower-level stuff that underlies your platform.
Edit: I made some slight changes in response to jalf's comments, thanks.
I have a first-class Software Engineering degree and work for a large console manufacturer, developing a game engine in a team of programmers all of whom program across a wide range of languages, from Asm to C++ to C# to Lua, and know the hardware inside out.
I would say that 5% of my degree was useful, and that by far and away the most important traits for furthering my career have been enthusiasm and self-development.
In fact many of the colleagues I've worked with haven't had a degree and on average have probably been the better ones.
I'd say this is because they've had to replace that piece of paper from a university degree with actual working code that they've developed in their own spare time, learning the skills off their own back rather than being spoon-fed them.
My driving instructor used to tell me that I would only start learning how to drive after I passed my test, i.e. you only really learn from the practical application of the basics. A CS degree gives you the basics, which, if you've had a job programming in any of the major languages for 6 months, you will already have. A degree just opens up doors that you may not have otherwise - it doesn't help that much once inside the door.
By the sounds of it, knowing how the software interacts with the hardware is the most important area for you at the moment; only then does the 'mystery' or 'magic' really disappear and you can be confident of what you're talking about elsewhere. Learning C and C++ will undoubtedly help in this respect, as will knowing an API like OpenGL.
But I'd say the most important thing is to find something you have an interest in and code that. If you have real enthusiasm for it you will naturally learn more low-level information and become a better programmer, if indeed that is your definition of being a better programmer!
I've been working as a developer with no degree for almost 15 years now. I started with Ada and moved quickly into C/C++, but it's been my experience that there will always be some language that you "have to learn." If it's not C++, it will be C# or C or Java or Lisp. My advice is to make sure you're solid on the basics that apply to any language (my best friend as a dev with no degree was the CLR book), and you should be able to move relatively easily between languages and frameworks.
You don't absolutely have to learn C/C++, but both languages will teach you to think about how your software interacts with the underlying OS and hardware, which is an essential skill. You say that you already know about pointers, memory management and so on, which is great. Many programmers without a CS degree lack this important knowledge.
Another good reason to learn C/C++ is that there's a lot of code written in these languages, and a good way to learn more about programming is to read other people's code. If you're interested in writing low-level code like drivers, operating systems, file systems and the like, C/C++ is pretty much the only way to go.
Do you have to learn it extensively? I expect not.
However, it's best to always be learning things that help you look at programming from a different perspective. Learning C or C++ is worth it for the insight into how things work at a lower level. For C and C++ programmers the same thing might be accomplished by learning assembly. Most people won't use assembly in a project, but knowing how it works can be very helpful from time to time.
My recommendation is always to learn as much as you can. If you're not working on a C++ project in the near future I wouldn't be too worried about learning the ins and outs, but it's always good to be able to look at problems from another angle and learning new languages is one way to do that.
Today for the majority of applications, C and C++ can be viewed as an academic exercise: "How can we write programs without garbage collection?"
The answer is: you can, but it's a mostly painful experience. Most of the details of best practices in C++ are related to the lack of garbage collection.
Given the brilliant performance of modern GCs, and the general increase in computing power, even cell phones have GCs these days. And in a platform with a GC, you can always code in such a way as to limit the pressure you put on the GC.
Listen to or read SO podcast 44, where Joel plays his favorite song, Write in C.
Spolsky: Yeah, it's not paying the proper royalties to the Beatles anyway. We'll link to that from the shownotes. Awesome song, Write in C.
Atwood: That's right, Joel's favourite song. Write everything in C, because Joel does in fact write everything in C, don't you, Joel?
Spolsky: I started using a little bit of C99, the latest version of C, which lets you declare variables after you've written some statements.
...
If you don't have a professional reason (other than the good practice of self-improvement) to learn C or C++, then you should have a passionate side project planned out that you could write in C or C++. Once the going gets tough on the side project, you'll need your enthusiasm and curiosity to take you over the hump (since on a side project, you naturally don't have the motivation of pay or the de-motivation of a superior looming over you).
Also, most CS degrees use Java as their language of choice now. This just reinforces the point that the main benefit for people with CS degrees is the experience gained in whatever language was used, plus the exposure to theory in the degree's other classes, and not so much the specific language (though I think the higher programmes go up the abstraction scale, the worse it is for the students in the long run).
Without a practical reason for learning a programming language, it is pretty hard going.
If you can think of particular problems or a specific task that the language is suited for, then the learning experience is driven by needs rather than simple academics.
I only recently switched from VB to C# (1 month ago). While that's not as significant a difference as a switch from C# to C, because I switched for a particular reason I found it much easier to learn. I had dabbled previously without a specific problem to solve; needless to say, I switched back.
If you have a different style of learning, as in self-taught, then my recommendation for becoming a better programmer is to research topics in your domain and, from bottom to top, slowly climb up the ladder. There are a fair number of different kinds of programmers; no one will excel in all of them, so don't start off with that expectation in mind.
Best of luck to you.
C++ is just a programming language. What you don't have that other students (if they paid attention in class) have is the deeper understanding that comes through studying concepts.
Being a programmer is not and should not be the end goal of any CS graduate. However, it is as far as most people get without such a degree.
Here is an analogy: An engineer and an architect both at some point learn to draft buildings using CAD. Also, someone completely untrained can come in and start work using CAD and be very effective. This is a good career and it pays well, but for both the engineer and the architect it is not where you want to be when you are 30.
One value of knowing C is that many other languages, including C#, Java, C++, JavaScript, Python, and PHP, have their roots in C syntax.
Another value, and arguably a more important one, is that it will build your confidence. Programmers are a confident group and very optimistic (you have to be confident to think that you can write the equivalent of a 1000-page book without a single spelling or grammatical error). And confidence in your ability to learn and effectively use any language will grow considerably with a pure C application or two under your belt.
So write a non-trivial program in C; something that at least reads and writes files, allocates and deallocates memory, and manages a data structure like a queue or binary tree.
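For example, here is a minimal sketch of such a data structure, a linked queue with explicit allocation, written in C++ with new/delete (in plain C the same shape uses malloc/free); the details are only illustrative:

#include <cassert>

struct Node {
    int value;
    Node* next;
};

struct Queue {
    Node* head = nullptr;
    Node* tail = nullptr;
};

void enqueue(Queue& q, int value) {
    Node* n = new Node{value, nullptr};      // explicit allocation you are responsible for
    if (q.tail) q.tail->next = n; else q.head = n;
    q.tail = n;
}

bool dequeue(Queue& q, int& out) {
    if (!q.head) return false;
    Node* n = q.head;
    out = n->value;
    q.head = n->next;
    if (!q.head) q.tail = nullptr;
    delete n;                                // forgetting this is the classic leak
    return true;
}

int main() {
    Queue q;
    enqueue(q, 1);
    enqueue(q, 2);
    int v;
    while (dequeue(q, v)) { /* use v */ }
    assert(q.head == nullptr && q.tail == nullptr);
    return 0;
}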
Your confidence will thank you.