I've got some code I need to run, written in FORTRAN IV. What are my options for running this code? Is there an application out there that can compile and run FORTRAN IV code on a PC? Or, if possible, I am looking for a utility to convert FORTRAN IV code to FORTRAN 77 or later. I have little experience in Fortran and in programming in general.
Thanks for your help.
Intel's Fortran compiler supports Fortran IV. If you don't want to go that way, there are some conversion utilities mentioned in this question --- but none of them sound very promising.
Very few features have been deleted from Fortran. A few more features have been marked as obsolescent in the more recent language standards. But the compilers tend to support most or all of these features because some customers don't want to recode working legacy programs just because they use "bad" features. Sometimes one has to use compiler options to use some of these features. So I'd just pick a compiler and try it on the existing code. There are many to choose from. Maybe get a trial version to see whether it works before paying your money or use a free one.
Another possible problem is that your code base might use non-standard features. In pre-Fortran-90 days there was less concern with language standards, and some vendors added extra features for user convenience and to differentiate their products. If present, such features might cause greater problems and require recoding.
Probably the most important Fortran IV behavior to watch for is that DO loops test the loop variable at the end of the body, so DO loops always execute at least once. This was a big issue when the language changed to the current method. For example:
      program demo
      integer i, n
      n = 5
      do 99 i = n, 4
         write(*,*) i
   99 continue
      end
This will print "5" under Fortran IV semantics, and nothing under Fortran 77 semantics. Code that relies on this behavior can be hard to port.
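If you do hit that situation, one possible workaround (my own sketch, not part of the original answer) is to rewrite such a loop as an explicit post-test loop, so the body still runs at least once under Fortran 77 and later:

      program atleast1
      integer i, n
      n = 5
      i = n
   10 continue
         write(*,*) i
         i = i + 1
      if (i .le. 4) go to 10
      end

This prints "5" with any standard-conforming compiler, matching the old trip-at-least-once behavior.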
I'll just add that if the code does use non-standard features that your compiler doesn't handle, the folks at fortranwiki.org maintain a nice list of explanations of, and workarounds for, many such constructs on their Modernizing Old Fortran page.
The obvious answer is to run it as is on MVS 3.8J under the Hercules emulator. MVS 3.8J and IBM FORTRAN G and FORTRAN H are public domain.
You will be able to compile and run the code as if you had a real mainframe! The IBM FORTRAN compilers support (indeed, they effectively defined) the full FORTRAN IV language along with IBM extensions.
Yep, I know... this is not imagined; it's a real Fortran question: is Fortran 2008 backwards compatible with previous versions?
By previous versions I mean Fortran 2003, 95, 90, and even 77.
By backwards compatible I mean: can Fortran 2008 easily run code written for previous versions, with nothing more than some minor changes to syntax?
Nothing was deleted in Fortran 90, but some awful features were deleted in Fortran 95 and later, and more have been marked as "obsolescent". See, e.g., http://www.cisl.ucar.edu/zine/96/fall/articles/3.f90.obsolete.html. As a practical matter, compiler vendors still include these features because there is so much legacy code out there; there would be a customer rebellion if compilers couldn't compile legacy FORTRAN 77 programs.
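To give a concrete taste (this example is mine, not from the linked article), the arithmetic IF is one of the constructs marked obsolescent since Fortran 90, yet most current compilers still accept it, usually with no more than a warning:

      program arithif
      real x
      x = -1.0
c     arithmetic IF (obsolescent since Fortran 90): branch to 10/20/30
c     when x is negative / zero / positive
      if (x) 10, 20, 30
   10 print *, 'negative'
      go to 40
   20 print *, 'zero'
      go to 40
   30 print *, 'positive'
   40 continue
      end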
Sure. Backward compatibility is a large part of what Fortran is about; it is often expected that standard-conforming Fortran code will keep compiling for the next 50 years (not kidding). You can still compile Fortran 66 code with the Intel compiler, which also supports most features of the 2008 standard.
I work in a factory where 80% of our equipment uses an MS-DOS interface. None of our engineers have experience in C/C++ programming, and it was requested that we add some features to the machines' interfaces. Our automation group has abandoned the MS-DOS platform in favor of Allen-Bradley controls. I'm feeling ambitious and have decided to take on this project, even though I have next to no experience in C/C++.
On to the question:
All of the programming was written and compiled in Turbo C++. I would prefer to use Dev-C++ for various reasons (ease of use, additional headers, a more developed C++ platform, etc.). The problem is that the existing code relies heavily on non-standard headers from TC++. There are 10 or so headers in the source code that are unavailable in Dev-C++, and rewriting the code using more modern constructs is not an option; we would lose what little support we have from our AG, time, etc.
Is there a way I could add all the headers from TC++ to Dev-C++? For example, adding graphics.h to Dev-C++ and having it be fully functional? I have tried adding it to the include folder and pulling it in with #include "graphics.h"; when Dev-C++ does manage to find it, it throws a ton of compile errors because it doesn't recognize the declarations inside the graphics.h file.
Unfortunately, I cannot include any example code from this project due to non-disclosure and copyright policies.
My programming experience:
DABBLE in RSLogix 500/5000; Arduino IDE (don't judge); Parker 6K; PanelView; ~40 hrs of self-taught C and C++.
Any help would be much appreciated.
UPDATE
Very helpful information. It seems like this isn't going to be possible given how outdated the hardware is and the restrictions I have on this project, but thank you all for your input.
Most of the headers from old Turbo C are really just the MSDOS API, in a way. So it doesn't make any sense to attempt to use those headers in any other environment and you can't port them to a Windows compiler. Similarly, graphics.h is for a Borland-specific DOS graphics library called BGI and will not work on any other compiler.
It should be noted that old Turbo C++ (I'm assuming version 3.1?) didn't follow the C or C++ standards much; the C++ dialect it used is completely antique.
Also note that the Dev-C++ IDE is outdated and no longer updates its bundled GCC compiler. The Code::Blocks IDE is a better alternative.
This is more of a "long comment" than a direct answer to your question, trying to guide you toward a better understanding of what MAY be the challenges in your project.
I personally would choose a more "professional" level development tool: either Eclipse (the upside being that it is portable and looks/feels the same whether you use Windows or Linux), Xcode (Mac only), or Visual Studio (Windows only). These are full-featured integrated development environments, and they are all very slick. All of them are free or nearly free.
Compiling OLD code that was written for DOS with a modern compiler on a modern OS may be quite a challenge, depending on what the application does and how many assumptions about its environment the code was written with:
does it assume int is 16 bits?
does it call directly into DOS to get file info and to open/read/write/close files?
does it do raw keyboard input?
does it poke characters and/or pixels directly at the screen?
does it use far and near pointers?
are there driver-like components that interface directly with hardware interrupts?
One thing that stands out in your question is the mention of graphics.h, which I believe is very Borland specific. That means you'll have to write your own replacement: a graphics.h-compatible set of functions built on whatever your new environment provides (I expect most of the functionality is available in any modern OS; it's more a case of "what is it called and what do I need in order to call that function"). This can be quite a task in itself.
The tricky part here is not only to identify what the code does, but to replace it with similar logic, that does the same thing in your new environment.
And of course, it all depends on what you want to do with the code and how well written it is: is it nicely modular, does each function do one thing and one thing only, or are there functions that calculate some value, then read some data from disk, then do some I/O to the screen, then talk to some external hardware, and, because they get called frequently, also update the time on the screen if it has changed?
Today I was reading code from some very popular numerical libraries written in FORTRAN 77, such as QUADPACK (last updated in 1987), and I was wondering whether there is any reason not to rewrite those libraries in Fortran 90, apart from the large amount of work it would involve, given the great improvements Fortran 90 brought to the language: free-form source, better control structures so GO TO can be forgotten, vectorization (array syntax), interfaces, and so on.
Is it because FORTRAN 77 compilers produce more optimized code, or maybe because it is better for parallel execution? Note that I'm not even talking about Fortran 2003, for example, which is only 8 years old: I'm talking about Fortran 90, so I assume it is widespread enough and the compilers are ready. I don't have contact with the industry, anyway.
Edit: janneb is right: LAPACK is actually written in Fortran 90.
Well, like "pst" mentioned, the "big amount of work it would pose" is a pretty major reason.
Some further minor points:
LAPACK IS Fortran 90 these days, in the sense that the latest version no longer compiles with an F77 compiler. That being said, it's far from a rewrite; only a few things were changed.
Most of the F90 features you mention make it easier and faster to write robust programs; they don't necessarily make the resulting programs any faster.
It wasn't that long ago that free F90 compilers were not available (plenty of people used g77!), so for a widely used library like LAPACK not using F90 features was likely a conscious decision.
An F77 compiler does not, generally, produce faster code than an F90 compiler, if for no other reason than that it is likely obsolete and cannot optimize for the latest CPUs. A modern Fortran compiler might create faster code from F77 than from an equivalent F90 program that makes extensive use of things like pointers, but that is highly dependent on the program in question (e.g. pointers and fancier data structures may allow better algorithms, letting the F90 program produce results faster even though it might execute at a lower average utilization of the CPU's arithmetic units).
Vectorization, if by that you mean the F90+ array syntax, is mostly a programmer-convenience issue rather than something that makes code faster. A competent compiler will vectorize the equivalent DO loop just as well.
There is no reason to put in a ton of work to (maybe) improve something that works well. It should be noted that even though Fortran 90 introduced many new features, it did not change the meaning of the existing language: the Fortran 90 standard is backwards compatible with FORTRAN 77. A modern compiler will be able to optimize FORTRAN 77 code just as well. There are also features introduced in Fortran 90 (e.g. pointers) that can hinder efficiency and should be avoided if one cares about execution time, e.g. in HPC.
So it would not really make a difference. For a modern compiler, well-written FORTRAN 77 is just as optimizable; it is like a cake without icing. In the case of Fortran 90 and later, the icing looks better than it tastes: it is a convenience for the programmer, but it does not necessarily improve the program's efficiency.
There is one major reason why Fortran 77 programs might be faster:
Allocatable arrays (Fortran 90) are much slower than arrays whose size is declared at compile time.
The two are not placed in the same region of memory; this is the stack vs. heap memory management issue in Fortran.
See here for instance
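To make the distinction concrete, here is a minimal sketch of my own (the names are invented) showing the two kinds of declaration being compared:

program array_kinds
   implicit none
   integer, parameter :: n = 100000
   real :: fixed_a(n)                ! size known at compile time (typically stack or static storage)
   real, allocatable :: dynamic_a(:) ! sized at run time, lives on the heap

   allocate(dynamic_a(n))
   fixed_a   = 1.0
   dynamic_a = 1.0
   print *, sum(fixed_a), sum(dynamic_a)
   deallocate(dynamic_a)
end program array_kinds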
This being said, Fortran 90+ has fast whole-array ("all in one block") operations. I'm a big fan of GCC Fortran (gfortran), and without any compilation options, for an array a of size N,
a = a + 1
is 4 times faster than
do i = 1, N
   a(i) = a(i) + 1
end do
This benchmark is just mine, on my machine, with gfortran and no optimization options, and without any claim that this is professional benchmarking.
The allocation "problem" (which is not a problem but a feature) is not related to the architecture, the compiler, or anything like that; it comes from the Fortran standard itself.
Whether an F77 program will be slower or faster than the equivalent F90 program rewritten with newer syntax is debatable, but let's disregard that for now.
What should be taken into account, however, is that nobody really cares about speed alone. People only care about speed of execution in cases where it is relevant and where it is business-profitable (I'm sure there is a better term for this, but nothing comes to mind right now ... cost-effective, maybe).
Since those two (rather popular) libraries are still in F77, it is apparently the general opinion that the costs of rewriting them outweigh the benefits gained, both in terms of speed of execution and in terms of the cost-effectiveness of the whole process.
I was wondering which high-level language allows the easiest way to call Fortran subroutines. I currently use MATLAB, and calling MEX files seems to be relatively complicated compared to other languages.
I'm particularly interested in how the following compares in terms of getting "up and running" quickly:
*Python via f2py
*R via ?
*MATLAB via MEX files
Another way of asking this would be "If you were to start over and learn a new language, which one would you choose if your objective was calling Fortran subroutines?"
I'm trying to get the "best of both worlds", i.e. good data handling and graphics combined with the ability to call fast Fortran subroutines.
Thank you all in advance for any help you can provide. Also, if someone knows of a good MEX tutorial for Fortran, that would be appreciated as well.
I was wondering which high-level language allows the easiest way to call Fortran subroutines?
The obvious answer is Fortran itself. So-called Modern Fortran (2003 & 2008) has a lot of high-level features, and it is obviously easy to call legacy FORTRAN code (my guess is that you have an old FORTRAN code base) from the modern dialect.
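As a rough sketch of what that looks like (the routine and all names here are invented, purely for illustration): a legacy-style routine is simply an external procedure, so modern free-form code can call it directly, no module or interface block strictly required.

! Stand-in for a legacy F77-style routine: implicit interface, explicit-shape array.
subroutine sumvec(x, n, total)
   integer n
   real x(n), total
   integer i
   total = 0.0
   do i = 1, n
      total = total + x(i)
   end do
end subroutine sumvec

program call_legacy
   implicit none
   real :: x(5), total
   external sumvec          ! legacy routines are just externals
   x = 1.0
   call sumvec(x, size(x), total)
   print *, 'total = ', total
end program call_legacy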
Python via f2py is very nice. I had a little bit of trouble getting it going on Windows with IVF, but it didn't take long to figure out, and the mailing list gives prompt responses. On Linux, it worked without any issues.
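To give a feel for how little boilerplate f2py needs, here is a minimal sketch of my own (the file, module, and subroutine names are all made up). Saved as mean.f90, a command like "f2py -c -m fmean mean.f90" builds a Python extension module, after which fmean.mean(x) can be called with a NumPy array; n is inferred from the array's size thanks to the intent declarations.

! mean.f90 -- hypothetical example subroutine for wrapping with f2py
subroutine mean(x, n, result)
   implicit none
   integer, intent(in) :: n
   real(8), intent(in) :: x(n)
   real(8), intent(out) :: result
   result = sum(x) / n
end subroutine mean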
I haven't used R, but as I understand it, it's mainly useful if you do a lot of statistics with large amounts of data. As for MATLAB, it's a horrible language; if you're just calling Fortran, you're better off with Python.
What I usually do in your case is create Fortran programs that take command-line arguments as input. This is readily available as of the Fortran 2003 standard via the intrinsic get_command_argument subroutine. You can then parse the Fortran program's output from whatever language you are using as a wrapper (assuming that language has access to the system shell). In the past, I have done this with shell scripts, MATLAB (avoid), and Python.
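A minimal sketch of that approach (my own illustration): the program below reads its first command-line argument, converts it to a number, and prints a result that the wrapper language can parse from standard output.

program cli_demo
   implicit none
   character(len=32) :: arg
   real :: x
   call get_command_argument(1, arg)   ! Fortran 2003 intrinsic: fetch argument #1 as text
   read(arg, *) x                      ! convert the text to a number
   print *, 'square = ', x*x
end program cli_demo

Running "./cli_demo 3.0" would print something like "square = 9.0" (give or take list-directed formatting), which the calling script then parses.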
Probably a pretty vague and broad question, but do all C++ compilers compile code into C first before compiling them into machine code?
Because C compilers are nearly ubiquitous and available on nearly every platform, a lot of (compiled) languages go through this phase in their development to bootstrap the process.
In the early phases of language development, to see whether the language is feasible, the easiest way to get a working compiler out is to build one that converts your language to C and then let the native C compiler build the actual binary.
The trouble with this is that language-specific constructs are lost, and with them potential opportunities for optimization. So, in phase two, most languages get their own dedicated compiler front end that understands language-specific constructs and can provide optimization strategies based on them.
C++ went through phase 1 and phase 2 over two decades ago, so it is easy to find a compiler front end dedicated to C++ that generates an intermediate format passed directly to a back end. But you can still find versions of C++ that are translated into C (as an intermediate format) before being compiled.
Nope. GCC for example goes from C++ -> assembler. You can see this by using the -S option with g++.
Actually, now that I think about it, I don't think any modern compiler goes to C before ASM.
No. C++ -> C was used only in the earliest phases of C++'s development and evolution. Most C++ compilers today compile directly to assembler or machine code. Borland C++ compiles directly to machine code, for example.
No. This is a myth, based around the fact that a very early version of Stroustrup's work was implemented that way. C++ compilers generate machine code in almost exactly the same way that C compilers do.
As of this writing in 2010, the only C++ compiler that I was aware of that created C code was Comeau*. However, that compiler hasn't been heard from in over 5 years now (2022). There may be one or two more for embedded targets, but it is certainly not a mainstream thing.
* - There's a link to their old website on this WP page. I'd suggest not clicking that unless your computer has all its shots up to date
This is not defined by the standard. Certainly, compiling to C-source is a reasonable way to do it. It only requires the destination platform to have a C-compiler with a reasonable degree of compliance, so it is a highly portable way of doing things.
The downside is speed. Compilation speed will probably suffer, and perhaps execution speed as well (due to loads of casts, e.g. for virtual functions, that prevent the compiler from optimising fully).
Not that long ago there was a company that had a very nice C++ compiler doing exactly that. Unfortunately, I do not remember the name of the company, and a quick google did not bring the name back. The owner of the company was an active participant in the ISO C++ committee, and you could test your code directly on their homepage, which also had some quite decent resources about C++.
Edit: one of my fellow posters just reminded me. I was talking about Comeau, of course.