In Fortran 77, arrays could be at most 7-dimensional. Is this restriction still present in Fortran 90? I can't find anything on the subject in the tutorial I found on the internet.
Fortran 90 is also limited to a maximum rank of 7. Fortran 2008 increases this to a maximum rank of 15.
See also The new features of Fortran 2008, page 4:
The maximum rank has been increased to 15. In the case of a coarray, the limit of 15 applies to the sum of the rank and corank.
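For illustration, here is a minimal sketch of a declaration at the Fortran 2008 limit (any conforming F2008 compiler should accept it; rank() is itself a Fortran 2008 intrinsic):

    program rank_demo
      implicit none
      ! Rank 15 is the Fortran 2008 maximum; Fortran 90/95/2003 stop at 7.
      real :: a(1,1,1,1,1,1,1,1,1,1,1,1,1,1,1)
      print *, rank(a)   ! prints 15
    end program rank_demo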
I know this is a very general question (Stack Overflow warns me that it might be downvoted given its title), but I cannot find any resources that explain the logic behind the maximum line length of 132 characters in the modern Fortran standard.
Is it about performance, clean source code, compilers, ...? I know that many compilers allow longer source lines. But if longer lines are possible without any compromise, why does the Fortran standard strictly adhere to this rule?
Update Oct 22, 2019: See https://j3-fortran.org/doc/year/19/19-138r1.txt for a proposal accepted as a work item for the next 202X revision of the Fortran standard, which eliminates the maximum line length and continuation limits.
Take a look at the specification:
ftp://ftp.nag.co.uk/sc22wg5/N001-N1100/N692.pdf
section: 3.3.1
It's just a convention. Somebody decided that 132 would be OK. In the 1966 version it was 72.
Standards: https://gcc.gnu.org/wiki/GFortranStandards#Fortran_Standards_Documents
Usually, these limitations (like 80 or 132 characters per line) were dictated by terminals.
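In practice the limit rarely hurts, because free-form Fortran lets you split any statement with & continuations and stay within 132 columns; a minimal sketch:

    program line_length_demo
      implicit none
      real :: total
      ! A long expression split across lines with '&' continuations;
      ! Fortran 2003/2008 allow up to 255 continuation lines, and the
      ! F202X proposal mentioned above removes both limits.
      total = 1.0 + 2.0 + 3.0 + &
              4.0 + 5.0 + 6.0
      print *, total
    end program line_length_demo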
Just to illustrate, in a "funny" way, what coding was like back then ;)
The first programming language I learned back in the 1980s was Fortran (FORTRAN 77, to be exact).
Everybody was super excited because my group of students were the first ones allowed to use the brand new terminals that had just been set up in the room next to the computer. BTW: The computer was an IBM mainframe and it resided in a room the size of a small concert hall, about four times the size of the classroom with the 16 terminals.
I remember more than once having spent hours and hours debugging my code, only to find out that one of my lines had again used the full 80-character width the terminal provided instead of the 72 characters allowed by Fortran 77. I used to call the language Fortran 72 because of that restriction.
When I asked my tutor for the reason he pointed me to the stack of cardboard boxes in the hallway. It was rather a wall of boxes, 8m long and almost 2m high. All these boxes were full of unused punch cards that they did not need anymore after the installation of the terminals.
And yes the punchcards only used 72 characters per code line because the remaining 8 were required for the sequence number of the card.
(Imagine dropping a stack of cards with no sequence numbers punched in.)
I am aware that I broke some rules and conventions here: I hope you still like that little piece of trivia and won't mind that my story does not exactly answer the original question. And yeah, it also repeats some information from previous answers.
The old IBM line printers had a 132-character width, so when IBM designed Fortran, that became the maximum line length.
The reason was the sequence numbers punched in columns 73-80 of the source code cards. When you dropped your program deck on the floor, they allowed you to bring the scrambled deck to a sorting machine (a large, 5-foot-long stand-alone machine) and sort the deck back into order.
A sequencer program read the deck and could punch a new deck with updated sequence numbers, so the programmer did not get involved in the numbering. You punched a new deck after every few dozen changes.
I did it many times between 1970 and 1990.
In the olden days the punch cards were also of finite length. I forget what was used for terminals in the 90s, other than that they were large CRTs; I don't recall the resolution, but it was NOT 2K pixels wide.
Could I estimate the number of C++ LOCs in optimal code (a desktop application), given the number of LOCs in the header files?
The background:
I'm doing an effort estimation and a plan for porting a C++ application to C#.
My first idea was to create a rough estimate based on LOCs and to track progress using LOCs ported versus LOCs remaining. Assuming a porting speed of 200 LOCs/day, I arrived at 1.5 person-years. If I present this figure to the customer, I certainly won't get the contract.
After a closer look at the code I found that it is very inefficient: it contains lots of copy-and-paste code, implements its own container classes, and so on.
So the C++ LOC count does not seem to reflect the effort of implementing the same functionality. My assumption is now that the header files should reflect the functionality better.
No. The size of a header file is a really bad proxy for the size of the associated code file. A header only shows the entry points of an API, and it can hide as much or as little as the API requires.
In other words, a header that declares a single function only says that there's a single public function in that implementation file. The implementation file could have only one function in it, or it could have hundreds. Neither of them is better, there's nothing wrong with either development approach. It just means that you can't use headers to estimate effort.
With a 100k-SLOC program, it would be a stretch to use SLOCs as a measure, because you'll spend more time testing than developing. If you have access to the application's feature documentation, consider using function points instead. From what I hear, they're one of the less broken heuristics around.
As far as development goes, don't forget that you can call to C++ code from C# and that C++/CX can integrate C#. This can ease some porting pain if you can just incrementally rewrite more or less independent components.
Not with the same objective, but out of curiosity I once counted the LOCs of a project of mine with cloc at an intermediate (pre-alpha) stage. It was not well documented, and some places were slightly dirty or not well planned.
    Language              files    blank   comment     code
    C++                     100     2545      3252    11680
    C/C++ Header            108     2847     12721     9077
    C                         4     1080       971     6298
    CMake                    33      241       161     1037
    Bourne Shell              4       16         0      709
    Python                    8       90        72      423
    CSS                       1       63        21      422
    PHP                       5       23        21      295
    Javascript                5       42        23      265
    JSON                      4        0         0      183
    XML                       1       11       171       72
    make                      1       13         0       15
    Bourne Again Shell        2       10         0       14
As you can see, the ratio between header LOC and source LOC is 0.777. A single average like this is not a good metric for anything by itself, but together with other metrics (e.g., comment lines) some fuzzy lines may be drawn to indicate different parameters and stages of development. More studies of well-known code bases would be needed to come up with a good heuristic.
But in the end, whatever measure you take, it only leads to an assumption, and that assumption may be wrong.
The header file may not be an indicator.
Header files usually contain function declarations -- the interface or instructions on how to call a function.
Functions in source files can be zero statements or hundreds of LOC. One cannot tell the number of statements or lines in a function by looking at a function declaration.
Many LOC counters include both header files and source files.
I'm pretty new to Fortran, as in started-learning-it-two-days-ago new. I started learning Fortran because I was getting into prime numbers, and I wrote a program in Python that was so fast, it could determine that 123098237 was prime in 0.1 seconds.
Impressive, I know.
What's not impressive is when I try to find out whether (2^127)-1, or 170141183460469231731687303715884105727, is a prime number (it is, by the way). The program ran so long, I just ended up having to stop it.
So, I started looking for some faster languages to write it in, so I wrote the program in C.
It was faster, but the problem of super large prime numbers came into play.
I was going to see if there was a solution, but then I heard through the grapevine that, if you're programming with numbers, Fortran is the fastest and best way to go. I vaguely remember my stepdad's old Fortran 77 textbooks from college, but they were basically useless to me because they talked about working with punch cards. So, I went online, got gfortran for Ubuntu 12.04 x86, got a couple of PDFs, and started learning. Before you know it I had made a program that received input and tested for primality, and it worked!
But, the same old problem came up, the number was too big.
And so, how do I handle big numbers like this with Fortran?
Fortran, like many other compiled languages, doesn't provide such large integers, or operations on them, out of the box. An up-to-date compiler ought to provide an integer type with 18 decimal digits, but no more than that.
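To see what you get out of the box, a minimal sketch (standard Fortran, no extensions assumed):

    program int_limits
      implicit none
      ! Request a kind with at least 18 decimal digits (usually 64 bits).
      integer, parameter :: i18 = selected_int_kind(18)
      integer(i18) :: big
      big = huge(big)
      ! Prints 9223372036854775807, about 9.2e18 -- nowhere near
      ! 2**127 - 1, which has 39 digits.
      print *, big
      ! Prints -1 if the compiler has no kind with 38 decimal digits
      ! (i.e., no 128-bit integers).
      print *, selected_int_kind(38)
    end program int_limits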
If you want to program, in Fortran, data types and operations for such big integers, use your favourite search engine on terms such as Fortran multiple precision. You could even search around here on SO for relevant questions and answers.
If you want to investigate the mathematics of such large integers stick with Python; you'll struggle to write software yourself which matches its speed of operations on multiple precision arithmetic. One of the reasons that Python takes a long time to determine the primality of a large number is that it takes a program, any program written in any language, a long time to determine the primality of a large number. If you dig around you're likely to find that the relevant Python routines actually call code written in C or something similarly low-level. Investigate, if you wish, the topic of the computational complexity of primality testing.
I'm not saying you won't be able to write code to outperform the Python intrinsics, just that you will find it a challenge.
Most languages provide certain standard intrinsic types which are fully adequate for solving standard scientific and engineering problems. You don't need 80 digit numbers to calculate the thickness of a bridge girder or plan a spacecraft orbit. It would be difficult to measure to that accuracy. In Fortran, if you want to do extra precision calculations (e.g., for number theory) you need to look to libraries that augment the language, e.g., mpfun90 at http://crd-legacy.lbl.gov/~dhbailey/mpdist/ or fmlib at http://myweb.lmu.edu/dmsmith/fmlib.html
I'll guess that your algorithm is trial division. If that's true, you need a better algorithm; the implementation language won't matter.
Pseudocode for the Miller-Rabin primality test is shown below. It's probabilistic, but you can reduce the chance of error by increasing the k parameter; beyond about k=25 there is no practical benefit:
    function isPrime(n, k=5)
        if n < 2 then return False
        # trial division by the primes below 30
        for p in [2,3,5,7,11,13,17,19,23,29]
            if n % p == 0 then return n == p
        # write n-1 as d * 2^s with d odd
        s, d = 0, n-1
        while d % 2 == 0
            s, d = s+1, d/2
        # k rounds, each with a fresh random witness
        for i from 1 to k
            x = powerMod(randint(2, n-1), d, n)
            if x == 1 or x == n-1 then next i
            for r from 1 to s
                x = (x * x) % n
                if x == 1 then return False    # nontrivial square root of 1
                if x == n-1 then next i
            return False
        return True
I'll leave it to you to translate that to Fortran or some other language; if you're programming in C, there is a library called GMP that is frequently used for handling very large numbers, and a function like the one shown above is built in to that library. It's very fast; even numbers that are hundreds of digits long should be classified as prime or composite almost instantly.
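If you do go the Fortran route, the one non-obvious building block is powerMod. A minimal sketch with standard 64-bit integers is below; note that it is only safe while n*n fits in 64 bits (n below roughly 3e9), so for 39-digit numbers like 2**127 - 1 you would still need a multiple-precision library underneath:

    module powermod_mod
      use iso_fortran_env, only: int64
      implicit none
    contains
      ! Computes mod(base**expo, n) by binary exponentiation.
      function power_mod(base, expo, n) result(r)
        integer(int64), intent(in) :: base, expo, n
        integer(int64) :: r, b, e
        r = 1_int64
        b = mod(base, n)
        e = expo
        do while (e > 0_int64)
          if (mod(e, 2_int64) == 1_int64) r = mod(r * b, n)
          b = mod(b * b, n)
          e = e / 2_int64
        end do
      end function power_mod
    end module powermod_mod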
If you want to be certain of the primality of a number, there are other algorithms that can actually provide a proof of primality. But they are much more complicated, and much slower.
You might be interested in the essay Programming with Prime Numbers at my blog.
On page 2 of The Art of Computer Programming, Volume 1, Fascicle 1, Knuth (2005) writes, "The same number may also be obtained in a simpler way by taking Roman Numerals."
This is part of Knuth's humorous explanation of the identifying number of the MMIX Computer. The number 2009 is the average of the identifying numbers of 14 other computers. He goes on to say that we can also obtain 2009 by "taking Roman Numerals." How?
I have tried taking the Roman Numerals from the names of the 14 other computers. The sum exceeds 2009 and is far less than 28,126, so neither the sum nor the average work. Knuth could just mean to take the Roman Numerals of MMIX, and if that's it, then fine. Is there something else though? I would love to know.
P.S.
Moderators, this question might not meet SO standards. In that case, please teach me where or how else to ask it, so I can better meet the community's expectations.
References
Knuth, D. E. (2005). The art of computer programming: Volume 1, Fascicle 1: MMIX, a RISC computer for the new millennium. Upper Saddle River, NJ: Addison-Wesley.
I would like to perform basic operations on numbers 270 digits long. I was recommended Matt McCutchen's BigInteger library, but I am told it is limited by how much memory your computer has, and mine has 2.87 GB of usable RAM. I want to perform things like division, multiplication, etc. Any advice on what I can use? I don't yet know whether my computer's memory will be enough.
270 digits is tiny, relatively speaking: at log2(10), about 3.32 bits per digit, it's under 900 bits. Your computer routinely deals with 2048-bit numbers during SSL handshakes.
BigInteger should work fine. You may also want to check out libgmp (the GNU Multi-Precision library).
You'll be fine - 270 digits is not that much in the grand scheme of things.
Try it. 270 digits is nothing. 4096-bit cryptography operates on 1233-digit numbers (granted, using modular arithmetic, so values never grow larger than 1233 digits) without breaking a sweat.