Passing a noncontiguous array section in Fortran

I am using the Intel Fortran compiler and Intel MKL for a performance check. I am passing some array sections to the Fortran 77 interface with calls like
call dgemm( transa, transb, sz_s, P, P, &
            a, Ts_tilde, &
            sz_s, R_alpha, P, b, tr(:sz_s,:), sz_s )
As is evident, tr(:sz_s,:) is not contiguous in memory; the Fortran 77 interface expects a contiguous block and creates a temporary for it.
What I was wondering is whether it would make a difference if I created the temporary array for tr explicitly in my code and copied the data back and forth before and after the operation, or whether that would perform the same as the temporary the compiler creates. I would guess the compiler is always at least as efficient.
And of course, any further suggestions for eliminating these temporaries are welcome.
One more point: if I use the Fortran 95 interface of the library with a similar call on a simpler test problem, apparently no warning is issued about the creation of a temporary. I then read in the MKL manual that the Fortran 95 interface uses assumed-shape arrays, which explains why temporaries are not created.
However, at that point I cannot seem to use some of the support functions, such as the timing routines.
Namely, Intel MKL has some timing support functions, but if I use them through the mkl_service module as below, I get the error 'This name does not have a type, and must have an explicit type' for dsecnd. Any idea about this problem is also welcome. A simple example is given as
program dgemm95_test
! some modules for Fortran 95 interface
use mkl_service
use mkl95_precision
use mkl95_blas
!
implicit none
!
double precision, dimension(4,3) :: a
double precision, dimension(6,4) :: b
double precision, dimension(5,5) :: r ! result array
double precision, dimension(3,2) :: dummy_b
!
character(len=1) :: transa
character(len=1) :: transb
!
double precision :: alpha, beta, t1, t2, t
integer :: sz1, sz2
! initialize some variables
alpha = 1.0
beta = 0.0
a = 2.3
b = 4.5
r = 0.0
transa = 'n'
transb = 'n'
dummy_b = 0.0
! Fortran 95 interface
t1 = dsecnd()
call gemm( a, b(4:6,1:3:2), r(2:5,3:4),&
transa, transb, alpha, beta )
t2 = dsecnd()
!
write(*,*) r
dummy_b = r(2:4,4:5)
!
end program dgemm95_test

The temporary is absolutely necessary when passing your array section to an assumed size array dummy argument, which the old routines use, because the array section is not contiguous in memory.
You can of course make your own temporary arrays. Whether it will be faster or not depends on many factors. Among others, the important thing is whether the temporary is allocated on the stack or on the heap. The Intel Fortran compiler is capable of both; there are compiler switches to control the behavior (-heap-arrays n), and it can depend on the array size. Stack allocation is much faster and it is usually the default. Automatic arrays, which you might use for your own temporary, are allocated on the stack by default too. Be careful with large arrays on the stack: you can easily overflow it and cause a crash.
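For illustration, here is a minimal sketch (not a drop-in replacement) of an explicit contiguous temporary for the C argument, reusing the variable names from the question:
double precision, allocatable :: tr_tmp(:,:)
!
allocate( tr_tmp(sz_s, size(tr,2)) )
tr_tmp = tr(:sz_s,:)                 ! copy-in; only needed because beta (b here) may be nonzero
call dgemm( transa, transb, sz_s, P, P, &
            a, Ts_tilde, sz_s, R_alpha, P, &
            b, tr_tmp, sz_s )
tr(:sz_s,:) = tr_tmp                 ! copy the result back
deallocate( tr_tmp )
An automatic array declared as double precision :: tr_tmp(sz_s, size(tr,2)) would typically end up on the stack instead, with the caveats about stack size mentioned above.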
I would suggest you run a performance test and use the simpler variant if it is not too slow. That will probably be the Fortran 95 interface, but you should really measure the times.
As for the timing, the MKL manual page for second()/dsecnd() states you must include mkl_lapack.fi and does not mention any Fortran 95 interface. You could also get away with declaring dsecnd as an external double precision function, but I would use the include. Or use system_clock(), which is portable, standard Fortran 95.
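For reference, a minimal sketch of the system_clock() alternative (only the timing calls change with respect to the program in the question):
integer :: count0, count1, count_rate
double precision :: elapsed
!
call system_clock(count0, count_rate)
call gemm( a, b(4:6,1:3:2), r(2:5,3:4), transa, transb, alpha, beta )
call system_clock(count1)
elapsed = dble(count1 - count0) / dble(count_rate)
write(*,*) 'elapsed seconds: ', elapsed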

Related

Is a matrix automatically deallocated at the end? [duplicate]

I am interested in the difference between alloc_array and automatic_array in the following extract:
subroutine mysub(n)
integer, intent(in) :: n
integer :: automatic_array(n)
integer, allocatable :: alloc_array(:)
allocate(alloc_array(n))
...[code]...
I am familiar enough with the basics of allocation (not so much with advanced techniques) to know that allocation allows you to change the size of the array in the middle of the code (as pointed out in this question), but I am interested in the case where you don't need to change the size of the array; the arrays might be passed on to other subroutines for operation, but the only purpose of both variables, in the code and in any subroutine, is to hold the data of an array of dimension n (and maybe change the data, but not the size).
(1) Is there any difference in memory usage? I am not an expert in low-level procedures, but I have a slight idea of how they matter and how they can impact higher-level programming (the kind of experience I'm talking about: once, trying to run a big code in Fortran, I was getting an error I didn't understand, and the sysadmin told me "oh, yeah, you are probably saturating the stack; try adding this line in your running script"; anything that gives me insight into how to consider these things when actually coding, instead of having to patch them later, is welcome). I've been told that it might depend on many other things, like the compiler or the architecture, but I interpreted from those responses that people were not completely sure exactly how. Is it so absolutely dependent on a multitude of factors, or is there a default/intended behavior in the coding that may then be overridden by optional compile flags or system preferences?
(2) Would the subroutines have different interface needs? Again, I am not an expert, but it has happened to me before that, because of the way I declared a subroutine's variables, I ended up having to put the subroutine in a module. I've been given to understand this may vary depending on whether I use things that are specific to allocatable variables. I am thinking about the case in which everything I do with the variables can be done both with allocatables and with automatics, not intentionally using anything specific to allocatables (other than allocation before use, that is).
Finally, in case this is of use: the reason I am asking is that we are developing in a group and we have recently noticed that different people use these two declarations in different ways, and we needed to determine whether this can be left to personal preference or whether there are reasons why it might be a good idea to set a clear criterion (and how to set it). I don't need extremely detailed answers; I am trying to determine whether this is something I should be researching carefully in order to be careful about how we use it, and toward which aspects that research should be directed.
Though I would be interested to know of "interesting tricks" that can be done with allocation but are not directly related to the need for size variability, I am leaving those for a possible future follow-up question and focusing here on the strictly functional differences (meaning: what I am explicitly telling compilers to do with my code). The two items I mentioned are the ones I could come up with from previous experience, but please do mention any other important ones I am missing and should consider.
Because gfortran or ifort + Linux (x86_64) are among the most popular combinations used for HPC, I made some performance comparisons between local allocatable and automatic arrays for these combinations. The CPU used is a Xeon E5-2650 v2 @ 2.60 GHz, and the compilers are gfortran 4.8.2 and ifort 14.0. The test program is as follows.
In test.f90:
!------------------------------------------------------------------------
subroutine use_automatic( n )
integer :: n
integer :: a( n ) !! local automatic array (with unknown size at compile-time)
integer :: i
do i = 1, n
a( i ) = i
enddo
call sub( a )
end
!------------------------------------------------------------------------
subroutine use_alloc( n )
integer :: n
integer, allocatable :: a( : ) !! local allocatable array
integer :: i
allocate( a( n ) )
do i = 1, n
a( i ) = i
enddo
call sub( a )
deallocate( a ) !! not necessary for modern Fortran but for clarity
end
!------------------------------------------------------------------------
program main
implicit none
integer :: i, nsizemax, nsize, nloop, foo
common /dummy/ foo
nloop = 10**7
nsizemax = 10
do i = 1, nloop
nsize = mod( i, nsizemax ) + 1
call use_automatic( nsize )
! call use_alloc( nsize )
enddo
print *, "foo = ", foo !! to check if sub() is really called
end
In sub.f90:
!------------------------------------------------------------------------
subroutine sub( a )
integer a( * )
integer foo
common /dummy/ foo
foo = a( 1 )
end
In the above program, I tried to prevent compiler optimizations from eliminating a(:) itself (i.e., turning the routine into a no-op) by placing sub() in a different file and keeping the interface implicit. First, I compiled the program using gfortran as
gfortran -O3 test.f90 sub.f90
and tested different values of nsizemax while keeping nloop = 10^7. The result is in the following table (time is in sec, measured several times by the time command).
nsizemax use_automatic() use_alloc()
10 0.30 0.31 # average result
50 0.48 0.47
500 1.0 0.90
5000 4.3 4.2
100000 75.6 75.7
So the overall timing seems almost the same for two calls when -O3 is used (but see Edit for different options). Next, I compiled with ifort as
[O3] ifort -O3 test.f90 sub.f90
or
[O3h] ifort -O3 -heap-arrays test.f90 sub.f90
In the former case the automatic array is stored on the stack, while when -heap-arrays is attached the array is stored on the heap. The obtained result is
nsizemax   use_automatic()    use_alloc()
           [O3]     [O3h]     [O3]     [O3h]
10         0.064    0.39      0.48     0.48
50         0.094    0.56      0.65     0.66
500        0.45     1.03      1.12     1.12
5000       3.8      4.4       4.4      4.4
100000     74.5     75.3      76.5     75.5
So for ifort, the use of automatic arrays seems beneficial when relatively small arrays are mainly used. On the other hand, gfortran -O3 shows no difference because both arrays are treated the same way (see Edit for more details).
Additional comparison:
Below is the result for Oracle Fortran compiler 12.4 for Linux (used with f90 -O3). The overall trend seems similar; automatic arrays are faster for small n, indicating the internal use of stack.
nsizemax use_automatic() use_alloc()
10 0.16 0.45
50 0.17 0.62
500 0.37 0.97
5000 2.04 2.67
100000 65.6 65.7
Edit
Thanks to Vladimir's comment, it has turned out that gfortran -O3 puts automatic arrays (with size unknown at compile time) on the heap. This explains why use_automatic() and use_alloc() did not show any difference above. So I made another comparison between different options below:
[O3] gfortran -O3
[O5] gfortran -O5
[O3s] gfortran -O3 -fstack-arrays
[Of] gfortran -Ofast # this includes -fstack-arrays
Here, -fstack-arrays means that the compiler puts all local arrays with unknown size on the stack. Note that this flag is enabled by default with -Ofast. The obtained result is
nsizemax use_automatic() use_alloc()
[Of] [O3s] [O5] [O3] [Of] [O3s] [O5] [O3]
10 0.087 0.087 0.29 0.29 0.29 0.29 0.29 0.29
50 0.15 0.15 0.43 0.43 0.45 0.44 0.44 0.45
500 0.57 0.56 0.84 0.84 0.92 0.92 0.92 0.92
5000 3.9 3.9 4.1 4.1 4.2 4.2 4.2 4.2
100000 75.1 75.0 75.6 75.6 75.6 75.3 75.7 76.0
where the average of ten measurements is shown. This table demonstrates that if -fstack-arrays is included, the execution time for small n becomes shorter. This trend is consistent with the results obtained for ifort above.
It should be mentioned, however, that the above comparison probably corresponds to the "best-case" scenario that highlights the difference between them, so the timing difference can be much smaller in practice. For example, I have compared the timing for the above options by using some other program (involving both small and large arrays), and the results were not much affected by the stack options. Also the result should depend on machine architecture as well as compilers, of course. So your mileage may vary.
For the sake of clarity, I'll briefly mention terminology. The two arrays are both local variables and arrays of rank 1.
alloc_array is an allocatable array;
automatic_array is an explicit-shape automatic object.
Being local variables their scope is that of the procedure. Automatic arrays and unsaved allocatable arrays come to an end when execution of the procedure completes (with the allocatable array being deallocated); automatic objects cannot be saved and saved allocatable objects are not deallocated on completion of execution.
Again, as in the linked question, after the allocation statement both arrays are of size n. These are still two very different things. Of course, the allocatable array can have its allocation status changed and its allocation moved. I'll leave both of those (mostly) out of the scope of this answer. An allocatable array, of course, doesn't have to have these things changed once it's been allocated.
Memory usage
What was partly contentious about a previous revision of the question is how ill-defined the concept of memory usage is. Fortran, as a language definition, tells us that both arrays come to be the same size, have the same storage layout, and are both contiguous. Beyond that, much of it comes down to terms you'll hear a lot: implementation specific and processor dependent.
In a comment you expressed interest in ifort. So that I don't wander too far, I'll stick to that one compiler. Other compilers have similar concepts, albeit with different names and options.
Often, ifort will place automatic objects and array temporaries onto the stack. There is a (default) compiler option -no-heap-arrays described as having the effect
The compiler puts automatic arrays and temporary arrays in the stack storage area.
Using the alternative option -heap-arrays allows one to control that slightly:
This option puts automatic arrays and arrays created for temporary computations on the heap instead of the stack.
There is a possibility to control size thresholds for which heap/stack would be chosen (when that is known at compile-time):
If the compiler cannot determine the size at compile time, it always puts the automatic array on the heap.
As n isn't a constant, one would expect automatic_array to be on the heap with this option, regardless of the size specified. To determine the size, n, of the array at compile time, the compiler would potentially need to do quite a bit of code analysis, even if it is possible.
There's probably more to be said, but this answer would be far too long if I tried. One thing to note, however, is that automatic local objects and (post-Fortran 90) allocatable local objects can be expected not to leak memory.
Interface needs
There is nothing special about the interface requirements of the subroutine mysub: local variables have no impact on that. Any program unit calling that would be happy with an implicit interface. What you are asking about is how the two local arrays can be used.
This largely comes down to what uses the two arrays can be put to.
If the dummy argument of a second procedure has the allocatable attribute then only the allocatable array here can be passed to that procedure. It will also need to have an explicit interface. This is true whether or not the procedure changes the allocation.
Of course, both arrays could be passed as arguments to a dummy argument without the allocatable attribute and then we don't have different interface requirements.
Anyway, why would one want to pass an argument to an allocatable dummy when there will be no change in allocation status, etc.? There are good reasons:
there may be a code path in the procedure which does have an allocation change (controlled by a switch, say);
allocatable dummy arguments also pass bounds;
etc.,
This second one is more obvious if the subroutine had specification
subroutine mysub(n)
integer, intent(in) :: n
integer :: automatic_array(2:n+1)
integer, allocatable :: alloc_array(:)
allocate(alloc_array(2:n+1))
Finally, an automatic object has quite strict conditions on its size. n here is clearly allowed, but things don't have to get much more complicated before allocation is the only plausible way, depending on how much one wants to play with block constructs.
Also, taking a comment from IanH: if we have a very large n, the automatic object is likely to crash and burn. With the allocatable array, one can use the stat= option to come to some amicable agreement with the run-time.
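A minimal sketch of that stat= handshake, reusing the names from the question:
integer :: istat
!
allocate(alloc_array(2:n+1), stat=istat)
if (istat /= 0) then
   write(*,*) 'allocation of ', n, ' elements failed, stat = ', istat
   ! free something else, shrink the problem, or stop with a useful message
   stop
end if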

Scope and dealing with subroutines with many inputs

Some of my Fortran subroutines have a gigantic number of inputs passed to them, sometimes even 30 or 40. The reason for this is twofold: first, those subroutines call many clearly related subroutines which need some of those variables as input, and second, we avoid defining global variables, so the solution seems to be to pass every variable to each subroutine explicitly every time.
This seems unacceptable to me, but I don't really have a solution for it, and I am not 100% sure that it is a problem in the first place, perhaps this is the right way to do things in this language.
My question is then: is this a problem? If it is, is there a better way to manage scope in this language, without necessarily introducing objects?
I can see why the designers want to avoid global variables. I have to work with a code that took the opposite approach: almost no arguments, everything in global state in various modules, and it is terrible, no matter how much they use the only clause in the use statements.
We can safely say that this amount of arguments (say 30) is way too large. All code style guidelines will probably agree with that. It is often a bit unpleasant to work with the many arguments libraries like LAPACK require, and that is nowhere close to 30.
There are several ways in which Fortran 90 and later versions can reduce the number of arguments.
Firstly, you can couple logically related variables into a derived type
type particle
integer :: species
real :: mass
real :: x, y, z
real :: vx, vy, vz
...
end type
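A minimal sketch of how this shortens argument lists (the push routine and the dt argument are made up for illustration):
subroutine push(p, dt)
   type(particle), intent(inout) :: p
   real, intent(in) :: dt
   p%x = p%x + p%vx*dt
   p%y = p%y + p%vy*dt
   p%z = p%z + p%vz*dt
end subroutine push
One argument now carries everything that describes the particle.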
Secondly, by using assumed-shape arrays you can avoid passing the array dimensions. This allows the modern LAPACK interfaces (both the Netlib and the MKL ones) to have a significantly smaller number of arguments. For example:
subroutine sub(A, NX, NY, NZ)
integer :: NX, NY, NZ
real :: A(NX, NY, NZ)
vs.
subroutine sub(A)
real :: A(:,:,:)
This change requires an explicit interface for the procedure, so in practice the procedures have to be moved into modules.
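A minimal sketch of such a module wrapper (the module name is made up):
module work_mod
   implicit none
contains
   subroutine sub(A)
      real, intent(inout) :: A(:,:,:)
      ! the extents are available as size(A,1), size(A,2), size(A,3)
   end subroutine sub
end module work_mod
A caller then only needs use work_mod followed by call sub(A); the explicit interface is provided automatically.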
Both of these are rather significant changes and require significant refactoring effort for large legacy codes.

FFTW: Trouble with real-to-complex and complex-to-real 2D transforms

As the title states, I'm using FFTW (version 3.2.2) with Fortran 90/95 to perform a 2D FFT of real data (actually a field of random numbers). I think the forward step is working (at least I am getting some output). However, I wanted to check everything by doing the IFFT to see if I can reconstruct the original input. Unfortunately, when I call the complex-to-real routine, nothing happens and I obtain no error output, so I'm a bit confused. Here are some code snippets:
implicit none
include "fftw3.f"
! - im=501, jm=401, and lm=60
real*8 :: u(im,jm,lm),recov(im,jm,lm)
complex*8 :: cu(1+im/2,jm)
integer*8 :: planf,planb
real*8 :: dv
! - Generate array of random numbers
dv=4.0
call random_number(u)
u=u*dv
recov=0.0
k=30
! - Forward step (FFT)
call dfftw_plan_dft_r2c_2d(planf,im,jm,u(:,:,k),cu,FFTW_ESTIMATE)
call dfftw_execute_dft_r2c(planf,u(:,:,k),cu)
call dfftw_destroy_plan(planf)
! - Backward step (IFFT)
call dfftw_plan_dft_c2r_2d(planb,im,jm,cu,recov(:,:,k),FFTW_ESTIMATE)
call dfftw_execute_dft_c2r(planb,cu,recov(:,:,k))
call dfftw_destroy_plan(planb)
The above forward step seems to work (r2c) but the backward step does not seem to work. I checked this by differencing the u and recov arrays - which ended up not being zero. Additionally the max and min values of the recov array were both zero, which seems to indicate that nothing was changed.
I've looked around the FFTW documentation and based my implementation on the following page http://www.fftw.org/fftw3_doc/Fortran-Examples.html#Fortran-Examples . I am wondering if the problem is related to indexing; at least that's the direction I am leaning. Anyway, if anyone could offer some help, that would be wonderful!
Thanks!
Not sure if this is the root of all troubles here, but the way you declare variables may be the culprit.
For most compilers (this is apparently not even standard), Complex*8 is an old syntax for single precision: the complex variable occupies a total of 8 bytes, shared between the real and the imaginary part (4+4 bytes).
[Edit 1 following Vladimir F's comment to my answer, see his link for details:] In my experience (i.e. the systems/compilers I have used), Complex(Kind=8) corresponds to the declaration of a double precision complex number (a real and an imaginary part, both of which occupy 8 bytes).
On any system/compiler, Complex(Kind=Kind(0.d0)) should declare a double precision complex.
In short, your complex array does not have the right size. Replace occurrences of Real*8 and Complex*8 by Real(kind=8) and Complex(Kind=8) (or Complex(Kind=kind(0.d0)) for better portability), respectively.
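A minimal sketch of the corrected declarations for the snippet in the question:
integer, parameter :: dp = kind(0.d0)
real(dp)    :: u(im,jm,lm), recov(im,jm,lm)
complex(dp) :: cu(1+im/2,jm)   ! now a genuine double precision complex array
integer(8)  :: planf, planb    ! FFTW plan handles, as before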

Stack overflow in Fortran 90

I have written a fairly large program in Fortran 90. It has been working beautifully for quite a while, but today I tried to step it up a notch and increase the problem size (it is a research non-standard FE-solver, if that helps anyone...) Now I get the "stack overflow" error message and naturally the program terminates without giving me anything useful to work with.
The program starts with setting up all relevant arrays and matrices, and after that is done it prints a few lines of stats regarding this to a log-file. Even with my new, larger problem, this works fine (albeit a little slow), but then it fails as the "number crunching" gets going.
What confuses me is that everything at that point is already allocated (and that worked without errors). I'm not entirely sure what the stack is (Wikipedia and several threads here didn't help much, since I have only a quite basic knowledge of the "behind the scenes" workings of a computer).
Assume that I for instance have some arrays initialized as:
INTEGER,DIMENSION(64) :: IA
REAL(8),DIMENSION(:,:),ALLOCATABLE :: AA, BB
which after some initialization routines (i.e. read input from file and such) are allocated as (I store some size-integers for easier passing to subroutines in IA of fixed size):
ALLOCATE( AA(N1,N2) , BB(N1,N2) )
IA(1) = N1
IA(2) = N2
This is basically what happens in the initial portion, and so far so good. But when I then call a subroutine
CALL ROUTINE_ONE(AA,BB,IA)
And the routine looks like (nothing fancy):
SUBROUTINE ROUTINE_ONE(AA,BB,IA)
IMPLICIT NONE
INTEGER,DIMENSION(64) :: IA
REAL(8),DIMENSION(IA(1),IA(2)) :: AA, BB
...
do lots of other stuff
...
END SUBROUTINE ROUTINE_ONE
Now I get an error! The output to the screen says:
forrtl: severe (170): Program Exception - stack overflow
However, when I run the program with the debugger it breaks at line 419 in a file called winsig.c (not my file, but probably part of the compiler?). It seems to be part of a routine called sigreterror: and it is the default case that has been invoked, returning the text Invalid signal or error. There is a comment line attached to this which strangely says /* should never happen, but compiler can't tell */ ...?
So I guess my question is, why does this happen and what is actually happening? I thought that as long as I can allocate all the relevant memory I should be fine? Does the call to the subroutine make copies of the arguments, or just pointers to them? If the answer is copies then I can see where the problem might be, and if so: any ideas on how to get around it?
The problem I try to solve is big, but not insane in any way. Standard FE-solvers can handle bigger problems than my current one. I run the program on a Dell PowerEdge 1850 and the OS is Microsoft Server 2008 R2 Enterprise. According to systeminfo at the cmd prompt I have 8GB of physical memory and almost 16GB virtual. As far as I understand the total of all my arrays and matrices should not add up to more than maybe 100MB - about 5.5M integer(4) and 2.5M real(8) (which according to me should be only about 44MB, but let's be fair and add another 50MB for overhead).
I use the Intel Fortran compiler integrated with Microsoft Visual Studio 2008.
Adding some actual source code to clarify a bit
! Update continuum state
CALL UpdateContinuumState(iTask,iArray,posc,dof,dof_k,nodedof,elm,&
bmtrx,detjac,w,mtrlprops,demtrx,dt,stress,strain,effstrain,&
effstress,aa,fi,errmsg)
is the actual call to the routine. The big arrays are posc, bmtrx and aa; all others are at least an order of magnitude smaller (if not more). posc is INTEGER(4), and bmtrx and aa are REAL(8).
SUBROUTINE UpdateContinuumState(iTask,iArray,posc,dof,dof_k,nodedof,elm,bmtrx,&
detjac,w,mtrlprops,demtrx,dt,stress,strain,effstrain,&
effstress,aa,fi,errmsg)
IMPLICIT NONE
!I/O
INTEGER(4) :: iTask, errmsg
INTEGER(4) :: iArray(64)
INTEGER(4),DIMENSION(iArray(15),iArray(15),iArray(5)) :: posc
INTEGER(4),DIMENSION(iArray(22),iArray(21)+1) :: nodedof
INTEGER(4),DIMENSION(iArray(29),iArray(3)+2) :: elm
REAL(8),DIMENSION(iArray(14)) :: dof, dof_k
REAL(8),DIMENSION(iArray(12)*iArray(17),iArray(15)*iArray(5)) :: bmtrx
REAL(8),DIMENSION(iArray(5)*iArray(17)) :: detjac
REAL(8),DIMENSION(iArray(17)) :: w
REAL(8),DIMENSION(iArray(23),iArray(19)) :: mtrlprops
REAL(8),DIMENSION(iArray(8),iArray(8),iArray(23)) :: demtrx
REAL(8) :: dt
REAL(8),DIMENSION(2,iArray(12)*iArray(17)*iArray(5)) :: stress
REAL(8),DIMENSION(iArray(12)*iArray(17)*iArray(5)) :: strain
REAL(8),DIMENSION(2,iArray(17)*iArray(5)) :: effstrain, effstress
REAL(8),DIMENSION(iArray(25)) :: aa
REAL(8),DIMENSION(iArray(14)) :: fi
!Locals
INTEGER(4) :: i, e, mtrl, i1, i2, j1, j2, k1, k2, dim, planetype, elmnodes, &
Nec, elmpnodes, Ndisp, Nstr, Ncomp, Ngpt, Ndofelm
INTEGER(4),DIMENSION(iArray(15)) :: doflist
REAL(8),DIMENSION(iArray(12)*iArray(17),iArray(15)) :: belm
REAL(8),DIMENSION(iArray(17)) :: jelm
REAL(8),DIMENSION(iArray(12)*iArray(17)*iArray(5)) :: dstrain
REAL(8),DIMENSION(iArray(12)*iArray(17)) :: s
REAL(8),DIMENSION(iArray(17)) :: ep, es, dep
REAL(8),DIMENSION(iArray(15),iArray(15)) :: kelm
REAL(8),DIMENSION(iArray(15)) :: felm
dim = iArray(1)
...
And it fails before the last line above.
As per steabert's request, I'll just summarize the conversation in the comments here where it's a bit more visible, even though M.S.B.'s answer already gets right to the nub of the problem.
In technical programming, where procedures often have large local arrays for intermediate computation, this happens a lot. Local variables are generally stored on the stack, which is typically (and quite reasonably) a small fraction of overall system memory -- usually of order 10MB or so. When the local variable sizes exceed the stack size, you see exactly the symptoms described here -- a stack overflow occurring after the call to the relevant subroutine but before its first executable statement.
So when this problem happens, the best thing to do is to find the relevant large local variables, and decide what to do. In this case, at least the variables belm and dstrain were getting quite sizable.
Once the variables are located, and you've confirmed that's the problem, there are a few options. As MSB points out, if you can make your arrays smaller, that's one option. Alternatively, you can make the stack size larger; under Linux, that's done with ulimit -s [newsize]. That really just postpones the problem, though, and you have to do something different on Windows machines.
The other class of ways to avoid this problem is not to put the large data on the stack, but in the rest of memory. You can do that by giving the arrays the save attribute (in C, static); this puts the variable in static storage rather than on the stack and makes the values persistent between calls. The downside is that this potentially changes the behavior of the subroutine, means the subroutine can't be used recursively, and similarly is not thread-safe (if you're ever in a position where multiple threads enter the routine simultaneously, they'll each see the same copy of the local variable and potentially overwrite each other's results). The upside is that it's easy and very portable -- it should work everywhere. However, this only works with fixed-size local variables; if the temporary arrays have sizes that depend on the inputs, you can't do this (since there would no longer be a single variable to save; it could be a different size every time the procedure is called).
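A minimal sketch of the save approach (the routine name and the fixed bound are invented for illustration):
subroutine smooth(n, x)
   implicit none
   integer, intent(in)    :: n
   real(8), intent(inout) :: x(n)
   real(8), save          :: work(100000)  ! fixed size; lives in static storage, not on the stack
   ! ... use work(1:n) as scratch, assuming n never exceeds 100000 ...
end subroutine smooth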
There are compiler-specific options which put all arrays (or all arrays larger than some given size) on the heap rather than on the stack; every Fortran compiler I know has an option for this. For ifort, used in the OP's post, it's -heap-arrays on Linux, or /heap-arrays on Windows. For gfortran, this may actually be the default. This is good for making sure you know what's going on, but it means you need different incantations for every compiler to make sure your code works.
Finally, you can make the offending arrays allocatable. Allocated memory goes on the heap; but the variable which points to them is on the stack, so you get the benefits of both approaches. Also, this is completely standard fortran and so totally portable. The downside is that it requires code changes. Also, the allocation process can take nontrivial amounts of time; so if you're going to be calling the routine zillions of times, you may notice this slows things down slightly. (This possible performance regression is easy to fix, though; if you'll be calling it zillions of times with the same size arrays, you can have an optional argument to pass in a pre-allocated local array and use that instead, so that you only allocate/deallocate once).
Allocating/deallocating each time would look like:
SUBROUTINE UpdateContinuumState(iTask,iArray,posc,dof,dof_k,nodedof,elm,bmtrx,&
detjac,w,mtrlprops,demtrx,dt,stress,strain,effstrain,&
effstress,aa,fi,errmsg)
IMPLICIT NONE
!...arguments....
!Locals
!...
REAL(8),DIMENSION(:,:), allocatable :: belm
REAL(8),DIMENSION(:), allocatable :: dstrain
allocate( belm(iArray(12)*iArray(17),iArray(15)) )
allocate( dstrain(iArray(12)*iArray(17)*iArray(5)) )
!... work
deallocate(belm)
deallocate(dstrain)
Note that if the subroutine does a lot of work (e.g., takes seconds to execute), the overhead from a couple of allocates/deallocates should be negligible. If not, and you want to avoid the overhead, using the optional arguments for a preallocated workspace would look something like:
SUBROUTINE UpdateContinuumState(iTask,iArray,posc,dof,dof_k,nodedof,elm,bmtrx,&
detjac,w,mtrlprops,demtrx,dt,stress,strain,effstrain,&
effstress,aa,fi,errmsg,workbelm,workdstrain)
IMPLICIT NONE
!...arguments....
real(8),dimension(:,:), optional, target :: workbelm
real(8),dimension(:), optional, target :: workdstrain
!Locals
!...
REAL(8),DIMENSION(:,:), pointer :: belm
REAL(8),DIMENSION(:), pointer :: dstrain
if (present(workbelm)) then
belm => workbelm
else
allocate( belm(iArray(12)*iArray(17),iArray(15)) )
endif
if (present(workdstrain)) then
dstrain => workdstrain
else
allocate( dstrain(iArray(12)*iArray(17)*iArray(5)) )
endif
!... work
if (.not.(present(workbelm))) deallocate(belm)
if (.not.(present(workdstrain))) deallocate(dstrain)
Not all of the memory is created when the program starts. When you call the subroutine, the executable creates the memory that the subroutine needs for its local variables. Typically, arrays with simple declarations that are local to that subroutine -- neither allocatable nor pointer -- are allocated on the stack. You could simply have run out of stack space when you reached these declarations. You might have reached a 2GB limit on a 32-bit OS with some array. Sometimes executable statements implicitly create a temporary array on the stack.
Possible solutions: 1) make your arrays smaller (not attractive), 2) make the stack larger, 3) some compilers have options to switch from placing arrays on the stack to dynamically allocating them, similar to the method used for "allocate", 4) identify large arrays and make them allocatable.
The stack is the memory area where the information needed to return from a function, and the information locally defined in a function, is stored. So a stack overflow may indicate you have a function that calls another function, which in turn calls another function, etc.
I am not familiar with Fortran (anymore) but another cause might be that those functions declare tons of local variables, or at least variables that need a lot of place.
A last one: the stack is typically rather small, so it's not a priori relevant how much memory the machine has. It should be quite simple to instruct the linker to increase the stack size, at least if you are certain it's just a lack of space, and not a bug in your application.
Edit: do you use recursion in your program? Recursive calls can eat through the stack very quickly.
Edit: have a look at this: (emphasis mine)
On Windows, the stack space reserved for the program is set using the /Fn compiler option, where n is the number of bytes. Additionally, the stack reserve size can be specified through the Visual Studio IDE, which adds the Microsoft Linker option /STACK: to the linker command line. To set this, go to Property Pages > Configuration Properties > Linker > System > Stack Reserve Size. There you can specify the stack size in bytes, in either decimal or C-language notation. If not specified, the default stack size is 1MB.
The only problem I ran into with a similar test code, is the 2Gb allocation limit for 32-bit compilation. When I exceed it I get an error message on line 419 in winsig.c
Here is the test code
program FortranCon
implicit none
! Variables
INTEGER :: IA(64), S1
REAL(8), DIMENSION(:,:), ALLOCATABLE :: AA, BB
REAL(4) :: S2
INTEGER, PARAMETER :: N = 10960
IA(1)=N
IA(2)=N
ALLOCATE( AA(N,N), BB(N,N) )
AA(1:N,1:N) = 1D0
BB(1:N,1:N) = 2D0
CALL TEST(AA,BB,IA)
S1 = SIZEOF(AA) !Size of each array
S2 = 2*DBLE(S1)/1024/1024 !Total size for 2 arrays in Mb
WRITE (*,100) S2, ' Mb' ! When allocation reached 2Gb then
100 FORMAT (F8.1,A) ! exception occurs in Win32
DEALLOCATE( AA, BB )
end program FortranCon
SUBROUTINE TEST(AA,BB,IA)
IMPLICIT NONE
INTEGER, DIMENSION(64),INTENT(IN) :: IA
REAL(8), DIMENSION(IA(1),IA(2)),INTENT(INOUT) :: AA,BB
... !Do stuff with AA,BB
END SUBROUTINE
When N=10960 it runs ok showing 1832.9 Mb. With N=11960 it crashes. Of course when I compile with x64 it works ok. Each array has 8*N^2 bytes storage. I don't know if it helps but I recommend using the INTENT() keywords for the dummy variables.
Are you using some parallelization? This can be a problem with statically declared arrays. Try making all bigger arrays ALLOCATABLE; otherwise, they will be placed on the stack in auto-parallelized or OpenMP threads.
For me the issue was the stack reserve size. I went and changed the stack reserve size from 0 to 100000000 and recompiled the code. The code now runs smoothly.

Fortran 90 kind parameter

I am having trouble understanding Fortran 90's kind parameter. As far as I can tell, it does not determine the precision (i.e., float or double) of a variable, nor does it determine the type of a variable.
So, what does it determine and what exactly is it for?
The KIND of a variable is an integer label which tells the compiler which of its supported kinds it should use.
Beware that although it is common for the KIND parameter to be the same as the number of bytes stored in a variable of that KIND, it is not required by the Fortran standard.
That is, on a lot of systems,
REAL(KIND=4) :: xs ! 4 byte ieee float
REAL(KIND=8) :: xd ! 8 byte ieee float
REAL(KIND=16) :: xq ! 16 byte ieee float
but there may be compilers for example with:
REAL(KIND=1) :: XS ! 4 BYTE FLOAT
REAL(KIND=2) :: XD ! 8 BYTE FLOAT
REAL(KIND=3) :: XQ ! 16 BYTE FLOAT
Similarly for integer and logical types.
(If I went digging, I could probably find examples. Search the usenet group comp.lang.fortran for kind to find examples. The most informed discussion of Fortran occurs there, with some highly experienced people contributing.)
So, if you can't count on a particular kind value giving you the same data representation on different platforms, what do you do? That's what the intrinsic functions SELECTED_REAL_KIND and SELECTED_INT_KIND are for. Basically, you tell the function what sort of numbers you need to be able to represent, and it will return the kind you need to use.
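The integer analogue works the same way; a minimal sketch:
! ask for integers that can hold at least 9 and 18 decimal digits, respectively
integer, parameter :: i9  = selected_int_kind(9)    ! usually a 4-byte integer
integer, parameter :: i18 = selected_int_kind(18)   ! usually an 8-byte integer
integer(kind=i18) :: big_counter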
I usually use these kinds, as they usually give me 4 byte and 8 byte reals:
!--! specific precisions, usually same as real and double precision
integer, parameter :: r6 = selected_real_kind(6)
integer, parameter :: r15 = selected_real_kind(15)
So I might subsequently declare a variable as:
real(kind=r15) :: xd
Note that this may cause problems in mixed-language programs, where you need to specify exactly the number of bytes that variables occupy. If you need to make sure, there are enquiry intrinsics that will tell you about each kind, from which you can deduce the memory footprint of a variable, its precision, its exponent range and so on. Or, you can revert to the non-standard but commonplace real*4, real*8, etc. declaration style.
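A minimal sketch of those enquiry intrinsics:
program kind_info
   implicit none
   integer, parameter :: r15 = selected_real_kind(15)
   real(kind=r15) :: xd
   write(*,*) kind(xd)            ! the kind value itself
   write(*,*) precision(xd)       ! decimal digits of precision
   write(*,*) range(xd)           ! decimal exponent range
   write(*,*) huge(xd), tiny(xd)  ! largest / smallest positive normal values
end program kind_info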
When you start with a new compiler, it's worth looking at the compiler specific kind values so you know what you're dealing with. Search the net for kindfinder.f90 for a handy program that will tell you about the kinds available for a compiler.
I suggest using the Fortran 2008 kind constants INT8, INT16, INT32, INT64, REAL32, REAL64 and REAL128. These come from the intrinsic module ISO_FORTRAN_ENV (the module exists in Fortran 2003 and later; the kind constants were added in Fortran 2008). Raw kind numbers provide an inconsistent way of ensuring you always get a representation with the appropriate number of bits.
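A minimal sketch:
program env_kinds
   use, intrinsic :: iso_fortran_env, only: real32, real64, int32, int64
   implicit none
   real(real64)   :: x
   integer(int64) :: i
   x = 1.0_real64
   i = 123456789012_int64
   write(*,*) x, i
end program env_kinds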
Just expanding the other (very good) answers, specially Andrej Panjkov's answer:
The KIND of a variable is an integer label which tells the compiler
which of its supported kinds it should use.
Exactly. Even though, for all the numeric intrinsic types, the KIND parameter is used to specify the "model for the representation and behavior of numbers on a processor" (words from Section 16.5 of the standard), which in practice means their bit model, that is not the only thing a KIND parameter may represent.
A KIND parameter for a type is any variation in its nature, model or behavior that is available for the programmer to choose at compile time. For example, for the intrinsic character type, the kind parameter represents the character sets available on the processor (ASCII, UCS-4, ...).
You can even define your own model/behavior variations for your own derived types (from Fortran 2003 onwards). You can create a transform-matrix type and have a version with KIND=2 for 2D space (in which the underlying array would be 3x3) and KIND=3 for 3D space (with a 4x4 underlying array). Just remember that there is no automatic kind conversion for non-intrinsic types.
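A minimal sketch of such a kind parameter on a derived type (Fortran 2003 parameterized derived types; the transform type and its d parameter are invented here, and compiler support for kind type parameters varies):
type :: transform(d)
   integer, kind :: d = 3        ! kind type parameter, fixed at compile time
   real :: mat(d+1, d+1)         ! 3x3 when d = 2, 4x4 when d = 3
end type

type(transform(2)) :: t2d        ! 2D transform
type(transform(3)) :: t3d        ! 3D transform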
From the Portland Group Fortran Reference, the KIND parameter "specifies a precision for intrinsic data types." Thus, in the declarations
real(kind=4) :: float32
real(kind=8) :: float64
the variable float64 is declared as an 8-byte real (the old Fortran DOUBLE PRECISION) and the variable float32 is declared as a 4-byte real (the old Fortran REAL).
This is nice because it allows you to fix the precision of your variables independently of the compiler and machine you are running on. If you are running a computation that requires more precision than the traditional IEEE single-precision real (which, if you're taking a numerical analysis class, is very probable) but declare your variable as real :: myVar, you'll be fine if the compiler is set to default all real values to double precision; but changing the compiler options, or moving your code to a different machine with different default sizes for real and integer variables, will lead to some possibly nasty surprises (e.g. your iterative matrix solver blows up).
Fortran also includes some functions that will help pick a KIND parameter to be what you need - SELECTED_INT_KIND and SELECTED_REAL_KIND - but if you are just learning I wouldn't worry about those at this point.
Since you mentioned that you're learning Fortran as part of a class, you should also see this question on Fortran resources and maybe look at the reference manuals from the compiler suite that you are using (e.g. Portland Group or Intel) - these are usually freely available.
One of the uses of kind is to make sure that, on different machines or operating systems, the code truly uses the same precision and gives the same results, so the code is portable. E.g.,
integer, parameter :: r8 = selected_real_kind(15,9)
real(kind=r8) :: a
Now this variable a is always of kind r8, which is a true "double precision" (it occupies 64 bits of memory), no matter what machine/OS the code is running on.
Therefore, you can also write things like
a = 1.0_r8
and the _r8 suffix makes sure that the constant 1.0 is of kind r8.
To summarize other answers: the kind parameter specifies storage size (and thus indirectly, the precision) for intrinsic data types, such as integer and real.
However, the recommended way now is NOT to specify the kind value of variables in the source code; instead, use compiler options to specify the precision we want. For example, we write real :: abc in the code and then compile it with the option -fdefault-real-8 (for gfortran) to get 8-byte floating-point numbers. For ifort, the corresponding option is -r8.
Update:
It seems the Fortran experts here strongly object to the way recommended above. In spite of this, I still think it is a good practice that helps reduce the chance of introducing bugs into Fortran codes, because it guarantees that you use the same kind value throughout your program (the chance that you really need different kind values in different parts of a code is small) and thus avoids the frequently encountered bug of a kind mismatch between dummy and actual arguments in a procedure call.