I'm new to Fortran so I'm really lost on how to do this. I have a data file with 5 groups of 5000 values. My code reads in the first 5000 values and puts them in an array called flux, does some stuff, then cycles back through to read the next set of 5000 values (while deleting the values in the original flux array and storing the new ones).
I need to be able to save every value so that I can plot them all later. I was hoping to take the flux array values and append them to a different array, so that at the end of the loop I have one array with all 25,000 values in it. In Python this would be a one-liner, but from what I'm reading online it seems this is not so easy to do in Fortran.
Thanks in advance!
real, allocatable :: flux(:), s(:)

open(1, file='spec_1503-070_th45', status='old')
read(1,*) nf
read(1,*) (xlam(ij), ij = 1, nf)
read(1,'(25x,f10.1,12x,e10.3)') teff, grav
write(header,111) int(teff), alog10(grav)

do il = 100, 1000
   read(1,*,end=99) bmag, az, ax, ay
   read(1,*) (flux(ij), ij = 1, nf)
   fmax = 0.
   do ij = 1, nf
      if (flux(ij) .gt. fmax) fmax = flux(ij)
   enddo
   do ij = 1, nf
      flux(ij) = flux(ij)/fmax
   enddo
   s = [s, flux]
Appending to or resizing arrays in Fortran is as simple as declaring them as allocatable arrays:
integer, allocatable :: vecA(:), vecB(:), vecC(:)
character(*), parameter :: csv = "(*(g0,:,', '))"
vecA = [1,2,3]
vecB = [4,5,6]
vecC = [vecA, vecB]
write(*,csv) "vecA", vecA
write(*,csv) "vecB", vecB
write(*,csv) "vecC", vecC
vecC = [vecC, vecC(size(vecC):1:-1)]
write(*,csv) "vecC", vecC
end
vecA, 1, 2, 3
vecB, 4, 5, 6
vecC, 1, 2, 3, 4, 5, 6
vecC, 1, 2, 3, 4, 5, 6, 6, 5, 4, 3, 2, 1
The left-hand side is automatically resized to the proper length. Notice how the last assignment, vecC = [vecC, vecC(size(vecC):1:-1)], doubles the length of vecC by appending its own elements in reverse order. If you have performance-critical code, you will probably want to avoid such automatic reallocations, but that only becomes relevant when the code is called on the order of billions of times.
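Applied to the loop in the question, a minimal sketch of the same idea might look like the following (nf, nblocks and the flux values are just stand-ins for the file reads; the key parts are the zero-length initialization of s and the append inside the loop, which needs a compiler with Fortran 2003 allocatable assignment, i.e. any recent gfortran or ifort):

program append_demo
   implicit none
   ! sketch of the pattern from the question: nf values per block,
   ! appended to a growing array s (toy data instead of the file reads)
   integer, parameter :: nf = 5, nblocks = 3
   real, allocatable  :: flux(:), s(:)
   integer :: il

   allocate(flux(nf))
   s = [real ::]              ! zero-length allocatable array to start from

   do il = 1, nblocks
      flux = real(il)         ! stand-in for the values read from the file
      s = [s, flux]           ! append this block; s is reallocated automatically
   end do

   print *, size(s)           ! 15
   print *, s
end program append_demo

After the loop, s holds every value that was appended, in order, which you can then write out for plotting.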
Sorry for a seemingly stupid question. I was testing the computational efficiency of replacing do-loop operations on matrices with intrinsic functions. When I checked the matrix product results of the two methods, it confused me that the two outputs were not the same. Here is the simplified code I used:
program matmultest
integer,parameter::nx=64,ny=32,nz=16
real*8::mat1(nx,ny),mat2(ny,nz)
real*8::result1(nx,nz),result2(nx,nz),diff(nx,nz)
real*8::localsum
integer::i,j,m

do i=1,ny
   do j=1,nx
      mat1(j,i)=dble(j)/7d0+2.65d0*dble(i)
   enddo
enddo
do i=1,nz
   do j=1,ny
      mat2(j,i)=5d0*dble(j)-dble(i)*0.45d0
   enddo
enddo

do j=1,nz
   do i=1,nx
      localsum=0d0
      do m=1,ny
         localsum=localsum+mat1(i,m)*mat2(m,j)
      enddo
      result1(i,j)=localsum
   enddo
enddo

result2=matmul(mat1,mat2)
diff=result2-result1
print*,sum(abs(diff)),maxval(diff)
end program matmultest
And the result gives
1.6705598682165146E-008 5.8207660913467407E-011
The difference is non-zero for real*8, but was zero when I later tested with integers. I wonder whether this is due to a fault somewhere in my code, or whether MATMUL() only works with single-precision accuracy?
And the compiler I am using is GNU Fortran (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Thanks!
francescalus explained that reordering of operations causes these differences. Let's try to find out how it actually happened.
A few words about matrix product
Consider matrices A(n,p), B(p,q), C(n,q) and C = A*B.
The naive approach, a variant of which you used, involves the following nested loops:
c = 0
do i = 1, n
   do j = 1, q
      do k = 1, p
         c(i, j) = c(i, j) + a(i, k) * b(k, j)
      end do
   end do
end do
These loops can be executed in any of 6 orders, depending on the variable that you choose at each level. In the example above, the loop is named "ijk", and the other variants "ikj", "jik", etc. are all correct.
There is a speed difference, due to the memory cache: when the inner loop runs across contiguous memory elements, the loop is faster. Those are the jki and kji cases.
Indeed, since Fortran matrices are stored in column-major order, if the innermost loop runs on i, then in the instruction c(i, j) = c(i, j) + a(i, k) * b(k, j) the value b(k, j) is constant, and the operation is equivalent to v(i) = v(i) + x * u(i), where the elements of the vectors v and u are contiguous in memory.
However, regarding the order of operations, there shouldn't be a difference: you can check for yourself that all elements of C are computed in the same order. At least at the "higher level": the compiler might optimize things differently, and it's where it becomes really interesting.
What about MATMUL? I believe it's usually a naive matrix product, based on the nested loops above, say a jki loop.
There are other ways to multiply matrices, involving the Strassen algorithm to improve the algorithmic complexity, or blocking (i.e. computing products of submatrices) to improve cache use. Other techniques that could change the result are OpenMP (i.e. multithreading) or FMA instructions. But we are not going to delve into those methods here; it's really only about the nested loops. If you are interested, there are many resources online.
A few words about optimization
Three remarks first:
On a processor without SIMD instructions, you would get the same result as MATMUL (i.e. you would print zero in the end).
If you had implemented the loops as above, you would also get the same result. There is a tiny but significant difference in your code.
If you had implemented the loops as a subroutine, you would also get the same result. Here I suspect the compiler optimizer is doing some reordering, as I can't reproduce your "accumulator" code with a subroutine, at least with Intel Fortran.
Here is your implementation:
do i = 1, n
   do j = 1, q
      s = 0
      do k = 1, p
         s = s + a(i, k) * b(k, j)
      end do
      c(i, j) = s
   end do
end do
It's also correct of course. Here, you are using an accumulator, and at the end of the innermost loop, the value of the accumulator is written in the matrix C.
Optimization mainly matters in the innermost loop. For our purpose, two "basic" instructions in the innermost loop are relevant once we get rid of all other details:
v(i) = v(i) + x*u(i) (the jki loop)
s = s + x(k)*y(k) (the accumulator loop where y is contiguous in memory, but not x)
The first is usually called a "daxpy" (from the name of a BLAS routine), for "A X Plus Y", the "D" meaning double precision. The second one is just an accumulator.
On an old sequential processor, there is not much to be done to optimize. On a modern processor with SIMD, registers can hold several values, and computations can be done on all of them at once, in parallel. For instance, on x86, an XMM register (from SSE instruction set) can hold two double precision floating-point numbers. A YMM register (from AVX2) can hold four numbers, and a ZMM register (AVX512, found on Xeon) can hold eight numbers.
For instance, on YMM the innermost loop will be "unrolled" to deal with four vector elements at a time (or even more if using several registers).
Here is what the basic loop block is then roughly doing:
daxpy case:
Read 4 numbers from u into register YMM1
Read 4 numbers from v into register YMM2
x is constant and is kept in another register
Multiply in parallel x with YMM1, add in parallel to YMM2, put the result in YMM2
Write back the result to corresponding elements of v
The read/write part is faster if the elements are contiguous in memory, but if they are not it's still worth doing this in parallel.
Note that here, we haven't changed the execution order of additions of the high level Fortran loop.
accumulator case
For the parallelism to be useful, there is a trick: accumulate four values in parallel in a YMM register, and then add the four accumulated values.
The basic loop block is thus doing this:
The accumulator is kept in YMM3 (four numbers)
Read 4 numbers from X into register YMM1
Read 4 numbers from Y into register YMM2
Multiply in parallel YMM1 with YMM2, add in parallel to YMM3
At the end of the innermost loop, add the four components of the accumulator, and write this back as the matrix element.
It's like if we had computed:
s1 = x(1)*y(1) + x(5)*y(5) + ... + x(29)*y(29)
s2 = x(2)*y(2) + x(6)*y(6) + ... + x(30)*y(30)
s3 = x(3)*y(3) + x(7)*y(7) + ... + x(31)*y(31)
s4 = x(4)*y(4) + x(8)*y(8) + ... + x(32)*y(32)
And then the matrix element written is c(i,j) = s1+s2+s3+s4.
Here the order of the additions has changed! And since floating-point addition is not associative, a different order very likely gives a (slightly) different result.
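To see the effect in isolation, here is a small sketch that redoes the reordering by hand on a toy dot product (this only illustrates the four-partial-sums pattern, it is not the actual code the compiler emits; the fill formulas are borrowed from the question):

program reorder_demo
   use iso_fortran_env, only: real64
   implicit none
   ! toy dot product summed two ways: strictly sequentially, and with four
   ! interleaved partial sums (the pattern a SIMD-vectorized accumulator produces)
   integer, parameter :: ny = 32
   real(real64) :: x(ny), y(ny), s_seq, s_simd, part(4)
   integer :: k

   do k = 1, ny
      x(k) = dble(k)/7d0 + 2.65d0*dble(k)
      y(k) = 5d0*dble(k) - dble(k)*0.45d0
   end do

   s_seq = 0d0
   do k = 1, ny                       ! one accumulator, original order
      s_seq = s_seq + x(k)*y(k)
   end do

   part = 0d0
   do k = 1, ny, 4                    ! four accumulators, one per "lane"
      part = part + x(k:k+3)*y(k:k+3)
   end do
   s_simd = (part(1) + part(2)) + (part(3) + part(4))

   print *, s_seq - s_simd            ! usually a tiny nonzero difference
end program reorder_demo

The printed difference may happen to be exactly zero for some inputs, but in general the two sums disagree in the last bits, which is exactly the kind of discrepancy observed in the question.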
I can replicate the results when using fast math (I have Intel Fortran); when I compile with the default /fp:fast I get the following max error and speed:
! Error Loops Matmul
! 0.58208E-10 107526.9 140056.0 FAST
The error is just maxval(abs(diff)); the speed is measured in matrix operations per second.
But when I compile with /fp:strict I get no error, but a slowdown in the loops:
! Error Loops Matmul
! 0.0000 43140.6 141844.0 STRICT
I see roughly a 60% slowdown in the loops with strict floating-point handling, but surprisingly no slowdown with the matmul() function.
Source Code for completeness
program Console1
use iso_fortran_env
implicit none
integer,parameter :: nr = 100000
integer,parameter::nx=64,ny=32,nz=16
real(real64)::mat1(nx,ny),mat2(ny,nz)
real(real64)::result1(nx,nz),result2(nx,nz),diff(nx,nz)
real(real64)::localsum
integer::i,j,r
integer(int64) :: tic, toc, rate
real(real64) :: dt1, dt2

do i=1,ny
   do j=1,nx
      mat1(j,i)=dble(j)/7d0+2.65d0*dble(i)
   enddo
enddo
do i=1,nz
   do j=1,ny
      mat2(j,i)=5d0*dble(j)-dble(i)*0.45d0
   enddo
enddo

call SYSTEM_CLOCK(tic,rate)
do r=1, nr
   result1=mymatmul(mat1,mat2)
end do
call SYSTEM_CLOCK(toc,rate)
dt1 = dble(toc-tic)/rate

call SYSTEM_CLOCK(tic,rate)
do r=1, nr
   result2=matmul(mat1,mat2)
end do
call SYSTEM_CLOCK(toc,rate)
dt2 = dble(toc-tic)/rate

diff=result2-result1

print ('(1x,a16,1x,a16,1x,a16)'), "Error", "Loops", "Matmul"
print ('(1x,g16.5,1x,f16.1,1x,f16.1)'), maxval(abs(diff)), nr/dt1, nr/dt2
! Error Loops Matmul
! 0.58208E-10 107526.9 140056.0 FAST
! 0.0000 43140.6 141844.0 STRICT
!
contains

pure function mymatmul(a,b) result(c)
real(real64), intent(in) :: a(:,:), b(:,:)
real(real64) :: c(size(a,1), size(b,2))
integer :: i,j,k
real(real64) :: sum
do j=1, size(c,2)
   do i=1, size(c,1)
      sum = 0d0
      do k=1, size(a,2)
         sum = sum + a(i,k)*b(k,j)
      end do
      c(i,j) = sum
   end do
end do
end function

end program Console1
Always compiled as Release-x64 and not Debug.
Suppose I have an array A in Fortran of dimension 10 with numbers.
However I'm only interested in a subset of those numbers (for example 3).
I store those number in a smaller array B
B(1) = A(1)
B(2) = A(5)
B(3) = A(6)
I can also define a mapping table to store index 1, 5, 6 for example
MAP(1) = 1
MAP(2) = 5
MAP(3) = 6
How can I create an inverse map INVMAP such that
INVMAP(1) = 1
INVMAP(5) = 2
INVMAP(6) = 3
with the constrain that INVMAP has dimension 3 (and not 10).
The point is that the array A is too big to be stored in memory and B
is obtained iteratively (A is never really allocated).
Considerations:
I do not care about the 7 discarded values, but I do care about the positions of the ones we keep.
Since MAP and INVMAP store positions, there will never be collisions (it's a one-to-one correspondence).
Maybe it could be done with a hash or a Fortran table, but I'm not really sure how, because I'm mapping numbers, not keys. Any ideas?
Thanks a lot,
Sam
Here's a very simple solution. No Fortran on this machine so not entirely sure that I have the syntax absolutely correct. Define a derived type like this:
type :: row
integer :: a_index
integer :: a_value ! I've assumed that your A array contains integers
! use another type if you want to
end type
then
type(row), dimension(100) :: b ! In practice you'll probably want b to be
! allocatable
and
b(1) = row(1, a(1)) ! each row of b contains an index into a and
                    ! the value at that index
b(2) = row(5, a(5))
b(3) = row(6, a(6))
Now your map function is simply, in pseudo-code, map(n) = b(n)%a_index
and your inverse map is, again in pseudo-code, invmap(n) = findloc(b%a_index, n).
Since the inverse map is a simple scan, it might become too time-consuming for your taste when b becomes large. Then I might introduce an auxiliary index array pointing into b at intervals, or I might go crazy and start a binary search of b%a_index.
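For what it's worth, here is a minimal runnable sketch of this idea, using the example indices 1, 5, 6 from the question and FINDLOC for the inverse lookup (FINDLOC needs a Fortran 2008 compiler; the values in a are made up):

program map_demo
   implicit none
   ! runnable sketch of the derived-type idea above; the values in a and
   ! the kept indices (1, 5, 6) are just the example from the question
   type :: row
      integer :: a_index
      integer :: a_value
   end type
   integer   :: a(10)
   type(row) :: b(3)
   integer   :: n

   a = [(10*n, n = 1, 10)]
   b = [row(1, a(1)), row(5, a(5)), row(6, a(6))]

   print *, b(2)%a_index                  ! map(2)    -> 5
   print *, findloc(b%a_index, 5, dim=1)  ! invmap(5) -> 2
end program map_demo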
I can also define a mapping table to store index 1, 5, 6 for example
MAP(1) = 1
MAP(2) = 5
MAP(3) = 6
I don't know if you know, but Fortran has a nice feature (one of my favorites) known as Vector Subscripts. You can pass an 'array of indices' as an index to an array, and get the elements corresponding to each index, like this:
integer :: A(10) = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
integer :: map(3) = [1, 5, 6]
print *, A(map)
! outputs 10 50 60
How can I create an inverse map INVMAP such that
INVMAP(1) = 1
INVMAP(5) = 2
INVMAP(6) = 3
Well, if your INVMAP will have a domain and an image of different sizes, it must be a function, not an array. I mean, INVMAP cannot be an array of length 3 and accept indices from 1 to 6, right? Arrays in Fortran (and in most languages) imply contiguous indices.
The intrinsic function FINDLOC can be pretty handy here (I am assuming your mapping is bijective).
function invmap(map, ids)
   integer, intent(in) :: map(:), ids(:)
   integer :: invmap(size(ids)), i
   invmap = [(findloc(map, ids(i), dim=1), i = 1, size(ids))]
end
You could use this function to relate each map value to its position on map
integer :: myinvmap(3)
myinvmap = invmap(map, [6, 1, 5])
print *, myinvmap           ! outputs 3 1 2
print *, invmap(map, [5])   ! outputs 2
The point is that the array A is too big to be stored in memory and B
is obtained iteratively (A is never really allocated).
Now, if you will never allocate the big array, then its values will also be accessed by some function (you can consider it a function actually). You have basically two options here:
Have two arrays: one with the values obtained from big_array_function, and one with the parameters you passed to big_array (the indices).
Have one array of [index, value] pairs. That is the answer @HighPerformanceMark provided.
Alternatively... (and not tested...)
Integer, dimension(100) :: A
Logical, dimension(100) :: A_Mask
Integer, dimension(  3) :: B
Integer, dimension(  3) :: A_Pos
Integer                 :: I, J

A_Mask = .false.
A_Mask(1) = .true.
A_Mask(5) = .true.
A_Mask(6) = .true.
B = PACK(A, MASK=A_Mask)

J = 0
Collect_Positions: Do I = 1, SIZE(A)
   If (.not. A_Mask(I)) CYCLE Collect_Positions
   J = J + 1
   A_Pos(J) = I
ENDDO Collect_Positions
...
And then if one wants to UNPACK, the mask already has the positions, so in the general sense it is possible not to worry about the positions in A (though they may be needed in the OP's case).
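As a follow-up to the UNPACK remark, here is a tiny sketch of scattering B back into an A-shaped array (sizes and values are made up; FIELD supplies the value for the discarded positions):

program unpack_demo
   implicit none
   ! sketch of the UNPACK remark above; sizes and values are made up
   integer :: A_full(10)
   logical :: A_Mask(10)
   integer :: B(3)

   A_Mask = .false.
   A_Mask([1, 5, 6]) = .true.      ! positions that were kept
   B = [11, 55, 66]                ! the packed values

   ! scatter B back to its original positions, zero elsewhere
   A_full = unpack(B, mask=A_Mask, field=0)
   print *, A_full                 ! 11 0 0 0 55 66 0 0 0 0
end program unpack_demo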
I was wondering if there was a quick way to have Fortran look through a matrix's rows and determine if n number of terms are equal.
I wasn’t able to find a question similar to mine and can’t find any help online.
Assuming a matrix of integers, this comes at O(N³) cost, N being the dimension of the matrix: for each of the N rows, you need to compare each element to every other element in that row. You probably need to write that yourself, but it's no big deal: loop over the rows and check each one separately for an element that appears n times.
integer :: M(ndim, ndim)       ! matrix to check (ndim: its dimension)
integer :: n                   ! number of equal terms we are looking for
integer :: i, j, k, counter
logical :: appears             ! flag if a value appears n times

appears = .false.
do i = 1, ndim                 ! loop over the rows
   do j = 1, ndim              ! loop over the entries
      counter = 1
      do k = j + 1, ndim
         ! check if the elements are the same; if yes, increase the counter
         ! (the exact comparison depends on the type of M)
         if (M(i, k) == M(i, j)) counter = counter + 1
         ! check if this element appears n times
         if (counter == n) appears = .true.
         ! or even more often?
         if (counter > n) appears = .false.
      end do
   end do
end do
You can adapt this to your needs, but that is the basic idea.
Here's a pragmatic alternative to the solutions #RodrigoRodrigues has already provided. In the absence of any good evidence (the question is seriously underspecified) that we need to be concerned about asymptotic complexity and all that good stuff, here's a simple straightforward function which took me about 5 minutes to design, code, and test.
This function accepts a rank-1 array of integers and spits back a rank-1 array of integers, each element corresponding to the count of that element in the input array. If that description confuses you, bear with me and read the code which is fairly simple:
FUNCTION get_counts(arr) RESULT(rslt)
   INTEGER, DIMENSION(:), INTENT(in) :: arr
   INTEGER, DIMENSION(SIZE(arr)) :: rslt
   INTEGER :: ix
   DO ix = 1, SIZE(arr)
      rslt(ix) = COUNT(arr(ix)==arr)
   END DO
END FUNCTION get_counts
For the input array [1,1,2,3,4,1,5] it returns [3,3,1,1,1,3,1]. If OP wants to use this as the basis of a function to see if there is any value which occurs n times then OP could write
any(get_counts(rank_1_integer_array)==n)
If OP is concerned to know what elements occur n times then it is fairly straightforward to use the result of get_counts to refer back to the original array to extract that element.
This solution is pragmatic in the sense that it is parsimonious with my time rather than with the computer's time. My solution is somewhat wasteful of space, which may be an issue for very large input arrays. Any of Rodrigo's solutions may outperform mine, in both time and space, in the limit.
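If the input really is a matrix, one possible way to drive get_counts row by row is sketched below (the test matrix and n are made up, and >= n is used to mean "at least n times", a slight variation on the == n test above):

program rowcheck_demo
   implicit none
   ! hedged driver for the get_counts idea above: look for a value occurring
   ! at least n times in any row of a small test matrix (data made up)
   integer :: mat(3, 7)
   integer :: i, n
   n = 3
   mat(1, :) = [1, 1, 2, 3, 4, 1, 5]   ! 1 appears three times
   mat(2, :) = [9, 8, 7, 6, 5, 4, 3]   ! all distinct
   mat(3, :) = [2, 2, 2, 2, 0, 0, 0]   ! 2 appears four times
   do i = 1, size(mat, 1)
      print *, i, any(get_counts(mat(i, :)) >= n)   ! T, F, T
   end do
contains
   function get_counts(arr) result(rslt)   ! same function as above
      integer, dimension(:), intent(in) :: arr
      integer, dimension(size(arr)) :: rslt
      integer :: ix
      do ix = 1, size(arr)
         rslt(ix) = count(arr(ix) == arr)
      end do
   end function get_counts
end program rowcheck_demo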
I was wondering if there was a quick way to have Fortran look through a matrix's rows and determine if n number of terms are equal.
As far as I understood your problem, this is what you want:
a function with the signature: (integer(:), integer) -> logical
this function receives the 1-D array line and checks if there is any value that appears at least n times in the array
the function is not supposed to indicate what or how many were those values, their positions or the exact number of repetitions
There are many ways to achieve this. "What is the most efficient?" That will depend on the specific conditions of your data, system, compiler, etc. To illustrate that, I came up with 3 different solutions. All of them give the correct answer, of course. You are advised to test each of them (or any other you come up with) on samples of your real data.
Naive solution #1 - good ol' do loops
This is the default algorithm. It traverses line and stores each value into the aggregator list packed, which holds each distinct value found so far along with how many times it has appeared. The moment any value reaches n repetitions, the function returns .true.. If no value reached n repetitions and there is no more chance to fulfil the predicate, it returns .false..
I say default because it is the most basic loop-based algorithm (that I could figure out), built on good ol' do loops. This would probably be the best choice for the general case, if you have zero information about the nature of the data, the system, or even the programming-language specifics. The aggregator is there to terminate the function as soon as the condition is met, but at the cost of an additional traverse over the aggregator (proportional to its length). If there are many different values in the data and n is large, the aggregator gets long and the look-up becomes an expensive operation. Also, there is almost no room for parallelism, vectorization and other optimizations.
! generic approach, with loops and aggregator
pure logical function has_at_least_n_repeated(line, n)
   integer, intent(in) :: line(:), n
   integer :: i, j, max_repetitions, qty_distincts
   ! packed(1,:) -> the distinct integers found so far
   ! packed(2,:) -> number of repetitions of each distinct integer so far
   integer :: packed(2, size(line) - n + 2)
   if(n < 1 .or. size(line) == 0) then
      has_at_least_n_repeated = .false.
   else if(n == 1) then
      has_at_least_n_repeated = .true.
   else
      packed(:, 1) = [line(1), 1]
      qty_distincts = 1
      max_repetitions = 1
      i = 1
      ! iterate until there aren't enough elements left to reach n repetitions
      outer: do, while(i - max_repetitions <= size(line) - n)
         i = i + 1
         ! test for a match on packed
         do j = 1, qty_distincts
            if(packed(1, j) == line(i)) then
               packed(2, j) = packed(2, j) + 1
               if(packed(2, j) == n) then
                  has_at_least_n_repeated = .true.
                  return
               end if
               max_repetitions = max(max_repetitions, packed(2, j))
               cycle outer
            end if
         end do
         ! add to packed
         qty_distincts = qty_distincts + 1
         packed(:, qty_distincts) = [line(i), 1]
      end do outer
      has_at_least_n_repeated = .false.
   end if
end
Naive solution #2 - trying for some vectorization
This approach tries to take advantage of the array-oriented nature of Fortran and the fast implementations of the intrinsic functions. Instead of an internal do loop, there is a call to the intrinsic count with an array argument, allowing the compiler to do some vectorization. Also, if you have any tool for parallelism, or if you know how to work with coarrays (and your compiler supports them), you could use this approach to apply them.
The disadvantage here is that the function scans all elements, even those that appeared before. So this is more suitable when there are many different possible values in your data, with few repetitions. It would also be easy, though, to add a cached list of past values and use the intrinsic any, passing the cache as a whole array.
! alternative approach, intrinsic functions without cache
pure logical function has_at_least_n_repeated(line, n)
   integer, intent(in) :: line(:), n
   integer :: i
   if(n < 1 .or. size(line) == 0) then
      has_at_least_n_repeated = .false.
   else if(n == 1) then
      has_at_least_n_repeated = .true.
   else
      ! iterate until there aren't enough elements left to reach n repetitions
      do i = 1, size(line) - n + 1
         if(count(line(i + 1:) == line(i)) + 1 >= n) then
            has_at_least_n_repeated = .true.
            return
         end if
      end do
      has_at_least_n_repeated = .false.
   end if
end
Naive solution #3 - functional style
This is my favorite (personal criteria). I like functional languages and I enjoy borrowing some aspects of it into imperative languages. This approach delegates the calculation to an internal auxiliary recursive function. There are no do loops here. On each function call, just a section of line is passed over as argument: a shorter array with only values not checked so far. No need for cache either.
To be honest, Fortran's support for recursion is far from great: there is no tail-call optimization, compilers usually impose a low call-stack limit, and recursion prevents many automatic optimizations. Even so, the algorithm is smart, I love how it looks, and I wouldn't discard it before doing some tests and comparisons.
Note: an internal procedure (one in the contains part of a main program) cannot itself contain another procedure, so this solution won't compile as presented inside a main program. For it to work, you'd need to put the function in a module or submodule, or make it an external function. Another option would be to extract the nested function and make it a separate function in the same scope.
! functional approach, auxiliary recursive function and no loops
pure logical function has_at_least_n_repeated(line, n)
   integer, intent(in) :: line(:), n
   if(n < 1 .or. size(line) == 0) then
      has_at_least_n_repeated = .false.
   else if(n == 1) then
      has_at_least_n_repeated = .true.
   else
      has_at_least_n_repeated = aux(line)
   end if
contains
   ! on each iteration removes all entries of an element from the array
   pure recursive function aux(section) result(out)
      integer, intent(in) :: section(:)
      logical :: out, mask(size(section))
      integer :: left
      mask = section /= section(1)
      left = count(mask)
      if(size(section) - left >= n) then
         out = .true.
      else if(n > left) then
         out = .false.
      else
         out = aux(pack(section, mask))
      end if
   end
end
Conclusion
Do the tests before choosing a path to follow! I talked a little here about my personal feelings on each approach and its implications, but it would be really nice if some of the Fortran gurus on this site joined the discussion and provided accurate information and critique.
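As an illustration of the module note above, here is one way the pieces could be wired together and driven row by row on a matrix (solution #2 is reproduced so the sketch is self-contained; the module name, the test matrix and n are made up):

module repeats_mod
   implicit none
contains
   ! solution #2 from above, reproduced here so the example compiles on its own
   pure logical function has_at_least_n_repeated(line, n)
      integer, intent(in) :: line(:), n
      integer :: i
      if (n < 1 .or. size(line) == 0) then
         has_at_least_n_repeated = .false.
      else if (n == 1) then
         has_at_least_n_repeated = .true.
      else
         do i = 1, size(line) - n + 1
            if (count(line(i + 1:) == line(i)) + 1 >= n) then
               has_at_least_n_repeated = .true.
               return
            end if
         end do
         has_at_least_n_repeated = .false.
      end if
   end function
end module repeats_mod

program test_repeats
   use repeats_mod
   implicit none
   integer :: mat(2, 6), i
   mat(1, :) = [4, 1, 4, 2, 4, 3]   ! 4 appears three times
   mat(2, :) = [1, 2, 3, 4, 5, 6]   ! all distinct
   do i = 1, size(mat, 1)
      print *, i, has_at_least_n_repeated(mat(i, :), 3)   ! T then F
   end do
end program test_repeats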
I took the question to mean that it was to be determined whether any value in a row was repeated at least n times. To figure this out I chose to sort a copy of each row, using qsort from the C standard library; then it's easy to find the length of each run of values.
module sortrow
   use ISO_C_BINDING
   implicit none
   interface
      subroutine qsort(base, num, size, compar) bind(C,name='qsort')
         import
         implicit none
         integer base(*)
         integer(C_SIZE_T), value :: num, size
         procedure(icompar) compar
      end subroutine qsort
   end interface
contains
   function icompar(p1, p2) bind(C)
      integer(C_INT) icompar
      integer p1, p2
      select case(p1-p2)
      case(:-1)
         icompar = -1
      case(0)
         icompar = 0
      case(1:)
         icompar = 1
      end select
   end function icompar
end module sortrow
program main
   use sortrow
   implicit none
   integer, parameter :: M = 3, N = 10
   integer i, j
   integer array(M,N)
   real harvest
   integer, allocatable :: row(:)
   integer current, maxMatch

   call random_seed
   do i = 1, M
      do j = 1, N
         call random_number(harvest)
         array(i,j) = harvest*3
      end do
   end do

   do i = 1, M
      row = array(i,:)
      call qsort(row, int(N,C_SIZE_T), C_SIZEOF(array(1,1)), icompar)
      maxMatch = 0
      current = 1
      do j = 2, N
         if(row(j) == row(j-1)) then
            current = current+1
         else
            current = 1
         end if
         maxMatch = max(maxMatch,current)
      end do
      write(*,'(*(g0:1x))') array(i,:),'maxMatch =',maxMatch
   end do
end program main
Sample run:
0 0 0 2 0 2 1 1 1 0 maxMatch = 5
2 1 2 1 0 1 2 1 2 0 maxMatch = 4
0 0 2 2 2 2 2 0 1 1 maxMatch = 5
I have an array with multiple dimensions (the goal is to allow for about 100), each dimension has a size of about 2^10, and I only need to store about 1000 double-precision coefficients in it. I don't need to do any operation with this array aside from reading and writing into it. The code is written in Fortran 90.
I assume that if I used a library like one of the ones mentioned in this answer I would be able to do this, but would it be optimized for simple reading and writing operations? Is there a library that would be most efficient for that purpose?
Edit: By "simple reading and writing operations" I mean the following. Suppose
REAL(8), DIMENSION(1000) :: coeff1
INTEGER, DIMENSION(1000,5) :: index
I want to define coeff2 to store the values in coeff1 and then read it at the indices in index, that is:
DO i = 1,1000
   index(i,:) = [something]
   coeff1(i) = [another something]
   coeff2(index(i,1),index(i,2),index(i,3),index(i,4),index(i,5)) = coeff1(i)
ENDDO
Then, for any i I would like to access the value of
coeff2(index(i,1),index(i,2),index(i,3),index(i,4),index(i,5))
as quickly as possible. Being able to do this fast is what I mean by "efficient".
Since the indices in [something] are at most 2^10 I am currently defining coeff2 as follows:
REAL(8), DIMENSION(2**10,2**10,2**10,2**10,2**10) :: coeff2
but this is too wasteful of memory, especially since I need to increase the number of dimensions, now 5, to the order of 100, and most elements of this array are equal to 0. So another measure of efficiency that is relevant to me is that the memory needed to store coeff2 should not explode as I increase the number of dimensions.
Well, the nature of your data and the way you want to use it are still not totally clear to me.
If what you need is indexed data whose indices are not consecutive, a sparse matrix can be an answer, and there are many solutions already implemented on the internet (as shown in the link you provided). But maybe that would be overkill for what I think you are trying to do. Maybe a simple derived type could serve your purpose, like this:
program indexed_values
   implicit none
   type :: indexed
      integer :: index
      real(8) :: value
   end type
   integer, parameter :: n_coeffs = 1000
   integer, parameter :: n_indices = 5
   integer :: i
   real(8), dimension(n_coeffs) :: coeff1
   integer, dimension(n_coeffs, n_indices) :: index
   type(indexed), dimension(n_coeffs, n_indices) :: coeff2
   type(indexed) :: var

   do i = 1, n_coeffs
      index(i, :) = [1, 2, 4, 16, 32] * i ! your calc here
      coeff1(i) = real(i * 3, 8)          ! more calc here
      coeff2(i, :)%index = index(i, :)
      coeff2(i, :)%value = coeff1(i)
   end do

   ! that's how you fetch the indices and values by stored position
   var = coeff2(500, 2)
   print*, var%index, var%value ! outputs: 1000 1500.0

   ! that's how you fetch a value by its index
   print*, fetch_by_index(coeff2(500, :), 1000) ! outputs: 1500.0

contains

   real(8) function fetch_by_index(indexed_pairs, index)
      type(indexed), dimension(:) :: indexed_pairs
      integer, intent(in) :: index
      integer :: i
      do i=1, size(indexed_pairs)
         if(index == indexed_pairs(i)%index) then
            fetch_by_index = indexed_pairs(i)%value
            return
         end if
      end do
      stop "No value stored for this index"
   end
end
The provided function for fetching values by index could be improved if your indices are always stored in ascending order (no need to traverse the whole list just to fail). Moreover, if you assign the same coeff1 value to all the indices in each row, you could do even better and not have a coeff2 array at all: just keep coeff1 for the values and index for the indices, and correlate them by position.
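A sketch of that last suggestion, with no coeff2 at all: only coeff1 and index are kept, correlated by row position, and a value is fetched by scanning the rows of index for a matching multi-index (the fill formulas are the ones from the example above; the look-up loop is the part that replaces fetch_by_index):

program no_coeff2_demo
   implicit none
   ! sketch of the last suggestion: no coeff2, just the values in coeff1 and
   ! their multi-indices in index, correlated by row position
   integer, parameter :: n_coeffs = 1000, n_indices = 5
   real(8)  :: coeff1(n_coeffs)
   integer  :: index(n_coeffs, n_indices)
   integer  :: i, pos, wanted(n_indices)

   do i = 1, n_coeffs
      index(i, :) = [1, 2, 4, 16, 32] * i
      coeff1(i)   = real(i * 3, 8)
   end do

   ! look up the value stored for a given multi-index by scanning the rows
   wanted = [1, 2, 4, 16, 32] * 500
   pos = 0
   do i = 1, n_coeffs
      if (all(index(i, :) == wanted)) then
         pos = i
         exit
      end if
   end do
   if (pos > 0) print *, coeff1(pos)   ! 1500.0
end program no_coeff2_demo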