I'd like to know the internal memory representation of a Fortran allocatable array.
I understand this is a bit more complex than a raw pointer, since the shape and rank must be stored as well.
I also guess it's implementation dependent, since I can't find the information in the Fortran 2003 standard.
However, I'd like to know what kind of structures are used to represent allocatable arrays (even for just one compiler).
I know the question is a bit broad, but any help would be greatly appreciated.
Allocatable arrays, pointer arrays, and also assumed-shape array arguments are handled using an array descriptor (also called a dope vector).
Each compiler can use its own structure for the array descriptor, which can usually be found in the compiler's manual. But there is also a standardized format for the descriptor, used for communication with C (and potentially with other software outside of Fortran that can communicate with C).
This standard descriptor may or may not be used by the compiler internally. If it is also used internally, the compiler does not have to prepare a new descriptor when calling a C-interoperable procedure. For example, gfortran plans to support the standard descriptor "preferably as native format".
An example of a native array descriptor, different from the C-interoperable one, is described by Intel at https://software.intel.com/en-us/node/678452.
The structure of the array descriptor for C-interoperable array arguments is defined by the Technical Specification ISO/IEC TS 29113:2012 on further interoperability of Fortran with C, which became part of Fortran 2018 (known as Fortran 2015 during its drafting).
The C header file ISO_Fortran_binding.h defines a C structure that corresponds to a Fortran descriptor (assumed-shape, pointer, or allocatable).
It looks as follows (from the IBM website, certain details may be compiler specific):
CFI_cdesc_t
A type definition that describes a C descriptor. It contains the following structure members:
void *base_addr
The base address of the data object that is described. For deallocated allocatable objects, base_addr is NULL.
size_t elem_len
For scalars: The size in bytes of the data object that is described.
For arrays: The size in bytes of one element of the array.
int version
The version number of the C descriptor. Currently, the only valid value is available by using the CFI_VERSION macro.
CFI_attribute_t attribute
The attribute code of the C descriptor. For the valid values for attribute, see Table 1.
CFI_type_t type
The type code of the C descriptor. Describes the type of the object that is described by the C descriptor. For the valid values for type, see Table 2.
CFI_rank_t rank
The rank of the object that is described by the C descriptor. Its value must be in the range 0 ≤ rank ≤ CFI_MAX_RANK. A value of 0 indicates that the object is a scalar. Otherwise, the object is an array.
CFI_dim_t dim[]
An array of size rank that describes the lower bound, extent, and stride of each dimension.
There is a reserved area between rank and dim. The size of the reserved area is 12 words in 32-bit mode and 9 words in 64-bit mode.
The referenced CFI_ types are also defined in the ISO_Fortran_binding.h header.
So, even though this descriptor may not be exactly the one your compiler uses internally, it is a good example of the kind of data components one should expect in a Fortran array descriptor.
However, be aware that gfortran, a very common compiler, does not yet use this type of descriptor. There is only an experimental version with the new descriptor; the current descriptor is described in the manual. The status is also tracked at Further Interoperability of Fortran with C.
Related
I was surprised to discover, when using Spacetime to profile my OCaml, that my char and even bool arrays used a word to represent each element. That's 8 bytes on my 64 bit machine, and causes way too much memory to be used.
I've substituted char array with Bytes where possible, but I also have char list and dynamic arrays (char BatDynArray). Is there some primitive or general method that I can use across all of these vector data structures and get an underlying 8 bit representation?
Edit: I read your question too fast: it’s possible you already know that; sorry! Here is a more targeted answer.
I think the general advice for storing a varying numbers of chars of varying number (i.e. when doing IO) is to use buffers, possibly resizable. Module Buffer implements a resizable character buffer, which is better than both char list (bad design, except for very short lists perhaps) and char BatDynArray (whose genericity incurs a memory penalty here, as you noticed).
Below is the original answer.
That’s due to the uniform representation of values. Whatever its type, every OCaml value is one machine word: either an immediate value (anything that fits in a 31- or 63-bit integer, so int, char, bool, etc.), or a pointer to a block, i.e. a sequence of machine words (a C-style array) prefixed with a header. When the value is a pointer to a block, we say that it is “boxed”.
Cells of OCaml arrays are always machine words.
In OCaml, like in C++ but without the ad-hoc overloading, we just define specializations of array in the few cases where we actually want to save space. In your case:
instead of char array use string (immutable) or bytes (mutable) or Buffer.t (mutable appendable and resizable); these types signal to the GC that their cells are never pointers, so they can pack arbitrary binary data;
Unfortunately, the standard library has no specialization for bool array, but we can implement one (e.g. using bytes); you can find one in several third-party libraries, for instance module CCBV (“bitvectors”) in package containers-data.
Finally, you may not have realized it, but floats are boxed! That’s because they require 64 bits (IEEE 754 double precision), which is more than the 31 or even 63 bits available for immediates. Fortunately(?), the compiler and runtime have some ad hoc machinery to avoid boxing them as much as possible. In particular, float array is specially optimized so that it stores the raw floating-point numbers instead of pointers to them.
Some more background: we can distinguish between pointers and immediates just by testing one bit. Uniform representation is highly valuable for:
implementing garbage collection,
free parametric polymorphism (no code duplication, by contrast with what you’d get in a template language such as C++).
Based on a basic test (running a simple C++ program on an ordinary desktop PC), it seems plausible to suppose that the size of a pointer of any type (including a pointer to a function) equals the target architecture's bit width.
For example: on 32-bit architectures -> 4 bytes, and on 64-bit architectures -> 8 bytes.
However, I remember reading that this is not true in general!
So I was wondering: under what circumstances does it not hold?
Whether pointers to one data type have the same size as pointers to other data types
Whether pointers to data types have the same size as pointers to functions
Whether pointer size equals the target architecture's bit width
No, it is not reasonable to assume. Making this assumption can cause bugs.
The sizes of pointers (and of integer types) in C or C++ are ultimately determined by the C or C++ implementation. Normal C or C++ implementations are heavily influenced by the architectures and the operating systems they target, but they may choose the sizes of their types for reasons other than execution speed, such as goals of supporting lower memory use (smaller pointers means less memory used in programs with lots of pointers), supporting code that was not written to be fully portable to any type sizes, or supporting easier use of big integers.
I have seen a compiler targeted for a 64-bit system but providing 32-bit pointers, for the purpose of building programs with smaller memory use. (It had been observed that the sizes of pointers were a considerable factor in memory consumption, due to the use of many structures with many connections and references using pointers.) Source code written with the assumption that the pointer size equalled the 64-bit register size would break.
It is reasonable to assume that in general sizes of pointers of any type (including pointers to functions) are equal to the target architecture bits?
Depends. If you're aiming for a quick estimate of memory consumption, it can be good enough. But not if your program's correctness depends on it.
(including pointers to functions)
But here is one important remark. Although most pointers will have the same size, function pointers may differ. It is not guaranteed that a void* will be able to hold a function pointer. At least, this is true for C. I don't know about C++.
So I was wondering what would be such circumstances if any?
There can be tons of reasons why it differs. If your program's correctness depends on this size, it is NEVER OK to make such an assumption. Check it instead; it shouldn't be hard at all.
You can use this macro to check such things at compile time in C:
#include <assert.h>
static_assert(sizeof(void*) == 4, "Pointers are assumed to be exactly 4 bytes");
When compiling, this gives an error message:
$ gcc main.c
In file included from main.c:1:
main.c:2:1: error: static assertion failed: "Pointers are assumed to be exactly 4 bytes"
static_assert(sizeof(void*) == 4, "Pointers are assumed to be exactly 4 bytes");
^~~~~~~~~~~~~
If you're using C++, you can skip #include <assert.h> because static_assert is a keyword in C++. (And you can use the keyword _Static_assert in C, but it looks ugly, so use the include and the macro instead.)
Since these two lines are so extremely easy to include in your code, there's NO excuse not to do so if your program would not work correctly with the wrong pointer size.
It is reasonable to assume that in general sizes of pointers of any type (including pointers to functions) are equal to the target architecture bits?
It might be reasonable, but it isn't reliably correct. So I guess the answer is "no, except when you already know the answer is yes (and aren't worried about portability)".
Potentially:
systems can have different register sizes, and use different underlying widths for data and addressing: it's not apparent what "target architecture bits" even means for such a system, so you have to choose a specific ABI (and once you've done that you know the answer, for that ABI).
systems may support different pointer models, such as the old near, far and huge pointers; in that case you need to know what mode your code is being compiled in (and then you know the answer, for that mode)
systems may support different pointer sizes, such as the X32 ABI already mentioned, or either of the other popular 64-bit data models described here
Finally, there's no obvious benefit to this assumption, since you can just use sizeof(T) directly for whatever T you're interested in.
If you want to convert between integers and pointers, use intptr_t. If you want to store integers and pointers in the same space, just use a union.
The target architecture's "bits" refers to register size. For example, the Intel 8051 is 8-bit and operates on 8-bit registers, but (external) RAM and (external) ROM are accessed with 16-bit addresses.
For correctness, you cannot assume anything. You have to check and be prepared to deal with weird situations.
As a general rule of thumb, it is a reasonable default assumption.
It's not universally true, though. See the X32 ABI, for example, which uses 32-bit pointers on 64-bit architectures to save a bit of memory and cache footprint. The same goes for the ILP32 ABI on AArch64.
So, for guesstimating memory use, you can use your assumption and it will often be right.
It is reasonable to assume that in general sizes of pointers of any type (including pointers to functions) are equal to the target architecture bits?
If you look at all types of CPUs (including microcontrollers) currently being produced, I would say no.
Extreme counterexamples would be architectures where two different pointer sizes are used in the same program:
x86, 16-bit
In MS-DOS and 16-bit Windows, a "normal" program used both 16- and 32-bit pointers.
x86, 32-bit segmented
There were only a few, lesser-known operating systems using this memory model.
Programs typically used both 32- and 48-bit pointers.
STM8A
This modern automotive 8-bit CPU uses 16- and 24-bit pointers. Both in the same program, of course.
AVR tiny series
RAM is addressed using 8-bit pointers, Flash is addressed using 16-bit pointers.
(However, AVR tiny cannot be programmed with C++, as far as I know.)
It's not correct; for example, DOS pointers (16-bit) can be far (segment + offset).
However, for the usual targets (Windows, OS X, Linux, Android, iOS) it is correct, because they all use the flat programming model, which relies on paging.
In theory, you can also have systems that use only the lower 32 bits in x64 mode. An example is a Windows executable linked without LARGEADDRESSAWARE. However, this is meant to help the programmer avoid bugs when switching to x64: the pointers are truncated to 32 bits, but they are still stored as 64-bit values.
On x64 operating systems this assumption is always true, because flat mode is the only valid one: long mode forces GDT entries to be 64-bit flat.
The x32 ABI mentioned elsewhere is, I believe, based on the same paging technology, forcing all pointers to be mapped into the lower 4 GB. It must rest on the same principle as in Windows: in x64 you can only have flat mode.
In 32-bit protected mode you could have pointers of up to 48 bits (segmented mode), and you could also have call gates. But no mainstream operating system uses that mode.
Historically, on microcomputers and microcontrollers, pointers were often wider than general-purpose registers so that the CPU could address enough memory and still fit within the transistor budget. Most 8-bit CPUs (such as the 8080, Z80 or 6502) had 16-bit addresses.
Today, a mismatch is more likely to be because an app doesn’t need multiple gigabytes of data, so saving four bytes of memory on every pointer is a win.
C and C++ provide the separate types size_t and uintptr_t, and POSIX adds off_t. They represent, respectively, the largest possible object size (which might be smaller than the size of a pointer if the memory model is not flat), an integral type wide enough to hold a pointer, and a file offset (often wider than the largest object allowed in memory). A size_t (unsigned) or ptrdiff_t (signed) is the most portable way to get the native word size. Additionally, POSIX guarantees that the system compiler has some flag with which a long can hold any of these, but you cannot always assume so.
Generally pointers will be size 2 on a 16-bit system, 3 on a 24-bit system, 4 on a 32-bit system, and 8 on a 64-bit system. It depends on the ABI and C implementation. AMD has long and legacy modes, and there are differences between AMD64 and Intel64 for Assembly language programmers but these are hidden for higher level languages.
Any problems with C/C++ code are likely to be due to poor programming practices and ignoring compiler warnings. See: "20 issues of porting C++ code to the 64-bit platform".
See also: "Can pointers be of different sizes?" and LRiO's answer:
... you are asking about C++ and its compliant implementations, not some specific physical machine. I'd have to quote the entire standard in order to prove it, but the simple fact is that it makes no guarantees on the result of sizeof(T*) for any T, and (as a corollary) no guarantees that sizeof(T1*) == sizeof(T2*) for any T1 and T2.
Note: as answered by JeremyP, C99 section 6.3.2.3, subsection 8 says:
A pointer to a function of one type may be converted to a pointer to a function of another type and back again; the result shall compare equal to the original pointer. If a converted pointer is used to call a function whose type is not compatible with the pointed-to type, the behavior is undefined.
In GCC you can avoid incorrect assumptions by using built-in functions: "Object Size Checking Built-in Functions":
Built-in Function: size_t __builtin_object_size (const void * ptr, int type)
is a built-in construct that returns a constant number of bytes from ptr to the end of the object ptr pointer points to (if known at compile time). To determine the sizes of dynamically allocated objects the function relies on the allocation functions called to obtain the storage to be declared with the alloc_size attribute (see Common Function Attributes). __builtin_object_size never evaluates its arguments for side effects. If there are any side effects in them, it returns (size_t) -1 for type 0 or 1 and (size_t) 0 for type 2 or 3. If there are multiple objects ptr can point to and all of them are known at compile time, the returned number is the maximum of remaining byte counts in those objects if type & 2 is 0 and minimum if nonzero. If it is not possible to determine which objects ptr points to at compile time, __builtin_object_size should return (size_t) -1 for type 0 or 1 and (size_t) 0 for type 2 or 3.
What is the maximum number of dimensions that you can use when declaring an array?
For Example.
#include <iostream>

int main()
{
    int a[3][3][3][4][3];
    a[2][2][2][2][2] = 9;
    std::cout << a[2][2][2][2][2] << '\n';
}
So, how many dimensions can we declare for an array?
What is the limit?
And what is the reason behind it?
ISO/IEC 9899:2011 — C
In C, the C11 standard requires:
5.2.4.1 Translation limits
The implementation shall be able to translate and execute at least one program that
contains at least one instance of every one of the following limits:18)
…
12 pointer, array, and function declarators (in any combinations) modifying an
arithmetic, structure, union, or void type in a declaration.
…
18) Implementations should avoid imposing fixed translation limits whenever possible.
That means that, to be standard-compliant, a compiler must allow at least 12 array dimensions on a simple type like int, but it should avoid imposing any fixed limit if at all possible. The C90 and C99 standards required the same limit.
ISO/IEC 14882:2011 — C++
For C++11, the equivalent information is:
Annex B (informative) Implementation quantities [implimits]
Because computers are finite, C++ implementations are inevitably limited in the size of the programs they
can successfully process. Every implementation shall document those limitations where known. This documentation
may cite fixed limits where they exist, say how to compute variable limits as a function of available
resources, or say that fixed limits do not exist or are unknown.
2 The limits may constrain quantities that include those described below or others. The bracketed number
following each quantity is recommended as the minimum for that quantity. However, these quantities are
only guidelines and do not determine compliance.
…
Pointer, array, and function declarators (in any combination) modifying a class, arithmetic, or incomplete
type in a declaration [256].
…
Thus, in C++, the recommendation is that you should be able to use at least 256 dimensions in an array declaration.
Note that even after you've got the compiler to accept your code, there will ultimately be limits imposed by the memory on the machine where the code is run. The standards specify the minimum number of dimensions that the compiler must allow (over-specified in the C++ standard; the mind boggles at the thought of a 256-dimensional array). The intention is that you shouldn't run into a problem: use as many dimensions as you need. (Can you imagine working with the source code for a 64-dimensional array, let alone anything more? The individual expressions in the source would be horrid to behold, let alone write, read, or modify.)
In practice, it is limited mainly by the amount of memory your machine has; you can declare a 100-dimensional array as well.¹
Note: take care that every index stays below its declared bound; accessing memory out of bounds is undefined behavior.
¹ The standard specifies a minimum limit of 12 in the case of C and 256 in the case of C++11. (This information was added after discussion with Jonathan Leffler; my earlier answer only pointed out the practical limit, which is constrained by machine memory.)
The maximum also depends on the stack size for automatic arrays: e.g., if the stack size is 1 MB, then the total size of int a[xx][xx][xx][xx][xx] must be less than 1 MB.
Edit: Gfortran 6 now supports these extensions :)
I have some old f77 code that extensively uses UNIONs and MAPs. I need to compile this using gfortran, which does not support these extensions. I have figured out how to convert all non-supported extensions except for these and I am at a loss. I have had several thoughts on possible approaches, but haven't been able to successfully implement anything. I need for the existing UDTs to be accessed in the same way that they currently are; I can reimplement the UDTs but their interfaces must not change.
Example of what I have:
TYPE TEST
UNION
MAP
INTEGER*4 test1
INTEGER*4 test2
END MAP
MAP
INTEGER*8 test3
END MAP
END UNION
END TYPE
Access to the elements has to be available in the following manners: TEST%test1, TEST%test2, TEST%test3
My thoughts thus far:
Replace somehow with fortran EQUIVALENCE.
Define the structs in C/C++ and somehow make them visible to the FORTRAN code (doubt that this is possible)
I imagine that there must have been lots of refactoring of f77 code to f90/95 when UNION and MAP were excluded from the standard. How, if at all, was or is this handled?
EDIT: The accepted answer has a workaround to allow memory overlap, but as far as preserving the API, it is not possible.
UNION and MAP were never part of any FORTRAN standard; they are vendor extensions. (See, e.g., http://fortranwiki.org/fortran/show/Modernizing+Old+Fortran.) So they weren't really excluded from the Fortran 90/95 standard. They cause variables to overlap in memory.

If the code actually uses this feature, then you will need to use EQUIVALENCE. The preferred way to move data between variables of different types without conversion is the TRANSFER intrinsic, but to use it you would have to identify every place where a conversion is necessary, while with EQUIVALENCE it takes place implicitly. Of course, that makes the code less understandable.

If the memory overlays are just there to save space and the equivalence of the variables is not used, then you could get rid of this "feature". If the code is like your example, with small integers, then I'd guess that the memory overlay is being used. If the overlays are large arrays, it might have been done to conserve memory. If these declarations were also creating new types, you could use user-defined types, which are definitely part of Fortran >= 90.
If the code is using memory equivalence of variables of different types, this might not be portable, e.g., the internal representation of integers and reals are probably different between the machine on which this code originally ran and the current machine. Or perhaps the variables are just being used to store bits. There is a lot to figure out.
P.S. In response to the question in the comment, here is a code sample. But, to be clear, I do not think that using EQUIVALENCE is good coding practice. With the compiler options that I normally use with gfortran to debug code, gfortran rejects this code. With looser options, gfortran will compile it, and so will ifort.
module my_types
   use ISO_FORTRAN_ENV
   type test_p1_type
      sequence
      integer (int32) :: int1
      integer (int32) :: int2
   end type test_p1_type
   type test_p2_type
      sequence
      integer (int64) :: int3
   end type test_p2_type
end module my_types

program test
   use my_types
   type (test_p1_type) :: test_p1
   type (test_p2_type) :: test_p2
   equivalence (test_p1, test_p2)
   test_p1 % int1 = 2
   test_p1 % int2 = 4
   write (*, *) test_p1 % int1, test_p1 % int2, test_p2 % int3
end program test
The question is whether the union was used to save space or to provide alternative representations of the same data. If you are porting, see how it is used. Maybe, because memory was limited, the code was written in a way where the variables had to share storage. Nowadays, with larger amounts of memory available, this may no longer be necessary and the union may not be required; in that case, it is just two separate types.
For those just wanting to compile the code with these extensions: Gfortran now supports UNION, MAP and STRUCTURE in version 6. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=56226
This is probably a C++ 101 question: I'm curious what the guidelines are for using size_t and offset_t, e.g. what situations they are intended for, what situations they are not intended for, etc. I haven't done a lot of portable programming, so I have typically just used something like int or unsigned int for array sizes, indexes, and the like. However, I gather it's preferable to use some of these more standard typedefs when possible, so I'd like to know how to do that properly.
As a follow-up question, for development on Windows using Visual Studio 2008, where should I look to find the actual typedefs? I've found size_t defined in a number of headers within the VS installation directory, so I'm not sure which of those I should use, and I can't find offset_t anywhere.
You are probably referring to off_t, not offset_t. off_t is a POSIX type, not a standard C type, and it is used to denote file offsets (allowing 64-bit file offsets even on 32-bit systems). Standard C's closest equivalent is fpos_t, used with fgetpos and fsetpos.
size_t is meant to count bytes or array elements. It matches the address space.
Instead of offset_t, do you mean ptrdiff_t? That is the type returned by routines such as std::distance. My understanding is that size_t is unsigned (to match the address space, as previously mentioned), whereas ptrdiff_t is signed (to theoretically denote "backwards" distances between pointers, though this is rarely used).
offset_t isn't mentioned at all in my copy of the C++ standard.
size_t on the other hand, is meant simply to denote object or array sizes. A size_t is guaranteed to be big enough to store the size of any in-memory object, so it should be used for that.
You use size_t whenever the C++ language specification indicates that it is used in a particular expression. For example, you'd use size_t to store the return value of sizeof, or to represent the number of elements in an array (new[] takes size_t).
I've no idea what offset_t is - it's not mentioned once in ISO C++ spec.
Dragging an old one back up: offset_t (which is a long long) is used by Solaris' llseek() system call, and is to be used anywhere you're likely to run into real 64-bit file offsets, meaning you're working with really huge files.