I am converting an old Fortran code to Java, but I am stuck on the following lines:
PARAMETER (MAXC=15)
REAL CKV(MAXC,MAXC)
DATA (CKV( 1,J),J= 2,15)/10*0.,.45,.02,.12,.08/
DATA (CKV( 2,J),J= 3,15)/ 9*0.,.45,.06,.15,.07/
Can someone explain the last two lines above?
Thanks
PARAMETER (MAXC=15)
This declares MAXC as a parameter (a named constant) and assigns it the value 15.
REAL CKV(MAXC,MAXC)
This declares a single-precision floating-point array CKV with dimensions (MAXC,MAXC).
DATA (CKV( 1,J),J= 2,15)/10*0.,.45,.02,.12,.08/
DATA (CKV( 2,J),J= 3,15)/ 9*0.,.45,.06,.15,.07/
These statements assign initial values to some elements of CKV. The repeat specification 10*0. means "ten copies of the value 0.".
To clarify my answer (as requested in the comment):
(CKV( 1,J),J= 2,15) is an implied-do list meaning "initialize the array section CKV(1,2:15)", i.e. 14 elements. This matches the 14 values on the right-hand side (10*0., .45, .02, .12, .08).
The second implied-do loop starts at J=3, so only 13 elements are assigned. That is why the zero repeat count there drops to 9*0..
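Since the target language is Java, here is a minimal sketch of the equivalent initialization written in C++ (the Java version looks nearly identical apart from array allocation); the names ckv, row1, and row2 are my own, and the final printf is just a sanity check. Remember that Fortran indexing is 1-based, so CKV(1,J) maps to ckv[0][j-1] in a 0-based language:

#include <cstdio>

const int MAXC = 15;             // PARAMETER (MAXC=15)
float ckv[MAXC][MAXC] = {};      // REAL CKV(MAXC,MAXC), zero-initialized here

int main() {
    // DATA (CKV(1,J),J=2,15)/10*0.,.45,.02,.12,.08/  -- 14 values for J = 2..15
    const float row1[14] = {0,0,0,0,0,0,0,0,0,0, 0.45f,0.02f,0.12f,0.08f};
    for (int j = 2; j <= 15; ++j) ckv[0][j-1] = row1[j-2];

    // DATA (CKV(2,J),J=3,15)/ 9*0.,.45,.06,.15,.07/  -- 13 values for J = 3..15
    const float row2[13] = {0,0,0,0,0,0,0,0,0, 0.45f,0.06f,0.15f,0.07f};
    for (int j = 3; j <= 15; ++j) ckv[1][j-1] = row2[j-3];

    std::printf("%g %g\n", ckv[0][11], ckv[1][12]);   // prints 0.45 0.06
    return 0;
}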
First-time poster, long-time reader. I am working on debugging a large program (which started as F77 but has evolved), and I'm getting a runtime error saying that the string I'm passing to a subroutine is shorter than declared. The thing is, I put a debug statement right before the call, and the string is indeed of the correct length. Can you help me figure this one out? Since the code is long, I'll just post the relevant snippet of the file her1pro.F here (note WORD="HUCKEL " with a space at the end, but this happens to all the strings):
SUBROUTINE PR1INT(WORD,WORK,LWORK,IORDER,NPQUAD,
& TRIANG,PROPRI,IPRINT)
...
CHARACTER WORD*7
...
WRITE(*,*)"LB debug, calling PR1IN1 from PR1INT"
WRITE(*,*)"LB debug, WORD=",WORD
WRITE(*,*)"LB debug, LENGTH(WORD)=",LEN(WORD)
CALL PR1IN1(WORK,KFREE,LFREE,WORK(KINTRP),WORK(KINTAD),
& WORK(KLBINT),WORD,IORDER,NPQUAD,TRIANG,
& PROPRI,IPRINT,DUMMY,NCOMP,TOFILE,'TRIANG',
& DOINT,WORK(KEXPVL),EXP1EL,DUMMY)
...
SUBROUTINE PR1IN1(WORK,KFREE,LFREE,INTREP,INTADR,LABINT,WORD,
& IORDER,NPQUAD,TRIANG,PROPRI,IPRINT,
& SINTMA,NCOMP,TOFILE,MTFORM,
& DOINT,EXPVAL,EXP1EL,DENMAT)
...
CHARACTER LABINT(*)*8, WORD*7, TABLE(NTABLE)*7, MTFORM*6,
& EFDIR*1, LABLOC*8
...
And this is the output I'm getting:
[xxx#yyy WORK_TEST ]$ ~/dalton/build/dalton.x
DALTON: default work memory size used. 64000000
Work memory size (LMWORK+2): 64000002 = 488.28 megabytes; node 0
0: Directories for basis set searches:
./:
LB debug, calling PR1IN1 from PR1INT
LB debug, WORD=HUCKEL
LB debug, LENGTH(WORD)= 7
At line 161 of file /p/home/lbelcher/dalton/DALTON/abacus/her1pro.F
Fortran runtime error: Actual string length is shorter than the declared one for dummy argument 'word' (6/7)
From the standard:
16.9.112 LEN (STRING [, KIND])
Description. Length of a character entity.
Class. Inquiry function.
Arguments.
STRING shall be of type character. If it is an unallocated allocatable variable or a pointer that is not associated, its length type parameter shall not be deferred.
KIND (optional) shall be a scalar integer constant expression.
Result Characteristics. Integer scalar. If KIND is present, the kind type parameter is that specified by the value of KIND; otherwise the kind type parameter is that of default integer type.
Result Value. The result has a value equal to the number of characters in STRING if it is scalar or in an element of STRING if it is an array.
Example. If C is declared by the statement
CHARACTER (11) C (100)
LEN (C) has the value 11.
16.9.113 LEN_TRIM (STRING [, KIND])
Description. Length without trailing blanks.
Class. Elemental function.
Arguments.
STRING shall be of type character.
KIND (optional) shall be a scalar integer constant expression.
Result Characteristics. Integer. If KIND is present, the kind type parameter is that specified by the value of KIND; otherwise the kind type parameter is that of default integer type.
Result Value. The result has a value equal to the number of characters remaining after any trailing blanks in STRING are removed. If the argument contains no nonblank characters, the result is zero.
Examples. LEN_TRIM (' A B ') has the value 4 and LEN_TRIM (' ') has the value 0.
I think the examples here tell the story: LEN returns the declared length of the dummy argument (7 for WORD), regardless of its contents, so your debug print does not contradict the runtime error. The error is about the length of the actual argument at a call site: somewhere a 6-character string (e.g. 'HUCKEL' without its trailing blank) ends up being passed for a dummy declared CHARACTER*7, which is exactly what the "(6/7)" in the message says.
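Purely as an analogy in C++ terms (Fortran CHARACTER variables carry a declared length; C strings do not, so the fixed buffer below stands in for it):

#include <cstdio>
#include <cstring>

int main() {
    char word[8] = "HUCKEL ";                 // 7 characters + NUL, like CHARACTER WORD*7
    std::size_t declared = sizeof(word) - 1;  // 7 -- the analogue of LEN(WORD)
    std::size_t trimmed = std::strlen(word);
    while (trimmed > 0 && word[trimmed - 1] == ' ')
        --trimmed;                            // drop trailing blanks -- the analogue of LEN_TRIM(WORD)
    std::printf("%zu %zu\n", declared, trimmed);  // prints: 7 6
    return 0;
}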
This is a rather simple problem, but it is pretty confusing.
string R = "hhhh" ;
cout<< sizeof( R )<<endl;
OUTPUT:
4
Variation:
string R = "hhuuuuuuhh" ;
cout<< sizeof( R )<<endl;
OUTPUT2:
4
What is going wrong? Should I use a char array instead?
Think of sizeof as something evaluated at compile time. It evaluates to the size of the type, not the size of the contents. You can even write sizeof(std::string), which will be exactly the same as sizeof(foo) for any std::string instance foo.
To compute the number of characters in a std::string, use size().
If you have a character array, say char c[6], then the type of c is "array of 6 chars", so sizeof(c) (known at compile time) will be 6, as the C++ standard defines the size of a single char to be 1.
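A minimal sketch of the difference (the exact value of sizeof(std::string) varies between implementations):

#include <iostream>
#include <string>

int main() {
    std::string r = "hhuuuuuuhh";
    std::cout << sizeof(r) << '\n';            // size of the string object itself (e.g. 32 on many 64-bit platforms)
    std::cout << sizeof(std::string) << '\n';  // the same value: sizeof depends only on the type
    std::cout << r.size() << '\n';             // 10 -- the number of characters
    char c[6] = "hello";
    std::cout << sizeof(c) << '\n';            // 6 -- an array's length is part of its type
    return 0;
}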
The sizeof expression returns the size required to store the type that its operand evaluates to (see http://en.cppreference.com/w/cpp/language/sizeof). In the case of std::string, the object contains a pointer to the data (and possibly a small buffer for short strings), but not the data itself, so its size doesn't (and can't) depend on the string length.
Your string variable consists of a part with fixed size, most often stored on the stack; the size of this part is what sizeof() returns. Inside this fixed part is a pointer (or reference) to a part stored on the heap, which actually contains your characters and has a varying size. The size of that part is only known at runtime, while sizeof() is computed at compile time.
You may wonder why. Things like this are both the strength and the weakness of C++. C++ is a totally different beast from e.g. languages like Python and C#. While the latter languages can produce all kinds of dynamically changing meta-data (like the size or type of a variable), the price that is paid is that they're all slow. C++, while being a bit 'spartan', can run rings around such languages. In fact most 'dynamic' languages are in fact implemented (programmed) in C/C++.
The following code
int x;
cin >> x;
int b[x];
b[5] = 8;
cout << sizeof(b)/sizeof(b[0]) << endl << b[5];
with x inputted as 10 gives the output:
10
8
which seems very weird to me because:
According to http://www.cplusplus.com/doc/tutorial/arrays/ we shouldn't even be able to declare an array using a value obtained from the cin stream.
NOTE: The elements field within square brackets [], representing the number of elements in the array, must be a constant expression, since arrays are blocks of static memory whose size must be determined at compile time, before the program runs.
But that's not the whole story! The same code with x inputted as 4 sometimes gives the output
Segmentation fault. Core dumped.
and sometimes gives the output:
4
8
What the heck is going on? Why doesn't the compiler act in a single manner? Why can I assign a value to an array index that is larger than the array? And why can we even declare an array using a variable in the first place?
I initially mentioned this as a comment, but seeing how no one has answered, I'm gonna add it here.
What you have demonstrated above is undefined behavior: you can't tell what the outcome will be. Runtime-sized arrays (VLAs) are not part of standard C++; as Brian adds in the comments, a conforming compiler must issue a diagnostic (which may be just a warning), but GCC accepts them as an extension and compiles the code anyway. Writing to b[5] when x is 4 is then an out-of-bounds access, which is undefined behavior: sometimes it happens to work, and sometimes it crashes with a segmentation fault.
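A minimal sketch of the standard-conforming way to get a runtime-sized array, using std::vector; its at() member performs bounds checking instead of silently writing out of bounds:

#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    int x;
    std::cin >> x;
    std::vector<int> b(x);              // runtime-sized array, standard C++
    std::cout << b.size() << '\n';      // number of elements, known at run time
    try {
        b.at(5) = 8;                    // checked access: throws if x <= 5
        std::cout << b[5] << '\n';
    } catch (const std::out_of_range&) {
        std::cout << "index 5 is out of bounds\n";
    }
    return 0;
}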
I want to have a data variable which will be an integer whose range will be 0 to 1,000,000.
For example, normal int variables can store numbers from -2,147,483,648 to 2,147,483,647.
I want the new data type to have a smaller range so that it takes up LESS SPACE.
If there is a way to do that, please let me know.
There isn't; you can't specify arbitrary ranges for variables like this in C++.
You need 20 bits to store 1,000,000 different values, so using a 32-bit integer is the best you can do without creating a custom data type (even then you'd only be saving 1 byte at 24 bits, since you can't allocate less than 8 bits).
As for enforcing the range of values, you could do that with a custom class, but I assume your goal isn't the validation but the size reduction.
So, there's no true good answer to this problem. Here are a few thoughts though:
If you're talking about an array of these 20-bit values, then perhaps the answers at this question will be helpful: Bit packing of array of integers (a minimal sketch of the idea appears after the bitfield example below).
On the other hand, perhaps we are talking about an object, that has 3 int20_ts in it, and you'd like it to take up less space than it would normally. In that case, we could use a bitfield.
#include <stdio.h>
struct object {
    int a : 20;
    int b : 20;
    int c : 20;
} __attribute__((__packed__));  /* packing is a GCC/Clang extension */
int main(void) {
    printf("sizeof object: %zu\n", sizeof(struct object));  /* %zu: sizeof yields a size_t */
    return 0;
}
This code will probably print 8, signifying that it is using 8 bytes of space, not the 12 that you would normally expect.
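And for the array case mentioned above, a minimal bit-packing sketch (put20 and get20 are hypothetical helpers written for this answer, not a library API); it trades extra CPU work for storing each value in 20 bits instead of 32:

#include <cstddef>
#include <cstdint>
#include <cstdio>

// Store value i in bits [20*i, 20*i + 20) of a byte buffer.
void put20(uint8_t* buf, size_t i, uint32_t v) {
    size_t bit = 20 * i;
    for (int k = 0; k < 20; ++k, ++bit) {
        uint8_t mask = uint8_t(1u << (bit % 8));
        if (v & (1u << k)) buf[bit / 8] |= mask;
        else               buf[bit / 8] &= uint8_t(~mask);
    }
}

uint32_t get20(const uint8_t* buf, size_t i) {
    uint32_t v = 0;
    size_t bit = 20 * i;
    for (int k = 0; k < 20; ++k, ++bit)
        if (buf[bit / 8] & (1u << (bit % 8))) v |= (1u << k);
    return v;
}

int main() {
    uint8_t buf[(8 * 20 + 7) / 8] = {};   // room for eight 20-bit values in 20 bytes
    put20(buf, 3, 1000000);
    std::printf("%u\n", get20(buf, 3));   // prints 1000000
    return 0;
}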
Standalone data types can only be a multiple of 8 bits (strictly, of CHAR_BIT), because anything smaller would not be addressable: there is no such thing as a pointer to 5 bits of data. Bit-fields get around this only because they live inside an addressable enclosing object.
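You can see the addressability point directly: the compiler rejects taking the address of a bit-field member, even though the enclosing struct is addressable as usual. A tiny sketch:

#include <cstdio>

struct S { int a : 5; };   // a 5-bit field inside an addressable struct

int main() {
    S s{3};
    // int* p = &s.a;      // ill-formed: cannot take the address of a bit-field
    std::printf("%d %zu\n", s.a, sizeof(S));  // the struct itself still has a whole-byte size
    return 0;
}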
I'm reading a lecture slide from my data structures class about arrays, but there is something that confused me.
The example is an array called x, defined as follows:
1-dimensional array x = [a, b, c, d]
location(x[i]) = start + i
I'm not really understanding this, so could somebody explain this?
start is a variable that holds the address of the first element of the array, i.e. where the array begins in memory. The formula assumes zero-based indexing and that each element occupies one memory cell, so element x[i] lives i cells past the beginning: location(x[i]) = start + i. For elements of size s bytes, the general form is location(x[i]) = start + i*s. (Any separate pointer variable you keep to the array costs extra space of its own, e.g. 4 bytes on a 32-bit system, but that is storage for the pointer, not for the array.)
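A small sketch showing the formula in practice; with char elements (one byte each) the compiler's pointer arithmetic is exactly start + i:

#include <iostream>

int main() {
    char x[] = {'a', 'b', 'c', 'd'};
    char* start = &x[0];                                   // the address where the array begins
    for (int i = 0; i < 4; ++i)
        std::cout << static_cast<const void*>(start + i)  // location(x[i]) = start + i
                  << " holds " << x[i] << '\n';
    return 0;
}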