I am using the same instructions as provided in this URL - Debug fortran code with gdb - and I am getting these messages:
Breakpoint 1, psiappsor (omega=1.80999994,
pv=<error reading variable: value requires 31250000 bytes, which is more than max-value-size>,
psi=<error reading variable: value requires 31250000 bytes, which is more than max-value-size>, nsq=..., rhoref=...,
thetatop=<error reading variable: value requires 250000 bytes, which is more than max-value-size>,
thetabot=<error reading variable: value requires 250000 bytes, which is more than max-value-size>, thetaref=...,
coriol=<error reading variable: value requires 250000 bytes, which is more than max-value-size>,
ua=<error reading variable: value requires 125000 bytes, which is more than max-value-size>,
ub=<error reading variable: value requires 125000 bytes, which is more than max-value-size>,
va=<error reading variable: value requires 125000 bytes, which is more than max-value-size>,
vb=<error reading variable: value requires 125000 bytes, which is more than max-value-size>,
a=<error reading variable: value requires 31250000 bytes, which is more than max-value-size>,
b=<error reading variable: value requires 31250000 bytes, which is more than max-value-size>,
c=<error reading variable: value requires 31250000 bytes, which is more than max-value-size>, nx=249, ny=249, nz=124, dx=26437.7188, dy=27813.7012,
dz=200.139206) at inv_cart.f:1226
1226 > c(i,j,k))+(1.-omega)*psi(i,j,k).
Googling these texts or looking up previous SO Q&As has not been helpful at all. I can tell you that a lot of these variables, such as pv, psi, thetatop, thetabot, ua, ub, va, vb, a, b, c, are being passed in from the calling subroutine.
Is that what is causing these messages? I can post the full code associated with the breakpoint if required. What do these messages mean?
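For reference, the limit referred to in these messages is gdb's max-value-size setting, which can apparently be inspected and raised from the gdb prompt, for example:
(gdb) show max-value-size
(gdb) set max-value-size unlimited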
I am going through some code that uses ReadFile and specifies an OVERLAPPED type.
From reading other posts, this is my understanding so far.
If you wanted to start a ReadFile at the 8th byte of the file, you would set the Offset variable of OVERLAPPED to 8 and the OffsetHigh variable to 0 before passing the OVERLAPPED to ReadFile. This makes sense.
Now, what happens if we set OffsetHigh to 1?
The actual offset is a 64-bit integer. The Offset field is the low 32 bits, and the OffsetHigh field is the high 32 bits. This is stated as much in the documentation:
Offset
The low-order portion of the file position at which to start the I/O request, as specified by the user.
...
OffsetHigh
The high-order portion of the file position at which to start the I/O request, as specified by the user.
...
The Offset and OffsetHigh members together represent a 64-bit file position. It is a byte offset from the start of the file or file-like device, and it is specified by the user; the system will not modify these values. The calling process must set this member before passing the OVERLAPPED structure to functions that use an offset, such as the ReadFile or WriteFile (and related) functions.
This split in low/high bits is a remnant from the early days of C when 64-bit integer types were not commonly available yet (this is why structs like (U)LARGE_INTEGER even exist in the Win32 API).
So:
Offset   OffsetHigh   64-bit Value (Hex)     64-bit Value (Decimal)
8        0            0x00000000'00000008    8
8        1            0x00000001'00000008    4'294'967'304
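A minimal sketch (in C++, names hypothetical) of filling the two fields from a single 64-bit position before an overlapped ReadFile:
#include <windows.h>

// Split a 64-bit file position into the two 32-bit OVERLAPPED fields.
void setOverlappedOffset(OVERLAPPED &ov, ULONGLONG pos)
{
    ov.Offset     = static_cast<DWORD>(pos & 0xFFFFFFFFu);  // low 32 bits
    ov.OffsetHigh = static_cast<DWORD>(pos >> 32);           // high 32 bits
}
Calling setOverlappedOffset(ov, 0x100000008ULL) gives Offset = 8 and OffsetHigh = 1, i.e. the byte position 4'294'967'304 from the table above.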
I'm pretty new to this, so any clarification would be appreciated. When using the function ReadFile, how does the type of lpBuffer interact with the "number of bytes to read" parameter?
For instance, what if you had an unsigned short MyShort[5] as lpBuffer and you set the bytes to read to 2? Will all the data be stored in MyShort[0]? Or would the first byte go into MyShort[0] and the second byte go into MyShort[1]? What happens when you increase the bytes to read to, say, 9? Will 16 bits go into MyShort[0] and then 16 more into MyShort[1], etc.?
Thanks
lpBuffer is always treated as a pointer to an array of the specified number of bytes (nNumberOfBytesToRead). The number of bytes actually read is stored in the variable pointed to by the lpNumberOfBytesRead parameter, or reported later as the asynchronous (overlapped) result. So in your case, if you request 2 bytes it may read two bytes, storing both of them in MyShort[0], or just a single byte stored in the lower half of MyShort[0], or nothing at all. If you request 9 bytes it will read up to 9 bytes, storing them sequentially as 2 + 2 + 2 + 2 + 1, i.e. filling MyShort[0] through MyShort[3] and the low byte of MyShort[4].
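As a minimal sketch of that example (file name hypothetical, error handling kept minimal), assuming a plain synchronous ReadFile:
#include <windows.h>

int main()
{
    unsigned short MyShort[5] = {};
    DWORD bytesRead = 0;
    HANDLE h = CreateFileA("data.bin", GENERIC_READ, FILE_SHARE_READ, nullptr,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h != INVALID_HANDLE_VALUE)
    {
        // Requesting 9 bytes fills MyShort[0..3] completely and puts the 9th byte
        // into the low half of MyShort[4]; bytesRead reports how much actually arrived.
        ReadFile(h, MyShort, 9, &bytesRead, nullptr);
        CloseHandle(h);
    }
    return 0;
}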
I am trying to analyze a basic read operation using ifstream with Procmon.
Part of the code used for the read operation, where I was trying to read 16 KB of data from a file:
char * buffer = new char[128000];
ifstream fileHandle("file.txt");
fileHandle.read(buffer, 16000);   // request 16000 bytes in a single call
cout << buffer << endl;           // note: read() does not null-terminate the buffer
fileHandle.close();
In Procmon there were 4 ReadFile operation with the following details:
Offset: 0, Length: 4,096, Priority: Normal
Offset: 4,096, Length: 4,096
Offset: 8,192, Length: 4,096
Offset: 12,288, Length: 4,096
So does that mean there were 4 operations of 4 KB each? And if so, why did that happen instead of a single ReadFile operation of 16 KB?
So does that mean there were 4 operations of 4 KB each?
Yes.
And if so, why did that happen instead of a single ReadFile operation of 16 KB?
Probably because the standard library shipped with your compiler sets the default file-stream buffer size to 4 KB; since the read has to go through that buffer, it has to be filled (through OS calls) and emptied 4 times before your request is satisfied. Note that you can change the internal buffer of an fstream with fileHandle.rdbuf()->pubsetbuf(...).
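For illustration, a minimal sketch of that (file name and buffer size hypothetical; whether pubsetbuf has any effect is implementation-defined, and with libstdc++ it must be called before the file is opened):
#include <fstream>

int main()
{
    char streamBuf[16384];                 // user-supplied stream buffer
    std::ifstream fileHandle;
    fileHandle.rdbuf()->pubsetbuf(streamBuf, sizeof streamBuf);
    fileHandle.open("file.txt");

    char data[16000];
    fileHandle.read(data, sizeof data);    // may now reach the OS in fewer, larger reads
    return 0;
}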
So does that mean there were 4 operations of 4 KB each?
That is exactly what it is saying.
And if so, why did that happen instead of a single ReadFile operation of 16 KB?
Just because you asked for 16000 bytes does not mean ifstream can actually read 16000 bytes in a single operation. File systems do not usually allow for such large reads; there is usually a cap. Even if you increase the size of the buffer that ifstream uses internally, there is still no guarantee that the file system will honor a larger read size.
The contract of read() is that it returns the requested number of bytes unless an EOF/error is encountered. HOW it accomplishes that reading internally is an implementation detail. In this case, ifstream had to read four 4KB blocks in order to return 16000 bytes.
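As a quick illustration of that contract (file name hypothetical), gcount() reports how many bytes a read() call actually delivered, regardless of how many OS-level ReadFile calls it took:
#include <fstream>
#include <iostream>

int main()
{
    std::ifstream fileHandle("file.txt", std::ios::binary);
    char buffer[16000];
    fileHandle.read(buffer, sizeof buffer);
    // gcount() is the number of bytes delivered by the last unformatted read;
    // it falls short of the request only at end-of-file or on error.
    std::cout << fileHandle.gcount() << " bytes read" << std::endl;
    return 0;
}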
I'm new here, so I'll try to be very clear about my issue. I've tried to find a direct answer, but the other questions I've checked are very specific to their own cases, and I get confused.
I have a binary file that I need to read for my project. I also have a specification sheet, and I'm reading the file according to those specs. So I've created a cpp file and am writing a simple program to read each element. I use ifstream and its read() function to read from the file.
The problem is that the specification sheet says I need to read a bitstring of size 12. From the details, it's very clear that I should read only 12 bits for each of these elements. But I'm not really sure whether reading bit by bit is possible. The rest of the elements were read in whole bytes. Also, if I read 2 bytes each time and use bit "masks" to get only 12 bits, the elements read after this one no longer line up correctly. So my guess is that I really do need to read only 12 bits.
So my question: is it possible to read 12 bits from a binary file, or to read bit by bit? And I mean only 12, without reading whole bytes and then masking them.
Thanks a lot.
No, this is not possible.
What you should do is read 2 bytes, mask out the 12 bits you want, and store the other 4 bits somewhere. The next time you need 12 bits, read only 1 byte and combine it with the 4 stored bits.
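A minimal sketch of that idea in C++ (assuming the 12-bit fields are packed most-significant-bit first; adjust the shifts if your spec packs them the other way):
#include <cstdint>
#include <istream>

// Reads consecutive 12-bit values, carrying the spare 4 bits between calls.
struct BitReader12
{
    unsigned leftover = 0;    // the 4 spare bits from the previous 2-byte read
    bool haveSpare = false;

    uint16_t next(std::istream &in)
    {
        unsigned char b[2];
        if (!haveSpare) {                         // read 2 bytes, keep 4 bits for later
            in.read(reinterpret_cast<char *>(b), 2);
            leftover  = b[1] & 0x0F;
            haveSpare = true;
            return static_cast<uint16_t>((b[0] << 4) | (b[1] >> 4));
        }
        in.read(reinterpret_cast<char *>(b), 1);  // read 1 byte, combine with spare bits
        haveSpare = false;
        return static_cast<uint16_t>((leftover << 8) | b[0]);
    }
};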
Assuming little endian:
Read the file into an array of uint8_t that is padded to a multiple of 6 bytes (every 6 bytes hold four 12-bit values).
Then make your access function:
#include <cstdint>
#include <cstring>

// Return the loc-th 12-bit value from a buffer padded to a multiple of 6 bytes.
uint16_t get12Bits(const uint8_t *ptr, int loc)
{
    uint64_t temp = 0;                            // only the lower 48 bits are used
    std::memcpy(&temp, ptr + (loc >> 2) * 6, 6);  // copy the 6-byte group (4 elements)
    return static_cast<uint16_t>((temp >> ((loc & 0x03) * 12)) & 0xfff);  // pick the element
}
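For illustration (file name and element indices hypothetical), the function above might be used like this:
#include <cstdint>
#include <fstream>
#include <iterator>
#include <vector>

int main()
{
    std::ifstream in("data.bin", std::ios::binary);
    std::vector<uint8_t> bytes((std::istreambuf_iterator<char>(in)),
                               std::istreambuf_iterator<char>());
    bytes.resize((bytes.size() + 5) / 6 * 6);      // pad to a multiple of 6 bytes
    uint16_t first = get12Bits(bytes.data(), 0);   // element 0
    uint16_t sixth = get12Bits(bytes.data(), 5);   // element 5
    (void)first; (void)sixth;
    return 0;
}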
I'm writing a set of tools where a C++ application encrypts data with the AES encryption standard and a Java app decrypts it. As far as I know, the key length has to be 16 bytes. But when I tried passwords of different lengths, I came across the following behaviour of the AES_set_encrypt_key function:
length = 16 : nothing special happens, as expected
length > 16 : password gets cut after the sixteenth character
length < 16 : the password seems to be filled up "magically"
So, does anyone know what exactly happens in the last case ?
Btw: Java throws an exception if the password is not exactly 16 chars long
Thanks,
Robert
Don't confuse byte array with C-String. Every C-String is a byte array, but not every byte array is a C-String.
The concept with AES is to use a "key". It acts like a password, but the concept is a little bit different: it has a fixed size, and in your case it must be 16 bytes.
The key is a byte array of 16 bytes that is NOT a C-String. It means it can have any value at any point in the buffer, while a C-String must be null-terminated (the '\0' in the end of your content).
When you give a C-String to your AES, it still interprets it as a buffer, ignoring any \0 character on the way. In other words, if your string is "something", the buffer is in fact "something\0??????", where "??????" here means any random trash bytes that cannot be guaranteed to work all the time.
Why does a key shorter than 16 bytes appear to work? AES_set_encrypt_key still reads 16 bytes, so everything past your string's terminator is whatever happens to be in memory. In a debug build that memory often holds a repeating fill pattern, which is why it looks consistent, but it changes depending on the compiler and/or platform, so take care.
And with a key longer than 16 bytes, AES is just picking the first 16 bytes of your buffer and ignoring the rest.
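A minimal sketch (using OpenSSL's low-level AES API, with the key bytes hard-coded purely for illustration) of handing AES_set_encrypt_key exactly 16 raw key bytes instead of a '\0'-terminated password:
#include <openssl/aes.h>

int makeKey(AES_KEY *out)
{
    // Exactly 16 key bytes: a raw byte array, not a C-String, so there is no
    // terminating '\0' and no uninitialized "trash" following it.
    static const unsigned char key[16] = {
        0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
        0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f
    };
    return AES_set_encrypt_key(key, 128, out);  // key length is given in bits
}
In practice you would derive those 16 bytes from the password with a proper key-derivation function on both the C++ and the Java side, so that variable-length passwords always map to a full 16-byte key.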