Looking at the sample code for the seeding of gfortran's random number generator, I was puzzled by the time conversion here:
call date_and_time(values=dt)
tms = (dt(1) - 1970) * 365_8 * 24 * 60 * 60 * 1000 &
+ dt(2) * 31_8 * 24 * 60 * 60 * 1000 &
+ dt(3) * 24 * 60 * 60 * 1000 &
+ dt(5) * 60 * 60 * 1000 &
+ dt(6) * 60 * 1000 + dt(7) * 1000 &
+ dt(8)
t = transfer(tms, t)
I was curious why the 365 and 31 had _8 trailing. Looking it up, I found that this indicates an 8 bit integer. Why would that be used here? I understand that it's just a random seed, so it doesn't really matter, but why would you truncate or mod 365 to an 8 bit value, and not the other numbers? Is it just whimsy? Does anyone have some insight into this?
UPDATE: It turns out I was confused about _8 meaning 8 bits when actually it means 8 bytes, which I should have known. So yeah. Thanks for setting me straight on that.
It is not 8 bit, it is 8 byte.
Of course, 365 does not fit in 8 bits which should have set the alarm bells ringing.
To answer the rest of my question: setting the one number in those products to an 8-byte integer causes the whole product to be evaluated in 8 bytes as well, according to this page. The first two terms are likely to be the only ones with products large enough to require this locally, and the others will be converted when they are summed. So that's why only 365 and 31 needed to be 8 bytes.
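The same widening trick exists in C-family languages: making one factor of a product a 64-bit literal forces the whole product to be evaluated in 64 bits. A minimal C++ analogue of the first term (my own sketch, not from the gfortran sources):

#include <cstdint>
#include <cstdio>

int main() {
    int year = 2024;  // plays the role of dt(1)
    // 365LL is a 64-bit literal, so the whole left-to-right product is
    // evaluated in 64 bits -- the C++ analogue of Fortran's 365_8.
    int64_t ms = (year - 1970) * 365LL * 24 * 60 * 60 * 1000;
    printf("%lld\n", (long long)ms);  // ~1.7e12, far beyond 32-bit range
    return 0;
}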
I've started working on screen-capture software specifically targeting Windows. While looking through an example on MSDN for Capturing an Image, I found myself a bit confused.
Keep in mind that when I refer to the size of the bitmap, I don't mean the headers and so forth associated with an actual file; I'm talking about the raw pixel data. I would have thought that the formula should be (width * height) * bits-per-pixel. However, according to the example, this is the proper way to calculate the size:
DWORD dwBmpSize = ((bmpScreen.bmWidth * bi.biBitCount + 31) / 32) * 4 * bmpScreen.bmHeight;
or, equivalently: ((width * bits-per-pixel + 31) / 32) * 4 * height
I don't understand why there's the extra calculations involving 31, 32 and 4. Perhaps padding? I'm not sure but any explanations would be quite appreciated. I've already tried Googling and didn't find any particularly helpful results.
The bits representing the bitmap pixels are packed in rows. The size of each row is rounded up to a multiple of 4 bytes (a 32-bit DWORD) by padding.
(bits_per_row + 31) / 32 * 4 ensures the rounding up to the next multiple of 32 bits. The answer is in bytes rather than bits, hence the * 4 rather than * 32.
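As a minimal sketch of this in code (the function name is my own):

#include <cstdio>

// Row size ("stride") of a bitmap: bits per row, rounded up to a whole
// number of 32-bit DWORDs, expressed in bytes.
unsigned RowSizeBytes(unsigned widthPx, unsigned bitsPerPixel) {
    unsigned bitsPerRow = widthPx * bitsPerPixel;
    return (bitsPerRow + 31) / 32 * 4;  // round up to DWORDs, convert to bytes
}

int main() {
    // A 10-pixel-wide 24-bpp row is 30 bytes of pixel data plus
    // 2 padding bytes = 32 bytes.
    printf("%u\n", RowSizeBytes(10, 24));  // prints 32
    return 0;
}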
See: https://en.wikipedia.org/wiki/BMP_file_format
Under Bitmap Header Types you'll find the following:
The scan lines are DWORD aligned [...]. They must be padded for scan line widths, in bytes, that are not evenly divisible by four [...]. For example, a 10- by 10-pixel 24-bpp bitmap will have two padding bytes at the end of each scan line.
The formula
((bmpScreen.bmWidth * bi.biBitCount + 31) / 32) * 4
establishes DWORD-alignment (in bytes). The trailing * 4 is really the result of * 32 / 8, where the multiplication with 32 produces a value that's a multiple of 32 (in bits), and the division by 8 translates it back to bytes.
Although this does produce the desired result, I prefer a different implementation. A DWORD is 32 bits, i.e. a power of 2. Rounding up to a multiple of a power of 2 can be implemented using the following formula:
(value + ((1 << n) - 1)) & ~((1 << n) - 1)
Adding (1 << n) - 1 adjusts the initial value to reach or pass the next multiple of 2^n (unless it already is a multiple of 2^n). (1 << n) - 1 evaluates to a value where the n least significant bits are set; ~((1 << n) - 1) negates that, i.e. all bits but the n least significant bits are set. This serves as a mask to clear the n least significant bits of the adjusted initial value.
Applied to this specific case, where a DWORD is 32 bits, i.e. n is 5, and (1 << n) - 1 evaluates to 31. value is the raw scanline width in bits:
auto raw_scanline_width_in_bits{ bmpScreen.bmWidth * bi.biBitCount };
auto aligned_scanline_width_in_bits{ (raw_scanline_width_in_bits + 31) & ~31 };
auto aligned_scanline_width_in_bytes{ aligned_scanline_width_in_bits / 8 };
This produces the same results, but provides a different perspective that may be more accessible to some.
I'm currently in my 3rd year of university, studying for my Computer Systems and Concurrency exam, and I'm confused about a past paper question. Nobody, not even the lecturer, has answered my question.
Question:
Consider the following GPU that consists of 8 multiprocessors clocked at 1.5 GHz, each of which contains 8 multithreaded single-precision floating-point and integer processing units. It has a memory system that consists of 8 partitions of 1 GHz Graphics DDR3 DRAM, each 8 bytes wide and with 256 MB of capacity. Making reasonable assumptions (state them) and using a naive matrix multiplication algorithm, compute how much time the computation C = A * B would take. A, B, and C are n * n matrices, and n is determined by the amount of memory the system has.
Answer given in solutions:
> Assuming it has a single-precision FP multiply-add instruction:
>
> Single-precision FP multiply-add performance
> = #MPs * #SPs/MP * #FLOPs/instr/SP * #instrs/clock * #clocks/sec
> = 8 * 8 * 2 * 1 * 1.5 G = 192 GFlops/sec
>
> Total DDR3 DRAM memory size = 8 * 256 MB = 2048 MB
>
> Peak DDR3 bandwidth = #partitions * #bytes/transfer * #transfers/clock * #clocks/sec = 8 * 8 * 2 * 1 G = 128 GB/sec
>
> Modern computers use 32-bit single precision, so if we want 3 n*n SP matrices, the maximum n satisfies 3n^2 * 4 <= 2048 * 1024 * 1024, giving n = n_max = 13377.
>
> The number of operations that a naive matrix multiplication algorithm (triply nested loop) needs is calculated as follows: for each element of the result, we need n multiply-adds; for each row of the result, we need n * n multiply-adds; for the entire result matrix, we need n * n * n multiply-adds. Thus, approximately 2393 GFlops.
>
> Assuming no cache, we have loading of 2 matrices and storing of 1 to the graphics memory. That is 3 * n^2 = 512 GB of data. This process will take 512 / 128 = 4 seconds. Also, the processing will take 2393 / 192 = 12.46 seconds. Thus the entire matrix multiplication will take 16.46 seconds.
Now my question is: how does the calculation 3 * 13377^2 = 536,832,387 translate to 512 GB?
That is 536.8 million values, each 4 bytes long. The memory interface is 8 bytes wide; assuming the GPU cannot fetch 2 values and split them, that effectively doubles the size of the reads and writes. Therefore the 2 GB of memory used is effectively read/written twice (because 8 bytes are read and 4 ignored), so only 4 GB of data passes between the RAM and the GPU.
Can someone please tell me where I am going wrong? The only explanation I can think of is that the 536,832,387 figure was treated as KB, which is not stated anywhere.
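For reference, here is the arithmetic checked directly (my own sketch, not part of the past paper or its solutions):

#include <cstdint>
#include <cstdio>

int main() {
    const int64_t n = 13377;
    const int64_t elements = 3 * n * n;  // elements in three n*n matrices
    printf("elements   : %lld\n", (long long)elements);  // 536,832,387
    printf("at 4 B each: %.2f GB\n",
           elements * 4 / (1024.0 * 1024 * 1024));       // ~2.00 GB
    // Reading the element count as KB gives almost exactly 512 GB:
    printf("read as KB : %.2f GB\n",
           elements * 1024 / (1024.0 * 1024 * 1024));    // ~511.96 GB
    return 0;
}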
mlAnswer = ( ( ( degreesPLato->text().toInt() * 1000000 ) * 3800 ) / answer );
is the code in question.
mlAnswer outputs -8223, while my calculator puts out 228000.
Debug output:
12 * 1000000 * 3800 / 200000 = -8223
All the data types are ints. Please tell me what I'm doing wrong.
12 * 1000000 * 3800 = 45.6 billion.
This is out of range for a 4 byte signed integer, which is what int usually is. Try using long long instead.
The default type of an integer literal is int, unless the number is too big to fit in an int. As long as you are doing math operations between ints, the results remain as ints. 12 is an int, 1000000 is an int, and 3800 is an int. When you multiply them together, the result is still an int, even though it no longer fits. Add the LL suffix to make the integer literal a long long. i.e. 12LL, 1000000LL, 3800LL, etc...
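For instance, a minimal sketch with stand-in values for degreesPLato->text().toInt() and answer:

#include <cstdio>

int main() {
    int degreesPlato = 12, answer = 200000;  // stand-ins for the Qt values
    // 1000000LL forces the whole expression into 64-bit arithmetic, so
    // the intermediate 45.6 billion no longer overflows.
    long long mlAnswer = degreesPlato * 1000000LL * 3800 / answer;
    printf("%lld\n", mlAnswer);  // prints 228000
    return 0;
}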
You can fix this by reordering your operations:
12 * 1000000 * 3800 / 200000
Will overflow an int, however:
12 * 1000000 / 200000 * 3800
will not.
Note that this will only give the same answer if the early division is exact, i.e. if the partial numerator (here 12 * 1000000) is an integer multiple of the divisor (200000). Using LL is a better solution on platforms that support it, but if you are constrained to a 4-byte int type, reordering will at least avoid overflow in more situations.
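A small demonstration of both points (the second line uses my own example values to show the caveat):

#include <cstdio>

int main() {
    // Exact case: 12 * 1000000 is a multiple of 200000, so dividing
    // early loses nothing and avoids the overflow.
    printf("%d\n", 12 * 1000000 / 200000 * 3800);  // 228000
    // Inexact case: dividing early truncates and changes the answer.
    printf("%d\n", 7 / 2 * 4);  // 12, whereas 7 * 4 / 2 == 14
    return 0;
}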
I know this is stupid, but I'm quite a noob in the programming world. Here is my code.
This one works perfectly:
#include <stdio.h>

int main() {
    float x = 3153600000;
    printf("%f", x);
    return 0;
}
But this one has a problem:
#include <stdio.h>

int main() {
    float x = 60 * 60 * 24 * 365 * 100;
    printf("%f", x);
    return 0;
}
So 60 * 60 * 24 * 365 * 100 is 3153600000, right? If so, then why does it produce different results? I get an overflow in the second one: it prints "-1141367296.000000". Could anyone tell me why?
You're multiplying integers, then putting the result in a float. By that time, it has already overflowed.
Try float x = 60.0f * 60.0f * 24.0f * 365.0f * 100.0f;. You should get the result you want.
60 is an integer, as are 24, 365, and 100. Therefore, the entire expression 60 * 60 * 24 * 365 * 100 is carried out using integer arithmetic (the compiler evaluates the expression before it sees what type of variable you're assigning it into).
In a typical 32-bit architecture, a signed integer can only hold values up to 2,147,483,647. So the value would get truncated to 32 bits before it gets assigned into your float variable.
If you tell the compiler to use floating-point arithmetic, e.g. by tacking .0f onto the first value to make it a float, then you'll get the expected result. (A float times an int is a float, so the float propagates to the entire expression.) E.g.:
float x = 60.0f * 60 * 24 * 365 * 100;
Doesn't your compiler spit this warning? Mine does:
warning: integer overflow in expression
The overflow occurs when the all-integer expression is evaluated, before the result is converted to a float to be stored in x. Add a .0f to all the numbers in the expression to make them floats.
If you multiply two integers, the result will be an integer too.
60 * 60 * 24 * 365 * 100 is an integer.
Since an int can only go up to 2^31 - 1 (2,147,483,647), such a value overflows and becomes -1141367296, which is only then converted to float.
Try multiplying float numbers, instead of integral ones.
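Putting the answers together, a corrected version of the second program; a single float literal at the front is enough, since the float type then propagates through the left-to-right evaluation:

#include <cstdio>

int main() {
    // 60.0f makes the first multiplication a float operation, and every
    // later step is then float * int, i.e. also a float operation.
    float x = 60.0f * 60 * 24 * 365 * 100;
    printf("%f\n", x);  // 3153600000.000000
    return 0;
}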
I need some division algorithm which can handle big integers (128-bit).
I've already asked how to do it via bit-shifting operators. However, my current implementation seems to call for a better approach.
Basically, I store numbers as two long long unsigned ints in the format
A * 2^64 + B, with B < 2^64.
This number is divisible by 24 and I want to divide it by 24.
My current approach is to transform it like
(A * 2^64 + B) / 24 = (A / 24) * 2^64 + B / 24
                    = floor(A / 24) * 2^64 + ((A mod 24) / 24.0) * 2^64
                      + floor(B / 24) + (B mod 24) / 24.0
However, this is buggy.
(Note that floor(A / 24) is computed as A / 24 and A mod 24 as A % 24. The real divisions are stored in long double; the integers are stored in long long unsigned int.)
Since 24 is equal to 11000 in binary, the second summand shouldn't affect anything in the range of the fourth summand, since it is shifted 64 bits to the left.
So if A * 2^64 + B is divisible by 24 but B is not, the bug shows up easily: the result is a non-integral number.
What is the error in my implementation?
The easiest way I can think of to do this is to treat the 128-bit numbers as four 32-bit numbers:
A_B_C_D = A*2^96 + B*2^64 + C*2^32 + D
And then do long division by 24:
E = A/24 (with remainder Q)
F = Q_B/24 (with remainder R)
G = R_C/24 (with remainder S)
H = S_D/24 (with remainder T)
Where X_Y means X*2^32 + Y.
Then the answer is E_F_G_H with remainder T. At any point you only need division of 64-bit numbers, so this should be doable with integer operations only.
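A sketch of this limb-by-limb long division in C++ (function and variable names are my own):

#include <cstdint>
#include <cstdio>

// Divide hi * 2^64 + lo by a small divisor using long division over four
// 32-bit limbs (A, B, C, D in the notation above).
void div128_small(uint64_t hi, uint64_t lo, uint32_t divisor,
                  uint64_t *q_hi, uint64_t *q_lo, uint32_t *rem) {
    uint32_t limb[4] = { (uint32_t)(hi >> 32), (uint32_t)hi,
                         (uint32_t)(lo >> 32), (uint32_t)lo };
    uint32_t q[4];
    uint64_t r = 0;  // running remainder, always < divisor
    for (int i = 0; i < 4; ++i) {
        uint64_t cur = (r << 32) | limb[i];  // X_Y in the notation above
        q[i] = (uint32_t)(cur / divisor);    // a 64-bit / 32-bit division
        r = cur % divisor;
    }
    *q_hi = ((uint64_t)q[0] << 32) | q[1];   // E_F
    *q_lo = ((uint64_t)q[2] << 32) | q[3];   // G_H
    *rem = (uint32_t)r;
}

int main() {
    uint64_t qh, ql;
    uint32_t r;
    div128_small(3, 12, 24, &qh, &ql, &r);  // (3 * 2^64 + 12) / 24
    // Expect quotient 2^61 = 2305843009213693952, remainder 12.
    printf("q = %llu * 2^64 + %llu, r = %u\n",
           (unsigned long long)qh, (unsigned long long)ql, r);
    return 0;
}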
Could this possibly be solved with inverse multiplication? The first thing to note is that 24 == 8 * 3 so the result of
a / 24 == (a >> 3) / 3
Let x = (a >> 3); then the result of the division is x / 3. Now it remains to find the value of x / 3.
Modular arithmetic states that there exists a number n such that n * 3 == 1 (mod 2^128). This gives:
x / 3 = (x * n) / (n * 3) = x * n
It remains to find the constant n. There's an explanation of how to do this on Wikipedia. You'll also have to implement functionality to multiply two 128-bit numbers.
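One standard way to find n is Newton's iteration, where each step doubles the number of correct low-order bits. A 64-bit sketch (my own code; the 128-bit case works the same way once you have a 128-bit multiply):

#include <cstdint>
#include <cstdio>

int main() {
    // Find n with 3 * n == 1 (mod 2^64).
    uint64_t n = 1;           // 3 * 1 == 3, so the lowest bit is correct
    for (int i = 0; i < 6; ++i)
        n *= 2 - 3 * n;       // all arithmetic wraps mod 2^64
    printf("n = 0x%llx\n", (unsigned long long)n);  // 0xaaaaaaaaaaaaaaab

    // For a divisible by 24: a / 24 == ((a >> 3) * n) mod 2^64.
    uint64_t a = 24 * 123456789ULL;
    printf("%llu\n", (unsigned long long)((a >> 3) * n));  // 123456789
    return 0;
}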
Hope this helps.
/A.B.
You shouldn't be using long double for your "normal divisions" but integers there as well. long double doesn't have enough significant figures to get the answer right (and anyway the whole point is to do this with integer operations, correct?).
> Since 24 is equal to 11000 in binary, the second summand shouldn't affect anything in the range of the fourth summand, since it is shifted 64 bits to the left.
Your formula is written in real numbers. (A mod 24) / 24 can have an arbitrary number of decimals (1/24 is for instance 0.041666666...) and can therefore interfere with the fourth term in your decomposition, even once multiplied by 2^64.
The property that Y*2^64 does not interfere with the lower weight binary digits in an addition only works when Y is an integer.
Don't.
Go grab a library to do this stuff - you'll be incredibly thankful you chose to when debugging weird errors.
Snippets.org had a C/C++ BigInt library on its site a while ago; Google also turned up the following: http://mattmccutchen.net/bigint/