For loop bugged? - C++

I wrote a for loop to calculate the population growth of an alien species. This is the loop:
int mind = 96;
int aliens = 1;
for (int i = 0; i <= mind; i++)
{
    aliens = aliens * 2;
}
cout << aliens;
Oddly, cout prints 0, which makes no sense; it should be a very large value. Is the loop badly coded?

The issue is simple: you have an int (most likely a 32-bit signed integer). The operation you're doing (doubling each iteration) is equivalent to an arithmetic left shift.
Beware the powers of 2! Once a 32-bit signed integer reaches 1 << 30, one more doubling overflows to the minimum value, and the doubling after that wraps around to 0.
Let's see how your loop goes.
0 2
1 4
2 8
3 16
4 32
5 64
6 128
7 256
8 512
9 1024
10 2048
11 4096
12 8192
13 16384
14 32768
15 65536
16 131072
17 262144
18 524288
19 1048576
20 2097152
21 4194304
22 8388608
23 16777216
24 33554432
25 67108864
26 134217728
27 268435456
28 536870912
29 1073741824
30 -2147483648 // A.K.A. overflow
31 0
At this point I don't think I need to tell you that 0 × 2 = 0.
The point being: use a double, or an integer type wide enough to hold the final value 2^(mind + 1) = 2^97, i.e. at least mind + 2 bits. (Strictly speaking, signed integer overflow is undefined behaviour in C++; the wrap-around shown above is merely the typical outcome.)

Related

Unable to get allocation size using !heap -stat -h command

I am simulating a user-mode native memory leak with a C++ program; I used gflags to enable page heap and took a process memory dump. I am following this article: https://www.codeproject.com/Articles/31382/Memory-Leak-Detection-Using-Windbg. However, I am unable to get output from the !heap -stat -h command in the form
size #blocks total ( %) (percent of total busy bytes)
Instead I get the output below. What am I missing?
0:002> !heap -stat -h 00000224be780000
Walking the heap 00000224be780000 .
0: Heap 00000224be780000
Flags 00000002 - HEAP_GROWABLE
Reserved memory in segments 1020 (k)
Commited memory in segments 160 (k)
Virtual bytes (correction for large UCR) 1020 (k)
Free space 1 (k) (1 blocks)
External fragmentation 0% (1 free blocks)
Virtual address fragmentation 84% (1 uncommited ranges)
Virtual blocks 0 - total 0 KBytes
Lock contention 0
Segments 1
Default heap Front heap Unused bytes
Range (bytes) Busy Free Busy Free Total Average
------------------------------------------------------------------
0 - 1024 145 0 0 0 5126 35
1024 - 2048 11 1 0 0 346 31
2048 - 3072 2 0 0 0 64 32
5120 - 6144 2 0 0 0 78 39
7168 - 8192 1 0 0 0 32 32
19456 - 20480 1 0 0 0 40 40
29696 - 30720 1 0 0 0 32 32
38912 - 39936 1 0 0 0 40 40
------------------------------------------------------------------
Total 164 1 0 0 5758 35

What is the correct way to get the binary representation of long double? [duplicate]

This question already has an answer here:
Why does a long double take up only 10 bytes in a string? [duplicate]
(1 answer)
Closed 1 year ago.
Here's my attempt:
#include <iostream>
union newType {
long double firstPart;
unsigned char secondPart[sizeof(firstPart)];
} lDouble;
int main() {
lDouble.firstPart = -16.5;
for (int_fast16_t i { sizeof(lDouble) - 1 }; i >= 0; --i)
std::cout << (int)lDouble.secondPart[i] << " ";
return 0;
}
Output: 0 0 0 0 0 0 192 3 132 0 0 0 0 0 0 0
Hex: 0 0 0 0 0 0 c0 3 84 0 0 0 0 0 0 0
And I almost agree with the part "c0 3 84", which is "1100 0000 0000 0011 1000 0100".
-16.5 = -1.03125 * 2^4 = (-1 + (-0.5) * 2^-4) * 2^4
Thus, the 5th bit of my fraction part must be 1, and after the 5th division I'll get only "0"s.
sign(-): 1
exponent(2^4): 4 + 16383 = 16387 = 100 0000 0000 0011
fraction: 0000 1000 and 104 '0'
Result: 1| 100 0000 0000 0011| 0000 1000 and 104 '0'
Hex: c 0 0 3 0 8 and 26 '0'
Or: c0 3 8 0 0 0 0 0 0 0 0 0 0 0 0 0
I don't get two things:
"c0 3 84" - where did I lose 4 in my calculations? My guess is that it somehow stores 1 (113 bit) and it shouldn't be stored. Then there's 1000 0100 instead of 0000 1000 (after "c0 3") and that's exactly "84". But we always store 112 bits and 1 is always implicit.
Why doesn't my output start from 192? Why does it start from 0? I thought that first bit is sign bit, then exponent (15 bits) and fraction (112 bits).
I've managed to represent other data types (double, float, unsigned char, etc.). With double I went with the similar approach and got the expected result (e.g. double -16.5 outputs 192 48 128 0 0 0 0 0, or c0 30 80 0 0 0 0 0).
Of course I've tested the solution from How to print binary representation of a long double as in computer memory?
Values for my -16.5 are: 0 0 0 0 0 0 0 0x84 0x3 0xc0 0xe2 0x71 0xf 0x56 0 0
If I reverse this I get: 0 0 56 f 71 e2 c0 3 84 0 0 0 0 0 0 0
And I don't understand why (again) does the sequence start not from sign bit, what are those "56 f 71 e2 c0"? Where do they come from? And why (again) there's "4" after "8"?
What is the correct way to get the binary representation of long double?
The same as the way of getting the binary representation of any trivial type: reinterpret it as an array of unsigned char and iterate over each byte. That is the typical, well-defined solution.
std::bitset helps with the binary representation:
#include <bitset>
#include <climits>
#include <cstddef>
#include <iostream>

int main() {
    long double ld = -16.5;
    unsigned char* it = reinterpret_cast<unsigned char*>(&ld);
    for (std::size_t i = 0; i < sizeof(ld); i++) {
        std::cout
            << "byte "
            << i
            << '\t'
            << std::bitset<CHAR_BIT>(it[i])
            << '\t'
            << std::hex << int(it[i])
            << '\t'
            << std::dec << int(it[i])
            << '\n';
    }
}
Example output on some system:
byte 0 00000000 0 0
byte 1 00000000 0 0
byte 2 00000000 0 0
byte 3 00000000 0 0
byte 4 00000000 0 0
byte 5 00000000 0 0
byte 6 00000000 0 0
byte 7 10000100 84 132
byte 8 00000011 3 3
byte 9 11000000 c0 192
byte 10 01000000 40 64
byte 11 00000000 0 0
byte 12 00000000 0 0
byte 13 00000000 0 0
byte 14 00000000 0 0
byte 15 00000000 0 0
Note that your example has undefined behaviour in C++ due to reading an inactive member of a union.
Why doesn't my output start from 192?
Probably because those bytes at the end happen to be padding.
Why does it start from 0?
Because the padding contains garbage.
I thought that first bit is sign bit, then exponent (15 bits) and fraction (112 bits).
Not so much the "first" bit, but rather the "most significant" bit, excluding the padding. And evidently you've assumed the wrong number of bits, since some of the object's storage is used for padding.
Note that C++ doesn't guarantee that the floating point representation is IEEE-754, and in fact long double is often not the 128-bit "quadruple" precision format, but rather the 80-bit "extended" precision format. This is the case, for example, on the x86 CPU architecture family.

Number of combinations with C-pair of N elements

I have N buckets. Each bucket can contain 0 or 1. C is the number of 1s that must appear consecutively (e.g. if C=3, the string must contain the run 111).
E.g. for N=5 and C=2 the total number of combinations is 19 (here C=2, so each counted combination must contain at least two consecutive 1s, i.e. 11 somewhere in the row). The question also included a table of these counts for the first 20 values of N and C.
How do I get to a formula that depends on C and N?
This Python program
import scipy.special
import fractions

def bi(n, m):
    return scipy.special.comb(n, m, exact=True)

def fr(*args):
    return fractions.Fraction(*args)

def f(N, k):
    s = fr(1)
    m = 0
    while m <= k - 1:
        if m % k == N % k:
            x = (N - m) // k
            s -= bi(m, x) * (-1)**x * fr(1, 2**((k + 1)*x))
        m += 1
    while m <= N:
        if m % k == N % k:
            x = (N - m) // k
            s -= (bi(m, x) - fr(1, 2)**k * bi(m - k, x)) * (-1)**x * fr(1, 2**((k + 1)*x))
        m += 1
    return s * 2**N

for N in range(1, 20):
    for C in range(1, N + 1):
        print("%6d" % int(f(N, C)), end=' ')
    print()
Outputs:
1
3 1
7 3 1
15 8 3 1
31 19 8 3 1
63 43 20 8 3 1
127 94 47 20 8 3 1
255 201 107 48 20 8 3 1
511 423 238 111 48 20 8 3 1
1023 880 520 251 112 48 20 8 3 1
2047 1815 1121 558 255 112 48 20 8 3 1
4095 3719 2391 1224 571 256 112 48 20 8 3 1
8191 7582 5056 2656 1262 575 256 112 48 20 8 3 1
16383 15397 10616 5713 2760 1275 576 256 112 48 20 8 3 1
32767 31171 22159 12199 5984 2798 1279 576 256 112 48 20 8 3 1
65535 62952 46023 25888 12880 6088 2811 1280 576 256 112 48 20 8 3 1
131071 126891 95182 54648 27553 13152 6126 2815 1280 576 256 112 48 20 8 3 1
262143 255379 196132 114832 58631 28240 13256 6139 2816 1280 576 256 112 48 20 8 3 1
524287 513342 402873 240335 124192 60320 28512 13294 6143 2816 1280 576 256 112 48 20 8 3 1
The formula is from Markus Scheuer.

output of !heap -s command need clarifications

I am trying to understand the output of the !heap -s command. I understand that each process has a default heap where allocations are made by default, and that an app can create its own heaps. Does each row in this output show a different heap? If so, does that mean the app has created that many heaps, or can Windows also create multiple heaps?
0:000> !heap -s
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-------------------------------------------------------------------------------------
Virtual block: 000000000a790000 - 000000000a790000 (size 0000000000000000)
Virtual block: 000000000ac50000 - 000000000ac50000 (size 0000000000000000)
0000000000430000 00000002 48256 31976 48256 558 512 7 2 5 LFH
0000000000010000 00008000 64 4 64 1 1 1 0 0
0000000000680000 00001002 1088 368 1088 9 5 2 0 0 LFH
00000000005e0000 00041002 512 8 512 3 1 1 0 0
0000000000380000 00001002 1088 408 1088 5 5 2 0 0 LFH
0000000000840000 00041002 512 16 512 0 1 1 0 0
0000000000d00000 00001002 512 340 512 3 8 1 0 0 LFH
00000000003a0000 00041002 512 8 512 3 1 1 0 0
000000000c3d0000 00001002 512 344 512 3 22 1 0 0 LFH
000000000c5d0000 00001002 512 336 512 5 10 1 0 0 LFH
0000000000c80000 00001002 64 8 64 3 1 1 0 0
000000000c7d0000 00001002 64 8 64 3 1 1 0 0
000000000b770000 00011002 512 32 512 19 7 1 0 0
0000000000b70000 00001002 1088 368 1088 8 9 2 0 0 LFH
000000000d980000 00001002 512 8 512 3 2 1 0 0
000000000db60000 00001002 64448 37556 64448 34615 196 27 0 0 LFH
External fragmentation 92 % (196 free blocks)
000000000f4b0000 00001002 3136 1928 3136 1198 39 3 0 0 LFH
External fragmentation 62 % (39 free blocks)
0000000015780000 00001002 64064 41784 64064 20339 308 16 0 8 LFH
External fragmentation 48 % (308 free blocks)
000000000e360000 00001002 512 8 512 3 1 1 0 0
-------------------------------------------------------------------------------------
Yes, each row represents a different heap. Any code running in your process can create a heap by calling HeapCreate(). That code includes Microsoft DLLs (such as msvcrt.dll), third-party DLLs, and your own code.

C++ Socket recv() order error

I want to create an RCON sender in C++ for the Jedi Academy multiplayer game. Everything works fine; the only problem is that when I read from the server with recv(), the lines sometimes arrive in the wrong order!
std::vector<std::string> ReceiveLine() {
    std::vector<std::string> ret;
    char* r = new char[1024];
    int i = 0;
    while (i < 40) {
        for (unsigned int j = 0; j < 1024; ++j) r[j] = 0;
        if (recv(s_, r, 1024, 0) <= 12) {
            break;
        }
        ret.push_back(r + 10); // skip the packet header
        ++i;
    }
    delete[] r; // free the receive buffer
    return ret;
}
It prints like this:
map: mp/ffa3
num score ping name lastmsg address qport rate
4 0 0 Alora 33 bot 6145 16384
5 0 0 Alora 33 bot 22058 16384
6 0 0 Alora 33 bot 60636 16384
7 0 0 Alora 33 bot 18312 16384
8 0 0 Alora 33 bot 11812 16384
--- ----- ---- --------------- ------- --------------------- ----- -----
0 0 22 test 0 XX.XX.XXX.XXX:29070 65099 25000
1 0 0 Alora 33 bot 9234 16384
9 0 0 Alora 33 bot 27681 16384
10 0 0 Alora 33 bot 19116 16384
11 0 0 Alora 33 bot 3514 16384
2 1 0 Alora 33 bot 65099 16384
12 0 0 Alora 33 bot 5972 16384
3 0 0 Alora 33 bot 41129 16384
13 0 0 Alora 33 bot 30716 16384
It should be ordered by num (the same thing works in PHP).
UDP does not guarantee ordered delivery. There are lots of reasons why a datagram might arrive out of order.