How can I exactly normalize a double to an int? - c++

I have a client program that sends data to a server over a TCP connection.
In the client I need to send a normalized decimal number to the server. For normalization I multiply the decimal number by 100,000 and then send it, but I get the wrong number on the server.
For example:
double price;
I set it from the GUI to 74.40:
cout << price; // prints 74.40
and when I serialize my object I send:
#define Normal 100000
int tmp = price * Normal;
oDest << tmp;
In Wireshark I see that the client sent 7439999.
Why does this happen? How can I prevent this problem?

Don't store anything as a floating point value. Use a rational number instead, or use a fixed point value. Floating point values (like double) basically "cheat" in order to fit a large range of possible values into a reasonable chunk of memory, and they have to make compromises in order to do so.
If you are storing a financial value, consider storing pennies or cents or whatever the smallest denomination is.
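A minimal sketch of that fixed-point idea, assuming prices with two decimal places (the names here are illustrative): keep the value in integer cents end to end, and only format it as a decimal for display.
#include <cstdio>

int main() {
    long long priceCents = 7440;        // 74.40, stored exactly
    priceCents += 125;                  // arithmetic stays exact (+1.25)
    std::printf("%lld.%02lld\n",        // prints 75.65
                priceCents / 100, priceCents % 100);
    return 0;
}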

This is due to floating point precision errors. You can add some rounding (for non-negative values):
int tmp = (price + 0.5/Normal)*Normal;

You need to round the number as you convert it to integer, due to the inability of floating point to represent a decimal number exactly.
int tmp = price*Normal + 0.5;
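If price can ever be negative, note that adding 0.5 rounds the wrong way for negative values; std::lround from <cmath> rounds to the nearest integer in both directions:
#include <cmath>
long tmp = std::lround(price * Normal); // 74.40 * 100000 -> 7440000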

Second real written to stdout with the P descriptor is wrong by a factor of 10

Here's a minimal working example:
program test_stuff
implicit none
real :: b
b = 10000.0
write(*,'(A10,1PE12.4,F12.4)') "b, b: ", b, b
end program
which I simply compile with gfortran test_stuff.f90 -o test_stuff
However, running the program gives the following output:
$ ./test_stuff
b, b: 1.0000E+04 100000.0000
The second real written to the screen is wrong by a factor of 10.
This happens with gfortran 9.3.0 as well as 10.2.0, so I definitely must be doing something wrong, but I can't see what it is. Can anybody spot my mistake?
The P control edit descriptor "temporarily changes" (Fortran 2018 13.8.5) the scale factor connection mode of the connection.
However, what is meant by temporary is until the mode is changed again or until the end of the data transfer statement (Fortran 2018 12.5.2):
Edit descriptors take effect when they are encountered in format processing. When a data transfer statement terminates, the values for the modes are reset to the values in effect immediately before the data transfer statement was executed.
In the case of the question, both output values are thus processed with the scale factor having value 1.
This scale factor is responsible for the "wrong" second value: there is a difference in interpretation of the scale factor for E and F editing. For E editing the scale factor simply changes the representation, with the external and internal values the same (with the significand scaled up by 10 and the exponent reduced by 1), but for F editing the output value is itself scaled:
On output, with F output editing, the effect is that the externally represented number equals the internally represented number multiplied by 10^k
So while 10000 would be represented by 0.1000E+05 with scale factor 0 and 1.0000E+04 with scale factor 1 under E12.4, under F12.4 the value 10000 is scaled to 100000 with the scale factor in place.
As a style note: although the comma is optional between 1P and E12.4 (and similar), many would regard it much better to include the comma, precisely to avoid this apparent tight coupling of the two descriptors (or looking like one descriptor). As the scale factor has a different effect for each of E and F, has no effect for EN and sometimes but not always has an effect with G, I'm not going to argue with anyone who calls P evil.
You are looking for section 12.5.2 of the Fortran 2018 standard.
A connection for formatted input/output has several changeable modes: these are ... and scale factor (13.8.5).
Values for the modes of a connection are established when the connection is initiated. If the connection is initiated by an OPEN statement, the values are as specified, either explicitly or implicitly, by the OPEN statement. If the connection is initiated other than by an OPEN statement (that is, if the file is an internal file or pre-connected file) the values established are those that would be implied by an initial OPEN statement without the corresponding keywords.
The scale factor cannot be explicitly specified in an OPEN statement; it is implicitly 0.
The modes of a connection can be temporarily changed by ... or by an edit descriptor. ... Edit descriptors take effect when they are encountered in format processing. When a data transfer statement terminates, the values for the modes are reset to the values in effect immediately before the data transfer statement was executed.
So when you used 1P in your format, you changed the mode for the connection. This applies to all output items after the 1P has been processed. When the write statement completes the scale factor is reset to 0.
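So a minimal fix within the question's own example is to reset the scale factor explicitly with 0P before the F descriptor (with the optional commas included, per the style note above):
write(*,'(A10,1P,E12.4,0P,F12.4)') "b, b: ", b, b   ! prints 1.0000E+04 and 10000.0000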

Reading the remaining noise budget of Ciphertexts without the Secret Key

I use SEAL 2.3.1 and this is my parameter setting:
seal::EncryptionParameters parms;
parms.set_poly_modulus("1x^2048 + 1"); // n = 2048
parms.set_coeff_modulus(coeff_modulus_128(2048)); // q = 54-bit prime
parms.set_plain_modulus(1 << 8); // t = 256
seal::SEALContext context(parms);
And I have some Ciphertext encrypted1 holding the number 5. The manual says that one can use the seal::Simulation class for reading the noise budget without the secret key. The only thing I've found is this in the simulator.h file:
/**
Creates a simulation of a ciphertext encrypted with the specified encryption
parameters and given invariant noise budget. The given noise budget must be
at least zero, and at most the significant bit count of the coefficient
modulus minus two.

@param[in] parms The encryption parameters
@param[in] noise_budget The invariant noise budget of the created ciphertext
@param[in] ciphertext_size The size of the created ciphertext
@throws std::invalid_argument if ciphertext_size is less than 2
@throws std::invalid_argument if noise_budget is not in the valid range
*/
Simulation(const EncryptionParameters &parms, int ciphertext_size,
    int noise_budget);
I can construct one using some other Ciphertext encrypted2:
seal::Simulation(parms, encrypted2.size(), context.total_coeff_modulus().significant_bit_count() - log2(context.poly_modulus().coeff_count() - 1) - log2(context.plain_modulus().value()));
But using this will only create a simulated Ciphertext without any real connection to the actual encrypted1 Ciphertext noise budget.
Is there a way to approximate the noise budget of encrypted1 without the secret key? This situation is important when I or someone else performs computations on externally stored ciphertexts, e.g. in a cloud database, and needs to check the noise budget server-side without revealing the secret key.
The Simulation class is meant to estimate the noise budget consumption in various operations so that those operations don't actually have to be executed on real data. Moreover, it uses heuristic upper bound estimates for the noise consumption, i.e. it most likely overestimates the noise consumption, and this effect becomes more pronounced as the computation gets more complicated, sometimes resulting in huge overestimates. Of course, the idea is that the computation is guaranteed to work if it works according to the simulator. A typical use of Simulation would be through the ChooserPoly (and related) classes; this is demonstrated in one of the examples in SEALExamples/main.cpp for SEAL versions < 3.0.
It is impossible to know or estimate the noise in a ciphertext without knowing how that ciphertext was produced. So if I give you a ciphertext without telling you anything else (except the encryption parameters), then you should not be able to learn anything about the noise budget unless you know the secret key. I agree that in some cases it could be important for someone to know right away whether a ciphertext is still valid for further computations, but it's not possible without some external mechanism.

Trying to decode an FM-like signal encoded in audio

I have an audio signal that carries a kind of FM-encoded signal. The encoded signal uses the biphase mark coding technique (see the end of the linked page).
This signal is a digital representation of a timecode, in hours, minutes, seconds and frames. It basically works like this:
let's assume we are working at 25 frames per second;
we know that the code transmits 80 bits of information every frame (that is, 80 bits per frame x 25 frames per second = 2000 bits per second);
the wave is sampled at 44100 samples per second, so if we divide 44100/2000 we see that every bit spans 22.05 samples;
each bit begins when the signal changes sign;
if the wave changes sign and keeps that sign during the whole bit period, it is a ZERO; if the wave changes sign twice over one bit period, it is a ONE.
What my code does is this:
detect the first zero crossing; that is the clock start (t0);
measure the level at t0 + 0.75*bitPeriod (the 0.75 gives some tolerance);
if that second level has a different sign, we have a 1; if not, we have a 0.
This is the code:
// data is a C array of floats representing the audio levels
float bitPeriod = 44100.0f / 2000.0f; // 22.05 samples per bit
int firstZeroCrossIndex = findNextZeroCross(data);
// firstZeroCrossIndex is the value where the signal changed
// for example: data[0] = -0.23 and data[1] = 0.5
// firstZeroCrossIndex will be equal to 1
// if firstZeroCrossIndex is invalid, go away
if (firstZeroCrossIndex < 0) return;
float firstValue = data[firstZeroCrossIndex];
int lastSignal = sign(firstValue);
if (lastSignal == 0) return; // invalid, go away
while (YES) {
float newValue = data[(int)(firstZeroCrossIndex + 0.75f * bitPeriod)];
int newSignal = sign(newValue);
if (lastSignal == newSignal)
printf("0");
else
printf("1");
firstZeroCrossIndex += bitPeriod;
// I think I must invert the signal here for the next loop interaction
lastSignal = -newSignal;
if (firstZeroCrossIndex > maximuPossibleIndex)
break;
}
This code seems logical to me, but the result coming out of it is total nonsense. What am I missing?
NOTE: this code executes over a live signal and reads values from a circular ring buffer. sign returns -1 if the value is negative, 1 if it is positive, or 0 if it is zero.
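For reference, a one-line implementation matching that description of sign (an assumption, not the asker's actual code):
int sign(float v) { return (v > 0.0f) - (v < 0.0f); } // -1, 0, or +1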
Cool problem! :-)
The code fails in two independent ways:
You are searching for the first (any) zero crossing. This is good. But there is a 50% chance that this transition is the one that occurs at the start of every bit (0 or 1), rather than a mid-bit transition that marks a 1 bit. If you get it wrong at the beginning, you end up with nonsense.
You keep adding bitPeriod (a float, 22.05) to firstZeroCrossIndex (an int). This means that your sampling points will slowly run out of phase with your analog signal, and you will see strange effects when your sample point gets near the signal transitions. You will get nonsense, at least periodically.
Solution to 1: You must search for at least one 0 bit first, so you know which transitions mark the start of a bit and which mark a 1 bit. In practice you will want to re-synchronize your sampler at every '0' bit.
Solution to 2: Do not add bitPeriod to your sampling point. Instead, search for the next transition, like you did in the beginning. The next transition is either half a bit away or a complete bit away, which gives you the information you want. After a half-bit period you must see another half-bit period; if not, you must re-synchronize, since you took a middle transition for a start transition by accident. This is exactly the re-sync I was talking about in 1; a sketch of this approach follows.
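A minimal sketch of that transition-driven decoder, assuming a plain float buffer and a two-argument findNextZeroCross(data, from) helper that returns the index of the next sign change at or after from, or -1 if there is none (the helper's signature and all names here are assumptions, not the asker's exact API):

#include <cmath>
#include <cstdio>

int findNextZeroCross(const float* data, int from); // assumed helper, handles its own bounds

void decodeBiphaseMark(const float* data) {
    const float bitPeriod = 44100.0f / 2000.0f;  // 22.05 samples per bit
    const float halfBit = bitPeriod / 2.0f;
    const float tolerance = bitPeriod / 4.0f;    // accept +/- a quarter bit of jitter
    int prev = findNextZeroCross(data, 0);
    while (prev >= 0) {
        int next = findNextZeroCross(data, prev + 1);
        if (next < 0) break;
        float gap = (float)(next - prev);
        if (std::fabs(gap - bitPeriod) < tolerance) {
            std::printf("0");                    // one full-bit gap -> 0
            prev = next;
        } else if (std::fabs(gap - halfBit) < tolerance) {
            int next2 = findNextZeroCross(data, next + 1);
            if (next2 < 0) break;
            if (std::fabs((float)(next2 - next) - halfBit) < tolerance) {
                std::printf("1");                // two half-bit gaps -> 1
                prev = next2;
            } else {
                prev = next;                     // mistook a mid-bit transition: re-sync
            }
        } else {
            prev = next;                         // noise or dropout: re-sync
        }
    }
}

Because every measurement is anchored to an actual transition rather than to an accumulating float offset, the phase drift from point 2 cannot occur.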

Fast percentile in C++ - speed more important than precision

This is a follow-up to Fast percentile in C++
I have a sorted array of 365 daily cashflows (xDailyCashflowsDistro) which I randomly sample 365 times to get a generated yearly cashflow. Generating is carried out by
1/ picking a random probability in the [0,1] interval
2/ converting this probability to an index in the [0,364] interval
3/ determining which daily cashflow corresponds to this probability, using the index and some linear approximation,
and summing the 365 generated daily cashflows. Following the previously mentioned thread, my code precalculates the differences of the sorted daily cashflows (xDailyCashflowDiffs), where
xDailyCashflowDiffs[i] = xDailyCashflowsDistro[i+1] - xDailyCashflowsDistro[i]
and thus the whole code looks like
double _dIdxConverter = ((double)(365 - 1)) / (double)(RAND_MAX - 1);
for ( unsigned int xIdx = 0; xIdx < _xCount; xIdx++ )
{
double generatedVal = 0.0;
for ( unsigned int xDayIdx = 0; xDayIdx < 365; xDayIdx ++ )
{
double dIdx = (double)fastRand()* _dIdxConverter;
long iIdx1 = (unsigned long)dIdx;
double dFloor = (double)iIdx1;
generatedVal += xDailyCashflowsDistro[iIdx1] + xDailyCashflowDiffs[iIdx1] *(dIdx - dFloor);
}
results.push_back(generatedVal);
}
_xCount (the number of simulations) is 1K+, usually 10K.
The problem:
This simulation is currently carried out 15M times (compared to 100K when the first thread was written), and it takes ~10 minutes on a 3.4GHz machine. Due to the nature of the problem, this 15M is unlikely to be significantly lowered in the future, only increased. Having used VTune Analyzer, I am told that the second-to-last line (generatedVal += ...) generates 80% of the runtime. My question is why, and how I can work with that.
Things I have tried:
1/ getting rid of the (dIdx - dFloor) part to see whether the double difference and multiplication are the main culprit - runtime dropped by a couple of percent
2/ declaring xDailyCashflowsDistro and xDailyCashflowDiffs as __restrict so as to prevent the compiler from thinking they depend on each other - no change
3/ using 16 days (as opposed to 365) to see whether cache misses drag my performance down - not the slightest change
4/ using floats as opposed to doubles - no change
5/ compiling with different /fp: settings - no change
6/ compiling as x64 - has an effect on the double <-> ulong conversions, but the line in question is unaffected
What I am willing to sacrifice is resolution - I do not care whether the generatedVal is 100010.1 or 100020.0 at the end if the speed gain is substantial.
EDIT:
The daily/yearly cashflows are related to the whole portfolio. I could divide all daily cashflows by the portfolio size and would thus (at a 99.99% confidence level) ensure that daily cashflow / portfolio size never leaves the [-1000,+1000] interval. In this case, though, I would need precision to the hundredths.
Perhaps you could turn your piecewise linear function into a piecewise-linear "histogram" of its values. The number you're sampling appears to be the sum of 365 samples from that histogram. What you're doing is a not-particularly-fast way to sample from the sum of 365 samples from that histogram.
You might try computing a Fourier (or wavelet or similar) transform, keeping only the first few terms, raising it to the 365th power, and computing the inverse transform. You won't get a probability distribution in the end, but there shouldn't be "too much" mass below 0 or above 1 and the total mass shouldn't be "too different" from 1 with this technique. (I don't know what your data looks like; this technique may well be unworkable for good mathematical reasons.)
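For what it's worth, here is a rough sketch of that transform idea under the assumption that the 365 daily draws are independent: bin the daily cashflow distribution into a histogram, take its DFT, raise each coefficient to the 365th power (a 365-fold convolution of a density with itself is a pointwise power in the Fourier domain), and transform back. A naive O(M^2) DFT is used for clarity; a real implementation would use FFTW or similar, and all names are illustrative:

#include <complex>
#include <cstddef>
#include <vector>

std::vector<double> sumDensity(const std::vector<double>& dailyHist, int days) {
    // Zero-pad to days * N bins so the days-fold convolution fits without wraparound.
    const std::size_t M = dailyHist.size() * days;
    const double twoPi = 6.283185307179586;
    std::vector<std::complex<double>> F(M);
    for (std::size_t k = 0; k < M; ++k) {      // forward DFT of the daily histogram
        std::complex<double> s(0.0, 0.0);
        for (std::size_t n = 0; n < dailyHist.size(); ++n)
            s += dailyHist[n] * std::polar(1.0, -twoPi * k * n / M);
        F[k] = std::pow(s, days);              // convolve 'days' times at once
    }
    std::vector<double> density(M);
    for (std::size_t n = 0; n < M; ++n) {      // inverse DFT; the density is real
        std::complex<double> s(0.0, 0.0);
        for (std::size_t k = 0; k < M; ++k)
            s += F[k] * std::polar(1.0, twoPi * k * n / M);
        density[n] = s.real() / M;
    }
    return density;                            // binned density of the 365-day sum
}

The payoff is that this density is computed once up front, after which each of the 15M simulations could be reduced to a single inverse-CDF lookup instead of a 365-iteration inner loop.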

How to mix voice audio

I am currently developing a simple VoIP project where multiple clients send their voice to a server, and the server then mixes those voices together.
However, I can't mix them correctly using simple arithmetic addition. Each cycle, a client sends 3584 bytes of voice data to the mixer.
Below is the snippet of the value contained in a receiver buffer:
BYTE buffer[3584];
[0] 0 unsigned char
[1] 192 'À' unsigned char
[2] 176 '°' unsigned char
[3] 61 '=' unsigned char
[4] 0 unsigned char
[5] 80 'P' unsigned char
[6] 172 '¬' unsigned char
[7] 61 '=' unsigned char
[8] 0 unsigned char
[9] 144 '' unsigned char
[10] 183 '·' unsigned char
[11] 61 '=' unsigned char
.
.
.
I'm not sure how the pattern inside the buffer is generated on the client side, but I'm thinking it may be a wave pattern. Now let's say I have another similar chunk of data; how do I mix the voices together?
Please help. Thank you.
You need to find out if your VoIP system uses compression. It probably does, in which case the first thing you need to do is to decompress the streams, then mix them, then recompress.
This is probably an array of floats (though that seems unlikely given the byte pattern presented) or signed integers, if it's raw PCM data, so try using it as such. Mixing two PCM streams is fairly trivial: just add them together and divide by two (use other weightings for volume control).
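A minimal sketch of that add-and-halve mix for two float buffers, with the result capped to the conventional [-1, 1] range (the names are illustrative):

#include <algorithm>
#include <cstddef>

void mixTwo(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) {
        float mixed = (a[i] + b[i]) * 0.5f;               // equal weighting
        out[i] = std::max(-1.0f, std::min(1.0f, mixed));  // cap to [-1, 1]
    }
}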
I looked at your data again and it does appear to be floating point values; the reason I was mistaken in my previous post is probably that I have been working on big-endian systems for a while now. Your data is little-endian IEEE floating point. Here are the values I got after conversion:
0.089630127 -> 0x0090b73d
0.084136963 -> 0x0050ac3d
0.086303711 -> 0x00c0b03d
As you can see, the values are fairly small, so you'll probably need to take that into account when applying the volume; the usual convention is to keep this data in the 0..1 or -1..1 range.
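For example, one way to perform that conversion on a little-endian host (which is what the values above assume) is to copy four bytes out of the buffer into a float:

#include <cstring>

float sample;
std::memcpy(&sample, &buffer[8], sizeof(sample)); // bytes 00 90 B7 3D -> 0.089630127f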
Here is part of a mixing loop I wrote a few years ago; for reference, the full mixer is available here:
for (int i = 0; i < a_Sample->count() / a_Sample->channels(); i++) {
    float l_Volume = a_Sample->volume() * m_MasterVolume;
    *l_Output++ += *l_Left * l_PanLeft * l_Volume;    // accumulate left channel
    *l_Output++ += *l_Right * l_PanRight * l_Volume;  // accumulate right channel
    l_Left += a_Sample->channels();                   // step to the next frame
    l_Right += a_Sample->channels();
}
Note that for the output you'll probably need to convert the data to signed integers, so agree explicitly on whether that conversion is the responsibility of the mixer or of the output device.
As others have mentioned, you have to know what format the buffer is in. You can't simply operate on the bytes directly (well, you could, but it would become quite complicated). Most raw PCM data is 44100 samples/second, 16-bit, 2-channel, but that's not always the case: each of those parameters can differ, and even WAV files can be in other formats (like IEEE float). You will need to interpret the buffer as the appropriate data type in order to operate on it.
Like:
BYTE buffer[3584];
if (SampleTypeIsPcm16Bit())
{
short *data = reinterpret_cast<short *>(buffer);
// Rock on
}
else if (SampleTypeIsFloat())
{
float *data = reinterpret_cast<float *>(buffer);
// Rock on
}
Of course, you can make it more generic with templates, but ignore that for now :P.
Keep in mind that if you are dealing with floats, they need to be capped to the range -1.0 to 1.0.
So, are you saying that the "add two values and divide by two" approach (mentioned by Jasper) isn't working? How are you playing the data, given that you hear just silence? I wonder about that, because if your math were merely off, you would likely hear audio glitches (pops/clicks/etc.) rather than silence.