In Python, I cannot divide 5 by 22. When I try this, it gives me zero, even when I use float?!
>>> print float(5/22)
0.0
It's a problem with order of operations. What's happening is this:
* First, Python evaluates 5/22. Since 5 and 22 are both integers, it performs integer division and rounds down, giving 0.
* Next, you convert that result to a float, so float(0) gives 0.0.
What you want to do is force one (or both) operands to float before dividing, e.g.
print 5.0/22 (if you know the numbers in advance)
print float(x)/22 (if you need to work with an integer variable x)
Right now you're casting the result of integer division (5/22) to float. 5/22 in integer division is 0, so you'll be getting 0 from that. You need to call float(5)/22.
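For example, in a Python 2 interactive session (output shown from Python 2.7; the printed digits can vary slightly between versions):
>>> 5/22
0
>>> float(5/22)
0.0
>>> float(5)/22
0.22727272727272727
>>> 5.0/22
0.22727272727272727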
I need to write a Python script that will convert a number x in base 10 to binary, with up to n digits after the decimal point. And I can't just use bin(x)! Here's what I have:
def decimal_to_binary(x, n):
    x = float(x)
    test_str = str(x)
    dec_at = test_str.find('.')

    # This section will work with numbers in front of the decimal
    p = 0
    binary_equivalent = [0]
    c = 0
    for m in range(0, 100):
        if 2**m <= int(test_str[0:dec_at]):
            c += 1
        else:
            break
    for i in range(c, -1, -1):
        if 2**i + p <= (int(test_str[0:dec_at])):
            binary_equivalent.append(1)
            p = p + 2**i
        else:
            binary_equivalent.append(0)
    binary_equivalent.append('.')

    # This section will work with numbers after the decimal
    q = 0
    for j in range(-1, -n-1, -1):
        if 2**j + q <= (int(test_str[dec_at+1:])):
            binary_equivalent.append(1)
            q = q + 2**j
        else:
            binary_equivalent.append(0)

    print float((''.join(map(str, binary_equivalent))))
So, say you call the function as decimal_to_binary(123.456, 4): it should convert 123.456 to binary with 4 places after the decimal, yielding 1111011.0111.
The first portion is fine - it takes the digits in front of the decimal, in this case 123, and converts them to binary, outputting 1111011.
However, the second portion, which deals with values after the decimal, is not doing what I think it should. The output it gives is not .0111, but rather .1111
I ran through the code with pen and paper writing down the value for each variable and it should work. But it doesn't. Can anyone help me fix this?
I call the function as decimal_to_binary(123.456, 4) and it prints out 1111011.1111
You're close, but there's an issue with your comparison when you go beyond the decimal:
if 2**j + q <= (int(test_str[dec_at+1:])):
What you're doing here is comparing a fractional value (since j is always negative) to a whole integer value. This comparison will, for all practical purposes, always be true.
Based on the surrounding logic, my guess is that you intend to compare against the actual fractional value here. Using your data, that is 0.456, so on the first iteration you expect the statement to be evaluated as:
0.5 <= 0.456
The actual comparison your code performs is:
0.5 <= 456
There are two separate issues here:
You're taking all of the digits after the decimal point, but not including the decimal point itself in your slice. This is why the right-hand side of your comparison ends up as a whole number rather than a fraction. It is fixed simply by referencing test_str[dec_at:] rather than test_str[dec_at+1:]
You're casting to int. Even if you applied the change from the first point, your code would still not run correctly; in that case it would be because int() cannot parse a string like '.456' (and an integer cast would discard the fractional part anyway). Cast to a float instead: float(test_str[dec_at:])
Your comparison line thus becomes if 2**j + q <= (float(test_str[dec_at:])):, which provides the correct output on my machine.
Note that floating point comparisons can be "finicky" in some situations, depending on rounding and the like. There are ways to mitigate this if needed.
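Putting both fixes together, the fractional-part loop of the function above would look like this (just a sketch of the changed loop; the rest of the function stays the same):
# fractional part, with the slice and the cast both corrected
q = 0
for j in range(-1, -n-1, -1):
    if 2**j + q <= float(test_str[dec_at:]):
        binary_equivalent.append(1)
        q = q + 2**j
    else:
        binary_equivalent.append(0)
With that change, decimal_to_binary(123.456, 4) prints 1111011.0111.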
This gives 0 as output:
print -4/-5
Whereas:
print float(-4/-5)
This gives 0.0 as output. The required output is 0.8.
You are doing integer division instead of floating-point division. It has been answered already: Python division.
Casting types after the division doesn't make sense.
float(4)/float(5)
Or, more simply,
4./5.
should do the trick
To understand this:
print float(-4/-5)
The expression inside the parentheses is evaluated first. Since -4/-5 is integer division, it produces 0, and typecasting 0 gives 0.0.
This will give the required output:
print float(-4)/-5
When both operands are integers, / does integer division.
To get your desired output, at least one of the operands should be a float.
-4.0 / -5.0 = 0.8
To explain the second code snippet: the first thing to be evaluated is the operation -4 / -5, which results in 0 since it is integer division. You then convert that 0 to a floating-point number with float(), which gives 0.0.
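A short Python 2 session (2.7 shown) illustrating all three cases:
>>> -4 / -5            # both operands are ints: floor division
0
>>> float(-4 / -5)     # the integer division happens first, so this is float(0)
0.0
>>> float(-4) / -5     # one float operand forces true division
0.8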
The division operator / only returns the floor quotient. When the numerator or denominator is negative, / will not return the real quotient; for example, -1/3 returns -1 rather than 0.
How can I get the real quotient?
Try it like this:
a = 1
b = 3
print -(a / b)
The behavior of the / operator is decided by the types of the operands. So if you want the real quotient, pass the numbers in as floats, like
float(1) / float(3)
or just
1.0 / 3.0
This also gives correct behavior with negative numbers.
If you want the correct int quotient, you can math.ceil() the result for negative numbers (or math.floor() for positive ones, respectively).
EDIT:
Instead of using math, you can also just int() the result, which gives correct results for negative numbers as well.
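A quick Python 2 sketch of both approaches, using the -1/3 example from the question:
>>> -1 / 3                     # floor division: not the "real" quotient
-1
>>> import math
>>> int(math.ceil(-1.0 / 3))   # ceil for a negative result, then back to int
0
>>> int(math.floor(1.0 / 3))   # floor for a positive result
0
>>> int(-1.0 / 3)              # int() truncates toward zero either way
0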
Is there a way to mask a decimal without rounding in ColdFusion?
Example:
45.5454
I want to get 45, not 46.
It depends on how you want to handle negative numbers.
If you want -45.5454 to be converted to -45, use Fix().
If you want -45.5454 to be converted to -46, use Int().
If you're only dealing with positive numbers either will suffice.
Fix
Description
Converts a real number to an integer.
Returns
If number is greater than or equal to 0, the closest integer less than or equal to number.
If number is less than 0, the closest integer greater than or equal to number.
myNumber=45.5454;
myResult=fix(myNumber);
Int
Description
Calculates the closest integer that is smaller than number. For example, it returns 3 for Int(3.3) and for Int(3.7); it returns -4 for Int(-3.3) and for Int(-3.7).
Returns
An integer, as a string.
myNumber=45.5454;
myResult=int(myNumber);
Use int:
#Int(5.2)# = 5
#Int(2.9)# = 2
Documentation
I have a program that is finding paths in a graph and outputting the cumulative weight. All of the edges in the graph have an individual weight of 0 to 100 in the form of a float with at most 2 decimal places.
On Windows/Visual Studio 2010, for a particular path consisting of edges with 0 weight, it outputs the correct total weight of 0. However on Linux/GCC the program is saying the path has a weight of 2.35503e-38. I have had plenty of experiences with crazy bugs caused by floats, but when would 0 + 0 ever equal anything other than 0?
The only thing I can think of that could cause this is that the program treats some of the weights as integers and uses implicit coercion to add them to the total. But 0 + 0.0f still equals 0.0f!
As a quick fix I reduce the total to 0 when it is less than 0.00001, and that is sufficient for my needs for now. But what voodoo causes this?
NOTE: I am 100% confident that none of the weights in the graph exceed the range I mentioned and that all of the weights in this particular path are all 0.
EDIT: To elaborate, I have tried both reading the weights from a file and setting them manually in the code to 0.0f. No other operation is performed on them other than adding them to the total.
Because it's an IEEE floating point number, and it's not exactly equal to zero.
http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm
[...] in the form of a float with at most 2 decimal places.
There is no such thing as a float with at most 2 decimal places. Floats are almost always represented as a binary floating point number (fractional binary mantissa and integer exponent). So many (most) numbers with 2 decimal places cannot be represented exactly.
For example, 0.20f may look like an innocent, round fraction, but
printf("%.40f\n", 0.20f);
will print: 0.2000000029802322387695312500000000000000.
See, it does not have 2 decimal places, it has 26!!!
Naturally, for most practical uses the difference is negligible. But if you do some calculations you may end up increasing the rounding error and making it visible, particularly around 0.
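The same kind of accumulating error is easy to reproduce in Python (2.7 shown), which uses IEEE 754 doubles; this is only a sketch of the general effect, not the original C++ program:
>>> total = 0.0
>>> for _ in range(10):
...     total += 0.1
...
>>> total
0.9999999999999999
>>> total == 1.0
False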
It may be that your floats containing values of "0.0f" aren't actually 0.0f (bit representation 0x00000000), but a very, very small number that merely displays as 0.0. Because of the way the IEEE 754 spec defines float representations, a value with a very small mantissa and a tiny exponent is not equal to absolute 0, even though it prints as 0. However, if you add such numbers together a sufficient number of times, the very small amounts will accumulate into a value that eventually becomes visibly non-zero.
Here is an example case which gives the illusion of 0 being non-zero:
float f = 0.1f / 1000000000;
printf("%f, %08x\n", f, *(unsigned int *)&f);
float f2 = f * 10000;
printf("%f, %08x\n", f2, *(unsigned int *)&f2);
If you are assigning literals to your variables and adding them, though, it is possible that the compiler is not translating 0 into the bit pattern 0x0 in memory. If it is, and this is still happening, then it's also possible that your CPU hardware has a bug that turns 0s into non-zero values during ALU operations, one that squeaked past its validation efforts.
However, it is good to remember that IEEE floating point is only an approximation and cannot represent every value exactly, so any floating-point operation is bound to carry some amount of error.
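In practice, the usual mitigation is exactly what the questioner did: compare against a small tolerance instead of testing for exact equality. A minimal Python sketch, with an arbitrarily chosen epsilon:
# treat anything smaller in magnitude than eps as zero (eps is arbitrary)
def nearly_zero(x, eps=1e-5):
    return abs(x) < eps

print nearly_zero(2.35503e-38)   # True
print nearly_zero(0.5)           # False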