I'm trying to get the value from the thumbstick with XInput, but the values are weird and I don't know how to handle them correctly.
How do I convert the raw value so that I can read values between -1 (thumbstick fully to the left/up) and +1 (thumbstick fully to the right/down)?
Similar to XNA's Gamepad.GetState().ThumbSticks.Left.X (-1 = to the left, +1 = to the right).
Any ideas?
According to the documentation, _XINPUT_GAMEPAD.sThumbLX is a SHORT whose value lies between -32768 and 32767. If you want to convert that to a range of [-1, 1), divide the value by 32768.0.
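For example, here's a minimal sketch of that conversion (assuming controller index 0 is connected and ignoring deadzones; ReadLeftStick is just an illustrative helper name):

#include <windows.h>
#include <Xinput.h>

// Minimal sketch: read the left stick of controller 0 and normalize it to roughly [-1, 1).
// Error handling and deadzone filtering are omitted for brevity.
bool ReadLeftStick(float& outX, float& outY)
{
    XINPUT_STATE state = {};
    if (XInputGetState(0, &state) != ERROR_SUCCESS)
        return false; // controller 0 not connected

    outX = state.Gamepad.sThumbLX / 32768.0f; // -1.0 = fully left, ~+1.0 = fully right
    outY = state.Gamepad.sThumbLY / 32768.0f; // -1.0 = fully down, ~+1.0 = fully up
    return true;
}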
Crystal strangely seems to output negative numbers.
The code I'm using is:
(1..10000000000).each do |num|
  if num % 10000000 == 0
    if num < 0
      puts "error #{num}"
      exit
    else
      puts num
    end
  end
end
Just before it exits, this outputs 2140000000 and then error -2140000000. Why is this happening?
The integers in the range (1..10000000000) are wrapping around to -2,147,483,648 after reaching 2,147,483,647.
This is common behaviour when working with 32-bit two's-complement signed integer types.
In Crystal, integers are Int32 by default, so when you write (1..10000000000) you define a Range(Int32, Int64), and the counter doesn't widen from Int32 to Int64 as it iterates. Once it passes the maximum Int32 value (2147483647) the sign bit flips and it continues with negative numbers.
So if you run the following code:
max_32 = 2147483647
already_64 = 2147483649
(max_32..already_64).each do |num|
  puts num
end
it'll never stop, because the Int32 counter wraps around before it can ever reach already_64.
puts 2147483647 + 1 # -2147483648
In your case you have to specify the types of your Range explicitly:
(1.to_i64..10000000000.to_i64).each do |num|
  # ... your next code
end
that will work!
I don't know crystal-lang, but a lot of languages have a maximum value for numbers, after which the value circles around to the most negative value. Maybe it is wrapping around the max value.
Recently I encountered a problem while trying to subtract the .size() values of two strings in C++. As far as I know, size() returns the number of characters in a string. So let's say I have two strings p and q; abs(p.size()-q.size()) should give me the difference in length between the two. But when I ran this code, it returned an absurdly large value. When I print the lengths individually, or store them in separate integer variables and subtract those, I get the correct answer. I am not yet able to figure out why.
size() returns an unsigned value. Subtracting a larger unsigned value from a smaller one underflows the calculation, resulting in a huge positive value. Think of the "rolling" counter of miles or km in a car: if you roll it back past 0, it becomes 99999, which is a big number.
The solution, assuming you care about negative differences, is to do static_cast<int>(p.size() - q.size()) (and pass that to abs).
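A small sketch of both the problem and that fix (the strings p and q here are just example data):

#include <cstdlib>
#include <iostream>
#include <string>

int main()
{
    std::string p = "abc";     // size() == 3
    std::string q = "abcdef";  // size() == 6

    // Unsigned subtraction wraps around: 3 - 6 becomes a huge positive value.
    std::cout << p.size() - q.size() << '\n';

    // Casting the wrapped result to int recovers the negative difference,
    // which abs() then turns into 3.
    std::cout << std::abs(static_cast<int>(p.size() - q.size())) << '\n';
}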
The return value of size() is of type size_t (an unsigned integral type).
So if you subtract a greater number from a smaller one, the calculation wraps around and you get that big value as the result of the subtraction.
Reference std::string::size
std::string's member function size() returns an unsigned value, so if p.size() < q.size(), the expression p.size() - q.size() will not evaluate to a negative number (it's unsigned, it cannot be negative) but to an (often) very, very big unsigned number.
std::string reports its size as some width of unsigned integer; such types are a bit like the minute hand on a watch: you can wind it forward from 0 up to 59, but if you keep going clockwise it drops to 0 before incrementing again, while if you wind counterclockwise you count down to 0, then jump to 59 and count down from there, ad infinitum.
Say you are subtracting a string length of 6 from a string length of 4: it's much like saying "start the minute hand at 4 and wind counterclockwise by 6 minutes" - when you've wound back 4 minutes the hand's already at 0, you wind another minute to get to 59, and the final minute brings you to 58. For std::string::size_type the maximum isn't 59 - it's much larger - but the problem's the same. The result is always positive, so it is unaffected by abs, but regardless - not what you wanted!
The actual maximum value can be accessed after #include <limits> with std::numeric_limits<std::string::size_type>::max(), for whatever that's worth.
There are many ways to solve this problem. David Schwartz's comment on Zola's answer lists one good one: std::max(p.size(),q.size())-std::min(p.size(),q.size()), which you can think of as "subtract the smaller value from the larger value". Another option is...
p.size() > q.size() ? p.size() - q.size() : q.size() - p.size()
...which means "if p's larger, subtract q from it, otherwise subtract it (i.e. p) from q".
Is there a way to mask a decimal without rounding in ColdFusion?
Example:
45.5454
I want to get 45, not 46.
It depends on how you want to handle negative numbers.
If you want -45.5454 to be converted to -45, use Fix().
If you want -45.5454 to be converted to -46, use Int().
If you're only dealing with positive numbers either will suffice.
Fix
Description
Converts a real number to an integer.
Returns
If number is greater than or equal to 0, the closest integer less than or equal to number.
If number is less than 0, the closest integer greater than or equal to number.
myNumber=45.5454;
myResult=fix(myNumber);
Int
Description
Calculates the closest integer that is smaller than number. For example, it returns 3 for Int(3.3) and for Int(3.7); it returns -4 for Int(-3.3) and for Int(-3.7).
Returns
An integer, as a string.
myNumber=45.5454;
myResult=int(myNumber);
Use int:
#Int(5.2)# = 5
#Int(2.9)# = 2
Documentation
Recently I found this interesting thing in the WebKit sources, related to color conversions (HSL to RGB):
http://osxr.org/android/source/external/webkit/Source/WebCore/platform/graphics/Color.cpp#0111
const double scaleFactor = nextafter(256.0, 0.0); // this is something like 255.99999999999997
// .. some code skipped
return makeRGBA(static_cast<int>(calcSomethingFrom0To1(blablabla) * scaleFactor),
Same I found here: http://www.filewatcher.com/p/kdegraphics-4.6.0.tar.bz2.5101406/kdegraphics-4.6.0/kolourpaint/imagelib/effects/kpEffectHSV.cpp.html
(int)(value * 255.999999)
Is it correct to use such a technique at all? Why not use something straightforward like round(blabla * 255)?
Is this a feature of C/C++? As I see it, strictly speaking it will not always return the correct result - in 27 cases out of 100. See the spreadsheet at https://docs.google.com/spreadsheets/d/1AbGnRgSp_5FCKAeNrELPJ5j9zON9HLiHoHC870PwdMc/edit?usp=sharing
Could somebody please explain - I think it must be something basic.
Normally we want to map a real value x in the (closed) interval [0,1] to an integer value j in the range [0 ...255].
And we want to do it in a "fair" way, so that, if the reals are uniformly distributed in the range, the discrete values will be approximately equiprobable: each of the 256 discrete values should get "the same share" (1/256) from the [0,1] interval. That is, we want a mapping like this:
[0 , 1/256) -> 0
[1/256, 2/256) -> 1
...
[254/256, 255/256) -> 254
[255/256, 1] -> 255
We are not much concerned about the transition points [*], but we do want to cover the full range [0,1]. How do we accomplish that?
If we simply do j = (int)(x * 255): the value 255 would almost never appear (only when x=1), and the rest of the values 0...254 would each get a share of 1/255 of the interval. This would be unfair, regardless of the rounding behaviour at the limit points.
If we instead do j = (int)(x * 256): this partition would be fair, except for a single problem: we would get the value 256 (out of range!) when x=1 [**]
That's why j = (int)(x * 255.9999...) (where 255.9999... is actually the largest double less than 256) will do.
An alternative implementation (also reasonable, almost equivalent) would be
j = (int)(x * 256);
if(j == 256) j = 255;
// j = x == 1.0 ? 255 : (int)(x * 256); // alternative
but this would be more clumsy and probably less efficient.
round() does not help here. For example, j = (int)round(x * 255) would give a 1/255 share to the integers j=1...254 and half that value to the extreme points j=0, j=255.
[*] I mean: we are not extremely interested in what happens in the 'small' neighbourhood of, say, 3/256: rounding might give 2 or 3, it doesn't matter. But we are interested in the extrema: we want to get 0 and 255 for x=0 and x=1 respectively.
[**] The IEEE floating point standard guarantees that there's no rounding ambiguity here: integers admit an exact floating point representation, the product will be exact, and the casting will give always 256. Further, we are guaranteed that 1.0 * z = z.
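Putting this together, a minimal sketch of the mapping described above (the name mapUnitToByte is just for illustration):

#include <cmath>
#include <iostream>

// Map x in [0, 1] to an integer in [0, 255], giving each bucket a ~1/256 share.
int mapUnitToByte(double x)
{
    // Largest double strictly less than 256, so x == 1.0 still yields 255.
    const double scaleFactor = std::nextafter(256.0, 0.0);
    return static_cast<int>(x * scaleFactor);
}

int main()
{
    std::cout << mapUnitToByte(0.0) << '\n'; // 0
    std::cout << mapUnitToByte(0.5) << '\n'; // 127 (0.5 sits right at a transition point)
    std::cout << mapUnitToByte(1.0) << '\n'; // 255 (not 256, thanks to scaleFactor)
}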
In general, I'd say (int)(blabla * 255.99999999999997) is more correct than using round().
Why?
Because with round(), 0 and 255 only get "half" the range that 1-254 do. If you round(), then 0 to 0.00196078431 gets mapped to 0, while 0.00196078431 to 0.00588235293 gets mapped to 1. This means that 1 is twice as likely to occur as 0, which is, strictly speaking, an unfair bias.
If, instead, one multiplies by 255.99999999999997 and then floors (which is what casting a non-negative value to an integer does, since it truncates), then each integer from 0 to 255 is equally likely.
Your spreadsheet might show this better if it counted in fractional percentages (i.e. if it counted by 0.01% instead of 1% each time). I've made a simple spreadsheet to show this. If you look at that spreadsheet, you'll see that 0 is unfairly biased against when round()ing, but with the other method things are fair and equal.
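If you'd rather check this in code than in a spreadsheet, a quick brute-force count over a fine grid of x values shows the same bias (this is a standalone experiment, not taken from the WebKit source):

#include <array>
#include <cmath>
#include <cstdio>

int main()
{
    const double scale = 255.99999999999997; // largest double below 256
    std::array<long, 256> byScale{}, byRound{};

    // Sample x uniformly in [0, 1] and count how often each bucket is hit.
    const long samples = 1000000;
    for (long i = 0; i <= samples; ++i)
    {
        double x = static_cast<double>(i) / samples;
        ++byScale[static_cast<int>(x * scale)];
        ++byRound[static_cast<int>(std::lround(x * 255.0))];
    }

    // The scale method gives every bucket roughly the same count;
    // round() gives buckets 0 and 255 only about half as many hits.
    std::printf("bucket 0:   scale=%ld  round=%ld\n", byScale[0], byRound[0]);
    std::printf("bucket 1:   scale=%ld  round=%ld\n", byScale[1], byRound[1]);
    std::printf("bucket 255: scale=%ld  round=%ld\n", byScale[255], byRound[255]);
}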
For non-negative values, casting to int has the same effect as the floor function (i.e. it truncates toward zero). When you call round it, well, rounds to the nearest integer.
They do different things, so choose the one you need.
I'm currently working on an electronics project and there's a little problem with the joystick values. The values are "correct" but they look weird.
A classic joystick axis usually works like this (for example, left to right):
Totally left: -128
Center: 0
Totally right: +128
But here's what I read from this one:
Totally left: -0
Slightly to the left: -128
Center: "random" (never exactly zero, fluctuates between -125 and +125)
Slightly to the right: +128
Totally right: +0
For the moment I'm using the following workaround to get a linear progression from -128 to +128:
if (value > 0)
    value = -(128 - value);
else
    value = 128 + value;
The problem is I have to do that on several inputs: 2 axes per joystick, 3 joysticks per device, 4 devices in total, so 24 times, and I need to keep the response time under 20 ms for the entire operation. And that's freaking cycle consuming!
I can do bitwise manipulation on the value.
Here's how I actually build the value. raw_dump contains an array of 0s and 1s read from the controller I/O:
for (i = 0; i < 8; i++) {
    value |= raw_dump[pos + i] ? (0x80 >> i) : 0;
}
Do you have any ideas or a good algorithm? I'm starting to get desperate and I totally suck at binary manipulation... :'(
It looks like whatever mechanism is sampling the joystick actually returns an unsigned byte in the range of 0 .. 255, with 0 at the far left and 255 at the far right.
You can convert that value to the range -128 to 127 with one statement:
value = (value & 0xFF) - 128;
If value is a byte variable, you can shorten that to:
value ^= 0x80;
That conversion should be very quick on any processor, even a 1MHz 6502.
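For example, a quick sanity check of that mapping (plain standalone code, just printing a few sample raw values):

#include <cstdio>

int main()
{
    // Raw unsigned samples: far left, just below center, center, far right.
    const int samples[] = { 0, 127, 128, 255 };

    for (int raw : samples)
    {
        int centered = (raw & 0xFF) - 128; // 0 -> -128, 128 -> 0, 255 -> 127
        std::printf("raw %3d -> %4d\n", raw, centered);
    }
}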
I'm not sure what your second bit of code is about. If you could describe what you're trying to accomplish there, I can offer further insight.