ColdFusion - rounding to nearest 5 cents

In ColdFusion, how does one round a decimal to the nearest 5 cents? For example, 0.39675 would round up to 0.40, and 0.3690 would round down to 0.35.
I can't seem to find anything useful via Google.
Sorry for the brief question, but I think that's all I can really add.

Multiply by 20, round it, divide by 20:
RoundedNumber = ( Round( Number * 20 ) / 20 )
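The same multiply-round-divide trick in Python, as a minimal sketch (the function name is mine; for real currency you would likely use Decimal, and note that Python 3's round() uses banker's rounding on exact halves):

# Round to the nearest 5 cents by scaling to twentieths of a unit.
def round_to_nearest_5_cents(amount):
    return round(amount * 20) / 20

print(round_to_nearest_5_cents(0.39675))  # 0.4
print(round_to_nearest_5_cents(0.3690))   # 0.35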

Related

How Python calculates the % function - can someone please explain 3%5

How does Python calculate the % function? Can someone please explain why 3%5 gives 3 in Python? The answer for 5%3 is also showing 3. I use Python 2.7.
The Python % operator isn't percentage, it's modulo. That means the remainder part of a division. Remember when you were a kid and your math problems would be like 11 divided by 3 = 3 R 2 (remainder 2)? That's what % does. 5 % 3 = 2.
If you want to calculate percentage, do that yourself like A * 100.0 / B.
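A quick Python session makes the distinction concrete (these are the values Python actually returns):

# % is modulo (the remainder after division), not percentage.
print(3 % 5)   # 3 -> 5 goes into 3 zero times, remainder 3
print(5 % 3)   # 2 -> 3 goes into 5 once, remainder 2
print(11 % 3)  # 2 -> the "11 divided by 3 = 3 R 2" example above

# To get a percentage, compute it yourself.
a, b = 3, 5
print(a * 100.0 / b)  # 60.0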

Format or pattern for decimal numbers displayed on an axis in Google Charts

Good morning,
Whenever we create a data table with certain data for a column chart, i.e.
['Year', '% of Total Revenues', '% of Total Orders'],
['Feb12-July12', 0.25, 0.36],
['Aug12-Jan12', 0.58, 0.69],
['Feb13-July14', 0.47, 0.14],
['Aug13-Jan14', 0.62, 0.84]
the output on the vAxis displays values from 0.1 to 0.98.
But when I want to append a % symbol to the given input values (0.01%, 0.02%, up to 0.98%), the chart converts the decimals into whole numbers, e.g. 0.65 becomes 65. What kind of pattern do I have to pass, for example vAxis:{format:'#.##%'}?
Please help me.
Thanks in advance.
If I understood right, you want to keep the values as decimals instead of showing 0.5 as 50%. This should do the trick:
vAxis:{format: '#.#\'%\'' }
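For comparison only (this is Python, not Google Charts), Python's format specifiers make the same distinction between a percent format that scales by 100 and a literal '%' suffix, which is the difference between '#.##%' and '#.#\'%\'':

value = 0.25
print(f"{value:.2%}")   # '25.00%' -> the % spec multiplies by 100, like '#.##%'
print(f"{value:.2f}%")  # '0.25%'  -> a literal % appended, like '#.#\'%\''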

Calculating the distance between characters

Problem: I have a large number of scanned documents that are linked to the wrong records in a database. Each image has the correct ID on it somewhere that says where it belongs in the db.
I.e., a DB row could be:

| user_id | img_id | img_loc  |
|---------|--------|----------|
| 1       | 1      | /img.jpg |
img.jpg would have the user_id (1) on the image somewhere.
Method/Solution: Loop through the database, pull the image text into a variable with OCR, and check whether the user_id is found anywhere in the variable. If not, flag the record/image in a log; if so, do nothing and move on.
My example is simple; in the real world I have a guarantee that the user_id wouldn't accidentally show up on the wrong form (it is of a specific format that has its own significance).
Right now it is working. However, it is incredibly strict. If you've worked with OCR you understand how fickle it can be: sometimes a 7 reads as a 1, or a 9 as a 7, etc. The result is a large number of false positives, especially among images with low-quality scans.
I've addressed some of the image quality issues with processing on my side - increasing the image size and adjusting the black/white threshold - and had satisfying results. I'd also like the program to recognize, for example, that "81723103" is not very far from "81923103" (only one digit differs).
The only way I know how to do that is to check strings whose length is >= the length of what I'm looking for, calculate the distance between each pair of characters, compute the average, and set a limit on what counts as a good average.
Some examples:
Ex 1
81723103 - Looking for this
81923103 - Found this
--------
00200000 - distances between characters
0 + 0 + 2 + 0 + 0 + 0 + 0 + 0 = 2
2/8 = .25 (pretty good match. 0 = perfect)
Ex 2
81723103 - Looking
81158988 - Found
--------
00635885 - distances
0 + 0 + 6 + 3 + 5 + 8 + 8 + 5 = 35
35/8 = 4.375 (Not a very good match. 9 = worst)
This way I can tell it "Flag the bottom 30% only" and dump anything with an average distance > 6.
I figure I'm reinventing the wheel and wanted to share this for feedback. I see a huge increase in run time and a performance hit doing all these string operations over what I'm currently doing.
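A minimal Python sketch of the per-digit averaging scheme described above (the function name is mine; it assumes both strings are numeric and the same length):

def avg_digit_distance(expected, found):
    # Average absolute difference between corresponding digits; 0 = perfect match.
    if len(expected) != len(found):
        raise ValueError("strings must be the same length")
    total = sum(abs(int(a) - int(b)) for a, b in zip(expected, found))
    return total / len(expected)

print(avg_digit_distance("81723103", "81923103"))  # 0.25
print(avg_digit_distance("81723103", "81158988"))  # 4.375

For OCR-style confusions an edit distance such as Levenshtein may serve better, since it also tolerates missing or extra characters, but the sketch above reproduces the averaging in the question.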

understanding precision and scale on a property

property name="poiLat" length="60" ormtype="big_decimal" persistent=true precision="16" scale="14" default="0" hint="";
I don't understand precision or scale correctly. Using the property above, why would (1) give an error while (2) is accepted? What should I change it to so it accepts (1)?
1) -118.27 = error
2) -18.27 = ok
Precision is the total number of digits; scale refers to the number of digits to the right of the decimal point. If you have precision 16 and scale 14, you can only have 2 digits to the left of the decimal point, so
18.12345678901234 = ok
118.27 = error
Try:
precision="16" scale="13"
That will allow 118.1234567890123, but that is a lot of decimal places. How many do you really need?
precision="16" scale="4"
Will allow 123456789012.1234
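As a rough illustration of the rule (not the ORM's actual validation), precision is the total number of digits and scale is how many of them sit after the decimal point:

def fits(value, precision, scale):
    # A value fits if it has at most (precision - scale) digits before the
    # decimal point and at most `scale` digits after it.
    whole, _, frac = str(abs(value)).partition(".")
    return len(whole.lstrip("0")) <= precision - scale and len(frac) <= scale

print(fits(-118.27, 16, 14))  # False: 3 digits before the point, only 2 allowed
print(fits(-18.27, 16, 14))   # True
print(fits(-118.27, 16, 13))  # True: scale 13 leaves room for 3 digits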

How to calculate a rating?

My question is more mathematical. There is a post on the site, and users can like or dislike it. Below the post, something like -5 dislikes and +23 likes is shown. Based on these values I want to produce a rating in the range 0-10 (or -10 to 0 and 0 to 10). How do I do this correctly?
This may not answer your question exactly, as you need a rating between [-10, 10], but this blog post describes the best way to score items that have positive and negative ratings (in your case, likes and dislikes).
A simple method like
(Positive ratings) - (Negative ratings), or
(Positive ratings) / (Total ratings)
will not give optimal results.
Instead, the author uses a method called the binomial proportion confidence interval.
The relevant part of the blog post is copied below:
CORRECT SOLUTION: Score = Lower bound of Wilson score confidence interval for a Bernoulli parameter
Say what: We need to balance the proportion of positive ratings with the uncertainty of a small number of observations. Fortunately, the math for this was worked out in 1927 by Edwin B. Wilson. What we want to ask is: Given the ratings I have, there is a 95% chance that the "real" fraction of positive ratings is at least what? Wilson gives the answer. Considering only positive and negative ratings (i.e. not a 5-star scale), the lower bound on the proportion of positive ratings is given by:
$$\frac{\hat{p} + \frac{z_{\alpha/2}^2}{2n} \pm z_{\alpha/2}\sqrt{\left[\hat{p}(1-\hat{p}) + \frac{z_{\alpha/2}^2}{4n}\right]/n}}{1 + \frac{z_{\alpha/2}^2}{n}}$$
(source: evanmiller.org)
(Use minus where it says plus/minus to calculate the lower bound.) Here $\hat{p}$ is the observed fraction of positive ratings, $z_{\alpha/2}$ is the $(1-\alpha/2)$ quantile of the standard normal distribution, and $n$ is the total number of ratings.
Here it is, implemented in Ruby, again from the blog post.
require 'statistics2'

def ci_lower_bound(pos, n, confidence)
  if n == 0
    return 0
  end
  z = Statistics2.pnormaldist(1 - (1 - confidence) / 2)
  phat = 1.0 * pos / n
  (phat + z*z/(2*n) - z * Math.sqrt((phat*(1-phat) + z*z/(4*n))/n)) / (1 + z*z/n)
end
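If you are not on Ruby, the same lower bound ports directly; here is a rough Python 3 equivalent using statistics.NormalDist for the normal quantile (the function name is kept from the Ruby version):

from math import sqrt
from statistics import NormalDist

def ci_lower_bound(pos, n, confidence=0.95):
    # Lower bound of the Wilson score interval for a Bernoulli parameter.
    if n == 0:
        return 0
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    phat = pos / n
    return (phat + z*z/(2*n) - z * sqrt((phat*(1-phat) + z*z/(4*n)) / n)) / (1 + z*z/n)

print(ci_lower_bound(23, 23 + 5))  # ~0.64 for the 23 likes / 5 dislikes in the question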
This is an extension to Shepherd's answer.
total_votes = num_likes + num_dislikes;
rating = round(10*num_likes/total_votes);
It depends on the number of visitors to your app. Let's say you expect about 100 users to rate your app. When the first user clicks dislike, we would rate it as 0 based on the approach above. But that is not logically right, since our sample is far too small to call it a zero. The same goes for a single positive vote: it gets a 10 rating.
A better approach is to add a constant value to the numerator and denominator. Let's say our app has 100 visitors; it is safe to assume that until we get about 10 ups/downs, we should not go to either extreme (neither a 0 nor a 10 rating). So just add 5 to both the likes and the dislikes.
num_likes = num_likes + 5;
num_dislikes = num_dislikes + 5;
total_votes = num_likes + num_dislikes;
rating = round(10*(num_likes)/(total_votes));
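A short Python version of both variants, assuming the 0-10 scale and the +5 smoothing constant suggested above (the names are mine):

def rating(num_likes, num_dislikes, smoothing=0):
    # 0 = all dislikes, 10 = all likes; smoothing pulls small samples toward 5.
    likes = num_likes + smoothing
    total = likes + num_dislikes + smoothing
    return round(10 * likes / total)

print(rating(23, 5))              # 8  - plain percentage-style rating
print(rating(1, 0))               # 10 - a single like jumps straight to the extreme...
print(rating(1, 0, smoothing=5))  # 5  - ...while smoothing keeps it moderate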
It sounds like what you want is basically the percentage liked/disliked. I would use 0 to 10, rather than -10 to 10, because the latter could be confusing. So on a 0 to 10 scale, 0 would be "all dislikes" and 10 would be "all likes".
total_votes = num_likes + num_dislikes;
rating = round(10*num_likes/total_votes);
And that's basically it.