I want to convert "12 7-1/2" to the superscript form 12⁷⁻¹/². Is there a way to reduce the length of the "/"? The "12 7-1/2" is in a single cell. I converted it manually with the code below.

# function to convert to superscript
def get_super(x):
    # translation table from normal characters to their Unicode superscript forms
    normal = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+-=()"
    super_s = "ᴬᴮᶜᴰᴱᶠᴳᴴᴵᴶᴷᴸᴹᴺᴼᴾQᴿˢᵀᵁⱽᵂˣʸᶻᵃᵇᶜᵈᵉᶠᵍʰᶦʲᵏˡᵐⁿᵒᵖ۹ʳˢᵗᵘᵛʷˣʸᶻ⁰¹²³⁴⁵⁶⁷⁸⁹⁺⁻⁼⁽⁾"
    res = str.maketrans(normal, super_s)  # characters not in the table (like "/") pass through unchanged
    return x.translate(res)

# display superscript
print('12' + get_super('7-1/2'))  # result: 12⁷⁻¹/²

The "12" is added manually and needs to be automated; I have 12,000 rows and 5 columns. Will the elongated "/" affect calculations?
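One way the prefix split could be automated, sketched under the assumption that each cell holds a whole-number part and a fractional part separated by a single space, and that the data sits in a pandas DataFrame (df and its columns are hypothetical):

import pandas as pd

def to_super(value):
    # keep the part before the first space as-is, superscript the rest
    base, _, rest = str(value).partition(' ')
    return base + get_super(rest) if rest else base

for col in df.columns:        # 5 columns, 12000 rows
    df[col] = df[col].map(to_super)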

Sorting sizes with two or more numbers in Django

I am trying to sort items that have sizes described by two numbers, like the following:
10 x 13
100 x 60
7 x 8
The size is saved as a string. I want them sorted like this (first by the first dimension, then by the second):
7 x 8
10 x 13
100 x 60
How can this be achieved with Django? It would be nice if we could somehow use
Item.objects.sort
I would advise not to store these as a string, but as two IntegerFields, for example, with:
class Item(models.Model):
    width = models.IntegerField()
    height = models.IntegerField()

    @property
    def size(self):
        return f'{self.width}x{self.height}'

    @size.setter
    def size(self, value):
        self.width, self.height = map(int, value.split('x'))
Then you can easily sort with Item.objects.order_by('width', 'height'), for example. We thus have a property .size that formats the item's size, together with a setter that can "parse" the value and put the width and height in the corresponding fields.
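For example, the setter and the ordering could be used like this (a sketch, assuming the Item model as defined above):

item = Item()
item.size = '10x13'                        # the setter fills width=10, height=13
item.save()
Item.objects.order_by('width', 'height')   # first by width, then by height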
You could use sorted for this, with a key built from Python's math library. I had a similar problem way back; this is what I used and it worked just fine. Note that math.hypot sorts by diagonal length rather than dimension by dimension, which happens to give the requested order here.
import math
l = ['10x13', '100x60','7x8']
sorted(l, key=lambda dim: math.hypot(*map(int, dim.split('x'))))
# ['7x8', '10x13', '100x60']
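If you need exactly the order the question asks for (first by the first dimension, then by the second), a plain tuple key gives that without any distance measure:

l = ['10x13', '100x60', '7x8']
sorted(l, key=lambda dim: tuple(map(int, dim.split('x'))))
# ['7x8', '10x13', '100x60']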

Find the number of 0's at the end of an integer using Power Query (Power BI)

I want to find the number of 0's at the end of an integer.
E.g. for 2020 it should count 1,
for 2000 it should count 3,
for 3010000 it should count 4.
I have no idea how to count just the ending zeros rather than all of the zeros!
Someone please help :)
Go to the Power Query Editor and add a Custom Column with the code below:
if Number.Mod([number],100000) = 0 then 5
else if Number.Mod([number],10000) = 0 then 4
else if Number.Mod([number],1000) = 0 then 3
else if Number.Mod([number],100) = 0 then 2
else if Number.Mod([number],10) = 0 then 1
else 0
This assumes the highest possible number of trailing 0's is 5. You can add more if/else cases following the above logic if you expect longer runs of zeros at the end.
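The same modulo idea can also be generalized with a loop so there is no hard cap; here is the logic sketched in Python rather than Power Query M, purely for illustration:

def trailing_zeros(n):
    # count how many times n divides evenly by 10
    count = 0
    while n != 0 and n % 10 == 0:
        n //= 10
        count += 1
    return count

print(trailing_zeros(3010000))  # 4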
Take advantage of the fact that the text "00123" converted to a number will be 2 characters shorter.
= let
TxtRev = Text.Reverse(Number.ToText([num]))&"1", /*convert to text and reverse, add 1 to handle num being 0*/
TxtNoZeroes = Number.ToText(Number.FromText(TxtRev)) /*convert to number to remove starting zeroes and then back to text*/
in
Text.Length(TxtRev)-Text.Length(TxtNoZeroes) /*compare length of original value with length without zeroes*/
This will work for any number of trailing zeroes (up to the Int64 capacity, of course, minus space for the &"1"). This assumes the column is of number type; if it's text, just remove Number.ToText in TxtRev. If you have negative numbers or decimals, replace the characters that aren't digits after converting to text. For an initial number of 0 it shows 1, but if it should show 0, just remove &"1".
You can do it as general string manipulation:
= Text.Length(Text.From([number])) - Text.Length(Text.TrimEnd(Text.From([number]), "0"))
We convert the value to a string, strip off the trailing zeroes, and subtract that from the total length, giving the number of stripped zeroes.
Edit: I messed up my first answer; this one should in fact be correct.
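For comparison, the same trim-and-compare-lengths idea in Python (illustration only; the question itself is about Power Query):

s = str(3010000)
print(len(s) - len(s.rstrip('0')))  # 4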

Write a string and a formula at the end of each group of data in a particular range

I have data in Excel and want to write a string with a sum for each group in the table.
I want to loop through a range, write the string "Subtotal" in the first column, and apply the formula '=SUM({}:{})' covering the rows from the group's start down to the row just before the formula.
I know the start and end of the range.
How can I achieve this with a loop, so that at the first blank row found I write the string and the formula?
Below is the code I'm trying, but it does not work.
row_start = number_rows_placement + number_rows_adsize + 20
row_end = number_rows_placement + number_rows_adsize + number_rows_daily + unqiue_final_day_wise * 5 + 15
for i in range(row_start, row_end):
    if i == " ":
        worksheet.write(i, 1, "Subtotal", format)
        i += 5
        worksheet.write_formula(i, 2, '=sum(:)', format)
It doesn't seem to be working, and I don't know where I'm wrong. Also, the SUM range would vary: it should run from just after each group's header down to the row just above the subtotal formula.
The formula isn't valid in Excel: '=sum(:)' has an empty range. It should also be written as =SUM(), uppercase.
Also, you can generate the range for the formula with something like this:
from xlsxwriter.utility import xl_range
row_start = 60
row_end = 64
col = 1
cell_range = xl_range(row_start, col, row_end, col) # B61:B65
See the XlsxWriter Cell Utility Functions.
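Putting the pieces together, a minimal sketch of the kind of loop the question is after; the group boundaries here are hypothetical placeholders, since the real ones come from the data:

import xlsxwriter
from xlsxwriter.utility import xl_range

workbook = xlsxwriter.Workbook('subtotals.xlsx')
worksheet = workbook.add_worksheet()

# hypothetical (first_row, last_row) pairs for each group, 0-indexed
groups = [(1, 5), (8, 12)]

for first_row, last_row in groups:
    subtotal_row = last_row + 1
    cell_range = xl_range(first_row, 2, last_row, 2)   # e.g. C2:C6
    worksheet.write(subtotal_row, 1, 'Subtotal')
    worksheet.write_formula(subtotal_row, 2, '=SUM({})'.format(cell_range))

workbook.close()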

create a list from the individual values of one number

I am creating a way to convert an Arabic numeral into a Roman numeral. If the Arabic numeral to be converted is 124, I would like to create a list that contains the values 100, 20, and 4. So basically I need to find the base-10 decomposition of 124 and create a list of the values. Another example: 1,891 = 1,000 + 800 + 90 + 1, so the list could look like this: list = [1000, 800, 90, 1]. I hope this explanation isn't too obscure for you to understand, and thank you.
Something like this would work:
def Roman(input):
    # split the number into its digits, most significant first
    digits = [int(i) for i in str(input)]
    # matching powers of ten, highest first (reversed() returns an iterator, which zip() accepts)
    powers = reversed(range(len(digits)))
    return [digit * 10 ** power for digit, power in zip(digits, powers)]
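For example:

print(Roman(124))   # [100, 20, 4]
print(Roman(1891))  # [1000, 800, 90, 1]

Note that zero digits produce 0 entries (e.g. 1024 gives [1000, 0, 20, 4]); filter those out if the Roman-numeral step doesn't want them.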

Dynamically Delete Elements Within an R Loop

OK guys, as requested I will add more info so that you understand why a simple vector operation is not possible. It's not easy to explain in a few words, but let's see. I have a huge number of points over a 2D space.
I divide my space into a grid with a given resolution, say, 100 m. The main loop, which I am not sure is mandatory (any alternative is welcome), goes through EACH cell/pixel that contains at least 2 points (right now I am using the quadratcount method within the spatstat package).
Inside this loop, i.e. for each one of these non-empty cells, I have to find and keep only a maximum of 10 male-female pairs that are within 3 meters of each other. The 3-meter buffer can be built using the "disc" function within spatstat. To select points falling inside a buffer you can use the pnt.in.poly method within the SDMTools package. All this because pixels have a maximum capacity that cannot be exceeded. Since each cell can contain hundreds or thousands of points, I am trying to find a smart way to use another loop (or a similar method) to:
1) go through each point, one at a time
2) create a buffer and select points of the opposite sex
3) save the closest male-female (0-1) pair in another data frame (called new_colonies)
4) remove those points from the data frame so that it shrinks and I don't have to consider them any more
5) as soon as that new data frame reaches 10 rows, stop everything and go to the next cell (thus skipping all remaining points)
Here is the code that I developed to be run within each cell (right now it takes too long):
head(df,20):
X Y Sex ID
2 583058.2 2882774 1 1
3 582915.6 2883378 0 2
4 582592.8 2883297 1 3
5 582793.0 2883410 1 4
6 582925.7 2883397 1 5
7 582934.2 2883277 0 6
8 582874.7 2883336 0 7
9 583135.9 2882773 1 8
10 582955.5 2883306 1 9
11 583090.2 2883331 0 10
12 582855.3 2883358 1 11
13 582908.9 2883035 1 12
14 582608.8 2883715 0 13
15 582946.7 2883488 1 14
16 582749.8 2883062 0 15
17 582906.4 2883317 0 16
18 582598.9 2883390 0 17
19 582890.2 2883413 0 18
20 582752.8 2883361 0 19
21 582953.1 2883230 1 20
Inside each cell I must run something like the following, according to what I explained above:
# initialise outside the loop so pairs accumulate across iterations
new_colonies <- data.frame(ID1 = 0, ID2 = 0, X = 0, Y = 0)
for (i in 1:dim(df)[1]) {
  discbuff <- disc(radius, centre = c(df$X[i], df$Y[i]))
  # define the points and polygon
  pnts <- cbind(df$X[-i], df$Y[-i])
  polypnts <- cbind(x = discbuff$bdry[[1]]$x, y = discbuff$bdry[[1]]$y)
  out <- pnt.in.poly(pnts, polypnts)
  out$ID <- df$ID[-i]
  if (any(out$pip == 1)) {
    pnt.inBuffID <- out$ID[which(out$pip == 1)]
    cond <- df$Sex[i] != df$Sex[pnt.inBuffID]
    if (any(cond)) {
      eucdist <- sqrt((df$X[i] - df$X[pnt.inBuffID][cond])^2 +
                      (df$Y[i] - df$Y[pnt.inBuffID][cond])^2)
      IDvect <- pnt.inBuffID[cond]
      nearest <- which(eucdist == min(eucdist))
      new_colonies_temp <- data.frame(
        ID1 = df$ID[i],
        ID2 = IDvect[nearest],
        X = (df$X[i] + df$X[pnt.inBuffID][cond][nearest]) / 2,
        Y = (df$Y[i] + df$Y[pnt.inBuffID][cond][nearest]) / 2)
      new_colonies <- rbind(new_colonies, new_colonies_temp)
      if (dim(new_colonies)[1] == maxdensity) break
    }
  }
}
# drop the dummy first row
new_colonies <- new_colonies[-1, ]
Any help appreciated!
Thanks
Francesco
In your case I wouldn't worry about deleting the points as you go; skipping is the critical thing. I also wouldn't build up a new data.frame piece by piece like you seem to be doing. Both of those things slow you down a lot. A selection vector is much more efficient (perhaps a column of the data.frame that you set to FALSE beforehand).
df$sel <- FALSE
Now, when you go through you set df$sel to TRUE for each item you want to keep. Just skip to the next cell when you find your 10. Deleting values as you go will be time consuming and memory intensive, as will slowly growing a new data.frame. When you're all done going through them then you can just select your data based on the selection column.
df <- df[ df$sel, ]
(or maybe make a copy of the data.frame at that point)
You also might want to use the dist function to calculate a matrix of distances.
from ?dist
"This function computes and returns the distance matrix computed by using the specified distance measure to compute the distances between the rows of a data matrix."
I'm assuming you are doing something sufficiently complicated that the for-loop is actually required...
So here's one rather simple approach: first just gather the rows to delete (or keep), and then delete the rows afterwards. Typically this will be much faster too since you don't modify the data.frame on each loop iteration.
df <- generateTheDataFrame()
keepRows <- rep(TRUE, nrow(df))
for(i in seq_len(nrow(df))) {
rows <- findRowsToDelete(df, df[i,])
keepRows[rows] <- FALSE
}
# Delete afterwards
df <- df[keepRows, ]
...and if you really need to work on the shrunk data in each iteration, just change the for-loop part to:
for(i in seq_len(nrow(df))) {
if (keepRows[i]) {
rows <- findRowsToDelete(df[keepRows, ], df[i,])
keepRows[rows] <- FALSE
}
}
I'm not exactly clear on why you're looping. If you could describe what kind of conditions you're checking there might be a nice vectorized way of doing it.
However, as a very simple fix, have you considered looping through the data frame backwards?