I keep getting an error message: TypeError: unsupported operand type(s) for *: 'dict' and 'float'
I've already changed it to a float, but it didn't work. Please advise!
job = {'fireman': 42600, 'librarian': 35000, 'clerk': 23000}
salary = float(job * 1.05 ** years_of_service)
return salary
...
Is this the way to go about it, or is there a simpler method?
Thank you!
The formula is salary * 1.05**years. Note ** is exponentiation.
First, get a map of the posts with their salaries; for that, use a dictionary (Python's built-in mapping type):
job_salary = {'engineer': 73200, 'programmer': 48700, 'retail': 23000}
The formula for adding a bonus is salary × (1 + percentage increase). Now I leave it up to you to figure out the rest.
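A minimal sketch of just the lookup step that removes the TypeError (the job title used here is only an example key):
job_salary = {'engineer': 73200, 'programmer': 48700, 'retail': 23000}

base = job_salary['engineer']  # index the dict to get one salary (a number), not the whole dict
print(base * 1.05)             # 76860.0 -- a number times a float works fine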
I'm currently using the following formula:
=IFS(regexmatch(A2,"Malaysia"),
B2-dataset!B3,REGEXMATCH(A2,"Saudi Arabia"),
B2-dataset!B7,REGEXMATCH(A2,"Taiwan"),
B2-dataset!B11,REGEXMATCH(A2,"Russia"),
B2-dataset!B15,REGEXMATCH(A2,"Greece"),
B2-dataset!B19,REGEXMATCH(A2,"South Africa"),
B2-dataset!B23,REGEXMATCH(A2,"UAE"),
B2-dataset!B27,REGEXMATCH(A2,"Albania"),
B2-dataset!B31,REGEXMATCH(A2,"India"),
B2-dataset!B35,REGEXMATCH(A2,"South Korea"),
B2-dataset!B39,REGEXMATCH(A2,"Turkey"),
B2-dataset!B43)
The idea is that B2 (currently entered as =date(dd/mm/yyyy)) holds the deadline date, and C2 (where this formula lives) should show the date by which everything should be delivered.
Currently the outcome is a number, not a date. I've tried an IF statement, which does deliver a date, but then I can only add 3 arguments. Can someone help me?
Kind regards
If your current output is a number like 40000+, that's fine; it's just a formatting issue. Either format the cell as a date or wrap the formula in TEXT.
try:
=TEXT(IFS(regexmatch(A2,"Malaysia"),
B2-dataset!B3,REGEXMATCH(A2,"Saudi Arabia"),
B2-dataset!B7,REGEXMATCH(A2,"Taiwan"),
B2-dataset!B11,REGEXMATCH(A2,"Russia"),
B2-dataset!B15,REGEXMATCH(A2,"Greece"),
B2-dataset!B19,REGEXMATCH(A2,"South Africa"),
B2-dataset!B23,REGEXMATCH(A2,"UAE"),
B2-dataset!B27,REGEXMATCH(A2,"Albania"),
B2-dataset!B31,REGEXMATCH(A2,"India"),
B2-dataset!B35,REGEXMATCH(A2,"South Korea"),
B2-dataset!B39,REGEXMATCH(A2,"Turkey"),
B2-dataset!B43), "dd/mm/yyyy")
I am going through the Corda R3 training course and I am making headway, but when asked to create a paid variable initialized to 0, the answer is:
package net.corda.training.state
import net.corda.core.contracts.Amount
import net.corda.core.contracts.ContractState
import net.corda.core.identity.Party
import java.util.*
/**
* This is where you'll add the definition of your state object. Look at the unit tests in [IOUStateTests] for
* instructions on how to complete the [IOUState] class.
*
* Remove the "val data: String = "data" property before starting the [IOUState] tasks.
*/
data class IOUState(val amount: Amount<Currency>,
                    val lender: Party,
                    val borrower: Party,
                    val paid: Amount<Currency> = Amount(0, amount.token)) :
        ContractState {
    override val participants: List<Party> get() = listOf()
}
Now I understand that we need to cast the value to type Amount, but why amount.token? I took the solution from:
https://github.com/corda/corda-training-solutions/blob/master/kotlin-source/src/main/kotlin/net/corda/training/state/IOUState.kt
Also, the task was to define it as Pounds, but I cannot figure out how to do so.
I find the reference for Pounds under:
https://docs.corda.net/api/kotlin/corda/net.corda.finance/kotlin.-int/index.html
I just do not understand how I would define the function.
Anyone have any pointers or suggestions for me? This code compiles and the tests pass, but I want to understand why... Thanks!
The token simply indicates what this is an amount of.
So when used here:
val paid: Amount<Currency> = Amount(0, amount.token)
You're taking whatever token was used for the amount parameter (e.g. POUNDS, DOLLARS, etc.) and setting the paid Amount to the same token type.
Take a look at how it's done in currencies.kt in Corda.
I am working on a problem that had an easy solution in a GAMS file, but I cannot do it with Pyomo.
I would like to impose a daily production limit as a constraint on a certain generation technology.
The generation is a variable indexed over an 8760-hour set, and the sum of this generation per day should stay below a certain limit.
In GAMS, I can easily solve it with the following code:
set t hours in a year /1*8760/ ;
set d day /1*365/;
I create a parameter day(t) for splitting the hours of a year into days:
parameter day(t) ;
day(t)$(ord(t) <= 24)=1;
day(t)$(ord(t)>24 and mod(ord(t),24) ne 0)=round(ord(t)/24-mod(ord(t),24)/24)+1;
day(t)$(ord(t)>24 and mod(ord(t),24) eq 0)= ord(t)/24;
With these sets and the parameter day(t), I can write the following equation as a constraint:
hydro_day(d)..sum(t$(day(t)=ord(d)),hydro_el(t))=l=6*spec('Hydro', 'cap');
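In plain Python the same hour-to-day mapping is just integer division; a quick sketch with 1-based hours, mirroring the GAMS set /1*8760/:
# Hour t (1..8760) belongs to day (t - 1) // 24 + 1 (1..365).
day_of_hour = {t: (t - 1) // 24 + 1 for t in range(1, 8761)}
assert day_of_hour[1] == 1 and day_of_hour[24] == 1
assert day_of_hour[25] == 2 and day_of_hour[8760] == 365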
In Pyomo, I have tried the following, but it doesn't work:
def dcap_rule(model):
    dailyLimit = {}
    for k in range(365):
        dailyLimit[k] = sum(model.discharge[i] for i in sequence((24*(k-1)+1), 24*k))
        return dailyLimit[k] <= model.capa['pumped']*5
model.dcap_limit = Constraint(rule=dcap_rule)
This code only gets applied to the first day (hours 1-24); after the first day there seems to be no constraint.
Could you help me solve this issue?
Thanks in advance
You are declaring a scalar constraint, so only one constraint is being generated. You want to change your code to generate an indexed constraint:
def dcap_rule(model, k):
    return sum(model.discharge[i] for i in sequence((24*(k-1)+1), 24*k)) \
        <= model.capa['pumped']*5
model.dcap_limit = Constraint(range(365), rule=dcap_rule)
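For reference, here is a minimal, self-contained sketch of the same idea on a toy model (the names demo and x and the limits are made up): the rule of an indexed Constraint is called once per index, so one constraint per day is generated.
import pyomo.environ as pyo

# Toy model: 2 "days" of 24 hours each, purely to illustrate indexed constraints.
demo = pyo.ConcreteModel()
demo.T = pyo.RangeSet(1, 48)              # hours 1..48
demo.x = pyo.Var(demo.T, bounds=(0, 10))

def day_limit_rule(m, d):
    # hours belonging to day d (1-based, 24 hours per day)
    hours = range(24 * (d - 1) + 1, 24 * d + 1)
    return sum(m.x[t] for t in hours) <= 100

# One constraint is generated for each d in range(1, 3), i.e. days 1 and 2.
demo.day_limit = pyo.Constraint(range(1, 3), rule=day_limit_rule)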
Hi, I am really new to pandas. I tried to figure out what's going on with the datatypes here, but so far I haven't gotten very far.
What I intend to do is very simple: I am searching for the index of a DataFrame data2 with the time nearest to a target time in data1.
Since data1 and data2 are very similar, with only some minor time differences due to slightly different sampling rates, I attach only a sample of data1 here:
I did something like this to search for the closest match, comparing the timestamps in data2 to the timestamps in data1:
idxcollect = []
for loopidx, tstamploop in enumerate(tstamp_data1[820990:821000]):
    idxtemp = data2[data2['timestamp'] == tstamp_data2.asof(tstamploop)].index
    delta1 = np.abs(data2.timestamp[idxtemp] - data1.timestamp[loopidx])
    delta2 = np.abs(data2.timestamp[idxtemp + 1] - data1.timestamp[loopidx])
    if delta1.iloc[0] < delta2.iloc[0]:
        idxreturn = idxtemp
        idxcollect.append(idxreturn)
    else:
        idxreturn = idxtemp + 1
        idxcollect.append(idxreturn)
tstamp_data1 / tstamp_data2 is of dtype('<M8[ns]'), calculated from epoch time in data1 and data2.
The output I got is:
[Int64Index([809498], dtype='int64'), Int64Index([809499], dtype='int64'), Int64Index([809500], dtype='int64'), Int64Index([809501], dtype='int64'), Int64Index([809502], dtype='int64'), Int64Index([809503], dtype='int64'), Int64Index([809509], dtype='int64'), Int64Index([809513], dtype='int64'), Int64Index([809521], dtype='int64'), Int64Index([809533], dtype='int64')]
What I would like to do is to slice corresponding rows of data2 from the indices found through the operation above, with something as simple as:
data2.ix[ idxcollect[:11] ]
However, with the Int64Index format, I am unable to do anything as simple as that. Is there any way out? Thank you for your time and help!
You may store the index of data2 as a list, make timestamps of data1 a list and create a new DataFrame for storing the data:
data2indx = data2.index.tolist()
data1tm = data1['timestamp'].tolist()
data2sub = pd.DataFrame(columns = data2.columns)
Then slice data2 and append row to data2sub based on selection:
for n, i in enumerate(data1tm):
    c = [abs(i - j) for j in data2indx]
    mins = min(c)
    index = c.index(mins)
    data2sub.loc[n] = data2.iloc[index]
Maybe someone can contribute a more efficient approach.
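As a possibly more efficient sketch (not tested against the original data): pandas.merge_asof with direction='nearest' matches every row of one frame to the row of the other with the closest timestamp. The toy frames below stand in for data1 and data2, assuming each has a sorted datetime64 'timestamp' column.
import pandas as pd

data1 = pd.DataFrame({'timestamp': pd.to_datetime(['2015-01-01 00:00:00.1',
                                                   '2015-01-01 00:00:01.1'])})
data2 = pd.DataFrame({'timestamp': pd.to_datetime(['2015-01-01 00:00:00.0',
                                                   '2015-01-01 00:00:01.0',
                                                   '2015-01-01 00:00:02.0'])})

matched = pd.merge_asof(data1, data2.reset_index(),  # keep data2's index as a column
                        on='timestamp', direction='nearest')
print(matched['index'].tolist())  # indices of the nearest data2 rows: [0, 1]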
I found a way to solve the speed problem. The thing is, it takes more time to search for the nearest timestamp than to search for the nearest float value.
So the trick is that, as you may have noticed in the data, I already have a timesec column.
What I did was set the first timestamp to 0 and then, from there onwards, add the corresponding timedelta (calculated from the timestamps) to that baseline of 0. This yields the timesec column, an easy and fast computation.
In this question, I asked about the "iterable" numbers, and as Robbie pointed out, the .tolist() function solves the problem of the nested list of lists. However, it takes 60 hours to search for a mere 87258 timestamps in another dataset. To speed this up, you can use timesec for a cleaner and faster search instead.
By implementing a simple getnearpos function from a previous Stack Overflow answer:
def getnearpos(array, value):
    idx = (np.abs(array - value)).argmin()
    return idx
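A tiny usage sketch (the timesec values below are made up; in practice they are the float seconds described above):
import numpy as np

timesec_data2 = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
print(getnearpos(timesec_data2, 1.2))  # -> 2, the position of the nearest value (1.0)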
The search for 87258 timestamps now turns into a search for 87258 float numbers, and the time it takes to finish the search is: 1h 1min 23s, a huge improvement compared to the ~60 hours mark.
If anyone who happens to view this question knows of a faster implementation, do share with me. I am really eager to learn!! Thank you!
I have this code:
msgs = int(post['time_in_weeks'])

for i in range(msgs):
    tip_msg = Tip.objects.get(week_number=i)
It always results in an error saying that no values could be found.
week_number is an integer field. When I input the value of i directly,
the query works.
When I print out the value of i, I get the expected values.
Any input at all would be seriously appreciated.
Thanks.
The range function gives you a zero-based sequence of numbers up to, but excluding, msgs. I would guess there is no Tip with week_number=0.
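A quick sketch of the off-by-one (assuming your weeks are numbered starting at 1):
msgs = 4
print(list(range(msgs)))         # [0, 1, 2, 3] -- starts at 0 and excludes msgs
print(list(range(1, msgs + 1)))  # [1, 2, 3, 4] -- matches weeks numbered from 1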
Also, to limit the number of queries you could do this:
for tip in Tip.objects.filter(week_number__lt=msgs):
    # do something with tip
or, if you want specific weeks:
weeks = (1, 3, 5)
for tip in Tip.objects.filter(week_number__in=weeks):
    # do something