NFT staking smart contract with flexible daily ROI over time - blockchain

I am working on an NFT staking smart contract, but the daily ROI should change as time goes on.
For example, the normal ROI is 3% daily, but after 3 days it drops to 1.5%, and after 9 days it becomes 0.
The staker can restore the ROI with an additional purchase of the ERC20 token or with claimable rewards.
To sum up, the main problem is calculating the daily ROI while letting the staker claim rewards at any time.
How should I approach this in Solidity?
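One way to approach it: store each stake's start timestamp, and compute accrued rewards on demand as a piecewise-linear function of elapsed time. Below is a minimal Python sketch of that accrual math only, with the tiers taken from your example; all names are hypothetical, and in Solidity you would port it using block.timestamp and integer basis points (300/150 bps), since there are no floats on-chain.

DAY = 86_400  # seconds per day

# Tier schedule from the question: 3%/day for days 0-3, 1.5%/day for days
# 3-9, 0% afterwards. Each entry is (tier start in days, daily rate).
TIERS = [(0, 0.03), (3, 0.015), (9, 0.0)]

def accrued_rewards(principal: float, start_ts: int, now_ts: int) -> float:
    """Rewards accrued between start_ts and now_ts under the tiered schedule."""
    elapsed_days = (now_ts - start_ts) / DAY
    total = 0.0
    for i, (tier_start, rate) in enumerate(TIERS):
        tier_end = TIERS[i + 1][0] if i + 1 < len(TIERS) else float("inf")
        # How many of the elapsed days fall inside this tier
        days_in_tier = max(0.0, min(elapsed_days, tier_end) - tier_start)
        total += principal * rate * days_in_tier
    return total

# 3 days @ 3% + 2 days @ 1.5% on a stake of 1000 -> 120.0
print(accrued_rewards(1000, 0, 5 * DAY))

Claiming can then happen at any time: pay out the accrual minus what was already claimed, tracking the claimed total per staker, without touching start_ts, so the tier keeps decaying. A qualifying ERC20 purchase (or reward reinvestment) would reset start_ts to block.timestamp, restoring the 3% tier.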

Related

Calculating Time of Use value for solar panels?

I am trying to apply Time of Use (TOU) metering rates to calculate the dollar value of the energy produced by a solar panel array (San Diego Gas and Electric's EV-TOU-5 rate, in particular). The value changes according to the time of day, day of week, and holidays during the year. This will be part of a larger model that factors in savings from EV charging at off-peak rates after midnight, etc., so just displaying kWh/month misses the full picture I'm trying to model.
Rather than jumping into coding this from scratch with PVLIB, I would appreciate any pointers to existing software.
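Whatever tool ends up producing the kWh series, the valuation step itself is just joining hourly production against a timestamp-to-rate lookup. A minimal pandas sketch, with placeholder rates and windows rather than the real EV-TOU-5 tariff:

import pandas as pd

# Placeholder rates ($/kWh) and windows -- NOT the real SDG&E EV-TOU-5 tariff.
def tou_rate(ts: pd.Timestamp) -> float:
    """Map a timestamp to a $/kWh rate by time of day and weekday/weekend."""
    weekend = ts.dayofweek >= 5          # holidays would be checked here too
    if 16 <= ts.hour < 21:               # assumed on-peak window, 4-9 pm
        return 0.55
    if ts.hour < 6 or weekend:           # assumed super off-peak / weekend
        return 0.15
    return 0.30                          # assumed off-peak

# Hourly kWh produced by the array, e.g. from PVLIB or a monitoring export
production = pd.Series(
    [0.0, 1.2, 2.5, 1.8],
    index=pd.date_range("2024-06-01 10:00", periods=4, freq="h"),
)

rates = production.index.to_series().map(tou_rate)
dollar_value = (production * rates).sum()
print(f"${dollar_value:.2f}")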

pvlib: time convention issue with cumulated GHI

I have a question regarding time conventions in pvlib.
As far as I understand, all computations use an instantaneous-timestep convention: instantaneous weather produces instantaneous electric power.
However, in weather models like GFS, the GHI parameter is cumulated over the last hour. This makes the solar radiation inconsistent with the astronomical parameters (zenith, azimuth, ...).
For example, consider the erbs function, used to compute DNI and DHI from GHI:
df_dni_dhi_kt = pvlib.irradiance.erbs(ghi, zenith, ghi.index)
Here, all parameters are hourly time series, but because of the convention, the output may be inaccurate:
ghi: radiation cumulated over the last hour
zenith: zenith angle at the exact hour (instantaneous)
ghi.index: hourly DatetimeIndex
At the end of the power conversion process, a shift is observed between the observations and the model (please ignore the amplitude difference; only the time shift matters).
Any ideas on how to use cumulated GHI as input to the library?
When using hourly data there is definitely a dilemma in how to calculate the solar position. The most common method is to calculate the solar position for the middle of the time step. This is definitely an improvement over using either the start or end of the hour (as shown in your example). However, around sunrise and sunset this poses an issue, as the sun may be below the horizon at the middle of the hour. Thus some calculate the sun position for the middle of the period defined as the part of the hour where the sun is above the horizon - but that adds complexity.
There's a good discussion on the topic here: https://pvlib-python.readthedocs.io/en/stable/gallery/irradiance-transposition/plot_interval_transposition_error.html
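As a rough sketch of that midpoint convention in pvlib (placeholder coordinates and data; this assumes the GHI series is right-labeled, i.e., each timestamp carries the irradiance cumulated over the preceding hour):

import pandas as pd
import pvlib

latitude, longitude = 32.7, -117.2   # placeholder site coordinates

# Stand-in for a right-labeled hourly GHI series: each timestamp holds the
# irradiance cumulated over the PRECEDING hour (the GFS convention).
ghi = pd.Series(
    [0.0, 120.0, 350.0, 510.0],
    index=pd.date_range("2024-06-01 06:00", periods=4, freq="h", tz="UTC"),
)

# Evaluate the solar position at the middle of each averaging interval
times_mid = ghi.index - pd.Timedelta(minutes=30)
solpos = pvlib.solarposition.get_solarposition(times_mid, latitude, longitude)
zenith_mid = solpos["zenith"].to_numpy()   # realigns positionally with ghi

# Decompose GHI with ERBS using the interval-midpoint zenith
out = pvlib.irradiance.erbs(ghi, zenith_mid, ghi.index)
dni, dhi = out["dni"], out["dhi"]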

How to assess the variance explained by a principal component in a subset of the data?

I have conducted a PCA (in Matlab) on a set of thousands of points of spatial data. I have also calculated the variance explained across the full dataset by each principal component (i.e., PC or eigenvector) by dividing its eigenvalue by the sum of all eigenvalues. As an example, PC 15 accounts for 2% of the variance in the entire dataset; however, there is a subset of points in this dataset for which I suspect PC 15 accounts for a much higher share of their variance (e.g., 80%).
My question is this: is there a way to calculate, from my existing analysis, the variance explained by a given PC for only a subset of points (i.e., 1,000 points from the full dataset of 500k+)? I know that I could run another PCA on just the subset, but for my purposes I need to keep using the PCs from my original analysis. Any ideas on how to do this would be very helpful.
Thanks!
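One answer (sketched below in NumPy with random stand-in data; the same idea works in Matlab with the coeff matrix from pca): since the original PCs form an orthonormal basis, projecting the subset onto all of them preserves its total variance, so the variance of the subset's scores along PC 15, divided by the sum of score variances across all PCs, is exactly the fraction of the subset's variance that PC 15 explains.

import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for your real data: X_full is the dataset the PCA was run on,
# X_sub the subset of interest, V the loadings from that original PCA
# (in Matlab, the columns of coeff from pca(X_full)).
X_full = rng.normal(size=(5000, 20))
X_sub = X_full[:100]

# Original PCA on the full dataset (eigenvectors of its covariance)
V = np.linalg.svd(X_full - X_full.mean(axis=0), full_matrices=False)[2].T

# Project the SUBSET onto the ORIGINAL PCs and take per-PC score variances.
# Because V is an orthonormal rotation, these variances sum to the subset's
# total variance, so their ratios are "variance explained within the subset".
scores = (X_sub - X_sub.mean(axis=0)) @ V
var_by_pc = scores.var(axis=0)
explained_in_subset = var_by_pc / var_by_pc.sum()

k = 14  # PC 15 in 1-based numbering
print(f"PC {k+1} explains {100*explained_in_subset[k]:.1f}% of the subset's variance")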

How to interpret feature weights from Google AutoML

I've created a number of models using Google AutoML and I want to make sure I'm interpreting the output data correctly. This is for a linear regression model predicting website conversion rates on any given day.
First, the model gives a feature importance when training has completed. This seems to tell me which feature was most important in predicting the target value, but not necessarily whether it contributes most to larger changes in that value?
Secondly, we have a bunch of local feature weights, which I think tell me the contribution each feature made to an individual prediction. So if bounce rate has a weight of -0.002, can we say that the bounce rate for that row decreased the prediction by 0.002? And is there a correct way to aggregate these - is it just the range?
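On the aggregation question: assuming the local weights are additive, Shapley-style attributions (each row's prediction is roughly a baseline plus the sum of its feature attributions, which is how AutoML's local feature importance is usually described), a common global summary is the mean absolute attribution per feature rather than the range, so that positive and negative per-row contributions don't cancel. A small pandas sketch with made-up numbers:

import pandas as pd

# Hypothetical local feature weights for a few rows (one column per feature).
local = pd.DataFrame(
    {
        "bounce_rate":   [-0.002, -0.010, 0.001],
        "sessions":      [ 0.015,  0.004, 0.009],
        "avg_page_time": [ 0.001, -0.003, 0.002],
    }
)

# Mean absolute attribution: a standard global importance that doesn't let
# positive and negative per-row contributions cancel (unlike mean or range).
global_importance = local.abs().mean().sort_values(ascending=False)
print(global_importance)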

IFTA calculation - waypoints - mileage by state

We are developing a Trucking Management Software web app.
For IFTA calculation and reporting purposes, I need to calculate a route with 60-120 waypoints:
no directions, just total mileage and mileage by state.
I tried researching this but couldn't find an answer.
Can Google handle that many waypoints without directions, and can I get a mileage breakdown by state?
Please help.
The Distance Matrix API fits your need best. Each request can take up to 25 destinations so you'd have to split up the route into batches of 25 or fewer. Note that the pricing is different if you want simple distances and traffic-independent travel times vs. if you want travel times that take into account traffic information. The API does not split up a leg that crosses state lines, so you'd need to insert waypoints for state borders if you want to divide your mileage tallies by state.
You could also use the Directions API (handles up to 25 waypoints between the origin and destination) but its pricing charges the higher price for any request that includes more than 10 waypoints, so it won't be as cost-effective as the Distance Matrix API.
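To make the batching concrete, here is a rough Python sketch using the googlemaps client library (placeholder API key and waypoints; as noted above, you would still have to insert waypoints at state borders yourself and tag each leg with its state to get the per-state breakdown):

import googlemaps

gmaps = googlemaps.Client(key="YOUR_API_KEY")  # placeholder key

# Ordered route points, with extra waypoints inserted at state borders so
# each leg lies within a single state (the API won't split legs for you).
route = [
    "San Diego, CA",
    "Winterhaven, CA",   # hypothetical border waypoint
    "Phoenix, AZ",
    # ... up to 60-120 points
]

BATCH = 25  # Distance Matrix requests take at most 25 origins/destinations

miles_total = 0.0
for start in range(0, len(route) - 1, BATCH):
    origins = route[start : start + BATCH]
    destinations = route[start + 1 : start + BATCH + 1]
    resp = gmaps.distance_matrix(origins, destinations, units="imperial")
    # Consecutive legs sit on the diagonal: origins[i] -> destinations[i].
    # (Each request still bills origins x destinations elements, so per-leg
    # requests of one pair each may actually be cheaper.)
    for i in range(len(destinations)):
        element = resp["rows"][i]["elements"][i]
        if element["status"] == "OK":
            # For IFTA, also accumulate into a dict keyed by the leg's state.
            miles_total += element["distance"]["value"] / 1609.344  # m -> mi

print(f"Total route mileage: {miles_total:.1f}")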