How to get the top account with Vault in a way that doesn't lag? - bukkit

Vault is a plugin for Bukkit that is also an Economy API. It can be used to post updates to players' money and to act as a currency in a server.
VaultAPI is also open-sourced on GitHub.
I'm trying to get the accounts that have the largest amount of money, but that's not straightforward in Vault's API.
So, what I tried to do was:
Iterating through all OfflinePlayers and comparing money values
Keeping track of the biggest value
Code:
double highest = 0;
OfflinePlayer topPlayer = null;
OfflinePlayer[] players = Bukkit.getOfflinePlayers();
for (OfflinePlayer p : players) {
    double playerAmount = econ.getBalance(p); // econ is the Economy instance from Vault.
    if (playerAmount > highest) {
        highest = playerAmount;
        topPlayer = p;
    }
}
I tried iterating over all accounts to find the highest amount, but it lags a lot when there are many players.
Is there any way to find which players have the largest amount of money?

There are a few ways you can accomplish this.
First of all, you could use Essentials' UserBalanceUpdateEvent and check whether the new balance is higher than the high score stored in your config (in which case you update the stored value with the new balance and the UUID it belongs to).
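For illustration, here is a minimal listener sketch of that first approach. It assumes the EssentialsX event class net.ess3.api.events.UserBalanceUpdateEvent with getPlayer() and getNewBalance() accessors (check the Essentials version you build against for the exact package and signatures), plus a hypothetical TopBalanceStore helper that persists the record in your config:

import java.math.BigDecimal;
import java.util.UUID;
import net.ess3.api.events.UserBalanceUpdateEvent;
import org.bukkit.event.EventHandler;
import org.bukkit.event.Listener;

public class TopBalanceListener implements Listener {

    private final TopBalanceStore store; // hypothetical config-backed storage for the record holder

    public TopBalanceListener(TopBalanceStore store) {
        this.store = store;
    }

    @EventHandler
    public void onBalanceUpdate(UserBalanceUpdateEvent event) {
        BigDecimal newBalance = event.getNewBalance();
        UUID uuid = event.getPlayer().getUniqueId();
        // Only touch the stored value when the new balance beats the current record.
        if (newBalance.compareTo(store.getHighestBalance()) > 0) {
            store.setHighest(uuid, newBalance);
        }
    }
}

Register it in onEnable() with getServer().getPluginManager().registerEvents(new TopBalanceListener(store), this).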
Second, you could still iterate through the OfflinePlayers, but do it in an asynchronous task so the main server thread isn't blocked.
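And here is a rough sketch of that second approach, assuming plugin is your JavaPlugin instance and econ is the Vault Economy from the question. Be aware that not every economy implementation is guaranteed to be safe to call from another thread, so verify that with the provider you use:

// Scan balances off the main thread, then hop back onto it to use the result.
Bukkit.getScheduler().runTaskAsynchronously(plugin, () -> {
    OfflinePlayer top = null;
    double highest = 0;
    for (OfflinePlayer p : Bukkit.getOfflinePlayers()) {
        double balance = econ.getBalance(p);
        if (balance > highest) {
            highest = balance;
            top = p;
        }
    }
    final OfflinePlayer topPlayer = top;
    final double topBalance = highest;
    // Hand the result back to the main thread before touching any Bukkit state.
    Bukkit.getScheduler().runTask(plugin, () -> {
        if (topPlayer != null) {
            Bukkit.getLogger().info("Top balance: " + topPlayer.getName() + " with " + topBalance);
        }
    });
});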


How to restrict PancakeSwap users from adding new pair on my new token's LP?

Hi there
(I'm new to the world of crypto and still a confused noob, so please keep that in mind in your answer.)
As a note I have read this:
How to restrict pancakeswap users from adding initial liquidity for my issued token?
Let's look at this scenario:
Bob created and launched a new token on BSC named "test", with a total supply of 1000.
Bob also created a new pair with 80% of the total supply against 800 WBNB (800 test / 800 WBNB).
So each test token is worth 1 WBNB, and in return he receives 20 new LP tokens.
Bob burned all the LP tokens by sending them to Dead-Wallet (or used any other locking platform), so far so good.
As everybody knows nowadays, whenever someone trades on PancakeSwap, the trader pays a 0.25% fee, of which 0.17% is added to the liquidity pool of the swap pair they traded on.
https://docs.pancakeswap.finance/products/pancakeswap-exchange/pancakeswap-pools
A new buyer arrives, buys 100 of my tokens (test) and opens a new pair of 100 test / 300 WBNB (or the same old LP is used). What will happen?
Will my token price increase?
What if a new buyer creates a new pair with USDT?
How do I create an initial price for my new token?
Is it my first LP that creates my initial token price, or the second LP?
My most important question is:
How can I limit and restrict my LP from adding new pairs by others?
With thanks.
I searched a lot on the internet, but no one mentioned it, or at least it was not simple or understandable for an amateur like me. I tried it myself on the testnet: the initial price of my coins was set according to my initial pool, but apparently everything changed when I built the second pool.
How can I limit and restrict my LP from adding new pairs by others?
You cannot. If a token is transferable, it is also freely tradeable.

Show top 10 scores for a game using DynamoDB

I created a table "scores" in DynamoDB to store the scores of a game.
The table has the following attributes:
uuid
username
score
datetime
Now I want to query the "top 10" scores for the game (leaderboard).
In DynamoDB, it is not possible to create an index without a partition key.
So, how do I perform this query in a scalable manner?
No. You will always need the partition key. DynamoDB is able to provide the required speed at scale because it stores records with the same partition key physically close to each other instead of on 100 different disks (or SSDs, or even servers).
If you have a mature querying use case (which is what DynamoDB was designed for), e.g. "top 10 monthly scores", then you can hash datetime into a derived month attribute like 12/01/2017, 01/01/2018, and so on, and use that as your partition key, so all the scores generated in the same month get "bucketized" into the same partition for the best lookup performance. You can then keep score as the sort key.
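As a rough sketch of what that "top 10 for a month" query could look like with the AWS SDK for Java v2 (the table layout here, a scores table with partition key monthBucket and numeric sort key score, plus the attribute names, are assumptions for illustration):

import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.QueryRequest;
import software.amazon.awssdk.services.dynamodb.model.QueryResponse;

public class MonthlyLeaderboard {
    public static void main(String[] args) {
        DynamoDbClient dynamoDb = DynamoDbClient.create();

        QueryRequest topTen = QueryRequest.builder()
                .tableName("scores")
                .keyConditionExpression("monthBucket = :m")
                .expressionAttributeValues(
                        Map.of(":m", AttributeValue.builder().s("01/01/2018").build()))
                .scanIndexForward(false) // walk the sort key (score) in descending order
                .limit(10)               // only the ten highest scores in that bucket
                .build();

        QueryResponse response = dynamoDb.query(topTen);
        response.items().forEach(item ->
                System.out.println(item.get("username").s() + ": " + item.get("score").n()));
    }
}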
You can of course have other tables (or LSIs) for weekly and daily scores. Or you can even choose the most granular bucket and sum up the results yourself in your application code. You'll probably need to pull a lot more than 10 records to get close enough to 100% accuracy at the aggregate level... I don't know. Test it. I wouldn't rely on it if I were dealing with money/commissions, but for game scores, who cares :)
Note: If you decide to go this route, then instead of using "12/10/2017" etc. as the partition values, I'd use integer offsets, e.g. a UNIX epoch (rounded off to represent the first midnight of the month), to make them easier to compute/code against. I used the friendly dates above to better illustrate the approach.
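For example, a month bucket expressed as an epoch offset could be derived along these lines (a small java.time sketch; the UTC zone is an assumption, so use whatever zone your scores are recorded in):

import java.time.LocalDate;
import java.time.ZoneOffset;

// First midnight of the current month, as a UNIX epoch second.
long monthBucket = LocalDate.now(ZoneOffset.UTC)
        .withDayOfMonth(1)
        .atStartOfDay(ZoneOffset.UTC)
        .toEpochSecond();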

Understanding "Real world modelling" program

A few days ago I got a new project related to a "real world modelling" program.
Here's what it looks like:
A visit to a psychologist (use a queue). Experts provide psychological advice: some of them (n) form therapeutic groups of k people (GrT - duration of group therapy in hours), while other experts (m) take individual patients (InT - duration of individual therapy in hours). Each newly arrived patient (a new patient appears with probability p1; recurring patients come back after a period of time h) can choose to go to a psychologist providing individual therapy, or to group therapy. If a group therapy session is full, patients who wish to participate in group sessions must wait. Recurring patients wishing to go to group sessions can start a session with a smaller group, but can't attend the same session as newly arrived patients. It has been observed that patients who take individual therapy recover faster than those who choose group sessions (they need fewer sessions), but there are exceptions: due to the social interaction factor, some patients (probability p2) recover h percent faster than those who choose individual therapy. An individual session costs InC, a group session GrC. You need to assess which therapeutic approach a patient should choose to optimize their resources, and how many and which specialists a health care facility should hire.
Here's my approach to this problem:
Read a text file containing names, surnames and the money each patient is willing to spend, and place everything in a queue structure.
Find which group is better for the patient: by generating a random number for the p2 probability and using it, we'll find whether the patient recovers faster in individual or group therapy. IMO the factor sequence here is: money (checking whether the patient can afford individual therapy sessions) > p2 (whether the patient should take group sessions if that's better for him).
By looking at how many patients there are in the queue, we can find how many psychologists we'll need. (Are there any other factors here? What if we are short of experts?)
Problems that I can't figure out: how do I implement the p1 probability of new patients appearing if I write every patient into a text file and put them in a queue? How many therapy sessions does it take for a patient to recover (a static number?)?
Am I missing something? Basically it's an open question and there may be no single right answer. If anyone has suggestions on how to make this program better, I'd be glad to hear them!
Programming language I'm using: C++
If you want to break up a task, analyse it and prepare it for coding, you could:
First make a block diagram representing program flow control.
Then follow with a pseudocode implementation.
P.S. Update the question following the above, and when you reach the "code stage" there will definitely be more help.

Proven correct receipt module

I'm working on a register which produces receipts when customers buy articles. As an exercise, I'm thinking about making a receipt module in Coq which cannot produce erroneous receipts. In short, the articles and the payments on the receipt should always sum to 0 (where articles have price > 0 and payments have amount < 0). Is this doable, or sensible?
To do a quick sketch, a receipt would consist of receipt items and payments, like
type receipt = {
  items : item list;
  payments : payment list
}
And there would be functions to add articles and payments
add_article(receipt, article, quantity)
add_payment(receipt, payment)
In real life, this procedure is of course more complicated, adding different types of discounts etc.
Certainly it's easy to add a boolean function check_receipt that confirms that the receipt is correct. And since articles and payments are always added at runtime, maybe this is the only way?
Example:
receipt = {
  items = [{name = "article1"; price = 10; quantity = 2}];
  payments = [{payment_type = Cash; amount = (-20)}];
}
Is it clear what I want?
Or maybe it's more interesting to prove correct VAT calculations. There are a couple of such properties that could be proven.
You can use Coq to prove your program has properties like that, but I don't think it applies to the particular example you have presented. Articles and payments are added at different times, so there's no way of guaranteeing the balance is 0 all the time. You could check at the end whether the balance is 0, but the program already has to do that anyway. I don't think there's a way of moving that check from execution time to compile time, even with a proof assistant.
I think it would make more sense to use Coq to prove that an optimized and a naive implementation of an algorithm obey the same input/output relation. If there were a way to simplify your program, perhaps at the cost of performance, you could compare the two versions using Coq. Then you would be confident you didn't introduce a bug during optimization.
That's all I can say without looking at any code.

Collaborative Filtering: Ways to determine implicit scores for products for each user?

Having implemented an algorithm to recommend products with some success, I'm now looking at ways to calculate the initial input data for this algorithm.
My objective is to calculate a score for each product that a user has some sort of history with.
The data I am currently collecting:
User order history
Product pageview history for both anonymous and registered users
All of this data is timestamped.
What I'm looking for
There are a couple of things I'm looking for suggestions on, and ideally this question should be treated more for discussion rather than aiming for a single 'right' answer.
Any additional data I can collect for a user that can directly imply an interest in a product
Algorithms/equations for turning this data into scores for each product
What I'm NOT looking for
Just to avoid this question being derailed with the wrong kind of answers, here is what I'm doing once I have this data for each user:
Generating a number of user clusters (21 at the moment) using the k-means clustering algorithm, with Pearson's coefficient as the distance score
For each user (on demand), calculating a graph of similar users by looking for their most and least similar users within their cluster, and repeating to an arbitrary depth.
Calculating a score for each product based on the preferences of other users within the user's graph
Sorting the scores to return a list of recommendations
Basically, I'm not looking for ideas on what to do once I have the input data (I may need further help with that later, but it's not the point of this question), just for ideas on how to generate this input data in the first place
Here's a haymaker of a response:
time spent looking at a product
semantic interpretation of comments left about the product
make a discussion page about a product, brand, or product category and semantically interpret the comments
if they Shared a product page (email, del.icio.us, etc.)
browser (mobile might make them spend less time on the page vis-à-vis laptop while indicating great interest) and connection speed (affects amt. of time spent on the page)
facebook profile similarity
heatmap data (e.g. à la kissmetrics)
What kind of products are you selling? That might help us answer you better. (Since this is an old question, I am addressing both @Andrew Ingram and anyone else who has the same question and found this thread through search.)
You can allow users to explicitly state their preferences, the way Netflix allows users to assign stars.
You can assign a positive numeric value for all the stuff they bought, since you say you do have their purchase history. Assign zero for stuff they didn't buy
You could use some sort of weighted value for the stuff they bought, adjusted for what's popular (if nearly everybody bought a product, it doesn't tell you much about a person that they also bought it). See "term frequency–inverse document frequency"; a rough sketch of this idea follows after this list.
You could also assign some lesser numeric value for items that users looked at but did not buy.
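To make the popularity-weighted idea from the list above concrete, here is a rough sketch of one way to fold purchases and pageviews into a single score; the weights and the inverse-popularity term are illustrative assumptions, not recommended values:

public class ImplicitScorer {

    // Illustrative weights -- tune them against your own data.
    private static final double VIEW_WEIGHT = 0.2;
    private static final double PURCHASE_WEIGHT = 1.0;

    /**
     * Scores one (user, product) pair from implicit signals.
     * Purchases of products that nearly everyone buys are down-weighted,
     * in the spirit of inverse document frequency.
     */
    public static double score(boolean purchased, int pageViews,
                               int usersWhoBought, int totalUsers) {
        double score = VIEW_WEIGHT * Math.log1p(pageViews); // diminishing returns on repeat views
        if (purchased) {
            double inversePopularity = Math.log((double) (totalUsers + 1) / (usersWhoBought + 1));
            score += PURCHASE_WEIGHT * inversePopularity;
        }
        return score;
    }
}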