Bus scheduling - every day - resource-scheduling

I have the following problem and I need some ideas to handle it:
I have a number of buses (approx 150) owned by individuals
Every individual drives his own bus (or is responsible for the bus driver)
So I don't need to care about bus drivers because buses and drivers are the same thing.
The above buses have to execute/perform bus routes on a daily basis (approx 200 routes).
A bus can do ONE or more routes daily.
A bus can WORK normally 5 days a week and a certain number of hours per day (or month).
I have to find a FAIR way to distribute the daily routes every 3 months.
Fair means that at the END of a 3-month period all the buses must have done the same number of kilometers (each bus route is assigned a fixed number of kilometers).
I can't do the scheduling for the WHOLE 3-month period at the beginning, because "special things" happen each day: a bus has a problem, a driver has a problem, and so on. This means that TODAY I do the NEXT DAY's schedule.
Any ideas?

OptaPlanner (Java, open source) has been used successfully for problems like this. Even if you don't use Java, the ideas behind it can serve you well in any language:
The basic constraints (buses, routes, allowed working days and hours): nothing special.
The fairness requirement (equal kilometers after 3 months): fairness constraint, see this video.
The day-by-day rescheduling: continuous planning, see this video.
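The fairness idea itself is language-agnostic: give each bus's kilometer total a squared penalty, so the score is best when the totals are equal. The sketch below is not OptaPlanner code, just a minimal C++ illustration of that load-balancing trick; the Assignment struct and the sample data are made up for the example.

#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical assignment of a route (with a fixed km value) to a bus for one day.
struct Assignment {
    std::string busId;
    int routeKm;
};

// Fairness penalty: sum of squared kilometer totals per bus.
// For a fixed grand total of kilometers, this value is smallest when every bus
// has driven the same number of kilometers, so "lower is fairer".
long long fairnessPenalty(const std::vector<Assignment>& plan) {
    std::map<std::string, long long> kmPerBus;
    for (const auto& a : plan) {
        kmPerBus[a.busId] += a.routeKm;
    }
    long long penalty = 0;
    for (const auto& entry : kmPerBus) {
        const long long km = entry.second;
        penalty += km * km;  // squaring punishes buses that run far ahead of the rest
    }
    return penalty;
}

int main() {
    // Balanced plan: 2 buses, 100 km each -> penalty 100^2 + 100^2 = 20000.
    std::vector<Assignment> balanced   = {{"bus1", 60}, {"bus1", 40}, {"bus2", 100}};
    // Unbalanced plan: 150 km vs 50 km   -> penalty 150^2 + 50^2  = 25000.
    std::vector<Assignment> unbalanced = {{"bus1", 60}, {"bus1", 90}, {"bus2", 50}};

    std::cout << fairnessPenalty(balanced) << "\n";    // 20000
    std::cout << fairnessPenalty(unbalanced) << "\n";  // 25000 (worse, i.e. less fair)
    return 0;
}

An optimizer (OptaPlanner or anything else) that minimizes this penalty, subject to the hard constraints, is pushed toward equal kilometer totals by the end of the 3-month period.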

AWS CloudWatch interpreting insights graph -- how many read/write IOs will be billed?

Introduction
We are trying to "measure" the cost of usage of a specific use case on one of our Aurora DBs that is not used very often (we use it for staging).
Yesterday at 18:18 hrs. UTC we issued some representative queries to it and today we were examining the resulting graphs via Amazon CloudWatch Insights.
Since we are being billed USD 0.22 per million read/write IOs, we need to know how many of those there were during our little experiment yesterday.
A complicating factor is that in the Cost Explorer it is not possible to group the final billed costs for read/write IOs per DB instance. Therefore, the only way we can think of to estimate the cost is from the read/write volume IO graphs in CloudWatch Insights.
So we went to CloudWatch Insights and selected the graphs for read/write IOs. Then we selected the period of time in which we did our experiment. Finally, we examined the graphs with different options: "Number" and "Lines".
Graph with "number"
This shows us the picture below, suggesting a total billable IO count of 266 + 510 = 776. Since we have chosen the "Sum" statistic, we assume this would indicate a cost of about USD 0.00017 in total.
Graph with "lines"
However, if we choose the "Lines" option, we see another picture, with 5 points on the line: the first points around 500 (for read IOs) and the last one at approx. 750, suggesting a total of about 5,000 read/write IOs.
Our question
We are not really sure which interpretation to go with and the difference is significant.
So our question is now: how much did our little experiment cost us and, equivalently, how should we interpret these graphs?
Edit:
Using 5 minute intervals (as suggested in the comments) we get (see below) a horizontal line with points at 255 (read IOs) for a whole hour around the time we did our experiment. But the experiment took less than 1 minute at 19:18 (UTC).
Will the (read) billing be for 12 * 255 IOs, or 255 ... (or something else altogether)?
Note: This question triggered another follow-up question created here: AWS CloudWatch insights graph — read volume IOs are up much longer than actual reading
From Aurora RDS documentation
VolumeReadIOPs
The number of billed read I/O operations from a cluster volume within a 5-minute interval.
Billed read operations are calculated at the cluster volume level, aggregated from all instances in the Aurora DB cluster, and then reported at 5-minute intervals. The value is calculated by taking the value of the Read operations metric over a 5-minute period. You can determine the amount of billed read operations per second by taking the value of the Billed read operations metric and dividing by 300 seconds. For example, if the Billed read operations returns 13,686, then the billed read operations per second is 45 (13,686 / 300 = 45.62).
You accrue billed read operations for queries that request database pages that aren't in the buffer cache and must be loaded from storage. You might see spikes in billed read operations as query results are read from storage and then loaded into the buffer cache.
Imagine AWS reports these data points every 5 minutes:
[100, 150, 200, 70, 140, 10]
and you use the Sum statistic with a 15-minute period, like in your image.
First, the "Number" visualization represents the whole selected duration aggregated into a single value; here that would be the total 100 + 150 + 200 + 70 + 140 + 10 = 670.
The "Lines" visualization represents all of the aggregated groups, which in this case would be 2 points: (100 + 150 + 200) = 450 and (70 + 140 + 10) = 220.
It can be a little hard to understand at first if you are not used to data points and aggregations. So I suggest that you set your "Lines" chart to Sum with a 5-minute period; then take the value of each point, divide by 300 as suggested by the docs, and sum them all.
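To make that concrete, here is a small self-contained C++ sketch using the made-up samples above: it prints what the "Number" view shows (one Sum over the whole range), what the "Lines" view shows with a 15-minute period (one point per group), and the per-second rate you get by dividing a single 5-minute point by 300, as the Aurora documentation quoted earlier suggests.

#include <iostream>
#include <numeric>
#include <vector>

int main() {
    // Billed read IOs reported by CloudWatch, one value per 5-minute interval
    // (made-up numbers from the example in the answer above).
    std::vector<int> fiveMinSamples = {100, 150, 200, 70, 140, 10};

    // "Number" visualization with statistic = Sum:
    // a single value, the sum over the whole selected time range.
    int numberView = std::accumulate(fiveMinSamples.begin(), fiveMinSamples.end(), 0);
    std::cout << "Number view (Sum over full range): " << numberView << "\n";  // 670

    // "Lines" visualization with a 15-minute period:
    // one data point per group of three 5-minute samples.
    std::cout << "Lines view (Sum per 15 min): ";
    for (std::size_t i = 0; i < fiveMinSamples.size(); i += 3) {
        int point = fiveMinSamples[i] + fiveMinSamples[i + 1] + fiveMinSamples[i + 2];
        std::cout << point << " ";  // 450 220
    }
    std::cout << "\n";

    // Per the Aurora docs: billed ops per second for one 5-minute data point
    // is that point's value divided by 300 seconds.
    std::cout << "First point as ops/second: "
              << fiveMinSamples.front() / 300.0 << "\n";  // 100 / 300 ~ 0.33
    return 0;
}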
Added images for easier visualization

Slot time consumed in BigQuery

I ran a query which resulted in the below stats.
Elapsed time: 12.1 sec
Slot time consumed: 14 hr 12 min
total_slot_ms: 51147110 ( which is 14 hr 12 min)
We are on an on-demand pricing plan, so the maximum would be 2,000 slots. That being said, if I used 2,000 slots for the whole 12.1-second span, I should end up with a total_slot_ms of 24,200,000 (which is 2000 x 12.1 x 1000). However, the total_slot_ms is 51,147,110. The average number of slots used is 51147110 / 12100 ≈ 4227, which is way above 2,000. Can someone explain to me how I ended up using more than 2,000 slots?
In a Google course, there is an example where a query shows 13 seconds of "elapsed time" and 50 minutes of "slot time consumed". They say:
Hey, across all of our workers, we did essentially 50 minutes of work massively in parallel, 50 minutes so that your query could be returned back in 13 seconds. Best of all for you, you don't need to worry about spinning up those workers, moving data in-between them, making sure they're sharing all their results between their aggregations. All you care about is writing the SQL, finding the insights, and then running that query in a very fast turnaround. But there is abstracted from you a lot of distributed parallel processing that's happening.
Increasing BigQuery slot capacity significantly improves overall query performance. And although the number of slots is subject to quota restrictions under the BigQuery on-demand pricing plan, exceeding the slot limit does not incur additional costs:
BigQuery slots are shared among all queries in a single project. BigQuery might burst beyond this limit to accelerate your queries. To check how many slots you're using, see Monitoring BigQuery using Cloud Monitoring.
BigQuery on-demand supports limited bursting. https://cloud.google.com/bigquery/docs/release-notes#December_10_2019
You might want to check the execution plan for the query and look at the different slot_time_ms values for the wait, read and write activities at each stage. Since these are on-demand slots, you may see lots of wait time, and that adds up into the total time.
Besides bursting, each stage of the explain plan will help you understand that the total time is not necessarily actual slot consumption but equivalent slot consumption.
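As a quick sanity check on the numbers in the question, the "average slots" figure is just total_slot_ms divided by the elapsed time in milliseconds. A minimal C++ sketch with the question's figures:

#include <iostream>

int main() {
    // Figures from the question above.
    const long long totalSlotMs = 51147110;  // total_slot_ms reported by BigQuery
    const double elapsedSeconds = 12.1;      // wall-clock elapsed time
    const double elapsedMs = elapsedSeconds * 1000.0;

    // Average slot usage = slot-milliseconds of work / milliseconds of wall clock.
    double averageSlots = totalSlotMs / elapsedMs;
    std::cout << "Average slots used: " << averageSlots << "\n";  // ~4227

    // What 2000 slots running for the full 12.1 s would have produced.
    std::cout << "2000 slots * 12.1 s = "
              << static_cast<long long>(2000 * elapsedMs) << " slot-ms\n";  // 24200000

    // The gap between ~4227 and the nominal 2000 is consistent with the limited
    // bursting and the queued/wait time per stage described above.
    return 0;
}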

Interactive Brokers API lagging behind TWS when using 100 symbols

I hadn't been having this problem until I started putting more symbols up on my screen. I don't think it's a processing issue; my CPU has been fine and I'm not doing anything super fancy anyway (just storing data to objects and writing to txt files every so often).
From day 1 with the API, I noticed that I had to put a sleep(1) in the while loop that constantly checks for messages, like so:
PosixTestClient client;
client.connect(host, port, clientId);
while (client.isConnected()) {
    sleep(1);                  // pause for a full second between polls
    client.processMessages();  // pull whatever messages TWS has queued
}
If I don't have that sleep(1) there, it just crashes. So I guess my first question is: is that normal? Or is something wrong with that?
And my next question is: any tips as to why there might be a lag in the API data compared to the TWS data? I know there's a lag because as the data comes into the API, I'm storing it to strings and then every minute writing the data to text files. Then I go back through my text files and compare them to the charts in TWS... and I notice there's about a 2-minute lag! I also notice it seems to get better (the lag goes away) after the first half hour of the trading day, when things are pretty active.
So... any advice?
Have you subscribed to a Quote Booster pack?
TWS has a 100-quote limit, and so does the API. You can buy an additional 100 quotes for USD 30.
Quote Booster
Increase your allowance of simultaneous quotes windows by purchasing monthly Quote Booster packs at USD 30.00 per pack.
Each booster pack provides 100 simultaneous Level I quotes. Booster Pack quotes are additional to your monthly quote allotment from all sources, including commissions.
Booster pack quotes are available for use in the desktop systems and in the API.
Once subscribed, quotes are available immediately and will display the next time you log into the system.
Data from a cancelled booster pack subscription remains available through the end of the current billing cycle.
Limit of 10 Quote Booster packs per account.
So... with the help of the very helpful and friendly Yahoo TWS API users group: https://groups.io/g/twsapi/messages
I was able to find the answer, which was simply:
Reduce the sleep time! Running it with no sleep between the client.processMessages() calls would cause my CPU usage to run pretty high, but all I needed to relax the CPU was to sleep for a millisecond... not a whole second. Sleeping for a whole second was causing a lag in the data (I suspect that IB queues the data and then 'sends' it to you when you call processMessages(), so you need to call that often enough to stay ahead of the tick data you are receiving!).
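For reference, here is the loop from the question with only that change applied: a pause of roughly 1 millisecond (usleep() takes microseconds) instead of a full second. This is a sketch of the fix described above, not a complete program; PosixTestClient, host, port and clientId come from the IB sample code shown in the question.

#include <unistd.h>  // usleep()

PosixTestClient client;
client.connect(host, port, clientId);
while (client.isConnected()) {
    usleep(1000);               // ~1 millisecond instead of sleep(1), a whole second
    client.processMessages();   // poll often enough to keep up with the tick stream
}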
For anyone who wants to read it in more detail, here was the thread: https://groups.io/g/twsapi/topic/4702705#37186
Fingers crossed that it continues to work, but today I got good data on 100 high-volume tickers with no lag :)

How to use Amazon MWS to indicate two different shipping times on items?

I have a bit of a unique problem here. I currently have two warehouses that I ship items out of for selling on Amazon, my primary warehouse and my secondary warehouse. Shipping out of the secondary warehouse takes significantly longer than shipping from the main warehouse, hence why it is referred to as the "secondary" warehouse.
Some of our inventory is split between the two warehouses. Usually this is not an issue, but we keep having a particular issue. Allow me to explain:
Let's say that I have 10 red cups in the main warehouse and an additional 300 in the secondary warehouse. Let's also say it's Christmas time, so I have all 310 listed. However, from what I've seen, Amazon only allows one shipping time to be listed for the inventory, so the entire 310 get listed under the primary warehouse's shipping time (2 days), with no account taken of the secondary warehouse's ship time, rather than being split the way they should be: 10 at 2 days and 300 at 15 days.
The problem comes in when someone orders an amount that would have to be split across the two warehouses, such as if someone were to order 12 of said red cups. The first 10 would come out of the primary warehouse, and the remaining two would come out of the secondary warehouse. Due to the secondary warehouse's shipping time, the remaining two cups would have to be shipped out at a significantly different date, but Amazon marks the entire order as needing to be shipped within those two days.
For a variety of reasons, it is not practical to keep all of one product in one warehouse, nor is it practical to increase the secondary warehouse's shipping time. Changing the overall shipping date for the product to the longest ship time causes us to lose the buy box for the listing, which really defeats the purpose of us trying to sell it.
So my question is this: is there some way in MWS to indicate that the inventory is split up in terms of shipping times? If so, how?
Any assistance in this matter would be appreciated.
Short answer: No.
There is no way to specify two values for FulfillmentLatency, just as there is no way to specify two values for Quantity in stock. You can only ever have one inventory with them (plus FBA stock).
Longer answer: You could.
Sign up twice with Amazon:
"MySellerName" has an inventory of 10 and a fulfillment latency of 2 days
"MySellerName Overseas Warehouse" has an inventory of 300 and a fulfillment latency of 30 days
I haven't tried it, but I believe Amazon will then automatically direct the customer to the best seller for them, which should be "MySellerName" for small orders and "MySellerName Overseas Warehouse" for larger quantities.

Amount of Test Data needed for load testing of a web service

I am currently working on a project that requires load testing of web services.
One of the services is called 60,000 times in production during the busy day / busy hour.
{PerfTest Env=PROD}
Input Account Number
Output AccountDetails
Do I really need 60,000 unique account numbers (test data) for this LoadRunner script to simulate the production scenario?
If unique data is required, then for an endurance test I will have to prepare a lot of test data for each web service.
If I don't get that much test data, what is the chance of the load test being affected by the application server's cache mechanism?
Can somebody help me?
Thanks
Ram
Are you simulating a day or the highest-volume hour of the last year? This can help you shape the amount of data that you need. Rarely would you start with a 24-hour test. Instead you would be looking at a high-water-hour test with a ramp up and a ramp down, so you would need approximately 1.333 x your high-water hour's worth of data.
So this can drop your 60K to (potentially) 20K(?) I am making an assumption that your worst hour over the last year is somewhere around 1/3 of your traditional day. I have observed this pattern over and over again in different environments over the past two decades. You will want to objectively verify this with log data or query data to support the number in your environment.
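A rough back-of-the-envelope sketch of that sizing in C++, using the 60,000 calls from the question and the assumptions stated above (peak hour roughly 1/3 of the busy day, about 1.333x to cover ramp up and ramp down); verify both assumptions against your own logs before relying on the numbers.

#include <cmath>
#include <iostream>

int main() {
    // From the question: busiest-day volume for the service under test.
    const double busyDayCalls = 60000.0;

    // Assumptions from the answer above (verify against your own logs!).
    const double peakHourShare = 1.0 / 3.0;  // worst hour ~ 1/3 of the day's traffic
    const double rampFactor = 1.333;         // extra data to cover ramp up and ramp down

    double peakHourCalls = busyDayCalls * peakHourShare;  // ~20,000
    double callsWithRamp = peakHourCalls * rampFactor;    // ~26,660

    std::cout << "Peak-hour calls:       " << std::round(peakHourCalls) << "\n";
    std::cout << "Test data incl. ramps: " << std::round(callsWithRamp) << "\n";
    return 0;
}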
Next up, how many of these inquiries are actually unique? You are really going to need a log of the queries across a day (or your high water hour) to determine this. Log processing tools such as Microsoft Logparser or Splunk/Splunk Storm can help you to pull the observed distribution of unique account references within your data, including counts of those which are multiple. Once you know this you can simply use a data file with a fixed block size for each user for unique data and once the data is exhausted the user exits.