Suggestions for statistical model/approach to “Pattern recognition for non-uniform time data”

I have a dataset from which I would like to detect recurring patterns (e.g., daily, weekly, monthly). The dataset contains only a timestamp (datetime) per observation, and the spacing is non-uniform.
The observations reflect the exact time when one particular person passes my window. He does this several times a day (on a single day he walks by my window approximately 10-30 times), and I am trying to see if there is any pattern (there might also be some seasonality, sudden changes in previous behavior, and other interesting things going on).
Does anyone have a suggestion for a statistical model/approach that might be helpful in figuring out if there is any pattern in this behavior? Hopefully, I’ll be able to predict when he will pass my window again ;)
How would you approach this?
Any help would really be appreciated.
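Not a full answer, but as a first step, here is a minimal exploratory sketch in Python/pandas, assuming the timestamps sit in a one-column CSV file (the file and column names are made up). Counting events by hour of day and day of week, and tracking daily counts over time, is a cheap way to check whether daily or weekly structure and longer-term changes are even there before fitting a formal model:

# A minimal exploratory sketch, assuming the timestamps live in a
# one-column CSV file called "sightings.csv" (hypothetical name/column).
import pandas as pd

ts = pd.read_csv("sightings.csv", parse_dates=["timestamp"])["timestamp"]

# Counts by hour of day and by day of week; strong peaks suggest a
# daily or weekly pattern worth modelling more formally.
by_hour = ts.dt.hour.value_counts().sort_index()
by_weekday = ts.dt.day_name().value_counts()

# Daily event counts over time; a rolling mean hints at seasonality or
# sudden changes in the previous behavior.
daily_counts = ts.dt.floor("D").value_counts().sort_index()
trend = daily_counts.rolling(window=28, min_periods=7).mean()

print(by_hour)
print(by_weekday)
print(trend.tail())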

Related

Optimizing / speeding up calculation time in Google Sheets

I have asked a few questions related to this personal project of mine already on this platform, and this should be the last one since I am so close to finishing. Below is the link to a mock example spreadsheet I've created, which mimics what my actual project does but contains less sensitive information and is smaller in size.
Mock Spreadsheet
Basic rundown of the spreadsheet:
Pulls data from a master schedule, which is controlled/edited by another party, into the Master Schedule tab.
In the columns adjacent to the imported data, an array formula expands the master schedule by classroom in case some of the time slots designate multiple rooms. Additional formulas adjust the date, start time, and end time to be capped within the current day's 24-hour period. The start time of each class is also made to be an hour earlier.
In the Room Schedule tab, an hourly calendar is created based on the room number in the first column, and only corresponds to the current day.
I have tested the spreadsheet extensively with multiple scenarios, and I'm happy with how everything works except for the calculation time. I figured the two volatile functions I use would take some processing time by themselves, and I certainly didn't expect this to be lightning-fast, especially without using a script. However, the project I am actually implementing this method for is much larger and takes a very long time to update. The purpose of this spreadsheet is to allow users to find an open room and "reserve" it by clicking the checkbox next to it (which colors the entire row red), letting everyone else know that the room is now taken.
I'd like to know if there is any way to optimize / speed up my spreadsheet, or to not update it every time a checkbox is clicked and instead update it "manually", similar to what the OP is asking here. I am not familiar with Apps Script, nor am I well-versed in writing code overall, but I am willing to learn; I just need a push in the right direction since I am going into this blind. I know the number of formulas in the Room Schedule tab is probably working against me, yet I am so close to what I wanted the final product to be, so any help or insight is greatly appreciated!
Feel free to ask any questions if I didn't explain this well enough.
To speed things up, you should avoid repeating the same formula on each row and instead make use of array formulas. For example:
=IF(AND(TEXT(K3,"m/d")<>$A$1,(M3-L3)<0),K3+1,K3+0)
=ARRAYFORMULA(IF(K3:K<>"",
IF((TEXT(K3:K, "m/d")<>$A$1)*((M3:M-L3:L)<0), K3:K+1, K3:K+0), ))
=IF(AND(TEXT(K3,"m/d")=$A$1,(M3-L3)<0),TIMEVALUE("11:59:59 PM"),M3+0)
=ARRAYFORMULA(IF(K3:K<>"",
IF((TEXT(K3:K, "m/d")=$A$1)*((M3:M-L3:L)<0), TIMEVALUE("11:59:59 PM"), M3:M+0), ))

Google AutoML Importing text items very slow

I'm importing text items into Google's AutoML. Each row contains around 5,000 characters and I'm adding 70K of these rows. This is a multi-label dataset. There is no progress bar or indication of how long this process will take; it's been running for a couple of hours. Is there any way to calculate the time remaining or the total estimated time? I'd like to add additional datasets, but I'm worried that this will be a very long process before the training even begins. Any sort of formula to create even a semi-wild guess would be great.
-Thanks!
I don't think that's possible today, but I filed a feature request [1] that you can follow for updates. I asked about both importing data and training, since an estimate would be useful for training as well.
I tried training with 50K records (~ 300 bytes/record) and the load took more than 20 mins after which I killed it. I retried with 1K, which ran for 20 mins and then emailed me an error message saying I had multiple labels per input (yes, so what? training data is going to have some of those) and I had >100 labels. I simplified the classification buckets and re-ran. It took another 20 mins and was successful. Then I ran 'training' which took 3 hours and billed me $11. That maps to $550 for 50K recs, assuming linear behavior. The prediction results were not bad for a first pass, but I got the feeling that it is throwing a super large neural net at the problem. Would help if they said what NN it was and its dimensions. They do say "beta" :)
Don't waste your time trying to use Google for text classification. I am a heavy GCP user, but Microsoft LUIS is far better, more precise, and so much faster that I can't believe both products are trying to solve the same problem.
LUIS has much better documentation, supports more languages, has a much better test interface, and is way faster. I don't know yet whether it is cheaper, because the pricing model is different, but we are willing to pay more.

How to classify these sentences as positive OR negative?

I have a list of comments made by executives. No two comments are ever exactly the same (or at least it is very unlikely). They indicate the overall sentiment of the company's performance. My objective is to use the past comments to train a classifier and sort future comments as positive or negative. Is this possible? What techniques will help me achieve this outcome? Help is much appreciated. I have included some sample comments below, followed by a rough sketch of one possible approach:
“Business [is] improving and lead times are extending by two or more weeks.”
“Very positive outlook for this quarter. Production goals have been adjusted multiple times and increased each time due to demand.”
“Product demand continues to be solid.”
“Bookings are heavy early in the season. Expect robust first half of the year.”
“Demand still outstrips capacity. Competitors have announced heavy capital investments to increase capacity.”
“Sales and business continue to be strong and increasing.”
“Business holding steady in Q1.”
“Medical device manufacturing is still strong.”
“Even though oil and gas prices are on the upswing, we still face a tough 2017 and will continue to save on costs.”
“Major focus on commodities and potential [for] further inflation.”
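Here is a minimal sketch of one possible approach (TF-IDF features plus a linear classifier with scikit-learn), under the assumption that a few hundred past comments are first labelled by hand; the two training examples below are only placeholders, not real training data:

# A minimal sketch, assuming hand-labelled past comments are available.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_comments = [
    "Sales and business continue to be strong and increasing.",
    "We still face a tough 2017 and will continue to save on costs.",
]
train_labels = ["positive", "negative"]  # hand-assigned sentiment labels

# TF-IDF bag-of-words features plus logistic regression is a common
# baseline for short, domain-specific texts like these.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_comments, train_labels)

print(model.predict(["Product demand continues to be solid."]))

With only a handful of labelled comments the predictions will be unreliable; a generic pre-trained sentiment model is another option if hand-labelling enough past comments is not feasible.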

AWS Machine Learning Data

I'm using AWS Machine Learning regression to predict the waiting time in a restaurant's line on a specific weekday/time.
Today I have around 800k rows of data.
Example Data:
restaurantID (rowID) | weekDay (categorical) | time (categorical) | tablePeople (numeric) | waitingTime (numeric - target)
1 | sun | 21:29 | 2 | 23
2 | fri | 20:13 | 4 | 43
...
I have two questions:
1)
Should I use time as Categorical or Numeric?
Is it better to split it into two fields: minutes and seconds?
2)
I would like to get predictions for all my restaurants from the same model.
Example:
I expected to send the rowID identifier and get different predictions based on each restaurant's own data (ignoring the other restaurants' data).
I tried, but it's returning the same prediction for any rowID. Why?
Should I have a model for each restaurant?
There are several problems with the way you set up your model:
1) Time in the form you have it should never be categorical. Your model treats times 12:29 and 12:30 as two completely independent attributes, so it will never use what it learns about 12:29 to predict what happens at 12:30. In your case you should make time numeric. I'm not sure whether Amazon ML can convert it for you automatically; if not, just multiply the hour by 60 and add the minutes to it. Another interesting thing to do is to bucketize your time by selecting which half-hour or wider interval it falls into. You do that by dividing (h*60 + m) by some number, depending on how many buckets you want; for example, dividing by 120 gives 2-hour intervals. Generally, the more data you have, the smaller the intervals can be. The key is to have a lot of samples in each bucket (a small sketch of both encodings follows this answer).
2) You should really think about removing restaurantID from your input data. Having it there will cause the model to over-fit on it, so it will not be able to make predictions about the restaurant with id 5 based on facts it learned from the restaurants with id 3 or id 9. Keeping restaurantID might be okay if you have a lot of data about each restaurant and you don't care about extrapolating your predictions to restaurants that are not in the training set.
3) You never send restaurantID to predict data about it. The way it usually works is that you pick what you are trying to predict; in your case 'waitingTime' is probably the most useful attribute. So you send weekDay, time, and the number of people, and the model outputs the waiting time.
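To make the two encodings from point 1) concrete, here is a small sketch (the helper names are made up); this conversion would happen in whatever preprocessing step builds the data you feed to Amazon ML:

# Sketch of the two time encodings described above; helper names are hypothetical.
def time_to_minutes(hhmm: str) -> int:
    """Convert an 'HH:MM' string to minutes since midnight (numeric feature)."""
    h, m = map(int, hhmm.split(":"))
    return h * 60 + m

def time_bucket(hhmm: str, bucket_minutes: int = 120) -> int:
    """Assign the time to a fixed-width bucket: 120 gives 2-hour intervals."""
    return time_to_minutes(hhmm) // bucket_minutes

print(time_to_minutes("21:29"))   # 1289
print(time_bucket("21:29", 120))  # 10, i.e. the 20:00-22:00 interval
print(time_bucket("20:13", 30))   # 40, half-hour buckets instead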
You should think about what is relevant for the prediction to be accurate, and use your domain expertise to define the features/attributes you need to have in your data.
For example, the time of day is not just a number. From my limited understanding of restaurants, I would drop the minutes and focus only on the hours.
I would certainly create a model for each restaurant, as the popularity of the restaurant and the type of food it serves have an impact on the wait time. With Amazon ML it is easy to create many models, since you can build them using the SDK and even schedule retraining of the models using AWS Lambda (that is, automatically).
I'm not sure what the feature called tablePeople means, but a general recommendation is to have as many relevant features as possible to get better predictions. For example, month or season is probably important as well.
In contrast with some answers to this post, I think restaurantID helps, and it actually gives valuable information. If you have a significant amount of data for each restaurant, then you can train a model per restaurant and get good accuracy, but if you don't have enough data, then restaurantID is very informative.
1) Just imagine you had only two columns in your dataset: restaurantID and waitingTime. Wouldn't the restaurantID in the testing data then help you find a rough waiting time? In the simplest implementation, your predicted waiting time per restaurantID would be the average of waitingTime (see the sketch after this answer). So restaurantID is definitely valuable information. Now that you have more features in your dataset, you need to check whether restaurantID is as effective as the other features or not.
2) If you decide to keep restaurantID, then you must use it as a categorical string. It should be a non-parametric feature in your dataset, and maybe that's why you did not get a proper result.
On the issue of day and time, I agree with the other answers; considering that you are building your model for restaurants, hourly time may give a more accurate result.
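To illustrate the "simplest implementation" from point 1), here is a tiny sketch of a per-restaurant baseline; the DataFrame is only a placeholder for the real 800k rows, and the column names follow the example data:

# Per-restaurant average waiting time as the crudest possible baseline.
import pandas as pd

df = pd.DataFrame({
    "restaurantID": [1, 1, 2, 2],
    "waitingTime":  [23, 25, 43, 39],
})

baseline = df.groupby("restaurantID")["waitingTime"].mean()

def predict_baseline(restaurant_id):
    """Fall back to the global mean for restaurants not seen in training."""
    return baseline.get(restaurant_id, df["waitingTime"].mean())

print(predict_baseline(1))  # 24.0
print(predict_baseline(5))  # global mean, since id 5 is not in the data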

semi-static data design question

I'm designing a project that will be developed in Django, and I have a design philosophy question. In my app I need to track information like the current week. This refers to the current week of the NFL season (1-17) and can be calculated from other models in the system (the schedule and the current day, for example). Since this information gets updated once a week and will be used quite often in the app, does it make sense to store it in a model (db table) of its own and just run the update weekly?
There is other information that might be useful to store as well (date/time of first and last games of the current week) so would a model of something like "current weeks information" be appropriate for this, even though the data can be calculated on the fly?
would a model of something like "current weeks information" be appropriate for this, even though the data can be calculated on the fly?
It might be. You can calculate the date Easter falls on, but few applications do that. The calculation is far from dead simple, and any error would have to be treated as a bug fix. But if you store Easter dates in a table, any error can be fixed by anyone who can update calendar data.
It's simple to calculate USA holidays like Martin Luther King Day (observed on the 3rd Monday in January), President's Day (observed on the 3rd Monday in February), and Labor Day (observed on the 1st Monday in September). It's also pretty easy to calculate factory production weeks, which parallels your problem in some ways.
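As a concrete illustration (standard library only, with a made-up helper name), the "n-th weekday of a month" rule behind those holidays is only a few lines:

# Sketch of the "n-th weekday of a month" rule; nth_weekday is a hypothetical helper.
import datetime

def nth_weekday(year, month, weekday, n):
    """Return the n-th given weekday (Monday=0) of a month."""
    first = datetime.date(year, month, 1)
    offset = (weekday - first.weekday()) % 7
    return first + datetime.timedelta(days=offset + 7 * (n - 1))

print(nth_weekday(2024, 1, 0, 3))  # MLK Day: 3rd Monday in January
print(nth_weekday(2024, 2, 0, 3))  # Presidents' Day: 3rd Monday in February
print(nth_weekday(2024, 9, 0, 1))  # Labor Day: 1st Monday in September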
But when I'm building tables for businesses to use for scheduling, estimating, process control, and so on, I like to have the dates that are important to the business--holidays, for example--stored in a table rather than in procedural (calculating) code. The main advantage is that they can be collected, reviewed, and approved or corrected by relatively unskilled employees instead of needing a programmer.
So, if I were in your shoes, I would probably store the weeks in a table. A secondary advantage (or maybe the main advantage, in your case) is that most queries involving weeks might take advantage of indexes on the start and end dates.
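For what it's worth, a minimal sketch of such a table as a Django model (the model and field names are assumptions, not an established schema) could look like this, with a weekly job or management command recomputing the current-week flag:

# Hypothetical model for the stored "current week information".
from django.db import models

class NflWeek(models.Model):
    """One row per NFL week (1-17), refreshed once a week by a scheduled job."""
    number = models.PositiveSmallIntegerField(unique=True)
    first_game = models.DateTimeField()
    last_game = models.DateTimeField()
    is_current = models.BooleanField(default=False, db_index=True)

    class Meta:
        ordering = ["number"]

def refresh_current_week(today):
    """Weekly job: flag the week whose game window contains `today`."""
    NflWeek.objects.update(is_current=False)
    NflWeek.objects.filter(first_game__date__lte=today,
                           last_game__date__gte=today).update(is_current=True)

Indexes on the start/end dates (or the is_current flag) then support the "what week is it now" queries mentioned above, and a wrong week can be fixed by editing the table rather than shipping a code change.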