I have been learning about AWS recently and I had an idea. My personal laptop is small; I use it for coding and keep all my files in the cloud. My work laptop is huge, but I can't game on it.
I have a few games I'm looking at on Steam that just wouldn't fit on my personal laptop. Do you think it would be cost-effective to download them onto a server through AWS and play them like that?
To continue my comment: cost-effectiveness is not really the issue; rendering and lag are.
On your PC/laptop you want to go for 60 frames per second (fps), meaning at most ~17 ms of rendering per frame. Depending on the type of game you can get by with 30 fps, but e.g. for first-person shooters you might even try to go for 144+ fps.
The key thing to understand is that your action, e.g. moving around or clicking something, has to be rendered as soon as possible, i.e. at most ~17 ms after you do it. You will immediately notice high input lag, meaning a "long" time between an input being issued and the input taking effect. If you now move the rendering to the server, the frame not only needs to be rendered but also transferred to your machine. Depending on your bandwidth and the image size, this can naively take a second, which is extremely far beyond anything that is actually playable. And that does not yet account for you sending your interaction to the server so the server can render it.
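To put a number on "naively take a second", here is a quick back-of-the-envelope calculation; the resolution, pixel depth, and link speed are my own illustrative assumptions, not figures from the post:

```python
# Rough cost of shipping one uncompressed frame, per the argument above.
# Assumptions (mine, for illustration): 1920x1080, 3 bytes/pixel, 50 Mbit/s.
width, height, bytes_per_pixel = 1920, 1080, 3
frame_bytes = width * height * bytes_per_pixel        # ~6.2 MB per frame
link_bits_per_second = 50e6                           # 50 Mbit/s downlink

transfer_ms = frame_bytes * 8 / link_bits_per_second * 1000
print(f"frame size: {frame_bytes / 1e6:.1f} MB")      # 6.2 MB
print(f"naive transfer: {transfer_ms:.0f} ms/frame")  # ~995 ms vs a ~17 ms budget
```

Compression changes these numbers dramatically, which is exactly where streaming services spend their optimization effort.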
Long story short: it is not feasible for you to implement. As already mentioned, Google Stadia tries to achieve exactly this, but you can be sure there are A LOT of optimizations in place to make it work.
I recently recorded a decent chunk of data by videoing the readout of a digital scale by hand on my phone. The mass changes over time, and I need to look at the relationship there. The equipment I had was relatively limited, which is why I was not able to connect the scale directly to a data logger.
I would very much prefer not to have to manually go through every second of every video to log the data, as it would be a very repetitive process.
Thus, I was hoping computer vision would be a good alternative, but I have no idea how to go about it. Would anyone happen to know a program or tool I could use or create that can read these numbers from the video and record them with their timestamps, possibly as a .csv file?
I'd be willing to learn about computer vision or AI to do so myself as well, as I am interested in this area, but simply don't have experience in it, so any advice or tools would be greatly appreciated.
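For concreteness, here is the kind of thing I'm imagining: a rough sketch using OpenCV to sample frames and Tesseract to read the digits. The file name, display-region coordinates, and once-per-second sampling rate are placeholders, and I gather stock Tesseract struggles with seven-segment displays, so a digits-trained model or simple template matching may work better:

```python
# Sketch: sample one frame per second from a video, OCR the scale's display
# region, and write (timestamp, reading) rows to a CSV file.
import csv
import cv2                 # pip install opencv-python
import pytesseract         # pip install pytesseract (plus the Tesseract binary)

cap = cv2.VideoCapture("scale_video.mp4")           # placeholder file name
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0

with open("readings.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time_s", "mass"])
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % int(fps) == 0:               # sample once per second
            roi = frame[100:200, 300:600]           # placeholder display region
            gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
            _, thresh = cv2.threshold(gray, 0, 255,
                                      cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            text = pytesseract.image_to_string(
                thresh, config="--psm 7 -c tessedit_char_whitelist=0123456789.")
            writer.writerow([frame_idx / fps, text.strip()])
        frame_idx += 1
cap.release()
```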
I am facing a challenging problem. In the courtyard of the company where I work there is a camera trap that takes a photo on every movement. In some of these pictures there are different kinds of animals (mostly dark gray mice) that damage our cable system. My idea is to use some application that could recognize whether there is a gray mouse in the picture or not, ideally in real time. So far we have developed a solution that sends an alarm for every movement, but most of the alarms are false. Could you give me some info about possible ways to solve this problem?
In technical parlance, what you describe is often called event detection. I know of no ready-made approach that solves all of this at once, but with a little bit of programming you should be all set, even if you don't want to code any computer vision algorithms yourself.
The high-level pipeline would be:
1. Making sure that your video is of sufficient quality. Gray mice sound kind of tough, plus the pictures are probably taken at night, so you should have sufficient infrared lighting etc. But if a human can tell whether an alarm is false or true, you should be fine.
2. Deploying motion detection and taking snapshot images at the time of movement. It seems like you have this part already worked out, great! Detailing your setup could benefit others. You may also need to crop only the area in motion from the image; are you doing that?
3. Building an archive of images, including your decision of whether each is a false or a true alarm (labels, in machine learning parlance). Try to gather at least a few tens of example images for both cases, and make them representative of real-world variations (do you have the problem during daytime as well? is there snowfall in your region?).
4. Classifying the images taken from the video stream snapshots to check whether each is a false alarm or contains bad critters eating cables. This sounds tough, but deep learning and machine learning are advancing by leaps and bounds; you can either:
- deploy your own neural network built in a framework like Caffe or TensorFlow (training from scratch will likely need a lot of examples, at least tens of thousands I'd say; see the transfer-learning sketch below for a way around that),
- use an image classification API that recognizes general objects, like Clarifai or Imagga; if you are lucky, it will notice that the snapshots show a mouse or a squirrel (do squirrels chew on cables?), but it is likely that on a specialized task like this one these engines will get pretty confused, or
- use a custom image classification API service, which is typically even more powerful than rolling your own neural network, since it can use a lot of tricks to sort out these images even if you give it just a small number of examples per image category (false/true alarm here); vize.it is a perfect example of that (can anyone contribute more such services?).
The real-time aspect is a bit open-ended, as neural networks take some time to process an image; you also need to include data transfer etc. when using a public API, and if you roll your own, you will need to spend a lot of effort to get low latency, as the frameworks are by default optimized for throughput (batch prediction). Generally, if you are happy with ~1 s latency and have a good internet uplink, you should be fine with any service.
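To make the first option concrete: reusing a pretrained network (transfer learning) is one way to get by with far fewer images than the tens of thousands mentioned above. A minimal sketch with TensorFlow/Keras, where the snapshots/ directory layout and class folders are my own illustrative assumptions:

```python
# Transfer-learning sketch for the true-alarm vs false-alarm classifier.
# Assumes a layout like snapshots/true_alarm/*.jpg and
# snapshots/false_alarm/*.jpg (names are illustrative).
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "snapshots", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                    # keep the pretrained features frozen

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNet expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # true vs false alarm
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

With only a few tens of images per class you would also want augmentation and a validation split, but this is the basic shape.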
Disclaimer: I'm one of the co-creators of vize.it.
How about getting a cat?
Also, you could train your own custom classifier using the IBM Watson Visual Recognition service (demo: https://visual-recognition-demo.mybluemix.net/train ). It's free to try, and you just need to supply example images for the different categories you want to identify. Overall, Petr's answer is excellent.
I currently have a GUI single-threaded application in C++ and Qt. It takes a good 1 minute to load (reading from disk) and ~5 seconds to close (saving settings, finalizing connections, ...).
What can I do to make my application appear to be faster?
My first thought was to have a server component of the app that does all the work while the GUI component is only for displaying, with communication done via a socket, pipe, or memory map. That seems like overkill (in terms of development effort) since my application is only used by a handful of people.
The first step is to start profiling. Use an actual, low-overhead profiling tool (e.g., on Linux, you could use OProfile), not guesswork. What is your app doing in that one minute it takes to start up? Can any of that work be deferred until later, or perhaps skipped entirely?
For example, if you're loading, say, a list of document templates, you could defer that until the user tells you to create a new document. If you're scanning the system for a list of fonts, load a cached list from last startup and use that until you finish updating the font list in a separate thread. These are just examples - use a profiler to figure out where the time's actually going, and then attack the code starting with the largest time figures.
In any case, some of the more effective approaches to keep in mind:
Skip work until needed. If you're doing initialization for some feature that's used infrequently, skip it until that feature is actually used.
Defer work until after startup. You can take care of a lot of things on a separate thread while the UI is responsive. If you are collecting information that changes infrequently but is needed immediately, consider caching the value from a previous run, then updating it in the background.
For your shutdown time, hide your GUI instantly, and then spend those five seconds shutting down in the background. As long as the user doesn't notice the work, it might as well be instantaneous.
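To make the defer-and-cache idea concrete, here is a minimal sketch; it's in Python for brevity, but in Qt/C++ the same shape is a QThread (or QtConcurrent::run) plus a queued signal back to the GUI thread. The font-scan scenario and cache file name are illustrative:

```python
# "Defer work until after startup" in miniature: show stale cached data
# immediately, refresh it on a background thread.
import json
import threading
import time

CACHE_FILE = "font_cache.json"        # hypothetical cache from the last run

def expensive_font_scan():
    time.sleep(2)                     # stand-in for the real slow work
    return ["Arial", "Courier New"]

def load_cached_or_default():
    """Return last run's result immediately so the UI can come up now."""
    try:
        with open(CACHE_FILE) as f:
            return json.load(f)
    except (OSError, ValueError):
        return []

def refresh_in_background(on_done):
    """Redo the slow scan off the UI thread, then hand the result back."""
    def work():
        fonts = expensive_font_scan()
        with open(CACHE_FILE, "w") as f:
            json.dump(fonts, f)       # cache for the next startup
        on_done(fonts)                # in Qt: emit a queued signal instead
    threading.Thread(target=work).start()

fonts = load_cached_or_default()      # instant: possibly stale, but usable
refresh_in_background(lambda fresh: print("font list updated:", fresh))
```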
You could employ the standard trick of showing something interesting while you load, like many games nowadays, which show a tip or two on the loading screen.
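A minimal sketch of the trick, using PyQt5 for brevity (the C++ Qt calls are the same: QSplashScreen::showMessage while the real initialization runs); the tip text and blank pixmap are placeholders:

```python
# Show a splash screen with a tip while the real startup work happens.
import sys
from PyQt5.QtCore import Qt
from PyQt5.QtGui import QPixmap
from PyQt5.QtWidgets import QApplication, QMainWindow, QSplashScreen

app = QApplication(sys.argv)
pixmap = QPixmap(400, 300)
pixmap.fill(Qt.darkBlue)                 # placeholder for real splash art
splash = QSplashScreen(pixmap)
splash.showMessage("Tip: Ctrl+K opens the quick launcher",
                   Qt.AlignBottom | Qt.AlignHCenter, Qt.white)
splash.show()
app.processEvents()                      # let the splash actually paint

window = QMainWindow()                   # stand-in for the slow startup work
splash.finish(window)                    # hide the splash once the window is up
window.show()
sys.exit(app.exec_())
```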
It looks to me like you're only guessing at where all this time is being burned. "Read from disk" would not be high on my list of candidates. Learn more about what's really going on.
Use a decent profiler.
Profiling is a given, of course.
Most likely you will find that I/O is substantial: reading in your startup files. As bdonlan notes, deferring work is a standard technique; Google 'lazy evaluation'.
You can also consider caching data that does not change: save a cache in a faster, binary format. This is most useful if you happen to have a large static data set that is read into something like an array.
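A minimal sketch of that caching pattern, with illustrative file names: parse the slow text source once, pickle the result, and let later startups load the pickle if it is still fresh:

```python
# Cache static startup data in a binary format to skip repeated text parsing.
import os
import pickle

DATA_TXT, DATA_CACHE = "startup_data.txt", "startup_data.pickle"

def load_startup_data():
    # Reuse the binary cache if it is newer than the text source.
    if (os.path.exists(DATA_CACHE)
            and os.path.getmtime(DATA_CACHE) >= os.path.getmtime(DATA_TXT)):
        with open(DATA_CACHE, "rb") as f:
            return pickle.load(f)
    with open(DATA_TXT) as f:                  # the slow path: parse text
        data = [line.split(",") for line in f]
    with open(DATA_CACHE, "wb") as f:          # save for the next startup
        pickle.dump(data, f, protocol=pickle.HIGHEST_PROTOCOL)
    return data
```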
I'm very curious to know how this process works. These sites (http://www.sharkscope.com and http://www.pokertableratings.com) data mine thousands of hands per day from secure poker networks, such as PokerStars and Full Tilt.
Do they have a farm of servers running applications that open hundreds of tables (windows) and then somehow spider/datamine the hands that are being played?
How does this work, programming wise?
There are a few options. I've been researching this since I wanted to implement some of this functionality in a web app I'm working on. I'll use PokerStars as an example, since they have, by far, the best security of any online poker site.
First, realize that there is no way for a developer to rip real-time information from the PokerStars application itself; you can't access the API. You can, though, do the following:
Screen Scraping/OCR
PokerStars does its best to sabotage screen/text scraping of its application (by doing simple things like pixel-level color fluctuations), but with enough motivation you can easily get around this. Google AutoHotkey combined with ImageSearch.
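As a rough illustration of the idea (not how any real tracker necessarily does it), here is the Python analogue of AutoHotkey's ImageSearch: grab the screen and template-match a known sprite, using a normalized score so small pixel-level color fluctuations don't break the match. The sprite file and threshold are assumptions:

```python
# Screen-grab plus template matching, tolerant of small color jitter.
import cv2
import numpy as np
from PIL import ImageGrab                 # pip install pillow opencv-python

screen = cv2.cvtColor(np.array(ImageGrab.grab()), cv2.COLOR_RGB2GRAY)
template = cv2.imread("dealer_button.png", cv2.IMREAD_GRAYSCALE)  # known sprite

result = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
_, score, _, location = cv2.minMaxLoc(result)
if score > 0.8:                           # tolerant threshold, not exact pixels
    print("sprite found at", location)
```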
API Access and XML Feeds
PokerStars doesn't offer public access to its API. But it does offer an XML feed to developers who are pre-approved. This XML feed offers:
PokerStars Site Summary - shows player, table, and tournament counts
PokerStars Current Tournament data - files with information about upcoming and active tournaments. The data is provided in two files:
PokerStars Static Tournament Data - provides tournament information that does not change frequently, and
PokerStars Dynamic Tournament Data - provides frequently changing tournament information
PokerStars Tournament Results - provides information about completed tournaments. The data is provided in two files:
PokerStars Tournament Results – provides basic information about completed tournaments, and
PokerStars Tournament Expanded Results – provides expanded information about completed tournaments.
PokerStars Tournament Leaders Board - provides information about top PokerStars players ranked using PokerStars Tournament Ranking System
PokerStars Tournament Leaders Board BOP - provides information about top PokerStars players ranked using PokerStars Battle Of Planets Ranking System
Team PokerStars – provides information about Team PokerStars players and their online activity
It's highly unlikely that these sites have access to the XML feed (or an improved one which would provide all the functionality they need) since PokerStars isn't exactly on good terms with most of these sites.
This leaves two options: scraping the network connection for said data, which I think is borderline impossible (I don't have experience with this, but I've heard the traffic is highly encrypted and not easy to tinker with), and, as mentioned above, screen scraping/OCR.
Option #2 is easy enough to implement and, with some work, can avoid detection. From what I've been able to gather, this is the only way they could be doing such massive data mining of PokerStars (I haven't looked into other sites, but I've heard security on anything besides PokerStars/Full Tilt is quite horrendous).
[edit]
Reread your question and realized I didn't unambiguously answer it.
Yes, they likely have a massive amount of servers running watching all currently running tables, tournaments, etc. Realize that there is a decent amount of money in what they're doing.
This, for instance, could be how they do it (speculation):
Said bot applications watch the tables and data mine all the information that gets "posted" to the chat log. They do this by already having a table of images that correspond to, for example, all letters of the alphabet (since PokerStars doesn't post their text as... text; all text in their software is actually an image). So the bot rips an image of the chat log, matches it against the stored table, converts the data to a format they can work with, and throws it in a database. Done.
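A toy sketch of that glyph-table matching (purely speculative, like the paragraph above; the glyph size and reference image files are assumptions): slice the captured chat-log line into fixed-width cells and map each cell to the closest stored letter image:

```python
# One stored reference image per character; decode a chat-log line by
# matching each fixed-width cell against the glyph table.
import cv2
import numpy as np

GLYPH_W, GLYPH_H = 8, 12               # assumed fixed glyph size
glyphs = {c: cv2.imread(f"glyphs/{c}.png", cv2.IMREAD_GRAYSCALE)
          for c in "abcdefghijklmnopqrstuvwxyz0123456789"}

def decode_line(line_img):
    """line_img: grayscale crop of one chat-log line, GLYPH_H pixels tall."""
    out = []
    for x in range(0, line_img.shape[1] - GLYPH_W + 1, GLYPH_W):
        cell = line_img[:, x:x + GLYPH_W].astype(int)
        # Pick the reference glyph with the smallest pixel difference.
        best = min(glyphs,
                   key=lambda c: np.abs(cell - glyphs[c].astype(int)).sum())
        out.append(best)
    return "".join(out)
```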
[edit]
No, the data isn't sold to them by the poker sites themselves. This would be a PR nightmare if it ever got out, which it would, and it wouldn't account for the functionality of these sites (OPR, Sharkscope, etc.), which appears to be instantaneous. There are, without a doubt, applications running that are ripping the data in real time from the poker software, likely using the methods I listed.
Maybe I can help.
I play poker, run a HUD, look at the stats, and am a software developer.
I've seen a few posts on this suggesting it's done by OCR software grabbing the screen. Well, that's really difficult and processor-hungry, so a programmer wouldn't choose to do that unless there were no other options.
Also, because you can open multiple windows, the poker window can be hidden or partially obscured by other things on the screen, so you couldn't guarantee being able to capture the screen.
In short, they read the log files that are output by the poker software.
When you install a HUD like Sharkscope or Jivaro, it runs client software on your PC, which reads the log files and updates its own servers with every hand you play.
Most poker software is similar, but let's start with PokerStars, as that's where I play. The poker software writes every action you or it makes to local log files: it shows your cards, any opponents' cards that you see, plus what you do (e.g. which button you pressed, how much you/they bet, etc.). It posts these updates in near real time and timestamps the log file.
You can look at your own files to see this in action.
On a PC, do this (not sure what you do on a Mac, but it will be similar):
1. Load File Explorer
2. Select VIEW from the menu
3. Select HIDDEN ITEMS so that you can see the hidden data files
4. Go to C:\Users\Dave\AppData\Local\PokerStars.UK (you may not be called DAVE...)
5. Open the PokerStars.log.0 file in NOTEPAD
6. In Notepad, SEARCH for updateMyCard
7. It will show your cards numerically:
3c for 3 of Clubs
14d for Ace of Diamonds
You can see your opponents' cards only where you saw them at the table.
Here are a few example lines from the log file.
OnTableData() round -2
:::TableViewImpl::updateMyCard() 8s (0) [2A0498]
:::TableViewImpl::updateMyCard() 13h (1) [2A0498]
:::TableViewImpl::updatePlayerCard() 7s (0) [2A0498]
:::TableViewImpl::updatePlayerCard() 14s (1) [2A0498]
[2015/12/13 12:19:34]
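A minimal sketch that turns lines like these into readable card names; the rank mapping (11 = J through 14 = A) is inferred from the 3c / 14d examples above rather than from any official spec:

```python
# Parse updateMyCard / updatePlayerCard log lines into card names.
import re

RANKS = {11: "J", 12: "Q", 13: "K", 14: "A"}
SUITS = {"c": "Clubs", "d": "Diamonds", "h": "Hearts", "s": "Spades"}
CARD_RE = re.compile(r"update(?:My|Player)Card\(\) (\d+)([cdhs])")

log = """
:::TableViewImpl::updateMyCard() 8s (0) [2A0498]
:::TableViewImpl::updateMyCard() 13h (1) [2A0498]
:::TableViewImpl::updatePlayerCard() 7s (0) [2A0498]
:::TableViewImpl::updatePlayerCard() 14s (1) [2A0498]
"""

for rank, suit in CARD_RE.findall(log):
    r = int(rank)
    print(f"{RANKS.get(r, r)} of {SUITS[suit]}")
# -> 8 of Spades, K of Hearts, 7 of Spades, A of Spades
```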
cheers, hope this helps
Dave
I've thought about this, and have two theories:
The "sniffer" sites have every table open, AND:
Are able to pull the hand data from the network stream. (or:)
Are obtaining the hand data from the GUI (screen scraping, pulling stuff out via the GUI API).
Alternatively, they may have developed/modified clients to log everything for them, but I think one of the above solutions is likely simpler.
Well, they have two choices:
they spider/grab the data without consent. Then they risk being shut down at any time: the poker site can easily detect such monitoring at this scale and block it, and they'd even risk a lawsuit for breach of the terms of service, which probably disallow the use of robots.
they pay to get the data directly. This saves a lot of bandwidth (e.g. not having to load full pages, no extraction, no reworking when the HTML changes, etc.) and makes their business much less risky, both legally and technically.
Guess which one they more likely chose; at least if the site has been around for some time without being shut down every now and then.
I'm not sure how it works internally, but I have an application ID and a key, which you get as a gold or silver subscriber. Sign up for a month and send them an email, and you will get access and the API documentation.
SQLite is a great little database, but I am having an issue with it on Windows. It can take up to 50 seconds to perform a query on a 100 MB database the first time the application is launched. Subsequent loads take 10% of that time.
After some discussions on the SQLite mailing list, I am told:
"The bug is in Windows. It aggressively pre-caches big database files -- reads in big chunks of the files -- to make it look as if programs like Outlook are better than they really are. Unfortunately, although this speeds up some programs, it makes others act jerky because they have no control over how much is read when they ask for just a few bytes of file."
This problem is compounded because there is no way to get progress information from SQLite while all this is happening, so my users think something is broken. (I could display a dummy progress report, but that is really cheesy for a sharp tool.)
I believe there is a way to turn the pre-caching off globally, but is there some way around this programmatically?
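For reference, a bare-bones way to demonstrate the cold-versus-warm gap (the database and table names are placeholders; run it once right after a reboot, when the OS cache is cold, and again immediately afterwards):

```python
# Time the first query (cold OS cache) against a repeat of the same query.
import sqlite3
import time

t0 = time.perf_counter()
conn = sqlite3.connect("app.db")                        # placeholder DB file
conn.execute("SELECT count(*) FROM items").fetchone()   # first touch of the file
print(f"first query:  {time.perf_counter() - t0:.2f}s")

t0 = time.perf_counter()
conn.execute("SELECT count(*) FROM items").fetchone()   # now served from cache
print(f"second query: {time.perf_counter() - t0:.2f}s")
conn.close()
```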
I don't know how to fix the caching problem, but 50 seconds sounds extreme. If subsequent runs take 10% of that, that leaves about 45 seconds to load a 100 MB file. Even if Windows does read in the entire file in one go, that shouldn't take more than a couple of seconds at normal hard-drive speeds.
Is the file very fragmented or something?
It sounds to me like there's more than just precaching at play here.
I too am having the same problem with the first query. The problem returns after not querying the database for a long time, so it seems to be a memory caching problem. My software runs 24/7, and every once in a while the user performs a SELECT query. I am also querying a database of about the same size.