I'm doing some tests with an HTML map in conjunction with Leaflet. Server side I have a Ruby Sinatra app serving JSON markers fetched from a MySQL table. What are the best practices for working with 2k-5k (and potentially more) markers?
Load all the markers up front and then delegate everything to Leaflet.markercluster.
Load the markers every time the map viewport changes, sending the southWest & northEast points to the server, do the clipping server side, and then sync the client-side marker buffer with the server-fetched entries (what I'm doing right now; see the sketch below).
A mix of the two approaches above.
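For reference, a minimal sketch of what option 2 currently looks like on my client (the /markers endpoint and its parameter names are just placeholders; the server does the actual clipping):

```javascript
// Minimal sketch of option 2: re-fetch markers whenever the viewport changes.
// Assumes an existing Leaflet `map`; the /markers endpoint and its query
// parameter names are placeholders.
var markerLayer = L.layerGroup().addTo(map);

function loadVisibleMarkers() {
  var sw = map.getBounds().getSouthWest();
  var ne = map.getBounds().getNorthEast();
  var params = new URLSearchParams({
    swLat: sw.lat, swLng: sw.lng,
    neLat: ne.lat, neLng: ne.lng
  });

  fetch('/markers?' + params)
    .then(function (res) { return res.json(); })
    .then(function (points) {
      markerLayer.clearLayers();                      // drop markers outside the new viewport
      points.forEach(function (p) {
        L.marker([p.lat, p.lng]).addTo(markerLayer);  // add the freshly fetched ones
      });
    });
}

map.on('moveend', loadVisibleMarkers);  // fires after every pan/zoom
loadVisibleMarkers();                   // initial load
```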
Thanks,
Luca
A few months have passed since I originally posted the question and I made it through!
As @Brett DeWoody correctly noted, the right approach is strictly related to the number of DOM elements on the screen (I'm referring mainly to markers). The faster your device (CPU especially), the more you can afford. Since the app I was developing targets both desktop and tablet devices, CPU was a relevant factor, just like the marker density of different geo-areas.
I decided to separate database querying/fetching from map representation/displaying. Basically, the user adjusts controls/inputs to filter the whole dataset, the matching records are fetched, and Leaflet.markercluster does the job of representation. When a filter is modified, the cycle starts over. Users can choose the map zoom level at which clustering kicks in, depending on their CPU power.
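Roughly, the client side now looks like this (a minimal sketch; the /markers endpoint and the single filter parameter are just placeholder names):

```javascript
// Fetch the filtered dataset once, then let Leaflet.markercluster render it.
// disableClusteringAtZoom is the knob users can tune to their CPU power.
var clusterGroup = L.markerClusterGroup({
  disableClusteringAtZoom: 16   // below this zoom everything stays clustered
});
map.addLayer(clusterGroup);

function applyFilter(filter) {
  fetch('/markers?filter=' + encodeURIComponent(filter))
    .then(function (res) { return res.json(); })
    .then(function (points) {
      clusterGroup.clearLayers();                 // start the cycle over
      clusterGroup.addLayers(points.map(function (p) {
        return L.marker([p.lat, p.lng]);          // bulk add is much faster than one by one
      }));
    });
}
```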
In my particular scenario, the approach described above proved the best (verified with console.time). I found that viewport optimization only really helped in lower marker-density areas (a pity).
Hope it may be helpful.
Cheers,
Luca
Try options and optimize when you see issues rather than optimizing early. You can probably get away with just Leaflet.markercluster unless your markers have lots of data attached to them.
I'm having a problem with Advanced segments in Google Mobile App Analytics.
A condition has been set up to include all screens that match the regex "/01-12-2013/", but it's also showing me screens which do not contain this string. For example, I'm getting a screen name containing "/11-11-2013/", which I would have expected to be filtered out.
The segment seems to return different results based on which tab I'm in within Google Mobile App Analytics, if that helps at all.
In "Audience Overview" it's returning 48.02% of all screen views. In "Behavior Overview" it's returning 71.51% of all screen views.
Here are some screenshots to illustrate the problem.
This is going to sound a bit ridiculous, but after you've created an advanced segment, give it an hour or two before relying on the data it provides. I have yet to find a solid answer as to why, but across a wide range of sites over the past year or two I've seen similar issues: when I create an advanced segment to filter specific pages, the initial results invariably still show irrelevant pages I specifically filtered out. The only thing I've been able to attribute this to is some sort of "lag" on Google's side in updating the Analytics data/property/view/segment. In almost every case, I've simply waited an hour or two after creating an advanced segment, and by then the filtered data usually displays correctly.
Another interesting thing I've seen is that a third-party reporting platform I use, which has a Google Analytics integration, actually displayed the correctly segmented data BEFORE it showed up properly in Analytics. Strange.
The autocomplete API allows us to retrieve lists of all countries, regions, and locales by leaving out the query string and setting the result limit to a large number, but this feature isn't available at the city level.
Is there a way that we can retrieve a full list of all targetable cities and their IDs? If not, can we cache the autocomplete data for cities to build up such a list?
That functionality is probably not supported because of the massive amount of data that fetching all the cities in the world would return, even with paging. Limiting the response by country (using country_list=["ca"]) and then fetching all of its cities doesn't sound too far-fetched, but it is not implemented either.
To me, it sounds like you have two options.
Create a bug report using our bug tool to request a wishlist feature (this doesn't guarantee anything, but at least we can track it if we choose to implement it, and it can serve as a way to gauge interest in the feature).
Cache the autocomplete data yourself. IANAL, but part 2 of section 2 of the FB Platform Policies states:
You may cache data you receive through use of the Facebook API in order to improve your application’s user experience, but you should try to keep the data up to date. This permission does not give you any rights to such data.
Which sounds like you can cache the autocomplete data, since it will improve the UX of your app; however, just remember that you do not have any rights to the data. I would be cautious about this, as it would really suck to work hard on building all the caching functionality only to have FB say it's not allowed. I would consult some experts before pursuing this path.
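If you do go down the caching road, here is a minimal sketch of keeping the cache reasonably fresh (the TTL value and the fetchCities callback are placeholders, not part of any Facebook API):

```javascript
// TTL cache in front of the autocomplete call, so cached city data is
// periodically refreshed ("keep the data up to date") instead of kept forever.
const TTL_MS = 24 * 60 * 60 * 1000;   // refresh once a day (arbitrary choice)
const cache = new Map();              // country code -> { data, fetchedAt }

async function getCities(countryCode, fetchCities) {
  const hit = cache.get(countryCode);
  if (hit && Date.now() - hit.fetchedAt < TTL_MS) {
    return hit.data;                  // still fresh enough, reuse it
  }
  const data = await fetchCities(countryCode);   // your existing autocomplete request
  cache.set(countryCode, { data, fetchedAt: Date.now() });
  return data;
}
```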
We were having a beer talk and have something to clear up.
Is the following conclusion correct:
When I put a Facebook like-button box on my page, Facebook knows every time I'm on that page, even if I'm not logged in.
Basically the same as with Google Analytics.
If this is correct, it should be possible to sandbox the like button until someone actually uses it. Then Facebook only gets information when the user actively confirms it.
cheers endo
No, they can't directly track you if you are not logged in and you view an external "like" button. They can, however, set a tracking cookie that identifies you when you sign in, which would allow them to match the tracking data in the current session to you.
One of Facebook's primary revenue streams comes from the analysis and sale of market trend information. They can analyse the likes and comment keywords of certain user clusters (e.g. middle-aged American females, teenagers in college, etc) and use these to produce statistics about market patterns and trends. They can also use keyword analysis to tell a company how many people are talking about something, e.g. "how many people have mentioned my latest blockbuster film?"
You could simply move the image and JavaScript code away from the Facebook servers and host them locally to prevent Facebook from tracking your users.
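Closer to the sandboxing idea from the question, you can also show a plain local placeholder first and only inject Facebook's widget after the user clicks it. A rough sketch (the iframe URL is the classic iframe version of the like button; element IDs and sizes are just examples):

```javascript
// Two-click "sandbox": render a plain local button and only inject Facebook's
// iframe after the user clicks it, so Facebook sees nothing before that click.
// #fb-like-slot is an empty div on the page; the ID is just an example.
var placeholder = document.createElement('button');
placeholder.textContent = 'Enable the Facebook Like button';

placeholder.addEventListener('click', function () {
  var iframe = document.createElement('iframe');
  // Classic iframe version of the like button; check the current plugin docs.
  iframe.src = 'https://www.facebook.com/plugins/like.php?href=' +
               encodeURIComponent(window.location.href);
  iframe.width = '150';
  iframe.height = '35';
  iframe.style.border = 'none';
  placeholder.replaceWith(iframe);   // swap the placeholder for the real widget
});

document.getElementById('fb-like-slot').appendChild(placeholder);
```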
In pre-emption of the "FACEBOOK = EVIL" arguments:
In the end, though, is it really a big issue? Some people see Facebook as this massive life-infringing uncaring supercorporation, but in reality they're just making a buck through completely anonymous statistics. No human being (or sentient robot) views your preferences, browser tracking data, or personal information. Everything is anonymised and turned into a bunch of numbers relating to a group. Sure, they could screw everyone over and be evil, but why bother when you already make that much money legitimately?
I'm very curious to know how this process works. These sites (http://www.sharkscope.com and http://www.pokertableratings.com) data mine thousands of hands per day from secure poker networks, such as PokerStars and Full Tilt.
Do they have a farm of servers running applications that open hundreds of tables (windows) and then somehow spider/datamine the hands that are being played?
How does this work, programming wise?
There are a few options. I've been researching it since I wanted to implement some of this functionality in a web app I'm working on. I'll use PokerStars for example, since they have, by far, the best security of any online poker site.
First, realize that there is no way for a developer to rip real time information from the PokerStars application itself. You can't access the API. You can, though, do the following:
Screen Scraping/OCR
PokerStars does its best to sabotage screen/text scraping of their application (by doing simple things like pixel level color fluctuations) but with enough motivation you can easily get around this. Google AutoHotkey combined with ImageSearch.
API Access and XML Feeds
PokerStars doesn't offer public access to its API. But it does offer an XML feed to developers who are pre-approved. This XML feed offers:
PokerStars Site Summary - shows player, table, and tournament counts
PokerStars Current Tournament data - files with information about upcoming and active tournaments. The data is provided in two files:
PokerStars Static Tournament Data - provides tournament information that does not change frequently, and
PokerStars Dynamic Tournament Data - provides frequently changing tournament information
PokerStars Tournament Results - provides information about completed tournaments. The data is provided in two files:
PokerStars Tournament Results – provides basic information about completed tournaments, and
PokerStars Tournament Expanded Results – provides expanded information about completed tournaments.
PokerStars Tournament Leaders Board - provides information about top PokerStars players ranked using PokerStars Tournament Ranking System
PokerStars Tournament Leaders Board BOP - provides information about top PokerStars players ranked using PokerStars Battle Of Planets Ranking System
Team PokerStars – provides information about Team PokerStars players and their online activity
It's highly unlikely that these sites have access to the XML feed (or an improved one which would provide all the functionality they need) since PokerStars isn't exactly on good terms with most of these sites.
This leaves two options: scraping the network connection for said data, which I think is borderline impossible (I don't have experience with this, but I've heard the traffic is heavily encrypted and not easy to tinker with), and, as mentioned above, screen scraping/OCR.
Option #2 is easy enough to implement and, with some work, can avoid detection. From what I've been able to gather, this is the only way they could be doing such massive data mining of PokerStars (I haven't looked into other sites but I've heard security on anything besides PokerStars/Full Tilt is quite horrendous).
[edit]
Reread your question and realized I didn't unambiguously answer it.
Yes, they likely have a massive number of servers watching all currently running tables, tournaments, etc. Realize that there is a decent amount of money in what they're doing.
This, for instance, could be how they do it (speculation):
Said bot applications watch the tables and data mine all information that gets "posted" to the chat log. They do this by already having a table of images that correspond to, for example, all letters of the alphabet (since PokerStars doesn't post their text as... text. All text in their software is actually an image). So, the bot then rips an image of the chat log, matches it against the store, converts the data to a format they can work with, and throws it in a database. Done.
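A toy sketch of that matching step (pure pixel arrays, nothing PokerStars-specific; a real scraper would need tolerance for the pixel-level noise mentioned earlier, plus a way to capture the chat-log region in the first place):

```javascript
// Naive "table of letter images" matcher: slide each known glyph bitmap across
// a strip of the chat log and emit the character wherever the pixels line up.
// Bitmaps are plain 2D arrays of 0/1 pixels, all the same height as the strip.
function matchesAt(strip, glyph, x) {
  for (var row = 0; row < glyph.length; row++) {
    for (var col = 0; col < glyph[row].length; col++) {
      if (strip[row][x + col] !== glyph[row][col]) return false;
    }
  }
  return true;
}

function readLine(strip, glyphs) {            // glyphs: { 'A': bitmap, 'B': bitmap, ... }
  var text = '';
  for (var x = 0; x < strip[0].length; x++) {
    for (var ch in glyphs) {
      var glyph = glyphs[ch];
      if (x + glyph[0].length <= strip[0].length && matchesAt(strip, glyph, x)) {
        text += ch;
        x += glyph[0].length - 1;             // jump past the matched glyph
        break;
      }
    }
  }
  return text;
}
```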
[edit]
No, the data isn't sold to them by the poker sites themselves. That would be a PR nightmare if it ever got out, which it would. And it wouldn't account for the functionality of these sites (OPR, Sharkscope, etc.), which appears to be instantaneous. There are, without a doubt, applications running that are ripping the data in real time from the poker software, likely using the methods I listed.
Maybe I can help.
I play poker, run a HUD, look at the stats and am a software developer.
I've seen a few posts on this suggesting it's done by OCR software grabbing the screen. Well, that's really difficult and processor hungry, so a programmer wouldn't choose to do that unless there were no other options.
Also, because you can open multiple windows, the poker window can be hidden or partially obscured by other things on the screen, so you couldn't guarantee to be able to capture the screen.
In short, they read the log files that are output by the poker software.
When you install a HUD like Sharkscope or Jivaro, it runs client software on your PC. That software reads the log files and updates the vendor's servers with every hand you play.
Most poker software is similar, but let's start with PokerStars, as that's where I play. The poker software writes to local log files for every action you/it make. It shows your cards, any opponents' cards that you see, plus what you do, e.g. which button you pressed, how much you/they bet, etc. It posts these updates in near real time and timestamps the log file.
You can look at your own files to see this in action.
On a PC, do this (not sure what you do on a Mac, but it will be similar):
1. Load File Explorer
2. Select VIEW from the menu
3. Select HIDDEN ITEMS so that you can see the hidden data files
4. Go to C:\Users\Dave\AppData\Local\PokerStars.UK (you may not be called Dave...)
5. Open the PokerStars.log.0 file in NOTEPAD
6. In Notepad, SEARCH for updateMyCard
7. It will show your card numerically
3c for 3 of Clubs
14d for Ace of Diamonds
You can see your opponents cards only where you saw them at the table.
Here are a few example lines from the log file.
OnTableData() round -2
:::TableViewImpl::updateMyCard() 8s (0) [2A0498]
:::TableViewImpl::updateMyCard() 13h (1) [2A0498]
:::TableViewImpl::updatePlayerCard() 7s (0) [2A0498]
:::TableViewImpl::updatePlayerCard() 14s (1) [2A0498]
[2015/12/13 12:19:34]
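As a rough illustration (a Node.js sketch, not what any HUD vendor actually ships; the path is the one from the steps above and the helper names are mine), parsing those lines could look like this:

```javascript
// Pull hole cards out of the PokerStars log and translate the numeric ranks
// (11=J, 12=Q, 13=K, 14=A, as described above) into readable card names.
var fs = require('fs');

var RANKS = { '11': 'J', '12': 'Q', '13': 'K', '14': 'A' };
var SUITS = { c: 'Clubs', d: 'Diamonds', h: 'Hearts', s: 'Spades' };

function decodeCard(code) {                    // e.g. "14d" -> "A of Diamonds"
  var m = /^(\d+)([cdhs])$/.exec(code);
  if (!m) return null;
  return (RANKS[m[1]] || m[1]) + ' of ' + SUITS[m[2]];
}

var log = fs.readFileSync(
  'C:\\Users\\Dave\\AppData\\Local\\PokerStars.UK\\PokerStars.log.0', 'utf8');

log.split(/\r?\n/).forEach(function (line) {
  var m = /update(My|Player)Card\(\)\s+(\S+)/.exec(line);
  if (m) {
    console.log((m[1] === 'My' ? 'My card:       ' : 'Opponent card: ') + decodeCard(m[2]));
  }
});
```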
cheers, hope this helps
Dave
I've thought about this, and have two theories:
The "sniffer" sites have every table open, AND:
Are able to pull the hand data from the network stream. (or:)
Are obtaining the hand data from the GUI (screen scraping, pulling stuff out via the GUI API).
Alternately, they may have developed/modified clients to log everything for them, but I think one of the above solutions is likely simpler.
Well, they have two choices:
They spider/grab the data without consent. Then they risk being shut down at any time. The poker site can easily detect such monitoring at this scale and block it, and they even risk a lawsuit for breach of the terms of service, which probably disallow the use of robots.
They pay to get the data directly. This saves a lot of bandwidth (e.g. not having to load full pages, do the extraction, keep up with HTML changes, etc.) and makes their business much less risky, both legally and technically.
Guess which one they more likely chose; at least if the site has been around for some time without repeatedly being shut down.
I'm not sure how it works, but I have an application ID and a key, which you get as a gold or silver subscriber. Sign up for a month and send them an email, and you will get access and the API documentation.