Finding out how often a phrase is used on Facebook - facebook-graph-api

I'm doing some research on the role of social media in social awareness campaigns, and want to be able to find out how many people mentioned a phrase in their status updates over a time period. Pretty much a 'Google Trends' for status updates instead of Google search queries. Is there a way to get that kind of data? I can imagine it'd be a pretty intensive query given the number of possible results, especially over longer time periods, so I don't think they'd let anyone just craft a curl request and go to town. Is there a way for me to collect that kind of metric?

Just poll the https://graph.facebook.com/search endpoint:
https://graph.facebook.com/search?q=QUERY&type=OBJECT_TYPE
In your case:
https://graph.facebook.com/search?q=watermelon&type=post
https://developers.facebook.com/docs/reference/api/search/
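
In case it helps, here is a minimal sketch of polling that endpoint from Python. The access token, polling interval, and the `count_mentions` helper are assumptions for illustration, and whether public post search is available at all depends on your Graph API version:

```python
import time
import requests

SEARCH_URL = "https://graph.facebook.com/search"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # assumption: a valid token is required

def count_mentions(query, limit=100):
    """Fetch one page of public posts matching `query` and return how many came back."""
    params = {"q": query, "type": "post", "limit": limit, "access_token": ACCESS_TOKEN}
    resp = requests.get(SEARCH_URL, params=params)
    resp.raise_for_status()
    return len(resp.json().get("data", []))

# Poll periodically to build a crude time series of mentions.
while True:
    print(time.strftime("%Y-%m-%d %H:%M"), count_mentions("watermelon"))
    time.sleep(15 * 60)  # assumption: a 15-minute polling interval
```

Note this only counts one page of results per poll; a real trend line would need to page through the results and de-duplicate posts across polls.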


What is the best tool to use for real-time web statistics?

I operate a number of content websites that have several million user sessions and need a reliable way to monitor some real-time metrics on particular pieces of content (key metrics being: pageviews/unique pageviews over time, unique users, referrers).
The use case here is for the stats to be visible to authors/staff on the site, as well as to act as source data for real-time content popularity algorithms.
We already use Google Analytics, but this does not update quickly enough (4-24 hours depending on traffic volume). Google Analytics does offer a real-time reporting API, but this is currently in closed beta (I have requested access several times, but no joy yet).
New Relic appears to offer a few analytics products, but they are quite expensive ($149/500k pageviews - we have several times this).
Other answers I found on Stack Overflow suggest building your own, but those answers are 3-5 years old. Any ideas?
I've heard some good things about Woopra, and they offer 1.2m page views for the same price as New Relic.
https://www.woopra.com/pricing/
If that's too expensive, the alternative is live-loading your logs and using a search service such as Elasticsearch to read them and get the data you want, but you will need access to your logs while they are being written to.
A service like Loggly might suit you; it would enable you to "live tail" your logs (view them while they are being written), but again there is a cost to that.
Failing that, you could build something yourself, or get someone on Freelancer to knock something up for you that reads your logs and displays them in a format you recognise.
https://www.portent.com/blog/analytics/how-to-read-a-web-site-log-file.htm
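
For the do-it-yourself route mentioned above, here is a rough sketch of what live-tailing an access log and tracking the three listed metrics could look like in Python. The log path and the combined-log field positions are assumptions:

```python
import time
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # assumption: combined log format

def follow(path):
    """Yield new lines as they are appended to the log file (a simple `tail -f`)."""
    with open(path) as f:
        f.seek(0, 2)  # start at the current end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line

pageviews = 0
unique_ips = set()
referrers = Counter()

for line in follow(LOG_PATH):
    parts = line.split('"')
    ip = line.split()[0]           # assumption: client IP is the first field
    pageviews += 1
    unique_ips.add(ip)
    if len(parts) > 3:
        referrers[parts[3]] += 1   # assumption: referrer is the fourth quoted field
    print(f"pageviews={pageviews} uniques={len(unique_ips)} "
          f"top_referrers={referrers.most_common(3)}")
```

This is the crude single-server version; with several million sessions you would want to aggregate per time bucket and persist the counters rather than keep everything in memory.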
If the metrics you need to track are limited to the ones you have listed (page views, unique users, referrers), you might consider collecting your web servers' logs and running them through a log analyzer.
There are several free tools available on the Internet to get real-time statistics out of those logs.
Take a look at www.elastic.co, for example.
Hope this helps!
Google Analytics now offers real-time data viewing, if that's what you want:
https://support.google.com/analytics/answer/1638635?hl=en
I believe their real-time API has now been released; we are currently looking at incorporating it!
If you have access to your web server logs, you can set up Elasticsearch as the search engine, with Logstash as the log parser and Kibana as the front-end tool for analyzing the data.
For more information, see the Elasticsearch website: https://www.elastic.co/
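
Once Logstash is shipping parsed log events into Elasticsearch, the metrics from the question reduce to simple aggregations. A hedged sketch of querying them over HTTP follows; the index pattern and field names are assumptions that depend entirely on your Logstash configuration:

```python
import requests

ES = "http://localhost:9200"
INDEX = "logstash-*"  # assumption: the default Logstash index pattern

# Pageviews, unique visitors, and top referrers over the last 5 minutes.
query = {
    "size": 0,
    "query": {"range": {"@timestamp": {"gte": "now-5m"}}},
    "aggs": {
        # assumption: your grok filter names these fields and maps them as keywords
        "unique_users": {"cardinality": {"field": "clientip"}},
        "top_referrers": {"terms": {"field": "referrer", "size": 5}},
    },
}

resp = requests.post(f"{ES}/{INDEX}/_search", json=query).json()
print("pageviews:", resp["hits"]["total"])  # an object {"value": N, ...} on newer versions
print("unique users:", resp["aggregations"]["unique_users"]["value"])
print("top referrers:", resp["aggregations"]["top_referrers"]["buckets"])
```

Kibana gives you the same aggregations as dashboards without writing any code; the HTTP query is mainly useful if you want to feed a popularity algorithm, as the asker mentioned.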

Building an ecommerce frontend for Amazon cloud services

I have deployed atmospheric modeling instances in Amazon EC2. I can launch them via a command-line interface, and this sets up a server that I can then query and control via REST-like methods. It's great for free use, but I want to find a way to charge for its use to cover costs plus a little profit. For the life of me I don't know where to start, and I've searched in all the wrong places.
The scenario I have in mind is to have users register and pay for the time and resources they have used (which could be hours, even days or weeks, plus disk and transfer). I'd rather not deal with the payments and registrations myself if I don't have to.
I guess Amazon has some means to facilitate it, but it requires approval and such (not necessarily a bad thing). I have thought of Shopify, but I'm not sure I see a path. I can do crude stuff with Django, but again I don't really understand the path I would take. Likewise with Joomla ecommerce.
Is it typical that people would generally roll their own for this kind of thing, or is there an existing framework out there that might help? Most importantly, are there books or links that help someone at least get a grounding to start on a path? I find nothing in my long searches.
I suspect this may be off topic, but I don't know where else to ask. It's all about programming "something" and surely must be relevant to a community out there.

Amazon AWS / Rakuten API - Inventory Management

I am sure this question may seem a bit lacking, but I literally do not know where to begin. I want to develop a solution that will allow me to manage ALL of my Amazon and Rakuten/Buy.com inventory from my own website.
My main concern is keeping the inventory in sync, so the process would be as follows:
1. Fetch Amazon orders sold today
   a. Subtract the respective quantities
2. Fetch Rakuten orders sold
   a. Subtract the respective quantities
3. Update the internal DB of products
   a. Send out updated feeds to Amazon and Rakuten
Again, I apologize if this question seems a bit lacking, but I am having trouble understanding exactly how to implement this; any tips would be appreciated.
For the Amazon part, look at https://developer.amazonservices.com/
For Rakuten, I think you will be able to do what you want via the FTP access; I'm still researching this. If I find more, I'll respond with a better answer.
In order to process orders, you'll need to be registered with Rakuten in order to get an authorisation token. For the API docs etc., try sending an email to support@rakuten.co.uk.
Incidentally, to send out updated feeds, you'll need to use the inventory API in order to update stock quantities (given that you'll be selling the same items on Amazon, etc.).
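
For what it's worth, the asker's four-step loop might be skeletonized as below. Every fetch_*/push_* function is a hypothetical stub standing in for the real MWS or Rakuten call; none of them are real SDK functions:

```python
# Hypothetical sketch of the sync workflow from the question.
# Each stub returns/accepts plain Python data; in a real system they would
# wrap the MWS Orders API, MWS SubmitFeed, and the Rakuten FTP/API endpoints.

def fetch_amazon_orders_today():
    return []  # stub: would call the Amazon MWS Orders API, yielding (sku, qty) pairs

def fetch_rakuten_orders_today():
    return []  # stub: would pull the day's order file over Rakuten FTP

def push_amazon_inventory_feed(quantities):
    pass  # stub: would submit an inventory feed via MWS

def push_rakuten_inventory_feed(quantities):
    pass  # stub: would upload the updated feed to Rakuten

def sync_inventory(stock):
    """stock: dict mapping SKU -> quantity in the internal DB."""
    # 1. Fetch Amazon orders sold today and subtract the quantities.
    for sku, qty in fetch_amazon_orders_today():
        stock[sku] = stock.get(sku, 0) - qty
    # 2. Same for Rakuten orders.
    for sku, qty in fetch_rakuten_orders_today():
        stock[sku] = stock.get(sku, 0) - qty
    # 3. The internal DB now reflects true stock levels...
    # 4. ...so send the updated quantities back to both marketplaces.
    push_amazon_inventory_feed(stock)
    push_rakuten_inventory_feed(stock)

sync_inventory({"SKU-123": 10})  # run on a schedule, e.g. from cron
```

The main design point is to make the decrements idempotent (track which order IDs you have already processed) so a re-run of the job doesn't subtract the same order twice.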

How does a tool like SEOMoz Rank Checker work?

It seems there are a number of tools that allow you to check a site's position in search results for long lists of keywords. I'd like to integrate a feature like that in an analytics project I'm working on, but I cannot think of a way to run queries at such high volumes (1000s per hour) without violating the Google TOS and potentially running afoul of their automatic query detection system (the one that institutes a CAPTCHA if search volume at your IP gets too high).
Is there an alternative method for running these automated searches, or is the only way forward to scrape search result pages?
Use a third party to scrape it if you're scared of Google's TOS.
Google is very hot on temporarily banning/blocking IP addresses that appear to be sending automated queries. And yes, of course, this is against their TOS.
It's also quite difficult to know exactly how they detect them, but the main trigger appears to be repeated identical keyword searches from the same IP address.
The short answer is basically: Get a lot of proxies
Some more tips:
Don't search further than you need to (e.g. the first 10 pages)
Wait around 4-5 seconds between queries for the same keyword
Make sure you use real browser headers and not something like "CURL..."
Stop scraping with an IP when you hit roadblocks, and wait a few days before using the same proxy again.
Try and make your program act like a real user would and you won't have too many issues.
You can scrape Google quite easily but to do it at a very high volume will be challenging.
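
Putting those tips together, here is a rough sketch of a polite rank check. The proxy addresses are placeholders, the 429 check is a simplification of Google's actual blocking behaviour, and none of this makes the scraping TOS-compliant:

```python
import random
import time
import requests

# Placeholder proxy pool - you would rotate through many more in practice.
PROXIES = ["http://proxy1:8080", "http://proxy2:8080"]
HEADERS = {
    # A real browser User-Agent rather than the default "python-requests/..."
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Accept-Language": "en-US,en;q=0.9",
}

def fetch_serp(keyword, page=0):
    """Fetch one page of results for `keyword` through a random proxy."""
    proxy = random.choice(PROXIES)
    resp = requests.get(
        "https://www.google.com/search",
        params={"q": keyword, "start": page * 10},
        headers=HEADERS,
        proxies={"http": proxy, "https": proxy},
        timeout=10,
    )
    if resp.status_code == 429:
        raise RuntimeError(f"Rate limited - retire proxy {proxy} for a few days")
    return resp.text

for page in range(3):  # don't go deeper than you need to
    html = fetch_serp("example keyword", page)
    time.sleep(random.uniform(4, 6))  # wait ~4-5 seconds between queries
```

Parsing the returned HTML for your site's position is a separate (and equally fragile) problem, since Google changes its markup regularly.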

How do sites count other sites' visitors and "value", and how can they tell users' location?

Hi, actually this is a simple question that just came up out of curiosity...
I recently came across an online website-evaluation tool called teqpad.com, and I have lots of questions about it:
1. How do they do it? E.g. page views, daily visitors, etc., without any tracking on the real website?
2. Website worth: is this anywhere near accurate for any site?
3. How do they work out daily revenue?
4. I like the traffic-by-country breakdown; it looks just like the one in Google Analytics. How do they get that info?
5. Another one is the ISP info and the Google Maps location of the server.
Has anyone here written similar scripts? If so, what is your opinion?
1. They may be tracking user browser stats like Alexa does (more info on Wikipedia). A group of users installs a plug-in that reports which sites each user visits, much like TV ratings work in most (all?) countries. This method is obviously not very reliable, and often nowhere near the actual numbers of visitors.
2. This is usually based on bullshit pseudo-scientific calculations and is never a viable basis for evaluating the "value" of a web site, even though it may be possible to guesstimate the approximate ad revenues a site yields (see 3). But that is only one revenue stream; it says nothing about how expensive the site's daily maintenance is: servers, staff, content creation...
3. It should be possible to very roughly estimate daily revenue by taking the guesses on daily visitors/page views, counting how often ads are shown, and looking at what those ads usually yield per page view (a back-of-the-envelope sketch follows after this list). It is probably pretty easy to get some rough numbers on what an ad view is worth on a big site if you're in the market.
4 and 5. It is possible to trace most IP addresses down to the visitor's country and sometimes even the city. See the Geo targeting article on Wikipedia.
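
To make point 3 concrete, here is a back-of-the-envelope calculation with entirely made-up numbers:

```python
# Entirely hypothetical inputs - the kind of guesses such tools plug in.
daily_pageviews = 100_000          # estimated from panel/toolbar data (see point 1)
ads_per_page = 2                   # counted by fetching a few sample pages
revenue_per_1000_ad_views = 1.50   # a guessed average CPM in dollars

ad_views = daily_pageviews * ads_per_page
estimated_daily_revenue = ad_views / 1000 * revenue_per_1000_ad_views
print(f"~${estimated_daily_revenue:.0f}/day")  # ~$300/day
```

Every input here is a guess stacked on a guess, which is exactly why such "worth" figures should be treated as entertainment rather than valuation.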