How do PageRank checking services work?
There's a PHP script here which should return the PageRank for you: http://www.pagerankcode.com/download-script.html
Almost all those services are hitting the same service that the Google Toolbar uses. However, people at Google have said over and over not to look at PageRank, and that it's such a small portion of ranking.
That said, you can grab someone's (open source) SEO toolbar (just search for it) and open up the javascript to see how they're doing it.
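For the curious, the request those toolbars send looks roughly like the sketch below. This is only a sketch: the endpoint and response format come from reverse-engineered toolbar traffic, and the proprietary "ch" checksum is left as a placeholder you'd have to fill in from one of those open-source scripts.

```python
# Rough sketch of how PageRank checkers queried the Google Toolbar
# endpoint. compute_checksum() is a placeholder for the proprietary,
# reverse-engineered "ch" hash - this will not run as-is without it.
import urllib.request

def compute_checksum(url):
    # Placeholder: the real toolbar sends a proprietary hash of the URL;
    # reverse-engineered versions circulate in those PHP/JS scripts.
    raise NotImplementedError

def get_pagerank(url):
    query = ("http://toolbarqueries.google.com/tbr"
             "?client=navclient-auto&features=Rank"
             "&ch=%s&q=info:%s" % (compute_checksum(url), url))
    # A successful response looks like "Rank_1:1:5" -> PageRank 5
    body = urllib.request.urlopen(query).read().decode()
    return int(body.rsplit(":", 1)[-1])
```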
Most services just copy what the Google Toolbar shows. But PageRank is usually not the important thing; the important thing is to get quality backlinks with relevant anchor text.
Nick is right - Google PageRank is really not what you should be looking at. In fact, it might be going away. Instead, I would look at SEOmoz.org's metrics from their SEO toolbar. They use metrics called Page Authority (the general power of the site out of 100 - most comparable to PageRank * 10), mozRank (how popular a site is, i.e. how many links it has and how good those links are), and mozTrust (how trustworthy the site is considered; for example, if a site is in a "bad neighborhood", linking to and linked to by a lot of spammy sites, it will have a low mozTrust). MozRank and mozTrust are out of 10.
The script at http://www.pagerankcode.com/download-script.html doesn't work on most well-known hosting providers, although it runs perfectly if you install a small Apache server on your own PC (XAMPP or similar).
I think the only way is to wait until Google releases a web service API capable of returning such a rank (incredibly, there are APIs for querying almost every Google service, except this PageRank function).
I operate a number of content websites that have several million user sessions and need a reliable way to monitor some real-time metrics on particular pieces of content (key metrics being: pageviews/unique pageviews over time, unique users, referrers).
The use case here is for the stats to be visible to authors/staff on the site, as well as to act as source data for real-time content popularity algorithms.
We already use Google Analytics, but this does not update quickly enough (4-24 hours depending on traffic volume). Google Analytics does offer a real-time reporting API, but this is currently in closed beta (I have requested access several times, but no joy yet).
New Relic appears to offer a few analytics products, but they are quite expensive ($149/500k pageviews - we have several times this).
Other answers I found on StackOverflow suggest building your own, but this was 3-5 years ago. Any ideas?
I've heard some good things about Woopra, and they offer 1.2m pageviews for the same price as New Relic.
https://www.woopra.com/pricing/
If that's too expensive, then the alternative is live-loading your logs and using an Elasticsearch service to read them to get the data you want, but you will need access to your logs while they are being written to.
A service like Loggly might suit you; it would enable you to "live tail" your logs (view them while they're being written), but again there is a cost to that.
Failing that, you could do something yourself (see the sketch below), or get someone on Freelancer to knock something up for you that reads the logs and displays them in a format you recognise.
https://www.portent.com/blog/analytics/how-to-read-a-web-site-log-file.htm
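If you do go the DIY route, the core of it is surprisingly small. Here's a minimal sketch, assuming a combined-format access log at a path you control, and using the client IP as a rough stand-in for "unique user":

```python
# Minimal DIY real-time counters over a combined-format access log.
# LOG_PATH is an assumption - point it at your own log file.
import re
import time
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"
LINE_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "\S+ (?P<path>\S+)[^"]*" '
    r'\d+ \S+ "(?P<referrer>[^"]*)"'
)

pageviews = Counter()
referrers = Counter()
unique_ips = set()

with open(LOG_PATH) as log:
    log.seek(0, 2)  # jump to the end of the file, like tail -f
    while True:
        line = log.readline()
        if not line:
            time.sleep(0.5)
            continue
        match = LINE_RE.match(line)
        if not match:
            continue
        pageviews[match.group("path")] += 1
        referrers[match.group("referrer")] += 1
        unique_ips.add(match.group("ip"))
        # ...push these counters to your dashboard however you like
```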
If the metrics that you need to track are limited to the ones that you have listed (pageviews, unique users, referrers), you might consider collecting your web servers' logs and using a log analyzer.
There are several free tools available on the Internet to get real-time statistics out of those logs.
Take a look at www.elastic.co, for example.
Hope this helps!
Google Analytics offers real-time data viewing now, if that's what you want:
https://support.google.com/analytics/answer/1638635?hl=en
I believe their API has now been released, as we are looking at incorporating this!
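For reference, polling it from Python looks roughly like this. A sketch only: the key file and view ID are placeholders, and your account needs access to the Real Time Reporting API.

```python
# Sketch of polling the Google Analytics Real Time Reporting API (v3).
from oauth2client.service_account import ServiceAccountCredentials
from googleapiclient.discovery import build

credentials = ServiceAccountCredentials.from_json_keyfile_name(
    "service-account.json",  # assumption: your own service-account key
    ["https://www.googleapis.com/auth/analytics.readonly"],
)
analytics = build("analytics", "v3", credentials=credentials)

result = analytics.data().realtime().get(
    ids="ga:12345678",  # assumption: your GA view (profile) ID
    metrics="rt:activeUsers,rt:pageviews",
    dimensions="rt:pagePath",
).execute()

for path, users, views in result.get("rows", []):
    print(path, users, views)
```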
If you have access to your web server logs, then you can set up Elasticsearch as the search engine, along with Logstash as the log parser and Kibana as the front-end tool for analyzing the data.
For more information, please go through the Elasticsearch site: https://www.elastic.co
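Once logs are flowing into Elasticsearch (e.g. via Logstash), pulling real-time numbers back out is a single aggregation query. A sketch, assuming a default Logstash index layout and a "clientip" field:

```python
# Sketch: query an Elasticsearch index of parsed access logs for
# last-5-minute pageviews and unique IPs. Index/field names are
# assumptions based on a default Logstash setup.
import json
import urllib.request

query = {
    "size": 0,
    "query": {"range": {"@timestamp": {"gte": "now-5m"}}},
    "aggs": {"unique_users": {"cardinality": {"field": "clientip"}}},
}

req = urllib.request.Request(
    "http://localhost:9200/logstash-*/_search",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
result = json.load(urllib.request.urlopen(req))

# On Elasticsearch 7+, hits.total is an object ({"value": n}).
total = result["hits"]["total"]
print("pageviews:", total["value"] if isinstance(total, dict) else total)
print("unique users:", result["aggregations"]["unique_users"]["value"])
```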
Please excuse the noobiness of my question. I am mostly searching here for some directions and buzzwords to start digging from.
I spent some time developing an application in Python.
Basically, it takes a bunch of images and creates a video out of them.
It is quite simple, and uses only a few libraries (mostly OpenCV and numpy).
I designed a small GUI in GTK, but I think that it would be a good idea to offer the service over the web.
I think I could reuse some of my core and design a front end that people could access in their browser.
I only need a little data to get it running (images, an email address).
The thing is, my web dev skills are really close to 0, and I don't exactly know where to start.
I don't plan on having hundreds of people a day on the platform.
People would connect, feed me the data (a link to a Dropbox folder, Google Drive, whatever), and I would send them a message when it's finished.
If you could provide me with some names or links so that I could touch the field, I'd be really glad.
CGI is a fine option, but if you already have Python experience Django is definitely worth checking out (it falls in the category of rhooligan's #3 except it uses Python!). Django completely takes care of all of the database backend details for you, which is a benefit over simple CGI. It also provides easy-to-use pre-defined classes for handling file uploads, images, etc. It also has a great tutorial that will get you up and running. Just be careful about whether you're using version 1.3, 1.4, or the latest dev version, because some aspects of the framework have changed fairly quickly. Make sure that you're always looking at the right version of the docs.
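To give a flavour of it, a file-upload view in Django looks roughly like the sketch below. It's illustrative only: the form and field names are made up, and a real app would use templates and the CSRF middleware instead of csrf_exempt.

```python
# Sketch of Django file-upload handling; names are illustrative only.
from django import forms
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

class UploadForm(forms.Form):
    image = forms.FileField()
    email = forms.EmailField()

@csrf_exempt  # sketch only - real code should keep CSRF protection
def upload(request):
    if request.method == "POST":
        form = UploadForm(request.POST, request.FILES)
        if form.is_valid():
            uploaded = form.cleaned_data["image"]
            with open("/tmp/" + uploaded.name, "wb") as dest:
                for chunk in uploaded.chunks():  # streams large files
                    dest.write(chunk)
            return HttpResponse("Thanks - we'll email %s when it's done."
                                % form.cleaned_data["email"])
    else:
        form = UploadForm()
    return HttpResponse(
        '<form method="post" enctype="multipart/form-data">%s'
        '<input type="submit"></form>' % form
    )
```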
Another handy service to keep in mind for doing something like image processing through a web app is a hosted cloud computing provider like PiCloud. Unless you already have a private web server with lots of memory and processing power, these cloud services that charge by the ms are really cool. They also give you thousands of cores, which could allow you to do lots of concurrent processing. They provide a nice Python API, and it has numpy and opencv pre-installed in both v2.6 and v2.7. (They use PyOpenCV, but you also have root access to install anything you want, so you can set up the "cv2" interface if that's what you're using - actually, I just looked at your GitHub and it looks like you're using the old "cv" interface. You can also install any application you want on PiCloud - it doesn't have to be Python.)
You could start by looking into the Python CGI module and seeing if it will work for you (there's a minimal sketch after the steps below). Then you'll need to do the following steps:
1. Decide on a webserver and install it. Apache is probably a good starting point.
2. Design the UI. Wireframe things out on paper. Figure out how you'd ideally want the users to go through your site and what you want on each page/view.
3. Your decision in #2 drives all the decisions from this point out. These days, most web applications are a combination of Web 1.0 and JSON/REST "services" (there's a couple of buzzwords for ya!). jQuery is a popular and widely used JavaScript library for developing the front end of your site. That would be another thing to look at. jQuery is completely independent from the back end and can be used with any type of back end (PHP, Ruby, Perl, .NET, etc.).
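As for the CGI suggestion, here's a bare-bones sketch using only the standard library. The field names ("image", "email") are made up for illustration, and a production version would need validation and a proper queue for the video job.

```python
#!/usr/bin/env python
# Bare-bones CGI sketch: accepts image uploads plus an email address.
import cgi
import os

form = cgi.FieldStorage()
email = form.getfirst("email", "")

print("Content-Type: text/plain\n")

# form["image"] is one FieldStorage or a list of them, depending on
# how many files were submitted under that name.
items = form["image"] if "image" in form else []
if not isinstance(items, list):
    items = [items]

for item in items:
    if item.filename:
        path = os.path.join("/tmp", os.path.basename(item.filename))
        with open(path, "wb") as dest:
            dest.write(item.file.read())
        print("saved", path)

print("we'll notify %s when the video is ready" % email)
```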
We have an (AJAX-heavy) web application hosted in the cloud across several servers, and we need to monitor the availability of this service. Monitoring requires logging in to the application with a username and password, performing some searches as that user, etc.
Since we plan to use Nagios for some other monitoring tasks, we decided to use Nagios for web application monitoring too.
I came across three such solutions:
Webinject: I don't feel like using this. The project is not under active development - it was last released in Jan 2006 - and I can't see any support/help available. I also have doubts about how it will behave with AJAX.
Cucumber-Nagios:
I tried using this. It involves many Ruby components, and I found that you have to have in-depth knowledge of the Ruby platform to make them all work together. I am not a Ruby guy and am having a tough time getting everything working. This project is also not under active development, and I don't see support/help options available. I posted a bug 4 days back and haven't seen any response yet.
Selenium plugin for Nagios: Haven't tried it yet. Will try now.
Any more solutions available?
Also, since I don't see any good, actively developed solutions for monitoring web applications using Nagios, I wonder whether it's really a good approach to use Nagios for this. If not, what alternatives do I have? In short, what is the best approach to monitoring web application availability?
Edit 1: We can't afford the Nagios XI paid version and will prefer open source solutions.
If not, what alternatives do I have?
Although Nagios was one of the options we considered, we chose OpenNMS for monitoring purposes. The rationale for our decision is that OpenNMS is a highly reliable and configurable free open-source tool, and, additionally, most of our applications are Java-based; OpenNMS offers integration with JMX. However, bear in mind that if you need very complex tests for your web site, it may be better to look elsewhere. OpenNMS can be set to check for HTTP status codes etc., but if you're looking for complex scenarios, take a look at:
Apache JMeter (we're using it mainly during the testing phase)
Selenium (can be well used even in production phase)
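If none of the off-the-shelf plugins fit, writing your own Nagios check is not much code. Here's a sketch in Python: the URLs, credentials, and the "results" marker text are all assumptions you'd adapt to your application, it requires the "requests" library, and note it won't exercise the AJAX layer the way Selenium would.

```python
#!/usr/bin/env python
# Sketch of a custom Nagios plugin: log in, run a search, report status.
import sys
import requests

OK, WARNING, CRITICAL = 0, 1, 2

try:
    session = requests.Session()
    login = session.post(
        "https://app.example.com/login",  # assumption: your login URL
        data={"username": "monitor", "password": "secret"},
        timeout=10,
    )
    search = session.get(
        "https://app.example.com/search",  # assumption: your search URL
        params={"q": "smoke-test"},
        timeout=10,
    )
    if login.ok and search.ok and "results" in search.text:
        print("OK - login and search succeeded")
        sys.exit(OK)
    print("CRITICAL - unexpected response (login=%s search=%s)"
          % (login.status_code, search.status_code))
    sys.exit(CRITICAL)
except requests.RequestException as exc:
    print("CRITICAL - %s" % exc)
    sys.exit(CRITICAL)
```

Nagios interprets the exit code (0 = OK, 1 = WARNING, 2 = CRITICAL) and uses the first line of output as the status message, so a plugin like this drops straight into a check_command definition.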
We need an online/offline wiki-type app that is basically a number of pages with documentation in them, but that would also need to link to a number of files - Word docs, PDFs, PPTs, etc. - that are on a synced mapped drive on the user's laptop.
Could anyone suggest whether or not Google Gears would be a reasonable solution to this? I have just had a brief peruse of the Gears documentation, and it seems pretty cool/useful.
As in: make a web wiki and Gears it up.
The app would also need some way of holding the links to the actual files (docs/PDFs etc.), but that shouldn't impact the gearsiness of it, I imagine.
thanks...
Sorry, it's late in the day, so the question may not actually make any sense.
nat
Given that Google is publicly committed to supporting HTML 5 and its very extensive offline application capabilities, I would personally choose that direction over Gears.
Neither Gears nor HTML 5 is going to give you any ability to read content off of the computer. Web browsers are intentionally sand-boxed to prevent that kind of activity.
Check out the remarkable capabilities of HTML 5, and then see how extensive support for it already is.
Gears is for allowing web pages to store local content and applications on a client computer for offline mode, not for allowing the web browser to peek out on the user's computer.
Gears is also deprecated in favor of HTML5 local storage and other developments.
Web services and web APIs have managed to increase the accessibility of the information stored and catalogued on the internet. They have also opened up a vast array of enterprise power functionality for smaller thin client applications.
By tapping into these services, developers can provide functionality that would have taken them months, perhaps years, to set up. They can combine them into single applications that make life generally easier for their users.
Whether displaying information about the music being played, finding items of interest in the user's locale, or just simply tweeting and blogging from the same application - the possibilities are growing every day.
I want to know about the most interesting or useful services that are out there, especially ones that most of us may not have heard about yet. Do you maintain an API or service? Or do you have a clever mash-up that provides even more benefits than the originals?
YQL - Yahoo provides a tool that lets you query many different APIs across the web, even for sites that don't provide an API as such.
From the site:
The Yahoo! Query Language is an expressive SQL-like language that lets you query, filter, and join data across Web services.

...

With YQL, developers can access and shape data across the Internet through one simple language, eliminating the need to learn how to call different APIs.
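Calling it is just an HTTP GET against the public endpoint. A quick sketch (the rss table is one of the built-in public tables, so no key is required):

```python
# Sketch: run a YQL query against the public endpoint and print results.
import json
import urllib.parse
import urllib.request

query = 'select title from rss where url="http://news.ycombinator.com/rss"'
url = ("https://query.yahooapis.com/v1/public/yql?q="
       + urllib.parse.quote(query) + "&format=json")

data = json.load(urllib.request.urlopen(url))
for item in data["query"]["results"]["item"]:
    print(item["title"])
```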
The World Bank API is pretty cool. Google uses it in search results. My favourite implementations are the cartograms at worldmapper.
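It's a plain REST/JSON API with no key required. A quick sketch pulling one indicator (SP.POP.TOTL is total population; the v2 URL shape is what the API uses today):

```python
# Sketch: fetch a World Bank indicator for Brazil, 2000-2010.
import json
import urllib.request

url = ("https://api.worldbank.org/v2/country/br/indicator/SP.POP.TOTL"
       "?format=json&date=2000:2010")
# The response is a two-element list: [metadata, records]
metadata, records = json.load(urllib.request.urlopen(url))

for rec in records:
    print(rec["date"], rec["value"])
```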
It's very niche, but I happen to think the OpenCongress API is amazing.
Less niche: Google Translate has an API that will guess the language of a piece of text. You'd be AMAZED how frequently this comes in handy (even though it's not as tweakable as you'd like and isn't reliable on small samples).
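For the record, detection via the v2 API is a single GET. A sketch (the API key is a placeholder, and the service is metered):

```python
# Sketch: detect the language of a string with the Translate API (v2).
import json
import urllib.parse
import urllib.request

API_KEY = "your-api-key"  # assumption: your own key
text = "Dov'e la stazione?"

url = ("https://www.googleapis.com/language/translate/v2/detect?"
       + urllib.parse.urlencode({"key": API_KEY, "q": text}))
data = json.load(urllib.request.urlopen(url))

detection = data["data"]["detections"][0][0]
print(detection["language"], detection["confidence"])
```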
I was just about to have a stab at using the SoundCloud API.
I know many people who already use it for sharing their musical masterpieces, and it's a pretty good site. Hopefully the API will be as good!
I like the RESTful API for weather.com. It's free and very useful for the new age of location-aware apps: https://registration.weather.com/ursa/xmloap/step1
It does require registration, but they don't spam you or anything - it's just to provide you a key to use the API.
Ah yes - here's another one I've been meaning to check out but haven't tried yet.
The BBC offer a bunch of APIs/feeds that look very promising:
http://ideas.welcomebackstage.com/data
They include APIs for accessing schedule data for both TV and radio listings, along with all kinds of news searches. It even looks like they'll be offering some sort of geo-location service soon, so it will be interesting to see what that has to offer.
Another interesting one for liberal Brits! ;)
The Guardian newspaper has its own API:
http://www.guardian.co.uk/open-platform
MusicBrainz
An excellent service for music mashups.
Not many people know that Last.fm's initial database was scraped from this service.
The United States Postal Service offers a web service that does address standardization. Quite useful in reducing clutter and cleaning data before it gets put into your database.
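It's an XML-over-HTTP API (their Web Tools "Verify" method) and you register for a free USERID. A rough sketch - the USERID is a placeholder and the XML shape is from their docs as I remember them, so double-check against the current spec:

```python
# Sketch: standardize an address via the USPS Web Tools Verify API.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

request_xml = """
<AddressValidateRequest USERID="YOUR_USERID">
  <Address ID="0">
    <Address1></Address1>
    <Address2>6406 ivy ln</Address2>
    <City>greenbelt</City>
    <State>MD</State>
    <Zip5></Zip5>
    <Zip4></Zip4>
  </Address>
</AddressValidateRequest>"""

url = ("http://production.shippingapis.com/ShippingAPI.dll?API=Verify&XML="
       + urllib.parse.quote(request_xml))
tree = ET.parse(urllib.request.urlopen(url))

# The response echoes back a cleaned-up, standardized address.
address = tree.find(".//Address")
print(address.findtext("Address2"), address.findtext("City"),
      address.findtext("State"), address.findtext("Zip5"))
```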