How to encourage non-anonymous editing on MediaWiki?

Problem
At work we have a department wiki (running MediaWiki). Unfortunately several
people edit without logging in, which makes it very difficult to track
down editors to ask questions about the content.
There are two strategies to improve this:
1. encourage logged-in editing
2. discourage anonymous editing
Encouraging
For this part, any tips are welcome. But of course there are always risks
involved in rewarding behaviour.
Discourage
I know that any such measure must be kept mild, or else it will discourage editing altogether.
But something just slightly annoying would be nice to have.
[update]
I know it is possible to simply disallow anonymous editing, but that would put a high barrier in front of any first-time contribution (especially for people outside our department!), so I do not think that is an option.
[/update]
[update2]
Using LDAP or Active Directory does not solve the problem since the wiki is also accessible and used by external contractors.
[/update2]
[update3]
I am no longer working for this company. That does not mean I have completely lost interest in this question, but from my current perspective the most valuable part is the "Did you forget to log in?" part below, and I will accept answers based on that part of the question.
[/update3]
Confirmation
One thought was to have an additional confirmation step for anonymous users,
"Are you really sure you want to submit this anonymously?", although with
such a question there is a risk that people will give up or resist editing. However,
if that question is re-phrased more diplomatically as "Did you forget
to log in?" I think it will come across as much more acceptable. Besides, it
will also catch those situations where the author did in fact forget to
log in but would actually want the contribution credited to his/her user
account. That last point is by itself a good enough reason for wanting it.
Is this possible?
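(A rough sketch of the kind of hook I imagine, in case it helps anyone answer; this is untested, and the wpAnonConfirmed field name is just made up for illustration. MediaWiki already shows anonymous editors the MediaWiki:Anoneditwarning notice above the edit box, which could at least be reworded along these lines.)
$wgHooks['EditFilter'][] = function ( $editPage, $text, $section, &$error, $summary ) {
    global $wgUser, $wgRequest;
    // An anonymous user is saving and has not ticked the confirmation box yet
    if ( $wgUser->isAnon() && !$wgRequest->getCheck( 'wpAnonConfirmed' ) ) {
        // A non-empty $error aborts the save and is shown with the edit form,
        // so the editor can either log in or confirm and save again.
        $error = '<div class="warningbox">Did you forget to log in? ' .
            'Log in to have this edit credited to your account, or confirm below ' .
            'to save it anonymously.</div>' .
            '<label><input type="checkbox" name="wpAnonConfirmed" value="1" /> ' .
            'Save anonymously</label>';
    }
    return true;
};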
Delay
Another thought for something slightly annoying is to add an extra
forced delay after "save page", displaying something like "If you had logged
in you would not have had to wait x seconds". Selecting the right x is difficult:
if it is too high it becomes a barrier, and if it is too low it might not
make any difference. But then I started thinking: what about starting at
zero and adding one second of delay for each anonymous edit from a given IP
address within a given time frame? That way there is no barrier to
starting to use the wiki, and by the time the delay becomes significant
the user has already contributed a lot, so I think the outcome is much
more likely to be that the editor eventually creates an account rather than
giving up. This assumes IP addresses are fairly static, but that is
typically the case on a business network.
Is this possible?
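(Again a rough, untested sketch of what I have in mind; it assumes APCu is available for the per-IP counter, and the hook name varies by MediaWiki version, with newer releases using PageSaveComplete instead of ArticleSaveComplete.)
$wgHooks['ArticleSaveComplete'][] = function ( $article, $user ) {
    global $wgRequest;
    if ( $user->isAnon() ) {
        // Count anonymous edits per IP address, resetting the counter daily
        $key = 'anon-edit-delay:' . $wgRequest->getIP();
        $count = (int)apcu_fetch( $key );
        apcu_store( $key, $count + 1, 86400 );
        // One extra second per earlier anonymous edit, capped so the wiki
        // never becomes completely unusable
        sleep( min( $count, 30 ) );
    }
    return true;
};
Displaying the "you would not have had to wait x seconds" message would need a bit more plumbing, but the delay itself is the gentle nudge.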

You can turn off anonymous editing in MediaWiki like so:
Edit LocalSettings.php and add the following setting:
$wgDisableAnonEdit = true;
Edit includes/SkinTemplate.php, find $fname-edit and change the code to look like this (i.e., basically wrap the following code between the wfProfileIn() and wfProfileOut() functions):
wfProfileIn( "$fname-edit" );
global $wgDisableAnonEdit;
if ( $wgUser->mId || !$wgDisableAnonEdit) {
// Leave this as is
}
wfProfileOut( "$fname-edit" );
Next, you may want to disable the [Edit] links on sections. To do this, open includes/Skin.php and search for editsection. You will see something like:
if (!$wgUser->getOption( 'editsection' ) ) {
Change that to:
global $wgDisableAnonEdit;
if ( !$wgUser->getOption( 'editsection' ) || ( $wgDisableAnonEdit && !$wgUser->mId ) ) {
Section editing is now blocked for anonymous users.
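For what it's worth, on reasonably recent MediaWiki versions you do not need to patch core files at all; the supported way to switch off anonymous editing is a permission setting in LocalSettings.php:
$wgGroupPermissions['*']['edit'] = false;    // anonymous users can read but not edit
$wgGroupPermissions['user']['edit'] = true;  // logged-in users keep the edit right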

Forbid anonymous editing and let people log in using their domain logins (LDAP). Often the barrier is registering a new account and making up a username and password and such.
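As a rough sketch of what that could look like with the classic LDAP Authentication extension (the domain and server values below are placeholders, and the details depend heavily on your directory setup):
require_once "$IP/extensions/LdapAuthentication/LdapAuthentication.php";
$wgAuth = new LdapAuthenticationPlugin();
$wgLDAPDomainNames    = array( 'EXAMPLE' );                          // label shown on the login form
$wgLDAPServerNames    = array( 'EXAMPLE' => 'dc.example.com' );      // your domain controller
$wgLDAPSearchStrings  = array( 'EXAMPLE' => 'EXAMPLE\\USER-NAME' );  // bind as DOMAIN\username
$wgLDAPEncryptionType = array( 'EXAMPLE' => 'tls' );
$wgLDAPUseLocal = false;                                             // no separate wiki-only accounts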

I think you should discourage anonymous edits by forbidding them - it's an internal wiki, after all.
The flip side is that you must make the login process as easy as possible. Hopefully you can configure the login cookie to have a decent lifetime (like 1 month) so they only need to log in once per month.
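In MediaWiki the relevant setting is $wgCookieExpiration in LocalSettings.php (the value is in seconds), for example:
$wgCookieExpiration = 30 * 86400;   // keep login cookies for about a month (applies when "remember me" is ticked)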

Play to people's egos, and add a rep system kind of like here. Just make a widget for the home page that shows the number of edits made by the top 5 users or something. Give the top 1 or 2 users an MVP reward at regular (monthly?) intervals.

Well, I doubt that this solution will be valuable for hlovdal, given that this question is now two months old, but maybe somebody else will find it useful:
The optimum solution to this problem is to enable automatic logins. This requires two steps. First, you need to add automatic authentication to your web server. Right now, we're using Apache with the Debian libapache2-authenntlm-perl package on our internal application server*. (Our network is Active Directory and, obviously, the server runs on Debian Linux.) Second, you need a MediaWiki extension that makes MediaWiki aware of the web server's authentication. I've used the Automatic REMOTE_USER Authentication module successfully on an Apache web server that was tied into our network via an NTLM authentication module, but I do recall that it required a bit of massaging of the code to make it work:
I had to follow the "horrid hacks" given on the extension's page, changing the setPassword() and addUser() functions to always return true instead of always returning false.
Since Active Directory is case-insensitive and MediaWiki isn't, I replaced both instances of the statement $username = $_SERVER['REMOTE_USER'] with $username = getCanonicalName($_SERVER['REMOTE_USER']).
Since I wanted to only allow certain people within the company to use our wiki, I set autoCreate() to always return false. It doesn't sound as if you need to worry about this, so you should leave autoCreate() at always returning true, which means that anybody on your company network will be able to access the wiki.
The nifty thing about this solution is that nobody ever has to log in to the wiki; they simply go to a wiki page and they are logged in under their network ID.
* We just switched to this from a Red Hat server that was using mod_ntlm. Unfortunately, mod_ntlm hasn't been updated in a while and it's been starting to sporadically fail. I mention this because I've started to stumble on a performance issue with our current MediaWiki configuration that may require further code massaging....
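For reference, the LocalSettings.php side of this setup is small once Apache itself handles the NTLM authentication; roughly the following (the extension path and class name depend on which version of the Automatic REMOTE_USER / Auth_remoteuser extension you install):
require_once "$IP/extensions/Auth_remoteuser/Auth_remoteuser.php";
// Hand the REMOTE_USER value set by Apache's authentication module to MediaWiki
$wgAuth = new Auth_remoteuser();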

Make sure users don't get logged out if they look away from the screen or sneeze or scratch their head. You want long, persistent sessions. Once logged in, stay logged in.
That's the problem with the MediaWiki our company uses internally: you log in, do stuff, then come back later and it has logged you out, but the notification that you are no longer logged in is so insignificant on the screen that the user never notices.

If this runs within an internal network, you could pull Active Directory information so that no one has to log in, ever. That's how I do it at work. That is, if they are logged into their windows machine, then my webapps can pick up their username and associate that (or their userid) with their edits.
I don't know if this would be easy to add to MediaWiki, though.

I'd recommend checking out wikipatterns.org - a great site about the social aspects of wikis

Explicitly using some form of directory service (LDAP) would probably be a good idea, so that your users are always fully identified. On the other hand, wikis are subject to their own dynamics; in fact, some wikis are successful precisely because they can be edited anonymously, so that is another thing to keep in mind.
Apart from that, personally I'd try to create some sort of incentive for users to contribute openly and identifiably: this could be based on a point/score system so that stats are shown for all users who have contributed to the wiki each day, which could possibly even create some friendly competition.
Likewise, the wiki could by default not show any anonymously contributed content without it being reviewed first, which would be another incentive for users to contribute openly.

SO has an extremely low barrier for posting. You could allow people to specify their name when making an edit. When they are ready, they can finally log in to avoid having to type their name all the time.

You said this is in a departmental situation. Can't you add a feature to the wiki where it makes an educated guess as to who is editing based on the IP address, and annotates the edit accordingly?

I agree absolutely with everyone who recommends carefully researching the effects of anonymity in your application before you start "forbidding" it. In a great many cases people prefer anonymous editing because they DO NOT WANT TO BE ASKED ABOUT IT, IDENTIFIED WITH IT, OR SUFFER SOME PROBLEM FOR POINTING IT OUT. You need to be VERY sure these factors are not driving users to prefer anonymous edits, and frankly you should continue to allow anonymized edits with a generic credential login like "anonymous_employee" or "anonymous_contractor", in case someone wants to point out an issue without becoming identified with it.
Re the "thought... to have an additional confirmation step for anonymous users- "Are you really sure you want to submit this anonymously?", it's a good idea, but do not "re-phrase" in a way that suggests it is wrong to not be logged in as yourself, i.e. don't say "Did you forget to log in?" I'd instead note it this way:
"Your edit will appear as an IP number - it may be attributed to 'anonymous_employee' or 'anonymous_contractor' or 'anonymous_contributor' for your privacy protection. You will not be notified of any answer or response to it. If you prefer to have this contribution credited, then [log in right now]."
That leaves it absolutely clear what will happen, doesn't pressure anyone to do it either way, and does not bias what is being contributed with some "rewards".
You can also, alternately, force a login via LDAP / cookies, and then ask them if they prefer this edit to be anonymous. That is the approach taken on some blog platforms. On an intranet the abuse potential for this is basically zero, so you would presumably only have situations where someone didn't want 'how they knew' or 'why they raised this' to become the question rather than the data itself. IBM has shown in some careful research that anonymized feedback is much more useful than attributed feedback in correcting groupthink and management blind spots.

Related

Commenting / Forum type software for ColdFusion

We operate a ColdFusion site with a custom CMS acting as a directory of various companies. Depending on the type of company, we have a set of subpages containing specific information pulled from the CMS about the company, such as "location/directions". We're looking to add functionality enabling users to add comments to the existing content. I'm looking for suggestions on open source or other available ColdFusion software out there that could work for this. While we could write something custom, commenting tools have been done a thousand times and probably better than we could do it.
While what we're looking for sounds like a blog or forum, it's more of a hybrid. We'd like to be able to add functionality enabling commenting on the content we post, in the context we post it in. It seems like there must be something out there that can be easily modified and integrated with our CMS.
Does anyone know of anything out there we should look into?
I'm going to vote to close this too, as per the others, but here's an answer anyway.
If you just want to add commenting to existing content, perhaps use Disqus. It's not locally installable (and is not CFML-based; it's all JS), but it does handle most things one would need if just wanting to add comments to a site.
If you want a native, self-managed solution, unfortunately Stack Overflow has deemed that sort of question "unworthy", so you'll need to ask elsewhere. Despite being an entirely reasonable question, for which the answers would be helpful to other people later on (which is, in theory, the raison d'être of Stack Overflow, although that's hard to tell sometimes).

Can the _ga cookie value be used to exclude self traffic in Google analytics universal?

Google Analytics stores a unique user id in a cookie named _ga. Some self traffic was already counted, and I was wondering if there's a way to filter it out by providing the _ga cookie value to some exclusion filter.
Any ideas?
Firstly, I'm gonna put it out there that there is no solution for excluding or removing historical data, except to make a filter or segment for your reports, which doesn't remove or prevent that data from showing up; it simply hides it. So if you're looking for something that gets rid of the data that is already there, sorry, not happening. Now on to making sure more data doesn't show up.
GA does not offer a way to exclude traffic by its visitor cookie (or any cookie in general). In order to do this, you will need to read the cookie yourself and expose it to something that GA can exclude by. For example, you can pop a custom variable or override/append the page name.
But this isn't really that convenient for lots of reasons, such as having to burn a custom variable slot, or having to write some server-side or client-side code to read the cookie and act on a value, etc..
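For example, if your pages happen to be served by PHP, the server-side variant of "read the cookie and act on it" could look roughly like this (the cookie value in the exclusion list is made up; you would substitute your own _ga value):
$excluded = array( 'GA1.2.1234567890.1234567890' );        // your own _ga cookie value(s)
$ga = isset( $_COOKIE['_ga'] ) ? $_COOKIE['_ga'] : '';
if ( !in_array( $ga, $excluded, true ) ) {
    // Only emit the usual analytics.js tracking snippet for everyone else
    include 'ga-snippet.html';
}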
And even if you do decide to do this, you're going to have to consider at least 2 scenarios in which this won't work or break:
1) This won't work if you go to your site from a different browser, since browsers can't read each other's cookies.
2) It will break as soon as you clear your cookies.
An alternative you should consider is to make an exclusion filter on your IP address. This has the benefit of:
works regardless of which browser you are on
you don't have to write any code for it that burns or overwrites any variables
you don't have to care about the cookie
General Notes
I don't presume to know your situation or knowledge, so take this for what it's worth: I'm simply throwing out general advice, because nothing goes without saying.
If you were on your own site to QA something and want to remove data from some kind of development or QA effort, a better solution in general is to have a separate environment for developing and QAing, for example a separate dev.yoursite.com subdomain that mirrors live. Then you can easily make an exclusion on that subdomain (or have a separate view or property, or any number of other ways to keep that dev/qa traffic out).
Another thing is: how much traffic are we talking about here anyway? You really shouldn't be worrying about a few hits that you've personally made on your live site. Again, I don't know your full situation and what your normal numbers look like, but in the grand scheme of things a few extra hits are a drop of water in a bucket, and in general it's more useful to look at trends in the data over time, not exact numbers at a given point in time.

Determine unique visitors to site

I'm creating a Django website with Apache2 as the server. I need a way to determine the number of unique visitors to my website (specifically to each individual page) in a foolproof way. Unfortunately users will have a high incentive to try to "game" the tracking system, so I'm trying to make it foolproof.
Is there any way of doing this?
Currently I'm trying to use IP & Cookies to determine unique visitors, but this system can be easily fooled with a headless browser.
Unless it's necessary that the data be integrated into your Django database, I'd strongly recommend "outsourcing" your traffic to another provider. I'm very happy with Google Analytics.
Failing that, there's really little you can do to keep someone from gaming the system. You could limit based on IP address but then of course you run into the problem that often many unique visitors share IPs (say, via a university, organization, or work site). Cookies are very easy to clear out, so if you go that route then it's very easy to game.
One thing that's harder to get rid of is files stored in the appcache, so one possible solution that would work on modern browsers is to store a file in the appcache. You'd count the first time it was loaded in as the unique visit, and after that since it's cached they don't get counted again.
Of course, since you presumably need this to be backwards compatible, it leaves things open to exactly the sorts of tools that are most likely to be used for gaming the system, such as curl.
You can certainly block non-browserlike user agents, which makes it slightly more difficult if some gamers don't know about spoofing browser agent strings (which most will quickly learn).
Really, the best solution might be -- what is the outcome from a visit to a page? If it is, for example, selling a product, then don't award people who have the most page views; award the people whose hits generate the most sales. Or whatever time-consuming action someone might take at the page.
Possible solution:
If you're willing to ignore people with JavaScript disabled, you could choose to count only people who access the page and then stay on that page for a given window of time (say, 1 minute). After a given period of time, do an Ajax request back to the server. So if they tried to game by changing their cookie and loading multiple tabs at once, it wouldn't work because they'd need to have the same cookie in order to register that they'd been on that page long enough. I actually think this might work; I can't honestly see a way to game that. Basically on the server side you store a dictionary called stay_until in request.session with keys for each unique page and after 1 minute or so you run an Ajax call back to the server. If the value for stay_until[page_id] is less than or equal to the current time, then they're an active user, otherwise they're not. This means that it will take someone at least 20 minutes to generate 20 unique visitors, and so long as you make the payoff worth less than the time consumed that will be a strong disincentive.
I'd even make it more explicit: on the bottom of the page in a noscript tag, put "Your access was not counted. Turn on JavaScript to be counted" with a page that lays out the tracking process.
As HTTP requests are stateless and you have no control over the user's behaviour on the client side, there is no bulletproof way.
The only way you're going to be able to track "unique" visitors in a fool-proof way is to make it contingent on some controlled factor such as a login. Anything else can and will fail to be completely accurate.

What would be a good Coldfusion-based bug tracking software?

What I am looking for is a tool that easily or automatically sends ColdFusion error messages to its system.
Then I can use the web-based interface to manage priorities, track who fixed what, and so forth.
I want to use this to help us deal with errors better, but also to show the importance of a bug tracking system to my fellow workers.
System Requirements: Apache, Windows, ColdFusion 8 Standard, SQL Server 2005.
Financial Requirements: Free or Open Source
Goal Or Purpose: To encourage my fellow workers to want and use a bug tracking system.
Does this re-write make more sense?
Thanks
Craig
Wikipedia has a comparison of issue tracking software; maybe this list could help:
http://en.wikipedia.org/wiki/Comparison_of_issue_tracking_systems
You may be able to find a hosted service and use either email or web services to create the ticket using onError. With that said, a simple issue tracking app could be created for your site using the same DB used to drive the content. 2 or 3 tables would take care of the data storage and you're already using CF so the application layer is already there.
HTH.
I have been heavily using this type of a setup for several years by email only, and the last 3 years with a Bug Tracking Software.
I must say, the bug tracking software has made my life so much more peaceful. Nothing is left, forgotten, or slips through the cracks. It's easy to find trends in errors, and remember "all the times" it happened.
Our setup is like this:
1) ColdFusion + appropriate framework with error reporting - It doesn't matter what you use. I have used Fusebox extensively and am making the transition to ColdBox. Both are very capable, in addition to Mach-II, FW/1, Model-Glue, etc. The key part you have to find in them is their ability to catch "onError", usually in the application CFC.
2) Custom OnError Script - Wherever an error occurs, you want to capture the maximum amount of information about that error and email it in. What we do is, when an error occurs, we log the user out with a message of "oops, log in again". Before logging them out, the application captures the error and emails it to FogBugz. Along with it, at the top we include the CGI variables for the IP address, the browser being used, etc. Over time you will find the things you need to add.
3) Routing in FogBugz. A 2-user version of FogBugz is free, and hosted online. There are two main ways to submit bugs. One is to email them in one at a time, so if an error happens 2000 times, you get 2000 emails and 2000 cases, which is not always the best for linking them together, etc. They have a feature called BugzScout, which is essentially an HTTP address that you do a form post to (with cfform) with all of the same information you would have put into the email. There's plenty of documentation on this and it is something I've always wanted to get around to. I had a scenario of 2000 emails happen for the first time a few weeks ago, so I'll be switching over to this.
Hope that helps. Share what you ended up doing and why so we all can learn too!
I'm surprised no one mentioned LighthousePro (http://lighthousepro.riaforge.org). Open source - 100% free - and ColdFusion. As the author I'm a bit biased though. :)
Hard question to answer without knowing what kind of restrictions there are. Do you have permission to install anything? Also, most bug-tracking systems require some kind of database support.
I have a suggestion. You can put in place a basic bug-tracking system, that just allows people to create tickets, and allows you/someone else to close it.
More Windows based tools are mentioned here
Good open-source bug tracking / issue tracking sofware for Windows
Any reason why ColdFusion specifically?
I really like FogBugz from the makers of Stack Overflow. For one user it's quite reasonably priced. I enter some bugs manually and have others emailed in.
A lot of bug tracking software will expose SOAP methods for entering data into them.
For example, we used Axosoft's OnTime and that exposed some WSDL pages that I consumed in my application. I was told that Jira did as well.
There are few in CF411 list: Bug Tracking/Defect Tracking/Trouble Ticket/Help Desk Tools Written in CFML
We use HopToad. There is another bug-tracking app called LightHouse that integrates with HopToad so you can easily create a [bug] ticket from an incoming exception. HopToad has an API of which there are many clients, you want the CF based one:
http://github.com/timblair/coldfusion-hoptoad-notifier
Even if you dont use HopToad and you end up using a different service or roll your own, if you needed to write your own API client you could leverage the code or pattern(s) of the above HopToad client.
A lot of good information from everyone, and I really do appreciate the effort, but it is not the answer I was looking for. Which maybe means that what I want does not exist yet.
So I may have to roll my own solution... or maybe integrate with another existing app...
Thank you all.

Is there any reason to sanitize user input to prevent them from cross site scripting themself?

If I have fields that will only ever be displayed to the user that enters them, is there any reason to sanitize them against cross-site scripting?
Edit: So the consensus is clear, that it should be sanitized. What I'm trying to understand is why? If the only user that can ever view the script they insert into the site is the user himself, then the only thing he can do is execute the script himself, which he could already do without my site being involved. What's the threat vector here?
Theoretically: no. If you are sure that only they will ever see this page, then let them script whatever they want.
The problem is that there are a lot of ways in which they can make other people view that page, ways you do not control. They might even open the page on a coworker's computer and have them look at it. It is undeniably an extra attack vector.
Example: a pastebin without persistent storage; you post, you get the result, that's it. A script can be inserted that inconspicuously adds a "donate" button linking to your PayPal account. Put it up on enough people's computers, hope someone donates, ...
I agree that this is not the most shocking and realistic of examples. However, once you have to defend a security-related decision with "that is possible but it does not sound too bad," you know you crossed a certain line.
Otherwise, I do not agree with answers like "never trust user input." That statement is meaningless without context. The point is how to define user input, which was the entire question. Trust how, semantically? Syntactically? To what level; just size? Proper HTML?
Subset of Unicode characters? The answer depends on the situation. A bare web server "does not trust user input", yet plenty of sites get hacked today, because the boundaries of "user input" depend on your perspective.
Bottom line: avoid allowing anybody any influence over your product unless it is clear to a sleepy, non-technical consumer what and who.
That rules out almost all JS and HTML from the get-go.
P.S.: In my opinion, the OP deserves credit for asking this question in the first place. "Do not trust your users" is not the golden rule of software development. It is a bad rule of thumb because it is too destructive; it detracts from the subtleties in defining the frontier of acceptable interaction between your product and the outside world. It sounds like the end of a brainstorm, while it should start one.
At its core, software development is about creating a clear interface to and from your application. Everything within that interface is Implementation, everything outside it is Security. Making a program do the things you want it to is so preoccupying one easily forgets about making it not do anything else.
Picture the application you are trying to build as a beautiful picture or photo. With software, you try to approximate that image. You use a spec as a sketch, so already here, the more sloppy your spec, the more blurry your sketch. The outline of your ideal application is razor thin, though! You try to recreate that image with code. Carefully you fill the outline of your sketch. At the core, this is easy. Use wide brushes: blurry sketch or not, this part clearly needs coloring. At the edges, it gets more subtle. This is when you realize your sketch is not perfect. If you go too far, your program starts doing things that you do not want it to, and some of those could be very bad.
When you see a blurry line, you can do two things: look closer at your ideal image and try to refine your sketch, or just stop coloring. If you do the latter, chances are you will not go too far. But you will also make only a rough approximation of your ideal program, at best. And you could still accidentally cross the line anyway! Simply because you are not sure where it is.
You have my blessing in looking closer at that blurry line and trying to redefine it. The closer you get to the edge, the more certain you are where it is, and the less likely you are to cross it.
Anyway, in my opinion, this question was not one of security, but one of design: what are the boundaries of your application, and how does your implementation reflect them?
If "never trust user input" is the answer, your sketch is blurry.
(and if you don't agree: what if OP works for "testxsshere.com"? boom! check-mate.)
(somebody should register testxsshere.com)
Just because you don't display a field to someone, doesn't mean that a potential Black Hat doesn't know that they're there. If you have a potential attack vector in your system, plug the hole. It's going to be really hard to explain to your employer why you didn't if it's ever exploited.
I don't believe this question has been answered entirely. He wants to see an actual XSS attack if the user can only attack himself. This is actually done by a combination of CSRF and XSS.
With CSRF you can make a user make a request with your payload. So if a user can attack himself using XSS, you can make him attack himself (make him make a request with your XSS).
A quote from The Web Application Hacker's Handbook:
COMMON MYTH:
“We’re not worried about that low-risk XSS bug. A user could exploit it only to attack himself.”
Even apparently low-risk vulnerabilities can, under the right circumstances, pave the way for a devastating attack. Taking a defense-in-depth approach to security entails removing every known vulnerability, however insignificant it may seem. The authors have even used XSS to place file browser dialogs or ActiveX controls into the page response, helping to break out of a kiosk-mode system bound to a target web application. Always assume that an attacker will be more imaginative than you in devising ways to exploit minor bugs!
Yes, always sanitize user input:
1. Never trust user input
2. It does not take a lot of effort to do so.
The key point being 1.
If the script, or service, that the form submits the values to is available via the internet then anyone, anywhere, can write a script that will submit values to it. So: yes, sanitize all inputs received.
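As a minimal illustration (PHP here purely as an example; the same idea applies in any server-side language), output-encoding a user-supplied value is a one-liner, which is why point 2 holds:
// Encode user-supplied text before echoing it into HTML
function e( $value ) {
    return htmlspecialchars( $value, ENT_QUOTES, 'UTF-8' );
}
echo '<p>Hello, ' . e( isset( $_GET['name'] ) ? $_GET['name'] : '' ) . '</p>';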
The most basic model of web-security is pretty simple:
Do not trust your users
It's also worth linking to my answer in another post: Steps to become web-security savvy.
I can't believe I answered without referring to the title-question:
Is there any reason to sanitize user input to prevent them from cross site scripting themself?
You're not preventing the user from being cross-site scripted; you're protecting your site (or, more importantly, your client's site) from being the victim of cross-site scripting. If you don't close known security holes because you couldn't be bothered, it will become very hard to get repeat business, or good word-of-mouth advertising and recommendations from previous clients.
Think of it less as protecting your client; think of it, if it helps, as protecting your business.