My school's website was hacked because my professor got hit by XSS. I found that the attacker used this XSS attack vector:
<img src="<img src=search"/onerror=alert("Xss")//">
Can anyone explain how this vector works, and how do you suggest I secure my site against further XSS attacks? Why is there an img inside an img tag?
Thanks..
Simply put, he is closing your src attribute and then inserting his own onerror handler. The onerror handler gets called when the image cannot be loaded. You can prevent this by escaping all user input - assume that every user is trying to hack your site.
In PHP, you could do this by using:
strip_tags($_POST['val']);
Note that strip_tags() only removes markup; htmlspecialchars($_POST['val'], ENT_QUOTES) is generally the safer choice, since it escapes the dangerous characters instead of stripping them.
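To see why escaping defuses the payload, here is a minimal sketch in Python (the same idea as htmlspecialchars(); the payload string is the one from the question):
import html
# The attacker's payload from the question.
payload = '<img src=search"/onerror=alert("Xss")//'
# Unsafe: the quote in the payload closes the src attribute,
# so onerror becomes a live attribute on an injected element.
unsafe = '<img src="%s">' % payload
# Safe: escaping turns quotes and angle brackets into entities,
# so the browser treats the whole value as inert text.
safe = '<img src="%s">' % html.escape(payload, quote=True)
print(unsafe)
print(safe)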
I'm creating a classifieds website in Django. A single view function handles global listings, city-wise listings, barter-only global listings and barter-only city-wise listings. This view is called ads.
The url patterns are written in the following order (note that each has a unique name although it's tied to the same ads view):
urlpatterns = patterns('',
    url(r'^buy_and_sell/$', ads, name='classified_listing'),
    url(r'^buy_and_sell/barter/$', ads, name='barter_classified_listing'),
    url(r'^buy_and_sell/barter/(?P<city>[\w.#+-]+)/$', ads, name='city_barter_classified_listing'),
    url(r'^buy_and_sell/(?P<city>[\w.#+-]+)/$', ads, name='city_classified_listing'),
)
The problem is that when I hit the URL named classified_listing in the list above, the function ads gets called twice, i.e. here's what I see in my terminal:
[14/Jul/2017 14:31:08] "GET /buy_and_sell/ HTTP/1.1" 200 53758
[14/Jul/2017 14:31:08] "GET /buy_and_sell/None/ HTTP/1.1" 200 32882
This means double the processing. I thought urls.py returns the first url pattern matched. What am I doing wrong and what's the best way to fix this? All other calls work as expected btw (i.e. only once).
Note: Ask for more information in case I've missed something.
Great explanation for understanding this type of occurrence: https://groups.google.com/d/msg/django-users/CRMMYWix_60/KEIkguUcqxYJ
This issue has nothing to do with how url patterns are ordered in urls.py.
As pointed out in the comments under the question, this has to do with problematic asset references in the HTML template.
What does that mean?
For instance, try curl -i http://localhost:8000/example/ >> output.txt in your terminal. Then open up output.txt in your editor of choice. Now search for href or src attributes where values are None (or otherwise malformed). That's one reason a double call is being created. That was the reason for me. I removed these, and the double call disappeared.
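If you'd rather automate that search, here is a rough helper sketch (the URL is a placeholder for whichever of your pages double-fires):
import re
import urllib.request

# Fetch the rendered page and flag src/href values that would
# trigger a second request to the same view: "#" anchors or
# "None" artifacts left by a missing template variable.
page = urllib.request.urlopen("http://localhost:8000/example/").read().decode("utf-8")
for match in re.finditer(r'(?:src|href)\s*=\s*["\']([^"\']*)["\']', page):
    value = match.group(1)
    if value == "#" or "None" in value:
        print("suspicious attribute value:", repr(value))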
There's this old - but relevant - writeup about how to comprehensively diagnose this problem on your machine here: https://groups.google.com/forum/#!msg/django-users/CRMMYWix_60/KEIkguUcqxYJ
Happy testing.
As I can't comment on other answers, just to add for future wanderers: for me the "problem" was a correctly formed <iframe src="#"..> tag, which nonetheless instructs the browser to make a request. On the Django server the view rendered twice: once for the original request, and then again for the hidden iframe element that I used for some modal popups later in the page.
After emptying the src attribute, as in <iframe src=""..>, a second request is no longer initiated and my modals work fine.
The solution actually comes from the link already posted in the earlier answers (https://groups.google.com/forum/#!msg/django-users/CRMMYWix_60/KEIkguUcqxYJ),
where it is explained:
Note that it's a URI. That means something that is retrieved. Since you've used the value "#fff", that will be interpreted by the browser as a reference to the current page (#fff being an anchor, and not passed to the server). Ergo, a second request is made.
In my case, the iframe's src="#" (an anchor) was instructing the browser to load the same URL again for the iframe element.
I indeed had several style elements with #fff colors inside and whatnot, but this wasn't it, as browsers are smart enough to recognize this is not an anchor.
Using only the browser's built-in tools, I found it easy to debug and locate these initiating href/src attributes via the Network tab of the developer tools - in Chrome, just click the Initiator link on the corresponding row, and it gives you the exact line in the page source that initiated the request to the same URL.
I struggled with the same problem and just wanted to share my experience with it. I had double requests all over my application, but everything seemed to work as expected apart from that.
What Daniel Rossman pointed out in the comments was actually also true for my problem. I had a <link rel="shortcut icon" href="#"> in my base template which caused the double request, because of the #, which is a reference to the page itself. Once I removed it, I had no double requests anymore.
Hope this answer can save someone some debugging time.
I got a double request in my view function. In my scenario, this was what went wrong:
<img id="profile-img" src="#" alt="" class="profile-cover">
Setting src="" got rid of the double request. It was a silly thing - I assumed that what applies to an <a> tag must also apply to <img>, but an <img> with src="#" actually sends another request.
Thanks to everyone in advance.
I encountered a problem when using Scrapy on Python 2.7.
The webpage I tried to crawl is a discussion board for Chinese stock market.
When I tried to get the first number, "42177", just under the banner of this page (the number you see on that webpage may not match the picture shown here, because it represents the number of times this article has been read and is updated in real time...), I always get empty content. I am aware that this might be a dynamic content issue, but I don't yet have a clue how to crawl it properly.
The code I used is:
item["read"] = info.xpath("div[#id='zwmbti']/div[#id='zwmbtilr']/span[#class='tc1']/text()").extract()
I think the XPath is set correctly, and I have checked the return value of this response; it indeed told me that there is nothing under this node. Result shown here: 'read': [u'<div id="zwmbtilr"></div>']
If it contained anything, there would be something between <div id="zwmbtilr"> and </div>.
I'd really appreciate it if you could share any thoughts on this!
I just opened your link in Firefox with NoScript enabled. There is nothing inside the <div id='zwmbtilr'></div>. If I enable JavaScript, I can see the content you want. So, as you already knew, it is a dynamic content issue.
Your first option is to try to identify the request generated by the JavaScript. If you can do that, you can send the same request from Scrapy. If you can't, the next option is usually to use some package with JavaScript/browser emulation or something like that - for example ScrapyJS, or Scrapy + Selenium.
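To illustrate the first option, a minimal sketch (the JSON endpoint and field name are hypothetical - copy the real request out of your browser's Network tab first):
import json
import scrapy

class ReadCountSpider(scrapy.Spider):
    name = "read_count"
    # Hypothetical endpoint: replace with the actual request the
    # page makes when the counter loads.
    start_urls = ["http://example.com/api/read_count?article=42177"]

    def parse(self, response):
        # If the endpoint returns JSON, parse it directly instead
        # of running XPath against the empty server-side HTML.
        data = json.loads(response.text)
        yield {"read": data.get("read")}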
I tested a site for vulnerabilities (folder /service-contact) and a possible DOM XSS issue came up (using Kali Linux, Vega and XSSer). I then tried to manually test the URL with an 'alert' script to make sure it's vulnerable. I used
www.babyland.nl/service-contact/<script>alert("test")</script>
No alert box/pop-up was shown; only the HTML code showed up in the contact form box.
I am not sure I used the right code (I'm a rookie) or interpreted the result correctly. The server is Apache, using JavaScript.
Can you help?
Thanks!
This is not vulnerable to XSS. Whatever you write in the URL ends up in the form section below (Vraag/opmerking), and the double quotes (") are escaped. If you try another payload like <script>alert(/xss/)</script>, that also won't work, because it is neither reflected nor stored; you will see the output as plain text in Vraag/opmerking. Don't rely on online scanners - test manually. For DOM-based XSS, check sinks and sources and analyze them.
The tool is right: there is an XSS vulnerability on the site, but the proof-of-concept (PoC) code is wrong. The content of a <textarea> can only be character data (see the <textarea> description on MDN), so your <script>alert("test")</script> is interpreted as text and not as HTML code. But you can close the <textarea> tag and insert the JavaScript code after that.
Here is the working PoC URL:
https://www.babyland.nl/service-contact/</textarea><script>alert("test")</script>
which is rendered as:
<textarea rows="" cols="" id="comment" name="comment"></textarea><script>alert("test")</script></textarea>
A little note on testing for XSS injection: Chrome/Chromium has built-in XSS protection, so this code doesn't fire in that browser. For manual testing you can use Firefox, or run Chrome with --disable-web-security (see this StackOverflow question and this one for more information).
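If you want to check a reflection like this without fighting browser protections, a rough sketch (the URL and payload are the ones from this thread):
import urllib.parse
import urllib.request

# Payload that tries to break out of the <textarea> sink.
payload = '</textarea><script>alert("test")</script>'
url = "https://www.babyland.nl/service-contact/" + urllib.parse.quote(payload)
body = urllib.request.urlopen(url).read().decode("utf-8", "replace")

# If the raw payload comes back unescaped, the breakout works;
# if it comes back entity-encoded (&lt;/textarea&gt;...), it doesn't.
print("reflected unescaped:", payload in body)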
I am building a social networking website for musicians and I would like them to be able to enter the embed code provided by SoundCloud, so that they may have a sound clip on their posts.
However, I am unsure how I would sanitise the input, to ensure that it's only a SoundCloud iframe embed code that they enter. I want to avoid them pasting in embed code for say, YouTube or anything else for that matter.
An example embed code from SoundCloud looks like:
<iframe width="100%" height="166" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=http%3A%2F%2Fapi.soundcloud.com%2Ftracks%2F85146642"></iframe>
I am using the HTML parser jSoup to sanitise input.
The key fragment to this is the src content:
https://w.soundcloud.com/player/?url=http%3A%2F%2Fapi.soundcloud.com%2Ftracks%2F85146642
One possibility I thought of was to extract the src parameter's value and then rebuild the iframe myself - this way, I only store the URL and ensure that any HTML output to the browser is markup I have created myself. Doing this may also allow me to run checks on the domain name, etc.
I'm wondering what the best approach would be for this?
Appreciate any input you may have.
Thanks,
Michael.
PS - I am using Railo (ColdFusion server) and the Java jSoup library, but I guess the same principles would apply regardless of what language one would use.
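For what it's worth, here is a rough sketch of that extract-and-rebuild approach (shown in Python for brevity; the same logic maps to jSoup on Railo, and the allowed host is an assumption to adjust):
import re
from urllib.parse import urlparse

ALLOWED_HOST = "w.soundcloud.com"  # assumption: only SoundCloud player embeds

def rebuild_embed(user_html):
    # Pull the src value out of the submitted iframe markup.
    match = re.search(r'<iframe[^>]*\bsrc="([^"]+)"', user_html, re.IGNORECASE)
    if not match:
        return None
    src = match.group(1)
    parsed = urlparse(src)
    # Accept only https URLs pointing at the SoundCloud player.
    if parsed.scheme != "https" or parsed.netloc != ALLOWED_HOST:
        return None
    # Rebuild the iframe ourselves so no attacker-controlled
    # attributes survive; only the validated URL is reused.
    return ('<iframe width="100%%" height="166" scrolling="no" '
            'frameborder="no" src="%s"></iframe>' % src)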
I wrote a simple pixel tracking program that works something like this:
Step 1) tracker.com sets a cookie
Step 2) mysite.com displays <img src="tracker.com/tracking.php">. That image reads the cookie from Step 1 & does some processing.
Works great in Chrome, Firefox and Safari. But when tested in IE, the cookie can't be read in Step 2. It's as if the cookie doesn't exist -- but I know it does.
Any idea why IE pretends the cookie doesn't exist? I've tried messing with P3P headers, no luck.
Does your domain have a privacy policy? I forget what it's called - maybe P3P? It's a somewhat arcane set of headers that you have to add.
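For reference, IE's default privacy settings block third-party cookies that lack a P3P compact-policy header. A minimal sketch of what tracker.com's endpoint could send (Flask is a hypothetical stand-in for tracking.php, and the CP tokens are just a typical compact policy):
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/tracking.php")
def tracking():
    # The 1x1 pixel body is irrelevant to the cookie issue,
    # so it stays empty in this sketch.
    resp = make_response("", 200)
    # IE only accepts third-party cookies when the response
    # declares a P3P compact policy.
    resp.headers["P3P"] = 'CP="CAO PSA OUR"'
    resp.set_cookie("tracker_id", "abc123")
    return resp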
Try adding the domain in the src attribute to Trusted Sites in IE. My guess is this is a security feature, and you've come up against a rather arcane security measure.
If the cookie-setting domain is two letters, I believe there is a bug in IE that prevents it from handling cookies properly for two-letter domains. If it isn't two letters, then never mind.
It may be that IE is blocking third-party cookies.
It's tricky without knowing more specifics of your setup, but I'm trying at this late hour to figure out how to clone the cookie for the current domain using REMOTE_ADDR.
So, the first answer was more about testing... try using JS to handle this instead -
From the site-reference.com forums:
<script type='text/javascript'>
// Request the tracking pixel with the current page URL attached.
var track = new Image();
track.src = "http://www.my-site.com/tracker.php?self=" + encodeURIComponent(document.location.href);
</script>
*NOTE: Capital "I" in Image, not lowercase!
Let us know! :D
Fred