Prevent suspicious actions in Django

I have the following suspicious entries in my Django output logs. Is somebody doing a vulnerability check, or what?
Invalid HTTP_HOST header: '47.95.231.250:58204'. You may need to add '47.95.231.250' to ALLOWED_HOSTS.
[03/Dec/2017 20:09:28] "GET http://47.95.231.250:58204/ip_js.php?IP=my_ip&DK=my_port&DD=FOQGCINPZHEHIIFR HTTP/1.0" 400 62446
How can I prevent this? I tried blocking the IP 47.95.231.250, but it didn't help; the requests are probably coming from different IP addresses.

Check your server - you will very likely find that 47.95.231.250 is your own server's IP address! This error indicates that someone is able to reach your server, but that your Django application is not set to respond to requests addressed by IP. If it is otherwise working, then you actually have ALLOWED_HOSTS set correctly based on domain name. Do NOT add the IP address to your ALLOWED_HOSTS unless you actually want to access it by IP address, which is usually not necessary in a production system.
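For reference, a production settings.py normally pins ALLOWED_HOSTS to just the domain names you serve - a minimal sketch (example.com is a placeholder):
# settings.py - list only the domains you actually serve; no bare IPs, no '*'
ALLOWED_HOSTS = ['example.com', 'www.example.com']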
So the IP-address access is an indication of someone trying to get in who shouldn't be allowed. The port 58204 is also a clue: regular ports for most web servers are 80 and 443. Occasionally, in order to have alternate ports for different applications, you will see 8000 or 8080 or other numbers, but 58204 is not a typical web-site port number. The third clue is that the requested file is ip_js.php, which indicates a probe for a PHP-based web site, not Django/Python.
Bottom line: See if you can configure your firewall to allow ONLY the necessary open ports from the outside world into your server. Typically this will include:
80 - http
443 - https
22 - ssh
and possibly others depending on how your server is configured and what applications it runs. For example, if you host MySQL or another database on the same box, then you will need to open additional ports if and only if you require remote access to the database from outside the application.
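On a typical Ubuntu box this could be done with ufw, for example - a minimal sketch (assumes ufw; adapt the port list to your setup):
# deny everything inbound by default, then open only what the server needs
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp    # ssh
sudo ufw allow 80/tcp    # http
sudo ufw allow 443/tcp   # https
sudo ufw enable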

Related

Direct IP Attacks, Elastic Beanstalk/NGINX

I have a bit of a problem with my site.
The setup is Elastic Beanstalk (NGINX) + Cloudflare.
But each day around 4 AM I see a direct-IP attack on my server:
around 300 requests in 1-2 minutes.
A bot tries to access resources like
GET /phpMyadmi/index.php HTTP/1.1
GET /shaAdmin/index.php HTTP/1.1
POST /htfr.php HTTP/1.1
For now all of them go to port 80 or 8080,
and they are successfully handled by an Nginx configuration that redirects them to example.com:443:
server {
listen 80 default_server;
listen 8080 default_server;
server_name _;
return 301 https://example.com$request_uri;
}
server {
listen 443 ssl;
server_name example.com;
ssl on;
...
So my questions are:
Have many site owners/devops faced the same kind of attack? What do you do to prevent such attacks?
For now it is handled very well and does not affect the server's work. Should I worry about it, or just filter out log lines matching the /phpmy/ pattern and forget about it?
Before these attacks I saw requests with the PROPFIND method. Should I block it for security reasons? It is handled by the default server for now.
I know that I can use Cloudflare Argo Tunnel or ELB + WAF, but I don't really want to do that for now.
I have found one solution on Stack Overflow: whitelisting all of Cloudflare's IPs. But I don't think that is a good one.
Another solution that should work, I guess, is to check the Host header and compare it with 'example.com'.
To answer your specific questions:
Every public IP receives unwanted traffic like you describe; sadly, it's pretty normal. This isn't really an attack as such; it's just a bot looking for signs of specific weaknesses, or otherwise trying to provoke a response that contains useful data. That data is no doubt later used in actual attacks, but this stage is basically automated reconnaissance on a potentially massive scale.
This kind of script likely isn't trying to do any damage, so as long as your server is well configured and fully patched it's not a big concern. However, these kinds of scans are the first step towards launching an attack - identifying services and application versions with known vulnerabilities - so it's wise to keep your logs for analysis.
You should follow the principle of least privilege. PROPFIND is a WebDAV method - if you don't use it, disable it (or better, whitelist the verbs you do support and ignore the rest).
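In nginx that whitelist can be a one-liner in the server block - a sketch (the method list is an assumption; adjust it to the verbs your app actually serves):
server {
    # allow only the listed methods; silently close the connection on
    # anything else (PROPFIND, TRACE, and other probes)
    if ($request_method !~ ^(GET|HEAD|POST)$) {
        return 444;
    }
    ...
}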
If your site is already behind Cloudflare, then you really should firewall access to your IP so that only Cloudflare's IPs can talk to your server. Those IPs do change, so I would suggest a script that downloads the latest list from https://www.cloudflare.com/ips-v4 and periodically updates your firewall. There's a slightly vague help article from Cloudflare on the subject here: https://support.cloudflare.com/hc/en-us/articles/200169166-How-do-I-whitelist-Cloudflare-s-IP-addresses-in-iptables-
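Such a script can be quite short - a rough sketch (the CLOUDFLARE chain name is an assumption, and a real version also needs the IPv6 list plus a cron entry to re-run it):
#!/bin/sh
# rebuild a dedicated iptables chain from Cloudflare's published IPv4 ranges
iptables -N CLOUDFLARE 2>/dev/null   # create the chain if it does not exist yet
iptables -F CLOUDFLARE               # flush the previous rules
for range in $(curl -s https://www.cloudflare.com/ips-v4); do
    iptables -A CLOUDFLARE -p tcp -s "$range" -m multiport --dports 80,443 -j ACCEPT
done
iptables -A CLOUDFLARE -p tcp -m multiport --dports 80,443 -j DROP
# hook the chain into INPUT once, separately: iptables -I INPUT -j CLOUDFLARE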
If for whatever reason you can't firewall the IP, your next best option is something like fail2ban (www.fail2ban.org) - a log parser that can manipulate the firewall to temporarily or permanently block an IP address based on patterns found in your log files.
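fail2ban even ships with an nginx-botsearch jail aimed at exactly this kind of probing; a minimal jail.local fragment might look like this (log path and thresholds are assumptions):
[nginx-botsearch]
enabled  = true
port     = http,https
logpath  = /var/log/nginx/access.log
maxretry = 5
bantime  = 3600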
A final thought: I'd advise against redirecting from your IP to your domain name - you're telling the bots/hackers your URL, which they can then use to bypass the CDN and attack your server directly. Unless you have some reason to allow HTTP/HTTPS traffic to your IP address, return a 4xx (maybe 444, "Connection Closed Without Response") instead of redirecting when requests hit your IP. You should then create a separate server block to handle your redirects, and have it respond only to genuine named URLs.
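Concretely, the catch-all block from the question could drop the redirect and just close the connection, with the redirect moved to a block that answers only to the real names - a sketch:
server {
    listen 80 default_server;
    listen 8080 default_server;
    server_name _;
    return 444;    # nginx-specific: close the connection without a response
}
server {
    listen 80;
    server_name example.com www.example.com;   # genuine names only
    return 301 https://example.com$request_uri;
}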

ShimmerCat with reverse proxy when using "the old way"

I have used ShimmerCat with sc-tool to connect to my development sites as described here, and everything has always worked like a charm, but I also wanted to follow the "old way" of configuring my /etc/hosts. In this case I had a small problem: the server ran OK, and I could access my development site (let's say I used https://www.example.com:4043/), but I'm also using a reverse proxy, as described in this article and in the config file reference. It redirects to a Django app I'm using. Let's say this is my devlove.yaml config file:
---
shimmercat-devlove:
    domains:
        www.example.com:
            root-dir: site
            consultant: 8080
            cache-key: xxxxxxx
        api.example.com:
            port: 8080
The problem is that when I try to access a URL that requests the API, a 404 response is sent from the API. Let me explain it through an example. I try to access https://www.example.com:4043/country/, and on this page I make a request to the API: /api/<country>/towns/. The API endpoint then returns a 404 response, so it is not finding this URL, which does not happen when using Google Chrome with sc-tool. I have set both domains, www.example.com and api.example.com, in my /etc/hosts. I have been trying to solve it, but without any luck. Is there something I'm missing? Any help will be welcome. Thanks in advance.
With a bit more data, we may be able to find the issue. In the meantime, here is a list of troubleshooting tips:
Possible issue: DNS is cached in browser, /etc/hosts is not being used (yet)
This can happen if somehow your browser has not done a DNS lookup since before you changed your /etc/hosts file. In that case the connection goes to a domain on the Internet that may not have the API endpoint you are calling.
Troubleshooting: Check ShimmerCat's log for the requests. If this is the issue, closing and opening the browser may solve the issue.
Possible issue: the host header is incorrect
ShimmerCat uses the Host header in HTTP/1.1 requests and the :authority header in HTTP/2 requests to distinguish between domains. It always discards any port number present in them. If these headers are not set, or are set to a domain other than the ones ShimmerCat is configured to listen for, the server will consider the situation so despicable that it will just close the connection.
Troubleshooting: This is not a 404 error, but a connection close (if trying to connect un-proxied, directly to the SSL port where ShimmerCat is listening), or a "Socks Connection Failed" (if trying to connect through ShimmerCat's built-in SOCKS5 proxy). In the former case, the server will print the message "Rejected request to Just https://some-domain-or-ip/some/path" in its log, using the actual value of the domain, or "Rejected request to Nothing" if no header was present. The second case is more complicated, because the SOCKS5 proxy sits before the HTTP routing algorithm.
In any case, the browser will put a red line in the network panel of the developer tools. If you are accessing the server using curl, like this:
curl -k -H host:api.incorrect-domain.com https://127.0.0.1:4043/contents/blog/data-density/
or like this:
curl -k --http2 -H host:api.incorrect-domain.com https://127.0.0.1:4043/contents/blog/data-density/
(notice the --http2 parameter in the second form), you will get a response:
curl: (56) Unexpected EOF
Extra-tip: There is a field for the network address in the browser's developer tools. Check it, it may tell you something!
Possible issue: something gets messed up when passing the request to the api back-end.
API backends are also sensitive to the host header, and to additional things like authentication cookies and request parameters.
Troubleshooting: A way to diagnose things is to invoke ShimmerCat with the --show-proxied-headers command-line option. It makes ShimmerCat report the proxied headers to the log:
Issuing request with headers :authority: api.example.com
:method: GET
:path: /my/api/endpoint/path/
:scheme: https
accept: */*
user-agent: curl/7.47.0
Possible issue: there are two instances or more of ShimmerCat running
...and they are using different configurations. ShimmerCat uses port sharing among several processes to increase availability. A downside of this is that it is perfectly possible to mistakenly start ShimmerCat, forget to stop it, and start it again after changing some configuration bit. The two instances will be running at the same time, and either of them may pick up connections made to the listening port.
Troubleshooting: Shut down all instances of ShimmerCat, then double-check that none are running by using the corresponding form of the ps command, and start the server with the configuration you want.

AWS VPC instance doesn't resolve public DNS name

The problem:
My URL xyz.co is getting resolved into an ugly AWS public DNS name such as ec2-11-22-33-44.ap-southeast-2.compute.amazonaws.com. It doesn't stick to xyz.co.
Here's what I did:
I have set up my Route 53 configuration according to http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/MigratingDNS.html: I created an A record pointing to the IP address and a CNAME record to allow for www.xyz.co. The domain is registered with GoDaddy and the name servers are configured to the AWS delegation set.
The instance itself sits in the default VPC. I double-checked and DNS resolution and DNS host names are both active.
I'm a bit stuck here with this. Any help would be highly appreciated!
Cheers,
Bruno
What you are seeing isn't actually related to name resolution.
It's impossible for DNS to change what appears in the address bar of the web browser -- DNS and web browsers simply do not interact in a way that makes such behavior possible. Your URL is not "getting resolved to" this new value via anything DNS-related, since DNS, configured correctly or incorrectly, can't impact what shows up there, on its own.
The fact that navigating to the IP address has the same impact backs up this assertion.
What you are seeing is not related in any way to DNS or Route 53 or even EC2 or VPC. Your web server is, for whatever reason, configured to redirect incoming requests with any other hostname... over to the hostname you are subsequently seeing in the address bar (which is the one you don't like).
You should notice this in your web server's log. It will be issuing a 301 or 302 redirect on the initial request.
You should also be able to verify this yourself with the curl command line utility. Here, a server accessed as "www.example.com" is redirecting the browser to use its preferred address, "example.com." (Hostnames and addresses are sanitized, but the output is otherwise unmodified.)
$ curl -v www.example.com
* Rebuilt URL to: www.example.com/
* Hostname was NOT found in DNS cache
* Trying 203.0.113.139...
* Connected to www.example.com (203.0.113.139) port 80 (#0)
The next block of output is the request sent to the web server.
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: www.example.com
> Accept: */*
>
The http response from the web server includes a redirect.
< HTTP/1.1 301 Moved Permanently
< Content-length: 0
< Location: http://example.com/
< Connection: close
<
* Closing connection 0
If we were using a browser instead of a command line tool, this would cause the address bar to change to the new value, and establish a new connection to the web server (which might actually be the same one, or a different one... in this case, it's the same).
In spite of the fact that I had typed http://www.example.com into my browser, it would now show only http://example.com/. The same thing would happen if I typed in the IP address, were my server configured to redirect everything to one hostname, as yours appears to be. In my case, it's deliberately configured to do something else.
The above should illustrate that you do not actually have a DNS issue, and explain the mechanism that's causing this to occur (because you may find this to be something useful to do deliberately in the future, as my web servers do -- any www.* request gets stripped and rewritten without the www).
The issue is with your web server, telling the browser to use a different hostname. How to fix that will depend on what web server you are running and why it thinks the redirect is necessary.
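As an illustration only - the asker's server might be Apache or anything else - a canonical-hostname redirect of the kind described above typically looks like this in nginx (example.com is a placeholder):
server {
    listen 80 default_server;
    server_name _;    # anything else: bare IP, the ec2-*.amazonaws.com name, ...
    return 301 http://example.com$request_uri;   # rewrite to the preferred hostname
}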

WAMP server online mode conflict with router

I'm trying to put my local WAMP server online using its online/offline feature.
Every time I go to my IP address, I get redirected to my router's config page instead of the WAMP homepage.
I tried changing these lines in the httpd.conf file at C:\wamp\bin\apache\apache2.4.9\conf\
# Listen: Allows you to bind Apache to specific IP addresses and/or
# ports, instead of the default. See also the <VirtualHost>
# directive.
#
# Change this to Listen on specific IP addresses as shown below to
# prevent Apache from glomming onto all bound IP addresses.
#
#Listen 12.34.56.78:80
Listen 0.0.0.0:80
Listen [::0]:80
to
# Listen: Allows you to bind Apache to specific IP addresses and/or
# ports, instead of the default. See also the <VirtualHost>
# directive.
#
# Change this to Listen on specific IP addresses as shown below to
# prevent Apache from glomming onto all bound IP addresses.
#
#Listen 12.34.56.78:8080
Listen 0.0.0.0:8080
Listen [::0]:8080
but my IP (the one that shows at http://whatsmyip.org) still redirects to my router's config page. Even if I enter XXX.XXX.XXX.XXX:8080 I get ERR_CONNECTION_TIMED_OUT.
EDIT: Adding some info
Router info:
http://www.zyxel.com/products_services/amg1202_t10b.shtml?t=p
Dynamic IP (That's my ISP plan)
Your router is at a lower level. Some type of remote config/help/whatever feature is turned on in your router. Since it is at a lower level than your Apache server, it will always grab the packets with destination port 80 first.
You need to figure out what "feature" is turned on in your router.
Internet -> Router -> Apache
Your Apache is probably just fine.
Regarding your router, since it's not showing what you need: you could factory reset it and make sure you set it up with "advanced view" selected. You should then have all the options to set up port forwarding correctly.
https://www.youtube.com/watch?v=QZ0zoZ_pbUM
The easiest way is to give your PC a static IP address and add that address to your router's DMZ, but without knowing your router's make/model I cannot give you step-by-step instructions.
It's basically a port-forwarding issue. Check that you have forwarded the correct ports on your router.
Most home/office routers will do this.
It's because most do not have a feature called loopback (NAT loopback) either available or turned on.
Without this feature, the router has no way of spotting that the IP address you are using in your browser is in fact your router's WAN IP address, so it assumes you are addressing your router's httpd server and launches the router's admin panels. (Yup, there is a web server running in your router.)
With loopback enabled, it would loop you back into your internal network.
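A quick way to confirm the loopback explanation is to test the same port from inside and outside the LAN (placeholder addresses):
curl -I http://192.168.1.23:8080/   # inside the LAN: should reach Apache directly
curl -I http://203.0.113.5:8080/    # outside (e.g. a phone on mobile data): tests the actual port forward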

Browser DNS behaviour - multiple IP addresses for single domain

I have the following problem and I am struggling to find if a solution exists for it, or what the best practice is here.
I have a site, example.com, and multiple servers with different IP addresses around the world. I am seeing the following behaviour in my browser (Chrome) - for simplicity, let's say I only have 2 IP addresses for now.
I connect to example.com and data is served from IP address A.B.C.D (server 1). After 40 seconds or so, any subsequent request (GET/POST) to example.com resolves to W.X.Y.Z (server 2). My issue is that I have a cookie-based web session on server 1, and server 2 knows nothing about that session. There is no kind of back-end replication I can do to sync state between the two servers.
Is there any way I can force the browser to connect to only a single server once that server has served the first page? I am using round-robin DNS with multiple A records at the moment. Would switching to a CNAME solve this problem?
One solution I was thinking of was having each server reply with a configured domain in the HTTP headers (e.g. server1 would reply with X-HEADER: server1.example.com, server2 with X-HEADER: server2.example.com) and then forcing the browser to make requests to these. I would then have a single IP address for server1.example.com, and another for server2.example.com. Does this break the same-origin policy, though? If I am on example.com, can I send GET/POST/PUT etc. to server1.example.com?
I'd really appreciate any advice on this - I'm so confused!