Blocking IP addresses to prevent DoS (denial-of-service) attacks

So this is more of a general question on best practice for preventing DoS attacks; I'm just trying to get a grasp on how most people handle malicious requests from the same IP address, which is the problem we are currently having.
I figure it's better to block a truly malicious IP as high up in the stack as possible, so as to avoid wasting resources on it, especially when it comes to loading your application.
Thoughts?

You can prevent DoS attacks from occurring in various ways.
Limiting the number of queries per second from a particular IP address. Once the limit is reached, you can send a redirect to a cached error page to limit any further processing. You might also be able to get these IP addresses firewalled so that you don't have to process their requests at all. Limiting requests per IP address won't work very well, though, if the attacker forges the source IP address in the packets they are sending.
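As a concrete illustration of per-IP rate limiting at the firewall level (a sketch only, not from the quoted answer; the port and thresholds are placeholder choices, using the iptables hashlimit module):

# Allow up to 20 new connections per second per source IP (burst 50);
# anything above that is dropped before it ever reaches the application.
iptables -A INPUT -p tcp --dport 80 --syn -m hashlimit --hashlimit-name http --hashlimit-mode srcip --hashlimit-upto 20/second --hashlimit-burst 50 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 --syn -j DROP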
I'd also try to build some smarts into your application to help deal with a DoS. Take Google Maps as an example. Each individual site has to have its own API key, which I believe is limited to 50,000 requests per day. If your application worked in a similar way, then you'd want to validate this key very early on in the request so that you don't use too many resources for the request. Once the 50,000 requests for that key are used up, you can send appropriate proxy headers such that all future requests (for the next hour, for example) for that key are handled by the reverse proxy. It's not foolproof, though. If each request has a different URL, then the reverse proxy will have to pass the request through to the backend server. You would also run into a problem if the DDoS used lots of different API keys.
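For instance, once a key's quota is exhausted, the backend could answer with an explicitly cacheable error so the reverse proxy absorbs the follow-up traffic. This is a sketch only: the status code, the one-hour TTL, and the assumption that the key appears in the cached URL are all illustrative:

HTTP/1.1 429 Too Many Requests
Cache-Control: public, max-age=3600
Content-Type: text/plain

API quota exceeded for this key.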
Depending on the target audience for your application, you might be able to blacklist large IP ranges that contribute significantly to the DDoS. For example, if your web service is for Australians only, but you were getting a lot of DDoS requests from some networks in Korea, then you could firewall the Korean networks. If you want your service to be accessible by anyone, then you're out of luck on this one.
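If you do go down the range-blocking route, ipset is more practical than one iptables rule per network, since country allocations span hundreds of CIDR blocks. A sketch, where 203.0.113.0/24 (a documentation prefix) stands in for a real range:

# Build a set of blocked networks and drop anything sourced from them.
ipset create blocked-ranges hash:net
ipset add blocked-ranges 203.0.113.0/24
iptables -I INPUT -m set --match-set blocked-ranges src -j DROP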
Another approach to dealing with a DDoS is to close up shop and wait it out. If you've got your own IP address or IP range, then you, your hosting company, or the data centre can null-route the traffic so that it goes into a black hole.
Referenced from here. There are other solutions on the same thread as well.
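To make the null-routing suggestion concrete: on a Linux box a null route can be installed with iproute2. A sketch only; 198.51.100.7 is a placeholder for the attacked address, and in practice the provider usually installs the route upstream so the traffic never reaches your link:

# Route all traffic for the victim address into a black hole.
ip route add blackhole 198.51.100.7/32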

iptables -I INPUT -p tcp -s 1.2.3.4 -m statistic --probability 0.5 -j DROP
iptables -I INPUT n -p tcp -s 1.2.3.4 -m rpfilter --loose -j ACCEPT
# n is a numeric index into the INPUT chain -- the default is to append to the INPUT chain
more at...
Can't Access Plesk Admin Because Of DOS Attack, Block IP Address Through SSH?

Related

Jetty's DDOS Protection

Currently Jetty has DOSFilter, which appears to provide protection against DoS attacks, i.e. it keeps track of the number of requests from a connection. In a DDoS attack, the requests could come from millions of IP addresses, and in that case DOSFilter won't do the job. Is there any other strategy you could apply here so that Jetty could survive?
Dealing with millions of IP addresses would need a solution that acts before the connection is accepted: some kind of OS or network-hardware solution.
Jetty, being a server, has to accept the connection in order to do anything with it.
You could probably use the Jetty request log and a custom fail2ban setup to ban IP addresses at the OS level based on some kind of criteria in the access log (too many requests on a connection over X amount of time triggering an IP-specific DOSFilter action, then banning that IP at the OS level for Y amount of time).
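A sketch of what that fail2ban setup might look like; the file names, log path, and thresholds are all illustrative, and the filter simply counts request-log lines per client IP:

# /etc/fail2ban/filter.d/jetty-req.conf
[Definition]
failregex = ^<HOST> .*

# /etc/fail2ban/jail.d/jetty-req.conf
[jetty-req]
enabled = true
port = http,https
filter = jetty-req
logpath = /opt/jetty/logs/*.request.log
# ban an IP for 10 minutes once it produces more than 300 log lines in 60 seconds
maxretry = 300
findtime = 60
bantime = 600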

Clone http traffic to another port on same server transparently

I am experimenting with following setup.
Clone/copy (but not redirect) all incoming HTTP requests from port 80 to another port, say 8080, on the same machine. I have a simple NGINX + Lua based WAF which is listening on 8080. Essentially, I am running two instances of webservers here: one serving the real requests, and the other working on the cloned traffic for detection purposes. I don't care about being able to block the malicious requests, so I don't care about being inline.
I want to use the WAF only for detection, i.e. it should analyze all incoming requests, raise alerts, and drop each request after that. This won't hamper anything from the user's point of view, since port 80 is serving the real requests.
How can I clone traffic this way and just discard it after the analysis is done? Is this feasible? If yes, please suggest any tools which can clone traffic with a minimal performance hit.
Have a look at https://github.com/buger/gor
The example instructions are straightforward, and you can add extra logging or additional forwards as well.
In current nginx versions there is ngx_http_mirror_module, which mirrors each request to another endpoint and ignores the responses. See also this answer.
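A minimal sketch of the mirror-module approach (available since nginx 1.13.4); port 8080 is the WAF instance from the question, while the real backend port is a placeholder:

server {
    listen 80;

    location / {
        mirror /mirror;                     # copy each request to the mirror location
        mirror_request_body on;             # include request bodies in the copy
        proxy_pass http://127.0.0.1:3000;   # real backend (placeholder port)
    }

    location = /mirror {
        internal;
        proxy_pass http://127.0.0.1:8080$request_uri;   # WAF; its response is discarded
    }
}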

Hosting multiple websites on a single server

I have a bunch of different websites, mostly random weekend projects that I'd like to keep on the web because they are still useful to me. They don't see more than 3-5 hits per day between all of them though, so I don't want to pay for a server for each of them when I could probably fit them all on a single EC2 micro instance. Is that possible? They all run off different web servers, since I tend to experiment with a lot of new tech. I was thinking I could have each webserver serve on a different port, then have incoming requests to app1.com get routed to app1.com:3000 and requests to app2.com get routed to app2.com:3001 and so on, but I don't know how I would go about setting that up.
I would suggest that what you are looking for is a reverse web proxy, which typically includes among its features the ability to understand portions of the request at layer 7, and direct the incoming traffic to the appropriate set of one or more back-end ip/port combinations based on what's observed in the request headers (or other aspects of the request).
Apache, Varnish, and Nginx all have this capacity, as does HAProxy, which is the approach that I use because it seems to be very fast and easy on memory, and thus appropriate for use on a micro instance... but that is not at all to imply that it is somehow more "correct" to use than the others. The principle is the same with any of those choices; only the configuration details are different. One service is listening to port 80, and based on the request, relays it to the appropriate server process by opening up a TCP connection to the appropriate destination, tying the ends of the two pipes together, and otherwise for the most part staying out of the way.
Here's one way (among several alternatives) that this might look in an haproxy config file:
frontend main
    bind *:80
    use_backend app1svr if { hdr(host) -i app1.example.com }
    use_backend app2svr if { hdr(host) -i app2.example.com }

backend app1svr
    server app1 127.0.0.1:3001 check inter 5000 rise 1 fall 1

backend app2svr
    server app2 127.0.0.1:3002 check inter 5000 rise 1 fall 1
This says: listen on port 80 of all local IP addresses; if the Host header matches app1.example.com (-i makes the match case-insensitive), then use the app1svr backend configuration and send the request to that server; and similarly for app2.example.com. You can also declare a default_backend to use if none of the ACLs match; otherwise, if nothing matches, it will return "503 Service Unavailable," which is also what it returns if the requested backend isn't currently running.
You can also configure a stats endpoint to show you the current state and traffic stats of your frontends and backends in an HTML table.
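For example (a sketch; the port and URI are arbitrary choices):

listen stats
    bind *:8404
    mode http
    stats enable        # expose the built-in HTML status page
    stats uri /stats    # at http://<host>:8404/stats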
Since the browser isn't connecting "directly" to the web server any more, you have to configure and rely on the X-Forwarded-For header inserted into the request headers to identify the browser's IP address, and there are other ways in which your applications may have to take the proxy into account, but this overall concept is exactly how web applications are typically scaled, so I don't see it as a significant drawback.
Note these examples do use "Anonymous ACLs," of which the documentation says:
It is generally not recommended to use this construct because it's a lot easier
to leave errors in the configuration when written that way. However, for very
simple rules matching only one source IP address for instance, it can make more
sense to use them than to declare ACLs with random names.
— http://cbonte.github.io/haproxy-dconv/configuration-1.4.html
For simple rules like these, this construct makes more sense to me than explicitly declaring an ACL and then later using that ACL to cause the action that you want, because it puts everything together on the same line.
I use this to solve a different root problem that has the same symptoms -- multiple sites for development/test projects, but only one possible external IP address (which by definition means port 80 can only go to one place). This allows me to "host" development and test projects on different ports and platforms, all behind the single external IP of my home DSL line. The only difference in my case is that the different sites are sometimes on the same machine as haproxy and other times they're not, but the setup is otherwise identical.
Rerouting in the way you show depends on the OS your server is hosted on. For Linux you would use iptables; on Windows you could use the Windows firewall. You would set all incoming connections to port 80 to be redirected to the desired port, e.g. 3000.
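On Linux, for example, a single NAT rule would do it (a sketch; note that PREROUTING only applies to traffic arriving from outside the box):

# Redirect inbound port 80 to a local service listening on port 3000.
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3000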
But instead of ports, you could use a different host name for each service, like
app1.apps.com
app2.apps.com
and so on. You can configure this with your DNS hosting for apps.com. IMHO this is the best solution, if I understood you correctly.
Also, you can configure a single host to reroute to all the other sites, like
app1.com:3001 -> apphost1.com
app1.com:3002 -> apphost2.com
Bear in mind that in this case all traffic will pass through app1.com.
You can easily do this. Set up a different hostname for each app you want to use, create a DNS entry that points to your micro instance, and create a name-based virtual host entry for each app.
Each virtual host entry should look something like:
<VirtualHost *:80>
    ServerName app1.example.com
    DocumentRoot /var/www/html/app1/
    DirectoryIndex index.html
</VirtualHost>

C++ - Detecting multiple connections to server

In a Linux/C++ TCP server I need to prevent a malicious client from opening multiple sockets, otherwise they could just open thousands of connections until the server crashes.
What is the standard way of checking whether the same computer already has a connection to the server? If I do it based on IP address, wouldn't that mean two people in the same house couldn't connect to the server at the same time, even if they are on different computers?
Any info helps!
Thanks in advance.
TCP in itself doesn't really provide anything other than the IP address for identifying clients. A couple of (non-exclusive) options:
1) Limit the number of connections from any IP address to a reasonable number, like 10 or 20 (depending on what your system actually does.) This way, it will prevent malicious DoS attacks, but still allow for reasonable usability.
2) Limit the maximum number of connections to something reasonable.
3) You could delegate this to a higher-layer solution. As a part of your protocol, have the client send a unique identifier that is generated only once (per installation, etc). This could be easily spoofed, however.
I believe 1 and 2 are how many servers handle it. Put the limits in config files so they can be tuned depending on the scenario. A sketch of option 1 follows below.
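A minimal sketch of option 1 in C++: cap concurrent connections per source IP around the accept() call. This assumes a single-threaded, IPv4-only, blocking accept loop; accept_checked and MAX_CONN_PER_IP are illustrative names, and the caller must decrement the count when a connection closes:

#include <cstdint>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <unordered_map>

static const int MAX_CONN_PER_IP = 20;                 // tunable, read from config
static std::unordered_map<uint32_t, int> conn_count;   // IPv4 address -> open connections

// Accept a connection, but close it immediately if the source IP already
// has too many open. Returns the new fd, or -1 if refused or failed.
int accept_checked(int listen_fd) {
    sockaddr_in peer{};
    socklen_t len = sizeof(peer);
    int fd = accept(listen_fd, reinterpret_cast<sockaddr*>(&peer), &len);
    if (fd < 0)
        return -1;
    uint32_t ip = peer.sin_addr.s_addr;
    if (++conn_count[ip] > MAX_CONN_PER_IP) {
        --conn_count[ip];
        close(fd);                                     // over the per-IP limit: drop
        return -1;
    }
    return fd;   // caller decrements conn_count[ip] when it closes the fd
}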
There is only the IP address to base "is this the same sender" on, unless you have some sort of subscription/login system (but then someone can try to log in a gazillion times at once, since there must be some sort of handshake for logging in).
If two clients are using the same router (that uses NAT or some similar scheme), your server will see the same IP address, so allowing only one connection per IP address wouldn't work very well for "multiple users from the same home". This also applies if they are for example using a university network or a company network.
So depending on what you are supplying and how many clients you can expect from the same place, you may need to go a fair bit higher than 10. Of course, if you log when this happens, and you see a fair number of "looks like valid real users failing to get in", you can adjust the number.
It may also make sense to have some sort of "rolling average", so you accept X new connections per Y seconds from each IP address, rather than having a fixed maximum number. This is meaningful if connections last quite some time... For short duration connections, it's pretty pointless...
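The rolling-average idea can be sketched the same way: keep a sliding window of accept timestamps per IP and refuse a new connection once the window is full. Again illustrative and single-threaded, with MAX_NEW and WINDOW as the tunables:

#include <chrono>
#include <cstdint>
#include <deque>
#include <unordered_map>

static const std::size_t MAX_NEW = 10;           // new connections allowed...
static const std::chrono::seconds WINDOW{60};    // ...per 60-second window
static std::unordered_map<uint32_t, std::deque<std::chrono::steady_clock::time_point>> recent_accepts;

bool allow_new_connection(uint32_t ip) {
    auto now = std::chrono::steady_clock::now();
    auto& times = recent_accepts[ip];
    while (!times.empty() && now - times.front() > WINDOW)
        times.pop_front();                       // expire accepts outside the window
    if (times.size() >= MAX_NEW)
        return false;                            // window full: refuse this one
    times.push_back(now);
    return true;
}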

Is IP address authentication safe for web service / site?

We're building a web service which users will subscribe to, and we were thinking of authenticating users based on their IP address.
I understand that this creates some hassle, eg, if a client's IP changes, but I wanted to know from a security point of view if this was safe? I'm not sure how hard it is to spoof IP addresses, but my thinking is that even if that happened we wouldn't end up sending data back to the attacker.
Any thoughts?
Thanks!
I'd say this would be very risky. Attackers use a number of IP-spoofing tools to avoid detection, and there are legitimate anonymity uses too. Check out onion routing via the Tor network (used extensively by the wikileaks folks, for example): http://www.torproject.org
That said, if your data isn't sensitive AT ALL, say you want to guess a visitor's location to show the local weather, you can certainly use IP blocks to roughly locate people. If that kind of thing is all you're after, check out: http://www.hostip.info/dl/index.html
Think about proxies and VPNs.
And what if a user would like to use your site from another PC?
You might want to use browser fingerprints (together with the IP); that's safer, but then they must always use the same browser...
Conclusion: not a good idea.