Akamai: serve requests from origin unless a DDoS attack is detected - akamai

I'm looking for some help and I'm no Akamai expert. Basically I'm wondering whether it is possible to implement Akamai in the following scenario:
1. Normal operation: requests pass through Akamai and only static assets such as CSS and JS are cached; everything else is served directly from the server.
2. DDoS attack detected: I'm not sure if this is possible, but the idea would be that if Akamai has a feature or API call to check whether a large-scale DDoS attack has started, then instead of passing requests to the app servers it reverts to a fully cached version of the site until the attack finishes.
Icing on the cake would be if we could set the threshold for what level of attack triggers the fallback, maybe based on a percentage over expected traffic levels?
Appreciate any sort of guidance.

Yes, you can do this, either from the Akamai config or by sending the relevant headers from the origin (your site/API) telling Akamai not to cache certain files. Bear in mind that you'll still pay for traffic flowing through Akamai even if they're not caching it, although if you have global users who want good performance you won't mind, because performance through their network is excellent.
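For the header route, here's a minimal origin sketch (Flask purely for illustration; the Cache-Control values are standard HTTP, but whether Akamai honors origin headers depends on how your property is configured):

```python
# Hypothetical origin app: long TTL for static assets, no caching for the rest.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/static/<path:name>")
def static_asset(name):
    resp = make_response(f"/* contents of {name} */")
    resp.headers["Cache-Control"] = "public, max-age=86400"  # let the CDN cache this
    return resp

@app.route("/")
def dynamic_page():
    resp = make_response("dynamic content")
    resp.headers["Cache-Control"] = "no-store"  # tell the CDN not to cache this
    return resp

if __name__ == "__main__":
    app.run()
```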
They do indeed have a product that will protect you from DDoS; how configurable it is I'm not sure, but basically with Akamai everything is configurable if you've got the money to spend with them! Having said that, they wouldn't be serving anything to DDoS requests, of course; they'd be trying to determine where the attack is coming from and ignoring those requests, so I'm not sure what you're suggesting is what you'd actually want.

Related

What happened with polyfill.io's CDN SSL certificate?

Certificate errors happen from time to time, but this looks very fishy to me. A certificate for all those names? What's going on? Did the CDN get hacked?
If so, what is the best thing to do? Remove it until it is fixed? That would be bad for a good part of our user base. A hacked CDN is worse though, I guess… Maybe someone knows what is really going on?
I am one of the maintainers of polyfill.io, and a Fastly employee. Yesterday we enacted a change to our DNS configuration to enable support for HTTP/2.0. In doing so, a small typo was made in the hostname, resulting in our DNS targeting the wrong endpoint on Fastly's network, and a cert that was not valid for polyfill.io or cdn.polyfill.io. Having realised the error, we corrected the entry and it took around 30 minutes to propagate.
Lessons learnt include not increasing DNS TTL until some time after a change is made, in case the change needs to be rolled back.
The reason there are so many names listed on the cert is that we are sharing a cert with other Fastly customers. This is perfectly normal practice for CDN providers.
More information is available on the relevant GitHub issue:
https://github.com/Financial-Times/polyfill-service/issues/1208
We're very disappointed to suffer this downtime. Generally, polyfill.io has a very good uptime record, and we plan for origin outages. It's hard to mitigate the risks associated with DNS changes to the main public domain, but we are very sorry to everyone impacted.
Polyfill.io uses pingdom to independently monitor our uptime and reports that number here: https://polyfill.io/v2/docs/usage (data has up to 24 hrs latency).
Looks like "they" (see below) botched it, I can't see cdn.polyfill.io or *.polyfill.io on that large list, hence the error saying much the same.
(or maybe I overlooked some other problem)
To enlighten you about the names: virtual hosting (the act of hosting multiple websites on the same IP address on the same HTTP port) occurs over HTTPS /after/ the encryption is established. Thus, at the time the server presents a certificate to the browser, it doesn't know which site the user is after; that information is part of the encrypted request.
Thus, it is necessary for the certificate to cover all secure websites operating on that IP address and port combination.
CDN stands for Content Delivery Network; presumably a huge bunch of stuff is being hosted on this "network", probably not even owned by polyfill (I've no idea who they are). Given that the first name on the certificate is "f2.shared.global.fastly.net", you can speculate about the true CDN, who actually messed up the cert, and what else they're hosting there :)
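For what it's worth, here's a quick way to list exactly which names a served certificate covers, which is the check that failed here (a sketch; requires the `cryptography` package, and the hostname is just an example):

```python
import ssl
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

# Fetch the certificate the server actually presents on port 443.
pem = ssl.get_server_certificate(("cdn.polyfill.io", 443))
cert = x509.load_pem_x509_certificate(pem.encode())

# Print every DNS name in the Subject Alternative Name extension.
san = cert.extensions.get_extension_for_oid(ExtensionOID.SUBJECT_ALTERNATIVE_NAME)
print(san.value.get_values_for_type(x509.DNSName))
```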

Some Varnish Requests Getting Past my Block.vcl

Recently dealt with a botnet running a sub-domain brute-force/crawling script. It would run through the alphabet and numbers sequentially, which resulted in a minor nuisance and a small load increase for legitimate traffic.
For example, hitting a.domain, b.domain, .., 9.domain, aa.domain, .., a9.domain. etc.
Obviously, this is quite stupid, and fortunately it only originated from a few IPs at a time, and the website in question was behind multiple auto-scaling load balancers. Attacks were stopped by grabbing the X-Forwarded-For address from Varnish; detection was scripted via the subdomain attempts, and the IP was added to a remote blocklist which was regularly refreshed and pulled into a Block.vcl on all the Varnish servers, voila.
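For concreteness, the detection side looked roughly like the sketch below (log format, field order, and threshold are hypothetical, not my actual script):

```python
# Scan recent access-log lines, count distinct short subdomains per client
# IP (taken from X-Forwarded-For), and emit IPs that look like brute-forcers.
from collections import defaultdict

THRESHOLD = 20  # distinct probed subdomains before an IP goes on the blocklist

def find_brute_ips(log_lines):
    subdomains_per_ip = defaultdict(set)
    for line in log_lines:
        # hypothetical log format: "<client_ip> <host> <request_path>"
        parts = line.split()
        if len(parts) < 2:
            continue
        ip, host = parts[0], parts[1]
        sub = host.split(".")[0]
        if len(sub) <= 2:  # sequential a., b., .., aa., a9. style probes
            subdomains_per_ip[ip].add(sub)
    return [ip for ip, subs in subdomains_per_ip.items() if len(subs) >= THRESHOLD]
```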
This worked well, detecting and taking care of things within a couple of minutes each time. However, it was noted that after a brute-force IP was blocked and the blockage applied, 99.9% of its traffic would stop, but the occasional request from the blocked IP would still manage to get through. Not enough to cause a fuss, but enough to raise the question: why? I don't understand why a request would still make it through at the Varnish level when it hits the reject-on-IP rule of my Block.vcl.
Is there some inherent limitation that might have come into play here that would allow a small number of requests through? Maybe something based on available resources, or the sheer number of requests per second hitting Varnish overwhelming it ever so slightly?
Resource-wise the web servers seemed fine, so I'm unsure. Any ideas?

Why are RESTful Applications easier to scale

I always read that one reason to choose a RESTful architecture is (among others) better scalability for web applications under high load.
Why is that? One reason I can think of is that the defined resources, which are the same for every client, make caching easier. After the first request, subsequent requests can be served from a memcached instance, which also scales well horizontally.
But couldn't you also accomplish this with a traditional approach where actions are encoded in the URL, e.g. (booking.php?userid=123&travelid=456&foobar=789)?
A part of REST is indeed the URL part (it's the R in REST) but the S is more important for scaling: state.
The server end of REST is stateless, which means that the server doesn't have to store anything across requests. This means that there doesn't have to be (much) communication between servers, making it horizontally scalable.
Of course, there's a small bonus in the R (representational) in that a load balancer can easily route the request to the right server if you have nice URLs, and GET could go to a slave while POSTs go to masters.
I think what Tom said is very accurate; however, another problem with scalability is the barrier to change as you scale. One of the biggest tenets of REST as it was intended is hypermedia: the server owns the paths and passes them to the client at runtime. This allows you to change your code without breaking existing clients. However, you will find most implementations of REST to simply be RPC hiding behind the guise of REST... which is not scalable.
"Scalable" or "web scale" is one of the most abused terms when it comes to the web, the cloud and REST, and mainly used to convince management to get their support for moving their development team on board the REST train.
It is a buzzword that holds no value. If you search the web for "REST scalability" you'll find a lot of people parroting each other without any concrete evidence.
A REST service is exactly as scalable as a service exposed over a SOAP interface. Both are just HTTP interfaces to an application service. How well that service actually scales depends entirely on how it was implemented. It's possible to write a service that cannot scale at all in both REST and SOAP.
Yes, you can do things with SOAP that make it scale worse, like relying on state and sessions; SOAP out of the box does not do this. Relying on state requires you to use a smarter load balancer, which you want anyway if you're really concerned with whatever form of scaling.
One thing that REST allows and SOAP doesn't, and that some other answers here address, is caching cacheable responses through an HTTP caching proxy or at the client side. This may make a REST service somewhat more lightly loaded than a SOAP service when many operations' responses are cacheable. All this means is that fewer requests end up at your service.
The main reason a REST application is said to be scalable is that it's built on HTTP, and HTTP is stateless. Stateless means nothing is shared between requests, so any request can go to any server in a load-balanced cluster; nothing forces a given user's request to go to a particular server. Any per-user state can be carried in a token instead.
Because of this statelessness, all REST applications are very easy to scale. But if you want high throughput (the number of requests handled per second) on each server, then you should optimize away blocking operations in the application. Follow these tips:
Make each REST resource a small entity; don't read data from a join of many tables.
Read data from nearby databases.
Use caches (e.g. Redis) instead of hitting the database every time (you save disk I/O) - see the sketch after this list.
Always keep data sources as close as possible, because waiting on a distant data source blocks the request and leaves server resources (CPU) idle, and no other request can use that capacity while it sits idle.
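As a concrete illustration of the caching tip above, a minimal cache-aside sketch (a plain dict stands in for Redis/memcached, and the function and key names are hypothetical):

```python
import time

CACHE = {}          # stand-in for a real Redis/memcached client
TTL_SECONDS = 60

def fetch_booking_from_db(booking_id):
    # placeholder for the real (slow, disk-bound) database read
    return {"id": booking_id, "status": "confirmed"}

def get_booking(booking_id):
    key = f"booking:{booking_id}"
    entry = CACHE.get(key)
    if entry and entry[0] > time.time():
        return entry[1]                        # cache hit: no DB I/O at all
    value = fetch_booking_from_db(booking_id)  # cache miss: read once...
    CACHE[key] = (time.time() + TTL_SECONDS, value)  # ...then cache for next time
    return value
```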
A reason (perhaps not the reason) is that RESTful services are sessionless. This means you can easily use a load balancer to direct requests to various web servers without having to replicate session state among all of your web servers or making sure all requests from a single session go to the same web server.
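To make the sessionless point concrete, here's a minimal sketch of a self-contained, HMAC-signed token: any server sharing the secret can validate it, so no session replication or sticky routing is needed (names are hypothetical; real systems typically use something like JWT):

```python
import base64
import hashlib
import hmac

SECRET = b"shared-across-all-web-servers"  # assumption: same secret on every server

def issue_token(user_id):
    payload = base64.urlsafe_b64encode(user_id.encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_token(token):
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected):  # constant-time comparison
        return base64.urlsafe_b64decode(payload).decode()
    return None  # tampered or malformed token

token = issue_token("user-123")
assert verify_token(token) == "user-123"  # any server can do this check locally
```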

How heavy is it for the server to transmit data over HTTPS?

I am trying to implement web service and web client applications using Ruby on Rails 3. For that I am considering using SSL, but I would like to know: how "heavy" is it for servers to handle a lot of HTTPS connections instead of HTTP? What is the difference in response time and overall performance?
The cost of the SSL/TLS handshake (which accounts for most of the overall "slowdown" SSL/TLS adds) is nowadays much less than the cost of TCP connection establishment and the other actions associated with session establishment (logging, user lookup, etc.). And if you worry about speed and want to shave off every nanosecond, there are hardware SSL accelerators you can install in your server.
It is several times slower to go with HTTPS; however, most of the time that's not what is actually going to slow your app down. Especially if you're running on Rails, your performance scaling is going to be bottlenecked elsewhere in the system. If you are doing anything that requires passing secrets of any kind over the wire (including a shared session cookie), SSL is the only way to go, and you probably won't notice the cost. If you happen to scale up to the point where you do start to see a performance hit from encryption, there are hardware acceleration appliances out there that help tremendously. However, Rails is likely to fall over long before that point.
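If you want to measure the difference yourself, a rough benchmark sketch (standard library only; example.com and the request count are arbitrary, and results will be dominated by your network latency, so treat it as illustrative):

```python
import http.client
import time

HOST, N = "example.com", 20

def avg_seconds(conn_cls):
    start = time.perf_counter()
    for _ in range(N):
        conn = conn_cls(HOST)      # fresh connection each time, so the
        conn.request("GET", "/")   # TLS handshake cost is included for HTTPS
        conn.getresponse().read()
        conn.close()
    return (time.perf_counter() - start) / N

print("HTTP  avg:", avg_seconds(http.client.HTTPConnection))
print("HTTPS avg:", avg_seconds(http.client.HTTPSConnection))
```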

Secure Web Services: REST over HTTPS vs SOAP + WS-Security. Which is better? [closed]

I'm not a security expert by any means, but I favor creating REST-style web services.
In creating a new service that needs to transmit its data securely, we've entered a debate over which approach is more secure: REST with HTTPS or a SOAP WS with WS-Security.
I am under the impression we could use HTTPS for all the web service calls and this approach would be secure. The way I look at it is, "if HTTPS is good enough for banking and financial web sites, it's good enough for me". Again, I'm no expert in this space, but I'd think that these people have thought considerably hard about this problem and are comfortable with HTTPS.
A coworker disagrees and says SOAP and WS-Security is the only way to go.
The web seems all over the board on this.
Maybe the community here could weigh in on the pros and cons of each? Thanks!
HTTPS secures the transmission of the message over the network and provides some assurance to the client about the identity of the server. This is what's important to your bank or online stock broker. Their interest in authenticating the client is not in the identity of the computer but in your identity, so card numbers, user names, passwords etc. are used to authenticate you. Some precautions are then usually taken to ensure that submissions haven't been tampered with, but on the whole whatever happens in the session is regarded as having been initiated by you.
WS-Security offers confidentiality and integrity protection from the creation of the message to its consumption. So instead of ensuring that the content of the communications can only be read by the right server, it ensures that it can only be read by the right process on the server. And instead of assuming that all the communications in the securely initiated session are from the authenticated user, each one has to be signed.
There's an amusing explanation involving naked motorcyclists here:
https://learn.microsoft.com/archive/blogs/vbertocci/end-to-end-security-or-why-you-shouldnt-drive-your-motorcycle-naked
So WS-Security offers more protection than HTTPS would, and SOAP offers a richer API than REST. My opinion is that unless you really need the additional features or protection you should skip the overhead of SOAP and WS-Security. I know it's a bit of a cop-out but the decisions about how much protection is actually justified (not just what would be cool to build) need to be made by those who know the problem intimately.
REST security is transport dependent while SOAP security is not.
REST inherits security measures from the underlying transport while SOAP defines its own via WS-Security.
When we talk about REST over HTTP, all security measures applied to HTTP are inherited, and this is known as transport-level security.
Transport-level security secures your message only while it's on the wire; as soon as it leaves the wire, the message is no longer secured.
But WS-Security is message-level security: even after the message leaves the transport channel it is still protected. Also, with message-level security you can encrypt only part of the message [not the entire message, just the parts you want], but with transport-level security you can't do that.
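As a rough illustration of encrypting only part of a message (Python's `cryptography` package stands in for the XML Encryption machinery WS-Security actually uses; the field names are made up):

```python
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # assumed shared between producer and final consumer
f = Fernet(key)

# Encrypt only the sensitive field; the rest stays readable by intermediaries.
message = {
    "order_id": "12345",
    "card_number": f.encrypt(b"4111111111111111").decode(),
}
wire = json.dumps(message)  # can cross queues/proxies without losing protection

received = json.loads(wire)
card = f.decrypt(received["card_number"].encode())  # only the final consumer can do this
print(received["order_id"], card)
```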
WS-Security has measures for authentication, integrity, confidentiality and non-repudiation, while SSL doesn't support non-repudiation [with 2-legged OAuth it does].
Performance-wise, SSL is very much faster than WS-Security.
Thanks...
Technically, the way you have it worded, neither is correct, because the SOAP method's communication isn't secure, and the REST method didn't say anything about authenticating legitimate users.
HTTPS prevents attackers from eavesdropping on the communication between two systems. It also verifies that the host system (server) is actually the host system the user intends to access.
WS-Security prevents unauthorized applications (users) from accessing the system.
If a RESTful system has a way of authenticating users and a SOAP application with WS-Security is using HTTPS, then really both are secure. It's just a different way of presenting and accessing data.
See the wiki article:
In point-to-point situations confidentiality and data integrity can also be enforced on Web services through the use of Transport Layer Security (TLS), for example, by sending messages over https.
WS-Security however addresses the wider problem of maintaining integrity and confidentiality of messages until after a message was sent from the originating node, providing so-called end-to-end security.
That is:
HTTPS is a transport layer (point-to-point) security mechanism
WS-Security is an application layer (end-to-end) security mechanism.
As you say, REST is good enough for banks, so it should be good enough for you.
There are two main aspects to security: 1) encryption and 2) identity.
Transmitting over SSL/HTTPS provides encryption over the wire. But you'll also need to make sure that both servers can confirm they know who they are speaking to. This can be via SSL client certificates, shared secrets, etc.
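For the client-certificate option, a server-side sketch using Python's standard `ssl` module (the file paths are placeholders for your own certificates):

```python
import ssl

# Context for a server that requires clients to present a certificate
# signed by a CA we trust (mutual TLS).
context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
context.load_cert_chain(certfile="server.pem", keyfile="server.key")
context.load_verify_locations(cafile="trusted-client-ca.pem")
context.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid cert

# An accepted socket would then be wrapped with:
#   tls_sock = context.wrap_socket(sock, server_side=True)
```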
I'm sure one could make the case that SOAP is "more secure", but probably not in any significant way. The nude motorcyclist analogy is cute, but if accurate it would imply that the whole internet is insecure.
I don't yet have the rep needed to add a comment or I would have just added this to Bell's answer. I think Bell did a very good job of summing up the top level pros and cons of the two approaches. Just a few other factors that you might want to consider:
1) Do the requests between your clients and your service need to go through intermediaries that require access to the payload? If so then WS-Security might be a better fit.
2) It is actually possible to use SSL to provide the server with assurance as to the client's identity, using a feature called mutual authentication. However, this doesn't get much use outside of some very specialized scenarios due to the complexity of configuring it. So Bell is right that WS-Sec is a much better fit here.
3) SSL in general can be a bit of a bear to set up and maintain (even in the simpler configuration), due largely to certificate management issues. Having someone who knows how to do this for your platform will be a big plus.
4) If you might need to do some form of credential mapping or identity federation then WS-Sec might be worth the overhead. Not that you can't do this with REST, you just have less structure to help you.
5) Getting all the WS-Security goop into the right places on the client side of things can be more of a pain than you would think it should.
In the end though it really does depend on a lot of things we're not likely to know. For most situations I would say that either approach will be "secure enough" and so that shouldn't be the main deciding factor.
Brace yourself, here comes another one :-)
Today I had to explain to my girlfriend the difference between the expressive power of WS-Security as opposed to HTTPS. She's a computer scientist, so even if she doesn't know all the XML mumbo jumbo she understands (maybe better than me) what encryption or signature means. However I wanted a strong image, which could make her really understand what things are useful for, rather than how they are implemented (that came a bit later, she didn't escape it :-)).
So it goes like this. Suppose you are naked, and you have to drive your motorcycle to a certain destination.
In case (A) you go through a transparent tunnel: your only hope of not being arrested for obscene behaviour is that nobody is looking. That is not exactly the most secure strategy you can come up with... (notice the sweat drop on the guy's forehead :-)). That is equivalent to a POST in the clear, and when I say "equivalent" I mean it.
In case (B), you are in a better situation. The tunnel is opaque, so as long as you travel through it your public record is safe. However, this is still not the best situation. You still have to leave home and reach the tunnel entrance, and once outside the tunnel you'll probably have to get off and walk somewhere... and that goes for HTTPS. True, your message is safe while it crosses the biggest chasm; but once you've delivered it on the other side, you don't really know how many stages it will have to go through before reaching the real point where the data will be processed. And of course all those stages could use something other than HTTP: a classical MSMQ which buffers requests that can't be served right away, for example. What happens if somebody peeks at your data while it sits in that preprocessing limbo? Hm. (Read this "hm" as the one uttered by Morpheus at the end of the sentence "do you think it's air you are breathing?".)
The complete solution (C) in this metaphor is painfully trivial: get some darn clothes on yourself, and especially the helmet while on the motorcycle!!! Then you can safely go around without having to rely on the opaqueness of the environment. The metaphor is hopefully clear: the clothes come with you regardless of the means of transport or the surrounding infrastructure, as message-level security does. Furthermore, you can decide to cover one part but reveal another (and you can do that on a per-person basis: airport security can have your jacket and shoes off, while your doctor may have a higher access level), but remember that short-sleeved shirts are bad practice even if you are proud of your biceps :-) (better a polo, or a t-shirt).
I'm happy to say that she got the point! I have to say that the clothes metaphor is very powerful: I was tempted to use it to introduce the concept of policy (disco clubs won't let you in wearing sports shoes; you can't withdraw money at a bank in your underwear, while that's a perfectly acceptable look while balancing on a surfboard; and so on), but I thought that was enough for one afternoon ;-)
Courtesy: http://blogs.msdn.com/b/vbertocci/archive/2005/04/25/end-to-end-security-or-why-you-shouldn-t-drive-your-motorcycle-naked.aspx
I work in this space every day so I want to summarize the good comments on this in an effort to close this out:
SSL (HTTP/S) is only one layer, ensuring:
The server being connected to presents a certificate proving its identity (though this can be spoofed through DNS poisoning).
The communications layer is encrypted (no eavesdropping).
WS-Security and related standards/implementations use PKI to:
Prove the identity of the client.
Prove the message was not modified in transit (MITM).
Allow the server to authenticate/authorize the client.
The last point is important for service requests where the identity of the client (caller) is paramount to knowing whether they should be authorized to receive such data from the service.
Standard SSL is one-way (server) authentication and does nothing to identify the client.
The answer actually depends on your specific requirements.
For instance, do you need to protect your web messages, or is confidentiality not required and all you need is to authenticate the end parties and ensure message integrity? If this is the case - and it often is with web services - HTTPS is probably the wrong hammer.
However, from my experience, do not overlook the complexity of the system you're building. Not only is HTTPS easier to deploy correctly, but an application that relies on transport-layer security is easier to debug (over plain HTTP).
Good luck.
REST over HTTPS should be secure as long as the API provider implements authorization at the server end. In the case of a web application as well, what we do is access the application via HTTPS with some authentication/authorization, and traditionally web applications did not have security issues with this, so a RESTful API would likewise be able to counter security issues without a problem!
If your RESTful call sends XML messages back and forth embedded in the body of the HTTP request, you should be able to get all the benefits of WS-Security, such as XML encryption, certificates, etc., in your XML messages, while using whatever security features are available from HTTP, such as SSL/TLS encryption.