Do you know of an nginx module that performs something similar to verification of Amazon Web Services request signatures?

I'd like to restrict access to my web service to registered clients. The first thing I thought of was to mimic the AWS approach, which, in a nutshell, issues each client a non-secret/secret key pair and requires the client to prove knowledge of the secret key by computing a cryptographic function over some of the HTTP request data and the secret key, then sending the output of that function in a request header. AWS computes the same function and checks that the expected signature matches what the client specified. The secret itself is never transmitted, blah blah. This is pretty typical and not that interesting, albeit useful.
http://mws.amazon.com/docs/devGuide/Signatures.html
http://chrisroos.co.uk/blog/2009-01-31-implementing-version-2-of-the-amazon-aws-http-request-signature-in-ruby
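For concreteness, here is a minimal Python sketch of the kind of scheme described above. It is not AWS's exact algorithm; the string-to-sign layout, the header convention, and the key value are illustrative assumptions.

    import base64
    import hashlib
    import hmac

    # Illustrative only -- not AWS's exact algorithm. The string-to-sign
    # layout and the secret below are assumptions for the sketch.
    SECRET_KEY = b"client-secret-issued-out-of-band"

    def sign_request(method, path, date_header):
        """Compute an HMAC-SHA256 signature over selected request data."""
        string_to_sign = "\n".join([method, path, date_header])
        digest = hmac.new(SECRET_KEY, string_to_sign.encode("utf-8"),
                          hashlib.sha256).digest()
        return base64.b64encode(digest).decode("ascii")

    def verify_request(method, path, date_header, claimed_signature):
        """Server side: recompute the signature and compare in constant time."""
        expected = sign_request(method, path, date_header)
        return hmac.compare_digest(expected, claimed_signature)

    # The client would send something like:
    #   Authorization: MyService <access-key-id>:<signature>
    sig = sign_request("GET", "/reports/42", "Tue, 27 Mar 2012 19:36:42 +0000")
    assert verify_request("GET", "/reports/42",
                          "Tue, 27 Mar 2012 19:36:42 +0000", sig)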
My preferred web server for web services is nginx. I'd like to start requiring similar request signatures in certain services. It makes sense to me to create an nginx module that handles request signature validation before ever sending the request to an upstream process (my web service instance(s)).
Do you know of such an nginx module? Or do you know of a different one that I could base my work on?
There's a decent nginx module writing guide here:
http://www.evanmiller.org/nginx-modules-guide.html
Please note that I'm not asking "how do I write an nginx module?" I'm simply trying to avoid reinventing the wheel.
Thanks!

If I'm understanding correctly, you could simply check for custom headers with if ($http_yourheader) { ... } and validate them against a backend such as memcached, proxy to a FastCGI script, or even use an embedded Perl script (although this will be slow and could block).
AFAIK there aren't any specific standard or third-party modules that do this, but a combination of them could provide a suitable solution (e.g. $http_header checks plus a Redis backend).
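As a rough illustration of that combination, here is a minimal WSGI sketch of the validation backend nginx might proxy to. The header names and the in-memory key store are assumptions; in practice the store would be memcached or Redis as suggested above.

    import hashlib
    import hmac
    from wsgiref.simple_server import make_server

    # Stand-in for a memcached/Redis lookup of each client's secret.
    SECRETS = {"client-a": b"secret-issued-to-client-a"}

    def app(environ, start_response):
        key_id = environ.get("HTTP_X_KEY_ID", "")
        claimed = environ.get("HTTP_X_SIGNATURE", "")
        secret = SECRETS.get(key_id)
        string_to_sign = environ["REQUEST_METHOD"] + "\n" + environ.get("PATH_INFO", "")
        if secret is not None:
            expected = hmac.new(secret, string_to_sign.encode("utf-8"),
                                hashlib.sha256).hexdigest()
            if hmac.compare_digest(expected, claimed):
                start_response("200 OK", [("Content-Type", "text/plain")])
                return [b"ok"]
        start_response("403 Forbidden", [("Content-Type", "text/plain")])
        return [b"invalid signature"]

    if __name__ == "__main__":
        make_server("127.0.0.1", 9000, app).serve_forever()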
Is there a particular reason you're not looking to use custom SSL certs? They would seem an adequate solution for restricting access with added security.

Related

How to pentest REST APIs using Burp Suite?

I want to pen test REST APIs. The use case I have is a client (a desktop app with username and password) connecting to a server, so I am confused about where to start and how to configure Burp. Usually I use Burp to pen test websites, which is much easier to configure: you just set the proxy and intercept in the browser. But now the use case is different.
Furthermore, when I searched on Google I noticed Postman mentioned many times. I know it's a tool for building APIs, but is it also used in pentesting alongside Burp?
It may be useful to first confirm that the application is communicating via HTTP/HTTPS to ensure Burp is the right tool to use.
Postman is only useful for penetration testing if you already have Postman docs. It doesn't sound like that's the case here so I wouldn't worry about that.
Assuming the desktop app does use HTTP, there are two things you will need to do:
Change system-level proxy settings to point to Burp (127.0.0.1:8080)
Install and trust the Burp CA Certificate (available locally from http://burp:8080).
In some cases you might need to enable 'invisible proxying' in Burp.
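Before wrestling with the desktop app itself, it can help to confirm that Burp is listening and its CA is trusted. A quick sketch using Python's requests library; the target URL is a placeholder, and burp-ca.pem is the CA certificate exported from Burp and converted to PEM:

    import requests

    # Route a test request through Burp's default listener.
    proxies = {
        "http": "http://127.0.0.1:8080",
        "https": "http://127.0.0.1:8080",
    }

    # "https://api.example.com/health" is a placeholder endpoint.
    resp = requests.get("https://api.example.com/health",
                        proxies=proxies, verify="burp-ca.pem")
    print(resp.status_code)  # the request should appear in Burp's HTTP history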
Depending on the type of client, this may not always work at first, but if the client supports a proxy, you should see the traffic in your Burp window. Pay attention to the Dashboard in Burp: if you see TLS warnings, it may be an indicator that the client uses certificate pinning, and some reverse engineering of the client may be needed.
As you know, Burp intercepts HTTP/S traffic; it isn't a tool for intercepting arbitrary network traffic. To achieve your goal, you can use Wireshark or something similar to find the software's REST API endpoint.
After that, you can start your penetration testing with Burp as you did before.
So how can you find the REST API endpoint in Wireshark?
You can filter the captured traffic using this pattern:
tcp.port==443
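The same filter can be scripted, e.g. with the pyshark library (assuming pyshark is installed and you have saved a capture of the app's traffic; the file name is a placeholder):

    import pyshark

    # Apply the same display filter to a saved capture of the app's traffic.
    cap = pyshark.FileCapture("app_traffic.pcapng", display_filter="tcp.port==443")
    for pkt in cap:
        print(pkt.ip.dst)  # server addresses the desktop app connects to
    cap.close()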

Accessing a SOAP service URL from a client's point of view

I was asked this question in a technical interview for an integration intern role.
He was digging deep into my understanding of SOAP web services.
Question: Consider that you are exposing a web service through SOAP to a client.
The url through which you are providing the service is up and running when you check it.
But the Client has a problem, he is not able to access your webservice.
How will you go on troubleshooting this issue?
My response:
I would first check whether the URL the client is using to access the service is correct.
I would check the WSDL file (port, bindings) and verify whether, upon sending a SOAP request to the URL, I receive the SOAP response locally through SoapUI.
If I get an error, I would troubleshoot based on the kind of error: page not found, null exception, etc.
I felt he was still expecting some other point. He hinted at it, asking in what registry you would check all the web services that have been hosted (I guess this was more of a production support question :P).
I said I might look into the UDDI registry, but I wasn't sure about this.
Please let me know your input on what the right approach might be.
Apache jUDDI PMC here. Yes, UDDI could be used to verify that the client is pointed at the right location, assuming the client knows where the UDDI server is, the service is registered there, the client knows what to query for on the UDDI server, and a UDDI query is part of the client's normal workflow. That's a lot of assumptions, but certainly feasible.
Most of the time, the endpoint is in a config file somewhere, or some idiot hard-coded it.
That said, this is my go-to list for checking SOAP service connectivity (from the client's perspective); a rough Python sketch of these checks follows the list:
DNS resolution of the hostname in the URL
Ping the remote host
HTTP GET to the URL of the SOAP service + ?wsdl (this usually works). This is also a good time to verify SSL connectivity.
You can also parse the WSDL doc, assuming one is returned, to identify the endpoint URL.
Finally, if all that works, execute the service. HTTP 200 is generally a positive sign.
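The sketch mentioned above. The service URL is a placeholder, and a TCP connect stands in for the ping step, since ICMP needs raw sockets or OS tools:

    import socket
    import urllib.request
    from urllib.parse import urlparse

    SERVICE_URL = "https://example.com/services/OrderService"  # placeholder

    parsed = urlparse(SERVICE_URL)
    host = parsed.hostname
    port = parsed.port or (443 if parsed.scheme == "https" else 80)

    # 1. DNS resolution of the hostname in the URL
    print("resolves to:", socket.gethostbyname(host))

    # 2. Reachability (TCP connect in lieu of an ICMP ping)
    with socket.create_connection((host, port), timeout=5):
        print("host reachable on port", port)

    # 3. HTTP GET of the WSDL; this also exercises the SSL handshake
    with urllib.request.urlopen(SERVICE_URL + "?wsdl", timeout=10) as resp:
        wsdl = resp.read()
        print("WSDL fetch:", resp.status, len(wsdl), "bytes")

    # 4. Parsing the WSDL for endpoint URLs is left to a SOAP library;
    # 5. finally, execute an actual operation and look for an HTTP 200.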
Edit:
Another approach is to implement a very simple API (WSDL method) on every SOAP service that simply returns a true/false answering the question "Am I open for business?". This method would provide a standardized way of identifying whether a service is available by testing its external dependencies (databases and whatnot).
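The shape of that idea, sketched as a plain HTTP health endpoint (a real implementation would expose it as a WSDL operation; the dependency check is a placeholder):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    def dependencies_ok():
        # Placeholder: ping the database, message queue, etc.
        return True

    class HealthHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"true" if dependencies_ok() else b"false"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8081), HealthHandler).serve_forever()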

Building a secure web service without buying (and renewing) a certificate

The goal: a web service, secure, that will be called by exactly two clients, both outside the local network. The most obvious way to secure a web service is via https, obtaining a certificate from some CA. The problem is that this is a silly waste of money. The whole point of a CA is that it is a publicly trusted authority, so I don't have to verify my identity to every single person who wants to use my web page, the CA is doing that for them. However, when I'm dealing with a very small number of known clients, rather than the wide open public, I don't need anyone to vouch for me. We can do verification through our own channels.
Is there any way to accomplish this? Ideally, I'd be able to operate https with a certificate recognized by those calling my service, and if nobody else recognizes the certificate as valid, I don't care. I don't want them calling this service anyway. This should be a fairly common need in B2B data transfers (fixed-endpoint communications, rather than services intended for public consumption), and it is easy to do if you're transferring actual files (PGP-style encryption lets you simply verify and import one another's keys directly). But it isn't clear to me that this is possible with web sessions. It sure should be, if it is not. I have found some documentation of self-signed certificates, but they all seem to be intended for development purposes only, or internal use only, and expire quickly or require being on the same network.
Is there a good way to achieve this? Or am I going to have to encrypt the contents of the web service call instead? The latter is less desirable, because it would require the users of this service to add encryption code to their client applications (which assumes they are building these on a platform which can easily add support for common encryption routines, something that may or may not be true) rather than just relying on the standard HTTPS framework.
I'm working on the Windows (IIS/ASP.NET) platform, if that makes any difference.
Creating your own CA and generating self-signed certificates is the way to go. There is no reason why they must be for development only, or expire quickly. You will be in control of this.
When I implemented this in a Java environment, the most useful resource I found was on Baban's Weblog. You can probably find a resource more relevant to your IIS environment.
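For illustration, here is what the client side can look like in Python: trust exactly one self-signed certificate and nothing else. The file and host names are placeholders; service-cert.pem is the certificate your two clients receive through your own channels.

    import http.client
    import ssl

    # Trust only this one certificate (or, if you run your own CA,
    # certificates it has signed). Nobody else's cert will verify.
    ctx = ssl.create_default_context(cafile="service-cert.pem")

    # The cert's CN/SAN must match the hostname used here.
    conn = http.client.HTTPSConnection("service.example.com", 443, context=ctx)
    conn.request("GET", "/api/data")
    print(conn.getresponse().status)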
To offer a secure service you don't need any certificate, only an https link. You are right that, in your case, a certificate does nothing for you. If your visitor insists on a certificate, then I second #sudocode's answer.
Our old authorization service used certificates, but in rebuilding it we got rid of the certificates and went to Amazon EC2-style security for the services.

Non-Transparent Proxy Caching of SSL

I asked the question before but didn't phrase it quite right. I'm using RESTful principles to build a secure web-app that uses both transport authentication/encryption and message level security.
The message level security is essentially client-independent (still encrypted though), and hence this allows the individual messages to be cached, or stored on an intermediary server without significant risk of exposing private data.
Transport level security is needed to authenticate both end-points using TLS client-authentication. The situation is analogous to having a central mainframe where messages originate, and caches at each branch where the clients are located. I want the client->cache and cache->mainframe connections to be secured using TLS and the individual X509 Certificates. Hence, the client will know it is talking to a proxy, and the mainframe will know it is talking to the proxy and not directly to the client.
Is there some way of doing this using HTTP standards, and not through some hack?
Essentially, I want the client to try and access the mainframe URI, to know it has to go through the proxy, and use TLS with the proxy (with the proxy having its own certificate), and then for the proxy to proceed to connect to the mainframe (with each having their own certificate) on behalf of the client. The proxy can cache the data the mainframe returns, and use that instead of having to connect to the mainframe each time.
Does anybody know proxy/caching software or a method that will allow this?
Would this get more responses on serverfault.com as it's essentially a server software/config question rather than a programming problem per se?
Basically, it sounds like you want a standard SSL reverse proxy with caching. You could do this without writing any code with Apache + mod_cache, configured as a reverse proxy.
The kicker is the message security. It'd only work if your requests are 100% cacheable based only on path/querystring, and if they are "unique by client" (e.g., a client ID in the query string or something). Something tells me that one or both of these are not true. This would be pretty trivial to build in ASP.NET, or by extending mod_cache (basically just standard response caching, bucketed by the client cert thumbprint).
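The bucketing idea reduces to a cache-key scheme along these lines (a sketch; in a real proxy the DER bytes would come from the TLS layer rather than a literal):

    import hashlib

    def cache_key(path, querystring, client_cert_der):
        # Bucket cached responses by the client certificate thumbprint
        # plus the path and querystring that identify the resource.
        thumbprint = hashlib.sha1(client_cert_der).hexdigest()
        return f"{thumbprint}:{path}?{querystring}"

    print(cache_key("/messages/123", "format=xml", b"<der-encoded-cert-bytes>"))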

Web Service Interface

I'm looking to add a web services interface to an existing server application. The set of services to expose is not known at compile time and can change over the runtime life of the server.
From a tech standpoint all the server/web services endpoints will be on Windows.
In our server app a user will have the option to register workflows as 'web services callable'. This will create the WSDL defining this particular workflow service.
For the calling endpoint I'm thinking of an HttpModule that accepts the inbound web service request, unpacks the request, converts the XML data types into our server application's "domain", calls the server, and finally converts the server outputs back into XML for return down the HTTP connection.
Does that make sense?
Critical comments welcomed.
In effect writing your own WS engine. Clearly doable, but quite a bit of work to get right from scratch. I guess if you find some open source implementation, then adapting it should be possible.
A rather dirtier alternative, but one I've seen applied in another context, is to go for a single WS interface:
String call(String workFlowName, String payload)
The payload and response are both Strings containing arbitrary XML, so the caller needs to understand the schemas for those XMLs. From the client's perspective the amount of coding effort is not much different. Your coding effort would, I think, be significantly reduced.
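In Python terms (purely to illustrate the shape of the design; the workflow names and XML payloads are made up), the single generic entry point might look like this:

    # One generic entry point that dispatches to registered workflows,
    # with opaque XML strings in and out.
    workflows = {}

    def register(name):
        def wrap(fn):
            workflows[name] = fn
            return fn
        return wrap

    @register("echo")
    def echo(payload_xml):
        return f"<response>{payload_xml}</response>"

    def call(workflow_name, payload_xml):
        if workflow_name not in workflows:
            return "<error>unknown workflow</error>"
        return workflows[workflow_name](payload_xml)

    print(call("echo", "<data>hello</data>"))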
> an HttpModule that accepts the inbound web service request, unpacks the request, converts the XML data types into our server application's "domain", calls the server, and finally converts the server outputs back into XML for return down the HTTP connection.
That is what all web service frameworks do (e.g. Metro, Axis). So I can't see your problem. What's your concern with this approach?
The downside for the client is that, as far as I understand it, the availability of your services may change over time. So you should consider a way to inform the client whether the service is available (other than getting a timeout error because it is not there), e.g. WS-ResourceLifetime or UDDI.
I ended up creating a C# class that implements the IHttpHandler interface. The implementation serves up the WSDL of the services exposed from our app and accepts SOAP posts to invoke the services. In the end, most of the work went into converting SOAP types to our types and vice versa.