LAN-based authentication of applications and secure channel - C++

I have a server that authenticates client applications and decides whether they are allowed to run. I want a secure channel between the server and the clients. I've written my server with both the SSL and SSH protocols, but I don't know which one should be used in this scenario and which one makes more sense.
Both the client and the server are written in Qt/C++.
SSL is mostly used for HTTPS and web-based applications, and SSH is used for remote administration, so I think SSH is more appropriate for my server. I also think it's not a good design to release my application with certificates (an exe file shipped alongside a certificate).

Both SSL and SSH use the same fundamental cryptographic technologies. It looks like you understand the practical differences between the two, so use whichever one is convenient for your application. As long as you follow proper security practices (keeping your certificates and/or private keys under a watchful eye, and so on), either one will give you the same basic level of security.

Related

How to enable SSL communication between two Apache Tomcat servers

I have two computer systems, each running an Apache Tomcat server. One machine is a client machine and the other is a server machine. I want both the client request and the server response to be encrypted, making the data transfer safe.
Could someone please give pointers/steps on how I could make progress on this front?
The communication doesn't involve any GUI components; it is purely backend to backend.
Both the client and the server are written in Java. I am using Axis2 and JAX-WS for the communication.
Currently I am able to send the client request and receive the server response without SSL enabled. If I enable SSL, does that mean I also have to modify the existing code accordingly, or does the current working code still hold good?
You have many options here. Since you mention SSL...
On each server, generate an asymmetric key pair (RSA 2048 is a safe choice), then create a self-signed certificate. Copy each certificate to the other machine and mark it as trusted by the Java environment that Tomcat is using, and make sure that NONE OTHER are trusted. Configure SSL/TLS on each server to use a good symmetric cipher (3DES was long the safe choice; newer ciphers such as AES are preferable if you want to be current). Finally, ensure that all access between the Tomcat servers is via https URLs and you should be in decent shape.
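In code, the client side of that trust relationship looks roughly like the following Java sketch, which builds an SSLContext from an in-memory truststore containing only the other server's self-signed certificate. The file name, host, and port are placeholders; in a real Tomcat setup you would more likely import the certificate with keytool into the JVM or connector truststore, but the trust model is the same.

```java
import java.io.FileInputStream;
import java.net.URL;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class SingleCertTrustExample {
    public static void main(String[] args) throws Exception {
        // Load the other server's self-signed certificate (exported from its keystore).
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate peerCert;
        try (FileInputStream in = new FileInputStream("other-server.crt")) {
            peerCert = (X509Certificate) cf.generateCertificate(in);
        }

        // Build an in-memory truststore containing ONLY that certificate.
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        trustStore.load(null, null);
        trustStore.setCertificateEntry("peer", peerCert);

        // Create an SSLContext whose trust decisions come from that truststore alone.
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);

        // Any HTTPS call made through this socket factory rejects every other certificate.
        HttpsURLConnection conn = (HttpsURLConnection)
                new URL("https://other-server.example:8443/").openConnection();
        conn.setSSLSocketFactory(ctx.getSocketFactory());
        System.out.println("Response code: " + conn.getResponseCode());
    }
}
```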
An alternative is to use IPsec to establish a static tunnel between the two servers, using certificates or another trust basis.
One fairly simple option is to use stunnel, which is available via the standard package manager on most *NIX systems. You configure one stunnel instance as a client (and server, if you wish) on one machine and another as the server (and client, if you wish) on the other, and then configure your Tomcat instance(s) to connect to localhost:XYZ, where XYZ is the port on which stunnel is listening.
The nice part about using stunnel is that you can use it to tunnel any protocol: it is neither a Tomcat-specific nor a Java-specific technique, so you can use it for other applications in the same environment if you want.
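To illustrate the point: assuming stunnel is listening locally on port 8081 and forwarding over TLS to the remote server (the port and path below are made up), the application code stays plain HTTP and never handles TLS itself.

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class StunnelClientExample {
    public static void main(String[] args) throws Exception {
        // stunnel listens on localhost:8081 and forwards the traffic over TLS
        // to the remote Tomcat; the application itself never sees TLS.
        URL url = new URL("http://localhost:8081/myapp/service");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        System.out.println("Status: " + conn.getResponseCode());
        try (InputStream in = conn.getInputStream()) {
            System.out.println("Read " + in.readAllBytes().length + " bytes");
        }
    }
}
```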

Difference between WS Security Mechanisms

What are the advantages of running SOAP messages over SSL by modifying web.xml/ejb-jar.xml versus modifying the WSDL with a WS-Policy?
Our project can achieve its goal of having our clients (ourselves) access the web service over a secured connection by adding a transport-guarantee, but we're not sure whether that is a complete/correct solution.
With SSL, you get point-to-point encryption between the client and the service. If the service is not the ultimate recipient of the message but a proxy that routes the message on to another service, there is no encryption between the two services unless you configure that hop as well.
WS-Security configured via WS-Policy has the potential to give you end-to-end encryption between the client and the ultimate recipient of the message, because you can encrypt the message body itself. You do not need to configure SSL for every pair of communicating entities; every proxy can simply route the message on, as defined in the header.
That said, if you do not need end-to-end guarantees, but point-to-point is enough (which is your scenario, as far as I understand), I would say that using SSL is a fair choice.
Another thing to consider is that the WS-Security implementations of client and service need to be able to interoperate. SSL is generally quite mature, but my personal experience is that WS-Security implementations are not. So, if you have different WS-* stacks for client and server, it might take some hacking and trial and error to find a policy configuration that works for both.

Building a secure web service without buying (and renewing) a certificate

The goal: a web service, secure, that will be called by exactly two clients, both outside the local network. The most obvious way to secure a web service is via HTTPS, obtaining a certificate from some CA. The problem is that this is a silly waste of money. The whole point of a CA is that it is a publicly trusted authority, so I don't have to verify my identity to every single person who wants to use my web page; the CA does that for them. However, when I'm dealing with a very small number of known clients, rather than the wide-open public, I don't need anyone to vouch for me. We can do verification through our own channels.
Is there any way to accomplish this? Ideally, I'd be able to operate https with a certificate recognized by those calling my service, and if nobody else recognizes the certificate as valid, I don't care. I don't want them calling this service anyway. This should be a fairly common need in B2B data transfers (fixed-endpoint communications, rather than services intended for public consumption), and it is easy to do if you're transferring actual files (PGP-style encryption lets you simply verify and import one another's keys directly). But it isn't clear to me that this is possible with web sessions. It sure should be, if it is not. I have found some documentation of self-signed certificates, but they all seem to be intended for development purposes only, or internal use only, and expire quickly or require being on the same network.
Is there a good way to achieve this? Or am I going to have to encrypt the contents of the web service call instead? The latter is less desirable, because it would require the users of this service to add encryption code to their client applications (which assumes they are building them on a platform that can easily add support for common encryption routines, something that may or may not be true) rather than just relying on the standard HTTPS framework.
I'm working on the Windows (IIS/ASP.NET) platform, if that makes any difference.
Creating your own CA and generating self-signed certificates is the way to go. There is no reason why they must be for development only, or expire quickly. You will be in control of this.
When I implemented this in a Java environment, the most useful resource I found was on Baban's Weblog. You can probably find a resource more relevant to your IIS environment.
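As an illustration of "verification through our own channels", here is a hedged Java sketch (even though the question is about IIS/ASP.NET, the idea carries over) of a trust manager that accepts exactly one server certificate, identified by a SHA-256 fingerprint the two parties exchanged out of band. The fingerprint value is a placeholder; no CA is involved at all.

```java
import java.security.MessageDigest;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

// Trusts exactly one server certificate, identified by its SHA-256 fingerprint.
public class FingerprintTrustManager implements X509TrustManager {

    // Placeholder: fingerprint of the service's self-signed certificate.
    private static final String EXPECTED_SHA256_HEX = "replace-with-agreed-fingerprint";

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(chain[0].getEncoded());
            if (!toHex(digest).equalsIgnoreCase(EXPECTED_SHA256_HEX)) {
                throw new CertificateException("Server certificate does not match pinned fingerprint");
            }
        } catch (CertificateException e) {
            throw e;
        } catch (Exception e) {
            throw new CertificateException(e);
        }
    }

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        throw new CertificateException("Not used on the client side");
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[0];
    }

    private static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // Build an SSLContext for the client that uses only this trust manager.
    public static SSLContext pinnedContext() throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, new TrustManager[] { new FingerprintTrustManager() }, null);
        return ctx;
    }
}
```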
To offer an encrypted service you don't need a CA-issued certificate, only an HTTPS link with a certificate your callers agree to trust. You are right that, in your case, a commercial certificate does nothing for you. If your visitor insists on a CA-signed certificate, then I second #sudocode's answer.
Our old authorization service used certificates, but in rebuilding it we got rid of the certificates and moved to Amazon EC2-style security (shared-secret request signing) for the services.
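"EC2-style" here means each request carries an access key ID plus an HMAC signature computed with a shared secret, which the server recomputes and compares. A minimal Java sketch of the signing step follows; the key names and canonical string are illustrative, not any particular AWS signature version.

```java
import java.nio.charset.StandardCharsets;
import java.time.Instant;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class RequestSigningExample {
    public static void main(String[] args) throws Exception {
        String accessKeyId = "CLIENT-123";          // identifies the caller (placeholder)
        String secretKey   = "shared-secret-value"; // exchanged out of band (placeholder)

        // Canonicalize the parts of the request that must not be tampered with.
        String timestamp = Instant.now().toString();
        String stringToSign = "POST\n/authorize\n" + timestamp;

        // HMAC-SHA256 over the canonical string, keyed with the shared secret.
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String signature = Base64.getEncoder()
                .encodeToString(mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8)));

        // The client sends the key ID, timestamp, and signature with the request;
        // the server recomputes the signature with its copy of the secret and compares.
        System.out.println("X-Key-Id: " + accessKeyId);
        System.out.println("X-Timestamp: " + timestamp);
        System.out.println("X-Signature: " + signature);
    }
}
```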

Why is RPC over HTTP a security problem?

I am currently reading up on Web Services. There is a SOAP tutorial at http://www.w3schools.com/soap/soap_intro.asp. The following paragraph is from that page:
"Today's applications communicate using Remote Procedure Calls (RPC) between objects like DCOM and CORBA, but HTTP was not designed for this. RPC represents a compatibility and security problem; firewalls and proxy servers will normally block this kind of traffic."
I don't understand this. Can someone explain it to me, please? Especially, I want to know why RPC is a security problem (at least over HTTP). Knowing why exactly it is a compatibility problem would be nice, too.
The point they're making is that "traditional RPC" sometimes uses unusual low-level network protocols that often get blocked by corporate firewalls. Because SOAP uses HTTP, its traffic is "indistinguishable" from normal web page views, and so is not caught by these firewalls.
I'm not too sure about the security point; I think they're probably implying that HTTP can easily be secured with HTTPS, while proprietary RPC protocols often aren't. Of course, this is protocol-dependent: not all RPC protocols are insecure, and many of them can be tunnelled over HTTPS.
Regarding compatibility, the problem is that it's not obvious how to make something that uses DCOM talk to something that uses CORBA, for example. One of the aims of SOAP is to provide interoperability, so as to harmonize the way this sort of communication is implemented. (There may still be a few glitches regarding interoperability with SOAP, depending on the tools you use.)
Regarding security, for a long time policies have been built around using port numbers to distinguish applications: if you want to block a certain service (say NNTP), you block its port at the firewall level. That makes it easy to have coarse control over which applications may be used. What SOAP over HTTP does is push the problem up a layer. You can no longer tell which application or service is in use from the port number at the TCP level; instead, you would have to analyse the content of the HTTP and SOAP messages to authorize certain applications or services.
SOAP mostly uses HTTP POST to send its messages: that is using HTTP as a transport protocol, whereas HTTP is a transfer protocol, and therefore not using HTTP in accordance with the web architecture (SOAP 1.2 may have attempted to improve the situation). Because almost everyone needs access to the web nowadays, it's almost guaranteed that the HTTP ports won't be blocked. That's effectively using a loophole, if no security layer is added on top of it.
That being said, in terms of security there are advantages to using HTTP for SOAP communication, as there is more harmonization around existing HTTP authentication systems, for example. What the SOAP/WS-* stack attempts to do is harmonize "RPC" communications independently of the platform. It's not a case of "SOAP is secure" vs. "DCOM/CORBA isn't": you still have to make use of its security components, e.g. WS-Security, and you may have been able to achieve a reasonable level of security with other systems too.

Non-Transparent Proxy Caching of SSL

I asked the question before but didn't phrase it quite right. I'm using RESTful principles to build a secure web-app that uses both transport authentication/encryption and message level security.
The message-level security is essentially client-independent (the messages are still encrypted), and hence individual messages can be cached, or stored on an intermediary server, without significant risk of exposing private data.
Transport-level security is needed to authenticate both endpoints using TLS client authentication. The situation is analogous to having a central mainframe where messages originate, and caches at each branch where the clients are located. I want the client->cache and cache->mainframe connections to be secured using TLS and the individual X.509 certificates. Hence, the client will know it is talking to a proxy, and the mainframe will know it is talking to the proxy and not directly to the client.
Is there some way of doing this using HTTP standards, and not through some hack?
Essentially, I want the client to try and access the mainframe URI, to know it has to go through the proxy, and use TLS with the proxy (with the proxy having its own certificate), and then for the proxy to proceed to connect to the mainframe (with each having their own certificate) on behalf of the client. The proxy can cache the data the mainframe returns, and use that instead of having to connect to the mainframe each time.
Does anybody know proxy/caching software or a method that will allow this?
Would this get more responses on serverfault.com as it's essentially a server software/config question rather than a programming problem per se?
Basically, it sounds like you want a standard SSL reverse proxy with caching. You could do this without writing any code with Apache + mod_cache, configured as a reverse proxy.
The kicker is the message security. It would only work if your requests are 100% cacheable based only on the path/query string, and if they were "unique by client" (e.g. a client ID in the query string or something). Something tells me that one or both of these are not true. This would be pretty trivial to build in ASP.NET, or by extending mod_cache (basically just standard response caching, bucketed by the client certificate thumbprint).
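As a sketch of the "bucketed by client cert thumbprint" idea, here is a simplified Java servlet filter (not the ASP.NET or mod_cache route mentioned above). It assumes the container exposes the verified client certificate via the standard javax.servlet.request.X509Certificate request attribute, and it ignores response headers, status codes, character-stream output, and cache invalidation.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.security.MessageDigest;
import java.security.cert.X509Certificate;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletOutputStream;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.WriteListener;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Minimal response cache keyed by client-certificate thumbprint plus URI,
// so each authenticated client only ever sees its own cached entries.
public class ThumbprintCacheFilter implements Filter {

    private final ConcurrentHashMap<String, byte[]> cache = new ConcurrentHashMap<>();

    @Override public void init(FilterConfig filterConfig) { }
    @Override public void destroy() { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // The container exposes the verified client certificate chain here
        // when TLS client authentication is enabled on the connector.
        X509Certificate[] certs = (X509Certificate[])
                request.getAttribute("javax.servlet.request.X509Certificate");
        if (certs == null || certs.length == 0) {
            response.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }

        String key = thumbprint(certs[0]) + "|" + request.getRequestURI()
                + "?" + request.getQueryString();

        byte[] cached = cache.get(key);
        if (cached != null) {
            response.getOutputStream().write(cached);   // serve from the cache
            return;
        }

        // Capture the downstream response body so it can be stored and replayed.
        // (Servlets that use getWriter() instead of getOutputStream() are not handled here.)
        final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        HttpServletResponseWrapper wrapper = new HttpServletResponseWrapper(response) {
            @Override public ServletOutputStream getOutputStream() {
                return new ServletOutputStream() {
                    @Override public void write(int b) { buffer.write(b); }
                    @Override public boolean isReady() { return true; }
                    @Override public void setWriteListener(WriteListener listener) { }
                };
            }
        };

        chain.doFilter(request, wrapper);

        byte[] body = buffer.toByteArray();
        cache.put(key, body);
        response.getOutputStream().write(body);
    }

    private static String thumbprint(X509Certificate cert) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(cert.getEncoded());
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```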