In the HTTPS protocol, a pre-master secret is generated by the client and sent to the server, and from then on symmetric encryption takes place. My question is: if this is the case, how is a message digest considered as signed by the server?
Or does the digital signature come into play only when establishing the HTTPS connection?
Does it apply only to public keys?
You skipped the most important part (in terms of trust) of the protocol. The client (browser) needs to confirm that the server is who it claims to be. The server provides its certificate to the client as this proof. The client then does a number of checks on the certificate, such as:
Does it have a valid chain of trust?
Is the root signature authority a trusted authority by the client?
Is the certificate within its period of validity?
Has the certificate been revoked? (this check is not always possible)
And a few other checks. Once the client trusts the certificate, it can then use it to establish a session with the server using its public key. The creation of the session involves sharing symmetric keys (note the plural) for the remaining communications.
During the session, two types of security are enforced: privacy via encryption and message integrity via MAC (typically HMAC). The MAC is a symmetric method for computing signatures on every message using a shared secret key (one of the keys that was shared during the creation of the session). This prevents a 3rd party from altering the messages in transit.
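For illustration, here is a minimal sketch of that idea using OpenSSL's one-shot HMAC() helper (the key and message are placeholders, not real TLS key material; the actual TLS record MAC also covers things like sequence numbers):

#include <openssl/hmac.h>
#include <openssl/evp.h>
#include <cstdio>

int main() {
    // In TLS the MAC key comes from the key material negotiated during the
    // handshake; here it is a hard-coded placeholder.
    const unsigned char key[] = "session-mac-key";
    const unsigned char msg[] = "GET /index.html HTTP/1.1";

    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int mac_len = 0;

    // HMAC-SHA256 over the message with the shared secret key.
    HMAC(EVP_sha256(), key, sizeof(key) - 1, msg, sizeof(msg) - 1, mac, &mac_len);

    // The receiver recomputes the same HMAC and compares; a mismatch means
    // the message was altered in transit.
    for (unsigned int i = 0; i < mac_len; ++i)
        std::printf("%02x", mac[i]);
    std::printf("\n");
    return 0;
}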
You ask how "message digest is considered digital signature?" I think you are referring to the MAC part of the protocol in your question. For more information, see Wikipedia.
I need a snippet of code for a program I am writing with Boost Asio SSL.
I have a system of two clients that connect to each other. I require them to do mutual authentication, so that at the end of the handshake() call both clients can be certain that the other client holds the private key for the certificate it supplied.
Both clients have a context object, let's call them ctx1 and ctx2, and each client has a public certificate and a private key.
Is it possible to set up the context objects so that when I call socket.handshake() the clients will do two-way authentication? If not, what would be the correct way to achieve my goal?
It looks like boost just uses the OpenSSL interface.
I don't know boost much but I've implemented lots of OpenSSL internals for Perl and came to the following conclusions after reading the relevant parts of the boost source code:
To have mutual authentication with OpenSSL you have to use SSL_VERIFY_PEER on the client side and SSL_VERIFY_PEER|SSL_VERIFY_FAIL_IF_NO_PEER_CERT on the server side. If you use only SSL_VERIFY_PEER on the server side it will only send the certificate request to the client, but silently accept if the client sends no certificate back.
With boost this would probably be:
ctx.set_verify_mode(ssl::verify_peer); // client side
ctx.set_verify_mode(ssl::verify_peer|ssl::verify_fail_if_no_peer_cert); // server side
If you set verify_mode this way it will verify the certificates against the configured trusted CAs (set with ctx.load_verify_file or ctx.load_verify_path).
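Putting these pieces together, a rough sketch of the two contexts might look like this (file names are placeholders and error handling is omitted; method availability can differ slightly between boost versions):

#include <boost/asio/ssl.hpp>
namespace ssl = boost::asio::ssl;

// Client side: verify the server and present our own certificate when asked.
void configure_client(ssl::context& ctx) {
    ctx.set_verify_mode(ssl::verify_peer);
    ctx.load_verify_file("ca.pem");                  // CA(s) we trust for the peer
    ctx.use_certificate_chain_file("client.pem");    // sent when the server requests it
    ctx.use_private_key_file("client.key", ssl::context::pem);
}

// Server side: request a client certificate and fail the handshake without one.
void configure_server(ssl::context& ctx) {
    ctx.set_verify_mode(ssl::verify_peer | ssl::verify_fail_if_no_peer_cert);
    ctx.load_verify_file("ca.pem");
    ctx.use_certificate_chain_file("server.pem");
    ctx.use_private_key_file("server.key", ssl::context::pem);
}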
If you have full control over the CA which signed the certificates (i.e. it's your own CA) it might be enough for you to accept any certificate signed by this CA. But if you use a CA which also signed certificates you don't want to accept, as is typically the case with public CAs, you also need to verify the contents of the certificate. The details of how to do this depend on your protocol, but for the usual internet protocols like HTTP or SMTP this involves checking the commonName and/or subjectAltNames of the certificate. Details like wildcard handling vary between the protocols.
Boost provides rfc2818_verification to help you with HTTP-style validation, although from reading the code I think the implementation is slightly wrong (multiple wildcards accepted, IDN wildcards allowed - see RFC6125 for requirements).
I don't know of any standards for verifying client certificates. Often just any certificate signed by a specific (private) CA will be accepted; other times, certificates from a public CA matching a specific e-mail pattern. It looks like boost does not help you much in this case, so you probably have to get the OpenSSL SSL* handle with sock.native_handle() and then use OpenSSL functions to extract the certificate (SSL_get_peer_certificate) and to check its contents (various X509_* functions).
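For example, a rough sketch of pulling out the peer certificate after the handshake (what you then check against the subject is entirely up to your own policy):

#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <openssl/ssl.h>
#include <openssl/x509.h>
#include <iostream>

// Drop down to the OpenSSL handle and print the peer certificate's subject.
void inspect_peer(boost::asio::ssl::stream<boost::asio::ip::tcp::socket>& sock) {
    SSL* ssl = sock.native_handle();
    X509* cert = SSL_get_peer_certificate(ssl);   // NULL if the peer sent no certificate
    if (!cert) {
        std::cerr << "no peer certificate\n";
        return;
    }
    char subject[256];
    X509_NAME_oneline(X509_get_subject_name(cert), subject, sizeof(subject));
    std::cout << "peer subject: " << subject << "\n";
    X509_free(cert);   // SSL_get_peer_certificate bumps the reference count
}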
At least if public CAs are involved you should also check the revocation status of the certificates. It looks like boost does not provide a direct interface to CRLs (certificate revocation lists), so you have to use ctx.native_handle() with the appropriate OpenSSL functions (X509_STORE_add_crl etc). Using OCSP (Online Certificate Status Protocol) is way more complex, and the relevant OpenSSL functions are mostly undocumented, which means you have to read the OpenSSL source code to use them :(
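A rough sketch of the CRL part, assuming a PEM-encoded CRL file (error handling omitted):

#include <boost/asio/ssl.hpp>
#include <openssl/ssl.h>
#include <openssl/x509_vfy.h>
#include <openssl/pem.h>
#include <openssl/bio.h>

// Load a CRL into the context's X509 store and enable CRL checking.
void add_crl(boost::asio::ssl::context& ctx, const char* crl_file) {
    X509_STORE* store = SSL_CTX_get_cert_store(ctx.native_handle());

    BIO* in = BIO_new_file(crl_file, "r");
    X509_CRL* crl = PEM_read_bio_X509_CRL(in, nullptr, nullptr, nullptr);
    BIO_free(in);

    X509_STORE_add_crl(store, crl);
    X509_CRL_free(crl);

    // Check the whole chain, not just the leaf certificate, against loaded CRLs.
    X509_STORE_set_flags(store, X509_V_FLAG_CRL_CHECK | X509_V_FLAG_CRL_CHECK_ALL);
}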
One can't force the other side to authenticate against you; it is up to the protocol, and each side authenticates only against the other side. Just follow the manuals, e.g. http://www.boost.org/doc/libs/1_47_0/doc/html/boost_asio/overview/ssl.html
ssl::context ctx(ssl::context::sslv23);
ctx.set_verify_mode(ssl::verify_peer);
ctx.load_verify_file("ca.pem");
I was using wso2esb-4.9.0, then wso2-5.0.0, and am now working on wso2ei-6.0.0.
I would like to create a secured proxy service that could be used by different clients.
The required security is scenario 5 (sign and encrypt, X.509 authentication): messages are encrypted using the service (server) public certificate and signed using the client private key. Since multiple clients will use the service, each client should sign the message using its own private key.
On the server side, the public certificate of each client should already be in the server's trust store.
On the server side, I can do a hardcoded Rampart configuration in order to respond correctly to incoming requests from client1 OR from client2. This means that, for now, the only solution I have found to support two clients for the same backend service is to use two proxy services, each configured to verify the signature of exactly one client.
I would like advice or pointers on configuring the server side in a dynamic way, where only one proxy service is used. This proxy service should be able to configure Rampart correctly at run time, in order to decrypt and verify the signature of the incoming message (one proxy for N clients).
Thanks,
So, in fact nothing extra needs to be done at the Rampart configuration level, since the hardcoded configuration is only relevant when the server side itself wants to consume something from another party.
Since the incoming request contains information about the certificate used, the server will dynamically check its keystore in order to verify the incoming signed message... so once again, just configure Rampart at the service side and at the client side and let the magic happen.
Thanks to the WSO2 team for a great product suite!
I'm running a small web app on an EC2 instance and I want some friends to be able to use it. I also want to make it use HTTPS, just for basic security purposes (prevent packet snooping whenever possible). Of course I am using a self-signed certificate, because my budget for this project is $0. But Chrome throws up a warning page upon trying to visit it:
Your connection is not private
Attackers might be trying to steal your information from [...]
(for example, passwords, messages, or credit cards).
NET::ERR_CERT_AUTHORITY_INVALID
This server could not prove that it is [...]; its security certificate is not trusted by your computer's operating system. This may be caused by a misconfiguration or an attacker intercepting your connection.
Is it not true that "any encryption is better than no encryption"? On unencrypted HTTP, I could be trying to steal information as well, and I don't have to prove anything about my server identity, AND my communication can be read in plaintext by packet sniffing, but Chrome doesn't throw up any warning flags there...
What gives? Why does Chrome hate self-signed certificates so much? Why doesn't it just put a little red box over the padlock icon, instead of giving me a two-click warning page?
Edit Sep 2021 (this has been applicable since 2016): Just suck it up and use one of the free certificate issuers. Let's Encrypt and AWS ACM will literally do it for free.
This question is not specific to Chrome. Firefox and probably other browsers behave similarly, and in recent years the warnings have become even stricter. Complaining about these warnings mostly shows a missing understanding of the role of certificates in HTTPS.
With HTTPS one expects encryption, i.e. private communication between the browser and the server with nobody sniffing or manipulating the transferred data. At the beginning of the connection, client and server exchange the encryption keys, so that one can encrypt the data and the other can decrypt the data. If some man-in-the-middle manages to manipulate the key exchange in a way that it gets control over the encryption keys, then the connection will still be encrypted but no longer private. Thus it is essential that the key exchange is protected, and this is done with certificates. Only with proper checking of the certificate can the client verify that it talks to the real server and not some man-in-the-middle, and thus the critical key exchange can be protected.
Certificates are usually verified by
Checking the trust chain, i.e. whether the certificate is directly or indirectly (via intermediate certificates) issued by a certificate authority (CA) trusted by the browser or operating system.
Verifying that the certificate is issued for the expected hostname, i.e. the subject matches the hostname.
With self-signed certificates or certificates issued by a CA unknown to the browser/OS this check will fail. In this case it is unknown whether the certificate was simply not issued by a trusted CA or whether there is some man-in-the-middle manipulating the connection. Being the man-in-the-middle is not hard, especially in unprotected networks like public hotspots.
Because the browser cannot verify the certificate in these cases, it will throw a big fat warning to show the user that something is seriously wrong. If your friends know that you only have a self-signed certificate there, they should also know that this is the expected behavior of the browser in this case. You should also provide them with the fingerprint of your certificate so that they can be sure it is the expected certificate, because there is no other way to check its validity. Note that this warning typically only comes once, because the browser saves the fingerprint and from then on knows that your site is associated with this certificate. But if you change the certificate it will complain again.
If you don't like the trouble of teaching all of your friends how to properly verify your certificate, then get yourself a certificate from a public CA. They don't need to be expensive, and some CAs also issue free certificates.
Is it not true that "any encryption is better than no encryption"?
While bad encryption might be better than no encryption, transferring sensitive data over an encrypted but man-in-the-middled connection is definitely worse than transferring non-sensitive data with no encryption. And contrary to plain HTTP, with HTTPS you can actually detect a potential man-in-the-middle attack. What you cannot do is find out whether this is a potential man-in-the-middle attack or whether the non-verifiable certificate is actually the expected one, because the browser has no previous knowledge of what to expect. Thus a self-signed certificate is actually not that bad, provided that the browser knows up front that this site only provides a self-signed certificate. And it might also not be bad if the transferred data are not sensitive. But how should the browser know what kind of data and what kind of certificate to expect?
Because SSL/TLS is trying to solve two problems in a single stroke, and you are completely ignoring one of them.
SSL is meant to provide both encryption (between two endpoints) and authentication (where each endpoint is exactly who it says it is). The latter problem is generally solved via organizations known as Certificate Authorities (CAs), who are supposed to verify your identity before agreeing to give you a certificate. While there have been some spectacular failures of this level of trust in the past, we don't have anything better yet, and so browsers still expect SSL/TLS certificates to be issued by one of these trusted authorities; if a certificate is not, there's no way to know whether you're actually talking to the party you intended to.
So, while it may be encrypted, having an encrypted conversation with someone who shouldn't be party to the conversation is actually WORSE than having a plaintext conversation with someone who SHOULD be party to the conversation.
There are a few free SSL providers out there such as Let's Encrypt that won't cause this warning and still fit in your $0 budget.
Put chrome://flags/#allow-insecure-localhost into the Chrome address bar, then select the enabled option.
The reason for the click through is to offer some protection from phishing attacks.
The $0 work-around is to create your own certification authority (which is just a special file) and have your friends install its public key on their computers.
Then use the CA's private key to sign your HTTPS certificate, and their browsers will accept it. OpenSSL is one tool that can do this.
So, I have made a Thrift-based program with a client and a server, and the client can communicate with the server without problems. Now, since the data transfer will be quite crucial, I wanted some kind of security in it.
So, I thought of a login system, but the problem is I am not storing any kind of session data on the server side (I don't even know what I should store; after all, client requests come and go and there is no way to differentiate them). So after much thinking, this is what I came up with (a rough sketch of the token part follows the list):
Using random numbers, I would generate some kind of random string when the server starts
The client side will enter the username and password, which will be verified at the server end using PAM authentication (I just read something about it)
If verified, the server will just send that randomly generated string to the client side
The client will send that string to the server every time it tries to execute an RPC
If the string matches, the server will do the work, else return some error code
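To make the token part concrete, here is a rough sketch of what I have in mind (the class and names are made up; the PAM and Thrift plumbing are left out):

#include <random>
#include <string>

// Holds the per-run token. Note: std::mt19937_64 is NOT a cryptographically
// secure generator; a real implementation should use a secure random source.
class TokenGuard {
public:
    TokenGuard() : token_(makeToken()) {}

    // Step 3: handed to the client after PAM says the credentials are valid.
    const std::string& token() const { return token_; }

    // Step 5: every RPC handler calls this with the string the client sent.
    bool check(const std::string& presented) const { return presented == token_; }

private:
    static std::string makeToken() {
        static const char alphabet[] =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
        std::random_device rd;
        std::mt19937_64 gen(rd());
        std::uniform_int_distribution<int> pick(0, sizeof(alphabet) - 2);
        std::string t;
        for (int i = 0; i < 32; ++i) t += alphabet[pick(gen)];
        return t;
    }

    std::string token_;
};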
Possible problems that I can think of:
Currently, when the server goes down while the client is in the midst of some RPC, the client gets some error message, and when the server restarts we can do the task again without any problem.
Now, if the server goes down and restarts, the generated string will be different, so I will have to do the authentication part again.
So, what do you think of this entire scheme for authentication? Is there any better or simpler way?
P.S.: I am not using any kind of database. I am using C++ on both sides. My client side uses Qt.
Disclaimer - I do not have much idea as to how PAM works, so I only have some high-level questions about this approach. I apologize in advance if I misunderstood any part of your approach.
When you say you want to secure the data transfer, I feel like you want both authentication and secrecy, but you only have an approach for authentication now.
For instance, if client C1 is authenticating to the server (assuming credentials are not sent in cleartext), the server sends the random string in step 3. What happens when someone else is sniffing on the network? Can a rogue client not capture the random string and perform RPC calls to the server, posing as C1? If the username and password are sent to the server in cleartext, can someone on the network get access to the credentials as well? Also, what about the data that is subsequently sent? It is just encoded in Thrift format and can be decoded by anyone on the network, correct? Is the data sensitive?
If so, I want to suggest the use of PKI/certificates. Using a self-signed certificate should be fine. If you only want the client to authenticate to the server and prove it is legitimate, you can make all the clients present their certificates. A certificate is basically a public key for that client, signed by an authority that vouches for that client. The client has the private key stored locally, and it never leaves the client.
Now, when a client presents its certificate to the server, the server looks at who signed the certificate (the CA). If it is a CA the server trusts, it can send the random string, or just the Thrift data directly, encrypted with the client's public key. The client will be able to decrypt it with its private key, while to anyone else sniffing the network it looks like random bytes. The server does this for every client and only needs to store the name of the certifying authority it trusts; this could be your own name and address.
You can generate a key pair and self-signed certificate on every client using openssl, but this means you have additional setup work on each client. You can explore this approach if that constraint works for you.
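As a rough sketch of the server side with Thrift's built-in TLS support (exact class and method names vary a bit between Thrift versions; paths and the port are placeholders):

#include <thrift/transport/TSSLSocket.h>
#include <thrift/transport/TSSLServerSocket.h>
#include <memory>

using apache::thrift::transport::TSSLSocketFactory;
using apache::thrift::transport::TSSLServerSocket;

int main() {
    // Present the server's certificate and require a valid client certificate.
    auto factory = std::make_shared<TSSLSocketFactory>();
    factory->loadCertificate("server.crt");            // server's own certificate
    factory->loadPrivateKey("server.key");             // matching private key
    factory->loadTrustedCertificates("client-ca.crt"); // CA we accept client certs from
    factory->authenticate(true);                       // reject clients without a valid certificate

    // Use this in place of the plain TServerSocket when building the Thrift server.
    TSSLServerSocket serverSocket(9090, factory);
    (void)serverSocket;
    return 0;
}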
I'm attempting to write a simple HTTP/HTTPS proxy using Boost ASIO. HTTP is working fine, but I'm having some issues with HTTPS. For the record this is a local proxy. Anyway so here is an example of how a transaction works with my setup.
Browser asks for Google.com
I lie to the browser and tell it to go to 127.0.0.1:443
Browser socket connects to my local server on 443
I attempt to read the headers so I can do a real host lookup and open a second upstream socket so I can simply forward out the requests.
This is where things fail immediately. When I try to print out the headers of the incoming socket, it appears that they are already encrypted by the browser making the request. I thought at first that perhaps the jumbled console output was just that the headers were compressed, but after some thorough testing this is not the case.
So I'm wondering if anyone can point me in the right direction, perhaps to some reading material where I can better understand what is happening here. Why are the headers immediately encrypted before the connection to the "server" (my proxy) even completes and has a chance to communicate with the client? Is it a temp key? Do I need to ignore the initial headers and send some command back telling the client what temporary key to use or not to compress/encrypt at all? Thanks so much in advance for any help, I've been stuck on this for a while.
HTTPS passes all HTTP traffic, headers and all, over a secure SSL connection. This is by design to prevent exactly what you're trying to do which is essentially a man-in-the-middle attack. In order to succeed, you'll have to come up with a way to defeat SSL security.
One way to do this is to provide an SSL certificate that the browser will accept. There are a couple common reasons the browser complains about a certificate: (1) the certificate is not signed by an authority that the browser trusts and (2) the certificate common name (CN) does not match the URL host.
As long as you control the browser environment then (1) is easily fixed by creating your own certificate authority (CA) and installing its certificate as trusted in your operating system and/or browser. Then in your proxy you supply a certificate signed by your CA. You're basically telling the browser that it's okay to trust certificates that your proxy provides.
(2) will be more difficult because you have to supply the certificate with the correct CN before you can read the HTTP headers to determine the host the browser was trying to reach. Furthermore, unless you already know the hosts that might be requested you will have to generate (and sign) a matching certificate dynamically. Perhaps you could use a pool of IP addresses for your proxy and coordinate with your spoofing DNS service so that you know which certificate should be presented on which connection.
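If you do go down that road, minting a leaf certificate on the fly with the OpenSSL API looks roughly like this (the CA certificate/key and a pre-generated leaf key are assumed to be loaded already; error handling and extensions such as subjectAltName, which modern browsers require, are omitted):

#include <openssl/x509.h>
#include <openssl/evp.h>

// Create a certificate whose subject CN is the requested host, signed by our CA.
X509* make_cert_for_host(const char* host, X509* ca_cert, EVP_PKEY* ca_key,
                         EVP_PKEY* leaf_key) {
    X509* x = X509_new();
    X509_set_version(x, 2);                               // X509v3
    ASN1_INTEGER_set(X509_get_serialNumber(x), 1);        // real code needs unique serials
    X509_gmtime_adj(X509_get_notBefore(x), 0);
    X509_gmtime_adj(X509_get_notAfter(x), 60 * 60 * 24);  // valid for one day
    X509_set_pubkey(x, leaf_key);

    // Subject CN = the host the browser asked for; issuer = our CA.
    X509_NAME* name = X509_get_subject_name(x);
    X509_NAME_add_entry_by_txt(name, "CN", MBSTRING_ASC,
                               reinterpret_cast<const unsigned char*>(host),
                               -1, -1, 0);
    X509_set_issuer_name(x, X509_get_subject_name(ca_cert));

    X509_sign(x, ca_key, EVP_sha256());
    return x;
}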
Generally HTTPS proxies are not a good idea. I would discourage it because you'll really be working against the grain of browser security.
I liked this book as an SSL/TLS reference. You can use a tool like OpenSSL to create and sign your own certificates.