I'm familiar with classical encryption algorithms and the mathematics behind them, like RSA and ECC, but only out of interest; I'm not a specialist in this field. I'd like to start a long-term project, but since I'm not a cryptographer, it is very difficult to research this topic and get a clear, correct answer. I'm looking to use OpenSSL as a black box for this purpose.
My question: Does OpenSSL provide any post-quantum asymmetric algorithms for encryption and/or signatures?
If not, are there plans to support them in the future?
PS: Please note that I'm not asking for a software recommendation, as I understand that is off-topic. I'm asking about OpenSSL and its supported algorithms.
No, it does not.
However, you should monitor the Open Quantum Safe project, which develops liboqs, a library that works alongside OpenSSL with the aim of introducing post-quantum algorithms into OpenSSL itself.
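For a flavour of what liboqs offers, here is a minimal sketch of a key-encapsulation round trip. This is an illustration under assumptions, not OpenSSL API: the constant OQS_KEM_alg_kyber_512 and the availability of Kyber512 vary by liboqs release.

#include <oqs/oqs.h>
#include <cstdint>
#include <vector>

int main()
{
    // Assumes a liboqs build that ships Kyber512; algorithm names vary by release.
    OQS_KEM *kem = OQS_KEM_new(OQS_KEM_alg_kyber_512);
    if (kem == nullptr)
        return 1;

    std::vector<uint8_t> pk(kem->length_public_key), sk(kem->length_secret_key);
    std::vector<uint8_t> ct(kem->length_ciphertext);
    std::vector<uint8_t> ss_enc(kem->length_shared_secret), ss_dec(kem->length_shared_secret);

    OQS_KEM_keypair(kem, pk.data(), sk.data());                // recipient's key pair
    OQS_KEM_encaps(kem, ct.data(), ss_enc.data(), pk.data());  // sender derives a secret
    OQS_KEM_decaps(kem, ss_dec.data(), ct.data(), sk.data());  // recipient recovers it

    // ss_enc and ss_dec now hold the same shared secret.
    OQS_KEM_free(kem);
    return 0;
}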
In the OpenSSL documentation for SHA512, the following recommendation appears:
Applications should use the higher level functions EVP_DigestInit(3) etc. instead of calling the hash functions directly.
What is the reason for that? Is it safer? There is no explanation of why I should use it.
I want to compute a SHA512 hash, and according to this recommendation I should use the EVP_* functions instead of the SHA512_* functions. Or did I misunderstand?
#include <openssl/sha.h>

// Low-level API: SHA-512 of `size` bytes at `data`, written to `hash`.
SHA512_CTX m_context;
SHA512_Init(&m_context);
SHA512_Update(&m_context, data, size);
SHA512_Final(hash, &m_context);
#include <openssl/evp.h>

// High-level EVP API: same SHA-512 hash, with the context freed afterwards.
auto m_context = EVP_MD_CTX_create();
EVP_DigestInit_ex(m_context, EVP_get_digestbyname("sha512"), NULL);
EVP_DigestUpdate(m_context, data, size);
EVP_DigestFinal_ex(m_context, hash, NULL);
EVP_MD_CTX_destroy(m_context);
I asked the same thing and here is their answer
https://github.com/openssl/openssl/issues/12260
For a number of reasons. For example:
In some cases you get sub-optimal implementations with the low-level APIs vs the EVP APIs. An example from the "cipher" world (but the same concept applies to digests) is the low-level function AES_encrypt. If you call that you will never get the AESNI optimized version which may be available on your platform.
We want to encourage applications to use a consistent API for all types of digests/ciphers etc. This makes it much easier for everyone to update code as security advice changes. For example you mention the MD5 digest APIs. MD5 is no longer recommended. It's a lot harder to update your code to use some other digest if you've used the low-level APIs vs the high level ones. Looking into the future imagine some algorithm has some major flaw discovered in it that requires a rapid migration away from it to some other algorithm. We want the OpenSSL ecosystem to be as agile as possible to be able to deal with that.
The old APIs no longer fit architecturally with how OpenSSL 3.0 works. All algorithm implementations are now made available by "providers" in OpenSSL 3.0 - for example we have the "default" provider, the "fips" provider and a "legacy" provider. The low-level APIs circumvent all of that, which means we have to maintain two ways of doing everything (three, actually, when you bring ENGINEs into the picture as well). This leads to unnecessary code bloat and complexity... which is definitely something you want to avoid in a crypto library. Ideally we would have removed the old ways completely - but that would have been too big a breaking change.
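To make the agility argument concrete, here is a minimal sketch (mine, not from the issue) in which the digest is a runtime string, using the one-shot EVP_Digest() helper; migrating to another algorithm is then a one-string change:

#include <openssl/evp.h>
#include <cstddef>

// Hash `data` with whichever digest `name` selects ("sha512", "sha256", ...).
// Returns 1 on success, 0 on failure. `out` must hold EVP_MAX_MD_SIZE bytes.
// (On pre-1.1.0 OpenSSL you may need OpenSSL_add_all_digests() first.)
static int digest_by_name(const char *name, const unsigned char *data,
                          size_t size, unsigned char *out, unsigned int *outlen)
{
    const EVP_MD *md = EVP_get_digestbyname(name);
    if (md == NULL)
        return 0;                       // unknown or unavailable digest
    return EVP_Digest(data, size, out, outlen, md, NULL);
}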
Environment: the program is written in C/C++ for Ubuntu 16.04 - no cross-platform solution needed.
I am programming an HTTP network daemon, and I do not have /dev/urandom in the chroot, nor any other entropy source inside the chroot.
I know that for generating the key/certificate, OpenSSL definitely needs entropy. But once they are generated, and you are only using the key/certificate to encrypt client communications with your server, does the server daemon still need an entropy source?
Yes.
It needs entropy for generating nonces, and for some asymmetric signature schemes.
It is probably possible to securely protect client communications without an entropy source - but I would be extremely nervous about missing a crucial part of the protocol which needs that bit of entropy.
Also, if you want perfect forward secrecy, you will need entropy to generate the temporary [EC]DH keys.
Your choices are:
Consult an expert cryptographer to devise a protocol which requires no entropy (beyond the initial key). Make sure they can construct/point to a proof that the protocol is secure.
Get /dev/urandom in your chroot.
As James K Polk suggested in a comment: implement an entropy-gathering daemon in user space (see the sketch after this list). However, you then probably need to consult an expert to determine whether you have enough entropy.
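As a rough sketch of that last option's plumbing, assuming an EGD-style daemon listening on a hypothetical socket path (and noting that RAND_egd() is compiled out of some OpenSSL builds):

#include <openssl/rand.h>
#include <cstdio>

// Returns 1 if OpenSSL's CSPRNG is usable inside the chroot, 0 otherwise.
int ensure_seeded(void)
{
    if (RAND_status() == 1)
        return 1;                                   // already seeded
    // Try pulling entropy from an EGD socket (path is hypothetical).
    if (RAND_egd("/var/run/egd-pool") > 0 && RAND_status() == 1)
        return 1;
    std::fprintf(stderr, "CSPRNG not seeded; refusing to start TLS\n");
    return 0;
}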
Aside: When you say "encrypting client communications" I presume you are actually using some sort of authenticated encryption scheme (for example AES+HMAC or AES-GCM). If not, you probably have bigger problems than a lack of entropy.
If you have specific questions about whether your communication protocol needs additional entropy, https://crypto.stackexchange.com is full of people who would be happy to discuss the details of how to do it.
I am new to C++ and extremely surprised by the lack of accessible, common probability-manipulation tools (i.e. the lack of such things in Boost and the standard library). I've done a lot of scientific programming in other languages, where the standard and/or ubiquitous third-party add-ons always include a full range of probability tools. A friend billed Boost as the equivalent ubiquitous add-on for C++, but as I read the Boost documentation, even it seems to have a dearth of what I would consider extremely elementary built-ins.
I cannot find a built-in that takes some sort of array of discrete probabilities and produces an index chosen according to those probabilities. I can of course write my own function for this, but I just wanted to check whether I am missing a standard way to do it.
Having to write my own functions at such a low level feels like a bad thing, but I am writing a new simulation module for a larger project that is all in C++. My usual go-to tactic would be to write it in Python and link the Python to the C++, but since several other people will have to maintain this code once I finish it, and none of them know Python, I think it more prudent to deliver it all in C++.
More generally, what do people do in C++ for things like sampling from standard distributions, in particular something as basic as a multi-variate normal distribution?
Perhaps I'm misunderstanding your intention, but it seems to me what you want is simply std::discrete_distribution.
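For example, a minimal sketch with made-up weights:

#include <iostream>
#include <random>
#include <vector>

int main()
{
    // Weights need not sum to 1; std::discrete_distribution normalizes them.
    std::vector<double> weights{0.1, 0.2, 0.3, 0.4};
    std::mt19937 gen(std::random_device{}());
    std::discrete_distribution<int> pick(weights.begin(), weights.end());

    // Each draw is an index in [0, weights.size()), chosen with the given odds.
    for (int i = 0; i < 5; ++i)
        std::cout << pick(gen) << ' ';
    std::cout << '\n';
}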
(Moved from comment.)
Did you look at Boost.Math.StatisticalDistributions? Specifically, its Discrete Probability Distributions?
Boost is not a library, it's a collection of libraries, so it can sometimes be difficult to find exactly what you're looking for – but that doesn't mean it isn't there. ;-]
As mentioned, you'll want to look at boost/math/distributions and friends to meet your needs.
Here's a very good, detailed tutorial on getting these working for you in Boost. You may also want to throw your weight behind stan as well, which looks quite promising within this space.
Boost's math libraries are pretty good for working with different distributions, but if you are only interested in sampling (as in the problem you mentioned in your post), then looking at the boost Random libraries might be more germane to your task. This link shows how to simulate rolling a weighted die, for example.
You should do less C++ bashing and more question asking - we try to be helpful and respectful on SO. Questions like yours often get flagged as inflammatory.
Boost::math seems to provide exactly what you're looking for: https://www.quantnet.com/cplusplus-statistical-distributions-boost/ - I'm not 100% sure how well it handles multivariate distributions, though (nor am I an expert on statistics).
Get it here: http://www.boost.org/doc/libs/1_49_0/libs/math/doc/html/index.html
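On the multivariate question specifically: neither Boost.Math nor <random> ships a multivariate normal sampler, but the standard trick is compact enough to hand-roll. A sketch with an illustrative 2x2 covariance (the numbers are made up): factor Sigma = L L^T, draw z ~ N(0, I), and return mu + L z.

#include <cmath>
#include <iostream>
#include <random>

int main()
{
    const double mu[2] = {1.0, -1.0};
    // Lower-triangular Cholesky factor L of Sigma = [[1.0, 0.5], [0.5, 1.0]].
    const double L[2][2] = {{1.0, 0.0}, {0.5, std::sqrt(0.75)}};

    std::mt19937 gen(std::random_device{}());
    std::normal_distribution<double> stdnorm(0.0, 1.0);

    // x = mu + L z, where z has independent standard normal components.
    const double z0 = stdnorm(gen), z1 = stdnorm(gen);
    const double x0 = mu[0] + L[0][0] * z0;
    const double x1 = mu[1] + L[1][0] * z0 + L[1][1] * z1;
    std::cout << x0 << ' ' << x1 << '\n';
}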
I am writing an application which has an authenticity mechanism, using HMAC-SHA1, plus a CBC-Blowfish pass over the data for good measure. This requires two keys and one ivec.
I have looked at Crypto++, but the documentation is very poor (for example the HMAC documentation). So I am going old-school and using OpenSSL. What's the best way to generate and load these keys using library functions and tools? I don't require a secure socket, so an X.509 certificate probably does not make sense - unless, of course, I am missing something.
So, do I need to write my own config file, or is there any infrastructure in OpenSSL for this? If so, could you direct me to some documentation or examples?
Although it doesn't answer your question directly, if you are looking at this as a method of copy protection for your program, the following related questions may make for interesting reading.
Preventing the Circumvention of Copy Protection
What copy protection technique do you use?
Software protection by encryption
How do you protect your software from illegal distribution?
This is the solution I am going for at the moment, unless of course someone comes up with a better one, or one that solves my specific problem.
I will put three files (file1, file2, and file3) in /etc/acme/auth/: binary files with randomly generated bytes for the two keys and the ivec. I will do the same on Windows, but under c:\etc\acme\auth.
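The files themselves can be produced with the openssl rand command (e.g. openssl rand -out /etc/acme/auth/file1 16). Loading them back is then a plain binary read; a minimal sketch, where the 16-byte Blowfish key, 20-byte HMAC-SHA1 key, and 8-byte ivec are my assumptions about the scheme:

#include <fstream>
#include <stdexcept>
#include <vector>

// Read exactly `expected` raw bytes from a key file, or throw.
std::vector<unsigned char> read_key_file(const char *path, std::size_t expected)
{
    std::ifstream in(path, std::ios::binary);
    std::vector<unsigned char> buf(expected);
    if (!in.read(reinterpret_cast<char *>(buf.data()),
                 static_cast<std::streamsize>(buf.size())))
        throw std::runtime_error("short read or missing key file");
    return buf;
}

// Usage (sizes are my assumptions, not requirements of the algorithms):
//   auto cipher_key = read_key_file("/etc/acme/auth/file1", 16);
//   auto hmac_key   = read_key_file("/etc/acme/auth/file2", 20);
//   auto ivec       = read_key_file("/etc/acme/auth/file3", 8);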
I have a custom build of a Unix OS.
My task: adding IPsec to the OS.
I am working on Phase 1; I am done sending the first two packets.
What I am trying to do now is build the Identification Payload.
I've been reading RFC 2409 (Appendix B), which discusses the keying material (SKEYID, SKEYID_d, SKEYID_a, SKEYID_e and the IV generation).
Now, I use SHA-1 for authentication, and thus HMAC-SHA1; my encryption algorithm is AES-256.
The real problem is that the RFC is not clear enough about what I should do regarding the PRF.
It says:
"Use of negotiated PRFs may require the
PRF output to be expanded due to
the PRF feedback mechanism employed by
this document."
I use SHA-1; does that mean I do not negotiate a PRF?
In my opinion, AES is the only algorithm that needs expansion (a fixed key length of 256 bits), so do I need to expand only SKEYID_e?
If you happen to know a clearer, yet reliable, source than the RFC, please post a link.
You cannot negotiate a PRF based solely on RFC 2409, so don't worry about that. Three-key Triple-DES, AES-192, and AES-256 all require the key-expansion algorithm in Appendix B. Many implementations have these, so testing interoperability should not be that hard.
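For reference, the Appendix B feedback expansion is short enough to show. A sketch expanding SKEYID_e into a 32-byte AES-256 key with OpenSSL's one-shot HMAC(), taking HMAC-SHA1 as the PRF per the question (K1 = prf(SKEYID_e, 0), Kn = prf(SKEYID_e, Kn-1), Ka = K1 | K2 | ...):

#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <cstring>

// RFC 2409 Appendix B: expand SKEYID_e into `out_len` bytes of keying material.
// With HMAC-SHA1 as the PRF each feedback block is 20 bytes.
void expand_skeyid_e(const unsigned char *skeyid_e, size_t skeyid_e_len,
                     unsigned char *out, size_t out_len)
{
    unsigned char block[EVP_MAX_MD_SIZE];
    unsigned int block_len = 0;
    unsigned char zero = 0;
    size_t done = 0;

    // First block: HMAC keyed with SKEYID_e over a single zero octet.
    HMAC(EVP_sha1(), skeyid_e, skeyid_e_len, &zero, 1, block, &block_len);
    while (done < out_len) {
        size_t n = (out_len - done < block_len) ? out_len - done : block_len;
        std::memcpy(out + done, block, n);
        done += n;
        // Feedback: the next block is the PRF of the previous block.
        HMAC(EVP_sha1(), skeyid_e, skeyid_e_len, block, block_len, block, &block_len);
    }
}
// For AES-256: expand_skeyid_e(skeyid_e, 20, key, 32); two blocks suffice.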
The IETF RFCs are often not clear enough. However, they are written for the sole purpose of describing interoperability, so finding a reference implementation, either to explore its code or to test against it, is almost essential. Indeed, 2409 specifically notes:
The authors encourage independent implementation, and interoperability testing, of this hybrid protocol.
Finding another implementation is what you really need; finding someone else's source is better still. Failing that, read the bibliography. It has been said that some RFCs written by some firms intentionally obfuscate or simply hide the information needed to produce a conformant implementation, in order to build 'market advantage'. There is no royal road to understanding 2409.