Why does reading from /dev/random nearly always block? - c++

I'm using Kubuntu with kernel 2.6.38-12-generic.
I want to read 16 random numbers from /dev/random at the start of my program.
However, it blocks after a relatively short time.
How long does it take for the /dev/random entropy pool to fill, and why does it take so long?
I'm using this as a uuid generator with other sources of randomness added to seed
my mersenne twister. It's critical that I don't get duplicates or a duplicate seed.
If I change to /dev/urandom it works fine. Any views on using /dev/random over /dev/urandom?

You really should never use /dev/random. There are no known circumstances where the advantages of /dev/random over /dev/urandom matter, and the disadvantages are pretty obvious.
The difference is that /dev/urandom provides 'merely' cryptographically-secure random numbers while /dev/random provides truly random numbers (at least, that is what we believe). But there is no known situation where this difference matters and no known test that can distinguish "true" randomness from merely cryptographically-secure randomness.
I usually joke that /dev/urandom provides water and /dev/random provides holy water.

The man page, man 4 random, answers the question:
When read, the /dev/random device will only return random bytes
within the estimated number of bits of noise in the entropy pool.
/dev/random should be suitable for uses that need very high quality
randomness such as one-time pad or key generation. When the
entropy pool is empty, reads from /dev/random will block until
additional environmental noise is gathered.
I'm surprised people prefer asking to reading the man pages! You don't even need the Internet to read the man pages of your system.
BTW, as I commented, the entropy pool is fed by physical phenomena (depending on the hardware), e.g. mouse movements, key presses, Ethernet packets, etc. Some processors have a hardware random noise generator (e.g. the RDRAND machine instruction), and you can buy random USB devices (see also this list), etc. Hence reading from /dev/random could be expensive (or even blocking). You'll use it for high-quality randomness (e.g. as required by cryptographic keys) or, at initialization, for seeding your PRNG. You should expect /dev/random to have a relatively small bandwidth (a few kilobytes to at most a megabyte per second) and it could have a lot of latency (dozens of milliseconds, or even more). Details are of course computer specific.
Read also Thomas Hühn's Myths about /dev/urandom

Reading from /dev/random is non-deterministic, because all it does is fetch the requested number of bits from the random pool. It will block until it can read the requested number of bits.
/dev/urandom, however, is the kernel's PRNG, and can supply a near-infinite stream of pseudo-random numbers.

Related

Why is random device creation expensive?

C++11 supports pseudo-random number generation through the <random> library.
I've seen multiple books mention that it is quite expensive to keep constructing and destroying std::random_device, std::uniform_int_distribution<>, and std::uniform_real_distribution<> objects, and they recommend keeping a single copy of these objects in the application.
Why is it expensive to create/destroy these objects? What exactly is meant by expensive here? Is it expensive in terms of execution speed, or executable size, or something else?
Can someone please provide some explanation?
std::random_device may be expensive to create. It may open a device feeding it data, and that takes some time. It can also be expensive to call because it requires entropy to deliver a truly random number, not a pseudo random number.
uniform_int_distribution is never expensive to create. It's designed to be cheap. It does have internal state though, so in extreme situations, keep the same distribution between calls if you can.
Pseudo-random number generators (PRNGs), like default_random_engine or mt19937, are usually semi-cheap to create, but expensive to seed. How expensive this is largely depends on the size of the state they keep internally and on the seeding procedure. For example, the std::mersenne_twister_engine in the standard library keeps a "large" (leaning towards "huge") state, which makes it very expensive to seed.
Upside:
You usually only use one temporary random_device and call it once to seed one PRNG. So, one expensive creation and call to random_device and one expensive seeding of a PRNG.
The short answer is: it depends on the system and library implementation.
Types of standard libraries
In a fairytale world where you have the most shitty standard library implementation imaginable, random_device is nothing but a superfluous wrapper around std::rand(). I'm not aware of such an implementation, but someone may correct me on this.
On a bare-metal system, e.g. an embedded microcontroller, random_device may interact directly with a hardware random number generator or a corresponding CPU feature. That may or may not require an expensive setup, e.g. to configure the hardware, open communication channels, or discard the first N samples.
Most likely you are on a hosted platform, meaning a modern operating system with a hardware abstraction layer. Let's consider this case for the rest of this post.
Types of random_device
Your system may have a real hardware random number generator, for example the TPM module can act as one. See How does the Trusted Platform Module generate its true random numbers? Any access to this hardware has to go through the operating system, for example on Windows this would likely be a Cryptographic Service Provider (CSP).
Or your CPU may have one built in, such as Intel's rdrand and rdseed instructions. In this case a random_device that maps directly to these just has to discover that they are available and check that they are operational. rdrand, for example, can detect hardware failure, at which point the implementation may provide a fallback. See What are the exhaustion characteristics of RDRAND on Ivy Bridge?
However, since these features may not be available, operating systems generally provide an entropy pool to generate random numbers. If these hardware features are available, your OS may use them to feed this pool or provide a fallback once the pool is exhausted. Your standard library will most likely just access this pool through an OS-specific API.
That is what random_device is on all mainstream library implementations at the moment: an access point to the OS facilities for random number generation. So what is the setup overhead of these?
System APIs
A traditional POSIX (UNIX) operating system provides random numbers through the pseudo-devices /dev/random and /dev/urandom. So the setup cost is that of opening and closing this file. I assume this is what your book refers to.
Since this API has some downsides, newer APIs have popped up, such as Linux's getrandom. This one has no setup cost, but it may fail if the kernel does not support it, at which point a good library may fall back to /dev/urandom.
Windows libraries likely go through its crypto API: either the old CSP API CryptGenRandom or the new BCryptGenRandom. Both require a handle to a service or algorithm provider, so this may be similar to the /dev/urandom approach.
Consequences
In all these cases you will need at least one system call to access the RNG, and these are significantly slower than normal function calls. See System calls overhead. Even the rdrand instruction takes around 150 cycles per instruction. See What is the latency and throughput of the RDRAND instruction on Ivy Bridge? Or worse, see RDRAND and RDSEED intrinsics on various compilers?
A library (or user) may be tempted to reduce the number of system calls by buffering a larger number of random bytes, e.g. with buffered file I/O. This again would make opening and closing the random_device unwise, assuming this discards the buffer.
Additionally, the OS entropy pool has a limited size and can be exhausted, potentially causing the whole system to suffer (either by working with sub-par random numbers or by blocking until entropy is available again). This and the slow performance mean that you should not usually feed the random_device directly into a uniform_int_distribution or something like this. Instead use it to initialize a pseudo random number generator.
Of course this has exceptions. If your program needs just 64 random bytes over the course of its entire runtime, it would be foolish to draw the 2.5 kiB random state to initialize a mersenne twister, for example. Or if you need the best possible entropy, e.g. to generate cryptographic keys, then by all means, use it (or better yet, use a library for this; never reinvent crypto!)

Make random numbers without using time seed

I hope you are well!
I use random numbers that depend on time for my encryption.
But this makes my encryption hackable...
My question is: how can I generate random numbers that are different on every run, without using the time as a seed?
Most of the time we randomize like this:
srand ( ( unsigned ) time ( 0 ) );
cout << "the random number is: " << rand() % 11;
But we used the time here, and an attacker who knows roughly when the program ran can recover the seed and hence the random numbers.
srand and/or rand aren't suited to your use at all.
Although it's not 100% guaranteed, the C++ standard library provides a random_device that's at least intended to provide access to true random number generation hardware. It has a member named entropy() that's intended to give an estimate of the actual entropy available--but it can be implemented as a pseudo-random number generator, in which case this should return 0. Assuming entropy() returns a non-zero result, use random_device to retrieve as much random data as you need.
Most operating systems provide their own access to random number generation hardware. On Windows you'd typically use BCryptGenRandom. On UNIXesque OSes, you can read an appropriate number of bytes from /dev/random (there's also /dev/urandom, but it's non-blocking, so it could return data that's less random if there isn't enough entropy available immediately; that's probably fine if you're simulating rolling dice or shuffling cards in a game or something like that, but probably not acceptable for your purpose).
There are pseudo random number generators (PRNG) of very different quality and typical usage. Typical characteristics of PRNGs are:
The repeating period. This is the length of the sequence before it starts to repeat. For good generators it is enormous, typically on the order of two to the power of the state size in bits (for mt19937 it is 2^19937 - 1).
Hidden patterns. Some bad PRNGs have hidden patterns, which obviously decrease the quality of the random numbers. There are tests that can be used to quantify the quality in this regard. This property, as well as speed, matters mainly for scientific usage such as Monte Carlo simulation.
Cryptographic security. Cryptographic operations need special PRNGs designed for very specific use cases. You can read more about them, for example, on Wikipedia.
What I am trying to show you here is that the choice of the right PRNG is just as important as the right choice of seed. It turns out that the C rand() performs very badly in all three of the above categories (with the possible exception of speed). That means that if you seed rand() once and repeatedly call it, printing say rand() % 11, an attacker will be able to synchronize with the PRNG after a short period of time, even if you used the most secure and random seed. rand(), like most of the other, better random generators in the C++ standard library, was designed for scientific calculations and is not suitable for cryptographic purposes.
If you need cryptographically secure random numbers, I suggest you use a library built for exactly that purpose. A widely used cross-platform crypto library is OpenSSL. It includes a function called RAND_bytes() that can be used for secure random numbers. Do not forget to carefully read the manual page if you want to use it: you have to check the return value for errors!

Generate truly random numbers with C++ (Windows 10 x64)

I am trying to create a password generator. I used the random library, and after reading the documentation, I found out that rand() depends on an algorithm and a seed (srand()) to generate the random number. I tried using the current time as a seed (srand(time(0))), but that is not truly random. Is there a way to generate truly random numbers? Or maybe get the current time very accurately (like, in microseconds)? My platform is Windows 10 x64.
No, these are not truly random numbers. True randomness requires hardware support; such generators typically work by sampling an analog white-noise signal.
Probably you want to seed your random generator with an always-different value. A time() call would do it, or you can also combine it with the current pid (getpid()).
It is true that PCs cannot generate truly random numbers without dedicated hardware, but remember that each PC has at least one hardware random generator attached to it: that device is you, the user sitting in front of the computer. Humans are a very random thing. Each one has their own speed of key presses, mouse movement patterns, etc. This can be leveraged to your advantage.
On Linux you have a special device, /dev/random which uses exactly this. While you work on the PC, it collects random data (not security sensitive of course), such as how fast you tap the keyboard, and in addition hardware related data, such as intervals between interrupts, and some disk IO info. This all generates entropy, which is later used to generate pseudo random data, which is still not 100% random, but is based on much stronger random seed.
I am not an expert on Windows, but a quick search shows that Windows provides CryptGenRandom API, with somewhat similar functionality.
If you want to generate cryptographically strong random data, I suggest you start from this. Remember, this is still not as strong as dedicated hardware random generator. But this is good enough for most real world use cases.
If you are using rand() to generate a password as per your question then I suggest you use some kind of cryptographic method as they will be much stronger and collision-resistant than the numbers generated by rand(). Use something like Openssl or something similar to that kind.
If not, then truly random numbers, as suggested by 'peterh', need hardware support. Yet you can use much better functions than rand(), such as the Mersenne Twister. It is a better generator than rand(), which uses a simple linear congruential generator. Read about std::mt19937 here.

Can I generate cryptographically secure random data from a combination of random_device and mt19937 with reseeding?

I need to generate cryptographically secure random data in c++11 and I'm worried that using random_device for all the data would severely limit the performance (See slide 23 of Stephan T. Lavavej's "rand() Considered Harmful" where he says that when he tested it (on his system), random_device was 1.93 MB/s and mt19937 was 499 MB/s) as this code will be running on mobile devices (Android via JNI and iOS) which are probably slower than the numbers above.
In addition I'm aware that mt19937 is not cryptographically secure, from wikipedia: "observing a sufficient number of iterations (624 in the case of MT19937, since this is the size of the state vector from which future iterations are produced) allows one to predict all future iterations".
Taking all of the above information into account, can I generate cryptographically secure random data by generating a new random seed from random_device every 624 iterations of mt19937? Or (possibly) better yet, every X iterations where X is a random number (from random_device or mt19937 seeded by random_device) between 1 and 624?
Don't do this. Seriously, just don't. This isn't just asking for trouble--it's more like asking and begging for trouble by going into the most crime-ridden part of the most dangerous city you can find, carrying lots of valuables.
Instead of trying to re-seed MT 19937 often enough to cover up how insecure it is, I'd advise generating your random numbers by running AES in counter mode. This requires that you get one (but only one) good random number of the right size to use as your initial "seed" for your generator.
You use that as the key for AES, and simply use it to encrypt sequential numbers to get a stream of output that looks random, but is easy to reproduce.
This has many advantages. First, it uses an algorithm that's actually been studied heavily, and is generally believed to be quite secure. Second, it means you only need to distribute one (fairly small) random number as the "key" for the system as a whole. Third, it probably gives better throughput. Both Intel's and (seemingly) independent tests show a range of throughput that starts out competitive with what you're quoting for MT 19937 at the low end, and up to around 4-5 times faster at the top end. Given the way you're using it, I'd expect to see you get results close to (and possibly even exceeding1) the top end of the range they show.
Bottom line: AES in counter mode is obviously a better solution to the problem at hand. The very best you can hope for is that MT 19937 ends up close to as fast and close to as secure. In reality, it'll probably disappoint both those hopes, and end up both slower and substantially less secure.
1. How would it exceed those results? Those are undoubtedly based on encrypting bulk data--i.e., reading a block of data from RAM, encrypting it, and writing the result back to RAM. In your case, you don't need to read the result from RAM--you just have to generate consecutive numbers in the CPU, encrypt them, and write out the results.
The short answer is, no you cannot. The requirements for a cryptographically secure RNG are very stringent, and if you have to ask this question here, then you are not sufficiently aware of those requirements. As Jerry says, AES-CTR is one option, if you need repeatability. Another option, which does not allow repeatability, would be to look for an implementation of Yarrow or Fortuna for your system. In general it is much better to find a CSRNG in a library than to roll your own. Library writers are sufficiently aware of the requirements for a good CSRNG.

Should I use a random engine seeded from std::random_device or use std::random_device every time

I have a class that contains two sources of randomness.
std::random_device rd;
std::mt19937 random_engine;
I seed the std::mt19937 with a call to std::random_device. If I want to generate a number and I don't care about repeatability, should I call rd() or random_engine()?
In my particular case, I'm sure both would work just fine, because this is going to be called in some networking code where performance is not at a premium, and the results are not especially sensitive. However, I am interested in some "rules of thumb" on when to use hardware entropy and when to use pseudo-random numbers.
Currently, I am only using std::random_device to seed my std::mt19937 engine, and any random number generation I need for my program, I use the std::mt19937 engine.
edit: Here's an explanation for exactly what I am using this particular example for:
This is for a game playing program. This particular game allows the user to customize their 'team' prior to beginning a round against an opponent. Part of setting up a battle involves sending a team to the server. My program has several teams and uses the random number to determine which team to load. Each new battle makes a call to std::random_device to seed the pseudo-random number generator. I log the initial state of the battle, which includes this team that I'm randomly selecting and the initial seed.
The particular random number I'm asking about in this question is for the random team selection (where it is beneficial to not have the opponent know ahead of time what team I'm using, but not mission-critical), but what I'm really looking for is a rule of thumb. Is it fine to always use std::random_device if I don't need repeatability of my numbers, or is there a real risk of using up entropy faster than it can be collected?
The standard practice, as far as I am aware, is to seed the random number generator with a number that is not calculated by the computer (but comes from some external, unpredictable source). That should be the case with your rd() function. From then on, you call the pseudo-random number generator(PRNG) for each and every pseudo-random number that you need.
If you are worried about the numbers not being random enough, then you should pick a different PRNG. Entropy is a scarce and precious resource and should be treated as such. Although you may not need that many random numbers right now, you may in the future, or other applications could need them. You want that entropy to be available whenever an application asks for it.
It sounds like, for your application, that the mersenne twister will suit your needs just fine. No one who plays your game will ever feel like the teams that are loaded aren't random.
If you are not using it for encryption, it is perfectly fine to repeatedly use an mt19937 that was seeded once from random_device.
For the rest of this answer, I assume you are using the random numbers for encryption in your networking code. In short, mt19937 is not suitable for that use.
http://en.wikipedia.org/wiki/Mersenne_twister#Disadvantages
There is a potential risk that you will leak information (perhaps indirectly) over time, so that an attacker could start to predict the random numbers. At least in theory, but that is what this is about. From Wikipedia:
observing a sufficient number of iterates (624 in the case of MT19937,
since this figure is the size of the state vector from which future
iterates are produced) allows one to predict all future iterates.
A simple means of preventing information about the random number generation from leaking to the user is to use one-way hash functions, but there's much more to it. You should use a random number generator designed for that purpose:
http://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
Various examples (with code) are found here http://xlinux.nist.gov/dads/HTML/pseudorandomNumberGen.html
If you need randomness for a simulation or a game, then what you're doing is fine. Call the random device just once, and do everything else with a randomly seeded pseudo-RNG. As a bonus, you should store the seed value in a log file so you can later replay the pseudo-random sequence:
auto const seed = std::random_device()();
// save "seed" to log file
std::mt19937 random_engine(seed);
(For multiple threads, use the PRNG in the main thread to generate seeds for further PRNGs in the spawned threads.)
If you need a lot of true randomness for cryptographic purposes, then a PRNG is never a good idea, though, since a long sequence of output contains a lot less randomness than true randomness, i.e. you can predict all of it from a small subset. If you need true randomness, you should collect it from some unpredictable source (e.g. heat sensors, user keyboard/mouse activity, etc.). Unix's /dev/random may be such a "true randomness" source, but it may not fill up very quickly.
You may want to have a look at http://channel9.msdn.com/Events/GoingNative/2013/rand-Considered-Harmful, which explains why you should use uniform_int_distribution, and the relative strengths of random_device / mt19937.
In this video, Stephan T. Lavavej specifically states that on visual C++, random_device can be used for cryptographic purposes.
The answer is platform dependent. I seem to remember that with Visual C++ 2010, std::random_device is just mt19937 seeded in some undocumented way.
Of course you realize that any ad hoc encryption scheme based on a random number generator is likely to be very weak.
Assuming this is not for cryptographic purposes, the most important thing to remember about random number generation is to think of how you want the distribution of the random numbers to be and what is the range you are expecting.
Usually standard random number generators within libraries are designed to give a uniform distribution. So the numbers will range between 0 and some RAND_MAX (say, on a 32-bit machine, 2^31 - 1).
Now here is the thing to remember with pseudo-random number generators: most of them are designed to generate random numbers, not random bits. The difference is subtle. If you need numbers between 0 and 7, most programmers will write rand() % 8.
This is bad, because the algorithm was designed to randomize a whole 32-bit word, and you are using only the bottom 3 bits, which are often the weakest. It also introduces modulo bias whenever the range does not evenly divide RAND_MAX + 1. This will not give you a uniform distribution (assuming that is what you are looking for).
You should instead scale from the top, e.g. (int)(8.0 * rand() / (RAND_MAX + 1.0)), which yields a number between 0 and 7 using the high-order bits and is much closer to uniform.
Now with hardware random number generators you may have random bits being produced. If that is indeed the case, then you have each bit independently being generated. Then it is more like a set of identical independent random bits. The modeling here would have to be a bit different. Keep that in mind, especially in simulations the distribution becomes important.