What does binascii.hexlify(os.urandom(32)).decode() mean? - django

I'm trying to develop a function that refreshes the token model in Django REST Framework. It seems to use binascii.hexlify(os.urandom(32)).decode() to generate a unique token for every user. How does this line ensure that the token it generates will always be unique? Suppose I want to refresh the contents of a token every 10 months; will binascii.hexlify(os.urandom(32)).decode() generate a key that has not been used by any current user, or do I need to check whether it is already in use?

help(os.urandom) says:
Return a bytes object containing random bytes suitable for cryptographic use.
On Linux this will use the /dev/urandom character device, which is designed to be cryptographically secure. The only time it could fail to do so is in the very early stage of boot, when the entropy pool is not yet initialized [1]. But once the pool is initialized and seeded from the previous seed, device drivers and so on, it will generate cryptographic-grade randomness.
Also check man 4 urandom.
[1] The getrandom(2) system call exists for those cases; unlike reading from /dev/urandom, it blocks until the pool is initialized.
binascii.hexlify(os.urandom(32)).decode():
os.urandom(32) returns 32 bytes of random data
binascii.hexlify returns the hex representation of those bytes (a 64-character bytes object)
as hexlify returns bytes, we need to .decode() it to get a str
So, as the original random bytes are retrieved from os.urandom, this is (cryptographically) secure randomness.
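As for uniqueness: nothing checks for duplicates, but with 32 random bytes there are 2^256 possible tokens. By the birthday bound, the probability that any two of n generated tokens collide is roughly

P(collision) ≈ n*(n-1)/2 * 2^(-256)

which even for a billion users is on the order of 10^(-60). If you need a hard guarantee anyway, put a unique constraint on the token column and regenerate on the (practically impossible) collision.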

Related

When using Proof of Stake, why does a block header need to be signed by the miner?

I am reading the following article about PoS
http://earlz.net/view/2017/07/27/1904/the-missing-explanation-of-proof-of-stake-version
In this article, the author says
The block hash must be signed by the public key in the staking transaction's second vout
Why is this necessary? With PoS, the coinstake transaction's input comes from the miner, so the miner already provides an unlocking script on the inputs of the coinstake transaction. Why does she also need to sign the block?
A reference implementation can be found in PIVX:
class CBlock : public CBlockHeader
{
public:
    // network and disk
    std::vector<CTransaction> vtx;

    // ppcoin: block signature - signed by one of the coin base txout[N]'s owner
    std::vector<unsigned char> vchBlockSig;

where vchBlockSig stores the signature, produced by
key.Sign(block.GetHash(), block.vchBlockSig)
In PoW systems, block signing is not needed, because the block ID is generated by hashing the Merkle root of the payload transactions together with a nonce until the hash becomes less than the target.
If a PoS system took the analogous approach, a malicious minter could generate lots of attempts with different output hashes from the same kernel UTXO (the transaction output which mints coins) just by modifying the nonce and/or rearranging transactions in the Merkle tree; there are lots of combinations. In this way, he could reduce PoS to PoW (lots of hashing attempts over the same data).
To prevent such degradation, PoS cryptos (PPC, EMC, etc.) limit the number of attempts for any specific UTXO: the hash result (which is compared to a target) depends only on the kernel UTXO and the current time, and is independent of the nonce, the block payload, and so on. As a result, a PoS minter can make a single attempt for each matured UTXO only once per second.
But with this approach, the block content does not participate in the kernel hash that is compared to the target.
As a result, if the minter did not sign the block, a malicious actor could run the following attack: intercept a freshly-minted block from the network, modify the payload transactions, the Merkle tree and the block hash (for example, add a double-spend TX), and redistribute the modified block over the network. Such a block would contain a valid coinstake transaction (which spends the kernel UTXO), and would be accepted by network nodes.
To prevent this "modify the freshly-minted block on the fly" attack, the block is signed with the key of the kernel UTXO's address. By this signature, the minter provides proof that the block was created by the same minter who generated the coinstake TX.
Thus, with PoS, block generation works as follows (a toy sketch follows this list):
Find an appropriate kernel UTXO.
Generate a coinstake transaction, which sends the coins from the kernel UTXO's address back to itself.
Create a new block containing this coinstake TX and the payload TXes.
Sign this block with the coin address of the kernel UTXO.
In practice, it is enough to sign just the header, which contains the Merkle root.
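Below is a toy C++ sketch of that staking loop. The primitives are stand-ins: std::hash instead of SHA-256, no real signature, and the wallet contents, target, and proportional-stake rule are illustrative (loosely modeled on PPC-style kernels).

#include <cstdint>
#include <ctime>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct Utxo { std::string txid; uint32_t vout; uint64_t value; };

// The kernel hash depends only on the UTXO and the current second --
// no nonce to grind, so one attempt per matured UTXO per second.
uint64_t kernel_hash(const Utxo& u, std::time_t now) {
    return std::hash<std::string>{}(
        u.txid + ":" + std::to_string(u.vout) + ":" + std::to_string(now));
}

int main() {
    std::vector<Utxo> wallet = {{"aabbcc", 0, 5000}, {"ddeeff", 1, 12000}};
    const uint64_t target = UINT64_MAX / 1000;  // illustrative difficulty
    std::time_t now = std::time(nullptr);

    for (const Utxo& u : wallet) {
        // Bigger stake => proportionally easier target.
        if (kernel_hash(u, now) / u.value < target) {
            std::cout << "UTXO " << u.txid
                      << " may mint: build the coinstake TX, assemble the "
                         "block, sign the header with this UTXO's key\n";
        }
    }
}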

Double AES encryption with random key generation

I am using two embedded GSM devices that need to send data to each other. Suppose I need to send the string "boom" from one device to the other; it is a command and needs to be sent multiple times. The first thing I do is keep aes_key the same on both devices. Then I encrypt the input_data (that is, "boom") to get enc_buffer, and send enc_buffer over a socket to the other device. The other device has the same aes_key, and uses it to decrypt the received buffer into dec_buffer.

My doubt is: will the encrypted message enc_buffer be the same every time I send the encrypted text for "boom"? If it is the same, then I need to follow another approach. I have a first-level aes_key, which is constant. Then I need to generate a second-level aes_key, encrypt it, and send it over the socket. The receiving device decrypts it using the first-level aes_key to get the second-level aes_key and stores it. The first device then encrypts the string "boom" using the second-level aes_key and sends it over the socket; the second device decrypts the message using the second-level aes_key to get the text "boom".

But another problem is how to generate the second-level aes_key on the first device. Is there some random key generator API in Linux? Or can I use a random number generator API? I need a 10-character key; for that I would call the random number generator 10 times to generate numbers between 0 and 25, convert each to a character, and gather them together to get the desired key. I am using the AES code sample below, pasted for reference.
#include <openssl/aes.h>
#include <cstring>
#include <iostream>
using namespace std;

unsigned char aes_key[] = "asytfuhcilejnco";
unsigned char input_data[] = "Sandeep";
int data_size = strlen((char*)input_data);
int buffer_size = ((int)(data_size / AES_BLOCK_SIZE) + 1) * AES_BLOCK_SIZE;
AES_KEY enc_key, dec_key;
unsigned char iv[AES_BLOCK_SIZE];

int main()
{
    unsigned char enc_buffer[buffer_size + 1];
    unsigned char dec_buffer[buffer_size + 1];

    memset(iv,0x00,AES_BLOCK_SIZE);
    AES_set_encrypt_key(aes_key, sizeof(aes_key) * 8, &enc_key);
    AES_cbc_encrypt(input_data, enc_buffer, sizeof(input_data), &enc_key, iv, AES_ENCRYPT);
    enc_buffer[buffer_size+1]='\0';

    memset(iv,0x00,AES_BLOCK_SIZE);
    AES_set_decrypt_key(aes_key, sizeof(aes_key) * 8, &dec_key);
    AES_cbc_encrypt(enc_buffer, dec_buffer, sizeof(input_data), &dec_key, iv, AES_DECRYPT);
    dec_buffer[buffer_size+1]='\0';

    cout << "input_data=" << input_data << endl;
    cout << "enc_buffer=" << enc_buffer << endl;
    cout << "dec_buffer=" << dec_buffer << endl;
}
So, I have three questions:
Is encrypted data always the same for same input_data and aes_key?
Is there any random key generator API?
What systemcall is there for random numbers in Linux c++?
Is encrypted data always the same for same input_data and aes_key?
No; the ciphertext is the same only when the key, IV, and data are all the same. Since your code uses a fixed all-zero IV, identical plaintexts will produce identical ciphertexts.
Is there any random key generator API?
Yes, OpenSSL has one, for instance.
What systemcall is there for random numbers in Linux c++?
A "good" random number generator is /dev/urandom. Arguably better, if available, is /dev/hwrng, however it is different, not necessarily better. /dev/random is similar, possibly better to /dev/urandom but will block when sufficient entropy is not available.
In all cases you read these devices just as you would read data from a file.
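For instance, a minimal C++ sketch of reading key bytes from /dev/urandom this way (the 16-byte key size is an arbitrary choice for illustration):

#include <cstdio>
#include <fstream>

int main() {
    unsigned char key[16];  // 128-bit key

    // /dev/urandom is read exactly like an ordinary file.
    std::ifstream rng("/dev/urandom", std::ios::binary);
    if (!rng.read(reinterpret_cast<char*>(key), sizeof(key)))
        return 1;  // read failed

    for (unsigned char b : key) std::printf("%02x", b);
    std::printf("\n");
}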
enc_buffer[buffer_size+1]='\0'
Don't do that, this is a buffer overflow. The maximum index on both your buffers is [buffer_size].
memset(iv,0x00,AES_BLOCK_SIZE);
Don't do that either: using the same iv (initialization vector) guarantees that the ciphertext for the same plaintext will also be the same. For encryption, initialize the iv with random bits, as Claris mentioned, and send the IV along with the ciphertext. For decryption, pass in the same IV that was used for encryption; the value left in iv after the call can be ignored.
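For the key-generation part, here is a minimal sketch using OpenSSL's RAND_bytes(), which is available since the code above already links against OpenSSL. Note that 16 raw random bytes make a far stronger key than 10 random letters:

#include <openssl/rand.h>
#include <cstdio>

int main()
{
    unsigned char key[16];  // 128-bit second-level AES key
    unsigned char iv[16];   // fresh IV for every message

    // RAND_bytes() returns 1 on success, 0 if the CSPRNG is not seeded.
    if (RAND_bytes(key, sizeof(key)) != 1 || RAND_bytes(iv, sizeof(iv)) != 1)
        return 1;

    for (size_t i = 0; i < sizeof(key); ++i)
        printf("%02x", key[i]);
    printf("\n");
}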

Generation and storage of all DES keys

I'm writing a Data Encryption Standard "cracker" using C++ and CUDA. It is going to be a simple brute force: trying all possible keys to decrypt the encrypted data and checking whether the result equals the initial plain-text message.
The problem is that generating 2^56 keys takes time (and memory). My first approach was to generate the keys recursively and save them to a file.
Do you have any suggestions on how to improve this?
You don't really need recursion, nor do you need to store your keys.
The whole space of DES keys (if we don't count the dozen or so weak keys, which won't change anything for your purposes) is the space of 56-bit numbers (which, BTW, fit into a standard uint64_t), and you can just iterate through the numbers from 0 to 2^56-1, feeding the next number as a 56-bit key to a CUDA core whenever that core reports that it is done with the previous key.
Without the cores, the code could look like this:
for (uint64_t i = 0; i < (1ULL << 56); ++i) {  // i.e. 0 .. 2^56-1 inclusive
    uint8_t key[7];
    // endianness-agnostic conversion of i into a 56-bit key
    key[0] = (uint8_t)i;
    key[1] = (uint8_t)(i >> 8);
    key[2] = (uint8_t)(i >> 16);
    key[3] = (uint8_t)(i >> 24);
    key[4] = (uint8_t)(i >> 32);
    key[5] = (uint8_t)(i >> 40);
    key[6] = (uint8_t)(i >> 48);
    bool found = try_your_des_code(key, data_to_decrypt);  // your DES routine
    if (found) printf("Eureka!\n");
}
To allow restarting your program in case anything goes wrong, you only need to store this number i in persistent storage (such as a file). With cores, strictly speaking, the number i should be written to persistent storage only after all the numbers before it have already been processed by the CUDA cores, but in general a difference of 2000 or so keys won't make any difference performance-wise.
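A minimal sketch of that checkpointing (the file name is illustrative):

#include <cstdint>
#include <cstdio>

// Persist the last fully-processed key index; load it on restart.
void save_progress(uint64_t i) {
    if (FILE* f = fopen("des_progress.bin", "wb")) {
        fwrite(&i, sizeof i, 1, f);
        fclose(f);
    }
}

uint64_t load_progress() {
    uint64_t i = 0;
    if (FILE* f = fopen("des_progress.bin", "rb")) {
        if (fread(&i, sizeof i, 1, f) != 1) i = 0;
        fclose(f);
    }
    return i;  // 0 when no checkpoint exists yet
}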

How to determine length of buffer at client side

I have a server sending a multi-dimensional character array
char buff1[][3] = { {0xff,0xfd,0x18}, {0xff,0xfd,0x1e}, {0xff,0xfd,21} };
In this case buff1 carries 3 messages (each having 3 characters). There could be multiple instances of buffers on the server side with a variable number of messages (note: each message will always have 3 characters), e.g.
char buff2[][3] = { {0xff,0xfd,0x20}, {0xff,0xfd,0x27} };
How should I know the size of these buffers on the client side when compiling the code?
The server should send information about the length (and any other structure) of the message with the message as part of the message.
An easy way to do that is to send the number of bytes in the message first, then the bytes in the message. Often you also want to send the version of the protocol (so you can detect mismatches) and maybe even a message id header (so you can send more than one kind of message).
If blazing-fast performance isn't the goal (and you are talking over a network interface, which tends to be slow compared to the computers at either end, so parsing may be cheap enough that you don't care), using a higher-level protocol or format is sometimes a good idea (JSON, XML, whatever). This also helps with debugging, because instead of debugging your custom protocol, you get to debug the higher-level format.
Alternatively, you can send some sign that the sequence has terminated. If there is a value that is never a valid sequence element (such as 0,0,0), you could send that to say "no more data". Or you could send each element with a header saying if it is the last element, or the header could say that this element doesn't exist and the last element was the previous one.
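Here, for instance, is a sketch of the first (length-prefix) approach over a connected TCP socket; send_frames is a hypothetical helper, not an existing API:

#include <arpa/inet.h>   // htonl / ntohl
#include <sys/socket.h>
#include <cstdint>

// Send a 4-byte message count in network byte order, then the 3-byte
// messages themselves; the client reads the count first, then exactly
// count * 3 bytes.
bool send_frames(int sock, const char (*msgs)[3], uint32_t count) {
    uint32_t n = htonl(count);
    if (send(sock, &n, sizeof n, 0) != (ssize_t)sizeof n)
        return false;
    return send(sock, msgs, count * 3, 0) == (ssize_t)(count * 3);
}

For the first buffer above you would call send_frames(sock, buff1, 3); the client then knows to allocate and read exactly 9 bytes.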

portaudio/libsndfile framesperbuffer variable

Can anyone tell me what the PortAudio callback function variable framesPerBuffer is?
If I want to play an audio stream through Pa_WriteStream() with 64 bytes of data every iteration, what value should I put in framesPerBuffer?
Also, in the libsndfile library, the function for reading a wave file expects a variable named frames to be provided, i.e.
samples = sf_readf_float(file, fptr, frames);
If I put frames=256, then 64 samples are always returned in fptr and the rest are garbage, whereas the value returned from the read function is 256.
I have checked this through the following code:
memcpy(array, fptr, samples);  // samples is always 256, but only the first 64 contain data
Now array[0] to array[63] contain values and array[64] to array[255] contain null values on every iteration of the file read.
Now I have to write the data read to the PortAudio playing function; what should framesPerBuffer be filled in with?
Also, in some cases I need to process the data and the samples reduce to 32 (when I consume two samples to form one output sample); then what value should I put in the framesPerBuffer variable?
framesPerBuffer: The number of frames passed to the stream callback function, or the preferred block granularity for a blocking read/write stream. The special value paFramesPerBufferUnspecified (0) may be used to request that the stream callback receive an optimal (and possibly varying) number of frames based on host requirements and the requested latency settings. Note: with some host APIs, the use of a non-zero framesPerBuffer for a callback stream may introduce an additional layer of buffering, which could introduce additional latency. PortAudio guarantees that the additional latency will be kept to the theoretical minimum; however, it is strongly recommended that a non-zero framesPerBuffer value only be used when your algorithm requires a fixed number of frames per stream callback.
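In short, framesPerBuffer counts frames, not bytes: a frame is one sample per channel, so 64 bytes of 16-bit stereo audio is 16 frames, while 64 mono float samples are 64 frames. Below is a minimal blocking-write sketch; the mono, paFloat32, 44100 Hz parameters are illustrative assumptions, not taken from the question:

#include <portaudio.h>
#include <vector>

int main() {
    Pa_Initialize();

    PaStream* stream = nullptr;
    // No callback (last two arguments are null): a blocking read/write stream.
    Pa_OpenDefaultStream(&stream, 0 /*inputs*/, 1 /*outputs*/, paFloat32,
                         44100, paFramesPerBufferUnspecified,
                         nullptr, nullptr);
    Pa_StartStream(stream);

    std::vector<float> buf(64);  // 64 mono float samples == 64 frames
    // Pa_WriteStream takes the number of FRAMES, not bytes:
    Pa_WriteStream(stream, buf.data(), buf.size());

    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
}

With blocking I/O you simply pass however many frames you actually have on each call, so if processing reduces 64 samples to 32, you call Pa_WriteStream with 32.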