Blowfish encryption with C++ code in a Cocoa project

I would like to encrypt some text with Bruce Schneier's Blowfish class, as converted to C++ by Jim Conger. After the encryption I would like to decrypt the encrypted text. To try this out, I am using files. I've created a sample project, but the decrypted file doesn't contain the same text as the initial file. What could be the problem?
Here's the download link for the blowfish class files.
I've created a command-line tool project in Xcode and changed the main.m file to main.mm. Here are the contents of my main.mm file:
#import "blowfish.h"
#include <stdlib.h>
#include <stdio.h>
#define my_fopen(fileptr, filename, mode) \
    fileptr = fopen(filename, mode); \
    if (fileptr == NULL) { \
        fprintf(stderr, "Error: Couldn't open %s.\n", filename); \
        exit(1); \
    }
const char *input_file_name = "test.txt";
const char *encoded_file_name = "encoded.txt";
const char *decoded_file_name = "decoded.txt";
unsigned char key[] = "thisisthekey";
int main(void) {
    FILE *infile, *outfile;
    int result, filesize;
    const int n = 8; // make sure this is a multiple of 8
    const int size = 1;
    unsigned char input[n], output[n];

    CBlowFish bf;
    bf.Initialize(key, sizeof(key) - 1); // subtract 1 to not count the null terminator

    my_fopen(infile, input_file_name, "rb")
    my_fopen(outfile, encoded_file_name, "wb")

    filesize = 0;
    while (result = fread(input, size, n, infile)) {
        filesize += result;
        fwrite(output, size, bf.Encode(input, output, result), outfile);
    }
    fclose(outfile);
    fclose(infile);

    my_fopen(infile, encoded_file_name, "rb")
    my_fopen(outfile, decoded_file_name, "wb")

    while (result = fread(input, size, n, infile)) {
        bf.Decode(input, output, result);
        fwrite(output, sizeof(output[0]), filesize < result ? filesize : result, outfile);
        filesize -= result;
    }
    fclose(outfile);
    fclose(infile);

    return 0;
}

You're using a block cipher with padding (look at the source code to CBlowFish::Encode) to encrypt a stream. You can't do that because the decryption operation will have no way to know what constitutes a padded chunk that it should decrypt.
For example, say you're encrypting "FOOBAR", but you read "FOO" the first time and this encrypts to "XYZZY". Then you encrypt "BAR" to "ABCDE". Your written file will contain "XYZZYABCDE". But is that "XY" "ZZYA" "BCDE"? Or one block, "XYZZYABCDE" or what?
If you want to encrypt a stream, use a stream cipher. Or if you want to cut it into arbitrary blocks, you have to preserve the output block boundaries so you can decrypt the blocks.

You MUST encode/decode corresponding blocks of data. fread() and fwrite() don't return the same number of bytes (result), so your plain-text data blocks and your cipher-text data blocks are not aligned.
Define the data block length (say 64 bytes) and stick to it when encoding and decoding.
Otherwise use a stream cipher, which uses "data blocks" of 1 byte ;)
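For illustration, here is a rough sketch of that fixed-chunk idea using the CBlowFish interface from the question (Initialize/Encode/Decode, with Encode returning the number of encoded bytes it produced). The exact padding behaviour of Jim Conger's class may differ, so treat this as an outline rather than a drop-in fix: each chunk is stored with its plaintext and ciphertext lengths, so the decryption loop always reads back exactly one encoded chunk at a time.
#include <cstdio>
#include <cstdint>
#include "blowfish.h"

const int kChunk = 64; // plaintext chunk size, a multiple of 8

void encryptFile(FILE *in, FILE *out, CBlowFish &bf)
{
    unsigned char plain[kChunk], cipher[kChunk + 8];
    std::uint32_t plainLen, cipherLen;
    while ((plainLen = (std::uint32_t)fread(plain, 1, kChunk, in)) > 0) {
        cipherLen = (std::uint32_t)bf.Encode(plain, cipher, plainLen);
        // record both lengths so decryption knows how many bytes belong
        // to this chunk and how many of them are real data
        fwrite(&plainLen, sizeof plainLen, 1, out);
        fwrite(&cipherLen, sizeof cipherLen, 1, out);
        fwrite(cipher, 1, cipherLen, out);
    }
}

void decryptFile(FILE *in, FILE *out, CBlowFish &bf)
{
    unsigned char plain[kChunk + 8], cipher[kChunk + 8];
    std::uint32_t plainLen, cipherLen;
    while (fread(&plainLen, sizeof plainLen, 1, in) == 1 &&
           fread(&cipherLen, sizeof cipherLen, 1, in) == 1 &&
           fread(cipher, 1, cipherLen, in) == cipherLen) {
        bf.Decode(cipher, plain, cipherLen);
        fwrite(plain, 1, plainLen, out); // drop the padding bytes
    }
}
The lengths are written in host byte order here; a real format would pin down the endianness, but the point is only that the encoded block boundaries are preserved.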

Related

base64 encoding removing carriage return from dos header

I have been trying to encode the binary data of an application as base64 (specifically boost's base64), but I have run into an issue where the carriage return after the DOS header is not being encoded correctly.
It should look like this:
This program cannot be run in DOS mode.[CR]
[CR][LF]
but instead it outputs this:
This program cannot be run in DOS mode.[CR][LF]
It seems this first carriage return is being skipped, which then causes the DOS header to be invalid when attempting to run the program.
The code for the base64 algorithm I am using can be found at: https://www.boost.org/doc/libs/1_66_0/boost/beast/core/detail/base64.hpp
Thanks so much!
void load_file(const char* filename, char** file_out, size_t& size_out)
{
    FILE* file;
    fopen_s(&file, filename, "r");
    if (!file)
        return false;
    fseek(file, 0, SEEK_END);
    size = ftell(file);
    rewind(file);
    *out = new char[size];
    fread(*out, size, 1, file);
    fclose(file);
}

void some_func()
{
    char* file_in;
    size_t file_in_size;
    load_file("filename.bin", &file_in, file_in_size);

    auto encoded_size = base64::encoded_size(file_in_size);
    auto file_encoded = new char[encoded_size];
    memset(0, file_encoded, encoded_size);
    base64::encode(file_encoded, file_in, file_in_size);

    std::ofstream orig("orig.bin", std::ios_base::binary);
    for (int i = 0; i < file_in_size; i++)
    {
        auto c = file_in[i];
        orig << c; // DOS header contains a NULL as the 3rd char, don't allow it to be null terminated early, may cause ending nulls but does not affect binary files.
    }
    orig.close();

    std::ofstream encoded("encoded.txt"); // pass this output through a base64-to-file website.
    encoded << file_encoded; // for loop not required, does not contain nulls (besides ending null), will contain trailing encoded nulls.
    encoded.close();

    auto decoded_size = base64::decoded_size(encoded_size);
    auto file_decoded = new char[decoded_size];
    memset(0, file_decoded, decoded_size); // again trailing nulls but it doesn't matter for binary file operation, just wasted disk space.
    base64::decode(file_decoded, file_encoded, encoded_size);

    std::ofstream decoded("decoded.bin", std::ios_base::binary);
    for (int i = 0; i < decoded_size; i++)
    {
        auto c = file_decoded[i];
        decoded << c;
    }
    decoded.close();

    free(file_in);
    free(file_encoded);
    free(file_decoded);
}
The above code will show that the file reading does not remove the carriage return, while the encoding of the file into base64 does.
Okay thanks for adding the code!
I tried it, and indeed there was "strangeness", even after I simplified the code (mostly to make it C++, instead of C).
So what do you do? You look at the documentation for the functions. That seems complicated since, after all, detail::base64 is, by definition, not part of the public API and is "undocumented".
However, you can still read the comments at the functions involved, and they are pretty clear:
/** Encode a series of octets as a padded, base64 string.
    The resulting string will not be null terminated.
    @par Requires
    The memory pointed to by `out` points to valid memory
    of at least `encoded_size(len)` bytes.
    @return The number of characters written to `out`. This
    will exclude any null termination.
*/
std::size_t
encode(void* dest, void const* src, std::size_t len)
And
/** Decode a padded base64 string into a series of octets.
    @par Requires
    The memory pointed to by `out` points to valid memory
    of at least `decoded_size(len)` bytes.
    @return The number of octets written to `out`, and
    the number of characters read from the input string,
    expressed as a pair.
*/
std::pair<std::size_t, std::size_t>
decode(void* dest, char const* src, std::size_t len)
Conclusion: What Is Wrong?
Nothing about "DOS headers" or "carriage returns". Perhaps something about "rb" in fopen (what's the difference between r and rb in fopen), but why even use that:
template <typename Out> Out load_file(std::string const& filename, Out out) {
    std::ifstream ifs(filename, std::ios::binary); // or "rb" on your fopen
    ifs.exceptions(std::ios::failbit |
                   std::ios::badbit); // we prefer exceptions
    return std::copy(std::istreambuf_iterator<char>(ifs), {}, out);
}
The real issue is: your code ignored all return values from encode/decode.
The encoded_size and decoded_size values are estimates that give you enough space to store the result, but you have to correct them to the actual sizes after performing the encoding/decoding.
Here's my fixed and simplified example. Notice how the md5sums check out:
Live On Coliru
#include <boost/beast/core/detail/base64.hpp>
#include <fstream>
#include <iostream>
#include <vector>

namespace base64 = boost::beast::detail::base64;

template <typename Out> Out load_file(std::string const& filename, Out out) {
    std::ifstream ifs(filename, std::ios::binary); // or "rb" on your fopen
    ifs.exceptions(std::ios::failbit |
                   std::ios::badbit); // we prefer exceptions
    return std::copy(std::istreambuf_iterator<char>(ifs), {}, out);
}

int main() {
    std::vector<char> input;
    load_file("filename.bin", back_inserter(input));

    // allocate "enough" space, using an upperbound prediction:
    std::string encoded(base64::encoded_size(input.size()), '\0');

    // encode returns the **actual** encoded_size:
    auto encoded_size = base64::encode(encoded.data(), input.data(), input.size());
    encoded.resize(encoded_size); // so adjust the size

    std::ofstream("orig.bin", std::ios::binary)
        .write(input.data(), input.size());

    std::ofstream("encoded.txt") << encoded;

    // allocate "enough" space, using an upperbound prediction:
    std::vector<char> decoded(base64::decoded_size(encoded_size), 0);

    auto [decoded_size, // decode returns the **actual** decoded_size
          processed]    // (as well as number of encoded bytes processed)
        = base64::decode(decoded.data(), encoded.data(), encoded.size());
    decoded.resize(decoded_size); // so adjust the size

    std::ofstream("decoded.bin", std::ios::binary)
        .write(decoded.data(), decoded.size());
}
When run on "itself" using
g++ -std=c++20 -O2 -Wall -pedantic -pthread main.cpp -o filename.bin && ./filename.bin
md5sum filename.bin orig.bin decoded.bin
base64 -d < encoded.txt | md5sum
it prints:
d4c96726eb621374fa1b7f0fa92025bf filename.bin
d4c96726eb621374fa1b7f0fa92025bf orig.bin
d4c96726eb621374fa1b7f0fa92025bf decoded.bin
d4c96726eb621374fa1b7f0fa92025bf -

OpenSSL in C: after second decryption in application run, the first 16 bytes of result are garbage

I implemented simple file encryption/decryption with OpenSSL in C according to the instructions here. I do not need this to be truly secure (I just want the files to not be readily readable on the drive); the keys are hardcoded in the application, and after reading the encrypted files from the drive I decrypt them.
On the first call, the decryptFileAsBytes function returns the correct decrypted file as a byte vector. On the second call (within the same application run), the first 16 bytes of the result are garbage and the rest is correct. Does this have something to do with the size of the key (128 bits) I am using?
static bool decryptFileAsBytes(std::string filename, unsigned char *ckey, unsigned char *ivec, std::vector<unsigned char> &fileBytes)
{
    std::ifstream ifs(filename, std::ios::binary | std::ios::ate);
    if (ifs.fail())
        return false;
    std::ifstream::pos_type pos = ifs.tellg();
    fileBytes.resize(pos);
    ifs.close();

    FILE *ifp;
    if (fopen_s(&ifp, filename.c_str(), "rb") != NULL)
        return false;

    int bytesRead;
    unsigned char indata[AES_BLOCK_SIZE];
    unsigned char *writePtr = fileBytes.data();

    /* data structure that contains the key itself */
    AES_KEY key;

    /* set the encryption key */
    AES_set_encrypt_key(ckey, 128, &key);

    /* set where on the 128 bit encrypted block to begin encryption */
    int num = 0;

    while (1)
    {
        bytesRead = fread(indata, 1, AES_BLOCK_SIZE, ifp);
        AES_cfb128_encrypt(indata, writePtr, bytesRead, &key, ivec, &num, AES_DECRYPT);
        writePtr += bytesRead;
        if (bytesRead < AES_BLOCK_SIZE)
            break;
    }

    if (fclose(ifp) != NULL)
        return false;
    return true;
}
As an alternative to solving this, I welcome suggestions for a simple solution to the problem stated above ('encrypt' a file on the drive in a not-bulletproof way so that it is not readily readable but the application can decrypt it).
The problem is likely that you're not retaining the original initialization vector for subsequent decryption operations.
As the AES encryption/decryption operations proceed, that memory is updated so that subsequent frames can be processed. If you instrument your code you'll see that, with each encrypt/decrypt frame passing through the API, the ivec changes.
If all you're doing this for is obfuscation (e.g. you have a static key in your application), my suggestion is to do the following:
Don't pass the ivec into either the encryptor or decryptor.
Instead, generate a random ivec using RAND_bytes when encrypting. Store the ivec as the first block of data before continuing with the file content.
When decrypting, read the first block of data to prime your ivec.
Then, decrypt the remainder of the file as normal.
The benefits are:
Each encryption of a file will create a different byte representation, dependent on the initial random ivec. E.g. if you encrypt a file twice, the resulting encrypted bytes will not be the same.
You no longer have to use a static ivec from somewhere else in your code. The file contains it as the first block of data.
Just a suggestion. Unrelated: I prefer the EVP encryption interface, and suggest it's worth a look.
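For illustration, here is a minimal sketch of that ivec-in-the-first-block approach, using the same AES_cfb128_encrypt interface as the question (error handling omitted; the key handling is only illustrative):
#include <openssl/aes.h>
#include <openssl/rand.h>
#include <cstdio>

static void encryptFile(FILE *in, FILE *out, const unsigned char *ckey)
{
    unsigned char ivec[AES_BLOCK_SIZE];
    RAND_bytes(ivec, sizeof ivec);      // fresh random IV for every encryption
    fwrite(ivec, 1, sizeof ivec, out);  // store it as the first block of the file

    AES_KEY key;
    AES_set_encrypt_key(ckey, 128, &key);

    unsigned char inbuf[AES_BLOCK_SIZE], outbuf[AES_BLOCK_SIZE];
    int num = 0;
    size_t n;
    while ((n = fread(inbuf, 1, sizeof inbuf, in)) > 0) {
        AES_cfb128_encrypt(inbuf, outbuf, n, &key, ivec, &num, AES_ENCRYPT);
        fwrite(outbuf, 1, n, out);
    }
}

static void decryptFile(FILE *in, FILE *out, const unsigned char *ckey)
{
    unsigned char ivec[AES_BLOCK_SIZE];
    if (fread(ivec, 1, sizeof ivec, in) != sizeof ivec)  // prime the IV from the file
        return;

    AES_KEY key;
    AES_set_encrypt_key(ckey, 128, &key);  // CFB uses the encrypt key schedule both ways

    unsigned char inbuf[AES_BLOCK_SIZE], outbuf[AES_BLOCK_SIZE];
    int num = 0;
    size_t n;
    while ((n = fread(inbuf, 1, sizeof inbuf, in)) > 0) {
        AES_cfb128_encrypt(inbuf, outbuf, n, &key, ivec, &num, AES_DECRYPT);
        fwrite(outbuf, 1, n, out);
    }
}
Because each call starts from a fresh, locally initialized ivec, decrypting the same file twice in one application run no longer corrupts the first 16 bytes.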

AES_cfb128_encrypt doesn't decrypt all bytes in encrypted file

So I am using the following code in C++ with OpenSSL.
I got this from another SO thread.
int bytes_read, bytes_written;
unsigned char indata[AES_BLOCK_SIZE];
unsigned char outdata[AES_BLOCK_SIZE];

/* ckey and ivec are the two 128-bit keys necessary to
   en- and decrypt your data. Note that ckey can be
   192 or 256 bits as well */
unsigned char ckey[] = "thiskeyisverybad";
unsigned char ivec[] = "dontusethisinput";

/* data structure that contains the key itself */
AES_KEY key;

/* set the encryption key */
AES_set_encrypt_key(ckey, 128, &key);

/* set where on the 128 bit encrypted block to begin encryption */
int num = 0;

FILE *ifp = fopen("out.txt", "r");
FILE *ofp = fopen("orig.txt", "w");

while (true) {
    bytes_read = fread(indata, 1, AES_BLOCK_SIZE, ifp);
    AES_cfb128_encrypt(indata, outdata, bytes_read, &key, ivec, &num,
                       AES_DECRYPT); // or AES_ENCRYPT
    bytes_written = fwrite(outdata, 1, bytes_read, ofp);
    if (bytes_read < AES_BLOCK_SIZE) {
        std::cout << bytes_read << std::endl;
        break;
    }
}
fclose(ifp);
fclose(ofp);
What I am doing is encrypting a file 'test.txt' by passing AES_ENCRYPT to AES_set_encrypt_key first and then trying to decrypt the same file. The encrypted file is stored as out.txt.
I decrypt by using the code above. My issue is that only 454 bytes of data seem to get decrypted. The data that is decrypted is correct, but not all of it gets decrypted. I tried a test file of less than 454 bytes, which worked fine, but using an 8 KB file, a 14 KB file, etc. always results in only 454 bytes being decrypted. However, the size of the encrypted file is correct (i.e. an ~14 KB encrypted file for a 14 KB test file).
Making the ivec an empty string allows me to decrypt 545 bytes of encrypted text instead.
What am I doing wrong?
Okay I managed to find a solution after looking through some open source implementations.
The issue is that I was using fopen to read/write in text mode rather than binary mode.
The fix:
FILE *ifp = fopen("out.txt", "rb");
FILE *ofp = fopen("orig.txt", "wb");

Error when using byte array C++

I'm trying to read a standard 24-bit BMP file into a byte array so that I can send that byte array to libpng to be saved as a png. My code, which compiles:
#include <string>
#include <stdio.h>
#include <iostream>
#include <fstream>
#include <Windows.h>
#include "png.h"

using namespace std;

namespace BMP2PNG {

long getFileSize(FILE *file)
{
    long lCurPos, lEndPos;
    lCurPos = ftell(file);
    fseek(file, 0, 2);
    lEndPos = ftell(file);
    fseek(file, lCurPos, 0);
    return lEndPos;
}

private: System::Void button1_Click(System::Object^ sender, System::EventArgs^ e)
{
    std::string filenamePNG = "D:\\TEST.png";
    FILE *fp = fopen(filenamePNG.c_str(), "wb");

    png_structp png_ptr = png_create_write_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
    png_info *info_ptr = png_create_info_struct(png_ptr);
    png_init_io(png_ptr, fp);
    png_set_IHDR(png_ptr, info_ptr, 1920, 1080, 16, PNG_COLOR_TYPE_RGB, PNG_INTERLACE_NONE, PNG_COMPRESSION_TYPE_BASE, PNG_FILTER_TYPE_BASE);
    png_write_info(png_ptr, info_ptr);
    png_set_swap(png_ptr);

    const char *inputImage = "G:\\R-000.bmp";
    BYTE *fileBuf;
    BYTE *noHeaderBuf;
    FILE *inFile = NULL;

    inFile = fopen(inputImage, "rb");
    long fileSize = getFileSize(inFile);
    fileBuf = new BYTE[fileSize];
    noHeaderBuf = new BYTE[fileSize - 54];
    fread(fileBuf, fileSize, 1, inFile);
    for (int i = 54; i < fileSize; i++) // gets rid of 54-byte bmp header
    {
        noHeaderBuf[i - 54] = fileBuf[i];
    }
    fclose(inFile);

    png_write_rows(png_ptr, (png_bytep*)&noHeaderBuf, 1);
    png_write_end(png_ptr, NULL);
    fclose(fp);
}
};
Unfortunately, when I click the button that runs the code, I get an error "Attempted to read or write protected memory...". I'm very new to C++, but I thought I was reading in the file correctly. Why does this happen and how do I fix it?
Also, my end goal is to read a BMP one pixel row at a time so I don't use much memory. If the BMP is 1920x1080, I just need to read 1920 x 3 bytes for each row. How would I go about reading a file into a byte array n bytes at a time?
Your getFileSize() method is not actually returning the file size. You're basically moving to the correct position in the BMP header, but instead of actually reading the next 4 bytes that represent the size, you're returning the position in the file (which will always be 2).
Then in the caller function you don't have any error checking and you have code that assumes the file size is always greater than 54 (the allocations for the read buffers for example).
Also keep in mind that the file size field in the BMP header might not always be correct, you should also take into account the actual file size.
You are reading the file size of your *.bmp file, but the "real" data can be larger. BMP can have compression (RLE). When you then write the decompressed image into that array, you can overflow it, because you previously obtained the size of the compressed BMP file.
In the function
png_set_IHDR(png_ptr,info_ptr,1920,1080,16,PNG_COLOR_TYPE_RGB,PNG_INTERLACE_NONE,PNG_COMPRESSION_TYPE_BASE,PNG_FILTER_TYPE_BASE);
why do you have the bit depth set to 16? Shouldn't it be 8, since each RGB channel from a BMP is 8 bits?
Also for PNG handling, I am using this library: http://lodev.org/lodepng/. It works fine.
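As a rough sketch of the row-at-a-time reading asked about in the question, combined with the 8-bit depth fix (this reuses png_ptr, info_ptr and inFile from the question's handler; note that real BMP data is stored bottom-to-top, in BGR order, and each row is padded to a multiple of 4 bytes, although 1920 * 3 happens to be one already):
const int width = 1920, height = 1080;

png_set_IHDR(png_ptr, info_ptr, width, height, 8, PNG_COLOR_TYPE_RGB,
             PNG_INTERLACE_NONE, PNG_COMPRESSION_TYPE_BASE, PNG_FILTER_TYPE_BASE);
png_write_info(png_ptr, info_ptr);

fseek(inFile, 54, SEEK_SET);     // skip the 54-byte BMP header
BYTE *row = new BYTE[width * 3]; // one row: 1920 * 3 bytes
for (int y = 0; y < height; ++y)
{
    if (fread(row, 1, width * 3, inFile) != (size_t)(width * 3))
        break;                   // short read: stop early
    // a complete converter would also swap B/R and flip the row order here
    png_write_row(png_ptr, row);
}
delete[] row;
png_write_end(png_ptr, NULL);
This way only one row of pixels is ever held in memory instead of the whole file.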

Can you help with creating a zip archive with the LZMA (7zip) SDK?

I am trying to use the LZMA SDK to create a zip archive (either .zip or .7z format). I've downloaded and built the SDK, and I just want to use the DLL exports to compress or decompress a few files. When I use the LzmaCompress method, it returns 0 (SZ_OK) as if it worked correctly. However, after I write the buffer to a file and try to open it, I get an error that the file cannot be opened as an archive.
Here is the code I am currently using. Any suggestions would be appreciated.
#include "lzmalib.h"
typedef unsigned char byte;
using namespace std;
int main()
{
int length = 0;
char *inBuffer;
byte *outBuffer = 0;
size_t outSize;
size_t outPropsSize = 5;
byte * outProps = new byte[outPropsSize];
fstream in;
fstream out;
in.open("c:\\temp\\test.exe", ios::in | ios::binary);
in.seekg(0, ios::end);
length = in.tellg();
in.seekg(0, ios::beg);
inBuffer = new char[length];
outSize = (size_t) length / 20 * 21 + ( 1 << 16 ); //allocate 105% of file size for destination buffer
if(outSize != 0)
{
outBuffer = (byte*)malloc((size_t)outSize);
if(outBuffer == 0)
{
cout << "can't allocate output buffer" << endl;
exit(1);
}
}
in.read(inBuffer, length);
in.close();
int ret = LzmaCompress(
outBuffer, /* output buffer */
&outSize, /* output buffer size */
reinterpret_cast<byte*>(inBuffer),/* input buffer */
length, /* input buffer size */
outProps, /* archive properties out buffer */
&outPropsSize,/* archive properties out buffer size */
5, /* compression level, 5 is default */
1<<24,/* dictionary size, 16MB is default */
-1, -1, -1, -1, -1/* -1 means use default options for remaining arguments */
);
if(ret != SZ_OK)
{
cout << "There was an error creating the archive." << endl;
exit(1);
}
out.open("test.zip", ios::out | ios::binary);
out.write(reinterpret_cast<char*>(outBuffer), (int)(outSize));
out.close();
delete inBuffer;
delete outBuffer;
}
I do not know about LZMA specifically, but from what I know of compression in general, it looks like you are writing a compressed bit stream to a file without any header information that would let a decompression program know how the bit stream is compressed.
The LzmaCompress() function probably writes this information to outProps. There should be another function in the SDK that will take the compressed bit stream in outBuffer and the properties in outProps and create a proper archive from them.
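For example, here is a minimal sketch of keeping the props together with the compressed data so it can be decompressed again with LzmaUncompress, which is declared next to LzmaCompress in the SDK's lzmalib.h and takes the props buffer as an argument. Note this produces a simple props + size + data layout, not a .zip or .7z archive; building a real .7z container requires the SDK's higher-level archive code. Variable names are reused from the question:
// after LzmaCompress has filled outBuffer/outSize and outProps/outPropsSize:
out.open("test.lzma", ios::out | ios::binary);
out.write(reinterpret_cast<char*>(outProps), outPropsSize);  // the 5 props bytes
unsigned long long originalSize = length;                    // remember the uncompressed size
out.write(reinterpret_cast<char*>(&originalSize), sizeof originalSize);
out.write(reinterpret_cast<char*>(outBuffer), (int)outSize);
out.close();

// decompressing later: read props, size and data back into the same variables, then:
size_t destLen = (size_t)originalSize;
size_t srcLen = outSize;
byte *decoded = new byte[destLen];
int res = LzmaUncompress(decoded, &destLen, outBuffer, &srcLen, outProps, outPropsSize);
// res == SZ_OK and destLen == originalSize if everything round-tripped
delete[] decoded;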