I am trying to read a simple BMP file and, without performing any operations on it, write it back out to a file.
I don't know whether the mistake is in reading the file or in writing it back.
I have added padding handling while reading as well as while writing.
-- File-Read --
std::vector<char> tempImageData;
/*tempImageData.resize(m_bmpInfo->imagesize);
file.seekg(m_bmpHeader->dataoffset);
file.read(&tempImageData[0], m_bmpInfo->imagesize);
file.close();*/
tempImageData.resize(m_bmpInfo->imagesize);
int padding = 0;
while (((m_bmpInfo->width*3 + padding) % 4) != 0)
    padding++;
for(unsigned int i = 0 ; i < m_bmpInfo->height ; i++)
{
    file.seekg(m_bmpHeader->dataoffset + i*(m_bmpInfo->width*3 + padding));
    file.read(&tempImageData[i*m_bmpInfo->width*3], i*m_bmpInfo->width*3);
}
file.close();
//bitmaps are stored as BGR -- let's convert to RGB
assert(m_bmpInfo->imagesize % 3 == 0);
for (auto i = tempImageData.begin(); i != tempImageData.end(); i += 3)
{
    m_data_red.push_back(*(i+2));
    m_data_green.push_back(*(i+1));
    m_data_blue.push_back(*(i+0));
}
-- File-Write --
file.write(reinterpret_cast<const char*>(m_bmpHeader), sizeof(BITMAPFILEHEADER));
file.write(reinterpret_cast<const char*>(m_bmpInfo), sizeof(BITMAPINFOHEADER));
// this is wrong.. format asks for bgr.. we are putting all r, all g, all b
std::vector<char> img;
img.reserve(m_data_red.size() + m_data_green.size() + m_data_blue.size());
for(unsigned int i = 0 ; i < m_data_red.size() ; i++)
{
    img.push_back(m_data_blue[i]);
    img.push_back(m_data_green[i]);
    img.push_back(m_data_red[i]);
}
char bmppad[3] = {0};
for(unsigned int i = 0 ; i < m_bmpInfo->height ; i++)
{
    // maybe something is wrong
    file.write(reinterpret_cast<const char*>(&img[i*m_bmpInfo->width*3]), m_bmpInfo->width * 3 * sizeof(unsigned char));
    file.write(bmppad, 1 * ((4-(m_bmpInfo->width*3)%4)%4) * sizeof(char));
}
file.close();
But the results are weird.
(Output image and input image were attached here for comparison.)
As the padding is added to every row, I think you need to change this line:
file.seekg(m_bmpHeader->dataoffset + i*m_bmpInfo->width*3 + padding);
to this:
file.seekg(m_bmpHeader->dataoffset + i*(m_bmpInfo->width*3 + padding));
It also might be easier to save the calculated padding rather than calculating it in two different ways.
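For example, a hedged one-liner equivalent to the while loop in the question, computed once and shared between the read and the write paths:
const int padding = (4 - (m_bmpInfo->width * 3) % 4) % 4; // padding bytes per row, always 0..3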
Edit:
Without all the code to debug through, it is a bit hard to pinpoint, but there is an error on this line:
file.read(&tempImageData[i*m_bmpInfo->width*3], i*m_bmpInfo->width*3);
You should not have the i* part in the amount you are reading. It means that at row 200 you read 200 rows' worth of data into the array, possibly overwriting the end of the array once you are more than halfway through the image, which is interesting given your output.
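A minimal sketch of the corrected read loop, using the same member names and the padding value already computed in the question's read code:
for (unsigned int i = 0; i < m_bmpInfo->height; i++)
{
    // seek to the start of padded row i in the file
    file.seekg(m_bmpHeader->dataoffset + i * (m_bmpInfo->width * 3 + padding));
    // read exactly one row of pixel data: width * 3 bytes, with no i* multiplier
    file.read(&tempImageData[i * m_bmpInfo->width * 3], m_bmpInfo->width * 3);
}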
I've got a function that I worked on a while ago in C++ that does AES-256 encryption using CBC.
I wrote the method a while ago, so I unfortunately can't remember where I got the code from, but I've since found a problem: if the string being encrypted is over 16 characters, only the first 16 characters of the string get encrypted.
The code is below:
string Encryption::encryptOrDecrypt(string stringToEncrypt, Mode mode)
{
    HelperMethods helperMethods;
    try
    {
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        EVP_CIPHER_CTX_init(ctx);
        unsigned char key[33] = "my_key";
        unsigned char iv[17] = "my_iv";
        if (mode == Decrypt)
        {
            stringToEncrypt = helperMethods.base64Decode(stringToEncrypt);
        }
        vector<unsigned char> encrypted;
        size_t max_output_len = stringToEncrypt.length() + 16 - (stringToEncrypt.length() % 16);
        //size_t max_output_len = 16 - (stringToEncrypt.length() % 16);
        encrypted.resize(max_output_len);
        EVP_CipherInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv, mode);
        // EVP_CipherUpdate can encrypt all your data at once, or you can do
        // small chunks at a time.
        int actual_size = 0;
        EVP_CipherUpdate(ctx,
                         &encrypted[0], &actual_size,
                         reinterpret_cast<unsigned char *>(&stringToEncrypt[0]), stringToEncrypt.size());
        // EVP_CipherFinal_ex is what applies the padding. If your data is
        // a multiple of the block size, you'll get an extra AES block filled
        // with nothing but padding.
        int final_size = 0;
        EVP_CipherFinal_ex(ctx, &encrypted[actual_size], &final_size);
        actual_size += final_size;
        encrypted.resize(actual_size);
        char * buff_str = (char*)malloc(encrypted.size() * 2 + 1);
        memset(buff_str, 0, (encrypted.size() * 2 + 1));
        char * buff_ptr = buff_str;
        size_t index = 0;
        for (index = 0; index < encrypted.size(); ++index)
        {
            buff_ptr += sprintf(buff_ptr, "%c", encrypted[index]);
        }
        EVP_CIPHER_CTX_cleanup(ctx);
        EVP_CIPHER_CTX_free(ctx);
        if (mode == Encrypt)
        {
            string encryptedString = buff_str;
            free(buff_str);
            return helperMethods.base64Encode(encryptedString.c_str(), encryptedString.length());
        }
        else
        {
            string decryptedString = buff_str;
            free(buff_str);
            return decryptedString;
        }
    }
    catch (exception ex)
    {
        return stringToEncrypt;
    }
}
I've stepped through the code to see where the problem might be and I can't see any reason why; it's probably something stupidly simple.
The size_t max_output_len is 768 (the actual string is 752), so that's +16 for padding, I believe, but I'm not sure whether this should be 768 as well; as the method sets this itself, I can't tell.
The int final_size whose address gets passed into EVP_CipherFinal_ex becomes 16 when that call completes, and actual_size is also 768, so I'm not sure where the problem is.
I think I've figured out the problem, although I'm not 100% sure why this works and the original way doesn't.
Looking at the contents of the buffer while debugging, I don't believe it's a null byte, but I've changed the code to the following, which seems to resolve it.
Instead of
string encryptedString = buff_str;
I have changed it to the following
int i = 0;
stringstream encryptedStringStream;
for (i = 0; i < index; i++)
{
    encryptedStringStream << buff_str[i];
}
string encryptedString = encryptedStringStream.str();
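For what it's worth, a likely explanation for the truncation is that string encryptedString = buff_str; treats the buffer as a NUL-terminated C string, so the copy stops at the first zero byte in the (binary) ciphertext. A hedged alternative with the same effect as the loop above is to construct the string with an explicit length, or to build it straight from the encrypted vector and skip buff_str entirely:
// keep embedded zero bytes by passing the length explicitly
string encryptedString(buff_str, index);
// or, bypassing the intermediate buffer altogether:
string encryptedString(reinterpret_cast<const char*>(encrypted.data()), encrypted.size());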
I am currently trying to decompress targa (RGB24_RLE) image data.
My algorithm looks like this:
static constexpr size_t kPacketHeaderSize = sizeof(char);
//http://paulbourke.net/dataformats/tga/
inline void DecompressRLE(unsigned int a_BytePerPixel, std::vector<CrByte>& a_In, std::vector<CrByte>& a_Out)
{
    for (auto it = a_In.begin(); it != a_In.end();)
    {
        //Read packet header
        int header = *it & 0xFF;
        int count = (header & 0x7F) + 1;
        if ((header & 0x80) != 0) //packet type
        {
            //For the run length packet, the header is followed by
            //a single color value, which is assumed to be repeated
            //the number of times specified in the header.
            auto paStart = it + kPacketHeaderSize;
            auto paEnd = paStart + a_BytePerPixel;
            //Insert packets into output buffer
            for (size_t pk = 0; pk < count; ++pk)
            {
                a_Out.insert(a_Out.end(), paStart, paEnd);
            }
            //Jump to next header
            std::advance(it, kPacketHeaderSize + a_BytePerPixel);
        }
        else
        {
            //For the raw packet, the header is followed by
            //the number of color values specified in the header.
            auto paStart = it + kPacketHeaderSize;
            auto paEnd = paStart + count * a_BytePerPixel;
            //Insert packets into output buffer
            a_Out.insert(a_Out.end(), paStart, paEnd);
            //Jump to next header
            std::advance(it, kPacketHeaderSize + count * a_BytePerPixel);
        }
    }
}
It is called here:
//Read compressed data
std::vector<CrByte> compressed(imageSize);
ifs.seekg(sizeof(Header), std::ifstream::beg);
ifs.read(reinterpret_cast<char*>(compressed.data()), imageSize);
//Decompress
std::vector<CrByte> decompressed(imageSize);
DecompressRLE(bytePerPixel, compressed, decompressed);
imageSize is defined like this:
size_t imageSize = hd.width * hd.height * bytePerPixel;
However, after DecompressRLE() finishes (which takes a very long time with 2048x2048 textures), decompressed is still empty/only contains zeros. Maybe I am missing something.
count seems to be unreasonably high sometimes, which I think is abnormal.
compressedSize should be less than imageSize, otherwise it's not compressed. However, using ifstream::tellg() gives me wrong results.
Any help?
If you look carefully at your variables in your debugger, you will see that std::vector<CrByte> decompressed(imageSize); declares a vector with imageSize elements in it. In DecompressRLE you then insert at the end of that vector, causing it to grow. This is why your decompressed image is full of zeros, and also why it takes so long (the vector is reallocated periodically as it grows).
What you want to do is reserve the space:
std::vector<CrByte> decompressed;
decompressed.reserve(imageSize);
Your compressed buffer looks like it is larger than the file content, so you're still decompressing past the end of the file. The compressed file size should be in Header. Use it.
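If the compressed size isn't readily available from the header, a hedged alternative (assuming the stream was opened with std::ios::binary and that everything after the header is compressed pixel data, i.e. no image ID field or footer) is to size the buffer from the file itself:
ifs.seekg(0, std::ifstream::end);
const std::streamoff fileSize = ifs.tellg();
const std::size_t compressedSize = static_cast<std::size_t>(fileSize) - sizeof(Header);
ifs.seekg(sizeof(Header), std::ifstream::beg);

std::vector<CrByte> compressed(compressedSize);
ifs.read(reinterpret_cast<char*>(compressed.data()), compressedSize);

std::vector<CrByte> decompressed;
decompressed.reserve(imageSize); // capacity only, so DecompressRLE's insert() appends from the start
DecompressRLE(bytePerPixel, compressed, decompressed);
Binary mode also tends to make the tellg() arithmetic behave as expected here.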
After getting my first problem solved at C++ zLib compress byte array, I faced another problem.
void CGGCBotDlg::OnBnClickedButtonCompress()
{
    // TODO: Add your control notification handler code here
    z_const char hello[256];
    hello[0] = 0x0A;
    hello[1] = 0x0A;
    hello[2] = 0x0A;
    hello[3] = 0x0A;
    hello[4] = 0x0A;
    hello[5] = PKT_END;
    hello[6] = PKT_END;
    hello[7] = PKT_END;
    Byte compr[256];
    uLong comprLen = sizeof(compr);
    int ReturnCode;
    ReturnCode = Compress(compr, comprLen, hello, Z_DEFAULT_COMPRESSION);
    g_CS.Send(&compr, comprLen);
}
int CGGCBotDlg::Compress(Byte Compressed[], uLong CompressedLength, CHAR YourByte[], int CompressionLevel)
{
    int zReturnCode;
    int Len;
    for (int i = 0 ; i <= 10240 ; i++)
    {
        if (YourByte[i] == PKT_END && YourByte[i+1] == PKT_END && YourByte[i+2] == PKT_END)
        {
            Len = i - 1;
            break;
        }
    }
    uLong Length = (uLong)(sizeof(YourByte) * 1.0001) + 12;
    zReturnCode = compress2(Compressed, &Length, (const Bytef*)YourByte, Len, CompressionLevel);
    return zReturnCode;
}
I'm trying to compress hello[], which is actually 5 bytes (I want to compress the first 5 bytes).
The expected result after compressing is: 0x78, 0x9C, 0xE3, 0xE2, 0x02, 0x02, 0x06, 0x00, 0x00, 0xCE, 0x00, 0x33
But what I get after compressing is just the first 4 bytes of the expected result; the other bytes are something else.
And my second problem is that I want to replace the 256 in Byte compr[256] with the exact number of bytes the buffer compresses to (which is 12 in my case).
Would be great if someone could correct me.
Thanks
This line is wrong:
Len = i - 1;
because when i is 5, you do Len = i - 1;, so Len will be 4, but you want to compress 5 bytes. Just use:
Len = i;
Another problem: comprLen is never assigned a value. In Compress(Byte Compressed[], uLong CompressedLength, ...), CompressedLength is not used; I assume you want its value back. You should declare it like this:
Compress(Byte Compressed[], uLong& CompressedLength, ...)
and change the compress2 line to use CompressedLength instead of Length:
zReturnCode = compress2(Compressed, &CompressedLength, (const Bytef*)YourByte, Len, CompressionLevel);
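Putting both fixes together, a minimal sketch of the corrected helper under the same assumptions as the question (a PKT_END-terminated buffer and zlib's compress2; zlib's compressBound() is also a safer way to size the destination than the sizeof(YourByte) estimate, since YourByte decays to a pointer there):
int CGGCBotDlg::Compress(Byte Compressed[], uLong& CompressedLength, CHAR YourByte[], int CompressionLevel)
{
    int Len = 0;
    for (int i = 0; i <= 10240; i++)
    {
        if (YourByte[i] == PKT_END && YourByte[i+1] == PKT_END && YourByte[i+2] == PKT_END)
        {
            Len = i; // i bytes of payload precede the PKT_END terminator
            break;
        }
    }
    // CompressedLength is in/out: it goes in as the capacity of Compressed and
    // comes back as the actual compressed size.
    return compress2(Compressed, &CompressedLength, (const Bytef*)YourByte, Len, CompressionLevel);
}
With the reference parameter, the caller's comprLen comes back as the real compressed size (12 in your case), so g_CS.Send(&compr, comprLen) sends only the compressed bytes instead of all 256.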
This should be the final part of my integer class, and it seems like it should be very easy, and yet something is wrong. Is this code correct for multiplication using two deques?
// 0x12345 = {0x01, 0x23, 0x45}
integer operator*(integer rhs){
    // long multiplication
    unsigned int zeros = 0;
    std::deque <uint8_t> row;
    std::deque <std::deque <uint8_t> > temp;
    integer out = 0;
    for(std::deque <uint8_t>::reverse_iterator i = value.rbegin(); i != value.rend(); i++){
        row = std::deque <uint8_t>(zeros++, 0); // zeros on the right hand side
        uint8_t carry = 0;
        for(std::deque <uint8_t>::reverse_iterator j = rhs.value.rbegin(); j != rhs.value.rend(); j++){
            uint16_t prod = (uint16_t(*i) * uint16_t(*j)) + carry; // multiply through
            row.push_front((uint8_t) prod);
            carry = prod >> 8;
        }
        if (carry != 0)
            row.push_front(carry);
        out += integer(row);
    }
    return out;
}
It is giving me 4931550625 ^ 2 -> 24248133972899962689. Assuming that operator+ is correct, which it seems to be, is there some other explanation of why this is wrong?
Edit: I updated the code according to wxffles, but I think I did it wrong, since I'm still getting 2424..., and for 0x25 * 0x25 I'm getting 89 (decimal).
Edit 2: the correct code is posted.
I think you are missing the last carry. Do you not need:
row.push_front(carry);
just before you add the row to out?
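To see why that last carry matters, take a single-digit example in the same base-256 representation: 0xFF * 0xFF = 0xFE01. The inner loop leaves row = {0x01} and carry = 0xFE, so without the trailing row.push_front(carry) the high byte is dropped and the product collapses to 1; with it, row becomes {0xFE, 0x01}, i.e. 0xFE01.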
I'm having some seriously strange trouble writing multiple arrays of data to a file. Basically, I want to store all the array sizes at the top of the file, with the array data following. That way I can just read the sizes, use them to construct arrays to hold the data on import, and know exactly where each array begins and ends.
Here's the problem: I write the data, but it's different on import. Please take a look at my little test code. At the bottom there are comments about the values.
Thank you very much, fellow programmers! :)
#include <iostream>
#include <fstream>
int main()
{
    int jcount = 100, // First item in file
        kcount = 200,
        in_jcount, // Third item in file. jcount is used to find where this ends.
        in_kcount;
    float *j = new float[jcount],
          *k = new float[kcount],
          *in_j,
          *in_k;
    for(int i = 0; i < jcount; ++i) // Write bologna data...
        j[i] = (float)i;
    for(int i = 0; i < kcount; ++i)
        k[i] = (float)i;
    std::ofstream outfile("test.dat");
    outfile.write((char*)&jcount, sizeof(int)); // Good
    outfile.tellp();
    outfile.write((char*)&kcount, sizeof(int)); // Good
    outfile.tellp();
    outfile.write((char*)j, sizeof(float) * jcount); // I don't know if this works!
    outfile.tellp();
    outfile.write((char*)k, sizeof(float) * kcount); // I don't know if this works!
    outfile.tellp();
    outfile.close();
    std::ifstream in("test.dat");
    in.read((char*)&in_jcount, sizeof(int)); // == jcount == 100, good.
    in.read((char*)&in_kcount, sizeof(int)); // == kcount == 200, good.
    in_j = new float[in_jcount],
    in_k = new float[in_kcount]; // Allocate arrays the exact size of what it should be
    in.read((char*)in_j, sizeof(float) * in_jcount); // This is where it goes bad!
    in.read((char*)in_k, sizeof(float) * in_kcount);
    float jtest_min = j[0], // 0.0
          jtest_max = j[jcount - 1], // this is 99.
          ktest_min = k[0], // 0.0
          ktest_max = k[kcount - 1], // this is 200. Why? It should be 199!
          in_jtest_min = in_j[0], // 0.0
          in_jtest_max = in_j[in_jcount - 1], // 99
          in_ktest_min = in_k[0], // 0.0
          in_ktest_max = in_k[in_kcount - 1]; // MIN_FLOAT, should be 199. What is going on here?
    in.close();
    delete[] k;
    delete[] j;
    delete[] in_j;
    delete[] in_k;
}
There's nothing obviously wrong with this code (indeed, I don't see the errors you're encountering when I try running it), except for the fact that you are not checking for errors opening the input/output files.
For example, if you don't have permission to write to "test.dat", the open will silently fail, and you'll read back in whatever happened to be in the file before.
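A hedged sketch of the kind of check being suggested, using the streams from the question:
std::ofstream outfile("test.dat");
if (!outfile.is_open()) // open failed (permissions, path, read-only file, ...)
{
    std::cerr << "could not open test.dat for writing\n";
    return 1;
}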
I had the same bug; I fixed it by opening the files in binary mode:
ofstream outfile;
outfile.open ("test.dat", ios::out | ios::binary);
and
ifstream in;
in.open ("test.dat", ios::in | ios::binary);