I'm trying to read the header from a PNG file.
The result should be
Dec: 137 80 78 71 13 10 26 10
Hex: 89 50 4E 47 0D 0A 1A 0A
However, I get
Dec: 4294967 80 78 71 13 10 26 10
What am I doing wrong?
Code:
char T;
pngFile = fopen(Filename, "rb");
if(pngFile)
{
    fread(&T, 1, 1, pngFile);
    fclose(pngFile);
    printf("T: %u\n", T);
}
137 is too big for a signed char: the byte 0x89 is stored as a negative value and gets sign-extended when it is promoted for printf, which is why you see the huge number. Use unsigned char instead; see the documented limits of the fundamental data types.
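A minimal sketch of the fix (the file name is a placeholder): read the full 8-byte signature into an unsigned char buffer so no sign extension happens.
#include <cstdio>

int main(void)
{
    unsigned char sig[8];                      // the PNG signature is 8 bytes
    FILE *pngFile = fopen("image.png", "rb");  // "image.png" is a placeholder name
    if (pngFile)
    {
        if (fread(sig, 1, sizeof sig, pngFile) == sizeof sig)
        {
            printf("Dec:");
            for (int i = 0; i < 8; i++)
                printf(" %u", sig[i]);         // prints 137 80 78 71 13 10 26 10
            printf("\nHex:");
            for (int i = 0; i < 8; i++)
                printf(" %02X", sig[i]);
            printf("\n");
        }
        fclose(pngFile);
    }
    return 0;
}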
I'm trying to figure out how to write something in C++ that will decrypt a rotating XOR applied to packet bytes of varying sizes, with a known 10-byte key (the key can be ASCII or hex, whichever is easier).
For example:
XOR 10-byte key in hex: 41 30 42 44 46 4c 58 53 52 54
XOR 10-byte key in ascii: A0BDFLXSRT
Here is "This is my message in clear text" in hex that needs to be decrypted with above key
15 58 2B 37 66 25 2B 73 3F 2D 61 5D 27 37 35 2D 3F 36 72 3D 2F 10 21 28 23 2D 2A 73 26 31 39 44
I need a way to apply my XOR key over the top of these bytes like this in a rotating fashion:
41 30 42 44 46 4c 58 53 52 54 41 30 42 44 46 4c 58 53 52 54 41 30 42 44 46 4c 58 53 52 54 41 30
15 58 2B 37 66 25 2B 73 3F 2D 61 5D 27 37 35 2D 3F 36 72 3D 2F 10 21 28 23 2D 2A 73 26 31 39 44
When I XOR these together, I get the data in readable format again:
"This is my message in clear text"
I've seen some examples that take the entire XOR key and apply it to each byte, but that's not what I need. It needs to somehow rotate over the bytes until the end of the data is reached.
Is there anyone who can assist?
Use the % (modulus) operator!
using byte_t = unsigned char;

std::vector< byte_t > xor_key;      // filled with the 10 key bytes
std::vector< byte_t > cipher_text;  // filled with the encrypted bytes

std::string plain_text;
plain_text.reserve( cipher_text.size( ) );

for( std::size_t i = 0; i < cipher_text.size( ); ++i )
{
    auto const cipher_byte = cipher_text[ i ];
    // i % xor_key.size( ) will "rotate" the index through the key
    auto const key_byte = xor_key[ i % xor_key.size( ) ];
    // xor like usual!
    plain_text += static_cast< char >( cipher_byte ^ key_byte );
}
You just use a loop and a modulus operation to XOR one byte of the key with one byte of the cypher text.
void decrypt(char *cypher, int length, char* key) {
    for (int i = 0; i < length; i++) cypher[i] ^= key[i % 10];
}
By indexing the key with modulus (remainder) of 10, it will always have "rotating" values between 0 and 9.
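To make the rotation concrete, here is a small self-contained driver (an illustration, not part of the original answer) that repeats the decrypt() function above and feeds it the key and ciphertext bytes from the question:
#include <cstdio>

void decrypt(char *cypher, int length, char *key) {
    for (int i = 0; i < length; i++) cypher[i] ^= key[i % 10];
}

int main() {
    char key[] = "A0BDFLXSRT";   // 41 30 42 44 46 4c 58 53 52 54
    char data[] = { 0x15, 0x58, 0x2B, 0x37, 0x66, 0x25, 0x2B, 0x73,
                    0x3F, 0x2D, 0x61, 0x5D, 0x27, 0x37, 0x35, 0x2D,
                    0x3F, 0x36, 0x72, 0x3D, 0x2F, 0x10, 0x21, 0x28,
                    0x23, 0x2D, 0x2A, 0x73, 0x26, 0x31, 0x39, 0x44 };
    decrypt(data, (int)sizeof data, key);
    // prints "This is my message in clear text"
    printf("%.*s\n", (int)sizeof data, data);
    return 0;
}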
Can someone tell me what's wrong with my code?
It works fine in my test example, but when I use it in the production code it decrypts the string but adds padding bytes, apparently to maintain some kind of block size.
I didn't post my encrypt/decrypt methods because they would make this post too big, and they work fine: my test example encrypts and decrypts properly. ini.GetValue is an INI retrieval method; there is nothing wrong with it, and you can see the Base64 size is the same as in the example code, so I believe it works fine. I never had any problems with it before (without encryption); it returns a const char*. The problem is that the production code ciphertext has 2 null bytes appended to it, which I find strange because both pieces of code are pretty much identical. I'm not good at C++, so I'm probably overlooking some basic char array stuff.
The encryption code I use is AES-256-CBC from OpenSSL 1.1.1
Look at my outputs to see what's wrong.
Good looking example code:
Ciphertext is:
000000: 7a e1 69 61 65 bb 74 ad 1a 68 8a ae 73 70 b6 0e z.iae.t..h..sp..
000010: 4f c9 45 9b 44 ca e2 be e2 aa 16 14 cd b1 79 7b O.E.D.........y{
000020: 86 a5 92 26 e6 08 3e 55 61 4e 60 03 50 f3 e4 c1 ...&..>UaN`.P...
000030: fe 5a 2c 0b df c9 1b d8 92 1f 48 75 0d f8 c2 44 .Z,.......Hu...D
Base64 (size=88):
000000: 65 75 46 70 59 57 57 37 64 4b 30 61 61 49 71 75 euFpYWW7dK0aaIqu
000010: 63 33 43 32 44 6b 2f 4a 52 5a 74 45 79 75 4b 2b c3C2Dk/JRZtEyuK+
000020: 34 71 6f 57 46 4d 32 78 65 58 75 47 70 5a 49 6d 4qoWFM2xeXuGpZIm
000030: 35 67 67 2b 56 57 46 4f 59 41 4e 51 38 2b 54 42 5gg+VWFOYANQ8+TB
000040: 2f 6c 6f 73 43 39 2f 4a 47 39 69 53 48 30 68 31 /losC9/JG9iSH0h1
000050: 44 66 6a 43 52 41 3d 3d DfjCRA==
b cip len = 64
a cip len = 16
plain b = 0
plain a = 3
Decrypted text is:
wtf
Decrypted base64 is:
wtf
000000: 77 74 66 00 wtf.
Bad production code example:
Base64 (size=88)
000000: 6a 7a 48 30 46 71 73 54 45 47 4d 76 2f 67 76 59 jzH0FqsTEGMv/gvY
000010: 4d 73 34 54 2f 39 58 32 6c 37 54 31 4d 6d 56 61 Ms4T/9X2l7T1MmVa
000020: 36 45 4f 38 52 64 45 57 42 6b 65 48 71 31 31 45 6EO8RdEWBkeHq11E
000030: 39 2b 77 37 47 4e 49 4a 47 4a 71 42 55 74 54 70 9+w7GNIJGJqBUtTp
000040: 30 36 58 46 31 4d 66 45 79 44 45 71 5a 69 58 54 06XF1MfEyDEqZiXT
000050: 79 45 53 6b 65 41 3d 3d yESkeA==
Ciphertext is:
000000: 8f 31 f4 16 ab 13 10 63 2f fe 0b d8 32 ce 13 ff .1.....c/...2...
000010: d5 f6 97 b4 f5 32 65 5a e8 43 bc 45 d1 16 06 47 .....2eZ.C.E...G
000020: 87 ab 5d 44 f7 ec 3b 18 d2 09 18 9a 81 52 d4 e9 ..]D..;......R..
000030: d3 a5 c5 d4 c7 c4 c8 31 2a 66 25 d3 c8 44 a4 78 .......1*f%..D.x
000040: 00 00 ..
b cip len = 65
a cip len = 17
crypt miss-match
plain b = 16
crypt write fail
plain a = 16
000000: 77 74 66 09 09 09 09 09 09 09 09 05 05 05 05 05 wtf.............
Here is my code. As you can see, both versions look very similar, so I don't understand what the problem is.
Here is a little helper function I use for the hexdump output.
void Hexdump(void* ptr, int buflen)
{
    unsigned char* buf = (unsigned char*)ptr;
    int i, j;
    for (i = 0; i < buflen; i += 16) {
        myprintf("%06x: ", i);
        for (j = 0; j < 16; j++)
            if (i + j < buflen)
                myprintf("%02x ", buf[i + j]);
            else
                myprintf("   ");
        myprintf(" ");
        for (j = 0; j < 16; j++)
            if (i + j < buflen)
                myprintf("%c", isprint(buf[i + j]) ? buf[i + j] : '.');
        myprintf("\n");
    }
}
char* base64(const unsigned char* input, int length) {
    const auto pl = 4 * ((length + 2) / 3);
    auto output = reinterpret_cast<char*>(calloc(pl + 1, 1)); //+1 for the terminating null that EVP_EncodeBlock adds on
    const auto ol = EVP_EncodeBlock(reinterpret_cast<unsigned char*>(output), input, length);
    if (pl != ol) { myprintf("b64 calc %d,%d\n", pl, ol); }
    return output;
}

unsigned char* decode64(const char* input, int length) {
    const auto pl = 3 * length / 4;
    auto output = reinterpret_cast<unsigned char*>(calloc(pl + 1, 1));
    const auto ol = EVP_DecodeBlock(output, reinterpret_cast<const unsigned char*>(input), length);
    if (pl != ol) { myprintf("d64 calc %d,%d\n", pl, ol); }
    return output;
}
Here is the test example that works fine.
/* enc test */
/* Message to be encrypted */
unsigned char* plaintext = (unsigned char*)"wtf";
/*
* Buffer for ciphertext. Ensure the buffer is long enough for the
* ciphertext which may be longer than the plaintext, depending on the
* algorithm and mode.
*/
unsigned char* ciphertext = new unsigned char[128];
/* Buffer for the decrypted text */
unsigned char decryptedtext[128];
int decryptedtext_len, ciphertext_len;
/* Encrypt the plaintext */
ciphertext_len = encrypt(plaintext, strlen((char*)plaintext), ciphertext);
/* Do something useful with the ciphertext here */
myprintf("Ciphertext is:\n");
Hexdump((void*)ciphertext, ciphertext_len);
myprintf("Base64 (size=%d):\n", strlen(base64(ciphertext, ciphertext_len)));
Hexdump((void*)base64(ciphertext, ciphertext_len), 4 * ((ciphertext_len + 2) / 3));
/* Decrypt the ciphertext */
decryptedtext_len = decrypt(ciphertext, ciphertext_len, decryptedtext);
/* Add a NULL terminator. We are expecting printable text */
decryptedtext[decryptedtext_len] = '\0';
/* Show the decrypted text */
myprintf("Decrypted text is:\n");
myprintf("%s\n", decryptedtext);
myprintf("Decrypted base64 is:\n");
myprintf("%s\n", decode64(base64(decryptedtext, decryptedtext_len), 4 * ((decryptedtext_len + 2) / 3)));
Hexdump(decode64(base64(decryptedtext, decryptedtext_len), 4 * ((decryptedtext_len + 2) / 3)), 4 * ((decryptedtext_len + 2) / 3));
/* enc test end */
Here is the bad production code:
//Decrypt the username
const char* b64buffer = ini.GetValue("Credentials", "SavedPassword", "");
int b64buffer_length = strlen(b64buffer);
myprintf("Base64 (size=%d)\n", b64buffer_length);
Hexdump((void*)b64buffer, b64buffer_length);
int decryptedtext_len;
int decoded_size = 3 * b64buffer_length / 4;
unsigned char* decryptedtext = new unsigned char[decoded_size];
//unsigned char* ciphertext = decode64(b64buffer, b64buffer_length); //had this before, same problem as the line below; it worked without allocating new memory, and I prefer to fix it back up
unsigned char* ciphertext = new unsigned char[decoded_size];
memcpy(ciphertext, decode64(b64buffer, b64buffer_length), decoded_size); //same problem as top line.
myprintf("Ciphertext is:\n");
Hexdump((void*)ciphertext, decoded_size);
/* Decrypt the ciphertext */
decryptedtext_len = decrypt(ciphertext, decoded_size - 1, decryptedtext);
/* Add a NULL terminator. We are expecting printable text */
decryptedtext[decryptedtext_len] = '\0';
Hexdump(decryptedtext, decryptedtext_len);
strcpy(password_setting, (char*)decryptedtext); //save decrypted password back
delete[] decryptedtext;
delete[] ciphertext;
In the example that works, you get ciphertext_len directly from the encryption function. When you display the ciphertext, you use this length.
In the "bad production code", you calculate decoded_size from the length of the Base64 data. However, Base64 encoded data always has a length that is a multiple of 4. If the original data size is not a multiple of 3, then there are one or two padding characters added to the string. In both of your examples, you have two of these characters, the '=' at the end of the Base64 data.
When calculating the length of the decoded data, you need to account for these bytes. If there are no '=' characters at the end of the string, use the length that you calculated (3 * N / 4). If there is one '=' character, reduce that calculated length by 1, and if there are two '=' characters, reduce the calculated length by 2. (There will not be 3 padding characters.) For example, the Base64 string above is 88 characters, so 3 * 88 / 4 = 66; with the two trailing '=' characters the actual ciphertext length is 66 - 2 = 64 bytes, which matches the working example.
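A minimal sketch of that adjustment (the helper name is mine, not from the post): take the upper bound derived from the Base64 length, then subtract one byte per trailing '=' character.
// Hypothetical helper: number of raw bytes actually present in
// EVP_DecodeBlock's output, given the Base64 text and its length.
int decoded_length(const char* b64, int b64_len)
{
    int len = 3 * b64_len / 4;                            // upper bound from the Base64 size
    if (b64_len >= 1 && b64[b64_len - 1] == '=') --len;   // one '=' -> one padding byte
    if (b64_len >= 2 && b64[b64_len - 2] == '=') --len;   // "==" -> two padding bytes
    return len;                                           // e.g. 88 chars ending in "==" -> 64
}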
Edit: Here is my fix: (sspoke)
char* base64(const unsigned char* input, int length) {
    const auto pl = 4 * ((length + 2) / 3);
    auto output = reinterpret_cast<char*>(calloc(pl + 1, 1)); //+1 for the terminating null that EVP_EncodeBlock adds on
    const auto ol = EVP_EncodeBlock(reinterpret_cast<unsigned char*>(output), input, length);
    if (pl != ol) { printf("encode64 fail size size %d,%d\n", pl, ol); }
    return output;
}

unsigned char* decode64(const char* input, int* length) {
    //The old code produced the wrong length because it didn't take the '=' padding into account.
    const auto pl = 3 * *length / 4;
    auto output = reinterpret_cast<unsigned char*>(calloc(pl + 1, 1));
    const auto ol = EVP_DecodeBlock(output, reinterpret_cast<const unsigned char*>(input), *length);
    if (pl != ol) { printf("decode64 fail size size %d,%d\n", pl, ol); }
    //Little bug fix I added to correct the length, since the '=' padding is not accounted for in the output. -sspoke
    if (*length > 3 && input[*length - 1] == '=' && input[*length - 2] == '=')
        *length = ol - 2;
    else if (*length > 2 && input[*length - 1] == '=')
        *length = ol - 1;
    else
        *length = ol;
    return output;
}
My Protobuf message consists of 3 doubles
syntax = "proto3";
message TestMessage {
    double input = 1;
    double output = 2;
    double info = 3;
}
When I set these values to
test.set_input(2.3456);
test.set_output(5.4321);
test.set_info(5.0);
the serialized message looks like
00000000 09 16 fb cb ee c9 c3 02 40 11 0a 68 22 6c 78 ba |........@..h"lx.|
00000010 15 40 19 |.@.|
00000013
when using test.serializeToArray, and it could not be deserialized successfully by a Go program using the same protobuf message. When trying to read it from a C++ program I got 0 as info, so the message seems to be corrupted.
When using test.serializeToOstream I got this message, which could be deserialized successfully by both the Go and C++ programs.
00000000 09 16 fb cb ee c9 c3 02 40 11 0a 68 22 6c 78 ba |........@..h"lx.|
00000010 15 40 19 00 00 00 00 00 00 14 40 |.@........@|
0000001b
When setting the values to
test.set_input(2.3456);
test.set_output(5.4321);
test.set_info(5.5678);
the serialized messages, both produced by test.serializeToArray and test.serializeToOstream look like
00000000 09 16 fb cb ee c9 c3 02 40 11 0a 68 22 6c 78 ba |........@..h"lx.|
00000010 15 40 19 da ac fa 5c 6d 45 16 40 |.@....\mE.@|
0000001b
and could be successfully read by both my Go and C++ programs.
What am I missing here? Why is serializeToArray not working in the first case?
EDIT:
As it turns out, serializeToString works fine, too.
Here the code I used for the comparison:
file_a.open(FILEPATH_A);
file_b.open(FILEPATH_B);
test.set_input(2.3456);
test.set_output(5.4321);
test.set_info(5.0);
//serializeToArray
int size = test.ByteSize();
char *buffer = (char*) malloc(size);
test.SerializeToArray(buffer, size);
file_a << buffer;
//serializeToString
std::string buf;
test.SerializeToString(&buf);
file_b << buf;
file_a.close();
file_b.close();
Why does serializeToArray not work as expected?
EDIT2:
When using file_b << buf.data() instead of file_b << buf, the data gets corrupted as well, but why?
I think the error you're making is treating binary as character data and using character data APIs. Many of those APIs stop at the first nil byte (0), but that is a totally valid value in protobuf binary.
You need to make sure you don't use any such APIs basically - stick purely to binary safe APIs.
Since you indicate that size is 27, this all fits.
Basically, the binary representation of 5.0 includes 0 bytes, but you could easily have seen the same problem for other values in time.
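As a sketch of what that means in practice (assuming the generated TestMessage class from the .proto above; the function name is mine), serialize into a std::string and write it with a length-aware call instead of operator<<:
// Illustration only, not the poster's exact code: operator<< on a char*
// treats the buffer as a NUL-terminated C string and stops at the first
// 0x00 byte, so use a binary-safe write with an explicit length instead.
#include <fstream>
#include <string>

void dump_message(const TestMessage& test, const char* path)
{
    std::string buf;
    test.SerializeToString(&buf);            // buf may legitimately contain 0x00 bytes

    std::ofstream out(path, std::ios::out | std::ios::binary);
    out.write(buf.data(), static_cast<std::streamsize>(buf.size()));  // binary-safe
}
In the question's code, file_a << buffer is exactly such a C-string write, which is why the 5.0 case (whose encoding ends in 0x00 bytes) came out truncated.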
I am trying to dump the memory (made with malloc) to a file. I want to dump the raw data because I don't know what's inside the memory (int float double) at the point that I want to dump the memory.
What's the best way to do this?
I have tried a few things already but none of them worked as I wanted.
In C, it's quite trivial, really:
const size_t size = 4711;
void *data = malloc(size);

if(data != NULL)
{
    FILE *out = fopen("memory.bin", "wb");
    if(out != NULL)
    {
        size_t to_go = size;
        while(to_go > 0)
        {
            /* write with element size 1 so the return value is a byte count,
               and advance past whatever has already been written */
            const size_t wrote = fwrite((const char *) data + (size - to_go), 1, to_go, out);
            if(wrote == 0)
                break;
            to_go -= wrote;
        }
        fclose(out);
    }
    free(data);
}
The above loops fwrite() to handle short writes properly; that is where most of the complexity comes from.
It's not clear what you mean by "not working".
You could reinterpret_cast the memory to a char * and write it to file easily.
Reading it back again is a different matter.
The "C++ way" of doing it would probably involve using std::ostream::write with a stream in binary mode.
#include <fstream>
#include <string>

bool write_file_binary (std::string const & filename,
                        char const * data, size_t const bytes)
{
    std::ofstream b_stream(filename.c_str(),
                           std::fstream::out | std::fstream::binary);
    if (b_stream)
    {
        b_stream.write(data, bytes);
        return (b_stream.good());
    }
    return false;
}

int main (void)
{
    double * buffer = new double[100];
    write_file_binary("test.bin",
                      reinterpret_cast<char const *>(buffer),
                      sizeof(double) * 100);
    delete[] buffer;
    return 0;
}
If this is C++, this might help you, as part of serializing and deserializing,
I write the raw memory array to a file (using new[] is essentially the same
as malloc in the C world):
https://github.com/goblinhack/simple-c-plus-plus-serializer
#include "hexdump.h"
auto elems = 128;
static void serialize (std::ofstream &out)
{
    auto a = new char[elems];
    for (auto i = 0; i < elems; i++) {
        a[i] = i;
    }
    out << bits(a);
    hexdump(a, elems);
}
Output:
128 bytes:
0000 00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f |................|
0010 10 11 12 13 14 15 16 17 18 19 1a 1b 1c 1d 1e 1f |................|
0020 20 21 22 23 24 25 26 27 28 29 2a 2b 2c 2d 2e 2f | !"#$%&'()*+,-./|
0030 30 31 32 33 34 35 36 37 38 39 3a 3b 3c 3d 3e 3f |0123456789:;<=>?|
0040 40 41 42 43 44 45 46 47 48 49 4a 4b 4c 4d 4e 4f |@ABCDEFGHIJKLMNO|
0050 50 51 52 53 54 55 56 57 58 59 5a 5b 5c 5d 5e 5f |PQRSTUVWXYZ[\]^_|
0060 60 61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f |`abcdefghijklmno|
0070 70 71 72 73 74 75 76 77 78 79 7a 7b 7c 7d 7e 7f |pqrstuvwxyz{|}~.|
I have this function to read a triangular 2D array, but sometimes it crashes on realloc, always on the 6th realloc (current_row = 7). Sometimes it runs fine. I cannot reproduce the error in gdb (it works every time there). What's wrong?
TRIANGLE *read_triangle(char *file_name)
{
    std::ifstream fin(file_name);
    int current_row = 0, current_column = 0, buffer;
    TRIANGLE *triangle = new TRIANGLE();
    triangle->triangle_values[0] = new int[1];
    while (fin >> buffer)
    {
        if (current_column == current_row+1)
        {
            current_column = 0;
            triangle->triangle_values = (int**)realloc((void*)triangle->triangle_values, (++current_row+1)*sizeof(int*));
            triangle->triangle_values[current_row] = new int[current_row];
        }
        triangle->triangle_values[current_row][current_column++] = buffer;
    }
    triangle->rows = current_row-1;
    return triangle;
}
The TRIANGLE Definition
struct TRIANGLE
{
    int **triangle_values;
    int rows;

    TRIANGLE(): triangle_values(NULL)
    {
        triangle_values = new int*[1];
    }
};
Input file example:
75
95 64
17 47 82
18 35 87 10
20 04 82 47 65
19 01 23 75 03 34
88 02 77 73 07 63 67
99 65 04 28 06 16 70 92
41 41 26 56 83 40 80 70 33
41 48 72 33 47 32 37 16 94 29
53 71 44 65 25 43 91 52 97 51 14
70 11 33 28 77 73 17 78 39 68 17 57
91 71 52 38 17 14 91 43 58 50 27 29 48
63 66 04 68 89 53 67 30 73 16 69 87 40 31
04 62 98 27 23 09 70 98 73 93 38 53 60 04 23
You're mixing the old style (realloc) and new style (new) memory allocation functions which is not guaranteed to work. By that, I mean they'll work provided you keep them separate but allocating memory with new and then trying to expand that same memory with realloc is a definite no-no.
From C++11 20.6.13, when talking about how the old style functions control the reachability of their blocks:
It also allows malloc() to be implemented with a separate allocation arena.
Hence, there's no guarantee that the memory arenas for old and new style are related to each other at all.
C++ provides all sorts of wonderful collection classes with far better resizing methods than malloc/realloc. You should fully embrace the language by using them (such as vector; see the sketch below).
Generally, C++ programmers don't use the legacy C stuff unless they're writing things that need to be usable in both C and C++, in which case they're probably better referred to as C programmers, at least temporarily :-)
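As an illustration of that suggestion (my own names, not the poster's code), here is the same read loop built on std::vector, so the container does all of the resizing and there is no realloc/new mixing at all:
#include <cstddef>
#include <fstream>
#include <vector>

std::vector<std::vector<int>> read_triangle(const char *file_name)
{
    std::ifstream fin(file_name);
    std::vector<std::vector<int>> rows;
    std::size_t row = 0, col = 0;
    int value;

    while (fin >> value)
    {
        if (col == row)                  // row N holds N+1 values
        {
            rows.emplace_back();         // the vector grows itself
            rows.back().reserve(++row);
            col = 0;
        }
        rows.back().push_back(value);
        ++col;
    }
    return rows;
}
The number of rows is simply rows.size(), so the separate rows member is no longer needed either.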