I have never worked with SHA256 before. Recently, I've been trying to implement a SHA256 checksum in order to see if a library has been tampered with.
The funny thing is that OpenSSL SHA256 generates a different sum for the exact same library depending on its location. If it's located in another folder, the sum is different.
Is there anything I could do to get the same sum no matter where the file is located? I provided the code snippets and the sums I get.
unsigned char* getsum( char* filename ) {
    std::ifstream pFile( filename, std::ios::binary );
    SHA256_CTX sContext;
    char pBuffer[ 1024*16 ];
    unsigned char pSum[SHA256_DIGEST_LENGTH];

    SHA256_Init( &sContext );
    while( pFile.good() ) {
        pFile.read( pBuffer, sizeof(pBuffer) );
        SHA256_Update( &sContext, pBuffer, pFile.gcount() );
    }
    SHA256_Final( pSum, &sContext );
    return pSum;
}
...
char* cl_sum = new char[256];
sprintf( cl_sum, "%02x", getsum("library.dll") );
MessageBoxA( NULL, cl_sum , NULL, NULL );
delete[] cl_sum;
exit( -1 );
I also tried using the one-shot SHA256() function instead of the whole SHA256_CTX / SHA256_Init() / Update / Final sequence, but I still get the same result.
Your code has lots of basic mistakes:
sprintf( cl_sum, "%02x", getsum("library.dll") );
The format string "%02x" says "print me an int value", but getsum returns an unsigned char *! This is wrong, and if you enable compiler warnings the compiler will point out the issue.
Return value is wrong too:
unsigned char* getsum( char* filename ) {
    ...
    unsigned char pSum[SHA256_DIGEST_LENGTH];
    ...
    return pSum;
}
You are returning a pointer to a local object whose lifetime ends when the function returns.
Here is my tweak to your code:
#include <array>
#include <istream>
#include <openssl/sha.h>

using Sha256Hash = std::array<unsigned char, SHA256_DIGEST_LENGTH>;

Sha256Hash calcSha256Hash(std::istream& in)
{
    SHA256_CTX sContext;
    SHA256_Init(&sContext);

    std::array<char, 1024> data;
    while (in.good()) {
        in.read(data.data(), data.size());
        SHA256_Update(&sContext, data.data(), in.gcount());
    }

    Sha256Hash hash;
    SHA256_Final(hash.data(), &sContext);
    return hash;
}
https://godbolt.org/z/Y3nY13jao
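For completeness, converting the digest to hex and displaying it might look like this (a sketch: toHex is my own helper, and the MessageBoxA call just mirrors the question's output):
#include <fstream>
#include <iomanip>
#include <sstream>
#include <string>

// Convert the raw digest to a lowercase hex string.
std::string toHex(const Sha256Hash& hash)
{
    std::ostringstream os;
    for (unsigned char byte : hash)
        os << std::hex << std::setw(2) << std::setfill('0') << static_cast<int>(byte);
    return os.str();
}

// Usage (as in the original snippet):
//   std::ifstream file("library.dll", std::ios::binary);
//   MessageBoxA(NULL, toHex(calcSha256Hash(file)).c_str(), NULL, MB_OK);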
Side note: I'm surprised that std::istream::read sets the failure bit when end of file is reached and some data has been read. This is inconsistent with the behavior of the stream extraction operators.
You are returning a pointer to local variable pSum, that is undefined behaviour(UB).
An explanation for the different outputs might be that, since the array is located on the stack, later calls to functions like sprintf or MessageBoxA likely overwrite it with their own variables. But anything can happen after UB.
Use and return std::vector<unsigned char> instead.
Also, do not use new; a std::array or another std::vector is much safer.
Lastly, I would strongly advise turning on compiler warnings; the compiler should warn about the issue above.
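A minimal sketch of the std::vector approach suggested above, reusing the OpenSSL calls from the question (the function name is my own):
#include <fstream>
#include <vector>
#include <openssl/sha.h>

// Returns the digest by value; no pointers to locals involved.
std::vector<unsigned char> fileSha256(const char* filename)
{
    std::ifstream file(filename, std::ios::binary);
    SHA256_CTX ctx;
    SHA256_Init(&ctx);

    std::vector<char> buffer(1024 * 16);
    while (file.good()) {
        file.read(buffer.data(), buffer.size());
        SHA256_Update(&ctx, buffer.data(), file.gcount());
    }

    std::vector<unsigned char> digest(SHA256_DIGEST_LENGTH);
    SHA256_Final(digest.data(), &ctx);
    return digest;
}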
Your code's output depends on what value getsum returns. The value getsum returns is an address on the stack. So your code's output depends on where the stack is located.
Somewhere in your code, you should save the result of the SHA256 operation in a buffer of some kind and you should output the contents of that buffer. Your code does neither of those two things.
If you think you save the output of the SHA256 operation in some kind of buffer, where do you think that buffer is allocated? The only buffer that holds the result is pSum, and that's allocated on the stack in getsum and so no longer exists when getsum returns.
If you think you actually look in the buffer somewhere, where do you think that code is? Your sprintf call never looks in the buffer; it just outputs a pointer in hex.
There are several problems with your code:
your function is returning a pointer to a local variable. When your function exits, the variable is destroyed, leaving the returned pointer dangling.
you are printing the value of the returned pointer itself (ie, the memory address it holds), not the SHA data that it points at. That is why the value displayed in the message box has nothing to do with the file's contents: you are seeing the memory address of the local variable, and while multiple invocations within one run may reuse the same memory, the address can differ between runs or builds.
your reading loop is not making sure that read() is successful before calling SHA256_Update().
Try something more like this instead:
using Sha256Digest = std::array<unsigned char, SHA256_DIGEST_LENGTH>;

Sha256Digest getsum( const char* filename ) {
    std::ifstream file( filename, std::ios::binary );
    SHA256_CTX sContext;
    char buffer[ 1024*16 ];
    Sha256Digest sum;

    SHA256_Init( &sContext );
    // the gcount() check also processes the final partial block
    while( file.read( buffer, sizeof(buffer) ) || file.gcount() > 0 ) {
        SHA256_Update( &sContext, buffer, file.gcount() );
    }
    SHA256_Final( sum.data(), &sContext );

    return sum;
}
...
Sha256Digest sum = getsum("library.dll");
char cl_sum[256] = {}, *ptr = cl_sum;
for (auto ch : sum) {
    ptr += sprintf( ptr, "%02x", ch );
}
MessageBoxA( NULL, cl_sum, NULL, NULL );

/* Alternatively:
std::ostringstream cl_sum;
for (auto ch : sum) {
    cl_sum << std::hex << std::setw(2) << std::setfill('0') << static_cast<int>(ch);
}
MessageBoxA( NULL, cl_sum.str().c_str(), NULL, NULL );
*/
I did a sample project to read a file into a buffer.
When I use the tellg() function it gives me a larger value than what the
read function actually reads from the file. I think there is a bug.
Here is my code:
EDIT:
void read_file (const char* name, int *size , char*& buffer)
{
    ifstream file;
    file.open(name,ios::in|ios::binary);
    *size = 0;
    if (file.is_open())
    {
        // get length of file
        file.seekg(0,std::ios_base::end);
        int length = *size = file.tellg();
        file.seekg(0,std::ios_base::beg);
        // allocate buffer in size of file
        buffer = new char[length];
        // read
        file.read(buffer,length);
        cout << file.gcount() << endl;
    }
    file.close();
}
main:
void main()
{
    int size = 0;
    char* buffer = NULL;
    read_file("File.txt",&size,buffer);
    for (int i = 0; i < size; i++)
        cout << buffer[i];
    cout << endl;
}
tellg does not report the size of the file, nor the offset
from the beginning in bytes. It reports a token value which can
later be used to seek to the same place, and nothing more.
(It's not even guaranteed that you can convert the type to an
integral type.)
At least according to the language specification: in practice,
on Unix systems, the value returned will be the offset in bytes
from the beginning of the file, and under Windows, it will be
the offset from the beginning of the file for files opened in
binary mode. For Windows (and most non-Unix systems), in text
mode, there is no direct and immediate mapping between what
tellg returns and the number of bytes you must read to get to
that position. Under Windows, all you can really count on is
that the value will be no less than the number of bytes you have
to read (and in most real cases, won't be too much greater,
although it can be up to two times more).
If it is important to know exactly how many bytes you can read,
the only way of reliably doing so is by reading. You should be
able to do this with something like:
#include <limits>
file.ignore( std::numeric_limits<std::streamsize>::max() );
std::streamsize length = file.gcount();
file.clear(); // Since ignore will have set eof.
file.seekg( 0, std::ios_base::beg );
Finally, two other remarks concerning your code:
First, the line:
*buffer = new char[length];
shouldn't compile: you have declared buffer to be a char*,
so *buffer has type char, and is not a pointer. Given what
you seem to be doing, you probably want to declare buffer as
a char**. But a much better solution would be to declare it
as a std::vector<char>& or a std::string&. (That way, you
don't have to return the size as well, and you won't leak memory
if there is an exception.)
Second, the loop condition at the end is wrong. If you really
want to read one character at a time,
while ( file.get( buffer[i] ) ) {
++ i;
}
should do the trick. A better solution would probably be to
read blocks of data:
while ( file.read( buffer + i, N ) || file.gcount() != 0 ) {
i += file.gcount();
}
or even:
file.read( buffer, size );
size = file.gcount();
EDIT: I just noticed a third error: if you fail to open the
file, you don't tell the caller. At the very least, you should
set the size to 0 (but some sort of more precise error
handling is probably better).
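Putting those remarks together, a sketch of read_file along those lines (my own illustration, using the std::vector<char>& suggested above rather than the original signature):
#include <fstream>
#include <vector>

// Returns false if the file could not be opened; the vector's size
// is the number of bytes actually read.
bool read_file(const char* name, std::vector<char>& buffer)
{
    std::ifstream file(name, std::ios::binary);
    if (!file.is_open())
        return false;

    char block[4096];
    while (file.read(block, sizeof(block)) || file.gcount() != 0)
        buffer.insert(buffer.end(), block, block + file.gcount());

    return true;
}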
In C++17 there are std::filesystem file_size methods and functions, so that can streamline the whole task.
std::filesystem::file_size - cppreference.com
std::filesystem::directory_entry::file_size - cppreference.com
With those functions/methods there's a chance of not opening the file at all and instead reading cached directory data (especially with the std::filesystem::directory_entry::file_size method).
Those functions also require only directory read permission, not file read permission (which the tellg() approach needs).
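For example (C++17; a minimal sketch using the non-throwing overload, with the file name taken from the question):
#include <filesystem>
#include <iostream>

int main()
{
    std::error_code ec;
    auto size = std::filesystem::file_size("File.txt", ec);
    if (ec)
        std::cout << "error: " << ec.message() << '\n';
    else
        std::cout << "size: " << size << " bytes\n";
}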
void read_file (int *size, char* name,char* buffer)
*buffer = new char[length];
These lines look like a bug: you create a char array but store it into *buffer, which is the single char buffer[0]. Then you read the file into buffer, which is still uninitialized.
You need to pass buffer by pointer:
void read_file (int *size, char* name,char** buffer)
*buffer = new char[length];
Or by reference, which is the C++ way and is less error prone:
void read_file (int *size, char* name,char*& buffer)
buffer = new char[length];
...
fseek(fptr, 0L, SEEK_END);
filesz = ftell(fptr);
will give the file size if the file was opened through fopen.
Using an ifstream,
in.seekg(0,ifstream::end);
filesz = in.tellg();
would do something similar.
This is my first post. I am currently taking a networking class and I am required to write a client program that downloads all emails from imap.gmail.com:993 to text files, using Winsock and OpenSSL. I was able to connect to the server and fetch the emails. For emails with little data I had no problem receiving them, but for emails with large data, such as an image that is base64-encoded, I was only able to download part of it.
So my question is: how can I tell the client to wait until it has received all the data from the server?
Here is what i have done so far:
void fetchMail(SSL *sslConnection,int lowerLimit, int UpperLimit)
{
    SYSTEMTIME lt;
    ofstream outfile;
    GetLocalTime(&lt);

    char szFile[MAX_PATH + 1];
    char szPath[MAX_PATH+1];
    char message[BUFSIZE];
    char result[BUFSIZE];

    ::GetModuleFileName( NULL, szPath, MAX_PATH );

    // Change file name to current full path
    LPCTSTR psz = strchr( szPath, '\\');
    if( psz != NULL )
    {
        szPath[psz-szPath] = '\0';
    }

    char szMailBox[MAX_PATH+1];
    memset( szMailBox, 0, sizeof(szMailBox));
    wsprintf( szMailBox, "%s\\inbox", szPath );

    // Create a folder to store emails
    ::CreateDirectory( szMailBox, NULL );

    for(int i = lowerLimit; i < UpperLimit; ++i)
    {
        // Build a file name for this email
        memset( szFile, 0, sizeof(szFile));
        memset( result, 0, sizeof(result));
        memset( message, 0, sizeof(message));
        ::sprintf(szFile,"%s\\%d%d%d%d%d%d.txt", szMailBox, lt.wHour, lt.wMinute,lt.wMinute,lt.wSecond, lt.wMilliseconds,i);

        string Result;          // string which will contain the result
        stringstream convert;   // stringstream used for the conversion
        const char * num;
        convert << i;           // add the value of i to the characters in the stream
        Result = convert.str(); // set Result to the content of the stream
        num = Result.c_str();

        strcpy(result, "tag FETCH ");
        strcat(result, num);
        strcat(result, " (BODY[TEXT])\r\n");

        int n = 0;
        cout << "\nFETCHING : \n";

        SSL_write(sslConnection, result, strlen(result));

        outfile.open(szFile );
        SSL_read(sslConnection, message, sizeof(message)-1);
        outfile << message;
        outfile.close();
    }
}
First of all some points on your code:
You use strcpy, strcat and all those unchecked, unsafe C functions. You might easily get buffer overflows and other kinds of errors. Consider using C++ strings, vectors, and arrays.
You do a lot of different things in that function, on different levels of abstraction. AFAICS only the two SSL_* function calls are really about fetching that mail. Consider breaking out some functions to improve readability.
Now to your problem: Googling a bit about SSL_read, you will see that it returns an int, denoting how many bytes were actually read. You should use that return value - not only for this issue but also for error handling. If the mail data is longer than your buffer, the function will read until the buffer is filled and return its size. You should continue to call the function until all bytes have been read.
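A minimal sketch of such a loop, reusing the sslConnection, BUFSIZE and outfile from your code (the "tag OK" test for the tagged completion line is an assumption about how you detect the end of the response; adapt it to the tag you actually send):
#include <string>

// Keep reading until the tagged completion line arrives, appending
// every chunk that SSL_read returns.
std::string body;
char chunk[BUFSIZE];
for (;;) {
    int n = SSL_read(sslConnection, chunk, sizeof(chunk));
    if (n <= 0)
        break;                 // error or connection closed: check SSL_get_error()
    body.append(chunk, n);
    if (body.find("tag OK") != std::string::npos)
        break;                 // server finished answering our FETCH
}
outfile << body;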
In the following code, if I comment out the call to "GetCurrentDirectory" everything works fine, but if I don't, the code breaks after it: no child windows show up, but the program doesn't crash. The compiler doesn't give any error.
char *iniFilePath;
int lenWritten = GetCurrentDirectory( MAX_PATH, iniFilePath );
if( lenWritten )
{
    lstrcat( iniFilePath, iniFileName.c_str() );
    char *buffer;
    GetPrivateProfileString( iniServerSectionName.c_str(), serverIp.c_str(), "", buffer, MAX_PATH, iniFilePath ); // server ip
    MessageBox( 0, buffer, 0, 0 );
}
else
{
    MessageBox( 0,0,0,0 );
}
iniFilePath is an uninitialised pointer which GetCurrentDirectory() is attempting to write to, causing undefined behaviour. GetCurrentDirectory() does not allocate a buffer for the caller: it must be provided.
Change to:
char iniFilePath[MAX_PATH]; // or similar.
Instead of using lstrcat(), which has a "Warning: Do not use" notice on its reference page, construct the path using a std::string to avoid potential buffer overruns:
const std::string full_file_path(std::string(iniFilePath) + "/" + iniFileName);
Note the similar issue with buffer, as pointed out by Wimmel.
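A minimal fix for that part would be to give buffer its own storage, for example (a sketch that follows the original call, using the full_file_path built above):
char buffer[MAX_PATH] = {};
GetPrivateProfileString( iniServerSectionName.c_str(), serverIp.c_str(), "",
                         buffer, MAX_PATH, full_file_path.c_str() ); // server ip
MessageBox( 0, buffer, 0, 0 );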
I would do this in order to get the current directory -
int pathLength = GetCurrentDirectory(0, NULL);
std::vector<char> iniFilePath(pathLength);
GetCurrentDirectory(pathLength, iniFilePath.data());
Note, however, that this won't be thread safe, as the directory could be changed from another thread between the two calls; but as far as I know few programs change the current directory, so it's unlikely to be an issue.
I have code that copies the contents of a std::stringstream to char * dest:
static size_t copyStreamData(std::stringstream & ss, char * const & source, char * dest)
{
    ss.str("");
    ss.clear();
    ss << source;

    size_t ret = ss.rdbuf()->sgetn( dest, (std::numeric_limits<size_t>::max)() );
    dest[ret] = 0;
    return ret;
}
On iOS 5.0 and below it works fine, as expected... but on iOS 5.1 it returns NULL.
What am I doing wrong? Also, how can I patch my code?
What you're trying to do is basically the same as doing:
std::size_t length = std::strlen(source) + 1; // + 1 for '\0'
std::copy(source, source + length, dest);
// Assuming dest has length + 1 bytes allocated for it
I doubt sgetn is returning NULL, as sgetn returns std::streamsize and not a pointer type. Is it returning 0, or is another function returning NULL? Have you tried flushing the stream before calling rdbuf()?
You should choose your string representation.
If you have to use C-strings, for some reason, then the function you are looking for is called strncpy.
std::strncpy(dest, source, max_size_of_dest);
Read about the caveats in the link.
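For instance, the main caveat is that strncpy does not null-terminate the destination when the source does not fit (a small illustration):
#include <cstring>

char dest[4];
std::strncpy(dest, "abcdef", sizeof(dest)); // dest now holds "abcd" with no terminating '\0'
dest[sizeof(dest) - 1] = '\0';              // common fix: terminate manually (truncating)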
If you can use a better abstraction, however, then you are encouraged to move to std::string.
void copy(std::string const& source, std::string& dest) { dest = source; }
Not having to deal with buffer lengths (and thus not getting them wrong, as happens more often than not) is a very powerful help.
Note that nothing prevents you from manipulating std::string within your application and still communicating with C methods: .c_str() helps a lot.
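For example (a small illustration; some_c_api is a hypothetical C function taking a const char*):
#include <string>

void some_c_api(const char* s); // hypothetical C function

void example()
{
    std::string source = "hello";
    std::string dest;
    copy(source, dest);       // the copy() from above
    some_c_api(dest.c_str()); // C interop: pass a NUL-terminated view of the string
}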
I have a need to serialize int, double, long, and float
into a character buffer and this is the way I currently do it
int value = 42;
char* data = new char[64];
std::sprintf(data, "%d", value);
// check
printf( "%s\n", data );
First, I am not sure if this is the best way to do it, but my immediate problem is determining the size of the buffer. The number 64 in this case is purely arbitrary.
How can I know the exact size of the passed numeric value, so I can allocate exactly the memory required, no more and no less?
Either a C or C++ solution is fine.
EDIT
Based on John's answer below (allocate a large enough buffer first), I am thinking of doing this:
char *data = 0;
int value = 42;
char buffer[999];
std::sprintf(buffer, "%d", value);
data = new char[strlen(buffer)+1];
memcpy(data,buffer,strlen(buffer)+1);
printf( "%s\n", data );
This avoids waste, perhaps at the cost of speed, but it does not entirely solve the potential overflow. Or could I just use the maximum length sufficient to represent the type?
In C++ you can use a string stream and stop worrying about the size of the buffer:
#include <sstream>
...
std::ostringstream os;
int value = 42;
os << value;                  // you use string streams as regular streams (cout, etc.)
std::string data = os.str();  // now data contains "42"
(If you want, you can get a const char * from a std::string via the c_str() method.)
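And if you really need a heap-allocated char buffer of the exact size, as in your original code, a sketch (my own addition, reusing data from above):
#include <cstring>

char* out = new char[data.size() + 1];           // exact size, including the '\0'
std::memcpy(out, data.c_str(), data.size() + 1);
// ... use out ..., then:
delete[] out;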
In C, instead, you can use snprintf to "fake" the write and get the size of the buffer to allocate; in fact, if you pass 0 as the second argument of snprintf you can pass NULL as the target string, and the return value gives you the number of characters that would have been written. So in C you can do:
int value = 42;
char * data;
size_t bufSize = snprintf(NULL, 0, "%d", value) + 1; /* +1 for the NUL terminator */
data = malloc(bufSize);
if(data==NULL)
{
// ... handle allocation failure ...
}
snprintf(data, bufSize, "%d", value);
// ...
free(data);
I would serialize to a 'large enough' buffer, then copy to an allocated buffer. In C:
char big_buffer[999], *small_buffer;
sprintf(big_buffer, "%d", some_value);
small_buffer = malloc(strlen(big_buffer) + 1);
strcpy(small_buffer, big_buffer);