Is there a standard library or commonly used library that can be used for calculating SHA-512 hashes on Linux?
I'm looking for a C or C++ library.
Have you checked OpenSSL? I haven't used it myself, but the documentation says it supports SHA-512.
Here is a list of a few more implementations.
Example code (in OpenSSL 1.1 and later, allocate the context with EVP_MD_CTX_new/EVP_MD_CTX_free instead of using a stack-allocated one):
const EVP_MD *md = EVP_get_digestbyname("sha512");
EVP_MD_CTX mdctx;
unsigned char md_value[EVP_MAX_MD_SIZE];
unsigned int md_len;
EVP_MD_CTX_init(&mdctx);
EVP_DigestInit_ex(&mdctx, md, NULL);
EVP_DigestUpdate(&mdctx, mess1, strlen(mess1));
EVP_DigestUpdate(&mdctx, mess2, strlen(mess2));
EVP_DigestFinal_ex(&mdctx, md_value, &md_len);
EVP_MD_CTX_cleanup(&mdctx);
Check this code. It is fully portable and needs no additional configuration; the C++ standard library suffices. You just need to include
#include "sha512.hh"
and then use the functions
sw::sha512::calculate("SHA512 of std::string") // hash of a string, or
sw::sha512::file(path) // hash of a file specified by its path, or
sw::sha512::calculate(&data, sizeof(data)) // hash of any block of data
whenever you need them. Their return value is a std::string.
I'm using Botan for various cryptographic purposes. It has many kinds of SHA(-512) algorithms.
When I was looking at C++ crypto libraries I also found Crypto++. The style of the Botan API was more straightforward for me, but both of these libraries are solid and mature.
I have had great success with this:
Secure Hash Algorithm (SHA)
BSD license. It covers SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512. It has neat helper functions that reduce the steps needed for simple cases:
SHA256_Data(const sha2_byte* data, size_t len, char digest[SHA256_DIGEST_STRING_LENGTH])
It also has a lot of performance tuning options.
I'm porting an application from wchar_t C strings to the char16_t offered by C++11.
I have an issue, though. The only library I found that can handle snprintf for char16_t types is ICU, with its UChar type.
The performance of u_snprintf_u (equivalent to swprintf/snprintf, but taking UChar arguments) is abysmal.
Some testing shows u_snprintf_u being 25x slower than snprintf.
Example of what I get in valgrind:
As you can see, the underlying code is doing too much work and instantiating internal objects that I don't want.
Edit: The data I'm working with doesn't need to be interpreted by the underlying ICU code; it's ASCII-oriented. I didn't find any way to tell ICU not to apply locales and the like on such function calls.
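Since the data is ASCII-only, one workaround (a sketch of my own, not an ICU facility; the helper name and buffer size are assumptions) is to format with plain snprintf into a char buffer and then widen the result byte-by-byte into char16_t:

```cpp
#include <cstdio>
#include <string>

// Format with snprintf, then widen the ASCII result to char16_t.
// Only valid when the format and arguments produce pure ASCII output.
std::u16string format_ascii_u16(const char *fmt, double value)
{
    char buf[256];
    int n = std::snprintf(buf, sizeof(buf), fmt, value);
    std::u16string out;
    if (n > 0) {
        out.reserve(static_cast<size_t>(n));
        // Stop at the buffer end in case snprintf truncated.
        for (int i = 0; i < n && i < static_cast<int>(sizeof(buf)) - 1; ++i)
            out.push_back(static_cast<char16_t>(static_cast<unsigned char>(buf[i])));
    }
    return out;
}
```

This skips ICU's locale machinery entirely, which should close most of the 25x gap for ASCII data.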
I want to encode/decode some basic type into/from binary.
The test code may look like this.
#include <cstdint>
#include <cstring>

int main()
{
    int iter = 0;
    char* binary = new char[100];
    int32_t version = 1;
    memcpy(binary, &version, sizeof(int32_t));
    iter += sizeof(int32_t);
    const char* value1 = "myvalue";
    memcpy(binary + iter, value1, strlen(value1));
    iter += strlen(value1);
    double value2 = 0.1;
    memcpy(binary + iter, &value2, sizeof(double));
    #warning TODO - big/small endian - fixed type length
    delete[] binary;
    return 0;
}
But I still need to solve a lot of problems, such as the endian and fixed type length.
So I want to know if there is a standard way to implement this.
At the same time, I don't want to use any third-party implementation such as Boost, because I need to keep my code simple and independent.
If there were a function/class like NSCoding in Objective-C, that would be best. I wonder if there is such a thing in the C++ standard library.
No, there are no serialization functions within the standard library. Use a library or implement it by yourself.
Note that raw new and delete are bad practice in C++; prefer std::vector or std::unique_ptr.
The most standard thing you have in every OS's base library is ntohs/ntohl and htons/htonl, which convert between 'host' and 'network' byte order; network byte order is the de facto standard for serializing integers.
The problem is that there is no standard API for 64-bit types yet, and you have to serialize strings yourself anyway (the most common method is to prepend the string data with an int16/int32 containing the string length in bytes).
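A minimal sketch of that approach (the helper names are mine): a 32-bit integer goes out through htonl, and a string is written as a u32 length prefix followed by its raw bytes:

```cpp
#include <arpa/inet.h>  // htonl, ntohl (POSIX)
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Append a 32-bit integer in network (big-endian) byte order.
void put_u32(std::vector<char> &buf, uint32_t v)
{
    uint32_t be = htonl(v);
    const char *p = reinterpret_cast<const char *>(&be);
    buf.insert(buf.end(), p, p + sizeof(be));
}

// Append a length-prefixed string: u32 length, then the raw bytes.
void put_string(std::vector<char> &buf, const std::string &s)
{
    put_u32(buf, static_cast<uint32_t>(s.size()));
    buf.insert(buf.end(), s.begin(), s.end());
}

// Read back a u32 written by put_u32, advancing the offset.
uint32_t get_u32(const std::vector<char> &buf, std::size_t &off)
{
    uint32_t be;
    memcpy(&be, buf.data() + off, sizeof(be));
    off += sizeof(be);
    return ntohl(be);
}
```

Reading a string back is symmetric: get_u32 for the length, then copy that many bytes from the buffer.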
Again, C and C++ do not offer a standard way to serialize data to/from a binary buffer, XML, or JSON, but there are plenty of libraries that implement this. One of the most widely used, even though it comes with a lot of dependencies, is:
Boost.Serialization
Other libraries widely used but that require a precompilation step are:
Google Protocol Buffers
FlatBuffers
I have a software framework compiled and running successfully on both Mac and Linux. I am now trying to port it to Windows (using MinGW). So far I have the software compiling and running under Windows, but it's inevitably buggy. In particular, I have an issue with reading data into the Windows version of the program that was serialized on macOS (or Linux): it segfaults.
The serialization process serializes values of primitive variables (longs, ints, doubles etc.) to disk.
This is the code I am using:
#include <iostream>
#include <fstream>
template <class T>
void serializeVariable(T var, std::ofstream &outFile)
{
    outFile.write(reinterpret_cast<char *>(&var), sizeof(var));
}

template <class T>
void readSerializedVariable(T &var, std::ifstream &inFile)
{
    inFile.read(reinterpret_cast<char *>(&var), sizeof(var));
}
So to save the state of a bunch of variables, I call serializeVariable for each variable in turn. Then to read the data back in, calls are made to readSerializedVariable in the same order in which they were saved. For example to save:
::serializeVariable<float>(spreadx,outFile);
::serializeVariable<int>(objectDensity,outFile);
::serializeVariable<int>(popSize,outFile);
And to read:
::readSerializedVariable<float>(spreadx,inFile);
::readSerializedVariable<int>(objectDensity,inFile);
::readSerializedVariable<int>(popSize,inFile);
But on Windows, reading the serialized data fails. I am guessing that Windows serializes data a little differently. Is there a way to modify the above code so that data saved on any platform can be read on any other? Any ideas?
Cheers,
Ben.
Binary serialization like this should work fine across those platforms. You do have to honor endianness, but that is trivial; I don't think these three platforms conflict in this respect.
You really can't use such loosely specified types when you do, though: the sizes of int, float, and size_t can all change across platforms.
For integer types, use the exact-width types found in the <cstdint> header: uint32_t, int32_t, etc. Windows doesn't have that header available IIRC, but you can use boost/cstdint.hpp instead.
Floating point should work as most compilers follow the same IEEE specs.
C - Serialization of the floating point numbers (floats, doubles)
Binary serialization really needs thorough unit testing. I would strongly recommend investing the time.
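As a sketch of both points (exact-width types plus a roundtrip check; the function names are mine), here is a little-endian encoder/decoder that does not depend on the host's byte order:

```cpp
#include <cstdint>

// Encode a uint32_t as 4 bytes, least-significant byte first,
// regardless of the host's native byte order.
void write_u32_le(uint32_t v, unsigned char out[4])
{
    out[0] = static_cast<unsigned char>(v);
    out[1] = static_cast<unsigned char>(v >> 8);
    out[2] = static_cast<unsigned char>(v >> 16);
    out[3] = static_cast<unsigned char>(v >> 24);
}

// Decode the 4 bytes written by write_u32_le.
uint32_t read_u32_le(const unsigned char in[4])
{
    return static_cast<uint32_t>(in[0])
         | (static_cast<uint32_t>(in[1]) << 8)
         | (static_cast<uint32_t>(in[2]) << 16)
         | (static_cast<uint32_t>(in[3]) << 24);
}
```

Because the encoding is defined by shifts rather than by memory layout, a big-endian and a little-endian host produce identical bytes, which is exactly what the unit test should verify.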
This is just a wild guess, sorry I can't help you more. My idea is that the byte order is different: big-endian vs little-endian. Anything larger than one byte will be garbled when loaded on a machine with the opposite order.
For example, I found this piece of code on MSDN:
int isLittleEndian() {
    long int testInt = 0x12345678;
    char *pMem;
    pMem = (char *) &testInt;  /* inspect the first byte in memory */
    if (pMem[0] == 0x78)
        return(1);
    else
        return(0);
}
I guess you will get different results on Linux vs Windows. The best case would be if there is a compiler flag to force one byte order or the other; just set it to be the same on all machines.
Hope this helps,
Alex
Just one more wild guess: you may have forgotten to open the file in binary mode. On Windows, file streams in text mode convert the byte sequence 13,10 to 10.
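A minimal sketch of the fix (the helper name is mine): open both streams with std::ios::binary so no newline translation happens, and verify the bytes survive a roundtrip:

```cpp
#include <fstream>

// Write bytes (including the CR,LF pair 13,10) with std::ios::binary and
// read them back; returns true when the bytes survive untouched.
bool roundtrip_binary(const char *path)
{
    const char data[4] = {'A', 13, 10, 'B'};

    std::ofstream out(path, std::ios::binary);
    out.write(data, sizeof(data));
    out.close();

    std::ifstream in(path, std::ios::binary);
    char back[4] = {};
    in.read(back, sizeof(back));

    // In binary mode the 13,10 pair comes back intact on every platform.
    return in.gcount() == 4 && back[1] == 13 && back[2] == 10;
}
```

Without std::ios::binary on the ifstream, Windows would collapse the 13,10 pair and the read would come up one byte short, shifting every later field.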
Did you consider using serialization libraries or formats, e.g.:
XDR (supported by libc) or ASN.1
s11n (a C++ serialization library)
JSON, a very simple textual format with many libraries for it, e.g. JsonCpp, Jansson, Jaula, ...
YAML, a more powerful textual format, with many libraries
or even XML, which is often used for serialization purposes...
(And for serialization of scalars, htonl and its companion routines should help.)
This thread is ok.
How to get Processor and Motherboard Id?
I want to get the processor ID using C++ code, without WMI or any third-party lib,
or anything else on a computer that turns out to be unique.
One option is the Ethernet MAC address, but that is removable on some machines. I want to use this mostly for licensing purposes.
Is the processor ID unique and available on all major processors?
I had a similar problem lately and I did the following. First I gained some unique system identification values:
GetVolumeInformation for HDD serial number
GetComputerName (this of course is not unique, but our system was using the computer names to identify clients on a LAN, so it was good for me)
__cpuid (and specifically the PSN - processor serial number field)
GetAdaptersInfo for MAC addresses
I took these values and combined them in an arbitrary but deterministic way (read update below!) (adding, xoring, dividing and keeping the remainder etc.). Iterate over the values as if they were strings and be creative. In the end, you will get a byte literal which you can transform to the ASCII range of letters and numbers to get a unique, "readable" code that doesn't look like noise.
Another approach can be simply concatenating these values and then "cover them up" with xoring something over them (and maybe transforming to letters again).
I'm saying it's unique because at least one of the inputs is supposed to be unique (the MAC address). Of course you need some understanding of number theory not to blow away this uniqueness, but it should be good enough anyway.
Important update: Since this post I have learned a few things about cryptography, and I am of the opinion that making up an arbitrary combination (essentially your own hash) is almost certainly a bad idea. Hash functions used in practice are constructed to be well-behaved (as in: a low probability of collisions) and hard to break (i.e. it is hard to construct a value that has the same hash as another). Constructing such a function is a very hard computer-science problem, and unless you are qualified, you shouldn't attempt it. The correct approach is to concatenate whatever information you can collect about the hardware (e.g. the items I listed above) and use a cryptographic hash or digital signature to get a verifiable and secure output. Do not implement the cryptographic algorithms yourself either; there are lots of vulnerability pitfalls that take a lot of knowledge to avoid. Use a well-known and trusted library for the implementation of the algorithms.
If you're using Visual Studio, Microsoft provides the __cpuid intrinsic in the <intrin.h> header. There is an example on the linked MSDN site.
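On GCC/Clang the equivalent lives in <cpuid.h>. A hedged sketch (the helper name is mine) that reads the CPUID vendor string on either toolchain, returning an empty string on non-x86 targets:

```cpp
#include <cstring>
#include <string>

#if defined(_MSC_VER)
#include <intrin.h>   // __cpuid on MSVC
#elif defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
#include <cpuid.h>    // __get_cpuid on GCC/Clang
#endif

// Return the 12-character CPU vendor string (e.g. "GenuineIntel"),
// or an empty string when cpuid is unavailable.
std::string cpu_vendor()
{
    unsigned int regs[4] = {0, 0, 0, 0};  // eax, ebx, ecx, edx
#if defined(_MSC_VER)
    int r[4];
    __cpuid(r, 0);
    regs[1] = r[1]; regs[2] = r[2]; regs[3] = r[3];
#elif defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
    if (!__get_cpuid(0, &regs[0], &regs[1], &regs[2], &regs[3]))
        return "";
#else
    return "";
#endif
    // Leaf 0 stores the vendor string in ebx, edx, ecx - in that order.
    char vendor[13] = {};
    memcpy(vendor + 0, &regs[1], 4);
    memcpy(vendor + 4, &regs[3], 4);
    memcpy(vendor + 8, &regs[2], 4);
    return vendor;
}
```

Note that the PSN (processor serial number) leaf is disabled on most modern CPUs, so the vendor/model information here is identifying hardware class, not an individual chip.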
Hm...
There are special libraries that generate a unique ID based on the installed hardware (so for a given computer this ID is always the same). Most of them take the motherboard ID + HDD ID + CPU ID and mix these values.
Why reinvent the wheel? Why not use these libraries? Any serious reason?
You can use command line.
wmic cpu list full
wmic baseboard list full
Or WMI interface
#include <wmi.hpp>
#include <wmiexception.hpp>
#include <wmiresult.hpp>
#include <../src/wmi.cpp>
#include <../src/wmiresult.cpp>
#pragma comment(lib, "wbemuuid.lib")
struct Win32_WmiCpu
{
void setProperties(const WmiResult& result, std::size_t index)
{
//EXAMPLE EXTRACTING PROPERTY TO CLASS
result.extract(index, "ProcessorId", (*this).m_cpuID);
}
static std::string getWmiClassName()
{
return "Win32_Processor";
}
    std::string m_cpuID;
    //All the other properties you wish to read from WMI
}; //end struct Win32_WmiCpu
struct Win32_WmiMotherBoard
{
void setProperties(const WmiResult& result, std::size_t index)
{
//EXAMPLE EXTRACTING PROPERTY TO CLASS
result.extract(index, "SerialNumber", (*this).m_mBId);
}
static std::string getWmiClassName()
{
return "Win32_BaseBoard";
}
    std::string m_mBId;
}; //end struct Win32_WmiMotherBoard
try
{
const Win32_WmiCpu cpu = Wmi::retrieveWmi<Win32_WmiCpu>();
strncpy_s(ret.m_cpu, cpu.m_cpuID.c_str(), _TRUNCATE);
}
catch (const Wmi::WmiException& )
{
}
try
{
const Win32_WmiMotherBoard mb = Wmi::retrieveWmi<Win32_WmiMotherBoard>();
strncpy_s(ret.m_mb, mb.m_mBId.c_str(), _TRUNCATE);
}
catch (const Wmi::WmiException& )
{
}
First of all, apologies if there is already a topic like this, but I have not found one... I need to know how to handle a really big number, such as the result of 789^2346:
#include <iostream>
#include <cmath>
using namespace std;
int main () {
cout << pow(789,2346) << endl;
}
You could try the GNU MP Bignum Library or ttmath. This link points to some samples. It is very easy to use.
You need a "big number" library. A popular choice is GNU's Multiple Precision Arithmetic Library (GMP), which has a C interface. It's also been around for a while. Another one, for C++, is the Big Integer Library.
I'm sure there is a list of bignum libraries on SO somewhere, but I cannot find it. There is a tag you could stroll through.
You can consider NTL (Number Theory Library) for C++ - http://www.shoup.net/ntl/ . It's very easy to use.
If you can relax C++ requirement, Perl and Python support big integers natively. PHP supports via bcmath or gmp extensions.
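If a dependency really is out of the question, the idea behind these libraries can be sketched by hand: store the number as a vector of decimal digits, least significant first, and multiply schoolbook-style (a toy illustration of mine, far slower than GMP):

```cpp
#include <string>
#include <vector>

// Compute base^exp as a decimal string using schoolbook arithmetic.
// digits holds the number with its least-significant digit first.
std::string big_pow(unsigned base, unsigned exp)
{
    std::vector<unsigned> digits{1};
    for (unsigned i = 0; i < exp; ++i) {
        unsigned carry = 0;
        for (std::size_t j = 0; j < digits.size(); ++j) {
            unsigned cur = digits[j] * base + carry;
            digits[j] = cur % 10;
            carry = cur / 10;
        }
        while (carry) {           // grow the number as needed
            digits.push_back(carry % 10);
            carry /= 10;
        }
    }
    std::string out;
    for (std::size_t j = digits.size(); j-- > 0; )
        out += static_cast<char>('0' + digits[j]);
    return out;
}
```

big_pow(789, 2346) returns the full several-thousand-digit decimal string, where pow in the question's code would overflow a double to infinity.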