I want to check whether a pubkey belongs to the twisted Edwards curve edwards25519 (I guess this is the curve used for Ed25519?). The problem is that I have some pubkeys that should in theory be valid, like:
hash_hex = "3afe3342f7192e52e25ebc07ec77c22a8f2d1ba4ead93be774f5e4db918d82a0"
or
hash_hex = "fd739be0e59c072096693b83f67fb2a7fd4e4b487e040c5b128ff602504e6c72"
and to check if they are valid I use this from libsodium:
auto result = crypto_core_ed25519_is_valid_point(reinterpret_cast<const unsigned char*>(hash_hex.c_str()));
The thing is that for those pubkeys, which should in theory be valid, I get 0 as the result in both cases, which means the check didn't pass (according to https://doc.libsodium.org/advanced/point-arithmetic#point-validation). So my question is: am I using this function wrong? Should the key be delivered in another form? Or are those keys somehow not valid for some reason (I have them from a coin explorer, so in theory they should be valid)? Is there some online tool where I can check whether those pubkeys belong to that elliptic curve?
You need to convert your hex string into binary format. Internally, the ed25519 functions work on a 256-bit (crypto_core_ed25519_BYTES (32) * 8) unsigned integer. You can compare it with a uint64_t, which consists of 8 octets; the only difference is that there is no standard uint256_t type, so a pointer to an array of 32 unsigned char is used instead. I use std::uint8_t instead of unsigned char below, so if std::uint8_t is not an alias for unsigned char the program will fail to compile.
Converting the hex string to binary format is done like this.
A nibble is 4 bits, which is what a single hex digit can represent. 0 = 0b0000 and f = 0b1111.
You look up each hex character in a lookup table to easily convert the character into the value 0 - 15 (decimal), 0b0000 - 0b1111 (binary).
Since a uint8_t requires two nibbles, you combine them two and two. The first nibble is left shifted 4 steps to form the high part of the final uint8_t and the second nibble is just bitwise ORed (|) with that result. Using fe as an example:
f = 0b1111
e = 0b1110
shift f left 4 steps:
f0 = 0b11110000
bitwise OR with e
  0b11110000
| 0b00001110
------------
  0b11111110 = 254 (dec)
Example:
#include <sodium.h>
#include <cstdint>
#include <iostream>
#include <string_view>
#include <vector>
// a function to convert a hex string into binary format
std::vector<std::uint8_t> str2bin(std::string_view hash_hex) {
    static constexpr std::string_view tab = "0123456789abcdef";
    std::vector<std::uint8_t> res(crypto_core_ed25519_BYTES);
    if (hash_hex.size() == crypto_core_ed25519_BYTES * 2) {
        for (size_t i = 0; i < res.size(); ++i) {
            // find the first nibble and left shift it 4 steps, then find the
            // second nibble and do a bitwise OR to combine them:
            res[i] = tab.find(hash_hex[i * 2]) << 4 | tab.find(hash_hex[i * 2 + 1]);
        }
    }
    return res;
}

int main() {
    // initialize libsodium before calling its other functions
    if (sodium_init() < 0) return 1;

    std::cout << std::boolalpha;
    for (auto hash_hex : {
             "3afe3342f7192e52e25ebc07ec77c22a8f2d1ba4ead93be774f5e4db918d82a0",
             "fd739be0e59c072096693b83f67fb2a7fd4e4b487e040c5b128ff602504e6c72",
             "this should fail" })
    {
        auto bin = str2bin(hash_hex);
        bool result = crypto_core_ed25519_is_valid_point(bin.data());
        std::cout << "is " << hash_hex << " ok: " << result << '\n';
    }
}
Output:
is 3afe3342f7192e52e25ebc07ec77c22a8f2d1ba4ead93be774f5e4db918d82a0 ok: true
is fd739be0e59c072096693b83f67fb2a7fd4e4b487e040c5b128ff602504e6c72 ok: true
is this should fail ok: false
Also note: libsodium comes with helper functions to do this conversion between hex strings and binary format:
char *sodium_bin2hex(char * const hex, const size_t hex_maxlen,
const unsigned char * const bin, const size_t bin_len);
int sodium_hex2bin(unsigned char * const bin, const size_t bin_maxlen,
const char * const hex, const size_t hex_len,
const char * const ignore, size_t * const bin_len,
const char ** const hex_end);
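For example, the str2bin function above could be replaced with a call to sodium_hex2bin (a minimal sketch; error handling kept to the essentials):

#include <sodium.h>
#include <iostream>
#include <string>

int main() {
    if (sodium_init() < 0) return 1;
    std::string hash_hex =
        "3afe3342f7192e52e25ebc07ec77c22a8f2d1ba4ead93be774f5e4db918d82a0";
    unsigned char bin[crypto_core_ed25519_BYTES];
    size_t bin_len = 0;
    // parses exactly 64 hex digits into 32 bytes; returns 0 on success
    if (sodium_hex2bin(bin, sizeof bin, hash_hex.c_str(), hash_hex.size(),
                       nullptr, &bin_len, nullptr) == 0
        && bin_len == sizeof bin)
    {
        std::cout << crypto_core_ed25519_is_valid_point(bin) << '\n'; // 1 = valid
    }
}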
Suppose I want to write decimal 31 into a binary file (which is already loaded into a vector) as 4 bytes, i.e. as 00 00 00 1f, but I don't know how to convert a decimal number into a hex string of 4 bytes.
So, expected hex in vector of unsigned char is:
0x00 0x00 0x00 0x1f // int value of this is 31
To do this I tried the following:
std::stringstream stream;
stream << std::setfill('0') << std::setw(sizeof(int) * 2) << std::hex << 31;
cout << stream.str();
Output:
0000001f
The above code gives the output as a string, but I want it in a vector of unsigned char, so after conversion my output vector should have the elements 0x00 0x00 0x00 0x1F.
Without bothering with endianness you could copy the int value into a character buffer of the appropriate size. This buffer could be the vector itself.
Perhaps something like this:
#include <cstdint>
#include <cstring>
#include <vector>

std::vector<uint8_t> int_to_vector(unsigned value)
{
    // Create a vector of unsigned characters (bytes on a byte-oriented platform)
    // The size will be set to the same size as the value type
    std::vector<uint8_t> buffer(sizeof value);
    // Do a byte-wise copy of the value into the vector data
    std::memcpy(buffer.data(), &value, sizeof value);
    return buffer;
}
The order of bytes in the vector will always be in the host's native order. If a specific order is mandated, then each byte of the multi-byte value needs to be copied into a specific element of the array using bitwise operations (std::memcpy can't be used).
Also note that this function will break strict aliasing if uint8_t isn't an alias for unsigned char. And uint8_t is an optional type; there are platforms that don't have 8-bit entities (though they are not common).
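If you want that caveat checked at build time rather than discovered at run time, a small compile-time guard could look like this (my addition, not part of the original answer; std::is_same_v requires C++17):

#include <cstdint>
#include <type_traits>

// fail the build on platforms where uint8_t is a distinct type
static_assert(std::is_same_v<std::uint8_t, unsigned char>,
              "uint8_t is not an alias for unsigned char here");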
For an endianness-specific variant, where each value of a byte is extracted one by one and added to the vector, perhaps something like this:
std::vector<uint8_t> int_to_be_vector(unsigned value)
{
// Create a vector of unsigned characters (bytes on a byte-oriented platform)
// The size will be set to the same size as the value type
std::vector<uint8_t> buffer(sizeof value);
// For each byte in the multi-byte value, copy it to the "correct" place in the vector
for (size_t i = buffer.size(); i > 0; --i)
{
// The cast truncates the value, dropping all but the lowest eight bits
buffer[i - 1] = static_cast<uint8_t>(value);
value >>= 8;
}
return buffer;
}
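For example, feeding the value 31 from the question through the endianness-specific variant gives the expected byte sequence (a quick check; assumes the function above is in scope):

#include <iomanip>
#include <iostream>

int main()
{
    const auto bytes = int_to_be_vector(31);
    std::cout << std::hex << std::setfill('0');
    for (const auto b : bytes)
        std::cout << std::setw(2) << static_cast<unsigned>(b) << ' ';
    // prints: 00 00 00 1f (with a 32-bit unsigned)
}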
You could use a loop to extract one byte at a time of the original number and store that in a vector.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <vector>

using u8 = std::uint8_t;
using u32 = std::uint32_t;

std::vector<u8> GetBytes(const u32 number) {
    const u32 mask{0xFF};
    u32 remaining{number};
    std::vector<u8> result{};
    while (remaining != 0u) {
        const u32 bits{remaining & mask};
        const u8 res{static_cast<u8>(bits)};
        result.push_back(res);
        remaining >>= 8u;
    }
    std::reverse(std::begin(result), std::end(result));
    return result;
}

int main() {
    const u32 myNumber{0xABC123};
    const auto bytes{GetBytes(myNumber)};
    std::cout << std::hex << std::showbase;
    for (const auto b : bytes) {
        std::cout << static_cast<u32>(b) << ' ';
    }
    std::cout << std::endl;
    return 0;
}
The output of this program is:
0xab 0xc1 0x23
I need to define a character in C++. Right now I have:
#define ENQ (char)5
Is there a way to define a character similar to how you can define a long using the l suffix (e.g. 5l)?
The cast is fine; it has no run-time impact. But you can define character constants of any value directly using the \x escape sequence, which specifies characters by their hexadecimal character code - useful for non-printing or extended characters.
#define ASCII_ENQ '\x5'
But in C++ you'd do better to use a const (which has explicit type):
static const char ASCII_ENQ = 5;
or
static const char ASCII_ENQ = '\x5';
if you strive for complete type agreement (not really necessary).
That would be a character literal, 'a' for instance:
#define ALPHA 'a'
#define FIVE_CHAR '5'
You can also use a user-defined literal and write something like this:

#include <iostream>

const char operator ""_ch(unsigned long long i) {
    return i;
}

int main() {
    char enq = 5_ch;    // ASCII ENQ (5)
    char alpha = 65_ch; // ASCII 'A'
    std::cout << alpha << '\n';
    return 0;
}
But it's a bit of overkill for something you can more easily express with:
const char ENQ = 5;
Unless you are actually trying to do things as tricky as:
#include <iostream>

// convert a number (a figure) to its ASCII code
const char operator ""_ch(unsigned long long i) {
    return i < 10 ? i + 48 : 48; // 48 is the ASCII code of '0'
}

int main() {
    char num_as_char = 5_ch;
    std::cout << num_as_char << '\n'; // which outputs 5
    return 0;
}
I have a string of hex values separated by white spaces.
std::string hexString = "0x60 0xC7 0x80" and so on...
I need to read this and store it in an unsigned char array:
unsigned char array[] = {0x60, 0xC7, 0x80}.
I am stuck with this. Could someone please help?
Scenario:
I am writing an AES 256 CBC encryption/decryption program. The encryption and decryption pieces are isolated. We are planning to encrypt DB passwords, replacing the clear text stored as (key, value) pairs in the config files with encrypted values. A standalone encryption binary produces the hex equivalent; we encrypt all the necessary attributes separately and write them into the config file.
The application at run time should do the decryption of those configs to use it for connection to DB, etc.
I have to read hex string and send it as char array to the AES decryption algorithm.
Here's an example:
#include <iostream>
#include <regex>
#include <string>
#include <vector>

template <typename Iterator>
class iterator_pair {
public:
    iterator_pair(Iterator first, Iterator last): f_(first), l_(last) {}
    Iterator begin() const { return f_; }
    Iterator end() const { return l_; }
private:
    Iterator f_;
    Iterator l_;
};

template <typename Iterator>
iterator_pair<Iterator> make_iterator_pair(Iterator f, Iterator l) {
    return iterator_pair<Iterator>(f, l);
}

int main() {
    std::string hexString = "0x60 0xC7 0x80";
    std::regex whitespace("\\s+");
    std::vector<int> hexArray;
    // split the string on whitespace, then parse each token;
    // base 0 lets stoi auto-detect the 0x prefix
    auto tokens = make_iterator_pair(
        std::sregex_token_iterator(hexString.begin(), hexString.end(), whitespace, -1),
        std::sregex_token_iterator());
    for (auto token : tokens)
        hexArray.push_back(stoi(token, 0, 0));
    for (auto hex : hexArray)
        std::cout << hex << std::endl;
}
I have std::string hexString... I need to store it in unsigned char array[].
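If unsigned char values are needed rather than int, a minimal tweak to the loop above could look like this (a sketch; assumes every parsed value fits in a byte):

std::vector<unsigned char> byteArray;
for (auto token : tokens)
    byteArray.push_back(static_cast<unsigned char>(stoi(token, 0, 0)));
// byteArray.data() now gives access to a contiguous unsigned char array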
Please take a look at the following solution, which is based on the strtoul function. The input string to strtoul may consist of any number of blanks and/or tabs, possibly followed by a sign, followed by a string of digits.
So even imperfect input like " 01 2 \t ab \t\t 3\n" can be properly parsed.
Function:
unsigned char * create_hex_array (const std::string * str, size_t * nr_of_elements);
takes a pointer to the input std::string hexString, creates and returns an unsigned char array, and reports the count of elements in the array.
#include <iostream> // std::cout, std::endl
#include <sstream>  // std::stringstream
#include <iomanip>  // std::setfill, std::setw
#include <string>   // std::string
#include <cstdlib>  // strtoul

unsigned char * create_hex_array (const std::string * str, size_t * nr_of_elements)
{
    size_t index = 0;                                // elements counter
    size_t arr_len = str->size () / 2 + 1;           // maximum number of array elements
    const char *current = str->c_str ();             // start from the beginning of the string
    char *end = NULL;                                // init the end pointer
    unsigned char *arr = new unsigned char[arr_len]; // allocate memory for our array
    while (1)
    {
        unsigned long hex = strtoul (current, &end, 16);
        if (current == end)               // end of string reached
            break;
        arr[index] = (unsigned char) hex; // store the hex number in the array
        index++;                          // increase index to the next slot in array
        current = end;                    // move to the next number
    }
    *nr_of_elements = index; // return number of elements in the array
    return arr;              // return created array
}

void print_hex_array (size_t size, unsigned char *arr)
{
    std::cout << "Nr of elements in the array is = " << size << std::endl;
    // Fancy print out: write the buffer into a stringstream in hex
    // representation, with 0-padding on small values so `5` becomes `05`
    std::stringstream ss;
    ss << std::hex << std::setfill ('0');
    for (size_t i = 0; i < size; i++)
    {
        // In C this is enough: printf("%02x ", arr[i]);
        ss << std::setw (2) << static_cast<unsigned>(arr[i]) << " ";
    }
    std::cout << ss.rdbuf() << std::endl;
}

int main ()
{
    const std::string hexString = " 5 8d 77 77 96 1 \t\t bc 95 b9 ab 9d 11 \n";
    size_t arr_size; // Number of elements in the array
    unsigned char * array = create_hex_array (&hexString, &arr_size);
    print_hex_array (arr_size, array); // printout with 0-padding
    delete[] array;  // release memory
    return 0;
}
Output:
Nr of elements in the array is = 12
05 8d 77 77 96 01 bc 95 b9 ab 9d 11
Can I use itoa() for converting long long int to a binary string?
I have seen various examples of converting an int to binary using itoa. Is there a risk of overflow or perhaps loss of precision if I use long long int?
Edit
Thanks all of you for replying. I achieved what I was trying to do. itoa() was not useful enough, as it does not support long long int. Moreover I can't use itoa() in gcc as it is not a standard library function.
To convert an integer to a string containing only binary digits, you can check each bit in the integer with a one-bit mask and append the result to the string.
Something like this:
#include <string>

std::string convert_to_binary_string(const unsigned long long int value,
                                     bool skip_leading_zeroes = false)
{
    std::string str;
    bool found_first_one = false;
    const int bits = sizeof(unsigned long long) * 8; // Number of bits in the type
    for (int current_bit = bits - 1; current_bit >= 0; current_bit--)
    {
        if ((value & (1ULL << current_bit)) != 0)
        {
            found_first_one = true;
            str += '1';
        }
        else
        {
            if (!skip_leading_zeroes || found_first_one)
                str += '0';
        }
    }
    return str;
}
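For instance (a quick sanity check; assumes the function above is in scope):

#include <iostream>

int main()
{
    // 5 is 0b101; leading zeroes are skipped
    std::cout << convert_to_binary_string(5, true) << '\n'; // prints 101
}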
Edit:
A more general way of doing it might be done with templates:
#include <string>
#include <type_traits>

template<typename T>
std::string convert_to_binary_string(const T value, bool skip_leading_zeroes = false)
{
    // Make sure the type is an integer
    static_assert(std::is_integral<T>::value, "Not integral type");
    std::string str;
    bool found_first_one = false;
    const int bits = sizeof(T) * 8; // Number of bits in the type
    for (int current_bit = bits - 1; current_bit >= 0; current_bit--)
    {
        if ((value & (1ULL << current_bit)) != 0)
        {
            found_first_one = true;
            str += '1';
        }
        else
        {
            if (!skip_leading_zeroes || found_first_one)
                str += '0';
        }
    }
    return str;
}
Note: Both static_assert and std::is_integral are part of C++11, but both are supported in Visual C++ 2010 and in GCC from at least 4.4.5.
Yes, you can. As you showed yourself, itoa can be called with base 2, which means binary.
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int i;
    char str[33];

    i = 37; /* Just some number. */
    itoa (i, str, 2);
    printf("binary: %s\n", str);

    return 0;
}
Also, yes, there will be truncation if you use an integer type larger than int, since itoa() takes only a plain int as the value. long long is probably 64 bits on your compiler while int is probably 32 bits, so the compiler would truncate the 64-bit value to a 32-bit value before the conversion.
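To see the truncation itoa() would cause, here is a small sketch of the underlying int conversion (my example, not using itoa itself; assumes a 32-bit int and a 64-bit long long):

#include <cstdio>

int main()
{
    long long big = 0x100000025LL;        // needs more than 32 bits
    int narrowed = static_cast<int>(big); // keeps only the low 32 bits
    std::printf("%d\n", narrowed);        // prints 37 (0x25), the rest is lost
}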
Your wording is a bit confusing. Normally, if you said 'decimal' I would take that to mean 'a number represented as a string of decimal digits', while you seem to mean 'integer'. And by 'binary' I would take that to mean 'a number represented as bytes, as directly usable by the CPU'. A better way of phrasing your subject would be: converting a 64-bit integer to a string of binary digits. Some systems have a _i64toa function.
You can use std::bitset for this purpose
#include <bitset>
#include <climits>
#include <string>

template<typename T>
inline std::string to_binary_string(const T value)
{
    // sizeof is in bytes, so scale by CHAR_BIT to get the number of bits
    return std::bitset<sizeof(T) * CHAR_BIT>(value).to_string();
}

std::cout << to_binary_string(10240);
std::cout << to_binary_string(123LL);
I want to convert a four character string (i.e. four characters) into a long (i.e. convert them to ASCII codes and then put them into the long).
As I understand it, this is done by writing the first character to the first byte of the long, the second to the adjacent memory location, and so on. But I don't know how to do this in C++.
Can someone please point me in the right direction?
Thanks in advance.
Here's your set of four characters:
const unsigned char buf[4] = { 'a', '0', '%', 'Q' };
Now we assemble a 32-bit unsigned integer:
const uint32_t n = (buf[0]) | (buf[1] << 8) | (buf[2] << 16) | (buf[3] << 24);
Here I assume that buf[0] is the least significant one; if you want to go the other way round, just swap the indices around.
Let's confirm:
printf("n = 0x%08X\n", n); // we get n = 0x51253061
// Q % 0 a
Important: Make sure your original byte buffer is unsigned, or otherwise add explicit casts like (unsigned int)(unsigned char)(buf[i]); otherwise the shift operations are not well defined.
Word of warning: I would strongly prefer this algebraic solution over the possibly tempting const uint32_t n = *(uint32_t*)(buf), which is machine-endianness dependent and will make your compiler angry if you're using strict aliasing assumptions!
As was helpfully pointed out below, you can try to be even more portable by not making assumptions about the bit size of a byte:

#include <climits> // for CHAR_BIT

// the casts keep the shifts within a type wide enough to hold all four
// bytes even if CHAR_BIT > 8 (the "very long" type, so to speak)
const unsigned long long n = buf[0] |
                             ((unsigned long long)buf[1] << CHAR_BIT) |
                             ((unsigned long long)buf[2] << (CHAR_BIT * 2)) |
                             ((unsigned long long)buf[3] << (CHAR_BIT * 3));
Feel free to write your own generalizations as needed! (Good luck figuring out the appropriate printf format string ;-) .)
If your bytes are in the correct order for a long on your machine, then use memcpy, something like this:
#include <cstring>
#include <iostream>

int main()
{
    char data[] = {'a', 'b', 'c', 'd'};
    long result = 0; // zero-init in case long is wider than 4 bytes
    std::memcpy(&result, data, 4);
    std::cout << result << "\n";
}
Note that this will be platform dependent for byte ordering in the long, which may or may not be what you need. And the 4 is hard coded as the size in bytes of the long for simplicity; you would NOT hard code 4 in a real program, of course. All the compilers I've tried this on optimize out the memcpy when optimization is enabled, so it's likely to be efficient too.
EDIT: Go with the shift and add answer someone else posted unless this meets your specific requirements as it's much more portable and safe!
#include <string>
#include <iostream>

std::string fourCharCode_toString ( int value )
{
    // reinterpret the int's bytes as characters, in host byte order
    return std::string( reinterpret_cast<const char*>( &( value ) ), sizeof(int) );
}

int fourCharCode_toInt ( const std::string & value )
{
    return *( reinterpret_cast<const int*>( value.data() ) );
}

int main()
{
    int a = 'DROW'; // multicharacter literal; its value is implementation-defined
    std::string str = fourCharCode_toString( a );
    int b = fourCharCode_toInt( str );
    std::cout << str << "\n"; // prints "WORD" on common little-endian platforms
}
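Note that fourCharCode_toInt dereferences a reinterpret_cast pointer, which can run into strict-aliasing and alignment problems. A safer sketch of the same conversion, using the memcpy technique from the earlier answer (my addition, not part of the original):

#include <cstring>
#include <string>

int fourCharCode_toInt_safe ( const std::string & value )
{
    int result = 0;
    // memcpy is the well-defined way to reinterpret raw bytes as an int
    std::memcpy(&result, value.data(), sizeof result);
    return result;
}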