I am using the following code to convert raw data values to a hex string so I can extract some information from them, but I am getting FFFFFFFF where I expected FF.
For example, the result should be "FF 01 00 00 EC 00 00 00 00 00 00 00 00 00 E9", but I am getting "FFFFFFFF 01 00 00 FFFFFFEC 00 00 00 00 00 00 00 00 00 FFFFFFE9".
Does anyone know what is happening here?
std::vector<unsigned char> buf;
buf.resize( ANSWER_SIZE);
// Read from socket
m_pSocket->Read( &buf[0], buf.size() );
string result( buf.begin(), buf.end() );
result = ByteUtil::rawByteStringToHexString( result );
std::string ByteUtil::int_to_hex( int i )
{
    std::stringstream sstream;
    sstream << std::hex << i;
    return sstream.str();
}
std::string ByteUtil::rawByteStringToHexString(std::string str)
{
    std::string aux = "", temp = "";
    for (unsigned int i = 0; i < str.size(); i++) {
        temp += int_to_hex(str[i]);
        if (temp.size() == 1) {
            aux += "0" + temp + " "; // completes with 0
        } else if (i != (str.size() - 1)) {
            aux += temp + " ";
        }
        temp = "";
    }
    return aux;
}
UPDATE: While debugging, I noticed that int_to_hex is returning FFFFFFFF instead of FF. How can I fix that?
The problem is sign extension: on your platform char is signed, so a byte like 0xFF is stored as a negative value, gets sign-extended when promoted to int, and std::hex then prints the full 32-bit value. Note the use of unsigned instead of int in int_to_hex().
#include <iostream>
#include <sstream>

std::string int_to_hex(unsigned i)
{
    std::stringstream sstream;
    sstream << std::hex << i;
    return sstream.str();
}

int main() {
    std::cout << int_to_hex(0xFF) << "\n";
}
Output
[1:58pm][wlynch#watermelon /tmp] ./foo
ff
Additionally...
We can also see, in the question you are looking at, that they use a static_cast to achieve the same result:
ss << std::setw(2) << static_cast<unsigned>(buffer[i]);
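Applied to your original int_to_hex, a minimal sketch of the fix (assuming the bytes arrive as plain char) is to convert through unsigned char before widening:
#include <sstream>
#include <string>

// Sketch: converting via unsigned char widens a byte like 0xFF to 0xFF
// instead of sign-extending it to 0xFFFFFFFF.
std::string int_to_hex(char c)
{
    std::stringstream sstream;
    sstream << std::hex << static_cast<unsigned>(static_cast<unsigned char>(c));
    return sstream.str();
}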
OK guys, sorry for the delay; the weekend got in the way and I had some FIFA World Cup games to watch.
I was getting the same result no matter what change I made, so I decided to change the code one step earlier and switched the input parameter from
std::string ByteUtil::rawByteStringToHexString(std::string str)
to
std::string ByteUtil::rawByteStringToHexString(vector<unsigned char> v)
and also kept the changes suggested by @sharth. Now I am getting the correct result.
Thank you all for the help!!
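For anyone who needs it, here is a sketch of what the final function can look like (a hypothetical reconstruction, not my exact code); the point is that unsigned char elements promote to non-negative values, so no sign extension can occur:
#include <cstddef>
#include <iomanip>
#include <sstream>
#include <string>
#include <vector>

std::string rawByteStringToHexString(const std::vector<unsigned char>& v)
{
    std::stringstream ss;
    ss << std::hex << std::uppercase << std::setfill('0');
    for (std::size_t i = 0; i < v.size(); ++i) {
        if (i != 0) ss << ' ';                        // space-separate the bytes
        ss << std::setw(2) << static_cast<unsigned>(v[i]);
    }
    return ss.str();
}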
Related
I am trying to write a C++ program for my Computer Machine Organization class in which I perform a memory dump in hex on some address stored in memory. I don't really understand what a memory dump is, and am pretty new to writing C++. My questions are:
How can I create a method that takes two arguments in which they specify address in memory?
How can I further modify those arguments to specify a word address that is exactly 4 bytes long?
How can I then convert those addresses into hex values?
I know that this is a lot, but thank you for any suggestions.
For anyone who needs it, here is my code so far:
#include <stdio.h>

// Create something to do the methods on
char array[3] = {'a', 'b', 'c'};

void mdump(char start, char end){
    // Create pointers to get the address of the starting and ending characters
    char* pointer1 = (char *)& start;
    char* pointer2 = (char *)& end;
    // Check to see if starting pointer is in lower memory than ending pointer
    if(pointer1 < pointer2){
        printf("Passed");
    }
    else{
        printf("Failed");
    }
    // Modify both the arguments so that each of them are exactly 4 bytes
    // Create a header for the dump
    // Iterate through the addresses, from start pointer to end pointer, and produce lines of hex values
    // Declare a struct to format the values
    // Add code that creates printable ASCII characters for each memory location (print "cntrl-xx" for values 0-31, or map them into a blank)
    // Print the values in decimal and in ASCII form
}

int main(){
    mdump(array[0], array[2]);
    return 0;
}
How to write a Hex dump tool while learning C++:
Start with something simple:
#include <iostream>

int main()
{
    char test[32] = "My sample data";
    // output character
    std::cout << test[0] << '\n';
}
Output:
M
Print the hex-value instead of the character:
#include <iostream>

int main()
{
    char test[32] = "My sample data";
    // output a character as hex-code
    std::cout << std::hex << test[0] << '\n'; // Uh oh -> still a character
    std::cout << std::hex << (unsigned)(unsigned char)test[0] << '\n';
}
Output:
M
4d
Note:
The stream output operator for char is intended to print a character (of course). There is another stream output operator for unsigned which fits better. To ensure that this one is used, the char has to be converted to unsigned.
But be prepared: the C++ standard doesn't mandate whether char is signed or unsigned; this decision is left to the compiler vendor. To be on the safe side, the char is first converted to unsigned char and then converted to unsigned.
Print the address of the variable with the character:
#include <iostream>

int main()
{
    char test[32] = "My sample data";
    // output an address
    std::cout << &test[0] << '\n'; // Uh oh -> wrong output stream operator
    std::cout << (const void*)&test[0] << '\n';
}
Output:
My sample data
0x7ffd3baf9b70
Note:
There is a stream output operator for const char* which is intended to print a (zero-terminated) string. That is not what is intended here. Hence the (ugly) trick of casting to const void*, which triggers another stream output operator that fits better.
What if the data is not a 2 digit hex?
#include <iomanip>
#include <iostream>

int main()
{
    // output character as 2 digit hex-code
    std::cout << (unsigned)(unsigned char)'\x6' << '\n'; // Uh oh -> output not with two digits
    std::cout << std::hex << std::setw(2) << std::setfill('0')
        << (unsigned)(unsigned char)'\x6' << '\n';
}
Output:
6
06
Note:
There are I/O manipulators which can be used to modify the formatting of (some) stream output operators.
Now, put it all together (in loops) et voilà: a hex-dump.
#include <iomanip>
#include <iostream>

int main()
{
    char test[32] = "My sample data";
    // output an address
    std::cout << (const void*)&test[0] << ':';
    // output the contents
    for (char c : test) {
        std::cout << ' '
            << std::hex << std::setw(2) << std::setfill('0')
            << (unsigned)(unsigned char)c;
    }
    std::cout << '\n';
}
Output:
0x7ffd345d9820: 4d 79 20 73 61 6d 70 6c 65 20 64 61 74 61 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Make it nice:
#include <algorithm>
#include <iomanip>
#include <iostream>

int main()
{
    char test[32] = "My sample data";
    // hex dump
    const size_t len = sizeof test;
    for (size_t i = 0; i < len; i += 16) {
        // output an address
        std::cout << (const void*)&test[i] << ':';
        // output the contents
        for (size_t j = 0, n = std::min<size_t>(len - i, 16); j < n; ++j) {
            std::cout << ' '
                << std::hex << std::setw(2) << std::setfill('0')
                << (unsigned)(unsigned char)test[i + j];
        }
        std::cout << '\n';
    }
}
Output:
0x7fffd341f2b0: 4d 79 20 73 61 6d 70 6c 65 20 64 61 74 61 00 00
0x7fffd341f2c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Make it a function:
#include <algorithm>
#include <iomanip>
#include <iostream>

void hexdump(const char* data, size_t len)
{
    // hex dump
    for (size_t i = 0; i < len; i += 16) {
        // output an address
        std::cout << (const void*)&data[i] << ':';
        // output the contents
        for (size_t j = 0, n = std::min<size_t>(len - i, 16); j < n; ++j) {
            std::cout << ' '
                << std::hex << std::setw(2) << std::setfill('0')
                << (unsigned)(unsigned char)data[i + j];
        }
        std::cout << '\n';
    }
}

int main()
{
    char test[32] = "My sample data";
    std::cout << "dump test:\n";
    hexdump(test, sizeof test);
    std::cout << "dump 4 bytes of test:\n";
    hexdump(test, 4);
    std::cout << "dump an int:\n";
    int n = 123;
    hexdump((const char*)&n, sizeof n);
}
Output:
dump test:
0x7ffe900f4ea0: 4d 79 20 73 61 6d 70 6c 65 20 64 61 74 61 00 00
0x7ffe900f4eb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
dump 4 bytes of test:
0x7ffe900f4ea0: 4d 79 20 73
dump an int:
0x7ffe900f4e9c: 7b 00 00 00
Note:
(const char*)&n may look a bit adventurous. In fact, pointer conversions are something that should ideally not be necessary at all. However, for the dump tool this is the easiest way to access the bytes of arbitrary data. (This is one of the rare cases which is explicitly allowed by the standard.)
An even nicer hexdump can be found in
SO: How would I create a hex dump utility in C++?
(which I had recommended to the OP beforehand).
I have a block of memory allocated as a char buffer; is it legal to view it via a buffer of another type?
char* buffer = new char[1000];
int64_t* int64_view = static_cast<int64_t*>(static_cast<void*>(buffer));
Is int64_view[0] guaranteed to correspond to first 8 bytes of buffer?
I am a bit concerned about aliasing: if the char buffer is only 1-byte aligned and int64_t must be 8-byte aligned, how does the compiler handle it?
Your example is a violation of the strict aliasing rule.
int64_view will indeed point to the first byte, but the access may be unaligned. Some platforms allow that, some do not. Either way, in C++ it is UB.
For example:
#include <cstdint>
#include <cstddef>
#include <iostream>
#include <iomanip>

#define COUNT 8

struct alignas(1) S
{
    char _pad;
    char buf[COUNT * sizeof(int64_t)];
};

int main()
{
    S s;
    int64_t* int64_view alignas(8) = static_cast<int64_t*>(static_cast<void*>(&s.buf));
    std::cout << std::hex << "s._pad at " << (void*)(&s._pad) << " aligned as " << alignof(s._pad) << std::endl;
    std::cout << std::hex << "s.buf at " << (void*)(s.buf) << " aligned as " << alignof(s.buf) << std::endl;
    std::cout << std::hex << "int64_view at " << int64_view << " aligned as " << alignof(int64_view) << std::endl;
    for(std::size_t i = 0; i < COUNT; ++i)
    {
        int64_view[i] = i;
    }
    for(std::size_t i = 0; i < COUNT; ++i)
    {
        std::cout << std::dec << std::setw(2) << i << std::hex << " " << int64_view + i << " : " << int64_view[i] << std::endl;
    }
}
Now compile and run it with -fsanitize=undefined:
$ g++ -fsanitize=undefined -Wall -Wextra -std=c++20 test.cpp -o test
$ ./test
s._pad at 0x7ffffeb42300 aligned as 1
s.buf at 0x7ffffeb42301 aligned as 1
int64_view at 0x7ffffeb42301 aligned as 8
test.cpp:26:23: runtime error: store to misaligned address 0x7ffffeb42301 for type 'int64_t', which requires 8 byte alignment
0x7ffffeb42301: note: pointer points here
7f 00 00 bf 11 00 00 00 00 00 00 ff ff 00 00 01 00 00 00 20 23 b4 fe ff 7f 00 00 7c a4 9d 2b 98
^
test.cpp:31:113: runtime error: load of misaligned address 0x7ffffeb42301 for type 'int64_t', which requires 8 byte alignment
0x7ffffeb42301: note: pointer points here
7f 00 00 bf 00 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 03 00 00 00
^
0 0x7ffffeb42301 : 0
1 0x7ffffeb42309 : 1
2 0x7ffffeb42311 : 2
3 0x7ffffeb42319 : 3
4 0x7ffffeb42321 : 4
5 0x7ffffeb42329 : 5
6 0x7ffffeb42331 : 6
7 0x7ffffeb42339 : 7
It works on x86_64, but there is undefined behavior, and you pay for it in execution speed.
In C++20 there is std::bit_cast. It will not help you with the unaligned access in this example, but it can resolve some issues with aliasing.
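For illustration, a minimal sketch of std::bit_cast; source and destination must have the same size, and the copy goes through values rather than through an aliased pointer:
#include <array>
#include <bit>
#include <cstdint>
#include <iostream>

int main()
{
    // bit_cast copies the object representation: no aliasing violation and
    // no misaligned int64_t load, because the bytes are read as bytes.
    std::array<char, 8> bytes{1, 0, 0, 0, 0, 0, 0, 0};
    auto value = std::bit_cast<int64_t>(bytes);
    std::cout << value << '\n'; // prints 1 on a little-endian machine
}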
UPDATE:
There are instructions on x86_64 that require aligned access. For example, SSE instructions require 16-byte alignment. If you try to use them with unaligned operands, the application will crash with a general protection fault.
Going through void* will definitely lead to UB. static_cast loses its value when you first cast your type to the most generic type, void*, because you can cast anything to/from void*. It is no different from using reinterpret_cast to cast straight from your type to any other pointer type.
Consider the following example:
int64_t* int64_view = reinterpret_cast<int64_t*>(buffer);
It might work, and it might not - UB.
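If you only need the values, a memcpy-based sketch sidesteps both the aliasing and the alignment problem; compilers typically optimize the copy into a single load:
#include <cstdint>
#include <cstring>

// Read an int64_t value out of a char buffer without aliasing or alignment UB.
int64_t read_int64(const char* buffer)
{
    int64_t value;
    std::memcpy(&value, buffer, sizeof value);
    return value;
}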
The server needs to send a std::vector<float> to a Qt application over a TCP socket. I am using Qt 5.7.
On the server side, using boost::asio:
std::vector<float> message_ = {1.2, 8.5};
asio::async_write(socket_, asio::buffer<float>(message_),
[this, self](std::error_code ec, std::size_t)
This works and I manage to get it back on my client using boost::asio's read_some(). As both Qt and asio have their own event manager, I want to avoid using asio in my Qt app.
So on the client side I have (which does not work):
client.h:
#define FLOATSIZE 4
QTcpSocket *m_socket;
QDataStream m_in;
QString *m_string;
QByteArray m_buff;
client.cpp (constructor):
m_in.setDevice(m_socket);
m_in.setFloatingPointPrecision(QDataStream::SinglePrecision);
// m_in.setByteOrder(QDataStream::LittleEndian);
client.cpp (read function, which is connected via QObject::connect(m_socket, &QIODevice::readyRead, this, &mywidget::ask2read); ):
uint availbytes = m_socket->bytesAvailable(); // which is 8, so that seems good
while (availbytes >= FLOATSIZE)
{
    nbytes = m_in.readRawData(m_buff.data(), FLOATSIZE);
    bool conv_ok = false;
    const float f = m_buff.toFloat(&conv_ok);
    availbytes = m_socket->bytesAvailable();
    m_buff.clear();
}
The m_buff.toFloat() call returns 0.0, which indicates failure according to the Qt documentation. I have tried changing the float precision, and little and big endian, but I cannot manage to get my std::vector<float> back. Any hints?
Edit: everything runs on the same PC/compiler.
Edit: see my answer for a solution and sehe's for more detail on what is going on
I managed to resolve the issue, by editing the Qt side (client), to read the socket:
uint availbytes = m_socket->bytesAvailable();
while (availbytes >= 4)
{
    char buffer[FLOATSIZE];
    nbytes = m_in.readRawData(buffer, FLOATSIZE);
    float f = bytes2float(buffer);
    availbytes = m_socket->bytesAvailable();
}
I use those two conversion functions, bytes2float and bytes2int:
float bytes2float(char* buffer)
{
    union {
        float f;
        uchar b[4];
    } u;
    u.b[3] = buffer[3];
    u.b[2] = buffer[2];
    u.b[1] = buffer[1];
    u.b[0] = buffer[0];
    return u.f;
}
and:
int bytes2int(char* buffer)
{
    int a = int((unsigned char)(buffer[3]) << 24 |
                (unsigned char)(buffer[2]) << 16 |
                (unsigned char)(buffer[1]) << 8 |
                (unsigned char)(buffer[0]));
    return a;
}
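As an aside, reading from an inactive union member is technically undefined behavior in C++ (although major compilers support it). A sketch of the same conversion via memcpy, which is well-defined:
#include <cstring>

// Copy the four bytes into the float's object representation.
float bytes2float_memcpy(const char* buffer)
{
    float f;
    std::memcpy(&f, buffer, sizeof f);
    return f;
}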
I also found this function to display bytes, which is useful for seeing what is going on behind the scenes (from https://stackoverflow.com/a/16063757/7272199):
template <typename T>
void print_bytes(const T& input, std::ostream& os = std::cout)
{
    const unsigned char* p = reinterpret_cast<const unsigned char*>(&input);
    os << std::hex << std::showbase;
    os << "[";
    for (unsigned int i = 0; i < sizeof(T); ++i)
        os << static_cast<int>(*(p++)) << " ";
    os << "]" << std::endl;
}
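For example, combined with the template above (the byte values shown are what a little-endian x86 machine would print):
int main()
{
    float f = 1.2f;
    print_bytes(f); // e.g. [0x9a 0x99 0x99 0x3f ]
}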
Re. your answer: Which side is this on? Also, are your platforms not the same (OS/architecture)? I had assumed from the question that both processes run on the same PC and compiler, etc.
For one thing, you can see that ASIO does not do anything related to endianness.
#include <boost/asio.hpp>
#include <iostream>
#include <iomanip>
#include <vector>

namespace asio = boost::asio;

void print_bytes(unsigned char const* b, unsigned char const* e)
{
    std::cout << std::hex << std::setfill('0') << "[ ";
    while (b != e)
        std::cout << std::setw(2) << static_cast<int>(*b++) << " ";
    std::cout << "]\n";
}

template <typename T> void print_bytes(const T& input) {
    using namespace std;
    print_bytes(reinterpret_cast<unsigned char const*>(std::addressof(*begin(input))),
                reinterpret_cast<unsigned char const*>(std::addressof(*end(input))));
}

int main() {
    float const fs[] { 1.2, 8.5 };
    std::cout << "fs: "; print_bytes(fs);
    {
        std::vector<float> gs(2);
        asio::buffer_copy(asio::buffer(gs), asio::buffer(fs));
        for (auto g : gs) std::cout << g << " "; std::cout << "\n";
        std::cout << "gs: "; print_bytes(gs);
    }
    {
        std::vector<char> binary(2 * sizeof(float));
        asio::buffer_copy(asio::buffer(binary), asio::buffer(fs));
        std::cout << "binary: "; print_bytes(binary);
        std::vector<float> gs(2);
        asio::buffer_copy(asio::buffer(gs), asio::buffer(binary));
        for (auto g : gs) std::cout << g << " "; std::cout << "\n";
        std::cout << "gs: "; print_bytes(gs);
    }
}
Prints
fs: [ 9a 99 99 3f 00 00 08 41 ]
1.2 8.5
gs: [ 9a 99 99 3f 00 00 08 41 ]
binary: [ 9a 99 99 3f 00 00 08 41 ]
1.2 8.5
gs: [ 9a 99 99 3f 00 00 08 41 ]
Theory
I suspect the Qt side ruins things. Since the naming of the function readRawData certainly implies a lack of endianness awareness, I'd guess your system's endianness wreaks havoc (https://stackoverflow.com/a/2945192/85371, also the comment).
Suggestion
In that case, consider using Boost Endian.
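As a sketch of that approach (read_be_float is a hypothetical helper name; it assumes the sender transmits the raw IEEE-754 bits in big-endian/network order):
#include <boost/endian/conversion.hpp>
#include <cstdint>
#include <cstring>

float read_be_float(const char* buffer)
{
    std::uint32_t bits;
    std::memcpy(&bits, buffer, sizeof bits);     // raw bytes, no aliasing cast
    boost::endian::big_to_native_inplace(bits);  // swap on little-endian hosts
    float f;
    std::memcpy(&f, &bits, sizeof f);            // reinterpret the bits as a float
    return f;
}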
I think it's a bad idea to use a high-level send method on the server side (you try to send a C++ vector) and low-level reads on the client side.
I'm quite sure there is an endianness problem somewhere.
Anyway, try to do this client side:
char buffer[FLOATSIZE];
bytes = m_in.readRawData(buffer, FLOATSIZE);
if (bytes != FLOATSIZE)
    return ERROR;
uint32_t bits;
memcpy(&bits, buffer, sizeof bits); // avoid the unaligned/aliasing cast
bits = ntohl(bits);                 // network byte order to host order
float f;
memcpy(&f, &bits, sizeof f);        // reinterpret the bits as a float
If boost::asio uses the network byte order for the floats (as it should), this will work.
I'm trying to construct a string from a byte array (using Crypto++) in order to connect to SQS in C++, but I have issues with '0' characters.
The result is almost correct, except that some '0's end up at the end of the string.
std::string shaDigest(const std::string &key = "") {
    byte out[64] = {0};
    CryptoPP::SHA256().CalculateDigest(out, reinterpret_cast<const byte*>(key.c_str()), key.size());
    std::stringstream ss;
    std::string rep;
    for (int i = 0; i < 64; i++) {
        ss << std::hex << static_cast<int>(out[i]);
    }
    ss >> rep;
    rep.erase(rep.begin()+64, rep.end());
    return rep;
}
output:
correct : c46268185ea2227958f810a84dce4ade54abc4f42a03153ef720150a40e2e07b
mine : c46268185ea2227958f810a84dce4ade54abc4f42a3153ef72015a40e2e07b00
^ ^
Edit: I'm trying to do the same thing that hashlib.sha256('').hexdigest() does in Python.
If that indeed works, here's the solution with my suggestions incorporated.
std::string shaDigest(const std::string &key = "") {
    std::array<byte, 64> out {};
    CryptoPP::SHA256().CalculateDigest(out.data(), reinterpret_cast<const byte*>(key.c_str()), key.size());
    std::stringstream ss;
    ss << std::hex << std::setfill('0');
    for (byte b : out) {
        ss << std::setw(2) << static_cast<int>(b);
    }
    // I don't think `.substr(0,64)` is needed here;
    // hex ASCII form of 64-byte array should always have 128 characters
    return ss.str();
}
You are converting bytes to hexadecimal correctly, and it works as long as the byte value is greater than 15. Below that, the first hex digit is a 0 and is not printed by default. The two missing 0s are for 0x03 -> 3 and 0x0a -> a.
You should use:
for (int i = 0; i < 64; i++) {
    ss << std::hex << std::setw(2) << std::setfill('0') << static_cast<int>(out[i]);
}
You need to set the width for the integer numbers for the proper zero-padding of numbers with otherwise less than two hexadecimal digits. Note that you need to re-set the width before every number that is inserted into the stream.
Example:
#include <iostream>
#include <iomanip>

int main() {
    std::cout << std::hex << std::setfill('0');
    for (int i = 0; i < 0x11; i++)
        std::cout << std::setw(2) << i << "\n";
}
Output:
$ g++ test.cc && ./a.out
00
01
02
03
04
05
06
07
08
09
0a
0b
0c
0d
0e
0f
10
For reference:
http://en.cppreference.com/w/cpp/io/manip/setw
http://en.cppreference.com/w/cpp/io/manip/setfill
I'm having some problems reading a binary file and converting its bytes to a hex representation.
What I've tried so far:
ifstream::pos_type size;
char * memblock;
ifstream file (toread, ios::in|ios::binary|ios::ate);
if (file.is_open())
{
    size = file.tellg();
    memblock = new char [size];
    file.seekg (0, ios::beg);
    file.read (memblock, size);
    file.close();
    cout << "the complete file content is in memory" << endl;
    std::string tohexed = ToHex(memblock, true);
    std::cout << tohexed << std::endl;
}
Converting to hex:
string ToHex(const string& s, bool upper_case)
{
    ostringstream ret;
    for (string::size_type i = 0; i < s.length(); ++i)
        ret << std::hex << std::setfill('0') << std::setw(2)
            << (upper_case ? std::uppercase : std::nouppercase) << (int)s[i];
    return ret.str();
}
Result: 53514C69746520666F726D61742033.
When I open the original file with a hex editor, this is what it shows:
53 51 4C 69 74 65 20 66 6F 72 6D 61 74 20 33 00
04 00 01 01 00 40 20 20 00 00 05 A3 00 00 00 47
00 00 00 2E 00 00 00 3B 00 00 00 04 00 00 00 01
00 00 00 00 00 00 00 00 00 00 00 01 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 05 A3
00 2D E2 1E 0D 03 FC 00 06 01 80 00 03 6C 03 D3
Is there a way to get the same desired output using C++?
Working solution (by Rob):
...
std::string tohexed = ToHex(std::string(memblock, size), true);
...

string ToHex(const string& s, bool upper_case)
{
    ostringstream ret;
    for (string::size_type i = 0; i < s.length(); ++i)
    {
        int z = s[i] & 0xff;
        ret << std::hex << std::setfill('0') << std::setw(2)
            << (upper_case ? std::uppercase : std::nouppercase) << z;
    }
    return ret.str();
}
char *memblock;
…
std::string tohexed = ToHex(memblock, true);
…
string ToHex(const string& s, bool upper_case)
There's your problem, right there. The constructor std::string::string(const char*) interprets its input as a nul-terminated string. So, only the characters leading up to '\0' are even passed to ToHex. Try one of these instead:
std::string tohexed = ToHex(std::string(memblock, memblock+size), true);
std::string tohexed = ToHex(std::string(memblock, size), true);