How to print hex from uint32_t? - c++

The code I have been working on requires that I print a variable of type uint32_t in hexadecimal, with a padding of 0s and minimum length 8. The code I have been using to do this so far is:
printf("%08lx\n",read_word(address));
Where read_word returns type uint32_t. I have tried the jx, llx, etc. formats to no avail; is there a correct format that can be used?
EDIT:
I have found that the problem lies in what I am passing. The function read_word returns a value from a uint32_t vector, and it seems that this is what is causing the problems with outputting hex. Is this a pass-by-reference/value issue, and what is the fix?
read_word function:
uint32_t memory::read_word (uint32_t address) {
    if (address > maxWord) {
        return 0;
    }
    return mem[address];
}
mem declaration:
std::vector<uint32_t> mem = decltype(mem)(1024, 0);

To do this in C++ you need to abuse both the fill and the width manipulators:
#include <iostream>
#include <iomanip>
#include <cstdint>
int main()
{
    uint32_t myInt = 123456;
    std::cout << std::setfill('0') << std::setw(8) << std::hex << myInt << '\n';
}
Output
0001e240
For C it gets a little more obtuse: you use the macros from inttypes.h.
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
int main()
{
    uint32_t myInt = 123456;
    printf("%08" PRIx32 "\n", myInt);
    return 0;
}
Output
0001e240
Note that in C, the macros from inttypes.h expand to string literals, which the language's adjacent string-literal concatenation combines into the required format specifier. You only provide the zero-fill and minimum-width part as a preamble.
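Applied to the question's setup, the value coming back from a std::vector<uint32_t> is an ordinary uint32_t returned by value, so either form works on it directly. A minimal sketch, with a simplified free-function read_word standing in for the asker's memory::read_word:
#include <cinttypes>
#include <cstdint>
#include <cstdio>
#include <iomanip>
#include <iostream>
#include <vector>
// Simplified stand-in for the question's memory::read_word.
std::vector<uint32_t> mem(1024, 0);
uint32_t read_word(uint32_t address) {
    return address < mem.size() ? mem[address] : 0;
}
int main() {
    mem[3] = 123456;
    // C-style: PRIx32 is also available in C++ via <cinttypes>.
    std::printf("%08" PRIx32 "\n", read_word(3));   // prints 0001e240
    // iostream style.
    std::cout << std::setfill('0') << std::setw(8) << std::hex
              << read_word(3) << '\n';              // prints 0001e240
}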

%jx + typecast
I think this is correct:
#include <stdio.h>
#include <stdint.h>
int main(void) {
    uint32_t i = 0x123456;
    printf("%08jx\n", (uintmax_t)i);
    return 0;
}
compile and run:
gcc -Wall -Wextra -pedantic -std=c99 main.c
./a.out
Output:
00123456
Let me know if you can produce a failing test case.
Tested in Ubuntu 16.04, GCC 6.4.0.

Related

boost asio with little endian

I am integrating with a library whose messages are formatted as a little-endian length followed by a custom serialized object. How do I convert a 4-byte char buffer into an int? The little-endian value tells me the size of the serialized object to read.
So if I receive "\x00\x00\x00H\x00", I would like to be able to get the decimal value out.
My code looks like:
char buffer_size[size_desc];
m_socket->receive(boost::asio::buffer(buffer_size, size_desc));
int converted_int = some_function(buffer_size); // <-- not sure what to do here
char buffer_obj[converted_int];
m_socket->receive(boost::asio::buffer(buffer_obj, converted_int));
For a simple solution you could use a couple of tricks.
Reverse with a cast:
#include <algorithm>
#include <iostream>
int main()
{
    char buff[4] = {3, 2, 1, 0};
    // Type-pun the four bytes as an int (assumes a little-endian host and
    // suitable alignment; strictly speaking this is not portable).
    std::cout << (*reinterpret_cast<int*>(&buff[0])) << "\n";
    // Reverse the byte order in place, then reinterpret again.
    std::reverse(buff, buff + 4);
    std::cout << (*reinterpret_cast<int*>(&buff[0]));
    return 0;
}
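If you would rather avoid type punning altogether, a small helper that assembles the value from the individual bytes works on any host. This is only a sketch of what the some_function placeholder in the question could be; the name read_le32 is made up for the example:
#include <cstdint>
// Hypothetical helper: build a 32-bit value from four little-endian bytes.
// No endianness or aliasing assumptions about the host.
std::uint32_t read_le32(const char* p)
{
    const unsigned char* b = reinterpret_cast<const unsigned char*>(p);
    return  static_cast<std::uint32_t>(b[0])
          | (static_cast<std::uint32_t>(b[1]) << 8)
          | (static_cast<std::uint32_t>(b[2]) << 16)
          | (static_cast<std::uint32_t>(b[3]) << 24);
}
// Usage with the question's buffer: int converted_int = static_cast<int>(read_le32(buffer_size));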
Boost also comes with an endianness library:
https://www.boost.org/doc/libs/1_74_0/libs/endian/doc/html/endian.html#buffers
You can use the built-in types, like:
big_int32_t
little_int16_t
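A minimal sketch of that approach, assuming the buffer types from <boost/endian/buffers.hpp> (the four example bytes here are made up, not the ones from the question):
#include <boost/endian/buffers.hpp>
#include <cstring>
#include <iostream>
int main()
{
    const char wire[4] = {'\x48', '\x00', '\x00', '\x00'};  // little-endian encoding of 72
    // little_int32_buf_t stores its value in little-endian byte order and
    // converts to the native representation in value().
    boost::endian::little_int32_buf_t len;
    std::memcpy(&len, wire, sizeof len);
    std::cout << len.value() << '\n';  // prints 72
}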

C++ did not generate warning

I had a simple typo in my code. I wanted to do const std::string a = b + "bar"; but instead accidentally had const std::string a = a + "bar"; To my surprise, this did not generate any warnings from GCC 9.3.0 even though I compiled with -std=c++17 -Wall. Moreover, I did not get a warning for an unused variable b. How can that be? What flags should I have passed to GCC to generate at least some warning to catch this problem?
#include <string>
#include <iomanip>
#include <iostream>
namespace {
const std::string b = "foo";
const std::string a = a + "bar";
}
int main()
{
std::cout << "a is " << std::quoted(a) << std::endl;
return 0;
}
The way you are using the variable a, it is read in its own initializer before it has been constructed, so it holds an indeterminate value; using it like this is undefined behavior.
GCC 11.1.0 does seem to generate a warning for this, as seen here:
#include <string>
#include <iomanip>
#include <iostream>
int main()
{
    int a = a + 1;                      // this generates a warning in GCC 11.1.0
    std::string p = p + "some string";  // this also generates a warning in GCC 11.1.0
    return 0;
}
But GCC 9.3.0 only gives a warning for the int case, as seen here.
Clang, on the other hand, gives a warning for both.
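For what it's worth, a minimal sketch of checking the string case with Clang; -Wuninitialized is the warning group involved (it is included in -Wall), and the exact wording and which compiler versions catch it vary:
// self_init.cpp -- compile with: clang++ -std=c++17 -Wall -c self_init.cpp
// Clang reports something like:
//   warning: variable 'a' is uninitialized when used within its own initialization [-Wuninitialized]
#include <string>
namespace {
const std::string b = "foo";
const std::string a = a + "bar";  // the typo: should have been b + "bar"
}
int main() { return 0; }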

'reinterpret_cast<volatile uint8_t*>(37)' is not a constant expression

gcc fails to compile the code below, while clang compiles it OK. I have no control over the macro PORTB, as it is in a 3rd-party library (avr).
Is it a gcc bug? How can I work around it in gcc? As a workaround, is it somehow possible to create a preprocessor macro which extracts the numerical value from PORTB?
Note this question is similar, but not identical to my previous question.
It is also different from this question, where the developer has the flexibility to change the rhs of the assignment, thus avoiding the reinterpret_cast.
#include <iostream>
#include <cstdint>
#define PORTB (*(volatile uint8_t *)((0x05) + 0x20))
struct PortB {
    static const uintptr_t port = reinterpret_cast<uintptr_t>(&PORTB);
};
int main() {
    std::cout << PortB::port << "\n";
    return 0;
}
reinterpret_cast is not allowed in a constant expression, so the newer compiler is simply conforming more closely to the language: a reinterpret_cast cannot appear where a constant expression is required.
But maybe these workarounds will help (they compile with g++ 9.2):
#include <iostream>
#include <cstdint>
#define PORTB (*(volatile uint8_t *)((0x05) + 0x20))
struct PortB {
    static uintptr_t port;
};
uintptr_t PortB::port = reinterpret_cast<uintptr_t>(&PORTB);
const uintptr_t freePort = reinterpret_cast<uintptr_t>(&PORTB);
#define macroPort reinterpret_cast<uintptr_t>(&PORTB)
int main() {
    std::cout << PortB::port << "\n";
    std::cout << freePort << "\n";
    std::cout << macroPort << "\n";
    return 0;
}

Strange Number Conversion C++

So I have the following code:
#include <iostream>
#include <string>
#include <array>
using namespace std;
int main()
{
    array<long, 3> test_vars = { 121, 319225, 15241383936 };
    for (long test_var : test_vars) {
        cout << test_var << endl;
    }
}
In Visual Studio I get this output:
121
319225
-1938485248
The same code executed on the website cpp.sh gave the following output:
121
319225
15241383936
I expect the output to be like the one from cpp.sh. I don't understand the output from Visual Studio. It's probably something simple, but I'd appreciate it nonetheless if someone could tell me what's wrong. It has become a real source of annoyance to me.
MSVC uses a 4-byte long. The C++ standard only guarantees that long is at least as large as int, so the maximum number representable by a signed 32-bit long is 2,147,483,647. The value you input is too large for that long to hold, and you will have to use a larger data type of at least 64 bits.
The other compiler uses a 64-bit-wide long, which is why it worked there.
You could use int64_t, which is defined in the <cstdint> header and guarantees a 64-bit signed integer.
Your program would read:
#include <cstdint>
#include <iostream>
#include <array>
using namespace std;
int main()
{
    array<int64_t, 3> test_vars = { 121, 319225, 15241383936 };
    for (int64_t test_var : test_vars) {
        cout << test_var << endl;
    }
}
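If you want to confirm the widths on a given toolchain, a quick sketch (not from the original answer, just an illustration):
#include <cstdint>
#include <iostream>
int main()
{
    // MSVC on Windows (LLP64) prints 4 for long; 64-bit Linux/macOS (LP64) prints 8.
    std::cout << "sizeof(long)    = " << sizeof(long) << '\n';
    std::cout << "sizeof(int64_t) = " << sizeof(std::int64_t) << '\n';  // always 8
}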

Avoiding compiler issues with abs()

When calling abs() on a double without the std:: qualifier under g++ 4.6.1, no warning or error is given.
#include <algorithm>
#include <cmath>
double foobar(double a)
{
    return abs(a);
}
This version of g++ seems to be pulling in the double variant of abs() into the global namespace through one of the includes from algorithm. This looks like it is now allowed by the standard (see this question), but not required.
If I compile the above code using a compiler that does not pull the double variant of abs() into the global namespace (such as g++ 4.2), then the following warning is reported:
warning: passing 'double' for argument 1 to 'int abs(int)'
How can I force g++ 4.6.1, and other compilers that pull functions into the global namespace, to give a warning so that I can prevent errors when used with other compilers?
The function you are using is actually the integer version of abs, and GCC does an implicit conversion to integer.
This can be verified by a simple test program:
#include <iostream>
#include <cmath>
int main()
{
    double a = -5.4321;
    double b = std::abs(a);
    double c = abs(a);
    std::cout << "a = " << a << ", b = " << b << ", c = " << c << '\n';
}
Output is:
a = -5.4321, b = 5.4321, c = 5
To get a warning about this, use the -Wconversion flag to g++. Actually, the GCC documentation for that option explicitly mentions calling abs when the argument is a double. All warning options can be found here.
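As a quick illustration of that flag (the file name foobar.cpp and the exact warning wording are just examples; both vary by setup and GCC version):
// g++ -Wconversion -c foobar.cpp
// If the unqualified call resolves to int abs(int), as in the test above, -Wconversion
// reports something like: conversion to 'int' from 'double' may alter its value
#include <algorithm>
#include <cmath>
double foobar(double a)
{
    return abs(a);      // unqualified: may pick int abs(int) depending on the toolchain
}
// The portable fix is to qualify the call so the <cmath> overload is always chosen:
double fixed_foobar(double a)
{
    return std::abs(a); // double std::abs(double)
}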
Be warned: you don't need to explicitly #include <cmath>; <iostream> does the damage as well (and maybe some other headers do, too). Also note that -Wall doesn't give you any warnings about it.
#include <iostream>
#include <typeinfo>  // for typeid
int main() {
    std::cout << abs(.5) << std::endl;                       // prints 0: .5 is converted to int
    std::cout << typeid(decltype(abs)).name() << std::endl;  // FiiE: a function taking int and returning int
}
Gives output
0
FiiE
On
gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04)