What could change display width? - c++

I have a SystemC function with signature:
sc_dt::sc_uint<12> Get()
and the lines:
cerr << "[" << hex << setw(3) << setfill('0') << 0 << dec << "]\n";
cerr << "[" << hex << setw(3) << setfill('0') << Get() << dec << "]\n";
result in this output:
[000]
[0000]
Why does the displayed width change from 3 to 4?

#include <systemc.h>
#include <iostream>
#include <iomanip>
int sc_main(int argc, char* argv[])
{
sc_dt::sc_uint <12> my_uint = 0;
std::cerr << std::hex << my_uint << std::endl;
}
g++ test.cpp -lsystemc && ./a.out prints this:
SystemC 2.3.1-Accellera --- Jul 24 2017 21:50:41
Copyright (c) 1996-2014 by all Contributors,
ALL RIGHTS RESERVED
0000
It shows four zeros (for 16 bits) instead of the three (for 12 bits) you probably expected, because that is how sc_uint's stream output is implemented in SystemC. std::setw does not shorten it, because setw sets the minimum number of characters to be written: if the value takes more characters, all of them are written anyway. Note also that the std::dec in your example does nothing, since no numbers are printed after it.
http://www.cplusplus.com/reference/ios/ios_base/width/
http://www.cplusplus.com/reference/iomanip/setw/
This will print only the last 3 hex characters, i.e. the lower 12 bits:
#include <systemc.h>
#include <iostream>
#include <iomanip>

const unsigned CHARS = 3;
const unsigned MASK = (1u << (CHARS * 4)) - 1; // same as 0xFFF

int sc_main(int argc, char* argv[])
{
    sc_dt::sc_uint<12> my_uint = 0xABC;
    std::cerr << std::hex
              << std::setw(CHARS) << std::setfill('0')
              << (my_uint & MASK) << std::endl;
    return 0;
}

Related

Convert 16 bits to 4 char ( in Hexadecimal)

I want to convert 16 bit to 4 characters which are in Hexadecimal character.
For example, a 16 bit, 1101 1010 1101 0001 in hexadecimal is DAD1 and in decimal is 56017. Now I want to convert this 16 bit into DAD1 as characters so that I can use the character to write into a text file.
My coding part, my variable "CRC" is my result from CRC checksum. Now I want to convert 16 bit "CRC" into 4 characters which are DAD1 (capital letters).
cout << hex << CRC << endl;
char lo = CRC & 0xFF;
char hi = CRC >> 8;
cout << hi << endl;
cout << lo;
*******Result********
dad1
┌
₸
Try this:
#include <iostream>
#include <bitset>
#include <string>

int main()
{
    int i = 56017;
    std::cout << std::hex << i << std::endl;

    std::bitset<16> bin = i;
    std::string str = bin.to_string();
    std::bitset<8> hi(str.substr(0, 8));
    std::bitset<8> lo(str.substr(8, 8));

    std::cout << bin << std::endl;
    std::cout << hi << " " << hi.to_ullong() << std::endl;
    std::cout << lo << " " << lo.to_ullong() << std::endl;
}
Or you can simply do:
std::cout << std::hex << (CRC >> 8) << std::endl;
std::cout << std::hex << (CRC & 0xFF) << std::endl;
Output (high byte first, so the digits come out in DAD1 order):
da
d1
Try this:
#include <iostream>
#include <bitset>
#include <limits>

int main()
{
    int i = 56017;
    std::bitset<std::numeric_limits<unsigned long long>::digits> b(i);
    std::cout << std::hex << b.to_ullong();
}

Why do I have to cast a byte as unsigned twice to see hex output? [duplicate]

This question already has answers here:
Are int8_t and uint8_t intended to be char types?
(5 answers)
Closed 7 years ago.
I want to print a variable as hex:
#include <iostream>
#include <string>
#include <cstdint>
int main() {
auto c = 0xb7;
std::cout << std::hex << static_cast<unsigned char>(c) << std::endl;
std::cout << std::hex << static_cast<unsigned>(static_cast<unsigned char>(c)) << std::endl;
std::cout << std::hex << (uint8_t)(c) << std::endl;
std::cout << std::hex << (unsigned)(uint8_t)(c) << std::endl;
return 0;
}
The output seems to be:
\ufffd (tries to print it as a char)
b7
\ufffd (tries to print it as a char)
b7
I do understand that c has higher bits set (10110111), but I cast it to uint8_t and unsigned char once already.
Why do I have to cast uint8_t or unsigned char to unsigned again to get the expected output?
std::hex sets the basefield of the stream str to hex, as if by calling str.setf(std::ios_base::hex, std::ios_base::basefield).
When this basefield hex bit is set, iostreams use hexadecimal base for integer I/O. However, char and unsigned char (and therefore uint8_t, which is normally a typedef for unsigned char) are inserted as characters, not as integers, so the basefield does not apply to them; that is why a second cast to an integer type such as unsigned is needed.
Code
#include <iostream>
int main()
{
int i = 0xb7;
unsigned u = 0xb7;
char c = static_cast<char>(0xb7);
unsigned char b = 0xb7;
std::cout << std::hex << i << std::endl;
std::cout << std::hex << u << std::endl;
std::cout << std::hex << c << std::endl;
std::cout << std::hex << b << std::endl;
return 0;
}
Output
b7
b7
�
�
I suspect this output varies on a Windows (non-UTF-8) system.

Work around std::showbase not prefixing zeros

Couldn't find help online. Is there any way to work around this issue?
std::showbase only adds a prefix (for example, 0x in case of std::hex) for non-zero numbers (as explained here). I want an output formatted with 0x0, instead of 0.
However, just using: std::cout << std::hex << "0x" << .... is not an option, because the right hand side arguments might not always be integers (or equivalents). I am looking for a showbase replacement, which will prefix 0 with 0x and not distort non-ints (or equivalents), like so:
using namespace std;
/* Desired result: */
cout << showbase << hex << "here is 20 in hex: " << 20 << endl; // here is 20 in hex: 0x14
/* Undesired result: */
cout << hex << "0x" << "here is 20 in hex: " << 20 << endl; // 0xhere is 20 in hex: 20
/* Undesired result: */
cout << showbase << hex << "here is 0 in hex: " << 0 << endl; // here is 0 in hex: 0
thanks a lot.
Try:
std::cout << "here is 20 in hex: " << "0x" << std::noshowbase << std::hex << 20 << std::endl;
This way the number is always prefixed with 0x, but you will have to add << "0x" before every number you print.
You can even try to create your own stream manipulator
struct HexWithZeroTag { } hexwithzero;
inline ostream& operator<<(ostream& out, const HexWithZeroTag&)
{
return out << "0x" << std::noshowbase << std::hex;
}
// usage:
cout << hexwithzero << 20;
To keep the setting across operator<< calls, use the answer from here to extend your own stream. You would have to change the locale's do_put like this:
const std::ios_base::fmtflags reqFlags = (std::ios_base::showbase | std::ios_base::hex);
iter_type
do_put(iter_type s, ios_base& f, char_type fill, long v) const {
if (v == 0 && ((f.flags() & reqFlags) == reqFlags)) {
*(s++) = '0';
*(s++) = 'x';
}
return num_put<char>::do_put(s, f, fill, v);
}
Complete working solution: http://ideone.com/VGclTi

C++ can setw and setfill pad the end of a string?

Is there a way to make setw and setfill pad the end of a string instead of the front?
I have a situation where I'm printing something like this.
CONSTANT TEXT variablesizeName1 .....:number1
CONSTANT TEXT varsizeName2 ..........:number2
I want to add a variable amount of '.' to the end of
"CONSTANT TEXT variablesizeName#" so I can make ":number#" line up on the screen.
Note: I have an array of "variablesizeName#" so I know the widest case.
Or
Should I do it manually by setting setw like this
for (int x = 0; x < ARRAYSIZE; x++)
{
    string temp = string("CONSTANT TEXT ") + variabletext[x];
    cout << temp;
    cout << setw(MAXWIDTH - temp.length()) << setfill('.') << ":";
    cout << Number << "\n";
}
I guess this would do the job but it feels kind of clunky.
Ideas?
You can use manipulators std::left, std::right, and std::internal to choose where the fill characters go.
For your specific case, something like this could do:
#include <iostream>
#include <iomanip>
#include <string>
const char* C_TEXT = "Constant text ";
const size_t MAXWIDTH = 10;
void print(const std::string& var_text, int num)
{
std::cout << C_TEXT
// align output to left, fill goes to right
<< std::left << std::setw(MAXWIDTH) << std::setfill('.')
<< var_text << ": " << num << '\n';
}
int main()
{
print("1234567890", 42);
print("12345", 101);
}
Output:
Constant text 1234567890: 42
Constant text 12345.....: 101
EDIT:
As mentioned in the link, std::internal works only with integer, floating point and monetary output. For example with negative integers, it'll insert fill characters between negative sign and left-most digit.
This:
int32_t i = -1;
std::cout << std::internal
<< std::setfill('0')
<< std::setw(11) // max 10 digits + negative sign
<< i << '\n';
i = -123;
std::cout << std::internal
<< std::setfill('0')
<< std::setw(11)
<< i;
will output
-0000000001
-0000000123
Something like:
cout << left << setw(MAXWIDTH) << setfill('.') << temp << ':' << Number << endl;
Produces something like:
derp..........................:234
herpderpborp..................:12345678
#include <iostream>
#include <iomanip>
int main()
{
std::cout
<< std::setiosflags(std::ios::left) // left align this section
<< std::setw(30) // within a max of 30 characters
<< std::setfill('.') // fill with .
<< "Hello World!"
<< "\n";
}
//Output:
Hello World!..................

how to read binary file content as strings?

I need to read 16 bits from the binary file as std::string or char *. For example, a binary file contains 89 ab cd ef, and I want to be able to extract them as std::strings or char *. I have tried the following code:
ifstream *p = new ifstream();
char *buffer;
p->seekg(address, ios::beg);
buffer = new char[16];
memset(buffer, 0, 16);
p->read(buffer, 16);
When I try to std::cout the buffer, nothing appeared. How can I read these characters in the binary file?
EDIT: I was looking for the buffer to be a int type such as "0x89abcdef". Is it possible to achieve?
Something like:
#include <string>
#include <iostream>
#include <fstream>
#include <iomanip>
int main()
{
    if (std::ifstream input{"filename"})
    {
        std::string s(2 /*bytes*/, '\0' /*initial content - irrelevant*/);
        if (input.read(&s[0], 2 /*bytes*/))
            std::cout << "SUCCESS: [0] " << std::hex
                      << (int)(unsigned char)s[0] << " [1] "
                      << (int)(unsigned char)s[1] << '\n';
        else
            std::cerr << "Couldn't read from file\n";
    }
    else
        std::cerr << "Couldn't open file\n";
}
You can't read a binary stream as though it were text.
You can, of course, read as binary (by using "file.read()" and "file.write()" methods on your stream object). Just like what you're doing now :)
You can also convert binary to text: "convert to a hex text string" and "encode as Base64" are two common ways to do this.
You'll want to read the bytes as numbers (of type long long probably).
Then you can print those using formatting specifiers like this:
#include <iostream>
#include <iomanip>
int main()
{
using namespace std;
int x = 2;
int y = 255;
cout << showbase // show the 0x prefix
<< internal // fill between the prefix and the number
<< setfill('0'); // fill with 0s
cout << hex << setw(4) << x << dec << " = " << setw(3) << x << endl;
cout << hex << setw(4) << y << dec << " = " << setw(3) << y << endl;
return 0;
}