I have a sensor that stores the recorded information as a .pcap file. I have managed to load the file into an unsigned char array. The sensor stores information in a unique format. For instance, to represent an angle of 290.16, it stores the binary equivalent of the bytes 0x58 0x71.
To get the correct angle, I have to concatenate 0x71 and 0x58, convert the resulting hex value 0x7158 to decimal (29016), divide it by 100, and then store it for further analysis.
My current approach is this:
//all header files are included
int main()
{
unsigned char data[50]; //I actually have the data loaded in this from a file
data[40] = 0x58;
data[41] = 0x71;
// The above may be incorrect. What I am trying to imply is that if I use the statement
// printf("%.2x %.2x", data[40],data[41]);
// the resultant output you see on screen is
// 58 71
// I get the decimal value I wanted using the statement below
float gar = hex2Dec(dec2Hex(data[41])+dec2Hex(data[40]))/100.0;
}
hex2Dec and dec2Hex are my own functions.
unsigned int hex2Dec (const string Hex)
{
unsigned int DecimalValue = 0;
for (unsigned int i = 0; i < Hex.size(); ++i)
{
DecimalValue = DecimalValue * 16 + hexChar2Decimal (Hex[i]);
}
return DecimalValue;
}
string dec2Hex (unsigned int Decimal)
{
string Hex = "";
while (Decimal != 0)
{
int HexValue = Decimal % 16;
// convert decimal value to a hex digit
char HexChar = (HexValue <= 9 && HexValue >= 0 ) ?
static_cast<char>(HexValue + '0' ) : static_cast<char> (HexValue - 10 + 'A');
Hex = HexChar + Hex;
Decimal = Decimal /16;
}
return Hex;
}
int hexChar2Decimal (char Ch)
{
Ch = toupper(Ch); // Change the character to upper case
if (Ch>= 'A' && Ch<= 'F')
{
return 10 + Ch- 'A';
}
else
return Ch- '0';
}
The pain is that I have to do this conversion billions of times, which really slows down the process. Is there any other, more efficient way to deal with this case?
A MATLAB program that my friend developed for a similar sensor took him 3 hours to extract data worth only 1 minute of real time. I really need mine to be as fast as possible.
As far as I can tell this does the same as
float gar = ((data[45]<<8)+data[44])/100.0;
For:
unsigned char data[50];
data[44] = 0x58;
data[45] = 0x71;
the value of gar will be 290.16.
Explanation:
It is not necessary to convert the value of an integer to a string to get the hex value, because decimal, hexadecimal, binary, etc. are only different representations of the same value. data[45]<<8 shifts the value of data[45] eight bits to the left. Before the operation is performed, the type of the operand is promoted to int (except on some unusual implementations where it might be unsigned int), so the promoted type is large enough not to overflow. Shifting eight bits to the left is equivalent to shifting 2 digits to the left in hexadecimal representation, so the result is 0x7100. Then the value of data[44] is added to that and you get 0x7158. The int result is then converted to double for the division by 100.0, and the quotient is stored in the float.
In general, int might be too small to apply the shift without touching the sign bit if it is only 16 bits long. If you want to cover that case, explicitly cast to unsigned int:
float gar = (((unsigned int)data[45]<<8)+data[44])/100.0;
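Since this is just a shift and an add, you can apply it directly to the buffer for every record, with no string conversions at all. A minimal sketch (the buffer contents and the offset of the angle bytes are assumptions for illustration):
#include <cstdio>
// Decode one angle from two little-endian bytes and scale by 1/100.
float decodeAngle(const unsigned char *p)
{
    return ((unsigned int)p[1] << 8 | p[0]) / 100.0f;
}
int main()
{
    unsigned char data[50] = {0};
    data[44] = 0x58; // low byte
    data[45] = 0x71; // high byte
    std::printf("%.2f\n", decodeAngle(&data[44])); // prints 290.16
    return 0;
}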
In the question "C convert hex to decimal format", Emil H posted some sample code that looks very similar to what you want.
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
const char *hex_value_string = "deadbeef";
unsigned int out;
sscanf(hex_value_string, "%x", &out);
printf("%o %o\n", out, 0xdeadbeef);
printf("%x %x\n", out, 0xdeadbeef);
return 0;
}
Your conversion functions don't look particularly efficient, so hopefully this is faster.
Related
I have a project in which I am getting a vector of 32-bit ARM instructions, and a part of the instructions (offset values) needs to be read as signed (two's complement) numbers instead of unsigned numbers.
I used a uint32_t vector because all the opcodes and registers are read as unsigned and the whole instruction was 32-bits.
For example:
I have this 32-bit ARM instruction encoding:
uint32_t addr = 0b00110001010111111111111111110110;
The last 19 bits are the offset of the branch, which I need to read as a signed integer branch displacement.
This part: 1111111111111110110
I have this function in which the parameter is the whole 32-bit instruction:
I am shifting left 13 places and then right 13 places again to keep only the offset value and discard the other part of the instruction.
I have tried this function casting to different signed variables, using different ways of casting and using other C++ functions, but it prints the number as if it were unsigned.
int getCat1BrOff(uint32_t inst)
{
uint32_t temp = inst << 13;
uint32_t brOff = temp >> 13;
return (int)brOff;
}
I get decimal number 524278 instead of -10.
The last option, which I think is not the best one but might work, is to put all the binary digits in a string, invert the bits and add 1 to convert them, and then convert the new binary number back to decimal, as I would do it on paper. But that is not a good solution.
It boils down to doing a sign extension where the sign bit is the 19th one.
There are two ways:
1. Use arithmetic shifts.
2. Detect the sign bit and OR the high bits with ones.
There is no portable way to do 1. in C++, but whether it works can be checked at compile time. Please correct me if the code below is UB; I believe it is only implementation-defined, which is what we check for.
The only questionable things are the conversion of unsigned to signed that overflows, and the right shift of a negative value, but both should be implementation-defined.
int getCat1BrOff(uint32_t inst)
{
    // Does right-shifting a negative value replicate the sign bit?
    if constexpr (int32_t(0xFFFFFFFFu) >> 1 == int32_t(0xFFFFFFFFu))
    {
        // Arithmetic shift: move the 19-bit field to the top, then shift back down.
        return int32_t(inst << uint32_t{13}) >> int32_t{13};
    }
    else
    {
        // Portable fallback: mask out the field, then OR ones into the high bits
        // if the sign bit (bit 18) is set.
        int32_t offset = inst & 0x0007FFFF;
        if (offset & 0x00040000)
        {
            offset |= 0xFFF80000;
        }
        return offset;
    }
}
or a more generic solution
template <uint32_t N>
int32_t signExtend(uint32_t value)
{
static_assert(N > 0 && N <= 32);
constexpr uint32_t unusedBits = (uint32_t(32) - N);
if constexpr (int32_t(0xFFFFFFFFu) >> 1 == int32_t(0xFFFFFFFFu))
{
return int32_t(value << unusedBits) >> int32_t(unusedBits);
}
else
{
constexpr uint32_t mask = uint32_t(0xFFFFFFFFu) >> unusedBits;
value &= mask;
if (value & (uint32_t(1) << (N-1)))
{
value |= ~mask;
}
return int32_t(value);
}
}
https://godbolt.org/z/rb-rRB
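A quick sanity check against the value from the question (this assumes the signExtend template above is in scope):
#include <cstdint>
#include <iostream>
int main()
{
    uint32_t inst = 0b00110001010111111111111111110110;
    // The low 19 bits are 1111111111111110110, i.e. -10 in two's complement.
    std::cout << signExtend<19>(inst) << '\n'; // prints -10
    return 0;
}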
In practice, you just need to declare temp as signed:
int getCat1BrOff(uint32_t inst)
{
int32_t temp = inst << 13;
return temp >> 13;
}
Unfortunately this is not portable:
For negative a, the value of a >> b is implementation-defined (in most
implementations, this performs arithmetic right shift, so that the
result remains negative).
But I have yet to meet a compiler that doesn't do the obvious thing here.
I have a Huffman code algorithm that compresses characters into sequences of bits of arbitrary length, smaller than the default size of a char (8 bits on most modern platforms)
If the Huffman Code compresses an 8-bit character into 3 bits, how do I represent that 3-bit value in memory? To take this further, how do I combine multiple compressed characters into a compressed representation?
For example, consider the letter 'l', whose code is "00000". Stored as a character string that takes 5x8 bits, since each '0' is itself a character. How do I represent 'l' with the 5 bits 00000 instead of a character sequence?
A C or C++ implementation is preferred.
Now that this question is re-opened...
To make a variable that holds a variable number of bits, we just use the lower bits of one unsigned int to store the bits, and use another unsigned int to remember how many bits we have stored.
When writing out a Huffman-compressed file, we wait until we have at least 8 bits stored. Then we write out a char using the top 8 bits and subtract 8 from the stored bit count.
Finally, at the end, if you have any bits left to write out, you pad up to a multiple of 8 and write out the remaining chars.
In C++, it's useful to encapsulate the output in some kind of BitOutputStream class, like:
class BitOutputStream
{
std::ofstream m_out;
unsigned m_bitsPending;
unsigned m_numPending;
public:
BitOutputStream(const char *fileName)
:m_out(... /* you can do this part */)
{
m_bitsPending = 0;
m_numPending = 0;
}
// write out the lower <count> bits of <bits>
void write(unsigned bits, unsigned count)
{
if (count > 16)
{
//do it in two steps to prevent overflow
write(bits>>16, count-16);
count=16;
}
//make space for new bits
m_numPending += count;
m_bitsPending <<= count;
//store new bits
m_bitsPending |= (bits & ((1<<count)-1));
//write out any complete bytes
while(m_numPending >= 8)
{
m_numPending-=8;
m_out.put((char)(m_bitsPending >> m_numPending));
}
}
//write out any remaining bits
void flush()
{
if (m_numPending > 0)
{
m_out.put((char)(m_bitsPending << (8-m_numPending)));
}
m_bitsPending = m_numPending = 0;
m_out.flush();
}
};
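Usage would then look something like this (the file name and the codes are made up for illustration):
int main()
{
    BitOutputStream out("compressed.bin"); // hypothetical output file
    out.write(0b101, 3);   // a 3-bit Huffman code
    out.write(0b00000, 5); // a 5-bit code; together they fill exactly one byte
    out.flush();
    return 0;
}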
If your Huffman coder returns an array of 1s and 0s representing the bits that should and should not be set in the output, you can shift these bits onto an unsigned char. Every eight shifts, you start writing to the next character, ultimately outputting an array of unsigned char. The number of these compressed characters that you will output is equal to the number of bits divided by eight, rounded up to the nearest natural number.
In C, this is a relatively simple function, consisting of a left shift (<<) and a bitwise OR (|). Here is the function, with an example to make it runnable.
#include <stdlib.h>
#include <stdio.h>
#define BYTE_SIZE 8
size_t compress_code(const int *code, const size_t code_length, unsigned char **compressed)
{
if (code == NULL || code_length == 0 || compressed == NULL) {
return 0;
}
size_t compressed_length = (code_length + BYTE_SIZE - 1) / BYTE_SIZE;
*compressed = calloc(compressed_length, sizeof(char));
if (*compressed == NULL) {
    return 0;
}
for (size_t char_counter = 0, i = 0; char_counter < compressed_length && i < code_length; ++i) {
if (i > 0 && (i % BYTE_SIZE) == 0) {
++char_counter;
}
// Shift the last bit to be set left by one
(*compressed)[char_counter] <<= 1;
// Put the next bit onto the end of the unsigned char
(*compressed)[char_counter] |= (code[i] & 1);
}
// Pad the remaining space with 0s on the right-hand-side
(*compressed)[compressed_length - 1] <<= compressed_length * BYTE_SIZE - code_length;
return compressed_length;
}
int main(void)
{
const int code[] = { 0, 1, 0, 0, 0, 0, 0, 1, // 65: A
0, 1, 0, 0, 0, 0, 1, 0 }; // 66: B
const size_t code_length = 16;
unsigned char *compressed = NULL;
size_t compressed_length = compress_code(code, code_length, &compressed);
for (size_t i = 0; i < compressed_length; ++i) {
printf("%c\n", compressed[i]);
}
return 0;
}
You can then just write the characters in the array to a file, or even copy the array's memory directly to a file, to write the compressed output.
Reading the compressed characters into bits, which will allow you to traverse your Huffman tree for decoding, is done with right shifts (>>) and checking the rightmost bit with bitwise AND (&).
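A minimal sketch of that direction (walkBits is a made-up name; a real decoder would step through the Huffman tree on each bit instead of printing):
#include <cstddef>
#include <iostream>
// Visit each bit of 'compressed' MSB-first: shift the wanted bit down to the
// rightmost position and test it with bitwise AND.
void walkBits(const unsigned char *compressed, std::size_t bitCount)
{
    for (std::size_t i = 0; i < bitCount; ++i) {
        int bit = (compressed[i / 8] >> (7 - i % 8)) & 1;
        std::cout << bit;
    }
    std::cout << '\n';
}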
The dataFile.bin is a binary file with 6-byte records. The first 3
bytes of each record contain the latitude and the last 3 bytes contain
the longitude. Each 24 bit value represents radians multiplied by
0X1FFFFF
This is a task I've been working on. I haven't done C++ in years, so it's taking me way longer than I thought it would -_-. After googling around I saw this algorithm, which made sense to me.
int interpret24bitAsInt32(byte[] byteArray) {
int newInt = (
((0xFF & byteArray[0]) << 16) |
((0xFF & byteArray[1]) << 8) |
(0xFF & byteArray[2])
);
if ((newInt & 0x00800000) > 0) {
newInt |= 0xFF000000;
} else {
newInt &= 0x00FFFFFF;
}
return newInt;
}
The problem is a syntax issue: I am restricted to working within the way the other guy had programmed this. I am not understanding how I can store the CHAR "data" into an INT. Wouldn't it make more sense if "data" was an Array? Since its receiving 24 integers of information stored into a BYTE.
double BinaryFile::from24bitToDouble(char *data) {
int32_t iValue;
// ****************************
// Start code implementation
// Task: Fill iValue with the 24bit integer located at data.
// The first byte is the LSB.
// ****************************
//iValue +=
// ****************************
// End code implementation
// ****************************
return static_cast<double>(iValue) / FACTOR;
}
bool BinaryFile::readNext(DataRecord &record)
{
const size_t RECORD_SIZE = 6;
char buffer[RECORD_SIZE];
m_ifs.read(buffer,RECORD_SIZE);
if (m_ifs) {
record.latitude = toDegrees(from24bitToDouble(&buffer[0]));
record.longitude = toDegrees(from24bitToDouble(&buffer[3]));
return true;
}
return false;
}
double BinaryFile::toDegrees(double radians) const
{
static const double PI = 3.1415926535897932384626433832795;
return radians * 180.0 / PI;
}
I appreciate any help or hints; even if you don't understand, a clue or hint will help me a lot. I just need to talk to someone.
I am not understanding how I can store the CHAR "data" into an INT.
Since char is a numeric type, there is no problem combining them into a single int.
Since its receiving 24 integers of information stored into a BYTE
It's 24 bits, not bytes, so there are only three integer values that need to be combined.
An easier way of producing the same result without using conditionals is as follows:
int interpret24bitAsInt32(byte[] byteArray) {
return (
(byteArray[0] << 24)
| (byteArray[1] << 16)
| (byteArray[2] << 8)
) >> 8;
}
The idea is to store the three bytes supplied as an input into the upper three bytes of the four-byte int, and then shift it down by one byte. This way the program would sign-extend your number automatically, avoiding conditional execution.
Note on portability: This code is not portable, because it assumes 32-bit integer size. To make it portable use <cstdint> types:
int32_t interpret24bitAsInt32(const std::array<uint8_t,3> byteArray) {
return (
(static_cast<int32_t>(byteArray[0]) << 24)
| (static_cast<int32_t>(byteArray[1]) << 16)
| (static_cast<int32_t>(byteArray[2]) << 8)
) >> 8;
}
It also assumes that the most significant byte of the 24-bit number is stored in the initial element of byteArray, then comes the middle element, and finally the least significant byte.
Note on sign extension: This code automatically takes care of sign extension by constructing the value in the upper three bytes and then shifting it to the right, as opposed to constructing the value in the lower three bytes right away. This additional shift operation ensures that C++ takes care of sign-extending the result for us.
When an unsigned char is cast to an int, the higher-order bits are filled with 0s.
When a signed char is cast to an int, the sign bit is extended.
ie:
int x;
char y;
unsigned char z;
y = 0xFF;
z = 0xFF;
x=y;
/*x will be 0xFFFFFFFF*/
x=z;
/*x will be 0x000000FF*/
So, your algorithm uses 0xFF as a mask to remove C's sign extension, i.e.
0xFF == 0x000000FF
0xABCDEF10 & 0x000000FF == 0x00000010
Then it uses bit shifts and bitwise ORs to put the bits in their proper place.
Lastly, it checks the most significant bit ((newInt & 0x00800000) > 0) to decide whether to fill the highest byte with zeros or ones.
int32_t upperByte = ((int32_t) dataRx[0] << 24);
int32_t middleByte = ((int32_t) dataRx[1] << 16);
int32_t lowerByte = ((int32_t) dataRx[2] << 8);
int32_t ADCdata32 = (((int32_t) (upperByte | middleByte | lowerByte)) >> 8); // Right-shift of signed data maintains signed bit
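For the task in the question, the byte order is reversed (the first byte is the LSB), so the same shift-up-then-down trick would look something like the sketch below. from24bitLE is a made-up name, and like the snippets above it relies on the implementation using an arithmetic right shift of signed data:
#include <cstdint>
int32_t from24bitLE(const char *data)
{
    // Place the three bytes in the top 24 bits (data[0] is least significant),
    // then shift back down so the sign bit gets extended.
    return ((static_cast<int32_t>(static_cast<uint8_t>(data[2])) << 24)
          | (static_cast<int32_t>(static_cast<uint8_t>(data[1])) << 16)
          | (static_cast<int32_t>(static_cast<uint8_t>(data[0])) << 8)) >> 8;
}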
I've been stumped on this one for days. I've written this program from a book called Write Great Code, Volume 1: Understanding the Machine, Chapter 4.
The project is to do Floating Point operations in C++. I plan to implement the other operations in C++ on my own; the book uses HLA (High Level Assembly) in the project for other operations like multiplication and division.
I wanted to display the exponent and other field values after they've been extracted from the FP number, for debugging. Yet I have a problem: when I look at these values in memory they are not what I think they should be. Key words: what I think. I believe I understand the IEEE FP format; it's fairly simple, and I understand all I've read so far in the book.
The big problem is why the Rexponent variable seems to be almost unpredictable; in this example with the given values it's 5. Why is that? By my guess it should be two. Two because the decimal point is two digits right of the implied one.
I've commented the actual values that are produced in the program into the code, so you don't have to run the program to get a sense of what's happening (at least in the important parts).
It is unfinished at this point. The entire project has not been created on my computer yet.
Here is the code (quoted from the file which I copied from the book and then modified):
#include <iostream>
#include <cstdio>
typedef long unsigned real; // typedef long unsigned int to the label "real" so we don't confuse it with other data types.
using namespace std; //Just so I don't have to type out std::cout any more!
#define asreal(x) (*((float *) &x)) // Treat the address of x as a float pointer and dereference it, so the compiler doesn't convert our FP values when assigning.
inline int extractExponent(real from) {
return ((from >> 23) & 0xFF) - 127; //Shift right 23 bits; & with eight ones (0xFF == 1111_1111 ) and make bias with the value by subtracting all ones from it.
}
void fpadd ( real left, real right, real *dest) {
//Left operand field containers
long unsigned int Lexponent = 0;
long unsigned Lmantissa = 0;
int Lsign = 0;
//RIGHT operand field containers
long unsigned int Rexponent = 0;
long unsigned Rmantissa = 0;
int Rsign = 0;
//Resulting operand field containers
long int Dexponent = 0;
long unsigned Dmantissa = 0;
int Dsign = 0;
std::cout << "Size of datatype: long unsigned int is: " << sizeof(long unsigned int); //For debugging
//Properly initialize the above variable's:
//Left
Lexponent = extractExponent(left); //Zero. This value is NOT a flat zero when displayed because we subtract 127 from the exponent after extracting it! //Value is: 0xffffff81
Lmantissa = extractMantissa (left); //Zero. We don't do anything to this number except add a whole number one to it. //Value is: 0x00000000
Lsign = extractSign(left); //Simple.
//Right
Rexponent = extractExponent(right); //Value is: 0x00000005 <-- why???
Rmantissa = extractMantissa (right);
Rsign = extractSign(right);
}
int main (int argc, char *argv[]) {
real a, b, c;
asreal(a) = -0.0;
asreal(b) = 45.67;
fpadd(a,b, &c);
printf("Sum of A and B is: %f", c);
std::cin >> a;
return 0;
}
Help would be much appreciated; I'm several days in to this project and very frustrated!
in this example with the given values it's 5. Why is that?
The floating point number 45.67 is internally represented as
2^5 * 1.0110110101011100001010001111010111000010100011110110
which actually represents the number
45.6700000000000017053025658242404460906982421875
This is as close as you can get to 45.67 in a double; the float stored by the program has fewer mantissa bits, but the same exponent.
If all you are interested in is the exponent of a number, simply compute its base 2 logarithm and round down. Since 45.67 is between 32 (2^5) and 64 (2^6), the exponent is 5.
Computers use binary representation for all numbers. Hence, the exponent is for base two, not base ten. int(log2(45.67)) = 5.
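If you want to verify this in code, here is a minimal sketch; it uses memcpy instead of the pointer cast from the question to avoid aliasing problems:
#include <cstdint>
#include <cstring>
#include <iostream>
int main()
{
    float f = 45.67f;
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);        // reinterpret the float's bytes safely
    int exponent = ((bits >> 23) & 0xFF) - 127; // remove the IEEE-754 bias
    std::cout << exponent << '\n';              // prints 5
    return 0;
}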
I have come across a very tricky problem with bit manipulation.
As far as I know, the smallest variable size that can hold a value is one byte of 8 bits, and the bit operations available in C/C++ apply to whole bytes at a time.
Imagine that I have a map to replace a binary pattern 100100 (6 bits) with a signal 10000 (5 bits). If the 1st byte of input data from a file is 10010001 (8 bits), stored in a char variable, part of it matches the 6-bit pattern and is therefore replaced by the 5-bit signal, to give a result of 1000001 (7 bits).
I can use a mask to manipulate the bits within a byte and set the leftmost bits to 10000 (5 bits), but the rightmost 2 bits become very tricky to manipulate. I cannot shift the rightmost 2 bits of the original data to get the correct result 1000001 (7 bits), followed by 1 padding bit in that char variable that should be filled by the 1st bit of the next input byte.
I wonder if C/C++ can actually do this sort of replacement for bit patterns whose length does not fit into a char (1 byte) variable or even an int (4 bytes). Can C/C++ do the trick, or do we have to go to assembly languages that deal with single-bit manipulations?
I heard that Power Basic may be able to do the bit-by-bit manipulation better than C/C++.
If time and space are not important then you can convert the bits to a string representation and perform replaces on the string, then convert back when needed. Not an elegant solution but one that works.
<< shift left
^ XOR
>> shift right
~ one's complement
Using these operations, you could easily isolate the pieces that you are interested in and compare them as integers.
Say you have the byte 01000100 (68) and you want to check whether it contains the 4-bit pattern 1000 (8):
unsigned char k = 68; // 01000100
unsigned char c = 8;  // 1000
for (int i = 0; i <= 4; ++i) {
    // slide a 4-bit window across the byte, starting at the MSB
    if (((k >> (4 - i)) & 0xF) == c) {
        // do stuff
        break;
    }
}
This is very sketchy code, just meant to be a demonstration.
I wonder if C/C++ can actually do this
sort of replacement of bit patterns of
length that do not fit into a Char (1
byte) variable or even Int (4 bytes).
What about std::bitset?
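For instance, a minimal sketch of matching the question's 6-bit pattern against the top of a byte with std::bitset (the sizes and values are just the ones from the question):
#include <bitset>
#include <iostream>
int main()
{
    std::bitset<8> input("10010001"); // the input byte
    std::bitset<6> pattern("100100"); // the pattern to find
    // Compare the top 6 bits of the input against the pattern.
    bool match = (input >> 2) == std::bitset<8>(pattern.to_ulong());
    std::cout << match << '\n'; // prints 1
    return 0;
}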
Here's a small bit reader class which may suit your needs. Of course, you may want to create a bit writer for your use case.
#include <iostream>
#include <sstream>
#include <cassert>
class BitReader {
public:
typedef unsigned char BitBuffer;
BitReader(std::istream &input) :
input(input), buffer(0), bufferedBits(8) {
}
BitBuffer peekBits(int numBits) {
assert(numBits <= 8);
assert(numBits > 0);
skipBits(0); // Make sure we have a non-empty buffer
return (((input.peek() << 8) | buffer) >> bufferedBits) & ((1 << numBits) - 1);
}
void skipBits(int numBits) {
assert(numBits >= 0);
numBits += bufferedBits;
while (numBits > 8) {
buffer = input.get();
numBits -= 8;
}
bufferedBits = numBits;
}
BitBuffer readBits(int numBits) {
assert(numBits <= 8);
assert(numBits > 0);
BitBuffer ret = peekBits(numBits);
skipBits(numBits);
return ret;
}
bool eof() const {
return input.eof();
}
private:
std::istream &input;
BitBuffer buffer;
int bufferedBits; // How many bits of 'buffer' have been consumed (8 = exhausted)
};
Use a vector<bool> if you can read your data into the vector mostly at once. It may be more difficult to find-and-replace sequences of bits, though.
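A minimal sketch of unpacking bytes into a vector<bool>, MSB first (the byte value is arbitrary):
#include <iostream>
#include <vector>
int main()
{
    unsigned char byte = 0x91; // 10010001
    std::vector<bool> bits;
    for (int i = 7; i >= 0; --i)
        bits.push_back((byte >> i) & 1); // unpack one bit at a time
    for (bool b : bits)
        std::cout << b;
    std::cout << '\n'; // prints 10010001
    return 0;
}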
If I understood your question correctly, you have an input stream and an output stream, and you want to replace 6-bit patterns from the input with 5-bit ones in the output - and your output should still be a bit stream?
So, the most important programmer's rule can be applied: divide et impera!
You should split your component into three parts:
1. Input stream converter: convert every pattern in the input stream into a char array (ring) buffer. If I understood you correctly, your input "commands" are 8 bits long, so there is nothing special about this.
2. Do the replacement on the ring buffer in a way that replaces every matching 6-bit pattern with the 5-bit one, but "pad" the 5 bits with a leading zero, so the total length is still 8 bits.
3. Write an output handler that reads from the ring buffer and writes only the 7 LSBs of each input byte to the output stream. Of course some bit manipulation is necessary again for this.
If your ring buffer size is divisible by both 8 and 7 (i.e. is a multiple of 56), you will have a clean buffer at the end and can start again with step 1.
The simplest way to implement this is to iterate over these 3 steps as long as input data is available.
If performance really matters and you are running on a multi-core CPU, you could even split the steps across 3 threads, but then you must carefully synchronize access to the ring buffer.
I think the following does what you want.
#include <stdint.h>
#define PATTERN_LEN 6
#define PATTERN_MASK 0x3F // 6 bits
#define PATTERN 0x24      // b100100
#define REPLACE_LEN 5
#define REPLACEMENT 0x10  // b10000
void compress(const uint8_t* inbits, uint8_t* outbits, int len)
{
    uint16_t accumulator = 0;
    int nbits = 0;
    uint8_t candidate;
    while (len--) // for all input bytes
    {
        // for each bit (MSB first)
        for (int i = 7; i >= 0; i--)
        {
            // add 1 bit to the accumulator
            accumulator <<= 1;
            accumulator |= (*inbits >> i) & 1;
            nbits++;
            // check for the pattern once enough bits have accumulated
            candidate = accumulator & PATTERN_MASK;
            if (nbits >= PATTERN_LEN && candidate == PATTERN)
            {
                // remove the pattern
                accumulator >>= PATTERN_LEN;
                // add the replacement
                accumulator <<= REPLACE_LEN;
                accumulator |= REPLACEMENT;
                nbits += (REPLACE_LEN - PATTERN_LEN);
            }
        }
        inbits++;
        // move the accumulator to the output to prevent overflow
        while (nbits > 8)
        {
            // copy the highest 8 bits
            nbits -= 8;
            *outbits++ = (accumulator >> nbits) & 0xFF;
            // clear them from the accumulator
            accumulator &= ~(0xFF << nbits);
        }
    }
    // copy the remainder of the accumulator to the output, zero-padded
    if (nbits > 0)
    {
        *outbits++ = (accumulator << (8 - nbits)) & 0xFF;
    }
}
You could use a switch or a loop in the middle to check the candidate against multiple patterns. There might have to be some special handling after doing a replacement to ensure the replacement pattern is not re-checked for matches.
#include <iostream>
#include <cstring>
size_t matchCount(const char* str, size_t size, char pat, size_t bsize) noexcept
{
if (bsize > 8) {
return 0;
}
size_t bcount = 0; // curr bit number
size_t pcount = 0; // curr bit in pattern char
size_t totalm = 0; // total number of patterns matched
const size_t limit = size*8;
while (bcount < limit)
{
auto offset = bcount%8;
char c = str[bcount/8];
c >>= offset;
char tpat = pat >> pcount;
if ((c & 1) == (tpat & 1))
{
++pcount;
if (pcount == bsize)
{
++totalm;
pcount = 0;
}
}
else // mismatch
{
bcount -= pcount; // backtrack
//reset
pcount = 0;
}
++bcount;
}
return totalm;
}
int main(int argc, char** argv)
{
const char* str = "abcdefghiibcdiixyz";
char pat = 'i';
std::cout << "Num matches = " << matchCount(str, 18, pat, 7) << std::endl;
return 0;
}