Multiplying two uint16_t numbers results in an int [duplicate] - c++

This question already has answers here:
Why must a short be converted to an int before arithmetic operations in C and C++?
(4 answers)
Closed 4 years ago.
Take a look at the following snippet:
#include <iostream>
#include <cstdint>
#include <boost/type_index.hpp>
using boost::typeindex::type_id_with_cvr;
int main(int argc, char** argv)
{
constexpr uint16_t b = 2;
constexpr uint16_t c = 3;
constexpr const auto bc = b * c;
std::cout << "b: " << type_id_with_cvr<decltype(b)>().pretty_name() << std::endl;
std::cout << "b * c: " << type_id_with_cvr<decltype(bc)>().pretty_name() << std::endl;
}
This results in the following:
b: unsigned short const
b * c: int const
Why does multiplying two unsigned shorts result in an int?
Compiler: g++ 5.4.0

unsigned short values are implicitly converted to int before the multiplication.
short and char are considered "storage types": they are implicitly promoted to int before any computation is done. That is why
unsigned char x = 255, y = 1;
printf("%i\n", x+y); // you get 256, not 0

Related

How to add hex values (output is not in hex)? [duplicate]

This question already has answers here:
C++ cout hex values?
(10 answers)
Closed 2 years ago.
My code:
#include <iostream>
using namespace std;
int main() {
unsigned int a = 0x0009, b = 0x0002;
unsigned int c = a + b;
cout << c;
}
Now
c = 11
I want this:
c = 000B
How can I do that?
When you do this
int main() {
unsigned int a = 0x0009, b = 0x0002;
unsigned int c = a + b;
}
Then c has the value 11; it also has the value 0x000B. It even has the value 10 in a representation that uses 11 as its base.
11 and 0x000B (and 10) are different representations of the same value.
When you use std::cout, the number is printed in decimal by default. The representation you choose for printing the value on the screen has no influence whatsoever on the actual value of c.
What I understand is that you want to retrieve the result in a specific hexadecimal format XXXX.
Computing the addition is the same in any number base; you only need to display the result in your format.
You can do this, for instance:
#include <iostream>
#include <iomanip>
#include <sstream>
std::string displayInPersonalizedHexa(unsigned int a)
{
std::stringstream ss;
ss << std::uppercase << std::setfill('0') << std::setw(4) << std::hex << a;
std::string x;
ss >>x;
//std::cout << x;
return x;
}
int main() {
unsigned int a = 0x0009, b = 0x0002;
unsigned int c = a + b;
// displays 000B
std::cout << displayInPersonalizedHexa(c) << std::endl;
// adds c=c+1
c=c+1;
// displays 000C
std::cout << displayInPersonalizedHexa(c) << std::endl;
//0xC+5 = 0x11
c=c+5;
// displays 0011
std::cout << displayInPersonalizedHexa(c) << std::endl;
}
This will output
000B
000C
0011
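If you only need to print the value (rather than keep the formatted string around), the same manipulators can be applied to std::cout directly; a minimal sketch along the lines of the helper above:
#include <iostream>
#include <iomanip>
int main() {
    unsigned int a = 0x0009, b = 0x0002;
    unsigned int c = a + b;
    // print c as four uppercase, zero-padded hex digits: 000B
    std::cout << std::uppercase << std::setfill('0') << std::setw(4)
              << std::hex << c << std::endl;
}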

Converting unsigned char * to hexstring

The code below takes a hex string (every byte is represented as its corresponding hex value),
converts it to an unsigned char * buffer, and then converts that back to a hex string.
This code tests the conversion from an unsigned char * buffer to a hex string,
which I need to send over the network to a receiver process.
I chose a hex string because an unsigned char can be in the range 0 to 255 and there is no printable character after 127.
The code below highlights only the portion that bugs me; it's in the comments.
#include <iostream>
#include <sstream>
#include <iomanip>
using namespace std;
// converts a hexstring to corresponding integer. i.e "c0" - > 192
int convertHexStringToInt(const string & hexString)
{
stringstream geek;
int x=0;
geek << std::hex << hexString;
geek >> x;
return x;
}
// converts a complete hexstring to unsigned char * buffer
void convertHexStringToUnsignedCharBuffer(string hexString, unsigned char* hexBuffer)
{
int i=0;
while(hexString.length())
{
string hexStringPart = hexString.substr(0,2);
hexString = hexString.substr(2);
int hexStringOneByte = convertHexStringToInt (hexStringPart);
hexBuffer[i] = static_cast<unsigned char>((hexStringOneByte & 0xFF)) ;
i++;
}
}
int main()
{
//below hex string is a hex representation of an unsigned char * buffer.
//it is generated by an encryption algorithm in unsigned char* format.
//I am converting it to a hex string to make it printable for verification purposes,
//and I take that hex string as input here to test the conversion logic.
string inputHexString = "552027e33844dd7b71676b963c0b8e20";
string outputHexString;
stringstream geek;
unsigned char * hexBuffer = new unsigned char[inputHexString.length()/2];
convertHexStringToUnsignedCharBuffer(inputHexString, hexBuffer);
for (int i=0;i<inputHexString.length()/2;i++)
{
geek << std::hex << std::setw(2) << std::setfill('0') << (0xFF & hexBuffer[i]); // this works
//geek << std::hex << std::setw(2) << std::setfill('0') << (hexBuffer[i]); // --> this does not work
// I am not able to figure out why I need to do the bitwise AND with 0xFF ("0xFF & hexBuffer[i]").
// Without it the conversion does not work for individual bytes whose values are above 127.
}
geek >> outputHexString;
cout << "input hex string: " << inputHexString<<endl;
cout << "output hex string: " << outputHexString<<endl;
if(0 == inputHexString.compare(outputHexString))
cout<<"hex encoding successful"<<endl;
else
cout<<"hex encoding failed"<<endl;
if(NULL != hexBuffer)
delete[] hexBuffer;
return 0;
}
// output
// Can someone explain? I am sure it's something silly that I am missing.
The C++20 way:
// needs <algorithm>, <format>, <iostream>, <iterator>, <span>, <string>
unsigned char* data = new unsigned char[]{ "Hello world\n\t\r\0" };
std::size_t data_size = sizeof("Hello world\n\t\r\0") - 1;
auto sp = std::span(data, data_size );
std::transform( sp.begin(), sp.end(),
std::ostream_iterator<std::string>(std::cout),
[](unsigned char c) -> std::string {
return std::format("{:02X}", int(c));
});
or, if you want to store the result in a string: note that std::back_inserter on a std::string appends single chars, so the formatted bytes have to be written into it with std::format_to rather than returned from std::transform:
std::string result{};
result.reserve(data_size * 2);
std::for_each( sp.begin(), sp.end(),
    [&result](unsigned char c) {
        std::format_to(std::back_inserter(result), "{:02X}", int(c));
    });
Output:
48656C6C6F20776F726C640A090D00
The output of an unsigned char is like the output of a char, which obviously does not do what the OP expects.
I tested the following on coliru:
#include <iomanip>
#include <iostream>
int main()
{
std::cout << "Output of (unsigned char)0xc0: "
<< std::hex << std::setw(2) << std::setfill('0') << (unsigned char)0xc0 << '\n';
return 0;
}
and got:
Output of (unsigned char)0xc0: 0�
This is caused by the std::ostream::operator<<() which is chosen out of the available operators. I looked on cppreference
operator<<(std::basic_ostream) and
std::basic_ostream::operator<<
and found
template< class Traits >
basic_ostream<char,Traits>& operator<<( basic_ostream<char,Traits>& os,
unsigned char ch );
in the former (with a little bit of help from M.M).
The OP suggested a fix: a bit-wise AND with 0xff, which seemed to work. Checking this on coliru.com:
#include <iomanip>
#include <iostream>
int main()
{
std::cout << "Output of (unsigned char)0xc0: "
<< std::hex << std::setw(2) << std::setfill('0') << (0xff & (unsigned char)0xc0) << '\n';
return 0;
}
Output:
Output of (unsigned char)0xc0: c0
Really, this seems to work. Why?
0xff is an int constant (strictly speaking: an integer literal) and has type int. Hence, the bit-wise AND promotes (unsigned char)0xc0 to int as well, yields a result of type int, and hence the std::ostream::operator<< for int is applied.
That is one option to solve this. I can provide another one: just convert the unsigned char to unsigned.
Whereas going through int raises the question of a possible sign-bit extension (it cannot actually happen here, since every unsigned char value fits into int), converting the unsigned char directly to unsigned avoids the question entirely. The output stream operator for unsigned provides the intended output as well:
#include <iomanip>
#include <iostream>
int main()
{
std::cout << "Output of (unsigned char)0xc0: "
<< std::hex << std::setw(2) << std::setfill('0') << (unsigned)(unsigned char)0xc0 << '\n';
const unsigned char c = 0xc0;
std::cout << "Output of unsigned char c = 0xc0: "
<< std::hex << std::setw(2) << std::setfill('0') << (unsigned)c << '\n';
return 0;
}
Output:
Output of (unsigned char)0xc0: c0
Output of unsigned char c = 0xc0: c0
Live Demo on coliru
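A compact alternative (my own sketch, not taken from the answers above) is the unary plus trick: +c applies integral promotion to the unsigned char, so the numeric overload of operator<< is selected without writing out a cast:
#include <iomanip>
#include <iostream>
int main()
{
    const unsigned char c = 0xc0;
    // unary + promotes c to int, so the int overload of << is used
    std::cout << std::hex << std::setw(2) << std::setfill('0') << +c << '\n'; // prints c0
    return 0;
}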

c++ reading argv into unsigned char fixed size: Segmentation fault

I am trying to read a command line argument into a fixed-size unsigned char array. I get a segmentation fault.
My code:
#include <stdio.h>
#include <iostream>
#include <stdlib.h>
#include <memory.h>
unsigned char key[16]={};
int main(int argc, char** argv){
std::cout << "Hello!" << std::endl;
long a = atol(argv[1]);
std::cout << a << std::endl;
memcpy(key, (unsigned char*) a, sizeof key);
// std::cout << sizeof key << std::endl;
// for (int i = 0; i < 16; i++)
// std::cout << (int) (key[i]) << std::endl;
return 0;
}
What am I doing wrong?
To call the program:
compile: g++ main.cpp
Execute: ./a.out 128
You get the SIGSEGV because your address is wrong: you convert a value into an address. Additionally, the size you pass is the size of the destination; it should be the size of the source.
The compiler issues a warning; that's never good, and you should take it into account, because it points at exactly this error:
xxx.c:12:38: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
memcpy(key, (unsigned char*) a, sizeof key);
^
fix that like this:
memcpy(key, &a, sizeof(a));
BTW you don't have to declare key with 16 bytes. It would be safer to allocate it like this:
unsigned char key[sizeof(long)];
and when you print the bytes, iterate up to sizeof(long) too, or you'll just print garbage bytes at the end.
Here's a fix proposal using uint64_t (the unsigned 64-bit integer from stdint.h, which gives exact control over the size), zero initialization for your key, and parsing with strtoull:
#include <stdio.h>
#include <iostream>
#include <stdlib.h>
#include <memory.h>
#include <stdint.h>
unsigned char key[sizeof(uint64_t)]={0};
int main(int argc, char** argv){
std::cout << "Hello!" << std::endl;
uint64_t a = strtoull(argv[1], NULL, 10);
memcpy(key, &a, sizeof a);
for (size_t i = 0; i < sizeof(key); i++)
std::cout << (int) (key[i]) << std::endl;
return 0;
}
(if you want to handle signed values, use int64_t and strtoll instead)
Test on a little endian architecture:
% a 10000000000000
Hello!
0
160
114
78
24
9
0
0
Looks like you are copying too much data.
I also added a &a for the memcpy.
#include <stdio.h>
#include <iostream>
#include <stdlib.h>
#include <memory.h>
unsigned char key[16]={};
int main(int argc, char** argv)
{
memset(key,0x0, sizeof(key));
std::cout << "Hello!" << std::endl;
long a = atol(argv[1]);
std::cout << a << std::endl;
// the size parameter needs to be the size of a
// or the lesser of the size of key and a
memcpy(key,(void *) &a, sizeof(a));
std::cout << "size of key " << sizeof(key) << "\n";
std::cout << "key " << key << "\n";
for (int i = 0; i < 16; i++)
std::cout << " " << i << " '" << ((int) key[i]) << "'\n";
return 0;
}
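If you want the copy to be robust no matter which of the two objects is smaller, you can clamp the length to the lesser of the two sizes, as the comment above suggests. A minimal sketch of that idea (my own, not from the original answer):
#include <algorithm>
#include <cstdlib>
#include <cstring>
#include <iostream>
unsigned char key[16] = {};
int main(int argc, char** argv)
{
    if (argc < 2) return 1;            // no argument given
    long a = std::atol(argv[1]);
    // copy at most as many bytes as both objects actually have
    std::memcpy(key, &a, std::min(sizeof key, sizeof a));
    for (unsigned char byte : key)
        std::cout << static_cast<int>(byte) << ' ';
    std::cout << '\n';
    return 0;
}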

Convert mpz_t to binary representation

I'm using mpz_t for big numbers. I need to convert the mpz_t to binary representation. I tried to use the mpz_export, but the returned array contains only 0s.
mpz_t test;
mpz_init(test);
string myString = "173065661579367924163593258659639227443747684437943794002725938880375168921999825584315046";
mpz_set_str(test,myString.c_str(),10);
int size = mpz_sizeinbase(test,2);
cout << "size is : "<< size<<endl;
byte *rop = new byte[size];
mpz_export(rop,NULL,1,sizeof(rop),1,0,test);
Using gmpxx (since the question is tagged as C++):
#include <iostream>
#include <gmpxx.h>
int main()
{
mpz_class a("123456789");
std::cout << a.get_str(2) << std::endl; //base 2 representation
}
There should be an equivalent function in plain GMP.
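In plain GMP the closest equivalent is mpz_get_str with base 2. A minimal sketch (assuming the default GMP allocator, so free() can release the returned string):
#include <cstdlib>
#include <iostream>
#include <gmp.h>
int main()
{
    mpz_t a;
    mpz_init_set_str(a, "123456789", 10);
    // passing NULL lets GMP allocate the result buffer
    char* bits = mpz_get_str(NULL, 2, a);
    std::cout << bits << std::endl;   // base-2 (binary) representation
    std::free(bits);                  // fine with the default GMP allocator
    mpz_clear(a);
}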
You have a minor error in your code: sizeof(rop) is either 4 or 8, depending on whether a pointer is 4 or 8 bytes on your system. You meant to pass simply size, not sizeof(rop).
Here's some code that works for me, with g++ -lgmp -lgmpxx:
#include <stdio.h>
#include <iostream>
#include <gmpxx.h>
int main()
{
mpz_class a("173065661579367924163593258659639227443747684437943794002725938880375168921999825584315046");
int size = mpz_sizeinbase(a.get_mpz_t(), 256);
std::cout << "size is : " << size << std::endl;
unsigned char *rop = new unsigned char[size];
mpz_export(rop, NULL, 1, 1, 1, 0, a.get_mpz_t());
for (size_t i = 0; i < size; ++i) {
printf("%02x", rop[i]);
}
std::cout << std::endl;
}

C++ How to output int as 32-bit binary?

I want to output an int in 32-bit binary format. Is looping and shifting my only option?
Looping is one way. You can also use the bitset library.
#include <iostream>
#include <bitset>
int main(int argc, char** argv) {
int i = -5, j = 5;
unsigned k = 4000000000; // 4 billion
std::cout << std::bitset<32>(i) << "\t" << std::bitset<32>(j) << std::endl;
std::cout << std::bitset<32>(k) << std::endl;
return 0;
}
And the output will be:
11111111111111111111111111111011 00000000000000000000000000000101
11101110011010110010100000000000
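For comparison, here is what the loop-and-shift approach mentioned in the question could look like, without <bitset> (a sketch of my own, not from the answer):
#include <iostream>
// print the 32-bit two's-complement pattern of n, most significant bit first
void printBinary32(unsigned int n) {
    for (int bit = 31; bit >= 0; --bit)
        std::cout << ((n >> bit) & 1u);
    std::cout << '\n';
}
int main() {
    printBinary32(static_cast<unsigned int>(-5)); // 11111111111111111111111111111011
    printBinary32(5u);                            // 00000000000000000000000000000101
}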