I have a float matrix a and I want to access the element at point (x,y), but I want to convert the value to unsigned char. The float value at (x,y) is 652.759.
The code I want to use (which is based on OpenCV) is:
a.at<uchar>(Point(x,y))
The result of the above code is 68.
But when I checked the result with plain C++ code,
static_cast<unsigned char>(a.at<float>(Point(x,y)))
the result is 140.
Does anyone know why? How can I get the same result from both pieces of code above?
Thanks!
The at() function is agnostic about the number of bits per element; it bases its address arithmetic on the supplied template type.
So at<float>(2) will return a float built from the 32 bits (4 bytes) starting at byte offset 8 of the data, while at<uchar>(2) will simply return the single byte at offset 2.
For example, the following
Mat m(10, 1, CV_8U);
m.at<uchar>(0) = 44;
m.at<uchar>(1) = 1;
m.at<uchar>(2) = 0;
m.at<uchar>(3) = 0;
cout << "char 0 : " << (int)m.at<uchar>(0) << endl;
cout << "char 1 : " << (int)m.at<uchar>(1) << endl;
cout << "short 0 : " << (int)m.at<unsigned short>(0) << endl;
produces
char 0 : 44
char 1 : 1
short 0 : 300
short 0 = char 1 * 256 + char 0 (the two bytes are combined in the machine's little-endian byte order)
It's basically the same difference as in this code:
float f = 140.f;
unsigned char c = static_cast<unsigned char>(f); // c is 140, this is ok
unsigned char wrong = *((unsigned char*)&f); // this is wrong: it reads one raw byte of the float's bit pattern
The last line is the same as the a.at<uchar>(Point(x,y)) you have in your code. It is wrong because it accesses a float and reinterprets its bytes as an unsigned char. There is no actual conversion of the value.
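If what you want is a genuine value conversion, here is a minimal sketch (a 1x1 CV_32F Mat standing in for your matrix a, not your exact code); note that OpenCV's own conversion saturates out-of-range values, while static_cast wraps on common platforms:
#include <opencv2/core.hpp>
#include <iostream>

int main() {
    cv::Mat a(1, 1, CV_32F, cv::Scalar(652.759f));

    // reinterprets one raw byte of the float's bit pattern -- not a conversion:
    std::cout << (int)a.at<uchar>(0, 0) << std::endl;

    // converts the value; out-of-range float-to-uchar is strictly undefined,
    // but common platforms wrap modulo 256 and print 140 (652 % 256):
    std::cout << (int)static_cast<unsigned char>(a.at<float>(0, 0)) << std::endl;

    // OpenCV's convertTo uses saturate_cast, so 652.759 clamps to 255:
    cv::Mat b;
    a.convertTo(b, CV_8U);
    std::cout << (int)b.at<uchar>(0, 0) << std::endl;
}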
Related
A char stores a small numeric value (0 to 255 if it is unsigned). But there seems to also be an implication that this type should be printed as a letter rather than a number by default.
This code produces 34:
int Bits = 0xE250;
signed int Test = ((Bits & 0x3F00) >> 8);
std::cout << "Test: " << Test << std::endl; // 34
But I don't need Test to be 4 bytes long; one byte is enough. However, if I do this:
int Bits = 0xE250;
signed char Test = ((Bits & 0x3F00) >> 8);
std::cout << "Test: " << Test << std::endl; // prints "
I get " (a double-quote symbol), because char doesn't just make it an 8-bit variable; it also says "this number represents a character".
Is there some way to specify a variable that is 8 bits long, like char, but also says, "this is meant as a number"?
I know I can cast or convert the char, but I'd like to just use a number type to begin with. Is there a better choice? Is it better to use short int even though it's twice the size I need?
Cast your char variable to int before printing:
signed char Test = ((Bits & 0x3F00) >> 8);
std::cout << "Test: " << (int)Test << std::endl; // prints 34
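If the cast feels noisy, unary plus does the same job: it promotes the char to int before it reaches operator<<. A minimal sketch:
#include <iostream>

int main() {
    int Bits = 0xE250;
    signed char Test = ((Bits & 0x3F00) >> 8);
    std::cout << "Test: " << (int)Test << std::endl; // prints 34
    std::cout << "Test: " << +Test << std::endl;     // unary + promotes to int: 34
}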
I am trying to read in a binary file in a known format. I want to find the most efficient way to extract values from it. My ideas are:
Method 1: Read each value into a new char array then get it into the correct data type. For the first 4 byte positive int, I bitshift the values accordingly and assign to an integer as below.
Method 2: Keep the whole file in a char array, then create pointers to different parts of it. In the code below I am trying to point to these first 4 bytes and use reinterpret_cast to interpret them as an integer when I dereference the variable 'bui'.
But the output from this code is:
11000000001100000000110000000011
3224374275
00000011000011000011000011000000
51130560
My questions are:
1. Why does the endianness get swapped using method 2, and how do I point to it correctly?
2. Which method is more efficient? I need all of the file, and the file contains other data types too, so I would need to write different methods to interpret them if using method 1. I was assuming I could just define different typed pointers if using method 2, without doing extra work!
Thanks
#include <iostream>
#include <bitset>
int main(void) {
    unsigned char b[4];
    //ifs.read((char*)b, sizeof(b));
    // let's pretend the following 4 bytes were read in, representing the number 3224374275:
    b[0] = 0b11000000;
    b[1] = 0b00110000;
    b[2] = 0b00001100;
    b[3] = 0b00000011;

    // method 1: assemble the integer from the bytes explicitly
    unsigned int a = 0; // 4-byte capacity
    a = b[0] << 24 | b[1] << 16 | b[2] << 8 | b[3];
    std::bitset<32> xm1(a);
    std::cout << xm1 << std::endl;
    std::cout << a << std::endl;

    // method 2: reinterpret the byte array as an unsigned int
    unsigned int* bui = reinterpret_cast<unsigned int*>(b);
    std::bitset<32> xm2(*bui);
    std::cout << xm2 << std::endl;
    std::cout << *bui << std::endl;
}
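For what it's worth, a minimal sketch of a safer version of method 2 (assuming, as above, that the file stores the integer big-endian): std::memcpy has defined behavior where the reinterpret_cast does not (the cast violates strict aliasing), but it still copies the bytes in the machine's native order, so a fixed byte order has to be assembled explicitly, which is exactly what method 1 does.
#include <cstdint>
#include <cstring>
#include <iostream>

int main() {
    unsigned char b[4] = {0b11000000, 0b00110000, 0b00001100, 0b00000011};

    // well-defined reinterpretation, but still host-byte-order dependent:
    uint32_t native;
    std::memcpy(&native, b, sizeof native);
    std::cout << native << std::endl; // 51130560 on a little-endian machine

    // explicit big-endian assembly, portable across hosts (same as method 1):
    uint32_t big = (uint32_t)b[0] << 24 | (uint32_t)b[1] << 16
                 | (uint32_t)b[2] << 8  | (uint32_t)b[3];
    std::cout << big << std::endl; // 3224374275 everywhere
}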
Usually, I access an array in C++ with the syntax foo[2], where 2 is an index into the array.
In the code below, I don't understand how the output is produced, or how the array can be accessed with the indices 'b' and 'c'. Is that an array index or something else?
int count[256] = {0};
count['b'] = 2;
cout << count['b'] << endl; //output 2
cout << count['c'] << endl; //output 0
Output
2
0
Remember that in C++ characters are represented as numbers. Take a look at this ASCII table: http://www.asciitable.com
According to it, the character 'b' is represented as 98 and 'c' as 99. Therefore what your program is really saying is...
int count[256] = {0};
count[98] = 2;
cout << count[98] << endl; //output 2
cout << count[99] << endl; //output 0
Also, in case you don't know, initializing an array with = {0} zero-initializes every element, which is why count['c'] is 0.
In C/C++ there is no dedicated 8-bit / 1-byte integer type. We simply use the char type to represent a single (signed or unsigned) byte, and you can even put signed or unsigned in front of the char type. char really is just another integer type which we happen to use to express characters. You can also do the following:
char b = 98;
char c = 99;
char diff = c - b; //diff is now 1
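This numeric behavior is exactly why char values make convenient array indices; a minimal sketch of the usual pattern (counting character frequencies in a string, with hypothetical input data):
#include <iostream>
#include <string>

int main() {
    int count[256] = {0};           // zero-initialize all 256 counters
    std::string s = "abb";          // hypothetical input
    for (char ch : s)
        count[(unsigned char)ch]++; // cast guards against negative char values
    std::cout << count['b'] << std::endl; // 2
    std::cout << count['c'] << std::endl; // 0
}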
Type char is actually an integral type. Every char value written as a character literal has an underlying integral value that it corresponds to in a given code page, which is probably an ASCII table. When you do:
count['b'] = 2;
you actually do:
count[98] = 2;
as character 'b' corresponds to an integral value of 98, character 'c' corresponds to an integral value of 99 and so on. To illustrate, the following statement:
char c = 'b';
is equivalent of:
char c = 98;
Here c has the same underlying value; it's the written representation that differs.
Because characters are always represented by integers in the computer, they can be used as array indices.
You can verify by this:
char ch = 'b';
count[ch] = 2;
int i = ch;
cout << i << endl;
cout << count[i] << endl;
Usually the output is 98 and 2, but the first number may vary depending on the character encoding of your environment.
I got confused by the OpenCV documentation mentioned here.
As per the documentation, if I create an image with "uchar", the pixels of that image can store unsigned integer values; but if I create an image using the following code:
Mat image;
image = imread("someImage.jpg", 0); // read an image in "UCHAR" form
or by doing
image.create(10, 10, CV_8UC1);
for(int i = 0; i < image.rows; i++)
{
    for(int j = 0; j < image.cols; j++)
    {
        image.at<uchar>(i,j) = (uchar)255;
    }
}
and then if I try to print the values using
cout << " " << image.at<uchar>(i,j);
I get some weird results in the terminal, but if I use the following statement I get values between 0 and 255:
cout << " " << (int)image.at<uchar>(i,j); // with TYPECAST
Question: Why do I need a typecast to print the values in the range 0-255 if the image itself can store "unsigned integer" values?
If you try to find the definition of uchar (press F12 if you are using Visual Studio), you'll end up in OpenCV's core/types_c.h:
#ifndef HAVE_IPL
typedef unsigned char uchar;
typedef unsigned short ushort;
#endif
which is a standard and reasonable way of defining an unsigned integral 8-bit type (i.e. an "8-bit unsigned integer"), since the standard ensures that char always occupies exactly 1 byte of memory. This means that:
cout << " " << image.at<uchar>(i,j);
uses the overload of operator<< that takes an unsigned char, which prints the passed value as a character, not as a number.
An explicit cast, however, causes another overload of << to be used:
cout << " " << (int) image.at<uchar>(i,j);
and therefore it prints numbers. This issue is not related to the fact that you are using OpenCV at all.
Simple example:
char c = 56; // equivalent to c = '8'
unsigned char uc = 56;
int i = 56;
std::cout << c << " " << uc << " " << i;
outputs: 8 8 56
And if the fact that it is a template confuses you, then this behavior is also equivalent to:
#include <iostream>

template<class T>
T getValueAs(int i) { return static_cast<T>(i); }

typedef unsigned char uchar;

int main() {
    int i = 56;
    std::cout << getValueAs<uchar>(i) << " " << (int)getValueAs<uchar>(i); // prints: 8 56
}
Simply because, although uchar is an integer type, the stream operator << prints the character it represents, not a sequence of digits. Passing an int, you get a different overload of that same stream operator, which does print a sequence of digits.
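To tie it back to the original code, a small sketch (using a hypothetical 10x10 CV_8UC1 Mat rather than the asker's image), showing both spellings of the promotion:
#include <opencv2/core.hpp>
#include <iostream>

int main() {
    cv::Mat image(10, 10, CV_8UC1, cv::Scalar(255));
    std::cout << (int)image.at<uchar>(0, 0) << std::endl; // explicit cast: prints 255
    std::cout << +image.at<uchar>(0, 0) << std::endl;     // unary + also promotes to int: 255
}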
First off, I apologize if this is a duplicate; but my Google-fu seems to be failing me today.
I'm in the middle of writing an image format module for Photoshop, and one of the save options for this format includes a 4-bit alpha channel. Of course, the data I have to convert is 8-bit/1-byte alpha, so I essentially need to take every two bytes of alpha and merge them into one.
My attempt (below), I believe, has a lot of room for improvement:
for(int x = 0, w = 0; x < alphaData.size(); x += 2, w++)
{
    short ashort = (alphaData[x] << 8) + alphaData[x+1];
    alphaFinal[w] = (unsigned char)ashort;
}
alphaData and alphaFinal are vectors that contain the 8-bit alpha data and the 4-bit alpha data, respectively. I realize that reducing two bytes to the value of one is bound to result in a loss of "resolution", but I can't help thinking there's a better way of doing this.
For extra information, here's the loop that does the reverse (converts the format's 4-bit alpha to 8-bit for Photoshop).
alphaData serves the same purpose as above, and imgData is an unsigned char vector that holds the raw image data. (The alpha data is tacked on after the actual RGB data in this particular variant of the format.)
for(int b = alphaOffset, x2 = 0; b < (alphaOffset + dataLength); b++, x2 += 2)
{
    unsigned char lo = (imgData[b] & 15);
    unsigned char hi = ((imgData[b] >> 4) & 15);
    alphaData[x2]   = lo * 17;
    alphaData[x2+1] = hi * 17;
}
Are you sure that it's
alphaData[x2]=lo*17;
alphaData[x2+1]=hi*17;
and not
alphaData[x2]=lo*16;
alphaData[x2+1]=hi*16;
In any case, to generate the values that work with the decoding function you have posted, you just have to reverse the operations. So multiplying by 17 becomes dividing by 17 and the shifts and masks get reordered to look like this:
for(int x = 0, w = 0; x < alphaData.size(); x += 2, w++)
{
    unsigned char alpha1 = alphaData[x] / 17;
    unsigned char alpha2 = alphaData[x+1] / 17;
    assert(alpha1 < 16 && alpha2 < 16); // from <cassert>
    alphaFinal[w] = (alpha2 << 4) | alpha1;
}
short ashort=(alphaData[x] << 8)+alphaData[x+1];
alphaFinal[w]=(unsigned char)ashort;
You're actually losing alphaData[x] in alphaFinal: you shift alphaData[x] eight bits to the left, and then the cast to unsigned char keeps only the low eight bits.
Also, your for loop is unsafe: if for some reason alphaData.size() is odd, you'll read out of range.
What you want to do, I think, is to truncate an 8-bit value into a 4-bit one, not to combine two 8-bit values. In other words, you want to drop the four least significant bits of each alpha value, not combine two different alpha values.
So, basically, you want to right-shift by 4.
output = (input >> 4); /* truncate four bits */
In case you're not familiar with binary shifts, take this random 8-bit number:
10110110
>> 1
= 01011011
>> 1
= 00101101
>> 1
= 00010110
>> 1
= 00001011
so,
10110110
>> 4
= 00001011
and to reverse, left-shift instead...
input = (output << 4); /* expand four bits */
which, using the result from that same random 8-bit number as before, would be
00001011
<< 4
= 10110000
Obviously, as you noted, 4 bits of precision are lost. But you'd be surprised how little it's noticed in a fully-composited work.
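To see the round trip of a single value under this scheme (a sketch; alpha8 is just an arbitrary sample value):
#include <cstdint>
#include <iostream>

int main() {
    uint8_t alpha8 = 0xB6;        // 182, the 8-bit input
    uint8_t alpha4 = alpha8 >> 4; // truncated to 4 bits: 0x0B (11)
    uint8_t back   = alpha4 << 4; // expanded again: 0xB0 (176)
    // decoding with the asker's multiply-by-17 instead gives 11 * 17 = 187,
    // which spreads the quantization error more evenly over 0..255
    std::cout << (int)alpha8 << " -> " << (int)alpha4
              << " -> " << (int)back << std::endl; // 182 -> 11 -> 176
}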
This code
for(int x = 0, w = 0; x < alphaData.size(); x += 2, w++)
{
    short ashort = (alphaData[x] << 8) + alphaData[x+1];
    alphaFinal[w] = (unsigned char)ashort;
}
is broken. Given:
#include <iostream>
using std::cout;
using std::endl;

typedef unsigned char uchar;

int main() {
    uchar x0 = 1; // for alphaData[x]
    uchar x1 = 2; // for alphaData[x+1]
    short ashort = (x0 << 8) + x1; // the value 0x0102
    uchar afinal = (uchar)ashort;  // truncates to 0x02
    cout << std::hex
         << "x0 = 0x" << (unsigned)x0 << " << 8 = 0x" << (x0 << 8) << endl
         << "x1 = 0x" << (unsigned)x1 << endl
         << "ashort = 0x" << ashort << endl
         << "afinal = 0x" << (unsigned)afinal << endl;
    // prints:
    // x0 = 0x1 << 8 = 0x100
    // x1 = 0x2
    // ashort = 0x102
    // afinal = 0x2   <-- x0 is gone
}
If you are saying that your source stream contains pairs of 4-bit values stored in 8-bit storage units, which you need to re-store as single 8-bit values, then what you want is:
for(int x = 0, w = 0; x < alphaData.size(); x += 2, w++)
{
    unsigned char aleft  = alphaData[x] & 0x0f;     // 4 bits.
    unsigned char aright = alphaData[x + 1] & 0x0f; // 4 bits.
    alphaFinal[w] = (aleft << 4) | aright;
}
"<<4" is equivalent to "*16", as ">>4" is equivalent to "/16".