I want to put all the data of an 8-bit input color image (the input file is a .bmp file)
into a new 16-bit Mat array.
I do this because I need to sum up several image patches and then build the mean value for each pixel.
Mat img = imread(Inputfile); //there now should be values between 0 and 255
Addressing the blue value, for example, like this
uchar* ptr = img.data + img.step*row;
cout << ptr[n*col] << endl;
only prints single letters and no numeric values.
cout << static_cast<short>(ptr[n*col]) << endl;
Casting to short or bigger prints the correct values, but a cast to unsigned char (which is the correct datatype in my opinion) prints the same letters as without any cast.
A short has 2 bytes as far as I know, but a color .bmp should only have 1 byte of color information per channel. Since in the worst case I need to sum up 81 (less than 128 = 7 bits) pixel values, I thought short would be a good target type.
Any help concerning the right way to get simple access to the 8-bit values and use them in 16-bit arrays would be great.
Thank you.
The cast works correctly, but if you send an unsigned char to the output stream, it will be interpreted and printed as a character.
Also note that OpenCV already has functionality to convert a matrix to a different datatype. You can even read your image into a matrix of the preferred datatype:
cv::Mat3s img = cv::imread(...);
Also, using the raw data pointer is discouraged. Read the OpenCV documentation on how to access single pixels or rows in a clean fashion (iterators, operator(), operator[], at<>()...).
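For example, a minimal sketch of the conversion and a clean pixel access (the variable names img8, img16 and acc are mine; Inputfile, row and col are from the question):
cv::Mat img8 = cv::imread(Inputfile);              // CV_8UC3, values 0..255
cv::Mat img16;
img8.convertTo(img16, CV_16SC3);                   // same values, now stored in 16-bit signed channels
cv::Mat acc = cv::Mat::zeros(img8.size(), CV_16SC3);
acc += img16;                                      // sum patches without overflowing 8 bits
cv::Vec3s pixel = img16.at<cv::Vec3s>(row, col);   // pixel[0] is the blue channel
cout << pixel[0] << endl;                          // prints a number, not a character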
I have a function that needs to return a 16-bit unsigned int vector, but another function that calls this one needs the output as an 8-bit unsigned int vector. For example, if I start out with:
std::vector<uint16_t> myVec(640*480);
How might I convert it to the format of:
std::vector<uint8_t> myVec2(640*480*4);
UPDATE (more information):
I am working with libfreenect and its getDepth() method. I have modified it to output a 16-bit unsigned integer vector so that I can retrieve the depth data in millimeters. However, I would also like to display the depth data. I am working with some example C++ code from the freenect installation, which uses GLUT and requires an 8-bit unsigned int vector to display the depth. However, I need the 16-bit vector to retrieve the depth in millimeters and log it to a text file. Therefore, I was looking to retrieve the data as a 16-bit unsigned int vector in GLUT's draw function and then convert it so that I can display it with the GLUT function that's already written.
As per your update, assuming the 8-bit unsigned int is going to be displayed as a gray scale image, what you need is akin to a Brightness Transfer Function. Basically, your output function is looking to map the data to the values 0-255, but you don't necessarily want those to correspond directly to millimeters. What if all of your data was from 0-3mm? Then your image would look almost completely black. What if it was all 300-400mm? Then it'd be completely white because it was clipped to 255.
A rudimentary way to do it would be to find the minimum and maximum values, and do this:
double scale = 255.0 / (double)(maxVal - minVal);
for (size_t i = 0; i < std::min(myVec.size(), myVec2.size()); ++i)
{
    // shift by the minimum, scale into 0..255 and truncate to one byte
    myVec2.at(i) = (uint8_t)((double)(myVec.at(i) - minVal) * scale);
}
Depending on the distribution of your data, you might need to do something a little more complex to get the most out of your dynamic range.
Edit: This assumes your glut function is creating an image, if it is using the 8-bit value as an input to a graph then you can disregard.
Edit2: An update after your other update. If you want to fill a 640x480x4 vector, you are clearly building an image. You need to do what I outlined above, but the 4 channels it is looking for are Red, Green, Blue, and Alpha. The Alpha channel needs to be 255 at all times (it controls how transparent the pixel is, and you don't want it to be transparent). As for the other 3: if you set all three channels (red, green, and blue) to the same scaled value from the function above, the result will appear as grayscale. For example, if my data ranged from 0-25mm, then for a pixel whose value is 10mm I would set the value to 255/(25-0) * 10 = 102, and therefore the pixel would be (102, 102, 102, 255).
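A minimal sketch of that conversion, assuming RGBA channel order (the display code might expect a different order) and that minVal and maxVal have already been computed; the vector names follow the question:
std::vector<uint8_t> myVec2(640 * 480 * 4);
double scale = 255.0 / (double)(maxVal - minVal);
for (size_t i = 0; i < myVec.size(); ++i)
{
    uint8_t gray = (uint8_t)((double)(myVec[i] - minVal) * scale);
    myVec2[4 * i + 0] = gray; // Red
    myVec2[4 * i + 1] = gray; // Green
    myVec2[4 * i + 2] = gray; // Blue
    myVec2[4 * i + 3] = 255;  // Alpha: fully opaque
}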
Edit 3: Adding wikipedia link about Brightness Transfer Functions - https://en.wikipedia.org/wiki/Color_mapping
How might I convert it to the format of:
std::vector<uint8_t> myVec2; such that myVec2.size() will be twice as big as myVec.size()?
myVec2.reserve(myVec.size() * 2);
for (auto it = begin(myVec); it != end(myVec); ++it)
{
    uint8_t val = static_cast<uint8_t>(*it);  // isolate the low 8 bits
    myVec2.push_back(val);
    val = static_cast<uint8_t>((*it) >> 8);   // isolate the upper 8 bits
    myVec2.push_back(val);
}
Or you can swap the order of the push_back() calls if it matters which byte comes first (the upper or the lower).
Straightforward way:
std::vector<std::uint8_t> myVec2(myVec.size() * 2);
std::memcpy(myVec2.data(), myVec.data(), myVec.size() * sizeof(std::uint16_t)); // the byte count must cover the full 16-bit elements
or, with the standard library,
std::copy(begin(myVec), end(myVec), begin(myVec2));
Note, however, that std::copy converts element by element, truncating each 16-bit value to its low byte, so it does not reproduce the raw byte layout that memcpy gives you.
I knew this was going to come back and bite me one day. I'm reading an image, doing a resize to 48 pixels tall (by whatever the width is), then grabbing the total image columns and reading each individual pixel to get the color values. All of this information gets written out to a file. The concise version of the code is this:
unsigned char cols, rows;
unsigned char red, green, blue;
short int myCol, myRow;
cols = processedImage.columns();
rows = processedImage.rows();
myFile.write(reinterpret_cast<const char *>(&cols), sizeof(cols));
for (myCol = cols - 1; myCol >= 0; myCol--) {
    for (myRow = rows - 1; myRow >= 0; myRow--) {
        Magick::ColorRGB rgb(processedImage.pixelColor(myCol, myRow));
        red = rgb.red() * 255;
        green = rgb.green() * 255;
        blue = rgb.blue() * 255;
        myFile.write(reinterpret_cast<const char *>(&red), sizeof(red));
        myFile.write(reinterpret_cast<const char *>(&green), sizeof(green));
        myFile.write(reinterpret_cast<const char *>(&blue), sizeof(blue));
    }
}
The problem here is when the image is wider than what an unsigned char can hold. For example, I'm processing a file that's 494x48 pixels.
When I look at the (binary) file created, the first line, which holds the column count, says it's '238' (494 wraps around to 238, since 494 mod 256 = 238). The next lines start the RGB data:
0: 238 // Column count
1: 255 // Red
2: 0 // Green
3: 0 // Blue
4: 255 // Red
5: 0 // Green
6: 0 // Blue
So I'm stuck. How can I store the actual columns value as a single line in the resulting file?
What about using more than one byte instead of a single byte? Suppose you use, say, 4 bytes to store the cols, rows, etc. Since one byte can store 0-255, 4 bytes can store 256x256x256x256 values, i.e. 32 bits, which is plenty; see the sketch below.
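For instance, a rough sketch of writing the dimensions as fixed-width 16-bit values (uint16_t already covers any realistic image size here; a uint32_t would work the same way, and the variable names below are just illustrative):
uint16_t cols16 = static_cast<uint16_t>(processedImage.columns());
uint16_t rows16 = static_cast<uint16_t>(processedImage.rows());
myFile.write(reinterpret_cast<const char *>(&cols16), sizeof(cols16)); // 2 bytes, holds values up to 65535
myFile.write(reinterpret_cast<const char *>(&rows16), sizeof(rows16));
The reader then has to consume the same number of bytes (and assume the same byte order) when parsing this little header.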
Answering my own question. Thanks to everyone who responded and helped figure out what I was doing wrong. The issue stems from months of making assumptions based on Arduino code. Arduino has a single INT/UINT, and I was using that to read in values from the generated files. I assumed that data type was a uint8_t when in reality it's a uint16_t. Since that was messing up other parts of the code (namely what position to seek to in a file), I had switched to a char data type, as that only takes up 1 byte. But in doing so I hit the roll-over issue mentioned above. So the solution, now that I know more about how the data types work in Arduino code:
change the image file processing to use uint16_t for both rows and columns
(since I have access to it) change the reading on the Arduino side to also use uint16_t
change the file seek command to move one more byte after the "header" so the data being read doesn't get mangled.
And ultimately, I've now stopped using Arduino's built-in data types and switched to platform-independent data types that are actually what they say they are.
Chalk this up to another learning experience (in my ongoing process of actually learning C++)...
I have an IplImage I, and it's supposed to contain pixel values from 0 to 255.
Unfortunately, when I print its data I get weird special characters like:
ØÕÖÕÓÎËÍÌÈÃÃÁ»¶±«¨¤Ÿ™”‰
I did the following:
uchar* d_I = (uchar*) I->imageData;
How can I convert those characters into values from 0 to 255 ?
Since you have tagged C++, here's how you can print a char as a number using cout.
Keep your previous code.
uchar* d_I = (uchar*) I->imageData;
std::cout << (int)*d_I;
Are you taking image->widthStep into account? The pixels aren't simply the pointer cast to the data type you passed to cvCreateImage (or what got loaded by cvLoadImage) and then incremented... the rows are aligned to 4 or 8 bytes, so consecutive rows aren't necessarily contiguous in memory (though they will be close, within a few bytes).
Access your pixels with:
(unsigned char)CV_IMAGE_ELEM(myImage, unsigned char, y, x)
or if you started with a 64F depth image:
(double)CV_IMAGE_ELEM(myImage, double, row, col)
see:
How to access the elements of single channel IplImage in Opencv
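For illustration, a minimal sketch of row-wise access that respects widthStep, assuming an 8-bit single-channel image (multi-channel images would index with x * nChannels + channel; the loop variables are mine, not from the question):
for (int y = 0; y < I->height; ++y)
{
    // widthStep is the row stride in bytes and may include alignment padding
    uchar* row = (uchar*)(I->imageData + y * I->widthStep);
    for (int x = 0; x < I->width; ++x)
    {
        int value = row[x];           // promote to int so cout prints a number, not a character
        std::cout << value << " ";
    }
    std::cout << std::endl;
}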
Instead of casting to uchar, you should cast to int:
int* d_I = (int*)I->imageData;
This may well have come up before but the following code is taken from an MSDN example I am modifying. I want to know how I can iterate through the contents of the buffer which contains data about a bitmap and print out the colors. Each pixel is 4 bytes of data so I am assuming the R G B values account for 3 of these bytes, and possibly A is the 4th.
What is the correct C++ syntax for the pointer arithmetic (ideally inside a loop) that will store the value pointed to during that iteration into a local variable that I can use, e.g. print to the console?
Many thanks
PS. Is this safe? Or is there a safer way to read the contents of an IMFMediaBuffer? I could not find an alternative.
Here is the code:
hr = pSample->ConvertToContiguousBuffer(&pBuffer); // this is the BitmapData
// Converts a sample with multiple buffers into a sample with a single IMFMediaBuffer which we Lock in memory next...
// IMFMediaBuffer represents a block of memory that contains media data
hr = pBuffer->Lock(&pBitmapData, NULL, &cbBitmapData); // pBuffer is IMFMediaBuffer
/* Lock method gives the caller access to the memory in the buffer, for reading or writing:
pBitmapData - receives a pointer to start of buffer
NULL - receives the maximum amount of data that can be written to the buffer. This parameter can be NULL.
cbBitmapData - receives the length of the valid data in the buffer, in bytes. This parameter can be NULL.
*/
I solved the problem myself and thought it best to add the answer here so that it formats correctly and maybe others will benefit from it. Basically, in this situation each pixel uses 32 bits of image data, and conveniently we are reading raw memory, so there is no bitmap header to skip: the buffer is just raw color information.
NOTE: Across these 4 bytes we have, in order, B G R A, which we can verify by using my code:
int x = 0;
while (x < cbBitmapData) {
    Console::Write("B: {0}", (*(pBitmapData + x++)));
    Console::Write("\tG: {0}", (*(pBitmapData + x++)));
    Console::Write("\tR: {0}", (*(pBitmapData + x++)));
    Console::Write("\tA: {0}\n", (*(pBitmapData + x++)));
}
From the output you will see that the A value is 0 for each pixel because there is no concept of transparency or depth here, which is what we expect.
Also to verify that all we have in the buffer is raw image data and no other data I used this calculation which you may also find of use:
Console::Write("no of pixels in buffer: {0} \nexpected no of pixels based on dimensions:{1}", (cbBitmapData/4), (m_format.imageWidthPels * m_format.imageHeightPels) );
We divide cbBitmapData by 4 because it is a count of bytes, and, as mentioned, each pixel is 4 bytes wide (32-bit DWORDs, strictly speaking, since the length of a byte is apparently not always uniform across hardware). We compare this to the image width multiplied by its height. They are equal, and thus the buffer holds nothing but pixel color information.
Hope this helps someone.
I am using C++ GDI+ to open a GIF;
however, I find the frame interval is really strange.
It is different from what Windows' picture viewer plays.
The code I have written is as follows.
m_pMultiPageImg = new Bitmap(XXXXX);
int size = m_pMultiPageImg->GetPropertyItemSize(PropertyTagFrameDelay);
m_pTimeDelays = (PropertyItem*) malloc(size);
m_pMultiPageImg->GetPropertyItem(PropertyTagFrameDelay, size, m_pTimeDelays);
int frameSize = m_pMultiPageImg->GetFrameDimensionsCount();
// the interval of frame FrameNumber:
long lPause = ((long*)m_pTimeDelays->value)[FrameNumber] * 10;
However, I found that for some frames lPause <= 0.
What does this mean?
And is the code I listed the right way to get the interval?
Many thanks!
The frame duration field in the GIF header is only two bytes long (interpreted as hundredths of a second, allowing values from 0 to 655.35 seconds).
You seem to be interpreting it as long, which is probably 4 bytes on your platform, so you may be reading another field along with the duration. It is hard to tell from the code you provided, but I think this is the problem.
Frame delays should not be negative numbers. I think the error creeps in during the array type conversion, or "FrameNumber" goes out of bounds.
The PropertyItem returned for PropertyTagFrameDelay holds a native byte array in its value field. It is safer to interpret it as a 32-bit (Int32/UINT32) array instead of a "long" array: "long" is always 4 bytes under 32-bit systems, but can be 8 bytes under some 64-bit systems.
m_pMultiPageImg->GetFrameDimensionsCount() returns the number of frame dimensions in the image, not the number of frames. The dimension of the first frame (master image) is usually used in order to get the frame count.
In your case, the code looks like
int count = m_pMultiPageImg->GetFrameDimensionsCount();
GUID* dimensionIDs = new GUID[count];
m_pMultiPageImg->GetFrameDimensionsList(dimensionIDs, count);
int frameCount = m_pMultiPageImg->GetFrameCount(&dimensionIDs[0]);
delete[] dimensionIDs;
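Building on that, a rough sketch of reading the per-frame delays as 32-bit values (the UINT32 interpretation follows the note above about not using "long"; frameCount comes from the snippet just shown):
UINT sizeBytes = m_pMultiPageImg->GetPropertyItemSize(PropertyTagFrameDelay);
PropertyItem* delays = (PropertyItem*) malloc(sizeBytes);
m_pMultiPageImg->GetPropertyItem(PropertyTagFrameDelay, sizeBytes, delays);
for (int i = 0; i < frameCount; ++i)
{
    UINT32 delay = ((UINT32*)delays->value)[i]; // hundredths of a second for frame i
    long pauseMs = (long)delay * 10;            // convert to milliseconds
    // ... use pauseMs ...
}
free(delays);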
Hope this helps.