The question I have is the following:
I have a camera with a resolution of 640 x 480 px, and I get an 8-bit-per-pixel grayscale image from it. After the image acquisition I save the image in BMP format. My code is the following:
Mat img2(640,480,CV_8UC1,0);
cap.read(img2);
bool succes = imwrite("D:\\TestImage3.bmp",img2);
if(!succes){
cout << "Failed to save the image";
return -1;
}
namedWindow("myWindow",CV_WINDOW_AUTOSIZE);
imshow("myWindow",img2);
The saved image is very large, almost 1 MB, and I want a smaller file without losing any information (i.e. without lossy compression).
The second question on this topic is:
even if the image is grayscale, I sometimes still get some RGB noise, as if I had set a 3-channel format instead of a 1-channel format for my image.
If anyone knows the answer, please let me know; I would be very grateful.
Thanks for your time!
You can save your image as PNG, which is a lossless image compression format.
bool succes = imwrite("D:\\TestImage3.png",img2);
With the cv::imwrite function you can pass additional parameters depending on the image format.
PNG is a lossless format, but you can still choose the compression level (0 to 9, where a higher value gives a smaller file at the cost of a longer compression time), for example:
Mat img2;
cap.read(img2);
cvtColor(img2, img2, CV_BGR2GRAY); // Convert to single channel
vector<int> compression_params;
compression_params.push_back(CV_IMWRITE_PNG_COMPRESSION);
compression_params.push_back(9);
bool succes = imwrite("D:\\TestImage3.bmp", img2, compression_params);
if(!succes)
{
cout << "Failed to save the image"; return -1;
}
imshow("myWindow",img2);
waitKey(0);
Just use the default constructor for Mat with no params.
Mat img2;
cap.read(img2);
cvtColor(img2, img2, CV_BGR2GRAY); // Convert to single channel
bool succes = imwrite("D:\\TestImage3.bmp", img2);
if(!succes)
{
cout << "Failed to save the image"; return -1;
}
imshow("myWindow",img2);
waitKey(0);
Also, BMP is known for its large uncompressed size. Use .png instead.
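A quick back-of-the-envelope check (my own numbers, not from the question) ties both questions together: an uncompressed BMP stores roughly width * height * channels bytes plus a small header, so a file close to 1 MB suggests the frame coming out of cap.read() is 3-channel BGR rather than 8-bit grayscale, which would also explain the "RGB noise".
#include <iostream>
int main()
{
    // Rough uncompressed BMP payload for a 640 x 480 frame.
    const int width = 640, height = 480;
    std::cout << "8-bit grayscale: " << width * height * 1 << " bytes\n"; // 307200 bytes, ~300 KB
    std::cout << "8-bit BGR:       " << width * height * 3 << " bytes\n"; // 921600 bytes, ~900 KB
    return 0;
}
Converting the frame with cvtColor(..., CV_BGR2GRAY), as both answers above do, therefore cuts the raw size by two thirds even before PNG compression is applied.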
I am stuck trying to figure out how to use the OpenCV demosaicing function. I have OpenCV 4.4.0 installed with CUDA support compiled in, and so far what I think I need to do is:
Read in the raw image data
Load in raw image data to a Mat object
Upload the Mat data to a GpuMat (host to device upload)
Demosaic
Download the GpuMat data (device to host download) to a Mat object
Display or write out the result
Here is a snippet of the code I have.
ifstream ifs("image.raw", ios_base::binary);
ifs.read(buffer, length);
// snip ...buffer contains the entire file...
Mat src_host(6464, 4860, CV_16UC1, buffer);
GpuMat dst, src;
src.upload(src_host);
// Debayer here
cv::cuda::demosaicing(src, dst, COLOR_BayerRG2BGR);
// have a look
Mat result_host;
dst.download(result_host);
namedWindow("Debayered Image", WINDOW_KEEPRATIO);
resizeWindow("Debayered Image", 6464/5, 4860/5);
imshow("Debayered Image", result_host);
waitKey(0);
I have raw frames from cameras that have 12 bits per pixel, RGGB, dimensions 6464 x 4860. I'm uncertain of how to specify this for OpenCV in terms of width and height, what CV_TYPE to give it, whether I am reading in and uploading the data properly for demosaicing, what COLOR_code to give it for demosaicing, and how to download the result for display and saving to file (preferably a high level routine to write a png or similar).
Does anyone know whether I'm on the right track or not?
Thanks! James
Yes, I'm on the right track. The rows and columns are accidentally swapped, so the corrected code is:
ifstream ifs("image.raw", ios_base::binary);
ifs.read(buffer, length);
// snip ...buffer contains the entire file...
Mat src_host(4860, 6464, CV_16UC1, buffer);
GpuMat dst, src;
src.upload(src_host);
// Debayer here
cv::cuda::demosaicing(src, dst, COLOR_BayerRG2BGR);
// have a look
Mat result_host;
dst.download(result_host);
namedWindow("Debayered Image", WINDOW_KEEPRATIO);
resizeWindow("Debayered Image", 4860/2, 6464/2);
imshow("Debayered Image", result_host);
waitKey(0);
While the sensor data is 12-bit, each 12-bit value sits inside a 16-bit word (hence CV_16UC1), which makes it a lot easier to deal with.
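The question also asked how to save the result to file; here is a minimal sketch of one way to do that (my addition, not part of the original answer): imwrite can store CV_16U images as 16-bit PNG, and since the 12-bit values only occupy the low part of each 16-bit word, scaling them up first gives a normally bright image.
// Assumption: result_host is the CV_16UC3 BGR image downloaded from the GPU above.
Mat out16;
result_host.convertTo(out16, CV_16U, 16.0);      // stretch 12-bit values (0..4095) into the 16-bit range
imwrite("debayered_16bit.png", out16);           // PNG supports 16-bit unsigned images
// Or drop to 8 bits per channel for quick viewing and smaller files:
Mat out8;
result_host.convertTo(out8, CV_8U, 255.0 / 4095.0);
imwrite("debayered_8bit.png", out8);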
I'm trying to load and display a .PGM image using OpenCV (2.4.0) for C++.
void open(char* location, int flag, int windowFlag)
{
Mat image = imread(location, flag);
namedWindow("Image window", windowFlag);
imshow("Image window", image);
waitKey(0);
}
I'm calling open like this:
open("./img_00245_c1.pgm", IMREAD_UNCHANGED, CV_WINDOW_AUTOSIZE);
The problem is that the image shown when the window opens is darker than when I open the file with IrfanView.
Also, if I try to write this image to another file like this:
Mat imgWrite;
imgWrite = image;
imwrite("newImage.pgm", imgWrite)
I get different file content than the original, and IrfanView displays it the same way my function displays it with imshow.
Is there a different flag in imread for .PGM files so that I can get the original file displayed and saved?
EDIT: Image pgm file
EDIT 2: I noticed that IrfanView normalizes the image to a maximum pixel value of 255.
In order to see the image clearly using OpenCV, I should also normalize the image when loading it into a Mat. Is this possible directly with OpenCV functions, without iterating over the pixels and modifying their values?
The problem is not in the way data are loaded, but in the way they are displayed.
Your image is CV_16UC1, and both imshow and imwrite normalize the values from the original range [0, 65535] to the range [0, 255] to fit the type CV_8U.
Since your PGM image has a max_value of 4096:
P2
1176 640 // width height
4096 // max_value
it should be normalized from the range [0, 4096] instead of [0, 65535].
You can do this with:
Mat img = imread("path_to_image", IMREAD_UNCHANGED);
img.convertTo(img, CV_8U, 255.0 / 4096.0);
imshow("Image", img);
waitKey();
Please note that the value range in your image doesn't actually span all of [0, 4096]:
double minv, maxv;
minMaxLoc(img, &minv, &maxv);
// minv = 198
// maxv = 2414
So a straightforward min-max normalization to [0, 255] like:
normalize(img, img, 0, 255, NORM_MINMAX);
img.convertTo(img, CV_8U);
won't work, as it will produce an image brighter than it should be.
This means that to display your image properly you need to know the max_value (here 4096). If it changes from file to file, you can retrieve it by parsing the .pgm header.
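A minimal sketch of that (my own example; it assumes a plain header like the one quoted above, i.e. magic number, width, height and max_value separated by whitespace, with no '#' comment lines):
#include <fstream>
#include <string>
// Reads the max_value field from a PGM header. Works for both P2 (ASCII) and P5 (binary)
// files, since the header itself is always text.
int readPgmMaxValue(const std::string& path)
{
    std::ifstream ifs(path, std::ios::binary);
    std::string magic;
    int width = 0, height = 0, maxValue = 0;
    ifs >> magic >> width >> height >> maxValue;
    return maxValue;
}
// Usage, following the conversion shown above:
// img.convertTo(img, CV_8U, 255.0 / readPgmMaxValue("path_to_image"));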
Again, it's just a visualization problem; the data are correct.
I have an algorithm that does several things. Among them is a conversion that works fine if I'm working on a CV_8UC3 image but goes wrong if the type is CV_16UC3.
This is some code:
//new image is created
Mat3w img(100,100,Vec3w(1000,0,0));
//Image Conversion - ERROR!
cv::Mat inputSource;
//saving the image here will work
img.convertTo(inputSource, CV_64FC3);
//saving the image here will not work -> black image
The problem is that processing the CV_16UC3 image produces a result with the right dimensions but fully black.
The problem is in the conversion: saving the image right before it gives a legitimate one, while saving it right after gives an almost completely white one.
EDIT:
I made some changes: cut out some useless code and added the inputSource declaration.
Now, while trying things out, I came to the conclusion that either I haven't understood the CV types or something strange is happening.
I always thought that the number in the type indicated the number of bits per channel. So, in my head, CV_16UC3 is a 3-channel image with 16 bits per channel. That idea is reinforced by the fact that the image I saved during tests (before img.convertTo) actually had the matching number of bits per channel. The strange thing is that the saved inputSource (type CV_64FC3) is an 8-bpc image.
What am I missing?
You're getting confused by the way imwrite and imread work in OpenCV. From the OpenCV documentation:
imwrite
The function imwrite saves the image to the specified file. The image format is chosen based on the filename extension (see imread() for the list of extensions). Only 8-bit (or 16-bit unsigned (CV_16U) in case of PNG, JPEG 2000, and TIFF) single-channel or 3-channel (with ‘BGR’ channel order) images can be saved using this function. If the format, depth or channel order is different, use Mat::convertTo() , and cvtColor() to convert it before saving. Or, use the universal FileStorage I/O functions to save the image to XML or YAML format.
imread
The function imread loads an image from the specified file and returns it. Possible flags are:
IMREAD_UNCHANGED : If set, return the loaded image as is (with alpha channel, otherwise it gets cropped).
IMREAD_GRAYSCALE : If set, always convert image to the single channel grayscale image.
IMREAD_COLOR : If set, always convert image to the 3 channel BGR color image.
IMREAD_ANYDEPTH : If set, return 16-bit/32-bit image when the input has the corresponding depth, otherwise convert it to 8-bit.
IMREAD_ANYCOLOR : If set, the image is read in any possible color format.
So in your case, CV_16U images are saved without conversion, while CV_64F images are converted and saved as CV_8U. If you want to store double data, you should use FileStorage.
You should also take care to read the image back with the appropriate imread flag.
This example should clarify:
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
// Create a 16-bit 3 channel image
Mat3w img16UC3(100, 200, Vec3w(1000, 0, 0));
img16UC3(Rect(0, 0, 20, 50)) = Vec3w(0, 2000, 0);
// Convert to 64-bit (double) 3 channel image
Mat3d img64FC3;
img16UC3.convertTo(img64FC3, CV_64FC3);
// Save to disk
imwrite("16UC3.png", img16UC3); // No conversion
imwrite("64FC3.png", img64FC3); // Converted to CV_8UC3
FileStorage fout("64FC3.yml", FileStorage::WRITE);
fout << "img" << img64FC3; // No conversion
fout.release();
Mat img_maybe16UC3_a = imread("16UC3.png" /*, IMREAD_COLOR*/); // Will be CV_8UC3
Mat img_maybe16UC3_b = imread("16UC3.png", IMREAD_ANYDEPTH); // Will be CV_16UC1
Mat img_maybe16UC3_c = imread("16UC3.png", IMREAD_UNCHANGED); // Will be CV_16UC3
Mat img_maybe64FC3_a = imread("64FC3.png" /*, IMREAD_COLOR*/); // Will be CV_8UC3
Mat img_maybe64FC3_b = imread("64FC3.png", IMREAD_ANYDEPTH); // Will be CV_8UC1
Mat img_maybe64FC3_c = imread("64FC3.png", IMREAD_UNCHANGED); // Will be CV_8UC3
Mat img_mustbe64FC3;
FileStorage fin("64FC3.yml", FileStorage::READ);
fin["img"] >> img_mustbe64FC3; // Will be CV_64FC3
fin.release();
return 0;
}
I'm trying to show the LiveView image in real time. I use EDSDK 2.14 + Qt5 + OpenCV + MinGW32 under Windows. I'm not very experienced in image processing, so I now have the following problem. I use the example from the Canon EDSDK, and everything was fine until this part of the code:
//
// Display image
//
I googled a lot of examples, but all of them were written in C#, MFC, or VB. I also found advice to use libjpeg-turbo to decompress the image and then show it using OpenCV. I tried libjpeg-turbo but failed to understand what to do. Maybe somebody here has a code example of converting the LiveView stream to an OpenCV Mat or a QImage (since I use Qt)?
Here is what worked for me after following SAMPLE 10 from the Canon EDSDK Reference. It's a starting point for a more robust solution.
In the downloadEvfData function, I replaced the "Display image" part with the code below:
unsigned char *data = NULL;
EdsUInt32 size = 0;
EdsSize coords;
// get image coordinates
EdsGetPropertyData(evfImage, kEdsPropID_Evf_CoordinateSystem, 0, sizeof(coords), &coords);
// get buffer pointer and size
EdsGetPointer(stream, (EdsVoid**)&data);
EdsGetLength(stream, &size);
//
// release stream and evfImage
//
// wrap the JPEG bytes in a Mat header and decode them into a BGR image
Mat img(coords.height, coords.width, CV_8U, data);
image = imdecode(img, CV_LOAD_IMAGE_COLOR);
I've also changed the function signature:
EdsError downloadEvfData(EdsCameraRef camera, Mat& image)
And in the main function:
Mat image;
namedWindow("main", WINDOW_NORMAL);
startLiveView(camera);
for (;;) {
    downloadEvfData(camera, image);
    imshow("main", image);
    if (waitKey(10) >= 0)
        break;
}
Based on the Canon EDSDK example, you can copy your EdsStreamRef 'stream' data, with its correct length, into a QByteArray. Then use, for example, the following to parse the raw data from the QByteArray as a JPG into a new QImage:
QImage my_image = QImage::fromData(limagedata, "JPG");
Once it's in a QImage, you can convert it into an OpenCV cv::Mat (see How to convert QImage to opencv Mat).
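For completeness, a small sketch of that idea (my own code, reusing the EdsGetPointer / EdsGetLength calls and types from the answer above; error checking omitted):
// Assumption: 'stream' is the EdsStreamRef filled by EdsDownloadEvfImage, as in the previous answer.
unsigned char *data = NULL;
EdsUInt32 size = 0;
EdsGetPointer(stream, (EdsVoid**)&data);
EdsGetLength(stream, &size);
// Wrap the JPEG bytes without copying, then let Qt decode them.
QByteArray jpegBytes = QByteArray::fromRawData(reinterpret_cast<const char*>(data), static_cast<int>(size));
QImage my_image = QImage::fromData(jpegBytes, "JPG");
Note that fromRawData does not copy the bytes, so the conversion to QImage has to happen (as it does here) before the stream is released.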
Well it depends on the format of the liveview-stream.
There must be some kind of delimiter in it, and you then need to convert each image and update your QImage with it.
Check out this tutorial for more information: Canon EDSDK Tutorial in C#
QImage img = QImage::fromData(data, length, "JPG");
m_image = QImageToMat(img);
// -----------------------------------------
cv::Mat MainWindow::QImageToMat(QImage& src)
{
cv::Mat tmp(src.height(),src.width(),CV_8UC4,(uchar*)src.bits(),src.bytesPerLine());
cv::Mat result = tmp.clone();
return result;
}
// -------------------------
void MainWindow::ShowVideo()
{
namedWindow("yunhu",WINDOW_NORMAL);
while(1)
{
requestLiveViewImage();
if(m_image.data != NULL)
{
imshow("yunhu", m_image);
cvWaitKey(50);
}
}
}
How do I save a Magick::Image in grayscale format? I'm using ImageMagick to decode images and write the result into an OpenCV matrix. What I'm doing now is reading the color image and then converting it to grayscale with OpenCV:
Magick::Image image("test.png");
cv::Mat mat(image.rows(), image.columns(), CV_8UC3);
image.write(0, 0, image.columns(), image.rows(), "BGR", Magick::CharPixel, mat.data);
cv::cvtColor(mat, mat, CV_BGR2GRAY);
I'd like to write the image to the cv::Mat already in grayscale, without the intermediate color image. This should be very simple, but I wasn't able to figure it out from the docs and would appreciate any help.
I'd also like to know how to detect whether an image contains an alpha channel.
To set an image to grayscale, simply call Magick::Image::type( Magick::ImageType ) before writing the image pixels to cv.
Magick::Image image("test.png");
image.type( Magick::GrayscaleType );
image.write(0, 0, image.columns(), image.rows(), "BGR", Magick::CharPixel, mat.data);
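If the goal is to avoid the 3-channel intermediate entirely, a variation that should also work (my own sketch, relying on ImageMagick's "I" intensity export map rather than anything from the original answer) is to export straight into a single-channel Mat:
Magick::Image image("test.png");
image.type( Magick::GrayscaleType );
// Export one intensity value per pixel directly into a CV_8UC1 matrix.
cv::Mat gray(image.rows(), image.columns(), CV_8UC1);
image.write(0, 0, image.columns(), image.rows(), "I", Magick::CharPixel, gray.data);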
To detect whether an image has transparency, simply check if Magick::Image::matte() returns true.
Magick::Image image("test.png");
std::cout << "transparent = " << ( image.matte() ? "true" : "false") << std::endl;