C++ - Image Conversion

I am new to C++ and would like to know how to read in a .jpg image and then convert it to binary (black and white/bi-level/two-level)?
Thank you.

Your best choice is probably Boost GIL.
Boost libraries are not especially aimed at beginners, but they are usually well designed.
#include <boost/gil/image.hpp>
#include <boost/gil/typedefs.hpp>
#include <boost/gil/algorithm.hpp>
#include <boost/gil/image_view_factory.hpp>
#include <boost/gil/extension/io/jpeg_io.hpp>

int main() {
    using namespace boost::gil;
    rgb8_image_t img;
    jpeg_read_image("test.jpg", img);
    // Allocate a grayscale image and copy the color-converted pixels into it
    gray8_image_t gray(img.dimensions());
    copy_pixels(color_converted_view<gray8_pixel_t>(const_view(img)), view(gray));
    jpeg_write_view("grey.jpg", const_view(gray));
}
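Note that this only produces a grayscale image; since you asked for bi-level output, you can binarize the gray image before writing it. A minimal sketch, assuming a C++11 compiler (the 128 threshold is an arbitrary cut-off):
// Threshold in place: anything below 128 becomes black, everything else white
for_each_pixel(view(gray), [](gray8_pixel_t& p) { p[0] = (p[0] < 128) ? 0 : 255; });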

You can use DevIL to read the image. It supports a lot of different formats.
To convert it to pure black and white, you then go through the whole image data, compute the intensity or light contribution of each pixel, and output a black pixel if it falls below a certain threshold, otherwise a white pixel.
You could do it as simply as checking the RGB values of each pixel against a threshold of RGB(0.5, 0.5, 0.5), but you might get better results if you convert the image to HSI and threshold the intensity value of each pixel; that's more work, though.
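As a rough sketch of that per-pixel loop (assuming the image has already been decoded, e.g. by DevIL, into an interleaved 8-bit RGB buffer; all names here are illustrative):
#include <cstdint>
#include <vector>

// Threshold an interleaved 8-bit RGB buffer to bi-level (0 or 255)
std::vector<uint8_t> binarize(const std::vector<uint8_t>& rgb,
                              size_t width, size_t height,
                              uint8_t threshold = 128) {
    std::vector<uint8_t> out(width * height);
    for (size_t i = 0; i < width * height; ++i) {
        // Rough intensity: average of the three channels
        unsigned intensity = (rgb[3*i] + rgb[3*i + 1] + rgb[3*i + 2]) / 3;
        out[i] = (intensity < threshold) ? 0 : 255;
    }
    return out;
}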

There is also the option of libpng, which has been used in many projects. For additional reading on how to write a grayscale image, take a look at this chapter from their website.

Related

How to use CImg functions with pixel data?

I am using Visual Studio and looking to find a useful image processing library that will take care of basic image processing functions such as rotation so that I don't have to keep coding them manually. I came across CImg and it supports this, as well as many other useful functions, along with interpolation.
However, all the examples I've seen show CImg being used by loading and using full images. I want to work with pixel data. So my loops are the typical:
for (x=0;x<width; x++)
for (y=0;y<height; y++)
I want to perform bilinear or bicubic rotation in this instance, and I see CImg supports this; it provides rotate() and get_rotate() functions, among others.
I can't find any examples online that show how to use this with pixel data. Ideally, I could simply pass it the pixel color, x, y, and interpolation method, and have it return the result.
Could anyone provide any helpful suggestions? If CImg is not the right library for this type of thing, could anyone recommend a simple, lightweight, easy-to-use one?
Thank you!
You can copy pixel data to the CImg class using iterators, and copy it back when you are done.
std::vector<uint8_t> pixels_src, pixels_dst;
size_t width, height, n_colors;
// Copy from pixel data (note that CImg stores channels planar, not interleaved)
cimg_library::CImg<uint8_t> image(width, height, 1, n_colors);
std::copy(pixels_src.begin(), pixels_src.end(), image.begin());
// Do image processing
// Copy back to pixel data
pixels_dst.resize(width * height * n_colors);
std::copy(image.begin(), image.end(), pixels_dst.begin());
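For the rotation itself, a minimal sketch, assuming the common rotate/get_rotate overload taking (angle, interpolation, boundary_conditions), where interpolation 0 = nearest neighbor, 1 = linear, 2 = cubic (the 30-degree angle is just an example; check your CImg version's documentation):
// Rotate by 30 degrees with bicubic interpolation and Dirichlet (zero) boundaries
cimg_library::CImg<uint8_t> rotated = image.get_rotate(30.0f, 2, 0);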

ITK - Calculate texture features for segmented 3D brain MRI

I'm trying to calculate texture features for a segmented 3D brain MRI using the ITK library with C++, so I followed this example. The example takes a 3D image and extracts 3 different features for all 13 possible spatial directions. In my program, for a given 3D image, I just want to get:
Energy
Correlation
Inertia
Haralick Correlation
Inverse Difference Moment
Cluster Prominence
Cluster Shade
Here is what I have so far:
// definitions of used types
typedef itk::Image<float, 3> InternalImageType;
typedef itk::Image<unsigned char, 3> VisualizingImageType;
typedef itk::Neighborhood<float, 3> NeighborhoodType;
typedef itk::Statistics::ScalarImageToCooccurrenceMatrixFilter<InternalImageType> Image2CoOccuranceType;
typedef Image2CoOccuranceType::HistogramType HistogramType;
typedef itk::Statistics::HistogramToTextureFeaturesFilter<HistogramType> Hist2FeaturesType;
typedef InternalImageType::OffsetType OffsetType;
typedef itk::AddImageFilter<InternalImageType> AddImageFilterType;
typedef itk::MultiplyImageFilter<InternalImageType> MultiplyImageFilterType;

void calcTextureFeatureImage(OffsetType offset, InternalImageType::Pointer inputImage)
{
    // principal variables
    // gray-level co-occurrence matrix generator
    Image2CoOccuranceType::Pointer glcmGenerator = Image2CoOccuranceType::New();
    glcmGenerator->SetOffset(offset);
    glcmGenerator->SetNumberOfBinsPerAxis(16);  // reasonable number of bins
    glcmGenerator->SetPixelValueMinMax(0, 255); // for input UCHAR pixel type
    Hist2FeaturesType::Pointer featureCalc = Hist2FeaturesType::New();

    // region of interest
    typedef itk::RegionOfInterestImageFilter<InternalImageType, InternalImageType> roiType;
    roiType::Pointer roi = roiType::New();
    roi->SetInput(inputImage);

    InternalImageType::RegionType window;
    InternalImageType::RegionType::SizeType size;
    size.Fill(50);
    window.SetSize(size);
    window.SetIndex(0, 0);
    window.SetIndex(1, 0);
    window.SetIndex(2, 0);
    roi->SetRegionOfInterest(window);
    roi->Update();

    glcmGenerator->SetInput(roi->GetOutput());
    glcmGenerator->Update();
    featureCalc->SetInput(glcmGenerator->GetOutput());
    featureCalc->Update();

    std::cout << "\n Entropy : " << featureCalc->GetEntropy()
              << "\n Energy : " << featureCalc->GetEnergy()
              << "\n Correlation : " << featureCalc->GetCorrelation()
              << "\n Inertia : " << featureCalc->GetInertia()
              << "\n HaralickCorrelation : " << featureCalc->GetHaralickCorrelation()
              << "\n InverseDifferenceMoment : " << featureCalc->GetInverseDifferenceMoment()
              << "\n ClusterProminence : " << featureCalc->GetClusterProminence()
              << "\n ClusterShade : " << featureCalc->GetClusterShade();
}
The program works. However, I have this problem: it gives the same results for different 3D images, even when I change the window size.
Has anyone used ITK to do this? If there is any other method to achieve this, could anyone point me to a solution?
Any help will be much appreciated.
I think your images have only one gray level. For example, if you segment your images using the ITK-SNAP tool, it saves the result of the segmentation with a single gray level. So if you try to calculate texture features for images segmented with ITK-SNAP, you'll always get the same results, even if you change the images or the window size, because there is only one gray level in the co-occurrence matrix. Try running your program on unsegmented images; you'll certainly get different results.
EDIT :
To calculate texture features for segmented images, try another segmentation method, one that preserves the original gray levels of the unsegmented image.
Another strange thing in your code is size.Fill(50); the example shows it holding the window size:
size.Fill(3); //window size=3x3x3
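For completeness, a minimal driver for calcTextureFeatureImage might look like the sketch below (the command-line file name and the two offsets are illustrative assumptions; the original ITK example loops over all 13 directions):
#include "itkImageFileReader.h"
#include <cstdlib>

int main(int argc, char* argv[])
{
    // Read the 3D volume whose path is passed on the command line
    typedef itk::ImageFileReader<InternalImageType> ReaderType;
    ReaderType::Pointer reader = ReaderType::New();
    reader->SetFileName(argv[1]);
    reader->Update();

    // Two of the 13 possible 3D co-occurrence directions
    OffsetType offset1 = {{1, 0, 0}};
    OffsetType offset2 = {{0, 1, 0}};
    calcTextureFeatureImage(offset1, reader->GetOutput());
    calcTextureFeatureImage(offset2, reader->GetOutput());
    return EXIT_SUCCESS;
}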

Image Segmentation using OpenCV

I am pretty new to OpenCV and would like a little help.
My basic idea is to use OpenCV to create a small application for interior design.
Problem
How to differentiate between walls and floor of a picture (even when we have some noise in the picture).
Now, my idea is: if I can somehow find the edges of the wall or tile, then any object used for interior decoration (for example, a chair) can be placed so that it sits perfectly on the floor (i.e. the two images get blended).
My approach
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using namespace cv;
using namespace std;
int main() {
    Mat image = imread("/home/ayusun/Downloads/IMG_20140104_143443.jpg");
    Mat resized_img, gray, dst, contours, detected_edges;
    resize(image, resized_img, Size(1024, 768), 0, 0, INTER_CUBIC);
    threshold(resized_img, dst, 128, 255, THRESH_BINARY);
    //Canny(image, contours, 10, 350);
    namedWindow("resized image");
    imshow("resized image", resized_img);
    //imshow("threshold", dst);
    // Canny expects a single-channel 8-bit image, so convert to grayscale first
    cvtColor(resized_img, gray, COLOR_BGR2GRAY);
    blur(gray, detected_edges, Size(2, 2));
    imshow("blurred", detected_edges);
    Canny(detected_edges, contours, 10, 350);
    imshow("contour", contours);
    waitKey(0);
    return 0;
}
I tried the Canny edge detection algorithm, but it seems to find a lot of edges. And I still don't know how to combine the floor of the room with the chair.
Thanks
Sorry for the involuntary advertisement, but IKEA has a catalog smartphone app which uses augmented reality to position objects/furniture over an image of your room. Is that what you're trying to do?
In order to achieve this you need a "pinpoint": a fixed point to hook your objects to. That is usually what helps differentiate between walls and floor in the app above (and it renders things easy).
Distinguishing walls from floors is hard even for a human if they're hanging by their feet and the walls/floors have the same texture (we manage to do it thanks to our sense of gravity).
Find some keypoints, or please state whether you're planning to do it with a fixed camera (i.e. one that will never be held horizontally).
OpenCV's POSIT may be useful for you (here is an example): http://opencv-users.1802565.n2.nabble.com/file/n6908580/main.cpp
Also take a look at augmented reality toolkits, ArUco for example.
For more advanced methods, take a look at PTAM.
And you can find some useful links and papers here: http://www.doc.ic.ac.uk/~ajd/
Segmenting walls and floors out of a single image is possible to some extent, but it requires a lot of work; you will need quite a complex system to achieve decent results. You can probably do much better with a pair of images (stereo reconstruction).

Extracting Depth images of Kinect using opencv

Does anyone know the simplest way to extract gray-level depth images from the Kinect using OpenCV and C++? Is there any source code in this field?
If you use the OpenNI SDK, you can simply point to the buffer:
//on setup:
xn::DepthGenerator depthGenerator;
xn::DepthMetaData depthMD;
cv::Mat depthWrapper;
//on update loop,
//after context.WaitAnyUpdateAll();
depthGenerator.GetMetaData(depthMD);
depthWrapper = cv::Mat(depthMD.YRes(), depthMD.XRes(), CV_16UC1, (void*) depthMD.Data());
Note that depthWrapper wraps a const buffer, so you need to clone it in order to manipulate it.
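For example (the name depthMat is just illustrative):
cv::Mat depthMat = depthWrapper.clone();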
The documentation has everything you need. Can't elaborate better than this.
You need to do two things (apart from reading about context, depth generator and initialization of Kinect):
1. Read the depth into a Mat of type CV_16U:
   a. context.WaitOneUpdateAll(depth_map);
   b. Mdepth_original = Mat(h_depth, w_depth, CV_16U, (void*) depth_map.GetData());
   c. Copy the Mat, since its buffer will be overwritten during the next read: Mdepth_original.copyTo(depth);
2. Map depth to gray or color. Color seems like a good idea (256^3 levels), but the human eye is more sensitive to luminance changes. Even with 256 levels you can map 10,000 Kinect levels reasonably well using the histogram equalization technique. The simplest way, though, is to lose precision and just do I(x, y) = 255.0 * z(x, y) / z_range.
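As a minimal sketch of that simple linear mapping (assuming depth is the CV_16UC1 Mat from step 1; the z_range value is an assumption, e.g. the maximum expected depth in millimeters):
// Linearly rescale 16-bit depth into an 8-bit grayscale image
double z_range = 10000.0; // assumed maximum depth value
cv::Mat depth8;
depth.convertTo(depth8, CV_8U, 255.0 / z_range);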
Here is how histogram equalization is implemented in OpenNI2:
https://github.com/OpenNI/OpenNI2/blob/master/Samples/Common/OniSampleUtilities.h

Recoloring an image based on current theme?

I want to develop a program which recolors an input image based on a given theme, the same way the MS PowerPoint application does.
The following link shows exactly what I want to do: I want to generate images like the ones under the Dark Variations and Light Variations titles, based on the current theme.
http://blogs.msdn.com/powerpoint/archive/2006/07/06/658238.aspx
Can anybody give me an idea or some info on how to achieve this efficiently?
Take a look at the HSL color space to achieve the same result. HSL stands for Hue, Saturation, Lightness.
You can keep the lightness of each pixel of your image and change only the hue. I think this will allow you to achieve what you want. You can find the RGB-to-HSL conversion on the wiki page.
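As a sketch, here are the standard RGB/HSL conversion formulas and a per-pixel recolor helper built on them (the function names and the choice to also keep the pixel's saturation are assumptions; all values are normalized to [0, 1]):
#include <algorithm>

// Convert r,g,b in [0,1] to h,s,l in [0,1]
void rgbToHsl(float r, float g, float b, float& h, float& s, float& l) {
    const float mx = std::max({r, g, b});
    const float mn = std::min({r, g, b});
    l = (mx + mn) / 2.0f;
    if (mx == mn) { h = s = 0.0f; return; } // achromatic
    const float d = mx - mn;
    s = (l > 0.5f) ? d / (2.0f - mx - mn) : d / (mx + mn);
    if (mx == r)      h = (g - b) / d + (g < b ? 6.0f : 0.0f);
    else if (mx == g) h = (b - r) / d + 2.0f;
    else              h = (r - g) / d + 4.0f;
    h /= 6.0f;
}

static float hueToRgb(float p, float q, float t) {
    if (t < 0.0f) t += 1.0f;
    if (t > 1.0f) t -= 1.0f;
    if (t < 1.0f / 6.0f) return p + (q - p) * 6.0f * t;
    if (t < 1.0f / 2.0f) return q;
    if (t < 2.0f / 3.0f) return p + (q - p) * (2.0f / 3.0f - t) * 6.0f;
    return p;
}

// Convert h,s,l in [0,1] back to r,g,b in [0,1]
void hslToRgb(float h, float s, float l, float& r, float& g, float& b) {
    if (s == 0.0f) { r = g = b = l; return; } // achromatic
    const float q = (l < 0.5f) ? l * (1.0f + s) : l + s - l * s;
    const float p = 2.0f * l - q;
    r = hueToRgb(p, q, h + 1.0f / 3.0f);
    g = hueToRgb(p, q, h);
    b = hueToRgb(p, q, h - 1.0f / 3.0f);
}

// Recolor one pixel: keep its lightness and saturation, take the theme's hue
void recolorPixel(float& r, float& g, float& b, float themeHue) {
    float h, s, l;
    rgbToHsl(r, g, b, h, s, l);
    hslToRgb(themeHue, s, l, r, g, b);
}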
Hope that helps.
Step 1: Choose the colors you want to represent black and white. For the dark variations, choose black and a light color; for the light variations, choose a dark color and white.
Step 2: Convert a pixel to gray. A common formula for this is L = R*0.3 + G*0.59 + B*0.11.
Step 3: Interpolate between the colors using the gray value. output.R = (L/255)*light.R + (1-(L/255))*dark.R and likewise for green and blue.
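A minimal sketch of those three steps for a single pixel (the RGB struct and function name are just for illustration):
#include <cstdint>

struct RGB { uint8_t R, G, B; };

// Interpolate between 'dark' and 'light' using the pixel's gray level
RGB recolor(RGB in, RGB dark, RGB light) {
    // Step 2: convert to gray
    const float L = in.R * 0.3f + in.G * 0.59f + in.B * 0.11f;
    // Step 3: interpolate between the two chosen colors
    const float t = L / 255.0f;
    RGB out;
    out.R = static_cast<uint8_t>(t * light.R + (1.0f - t) * dark.R);
    out.G = static_cast<uint8_t>(t * light.G + (1.0f - t) * dark.G);
    out.B = static_cast<uint8_t>(t * light.B + (1.0f - t) * dark.B);
    return out;
}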
You can use a library like CxImage: convert the image to grayscale, then make another image of the same size as the original and blend the two with the Mix command, using its filters. A mix-screen should tint the pixels the color of the second image in the resulting image. Try playing with CxImage a bit and see if it will do what you want. This is all off the top of my head, and it's been a while since I have tried to do anything like this. YMMV, but this would be the simplest implementation. You could always look at how CxImage does the blend and apply it to the image yourself.
I must say thanks to Mark and Patrice for your guidance, which helped me achieve it.
For the light variations, I converted the theme colors to the HSV color space and found the relation between the output color and the theme color for a black input color.
The relation turned out to be linear for saturation and value, while the hue was almost constant.
I used an interpolation formula to make it generic for any given theme.
I also made use of a color matrix to achieve the desired result.
Similarly, for the dark variations, I used white as the input color and applied the same technique.