I have a grayscale image that I want to display in color by mapping the grayscale values through a color palette (like colormap in Matlab).
I managed to do it using the OpenCV cvSet2D function, but I would like to access the pixels directly for performance reasons.
But when I do that, the image has strange colors. I tried setting the colors in different orders (RGB, BGR, …) but can't seem to get around it.
Here is my code:
IplImage* temp = cvCreateImage( cvSize(img->width/scale, img->height/scale), IPL_DEPTH_8U, 3 );
for (int y = 0; y < temp->height; y++)
{
    uchar* ptr1 = (uchar*) ( temp->imageData + y * temp->widthStep );
    uchar* ptr2 = (uchar*) ( img->imageData + y * img->widthStep );
    for (int x = 0; x < temp->width; x++)
    {
        CvScalar v1;
        int intensity = (int)ptr2[x];
        int b = 0, g = 0, r = 0;
        r = colormap[intensity][0];
        g = colormap[intensity][1];
        b = colormap[intensity][2];
        if (true)
        {
            ptr1[3*x]   = b;
            ptr1[3*x+1] = g;
            ptr1[3*x+2] = r;
        }
        else
        {
            v1.val[0] = r;
            v1.val[1] = g;
            v1.val[2] = b;
            cvSet2D(temp, y, x, v1);
        }
    }
}
Change the if (true) to if (false) to switch between the two pixel-access methods.
The correct result, with cvSet2D:
The wrong result, with direct memory access:
Thank you for your help.
I have done something similar for coloring depth maps from the Microsoft Kinect sensor. The code I used for converting a grayscale depth map into a color image will work for what you are trying to do. You may need slight modifications, as in my case the depth values were in the range 500 to 2000 and I had to rescale them.
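As a rough idea, the rescaling can be done with convertTo; this is only a sketch, assuming a CV_16U depth map and the 500–2000 range from my setup (adjust the limits to your data):
// Sketch: map depth values in [minDepth, maxDepth] to the 0..255 range.
cv::Mat depth16;   // CV_16U depth frame from the sensor (assumed)
cv::Mat gray8;
double minDepth = 500.0, maxDepth = 2000.0;   // assumed range, adjust as needed
depth16.convertTo(gray8, CV_8U,
                  255.0 / (maxDepth - minDepth),
                  -minDepth * 255.0 / (maxDepth - minDepth));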
The function for converting a grayscale image into a colour image is:
void colorizeDepth( const Mat& gray, Mat& rgb )
{
    double maxDisp = 255;
    float S = 1.f;
    float V = 1.f;
    rgb.create( gray.size(), CV_8UC3 );
    rgb = Scalar::all(0);
    if( maxDisp < 1 )
        return;
    for( int y = 0; y < gray.rows; y++ )
    {
        for( int x = 0; x < gray.cols; x++ )
        {
            uchar d = gray.at<uchar>(y,x);
            unsigned int H = 255 - ((uchar)maxDisp - d) * 280 / (uchar)maxDisp;
            unsigned int hi = (H/60) % 6;
            float f = H/60.f - H/60;
            float p = V * (1 - S);
            float q = V * (1 - f * S);
            float t = V * (1 - (1 - f) * S);
            Point3f res;
            if( hi == 0 ) // R = V, G = t, B = p
                res = Point3f( p, t, V );
            if( hi == 1 ) // R = q, G = V, B = p
                res = Point3f( p, V, q );
            if( hi == 2 ) // R = p, G = V, B = t
                res = Point3f( t, V, p );
            if( hi == 3 ) // R = p, G = q, B = V
                res = Point3f( V, q, p );
            if( hi == 4 ) // R = t, G = p, B = V
                res = Point3f( V, p, t );
            if( hi == 5 ) // R = V, G = p, B = q
                res = Point3f( q, p, V );
            uchar b = (uchar)(std::max(0.f, std::min(res.x, 1.f)) * 255.f);
            uchar g = (uchar)(std::max(0.f, std::min(res.y, 1.f)) * 255.f);
            uchar r = (uchar)(std::max(0.f, std::min(res.z, 1.f)) * 255.f);
            rgb.at<Point3_<uchar> >(y,x) = Point3_<uchar>(b, g, r);
        }
    }
}
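For completeness, a minimal usage sketch (the file name and window titles are just placeholders):
// Hypothetical usage of colorizeDepth on an 8-bit grayscale image.
cv::Mat gray = cv::imread("gray.png", cv::IMREAD_GRAYSCALE);
cv::Mat rgb;
colorizeDepth(gray, rgb);
cv::imshow("input", gray);
cv::imshow("colorized", rgb);
cv::waitKey(0);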
For an input image which looks like this:
The output of this code is:
I'm posting this to close the question, as asked in the comments on my question.
The answer was:
I found my error... In fact it's RGB, but that wasn't the problem: I had color values at 256 instead of 255... Really sorry... I guess asking the question helped me find the answer.
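A value of 256 stored into a uchar wraps around to 0, which is what produces odd colors with direct access. If the table can ever go out of range, a small clamp (just a sketch, assuming colormap holds ints, reusing the variables from the loop above) guards against it:
// Clamp palette entries to 0..255 so a stray 256 cannot wrap around to 0.
r = std::min(colormap[intensity][0], 255);
g = std::min(colormap[intensity][1], 255);
b = std::min(colormap[intensity][2], 255);
ptr1[3*x]   = (uchar)b;
ptr1[3*x+1] = (uchar)g;
ptr1[3*x+2] = (uchar)r;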
Thank you
Related
I have the following problem.
I want to use these equations in OpenCV:
X = blue channel, Y = green channel and Z = red channel
x = X / (X + Y + Z);
y = Y / (X + Y + Z);
z = Z / (X + Y + Z);
But when I run my code I get a window with 3 images in 1.
First I extract the luminance, because I want the chrominance of the red channel.
Can anybody help me?
Here is my code:
Mat source = cv::imread("Image.bmp");
imshow("Original", source);
Mat src(source.rows, source.cols, CV_32FC3);
normalize(source, src, 0, 1, CV_MINMAX, CV_32FC1);
Mat Lum = Mat::zeros(src.rows, src.cols, CV_32FC1);
Mat Crom = Mat::zeros(src.rows, src.cols, CV_32FC3);
for (size_t i = 0; i < src.rows; i++)
{
    for (size_t j = 0; j < src.cols; j++)
    {
        Vec3f pixel = src.at<Vec3f>(i, j);
        float B = pixel[0];
        float G = pixel[1];
        float R = pixel[2];
        Lum.at<float>(i, j) = ( B + G + R ) / 3;
    }
}
imshow("Lum", Lum);
/// Code for the chrominance
for (size_t i = 0; i < Lum.rows; i++)
{
    for (size_t j = 0; j < Lum.cols; j++)
    {
        Vec3f pixel = src.at<Vec3f>(i, j);
        float B = pixel[0];
        float G = pixel[1];
        float R = pixel[2];
        Crom.at<Vec3f>(i,j)[0] = ( Lum.at<Vec3f>(i,j)[0] ) / ( Lum.at<Vec3f>(i,j)[0] + Lum.at<Vec3f>(i,j)[1] + Lum.at<Vec3f>(i,j)[2] );
        Crom.at<Vec3f>(i,j)[1] = ( Lum.at<Vec3f>(i,j)[1] ) / ( Lum.at<Vec3f>(i,j)[0] + Lum.at<Vec3f>(i,j)[1] + Lum.at<Vec3f>(i,j)[2] );
        Crom.at<Vec3f>(i,j)[2] = ( Lum.at<Vec3f>(i,j)[2] ) / ( Lum.at<Vec3f>(i,j)[0] + Lum.at<Vec3f>(i,j)[1] + Lum.at<Vec3f>(i,j)[2] );
    }
}
imshow("Cromancia", Crom);
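For reference, the chromaticity equations above divide the B, G and R values of the pixel itself, not elements of the single-channel Lum image; a minimal sketch of that per-pixel computation (assuming src really is CV_32FC3, and reusing the variable names from the code above):
// Per-pixel chromaticity: x = X/(X+Y+Z), y = Y/(X+Y+Z), z = Z/(X+Y+Z).
for (int i = 0; i < src.rows; i++)
{
    for (int j = 0; j < src.cols; j++)
    {
        Vec3f pixel = src.at<Vec3f>(i, j);
        float B = pixel[0], G = pixel[1], R = pixel[2];
        float sum = B + G + R + 1e-6f;   // small epsilon to avoid division by zero
        Crom.at<Vec3f>(i, j) = Vec3f(B / sum, G / sum, R / sum);
    }
}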
I am trying to write a program that converts the RGB color space to HSV. It is based on the conversion algorithm described here; most of the places I searched have a similar algorithm.
My input to the program: 67 42 96
Output: 267.778, 0.5625, 96 (the last value should be 38, as below)
Expected output: 268 56% 38%
For other inputs too, I can see that the H and S values are as they should be, but the V value is different. What could be the reason for that?
#include <iostream>
#include <algorithm> // std::max
using namespace std;

void rgb2hsv(float r, float g, float b) {
    float h = 0.0;
    float s = 0.0;
    float v = 0.0;
    float min = std::min( std::min(r, g), b );
    float max = std::max( std::max(r, g), b );
    v = max; // v
    float delta = max - min;
    if( max != 0.0 )
        s = delta / max; // s
    else {
        // r = g = b = 0: s = 0, v is undefined
        s = 0.0;
        h = -1.0;
        cout << h << " , " << s << " , " << v << endl;
    }
    if( r == max )
        h = ( g - b ) / delta;        // between yellow & magenta
    else if( g == max )
        h = 2.0 + ( b - r ) / delta;  // between cyan & yellow
    else
        h = 4.0 + ( r - g ) / delta;  // between magenta & cyan
    h = h * 60.0; // degrees
    if( h < 0.0 )
        h += 360.0;
    cout << h << " , " << s << " , " << v << endl;
}

int main() {
    while(1) {
        float r, g, b;
        cin >> r >> g >> b;
        rgb2hsv(r, g, b);
    }
    return 0;
}
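Judging from the expected output (268 56% 38%), the reference expresses S and V as percentages, with V relative to 255; a sketch of that final scaling, using the h, s and v computed above, would be:
// Report S and V as percentages for 0-255 RGB input.
float v_pct = v / 255.0f * 100.0f;  // 96 -> 37.6..., i.e. about 38%
float s_pct = s * 100.0f;           // 0.5625 -> about 56%
cout << h << " , " << s_pct << "% , " << v_pct << "%" << endl;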
I am a newbie in OpenCV and am trying to convert a 16-bit grayscale image to a color image using a color palette. I have done this operation with 8-bit images, but now I am fetching frames from a thermal camera which gives me 16-bit frames, and I can't convert them to 8 bit because that decreases the quality of the image, which makes it useless. For 8-bit images I used the LUT function for this task.
My lookup table code for an 8-bit image:
Mat palette, im;
palette = imread("1.bmp", IMREAD_COLOR);
im = imread("C:\\Users\\Chandrapal Singh\\Desktop\\New folder\\IMG_0_10_34_45_2018_1.bmp", IMREAD_GRAYSCALE);
im.convertTo(im, CV_16U);
cvtColor(im.clone(), im, COLOR_GRAY2BGR);
double scale = (double)palette.rows / 256;
uchar b[256], g[256], r[256];
int i = 0;
for (double x = 1; x <= palette.rows;) {
    b[i] = palette.at<Vec3b>((int)x, palette.cols / 2)[0];
    g[i] = palette.at<Vec3b>((int)x, palette.cols / 2)[1];
    r[i] = palette.at<Vec3b>((int)x, palette.cols / 2)[2];
    i++;
    x += scale;
}
Mat channels[] = { Mat(256, 1, CV_8U, b), Mat(256, 1, CV_8U, g), Mat(256, 1, CV_8U, r) };
Mat lut;
cv::merge(channels, 3, lut);
Mat color;
cv::LUT(im, lut, color);
In the above code, palette is a color palette and im is a grayscale image. I read the colors from the palette, put them into lut, and then use the LUT function to make a colored image.
So, can anyone tell me how I can do the above with a 16-bit image? Thanks in advance.
When I run this code I get an exception which says:
Assertion failed ((lutcn == cn || lutcn == 1) && _lut.total() == 256 && _lut.isContinuous() && (depth == 0 || depth == 1)) in cv::LUT
Code to generate an 8-bit color image from 16-bit grayscale. It maps white to red and black to blue.
#include "opencv2/imgcodecs.hpp"
using namespace cv;
using namespace std;

float a;

Vec3b getBGR(float _val) // unique values only if a >= ~ 0.3 ~= 300 / float(1024);
{   // from https://stackoverflow.com/questions/3018313/algorithm-to-convert-rgb-to-hsv-and-hsv-to-rgb-in-range-0-255-for-both
    float H = a * _val;
    float S = 1;
    float V = 1;
    double hh, p, q, t, ff;
    long i;
    Vec3b BGR;
    if (S <= 0.0)
    {
        BGR[2] = V * 255; // R
        BGR[1] = V * 255; // G
        BGR[0] = V * 255; // B
        return BGR;
    }
    hh = H;
    if (hh >= 360.0) hh = 0.0; // not sure about that
    hh /= 60.0;
    i = (long)hh;
    ff = hh - i;
    p = V * (1.0 - S);
    q = V * (1.0 - (S * ff));
    t = V * (1.0 - (S * (1.0 - ff)));
    switch (i)
    {
    case 0:
        BGR[2] = V * 255;
        BGR[1] = t * 255;
        BGR[0] = p * 255;
        break;
    case 1:
        BGR[2] = q * 255;
        BGR[1] = V * 255;
        BGR[0] = p * 255;
        break;
    case 2:
        BGR[2] = p * 255;
        BGR[1] = V * 255;
        BGR[0] = t * 255;
        break;
    case 3:
        BGR[2] = p * 255;
        BGR[1] = q * 255;
        BGR[0] = V * 255;
        break;
    case 4:
        BGR[2] = t * 255;
        BGR[1] = p * 255;
        BGR[0] = V * 255;
        break;
    case 5:
    default:
        BGR[2] = V * 255;
        BGR[1] = p * 255;
        BGR[0] = q * 255;
        break;
    }
    return BGR;
}

void color_from_gray(Mat& gray, Mat& color)
{
    double minVal, maxVal;
    minMaxLoc(gray, &minVal, &maxVal);
    // HSV range from 0 (red) to 240 (blue)
    a = 240 / (maxVal - minVal); // global variable!
    MatIterator_<ushort> it, end;
    MatIterator_<Vec3b> iit = color.begin<Vec3b>(); // not the most efficient way to iterate
    for (it = gray.begin<ushort>(), end = gray.end<ushort>(); it != end; ++it, ++iit)
        *iit = getBGR(maxVal - *it);
}

int main(int argc, char** argv)
{
    Mat gray = imread("gray.tif", IMREAD_ANYDEPTH);
    Mat color(gray.size(), CV_8UC3);
    color_from_gray(gray, color);
    imwrite("color.tif", color);
    return 0;
}
LUT quick and dirty:
Mat gray = imread("gray.tif", IMREAD_ANYDEPTH);
Mat palette = imread("palette.png", IMREAD_COLOR);
Mat color(gray.size(), CV_8UC3);
Vec3b lut[65536];
double scale = (double)palette.rows / 65536;
int i = 0;
for (double x = 1; x <= palette.rows;)
{
    lut[i] = palette.at<Vec3b>((int)x, 0);
    i++;
    x += scale;
}
for (int j = 0; j < gray.rows; j++)     // rows
    for (int i = 0; i < gray.cols; i++) // cols
        color.at<Vec3b>(j, i) = lut[gray.at<ushort>(j, i)];
imwrite("color_lut.tif", color);
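Since the question mentions fetching frames from a thermal camera, one possible refinement (just a sketch; buildLut16 and colorizeFrame are hypothetical helpers, not part of the code above) is to build the 65536-entry table once and reuse it for every frame:
// Build the 65536-entry palette lookup once, then apply it to each CV_16U frame.
std::vector<cv::Vec3b> buildLut16(const cv::Mat& palette)
{
    std::vector<cv::Vec3b> lut(65536);
    double step = (double)palette.rows / 65536;
    for (int k = 0; k < 65536; k++)
        lut[k] = palette.at<cv::Vec3b>((int)(k * step), 0);
    return lut;
}

cv::Mat colorizeFrame(const cv::Mat& frame16, const std::vector<cv::Vec3b>& lut)
{
    cv::Mat color(frame16.size(), CV_8UC3);
    for (int y = 0; y < frame16.rows; y++)
        for (int x = 0; x < frame16.cols; x++)
            color.at<cv::Vec3b>(y, x) = lut[frame16.at<ushort>(y, x)];
    return color;
}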
I'm making a project where I need to change the lightness and contrast of an image; it's lightness, not brightness.
So my code at the start was:
for (int y = 0; y < dst.rows; y++) {
    for (int x = 0; x < dst.cols; x++) {
        int b = dst.data[dst.channels() * (dst.cols * y + x) + 0];
        int g = dst.data[dst.channels() * (dst.cols * y + x) + 1];
        int r = dst.data[dst.channels() * (dst.cols * y + x) + 2];
        // ... other processing stuff I'm doing
and it's good, it runs really fast, but when I try to do the HSV to HSL conversion to set the L value that I need, it gets really slow.
My HSV to HSL lines of code are:
cvtColor(dst, dst, CV_BGR2HSV);
Vec3b pixel = dst.at<cv::Vec3b>(y, x); // read pixel (y, x)
double H = pixel.val[0];
double S = pixel.val[1];
double V = pixel.val[2];
h = H;
l = (2 - S) * V;
s = s * V;
s /= (l <= 1) ? (l) : 2 - (l);
l /= 2;
/* I will make the calculations here to set the l I want */
H = h;
l *= 2;
s *= (l <= 1) ? l : 2 - l;
V = (l + s) / 2;
S = (2 * s) / (l + s);
pixel.val[0] = H;
pixel.val[1] = S;
pixel.val[2] = V;
cvtColor(dst, dst, CV_HSV2BGR);
I ran it and it was slow, so I removed lines one by one to see which was making it slow, and I figured out it was cvtColor(dst, dst, CV_BGR2HSV);
So is there a way to make it faster than using cvtColor, or is its time cost something that can be handled?
I think (I haven't opened a text editor to check, but it seems) that you need to convert the entire image to HSV and then call cvtColor once for the whole image; that is, call cvtColor once instead of once for every pixel. That should give you a significant boost in speed.
You would do this:
cvtColor(dst, dst, CV_BGR2HSV);
for (int y = 0; y < dst.rows; y++) {
    for (int x = 0; x < dst.cols; x++) {
        Vec3b pixel = dst.at<cv::Vec3b>(y, x); // read current pixel
        double H = pixel.val[0];
        double S = pixel.val[1];
        double V = pixel.val[2];
        h = H;
        l = (2 - S) * V;
        s = S * V;                        // note: S, not the uninitialized s
        s /= (l <= 1) ? (l) : 2 - (l);
        l /= 2;
        H = h;
        l *= 2;
        s *= (l <= 1) ? l : 2 - l;
        V = (l + s) / 2;
        S = (2 * s) / (l + s);
        pixel.val[0] = H;
        pixel.val[1] = S;
        pixel.val[2] = V;
        dst.at<cv::Vec3b>(y, x) = pixel;  // write the modified pixel back
    }
}
cvtColor(dst, dst, CV_HSV2BGR);
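As a side note, OpenCV also has a direct BGR↔HLS conversion (CV_BGR2HLS / CV_HLS2BGR), which may let you skip the manual HSV-to-HSL math entirely; for 8-bit images H is stored as 0–180 and L, S as 0–255. A minimal sketch:
// Alternative: let cvtColor produce HLS directly, then adjust the L channel.
cvtColor(dst, dst, CV_BGR2HLS);   // channels are now H, L, S
// ... modify the L channel (pixel.val[1]) here ...
cvtColor(dst, dst, CV_HLS2BGR);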
I want to change:
int q[10];
double weight[10];
for ( int i = 0; i < 10; i++ )
{
    q++;
    weight[i] = 10;
}
into cv::Mat form, so I did it like this:
cv::Mat q = cv::Mat ( 1, 10, CV_8UC3 );
cv::Mat w = cv::Mat ( 1, 10, CV_8UC3 );
for ( int i = 0; i < 10; i++ )
{
    uchar* p = q.ptr ( i );
    *p += 1;
}
weight.setTo ( 10 );
The code compiles without error, but since I don't have any reference to judge the result against, I suspect there might be mistakes in my changes. Or am I doing everything right here? Thank you.
int q[10] will be changed to cv::Mat q = cv::Mat(1, 10, CV_32SC1);
double w[10] will be changed to cv::Mat w = cv::Mat(1, 10, CV_64FC1);
You can access the raw pointers as:
int* qPtr = q.ptr<int>();
double* wPtr = w.ptr<double>();
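Putting it together, a minimal sketch of the converted loop (assuming the goal is to increment every element of q and set every weight to 10) might be:
// Single-channel Mats matching the original int/double arrays.
cv::Mat q = cv::Mat::zeros(1, 10, CV_32SC1);
cv::Mat weight = cv::Mat(1, 10, CV_64FC1);
int* qPtr = q.ptr<int>();
for (int i = 0; i < 10; i++)
    qPtr[i] += 1;      // element-wise, like q[i]++ on the array
weight.setTo(10);      // like weight[i] = 10 for every i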