Converting an RGB image into an HSI image without using OpenCV functions - C++

I am something of a beginner when it comes to C++ and OpenCV.
I have an assignment where I have to convert an image from RGB to HSI, split the HSI image into its three channels (Hue, Saturation, and Intensity) without using any library functions for the algorithm itself, and then display the three images.
I was able to do most of this, but I am completely lost on the RGB-to-HSI conversion. From what I saw in other posts, the pixel values should be read into a matrix, transformed by my formulas, and the new values written into a new (HSI) matrix.
My main problem (I think) is that I cannot seem to write the values into the new matrix; I tried different methods but the outcome was the same.
Any input is welcome.
Best regards,
Stefan
#include <opencv/highgui.h>
#include <opencv/cv.h>
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
    Mat rgb;
    rgb = imread("Flower.jpg", CV_LOAD_IMAGE_COLOR);
    unsigned char *input = (unsigned char*)(rgb.data);
    Mat hsi = rgb.clone();
    double R, G, B, a, H, S, I;
    int i, j;
    const double PI = 3.14;
    for (int i = 0; i < hsi.rows; i++) {
        for (int j = 0; j < hsi.cols; j++) {
            B = input[hsi.step * j + i];
            G = input[hsi.step * j + i + 1];
            R = input[hsi.step * j + i + 2];
        }
        if (R < G && R < B)
            a = R;
        if (G < R && G < B)
            a = G;
        if (B < G && B < R)
            a = B;
        I = (R + G + B) / 3.0;
        S = 1 - 3.0 / (R + G + B) * a;
        if (S == 0.0)
        {
            H = 0.0;
        }
        else
        {
            if (B <= G)
                H = acos((((R - G) + (R - B)) / 2.0) / (sqrt((R - G) * (R - G) + (R - B) * (G - B))));
            else
            {
                if (B > G)
                    H = 2 * PI - acos((((R - G) + (R - B)) / 2.0) / (sqrt((R - G) * (R - G) + (R - B) * (G - B))));
            }
        }
    }
    namedWindow("RGB", CV_WINDOW_AUTOSIZE);
    imshow("RGB", rgb);
    namedWindow("HSI", CV_WINDOW_AUTOSIZE);
    imshow("HSI", hsi);
    waitKey(0);
    return 0;
}

Your code has two problems.
First, you perform the calculation outside the inner loop:
for (int i = 0; i < hsi.rows; i++) {
    for (int j = 0; j < hsi.cols; j++) {
    }
    // HSI calculation
}
So you only ever work with the last pixel of each row.
Second, you never write the results into the hsi matrix.
Use this code template (note that ptr must be given the row index):
Mat bgr = ...; // OpenCV stores pixels in BGR order
Mat hsi(bgr.size(), CV_8UC3);
for (int i = 0; i < bgr.rows; ++i)
{
    const Vec3b* bgr_row = bgr.ptr<Vec3b>(i);
    Vec3b* hsi_row = hsi.ptr<Vec3b>(i);
    for (int j = 0; j < bgr.cols; ++j)
    {
        double B = bgr_row[j][0];
        double G = bgr_row[j][1];
        double R = bgr_row[j][2];
        double H = ...;
        double S = ...;
        double I = ...;
        hsi_row[j] = Vec3b(H, S, I);
    }
}
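To make the template concrete: a minimal, OpenCV-free sketch of the per-pixel conversion, using the same formulas as the question (the function name is illustrative; H comes back in radians, S and I in [0, 1], so scaling would still be needed before storing into an 8-bit Vec3b):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Convert one RGB pixel (each channel in 0..255) to HSI.
// H is in radians [0, 2*pi); S and I are in [0, 1].
void rgb_to_hsi(double R, double G, double B,
                double& H, double& S, double& I) {
    const double PI = 3.14159265358979323846;
    R /= 255.0; G /= 255.0; B /= 255.0;
    I = (R + G + B) / 3.0;                   // intensity: channel average
    double min_c = std::min({R, G, B});
    S = (I > 0.0) ? 1.0 - min_c / I : 0.0;   // saturation: 1 - min/I
    if (S == 0.0) { H = 0.0; return; }       // hue undefined for grays
    double num = 0.5 * ((R - G) + (R - B));
    double den = std::sqrt((R - G) * (R - G) + (R - B) * (G - B));
    double theta = std::acos(num / den);
    H = (B <= G) ? theta : 2.0 * PI - theta; // reflect when B > G
}
```

To store the result in an 8-bit image, one option is to map H * 255 / (2 * PI), S * 255, and I * 255 into the three Vec3b slots.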

Related

Image pixels not equal size opencv/C++

I'm attempting to use opencv2 to preprocess and display images for classification, but the pixels do not seem to be formatted properly (I'm not sure if "formatting" is the right term). The image was originally 1080p, and I used ffmpeg to crop and scale it to 480x800. I was getting weird results, so I tested the program with the following code, which overlays a simple checkerboard pattern where all squares should be the same size and square:
std::string image_path = samples::findFile("/home/pi/test.jpg");
Mat img = imread(image_path, IMREAD_COLOR);
cv::cvtColor(img, img, COLOR_BGR2RGB);
for (int i = 0; i < 15; i++) {
    for (int j = 0; j < 25; j++) {
        int x;
        if ((i + j) % 2 == 0) x = 1;
        else x = 0;
        for (int a = i * 32; a < (i + 1) * 32; a++) {
            for (int b = j * 32; b < (j + 1) * 32; b++) {
                img.at<int>(a, b) = x * img.at<int>(a, b);
            }
        }
    }
}
I get the following:
(image: checkerboard_test)
The original image looks exactly as it should, without any stretching or other issues. This is being displayed on a small touch screen attached to a Raspberry Pi. Any help would be greatly appreciated.
I figured it out. The code should be
img.at<Vec3b>(a, b) = x * img.at<Vec3b>(a, b);
instead of
img.at<int>(a, b) = x * img.at<int>(a, b);
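The underlying issue: on an 8-bit 3-channel Mat each pixel occupies 3 bytes, but at<int> reads and writes 4-byte units, so every access also touches a byte of a neighbouring pixel. A tiny OpenCV-free sketch of that clobbering (a hypothetical helper over a plain byte buffer, standing in for the Mat):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Two 3-byte (Vec3b-style) pixels packed in a row. Writing a 4-byte int
// at pixel 0 overwrites pixel 0's three bytes *and* the first byte of
// pixel 1 -- which is what at<int> does on a CV_8UC3 image.
bool int_write_clobbers_neighbor() {
    unsigned char buf[6] = {10, 20, 30, 40, 50, 60};
    std::int32_t zero = 0;
    std::memcpy(buf, &zero, sizeof(zero)); // 4-byte write into a 3-byte slot
    return buf[3] == 0;                    // pixel 1's first byte was hit
}
```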

Image Rotation gives grayscale image

I have a problem with my image rotation function in C++, using OpenCV and Qt.
It sort of does its job, but not as expected: apart from the result being in grayscale, part of the image seems to be duplicated at the top right.
(image: before)
(image: after)
void ImgProcessing::rotate(cv::Mat &img, cv::Mat &tmp, int angle) {
    float rads = angle * 3.1415926 / 180.0;
    float cs = cos(-rads);
    float ss = sin(-rads);
    float xcenter = (float)(img.cols) / 2.0;
    float ycenter = (float)(img.rows) / 2.0;
    for (int i = 0; i < img.rows; i++)
        for (int j = 0; j < img.cols; j++) {
            int rorig = ycenter + ((float)(i) - ycenter) * cs - ((float)(j) - xcenter) * ss;
            int corig = xcenter + ((float)(i) - ycenter) * ss + ((float)(j) - xcenter) * cs;
            int pixel = 0;
            if (rorig >= 0 && rorig < img.rows && corig >= 0 && corig < img.cols) {
                tmp.at<int>(i, j) = img.at<int>(rorig, corig);
            } else tmp.at<int>(i, j) = 0;
        }
}
Can the problem be in accessing to the image pixels?
It depends on how you read the image in, but I think you are accessing it incorrectly. For an 8-bit 3-channel image it should be something like this (at takes row, column order):
Vec3b intensity = image.at<Vec3b>(i, j);
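For comparison, here is a sketch of the same inverse-mapping rotation on a plain single-channel byte buffer, with no OpenCV types involved (nearest-neighbour sampling; the function name and layout are illustrative, not the asker's code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Rotate a row-major grayscale image by angle_deg about its centre,
// using inverse mapping: for each destination pixel, look up the
// source pixel it came from (nearest neighbour).
std::vector<unsigned char> rotate_gray(const std::vector<unsigned char>& src,
                                       int rows, int cols, double angle_deg) {
    const double PI = 3.14159265358979323846;
    double rads = angle_deg * PI / 180.0;
    double cs = std::cos(-rads), ss = std::sin(-rads);
    double yc = (rows - 1) / 2.0, xc = (cols - 1) / 2.0;
    std::vector<unsigned char> dst(src.size(), 0); // out-of-range -> black
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j) {
            int r = (int)std::lround(yc + (i - yc) * cs - (j - xc) * ss);
            int c = (int)std::lround(xc + (i - yc) * ss + (j - xc) * cs);
            if (r >= 0 && r < rows && c >= 0 && c < cols)
                dst[i * cols + j] = src[r * cols + c];
        }
    return dst;
}
```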

Subtract opencv matrix from 3 channel matrix

I have two matrices:
cv::Mat bgr(rows, cols, CV_16UC3);
cv::Mat ir(rows, cols, CV_16UC1 );
and I want to subtract ir from each channel of bgr element-wise. I haven't found an elegant solution yet.
EDIT
One possible solution might be:
// subtract IR from BGR
Vec3w tmp; // OpenCV's 3-element ushort vector is Vec3w (there is no Vec3u)
for (int i = 0; i < ir.rows; i++) {
    for (int j = 0; j < ir.cols; j++) {
        tmp = bgr.at<Vec3w>(i, j);
        tmp[0] = tmp[0] - ir.at<ushort>(i, j);
        tmp[1] = tmp[1] - ir.at<ushort>(i, j);
        tmp[2] = tmp[2] - ir.at<ushort>(i, j);
        bgr.at<Vec3w>(i, j) = tmp;
    }
}
The question is whether there is a faster solution.
If we're talking about an elegant way, it could be like this:
Mat mat = Mat::ones(2,2,CV_8UC1);
Mat mat1 = Mat::ones(2,2,CV_8UC2)*3;
Mat mats[2];
split(mat1,mats);
mats[0]-=mat;
mats[1]-=mat;
merge(mats,2,mat1);
You shouldn't use at() if you want your code to be efficient. Use pointers, and check the Mats for continuity:
int rows = mat.rows;
int cols = mat.cols;
if (mat.isContinuous() && mat1.isContinuous())
{
    cols *= rows;
    rows = 1;
}
for (int j = 0; j < rows; j++) {
    auto two_channel_row = mat1.ptr<Vec2b>(j);
    auto one_channel_row = mat.ptr<uchar>(j);
    for (int i = 0; i < cols; i++) {
        two_channel_row[i][0] -= one_channel_row[i];
        two_channel_row[i][1] -= one_channel_row[i];
    }
}
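The same per-pixel work can be sketched without OpenCV on a raw interleaved 3-channel buffer. Unlike the plain -= loop, this sketch saturates at 0 the way OpenCV's saturating arithmetic (e.g. cv::subtract) would; it is an illustration under that assumption, not the answer's code:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// bgr holds interleaved 16-bit triplets (3 values per pixel); ir holds
// one 16-bit value per pixel. Subtract ir from every channel,
// saturating at 0 instead of wrapping around.
void subtract_channel(std::vector<std::uint16_t>& bgr,
                      const std::vector<std::uint16_t>& ir) {
    for (std::size_t p = 0; p < ir.size(); ++p)
        for (int c = 0; c < 3; ++c) {
            std::uint16_t& v = bgr[3 * p + c];
            v = (v > ir[p]) ? static_cast<std::uint16_t>(v - ir[p]) : 0;
        }
}
```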

variance of sliding window in image

I am working on traffic sign detection. First I apply a segmentation to the RGB image to obtain a red-channel image, as illustrated in image 1.
Secondly, I try to find homogeneous regions, to eliminate regions of no interest (not a traffic sign), by computing the variance of a sliding window over the image.
I use this code, but I always get an exception:
int main(int argc, char** argv)
{
    IplImage *image1;
    if ((image1 = cvLoadImage("segmenter1/00051.jpg", 0)) == 0)
        return NULL;
    int rows = image1->width;
    int cols = image1->height;
    Mat image = Mat::zeros(cols, rows, CV_32FC1);
    double x = 0;
    double temp = 0;
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < cols; j++) {
            temp = cvGet2D(image1, j, i).val[0];
            x = temp / 255;
            image.at<float>(j, i) = x;
            x = image.at<float>(j, i);
        }
    }
    int k = 16;
    double seuil = 0.0013;
    CvScalar blanc; // white pixel
    blanc.val[0] = 255;
    cv::Scalar mean, stddev; // 0: 1st channel, 1: 2nd channel, 2: 3rd channel
    for (int j = 0; j < rows - k; j++)
    {
        for (int i = 0; i < cols - k; i++)
        {
            double som = 0;
            double var = 0;
            double t = 0;
            for (int jj = j; jj < k + j; jj++)
            {
                for (int ii = i; ii < k + i; ii++)
                {
                    t = image.at<float>(jj, ii);
                    som = som + t;
                    t = t * t;
                    var = var + t;
                }
            }
            som = som / (k * k);
            if (som > 0.18) {
                var = (var / (k * k)) - (som * som);
                if (var < seuil)
                    cvSet2D(image1, j, i, blanc);
            }
        }
    }
    char stsave[80];
    cvSaveImage("variance/00051.jpg", image1);
    cv::waitKey(0);
    return 0;
}
Without the specific exception, I can only guess that it is an out-of-range access. According to the OpenCV docs, the cvGet2D and cvSet2D parameters are image, y, x, which effectively translates to image, row, col. You have flipped the definitions of rows and cols and have conflicting usage between the two loops. Fix these and try again.
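The window statistic itself is easy to isolate from the image I/O and test separately. A small OpenCV-free sketch of the mean/variance step used in the inner loops (same sum and sum-of-squares accumulation, i.e. E[x^2] - E[x]^2; the helper name is illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Variance of the k x k window whose top-left corner is (r, c) in a
// row-major single-channel float image with `cols` columns.
double window_variance(const std::vector<float>& img, int cols,
                       int r, int c, int k) {
    double sum = 0.0, sum_sq = 0.0;
    for (int i = r; i < r + k; ++i)
        for (int j = c; j < c + k; ++j) {
            double t = img[i * cols + j];
            sum += t;
            sum_sq += t * t;
        }
    double n = (double)k * k;
    double mean = sum / n;
    return sum_sq / n - mean * mean; // E[x^2] - (E[x])^2
}
```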

Alpha-trimmed filter troubles

I am trying to implement an alpha-trimmed filter with the OpenCV library. My code is not working properly and the resulting image does not look like it should after filtering.
The filter should work in the following way:
Choose some (array of) pixels; in my example it is a 3x3 window of 9 pixels.
Order them in increasing order.
Cut the 'array' on both sides by alpha/2.
Calculate the arithmetic mean of the remaining pixels and insert it in the proper place.
int alphatrimmed(Mat img, int alpha)
{
    Mat img9 = img.clone();
    const int start = alpha / 2;
    const int end = 9 - (alpha / 2);
    // go through the whole image
    for (int i = 1; i < img.rows - 1; i++)
    {
        for (int j = 1; j < img.cols - 1; j++)
        {
            uchar element[9];
            Vec3b element3[9];
            int k = 0;
            int a = 0;
            // select the elements of the 3x3 window
            for (int m = i - 1; m < i + 2; m++)
            {
                for (int n = j - 1; n < j + 2; n++)
                {
                    element3[a] = img.at<Vec3b>(m*img.cols + n);
                    a++;
                    for (int c = 0; c < img.channels(); c++)
                    {
                        element[k] += img.at<Vec3b>(m*img.cols + n)[c];
                    }
                    k++;
                }
            }
            // compare and sort the elements in the window (uchar element[9])
            for (int b = 0; b < end; b++)
            {
                int min = b;
                for (int d = b + 1; d < 9; d++)
                {
                    if (element[d] < element[min])
                    {
                        min = d;
                        const uchar temp = element[b];
                        element[b] = element[min];
                        element[min] = temp;
                        const Vec3b temporary = element3[b];
                        element3[b] = element3[min];
                        element3[min] = temporary;
                    }
                }
            }
            // index in the result image (after the alpha-trimmed filter)
            int result = (i - 1) * (img.cols - 2) + j - 1;
            for (int l = start; l < end; l++)
                img9.at<Vec3b>(result) += element3[l];
            img9.at<Vec3b>(result) /= (9 - alpha);
        }
    }
    namedWindow("AlphaTrimmed Filter", WINDOW_AUTOSIZE);
    imshow("AlphaTrimmed Filter", img9);
    return 0;
}
Without the actual data it's somewhat of a guess, but a uchar can't hold the sum of 3 channels; it wraps modulo 256 (at least on any platform OpenCV supports).
The proper solution is std::sort with a suitable comparator for your Vec3b:
bool L1(const Vec3b& a, const Vec3b& b) { return a[0]+a[1]+a[2] < b[0]+b[1]+b[2]; }
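Separated from the image traversal, the trim-and-average step the filter needs looks roughly like this (an illustrative helper operating on channel sums, using std::sort as suggested; not the asker's function):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Alpha-trimmed mean: sort the window values, drop the alpha/2 smallest
// and alpha/2 largest, and average what remains.
double alpha_trimmed_mean(std::vector<double> window, int alpha) {
    std::sort(window.begin(), window.end());
    int start = alpha / 2;
    int end = (int)window.size() - alpha / 2;
    double sum = 0.0;
    for (int i = start; i < end; ++i)
        sum += window[i];
    return sum / (end - start);
}
```

With alpha = 0 this degenerates to the ordinary mean; with alpha = 8 on a 3x3 window it becomes the median.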