How to convert an image into a matrix using OpenCV? - C++

I am trying to write a program in OpenCV that converts an image into matrix form, with each value representing one of the image's pixels. I have converted the image into binary form, and now I want to convert its pixel values into a matrix.

If you need to use a CvMat object, you may want to try the cvCopy function. It takes CvArr* as its arguments, so both IplImage and CvMat will fit. If you are willing to leave the C API for something more modern, you can load the image into a cv::Mat object and use the C++ threshold function.
The question is why you want to convert the format of a matrix that you already have (IplImage, like all the other image types, is already a matrix). If you want a matrix of bool type, use the Matx or Mat_ template class for this.
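For the C++ route, a minimal sketch of that idea might look like the following (the file name and threshold value are invented for illustration; Mat_<uchar> stands in for a "matrix of bools", since OpenCV has no 1-bit matrix type):
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
int main()
{
    // Load directly as a single-channel 8-bit matrix (flag 0 = grayscale).
    cv::Mat gray = cv::imread("input.png", 0);
    if (gray.empty())
    {
        std::cout << "could not load image" << std::endl;
        return 1;
    }
    // Binarize: every element of the matrix becomes 0 or 255.
    cv::Mat binary;
    cv::threshold(gray, binary, 128, 255, cv::THRESH_BINARY);
    // Mat_ gives convenient element access; binary already is the matrix.
    cv::Mat_<uchar> m = binary;
    std::cout << "top-left element: " << (int)m(0, 0) << std::endl;
    return 0;
}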

A first glance at your question raises more questions... try to be a bit more specific (I don't seem to be able to see your code example; I'm new to Stack Overflow).
For example, what are your OpenCV version and IDE (like Code::Blocks or Microsoft Visual Studio)? Please include that in your question. What I would also like to know is what the purpose of this is: why do you need a matrix, and so forth :)
Attempted answer, from what I can gather from this comment:
"but I have installed OpenCV version 2.3.1 on Visual C++ 2010" – Ayesha Khan
OpenCV uses a class called Mat, which you should have encountered a lot. This class essentially is a matrix already. If I remember correctly it is very similar to vectors, which I won't cover here.
So if you need to access any pixel value in, let's say,
Mat Img;
you would use a member function of this class instance, like so:
cout << (int)Img.at<uchar>(x, y);
This will access the pixel at (x, y) and print its value to the console; note that at() takes the row index first, then the column, and the cast to int makes it print a number rather than a character. In this example I use uchar inside the angle brackets <>; uchar is the type for 8-bit pictures. You will have to change it if you work with images of more detail (more bits per channel).
When using a binary picture, OpenCV will most likely still allocate 8 bits per pixel, so the example above applies.
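To tie this back to the original question, here is a small sketch (the file name is invented, and I assume an 8-bit single-channel binary image) that walks every pixel of the matrix:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
int main()
{
    cv::Mat img = cv::imread("binary.bmp", 0); // 0 = load as 8-bit grayscale
    if (img.empty()) return 1;
    // img already is the matrix; at<uchar>(row, col) reads one element.
    for (int row = 0; row < img.rows; ++row)
    {
        for (int col = 0; col < img.cols; ++col)
        {
            int value = img.at<uchar>(row, col); // 0 or 255 in a binary image
            std::cout << (value ? 1 : 0) << ' ';
        }
        std::cout << '\n';
    }
    return 0;
}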
I'd like to give more details, but not before you've specified what exactly it is that you are attempting to do.
Regards, Scrub @ Stack Overflow

Your code uses OpenCV version 1. I'll let someone else answer, since it's not my forte. In my opinion, the 2.0 template-based interface is much more intuitive, and it's my recommendation to use it for all new endeavors.
Have a look at the way I use imread() in this program...
Please inspect the type of value returned from imread()...
Also, search in the code for originalColor = imageArg(/*row*/chosenX, /*column*/chosenY); it's a way to index into the matrix returned from imread().
// HW1 Intro to Digital Image Processing
// used OpenCV 2.3.1 and VS2010 SP1 to develop this solution
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <cassert>
using namespace cv;
Mat_<Vec3b> image;
int discreteAngles = 512;
void on_mouse(int eventCode, int centerX, int centerY, int flags, void* params);
int str2int(const std::string &str);
int main(int argc, char* argv[])
{
// command itself is one element of argument array...
if(argc != 1 && argc != 3)
{
std::cout << "Expecting two arguments to the application: angular granularity as a whole number and a file name." << std::endl;
exit(0);
}
std::string discreteAnglesStr, fileName;
if(argc == 3)
{
discreteAnglesStr = argv[1];
fileName = argv[2];
}
else
{
discreteAnglesStr = "64";
fileName = "boats.tif";
}
try
{
discreteAngles = str2int(discreteAnglesStr);
auto image_ = imread(fileName);
int channels = image_.channels();
assert(channels == 3);
image = image_;
if(image.rows == 0)
throw std::runtime_error("imread returned an empty image.");
auto originalImageStr = "Original Image";
namedWindow(originalImageStr);
setMouseCallback(originalImageStr, on_mouse);
imshow(originalImageStr, image);
}
catch(const std::exception&)
{
std::cout << "could not load image." << std::endl;
}
waitKey(0);
return -1;
}
// borrowed from http://stackoverflow.com/q/194465/90475, courtesy of Luka Marinko
int str2int(const std::string &str)
{
std::stringstream ss(str);
int num;
if((ss >> num).fail())
{
throw std::runtime_error("could not parse user input!");
}
return num;
}
double compute_max_radius(int imageRows, int imageCols, int centerX, int centerY)
{
auto otherX = imageCols - centerX;
auto otherY = imageRows - centerY;
auto a = sqrt((double)centerX * centerX + centerY * centerY);
auto b = sqrt((double)otherX * otherX + centerY * centerY);
auto c = sqrt((double)centerX * centerX + otherY * otherY);
auto d = sqrt((double)otherX * otherX + otherY * otherY);
return max(max(a,b), max(c,d));
}
Vec3b interpolate_with_nearest(const Mat_<Vec3b>& imageArg, double x, double y)
{
auto x0 = static_cast<int>(floor(x)); auto y0 = static_cast<int>(floor(y));
auto x1 = static_cast<int>(ceil(x)); auto y1 = static_cast<int>(ceil(y));
// Rolls over to the other side, esp. for angles
if(x0 < 0) x0 = imageArg.rows - 1;
if(y0 < 0) y0 = imageArg.cols - 1;
if (x1 == imageArg.rows) x1 = 0;
if (y1 == imageArg.cols) y1 = 0;
int chosenX, chosenY;
if (x - x0 < 0.5) chosenX = x0; else chosenX = x1;
if (y - y0 < 0.5) chosenY = y0; else chosenY = y1;
Vec3b originalColor = Vec3b(0, 0, 0);
if (chosenX >= 0 && chosenX < imageArg.rows &&
chosenY >= 0 && chosenY < imageArg.cols)
{
originalColor = imageArg(/*row*/chosenX, /*column*/chosenY);
}
return originalColor;
}
Vec3b interpolate_with_bilinear(const Mat_<Vec3b>& imageArg, double x, double y)
{
auto x0 = static_cast<int>(floor(x)); auto y0 = static_cast<int>(floor(y));
auto x1 = static_cast<int>(ceil(x)); auto y1 = static_cast<int>(ceil(y));
// Rolls over to the other side, esp. for angles
if(x0 < 0) x0 = imageArg.rows - 1;
if(y0 < 0) y0 = imageArg.cols - 1;
if (x1 == imageArg.rows) x1 = 0;
if (y1 == imageArg.cols) y1 = 0;
if (!(
x0 >= 0 && x0 < imageArg.rows &&
x1 >= 0 && x1 < imageArg.rows &&
y0 >= 0 && y0 < imageArg.cols &&
y1 >= 0 && y1 < imageArg.cols))
return Vec3b(0, 0, 0);
auto f00 = imageArg(x0, y0);
auto f01 = imageArg(x0, y1);
auto f10 = imageArg(x1, y0);
auto f11 = imageArg(x1, y1);
auto b1 = f00;
auto b2 = f10 - f00;
auto b3 = f01 - f00;
auto b4 = f00 + f11 - f01 - f10;
x = x - x0;
y = y - y0;
return b1 + b2 * x + b3 * y + b4 * x * y;
}
void on_mouse(int eventCode, int centerX, int centerY, int flags, void* params)
{
if(eventCode == 0)
return;
switch( eventCode )
{
case CV_EVENT_LBUTTONDOWN:
{
std::cout << "Center was (" << centerX << ", " << centerY << ")" << std::endl;
auto maxRadiusXY = compute_max_radius(image.rows, image.cols, centerX, centerY);
int discreteRadii = static_cast<int>(floor(maxRadiusXY));
Mat_<Vec3b> polarImg1;
polarImg1.create(/*rows*/discreteRadii, /*cols*/discreteAngles);
Mat_<Vec3b> polarImg2;
polarImg2.create(/*rows*/discreteRadii, /*cols*/discreteAngles);
for (int radius = 0; radius < discreteRadii; radius++) // radii
{
for (int discreteAngle = 0; discreteAngle < discreteAngles; discreteAngle++) // discreteAngles
{
// 3
auto angleRad = discreteAngle * 2.0 * CV_PI / discreteAngles;
// 2
auto xTranslated = cos(angleRad) * radius;
auto yTranslated = sin(angleRad) * radius;
// 1
auto x = centerX + xTranslated;
auto y = centerY - yTranslated;
polarImg1(/*row*/ radius, /*column*/ discreteAngle) = interpolate_with_nearest(image, /*row*/y, /*column*/x);
polarImg2(/*row*/ radius, /*column*/ discreteAngle) = interpolate_with_bilinear(image, /*row*/y, /*column*/x);
}
}
auto polarImage1Str = "Polar (nearest)";
namedWindow(polarImage1Str);
imshow(polarImage1Str, polarImg1);
auto polarImage2Str = "Polar (bilinear)";
namedWindow(polarImage2Str);
imshow(polarImage2Str, polarImg2);
Mat_<Vec3b> reprocessedImg1;
reprocessedImg1.create(/*rows*/image.rows, /*cols*/image.cols);
Mat_<Vec3b> reprocessedImg2;
reprocessedImg2.create(/*rows*/image.rows, /*cols*/image.cols);
for(int y = 0; y < image.rows; y++)
{
for(int x = 0; x < image.cols; x++)
{
// 1
auto xTranslated = x - centerX;
auto yTranslated = -(y - centerY);
// 2
auto radius = sqrt((double)xTranslated * xTranslated + yTranslated * yTranslated);
double angleRad;
if(xTranslated != 0)
{
angleRad = atan((double)abs(yTranslated) / abs(xTranslated));
// I Quadrant
if (xTranslated > 0 && yTranslated > 0)
angleRad = angleRad;
// II Quadrant
if (xTranslated < 0 && yTranslated > 0)
angleRad = CV_PI - angleRad;
// III Quadrant
if (xTranslated < 0 && yTranslated < 0)
angleRad = CV_PI + angleRad;
/// IV Quadrant
if (xTranslated > 0 && yTranslated < 0)
angleRad = 2 * CV_PI - angleRad;
if (yTranslated == 0)
if (xTranslated > 0) angleRad = 0;
else angleRad = CV_PI;
}
else
{
if (yTranslated > 0) angleRad = CV_PI / 2;
else angleRad = 3 * CV_PI / 2;
}
// 3
auto discreteAngle = angleRad * discreteAngles / (2.0 * CV_PI);
reprocessedImg1(/*row*/ y, /*column*/ x) = interpolate_with_nearest(polarImg1, /*row*/radius, /*column*/discreteAngle);
reprocessedImg2(/*row*/ y, /*column*/ x) = interpolate_with_bilinear(polarImg2, /*row*/radius, /*column*/discreteAngle);
}
}
auto reprocessedImg1Str = "Re-processed (nearest)";
namedWindow(reprocessedImg1Str);
imshow(reprocessedImg1Str, reprocessedImg1);
auto reprocessedImg2Str = "Re-processed (bilinear)";
namedWindow(reprocessedImg2Str);
imshow(reprocessedImg2Str, reprocessedImg2);
} break;
}
}

Related

Large height map interpolation

I have a vector<vector<double>> heightmap that is dynamically loaded from a CSV file of GPS data and ends up around 4000x4000. However, the file only provides 140,799 points.
It produces a greyscale map as shown below:
I wish to interpolate the heights between all the points to generate a height map of the area.
The code below takes each known point and looks in a 10 m radius around it for any other known points. If another point is found, it linearly interpolates between the two points. Interpolated points are marked by a negative height, and unset values are defined as -1337.
This approach is incredibly slow; I am sure there are better ways to achieve this.
bool run_interp = true;
bool interp_interp = false;
int counter = 0;
while (run_interp)
{
for (auto x = 0; x < map.size(); x++)
{
for (auto y = 0; y < map.at(x).size(); y++)
{
const auto height = map.at(x).at(y);
if (height == -1337) continue;
if (!interp_interp && height < 0) continue;
//Look in a 10m radius of a known value to see if there
//Is another known value to linearly interp between
//Set height to a negative if it has been interped
const int radius = (1 / resolution) * 10;
for (auto rxi = 0; rxi < radius * 2; rxi++)
{
//since we want to expand outwards
const int rx = x + ((rxi % 2 == 0) ? rxi / 2 : -(rxi - 1) / 2);
if (rx < 0 || rx >= map.size()) continue;
for (auto ryi = 0; ryi < radius * 2; ryi++)
{
const int ry = y + ((ryi % 2 == 0) ? ryi / 2 : -(ryi - 1) / 2);
if (ry < 0 || ry >= map.at(x).size()) continue;
const auto new_height = map.at(rx).at(ry);
if (new_height == -1337) continue;
//First go around we don't want to interp
//Interps
if (!interp_interp && new_height < 0) continue;
//We have found a known point within 10m
const auto delta = new_height - height;
const auto distance = sqrt((rx- x) * (rx - x)
+ (ry - y) * (ry - y));
const auto angle = atan2(ry - y, rx - x);
const auto ratio = delta / distance;
//Backtrack from found point until we get to know point
for (auto radi = 0; radi < distance; radi++)
{
const auto new_x = static_cast<int>(x + radi * cos(angle));
const auto new_y = static_cast<int>(y + radi * sin(angle));
if (new_x < 0 || new_x >= map.size()) continue;
if (new_y < 0 || new_y >= map.at(new_x).size()) continue;
const auto interp_height = map.at(new_x).at(new_y);
//If it is a known height don't interp it
if (interp_height > 0)
continue;
counter++;
set_height(new_x, new_y, -interp_height);
}
}
}
}
std::cout << x << " " << counter << std::endl;
}
if (interp_interp)
run_interp = false;
interp_interp = true;
}
void set_height(const int x, const int y, const double height)
{
//First time data being set
if (map.at(x).at(y) == -1337)
{
map.at(x).at(y) = height;
}
else // Data set already so average it
{
//While this isn't technically correct and weights
//Later data significantly more favourably
//It should be fine
//TODO: fix it.
map.at(x).at(y) += height;
map.at(x).at(y) /= 2;
}
}
If you put the points into a k-d tree, finding the closest point becomes much faster (building the tree is O(n log n), and each nearest-neighbour query is O(log n) on average).
I'm not sure that will solve all your issues, but it is a start.
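I'm not sure which library you prefer, but since the surrounding thread is OpenCV-based, here is a sketch using cv::flann::Index (available in reasonably recent OpenCV 2.x), which builds a k-d tree under the hood; the sample points and the query are invented for illustration:
#include <opencv2/opencv.hpp>
#include <iostream>
int main()
{
    // Hypothetical known points: (x, y) grid coordinates of cells that have real heights.
    cv::Mat features(3, 2, CV_32F);
    features.at<float>(0, 0) = 10.f;   features.at<float>(0, 1) = 12.f;
    features.at<float>(1, 0) = 400.f;  features.at<float>(1, 1) = 380.f;
    features.at<float>(2, 0) = 2000.f; features.at<float>(2, 1) = 1500.f;
    // Build a k-d tree over the known points.
    cv::flann::Index tree(features, cv::flann::KDTreeIndexParams(4));
    // Query: the nearest known point to an unset cell.
    cv::Mat query = (cv::Mat_<float>(1, 2) << 420.f, 395.f);
    cv::Mat indices, dists;
    tree.knnSearch(query, indices, dists, 1, cv::flann::SearchParams(32));
    std::cout << "nearest known point index: " << indices.at<int>(0, 0)
              << ", squared distance: " << dists.at<float>(0, 0) << std::endl;
    return 0;
}
You would then interpolate only between each unset cell and its few nearest known neighbours instead of scanning a whole radius.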

Segmentation fault caused by copying QList

Lastly, I ran into a very strange segfault. I haven't changed anything in my source code; the only thing I might have done is update my Qt Creator and MinGW. Now my program causes a segmentation fault, whereas before it worked perfectly.
void Parameter::calculateKeyframes() {
auto kfs = Bezier::calculateControlPoints(keyframes.values());
for (auto kf : kfs) {
setKeyframe(kf);
}
paramUpdate();
}
When it runs this function with a valid "keyframes" map (I know it is valid thanks to debugging), it crashes in the Bezier::calculateControlPoints(QList) function at the marked line below.
QList<Keyframe> calculateControlPoints(QList<Keyframe> keyframes) {
if (keyframes.size() < 2) {
return keyframes;
}
int n = keyframes.size();
for (int i = 0; i<n; i++) {
Keyframe last_kf(0, ValueDouble(0.0));
Keyframe kf;
kf = keyframes.at(i);
Keyframe next_kf(0, ValueDouble(0.0));
if (-1 < i-1) last_kf = keyframes[i-1];
else last_kf.frame = -1;
if (keyframes.size() > i+1) next_kf = keyframes[i+1];
else next_kf.frame = -1;
if (kf.mode == Keyframe::STEP || kf.mode == Keyframe::LINEAR) continue;
if (next_kf.frame > -1 && (kf.mode == Keyframe::EASEIN || (kf.mode == Keyframe::EASE && last_kf.frame < 0))) {
double vecx_TtN = (double)next_kf.frame - (double)kf.frame; // vx = nx - x
double vecy_TtN = next_kf.data.toDouble() - kf.data.toDouble(); // vy = ny - y
kf.control2x = (double)kf.frame + vecx_TtN / 4.5; // x = x + vx / 4.5
kf.control2y = (vecy_TtN / vecx_TtN) * (kf.control2x - kf.frame) + kf.data.toDouble(); // y = m * x + t
} else if (last_kf.frame > -1 && (kf.mode == Keyframe::EASEOUT || (kf.mode == Keyframe::EASE && next_kf.frame < 0))) {
double vecx_TtL = (double)last_kf.frame - (double)kf.frame; // vx = lx - x
double vecy_TtL = last_kf.data.toDouble() - kf.data.toDouble(); // vy = ly - y
kf.control1x = (double)kf.frame + vecx_TtL / 4.5; // x = x + vx / 4.5
kf.control1y = (vecy_TtL / vecx_TtL) * (kf.control1x - kf.frame) + kf.data.toDouble(); // y = m * x + t
} else if (kf.mode == Keyframe::EASE && last_kf.frame > -1 && next_kf.frame > -1) {
double vecx_TtL = (double)last_kf.frame - (double)kf.frame; // vx = lx - x
double vecx_TtN = (double)next_kf.frame - (double)kf.frame; // vx = nx - x
double vecx_LtN = (double)next_kf.frame - (double)last_kf.frame; // vx = nx - lx
/* ---> */ double vecy_LtN = next_kf.data.toDouble() - last_kf.data.toDouble(); // vy = ny - ly
kf.control1x = (double)kf.frame + vecx_TtL / 4.5; // x = x + vx / 4.5
kf.control2x = (double)kf.frame + vecx_TtN / 4.5; // x = x + vx / 4.5
kf.control1y = (vecy_LtN/vecx_LtN) * (kf.control1x - kf.frame) + kf.data.toDouble(); // y = m * x + t
kf.control2y = (vecy_LtN/vecx_LtN) * (kf.control2x - kf.frame) + kf.data.toDouble(); // y = m * x + t
}
keyframes[i] = kf;
}
return keyframes;
}
The crash happens in the second loop iteration because the element at index 0 of "QList keyframes" (which in the second iteration is also copied into "last_kf") holds an invalid pointer address in the keyframe's "data" member. Now my question is: why is data an invalid pointer here, when in Parameter::calculateKeyframes() it wasn't?
Here is my Keyframe.cpp (in case it is important):
#include "keyframe.h"
#include "value.h"
#include "valuedouble.h"
#include <iostream>
Keyframe::Keyframe(long frame, Value v) : frame(frame), control1x(frame), control2x(frame), data(v), control1y(v), control2y(v) {
}
Keyframe::Keyframe() : Keyframe(0.0, ValueDouble(0.0)) {}
void Keyframe::toPipeKF(tutorial::Keyframe* k) {
k->set_mode((tutorial::Keyframe_Mode)(int)mode);
k->set_frame(frame);
k->set_data((const char*)data.toByteArray());
k->set_control1x(control1x);
k->set_control1y(control1y.toByteArray());
k->set_control2x(control2x);
k->set_control2y(control2y.toByteArray());
}
Keyframe.h:
#ifndef KEYFRAME_H
#define KEYFRAME_H
#include "pipeendpoint.h"
#include "value.h"
class Keyframe {
public:
Keyframe(long frame, Value v);
Keyframe();
enum Mode {
STEP,
LINEAR,
EASEIN,
EASE,
EASEOUT,
EASEFIX,
EASECUSTOM
};
Mode mode = EASE;
Value data;
long frame;
double control1x = 0;
Value control1y;
double control2x = 0;
Value control2y;
void toPipeKF(tutorial::Keyframe* kf);
};
#endif // KEYFRAME_H

Implement RGBtoHSV C++ , wrong H output

I am trying to apply the Sobel operator in the HSV domain (I was told to do this in HSV by my guide, but I don't understand why it would work better on HSV than on RGB).
I have built a function that converts from RGB to HSV. While I have some mediocre knowledge of C++, I am getting confused by the image processing, so I tried to keep the code as simple as possible, meaning I don't care (at this stage) about time or space.
From looking at the results I got as gray-level BMP photos, my V and S seem to be fine, but my H looks like gibberish.
I have two questions here:
1. How should a normal H photo in gray level look compared to the source photo?
2. Where did I go wrong in the code?
void RGBtoHSV(unsigned char image[][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS],
float Him[][NUMBER_OF_COLUMNS],
float Vim[][NUMBER_OF_COLUMNS],
float Sim[][NUMBER_OF_COLUMNS])
{
double Rn, Gn, Bn;
double C;
double H, S, V;
for (int row = 0; row < NUMBER_OF_ROWS; row++)
{
for (int column = 0; column < NUMBER_OF_COLUMNS; column++)
{
Rn = (1.0*image[row][column][R]) / 255;
Gn = (1.0*image[row][column][G] )/ 255;
Bn = (1.0*image[row][column][B] )/ 255;
//double RGBn[3] = { Rn, Gn, Bn };
double max = Rn;
if (max < Gn) max = Gn;
if (max < Bn) max = Bn;
double min = Rn;
if (min > Gn) min = Gn;
if (min > Bn) min = Bn;
C = max - min;
H = 0;
if (max==0)
{
S = 0;
H = -1; //undefined
V = max;
}
else
{
/* if (max == Rn)
H = (60.0* ((int)((Gn - Bn) / C) % 6));
else if (max == Gn)
H = 60.0*( (Bn - Rn)/C + 2);
else
H = 60.0*( (Rn - Gn)/C + 4);
*/
if (max == Rn)
H = ( 60.0* ( (Gn - Bn) / C) ) ;
else if (max == Gn)
H = 60.0*((Bn - Rn) / C + 2);
else
H = 60.0*((Rn - Gn) / C + 4);
V = max; //AKA lightness
S = C / max; //saturation
}
while (H < 0)
H += 360;
while (H>360)
H -= 360;
Him[row][column] = (float)H;
Vim[row][column] = (float)V;
Sim[row][column] = (float)S;
}
}
}
Also my HSVtoRGB:
void HSVtoRGB(unsigned char image[][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS],
float Him[][NUMBER_OF_COLUMNS],
float Vim[][NUMBER_OF_COLUMNS],
float Sim[][NUMBER_OF_COLUMNS])
{
double R1, G1, B1;
double C;
double V;
double S;
double H;
int Htag;
double Htag2;
double x;
double m;
for (int row = 0; row < NUMBER_OF_ROWS; row++)
{
for (int column = 0; column < NUMBER_OF_COLUMNS; column++)
{
H = (double)Him[row][column];
S = (double)Sim[row][column];
V = (double)Vim[row][column];
C = V*S;
Htag = (int) (H / 60.0);
Htag2 = H/ 60.0;
//x = C*(1 - abs(Htag % 2 - 1));
double tmp1 = fmod(Htag2, 2);
double temp=(1 - abs(tmp1 - 1));
x = C*temp;
//switch (Htag)
switch (Htag)
{
case 0 :
R1 = C;
G1 = x;
B1 = 0;
break;
case 1:
R1 = x;
G1 = C;
B1 = 0;
break;
case 2:
R1 = 0;
G1 = C;
B1 = x;
break;
case 3:
R1 = 0;
G1 = x;
B1 = C;
break;
case 4:
R1 = x;
G1 = 0;
B1 = C;
break;
case 5:
R1 = C;
G1 = 0;
B1 = x;
break;
default:
R1 = 0;
G1 = 0;
B1 = 0;
break;
}
m = V - C;
//this is also good change I found
//image[row][column][R] = unsigned char( (R1 + m)*255);
//image[row][column][G] = unsigned char( (G1 + m)*255);
//image[row][column][B] = unsigned char( (B1 + m)*255);
image[row][column][R] = round((R1 + m) * 255);
image[row][column][G] = round((G1 + m) * 255);
image[row][column][B] = round((B1 + m) * 255);
}
}
}
void HSVfloattoGrayconvert(unsigned char grayimage[NUMBER_OF_ROWS] [NUMBER_OF_COLUMNS], float hsvimage[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS], char hsv)
{
//grayimage, floatimage, h/s/v
float factor;
if (hsv == 'h' || hsv == 'H') factor = (float) 1 / 360;
else factor = 1;
for (int row = 0; row < NUMBER_OF_ROWS; row++)
{
for (int column = 0; column < NUMBER_OF_COLUMNS; column++)
{
grayimage[row][column] = (unsigned char) (0.5f + 255.0f * (float)hsvimage[row][column] / factor);
}
}
}
and my main:
unsigned char ColorImage1[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS] [NUMBER_OF_COLORS];
float Himage[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
float Vimage[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
float Simage[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
unsigned char ColorImage2[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS] [NUMBER_OF_COLORS];
unsigned char HimageGray[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
unsigned char VimageGray[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
unsigned char SimageGray[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
unsigned char HAfterSobel[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
unsigned char VAfterSobel[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
unsigned char SAfterSobal[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
unsigned char HSVcolorAfterSobal[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS];
unsigned char RGBAfterSobal[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS];
int KernelX[3][3] = {
{-1,0,+1}, {-2,0,2}, {-1,0,1 }
};
int KernelY[3][3] = {
{-1,-2,-1}, {0,0,0}, {1,2,1}
};
void main()
{
//work
LoadBgrImageFromTrueColorBmpFile(ColorImage1, "P22A.bmp");
// add noise
AddSaltAndPepperNoiseRGB(ColorImage1, 350, 255);
StoreBgrImageAsTrueColorBmpFile(ColorImage1, "saltandpepper.bmp");
AddGaussNoiseCPPstileRGB(ColorImage1, 0.0, 1.0);
StoreBgrImageAsTrueColorBmpFile(ColorImage1, "Saltandgauss.bmp");
//saves hsv in float array
RGBtoHSV(ColorImage1, Himage, Vimage, Simage);
//saves hsv float arrays in unsigned char arrays
HSVfloattoGrayconvert(HimageGray, Himage, 'h');
HSVfloattoGrayconvert(VimageGray, Vimage, 'v');
HSVfloattoGrayconvert(SimageGray, Simage, 's');
StoreGrayImageAsGrayBmpFile(HimageGray, "P22H.bmp");
StoreGrayImageAsGrayBmpFile(VimageGray, "P22V.bmp");
StoreGrayImageAsGrayBmpFile(SimageGray, "P22S.bmp");
WaitForUserPressKey();
}
Edit: changed the code and added sources for the equations:
http://www.rapidtables.com/convert/color/hsv-to-rgb.htm
http://www.rapidtables.com/convert/color/rgb-to-hsv.htm
Edit 3:
Following @gpasch's advice, using a better reference and deleting the mod 6, I am now able to restore the original RGB photo! But unfortunately my H photo in grayscale is now even more chaotic than before.
I'll edit the code above so it has more info about how I am saving the H grayscale photo.
That is the peril of going through garbage web sites; I suggest the following:
https://www.cs.rit.edu/~ncs/color/t_convert.html
That mod 6 seems fishy there.
You also need to make sure you understand that H is in degrees from 0 to 360; if your filter expects 0..1 you have to change it.
I am trying to apply the Sobel operator in the HSV domain (I was told to do this in HSV by my guide, but I don't understand why it would work better on HSV than on RGB)
It depends on what you are trying to achieve. If you're trying to do edge detection based on brightness for example, then just working with say the V channel might be simpler than processing all three channels of RGB and combining them afterwards.
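The question's code works on raw arrays, but expressed in OpenCV terms the idea would look roughly like this (the file names are invented, and the simple convertTo scaling at the end is just for illustration):
#include <opencv2/opencv.hpp>
#include <vector>
int main()
{
    cv::Mat bgr = cv::imread("input.bmp");
    if (bgr.empty()) return 1;
    // Convert to HSV and keep only the V (brightness) channel.
    cv::Mat hsv;
    cv::cvtColor(bgr, hsv, CV_BGR2HSV);
    std::vector<cv::Mat> channels;
    cv::split(hsv, channels);
    cv::Mat v = channels[2];
    // Sobel gradients on the single V channel.
    cv::Mat gx, gy, magnitude;
    cv::Sobel(v, gx, CV_32F, 1, 0);
    cv::Sobel(v, gy, CV_32F, 0, 1);
    cv::magnitude(gx, gy, magnitude);
    // Crude conversion back to 8 bits so the result can be written out.
    cv::Mat edges;
    magnitude.convertTo(edges, CV_8U);
    cv::imwrite("edges.png", edges);
    return 0;
}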
How should a normal H photo in gray level look compared to the source photo?
You would see regions which are a similar colour appear as a similar shade of grey, and for a real-world scene you would still see gradients. But where there are spatially adjacent regions with colours far apart in hue, there would be a sharp jump. The shapes would generally be recognisable though.
Where did I go wrong in the code?
There are two main problems with your code. The first is that the hue scaling in HSVfloattoGrayconvert is wrong. Your code is setting factor=1.0/360.0f but then dividing by the factor, which means it's multiplying by 360. If you simply multiply by the factor, it produces the expected output. This is because the earlier calculation uses normalised values (0..1) for S and V but angle in degrees for H, so you need to divide by 360 to normalise H.
Second, the conversion back to RGB has a problem, mainly to do with calculating Htag: you want the original (non-truncated) value when calculating x, and the floor only when switching on the sector.
Note that despite what @gpasch suggested, the mod 6 operation is actually correct. This is because the conversion you are using is based on the hexagonal colour space model for HSV, and this is used to determine which sector your colour is in. For a continuous model, you could use a radial conversion instead, which is slightly different. Both are well explained on Wikipedia.
I took your code, added a few functions to generate input data and save output files so it is completely standalone, and fixed the bugs above while making minimal changes to the source.
Given the following generated input image:
the Hue channel extracted is:
The saturation channel is:
and finally value:
After fixing up the HSV to RGB conversion, I verified that the resulting output image matches the original.
The updated code is below (as mentioned above, changed minimally to make a standalone test):
#include <string>
#include <cmath>
#include <cstdio>
#include <cstdlib>
enum ColorIndex
{
R = 0,
G = 1,
B = 2,
};
namespace
{
const unsigned NUMBER_OF_COLUMNS = 256;
const unsigned NUMBER_OF_ROWS = 256;
const unsigned NUMBER_OF_COLORS = 3;
};
void RGBtoHSV(unsigned char image[][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS],
float Him[][NUMBER_OF_COLUMNS],
float Vim[][NUMBER_OF_COLUMNS],
float Sim[][NUMBER_OF_COLUMNS])
{
double Rn, Gn, Bn;
double C;
double H, S, V;
for (int row = 0; row < NUMBER_OF_ROWS; row++)
{
for (int column = 0; column < NUMBER_OF_COLUMNS; column++)
{
Rn = image[row][column][R] / 255.0;
Gn = image[row][column][G] / 255.0;
Bn = image[row][column][B] / 255.0;
double max = Rn;
if (max < Gn) max = Gn;
if (max < Bn) max = Bn;
double min = Rn;
if (min > Gn) min = Gn;
if (min > Bn) min = Bn;
C = max - min;
H = 0;
if (max==0)
{
S = 0;
H = 0; // Undefined
V = max;
}
else
{
if (max == Rn)
H = 60.0*fmod((Gn - Bn) / C, 6.0);
else if (max == Gn)
H = 60.0*((Bn - Rn) / C + 2);
else
H = 60.0*((Rn - Gn) / C + 4);
V = max; //AKA lightness
S = C / max; //saturation
}
while (H < 0)
H += 360.0;
while (H > 360)
H -= 360.0;
Him[row][column] = (float)H;
Vim[row][column] = (float)V;
Sim[row][column] = (float)S;
}
}
}
void HSVtoRGB(unsigned char image[][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS],
float Him[][NUMBER_OF_COLUMNS],
float Vim[][NUMBER_OF_COLUMNS],
float Sim[][NUMBER_OF_COLUMNS])
{
double R1, G1, B1;
double C;
double V;
double S;
double H;
double Htag;
double x;
double m;
for (int row = 0; row < NUMBER_OF_ROWS; row++)
{
for (int column = 0; column < NUMBER_OF_COLUMNS; column++)
{
H = (double)Him[row][column];
S = (double)Sim[row][column];
V = (double)Vim[row][column];
C = V*S;
Htag = H / 60.0;
x = C*(1.0 - fabs(fmod(Htag, 2.0) - 1.0));
int i = floor(Htag);
switch (i)
{
case 0 :
R1 = C;
G1 = x;
B1 = 0;
break;
case 1:
R1 = x;
G1 = C;
B1 = 0;
break;
case 2:
R1 = 0;
G1 = C;
B1 = x;
break;
case 3:
R1 = 0;
G1 = x;
B1 = C;
break;
case 4:
R1 = x;
G1 = 0;
B1 = C;
break;
case 5:
R1 = C;
G1 = 0;
B1 = x;
break;
default:
R1 = 0;
G1 = 0;
B1 = 0;
break;
}
m = V - C;
image[row][column][R] = round((R1 + m) * 255);
image[row][column][G] = round((G1 + m) * 255);
image[row][column][B] = round((B1 + m) * 255);
}
}
}
void HSVfloattoGrayconvert(unsigned char grayimage[][NUMBER_OF_COLUMNS], float hsvimage[][NUMBER_OF_COLUMNS], char hsv)
{
//grayimage, floatimage, h/s/v
float factor;
if (hsv == 'h' || hsv == 'H') factor = 1.0f/360.0f;
else factor = 1.0f;
for (int row = 0; row < NUMBER_OF_ROWS; row++)
{
for (int column = 0; column < NUMBER_OF_COLUMNS; column++)
{
grayimage[row][column] = (unsigned char) (0.5f + 255.0f * (float)hsvimage[row][column] * factor);
}
}
}
int KernelX[3][3] = {
{-1,0,+1}, {-2,0,2}, {-1,0,1 }
};
int KernelY[3][3] = {
{-1,-2,-1}, {0,0,0}, {1,2,1}
};
void GenerateTestImage(unsigned char image[][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS])
{
for (unsigned y = 0; y < NUMBER_OF_ROWS; y++)
{
for (unsigned x = 0; x < NUMBER_OF_COLUMNS; x++)
{
image[y][x][R] = x % 256;
image[y][x][G] = y % 256;
image[y][x][B] = (255-x) % 256;
}
}
}
void GenerateTestImage(unsigned char image[][NUMBER_OF_COLUMNS])
{
for (unsigned y = 0; y < NUMBER_OF_ROWS; y++)
{
for (unsigned x = 0; x < NUMBER_OF_COLUMNS; x++)
{
image[x][y] = x % 256;
}
}
}
// Color (three channel) images
void SaveImage(unsigned char image[][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS], const std::string& filename)
{
FILE* fp = fopen(filename.c_str(), "wb");
fprintf(fp, "P6\n%u %u\n255\n", NUMBER_OF_COLUMNS, NUMBER_OF_ROWS);
fwrite(image, NUMBER_OF_COLORS, NUMBER_OF_ROWS*NUMBER_OF_COLUMNS, fp);
fclose(fp);
}
// Grayscale (single channel) images
void SaveImage(unsigned char image[][NUMBER_OF_COLUMNS], const std::string& filename)
{
FILE* fp = fopen(filename.c_str(), "wb");
fprintf(fp, "P5\n%u %u\n255\n", NUMBER_OF_COLUMNS, NUMBER_OF_ROWS);
fwrite(image, 1, NUMBER_OF_ROWS*NUMBER_OF_COLUMNS, fp);
fclose(fp);
}
unsigned char ColorImage1[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS][NUMBER_OF_COLORS];
unsigned char Himage[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
unsigned char Simage[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
unsigned char Vimage[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
float HimageGray[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
float SimageGray[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
float VimageGray[NUMBER_OF_ROWS][NUMBER_OF_COLUMNS];
int main()
{
// Test input
GenerateTestImage(ColorImage1);
SaveImage(ColorImage1, "test_input.ppm");
//saves hsv in float array
RGBtoHSV(ColorImage1, HimageGray, VimageGray, SimageGray);
//saves hsv float arrays in unsigned char arrays
HSVfloattoGrayconvert(Himage, HimageGray, 'h');
HSVfloattoGrayconvert(Vimage, VimageGray, 'v');
HSVfloattoGrayconvert(Simage, SimageGray, 's');
SaveImage(Himage, "P22H.pgm");
SaveImage(Vimage, "P22V.pgm");
SaveImage(Simage, "P22S.pgm");
// Convert back to get the original test image
HSVtoRGB(ColorImage1, HimageGray, VimageGray, SimageGray);
SaveImage(ColorImage1, "test_output.ppm");
return 0;
}
The input image was generated by a very simple algorithm which gives us gradients in each dimension, so we can easily inspect and verify the expected output. I used ppm/pgm files as they are simpler to write and more portable than BMP.
Hope this helps - let me know if you have any questions.

How to count pixels in color segment in OpenCV

I have an OpenCV C++ application.
I have segmented an image with the pyrMeanShiftFiltering function.
Now I need to count the pixels in a segment, and the number of pixels having the most frequent value in the same segment, in order to compute a ratio between them. How can I do that?
I am using the Tsukuba image, and the code is:
Mat image, segmented;
image = imread("TsukubaL.jpg", 1 );
pyrMeanShiftFiltering(image, segmented, 16, 32);
The segmented image is:
If I consider a pixel in a single segment, the part where I count the pixels in that segment is:
int cont=0;
Vec3b x = segmented.at<Vec3b>(160, 136);
for(int i = 160; i < segmented.rows; ++i) { //check right-down
for(int j = 136; j < segmented.cols; ++j) {
if(segmented.at<Vec3b>(i, j) == x)
cont++;
else
continue;
}
}
for(int i = 160; i > 0; --i) { //check right-up
for(int j = 136; j < segmented.cols; ++j) {
if(segmented.at<Vec3b>(i, j) == x)
cont++;
else
continue;
}
}
for(int i = 160; i < segmented.rows; ++i) { //check down-left
for(int j = 136; j > 0; --j) {
if(segmented.at<Vec3b>(i, j) == x)
cont++;
else
continue;
}
}
for(int i = 160; i > 0; --i) { //check up-left
for(int j = 136; j > 0; --j) {
if(segmented.at<Vec3b>(i, j) == x)
cont++;
else
continue;
}
}
cout<<"Pixel "<<x<<"cont = "<<cont<<endl;
In this example, I consider a white pixel at position (160, 136) and count the pixels equal to it in the four directions starting from it, and the output is:
Pixel [206, 222, 240]cont = 127
Could this be a good way to do it?
First you need to define a mask with the pixels having the same color as your initial point (called the seed here). You can use inRange with a given tolerance. Assuming a seed on the head, you'll get something like:
Now you need to find the connected component that contains your seed. You can do this in many ways. Here I modified a generative labeling algorithm (the original can be found here). You get the list of points of the blob that contains the seed. You can then make a mask with these points:
Now that you have all the points, it's trivial to find the number of points in the segment. To find the most frequent color you can make a histogram of the BGR values contained in the segment. Since a histogram over all RGB values would have 256*256*256 bins, it's more practical to use a map. I modified the code found here to make a histogram with a given mask.
Now you just need to find the color value with the highest frequency.
For this example, I got:
# points in segment: 2860
Most frequent color: [209, 226, 244] #: 168
Take a look at the code:
#include <opencv2/opencv.hpp>
#include <vector>
#include <stack>
#include <map>
using namespace cv;
using namespace std;
vector<Point> connected_components(const Mat1b& img, Point seed)
{
Mat1b src = img > 0;
int label = 0;
int w = src.cols;
int h = src.rows;
int i;
cv::Point point;
// Start from seed
std::stack<int, std::vector<int>> stack2;
i = seed.x + seed.y*w;
stack2.push(i);
// Current component
std::vector<cv::Point> comp;
while (!stack2.empty())
{
i = stack2.top();
stack2.pop();
int x2 = i%w;
int y2 = i / w;
src(y2, x2) = 0;
point.x = x2;
point.y = y2;
comp.push_back(point);
// 4 connected
if (x2 > 0 && (src(y2, x2 - 1) != 0))
{
stack2.push(i - 1);
src(y2, x2 - 1) = 0;
}
if (y2 > 0 && (src(y2 - 1, x2) != 0))
{
stack2.push(i - w);
src(y2 - 1, x2) = 0;
}
if (y2 < h - 1 && (src(y2 + 1, x2) != 0))
{
stack2.push(i + w);
src(y2 + 1, x2) = 0;
}
if (x2 < w - 1 && (src(y2, x2 + 1) != 0))
{
stack2.push(i + 1);
src(y2, x2 + 1) = 0;
}
// 8 connected
if (x2 > 0 && y2 > 0 && (src(y2 - 1, x2 - 1) != 0))
{
stack2.push(i - w - 1);
src(y2 - 1, x2 - 1) = 0;
}
if (x2 > 0 && y2 < h - 1 && (src(y2 + 1, x2 - 1) != 0))
{
stack2.push(i + w - 1);
src(y2 + 1, x2 - 1) = 0;
}
if (x2 < w - 1 && y2>0 && (src(y2 - 1, x2 + 1) != 0))
{
stack2.push(i - w + 1);
src(y2 - 1, x2 + 1) = 0;
}
if (x2 < w - 1 && y2 < h - 1 && (src(y2 + 1, x2 + 1) != 0))
{
stack2.push(i + w + 1);
src(y2 + 1, x2 + 1) = 0;
}
}
return comp;
}
struct lessVec3b
{
bool operator()(const Vec3b& lhs, const Vec3b& rhs) {
return (lhs[0] != rhs[0]) ? (lhs[0] < rhs[0]) : ((lhs[1] != rhs[1]) ? (lhs[1] < rhs[1]) : (lhs[2] < rhs[2]));
}
};
map<Vec3b, int, lessVec3b> getPalette(const Mat3b& src, const Mat1b& mask)
{
map<Vec3b, int, lessVec3b> palette;
for (int r = 0; r < src.rows; ++r)
{
for (int c = 0; c < src.cols; ++c)
{
if (mask(r, c))
{
Vec3b color = src(r, c);
if (palette.count(color) == 0)
{
palette[color] = 1;
}
else
{
palette[color] = palette[color] + 1;
}
}
}
}
return palette;
}
int main()
{
// Read the image
Mat3b image = imread("tsukuba.jpg");
// Segment
Mat3b segmented;
pyrMeanShiftFiltering(image, segmented, 16, 32);
// Seed
Point seed(140, 160);
// Define a tolerance
Vec3b tol(10,10,10);
// Extract mask of pixels with same value as seed
Mat1b mask;
inRange(segmented, segmented(seed) - tol, segmented(seed) + tol, mask);
// Find the connected component containing the seed
vector<Point> pts = connected_components(mask, seed);
// Number of pixels in the segment
int n_of_pixels_in_segment = pts.size();
Mat1b mask_segment(image.rows, image.cols, uchar(0));
for (const auto& pt : pts)
{
mask_segment(pt) = uchar(255);
}
// Get palette
map<Vec3b, int, lessVec3b> palette = getPalette(segmented, mask_segment);
// Get most frequent color
Vec3b most_frequent_color;
int freq = 0;
for (const auto& pal : palette)
{
if (pal.second > freq)
{
most_frequent_color = pal.first;
freq = pal.second;
}
}
cout << "# points in segment: " << n_of_pixels_in_segment << endl;
cout << "Most frequent color: " << most_frequent_color << " \t#: " << freq << endl;
return 0;
}
After creating the required mask as shown in the previous answer, or by any other means, you can find the contour around the mask. This allows you to directly count the number of pixels within the segment using the contourArea function.
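A minimal sketch of the contour idea (the mask file name is invented; it stands in for the binary mask built in the previous answer):
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>
int main()
{
    cv::Mat mask = cv::imread("mask.png", 0); // single-channel binary mask
    if (mask.empty()) return 1;
    // findContours modifies its input, so work on a copy.
    cv::Mat work = mask.clone();
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(work, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    for (size_t i = 0; i < contours.size(); ++i)
    {
        // contourArea approximates the number of pixels enclosed by the contour.
        std::cout << "segment " << i << ": ~" << cv::contourArea(contours[i]) << " pixels" << std::endl;
    }
    // For an exact count of non-zero mask pixels, countNonZero also works.
    std::cout << "exact mask pixel count: " << cv::countNonZero(mask) << std::endl;
    return 0;
}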
You can segment out the selected area into a new submat and calculate a histogram on it to get the most frequent values. If you are concerned with color values only and not intensity, you should also convert your image into the HSV, Lab, or YCbCr color space, as required.
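As a rough sketch of that suggestion, assuming the segmented image and the segment mask from the earlier answer (the hue-only histogram is just one possible choice):
#include <opencv2/opencv.hpp>
#include <iostream>
// Most frequent hue inside a masked region; 'segmented' is CV_8UC3, 'mask' is CV_8UC1.
void printDominantHue(const cv::Mat& segmented, const cv::Mat& mask)
{
    cv::Mat hsv;
    cv::cvtColor(segmented, hsv, CV_BGR2HSV);
    // Hue histogram restricted to the segment via the mask.
    int histSize = 180; // 8-bit hue in OpenCV runs from 0 to 179
    float hueRange[] = { 0, 180 };
    const float* ranges[] = { hueRange };
    int channels[] = { 0 };
    cv::Mat hist;
    cv::calcHist(&hsv, 1, channels, mask, hist, 1, &histSize, ranges);
    // The bin with the maximum count is the most frequent hue in the segment.
    double maxVal = 0;
    cv::Point maxLoc;
    cv::minMaxLoc(hist, 0, &maxVal, 0, &maxLoc);
    std::cout << "most frequent hue bin: " << maxLoc.y
              << " (count " << maxVal << ")" << std::endl;
}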

SDL2.0 screen nullptr on render of Window

Hey so I'm relatively new to the SDL library and just trying to get to grips with it.
I found a C++ conversion of Minecraft4k, but it was based on SDL 1.x, so I'm trying to convert it to SDL 2.0.
At present the build is successful, but when it gets to:
plot(x, y, rgbmul(col, fxmul(br, ddist)));
It throws a read access violation exception:
screen was nullptr
This is my code:
// C++ port of Minecraft 4k JS (http://jsdo.it/notch/dB1E)
// By The8BitPimp
// See: the8bitpimp.wordpress.com
#include <SDL.h>
#include <math.h>
#include <windows.h>
#include <tchar.h>
#include "plot.h"
#include "llist.h"
const int w = 320;
const int h = 240;
SDL_Surface *screen = nullptr;
const float math_pi = 3.14159265359f;
static inline float math_sin(float x) {
return sinf(x);
}
static inline float math_cos(float x) {
return cosf(x);
}
// the texture map
int texmap[16 * 16 * 16 * 3];
// the voxel map
char map[64 * 64 * 64];
static inline int random(int max) {
return (rand() ^ (rand() << 16)) % max;
}
static inline void plot(int x, int y, int c) {
int *p = (int*)screen->pixels;
p[y * w + x] = c;
}
static void makeTextures(void) {
// each texture
for (int j = 0; j<16; j++) {
int k = 255 - random(96);
// each pixel in the texture
for (int m = 0; m<16 * 3; m++)
for (int n = 0; n<16; n++) {
int i1 = 0x966C4A;
int i2 = 0;
int i3 = 0;
if (j == 4)
i1 = 0x7F7F7F;
if ((j != 4) || (random(3) == 0))
k = 255 - random(96);
if (j == 1)
{
if (m < (((n * n * 3 + n * 81) >> 2) & 0x3) + 18)
i1 = 0x6AAA40;
else if (m < (((n * n * 3 + n * 81) >> 2) & 0x3) + 19)
k = k * 2 / 3;
}
if (j == 7)
{
i1 = 0x675231;
if ((n > 0) && (n < 15) && (((m > 0) && (m < 15)) || ((m > 32) && (m < 47))))
{
i1 = 0xBC9862;
i2 = n - 7;
i3 = (m & 0xF) - 7;
if (i2 < 0)
i2 = 1 - i2;
if (i3 < 0)
i3 = 1 - i3;
if (i3 > i2)
i2 = i3;
k = 196 - random(32) + i2 % 3 * 32;
}
else if (random(2) == 0)
k = k * (150 - (n & 0x1) * 100) / 100;
}
if (j == 5)
{
i1 = 0xB53A15;
if (((n + m / 4 * 4) % 8 == 0) || (m % 4 == 0))
i1 = 0xBCAFA5;
}
i2 = k;
if (m >= 32)
i2 /= 2;
if (j == 8)
{
i1 = 5298487;
if (random(2) == 0)
{
i1 = 0;
i2 = 255;
}
}
// fixed point colour multiply between i1 and i2
i3 =
((((i1 >> 16) & 0xFF) * i2 / 255) << 16) |
((((i1 >> 8) & 0xFF) * i2 / 255) << 8) |
((i1 & 0xFF) * i2 / 255);
// pack the colour away
texmap[n + m * 16 + j * 256 * 3] = i3;
}
}
}
static void makeMap(void) {
// add random blocks to the map
for (int x = 0; x < 64; x++) {
for (int y = 0; y < 64; y++) {
for (int z = 0; z < 64; z++) {
int i = (z << 12) | (y << 6) | x;
float yd = (y - 32.5) * 0.4;
float zd = (z - 32.5) * 0.4;
map[i] = random(16);
float th = random(256) / 256.0f;
if (th > sqrtf(sqrtf(yd * yd + zd * zd)) - 0.8f)
map[i] = 0;
}
}
}
}
static void init(void) {
makeTextures();
makeMap();
}
// fixed point byte byte multiply
static inline int fxmul(int a, int b) {
return (a*b) >> 8;
}
// fixed point 8bit packed colour multiply
static inline int rgbmul(int a, int b) {
int _r = (((a >> 16) & 0xff) * b) >> 8;
int _g = (((a >> 8) & 0xff) * b) >> 8;
int _b = (((a)& 0xff) * b) >> 8;
return (_r << 16) | (_g << 8) | _b;
}
static void render(void) {
float now = (float)(SDL_GetTicks() % 10000) / 10000.f;
float xRot = math_sin(now * math_pi * 2) * 0.4 + math_pi / 2;
float yRot = math_cos(now * math_pi * 2) * 0.4;
float yCos = math_cos(yRot);
float ySin = math_sin(yRot);
float xCos = math_cos(xRot);
float xSin = math_sin(xRot);
float ox = 32.5 + now * 64.0;
float oy = 32.5;
float oz = 32.5;
// for each column
for (int x = 0; x < w; x++) {
// get the x axis delta
float ___xd = ((float)x - (float)w / 2.f) / (float)h;
// for each row
for (int y = 0; y < h; y++) {
// get the y axis delta
float __yd = ((float)y - (float)h / 2.f) / (float)h;
float __zd = 1;
float ___zd = __zd * yCos + __yd * ySin;
float _yd = __yd * yCos - __zd * ySin;
float _xd = ___xd * xCos + ___zd * xSin;
float _zd = ___zd * xCos - ___xd * xSin;
int col = 0;
int br = 255;
float ddist = 0;
float closest = 32.f;
// for each principle axis x,y,z
for (int d = 0; d < 3; d++) {
float dimLength = _xd;
if (d == 1)
dimLength = _yd;
if (d == 2)
dimLength = _zd;
float ll = 1.0f / (dimLength < 0.f ? -dimLength : dimLength);
float xd = (_xd)* ll;
float yd = (_yd)* ll;
float zd = (_zd)* ll;
float initial = ox - floor(ox);
if (d == 1) initial = oy - floor(oy);
if (d == 2) initial = oz - floor(oz);
if (dimLength > 0) initial = 1 - initial;
float dist = ll * initial;
float xp = ox + xd * initial;
float yp = oy + yd * initial;
float zp = oz + zd * initial;
if (dimLength < 0) {
if (d == 0) xp--;
if (d == 1) yp--;
if (d == 2) zp--;
}
// while we are considering a ray that is still closer than the best so far
while (dist < closest) {
// quantize to the map grid
int tex = map[(((int)zp & 63) << 12) | (((int)yp & 63) << 6) | ((int)xp & 63)];
// if this voxel has a texture applied
if (tex > 0) {
// find the uv coordinates of the intersection point
int u = ((int)((xp + zp) * 16.f)) & 15;
int v = ((int)(yp * 16.f) & 15) + 16;
// fix uvs for alternate directions?
if (d == 1) {
u = ((int)(xp * 16.f)) & 15;
v = (((int)(zp * 16.f)) & 15);
if (yd < 0)
v += 32;
}
// find the colour at the intersection point
int cc = texmap[u + v * 16 + tex * 256 * 3];
// if the colour is not transparent
if (cc > 0) {
col = cc;
ddist = 255 - ((dist / 32 * 255));
br = 255 * (255 - ((d + 2) % 3) * 50) / 255;
// we now have the closest hit point (also terminates this ray)
closest = dist;
}
}
// advance the ray
xp += xd;
yp += yd;
zp += zd;
dist += ll;
}
}
plot(x, y, rgbmul(col, fxmul(br, ddist)));
}
}
}
int main(int argc, char *argv[]) {
SDL_Init(SDL_INIT_EVERYTHING);
SDL_Window *screen;
screen = SDL_CreateWindow(
"Minecraft4k", // window title
SDL_WINDOWPOS_CENTERED, // initial x position
SDL_WINDOWPOS_CENTERED, // initial y position
320, // width, in pixels
240, // height, in pixels
SDL_WINDOW_OPENGL // flags - see below
);
SDL_Renderer* renderer;
renderer = SDL_CreateRenderer(screen, -1, SDL_RENDERER_ACCELERATED);
if (screen == nullptr) {
return 1;
}
init();
bool running = true;
while (running) {
SDL_Event event;
while (SDL_PollEvent(&event)) {
running &= (event.type != SDL_QUIT);
}
SDL_RenderPresent(renderer);
render();
}
SDL_DestroyWindow(screen);
SDL_Quit();
return 0;
}
When I actually run the code I do get a black screen, but the debugger lands on the line
plot(x, y, rgbmul(col, fxmul(br, ddist)));
in:
static void render(void)
This is all just "for fun" so any information or guidance is appreciated.
You define screen twice (the first time as a global variable, the second time within your main), but you initialize it only once (within your main).
Because of that, the global variable screen stays nullptr, and plot fails when it tries to use it, as the error message states.
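One possible fix (a sketch, not the only way) is to rename the local variable to window, drop the renderer, and point the global screen at the window's surface with SDL_GetWindowSurface; note that plot() assumes a 32-bit surface whose pitch matches the 320-pixel width, which usually holds for a plain window surface but is worth checking:
int main(int argc, char *argv[]) {
    SDL_Init(SDL_INIT_EVERYTHING);
    // No SDL_WINDOW_OPENGL here: the window surface API must not be mixed with OpenGL or the rendering API.
    SDL_Window *window = SDL_CreateWindow(
        "Minecraft4k",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        320, 240, 0);
    if (window == nullptr) {
        return 1;
    }
    // Point the global 'screen' (used by plot) at the window's surface.
    screen = SDL_GetWindowSurface(window);
    if (screen == nullptr) {
        return 1;
    }
    init();
    bool running = true;
    while (running) {
        SDL_Event event;
        while (SDL_PollEvent(&event)) {
            running &= (event.type != SDL_QUIT);
        }
        render();                        // draws into screen->pixels via plot()
        SDL_UpdateWindowSurface(window); // copy the surface to the window
    }
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}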